Secure Engineering Policy

This policy provides guidelines for engineers building systems that store our data, and for engineers using external systems that process our data (for example, any Altis systems, or other “intranet” systems like Google Drive).

While this policy is mainly about how we build things for ourselves, agency projects may wish to follow it as well.

No such thing as internal traffic

Human Made doesn’t have any offices, so we don’t have any secure physical facilities. By default, all devices should be treated as unknown, and privileged functionality must be tied to user authentication and authorisation.

We implement the concept of zero trust, which means there is no “inside” or “outside” our corporate network (in fact, there isn’t really a corporate network at all).

This applies even to the Human Made Perimeter 81 VPN. The VPN is made available to add an additional layer of security, and to help us work with external teams who require it. For extra security, you can require the use of both the VPN and user authentication, but at no point should access restrictions be solely reliant on IP addresses.
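As a sketch of this principle, the check below treats the VPN as an optional extra layer on top of authentication, never a substitute for it. The `User` class, `can_access` helper, and VPN address range are hypothetical illustrations, not a real Human Made API:

```python
# Sketch: access control that never relies on IP address alone.
# The User class, can_access helper, and VPN range are illustrative only.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

VPN_RANGE = ip_network("10.8.0.0/16")  # assumed VPN egress range, for illustration


@dataclass
class User:
    username: str
    authenticated: bool


def can_access(user: User, client_ip: str, require_vpn: bool = False) -> bool:
    """Authentication is always mandatory; the VPN check is an extra layer."""
    if not user.authenticated:
        return False  # an allowlisted IP is never sufficient on its own
    if require_vpn and ip_address(client_ip) not in VPN_RANGE:
        return False
    return True
```

Note the ordering: the IP check can only further restrict an already-authenticated user, so removing the VPN never silently opens access.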

Cryptography is hard

Whenever you’re creating anything which uses cryptography, use the most-secure existing tools available to you, and do not create your own. For connections over the web (including those between servers), use a standard like TLS for encryption (plus authentication at the application layer).
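For example, in Python the standard library already ships vetted TLS defaults; using them is the whole job, and weakening them to "fix" a connection error is the classic mistake:

```python
# Use the standard library's vetted TLS defaults rather than hand-rolled crypto.
import ssl

context = ssl.create_default_context()

# The defaults already require certificate verification and hostname checking;
# never set verify_mode to CERT_NONE to work around a certificate error.
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True
```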

If you need advice on cryptography, contact the security team at security@humanmade.com.

Changes should be controlled, and reviewed

Any change we make needs to go through peer review before it rolls out to an operational system. Where possible, route these changes through GitHub pull requests. Consider implementing a change management process if the systems are processing higher classifications of data.

Any repository we deploy from automatically must have its production branch (usually main) protected, so that unreviewed pull requests cannot be merged.
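Branch protection can be applied via GitHub's REST API (`PUT /repos/{owner}/{repo}/branches/{branch}/protection`). The sketch below builds the request payload only; field names follow the public API, but the specific values are illustrative assumptions, not mandated settings:

```python
# Sketch of a payload for GitHub's branch-protection REST endpoint.
# Field names match the public API; the values here are illustrative.


def branch_protection_payload(required_approvals: int = 1) -> dict:
    return {
        "required_pull_request_reviews": {
            "required_approving_review_count": required_approvals,
            "dismiss_stale_reviews": True,  # re-review after new pushes
        },
        "enforce_admins": True,          # no bypassing review, even for admins
        "required_status_checks": None,  # set per-repo CI checks as needed
        "restrictions": None,            # no push restrictions beyond review
    }
```

Sending this payload (with an authenticated HTTP client) to the endpoint above for the main branch enforces the review requirement at the platform level rather than by convention.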

Changes shouldn’t be made to operational systems directly (i.e. hacking files on a server, or changing things in a console) unless absolutely necessary to resolve an incident; in this case, work with a colleague who can approve those changes, and write it down (such as in an incident Slack channel).

Don’t use customer data for non-customer things

Like it says. Don’t use customer data for testing unless they’ve approved it, and treat it as confidential data with limited access.

If you’re developing changes, consider carefully whether you need customer data at all, or whether generated/fake data would do instead. Any live customer data you do use must be treated as confidential. Where possible, use anonymisation techniques (such as with hm-anonymizer) to sanitise the data first.
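One common anonymisation technique is keyed pseudonymisation: replace a direct identifier with a deterministic token so records stay linkable without exposing the original value. The sketch below is a generic illustration, not hm-anonymizer's actual API:

```python
# Generic pseudonymisation sketch (not hm-anonymizer's actual API): replace
# direct identifiers with a keyed, deterministic pseudonym so related records
# remain linkable without exposing the original value.
import hashlib
import hmac


def pseudonymise(value: str, secret_key: bytes) -> str:
    """Same input + same key -> same token; without the key, irreversible."""
    digest = hmac.new(secret_key, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability
```

Because the function is keyed (HMAC rather than a bare hash), an attacker can't confirm a guessed email by hashing it themselves; keep the key out of the sanitised dataset.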

Follow industry standards

We’ve got a lot of best practices in our engineering handbook which cover things like the OWASP Top Ten and other standard procedures. Unless you’ve got a really good reason not to, you should follow those best practices whenever you can.

If you do run into one of those reasons, follow the security standards deviation process to make sure the deviation is documented and approved.

Specific configuration for systems

If you’re working on configuring systems themselves, follow best practices for system setup. In particular:

  • Ensure systems have synchronised clocks, to avoid confusion or potential exploits.
  • Implement separate development and production systems. Changes should be tested ahead of time wherever technically possible.
  • Where possible, allow for N systems instead of specifically one or two. This makes it easier to implement testing environments where needed.
  • For servers which contain data, implement backups with encryption at rest.
  • Log data, but only for as long as is necessary. Logs are super helpful for debugging or analysis, but may contain personal data, and carry storage costs.
  • If the system contains sensitive data, build the capabilities for audit logging, and ensure the logs become part of our regular reviews.
  • Consider redundancy of systems and data, and whether redundant systems may be required.
  • Consider building immutable systems. Systems which are read-only and which are replaced rather than updated can be much more secure, and much easier to reason about.
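On log retention specifically, Python's standard library can enforce a bounded retention window directly in the handler configuration, so old logs are deleted automatically rather than accumulating forever (the filename and retention period below are illustrative):

```python
# Sketch: time-based log rotation with a bounded retention window, using only
# the standard library. backupCount caps how many rotated files are kept.
import logging
from logging.handlers import TimedRotatingFileHandler

handler = TimedRotatingFileHandler(
    "app.log",
    when="midnight",  # rotate daily
    backupCount=14,   # keep two weeks of rotated logs, then delete
)
logger = logging.getLogger("audit")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# Audit-style entries: who did what, to which resource.
logger.info("user %s exported report %s", "alice", "monthly-sales")
```

The same handler works for audit logs; the point is that retention is a deliberate setting, not whatever the disk happens to hold.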

Vulnerability management

Code we write, dependencies we use, and other software we rely on are never 100% secure. You should assume and plan for vulnerabilities being discovered in every component we build and use. This should include:

  • Reviewing and being intentional about which dependencies we use.
  • Using scanning tools on an ongoing basis where possible.
  • Responding to monitoring and alerting of vulnerabilities, to patch emergent issues.
  • Penetration testing systems, either internally or using external testers.
  • Creating processes to handle security incident response in a methodical and organised way.
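The core of responding to an advisory is comparing what we run against the minimum patched version. Real scanning should use a dedicated tool (such as GitHub's Dependabot alerts); the sketch below, with hypothetical package names and versions, just shows the underlying check:

```python
# Illustrative sketch: flagging pinned dependencies that sit below the
# minimum patched version from a security advisory. Use a dedicated
# scanning tool in practice; this only demonstrates the comparison.


def parse_version(version: str) -> tuple:
    """Turn "1.4.2" into (1, 4, 2) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))


def vulnerable(installed: dict, advisories: dict) -> list:
    """Return package names installed below the minimum patched version."""
    return [
        name
        for name, fixed_in in advisories.items()
        if name in installed
        and parse_version(installed[name]) < parse_version(fixed_in)
    ]


installed = {"examplelib": "1.4.0", "otherlib": "2.0.1"}  # hypothetical pins
advisories = {"examplelib": "1.4.2"}                      # hypothetical advisory
print(vulnerable(installed, advisories))  # -> ['examplelib']
```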