Human Made uses AI tools and technologies in our day-to-day operations, communications, and engineering practices. The security and privacy of any data processed by third-party AI tooling fall under our existing Tools policy.
Generative AI Outputs
Always fact-check AI-generated outputs before using or sharing them. If AI provides citations, confirm they are real and relevant. Do not rely on AI alone as a substitute for experts, legal counsel, or other trusted sources. Before sharing AI-generated outputs, edit them to reflect your own judgment or clearly label them as AI-generated.
Third Party Model Training
Do not use AI services that train on the data you provide to them at inference time. Doing so could inadvertently leak private or sensitive information into a third party’s model. Many AI services offer business/enterprise plans that guarantee submitted data is not used for training.
Services you can use without additional investigation or review:
- GitHub Copilot
- OpenAI via Human Made’s API keys (see the sketch after this list)
- ChatGPT Plus / Pro / Enterprise
- Claude Pro / Team / Enterprise
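As an illustration of the approved API route above, the sketch below shows one way a script might call the OpenAI API with a Human Made key supplied via the environment rather than hard-coded. It assumes the official `openai` Node SDK; the model name, prompt, and `summarise` helper are placeholders, not recommendations.

```typescript
// Minimal sketch: calling OpenAI with a company API key read from the
// environment (OPENAI_API_KEY), never committed to the repository.
import OpenAI from "openai";

async function summarise(text: string): Promise<string> {
  // The SDK can also pick up OPENAI_API_KEY from the environment by default.
  const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model name
    messages: [
      { role: "system", content: "Summarise the following text in one paragraph." },
      { role: "user", content: text },
    ],
  });

  return completion.choices[0].message.content ?? "";
}

summarise("Example input to summarise.").then(console.log);
```

Keeping the key in the environment keeps it out of version control and within the controls of the existing Tools policy.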
AI Programming Assistants
We make use of generative AI capabilities when writing code, documentation, testing and more as part of our software development lifecycle. This includes features such as AI autocomplete, prompt-based generation, and agentic development workflows.
All code produced by AI assistant tooling must be thoroughly reviewed, tested and understood.
Use of AI Agents and AI tool-calling
When giving AI services / tools access to third-party tools, apply the principle of least privilege: for example, use read-only API keys for MCP servers that only need to read data. Prompt injection is a major risk when connecting AI to existing data and tools, so take extra caution with any MCP / AI integration that can write or modify data or perform other forms of external communication.
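As a rough illustration of least privilege, the sketch below builds a GitHub client with a fine-grained personal access token that carries read-only permissions; this is the kind of credential you would hand to an MCP server that only needs to read data. The `READ_ONLY_GITHUB_TOKEN` variable, organisation, and repository names are hypothetical, and the example assumes the `@octokit/rest` package.

```typescript
// Sketch only: a read-only credential for a tool an AI agent can call.
// READ_ONLY_GITHUB_TOKEN is a hypothetical env var holding a fine-grained
// PAT with read-only repository permissions and no write scopes.
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.READ_ONLY_GITHUB_TOKEN });

async function listOpenIssues(owner: string, repo: string) {
  // Read operations succeed with the read-only token...
  const { data } = await octokit.rest.issues.listForRepo({
    owner,
    repo,
    state: "open",
  });
  return data.map((issue) => issue.title);
}

// ...while any write call made with the same token (for example
// octokit.rest.issues.createComment) is rejected by GitHub, limiting the
// damage a prompt-injected instruction could do through this tool.
listOpenIssues("example-org", "example-repo").then(console.log);
```

Scoping the credential itself, rather than trusting the model to behave, means a successful prompt injection can at worst read data the agent could already see.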