The best results in human history came from diverse teams. People with different backgrounds, different thinking styles, and clashing perspectives, all forced to solve the same problem. Cognitive diversity isn't a nice-to-have. It's the mechanism behind better decisions.
I wanted to know: Does the same hold true for AI? So I designed an experiment and pushed it as far as I could. The outcome was more interesting than I expected.
The Experiment
I spawned 100 AI agents, split into 10 groups of 10, and gave them all the same question: "Design a protocol for an AI agent to become maximally useful to one human over time."
The twist: each group was locked into a different cognitive strategy.
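The setup above can be sketched as a small orchestration loop. This is a minimal illustration, not the actual harness: the strategy names (the tenth in particular) and the `run_agent` stub are assumptions standing in for real model API calls.

```python
# Sketch of the experiment: 10 groups x 10 agents, each group locked
# to one cognitive strategy. Strategy names are illustrative; the
# article names nine of them, so the tenth here is an assumption.
STRATEGIES = [
    "first_principles", "inversion", "analogical", "adversarial",
    "quantitative", "constraint", "narrative", "systems_dynamics",
    "random_mutation", "historical",
]

QUESTION = ("Design a protocol for an AI agent to become "
            "maximally useful to one human over time.")

def run_agent(strategy: str, question: str) -> str:
    # Placeholder for a real model call; echoes the prompt so the
    # orchestration logic itself is testable.
    return f"[{strategy}] answer to: {question}"

def run_experiment(agents_per_group: int = 10) -> dict[str, list[str]]:
    # One list of answers per strategy group.
    return {
        s: [run_agent(s, QUESTION) for _ in range(agents_per_group)]
        for s in STRATEGIES
    }

results = run_experiment()
print(len(results), "groups,", sum(len(v) for v in results.values()), "agents")
```

Swapping `run_agent` for a real API call is the only change needed to run this for real; everything else is bookkeeping.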
When I laid the ten groups' reports side by side, six ideas had emerged independently across nearly all of them. Not because they were obvious, but because they were true.
Law 1: Files = Intelligence
Every single group concluded the same thing: an AI agent doesn't improve by getting "smarter." It improves by getting better-informed.
The agent wakes up fresh every session. The only thing that persists is what's written in files. Memory notes. Preference records. Task logs. Failure documentation. Improvement means better files.
Your AI's intelligence lives in a folder on your hard drive. Not in a data center. Not in the model weights. In markdown files you can read, edit, and take with you.
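"Intelligence in a folder" can be made concrete in a few lines. The sketch below reloads a memory directory at session start; the file names are assumptions, not a prescribed layout.

```python
# Minimal sketch of file-based memory: the agent's only persistent
# state is markdown files it reloads each session. File names are
# illustrative assumptions.
import tempfile
from pathlib import Path

MEMORY_FILES = ["preferences.md", "task-log.md", "failures.md", "notes.md"]

def load_memory(folder: Path) -> str:
    """Concatenate whatever memory files exist into one context string."""
    parts = []
    for name in MEMORY_FILES:
        path = folder / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(parts)

with tempfile.TemporaryDirectory() as d:
    folder = Path(d)
    (folder / "preferences.md").write_text("Prefers short answers.")
    context = load_memory(folder)
    print("preferences.md" in context)
```

Because the state is plain markdown, "editing your AI" is literally editing a file.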
Law 2: The Pair Is the Unit
You can't optimize the AI in isolation. The human changes in response to the agent: delegates more, communicates differently, develops new expectations. The agent changes in response to the human: learns preferences, builds context, adjusts tone. They co-evolve.
One group called this "dyadic intelligence." Another compared it to mycorrhizal networks. Neither group saw the other's work. Both arrived at the same structure.
AI alignment isn't just a safety problem. It's a relationship problem.
Law 3: Multi-Timescale Feedback
One feedback loop isn't enough. You need feedback at every timescale:
- Per-interaction (seconds): Did the user correct me?
- Per-session (hours): What went well? What failed?
- Weekly: Are corrections decreasing?
- Monthly: Has the user changed? Are my assumptions still valid?
- Quarterly: Is the relationship deepening or plateauing?
Most AI setups have exactly one feedback loop: the conversation itself. The agents that compound are the ones with structured review at every level.
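The loops above amount to a scheduler: each review fires when its interval has elapsed. A minimal sketch, with intervals and review prompts taken from the list above (the exact cadences are assumptions):

```python
# Sketch of multi-timescale review: a loop fires only when its
# interval has elapsed since its last run. Intervals are assumptions
# matching the timescales listed above.
from datetime import datetime, timedelta

LOOPS = {
    "session":   (timedelta(hours=1),  "What went well? What failed?"),
    "weekly":    (timedelta(weeks=1),  "Are corrections decreasing?"),
    "monthly":   (timedelta(days=30),  "Are my assumptions still valid?"),
    "quarterly": (timedelta(days=90),  "Is the relationship deepening?"),
}

def due_reviews(last_run: dict[str, datetime], now: datetime) -> list[str]:
    """Return the prompts for every loop whose interval has elapsed."""
    return [
        prompt
        for name, (interval, prompt) in LOOPS.items()
        if now - last_run.get(name, datetime.min) >= interval
    ]

now = datetime(2025, 1, 1)
last = {"session": now - timedelta(hours=2),
        "weekly": now - timedelta(days=2)}
print(due_reviews(last, now))
```

Loops never run before (like `monthly` here) default to due, which is the behavior you want on a fresh install.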
Law 4: Legibility > Optimization
This one surprised me. Eight groups independently argued that transparency beats performance.
"A perfectly optimized agent the user doesn't understand is worse than a mediocre agent the user can see through completely."
The most important feature isn't accuracy. It's showing your work. Trust enables delegation. Delegation creates compound value.
Law 5: Failures = Signal (Kintsugi)
"That's perfect" tells you almost nothing. "No, I meant X" tells you exactly where the gap is.
One group took this furthest with a concept from Japanese art: Kintsugi, repairing broken pottery with gold. Instead of hiding errors, make them visible. Document what went wrong, why, and what changed.
Stop minimizing errors. Start maximizing learning from them. A well-maintained error log is worth more than a thousand successful interactions.
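A Kintsugi-style error log needs only three fields: what broke, why, and what changed. A sketch under that assumption (the field names are mine, not from the experiment):

```python
# Sketch of a visible, structured failure log: every error is appended
# as an entry rather than discarded. Fields are illustrative.
from dataclasses import dataclass, asdict

@dataclass
class Failure:
    what: str    # what went wrong
    why: str     # diagnosed cause
    change: str  # what was changed to prevent a repeat

def append_failure(log: list, failure: Failure) -> None:
    log.append(asdict(failure))

error_log: list = []
append_failure(error_log, Failure(
    what="Summarized the wrong document",
    why="Ambiguous filename; did not ask for confirmation",
    change="Confirm target file when more than one candidate matches",
))
print(len(error_log), error_log[0]["why"])
```

The `change` field is the gold seam: it turns the log from a record of shame into a record of repairs.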
Law 6: The Specificity Engine
"The self-improvement protocol is ultimately a specificity engine. Every loop, every metric, every review exists to make the agent less generic and more this-user-shaped."
The agent improves by getting more specific to THIS human, not more generally capable. This is the personal AI moat. And it compounds daily.
The Divergent Ideas
The 6 Laws came from convergence. But the most transformative ideas came from divergence: concepts that appeared in only one group:
The Belief Graveyard. Log every killed assumption with the reason it died.
Stochastic Resonance. From physics: adding the right amount of noise to a weak signal makes it detectable.
Red Team / Blue Team. Before any behavioral change, an internal adversary attacks the proposal.
The Complementary Voice. The agent's thinking style should stay different from yours.
Improvement at the Speed of Trust. The agent should improve at the rate the human can absorb.
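The Belief Graveyard is the most mechanical of these, so here is a sketch: every killed assumption is logged with its cause of death. The fields and the example entry are illustrative assumptions.

```python
# Sketch of the "Belief Graveyard": retired assumptions are recorded
# with the reason they died instead of being silently dropped.
from datetime import date

graveyard: list = []

def bury_belief(belief: str, cause_of_death: str, died: date) -> None:
    """Log a killed assumption so it can't quietly resurrect."""
    graveyard.append({"belief": belief, "cause": cause_of_death, "died": died})

bury_belief(
    "User always wants bullet-point summaries",
    "User asked for prose three sessions in a row",
    date(2025, 1, 15),
)
print(len(graveyard))
```

The graveyard doubles as a regression test for the agent's worldview: before re-adopting a belief, check whether it's already buried.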
The Toolkit
The 10 thinking strategies aren't competing protocols. They're a toolkit:
- Entering a new domain → First Principles
- Something feels wrong → Inversion
- Stuck → Analogical thinking
- Beliefs accumulating → Adversarial testing
- "It's working" → Quantitative proof
- Complexity growing → Constraint thinking
- Data losing meaning → Narrative
- Interventions failing → Systems dynamics
- Improvement plateauing → Random Mutation
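The table above is a dispatch rule: symptom in, strategy out. A sketch of it as a lookup, where the symptom keys are my paraphrases and the fallback choice is an assumption:

```python
# The symptom -> strategy table as a lookup, so an agent could pick a
# thinking mode mechanically. Keys paraphrase the list above.
TOOLKIT = {
    "new_domain":            "first_principles",
    "something_feels_off":   "inversion",
    "stuck":                 "analogical",
    "beliefs_accumulating":  "adversarial",
    "claims_of_success":     "quantitative",
    "complexity_growing":    "constraint",
    "data_losing_meaning":   "narrative",
    "interventions_failing": "systems_dynamics",
    "plateau":               "random_mutation",
}

def pick_strategy(symptom: str) -> str:
    # Assumed default: fall back to first principles when the symptom
    # is unrecognized.
    return TOOLKIT.get(symptom, "first_principles")

print(pick_strategy("stuck"))
print(pick_strategy("unknown_symptom"))
```

The point of the table form is that strategy selection becomes inspectable, which is Law 4 applied to the toolkit itself.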
What This Means
I ran 100 agents in parallel, each constrained to a different cognitive strategy. Cost: a few dollars in API calls. Time: one afternoon. Output: 33,000 words of analysis.
No single expert, human or AI, would have surfaced all 6 laws alone. It took 10 different kinds of thinking, running simultaneously, to find what none could find alone.
For builders: These 6 laws are an architecture checklist.
For companies: The agents that win won't be the smartest. They'll be the most specific.
For everyone else: Your files are your leverage.
See how we applied these laws.
The system that produced our AR-001 research report uses all six.