Cybernetic Organization
Our newest coworker isn’t human. Its name is Claude.
When Claude gets stuck, our advisor, GPT-5, helps us explore new angles. When we need to scan the web, we tap DeepSeek. Together, this extended team adds a collection of new minds to our organization.
Organizations have always been adaptive networks, with people as nodes and communication as edges. But they’ve never quite adapted like this. For the first time, the nodes aren’t all human. And the machines are talking back. The modern cybernetic system now includes minds that don’t sleep, don’t forget, and don’t think like us.¹
The reflex problem
A cybernetic system is defined by its feedback loops.
Organizations are cybernetic systems that self-regulate through their members who observe and respond to stimuli in the environment. From initial onboarding, individuals in the organization engage in cycles of observation, analysis, and reaction. Through this process, members learn and share knowledge, creating internal feedback loops that reflect their unique culture.
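To make the loop concrete, here is a minimal Python sketch of the observe-analyze-react cycle, reduced to a single proportional controller. This is an illustration only, not a model of any real organization:

```python
# A single cybernetic feedback loop: observe the environment,
# compare the observation to a goal, act, and let the action
# feed back into the next observation.

def feedback_loop(state: float, goal: float, steps: int = 5) -> float:
    for _ in range(steps):
        observation = state            # observe
        error = goal - observation     # analyze: how far from the goal?
        action = 0.5 * error           # react: a proportional correction
        state = state + action         # the environment absorbs the action
        print(f"observed {observation:.2f}, acted {action:+.2f}, now {state:.2f}")
    return state

feedback_loop(state=0.0, goal=1.0)  # the state converges toward the goal
```

An organization runs thousands of these loops at once, with people rather than arithmetic in the middle; the point is only that regulation depends on the quality of the feedback.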
In today’s environment, AIs are increasingly becoming part of this feedback loop. While using AIs as tools can amplify our capabilities, they can also amplify our biases.
When ChatGPT tells you your strategy is brilliant, that’s not a signal; it’s a noisy channel.² You get feedback, but you can’t tell whether it reflects real quality or default flattery. If we’re not aware of it, this distortion can pass for validation.
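Shannon’s framing makes the problem measurable. The information carried by feedback is the mutual information between the true quality of your work and what the model says about it; a back-of-the-envelope sketch (our gloss, not Shannon’s example):

```latex
% X: the true quality of the strategy. Y: the model's feedback.
% Mutual information measures how much Y actually tells you about X.
I(X;Y) = H(X) - H(X \mid Y)

% A sycophantic channel answers ``brilliant'' regardless of X, so
% knowing Y removes no uncertainty about X:
H(X \mid Y) = H(X) \quad\Rightarrow\quad I(X;Y) = 0
% The praise arrives, but no information does.
```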
No one wants to import poor reflexes into their organization. When we hire, we look for the right skills and culture fit. We seek to integrate people who can grow alongside us. Why should we not hold AIs to the same standard?
For the first time, we’re thinking about how we can align digital twins to our organizational reflexes while avoiding the negative influences that create cultural drift.
How organizations learn
Organizational learning is the by-product of information flowing efficiently through feedback networks.
There are two useful types of learning to consider. Level I is stimulus-response: you touch something hot, you learn not to touch it again. Level II is learning to learn: you figure out what patterns predict heat.³
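A toy Python sketch of the difference (our illustration of Bateson’s distinction, with made-up objects and thresholds):

```python
# Level I: learn a fixed response to a specific stimulus.
# Level II: learn which features predict the outcome, so the
# lesson transfers to stimuli never seen before.

class LevelOneLearner:
    """Stimulus-response: remembers the exact objects that burned it."""
    def __init__(self):
        self.avoid = set()

    def touch(self, obj):
        if obj["name"] in self.avoid:
            return "refuse"
        if obj["temp"] > 60:                 # got burned
            self.avoid.add(obj["name"])
        return "touched"

class LevelTwoLearner:
    """Learning to learn: infers which features predict heat."""
    def __init__(self):
        self.hot_features = set()

    def touch(self, obj):
        if obj["features"] & self.hot_features:
            return "refuse"                  # generalizes to novel objects
        if obj["temp"] > 60:
            self.hot_features |= obj["features"]
        return "touched"

stove  = {"name": "stove",  "temp": 200, "features": {"glowing", "metal"}}
kettle = {"name": "kettle", "temp": 95,  "features": {"glowing"}}

l1, l2 = LevelOneLearner(), LevelTwoLearner()
l1.touch(stove); l2.touch(stove)   # both get burned once
print(l1.touch(kettle))            # "touched" -- new name, lesson didn't transfer
print(l2.touch(kettle))            # "refuse"  -- the pattern transferred
```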
Organizations that get stuck optimizing existing processes while never questioning their underlying assumptions are failing to reach Level II.
Level II is best understood as a collective process in which information networks persist knowledge. It rarely happens without intentional strategic deliberation, but when organizations do invest in that work, it is typically what enables the continuous innovation needed to adapt to a changing market over time.
When we interact with AI systems that don’t share our organizational contexts and cultural artifacts, we lose important elements of what makes organizational learning possible.
It is now important to consider how to make the AI systems we interface with part of the organizational learning process.
Variety and survival
Ashby’s Law of Requisite Variety says that a system doesn’t need to be complicated, but it does need to have enough internal range to match the complexity of its environment if it’s going to respond effectively.⁵
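In entropy terms, the law is often summarized as follows (a standard simplification of Ashby’s argument, not his original notation):

```latex
% D: disturbances from the environment, R: the regulator's responses,
% O: the outcomes the organization actually experiences.
H(O) \;\geq\; H(D) - H(R)

% Outcomes can only be held steady (low H(O)) if the regulator's
% variety H(R) is large enough to absorb the disturbance variety H(D):
% only variety can absorb variety.
```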
Traditional organizations handle this through specialization: hiring experts who compress complexity within their domains. But specialization creates coordination overhead, and information degrades at each handoff.
Deployed as variety amplifiers rather than narrow specialists, Claude, GPT-5, and DeepSeek deliver high adaptability across domains without the burden of coordination overhead.
Most organizations get this backwards. They use AI to write slightly better emails instead of deploying it as organizational intelligence.
The coordination bottleneck
The real bottleneck in every organization, and one that AI output only exacerbates, is deliberation bandwidth. A decision maker can only be in one room at a time. An engineer can only review so much code. Strategic thinking doesn’t parallelize well because it requires coherent context.
Traditional solutions add layers. Managers, directors, VPs. Each layer supposedly multiplies leadership capacity. In practice, each layer degrades the signal. By the time information flows up and decisions flow down, the context has shifted.
What if AI could offer a different path?
What if you could construct a digital mind that could engage in multiple deliberations simultaneously, with each instance maintaining organizational context?
Working with digital minds
At Agency/42, we’ve been exploring new ways of working with AIs.
We started with a question: What happens when you take AI systems and shape them through persistent memory and organizational context?
The difference between using generic AI and developing digital minds is the difference between an outside consultant influencing your organizational network and duplicating the most impactful people on your team, allowing them to be in multiple places at once.
Our work revealed that persistent memory is fundamental to digital mind design.
When AI remembers your past decisions, it can help make future ones without importing someone else’s reflexes. When it understands your constraints, it maintains your variety instead of collapsing toward generic solutions. When it learns your patterns, it can extend them across multiple deliberations simultaneously. This contextual awareness helps solve the coordination bottleneck without sacrificing what makes your organization unique.
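A minimal Python sketch of that pattern. All names here (MemoryStore, Deliberation) are hypothetical illustrations, not Daybloom’s actual interfaces:

```python
# One persistent memory, fanned out to many simultaneous deliberations.
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Persistent organizational memory shared by every instance."""
    decisions: list = field(default_factory=list)
    constraints: list = field(default_factory=list)

    def recall(self) -> str:
        return "\n".join(self.decisions + self.constraints)

@dataclass
class Deliberation:
    """One concurrent conversation; each instance reads the same memory."""
    topic: str
    memory: MemoryStore

    def context(self) -> str:
        # Every instance is grounded in the same past decisions and
        # constraints, so parallel deliberations stay consistent.
        return f"Topic: {self.topic}\nOrganizational memory:\n{self.memory.recall()}"

shared = MemoryStore(
    decisions=["2024-03: chose usage-based pricing"],
    constraints=["no customer data leaves our VPC"],
)

# The same memory can be in many rooms at once.
rooms = [Deliberation(t, shared) for t in ("pricing review", "vendor eval")]
for room in rooms:
    print(room.context(), "\n---")
```

The design choice worth noticing is that the memory is shared and persistent while the deliberations are many and ephemeral; that inversion is what lets context scale past one room at a time.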
This insight became integrated into our design approach in Daybloom.
Daybloom handles memory, governance, data ingestion, and integration for AI characters and digital twins that can interact online.
Seeing how organizations were designing brand mascots and digital twins to help them sell products, we came to a realization: we are participating in the emergence of a genuinely new kind of cybernetic organization.
Systems that sense, process, and respond through hybrid human-AI networks. Systems that can leverage technology to enhance learning at Level II, not just Level I.
The organizations that thrive in the future will be thinking deeply about how AIs interface with their organizational learning process. They’ll become architects who build feedback loops, rather than new processes. They won’t just treat AI as modern automatons; they’ll work to integrate them into their organizational nervous system. And nervous systems, as any cyberneticist will tell you, determine not just what an organism can do, but what it can become.
Notes
1. Wiener, N. (1948). Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press.
2. Shannon, C. E. (1948). “A Mathematical Theory of Communication.” Bell System Technical Journal, 27(3), 379–423. Fittingly, Anthropic’s Claude is also named after Shannon.
3. Bateson, G. (1972). Steps to an Ecology of Mind. University of Chicago Press.
4. Sutton, R. (2019). “The Bitter Lesson.” Available at: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
5. Ashby, W. R. (1956). An Introduction to Cybernetics. Chapman & Hall.
6. Beer, S. (1972). Brain of the Firm. Allen Lane.
7. Eames, C. & R. “Design Q&A.” Herman Miller Stories. Available at: https://www.hermanmiller.com/stories/why-magazine/design-q-and-a-charles-and-ray-eames/
Co-written with Rob Renn. Originally published on agency42.co
