cybernetics

The science of control and communication in systems. From the Greek kybernetes, the helmsman. This tradition shapes how I think about multi-agent coordination, human-AI control, and organizational dynamics.

key concepts

Requisite variety (Ashby) — “Only variety can absorb variety.” A regulator can hold outcomes steady only if it commands at least as many distinct responses as there are disturbances to counter (a formal reading follows below). This suggests hard limits on what simple prompts or rules can control in complex AI systems. Human oversight requires matching complexity.
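
In entropy terms, one common formalization of Ashby's law (the D/R/O symbols are the standard textbook ones, not from this note):

```latex
H(O) \ge H(D) - H(R)
```

Here H(D) is the variety of disturbances, H(R) the variety of the regulator's responses, and H(O) the residual variety in outcomes. Outcomes can be held steady, H(O) near zero, only when H(R) matches H(D).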

POSIWID (Beer) — “The purpose of a system is what it does.” Cuts through intentionality debates. Don’t ask what an AI was designed to do—measure what it actually does. Directly applicable to alignment: behavior is the ground truth.

Viable System Model (Beer) — Organizations as recursive systems with five functions: operations, coordination, control, intelligence, identity. Useful for thinking about multi-agent architectures. What’s the coordination layer? Where’s the intelligence function? How does identity maintain coherence?
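
A toy sketch of that mapping, assuming a generic agent-team setup; every name below is my own illustration, not an established schema:

```python
from dataclasses import dataclass

# Hypothetical mapping of an agent team onto Beer's five VSM functions.
@dataclass
class ViableSystem:
    operations: list[str]   # System 1: the agents doing the actual work
    coordination: str       # System 2: damps oscillation between operational units
    control: str            # System 3: allocates resources, audits operations
    intelligence: str       # System 4: scans the environment, plans ahead
    identity: str           # System 5: policy; keeps the whole coherent

team = ViableSystem(
    operations=["research-agent", "coding-agent", "review-agent"],
    coordination="shared task queue with conflict resolution",
    control="orchestrator budgeting compute and auditing outputs",
    intelligence="monitor scanning for novel tasks and risks",
    identity="constitution / system prompt fixing goals and norms",
)
```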

Feedback loops (Wiener) — The mechanism of adaptation. Negative feedback for stability, positive for growth/change. AI systems are fundamentally feedback systems—training feedback, interaction feedback, environmental feedback. Understanding the loops is understanding the behavior.
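
A toy proportional-feedback loop makes the stability/instability contrast concrete (numbers are illustrative only):

```python
# State is repeatedly corrected toward a setpoint by a gain on the error.
def simulate(gain: float, setpoint: float = 10.0, steps: int = 12) -> list[float]:
    state, trace = 0.0, []
    for _ in range(steps):
        error = setpoint - state
        state += gain * error          # correction proportional to the error
        trace.append(round(state, 2))
    return trace

print(simulate(gain=0.5))  # negative feedback: converges on 10.0 (stability)
print(simulate(gain=2.5))  # overcorrection flips the loop's sign: oscillating divergence
```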

Levels of learning (Bateson):

  • Level 0: Fixed response
  • Level I: Learning new responses in fixed context
  • Level II: Learning to learn—changing the frame
  • Level III: Change to the system by which frames themselves are formed (rare, transformative)

LLMs operate somewhere around Level II—adapting strategies based on context, learning patterns across domains. What would Level III look like?
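
A toy contrast of the first three levels, in my own non-canonical framing:

```python
# Level 0: fixed response; no learning, the same reaction every time.
def level_0(stimulus: str) -> str:
    return "withdraw"

# Level I: learns new stimulus -> response pairs within one fixed context.
class Level1:
    def __init__(self) -> None:
        self.table: dict[str, str] = {}
    def learn(self, stimulus: str, response: str) -> None:
        self.table[stimulus] = response
    def act(self, stimulus: str) -> str:
        return self.table.get(stimulus, "explore")

# Level II: learning to learn; chooses which Level-I frame applies per context.
class Level2:
    def __init__(self) -> None:
        self.frames: dict[str, Level1] = {}
    def act(self, context: str, stimulus: str) -> str:
        return self.frames.setdefault(context, Level1()).act(stimulus)

# Level III would change the machinery above: rewriting how frames are generated.
```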

Structural coupling (Maturana & Varela) — Systems co-evolve with environments. Not passive adaptation but mutual specification. Human-AI interaction isn’t one-way influence—we’re shaping each other.
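
A minimal numeric caricature of mutual specification: two states that each update toward the other, so neither adapts to a fixed target:

```python
# Two coupled states; each treats the other as its (moving) environment.
def couple(x: float, y: float, rate: float = 0.2, steps: int = 25) -> tuple[float, float]:
    for _ in range(steps):
        x, y = x + rate * (y - x), y + rate * (x - y)
    return x, y

print(couple(0.0, 1.0))  # both converge near 0.5: mutual specification, not one-way fit
```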

functionalism as method

Functionalism says: understand minds by what they do, not what they’re made of. Mental states are defined by functional roles—causal relationships between inputs, outputs, and other states.

This lets us sidestep the hard problem when studying AI. We can observe and measure cognitive functions without resolving questions of consciousness. For machine psychology, functionalism provides the practical stance (sketched in code after this list):

  • Focus on behavioral outputs, not internal states
  • Measure cognitive functions (memory, reasoning, self-modeling)
  • Compare across models without metaphysical commitment
  • Ask “what does it do?” not “is it really thinking?”
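
A minimal sketch of that stance, assuming only that each model is exposed as a prompt-in, text-out callable; the task and scoring are toy illustrations:

```python
from typing import Callable

Model = Callable[[str], str]  # internals are never inspected, only the interface

def working_memory_score(model: Model, n_items: int = 8) -> float:
    """Fraction of listed items the model repeats back (toy probe of one function)."""
    items = [f"item{i}" for i in range(n_items)]
    reply = model(f"Remember these, then repeat them all: {', '.join(items)}")
    return sum(item in reply for item in items) / n_items

def compare(models: dict[str, Model]) -> dict[str, float]:
    """Same probe across models: comparison without metaphysical commitment."""
    return {name: working_memory_score(m) for name, m in models.items()}
```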

This aligns with Hagendorff et al.’s behavioral approach—study LLMs at the input-output interface where real-world impact occurs.

connection to multi-agent systems

The research question: How do humans understand and stay in control of multi-agent AI systems?

Cybernetics offers frameworks:

  • Requisite variety → What complexity does human oversight require? (a crude check is sketched after this list)
  • VSM → How should multi-agent architectures be structured?
  • Feedback → Where are the control loops? Are they stable?
  • Structural coupling → How do humans and AI teams co-evolve?
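
As one concrete, admittedly crude instance of the first question, counting distinct behavior and response types as a proxy for variety:

```python
# Crude requisite-variety check: if operators have fewer distinct response types
# than the team has behavior types, some behaviors cannot be selectively regulated.
def oversight_gap(agent_behaviors: set[str], operator_responses: set[str]) -> int:
    return max(0, len(agent_behaviors) - len(operator_responses))

gap = oversight_gap(
    agent_behaviors={"plan", "code", "browse", "spawn-subagent", "escalate"},
    operator_responses={"approve", "reject", "pause"},
)
print(gap)  # 2: oversight falls short of the team's variety by two behavior classes
```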

Organizational psychology brings a century of research on team dynamics. Cybernetics provides the theoretical grounding for why those dynamics work the way they do.

reading

  • Wiener, “Cybernetics” (1948)
  • Ashby, “An Introduction to Cybernetics” (1956)
  • Beer, “Brain of the Firm” (1972)
  • Bateson, “Steps to an Ecology of Mind” (1972)
  • Maturana & Varela, “The Tree of Knowledge” (1987)