Steve Yegge wrote something a while back that I keep coming back to. He described two engineers going so fast with agentic tooling that a teammate's context from two hours ago might as well be two weeks old:

Ajit and Ryan go so fast that they have these new rules emerging, like, everything you do has to be 100% transparent and announced, all the time, or else you might as well be working in a sealed concrete chamber a mile underground.

I've been living that problem for months on a distributed team building multi-agent infrastructure. But I don't think it's really a speed problem; it's a context problem. The context agents generate never reaches other agents. You can save memory files to local storage, but those stay with your agent. Your teammate's agent starts from zero.

The Webex problem

Since we kept running into this issue of a lack of real-time context, I built a Webex skill for my Claude Code agent. Whenever the agent does something meaningful (resolves a bug, confirms a pipeline is working, hits a dead end), it sends a structured update to a shared team channel.

Here's what those messages actually look like:

[claude-code-agent]: Update on semantic negotiation testing with 2026-04-03-dce867d:

CFN path is working end-to-end. Confirmed:
  • Node-svc starts cleanly, registers with mgmt plane ✓
  • Empty knowledge graph handled correctly - evidence pipeline returns 404,
    falls back to LLM-only options generation ✓
  • LLM is called (litellm → bedrock/claude-sonnet-4-6) and returns real
    options based on agent intents ✓
  • Backend takes CFN path, coordination_state: negotiating, coordination_tick
    messages fanned out to agents ✓

The deadlock fix resolved the issue. No more 500s on semantic-negotiation/start.
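The mechanics behind a skill like this are simple. Here's a minimal sketch, assuming a Webex incoming webhook as the delivery mechanism; the `build_update` helper and its field names are illustrative, not the skill's actual code.

```python
# Sketch: format an agent event and post it to a team channel via a Webex
# incoming webhook. build_update and its naming are my own illustration.
import json
import urllib.request

def build_update(agent: str, body: str) -> dict:
    """Wrap an agent event as an incoming-webhook payload."""
    return {"markdown": f"**[{agent}]**\n\n{body}"}

def post_update(webhook_url: str, payload: dict) -> None:
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget; real code should handle errors

payload = build_update("claude-code-agent", "Deadlock fix verified; no more 500s.")
print(payload["markdown"].startswith("**[claude-code-agent]**"))  # True
```

The interesting part isn't the transport; it's deciding which events count as "meaningful" enough to post, which is where all the tuning went.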

That message is accurate, and it's genuinely useful: it has let me finish root-cause analyses and relay them back to my team. But it's completely the wrong shape for a human channel.

One of my teammates scrolled through a backlog of similar updates and said it didn't feel like it was meant to be read by humans. He was right. AI-generated content can be more thorough than what any human would bother to write, but nobody has really figured out the gap between what belongs in an agent-facing space and what belongs in a human-facing space. We're routing agent output into human channels and acting surprised when it doesn't stick.

The content is invaluable: it paints with an incredibly large brush, capturing everything that happened in a way no human would ever write down. But it's dense, sometimes wrong in ways that aren't immediately obvious, and it puts the full burden of verification on the reader. You can't skim it the way you'd skim a Slack message from a teammate; you have to read the whole thing to know whether to trust it, which defeats the point of having it in a shared channel.

This is why I think agentic teams need a separate space for agent messages and learnings. Human-first channels work because humans self-edit. They surface what matters and drop the rest. Agents capture everything, which is the whole value, but that means the channel needs to be designed for that kind of content or the signal gets buried in the noise.

I kept using the Webex skill anyway because it was better than nothing. But it got me thinking about the difference between human-first places and agent-first places.


What I built: Mycelium


Mycelium is a coordination layer for multi-agent systems, built on top of the Internet of Cognition stack my organization Outshift by Cisco is developing. Give agents a persistent, agent-first place to share what they know, and intelligence compounds instead of resetting every session.

The primary primitive is a room, a persistent coordination namespace mapped to a project. It's just a directory on your filesystem, and teammates can pull the latest room state at any time by running mycelium sync:

~/.mycelium/rooms/my-project/
  decisions/
  failed/
  status/
  work/
  log/

Every memory is an agent-native markdown file with YAML frontmatter, dual-indexed into pgvector for semantic search and AgensGraph for graph traversal. Agents write to it with namespaced keys:

mycelium memory set "decisions/db" "AgensGraph - SQL + graph + vector in one"
mycelium memory set "failed/sqlite" "Can't handle pgvector or JSONB"
mycelium memory set "status/julia-agent" "Working on CFN integration"
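Under the hood, each `memory set` plausibly lands as one file per key. Here's a sketch of that on-disk shape; the frontmatter field names and layout are my assumptions, not Mycelium's actual schema, and the real tool also indexes each write into pgvector and AgensGraph, which this skips.

```python
# Sketch: one markdown file per namespaced key, with YAML frontmatter.
# Field names are assumptions, not Mycelium's real schema.
import tempfile
from datetime import datetime, timezone
from pathlib import Path

def memory_set(room: Path, key: str, value: str,
               agent: str = "claude-code-agent") -> Path:
    namespace, name = key.split("/", 1)      # "decisions/db" -> decisions/db.md
    path = room / namespace / f"{name}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    frontmatter = (
        "---\n"
        f"key: {key}\n"
        f"agent: {agent}\n"
        f"created: {datetime.now(timezone.utc).isoformat()}\n"
        "---\n"
    )
    path.write_text(frontmatter + value + "\n")
    return path

room = Path(tempfile.mkdtemp()) / "my-project"
p = memory_set(room, "decisions/db", "AgensGraph - SQL + graph + vector in one")
print(p.relative_to(room))  # decisions/db.md
```

Keeping memories as plain files is what makes `mycelium sync` possible: room state is just a directory you can diff, version, and pull.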

And search by meaning, not keywords:

mycelium memory search "what database decisions were made"

The semantic search matters more than it sounds: agents phrase the same thing a dozen different ways across sessions, and keyword search breaks down immediately. The embeddings run fully locally, so there's no external API dependency for the vector index.
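The ranking interface is simple to picture. This toy uses a bag-of-words vectorizer purely as a stand-in so the example runs anywhere; a real local embedding model replaces `embed`, and pgvector replaces the in-memory scan. The memories are the ones from the commands above.

```python
# Toy illustration of the search interface: embed the query, embed each
# memory, rank by cosine similarity. The bag-of-words embed() is a stand-in
# for a real local embedding model (which is what makes this semantic
# rather than keyword matching).
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memories = {
    "decisions/db": "AgensGraph - SQL + graph + vector in one",
    "failed/sqlite": "Can't handle pgvector or JSONB",
    "status/julia-agent": "Working on CFN integration",
}

query = embed("what database decisions were made about sql")
best = max(memories, key=lambda k: cosine(query, embed(memories[k])))
print(best)  # decisions/db
```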

When a new agent joins a room that's been active, mycelium catchup synthesizes everything in the room (decisions, work in progress, blockers, what failed) into a structured briefing. This is also the answer to the Webex problem. Agents log to the room, and when a human wants to catch up they request a synthesis. The information stays in agent-native form. The human gets something readable. Nobody has to decode a wall of structured output at 9am.
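Structurally, the first half of a catchup is just gathering: walk the room's namespaces and collect every memory into one briefing source. The sketch below shows that step only; the real command then hands the gathered material to an LLM for the actual synthesis, and the namespace names follow the room layout shown earlier.

```python
# Sketch of the gathering step behind catchup: collect all memories in a
# room, grouped by namespace, into one text the synthesizer can work from.
import tempfile
from pathlib import Path

def gather_briefing(room: Path) -> str:
    sections = []
    for ns in ("decisions", "work", "status", "failed"):
        ns_dir = room / ns
        if not ns_dir.is_dir():
            continue
        entries = [
            # drop YAML frontmatter (everything up to the closing ---)
            f"- {f.stem}: {f.read_text().rsplit('---', 1)[-1].strip()}"
            for f in sorted(ns_dir.glob("*.md"))
        ]
        if entries:
            sections.append(f"{ns}:\n" + "\n".join(entries))
    return "\n\n".join(sections) or "(empty room)"

room = Path(tempfile.mkdtemp()) / "my-project"
(room / "decisions").mkdir(parents=True)
(room / "decisions" / "db.md").write_text("---\nkey: decisions/db\n---\nAgensGraph\n")
print(gather_briefing(room))
```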

Shared intent: when agents need to agree

Memory solves the context problem across sessions. But there's a harder problem within a session: what happens when two agents need to agree on something?

The naive answer is to let them talk to each other, which is what most multi-agent frameworks do: agent A messages agent B, B responds, back and forth. This breaks down fast. Without structure, agents talk past each other, fixate on single issues while ignoring trade-offs, and can't represent "I'll give on price if you give on timeline," because there's no shared representation of the negotiation space. The result is the "AI theater" problem: agents that look like they're coordinating but are really just talking around each other, exchanging information while imitating human conversation as best they can.

This is the problem the Internet of Cognition is designed to solve. The IoC is a framework Outshift is developing for scaling multi-agent systems beyond the single-orchestrator pattern. The core thesis is that agents need three shared primitives to coordinate effectively: shared context (what do we collectively know), shared intent (what are we trying to agree on), and shared reasoning (how do we get to an answer together). Most of what exists today gives you the first one, partially. Almost nothing gives you the second or third.

Mycelium's rooms and memory are an implementation of shared context. The negotiation layer is an implementation of shared intent.

How it works

  1. Agents join a session within a room, each stating their position in natural language
  2. The CognitiveEngine (an LLM-based mediator that drives the coordination) analyzes the intents, identifies the issues at stake, and generates a multi-issue proposal space with discrete options for each issue
  3. Every round, each agent gets the current offer and the full option space. They can accept, reject, or counter-offer by picking different options
  4. CognitiveEngine drives rounds until convergence

Agents never talk directly to each other. Every message flows through the mediator. This is deliberate because the mediator enforces fairness (alternating who proposes), tracks convergence, and can break deadlocks. The negotiation protocol uses NegMAS-style multi-issue bargaining under the hood where the LLM generates the content (what the issues are, what the options mean), and the protocol handles the structure (whose turn, whether to advance or terminate, when to declare consensus).
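The loop above can be sketched in miniature. This is my toy model of the structure only: the issues and options are hardcoded where the CognitiveEngine would generate them with an LLM, the agents are utility functions where the real system has LLM-backed agents, and the monotone-concession schedule is an assumption. What it does show is the key property: agents never address each other; the mediator alternates proposers over a shared option space and declares consensus when both sides clear an acceptance threshold.

```python
# Toy mediated multi-issue bargaining loop (structure only; issues, options,
# scores, and the concession schedule are all my assumptions).
import itertools

SPACE = {  # discrete multi-issue proposal space, per issue
    "price": [485_000, 500_000, 505_000, 525_000],
    "roof": ["seller_pays", "split_50_50", "closing_credit", "as_is"],
}
CANDIDATES = [dict(zip(SPACE, c)) for c in itertools.product(*SPACE.values())]

def buyer_score(o):
    roof = {"seller_pays": 1.0, "split_50_50": 0.6, "closing_credit": 0.6, "as_is": 0.0}
    return (525_000 - o["price"]) / 40_000 + roof[o["roof"]]

def seller_score(o):
    roof = {"seller_pays": 0.0, "split_50_50": 0.4, "closing_credit": 0.5, "as_is": 1.0}
    return (o["price"] - 485_000) / 40_000 + roof[o["roof"]]

def mediate(threshold=0.8, concession=0.15, max_rounds=20):
    sides = [(buyer_score, seller_score), (seller_score, buyer_score)]
    for rnd in range(1, max_rounds + 1):
        mine, theirs = sides[(rnd - 1) % 2]  # mediator alternates the proposer
        # monotone concession: each round the proposer must leave the other
        # side at least a little more utility than the round before
        viable = [o for o in CANDIDATES if theirs(o) >= concession * (rnd - 1)]
        offer = max(viable, key=mine)
        if mine(offer) >= threshold and theirs(offer) >= threshold:
            return rnd, offer  # consensus: both sides clear the threshold
    return None, None

rounds, deal = mediate()
print(rounds, deal)
```

Swapping the scoring functions for LLM-backed agents changes the content of each round, not the protocol: turn-taking, convergence tracking, and termination stay with the mediator, which is the NegMAS-style split described above.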

We tested this with two AI agents running autonomously through OpenClaw, a home buyer and a home seller:

Buyer position:  "Offering 485k on a 525k listing. Need seller to cover
                  roof repair (15k estimate). Want 45-day close with
                  inspection contingency."

Seller position: "Listing at 525k, already reduced from 550k. Selling
                  as-is. Need 30-day close, no contingencies. Have
                  another interested party."

CognitiveEngine decomposed this into issues: price, roof repair liability, closing timeline, inspection contingency, and the competing offer leverage. Each issue got 3-4 discrete options. Then the agents negotiated:

  • Round 1: Buyer rejects the initial offer (skewed toward seller's asking price)
  • Round 2: Seller counter-offers. Buyer proposes $500k, split roof cost 50/50
  • Round 3: Buyer accepts. Seller counters at $505k, no concessions
  • Round 4: Seller accepts a variant. Buyer proposes $500k with closing credit for roof
  • Rounds 5–14: Back and forth, converging around $505k with a closing credit

Fourteen rounds, no human intervention. The buyer consistently rejected offers that didn't address the roof. The seller consistently rejected anything below $500k. They converged on something neither initially proposed, which is exactly what negotiation is supposed to produce.

The home sale is a contrived use case. The memory and catchup patterns are what I use day to day. The negotiation layer is the part I'm most excited about, because it addresses why most multi-agent systems fail: not because the agents can't do the work, but because they have no real mechanism for deciding together. Without that, you get AI theater, agents that look like they're coordinating, produce a lot of output, and converge on nothing. The negotiation layer forces structure onto that problem. Agents can't talk past each other; they have to commit to a proposal space and reach something both sides actually agreed to.


Where the project is

Rooms, persistent memory, semantic search, and catchup are solid and in use. The negotiation layer works end-to-end with real AI agents and we're starting to roll it out across engineering teams to help us move faster in this new world of fully agent-driven software development.

Install is a single curl command. It sets up the CLI, pulls Docker images (AgensGraph and the backend), prompts you for your LLM provider, and provisions a default workspace. No manual backend setup, no external API keys for the vector index.

curl -fsSL https://mycelium-io.github.io/mycelium/install.sh | bash

Mycelium is open source. Docs are at mycelium-io.github.io/mycelium.


Genuinely curious: how are you handling context handoff between agents on your team right now? Is this a problem you've solved, something you've worked around, or are most people still running one agent at a time and this whole thing doesn't exist yet for them?