As Meta launches an AI avatar of Mark Zuckerberg, Paul Armstrong writes why cloning your boss means cloning mistakes

Mark Zuckerberg is building an AI version of himself so employees can interact with “him” at scale. What a delight for everyone. The pitch sounds efficient: tens of thousands of staff, direct access to the leadership voice, fewer bottlenecks.

A neat solution to the pesky problem of one executive not being able to give personal time to 50,000 staff. The only trouble is that this isn’t leadership at scale; it’s more like judgement being handed to a system that can’t be held responsible for what it says or does.

Judgement is being outsourced, not scaled

Meta’s internal clone of Mark Zuckerberg is trained on his tone, views and (god help us) past decisions, designed to answer questions and guide employees without requiring his time.

If that sounds dystopian, it gets worse. Framed one way, that looks like access. Framed properly, that looks like thousands of new decision points being created without any corresponding increase in accountability.

Claude, ChatGPT and co generate outputs based on probability, not understanding, and their corporate cousins are no different. Confident answers can still be wrong, fabricated or inconsistent depending on prompt and context, a limitation explored in research on hallucinations in large language models. Vendors often acknowledge this behaviour as an unresolved constraint rather than a solved problem, while a whole new industry of GEO (generative engine optimisation) experts tries to sell companies the expertise to influence the black boxes, when really everyone is just scrabbling around, hoping Altman and co throw them a sign that they’re not going to be clobbered next.

Businesses are already deploying clones and proxy systems far beyond executive chat. Hiring teams use awful AI avatar interviewers to screen candidates. HR departments generate performance reviews and internal feedback using models trained on historical data.

Cloning expertise means cloning mistakes too

Customer service bots negotiate refunds, explain policies and make commitments that bind the company. Each of those systems claims to remove workload, but really they just replace human judgement with output that meets an acceptable level of probability. Think this isn’t going to land you in court?

Ask Air Canada, whose chatbot fabricated a refund policy that never existed; the airline was forced to honour it in court. A single hallucination turned into a legal obligation. Scale that across thousands of interactions a day and the problem stops looking like a bug and starts looking like structural exposure mixed with a leadership failure to act.

Internal use cases carry the same risk, just less visibly. An AI clone answering employee questions about strategy won’t produce a single consistent view. Slight variations in phrasing, context or prompt will generate different answers that all sound authoritative.

How employees take that information will also differ, depending on their training and their understanding of internal programmes and policies. Alignment doesn’t improve; it fragments. Employees leave those interactions believing they have direction when in reality they have received one of many possible interpretations, with no direct human interaction, which over the long term is demotivating and likely psychologically damaging.

Executives adopting these systems assume that more access to the leadership voice improves clarity, and increased output feels like progress. Underneath, a different pattern is emerging. Every additional AI interaction adds another decision made inside the organisation, and because each carries some probability of error, total errors rise sharply with volume.

The problem of a synthetic executive voice

Hiring is showing the damage earlier than most functions because outcomes are visible and measurable. AI screening systems routinely filter candidates based on proxies rather than capability, and the results go viral. Strong candidates who don’t match the training data profile get rejected before a human even sees them, if one ever does, meaning weak signals become hiring criteria because they’re easy to model.

Organisations then wonder why performance drops despite more “efficient” processes. Now add cloned leadership on top of that stack and the problem compounds. A synthetic executive voice reinforces the same patterns at scale, creating a loop where hiring, feedback and internal communication all reflect the biases of the underlying model rather than the intent of the leadership team.

Culture stops being built and starts being output. External signals already point to how quickly these systems drift. Meta has faced scrutiny over AI personas that blur simulation and reality, where interaction is prioritised over accuracy or safety.

If users struggle to distinguish between a system and a person, employees interacting with an AI version of the CEO will do the same. Worse still is the hallowed discretionar