TL;DR: LLMs don’t become agents because they’re more intelligent, but because we place them inside a system that makes them usable. That system, which handles context, tools, errors, and flows, is the harness. If an agent doesn’t work, the problem is most likely not the model, but everything you’ve built around it.

The agent isn’t broken. Your harness is. In recent years, we’ve seen an impres