This is a submission for the OpenClaw Challenge.

Every agent system eventually hits the same wall: the model is not forgetting because it is dumb. It is forgetting because you are feeding it a landfill. Old tool output. Half-fixed errors. File reads from a task you abandoned twenty minutes ago. Five versions of the same plan. Then you ask the model to be precise while its context window is full of