The problem Most teams shipping LLM features test for code bugs but not for prompt-injection attacks in their inputs. They rely on the model's built-in safety. That's not a plan. What I built nukon-pi-detect is a tiny Python library + CLI that scans strings and files for known prompt-injection patterns before they reach your model. pip install nukon-pi-detect nukon-pi-detect scan --string