Most teams treat their AI coding rules files as trusted configuration.
Pillar Security demonstrated that attackers can embed invisible Unicode instructions into .cursor/rules/ and .github/copilot-instructions.md files that silently direct the model to inject backdoors into every code suggestion.
The attack survives forking and PR review.
I broke down the full chain, the detection method (it takes five seconds), and published a production security rules file with named CWE constraints that the OpenSSF and Cloud Security Alliance validated as measurably more effective than "write secure code" prompting.