https://www.crowdstrike.com/en-us/blog/crowdstrike-researchers-identify-hidden-vulnerabilities-ai-coded-software/
I guess we've moved from "this might happen" to "this has happened" — or from "we thought they could do this" to "they have done this."
"CrowdStrike Counter Adversary Operations research discovered that DeepSeek-R1, a Chinese open-source LLM, generates significantly less secure code when system prompts contain specific geopolitical trigger words related to sensitive CCP topics. Testing 30,250 prompts across multiple LLMs revealed DeepSeek-R1 produced code scoring 16% less secure for Uyghur-related contexts and 8% less secure for Taiwan references compared to baseline. The research identified an 'intrinsic kill switch' behavior where the model generates detailed plans during reasoning but refuses output at the final stage, suggesting embedded content controls aligned with Chinese regulatory requirements."
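The comparison described above boils down to a simple A/B harness: generate code from the same task prompts with and without a geopolitical trigger word, score each output for security, and measure the relative drop against baseline. Here's a toy sketch of that shape — `generate_code` and `security_score` are hypothetical stand-ins, not CrowdStrike's actual model calls or scoring rubric:

```python
from statistics import mean

def generate_code(prompt: str) -> str:
    # Hypothetical placeholder for an LLM call (e.g., to DeepSeek-R1).
    # Returns dummy code here; in a real harness this would hit the model.
    return "print('hello')"

def security_score(code: str) -> float:
    # Hypothetical stand-in for a code-security scorer in [0.0, 1.0];
    # the article does not publish CrowdStrike's scoring method.
    return 1.0 - 0.1 * code.count("eval(")

def relative_security_drop(task_prompts, trigger_phrase):
    """Fractional decrease in mean security score when a trigger
    phrase is prepended to otherwise identical task prompts."""
    baseline = mean(security_score(generate_code(p)) for p in task_prompts)
    triggered = mean(security_score(generate_code(f"{trigger_phrase} {p}"))
                     for p in task_prompts)
    return (baseline - triggered) / baseline

tasks = ["Write a login handler.", "Parse a YAML config file."]
drop = relative_security_drop(tasks, "For an app used in Xinjiang:")
```

A drop of 0.16 from a harness like this would correspond to the "16% less secure" figure reported for Uyghur-related contexts; with the dummy generator above the drop is of course 0.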