Security Risk From Hidden Backdoors In AI Models

featured

Recent research shows that AI large language models (LLMs) can be quietly poisoned during training with hidden backdoors that create a serious and hard to detect supply chain security risk for organisations deploying them.

Sleeper Agent Backdoors

Researchers say sleeper agent backdoors in LLMs pose a security risk to organisations deploying AI systems because they can be embedded during training and evade detection in routine testing. Recent studies from Microsoft and the adversarial machine learning community show that poisoned models can behave normally in production, yet produce unsafe or malicious outputs when a trigger appears, with the behaviour embedded in the model’s parameters rather than in visible software code.

Continue reading ...

MSP Members Only

...Free MSP Standard Access Required

Thank you for reading MSP Marketplace Create your FREE account or login to continue reading

Your Advert here?

Click here to find out about sponsorship