WorkforceAI
‘Data Poisoning’ Poses an Insidious Risk to LLM Training

The Wire: Watch your data – Incorrect or misleading information leads to misinformation and other mischief when it creeps into training data.

Mark Feffer
Mar 19, 2024


Large language models are vulnerable to security risks that researchers are only beginning to understand. That puts even more pressure on employers and technology executives to keep pace with the hazards of making data accessible, even when that access is limited.
