‘Data Poisoning’ Poses an Insidious Risk to LLM Training
The Wire: Watch your data – Incorrect or misleading information leads to misinformation and other mischief when it creeps into training data.
Large language models are vulnerable to security risks that researchers are just beginning to understand. That puts even more pressure on employers and technology executives to keep pace with the hazards involved with making data accessible, even if that access is limited.
Keep reading with a 7-day free trial
Subscribe to WorkforceAI to keep reading this post and get 7 days of free access to the full post archives.