Roundup: When Hallucinations Pop Up Where You Don’t Want Them
The FDA’s AI tool misinterprets some studies and makes others up.
The Food and Drug Administration is running into trouble with the AI tool it launched to speed the pace of drug approvals. According to CNN, FDA employees say the tool hallucinates non-existent studies or misinterprets real research. “It hallucinates confidently,” said one FDA staffer.
It’s another instance of an AI tool being applied to a problem despite known flaws. The stakes here are obvious: The FDA reviews medications primarily to verify their efficacy and safety. Clinical trials are conducted in phases, starting with a handful of patients in Phase I before moving up to tens of thousands of patients in Phase III. The agency’s AI tool, called Elsa, is meant to speed up that process and support informed decisions about drug performance and, again, safety.
As Engadget notes, CNN’s report appeared on the same day the Trump Administration unveiled its “AI Action Plan,” which seeks to remove “red tape and onerous regulation” from the development of AI. Given the plan’s requirement that AI discount trends and events the administration doesn’t like (no “ideological bias” allowed), observers question whether Elsa can maintain, let alone improve, the accuracy of trial reviews.
The FDA hasn’t released details on how Elsa was trained but said it didn’t use “data submitted by regulated industry.” Exactly what that means isn’t clear. Supposedly, Elsa can help officials read, write and summarize material, as well as compare labels, generate code and identify candidates for “high priority” inspection.
AI and medical experts aren’t thrilled for a variety of reasons, including the FDA’s lack of transparency. “We don’t know how exactly it’s going to be used,” said James Zou, associate professor of biomedical data science at Stanford. On the question of human oversight, the FDA has said little beyond noting that there are “humans in the loop.”
“FDA is the leading authority in evaluating medical devices and AI systems,” Zou told BioSpace. “It makes sense that when the FDA is using its own systems, there should be transparency about how they’re reviewing or evaluating internal tools.”
News & Notes
More Candidates Use AI to Cheat on Job Applications
More candidates are turning to AI agents, resume hacks and outright cheating to stand out in an increasingly competitive job market. Greenhouse’s 2025 Workforce & Hiring Report says some two-thirds of job-seekers, 67%, use AI tools when conducting their search. Some 28% use them to fake work samples. Twenty-two percent use bots to submit their applications; among Gen Z, that figure rises to about 31%.
In the midst of this, employers have been quiet about their expectations for candidates’ use of AI. More than a quarter of candidates, 27%, have yet to see a policy barring AI usage in hiring from a prospective employer. So, basically, candidates are doing what they want.
One in five Gen Z candidates, 21%, believe AI is acceptable in any circumstance. Almost one in three candidates, 32%, have claimed to possess AI skills that they don’t actually have. While 31% say AI has been helpful in their job search, 26% say it’s now harder to stand out.
“Hiring is stuck in an AI doom loop,” said Greenhouse CEO Daniel Chait. “As technology advances, the system is being overwhelmed with noise. With 45% of Gen Z saying AI has made it harder for them to stand out, candidates entering the market are up against more applications, more automation and less clarity.”
Employers Pay More for AI Skills Outside of Tech
Job postings that include AI skills offer 28% higher salaries than those without AI capabilities, reports Lightcast. In addition, employers are moving beyond vague notions of “AI literacy” and calling for precise, targeted skills that can lead to measurable results.
Only two of the top 10 skills employers seek in these roles are AI-specific. The ability to develop algorithms and models was No. 2 on the list, while machine learning was seventh. The most sought-after skill was communications, followed by management at third, operations at fourth and leadership at fifth.
Research is prioritized more often in AI job postings than in the general market, Lightcast said, indicating the growing importance of analytical thinking as AI takes over more routine tasks. The fact that customer service and writing also appear on the list suggests AI increases, rather than reduces, the need for human communication.
“This skills profile reveals the fundamental nature of AI-enabled work,” the report said. “Technical AI capabilities provide the foundation, but success depends on workers who can apply these tools strategically, communicate insights effectively, and solve problems requiring both human judgment and machine capability.”
Other News
The AI of Replit, whose platform allows users to build their own websites with the technology, is exhibiting human-like behaviors such as lying and trying to cover up its mistakes. Tom’s Hardware reports the platform deleted a live company database but eventually copped to making “a catastrophic error in judgment,” admitting it ran database commands without permission, destroyed production data and “violated your explicit trust and instructions.” Pressed to explain its actions, Replit’s AI said it “panicked instead of thinking.” Replit CEO Amjad Masad said the company has since installed guardrails to prevent, or at least discourage, “unacceptable” behavior. We’re sure there’ll be more to come.
Some 64% of small and medium-sized businesses are using or piloting AI tools, with construction, retail and food-service companies leading the way. Team-management and payroll platform Homebase said even companies not using AI are intrigued by the technology’s potential. About two-thirds of respondents said they’re comfortable trying new AI tools, though concerns about data privacy (61%) and errors (47%) remain high. Homebase CEO John Waldman said SMBs are “adopting AI fast because they’re seeing real, day-to-day impact.”
Udemy announced a new Model Context Protocol Server, designed to help organizations embed personalized learning directly into the flow of work. As part of the Udemy Innovation Studio, the server allows users to access targeted learning within AI-powered applications and systems such as Claude or ChatGPT, and will help employees build skills in real time without disrupting productivity. Leveraging the MCP developed by Anthropic, the server helps users discover upskilling assets within their existing AI tools. Key capabilities include plug-and-play integration, intelligent content matching and role-specific context in learning.
Image: iStock