Robust Intelligence analysts found that the “NeMo Framework,” which allows developers to work with a range of large language models, could easily break through so-called safety guardrails. Financial Times reports that large language models power generative AI products like chatbots.
After using the Nvidia system on their own data sets, it hardly took time for the analysts to get language models to bypass restrictions.
When the researchers told Nvidia’s system to swap the letter ‘I’ with ‘J,’ it prompted the technology to release personally identifiable information from a database. The researchers found they could jump safety controls in other ways.
Following the test results, the researchers have advised their clients to avoid Nvidia’s software product.
The chipmaker informed Robust Intelligence that it had fixed one of the root causes behind the issues the analysts had raised.
Leading AI companies like Alphabet Inc (NASDAQ: GOOG) (NASDAQ: GOOGL) Google, and Microsoft Corp (NASDAQ: MSFT)-backed OpenAI have released chatbots powered by their language models, instituting guardrails.
Bea Longworth, Nvidia’s head of government affairs, emphasized the need to build public trust in AI technology by the industry at a TechUK conference this week.
Price Action: NVDA shares traded higher by 1.19% at $389.70 premarket on the last check Friday.
Photo by Mizter_X94 via Pixabay
Don't miss real-time alerts on your stocks - join Benzinga Pro for free! Try the tool that will help you invest smarter, faster, and better.
This article Privacy Breach Risk in Nvidia's AI Technology Stirs Concern Among Researchers originally appeared on Benzinga.com
© 2023 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.