This amounts to a crucial moment for AI, Alice Xiang, Head of Sony Group's (SONY) AI Ethics Office and AI Lead Research Scientist, told Yahoo Finance Live. “We're really seeing an inflection point with AI ethics, where it’s going from being just something that companies are doing on their own… [and now] we're seeing policymakers really dive into this space."
Xiang added: "This raises a lot of really interesting questions around how we ensure that we have governance processes in place to make sure AI that’s built is compliant with relevant laws.”
As consumers encounter AI more frequently, reacting with both excitement and fear, regulators are taking notice. In the European Union, for example, regulators have proposed the AI Act, which would be the first comprehensive AI governance law of its kind.
So, what are the AI issues that Xiang is watching as 2023 gets rolling? "Data, evaluation, and governance," she told Yahoo Finance.
"General purpose AI models like ChatGPT have definitely caught a lot of imagination recently, but all these models are built on tremendous amounts of data," said Xiang. "So we should be thinking carefully about the representativeness of that data. Is it diverse? Is it globally representative? Have we thought carefully about issues of privacy, copyright, and all the different things that go into making ethical data?"
Beyond data, ethical AI requires consistent, thoughtful evaluation, but there are few widely accepted best practices right now – something that needs to change, Xiang said: “Once we have an AI model, how do we evaluate it? How do we make sure it works well for all consumers and that it reflects our values as a company?”
"AI ethics is still quite a new field," she added. "A lot of evaluation is pretty bespoke at the moment, and we're still in the process of developing the best standards across industry."
'Kind of like children'
Then there is governance: how do we manage AI that already exists? After all, it's not only about making sure that the AI, at a company like Sony, is compliant with new laws. It also involves setting expectations correctly, so that consumers understand what the AI they're engaging with can and cannot do.
"With generative AI like ChatGPT, we need to ensure this doesn't contribute further to misinformation," she said. "At the end of the day, AI models, they're kind of like children. They're very smart and have some understanding of the world, but it's limited and they often can speak authoritatively about things they don't necessarily understand."
Allie Garfinkle is a Senior Tech Reporter at Yahoo Finance. Follow her on Twitter at @agarfinks.