Top A.I. companies are getting serious about A.I. safety and concern about ‘extremely bad’ A.I. risks is growing

Yamada HITOSHI/Gamma-Rapho via Getty Images

Hello and welcome to May’s special monthly edition of Eye on A.I.

The idea that increasingly capable and general-purpose artificial intelligence software could pose extreme risks, including the extermination of the entire human species, is controversial. A lot of A.I. experts believe such risks are outlandish and the danger so vanishingly remote as to not warrant consideration. Some of these same people see the emphasis on existential risks by a number of prominent technologists, including many who are working to build advanced A.I. systems themselves, as a cynical ploy intended both to hype the capabilities of their current A.I. systems and to distract regulators and the public from the real and concrete risks that already exist with today’s A.I. software.

And just to be clear, these real-world harms are numerous and serious: They include the reinforcement and amplification of existing systemic, societal biases, including racism and sexism, as well as an A.I. software development cycle that often depends on data taken without consent or regard to copyright, the use of underpaid contractors in the developing world to label data, and a fundamental lack of transparency into how A.I. software is created and what its strengths and weaknesses are. Other risks also include the large carbon footprint of many of today’s generative A.I. models and the tendency of companies to use automation as a way to eliminate jobs and pay workers less.

But, having said that, concerns about existential risk are becoming harder to ignore. A 2022 survey of researchers working at the cutting edge of A.I. technology in some of the most prominent A.I. labs revealed that about half of these researchers now think there is a greater than 10% chance that A.I.’s impact will be “extremely bad” and could include human extinction. (It is notable that a quarter of researchers still thought the chance of this happening was zero.) Geoff Hinton, the deep learning pioneer who recently stepped down from a role at Google so he could be freer to speak out about what he sees as the dangers of increasingly powerful A.I., has said models such as GPT-4 and PaLM 2 have shifted his thinking and that he now believes we might stumble into inventing dangerous superintelligence anytime in the next two decades.

There are some signs that a grassroots movement is building around fears of A.I.’s existential risks. Some students picketed OpenAI CEO Sam Altman’s talk at University College London earlier this week. They were calling on OpenAI to abandon its pursuit of artificial general intelligence—the kind of general-purpose A.I. that could perform any cognitive task as well as a person—until scientists figure out how to ensure such systems are safe. The protestors pointed to what they see as a glaring contradiction: Altman himself has warned that the downside risk from AGI could mean “lights out for all of us,” and yet he continues to pursue more and more advanced A.I. Protestors have also picketed outside the London headquarters of Google DeepMind in the past week.

I am not sure who is right here. But I think that if there’s a nonzero chance of human extinction or other severely negative outcomes from advanced A.I., it is worthwhile having at least a few smart people thinking about how to prevent that from happening. It is interesting to see some of the top A.I. labs starting to collaborate on frameworks and protocols for A.I. safety. Yesterday, a group of researchers from Google DeepMind, OpenAI, Anthropic, and several nonprofit think tanks and organizations interested in A.I. safety published a paper detailing one possible framework and testing regime. The paper is important because the ideas in it could wind up forming the basis for an industry-wide effort and could guide regulators. This is especially true if a national or international agency specifically aimed at governing foundation models, the kinds of multipurpose A.I. systems that are underpinning the generative A.I. boom, comes into being. OpenAI’s Altman has called for the creation of such an agency, as have other A.I. experts, and this week Microsoft put its weight behind that idea too.

“If you are going to have any kind of safety standards that govern ‘is this A.I. system safe to deploy?’ then you're going to need tools for looking at that A.I. system and working out: What are its risks? What can it do? What can it not do? Where does it go wrong?” Toby Shevlane, a researcher at Google DeepMind and the lead author of the new paper, tells me.

In the paper, the researchers called for testing to be conducted both by the companies and labs developing advanced A.I. and by outside, independent auditors and risk assessors. “There are a number of benefits to having external [evaluators] perform the evaluation in addition to the internal staff,” Shevlane says, citing accountability and the vetting of safety claims made by the model creators. The researchers suggested that while internal safety processes might be sufficient to govern the training of powerful A.I. models, regulators, other labs, and the scientific community as a whole should be informed of the results of these internal risk assessments. Then, before a model can be set loose in the world, external experts and auditors should have a role in assessing and testing the model for safety, with the results also reported to a regulatory agency, other labs, and the broader scientific community. Finally, once a model has been deployed, there should be continued monitoring of the model, with a system for flagging and reporting worrying incidents, similar to the system currently used to spot “adverse events” with medicines that have been approved for use.
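
To make that staged process a little more concrete, here is a minimal Python sketch of the flow as described above: an internal evaluation during training, reporting of results to regulators, other labs, and the scientific community, an external audit before deployment, and post-deployment incident monitoring. The names used here (EvaluationRecord, run_internal_evaluation, and so on) are hypothetical and illustrative; the paper proposes a framework, not code.

```python
# Illustrative sketch only: the stage names, dataclass, and functions below are
# hypothetical and are not drawn from the DeepMind/OpenAI/Anthropic paper itself.
from dataclasses import dataclass, field


@dataclass
class EvaluationRecord:
    model_name: str
    stage: str                      # "internal", "external", or "post-deployment"
    findings: list = field(default_factory=list)


def run_internal_evaluation(model_name: str) -> EvaluationRecord:
    # Developers assess the model for dangerous capabilities during training.
    return EvaluationRecord(model_name, stage="internal")


def run_external_audit(model_name: str) -> EvaluationRecord:
    # Independent auditors and risk assessors test the model before deployment.
    return EvaluationRecord(model_name, stage="external")


def report(record: EvaluationRecord, recipients: list) -> None:
    # Results are shared with regulators, other labs, and the scientific community.
    for recipient in recipients:
        print(f"Reporting {record.stage} findings on {record.model_name} to {recipient}")


def monitor_deployment(model_name: str) -> None:
    # Post-deployment monitoring, analogous to adverse-event reporting for medicines.
    print(f"Monitoring {model_name} and flagging worrying incidents")


if __name__ == "__main__":
    recipients = ["regulator", "other labs", "scientific community"]
    report(run_internal_evaluation("frontier-model"), recipients)
    report(run_external_audit("frontier-model"), recipients)
    monitor_deployment("frontier-model")
```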

The researchers identified nine A.I. capabilities that could pose significant risks and for which models should be assessed. Several of these, such as the ability to conduct cyberattacks and to deceive people into believing false information or into thinking that they are interacting with a person rather than a machine, are already present in today’s large language models. Today’s models also have nascent capabilities in other areas the researchers identified as concerning, such as the ability to persuade and manipulate people into taking specific actions and the ability to engage in long-term planning, including setting sub-goals. Other dangerous capabilities the researchers highlighted include the ability to plan and execute political strategies, the ability to gain access to weapons, and the capacity to build other A.I. systems. Finally, they warned of A.I. systems that might develop situational awareness—including possibly understanding when they are being tested, allowing them to deceive evaluators—and the capacity to self-perpetuate and self-replicate.

The researchers said those training and testing powerful A.I. systems should take careful security measures, including possibly training and testing the A.I. models in isolated environments where the model has no ability to interact with wider computer networks, or where its access to other software tools can be carefully monitored and controlled. The paper also said that labs should develop ways to rapidly cut off a model’s access to networks and shut it down should it start to exhibit worrying behavior.
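
As a rough illustration of the containment controls described above, here is a hypothetical sketch: no network access by default, individually granted and monitored tool access, and a rapid shutdown path. The class and method names are invented for illustration and do not come from the paper.

```python
# Hypothetical sketch of the containment controls described above; the class and
# method names are invented for illustration and do not come from the paper.
class IsolatedEvaluationEnvironment:
    def __init__(self, model_name: str):
        self.model_name = model_name
        self.network_access = False   # no connection to wider computer networks
        self.allowed_tools = []       # access to other software tools is whitelisted
        self.running = True

    def grant_tool(self, tool: str) -> None:
        # Each tool the model may call is granted individually and monitored.
        self.allowed_tools.append(tool)
        print(f"{self.model_name}: granted monitored access to {tool}")

    def emergency_shutdown(self) -> None:
        # Rapidly cut off network access and halt the model if it starts to
        # exhibit worrying behavior.
        self.network_access = False
        self.running = False
        print(f"{self.model_name}: network access revoked, model shut down")


# Example usage
env = IsolatedEvaluationEnvironment("frontier-model")
env.grant_tool("code interpreter")
env.emergency_shutdown()
```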

In many ways, the paper is less interesting for these specifics than for what its mere existence says about the communication and coordination between cutting-edge A.I. labs regarding shared standards for the responsible development of the technology. Competitive pressures are making the sharing of information on the models these tech companies are releasing increasingly fraught. (OpenAI famously refused to publish even basic information about GPT-4 for what it said were largely competitive reasons, and Google has also said it will be less open going forward about exactly how it builds its cutting-edge A.I. models.) In this environment, it is good to see that tech companies are still willing to come together and try to develop some shared standards on A.I. safety. How easy it will be for such coordination to continue, absent a government-sponsored process, remains to be seen. Existing laws may also make it more difficult. In a white paper released earlier this week, Google’s president of global affairs, Kent Walker, called for a provision that would give tech companies safe harbor to discuss A.I. safety measures without falling afoul of antitrust laws. That is probably a sensible measure.

Of course, the most sensible thing might be for the companies to follow the protestors' advice and abandon efforts to develop more powerful A.I. systems until we understand enough about how to control them to be sure they can be developed safely. But having a shared framework for thinking about extreme risks and some standard safety protocols is better than continuing to race headlong into the future without those things.

With that, here are a few more items of A.I. news from the past week:

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

This story was originally featured on Fortune.com
