Microsoft: Advanced A.I. models need government regulation, with rules similar to anti-fraud and anti-terrorism safeguards at banks

Valeria Mongelli/Bloomberg via Getty Images

Microsoft said Thursday that advances in general-purpose artificial intelligence models are so significant that the U.S. should create a new regulatory agency to oversee the technology's development, and require companies working with these A.I. models to obtain licenses, similar to banking regulations designed to prevent fraud and money laundering.

The announcement, made by Microsoft president Brad Smith in a speech at the company’s annual Build developer conference and in an accompanying blog post, echoes recommendations that Sam Altman, the cofounder and CEO of OpenAI, which is closely partnered with Microsoft, made in testimony before a U.S. Senate subcommittee earlier this month.

While Microsoft said that existing legal frameworks and regulatory efforts were probably best suited for handling most A.I. applications, it singled out so-called foundation models as a special case. Because many different types of applications can be built on top of these general-purpose A.I. models, Smith said there will be a need for “new law and regulations…best implemented by a new government agency.”

Microsoft also said it thought that these powerful, highly capable A.I. models should have to be licensed, and that the data centers used to train and run them should also be subject to licensing. Microsoft advocated a “know your customer” (KYC) framework for companies developing advanced A.I. systems that would be similar to the one financial services companies are required to implement to prevent money laundering and sanctions busting. The company said A.I. companies working on foundation models should “know one’s cloud, one’s customers, and one’s content.”

Microsoft’s decision to back a new regulatory agency and a licensing regime for A.I. foundation models will be controversial. Some fear that this kind of governance regime for advanced A.I. will be subject to “regulatory capture,” in which large corporations shape the regulatory environment to suit their own business objectives while using rules and licensing requirements to keep out competitors.

The companies betting big on proprietary A.I. models served through tightly controlled application programming interfaces (APIs), such as Microsoft, OpenAI, and Google, are already facing competition from a host of open-source A.I. models being developed by startups, academics, collectives of A.I. researchers, and individual developers. In many cases, these open-source models have been able to mimic the capabilities of the large foundation models built by OpenAI and Google.

But, by their very nature, those offering open-source A.I. software are unlikely to be able to meet Microsoft’s proposed KYC regime, because open-source models can be downloaded by anyone and used for almost any purpose. At least one startup, called Together, has also proposed pooling unused computing capacity, including people’s laptops, into networks for training large A.I. models. Such a scheme would let A.I. developers bypass the data centers of major cloud computing providers and, consequently, the type of licensing system Microsoft is proposing.

Altman, in his remarks before the Senate subcommittee and in recent speeches, has said OpenAI is not in favor of regulatory capture and that it wants to see the open-source community thrive. But he has also said that new rules, and probably new government bodies, are needed to deal with the risk of artificial general intelligence, or a single A.I. system that can perform the majority of cognitive tasks as well as humans (essentially, a form of superintelligence).

When one U.S. senator suggested that Altman himself might be a good choice to head the new A.I. regulatory agency he was proposing, Altman demurred, saying he was happy with his current job, while offering to provide the senators with a list of qualified candidates. That drew derision on social media from those concerned with Silicon Valley's approach to A.I., many of whom expressed dismay that lawmakers seemed so deferential to the OpenAI chief.

At the same time, Altman has said it may be hard for OpenAI to comply with a new European Union law, the Artificial Intelligence Act, which is currently being finalized. The law would require companies training foundation models to design, train, and deploy their models with safeguards that ensure they are not breaching EU laws in areas such as data privacy. They would also have to publish a summary of any training data that is protected by copyright. Altman told reporters in London yesterday that while OpenAI would try to comply with the EU law, if it found it could not, it would simply have to pull its products and services from the European market.

Google, by contrast, in a policy white paper published earlier this week, stopped short of calling for a new regulatory agency. Instead, it called for “sectoral regulators to update existing oversight and enforcement regimes to apply to A.I. systems.” It said these regulators should have to issue regular reports identifying gaps in the law or in government capacity, a provision that could pave the way for a new regulatory body at some future point. Google also called for safe-harbor provisions that would allow leading companies working on advanced A.I. systems to collaborate on A.I. safety research without running afoul of antitrust laws.

As part of its five-point blueprint for A.I. governance, Microsoft said it was in favor of building on existing A.I. risk-management efforts, such as the framework developed by the U.S. National Institute of Standards and Technology (NIST).

The company also said that any A.I. models used to control critical infrastructure, such as electrical grids, water systems, and traffic management networks, should contain “safety brakes” allowing the systems to quickly revert to human control, and that the A.I. software controlling this kind of infrastructure should be run only in licensed data centers.

Academic research into A.I., which has struggled to keep pace with the rapid advances being made inside corporate research labs, should be given more resources and access to cutting-edge A.I. systems, Smith said in his blog post, which also suggested increased collaboration between the public and private sectors. Smith also called for more transparency about how A.I. models are built and trained, although he acknowledged some tension between openness and the need for security.

This story was originally featured on Fortune.com
