Don’t wait for Post Office-style scandal before regulating AI, ministers told


Ministers have been warned against waiting for a Post Office-like scandal involving artificial intelligence before stepping in to regulate the technology, after the government said it would not rush into legislation.

The government will acknowledge on Tuesday that binding measures to oversee the development of cutting-edge AI will be needed at some point – but not immediately. Instead, ministers will outline “initial thinking on future binding requirements” for advanced systems and discuss them with technical, legal and civil society experts.

The government is also giving regulators £10m to help tackle AI risks, and asking them to set out their approach to the technology by April 30.

However, the Ada Lovelace Institute, an independent AI research body, said the government should not wait for an impasse with tech companies or mistakes on the scale of the Post Office scandal before acting.

Michael Birtwistle, associate director of the institute, said: “We should not wait until companies stop cooperating, or until a Post Office-style scandal forces the government and regulators to respond. There is a very real risk that further delay in legislation will leave the UK powerless to prevent AI risks – or even to respond effectively after the fact.”

The potential for misuse of technology and its impact on people’s lives was highlighted by the Horizon scandal, in which hundreds of post office operators were wrongly prosecuted on the basis of a faulty computer system.

The government has so far taken a voluntary approach to regulating the most advanced systems. In November, it announced at a global AI safety summit that a group of major tech companies, including the ChatGPT developer OpenAI and Google, had agreed with the EU and 10 countries, including the United States, the United Kingdom and France, to cooperate on testing their most sophisticated AI models.

In its response to a consultation on its AI regulation white paper, the government is sticking to its framework of established regulators – such as the communications watchdog Ofcom and the data regulator, the Information Commissioner’s Office – overseeing AI with reference to five core principles: safety, transparency, fairness, accountability and the ability of newcomers to challenge established AI players.

“AI is moving fast, but we have shown that humans can move just as fast,” said Technology Secretary Michelle Donelan. “By adopting an agile, sector-specific approach, we have started to get a grip on the risks immediately, which in turn paves the way for the UK to become one of the first countries in the world to reap the benefits of AI safely.”

The government is also expected to confirm that discussions between copyright holders and technology companies over the processing of copyrighted materials to create AI tools have not reached an agreement. The Intellectual Property Office, the government agency responsible for overseeing the UK's copyright regime, had attempted to draw up a code of practice but failed to reach agreement. The breakdown in negotiations was first reported by the Financial Times.

The use of copyrighted content to create AI tools such as chatbots and image generators, which are “trained” on large amounts of data mined from the internet, has become one of the most legally contentious aspects of the rise of generative AI – the term for technology that instantly produces convincing text, images and audio from typed prompts.

Matthew Holman, a partner at the UK law firm Cripps, said: “Ultimately, AI developers need clarity from the UK government on how they can safely carry out data collection and system training without being constantly exposed to the risk of copyright claims from countless rights holders.

“At the same time, copyright owners need help protecting their valuable intellectual property, which is routinely copied without permission.”


