Developers working on artificial intelligence should be licensed and regulated in the same way as the pharmaceutical, medical or nuclear industries, according to a representative of the UK’s opposition Labour Party.
Lucy Powell, a member of Parliament and digital spokesperson for the UK Labour Party, told the Guardian on June 5 that companies like OpenAI or Google that have created AI models should “have a license to build those models”, adding:
“My real concern is the lack of any regulation of large language models that can then be applied to a range of AI tools, whether it’s governing how they’re built, how they are managed or how they are controlled.”
Powell argued that regulating the development of certain technologies is a better option than banning them outright, pointing to the European Union’s ban on facial recognition tools.
She added that AI “can have many unintended consequences,” but that some of those risks could be mitigated by the government if developers were required to be open about their AI training models and datasets.
“This technology is changing so rapidly that it requires an active and interventionist government approach, rather than a laissez-faire approach,” she said.
Before speaking at the TechUk conference tomorrow, I spoke to The Guardian about Labour’s approach to digital technology and AI https://t.co/qzypKE5uJU
— MP Lucy Powell (@LucyMPowell) June 5, 2023
Powell also believes such cutting-edge technology could have a huge impact on the UK economy, and that Labour is in the process of finalizing its own policies on AI and related technologies.
Next week, Labour leader Keir Starmer plans to hold a meeting with the party’s shadow cabinet at Google’s UK offices so he can speak with its AI-focused executives.
Related: EU officials want all AI-generated content tagged
Meanwhile, on June 5, Matt Clifford, chair of the Advanced Research and Invention Agency – the government research agency set up last February – appeared on TalkTV to warn that AI could become a threat to humans in as little as two years.
EXCLUSIVE: Matt Clifford, adviser to the Prime Minister’s AI Task Force, says the world may have just two years left to tame artificial intelligence before computers become too powerful for humans to control.
— TalkTV (@TalkTV) June 5, 2023
“If we don’t start thinking now about how to regulate and think about safety, then in two years we will find that we have systems that are very powerful indeed,” he said. Clifford clarified, however, that a two-year timeframe is “the bullish end of the spectrum.”
Clifford pointed out that today’s AI tools could be used to help “launch large-scale cyberattacks.” OpenAI has offered $1 million to support AI-assisted cybersecurity technology to thwart such uses.
“I think there are a lot of different scenarios to worry about,” he said. “I certainly think it’s fair that it should be very high on policymakers’ agendas.”