Saturday 9 December 2023

Artificial Intelligence

I haven’t got much of a clue about Artificial Intelligence (AI), these days mostly encountered in the form of Large Language Models, although I keep reading about it and I’ve got an account with OpenAI, the outfit that created ChatGPT. I tend to use Microsoft's Bing search engine, which includes an element of AI. My impression is that the results often look impressive, but to provide its answers it draws on whatever human sources it finds, not necessarily the most accurate or authoritative ones, so the truthfulness is questionable. I’ve also seen how it can be manipulated to make people - particularly celebrities - appear to be saying things they’ve never said or would never say. From that point of view, it's very clever and dangerous.

I also note experts suggesting it will change the way we work and live, replacing jobs and increasing productivity and I've no reason to doubt it. There is the usual debate that follows in the wake of any new, cutting-edge technology about how it can and should be regulated.

Yesterday the EU Parliament and European Council reached a provisional agreement on an Artificial Intelligence Act that aims to "ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high risk AI, while boosting innovation and making Europe a leader in the field. The rules establish obligations for AI based on its potential risks and level of impact."

Some uses are outright banned like the "untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases" along with "systems that manipulate human behaviour to circumvent their free will."

There are obligations for high-risk systems, with exemptions for law enforcement, and so-called 'guardrails' for general-purpose AI (GPAI) systems. These include transparency requirements: drawing up technical documentation, complying with EU copyright law, and disseminating detailed summaries of the content used for training.

Models that meet certain criteria will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the EU Commission on serious incidents, ensure cybersecurity, and report on their energy efficiency. MEPs also insisted that, until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.

Now, Brexit Britain was supposed to be in part about allowing us to become more ‘agile’ when it comes to new technologies and their regulation, to do things differently and by implication, faster and better. In October Rishi Sunak held a summit in London with many of the big names in AI and gave a speech at The Royal Society.

Sunak made a point of not wanting to regulate too soon:

"And only nation states have the power and legitimacy to keep their people safe. The UK’s answer is not to rush to regulate.

"This is a point of principle – we believe in innovation, it’s a hallmark of the British economy… so we will always have a presumption to encourage it, not stifle it. And in any case, how can we write laws that make sense for something we don’t yet fully understand?

"So, instead, we’re building world-leading capability to understand and evaluate the safety of AI models within government."

He wants the UK to be a "global leader in safe AI" and by doing so "attract even more of the new jobs and investment that will come from this new wave of technology."

He may well turn out to be right, but I don't believe that waiting and then, at some future date, trying to produce our own regulations independent of the EU, the world's pre-eminent regulator, will benefit this country - or work at all.

Sooner or later Britain will have to fall in line with Europe because the tech companies involved in AI (and a lot of the small ones will fall by the wayside) will have to take account of the EU's regulatory framework simply because of the size of the single market. Even if we allow them to innovate or experiment here, eventually they and we will succumb to the Brussels effect - the tendency of EU rules to become de facto global standards.

How much better would it have been for the UK to influence those regulations from the outset?

The EU's co-rapporteur Dragos Tudorache said: “The EU is the first in the world to set in place robust regulation on AI, guiding its development and evolution in a human-centric direction. The AI Act sets rules for large, powerful AI models, ensuring they do not present systemic risks to the Union and offers strong safeguards for our citizens and our democracies against any abuses of technology by public authorities. It protects our SMEs, strengthens our capacity to innovate and lead in the field of AI, and protects vulnerable sectors of our economy. The European Union has made impressive contributions to the world; the AI Act is another one that will significantly impact our digital future”.

The other rapporteur Brando Benifei (S&D, Italy) said: “Correct implementation will be key - the Parliament will continue to keep a close eye, to ensure support for new business ideas with sandboxes, and effective rules for the most powerful models”.

The 'sandboxes' he refers to are controlled environments in which innovations can be tested and refined before being rolled out more widely.

Sunak is adopting the US approach of letting big tech rip (to use BoJo's phrase) and fixing any damage later, if we can. I am not sure the companies themselves find that approach helpful anyway. It must be better to design systems to comply with democratically set rules - even if they're limiting in some way - than to spend millions of dollars developing software that is later banned.

Sunak said last month that "if harnessed in the right way, the power and possibility of this technology could dwarf anything any of us have achieved in a generation. And that’s why I make no apology for being pro-technology. It’s why I want to seize every opportunity for our country to benefit in the way I’m so convinced that it can."

I fear it's another gamble we are going to regret.