AI: Enigma at Bletchley Park

Next week, the UK will host the AI Safety Summit. Bletchley Park, famed as the home of World War Two codebreakers, will provide the backdrop for the first multilateral meeting on artificial intelligence. Ahead of the conference, countries are rushing to draft initial legislation. The EU wants to announce its AI Act before the end of this year. China announced a “Global AI Governance Initiative” on 18th October. Meanwhile, US President Joe Biden is set to announce an AI executive order on Monday, ahead of Vice President Kamala Harris’s UK visit, where she will represent the US at the summit.


The problem is that no one is yet sure what they are regulating or how to do so. Experts are divided on how advanced current technologies are. Meta’s Yann LeCun believes development is still in its infancy. He says regulating AI models now would be like trying to regulate jet aviation before jet aircraft had even been invented. At the other end of the spectrum, the computer scientist Eliezer Yudkowsky says the only way to deal with the AI threat is to shut it down now. Between these extremes lies a range of views on how to mitigate the threats while harnessing AI’s potentially massive benefits. Policymakers, some of whom were scarcely aware of these developments before ChatGPT thrust AI into the public consciousness, will have to weigh up several issues as they chart a way forward.


Firstly, what technology can companies commercialise? Meta has been the exception amongst big tech in supporting more accessible open-source AI systems. LeCun argues that these stimulate competition and enable more people to build and use programs. He alleges that his more cautious rivals, Google and the Microsoft-backed OpenAI, champion onerous regulations to create barriers to entry. A more generous interpretation is that these companies fear being forced to release reckless models if start-ups are doing the same. Corporations also realise that, given the media frenzy around AI this year, some form of regulation is inevitable. It is in their best interests to help shape any new laws.

While these model-makers back regulation, they want it to be narrow. Google, Microsoft, OpenAI and Anthropic have formed the Frontier Model Forum. OpenAI defines frontier models as “large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks.” The forum argues that new regulations should be limited to models meeting this definition. 


This future regulatory landscape may differ across borders. Governments are keenly aware of the economic gains to be realised through a more laissez-faire approach. British PM Rishi Sunak says he will not “rush to regulate” the industry. His approach contrasts with the EU’s more heavy-handed one. In a boon to Brexiteers, Facebook co-founder Dustin Moskovitz said it was “better that the UK is out of the EU” and that he was “far more concerned about regulatory friction” in the European bloc. 

But it’s difficult to see how regulations can be effective without common international standards. A hacker who creates a bioweapon in one country can cause devastation much further afield. Even political disinformation cannot be confined to national borders: if nefarious actors can create deepfakes anywhere, they can post them on global platforms. Allegations from recent elections show this issue is already very real.


Governments will also need to consider the wider effect on the working population. AI needs to benefit the labour force, not just corporations. Jonnie Penn speaks about the importance of “bottom-up” AI regulation. The Cambridge academic warns that tasks some see as menial and replaceable may be important to the workers in question. We should not assume that increased automation will be widely welcomed. The journalist Rana Foroohar points to the Hollywood writers’ strike as a successful example of this approach. Writers won an agreement on how and when the entertainment industry can use AI. Foroohar argues that “workers who have an everyday experience with the new technology are in a good position to understand how to curb it appropriately.”


Beyond these more tangible concerns is the question of whether AI development constitutes an existential threat. We hear hypothetical examples of models asked to stop climate change or cure cancer that proceed to exterminate humans, seeing this as the most efficient way to carry out the request. Quite how this would happen is still unclear, though it’s suggested such systems might find a way to poison water supplies or release a deadly virus. Techno-optimists like the aforementioned LeCun dismiss these doomsday scenarios. He argues we can encode “moral character” into these systems in the same way as we enact laws to govern behaviour. Even thinkers like Gary Marcus, far more sceptical of AI’s positive impact, see ‘terminator’ scenarios as outlandish.

Others take the threat very seriously. Elon Musk refers to AI as a “civilisational risk”. LeCun’s fellow Turing Award winners, Geoffrey Hinton and Yoshua Bengio, also believe there are profound dangers ahead. Bengio has called for a “humanity defence organisation” to protect against an AI system that has its “own goals”. Hinton, the so-called ‘Godfather of AI’, made headlines in May when he left Google so he could speak freely about his grave concerns. Mo Gawdat, a former chief business officer of Google X, says that “AI is beyond an emergency” and its threat is “bigger than climate change”.


This is the salient issue in deciding how deep and immediate incoming regulation needs to be. If these systems are still truly in the early stages of development, a light-touch approach likely suffices: the negative externalities remain economic and social, and can be monitored. If, however, the doomsters are right, stringent and binding international regulation is required now. Differing views will be aired at Bletchley Park next week. Actions in its aftermath will show which way policymakers are leaning.


