Bletchley Park AI Summit: Global Coordination and Smart Lobbying
6 November 2023
The Bletchley Park Summit brought together global leaders and industry giants to discuss AI regulation. The British government aims to establish a dedicated institute with global reach, at a time when regulatory initiatives are multiplying around the world, now particularly in the US. The organisers chose to focus the agenda on the most extreme risks associated with so-called ‘frontier’ models: existential risks and the threat of losing control over models that would break free from human oversight. However, a multitude of issues needs to be addressed, not least the much more general challenge of making models more reliable and the potential for manipulation on all fronts, which preoccupies policymakers around the world.
An Agenda Inspired by the Big Tech Perspective
The agenda of this summit could seem biased towards the industry giants, who readily responded to the call. They are developing massive models that, while dominating the sector, are not always based on the most advanced AI techniques. Big Tech players would prefer regulation to focus more on the apocalyptic risks that ‘frontier’ innovations pose to humanity, and less on their own models. The idea is to centre regulation on licensing systems that would also slow down competition from emerging players, especially those from the open-source community, without hindering the expansion of more established names.
Digital giants are obviously more reluctant to embrace the more detailed attempts at regulation being considered around the world. These, understandably, focus not only on the existential risks of the technological ‘frontier’, but also on the vulnerabilities of existing models. In an apparent paradox at the beginning of the summer, Sam Altman strongly advocated AI regulation before the US Congress, which had been slow to take up the issue, pointing to existential risks; a few days later he threatened to desert Europe in the event of broader regulation. He then announced OpenAI’s first European deployment, in the UK.
Political Fragmentation and the Race to Regulate
The British government has been keen to bring in a wide range of governments across geopolitical fault lines, most notably China. Beyond this international coordination, there is also a desire to attract large corporations to the UK for their European investments. The symbolism of Bletchley Park (where Alan Turing’s team cracked the Enigma code) gives this strategy historic appeal in the face of Washington’s expanding regulatory initiatives and the bureaucratic approach of the European Parliament. Some of the governments invited to this exercise in global governance nevertheless seem to have perceived its shortcomings and have tended to stay on the sidelines.
In the United States, the Biden administration, after some delay, is trying to speed up the process of risk management and transparency by means of executive orders. At the same time, Washington has distanced itself from the London initiative and the idea of a global institute to regulate AI, preferring the idea of a national body.
As for the European Union, it has taken the lead with its AI Act, which is still in the making but suffers from notable shortcomings, particularly its complexity and the fact that it was drafted on the basis of developments that preceded the explosion of large language models. Many governments, notably France and Germany, have begun to see this text as a threat to the development of AI in Europe and would like a more flexible and adaptive approach to regulation. However, this does not mean giving Big Tech a blank cheque by centring regulation on the existential risks of ‘frontier’ models.
Ensuring Security and the Emergence of New European Players
Europe possesses excellent skills in AI and a considerable pool of engineers and scientists who can readily embark on developing AI models for a wide range of applications. However, a lack of funding is significantly slowing its catch-up with the US, which in turn raises fears of long-term dependence on models developed there. Labyrinthine regulation risks widening this gap.
The role of open source is booming on the AI scene, allowing a wide range of players to enter the race for AI applications and model re-appropriation. This is the most direct way for Europe to position itself, and an easier one than trying to develop entirely new giant models from scratch, although that challenge remains just as crucial, not least for reasons of technological autonomy. It is therefore in Europe’s interest to offer clear rules that can be adapted to the risks involved, in line with actual technological developments, and to lower barriers to entry as much as possible.
It is essential for all economic areas that regulation guarantees the security of models and fosters the emergence of new, innovative players, rather than entrenching the established positions of industry giants, which, given the shortcomings of their own models, cannot by themselves guarantee security.