Bletchley declaration on AI safety a good start
Ordinary people should have input
It is vitally important that democratic governments play a bigger role in shaping AI's future. Prof John Tasioulas, director of the Institute for Ethics in AI at the University of Oxford, Prof Hélène Landemore from Yale University and Sir Nigel Shadbolt, professorial research fellow in computer science at the University of Oxford, examine the issue.
Earlier this month, the UK government held the first AI (artificial intelligence) Safety Summit in the historically resonant setting of Bletchley Park, home to the legendary Second World War codebreakers led by the computing genius Alan Turing.
Delegates from 27 governments, heads of the leading AI companies and other interested parties attended the meeting.
It was convened to address the challenges and opportunities of this transformative and fast-evolving technology.
But what, if anything, did it achieve?
Cooperation vital
Decisions about the development of AI are overwhelmingly in the hands of the private sector, especially the tiny number of big tech companies with access to the vast stores of digital data and immense computing power needed to drive technological progress.
This technology has great potential to enhance areas such as education, health care, access to justice, scientific discovery and environmental protection.
If it is to do so, and do it in a responsible way, it is vitally important that democratic governments play a bigger role in shaping AI’s future.
Since many challenges posed by AI regulation cannot be addressed at a purely domestic level, international cooperation is urgently needed to establish basic global standards that mitigate the direst consequences of an AI “arms race” between countries.
Such a race could hamper efforts to encourage responsible technological development.
Salient risks
The summit was very welcome, but the announcement that it would be centred on a theme of AI “safety” sparked concerns that it would be dominated by the agenda of a vociferous group of scientists, entrepreneurs, and policymakers.
They have put the “existential risk” posed by these technologies at the heart of discussion about AI regulation (setting rules). The existential risk they are referring to is the idea that sophisticated AI could cause the extinction of humanity.
We do not dismiss the possibility of AI running amok. However, we had two main difficulties with the framing of the event as a “safety” summit.
First, the existential threat from AI is given exaggerated significance relative to other existential risks, such as climate change or nuclear war.
It also receives excessive attention relative to other AI-created risks, such as algorithmic discrimination against people, unemployment caused by AI replacing jobs, the detrimental environmental impact of the huge data centres needed to supply computing power, and the subversion of democracy through the spread of misinformation and disinformation.
Second, making “safety” the overarching theme risked presenting AI regulation as a set of technical problems to be solved by experts in the tech industry and government.
This risked playing down the wide-ranging democratic deliberation that is needed, involving all those affected by these technologies.
Suitable framing
In the event, these worries were somewhat misplaced.
The “Bletchley declaration” on AI unveiled at the summit encompasses not only avoiding catastrophe or threats to life and limb, but also priorities such as protecting human rights and advancing the UN Sustainable Development Goals.
In other words, a summit on “safety” ended up invoking pretty much all the issues upon which AI might have an effect.
The declaration was signed by all 27 countries attending, including the UK, the US, China, and India, as well as the European Union.
Hopefully, this amounts to de facto recognition that the “existential risk” framing was unduly restrictive. In retrospect, the talk of “safety” provided a politically neutral banner under which different factions across industry, government, and civil society could converge.
Burning question
But a major question is how the values identified in the declaration are to be interpreted and prioritised.
As regards these AI-related values, the document says “the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed”.
This is a highly unstructured list of concerns.
Isn’t privacy part of human rights? Ethics surely includes fairness. And human oversight, unlike the other items on the list, might best be described as a process rather than a value.
Symbolic value?
As such, the declaration’s value may be largely symbolic: it signals political leaders’ awareness that AI poses serious challenges and opportunities, and their preparedness to cooperate on appropriate action.
But heavy lifting still needs to be done to translate the declaration’s values into effective regulation.
The process of translation requires informed and wide-ranging democratic participation. It cannot be a top-down process dominated by technocratic elites.
Historically, we know that exerting democratic control is the best way of ensuring that technological advances serve the common good rather than further augmenting the power of entrenched elites.
Positive developments
On the more positive side, a new UK AI Safety Institute was announced at the summit, which will carry out safety evaluations of frontier AI systems.
Also announced was the creation of a body, to be chaired by the leading AI scientist Yoshua Bengio, to report on the risks and capabilities of such systems.
The agreement of those companies in possession of such systems to make them available for scrutiny is especially welcome.
But perhaps the summit’s biggest achievement was that it brought China into the discussion despite predictable protests from hawks. A key challenge for democratic states is that of deciding how to cooperate with nations whose buy-in to global norms on AI is essential, but which are not themselves democracies.
Public participation
Another key challenge is for governments to nurture public consideration of the issues while also drawing on technical expertise.
This expertise should include leading researchers employed by big tech. But these experts should not be permitted either to dictate the values that AI technology should serve or to decide how those values are prioritised.
In this regard, UK prime minister Rishi Sunak’s near hour-long interview with high-profile summit attendee Elon Musk may have served to exacerbate a sense that the tech sector was over-represented relative to civil society.
The summit highlighted two fundamental questions, the answers to which will be decisive in shaping the future of AI.
The first is, to what extent will states be able to regulate AI development? The second is, how will genuine deliberation by the public and accountability be brought into this process? – The Conversation