How Artificial is Human Intelligence -- and Humanity? Consideration of AI Safety versus Safety from Human Artifice {PT2}

Dangerous development of AI?

Safety of AI? As implied above, it is appropriate to ask how "artificial" the collective intelligence evident at the AI Safety Summit proved to be -- and how this might best be understood in the light of warnings immediately before the event (Sunak’s global AI safety summit risks achieving very little, warns tech boss, The Guardian, 20 October 2023; Editorial: Why the UK-led global AI summit is missing the point, Nature, 31 October 2023).


Following the questionable pattern of many international declarations of the past, attention will necessarily focus on the Bletchley Declaration signed at the AI Safety Summit:


Alex McFarland: What is the Bletchley Declaration Signed by 28 Countries? (Unite.ai, 3 November 2023)

Chris Stokel-Walker: UK AI summit: Countries agree declaration on frontier AI risks (New Scientist, 1 November 2023); UK AI summit: US-led AI pledge threatens to overshadow Bletchley Park (New Scientist, 2 November 2023)

Matthew Sparkes: UK AI summit is a 'photo opportunity' not an open debate, critics say (New Scientist, 1 November 2023)

Dan Milmo: Five takeaways from UK’s AI safety summit at Bletchley Park (The Guardian, 2 November 2023)

Kiran Stacey: ‘It’s not clear we can control it’: what they said at the Bletchley Park AI summit (The Guardian, 12 November 2023).

As noted by Matthew Sparkes:


The reality is that technology – just as it has always done – is outpacing legislation. And if the world’s law-makers at least got up to speed on the latest developments in AI at Bletchley Park this week, it is hard to imagine they won’t need a refresher course by the time they meet again, with the face of AI having transformed once more. While summits might offer photo opportunities and the chance for politicians to rub shoulders with the likes of Elon Musk, no amount of gatherings can solve the problem of innovation outpacing legislation. (What did the UK's AI Safety Summit actually achieve? New Scientist, 2 November 2023)


Sparkes concludes with the provocative speculation: Perhaps this meeting could have been a ChatGPT-generated email, and saved the carbon expenditure of jetting everyone in.


Relevance to global governance? Prior to the Bletchley summit, the UN Secretary-General announced the creation of a new High-level Advisory Body on Artificial Intelligence, with the following focus:


The Global AI Imperative: Globally coordinated AI governance is the only way to harness AI for humanity, while addressing its risks and uncertainties, as AI-related applications, algorithms, computing capacity and expertise become more widespread internationally.

The UN's Response: To foster a globally inclusive approach, the UN Secretary-General is convening a multi-stakeholder High-level Advisory Body on AI to undertake analysis and advance recommendations for the international governance of AI.

Calling for Interdisciplinary Expertise: Bringing together up to 38 experts in relevant disciplines from around the world, the Body will offer diverse perspectives and options on how AI can be governed for the common good, aligning internationally interoperable governance with human rights and the Sustainable Development Goals.

Curiously, the comments prior to the declaration (and thereafter) tend to be vague, especially with regard to both the risks and the opportunities of AI. Emphasis is placed on the "tremendous" opportunities by enthusiastic techno-optimists, matched by suggestions from others of dangers to humanity -- readily recognized as the fear-mongering typical of other crises.


Curiously missing is any emphasis on how AI might indeed enhance global governance, including the management of those risks and opportunities -- in the light of past inadequacies in response to crises, despite access to the highest relevant expertise. Specifically, it might be asked why no use was apparently made of AI to enhance the AI Safety Summit itself and the formulation of the Bletchley Declaration. Somewhat ironically, there is no indication of how AI will contribute to the working of the UN High-level Advisory Body on Artificial Intelligence -- or how it might undermine that work, as a feature of the risks so strongly anticipated.
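
To make the omission concrete, the following is a minimal sketch -- purely illustrative, using invented placeholder texts rather than any actual summit submissions -- of one unglamorous way such tools could have assisted the drafting process: clustering national position statements to surface thematic convergence and divergence before a declaration is formulated. The country names, statement excerpts, and the choice of two clusters are assumptions made solely for the example.

# Illustrative sketch only: cluster hypothetical national position statements
# to surface shared themes. All texts and names below are invented placeholders,
# not actual summit documents.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical excerpts standing in for national submissions
statements = {
    "Country A": "Frontier models require pre-deployment safety evaluation and incident reporting.",
    "Country B": "Open research access and capacity building matter more than licensing regimes.",
    "Country C": "Misuse by organised crime and hostile actors is the primary near-term risk.",
    "Country D": "Licensing of frontier model training above a compute threshold is essential for safety.",
}

names = list(statements)
texts = list(statements.values())

# Represent each statement as a TF-IDF vector over its vocabulary
vectors = TfidfVectorizer(stop_words="english").fit_transform(texts)

# Group statements into two thematic clusters (k chosen arbitrarily here)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    members = [name for name, label in zip(names, labels) if label == cluster]
    print(f"Theme {cluster}: {', '.join(members)}")

Even so simple an exercise could make explicit which concerns a declaration risks glossing over -- before the drafting rather than after.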


How might AI enhance interactions at the UN Summit of the Future planned for 2024 -- in the light of the UN Secretary-General's vision for the future of global cooperation, as presented in the report Our Common Agenda (2021)?


Challenges for the UN system? It is appropriate to note that the UN system has been endowed with an International Computing Centre (UNICC) since 1971, created as an inter-organization facility to provide electronic data processing services. It was established by a Memorandum of Agreement among the United Nations (UN), the United Nations Development Programme (UNDP) and the World Health Organization (WHO), pursuant to resolution 2741 (XXV) of the United Nations General Assembly.


There has been relatively little trace of the involvement of the UNICC in AI, despite its participation in the AI for Good Global Summit series organized by the ITU:


Artificial Intelligence for Social Good (UNICC, 13 August 2018)

AI for Health – UNICC’s Digital Trust Presentation at ITU (UNICC, 11 February 2019)

Accelerating Sustainable Development with Hyperautomation (UNICC, 18 December 2020)

A Strengthened Threat Intelligence Posture for the UN Family (UNICC, 14 December 2020)

Of potential relevance is the lack of recognition of the long-standing challenge posed by the relationships among UN agencies, and with variously associated bodies, as documented at the time of the creation of UNICC at the International Symposium on the Documentation of the United Nations (Acquisition and Organization of International Documentation, 1972). The introductory report to the Second World Symposium on International Documentation (1980) was republished under the title Societal Learning and the Erosion of Collective Memory (1980). It is appropriate to ask whether such challenges -- typical of problematic inter-agency dynamics at all levels of governance -- will be effectively addressed through the UN's new advisory body, whether or not it is assisted by AI.


"Dark AI" and globally organized crime? Missing from the current enthusiasm for institutional safeguards and regulation is any recognition of how AI may be exploited by constituencies skilled in the evasion of regulatory nets -- most obviously organized crime. This naivety is evident in the light of the development of the Dark Web, thereby suggesting the emergence of a "Dark AI":


Margarita Del Val: Dark AI tools: How profitable are they in the underground ecosystem? (Outpost24, 15 August 2023)

Tim Keary: The Future of Dark AI Tools: What to Expect Next? (Techopedia, 1 September 2023)

Mark Minevich: How To Combat The Dark Side Of AI (Forbes, 28 February 2020)

Daniel Iwugo: What is Dark AI? (Cybersecurity Education Initiative)

Such possibilities are evident only in the summit's preoccupation with cybersecurity -- ironically matched by massive investment in cyberwarfare technology by those preoccupied with AI safety.


Given the role of intelligence in the "Great Game" of the 19th century, there is a case for a speculative reformulation in which the new variant is to be recognized as the "game" of humanity with AI, rather than as previously framed (Playing the Great Game with Intelligence: Authority versus the People, 2013).
