Amidst competitive global politicking, as the world struggles to contain the toxic legacies of 2023 — cruel regional wars, civic conflicts and undeniable stories of anthropogenic harms — technocrats, ecocrats, and bureaucrats across the world continue to devise and launch quiet initiatives portending a better and larger future for human rights.

One such initiative led, in early October 2023, to the Report of the High-Level Committee on Programmes and the High-Level Committee on Management joint session on the use and governance of AI and related frontier technologies hosted, significantly, by the United Nations Children’s Fund (UNICEF). Of course, the UN system had already begun its work on “frontier technologies” and artificial intelligence (AI), including the development, in 2019, of a United Nations system-wide strategic approach and road map for supporting capacity development of AI.

The principles for the ethical use of AI in the United Nations system were adumbrated in the famous UNESCO declaration, which prescribes many values and principles, chief among them the following: Respect, protect and promote human rights and fundamental freedoms and human dignity; ecological sustainability; diversity and inclusiveness. These range across eleven areas of specific concern that include good governance and just development. A “system-wide normative and operational framework on the use of AI in the United Nations system, based on these principles for the ethical use of AI” was finally recommended.

Important though these are for humanistic future development and applications of AI, they would be sheer flights of fancy if further work on this code were to ignore all talk of “digital sovereignty” and performances of digital diplomacy. In one way or another, the principle of territorial sovereignty is slowly but surely being transformed into that of digital sovereignty. Transborder, multilevel governance of AI is at the very heart of corporate governance, and sovereignty over peoples and nations is being transformed into masses of accumulated classified data. Disinformation, misleading information and even hate speech are the order of the day, and the big question now is how to prevent these evils in governance and development so that some truth and accountability are ensured.

Karl Manheim and Lyric Kaplan, in a 2019 article in the Yale Journal of Law and Technology, portray in some macabre detail the threats “fuelled by growing deployment” of AI tools that lead to “[manipulation of] the preconditions and levers of democracy” and “threats to decisional and informational privacy”. AI “is the engine behind Big Data Analytics and the Internet of Things.” While some consumer benefit ensues, their “principal function at present is to capture personal information, create detailed behavioural profiles and sell us goods and agendas.” Privacy, anonymity and autonomy remain the “main casualties of AI’s ability to manipulate choices in economic and political decisions” to the extent that, unless determined steps at global, regional, and national levels are taken now, privacy and democracy will rapidly become wonders of the past!

The present digital wars between the US and China in fact play out among three different “digital empires”, in complicity as well as collision, as Anu Bradford analyses in a book-length study of the law and regulation regimes of China, the US and the EU (Digital Empires, 2023). She shows that the free digital model of the US, which amounts to complete freedom for the AI industry (the techno-optimistic model), revives the models of free speech and open markets, leaving form and content entirely open to free market forces. Free market fundamentalism has nurtured the growth and global eminence of the social media industry which (according to the Business Research Company) rose from $193.52 billion in 2022 to $231.1 billion in 2023, and is expected to grow to $454.37 billion in 2027.

All this techno-optimism run “wild” is yielding to the appeal of an “authoritarian” model of regulatory reach, based on state surveillance and hegemony over private AI companies. The Chinese state-driven regulatory model is on “the ascent worldwide, leading to growing concern in the US, the EU, and the rest of the democratic world about the implications of that ascent.”

The worry that “China’s regulatory model will prevail” is real, both normatively and descriptively, because while China’s technological development is impressive, its way of “harnessing that technology is often deeply oppressive”. The Chinese state-driven model also “appeals to many developing authoritarian countries” because it “combines political control with tremendous technological success”. In contrast, the very few actually existing “democratic” societies seem to prefer the EU model, seen as providing the “necessary building blocks of a more equitable and human-centric digital economy.” The EU Declaration on Development of November 22, 2021, privileges a human rights-based approach to development, postulating respect for human rights as “a precondition for the achievement of inclusive and sustainable development”.

Bradford reminds us of the promise and the uncertainty of the technopolitical future: it remains open “whether surveillance capitalism, digital authoritarianism, or liberal democratic values will prevail as a foundation for human engagement and for our society as we advance further into the digital era”.

We would need to go far afield to even begin to look at the uses of AI technologies for war or terror purposes or perspectives. But AI has now irreversibly “revolutionised” warfare. The use of unmanned lethal autonomous weapons systems, abbreviated in an unconscious irony by the US Defence Department as LAWs, illustrates complete machine-learning dependence and dehumanisation of the means of warfare, setting back the project of international humanitarian law. The overall project of “humanising” AI applications in all contexts, civil or military, must continue lest, as the poet T S Eliot said in The Waste Land, we lose it in the “awful daring of a moment’s surrender which an age of prudence can never retract.”

The writer is professor of law, University of Warwick, and former vice chancellor of Universities of South Gujarat and Delhi
