A recent report by the MIT Technology Review painted a worrying picture of the state of the Internet in 2023.

The report covered the mushrooming of low-quality junk websites filled with algorithmically generated text, flooding the web with “content” that drowns out any meaningful material. This is a significant moment in the history of technology.

Our digital spaces, once heralded as revolutionary, are collapsing, and the advent of AI is likely to quicken that collapse if we fail to grasp the scale of what is happening.

First, it is important to note that the current hype surrounding AI is more marketing than actual science: most developments in machine learning have been under way since at least the 20th century, and their applications have long been part of our lives without most people ever noticing them.

In 2023, when anything and everything from WhatsApp to smartphone cameras to Coca-Cola is being enhanced with AI, one can’t help but wonder whether the term even means anything anymore. And did we ever ask for AI to be integrated into our lives in this way?

Tech platforms are now filled with unnecessary AI features. At the same time, platforms once known for being user-friendly have suddenly increased the frequency of ads to frustrate users into “buying premium”, and have found other ways to inconvenience them, such as Netflix’s recent ban on password sharing.

Technology writer and activist Cory Doctorow has coined the term “enshittification” to describe how platforms that start off user-friendly steadily degrade over time.

Social media users are today’s real lab rats – the psychologist B.F. Skinner would be proud. They are constantly at the mercy of manipulative software designed to extract attention and “engagement” every minute through notifications and like/follow buttons, which reward hateful and controversial content over anything meaningful and nuanced.

Most of our endless, algorithmically curated feeds are covert advertising.

The widespread adoption of the “infinite scroll” should have been a warning sign for everyone concerned about the harmful effects of social media, and even its creator regrets developing it (an Oppenheimer moment, perhaps), but it may be too late. It has since morphed into something far worse: “short videos”.

Popularised by TikTok and quickly copied by Instagram, Facebook, YouTube, and nearly every other app, the “short video” format can only be described as vicious. It is explicitly designed to hook users on a stream of audio-visual “content” that never ends, served up with that favourite phrase of the tech companies: ‘suggested for you’. The “content” is almost always bite-sized, random, decontextualised clips of film, music, sound, image and text smashed against one another, much of it consumed (and then forgotten) because it is “relatable”.

In isolation, a single clip may be funny or pretty (or whatever the intended reaction is). Taken as a whole, however, the endless stream is formless and shapeless, yet over time unimaginably destructive to attention spans. With things already in such bad shape, the onslaught of AI-created “content” is likely to be catastrophic: “generative” AI may well prove “degenerative” for human critical thinking.

Also read: AI and the Threat of Living in an It Cell Simulation

It gets worse when one looks at the statements and values of those pushing AI everywhere, particularly into the fields they wish to “revolutionise”. The CEO of OpenAI, who toured the world to promote his company’s products, has spoken of his hope to build AI that can replace the “median human” – a degrading term for what we would call the average person, and a mission statement that should alarm anyone already worried about AI’s impact on jobs. He has also suggested in a tweet that AI could be used to provide healthcare services “to those who can’t afford it” – an appallingly insensitive and ignorant statement, since it essentially means offering poor people chatbots instead of actual healthcare.

An OpenAI employee compared her ‘emotional, personal’ conversations with ChatGPT to therapy, drawing widespread criticism. The criticism is apt: such fanciful notions were first voiced in the 1960s by people amazed at having conversations with ELIZA, an early chatbot.

All of this is in addition to the long list of harms documented by critical tech scholars such as Kate Crawford: (i) the large amounts of water and electricity consumed to run these systems, alarming given the rapidly growing threat of climate change; (ii) the scale of personal data harvested for surveillance and the costs of building datasets; (iii) the biases in those datasets that produce problematic outputs; (iv) the inherently political choices of “classification” and “categorisation” involved in building a dataset in the first place; (v) the traumatic experiences of the invisible workers who “clean” AI of the Internet’s violent and hateful content so that systems like ChatGPT can appear “friendly”; and (vi) the scraping of writers’ and artists’ work to build datasets without their consent.

These are just a few examples, but it is clear that in attempting to solve social problems, AI mostly makes things worse. These are the real threats, not the fear of “killer robots rising”, a fantasy scenario that distracts from the serious harms outlined above. Actual AI is far more likely to amplify the existing faultlines in society, at every level.

The Internet, already overrun with websites full of pop-ups, invasive ads, and aggressive “search engine optimisation”, is likely to become nearly unusable. Meanwhile, social media platforms are collapsing under challenges of their own.

Google Search has been shown to present incorrect, AI-generated answers as search results, potentially misleading millions of people in dangerous ways. Advanced AI media generators are likely to flood digital spaces with convincing fake videos of human beings with realistic faces and voices, threatening to make literally everything on the Internet unreliable beyond a certain point – a prospect with deeply worrying real-world political consequences.

Also read: What India Should Remember When It Comes to Experimenting With AI

This is the current state of tech platforms, of social media, and of the Internet as a whole. But it is also a critical moment, one that forces us to ask questions we should have asked long ago. Is the current trajectory of technological development sustainable? Can we demand that it be made more democratic? Can we choose to abandon problematic platforms and support ones that deserve our attention? Can we make honest efforts to understand critical perspectives on technology and put them into practice?

Most importantly, are we aware that AI is going to have radical effects on our thoughts, ideas, and cultures on a global scale?

We need to ask ourselves how long we will keep passively accepting every new feature that tech companies force upon us, without questioning, critiquing, or outright rejecting them. Perhaps the AI era is a turning point: just when everything seems doomed, if we resolve to wake up from our collective slumber, the era of “artificial intelligence” might yet lead to an evolution in human intelligence.

Kaif Siddiqui is a PhD scholar at the NALSAR University of Law, Hyderabad.
