Notice, consent, harm
EVERY second, thousands of people tap ‘I agree’ without reading a word. The law is satisfied. The platform is protected. The user is not. Consent, as it operates in the digital environment today, is not a legal safeguard. It is a legal loophole. The legal condition is met the moment you are shown a privacy notice and click agree; business continues as usual, and the post-consent digital harm follows quietly and legally.
People do not agree on much these days. But one thing that unites them is a sense of digital exhaustion, of scrolling through mindless content. Even when they want to stop, the system is designed to manipulate them and keep them glued to their screens. The most they can do is protest on the same social media platforms before being swept away in a fleeting storm of reels.
People are lonelier and more hopeless than ever before. Technology is the predominant reason. Everyone is desperate to understand why today’s youngsters seem unhappier than the generations that came before. Marc Zao-Sanders’ study in the Harvard Business Review found that therapy and companionship had become the single most common use of generative AI in 2025, surpassing coding, research and content creation combined. People are not just scrolling, they are seeking a connection with machines because they cannot find enough of it elsewhere. We have moved, quietly and without deliberate choice, from an attention economy to an attachment economy. This is not a technology story. It is a loneliness story.
To address digital harms, governments have reached for the most irrational tool available. Australia outlawed social media accounts for children under 16. Britain, France, Spain and dozens of other countries are moving in the same direction. Yet a blanket ban is not a solution; it is a relocation. Children do not disappear from the internet; they migrate to encrypted messaging apps and gaming platforms — spaces with less oversight and more danger. Bans also create a cliff edge, where children who have never been taught to navigate social media are suddenly let loose on unfiltered platforms the moment they turn 16. We are not teaching children to swim. We are ensuring they drown somewhere less visible.
The real flaw lies in the legal architecture underpinning all of this. Today, privacy is not about being private but about regulating relationships of power, and consent does not meaningfully constrain powerful actors such as corporations and state authorities. The disconnect between legal assumptions and social reality is particularly acute in Pakistan, where no comprehensive data protection statute exists and courts have acknowledged that millions may be subjected to warrantless surveillance through centralised interception systems. In the corporate domain, the absence of oversight exposes individuals to identity theft, financial fraud and pervasive misuse of personal information. There is little incentive for the state to regulate data privacy when it too receives copies of that data and benefits from the same surveillance infrastructure it ought to restrain.
It is unclear whether, in practice, there is any meaningful consent at all. Consent in privacy law rests on the false assumption that digital transactions resemble traditional two-party commercial exchanges. In reality, digital platforms involve extensive third-party data collection beyond users’ knowledge or control. Even in lopsided standard-form contracts, the parties at least know what they are exchanging, in what legal scholars call a ‘meeting of minds’. That meeting does not happen here. Individuals do not know what they are giving up, and what is being taken can be changed unilaterally, without notice.
Human-computer interaction has evolved beyond what existing law was built to govern, and the law must be rebuilt accordingly. Ignacio Cofone argues for shifting towards a harm-based liability framework, requiring corporations to be held accountable for the consequences of their data practices, not for the checklists they complete or the notices they send. Liability should be grounded in a duty not to harm. Such a model shifts attention from what individuals supposedly agreed to towards what the consequences of certain data practices actually are. Entities that design, control and profit from data infrastructures must bear responsibility for the risks they generate.
This shift may already be underway. An American jury recently concluded, for the first time, that a social media platform can be held liable as a defective product for how it was designed, not for the content it hosts. The verdict’s wider legal effect remains to be seen. But the direction is clear. Consent has failed. Accountability is the only honest substitute.
The writer is a lawyer based in Karachi.
sufiyan7576@gmail.com
Published in Dawn, April 17th, 2026
