London mayor Sadiq Khan has intensified his criticism of social media giants, calling for tougher regulation to curb what he describes as an “outrage economy” driving division, abuse and misinformation online. Speaking amid mounting concern over the impact of digital platforms on public life and democratic debate, Khan argues that tech companies are profiting from algorithms that reward anger and extremism, while failing to protect users from harm. His intervention adds fresh pressure on ministers and regulators to confront the business models underpinning the world’s biggest platforms, and raises questions over how far governments should go in policing online speech without undermining free expression.
Mayor warns of rising real-world harms driven by online outrage economy
The London mayor has intensified his criticism of social media giants, arguing that their profit model is now directly linked to a surge in real-world intimidation, harassment and violence. According to City Hall officials, councils, schools and frontline services are reporting a sharp rise in incidents that begin with viral posts or doctored clips before spilling onto streets, into classrooms and even public transport. Khan’s team points to patterns in which controversial content is rewarded with higher visibility, creating perverse incentives for users who weaponise misinformation, racist tropes and conspiracy theories to gain followers and, in some cases, financial rewards. Officials say this dynamic is fuelling a climate in which public servants, campaigners and ordinary residents are increasingly targeted after being singled out online.
- Coordinated pile-ons that migrate from comment threads to doorstep demonstrations.
- Targeted hate campaigns against women, minorities and political opponents.
- Disinformation spikes around elections, protests and high-profile trials.
- Copycat incidents following viral clips of street confrontations.
| Online Trigger | Offline Outcome |
|---|---|
| Trending hate hashtag | Abuse of commuters on busy routes |
| Viral conspiracy video | Harassment of NHS and council staff |
| Edited protest footage | Spontaneous confrontations at rallies |
City Hall sources say the links between platform algorithms and offline harm are now “too clear to ignore”, with London’s police and community groups increasingly forced into a reactive posture. Khan is understood to be pressing for tighter obligations on tech companies to limit the reach of content that promotes hatred, threats or incitement, especially when directed at named individuals. That includes calls for rapid takedown mechanisms, better cooperation with law enforcement and stronger protections for elected representatives and public sector workers who find themselves at the centre of orchestrated storms. While acknowledging the importance of free expression, the mayor argues that current safeguards lag far behind the speed and scale at which outrage is being monetised, leaving cities to manage the fallout of a business model built on escalation.
Inside the algorithms that reward anger and polarisation on major platforms
Behind every furious quote-tweet, pile-on and viral wave of misinformation lies a complex stack of code quietly deciding what deserves attention. These systems are engineered to maximise engagement, and the metrics that matter most – clicks, comments, watch-time and shares – are disproportionately triggered by content that shocks or enrages. Platforms rapidly test millions of posts in real time, boosting those that keep users scrolling, regardless of whether they are constructive or corrosive. A snarky meme, a misleading headline or a heavily edited video clip that provokes moral outrage can outperform carefully researched journalism, not because it is more accurate, but because the algorithm has learned that anger is a reliable way to keep people hooked.
- Emotional intensity beats nuance and balance.
- Frequent posting is rewarded over thoughtful reflection.
- Conflict-laden content travels further than calm debate.
- Tribal signals – flags, slogans, hashtags – help the system target like-minded users.
| Signal | Typical Algorithm Response |
|---|---|
| High comment volume with arguments | Boosted to wider audiences |
| Rapid spike in angry reactions | Flagged as “highly engaging” |
| Short, shareable video clips | Prioritised in recommendation feeds |
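No platform publishes its ranking code, but the dynamic the table describes can be made concrete with a minimal sketch. Everything below is hypothetical: the signal names, weights and scoring formula are illustrative stand-ins for the proprietary engagement models being criticised, not any platform’s actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    """Hypothetical engagement signals attached to a single post."""
    post_id: str
    clicks: int
    comments: int          # heated arguments count exactly like praise
    shares: int
    watch_seconds: float
    angry_reactions: int   # strong negative emotion is still engagement

# Illustrative weights: the scorer has no notion of accuracy or tone,
# only of which signals have historically kept users on the platform.
WEIGHTS = {
    "clicks": 1.0,
    "comments": 4.0,        # conflict-laden threads generate many comments
    "shares": 6.0,
    "watch_seconds": 0.05,
    "angry_reactions": 3.0, # anger is engagement, so anger is rewarded
}

def engagement_score(post: Post) -> float:
    """Score purely on predicted engagement, blind to content quality."""
    return (
        WEIGHTS["clicks"] * post.clicks
        + WEIGHTS["comments"] * post.comments
        + WEIGHTS["shares"] * post.shares
        + WEIGHTS["watch_seconds"] * post.watch_seconds
        + WEIGHTS["angry_reactions"] * post.angry_reactions
    )

def rank_feed(posts: list[Post]) -> list[Post]:
    """Surface whatever scores highest; outrage and nuance look identical."""
    return sorted(posts, key=engagement_score, reverse=True)
```

Because an angry reaction or a hostile comment thread feeds the score exactly as approval does, a misleading clip that provokes a pile-on can out-rank careful reporting – which is precisely the loop researchers describe below.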
Researchers warn that this optimisation loop gradually tilts public discourse towards extremes: posts that frame issues as existential threats to one’s identity or community are more likely to be surfaced than those that invite compromise. In effect, the design of these systems can transform ordinary disagreements into performative battles, pushing creators and politicians alike to adopt more incendiary tones simply to remain visible. As regulators and city leaders call for tighter oversight, the technical challenge is not only to dampen harmful amplification, but to recalibrate the underlying incentives so that context, accuracy and accountability are rewarded as handsomely as fury, fear and tribal loyalty.
Policy gaps exposed as tech firms struggle to curb abuse and disinformation
While platforms trumpet their investments in safety teams and AI moderation, the reality is that the current regulatory framework still lets the most inflammatory content thrive. Firms are locked into business models that reward virality, yet there are few binding rules forcing them to redesign algorithms that amplify outrage, conspiracy and targeted abuse. Loopholes in self-regulatory codes allow companies to cherry-pick transparency metrics, while key areas remain under-scrutinised, including political microtargeting and the opaque role of recommender systems. Experts warn that without clear, enforceable standards, even well-meaning initiatives risk becoming PR exercises rather than structural change.
Lawmakers across jurisdictions are struggling to keep pace, leaving a patchwork of obligations that platforms can navigate with ease. Crucial questions remain unresolved, such as how to balance free expression with the need to protect users from coordinated harassment, and who bears ultimate liability for the viral spread of falsehoods. Policy proposals now under discussion focus on shifting responsibility from individual users to the systems that shape what people see online, with calls for:
- Algorithmic transparency on how content is ranked and promoted
- Risk assessments for abuse, radicalisation and electoral interference
- Independent audits of safety tools and enforcement practices
- Stronger penalties for platforms that repeatedly fail to act
| Policy Focus | Main Goal |
|---|---|
| Algorithm rules | Reduce reward for outrage |
| Transparency laws | Open data to regulators |
| Harms standards | Protect targets of abuse |
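To give a sense of what “algorithmic transparency” might mean in practice, the sketch below shows one hypothetical shape such a disclosure could take. The field names and structure are assumptions made for illustration; no regulator has mandated this format.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class RankingDisclosure:
    """Hypothetical machine-readable disclosure to a regulator.

    Every field here is illustrative - a stand-in for the kind of data
    that transparency laws might compel platforms to hand over.
    """
    platform: str
    period: str
    ranking_signals: dict[str, float]  # signal name -> weight in the scorer
    boosted_categories: list[str]      # content types given extra reach
    demoted_categories: list[str]      # content types whose reach is limited
    audited_by: str                    # the independent auditor proposed above

report = RankingDisclosure(
    platform="example-platform",
    period="2024-Q1",
    ranking_signals={"comments": 4.0, "shares": 6.0, "angry_reactions": 3.0},
    boosted_categories=["short_video"],
    demoted_categories=["borderline_hate_speech"],
    audited_by="Hypothetical Audit Body",
)

# Regulators could ingest the disclosure as machine-readable JSON.
print(json.dumps(asdict(report), indent=2))
```

A standard format along these lines would let regulators compare platforms directly and spot when “highly engaging” quietly means outrage-driven.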
Targeted reforms proposed to regulate engagement incentives and protect users
City Hall sources say the mayor is pushing for a new regulatory toolkit that zeroes in on the mechanics of virality rather than only the content itself. Policy options being floated include mandatory transparency dashboards showing which posts are being algorithmically boosted, real-time data access for independent auditors, and clear limits on how much weight platforms can give to metrics such as comments, shares and watch time when those signals are driven by anger or outrage. Campaigners also want age-sensitive defaults so that younger users are automatically shielded from the most inflammatory recommendation loops, alongside tougher duties on platforms to prove their systems are not amplifying harm by design.
- Independent algorithm audits to assess whether rage-based content is rewarded
- Red lines on manipulative features, such as infinite scroll and auto-play for minors
- Penalties tied to harm, not just breaches, including revenue-based fines
- Stronger redress routes for users to challenge harmful recommendation patterns
| Measure | Who It Targets | Intended User Benefit |
|---|---|---|
| Engagement cap on toxic content | Platform ranking systems | Less exposure to polarising posts |
| Algorithmic risk reports | Tech executives | Greater oversight of design choices |
| Default safety modes for teens | Under-18 user accounts | Reduced contact with harmful trends |
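The “engagement cap” measure in the table lends itself to a concrete illustration. The sketch below shows one way such a cap could work, limiting how much anger-driven signals may contribute to a post’s final ranking score; the cap level and the split between neutral and outrage signals are assumptions for illustration, not figures from any actual proposal.

```python
# Hypothetical re-ranking rule: outrage-driven engagement (angry reactions,
# pile-on comments) may supply at most OUTRAGE_CAP of a post's uncapped score.
# The cap level and signal classification are illustrative assumptions.

OUTRAGE_CAP = 0.30  # max share of the uncapped score outrage signals may supply

def capped_score(neutral_component: float, outrage_component: float) -> float:
    """Limit the contribution of outrage-driven engagement to the final rank."""
    total = neutral_component + outrage_component
    if total == 0:
        return 0.0
    allowed_outrage = OUTRAGE_CAP * total
    return neutral_component + min(outrage_component, allowed_outrage)

# A post scoring mostly on anger loses most of that advantage once capped:
print(capped_score(neutral_component=20.0, outrage_component=80.0))  # 50.0
```

Under a rule like this, outrage would still count – no one is proposing to censor emotion – but it could no longer carry content to the top of a feed on its own.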
Concluding Remarks
As Westminster weighs its next moves, Khan’s intervention underscores how the battle over online discourse has become a central fault line in British politics. For critics, the mayor is encroaching on free expression and overstating the power of platforms; for supporters, he is one of the few high-profile figures willing to confront an algorithmic system they say is warping public life.
What happens next will depend on whether ministers are prepared to reopen a debate many had hoped the Online Safety Act had settled – and whether the tech companies themselves are willing to move beyond voluntary pledges and incremental tweaks.
For now, the “outrage economy” remains largely intact: a business model built on anger, amplification and attention. Khan’s demand for stronger action ensures that its costs – to politics, to public safety and to democratic trust – are unlikely to disappear from the agenda any time soon.