The debates about the UK’s ‘Online Harms’ legislation rage on, online and off, as vested interests, legitimate concerns and commercial opportunity vie for the prize of majority public support and MPs’ patronage. I’ve been watching and participating in these debates for years, and, having seen the same points made over and over (each time with more technically advanced and intrusive suggestions for eliminating the problem of people being people via screens and keyboards), I am becoming weary of repeating the same arguments to every n00b with an app and no knowledge of history.

The discourse reminded me of one of the earlier (but still persistent) debates of the Internet Age: how to combat spam. I recalled seeing a meme pop up again and again on Slashdot: a set of multiple-choice responses to badly-thought-through ‘solutions’ which saved the responder from having to type it all out from scratch. With the help of the Twitter Hive Mind, I tracked down this work of genius on Cory Doctorow’s craphound.com site, and set to work revising it for today’s Online Harms proposals.

Here is the result!

(Warning: not intended as a comprehensive list of Reasons Why Not; there are plenty, so I’m bound to have missed some. Don’t @ me about that, pls)

Your post advocates a

( ) technical ( ) legislative ( ) market-based ( ) vigilante

approach to fighting online harms. Your idea will not work. Here is why it won’t work. (One or more of the following may apply to your particular idea, and it may have other flaws that render it unworkable, harmful or pointless.)

( ) Stalkers, zealots and oppressive states can easily use it to obtain targeting data for their abuses

( ) Activists, marginalised communities, people with nosy bosses and other legitimate users would be endangered

( ) CHILLING EFFECT ON FREEDOM OF SPEECH

( ) It is defenseless against people using it in bad faith

( ) It will silence some sad little trolls for a few weeks but make no difference to big-brand hate-pedlars and then we’ll all be stuck with it

( ) Users will not put up with it

( ) Google/Facebook will not put up with it

( ) Government/law enforcement will not put up with it

( ) Requires too much cooperation from platforms optimised to produce the opposite effects

( ) Relies on the delusion that local law can be enforced on a global network

( ) Relies on everyone always doing proper due diligence and not cutting corners

( ) Many people cannot afford to meet the verification criteria, financially or in terms of personal risk

( ) The average internet user doesn’t want extra friction in their signup experience

( ) Doxxing will continue, but safe spaces will shrink

Specifically, your plan fails to account for

( ) Laws expressly prohibiting it

( ) Lack of centrally controlling authority for judging ‘harm’ and acceptable use

( ) VPNs, PO boxes and other legit obfuscation services

( ) The difficulty of evaluating irony, parody, literary context, civility and etiquette across multiple cultures and languages

( ) Asshats

( ) Jurisdictional problems

( ) Unpopularity of surveillance measures

( ) Public reluctance to accept corporate or government nosy-parkering

( ) Huge existing investment in low-friction interaction on platforms, for maximisation of data-harvesting

( ) Unfair impacts on people with less social capital than yourself

( ) Willingness of users to trust their ID info to Big Database

( ) Armies of propagandists, trolls, sealioners, bot accounts, contrarians, and techbro sociopaths

( ) Eternal arms race involved in all filtering approaches

( ) Extreme profitability of rage and division

( ) Joe jobs and/or identity theft

( ) Technically illiterate politicians

( ) Extreme stupidity of tech-solutionists

( ) Dishonesty on the part of platforms and legislators themselves

( ) Bias and bigotry which already existed offline and will continue to do so

( ) Browser wars

and the following philosophical objections may also apply:

( ) Ideas similar to yours are easy to come up with, yet none have ever been shown to be effective

( ) Any scheme based on outing/doxxing is unacceptable

( ) ‘Destroy Facebook’ should not be the defining objective of legislation

( ) Discriminatory exclusion sucks

( ) Every protective measure can be illegitimately weaponised by someone, somewhere, somehow

( ) We should be able to talk about our bodies, experiences and beliefs without having to censor our conversations to avoid upsetting eavesdroppers

( ) Countermeasures should not involve oppression, unfair discrimination or punching down

( ) Countermeasures should not require warehouses full of exploitation-wage workers with PTSD to put into operation

( ) Countermeasures must not be based on magic beans, fairy dust or faith in the ‘objectivity’ of algorithms

( ) Transparency and accountability are strictly necessary for corporate entities; individuals need the protection of anonymity

( ) Why should we have to trust you and your database/algorithm/privilege?

( ) Incompatibility with human rights, democratic mores or lessons from history

( ) Feel-good measures do nothing to solve the problem but create new problems of their own

( ) Power corrupts

( ) I don’t want to expose my entire digital social life to the government/data brokers/my boss/my family/my enemies

( ) Killing them that way is not slow and painful enough

Furthermore, this is what I think about you:

( ) Sorry mate, but I don’t think it would work.

( ) This is a dangerous idea, and you’re an asshat for suggesting it.

( ) Check your privilege, it’s getting in the way

( ) You work for Google/Facebook/Amazon/Experian/Palantir, don’t you?

I did look for attribution and licensing terms but couldn’t find any solid info, so please HMU if I need to fix that. This version is offered freely for anyone to cite, use, adapt, etc., unless anyone knows of a reason why that’s not okay.
