Following on from an infuriated storm of tweets in response to the #OnlineHarms consultation, I thought the topic was worth expounding on, because the sheer frickin’ insanity of proposing that the Government should define ‘acceptable’ legal content and then force tech companies to police it evidently needs to be spelled out.
How did we get here?
Online harms don’t exist in a separate universe to offline harms, yet they invoke the average person’s inner authoritarian far more easily. Perhaps that’s because the ‘online’ aspect enables a scale and volume that only fifty years ago would have been inconceivable, and brings complexities which our squishy little human brains have not yet adapted to process without resorting to the TL;DR: an emotional knee-jerk reaction.

Online interactions lack the physical elements of communication which humans have evolved to rely on – body language, tone of voice, facial expression – leaving room for misinterpretation, and it would appear that in those gaps defensiveness breeds and contempt flowers. Empathy can be found in digital interactions, but you have to look pretty damn hard for it. It takes time and effort to absorb the other person’s reasoning and perspective, it takes humility to put oneself in the other’s position, and it takes courage to examine the possibility that an adamantly-held belief may be contradicted by evidence, or that a conflicting belief may have equal merit. If you read an average of a hundred tweets, a dozen Facebook posts or LinkedIn articles and three blog posts per day, and watch twenty YouTube clips on top, then approaching them all with empathy is probably impossible. It’s exhausting. Ultimately, it’s more rewarding than allowing anger and incredulity to take hold – but since when have human beings been collectively good at delaying unhealthy gratification in the moment for abstract rewards in the future? (A question to which the answer is ‘never’.)

So we have a situation where we feel more threatened by other people when we encounter them in a digital space.
OK, ‘harmful’ content. What does that even mean? Some examples are obvious – direct threats of violence, doxxing, blackmail, sexual exploitation of children, fraud – and are already prohibited by law. The barriers to enforcing those laws arise from the difficulties of investigation across borders (for which we have treaties) and the allocation of resources (for which we have democratic representation), rather than from the digital factor itself.
It’s legal, Jim, but not as we like it
The content that falls on and across the blurry edge of ‘legal’ is more problematic. One person’s religious dogma may be another person’s hate speech. One person’s erotica may be another person’s affront to decency. Companies pay silly money to other companies to skirt the edges of falsehood in order to persuade buyers that they can’t live without the thing being sold (we call this advertising, and it’s not uncommon for ‘skirting the edges’ to turn into ‘hopping across the boundary and hoping no-one notices’). Again – already regulated, but ineffectively. Putting the companies that stand to benefit from manipulation of public opinion in charge of gatekeeping ‘correct’ opinions seems… counterproductive at best. A bit like putting the world’s biggest (and disturbingly unethical) porn company in charge of allowing access to other producers’ legal porn.
Haters gonna hate
Hateful content spreads like wildfire – in fact, quicker than actual wildfires. (Maybe we should start saying that wildfires spread like hate content?) Lots of people disapprove of, dislike or even detest other people, and where in the physical world they might be circumspect about letting them know that directly, online hatred and bullying are unleashed without even knowing the targets personally or ever having spoken directly to them. Troll armies and hate mobs can be marshalled at the click of a button – but from the platforms’ point of view, these are all users whose eyeballs are available for showing adverts to and whose data can be harvested for commercial benefit. The equation is simple – get rid of the people who set the mobs off rather than the mobs themselves. It’s the philosopher’s trolley problem writ large (with poor spelling and grammar).
I think Piers Morgan is a dickhead – should the Government be able to prevent me from saying so simply because I’m saying it with pixels and not in the pub? As the meme goes: haters gonna hate. We already have hate speech laws (whether they are appropriate or effective is a topic for another day, but we have them, and prosecution of those offences requires due process of law). Sometimes a discomforting point of view is necessary, even if it is impolitely delivered. Our collective intolerance of emotional discomfort is self-sustaining, and it is eclipsing our ability to analyse the merits and fallacies of opinions which we find disagreeable. The answer to this is education, not software engineering or the removal of fundamental rights and freedoms.
Fake news!
Propaganda, deliberate lies, wilful ignorance and intentional misinformation are spewed from a firehose of bought-and-paid-for accounts, while battles of ideology rage over the boundaries of “acceptable” opinion, turning mean and spiteful at the drop of a hat. All the while, tech platforms feed off our data, reducing us to datapoints which can be analysed, judged, manipulated and – most importantly – monetised. Children and adults alike are targeted by predators and tricked or coerced into abusive scenarios – sexual, financial, emotional, professional and more. Algorithms pigeonhole us according to how we measure up against the standard-issue Silicon Valley techbro specimen, and direct our behaviour towards the most profitable outcomes regardless of the social or humanitarian cost. Obviously Something Must Be Done, otherwise we might just as well be in The Matrix’s hellish vision of pod-bred humans serving as batteries for the Machines.
But even though Something Must Be Done, it does not follow that anything that is done must therefore be the Right Thing To Do. (The legal blogger David Allen Green wrote an absolutely brilliant parody of this phenomenon a while back, in a way that almost perfectly predicts the content of the proposed Online Harms law.) Unfortunately, human nature drives us reflexively towards answers which satisfy our emotional, zero-sum, fight-or-flight instincts, when our responses really need to be analysed and considered in order to engage our rational selves (as described by Daniel Kahneman in his book ‘Thinking, Fast and Slow’). And we end up with nonsense like this (#onlineharms) and this, and this, and this – all legislating around the edges of problems with complex and multiple social factors, to no great effect.
Who guards the guardians?
The proposal puts enormous power into the hands of those that the evidence indicates we should trust with it least of all. The power to curtail freedoms of expression, speech, opinion and association will be bestowed on the basis of personal opinion (and, inevitably, profit motive) rather than due process of law. The power to define content as ‘harmful’ – on the basis of no established metrics, no longitudinal studies, no tolerance for dissent and, it would appear, no critical thinking capabilities – is placed in the hands of people whose motivations have evidently diverged greatly from the upholding of liberal, law-based democracy. Some of them may even have noble intentions, but that doesn’t make their stupid, dangerous idea any less stupid or dangerous.
What could possibly go wrong?
Well, for a start, giving the likes of Facebook, Google and Amazon a justification for extending the dystopian degrees of surveillance they already conduct on everyone, rather than reining in their abuses of privacy, is a really bad move.
Giving the green light to suppress marginalised and minority voices when the opinions they express cause discomfort to privileged majorities. Yeah, yeah – appeals processes, review boards, lessons learned, etc. – but those avenues for redress are usually so convoluted and under-resourced that people become resigned to – and then accepting of – their disempowerment, rather than put themselves through the hassle of fighting to win back the rights that were unfairly denied them. Unfair denials, remember, that can take place in their thousands every microsecond with no human intervention.
Enabling the insidious manipulation of public opinion through even more opacity and unaccountability. Far worse than Cambridge Analytica using psychographics and microtargeting to distort people’s view of the world to the benefit of its customers, this is a Government seeking to control who can say what to whom, and when, and who else can join the conversation, whenever the topic is disturbing to some but NOT ACTUALLY ILLEGAL. (Expect all references to the damage and drawbacks of Brexit to disappear from the UK’s online presence.)
Terrorism is bad, mm’kay? We know that. But can speech which doesn’t itself contain threats of violence be terrorism? And how can algorithmic judgement distinguish reporting on terrorism from actual threats? It doesn’t seem to be very good at that yet. Yet policy-makers’ ignorance of technology means they are intent on legislating for the capabilities they saw on an episode of ‘Spooks’ rather than for realistic functionality.
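To make that concrete, here’s a toy sketch of the kind of context-blind filtering a blanket legal duty tends to produce. The keywords and example texts are entirely hypothetical – this is not any real platform’s moderation system – but the failure mode is the point:

```python
# A deliberately naive keyword filter of the sort a blunt legal mandate
# invites. Keywords and samples are hypothetical, for illustration only.

TRIGGER_TERMS = {"bomb", "attack", "hostage"}

def flagged_as_terrorism(text: str) -> bool:
    """Return True if any trigger term appears, with no sense of context."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return bool(words & TRIGGER_TERMS)

samples = [
    "BREAKING: police confirm the bomb attack killed three people.",  # journalism
    "We will bomb the station tomorrow.",                             # genuine threat
    "The sergeant ordered one last attack at dawn.",                  # war fiction
]

for text in samples:
    print(flagged_as_terrorism(text), "-", text)
# Prints True for all three: reporting, a real threat and fiction look
# identical to a filter that cannot read intent.
```

Real systems are more sophisticated than this, of course, but the underlying problem – that intent lives in context the classifier doesn’t have – scales up with them.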
If content is bad enough that something needs to be done, then it’s bad enough to make a law specific to that content, get it through Parliament, and enforce it with due process. We already have this for child sexual abuse images, threats, defamation, false advertising and incitement to violence. If these laws are already not working, then how can adding more laws be expected to succeed? One might suspect that making these laws is nothing more than short-sighted posturing – an act with no bearing on the efficacy of the measures it describes, or on the publicly-stated outcome they are supposed to achieve.
Children are being exploited, manipulated and put on dangerous paths. I’m actually talking about the data collection and profiling that goes on under the bonnet of the homework apps, games and devices pushed at them by tech platforms working through schools and parents – and we propose to put those same actors in charge of those kids’ moral and emotional protection? Why not focus efforts on educating schools and parents to identify and respond to online harms, while teaching the kids themselves about consent, boundary violations, handling distressing content and spotting nonsense? Because that would take longer than the average political term in power, would not enrich technologists, and would require effort from lots of people who’d rather point the finger elsewhere and be spoon-fed their decisions.
Eating disorder forums, self-harm and suicide content – definitely harmful. Not illegal. Who on earth believes that cutting off access to these online spaces will magically result in happy, healthy humans? Why are ‘solutions’ that silence and exclude the people who seek this content out through their laptops or smartphones more palatable than investment in mental health support services, fostering respect for human rights and dignity, condemning body-shaming and the sexualisation of pre-adolescents, reducing pressure from schools and employers to conform to ‘productivity’ metrics, and discouraging entertainment based on tearing other people down, shaming and humiliating them? Same answer as above.
If our Government wants to tackle ‘fake news’, it could start with some of the bare-faced lies, factual inaccuracies and magical thinking that emanate from Westminster before getting involved in what ordinary citizens say to one another on Twitter. It could implement the Leveson 2 recommendations, scrutinise the interests of the non-dom media owners, and update defamation law so that a true statement is not considered defamatory even if it pisses the subject off and costs them money. Why don’t they? I think you have your answer…
You can’t ‘fix’ human nature with technology. We can’t even all agree on what a ‘fix’ looks like, let alone how to apply it in a way that doesn’t cause more, and worse, problems. Some people are rude, some obstreperous, some misguided, some deluded, some venal. None of those things are illegal, even if they are fecking annoying. Making laws that don’t fix the problem at hand while creating lots of other problems is not the answer, and we should stop participating in the collective delusion that it is.