On regulating social media

In this note on LinkedIn, Katy Leeson asks for opinions in the comments. The LinkedIn comment section is limited to 1300 characters – far too little to even scratch the surface of what needs to be said on the topic. So I’m improvising a bit and publishing a few thoughts here.

What I liked most in the video is at ~1:45:

help parents understand social media.

If children find something “bad” online and they can’t discuss it with their parents, nothing else is going to really help.

The internet could have been the biggest library on the planet, but so far it has turned out to be a cacophony of trivia (“Das Internet hätte die größte Bibliothek der Welt werden können, aber bisher ist es der größte Stammtisch” – roughly: the internet could have become the world’s biggest library, but so far it is the world’s biggest pub table).

Regulating social media is, like regulating the internet, a fiendishly difficult topic. I worked on it more than twenty years ago. Some things have happened since, but it is unclear whether they are, on balance, positive. The problem is not that governments, technologists or anybody else has been lazy or too greedy since the mid-nineties. The problem is that finding effective measures is REALLY hard.

First, there is the problem with nations. The internet (and social media) is international. Back in the ’90s, that was the big topic: some countries tried to limit sexual content (mostly the USA), others tried to limit violence and hate speech (mostly Europe), and yet others tried to limit depictions of self-harm (Australia). The threshold between what is acceptable and what is not could swing from “absolutely not” to “what’s the problem” within a few hundred kilometers (miles, if you’re in the UK or US). There is no “THE ONE” standard for what is acceptable; what is acceptable varies strongly with location, culture and religion. That seriously limits regulation, which is necessarily geographically bounded.

Second, there is an issue with balance: one of the first attempts to filter the internet in the ’90s was to block certain strings, like “sex” or “breast”. The more serious problem this created was that all breast cancer support groups were immediately blacklisted; the funnier problem was that the German word “Staatsexamen” (state examination) was blacklisted, too. Especially when it comes to political opinion, the line between “legal political opinion” and “hate speech” is critical. Over the last 20 years, the DMCA and other well-meant but heavy-handed regulations have led to a wide variety of false takedowns (in addition to legitimate ones) – and national, cultural and religious standards make the topic ever more difficult.
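
To see why this goes wrong, here is a minimal sketch of such a naive substring filter; the blocklist and sample texts are invented for illustration, not taken from any real product:

```python
# A naive keyword filter of the kind used by early internet filtering software.
# The blocklist and sample texts are illustrative only.
BLOCKLIST = ["sex", "breast"]

def is_blocked(text: str) -> bool:
    """Flag a text if any blocklisted string occurs anywhere inside it."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

samples = [
    "Breast cancer support group - weekly meeting",  # legitimate, but blocked
    "Ich bereite mich auf das Staatsexamen vor.",    # German "state examination", blocked via "...tsexamen"
    "Gardening tips for beginners",                  # passes
]

for s in samples:
    print(f"{'BLOCKED' if is_blocked(s) else 'ok':8}| {s}")
```

A filter working at this level simply cannot tell a support group from the content it is meant to block.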

Third, if we move from the “free internet” to the “corporate internet”, we have the additional issue of corporate standards: Apple, Facebook, LinkedIn, Twitter etc. all have and enforce their own community standards. As a non-US national, this feels like blatant cultural imperialism – by my standards, trivial stuff is taken down while the most offensive material stays online, because the community standards are biased towards the headquarters of these companies, usually in the US. Freedom of expression is such a fundamental human right that it is a real problem if private companies start to rule what is and what is not legal content. A community standard is not democratically legitimized, yet it heavily impacts the audience people can reach. Imagine a new social network that secretly biases its content filters to favor certain government policies. Worldwide. And even a well-intended corporation will do the opposite of “in dubio pro reo”: it foreseeably gets much more trouble for keeping offensive content up than for “accidentally” taking down one piece of content too many – or five.

Last but not least, there is an issue with volume. With billions of people online, more content is created at any moment than can be reliably checked (given challenges 1-3 above). AI won’t help, rating centers won’t help; there is simply too much stuff out there.

Twenty years ago, we came up with an interesting platform to establish machine-readable content descriptions, to reduce the ambiguities in content filters. I still believe this is the best technical solution possible. The catch: the learning curve for publishing on a social network is zero, while the learning curve for content rating is steep and long. Not because the technology is hard, but because – unlike with social networks today – people have to reflect on what they are doing.
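
To illustrate the idea – the categories, the 0–3 scale and the policy below are invented for this sketch and are not the actual vocabulary of that platform – a machine-readable description lets the publisher declare what the content contains, while each household or jurisdiction applies its own thresholds:

```python
# Hypothetical, simplified content label attached by the publisher.
# Categories and the 0 (none) to 3 (explicit) scale are invented for illustration.
label = {
    "url": "https://example.org/breast-cancer-support",
    "categories": {"nudity": 1, "violence": 0, "self_harm": 0},
    "context": "medical",
}

# Each user, household or jurisdiction defines its own acceptance thresholds,
# instead of relying on a single global keyword list.
local_policy = {"nudity": 2, "violence": 1, "self_harm": 0}

def acceptable(label: dict, policy: dict) -> bool:
    """Accept content whose declared ratings stay within the local thresholds."""
    return all(label["categories"].get(category, 0) <= limit
               for category, limit in policy.items())

print(acceptable(label, local_policy))  # True: the declared ratings fit this policy
```

The rating work moves to the publisher – which is exactly where the learning curve, and the reflection, comes in.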

In effect, any regulation would require that this reflection happens somewhere. Most users are too short-sighted to do it, computers are certainly too dumb to do it (at a minimum, you need to understand irony to make the exercise meaningful), and platform operators, the legal system etc. are obviously overwhelmed. It’s plain tough.

My personal conclusion twenty years ago was: If there is no technical solution, we need social solutions. Parents need to have and to use the time to discuss life, the universe and everything with their children. Both kids and parents need to understand social media and the internet.

That should reduce the problematic stuff online, and it should strengthen children’s resilience both on- and offline.
