Banning kids from social media? A very bad idea…

As the idea of banning kids from social media is spreading round the globe – the new ban on under-16s from social media in Australia comes into force in a week or so – it is worth looking at the topic more deeply. The coverage in the media has been largely terrible, the comments from politicians even worse. This is not a simple subject, and, as ever, both media and politicians very much want it to be. This is nothing new: the subject of kids online has always been discussed without understanding, and in particular with a disdain for the voices of those most directly involved: the kids themselves.

It is over a decade since American scholar danah boyd wrote her seminal book ‘it’s complicated: the social lives of networked teens’ (available for free online here), and very little seems to have been learned. I call danah’s book seminal because it offered something different from all the discussions before – the view from the kids themselves, through interviews and analysis. It was not, however, seminal in the sense that it changed the perspective of the politicians and the media. They are still making exactly the same mistakes. They still haven’t learned. This was danah boyd’s hope:

“As you read this book, my hope is that you will suspend your
assumptions about youth in an effort to understand the social lives of
networked teens. By and large, the kids are all right. But they want
to be understood. This book is my attempt to do precisely that.”

‘By and large, the kids are all right.’ That was boyd’s conclusion in 2014, and it remains, by and large, true. For the most part, most kids, most of the time, are able to navigate the internet – and in particular social media – in ways that work. Rather than being a cess-pit of trolling and misinformation, the internet mostly works. Just like for the grown-ups, the internet is simply part of their lives – how they organise themselves, how they get information, how they socialise, how they do their (home)work, how they find entertainment, how they listen to music and watch television and movies, how they date, how they shop and much more. They’re not that different from the grown-ups. Indeed, in many ways they are better able to deal with the internet than the 50-somethings who are not just as likely to fall for misinformation and be steered into extremism but who actually vote accordingly and have the ability to shape the world into the fantasies that are damaging us so much.

Boyd’s book was titled it’s complicated, drawing from the old ‘relationship status’ category in Facebook back in the day, because this really is complicated. The internet – and social media in particular – has good aspects and bad aspects. Working out how to regulate it well means understanding both the good and the bad. When deciding how to regulate kids’ access to the net, we need to take that seriously. The public debate is all about the harm that can come from social media, whether it’s access to pornography or being bullied, pro-anorexia and self-harm sites or addictive games and sites like TikTok – and if this is all you see, of course a ban on kids from social media makes sense. That, though, is an incomplete and misleading picture of what the internet and social media provide for kids. We need to talk much more about the good things that the internet provides for kids – and for vulnerable kids in particular.

This is one of the key points: there are a great many good things about the internet and social media for most kids. It is critical to how they socialise – both online and in the ‘real’ world. They organise meet-ups, they work out what they can do together and much more. They communicate with each other, provide support to each other, solace when things go wrong, advice about how to deal with problems and so forth. This matters for all kids, but for particular kinds of kids especially. The internet provides a way out of loneliness, a way to distract yourself from what are often hard lives in the real world. An escape. People often talk as though the internet for kids is all about bullying – but it can often be exactly the opposite, a way to escape bullying. If you’re being bullied for your appearance, your ethnicity, your name, your family, your poverty, a health condition (this is particularly important for many disabled kids), your neurodivergence, your sexuality, your religion or much else, the internet can help. None of that has to show – you can create a life where the first thing that people see isn’t the thing that the bullies use to target you. For some kids, social media was and is the key to keeping happy and positive. It’s the real world where the pain comes, whether from bullies, from families, from schools and so on. This is another omission from much of the debate – not all parents are good and protective, attentive and well-meaning. Some are arrogant, ignorant, aggressive, oppressive, bullying, bigoted or hateful. What some kids need most is an escape from their parents.

Take away social media from these kids and you put them into a disastrous situation. And yet these kids are not the ones talked about. They should be. Instead, we see stereotypes and archetypes rolled out again and again: nice kids with loving parents who are sucked into bad situations, uncaring internet giants designing algorithms to force them to watch terrible videos and so forth. There is of course truth here, but it’s not the whole truth, and before we do drastic things we should consider the wider implications.

Moreover, there seems to be an illusion that if we took social media (and phones) away from kids they would all suddenly take up healthy pursuits, from sports to arts and crafts, to reading Jane Austen or doing embroidery. The reality is that they generally can’t do any of that, because we’ve sold off the playing fields and shut down the youth clubs. We’ve made pursuits like that so expensive and exclusive that the vast majority don’t have a chance to take part. We’ve let our cities become so car-dependent that they can’t easily get to those few clubs that do still function and are affordable. If you want to steer kids away from the unhealthy stuff on the internet, the starting point has to be to change all of this. Support the clubs, subsidise the activities and so on – and let the kids use the internet and social media to find out about them! Banning kids from YouTube stops them even using the videos that can help them learn things like cooking, sewing and dancing.

There’s much more to say on this subject – I have barely touched the surface here – but we need to talk about it all more honestly. The good as well as the bad. We need to find ways to address the undoubted harms that are there, without creating new harms by taking away what kids really need. This means being more intelligent, more nuanced, and more targeted. Look at the harms and address them specifically and directly. Some parts of the Online Safety Act have done this – the sections on cyberflashing (unsolicited dick pics) and epilepsy trolling show how it can be done. The attempts to deal with ‘revenge porn’ and things like pro-anorexia sites need to be sharpened up, and so forth. There is a great deal to do – but blunt instruments like banning kids from social media will do far more harm than good.

Paul Bernal, December 2025

Digital ID cards – some of the issues…

Since Keir Starmer announced the forthcoming introduction of mandatory ‘Digital ID’, the so-called ‘Brit Card’, there has been a lot of discussion and debate – some of it political, some technical or technological, some on civil liberties, some practical – and some quite heated. Some, too, has been either ignorant or dismissive – comments such as ‘the British are mad, everyone else in Europe is fine about ID cards’ are neither helpful nor show any understanding of what it is that the Brits worry about. Because the Brits do worry about ID cards, as the petition against the scheme – which gained more than 2 million signatures in a matter of days – has shown.

This is a more complex issue than it first seems. It is neither true that this is a guaranteed route to an authoritarian nightmare – a ‘papers please’ society – nor that it is a storm in a very British teacup, another piece of stupid British exceptionalism that we should forget about. Rather, there are a whole series of issues that really do deserve careful thought. This post attempts to set out some of these – to explain why people are worried, whether those worries are justified, and what possible solutions to those worries might be. This is just a starting point, however: there are many more points than a blog post can really hope to cover.

What is ID for?

The first question to ask is what you want ID for. For most people, in most aspects of their lives, there’s very little need for ID. You can go about your daily business without anyone needing to check your ID or who you are. You don’t need to prove your identity at work – your employers know who you are – you don’t need to prove it to do things like shopping, eating at restaurants, going to movies or sports matches. You might need to prove some particular attribute – that you’re over 18, that you have a ticket, that you have the means to pay for something – but not your identity. The two are qualitatively different, and can be treated differently. It may be more convenient to have one card/device/other tool that does several of these functions, and that even contains some identifying information about you, but it’s not necessary in most cases. If I use my Apple Watch to pay for something, a connection is made from my watch to something confirming my bank details, but no-one else in the process needs to know. The shop doesn’t need to know, the shop assistant doesn’t need to know and so forth.

The same is true of many of the situations people think of as needing ID – it isn’t really ID that’s needed, and it isn’t really the person who’s asking for the ID who needs it. A well designed ID system recognises this, and only asks for the information necessary in a particular situation. A well designed ID law would also recognise this, and only require the disclosure of ID when it is really necessary – not when it might be convenient or it might possibly help later.
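The distinction between proving an attribute and proving an identity can be sketched in code. The toy Python example below (all names are illustrative, and a real system would use public-key signatures or selective-disclosure credentials rather than a shared secret) shows a verifier learning only that an ‘over 18’ claim holds, with no name or date of birth ever disclosed:

```python
import hmac
import hashlib
import json

# Toy sketch only: a real scheme would use public-key signatures,
# not a secret shared between issuer and verifier.
ISSUER_KEY = b"issuer-secret"  # held by the trusted issuer

def issue_attribute_token(attribute: str, value: bool) -> dict:
    """Issuer signs a single attribute claim - no name, no ID number."""
    claim = json.dumps({"attr": attribute, "value": value}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_attribute(token: dict, attribute: str) -> bool:
    """Verifier learns only whether the named attribute holds."""
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # token tampered with or not from the issuer
    claim = json.loads(token["claim"])
    return claim["attr"] == attribute and claim["value"] is True

# The shop's age check sees an 'over_18' claim - never a name or birthdate.
token = issue_attribute_token("over_18", True)
print(verify_attribute(token, "over_18"))  # True
```

The design point is the one made above: the verifier’s question (‘is this person over 18?’) is answered without the system disclosing, or even containing, the person’s identity – which is what data minimisation looks like in practice.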

How can ID be used?

There are two different ways that ID can be used. One is assertion. You can use a verified ID to assert your rights. ‘This is me’, confirming your right to do something. That is an active process, and one that is in the hands of the person asserting their identity. This is one of the best uses for an ID card, and when people say ‘they have them in Europe, and it’s really useful,’ this is generally what they mean. You can use this to cut through bureaucracy, to simplify processes like opening bank accounts or getting a job. This is also not the kind of use that worries people concerned about civil liberties.

The second use, the one that does concern people who care about civil liberties, is demand. That is, to be required to carry ID in case someone in authority demands it of you. This is the ‘papers please’ society that people fear, the idea that a police officer might stop you without any real reason and demand to see your ID card. The idea that ID cards should be mandatory fits with this concern – if it can’t be demanded of you, why would it be mandatory? And if it is mandatory, at some point it will be demanded of you.

It is easy in the current climate to think of ways this could be used politically and badly. Could Border Force demand it of people when they’re doing a raid? If so, who would they be checking? It is hard to imagine that checking would not be racially or religiously biased – who would be suspected of being an illegal immigrant? Alternatively, given the increasingly heavy handed policing of protest, could it be used to try to deter people from being involved in protests, whether political or environmental?

Immigration enforcement?

The stated use case for the system – at least in speeches – is immigration enforcement. Specifically, to make it harder for immigrants to work illegally. The idea is that people (all people, not just immigrants) have to show the new digital ID when they are hired. This will, according to Starmer, make it harder for immigrants who are not entitled to get hired. There are a number of problems with this.

  1. There are already checks like this – the right to work check – which scrupulous employers use, and which make it both difficult and unlikely that those who are not entitled to work will be hired. With this, passports and visas which show this entitlement are checked – and the system essentially works.
  2. The consequence of this is that those who do employ people who are not entitled to work are not, and will not be, scrupulous and law-abiding employers. Those unscrupulous employers are unlikely to change because of digital ID.
  3. What would really address this would be cracking down on unscrupulous employers – which is, at least to an extent, already happening, but could go further. This, however, would have nothing to do with the digital ID, and would not be made easier by the digital ID.

This makes the case for a massive new project difficult to sustain – if it is really the reason for bringing in digital ID. When all the other concerns are brought into play, it makes even less sense – and there are many other concerns to be considered.

Cards or databases?

The first of these concerns is the way that databases come in. The idea of digital ID is that it can link to various government databases, either existing databases or newly created ones for the digital ID. There will, presumably, be a database of the digital IDs themselves, to check whether a digital ID is authentic to start with. This would have to include sufficient biometric data to allow some kind of checking that the digital ID belongs to the physical individual who is claiming that it is theirs – facial recognition data, fingerprint data, or something along those lines – as well as citizenship or residency information.

The ID would then need to link to databases about immigration status, for example – not just the right to work (the information needed for the employment check that is the purported reason for introducing the digital ID), but also potentially things like the right to use the NHS, or entitlement to benefits. Then, again presumably, there are the other key government databases that could be linked to, such as those held by the DVLA, by HMRC or by the DWP. It would be logical to link to these, and this could increase the convenience and usefulness of the digital ID for people (in the assertion role of ID) as well as for the government.

From there, links could be made to more data – for example data held by the police or others about membership of various organisations, or information about activities. For example, if a protest is happening in the area, and the police want to stop people congregating, they could ask for ID then check directly whether this is someone known to be a member of a protest group, or to have previously been on protests – as a tool to try to head off protests, this could be effective. On the other hand, it could also be seen as distinctly authoritarian, raising more of the civil liberties concerns.

Of course if this information is already there, it can be reached by other means – and already is, for example through live facial recognition of protestors – so this is another tool rather than a unique one, but the existence of digital ID systems can make things more convenient for the authorities, as well as more convenient for the citizen. That has implications that need to be taken into account. Making things easier for authorities can mean enabling authoritarianism – it does not have to, but when systems are set up sufficient protections need to be built in to prevent it, and the rights of people need to be protected in law as well as in practice. This means oversight of systems, and rights to complain and to obtain redress.

What needs to be understood above all is that all data is vulnerable – and databases like these are particularly vulnerable, honeypots of data that can be exploited in all kinds of ways. Creating new data, and making links between databases, creates new vulnerabilities. There is a good reason that data protection has as one of its principles data minimisation. The BritCard appears to do the opposite of that: creating vulnerabilities and insecurities.

Function creep

Another major concern about systems like this is function creep, sometimes called mission creep. That is, a system may be designed and authorised for one use, but then later gets used for something above and beyond the original idea. This is not just something from conspiracy theories – though conspiracy theorists do generally believe in it – but from experience both of laws and of surveillance systems. The Regulation of Investigatory Powers Act 2000, for example, was brought in ostensibly to deal with terrorism and serious crime, but ended up being used to deal with dog fouling and to allow councils to monitor whether children really did live in catchment areas for particular schools – and these are just some of the examples. Similarly, the ANPR cameras installed to monitor London’s Congestion Charge can now be used for criminal investigations and prosecutions. That might well be appropriate and efficient, but it was not what the cameras were designed or authorised for.

In practice, function creep may well be inevitable – ideas of how to use systems may simply not have been conceived, or even have been possible, when systems were devised and when the laws enabling them were passed. It is a mistake to assume that they are the result of a conspiracy, of dishonesty by those behind the schemes – but it is also incumbent on those considering schemes to think about where function creep might occur, and to either put in place protections against that function creep or be more open about what the possibilities are when the systems are authorised. Here, for example, if the real uses of digital ID are likely to be more than just the checking of people when hiring them for work, the government should be up front about that.

Costs, complications and practicalities

The next set of concerns are in many ways practical. This kind of a project is an immense undertaking – this is digital ID, and that means a massive government IT project. Who will actually do this project? It seems highly unlikely that it would be done ‘in house’ by the UK government, and that means using private companies to do the work. The question of which companies is a huge one. Will it be U.S. companies, such as the somewhat notorious Palantir? That would and should raise huge alarm bells, particularly given the current state of politics in the U.S. Would our data be secure in the hands of a company whose founder and chairman thinks regulation of AI will hasten the arrival of the Antichrist? (This is not a joke, but real.) Can we trust these companies to do this work to the benefit of the people of the UK?

This is the kind of thing that can be protected against. In Switzerland, for example, where an optional form of electronic ID was recently voted for in a referendum, it was decided that the work needed to be done in house, for exactly these reasons. The UK could do this – or at the very least, it could place strict rules about who can and cannot bid for the project, and avoid the natural and appropriate worries about some of the potential bidders. Palantir, at the very least, should be excluded. Then there is the possibility of the work being farmed out to people with family or other connections to ministers – this kind of cronyism (well, in reality corruption) is very familiar in the UK, particularly during the COVID pandemic. Who is going to get the work, and hence the money, for this project? Will it be done transparently and fairly? There is also the question – one that needs to be considered every time a technological project is proposed – of whether a technological ‘solution’ is being oversold by its proponents. Selling shiny solutions to desperate governments has been very lucrative for many decades, regardless of whether the solutions actually solve anything. It needs to be guarded against from the start.

Then there is the question of cost. This kind of a project will be very expensive, and given the experience of large government IT projects is likely to be far more expensive than any initial estimates. Given that, we need to be very clear about the benefits from the project from the outset, before committing so much to it. Whether it is the various failed NHS IT projects over the last few decades or even HS2, government projects do tend to end up much more costly than expected. It would be very optimistic to expect anything different here, particularly as this is something new, unlike any previous related project.

The consequences of errors

There are two kinds of concerns about this kind of project: problems that arise intentionally, as part of the design of the system, or inherent in the system, and problems that arise through errors. The Post Office Horizon IT scandal should give everyone food for thought here. What happens when ID information is wrong? People can fail to get jobs, at the very least, or they could end up being imprisoned or deported inappropriately – because (again, presumably) employers will be expected to report people attempting to get jobs illegally. This is not a joke – though ‘Computer Says No’ seems funny, the computer saying no in this kind of case can be significant. Moreover, what a computer says is treated as gospel – it can be taken as unquestionably right, as we saw in the Horizon IT scandal to devastating effect – and proving that it is wrong can be nigh-on impossible. We have also seen the experience across the Atlantic of what over-enthusiastic enforcement of immigration rules can result in, whether or not the information used to enforce is real.

Digital and other exclusion?

One of the other concerns about digital ID is the way it could exclude certain groups. As presented, this would be an app-based system, presumably for Apple and Android phones – so anyone who either doesn’t have or struggles with those phones will either be unable to use this system or be disadvantaged through it. That disadvantage would be particularly important in the assertion role of ID: if we think this digital ID is going to make people’s lives more convenient, that won’t be so for those who can’t use it, increasing already existing digital exclusion or digital disadvantage.

If the system is mandatory, and there are people who can’t use the smartphone/app system, then an alternative has to be provided – perhaps an actual ID card, in physical form – and an alternative infrastructure has to be provided. Again, this is likely to cause disadvantage and might well be challengeable (the devil will be in the detail), and it will certainly make the whole thing more costly and complex, and provide more opportunities for errors, as well as more possibilities for subverting or bypassing controls.

Then there is the question of people who can’t afford smartphones, or use alternative systems to the mainstream Apple or Android, or whose phones are outdated and can’t use the app. What will the government do for them? Will they provide smartphones for those who can’t afford them, then update them as they become obsolete? There were related issues for the Covid tracking apps – issues that contributed to their failure. Technology is not as simple as politicians often think – as Matt Hancock found to his cost, when he had to humiliatingly climb down over his initial plans for a tracking app.

But it works in Europe

This is one of the most regular claims, but it misses the point. ID cards do work in Europe, but in ways that this government is not talking about. They work as a de facto travel document between EU states. They can be used as an assertive tool for dealing with bureaucracy. Nowhere other than Estonia are they used as a digital ID, and in Estonia this is not for immigration enforcement or anything similar, but as a tool of government efficiency and access: Estonia has the most digital government in Europe. Further, there is no evidence that ID cards lower the rates of ‘illegal’ working – the ‘shadow economies’ in countries with ID cards are just as big as (or bigger than) ours.

Moreover, these European countries have strong constitutional protections for privacy – we do not. Our main protection comes through the European Convention on Human Rights, which opposition parties are planning to leave, and even the Labour government is considering either leaving or weakening the rights, particularly the Article 8 right to a private life which is the key here. Our other protection comes from data protection law, and since Brexit we have looked to diverge from the GDPR and weaken privacy protections in terms of data. The European model is not one we can use as a positive comparison to suggest that ID cards are a good idea – if the UK government were putting forward a European-style ID card with European-style protections, it might be. They’re not.

Conclusions

This is not a simple idea, nor a simple issue. There are positive possibilities for digital ID – as an assertive tool it could be great – but it is highly unlikely to have anything more than a peripheral effect on the issue the government is touting it for. That framing needs to change. The government needs to understand what the system could actually work for, and be honest and clear about it. It should know what people’s concerns are, and do what it can to assuage them. Steer clear of the likes of Palantir. Give the idea time to settle down, and be clear about what the pitfalls are likely to be.

As it is, this looks poorly planned, flimsily justified, and impractical. I would like to have a positive case made for digital ID. This is not it.

Riots and social media regulation – some thoughts

Governments are very bad at internet regulation. That much has been clear for a very long time – scholars and others who look at the subject generally wince when they hear about a new plan to ‘rein in the Wild West’ or something similar. It is thus with trepidation that many of us are preparing for the attempts to ‘regulate’ further after the key role that social media seems to have played in the riots that followed the brutal child murders in Southport on 29th July. ‘Something must be done’ is the cry – it seems highly likely that something will be done, but whether something should be done, and if it should, what should be done are really not such easy questions.

Regulating social media is difficult

The first and most important thing to be clear about is that there are no easy answers here. Social media regulation is difficult. Freedom of speech is difficult. Anyone who suggests that there are easy solutions is primarily demonstrating that they don’t understand either social media or freedom of speech – but that of course won’t stop the suggestions. So far, amongst other things people have suggested banning Twitter/X in the UK (which would be extremely authoritarian with nasty repercussions) and removing anonymity (one of the absolute worst ideas, as I will try to explain below). ‘Magic wand’ solutions are often offered: they almost never work, and almost always have very severe side effects. They make things worse, not better.

What was the real problem?

When you are trying to suggest ‘solutions’, the first thing to do is to try to understand what the problem is – and also what the problem isn’t.

In this case, soon after the news of the horrendous murders came out, a false story appeared on Twitter/X that the murderer was a Muslim, an asylum seeker, and had recently arrived on one of the small boats that have been crossing the channel. That story was amplified by some big accounts on Twitter – including politicians, ‘influencers’ and people in the media, causing furious anger and calls for action. Those calls for action led to the riots – organised apparently through social media, supported and illustrated with pictures and videos pushed through social media.

So there are a number of stages involving social media, three of which are key:

  1. The creation of the false story – it is still not entirely clear who did this, though at least two sources have been mentioned, one a woman in the UK, the other a (fake) news site that appears to have its base in Pakistan.
  2. The spreading of the false story – a lot of people and accounts are involved here, from the very big and prominent to the small, in some cases anonymous accounts.
  3. The organisation of the riots on the back of this widespread false story – largely on social media, though almost certainly other communications methods were also involved. It is not hard to envisage drunken discussions in pubs having a certain impact.

The first of these, the creation of the story, is very difficult to stop happening. People can write what they want – that’s the nature of freedom of speech and the way that social media works. In certain circumstances it can already be criminal, and I understand that the woman in the UK has already been charged under S179 of the Online Safety Act 2023, ‘False communications offence‘. Whether this charge will stick is another matter – this is a new and largely untested law, and s179 has some criteria that may make it difficult.

S179 says:

“A person commits an offence if—
(a) the person sends a message (see section 182),
(b) the message conveys information that the person knows to be false,
(c) at the time of sending it, the person intended the message, or the information in it, to cause non-trivial psychological or physical harm to a likely audience, and
(d) the person has no reasonable excuse for sending the message.”

Most of that would cover the creation of such a story if the creator hoped to cause harm – but did they? And was the harm they intended to cause to a likely audience? Potential rioters themselves may not be harmed, but want to cause harm to others. It will be interesting to see how this pans out – it may be a good test of the law.

Having said that, it isn’t really the key point. There will always be people who try to spread rumours like this – rumours that they want to be true, if not in detail then in substance. They wanted it to be a Muslim asylum-seeker recently arrived by boat. That, of course, is part of the key to successful fake news: create news that people want to believe, and they’re not just more likely to believe it, but more likely to spread it.

It’s about the big accounts

And it’s the spreading that’s the key – there are vast numbers of rumours and fake news out there, but the only ones that have an impact are those that are spread to wide audiences – and the key to that is getting big accounts to spread them. An anonymous account with 20 followers makes no difference. A known account with 2 million followers makes an enormous difference. In the way that Elon Musk’s Twitter/X works, if that account is verified – the paid-for blue ticks – that helps too, as Twitter/X prioritises tweets from verified accounts. Even Nigel Farage recognised this, as he tried to lay the blame on (other) big accounts such as Andrew Tate.

These big accounts are also the key to the third part of the problem – the organisation of the riots. If a small, anonymous account suggests a meeting, it won’t have much effect, but if an influential ‘leader’ suggests it, crowds will turn up.

This has a number of implications – the most important of which is that thinking that what matters are the small accounts, the anonymous accounts, is to fundamentally misunderstand the problem. It’s like trying to cure measles by painting over the spots, one by one. What you need is (a) a vaccine and (b) some way to stop the spreading. Both of these mean you have to deal with the big accounts. Any ‘solution’ that starts by dealing with the small accounts, or dealing with anonymity, is not just bound to fail but will have devastating consequences for the people who rely on anonymity for their own protection – people like the victims of spousal abuse, like children who have escaped abusive parents or are the victims of cyberbullies, like people with names that indicate their religion or ethnic backgrounds. If Islamophobia or anti-Semitism is on the rise, forcing people to label themselves as Muslims or Jews by virtue of requiring real names could have horrible consequences.

We should therefore avoid, at all costs, attacking anonymity or pseudonymity. Instead, if you really want to deal with this kind of problem, there needs to be a concerted attempt to deal with the big accounts. This might be through the law, or through some other kind of sanctions: MPs who are involved could be sanctioned by parliament, broadcast journalists by Ofcom, and so forth. This might be easier than creating new law – amongst other things because framing such a law would be very troublesome, and almost certain to have unforeseen consequences as well as suppressing free speech.

What about the social media companies?

The first thing to be clear about is that this is not about ‘rogue’ algorithms or ‘misuse’ of social media. This is how the algorithms are intended to work – sending people content that they are interested in. It is also highly unlikely that the social media companies would sanction the big accounts of their own accord – these are the accounts that drive engagement, that support the companies’ business models. And, if Elon Musk’s recent behaviour is anything to go by, he’s every bit as likely to magnify these stories and support these accounts as to oppose them.

The Online Safety Act tries to impose a ‘duty of care’ on social media companies, but care for whom? In this case, the harm done is not to their users, but to others. Do they have a duty of care for everyone? It is hard to see how this would work in practice.

There’s already plenty of law…

The last question is whether we need more law anyway. There’s already a lot of law out there. When the dust settles, we’ll see that people have been prosecuted under public order legislation, under malicious communications legislation, for communications offences, and so on. Punishing those who actually riot is not going to be a problem. Punishing those who used social media to instigate these acts is not likely to prove a problem either.

Punishing those behind the acts is another matter. It seems notable to me that of the many proposals being mentioned by politicians so far, none seem even to be trying to hold to account those whose rhetoric, both online and offline, has made it all happen. Until and unless they do, the rest is all irrelevant.

Digital ID cards, and why we should be nervous…

Pretty much the moment Keir Starmer became Prime Minister, his esteemed predecessor Tony Blair wheeled out, yet again, a call for Digital ID. It’s a bit of a pattern: whenever something happens (generally something bad) that has even a peripheral connection to ID, Blair, his foundation or one of his acolytes will come out with the suggestion that digital ID will solve the problem. It seems to be an idée fixe: a panacea that will ‘solve’ terrorism, immigration, policing, housing etc, all at the stroke of a digital pen.

In this case, Blair was talking about immigration – somehow issuing a digital ID card to immigrants, particularly those arriving by small boats, will mean we have ‘control’ over them. We’ll be able to monitor them, know where they are, recall them when needed, and thus get to grips with the apparently overwhelming problem we have with immigration.

It won’t, of course, be able to do this – of which more later – but immediately it was announced the usual cries came out about why ID cards generally are a good idea, and we ought to bring them in immediately. After all, most of continental Europe uses them, and uses them well, so why are we so stubbornly resistant to them in the UK?

On the surface it seems a very sensible answer. Yes, we in the UK are very stubborn about it, from the famous case of Willcock v Muckle back in 1951, where Harry Willcock successfully challenged the police’s use of the ID cards that had been brought in during the Second World War, to the fight against Blair’s attempts to introduce them when he was Prime Minister – attempts that were frustrated right up until Gordon Brown’s time as PM finished. The abandonment of the (incomplete) ID card policy was one of the first acts of the new Coalition government in 2010. Why are we so stubborn about this, despite happily embracing CCTV cameras on every street, and blithely accepting (except for a few admirable activists) the police’s use of live facial recognition technology? Is it that the Brits don’t care about privacy at all, or just that we resist the ‘papers please’ attitude that ID cards seem to represent, because we’re still obsessed with war films and evading the Gestapo like Gordon Jackson and Richard Attenborough in The Great Escape?

I’m sure there’s something in that. We do want to feel heroic, and we do want to feel different from (and better than) continental Europeans, but that really isn’t the whole story. To understand why, we need to look at how identity cards can be (and would be) used in practice. There are two dimensions to this: using a card to assert your identity, in order to claim rights or entitlements, or being required to produce it by some kind of authority, in order to prove who you are, so that they can in some way ‘check up’ on you.

Asserting your identity

Assertion of your identity is a positive act, and is the one that most people think about when seeing ID cards as a good thing. You can use it to prove who you are when you want to do something positive – the same way you use a passport to travel to another country. This is me, you say, and I can prove it.

Papers, please…

The other aspect is when you are required to show your ID. When a police officer stops you on the street and says ‘let’s see some ID’. When you’re minding your own business, but circumstances put you in the way of someone in authority who either wants or has the right to challenge you. This, the ‘papers please’ aspect, is the one that disturbs people – and is the one that Harry Willcock successfully challenged back in 1951. The essence of the challenge back then – and the disturbance now – is to question the right of the authorities to demand your ID without any reason. If you’re just peacefully going about your business, then your identity remains your business. That’s the logic. Britain, the opposers of ID cards would like to think, is not a ‘papers please’ society.

People who are regularly stopped and asked who they are would scoff at the idea that we’re not a papers please society – black kids in inner cities, for example – but it is still something many people cling onto as part of their image of what their country is like.

Voter ID

Voter ID does not quite fit either of these categories, but it illustrates a key point. You don’t have to vote, so it’s in some ways an option that you choose. That means it doesn’t quite fit the classic ‘papers please’ scenario. However, it’s part of normal life, and we should, if we believe in democracy, be encouraging people to vote, rather than putting barriers in their way.

That brings in the question of when is it appropriate to require ID. We require passports for international travel, because that’s what has been agreed as part of the international order. We require driving licences to drive because public safety demands that drivers be able to drive before they’re allowed on the roads – but note that we’re generally not required to produce those driving licences unless something goes wrong. We require proof of age to buy alcohol or cigarettes, because we as a society have agreed that children should not drink or smoke. We require Voter ID, theoretically, to prevent voter fraud – specifically personation. The problem with the Voter ID theory is that the evidence does not suggest that this kind of voter fraud exists in anything but a minuscule way, and certainly not enough to warrant intervention like this. That, though, is a discussion for another time. Voter ID certainly does not require a specific form of ID, just enough identification to reduce the likelihood of personation to an acceptably low level (essentially, a level low enough to remove any potential interference with the democratic process).

One ring…

We do, of course, have sufficient ID systems to do all this already. Driving licences work. Passports work. Kids have a range of ways to convince shops and pubs to let them buy alcohol. So why do we ‘need’ a universal digital ID? From a positive perspective, having a universal system seems attractive. Everyone will have one, it will be a recognisable system that anyone who needs to check will understand. If it’s ‘modern’ it will be both digital and biometric, so it will be (in the eyes of its advocates) impossible to forge, verifiable directly and so forth. Fraud will be minimised. Personation will disappear. We’ll all be protected from the fakers and criminals – that at least is the logic, and part of the attraction. Indeed, a subtext for many people is that we respectable citizens, who don’t have anything to hide, will be delighted to produce our digital ID on demand, to show the officers that we’re trustworthy good people – and that this will protect us from the dodgy people, the criminals, the ‘illegals’, the people who do have something to hide. Anyone against ID cards is supporting criminality. Anyone who refuses to produce a card on demand is obviously suspicious.

When you think about this in the context of immigration enforcement – what Tony Blair was talking about – the implications become a little starker. If immigrants are issued with ID cards and have to show them to ‘prove’ their right to be here, who do we think will be asked to produce them? How will the authorities know when to ask? It’s the people who look like they might be an immigrant, who sound like they might be an immigrant, whose name looks or sounds like an immigrant’s name. So if your skin is dark, if your accent is ‘foreign’, if your name isn’t identifiably ‘British’, the chances of being challenged to produce this ID are increased significantly. This isn’t just a ‘papers please’ society, it’s something qualitatively worse.

Then we come to the digital element of this. Having an ID card is just one part of this. The digital side is another – and an attraction to those in authority. A digital ID card links to a database – that’s the point. Your driving licence links to the DVLA database. Your biometric passport to the passport office. Your work ID card to your employer’s database (to give you access to your buildings etc). A universal digital ID would link to some kind of universal database – and through it to other governmental databases. The idea is direct – produce your ID card, and anything on those databases could get flagged up. Moreover, when you are checked, that act of being checked will produce a record to add to that database. A police officer asks for your ID at an environmental protest? You’re logged as having attended that protest.
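The data-trail described above can be made concrete with a small sketch. This is purely illustrative – all the names and structures here are hypothetical, not any real system’s design – but it shows the essential point: in a centralised digital ID system, the act of being checked is itself recorded, so every encounter with authority leaves a trace.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of a centralised digital-ID registry, in which every
# verification both reads the citizen's record AND appends an audit entry.

@dataclass
class Registry:
    records: dict                                  # id_number -> citizen record
    audit_log: list = field(default_factory=list)  # grows with every check

    def verify(self, id_number: str, checked_by: str, location: str) -> bool:
        found = id_number in self.records
        # The side effect that matters: the check itself is logged, building
        # a trail of where, when and by whom you were stopped.
        self.audit_log.append({
            "id": id_number,
            "by": checked_by,
            "where": location,
            "when": datetime.now(timezone.utc).isoformat(),
            "found": found,
        })
        return found

registry = Registry(records={"AB123": {"name": "J. Doe"}})
registry.verify("AB123", checked_by="PC 4021", location="environmental protest")
```

One check at a protest and the database now holds a permanent record that the holder of that ID attended – regardless of whether anything was wrong.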

If you build it, they will come

If you build a system that allows this kind of checking, that links to a central database, that can be easily queried, what will happen? More uses will be found for that ID. Use it for voting? Check. Police checks at events or protests? Check. At shops to check your age to buy alcohol? Check. Access to rock festivals? Check. As a digital ID to access government websites? Check. As a proof of age to access ‘adult material’ on the internet? Check. Function creep is real – history has shown that again and again. The Regulation of Investigatory Powers Act (RIPA), ostensibly for serious crime, ended up being used for fly-tipping, dog-fouling and checking children’s school catchment areas, amongst other things. This is not a tinfoil-hatted conspiracy theory, but the reality of this kind of project.

What should we do?

The first thing to understand here is that the risks mentioned are real. When embarking on a project like this, those risks have to be understood and mitigated. There’s a reason that this kind of project is less dangerous in most European countries than here – those countries have written constitutions with constitutional protection for privacy. In the UK, we don’t. In the UK, we do it largely on a wing and a prayer – and we have a terrible record of farming out this kind of thing to corporations who both do it on the cheap and have an incentive to profit from the data they gather, and indeed to find other uses for it.

That needs to be dealt with before even considering this kind of thing. The protections need to be in place first, and in our current situation that seems highly unlikely. The Home Office in particular needs at least a thoroughgoing reform, and more likely a replacement, before it can or should be trusted with this kind of thing.

Disclosure minimisation

We also need to think about how the whole thing should be approached. The concept of disclosure minimisation needs to come in here. People should be asked for identification in the minimum number of situations, and the minimum number of people should be authorised to ask for it. It should never be the default. When asked for information, they should be asked for the minimum information. That is, if you need to know someone is old enough to buy alcohol, you don’t need to know anything else – their name, address and so on are irrelevant. An ID card system could be designed to release just the relevant attribute rather than all information. This would mean the minimum data is gathered – and, following the principles of data protection, the data should only be kept for the minimum time. If you need to check someone’s age when they buy alcohol, the data from that check should be immediately deleted – or at least de-identified – so that it does not leave a data-trail of innocent activity.
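The attribute-release idea can be sketched in a few lines. This is a toy illustration under assumed names (real selective-disclosure systems use cryptographic credentials, not plain lookups), but it shows the shape: the verifier asks a single yes/no question, and the answer contains no name, no address, and not even the date of birth itself.

```python
from datetime import date

# Illustrative sketch of attribute-only disclosure. The full record never
# leaves the holder's 'wallet'; only the single requested attribute does.

FULL_RECORD = {
    "name": "J. Doe",
    "address": "1 Example Street",
    "date_of_birth": date(2001, 5, 14),
}

def over_18(record: dict, today: date) -> bool:
    # Standard age calculation: subtract a year if the birthday hasn't passed.
    dob = record["date_of_birth"]
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= 18

def disclose_minimum(record: dict, attribute: str, today: date) -> dict:
    # Release only the requested attribute, never the record itself.
    if attribute == "over_18":
        return {"over_18": over_18(record, today)}
    raise ValueError("attribute not supported")

response = disclose_minimum(FULL_RECORD, "over_18", today=date(2024, 1, 1))
# 'response' is a single boolean, with no identifying data attached
```

A shop checking age this way learns one bit of information and nothing more – and has nothing worth retaining, so there is no data-trail to delete in the first place.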

Grit in the wheels

Finally, we should remember that there are good things about a diverse, ‘messy’ situation. There’s nothing wrong with having a driving licence for driving, a passport for travelling, a credit card for payment, a work ID for access to your workplace. Keeping functions separate, keeping data separate, reduces risks and protects you from misuse, from function creep – and importantly from hacks and data leaks. A universal database would be a major target for hackers. ‘HACK ME PLEASE’ might as well be written on it in letters 100m high. Making things easier for hackers is rarely a good thing.

Why?

The biggest question for advocates of universal digital ID systems is ‘why’? Why do you need it? What problem is it solving that has not already been solved? Will it actually solve that problem?

In practice, these systems are often solutions in search of a problem – hence the reason that Blair and others wheel them out after a wide variety of events, hoping finally to convince people that now is the time.

It really isn’t.

Do we even need an Online Safety Bill?

There are many reasons to be concerned about the #OnlineSafetyBill, the latest manifestation of which has just been launched, to a mixture of fanfares and fury. The massive attacks on privacy (including an awful general monitoring requirement) and freedom of speech (most directly through the highly contentious ‘legal but harmful’ concept) are just the starting point. The likely use of the ‘duty of care’ demanded of online service providers to limit or even ban both encryption and anonymity, thereby making all of us – and in particular children – less safe and less free, is another. The political control of censorship via Ofcom is in some ways even worse – as is the near-certain inability of Ofcom to do the gargantuan tasks being required of it – and that’s not even starting on the mammoth and costly bureaucratic burdens being foisted on people operating online services. Cans of worms like age verification and other digital identity issues are just waiting to be opened, without their extensive downsides even being mentioned. And that’s not all – it’s such a huge and all-encompassing bill that there are too many problems with it to cover in a blog post.

All that, however, misses the main point. Why are we even doing this? Do we even need an Online Safety Bill?

The main reasons the government seem to be doing this are based on what is a kind of classical misunderstanding of the internet. In my 2018 book, The Internet, Warts and All, I wrote about how the way we look at the internet overall shapes how we think it should be regulated. The net is a complex, messy and confusing place at times – it has many warts. The challenge is to see it warts and all: to look at the big picture, to see the messy reality, and approach it accordingly.

Some people don’t even see the warts, so don’t think anything needs to be done – we should leave the internet alone, let it regulate itself. Others see only the warts, and miss the big picture. That’s what lies behind the Online Safety Bill. An obsession with the warts, and a desire to eradicate them with the strongest of caustic medicine, regardless of the damage to the face itself. That’s the view of the internet as a ‘Wild West’, full only of trolls and bots, ravaged by abuse and misinformation, where no-one dares roam without their trusty six-shooter.

The thing is, it’s just not true. Almost all the time, for the vast majority of people, the internet is something they use without much problem. They work, they shop, they get their news and their entertainment, they converse and socialise. They find romance. They buy their cars and homes – not just their books and groceries. They live. The internet does have warts – and no-one should underestimate the impact of trolling or misinformation in particular (there’s a chapter on each in The Internet, Warts and All) but neither should we forget what the internet really is.

If we see only the warts, we end up with disastrous legislation like the Online Safety Bill. If we see the warts, but treat them as warts, we have a chance to do regulation more reasonably, and not do untold damage on the way. As an example, the inclusion of cyber flashing in the bill is very welcome. It’s a wart that can be treated, and without anything in the way of negative consequences. Smaller, piecemeal legislation dealing with particular harms is a far more logical – and effective – way of dealing with the problems we have on the net than grand gestures like the Online Safety Bill, which will almost certainly do far more harm than good.

The gaping void at the heart of the Online Safety Bill

The latest manifestation of the much heralded Online Safety Bill is due to make its appearance tomorrow. It’s a massive bill, covering a wide range of topics and a huge number of issues about what happens online – and yet there’s a gaping void at its heart, a void that means that it will have almost no chance of succeeding in any of its key aims.

There are many things that should worry us about the Online Safety Bill. The vagueness of the ‘duty of care’ that it imposes on online service providers. The deliberately grey area of ‘harmful but legal’ content. Its focus on content rather than behaviour (which means it misses a massive amount of trolling, bullying and hate). The inevitable inadequacy of Ofcom as a regulator for something it knows very little about – clever trolls and others will run rings around it, and will even take joy in doing so. And, indeed, its aim – why do we want the UK to be the safest place to be online rather than the most creative, the most productive, even the best place to be online?

All that is vital, and most of it has been written about by people much more expert than I am in the field. That, however, is not what this piece focuses on. This is about something rather different: a blind spot at the heart of the bill. For all its focus on online harms and online safety, the bill misses how a great deal of the harms take place – because those harms come from the people behind the bill itself. It is easy to focus on evil, anonymous trolls and bots, and on hidden Russian creators of fake news – they’re convenient enemies, particularly right now – but at the heart of a great deal of harm are people very different: mainstream politicians and journalists. Blue tick accounts. The Press. The Online Safety bill says almost nothing about them, and as a result it is highly unlikely to have any kind of success, except on the periphery.

Trolling begins at home

Everyone hates trolls – indeed, the idea that the internet is full of evil trolls was one of the reasons behind the whole online harms approach – but they rarely think the whole thing through. What is generally considered to be trolling encompasses a lot of different activities – but most people’s ideas of what a troll looks like seem to be relatively consistent. Sad, angry, anonymous people – images like furious men tapping away at their keyboards in the basement of their parents’ homes are very common. There is of course some truth in this kind of image – but it’s a tiny part of the picture. Indeed, it’s very much a symptom rather than the disease itself.

Two factors are rarely discussed enough. One is the observation that many (perhaps most) trolls don’t consider themselves to be trolls. Indeed, very much the opposite: they consider their enemies to be trolls, and they themselves are either the victims of trolls or the noble warriors fighting against evil trolls. This is true not only of those debates where there is some kind of relative equality of argument or of power, but of those where to most relatively neutral observers there’s clearly a ‘good’ side and a ‘bad’ side.

The other is to ask how trolls find their victims. How they choose who to target, who to victimise, who to abuse. One of the most direct ways is through a pile-on. That is, someone points at a potential victim, saying ‘look at this idiot,’ or words to that effect, hinting that they deserve to be attacked. When the person pointing has thousands (or millions) of followers, those followers then pile on to the victim.

Who’s the troll here? The big account who just said something relatively innocent (‘look at this idiot’) or the followers who add the abuse, the racism, the misogyny, the death or rape threats? The big account stands back, claiming innocence, and pretending that the trolls had nothing to do with them. And of course those big accounts can be politicians or journalists – indeed some of the worst pile-ons are instigated by the biggest and most mainstream of accounts. MPs. Journalists from big newspapers or broadcasters.

That’s not the only kind of trolling that MPs and journalists engage in – without recognising or acknowledging that it is trolling. Indeed, the minister responsible for the Online Safety Bill, Nadine Dorries, has herself been called out for what many would describe as trolling. And yet she would vehemently deny being a troll – and believe that she is right in doing so.

The trouble is, not only are these kinds of activities by MPs and journalists actually trolling, but they’re much more dangerous trolling than that of the small, anonymous accounts that people tend to focus on. One relatively innocent tweet by someone with 100,000 followers can bring about thousands of vicious attacks. If we want to deal with the viciousness, we need to look at the big accounts, and at the structural trolling that goes on as a result. The Online Safety Bill does nothing for that at all – because it would mean both challenging the whole structure of social media and admitting the role that politicians themselves play in the online harm they claim to be dealing with.

Fakery begins at home

It’s a similar – or even worse – story with harmful misinformation. Again, the pantomime villains are Russian trolls, creating fake news in troll farms outside St Petersburg. These, of course, do exist – but again, they’re just a small part of the picture. As I’ve written before, mainstream politicians such as Jacob Rees-Mogg employ some of the same tactics and methods as those we usually think of as spreading fake news – and he’s far from alone. Fake news and other forms of misinformation do not exist in a vacuum – very much the opposite. Fake news works when it fits with people’s existing prejudices and biases, when it confirms what they already think. So, to make fake news work, you create it to fit in with those prejudices – and you twist reality to fit with those prejudices.

If this sounds familiar, it should. Fake news isn’t something new, it’s just a new manifestation of the techniques employed by politicians and (particularly tabloid) journalists ever since politics and journalism have existed. Of course neither the politicians nor the journalists would be happy to acknowledge this. ‘Spin’ sounds much better than misinformation. And yet the relationship is very close. Spin helps create a fake narrative that is every bit as damaging as actual fake news – and far harder to detect, disprove or oppose.

As with trolling, the effect of all of this is much greater if the accounts spreading it have both credibility and large numbers of followers. That means that the ones that matter are the big, blue tick accounts rather than the dodgy anonymous trolls – and again, the structure of social media that allows information to be spread so rapidly via those big, blue tick accounts. And again, this is not the focus of the Online Safety Bill. Safer to focus on the obviously villainous than acknowledge our own role in villainy.

Who gets a free (press) pass?

One final thought. If the Online Safety Bill gets passed – and it almost certainly will – it will mean that the press is the only bit of the media that is not regulated. Broadcast media has had statutory regulation for a long time – with Ofcom as the regulator. After the Online Safety Bill, the same will be true about social media. And yet those of us with memories long enough to remember the Leveson Inquiry will remember the vehemence with which the press resisted any idea of statutory regulation of the press, as though it were an intolerable affront to free speech.

I don’t think they were necessarily wrong – but they should be clear that statutory regulation of social media is every bit as much of an affront to free speech. Indeed, in many ways a worse one – as it is the ordinary people, rather than the relatively privileged people who run the newspapers and magazines, whose free speech is being curtailed. That ought to matter.

A gaping void

As it is, the Online Safety Bill looks likely to attack the symptoms rather than the causes of online harms. Unless it finds a way to address the underlying problems – and to confront the massive blind spot it has for the role of politicians and journalists – it will be just yet another massive game of Whac-A-Mole, doomed to failure and disappointment.

That, frankly, is what I expect to happen. The bill will be passed, everyone will trumpet how we’re finally taming the Wild West, but nothing will really happen. Trolls will continue trolling – new ones replacing those who do get caught – and misinformation will continue to spread. The powerful will still be unscathed, and the hate will still spread. And a few years later we will have another go. With the same result.

In praise of hiding…

The new government anti-encryption campaign, ‘No Place to Hide’, has a great many problems. It’s based on many false assumptions, but the biggest of all of these is the whole idea that hiding is a bad thing. It can be, of course, when ‘bad guys’ hide from the authorities, which is what the government is grasping at, but in practice we *all* need to be able to hide sometimes.

Indeed, the weaker and more vulnerable we are, the more we need places to hide. The more predators we face, the more we need places to hide. And if we believe – and the government campaign is based on this assumption – that there are a lot of dangerous predators around on the internet – that becomes especially important. Places to hide become critical. Learning how to hide becomes critical. Having the tools and techniques not just available for a few, specially talented or trained individuals but for everyone, including the most vulnerable, becomes critical.

This means that the tools and systems used by those people – the mainstream systems, the most popular networks and messaging services – are the ones where safety is the most important, where privacy is the most important. Geeks and nerds can always find their own way to do this – it’s no problem for an adept to use their own encryption tools, or to communicate using secure systems such as Signal, or even to build their own tools. They’re not the ones that are the issue here. It’s the mainstream that matters – which is why the government campaign is so fundamentally flawed. They want to stop Facebook rolling out end-to-end encryption on its Messenger service – when that’s exactly what’s needed to help.
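The principle behind end-to-end encryption is worth seeing in miniature. The toy sketch below is a Diffie-Hellman key exchange with deliberately tiny, wholly insecure parameters – real systems use vetted implementations such as the Signal protocol, never hand-rolled code like this – but it illustrates the crucial property: the two endpoints derive an identical secret key, while anyone in the middle (including the relaying server) only ever sees the public values.

```python
import secrets
import hashlib

# Toy Diffie-Hellman exchange — illustrative only, NOT real cryptography.
# Tiny parameters chosen for readability; real DH uses ~2048-bit groups.
P = 2**127 - 1   # a prime modulus (a Mersenne prime, here just for the demo)
G = 5            # a generator-like base value

# Each endpoint picks a secret it never transmits.
alice_secret = secrets.randbelow(P - 2) + 1
bob_secret = secrets.randbelow(P - 2) + 1

# Only these public values cross the network (and pass through the server):
alice_public = pow(G, alice_secret, P)
bob_public = pow(G, bob_secret, P)

# Each side combines its own secret with the other's public value...
alice_key = hashlib.sha256(str(pow(bob_public, alice_secret, P)).encode()).digest()
bob_key = hashlib.sha256(str(pow(alice_public, bob_secret, P)).encode()).digest()

# ...and both arrive at the same key. The server, seeing only the public
# values, cannot compute it — that is what 'end-to-end' means.
assert alice_key == bob_key
```

The point of the sketch is exactly the one in the text: the maths is public and an adept can always do this themselves; the policy question is only whether ordinary users of mainstream services get the same protection by default.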

We should be encouraging more end-to-end encryption, not less. We should be teaching our kids how to be more secure and more private online – and letting them teach us at the same time. They know more about the need for privacy than we often give them credit for. We need to learn how to trust them too.

Who needs privacy?

You might be forgiven for thinking that this government is very keen on privacy. After all, MPs all seem to enjoy the end-to-end encryption provided by the WhatsApp groups that they use to make their plots and plans, and they’ve been very keen to keep the details of their numerous parties during lockdown as private as possible – so successfully that it seems to have taken a year or more for information about evidently well-attended (work) events to become public. Some also seem enthused by the use of private email for work purposes, and to destroy evidence trails to keep other information private and thwart FOI requests – Sue Gray even provided some advice on the subject a few years back.

On the other hand, they also love surveillance – 2016’s Investigatory Powers Act gives immense powers to the authorities to watch pretty much our every move on the internet, and gather pretty much any form of data about us that’s held by pretty much anyone. They’ve also been very keen to force everyone to use ‘real names’ on social media – which, though it may not seem completely obvious, is a move designed primarily to cut privacy. And, for many years, they’ve been fighting against the expansion of the use of encryption. Indeed, a new wave of attacks on encryption is just beginning.

So what’s going on? In some ways, it’s very simple: they want privacy for themselves, and no privacy for anyone else. It fits the general pattern of ‘one rule for us, another for everyone else’, but it’s much more insidious than that. It’s not just a double-standard, it’s the reverse of what is appropriate – because it needs to be understood that privacy is ultimately about power.

People need privacy against those who have power over them – employees need privacy from their employers (something exemplified by the needs of whistleblowers for privacy and anonymity), citizens need privacy from their governments, victims need privacy from their stalkers and bullies and so on. Kids need privacy from their parents, their teachers and more. The weaker and more vulnerable people are, the more they need privacy – and the approach by the government is exactly the opposite. The powerful (themselves) get more privacy, the weaker (ordinary people, and in particular minority groups and children) get less or even no privacy. The people who should have more accountability – notably the government – get privacy to prevent that accountability – whilst the people who need more protection lose the protection that privacy can provide.

This is why moves to ban or limit the use of end-to-end encryption are so bad. Powerful people – and tech-savvy people, like the criminals used as the excuse for trying to restrict encryption – will always be able to get that encryption. You can do it yourself, if you know how. The rest of the people – the ‘ordinary’ users of things like Facebook Messenger – are the ones who need it, to protect themselves from criminals, stalkers, bullies and so on – and the ones that moves like this from the government are trying to stop from getting it.

The push will be a strong one – trying to persuade us that in order to protect kids etc we need to be able to see everything they’re doing, so we need to (effectively) remove all their privacy. That’s just wrong. Making their communications ‘open’ to the authorities, to their parents etc also makes it open to their enemies – bullies, abusers, scammers etc, and indeed those parents or authority figures who are themselves dangerous to kids. We need to understand that this is wrong.

None of this is easy – and it’s very hard to give someone privacy when you don’t trust them. That’s another key here. We need to learn who to trust and how to trust them – and we need to do our best to teach our kids how to look after themselves. To a great extent they know – kids understand privacy far more than people give them credit for – and we need to trust that too.

BT’s ‘Walk me home’: tech solutionism at its worst

Magical thinking

It’s all too easy to see a difficult, societal problem and try to solve it with a technological ‘magic wand’. We tend to treat technology as magical a lot of the time – Arthur C Clarke’s Third Law, from as far back as 1962, that “Any sufficiently advanced technology is indistinguishable from magic” has a great deal of truth to it, and is the route to a great many problems. This latest one, BT’s idea that women can ‘opt-in’, probably via an app, to being tracked in real-time as they walk home alone, is just the latest in a long series of these kinds of ideas. Click on the app, and you have a fairy godmother watching you, ready to protect you from the evil monsters who might be out to get you.

More surveillance doesn’t mean more security

That’s the essence of this kind of thinking. By tech, we can sort everything out. And, as so often, the method by which this tech will solve everything is surveillance. It’s another classical trap – the idea that as long as we can monitor things, track things, gather more data, we can solve the problems. If only we knew, if only we were able to watch, everything would be OK.

This is the logic that lies behind ideas such as backdoors into encryption – still being touted on a big scale by many in governments all over the world – which would, just like BT’s ‘Walk Me Home’, actually reduce security and increase risks for most of those involved. Just as breaking encryption will make children more vulnerable, getting women to put themselves under real-time surveillance at their key moments of risk will be likely to make them more vulnerable rather than less.

Look at the downsides…

It will make them easier to identify, and easier to locate – they will be effectively ‘registered’ on the system through downloading and activating the app, it will record their location, their regular routes – and the times they use them, their phone numbers and more. It will identify them as vulnerable – and make them even more of a target.

This, again, is a classical trap of tech solutionism. It’s easy just to look at a piece of tech in terms of how it’s intended to be used, and in terms of the intended user. In this case, the assumption is that the people tracking the relevant woman will only be people who have her best interests at heart, and who will only intervene in the best way, as the system intends. The good police officer, acting in the best possible way.

All systems – and all data – will be misused

This is in itself magical thinking, and the opposite of the way we should be looking at this. We have to be aware that all systems will be misused. History shows this – particularly in relation to technology. Just as one example, there is a whole series of data protection cases involving police officers misusing their ‘authorised’ access to data – from the Bignell case in 1998, where officers used their access to a motor vehicle database to find out details of cars for personal purposes, onwards. It must never be forgotten that Wayne Couzens was a serving police officer when he abducted, raped and murdered Sarah Everard.

This kind of a system will also create a database of vulnerable women – together with their personal details, their phone numbers, their home addresses, the routes they take to get home – including when they use them – and that they feel vulnerable coming home. This will be a honeypot of data for any potential stalker – and again, we must not forget that Wayne Couzens was a serving police officer, and that he planned the abduction, rape and murder of Sarah Everard carefully. Systems like this would be a perfect tool for another would-be Wayne Couzens – and also to ‘smaller scale’ creeps and misogynists. The plethora of stories about police officers and others misusing their position to pester women – and worse – that have come out in the wake of the abduction, rape and murder of Sarah Everard should make it abundantly clear that this isn’t a minor concern.

A route to victim-blaming – and infringing women’s rights

Perhaps even more importantly, systems like this are part of a route to blame the victim for the crime. ‘If only she’d used her Walk Me Home she would have been OK’ could be the new ‘if only she hadn’t dressed provocatively’. It puts pressure on women to let themselves be tracked and monitored – as well as making it their fault if they don’t use this ‘tool’ to save themselves.

This in itself is an infringement on women’s rights. Not just the right to be safe – which is fundamental – but the right to privacy, to freedom of action, and much more. It’s treating women as though they are like pets, to be microchipped for their own protection, registered on a database so that men can protect them. And if they don’t take advantage of this, well, they deserve what they get.

Avoiding the issue – and avoiding responsibility

All of which brings us back to the real problem: male violence. Tech solutionism is about attempting to use tech to solve societal problems – and the societal problem here is male violence. So long as the focus is on the tech, and the tech that can be used by the women, the focus is off the men whose violence is the real problem. And so long as we think that problem can be solved with an app, we fail to acknowledge how serious a problem it is, how deep a problem it is, and how serious a solution it requires.

It also means that many of those involved avoid taking the responsibility that they have for the problem. The police. The Home Office. Men. Avoiding responsibility has become an art form for the Metropolitan Police, and for Cressida Dick in particular. Some of the officers who shared abusive messages with Wayne Couzens are still working at the Met – and those are just the ones that we know about. This problem is deep-set. It is societal.

Societal problems need societal solutions

The bottom line here is that this is a massive societal problem – and that is something that won’t be solved by an app. It requires a societal solution – and that isn’t easy, it isn’t quick, and it isn’t something that can be done without pain and sacrifice. The pain and sacrifice, though, should not come from the victims. At the moment, and with ‘solutions’ like BT’s ‘Walk Me Home’, it is only the victims who are being expected to sacrifice anything. That is straightforwardly wrong.

The starting point should be with the police. That there have been no resignations – least of all from Cressida Dick – is no surprise at all. Beyond a few pseudo-apologies and a concerted attempt to present Couzens as an ‘ex’ police officer, there’s been almost nothing. He was a serving officer when he did the crime. The Met should be facing radical change – if it expects to regain trust, it must change. Societal solutions mean that we need to be able to trust the police.

It is only when we can trust the police that technological tools like BT’s ‘Walk Me Home’ have a chance of playing a part – a small part – in helping women. The trust has to come first. The change in the police has to come first. Without that, we have no chance.

Children need anonymity and encryption!

In recent weeks, two of the oldest issues on the internet have reared their ugly heads again: the demand that people use their ‘real names’ on social media, and the suggestion that we should undermine or ban the use of encryption – in particular end-to-end encryption. As has so often been the case, the argument has been made that we need to do this to ‘protect’ children. ‘Won’t someone think of the children’ has been a regular cry from people seeking to ‘rein in’ the internet for decades – this is the latest manifestation of something with which those concerned with internet governance are very familiar.

Superficially, both these ideas are attractive. If we force people to use their real names, bullies and paedophiles will be easier to catch, and trolls won’t dare do their trolling – for shame, perhaps, or because it’s only the mask of anonymity that gives them the courage to be bad. Similarly, if we ban encryption we’ll be able to catch the bullies and paedophiles, as the police will be able to see their messages, the social media companies will be able to algorithmically trawl through kids’ feeds and see if they’re being targeted and so forth. That, however, is very much only the superficial view. In reality, forcing real names and banning or restricting end-to-end encryption will make everyone less safe and secure, but will be particularly damaging for kids. For a whole series of reasons, kids benefit from both anonymity and encryption. Indeed, it can be argued that they need to have both anonymity and encryption available to them. A real ‘duty of care’ – as suggested by the Online Safety Bill – should mean that all social media systems implement end-to-end encryption for their messaging and make anonymity and pseudonymity easily available for all.

Children need anonymity

The issues surrounding anonymity on the internet have a long history – Peter Steiner’s seminal cartoon ‘On the Internet, Nobody Knows You’re a Dog’ was in the New Yorker in 1993, before social media in its current form was even conceived: Mark Zuckerberg was 9 years old.

It’s barely true these days – indeed, very much the reverse a lot of the time, as the profiling and targeting systems of the social media and advertising companies often mean they know more about us than we know ourselves – but it makes a key point about anonymity on the net. It can allow people to at least hide some things about themselves.

This is seen by many as a bad thing – but for children, particularly children who are the victims of bullies and worse, it’s a critical protector. As those who bully kids are often those who know the kids – from school, for example – being forced to use your real name means leaving yourself exposed to exactly those bullies. A real names requirement becomes a tool for bullies online – and forces victims either to accept the bullying or avoid using the internet. This, of course, is not just true for bullies, but for overbearing parents, sadistic teachers and much worse. It is really important not to just think about good parents and protective teachers. For the vulnerable children, parents and teachers can be exactly the people they need to avoid – and there’s a good reason for that, as we shall see.

Some of those who have advocated for real names have recognised this negative impact, and instead suggest a system like ‘verified IDs’. That is, people don’t have to use their real names, but in order to use social media they need to prove to the social media company who they are – providing some kind of ID verification documentation (passports, even birth certificates etc) – but can then use a pseudonym. This might help a little – but has another fundamental flaw. The information that is gathered – the ID data – will be a honeypot of critically important and dangerous data, both a target for hackers and a temptation for the social media companies to use for other purposes – profiling and targeting in particular. Being able to access this kind of information about kids in particular is critically dangerous. Hacking and selling such information to child abusers in particular isn’t just a risk, it is pretty much inevitable. The only way to safeguard this kind of data is not to gather it at all, let alone put it in a database that might as well have the words ‘hack me’ written in red letters a hundred feet tall.

Children need encryption

Encryption is a vital protection for almost everything we do that really matters on the internet. It’s what makes online banking even possible, for example. This is just as true for kids as it is for adults – indeed, in some particular ways it is even more true for kids. End-to-end encryption is especially important – that is, the kind of encryption that means only the sender and recipient of a message can read it – not even the service that the message is sent over.
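To make the principle concrete, here is a toy sketch – emphatically not any real messenger’s protocol (services like WhatsApp use the far more sophisticated Signal protocol). The point it illustrates is simply this: when only the two ends hold the key, the service carrying the message sees nothing but unreadable ciphertext.

```python
# Toy illustration of the end-to-end principle (a one-time pad).
# This is NOT how WhatsApp or any real messenger works -- it just
# shows that whoever lacks the key, including the carrier in the
# middle, cannot read the message.
import secrets


def encrypt(message: bytes, key: bytes) -> bytes:
    """XOR each message byte with the corresponding key byte."""
    return bytes(m ^ k for m, k in zip(message, key))


# XOR is its own inverse, so decryption is the same operation
decrypt = encrypt

message = b"meet you after school"
# The key is shared only by the two ends of the conversation
key = secrets.token_bytes(len(message))

ciphertext = encrypt(message, key)   # this is all the service ever sees
recovered = decrypt(ciphertext, key)  # only a key-holder can do this
assert recovered == message
```

Only the holders of `key` can turn the ciphertext back into the message; a service (or an eavesdropper, or a government agency) that handles only `ciphertext` learns nothing about its content. That asymmetry is exactly what ‘breaking’ end-to-end encryption destroys – for everyone, not just the ‘bad guys’.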

The example that Priti Patel and others are fighting in particular is the implementation of end-to-end encryption across all of Facebook’s messaging systems – it already exists on WhatsApp. End-to-end encryption would mean that not even Facebook could read the messages sent over the system. Opponents of the idea think it means that they won’t be able to find out when bullies, paedophiles etc are communicating with kids – bullying or grooming, for example – but that misses a key part of the problem. Encryption doesn’t just protect ‘bad guys’, it protects everyone. Breaking encryption doesn’t just give a way in for the police and other authorities, it gives a way in for everyone. It removes the protection that kids have from those who might target them.

End-to-end encryption protects against one other group that can and does pose a very significant risk to kids: the social media companies themselves. It should be remembered that the profiling and targeting of kids that is done by the social media companies is itself a significant danger to kids. In 2017, for example, a leaked document revealed that Facebook in Australia was offering advertisers (and hence not just advertisers) the opportunity to target vulnerable teens in real time.

“…By monitoring posts, pictures, interactions and internet activity in real-time, Facebook can work out when young people feel “stressed”, “defeated”, “overwhelmed”, “anxious”, “nervous”, “stupid”, “silly”, “useless”, and a “failure”,”

Facebook, of course, backed off from this particular programme when it was revealed – but it should not be seen as an anomaly but as part of the way that this kind of system works, and of the harm that the social media services themselves can represent for kids. End-to-end encryption could begin to limit this kind of thing – only to a certain extent, as the profiling and targeting mechanisms work on much more than just the content of messages. It could be a start, though, and as kids move towards more private messaging systems the scope for this kind of harm could be reduced. If more secure, private and encrypted systems become the norm, children in particular will be safer and more secure.

Children need privacy

The reason that kids need anonymity and encryption is a fundamental one. It’s because they need privacy, and in the current state of the internet anonymity and encryption are key protectors of privacy. More fundamentally than that, we need to remember that everyone needs privacy. This is especially true for children – because privacy is about power. We need privacy from those who have power over us – an employee needs privacy from their employer, a citizen from their government, everyone needs privacy from criminals, terrorists and so forth. For children this is especially intense, because so many kinds of people have power over children. By their nature, they’re more vulnerable – which is why we have the instinct to wish to protect them. We need to understand, though, what that protection could and should really mean.

As noted at the start, ‘won’t someone think of the children‘ has been a regular mantra – but it only gives one small side of the story. We need not just to think of the children, but to think like the children and think with the children. We need to move more towards thinking from their perspective, and not just treat them as though they need to be wrapped in cotton wool and watched like hawks. We also need to prepare them for adulthood – which means instilling in them good practices and arming them with the skills they need for the future. That means anonymity and encryption too.

Duty of care?

Priti Patel has suggested that the duty of care could mean no end-to-end encryption, and Dowden has suggested everyone should have verified ID. There’s a stronger argument in both cases for precisely the opposite – that a duty of care should mean that end-to-end encryption is mandatory on all messenger apps and messaging systems within social networking services, and that real names mandates should not be allowed on social networking systems. If we really have a duty of care to our kids, that’s what we should do.

Paul Bernal is Professor of Information Technology Law at UEA Law School