We are on the record. With or without our consent, data about us has been recorded, stored, analysed and used to predict our behaviours and gain insights into our brightest dreams and darkest secrets. Data is nothing more than our recorded actions and words: it is not good or bad in its own right. Data can give us the information we need to cure cancer, or it can be weaponised and used to steer our behaviours. What is problematic is not that data can be exploited, but that we never consented to the exploitation. When we signed up for Facebook, we were never asked whether it would be alright if the platform shared what it had learned about us with political campaigners and advertisers, actors who, unbeknownst to most of us, went on to target disinformation campaigns at those most susceptible to their messaging.
The answer seems obvious: we need to give the user more control over the data they share and more insight into how it might be used. It is the solution put forth by many a privacy advocate. Unfortunately, while it seems sensible, this approach is misguided, or at the very least incomplete. Here’s why. Remember that time a marketing stunt unintentionally revealed the location of secret US military bases? It happened in late 2017, when Strava, an app that enables runners and cyclists to track their exercise routes, aggregated all the routes generated by its users into a data-visualisation map for all the world to see. What it did not realise was that some of its users were US soldiers, who had used the app while running around their military bases. Things started to unravel when an Australian university student spotted exercise routes in remote areas such as Afghanistan and quickly determined that these patterns revealed US military bases. He was right.
This example shows how difficult it is to foresee the consequences of us, consensually, sharing data. The problem is that most of the data we share is not just about us: it is often about other people and things as well. This is as true for our running routes as it is for our text messages, family pictures and DNA. Say, in a moment of self-discovery, you decided to send your DNA to a genetic-testing service. You would not just be sharing data about yourself. You would also be sharing data about your family and their future offspring. Should it be up to you alone to decide who collects this data?
Another example. Some UK-based health insurers have started to give rewards to people who volunteer information about themselves. Did you connect your Fitbit to the insurance app? Here’s a free cinema ticket. Can you prove you went to a gym today? A free cappuccino. At first sight, this scheme appears harmless. Nobody is forced to share data, and those who do get fun perks without penalties. Now fast-forward five years. By now, all healthy, tech-savvy individuals have opted into the service, enjoying their many caffeinated movies. But what does this tell us about those who have not opted in? Can we assume their unwillingness to upload their fitness logs means they are less healthy? What is to stop an insurance company from charging them higher rates? Clearly, even when sharing data makes sense to somebody on an individual level, we should be mindful of the negative externalities for society as a whole.
If individual consent is insufficient, what is the alternative? Just as it would be ludicrous to expect any one person to individually evaluate whether the air they breathe is toxic every time they inhale, we cannot expect people to make a million tiny decisions about their data on a daily basis. Strong defaults and safety standards are imperative. Rather than place the burden of consent solely on the shoulders of the individual, we should collectively decide what data we want to collect and give access to, and under what conditions. Such an exercise should weigh the interests of different groups, while protecting the most vulnerable. Left-wing movements have a history of navigating the tensions between collective harm and individual freedom. It’s time for their voice to be heard in this debate as well. Ultimately, if even an organisation with the analytical resources of the US army could not foresee the consequences of sharing data, can we really expect individuals to make more informed decisions?