Privacy vs. openness

I created this poll on Twitter:

137 people replied, and although a clear majority (77%) were at least somewhat concerned, 23% said they were not at all concerned. I am thinking of soon creating the same poll again, but this time targeting academics interested in open source. There will of course be a great overlap in respondents, but that doesn't matter; I am curious to see whether we'd get similar results there: about a fourth of them not at all concerned about surveillance and lack of privacy online.

I have heard colleagues in open and replicable science, both within and outside of IGDORE, arguing that privacy concerns are the opposite of openness and transparency.

I would be very curious to hear people's thoughts on this. How and why can we care about privacy while at the same time being strong proponents of openness? Where is the line between good and bad openness?

Openness when other people put their trust in you, putting you in a position to hurt them.

Privacy when other people are in a position to hurt you.

Openness:

  • Government institutions
  • Software that is to be run on other people’s computers or data
  • Scientific findings

Privacy:

  • Individuals
  • Family
  • Friendships
  • Military
  • Medical

This is my very personal stance. I care more about openness than privacy, and this is true in work as well as in personal relationships.

However, I am also starting to believe that total openness (or not being concerned about privacy) can damage others when they are not part of the equation. Example: I do not really care if my web history and preferences are stored by algorithms and sold to advertising companies. I do not buy their stuff anyway, and I am more concerned about making annoying ads disappear from my web pages. However, my data can be part of a much larger dataset. Its analysis can influence marketing strategies worldwide and affect millions of people. Hence, I should care more about my personal privacy when its violation can hurt others.

I don’t know if this is a meaningful contribution. I just wanted to share some thoughts 🙂


Something that seriously concerns me about mass surveillance is that it’s not only about protecting myself and my family, but also about protecting people whom I don’t know and never will know. And this goes well beyond marketing strategies. For example, we know today that collected metadata is being used by governments to identify and locate suspected terrorists, and ultimately to send drones to assassinate them (see e.g. The Intercept, 2014). We can assume that they will often correctly identify the person they are looking for. However, it should be considered established today that the drones occasionally assassinate innocent people.

[A]s the former JSOC drone operator recounts, tracking people by metadata and then killing them by SIM card is inherently flawed. The NSA “will develop a pattern,” he says, “where they understand that this is what this person’s voice sounds like, this is who his friends are, this is who his commander is, this is who his subordinates are. And they put them into a matrix. But it’s not always correct. There’s a lot of human error in that.” [Quote from The Intercept (2014) article]

So how do they use the metadata to identify the targets? Well, one crucial point in this process is often overlooked by people who don’t care about privacy: the very first thing to do is to narrow down the number of relevant people. Let me explain this process with an example from a criminal investigation of serial rapes that took place in Umeå, a small town in Sweden, between 1998 and 2006: the so-called Haga Man case.

The police had collected his DNA from the crime scenes, good-quality DNA, so if only they managed to test the right person, the case would be solved. But the investigation never led them to a suspect who was a positive match. Basically, they were at a point in the investigation where most adult men in Umeå could be a potential suspect. The police asked for access to the medical PKU lab, where DNA from most Swedish citizens born in Sweden is stored, but the PKU lab refused, because it is to this date not allowed to use the PKU samples for criminal investigations. So finally the police asked adult men in Umeå to help the investigation by coming in to the station themselves to provide DNA tests, so that the police could narrow down the number of suspects through exclusion. DNA tests from 776 men were collected. In the end, this mass testing was not what solved the case (the perpetrator was found through tips from the public), but it shows how an identification and locating procedure works in criminal and intelligence settings: data from the irrelevant/innocent people is crucial in finding the actual target.
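The exclusion logic described above can be sketched in a few lines of code. This is only a toy illustration (not any real forensic or intelligence system), with hypothetical pool sizes loosely inspired by the case: every non-matching sample volunteered by an innocent person shrinks the set of remaining candidates.

```python
# Toy sketch: narrowing a suspect pool through exclusion.
# Each volunteered sample that does NOT match the crime-scene
# evidence removes one innocent candidate from the pool.

def narrow_pool(candidates, known_non_matches):
    """Return the candidates who have not yet been excluded."""
    return {c for c in candidates if c not in known_non_matches}

# Hypothetical numbers: a pool of 20,000 adult men, of whom
# 776 volunteer samples that all turn out to be non-matches.
pool = {f"person_{i}" for i in range(20_000)}
volunteers = {f"person_{i}" for i in range(776)}

remaining = narrow_pool(pool, volunteers)
print(len(remaining))  # 19224 candidates still in the pool
```

The point is that the data contributed by the 776 innocent volunteers is what does the narrowing: the search space shrinks only because irrelevant people handed over their data.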

And my understanding is that this is exactly how it works with mass surveillance and the collection of metadata. When we don’t protect our own privacy online, we are all helping governments out by handing them our data, which they can use to narrow down their searches. Some may want to help governments that way because they trust them to do the right thing and find the right targets. Others, like myself, are very concerned, don’t want to assist in any way, and therefore try not to provide any data.

Facebook is really huge in police and intelligence settings; they use it all the time for everything. Similar thing with Twitter. Anyone who is serious about not participating in mass surveillance should really not be on Facebook and Twitter. I am on both. And I would be among the first to join a public campaign to move to a more privacy-respecting platform. Maybe something like Free Our Knowledge (ping @cooper.smout)?

The analysts have a bachelor’s in psychology

It could also be mentioned that the analysts who connect the different pieces of metadata are people like you and me: people with a behavioural-science background, often with a particular focus during their education on criminology or forensic/investigative psychology. My understanding is that they typically have a bachelor’s or master’s degree in these areas. (I myself applied for a few positions like that before I became a PhD student, but I was never called for an interview. I guess my profile isn’t really what they’re looking for…) What I want to say with this is that it’s easy to get the picture of the analysts really knowing what they’re doing, that they somehow have these magical powers to make the correct decisions. But if we psychologists try to put ourselves in the same position and ask “how sure am I that I would be able to correctly identify someone if I had this huge amount of metadata?”, then I think we all realise how extremely shaky the process must be. And haven’t we all, as researchers/PhDs in psychology, been concerned about students rolling their eyes at research participants, making fun of them, drawing the wrong conclusions, being way too overconfident in their work, etc.? Remember how confident we all were before finishing our PhDs? Well, those analysts have been our students, and they never continued to PhD level.

I liked this differentiation. I guess the military item should be supplemented with police and intelligence authorities?

I would really love to read more about this topic and differentiation. Anyone got anything to recommend?

You may find some food for thought in this article.
