Privacy

The basis of privacy is:

  • I don’t want people knowing what I am doing when I am not in a public space.

  • I don’t want anybody to have access to my communications that I am not broadcasting publicly.

Why not? Well:

  • I am entitled to act and think in any way I want, in private, and without harming anybody else.

  • I do not want people to use my private activity or messages against me in any way.

These rights are sacrosanct. The practicality is that, without the ability to develop the self in private - without the ability to develop a distinct persona, distinct ideas, and distinct strategies - there would be very little self (as Ayn Rand knew) and, as a consequence, a slower development of humanity: fewer interesting people leads to fewer interesting ideas. When you invade people’s privacy, and feed them information based on that invasion, the effect is an increase in the homogeneity of ideas. The worst possible outcome, of course, is that privacy is invaded, a statistical or algorithmic entity starts to work on the data, a group or individual is branded a likely threat to a hegemony, and a murder takes place - by an automated machine (as we saw in the Obama-era drone strikes) or by the state (as we saw in Russia and East Germany). The risk of catastrophe is so large that we, as a modern society, seek to protect privacy.

It used to be that the only organisations with the infrastructure and incentive to invade privacy on a massive scale were governments. Journalists and competitive people and organisations became interested in invading privacy too, but the state apparatus protected against this - laws protected privacy. Nowadays, however, we have mass invasions of privacy by technology companies. They typically trace individuals in a space those individuals believe is private - their use of the internet. The tech companies assemble a ‘fingerprint’-style identity for each internet user (albeit probably anonymised), identifying interests, associates and a variety of other characteristics. They then give paying subscribers (often marketers or political organisations) the ability to present their products to those individuals on websites. Normally this is relatively harmless advertising - there is an argument that the invasion of privacy leads to an enhanced experience for the consumer, who is shown the most interesting things he or she could possibly see. That, however, does not justify the risk. What we have seen with purported Russian influence in US elections and the Cambridge Analytica scandal is that when these things go wrong, they can affect our way of life and have the potential to cause catastrophe. The long-term effects of targeted advertising are more difficult to quantify.

The social harm from people being presented only with information that conforms to their own beliefs is ossification. We know that a diversity of beliefs and influences is central to developing inter-societal tolerance. More importantly, to put it in 1984 terms: if everyone has the same information, everyone thinks the same, and then we get no - or much slower - development of society.

The worst possible outcome is that the profile developed in private browsing history becomes identified with an individual and that is used against the individual. This raises the spectre of automated identification based on statistics - automated policing and profiling, automated identification of targets for theft and so on; as well as the ability to identify individuals, buy their private information and use it against them. The problems with statistics are very well known - if you stereotype based on probabilities (even if you get the probabilistic theory right, which is difficult), by definition there will be a percentage of the identified group that does not meet the stereotype. The automated identifier will make mistakes, and it is real, innocent people that will suffer those mistakes, without the ability to appeal before they suffer.

Moving through the levels of privacy is instructive. It is clear I would like the tech companies to stop fingerprinting me, selling my profile to the highest bidder, and showing me repetitive advertisements. But the practical harm is not great - it causes me little distress apart from the irritation of seeing something I have already seen before. That said, the tech companies should be stopped: the incremental benefit of advertising is offset by the real possibility of distress, or of material societal effect - a risk that increases the longer they are allowed to persist. Fortunately the law agrees with me - privacy laws have been enacted in many countries, but enforcement is a problem. How should society stop companies from taking private data and selling it?
