

Privacy

Review

By this point, I have synthesized a framework for modelling communication issues. I have begun deriving ethical principles from this framework, most notably the principles of receiver/sender symmetry and only humans communicate. Finally, I have demonstrated not only why software belongs in this new framework, but why software's ``mutable expression'' shatters the old frameworks and renders them literally meaningless. It is now time to expand the focus a bit, apply the framework to other issues, and see what more can be learned from a careful examination of them. Examining ``privacy'' will exercise the model well.

Privacy: Traditional Motivation

``Traditional'' Privacy Defined

As we did for free speech and censorship, I wish to define ``privacy'' more carefully, keeping the original sense of the term while extending it as necessary to preserve that sense under modern pressures. The Merriam-Webster Online Collegiate Dictionary defines privacy as

privacy
1. a : the quality or state of being apart from company or observation : SECLUSION b : freedom from unauthorized intrusion <one's right to privacy>
I drop the second meaning, ``a place of seclusion'', which is considered archaic and not germane to our point, and the third meaning, ``a private matter : SECRET'', which is basically redundant. For our purposes, we may also drop 1a; it contributes to our general understanding of privacy, but communication ethics does not come into play when one is alone and no communication is occurring. Thus, we will deal primarily with ``privacy'' in the sense of 1b: ``freedom from unauthorized intrusion''.

United States Legal Basis

In the United States, the right to privacy derives from the Fourth Amendment to the United States Constitution, which is part of the Bill of Rights:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
You can see the word ``privacy'' never directly appears in the Amendment, but based on the definition above (``freedom from unauthorized intrusion''), it's clear how this relates to a person's right to privacy from the government. The doctrine of privacy has subsequently been heavily interpreted by the Supreme Court, in ways that can be, and have been, the subject of lengthy books.

Traditional Privacy Broken Down

I see ``traditional'' privacy as having two aspects: freedom from surveillance (the collection of information about you), and freedom from having the collected information shared with others.

I define ``surveillance'' as ``collecting information about people''. I deliberately leave out any considerations of ``intent''. When you accidentally look into your neighbor's window and happen to see them, for the purposes of this essay, that's ``surveillance'', even though I'd never use the term that way normally. I'd like a more neutral term but I can't think of one that doesn't introduce its own distortions.

The reason I believe intent shouldn't enter into it at the most fundamental level is that the intent of the collector has no effect on the data collected. Nor does the intent of the collector constrain what will be done with the data in the future; police investigations routinely use data that was collected for accounting purposes, such as phone records. Such intent is useful in the context of a specific problem, but it is not worth clouding the issue by trying to make it part of the fundamental definition of privacy. ``Intent'' is a secondary consideration at best. What fundamentally matters is that surveillance has occurred, and information has been collected.

This corresponds pretty strongly with the dictionary definition, but in normal usage with regards to communication issues, there's clearly another aspect as well: control over who the collected information is shared with.

Analysis

Despite the dictionary definition, ``not sharing information'' is the more fundamental of the two aspects of privacy. If surveillance seems to occur, yet the information collected is never shared with a human, in a sense privacy has not truly been violated, even if people may feel it has been.

The real privacy concerns people have today fundamentally revolve around who has access to what information, not what information is collected. One common straw man argument attributed to privacy advocates goes like this: ``Privacy advocates want to never give anybody their information, such as their address. But if you wish to order a book to be delivered to your house, you must divulge your address to somebody in order to accomplish this. Since this information must be known by somebody in order to do business, it is not possible to maintain privacy and do business. Since business is important, privacy must fall by the wayside.'' The ``straw man'' nature of this argument lies in the definition of privacy as ``not collecting information''. Nobody disagrees that information is necessary to do business; it is what is done with the information afterwards that constitutes a privacy breach, not the collection itself.

(It is also worth observing that less information is strictly necessary than might be obvious at first, which is another subtle weakness in this straw man argument. One common example is a Post Office box, which does not correspond to a physical location as an address does. The black market uses other techniques to obscure sources and destinations, while still doing business. One method frequently seen in the movies in the context of paying off a kidnapper is the ``dead drop''. There are ways of making it difficult to collect useful information if the parties are motivated enough.)

I can't prove it, but I would suspect that the vast majority of information collected has some legitimate use, and is not just random surveillance for no apparent reason. Therefore, for the purposes of this essay, I will mostly drop ``Freedom From Surveillance'' from my consideration. The part of surveillance I want to talk about is captured entirely by considering how surveillance information is communicated.

To continue to refine the generic meanings of the term ``privacy'' down to what I think is truly fundamental, it is worth noting another distinction based on this separation. What we would typically consider ``surveillance'' is only (legally) available to the government; it is illegal for a private party to engage in many surveillance activities, for a variety of reasons: powers reserved to the government (go ahead... try to get a search warrant as a private citizen...), the inability to place cameras in locations owned by the government, and practical resource considerations. The only privacy issues concerning individuals and corporations are those concerning the sharing of information. Governments also have the ability to force information sharing to occur, especially in law enforcement investigations, because of the natural position of power they occupy. So while in theory I'd draw the separation as in 8.2.3, when following practical news stories, I tend to group them into two categories: corporate privacy invasion, and government privacy invasion. Similar to the difference I drew between ``censorship'' and ``free speech'', while the effects of corporate privacy invasion and government privacy invasion may be somewhat similar, the methods used to accomplish them, and the corresponding countermeasures necessary, are quite different, and can not be lumped together as one issue without loss of clarity.

Communication Privacy

Let us try to cast this refined version of the concept of ``privacy'' in terms of the communication model. Despite the fact that I consider discussing the ethics of surveillance beyond the domain of this essay, it is worth observing that one kind of surveillance, the wire tap, can be easily modelled:

Wire Tap

Figure 22: Wiretapping

A wire tap occurs for some connection when a wire tapper interferes with the medium in such a way that it sends a copy of the message to the wire tapper. A wire tap is usually targeted at a particular entity, which may be either the sender or the receiver of this particular connection, and the wire tap is without the consent of that entity. (If the entity did consent to this arrangement, we would model that as the entity opening new connections to the wiretapper and deliberately sending new messages, containing copies of the original message, which is an entirely different situation ethically.)

In accordance with the only humans communicate property, it is not a wiretap if no wire tapper ever sees the message. In theory, when one sends a message over the Internet or a phone call over the circuit-switched network, any number of copies of the message are made en route to the receiver. As long as the message is not stored and nobody ever sees it, that's not a wiretap; it's just normal operation.
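
To make this concrete, here is a minimal sketch of a tapped connection in the communication model; the class names (Entity, Connection) and the tapper field are my own illustrative choices for this essay's informal model, not part of any formal vocabulary.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Entity:
    name: str
    inbox: List[str] = field(default_factory=list)

@dataclass
class Connection:
    sender: Entity
    receiver: Entity
    # A wiretap is interference with the medium: a third party, without the
    # consent of the targeted entity, receives a copy of every message sent.
    tapper: Optional[Entity] = None

    def send(self, message: str) -> None:
        self.receiver.inbox.append(message)    # normal delivery
        if self.tapper is not None:
            self.tapper.inbox.append(message)  # covert copy to the wire tapper

# Copies made purely in transit that no human ever sees (buffers, relays) are
# deliberately not modelled; per "only humans communicate", they are not a tap.
alice, bob, eve = Entity("Alice"), Entity("Bob"), Entity("Eve")
tapped = Connection(sender=alice, receiver=bob, tapper=eve)
tapped.send("meet at noon")
print(bob.inbox, eve.inbox)  # both now hold the same message

The point of the sketch is simply that the tap is a property of the medium, arranged without the target's consent; a consensual copy would instead be modelled as a new connection opened deliberately to the third party.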

This does not cover all forms of surveillance, of course, just communication interception.

Privacy

Privacy is a meta-property of the connection; ``sharing'' information obtained from a given message occurs outside of the original connection, so it is not strictly a property of a connection. Here's how I'd define privacy in terms of the model:

Privacy
Privacy is only an issue when the sender includes information in a message about the sender or some other entity that could be used to cause that entity some form of harm. ``Harm'' here runs the gamut from ``minor annoyance'' (another spammer gets your email address) through ``life-threatening'' (the location of a Witness Protection Program protectee is leaked), varying based on the nature of the information and who obtains it. The right to privacy is the right to control who has access to that information.
I've deliberately left the wide range of possible harm in the definition because I believe that matches how we use the term. Of course as always the exact nature of the possible harm plays into whether a given action is ethical.

In addition to data strictly contained in the message, people are also concerned about the collection and distribution of metadata about the messages, such as patterns in web page requests or what kind of television shows they tend to watch. In a way, this is information that is still contained in the messages, as it could not exist if no messages were sent at all. So while it may not be directly contained in any one individual message, there is nothing special about metadata that merits special definition or handling.

Based on the clarity afforded by this definition, we can knock down another common argument against the need for privacy: ``If I'm doing nothing wrong (usually wrong is used synonymously with illegal), then I don't need privacy.'' There are two basic problems with this argument: One, ``privacy'' encompasses far more than just ``hiding illegality''; certainly information about the commission of illegal, immoral, or socially unacceptable acts fits into the definition above quite handily, in that extreme harm can come to the entity if the information is shared with the wrong people, but that is hardly the only information that fits the definition. It is trivial to come up with instances where a person is doing nothing wrong at all, yet may still wish to prevent some other entity from obtaining information about them. For instance, there's the Witness Protection Program I used parenthetically above; the witness has not (necessarily) done anything wrong. Or consider someone being stalked who wishes to prevent a stalker from obtaining their address or other vital information. (And it's not just celebrities who get stalked; we Normals have to deal with it as well. Most of us probably know someone who has been stalked (to varying degrees) at some point.) Obviously these are extreme examples used for rhetorical purposes, but lesser examples are easily thought of, too.

The second problem is the hidden assumption that the purpose of privacy is inevitably to commit the ``sin of omission'', to hide something that you should be punished for. I would say this is incorrect. Let us explore the question ``What is the ethical reason that privacy is desirable?''

Privacy Ethics

Information Is Power

In a nutshell, the ethics of privacy can be derived from the fact that knowledge is power. The more people know about you, the more power they have over you.

Did someone say power? That's a big clue that the principle of symmetry (see 3.4) applies here. We can boil the question down to ``Is symmetry between the sender and the receiver maintained?''

We can get a clue from the section describing the symmetry property (see 3.4). We can recast the privacy problem into an economic one, where ``economic'' is used broadly to mean not just monetary issues, but all value transfers a person may wish to engage in. One of the basic ethical principles of a free economy is that with few exceptions, people are allowed to set the value of what they own. When a person is not free to set the value of what they own, they are effectively under the power of the entity that is setting the ``price'' for the goods or service.

Examples of this are easy to see by turning to the government. It is illegal to sell body parts. It is illegal to sell yourself or anyone else into slavery. In most parts of the US, prostitution is illegal. None of these things are physically impossible now that there is a law; instead, the ``price'' for doing these things is significant jail time and/or stiff fines (if you're caught!). On the flip side, where a government can force lower prices, it is illegal to abuse a monopoly to artificially inflate prices. Many prices are subsidized by a government to keep the goods or services available to all, such as Canada's ``free'' health care. Illegal drug possession can carry stiff consequences. All of these demonstrate how power can be exerted simply by increasing and decreasing the perceived values of various actions and objects.

(``Free'' gets scare-quoted because I prefer the more accurate term ``paid for''. Try replacing ``paid for'' wherever you see the word ``free'' in advertising; usually the advertisement is much less appealing after that.)

Generally, we want to reserve this unilateral power to governments. A relationship between two persons or person-like entities should be governed by mutual agreement, which is really another expression of the symmetry property: There is nothing special about either entity in such a relationship that entitles one or the other to special privileges. (Ideally, the government is By The People, For The People anyhow, so even the powers reserved to the government are in some sense consensual, although in a collective sense rather than an individual sense.)

Using this analysis, we can construct a more active and practically useful definition of privacy:

Privacy
Privacy is violated when information of some value is taken from entity A by entity B and used in some manner that might cause A some form of non-monetary harm, without B compensating A in some mutually agreeable manner.
You could cast this in purely economic terms by dropping the phrase ``and used in some manner that might cause A some form of non-monetary harm'' without too much loss, but that allows too many cases that are purely economic, which I think fails to capture the sense of what people mean by privacy. If I steal a valuable product design document from you and sell it on eBay, that would fit a purely economic definition of privacy as it may cause great monetary harm, but most people would consider that just theft, not a privacy issue. So let us confine ourselves to discussing non-monetary harm, which as I mentioned above ranges from ``minor annoyance'' to ``life-threatening''. I also observe that this is not always a strict separation; one privacy violation can cause both monetary and non-monetary forms of harm at the same time, possibly constituting both theft and a privacy violation.
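
As a minimal sketch of how this yardstick might be applied mechanically, the definition can be read as a simple predicate; the field names and the boolean simplifications below are my own illustrative assumptions, and real judgments of ``harm'' and ``mutually agreeable compensation'' are of course far less binary.

from dataclasses import dataclass

@dataclass
class InformationUse:
    subject: str                        # entity A, whom the information is about
    user: str                           # entity B, who takes and uses the information
    may_cause_nonmonetary_harm: bool    # could this use harm A in a non-monetary way?
    compensation_agreed: bool           # did A and B reach a mutually agreeable arrangement?

def privacy_violated(use: InformationUse) -> bool:
    # Privacy is violated when B uses A's information in a way that might
    # cause A non-monetary harm, without mutually agreeable compensation.
    return use.may_cause_nonmonetary_harm and not use.compensation_agreed

# Example: an insurer reselling a customer's address to credit-card marketers.
resale = InformationUse("customer", "insurer",
                        may_cause_nonmonetary_harm=True,
                        compensation_agreed=False)
print(privacy_violated(resale))  # True

In practice each flag hides a judgment call, but the structure of the test (harm possible, compensation absent) is the yardstick.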

This definition has a distinct advantage over the previous one: It provides us with an easy yardstick to examine privacy relationships in the world around us to determine how ethical they are. We can also define privacy-sensitive information:

privacy-sensitive information
Information that could cause an entity non-economic harm, which the entity may or may not be willing to sell for some price.
You could probably write this next section now without me spelling it out for you, but good essay form requires me to do it anyhow, so please bear with me as we apply the yardstick to real life.

Applying the Privacy Yardstick

It is easy to see that in attempting to apply this yardstick, one must carefully search for people or person-like entities that actually meet the guidelines for ethical behavior, rather than the opposite. Current common practice appears to be to take whatever you have the technical capability of acquiring.

Most information collection right now is either done without the subject's knowledge or consent, or extracted as a non-negotiable condition of obtaining some good or service.

In theory we can avoid nearly all privacy invasion by moving into a hand-crafted log cabin and living off the land. Often a false dichotomy is handed to us between doing that and putting up with the privacy invasions that seem to be a part of modern technological life. But it does not follow that this dichotomy should exist. If a monopoly exists in a given domain, or all producers of some good or service engage in some privacy violation (effectively an ``oligopoly'' from the point of view of privacy issues), then there really is no effective choice. This is an abuse of monopoly or oligopoly power to force you to either lower your price for your privacy-sensitive information (down to zero) or do without some good or service. This is unethically raising the price of a service beyond what a competitive, informed market should bear.

Another virtually ignored aspect of the current privacy free-for-all is the effect of distribution on information. I do not particularly care that my insurance company knows my mailing address. They need it to send me my bills. Yet when they sell this exact same information to other entities, I consider my privacy violated, as they send me pre-approved credit card offer after pre-approved credit card offer, which I must spend my valuable time destroying to prevent somebody else from using the applications or cards to rack up charges in my name, which has already happened to my wife once. Information harmless in one entity's possession may be very, very harmful if another entity gets it, yet there is little or no acknowledgement of this fact in either economic reality or the privacy debate.

This answers another common argument, the claim that once I have some information, I can sell it to anyone I want. There is a usually-unspoken claim that a person's privacy is no more violated after the sale of their address information than before the sale, when at least one entity already possessed the privacy-sensitive information. This argument falls down on two points: One, with every additional harmful event caused by further sale and use of the information, the privacy violations become more and more ethically serious. The only way to sustain this argument is to frame the problem in terms of binary ``privacy violated/not-violated'', which as usual is too simplistic to handle the real world. Two, I may not consider my privacy violated at all until the sale, if the first entity has some good reason to have the knowledge, so there can still be a fresh privacy violation, even with a binary view.

In fact, the vast majority of the value that things like our address have to marketers lies in combination with other bits and pieces of information. It's really the exceptional database where each single record is inherently a privacy violation: your credit record, your medical history, and your criminal record, if any. The rest of the value lies in the combination with seemingly harmless bits of information. Thus, when you see privacy advocates like myself getting upset at what seems to be a trivial violation, bear in mind that we see it not as an additive privacy violation, but as a multiplicative violation. The first few bits of data are worthless, but start adding a few thousand bits here and a few thousand there and pretty soon you're talking real knowledge.
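
To see why the effect is multiplicative rather than additive, consider a toy calculation; the population figure and the attribute frequencies below are invented purely for illustration.

# Each attribute alone identifies almost nobody; combined, they narrow the
# candidate pool multiplicatively. All numbers below are invented.
population = 1_000_000

attribute_frequency = {
    "lives in this ZIP code": 0.02,        # 1 in 50
    "drives a white pickup": 0.05,         # 1 in 20
    "buys low-carb groceries": 0.10,       # 1 in 10
    "subscribes to this magazine": 0.01,   # 1 in 100
}

candidates = population
for attribute, frequency in attribute_frequency.items():
    candidates *= frequency
    print(f"+ {attribute:30s} -> about {candidates:,.0f} people remain")

# The loop ends with roughly one person remaining: four individually harmless
# facts, combined, can single someone out of a million.

Each fact on its own is nearly worthless to anyone wishing me harm; the four together can pick one person out of a million.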

The state of privacy in the current world is absolutely reprehensible, not because so much privacy-sensitive information is being collected, but because so much information is being collected and used without mutually agreeable compensation being arranged for the source of the information. Instead, entities, mostly corporations, are abusing their positions to effectively force people to cede this information for no benefit whatsoever, and with little or no effective ability to simply opt out of the collection entirely if the collecting entity is unwilling to meet the price set by the information owner. Further, there is no acknowledgement of the increased value of information as it is combined with other bits. This is basically large-scale theft, in many cases theft of information that has value transcending mere money.

Who Needs Privacy?

To go back to the argument that started this all, ``If I'm doing nothing wrong, I don't need privacy'' (section 8.3.2), what an incredibly simplistic and naive view of privacy that exemplifies! It would be nice if we could simplify privacy down to only ``hiding crimes'' and make a hard-and-fast judgement on privacy that applies for all, but it just doesn't work in the real world. ``Hiding crimes'' is only one rather small aspect of the whole of privacy, and it's not even a very interesting one in the final analysis. It's just a particular case involving high levels of harm, which isn't that unusual as many other privacy violations can include similar levels of harm without involving crime; ask anybody who has had to recover from identity theft how much fun that is. It is incredibly short-sighted to take that small aspect of privacy and try to extend it to cover all cases.

It is equally foolhardy to force one person's values onto all, just as it would be foolhardy to do so in the monetary arena. Some people may value their privacy so little that they are effectively willing to give everything away, but that does not imply that that is necessarily true for everyone. (I wonder how many people truly don't value their privacy, and how many people would suddenly value it if a fair market arose that would pay them a fair price for their privacy-sensitive information.) One of the Great Truths of Life is that people value things differently; this is a very important part of life and can not be simply waved away just because you only see your own viewpoint, and thus see only one value for things.

In fact, another way of phrasing my rebuttal to the ``If I'm doing nothing wrong, I don't need privacy'' line is based on pure economics: When your privacy is violated without mutually agreeable compensation, you are quite literally being stolen from. If you don't really care about privacy, that's fine, but you should still be compensated for the information being taken from you. You are literally losing value measurable in dollars in today's anarchic environment every time your privacy is invaded.

How do you know there is a value measured in dollars? A corporation would not bother to collect and sell such information if there was no monetary benefit, so the simple observation that the corporations obviously consider themselves enriched by this privacy-sensitive information is sufficient to show that it has a dollar value to them. It would be hard to put a precise number on such a diffuse asset for a corporation, but for a more concrete figure, criminals were able to sell stolen identity information for as much as $60 per record in 2002 (http://www.wired.com/news/privacy/0,1848,56567,00.html). I'd guess that was a conservative valuation, too, since those records were used to commit large-scale credit fraud.

Privacy-sensitive information is treated so cavalierly now that it is leaked without a second thought:

One credit card company kept calling and calling even though I repeatedly said it was a wrong number. They insisted, so one day, I just never said I wasn't the guy they were looking for.. It got scary: I never realized how easy it is to get information from people like this.. These repo/credit companies call and give you soo [sic] much information without verifying who they are talking to. I knew all about this guy that had a white ford ranger pickup about to be repo'd, he only had a PO box (haha they sold a pickup to a guy with no address), he made cabinets, lived in New Mexico, had my phone number, hadn't paid his $239/mo payment for 4 months, AND I verified his social security number. I got all this information through passively sitting through their "can you confirm your address is..." type questions. - ``Broodje'' on Slashdot (http://slashdot.org/comments.pl?sid=53726&cid=5295656)
Why bother validating you're talking to who you think you're talking to, when there's no penalty for leaking this information? Note that with a name and a social security number, ``Broodje'' could have committed any credit card fraud he pleased. Identity theft (http://www.tompaine.com/feature2.cfm/ID/8976) can never be completely eliminated, but such casual treatment of privacy-sensitive data makes it easy; if data were treated with more respect and more suspicion, it would be much more difficult to commit.

Who needs privacy? Everybody! Everybody, that is, who is interested in not being forced into subservient relationships, including criminal ones, by any entity that happens to have the power to collect information that might be harmful to you. I suppose if you don't mind this subservience, then privacy issues won't bother you. But please forgive the rest of us for objecting to the yoke.

In the era of ``power politics'', where every conceivable petty ``power struggle'' is immediately transformed into a violent struggle of epic proportions, where people equate denying a promotion based on race to murder with a straight face, it's easy to tune this line of reasoning out. But I think at the core of the power rhetoric there is still some kernel of truth. When a telemarketer calls me because they have my number, obtained through a privacy breach, and they take 10 seconds of my life away (which is about as far as they can get now), that is a real power they have over me. This is hardly the end of the world, yet one should not make the mistake of exaggerating in the opposite direction. This is a real effect, and over the course of your life, the amount of your time wasted by telemarketers becomes significant, time you cannot get back. It's hard to see because there's no alternate world you can peek into to see a life without privacy invasions to compare to, but this is very, very real. As people continue to be complacent and unable to perceive the effects clearly, the invasions are getting worse and worse.

Why can I safely predict privacy violations will continue to get worse? Because there is an inherent economic interest in pushing privacy violations as far as possible, by definition. Violating privacy means some gain in value for the violator, with no motivation to stop and every motivation to increase the violations. Until we actively fight this as a society, it will get worse indefinitely. Someday we will rise to fight this, too, because the intrusion is going to monotonically increase in the absence of backlash.

A Social/Personal Footnote

It is undeniable that there are social and personal aspects to what sort of privacy is desirable. Many of these aspects are unimportant, and can be determined by the society as they see fit. Whether or not you keep your blinds open on a first-floor apartment is a personal, ethically neutral decision.

The preceding analysis also gives us a method for determining whether an issue is such a purely personal or social decision, or a decision potentially containing ethical concerns. If the information gathered fits into the above analysis, then there is at least the potential for ethical issues to emerge. If the analysis makes no sense for the particular issue, then it can be decided by the society or person (as appropriate) with no ethical issues to worry about. Many mundane daily privacy issues are of this nature, but it is a mistake to then assume that all privacy issues are of this nature, a mistake I have also seen made.

Privacy Legal Machinery

The most interesting thing about this analysis is that it almost directly parallels the arguments for having the whole concept of intellectual property in the first place. Ethically, people must own their privacy-sensitive information; there is a loss of value if the privacy-sensitive information is stolen (sometimes value not measurable in mere money); and we must have fine-grained control over this information. It should be easily seen that privacy-sensitive information is just information, like anything else, and as such, constitutes concrete parts which can be used to create messages, with all that implies.

This is a powerful mechanism for understanding the great variety of ways in which privacy-sensitive information can be abused. There would be no e-mail spam in the world were it not for the ability of smart expressions to take a list of email addresses, which are privacy-sensitive bits of information, and send emails to each one, a simple attack not possible before the computer era. If it weren't for the great interconnectedness of modern databases, a simple error in one credit history database would not be able to screw up a person's life for months or years at a time.

Can We Use Existing Machinery?

This would seem to call for some sort of legal protection. At first glance, it seems like we could just declare privacy-sensitive information to fall under copyright laws and be done with it. If this worked, it would have the virtue of simplicity. Unfortunately, on closer examination it doesn't work at all.

First, original copyright law deals only with expressions. You cannot copyright a fact, for several reasons, not least of which is failure to meet the creativity criterion. This makes it extremely difficult to use copyright machinery to protect such information. Ignoring the creativity problem, you could try to justify others' recordings of your address as a derivative work of your original recording of your address, but that does nothing to prevent people from independently recording your address, getting their own ``copyright'', and leaving you unprotected.

Second, as we discussed above, copyright in its current form really only deals with the concrete part aspect of communication. Our whole desire for privacy centers around the desire to control the flow of the information, which is to say, the human-experienced part of the communication. This highlights one way in which that division is sensible. Copyright is not a very good mechanism right now because the expression model can't handle this sort of information, and hopefully once copyright is simplified to cover only the concrete part aspect, it will be even less appropriate. We need some other form of protection, one that does not exist right now.

In its current form, copyright is primarily concerned with the recovery of loss. The penalties for copyright violation increase as the damage done to the copyright owner increases. We want a legal mechanism concerned with the prevention of injury, which is completely different.

The closest currently-existing legal mechanism that meets that criterion is trade secret law. There are some similarities: Trade secrets protect information of economic value as long as it is kept secret. Trade secret law is concerned with preventing the secret from getting out and being used by somebody else for gain, which sounds like how we'd like to protect privacy.

But there are some serious problems, too: Once the trade secret is independently found, it is no longer protected, so one accidental release of your address without proper trade secret protection and it's no longer a secret. Since by now we've all released all kinds of personal information without trade secret protection, we can't even claim trade secret status on our information in theory or in an ambitious lawsuit. It also (as far as I can tell) deals strictly with monetary value, where our privacy concerns go beyond that, as we wish to be able to consider some information priceless, as is our right to set the value of our information.

Current legislation dealing directly with privacy suffers from the same symptoms as the rest of intellectual property law, as discussed way back in chapter 2: It is haphazard and chaotic as it tries to deal piecemeal with each isolated situation as it arises, instead of being based on a cohesive theory of privacy. It is a list of special cases, which is obsolete before it is even put into effect. It is clearly inadequate for the larger task of protecting people's privacy as a whole.

So to answer the question posed in the section title: No, there is no existing legal machinery that we can use or extend to protect our privacy. Even the current privacy laws are too narrowly focused to be made generally useful.

A New Form of Intellectual Property

The only practical way to protect privacy is to create a new legal concept matching what I call privacy-sensitive information and create the legal machinery to protect it.

We need to grant entities the right to decide what constitutes ``privacy-sensitive information'' and require information brokers to respect the fact that the information is considered privacy sensitive and not distribute it. We need clear guidelines on what constitutes ``privacy-sensitive'' so that people can't abuse it, as they inevitably will. The definition given above would be a good start, as it correctly focuses on people and not technology, unlike other attempts I've seen to create privacy machinery. We need to establish meaningful penalties for violating privacy, applicable across the whole domain of privacy-sensitive information, not mere subsets like ``medical data''.

Sound ambitious? It really isn't. Current privacy legislation is already hinting at this level of protection. There is precedent for controlling the dissemination of information, in both the form of trade secrets and the concept of confidential information. There is precedent for the owner of information setting its value or refusing distribution entirely, as in current copyright law. (Compulsory licensing is the exception, not the rule.) There is certainly precedent for granting only limited rights, not a binary ``possession/no possession'' status, in current copyright law.

This is not calling for anything truly novel in execution, only a re-combination of already-existing legal machinery. Given the existence of copyright, patents, trademarks, trade secrets, and confidential information, this isn't so much a blazing of new territory as closing a gap in existing communication-ethics-based law, one being exploited by many entities as they benefit from selling our information without passing any benefit back to us.

Finally, one way or another, more privacy legislation will be enacted. It can either try to merely address symptoms, as we've already seen in legislation like HIPAA, or more directly solve the fundamental problems. For society's benefit, the latter is much more desirable.

Another nice benefit is that once these protections are enacted, a privacy market can develop, allowing society itself to directly decide what privacy is worth, almost exactly analogous to how the government manages the economy itself. Many people have researched how this could be made technologically feasible, but without a legal framework enforcing the technological protections, the technological solutions are worthless in practice.

Practical Privacy

I stated earlier that prevention of surveillance is less fundamental than prevention of communication of information, and that preventing surveillance is more a practical concern than a theoretical one. By this I mean to imply that a focus purely on surveillance, without addressing to whom that information is communicated, is doomed to failure, because of the number of fully legitimate sources of information.

I do not mean to imply that it is a waste of time to pursue limitations on surveillance, though. Obviously, information never collected can never be communicated in such a way as to violate privacy. In the long run this will not be sufficient, though, because the amount of information that can be extracted from communication always exceeds the literal content of the communication.

Suppose you are given all the receipts from my grocery shopping trips. Along with the literal information contained directly on the receipt, which is simply a list of items, you can derive much interesting information. With a good baseline understanding of the shopping patterns of people in my demographic group, you could probably derive the fact that I am trying to lose weight on a high protein, low-carb diet, but that someone else in my household is not on that diet. You could derive I like certain types of food, and perhaps that I dislike others.

Beyond that, if you had a large enough database, you could derive other things. If someone buys a lot of Gatorade or other sports drinks, it is more likely they are males age 14-30. Buying mineral water would be associated with other personality traits. Buying a lot of herbal remedies would be associated with other traits. The amount of information obtainable just from a large collection of your grocery receipts would probably surprise you.
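
As a toy illustration of how such inferences might be drawn, here is a sketch that applies a few hand-written rules to a purchase history; the items, rules, and thresholds are invented for illustration and are far cruder than what a real data-mining operation would use.

from collections import Counter

# A flattened purchase history: just item names pooled from many receipts.
purchases = ["chicken breast", "gatorade", "protein bars", "gatorade",
             "eggs", "gatorade", "ice cream", "chicken breast", "protein bars"]

counts = Counter(purchases)

inferences = []
# Invented rules: each maps a purchasing pattern to a guess about the household.
if counts["chicken breast"] + counts["protein bars"] >= 3 and counts["ice cream"] <= 1:
    inferences.append("someone in the household is probably on a high-protein diet")
if counts["ice cream"] >= 1:
    inferences.append("...but probably not everyone in the household is on it")
if counts["gatorade"] >= 3:
    inferences.append("the household likely includes a young, athletic male")

print(inferences)

The receipts never say ``diet'' anywhere; the inference lives entirely in the pattern.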

Start combining sources of information together and the possibilities increase even more. While it is not possible to build a 100% accurate model of a person, a lot of privacy-sensitive information can only be discovered by combing through large amounts of data. This is exactly the sort of thing the government was proposing with its Total Information Awareness program.

This suggests another practical avenue for controlling privacy violations, which is enacting restrictions on who can combine what data. Exactly what restrictions would be in place is a matter for specific law, but I would suggest that licensing the people who can access this sort of information would be a fine idea. The privacy value of even such mundane data as how much Gatorade I buy increases as it is combined with other data, and that should affect how we perceive the ethics of such actions. This is another sort of thing where a sufficient change in quantity becomes a change in quality; adding two pieces of data together is probably harmless, adding millions very intrusive, and while there is no obvious, firm line we can draw where the transition occurs, a transition occurs nonetheless.

Again, lest you think this is theoretical, watch a detective drama on television sometime, such as CSI. While the television detectives of course live in an idealized world where every problem has a neat solution, the general principle of convicting a criminal based on a scrap of thread, the precise impact angle of a bullet, a thirty-second cell phone call record (not even the contents of the call, just the fact one was placed), and the microscopic striations on a bullet shows how much information can be extracted from even the simplest scraps of data, when intelligently assembled. In fact, there's nothing particularly hard about this; we all do similar things all the time. The only challenge is automating such logic.

Conclusion

In the final analysis, modern privacy concerns center around the flow of privacy-sensitive information, rather than the gathering of that information in the first place. In other words, privacy is primarily a communication concern. Concerns about surveillance are practically worthwhile, because in practice it is difficult or impossible for an entity to possess information without eventually communicating it to somebody, even though in theory it is possible. By modelling privacy concerns based on the flow of information, we can and should begin to see such information as another kind of intellectual property, subject to the same protections and legal machinery. It should be legally meaningful to say to a corporation that they can have my address but are forbidden to sell or even give that information to anyone else (even ``business partners''), just as they can sell me a book contingent on the condition that I not post copies on the Internet, regardless of the price I'd charge. New legislation will be required to support this, but no truly new legal principles.

