Obscurity By Design

Although privacy is often presented as an ideal of total protection, such protection is never achievable in practice, and privacy remains an elusive concept. The concept of obscurity by design aims to enable an individual to manage the risk that published information will be found, and understood, by third parties for whom it was not intended.

It might be considered a practical step towards “good enough” privacy, although some may argue that it is impossible to know when privacy is good enough, as the future may render today’s decisions inappropriate. Such reasoning is similar to the “nothing to hide, nothing to fear” (NTHNTF) principle so strongly contested by privacy advocates.

However, the ability to manage the exposure of information online is important and offers some protection against abuse. Obscurity by design complements other approaches such as privacy by design; it holds that systems should, by design, give end users sufficient tools to manage the obscurity of their own information.

Privacy as Risk Management
An individual in the online world faces two main types of risk:
 * Endogenous risks, associated with the individual’s interaction with technology. These can generally be understood and are knowable in advance.
 * Exogenous risks, associated with the external interactions the individual makes in the course of his or her online activities, for example interactions with other people. These are emergent and generally unknowable at the outset.

Protecting the privacy of an individual must therefore take both of these sources of risk into account.

The privacy by design movement has worked primarily to minimise the endogenous risks found in “back-end” systems. Privacy by design advocates a holistic approach to privacy considerations throughout the entire lifecycle and organisational management of technology development. Consequently, it leads to the systematic adoption and deployment of traditional privacy enhancing technologies (PETs).

However, this approach does not systematically treat the other risk factors facing someone in the online world: the exogenous risks. It is these “front-end” activities that the obscurity by design principle primarily targets.

At first sight there would appear to be a contradiction between the concepts of privacy and protection of personal data on the one hand, and the deliberate revelation of information inherent in social interaction on the internet on the other. This points to one of the major problems of the concept of privacy: context. Contextual complexity makes universal design decisions that adequately protect, without overly restricting, individuals in every situation extremely difficult to implement, beyond broad, generic provisions.

Social Interactions as Risk Acceptance
In practice, all social interactions carry some level of obscurity, and in the online world an individual can control the level of obscurity of their interactions through three main methods:

 * By limiting the online channels through which information is presented.
 * By limiting the audience that is reached through those channels.
 * By limiting the meaning of the information that is presented.

This allows users to be proactive and to define for themselves the balance between opacity and transparency in how their online activities are understood.

Among the most commonly used techniques are pseudonyms and multiple accounts. However, the right to anonymity online has long been debated. The use of real identities has been central to the business models of many online social networks, including Facebook and Google, which build detailed profiles of people in order to send targeted advertisements based on their real identities. In recent times this stance has softened a little due to, inter alia, the controversy concerning drag queens’ stage names and the growth of advertisement-free social network sites, such as Ello, that welcome pseudonyms. Subsequently, Google announced changes to its terms and conditions that allow pseudonyms online.

Classes of Information
An individual should be able to select the level of “privacy” needed from broadly three categories:

 * 1) Public
 * 2) Obscure
 * 3) Confidential

Some types of published information, as on Twitter, are designed to reach a large audience and so are demonstrably public, whereas others, for example banking information, would most certainly be considered confidential.

It is the middle category, publication to a limited network of recipients, that represents perhaps the bulk of online information, and it is here that obscurity is most effective.
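These three classes could be represented directly in a system’s data model. A minimal sketch, with illustrative class names and handling rules that are assumptions rather than any established standard:

```python
from enum import Enum

class InfoClass(Enum):
    PUBLIC = "public"              # broadcast widely, e.g. a public tweet
    OBSCURE = "obscure"            # published, but to a limited network
    CONFIDENTIAL = "confidential"  # e.g. banking information

# Illustrative default handling per class; a real system would refine these.
DEFAULT_CONTROLS = {
    InfoClass.PUBLIC:       {"indexable": True,  "audience": "everyone"},
    InfoClass.OBSCURE:      {"indexable": False, "audience": "own_network"},
    InfoClass.CONFIDENTIAL: {"indexable": False, "audience": "owner_only"},
}

print(DEFAULT_CONTROLS[InfoClass.OBSCURE])  # {'indexable': False, 'audience': 'own_network'}
```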

Obscurity as Risk Mitigation
Information online can be classed as “obscure” if one or more of the following elements is present in the context in which it is presented. Each element can be placed under the user’s control by incorporating it into the design principles of the system:

 * 1) Restricted Visibility
 * 2) Access Protection
 * 3) Reduced Identifiability
 * 4) Reduced Clarity

Restricted Visibility
If information cannot be found easily then it can be said to be obscure. While this may not offer complete protection, as the original content remains online, in practice it has considerable perceived value. The right to be forgotten requires search engines to remove, upon request, links to personal information that is no longer in the public interest. The fact that Google received more than 12,000 removal requests on the first day indicates the value perceived in obscuring access to personal information through search engines.

Design principles such as limiting the scope of visibility, for example to friends, or friends of friends, on Facebook, can be used to offer direct control over the level of obscurity. This is a mechanism that can be directly enforced by the software. Some sites, for example LinkedIn, allow control over which individual parts of the profile are visible, and therefore indexable, by search engines.
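A minimal sketch of how such software-enforced audience scoping might work, assuming a simple undirected friendship graph; the names and structures are illustrative, not any particular platform’s API:

```python
from collections import deque

# Illustrative friendship graph: user -> set of direct friends.
FRIENDS = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "dave"},
    "carol": {"alice"},
    "dave":  {"bob"},
}

def social_distance(owner: str, viewer: str, graph=FRIENDS) -> int:
    """Breadth-first search for the number of friendship hops between two users."""
    if owner == viewer:
        return 0
    seen, queue = {owner}, deque([(owner, 0)])
    while queue:
        user, dist = queue.popleft()
        for friend in graph.get(user, set()):
            if friend == viewer:
                return dist + 1
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return -1  # no social connection at all

def may_view(owner: str, viewer: str, scope: str) -> bool:
    """Enforce the visibility scope chosen by the content owner."""
    dist = social_distance(owner, viewer)
    if scope == "public":
        return True
    if scope == "friends":
        return 0 <= dist <= 1
    if scope == "friends_of_friends":
        return 0 <= dist <= 2
    return False  # unknown scopes default to the most obscure option

print(may_view("alice", "dave", "friends"))             # False: two hops away
print(may_view("alice", "dave", "friends_of_friends"))  # True
```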

Other mechanisms, for example the robots.txt files used on websites to request that the content of the site not be indexed, rely on the accessing software to behave correctly; compliance is thus not enforced.
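As an illustration of this voluntary compliance, Python’s standard library includes a robots.txt parser that well-behaved crawlers can use to honour such requests (the URLs below are placeholders):

```python
from urllib import robotparser

# A well-behaved crawler voluntarily checks robots.txt before fetching.
# Nothing stops a misbehaving crawler from skipping this step, which is
# why robots.txt obscures rather than protects.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder URL
rp.read()

if rp.can_fetch("MyCrawler/1.0", "https://example.com/private/page.html"):
    print("robots.txt permits fetching this page")
else:
    print("robots.txt asks crawlers not to fetch this page")
```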

Finally, tools and strategies that allow information to be “devalued” by search engines, effectively search engine “unoptimisation”, could be another means of managing obscurity: information would still be picked up by search engine trawlers, but would not be readily surfaced by searches.
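One concrete mechanism in this direction is the noindex directive, which asks search engines to leave a page out of their results even though the page remains publicly reachable. A minimal sketch using Python’s built-in HTTP server; the page content and port are illustrative:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class UnoptimisedHandler(BaseHTTPRequestHandler):
    """Serve pages that stay reachable but ask search engines not to index them."""

    def do_GET(self):
        body = b"<html><body>Reachable, but asked to stay out of search results.</body></html>"
        self.send_response(200)
        # X-Robots-Tag is honoured by major search engines; like robots.txt,
        # it is a request, not an enforced control.
        self.send_header("X-Robots-Tag", "noindex, nofollow")
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), UnoptimisedHandler).serve_forever()
```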

Access Protection
Passwords, and in particular minimum standards for password protection, together with two-factor or multi-factor authentication, are all used to protect access.
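As an illustration, a minimal sketch of password storage using the scrypt key-derivation function from Python’s standard library; the cost parameters are reasonable illustrative values, not a mandated standard:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a storable hash from a password; the password itself is never stored."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("password123", salt, digest))                   # False
```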

Restrictions on access are often buried in privacy settings of which users are unaware, with the defaults often set to permit the most liberal access to information. Status quo bias means that users have a tendency to stick with default settings, and obscurity by design principles would mandate that these be set to “obscurity by default”.
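A sketch of what “obscurity by default” could look like in code, with illustrative setting names: every field starts at its most restrictive value, and any loosening is an explicit user action:

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    """Illustrative settings object: every default is the most obscure option."""
    profile_visibility: str = "friends"    # not "public"
    searchable_by_engines: bool = False    # opt in, not opt out
    listed_in_directory: bool = False
    share_activity_with_apps: bool = False

    def loosen(self, setting: str, value) -> None:
        """Relaxing a default is a deliberate, visible user decision."""
        print(f"user changed {setting}: {getattr(self, setting)} -> {value}")
        setattr(self, setting, value)

settings = PrivacySettings()                    # obscure by default
settings.loosen("searchable_by_engines", True)  # only by explicit choice
```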

Feedback mechanisms built into the user experience can be used to inform and engender awareness, and ultimately to modify user behaviour, by sensitising users to their individual, tailored risk profile. Such design elements encourage the user to act and to manage their level of obscurity. For example, the “people who viewed your profile” feature on LinkedIn provides information on the exposure of the profile, inside and outside the network, thus informing the user of their level of visibility.
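A sketch of such a feedback loop, with invented names, in which the system surfaces exposure events back to the profile owner:

```python
from collections import Counter

class ProfileViewFeedback:
    """Illustrative feedback loop: record profile views and report exposure."""

    def __init__(self):
        self.views = Counter()

    def record_view(self, origin: str) -> None:
        # origin is "inside_network" or "outside_network"
        self.views[origin] += 1

    def exposure_report(self) -> str:
        return (f"Your profile was viewed {self.views['inside_network']} times "
                f"inside your network and {self.views['outside_network']} times "
                f"outside it.")

feedback = ProfileViewFeedback()
feedback.record_view("inside_network")
feedback.record_view("outside_network")
feedback.record_view("outside_network")
print(feedback.exposure_report())
```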

Reduced Identifiability
The linkage between a given piece of information and a given individual can be obfuscated through the use of pseudonyms. Another element of identifiability is the metadata associated with information and content. Metadata adds context to content, increasing the likelihood of identification. For example, a mobile communication in itself may say very little, but the time and place of the call, together with the numbers involved, substantially increase the identifiability of whoever made the call, even if it was not the owner of the phone. This is what makes metadata so valuable to surveillance agencies.
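One common building block for reducing identifiability is keyed pseudonymisation: replacing a real identifier with a stable pseudonym that cannot be reversed without the key. A minimal sketch, with an illustrative key; note that pseudonyms reduce rather than remove identifiability, since metadata patterns can still re-link them:

```python
import hashlib
import hmac

# Illustrative secret; in practice the key would be generated and stored
# securely, and rotating it breaks all existing linkages.
PSEUDONYM_KEY = b"replace-with-a-securely-stored-secret"

def pseudonymise(identifier: str) -> str:
    """Same input always yields the same pseudonym, but the mapping
    cannot be reversed or recomputed without the key."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

print(pseudonymise("+44 7700 900123"))  # stable pseudonym for this number
print(pseudonymise("+44 7700 900456"))  # a different, unlinkable pseudonym
```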

Reduced Clarity
Information is often passed between individuals, in the presence of others, that deliberately omits important content, relying instead on a known common understanding between the source and the intended recipient. In this way, the information can be obscured to third parties.

An extreme form of clarity reduction by design is public-private key encryption: only through specific knowledge held by the two parties, the key pair, can the communication be rendered intelligible. However, as this is a “back-end” technique rather than a “front-end” one related to social interactions, it properly belongs to privacy by design.

Areas of Design
As a front-end approach, obscurity by design relies on modifying behaviour, and in that regard there are parallels with the four categories proposed by Lawrence Lessig as regulators of behaviour:

 * 1) Law
 * 2) Market Forces
 * 3) Societal Norms
 * 4) Code (sometimes called architecture; in this context, technology)

Each of these can be considered an opportunity for design elements that reinforce behaviour so as to manage the obscurity of information.

 * The law can contribute to reducing the combination of information. For example, much concern was raised over Google’s combining of terms and conditions across its product portfolio, which was considered to aggregate personal information beyond the purposes for which it was individually collected. Indeed, many terms and conditions of use prohibit certain behaviour, for example “screen scraping”, the trawling of websites by automated systems.
 * Market forces can contribute, for example through enforcement and penalties, by reducing the attractiveness of circumventing regulations.
 * Norms, especially community norms, can be used to self-regulate behaviour. Flickr, for example, has strong community policies in place and design elements that help reinforce community behaviour, such as occasional pop-up reminders about the code of conduct.
 * Code, or in this case technology, is already used extensively to manage obscurity. The trawling of photographs and the reconstruction of connections between online content and individuals has seen considerable advances, and recent press attention, with Facebook’s facial recognition. Conversely, Google has announced a tool to blur faces in online content, a capability that increases obscurity and therefore helps manage privacy risk (a sketch of the underlying idea follows this list).
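A minimal sketch of that face-blurring idea, assuming the Pillow imaging library and a face region already located by some separate detector; the file name and coordinates are illustrative:

```python
from PIL import Image, ImageFilter

def blur_region(image_path, box, radius=12):
    """Blur one rectangular region, e.g. a face found by a face detector."""
    img = Image.open(image_path)
    region = img.crop(box)                                   # (left, upper, right, lower)
    region = region.filter(ImageFilter.GaussianBlur(radius))
    img.paste(region, (box[0], box[1]))                      # write blurred pixels back
    return img

# Illustrative usage: blur a face detected at these coordinates.
blurred = blur_region("group_photo.jpg", box=(120, 40, 220, 160))
blurred.save("group_photo_obscured.jpg")
```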

Increased Concerns
Simple obscurity techniques may no longer be enough to impede the determined. With sufficient data and computational power, identifiability can be increased through data harvesting and big data techniques. This raises the concern that personal data can be derived from the combination of non-personal data fragments, and even from deliberately anonymised data. De-anonymisation concerns and the concept of informational synergy have been extensively described in mosaic theory.
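A toy sketch of such a linkage attack, using entirely fabricated illustrative records: neither dataset contains a name next to the sensitive attribute, yet joining them on shared quasi-identifiers re-identifies the individuals:

```python
# "Anonymised" records: no names, only quasi-identifiers plus a sensitive field.
medical = [
    {"zip": "02139", "birth": "1965-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth": "1972-01-15", "sex": "M", "diagnosis": "asthma"},
]

# A separate, seemingly harmless public dataset (e.g. a voter roll) with names.
voters = [
    {"name": "Jane Doe", "zip": "02139", "birth": "1965-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "02139", "birth": "1972-01-15", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth", "sex")

def key(record):
    """Project a record onto the quasi-identifiers shared by both datasets."""
    return tuple(record[k] for k in QUASI_IDENTIFIERS)

# Join the two "non-personal" fragments on the shared quasi-identifiers.
names_by_key = {key(v): v["name"] for v in voters}
for record in medical:
    name = names_by_key.get(key(record))
    if name:
        print(f"{name} -> {record['diagnosis']}")  # re-identification achieved
```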