=Obscurity By Design=

Although privacy is often framed as an ideal of total protection, that ideal is never achievable in practice. The concept of obscurity by design aims to enable an individual to manage the risk that published information will be reachable and understandable by third parties for whom it was not intended.

It might be considered a step towards “good enough” privacy, although some may argue that it is impossible to know when privacy is good enough, as the future may render today’s decisions inappropriate. This “good enough” stance is similar to the “nothing to hide, nothing to fear” (NTHNTF) principle so contested by privacy advocates.

In practice, however, the ability to manage the exposure of information online is important and offers some protection against abuse. Obscurity by design complements other methods: it suggests that sufficient tools should be made available to end users, by design, to manage the obscurity of their own information.

==Privacy as Risk Management==
Starting with a consideration of the risks faced by an individual in the online world, there are two main types:
 * Endogenous risks, associated with the individual's interaction with technology. These can generally be understood and known in advance.
 * Exogenous risks, associated with the external interactions the individual makes in the course of his or her online activities, for example interactions with other people. These are emergent and generally unknowable at the start.

Protecting the privacy of an individual must therefore take both of these risk sources into account.

The privacy by design movement has worked primarily to minimise the endogenous risks found in “back-end” systems. Privacy by design advocates a holistic approach to privacy considerations throughout the entire lifecycle and organisational management of technology development. Consequently, it results in the systematic adoption and deployment of traditional privacy enhancing technologies (PETs).

However, this approach does not systematically treat the other risk factor facing someone in the online world: the exogenous risks. It is this area that obscurity by design primarily targets.

At first sight there would appear to be a contradiction between the concepts of privacy and protection of personal data on the one hand, and the deliberate intention of social interaction on the internet to reveal information on the other. This points to one of the major problems of the concept of privacy: context. This contextual complexity makes universal design decisions that adequately protect, without overly restricting, individuals in every situation extremely difficult to implement, beyond broad, generic provisions.

==Social Interactions as Risk Acceptance==
In practice, all social interactions have some level of obscurity associated with them, and in the online world an individual can control the level of obscurity of their interactions through three main methods:

 * By limiting the online channels through which information is presented.
 * By limiting the audience that is reached through those channels.
 * By limiting the meaning of the information that is presented.

This allows a user to be proactive and to define the balance between opacity and transparency in the understanding of their online activities.
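
To make this concrete, the sketch below (Python, with entirely hypothetical names and fields) shows how a system might expose all three controls on a single post: the channels it is published to, the audience setting, and redactions that limit its meaning to outsiders.

<syntaxhighlight lang="python">
from dataclasses import dataclass, field

# Hypothetical sketch: a post whose obscurity is managed along the
# three axes described above (channels, audience, meaning).
@dataclass
class Post:
    text: str
    channels: set = field(default_factory=set)      # where it is published
    audience: str = "friends"                       # who may see it
    redactions: list = field(default_factory=list)  # details withheld to limit meaning

    def render(self) -> str:
        out = self.text
        for detail in self.redactions:
            out = out.replace(detail, "[withheld]")  # reduce clarity for outsiders
        return out

post = Post("Meeting Alice at the usual place at 6",
            channels={"friends-feed"},              # not cross-posted publicly
            audience="friends",
            redactions=["Alice", "the usual place"])
print(post.render())  # "Meeting [withheld] at [withheld] at 6"
</syntaxhighlight>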

Among the most commonly used techniques are pseudonyms and multiple accounts. However, the right to anonymity online has long been debated[1]. The use of real identities has been central to the business models of many online social networks, including Facebook and Google, which involve building detailed profiles of people so that targeted advertisements can be sent based on their true personalities. More recently this stance has softened a little, owing to the controversy concerning drag queens’ stage names[2] and the growth of ad-free social network sites, such as Ello[3], that welcome pseudonyms. Consequently, Google announced changes in its terms and conditions that allow pseudonyms online.

==Classes of Information==
An individual should be able to select the level of “privacy” needed from broadly three categories:

 * 1) Public
 * 2) Obscure
 * 3) Confidential

Some types of information published online, as on Twitter, are designed to reach a large audience and so are demonstrably public, whereas others, for example banking information, would most certainly be considered confidential.

It is the middle category, some form of publication but to a limited social network, that represents perhaps the bulk of online information and where obscurity is most effective.

==Obscurity as Risk Mitigation==
It is postulated that information online can be classed as “obscure” if one or more of the following elements is present in the context in which it is presented. Each of them can be placed under the user's control by incorporating it into the design principles of the system:

 * 1) Restricted Visibility
 * 2) Access Protection
 * 3) Reduced Identifiability
 * 4) Reduced Clarity

===Restricted Visibility===
If information cannot be found easily then it can be said to be obscure. While this may not seem to offer much protection, in practice it has a lot of perceived value. The “right to be forgotten” ruling requires search engines to remove, upon request, personal information that is no longer in the public interest. The fact that Google received xxxxx such requests within yyyy indicates the value perceived in obscuring access to personal information through search engines.

Design principles such as limited depth of visibility, such as friends, or friends of friends, on Facebook can be used to offer direct control over the level of obscurity. This is a mechanism that can be directly enforced by the software. Some sites, for example LinkedIn, allow control over which individual parts of the profile are visible, and therefore indexable, by search engines.
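
A minimal sketch of how such a depth limit can be enforced directly by software, here as a breadth-first walk over a hypothetical friendship graph (the data structures and the two-hop default are illustrative assumptions, not any network's actual implementation):

<syntaxhighlight lang="python">
from collections import deque

def visible_to(graph: dict, owner: str, viewer: str, max_depth: int = 2) -> bool:
    """Grant visibility only if viewer is within max_depth hops of owner
    (1 = friends, 2 = friends of friends)."""
    seen = {owner}
    frontier = deque([(owner, 0)])
    while frontier:
        person, depth = frontier.popleft()
        if person == viewer:
            return True
        if depth < max_depth:
            for friend in graph.get(person, ()):
                if friend not in seen:
                    seen.add(friend)
                    frontier.append((friend, depth + 1))
    return False

friends = {"ann": ["bob"], "bob": ["ann", "cat"], "cat": ["bob", "dan"], "dan": ["cat"]}
print(visible_to(friends, "ann", "cat"))  # True: a friend of a friend
print(visible_to(friends, "ann", "dan"))  # False: three hops away
</syntaxhighlight>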

Other mechanisms, for example the robots.txt file used on websites to request that the content of the site not be indexed, rely on the accessing software to behave correctly; compliance is thus not enforced.
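
As an illustration, Python's standard library includes a parser for this convention; the point to note is that the check lives entirely in the crawler, which must choose to consult the file:

<syntaxhighlight lang="python">
from urllib.robotparser import RobotFileParser

# A well-behaved crawler consults robots.txt before fetching; nothing
# forces it to. Example file contents (a request, not an enforcement):
#   User-agent: *
#   Disallow: /private/
rp = RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /private/"])

print(rp.can_fetch("*", "https://example.org/private/profile"))  # False: politely declined
print(rp.can_fetch("*", "https://example.org/public/page"))      # True
</syntaxhighlight>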

Finally, tools and strategies that allow information to be “devalued” by search engines, effectively search engine “unoptimisation”, could also be considered part of obscurity by design. These would apply to information that is picked up by search engine crawlers but is not intended to be searchable per se.

===Access Protection===
The application of passwords, and in particular minimum standards for password strength, as well as two-factor or multi-factor authentication, are all used to protect access.
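
A minimal sketch of both ideas, a minimum password standard and salted slow hashing for storage, using only the Python standard library (the specific length and iteration parameters are illustrative assumptions):

<syntaxhighlight lang="python">
import hashlib, hmac, os, re

MIN_LENGTH = 12  # illustrative policy value

def meets_minimum_standard(password: str) -> bool:
    """A sketch of a minimum password standard: length plus mixed classes."""
    return (len(password) >= MIN_LENGTH
            and bool(re.search(r"[a-z]", password))
            and bool(re.search(r"[A-Z]", password))
            and bool(re.search(r"\d", password)))

def store_password(password: str):
    """Store only a salted, deliberately slow hash, never the password."""
    salt = os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = store_password("Correct-Horse-Battery-9")
print(verify_password("Correct-Horse-Battery-9", salt, stored))  # True
</syntaxhighlight>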

Restrictions on access are often buried in privacy settings that are unknown to users, with the default settings often granting the most liberal access to information. Status quo bias means that users have a tendency to stick with default settings, so obscurity by design would mandate that those defaults be “obscurity by default”.
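
Expressed in code, “obscurity by default” simply means that the zero-configuration state is the most restrictive one, so that status quo bias works for the user rather than against them. A sketch with hypothetical setting names:

<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    # Defaults chosen so that doing nothing leaves the account obscure;
    # the user must act to become more visible, never to become less so.
    profile_visibility: str = "friends"   # not "everyone"
    searchable_by_engines: bool = False   # indexing is opt-in
    show_in_people_search: bool = False
    share_activity: bool = False

settings = PrivacySettings()  # the untouched default state favours obscurity
print(settings)
</syntaxhighlight>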

However, feedback mechanisms built into the user experience can be used to inform and engender awareness, and ultimately to modify the user's behaviour, by sensitising them to their individual, tailored risk profile. Such design elements would increase obscurity by design. For example, the “people who viewed your profile” feature on LinkedIn provides information on the exposure of the profile, inside and outside the network.

===Reduced Identifiability===
The linkage between a given piece of information and a given individual can be obfuscated through the use of pseudonyms. Another element of identifiability is the metadata associated with information and content, which is one reason access to metadata is so sought after by law enforcement and government surveillance agencies. Metadata adds context to the content, increasing the likelihood of identifiability. For example, a mobile communication in itself says very little, but the time and place of the call, together with the numbers involved, substantially increase the identifiability of who made the call, even if the caller was not the owner of the phone.
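
One way to weaken that linkage by design is keyed pseudonymisation combined with metadata stripping. The sketch below is illustrative, assuming a hypothetical operator-held secret key; the pseudonym is consistent across records but cannot be reversed without the key:

<syntaxhighlight lang="python">
import hmac, hashlib

SECRET_KEY = b"hypothetical-key-held-by-the-operator"  # assumption: stored apart from the data

def pseudonymise(identifier: str) -> str:
    """Replace an identifier with a keyed pseudonym: the same person maps
    to the same pseudonym, but the mapping needs the key to reverse."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"caller": pseudonymise("+44 7700 900123"),
          # Stripping metadata reduces identifiability further: without time
          # and place, linking the call to a person is much harder.
          "time": None, "cell_tower": None, "duration_s": 42}
print(record)
</syntaxhighlight>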

===Reduced Clarity===
Information is often passed between individuals, in the presence of others, that deliberately omits important content, relying instead on a common understanding between the source and the intended recipient. In this way, the information is obscured to third parties.

An extreme form of clarity reduction by design is public-private key encryption: only a party holding the right key material, the private key corresponding to the public key used, can render the communication intelligible. However, as this is a “back-end” technique rather than a “front-end” one related to social interactions, it properly belongs to privacy by design.

==Areas of Design==
As a front-end approach, obscurity by design relies on modifying behaviour, and in that regard there are parallels with the four categories proposed by Lawrence Lessig that can be used to influence behaviour:

 * 1) Law
 * 2) Market Forces
 * 3) Societal Norms
 * 4) Code (sometimes called architecture; in this context, technology)

Each of these can be addressed as a design element in an approach to obscurity by design and can contribute to the management of the obscurity needed.

The law can contribute to the reduction of information combination. For example, much concern was raised over Google's combined terms and conditions across its product portfolio, which were considered to yield combined personal information exceeding the purposes for which it was individually collected.

Many terms and conditions also prohibit certain behaviour, for example “screen scraping”, to prevent the trawling of websites by automated systems.

Market forces can contribute, for example where enforcement and penalties impose costs that reduce the attractiveness of circumventing regulations.

Norms, especially community norms, can be used to self-regulate behaviour. Flickr, for example, has strong community policies in place, and design elements can help reinforce community behaviour, for example by occasionally adding pop-up reminders about the code of conduct.

Code, or in this context technology, is already used extensively to manage obscurity. The trawling of photographs and the reconstruction of connections between online content and individuals, as with Facebook's facial recognition, has seen considerable advances and press coverage recently. Conversely, Google has announced a tool to blur faces in online content, providing a capability for increasing obscurity and thereby managing privacy risk.
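
As a rough illustration of what such a face-blurring capability involves (this is not the tool referred to above), the widely used OpenCV library can detect faces with its bundled classifier and blur the matching regions, assuming a local photo.jpg:

<syntaxhighlight lang="python">
import cv2

# Illustrative sketch: detect faces with OpenCV's bundled Haar cascade
# and blur them before the image is published.
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("photo.jpg")  # hypothetical input file
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5):
    image[y:y+h, x:x+w] = cv2.GaussianBlur(image[y:y+h, x:x+w], (51, 51), 30)

cv2.imwrite("photo_obscured.jpg", image)
</syntaxhighlight>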

==Future Concerns==
There is, however, increasing concern that identifiability is growing through data harvesting and big data techniques. This raises the concern that personal data can be derived from combinations of non-personal data fragments, and even from deliberately anonymised data. Such de-anonymisation concerns have been developed through the mosaic theory.
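
A toy illustration of the mosaic effect: neither dataset below names the patient, yet joining them on shared quasi-identifiers (the classic zip code, birth date and sex combination) re-identifies the record. All data here is invented for illustration:

<syntaxhighlight lang="python">
# Toy illustration of mosaic-style re-identification: each dataset looks
# harmless alone, but their combination singles out an individual.
medical = [  # "anonymised": names removed
    {"zip": "02138", "birth": "1945-07-21", "sex": "F", "diagnosis": "..."},
    {"zip": "02139", "birth": "1962-03-02", "sex": "M", "diagnosis": "..."},
]
voters = [   # public record with names
    {"name": "Jane Roe", "zip": "02138", "birth": "1945-07-21", "sex": "F"},
]

keys = ("zip", "birth", "sex")
for v in voters:
    matches = [m for m in medical if all(m[k] == v[k] for k in keys)]
    if len(matches) == 1:  # a unique combination re-identifies the record
        print(v["name"], "->", matches[0]["diagnosis"])
</syntaxhighlight>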

[1] https://www.eff.org/deeplinks/2011/07/case-pseudonyms

[2] http://www.wsj.com/articles/facebook-changes-real-name-policy-after-uproar-from-drag-queens-1412223040

[3] https://ello.co/wtf/post/about-ello