Security parameter

In cryptography, a security parameter is a way of measuring how "hard" it is for an adversary to break a cryptographic scheme. There are two main types of security parameter: computational and statistical, often denoted by $$\kappa$$ and $$\sigma$$, respectively. Roughly speaking, the computational security parameter measures the input size of the computational problem on which the cryptographic scheme is based, which determines its computational complexity, whereas the statistical security parameter measures the probability with which an adversary can break the scheme (whatever "breaking" means for the protocol in question).

Security parameters are usually expressed in unary representation - i.e. $$\kappa$$ is expressed as a string of $$\kappa$$ $$1$$s, $$\kappa=1\cdots 1$$, conventionally written as $$1^\kappa$$ - so that the time complexity of the cryptographic algorithm is polynomial in the size of the input.
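As a minimal sketch of why the unary convention matters (the function name and key-sampling method here are illustrative assumptions, not part of any standard): a key-generation routine that receives $$1^\kappa$$ has an input of length $$\kappa$$, so "polynomial time in the input size" means polynomial in $$\kappa$$ itself.

```python
import secrets

def keygen(unary_param: str) -> int:
    """Hypothetical key generator taking the security parameter as 1^kappa."""
    assert set(unary_param) <= {"1"}, "expected unary input 1^kappa"
    kappa = len(unary_param)        # the input size IS kappa
    return secrets.randbits(kappa)  # sample a kappa-bit key

kappa = 128
key = keygen("1" * kappa)           # pass the string 1^128
assert 0 <= key < 2 ** kappa
```

Had $$\kappa$$ been passed in binary, the input would have length about $$\log_2 \kappa$$, and an algorithm running in time $$2^{\kappa}$$ would count as exponential in its input size.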

Computational security
The security of cryptographic primitives relies on the hardness of certain computational problems. One sets the computational security parameter $$\kappa$$ such that $$O(2^{\kappa})$$ computation is considered intractable.

Examples

 * If the security of a scheme depends on the secrecy of a key for a pseudorandom function (PRF), then we may specify that the PRF key should be sampled from the space $$\{0,1\}^\kappa$$ so that a brute-force search requires $$O(2^\kappa)$$ computational power.
 * In the RSA cryptosystem, the security parameter $$\kappa$$ denotes the length in bits of the modulus $$n$$; the positive integer $$n$$ must therefore be a number in the set $$\{0,\ldots,2^{\kappa}-1\}$$.
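The two examples above can be sketched as follows; this is an illustrative toy (the tiny parameter values and the sample modulus are assumptions for demonstration, not realistic choices):

```python
import secrets

kappa = 16  # toy parameter; real schemes use e.g. 128 (symmetric) or 2048 (RSA)

# PRF key sampled uniformly from the space {0,1}^kappa:
prf_key = secrets.randbits(kappa)
assert 0 <= prf_key < 2 ** kappa

# Brute-force search must try up to 2^kappa candidate keys:
search_space = 2 ** kappa  # 65536 candidates for kappa = 16

# Toy RSA modulus of bit length 12 (61 * 53), so n lies in {0, ..., 2^12 - 1}:
n = 3233
assert n < 2 ** 12
```

Increasing $$\kappa$$ by one bit doubles the brute-force search space, which is why the cost is written as $$O(2^\kappa)$$.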

Statistical security
Security in cryptography often relies on the fact that the statistical distance between
 * a distribution predicated on a secret, and
 * a simulated distribution produced by an entity that does not know the secret
is small. We formalise this using the statistical security parameter, saying that two distributions are statistically close if the statistical distance between them is a negligible function in the security parameter. One sets the statistical security parameter $$\sigma$$ such that $$O(2^{-\sigma})$$ is considered a "small enough" chance of the adversary winning.
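A minimal sketch of the quantity involved (the function and the two example distributions are assumptions for illustration): the statistical distance between two distributions over a finite support is half the $$L_1$$ distance between their probability mass functions.

```python
def statistical_distance(p: dict, q: dict) -> float:
    """Total variation distance between two finite distributions."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# Two near-identical distributions over {0, 1}:
real  = {0: 0.5, 1: 0.5}                    # e.g. distribution tied to a secret
ideal = {0: 0.5 + 2**-40, 1: 0.5 - 2**-40}  # a simulator's approximation

sigma = 40
assert statistical_distance(real, ideal) <= 2 ** -sigma
```

Here the simulated distribution is within $$2^{-\sigma}$$ of the real one, so no statistical test can distinguish them with advantage better than $$2^{-\sigma}$$.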

Consider the following two broad categories of adversarial attack on a given cryptographic scheme: attacks in which the adversary tries to learn secret information, and attacks in which the adversary tries to convince an honest party to accept a false statement as true (or vice versa). In the first case, such as an attack on a public-key encryption scheme, an adversary may be able to obtain a large amount of information from which to attempt to learn the secret, e.g. by examining the distribution of ciphertexts for a fixed plaintext encrypted under different randomness. In the second case, it may be that the adversary must guess a challenge or a secret and can do so with some fixed probability; here we can talk about distributions by considering the algorithm for sampling the challenge in the protocol. In both cases, we can talk about the chance of the adversary "winning" in a loose sense, and can parameterise statistical security by requiring the distributions to be statistically close in the first case, or by defining a challenge space dependent on the statistical security parameter in the second case.
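The second category above can be sketched as follows (a hypothetical toy, not a real protocol): a verifier samples a challenge uniformly from a space of size $$2^\sigma$$, so an adversary forced to commit to a guess in advance succeeds with probability exactly $$2^{-\sigma}$$.

```python
from fractions import Fraction
import secrets

sigma = 40
challenge_space_size = 2 ** sigma
challenge = secrets.randbits(sigma)  # uniform over {0, ..., 2^sigma - 1}

# Probability that the adversary's fixed guess matches the challenge:
success_probability = Fraction(1, challenge_space_size)
assert success_probability == Fraction(1, 2 ** 40)
```

Enlarging the challenge space by one bit halves the adversary's success probability, mirroring how the brute-force cost doubles with each bit of computational security.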

Examples

 * In encryption schemes, one aspect of security is (at a high level) that anything that can be learnt about a plaintext given a ciphertext can also be learnt from a randomly-sampled string (of the same length as ciphertexts) that is independent of the plaintext. Formally, one would need to show that the distribution of ciphertexts for any fixed plaintext is statistically close to the uniform distribution over strings of the same length.
 * In zero knowledge protocols, we can further subdivide the statistical security parameters into zero-knowledge and soundness statistical security parameters. The former parameterises what the transcript leaks about the secret knowledge, and the latter parameterises the chance with which a dishonest prover can convince an honest verifier that they know a secret that they do not in fact know.
 * In universal composability, the security of a protocol relies on the statistical indistinguishability of distributions of a real-world and an ideal-world execution. Interestingly, for a computationally unbounded environment it is not sufficient for distributions to be statistically indistinguishable since the environment can run the experiment enough times to observe which distribution is being produced (real or ideal); however, any standalone adversary against the protocol will only win with negligible probability in the statistical security parameter since it only engages in the protocol once.
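The soundness parameter in the zero-knowledge example above can be illustrated with a standard amplification argument (the figure of 1/2 per round is an assumption for the sketch): if a single run of a proof protocol lets a dishonest prover cheat with probability 1/2, then repeating it independently $$t$$ times drives the overall cheating probability down to $$2^{-t}$$, so $$t$$ plays the role of a soundness security parameter.

```python
# Each independent repetition halves the dishonest prover's chance of
# surviving undetected (assumed per-round cheating probability of 1/2):
single_run_cheat = 0.5
t = 40                                   # number of independent repetitions
cheat_probability = single_run_cheat ** t
assert cheat_probability == 2.0 ** -40
```

Choosing $$t = \sigma$$ thus yields exactly the $$O(2^{-\sigma})$$ bound on the adversary's success that the statistical security parameter is meant to capture.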