Separation of protection and security

In computer science, the separation of protection and security is an application of the separation of mechanism and policy principle.[1] Under this distinction, the protection mechanism is a component that implements the security policy. Many frameworks, however, treat both as security controls of different types: protection mechanisms would be considered technical controls, while policies would be considered administrative controls.

Overview

The adoption of this distinction in a computer architecture usually means that protection is provided as a fault-tolerance mechanism by the hardware/firmware and the kernel, whereas the operating system and applications implement their security policies. In this design, security policies therefore rely on the protection mechanisms and on additional cryptographic techniques.

Examples of models that separate protection and security include the access matrix, UCLA Data Secure Unix, take-grant and filter. Such separation is not found in models such as high-water mark, Bell–LaPadula (original and revisited), information flow, strong dependency and constraints.[2]
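
As an illustration of this separated design, the following sketch (a hypothetical Python example; the names and data are invented for clarity and are not taken from any of the systems cited above) models the access matrix purely as an enforcement mechanism, while the security policy is expressed separately as the set of rights granted through it:

```python
# Hypothetical sketch (not from any cited system): the mechanism below
# enforces whatever rights it is given; it embodies no policy of its own.

class AccessMatrix:
    """Protection mechanism: records and checks (subject, object) rights."""

    def __init__(self):
        self._rights = {}  # (subject, obj) -> set of rights

    def grant(self, subject, obj, right):
        self._rights.setdefault((subject, obj), set()).add(right)

    def check(self, subject, obj, right):
        return right in self._rights.get((subject, obj), set())


def apply_policy(matrix):
    """Security policy: a higher layer decides which rights exist at all."""
    matrix.grant("alice", "payroll.db", "read")
    matrix.grant("alice", "payroll.db", "write")
    matrix.grant("bob", "payroll.db", "read")


matrix = AccessMatrix()
apply_policy(matrix)
assert matrix.check("alice", "payroll.db", "write")
assert not matrix.check("bob", "payroll.db", "write")
```

Here the mechanism would remain unchanged if the policy layer granted an entirely different set of rights.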

Critique

The line between 'separation of mechanism and policy' and 'separation of protection and security' is not clear-cut, and the terms 'protection' and 'security' are not widely treated as distinct; 'computer security', for example, is commonly defined as 'the protection of computer systems'. Indeed, the predominant hardware approach,[3] hierarchical protection domains, is presented as serving both protection and security; a prominent example is the ring architecture with "supervisor mode" and "user mode".[4] Such an approach adopts a policy already at the lower levels (hardware/firmware/kernel) and obliges the rest of the system to build on it. The choice to distinguish between protection and security in the overall architecture design therefore implies rejecting the hierarchical approach in favour of another one, capability-based addressing.[1][5]
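
To make the contrast concrete, the sketch below (again a hypothetical Python example, not modelled on the IBM System/38 or any other cited design) juxtaposes a ring-style check, which builds a privilege ordering into the mechanism itself, with a capability check, where access follows solely from possession of a token naming an object and its rights:

```python
# Hypothetical contrast; neither function models a real architecture.

def ring_check(current_ring, required_ring):
    """Hierarchical protection domains: the mechanism itself fixes a policy
    ('a lower ring number is more privileged')."""
    return current_ring <= required_ring


class Capability:
    """An unforgeable reference that carries its own rights."""

    def __init__(self, obj, rights):
        self.obj = obj
        self.rights = frozenset(rights)


def capability_check(cap, obj, right):
    """Capability-based addressing: access follows from possession of the
    capability alone; no privilege ordering is built into the check."""
    return cap.obj is obj and right in cap.rights


segment = object()                       # some protected resource
cap = Capability(segment, {"read"})      # issued by a separate policy layer
assert capability_check(cap, segment, "read")
assert not capability_check(cap, segment, "write")
assert ring_check(0, 3) and not ring_check(3, 0)
```

In the capability sketch, the policy consists entirely of which capabilities are handed out; the check itself prescribes nothing about who should hold them.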

Notes

  1. Wulf et al. 1974, pp. 337–345.
  2. Landwehr 1981, pp. 254, 257; a table on p. 273 shows which computer security models separate the protection mechanism from the security policy.
  3. Swift 2005, p. 26.
  4. Intel Corporation 2002.
  5. Houdek et al. 1981.

References

  • Houdek, M. E.; Soltis, F. G.; Hoffman, R. L. (1981). "IBM System/38 support for capability-based addressing". Proceedings of the 8th ACM International Symposium on Computer Architecture. ACM/IEEE. pp. 341–348.
  • Intel Corporation (2002). The IA-32 Architecture Software Developer's Manual, Volume 1: Basic Architecture.
  • Landwehr, Carl E. (September 1981). "Formal Models for Computer Security". ACM Computing Surveys. 13 (3): 247–278.
  • Swift, Michael M.; Bershad, Brian N.; Levy, Henry M. (February 2005). "Improving the reliability of commodity operating systems". ACM Transactions on Computer Systems. 23 (1): 77–110.
  • Wulf, W.; Cohen, E.; Corwin, W.; Jones, A.; Levin, R.; Pierson, C.; Pollack, F. (June 1974). "HYDRA: the kernel of a multiprocessor operating system". Communications of the ACM. 17 (6): 337–345. doi:10.1145/355616.364017. ISSN 0001-0782. S2CID 8011765.