Sensitivity analysis is based on a fault-failure model of software and on the premise that software testability can predict the probability that a failure will occur when a fault exists, given a particular input distribution. A sensitive location is one in which faults cannot hide during testing. Internal states are perturbed to determine sensitivity. The technique requires instrumentation of the code and produces a count of the total executions through an operation, an infection rate estimate, and a propagation analysis.
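A minimal sketch of the idea, in Python, with a hypothetical program and perturbation scheme: internal state at an instrumented location is perturbed on sampled inputs, executions are counted, and the fraction of perturbations that reach the output approximates the propagation analysis.

```python
import random

def program(x, delta=0):
    # Program under analysis; 'y' is the internal state at the
    # instrumented location, optionally perturbed by 'delta'.
    y = x * 2 + delta          # instrumented location
    return 1 if y > 10 else 0  # downstream computation may mask the fault

def estimate_sensitivity(inputs, trials=1000):
    executions = 0   # count of executions through the location
    propagated = 0   # perturbations that changed the program's output
    for _ in range(trials):
        x = random.choice(inputs)
        executions += 1
        if program(x) != program(x, delta=random.choice([-1, 1])):
            propagated += 1
    # Fraction of perturbations that propagated to a changed output.
    return executions, propagated / trials

if __name__ == "__main__":
    print(estimate_sensitivity(inputs=range(10)))
```

A low propagation fraction indicates an insensitive location, one in which faults can hide during testing.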
A method developed for describing the value of an information system, as assessed by its owner, taking into account the cost, capability, and jeopardy to mission accomplishment or human life associated with the system.
An intrusion detection and prevention system (IDPS) component that monitors and analyzes network activity and may also perform prevention actions.
An intervening space established by the act of setting or keeping apart.
(1) A security principle that divides critical functions among different employees in an attempt to ensure that no one employee has enough information or access privilege to perpetrate damaging fraud. (2) A principle of design that separates functions with differing requirements for security or integrity into separate protection domains. Separation of duty is sometimes implemented as an authorization rule specifying that two or more subjects are required to authorize an operation. The goal is to ensure that no single individual (acting alone) can compromise an application system’s features and its control functions. For example, the security function is separated from security operations. This is a management and preventive control.
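One possible form of the authorization rule described above, sketched in Python; the operation name, approver identities, and two-approver threshold are illustrative assumptions, not a prescribed implementation.

```python
def authorize(operation, approvers, required=2):
    # Separation-of-duty rule: at least 'required' distinct subjects
    # must approve before the operation can proceed.
    distinct = set(approvers)
    if len(distinct) < required:
        raise PermissionError(
            f"{operation!r} needs {required} distinct approvers, got {len(distinct)}"
        )
    return True

# A single individual acting alone cannot authorize the transfer.
authorize("wire_transfer", approvers={"alice", "bob"})      # permitted
# authorize("wire_transfer", approvers={"alice", "alice"})  # raises PermissionError
```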
A technique of controlling access by precluding sharing; names given to objects are only meaningful to a single subject and thus cannot be addressed by other subjects.
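A toy Python sketch of this naming-based isolation, with hypothetical subject and object names: each subject resolves names only against its own private table, so names created by another subject cannot be addressed.

```python
class PerSubjectNamespace:
    """Each subject gets a private name table, so a name bound by one
    subject cannot be resolved (addressed) by any other subject."""

    def __init__(self):
        self._tables = {}

    def bind(self, subject, name, obj):
        self._tables.setdefault(subject, {})[name] = obj

    def resolve(self, subject, name):
        # Lookup consults only the requesting subject's own table.
        return self._tables.get(subject, {}).get(name)

ns = PerSubjectNamespace()
ns.bind("alice", "report", object())
print(ns.resolve("alice", "report"))  # found
print(ns.resolve("bob", "report"))    # None: the name is meaningless to bob
```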
The principle of separation of privilege asserts that protection mechanisms requiring two keys (held by different parties) for access are stronger than those requiring only one key. The rationale behind this principle is that “no single accident, deception, or breach of trust is sufficient” to circumvent the mechanism. In computer systems, the separation is often implemented as a requirement that multiple conditions (access rules) be met before access is allowed.
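A minimal Python sketch of the multiple-condition form of the principle; the particular conditions (two keys held by different parties) are assumptions for illustration.

```python
def grant_access(conditions):
    # Separation of privilege: every independent condition (each "key"
    # held by a different party) must be satisfied before access is allowed.
    return all(conditions.values())

# Hypothetical access rules: two keys held by different parties.
conditions = {
    "operator_key_presented": True,
    "supervisor_key_presented": False,
}
print(grant_access(conditions))  # False: a single key is not sufficient
```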
A protocol for carrying IP over an asynchronous serial communications line. The Point-to-Point Protocol (PPP) has replaced SLIP.
(1) A host that provides one or more services for other hosts over a network as a primary function. (2) A computer program that provides services to other computer programs in the same or another computer. (3) A computer running a server program is frequently referred to as a server, though it may also be running other client (and server) programs.
A system architect responsible for the overall design, implementation, and maintenance of a server.
These threats are due to poorly implemented session tracking, which may provide an avenue of attack. Similarly, user-provided input might eventually be passed to an application interface that interprets the input as part of a command, such as a Structured Query Language (SQL) command. Attackers may also inject custom code into the website for subsequent browsers to process via cross-site scripting (XSS). Subtle changes introduced into the Web server can radically change the server’s behavior (including turning a trusted entity into a malicious one), the accuracy of the computation (including changing computational algorithms to yield incorrect results), or the confidentiality of the information (e.g., disclosing collected information).
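A small illustration of the SQL command-injection avenue described above, using Python’s built-in sqlite3 module with hypothetical table contents; it contrasts concatenating user input into a query with passing it as a bound parameter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "x' OR '1'='1"   # attacker-supplied value

# Vulnerable pattern: the input is interpreted as part of the SQL command,
# so the injected OR clause returns every row.
unsafe = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()

# Safer pattern: a parameterized query keeps the input as data, not code.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # all rows leak
print(safe)    # no rows match
```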
A physical security control in which all servers are kept in a single, secure location and a network configuration mechanism is used to monitor for theft or damage.
Network traffic is distributed dynamically across a group of servers running a common application so that no single server is overwhelmed. Server load balancing increases server availability and application system availability, and it can be a viable contingency measure when implemented among different sites. In this regard, the application system continues to operate as long as one or more of the sites remain operational.
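A toy Python sketch of one common load-balancing scheme (round robin); the server names and the way failures are detected are assumptions, but it shows traffic continuing to flow while at least one site remains operational.

```python
import itertools

class RoundRobinBalancer:
    """Distributes requests across a pool of servers; servers at failed
    sites are skipped so the application keeps running while any remain."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(self.servers)
        self._cycle = itertools.cycle(self.servers)

    def mark_down(self, server):
        self.healthy.discard(server)

    def next_server(self):
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server in self.healthy:
                return server
        raise RuntimeError("no servers available")

balancer = RoundRobinBalancer(["site-a", "site-b", "site-c"])
balancer.mark_down("site-b")                       # simulate a failed site
print([balancer.next_server() for _ in range(4)])  # traffic flows to a and c
```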
The purpose is the same as that of disk arrays, except that a file server is duplicated instead of a disk. All information is written to both servers simultaneously. This is a technical and recovery control, and it supports the availability goal.
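A minimal in-memory Python sketch of the mirroring idea, with dictionaries standing in for the two file servers; every write goes to both copies so reads can be served from the mirror if the primary is unavailable.

```python
class MirroredServers:
    """Writes every record to a primary and a mirror file server at the
    same time; reads fall back to the mirror if the primary is lost."""

    def __init__(self):
        self.primary = {}
        self.mirror = {}

    def write(self, key, value):
        # Both copies are updated in the same operation, so either server
        # can satisfy reads if the other becomes unavailable.
        self.primary[key] = value
        self.mirror[key] = value

    def read(self, key, primary_up=True):
        source = self.primary if primary_up else self.mirror
        return source[key]

store = MirroredServers()
store.write("report.txt", b"quarterly figures")
print(store.read("report.txt", primary_up=False))  # served from the mirror
```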