
Information Risk Assessment

Timothy Virtue, Justin Rainey, in HCISPP Study Guide, 2015

Relative to the timing of a threat event, controls generally fall into three categories:

Preventative

Detective

Corrective

Preventative

Preventative controls are designed to be implemented prior to a threat event and reduce and/or avoid the likelihood and potential impact of a successful threat event. Examples of preventative controls include policies, standards, processes, procedures, encryption, firewalls, and physical barriers.

Detective

Detective controls are designed to detect a threat event while it is occurring and provide assistance during investigations and audits after the event has occurred. Examples of detective controls include security event log monitoring, host and network intrusion detection of threat events, and antivirus identification of malicious code.

Corrective

Corrective controls are designed to mitigate or limit the potential impact of a threat event once it has occurred and recover to normal operations. Examples of corrective controls include automatic removal of malicious code by antivirus software, business continuity and recovery plans, and host and network intrusion prevention of threat events.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128020432000069

Local System Attacks

Thomas Wilhelm, in Professional Penetration Testing (Second Edition), 2013

Encrypted Tunnels

After a system has been exploited and we have an account, any activity we perform over the netcat connection could be detected by network defensive appliances, including intrusion detection/prevention systems, as seen in Figure 9.26. To prevent detection, we need to set up an encrypted tunnel as quickly as possible. For this example, we will use OpenSSH (Secure Shell).


Figure 9.26. Network defenses blocking malware over cleartext channel.

An SSH tunnel will allow us to push malware and additional exploits onto the victim system without being detected, because all the traffic between the attack system and the victim is encrypted. Once we have an encrypted tunnel, we can continue our attack into the network.

Our initial connection with the netcat reverse shell will be useful in setting up the SSH tunnel. In the lab, we will be using a very simplified example of how preventative controls are established within a network, but the concept is identical in more complex networks. In this scenario, we are using the iptables application to specifically deny all traffic originating from 192.168.1.10, which is the attack system in this case.

Warning

Improperly configuring iptables in the lab network can result in a denial of service against the host or attack system, producing incorrect results. Creating firewall rules is not covered in this book but is an important skill to have as a penetration tester, especially when looking for firewall rule misconfigurations that can be exploited in a pentest.

Because we have already compromised the Hackerdemia disk in our example, we will add an additional target to simulate how we would use the exploited system to attack other targets in the network, as seen in Figure 9.27. We are also going to add an optional bit of complexity to our attack: a host firewall (if you want to replicate this scenario in your own lab without the host firewall, that’s fine).


Figure 9.27. Tunneling network configuration.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781597499934000094

Risk Management, Security Compliance, and Audit Controls

Craig Wright, in The IT Regulatory and Standards Compliance Handbook, 2008

FMECA Analysis

MIL-STD-1629 Procedures for Performing a Failure Mode, Effects and Criticality Analysis should be understood in detail. Failure mode, effects and criticality analysis helps to identify:

Risk factors

Preventative controls

Corrective controls

FMECA couples business continuity planning and disaster recovery into the initial analysis:

Identifies potential failures

Identifies the worst case for all failures

Occurrence and effects of failure are reduced through additional controls

The FMECA process consists of the following stages (a minimal worksheet sketch in Python follows the list):

1. Define the system or target:

a. What is the system's mission?

b. How does the system interface with other systems?

c. What expectations are there? For example, how do performance and reliability affect the system?

2. Create block diagrams:

a. FMECA relies on the creation of block diagrams.

b. Diagrams illustrate all functional entities and how information flows between them.

3. Identify all possible individual module failures and system interface failures:

a. Every block, and every line that connects blocks, is a potential point of failure.

b. Identify how each failure would affect the overall mission of the system.

4. Analyze each possible failure in terms of a worst-case scenario:

a. Determine a severity level for the failure.

b. Assign this value to the possible outcome.

5. Identify:

a. Mechanisms for detecting failures.

b. Compensating controls relating to the failures.

6. Create and describe any actions necessary to prevent or eliminate the failure or its effects:

a. Define additional controls to prevent or detect the failure.

7. Analyze and describe the effects of the additional controls:

a. Define the roles and responsibilities for addressing the compensating controls.

8. Document the analysis:

a. Explain the problems found and the solutions.

b. Document residual risks, for example, days without compensating controls.

c. Describe the potential impact of these residual risks.

FMECA Summary

This process involves a detailed analysis based on qualitative methods. It is a reasonably objective method and helps to identify controls and issues. It also identifies residual risks and issues. The Failure Mode, Effects and Criticality Analysis model is well accepted in many government and military organizations. The strength of this process lies in its ability to determine the point of failure and focus limited resources on adding controls where they add the most value.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781597492669000205

Controls

Jack Freund, Jack Jones, in Measuring and Managing Information Risk, 2015

Control relationships

Not surprisingly, controls have relationships between each other. For example, some controls depend on the existence of other controls to be effective. To illustrate this point, let’s start simple. Imagine that there are only three categories of asset-level controls: preventative, detective, and responsive (Figure 11.1). Preventative controls affect the likelihood of a loss event occurring, detective controls enable us to recognize when a loss event has occurred, and responsive controls allow us to minimize the loss event’s effect on the organization. (We’ll go deeper soon, but for now we just need to make a couple of key points regarding the relationship between controls of varying types.)


FIGURE 11.1. Basic control ontology.

Imagine that it’s possible to have controls that are perfect (i.e., they are 100% effective) (Figure 11.2). (Yes, we know there is no such thing as a perfect control, but bear with us while we make a point.) With perfect preventative controls, we wouldn’t need detective or responsive controls, because there would be no loss events to detect or respond to. Example: Suppose there were such a thing as unbreakable encryption, both in terms of key strength and how it’s used. With perfect encryption, we wouldn’t need to worry about detecting when someone has broken the encryption, nor would we need to have an incident response capability to manage such an event. (This whole concept of control perfection has got to be driving some of you nuts… Hang in there.)


FIGURE 11.2. Controls in a perfect world.

Conversely, with perfect detective and responsive controls, there would be no need for preventative controls (i.e., instantaneous detection and response capabilities that eliminate the materialization of loss even when preventative controls fail) (Figure 11.3). Example: Assume someone breaks our encryption. We would detect it instantaneously, and our response would eliminate any potential for loss to materialize.


FIGURE 11.3. Alternate version of controls in a perfect world.

To eliminate the need for preventative controls, however, we need to have both perfect detection and perfect response. If either of these is imperfect, then our need for preventative controls returns.

The point of this fantasy is to illustrate that control relationships take either of two forms: "and" or "or." Those of you with engineering backgrounds or other exposure to Boolean logic may have already recognized this. Preventative controls have an "or" relationship with the combination of detection and response controls, whereas detection and response have an "and" relationship with each other. In other words, we can have preventative controls, or detection and response controls.

TALKING ABOUT RISK

For those of you who aren’t familiar with Boolean concepts, simply think of it this way: when an "and" relationship exists between two controls, both have to be effective for their benefit to be realized. For example, you can have the best detection capability in the world, but if your response capabilities are badly broken, then the overall capability is broken. With an "or" relationship, if either of two controls is effective, then the overall benefit is realized.

Now let’s set fantasy aside. Suppose our preventative controls are only 90% effective. In other words, when threat agents act in a manner that could result in loss, 90% of the time their actions are thwarted (e.g., only 10% of fraud attempts are successful at gaining access to money). This means that 10% of the time we have to detect that a loss event has occurred and respond to it. Therefore, if our detection and response controls are, in combination, 90% effective against that 10% of events (e.g., we are able to recover 90% of the money the fraudsters tried to run off with), then the combined preventative, detective, and responsive control effectiveness is 99% (Figure 11.4).


FIGURE 11.4. Combined effectiveness.

Being able to recognize the relationships and dependencies between different controls enables us to more effectively recognize where gaps exist, and to prevent gaps in the first place. It also enables us to do a better job of gauging the efficacy of combinations of controls.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780124202313000117

Botnet Detection: Tools and Techniques

Craig A. Schiller, ... Michael Cross, in Botnets, 2007

Virus Detection on Hosts

How do you manage the botnet problem—or indeed, any security problem? Here's a simplification of a common model describing controls for an operational environment:

Administrative controls (policies, standards, procedures)

Preventative controls (physical, technical, or administrative measures to lower your systems’ exposure to malicious action)

Detective controls (measures to identify and react to security breaches and malicious action)

Corrective controls (measures to reduce the likelihood of a recurrence of a given breach)

Recovery controls (measures to restore systems to normal operation)

You can see from this list that detection is only part of the management process. In fact, when we talk about detection as in “virus detection,” we're often using the term as shorthand for an approach that covers more than one of these controls. Here we consider antivirus as a special case of a HIDS, but it doesn't have to be (and, in enterprise terms, it shouldn't be) restricted to a single layer of the “onion.” The antivirus industry might not have invented defense in depth or multilayering, but it was one of the first kids on the block (Fred Cohen: A Short Course on Computer Viruses, Wiley). In a well-protected enterprise, antivirus sits on the desktop, on laptops, on LAN servers, on application servers, on mail servers, and so on. It's likely to embrace real-time (on-access) scanning at several of those levels, as well as or instead of on-demand (scheduled or user-initiated) scanning. It might include some measure of generic filtering (especially in e-mail and/or Web traffic) and should certainly include some measure of heuristic analysis as well as pure virus-specific detection (see the following discussion).

Nowadays full-strength commercial antivirus software for the enterprise normally includes console facilities for central management, reporting, and logging as well as staged distribution of virus definitions (“signatures”). Properly configured, these facilities increase your chances of getting an early warning of malicious activity, such as a botnet beginning to take hold on your systems. Look out for anomalies such as malicious files quarantined because they could not be deleted or files quarantined because of suspicious characteristics. Many products include a facility for sending code samples back to the vendor for further analysis. And, of course, antivirus products can be integrated with other security products and services, which can give you a better overview of a developing security problem.

Antivirus is often seen as the Cinderella of the security industry, addressing a declining proportion of malware with decreasing effectiveness and tied to a subscription model that preserves the vendor's revenue stream without offering protection against anything but known viruses. What role can it possibly have in the mitigation of bot activity? Quite a big role, in fact, not least because of its ability to detect the worms and blended threats that are still often associated with the initial distribution of bots.

You should be aware that modern antivirus software doesn't only detect viruses. In fact, full-strength commercial antivirus software has always detected a range of threats (and some nonthreats such as garbage files, test files, and so on). A modern multilayered enterprise antivirus (AV) solution detects a ridiculously wide range of threats, including viruses, jokes, worms, bots, backdoor Trojans, spyware, adware, vulnerabilities, phishing mails, and banking Trojans. Not to mention a whole class of nuisance programs, sometimes referred to as possibly unwanted programs or potentially unwanted applications. So why don't we just call it antimalware software? Perhaps one reason is that although detection of even unknown viruses has become extraordinarily sophisticated (to the point where it's often possible to disinfect an unknown virus or variant safely as well as detect it), it's probably not technically possible to detect and remove all malware with the same degree of accuracy. A vendor can reasonably claim to detect 100 percent of known viruses and a proportion of unknown viruses and variants but not to detect anything like 100 percent of malware. Another reason is that, as we've already pointed out, not everything a scanner detects is malicious, so maybe antimalware wouldn't be any better.

Tools & Traps …

Explaining Antivirus Signatures

It's widely assumed that antivirus works according to a strictly signature-based detection methodology. In fact, some old-school antivirus researchers loathe the term signature, at least when applied to antivirus (AV) technology, for several reasons. (The term search string is generally preferred, but it's probably years too late to hope it will be widely adopted outside that community when even AV marketing departments use the term signature quite routinely). Furthermore:

The term signature has so many uses and shades of meaning in other areas of security (digital signatures, IDS attack signatures, Tripwire file signatures) that it generates confusion rather than resolving it. IDS signatures and AV signatures (or search strings, or identities, or .DATs, or patterns, or definitions …) are similar in concept in that both are “attack signatures”; they are a way of identifying a particular attack or range of attacks, and in some instances they identify the same attacks. However, the actual implementation can be very different. Partly this is because AV search strings have to be compact and tightly integrated for operational reasons; it wouldn't be practical for a scanner to interpret every one of hundreds of thousands of verbose, standalone rules every time a file was opened, closed, written, or read, even on the fastest multiprocessor systems. Digital signatures and Tripwire signatures are not really attack signatures at all: They're a way of fingerprinting an object so that it can be defended against attack.

It has a specific (though by no means universally used) technical application in antivirus technology, applied to the use of a simple, static search string. In fact, AV scanning technology had to move far beyond that many years ago. Reasons for this include the rise of polymorphic viruses, some of which introduced so many variations in shape between different instances of the same virus that there was no usable static string that could be used as a signature. However, there was also a need for faster search techniques as systems increased in size and complexity.

The term is often misunderstood as meaning that each virus has a single unique identifier, like a fingerprint, used by all antivirus software. If people think about what a signature looks like, they probably see it as a text string. In fact, the range of sophisticated search techniques used today means that any two scanner products are likely to use very different code to identify a given malicious program.

In fact, AV uses a wide range of search types, from UNIX-like regular expressions to complex decryption algorithms and sophisticated search algorithms. These techniques increase code size and complexity, with inevitable increases in scanning overhead. However, in combination with other analytical tools such as code emulation and sandboxing, they do help increase the application's ability to detect unknown malware or variants, using heuristic analysis, generic drivers/signatures, and so on.

To this end, modern malware is distributed inconspicuously, spammed out in short runs or via backdoor channels, the core code obscured by repeated rerelease, wrapped and rewrapped using runtime packers, to make detection by signature more difficult. These technical difficulties are increased by the botherder's ability to update or replace the initial intrusive program.

Tools & Traps …

Malware in the Wild

The WildList Organization International (www.wildlist.org) is a longstanding cooperative venture to track “in the wild” (ItW) malware, as reported by 80 or so antivirus professionals, most of them working for AV vendors. The WildList itself is a notionally monthly list of malicious programs known to be currently ItW. Because the organization is essentially staffed by volunteers, a month slips occasionally, and the list for a given month can come out quite a while later. This isn't just a matter of not having time to write the list; the process involves exhaustive testing and comparing of samples, and that's what takes time.

However, the WildList is a unique resource that is the basis for much research and is extensively drawn on by the better AV testing organizations (Virus Bulletin, AV-Test.org, ICSAlabs). The published WildList actually comprises two main lists: the shorter “real” WildList, where each malware entry has been reported by two or more reporters, and a (nowadays) longer list that has only been reported by one person. A quick scan of the latest available lists at the time of writing (the September 2006 list is at www.wildlist.org/WildList/200609.htm) demonstrates dramatically what AV is really catching these days:

First, it illustrates to what extent the threatscape is dominated by bots and bot-related malware: The secondary list shows around 400 variants of W32/Sdbot alone.

It also demonstrates the change, described earlier, in how malware is distributed. Historically, the WildList is published in two parts because when a virus or variant makes the primary list, the fact that it's been reported by two or more WildList reporters validates the fact that it's definitely (and technically) ItW. It doesn't mean that there's something untrustworthy about malware reports that only make the secondary list. B-list celebrities might be suspect, but B-list malware has been reported by an expert in the field. So, the fact that the secondary list is much longer than the primary list suggests strongly that a single variant is sparsely distributed, to reduce the speed with which it's likely to be detected. This does suggest, though, that the technical definition of ItW (i.e., reported by two or more reporters; see Sarah Gordon's paper, What is Wild?, at http://csrc.nist.gov/nissc/1997/proceedings/177.pdf) is not as relevant as it used to be.

Don't panic, though; this doesn't mean that a given variant may be detected only by the company to which it was originally reported. WildList-reported malware samples are added to a common pool (which is used by trusted testing organizations for AV testing, among other purposes), and there are other established channels by which AV researchers exchange samples. This does raise a question, however: How many bots have been sitting out there on zombie PCs that still aren't yet known to AV and/or other security vendors? Communication between AV researchers and other players in the botnet mitigation game has improved no end in the last year or two. Despite this, anecdotal evidence suggests that the answer is still “Lots!” After all, the total number of Sdbot variants is known to be far higher than the number reported here (many thousands …).

Heuristic Analysis

One of the things that “everybody knows” about antivirus software is that it only detects known viruses. As is true so often, everyone is wrong. AV vendors have years of experience at detecting known viruses, and they do it very effectively and mostly accurately. However, as everyone also knows (this time more or less correctly), this purely reactive approach leaves a “window of vulnerability,” a gap between the release of each virus and the availability of detection/protection.

Despite the temptation to stick with a model that guarantees a never-ending revenue stream, vendors have actually offered proactive approaches to virus/malware management. We'll explore one approach (change/integrity detection) a little further when we discuss Tripwire. More popular and successful, at least in terms of detecting “real” viruses as opposed to implementing other elements of integrity management, is a technique called heuristic analysis.

TIP

Integrity detection is a term generally used as a near-synonym for change detection, though it might suggest more sophisticated approaches. Integrity management is a more generalized concept and suggests a whole range of associated defensive techniques such as sound change management, strict access control, careful backup systems, and patch management. Many of the tools described here can be described as integrity management tools, even though they aren't considered change/integrity detection tools.

Heuristic analysis (in AV; spam management tools often use a similar methodology, though) is a term for a rule-based scoring system applied to code that doesn't provide a definite match to known malware. Program attributes that suggest possible malicious intent increase the score for that program. The term derives from a Greek root meaning to discover and has the more general meaning of a rule of thumb or an informed guess. Advanced heuristics use a variety of inspection and emulation techniques to assess the likelihood of a program's being malicious, but there is a trade-off: The more aggressive the heuristic, the higher the risk of false positives (FPs). For this reason, commercial antivirus software often offers a choice of settings, from no heuristics (detection based on exact or near-exact identification) to moderate heuristics or advanced heuristics.

Antivirus vendors use other techniques to generalize detection. Generic signatures, for instance, use the fact that malicious programs and variants have a strong family resemblance—in fact, we actually talk about virus and bot families in this context—to detect groups of variants rather than using a single definition for each member of the group. This has an additional advantage: There's a good chance that a generic signature will also catch a brand-new variant of a known family, even before that particular variant has been analyzed by the vendor.

TIP

From an operational point of view, you might find sites such as VirusTotal (www.virustotal.org), Virus.org (www.virus.org), or Jotti (http://virusscan.jotti.org/) useful for scanning suspicious files. These services run samples you submit to their Web sites against a number of products (far more than most organizations will have licensed copies of) and pass them on to antivirus companies. Of course, there are caveats. Inevitably, some malware will escape detection by all scanners, so a clean bill of health is no guarantee. Since such sites tend to be inconsistent in the way they handle configuration issues such as heuristic levels, they don't always reflect the abilities of the scanners they use, so they are not a dependable guide to overall scanning performance by individual products. (It's not a good idea to use them as a comparative testing tool.) And, of course, you need to be aware of the presence of a suspicious file in the first place.

Malware detection as it's practiced by the antivirus industry is too complex a field to do it justice in this short section: Peter Szor's The Art of Computer Virus Research and Defense (Symantec Press, 2005) is an excellent resource if you want to dig deeper into this fascinating area. The ins and outs of heuristic analysis are also considered in Heuristic Analysis: Detecting Unknown Viruses, by Lee Harley, at www.eset.com/download/whitepapers.php.

You might notice that we haven't used either an open-source or commercial AV program to provide a detailed example here. There are two reasons for this:

There is a place for open source AV as a supplement to commercial antivirus, but we have concerns about the way its capabilities are so commonly exaggerated and its disadvantages ignored. No open-source scanner detects everything a commercial scanner does at present, and we don't anticipate community projects catching up in the foreseeable future. We could, perhaps, have looked at an open-source project in more detail (ClamAV, for instance, one of the better community projects in this area), but that would actually tell you less than you might think about the way professional AV is implemented. Free is not always bad, though, even in AV. Some vendors, like AVG and Avast, offer free versions of their software that use the same basic detection engine and the same frequent updates but without interactive support and some of the bells and whistles of the commercial version. Note that these are normally intended for home use; for business use, you are required to pay a subscription. Others, such as ESET and Frisk, offer evaluation copies. These are usually time-restricted and might not have all the functionality of the paid-for version.

Commercial AV products vary widely in their facilities and interfaces, even comparing versions of a single product across platforms (and some of the major vendors have a very wide range of products). Furthermore, the speed of development in this area means that two versions of the same product only a few months apart can look very different. We don't feel that detailed information on implementing one or two packages would be very useful to you. It's more important to understand the concepts behind the technology so that you can ask the right questions about specific products.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B978159749135850007X

Operations Security

Craig Wright, in The IT Regulatory and Standards Compliance Handbook, 2008

Control Categories

There are many types of controls. The following section will introduce a number of these control categories. When designing a control framework it is necessary to include multiple levels of controls. For instance, either preventative or detective controls alone are unlikely to be effective in stopping attacks.

When these controls operate together, they create an effect that is greater than the sum of its parts.

Deterrent (or Directive) Controls

Deterrent controls are administrative mechanisms (such as policies, procedures, standards, guidelines, laws, and regulations) that are used to guide the execution of security within an organization. Deterrent controls are utilized to promote compliance with external controls, such as regulatory compliance. These controls are designed to complement other controls (such as preventative and detective controls). Deterrent and Directive controls are synonymous.

Preventative Controls

Preventive controls include security mechanisms, tools, or practices that can deter or mitigate undesired actions or events. An example of a preventive control would be a firewall. In the domain of operational security, preventative controls are designed to achieve two things:

To decrease the quantity and impact of unintentional errors that are entering the system, and

To prevent unauthorized intruders (either internal or external) from accessing the system.

Examples of these controls include firewalls, anti-virus software, encryption, risk analysis, job rotation, and account lockouts.

Detective Controls

Detective controls are designed to find and verify whether the directive and preventative controls are working. They are designed to detect errors when they occur, and so operate after the fact. They include logging and forensic controls, which are used to collate unauthorized transactions, whether for the prosecution of the offender or to lessen the impact of the attack or error on the system. Examples of this category of control include audit trails, logs, CCTV, and IDSs.

Corrective Controls

Corrective controls comprise the instructions, procedures, or guidelines used to reverse the consequences of an incident. Corrective controls are put into practice in order to alleviate the impact of an event that has resulted in a loss and to respond to incidents in a manner that will minimize risk. Examples include manuals, logging and journaling, incident handling, exception reporting, and fire extinguishers.

Recovery Controls

Recovery controls are designed to recover a system and return it to normal operation following an incident. Examples of recovery controls include system restoration, backups, rebooting, key escrow, insurance, redundant equipment, fault-tolerant systems, failovers, and contingency plans (BCP).

Application Controls

Application controls are designed into applications in order to minimize and detect operational irregularities that may occur within the application. Transaction controls are a type of application control.

Transaction Controls

Transaction controls are utilized in order to afford a level of control over the various stages of a transaction as it is processed. Transaction controls are implemented from the first stages when the transaction is initiated through to when the output is produced. Comprehensive testing and change control are also types of transaction controls. A number of these controls have been included below.

Input Controls

Input controls are used to make certain that transactions are correctly entered into the system, and only on one occasion. Elements of input control include counting input data and time-stamping data with the date it was entered or edited.

Processing Controls

Processing controls are used to certify whether a transaction is valid and accurate. These controls are also used to find and re-process incorrectly entered transactions.

Output Controls

Output controls are designed to protect the confidentiality of output, and to verify the integrity of output using a comparison of the input transaction to the output data.

Change Control

Change control is implemented to preserve data integrity in a system as changes are made to the configuration. Procedures and standards have been created to manage change and the modification of a system and its configuration. Change control and configuration management are thoroughly described later in this chapter and within other sections of this book.

Test Controls

Test controls are designed to prevent violations of confidentiality and to ensure transactional integrity. Test controls are often included as a component of the change control process. An example of this category of control is the appropriate use of sanitized test data.

Operational Controls

Operational controls include those methods and procedures that afford protection for systems. The majority of these are implemented or performed by the organization's staff or by outsourced entities and are administrative in nature. Operational controls may also include selected technological or logical controls.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781597492669000229

Interpreting Results

Jack Freund, Jack Jones, in Measuring and Managing Information Risk, 2015

Unstable conditions

The chart above might result from a scenario where resistive controls are nonexistent but the threat event frequency is inherently low – i.e. there just isn’t much threat activity. This is referred to as an Unstable Risk Condition. It’s unstable because the organization can’t put preventative controls in place to manage loss event frequency, or hasn’t chosen to. As a result, if the frequency changes, the losses can mount fast. Another way to describe this is to say that the level of risk is highly sensitive to threat event frequency. Examples might include certain weather or geological events. The condition also commonly exists with privileged internal threat communities (e.g. executive administrative assistants, database administrators, etc.). Since most companies model scenarios related to privileged insiders as a relatively infrequent occurrence with high impact, these risk scenarios will often be unstable.

Perhaps, given all of the other risk conditions the organization is wrestling with, this just hasn’t been a high enough priority. Or perhaps the condition hadn’t been analyzed and recognized before. Regardless, it boils down to the organization rolling the dice every day and hoping they don’t come up snake-eyes. If you presented the analysis results above without noting that the condition is unstable, management might decide there’s no need to introduce additional controls or buy insurance, because the odds of the event happening are so low. They would be making a poorly informed decision. Designating something as an unstable risk condition is really just a way of providing a more robust view of the situation so decision makers can make better-informed decisions.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780124202313000075

Intrusion Detection

Eric Knipp, ... Edgar DanielyanTechnical Editor, in Managing Cisco Network Security (Second Edition), 2002

What Is Intrusion Detection?

An intrusion detection system (IDS) is a software program, or a suite of hardware and software, that automates the investigation of unusual or potentially inappropriate activity in or around computers. It is an example of a technical security control, where the direct application of technology (as opposed to procedure or management guidance) attempts to solve security problems.

Technical controls can be classified as preventative or detective. Preventive controls attempt to avoid the occurrence of unwanted events, whereas detective controls attempt to identify unwanted events after they have occurred. An IDS is typically used as a detective control, alerting to misuse, and providing information about the frequency of the event. These detective controls typically combine signature-based approaches (similar to antivirus scanners) as well as unusual traffic analysis. This allows for more broadly based detection, but suffers from problems of false alerts.

An IDS can also be used in a preventative fashion: modern IDS can take action to interrupt a system call on a host, or interrupt network activity. In this case, the IDS must be adjusted so that this kind of activity only occurs when very clear identification of malicious activity is present.

Types of IDSs

IDSs fall into two types: network-based IDSs (NIDSs), where traffic is analyzed as it passes by a sensor on a wire; and host-based IDSs (HIDSs), where traffic is analyzed as it is accepted by the OS. The former is more readily deployed since it can be done with appliance devices rather than requiring modification of an existing server, and can provide a broader area of coverage. The latter is more precise, since it is able to understand what is occurring at the host itself: thus if an unknown form of attack attempted to cause a system to fail in a known fashion, the network-based sensor would probably miss the attack, but the host-based sensor would see the fault. This allows host-based IDS to function effectively as a preventative control, and is generally considered an appropriate use of the technology.

The Cisco Secure Network IDS product has the capability to do shunning. With shunning, the intrusion detection system's alerts actually cause configuration changes in firewalls and routers, blocking traffic from the offending networks.

IDS Architecture

IDSs are generally composed of a management station and one or more sensors. Because the control must see traffic to analyze it, it is generally distributed throughout a network at key locations. The management station integrates information from the distributed sensors (host and network) to provide a comprehensive and comprehensible view of the network. An operator usually interacts with the management station via a Web front-end or dedicated graphical user interface (GUI), and does not directly interact with the sensors. With the Cisco IDS, the management station can be either the IDS Director or a Cisco Secure Policy Manager (CSPM). The CSPM is documented in Chapter 12. In the fall of 2002, Cisco will be announcing a new management device to replace both the CSPM and Director consoles.

Ideally, the management station should integrate with any other operations management platforms in use. In an all-Cisco network, integration with the CSPM is helpful. In larger or nonhomogeneous networks, third-party products, such as HP OpenView, are often used to provide that integration.

Designing & Planning…

Controlling the Communication between Sensor and Manager

Often people deploy the sensor and manager focusing on bandwidth issues, and don't think about the security issues. Remember, security usually revolves around CIA: confidentiality, integrity, and availability. These issues come up in spades for the sensor/manager communication:

Confidentiality: The output from the sensors will contain highly sensitive information, including passwords, URLs visited, and the like.

Integrity: If a bad guy can forge data from the sensor, he can implicate other innocent users.

Availability: If a bad guy can prevent the data from getting from the sensor to the manager, he can work his evil undetected.

While other IDSs may communicate using unencrypted protocols such as syslog, Cisco has thoughtfully provided for confidentiality and integrity in its Post Office protocol, used to communicate between sensor and manager. However, don't forget to protect the communication channel, and don't forget that the sensors and the managers are prime targets for the attackers!

Why Should You Have an IDS?

The security events detected by an IDS are typically of three types:

Malicious events, such as those present at an intrusion

Misconfigured events, such as incorrect configuration data causing system malfunction

Ineffective events, such as inefficient network traffic

These security events are also usually classified by severity (that is, the ability of the event to harm the enterprise) and frequency (the likelihood of the occurrence). As an example of two types of malicious events, those that are severe or frequent (for example, the recent Nimda worm) are more important to identify and act upon than those that are minor or rare (for instance, a curious employee performing a port scan on his buddy’s machine). Perhaps even more important is distinguishing between that curious employee port scan on an unimportant machine and a port scan on a core business asset that may signal a prelude to a determined attack.

The business drivers for each of the three types are slightly different. The driver for the first is to reduce the risk associated with a system compromise; an IDS may be a required part of due diligence for protection of corporate assets. The driver for the second is to identify errors in configuration so that they can be corrected, which reduces the overall cost of maintenance. The driver for the third is to optimize the use of corporate assets.

Benefits of an IDS in a Network

As stated, the tuning process can take different approaches depending upon the desired result. Usually, the desired result should follow the business driver. These are examined in turn.

Reduce the Risk of a Systems Compromise

This is the most direct driver associated with an intrusion detection system. Risk can be reduced indirectly through detection and response or through direct corrective action. As a part of the response, a forensic element can be applied. If the enterprise has the ability to document the root causes of an attack, this can reduce the frequency of occurrence, particularly among the local user community (they are put on notice that malicious activity can have severe consequences). Forensic analysis may also be of some use in recovering damages, if the activity is careful enough to survive the necessary legal proceedings.

Indirect Action

Indirect action through an incident response procedure is flexible and can tolerate potential errors in alerting. The trade-off is increased work for reduced risk. The key is to have a prepared incident response protocol for handling events.

Direct Action

Direct corrective action can be both automated and inherent to the alert, or provide notice to a security officer so that an incident response procedure can be initiated. Examples of direct action are blocking an offending system call (for a host-based system) or reconfiguring a firewall (for a network-based system). These kinds of activity require a high degree of confidence in the alert.

Identifying Errors of Configuration

Identifying errors of configuration is an immediate benefit of an IDS. A complex environment, such as a server or a network, is usually misconfigured in several small ways. Luckily, our systems are redundant enough that the error conditions are handled by secondary systems. However, there is a risk that the secondary system may fail, causing a systemic failure; in addition, there may be improvements to service possible if the original device is correctly configured.

An IDS can usually identify this sort of invalid traffic. For example, a device may be misconfigured to have an invalid password for file access. The IDS will track this as an attempt to “break into” the file server by noting an excessive number of password failures. The detective control will allow the owner of the system to correct the password, and allow improved functionality.

Optimize Network Traffic

A third benefit of an IDS is to optimize network flows, or at least to provide insight into how networks are being used. A common component of an IDS is a statistical anomaly detection engine. Cisco calls this profile-based detection and notes it “involves building statistical profiles of user activity and then reacting to any activity that falls outside these established profiles.” The immediate reason is to identify an intrusion through unusual behavior. However, this also permits the operator to get a feel for the behavior of the network under normal operating conditions, and that insight can provide assistance on larger network maintenance and design issues.

Documenting Existing Threat Levels for Planning or Resource Allocation

When you are drawing up a budget for network security, it often helps to substantiate claims that the network is likely to be attacked or is even currently under attack. Understanding the frequency and characteristics of attacks allows you to understand what security measures are appropriate to protect the network against those attacks.

IDSs verify, itemize, and characterize threats from both outside and inside the enterprise network, assisting security management in making sound decisions regarding the allocation of computer security resources. Using IDSs in this manner is important, as many people mistakenly deny that anyone (outsider or insider) would be interested in breaking into their networks. Furthermore, the information that IDSs give you regarding the source and nature of attacks allows you to make decisions regarding security strategy driven by demonstrated need, not guesswork or folklore.

Changing User Behavior

A fundamental goal of computer security management is to affect the behavior of individual users in a way that protects information systems from security problems. Intrusion detection systems help organizations accomplish this goal by increasing the perceived risk of discovery and punishment of attackers. This serves as a significant deterrent to those who would violate security policy.

Deploying an IDS in a Network

The placement of a NIDS requires careful planning. Cisco's Secure IDS product (NetRanger) is made up of a probe and a central management station called a Director (old style) or the CSPM (new style). Each probe has two interfaces: a command interface, on which configuration information is accepted and logging information is sent, and a sensor interface. The sensor interface is unnumbered; some feel that this allows placement of the command interface on a different network than the sensor interface. If that is your policy, it is helpful to place the command interface on a management network. If you are concerned about the potential for a compromise through the sensor interface, then it is best to place the command interface on the same network as the sensor interface. Let’s look at the best place to put the sensor interface.

Sensor Placement

Most companies have a firewall that separates the internal network from the outside world. They typically have one or more service networks, and the internal network may also be subdivided.

Should we place the probe outside or inside? If the probe is outside, then it can monitor external traffic. This is useful against attacks from the outside but does not allow for detection of internal attacks. Of course, understanding attacks against the outside net may not be particularly valuable, since generally they would be stopped by the firewall. Also, the probe itself may become the target of an attack so it must be protected.

If you place the probe inside, it will detect internally-initiated attacks and can highlight firewall rules that are not working properly or are incorrectly configured.

Generally, the reason to put a probe outside the firewall would be to “take the temperature” of the Internet. This can be valuable to demonstrate the value of the firewall. More importantly, you should deploy sensors so they can view traffic worth sensing.

One effective strategy is to deploy probes to monitor the interface of the firewall that faces the service net (or nets), so you can capture traffic headed toward the service net. Other appropriate monitoring points would be near server clusters, or near router transit networks/interfaces. When you review your security policy, you may decide you need to install more probes at different points in the network according to security risks and requirements.

Here are some example locations:

The Accounts department’s Local Area Network (LAN)

Company strategic networks (for example, the Development LAN)

Technical department’s LAN

LANs where staff turnover is rapid or hot-desk/temp locations exist

The Server LAN

Difficulties in Deploying an IDS

There are several difficulties associated with successful IDS deployments. One fundamental problem is that the underlying science behind intrusion detection systems is relatively new. While everyone agrees that some things can be achieved, the January 1998 paper Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection by Thomas H. Ptacek and Timothy N. Newsham seemed to throw the field for a loop. They described techniques by which a properly designed IDS can be deceived, with a follow-up discussion that seemed to indicate the loftiest goal of an IDS is not achievable without a complete recreation of all network hosts. In the paper they note:

The number of attacks against network intrusion detection systems, and the relative simplicity of the problems that were actually demonstrated to be exploitable on the commercial systems we tested, indicates to us that network intrusion detection is not a mature technology. More research and testing needs to occur before network intrusion detection can be looked to as a reliable component in a security system.

However, it should not be taken that this is seen as an unusable technology. An IDS is one of the most common security purchases today. Current (2002) Computer Security Institute (CSI)/FBI statistics show that approximately 60 percent of Fortune 500 companies deploy an IDS; in just a few years, an IDS suite will likely be as ubiquitous as firewalls. What this does point out is that this is a technology in a state of rapid change. It is also worth noting that Cisco engineering took the flaws identified to heart; today, their analysis engine is vulnerable to none of these flaws.

A second difficulty is that of expectation. Management may feel that simply purchasing an IDS will make them safe. It doesn’t. It can be of assistance in identifying, imperfectly, attacks on a host or network, and can also be of use in tracking human events. But IDS tools should probably be combined with additional tools to provide a more robust detection environment.

A third difficulty is associated with the deployment phase. The network deployment is relatively straightforward but non-trivial, and coordination between multiple groups is often required. In a larger enterprise, the people who “own” the network are different from the people who “own” security, and clear communication may not always be possible. A host deployment involves interaction with a complex environment, and may involve further unknown interactions.

A fourth difficulty concerns incident response. An incident response procedure is a nontrivial task for most enterprises. A significant development effort is usually required. For most enterprises, such programs have not been required before. In many environments, the program is developed after the first incident, as part of a “lessons learned” analysis.

A fifth difficulty revolves around IDS tuning, described next. An IDS, out of the box, is generally not very useful. It must be adjusted to be in harmony with the local environment and the resources available to explore events. It’s this level of effort associated with IDS tuning that management often underestimates.

It should be recognized that most IDS programs are at their most effective several months or even years after their initial deployment.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781931836562500179

Which internal controls are preventive?

Preventive controls include:

Separation of duties

Pre-approval of actions and transactions (such as a Travel Authorization)

Access controls (such as passwords and Gatorlink authentication)

Physical control over assets (e.g., locks on doors or a safe for cash/checks)

What is a preventive control?

Preventative controls: Designed to keep errors or irregularities from occurring in the first place. They are built into internal control systems and require a major effort in the initial design and implementation stages. However, preventative controls do not require significant ongoing investments.

What are examples of preventive controls?

Examples of preventative controls include policies, standards, processes, procedures, encryption, firewalls, and physical barriers.

Is internal audit preventive or detective?

Some examples of detective controls are internal audits, reconciliations, financial reporting, financial statements, and physical inventories.