Divergent Directions: Looking back over the last 30+ years of my work in information security, I see two diverging trends when it comes to defining the information-security-related standard of due care. By the “standard of due care,” in this column I mean the actions that management needs to take (for instance, the controls that need to be deployed) in order to avoid legal problems such as charges of negligence.
The first of these two directions involves the definition of a set of controls that all organizations should subscribe to, across the board. Examples include the ISO 27002 information security standard and the recommended controls in NIST SP 800-53. The components of this set are for the most part already defined, though the set is still expanding slowly over time. The second of these directions involves situation-specific requirements. The components of this set are for the most part still being defined, and the set is expanding rapidly. In the long run, most information security requirements will be situation-specific. This is so because the information security measures expected in banking would not necessarily also be expected in manufacturing (more about this below).
Management is not at liberty to choose one or the other set of requirements. Instead, in the future, they will be expected to meet both sets of requirements. This column explores some examples of the emergent situation-specific standards of due care, which of course should be expressed in an information security policy.
Evolving Legal Requirements: Unfortunately, decades of experience have shown that many top managers won’t spend money on, or devote significant attention to, information security unless they are forced to do so. I won’t name names, although it would be easy to do so. Top management at many large and reputable organizations has taken an amazingly lax attitude toward information security. For example, a few years ago, a large French bank was hit by a $4.9 billion computer-assisted fraud perpetrated by a rogue trader. In an effort to prevent such enormous losses, a variety of new information-security-related laws have been passed. For example, the Sarbanes-Oxley Act of 2002 (aka SOX) and the Federal Information Security Management Act (FISMA) both mandate a higher level of organizational vigilance, as well as a higher level of management accountability for information security. These laws are examples of the first category of requirements, those defining an across-the-board standard of due care. Many others could be mentioned here, but this column is focused on the second category of requirements.
Besides the new laws and regulations, case law is defining the ways that boards of directors and top management must be involved with information security matters. For example, the 1996 Caremark International decision established that directors have a duty to monitor their organization’s compliance programs, to make sure that controls are operating as they should be, and that duty extends naturally to information security controls. In that case, directors faced claims that they “should have known” that Caremark staff were violating Federal anti-kickback laws. Likewise, the 2003 Walt Disney case further clarified the standard of oversight that directors must exercise. In that case, the directors allowed a top executive to walk away with a $140 million golden parachute deal, even though he had been on the job only about a year. Again, the directors “should have known” that these things were going on. The court found that the allegations, if proven, would show the directors did not act in good faith, and that they consciously disregarded matters to which they should have paid attention.
Risks Define Policies: The information security risks facing a bank are really quite different from the risks facing a manufacturing firm. The former is very concerned about fraud and privacy, whereas the latter is very concerned about business interruption and quality control problems. Yet, because the information security field is still in such a young state, most firms are being subjected to a “one size fits all” approach. Granted, certain fundamental management duties associated with the information security function (sometimes called a “baseline”) can be defined across all firms. One function that goes into such a baseline is the performance of a regular risk assessment. In fact, ISO 27001 has defined such a baseline applicable to all organizations. But when it comes to the specific controls to be adopted, that conversation will often take us in a very different direction because controls must be a function of the risks, and the risks will be different from organization to organization.
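The idea that controls follow from risks can be sketched in a few lines of code. This is purely illustrative: the risk categories and control names below are hypothetical labels invented for the example, not drawn from ISO 27002, NIST SP 800-53, or any other catalog.

```python
# Illustrative sketch: deriving a control set from an organization's
# assessed risks. Risk and control names here are hypothetical.

RISK_TO_CONTROLS = {
    "fraud": {"dual authorization", "transaction monitoring"},
    "privacy": {"data classification", "access logging"},
    "business_interruption": {"redundant systems", "tested backups"},
    "quality_control": {"change management", "integrity checks"},
}

def controls_for(risks):
    """Return the union of controls suggested by each assessed risk."""
    selected = set()
    for risk in risks:
        selected |= RISK_TO_CONTROLS.get(risk, set())
    return selected

# A bank and a manufacturer, assessed against different top risks,
# end up with different control sets.
bank = controls_for(["fraud", "privacy"])
manufacturer = controls_for(["business_interruption", "quality_control"])
```

The point of the sketch is simply that two organizations running the same baseline process (a regular risk assessment) can legitimately arrive at different control sets.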
Robert Courtney used to be head of information security for IBM. In a discussion years ago, he told me, “You cannot determine whether a specific system is secure if you look only at the technology.” What he meant was that we, the assessors of the level of information security, must take the whole situation into consideration, not just the technology. For example, we must ask: “What are the business risks, the legal risks, the financial risks, and the other circumstantial factors in this particular environment?” Only when these factors are collectively considered can an assessor give any sort of meaningful opinion about the prevailing level of security.
This situation-specific viewpoint is manifest in a host of new computing-environment-specific standards that are cropping up. Consider the Trusted Cloud Initiative. This new standard defines what controls cloud service providers should provide to their customers. It focuses on the nature of the relationship between customers and cloud service providers, notably the need for transparency, so that customers can understand what providers are doing. The Trusted Cloud Initiative also focuses on the integration of security systems between customers and providers, for instance in the identity and access management area. Thus, this set of information security requirements is largely defined by the nature of the outsourcing relationship and the new technology that goes along with that relationship.
The situation-specific viewpoint is furthermore manifest in requirements that define the controls relevant to a specific type of information. For example the Payment Card Industry – Data Security Standard (PCI-DSS) defines the controls that merchants — and also third-party transaction processors who are handling credit card data — must deploy. Among other things, PCI-DSS discusses how encryption must be used in order to protect credit card data. Here we see that the nature of the information involved (valuable, critical, and/or sensitive) defines the controls that should be deployed.
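One concrete PCI-DSS requirement of this kind is that the primary account number (PAN) must be masked when displayed, with at most the first six and last four digits shown. The sketch below illustrates that masking rule only; a real deployment must also address storage encryption, key management, and logging, and the function name here is my own, not from the standard.

```python
def mask_pan(pan: str) -> str:
    """Mask a primary account number for display, keeping at most the
    first six and last four digits, as PCI-DSS permits. Illustrative
    only: display masking is just one of the standard's requirements."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    if len(digits) <= 10:
        # Too short to safely reveal any digits.
        return "*" * len(digits)
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # 411111******1111
```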
The situation-specific definition of controls is additionally now evident in a number of high-risk environments. For instance, about a decade ago, a handbook called OCC 99-9 was released by the Office of the Comptroller of the Currency (OCC). This handbook defined how banks should be handling information security. It goes into a number of specific controls, such as which unauthorized intrusion attempts should be reported to the Federal Bureau of Investigation. In a similar way, the Gramm-Leach-Bliley Act (GLBA) also defines specific information security requirements for financial institutions. These requirements include an information security plan and policies detailing the ways that financial institutions are going to protect nonpublic personal information. Here we see situation-specific controls defined on an industry-by-industry basis.
The Best Approach: The military provides a phrase that captures the best approach to defining the standard of due care as it will be observed within a specific organization. That phrase is “system high.” In the military, those words mean that the level of security is a function of the most sensitive piece of information resident on a certain system or network. For example, if the most sensitive type of information is only “confidential,” then the whole system or network must operate with confidential-level security measures. But if a new piece of information comes onto that system or network, and it is “top secret,” then the security on that system or network must be upgraded so that it operates according to the more stringent standard.
The system high approach can, and in many instances should, be applied to the definition of an organization-specific standard of due care. Your organization will find that it is subject to information security requirements defined by different entities, such as government regulators (at different levels of government), courts issuing case law in the contract and tort areas, industry associations, and information security community groups. The system high approach dictates that all those relevant requirements be combined, with the most stringent of them collectively making up the baseline, the minimum standard to which your organization should subscribe. This baseline would of course be explained in your information security policy document, and (if yours is a larger organization) probably an information security architecture document as well.
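The merging step described above, combining many requirement sources and keeping the most stringent level for each control, can be sketched mechanically. The requirement sources, control names, and three-level stringency scale below are all hypothetical, invented only to make the idea concrete.

```python
# Sketch of the "system high" idea applied to policy: when several
# requirement sources specify different levels for the same control,
# the baseline adopts the most stringent. All names are hypothetical.

STRINGENCY = {"none": 0, "recommended": 1, "required": 2}

def system_high(requirement_sets):
    """Merge per-control levels from many sources, keeping the most
    stringent level seen for each control."""
    baseline = {}
    for source in requirement_sets:
        for control, level in source.items():
            current = baseline.get(control, "none")
            if STRINGENCY[level] > STRINGENCY[current]:
                baseline[control] = level
    return baseline

regulator = {"encryption_at_rest": "required", "mfa": "recommended"}
industry = {"mfa": "required", "incident_reporting": "recommended"}
baseline = system_high([regulator, industry])
# For "mfa", the stricter of the two sources wins.
```

Note that the merge is per control, not per source: a lenient regulator does not relax a control that a strict industry standard mandates, and vice versa.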
Part of the reason why we must go with the most stringent of the requirements is that there is not a direct match-up between legal and regulatory jurisdictions on one hand, and the scope and breadth of organizational or multi-organizational networks and operations on the other hand. For example, the requirements defined in European data protection laws are not found in many non-European countries. At the same time, multinational business operations are generally not restricted only to western European countries with these privacy laws.
To assure continued business operations without the need for special silos or separate collections of data, and without special content filters or walls to block the exchange of data, your organization should go with the system high approach. Yes, this will initially cost more, but in the long run it will probably not cost as much as you might at first believe. The approach reduces costs because: (1) organization-wide vendor discounts are obtained, (2) staff need to be trained in fewer approaches to security, and (3) the computing environment is thereby simplified and standardized.
Legal Collaboration: To come up with an approach that makes sense for your organization, this author recommends that you discuss the matter with your organization’s attorney early in the development process, rather than later, after you already have a specific proposal. If you collaborate in this way, the law can be used as a compelling force driving greater top management engagement with information security, and one that clearly defines the information-security-related, situation-specific standard of due care.
Charles Cresson Wood, MBA, MSE, CISA, CISM, CISSP, is an independent technology risk management consultant with InfoSecurity Infrastructure, Inc., in Mendocino, California. In the field since 1979, he is the author of a collection of ready-to-go information security policies entitled Information Security Policies Made Easy. He can be reached via www.infosecurityinfrastructure.com.