NIST SP800-53 Rev. 3: Risk Management Framework Underpins the Security Life Cycle

The National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53 provides a unified information security framework to achieve information system security and effective risk management across the entire federal government. In this second of four articles about the latest revision of this landmark Special Publication from the Joint Task Force Transformation Initiative in the Computer Security Division of the Information Technology Laboratory, Paul J. Brusil reviews the framework for risk management offered in SP 800-53, Recommended Security Controls for Federal Information Systems and Organizations, Rev. 3, which was prepared by a panel of experts drawn from throughout the U.S. government and industry. Everything that follows is Brusil's work with minor edits. * * * The Risk Management Framework in SP 800-53 (Chapter 3) invokes NIST document SP 800-39, Managing Risk from Information Systems: An Organizational Perspective, to specify the risk management framework for developing and implementing comprehensive security programs for organizations. SP 800-39 also provides guidance for managing risk associated with the development, implementation, operation, and use of information systems.

The risk management activities are detailed across several NIST documents (as identified in SP 800-53, Figure 3-1), of which SP 800-53 is only one. (See Part 1: NIST SP 800-53 Rev. 3: Key to Unified Security Across Federal Government and Private Sectors.) SP 800-53 is intended for new information systems, legacy information systems and for external providers of information system services. The risk management activities within the Risk Management Framework include six steps:

1) Categorizing information and the information systems that handle the information.
2) Selecting appropriate security controls.
3) Implementing the security controls.
4) Assessing the effectiveness and efficiency of the implemented security controls.
5) Authorizing operation of the information system.
6) Monitoring and reporting the ongoing security state of the system.

SP 800-53 focuses primarily on step (2): security control selection, specification and refinement. To start the risk management process, each organization uses other mandatory, NIST-developed government standards. One standard helps to determine the security category of each of an organization's information and information systems.
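The six steps form a repeating life cycle rather than a one-way sequence: continuous monitoring feeds back into re-categorization and re-selection. A minimal sketch of that cycle (the step names come from the framework; the enum and function are purely illustrative):

```python
from enum import Enum

class RmfStep(Enum):
    """The six Risk Management Framework steps (illustrative model only)."""
    CATEGORIZE = 1   # categorize information and information systems
    SELECT = 2       # select appropriate security controls (SP 800-53's focus)
    IMPLEMENT = 3    # implement the security controls
    ASSESS = 4       # assess effectiveness and efficiency of the controls
    AUTHORIZE = 5    # authorize operation of the information system
    MONITOR = 6      # monitor and report the ongoing security state

def next_step(step: RmfStep) -> RmfStep:
    """Advance through the life cycle; monitoring wraps back to categorization."""
    return RmfStep((step.value % 6) + 1)

# Continuous monitoring loops back to the start of the cycle:
assert next_step(RmfStep.MONITOR) is RmfStep.CATEGORIZE
```

Categorization, step (1), is where the mandatory FIPS standards discussed next come into play.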

These standards are Federal Information Processing Standard (FIPS) 199, Standards for Security Categorization of Federal Information and Information Systems, and FIPS 200, Minimum Security Requirements for Federal Information and Information Systems. The other standard is used to designate each information system's impact level (low-impact, moderate-impact or high-impact). The impact level identifies the significance that a breach of the system has on the organization's mission. Companion guidelines in another NIST recommendation, SP 800-60, Guide for Mapping Types of Information and Information Systems to Security Categories, Rev. 1, facilitate mapping information and information systems into categories and impact levels. SP 800-53 summarizes the categorization activities in Section 3.2. SP 800-53 details the security control selection activities in Section 3.3. In brief, a minimum set of broadly applicable, baseline security controls (SP 800-53, Appendix D) is chosen as a starting point for security controls applicable to the information and information system. SP 800-53 specifies three groups of baseline security controls that correspond to the low-impact, moderate-impact and high-impact information system level categories defined in FIPS 200. The intent of establishing different target impacts is to facilitate the use of appropriate and sufficient security controls that effectively mitigate most risks encountered by a target with a specific level of impact. Each organization then chooses security controls commensurate with their specific information and their specific information system's risk level exposure, using typical factors such as identifying vital threats to systems, establishing the likelihood a threat will affect the system and assessing the impact of a successful threat event.
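FIPS 200 determines a system's overall impact level as the highest ("high-water mark") impact assigned under FIPS 199 to any of the three security objectives: confidentiality, integrity and availability. A minimal sketch of that rule (the function and variable names are illustrative, not from the standards):

```python
# Ordered from least to most severe, per FIPS 199 impact values.
LEVELS = ["low", "moderate", "high"]

def system_impact_level(confidentiality: str, integrity: str, availability: str) -> str:
    """FIPS 200 high-water mark: the worst impact across the three objectives."""
    return max(confidentiality, integrity, availability, key=LEVELS.index)

# A system whose worst-case breach of integrity is "high" is a high-impact
# system, even if the other two objectives rate lower:
assert system_impact_level("low", "high", "moderate") == "high"
```

The resulting level then selects one of the three baseline control groups in SP 800-53 Appendix D.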

The baseline security controls are selected by an organization based on the organization's approach to managing risk, as well as security category and worst-case impact analyses in accordance with FIPS 199 and FIPS 200. Then, as needed based on an organization's specific risk assessment, possible local conditions and environments, or specific security requirements or objectives, these minimal baseline security controls can be tailored, expanded or supplemented to meet all of the organization's security needs. SP 800-53 gives guidance to organizations on the scope of applicability of each security control to the organization's specific situation, including, for example, the organization's specific applicable policies and regulations, specific physical facilities, specific operational environment, specific IT components, specific technologies, and/or specific exposure to public access interfaces. Tailoring activities include selecting organization-specific parameters in security controls, assigning organization-specific values to parameters in security controls and assigning or selecting appropriate, organization-specific control actions. If the tailored security control baseline is not sufficient to provide adequate protection for an organization's information and information system, additional security controls or control enhancements can be selected to meet specific threats, vulnerabilities, and/or additional requirements in applicable regulations. Augmentation activities include adding appropriate, organization-specific control functionality or increasing control strength.
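To illustrate the parameter-assignment flavor of tailoring: many SP 800-53 controls contain organization-defined parameters (AC-7's lockout threshold is a real example of such a parameter), and tailoring fills them in with organization-specific values. The data structure and function below are invented for illustration only:

```python
# Hypothetical representation of a baseline control with one open,
# organization-defined parameter (paraphrasing SP 800-53's AC-7).
baseline_control = {
    "id": "AC-7",
    "name": "Unsuccessful Login Attempts",
    "text": "Enforce a limit of {max_attempts} consecutive invalid login attempts",
    "params": {"max_attempts": None},  # [Assignment: organization-defined number]
}

def tailor(control: dict, **assignments) -> dict:
    """Assign organization-specific values to a control's open parameters."""
    tailored = dict(control, params={**control["params"], **assignments})
    tailored["text"] = control["text"].format(**tailored["params"])
    return tailored

# One organization may pick a strict threshold, another a looser one:
tailored = tailor(baseline_control, max_attempts=3)
assert tailored["text"].endswith("3 consecutive invalid login attempts")
```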

As a last resort, an organization can select security controls from a source other than SP 800-53. This option is possible if suitable security controls do not exist in SP 800-53, if appropriate rationale is established for going to another source and if the organization assesses and accepts the risk associated with use of another source. An organizationally-specific security plan is then developed. The plan documents the rationale for selecting and tailoring each security control. Such rationale is used to provide evidence that the security controls adequately protect organizational operations and assets, individuals, other organizations and ultimately the nation. Subsequent analyses of the risk management decisions documented in the security plan become the bases for authorizing operation of the organization's information system. A designated senior official gives such authorization.

After authorizing operation, the organization begins continuous monitoring of the effectiveness of all security controls. Such monitoring facilitates potential future decisions to modify or to update the organization's security plan and the deployed security controls. Modification and update may be necessary to handle information system changes and/or updates, new configurations, operational environment changes, new types of security incidents, new threats and the like. Depending on the severity of adverse impacts on the organization, the revised security plan may need to be used to re-authorize operation of the information system. SP 800-53 also defines 11 organization-level, program management security controls (Appendix G) for managing and protecting information security programs. Organizations document selected program management controls in an Information Security Program Plan.

This plan is implemented, assessed for effectiveness via assessment procedures documented in NIST document SP 800-53A, Guide for Assessing the Security Controls in Federal Information Systems – Building Effective Security Assessment Plans, and subsequently authorized and continuously monitored. In the next part of this four-part series, Brusil discusses the comprehensive repository of security controls presented in SP 800-53 Rev. 3. * * *

The evolving branch office

In a recent newsletter we introduced the concept of Application Delivery 2.0. One of the steps that IT organizations are taking in order to support the requirements of Application Delivery 2.0 is to implement a next generation branch office. As the next three newsletters will demonstrate, the next generation branch office represents a multi-year movement away from branch offices that are IT-heavy to ones that are IT-lite. What's driving Application Delivery 2.0? Over the last decade most companies have come to realize that their branch offices are a critical business asset. As a result, many companies have moved employees out of a headquarters facility and relocated them into a branch office.

The trend to have employees work outside of a headquarters facility was discussed in an article in Network World. That article stated that 90% of employees currently work away from headquarters. In addition, in an effort to both reduce cost and maximize flexibility, many employees now work out of home offices. Employees who work in a branch office still need access to a wide range of applications. Because of this requirement, the reaction of most IT departments five to 10 years ago was to upgrade the branch office infrastructure to include many, if not all, of the same technologies that are used at central sites.

These include high-performance PCs and server platforms as well as high-speed switched LANs. In addition, the typical branch office of this era hosted most of the applications that the branch office users needed to access. This included e-mail, sales force automation, CRM as well as office productivity applications. The branch office of this era can be considered to be IT-heavy. That follows because in addition to having complex IT infrastructure and applications at each branch office, it was also common to have IT staff at an organization's larger branch offices. This staffing was necessary in order to provide technical support and maintenance for the applications hosted at the branch as well as to support the complex IT infrastructure. One of the key characteristics of having branch offices that are IT-heavy is that while branch office employees rely on the WAN, the performance of the WAN does not have a major impact on their productivity.

In addition to the cost associated with being IT-heavy, it was extremely difficult for IT organizations of this era to control access to the data stored in branch offices and to maintain physical security for the servers in the branch office. There was almost no ability to measure the actual application response time as seen by the end user. In addition, virtually all of the management functionality of this era focused on individual technology domains; for example, LANs, WANs, servers and databases. In our next newsletter we will discuss how IT organizations began to move IT resources out of branch offices and how that gave rise to Application Delivery 1.0.

The Net at 40: What's Next?

When the Internet hit 40 years old - which, by many accounts, it did earlier this month - listing the epochal changes it has brought to the world was an easy task. It delivers e-mail, instant messaging, e-commerce and entertainment applications to billions of people. Businesses stay in touch with customers using the Twitter and Facebook online social networks. CEOs of major corporations blog about their companies and their activities.

Astronauts have even used Twitter during space shuttle missions. On Sept. 2, 1969, a team of computer scientists created the first network connection, a link between two computers at the University of California, Los Angeles. But according to team member Leonard Kleinrock, although the Internet is turning 40, it's still far from its middle age. "The Internet has just reached its teenage years," said Kleinrock, now a distinguished professor of computer science at UCLA. "It's just beginning to flex its muscles. The fact that it's just gotten into its dark side - with spam and viruses and fraud - means it's like an [unruly] teenager. That will pass as it matures." The next phase of the Internet will likely bring more significant changes to daily life - though it's still unclear exactly what those may be. "We're clearly not through the evolutionary stage," said Rob Enderle, president and principal analyst at Enderle Group. "It's going to be taking the world and the human race in a quite different direction. We just don't know what the direction is yet. It may doom us. It may save us. But it's certainly going to change us."

Marc Weber, founding curator of the Internet History Program at the Computer History Museum in Mountain View, Calif., suggested that the Internet's increasing mobility will drive its growth in the coming decades. The mobile Internet "will show you things about where you are," he said. "Point your mobile phone at a billboard, and you'll see more information." Consumers will increasingly use the Internet to immediately pay for goods, he added. Sean Koehl, technology evangelist in Intel Corp.'s Intel Labs research unit, expects that the Internet will someday take on a much more three-dimensional look. "[The Internet] really has been mostly text-based since its inception," he said. "There's been some graphics on Web pages and animation, but bringing lifelike 3-D environments onto the Web really is only beginning. Some of it is already happening ... though the technical capabilities are a little bit basic right now," Koehl added.

The beginnings of the Internet aroused much apprehension among the developers who gathered to watch the test of the first network - which included a new, state-of-the-art Honeywell DDP 516 computer about the size of a telephone booth, a Scientific Data Systems computer and a 50-foot cable connecting the two. The team on hand included engineers from UCLA, top technology companies like GTE, Honeywell and Scientific Data Systems, and government agencies like the Defense Advanced Research Projects Agency. "Everybody was ready to point the finger at the other guy if it didn't work," Kleinrock joked. "We were worried that the [Honeywell] machine, which had just been sent across the country, might not operate properly when we threw the switch. We were confident the technology was secure. The mathematics proved it all, and then I simulated it. I had simulated the concept of a large data network many, many times - all the connections, hop-by-hop transmissions, breaking messages into pieces. It was thousands of hours of simulation." As with many complex and historically significant inventions, there's some debate over the true date of the Internet's birth.

Some say it was that September day in '69. Others peg it at Oct. 29 of the same year, when Kleinrock sent a message from UCLA to a node at the Stanford Research Institute in Palo Alto, Calif. Still others argue that the Internet was born when other key events took place. Kleinrock, who received a 2007 National Medal of Science, said both 1969 dates are significant. "If Sept. 2 was the day the Internet took its first breath," he said, "we like to say Oct. 29 was the day the infant Internet said its first words." This version of this story originally appeared in Computerworld's print edition. It's an edited version of an article that first appeared on Computerworld.com.

Five signs your telework program is a bust

Many companies make it possible for employees to work remotely, but without a structured telework program in place, they could be putting corporate data at risk and stifling employee productivity. Here are a few of the red flags telework advocates should watch for when their programs seem to be lacking positive results.

1. Lackluster management support
Some people simply don't buy into telework, regardless of the promised benefits or the potential cost-savings to a company. "It is rare, but in some cases senior upper management doesn't like the prospect of employees working remotely and makes it difficult to move a program forward," says Chuck Wilsker, president and CEO of The Telework Coalition. "There are those managers that believe presence equals productivity, no matter what the arguments for telework are. One word from the right manager can make the program go bust and turn it off immediately like a spigot." Do you know where your employees are working? "If companies don't have a policy or performance management system in place that could help them monitor their telework program and focus on work output, then it might become chaotic," says Cindy Auten, general manager for Telework Exchange.

2. Ineligible job duties
Companies may want to offer their employees the perk of working remotely, especially in these tough economic times when cutting fuel costs could help most people.

Yet not all positions apply when it comes to working remotely. "Jobs requiring face-to-face or in-office communications won't work unless the program is very structured to specific duties on specific days," Auten explains. "And jobs that deal with sensitive data might be restricted to on-site activities as well, unless the company has well-documented data security policies." Also, companies that don't recognize that some employees would be able to work remotely while others could not might experience failure sooner rather than later. "Any company who thinks that all employees are suitable for teleworking are setting themselves up for failure," says Ben Rothke, a New York City-based senior security consultant with BT Professional Services.

3. Poor technical support
Even if managers comply and job duties are suited to remote work, telework programs could be stalled by subpar technology and support services. Yet experts say that many companies fail with this more obvious telework requirement. A successful remote work policy includes detailed descriptions of how employees connect, what software and hardware equipment they use, and how support can best meet their needs. "Dissatisfaction with technology and equipment can cause many teleworkers to not take full advantage of a program. If they don't have what's necessary to do their job, such as fast Internet connection and helpdesk services, then why bother trying to work remotely," Wilsker says. (See related story, "Secure telework without a VPN.")

4. Communication breakdown
For many in management positions, telework requires a leap of faith or an inherent trust in the employees' discipline to work without supervision. But for others, collaboration tools that enable and monitor ongoing communications between managers and employees are mandatory. Without such resources, telework programs can be seen as a failure - even if work is getting done - without some sort of accountability. "Companies shouldn't measure employee productivity by doing attendance, but [they need to] have another means by which to validate work output," says Lawrence Imeish, principal consultant for Dimension Data. "Programs such as instant messaging can show when employees are idle, but that isn't the best way to communicate." Companies must "make sure the user has the basics, a shredder and a secure area, including a locking file cabinet, in which to work. Set policies for checking in and establish that criteria upfront or you could lose productivity." But in some cases, regardless of efforts to quantify performance and monitor efforts, employees aren't able to work without supervision, another sign telework is not for the organization. "It is rare, but without proper screening and training, companies could experience an individual that doesn't thrive in that type of environment," Wilsker says.

5. Security breach
Possibly the worst sign of an unsuccessful telework program is the loss of client data or a corporate security breach that's blamed on inadequate telework policies. "Data handling is a crucial area to include," Rothke says. "If an employee works with confidential data, ensure that their computer is in a secure area of their home," he adds. Do you Tweet? Follow Denise Dubie on Twitter.

How a Botnet Gets Its Name

There is a new kid in town in the world of botnets - isn't there always? A heavyweight spamming botnet known as Festi has only been tracked by researchers with MessageLabs Intelligence since August, but is already responsible for approximately 5 percent of all global spam (around 2.5 billion spam emails per day), according to Paul Wood, senior analyst with MessageLabs, which keeps tabs on spam and botnet activity. When a botnet like Festi pops onto the radar screen of security researchers, it not only poses the question of what it is doing and how much damage it can cause; there is also the issue of what to call it. For all of their prevalence and power online, when it comes to naming botnets, there is no real system in place.

A common practice so far has been to name a botnet after the malware associated with it; this is a practice that has some drawbacks. Usually the name and convention comes from wording found within the actual software itself and that is used in some way. Wood explained Festi's history. "The name came from Microsoft; they identified the malware behind it and gave it the catchiest name," said Wood. "Usually, a number of companies will identify the botnet at the same time and give it a name based on the botnet's characteristics. Its original name was backdoor.winnt/festi.a or backdoor.trojan. Backdoor droppers are common and that wouldn't stick, it would be too generic. This one may have been related to a word like festival." Because the security industry lacks a uniform way to title botnets, the result is sometimes a long list of names for the same botnet that are used by different antivirus vendors and that can be confusing to customers.

As it stands now, the infamous Conficker is also known as Downup, Downadup and Kido. The Srizbi botnet is also called Cbeplay and Exchanger. Kracken is also the botnet Bobax. Why they are called what they are called is up to the individual researchers who first identified them. "A lot of time it depends on the first time we see the bot in action and what it does," according to Andre DiMino, director of Shadowserver Foundation, a volunteer group of cybercrime busters who, in their free time, are dedicated to finding and stopping malicious activity such as botnets. For instance, Gumblar, a large botnet that made news earlier this year (and is possibly perking up again), first hit the gumblar.cn domain, said DiMino.

Another botnet, known as Avalanche, was deemed so because of what DiMino described as a preponderance of domain names being used by the botnet. The naming dilemma can be a difficult one to tackle, according to Vincent Weafer, vice president of Symantec's security response division. Over the years naming for malware has had a few ground rules. "Don't name anything after the author," he said. "That was most important back when viruses were written for fame." Weafer whipped off a few botnet names that have made headlines in recent years and did his best to recall how they got their titles. Among the more notable, he said, is Conficker, which is thought to be a combination of the English word configure and the German word ficker, which is obscene. Kracken is named after a legendary sea monster.

The Storm botnet was named after a famous European storm and the associated spam that was going around related to it. And MegaD, a large spambot, got its name because it is known for spam that pushes Viagra and various male enhancement herbal remedies. "You can guess what the D stands for after Mega," he said. Gunter Ollmann, VP of research with security firm Damballa, believes it is time for a systematic approach to naming botnets that vendors can agree upon. Because botnets morph and change so frequently, he said, they rarely continue to have a meaningful association with the original malware sample that prompted researchers to name them in the first place. "Botmasters don't restrict themselves to a single piece of malware," said Ollmann. "They use multiple tools to generate multiple families of malware. To call a particular botnet after one piece of malware is naïve and doesn't really encompass what the actual threat is." Ollmann also adds that the vast majority of malware has no real humanized name, and is seen simply as digits, which makes naming impossible. (Also see "Botnets: 4 Reasons It's Getting Harder to Find Them and Fight Them.")

The result is a confusing landscape for enterprise customers who may be trying to clean up a mess made by a virulent worm, only to find various vendors using different names for the same problem. "There is some work going on among AV vendors to come up with a naming convention for the malware sites, but this is independent of the botnets," said Ollmann. "This has been going on for several years now. But there has been no visible progress the end user can make use of." The most recent iteration of the discussion focused on how to transport the meta-data that describes the particular named threat. Ollmann said Damballa is now using a botnet naming system, with the agreement of customers, which favors a two-part name and works much like the hurricane naming system used by the National Weather Service. The first part of the name comes from a list of pre-agreed upon names. Once a botnet is identified, the name is used and crossed off the list.

It becomes the name forever associated with that botnet. The second part of the name tracks the most common piece of malware that is currently associated with the botnet. While the botnet master changes their malware on a daily basis, they usually only change their malware family balance on a two-or-three day basis, said Ollmann. The second part of the name then changes in order to reflect that fluctuation. "So many of these are appearing it just becomes a case of assigning a human readable name and no other name associated with it," said Ollmann. "It is perhaps ungracious to name them with a hurricane naming system, but it speaks perhaps to the nature of this threat."
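The two-part, hurricane-style scheme described above can be sketched in a few lines. The pre-agreed name list and malware family labels below are invented for illustration; they are not Damballa's actual lists:

```python
# Names are consumed in order and crossed off once used, like hurricane names.
PREAGREED_NAMES = iter(["Alice", "Bob", "Carol"])

def name_botnet(dominant_family: str) -> str:
    """First part: permanent, drawn once from the pre-agreed list.
    Second part: the malware family currently dominant in the botnet,
    which can be re-issued as the family balance shifts."""
    first = next(PREAGREED_NAMES)
    return f"{first}-{dominant_family}"

assert name_botnet("festi") == "Alice-festi"
assert name_botnet("srizbi") == "Bob-srizbi"
```

In a fuller model only the second part would be recomputed every few days, while the first part stays bound to the botnet for its lifetime.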

Retail sales of Windows 7 delayed in India

Microsoft launched its new Windows 7 operating system in India Thursday, but customers who want to buy off-the-shelf packages of the operating system at a retail store will have to wait longer. Importers of packaged software to India are caught in a dispute with the country's customs department over the interpretation of new taxes on packaged software that were introduced in July. As a result, consignments of imported packaged software are not being cleared easily.

Windows 7 is however already available to consumers in India on computers that come factory-loaded with the new operating system, a spokeswoman for Microsoft said on Thursday. Dell PCs with Windows 7, for example, are now available at retail stores across the country, a company spokeswoman said. Enterprise customers can download and deploy the new operating system under a volume licensing agreement with Microsoft. Over 1,000 enterprise customers in the country are in the process of deploying Windows 7, the Microsoft spokeswoman said. The delay at Indian customs will not have much effect on Windows 7 sales in India, because off-the-shelf retail sales of packaged software account for less than 5 percent of Microsoft's operating system sales in the country, according to an industry source who declined to be named. Most consumer sales of Windows are with hardware, he added.

New government rules that came into force in July introduced separate taxes on the import of the physical media for the software, and on the value of the software license, according to Raju Bhatnagar, vice president for government relations at the National Association of Software and Service Companies (Nasscom). The sticker price on the box of packaged software does not however specifically state the value of the license, Bhatnagar said. One option would be for vendors to print the value of the license on the packages, Bhatnagar said. Nasscom is asking the government to issue instructions to vendors and the customs department to resolve the issue. Microsoft said it hoped that its consignment of Windows 7 packages will be cleared soon.

Chip sales to grow in 2010, iSuppli says

Worldwide semiconductor sales will grow in 2010 as chip sales gain steam in response to stabilizing economies, analyst firm iSuppli said on Wednesday. Semiconductor sales could grow by 13.8 percent on a year-over-year basis to reach US$246 billion in 2010. Chip revenue will keep growing through 2012 and could reach the levels of 2007, after which the chip revenue skid began. Chip sales could total $282.7 billion in 2012; sales tallied close to $273.4 billion in 2007. However, global chip sales will decline in 2009, albeit at a lower rate than iSuppli first projected.

The analyst firm predicted year-over-year global chip sales would decline by 16.5 percent in 2009. Chip sales in 2009 will total US$216 billion, compared to $258 billion in 2008. Earlier in the year, iSuppli projected a 23 percent drop. Chip sales have "gained clarity" as economies stabilize and supplies improve in key markets after an unstable first quarter, iSuppli said in a statement. Semiconductor sales and inventory levels in the PC and mobile-handset markets - which account for a majority of semiconductor sales - improved in the second quarter, iSuppli said. Major vendors have also increased their outlooks for PC and mobile-handset sales, which has given more clarity to project overall chip sales for the year. "Semiconductor shipments rebounded as inventories were replenished and modest forward-looking purchases were made," said Dale Ford, senior vice president, market intelligence services for iSuppli, in a statement. Intel CEO Paul Otellini last week said that its chip shipments were stabilizing as PC shipments start to take off. Otellini's comments were stronger than the conservative outlooks for an expected PC industry recovery provided by companies like Advanced Micro Devices and Dell earlier in the year.
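As a quick back-of-the-envelope check, the rounded figures iSuppli quotes hang together arithmetically (values below are taken from the text, in billions of U.S. dollars):

```python
# iSuppli's figures as reported in the article (billions of USD).
sales_2008 = 258.0
sales_2009 = 216.0
sales_2010 = 246.0

decline_2009 = (sales_2008 - sales_2009) / sales_2008 * 100  # about 16.3%
growth_2010 = (sales_2010 - sales_2009) / sales_2009 * 100   # about 13.9%

# Both land within rounding distance of the stated 16.5% and 13.8%:
assert abs(decline_2009 - 16.5) < 0.5
assert abs(growth_2010 - 13.8) < 0.2
```

The small differences are consistent with the dollar totals having been rounded to the nearest billion.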

The companies said that PC shipments would grow as users look to buy new PCs with Microsoft's upcoming Windows 7 OS, which is due next month, and as companies look to refresh PCs. The global economy was partly boosted in the second quarter by worldwide economic stimulus efforts, especially in China, iSuppli said. China's stimulus efforts resulted in a massive increase in consumer purchasing, which benefitted worldwide economic conditions, iSuppli said. The U.S. stimulus effort - the American Recovery and Reinvestment Act - had a lesser effect as it wasn't implemented on a wide basis, iSuppli said. An economic stimulus package of $787 billion to spur economic activity was passed in February by Congress and signed into law by President Barack Obama.