Network IPS tests reveal equipment shortcomings

An independent test and evaluation of 15 network intrusion-prevention system (IPS) products from seven vendors showed that none were fully effective in warding off attacks against Microsoft, Adobe and other programs. NSS Labs, which conducted the test without vendor sponsorship of any kind, found that the Sourcefire IPS showed 89% effectiveness against a total of 1,159 attacks on products such as Windows, Adobe Acrobat and Microsoft SharePoint, while the Juniper IPS scored lowest at 17% effectiveness. NSS Labs also evaluated the 15 offerings on their ability to respond to "evasions," attacks delivered in an obfuscated and stealthy manner in order to escape detection.

In that arena, the McAfee and IBM IPS held up particularly well. Products tested came from Cisco, IBM, Juniper, McAfee, Sourcefire, Stonesoft and TippingPoint; Check Point, Enterasys, Nitro Security, Radware, StillSecure, Top Layer and Trustwave declined to participate in this round of tests, which were conducted in October and November. Rick Moy, president of NSS Labs, says he was disappointed overall that none of the 10Mbps to 10Gbps IPS products tested achieved 100% effectiveness in detecting and blocking the attacks, including buffer overflow exploits. "The threats are continuing to get worse and everyone says they're keeping up with them, so we wanted them to prove it," Moy says. The vendors that did participate were allowed to tune their equipment in one round of tests designed to find out how long it took to adjust the default settings in order to improve performance based on policy. Under this measurement, McAfee, IBM and Stonesoft did well.

The Sourcefire IPS, however, took the most time, which Moy says would translate into time needed for professionals to manage it in an enterprise. McAfee, which on Tuesday will make major announcements related to new network-security gear, was at a loss to explain why its IPS didn't achieve 100% effectiveness in the NSS Labs tests. "There are a variety of reasons you might not achieve 100%," says Greg Brown, McAfee's senior director of product marketing, who adds he hasn't read the NSS Labs report yet. Sometimes lab tests simply "don't look like a real attack" to equipment, and he says McAfee focuses its efforts on "very new exploits." Details on IPS effectiveness, evasion attacks, tuning, performance and cost-of-ownership issues are covered in depth in the 50-plus-page report "Network Intrusion Prevention Group Test," which NSS Labs is selling for $1,800. NSS Labs also anticipates conducting a round of tests for host-based IPS products in the near future.

Personal data of 24,000 Notre Dame employees exposed online

In an embarrassing security gaffe, personal data on more than 24,000 past and present employees at the University of Notre Dame was publicly available on the Web for more than three years. The breach resulted when an employee inadvertently posted files containing the names, Social Security numbers and zip codes of the employees on a publicly accessible university Web site. The files are believed to have been posted on the site in August 2006 and remained there until this October, when they were finally discovered and reported to university officials. The files have since been removed and secured, and there is no evidence that the information has been used inappropriately, said Dennis Brown, Notre Dame's assistant vice president for news and information.

All of those affected by the breach have been notified, and the university has offered to pay for credit monitoring services, he said. Included in the list of those affected are a "large number" of on-call and temporary employees, Brown said. Notre Dame last suffered a data breach in January 2006, when unknown intruders broke into a server and accessed records belonging to an undisclosed number of individuals. This is the second time this week that an organization has found itself in the news over an inadvertent data leak. On Sunday, a blogger discovered online a sensitive security manual containing detailed information on the screening procedures used by Transportation Security Administration agents at U.S. airports.

The document was supposed to have been redacted before it was posted on a government Web site, but wasn't. As with the Notre Dame incident, the document was quickly removed once the lapse was reported, but not before numerous copies were posted at sites all over the Internet. Though data breaches involving external hackers get most of the attention, inadvertent data exposures such as these latest examples are not all that uncommon. This June, a 267-page document listing all U.S. civilian nuclear sites, along with descriptions of their assets and activities, became available on whistleblower site Wikileaks.org days after a government Web site publicly posted the data by accident. The sensitive but unclassified data had been compiled as part of a report being prepared by the federal government for the International Atomic Energy Agency (IAEA). In another incident, in October 2007, a student at Western Oregon University discovered a file containing personal data on student grades after it had been accidentally posted on a publicly accessible university server by an employee. Numerous other breaches, including at government agencies, have also inadvertently leaked confidential and sensitive data over file-sharing networks.

Retailers taking orders for laptops with Core i7 chips

Retailers are now taking orders for what could easily be the world's fastest laptops, powered by Intel's speedy Core i7 desktop processors. The chips, launched in November, were dubbed the "world's fastest chips" by Intel until the company's Xeon server processors were introduced in March. U.S. retailer AVADirect and Canadian retailer Eurocom are offering variants of Clevo's D900F laptop with the Core i7 processor, a chip usually included in high-end gaming desktops. The laptops will come with 17-inch screens and are intended to be desktop replacement PCs. The machines don't skimp on features and include a full array of components one would find in Core i7 desktop systems, according to laptop specifications on the retailers' Web sites.

Laptop hardware usually lags desktop hardware by up to 12 months, so the desktop components had to be adapted for notebook use, AVADirect said. AVADirect, in particular, decided not to wait to bring the Core i7 hardware to consumers in a portable form. "While power usage will be higher, AVADirect does not need to wait until Intel or some other company designs and implements mobile offerings of current desktop hardware," the company said in a statement. The laptops come with Core i7 920, 940 and 965 quad-core processors running at speeds from 2.66GHz to 3.2GHz and including 8MB of L3 cache; the processors draw 130 watts of power. The laptops support up to 6GB of DDR3 memory, which should provide a significant performance boost, and pair Intel's X58 chipset with an Nvidia graphics processing unit (GPU) to boost graphics performance. The machines support up to 1.5TB of RAID hard drive storage and include 802.11a/b/g/n wireless technology.

With standard components, the D900F laptop's starting price is around US$2,500 on AVADirect's Web site, and the price crosses $6,000 for an extravagant configuration that includes the fastest Core i7 965 processor, three 500GB storage drives, internal Bluetooth capabilities, a DVD-RW drive and additional cooling features. The laptops will ship with either Windows Vista or Linux. Eurocom's customized Clevo D900F system, called the Panther D900F, weighs a whopping 11.9 pounds (5.4 kilograms). Intel's Core i7 chips are a significant upgrade over Intel's Core 2 Duo chips, which are currently used in desktops and laptops. The new chips are built on the Nehalem microarchitecture, which improves system speed and performance-per-watt compared to Intel's earlier Core microarchitecture. Each core can execute two software threads simultaneously, so a laptop with four processor cores could run eight threads at once for quicker application performance. The chips also integrate a memory controller, and QuickPath Interconnect (QPI) technology links the CPU and chipset, providing a faster pipe for the CPU to communicate with system components such as graphics cards.
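To make the core-and-thread arithmetic concrete, here is a minimal Python sketch that queries how many hardware threads the operating system exposes and fans work out across that many workers. The task and thread counts are illustrative assumptions, not part of the article; on a quad-core Core i7 with Hyper-Threading, the OS would typically report eight logical CPUs.

import os
from concurrent.futures import ProcessPoolExecutor

def busy_work(n: int) -> int:
    """Stand-in CPU-bound task: sum the first n integers."""
    return sum(range(n))

def main() -> None:
    # os.cpu_count() reports logical CPUs: typically 8 on a quad-core
    # Core i7 with Hyper-Threading (two hardware threads per core).
    workers = os.cpu_count() or 1
    print(f"Logical CPUs reported by the OS: {workers}")

    # One worker per hardware thread; a process pool is used so the
    # CPU-bound tasks actually run in parallel in Python.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(busy_work, [2_000_000] * workers))
    print(f"Finished {len(results)} tasks across {workers} workers")

if __name__ == "__main__":
    main()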

Intel later this year intends to introduce new chips for desktops and laptops. The laptop versions, code-named Arrandale, will be dual-core and start shipping in the fourth quarter of this year, with laptops becoming available in early 2010. Arrandale chips are expected to be faster than existing Core 2 Duo chips and consume less power. However, laptops with Arrandale chips may not match the speeds of Core i7 laptops, considering the chips will be dual-core and built to draw limited amounts of power.

IBM's Black Friday-like promotion pushes IT equipment leasing

As the holiday season approaches, IBM this quarter began its own Black Friday-style promotion, offering incentives it hopes will convince users to lease its hardware or buy used IBM equipment. IBM, like most other major IT vendors, operates its own financing arm, and it argues that leasing is especially attractive for customers with concerns about their budgets, upgrade processes and ultimate equipment disposal. When asked, Tom Higgins, director of IBM Global Financing, was hard pressed to provide any argument against leasing.

Industry analysts have said they expect leasing will increasingly become the preferred path for many large users of IT equipment, especially as the economy continues to stagnate and large companies look to preserve capital. In addition to spreading out the cost of equipment, leasing may enable companies to speed upgrades, analysts said. And in most cases, analysts say, IT vendors have the financial backing to extend credit, giving them a leg up over many other financing options. For example, leasing could help a company more easily shift from a five-year to a three-year IT equipment replacement schedule. In a study of the performance and energy use of IBM blade servers, Cal Braunstein, an analyst with the Robert Frances Group Inc. in Westport, Conn., found that over a three-year period a user could replace 1,000 blades with 250 blades because of continual performance improvements. The cost of leasing the upgraded systems would be less than the company would have spent to power the original equipment had it kept owning it, according to Braunstein's research.
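Braunstein's consolidation argument can be sanity-checked with a back-of-the-envelope calculation. The Python sketch below is purely illustrative: the blade wattage, PUE, electricity rate and monthly lease rate are assumed placeholders, not figures from the Robert Frances Group study.

# Back-of-the-envelope sketch of the consolidation argument.
# Every figure below (wattage, PUE, electricity rate, lease rate) is an
# assumed placeholder, not data from the Robert Frances Group study.

HOURS_PER_YEAR = 24 * 365

def annual_power_cost(blades: int, watts_per_blade: float,
                      pue: float, dollars_per_kwh: float) -> float:
    """Yearly electricity cost, including cooling overhead via PUE."""
    kwh = blades * watts_per_blade * HOURS_PER_YEAR / 1000 * pue
    return kwh * dollars_per_kwh

old_power = annual_power_cost(blades=1000, watts_per_blade=400,
                              pue=2.0, dollars_per_kwh=0.10)
new_power = annual_power_cost(blades=250, watts_per_blade=400,
                              pue=2.0, dollars_per_kwh=0.10)
new_lease = 250 * 150 * 12   # 250 blades at an assumed $150 per month each

print(f"Power for 1,000 owned blades:     ${old_power:>10,.0f} per year")
print(f"Lease for 250 replacement blades: ${new_lease:>10,.0f} per year")
print(f"Power for 250 replacement blades: ${new_power:>10,.0f} per year")
print(f"Lease + power for the new fleet:  ${new_lease + new_power:>10,.0f} per year")

With these assumed numbers, the lease on the smaller fleet comes to less than the power bill for the larger, older fleet, which is the shape of the comparison Braunstein describes; the real break-even obviously depends on actual wattage, energy prices and lease terms.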

Joe Pucciarelli, an analyst at IDC, said the poor economy will probably continue to limit the capital available to large firms, so he expects an increase in their leasing of IT equipment. He said the benefits of leasing versus buying may be further justified by allowing IT operations to more easily upgrade to more powerful systems rather than having to build out floor space to house more older equipment. Pucciarelli did note that companies that turn to leasing must have a good lifecycle management process in place so they can effectively utilize the new equipment. The IBM promotion offers a 90-day payment deferral on leased hardware, software and services, as well as zero-percent financing on software products. In addition, the company said that during 2009 it has been passing savings from accelerated-depreciation tax benefits to customers even though its global financing arm holds title to the equipment.

IBM also said it is offering its pre-owned equipment "at attractive prices" during the promotion. In the latest quarter, IBM Global Financing revenue fell by 15% to $536 million. By comparison, rival Hewlett-Packard's financial services unit reported revenue of $726 million in the quarter, up 5% from the prior-year period.

NIST SP800-53 Rev. 3: Risk Management Framework Underpins the Security Life Cycle

The National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53 provides a unified information security framework to achieve information system security and effective risk management across the entire federal government. In this second of four articles about the latest revision of this landmark Special Publication from the Joint Task Force Transformation Initiative in the Computer Security Division of the Information Technology Laboratory, Paul J. Brusil reviews the framework for risk management offered in SP 800-53, Recommended Security Controls for Federal Information Systems and Organizations, Rev. 3, which was prepared by a panel of experts drawn from throughout the U.S. government and industry. Everything that follows is Brusil's work with minor edits.

* * *

The Risk Management Framework in SP 800-53 (Chapter 3) invokes NIST document SP 800-39, Managing Risk from Information Systems: An Organizational Perspective, to specify the risk management framework for developing and implementing comprehensive security programs for organizations. SP 800-39 also provides guidance for managing risk associated with the development, implementation, operation and use of information systems.

The risk management activities are detailed across several NIST documents (as identified in SP 800-53, Figure 3-1), of which SP 800-53 is only one. SP 800-53 is intended for new information systems, for legacy information systems and for external providers of information system services. The risk management activities within the Risk Management Framework comprise six steps:

1) Categorizing information and the information systems that handle the information.
2) Selecting appropriate security controls.
3) Implementing the security controls.
4) Assessing the effectiveness and efficiency of the implemented security controls.
5) Authorizing operation of the information system.
6) Monitoring and reporting the ongoing security state of the system.

SP 800-53 focuses primarily on step 2: security control selection, specification and refinement. To start the risk management process, each organization uses other mandatory, NIST-developed government standards. One standard helps to determine the security category of each of an organization's information and information systems.

These other standards are Federal Information Processing Standard (FIPS) 199, Standards for Security Categorization of Federal Information and Information Systems, and FIPS 200, Minimum Security Requirements for Federal Information and Information Systems. The other standard is used to designate each information system's impact level (low-impact, moderate-impact or high-impact); the impact level identifies the significance that a breach of the system has on the organization's mission. Companion guidelines in another NIST recommendation, SP 800-60, Guide for Mapping Types of Information and Information Systems to Security Categories, Rev. 1, facilitate mapping information and information systems into categories and impact levels. SP 800-53 summarizes the categorization activities in Section 3.2.

SP 800-53 details the security control selection activities in Section 3.3. In brief, a minimum set of broadly applicable baseline security controls (SP 800-53, Appendix D) is chosen as a starting point for the security controls applicable to the information and information system. SP 800-53 specifies three groups of baseline security controls that correspond to the low-impact, moderate-impact and high-impact information system categories defined in FIPS 200. The intent of establishing different target impact levels is to facilitate the use of appropriate and sufficient security controls that effectively mitigate most risks encountered by a target with a specific level of impact. Each organization then chooses security controls commensurate with its specific information and its specific information system's risk exposure, using typical factors such as identifying vital threats to systems, establishing the likelihood that a threat will affect the system and assessing the impact of a successful threat event.
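As a concrete illustration of the categorization step, the short Python sketch below applies the FIPS 199-style "high water mark" idea: an information system's overall impact level is taken as the highest of the impact values assigned to confidentiality, integrity and availability. The example system and helper names are assumptions made for illustration, not text drawn from SP 800-53 or FIPS 199.

# Illustrative sketch of FIPS 199-style security categorization.
# The example system and helper names are hypothetical, not from the standard.

IMPACT_ORDER = {"low": 1, "moderate": 2, "high": 3}

def system_impact_level(categorization: dict) -> str:
    """Return the overall impact level as the high water mark across
    confidentiality, integrity and availability."""
    return max(categorization.values(), key=lambda level: IMPACT_ORDER[level])

# Hypothetical example: a payroll portal.
payroll_portal = {
    "confidentiality": "moderate",  # salary data exposure
    "integrity": "high",            # tampering with pay records
    "availability": "low",          # brief outages are tolerable
}

level = system_impact_level(payroll_portal)
print(f"Overall impact level: {level}")  # -> high, so the high baseline applies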

The baseline security controls are selected by an organization based on its approach to managing risk, as well as on security category and worst-case impact analyses in accordance with FIPS 199 and FIPS 200. Then, as needed based on an organization's specific risk assessment, local conditions and environments, or specific security requirements or objectives, these minimal baseline security controls can be tailored, expanded or supplemented to meet all of the organization's security needs. SP 800-53 gives guidance to organizations on the scope of applicability of each security control to the organization's specific situation, including, for example, the organization's applicable policies and regulations, physical facilities, operational environment, IT components, technologies and exposure to public access interfaces. Tailoring activities include selecting organization-specific parameters in security controls, assigning organization-specific values to those parameters and assigning or selecting appropriate, organization-specific control actions. If the tailored security control baseline is not sufficient to provide adequate protection for an organization's information and information system, additional security controls or control enhancements can be selected to meet specific threats, vulnerabilities and/or additional requirements in applicable regulations. Augmentation activities include adding appropriate, organization-specific control functionality or increasing control strength.
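To make tailoring and augmentation concrete, the sketch below represents a single security control with an organization-defined parameter, assigns an organization-specific value to it, and then adds an enhancement. The control is modeled loosely on SP 800-53's session lock control; the data structure and values are illustrative assumptions, not a format defined by the publication.

from dataclasses import dataclass, field

@dataclass
class SecurityControl:
    """A baseline control with organization-defined parameters (illustrative)."""
    control_id: str
    title: str
    parameters: dict = field(default_factory=dict)    # org-defined values go here
    enhancements: list = field(default_factory=list)  # added during augmentation

# Baseline control as it might look before tailoring (value unassigned).
session_lock = SecurityControl(
    control_id="AC-11",
    title="Session Lock",
    parameters={"inactivity_period_minutes": None},   # organization-defined
)

# Tailoring: assign an organization-specific parameter value.
session_lock.parameters["inactivity_period_minutes"] = 15

# Augmentation: add a control enhancement for a higher-impact system.
session_lock.enhancements.append("AC-11(1) pattern-hiding displays")

print(session_lock)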

As a last resort, an organization can select security controls from a source other than SP 800-53. This option is possible if suitable security controls do not exist in SP 800-53, if appropriate rationale is established for going to another source and if the organization assesses and accepts the risk associated with use of another source. An organization-specific security plan is then developed. The plan documents the rationale for selecting and tailoring each security control. Such rationale is used to provide evidence that the security controls adequately protect organizational operations and assets, individuals, other organizations and, ultimately, the nation. Subsequent analyses of the risk management decisions documented in the security plan become the basis for authorizing operation of the organization's information system. A designated senior official gives such authorization.

After authorizing operation, the organization begins continuous monitoring of the effectiveness of all security controls. Such monitoring facilitates potential future decisions to modify or update the organization's security plan and the deployed security controls. Modification and update may be necessary to handle information system changes and/or updates, new configurations, operational environment changes, new types of security incidents, new threats and the like. Depending on the severity of adverse impacts on the organization, the revised security plan may need to be used to re-authorize operation of the information system. SP 800-53 also defines 11 organization-level program management security controls (Appendix G) for managing and protecting information security programs. Organizations document selected program management controls in an Information Security Program Plan.

This plan is implemented, assessed for effectiveness via assessment procedures documented in NIST document SP 800-53A, Guide for Assessing the Security Controls in Federal Information Systems – Building Effective Security Assessment Plans, and subsequently authorized and continuously monitored. In the next part of this four-part series, Brusil discusses the comprehensive repository of security controls presented in SP 800-53 Rev. 3. * * *

The evolving branch office

In a recent newsletter we introduced the concept of Application Delivery 2.0. One of the steps that IT organizations are taking in order to support the requirements of Application Delivery 2.0 is to implement a next-generation branch office. As the next three newsletters will demonstrate, the next-generation branch office represents a multi-year movement away from branch offices that are IT-heavy to ones that are IT-lite. What's driving this evolution? Over the last decade most companies have come to realize that their branch offices are a critical business asset. As a result, many companies have moved employees out of a headquarters facility and relocated them into a branch office.

The trend to have employees work outside of a headquarters facility was discussed in an article in Network World. That article stated that 90% of employees currently work away from headquarters. In addition, in an effort to both reduce cost and maximize flexibility, many employees now work out of home offices. Employees who work in a branch office still need access to a wide range of applications. Because of this requirement, the reaction of most IT departments five to 10 years ago was to upgrade the branch office infrastructure to include many, if not all, of the same technologies that are used at central sites.

These technologies included high-performance PCs and server platforms as well as high-speed switched LANs. In addition, the typical branch office of this era hosted most of the applications that branch office users needed to access, including e-mail, sales force automation, CRM and office productivity applications. The branch office of this era can be considered IT-heavy. That follows because, in addition to having complex IT infrastructure and applications at each branch office, it was also common to have IT staff at an organization's larger branch offices. This staffing was necessary in order to provide technical support and maintenance for the applications hosted at the branch as well as to support the complex IT infrastructure. One of the key characteristics of IT-heavy branch offices is that while branch office employees rely on the WAN, the performance of the WAN does not have a major impact on their productivity.

In addition to the cost associated with being IT-heavy, it was extremely difficult for IT organizations of this era to control access to the data stored in branch offices and to maintain physical security for the servers in the branch office. There was almost no ability to measure the actual application response time as seen by the end user. In addition, virtually all of the management functionality of this era focused on individual technology domains; for example, LANs, WANs, servers and databases. In our next newsletter we will discuss how IT organizations began to move IT resources out of branch offices and how that gave rise to Application Delivery 1.0.

The Net at 40: What's Next?

When the Internet hit 40 years old - which, by many accounts, it did earlier this month - listing the epochal changes it has brought to the world was an easy task. It delivers e-mail, instant messaging, e-commerce and entertainment applications to billions of people. Businesses stay in touch with customers using the Twitter and Facebook online social networks. CEOs of major corporations blog about their companies and their activities. Astronauts have even used Twitter during space shuttle missions.

On Sept. 2, 1969, a team of computer scientists created the first network connection, a link between two computers at the University of California, Los Angeles. But according to team member Leonard Kleinrock, although the Internet is turning 40, it's still far from middle age. "The Internet has just reached its teenage years," said Kleinrock, now a distinguished professor of computer science at UCLA. "It's just beginning to flex its muscles. That will pass as it matures." The next phase of the Internet will likely bring more significant changes to daily life - though it's still unclear exactly what those may be. "We're clearly not through the evolutionary stage," said Rob Enderle, president and principal analyst at Enderle Group. "It's going to be taking the world and the human race in a quite different direction. The fact that it's just gotten into its dark side - with spam and viruses and fraud - means it's like an [unruly] teenager. We just don't know what the direction is yet.

"It may doom us. It may save us. But it's certainly going to change us." Marc Weber, founding curator of the Internet History Program at the Computer History Museum in Mountain View, Calif., suggested that the Internet's increasing mobility will drive its growth in the coming decades. The mobile Internet "will show you things about where you are," he said. "Point your mobile phone at a billboard, and you'll see more information." Consumers will also increasingly use the Internet to pay for goods on the spot, he added. Sean Koehl, technology evangelist in Intel Corp.'s Intel Labs research unit, expects that the Internet will someday take on a much more three-dimensional look. "[The Internet] really has been mostly text-based since its inception," he said. "There's been some graphics on Web pages and animation, but bringing lifelike 3-D environments onto the Web really is only beginning. Some of it is already happening ... though the technical capabilities are a little bit basic right now," Koehl added.

The beginnings of the Internet aroused much apprehension among the developers who gathered to watch the test of the first network - which included a new, state-of-the-art Honeywell DDP 516 computer about the size of a telephone booth, a Scientific Data Systems computer and a 50-foot cable connecting the two. The team on hand included engineers from UCLA, top technology companies like GTE, Honeywell and Scientific Data Systems, and government agencies like the Defense Advanced Research Projects Agency. "Everybody was ready to point the finger at the other guy if it didn't work," Kleinrock joked. "We were worried that the [Honeywell] machine, which had just been sent across the country, might not operate properly when we threw the switch." Still, the team was confident the technology itself was secure. "I had simulated the concept of a large data network many, many times - all the connections, hop-by-hop transmissions, breaking messages into pieces. The mathematics proved it all, and then I simulated it. It was thousands of hours of simulation." As with many complex and historically significant inventions, there's some debate over the true date of the Internet's birth.

Some say it was that September day in '69. Others peg it at Oct. 29 of the same year, when Kleinrock sent a message from UCLA to a node at the Stanford Research Institute in Palo Alto, Calif. Still others argue that the Internet was born when other key events took place. Kleinrock, who received a 2007 National Medal of Science, said both 1969 dates are significant. "If Sept. 2 was the day the Internet took its first breath," he said, "we like to say Oct. 29 was the day the infant Internet said its first words." This version of this story originally appeared in Computerworld's print edition. It's an edited version of an article that first appeared on Computerworld.com.