Network IPS tests reveal equipment shortcomings

An independent test and evaluation of 15 network intrusion-prevention system (IPS) products from seven vendors showed none were fully effective in warding off attacks against Microsoft, Adobe and other programs. NSS Labs, which conducted the test without vendor sponsorship of any kind, found that the Sourcefire IPS showed 89% effectiveness against a total of 1,159 attacks on products such as Windows, Adobe Acrobat and Microsoft SharePoint, while the Juniper IPS scored lowest at only 17% effectiveness. NSS Labs also evaluated the 15 IPS offerings on their capability in responding to "evasions," attacks delivered in an obfuscated and stealthy manner in order to escape detection.

In that arena, the McAfee and IBM IPS held up particularly well. Products tested came from Cisco, IBM, Juniper, McAfee, Sourcefire, Stonesoft and TippingPoint; Check Point, Enterasys, Nitro Security, Radware, StillSecure, Top Layer and Trustwave declined to participate in this round of tests, which were conducted in October and November. Rick Moy, president of NSS Labs, says he was disappointed overall that none of the 10Mbps to 10Gbps IPS products tested achieved 100% effectiveness in detecting and blocking the attacks, including buffer overflow exploits. "The threats are continuing to get worse and everyone says they're keeping up with them, so we wanted them to prove it," Moy says. The vendors that did participate were allowed to tune their equipment in one round of tests designed to find out how long it took to change the default settings in an effort to improve performance based on policy. Under this measurement, McAfee, IBM and Stonesoft did well.

The Sourcefire IPS, however, took the most time, which Moy says would translate into time needed for professionals to manage it in an enterprise. McAfee, which on Tuesday will make major announcements related to new network-security gear, was at a loss to explain why its IPS didn't achieve 100% effectiveness in the NSS Labs tests. "There are a variety of reasons you might not achieve 100%," says Greg Brown, McAfee's senior director of product marketing, who adds he hasn't read the NSS Labs report yet. Sometimes lab tests simply "don't look like a real attack" to equipment. He says McAfee focuses its efforts on "very new exploits." Details on IPS effectiveness, evasion attacks, tuning, performance and cost-of-ownership issues are included in depth in the 50-plus-page report "Network Intrusion Prevention Group Test" that NSS Labs is selling for $1,800. NSS Labs also anticipates conducting a round of tests of host-based IPS products in the near future.

Personal data of 24,000 Notre Dame employees exposed online

In an embarrassing security gaffe, personal data on more than 24,000 past and present employees at the University of Notre Dame was publicly available on the Web for more than three years. The breach resulted when an employee inadvertently posted files containing the names, Social Security numbers and zip codes of the employees on a publicly accessible university Web site. The files are believed to have been posted in August 2006 and remained there until this October, when they were finally discovered and reported to university officials. The files have since been removed and secured, and there is no evidence that the information has been inappropriately used, said Dennis Brown, Notre Dame's assistant vice president for news and information.

All of those affected by the breach have been notified and the university has offered to pay for credit monitoring services, he said. Included in the list of those affected are a "large number" of on-call and temporary employees, Brown said. Notre Dame last suffered a data breach in January 2006, when unknown intruders broke into a server and accessed records belonging to an undisclosed number of individuals. This is the second time this week that an organization has found itself in the news over an inadvertent data leak. On Sunday, a blogger discovered online a sensitive security manual containing detailed information on the screening procedures used by Transportation Security Administration agents at U.S. airports.

The document was supposed to have been redacted before it was posted on a government Web site, but wasn't. As with the Notre Dame incident, the document was quickly removed once the lapse was reported, but not before numerous copies were posted at sites all over the Internet. Though data breaches involving external hackers get most of the attention, inadvertent data exposures such as these latest examples are not all that uncommon. This June, a 267-page document listing all U.S. civilian nuclear sites, along with descriptions of their assets and activities, became available on whistleblower site Wikileaks.org days after a government Web site publicly posted the data by accident. The sensitive, but unclassified, data had been compiled as part of a report being prepared by the federal government for the International Atomic Energy Agency (IAEA). In another incident, in October 2007, a student at Western Oregon University discovered a file containing personal data on student grades that had been accidentally posted on a publicly accessible university server by an employee. Numerous other breaches, including at government agencies, have also inadvertently leaked confidential and sensitive data over file-sharing networks.

Retailers taking orders for laptops with Core i7 chips

Retailers are now taking orders for what could easily be the world's fastest laptops, powered by Intel's speedy Core i7 desktop processors. The chips, launched in November, were dubbed the "world's fastest chips" by Intel until the company's Xeon server processors were introduced in March. U.S. retailer AVADirect and Canadian retailer Eurocom are offering variants of Clevo's D900F laptop with the Core i7 processor, a chip usually included in high-end gaming desktops. The laptops will come with 17-inch screens and are intended to be desktop replacement PCs. The machines don't skimp on features and include a full array of components one would find in Core i7 desktop systems, according to laptop specifications on the retailers' Web sites.

Laptop hardware usually lags desktop hardware by up to 12 months because desktop parts need to be redesigned for notebook use, but AVADirect decided not to wait to bring the Core i7 hardware to consumers in a portable form. "While power usage will be higher, AVADirect does not need to wait until Intel or some other company designs and implements mobile offerings of current desktop hardware," AVADirect said in a statement. The laptops come with Core i7 920, 940 and 965 quad-core processors running at speeds from 2.66GHz to 3.2GHz, and include 8MB of L3 cache. The processors draw 130 watts of power. The laptops will support up to 6GB of DDR3 memory, which should provide a tremendous performance boost, and will come with the X58 chipset and an Nvidia graphics processing unit (GPU) to boost graphics performance. The machines will support up to 1.5TB of RAID hard drive storage and include wireless 802.11a/b/g/n technology.

With standard components, the D900F laptop's starting price is around US$2,500 on AVADirect's Web site, and the price crosses $6,000 for an extravagant configuration that includes the fastest Core i7 965 processor, three 500GB storage drives, internal Bluetooth capabilities, a DVD-RW drive and additional cooling features. The laptops will ship with either Windows Vista or Linux. Eurocom's customized Clevo D900F system - which it calls the Panther D900F - weighs a whopping 11.9 pounds (5.4 kilograms). Intel's Core i7 chips are a significant upgrade over the Core 2 Duo chips currently used in desktops and laptops. The new chips are built on the Nehalem microarchitecture, which improves system speed and performance-per-watt compared to Intel's earlier Core microarchitecture. Each core can execute two software threads simultaneously, so a laptop with four processor cores could run eight threads at once for quicker application performance. The chips also integrate a memory controller and use QuickPath Interconnect (QPI) technology, which provides a faster pipe for the CPU to communicate with system components like graphics cards.

Intel later this year intends to introduce new chips for desktops and laptops. The laptop chips, code-named Arrandale, will be dual-core and start shipping in the fourth quarter of this year, with laptops becoming available in early 2010. Arrandale chips are expected to be faster than existing Core 2 Duo chips and to consume less power. However, laptops with Arrandale chips may not match the speeds of Core i7 laptops, considering the chips will be dual-core and built to draw limited amounts of power.

IBM's Black Friday-like promotion pushes IT equipment leasing

As the holiday season approaches, IBM this quarter began its own Black Friday-like promotion, offering incentives it hopes will convince users to lease its hardware or buy used IBM equipment. IBM, like most other major IT vendors, operates its own financing arm, and it argues that leasing is especially attractive for customers that have concerns about their budgets, about upgrade processes and about ultimate equipment disposal. When asked, Tom Higgins, director of IBM Global Financing, was hard pressed to provide any argument against leasing.

Industry analysts have said they expect that leasing will increasingly become the preferred path for many large users of IT equipment, especially as the economy continues to stagnate and large companies look to preserve capital. In addition to spreading out the cost of equipment, leasing may enable companies to speed upgrades, analysts said. For example, leasing could help a company more easily shift from a five-year to a three-year IT equipment replacement schedule. In a study of the performance and energy use of IBM blade servers, Cal Braunstein, an analyst with the Robert Frances Group Inc. in Westport, Conn., found that over a three-year period a user could replace 1,000 blades with 250 blades due to constant performance improvements. The cost of leasing the upgraded systems would be less than the company would have spent to power the older equipment if it owned it, according to Braunstein's research. And in most cases, analysts say, IT vendors have the backing to extend credit, giving them a leg up over many other financing options.

Joe Pucciarelli, an analyst at IDC, said the poor economy will probably continue to limit capital available to large firms, so he expects an increase in their leasing of IT equipment. Pucciarelli did note that companies that turn to leasing must have a good lifecycle management process in place so they can effectively utilize the new equipment. He said the case for leasing versus buying may be further strengthened by allowing IT operations to more easily upgrade to more powerful systems rather than having to build out floor space to house more older equipment. The IBM promotion offers a 90-day payment deferral on leased hardware, software and services, as well as zero percent financing on software products. In addition, the company said that during 2009 it has been passing savings from accelerated depreciation tax benefits to customers even though its global financing arm holds title to the equipment.

IBM also said it is offering its pre-owned equipment "at attractive prices" during the promotion. In the latest quarter, IBM Global Financing revenue fell by 15% to $536 million. By comparison, rival Hewlett-Packard's financial services unit reported revenue of $726 million in the quarter, up 5% from the prior-year period.

NIST SP800-53 Rev. 3: Risk Management Framework Underpins the Security Life Cycle

The National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53 provides a unified information security framework to achieve information system security and effective risk management across the entire federal government. In this second of four articles about the latest revision of this landmark Special Publication from the Joint Task Force Transformation Initiative in the Computer Security Division of the Information Technology Laboratory, Paul J. Brusil reviews the framework for risk management offered in SP 800-53, Recommended Security Controls for Federal Information Systems and Organizations, Rev. 3, which was prepared by a panel of experts drawn from throughout the U.S. government and industry. (See also Part 1: NIST SP800-53 Rev. 3: Key to Unified Security Across Federal Government and Private Sectors.) Everything that follows is Brusil's work with minor edits. * * * The Risk Management Framework in SP 800-53 (Chapter 3) invokes NIST document SP 800-39, Managing Risk from Information Systems: An Organizational Perspective, to specify the risk management framework for developing and implementing comprehensive security programs for organizations. SP 800-39 also provides guidance for managing risk associated with the development, implementation, operation and use of information systems.

The risk management activities are detailed across several NIST documents (as identified in SP 800-53, Figure 3-1), of which SP 800-53 is only one. The Risk Management Framework comprises six steps: 1) categorizing information and the information systems that handle the information; 2) selecting appropriate security controls; 3) implementing the security controls; 4) assessing the effectiveness and efficiency of the implemented security controls; 5) authorizing operation of the information system; and 6) monitoring and reporting the ongoing security state of the system. SP 800-53 focuses primarily on step 2: security control selection, specification and refinement. It is intended for new information systems, legacy information systems and external providers of information system services. To start the risk management process, each organization uses other mandatory, NIST-developed government standards. One standard helps to determine the security category of each of an organization's information and information systems.
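The six steps are easiest to picture as an ordered, repeating cycle in which monitoring feeds back into categorization and selection. The short Python sketch below is purely illustrative and is not defined by NIST; the step names simply paraphrase the list above.

```python
from enum import Enum
from itertools import cycle

class RMFStep(Enum):
    """Paraphrase of the six Risk Management Framework steps described above."""
    CATEGORIZE = 1   # categorize information and information systems
    SELECT = 2       # select appropriate security controls (SP 800-53's main focus)
    IMPLEMENT = 3    # implement the controls
    ASSESS = 4       # assess control effectiveness and efficiency
    AUTHORIZE = 5    # authorize the system to operate
    MONITOR = 6      # monitor and report the ongoing security state

def rmf_cycle():
    """Yield RMF steps in order, indefinitely: monitoring loops back to categorization."""
    return cycle(RMFStep)

if __name__ == "__main__":
    steps = rmf_cycle()
    for _ in range(8):          # slightly more than one full pass, to show the wrap-around
        print(next(steps).name)
```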

These other standards are Federal Information Processing Standard (FIPS) 199, Standards for Security Categorization of Federal Information and Information Systems, and FIPS 200, Minimum Security Requirements for Federal Information and Information Systems. The other standard is used to designate each information system's impact level (low-impact, moderate-impact or high-impact). The impact level identifies the significance that a breach of the system has on the organization's mission. Companion guidelines in another NIST recommendation, SP 800-60, Guide for Mapping Types of Information and Information Systems to Security Categories, Rev. 1, facilitate mapping information and information systems into categories and impact levels. SP 800-53 summarizes the categorization activities in Section 3.2. SP 800-53 details the security control selection activities in Section 3.3. In brief, a minimum set of broadly applicable, baseline security controls (SP 800-53, Appendix D) is chosen as a starting point for security controls applicable to the information and information system. SP 800-53 specifies three groups of baseline security controls that correspond to the low-impact, moderate-impact and high-impact information system categories defined in FIPS 200. The intent of establishing different target impacts is to facilitate the use of appropriate and sufficient security controls that effectively mitigate most risks encountered by a target with a specific level of impact. Each organization then chooses security controls commensurate with its specific information and its specific information system's risk exposure, using typical factors such as identifying vital threats to systems, establishing the likelihood that a threat will affect the system and assessing the impact of a successful threat event.
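As an illustration of how categorization drives baseline selection, here is a minimal Python sketch. It is not taken from SP 800-53: the "high-water mark" rule (the system inherits the highest of its confidentiality, integrity and availability impacts) follows FIPS 200, and the baseline labels are placeholders standing in for the low-, moderate- and high-impact baselines in SP 800-53 Appendix D.

```python
# Illustrative only: maps FIPS 199-style impact ratings to a baseline label.
# The actual control catalog is not reproduced here.

IMPACT_ORDER = {"low": 1, "moderate": 2, "high": 3}

def system_impact_level(confidentiality: str, integrity: str, availability: str) -> str:
    """Apply the high-water mark: the system impact is its highest objective impact."""
    ratings = (confidentiality.lower(), integrity.lower(), availability.lower())
    for r in ratings:
        if r not in IMPACT_ORDER:
            raise ValueError(f"unknown impact rating: {r!r}")
    return max(ratings, key=IMPACT_ORDER.__getitem__)

def baseline_for(impact_level: str) -> str:
    """Pick the corresponding baseline; the names are placeholders, not catalog contents."""
    return {
        "low": "SP 800-53 low-impact baseline",
        "moderate": "SP 800-53 moderate-impact baseline",
        "high": "SP 800-53 high-impact baseline",
    }[impact_level]

# Example: moderate confidentiality, low integrity and high availability impact
# yields a high-impact system, so selection starts from the high-impact baseline.
print(baseline_for(system_impact_level("moderate", "low", "high")))
```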

The baseline security controls are selected by an organization based on the organization's approach to managing risk, as well as on security category and worst-case impact analyses in accordance with FIPS 199 and FIPS 200. Then, as needed based on an organization's specific risk assessment, possible local conditions and environments, or specific security requirements or objectives, these minimal baseline security controls can be tailored, expanded or supplemented to meet all of the organization's security needs. SP 800-53 gives guidance to organizations on the scope of applicability of each security control to the organization's specific situation, including, for example, the organization's specific applicable policies and regulations, specific physical facilities, specific operational environment, specific IT components, specific technologies, and/or specific exposure to public access interfaces. Tailoring activities include selecting organization-specific parameters in security controls, assigning organization-specific values to parameters in security controls and assigning or selecting appropriate, organization-specific control actions. If the tailored security control baseline is not sufficient to provide adequate protection for an organization's information and information system, additional security controls or control enhancements can be selected to meet specific threats, vulnerabilities, and/or additional requirements in applicable regulations. Augmentation activities include adding appropriate, organization-specific control functionality or increasing control strength.
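Tailoring and augmentation can be thought of as transformations applied to the selected baseline. The sketch below is a hypothetical illustration in Python: the control IDs, titles, parameter names and values are invented for the example and are not entries from the SP 800-53 catalog.

```python
from copy import deepcopy

# Hypothetical baseline: each control carries organization-defined parameters, unassigned.
baseline = {
    "CTRL-1": {"title": "Session lock", "params": {"inactivity_timeout_minutes": None}},
    "CTRL-2": {"title": "Audit review", "params": {"review_frequency_days": None}},
}

def tailor(controls: dict, org_values: dict) -> dict:
    """Assign organization-specific values to the baseline controls' parameters."""
    tailored = deepcopy(controls)
    for control_id, params in org_values.items():
        tailored[control_id]["params"].update(params)
    return tailored

def augment(controls: dict, extra_controls: dict) -> dict:
    """Add supplemental controls or enhancements beyond the tailored baseline."""
    augmented = deepcopy(controls)
    augmented.update(extra_controls)
    return augmented

# Tailor the baseline with organization-specific values, then augment it with one
# extra (made-up) control addressing a specific exposure.
plan = augment(
    tailor(baseline, {"CTRL-1": {"inactivity_timeout_minutes": 15},
                      "CTRL-2": {"review_frequency_days": 7}}),
    {"CTRL-3": {"title": "Extra monitoring for public-facing interfaces", "params": {}}},
)
print(plan)
```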

As a last resort, an organization can select security controls from a source other than SP 800-53. This option is possible if suitable security controls do not exist in SP 800-53, if appropriate rationale is established for going to another source and if the organization assesses and accepts the risk associated with use of another source. An organization-specific security plan is then developed. The plan documents the rationale for selecting and tailoring each security control. Such rationale is used to provide evidence that the security controls adequately protect organizational operations and assets, individuals, other organizations and, ultimately, the nation. Subsequent analyses of the risk management decisions documented in the security plan become the basis for authorizing operation of the organization's information system. A designated senior official gives such authorization.

After authorizing operation, the organization begins continuous monitoring of the effectiveness of all security controls. Such monitoring facilitates potential future decisions to modify or to update the organization's security plan and the deployed security controls. Modification and update may be necessary to handle information system changes and/or updates, new configurations, operational environment changes, new types of security incidents, new threats and the like. Depending on the severity of adverse impacts on the organization, the revised security plan may need to be used to re-authorize operation of the information system. SP 800-53 also defines 11 organization-level, program management security controls (Appendix G) for managing and protecting information security programs. Organizations document selected program management controls in an Information Security Program Plan.

This plan is implemented, assessed for effectiveness via assessment procedures documented in NIST document SP 800-53A, Guide for Assessing the Security Controls in Federal Information Systems – Building Effective Security Assessment Plans, and subsequently authorized and continuously monitored. In the next part of this four-part series, Brusil discusses the comprehensive repository of security controls presented in SP 800-53 Rev. 3. * * *

The evolving branch office

In a recent newsletter we introduced the concept of Application Delivery 2.0. One of the steps that IT organizations are taking in order to support the requirements of Application Delivery 2.0 is to implement a next-generation branch office. As the next three newsletters will demonstrate, the next-generation branch office represents a multi-year movement away from branch offices that are IT-heavy to ones that are IT-lite. What's driving Application Delivery 2.0? Over the last decade most companies have come to realize that their branch offices are a critical business asset. As a result, many companies have moved employees out of a headquarters facility and relocated them into branch offices.

The trend to have employees work outside of a headquarters facility was discussed in an article in Network World. That article stated that 90% of employees currently work away from headquarters. In addition, in an effort to both reduce cost and maximize flexibility, many employees now work out of home offices. Employees who work in a branch office still need access to a wide range of applications. Because of this requirement, the reaction of most IT departments five to 10 years ago was to upgrade the branch office infrastructure to include many, if not all, of the same technologies that are used at central sites.

These include high-performance PCs and server platforms as well as high-speed switched LANs. In addition, the typical branch office of this era hosted most of the applications that the branch office users needed to access. This included e-mail, sales force automation and CRM, as well as office productivity applications. The branch office of this era can be considered IT-heavy. That follows because, in addition to having complex IT infrastructure and applications at each branch office, it was also common to have IT staff at an organization's larger branch offices. This staffing was necessary in order to provide technical support and maintenance for the applications hosted at the branch as well as to support the complex IT infrastructure. One of the key characteristics of having branch offices that are IT-heavy is that while branch office employees rely on the WAN, the performance of the WAN does not have a major impact on their productivity.

In addition to the cost associated with being IT-heavy, it was extremely difficult for IT organizations of this era to control access to the data stored in branch offices and to maintain physical security for the servers in the branch office. There was almost no ability to measure the actual application response time as seen by the end user. In addition, virtually all of the management functionality of this era focused on individual technology domains; for example, LANs, WANs, servers and databases. In our next newsletter we will discuss how IT organizations began to move IT resources out of branch offices and how that gave rise to Application Delivery 1.0.

The Net at 40: What's Next?

When the Internet hit 40 years old - which, by many accounts, it did earlier this month - listing the epochal changes it has brought to the world was an easy task. It delivers e-mail, instant messaging, e-commerce and entertainment applications to billions of people. Businesses stay in touch with customers using the Twitter and Facebook online social networks. CEOs of major corporations blog about their companies and their activities. Astronauts have even used Twitter during space shuttle missions.

On Sept. 2, 1969, a team of computer scientists created the first network connection, a link between two computers at the University of California, Los Angeles. But according to team member Leonard Kleinrock, although the Internet is turning 40, it's still far from its middle age. "The Internet has just reached its teenage years," said Kleinrock, now a distinguished professor of computer science at UCLA. "It's just beginning to flex its muscles. That will pass as it matures." The next phase of the Internet will likely bring more significant changes to daily life - though it's still unclear exactly what those may be. "We're clearly not through the evolutionary stage," said Rob Enderle, president and principal analyst at Enderle Group. "It's going to be taking the world and the human race in a quite different direction. The fact that it's just gotten into its dark side - with spam and viruses and fraud - means it's like an [unruly] teenager. We just don't know what the direction is yet.

"It may doom us. It may save us. But it's certainly going to change us." Marc Weber, founding curator of the Internet History Program at the Computer History Museum in Mountain View, Calif., suggested that the Internet's increasing mobility will drive its growth in the coming decades. The mobile Internet "will show you things about where you are," he said. "Point your mobile phone at a billboard, and you'll see more information." Consumers will increasingly use the Internet to immediately pay for goods, he added. Sean Koehl, technology evangelist in Intel Corp.'s Intel Labs research unit, expects that the Internet will someday take on a much more three-dimensional look. "[The Internet] really has been mostly text-based since its inception," he said. "There's been some graphics on Web pages and animation, but bringing lifelike 3-D environments onto the Web really is only beginning. Some of it is already happening ... though the technical capabilities are a little bit basic right now," Koehl added.

The beginnings of the Internet aroused much apprehension among the developers who gathered to watch the test of the first network - which included a new, state-of-the-art Honeywell DDP 516 computer about the size of a telephone booth, a Scientific Data Systems computer and a 50-foot cable connecting the two. The team on hand included engineers from UCLA, top technology companies like GTE, Honeywell and Scientific Data Systems, and government agencies like the Defense Advanced Research Projects Agency. "Everybody was ready to point the finger at the other guy if it didn't work," Kleinrock joked. "We were worried that the [Honeywell] machine, which had just been sent across the country, might not operate properly when we threw the switch. We were confident the technology was secure. I had simulated the concept of a large data network many, many times - all the connections, hop-by-hop transmissions, breaking messages into pieces. The mathematics proved it all, and then I simulated it. It was thousands of hours of simulation." As with many complex and historically significant inventions, there's some debate over the true date of the Internet's birth.

Some say it was that September day in '69. Others peg it at Oct. 29 of the same year, when Kleinrock sent a message from UCLA to a node at the Stanford Research Institute in Menlo Park, Calif. Still others argue that the Internet was born when other key events took place. Kleinrock, who received a 2007 National Medal of Science, said both 1969 dates are significant. "If Sept. 2 was the day the Internet took its first breath," he said, "we like to say Oct. 29 was the day the infant Internet said its first words." This version of this story originally appeared in Computerworld's print edition; it's an edited version of an article that first appeared on Computerworld.com.

Five signs your telework program is a bust

Many companies make it possible for employees to work remotely, but without a structured telework program in place, they could be putting corporate data at risk and stifling employee productivity. Here are a few of the red flags telework advocates should watch for when their programs seem to be lacking positive results. 1. Lackluster management support: Some people simply don't buy into telework, regardless of the promised benefits or the potential cost savings to a company. "It is rare, but in some cases senior upper management doesn't like the prospect of employees working remotely and makes it difficult to move a program forward," says Chuck Wilsker, president and CEO of The Telework Coalition. "There are those managers that believe presence equals productivity, no matter what the arguments for telework are. One word from the right manager can make the program go bust and turn it off immediately like a spigot." Do you know where your employees are working? "If companies don't have a policy or performance management system in place that could help them monitor their telework program and focus on work output, then it might become chaotic," says Cindy Auten, general manager for Telework Exchange. 2. Ineligible job duties: Companies may want to offer their employees the perk of working remotely, especially in these tough economic times when cutting fuel costs could help most people.

Yet not all positions apply when it comes to working remotely. "Jobs requiring face-to-face or in-office communications won't work unless the program is very structured to specific duties on specific days," Auten explains. "And jobs that deal with sensitive data might be restricted to on-site activities as well, unless the company has well-documented data security policies." Also, companies that don't recognize that some employees would be able to work remotely while others could not might experience failure sooner rather than later. "Any company who thinks that all employees are suitable for teleworking are setting themselves up for failure," says Ben Rothke, a New York City-based senior security consultant with BT Professional Services. 3. Poor technical support: Even if managers comply and job duties are suited to remote work, telework programs could be stalled by subpar technology and support services. A successful remote work policy includes detailed descriptions of how employees connect, what software and hardware equipment they use, and how support can best meet their needs. Yet experts say that many companies fail with this more obvious telework requirement. "Dissatisfaction with technology and equipment can cause many teleworkers to not take full advantage of a program. If they don't have what's necessary to do their job, such as fast Internet connection and helpdesk services, then why bother trying to work remotely," Wilsker says. (See related story, "Secure telework without a VPN.") 4. Communication breakdown: For many in management positions, telework requires a leap of faith or an inherent trust in the employees' discipline to work without supervision. But for others, collaboration tools that enable and monitor ongoing communications between managers and employees are mandatory. Without such resources, telework programs can be seen as a failure - even if work is getting done - without some sort of accountability. "Companies shouldn't measure employee productivity by doing attendance, but [they need to] have another means by which to validate work output," says Lawrence Imeish, principal consultant for Dimension Data. "Programs such as instant messaging can show when employees are idle, but that isn't the best way to communicate. Set policies for checking in and establish that criteria upfront or you could lose productivity."

But in some cases, regardless of efforts to quantify performance and monitor efforts, employees aren't able to work without supervision, another sign telework is not for the organization. "It is rare, but without proper screening and training, companies could experience an individual that doesn't thrive in that type of environment," Wilsker says. 5. Security breach: Possibly the worst sign of an unsuccessful telework program is the loss of client data or a corporate security breach that's blamed on inadequate telework policies. "Data handling is a crucial area to include," Rothke says. "If an employee works with confidential data, ensure that their computer is in a secure area of their home," he adds. Companies must also "make sure the user has the basics, a shredder and a secure area, including a locking file cabinet, in which to work."

How a Botnet Gets Its Name

There is a new kid in town in the world of botnets - isn't there always? A heavyweight spamming botnet known as Festi has been tracked by researchers with MessageLabs Intelligence only since August, but it is already responsible for approximately 5 percent of all global spam (around 2.5 billion spam e-mails per day), according to Paul Wood, senior analyst with MessageLabs, which keeps tabs on spam and botnet activity. When a botnet like Festi pops onto the radar screen of security researchers, it not only poses the question of what it is doing and how much damage it can cause; there is also the issue of what to call it. For all of their prevalence and power online, when it comes to naming botnets, there is no real system in place.

Wood explained Festi's history. "The name came from Microsoft; they identified the malware behind it and gave it the catchiest name," said Wood. "Usually, a number of companies will identify the botnet at the same time and give it a name based on the botnet's characteristics. A common practice so far has been to name it after the malware associated with it; this is a practice that has some drawbacks. Its original name was backdoor.winnt/festi.a or backdoor.trojan. Usually the name and convention comes from wording found within the actual software itself and that is used in some way. Backdoor droppers are common and that wouldn't stick; it would be too generic. This one may have been related to a word like festival." Because the security industry lacks a uniform way to title botnets, the result is sometimes a long list of names for the same botnet that are used by different antivirus vendors and that can be confusing to customers.

As it stands now, the infamous Conficker is also known as Downup, Downadup and Kido. The Srizbi botnet is also called Cbeplay and Exchanger. Kracken is also the botnet Bobax. Why they are called what they are called is up to the individual researchers who first identified them. "A lot of time it depends on the first time we see bot in action and what it does," according to Andre DiMino, director of Shadowserver Foundation, a volunteer group of cybercrime busters who, in their free time, are dedicated to finding and stopping malicious activity such as botnets. For instance, Gumblar, a large botnet that made news earlier this year (and is possibly perking up again), first hit the gumblar.cn domain, said DiMino.

Another, known as Avalanche, was deemed so because of what DiMino described as a preponderance of domain names being used by the botnet. The naming dilemma can be a difficult one to tackle, according to Vincent Weafer, vice president of Symantec's security response division. Over the years, naming for malware has had a few ground rules. "Don't name anything after the author," he said. "That was most important back when viruses were written for fame." Weafer whipped off a few botnet names that have made headlines in recent years and did his best to recall how they got their titles. Among the more notable, he said, is Conficker, which is thought to be a combination of the English word configure and the German word ficker, which is obscene. Kracken is named after a legendary sea monster.

The Storm botnet was named after a famous European storm and the associated spam that was going around related to it. And MegaD, a large spambot, got its name because it is known for spam that pushes Viagra and various male enhancement herbal remedies. "You can guess what the D stands for after Mega," he said. Gunter Ollmann, VP of research with security firm Damballa, believes it is time for a systematic approach to naming botnets that vendors can agree upon. Because botnets morph and change so frequently, he said, they rarely continue to have a meaningful association with the original malware sample that prompted researchers to name them in the first place. "Botmasters don't restrict themselves to a single piece of malware," said Ollmann. "They use multiple tools to generate multiple families of malware. To call a particular botnet after one piece of malware is naïve and doesn't really encompass what the actual threat is." Ollmann also adds that the vast majority of malware has no real humanized name and is seen simply as digits, which makes naming impossible.

The result is a confusing landscape for enterprise customers who may be trying to clean up a mess made by a virulent worm, only to find various vendors using different names for the same problem. "There is some work going on among AV vendors to come up with naming convention for the malware sites, but this is independent of the botnets," said Ollmann. "This has been going on for several years now. But there has been no visible progress the end user can make use of." The most recent iteration of that discussion focused on how to transport the metadata that describes a particular named threat. Ollmann said Damballa is now using a botnet naming system, with the agreement of customers, that favors a two-part name and works much like the hurricane naming system used by the National Weather Service. The first part of the name comes from a list of pre-agreed-upon names. Once a botnet is identified, a name is taken from the list and crossed off.

It becomes the name forever associated with that botnet. The second part of the name tracks the most common piece of malware that is currently associated with the botnet. While botnet masters change their malware on a daily basis, they usually only change their malware family balance every two or three days, said Ollmann. The second part of the name then changes to reflect that fluctuation. "So many of these are appearing it just becomes a case of assigning a human readable name and no other name associated with it," said Ollmann. "It is perhaps ungracious to name them with a hurricane naming system, but it speaks perhaps to the nature of this threat."
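Ollmann's description suggests a simple data model: a permanent first name drawn from a pre-agreed list, plus a second part that is re-evaluated as the botnet's malware family mix shifts. The Python sketch below is purely illustrative; the names and family labels are invented, and it is not Damballa's actual system.

```python
from collections import Counter

# Hypothetical pre-agreed first names, consumed in order like hurricane names.
FIRST_NAMES = ["Alpha", "Bravo", "Charlie", "Delta"]

class BotnetRegistry:
    def __init__(self, first_names):
        self._available = list(first_names)
        self._first = {}        # botnet id -> permanent first name
        self._families = {}     # botnet id -> Counter of observed malware families

    def register(self, botnet_id: str) -> str:
        """Assign the next unused first name; it stays with the botnet forever."""
        name = self._available.pop(0)       # "cross it off the list"
        self._first[botnet_id] = name
        self._families[botnet_id] = Counter()
        return name

    def observe(self, botnet_id: str, malware_family: str, count: int = 1) -> None:
        """Record malware samples attributed to the botnet."""
        self._families[botnet_id][malware_family] += count

    def current_name(self, botnet_id: str) -> str:
        """Two-part name: fixed first part, plus the currently dominant malware family."""
        family, _ = self._families[botnet_id].most_common(1)[0]
        return f"{self._first[botnet_id]}-{family}"

registry = BotnetRegistry(FIRST_NAMES)
registry.register("bn-001")
registry.observe("bn-001", "spambot-kit", 40)
registry.observe("bn-001", "backdoor-kit", 10)
print(registry.current_name("bn-001"))   # Alpha-spambot-kit
registry.observe("bn-001", "backdoor-kit", 50)
print(registry.current_name("bn-001"))   # Alpha-backdoor-kit: second part shifts with the mix
```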

Retail sales of Windows 7 delayed in India

Microsoft launched its new Windows 7 operating system in India Thursday, but customers who want to buy off-the-shelf packages of the operating system at a retail store will have to wait longer. Importers of packaged software to India are caught in a dispute with the country's customs department over the interpretation of new taxes on packaged software that were introduced in July. As a result, consignments of imported packaged software are not being cleared easily.

Windows 7 is, however, already available to consumers in India on computers that come factory-loaded with the new operating system, a spokeswoman for Microsoft said on Thursday. Dell PCs with Windows 7, for example, are now available at retail stores across the country, a company spokeswoman said. Enterprise customers can download and deploy the new operating system under a volume licensing agreement with Microsoft, and over 1,000 enterprise customers in the country are in the process of deploying Windows 7, the Microsoft spokeswoman said. The delay at Indian customs will not have much effect on Windows 7 sales in India, because off-the-shelf retail sales of packaged software account for less than 5 percent of Microsoft's operating system sales in the country, according to an industry source who declined to be named. Most consumer sales of Windows are with hardware, he added.

New government rules that came into force in July introduced separate taxes on the import of the physical media for the software and on the value of the software license, according to Raju Bhatnagar, vice president for government relations at the National Association of Software and Service Companies (Nasscom). The sticker price on the box of packaged software does not, however, specifically state the value of the license, Bhatnagar said. One option would be for vendors to print the value of the license on the packages, Bhatnagar said. Nasscom is asking the government to issue instructions to vendors and the customs department to resolve the issue. Microsoft said it hoped that its consignment of Windows 7 packages will be cleared soon.

Chip sales to grow in 2010, iSuppli says

Worldwide semiconductor sales will grow in 2010 as chip sales gain steam in response to stabilizing economies, analyst firm iSuppli said on Wednesday. Semiconductor sales could grow by 13.8 percent on a year-over-year basis to reach US$246 billion in 2010. However, global chip sales will decline in 2009, albeit at a lower rate than iSuppli first projected. Chip revenue will keep growing through 2012 and could return to the levels of 2007, the year the chip revenue skid began: chip sales could total $282.7 billion in 2012, while sales tallied close to $273.4 billion in 2007.

The analyst firm predicted year-over-year global chip sales would decline by 16.5 percent in 2009; earlier in the year, iSuppli projected a 23 percent drop. Chip sales in 2009 will total US$216 billion, compared to $258 billion in 2008. Chip sales have "gained clarity" as economies stabilize and supplies improve in key markets after an unstable first quarter, iSuppli said in a statement. Semiconductor sales and inventory levels in the PC and mobile-handset markets - which account for a majority of semiconductor sales - improved in the second quarter, iSuppli said. "Semiconductor shipments rebounded as inventories were replenished and modest forward-looking purchases were made," said Dale Ford, senior vice president, market intelligence services for iSuppli, in a statement. Major vendors have also increased their outlooks for PC and mobile-handset sales, which has given more clarity to projections of overall chip sales for the year. Intel CEO Paul Otellini last week said that the company's chip shipments were stabilizing as PC shipments start to take off. Otellini's comments were stronger than the conservative outlooks for an expected PC industry recovery provided earlier in the year by companies like Advanced Micro Devices and Dell.

Those companies said that PC shipments would grow as users look to buy new PCs with Microsoft's upcoming Windows 7 OS, which is due next month, and as companies look to refresh PCs. The global economy was partly boosted in the second quarter by worldwide economic stimulus efforts, especially in China, iSuppli said. China's stimulus efforts resulted in a massive increase in consumer purchasing, which benefited worldwide economic conditions, iSuppli said. The U.S. stimulus effort - the American Recovery and Reinvestment Act, an economic stimulus package of $787 billion passed by Congress in February and signed into law by President Barack Obama - had a lesser effect, as it wasn't implemented on a wide basis, iSuppli said.

Enterasys revamps high-end Ethernet switch line

Enterasys this week is introducing a major upgrade to its Ethernet switch line in an effort to better serve converged networks, including those that are heavily virtualized. The S-Series boasts an almost fourfold increase in switching capacity and a 10x increase in throughput over the predecessor N-Series, plus greater 10G port density. All that, plus efficient Power-over-Ethernet provisioning, should enable customers to better network VoIP, wireless LAN and assorted data center products, including those from Siemens Enterprise Communications Group, the outfit Enterasys merged with last year. In addition, the switches come with improved policy-based security features, a traditional Enterasys differentiator. The rollout also could boost Enterasys' share of the $19 billion Ethernet switching market, a share that has been essentially flat (Dell'Oro Group says the vendor's share was 1.3% in the third quarter of 2007 and 1.1% as recently as the second quarter of 2009). Analysts say it's about time Enterasys refreshed the top line.

The N-Series is several years old, and though enhanced several times over that period, it still wasn't fully convergence-capable. "They ran out of room on the backplane of the N," says Steve Schuchart of Current Analysis. "Different sheet metal is required – if you need S-Series capability, you need a new switch." The S-Series switching line comprises 1U, 3-slot, 4-slot and 8-slot chassis, depending on the application: a network edge access switch, a distribution layer switch, a multi-terabit core router, or a data center virtualization system. Total switching capacity for the S-Series initially is 1.28Tbps and throughput measures 950Mpps, Enterasys says. For virtualized environments, the S-Series can be configured and policy-defined to identify virtual hosts supported by VMware, XenServer and HyperV hypervisors and to assign ports, access controls and class-of-service parameters for each, Enterasys says. These policies can then follow the virtual server as it moves around the data center, the company says. For cloud computing, Enterasys says the S-Series can identify on-demand applications, automatically prioritize them based on user ID, and authorize and control network access.

The N-Series topped out at 1.68Tbps and 94.5Mpps. The S-Series backplane, though, is designed to support greater than 6Tbps of capacity, the company says. The system is capable of 160Gbps per slot, supporting up to 128 10G ports, 100 more than the N-Series. This capacity also prepares the switch for 40/100Gbps Ethernet, expected next year. The S-Series also includes many standard features that competitors might charge extra for. They include:
• Automated provisioning of virtual and physical server connectivity;
• A distributed switching and system management architecture;
• Self-healing functionality, in which switching and routing applications are distributed across multiple modules in the event of a module failure;
• Multiple discovery methods, such as Cisco Discovery Protocol and LLDP-MED, to identify and provision services to IP phones and wireless access points from major vendors;
• And automatic upgrade, reload or rollback of firmware on each module.
"One of the real potentials of the switch is that you're going to be able to put rules on there that go all the way down to Layer 7," says the telecom manager of a major American university, a large Enterasys customer who asked not to be named. "That switch has a lot more capability when it comes to policy and rules." But a disadvantage, he says, is what Schuchart alluded to with "different sheet metal" - the S-Series modules will not work in the N-Series chassis.

The S-Series also does not currently support virtual switching, or chassis "bonding," in which a user can combine switches into one to pool bandwidth and increase performance. It's akin to Cisco's Virtual Switching System 1440 capability for its Catalyst 6500 switches. The university customer says that chassis bonding may be added to the S-Series in a year or two. The S-Series is expected to compete squarely with the Catalyst 6500 and 4500 from Cisco – Enterasys claims the S-Series switches cost 20% less and are more than four times as power efficient as those products.

But the code base for the switches remains the same, says the user, who adds that he expects to replace roughly half of his 127 N-Series switches with the S-Series over the next three years. "We're real excited about the product," he says. "We'd buy more if we could." Enterasys S-Series products are priced from $15,995. Enterasys is the network infrastructure division of Siemens Enterprise Communications Group.

Microsoft slates Office 2010 public beta for November

Microsoft will launch the public beta of Office 2010 next month, company CEO Steve Ballmer said on Monday. In a keynote that kicked off Microsoft's SharePoint Conference 2009 in Las Vegas, Ballmer announced that the public beta of Office 2010 will be made available in November. When pressed for details, a Microsoft spokeswoman said the company did not have a specific timeline beyond Ballmer's pinning the beta to next month.

So far, Microsoft has offered a preview of its next desktop suite only to a relatively small group of testers. It has also opened the online edition, Office Web Apps, to a similar preview. Office Web Apps includes lightweight versions of Word, Excel and PowerPoint and will be made available to millions free of charge in the first half of next year, the only timetable Microsoft has set for Office 2010's ship date. Anyone will be eligible to test-drive the Office 2010 beta, Microsoft said today. Last summer, Microsoft said that it expected to distribute millions of copies of the Office 2010 public beta. However, the company declined to answer questions about whether the number of copies of the beta will be limited - as it tried to do with the Windows 7 beta earlier this year - or be available only for a limited time, as was the Windows 7 release candidate.

In April, Microsoft said that it would not offer users the chance to test Office 2010, as it had done with other editions, including Office 2007. The company quickly backtracked, saying that it had simply given "the wrong impression" about its plans. Also unknown is whether Microsoft will charge users to download the beta, a tactic it used with the second beta of Office 2007, when it let customers try out the suite from within their browsers for free but charged them $1.50 to download the preview. The latter move, Microsoft said in July 2006, was because "the beta 2 downloads have exceeded our goals," prompting it to "implement a cost-recovery measure." Microsoft may use a new technology called Click-To-Run, which debuted in July with the Technical Preview, to deliver the beta of Office 2010. Click-To-Run "streams" pieces of the suite to users who begin a download, letting them start using the suite within minutes. While users work with the suite, the remainder of the code is downloaded in the background.

Two weeks ago, Microsoft said it would use Click-To-Run to offer a limited-time trial of Office 2010 when the final bits ship next year. The company will also offer an advertising-supported version of Office 2010 to computer makers, who will install it on their new PCs as an alternative to the retired Microsoft Works. Dubbed Office Starter 2010, it will include scaled-back editions of Word 2010 and Excel 2010. Customers will be able to upgrade from Starter to Office 2010 Home & Student, Home & Business or Professional. An after-market "key," purchased either on a card at electronics retailers or online from Microsoft, will unlock the appropriate for-a-fee version, so that no additional software need be downloaded. Microsoft has not yet announced prices for Office 2010. Ballmer made the Office 2010 beta announcement at the same time he revealed some of the features of the upcoming enterprise SharePoint 2010 software. He said that a public beta of SharePoint 2010 would also be available next month.

Gartner: Turn server heat up to 75

Data center managers should turn server temperatures up to 75 degrees Fahrenheit and adopt more aggressive policies for IT energy measurement, Gartner says in a new report. After conducting a Web-based survey of 130 infrastructure and operations managers, Gartner concluded that measurement and monitoring of data center energy use will remain immature through 2011. In general, data center managers are not paying enough attention to measuring, monitoring and modeling of energy use. In a troubling sign, 48% of respondents have not yet considered metrics for energy management, and only 7% said their top priorities include procurement of green products and pushing vendors to create more energy-efficient technology. "Although the green IT and data center energy issue has been on the agenda for some time now, many managers feel that they have to deal with more immediate concerns before focusing attention on their suppliers' products," Rakesh Kumar, research vice president at Gartner, said in a news release. "In other words, even if more energy efficient servers or energy management tools were available, data center and IT managers are far more interested in internal projects like consolidation, rationalization and virtualization." About 63% of survey respondents expect to face data center capacity constraints in the next 18 months, and 15% said they are already using all available capacity and will have to build new data centers or refurbish existing ones within the next year.

Gartner issued four recommendations for improving energy management:
• Raise the temperature at the server inlet point to 71 to 75 degrees Fahrenheit (roughly 22 to 24 degrees Celsius), but use sensors to monitor potential hotspots.
• Develop a dashboard of data center energy-efficiency metrics that provides appropriate data to different levels of IT and financial management.
• Use the SPECpower benchmark to evaluate the relative energy efficiency of servers.
• Improve the use of the existing infrastructure through consolidation and virtualization before building out or buying new/additional data center floor space.
In addition to Gartner's report, a recent survey by CDW illustrates trends related to data center efficiency. CDW surveyed 752 IT pros in U.S. organizations for its 2009 Energy Efficient IT Report, finding that 59% are training employees to shut down equipment when they leave the office, and 46% have implemented or are implementing server virtualization. The recession has helped convince IT organizations of the financial value of power-saving measures, with greater numbers implementing storage virtualization and managing cable placement to keep under-floor cooling chambers open and thus reduce demand on cooling systems. Data center managers are also finding it easier to identify energy-efficient equipment because of the Environmental Protection Agency's new Energy Star program for servers.

CDW found that 43% of IT shops have implemented remote monitoring and management of their data centers, up from 29% the year before. But data centers are still missing many opportunities to save money on energy costs. "Energy reduction efforts are yielding significant results … Still, most are spending millions more on energy than necessary," CDW writes. "If the average organization surveyed were to take full advantage of energy-savings measures, IT professionals estimate they could save $1.5M annually."

iPhone GPS app market heating up

The iPhone GPS app market unleashed by the release of the iPhone 3.0 software update is getting more interesting by the day, with several developers in an arms race to add new features to their initial offerings. Taking the lead in the GPS app race is Navigon MobileNavigator, which recently added support for spoken street names, a major failing in the three apps that I previewed in a Macworld Video last month. My own in-car navigation box doesn't even speak street names (other than numbered freeways), and it sure makes a big difference. Last week, I got to spend a little bit of time with Navigon's Johan-Till Broer, who showed me the next version of MobileNavigator, due as a free App Store update sometime in October.

It adds live traffic to the party, downloading traffic updates over the digital cell network and rerouting you around slow spots. The traffic update also does a better job of estimating the speeds of various roads without live traffic data. The end result should be that MobileNavigator will do a better job of suggesting the fastest route you should take to your destination, based on both current conditions and the time of day you're traveling. Sygic, maker of the Sygic Mobile Maps GPS navigation app, recently updated its app to support spoken street names, as well as catching up with the other apps by integrating the addresses of the contacts in your iPhone's address book. I've found Sygic Mobile Maps to be a solid app, although it feels more like a port of a standalone GPS device than a native iPhone app. However, you can't beat the price: Sygic is trying to drive sales of its updated app by reducing the price (temporarily, at least) to $40 for an app containing only United States maps and $60 for the app containing maps of all of North America.

While Navigon and Sygic are not familiar names to most Americans, TomTom is a strong brand and its iPhone app has sparked a lot of interest, although the iTunes charts would suggest that it may have fallen behind Navigon in terms of sales. TomTom's promised car kit for the iPhone, which will offer a mount, a speaker and improved GPS reception, has yet to arrive here in the States. (Our friends at Macworld UK are reporting that the car kit is available for order on that side of the Atlantic, with shipping times listed as "two to three weeks.") As for the TomTom app, the company promises "several updates by the end of 2009" but hasn't given details. Presumably spoken street names and live traffic are high on the agenda. Look for a comprehensive comparison of iPhone GPS apps from Macworld in the near future. Reviewing these apps is hard, requiring a lot of driving (and a dedicated driver so the reviewer doesn't cause an accident!), and the features of the apps keep updating at a rapid pace.

In the meantime, check out my video above if you'd like to see the apps in action. From my perspective, right now Navigon MobileNavigator is the best choice available, but this game is far from over.

Dell-Perot Deal Spells Trouble for Tier-Two Outsourcers

The consolidating IT services market contracted a bit further on Monday with Dell's announcement that it will acquire Perot Systems for $3.9 billion. The fact that Dell paid nearly a 70 percent premium on Perot's stock price to seal the deal confirms that "the value of integrating hardware and services for infrastructure management is clearly gaining momentum," says Peter Bendor-Samuel, CEO of outsourcing consultancy Everest Group, which counts both Perot and Dell among its clients. It also suggests, he adds, that the size of outsourcing/hardware companies will continue to increase in importance. The Texas twosome can hardly match the scale of HP or IBM on the outsourcing front (Perot brings just $2.7 billion in services revenue to the table), but the matchup is clearly made in their image.

But Dell, struggling as a hardware manufacturer at a time when infrastructure sales are slow, wants in on the outsourcing business, even if it takes several acquisitions to do it. "Perot's capabilities are focused on a few geographies and industries, which Dell will need to grow or complement with other acquisitions to attain greater scale to compete head-on with the likes of HP and IBM," says Bendor-Samuel. Neither company is likely to be too worried about the competition at this point: while Perot operates in some high-interest industries, most notably healthcare and government services, its footprint remains relatively small. It's more likely that Dell-Perot will make inroads on smaller deals. "Dell and Perot Systems can exert pressure in this sector, and if played right, could see their market share increase in the midmarket in both products and services," says Stan Lepeak, managing director at outsourcing consultancy EquaTerra. As such, it's the tier-two players that will be watching the Dell-Perot deal closely. India-based providers who've been attempting to ramp up their infrastructure offerings "must continue to find ways to grow and reach meaningful scale," says Bendor-Samuel. Meanwhile, traditional IT services players who have yet to walk down the aisle with a hardware vendor, such as ACS, CSC and Unisys, may be wondering how wise it was to stay single. "They will be asking themselves how they can grow in the infrastructure space to meet the increased threat posed by the integrated hardware and services offerings of IBM, HP, and now Dell," Bendor-Samuel says.

While Dell may be eager to keep Perot clients (and their relatively healthy profit margins), existing customers should proceed with caution. Specifically, clients should assess any impact the deal has on non-Dell hardware options, Lepeak advises. As for integration issues, Dell and Perot may have an easier go of it than most. "Good cultural alignment, close physical proximity for key leaders, and the absence of an entrenched services business at Dell, together with the obvious convergence around the value of Perot as a hardware channel for Dell and Dell as a lead generator for Perot, should make integration much faster and less painful than is the norm for deals of this scale," says Mark Robinson, EquaTerra's chief operating officer. Those most worried about the Perot deal may be Dell hardware customers working with other outsourcers. "While growing the legacy Perot Systems' client base, Dell must use caution not to alienate hardware clients who are using other service providers for outsourcing services," says Lepeak.

You've got questions, Aardvark Mobile has answers

Aardvark has taken a different tack with search. The online service figures it's sometimes more productive to ask a question of an actual person, usually someone from within your social network, rather than brave the vagaries of a search engine and its sometimes irrelevant answers. Now the people behind Aardvark are bringing that same approach to the iPhone and iPod touch. Aardvark Mobile actually arrived in the App Store nearly a week ago, but developer Vark.com waited until Tuesday to take the wraps off the mobile version of its social question-and-answer service.

Aardvark Mobile tackles the same problem as the Aardvark Web site: dealing with subjective searches, where two people might type in the same keywords but be looking for two completely different things. "Search engines by design struggle with these types of queries," Aardvark CEO Max Ventilla said. What Aardvark does is tap into your social networks and contacts on Facebook, Twitter, Gmail, and elsewhere to track down answers to questions that might otherwise flummox a search engine, things like "Where's a good place to eat in this neighborhood?" or "Where should I stay when I visit London?" With Aardvark's Web service, you send a message through your IM client to Aardvark; the service then figures out who in your network (and in their extended network) might be able to answer the question and asks them on your behalf. Ventilla says that 90 percent of the questions asked via Aardvark get answered, and the majority are answered in less than five minutes. The iPhone version of Aardvark works much the same way.

Instead of an IM, you type a message directly into the app, tag it with the appropriate categories, and send it off to Aardvark. The service pings people for an answer and sends you a push notification when there's a reply. In previewing the app, I asked a question about affordable hotels in Central London; two responses came back from other Aardvark users within about three minutes. If you shake your mobile device when you're on the Answer tab, Aardvark Mobile looks up any unanswered questions that you may be able to answer (while also producing a very alarming aardvark-like noise). "We think Aardvark is particularly well-suited to mobile, and especially the iPhone given how rich that platform is to develop for," Ventilla said. In addition to push notifications, Aardvark Mobile taps into the iPhone's built-in location features to automatically detect where you are, which helps when you're asking about local hotspots. You don't have to already be using Aardvark's online service to take advantage of the mobile app.
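
For readers who like to see the idea in code, here is a deliberately toy Python sketch of the kind of interest-based matching described above. It is purely illustrative and entirely hypothetical: the names, data and scoring rule are mine, not Aardvark's actual routing algorithm.

# Purely hypothetical sketch of interest-based question routing; the names,
# data and scoring rule are invented and are not Aardvark's actual algorithm.
from dataclasses import dataclass, field

@dataclass
class Contact:
    name: str
    interests: set = field(default_factory=set)

def pick_answerers(question_tags, contacts, limit=3):
    """Rank contacts by how many of the question's tags match their interests."""
    scored = [(len(question_tags & c.interests), c) for c in contacts]
    ranked = sorted(scored, key=lambda pair: pair[0], reverse=True)
    return [contact for score, contact in ranked if score][:limit]

if __name__ == "__main__":
    friends = [
        Contact("Ana", {"london", "travel"}),
        Contact("Raj", {"python", "networking"}),
        Contact("Mia", {"hotels", "london"}),
    ]
    question = {"london", "hotels"}
    for person in pick_answerers(question, friends):
        print(f"Asking {person.name} about: {', '.join(sorted(question))}")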

Aardvark Mobile requires the iPhone OS 3.0. The free Aardvark Mobile app lets you set up a profile on your iPhone or iPod touch; Facebook Connect integration helps you instantly build up a network of friends who are also using the service.

IPv6: Not a Security Panacea

With only 10% of reserved IPv4 blocks remaining, the time to migrate to IPv6 will soon be upon us, yet the majority of stakeholders have yet to grasp the true security implications of this next-generation protocol. While IPv6 provides enhancements such as built-in support for encryption, it was never designed to be a wholesale replacement for security at the IP layer. Many have simply deemed it an IP security savior without due consideration for its shortcomings.

The old notion that anything encrypted is secure doesn't hold much ground on today's Internet, considering the pace and sophistication with which encryption is cracked. For example, at the last Black Hat conference, hacker Moxie Marlinspike revealed vulnerabilities that break SSL encryption and allow an attacker to intercept traffic with a null-termination certificate. Unfortunately, IPsec, the IPv6 encryption standard, is still viewed as the answer for all things encryption. But it should be noted that IPsec "support" is mandatory in IPv6 while its usage is optional (see RFC 4301), and that there is a tremendous lack of IPsec traffic in the current IPv4 space due to scalability, interoperability, and transport issues. That pattern will carry into the IPv6 space, and adoption of IPsec will be minimal. IPsec's ability to support multiple encryption algorithms also greatly increases the complexity of deploying it, a fact that is often overlooked.

Many organizations believe that not deploying IPv6 shields them from IPv6 security vulnerabilities. This is far from the truth and a major misconception. For starters, most new operating systems ship with IPv6 enabled by default (a simple TCP/IP configuration check should reveal this), while IPv4-based security appliances and network monitoring tools are not able to inspect or block IPv6-based traffic. The likelihood that rogue IPv6 traffic is running on your network, from the desktop to the core, is increasingly high.
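
As a concrete illustration of that "simple TCP/IP configuration check," the short Python sketch below (my own illustration, using only the standard library) asks whether the local host already answers to any IPv6 addresses:

# Minimal sketch: list IPv6 addresses this host's name resolves to, as a quick
# hint that the IPv6 stack is enabled even if you never deployed IPv6 yourself.
import socket

def local_ipv6_addresses():
    """Return IPv6 addresses associated with this host's name, if any."""
    if not socket.has_ipv6:
        return []
    try:
        infos = socket.getaddrinfo(socket.gethostname(), None, socket.AF_INET6)
    except socket.gaierror:
        return []
    # Each entry is (family, type, proto, canonname, sockaddr); sockaddr[0] is the address.
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    addrs = local_ipv6_addresses()
    if addrs:
        print("IPv6 appears to be enabled; addresses:", ", ".join(addrs))
    else:
        print("No IPv6 addresses found for this host name (the stack may still be enabled).")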

The ability to tunnel IPv6 traffic over an IPv4 network using brokers, without natively migrating to IPv6, is a great feature. However, this same feature allows hackers to set up rogue IPv6 tunnels on non-IPv6-aware networks and carry out malicious attacks at will. By enabling the tunneling feature on the client (e.g., 6to4 on the Mac, Teredo on Windows), you are exposing your network to open, non-authenticated, unencrypted, non-registered and remote worldwide IPv6 gateways. The rate at which users are experimenting with this feature, and consequently exposing their networks to malicious gateways, is alarming, which begs the question: why are so many users routing data across unknown and untrusted IPv6 tunnel brokers? IPv6 tunneling should never be used for any sensitive traffic. Whether it's patient data traversing a healthcare WAN or government connectivity to an IPv6 Internet, tunneling should be avoided at all costs.
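
On Windows clients in particular, the state of the Teredo and 6to4 transition tunnels can be queried from the command line with netsh, and disabled where policy allows. The short Python sketch below merely shells out to those queries and prints what it finds; the exact netsh contexts shown are my assumption for Windows Vista and later, so verify them against your platform's documentation before relying on this.

# Rough helper: ask Windows' netsh about common IPv6 transition tunnels.
# The netsh contexts used here ("interface teredo" / "interface 6to4") are
# assumed to exist on Vista and later; adjust for your Windows version.
import subprocess

QUERIES = {
    "Teredo": ["netsh", "interface", "teredo", "show", "state"],
    "6to4": ["netsh", "interface", "6to4", "show", "state"],
}

def tunnel_report():
    for name, cmd in QUERIES.items():
        try:
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
        except (FileNotFoundError, subprocess.TimeoutExpired) as exc:
            print(f"{name}: could not query ({exc})")
            continue
        print(f"--- {name} ---")
        print(result.stdout.strip() or result.stderr.strip())

if __name__ == "__main__":
    tunnel_report()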

The advanced network discovery feature of IPv6 allows network administrators to select the paths used to route packets. In theory, this is a great enhancement; from a security perspective, however, it becomes a problem. In the event that a local IPv6 network is compromised, the feature allows an attacker to trace and reach remote networks with little to no effort. Is your security-conscious head spinning yet?

So where are the vendors that are supposed to protect us against these types of security flaws? The answer is: not very far along. Like most of the industry, the vendors are still playing catch-up, and since there are no urgent mandates to migrate to IPv6, most are developing interoperability and compliance at the industry's pace. So the question becomes: will the delay in IPv6 adoption give the hacker community a major advantage over industry? Absolutely. As we gradually migrate to IPv6, the lack of interoperability and support at the application and appliance levels will expose loopholes, creating a chaotic and reactive cycle of patching, on-the-go updates and application revamps to combat attacks.

There is more to IPv6 than just larger IP blocks, and the learning curve is extensive. Many fundamental network principles, such as routing, DNS, QoS, multicast and IP addressing, will have to be revisited. Regardless of your expertise in IPv4, treat your migration to IPv6 with the utmost sensitivity. People can't be patched as easily as Windows applications, so staff training should start very early. Reliance on familiar IPv4 security features such as spam control and DoS (denial of service) protection will be minimal in the IPv6 space as the Internet 'learns' and 'adjusts' to the newly allocated IP structure.

It's essential that network security posture be of the utmost priority in the migration to IPv6. Stakeholders should take into account the many security challenges associated with IPv6 before deeming it a cure-all security solution.

Jaghori is the Chief Network & Security Architect at L-3 Communications EITS. He is a Cisco Internetwork Expert, adjunct professor and industry SME in IPv6, ethical hacking, cloud security and Linux. He is presently authoring an IPv6 textbook and is actively involved with next-generation initiatives at the IEEE, IETF, and NIST. Contact him at ciscoworkz@gmail.com.

Report: New net neutrality rule coming next week

Federal Communications Commission chairman Julius Genachowski will propose a new network neutrality rule during a speech at the Brookings Institution on Monday, the Washington Post reports. Broadly speaking, net neutrality is the principle that ISPs should not be allowed to block or degrade Internet traffic from their competitors in order to speed up their own. Anonymous sources have told the Post that Genachowski won't offer too many details about the proposed rule and will likely only propose "an additional guideline for networks to be clear that they can't discriminate, or act as gatekeepers, of Web content." The Post speculates that the rule will essentially be an add-on to the FCC's existing policy statement, which holds that networks must allow users to access any lawful Internet content of their choice, to run any legal Web applications of their choice, and to connect to the network using any device that does not harm the network. Additionally, those principles state that consumers are "entitled to competition among network providers, application and service providers and content providers." The major telcos have uniformly opposed net neutrality, arguing that such government intervention would take away ISPs' incentives to upgrade their networks, thus stalling the widespread deployment of broadband Internet.

The debate over net neutrality has heated up over the past few years, especially after the Associated Press first reported back in 2007 that Comcast was throttling peer-to-peer applications such as BitTorrent during peak hours. Essentially, the AP reported that Comcast had been employing technology that is activated when a user attempts to share a complete file with another user through such P2P applications. As the user uploads the file, Comcast sends a message to both the uploader and the downloader telling them there has been an error within the network and that a new connection must be established. The FCC explicitly prohibited Comcast from engaging in this type of traffic shaping last year. Several consumer rights groups, as well as large Internet companies such as Google and eBay, have led the charge to get Congress to pass laws restricting ISPs from blocking or slowing Internet traffic, so far with little success. Both friends and foes of net neutrality have been waiting anxiously to see how Genachowski would deal with the issue ever since his confirmation as FCC chairman earlier this year.

Net neutrality advocates cheered when Genachowski took over the FCC, as many speculated he would be far more sympathetic to net neutrality than his predecessor, Kevin Martin. Tim Karr, campaign director for the media advocacy group Free Press, said at the time of Genachowski's nomination that the incoming chairman had been instrumental in getting then-candidate Barack Obama to endorse net neutrality during his presidential campaign.

Linux driver chief calls out Microsoft over code submission

After a kick in the pants from the leader of the Linux driver project, Microsoft has resumed work on its historic driver code submission to the Linux kernel and avoided having the code pulled from the open source operating system. The submission was greeted with astonishment in July when Microsoft announced it, in part because the code was released under the GPLv2 license, which Microsoft had criticized in the past. Microsoft's submission comprises 20,000 lines of code that, once added to the Linux kernel, will provide the hooks for any distribution of Linux to run on Windows Server 2008 and its Hyper-V hypervisor technology.

Greg Kroah-Hartman, the Linux driver project lead who accepted the code from Microsoft in July, on Wednesday called out Microsoft on the linux-kernel and driver-devel mailing lists, saying the company was not actively developing its hv drivers. (HV refers to Microsoft Hyper-V.) "Unfortunately the Microsoft developers seem to have disappeared, and no one is answering my emails. So sad...," he wrote, warning that if they do not show back up to claim the driver soon, it will be removed in the 2.6.33 [kernel] release. He also posted the message to his blog. Kroah-Hartman said calling out specific projects on the mailing list is a technique he uses all the time to jump-start those that are falling behind. On Thursday, however, in an interview with Network World, Kroah-Hartman said Microsoft got the message. "They have responded since I posted," he said, and Microsoft is now back at work on the code it pledged to maintain. "This is a normal part of the development process. They are not the only company. It happens with a lot of companies," he said.

In all, Kroah-Hartman specifically mentioned 25 driver projects that were not being actively developed and faced being dropped from the main kernel release 2.6.33, which is due in March. However, the nearly 40 projects Kroah-Hartman detailed in his mailing list posting, including the Microsoft drivers, will all be included in the 2.6.32 main kernel release slated for December. On top of chiding Microsoft for not keeping up with code development, Kroah-Hartman took the company to task for the state of its original code submission, saying the driver project is not a "dumping ground for dead code." The kernel has coding style guidelines, he says, and Microsoft's code did not match them. "That's normal and not a big deal," but the large number of patches did turn out to be quite a bit of work, he noted. "Over 200 patches make up the massive cleanup effort needed to just get this code into a semi-sane kernel coding style (someone owes me a big bottle of rum for that work!)," he wrote.

He said Thursday that Microsoft still has not contributed any patches around the drivers: "They say they are going to contribute, but all they have submitted is changes to update the to-do list." Kroah-Hartman says he has seen this all before and seemed to chalk it up to the ebbs and flows of the development process.

U.S. pledges $1.2 billion for digital health networks

The U.S. government has pledged $1.2 billion to help hospitals and clinicians develop and implement systems for digital health records and information sharing.

In an announcement made yesterday by Vice President Joe Biden and Health and Human Services Secretary Kathleen Sebelius, the government said it was awarding $598 million in grants to "establish approximately 70 Health Information Technology Region Extension Centers" that will advise hospital technicians as they buy and deploy electronic health record systems. The government is also issuing $564 million in grants to support information-sharing technologies within the digital health networks.

Dr. David Blumenthal, the national coordinator for health IT, said that the grants would "begin the process of creating a national, private, secure electronic health information system" to "help doctors and hospitals acquire electronic health records and use them… to improve the health of patients and reduce waste and inefficiency."

The digital health grants are being funded by the economic stimulus package passed by Congress earlier this year.

In addition to funding the digitization of health care records, the stimulus package has also designated $7.2 billion to fund broadband infrastructure investment. Of that money, $4.7 billion has been allotted to the National Telecommunications and Information Administration to award grants for projects that will build out broadband infrastructure in unserved or underserved areas; deliver broadband capabilities for public safety agencies; and stimulate broadband demand through training and education.

The remaining $2.5 billion in broadband stimulus money has been allotted to the U.S. Department of Agriculture (USDA) to make loans to companies building out broadband infrastructure in rural areas.

Citrix eliminates security holes in hypervisor

Citrix's hypervisor has caught up to VMware in providing enterprise-class security features, and is now the second virtualization platform to be certified as "production-ready," according to the Burton Group analyst firm.

Two months ago, the Burton Group said VMware's hypervisor was the only one on the market to meet all 27 features the analyst firm believes are required to run production-class workloads in the enterprise.

Citrix met 85% of the requirements but fell short in features such as security logging and auditing of administrative actions; directory services integration; and role-based access controls.

But with XenServer 5.5, the vendor's latest software release, issued June 16, Citrix has eliminated those shortcomings, Burton Group analyst Chris Wolf writes in a blog post.

"Citrix added several key features for the 5.5 release, including directory service integration, security logging and auditing of administrative actions, and role based access controls (via the Lab Manager interface included in Essentials 5.5 Platinum Edition)," Wolf writes. "Also, Citrix reworked its XenServer support policy to meet our minimum 3 year market support requirement."

The Burton Group's virtualization criteria were drawn up to ensure that hypervisors provide adequate security, management, availability, storage, network, compute, scalability, and performance to enterprise IT shops.

"Having multiple production-ready hypervisors on the market means more choice for the customer, and a greater push for vendors to continue furthering innovation and competitive differentiation," Wolf writes. "Regardless of where your hypervisor loyalties stand, we'll all benefit from the progress of the XenServer platform."

In addition to required production features, Burton Group has two other categories: preferred features that are important but not required, and features that are just optional. VMware is still ahead of its rivals in preferred and optional features, Wolf writes.

In the analyst firm's last report, Microsoft Hyper-V lagged behind its rivals, meeting 78% of enterprise virtualization requirements. The Burton Group is analyzing Microsoft's new Hyper-V Server 2008 R2 and will post an update once the evaluation is complete, Wolf says.

This new version of Hyper-V is still missing some enterprise features but includes live migration and other important tools that bring it closer to production-ready status, Wolf writes in an e-mail.

Instant-on Linux vendors put on a brave face against Chrome

Google Inc. says its coming Linux-based Chrome operating system will "start up and get you onto the web in a few seconds."

If Chrome can fulfill that promise, that could render the cut-down, instant-on Linux platforms offered by a cadre of smaller vendors less compelling, if not obsolete.

Those vendors include DeviceVM Inc. with its Splashtop mini-Linux, BIOS maker Phoenix Technologies Ltd., with its Linux-lite HyperSpace platform, Xandros Inc.'s Presto, and Good OS LLC's Cloud offering.

Makers of instant-on environments claim their offerings can boot in a matter of seconds, compared with the several minutes usually taken by Windows. They also say their platforms start up more reliably than Windows when woken from sleep or hibernate modes.

But early versions let users do little apart from surf the Web. That has changed in recent months. Phoenix added the Office-compatible ThinkFree suite this spring, while DeviceVM says it is close to adding support for streamed enterprise apps.

That has allowed some of these vendors to gain traction. DeviceVM, for instance, says eight out of the 10 largest PC makers are installing Splashtop as a second "pre-boot" environment as an adjunct to the main Windows operating system.

But Chrome's entry "is going to make it a lot harder for them to make a go of it," said independent analyst Jack Gold.

Not so, say these vendors. Mark Lee, CEO of DeviceVM, insists that Chrome OS "validates Splashtop's value proposition" and won't interfere with its growth.

"By the end of 2010, Splashtop will be in the hands of more than 150 million desktop, net-top, notebook, and netbook users," Lee said in an e-mailed statement. "Google's entry into the market should accelerate this trend, and help to make instant-on the de facto computing standard."

Woody Hobbs, CEO of Phoenix, said HyperSpace can run on both ARM and Intel CPUs, which Chrome aims to do. Moreover, HyperSpace can run as a "dual resume" environment side-by-side with Windows or a Linux environment such as Chrome, Hobbs said in a statement, allowing users to quickly switch back and forth between environments. That feature is unique to Phoenix, he said.

With Google unlikely to target Chrome as a secondary quick-boot environment for netbooks primarily running Windows, that leaves a niche for instant-on vendors, said Jeffrey Orr, an analyst with ABI Research.

Gold, meanwhile, said instant-on vendors might be able to compete if they can show much lower battery consumption than Chrome either when on or in sleep mode.

Open-Source Routers Are Becoming an IT Option

Many large IT operations are extensively using open-source technology - in operating systems, applications, development tools and databases. So why not in routers, too?

It's a question Sam Noble, senior network system administrator for the New Mexico Supreme Court's Judicial Information Division, pondered while looking for a way to link the state's courthouses to a new centralized case management system.

Noble wanted an affordable and customizable DSL router but found that ISP-supplied modems lacked the ability to remotely monitor local link status, a key requirement of the courts.

Another alternative, adding ADSL cards to the 2600 series frame-relay routers from Cisco Systems Inc. used at some courthouses, provided key features, but the aging devices lacked the power needed to support firewall performance.

A third option, Juniper Networks Inc.'s NetScreen SSG20 firewall/router with an ADSL option, "lacked many of the features we wanted, like full-featured command lines and unlimited tunnel interfaces," Noble said.

Frustrated, Noble decided to investigate yet another possibility: open-source routers. The technology is emerging but still isn't a favorite among corporate IT managers.

Noble first downloaded open-source router software distributed and supported by Belmont, Calif.-based Vyatta Inc. onto a laptop and ran some preliminary tests. "I was especially interested in whether the administrative interfaces were complete and feature-full," he said.

Impressed by the initial results, Noble created a prototype site in Santa Fe to study the technology's performance, cost-effectiveness and ability to work with other technologies used in the courts. "We needed to bring up a DSL connection for testing and to work out the best configuration without impacting our production network," he said.

The tests convinced Noble that the open-source router could provide what he wanted. He also noted that its VPN concentrator, support for the Border Gateway Protocol, and URL filtering and packet-capture security features "would have been unavailable or very costly to add to Cisco or NetScreen equipment."

In April 2008, Noble began deploying Vyatta router appliances to an average of two sites each month. When the project is completed over the next year or so, the routers - 514 in all - will connect 40 to 50 sites around the state to the centralized case management system.

Potential Problems

Analysts and users note that IT managers exploring the use of open-source routers should be aware of potential support and compatibility issues that could come with any open-source product. "You have to be careful during deployment," said Mark Fabbi, an analyst at Gartner Inc. "It's not ready to take over the world yet, but it certainly is providing an interesting base of discussion."

Trey Johnson, an IT staff member at the University of Florida in Gainesville, said that choosing a noncommercial technology with a limited enterprise-level track record could pose problems for IT managers. "That makes a hard sell for going into a business model with it," Johnson said.

The university uses an open-source router supported by Vyatta. "[The router] actually has a company backing it - you can buy support for it, which makes it more viable," Johnson said.

Others say that community support, an open-source hallmark, can cut two ways in an enterprise setting. Communities don't usually respond as quickly as IT managers would like, and they don't offer inexperienced users one-on-one instruction.

Noble and Johnson are two among a small but growing number of IT managers eschewing proprietary routers in favor of open-source alternatives for a variety of reasons.

Noble, for example, says pain-free customization is the technology's biggest benefit. "The flexibility of having a free software stack built into our routers will let us make a small change - a tweak - or an addition, and be able to continue with minimal impact on long-range plans."

Barry Hassler, president of Hassler Communication Systems Technology Inc., an ISP and network designer in Beavercreek, Ohio, said he uses IProute, a Linux-based open-source routing technology distributed by the Linux Foundation, to provide his company's large users with enterprise-level Internet access at an affordable price. "I'm using standard PC hardware, running Linux, with the routing functionality built in," he says. "What we're doing with these boxes is routing among multiple interfaces, which is fairly standard routing, but beyond that, we're also able to do bandwidth management."
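
To make the "standard routing plus bandwidth management" point concrete, here is a rough Python sketch in the same spirit, not Hassler's actual configuration: it drives the stock Linux ip and tc utilities, the interface names, addresses and rates are invented for illustration, and it must be run as root.

# Rough sketch (not Hassler's actual setup): use the standard Linux "ip" and
# "tc" tools to add a static route and cap bandwidth on one interface.
# Interface names, addresses and rates below are made-up examples; run as root.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def add_route(destination, gateway, dev):
    """Route a destination network via a gateway on a given interface."""
    run(["ip", "route", "replace", destination, "via", gateway, "dev", dev])

def cap_bandwidth(dev, rate="10mbit"):
    """Apply a simple token-bucket shaper so the link can't exceed the given rate."""
    run(["tc", "qdisc", "replace", "dev", dev, "root",
         "tbf", "rate", rate, "burst", "32kbit", "latency", "400ms"])

if __name__ == "__main__":
    add_route("10.20.0.0/16", "192.168.1.1", "eth0")   # hypothetical branch network
    cap_bandwidth("eth1", rate="5mbit")                # hypothetical customer-facing port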

Hassler estimated that a comparable Cisco router would cost more than twice as much as the Linux-based IProute router he chose. "That helps keep [overall] costs low," he says.

IT consulting firm CMIT Solutions of Central Rhode Island has installed open-source DD-WRT firmware in both of its Linksys wireless routers to gain additional capabilities, said Adam Tucker, a network engineer at the firm. "We wanted a robust wireless system that would allow us to manage quality of service for prioritizing voice over IP [and] things like that, as well as to add some of the more advanced filtering and stuff the [old] firmware simply didn't support," he says.

Tucker said the routers have worked flawlessly for well over a year.

Fabbi said he sees significant potential for open-source routers, particularly in the retail and food services industries, where large companies must often link thousands of sites without breaking the budget. "You think of a McDonald's or a Burger King [where] there are tens of thousands of franchisee-type locations but you still want them connected," he said.

In other industries, open-source technology is well suited for server-based routing applications, including virtualization, Fabbi added. He noted that virtualized router applications are limited only by developers' imaginations. "Sometimes it's something as simple as a distributed print server; other times it's video distribution caching."

Ready for the Enterprise?

Matthias Machowinski, an analyst at Infonetics Research in Campbell, Calif., said he believes that open-source routers are now capable of handling enterprise-level workloads. "If you have reasonable requirements - a regular-size office or a normal amount of traffic - then performance-wise, they should be able to handle the traffic load," he said. The only exceptions he cited are large companies that run an extraordinary amount of traffic, such as video content distributors.

Open-source routers are also starting to hold their own on the features front, Machowinski said. "They started out not being as feature-rich as some of the mainstream commercial [products], but open-source router vendors have narrowed that gap," he said.

Open-source routers come in three basic forms: software that transforms a standard PC or server into a combination router and firewall, firmware that can be inserted into an existing router, and appliances that come with open-source routing software preinstalled. In addition to Vyatta, DD-WRT and IProute, open-source routing technologies include Xorp, downloadable at Xorp.org; and pfSense, a free, open-source distribution of the FreeBSD operating system customized for use as a firewall and router.

Despite a steadily rising profile and a growing number of adherents, open-source routers aren't likely to topple the market status quo anytime soon. That's because the open-source field remains microscopic compared with the market share held by the top proprietary vendors, particularly router giant Cisco, which has about 80% of the overall market. But even Cisco has recently begun making overtures in the open-source world.

Edwards is a technology writer in the Phoenix area.