NASA’s future: Now the battle begins

NASA is the most accomplished space organization in the world, but its human spaceflight activities are at a tipping point, primarily due to a mismatch of goals and money. That was the conclusion of the Augustine committee's Review of U.S. Human Space Flight Plans report delivered to the White House today. The report's 157 pages of findings will now be debated and, in the end, will dictate the future of NASA and its spaceflight operations.

According to the report, NASA's fundamental conundrum is that within the current structure of the budget, NASA essentially has the resources either to build a major new system or to operate one, but not to do both. Either additional funds need to be made available, or a far more modest program involving little or no exploration needs to be adopted, the report stated. This is the root cause of the gap in capability for launching crew to low-Earth orbit under the current budget, and it will likely be the source of other gaps in the future. The committee seems to say that space exploration is a worthwhile endeavor, but that the way it is accomplished and the way NASA approaches it need to be radically changed.

So what are some of those changes? From the Augustine report, some of the most important include:

• International partnerships: The US can lead a bold new international effort in the human exploration of space. If international partners are actively engaged, including on the "critical path" to success, there could be substantial benefits to foreign relations, and more overall resources could become available to the human spaceflight program.

• Short-term Space Shuttle planning: The remaining Shuttle manifest should be flown in a safe and prudent manner without undue schedule pressure. This manifest will likely extend operation into the second quarter of FY 2011.

• The human-spaceflight gap: Under current conditions, the gap in US ability to launch astronauts into space will stretch to at least seven years. The Committee did not identify any credible approach employing new capabilities that could shorten the gap to less than six years; the only way to significantly close it is to extend the life of the Space Shuttle Program.

• Extending the International Space Station: The return on investment to both the United States and our international partners would be significantly enhanced by an extension of the life of the ISS. A decision not to extend its operation would significantly impair US ability to develop and lead future international spaceflight partnerships.

• Heavy lift: A heavy-lift launch capability to low-Earth orbit, combined with the ability to inject heavy payloads away from the Earth, is beneficial to exploration. It will also be useful to the national security space and scientific communities. The Committee reviewed the Ares family of launchers, Shuttle-derived vehicles, and launchers derived from the Evolved Expendable Launch Vehicle family; each approach has advantages and disadvantages, trading capability, life-cycle costs, maturity, operational complexity and the "way of doing business" within the program and NASA.

• Commercial launch of crew to low-Earth orbit: Commercial services to deliver crew to low-Earth orbit are within reach. A new competition with adequate incentives to perform this service should be open to all US aerospace companies. While this presents some risk, it could provide an earlier capability at lower initial and life-cycle costs than the government could achieve, and it would let NASA focus on more challenging roles, including human exploration beyond low-Earth orbit based on continued development of the current or a modified Orion spacecraft.

• Technology development for exploration and commercial space: Investment in a well-designed and adequately funded space technology program is critical to enable progress in exploration. Exploration strategies can proceed more readily and economically if the requisite technology has been developed in advance. This investment will also benefit robotic exploration, the US commercial space industry, the academic community and other US government users.

• Pathways to Mars: Mars is the ultimate destination for human exploration of the inner solar system, but it is not the best first destination. If humans are ever to live for long periods on another planetary surface, it is likely to be on Mars; but Mars is not an easy place to visit with existing technology and without a substantial investment of resources. The options here include: Mars First, with a Mars landing, perhaps after a brief test of equipment and procedures on the Moon; Moon First, with lunar surface exploration focused on developing the capability to explore Mars; and a Flexible Path to inner solar system locations, such as lunar orbit, Lagrange points, near-Earth objects and the moons of Mars, followed by exploration of the lunar surface and/or the Martian surface.

The report comes at a time when NASA is about to test one of the largest and most complicated parts of its future rocket, the Ares I-X. The launch vehicle test is slated for Oct. 27 and will provide NASA with an early opportunity to test and prove flight characteristics, hardware, facilities and ground operations associated with the Ares I. Ares has had significant technical and design challenges, according to experts: it has had a weight problem, and NASA needs to eliminate vibrations during launch, among other issues. NASA estimates that Ares I and its Orion system represent up to $49 billion of the more than $97 billion estimated to be spent on the overall Constellation program through 2020.

Augustine said of Constellation: The estimated cost of the Ares I launch vehicle development increased as NASA determined that the original plan to use the Space Shuttle main engines on the Ares I upper stage would be too costly. But the replacement engine had less thrust and inferior fuel economy, so the first-stage solid rockets had to be modified to provide more total impulse. This in turn contributed to a vibration phenomenon, the correction of which has yet to be fully demonstrated. This is the nature of complex development programs, with budgets that are far more likely to decrease than increase.

Complicating matters further, insofar as the Constellation Program is concerned, the Committee has concluded that the Shuttle Program will almost inevitably extend into FY 2011 in order to fly the existing manifest, and that there are strong arguments for extending the International Space Station another five years beyond the existing plan. In addition, adequate funds must eventually be provided to safely de-orbit the ISS, funds that were not allotted in the current or original program plans. These actions, if implemented, place demands of another $1.1 billion and $13.7 billion, respectively, on the NASA budget.

Juniper’s splash: big on tech vision, short on specifics

Juniper Networks' wide-ranging announcements on Thursday, billed by the company as the most significant since its founding in 1996, perhaps left more questions than answers after all the products, technologies and partnerships were unveiled. Juniper rolled out a sweeping array of software, silicon and systems enhancements, as well as new and expanded partnerships, intended to take the company and its customers into the next decade of networking. The event was staged on the 40th anniversary of the Internet's birth to signify its importance to Juniper, if not to the industry, and the Cisco rival even unveiled a new corporate logo, a symbol of its readiness to embark on a new decade.

Why the makeover? "It puts a stake in the ground for our vision for the next decade," said Juniper CEO Kevin Johnson at the event, which was hosted by the New York Stock Exchange, Juniper's most recent showcase account. "We're driving to a platform view that's horizontal and open to integration: one platform with unlimited applications." With that, Juniper unveiled its strategy for opening and licensing its JUNOS operating system to developers and partners. It also rolled out a new generation of processors, called Trio, designed to massively scale the edge of the service provider network. In addition, Juniper disclosed Project Falcon, an initiative to develop products for the mobile packet core and subscriber management of 4G networks, as well as "universal edge" applications integrating wireline and wireless networks. It also introduced new MX-series Ethernet edge routers with "3D" scaling of bandwidth, subscribers and services.

This served as an attempt to clarify Juniper's position in this market after losing partner Starent Networks to Cisco, which is buying that company for nearly $3 billion. Lastly, Juniper provided an update on its Stratus cloud computing project, which included three steps to cloud-enable a data center: simplifying the environment through a unified fabric managed as a single switch; sharing resources through virtual partitioning and VPLS; and securing the environment with security policies based on the new JUNOS Space platform and enhancements to Juniper's SRX Services Gateway.

Still, Juniper did not disclose deliverables for the Stratus or Falcon projects, and attendees were clamoring for more meat from the event, which seemed fixated on sweeping technology advances rather than specific solutions for key markets. "There are no details on the data center side," said Zeus Kerravala of the Yankee Group. "How are they going to play in the converged data center? How do they address that aside from the loose IBM, Dell OEM deals? How are they going to improve in the field sales? They need to put some meat on the bones."

One of the omissions from the prepared remarks was a Fibre Channel over Ethernet (FCoE) strategy. FCoE is regarded as the quintessence of a unified data center fabric, yet there was nary a mention of it by Juniper officials. "That's one of the things that's missing," Kerravala said. "They need to talk specifically on how to address that." Andy Ingram, a vice president in Juniper's Fabric and Switching technology group, says an FCoE strategy will be forthcoming from Juniper, combining organic development with partner contributions.

Still, customers may want a more definitive roadmap, analysts say. "The problem is … there are no [Juniper] products today to help the data center," says Cindy Borovick, a data center analyst at IDC. "But customers are making their investments now." Borovick says Juniper's data center strategy right now is targeted at large content sites that deploy network-attached storage (NAS) rather than Fibre Channel. But she adds that the economics of FCoE (its Converged Network Adapters cost twice as much as Fibre Channel Host Bus Adapters, which in turn cost two to four times as much as Ethernet NICs) don't currently make sense. She notes, though, that Juniper's exclusive agreement to license JUNOS to BLADE Network Technologies gives Juniper a blade switch strategy and provides another avenue for JUNOS to be embedded in data centers.
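Borovick's adapter economics are easy to make concrete. In the Python sketch below, only the ratios (a CNA at roughly twice the cost of an FC HBA, and an HBA at two to four times the cost of an Ethernet NIC) come from her comment; the $100 NIC baseline is an assumed figure for illustration.

```python
# Worked example of the adapter economics cited above. Only the
# cost ratios come from the article; the $100 NIC price is assumed.

NIC = 100.0                                     # assumed Ethernet NIC price
HBA_LOW, HBA_HIGH = 2 * NIC, 4 * NIC            # FC HBA: 2-4x a NIC
CNA_LOW, CNA_HIGH = 2 * HBA_LOW, 2 * HBA_HIGH   # CNA: ~2x an HBA

nas_only = NIC                 # NAS shop: storage rides the plain NIC
san_plus_lan = NIC + HBA_LOW   # separate FC SAN and LAN adapters
fcoe = CNA_LOW                 # one converged adapter for both

print(f"NAS over Ethernet: ${nas_only:.0f} per server")
print(f"FC SAN + LAN:      ${san_plus_lan:.0f} and up per server")
print(f"FCoE converged:    ${fcoe:.0f}-${CNA_HIGH:.0f} per server")
```

Under those assumptions, a converged adapter can cost as much as or more than keeping separate adapters, and far more than the lone NIC a NAS shop needs, which is the mismatch Borovick is pointing at.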

Juniper's broad brush stroke may be intended to avoid the perception that it is responding to trendy new markets with point products. "They don't want to be perceived as going down rabbit holes," says Ron Westfall, research director at Current Analysis. "But one item not addressed is that Cisco outsells them despite the technological differentiation."

At least one high-profile customer doesn't seem too worried about the specific gaps still to fill in Juniper's strategic direction. "It's clear they aim to be a leading provider of network solutions, like we are [a leader] in our industry," says Duncan Niederauer, CEO of NYSE Euronext. "This is about our business models converging; our partnership is just beginning. Juniper was the right company to work with."

Intel looks to save $250M by consolidating data centers

Intel is maintaining a four-year refresh cycle for servers in its data centers as it looks to save close to US$250 million in data center costs over an eight-year period, a company executive said on Tuesday. Intel hopes to save the $250 million between 2007 and 2015 by cutting costs associated with data centers, including cooling, system maintenance and support. The company had 147 data centers at its peak, a number now reduced to around 70. It has already cut the number of data centers by half and is looking to further consolidate servers, said Diane Bryant, Intel's chief information officer, at an event on Tuesday. The four-year refresh cycle, which started in 2007, is already helping the company reduce such expenditure, Bryant said.

Intel decided a four-year refresh cycle for servers would be optimal, as older servers eat up financial resources and eventually cost more to maintain than to replace. The company saved $45 million in data center costs in 2008, but there has been a lot more scrutiny of IT expenditure this year, Bryant said. Intel hopes to cut data center costs by implementing faster chips, consolidating servers and putting more applications in virtualized environments, Bryant said. Intel has consolidated servers by replacing 10 single-core Xeon chips with one Nehalem-based quad-core Xeon chip. That has helped reduce the hardware in data centers while increasing overall server performance, and has cut hardware acquisition costs and related overhead costs per server, like energy and maintenance.
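The 10-to-1 consolidation Bryant describes lends itself to back-of-the-envelope modeling. The Python sketch below works through one version; the fleet size, wattages and electricity price are invented assumptions, with only the 10:1 ratio taken from the article.

```python
# Back-of-the-envelope consolidation model -- a minimal sketch.
# All figures are illustrative assumptions, not Intel's actual numbers.

OLD_SERVERS = 10_000         # assumed fleet of aging single-core Xeon boxes
CONSOLIDATION_RATIO = 10     # article: ~10 old chips -> 1 quad-core Nehalem
OLD_WATTS, NEW_WATTS = 400, 450   # assumed average draw per server
KWH_PRICE = 0.10             # assumed $/kWh, including cooling overhead
HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(servers: int, watts: float) -> float:
    """Yearly electricity cost for a fleet of identical servers."""
    return servers * watts / 1000 * HOURS_PER_YEAR * KWH_PRICE

new_servers = OLD_SERVERS // CONSOLIDATION_RATIO
before = annual_energy_cost(OLD_SERVERS, OLD_WATTS)
after = annual_energy_cost(new_servers, NEW_WATTS)
print(f"servers: {OLD_SERVERS} -> {new_servers}")
print(f"energy:  ${before:,.0f}/yr -> ${after:,.0f}/yr "
      f"({1 - after / before:.0%} saved)")
```

Even with the newer box drawing somewhat more power per unit, cutting the server count tenfold dominates the energy line, before counting maintenance or floor space.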

Implementing more power-efficient servers has helped reduce energy costs, but Intel has struggled to pin down what counts as an "efficient data center." A big chunk of data center expenditure involves cooling servers, Bryant said, and cooling costs relate to the power efficiency of the servers themselves, a metric that has been hard to calculate. Intel is working with U.S. government agencies like the U.S. Environmental Protection Agency to measure power efficiency in different server states, from idle to maximum usage, Bryant said. The EPA issued Energy Star ratings for servers in May, with the main criteria being the efficiency of a server's power supply and the power consumed at idle. The company is also using technologies to squeeze maximum performance out of its servers by maintaining high utilization rates.
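The Energy Star criteria boil down to a measurable ratio: how much of the AC power a server draws emerges as usable DC power, sampled at each load state from idle up. Here is a minimal sketch of that calculation, using invented wattages for a hypothetical server:

```python
# Power-supply efficiency = DC watts out / AC watts in, sampled per
# load state. All wattage samples below are invented examples.

SAMPLES = {                 # state: (AC watts in, DC watts out)
    "idle": (180.0, 120.0),
    "50% load": (320.0, 256.0),
    "max load": (480.0, 408.0),
}

for state, (ac_in, dc_out) in SAMPLES.items():
    print(f"{state:>9}: {dc_out / ac_in:.0%} efficient, {ac_in:.0f} W drawn")
```

Note how the hypothetical supply is least efficient at idle, which is why idle power draw is one of the two headline Energy Star criteria.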

Intel has about 100,000 servers in total, of which 80,000 are in its high-performance computing (HPC) environment. The company has 20,000 "office" servers for normal tasks, where it maintains a 65 percent utilization rate for maximum efficiency; in the HPC environment it looks for an 85 percent utilization rate without overloading the servers, Bryant said. Getting applications off dedicated hardware and into virtualized environments is one way Intel manages to attain such high utilization rates, though it wants to make sure it reaches those thresholds without overburdening systems. "Back two or three years ago, when virtualization became the focus, when everybody's data centers were running at 5, 10 or 15 percent utilization, the focus ... was to drive up utilization levels through consolidation and virtualization," Bryant said.
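Those targets translate into a simple capacity-planning ceiling when packing virtualized workloads onto hosts, along the lines of this sketch. The 65 and 85 percent figures come from Bryant; the host capacity and per-VM load are hypothetical.

```python
# Capacity-planning sketch: how many workloads fit on a host while
# staying under a target utilization ceiling. The 65%/85% targets come
# from the article; the workload figures below are made-up examples.

def max_workloads(host_capacity: float, per_vm_load: float,
                  target_utilization: float) -> int:
    """Largest VM count whose combined load stays under the target."""
    budget = host_capacity * target_utilization
    return int(budget // per_vm_load)

# Hypothetical host: 16 cores' worth of capacity; each VM averages 1.3 cores.
office = max_workloads(16.0, 1.3, 0.65)   # office target from the article
hpc = max_workloads(16.0, 1.3, 0.85)      # HPC target from the article
print(f"office host: {office} VMs, HPC host: {hpc} VMs")
```

The gap between the two ceilings is the headroom Bryant mentions: office hosts keep more slack so a load spike doesn't overburden the system.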

Intel is experimenting in numerous other ways to cut energy costs. Last year it tried running a data center with minimal air conditioning, and it is working with academia and companies like Hewlett-Packard and IBM to determine the best techniques for cooling equipment in data centers.

IBM adding data centers, cloud computing lab in Asia

IBM opened a new data center in South Korea on Thursday and said it is building another one in Auckland, New Zealand, to address a surge in demand for cloud computing and IT services in the Asia-Pacific region. The company also announced the opening of a cloud computing lab in Hong Kong. The total investment by IBM in these three facilities is about US$100 million, said James M. Larkin, a spokesman for IBM Global Services. The company, which already has over 400 data centers worldwide, will continue to invest in new data centers that offer cloud computing capabilities, while upgrading existing data centers to support cloud computing, Larkin said.

The data center in Auckland will be in operation by 2010, with IBM investing about US$57 million in it over the next ten years. IBM will locate the data center at Highbrook Business Park in East Tamaki; the 56,000 square-foot facility will include a 16,000 square-foot data center, and the company can add more stages to expand it as demand rises, IBM said. The center will support IBM's clients in New Zealand and neighboring countries in the Asia-Pacific region, Larkin said. IBM is also planning to announce a new data center in Raleigh, North Carolina, by February next year, he added.

The data center in Seoul, which was built using green technology according to the company, will provide IT services including strategic outsourcing, e-business hosting and disaster recovery to more than 20 clients that have entered into outsourcing agreements with IBM. The Cloud Computing Laboratory in Hong Kong is a development and services center focusing on LotusLive messaging development, testing, technical support and services delivery, IBM said. The lab, IBM's tenth cloud computing lab worldwide, builds on the email technology and expertise of Outblaze, a Hong Kong company whose messaging assets were acquired by IBM earlier this year and folded into the Lotus brand of collaboration services. LotusLive is IBM's collection of integrated online collaboration and social networking services for businesses. The lab is part of the IBM China Development Laboratory, which has over 5,000 developers.

Storm8 says phone-number lawsuit lacks merit

On Monday, we covered a pending class-action lawsuit filed against Storm8, developer of numerous popular iPhone games. The suit alleges that Storm8's games used "backdoor" methods to snag players' iPhone numbers. In an official statement on the company's forum, Storm8 attempts to clarify just why the heck it was gathering phone numbers, and just what the heck it was doing with them. The short version: accidentally, and nothing.

The long version goes like this: early in the development process of Storm8's initial games, the company wanted a way to identify specific iPhones connecting to its massively multiplayer games, so it tried using the device's phone number. Eventually, Storm8 "determined it was more suitable to use the device's Unique Device ID instead." But, and here's the big head-scratcher, somehow the old number-sniffing code was left in place anyway.

Elsewhere in the forum thread, Storm8 claims that said code was removed from its apps in August 2009, that the existing database of phone numbers was destroyed, and that phone numbers sent by users who haven't yet upgraded to the latest versions of the games aren't stored.

On the lawsuit itself, Storm8 makes this key claim: "Storm8 will ask the judge to dismiss the lawsuit in its entirety due to the lawsuit's complete lack of merit. To our knowledge, no user has incurred any damage or loss as a result of the matters discussed in the lawsuit. We believe that we have always complied with all of the statutes referred to in the lawsuit and never took an action that harmed or impaired users or your devices in any way."

We'll let the courts decide, of course, but if Storm8's claims are to be believed, perhaps the only thing the company is guilty of is especially lousy code review.
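For the curious, the fix Storm8 describes is a standard pattern: a multiplayer back end needs only a stable, opaque token per device, never a phone number. Here's an illustrative Python sketch of that pattern; the names and structure are hypothetical, not Storm8's actual code, and on 2009-era iPhone OS the UDID served as the token.

```python
# Server-side registry keyed on an opaque device ID. Hypothetical
# illustration of the pattern, not Storm8's actual implementation.
import uuid

class PlayerRegistry:
    """Maps opaque device IDs to per-player game state."""

    def __init__(self):
        self._accounts = {}

    def account_for(self, device_id):
        # The ID need only be stable and unique per device; the server
        # never learns the phone number or anything else personal.
        return self._accounts.setdefault(device_id, {"level": 1, "gold": 0})

# A real client would send its own stable identifier; we simulate one here.
registry = PlayerRegistry()
device_id = uuid.uuid4().hex
print(registry.account_for(device_id))
```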