Martin Fink, CTO and Director of HP Labs at Hewlett-Packard, presented a keynote titled "New Style of IT" at the recent ARM TechCon 2013. In the accompanying photo, Fink holds one of the Moonshot-based blade servers in his hands.
Fink first reviewed how IT has progressed from the mainframes of the 1970s and '80s to the clients and servers of the 1990s, the Internet of the 2000s, and the cloud, social, Big Data, and mobile computing since 2010. Along with these changes, many IT companies emerged and then faded from the main stage. Few, I think, would object to this summary.
Along with the current trend, the Internet of Things (IoT) is becoming a reality, with smart connected devices and exploding Web traffic, as shown in the picture below along with a number of interesting current statistics and predictions for 2020.
Fink said HP thinks that today's servers are not equipped for future IT demands from the standpoint of data center construction ($10–$20B), the number of power plants required (8–10 more), and the large amount of power consumed (the equivalent of 2 million homes), as shown in the next picture.
So here comes Moonshot
for software-defined servers with:
80% less space
77% less cost
89% less energy
On October 28, one day before the conference began, HP announced the availability of Moonshot based on Calxeda chips. Since this was an ARM conference, Fink talked only about ARM chips, but HP also has a version with Intel's Atom chip, which was announced on April 8.
Fink then talked about how ARM and open source are enabling technology shifts. He further divided the technology into subcategories, as shown in the next slide. For each subcategory, he compared what it once was ("From") with what it is now ("To"):
Nine levels of hierarchy
Massive, universal main memory
General purpose architecture
Energy/algorithm optimized SoC ecosystems
Open source data management platform
Content access & consumption
Multiple stand-alone devices
Visual/interactive real-time data consumption
Those points are mostly
self-explanatory. Progress in hardware technologies has made most of
the things above possible. Furthermore, IoT deals with a variety of
equipment and devices of many form factors and purposes in different
domains. It is important to customize a solution for a specific
device or piece of equipment in a specific domain. Traditional
transactional/relational databases can no longer handle the massive
amount of data generated and collected in real time (Big Data). It is
time to adopt NoSQL.
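To make the schema problem concrete, here is a minimal pure-Python sketch (the device names and fields are hypothetical) of the document-style, schemaless storage that NoSQL stores offer for heterogeneous IoT readings:

```python
import json

# Heterogeneous readings from different device types: a rigid relational
# schema would need NULL-heavy columns or one table per device type,
# while a document store accepts each record as-is.
readings = [
    {"device": "thermostat-17", "ts": 1385000000, "temp_c": 21.5},
    {"device": "meter-03", "ts": 1385000001, "kwh": 0.42, "voltage": 229.8},
    {"device": "camera-09", "ts": 1385000002, "motion": True, "zone": "lobby"},
]

# Append-only ingestion: each document is serialized with no schema declared up front.
store = [json.dumps(r) for r in readings]

# Query side: filter on a field that only some documents carry.
metered = [r for r in map(json.loads, store) if "kwh" in r]
print(len(store), len(metered))  # 3 1
```

A production system would of course use one of the stores named above rather than a Python list, but the ingestion pattern is the same.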
In summary, HP Moonshot was designed and developed to accommodate the trends just mentioned by employing advanced hardware technology and a great deal of open source software:
Databases, such as Cassandra, Couchbase, HBase, VoltDB, MongoDB, and Hadoop
Tools, such as GCC, Java, and OpenShift
Linux distributions, such as Red Hat, SUSE, and Ubuntu
As a server provider, HP sees IoT as a business opportunity to produce a specific solution for a specific need of a given customer in the IoT ecosystem. It is interesting to see software becoming more important than hardware. As a former software developer, I welcome this trend. Many IT jobs can be done with less energy, and the efficiency of hardware use is increased. The digital society will continue to grow, and more energy may be necessary. But with green IT, a good balance between the convenience and benefits to society and the energy used could be achieved.
ASHRAE is a professional organization, and according to their website:
ASHRAE, founded in
1894, is a building technology society with more than 50,000 members
worldwide. The Society and its members focus on building systems,
energy efficiency, indoor air quality and sustainability within the
industry. Through research, standards writing, publishing and
continuing education, ASHRAE shapes tomorrow's built environment.
It has a big impact on data center operations. At the recent DatacenterDynamics conference in San Francisco, I had a chance to chair a track that included thermal guidelines from ASHRAE. Judging from past attendance at ASHRAE guideline sessions, I was expecting a large turnout, and indeed the two sessions on the subject were packed.
The first presentation was by Don Beaty, ASHRAE Fellow and president of DLA Associates.
One thing Don kept emphasizing during the talk was that what matters for data center cooling is the temperature of the inlet airflow to a server, not the temperature in the room. In the past, CRAC units on the data center floor checked the temperature of the returned air and used it to approximate the temperature of the inlet airflow to a server. Obviously, this usually did not reflect the actual inlet airflow temperature. If cooling is via
a raised floor, the inlet airflow temperature varies widely,
depending upon the proximity of CRAC units. So it is imperative to
measure and monitor the temperature at the inlet of each server/rack.
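A small numeric sketch (the rack names and readings are hypothetical) shows why a CRAC unit's return-air reading can look fine while individual inlets run hot:

```python
# Hypothetical per-rack inlet temperatures (°C) on a raised floor: racks far
# from a CRAC unit run noticeably hotter than the others.
inlet_temps = {"rack-A1": 20.0, "rack-A2": 22.0, "rack-B7": 30.0, "rack-B8": 32.0}

# A CRAC unit sees only the mixed return air, roughly the average of all racks.
return_air = sum(inlet_temps.values()) / len(inlet_temps)

# Which racks exceed a 27 °C inlet target despite the benign-looking average?
hot = sorted(r for r, t in inlet_temps.items() if t > 27.0)
print(return_air, hot)  # 26.0 ['rack-B7', 'rack-B8']
```

The mixed average sits comfortably below the target while two racks are well above it, which is exactly why per-inlet instrumentation matters.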
At some point, cooling via a raised floor may not function well. For
example, a rack that consumes 10 kW may not be cooled effectively
with raised floor cooling. Furthermore, even though it is desirable
to have uniform power consumption and heat dissipation from each
rack, because of ICT equipment configuration requirements and other
constraints, it is not always possible to do so. Don presented a guideline for server inlet temperatures titled "2011 Thermal Guidelines for Data Processing Environments – Expanded Data Center Classes and Usage Guidance," and I extracted the table and a graph from pages 8 and 9 of the document, respectively, for reference. The chart describes temperature and humidity and is used to set a proper range for the combination of the two in a data center. It shows the A1 through A4 ranges, along with the recommended envelope.
Current servers can operate fine (with server vendor warranty) under the A2 guideline shown above. A2 sets the high temperature at 35°C (95°F), but the new guidelines expand the acceptable range to 40°C (104°F) under A3 and 45°C (113°F) under A4. If this widely expanded range were allowed, almost any data center in the world could take advantage of free cooling, such as an airside economizer. Incidentally, Christian Belady of Microsoft has noted that developing a server that tolerates higher temperatures might raise the production cost (and thus the purchase price), but millions of cooling dollars could be saved for several thousand dollars more per piece of this type of IT equipment.
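The allowable highs quoted above fit in a small lookup; the following Python sketch checks which classes a measured inlet temperature satisfies (A2–A4 highs only, as quoted in the talk; the real ASHRAE envelopes also bound humidity and low temperature):

```python
# Allowable inlet-air highs (°C) quoted in the talk; ASHRAE also specifies
# low-temperature and humidity limits that are not modeled here.
ALLOWABLE_HIGH_C = {"A2": 35.0, "A3": 40.0, "A4": 45.0}

def classes_ok(inlet_c):
    """Return the guideline classes whose high limit the inlet temperature stays within."""
    return sorted(c for c, hi in ALLOWABLE_HIGH_C.items() if inlet_c <= hi)

# A 38 °C inlet is out of spec for A2-rated hardware but fine for A3/A4 hardware.
print(classes_ok(38.0))  # ['A3', 'A4']
```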
So what's holding up the production of
servers with A3 and A4 guidelines? Next up were Dr. Tahir Cader,
Technologist, Industry Standard Servers (ISS), and David
Chetham-Strode, Worldwide Product Manager, both of Hewlett-Packard.
They shared very interesting results. Tahir is on the board of directors of The Green Grid, a member of ASHRAE, and a liaison between ASHRAE and The Green Grid.
I do not have their presentations and unfortunately cannot refer to
specific data. They experimented with power consumption at various
geographical locations, using the A1 through A4 guidelines. According
to their findings, you may not need to employ A3 or A4, depending
upon your location. In many cases, there was little or no difference between A3 and A4. Sometimes there were savings between A2 and A3, but that depended upon the geographical location.
When we consider the temperature in a data center, we also need to consider it for humans. Even though the primary purpose of the data center is to host ICT equipment, and the temperature at the server inlet could be raised to 45°C, doing so could also raise the temperature at other locations. Anything above 30°C may not be very comfortable for people to work in for a long time.
It was relatively easy in the past to pick your server using well-defined data, such as CPU speed, number of cores, memory size, disk capacity, and the number and speed of networking ports. Even if you have data centers at locations throughout the world, you may buy only a few server types and get a good discount from particular vendors for all of your data centers. But another factor may be added when you next refresh your servers: the analysis of the inlet airflow temperature to a server vs. power consumption. If you are big and sophisticated like HP, you could run your own analysis to decide which server (supporting A1 through A4) to use. But this analysis seems fairly complex and may not be that easy. As chair, I needed to keep the Q&A session on track, but I had a chance to ask a simple question: can a server vendor like HP provide a service to pick the right type of servers for your geography? The answer was yes. That is good to know.
There will be five tracks, or halls, on different themes. I will chair Hall 2, whose focus is IT. Of course, the discussions will be on IT in conjunction with data center mechanical and electrical systems. Subjects at data center conferences tend to be on the facilities side, and very few are from the IT perspective. So I am very excited about this track.
In Hall 2, several subjects that relate
to IT equipment and IT applications for data center optimization will
be discussed. These are ASHRAE guidelines for telecom and IT
equipment, cooling energy reduction, DCIM, and cloud computing.
Don Beaty of DLA Associates will
discuss two guidelines from ASHRAE: 2nd edition of Datacom Equipment
Power Trends & Cooling Applications and the upcoming 3rd edition
of Thermal Guidelines for Data Processing Environments. This may lead
to chiller-less data centers. Then Tahir Cader of HP will update us
on the new server inlet air temperature guidelines known as A3 and
A4. By raising inlet air temperature, we can cut cooling energy
substantially. Are server vendors ready for the challenge?
John Boggs of Emerson will talk about how to reduce cooling energy at data centers with little or no cost to you. As we know, cooling can consume close to 60% of the power used at data centers. If we can control and reduce it, at little or no cost, that would be great. I cannot wait for this presentation.
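A little arithmetic shows the leverage (a sketch using the roughly 60% figure above; the 25% reduction is an assumed example):

```python
# If cooling draws about 60% of total facility power (the figure cited above),
# then a 25% cut in cooling energy trims total consumption by 15%.
cooling_share = 0.60
cooling_reduction = 0.25
total_savings = cooling_share * cooling_reduction
print(f"{total_savings:.0%}")  # 15%
```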
DCIM covers a broad area of the data
center, ranging from design, simulation, monitoring, and controlling
to retrofitting. The four DCIM sessions will collectively give us
good insights into what DCIM is and what it can do for us. Two of the four sessions do not use the term DCIM in their titles or synopses, though. The first one will be by Dhesikan
Ananchaperumal of CA Technologies. He will talk about the necessity
for good integration of data from both facilities and IT for
optimizing data center operations. Jim Kennedy from RagingWire will
share his experience with making both existing and new data centers
energy efficient with DCIM tools, which led to EPA's Energy Star for
data center certification. Following that, Khaled Nassoura of Raritan
will tell us how to improve data center operations by combining DCIM
with intelligent and automated systems. Finally, Todd Goldman of
Nlyte Software will explain how to apply DCIM tools in steps without
going too fast.
The past few years have brought an increase in cloud computing sessions
at data center conferences like this one. Gina Tomlinson of the City
and County of San Francisco will talk about how she put
mission-critical IT infrastructures into a cloud. Although the
notion of cloud computing is widely known by now, many people are
still hesitant to move their mission-critical infrastructure into
cloud for reasons of security, controllability, and SLA. How did she
cope with the fear and move them to a cloud? It will be very
interesting to hear about her experience.
This conference is filled with many more interesting sessions and speakers. If you are an IT professional, come and join me for these very informative talks. See you in San Francisco!
At Data Center World, Rick Sawyer of HP gave a very good overview of
the Information Technology Infrastructure Library (ITIL) and how it can be used
to formalize data center management.
A data center comprises three elements: IT, facilities, and operations. To classify and measure effectiveness, the facilities side has the Tier system, but there is no equivalent measure for IT or operations. Sawyer's point was to use ITIL (specifically, its capability maturity model) to tie them together and to come up with a unified measure for assessing data centers. It might also be used to train and qualify the people who work at data centers.
The capability maturity model (CMM) makes a useful bridge between IT and facilities for alignment, common language, and benchmarks. CMM rates the formality of management on levels 0 through 5, with 0 being the least formal. IT and facilities use different terminologies and definitions for management; the next slide is an attempt to align the two in the same way.
For each area, the following table is used to evaluate each item.
With all these factors consolidated in the following slide, you get a clear and concise picture of how far each element has been formalized.
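As a sketch of how such a consolidation might be computed (the management areas and scores below are hypothetical, not taken from the slide):

```python
# Hypothetical maturity scores (0 = least formal ... 5 = most formal) for a few
# management items, one set per data center element, in the spirit of the slide.
scores = {
    "IT": {"capacity": 3, "availability": 4, "change": 2},
    "Facilities": {"capacity": 2, "availability": 3, "change": 3},
}

# Consolidate: average the item scores per element for a side-by-side comparison.
summary = {area: sum(items.values()) / len(items) for area, items in scores.items()}
print(summary)
```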
You can join the ITIL group or other similar groups on LinkedIn to receive more up-to-date information.
I’ve written about container-based data centers before. Most of the data
center experts I’ve talked to think that container-based data centers are a
little too early to market, although one believes this may change in a few years as cloud computing becomes mainstream. The trouble is that unless you are super big and your smallest building block is twenty racks at a time, the granularity is too coarse.
I talked to an HP rep who was at the Data Center World show this
week. HP calls its container-based solution Performance-Optimized Datacenter
(POD). HP recently introduced a sister version of the original 40-foot version,
which hosts up to 22 racks. The 20-foot version accommodates up to 10 racks,
which is much friendlier to smaller operators. According to the HP rep, the
response to the new smaller version is quite good. The HP POD and other
container-based products come with preconfigured IT equipment and cooling gear. All you need to supply is power and, for a chilled-water system, water.
At the Data Center World show, I attended a session given by Active Power. Active Power puts power in a shipping container and makes it
available as a building block for a power source. I mentioned that a container
full of servers might be overkill for companies that are not super big. But in
the case of power, it makes much more sense, because the power containers can provide different levels of power, ranging from 240 kW to 960 kW,
with an increment of 240 kW. No wonder Active Power cut a deal to partner with HP.
So HP provides IT gear while Active Power provides power. This
arrangement is somewhat similar to Dell’s double-decker container solution. Dell
puts a container with IT equipment on the ground and puts a container with power
and cooling on top of it. The idea is to let IT take care of the IT container
and facilities folks maintain the power/cooling container.
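The 240 kW building block makes capacity planning simple arithmetic; a sketch (the function name is mine, and the 960 kW ceiling is the largest configuration the session described):

```python
import math

BLOCK_KW = 240  # Active Power's increment, per the session
MAX_KW = 960    # largest configuration mentioned

def containers_needed(load_kw):
    """Smallest number of 240 kW power containers that covers the load."""
    n = math.ceil(load_kw / BLOCK_KW)
    if n * BLOCK_KW > MAX_KW:
        raise ValueError("load exceeds the largest configuration described")
    return n

print(containers_needed(500))  # 3 (i.e., 720 kW provisioned)
```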
I think the power-in-a-box idea is much easier to accept. I do not know how Active Power is doing in terms of partnerships; it could also talk to other container-based data center players. Whether it is for IT or for facilities, one thing is clear: modularity is here to stay.
Dell (DELL) said the acquisition will allow it to offer a broader range of IT services and a built-in market for its hardware among Perot Systems' existing customers. Perot Systems has a strong footprint in the health care and government sectors, which provide 48 percent and 25 percent of its revenues, respectively, with the remainder in the enterprise sector. Those two sectors are expected to see strong growth due to the Obama administration's economic stimulus plan, which is expected to boost the adoption of electronic health records and upgrades of federal agencies' IT infrastructure.
In a separate Twitter message, Miller says this deal is designed to boost Dell's enterprise and data center businesses. Other tweets and analysts suggest the move is meant to counter the HP-EDS merger, as well as IBM, by building out higher-margin IT services and outsourcing businesses.
This goes along with what I heard through the grapevine—that Dell has been working hard to create a new data center service business. My source tells me that a person at Dell who was promoted to head a new group is traveling all over the country and cannot be easily contacted. Obviously, he is aligned with this movement.
Once upon a time, I had some intensive discussions with Groundwork (which provides a product version of Nagios, an open-source project for network and system management) when I was covering the open-source software segment. I am still interested in open source but from a different perspective: an open-source application for cleantech. Steven Chu, secretary of the U.S. Department of Energy, recently talked about the use of open source for green buildings. Naturally, my interest is still alive.
Going back to EMC's story: the data center management field used to be called network and system management, and the big four (HP, IBM, CA, and BMC) have been the big dogs in it. EMC has decided to enter the field. It is too early to tell whether EMC will become the fifth big dog.
In his article, Kevin Fogarty of CIO.com handily summarized EMC’s new Ionix features:
EMC Ionix for Service Discovery and Mapping is based on a configuration management database (CMDB) system designed to automate change management, troubleshoot applications, and map servers and applications before migrations or virtualization projects.
EMC Ionix for IT Operations Intelligence includes root-cause analysis and impact analysis of errors, as well as mapping of the virtualizations among virtual servers, whose performance can also be examined using the same root-cause tools.
EMC Ionix for Data Center Automation and Compliance is the broadest product set, designed to automate data-center management operations completely enough to make up for staff shortages due to layoffs or hiring freezes. The core function is compliance-management software that can enforce policy-based usage rules across both physical and virtual servers, networks and storage.
EMC Ionix for Service Management focuses on ITIL service management, integrating CMDBs and workflow automation functions to help service desks fix errors with as little human intervention as possible.
I notice one feature is missing from this picture: measuring and monitoring. An extension is being added to HP’s OpenView and IBM’s Tivoli to measure the temperature and power usage of servers and other IT equipment in data centers. Companies like SynapSense, Sentilla, and Sensicast provide measuring and monitoring capabilities, collect and aggregate measured data via mesh networking, and display the data in a central dashboard. With the advent of smart grid, such measurements and power price information will make your data centers even more energy efficient. This looks like a good extension to differentiate with.
To break into a field already crowded with incumbents, a company must have a product with some unique features. I am not sure if the above features are good enough to compete with the big dogs. But who knows? EMC may already have a plan for this. Time will tell.
I did not cover Cisco's entry into the data center blade space. Since there was a lot of press on this, I will not repeat it here. Several observers speculated that IBM's failed attempt to acquire Sun was intended as a shot back at this announcement. Others felt the attempt was aimed at HP.
There are two schools of thought on the recent news that IBM will resell Brocade's Ethernet products by extending their SAN OEM deal. One school says it is a response to Cisco and its new entry into the (blade) server business. The other says it is a move to compete further with HP. In his blog, Bill Marozas of Cisco discussed this observation from Cisco's viewpoint. Even after discounting it because it comes from Cisco, it may still be right. HP has a vision, Adaptive Infrastructure, and comprehensive products and services for data centers:
ProLiant server series
ProCurve networking gear
Data center automation tools (via acquisition of Opsware)
Services (via acquisition of EDS and EYP)
IBM has a vision, Dynamic Infrastructure, similar to that of HP, and needs to counter this. Before the acquisition of Compaq, IBM was about twice as big as HP; after the acquisition, HP was about the same size as IBM. It is clear that IBM and HP will compete further, as they are of similar size in many spaces, including data centers.
Mike Fratto of InformationWeek also thinks this move was to counter HP.
Rich Miller of DatacenterKnowledge.com has a blog on this subject as well; he agrees that this is to compete with HP but also points to other opinions. Jim Duffy of Network World wrote on this as well.
The opinion that this was IBM countering Cisco is found in Bridget Botelho's article in TechTarget.
Recently, there has been a lot of movement in the data center gear market. Virtualization is really changing the way data centers are being built from the IT perspective. As indicated in my previous blog, when a paradigm shifts, a lot will change and new vendors will emerge to take over the leadership position. By the way, how does Dell fit into this picture? And Oracle? I wonder…
Rich Miller of DatacenterKnowledge.com wrote an article on the recent server market.
Due to the economic slowdown, the overall server market did not do well, which is quite understandable. (The following discussion refers to the fourth quarter of 2008 unless otherwise specified.)
However, I found some of the statistics interesting. For example:
Unit shipments were down 12%, and total revenue fell 14% to $13.5B in comparison to a year ago
Server market shares were as follows: IBM (36.3%), HP (29%), Dell (10.6%), and Sun (9.3%)
In spite of the slowdown in the server market, blade servers did well: revenue went up 16.1% while shipments went up 12.1%
Although the market shares vary from time to time, these four vendors dominate the server market; together they hold 85.2% of it. The surge in blade servers is attributed to energy efficiency.
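The combined-share figure is easy to verify from the numbers above:

```python
# Reported Q4 2008 server revenue shares (%), per the figures above.
shares = {"IBM": 36.3, "HP": 29.0, "Dell": 10.6, "Sun": 9.3}

total = round(sum(shares.values()), 1)
print(total)  # 85.2
```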
There are two sides to blade servers. On one hand, the argument for blade servers is energy efficiency, because several blades share PSUs (power supply units) and cooling fans; I blogged on this in my previous posting. On the other hand, there is an argument against them, as I blogged here.
Now which argument is correct? That depends, as I mentioned in my recent blog. There are several debates like this in the data center space, such as DC vs. AC power delivery. Depending upon assumptions and contexts, different conclusions can be derived. It is important to know what we are arguing about, with a clear understanding of those assumptions and contexts.
One of my long-term interests, which is true of our firm AltaTerra in general, is to contribute to these discussions in a way that enables meaningful comparisons, though generally not by using formal standards.
In terms of Green IT, office environments and data centers tend to be discussed separately from each other. A simple and easy linkage between the two is cloud computing.
Another link may be thin clients. Over 10 years ago, there was a lot of discussion of thin clients, or network computers, but despite the early hype they did not catch on. Recently, I had an opportunity to attend a joint seminar by VMware and HP on desktop virtualization and thin clients. It reminded me of my consulting work several years ago for a company marketing thin client products; for many reasons (such as the absence of virtualization and high-speed network connections), the products did not catch on.
Thin clients have been said to be appropriate for some vertical markets, such as warehousing, healthcare, and restaurants. During the seminar, HP's case study was for healthcare.
In spite of the optimism shown by both HP and VMware, Jon Brodkin of NetworkWorld paints a less optimistic picture in his article.
He cited quotes from a Forrester analyst, Gartner, and IDC:
"I see huge interest right now, for many reasons," says Forrester analyst Natalie Lambert. "But the challenge is that desktop virtualization is a very costly endeavor. I don't care what people tell you otherwise, they're wrong."
“Gartner's latest numbers released this month predict that hosted virtual desktop revenue will quadruple this year, going from US$74.1 million worldwide in 2008 to nearly $300 million in 2009.”
"Is [desktop virtualization] going to break out in 2009? I don't see any reason it would," IDC analyst Michael Rose says. "Frankly, the current economic environment is going to be a significant barrier for adoption of virtual desktops in the data center." True ubiquity could take another five years, given current financial problems and the nature of PC refresh cycles, he says.
If this is the case, will it ever catch on? Maybe another angle, from a different perspective, is required to link the office environment and data centers. More on virtual desktops later.