Zen and the Art of Data Center Greening (and Energy Efficiency)
Commentary by Dr. Zen Kishimoto on news, trends, and opportunities in environmentally sustainable data centers and energy efficiency.

 


 


ARM TechCon 2013: HP's Game-Changing Moonshot

Posted By Zen Kishimoto, Monday, November 04, 2013

Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company, presented a keynote titled "New Style of IT" at the recent ARM TechCon 2013.


Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company, holds one of the Moonshot-based blade servers in his hands.

Fink first reviewed how IT has progressed from the mainframe in the 1970s and ‘80s, to the clients and servers of the 1990s, the Internet of the 2000s, and the cloud, social, Big Data, and mobile computing since 2010. Along with these changes, many IT companies emerged and then faded from the main stage. I doubt anyone would object to that characterization.

Along with the current trend, the Internet of Things (IoT) is becoming a reality, with smart connected devices and exploding Web traffic, as shown in the picture below, along with a number of interesting current statistics and predictions for 2020.


Let me reproduce the statistics in the picture.


2013 (in one minute):

  • 98K tweets
  • 23,148 applications downloaded
  • 400,710 ad requests
  • 2K lyrics played on TuneWiki
  • 1.5K pings sent on PingMe
  • 208,333 minutes of Angry Birds played

2020 (predicted):

  • 30 billion devices
  • 40 trillion GB of data
  • 10 million mobile applications
  • World population: 8 billion
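To put the per-minute 2013 figures in perspective, here is a small Python sketch that scales a few of them to per-day volumes. Only the numbers from the slide above are used; the scaling itself is just arithmetic.

```python
# Scale the keynote's per-minute 2013 statistics to per-day figures.
# The per-minute numbers are taken from Fink's slide, reproduced above.
PER_MINUTE = {
    "tweets": 98_000,
    "app downloads": 23_148,
    "ad requests": 400_710,
}

MINUTES_PER_DAY = 60 * 24  # 1,440

per_day = {name: count * MINUTES_PER_DAY for name, count in PER_MINUTE.items()}

for name, count in per_day.items():
    print(f"{name}: {count:,} per day")
# tweets: 141,120,000 per day
# app downloads: 33,333,120 per day
# ad requests: 577,022,400 per day
```

Over 141 million tweets a day, from one per-minute slide figure alone, hints at why HP frames IoT-era traffic as a server design problem.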


Fink said HP thinks that today's servers are not equipped for future IT demands from the standpoint of data center construction ($10–$20B), the number of power plants required (8–10 more), and the large amount of power consumed (2 million homes’ worth), as shown in the next picture.



So here comes Moonshot for software-defined servers with:

  • 80% less space

  • 77% less cost

  • 89% less energy

  • 97% less complexity

On October 28, one day before the conference began, HP announced the availability of Moonshot based on Calxeda chips. Since this was an ARM conference, Fink talked only about ARM chips, but HP also has a version with Intel's Atom chip, announced on April 8.

He showed three versions of Moonshot server blades, shown in the next picture: from Calxeda, Texas Instruments (TI), and Applied Micro. The Calxeda version has been formally announced; the details of the TI and Applied Micro versions have not.



Fink then talked about how ARM and open source are enabling technology shifts. He further divided the technology into subcategories, as shown in the next picture.


For each subcategory, he compared what it once was ("From") with what it is now ("To"):


Category | From | To
Memory/storage | Nine levels of hierarchy | Massive, universal main memory
Compute | General-purpose architecture | Energy/algorithm-optimized SoC ecosystems
Interconnect | Electrical signaling | Integrated photonics
Data management | Transactional/relational databases | Open source data management platform
Content access & consumption | Multiple stand-alone devices | Visual/interactive real-time data consumption


Those points are mostly self-explanatory. Progress in hardware technologies has made most of them possible. Furthermore, IoT deals with equipment and devices of many form factors and purposes across different domains, so it is important to customize a solution for a specific device or piece of equipment in a specific domain. Traditional transactional/relational databases can no longer handle the massive amount of data generated and collected in real time (Big Data). It is time to adopt NoSQL.

In summary, HP Moonshot was designed and developed to accommodate the trends just mentioned by employing advanced hardware technology and many open source solutions, including:

  • Applications, such as Cassandra, Couchbase, HBase, VoltDB, MongoDB, and Hadoop

  • Tools, such as GCC, Java, and OpenShift

  • Linux distributions, such as Red Hat, SUSE, and Ubuntu

As a server provider, HP sees IoT as a business opportunity to produce a specific solution for a specific need of a given customer in the IoT ecosystem. It is interesting to see software becoming more important than hardware. As a former software developer, I welcome this trend. Many IT jobs can be done with less energy, and hardware is used more efficiently. The digital society will continue to grow, and more energy may be necessary. But with green IT, a good balance between energy use and the convenience and benefits to society could be established.

Tags:  ARM TechCon 2013  Hewlett Packard  HP  Moonshot  Open source  Servers 

 

An IT Guy's Take on ASHRAE's Recent Guidelines for Data Center ICT Equipment

Posted By Zen Kishimoto, Thursday, July 19, 2012

ASHRAE is a professional organization; according to its website:

ASHRAE, founded in 1894, is a building technology society with more than 50,000 members worldwide. The Society and its members focus on building systems, energy efficiency, indoor air quality and sustainability within the industry. Through research, standards writing, publishing and continuing education, ASHRAE shapes tomorrow’s built environment today.

It has a big impact on data center operations. At the recent DatacenterDynamics conference in San Francisco, I had a chance to chair a track that included ASHRAE's thermal guidelines. Judging from past attendance at ASHRAE guideline sessions, I was expecting a large turnout, and indeed the two sessions on the subject were packed.

The first presentation was by Don Beaty, ASHRAE Fellow and president of DLA Associates.


Don Beaty

He was the cofounder and first chair of ASHRAE Technical Committee TC 9.9 (Mission Critical Facilities, Technology Spaces and Electronic Equipment). TC 9.9 is very relevant to data center operations because it sets guidelines for operating data centers, covering both facilities and ICT equipment. When I attend a conference session, I usually record it for accuracy and memory’s sake, but that was hard to do as a chair. So I am recalling from memory, and some of the details are a little fuzzy.

One thing Don kept emphasizing during the talk was that what matters for data center cooling is the temperature of the inlet airflow to a server, not the temperature of the room. In the past, CRAC units on the data center floor checked the temperature of the returned air and used it to approximate the inlet airflow temperature at a server. Obviously, that usually did not reflect the actual inlet temperature. With raised-floor cooling, the inlet airflow temperature varies widely depending on proximity to the CRAC units. So it is imperative to measure and monitor the temperature at the inlet of each server or rack.

At some point, cooling via a raised floor may not function well at all; for example, a rack that consumes 10 kW may not be cooled effectively that way. Furthermore, even though it is desirable to have uniform power consumption and heat dissipation from each rack, ICT equipment configuration requirements and other constraints do not always allow it. Don presented a guideline for server inlet temperatures titled 2011 Thermal Environments – Expanded Data Center Classes and Usage Guidance; I extracted the table and graph from pages 8 and 9 of that document, respectively, for reference.
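Don's point about watching the inlet of each rack, rather than the CRAC return air, can be sketched in a few lines of Python. The rack names and readings below are hypothetical; the 35 °C threshold is ASHRAE class A2's allowable upper bound.

```python
# A minimal sketch of a per-rack inlet-temperature check: compare each rack's
# *inlet* reading against the class limit, rather than trusting the CRAC
# return-air temperature. Readings and rack names are hypothetical; 35 C is
# ASHRAE class A2's upper bound.
A2_MAX_INLET_C = 35.0

inlet_readings_c = {        # hypothetical sensor data, one probe per rack
    "rack-01": 24.5,
    "rack-02": 33.1,
    "rack-03": 36.2,        # near a hot spot, far from the CRAC units
}

hot_racks = {rack: t for rack, t in inlet_readings_c.items() if t > A2_MAX_INLET_C}
for rack, t in sorted(hot_racks.items()):
    print(f"{rack}: inlet {t} C exceeds A2 limit of {A2_MAX_INLET_C} C")
# rack-03: inlet 36.2 C exceeds A2 limit of 35.0 C
```

A room-average or return-air number would have hidden rack-03 entirely, which is exactly the failure mode described above.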




A psychrometric chart describes temperature and humidity and is used to set a proper range for the combination of the two in a data center. This chart shows A1 through A4 ranges, along with the recommended envelope.

Current servers can operate fine (with the server vendor's warranty intact) under the A2 guideline shown above, which sets the high temperature at 35°C (95°F). The new guidelines expand the acceptable range to 40°C (104°F) with A3 and 45°C (113°F) with A4. With that widely expanded range, almost any data center in the world could take advantage of free cooling, such as airside economizers. Incidentally, Christian Belady of Microsoft has said that developing a server that tolerates higher temperatures might raise the production cost (and thus the purchase price), but a few thousand dollars more for this type of IT equipment could save millions of cooling dollars.
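As a rough sketch of how the classes might be applied, the snippet below picks the least permissive class that still permits free cooling at a site, given its worst-case inlet air temperature. Only the A2/A3/A4 limits quoted above are used (A1 is omitted), and the 38 °C site temperature is hypothetical.

```python
# Given a site's worst-case inlet air temperature with free cooling (e.g., an
# airside economizer), find the least permissive ASHRAE class that covers it.
# Limits are the allowable upper bounds quoted in the post: A2 35 C, A3 40 C,
# A4 45 C. The site temperature is hypothetical.
CLASS_MAX_INLET_C = {"A2": 35.0, "A3": 40.0, "A4": 45.0}

def minimum_class(worst_case_inlet_c):
    """Return the first class whose limit covers the site, or None."""
    for cls, limit in sorted(CLASS_MAX_INLET_C.items(), key=lambda kv: kv[1]):
        if worst_case_inlet_c <= limit:
            return cls
    return None

print(minimum_class(38.0))  # a hot site: needs A3-rated servers
# A3
```

This is the buying decision Belady's cost tradeoff hinges on: a site that needs A3 must pay the premium for A3-rated hardware to unlock the free-cooling savings.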

So what's holding up the production of servers with A3 and A4 guidelines? Next up were Dr. Tahir Cader, Distinguished Technologist, Industry Standard Servers (ISS), and David Chetham-Strode, Worldwide Product Manager, both of Hewlett-Packard. They shared very interesting results. Tahir is on the board of directors for The Green Grid, a member of ASHRAE TC 9.9, and a liaison between ASHRAE and The Green Grid.

 


Dr. Tahir Cader

Again, I do not have their presentations and unfortunately cannot refer to specific data. They compared power consumption at various geographical locations under the A1 through A4 guidelines. According to their findings, you may not need to employ A3 or A4, depending upon your location. In many cases there was little or no difference between A3 and A4; sometimes there were some savings between A2 and A3, but that too depended on the location.

When we consider the temperature in a data center, we also need to consider it for humans. Even though the primary purpose of a data center is to host ICT equipment, and the temperature at the server inlet could be raised up to 45°C, doing so could also raise the temperature at other locations. Anything above 30°C may not be comfortable for people to work in for long.

It was relatively easy in the past to pick your servers using well-defined data such as CPU speed, number of cores, memory size, disk capacity, and the number and speed of networking ports. Even with data centers throughout the world, you might buy only a few server types and get a good discount from particular vendors for all of your data centers. But your next server refresh may add another factor: the analysis of server inlet airflow temperature vs. power consumption. If you are big and sophisticated like HP, you could run your own analysis to decide which server (supporting A1 through A4) to use. But this analysis seems fairly complex, and it may not be that easy. As chair I had to keep the Q&A session moving, but I did get to ask one simple question: can a server vendor like HP provide a service to pick the right type of servers for your geography? The answer was yes. That is good to know.

Tags:  A1  A2  A3  A4  ASHRAE  Datacenter  DatacenterDynamics  DCD SF  DLA Associates  Don Beaty  HP  psychrometric chart  San Francisco  Tahir Cader 

 

Come and Meet Me and Other Speakers at DatacenterDynamics’ San Francisco Conference

Posted By Zen Kishimoto, Sunday, July 15, 2012

This is short notice, but I would like to encourage people to come and join me at the DatacenterDynamics conference on July 17 in San Francisco.

There will be five tracks, or halls, on different themes. I will chair Hall 2, whose focus is IT. Of course, the discussions will be on IT in conjunction with data center mechanical and electrical systems. Subjects at data center conferences tend to come from the facilities side, and very few take the IT perspective. So I am very excited about this track.

In Hall 2, several subjects that relate to IT equipment and IT applications for data center optimization will be discussed. These are ASHRAE guidelines for telecom and IT equipment, cooling energy reduction, DCIM, and cloud computing.

Don Beaty of DLA Associates will discuss two ASHRAE guidelines: the 2nd edition of Datacom Equipment Power Trends & Cooling Applications and the upcoming 3rd edition of Thermal Guidelines for Data Processing Environments. These may lead to chiller-less data centers. Then Tahir Cader of HP will update us on the new server inlet air temperature guidelines known as A3 and A4. By raising inlet air temperature, we can cut cooling energy substantially. Are server vendors ready for the challenge?

John Boggs of Emerson will talk about how to reduce cooling energy at data centers with little or no cost to you. As we know, cooling can account for close to 60% of the power consumed at a data center. If we can control and reduce it at little or no cost, that would be great. I cannot wait for this presentation.

DCIM covers a broad area of the data center, ranging from design, simulation, monitoring, and control to retrofitting. The four DCIM sessions will collectively give us good insight into what DCIM is and what it can do for us. Two of the four sessions do not use the term DCIM in their titles or synopses, though. The first will be by Dhesikan Ananchaperumal of CA Technologies, who will talk about the necessity of integrating data from both facilities and IT to optimize data center operations. Jim Kennedy of RagingWire will share his experience making both existing and new data centers energy efficient with DCIM tools, which led to EPA Energy Star for data centers certification. Following that, Khaled Nassoura of Raritan will tell us how to improve data center operations by combining DCIM with intelligent, automated systems. Finally, Todd Goldman of Nlyte Software will explain how to apply DCIM tools in steps without going too fast.

The past few years have brought an increase in cloud computing sessions at data center conferences like this one. Gina Tomlinson of the City and County of San Francisco will talk about how she put mission-critical IT infrastructure into a cloud. Although the notion of cloud computing is widely known by now, many people are still hesitant to move their mission-critical infrastructure into a cloud for reasons of security, controllability, and SLAs. How did she cope with those fears and make the move? It will be very interesting to hear about her experience.

Overall, this conference is filled with many more interesting sessions and speakers. But if you are an IT professional, come and join me for those very informative talks. See you in San Francisco!

Tags:  ASHRAE  CA Technologies  Cloud  DatacenterDynamics  DCD  DCIM  Emerson  Facilities  HP  IT  Nlyte  RagingWire  Raritan  San Francisco 

 

Emergence of the Information Technology Infrastructure Library

Posted By Zen Kishimoto, Monday, March 22, 2010

At Data Center World, Rick Sawyer of HP gave a very good overview of the Information Technology Infrastructure Library (ITIL) and how it can be used to formalize data center management.


Rick Sawyer

A data center comprises three elements: IT, facilities, and operations. To classify and measure effectiveness, the facilities side has the Tier system, but there is no equivalent measure for IT or operations. Sawyer’s point was to use ITIL (specifically, its capability maturity model) to tie them together and come up with a unified measure for assessing data centers. It might also be used to train and qualify people who work at data centers.

The capability maturity model (CMM) makes a useful bridge between IT and facilities for alignment, a common language, and benchmarks. CMM grades the formality of management on a scale from 0 (least formal) to 5.


IT and facilities use different terminologies and definitions for management. The next slide is an attempt to align both in the same way.


For each area, the following table is used to evaluate each item.


With all these factors consolidated in the following slide, you get a concise, clear picture of how formalized each element is.
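The consolidation idea can be illustrated with a tiny Python sketch: score each of the three data center elements on the CMM formality scale and report the weakest link. The scores here are hypothetical, not taken from Sawyer's slides.

```python
# Score each data center element on the CMM formality scale
# (0 = least formal, 5 = most formal) and report the weakest link,
# since overall maturity is bounded by the least formalized element.
# The scores below are hypothetical.
cmm_scores = {"IT": 3, "facilities": 4, "operations": 2}

weakest = min(cmm_scores, key=cmm_scores.get)
print(f"Overall maturity is limited by {weakest} (level {cmm_scores[weakest]})")
# Overall maturity is limited by operations (level 2)
```

Reading the consolidated slide this way makes the unified measure actionable: it points at which of the three elements to formalize next.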


You can join the ITIL Group or one of the other ITIL-related groups on LinkedIn to receive more up-to-date information.

Tags:  EYP  Facilities  HP  IT  Rick Sawyer 

 

Power in a Box

Posted By Zen Kishimoto, Friday, March 19, 2010

I’ve written about container-based data centers before. Most of the data center experts I’ve talked to think container-based data centers are a little too early to market, although one believes this may change in several years as cloud computing becomes mainstream. The trouble is granularity: unless you are very large and your smallest building block is twenty racks at a time, a container is too coarse a unit.

I talked to an HP rep who was at the Data Center World show this week. HP calls its container-based solution the Performance-Optimized Datacenter (POD). HP recently introduced a smaller sibling of the original 40-foot version, which hosts up to 22 racks; the new 20-foot version accommodates up to 10 racks, which is much friendlier to smaller operators. According to the HP rep, the response to the smaller version has been quite good. The HP POD and other container-based products come with preconfigured IT equipment and cooling gear. All you need to supply is power and chilled water.

At the Data Center World show, I attended a session given by Active Power.



Martin Olsen

Active Power puts power in a shipping container and makes it available as a building block for a power source. I mentioned that a container full of servers might be overkill for companies that are not super big. But in the case of power, it makes much more sense, because Active Power's containers supply power in 240 kW increments, from 240 kW up to 960 kW. No wonder Active Power cut a deal to partner with HP’s POD.
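A quick sizing sketch shows how the building-block approach works in practice, using only the 240 kW increment and 960 kW ceiling mentioned above; the 500 kW target load is hypothetical.

```python
# Size a power deployment out of Active Power's building blocks:
# 240 kW increments, up to 960 kW per container (figures from the post).
import math

BLOCK_KW = 240
MAX_BLOCKS_PER_CONTAINER = 960 // BLOCK_KW  # 4

def blocks_needed(target_kw):
    """Smallest number of 240 kW blocks covering the target load."""
    return math.ceil(target_kw / BLOCK_KW)

blocks = blocks_needed(500)          # e.g., a hypothetical 500 kW deployment
containers = math.ceil(blocks / MAX_BLOCKS_PER_CONTAINER)
print(blocks, containers)
# 3 1
```

The fine 240 kW granularity is exactly why power-in-a-box fits smaller operators better than a twenty-rack IT container does.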


So HP provides IT gear while Active Power provides power. This arrangement is somewhat similar to Dell’s double-decker container solution. Dell puts a container with IT equipment on the ground and puts a container with power and cooling on top of it. The idea is to let IT take care of the IT container and facilities folks maintain the power/cooling container.

I think the power-in-a-box idea is much easier to accept. I do not know how Active Power is doing in terms of partnerships; it could talk to other container-based data center players. Whether it is for IT or facilities, one thing is clear: modularity is here to stay.

Tags:  Active Power  HP  Power in a box  Shipping container 

 

Dell Buys Perot: What Does It Mean to Its Data Center Business?

Posted By Zen Kishimoto, Monday, September 21, 2009

Several media outlets and analysts reported this news.


Taking excerpts from Rich Miller’s blog:

Dell (DELL) said the acquisition will allow it to offer a broader range of IT services, and a built-in market for its hardware among Perot Systems’ existing customers. Perot Systems has a strong footprint in the health care and government, which provide 48 percent and 25 percent of its revenues, with the remainder in the enterprise sector. Those two sectors are expected to see strong growth due to the Obama administration’s economic stimulus plan, which is expected to boost the adoption of electronic health records and upgrades of federal agencies’ IT infrastructure.


In a separate Twitter message, Miller says this deal is designed to boost Dell's enterprise and data center businesses. Other tweets and analysts suggest it is meant to counter the HP/EDS merger, as well as IBM, by building out higher-margin IT services and outsourcing businesses.

This goes along with what I heard through the grapevine—that Dell has been working hard to create a new data center service business. My source tells me that a person at Dell who was promoted to head a new group is traveling all over the country and cannot be easily contacted. Obviously, he is aligned with this movement.

Tags:  Dell  EDS  HP  IBM  Perot 

 

EMC’s Entry into the Data Center Management Field

Posted By Zen Kishimoto, Monday, July 13, 2009
Once upon a time, when I was covering the open-source software segment, I had some intensive discussions with GroundWork (which provides a productized version of Nagios, an open-source project for network and system management). I am still interested in open source, but from a different perspective: open-source applications for cleantech. Steven Chu, secretary of the U.S. Department of Energy, recently talked about the use of open source for green buildings. Naturally, my interest is still alive.

Going back to EMC’s story: the data center management field used to be called network and system management, and the big four (HP, IBM, CA, and BMC) have been the big dogs in it. EMC has decided to enter the field. It is too early to tell whether EMC will become the fifth big dog.

In his article, Kevin Fogarty of CIO.com handily summarized EMC’s new Ionix features:

  1. EMC Ionix for Service Discovery and Mapping is based on a configuration management database (CMDB) system designed to automate change management, troubleshoot applications, and map servers and applications before migrations or virtualization projects.
  2. EMC Ionix for IT Operations Intelligence includes root-cause and impact analysis of errors, as well as mapping of relationships among virtual servers, whose performance can also be examined using the same root-cause tools.
  3. EMC Ionix for Data Center Automation and Compliance is the broadest product set, designed to automate data center management operations completely enough to make up for staff shortages due to layoffs or hiring freezes. The core function is compliance-management software that can enforce policy-based usage rules across both physical and virtual servers, networks, and storage.
  4. EMC Ionix for Service Management focuses on ITIL service management, integrating CMDBs and workflow automation functions to help service desks fix errors with as little human intervention as possible.

I notice one feature missing from this picture: measuring and monitoring. Extensions are being added to HP’s OpenView and IBM’s Tivoli to measure the temperature and power usage of servers and other IT equipment in data centers. Companies like SynapSense, Sentilla, and Sensicast provide measuring and monitoring capabilities, collecting and aggregating measured data via mesh networking and displaying it on a central dashboard. With the advent of the smart grid, such measurements, combined with power price information, will make data centers even more energy efficient. This looks like a good extension to differentiate with.
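As a sketch of what such a measuring-and-monitoring extension could do with smart grid price data, the snippet below combines per-interval power readings with time-of-use prices to estimate energy cost. All readings and prices here are hypothetical.

```python
# Combine per-interval power readings (e.g., aggregated from a mesh of rack
# sensors) with time-of-use utility prices to estimate energy cost.
# All readings and prices are hypothetical.
readings_kw = [120.0, 130.0, 150.0, 140.0]   # average draw per 15-min interval
price_per_kwh = [0.10, 0.10, 0.20, 0.20]     # utility price for each interval

HOURS_PER_INTERVAL = 0.25  # 15 minutes

cost = sum(kw * HOURS_PER_INTERVAL * p
           for kw, p in zip(readings_kw, price_per_kwh))
print(f"estimated cost: ${cost:.2f}")
# estimated cost: $20.75
```

Seeing cost rather than raw kilowatts is what would let operators shift flexible load away from the expensive intervals, which is the efficiency angle the smart grid adds.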

To break into a field already crowded with incumbents, a company must have a product with some unique features. I am not sure if the above features are good enough to compete with the big dogs. But who knows? EMC may already have a plan for this. Time will tell.

Tags:  BMC  CA  Data center management tool  HP  IBM 

 

Cisco, IBM, and HP

Posted By Zen Kishimoto, Tuesday, May 05, 2009
I did not cover Cisco’s entry into the data center blade space; since there was a lot of press on it, I will not repeat it here. Several observers speculated that IBM’s failed attempt to acquire Sun was intended as a shot back at this announcement. Others felt the attempt was aimed at HP.

There are two schools of thought on the recent news that IBM will resell Brocade’s Ethernet products by extending their SAN OEM deal. One school says it is to deal with Cisco and its new entry into the (blade) server business. The other says it is to further compete with HP. Bill Marozas of Cisco discussed this observation from Cisco’s viewpoint in his blog. Even after discounting it because it comes from Cisco, it may still be right. HP has a vision, Adaptive Infrastructure, and comprehensive products and services for data centers:
  • ProLiant server series
  • ProCurve networking gear
  • Storage gear
  • Data center automation tools (via the acquisition of Opsware)
  • Services (via the acquisitions of EDS and EYP)

IBM has a vision, Dynamic Infrastructure, similar to HP’s, and needs to counter it. Before the acquisition of Compaq, IBM was about twice as big as HP; after the acquisition, HP was about the same size as IBM. It is clear that IBM and HP will compete further, as they are of similar size in many spaces, including data centers.

Mike Fratto of InformationWeek also thinks this move was to counter HP.

Rich Miller of DatacenterKnowledge.com has a blog post on this subject as well; he agrees that the move is to compete with HP, but he also points to other opinions reported in an article by Jim Duffy of Network World.

One opinion that this was IBM countering Cisco can be found in Bridget Botelho’s article in TechTarget.

Recently, there has been a lot of movement in the data center gear market. Virtualization is really changing the way data centers are built from the IT perspective. As I indicated in my previous blog, when a paradigm shifts, a lot will change and new vendors will emerge to take over the leadership position. By the way, how does Dell fit into this picture? And Oracle? I wonder…

Tags:  Brocade  Cisco  HP  IBM  Virtualization 

 

Blade or not Blade

Posted By Zen Kishimoto, Monday, March 09, 2009
Rich Miller of DatacenterKnowledge.com wrote an article on the recent server market.

Due to the economic slowdown, the overall server market did not do well, which is quite understandable. (The figures below are for the fourth quarter of 2008 unless otherwise specified.)

However, I found some of the statistics interesting. For example:
  • Unit shipments were down 12% and total revenue fell 14% to $13.5B compared to a year ago
  • Server market share was as follows: IBM (36.3%), HP (29%), Dell (10.6%), and Sun (9.3%)
  • In spite of the slowdown in the server market, blade servers are doing well: revenue went up 16.1% while shipments went up 12.1%

Although market share varies from quarter to quarter, these four vendors dominate the server market; together they hold 85.2% of it. Blade servers’ surge is attributed to energy efficiency.
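As a quick sanity check in Python, the four vendors' shares do indeed sum to the 85.2% combined figure quoted above:

```python
# Verify that the four vendors' individual shares sum to the quoted 85.2%.
shares = {"IBM": 36.3, "HP": 29.0, "Dell": 10.6, "Sun": 9.3}
combined = round(sum(shares.values()), 1)
print(combined)
# 85.2
```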

There are two sides to the blade server argument. On one hand, the case for blade servers is energy efficiency, because several blades share PSUs (power supply units) and cooling fans; I blogged on this in a previous posting. On the other hand, there is an argument against them, as I blogged here.

Now which argument is correct? That depends, as I mentioned in my recent blog. There are several arguments like this in the data center space, such as DC vs. AC power delivery. Depending upon assumptions and contexts, different conclusions can be derived. It is important to know what we are arguing about, with a clear understanding of the assumptions and context.

One of my long-term interests, which is true of our firm AltaTerra in general, is to contribute to these discussions in a way that enables meaningful comparisons, even though this is generally not done through formal standards.


Tags:  Blade  Dell  HP  IBM  Market share  Server  Sun 

 

Green IT: Data Center and User Environment

Posted By Zen Kishimoto, Tuesday, March 03, 2009
In terms of green IT, office environments and data centers tend to be discussed separately from each other. A simple and easy link between the two is cloud computing.

Another link may be thin clients. Over 10 years ago, there was a lot of discussion of thin clients, or network computers. Despite the early hype, thin clients did not catch on. Recently, I had an opportunity to attend a joint seminar by VMware and HP on desktop virtualization and thin clients. It reminded me of my consulting work several years ago for a company marketing thin client products. For many reasons (such as the absence of virtualization and of high-speed network connections), the product did not catch on.

Thin clients have been said to be appropriate for certain vertical markets, such as warehousing, healthcare, and restaurants. During the seminar, HP’s case study was for healthcare.

In spite of the optimism shown by both HP and VMware, Jon Brodkin of NetworkWorld paints a less optimistic picture in his article.

He cited quotes from a Forrester analyst, Gartner, and IDC:
"I see huge interest right now, for many reasons," says Forrester analyst Natalie Lambert. "But the challenge is that desktop virtualization is a very costly endeavor. I don't care what people tell you otherwise, they're wrong."

“Gartner's latest numbers released this month predict that hosted virtual desktop revenue will quadruple this year, going from US$74.1 million worldwide in 2008 to nearly $300 million in 2009.”

“Is [desktop virtualization] going to break out in 2009? I don't see any reason it would,” IDC analyst Michael Rose says. “Frankly, the current economic environment is going to be a significant barrier for adoption of virtual desktops in the data center.” True ubiquity could take another five years, given current financial problems and the nature of PC refresh cycles, he says.

If this is the case, will it ever catch on? Maybe another angle, from a different perspective, is required to link the office environment and data centers. More on virtual desktop later.

Tags:  Datacenter  Forrester  Gartner  HP  IDC  Virtual desktop  VMware 

 