Zen and the Art of Data Center Greening (and Energy Efficiency)
Commentary of Dr. Zen Kishimoto on news, trends, and opportunities in environmentally sustainable data centers and energy efficiency.

 


An IT Guy's Take on ASHRAE's Recent Guidelines for Data Center ICT Equipment

Posted By Zen Kishimoto, Thursday, July 19, 2012

ASHRAE is a professional organization, and according to its website:

ASHRAE, founded in 1894, is a building technology society with more than 50,000 members worldwide. The Society and its members focus on building systems, energy efficiency, indoor air quality and sustainability within the industry. Through research, standards writing, publishing and continuing education, ASHRAE shapes tomorrow’s built environment today.

It has a big impact on data center operations. At a recent conference put on by DatacenterDynamics in San Francisco, I had a chance to chair a track that included ASHRAE's thermal guidelines. Judging from past attendance at ASHRAE guideline sessions, I was expecting a large turnout, and indeed the two sessions on the subject were packed.

The first presentation was by Don Beaty, ASHRAE Fellow and president of DLA Associates.


Don Beaty

He was the cofounder and first chair of ASHRAE Technical Committee TC 9.9 (Mission Critical Facilities, Technology Spaces and Electronic Equipment). TC 9.9 is very relevant to data center operations because it sets guidelines for operating data centers, covering both facilities and ICT equipment. When I attend a conference session, I usually record it for accuracy and memory's sake, but that was hard to do as the chair, so I am recalling from memory and some of the details are a little fuzzy.

One thing Don kept emphasizing during the talk was that what matters for data center cooling is the temperature of the inlet airflow to a server, not the temperature in the room. In the past, CRAC units on the data center floor checked the temperature of the returned air and used it to approximate the inlet airflow temperature at the servers. Obviously, that usually did not reflect the actual inlet temperature. With raised-floor cooling, the inlet airflow temperature varies widely depending on proximity to the CRAC units, so it is imperative to measure and monitor the temperature at the inlet of each server or rack. At some point, raised-floor cooling may simply stop working well; a rack that consumes 10 kW, for example, may not be cooled effectively that way. Furthermore, even though it is desirable to have uniform power consumption and heat dissipation from each rack, ICT equipment configuration requirements and other constraints do not always make that possible. Don presented a guideline for server inlet temperatures titled 2011 Thermal Guidelines for Data Processing Environments – Expanded Data Center Classes and Usage Guidance, and I extracted the table and a graph from pages 8 and 9 of the document, respectively, for reference.
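
To make the point concrete, here is a minimal monitoring sketch (my own, not from the talk) that checks per-rack inlet readings against the ASHRAE recommended envelope of 18°C to 27°C; the rack names and temperatures are invented for illustration, not taken from any real deployment.

```python
# Minimal sketch: flag racks whose inlet temperature falls outside the
# ASHRAE recommended envelope (18-27 C). Readings are hypothetical.
RECOMMENDED_LOW_C = 18.0
RECOMMENDED_HIGH_C = 27.0

inlet_readings_c = {       # per-rack inlet temperatures, not room or return air
    "rack-01": 24.5,
    "rack-02": 29.0,       # likely a hot spot far from the CRAC units
    "rack-03": 21.0,
}

for rack, temp_c in sorted(inlet_readings_c.items()):
    status = "OK"
    if temp_c < RECOMMENDED_LOW_C or temp_c > RECOMMENDED_HIGH_C:
        status = "outside the recommended envelope"
    print(f"{rack}: {temp_c:.1f} C {status}")
```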




A psychrometric chart describes temperature and humidity and is used to set a proper range for the combination of the two in a data center. This chart shows the A1 through A4 ranges, along with the recommended envelope.

Current servers can operate fine (with the server vendor's warranty intact) within the A2 envelope shown above, which caps the high end at 35°C (95°F). The new guidelines expand the acceptable range to 40°C (104°F) under A3 and 45°C (113°F) under A4. With such a widely expanded range, almost any data center in the world could take advantage of free cooling, such as an airside economizer. Incidentally, Christian Belady of Microsoft has said that developing a server that tolerates higher temperatures might raise the production cost (and thus the purchase price), but millions of cooling dollars could be saved for a few thousand dollars more spent on this type of IT equipment.
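
As a rough illustration of why the expanded classes matter for free cooling, the sketch below (again my own) takes the allowable upper limits mentioned here (35°C for A2, 40°C for A3, 45°C for A4, plus 32°C for A1) and lists which classes would tolerate a site's peak outdoor temperature. The site names and peak values are hypothetical, and the comparison ignores the temperature rise between outdoor air and the server inlet.

```python
# Rough sketch: which ASHRAE classes would still permit direct use of outside
# air at a site's peak temperature? Upper limits per the classes listed above.
ALLOWABLE_MAX_C = {"A1": 32.0, "A2": 35.0, "A3": 40.0, "A4": 45.0}

def classes_allowing_free_cooling(peak_outdoor_c):
    """Return the classes whose allowable upper limit covers the site's peak."""
    return [c for c, limit in ALLOWABLE_MAX_C.items() if peak_outdoor_c <= limit]

# Hypothetical sites and peak outdoor dry-bulb temperatures.
for site, peak_c in {"Coastal city": 33.0, "Desert city": 42.0}.items():
    print(site, classes_allowing_free_cooling(peak_c))
```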

So what is holding up the production of servers rated for A3 and A4? Next up were Dr. Tahir Cader, Distinguished Technologist, Industry Standard Servers (ISS), and David Chetham-Strode, Worldwide Product Manager, both of Hewlett-Packard, and they shared some very interesting results. Tahir is on the board of directors of The Green Grid, a member of ASHRAE TC 9.9, and a liaison between ASHRAE and The Green Grid.

 


Dr. Tahir Cader

Again, I do not have their presentation and unfortunately cannot cite specific data. They analyzed power consumption at various geographical locations under the A1 through A4 guidelines. According to their findings, you may not need to employ A3 or A4 at all, depending on your location. In many cases there was little or no difference between A3 and A4, and while moving from A2 to A3 sometimes yields savings, that too depends on the geography.
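
The flavor of that location analysis can be captured with a small sketch. The hourly temperature series below is synthetic (a crude sinusoidal climate), so the counts only illustrate why A3 versus A4 often makes no difference while A2 versus A3 sometimes does; these are not HP's numbers.

```python
import math

# Synthetic year of hourly outdoor temperatures (illustrative only).
hourly_c = [22 + 14 * math.sin(2 * math.pi * h / 8760) for h in range(8760)]

# Allowable upper limits for the classes compared in the HP talk.
for cls, limit in {"A2": 35.0, "A3": 40.0, "A4": 45.0}.items():
    hours_within = sum(1 for t in hourly_c if t <= limit)
    print(f"{cls}: {hours_within} of 8760 hours at or below {limit} C")
```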

When we consider the temperature in a data center, we also need to consider the people in it. Even though the primary purpose of a data center is to host ICT equipment, and the temperature at the server inlet could be raised to 45°C, doing so could also raise the temperature elsewhere on the floor. Anything above 30°C may not be very comfortable for people to work in for long stretches.

In the past it was relatively easy to pick a server using well-defined data such as CPU speed, number of cores, memory size, disk capacity, and the number and speed of network ports. Even with data centers at locations throughout the world, you might buy only a few server types and get a good discount from particular vendors across all of them. But the next time you refresh your servers, another factor may come into play: analyzing server inlet airflow temperature against power consumption. If you are big and sophisticated like HP, you can run that analysis yourself to decide which server (supporting A1 through A4) to deploy, but the analysis appears fairly complex and may not be easy for everyone. As chair I had to keep the Q&A session moving, but I did get to ask one simple question: can a server vendor like HP provide a service that picks the right type of servers for your geography? The answer was yes. That is good to know.

Tags:  A1  A2  A3  A4  ASHRAE  Datacenter  DatacenterDynamics  DCD SF  DLA Associates  Don Beaty  HP  psychrometric chart  San Francisco  Tahir Cader 

 

The New Data Center Infrastructure Management Segment

Posted By Zen Kishimoto, Wednesday, July 06, 2011

When a new market segment starts to emerge, some analyst firm usually names it. The data center infrastructure management (DCIM, a term coined by Gartner) segment is now emerging in the data center space.

DCIM solutions collect data from both the IT and the facility sides of a data center. I am familiar with companies like Sentilla, Modius, OSIsoft, and SynapSense. Arch Rock was spun off from Cisco and was recently spun back in. Power Assure provides somewhat more sophisticated power management for data centers.

These DCIM companies collect real-time data from actual operations and provide varying degrees of functionality. Some collect data from both IT and facility equipment (servers, for example), aggregate it, and display the result to give an overview of a data center's power usage. Others take data from other sources and provide more sophisticated analysis.
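
The basic collect/aggregate/display pattern these tools share can be sketched in a few lines. The device names and readings below are hypothetical placeholders; real products pull this data from sensors, PDUs, and building management systems rather than a hard-coded list.

```python
from collections import defaultdict

# Hypothetical readings: (side, device, power in kW).
readings_kw = [
    ("it", "rack-01", 8.2),
    ("it", "rack-02", 6.9),
    ("facility", "crac-01", 4.1),
    ("facility", "ups-01", 1.3),
]

# Aggregate by side and display a simple overview of power usage.
totals = defaultdict(float)
for side, _device, kw in readings_kw:
    totals[side] += kw

for side, kw in totals.items():
    print(f"{side:8s} {kw:5.1f} kW")
print(f"total    {sum(totals.values()):5.1f} kW")
```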

Romonet was founded by Zahl Limbuwala (CEO) and Liam Newcombe back in 2006, but they kept it in stealth mode until now. In conjunction with the recent DatacenterDynamics conference in San Francisco, Romonet came out of stealth and launched in the US; it had already launched in the UK late last year. http://www.romonet.com/


Zahl Limbuwala


Liam Newcombe

Wanting a term that accurately describes their segment, they coined data center predictive model (DCPM). Rather than collecting real-time data, their tool predicts a data center's behavior from its configuration and architecture.

They showed the differences between DCIM and DCPM in the following slide.


Romonet’s product is called Prognose. Its function is summarized in the following slide.


The tool can provide "what if" scenarios for many different elements, such as PUE and power consumption. Two screenshots are shown below.


This display shows how PUE might change with different power loads and temperature.


This display shows the power usage information of different IT equipment.

The rationale for a tool like this is the complex interrelationship of the elements in a data center: changing one element may have an adverse effect on others, so it would be nice to know the impact of a change before making it. Prognose can also be used for capacity planning. One of the case studies presented at the launch was from Intel, whose representative said the tool could be used to choose a data center location on the basis of the temperature and humidity conditions in each geographical area of the world.
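
To give a feel for what such "what if" questions look like, here is a toy model (my own invention, not Romonet's) that predicts PUE from IT load and outdoor temperature. The coefficients are made up for illustration, whereas Prognose's models were calibrated against surveyed data centers.

```python
# Toy "what if" model: PUE as a function of IT load and outdoor temperature.
# The fixed overhead and cooling coefficients are invented for illustration.
def predicted_pue(it_load_kw, outdoor_c):
    fixed_kw = 30.0                                      # UPS losses, lighting
    cooling_kw = it_load_kw * (0.15 + 0.01 * max(outdoor_c - 15.0, 0.0))
    return (it_load_kw + cooling_kw + fixed_kw) / it_load_kw

for load_kw in (200.0, 400.0, 800.0):
    for temp_c in (10.0, 25.0, 35.0):
        print(f"IT {load_kw:5.0f} kW, outdoor {temp_c:4.1f} C -> "
              f"PUE {predicted_pue(load_kw, temp_c):.2f}")
```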

The tool is based on modeling algorithms, and its effectiveness depends entirely on how good the modeling is. They surveyed many data centers of various sizes to fine-tune the model. Because I have not used the tool on a real data center, I withhold judgment, but a tool like this would be very handy for a data center that goes through frequent changes, as they typically do.

Another area where I withhold my opinion is their claim to be the "only DCPM in the world." That is because I found Nlyte Software (http://www.nlyte.com/) at the show the next day. Nlyte also provides predictive modeling, along with management and real-time monitoring of data center assets.

It is difficult to claim differentiation simply by monitoring, aggregating, and displaying data from multiple sources in a data center; the real differentiation is in analytics and prediction. As Romonet said, the DCIM segment is crowded, and some consolidation is inevitable. It is not a question of "if" but of "when."

Tags:  Datacenter  datacenter capacity  DCIM  DCPM  Prediction  Romonet 

 

Revision to Panel Proposal 1

Posted By Zen Kishimoto, Friday, June 18, 2010

I made three panel proposals earlier, one of which concerned metering and monitoring. After bouncing the idea around with several people, including a conference planner, I made a few changes to the proposal. This particular conference is very keen on presenting users' perspectives rather than vendors' pitches, so I am revising the proposal accordingly. The revised version follows.

I have talked to more than half a dozen vendors in this space (see my original post for their identities). Although they are all in the same monitoring and metering space, each company has a slightly different angle. Some measure via their own sensors and aggregate the data for display; others do not gather data themselves but exploit data from other sources. So the functions can be roughly classified into three categories: measuring (via sensors), aggregating, and analysis and display.
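
A bare-bones skeleton of those three categories might look like the following; the class names and the stubbed reading are hypothetical and are not any vendor's API.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    source: str          # e.g., a branch-circuit meter or rack-level sensor
    watts: float

class Sensor:            # measuring (via sensor)
    def read(self):
        return Reading("rack-01-pdu", 4200.0)   # stubbed value for illustration

class Aggregator:        # aggregating
    def __init__(self):
        self.readings = []
    def collect(self, reading):
        self.readings.append(reading)

class Dashboard:         # analysis and display
    def show(self, aggregator):
        watts = [r.watts for r in aggregator.readings]
        print(f"{len(watts)} readings, average {mean(watts):.0f} W")

aggregator = Aggregator()
aggregator.collect(Sensor().read())
Dashboard().show(aggregator)
```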

Also, in addition to measuring power consumption, different approaches can be taken to common functions like:

  • alarm management
  • asset management
  • capacity analysis
  • efficiency analysis
  • air control automation

In this panel, I would like the discussion to help a general audience understand the need for metering, learn which technologies are state of the art, and assess the minimum necessary functions of metering.

We would discuss the following:

  • Why do data centers require monitoring and metering?
  • What is a typical architecture for measuring, aggregation, analysis, and display?
  • What are the minimally required functions of metering?
  • What kind of standards should be defined for metering? What type of data should be collected? Granularity of data? Frequency of collection? Data formats (like XML)? Display formats?
  • What extensions, such as e-waste and water, should be considered for existing metering?

An ideal panel would consist of three or four customers of these vendors, one researcher in the space, and a regulatory representative, namely from the EPA. As of this writing, I have received positive responses, but this is just the beginning. I will report how this goes in the coming days.

Tags:  analysis  datacenter  metering  monitoring 

 

Fuel Cells to Power Data Centers?

Posted By Zen Kishimoto, Friday, February 26, 2010

It has become imperative for many data center operators to secure enough power without producing greenhouse gases (see my previous post about data centers' excessive need for power), and fuel cell units are a perfect solution for them. Data Center Journal reports that Google and eBay are interested in Bloom Energy's new server technology, which makes use of fuel cells. That company has been everywhere in the media and blogosphere for the past few days. I found a compact and concise article in PCWorld that helped me quickly grasp what the big deal is.

Why Bloom Energy? According to the article:

"There are probably another 100 companies that are working on something very similar,” Jack Brouwer, associate director of the National Fuel Cell Research Center told the Los Angeles Times. "But the key thing is that Bloom has an integrated system and package ready for commercial sale that puts them ahead of the pack.”

David Coursey, who wrote the article, wondered whether the Bloom Energy Server will become affordable enough (around $3,000) for household use. Bloom Energy thinks it will take about ten years before that happens.

I also cover the Japanese market, and the fuel cell market there is interesting. In the U.S., a single utility often provides both electricity and gas; mine is Pacific Gas and Electric. In Japan, electricity and gas are provided by separate utilities. In the metropolitan Tokyo region, for example, Tokyo Electric Power Co. (TEPCO) provides electricity, while Tokyo Gas provides gas.

There is a war going on between Japanese electric and gas companies. Electric companies like TEPCO are campaigning to have each household use only electricity because it is safer and cleaner. Gas companies like Tokyo Gas are fighting back with fuel cells: they sell a co-generation system based on fuel cell technology. The unit costs about $30,000, but a government subsidy brings that down to $15,000, which is still five times as expensive as what Bloom Energy plans. Its capacity is 1 kW, which is adequate for a home, and the heat generated by the co-gen unit is used to warm the house or the water for showering and washing dishes.

I do not hear much about the use of fuel cells for data centers, but fuel cells are being tested for electric vehicles in Japan, where EVs are very hot.

Tags:  Bloom energy  datacenter  Fuel Cells  TEPCO  Tokyo Gas 

 

Which Is Evil: Data Center or Cloud Computing?

Posted By Zen Kishimoto, Friday, September 18, 2009
There seem to be two different kinds of people: cloud people and data center people. If you understand how cloud computing is implemented, some of the things happening in the marketplace will puzzle you. I am not the only one who is confused by the messages in the market; Rich Miller wrote an interesting post about it.

Basically, the U.S. government is fed up with its expenditure on data centers and wants to outsource them to third parties:

The federal government spent $76 billion on IT last year, and about $20 billion of that was spent on data center infrastructure, (U.S. CIO Vivek) Kundra said. “A lot of these investments are duplicative,” said Kundra. “We’re building data centers the size of city blocks and spending hundreds of millions of dollars. … We cannot continue on this trajectory.” The solution: begin shifting government infrastructure to cloud computing services hosted in third-party data centers, rather than building more government facilities. Kundra notes that the General Services Administration has eight data centers, while Homeland Security has 23 (but not for long, as they’re consolidating to two large facilities).

His post includes a video announcement by the U.S. CIO. Miller mentions:

Here’s a video of (Vivek) Kundra’s announcement Tuesday at the NASA Ames Research Center in Mountain View, Calif. An interesting moment: check out the video that starts at the 19-minute mark, which underscores the “data centers are the enemy” theme. It’s almost like a bad political ad: when the data centers appear, the music turns ominous and the background grows dark … but when cloud computing is mentioned, the music turns happy and the landscape becomes green.

Miller’s Twitter message to introduce his blog says it all:

Fed Cloud Targets Evil Data Centers: Cloud=good, datacenters=bad! Hmmm ... wonder where those clouds will live.

I totally agree with Miller. Kundra is simply passing the problem (as he frames it) on to others to worry about. I really want to find out how bad each cloud computing provider's PUE is. If you cannot foresee demand, you need to overprovision buildings, power, cooling, hardware, software, and more. Total power consumption, which as U.S. CIO Kundra needs to be concerned about, may not change just because government data centers are outsourced to third parties.

Tags:  Cloud computing  Datacenter  Evil  US CIO 

 

Green IT: Data Center and User Environment

Posted By Zen Kishimoto, Tuesday, March 03, 2009
In terms of Green IT, office environments and data centers tend to be discussed separately from each other. A simple and easy link between the two is cloud computing.

Another link may be thin clients. More than ten years ago there was a lot of discussion about thin clients, or network computers, but despite the early hype they did not catch on. Recently, I had an opportunity to attend a joint seminar by VMware and HP on desktop virtualization and thin clients. It reminded me of my consulting work several years ago for a company marketing thin client products; for many reasons (such as the lack of virtualization and high-speed network connections at the time), those products did not catch on either.

Thin clients have been said to be appropriate for certain vertical markets, such as warehousing, healthcare, and restaurants. During the seminar, HP's case study was from healthcare.

In spite of the optimism shown by both HP and VMware, Jon Brodkin of NetworkWorld paints a somewhat less optimistic picture in his article.

He cited quotes from a Forrester analyst, Gartner, and IDC:
"I see huge interest right now, for many reasons," says Forrester analyst Natalie Lambert. "But the challenge is that desktop virtualization is a very costly endeavor. I don't care what people tell you otherwise, they're wrong."

“Gartner's latest numbers released this month predict that hosted virtual desktop revenue will quadruple this year, going from US$74.1 million worldwide in 2008 to nearly $300 million in 2009.”

“Is [desktop virtualization] going to break out in 2009? I don't see any reason it would,” IDC analyst Michael Rose says. “Frankly, the current economic environment is going to be a significant barrier for adoption of virtual desktops in the data center.” True ubiquity could take another five years, given current financial problems and the nature of PC refresh cycles, he says.

If this is the case, will it ever catch on? Maybe another angle is required to link the office environment and data centers from a different perspective. More on virtual desktops later.

Tags:  Datacenter  Forrester  Gartner  HP  IDC  Virtual desktop  VMware 

 