Zen and the Art of Data Center Greening (and Energy Efficiency)
Commentary of Dr. Zen Kishimoto on news, trends, and opportunities in environmentally sustainable data centers and energy efficiency.

 


What's Going on in the Data Center Market?

Posted By Zen Kishimoto, Saturday, July 20, 2013

This is a summary of my attendance at the recent DatacenterDynamics San Francisco conference. The sessions I attended do not necessarily reflect the entire marketplace, but they give some idea of its current state.

California Cap-and-Trade Impacts on the Data Center Market

The federal cap-and-trade bill died in 2010 in the Senate. But California passed its own and put it into practice as of January 1, 2013. Mark Wenzel, Climate Change Adviser of the California Environmental Protection Agency, gave an overview of the program.

Mark Wenzel

He covered many aspects of the program, but one thing was noteworthy: it applies only to entities that emit more than 25,000 metric tons of CO2 per year. That corresponds roughly to a diesel generator burning 2.5 million gallons of fuel per year, and no data center in California consumes that much, so the program does not currently apply to data centers here. But the story is not over: work is under way at the CPUC for energy-intensive, trade-exposed industries, and there may eventually be a separate category for data centers in the program.
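As a rough back-of-the-envelope check of that equivalence (my own arithmetic; it assumes the commonly cited factor of about 10.2 kg of CO2 per gallon of diesel burned, a figure that was not part of the talk):

    # Rough sanity check of the 25,000-metric-ton threshold quoted above.
    # Assumes ~10.2 kg CO2 per gallon of diesel, a commonly cited emission
    # factor; this number did not come from the presentation.
    KG_CO2_PER_GALLON_DIESEL = 10.2

    gallons_per_year = 2_500_000
    tons_co2_per_year = gallons_per_year * KG_CO2_PER_GALLON_DIESEL / 1000  # metric tons

    print(f"{tons_co2_per_year:,.0f} metric tons of CO2 per year")  # ~25,500 t, right at the threshold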

A panel following Mark’s session discussed the impacts of this on data centers in California. In addition to Mark, the panelists were Zahl Limbuwala, CEO, Romonet; Kurt Salloux, CEO, Global Energy Innovations; and Nicola Peil-Moelter, Director of Environmental Sustainability, Akamai Technologies.

Although many subjects were discussed, two things stuck with me. The cap-and-trade program has no direct impact on data centers in California, but it does have an indirect one: an increase of a few cents in the price of power. The other is that this program alone would not drive a data center out of California, but latency could. Nicola added that Akamai will stay in California because its customers are here.

Modeling

Professor Jonathan Koomey of Stanford University has been researching energy issues as they relate to IT. In his talk, he argued for modeling data center operations, using monitored airflow and temperature data, to minimize lost capacity. Capacity is lost when a section of the white space cannot be used for IT equipment because cooling capacity is lacking. By simulating IT equipment at several candidate locations, a data center could minimize lost capacity, if not eliminate it completely.

As became clear during the Q&A session, his claim is that modeling is for short-term rather than long-term planning. In other words, he was not advocating using a model to design and construct a data center, but to decide where to place new IT equipment.
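To make the idea of lost capacity concrete, here is a minimal sketch of my own (not Koomey's model): a rack position is usable only if its cooling zone still has headroom for the new heat load, and whatever load cannot be placed anywhere is the lost capacity.

    # Toy illustration of "lost capacity": IT load that cannot be placed
    # because no cooling zone has enough headroom. My own sketch, not the
    # model presented in the talk.
    from dataclasses import dataclass

    @dataclass
    class Zone:
        cooling_capacity_kw: float   # cooling available in this zone
        current_it_load_kw: float    # heat already being removed

        def headroom_kw(self) -> float:
            return max(0.0, self.cooling_capacity_kw - self.current_it_load_kw)

    def place(new_loads_kw, zones):
        """Greedily place new equipment; return the load that could not be placed."""
        unplaced = 0.0
        for load in sorted(new_loads_kw, reverse=True):
            best = max(zones, key=Zone.headroom_kw)
            if best.headroom_kw() >= load:
                best.current_it_load_kw += load
            else:
                unplaced += load       # stranded ("lost") capacity
        return unplaced

    zones = [Zone(100, 80), Zone(100, 95)]
    print(place([10, 10, 10], zones), "kW of new load could not be placed")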

Jonathan Koomey

Dynamic IT Power

Two things caught my attention in the presentation by Donald Mitchell of Schneider Electric.

Proper rack power and cooling for VMs

One was that Schneider Electric is teaming up with Microsoft to manage virtual machines. Schneider uses its StruxureWare to monitor power and cooling at each rack and alert the operator to any problem. If there is a problem, such as lost cooling, Microsoft's virtual machine (VM) manager moves VMs from the faulty rack to one with proper power and cooling.
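The general pattern is easy to sketch. The snippet below is my own illustration of the idea, with hypothetical interfaces; it does not use the actual StruxureWare or Microsoft VM manager APIs.

    # Illustrative sketch of the rack-alert -> VM-evacuation pattern described
    # above. All interfaces here are hypothetical, not vendor APIs.
    def evacuate_unhealthy_racks(racks, vm_manager):
        for rack in racks:
            # A rack is unhealthy if its power or cooling readings breach a threshold.
            if rack.cooling_ok() and rack.power_ok():
                continue
            healthy = [r for r in racks
                       if r is not rack and r.cooling_ok() and r.power_ok()]
            if not healthy:
                continue  # nowhere to move the VMs
            for vm in rack.virtual_machines():
                target = min(healthy, key=lambda r: r.load())  # least-loaded healthy rack
                vm_manager.live_migrate(vm, target)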

Actual power consumed by data center equipment

The other is a database they are putting together, called the data center genome project. It contains IT equipment information: system type, make/model, protocols, capacity/dimensions, power consumption and delivery, and airflow. This should help operators prepare for the power and cooling needs of their IT equipment.
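Conceptually, each entry would look something like the record below; the field names, types, and example values are my own guesses based on the list above, not the project's actual schema.

    # Sketch of one "data center genome" record, based on the fields mentioned
    # in the talk. Names, types, and values are guesses, not the actual schema.
    from dataclasses import dataclass

    @dataclass
    class EquipmentProfile:
        system_type: str            # e.g., "1U server", "ToR switch"
        make_model: str
        protocols: list             # management protocols supported (IPMI, SNMP, ...)
        dimensions_u: int           # rack units occupied
        capacity: str               # e.g., storage or port capacity
        power_draw_w: float         # typical power consumption
        power_delivery: str         # e.g., "dual 208 VAC feeds"
        airflow_cfm: float          # airflow requirement

    example = EquipmentProfile("1U server", "ExampleCo X100", ["IPMI", "SNMP"],
                               1, "2 x 10GbE, 4 TB", 350.0, "dual 208 VAC feeds", 40.0)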

Data Center Trends

Michael Salvador of Belden presented data center trends, summarized in the slide below. I agree with all of them.


 Source: Michael Salvador of Belden

But I would add a few more to the list, such as DCIM, metrics, and IT utilization. Those may already be implicit in the trends he listed, though.

DCIM

Data center infrastructure management (DCIM) tools are becoming the norm rather than a novelty. Although DCIM covers many aspects of data center operations, such as capacity planning, a few speakers at the conference used DCIM to mean monitoring, measuring, and asset management.

Metrics

Power usage effectiveness (PUE) has become the standard metric for gauging the energy efficiency of a data center. PUE's problems have been discussed in many places; one is that it does not consider IT energy efficiency. Some alternatives have been proposed, such as CADE (corporate average data center efficiency) and DCeP (data center energy productivity), but none of them has been as well received as PUE.
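For reference, PUE is simply the total energy drawn by the facility divided by the energy delivered to the IT equipment; the numbers in the example below are made up.

    # PUE = total facility energy / IT equipment energy. An ideal facility
    # scores 1.0; cooling, power conversion, and lighting push it higher.
    # Note that PUE says nothing about how much useful work the IT energy
    # produced, which is exactly the gap discussed above.
    def pue(total_facility_kwh: float, it_kwh: float) -> float:
        return total_facility_kwh / it_kwh

    print(pue(total_facility_kwh=1_500_000, it_kwh=1_000_000))  # 1.5 (made-up numbers)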

IT energy efficiency

PUE compares all the power used by a data center with the power consumed by its IT equipment. IT energy efficiency itself, however, has largely gone unmeasured, mainly because it is much more difficult to deal with.

Software-Defined Data Center

A panel on the software-defined data center was interesting. Ambrose McNevin of DatacenterDynamics moderated the panel of Mark Monroe of DLB Associates and David Gauthier of Microsoft.

From left: Ambrose McNevin, Mark Monroe, and David Gauthier

"Software-defined XXX” is becoming very popular, as in "software-defined network” and "software-defined data center.” Simply put, you can define your data center infrastructure with software without regard to the actual physical entities. Both panelists agreed that in some way a software-defined data center (SDDC) is a data center operating system (OS). Before the OS was introduced, we needed to manage memory, process, I/O, and users and perform other cumbersome tasks to run our applications. All these burdensome tasks are taken care of by the OS automatically.

David also mentioned that DCIM tools are more like subroutines called by the SDDC to feed it information from a target data center: asset information via the asset management function, and environmental information such as temperature and airflow capacity at each rack. If new IT equipment is introduced, the SDDC may find an optimal location to place and provision it by consulting its DNA database and an operational model (see the data center genome project and Koomey's modeling sections above), and adjust cooling and power allocation by moving VMs around. If something fails or is about to fail, the SDDC detects it and takes appropriate action to keep operations going without disruption.

How does this relate to cloud computing? Is the SDDC equivalent to or a base for cloud computing? At this stage, the SDDC seems to be considered only for one single data center. But by extending the concept, multiple data centers could be defined by software to function as a single virtual data center.

As Yevgeniy Sverdlik wrote in his article (page 52), virtualization for servers is well under way with storage and network virtualization following. In order to fully realize the SDDC, we need to virtualize cooling and power. Can we do it? I will cover this issue in future blogs.

Some Comments

It is always good to attend a conference to find out what's happening in the marketplace, in addition to meeting new and old friends. As I said above, I wanted to see more discussion of DCIM, metrics, and IT energy efficiency. Also, there was no particular discussion about how to integrate facilities and IT functions for seamless operation, a topic that was covered at a past DatacenterDynamics conference.

Tags:  Data Center  DatacenterDynamics  DCD 2013 SF  DCIM  software defined  Software defined datacenter  Virtualization 

 

An IT Guy's Take on ASHRAE's Recent Guidelines for Data Center ICT Equipment

Posted By Zen Kishimoto, Thursday, July 19, 2012

ASHRAE is a professional organization; according to its website:

ASHRAE, founded in 1894, is a building technology society with more than 50,000 members worldwide. The Society and its members focus on building systems, energy efficiency, indoor air quality and sustainability within the industry. Through research, standards writing, publishing and continuing education, ASHRAE shapes tomorrow’s built environment today.

It has a big impact on data center operations. At the recent DatacenterDynamics conference in San Francisco, I had a chance to chair a track that included ASHRAE's thermal guidelines. Judging from past attendance at ASHRAE guideline sessions, I was expecting a large turnout, and the two sessions on the subject were indeed packed.

The first presentation was by Don Beaty, ASHRAE Fellow and president of DLB Associates.


Don Beaty

He was the cofounder and first chair of ASHRAE Technical Committee TC 9.9 (Mission Critical Facilities, Technology Spaces and Electronic Equipment). TC 9.9 is very relevant to data center operations because it sets guidelines for operating data centers, including facilities and ICT equipment. When I attend a conference session, I usually record it for accuracy and memory’s sake. But it was hard to do so as a chair. So I am recalling from memory and some of the details are a little bit fuzzy.

One thing Don kept emphasizing during the talk was that it is the temperature of the inlet airflow to a server that matters for data center cooling, not the temperature in the room. In the past, CRAC units on the data center floor checked the temperature of the return air and used it to approximate the inlet airflow temperature at a server. Obviously, that usually did not reflect the actual inlet temperature. With raised-floor cooling, the inlet airflow temperature varies widely depending on proximity to the CRAC units, so it is imperative to measure and monitor the temperature at the inlet of each server or rack. At some point, raised-floor cooling may no longer work well; a rack that consumes 10 kW, for example, may not be cooled effectively that way. Furthermore, even though it is desirable to have uniform power consumption and heat dissipation from each rack, ICT equipment configuration requirements and other constraints do not always allow it. Don presented a guideline for server inlet temperatures titled "2011 Thermal Environments – Expanded Data Center Classes and Usage Guidance," and I extracted the table and a graph from pages 8 and 9 of the document, respectively, for reference.




A psychrometric chart describes temperature and humidity and is used to set a proper range for the combination of the two in a data center. This chart shows A1 through A4 ranges, along with the recommended envelope.

Current servers can operate fine (with the server vendor's warranty intact) under the A2 guideline shown above. A2 sets the upper temperature limit at 35°C (95°F), but the new guidelines expand the acceptable range to 40°C (104°F) for A3 and 45°C (113°F) for A4. With this widely expanded range, almost any data center in the world could take advantage of free cooling, such as an airside economizer. Incidentally, Christian Belady of Microsoft has said that developing a server that tolerates higher temperatures might raise the production cost (and thus the purchase price), but millions of cooling dollars could be saved by spending several thousand dollars more on this type of IT equipment.
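To make the classes concrete, here is a small sketch of my own that checks which classes would tolerate a site's peak outdoor temperature for airside free cooling. The limits are the allowable maxima quoted above; the site temperature is invented, and the check ignores humidity and the recommended-vs.-allowable distinction.

    # Which ASHRAE classes would tolerate a site's peak outdoor temperature?
    # Limits below are the allowable maxima quoted above (A2/A3/A4); the site
    # temperature is invented, and humidity is ignored for simplicity.
    CLASS_MAX_C = {"A2": 35, "A3": 40, "A4": 45}

    def classes_allowing_free_cooling(peak_outdoor_c: float):
        return [c for c, limit in CLASS_MAX_C.items() if peak_outdoor_c <= limit]

    print(classes_allowing_free_cooling(38.0))  # ['A3', 'A4'] -> A3-capable servers would suffice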

So what's holding up the production of servers with A3 and A4 guidelines? Next up were Dr. Tahir Cader, Distinguished Technologist, Industry Standard Servers (ISS), and David Chetham-Strode, Worldwide Product Manager, both of Hewlett-Packard. They shared very interesting results. Tahir is on the board of directors for The Green Grid, a member of ASHRAE TC 9.9, and a liaison between ASHRAE and The Green Grid.

 


Dr. Tahir Cader

Again, I do not have their presentation and unfortunately cannot refer to specific data. They studied power consumption at various geographical locations under the A1 through A4 guidelines. According to their findings, you may not need to go to A3 or A4, depending on your location. In many cases there was little or no difference between A3 and A4; sometimes there were savings in moving from A2 to A3, but that too depended on the geography.

When we consider the temperature in a data center, we also need to consider the people in it. Even though the primary purpose of a data center is to host ICT equipment, and the temperature at the server inlet could be raised to 45°C, doing so could also raise the temperature at other locations, and anything above 30°C is not very comfortable to work in for long.

It was relatively easy in the past to pick a server using well-defined data such as CPU speed, number of cores, memory size, disk capacity, and the number and speed of networking ports. Even with data centers at locations throughout the world, you could buy only a few server types and get a good discount from particular vendors for all of your data centers. But your next server refresh may add another factor: the analysis of server inlet airflow temperature vs. power consumption. If you are big and sophisticated like HP, you could run your own analysis to decide which server (supporting A1 through A4) to use. But this analysis seems fairly complex and may not be easy for everyone. As chair I had to run the Q&A session, but I did get a chance to ask a simple question: can a server vendor like HP provide a service to pick the right type of servers for your geography? The answer was yes. That is good to know.

Tags:  A1  A2  A3  A4  ASHRAE  Datacenter  DatacenterDynamics  DCD SF  DLB Associates  Don Beaty  HP  psychrometric chart  San Francisco  Tahir Cader

 

Come and Meet Me and Other Speakers at DatacenterDynamics’ San Francisco Conference

Posted By Zen Kishimoto, Sunday, July 15, 2012

This is short notice, but I would like to encourage people to come and join me at the DatacenterDynamics conference on July 17 in San Francisco.

There will be five tracks, or halls, on different themes. I will chair Hall 2, whose focus is IT. Of course, the discussions will cover IT in conjunction with data center mechanical and electrical systems. Subjects at data center conferences tend toward the facilities side, and very few are presented from the IT perspective, so I am very excited about this track.

In Hall 2, several subjects that relate to IT equipment and IT applications for data center optimization will be discussed. These are ASHRAE guidelines for telecom and IT equipment, cooling energy reduction, DCIM, and cloud computing.

Don Beaty of DLB Associates will discuss two guidelines from ASHRAE: the 2nd edition of Datacom Equipment Power Trends & Cooling Applications and the upcoming 3rd edition of Thermal Guidelines for Data Processing Environments. These may lead to chiller-less data centers. Then Tahir Cader of HP will update us on the new server inlet air temperature guidelines known as A3 and A4. By raising the inlet air temperature, we can cut cooling energy substantially. Are server vendors ready for the challenge?

John Boggs of Emerson will talk about how to reduce cooling energy at data centers with little or no cost to you. As we know, cooling can use close to 60% of power consumed at data centers. If we can control and reduce it (and with little or no cost), it would be great. I cannot wait for this presentation.

DCIM covers a broad area of the data center, ranging from design, simulation, monitoring, and control to retrofitting. The four DCIM sessions will collectively give us good insights into what DCIM is and what it can do for us, although two of the four do not use the term DCIM in their titles or synopses. The first will be by Dhesikan Ananchaperumal of CA Technologies, who will talk about the need to integrate data from both facilities and IT to optimize data center operations. Jim Kennedy of RagingWire will share his experience making both existing and new data centers energy efficient with DCIM tools, which led to EPA Energy Star for Data Centers certification. Following that, Khaled Nassoura of Raritan will tell us how to improve data center operations by combining DCIM with intelligent, automated systems. Finally, Todd Goldman of Nlyte Software will explain how to apply DCIM tools in steps, without going too fast.

The past few years have brought an increase in cloud computing sessions at data center conferences like this one. Gina Tomlinson of the City and County of San Francisco will talk about how she put mission-critical IT infrastructure into a cloud. Although the notion of cloud computing is widely understood by now, many people are still hesitant to move their mission-critical infrastructure into a cloud for reasons of security, controllability, and SLAs. How did she cope with that fear and make the move? It will be very interesting to hear about her experience.

Overall, this conference is filled with many more interesting sessions and speakers. But if you are an IT professional, come and join me for those very informative talks. See you in San Francisco!

Tags:  ASHRAE  CA Technologies  Cloud  DatacenterDynamics  DCD  DCIM  Emerson  Facilities  HP  IT  Nlyte  RagingWire  Raritan  San Francisco 

 

Japanese Data Centers in the Aftermath of the Major Quake in March

Posted By Zen Kishimoto, Thursday, June 16, 2011

The Japan Data Center Council (JDCC) is a consortium of data center operators in Japan. It recently added English pages to its website. Its membership roster is a Who's Who of more than 100 companies in segments like IT, electronics, and construction; the membership list is here: http://www.jdcc.or.jp/english/council.pdf

At the upcoming DatacenterDynamics conference on June 30 in San Francisco, a representative from the JDCC and I will give a talk on Japanese data centers in the aftermath of the major quake in March. Although many data centers in Japan (some 60%) are concentrated in the Tokyo metropolitan area, few suffered direct damage from the quake; still, the JDCC surveyed the damage and collected noteworthy pictures and statistics.

The presentation agenda is shown here.

  • Disaster Overview
      • Earthquake
      • Tsunami
      • Nuclear Power Plant Trouble
      • Liquefaction
  • Indirect Effects of the Earthquake
      • Electric Power Shortage
      • Temporary Shortage of Various Products
      • Temporary Economic Activity Stagnation
  • What Really Happened to the Japanese Data Center Industry
      • Timeline from March
      • Reported Damages
      • Sudden Need for DC as BCP/DRP Reinforcement
      • Countermeasures for the Electric Power Shortage
  • What's Next?
      • Electric Power Saving Order from Government in Effect
      • Possibility of Long-Term Power Shortage
      • Redesigning of BCP/DRP
  • Lessons Learned
      • Communication Difficulties: Tools That Worked and Tools That Didn't
      • Transportation: People and Goods
      • Rumors: The Power of SNS
      • Plan for Unexpected Events
  • Future JDCC Action / Q&A

I think we will disseminate invaluable information to data center operators and practitioners. Come and join us at 11:05 a.m. on June 30 at the San Francisco DatacenterDynamics conference.

Tags:  BCP  DatacenterDynamics  Disaster recovery  Earthquake  JDCC

 

DatacenterDynamics SF Conference July 16

Posted By Zen Kishimoto, Thursday, July 08, 2010

DatacenterDynamics holds conferences at several places in the U.S. and Europe. Recently, after its noncompete arrangement with PS Holdings ended, it started holding conferences in Asia as well.

Although its conferences are international in nature, each conference has some local flavor. The upcoming one in San Francisco has a number of movers and shakers in the data center space, in addition to national and international participants. See the program here.

I plan to attend the conference as press and report on some of the sessions in my coming blog posts. If you are in the San Francisco Bay Area, this is a good conference to attend. And if you do attend, please find me and we can chat about data center issues. I am looking forward to seeing you there.

Tags:  DatacenterDynamics  SF Conference 

 

Data Center of the Future: Dry, Dark, and Hot

Posted By Zen Kishimoto, Thursday, July 23, 2009
That was the title of a panel discussion at last week's DatacenterDynamics conference in San Francisco.

Panelists discussing the future of the data center were as follows:
  • Steve Andreano, Cisco, moderator
  • Christian Belady, Microsoft
  • Geoffrey Noer, SGI
  • John Weale, Rumsey Engineers
  • Herb Villa, Switch and Data

As the title indicates, future data centers are expected to face three challenges:
  • Insufficient water (dry)
  • Insufficient electricity (dark)
  • Highly concentrated and more powerful IT equipment (hot)




As for the shortage of water, Belady expressed his strong opinion that airside economizers should be used to eliminate the use of water altogether. He even said that Microsoft intends to use zero water for new construction within five years. He is a strong proponent of designing IT equipment to operate at higher temperatures, such as 35°C and above. Remember his experiment putting IT equipment under tents outside, without any protection from heat, dust, falling leaves, and even rain? He is adamant on this issue and maintains that it is the user who decides what goes into IT equipment.

Villa said that water is also very important to Switch and Data. Their Toronto data center is 100% water-cooled. Luckily, it happens to be very close to a lake (or maybe this site was selected because of its proximity to the lake) and exploits the abundant lake water. Too bad we cannot use seawater without processing it first.

As for power consumption and heat, SGI was the only server vendor on the panel. The old SGI was acquired by Rackable, and the new company took SGI's name. Noer explained how to reduce power consumption from the server perspective:
  • Higher voltage to each rack (208–277 VAC) may distribute power more efficiently. In addition, rather than converting 120 VAC to 12 VDC inside each server, convert power to DC at the rack or PDU level, which reduces conversion loss substantially (see the sketch after this list).
  • The heat tolerance level is different for different components: memory, 85 °C; CPU, 60–70 °C; and regular hard drive, 45 °C. SSD may be a little more tolerant.
  • This difference is why a complete airside economizer may not work all the time.
  • Raising cold air temperature from 10 °C to 35 °C may increase fan speed and may not save on total power.
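To illustrate the conversion-loss point in the first bullet, here is a rough comparison of many small per-server AC-to-DC conversions against a single rack-level conversion; the efficiency figures are assumptions of mine, not numbers from the panel.

    # Rough comparison of per-server AC->DC conversion vs. one rack-level
    # conversion stage. Efficiency figures are assumed for illustration only.
    IT_LOAD_KW = 10.0                 # useful DC power the rack's servers need

    per_server_psu_eff = 0.85         # assumed efficiency of many small 120 VAC supplies
    rack_level_eff = 0.94             # assumed efficiency of one larger rack/PDU rectifier

    loss_per_server = IT_LOAD_KW / per_server_psu_eff - IT_LOAD_KW
    loss_rack_level = IT_LOAD_KW / rack_level_eff - IT_LOAD_KW

    print(f"per-server conversion loss: {loss_per_server:.2f} kW")   # ~1.76 kW
    print(f"rack-level conversion loss: {loss_rack_level:.2f} kW")   # ~0.64 kW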

Belady mentioned that Microsoft has started to issue RFPs for IT equipment that operates at higher temperatures. He even said that if the hard drive is the problem, spot-cool just the hard drive. Someone in the audience mentioned that such a scheme might raise the production cost (and thus the purchase price). Belady replied that millions of cooling dollars could be saved by spending several thousand dollars more on this type of IT equipment, which makes very good sense to me.



From left: Christian Belady, Geoffrey Noer, John Weale, and Herb Villa

As for power reduction, virtualization is key. Andreano said that a Cisco data center in Mountain View, California, carried out aggressive server virtualization and cut power consumption from 2 MW to 1 MW.

So is everyone going where power is cheap? Villa said that for a hosting company like Switch and Data, latency is one of the biggest issues. As discussed before, there are many factors in deciding where to construct data centers, and cap-and-trade regulations may be another consideration. The panel consensus was that no single factor, such as power cost, decides location.

One very interesting thing about this panel was the out-of-the-box thinking. Vendors want to conduct business as usual, without changing their products drastically, for fear that a new specification will fail. A large user like Microsoft can push a new specification to OEMs and force a change, and the impact could be substantial: cooling costs data centers a great deal, and expanding the temperature and humidity range for IT equipment could let operators use airside economizers, and deploy data centers, almost anywhere in the world and in almost any climate.

Tags:  Airside economizer  DatacenterDynamics  Heat  OEMs  Power  San Francisco  Water 

 

Busy Week This Week

Posted By Zen Kishimoto, Wednesday, July 15, 2009
I attend conferences, seminars, and other meetings if they are relevant to green IT, green data centers, cloud computing, smart grid, or energy. This week I plan to attend three such events.

On July 14, the Keizai Society/U.S.-Japan Business Forum presents “Green Technology and Collaborative Business Opportunities,” which includes a panel session with Tony Seba, author and high-tech strategy consultant; former Palo Alto (CA) mayor Yoriko Kishimoto, now a member of the VTA and Air District boards; and Binay Panda, Ph.D., cross-border entrepreneur.

Everyone from the White House to your house is talking about the future of global energy policy, environmental preservation, and the role of biotech in these and many other fields. Fueled in part by the Japanese government’s economic stimulus package, Japanese entrepreneurs and industry leaders alike have taken a global leadership position in creating green, sustainable energy, and biotech businesses. This presents unique opportunities for Silicon Valley companies, also global leaders in these fields, to collaborate with their Japanese counterparts.

On July 16 in San Francisco, Digital Realty Trust hosts with Emerson an all-day seminar, “Datacenter Energy Efficiency: Cooling and Measurement.”

This free one day seminar is designed to provide participants with a comprehensive understanding of the issues, strategies and technologies that are associated with cooling today’s high density datacenters. This seminar will also deliver an insightful analysis of the leading measurement tools available to effectively evaluate datacenter energy efficiency and how to interpret and use this information. Through presentations by industry leading experts from Emerson Network Power and Digital Realty Trust, attendees will be presented with real world examples of practical methods for dealing with datacenter heat loads in the most energy efficient manner possible.

On July 17, DatacenterDynamics presents a conference with the theme “Carbon: Risk or Opportunity?”

Are we reaching an inflection point in our industry? Preconceptions of how data centers are designed, built and operated are once again being challenged. The rising tide of regulation, a scarcity of capital and natural resources and the struggling economy are combining to drive a revision of data center and IT infrastructure strategy. Find out how you can unleash stranded capacity, do even more with less and balance data center productivity with efficiency at the 7th Annual San Francisco DatacenterDynamics Conference and Expo.

These three events are very timely and useful venues in which to gather and collect information. I will report on them in future blogs.

Tags:  DatacenterDynamics  Digital Realty Trust  Keizai Society

 

Reporting on DatacenterDynamics Conference in Tokyo (November 21, 2008)

Posted By Zen Kishimoto, Tuesday, November 25, 2008


Courtesy of Nikkei ITPro

I will be covering this conference in more detail over the next few days, as I have many things to report. Overall, 300 people attended this one-day conference, compared with 500 last time. According to people from DatacenterDynamics, the decline in attendance is due not to a lack of interest but to market conditions.

 

My presentation on U.S. trends and their applicability to Japan was well received. It was the first presentation, at 9:00 on a very cold morning, but the room was packed. I think the interest level in U.S. trends is very high.

 

This was the first conference in which Japanese giants like NEC and Hitachi participated. Until now, it had been only NTT Facilities (a subsidiary of NTT that designs data centers). Their green data center technology is quite advanced, and I will cover it later in another blog. I also had a chance to discuss the DC vs. AC power distribution controversy and got a pretty good sense of each camp. And I talked to a representative of the Japanese chapter of The Green Grid, who works for Intel Japan, and to a VP of Rackable Systems in Fremont, CA.

 

It is 3:00 a.m. now, and I am not sure I know where my mind is. Signing off for now. Good night…


<Addition> My presentation was covered by Nikkei BP ITPro (Web version) here. It has a better picture of me.


Tags:  DataCenterDynamics  Tokyo conference 

 