Zen and the Art of Data Center Greening (and Energy Efficiency)
Commentary of Dr. Zen Kishimoto on news, trends, and opportunities in environmentally sustainable data centers and energy efficiency.





Security and Data Centers

Posted By Zen Kishimoto, Sunday, December 7, 2014
Updated: Sunday, December 7, 2014

Twenty years ago, I ran a small group in a large corporation's subsidiary that developed firewall and virtual networking gear. In those days, most people were just beginning to discover the Internet and the World Wide Web. The need for Internet security was not widely recognized; it was inconceivable to many that bad guys could hack into their internal networks via the Internet and cause all kinds of trouble. Internet security was a new area, and several companies selling security gear went public.

Fast forward to 2014, when we are well aware that there are many kinds of attacks on corporations' networks worldwide. At the recent Motivated to Influence Data Center Conference in San Francisco, I sat in on a session on security in data centers, Data Center Security Discipline: You Do Not Have to Outrun the Bear, by Jeffrey Logsdon, COO of MainNerve.

In most data center conferences, topics are either facility or IT related, and security, even though it falls into the IT category, is not discussed very often. Jeffrey's talk was very interesting, although, joking aside, I got a little depressed to hear how vulnerable we may be in our use of the Internet and data centers.

Jeffrey Logsdon

The following is a summary of his talk and my comments.

Why Security in Data Centers

He discussed the fundamental reason why we need security at data centers: they host data that are sensitive and valuable, such as:


  • Student records, donor records, and alumni records, as covered by FERPA
  • Critical plant information, physical plant, and IT systems
  • Research information, internal data, and intellectual property
  • Law enforcement, court orders, and criminal and investigative records
  • Email, messaging, and text data

Motivations to steal those data vary: rogue nations have their motives, an individual may want to show off, and there are monetary reasons. Then what is the dollar figure associated with those sensitive data? Jeffrey showed an interesting chart.

The chart showed that stealing my information alone won't make a lot of money on the black market.

Types of threats

Jeffrey then showed the types of threats to deal with at data centers and their typical vectors:

  • Web executables, images, videos & links
  • Mail, websites & attachments
  • Trojans/logic bombs: applications, games & programs
  • Attachment-borne threats, quick to replicate across systems
  • Hacker-borne threats
  • Social engineering: human involved
  • Insider threats: employees & contractors
  • Key logging: social media

He also mentioned that those threats go up during:

  • Holiday seasons, such as Black Friday, Easter, and Christmas

Security measures

Jeffrey then showed what we need to do to counter such security threats:

  • Develop a risk management program
  • Train our staff


More specifically:

  • Develop a formal security plan with a strategic road map and a tactical crosswalk for getting there


He concluded by saying that with all of this preparation for security, you can:

  • Drive revenue
  • Improve quality
  • Improve client experience
  • Defend masterfully


My comments

These days we cannot go through a day without using online services: banking, filling prescriptions, buying goods, trading stock, and many others. It is inconceivable not to be online each and every day. I have thought about online security threats. If my accounts are broken into, I will surely suffer. But what if the financial institutions holding my accounts, or the retail stores processing my credit card, are broken into? Even if I am careful to protect my accounts, if the corporations that handle sensitive information like that listed in "Why Security in Data Centers" above are hacked, I will be impacted as well.

Jeffrey mentioned that he and his wife do not use online banking because he has seen so many security breaches. For me, not using online banking is not an option: it is quick and convenient and allows me to manage multiple accounts.

Another thought is that huge, dynamic distributed systems may go out of control. A large distributed system cannot be controlled manually, so some kind of intelligence is incorporated to manage the entire system. Dynamic changes in outside parameters may therefore cause the system to behave unexpectedly. Related to this, a recent article argued that artificial intelligence (AI) could be dangerous: if AI continues to progress, at some point it may surpass human ability and take control. It is a scary thought. My only relief is that it may not happen during my lifetime.

Tags:  attack  cyber  data  datacenter  Ecommerce  security 


Tier44 Provides Data Center Service Optimization, Reincarnation of PowerAssure

Posted By Zen Kishimoto, Friday, November 21, 2014

I am interested in applying information and communications technologies (ICT) to the energy sector. One such area is the data center market. This segment is very interesting because it involves both ICT and facility management and operations in the same place. Besides, I can relate easily to the IT side of the house because of my background, and over the past few years I've gradually learned the facility side as well.

I have followed what PowerAssure was doing for several years. In brief, they applied ICT to data center operations, optimizing power use by traffic-controlling power distribution within the data center. They raised more than $30 million but ran out of gas at the end of August this year. I talked to Clemens Pfeiffer, founder and CTO of PowerAssure, from time to time to follow their progress.

On a recent afternoon, I found that Clemens had taken a new job at Tier44, which confused me a lot because PowerAssure was Clemens and Clemens was PowerAssure. So I asked him what was going on.

Here is what I found out from him.

PowerAssure ran out of operating capital at the end of August, and everyone was let go. But Clemens immediately started a new company, named Tier44, adjacent to where PowerAssure used to be.


Tier44? That is a strange name. Uptime Institute, a division of The 451 Group, originally defined the tier system for data centers (tier 1 to 4) with a facility focus. The larger the tier number, the more reliable the data center, so tier 4 is the most reliable from the facility point of view. By making it 44, Clemens wanted to emphasize that in addition to the facility side, the IT side needs to be reliable, i.e., tier 4 for IT as well. The integration of facility and IT has been advocated by many analysts, consultants, and practitioners, but there is still room to move in that direction, which is a good one: uniting IT and facility for optimized, holistic data center operations.
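To give a rough sense of what the tier numbers imply, here is a small sketch using the availability targets commonly quoted for the four tiers; the figures are illustrative approximations, not official Uptime Institute values.

```python
# Commonly quoted availability targets for the four Uptime Institute tiers.
# These figures are illustrative approximations, not official values.
TIER_AVAILABILITY = {1: 0.99671, 2: 0.99741, 3: 0.99982, 4: 0.99995}

HOURS_PER_YEAR = 8760

def downtime_hours_per_year(tier: int) -> float:
    """Maximum expected downtime per year implied by a tier's availability."""
    return (1 - TIER_AVAILABILITY[tier]) * HOURS_PER_YEAR

for tier in sorted(TIER_AVAILABILITY):
    print(f"Tier {tier}: ~{downtime_hours_per_year(tier):.1f} h downtime/year")
```

A tier 4 facility budgets well under an hour of downtime a year, which is the kind of reliability Clemens wants mirrored on the IT side.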

Market Segment Category

Among the new trends in the data center market, data center infrastructure management (DCIM) is mentioned everywhere. PowerAssure was categorized under DCIM, but I felt that DCIM lumps together too many categories and segments, like monitoring and asset management, and needs reclassification. Clemens said they are currently classified as data center service optimization (DCSO) by The 451 Group, which owns Uptime Institute.

In short, DCSO is defined by The 451 Group as:

DCSO systems will provide a more business-, cost- and service-oriented view of the datacenter, drawing on external resources, integrating with multiple systems, and enabling the optimization of datacenters services in real time and throughout the lifecycle of a facility. DCIM systems managing the underlying facilities infrastructure are a critical and subsidiary function.

I think PowerAssure's technology fits this category better than the original DCIM category. I am sure more companies will come out with technologies that fall into this category later. A higher-level view toward the optimization of data center services is good for its wider acceptance.

One thing I mentioned to Clemens was that most decision makers at data centers are facility or business/financially focused, and presenting complex technologies might not be the best way to persuade them to adopt those technologies. In other words, what PowerAssure had was way ahead of the market, even for some technically savvy facility managers. Clemens agreed with that assessment.

Sales Approach

When very complex technologies are marketed, you can adopt a direct sales force doing solution sales. The problem with this approach is that it usually takes time to penetrate a potential account, and you must retain a sizable number of salespeople, who are usually compensated by how much they sell rather than a fixed salary. They tend to like selling products that are easy to explain, so when they are required to sell complex technologies, the sales force needs training to keep up its motivation, which is time consuming and costly. Yes, I ran a small organization before and managed a few salespeople, so I know this firsthand.

Clemens said that he would emphasize partnerships with companies that already have a large customer base. His complex technologies could be incorporated into partners' solutions and deployed to partners' customer bases without having to explain them to each potential customer.

Unfair Advantage

When I was involved with startups, I was often told that any successful startup needs an unfair advantage over others. Of course, that is a necessary condition but not a sufficient one. Tier44 acquired all the technologies PowerAssure developed. Clemens said PowerAssure raised and spent more than $30 million. I asked him how much he paid for the technologies. He did not mention a number, but he said with a smile that he and his team got them at a discount. They are very familiar with the technologies; after all, they developed them in the first place.

They are now after the customer base PowerAssure was cultivating. As of this writing, I do not know if they have acquired any customers yet. But if they do in the near future, they will have unfair advantages: existing technologies and paying customers. As for outside funding, Clemens and his two cofounders are paying expenses now. Clemens said he would balance fundraising with business development. If they can acquire customers and get enough revenue from them, they can defer or do without external funds.

It is exciting to see a new startup (or shall I say a reincarnation of an old one) going with a lot of hope. But I also know it is a tough business. It remains to be seen how Tier44 does.

Tags:  datacenter  energy  ICT  power  PowerAssure  Tier44 


An IT Guy's Take on ASHRAE's Recent Guidelines for Data Center ICT Equipment

Posted By Zen Kishimoto, Thursday, July 19, 2012

ASHRAE is a professional organization; according to its website:

ASHRAE, founded in 1894, is a building technology society with more than 50,000 members worldwide. The Society and its members focus on building systems, energy efficiency, indoor air quality and sustainability within the industry. Through research, standards writing, publishing and continuing education, ASHRAE shapes tomorrow’s built environment today.

It has a big impact on data center operations. At the recent conference put on by DatacenterDynamics in San Francisco, I had a chance to chair a track that included thermal guidelines from ASHRAE. Judging from past attendance at ASHRAE guideline sessions, I was expecting a large turnout, and the two sessions on the subject were packed.

The first presentation was by Don Beaty, ASHRAE Fellow and president of DLA Associates.

Don Beaty

He was the cofounder and first chair of ASHRAE Technical Committee TC 9.9 (Mission Critical Facilities, Technology Spaces and Electronic Equipment). TC 9.9 is very relevant to data center operations because it sets guidelines for operating data centers, including facilities and ICT equipment. When I attend a conference session, I usually record it for accuracy and memory's sake, but as chair it was hard to do so. So I am recalling from memory, and some of the details are a little fuzzy.

One thing Don kept emphasizing during the talk was that what matters for data center cooling is the temperature of the inlet airflow to a server, not the temperature in the room. In the past, CRAC units on the data center floor checked the temperature of the return air and used it to approximate the inlet airflow temperature at a server. Obviously, that usually did not reflect the actual inlet temperature. With raised-floor cooling, the inlet airflow temperature varies widely depending on proximity to the CRAC units. So it is imperative to measure and monitor the temperature at the inlet of each server/rack. At some point, cooling via a raised floor may not function well; for example, a rack that consumes 10 kW may not be cooled effectively this way. Furthermore, even though it is desirable to have uniform power consumption and heat dissipation from each rack, ICT equipment configuration requirements and other constraints do not always make that possible. Don presented a guideline for server inlet temperatures titled 2011 Thermal Environments – Expanded Data Center Classes and Usage Guidance, and I extracted the table and a graph from pages 8 and 9 of the document, respectively, for reference.
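A minimal sketch of the monitoring Don advocates: watch the inlet temperature at each rack rather than the CRAC return air, and flag racks that exceed a chosen ceiling. The rack names and readings here are hypothetical; 27°C is the top of ASHRAE's recommended envelope.

```python
# Flag racks whose measured inlet temperature exceeds a chosen ceiling.
# 27 °C is the top of ASHRAE's recommended envelope; readings are made up.
RECOMMENDED_MAX_INLET_C = 27.0

inlet_readings_c = {
    "rack-01": 24.5,
    "rack-02": 29.1,  # far from the CRAC unit, so it runs hot
    "rack-03": 26.0,
}

hot_racks = {rack: temp for rack, temp in inlet_readings_c.items()
             if temp > RECOMMENDED_MAX_INLET_C}
print(hot_racks)  # → {'rack-02': 29.1}
```

Note that a single return-air reading at the CRAC would average these out and miss the hot rack entirely, which is exactly Don's point.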

A psychrometric chart describes temperature and humidity and is used to set a proper range for the combination of the two in a data center. This chart shows A1 through A4 ranges, along with the recommended envelope.

Current servers can operate fine (with server vendor warranty) under the A2 guideline shown above. A2 sets the high temperature at 35°C (95°F), but the new guidelines expand the acceptable range to 40°C (104°F) with A3 and 45°C (113°F) with A4. With this widely expanded range, almost any data center in the world could take advantage of free cooling, such as airside economizers. Incidentally, Christian Belady of Microsoft has said that developing a server that tolerates higher temperatures might raise the production cost (and thus the purchase price), but millions of cooling dollars could be saved for several thousand dollars more spent on this type of IT equipment.
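The class ceilings just mentioned can be captured in a few lines. This sketch checks which classes tolerate a given inlet temperature, using only the dry-bulb ceilings; the real guidelines also bound humidity.

```python
# Allowable dry-bulb inlet ceilings (°C) for ASHRAE classes A1-A4,
# per the 2011 expanded guidelines; humidity limits are omitted here.
CLASS_MAX_INLET_C = {"A1": 32, "A2": 35, "A3": 40, "A4": 45}

def classes_allowing(inlet_c: float) -> list[str]:
    """ASHRAE classes whose allowable range covers the given inlet temperature."""
    return [cls for cls, t_max in CLASS_MAX_INLET_C.items() if inlet_c <= t_max]

print(classes_allowing(38.0))  # → ['A3', 'A4']: only A3/A4 servers tolerate 38 °C
```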

So what's holding up the production of servers with A3 and A4 guidelines? Next up were Dr. Tahir Cader, Distinguished Technologist, Industry Standard Servers (ISS), and David Chetham-Strode, Worldwide Product Manager, both of Hewlett-Packard. They shared very interesting results. Tahir is on the board of directors for The Green Grid, a member of ASHRAE TC 9.9, and a liaison between ASHRAE and The Green Grid.


Dr. Tahir Cader

Again, I do not have their presentations and unfortunately cannot refer to specific data. They analyzed power consumption at various geographical locations using the A1 through A4 guidelines. According to their findings, you may not need to employ A3 or A4, depending on your location: in many cases there was little or no difference between A3 and A4, and sometimes there were some savings between A2 and A3, again depending on the geographical location.

When we consider the temperature in a data center, we also need to consider it for humans. Even though the primary purpose of a data center is to host ICT equipment and the inlet temperature could be raised up to 45°C at the server, doing so could also raise the temperature at other locations. Anything above 30°C may not be comfortable for people to work in for long.

It was relatively easy in the past to pick your server using well-defined data such as CPU speed, number of cores, memory size, disk capacity, and the number and speed of networking ports. Even with data centers at locations throughout the world, you might buy only a few server types and get a good discount from particular vendors for all of your data centers. But another factor may be added the next time you refresh your servers: the analysis of server inlet airflow temperature vs. power consumption. If you are big and sophisticated like HP, you could run your own analysis to decide which server (supporting A1 through A4) to use. But this analysis seems fairly complex and may not be that easy. As chair I needed to run the Q&A session, but I had a chance to ask a simple question: can a server vendor like HP provide a service to pick the right type of servers for your geography? The answer was yes. That is good to know.
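A toy version of that geographic analysis: given hourly ambient temperatures for a site, estimate the fraction of the year an airside economizer could run under each class ceiling. The temperature series below is synthetic placeholder data for a hypothetical mild climate; real studies also weigh humidity, fan power, and server fan curves.

```python
# Estimate free-cooling availability per ASHRAE class from hourly ambient
# temperatures. The data here is synthetic, for illustration only.
import random

CLASS_MAX_INLET_C = {"A2": 35, "A3": 40, "A4": 45}

def free_cooling_fraction(hourly_temps_c, limit_c):
    """Fraction of hours cool enough to use outside air directly."""
    return sum(t <= limit_c for t in hourly_temps_c) / len(hourly_temps_c)

random.seed(0)
year = [random.gauss(18, 8) for _ in range(8760)]  # hypothetical mild climate

for cls, limit in CLASS_MAX_INLET_C.items():
    print(f"{cls}: free cooling ~{free_cooling_fraction(year, limit):.1%} of the year")
```

For a mild climate like this synthetic one, A3 and A4 come out nearly identical, echoing HP's finding that the extra headroom does not always pay off.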

Tags:  A1  A2  A3  A4  ASHRAE  Datacenter  DatacenterDynamics  DCD SF  DLA Associates  Don Beaty  HP  psychrometric chart  San Francisco  Tahir Cader 


The New Data Center Infrastructure Management Segment

Posted By Zen Kishimoto, Wednesday, July 6, 2011

When a new market segment starts to emerge, some analyst firm tends to name it. The data center infrastructure management (DCIM, a term coined by Gartner) segment is now emerging in the data center space.

DCIM solutions collect data from both the IT and the facility parts of a data center. I am familiar with companies like Sentilla, Modius, OSIsoft, and SynapSense. Arch Rock was spun off from Cisco and spun back in recently. Power Assure provides somewhat more sophisticated power management for data centers.

Those DCIM companies collect real-time data from actual operations and provide varying degrees of functionality. Some collect data from both IT equipment (like servers) and facility equipment, aggregate it, and display the results to provide an overview of a data center's power usage. Others take data from other sources and provide more sophisticated analysis.

Romonet was founded by Zahl Limbuwala (CEO) and Liam Newcombe back in 2006, but they kept it in stealth mode until now. In conjunction with the recent DatacenterDynamics conference in San Francisco, Romonet came out of stealth mode and launched in the US. It launched in the UK late last year.

Zahl Limbuwala

Liam Newcombe

Wanting to coin a term that accurately describes their segment, they came up with data center predictive modeling (DCPM). Rather than collecting real-time data, they predict a data center's behavior from its configuration and architecture.

They showed the differences between DCIM and DCPM in the following slide.

Romonet’s product is called Prognose. Its function is summarized in the following slide.

The tool can provide "what if" scenarios for many different elements, such as PUE and power consumption. Two screenshots are shown below.

This display shows how PUE might change with different power loads and temperature.

This display shows the power usage information of different IT equipment.

The rationale for a tool like this is the complex interrelationship of elements in a data center. Changing one element may have an adverse effect on others, so it would be nice to know the impact of a change before making it. Prognose can also be used for capacity planning. One of the case studies presented at the launch was from Intel, whose representative said the tool could be used to choose a data center location on the basis of temperature and humidity conditions in each geographical area of the world.
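To make the "what if" idea concrete, here is a deliberately simplified model in the same spirit: PUE as a function of IT load and outside temperature. The coefficients are invented for illustration; Prognose's actual models are calibrated against surveyed data centers.

```python
# A toy "what if" PUE model with invented coefficients, for illustration only.
def predict_pue(it_load_kw: float, outside_temp_c: float) -> float:
    fixed_overhead_kw = 50.0  # assumed lighting, UPS losses, etc.
    # Cooling power grows with IT load, and grows faster on hot days.
    cooling_kw = it_load_kw * (0.1 + 0.01 * max(outside_temp_c - 15.0, 0.0))
    total_kw = it_load_kw + fixed_overhead_kw + cooling_kw
    return total_kw / it_load_kw  # PUE = total facility power / IT power

print(predict_pue(500, 10))  # → 1.2 on a cool day
print(predict_pue(500, 35))  # → 1.4 on a hot day: cooling share rises
```

Even this crude model shows why a predictive tool is useful: the effect of a load or weather change on PUE is visible before anything is touched on the floor.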

The tool is based on modeling algorithms, and its effectiveness depends entirely on how good the modeling is. They surveyed many data centers of various sizes to fine-tune the model. Because I have not used the tool on a real data center, I withhold judgment on it, but a tool like this is pretty handy when a data center goes through frequent changes, as they typically do.

Another area where I withhold my opinion is their claim of being the "only DCPM in the world." This is because I found Nlyte Software at the show the next day. Nlyte also provides predictive modeling, along with management and real-time monitoring of data center assets.

It is difficult to claim differentiation by just monitoring, aggregating, and displaying data from multiple sources at a data center; the differentiation is in the analytics and prediction. As Romonet said, the DCIM segment is crowded, and some consolidation is inevitable. It is not "if" but "when."

Tags:  Datacenter  datacenter capacity  DCIM  DCPM  Prediction  Romonet 


Revision to Panel Proposal 1

Posted By Zen Kishimoto, Friday, June 18, 2010

I made three panel proposals before, one of which was about metering and monitoring. After bouncing the idea around with several people, including a conference planner, I made a few changes to the proposal. This particular conference is very keen on presenting users' perspectives rather than vendors' pitches, so I am revising the proposal to fit. The revised version follows.

I have talked to more than half a dozen vendors in this space (see my original post for their identities). Although they are all in the same monitoring and metering space, each company has a slightly different angle. Some measure via their own sensors and aggregate the data for display; others do not gather data themselves but exploit data from other sources. So the functions can be roughly classified into three categories: measuring (via sensors), aggregating, and analysis and display.

Also, in addition to measuring power consumption, different approaches can be taken to common functions like:

  • alarm management
  • asset management
  • capacity analysis
  • efficiency analysis
  • air control automation

In this panel, I would like to discuss the following so that a general audience can understand the need for metering, learn which technologies are state of the art, and assess the minimally necessary functions of metering.

We would discuss the following:

  • Why do data centers require monitoring and metering?
  • What is a typical architecture for measuring, aggregation, analysis, and display?
  • What are the minimally required functions of metering?
  • What kind of standards should be defined for metering? What type of data should be collected? Granularity of data? Frequency of collection? Data formats (like XML)? Display formats?
  • What extensions, such as e-waste and water, should be considered for existing metering?
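On the data-format question above, here is a sketch of what one standardized reading might look like when serialized as XML. The element names and the device identifier are hypothetical, not drawn from any existing standard.

```python
# Serialize one hypothetical metering record as XML (element names invented).
import xml.etree.ElementTree as ET

def meter_reading_xml(device_id, metric, value, unit, timestamp):
    reading = ET.Element("reading")
    ET.SubElement(reading, "device").text = device_id
    ET.SubElement(reading, "metric").text = metric
    ET.SubElement(reading, "value").text = str(value)
    ET.SubElement(reading, "unit").text = unit
    ET.SubElement(reading, "timestamp").text = timestamp
    return ET.tostring(reading, encoding="unicode")

print(meter_reading_xml("pdu-17", "power", 4.2, "kW", "2010-06-18T12:00:00Z"))
```

Agreeing on even a small schema like this (which fields, which units, what granularity) is exactly the standards question the panel would debate.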

An ideal panel would consist of three or four customers of vendors, one researcher in the space, and a regulatory representative, namely from the EPA. As of this writing, I have received positive responses, but this is just the beginning. I will report how this goes in the coming days.

Tags:  analysis  datacenter  metering  monitoring 


Fuel Cells to Power Data Centers?

Posted By Zen Kishimoto, Friday, February 26, 2010

It has become imperative for many data center operators to secure enough power without producing greenhouse gases. (See my previous post about data centers' excessive need for power.) Fuel cell units are a perfect solution for them. Data Center Journal reports that Google and eBay are interested in Bloom Energy's new server technology, which makes use of fuel cells. The company has been everywhere in the media and blogosphere for the past few days. I found a compact and concise article in PCWorld that helped me quickly grasp what the big deal is.

Why Bloom Energy? According to the article:

"There are probably another 100 companies that are working on something very similar,” Jack Brouwer, associate director of the National Fuel Cell Research Center told the Los Angeles Times. "But the key thing is that Bloom has an integrated system and package ready for commercial sale that puts them ahead of the pack.”

David Coursey, who wrote the article, wondered whether the Bloom Energy Server will be affordable enough (around $3,000) for household use. Bloom Energy thinks it will be about ten years before that happens.

I also cover the Japanese market, where the fuel cell market is interesting. In the U.S., utilities tend to provide both electricity and gas; my utility is Pacific Gas and Electric. In Japan, separate utilities provide electricity and gas. In the metropolitan Tokyo region, for example, Tokyo Electric Power Co. (TEPCO) provides electricity, while Tokyo Gas provides gas.

There is a war going on between Japanese electric and gas companies. Electric companies like TEPCO are campaigning to have each household use only electricity because it is safer and cleaner. Gas companies like Tokyo Gas are fighting back with fuel cells: they sell a co-generation system based on fuel cell technology. The unit costs about $30,000, but a government subsidy brings it down to $15,000. That is still five times as expensive as what Bloom Energy plans. Its capacity is 1 kW, which is adequate for a home, and the heat generated by co-generation is used to heat the house or the water for showers and dishwashing.
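The price gap above, worked out from the figures in this post:

```python
# Worked arithmetic for the comparison above, using the figures in the post.
japan_unit_cost = 30_000                   # USD, residential co-gen fuel cell
japan_after_subsidy = japan_unit_cost / 2  # government subsidy halves it
bloom_household_target = 3_000             # USD, Bloom's hoped-for household price

ratio = japan_after_subsidy / bloom_household_target
print(ratio)  # → 5.0: still five times as expensive as Bloom's target
```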

I do not hear much about the use of fuel cells for data centers, but fuel cells are being tested for electric vehicles in Japan, where EVs are very hot.

Tags:  Bloom energy  datacenter  Fuel Cells  TEPCO  Tokyo Gas 


Which Is Evil: Data Center or Cloud Computing?

Posted By Zen Kishimoto, Friday, September 18, 2009
There seem to be two different kinds of people: cloud people and data center people. If you understand how cloud computing is implemented, some of what is happening in the marketplace will puzzle you. I am not the only one confused by the messages in the market; Rich Miller wrote an interesting post.

Basically, the U.S. government is fed up with its expenditure on data centers and wants to outsource them to third parties:

The federal government spent $76 billion on IT last year, and about $20 billion of that was spent on data center infrastructure, (U.S. CIO Vivek) Kundra said. “A lot of these investments are duplicative,” said Kundra. “We’re building data centers the size of city blocks and spending hundreds of millions of dollars. … We cannot continue on this trajectory.” The solution: begin shifting government infrastructure to cloud computing services hosted in third-party data centers, rather than building more government facilities. Kundra notes that the General Services Administration has eight data centers, while Homeland Security has 23 (but not for long, as they’re consolidating to two large facilities).

His post includes a video announcement by the U.S. CIO. Miller mentions:

Here’s a video of (Vivek) Kundra’s announcement Tuesday at the NASA Ames Research Center in Mountain View, Calif. An interesting moment: check out the video that starts at the 19-minute mark, which underscores the “data centers are the enemy” theme. It’s almost like a bad political ad: when the data centers appear, the music turns ominous and the background grows dark … but when cloud computing is mentioned, the music turns happy and the landscape becomes green.

Miller’s Twitter message to introduce his blog says it all:

Fed Cloud Targets Evil Data Centers: Cloud=good, datacenters=bad! Hmmm ... wonder where those clouds will live.

I totally agree with Miller. Kundra is simply passing the problem (as he sees it) to others to worry about. I really want to find out how good or bad each cloud computing provider's PUE is. If you cannot foresee demand, you need to overprovision buildings, power, cooling, hardware gear, software, and more. The total power consumption (which Kundra, as U.S. CIO, needs to be concerned about) may not change by outsourcing your data centers to third parties.

Tags:  Cloud computing  Datacenter  Evil  US CIO 


Green IT: Data Center and User Environment

Posted By Zen Kishimoto, Tuesday, March 3, 2009
In discussions of Green IT, office environments and data centers tend to be treated separately from each other. A simple and easy linkage between the two is cloud computing.

Another link may be thin clients. Over 10 years ago, there was a lot of discussion of thin clients, or network computers. Despite the early hype, thin clients did not catch on. Recently, I attended a joint seminar by VMware and HP on desktop virtualization and thin clients, which reminded me of my consulting work several years ago for a company marketing thin client products. For many reasons (such as the absence of virtualization and of high-speed network connections), thin clients did not catch on then.

Thin clients have been said to be appropriate for some vertical markets like warehouses, healthcare, and restaurants. During the seminar, HP's case study was for healthcare.

In spite of the optimism shown by both HP and VMware, Jon Brodkin of NetworkWorld paints a less optimistic picture in his article.

He cited quotes from a Forrester analyst, Gartner, and IDC:

"I see huge interest right now, for many reasons," says Forrester analyst Natalie Lambert. "But the challenge is that desktop virtualization is a very costly endeavor. I don't care what people tell you otherwise, they're wrong."

“Gartner's latest numbers released this month predict that hosted virtual desktop revenue will quadruple this year, going from US$74.1 million worldwide in 2008 to nearly $300 million in 2009.”

"Is [desktop virtualization] going to break out in 2009? I don't see any reason it would," IDC analyst Michael Rose says. "Frankly, the current economic environment is going to be a significant barrier for adoption of virtual desktops in the data center." True ubiquity could take another five years, given current financial problems and the nature of PC refresh cycles, he says.

If this is the case, will it ever catch on? Maybe another angle is required to link the office environment and data centers from a different perspective. More on virtual desktops later.

Tags:  Datacenter  Forrester  Gartner  HP  IDC  Virtual desktop  VMware 
