Zen and the Art of Data Center Greening (and Energy Efficiency)
Commentary of Dr. Zen Kishimoto on news, trends, and opportunities in environmentally sustainable data centers and energy efficiency.

 


Dell Buys Perot: What Does It Mean to Its Data Center Business?

Posted By Zen Kishimoto, Monday, September 21, 2009

Several media outlets and analysts reported this news. Taking excerpts from Rich Miller's blog:

Dell (DELL) said the acquisition will allow it to offer a broader range of IT services, and a built-in market for its hardware among Perot Systems’ existing customers. Perot Systems has a strong footprint in the health care and government, which provide 48 percent and 25 percent of its revenues, with the remainder in the enterprise sector. Those two sectors are expected to see strong growth due to the Obama administration’s economic stimulus plan, which is expected to boost the adoption of electronic health records and upgrades of federal agencies’ IT infrastructure.


Miller, in a separate Twitter message, says the deal is designed to boost Dell's enterprise and data center businesses. Other tweets and analysts suggest it is meant to counter the HP-EDS merger, as well as IBM, by building out higher-margin IT services and outsourcing businesses.

This goes along with what I heard through the grapevine: Dell has been working hard to create a new data center service business. My source tells me that a person at Dell who was promoted to head a new group is traveling all over the country and cannot be easily reached. Obviously, he is aligned with this movement.

Tags:  Dell  EDS  HP  IBM  Perot 


Blade or not Blade

Posted By Zen Kishimoto, Monday, March 09, 2009
Rich Miller of DatacenterKnowledge.com wrote an article on the recent server market.

Due to the economic slowdown, the overall server market did not do well, which is quite understandable. (The following figures refer to the fourth quarter of 2008 unless otherwise specified.)

However, I found some of the statistics interesting. For example:
  • Unit shipments were down 12%, and total revenue fell 14% to $13.5B compared with a year ago
  • The server market share breaks down as follows: IBM (36.3%), HP (29%), Dell (10.6%), and Sun (9.3%)
  • In spite of the slowdown, blade servers are doing well: revenue rose 16.1% while shipments rose 12.1%

Although market shares vary from time to time, these four vendors dominate the server market; their combined share is 85.2%. The surge in blade servers is attributed to energy efficiency.
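As a quick sanity check on the figures above (a throwaway sketch; the percentages come from the article, the derived numbers are just arithmetic):

```python
# Q4 2008 server market shares reported above (percent of revenue).
shares = {"IBM": 36.3, "HP": 29.0, "Dell": 10.6, "Sun": 9.3}
combined = round(sum(shares.values()), 1)
print(f"Top four combined: {combined}%")  # 85.2%

# A 14% decline to $13.5B implies roughly $15.7B in revenue a year earlier.
prior = round(13.5 / (1 - 0.14), 1)
print(f"Implied year-ago revenue: ${prior}B")  # $15.7B
```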

There are two sides to blade servers. On one hand, the argument for them is energy efficiency, because several blades share power supply units (PSUs) and cooling fans; I blogged on this in a previous posting. On the other hand, there is an argument against them, as I blogged here.
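The sharing argument can be made concrete with a back-of-the-envelope model. All numbers here are invented for illustration, not taken from any vendor:

```python
# Hypothetical: 16 standalone rack servers, each with its own PSU and fans,
# vs. 16 blades sharing chassis-level PSUs and larger fans.
SERVER_LOAD_W = 250    # IT load per server (illustrative)
RACK_PSU_EFF = 0.75    # a small PSU at partial load is often inefficient
BLADE_PSU_EFF = 0.90   # shared PSUs can run closer to their efficiency sweet spot
RACK_FAN_W = 30        # fan power per standalone server
BLADE_FAN_W = 12       # per-blade share of the chassis fans

def wall_power(n, load_w, psu_eff, fan_w):
    """Total power drawn from the wall by n servers."""
    return n * (load_w / psu_eff + fan_w)

rack = wall_power(16, SERVER_LOAD_W, RACK_PSU_EFF, RACK_FAN_W)
blade = wall_power(16, SERVER_LOAD_W, BLADE_PSU_EFF, BLADE_FAN_W)
print(f"rack: {rack:.0f} W, blade: {blade:.0f} W")
```

With these made-up efficiencies the blade chassis draws roughly 20% less wall power for the same IT load; different assumptions can shrink or reverse the gap, which is exactly the point of the debate.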

Now, which argument is correct? That depends, as I mentioned in my recent blog. There are several debates like this in the data center space, such as DC vs. AC power delivery. Depending on the assumptions and context, different conclusions are reached. It is important to know what we are arguing about, with a clear understanding of those assumptions and contexts.

One of my long-term interests, true of our firm AltaTerra in general, is to contribute to these discussions in a way that enables meaningful comparisons, though generally not through formal standards.


Tags:  Blade  Dell  HP  IBM  Market share  Server  Sun 


Dell's plan to help its customers conserve energy

Posted By Zen Kishimoto, Monday, February 09, 2009
Dell's main business is selling servers and other IT gear. Agam Shah of Network World wrote this article explaining how Dell plans to help its customers conserve energy with that gear.

If you are a server vendor, there are two major ways to conserve energy:
  • Produce energy-efficient gear (and set an appropriate refresh cycle for product lines), and
  • Provide virtualization technology and support

Dell now claims that a short refresh cycle of three years, instead of the historical seven to eight, makes sense. As Albert Esser, vice president of Dell's data center infrastructure group, says:
“It is very clear that refresh rates of three years ... the total cost of ownership is less than keeping your old machines running.” Old servers unable to distribute workloads could draw excess power and lead system administrators to buy exponentially more expensive servers to fill IT needs.
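Esser's TCO claim can be sketched with a toy model. Every number below is made up; the point is only that consolidation plus lower power draw can beat "paid-off" old hardware:

```python
# Annualized cost per workload: capital spread over the refresh cycle
# plus 24x7 energy at an assumed $0.10/kWh. All figures are illustrative.
HOURS_PER_YEAR = 8760
DOLLARS_PER_KWH = 0.10

def annual_cost_per_workload(price, power_w, years, workloads):
    """Yearly capital plus energy cost, divided by workloads hosted."""
    capex = price / years
    energy = power_w / 1000 * HOURS_PER_YEAR * DOLLARS_PER_KWH
    return (capex + energy) / workloads

# Old server: fully depreciated but power-hungry, hosting one workload.
old = annual_cost_per_workload(price=0, power_w=450, years=7, workloads=1)
# New server: $3,000, refreshed every 3 years, virtualized to host four workloads.
new = annual_cost_per_workload(price=3000, power_w=300, years=3, workloads=4)
print(f"old: ${old:.0f}/workload/yr, new: ${new:.0f}/workload/yr")
```

Under these assumptions the new server wins per workload; with less consolidation or cheaper electricity, the old one can win instead.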

As I indicated before, you need more memory to support virtualization:
“Virtualization requires an awful lot of memory, but doesn't require an awful lot of CPU horsepower,” Esser said. Dell plans to load two-socket servers with memory capacities typically found in four-socket servers, and four-socket servers with memory typically found in eight-socket servers.

As for major virtualization technologies, Dell covers them all:
“The company already ships products that are enabled for VMware, Microsoft's Hyper-V and XenServer hypervisors, a thin piece of software that unites multiple operating systems on a server and allocates resources accordingly.”

Other areas where Dell works to save energy include making power supply units more efficient.

It is good to know that a server vendor like Dell is keen on energy efficiency. One interesting thought: as in my previous blog, an air-economizer system may shorten a server's life because of unfiltered harmful gases and dust. One argument is that if a server's life is only three years and it is refreshed every three years, the impact of those contaminants may stay within a reasonable tolerance. From that point of view, the shorter refresh period is good.

Tags:  air-economizer  Dell  Memory  refresh cycle  Virtualization 


Carbon Neutrality

Posted By Zen Kishimoto, Monday, January 05, 2009

As we expand the coverage of this blog from green data centers to green IT, I note an article that illustrates some of the relationships between the two topics—and how large and complicated "green IT" can be. The article appeared at the end of 2008 in the Wall Street Journal.

The article analyzed Dell’s declaration of “carbon neutrality” and concluded that Dell’s claim is not valid.

It defines “carbon neutrality” and “carbon footprint” in passing:

“The term may suggest a company has reengineered itself so that it's no longer adding to the carbon dioxide and other greenhouse gases scientists say are contributing to climate change.”


And:

“The amount of emissions Dell has committed to neutralize is known in the environmental industry as the company's 'carbon footprint.'"


As the authors rightly point out, the definition of “footprint” is not standardized:

“there is no universally accepted standard for what a footprint should include, and so every company calculates its differently. Dell counts the emissions produced by its boilers and company-owned cars, its buildings' electricity use, and its employees' business air travel.”


Their criticism of Dell's claim covers two areas:

  1. Some items are not included in “carbon footprint”, such as “the oil used by Dell's suppliers to make its computer parts, the diesel and jet fuel used to ship those computers around the world, or the coal-fired electricity used to run them.” This is “about 10 times the footprint Dell has defined for itself.”
  2. The purchase of “carbon credits” by Dell is not really reducing the company's footprint.


The article takes a negative view of Dell's claim. I, however, consider Dell commendable for taking a stance at this early stage, when carbon neutrality is not well defined. Somebody has to start somewhere, and it is not entirely fair to single out Dell when Yahoo and Google, among others, take a similar approach.

Dell has taken a first step towards the goal of “carbon neutral.” Now Dell and others should begin to develop a more precise and standardized definition.

Tags:  Carbon Neutrality  Dell  Green IT  Wall Street Journal 


Data Center Consulting Services

Posted By Zen Kishimoto, Friday, December 12, 2008
Bridget Botelho of TechTarget reported that, in addition to Sun, Microsoft, and Hewlett-Packard, Dell has entered data center consulting services.

Their services include:

“to help users extend the life of existing facilities and avoid building out new data centers.”

According to Dr. Albert Esser, Dell's vice president of power and infrastructure solutions, there are usually only two ways to satisfy today's computing resource demands:
  • build a new data center, or
  • renovate an existing one.
Both entail very high cost.

Instead,
“Dell's data center services strategy is threefold: extending the life of existing data center by implementing server virtualization to increase server utilization, decommissioning out-of-use equipment and refreshing legacy systems.”

The first two strategies are self-explanatory, but what is “refreshing legacy systems”? It is a fancy way of saying dumping old servers and buying new (Dell) servers. That is a clever way of pushing Dell's hardware, although I agree that newer servers tend to be more powerful and power-efficient than many servers currently in service.

In addition, Dell advises its clients to do things such as:
  • Raising the data center temperature,
  • Containing hot and cold aisles, and
  • Using PUE to optimize a data center in service
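PUE (Power Usage Effectiveness, from The Green Grid) is simply total facility power divided by IT equipment power, with 1.0 as the ideal. The numbers below are illustrative only:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT power (ideal = 1.0)."""
    return total_facility_kw / it_equipment_kw

# Illustrative effect of raising temperature and containing aisles:
# the IT load stays the same while cooling overhead drops.
before = pue(total_facility_kw=2000, it_equipment_kw=1000)  # 2.0
after = pue(total_facility_kw=1500, it_equipment_kw=1000)   # 1.5
print(f"PUE before: {before}, after: {after}")
```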

An interesting point is that this advice is based on Dell's retrofits of its own data centers with measures such as virtualization. Advice based on real experience is very convincing.

Tags:  Consulting services  Dell  HP 


Future of Computing

Posted By Zen Kishimoto, Tuesday, September 09, 2008

Two recent articles in GigaOM further indicate the advance of cloud computing, or server-side computing. One concerns Dell, which appears to be shifting from its main business of manufacturing and selling PCs to something else, as the Wall Street Journal reports.

 

It speculates that Dell is talking to contract manufacturers about selling its manufacturing plants. Dell may be moving from its current core business to something totally new, like cloud computing. Dell already offers much smaller form-factor PCs, and those may serve as entry devices for a cloud computing business. Dell could sell hardware for cloud computing along with services related to or required by it.

 

Another article discusses the death of the DVD sales business in Korea, which is far more connected and online than most countries; its broadband penetration rate is 90.1%. In Seoul, more than 50% of people download movies from the net, and DVD rentals and sales have nose-dived. Because of this, Sony Pictures is leaving the Korean market, like other Hollywood studios. Even VoD is not what is replacing DVDs; the cloud is. People there use a Web-based storage service called Webhard instead.

 

In addition to these two news items, Microsoft has announced Midori for cloud computing, and Google recently released Chrome. Chrome is intended not merely to browse Web pages but to execute applications in the cloud. Collectively, these reinforce the market trend toward cloud computing. Beyond simply moving functions and applications from the client to the server side, new services and functions could emerge. Storage in the cloud was discussed a few years ago, but there was not much of a market then. As the second article reports, such businesses are now developing and expanding.

 

What this means is that computing is shifting toward the server side, demanding more computing power and storage capacity at data centers. That will encourage the development of smaller yet more powerful chips and the packing of more blade servers into a chassis. In turn, it will require more (huge) data centers and more power to run them. If enough power plants are not built in time to meet those rising power requirements, demand cannot be satisfied.

 

Greening data centers started with low-hanging fruit, but as cloud computing takes center stage, more complex and sophisticated technologies and policies will be required, including renewable energy. I will discuss this in a future blog.

Tags:  Broadband  Chrome  Cloud computing  Dell  DVD  Microsoft 


What is Storage Virtualization?

Posted By Zen Kishimoto, Friday, September 05, 2008
Updated: Sunday, September 07, 2008
Server virtualization
 
As discussed in many publications, server virtualization is easy to understand: it allows an operator to consolidate multiple servers onto a single machine without losing any services. Although there are a few variations, the way virtualization is defined and implemented for servers is well agreed upon and widely known. In a legacy server setup, a host operating system (OS), such as Windows or Linux, runs on top of a bare machine (hardware), and one or more applications execute using the host OS. With virtualization, there is another layer between the host OS and the applications: a thin layer (a small program component) called a hypervisor sits between the host OS and a guest OS, and applications use the services of the guest OS. The combination of a guest OS and its applications is called a virtual machine (VM) or virtual appliance. This arrangement allows an operator to host more than one VM on a single machine (physical server).
 
Storage virtualization
 
Now, can this concept be applied as easily to storage? Unlike servers, storage equipment is more complex, since there are several types: internal disks, direct-attached storage (DAS), network-attached storage (NAS), storage area networks (SAN), tape backup, and so on. A typical data center has many of these storage types configured in a complex manner. Storage virtualization treats the whole storage pool as one logical entity rather than distinguishing each device. This is a very useful concept, since the data segments required (and their amounts) vary dynamically, even hourly or daily, due to many factors. By weighing criteria such as cost, reliability, quality, accessibility, and speed of access, each data segment can be placed transparently on the most appropriate platform at a given time.
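The placement idea can be sketched as a toy policy. The tier names, costs, and latency figures are all invented for illustration; real products weigh many more criteria:

```python
# (tier name, cost in $/GB/month, typical access latency in ms) -- invented numbers
TIERS = [
    ("SAN-fast", 0.50, 5),
    ("NAS", 0.20, 20),
    ("Tape", 0.02, 60000),
]

def place(max_latency_ms):
    """Cheapest tier that still meets a data segment's latency requirement."""
    candidates = [t for t in TIERS if t[2] <= max_latency_ms]
    return min(candidates, key=lambda t: t[1])[0]

print(place(max_latency_ms=30))  # NAS: cheapest tier that is fast enough
print(place(max_latency_ms=10))  # SAN-fast: the only tier that qualifies
```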
 
Compared with server virtualization, I do not hear much about actual use of storage virtualization. However, Tsvetanka Stoyanova of the Data Center Journal did some survey work, published in a recent article. The article says some storage virtualization is used in a typical data center, although not as widely as server virtualization. It also points out that, according to experts in the area, storage virtualization is far from being a major data center technology today. The survey covers definitions, classifications, and various storage virtualization technologies, and tries to classify each vendor's product accordingly.

The conclusion was that there are too many definitions and classifications, so a consensus on the definition is needed. Rather than repeating the results, I will only list the major players in this segment: Dell, IBM, Hitachi Data Systems, Sun, EMC, DataCore, FalconStor, NetApp, RelData, and LSI, and leave the details for readers to find in the article.

One thing is clear: unlike server virtualization, the market for storage virtualization is still wide open, and it is hard to pick a winning technology or vendor at this time. This market needs to be watched constantly.

Tags:  DAS  Dell  Hitachi Data Systems  IBM  NAS  SAN  Storage Virtualization 


A Container as a Building Block for Data Centers

Posted By Zen Kishimoto, Friday, September 05, 2008
Updated: Saturday, September 06, 2008
A new way to build data centers has been proposed, and several leading companies, including Microsoft, Sun, and IBM, have developed and begun to deploy it; HP and Dell are rumored to be developing their own. The details differ among these companies, but the approaches are more or less similar: to skip the many steps of setting up and configuring each server and rack at a data center, pre-configured servers in racks are shipped in a container. A good summary of Microsoft's approach is given in James Hamilton's paper at Microsoft Research.

"Large numbers of low-cost, low-reliability commodity components are rapidly replacing high-quality, mainframe-class systems in data centers.  These commodity clusters are far less expensive than the systems they replace, but they can bring new administrative costs in addition to heat and power-density challenges."
[...]
"The proposed solution is to no longer build and ship single systems or even racks of systems.  Instead, we ship macro-modules consisting of a thousand or more systems.  Each module is built in a 20-foot standard shipping container, configured, and burned in, and is delivered as a fully operational module with full power and networking in a ready to run no-service-required package.  All that needs to be done upon delivery is provide power, networking, and chilled water."
[...]
"In this modified model, the constituent components are never serviced and the entire module just slowly degrades over time as more and more systems suffer non-recoverable hardware errors.  Even with 50 unrecoverable hardware failures, a 1,000 system module is still operating with 95% of its original design capacity.  The principle requirement is that software applications implement enough redundancy so that individual node failures don’t negatively impact overall service availability."
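The quoted capacity figure is easy to verify under the simple assumption that every node contributes equally:

```python
def remaining_capacity(total_nodes, failed_nodes):
    """Fraction of design capacity left, assuming uniform node capacity."""
    return (total_nodes - failed_nodes) / total_nodes

# A 1,000-system module with 50 unrecoverable failures, as in the quote.
print(f"{remaining_capacity(1000, 50):.0%}")  # 95%
```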

Microsoft plans to populate its Northlake (near Chicago) data center, which is under construction, with 220 such containers, according to an article at datacenterknowledge.com.

The success of container-based systems remains to be seen. Finally, note that there are skeptics of the container-based approach, as reported in Eric Lai's article in Computerworld.

Tags:  Container  Dell  HP  Microsoft  Sun 
