Zen and the Art of Data Center Greening (and Energy Efficiency)
Commentary of Dr. Zen Kishimoto on news, trends, and opportunities in environmentally sustainable data centers and energy efficiency.





Remote Control Is Everywhere, Thanks to Proliferation of Smart End Devices

Posted By Zen Kishimoto, Sunday, June 2, 2013

With the proliferation of smart end devices, we can now do things that used to be difficult or expensive, thanks to innovations such as cloud computing, low-power yet powerful processors, high-capacity yet inexpensive storage, and enhanced wireless technologies. Many companies have come up with interesting technologies and products in this emerging market. People Power is one such company.

At the recent TiEcon conference, I sat down with David Moss, cofounder and CTO, to find out what they are up to. Although I did not talk to Gene Wang, cofounder and CEO, I took his picture with David.

David Moss (left) and Gene Wang (right)

Both gentlemen are veterans of wireless technologies. David met Gene when both worked at a company called Bitfone, which was later acquired by HP. Bitfone had technologies in wireless remote firmware updates. They founded People Power in 2009. Most of the current employees are engineers from Bitfone.

Because they emphasize the Internet of Things on their website, I asked a question about the Internet of Things, M2M, and intelligent systems. David told me that those terms are very poorly defined and all mean that everything will be connected. So he prefers the term the Internet of Everything.

People Power currently has two products: Presence, a security camera, and People Power 1.0, a power meter display. When I read a description of People Power, I was not quite sure how their product differed from their competition’s. Remember, I interviewed Ayla Networks before. They have somewhat similar solutions at a very high level.


Presence is a remote surveillance camera. That in itself is nothing new; we are very familiar with remote web cameras—they’re all over the place. What I thought was interesting about this was that it has:

· A uniform platform to connect everything through end devices like smartphones and tablets

· Support for three different architectures

· Ease of incorporation of analytics packages

You can turn your iPhone or iPad (an Android version is under development) into a surveillance camera by simply downloading this application to it. The app's motion detection exploits the smartphone's native capabilities. The iPhone series consists of several versions introduced over the past few years, and many people have traded in an old one or left it in a drawer collecting dust. In one school district, a school needed a security camera but did not have the budget for it. So the district asked the community to donate old iPhones and turned them into security cameras with People Power's Presence. It can be used to monitor babies and the elderly, as well as public places on a low budget. David showed me his room in Phoenix while he was at the show with me in Santa Clara, CA. He checked on his room and his pet with his iPhone from time to time while he was out.

Presence demo with David holding his iPhone and me with my digital camera shown on both iPhone and iPad mini.

Behind Presence is a platform general enough to be used for many applications, and People Power makes the software available as open source. It is called the Internet of Things SDK, or IOT SDK for short. Under a BSD license, it is available for download at their developer site and on GitHub.

Although the current version of Presence is free, the platform will later be licensed with more features, enabling device manufacturers to market products connected via the platform and service providers to manage many devices from different vendors.


There are three types of architectures for this system.

Architecture 1

This is the simplest of the three. Its architecture is shown in the following diagram.

A controlling end device, such as an iPhone, connects to the People Power server, hosted in Amazon's EC2 cloud, which relays its request to the remote target device, such as another iPhone or an iPad. The network protocol currently supported is TCP/IP (the platform can also support technologies such as Bluetooth and ZigBee), and utilities/applications on the controlling end device are written in embedded C, Python, and Java. The target device is currently either an iPhone or an iPad, but it could be anything; in that case, unique interfaces may be required.
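The relay pattern described above can be sketched in a few lines. The following is a minimal, illustrative Python sketch with an in-memory stand-in for the server and devices; all class and method names are hypothetical, not People Power's actual API.

```python
# Architecture 1 sketch: the control device never talks to the target
# directly; it sends a request to a central relay server, which forwards
# it to the registered target device.

class RelayServer:
    """Stands in for the People Power server hosted in EC2."""

    def __init__(self):
        self.devices = {}  # device_id -> device object

    def register(self, device):
        self.devices[device.device_id] = device

    def relay(self, target_id, command):
        # Forward the control device's request to the target device.
        target = self.devices.get(target_id)
        if target is None:
            raise KeyError(f"unknown device: {target_id}")
        return target.handle(command)


class TargetDevice:
    """Stands in for a remote iPhone/iPad running the Presence app."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.camera_on = False

    def handle(self, command):
        if command == "start_camera":
            self.camera_on = True
        elif command == "stop_camera":
            self.camera_on = False
        return self.camera_on


server = RelayServer()
phone_in_phoenix = TargetDevice("davids-iphone")
server.register(phone_in_phoenix)

status = server.relay("davids-iphone", "start_camera")
print(status)  # True: the remote camera is now on
```

In the real system the relay would run over TCP/IP with authentication; the sketch only shows the topology: everything flows through the server.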

Architecture 2


This architecture is basically the same as the first, except that the controlled device can now play the role of a networking hub. So with one control device, multiple target devices can be managed.
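The hub idea can be sketched as a simple fan-out: one request from the control device reaches every target attached to the hub. Again, a purely illustrative Python sketch with hypothetical names:

```python
# Architecture 2 sketch: one device acts as a networking hub so a single
# control request fans out to multiple target devices.

class SimpleTarget:
    def __init__(self, device_id):
        self.device_id = device_id
        self.on = False

    def handle(self, command):
        self.on = (command == "start")
        return self.on


class Hub:
    """The controlled device playing the role of a networking hub."""

    def __init__(self):
        self.targets = []

    def attach(self, target):
        self.targets.append(target)

    def broadcast(self, command):
        # One request from the control device reaches every attached target.
        return {t.device_id: t.handle(command) for t in self.targets}


hub = Hub()
for name in ("cam-1", "cam-2", "cam-3"):
    hub.attach(SimpleTarget(name))

print(hub.broadcast("start"))  # {'cam-1': True, 'cam-2': True, 'cam-3': True}
```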

Architecture 3

If the target device is associated with its own cloud, it may talk to its own cloud service instead of People Power’s. As long as it makes sense, People Power can implement an interface with that cloud. Also, People Power opens up its APIs, and the other party has the option to implement that interface. Either way, the end device can support many target devices that are not otherwise supported.
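Either party can implement that bridge as a thin adapter over the published APIs. A minimal, purely illustrative Python sketch, in which both the platform interface and the vendor cloud's API are hypothetical:

```python
# Architecture 3 sketch: a target device tied to its own vendor cloud is
# reached through an adapter that translates the platform's common
# interface into the vendor cloud's API.

class VendorCloud:
    """A third-party cloud with its own, different API."""

    def send_command(self, serial, action):
        return {"serial": serial, "action": action, "ok": True}


class VendorCloudAdapter:
    """Implements the platform's common handle() interface on top of the vendor cloud."""

    def __init__(self, cloud, serial):
        self.cloud = cloud
        self.serial = serial

    def handle(self, command):
        # Translate the platform command into the vendor cloud's API call.
        return self.cloud.send_command(self.serial, command)["ok"]


adapter = VendorCloudAdapter(VendorCloud(), serial="TH-0042")
print(adapter.handle("start_camera"))  # True
```

The point of the adapter is that the controlling end device sees the same interface whether the target lives on the platform's cloud or on a vendor's.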

Marketing strategy for Presence

Currently, Presence is offered as a free service marketed as freemium to get market recognition; they will charge for a premium version with more sophisticated features later. They have implemented a request board to solicit and collect useful enhancement features. Because they share the suggestions they receive, people can vote on the features they want to be implemented soon. I thought this was very clever.


A lot of data moves around through their platform. By collecting and analyzing such data, we can obtain useful information. Because what they have developed is a platform, it can be applied to many other applications and industry segments, in addition to security cameras. They designed their platform to allow easy plug-in of analytics packages, which David told me differentiates them from their competition. Consolidating all types of devices on a single platform enables uniform collection of data and ease of applying analytics.

People Power 1.0, mobile energy management

Another product is People Power 1.0, which works with a power consumption monitor to visualize your power consumption at home. Some time ago, both Google and Microsoft tried to market their own solutions, but those did not work out for several reasons, one being utility companies' reluctance to share power consumption data with them. People Power partners with Blue Line Innovation, which develops and markets power meters. Because Blue Line taps into a home's power panel and measures usage directly from it, all that People Power 1.0 needs to do is visualize it. In addition to Blue Line, it works with Energy Inc.'s TED product line. The school district mentioned above, in addition to using People Power's security camera product, also uses a power meter to manage its power consumption.

Energy consideration

My blog always ends with this consideration. People Power 1.0 is clearly relevant to it. How about Presence? One thought that comes to mind is the savings from monitoring remotely instead of traveling to the site. When control features are added in the future, people will be able to save energy by controlling home appliances, such as an air conditioner. A controlling end device tends to be always on, and the Presence software does not add much of a burden in terms of energy consumption. The fact that target devices need to be always on cannot be avoided; it comes with the territory. But we can put those devices into a low-power mode under configurable conditions, such as no detected motion, and wake them when something like motion is detected. We probably need to know how much power is consumed by new devices and weigh convenience against energy consumption.
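The wake-on-motion idea amounts to a small state machine. Below is a minimal Python sketch of it; the power figures are made-up illustrative numbers, not measurements of any real device.

```python
# Sketch: keep an always-on target device in a low-power state and switch
# to active only on a motion event; drop back after a quiet period.

LOW_POWER_W = 0.3   # hypothetical draw while idle (watts)
ACTIVE_W = 2.5      # hypothetical draw while monitoring/streaming (watts)


class Camera:
    def __init__(self):
        self.state = "low_power"

    def on_event(self, event):
        if event == "motion_detected":
            self.state = "active"
        elif event == "no_motion_timeout":
            self.state = "low_power"
        return self.state

    def draw_watts(self):
        return ACTIVE_W if self.state == "active" else LOW_POWER_W


cam = Camera()
print(cam.on_event("motion_detected"))    # active
print(cam.draw_watts())                   # 2.5
print(cam.on_event("no_motion_timeout"))  # low_power
```

Most of the device's hours would be spent at the low-power figure, which is where the savings come from.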

Tags:  Cloud  Energy Efficiency  intelligent systems  Internet of everything  Internet of Things  iPad  iPhone  M2M  People Power  Power meter  Remote monitor 


Ayla Networks Promotes Device Connectivity for Internet of Things

Posted By Zen Kishimoto, Wednesday, May 8, 2013

A previous blog explained how the connectivity of end devices leads to intelligence. Simply connecting the devices does not by itself produce intelligence, but connecting them to a bigger system that aggregates, stores, and analyzes their data does. Many details still need to be worked out.

An ecosystem for intelligent systems consists of several players, such as chip, OS, middleware, end device, cloud service, back office processing and analytics providers, and system integrators. Ayla Networks, which is still in stealth mode, claims that they provide secure connectivity for an end-to-end solution for an intelligent system. They currently focus on the consumer market but do not rule out expansion into other areas.

I sat down with David Friedman, CEO of Ayla Networks, during the recent Design West to find out what they are up to.

David Friedman

Who they are

David was VP of business development for a wireless chip company. After selling it in 2010, he saw a business opportunity. At that time, end devices were beginning to be connected to form the Internet of Things. But the ugly reality was that those thousands of end devices were very different from each other, with microcontrollers in a variety of architectures and operating systems, as compared with the nonembedded world dominated by Windows and Linux. All those differences were a real hindrance to accelerating and proliferating the Internet of Things. David and his cofounders saw the need for a generic solution that could absorb these differences. That led to the formation of Ayla Networks.

David and his team started to work on their solutions. Using his background as a chip guy, he teamed up with STMicro because ST is a major player in the microcontroller market. Ayla Networks is a software company and does not deal with hardware, so this is a good combination. Chip vendors focus on how to design and develop new and better chips but are not experts in networking technologies such as Berkeley sockets and SSL. In other words, companies should focus on their core competency and outsource the rest. In the same vein, application vendors are not experts in the lower layers of software that support applications. During Design West, I heard from several players that application vendors should outsource the lower layers and concentrate on their core business; that is, designing and developing applications.

So David is saying, "Come to us. We will absorb any protocol differences and security needs to support your applications. You do not need to worry about the lower layers and other infrastructure concerns."

What they do

Ayla provides end-to-end connectivity software; for example, to remotely control your AC from outside your home with your smartphone. If you implement something like that on your own, you need to develop lower-layer software for the smartphone, including secure interfaces with its OS and networking stack. Then you need to develop an application to work with that infrastructure. Then you need to worry about how to connect it to your target AC. Communication can be via cellular, WAN, LAN, or PAN, and you need to choose the right one. Finally, on the target AC, some mechanism needs to be incorporated to receive data and control from your smartphone. For that, a small board with a communications chip on it must be inserted, along with the lower-layer software. And as with your smartphone, you need to interface with that chip's OS and networking stack, on top of developing applications.

What Ayla provides:

  1. Client-side lower-layer software for applications
  2. Networking solutions with security
  3. Cloud services
  4. Lower-layer software on target appliances

The client-side software can be integrated with your applications and downloaded from Apple's App Store and Google Play like other applications. Ayla provides whatever networking protocols are required by the applications. In addition, they provide cloud services to connect your client devices to target appliances. David did not elaborate on how they provide such services. Cloud services consist of cloud infrastructure and applications in the form of virtual machines. Because of the proliferation of inexpensive cloud infrastructure services, a startup like Ayla can afford to provide them. The lower-layer software on target appliances plays the same role as the client-side software in #1. Application developers can focus on their core business of developing applications without getting bogged down in lower-layer details.
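The division of labor described above can be sketched as follows: the application developer writes only the top-level function, while items 1 through 4 (device-side lower layers, secure networking, cloud relay) hide behind a connectivity library. The `ConnectivityLayer` API below is a hypothetical illustration, not Ayla's actual SDK.

```python
# Sketch: application code on top, outsourced lower layers underneath.

class ConnectivityLayer:
    """Stands in for the outsourced stack: transport, security, cloud relay."""

    def __init__(self):
        self.appliances = {}

    def register_appliance(self, name, state):
        self.appliances[name] = state

    def send(self, name, key, value):
        # In a real stack this call would traverse TLS, the cloud service,
        # and the appliance's embedded module; here it is a direct update.
        self.appliances[name][key] = value
        return self.appliances[name]


# Application code: all the developer has to write.
def set_ac_temperature(layer, degrees):
    return layer.send("living-room-ac", "setpoint", degrees)


layer = ConnectivityLayer()
layer.register_appliance("living-room-ac", {"power": "on", "setpoint": 26})
print(set_ac_temperature(layer, 24))  # {'power': 'on', 'setpoint': 24}
```

The point is the boundary: if the connectivity layer changes its transport or security mechanism, `set_ac_temperature` does not change at all.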


Now this seems to require a lot of technical expertise in several areas, such as embedded systems, networking, and cloud. Although these areas are closely related, no one person could address all of them. Although David did not reveal details about his team, he did say that he gathered technical people who had worked together for several years. People who like to innovate and have a passion to create something new are attracted to his team.

The devil is in the details

Many people have discussed controlling an AC from outside with a smartphone or a tablet, and that by itself is nothing new. David told me that now is the perfect time to bring their solutions to the market. Technologies have advanced and the market is opening up. An article by Reuters reports that by 2022 a typical household will own 50 Internet-connected devices, compared with 10 now. David said that we do not want 50 solutions for 50 devices but only a single solution so that any new device can easily belong to the existing network. He also emphasized that creating a supportable product is really, really difficult.

Their infrastructure pieces must be easy to:

  • install
  • configure with a lot of latitude
  • scale
  • implement with secure delivery

They claim that they have met all four requirements.

Big Data

They are in a perfect position to collect and aggregate data, but David did not reveal any future plan for business exploiting such a position. But he did not rule out the possibility, either. If I were an AC OEM, I would be very interested in analyzing control data sent by smartphones, to reflect on how to tune my AC features. David told me that the key to the use of Big Data is anonymization with the ability to opt in or out.

Energy consumption

What about power consumption? Smartphones eat a lot of power, and additional features like these would consume even more. David told me that his developers pay a lot of attention to curbing power consumption. Power-use optimization like that implemented in the iPhone would achieve energy efficiency.

We chatted a bit about power in general when everything is connected. My view was as follows:

  1. Advantages: There are many advantages to deriving useful information from generated data that might be otherwise discarded. Some information can be used to save power.
  2. Disadvantages: Unless we can intelligently select which data to collect, or keep, or discard, we will end up with a pile of useless data occupying a lot of storage and server equipment, wasting energy.

I think what David said about the disadvantages was interesting. He said that analyzing a vast amount of data, distilling it into a small amount of useful data, and discarding the rest might do the trick. I do not know how doable that is, but it is an interesting thought.
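In its simplest form, that distill-and-discard idea looks like the sketch below: reduce a pile of raw readings to a handful of useful statistics and throw the bulk away. The readings are made-up illustrative numbers.

```python
# Sketch of summarize-then-discard: keep a small summary, drop the raw data.

def summarize_and_discard(readings):
    """Reduce raw sensor data to a handful of useful statistics."""
    summary = {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(sum(readings) / len(readings), 2),
    }
    readings.clear()  # the bulk of the data is thrown away after analysis
    return summary


raw = [21.0, 21.5, 22.0, 35.0, 21.2]  # e.g. temperature samples; 35.0 is a spike
print(summarize_and_discard(raw))
# {'count': 5, 'min': 21.0, 'max': 35.0, 'mean': 24.14}
```

The hard part, as the discussion above suggests, is deciding which summary is "useful" before you have discarded the data that would tell you.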


David did not give me any concrete future plans, but this system can expand beyond the consumer segment to the commercial and industrial markets. I think there is a reasonable level of traction in the consumer market at this point, and there will be greater demand later. In addition to a clear application for turning an AC on and off, I can think of a few more examples. Sprinklers for lawns are usually on timers, and occasionally they start to work even in the rain while you are not at home. Your remote device can override this. Or better yet, you can program sprinklers in conjunction with moisture-detecting sensors buried in the ground and with sensors for other local weather.

But I think the really big applications are in the commercial and industrial segments. I think it is very smart of Ayla to choose the consumer market first. There are two reasons. The first is that the commercial and industrial segments are known to be late adopters. The second is that if you target very specialized and sophisticated industry-grade equipment, how many people will know? But familiar appliances like ACs show up on many people’s radar screens; after success in the consumer market, Ayla can enter bigger markets.

The conversation stayed at a high level because they are still in stealth mode, but a public announcement is forthcoming. Meanwhile, you can register to purchase their design kit by visiting here.

Tags:  Ayla Networks  Cloud  Connectivity  Internet of Things  Low layer software stack 


Come and Meet Me and Other Speakers at DatacenterDynamics’ San Francisco Conference

Posted By Zen Kishimoto, Sunday, July 15, 2012

This is short notice, but I would like to encourage people to come and join me at the DatacenterDynamics conference on July 17 in San Francisco.

There will be five tracks or halls on different themes. I will chair Hall 2, whose focus is IT. Of course, the discussions will be on IT in conjunction with data center mechanical and electrical systems. Subjects at data center conferences tend to come from the facilities side, and very few are from the IT perspective. So I am very excited about this track.

In Hall 2, several subjects that relate to IT equipment and IT applications for data center optimization will be discussed. These are ASHRAE guidelines for telecom and IT equipment, cooling energy reduction, DCIM, and cloud computing.

Don Beaty of DLB Associates will discuss two guidelines from ASHRAE: the 2nd edition of Datacom Equipment Power Trends & Cooling Applications and the upcoming 3rd edition of Thermal Guidelines for Data Processing Environments. These may lead to chiller-less data centers. Then Tahir Cader of HP will update us on the new server inlet air temperature guidelines known as A3 and A4. By raising inlet air temperature, we can cut cooling energy substantially. Are server vendors ready for the challenge?

John Boggs of Emerson will talk about how to reduce cooling energy at data centers with little or no cost to you. As we know, cooling can use close to 60% of power consumed at data centers. If we can control and reduce it (and with little or no cost), it would be great. I cannot wait for this presentation.

DCIM covers a broad area of the data center, ranging from design, simulation, monitoring, and controlling to retrofitting. The four DCIM sessions will collectively give us good insights into what DCIM is and what it can do for us. Two of the four sessions do not use the term DCIM in their titles or synopses, though. The first one will be by Dhesikan Ananchaperumal of CA Technologies. He will talk about the necessity for good integration of data from both facilities and IT for optimizing data center operations. Jim Kennedy from RagingWire will share his experience with making both existing and new data centers energy efficient with DCIM tools, which led to EPA's Energy Star for data center certification. Following that, Khaled Nassoura of Raritan will tell us how to improve data center operations by combining DCIM with intelligent and automated systems. Finally, Todd Goldman of Nlyte Software will explain how to apply DCIM tools in steps without going too fast.

The past few years have brought an increase in cloud computing sessions at data center conferences like this one. Gina Tomlinson of the City and County of San Francisco will talk about how she put mission-critical IT infrastructures into a cloud. Although the notion of cloud computing is widely known by now, many people are still hesitant to move their mission-critical infrastructure into cloud for reasons of security, controllability, and SLA. How did she cope with the fear and move them to a cloud? It will be very interesting to hear about her experience.

Overall, this conference is filled with many more interesting sessions and speakers. But if you are an IT professional, come and join me for those very informative talks. See you in San Francisco!

Tags:  ASHRAE  CA Technologies  Cloud  DatacenterDynamics  DCD  DCIM  Emerson  Facilities  HP  IT  Nlyte  RagingWire  Raritan  San Francisco 


More on the CIO's New Role in the Era of Cloud Computing and Consumerization of IT

Posted By Zen Kishimoto, Friday, February 10, 2012
Updated: Friday, July 20, 2012

This continues the discussion of the interview with Tim Crawford conducted by Andrew Dailey of MGI Research at the recent Teladata Technology Convergence Conference. As before, I have summarized what was discussed and injected my comments and thoughts triggered by their Q&A.

It was simple in the enterprise world. There was a main business organization with a handful of supporting departments like HR, accounting, legal, and facilities management. There was no doubt that the business organization was king. It ruled the enterprise without question. Then computers came to the enterprise. At the beginning, computers were subordinate, simply supporting business. As computer technologies advanced, it became necessary to form an independent organization called IT. Even so, IT was subordinate to business. Business dictated and IT followed.

It is hard to say when, but at some point IT got so powerful that business could not tell it what to do. Or more accurately, business was still boss, but IT became less responsive to business needs, and not just to requests; IT did not and could not respond to the changes taking place around it. IT was supposed to support and accelerate business goals, but it became a barrier to business. It got so bad that people said IT was the place where big, important projects went to die, according to Tim. For some time, business was frustrated with IT but did not have the means to bypass it. Then came the era of cloud computing and the consumerization of IT. Business secretly formed a shadow IT department, like a shadow cabinet in the UK, and started to bypass IT whenever possible. Who can blame non-IT folks who need IT services yesterday? If it is going to be months before IT can satisfy my needs when I need it now, I will bring in my own gear or outsource the services. The big difference now is that we can do it if we want; it was not possible only a few years ago. Tim said some CIOs now realize this and are working to face it head-on, but many CIOs still think the old, traditional way of running IT departments is appropriate.

There is no real department for shadow IT. In a way, any business or non-IT staff who needs IT services can virtually form a shadow IT group and outsource their needs to cloud and mobile computing. If there were really one physical shadow IT department, it would be easier for the real or traditional IT department to confront it and take back control. But this shadow IT group is like guerrilla warfare. There is no particular place the group shows up. It appears where there is a real need for IT services, gets them quickly, and then may disappear. As far as I know, Tim is the first person in IT to admit that the blame is on IT for bringing this situation on itself.

This may not be a good analogy, but at the same conference Pascal Finette of Mozilla gave a keynote speech on open innovation. His theme was that opening the barrier could accomplish even greater results. Maybe it is a stretch, but I am saying that the IT department should be more OPEN to what their internal customers want and work with them for the entire company. Tim said that CIOs in this new era should take a very hard look at their core capabilities, review their portfolio of services, and decide what should be retained and what should be outsourced. This is a hard thing to do because it may mean downsizing the IT department. The CIO and the IT group need to have heart-to-heart discussions among themselves and with their internal customers.

It is easy to propose this at a high level, but how do you actually implement it? When it comes to cloud computing, the number of popular services like AWS is low, and it may be easy to evaluate the usefulness of each. But if IT approves a "bring your own device" (BYOD) policy, the sheer number of variations could be a problem. I would like to ask Tim about this. Maybe he has already addressed it in his blog.

Tim advocates that IT take the initiative to evaluate current services and gear before its internal customers come asking for support and approval. It would be great if, when a customer asks for services or new technologies, IT is ready to embrace the request and seamlessly integrate it into the current portfolio of services. That is a new value to IT and will definitely increase their importance in the company. There is going to be new pressure on IT. IT needs to study the market trends and new services and technologies daily and constantly. This alone would produce a ton of work. So this makes it even more important to review what they currently have and remove services of low priority from their portfolios. If some services can be easily outsourced without losing important elements, such as security, they should be.

For example, if IT studied well in advance of a customer request to incorporate its services for business, it could accommodate the request with strong support. Moreover, because IT is probably the only department that interacts with most, if not all, departments in the company, it can facilitate communication between departments. For example, if IT understands how marketing and sales operate, it could avoid sending conflicting messages to the same customer from marketing and sales, which is very embarrassing, according to Tim.

What if IT provides virtualization? Would it be enough to prevent internal customers from resorting to outside clouds? Virtualization and cloud are two different things. I caught Tim after the interview to get more details on this, but that is a story for later. For now, let's say virtualization is not cloud, and cloud is more than that. Tim said that the three pillars of cloud computing are economic value, flexibility, and responsiveness. In most discussions, economy is considered the key value for cloud computing, but Tim said it might not be the most important factor. Sometimes cloud computing may cost you more, but you may want to adopt it for its flexibility and responsiveness. To sell cloud computing to your CFO and CEO, you should be able to make each point by giving a concrete example and the savings associated with it. For example, scaling from 50 to 1,500 servers over days and weeks could only be accomplished via cloud computing; no traditional methods accommodate change on such a scale. Moreover, you should be able to convert this into dollars to explain it to the CFO and CEO.
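Tim's point about converting the case into dollars can be made concrete with back-of-the-envelope arithmetic. The sketch below compares renting a two-week burst of capacity against buying hardware sized for the peak; every price in it is a hypothetical illustrative number, not a real quote.

```python
# Sketch: translate the 50 -> 1,500 server scaling story into dollars.

def cloud_burst_cost(servers, hours, hourly_rate):
    """Pay-per-use cost of renting burst capacity."""
    return servers * hours * hourly_rate


def owned_peak_cost(servers, price_per_server):
    """Capital cost of buying machines sized for the peak, which mostly sit idle."""
    return servers * price_per_server


burst = cloud_burst_cost(servers=1500, hours=14 * 24, hourly_rate=0.10)  # two-week burst
capex = owned_peak_cost(servers=1500, price_per_server=3000)

print(f"cloud burst: ${burst:,.0f}")   # cloud burst: $50,400
print(f"buy for peak: ${capex:,.0f}")  # buy for peak: $4,500,000
```

A real comparison would add power, staffing, and depreciation, but even this crude version is the kind of dollar figure a CFO can react to.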

Is virtualization a necessary process before adopting a cloud service? Tim said that the adoption of virtualization is probably in the 30% to 40% range, less than the 50%-plus estimated by IDC. If you have not virtualized your data center yet, you need to take a holistic view rather than simply considering virtualization. Looking solely at technology (virtualization) is not right. After all, virtualization maximizes server utilization, but other things—like maximization of the organization’s resources and processes—are also very important. So, according to Tim, it is OK to bypass virtualization and adopt cloud services by paying attention to the whole picture.

What about organizations that have already invested in hardware? It is easy to talk about startups, which have very little IT gear, but larger companies, even small to midsize companies, may have a hard time moving to a new paradigm. Tim's answer was clear: whether it’s hardware or software, everything will be replaced at some point. The lifespan of server hardware is about four to five years, and software applications must be upgraded or replaced at some point. That is the time to take a hard look at what to do from several points of view: economy, technology, process change, and the whole organization.

Finally, security is the number one inhibitor mentioned in many research results. The perception is that only larger companies need to consider both security and regulatory compliance and that small startups can adopt cloud computing readily. Tim said both considerations are required in any business of any size, so that kind of argument has no merit. Also, he said, in many instances cloud providers' data centers and security measures are better than those of companies of many sizes and kinds. In any event, if you worry about data security, you need to weigh convenience and ease of use against security risks. Having data in your own data center does not make it more secure than data hosted somewhere in a cloud.

There is a lot of talk about cloud computing, but sometimes people duck the hard questions. I will catch Tim in the future to discuss some of mine:

  • How do you transition from virtualization to cloud computing?

  • What are public, private, hybrid, and federated clouds?

  • Is there such a thing as private cloud?

  • Hybrid clouds? What about interoperability of VMs between private and public clouds?

  • What new requirements, such as ID management, do federated clouds bring?

  • What are green clouds? Can you quantify the greenness of clouds?

So stay tuned!

Tags:  CIO  Cloud  public and private clouds  Technology convergence  Teladata  Tim Crawford  virtualization 


Cloud Computing and the Consumerization of IT: Does CIO Stand for Career Is Over?

Posted By Zen Kishimoto, Wednesday, February 8, 2012

I have talked to Tim Crawford several times in the past and written a blog based on one of those conversations.  Tim is an IT guy of CIO caliber, and his insights are independent of particular vendors or industries. He has been very active in the cloud computing area and appears at many cloud and CIO-related conferences.

At Teladata's recent Technology Convergence Conference, Andrew Dailey of MGI Research interviewed him.

From left: Andrew Dailey and Tim Crawford have a fireside chat.

I will write a few blogs based on this session. The questions and answers below are edited, and my notes are indicated by my initials in parentheses (ZK).

Only the fittest will survive in the technology and business world, as in nature. Andrew’s first two questions were pretty blunt and provocative. The first was whether we still need a CIO. Tim's answer was yes and no.

(ZK) People in a corporation can bypass IT and get necessary computing bandwidth with their credit cards from cloud vendors almost instantly, without a lengthy request, review, and approval process. They do not need to purchase new IT gear to satisfy their needs, and they charge the cost as part of business costs rather than capital expenditure.

Also, people can bring in their own gear, such as smart phones and tablet computers. Some may even bring in a Wi-Fi-access device for their convenience. Smart phones, such as iPhones and Androids, and tablet computers, such as the iPad, hit the consumer market first. Even before corporate IT departments embraced and supported such consumer-oriented devices, workers brought them in and used them, bypassing IT departments. With this gear, people even form personal clouds on top of private and public clouds. If people can do their jobs easier and quicker without going through their IT department, we do not need the IT department or consequently the CIO.

Tim thinks that, in spite of this trend, there should be someone who decides and sets the direction for corporate IT.

(ZK) Each individual can do whatever they want to do with their own gear and outside vendors. But if there is no clear vision of where their IT is headed, there is just going to be chaos.

Tim said that today's CIO needs to change his or her mindset to accommodate the sea change in the corporate environment.

(ZK) This is very true. If the IT department remains a place where they can close their eyes and ears to change, neither it nor its CIO is necessary.

Andrew’s second question was whether each company needs to build, maintain, and run its own data center. Tim’s answer was very clear. If the data center is not your core value for staying competitive in your own field, you should not get involved in data center operations.

(ZK) When a provider of data center services, such as colocation or wholesale, claims that you should outsource your data center services, they cite several reasons: construction, maintenance, and operation require a large capital outlay; you must acquire and retain expert talent; and there will be a time lag until actual use. Even though this is true, because it is coming from a vendor, you may not buy it with 100% confidence. But coming from an independent source, it is more convincing. Still, it is not easy to determine whether you want to move all your gear out of your data center to an outside provider. Also, what about private vs. public? Hybrid clouds? I will catch Tim later and ask him more about it.

There were many other interesting conversations during the session that I will share with you in the future. Meanwhile, you can catch more of Tim’s thoughts on his blog.

Tags:  CIO  cloud  Consumerization of IT  Teladata Technology Convergence Conference  Tim Crawford 


Lew Tucker’s Keynote Speech at Cloud Connect

Posted By Zen Kishimoto, Wednesday, March 9, 2011

Cloud must be getting a lot of attention, because all the sessions at Cloud Connect were packed with a lot of people—sometimes I could not find a seat. I will write a few blogs about some of the more interesting sessions as they relate to energy efficiency.

Lew Tucker, formerly with Sun and now with Cisco, is an excellent speaker and very knowledgeable. Being knowledgeable does not always make one a good speaker, but Lew has both traits, and he lived up to my expectations this time as well.

Lew Tucker

Sun used to say that the network is the computer. Now Cisco is saying that. As we add mobile devices, sensors (components at home, commercial buildings, and industrial entities), and interactions with car components, the number of addressable components online will increase from roughly 5 billion now to 50 billion by 2020 (by some estimates). And the day will come soon, before we know it, when 1 trillion connected devices, 1 million applications, and 1 zettabyte of data is common.

Elements connected to the Internet (slide from Lew Tucker’s speech)

By the way, 1 trillion is 1,000 billion. Zetta? Let’s see. Giga is 10 to the 9th power, tera is 10 to the 12th power, peta is 10 to the 15th power, exa is 10 to the 18th power, and zetta is 10 to the 21st power.
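The prefix arithmetic above can be sanity-checked in a few lines of Python:

```python
# SI prefixes as powers of ten, matching the values in the text.
PREFIXES = {
    "giga": 10**9,
    "tera": 10**12,
    "peta": 10**15,
    "exa": 10**18,
    "zetta": 10**21,
}

# 1 trillion is indeed 1,000 billion:
assert 10**12 == 1000 * 10**9

# 1 zettabyte spread across 1 trillion devices works out to 1 GB each:
per_device = PREFIXES["zetta"] // 10**12
print(per_device)  # 1000000000 bytes, i.e., 1 GB per device
```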

As so many elements are connected to the Internet, how do we cope with them? The answer is cloud computing. Lew explained the legacy data center and cloud data center.

Legacy data center (slide from Lew Tucker’s speech)

In the traditional data center, each application is tied to a physical server; thus when you add an application, you also add hardware and complexity. In the cloud data center, you do not need to add hardware, even if you add more applications, and therefore it is more efficient.

Cloud data center (slide from Lew Tucker’s speech)
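To see why the cloud model needs less hardware, here is a minimal sketch; the utilization numbers are mine, not from Lew's slides. In the legacy model each application gets its own server, while in the cloud model applications share servers sized by their actual load.

```python
import math

# Hypothetical per-application utilization of one server (illustrative only).
app_utilizations = [0.10, 0.15, 0.05, 0.20, 0.10, 0.25, 0.05, 0.10]

# Legacy data center: one physical server per application.
legacy_servers = len(app_utilizations)

# Cloud data center: applications share servers packed by total load
# (a first-order estimate that ignores headroom and failover).
cloud_servers = math.ceil(sum(app_utilizations))

print(legacy_servers, cloud_servers)  # 8 vs. 1
```

Even this crude estimate shows an eightfold reduction in hardware, and the power draw scales accordingly.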

Lew’s speech can be seen here (running time: about 14 minutes). It is worth your 14 minutes. His presentation is available here for free.

OK, this blog is all about energy efficiency. How is his speech relevant to that? IT and data centers are often blamed for their power consumption. If the 1 trillion, 1 million, and 1 zetta situation becomes a reality, how much more power will the legacy data center consume? In the cloud data center, redundant hardware and software (by way of service-oriented architecture, or SOA) will be eliminated, yielding greater energy efficiency.

Tags:  Cloud  Cloud connect 2011  Keynote  Lew Tucker 


Clouds Getting More Complex

Posted By Zen Kishimoto, Wednesday, January 13, 2010
Recently we have seen new types of clouds: first public, then private. Moreover, the external and internal classification has been added. Now another criterion, the service level agreement (SLA), may be added, and cloud types may vary according to different levels of service.

Jay Fry, a cloud computing expert, wrote a very interesting blog in conjunction with CA’s acquisition of Oblicore. Note that Jay was VP of marketing at Cassatt, a cloud computing management company that CA acquired last June.

Jay explains what Oblicore does:

"The Oblicore technology is best known for its ability to take technical information about what’s going on in your IT environment and correlate that with the business-level information held in your service level contracts. The company’s name is an appropriate summary: service level "obligations" are at the "core" of your business."

As Jay pointed out, it is obvious that IT exists to serve the business. But many techie IT folks (maybe including yours truly) lose sight of this fundamental fact. Discussions about green data centers tend to focus on how IT and facilities can cooperate to make data centers green. It is important to discuss the two organizations’ collaboration, but we tend to forget why we purchase and operate servers in the first place. Without the business, we would not require IT equipment like servers, much less data centers to operate and manage it.

Clouds are getting more complex. It is no longer just a matter of the public cloud; we need to consider control and security (the private cloud), and now SLAs. SLAs are complicated because they cover many aspects of the services IT provides. Right now, some vendors, such as GoGrid, boast that you can have your cloud resources right away with a few clicks and a credit card. Before too long, we may be able to specify public or private, internal or external, and various parameters for our required SLAs.
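As a thought experiment, such a provisioning request might look something like the sketch below. All field names are invented for illustration and do not reflect any real vendor's API.

```python
from dataclasses import dataclass

@dataclass
class CloudRequest:
    """A hypothetical cloud provisioning request with SLA parameters."""
    deployment: str      # "public" or "private"
    location: str        # "internal" or "external"
    uptime_pct: float    # SLA: availability target, e.g., 99.9
    max_latency_ms: int  # SLA: response-time bound

    def validate(self) -> bool:
        # Reject requests with unknown deployment models or
        # nonsensical availability targets.
        return (self.deployment in ("public", "private")
                and self.location in ("internal", "external")
                and 0 < self.uptime_pct <= 100)

req = CloudRequest("private", "external", uptime_pct=99.9, max_latency_ms=50)
print(req.validate())  # True
```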

Tags:  CA  Cassatt  Cloud  Jay Fry  Oblicore  SLA 


The Progression from Data Center to Cloud

Posted By Zen Kishimoto, Friday, January 8, 2010
In this blog, I have pointed out several times the confusion about the difference between a data center and a cloud. Some people think a cloud is a mysterious entity floating somewhere without consuming much energy. Data center people tend to focus on the physical components in their data centers and do not have a good grasp of what a cloud is. Very few good explanations have linked the two.

A good friend of mine, Ken Novak, wrote a good blog to explain how a data center evolves into a cloud.


He started with an existing data center and explained how it becomes a cloud in the following progression:
  1. Consolidated data center
  2. Virtual data center
  3. Private cloud
  4. Public cloud

First, the consolidated data center employs virtualization and disk consolidation via networking. This has been done up to this point. Second, the virtual data center enhances the consolidated data center further with advanced virtualization, like virtual machine migration. Third, the private cloud enhances the virtual data center by further enabling automation. Finally, the public cloud enhances the private cloud by supporting many more users with economies of scale.
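Ken's four-stage progression can be restated as a checklist, where each stage adds one capability on top of the previous stage (the capability names are my shorthand, not his):

```python
# Each stage requires the capabilities of all earlier stages plus one more.
STAGES = [
    ("consolidated data center", "consolidation"),        # virtualization + disk consolidation
    ("virtual data center", "vm_migration"),              # advanced virtualization
    ("private cloud", "automation"),                      # further automation
    ("public cloud", "multi_tenant_scale"),               # economies of scale, many users
]

def current_stage(capabilities):
    """Return the furthest stage whose cumulative capabilities are present."""
    reached = "conventional data center"
    for name, needed in STAGES:
        if needed in capabilities:
            reached = name
        else:
            break
    return reached

print(current_stage({"consolidation", "vm_migration"}))  # virtual data center
```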

This is a great classification because there is a definite gap between a conventional data center and a cloud. You cannot jump from an existing data center to a cloud in one leap. I have two comments on this. The first is that it is sometimes hard to distinguish one from the other, especially at the boundary. The second is that private and public clouds are developed for different reasons, and I am not sure whether the public cloud is an evolution of the private cloud. Nevertheless, it is good to have a classification like this.

However, what matters is not defining exactly where you stand but taking appropriate action to plan the jump to the next stage of cloudization.

Tags:  Cisco  Cloud  data centers  Ken Novak 


Tier1’s Data Center Transformation Summit: Managed Services and Cloud Panel

Posted By Zen Kishimoto, Tuesday, December 15, 2009
Updated: Wednesday, December 16, 2009

This panel consisted of the following people:

  • Moderator: Dan Golding, Tier1 Research, VP and Research Director
  • Steve Herman, VP of DC Business Development
  • Denoid Tucker, VP Technology, RagingWire Enterprise Solutions
  • Steve Prather, VP of Sales Operations, ViaWest

From left: Steve Prather, Denoid Tucker, Steve Herman, and Dan Golding (standing)

This panel was eye-opening for me. The panel’s consensus was that a physical data center can be abstracted out to a virtual data center, which is still not a cloud. If you further abstract out the virtual data center, you get cloud.

Cloud is a funny thing. Everyone is very interested in it, but there is no standard definition. Another thing is that "everyone loves a cloud," according to Golding. He also said that he had never heard anyone say they love colocation.

Until recently, enterprises were kicking the cloud’s tires to see whether it made sense to adopt it. But in the past six to eight months, the hosted private cloud has become popular. Cloud computing is always discussed in conjunction with virtualization, but after listening to Tucker’s description, I changed my mind.

RagingWire and ViaWest implement cloud in similar but not identical ways. Both provide colocation and cloud services at the same data center. A client may lease a cage for its operation but may need additional resources (e.g., because of seasonal demand) from the cloud, as shown below.


The difference is that ViaWest uses virtualization and RagingWire does not. ViaWest hosts each client virtual machine (VM) in a multitenancy manner (shared hardware). RagingWire without virtualization pulls necessary hardware from a pool of resources and allocates the dedicated hardware to each user. Even though RagingWire does not use virtualization, it supports resource allocation on demand (maybe not as granular as virtualization).
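The two allocation models can be sketched roughly as follows; the capacities and packing logic are illustrative, not either vendor's actual implementation.

```python
def allocate_shared(requests, host_capacity=8):
    """Multitenant (ViaWest-style) model: pack client VM requests,
    given in capacity units, onto shared physical hosts."""
    hosts = []
    for client, size in requests:
        for host in hosts:
            if host["free"] >= size:      # fits on an existing shared host
                host["vms"].append(client)
                host["free"] -= size
                break
        else:                             # no room anywhere: add a host
            hosts.append({"vms": [client], "free": host_capacity - size})
    return hosts

def allocate_dedicated(requests, pool):
    """Dedicated (RagingWire-style) model: hand each client whole
    machines pulled from a hardware pool, never shared."""
    grants = {}
    for client, machines_needed in requests:
        grants[client] = [pool.pop() for _ in range(machines_needed)]
    return grants

shared = allocate_shared([("a", 3), ("b", 3), ("c", 4)])
print(len(shared))  # 2 hosts: clients a and b share one, c gets the other
```

The trade-off in the text falls directly out of the sketch: the shared model uses fewer machines, while the dedicated model guarantees that no two clients ever touch the same hardware.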

RagingWire’s approach is more secure because one client’s resources are never shared with another client. Although many modern virtualization technologies isolate one VM from another to prevent intrusion, the dedicated solution would still be more secure. OpSource’s implementation supports cloud in both a multitenant and a dedicated manner.

Cloud is no longer just public vs. private. It is getting very complex.

Tags:  Colocation  Cloud  Managed services  RagingWire  Tier1 Data Center Transformation Summit  Virtualization 


Standardizing Cloud Computing by Object Management Group

Posted By Zen Kishimoto, Monday, August 24, 2009
I participated in the Object Management Group’s (OMG) activities a long time ago, especially its Common Object Request Broker Architecture (CORBA) initiative. OMG is a consensus-oriented organization that does not often require voting. When I represented NEC in creating the C version of the CORBA Interface Definition Language (IDL), NEC sided with Sun and other companies against the eventual winner of the specification, whose name I cannot recall. We held the first vote in OMG’s history to decide which specification won. The room was packed with people waiting to see the outcome. We lost, but it all comes back to me as if it were yesterday.

Well, the nostalgic memory aside, SD Times reports that OMG is trying to set standards for cloud computing. The most recent Cloud Standards Summit was held July 13.
SD Times reports the following major participants:

Kevin Jackson wrote a blog on this with more information about participants.

This is a good step towards the standardization of cloud computing, including standardizing terminologies. I complained about the lack of standards for defining the cloud in my previous blog. This step may stop the repeated iterations of cloud computing definitions. I wonder, however, if it will. The notion of cloud computing is fairly simple, but more and more potential uses and new technologies will send cloud computing down several evolutionary routes. No matter how quickly standards are set, they will lag behind the real speed of cloud computing’s progression. There is a term greenwashing, but there is no term cloudwashing. It will be a long time before reasonable standards emerge for cloud computing. But it is certainly a good sign that cloud computing is entering the mainstream of IT and computing.

Also, note that NIST has just published a Working Definition of Cloud Computing.

Tags:  Cloud  Cloud Standards Summit  OMG  SDTimes  Standards 
