efficient power converters, you can reduce the amount of heat
and hence not only reduce the cooling costs but also increase
the MTBF of the product. Reducing heat and improving system
reliability also puts fewer demands on secondary protection and
redundancy, leaving system designers more of the available space
for better designs.
ECN: What are some of the latest trends for backup power to handle utility power disruptions? What are some of the monitoring and control trends? Where does the UPS system fit in?
AJ: Fuel cells, such as those from Bloom Energy, and high-voltage battery technology (384-V and 48-V batteries).
ECN: What additional telecom/datacom/networking trends do
you expect to emerge in the near future?
AJ: More and more ASICs are being designed in smaller process geometries, demanding core voltages south of 0.9 V and currents north of 90 A. This is a significant challenge for traditional power converters in terms of conversion efficiency. As system designers pack in more features, board space becomes severely constrained, pushing the power conversion into a small area toward the tail end of the board. In addition, rising ambient temperatures challenge both the useful power that can be obtained and the MTBF of the power converter. High-power GPUs are being considered in place of CPUs to cater to increasing data-processing demands, which directly impacts power delivery and distribution architectures.
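The conversion-efficiency challenge AJ describes can be made concrete with a rough calculation. The following is a sketch using the figures quoted above (a sub-0.9 V rail at 90 A); the efficiency values themselves are illustrative assumptions, not data from the interview:

```python
def converter_heat(v_out, i_out, efficiency):
    """Return (input_power, dissipated_heat) in watts for a DC-DC
    converter delivering v_out * i_out at the given efficiency."""
    p_out = v_out * i_out
    p_in = p_out / efficiency
    return p_in, p_in - p_out

# A 0.9 V core rail at 90 A is an 81 W load; every point of
# efficiency lost becomes heat the system must remove.
for eff in (0.85, 0.90, 0.95):  # illustrative converter efficiencies
    p_in, heat = converter_heat(0.9, 90.0, eff)
    print(f"{eff:.0%} efficient: {p_in:.1f} W in, {heat:.1f} W as heat")
```

Even a few points of efficiency at these current levels translate into several watts of heat per rail, which is why low-voltage, high-current conversion is singled out as the hard case.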
ECN: How is the datacenter meeting the huge energy demands that mobile devices, cloud storage, and the Internet of Things all create?
KW: So let’s start with definitions. When we say datacenter, there are actually a lot of different types of datacenters, and we can generally put them into two groups. There are datacenters that are operated by an entity for its own use. Then there’s another kind, the on-demand datacenter: datacenter infrastructure deployments that are rented out to different entities as they need networking or storage capabilities. We see that the majority of the demand is now being supported by these datacenters that are available on demand. Today’s deployments are pretty traditional in terms of the infrastructure solutions they use. Depending on the size of the installation, they are based on either a UPS power backup system, DC systems with batteries, or potentially a 12-V infrastructure, which is the lowest power grade.
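The difference between these voltage tiers follows from Ohm's law: for a fixed power draw, current scales inversely with distribution voltage, and resistive loss scales with the square of the current. A minimal sketch comparing the tiers mentioned in the interview (12 V, 48 V, and the 380 V DC discussed later); the 10 kW load and 10 mΩ bus resistance are illustrative assumptions:

```python
def bus_loss(power_w, volts, resistance_ohm):
    """I^2 * R loss in a distribution bus carrying power_w at volts."""
    current = power_w / volts
    return current ** 2 * resistance_ohm

# Same 10 kW feed, same 10 mOhm bus, three distribution voltages.
for v in (12, 48, 380):
    loss = bus_loss(10_000, v, 0.010)
    print(f"{v:>4} V: {10_000 / v:7.1f} A, {loss:9.1f} W lost in the bus")
```

The quadratic current dependence is why a 12-V infrastructure only makes sense at the lowest power grade and over very short runs, while high-consumption facilities push toward 380 V DC.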
ECN: I’ve read that companies that currently manage their own servers only use perhaps 10 percent to 20 percent of the available computing cycles, which otherwise translates into wasted energy and cooling. Is cloud computing the best available option to address the inefficiency, or are there other options?
KW: It depends on the level of demand you’re looking for. There are very few entities that need the full capability of a datacenter deployment. Really, there are two kinds of needs: one is electrical power and the other is computational power. Computational power is really about how many processors you
have available to serve your requirements and the other is how
much electrical power is consumed by the datacenter in order
to provide the level of services required…. Cloud computing
is a way of using existing infrastructure and deploying it more
intelligently both in terms of computational power and in terms
of electrical power use. Cloud computing opens up the door not
necessarily with new hardware but simply with a more intelli-
gent software solution to optimize the use of the infrastructure
that’s deployed; so from that perspective, it enables a new level
of capability that the hardware has always had but is now really
available to a lot more people.
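KW's point about software making better use of existing hardware can be illustrated with a toy server-power model. This is a sketch: the idle and peak power figures and the linear power model are assumptions for illustration, not data from the interview, but they reflect the fact that a server draws substantial power even when nearly idle.

```python
def server_power(utilization, p_idle=100.0, p_peak=300.0):
    """Simple linear model: wall power in watts at a given utilization."""
    return p_idle + (p_peak - p_idle) * utilization

# Ten workloads, each needing 15% of one server's compute capacity.
dedicated = 10 * server_power(0.15)    # one lightly loaded server each
consolidated = 2 * server_power(0.75)  # packed onto two busy servers
print(f"dedicated:    {dedicated:.0f} W")
print(f"consolidated: {consolidated:.0f} W")
```

Because the idle floor dominates at 10–20 percent utilization, consolidating the same workloads onto fewer, busier machines cuts total draw sharply, which is exactly the intelligent use of deployed infrastructure that cloud scheduling software enables.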
ECN: How are datacenter infrastructure management software
(DCIM) platforms changing the data center? What should server
and storage systems designers know about DCIM?
KW: So let’s take a classic issue the telecoms had to deal with for a while, which is backup. Part of the reason why there’s a pretty substantial cost associated with the power infrastructure for telecom centers historically is that you need to be up all the time. You
can’t really suffer a lot of downtime. And to do that, you basically
have to maintain the power delivery to the infrastructure that
you’re putting in place. Now with datacenters, historically you had a similar situation, where you simply could not afford to have racks in a datacenter go down; and even today, it’s not desirable.
The difference is that today with cloud computing you have the
ability to move the content or move the delivery of the content
around in such a way that even if you have localized power loss
or you have some kind of localized weakness in the network, it
doesn’t hamper your ability to deliver the content to everybody
who’s looking for it. So the ability to intelligently move content,
especially with just a little bit of warning, has changed the power
requirements in the datacenter from being something that needs
to be up all the time to needing to be powered intelligently, and
you can now decide at what level it’s OK to not be up.
ECN: When designing or scaling up a datacenter, how is power
distribution and control considered?
KW: There is no one approach that has been established today as a core solution. Right now the industry is in a state of experimentation, where everybody’s trying different things to see whether this or that is a better alternative going forward. The one that is emerging as a potential conduit connecting the pieces is high-voltage DC power infrastructure: 380 V DC as a step up when you go into really high-consumption datacenters. There’s
UPS systems, there’s 48-V like traditional telecom DC systems,
Q&A with Karim Wassef of GE Energy Management