Why clean data is as important as clean energy

Zahl Limbuwala, Co-Founder & Executive Director

Renewable energy initiatives have been on the news agenda for the last couple of weeks.

According to Bloomberg, a large proportion of the Fortune 500 has set clean energy goals in response to the savings generated by renewable power. As companies amass huge amounts of data, a significant part of their strategy for reaching these ambitious goals will involve data centers, whether a business owns them, builds them or uses them in the cloud.

Apple is leading the way in this area. The company recently released its Annual Environmental Responsibility Report which provides a detailed outline of the steps it is taking to ensure its data centers are environmentally friendly.   

Of course, in order to assess progress and success, these companies will also need to track and report against sustainability and energy efficiency metrics.

Accurate, reliable data is critical

But metrics are only as good as the accuracy of the data feeding into them. If companies put sustainability at the core of their business strategies, the metrics they set will be heavily scrutinized.

So, what happens if raw data from data centers is not properly cleaned and validated, leading to weeks and even months of incorrect and misleading information?

The result will be an embarrassing anomaly in the operational report and a lot of awkward explaining to managers, stakeholders and potentially customers and shareholders.

Accurate, reliable data is central to a serious sustainability initiative; collecting raw data and presenting it is simply not enough. After all, important decisions about a facility’s environmental profile are made on the basis of that data, so it needs to be spot-on.

The key to meeting environmental goals with confidence is to collect, clean, validate and then analyze the data relating to energy efficiency, carbon emissions and water consumption, so that the business can trust it. In this way, data center managers can quickly understand which areas need adjustment and remove the risk of making poorly informed decisions based on bad data when planning changes or improvements for each facility.
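As an illustration of what that collect-clean-validate step can look like in practice, here is a minimal sketch in Python using pandas. The file format, column names and thresholds are hypothetical examples, not a description of any particular platform.

    # Minimal sketch: validating raw meter readings before any
    # sustainability metric is calculated. Column names and thresholds
    # are hypothetical examples.
    import pandas as pd

    def clean_meter_data(path: str) -> pd.DataFrame:
        df = pd.read_csv(path, parse_dates=["timestamp"])

        # Drop readings with missing values rather than guessing at them.
        df = df.dropna(subset=["facility_kwh", "it_kwh"])

        # Reject physically impossible readings: negative energy, or an
        # IT load larger than the total facility draw.
        valid = (
            (df["facility_kwh"] > 0)
            & (df["it_kwh"] > 0)
            & (df["it_kwh"] <= df["facility_kwh"])
        )
        df = df[valid]

        # Flag sudden jumps (for example a meter reset) for manual review
        # instead of letting them flow silently into a report.
        df["suspect"] = df["facility_kwh"].pct_change().abs() > 0.5
        return df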

Another crucial point for organizations with clean energy objectives is the planning of data centers. A report containing incorrect data could lead to a design that struggles to meet the business requirements, or to large budgets being spent without a verifiable return on investment. Analysis of available data can help to ascertain the most economic, sustainable and cost-effective design options and locations before a spade even hits the ground.

It is reassuring to see so many renewable and environmentally-minded projects being initiated by world leading organizations. Let’s hope they pay as much attention to clean data as they do to clean energy.  

Romonet named an Innovator by IDC Report

We’re one of only three vendors in IDC’s latest report recognising innovative companies in the data center industry. Our platform offers complete data center lifecycle analytics and, as IDC states, “provides a single, accurate way of reporting data to key decision makers.”

"Running an agile IT environment requires an equally agile physical facility that is prepared to accommodate demanding and fluctuating IT loads. Technologies that improve the ability to manage the physical environment are essential, especially as data center resources become more distributed to support digital transformation and IoT initiatives," said Jennifer Cooke, research director, Datacenter Trends & Strategies at IDC.

Read the full report.

Sustainability is back on the agenda

It’s been quite a while since my last blog; I’ve been pretty busy at work and have been keeping abreast of the latest in analytics, machine learning and AI technologies. There has been a pretty big resurgence in the world of sustainability, especially in the data center sector. Initially I was a little surprised, but there’s more substance to the movement this time than there was pre-2009, before the economic crash pretty much killed sustainability and green initiatives as a board-level issue.

Before the last recession the green movement in the data center sector had gathered quite some pace. There was a lot of good work done by the BCS, Green Grid, EPA, LBNL, METI and others around metrics and tracking of how green a data center was. Indeed, it was this very movement that gave birth to the J.R.R. Tolkien of metrics, PUE - one ring/metric to rule them all…get it?

Even way back then, when I was chairman of the BCS Data Center Specialist Group, raising awareness of data center energy efficiency (or lack thereof) was best done by talking to environmental lobbyists.

Greenpeace started its Click Green Report back in 2010, naming and shaming companies for how green their data centers weren’t. The Click Green program initially examined energy efficiency in the data center but has evolved to encompass a much broader scope since then.

When the global economic downturn descended upon us, most of the less publicly visible corporate world (which back then included most data center companies) put green on the back burner and focused on saving money instead.

I have to say that this always seemed an unwise move to me, because in most cases any green initiative worth its salt, especially in the energy efficiency arena, should have a good financial ROI and not just a green-brownie-points ROI. The issue was that many didn’t have the tools, context or knowledge to assess and build strong green and financial business cases that could stand up to scrutiny or any sort of third-party validation.

Thus I was very happy this year when my conversations with both customers and other industry pundits once again started to include the green agenda, now more often referred to as sustainability. I prefer the term sustainability as it encompasses much more than energy efficiency, taking in water consumption, embodied carbon, sustainable construction practices and so on.

Before everyone jumps back on the ‘we need more metrics’ bandwagon, let me say: no we don’t! Stop, put down the white paper draft on the new ‘super all-encompassing one sustainable metric to rule them all’! Please just keep it simple: collect, track and analyse data indicating your energy efficiency, carbon emissions (a calculation from energy) and water consumption, and you’ll be well on your way to improving your data center’s sustainability.

Oh, and by the way, remember that raw data from sensors and instrumentation points is not necessarily an accurate representation of what’s going on, so don’t simply believe it (trust me, we clean and validate data for a living). And if you are a service provider, not being able to allocate a fair share of your overall carbon emissions to your customers is less than ideal.
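To make those two points concrete, here is a minimal sketch, again in Python, of the carbon calculation from energy and a simple fair-share allocation to customers. The grid emission factor and the customer figures are made-up examples; real factors vary by grid and by year, and real allocation rules are more involved.

    # Carbon emissions as a calculation from energy, plus a simple
    # fair-share allocation by each customer's share of IT energy.
    # The emission factor and all figures below are illustrative only.
    GRID_FACTOR_KG_PER_KWH = 0.25   # hypothetical grid carbon intensity

    def carbon_kg(energy_kwh: float) -> float:
        return energy_kwh * GRID_FACTOR_KG_PER_KWH

    def allocate_emissions(total_facility_kwh: float, customer_it_kwh: dict) -> dict:
        total_it = sum(customer_it_kwh.values())
        total_co2 = carbon_kg(total_facility_kwh)
        # Each customer carries a share of facility emissions in
        # proportion to their share of the IT load.
        return {c: total_co2 * kwh / total_it for c, kwh in customer_it_kwh.items()}

    print(allocate_emissions(1_500_000, {"customer_a": 800_000, "customer_b": 200_000}))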

Sustainability is a board level issue again and claiming your IT is zero carbon because it’s all in the cloud is not going to cut it as far as brand value is concerned, at least in the public’s eyes.

Zahl Limbuwala, Romonet CEO

How we became a Data Center Knowledge Startup to Watch

We passionately believe in the potential of Big Data and the power of our Platform, and it is flattering when organizations in our industry recognise what we are doing.

Earlier this year Data Center Knowledge named us as a 2016 Startup to Watch. The editorial team chose a selection of companies addressing some of the most significant challenges facing data center managers and executives today.

Most of these companies have emerged from humble beginnings and evolved rapidly, us included. Just a few years ago, being able to compare the expected and actual performance of a data center, and to model, simulate, predict and control its energy consumption, capacity, total cost of ownership and environmental risks, was considered an elusive concept.

We believe we are changing that perception. Take Intel as an example. With our Platform the company is providing its clients with the operational understanding they need to make more informed decisions.

Intel's objective was to assess whether, given the significant energy cost and capital expenses associated with cooling technology, there were opportunities to run data center facilities at higher ambient temperatures. Our Platform proved Intel's theory to be correct.

Intel's challenge is mirrored by many other enterprises posing questions such as: how do you model capacity and predict technology inflexion points? How do you know when to implement the right technologies to deliver the greatest return on investment? What happens to IT performance if you challenge accepted operational parameters and push boundaries?

In the last seven years, we have modeled 500 data centers with 98% accuracy, justified $800 million worth of investment and answered the above-mentioned questions for enterprise data centers and cloud and colocation providers.

Finance and Operations Working in Harmony

At the crux of these 'what if' scenarios is one financial question – how much does it truly cost to run a data center?

In another recent Data Center Knowledge article, the publication explained how complex that question is to answer without tools such as Romonet.

This is where Romonet adds value to an organization. With the power of predictive analytics, both enterprise-class data centers and those businesses providing hosting, colocation and cloud services (multi-tenant data centers) can address inefficiencies, uncover significant savings, increase infrastructure performance and maximize profitability.

That said, sometimes the information we deliver is used for alternative purposes. For example, Iceotope manufactures servers for cloud service providers and HPC environments. Its liquid-cooled server platform has been modeled and engineered to ensure it harvests as much heat from electronics as possible in the most efficient way. As a result, organizations can reduce data center cooling costs by up to 97%, ICT power load by up to 20% and overall ICT infrastructure costs by up to 50%.

Iceotope used Romonet to analyze and prove the performance benefits of its technology compared to traditional, air-cooled servers. Armed with this accurate, quantifiable data, the company secured $10 million in funding to continue developing its technology.

The challenges facing data center operators and managers extend far beyond simplistic energy targets. They include everything from profit & loss (P&L) targets, return on investment (ROI), total cost of ownership objectives and Corporate Social Responsibility (CSR), to regulatory compliance and how a company sources its natural resources.

Designing a Platform that solves this multitude of challenges is an exciting path; it is made all the more satisfying when those in the industry agree with what we’re attempting to achieve.

To PUE or not to PUE? Is that the question?

"OMG!" I hear you say! Not another blog wanting to debate the pros and cons of PUE!
I'm not writing this to re-open (did it ever close?!) the debate about PUE. I'm here to talk about how those of you who use it today to track your data center performance can greatly improve its value to you and your business.

But first a little history.....

Many years ago in a land far far away...well it wasn't that far actually; it was Milan in northern Italy. Three guys sat around a dinner table chatting about how the data center industry just needed to start measuring something simple that gave an indication of how efficiently data centers were using energy.

I was one of the three along with Liam Newcombe (my CTO) and Christian Belady (Mr Data Center at Microsoft) and we'd just spent the day together at the European Commission's Joint Research Center (JRC) in Ispra, Italy, which is a very impressive campus where real science happens funded mostly by the EU member states.

Actually, while it's an impressive site, because of the large number of non-European attendees the meeting didn't take place inside the JRC itself, but rather in the big meeting hall above the JRC tennis club, just outside the high-security fences of the JRC grounds.

Christian had done a good job at that meeting of pitching PUE for use within the European Code of Conduct for data centers.

Luckily, Christian and Liam (who was the primary author of the original code and its best practice guidelines at the time) saw eye to eye about the use of PUE. It was the first time they'd met but it was clear to me (being mostly a spectator during much of the conversation that transpired over dinner) they were both cut from the same cloth.

With the might of the Green Grid and many vendors behind it, PUE went on to become the de facto metric for representing data center infrastructure efficiency (can you spot the irony there?).

Today many people spend many hours of their lives trying to explain to others in their company, usually the senior ranks, why the data center PUE getting "worse" (becoming a larger number) is not necessarily a "bad" thing and doesn't necessarily mean they haven't done their job properly in terms of looking after the data center.

The problem with PUE (and clearly the industry knows that PUE is far from a perfect metric) is that using the absolute PUE number to track the performance of a site only tells part of the story.

"We know this already" I hear you say...

Yes, you already know that without asking what the corresponding utilization is, you can't really take a view on whether the number in front of you is good, bad or indifferent, or whether it can or should be improved (we already know that every data center reaches an inflection point where you start trading TCO for PUE). In an economized data center, you'd also be well advised to ask what the climate did too.

Most organizations today target their data center managers on an absolute reduction of their site's PUE, but is that really a good plan? How do you know when you’ve reached that inflection point? While you might continue to shoot for as low a PUE as you can get, you may unknowingly be targeting the DC manager to increase your overall Total Cost of Ownership!

Also, looking at the absolute PUE value is a bit of an unfair measure for the site manager, because generally site managers have no control over what happens with the IT load; servers come in, go out, their level of utilization fluctuates, and so on. We already know that improving server utilization through virtualization and consolidation, for example, will often make your PUE worse because the total IT load goes down; for any enterprise IT operator that is a good thing, but for a colo operator it's generally a bad thing.
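To see why, recall that PUE is total facility energy divided by IT energy, and consider a purely illustrative set of numbers: a site drawing 1,500 kW with a 1,000 kW IT load has a PUE of 1.5. Consolidate that IT load down to 700 kW and, because much of the cooling and power-distribution overhead is fixed, the facility might still draw around 1,150 kW, giving a PUE of roughly 1.64 even though total energy consumption, and the bill, actually fell.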

In an economized site the PUE will vary significantly with the outside temperature. I've yet to meet a DC manager who has any control over their local climate, so a particularly warm year may mean it's simply impossible to meet their PUE reduction target for the year.

Dynamic setting of PUE targets and tracking performance against them

With the introduction of predictive system level modeling for data centers (and no I don’t mean a CFD model) it is possible to build a highly calibrated (98% calibration accuracy) model that will allow you to do a number of things:

  1. Verify that your data center is performing at its optimum PUE given the way it’s been designed and built, and the load and climate it’s operating with.
  2. Where it's not operating at its optimum PUE, the model is able to show you why, as well as where and how to improve it.
  3. Using the metered data from the site, and continuously feeding in the actual climate data and actual IT load, the calibrated model of a now fully optimized data center will continuously and dynamically tell the site manager what the PUE "should be" if everything is working as expected - something we call the "expected PUE".

Now, with this dynamically calculated "expected PUE" to compare against the actual PUE, the target for the site manager should be to keep the "expected vs actual PUE" gap within an acceptable tolerance. Remember, the expected PUE will automatically adjust itself for variation in IT load and climate, so it’s a fair and equitable target and more appropriately represents the actual domain of control that a site manager can impact.
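As a rough illustration of that kind of target, the sketch below compares actual PUE against an expected PUE rather than chasing an absolute number. The expected_pue function is only a stand-in for a calibrated system-level model, and the 5% tolerance is an arbitrary example, not a recommendation.

    # Illustrative only: track the gap between actual PUE and a
    # dynamically calculated expected PUE. expected_pue() is a
    # placeholder for a calibrated predictive model.
    def expected_pue(it_kwh: float, outside_temp_c: float) -> float:
        # A real model is fitted to the site's design and metered data;
        # these coefficients are invented for the example.
        return 1.2 + 0.004 * max(outside_temp_c - 15.0, 0.0) + 100.0 / max(it_kwh, 1.0)

    def within_tolerance(facility_kwh: float, it_kwh: float,
                         outside_temp_c: float, tolerance: float = 0.05) -> bool:
        actual = facility_kwh / it_kwh
        expected = expected_pue(it_kwh, outside_temp_c)
        # The target is the expected-vs-actual gap, which adjusts
        # automatically for variations in IT load and climate.
        return abs(actual - expected) / expected <= tolerance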

Now of course you may say "well I could still improve the PUE by making more impactful changes" and you’d be right, whether it’s increasing set points, changing to a different control strategy or upgrading to more efficient drives or equipment, all of these things could well improve the site's PUE.

Another benefit of having a calibrated predictive model is that you can rapidly try out all the different things you might do to your site to improve its PUE, and if the model is capable of modeling cost as well as PUE, you can make some really well-informed decisions about which actions to take to reduce the absolute PUE of your site without unknowingly going past that TCO vs PUE inflection point.

Don't sit back and think you’ve just got to live with being beaten regularly with the internal PUE stick! There is a much smarter, more significant and valuable way to use this important industry metric that will help you manage and reduce PUE using meaningful and achievable targets that take account of all the variables that impact the site’s performance.

Zahl Limbuwala, CEO of Romonet

The Rise and Rise of the Data Center CFO

The data center market has enjoyed many decades of almost unbridled and recession-proof growth. But this year, more than any other, we can see the inevitable signs of a market that's rapidly maturing and finding its longer-term feet in the form of bigger, stronger and (in theory) more financially sustainable data center businesses: businesses that provide the core underpinning resource of the digital and internet-based economy.

That said, this is a significantly different marketplace than it was just a year ago.

The focus has shifted from top-line to bottom-line performance, and with the ever-increasing challenge of understanding, controlling and managing the financial drivers of these businesses, a long-overdue change in operational and financial management is required.

It used to be sufficient to put together an Excel-based financial model for each asset that spoke to the capital requirement, expected operating costs and projected revenues, and thus provided a pretty good macro-level yield model.

Roll many of those together and, so long as your model was relatively conservative and you had sales people who could sell, it was almost a sure thing that you'd have a free-cash-flow-generative business.

There were good deals, but also some less-than-great deals, done over the years as far as acquiring customer revenue was concerned for most operators. Quite often the commercial model was tweaked, and even more often larger customers were sold capacity on "special pricing or special terms".

Thus today most multi-tenant operators have a mix of good and 'bad' customers from both a revenue and, more importantly, a margin perspective. However, understanding exactly what the margin per customer actually is remains an amazingly complex and time- and resource-consuming task. In fact it's worse than that: while a point-in-time analysis is hard enough, the reality is dynamic, so by the time you've figured it out the variables have changed and your information is already out of date!
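As a highly simplified illustration of why this goes stale so quickly, the sketch below allocates shared facility cost by each customer's share of IT energy and derives a margin. A real analysis has many more cost drivers, and every figure and the allocation rule itself are invented for the example.

    # Highly simplified per-customer margin for a multi-tenant site:
    # shared facility cost allocated by share of IT energy. All figures
    # and the allocation rule are illustrative only.
    def customer_margins(revenue: dict, it_kwh: dict, facility_cost: float) -> dict:
        total_it = sum(it_kwh.values())
        return {
            customer: revenue[customer] - facility_cost * kwh / total_it
            for customer, kwh in it_kwh.items()
        }

    # Re-run whenever load, tariffs or contracts change, because the
    # answer is out of date as soon as the inputs move.
    print(customer_margins(
        revenue={"A": 120_000, "B": 45_000},
        it_kwh={"A": 800_000, "B": 400_000},
        facility_cost=150_000,
    ))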

CFOs of all competent data center businesses out there will recognize this problem, because it's what they are grappling with right now. If they aren't feeling these issues yet, it's either because of the inertia their business already has (mostly a function of its financial size in this case), or because they are in even bigger trouble than they realize!

CFOs have risen to prominence alongside the CIO within the corporate enterprise due to the ever-increasing budget and importance of technology to a business's competitive advantage, or even just its continued existence in heavily commoditized markets.

The CFO is about to rise to prominence in a similar way within data center companies.

It's no longer tenable to try and manage capital and operational spend using spreadsheets and a finance system alone. Financial models need to be tied to operational models or one will mislead the other, leading to tears and gnashing of teeth.

Financial planning and modeling can no longer be a once-a-year, static, high-cost, time-intensive exercise. Every deal must be rapidly assessed and its margin understood before a contract is signed. Further, the financial performance of each customer must be automatically tracked so that when the dynamics of the asset change, either intentionally or unintentionally, the impact on the financial return is immediately visible.

All of this requires new tools, new capability and automation between operations, engineering and finance that's never existed before, and it is far from easy to create!

Luckily, Romonet exists to solve these problems and meet this need, and we've been anticipating it for the last eight years. So guess what: we are ready and able to help the data center industry go through this next stage of its economic maturity.

Zahl Limbuwala, CEO of Romonet