Thursday, August 30, 2018

The Inevitable Emergence of The Smart Grid

On March 5, 2004, Andres Carvallo defined the smart grid as follows. “The smart grid is the integration of an electric grid, a communications network, software, and hardware to monitor, control, and manage the creation, distribution, storage and consumption of energy. The smart grid of the future will be distributed, it will be interactive, it will be self-healing, and it will communicate with every device.”
And he also defined an advanced smart grid as follows. “An advanced smart grid enables the seamless integration of utility infrastructure, with buildings, homes, electric vehicles, distributed generation, energy storage, and smart devices to increase grid reliability, energy efficiency, renewable energy use, and customer satisfaction, while reducing capital and operating costs.”
The U.S. Department of Energy (DOE) released a handbook on the smart grid in 2009, and in the first few pages, made a distinction between a “smarter grid” and a “smart grid.” By this reasoning, the former is achievable with today’s technologies, while the latter is more of a vision of what will be achievable as a myriad of technologies come on line and as multiple transformations reengineer the current grid. The DOE vision for a smart grid uses these adjectives: intelligent, efficient, accommodating, motivating, opportunistic, quality-focused, resilient, and green.
In effect, all definitions of the smart grid envision some future state with certain defined qualities. So for purposes of discussion and clarity, we have adopted a convention for this book in which we refer to smart grids today as first generation smart grids, or Smart Grid 1.0, if you will. Our vision for the future we define as second generation smart grids, or Smart Grid 2.0; as in the title of this book, we simply refer to the advanced smart grid. And at the end of this book, we envision a future where the smart grid has evolved to an even more advanced state, which we call Smart Grid 3.0.
We use a key distinguishing feature to mark the difference between smart grids as they're envisioned today and how they will evolve as experience is gained and a more expansive vision—our more expansive vision, we hope—is adopted. The difference, while it may seem trivial at first, is fundamental: it has to do with the starting point for the smart grid project. If the project starts off with an application, then that smart grid, by our definition, is a Smart Grid 1.0 project. If, on the other hand, the starting point is a deliberate architecture, design, and integrated IP network(s) that supports any application choice, then it is a Smart Grid 2.0, or advanced smart grid, project.
Nearly all smart grid projects today start with a compelling application, whether generation automation (e.g., distributed control systems), substation automation (e.g., SCADA/EMS), distribution automation (e.g., distribution management system, outage management system, or geospatial information system), demand response, or meter automation, and then design a dedicated communication network that is capable of supporting the functionality of each stand-alone application. Evolved from the silos of the current utility ecosystem (i.e., generation, transmission, distribution, metering, and retail services), the first generation smart grid carries with it a significant level of complexity, often perceived as a natural aspect of a smart grid project.
In fact, a considerable amount of the complexity and cost of a first generation smart grid project derives from its application-layer orientation. Starting at Layer 7 of the OSI Stack, the application layer—regardless of the application—requires complex integration projects to enable grid interoperability, from the start of the smart grid project onward into the future. As additional applications and devices are added to the smart grid, whether as part of the original deployment or subsequently and over time, the evolving smart grid must be integrated to ensure system interoperability and sustained grid operations. In short, starting with the application brings greater complexity, which comes at the expense of long-term grid optimization.
The advanced smart grid perspective begins with a basic tenet. At its core, a smart grid transition is about managing and monitoring applications and devices by leveraging information to gain efficiency for short-term and long-term financial, environmental, and societal benefits. For a system architecture whose principal goal is to leverage information on behalf of customer outcomes, it makes better sense to start with use cases, define necessary processes, choose application requirements, optimize data management and communication designs, and then make infrastructure decisions. A primary focus on the appropriate design process ensures that the system will do what it is meant to do. This key insight—starting at the network layers rather than the application layer—produces the appropriate architecture and design, and drives benefits measured not only in hard cost savings, but also in soft strategic and operational gains.
Network-layer change stresses investment in a future-proof architecture and network that will be able to accomplish not only the defined goals of the present and near-term future, but also the undefined but likely expansive needs of a dynamic digital future, replete with emerging innovative applications and equipment. A well-informed design and resilient integrated IP network foundation puts the utility in a position of strength, able to choose from best-of-breed solutions as they emerge, adapting the network to new purposes and functionality, consistently driving costs out by leveraging information in new ways. The advanced smart grid is foundational; we go so far as to say that its emergence is inevitable.
The advanced smart grid is bound to emerge for two principal reasons. First, electricity is an essential component of modern life, without which we revert to life as it was in the mid-nineteenth century. The loss of electrical power, even for just a few hours, is the ultimate disruption to the way we live. We simply cannot live as a modern society without electricity. And second, at its core, technological progress is all about individual empowerment. But only recently have advances in component miniaturization, computers, software, networking, and device power management technology and the standards that drive their pace of innovation combined to enable individual empowerment in the electric utility industry. A new distributed grid architecture is beginning to emerge that will not only ensure future reliability, but also empower individuals in new ways.
Networks and individual empowerment define twenty-first century technology. It is inevitable that the design of advanced smart grids will begin with a network orientation that is able to accommodate any and all network devices and applications that will emerge in the future. It is also inevitable that advanced smart grids will evolve to ensure an abundant and sustainable supply of electricity and to empower individuals to manage their own production, distribution, and consumption of this essential commodity. The advanced smart grid must be robust, flexible, and adaptable, so it will be; as projects move along the learning curve, society will insist on an advanced smart grid.
An excerpt from The Advanced Smart Grid: Edge Power Driving Sustainability by Andres Carvallo and John Cooper © 2011 Artech House, Inc. Reprinted with permission of the publisher.

Thursday, August 29, 2013

Customer targeting and segmentation analytics

This article excerpt from IntelligentUtility, written by my Jayhawk friend Christine Richards, has some good points and some not-so-good points. I hope that the full report gets into the details I am suggesting below. Let me elaborate on the conversation and help educate the masses.

This is nothing new. Utilities have been doing this for a long time; it is how their pricing rates are created. Utilities are experts at pricing by demographic segment (low income, churches, schools, the elderly, etc.).

Most generalizations about the electric utility industry are useless, especially generalizations about customer marketing and customer segmentation. Why? Because there is no single U.S. electric utility market. The market is segmented into 50 state markets, and within each of those 50 markets there are IOUs (investor-owned utilities), MOUs (municipally owned utilities), and co-ops (cooperatively owned utilities).

Those utilities behave differently due to their ownership, and they operate under different market rules by law. Furthermore, some markets have retail energy competition and some don't. So, for example, the customer segmentation and customer marketing sophistication one can witness in the competitive retail electricity markets (Connecticut, the District of Columbia, Illinois, Maryland, New Jersey, New York, Pennsylvania, and Texas) is totally different from that exercised in non-competitive markets.

So it would be much better to analyze each of these markets on its own terms, so the results can be compared, contrasted, and learned from. Do a price study in the Texas competitive market and publish the results; I guarantee that everyone will be amazed by your findings, as the Texas competitive market has been doing phenomenal things since it launched in 2002.

Location, competition, and innovation are the only three driving forces for better customer segmentation and customer marketing within electric utilities. Without focusing on them, the analysis is truly meaningless to the practitioner.



Customer targeting and segmentation analytics

Excerpts from our just-released publication

Today, I wanted to share some snippets from our hot-off-the-virtual-presses Customer targeting and segmentation analytics industry briefing. In this report, we discuss how it isn't just about satisfying utility customers as a group, but about satisfying customers as individuals. That's a big change for many utilities, but fortunately customer targeting and segmentation analytics can help with that transformation. This report provides a snapshot of where utilities are today with their customer targeting and segmentation analytics, as well as what's ahead. In this article, we pull out just a few highlights, like defining customer marketing, targeting and segmentation, and discussing the state of customer marketing and segmentation efforts.

Defining marketing, targeting, segmentation
In our Institute discussions, as well as one of our working group titles, we use the description of "customer marketing and segmentation." In the report we get a little bit more specific and talk about "customer targeting and segmentation." With this change, we thought it would be helpful to quickly review the differences between marketing, targeting and segmentation. Our analyst, Kim Gaddy, developed these base definitions.

Customer segmentation
Customer segmentation is the process of dividing the customer base into groups based on common characteristics. These characteristics include:
  • Age
  • Gender
  • Income level
  • Location
  • Interests
  • Participation in specific rate plans or programs
  • Level of engagement
  • Payment behaviors
  • Preferred communications channels

Segmentation efforts range from those that are focused on managing the overall utility-customer relationship to very specific segmentation efforts designed to support the marketing of a particular product or program or the prioritization of credit and collections efforts. The benefits of segmentation also extend beyond customers. For example, segmentation of buildings can be a valuable input for a utility's energy efficiency strategy.
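As a toy illustration of the grouping described above, segmentation can be sketched as bucketing customers on shared characteristics. The records and field names below are purely hypothetical; real utility data would come from a CIS or MDMS.

```python
from collections import defaultdict

# Hypothetical customer records (illustrative only).
customers = [
    {"id": 1, "income": "low", "location": "78701", "on_tou_rate": True},
    {"id": 2, "income": "high", "location": "78704", "on_tou_rate": False},
    {"id": 3, "income": "low", "location": "78701", "on_tou_rate": False},
]

def segment(customers, keys):
    """Group customers into segments keyed by shared characteristics."""
    segments = defaultdict(list)
    for c in customers:
        segments[tuple(c[k] for k in keys)].append(c["id"])
    return dict(segments)

print(segment(customers, ["income", "location"]))
# → {('low', '78701'): [1, 3], ('high', '78704'): [2]}
```

The choice of `keys` is what distinguishes a broad relationship-management segmentation from a narrow one built for a single program or collections effort.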

Customer targeting
Customer targeting is the process of applying customer segmentation to specific marketing initiatives or campaigns. Some campaigns may target more than one customer segment. The idea is to improve marketing efficiency by focusing marketing efforts and resources on those customers most likely to, for example, benefit from a particular product, enroll in a particular program, or pay an outstanding balance.
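Targeting, in turn, is just a filter over existing segments: pick the segments a campaign addresses and return only those customers. A minimal sketch, with made-up segment labels:

```python
# Hypothetical segments: segment label -> customer IDs.
segments = {
    "low_income_no_tou": [3, 7, 9],
    "high_usage_residential": [2, 5],
    "delinquent_accounts": [7, 11],
}

def target(segments, campaign_segments):
    """Return the deduplicated audience for a campaign spanning one or more segments."""
    audience = set()
    for name in campaign_segments:
        audience.update(segments.get(name, []))
    return sorted(audience)

# A bill-assistance campaign targets two overlapping segments.
print(target(segments, ["low_income_no_tou", "delinquent_accounts"]))
# → [3, 7, 9, 11]
```

Deduplicating across segments matters in practice: a customer who appears in two targeted segments should receive one outreach, not two.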

Customer marketing
Customer marketing involves a number of components and encompasses both customer segmentation and customer targeting. In general, customer marketing includes four elements, the 4 P's of marketing:

  • Product | This includes product concept, selection, defining the value proposition and product development
  • Price | Determination of a product's price
  • Place | Selection of the distribution channels that will be used to reach customers
  • Promotion | The design and execution of a promotional strategy

How utilities are using targeting and segmentation analytics
Now let's see what utilities are actually doing with targeting and segmentation. In this article, we'll share just a smidge of the results from our customer marketing and segmentation working group survey, including targeting and segmentation status and resources, customer criteria usage and segmentation usage.

According to utility survey respondents, most utilities have some segmentation/targeting framework in place. And this is all happening in spite of the fact that 93% of our respondents haven't developed a business case surrounding segmentation and targeting.

As far as where these frameworks are developed and managed, many different groups are involved in the effort. Of the most frequently mentioned responses, about 67% of respondents said their market research departments are involved and another 50% said their customer marketing departments participate in the effort.

So we know about a few of the groups involved, but what about the people involved? How much time are they spending on customer targeting and segmentation efforts? We found that most utilities have more than a couple of folks working on these efforts. About 31% of utilities have two to five FTEs, and another 31% have five to nine FTEs devoted to this work. Utilities are definitely dedicating resources to developing and maintaining customer targeting and segmentation.

Alright, so we've covered some basics with customer targeting and segmentation efforts. Now let's look at how utilities are applying some of that targeting and segmentation work. The first area we'll explore is which criteria utility companies use most often. When looking at all customers, key criteria used by utilities include past participation in utility programs, customer location, consumption patterns and demographics. If we dig into different customer types, the criteria can vary. For example, we found that criteria for residential customers focus more on rate class, geography and program participation, whereas the top commercial customer criteria include consumption, firmographics and rate class.

Now let's see how this segmentation and targeting information is being used by utilities. In terms of departments using segmentation and targeting information, we found that there are a ton of groups leveraging this information, from engineering to finance. The top departments, however, include marketing/market research, public affairs, product/program management and customer service. The top ways in which these departments are using this information include conservation/usage awareness, customer satisfaction research, message development and customer use patterns.

Another way to look at usage is to look at how segmentation and targeting information can help utility companies shape and deliver their programs and educational materials. Top programs and materials that benefit from this information include special billing, energy efficiency/water conservation, self-service, and different payment options.

And that's just a smidge of what we covered in the report. If you'd like to learn more details about our Customer targeting and segmentation analytics publication, please contact me and I'll be happy to provide you with more details, or you can go directly to our research report page.

Thanks for reading!

H. Christine Richards is a research director with the Utility Analytics Institute. You may reach her at

Monday, August 26, 2013

State of Arizona Wants Retail Energy Competition

I agree with Barbara. It seems that deregulation is back; maybe this time it is here to stay. Clearly, competition works for many industries, delivering consistent innovation in products, services, and customer experiences. The Texas retail energy market is fully competitive and working well. Retail energy competition is the right path for delivering that same consistent innovation in retail energy products, services, and customer experiences.



Arizona clamoring for competition


Arizonans for Electric Choice & Competition (AECC) members and retail competition advocates Direct Energy, Noble Americas Energy Solutions and Constellation NewEnergy have filed comments with the Arizona Corporation Commission (ACC) urging the commission to move forward with opening Arizona's electricity markets to competitive suppliers.

Currently, the ACC continues to deliberate the re-opening of competitive electricity markets in the state.

"As the discussion around opening Arizona's electricity market intensifies, many are busy crying wolf over how competition will impact the state, primarily the incumbent utilities," said Greg Bass, Director of Western Regulatory Affairs, Noble Solutions. "The comments we filed with the ACC last week clearly articulate competition's value based on the facts and the experience of over a decade of retail choice in other states. We look forward to participating in a regulatory process that separates truth from fiction, one which we are confident will lead to the opening of the state's electricity market to retail competition."

Seventeen states including Texas are open to retail electric competition. In fact, a recent J.D. Power survey revealed that customer satisfaction in the competitive Texas electricity market is at an all-time high and customers who participate in the competitive Texas market are more satisfied with their electricity prices than customers served by regulated utilities.

Low prices may be good news for customers, but deregulation can be bad news for utilities. The low rates in Texas have led to a dearth of private investment in the Texas power grid, creating new capacity concerns. In response, the wholesale electricity price cap was raised with the goal of stimulating investment in new power plants.

Thursday, August 22, 2013

Dissecting the utility CIO

This article has some great points and some not so great ones. Let me enhance the conversation.

Electric utilities manage three types of data every day (Operational, Transactional, and Reporting):

Operational Data (OD) to manage their assets. Real-time assets: power plants, transmission lines, and substations, with distributed control systems (DCS), generation management systems (GMS), automatic generation control (AGC), supervisory control and data acquisition (SCADA), and energy management systems (EMS). Right-time assets: distribution lines and devices, meters, and distributed energy resources (DER), with distribution management systems (DMS), outage management systems (OMS), geospatial information systems (GIS), asset management systems (AMS), workforce management systems (WMS), mobile workforce management systems (MWMS), demand response management systems (DRMS), electric vehicle management systems (EVMS), and energy storage management systems (ESMS).

Transactional Data (TD) to manage their wholesale energy markets (WEM) and their retail energy markets (REM): The WEM challenges require the use of energy market scheduling systems, energy settlement systems, load forecasting systems, and external energy trading systems for commodities exchange trading. The REM challenges require the use of enterprise resource planning (ERP), customer information systems (CIS), billing systems, meter data management systems (MDMS), call-center systems, interactive voice response (IVR), websites, customer portals, and customer mobile applications.

Reporting Data (RD): Most utilities prepare some 100-plus reports per month to ship to the many regulatory bodies and agencies that require transparency for their respective programs and/or purview. So utilities use relational database management systems (RDBMS), business intelligence (BI) tools, reporting tools, data warehouse (DW) tools, enterprise content management systems (ECMS), Microsoft Office, etc.


"Big data" is the latest marketing buzzword used to sell more database technology and data storage hardware and systems to utilities and the world at large.

When I left Austin Energy, we had gone from managing 20 terabytes (2003) of annual data to 400 terabytes (2010) of annual data.  I suspect that they are now over 1 petabyte of annual data (if you include the Pecan Street data).

So collecting and managing data is nothing new to utilities. What is new is two things. First, utilities are collecting more and more demand-side data (Pecan Street was created in 2008 at Austin Energy for that purpose) and need to turn that information into new products and services over time. Second, it is finally affordable for utilities to go 100% predictive with all operational data per asset type, thanks to the falling prices of data storage hardware.
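The growth from 20 TB in 2003 to 400 TB in 2010 implies a compound annual growth rate of roughly 53% per year. A quick check (the terabyte figures come from the post; the 2013 projection is purely illustrative):

```python
def cagr(start, end, years):
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end / start) ** (1.0 / years) - 1

rate = cagr(20, 400, 2010 - 2003)   # 20 TB in 2003 -> 400 TB in 2010
print(f"CAGR: {rate:.1%}")          # prints "CAGR: 53.4%"

# If that pace held, the 2013 volume would be ~1.4 PB, consistent with
# the post's guess of "over 1 petabyte" of annual data.
projected_2013_tb = 400 * (1 + rate) ** 3
```

Extrapolating a seven-year growth rate this way is only a rough sanity check, but it shows the petabyte estimate is in the right ballpark.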

Both challenges are huge and will take a journey of innovation. However, I doubt that any value can be unleashed from Operational Data unless deep domain knowledge directs the mining.

So I am NOT on board with the assertion that retail, financial, and telecom CIOs can really do much about the Operational Data of the electric utility. On the contrary, I do agree that retail, financial, and telecom CIOs can help a great deal in unleashing value from Transactional Data and Reporting Data.

I hope this helps.



IntelligentUtility Magazine | Kathleen Wolf Davis | Aug 20, 2013

Utilities know power. They know it intimately, from the inside out. They've been working with it for years; they've been working with it for generations. But, today, we're in the digital age, and, along with knowing that power, utilities also have to know data—about their systems, about their customers, about the weather, about load, about end-use. That's new, and that's a little scary.

To get to know their own data as intimately as they know their own power, utilities have turned to a new type of executive to help them weather this change: the chief information officer (CIO). CIOs have been commonplace in other industries for years, but the role is relatively new to the utility arena. Tom Turco, Deloitte Consulting's power and utilities practice leader, noted that while the gig is indeed fairly new, it's already evolving.

A few years ago, the typical utility CIO was promoted from within, coming up through IT or even operations to take over the job. These days, however, it's not uncommon for utility CIOs to be pulled from outside of the industry: from retail, from banking, from telecommunications.
Turco sees this trend as, perhaps, reflective of a change in need and philosophy: from a CIO who had to understand the technology of what was going on right now to someone who has a plan for the future. "Newer CIOs are looking to define an IT strategy and investment road map to meet the business strategic priorities and required business capabilities to be enabled by technology," Turco said. "Candidates from outside the utility industry can bring in new ideas and thinking into the role."

Still, just because you bring in the strategy capability from outside doesn't mean your team will fully support it. You can call in Warren Buffett to do your financial planning, but if you're not willing to put your money behind his plan, what good has that done you?

Here's a CIO take on that analogy: In a survey of 300 across-the-board CIOs (including utilities) titled "The DNA of the CIO" by Ernst & Young last year, the third-highest barrier cited by CIOs was a lack of a clear corporate strategy. Number one was a real big-boy seat at the executive table. (The report noted that many CIOs feel like executives in title only and, well, like they were seen as support people in bigger britches, to return to that note.) Those two items together can certainly be a barrier to strategic thinking.

But, whether intention or mere perception, if that thinking needs to be more rah-rah and strategy-oriented, do you replace the tech-head with an MBA graduate to compensate? Both Turco and the report noted that there has been a real shift over the past five years toward a business-driven CIO position.

If that's true, it's entirely possible that the hurdles of support and overall company strategy noted by Ernst & Young may simply be lagging behind obvious growing pains for the position—a ripple effect that hasn't yet rippled all the way.

And, while understanding how the technology works was definitely the place to start (that translates as hiring the tech guy for CIO), transitioning to a plan for future technology investment takes a different, more ROI-oriented bit of thinking (that's the new strategy guy). All positions, from peon to president, get more complicated and more layered as time goes on. (Perhaps utilities now need an engineer with an MBA for CIO.)

In fact, this CIO evolution is mirrored in the current tech transition from the smart grid that we've been talking about for years to a convergence of operations and the digital side of utilities commonly called IT/OT integration. We started with smart meters, but now there are layers of sensors and SCADA and back-office and customer data—a similar change in thinking to the CIO tech-to-strategy flip.

Turco noted that this change in CIO thinking isn't right for all utilities and certainly isn't going to be right for every utility, but it is a growing trend in the utilities industry, with pros and cons on each side of the arena.

An inside guy knows your business. He knows your equipment, your people and your processes. He doesn't have to learn those things and can hit the ground running with a wealth of knowledge on how technology and trends may work (or not work) for your specific company. An outside fellow, on the other hand, brings a more objective view, can pipe in new techniques and strategies and doesn't have blinders on when it comes to corporate policy.

Whether your CIO is a tech-head or planning ahead (or, delightfully, has the skills of both), Turco has some advice on areas of focus (beyond the bottom line) for the utility CIO over the next five years or so. First and foremost is cybersecurity, which is now a priority requirement across all industries. Turco also suggests a focus on the value of investments and a tighter partnership with business leaders, along with an eye on the customer, business efficiency and cost management (right back to Ernst & Young's second hurdle there), with analytics, mobility and other technology investments to enhance the experience and results for both customers and workers.

And, more than anything, Turco says a utility CIO needs to be all about value—for the shareholder and for the consumer. And that sounds very strategic indeed.

Wednesday, August 21, 2013

Thunderbolt drive shipments surge, but USB still reigns

Can you imagine a day when terabytes of storage go for pennies, data transfer speeds hit 100Gbps, and devices can power other devices around them with 1,000 watts over normal Ethernet cables? We are surely moving faster and faster toward those numbers. An amazing future lies ahead of us.



Thunderbolt drive shipments surge, but USB still reigns

Hard drive sales are back to normal two years after Thailand floods
Lucas Mearian, August 19, 2013 (Computerworld)

Though still a small part of the overall interconnect market, Thunderbolt-equipped hardware shipments surged 300% over the past year, according to IDC.

There were roughly 20,600 Thunderbolt units shipped in the second quarter of 2012, representing a little over 0.1% of all personal and entry-level storage (PELS) devices shipped. In the second quarter of 2013, Thunderbolt-enabled storage device shipments grew to about 0.6% of the market, a 411% increase, according to IDC analyst Liz Conner.

IDC predicted in its first-quarter report that Thunderbolt, which offers 10Gbps interface speeds, could skyrocket to 5.7% of the PELS market by 2017. But the dominant interface will remain USB.

USB in the PELS market grew by 11.5% year over year in Q2. Ethernet also saw strong shipment growth, posting a 10.2% growth rate in the same time frame.

The SuperSpeed USB 3.1 specification was recently published and jumps I/O throughput (on paper) from 4.8Gbps (in USB 3.0) to 10Gbps, bringing it on par with today's Thunderbolt specification. The next USB spec will also eliminate the need for power cords: the first USB Power Delivery specification is expected to boost the power carried across Power Delivery-certified USB cables from 10 watts to 100 watts. That spec is currently being tested by equipment developers.


The new SuperSpeed specification will increase power to 100 watts and offer bidirectional data and audio/visual transfer, meaning a laptop or monitor with a USB hub could power many other devices, including an HDTV.
Earlier this year, however, Intel also announced that its Thunderbolt specification would double data transfer speeds, opening up peripheral pipes to greater throughput.
"Thunderbolt is definitely growing, but it's hit a few speed bumps along the way," Conner wrote in an email to Computerworld.
Thunderbolt sales have suffered in part because of higher prices compared to USB devices, and "people initially assumed it was an Apple only interface (and it took a while for a PC version of Thunderbolt to be introduced)," Conner said.
Hard disk drive prices were also affected by the 2011 Thailand floods, which shut down manufacturing and resulted in drive shortages. That bumped drive prices higher, pushing consumers toward cheaper options, "which really squeezed out Thunderbolt unless the higher speed was a necessity," Conner said.
Thunderbolt will definitely continue to grow, Conner continued, but more as a replacement for Firewire/1394 and eSATA, not as a replacement to USB.
The speed of Thunderbolt is definitely sought by media professionals and other niche users who really need top-level performance, Conner said. But USB 3.0's speed continues to be "good enough" for most users. That, plus the fact that USB 3.0 is backwards compatible, makes it significantly cheaper than Thunderbolt.
At the same time, the worldwide market for personal and entry-level storage hardware saw double-digit growth in the second quarter of 2013. The market includes storage products that range from a single disk through twelve-drive bay storage arrays marketed for individuals, small offices/home offices, and small businesses.
The worldwide market grew 10.7% year over year with 16.8 million units (worth $1.5 billion) shipped in the second quarter, according to IDC's PELS tracker.
It was the third consecutive quarter of year-over-year growth, according to IDC.
"The second quarter of 2013 brought ... a return to normal for the personal and entry-level storage market," Conner said. "For the last four quarters, the PELS market has seen a distinct focus on recovery after Thailand floods."
Drive maker Western Digital was hardest hit by the flooding, with up to 75% of the company's production lines temporarily shut down by the floods, IDC said.
By the beginning of 2012, disk drive production began to recover, but prices remained inflated because of shortages. According to ecommerce tracking site Dynamite Data, prices for the top 50 hard drives jumped by 50% to 150%. The price hikes kicked off in October 2011, when inventory levels plummeted 90% in less than a week.
Users continue to migrate to higher capacities to meet growing storage needs. In the 3.5-in personal storage market, 2TB drives represented 51.9% of shipments in the quarter.
For the 2.5-in personal storage market, 1TB drives had 51.7% of the market, with 4TB devices comprising 28.6% of units shipped.
Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow him on Twitter @lucasmearian.

Tuesday, August 20, 2013

Bloodsucking leech puts 100,000 servers at risk of potent attack

And naive computer companies continue to want the Smart Grid to rely on their years of security experience. This article shows why mission-critical communications networks supporting the Smart Grid cannot be open and should not touch the Internet in any way.
NIST, NERC, SGIP and other Energy Standard Agencies should push hard to fix this problem.
Andres Carvallo
Think the IPMI admin tool is secure and that no one connects it to public addresses? Nope.
At least 100,000 Internet-connected servers sold by Dell, HP, and other large manufacturers contain hardware that is vulnerable to potent remote hack attacks that steal passwords and install malware on their host systems, researchers said.
The threat stems from Baseboard Management Controllers (BMCs) embedded on the motherboards of most servers. These microcontrollers allow administrators to monitor the physical status of large fleets of servers, including their temperatures, disk and memory performance, and fan speeds. But serious design flaws in the underlying Intelligent Platform Management Interface, or IPMI, make BMCs highly susceptible to hacks that can cascade throughout a network, according to a paper presented at the Usenix Workshop on Offensive Technologies.
Heightening the risk, a recent Internet scan detected at least 100,000 IPMI-enabled servers running on publicly accessible addresses, despite long-standing admonitions from security professionals never to do so.
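The Internet-wide discovery described above works because IPMI answers on UDP port 623 before any authentication takes place. As a minimal sketch, assuming the standard RMCP/ASF "Presence Ping" layout from the IPMI specification (host names and timeouts here are illustrative, not values from the paper), an exposed BMC can be detected without any credentials:

```python
import socket
import struct

# RMCP/ASF "Presence Ping": the pre-authentication probe that scans like
# the researchers' rely on.  An IPMI-enabled BMC listening on UDP 623
# answers with a "Presence Pong", revealing itself to anyone who asks.
ASF_PING = struct.pack(
    "!BBBB I BBBB",
    0x06, 0x00, 0xFF, 0x06,   # RMCP: version 1.0, reserved, no-ack sequence, ASF class
    0x000011BE,               # ASF IANA enterprise number
    0x80,                     # message type: Presence Ping
    0x00, 0x00, 0x00,         # message tag, reserved, data length
)

def probe_ipmi(host: str, timeout: float = 2.0) -> bool:
    """Return True if host answers the ASF Presence Ping on UDP port 623."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(ASF_PING, (host, 623))
        try:
            data, _ = s.recvfrom(1024)
        except socket.timeout:
            return False
    # A Presence Pong is an ASF-class RMCP message with type byte 0x40.
    return len(data) >= 9 and data[3] == 0x06 and data[8] == 0x40
```

Because the pong arrives with no authentication step, a BMC left on a public address announces itself to any scanner, which is why admonitions against public IPMI addresses matter so much.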
"IPMI can be a convenient administrative tool, but under the control of attackers, it can also serve as a powerful backdoor," the scientists from the University of Michigan wrote in the paper, which was titled Illuminating the Security Issues Surrounding Lights-out Server Management. "Attackers who take control of the BMC can use it to attack the host system and network in a variety of ways."

"Parasitic server"

One possibility, the paper continued, is the installation of BMC-resident spyware that captures administrative passwords when an operator remotely accesses a host server. Another scenario: attackers could gain unfettered "root" access to the host by remotely booting the server into recovery mode. Worse yet, attackers could abuse vulnerable BMCs to run an unauthorized operating system on the host that gives raw access to the server disks.
The researchers aren't the first to warn of the threats posed by widely used IPMI and BMC technologies. Last month, Dan Farmer, the highly regarded white-hat hacker, posted his own manifesto that used even stronger language to describe the lurking danger. At one point he wrote:
Imagine trying to secure a computer with a small but powerful parasitic server on its motherboard; a bloodsucking leech that can't be turned off and has no documentation; you can't login, patch, or fix problems on it; server-based defensive, audit, or anti-malware software can't be used for protection; its design is secret, implementation old, and it can fully control the computer's hardware and software; and it shares passwords with a bunch of other important servers, stores them in clear text for attackers to access.
HD Moore, chief research officer of security firm Rapid7 and chief architect of the Metasploit project used by penetration testers and hackers, provides an equally bleak security assessment of IPMI and BMC technology.
BMCs carry different names and specifications depending on the server they're bundled with, and there's little public material documenting their inner workings. But because each runs the same IPMI protocol, they're all believed to be susceptible to the same threats. The University of Michigan researchers tested this hypothesis by selecting one such controller, which came embedded on the Super X9SCL-F motherboard of a Supermicro SYS-5017C-LF 1U rack-mounted server. After performing a thorough analysis of the device, the scientists found that its firmware (designed by a firm called ATEN Technology) contained "numerous textbook security flaws, including exploitable privilege escalation, shell injection, and buffer overflow vulnerabilities." The researchers developed proof-of-concept attack code that exploited the vulnerabilities to remotely obtain root access on the BMC. (Supermicro has since issued BMC firmware updates that fix some or all of the vulnerabilities.)
They went on to catalog a list of attack scenarios malicious hackers could mount when exploiting the bugs. They included:
  • Subverting the host system or other machines on the management network
  • Installing BMC spyware that eavesdrops on remote management sessions to sniff passwords or even the physical server console
  • Installing persistent BMC rootkits that provide attackers with backdoor access that remains hidden from IPMI logs
  • The creation of IPMI botnets to take advantage of the large amount of network bandwidth at their disposal
In all, the scientists detected more than 100,000 Internet-exposed IPMI devices, 40,000 of which used the Supermicro BMC they tested at length.
"We conservatively estimate that it would take less than an hour to launch successful parallel attacks against all of the 40,000 ATEN-based Supermicro IPMI devices that we observed listening on public IP addresses," they reported.

Either incompetence or indifference

The paper includes a list of defenses that should be required reading for anyone who administers a server anywhere. Suggestions include keeping IPMI firmware up to date, changing default passwords, and never, ever running IPMI devices on public IP addresses. This last admonition is widely repeated—often by the manufacturers of the servers that are put at risk by the vulnerabilities. The scientists' Internet scans provide convincing evidence that this advice is frequently ignored, so unfortunately, it's worth repeating often.
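The defenses listed above can be sketched with the standard ipmitool utility. This is a hedged configuration sketch only: channel 1 and user ID 2 are common vendor defaults, but both vary by platform, so verify them with the listing commands before changing anything.

```shell
# Inspect the BMC's LAN channel and confirm it is not on a public address
ipmitool lan print 1

# List configured users, then replace any vendor default password
ipmitool user list 1
ipmitool user set password 2 'a-long-unique-passphrase'

# Limit channel access so only authenticated administrators can use IPMI
# over LAN (privilege 4 = ADMINISTRATOR)
ipmitool channel setaccess 1 2 callin=on ipmi=on link=on privilege=4
```

None of this substitutes for the paper's core advice: keep BMC firmware current and keep IPMI off public IP addresses, ideally on a dedicated management network.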
But the researchers also take engineers at original equipment manufacturers (OEMs) to task for, among other things, building devices that have IPMI capabilities turned on by default. The researchers go on to direct some harsh words at the people developing IPMI devices and the servers they go into.
"Given the power that IPMI provides, the blatant textbook vulnerabilities we found in a widely used implementation suggest either incompetence or indifference towards customers' security," the paper states. "While some OEMs recommend helpful precautions such as dedicated management networks, this should not be an excuse to shift blame to users who fail to heed this advice and suffer damage because of vulnerabilities in IPMI firmware. We believe that properly securing IPMI will require OEMs to take a defense-in-depth approach that combines hardening the implementations with encouraging users to properly isolate devices."

Monday, February 04, 2013

US electricity blackouts skyrocketing
By Thom Patterson, CNN
October 15, 2010 7:26 p.m. EDT | Filed under: Innovation

(CNN) -- New York's Staten Island was broiling under a life-threatening heat wave and borough President James Molinaro was seriously concerned about the area's Little League baseball players.

It was last July's Eastern heat wave and Consolidated Edison was responding to scattered power outages as electricity usage neared record highs.
So, authorities followed Molinaro's suggestion to cancel that night's Little League games, which were to be played under electricity-sucking stadium lights.

"Number one, it was a danger to the children that were playing out there in that heat, and secondly it would save electricity that people would need for air conditioning in their homes," said Molinaro, who'd been forced to sleep at his office that night because of a blackout in his own neighborhood.

Throughout New York City, about 52,000 of ConEd's 3.2 million customers lost power during the heat wave. Triple-digit temperatures forced residents like 77-year-old Rui Zhi Chen to seek shelter at one of the city's 400 emergency cooling centers. "It felt like an oven in my home and on the street," Chen said.
Should Americans view these kinds of scenarios as extraordinary circumstances -- or a warning sign of a darker future?

Experts on the nation's electricity system point to a frighteningly steep increase in non-disaster-related outages affecting at least 50,000 consumers.

During the past two decades, such blackouts have increased 124 percent -- up from 41 blackouts between 1991 and 1995, to 92 between 2001 and 2005, according to research at the University of Minnesota.
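The percentage quoted above follows directly from the two counts in the University of Minnesota data:

```python
# Non-disaster blackouts affecting at least 50,000 consumers,
# figures taken directly from the article
blackouts_1991_1995 = 41
blackouts_2001_2005 = 92

increase_pct = (blackouts_2001_2005 - blackouts_1991_1995) / blackouts_1991_1995 * 100
print(f"{increase_pct:.0f}%")  # 124%
```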

In the most recently analyzed data available, utilities reported 36 such outages in 2006 alone.

"It's hard to imagine how anyone could believe that -- in the United States -- we should learn to cope with blackouts," said University of Minnesota Professor Massoud Amin, a leading expert on the U.S. electricity grid.

Amin supports construction of a nationwide "smart grid" that would avert blackouts and save billions of dollars in wasted electricity.

In a nutshell, a smart grid is an automated electricity system that improves the reliability, security and efficiency of electric power. It more easily connects with new energy sources, such as wind and solar, and is designed to charge electric vehicles and control home appliances via so-called "smart" devices.

Summer of '77

You might say Amin's connection with electricity began in New York City with a bolt of lightning.
In July 1977, Amin was a 16-year-old high school student visiting from his native Iran when lightning triggered a 24-hour blackout that cut power to nine million.

As he and his father walked near their Midtown Manhattan hotel, they were shocked to see looters smash their way into an electronics store less than 20 yards down the street.

Amin recalls feeling violated by the ugly scene -- and wondering if the nation's infrastructure was in danger of collapse. "... not just the electric grid that underpins our lives," he said, "but also the human condition."

More than 30 years later, the United States is still "operating the most advanced economy in the world with 1960s and 70s technology," said Amin. Failing to modernize the grid, he said, will threaten the U.S. position as an economic super power.

Millions remember the historic August 2003 blackout, when overgrown trees on powerlines triggered an outage that cascaded across an overloaded regional grid. An estimated 50 million people lost power in Canada and eight northeastern states. Smart grid technology, experts say, would have immediately detected the potential crisis, diverted power and likely saved $6 billion in estimated business losses.
By April of 2013 ConEd hopes to install a "smart" automated self-healing system aimed at preventing the burnout of large feeder cables during peak demand periods -- such as heat waves.

The new technology would anticipate possible equipment failure in specific neighborhoods and reroute electricity to compensate. For example, a project to help Queens' Flushing neighborhood will "give us the capability to remotely control up to 26 underground switches," said Con Ed smart grid manager Thomas Magee.

Had systems like this been in place, said ConEd's Aseem Kapur, it might have prevented or reduced New York's scattered outages last July.

Who's got the juice?

Some of the most reliable utilities are in the heartland states of Iowa, Minnesota, Missouri, the Dakotas, Nebraska and Kansas.

In those states, the power is out an average of only 92 minutes per year, according to a 2008 Lawrence Berkeley National Laboratory study. On the other end of the spectrum, utilities in New York, Pennsylvania and New Jersey averaged 214 minutes of total interruptions each year. These figures don't include power outages blamed on tornadoes or other disasters.


But compare the U.S. data to Japan, which averages only four minutes of total interrupted service each year. "As you can see, we have a long way to go," said Andres Carvallo, who played a key role in planning the smart grid in Austin, Texas.

Experts point to the northeastern and southeastern U.S. as regions where outages pose the most threat -- mainly due to aging wires, pole transformers and other lagging infrastructure.

"They know where they have tight spots," said Mark Lauby, of the North American Electric Reliability Corporation, which enforces reliability standards. Without mentioning specific regions, Lauby said utilities are "making sure the generation and the transmission are available to help support those consumers."

Building a national smart grid "won't be cheap and it won't be easy," acknowledged Amin. Much of it could be completed as soon as 2030 at a cost of up to $1.5 trillion, according to the Department of Energy. It's unclear who would foot the entire bill, but the Obama administration has committed about $4 billion in investment grants.

The 'Easy Button'

Carvallo jokes about the so-called "Easy Button" at Austin Energy. It's not really a big red button on the wall, but it is a mechanism that allows an operator to control tens of thousands of home thermostats.

"Austin is two to three years ahead of everybody else," said Carvallo, now chief strategy officer for the smart grid software firm Proximetry.

He points to a volunteer program that offers free thermostats to customers who allow the utility to remotely control their air conditioners during specific months and hours. This way, thousands of power-gulping air conditioners can be cycled off for a short time when electricity is needed elsewhere.

By summer's end, Austin expects to begin enabling its 700,000 streetlights to be turned "on and off with a flip of a switch," saving $340,000 in electricity each year, and eliminating 200 tons of carbon dioxide air pollution.

Replacing old-style electric meters with "smart meters" is often described as the first step in creating a smart grid. All 400,000 of Austin's meters are smart meters.

Nationwide, 26 utilities in 15 states have installed some 16 million smart meters in homes and businesses.

Soon, when power goes out in a neighborhood with smart meters, utilities won't have to wait for customers to report outages -- the smart meters will alert utilities automatically. Utilities will then e-mail or text message each affected customer information about when the lights will be back on.

Critics question smart meter accuracy and whether the devices will really save energy in the long run.

"It feels a bit like the utilities are jumping the gun and they're trying to put these meters in before the rest of the pieces of the so-called smart grid are in place and before we even know that the smart meters are going to have advantages commensurate with the cost," said electricity consumer advocate Mindy Spatt of The Utility Reform Network.

One advantage of smart grid technology may be jobs.

High-tech manufacturers want to locate their factories in places where electricity is most reliable, said Carvallo. "That's where the manufacturing facilities move to. That's where you get your high-paying jobs."

Friday, December 28, 2012

IEEE Survey Report Illuminates Smart Grid Future by Andres Carvallo

A comprehensive polling of industry leaders around the world finds that North America leads in energy storage, while Europe is ahead in distributed generation and microgrids. Energy management systems, distributed management systems and communications technologies will be critical to full realization of all the anticipated smart grid benefits.

A new report from Zpryme, commissioned by IEEE and available at the IEEE smart grid website, details how energy storage, distributed generation and microgrid technologies stand to evolve given the rapid deployment of the smart grid across the globe over the next five years. The report is based on a survey done in September 2012 of 460 smart grid executives around the world, almost all of them highly educated. Two thirds of them said they believe energy storage and distributed generation will be very important to the future development of the smart grid, and half thought microgrid development will be very important.

Top-rated benefits of energy storage include provision of supplemental power to meet peak demand, improvement of power system reliability and reduction of energy costs. Yet, on a somewhat skeptical note, the report says, “If the cost of grid-scale storage technologies does not significantly decrease over the next five years, the market will not realize its full potential.” Specifically, "Industry experts from the U.S. Department of Energy, EPRI and KEMA estimate that costs must decrease by at least 50 percent relative to today’s costs in order for energy storage technologies to realize mainstream adoption. If the costs do not significantly decrease, utilities will continue to rely upon gas-fired turbines (peaker plants) for load shifting and renewable integration.”

The report finds that Europe is the global leader in adopting and utilizing distributed generation and microgrids, while North America is prominent in energy storage technology. The report says that these regions stand to “take the lead when it comes to developing and deploying next-generation distributed energy systems.” But Japan, South Korea and China “will also continue to make strong investments in energy storage as these countries are determined to lead the world when it comes to clean technologies,” the report says.

Energy management systems, distributed management systems and communications technologies are identified in the report as the critical enabling technologies for energy storage, distributed generation and microgrids, as well as advanced grid services such as net metering, load aggregation and real-time energy monitoring that in many cases will be delivered in the cloud.

Key interrelated themes emerge from the research behind the report, such as the necessity of customer demand to drive the market for the three technologies and, in turn, the need for customer feedback to infuse manufacturers' R&D strategies. The report illuminates how energy storage, distributed generation and microgrid technologies can support important new revenue streams for manufacturers, utilities, end users and third-party providers alike, spurring new global markets for software and systems that integrate these technologies into modern and future energy systems.

Among the report’s findings are the following highlights:

- Electricity demand of the future will be met with distributed-energy systems.
- Customer demand—not further regulation, policies or subsidies—must drive the viability of the market for the three technology areas of focus in the report (energy storage, distributed generation and microgrids).

- Market-driven innovation will lead the transition to a high-growth phase for the three technology areas, so manufacturers must, in turn, “closely integrate customer feedback into their R&D (research and development) roadmaps.”

- Better coordination on standards, R&D and funding is required to drive down costs and advance energy storage, distributed generation and microgrid technologies.

- Digitized or connected energy systems will be necessary to support advanced smart-grid functionality and distributed energy systems.

- The three technology areas offer opportunities for utilities, end users and third-party providers to create new revenue streams.

- One of the major challenges to advancing deployment of energy storage, distributed generation and microgrids remains the need to drive costs down further.

The Zpryme report indicates that the importance of all three technology areas is coming into clearer focus amid rising global interest in managing energy consumption more efficiently, growing electricity demand and increasing awareness of the cost of service interruptions. Ultimately, the report states, there is strong growth potential for all three technologies.

In conclusion, to summarize the most important overarching findings:

1 - Microgrids, distributed generation and especially grid-level energy storage still need external private- and public-sector funding for both R&D and projects/pilots. The benefits would include more cost-effective solutions, better business cases for the technologies and development of best practices with regard to technology installation, application and optimization.

2 - The most important enabling technologies for distributed generation, microgrids and energy storage stand to be energy management systems, distributed management systems and communications technologies. Future distributed energy systems must be able to interact across both centralized and decentralized electrical networks, supporting advanced grid services (net metering, load aggregation and real-time energy monitoring, for example) that often will be delivered in the cloud.

3 - Finally, network-layer change argues for investment in a future-proof architecture and communications network, one able to meet not only the defined goals of the present and near term, but also the undefined yet likely expansive needs of a dynamic digital future, replete with emerging innovative applications and equipment. A well-informed design and a resilient integrated IP network foundation put the utility in a position of strength: able to choose from best-of-breed solutions as they emerge, adapt the network to new purposes and functionality, and consistently drive costs out by leveraging information in new ways.