Evaluation, Measurement, and Verification
Policymakers and utilities in the US have recently put increased focus on energy efficiency as a clean, low-cost and reliable utility system resource and policy strategy to meet long-term energy needs and climate goals. This increased attention calls for excellence in evaluation, measurement & verification (EM&V), which provides accurate, transparent and consistent metrics—based on good data—that assess the performance and implementation of energy efficiency projects, programs, and portfolios of programs. The US has more than three decades of experience implementing energy efficiency EM&V. One key challenge is how to balance rigor and accuracy with ease of implementation and evaluation costs. Recent advances in data availability and analytics are paving the way for new opportunities to improve accuracy while managing costs. Improved regional and national collaboration also holds new promise for elevating confidence in energy efficiency as a resource.
In this toolkit we first describe the objectives of EM&V, followed by general approaches and typical steps in an EM&V process. We then discuss several key areas for consideration when developing a plan. Next we discuss how the industry is entering a new paradigm in EM&V shaped by improved data availability and analytics, as well as increased national and regional collaboration. Finally, we provide a detailed list of additional references for EM&V implementation.
Policymakers typically require that energy efficiency programs and projects be cost-effective. To this end, most states require that program administrators conduct independent, third-party EM&V. Energy efficiency EM&V serves three critical objectives: accountability of the impacts, risk management, and continuous improvement. To restate these objectives as questions:
- Accountability of the impacts: Did the program deliver its estimated benefits? EM&V activities document and measure the effects of a program and determine whether it met its goals. This often includes the energy and demand savings, as well as co-benefits such as emissions impacts, transmission and distribution benefits, or water savings.
- Risk management to support energy resource planning: How certain are these savings? Risk refers to the uncertainty of the realization of expected savings from an efficiency project or program. EM&V activities should be sophisticated enough to assess and maximize the level of confidence of estimated savings, which provides credibility to energy efficiency as a viable resource. An added risk is that, in the absence of good data, governments may under-invest in relatively cheaper and more beneficial energy efficiency programs, and over-invest in more costly alternatives. EM&V activities aim to provide this data, thereby avoiding costly misallocation of public and private resources.
- Continuous improvement: What can be done to improve program performance in the future? Most importantly, EM&V activities should be used to go beyond compliance by evaluating why a program had the effect that it did, with an eye for both improving existing programs and providing a robust mechanism for estimating savings from planned programs.
It is important to first make a distinction between energy efficiency projects and energy efficiency programs or portfolios of programs because of differences in the scope of measurement and methods of evaluation for each. A project is a single activity that takes place at a single location, such as the installation of energy efficient lighting in an office. The term measurement and verification (M&V) alone refers to project-level analysis associated with the documentation of energy savings and verification of installation at individual sites (more on that later under savings determination approaches). In contrast, a program is a prolonged effort by an organization or collaborative of organizations that encompasses a group of projects with similar characteristics and applications (e.g., an initiative to install advanced hot water heaters in residential buildings). A portfolio is a collection of programs that collectively address multiple technologies and market segments. The broader term evaluation, measurement and verification (EM&V) refers to program-level or portfolio-level analysis and includes a broader approach to evaluation.
At the program or portfolio level, a seminal resource for an in-depth review of EM&V program evaluation is the Energy Efficiency Program Impact Evaluation Guide from 2012 (and its precursor in 2007), prepared by the State and Local Energy Efficiency Action Network (SEE Action), which is co-facilitated by the US Department of Energy (DOE) and Environmental Protection Agency (EPA). As described in that report, the most common way to categorize efficiency program evaluations is as follows:
- Impact evaluations assess outcomes of the changes attributable to an energy efficiency program. These evaluations answer questions for the first and second objectives described above about the accountability of the benefits and risk management.
- Process evaluations assess program operations to identify and recommend areas of improvement. These evaluations answer questions for the third objective above about program improvement.
- Market evaluations assess broad aspects of the marketplace with respect to energy efficiency. For example, a market effects evaluation characterizes changes in the structure or functioning of the market or the behavior of market participants that resulted from one or more program efforts. These evaluations help to answer questions for all three objectives.
These best-practice EM&V activities should be seen as cyclical, occurring throughout the energy efficiency planning, implementation, and evaluation process. SEE Action’s guide focuses mainly on impact evaluations, which are at the center of the EM&V process. Additional information on process and market evaluations can be found in the various references listed at the end of this toolkit. DOE’s Uniform Methods Project, which is described later, provides detailed model evaluation plans for specific energy efficiency measures and project categories. Next we describe the high-level steps for an impact evaluation process based largely on the SEE Action guide.
- Define the evaluation objectives, scale and time frame in the context of policy objectives. Evaluation planning should be incorporated in the planning for the efficiency program itself, for budgetary and staffing reasons, as well as for program design purposes. The basic objectives of any evaluation program are accountability, risk management, and program improvement. Other objectives may include the calculation of co-benefits, as described below. Scale is often a tradeoff between expected benefit from the EM&V process and the administrative costs of the program. Evaluation time frames are typically on the order of one year.
- Select an impact evaluation savings determination approach and define baseline scenarios. Evaluation methods depend on program objectives, and are discussed more fully in the referenced documents below. The baseline (or "business-as-usual" scenario) consists of an estimate of energy use and demand in the absence of any efficiency program interventions. Because energy savings cannot be directly measured, they must be calculated by comparing energy use and demand after efficiency program implementation with a baseline defined at the start of the program.
- Design and conduct data collection and analysis. Decide upon the experimental or quasi-experimental design for the evaluation. Prepare the sampling plan and data collection instruments and protocols. Select data filtering and analysis methodologies. Implement the evaluation plan.
- Determine energy and demand savings (gross and/or net savings). Gross savings represent the changes in energy use and demand that result from program activities, regardless of what factors may have motivated the participant to take the energy efficiency actions. A sample of representative projects is selected, and their effects are measured and verified (taking the effects of uncontrollable forces like weather into account) to determine gross savings. Net savings are determined by adjusting gross savings to account for what would have happened without the program (free riders) and for program-induced spillover and market effects (see definitions later).
- Calculate co-benefits (according to policy objectives). Co-benefits may include avoided greenhouse gas emissions and other environmental benefits, energy price effects, economic impacts such as job creation and increases in income, non-energy benefits to program participants (e.g., health, comfort, reduced maintenance, etc.), national security impacts, and other technical system benefits. Methods exist for determining these co-benefits, according to the objectives of the energy efficiency program policy.
- Report the evaluation results to program administrators, resource planners, and demand forecasters, and work with program administrators to implement recommendations.
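Because savings are calculated rather than directly measured, the savings-determination step above amounts to comparing reporting-period consumption against an adjusted baseline. The sketch below illustrates the idea with a simple degree-day weather model; the model form, degree-day values, and consumption figures are all hypothetical:

```python
# Minimal sketch of gross savings estimation: fit a simple weather
# model (kWh vs. heating degree days) to pre-retrofit data, project
# baseline use onto the reporting period, and subtract metered use.
# All figures below are hypothetical.

def fit_linear(x, y):
    """Ordinary least squares fit for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Pre-retrofit monthly data: heating degree days and metered kWh
pre_hdd = [100, 300, 500, 700]
pre_kwh = [1200, 1600, 2000, 2400]

a, b = fit_linear(pre_hdd, pre_kwh)  # baseline model coefficients

# Reporting-period weather conditions and metered use after the retrofit
post_hdd = [120, 280, 520, 680]
post_kwh = [1100, 1350, 1700, 1950]

baseline_kwh = [a + b * h for h in post_hdd]       # weather-adjusted baseline
gross_savings = sum(baseline_kwh) - sum(post_kwh)  # gross savings, kWh
print(round(gross_savings, 1))
```

Real evaluations use far richer models (occupancy, production, cooling loads) and quantify the uncertainty of the fit, but the baseline-minus-reporting-period structure is the same.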
Here we provide more details about some specific elements of the EM&V process for further consideration. See ACEEE 2015 for additional information.
Savings Determination Approach
There are inherent challenges in measuring energy efficiency impacts because it requires comparing actual energy use to what would have happened absent the energy efficiency improvements. This requires the use of a counterfactual scenario, i.e. estimating what the energy use would have been had the program or measure not been implemented. The SEE Action guide describes three general approaches to savings determination: 1) measurement and verification (M&V); 2) deemed savings; and 3) large-scale consumption data analysis with the use of control groups. The type of approach is a key area for consideration—and requires balancing evaluation costs with level of accuracy. Program administrators may want to use a variety of these approaches across their portfolio of programs.
Measurement and Verification (M&V)
M&V is applied at the project level, as described earlier, and refers to the determination of gross energy savings at individual sites or projects. It can involve metering measurements in combination with engineering calculations, statistical analysis, and/or computer simulation modeling. M&V guidelines and protocols have existed for decades (since the beginning of the energy performance contracting industry). The most widely used today include the Federal Energy Management Program (FEMP) guidelines, the Efficiency Valuation Organization’s (EVO) International Performance Measurement & Verification Protocol (IPMVP), and ASHRAE’s Guideline 14-2014. More recently, the US DOE’s Uniform Methods Project has become a resource for some M&V protocols. See the list of project-level M&V references at the end of this toolkit for links to these resources.
For energy efficiency programs, this M&V savings determination approach is most often used in custom programs targeting large customers, where the savings are dependent on the technologies applied and the specific customer characteristics. This approach can also serve as the basis for determining, in part, deemed savings values for prescriptive programs.
Deemed Savings
Deemed savings values are estimates of the energy and/or demand savings for a single unit of an installed energy efficiency measure that (1) have been developed from data sources (such as prior metering studies) and analytical methods that are widely considered acceptable for the measure and purpose, and (2) are applicable to the situation being evaluated. Individual parameters or calculation methods can also be deemed, e.g., the effective useful life of a measure or a set of engineering algorithms used to calculate the savings. (Free-ridership and net-to-gross factors may also be deemed.)
For energy efficiency programs, deemed savings approaches are generally used for projects with well-documented savings values, for example appliances, lighting, and computer equipment. This EM&V approach is popular because it is relatively low-cost and straightforward. ACEEE research from 2012 found that 36 states use some type of deemed savings values in their evaluation frameworks, and that 26 states cite the use of sources or databases from other states (ACEEE 2012).
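A deemed savings calculation is essentially arithmetic: verified installation counts multiplied by per-unit values from a TRM, often with adjustment factors such as an in-service rate. A minimal sketch, with entirely hypothetical measure values not drawn from any actual TRM:

```python
# Sketch of a deemed savings calculation: per-unit values (as a TRM
# might list them) times verified installations. All values below are
# hypothetical, not from any actual TRM.

deemed = {
    # measure: (kWh saved per unit per year, in-service rate)
    "led_lamp":         (30.0, 0.95),
    "smart_thermostat": (450.0, 0.90),
}

installed = {"led_lamp": 10000, "smart_thermostat": 500}

total_kwh = 0.0
for measure, count in installed.items():
    per_unit, isr = deemed[measure]
    # units installed x deemed per-unit savings x in-service rate
    total_kwh += count * per_unit * isr

print(round(total_kwh))
```

The rigor lies not in the arithmetic but in how the deemed values themselves were developed and how often they are reviewed.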
Large-scale consumption statistical analysis with the use of comparison groups
Comparison groups are a more elaborate way of determining energy savings and can result in a more informed understanding of program-induced energy savings. The SEE Action guide distinguishes between two kinds of control groups. Randomized controlled trials (RCT) randomly assign customers to either the treatment group, whose members participate in the program, or a comparison group, whose members do not participate. Quasi-experimental methods (QEM) use a comparison group that has not been randomly selected. Both methodologies compare the energy use of a control group not involved in program activities with that of efficiency program participants. Evaluators collect energy consumption data for both groups and calculate the difference between the two sets of data. Both comparison-group approaches require a relatively large and homogeneous population of energy users. They are most often used in residential programs, since they involve so many customers, usually with a limited number of energy consumption profiles. They can also be used for commercial programs with large numbers of participants, but relatively sophisticated statistical techniques are required.
Of the two kinds of control groups, RCT tends to be more accurate in assessing savings, but it is time-consuming, expensive, and cannot be applied to full-scale programs because it requires random assignment to participant and control (nonparticipant) groups. The simplest QEM approach is the pre/post method, which compares the energy use of program participants before and after the program; in effect, participants become their own control group. The QEM approach is more flexible and more broadly applicable to programs. Randomized encouragement designs are an additional approach (see the Uniform Methods Project’s Sampling Design Cross-Cutting Protocol, April 2013).
For certain programs with substantial energy savings and large numbers of participants, periodic statistical analyses with comparison groups are helpful to the overall EM&V process. These can also help calibrate deemed saving estimates.
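The comparison-group logic described above is often implemented as a difference-in-differences calculation: the change in participants' consumption minus the change in the comparison group's consumption, so that factors affecting both groups (weather, the economy) net out. A minimal sketch with hypothetical monthly consumption data:

```python
# Minimal difference-in-differences sketch for a quasi-experimental
# comparison-group evaluation. Program impact = (treatment group's
# change in use) minus (comparison group's change in use).
# All consumption values are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

# Average monthly kWh per customer, before and after the program year
treat_pre,  treat_post = [900, 950, 880], [780, 820, 760]
comp_pre,   comp_post  = [910, 940, 890], [890, 930, 870]

treat_change = mean(treat_post) - mean(treat_pre)  # program + external factors
comp_change  = mean(comp_post) - mean(comp_pre)    # external factors only

# Per-customer savings attributable to the program (positive = saved)
did_savings = comp_change - treat_change
print(round(did_savings, 1))
```

In practice evaluators use regression models with customer and time fixed effects rather than simple means, but the identifying logic is the same subtraction.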
Technical Resource Manuals
Technical Resource Manuals (TRMs) are databases or reports that hold information on the features and energy savings of a large number of energy efficiency measures for use by an entire state or region. Deemed savings values and deemed calculations are usually documented in TRMs, as are other assumptions and metrics such as measure lifetimes. As of 2012, there were 17 state and regional TRMs in use across the U.S. (SEE Action 2012). Developing robust state or regional TRMs, with periodic reviews and updates, is a helpful way to improve consistency.
Net vs. Gross Savings
Evaluators are interested in examining the extent to which variables external to a program may affect energy use and thereby lead to over- or underreporting of energy savings. Using definitions from DOE’s Uniform Methods Project (NREL 2014, Chapter 17):
- Gross savings impacts are “changes in energy consumption that result directly from program-related actions taken by participants in an energy efficiency program, regardless of why they participated.”
- Net savings impacts are “changes in energy use attributable to a particular energy efficiency program. These changes may implicitly or explicitly include the effects of factors such as free-ridership, participant and non-participant spillover, and induced market effects.”
Free-riders are participants who would have adopted energy efficiency measures in the absence of the program. Spillover is when the program inspires participants or nonparticipants to take other efficiency actions not directly targeted by the program. Induced market effects occur as a result of changes in the market inspired by the program (e.g. contractors change their previous equipment stocking and recommendation practices due to familiarity with a new technology promoted by the program). While it is considered best practice for net savings evaluations to account for free-ridership and spillover (and occasionally induced market effects), in practice many evaluators account for free-riders alone, thereby running the risk of undercounting total savings impacts.
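A common (though not universal) way to operationalize these adjustments is a multiplicative net-to-gross ratio. The sketch below assumes the simple form NTG = 1 - free-ridership + spillover; the rates shown are hypothetical survey-based estimates:

```python
# Sketch of a net-to-gross adjustment, assuming the common
# multiplicative form: NTG = 1 - free_ridership + spillover.
# The rates below are hypothetical.

gross_mwh = 10000.0
free_ridership = 0.20  # share of savings that would have occurred anyway
spillover = 0.05       # program-induced savings beyond claimed projects

ntg = 1.0 - free_ridership + spillover
net_mwh = gross_mwh * ntg
print(net_mwh)
```

As the paragraph above notes, dropping the spillover term (as many evaluators do in practice) would lower the NTG ratio here from 0.85 to 0.80 and undercount total impacts.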
An analysis by ACEEE examines details about state practices, precedents, and issues regarding net and gross savings (ACEEE 2014). The study’s interviews with state and national experts made it clear that both net and gross savings can be useful toward assessing the three objectives of evaluation. For example, estimates of net savings help programs improve as they work to minimize free-ridership. Utility system planners are generally most concerned with what overall changes are occurring in consumption levels (i.e. gross savings), and less concerned with parsing out what portion of the change would happen without programs or is attributable to different parties. On the other hand, there is a need and often regulatory pressure to understand the net impacts attributable to programs, especially as a way to calculate things like cost-effectiveness and lost revenue policies in order to protect ratepayer interests and to apply limited program dollars where they will do the most good. Some states have taken the simplistic approach of assuming that free-ridership and spillover cancel each other out, so that gross savings equal net savings. That approach may ignore important differences between programs within a portfolio, and likely obscures important information about how particular programs are functioning.
Common Practice Baseline
In recent years, the “common practice baseline” approach has received increased attention. This approach is somewhere in-between net and gross savings approaches in that it measures savings relative to what is determined to be common practice without a program, but makes no further adjustments. This approach is commonly used in the Pacific Northwest and is recommended in EPA’s draft EM&V guidance for evaluating energy efficiency savings under the Clean Power Plan (EPA 2015). As with other net savings approaches, the common practice baseline approach is designed to assess the savings attributable to efficiency program activities. A description and discussion of this approach can be found in the Uniform Methods Project’s Chapter 17 (NREL 2014).
Cost-effectiveness screening is one key element of the EM&V process, and it is used in various ways in different jurisdictions. Recent national collaboration on this topic has led to some helpful resources. The National Efficiency Screening Project (NESP), as described later, spearheaded the development of the Resource Value Framework (RVF) (ACEEE is a participating member of NESP). The RVF advocates that in designing energy efficiency cost-effectiveness screening tests, each state should adhere to several principles, including:
- Support the public interest
- Account for the energy policy goals of each state
- Ensure that tests are applied symmetrically, where both relevant costs and relevant benefits are included in the screening analysis
- Not exclude relevant benefits on the grounds that they are difficult to quantify and monetize
- Be transparent by using a standard template to explicitly identify the state’s energy policy goals and to document assumptions and methodologies
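Whatever costs and benefits a state's chosen test includes, the screening calculation itself reduces to a present-value benefit/cost ratio. A minimal sketch with hypothetical cash flows and discount rate (actual tests, such as the total resource cost or utility cost test, differ in which line items they count):

```python
# Sketch of a cost-effectiveness screening ratio: present value of
# benefits divided by present value of costs. All cash flows and the
# discount rate are hypothetical.

def npv(rate, flows):
    """Present value of a stream of annual cash flows in years 1..n."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows, start=1))

discount_rate = 0.05
annual_benefits = [120000] * 10  # e.g., avoided energy and capacity costs, $/yr
annual_costs = [80000] + [20000] * 9  # up-front program costs, then admin/O&M

bcr = npv(discount_rate, annual_benefits) / npv(discount_rate, annual_costs)
print(round(bcr, 2))  # a ratio above 1.0 passes the screen
```

The symmetry principle above maps directly onto this calculation: excluding a hard-to-monetize benefit from the numerator while keeping all costs in the denominator biases the ratio downward.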
For program administrators, typical costs for energy efficiency EM&V are 3-5% of annual portfolio budgets (based on data from the Consortium of Energy Efficiency). The cost of EM&V varies with the frequency, complexity, and scope of data collection and analysis. Depending on the desired level of certainty in the results, measurements may be taken on an entire system or a single parameter, on every measure or a sampling of projects, more or less often, and for longer or shorter periods. In general, the level of costs and stringency of EM&V should be commensurate with the magnitude of savings and the degree of uncertainty around existing estimates of savings.
Stakeholder Working Groups
Several states have had success with establishing stakeholder working groups that are responsible for oversight and input into decision making regarding EM&V considerations such as those described above. Having a well-designed collaborative stakeholder process to oversee EM&V activities and reporting can help assure that evaluation is independent and objective, and minimize subsequent disputes and litigation over reported results.
Major new advances in data analytics and data availability are creating exciting opportunities in the area of automated M&V. The Northeast Energy Efficiency Partnerships (NEEP) outlines these trends in its report, The Changing EM&V Paradigm, across two major areas: 1) advanced data analytics and program enhancements (enabled by new software); and 2) advanced data availability (enabled by new hardware) (NEEP 2015). ACEEE is also examining how information and communications technology (ICT) can automate data collection and analysis, and how new analytical techniques are giving evaluators the ability to monitor and meter what is relevant and then extract what is needed to gain intelligence about energy consumption (see ACEEE 2015).
In that report, ACEEE provided case studies for the residential, commercial, and industrial customer segments. For example, one case study profiles a warehouse management company that installed an intelligent lighting system with self-metering and historical data collection capabilities that enable it to report energy savings in near real time. While some energy efficiency programs such as monitoring-based building commissioning (MBCx) have been using these types of techniques for several years, a broader class of energy efficiency programs could now potentially take advantage of automated M&V. At the same time, these new techniques can help build confidence in energy efficiency performance for a broad range of stakeholders (ACEEE 2015).
States have been developing and implementing EM&V methodologies for decades. More recently, especially with the prospect of federal climate regulations, a broader recognition of the need to coordinate has led to national and regional initiatives focused on energy efficiency EM&V. Here we briefly describe these initiatives and list some key resources.
EM&V Working Group of the State and Local Energy Efficiency Action Network (SEE Action), co-facilitated by the US DOE and the US EPA
- Convenes experts from around the country on EM&V issues, specifically around three key focus areas: 1) support consistency and transparency for EM&V methods; 2) address emerging issues and technologies; and 3) increase adoption of best practices. ACEEE participates in the working group.
- Publishes numerous technical reports and guidance documents.
- In 2012 published a seminal EM&V resource for both novices and experts: Energy Efficiency Program Impact Evaluation Guide. Includes definitions, concepts, and steps for calculating energy and demand savings, avoided emissions, and other impacts.
Uniform Methods Project (UMP) by the Department of Energy (DOE)
- Develops M&V protocols for determining energy savings for commonly implemented program measures. The work is being done through collaboration with energy efficiency program administrators, stakeholders, and EM&V consultants.
- Aims to establish easy-to-follow protocols based on commonly accepted engineering and statistical methods for determining gross savings for a core set of commonly deployed energy efficiency measures.
- In 2013, published the first set of protocols for determining energy savings from energy efficiency measures and programs; additional protocols continue to be published. Chapter 17 addresses net savings methods.
National Efficiency Screening Project (NESP)
- Group of organizations and individuals (including ACEEE) working together to improve the way that utility customer-funded electricity and natural gas energy efficiency resources are screened for cost-effectiveness.
- Developed the Resource Value Framework (RVF) of principles and recommendations to provide guidance for states to develop and implement cost-effectiveness tests that are consistent with sound principles and best practices.
- During 2016 and 2017, NESP is working to develop a National Standard Practice Manual for Energy Efficiency (NSPM) designed to update and expand upon the California Standard Practice Manual.
Regional Technical Forum by the Northwest Power & Conservation Council
- Established in 1999 as an advisory committee to develop standards to verify and evaluate energy efficiency and conservation savings.
- Develops unit energy savings (UES) measures, standard protocols, and numerous guidelines.
- Uses subcommittees to review and provide oversight and/or guidance on projects, provide feedback to the RTF on specific issues, and help develop and update sector-specific measure savings and assumptions.
Regional Evaluation, Measurement and Verification Forum (EM&V Forum) by the Northeast Energy Efficiency Partnership (NEEP)
- Consists of nine jurisdictions across the Northeast and mid-Atlantic regions. Works to develop and support the use of consistent savings assumptions and standardized, transparent guidelines and tools to evaluate, measure and verify, and report the energy and demand savings, costs, and avoided emission impacts of energy efficiency.
- Steered by a committee of state public utility commissioners, energy office and air agency representatives; convenes stakeholders through regular events.
- Develops and collects numerous resources such as its glossary of terms.
- In 2015 published The Changing EM&V Paradigm which reviews key trends and new industry developments and their implications on current and future EM&V practices.