Impact Measurement 101


This page covers:

• Impact Measurement

• Monitoring and Evaluation

• The Data Cycle

• Planning for Impact Measurement

• Amp Impact as an Impact Measurement Tool

    Impact Measurement

What do we mean when we talk about ‘impact’? Generally speaking, impact can be defined as a positive or negative change for people or the planet. More specifically, impact is often understood as the ultimate goal of an organization or program: the societal change that it seeks to achieve. The specifics of this goal vary by organization and program. For example, a public health program may define its main impact as a reduction in childhood mortality, while an organization providing small business loans may define impact using an industry-standard measure, such as return on investment (ROI).

    When it comes to measuring impact, organizations that we work with often use similar-but-different terms for this idea. You may have heard or seen the following terms in this context before:

    • Impact management

    • Impact monitoring

    • MEL (Monitoring, Evaluation, and Learning)

• MERL (Monitoring, Evaluation, Research, and Learning)

• …and many others

There are many different labels out there, but don’t be intimidated; most organizations are still talking about the same thing: asking and answering questions about impact. How much is a program moving the needle for its beneficiaries? What effect is a program or intervention having, and how large is that effect? All of this is impact measurement.

    The questions that an organization seeks to answer about impact can range from simple to complex, and they may be determined or prioritized by the organization’s sector, industry, or donor base. Some examples include:

    • How many people did we reach with our services?

    • Did we improve quality of life for our beneficiaries? By how much?

    • Where did our program have the greatest impact? Where did it have the least impact?

    • What was the cost-effectiveness of our intervention?

    These questions can each be answered with the right impact measurement approach.

    Monitoring and Evaluation

Another term used frequently in the impact measurement space is monitoring and evaluation. Although the two terms are usually used together, monitoring and evaluation are separate, distinct processes that serve different purposes within an organization’s impact measurement strategy.

Monitoring is the process of tracking a program’s implementation, or the delivery of services (Road to Results, p. 16). Program monitoring is done on an ongoing basis, throughout a program’s life cycle, and is usually carried out internally by program staff as part of their day-to-day jobs. A monitoring strategy seeks to answer the question, “What are we doing/have we done in our program?” Monitoring-related measurement questions are generally focused on a program’s inputs, activities, and short-term outputs. For example:

    • How many sets of training materials were developed? (Input)

    • Were the expected number of teacher training sessions carried out? (Activity)

    • How many teachers attended the training sessions? (Output) 

In contrast with monitoring, evaluation is the periodic assessment of a program’s performance, usually against its previously stated goals (Road to Results, p. 15). Evaluation tends to focus on broader questions; rather than “What have we done in our program?” it asks, “Has this program reached its intended outcomes? Why or why not?” Going a step further, evaluations often test key assumptions about how an organization or a program intends to create change.

There are many categories of evaluations: outcome evaluations, process evaluations, formative evaluations, participatory evaluations, and more (a comparison of different types of evaluations is available here: https://www.betterevaluation.org/themes_overview#Types). Each asks different types of questions related to a program’s effectiveness.

    Monitoring vs. Evaluation

In contrast with monitoring, evaluations are usually done periodically, rather than on an ongoing basis, because evaluations are typically more time- and resource-intensive than day-to-day monitoring. A final evaluation at the conclusion of a multi-faceted program may take several months or longer to carry out: the planning, survey development, sample size calculations, recruitment and training of enumerators, and finally the data analysis and interpretation can each be time- and labor-intensive tasks.

Finally, unlike program monitoring, which is typically done internally, evaluations often make use of external contractors with more specialized expertise. This approach also helps maintain objectivity and independence, which are often required attributes of evaluations.

    The Data Cycle

    An organization’s process of capturing and using data for impact measurement (including monitoring and evaluation) can be summarized as the data cycle. 

     

    The data cycle has five main stages:

    1. Plan: In this initial step, an organization maps out what its program(s) will do and how its activities are expected to lead to impact. This stage also involves thinking through the expected results of a project from a data perspective, and deciding how those expected results will be measured. We will cover specific tools used during this stage (theory of change, log frame, and indicators) in the next section of this module.

2. Data collection: In this stage, an organization develops the tools and methodologies it needs to collect data on the expected results defined in the Plan stage, and then carries out the data collection. Data collection can happen at several points during the life of a project; the data collected can be quantitative or qualitative, and can come from a number of different primary sources (surveys, focus groups, etc.) or secondary sources, such as public datasets.

    3. Data management: In this step, data is stored for analysis and use. Data management solutions must have the right level of access, security, and flexibility to meet an organization’s needs.

4. Analysis: The goal of this step is to get insights from the data. This can mean summarizing quantitative (numeric) data or running more advanced statistical analyses to test hypotheses, as well as transcribing and synthesizing qualitative (non-numeric) data.

    5. Use: Finally, after carefully carrying out the previous stages of the data cycle, an organization should be able to use the insights that it has gained from the data in order to improve program quality and further inform the program’s or organization’s strategy. 

    Data use is the ultimate goal of impact measurement. The organizations that Vera Solutions works with seek to measure and understand their impact in order to improve their operations. When each stage of the data cycle is strong, it creates a positive feedback loop, allowing an organization to adapt, grow, and ultimately amplify its impact on society and the world.
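
To make the cycle concrete, here is a minimal sketch in Python that walks a single, hypothetical indicator (“number of teachers trained”) through all five stages. The file name, column names, and target value are invented for illustration; the sketch is not a prescription for any particular organization’s setup, nor does it represent Amp Impact functionality.

```python
# A minimal, hypothetical walk through the five stages of the data cycle
# for one indicator: "number of teachers trained".
import csv

# 1. Plan: decide what to measure and what success would look like.
indicator_name = "Number of teachers trained"
target = 500  # hypothetical target for the reporting period

# 2. Data collection: attendance records gathered during training sessions
#    (here, read from a hypothetical CSV export of a survey tool).
with open("attendance_records.csv", newline="") as f:
    records = list(csv.DictReader(f))

# 3. Data management: keep only valid, de-duplicated records for analysis.
trained_ids = {row["teacher_id"] for row in records if row.get("attended") == "yes"}

# 4. Analysis: summarize the cleaned data into a result for the indicator.
result = len(trained_ids)
percent_of_target = 100 * result / target

# 5. Use: feed the insight back into program decisions.
print(f"{indicator_name}: {result} ({percent_of_target:.0f}% of target)")
if percent_of_target < 80:
    print("Below target: review recruitment and session scheduling.")
```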

    Planning for Impact Measurement

    Let’s dig in and learn more about a few of the tools that organizations often use in the ‘plan’ stage of the data cycle.

    Theory of Change

A theory of change is a statement (or sometimes a graphical representation) that loosely maps out an organization’s impact strategy. This is a ‘big picture’ tool; it doesn’t spell out the process for achieving impact in exact steps, but broadly summarizes the intended achievements.

The idea of a ‘theory of change’ is interpreted in many ways across different sectors and organizations. Some theories of change are written out as mission statements; others are drawn as pictorial representations. Some are fairly linear: “If we do X, then Y will happen.” Others are circular or unstructured. Much of the value of a theory of change lies in the process of developing it: to create one, an organization or program must articulate its ‘what’ (what are we working towards?) and its ‘how’ (how are we going to get there?) as a first step toward an impact measurement strategy.

    A few examples of theories of change are below.

     

Source: International Institute for Environment and Development, “Theory of change for increasing the influence of Least Developed Countries’ climate diplomacy” (Creative Commons license).

    In this first example, the ultimate mission or goal is represented in the center of the theory of change graphic, with supporting strategies, activities, and causal pathways represented in the intersecting circles. 

Source: Internews (https://internews.org/about/our-strategy/theory-change/).

     

This second example is from one of Vera Solutions’ clients, Internews. The organization has articulated a problem statement, an impact statement, and several key pillars of success. This example also includes some elements of a logic model, described in more detail in the next section.

Source: Save the Children, “Closing the Gap: Our 2030 Ambition and 2019-2021 Global Work Plan” (2019), https://www.savethechildren.org/content/dam/usa/reports/advocacy/scus-2019-21-plan-booklet.pdf.

    In this third example, Save the Children’s theory of change is represented in a circular fashion with four strategic pillars: Be the Innovator, Achieve Results at Scale, Be the Voice, and Build Partnerships.

    What features of these theory of change examples stick out to you most? Is there anything that you find particularly striking or helpful?

    Logic Model / Logical Framework

The next tool in the impact measurement planning toolkit is the logic model, also called a logical framework or log frame. Like a theory of change, a logic model is typically an illustration of a program’s path to impact; however, where a theory of change might show many different causal pathways leading to impact (or none at all), a logic model aims to be more direct and linear. Logic models are designed to “fill in the blanks” and illustrate how an organization’s resources (inputs) and activities will be used to achieve impact.

     

    Source: Chris Lysy, http://freshspectrum.com

    Logic models typically have five components (the exact number, and terms for each, can vary by organization):

1. Inputs: These are the resources that are used for a program. Examples might be staff (and their time and expertise), funding, materials, and training spaces.

2. Activities: What, exactly, the program will do. Activities can usually be expressed with a verb: distribute books to libraries, train volunteers, raise awareness through a public health campaign, and so on.

3. Outputs: The direct results of activities. Outputs are usually discrete and easily measured: how many books did we distribute? How many teachers were trained? Outputs can be thought of as the first layer of impact results.

    4. Outcomes: Measures of the change that a program has created for participants. Sometimes outcomes are further divided into short-term and medium-term outcomes. They are similar to outputs, but typically a bit more complex to measure. Outcomes are usually framed with verbs that describe change, such as increase, decrease, improve, etc.

    5. Impact: The change that results from the accumulation of outcomes over time, and typically at a system level, beyond the beneficiary level. Impact statements are typically broad and describe the ultimate goal of an organization or program.

    Logic model components.

     

    Some examples of logic models are illustrated below.

     

    Source: Minnesota Department of Health (https://www.health.state.mn.us/communities/practice/resources/phqitoolbox/images/logicmodel.jpg)

This logic model shows the inputs, outputs, and short-, medium-, and long-term outcomes for a disaster reduction program from the Minnesota Department of Health. The illustration also provides an excellent example of two additional components often found in logic models: key assumptions, which are statements that must hold true in order for the causal pathways of the logic model to work as intended, and external factors, outside influences that can affect the program’s success.

    Amp Impact’s logic model.

Here is another example: Amp Impact’s own logic model. It shows how inputs from Vera Solutions and the Salesforce platform translate into the three main impacts of the product.
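
To see how the five components connect in a structured way, a logic model can also be thought of as a simple data structure in which each level feeds the next. The sketch below is a generic, hypothetical example (a teacher-training scenario); the entries are invented for illustration and are not taken from Amp Impact’s actual logic model.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """A generic logic model: each level should plausibly lead to the next."""
    inputs: list[str] = field(default_factory=list)      # resources used by the program
    activities: list[str] = field(default_factory=list)  # what the program does
    outputs: list[str] = field(default_factory=list)     # direct, countable results
    outcomes: list[str] = field(default_factory=list)    # changes created for participants
    impact: str = ""                                      # long-term, system-level change

# Hypothetical example for a teacher-training program.
teacher_training = LogicModel(
    inputs=["Trainers", "Funding", "Training materials"],
    activities=["Develop curriculum", "Run teacher training sessions"],
    outputs=["Teachers complete the training course"],
    outcomes=["Improved classroom teaching practices"],
    impact="Better learning outcomes for students across the district",
)
```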

    Indicators

    After an organization has developed a logic model, the next step is to develop indicators. An indicator is a quantitative or qualitative factor that provides the means to measure the changes resulting from a program or intervention. Indicators are metrics that are used to track the progress of a program towards impact.

    You may have seen them referred to by other terms, such as metrics, markers, or key performance indicators (KPIs).

    Good indicators are SMART: 

    • Specific to the change that needs to be detected/measured

    • Measurable: can be used to track changes over time

    • Achievable: related to realistic goals or milestones

    • Relevant to the program or context

    • Time-bound: measurable for a specific period

    Indicators can be developed directly from the activities, outputs, and outcomes on the logic model. Below are a few examples of indicators that might come from Amp Impact’s logic model. What are some other possible indicators for these outputs and outcomes?

     

Indicators are frequently determined internally by an organization or program, but sometimes they may also be defined by donors or external sources. Many development organizations use common indicators as a way to consistently measure progress towards the same objectives. For example, the Sustainable Development Goals provide a set of common indicators to work towards global goals in health, education, economic growth, and reducing inequality. Other times, indicators are provided by grantmaking organizations to their grantees, to ensure that everyone in the portfolio is measuring results in the same way.
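
As a concrete (and entirely hypothetical) illustration, an indicator can be written down as a structured record that captures its SMART attributes alongside a baseline and a target, so that results can later be compared against it. The field names and values below are invented for illustration and do not correspond to how indicators are stored in Amp Impact.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """A hypothetical indicator record organized around the SMART criteria."""
    definition: str        # Specific: the exact change being measured
    unit: str              # Measurable: how it is counted or assessed
    target: float          # Achievable: a realistic goal or milestone
    logframe_level: str    # Relevant: which output or outcome it tracks
    reporting_period: str  # Time-bound: the period it covers
    baseline: float = 0.0  # starting value before the intervention

teachers_trained = Indicator(
    definition="Number of teachers completing the full training course",
    unit="teachers",
    target=500,
    logframe_level="Output",
    reporting_period="FY2024",
    baseline=0,
)
```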

    Amp Impact as an Impact Measurement Tool

    Finally, let’s look at how Amp Impact fits in with the data cycle and helps clients measure impact using the Salesforce platform.

     

    Frameworks

    During the ‘Plan’ stage of a project’s data cycle, Amp Impact’s Frameworks component can be used for capturing a project’s theory of change and/or log frame. Using the Frameworks component, project managers can define the impacts for their program and specify outcomes, outputs, activities, and inputs that make up the pathway to impact.

    This component will also display project indicators and the numeric baseline values, targets, and results for each indicator based on data that has been entered for the project.

     

    Indicators Catalog

    The Indicators catalog provides a way for organizations to store their portfolio of indicators in Salesforce. Using the Indicators component, project managers can select from this catalog to assign indicators to their project. They also have the option to create custom indicators for their project in order to track unique metrics for their log frame.

    When creating indicators for the catalog, users can specify key attributes for ‘SMART’ indicators such as the precise indicator definition, data source, scope, and frequency of measurement.

    Users can also upload reference data, such as the indicators for the Sustainable Development Goals, to add to their projects.

    Set Targets / Add Results (STAR)

On the Set Targets and Add Results pages in Amp Impact (commonly referred to as STAR pages), users can set targets for specific reporting periods or for the life of the project, and then enter results based on the data collected for each indicator.
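
The calculation behind these pages is simple: for each reporting period, the result that was entered is compared against the target that was set. The sketch below is a generic illustration of that comparison, with invented numbers; it is not a representation of how Amp Impact stores or calculates these values in Salesforce.

```python
# Hypothetical targets and results for one indicator, by reporting period.
targets = {"Q1": 100, "Q2": 150, "Q3": 150, "Q4": 100}
results = {"Q1": 90, "Q2": 160, "Q3": 120}  # Q4 not yet reported

for period, target in targets.items():
    result = results.get(period)
    if result is None:
        print(f"{period}: no result reported yet (target: {target})")
    else:
        print(f"{period}: {result} of {target} ({100 * result / target:.0f}% of target)")
```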

    Performance Graphs & Built-in Report/Dashboard Templates

    Amp Impact’s performance graphs allow users to visualize the results of their projects compared to the targets that were set. The Salesforce reports that come as part of the package offer different ways to break down results, such as by geographic region, thematic area, or against the objectives in the logic model.

     

    Amp Impact also offers Tableau and Power BI templates to facilitate external analysis and development of dashboards using these tools.