
Evaluating digital inclusion initiatives: How can we get better evidence for what works?

Executive summary

This document provides advice on options for supporting the evaluation of digital inclusion initiatives.

Evaluation is the systematic determination of value. It helps us to understand how well an initiative is working and how it could be better. When done and used well, evaluation is a key input into decisions about initiatives.

In a stocktake of New Zealand digital inclusion initiatives, we found that very few have been formally evaluated. Through the stocktake and interviews with key people, we identified 8 evaluation challenges:

  • Challenge 1 — Diversity among digital inclusion initiatives makes evaluation consistency very difficult.
  • Challenge 2 — We lack a shared understanding of how to measure digital inclusion outcomes.
  • Challenge 3 — For some digital inclusion initiatives, it’s very difficult to measure outcomes and to determine the initiative’s contribution. Difficulties include contacting and tracking participants, and isolating the initiative’s contribution to outcomes from other influences.
  • Challenge 4 — Providers of digital inclusion initiatives lack the resources to carry out evaluation.
  • Challenge 5 — There may be insufficient support for scaling up successful digital inclusion initiatives, which can discourage evaluation.
  • Challenge 6 — Conventional perceptions of how social outcomes are achieved do not consider the contribution of digital inclusion.
  • Challenge 7 — The influence of evaluation on funding decisions has lacked transparency, reducing providers’ motivation to evaluate.
  • Challenge 8 — There may be a lack of knowledge about evaluation.

It’s unlikely that these challenges will resolve themselves, and it’s important that we take action to address them. Without better evaluation, we will struggle to justify any increased funding for digital inclusion; we will have little evidence to inform decisions about what initiatives should be scaled up; and we will fail to identify opportunities for improvement.

We suggest the following actions to improve evaluation of digital inclusion initiatives

Embed incentives and support for evaluation into funding for digital inclusion initiatives:

  1. Embed evidence requirements into decisions about funding initiatives.
  2. Fund initiatives at a scale and for a duration that supports evaluation.
  3. Allocate funding specifically to evaluation.

Build evaluation skills and knowledge:

  1. Develop guidance on evaluation of digital inclusion.
  2. Facilitate access to tailored evaluation advice.
  3. Promote inter-organisational sharing of experiences in evaluating digital inclusion.

Consider using large-scale analytics to evaluate digital inclusion initiatives:

  1. Assess the feasibility and suitability of using large-scale analytics to evaluate the various types of digital inclusion initiatives.
  2. Where large-scale analytics are feasible and suitable, begin by ensuring that initiatives have the needed prerequisites in place (e.g. informed consent from participants and collection of appropriate data).

Promote measurement of digital inclusion alongside other outcomes:

  1. Work with other agencies to embed measurement of digital inclusion outcomes into their monitoring and evaluation, where appropriate.

Purpose and scope of this document

This document provides advice on options for supporting the evaluation of government and non-government digital inclusion initiatives.

As described in the 2019 Action Plan — Building the foundations, this contributes to the government’s role to ‘lead’ and comprises part of the next step to “…investigate how to measure the success of government digital inclusion initiatives”.

This document includes a discussion of scope (what evaluation is and what ‘digital inclusion initiatives’ are), and a review of the current state of digital inclusion initiatives. Challenges in evaluating digital inclusion initiatives are identified, and actions to address those challenges are suggested.

Findings and recommendations are based on a stocktake of digital inclusion initiatives, interviews with key people, and a limited review of New Zealand and international literature on evaluation of digital inclusion.

What is evaluation?

Evaluation tells us about the value of an initiative

Evaluation is the systematic determination of the value of something. We all evaluate things every day, and we use those value judgements to make decisions. In the discipline of formal evaluation, we combine evidence with explicit criteria for value, to understand:

  • how well an initiative is working
  • in what ways it is working well or not so well
  • how it could be better.

Evaluation is not method-specific; many techniques, quantitative and qualitative, can be used to evaluate an initiative.

Evaluation helps us make good decisions

When done well and used constructively, evaluation forms a key input into decisions about the future of an initiative. Evaluation can:

  • provide accountability to funders and stakeholders
  • support arguments for more funding (or less)
  • identify ways we can improve initiatives
  • assist decisions about where to prioritise effort
  • support our personal satisfaction and integrity by showing us whether we’re making a difference.

Further reading on evaluation

Several excellent resources with further information on evaluation are:

  • Superu (2017). Making sense of evaluation: a handbook for everyone
    User-friendly entry-level guidance that provides an overview of evaluation concepts and processes.[Footnote 1]
  • Davidson (2005).[Footnote 2]
    Textbook providing guidance on how to evaluate. Describes the types of questions that evaluators need to answer, how to choose appropriate methods to answer the questions, and how to combine qualitative and quantitative data with relevant values to draw evaluative conclusions.
  • Better Evaluation
    Comprehensive and searchable website providing descriptions and examples of many different evaluation approaches. Created by an international collaboration of evaluators. Very useful for finding out about specific evaluation methods and topics.
  • What works
    Aotearoa New Zealand website providing advice, case studies, and links to resources on evaluation.

What are digital inclusion initiatives?

The Digital Inclusion Blueprint — Te Mahere mō te Whakaurunga Matihiko states that being digitally included currently means: “…having convenient access to, and the ability to confidently use, the internet through devices such as computers, smartphones and tablets”.

The Blueprint acknowledges that what is needed to be digitally included will change as technology and society evolve (for example, coding skills may become necessary in future), but it focuses our current effort on: “…enabling non-users and sporadic users of the internet to become users, rather than on upskilling people who already access and use the internet in their day-to-day lives.”

The Blueprint describes 4 elements that are needed for a person to be digitally included.

Becoming digitally included occurs when people have the motivation, access, skills and trust to conveniently and confidently use the internet.

Initiatives that are in scope

We define digital inclusion initiatives as services, projects or programmes that contribute to enabling everyone to conveniently and confidently use digital devices and the internet, via improving motivation, access, skills or trust.

Digital inclusion initiatives contribute to enabling everyone to conveniently and confidently use digital devices and the internet.

In-scope initiatives include:

  • services that develop people’s motivation, access, skills or trust, and that are available to people who are not yet digitally included, such as:
    • training in foundational digital skills
    • arranging affordable access to devices and internet connections
  • initiatives that improve online safety and trust, for example through improving people’s awareness of, and resilience to, online threats such as scams and privacy breaches, and through protecting Māori data sovereignty
  • services, projects or programmes that make online content more accessible for disabled people.

Initiatives that are out of scope

There are other types of initiatives that touch on aspects of digital inclusion but are out of scope for the time being.

Out-of-scope initiatives include:

  • initiatives that focus on improving motivation and digital skills among people who already access and use the internet in their daily lives, such as mentoring and training courses in coding and robotics (these initiatives do not fit the Blueprint’s current definition of digital inclusion)
  • initiatives that support the wider digital inclusion system, for example through growing New Zealand’s understanding of digital inclusion, developing and implementing standards and frameworks to support digital inclusion or making connections between other initiatives (these activities are important, but are not a priority for the evaluation of digital inclusion).

Initiatives can focus on digital inclusion alongside other outcomes

Digital inclusion initiatives can (and usually do) focus on other outcomes alongside digital inclusion. For example, many digital inclusion initiatives also have education, employment or other social goals. This is appropriate, as digital inclusion is an enabler of outcomes in other areas, and because research shows that engagement in digital inclusion is better when people are ‘hooked in’ through a personal interest or when digital inclusion initiatives are embedded within other services.[Footnote 3][Footnote 4]

Digital inclusion policy, funding and evaluation must allow for this. Support for digital inclusion must be flexible enough to allow initiatives to work towards, report on, and evaluate digital inclusion alongside other key goals.

What is the current state of digital inclusion initiatives?

Four main types of initiatives can be distinguished based on the digital inclusion elements they address and the groups they reach.

The digital inclusion team is developing a stocktake of government and non-government digital inclusion initiatives. The stocktake attempts to list all digital inclusion initiatives in New Zealand, and gathers information on initiatives’ characteristics such as size, purpose and the groups they work with.

The stocktake has identified more than 60 currently active New Zealand services, projects or programmes that fit our definition for digital inclusion initiatives.

Of these, over 90% can be classified into 4 types, based on the digital inclusion elements they address and the groups of people they reach.

1. Connectivity for everyone

Initiatives that help arrange access to an internet connection, in a non-personalised way, and do not include digital skills training for users of the service.


2. Connectivity and skills for low income families with children

Initiatives that work with school age children and their families to teach digital skills, and arrange connectivity. All initiatives target low income families or low decile schools.


3. Basic skills for everyone

Basic computing and digital literacy training for working age adults or seniors. Some are oriented to work-relevant skills and some to socially-relevant skills.


4. Building online trust

National-level education resources, campaigns and tools that aim to build online trust and security.


Further observations about these initiatives will be presented in an upcoming report on the analysis of the stocktake.

Most New Zealand digital inclusion initiatives haven’t been evaluated

Through the stocktake of digital inclusion initiatives, we found that around 20% of digital inclusion initiatives have been formally evaluated, or have a future evaluation planned. Some of the remainder have monitoring in place that may support future evaluation. Among the formal evaluations that we found, we saw very little consistency in the digital inclusion-related outcomes that have been measured.

Interviews with key people (Appendix 1) confirmed that little formal evaluation has been done, and that we lack a shared understanding about what digital inclusion outcomes we should measure and how we should measure them.

Eight main challenges with evaluating digital inclusion

Our stocktake of New Zealand digital inclusion initiatives showed that many different organisations are delivering digital inclusion initiatives in New Zealand, and that formal evaluation is rarely done and, when it is done, methods are not consistent across initiatives.

Drawing from the stocktake and 15 interviews with key people (Appendix 1), we identified 8 main challenges with evaluating digital inclusion initiatives.

Challenge 1: Diversity among initiatives makes evaluation consistency very difficult

There is a great deal of diversity across digital inclusion initiatives. For example, initiatives that facilitate connectivity for everyone are very different to skills training courses. Different types of initiatives require different evaluation methods and measures, and their evaluation findings will only rarely be directly comparable.

Challenge 2: We lack a shared understanding of how to measure digital inclusion outcomes

Among the evaluations of New Zealand digital inclusion initiatives, we found almost no consistency in the digital inclusion outcomes that were measured, even where initiatives were similar enough that there could have been consistency. Several key interviewees commented that New Zealand lacks an agreed set of digital inclusion outcomes, and that they would like advice on what outcomes to measure and how to measure them.

This differs from some other areas. For example, standard measures of various education and health outcomes exist and are commonly used in evaluation.

Challenge 3: For some digital inclusion initiatives, it’s very difficult to measure outcomes and to determine the initiative’s contribution

The following difficulties with evaluating outcomes were described by key interviewees.

  • It’s difficult to track longer term outcomes among participants, especially when they're transient and reluctant to trust outsiders. For example, this has made it hard to measure educational and employment outcomes among participants in ‘connectivity and skills for low income families with children’ initiatives. The low-income groups that these initiatives target can be highly transient and reluctant to participate in surveys and evaluation.
  • Some initiatives cannot identify participants, making it hard to measure anything about them. This applies to the ‘building online trust’ initiatives and to most of the ‘connectivity for everyone’ initiatives, which often have no built-in way to find out who they’re reaching or what behavioural changes are happening among the people they reach.
  • Wellbeing outcomes for individuals and communities have multiple contributing causes. Isolating the effect of an initiative from other factors is difficult. This applies to all types of digital inclusion initiatives and is a very common challenge for evaluation more generally.

Challenge 4: Providers of digital inclusion initiatives lack the resources to carry out evaluation

Key interviewees described major challenges with funding resources for evaluation among community and government providers of digital inclusion initiatives. Many providers:

  • lack evaluation capability
  • are so busy delivering core services that they cannot find time to do evaluation
  • do not receive funding for evaluation (and are under-resourced for administration generally)
  • are funded by multiple small grants, creating administrative inefficiencies and resulting in a situation where no single grant is large enough to explicitly support evaluation.

Challenge 5: There may be insufficient support for scaling up successful digital inclusion initiatives, which can discourage evaluation

Scale is important for evaluation because larger-scale initiatives can more easily find resources for evaluation and embed evaluation into standard processes. Likewise, evaluation is important for scaling up because evaluation findings can support the case to do so, providing evidence of success and an understanding of the critical factors that should be retained as the initiative grows.

Several key interviewees said that New Zealand lacks long-term funding for digital inclusion initiatives and does not support successful initiatives to scale up.

Through the stocktake, we found at least 10 initiatives that have been successfully rolled out across multiple locations, suggesting that we have supported some scaling up. However, the stocktake also indicated a high turnover of initiatives, with around 10% having ceased operation since the stocktake was first drafted in early 2018. While it’s possible that some initiatives ceased because they were unsuccessful, one key interviewee pointed to some successful initiatives that had stopped because the people running them had ‘burnt out’ from the stress of insecure and short-term funding.

Challenge 6: Conventional perceptions of how social outcomes are achieved do not consider the contribution of digital inclusion

Key interviewees suggested that some digital inclusion initiatives have struggled to retain funding because they are not thought to directly affect the outcomes that government agencies traditionally focus on (such as health, education or employment). Digital inclusion initiatives are at risk of falling between agency siloes, even though there is evidence suggesting that they can facilitate achievement of outcomes in many established areas.

A 2015 evaluation of Computers in Homes found that:

“… the benefits of digital inclusion impact on the outcomes of several agencies including the Ministry of Business, Innovation and Employment, Department of Internal Affairs, Ministry of Social Development and Ministry of Education. It is an archetypal case of an intervention at risk of being orphaned because it is not the priority of any one particular agency, but that has the potential to strongly contribute to whole-of-government outcomes.”[Footnote 5]

This emphasises the need to develop a shared understanding of how digital inclusion affects social outcomes, to help initiatives to demonstrate their value.

Challenge 7: The influence of evaluation on funding decisions has lacked transparency, reducing providers’ motivation to evaluate

There is scepticism about the value of formal evaluation among some providers of digital inclusion initiatives. This scepticism is in part based on their experiences with particular funding decisions that either did not take account of evaluation findings or lacked transparency in how they did so.

Challenge 8: There may be a lack of knowledge about evaluation

Some key interviewees suggested that providers may not have a good understanding of how evaluation can contribute to initiative improvement.

Is the current state good enough?

We need improved evaluation to support increased funding, intelligent scaling up and better outcomes.

As described in the section Most New Zealand digital inclusion initiatives haven’t been evaluated, there has been little formal evaluation of New Zealand digital inclusion initiatives. We could encourage more and better evaluation by addressing the challenges with evaluation capability, knowledge, consistency and motivation. This will need support, as the challenges are long-standing and are unlikely to be resolved on their own.

Or we could continue with the status quo, but this would have the following drawbacks.

  • We would struggle to make a good case for increasing funding for digital inclusion. More government funding for digital inclusion would almost certainly require a budget bid, and evaluation would be needed to support that. Budget initiative submissions must present a well-evidenced analysis of how the initiative will benefit wellbeing, a strong intervention logic, and a plan for monitoring and evaluation.[Footnote 6] We cannot yet meet these requirements.
  • We will continue to have very little evidence to support decisions about which initiatives should be scaled up. If government intends to fund digital inclusion more extensively, we will need evidence on which initiatives are ready to grow, which will create the most beneficial outcomes, and which are suitable for different groups.
  • We are missing opportunities to improve initiatives. While most providers have feedback mechanisms in place to assist service improvement, better evaluation capability would supplement this.

Actions to improve the evaluation of digital inclusion initiatives

There are 4 areas in which actions could be taken to address the evaluation challenges. In this section, we describe each action in detail and note the challenges that may be addressed by these actions.

Embed incentives and support for evaluation into funding

The actions in this area are based on the assumption that funding for digital inclusion initiatives will be developed. However, it’s worth noting that no decisions about funding support have been taken, and that funding is only one mechanism by which DIA can support digital inclusion.


Action 1: Embed evidence requirements into decisions about whether to fund initiatives

Digital inclusion funding decisions should be informed by good evidence for what works. This can be encouraged by embedding an evidence standard into funding processes. The standard would:

  • provide a consistent and transparent mechanism for evidence to influence funding decisions
  • increase the visibility of evaluation as a decision-making tool
  • motivate funders to use evaluation findings in decision-making
  • motivate more evaluation, better quality evaluation, and more consistent evaluation.

This will address the lack of motivation to evaluate (Challenge 7) as long as it’s accompanied by support for evaluation capacity and a commitment to use evaluation findings in funding decisions.

We recommend adopting the Evidence Rating Scale published by Superu.[Footnote 7] This is a standard for grading initiatives’ strength of evidence for effectiveness, and suitability for scaling up or implementation in new locations. Appendix 2 describes the main features of this scale.

Among the various standards that could be adopted, the Superu standard is the best fit with digital inclusion: it is inclusive of different evaluation methods (including western and Māori approaches), it covers early-stage initiatives, and it creates a clear evidence progression pathway for them. It is also consistent with guidance from the Treasury, which suggests using the Superu scale in rating the evidence quality that underlies budget bid intervention logic and cost-benefit analyses.[Footnote 8]

The scale is tiered, with higher standards of evidence applied as initiatives become more established. Digital inclusion initiatives should be required to meet the following standards to be eligible for funding.

Level 1: Pilot and early-stage initiatives

Evidence requirements:

  • an evidence-based theory of change
  • an evaluation plan.

Level 2: Small to medium initiatives that have been operating for around 1 to 3 years

Evidence requirements:

  • information on efficiency (delivery of outputs compared to inputs)
  • at least 1 evaluation that shows some beneficial effects and meets the standards described in Appendix 2
  • documentation and procedures that show how the initiative is implemented and what resources are required to deliver it.

Level 3: Medium to large initiatives that have been operating for around 3 to 10 years

Evidence requirements:

  • information on efficiency (delivery of outputs compared to inputs)
  • at least 1 evaluation that provides convincing evidence of beneficial effects and meets the standards described in Appendix 2
  • an assessment of cost relative to impact
  • evidence for the causal mechanism (how and why the initiative leads to outcomes)
  • documentation and procedures that show how the initiative is implemented and what resources are required to deliver it
  • regular reviews of procedures, manuals and staff training processes.

As time goes on, some initiatives may be able to reach the requirements of level 4 on the scale (Appendix 2), but for now the 3 levels listed above should be sufficient.

If this standard is adopted, it will need to be supported by funding processes that:

  • include an explicit review of evidence
  • support initiatives to meet higher levels of the standard as they become more established (for example, by funding evaluation)
  • support the desired mix of early stage and more established initiatives, possibly using tiered funding (described in Action 2).

Action 2: Fund initiatives at a scale and for a duration that supports evaluation

As described under Challenges 4 and 5, digital inclusion initiatives struggle to find the longer term and larger scale funding that would put them on a more sustainable footing, and this is a barrier to good evaluation.

There is no clear-cut figure for the size and duration of grant that is needed to support evaluation. Further consultation with organisations that work in digital inclusion will be needed to determine appropriate funding levels and durations. The following points may provide a starting point for these discussions.

  • The New Zealand Productivity Commission, in their wide-ranging inquiry into social services, recommended applying a standard duration of 3 years to social services contracts unless risk analysis indicates otherwise.[Footnote 9]
  • Among organisations that use a rule of thumb to specify evaluation budgets, common estimates range from 5% to 20% of programme costs (see the worked example after this list).[Footnote 10]
  • Many factors affect monitoring and evaluation costs, including geography, sample size, how hard people are to contact, and the complexity of what’s being measured.[Footnote 11]
  • Dedicating more resources to evaluation may be appropriate for innovative, risky, pilot or high profile initiatives, and in situations where the evaluation findings may have a large influence on future policy.[Footnote 12][Footnote 13]
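
To make the rule of thumb above concrete, the short sketch below applies the 5% to 20% range to a hypothetical grant. The dollar figures and the 3-year duration are illustrative assumptions only, not recommended funding levels.

    # Illustrative only: evaluation budget range under a 5-20% rule of thumb.
    # The grant size and duration below are hypothetical assumptions.
    annual_grant = 200_000                        # hypothetical grant per year (NZD)
    duration_years = 3                            # e.g. a standard 3-year contract duration
    total_programme_cost = annual_grant * duration_years   # 600,000

    low_share, high_share = 0.05, 0.20            # 5% to 20% of programme costs
    print(f"Indicative evaluation budget: "
          f"${total_programme_cost * low_share:,.0f} to ${total_programme_cost * high_share:,.0f}")
    # -> Indicative evaluation budget: $30,000 to $120,000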

A tiered funding model could be worth considering, in which separate pools of funding are reserved for early, progressing and mature initiatives.

Early stage initiatives would be eligible for smaller amounts of funding to support initial testing and validation (and may receive proportionately more funding for monitoring and evaluation). Initiatives that meet level 2 of the evidence standard would be eligible for larger amounts of funding and would use part of this funding to develop evidence that meets level 3 of the standard. Initiatives that meet level 3 of the standard would be eligible for a larger amount of funding again, to support scaling up.

Results for America describes how tiered funding has been implemented by agencies in the USA.[Footnote 14]

Action 3: Allocate funding specifically to evaluation

Closely related to actions 1 and 2, evaluation should be supported with funding that is specifically earmarked for evaluation. A study of US digital inclusion initiatives found that organisations that were further along in evaluating their digital inclusion programmes were either larger, with internal researchers on staff who could focus on evaluation, or received support to focus specifically on outcomes-based evaluation.

There are 2 main possible approaches.

  • Funding for digital inclusion initiatives could include specified amounts for evaluation and a requirement that providers report on the results of evaluation.
  • Funding for evaluation could be retained by the funder and used to pay for a funder-led evaluation across several initiatives. This could help to achieve economies of scale and promote consistency in evaluation, but the resource required to gain buy-in across parties and to set up shared measurement shouldn’t be underestimated.[Footnote 15]

Whatever approach is chosen, research on building evaluation capacity suggests that better results are achieved when there is a foundation of trust between the funder and the provider (that includes respect for self-determination and provider expertise), and where there is joint negotiation of evaluation expectations between the funder and the provider.[Footnote 16]

Build evaluation skills and knowledge


Action 4: Develop guidance on evaluation of digital inclusion

Evaluation guidance will help to develop our collective understanding of how to evaluate digital inclusion initiatives and how to address the difficulties with measuring outcomes (Challenges 2 and 3).

As a first step, we recommend collecting and disseminating examples of good practice in evaluating digital inclusion. This will demonstrate feasibility and inform further consensus-building efforts. Ultimately, guidance on what digital inclusion outcomes to measure and how to measure them could be developed, similar to the bank of outcomes produced by the UK Government Digital Services[Footnote 17] but tailored for New Zealand, and building on the digital inclusion outcomes framework.[Footnote 18]

Action 5: Facilitate access to tailored evaluation advice

While guidance on evaluation will be helpful, more will be needed to assist providers who don’t have in-house evaluation expertise. Tailored evaluation advice could help these providers to build capacity, implement outcomes measurement and evaluation, and meet any evidence standards that may be required for funding.

Further work is needed to determine how to best facilitate this. Superu identified expert evaluator “intermediaries”, independent from funders and able to be chosen by providers, as an important component of building evaluation capacity in non-government organisations (NGOs).[Footnote 19] These “intermediaries” could be evaluators working in the tertiary education or private sectors. Superu reported on ‘lessons learned’ from undertaking evaluations and building evaluation capability in NGOs, and found that expert support was crucial.[Footnote 20]

Action 6: Promote inter-organisational sharing of experiences

As part of government’s role to ‘connect’ the digital inclusion sector, DIA could promote information-sharing between organisations that are working on digital inclusion, with evaluation among the topics discussed. This would enable organisations to learn from each other’s experiences and to collectively address upcoming issues. Networking is one of the important components of building evaluation capacity.[Footnote 21]

There are various mechanisms by which experiences could be shared, including workshops, meetings, conferences, and online discussion groups.

Further advice on developing a community of practice is provided by Better Evaluation.[Footnote 22]

Consider using large-scale analytics to evaluate digital inclusion initiatives


Action 7: Assess the feasibility and suitability of using large-scale analytics to evaluate different types of digital inclusion initiatives

With recent advances in data linking and availability, more evaluations are using administrative data and analytics-based approaches to measure impact, often alongside other methods.

In New Zealand, the Integrated Data Infrastructure (IDI), which links together government administrative and survey datasets, has facilitated this shift by enabling us to more easily assess associations between activities in one area (for example, educational participation, as shown by student enrolment data) and outcomes in another area (for example, earnings, as shown by tax data).[Footnote 23] The Social Investment Agency has been especially active in using and promoting this approach.[Footnote 24][Footnote 25]

This has the potential to be useful for evaluation of digital inclusion initiatives, and several key interviewees suggested that it holds promise. The basic concept of how it would work is as follows.

  1. Providers of digital inclusion initiatives would collect relevant data on their participants and obtain permission (where required) to use that data, in aggregate, for research and evaluation.
  2. The data would be taken into an environment, such as the IDI, and linked with other datasets to:
     a. extract relevant aggregate data about participants from other datasets (for example, employment, health and education outcomes before and after participation in the initiative)
     b. extract aggregate data on a ‘matched’ comparison group of people who didn’t participate in the initiative (for example, people with similar characteristics for whom the same data on employment, health and education outcomes is available).
  3. Outcomes would be compared across the 2 groups using appropriate statistical methods. The difference in outcomes between the participant group and the matched group would be attributed to the initiative.

An example of this approach is the “social housing test case”, in which outcomes for people who received social housing were compared to outcomes for people who applied for but didn’t receive social housing.[Footnote 26]
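
As a rough illustration of step 3, the sketch below matches each participant to the most similar non-participant on a couple of covariates and compares mean outcomes between the two groups. The column names (age, prior_income, outcome) and all figures are hypothetical assumptions; a real IDI analysis would involve data linking, confidentiality rules and more robust matching and inference methods than this simple nearest-neighbour approach.

    import numpy as np
    import pandas as pd

    # Hypothetical de-identified records: each row is a person, with covariates used
    # for matching and an outcome measured after the initiative ran.
    people = pd.DataFrame({
        "participated": [1, 1, 1, 0, 0, 0, 0, 0],
        "age":          [34, 52, 41, 36, 50, 44, 29, 61],
        "prior_income": [31_000, 45_000, 38_000, 30_500, 46_000, 39_500, 27_000, 52_000],
        "outcome":      [42_000, 51_000, 47_000, 36_000, 48_000, 41_000, 31_000, 54_000],
    })

    covariates = ["age", "prior_income"]
    participants = people[people["participated"] == 1]
    pool = people[people["participated"] == 0]

    # Standardise covariates so that age and income contribute comparably to distance.
    means, stds = people[covariates].mean(), people[covariates].std()
    z_participants = (participants[covariates] - means) / stds
    z_pool = (pool[covariates] - means) / stds

    # Nearest-neighbour matching: for each participant, take the most similar non-participant.
    matched_outcomes = []
    for _, person in z_participants.iterrows():
        distances = np.linalg.norm(z_pool.values - person.values, axis=1)
        matched_outcomes.append(pool.iloc[distances.argmin()]["outcome"])

    effect = participants["outcome"].mean() - np.mean(matched_outcomes)
    print(f"Estimated difference in mean outcome (participants vs matched group): {effect:,.0f}")

In practice, matching without replacement, propensity-score methods and sensitivity checks would typically be preferred; as noted below, a poorly matched comparison group can produce misleading results.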

The approach has the following potential advantages.

  • It could reduce the need for data collection, thereby reducing costs and participant burden, for example, by re-using existing data instead of conducting follow-up surveys of participants.
  • It could allow us to better quantify the impact of initiatives on outcomes, by making it easier to create comparison groups and to retrieve before and after measures for participants.
  • It could help us to investigate outcomes across different domains (for example, health, education and employment), which is especially important for digital inclusion, as it can enable many different social outcomes.
  • It may allow a more robust measurement of some outcomes by replacing some self-reported data (which is subject to various biases) with administrative data that records actual events.
  • It could encourage better standardisation of outcome measures, improving comparability across initiatives.

But there are difficulties with this approach, and it may only ever work well for specific types of large initiatives. Some of the difficulties are that:

  • for small to medium initiatives, there may be too few participants, raising privacy issues and reducing statistical power to the point where no useful conclusions can be drawn (see the illustration after this list)
  • there will not always be appropriate outcome indicators in existing administrative and survey datasets. In particular, we currently lack good national data on digital inclusion outcomes[Footnote 27]
  • we will not always have good data for generating a matched comparison group. A poorly matched comparison group can lead to inaccurate and misleading results
  • these approaches can be hard to understand, making it difficult to obtain genuinely informed consent, especially when participants face low literacy or other barriers to comprehension
  • outcomes data from national sample surveys (such as the General Social Survey) is unlikely to be useful because of the low likelihood of finding initiative participants in a survey sample.
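
To illustrate the sample-size constraint in the first point above, the sketch below uses a conventional two-group power calculation (via the statsmodels library, chosen here only as an example; the document does not prescribe any particular method) to show the smallest standardised effect an evaluation could reliably detect at different participant numbers.

    from statsmodels.stats.power import TTestIndPower

    power_analysis = TTestIndPower()

    # Smallest standardised effect size (Cohen's d) detectable with 80% power at a 5%
    # significance level, comparing a participant group with an equally sized comparison group.
    for n_per_group in (20, 50, 200, 1000):
        min_detectable = power_analysis.solve_power(
            effect_size=None, nobs1=n_per_group, ratio=1.0, alpha=0.05, power=0.8
        )
        print(f"{n_per_group:>5} per group -> minimum detectable standardised effect: {min_detectable:.2f}")

On these assumptions, an initiative with only 20 traceable participants could reliably detect only quite large effects, which is one reason analytics-based evaluation is better suited to larger initiatives.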

In addition to these problems, there are some current issues with the resources required for this type of work, although these may be resolved in the future. There are:

  • long lead times for getting data into the IDI and setting up projects with the necessary approvals to proceed. Systems that might help with this, such as the Social Investment Agency’s Data Exchange, are under development[Footnote 28]
  • currently few people available with the necessary analytical expertise. Demand for those people is high, making their skills costly to buy
  • some quality issues with administrative data, and the cost of data cleaning is frequently underestimated.

Before embarking on this approach, an assessment of feasibility is needed to better understand the extent of the limitations, to clarify which types of initiatives could be validly evaluated using this approach, and to determine what prerequisites (for example, monitoring data) need to be in place.

Action 8: Ensure the prerequisites for large-scale analytics are in place

Once we have a better understanding of which initiatives could validly use large-scale analytics, we could work towards ensuring that they have the required prerequisites in place. These prerequisites might include:

  • collecting appropriate informed consent from participants
  • a good understanding of, and commitment to, ethical requirements
  • collecting appropriate monitoring data
  • establishing data quality assurance processes
  • ensuring that initiative leaders have a good understanding of how the findings can be used to inform decisions (and how they should not be used).

Further relevant advice is provided by Superu[Footnote 29] and the Social Investment Agency.[Footnote 30]

Promote measurement of digital inclusion alongside other outcomes

Action 9: Work with other agencies to embed the measurement of digital inclusion outcomes

Across government, activities that lead to digital inclusion are funded and carried out by agencies that have particular social and economic goals. If those agencies don’t measure digital inclusion outcomes alongside the outcomes that they normally measure, and don’t assess the contribution of digital inclusion to those outcomes, there is a risk that digital inclusion activities will be ignored, under-valued and then discontinued (Challenge 6).

To address this, DIA should work with other government agencies, encouraging them to embed digital inclusion measures into their monitoring, evaluation and funding, where appropriate.

Actions to promote the dissemination and use of evaluation findings were also considered. These are commonly included in organisational policies for evaluation (for example: Department of State, United States of America;[Footnote 31] Executive Board of the United Nations Development Programme, The United Nations Population Fund, & The United Nations Office for Project Services;[Footnote 32] Ministry of Foreign Affairs and Trade;[Footnote 33] UNICEF Evaluation Office[Footnote 34]). However, at this stage, it’s a higher priority to encourage more frequent and consistent evaluation.

Appendix 1. Interviews with key people

Fifteen interviews were conducted with key people in organisations that fund, carry out or develop policy related to digital inclusion initiatives. The interviewers asked:

  • about the digital inclusion-related work that they did, and any existing monitoring and evaluation
  • how they felt evaluation could add value
  • what challenges make it difficult to evaluate digital inclusion initiatives
  • what kinds of evaluation advice, resources or guidance they would find helpful
  • whether they could identify good examples of evaluation of digital inclusion.

The key interviewees were from:

  • Policy Regulation and Communities, DIA
  • Aotearoa People’s Network Kaharoa (APNK), DIA
  • Office of the National Librarian, DIA
  • Community Operations (Hāpai Hapori), DIA
  • Digital Strategy - Equitable Access, Ministry of Education
  • Channels team, Ministry of Education
  • Digital Economy team, Ministry of Business, Innovation and Employment (MBIE)
  • Infrastructure team, MBIE
  • CERT NZ, MBIE
  • Ka Hao, Te Puni Kōkiri
  • Digital Inclusion Alliance Aotearoa
  • 20/20 Trust
  • Netsafe
  • InternetNZ.

In addition, feedback on early findings was sought from members of the Digital Inclusion sub-group of the Digital Economy and Digital Inclusion Ministerial Advisory Group (DEDIMAG) and the digital inclusion team at DIA.

Appendix 2. Key features of the Superu Evidence Rating Scale

Level 1: Pilot and early stage initiatives

Evidence standards required for funding

There is:

  • a strong theory of change (or logic model) based on evidence
  • an evaluation plan.

Level 2: Small to medium sized initiatives that have been operating for around 1 to 3 years

Evidence standards required for funding

  • There is reported information about efficiency (delivery of outputs relative to inputs).
  • It has been evaluated at least once, showing some beneficial effects. The evaluation:
    • used a convincing method to measure change, such as pre- and post-analysis, or a recognised qualitative method
    • used valid, reliable and appropriate methods
    • analysed data appropriately and presents conclusions supported by evidence.
  • Documentation and procedures provide clarity on how the initiative is implemented and the resources required to deliver it.

Level 3: Medium to large initiatives that have been operating for around 3 to 10 years

Evidence standards required for funding

  • There is reported information about efficiency (delivery of outputs relative to inputs).
  • It has been evaluated at least once, showing convincing evidence of beneficial effects. The evaluation:
    • measured change using pre- and post-analysis of outcomes
    • investigated attribution of outcomes to the initiative using a comparison group or other appropriate data, ideally with long-term follow-up
    • used other valid methods to examine attribution if it is impossible or extremely difficult to obtain comparison data
    • presents good evidence that intermediate outcomes predict long term outcomes, if it is not possible or extremely difficult to do long-term follow-up
    • used valid, reliable and appropriate methods
    • analysed data appropriately and presents conclusions supported by evidence.
  • There is an assessment of the cost of the initiative relative to its impacts.
  • There is evidence that shows how and why the initiative leads to outcomes.
  • Documentation and procedures provide clarity on how the initiative is implemented and the resources required to deliver it.
  • There is regular review of procedures, manuals and staff training processes.

Level 4: Very large initiatives that have been operating for around 8 years or longer

Evidence standards required for funding

  • There is reported information about efficiency (delivery of outputs relative to inputs).
  • At least 2 evaluations show convincing evidence of beneficial effects. They:
    • measured change using pre- and post-analysis of outcomes
    • investigated attribution of outcomes to the initiative using a comparison group or other appropriate data, ideally with long-term follow-up
    • used other valid methods to examine attribution if it is impossible or extremely difficult to obtain comparison data
    • presented good evidence that intermediate outcomes predict long-term outcomes, if it is not possible or very difficult to do long-term follow-up
    • used valid, reliable and appropriate methods
    • analysed data appropriately and presented conclusions supported by evidence.
  • At least 1 cost-benefit analysis has been completed, using methods that meet established standards.
  • There is evidence that shows how and why the initiative leads to outcomes.
  • There is evidence about which elements of the initiative are necessary to implement with fidelity, and which can be adapted (e.g. to local conditions).
  • There is evidence of the impact of the initiative on different sub-groups in the target population, for example, outcomes for different ages, ethnicities, genders.
  • There is evidence that the initiative is consistently delivered as planned and reaches its target groups.
  • Documentation and procedures provide clarity on how the initiative is implemented and the resources required to deliver it.
  • There is regular review of procedures, manuals and staff training processes.
  • Technical support is available to help implement the initiative in new settings.

More detail on each standard is given in Superu (2017).[Footnote 35]
