The Independent Review of the Role of Metrics in Research Assessment and Management was set up in April 2014 to investigate the current and potential future roles that quantitative indicators can play in the assessment and management of research.
In our Strategy 2020, we laid out our mission to create value for customers through innovation that delivers positive impact for Australia. Each year we provide our stakeholders with robust evidence that we're achieving this goal.
Our impact evaluation activities provide credible evidence of the effects of our research and innovation activities on the economy, environment and society.
Our research activities and their impacts are diverse in nature and occur across many sectors of the economy. While some impacts are primarily economic and capable of being evaluated in monetary terms, many others – especially those relating to environmental or social effects – may have to be evaluated qualitatively. Ultimately though, each impact must be assessed within the context of a common framework if a comprehensive understanding of our impact and return on investment is to be developed.
The Australian Centre for International Agricultural Research (ACIAR) has been systematically undertaking independent impact assessment studies of its portfolio of research activities for more than 20 years. Over the last five years, we have added adoption studies to this program.
Three years ago ACIAR commissioned two reviews of the impact assessment studies. These summarised the overall return on ACIAR's research and development (R&D) investments and also assessed the consistency of the studies. Since a range of independent consultants are commissioned to undertake these assessments, and because these studies are complex and diverse, it was not surprising that the reviewers found variability in the detail, rigour and presentation of the results.
The Novo Nordisk Foundation has just released its first report on the Foundation's impact on the scientific community. The report measures the Foundation's input of resources to the research community and the subsequent research activities in 2006–2015, and includes bibliometric analyses of how the Foundation affected public research in 2006–2013. The report is based on the Foundation's grant recipients' systematic reporting of their activities using the online system researchfish®. The report shows, among other things, that the Foundation's annual pay-outs to research projects have increased sevenfold since 2006 and in 2015 reached DKK 927 million.
While the success of university research can be viewed in measures of excellence, it can also be found in its economic, social, and environmental impacts. In 2015-16, we invested approximately $3.5 billion in university research. Assessing and reporting on how our investments in university research translate to tangible benefits for Australia will help show where collaboration with industry and other partners could bolster and more quickly deliver these benefits.
What is it?
For the first time, Australia will introduce a systematic national assessment to measure these impacts. The evaluation measures will be determined through an extensive consultation with universities, industry and community stakeholders and the assessment will be conducted by the Australian Research Council (ARC) as a companion exercise to the Excellence in Research for Australia assessment. This will build on the good work already done by the Australian Academy of Technological Sciences and Engineering.
The 2014 Research Excellence Framework (REF) is a peer assessment of the quality of UK universities' research in all disciplines. It replaces the Research Assessment Exercise (RAE), last conducted in 2008.
The REF was undertaken by the four UK higher education funding bodies, who will use the REF results to distribute research funding to universities on the basis of quality, from 2015-16 onwards.
Research evaluations have gained increasing importance among activities that funding organisations carry out to increase their knowledge base for developing research and innovation policies and to improve funding schemes.
Conducting a good impact assessment of a value chain project involves the following steps (assuming two research rounds: a baseline and a follow-up):
Select the Project(s) to be Assessed.
Conduct an Evaluability Assessment.
Prepare a Research Plan.
Contract and Staff the Impact Assessment.
Carry out the Field Research and Analyze its Results.
Disseminate the Impact Assessment Findings.
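The six steps above form a strictly ordered workflow. As a hypothetical sketch (the step names mirror the list; nothing here reflects an actual ACIAR tool or process), the sequence can be tracked like this:

```python
from dataclasses import dataclass, field

# Illustrative only: the six impact assessment steps, in order.
STEPS = [
    "Select the project(s) to be assessed",
    "Conduct an evaluability assessment",
    "Prepare a research plan",
    "Contract and staff the impact assessment",
    "Carry out the field research and analyse its results",
    "Disseminate the impact assessment findings",
]

@dataclass
class ImpactAssessment:
    """Tracks progress of one assessment through the ordered steps."""
    project: str
    completed: list = field(default_factory=list)

    def advance(self) -> str:
        """Mark the next pending step as complete and return its name."""
        step = STEPS[len(self.completed)]
        self.completed.append(step)
        return step

    @property
    def is_finished(self) -> bool:
        return len(self.completed) == len(STEPS)

# Walk a hypothetical project through the whole workflow.
assessment = ImpactAssessment("Value-chain project A")
while not assessment.is_finished:
    assessment.advance()
```

The point of the ordering is that each step gates the next: for example, the evaluability assessment decides whether a research plan is worth preparing at all.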
The Australian Technology Network of Universities (ATN) and the Group of Eight (Go8) undertook a joint trial exercise, the Excellence in Innovation for Australia (EIA) Trial, in 2012 to assess the impact of research produced by the Australian university sector.
The Australian Technology Network of Universities (ATN) and Group of Eight (Go8) universities, along with the University of Newcastle, Charles Darwin University and the University of Tasmania participated in the Trial, which sought to identify and demonstrate the contribution that high quality research has made to the economic, social, cultural and environmental benefit of society. Implicit in this goal was the purpose to investigate the means by which these benefits may best be recognised, portrayed and assessed by institutions and government.
Indicators of Impact/Outcome provide a sign of how well you have achieved the changes you were hoping for as a result of your project. They are about measuring change: in other words, they measure the extent to which you have achieved your objectives and your longer-term goal. Indicators of impact relate to your objectives, and indicators of outcome relate to your goal.
Indicators are necessary to help determine what data needs to be collected to assist in assessing the progress of the program and if it is on track to achieving its goals and objectives. For example, an objective may be to improve social skills. Indicators used to monitor the progress in terms of achieving this objective could include participants’ ability to adhere to group values/norms; management of emotions and development of positive conflict resolution skills.
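The objective-to-indicator mapping described above can be made concrete. The indicator names below are hypothetical, following the "improve social skills" example from the text:

```python
# Hypothetical example: one objective and the indicators monitored for it.
objective = "Improve social skills"
indicators = {
    "adheres_to_group_norms": True,
    "manages_emotions": True,
    "resolves_conflict_positively": False,
}

def progress(indicator_results: dict) -> float:
    """Return the share of indicators currently met for an objective."""
    return sum(indicator_results.values()) / len(indicator_results)

print(f"{objective}: {progress(indicators):.0%} of indicators met")
```

In practice, each indicator would be backed by collected data (observations, surveys) rather than a simple boolean, but the logic of rolling indicators up into a progress measure against the objective is the same.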
This paper is based on intervention logic that outlines a chain of expected effects (outputs, outcomes and impacts) for a successful intervention. For each outcome and specific impact, a set of indicators has been identified that can measure their achievement. A full set of effects is outlined in the intervention logic diagram on page 3, and the indicators are summarised in Annex A. For full details on the methodology used for this working paper, please see the 'methodological approach' paper.
It should be noted that the intervention logic is focused on primary and secondary education and that relevant indicators should be disaggregated by the specific educational level. The term 'pupils' is used as a comprehensive definition for children at school and those studying vocational education. This working paper does not cover indicators for tertiary education. The indicators in this paper are mainly drawn from the MDGs and UNESCO education indicators.
UIS.Stat contains all the latest available data and indicators for education, literacy, science, technology and innovation, culture, and communication and information. Table outputs are available for Innovation, R&D and Science and Technology.
This guide will help you to track and measure the impact of research data, whether your own or that of your department/institution. It provides an overview of the key impact measurement concepts and the services and tools available for measuring impact. After discussing some of the current issues and challenges, it provides some tips on increasing the impact of your own data. This guide should interest researchers and principal investigators working on data-led research, administrators working with research quality assessment submissions, librarians and others helping to track the impact of data within institutions.
The Australian Government recognises the importance of research, science and innovation for increasing productivity and wellbeing to achieve long term economic growth for the Australian community and to enable Australia to engage effectively with current and future national and global challenges. Research is a key contributor to improving Australia’s productivity over the longer term.1
There is an increasing focus on showcasing or measuring the societal benefits from research, and a need for better coordination in reporting and promoting the impact of these research outcomes. This will become increasingly important in a tight fiscal government environment where returns on investment in research will need to be demonstrated in terms of environmental, economic and social impact. For these reasons and others, key stakeholders including government, industry and the community require more information on the benefits derived from investment in Australian research activities.
A working group was established in 2012 to develop a common understanding of approaches, terminology and reporting of research impact.
Case Studies are not impact statements. They are examples of the impact possible after years of investment. The Case Studies showcase longer-term impact generated from the scientific research community. Society reaps many benefits from excellent research, some of which occur immediately, while others develop over years or generations. This repository of Case Studies will show examples from a wide spectrum of endeavour over a wide range of impact categories: Economic and Commercial, Societal, Health and Wellbeing, Environmental, International Engagement, Human Capacity, Public Policy, Services and Regulation, and Professional Services.
Impact in the REF was assessed through the submission of case studies using two criteria: reach – the spread or breadth of influence or effect on the relevant constituencies, and significance – the intensity of the influence or effect. The case studies, now available via an online database, provide an extraordinary resource for those interested in analysing knowledge translation and research impact.
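The two REF criteria, reach and significance, are assessed jointly. As a rough illustration only (the REF itself graded case studies on a starred scale by panel judgement; the scoring scale and averaging rule here are assumptions for demonstration, not the official method):

```python
def grade_case_study(reach: int, significance: int) -> float:
    """Combine reach and significance scores (each 1-4, hypothetical
    scale) into a single grade by simple averaging."""
    for score in (reach, significance):
        if not 1 <= score <= 4:
            raise ValueError("scores run from 1 to 4")
    return (reach + significance) / 2

# A case study with wide reach (4) but moderate significance (3).
grade_case_study(4, 3)
```

The key design point is that the two dimensions are independent: a case study can influence a very broad constituency shallowly, or a narrow one profoundly, and both facts must be captured.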
Our analysis of the 6,679 non-redacted case studies identified a number of observations.
If your research has had beneficial impacts in the real world, then there’s a good chance you’ll want (or have) to shout about it. This guide will distill the best advice that is currently available about how to write research impact case studies that have real, erm… impact.
There remains a concern that Indigenous Australians have been over-researched without corresponding improvements in their health; this trend is applicable to most Indigenous populations globally. This debate article has a dual purpose: 1) to open a frank conversation about the value of research to Indigenous Australian populations; and 2) to stimulate ways of thinking about potential resolutions to the lack of progress made in the Indigenous research benefit debate.
Investments in agricultural research by national and international organizations have successfully generated improvements in the economic well-being of people, well quantified in a wide range of ex-post impact assessment studies. In contrast, relatively little attention has been given to quantifying the impacts of research on the environment. This gap in understanding the full range of impacts arising from agricultural research presents an important challenge. Growing scientific and public recognition of the significance of environmental impacts of agricultural research, both positive and negative, necessitates their integration into the research evaluation process. This article provides a broad overview of some of the conceptual issues and empirical challenges inherent in measuring and documenting environmental impacts resulting from changes in agricultural practices, reviews some recent environmental impact assessment case studies, and discusses the lessons yielded by those case studies.
A substantial investment is made each year in research to support environmental policies. Understanding the impact of this research is important from a number of perspectives. What remains unclear is how such evaluations may be undertaken, particularly as very little current practice is captured in the literature. This paper reports on a set of 10 exploratory case studies of environmental research impact assessment in practice. Most of the impact evaluations identified have multiple objectives and used a combination of research methods. Challenges include establishing attribution, the timing of an evaluation, how to capture the duration of research impact, checking the reliability of information from key informant interviews and identifying methods for capturing as many impacts as possible. Best and Holmes' (2010) framework is used to consider the status of the case-study organisations in progressing from first generation linear models of knowledge to action to the more recently advocated systems models.
There is an increasing interest in demonstrating the outcomes from research for the purposes of learning, accountability, or to demonstrate the value of research investments. However, assessing the impact of social science research on policy and practice is challenging. The ways in which research is taken up, used, and reused in policy and practice settings means that linking research processes or outputs to wider changes is difficult, and timescales are hard to predict. This article proposes an empirically grounded framework for assessing the impact of research—the Research Contribution Framework. A case study approach was adopted to explore the nature of research impact and how it might be assessed.