European Research Council
Place Rogier 16, COV2 24/144, BE-1049 Brussels, Belgium | https://erc.europa.eu
Evaluation of research proposals:
the why and what of the ERC's recent changes
Maria Leptin, ERC President
1. Introduction
The mission of the European Research Council (ERC) is to encourage the highest quality research in Europe through competitive funding and to support investigator-driven frontier research across all fields, based on scientific excellence. Research evaluation is therefore at the heart of its operations. Recently, the Scientific Council [1] of the ERC has introduced changes in the evaluation processes and evaluation forms for the 2024 calls for research proposals [2], as described in the ERC Work Programme 2024 and the associated guidance documents [3]. This report describes the changes, the discussions that led to them, and the reasoning behind them.
The Scientific Council continuously scrutinises the ERC evaluation processes, soliciting feedback from the chairs and members of the ERC evaluation panels and listening to input from applicants, grantees and other members of the scientific community. A dedicated committee of the Scientific Council is responsible for the development of norms and rules for the proper functioning of the evaluation panels. [4] In addition, certain aspects of research assessment are handled by the Working Group on Open Science. [5] In July 2021 [6], the ERC endorsed the San Francisco Declaration on Research Assessment (DORA) and in early 2023 [7] signed the Agreement on Reforming Research Assessment.
Having followed the debate on research assessment in recent years and observed the reforms introduced in some countries and institutions, the Scientific Council shares the concern that current research assessment systems often use inappropriate and narrow methods to assess the quality, performance and impact of research and researchers. Given the fast-moving nature of this policy area, and the European Commission’s initiative [8], launched in January 2022, to create a ‘coalition of the willing’ to promote changes, the Scientific Council wanted to take an encompassing and structured look at research assessment in general, establish our own position, and consider possible changes to the ERC’s evaluation processes that may follow from those deliberations.
The Scientific Council defined three tasks: first, to decide which characteristics and qualities of the applicant and the proposed project should be considered; second, to decide how to evaluate those characteristics and qualities; and third, to decide how to weigh the different characteristics and qualities against each other.
In this report, I present the consensus views at which the Scientific Council arrived, and the
reasoning behind those views. Where there were strongly divergent opinions, I report on
those as well. I also include our thinking on changes on which we had already decided at an
earlier point.
[1] https://erc.europa.eu/about-erc/erc-president-scientific-council
[2] https://erc.europa.eu/news-events/news/erc-scientific-council-decides-changes-evaluation-forms-and-processes-2024-calls
[3] https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/wp-call/2024/wp_horizon-erc-2024_en.pdf
[4] https://erc.europa.eu/about-erc/erc-standing-committees/standing-committee-panels
[5] https://erc.europa.eu/about-erc/thematic-working-groups/working-group-open-access
[6] https://erc.europa.eu/news/erc-2022-work-programme
[7] https://erc.europa.eu/news-events/news/erc-scientific-council-decides-changes-evaluation-forms-and-processes-2024-calls
[8] https://research-and-innovation.ec.europa.eu/news/all-research-and-innovation-news/process-towards-agreement-reforming-research-assessment-2022-01-18_en
2. How we organised our work
We, the Scientific Council, set up a task force composed of members of the Scientific Council
supported by staff of the ERC Executive Agency (ERCEA) to assemble and analyse
background materials and to prepare discussions in the Scientific Council. All decisions were
taken by the Scientific Council, and we will not distinguish here between the deliberations of
the task force and the Council.
Before taking any decisions, we:
1. assembled background material including a summary of recent stakeholder position
papers on reforming research assessment, and information on the ERC’s ‘sole criterion of
excellence’ and current ERC evaluation processes (overview in Annex 1);
2. held a two-day analytical workshop with experts in the field of research assessment
representing different disciplines and organisations, and with different geographical
backgrounds, from different careers and career stages, together with members of the
Scientific Council and ERCEA staff (executive summary in Annex 2);
3. prepared a list of possible dimensions or elements (list in Annex 3) that could be used in
the evaluation of researchers and proposals for ERC grants, taken from the materials
described above. This list served as the basis for assessing whether the ERC already looks
at or should in the future look at these dimensions or elements in its evaluation processes
and guidance documents. Content from the workshop also fed into the discussion.
We first assessed what elements of a researcher’s CV, track record and proposal were
relevant for the evaluation (for example publications or recognition by peers), and then the
mechanisms that should be used to evaluate them (for example citation counts versus
statements or narratives). In a parallel process, members of the Scientific Council and the
ERCEA also participated in the group of organisations [9] that elaborated the Agreement on Reforming Research Assessment. [10]

[9] https://research-and-innovation.ec.europa.eu/news/all-research-and-innovation-news/process-towards-agreement-reforming-research-assessment-2022-01-18_en
[10] https://coara.eu/app/uploads/2022/09/2022_07_19_rra_agreement_final.pdf
3. Summary of changes resulting from our work
The steps outlined above resulted in the following changes, which, taken together, are
designed to emphasise the qualitative nature of the ERC’s evaluations with the primary
focus on the proposed research project:
The description of required ‘profiles’ of ERC PIs has been removed from the Work
Programme.
In the application form, the CV and track record, previously two separate documents, are now combined into a single template.
The document is limited to four pages in length (with fixed font and spacing), and it is
left to the applicant how to allocate this space to the following three sections:
Personal details.
This section contains only the personal details, education and key qualifications of
the applicant, and the current and previous positions held.
Research achievements and peer recognition.
For the research achievements, the number of examples is limited to ten. The
type of research output is deliberately left open; it can be publications,
including preprints, books or essays, expeditions, data sets, code, or any other
research achievement considered relevant in the applicant’s domain of
research. For each entry, the applicant is encouraged to explain in a brief
narrative how it has advanced the field, and how it demonstrates the
applicant’s capacity to successfully carry out the proposed project.
Peer recognition covers prizes and awards, elected academy memberships,
honorary degrees, significant leadership positions, etc.
It is evident that these lists will vary depending on the career stage of the applicant
as well as on the area of research.
Additional information.
In this section the applicant can provide information on career breaks, diverse
research career paths and major life events, as well as particularly noteworthy
contributions to the research community not reflected in the previous section.
Proposals will continue to be evaluated on the sole criterion of scientific excellence:
the panels will primarily evaluate the ground-breaking nature, ambition, and feasibility
of the research project. At the same time, the panels will evaluate the intellectual
capacity and creativity of the applicant, with a focus on the extent to which the
applicant has the required scientific expertise and capacity to successfully execute
the project.
The panels will also consider the additional information from the applicant. This will
provide context to the evaluation panels when assessing the applicant’s research
achievements and peer recognition.
With these changes, applicants can now provide a more holistic and fuller account of their
research career and contributions for the panels to consider.
4. Discussion in the Scientific Council
In discussing criteria and mechanisms for assessing researchers and project proposals, we
took an open-minded approach, neither assuming that changes in the processes of the ERC
were necessary nor rejecting them out of hand. In our view, many of the demands being
made about research assessment, particularly those from the younger generation [11], are legitimate.
We reached a rapid consensus on some fundamental points. The ERC as a funder of frontier
research should retain the sole criterion of scientific excellence, as legally enshrined in the
acts establishing the EU research and innovation framework programme [12], and not move
towards evaluating economic or societal impact. Using economic or societal impact as
explicit evaluation criteria would disfavour fundamental, curiosity-driven research that may
not have an immediate or obvious economic or societal impact but is nevertheless important
for scientific progress.
A major challenge is to agree on what is meant by ‘excellence’. For the sake of clarity, and
as a proper basis for discussion, it seems reasonable to consult dictionaries for a definition
of the term. The Oxford English Dictionary describes the noun as ‘the quality of being extremely good’; Merriam-Webster offers ‘the quality of being excellent’, which in turn is described as ‘very good of its kind; eminently good; first-class’. Both agree that it is a measure on a scale of quality, and that this measure might apply to any entity of interest. On this reading, ‘excellence’ is not a description of a particular collection of attributes that is desired or even ideal in any given situation; yet this is how the term is understood by many engaged in the current discussion on research assessment, especially the assessment of researchers (as opposed to research proposals). This distinction matters for the present discussion. When we say we judge the excellence of
the proposal or researcher, we do not expect the application to satisfy each element of a
broad portfolio of demands. Instead, for any characteristic and quality we deem important,
we look at whether the proposal and the researcher rank highly in comparison with others.
The high level of competition in our funding schemes implies that the selected proposals and researchers not only excel in direct competition with other applications but are also of the highest quality in absolute terms.
Like many others who have commented on research assessment (see Annex 1), we agree
that different contexts for assessment (such as faculty recruitment, promotion, awards,
grants) necessitate assessing different characteristics and qualities. For example, in
assessing a candidate for a faculty position, it makes sense to ask for the candidate to excel
in teaching or in participating in faculty committee work as well as in research. Different qualities will be important when assessing grant proposals for, say, an international research expedition, or infrastructure support.

[11] See for example: De Herde, V., Björnmalm, M. and Susi, T. (2021), Game over: empower early career researchers to improve research quality. Insights: the UKSG journal, 34(1), p.15. https://doi.org/10.1629/uksg.548; and de Rijcke, S., Cosentino, C., Crewe, R., D’Ippoliti, C., Motala-Timol, S., Binti A Rahman, N., Rovelli, L., Vaux, D.L. and Yupeng, Y. (2023), The Future of Research Evaluation: A Synthesis of Current Debates and Developments. Global Young Academy (GYA), the InterAcademy Partnership (IAP) and the International Science Council (ISC) Centre for Science Futures. https://doi.org/10.24948/2023.06
[12] https://eur-lex.europa.eu/eli/reg/2021/695/oj#d1e32-51-1
In the case of the ERC, project proposals are judged on excellence in creativity, originality
and potential for significant advances in knowledge - or, to use the wording of the ERC work
programme: “the ground-breaking nature, ambition and feasibility of the proposal”.
4.1. Evaluation of the proposed project
The ranking of the project proposals according to excellence in ambition, potential scientific impact and scientific approach is entirely entrusted to the evaluation panels, and we saw no need for structural or procedural changes. Nevertheless, some guidance has been given in the past and will also be necessary in the future.
During our deliberations we recognized that some terms that had previously been in use
may not be fit for purpose. The term ‘high-risk, high-gain’ was seen as potentially confusing
and problematic. This concept is often invoked to discourage evaluation panels from
conservatism in their choice of what to fund. Indeed, the possibility that a project will not fulfil
its aims is inherent in frontier research, but this possibility means precisely that the results
cannot be predicted. On the other hand, a researcher who, for example, has already
established with preliminary data that an exciting new approach is likely to work, may be
able to carry out ground-breaking work with a relatively high chance of success.
We stress that the ERC continues to look for proposals that address important challenges
and hope that the research funded by the ERC will lead to major advances at the frontier of
knowledge. However, the ‘high-risk, high-gain’ conjunction is not helpful for the evaluation
of proposals, and the terms ‘ambitious’, ‘creative’ and ‘original’ are better descriptors for the
kinds of proposals the ERC should fund.
An element that can be positive but is not strictly necessary for an excellent proposal is the development of ‘novel methodologies’. New methodologies can allow long-standing
problems or questions to be tackled and developing them is therefore crucial for advancing
knowledge. However, an applicant may come up with an original idea for approaching an
unsolved problem with an existing methodology. Conversely, new methodologies could be
developed and then employed for projects of minor importance or interest. Thus, the
development of new methodologies is neither necessary nor sufficient to make a proposal
excellent. It therefore does not make sense to ask evaluators specifically about this element; rather, the project should be assessed on its core questions and approaches.
The reference to this evaluation element was therefore removed from the guidance for
evaluators.
4.2 Evaluation of the applicant
Many points in the following sections were uncontroversial; we regarded some elements as
obvious or even essential (e.g., the applicant having ‘leading international expertise in the subject area’ or having demonstrated ‘originality of research’), and others as not relevant for the applicant’s ability to carry out the proposed research (e.g., ‘academic leadership roles’ or ‘developing strategies for societal impact’). Others needed clarification, or Council
members had divergent views.
Overall, we agreed that the emphasis of the assessment of the PI should continue to be on
whether they had demonstrated the ability to carry out ambitious and challenging research
and had thereby contributed to advancing knowledge in their field. The only way to assess
this in the first instance is through their track record in terms of research outputs, and
indirectly, by the recognition they receive from their peers. Some of our discussions on how
to deal with desirable qualities, such as being a good mentor or actively engaging in open
science, that are not strictly necessary for carrying out the proposed research project, are
reported below.
Our discussions led to the subdivision of the new template into the three sections described
above (personal details, research achievements and peer recognition, additional information
on general noteworthy contributions and career path) where the central one should contain
the information on which the evaluation is primarily based, with the others providing context.
The previously used templates requested information from applicants that we did not find
useful for the evaluation, and the corresponding sections have now been deleted. We
describe our reasoning for those changes first and will then report our thinking on the new
sections.
Supervision of graduate students or postdoctoral researchers
The CV and track record templates in the past asked how many PhD students and postdocs
the applicant had supervised. The intention was to see evidence of experience in leading a
research group and good mentorship. However, numbers alone are not sufficient to assess
whether a PI has been a good advisor for the members of their research team. For example,
it is not clear whether a large number or a small number is meaningful, even when taking
into account that team sizes in different disciplines vary.
A somewhat better question would be what academic positions former team members have
attained, and indeed this is used by some institutions and funders. However, this too is
problematic. It ignores the fact that nowadays many young researchers do not even aim for
an academic career, and it presupposes that academic careers are superior to other
occupations. It also gives an unfair advantage to PIs at large, elite research centres over those from less well-known institutions or isolated locations, since for the latter it is much more difficult to attract the top-level candidates who are most likely to go on to prestigious positions, even if the research done in the group is ground-breaking and original.
This element therefore becomes a proxy that is in part a reflection of the excellence of the
environment rather than the research team leader. Some institutions have come up with the
solution of soliciting anonymous feedback from former team members (e.g., NWO in the
Netherlands), an interesting idea that may work well for a small number of candidates in
faculty recruitments, but is impractical, if not impossible, for the many hundreds of applicants
for ERC grants. We were unable to come up with any other reliable and fair measure for
‘good mentorship’ and thus concluded that this information should no longer be asked for.
Extramural funding
The amount of funding a researcher attracts is often seen as a measure for the importance,
relevance or competitiveness of their work. However, a wide range of factors influence this
parameter, including the availability of grants in different national settings and for different
types of research. Some PIs have generous institutional funding and may never have
needed or wanted to apply for grants in the past. Attraction of extramural funding is therefore
another proxy that does not necessarily measure the importance of a researcher’s work.
The ERC therefore considers this point only to ensure that the proposed project is not
already funded from other sources, and only at the second step of evaluation.
Sections of the new template
PERSONAL DETAILS
This section is for a brief overview of the applicant’s research career: education and training,
PhD and postdoctoral work, and current and past positions held. The applicant can provide
comments on any of these steps, like work outside research institutions or universities,
career breaks, or other special aspects, in the third section of the template.
RESEARCH ACHIEVEMENTS AND PEER RECOGNITION
This is what we consider to be the most important part of the track record: here the applicants
provide the evidence for their ability to carry out demanding and original research. In many
fields this evidence consists of publications that are recognized by the research community
to have reported major advances, often (but certainly not always) published in leading
journals. We recognize that such evidence is field-dependent and there are high-quality
research outputs other than publications. Evidence of peer recognition can help evaluators
complement the view of the applicant they have formed based on the research outputs.
While not all Scientific Council members shared this view, the majority favoured inclusion of
evidence of peer recognition in the track record.
Research achievements
The ERC has already made it clear in the past that evaluations should not focus on quantity
but on quality and that inappropriate metrics (such as the Journal Impact Factor) should not
be used in the evaluation of applicants. But we also take note of the frequently heard
complaint that evaluators cannot be expected to read every paper the applicant has ever
published.
The new template takes account of this in two ways. First, the number of outputs is limited
to ten (with an emphasis on more recent ones), and it is no longer specified what format
such outputs should or might have. Thus, it is possible to list, for example, datasets, open-
source code or software that are widely used, expeditions that yielded important data,
granted patents, prototypes, or any other type of major research output. Secondly, the
person best qualified to explain the importance and impact of their past research and the
nature of the advance in knowledge they have achieved is the applicant (though they may
of course not be the most objective). The new template therefore encourages the applicants
to provide such explanations in brief narratives.
The old track-record ‘profiles’ of ERC PIs contained the phrasing ‘major international peer-
reviewed multi-disciplinary scientific journals and/or [...] leading international peer-reviewed
journals, peer-reviewed conferences proceedings and/or monographs of their respective
research fields’. However, some ground-breaking discoveries may only have been posted on pre-print servers or published in niche or specialist journals, while others may be in entirely different formats or platforms, and in some disciplines national publications may be the most relevant and important.
This specification has therefore been deleted.
We reaffirmed our position that quantitative metrics must be used responsibly. Panel
members are instructed to focus on the scientific content of the researcher’s achievements
and to refrain from using surrogate measures of the quality of research outputs, such as
Journal Impact Factors.
Peer recognition
It is clear that applicants cannot all be judged by the same standards. For example, more
junior applicants are less likely to have been asked to act as organizers of major international
conferences or invited to present as keynote speakers. Prizes are common in some fields
and almost non-existent in others. While feedback from the evaluation panels illustrates that
the panels are aware of such differences and take them into account, explicit guidance for
evaluators has been put in place.
The new templates no longer ask for any specific elements of peer recognition, but leave it to the applicant to decide what to list and to use the narrative component to explain the context and the significance of the listed items.
Narrative elements
Narrative CVs are, by nature, more subjective than traditional CVs, which might make them more difficult to compare with each other. Indeed, in the first years of the ERC, applicants
were asked to describe their ‘leadership potential’, which resulted in a wide range of non-
comparable inputs, from unstructured essays to terse one-liners. Narrative CVs are also
typically less standardised than traditional CVs. This could make them more time-consuming
to read and evaluate, particularly in situations where a large number of CVs need to be
reviewed.
The narrative format could be used to misrepresent achievements or skills, and different
cultures have different norms and expectations around ‘storytelling’ and self-presentation.
Therefore, narrative CVs could inadvertently disadvantage individuals from cultures where
self-promotion or certain forms of storytelling are not the norm. Writing a compelling
narrative CV requires strong writing skills. Therefore, narrative CVs could inadvertently
disadvantage individuals who are less skilled or comfortable with writing, even if they are
highly skilled in their field of research. Nevertheless, we felt that voluntary narrative elements
can provide a more comprehensive view of a researcher's career, contributions, and
potential. This is particularly the case when they are used to complement other assessment
tools and metrics. They can highlight important aspects of a researcher's work that may not
be captured by traditional metrics.
Two mechanisms will hopefully counteract the potentially problematic aspects of the
narratives. First, there is an overall limit to the length of the section on the CV and track record, so applicants have to choose how to allocate space to the various elements they wish to report; secondly, we included a request to explain achievements in neutral terms. In addition, experience at the ERC shows that panels are wary of boastful applications. The instructions in the application form say: ‘You may include a short, factual explanation of the significance of the selected outputs, your role in producing each of them, and how they demonstrate your capacity to successfully carry out your proposed project.’
The responsibility for selecting and explaining the research outputs and elements of peer
recognition is thus left entirely to the applicant.
OTHER CONTRIBUTIONS
Engagement in peer review, teaching, academic leadership and other contributions
Most researchers are engaged in academic activities that do not directly contribute to their
research. For university staff, the most prominent and often time-consuming of these is
teaching. All successful researchers are asked to participate in peer review, whether of
manuscripts or grant or fellowship proposals, whether as individual referees or as members
of evaluation panels. Related functions, but more peripheral to the actual research
enterprise, include the chairing of committees, presiding over academies or learned societies,
developing training programmes, public outreach and other major contributions to the
community. These activities are crucial for the proper functioning of fundamental research,
and should be highly valued, but they are not sufficiently rewarded, as noted in many of the
recent discussions and documents on research assessment.
A generally accepted way of recognizing and rewarding these desirable activities has yet to
be found (researchers’ peer review record in ORCID, for example, or teaching assessments
in universities provide some starting points). One important question in our discussions was
whether, in the context of the ERC’s evaluations, they should be recognized in some way and offset against past scientific output, the argument being that researchers with
such a constraint on their time face a higher hurdle to assemble a large portfolio of research
outputs. This is particularly pertinent for PIs at universities with a heavy teaching load.
However, the new CV and track record no longer asks for quantity in output, nor for ‘prestige’
proxies. The excellence of the researcher should be measured by the quality of the outputs
they list, and not by the bulk they have produced. We also acknowledge that not all
researchers have equal capabilities or equal opportunities to take on such functions,
regardless of how excellent they are in doing frontier research. This would argue against
taking such activities into account when assessing the applicant.
Nevertheless, in line with our having signed the Agreement on Reforming Research Assessment, we agree that researchers’ broad contributions to the functioning of the research system are extremely important and that their commitment should be recognized.
Therefore, particularly noteworthy contributions to teaching and other outstanding
contributions to the research community should be listed to provide context in the
assessment of applicants’ research achievements and peer recognition, even if they do not
directly enter the evaluation of these elements.
4.3 Weighting of the assessment of the proposed project and the
assessment of the applicant
The focus of the evaluation should be on the scientific content of the proposal. In the past,
both the proposal and the applicant were numerically graded in the first step of the
evaluation, on equal scales. As a result, the application from an apparently ‘strong’ PI with
a weak proposal could end up with a similar combined score as one from a less
accomplished PI with a brilliant proposal. This exposes the evaluation to a higher risk of
unconscious bias.
For example, it has been observed that researchers based at highly visible and well-funded
institutions, or at well-connected centres of excellence are more likely to be awarded ERC
grants than those in remote or unknown institutions. It is disputed whether this is exclusively
because the former institutions host a larger number of excellent researchers, or whether
this is due to, or at least exacerbated by, an unconscious bias against less well-known applicants. The ERC guidelines are explicit in stating that the host institution of the researcher should not be an element that enters the evaluation of the applicant’s excellence,
and we have explained above that we have removed another element (list of previously
funded grants) from the evaluation elements that would contribute to such a Matthew effect.
One method for avoiding such biases is double-blind review, but we feel that it would be
almost impossible in such a setting to assess whether an applicant has the capacity to carry
out the proposed project. We discussed whether the evaluation should focus exclusively on
the scientific excellence of the proposal and ignore the identity of the applicant at least at
the first step. However, most of the Scientific Council members found it important to
understand the track record and CV of the applicant to decide whether to select the
application for in-depth evaluation in the second step.
Instead, we sought a way to put a stronger emphasis on the evaluation of the project
proposal starting with the initial ranking of the applications. During the individual remote
evaluation, panel members would first evaluate the research project without considering the
information on the applicant and decide on a score, and then evaluate the applicant. This
would avoid the applicant’s identity and reputation influencing the project score. In the past,
the project and the PI were both scored in parallel on a scale of 1 to 5, and the scores were
then added up. Not all panels took this sum of scores as guidance for their initial ranking,
but some did. We have now stopped this practice: only the project is scored on a numerical
scale, and only this score can be used to rank the list of proposals before the panel
discussion. The applicant is given an overall qualitative assessment with five options
(outstanding / excellent / very good / good / non-competitive), which is not converted into a
numerical score and is not combined with the score for the research project. In this way, the
evaluation should give more weight to the project than to the applicant. This has been a
practice in most ERC panels already, and we now explicitly indicate it in the ERC Work
Programme.
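To make the new weighting concrete, the following minimal sketch (in Python, with illustrative field names and example data that are not taken from any ERC system) shows how an initial ranking based solely on the numerical project score could be produced, while the qualitative applicant assessment is carried alongside as context rather than being combined into the score.

from dataclasses import dataclass

# Qualitative bands for the applicant assessment (wording from the report);
# deliberately not mapped to numbers, so they cannot be summed with the project score.
APPLICANT_BANDS = ["outstanding", "excellent", "very good", "good", "non-competitive"]

@dataclass
class Application:
    acronym: str          # illustrative identifier, not an ERC field name
    project_score: float  # numerical score for the project only (scale of 1 to 5)
    applicant_band: str   # qualitative assessment, kept as context for the panel discussion

def initial_ranking(applications: list[Application]) -> list[Application]:
    """Rank proposals before the panel discussion using the project score alone."""
    return sorted(applications, key=lambda a: a.project_score, reverse=True)

# Example: the applicant band does not affect the ordering.
apps = [
    Application("PROJ-A", 4.5, "good"),
    Application("PROJ-B", 3.8, "outstanding"),
    Application("PROJ-C", 4.2, "excellent"),
]
for app in initial_ranking(apps):
    print(app.acronym, app.project_score, app.applicant_band)

Because the applicant assessment is never converted into a number, it cannot be added to the project score; it only informs the subsequent panel discussion.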
5. Implementation of changes, guidance to applicants and evaluators
The evaluation process must be as fair and as transparent as possible.
The 90 peer review panels of the ERC (28 each for Starting, Consolidator and Advanced
Grants, five for Synergy Grants, and one for Proof-of-Concept Grants) that meet each year
decide independently on the final ranking of the proposals submitted to their panels. As
discussed above, even though the ERC’s evaluations are based on the ‘sole criterion of
scientific excellence’, we have always provided written guidance and briefings for panel
members on what qualities or elements are most relevant to consider during the evaluation
and to applicants on what to include in their application. Applicants and panel members must
have a clear understanding of what is expected of them and, in particular, the same
understanding of how and for what purpose any element of information from the applicant
is used by the panel for the evaluation. Explicit guidance on evaluation elements will also
help to level the playing field for all applicants, regardless of their background or prior
familiarity with ERC grants.
It is important to balance the need for a fair and comparable treatment of all applications
across all panels with the freedom for the panels, whose members are selected by the
Scientific Council for their expertise and standing in their research fields, to act according to
their own insight. However, it is often the panels themselves who ask for guidance. Left to
themselves, panel members can develop their own heuristics that are likely to be sub-optimal
and to differ from one panel to the next.
The briefing documents provide information on topics on which the scientific officers of the
ERC Executive Agency frequently receive questions, or that come up during oral panel
briefings.
Despite the need for guidance discussed above, our challenge in providing such guidance
was not to be too prescriptive. This would run counter to the philosophy that the applicants
should have maximum freedom and flexibility to present their work in a way that best
represents its value and significance.
For example, in past calls applicants were asked to present a track record of achievements
based on a profile for each type of grant. Applicants to the Advanced Grant calls were
restricted to presenting a track record of significant research achievements in the last 10
years. Among the problems with such a strict cut-off is the fact that it disfavours applicants
who have taken career breaks or who return to research from leadership positions in academic
management, politics or industry. For the new calls, we removed the profiles and created a
single common form for all calls.
Now, rather than strictly defining an exact period of ten years for the research outputs, all applicants are asked to provide ‘a list of up to ten research outputs [...] with an emphasis on more recent achievements’, on the assumption that panels will be able to judge which ‘recent’ period is appropriate for any given CV and whether a particular achievement was relevant to the application.
The guidance for applicants now provides examples for the categories of research
achievements and peer recognition, but not for other contributions to the research
community. We had a long list of potential contributions which researchers may make to the
research community, but were concerned that giving only a subset of examples could be taken to mean that the Scientific Council is interested only in those, and that others not listed would not count.
6. Conclusion: an ongoing process
Many other topics were discussed, including potential bias against applicants who are at an
earlier career stage than their competitors, or those working in less popular fields, or at less
prestigious institutions; the need for feedback to applicants to be meaningful and constructive; the practical implementation of any changes; the challenge of measuring and
comparing qualitative aspects. We also looked at topics like partial randomisation and other
innovative approaches to the allocation of research funding. These discussions will continue
in the future.
The Scientific Council continuously solicits input from the evaluation panels, and we have
now set up a procedure for regularly responding to the input and taking action where
necessary.
The effects of the changes we have made will be closely monitored, and the changes may be refined in future following feedback from applicants, panel members, scientific officers of the ERC Executive Agency and the scientific community.
7. Annex 1 Background Material
Statements and policy reports by major actors at European level
European Commission, Directorate-General for Research and Innovation, Towards a reform of the research assessment system - Scoping report, Publications Office, 2021, https://data.europa.eu/doi/10.2777/707440
Council of the EU (May 2022) Conclusions on research assessment and implementation
of Open Science (adopted on 10 June 2022)
https://www.consilium.europa.eu/media/56958/st10126-en22.pdf
Initiative for Science in Europe (ISE) (February 2022), Centrality of researchers in
reforming research assessment https://initiative-se.eu/paper-research-assessment/
League of European Research Universities (LERU) (January 2022), A Pathway towards Multidimensional Academic Careers - A LERU Framework for the Assessment of Researchers https://www.leru.org/publications/a-pathway-towards-multidimensional-academic-careers-a-leru-framework-for-the-assessment-of-researchers
Science Europe (July 2020), Position Statement and Recommendations on Research Assessment Processes https://www.scienceeurope.org/media/3twjxim0/se-position-statement-research-assessment-processes.pdf
Marie Curie Alumni Association (MCAA) (December 2019), Policy Brief - Towards Responsible Research Career Assessment https://doi.org/10.5281/zenodo.3560479
European Alliance for the Social Sciences and Humanities (EASSH), Improving Research Impact Assessment in Horizon Europe: A Perspective from the Social Sciences and Humanities https://eassh.eu/Position-Papers/Improving-Research-Impact-Assessment-in-Horizon-Europe--A-Perspective-from-the-Social-Sciences-and-Humanities~p1247
European Network for Research Evaluation in the Social Sciences and Humanities (ENRESSH) (2020), ENRESSH Policy Brief: Research Evaluation. https://doi.org/10.6084/m9.figshare.12049314.v1
Research assessment practices and innovative approaches towards the allocation of
research funding
Strinzel M, Kaltenbrunner W, van der Weijden I, von Arx M, Hill M (March 2022), SciCV,
the Swiss National Science Foundation’s new CV format. bioRxiv preprint.
https://doi.org/10.1101/2022.03.16.484596
Curry, Stephen; de Rijcke, Sarah; Hatch, Anna; Pillay, Dorsamy (Gansen); van der
Weijden, Inge; Wilsdon, James (2020). The changing role of funders in responsible
research assessment: progress, obstacles and the way ahead (RoRI Working Paper
No.3). Research on Research Institute. Report.
https://doi.org/10.6084/m9.figshare.13227914.v2
Luxembourg National Research Fund (FNR) (February 2021), Narrative CV: Implementation and feedback results https://www.fnr.lu/narrative-cv-implementation-and-feedback-results/
Bendiscioli S, Firpo T, Bravo-Biosca A, Czibor E, Garfinkel M, Stafford T, et al.
(December 2021): The experimental research funder’s handbook (RoRI Working Paper
No.6). https://doi.org/10.6084/m9.figshare.17102426.v1
Woods HB, Wilsdon J (December 2021): Why draw lots? Funder motivations for using
partial randomisation to allocate research grants (RoRI Working Paper No.7)
https://doi.org/10.6084/m9.figshare.17102495.v2
Bendiscioli S, Garfinkel MS (March 2021), Informational report ‘Dealing with the limits of
peer review with innovative approaches to allocating research funding’
https://www.embo.org/documents/science_policy/peer_review_report.pdf
Aubert Bonn N, Bouter L (2021), Research assessments should recognize responsible research practices - Narrative review of a lively debate and promising developments. https://doi.org/10.31222/osf.io/82rmj
Hug SE, Aeschbach M (2020), Criteria for assessing grant applications: a systematic
review. Palgrave Commun 6, 37. https://doi.org/10.1057/s41599-020-0412-9
Technopolis (December 2019), Science Europe Study on Research Assessment
Practices https://doi.org/10.5281/zenodo.4915998
VolkswagenStiftung (2014), What Is Intellectual Quality in the Humanities? Some Guidelines. https://www.volkswagenstiftung.de/sites/default/files/downloads/Humanities_Quality_Guidelines.pdf
The notion of ‘excellence’ and peer review; risk taking
Veugelers R, Wang J, Stephan P (August 2022, updated October 2022), Do Funding
Agencies Select and Enable Risky Research: Evidence from ERC Using Novelty as a
Proxy of Risk Taking, National Bureau of Economic Research Working Paper Series,
No. 30320 https://doi.org/10.3386/w30320
Ochsner M (April 2022), Identifying Research Quality in the Social Sciences, in
Handbook on Research Assessment in the Social Sciences (edited by Tim C.E. Engels
and Emanuel Kulczycki) https://doi.org/10.4337/9781800372559.00010 - Open access
version (accepted manuscript)
https://serval.unil.ch/en/notice/serval:BIB_AB4513EFF8D6
Hug SE, Ochsner M (January 2022), Do peers share the same criteria for assessing
grant applications?, Research Evaluation, Volume 31, Issue 1
https://doi.org/10.1093/reseval/rvab034 - Open access version (accepted manuscript)
https://arxiv.org/abs/2106.07386
Jong L, Franssen T, Pinfield S (September 2021), Excellence in the Research Ecosystem: A Literature Review (RoRI Working Paper No. 5) https://doi.org/10.6084/m9.figshare.16669834.v2
Jong L, Franssen T, Pinfield S (2022), Transforming excellence? From ‘matter of fact’ to
‘matter of concern’ in research funding organizations, SocArXiv, Center for Open
Science. https://ideas.repec.org/p/osf/socarx/nduxf.html
Moore S, Neylon C, Eve MP et al. (January 2017), “Excellence R Us”: university research
and the fetishisation of excellence. Palgrave Commun 3, 16105
https://doi.org/10.1057/palcomms.2016.105
National developments and initiatives
Poot, R. et al. (December 2021), Report on the risks of Open Science and DORA, https://www.erasmusmc.nl/-/media/erasmusmc/pdf/1-themaspecifiek/themabmw/report-open-science-and-dora.pdf
VSNU, NFU, KNAW, NWO and ZonMw (November 2019), Room for everyone’s talent - towards a new balance in the recognition and rewards of academics https://recognitionrewards.nl/wp-content/uploads/2020/12/position-paper-room-for-everyones-talent.pdf
8. Annex 2 Executive summary of the Analytical Workshop
The ERC Workshop on Research Assessment took place on 14-15 June 2022 on the
premises of the ERC Executive Agency. The aim of the two-day workshop was to reflect on
current ERC assessment principles and practices, and to provide input to the ERC Scientific
Council’s Task Force on Research Assessment about pros and cons of the possible use of
innovative evaluation systems.
Fifteen experts, representing several disciplines and organisations, with different geographical backgrounds and from different careers and career stages, gathered with members of the ERC Scientific Council and employees of the Agency.
The workshop followed the Chatham House Rule.
Structured discussions were used as the main mechanism for elicitation.
Below is a summary of the main points, which are articulated in more detail in the full report.
Input 1. Consider (re)defining ‘excellence’
The current usage of the term ‘excellence’ as the only criterion for assessment was
discussed, with emphasis on scientific productivity, which in the opinion of several
participants had produced a toxic environment in many parts of the research enterprise. It
was suggested that the ERC should focus more on a healthy research culture, rewarding
principles and practices like Open Science, EDI (Equality, Diversity, and Inclusion), integrity,
collegiality, and transparency. Refocussing in this manner would not harm the quality of
scientific results but on the contrary could improve them.
Others felt that the ERC should resist the push to include in the assessment parameters that
were not essential to carry out the project, including, for example, services to the scientific
community. Such additional evaluation elements could instead be used in a different way,
for example by giving them a different weight or using them as tiebreakers.
Input 2. Deal with unintended biases in evaluations and improve the functioning of panels
Concerns about biases in the current ERC evaluation system were expressed especially in
relation to the functioning of the panels.
Some participants said that, being a human activity, the selection process could not be
expected to be completely unbiased, and this had to be accepted. Others thought biased
decisions were not tolerable, and all possible measures should be taken to prevent them.
The structure of panels could be re-thought in its entirety. The importance of extreme care
in the selection of panel members was stressed, so that selection behaviours determined
by belonging to a certain community (‘scientific community games’) would be avoided. It was
suggested that the chairs or the co-chairs of panels could be Scientific Officers from the
Agency with broad expertise and experience. This would prevent some gaming, help to
address horizontal issues, and ensure consistency across panels. Others felt that decisions
on funding should not be influenced by employees of the Agency.
Input 3. Consider revising the order of assessment of research and researcher
The order in which the project and the PI are evaluated may have an impact on the outcome.
Several participants felt that focussing on the person could lead to more diverse or new
research topics being considered. Others, instead, argued that focusing on the project was
better suited for bottom-up, outstanding proposals and should remain the priority.
Moreover, emphasising the research idea rather than the past performance of the PI might
result in more proposals from women, from the so-called ‘widening countries’ and from less
well represented fields.
Input 4. Consider adopting a version of ‘narrative CVs’
The use of narrative CVs may be an effective tool to help change the current research
culture, which has been heavily influenced by quantitative bibliometric measures as proxy
for quality of research.
Some funders have introduced narrative CVs with the aim of better assessing the context of
researchers’ careers and outputs. Narrative CVs give space to contributions other than
publications, while also providing the opportunity to present evidence of those contributions.
The experience so far suggests that evaluating these CVs takes reviewers no more time
than evaluating traditional CVs. Narrative CVs could counterbalance the current over-
emphasis on quantitative metrics.
The ERC could consider this approach for its evaluations.
Input 5. Consider partial randomisation for selection
Evaluation panels often agree on the top-quality proposals and on proposals that should not
be funded, while there is less agreement for those in the so-called ‘grey areas’. In this grey
zone subjective factors rather than quality may play a greater role, which may introduce
biases. Practical solutions to this problem, including partial randomisation, have been
implemented by some funders.
Drawing on their own experiences, participants suggested that evaluation processes that
include (partial) randomisation may encourage the submission of proposals for risk-taking
research, which is one of the objectives of the ERC funding schemes.
9. Annex 3 List of dimensions or elements
The table below was the starting point for the discussions in the task force. It does not reflect
the structure of the new or old CV or track record. The items were collected from the
documents in Annex 1, and the wording is directly from those documents, or paraphrased
or slightly simplified.
Elements
- leading international expertise in the subject area
- contribution to the advancement of knowledge in the field
- provide intellectual thought leadership
- setting the international research agenda
- development of research and funding strategies
- developing strategies for societal impact
- originality of research
- participation in national and international scientific networks and conferences
- invitations to present as key-note speaker or invited lecturer
- leads major research conferences (membership in the steering and/or organising committee)
- prizes and honours for research (including artefacts with documented use, such as architectural or engineering design)
- editing or reviewing for major academic journals
- elected to research-related leadership roles in the community
- reputation and recognition by peers (including academy memberships)
- recognized publications (peer-reviewed journal articles and conference proceedings, monographs)
- portfolio of high-quality research outputs other than publications (including data, databases, software, models, methods, theories, algorithms, protocols, workflows, exhibitions, policy contributions, open and citable peer reviews, educational products, clinical guidelines)
- preprints
- research monographs and translations thereof
- scientific/technological impact through high-quality research and/or citations
- number of publications (in relation to the individual’s career)
- open access to (past/future) publications, data, and other research outputs
- patents
- examples of innovation leadership
- winning competitive funding
- research projects and their funding
- ability to acquire third-party funds
- develops multi-, inter-, trans-, or cross-disciplinary research activities
- leads collaborative research projects (including research expeditions)
- maintains international research collaborations
- intersectoral collaboration (e.g., industry-academia collaboration; collaboration with hospitals)
- excellence through the performance of others
- high research student completion rates
- nurtures talent and demonstrates engagement with researcher training and development
- demonstrates inclusive leadership and provides a positive working environment
- workshops or summer schools
- regular teaching activity (other than workshops or summer schools)
- supervision of students / PhD candidates, postdocs and colleagues
- mentoring of other researchers in their field and support to the advancement of colleagues
- editing, reviewing, refereeing, committee work
- contributions to the evaluation of researchers and research projects
- organisation of events
- contributions to increasing research integrity, and improving research culture
- appointments to positions of responsibility such as committee membership and corporate roles within organisation or sector
- citizen science
- societal engagement
- engagement with industry and the private sector
- engagement with the public sector, clients, and the broader public (including patient care)
- advise policymakers at local, national, or international level
- provide information through the press and on social media
- science communication through any means (including radio interviews, exhibitions for the general public, etc.)
- societal or economic impact
- career stage
- leadership potential
- research independence and evidence of maturity
- international or intersectoral mobility
- unconventional career paths
- career breaks / part-time work
- personal circumstances
- belonging to an underrepresented group
- general and research-specific ethics and integrity standards are met
- gender equality / gender dimension
- diversity in the broader sense (e.g., racial or ethnic origin, sexual orientation, socio-economic, disability)
- equal opportunities and inclusiveness
- security issues
- freedom of scientific research