1. Evaluators favour specific countries
This is probably the most common misconception we hear from participants at our training courses: researchers very often believe that evaluators look for specific countries to be involved in each and every proposal.
The truth is, the geographical representation of your partnership is not an evaluation criterion in itself. Evaluators will instead look at the consistency between your scientific concept (Section 1.3 of your proposal), your planned activities (Section 3.1) and the consortium as a whole (Section 3.3).
If in Section 1.3 you argue that your novel solution is going to benefit the whole of Europe, make sure you substantiate this claim, for example by piloting the solution in countries with different characteristics, or by involving partners that can represent distinctly different regions of Europe.
However, Europa Media did win quite a few proposals with a very specific geographical focus, typically on the Central Eastern European region, given our headquarters in Budapest. Here is the geographical representation of the consortia of two of our running projects:
START2ACT: Belgium, Bulgaria, Croatia, Czech Republic, Hungary, Netherlands, Poland, Romania, Slovakia, United Kingdom
MY-GATEWAY: Belgium, Bosnia and Herzegovina, Czech Republic, Hungary, Israel, Macedonia, Portugal, Romania, Serbia, Slovenia, Spain, United Kingdom
In both cases, we decided to concentrate the project’s efforts in the CEE region, while still involving a couple of Western partners that gave us access to specific knowledge or data, e.g. two Spanish partners in MY-GATEWAY that were already embedded in the Startup Europe ecosystem. Whether such a strategy wins depends on many factors, including the number of proposals that can be funded and, most importantly, how convincingly you describe the benefits of this approach in Section 1.3 of your proposal!
2. You need to have “big names” on board to succeed
Another frequently held idea is that evaluators will give preference to proposals involving specific “big names”, meaning well-known universities or large industry players, the supposed evidence being their widespread participation in EU projects.
Section 4 (Members of the consortium) is not part of the evaluation process; it is not covered by any of the evaluation criteria. While we obviously cannot rule out the influence that a well-known name might have on an individual evaluator’s overall impression of a proposal, at the Consensus Meeting each evaluator has to justify the high or low scores he/she has assigned to each proposal, so any potential subjectivity will become a topic for discussion.
Evaluators are instead asked to assess:
- The need for each partner in a proposal
This means the technical link between the expertise of each partner (as described in Section 3.3), their contribution to the concepts and methodologies underlying the proposal (as described in Section 1.3) and their involvement in the proposal’s activities (as described in Section 3.1).
If, for example, you allocate a high number of person months to a single partner in one or more work packages but fail to describe their specific contribution to the work, evaluators will flag this as a shortcoming, whether that partner is an unknown small company or one of the largest universities in Europe.
- The operational capacity of each partner
From Section 4 (Members of the consortium), evaluators are asked to judge the capacity of each participating organisation to deliver the planned activities; we have to say that it is very uncommon to fail at this stage, as your partners will typically not commit to a work plan they do not feel confident with.
3. It’s all up to the excellence of your scientific approach
Last but not least: in academia there is a frequent misconception about how much the excellence of your scientific concept weighs compared to the rest of the proposal. There are indeed a number of funding schemes under Horizon 2020 where “Excellence” is the only evaluation criterion used, such as the grants awarded by the European Research Council.
For collaborative grants, however, which are those submitted under the “Leadership in Enabling and Industrial Technologies” or the “Societal Challenges” pillars, three evaluation criteria are used: Excellence, Impact and Implementation, each carrying exactly the same weight. In other words, the excellence of your scientific approach, methodology and ambition accounts for only one third of the overall score.
There is only one exception to this: in proposals submitted under “Innovation Actions”, which typically include activities aimed at bringing a novel product to the market, the score you receive under the “Impact” criterion is multiplied by 1.5, so it weighs more than the other two criteria.
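To make the arithmetic concrete, here is a minimal sketch of how the two weighting schemes play out, assuming the standard 0–5 scale per criterion; the individual scores below are invented purely for illustration:

```python
# Hypothetical scores on the standard 0-5 scale per criterion (illustration only)
excellence, impact, implementation = 4.0, 4.5, 4.0

# Research and Innovation Action: all three criteria carry the same weight
ria_total = excellence + impact + implementation        # 12.5 out of a maximum of 15

# Innovation Action: the Impact score is multiplied by 1.5
ia_total = excellence + 1.5 * impact + implementation   # 14.75 out of a maximum of 17.5

print(f"RIA total: {ria_total} / 15")
print(f"IA total:  {ia_total} / 17.5")
```

The same raw scores therefore rank differently under the two schemes: a strong Impact section pays off disproportionately in an Innovation Action.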
How can you make sure you have a good overview of the evaluation process and criteria?
On the participant portal, following this link, you can download the evaluation forms for all types of action under H2020, which include the key points evaluators are asked to judge. Make sure you are familiar with these before you even start writing your own proposal, so that your writing better reflects the expectations of evaluators.
If you have more questions or doubts, feel free to write to us at