Most grant proposals fail before a reviewer reads the second page. Internal data from major funders consistently shows that 40 to 60 percent of submissions are eliminated at the administrative or eligibility check, before any technical assessment begins. Of those that survive to scoring, only the top 10 to 20 percent typically receive funding under competitive calls such as Horizon Europe, Erasmus+, or USAID solicitations.
A "winning" proposal is therefore not the most ambitious one. It is the one that survives every elimination stage and lands in the funder's narrow band of fundable scores.
This guide breaks down what actually drives those scores, donor by donor, section by section. It is written for two audiences: organizations preparing their own applications (NGOs, startups, research teams, cultural institutions, businesses) and grant writers building a repeatable methodology across calls.
1. Start with donor logic, not your project
The single biggest predictor of failure is misalignment between the project and the funder's actual priorities. Reviewers do not score what your organization wants to do; they score how well your proposal advances the funder's strategic objectives stated in the call.
Before drafting anything, build a one-page donor brief that maps the following (a structured version is sketched after the list):
- Strategic priorities listed in the call documents and the funder's current strategy paper
- Eligibility criteria (legal entity types, country focus, consortium requirements)
- Funding ceiling and floor, co-financing rate, eligible cost categories
- Evaluation criteria with their respective weights
- Mandatory cross-cutting themes (gender, climate, digitalization, inclusion)
- Submission format, page limits, mandatory annexes
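For grant writers building a repeatable methodology, the same brief can be kept as structured data so it can be versioned, compared across calls, and checked before drafting begins. A minimal sketch; every field name here is illustrative rather than a funder standard, and the example call identifier is a placeholder:

```python
from dataclasses import dataclass, field

# A donor brief kept as data, so it can be reused and diffed across calls.
# All field names are illustrative; adapt them to the call documents.
@dataclass
class DonorBrief:
    funder: str
    call_id: str
    strategic_priorities: list[str] = field(default_factory=list)
    eligibility: dict[str, str] = field(default_factory=dict)
    budget_floor_eur: float = 0.0
    budget_ceiling_eur: float = 0.0
    cofinancing_rate: float = 0.0          # share covered by the funder, e.g. 0.80
    evaluation_criteria: dict[str, float] = field(default_factory=dict)  # criterion -> weight
    cross_cutting_themes: list[str] = field(default_factory=list)
    submission_format: str = ""
    mandatory_annexes: list[str] = field(default_factory=list)

# Hypothetical example; the call identifier is invented, not a real call.
brief = DonorBrief(
    funder="Horizon Europe",
    call_id="HORIZON-XX-2025-EXAMPLE",
    evaluation_criteria={"excellence": 1 / 3, "impact": 1 / 3, "implementation": 1 / 3},
    cross_cutting_themes=["gender", "climate", "digitalization", "inclusion"],
)
```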
Comparison of evaluation logic across major funders (illustrative, based on standard call structures):
| Funder / Programme | Excellence / Relevance | Impact | Implementation / Quality | Typical Page Limit (Part B) |
|---|---|---|---|---|
| Horizon Europe (RIA / IA) | 1/3 | 1/3 | 1/3 | 45 pages |
| Erasmus+ KA2 Cooperation Partnerships | 30 points | 30 points | 20 points (plus 20 for partnership) | ~120,000 chars |
| USAID (typical RFA / NOFO) | Technical approach 50% | Past performance 25% | Personnel / management 25% | 25 to 40 pages |
| Interreg Europe | Relevance and strategy | Partnership | Communication and management | 60 to 80 pages |
| Creative Europe | Relevance | Quality of content | Project management | Varies by strand |
The lesson: a proposal optimized for Horizon Europe (where impact carries one third of the score) requires a fundamentally different structure than a USAID Cooperative Agreement (where past performance is heavily weighted).
2. The five questions every reviewer answers
Regardless of funder, evaluators implicitly answer five questions when scoring. Build your proposal so that each section pre-answers them clearly:
- Is the problem real, urgent, and within the funder's mandate?
- Is the proposed solution credible and based on evidence?
- Can this organization actually deliver it?
- Are the results measurable, attributable, and aligned with the funder's indicators?
- Will the change persist after funding ends?
If a reviewer cannot answer "yes" to all five within the first three pages, the proposal usually does not recover.
3. Executive summary: the only section read in full
In nearly every funding mechanism, the executive summary (or abstract, or project summary) is the single section guaranteed to be read by every reviewer, including those who only skim the rest. It is also the section most often written last and rushed.
A high-scoring executive summary contains, in this order:
- One sentence stating the problem and its scale, with a number
- One sentence stating the target group and geography
- One sentence stating the proposed intervention and its theoretical basis
- The expected outcomes with quantified indicators
- The total budget, duration, and requested amount
- A closing line on why this consortium or organization is the right one to deliver it
A common length is 150 to 250 words. Avoid adjectives. Reviewers respond to nouns and numbers.
Weak example:
Our innovative project will empower vulnerable youth in the region through cutting-edge interventions and partnerships, delivering transformative impact.
Strong example:
In three eastern Ukrainian oblasts, 38 percent of 16 to 24 year olds are not in employment, education, or training (Ukrstat, 2024). The 24-month action will deliver vocational micro-credentials and paid internships to 1,200 NEET youth across Kharkiv, Dnipropetrovsk, and Donetsk oblasts, using a dual-track methodology piloted in 2022 and 2023. Target outcomes: 60 percent employment rate at 6 months post-completion; 800 graduates retained in formal employment. Total budget 1.85 million EUR over 24 months; EU contribution requested 1.48 million EUR (80 percent).
4. Problem statement: the evidence test
Reviewers test whether your problem statement is grounded in three layers of evidence:
- Macro data. Official statistics, peer-reviewed studies, OECD or World Bank indicators.
- Sector data. Thematic reports from UN agencies, donor evaluations, academic research.
- Micro data. Your own field assessments, beneficiary surveys, partner observations.
A robust problem statement cites all three. It also defines the gap precisely: not "youth unemployment is high" but "a youth NEET rate of 38 percent in target oblasts versus an EU average of 11.7 percent (Eurostat, 2024), with no existing programme covering the dual-track vocational model demonstrated to produce 6-month employment rates above 50 percent."
Frequent rejection reason: "the problem is described, but the gap that this project specifically fills is not articulated." This is one of the most frequently cited weaknesses in EU and USAID evaluation summary reports.
5. Theory of Change and Logical Framework
Every major international donor (EU, USAID, UN agencies, Sida, GIZ, FCDO, Global Affairs Canada) requires either an explicit Theory of Change, a Logical Framework, or a Results Framework. These are not bureaucratic formalities. They are the tool through which reviewers assess the internal logic of your proposal.
Minimum standard: every activity must trace upward to an output, every output to an outcome, and every outcome to the funder's higher-level impact statement. If a reviewer can identify any activity that does not connect, the entire logic is treated as broken.
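Because this rule is mechanical, it can be checked before any reviewer sees the proposal. A minimal sketch, assuming each LogFrame element simply records its parent; the element names and the parent-pointer format are illustrative, not a donor template:

```python
# Traceability check for the intervention logic: every activity must link to
# an output, every output to an outcome, and every outcome to the impact.
logframe = {
    "impact": None,                 # top of the chain
    "outcome_1": "impact",
    "output_1.1": "outcome_1",
    "output_1.2": "outcome_1",
    "activity_a": "output_1.1",
    "activity_b": "output_1.2",
    "activity_c": None,             # orphan: breaks the logic
}

def orphans(frame):
    """Elements (other than the impact) whose parent is missing from the frame."""
    return [name for name, parent in frame.items()
            if name != "impact" and parent not in frame]

print(orphans(logframe))  # -> ['activity_c']
```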
A simplified LogFrame structure:
| Level | Description | Indicators | Means of Verification | Assumptions |
|---|---|---|---|---|
| Impact | Long-term societal change | High-level statistics | National data sources | Stable policy environment |
| Outcomes | Behavioral or institutional change | Quantified change indicators | Endline surveys, third-party evaluations | Continued participation |
| Outputs | Direct deliverables | Counts, percentages, quality measures | Project records, attendance lists | Sufficient demand |
| Activities | What the project does | Implementation milestones | Activity reports | Resources available on time |
For a deeper treatment of LogFrame construction, including donor-specific variations (USAID Results Framework, EU intervention logic, UNDP RBM), see our dedicated guide on the Logical Framework methodology.
6. SMART objectives and indicators
Objectives should be Specific, Measurable, Achievable, Relevant, and Time-bound. The most common drafting error is mixing levels: stating an output as an outcome, or vice versa.
Test each objective against three checks (a simple automated gate is sketched after the list):
- Can this be measured with a single indicator? If you need three indicators, you probably have two objectives.
- Is the baseline known or measurable? If not, the indicator is not yet usable.
- Is the target attribution plausible? If 90 percent of the change would happen anyway, the project is not the cause.
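The first two checks can be run as a gate over a draft results framework; the third, attribution, remains a judgment call. A sketch under the assumption that each objective is recorded with its indicators and baseline (the record format is invented for illustration):

```python
# Illustrative objective records mirroring the checks above.
objectives = [
    {"id": "SO1", "indicators": ["employment rate at 6 months"],
     "baseline": 0.38, "target": 0.60},
    {"id": "SO2", "indicators": ["practice adoption", "knowledge retention", "referral rate"],
     "baseline": None, "target": 0.50},
]

for obj in objectives:
    if len(obj["indicators"]) > 1:
        print(f'{obj["id"]}: {len(obj["indicators"])} indicators -- probably more than one objective')
    if obj["baseline"] is None:
        print(f'{obj["id"]}: baseline unknown -- indicator not yet usable')
```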
Indicator hierarchy expected by major donors:
- Output indicators. Number of trained, number of materials produced, number of events held.
- Outcome indicators. Percentage adopting a new practice, percentage retaining knowledge at 6 months, percentage of beneficiaries reporting behavior change.
- Impact indicators. Change in sector statistics partly attributable to the intervention.
EU calls increasingly require alignment with EU mission indicators (climate, cancer, soil, oceans, cities) and with Sustainable Development Goal targets. USAID requires alignment with the relevant Country Development Cooperation Strategy and standard foreign assistance indicators (F indicators).
7. Implementation plan: how reviewers test feasibility
The work plan section is where reviewers move from "is this a good idea" to "can these people deliver it." Three elements must be airtight.
Work package structure (EU style) or activity clusters (USAID style). Each package needs a lead partner, clear deliverables, milestones, and resource allocation. EU consortia typically use 4 to 8 work packages; more than 10 signals fragmentation, fewer than 3 signals lack of differentiation.
Gantt chart and critical path. Show monthly granularity for the first year and quarterly thereafter. Mark dependencies. Reviewers immediately spot Gantt charts where deliverables are due before the activities producing them, a surprisingly common error.
Risk register. Minimum five risks, each with: likelihood (low / medium / high), impact (low / medium / high), mitigation measure, and risk owner. Generic risks ("staff turnover") score lower than specific risks ("permitting delay for field site X due to ongoing land registry reform").
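A common scoring convention, assumed here rather than mandated by any particular donor, is to map the likelihood and impact ratings to a 1-to-3 scale and sort the register by their product, so the highest-exposure risks lead the table:

```python
# Risk register sketch: likelihood and impact on a 1 (low) to 3 (high) scale.
# The scoring formula (likelihood x impact) is a common practice, not a
# donor-mandated rule; the risks shown echo the examples in the text.
risks = [
    {"risk": "Permitting delay for field site X (land registry reform)",
     "likelihood": 2, "impact": 3, "mitigation": "Parallel permitting track", "owner": "WP3 lead"},
    {"risk": "Key trainer turnover",
     "likelihood": 2, "impact": 2, "mitigation": "Train-the-trainer reserve pool", "owner": "Coordinator"},
]

for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    score = r["likelihood"] * r["impact"]
    print(f'{score:>2}  {r["risk"]}  -> {r["mitigation"]} ({r["owner"]})')
```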
8. Budget: the section reviewers read most carefully
Budget weaknesses are the most common trigger for score reductions at the technical stage, and for cuts during negotiation. Three principles drive a fundable budget.
Cost categories must match the funder's eligible cost rules. Direct personnel, subcontracting, travel, equipment, other goods and services, and indirect costs each have specific rules. Horizon Europe applies a flat 25 percent indirect cost rate; USAID requires a NICRA (Negotiated Indirect Cost Rate Agreement) or application of the de minimis rate; many foundations cap indirect costs at 10 to 15 percent.
Every line needs a justification. A budget line "training: 25,000 EUR" is unscored. A line "training: 25,000 EUR (5 workshops x 50 participants x 100 EUR per participant covering venue, materials, lunch, and certified trainer fee, based on 2024 market quotes from three providers)" is scoreable.
Cost effectiveness ratio. Reviewers calculate a rough cost-per-beneficiary or cost-per-output figure and benchmark it against similar projects. If your training costs 2,000 EUR per beneficiary and the sector norm is 400 EUR, you must justify the difference (depth, duration, certification value) or expect a budget cut.
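All three principles reduce to arithmetic a reviewer can reproduce, so reproduce it first. A minimal sketch using the figures from this section; all amounts except the training line are illustrative, and the subcontracting exclusion reflects the Horizon Europe flat-rate base (other funders define the base differently):

```python
# Bottom-up line costing: 5 workshops x 50 participants x 100 EUR each.
training = 5 * 50 * 100            # 25,000 EUR

# Indirect costs: Horizon Europe applies the 25% flat rate to eligible direct
# costs excluding subcontracting (a frequent "wrong base" error).
direct_costs = 1_480_000           # illustrative total direct costs, incl. subcontracting
subcontracting = 120_000           # illustrative subcontracted work
indirect = 0.25 * (direct_costs - subcontracting)   # 340,000 EUR

# Cost per beneficiary, benchmarked against an illustrative sector norm.
beneficiaries = 1_200
cost_per_beneficiary = (direct_costs + indirect) / beneficiaries
sector_norm = 400                  # EUR, the illustrative norm from the text
if cost_per_beneficiary > 2 * sector_norm:
    print(f"{cost_per_beneficiary:,.0f} EUR per beneficiary vs norm {sector_norm} EUR: "
          "justify the difference or expect a cut")
```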
Common budget red flags:
| Red flag | What reviewers conclude |
|---|---|
| Round numbers throughout | No bottom-up costing |
| Personnel above 70 percent without justification | Top-heavy, weak delivery capacity |
| Equipment purchases late in the timeline | Equipment not actually needed for the project |
| Travel costs without linkage to activities | Padding |
| Indirect costs against the wrong base | Compliance risk |
9. Organizational capacity and track record
USAID weights past performance at 25 percent or more in many solicitations. EU programmes assess "operational capacity" as a pass or fail gate. Foundations increasingly request audited financials and references.
Strong capacity sections include:
- Three to five comparable past projects with: donor, budget, dates, role, key results achieved against indicators
- Audited financial statements for the past two to three years
- CVs of key personnel with relevant project experience highlighted
- References from past donors with permission to be contacted
- Evidence of compliance systems: anti-fraud policy, safeguarding policy, financial procedures
For young organizations and startups, the substitute is the personal track record of founders and a clear statement of fiscal sponsorship or partnership with an established entity.
10. Sustainability and exit strategy
A non-negotiable section in nearly every public donor application. Reviewers test whether benefits will persist after funding ends. Address four sustainability dimensions:
- Financial sustainability. Revenue model, public budget integration, follow-on funding pipeline.
- Institutional sustainability. Capacity transferred to local partners, ownership by beneficiaries.
- Policy sustainability. Integration into government strategies, regulatory uptake.
- Environmental and social sustainability. Do-no-harm principles, gender and inclusion safeguards.
Vague claims ("the project will continue through partnerships") score poorly. Specific commitments ("the municipal education department has signed a letter of intent to integrate the methodology into its 2027 to 2029 strategy and allocate budget line 2.4.1") score well.
11. Cross-cutting themes
Most major donors require integration of cross-cutting themes throughout the proposal, not as a standalone chapter. Mandatory or strongly weighted themes:
| Theme | EU Programmes | USAID | UN Agencies |
|---|---|---|---|
| Gender equality | Required, often with quantified Gender Equality Plan | Mandatory gender analysis | Mandatory in most agencies |
| Climate and environment | "Do No Significant Harm" principle, climate mainstreaming | Climate Risk Management screening | Mandatory environmental screening |
| Digitalization | Increasingly required | Digital Strategy alignment | Variable |
| Inclusion / disability | Required under several programmes | Disability inclusion required | Mandatory "leave no one behind" framing |
A proposal that addresses cross-cutting themes only in a final paragraph signals weak integration. Reviewers expect to see them reflected in problem analysis, target groups, indicators, activities, and budget.
12. The pre-submission checklist (20 points)
Before submitting, run a structured review against this checklist. Projects passing all 20 items are several times more likely to advance past the eligibility stage.
Eligibility and compliance
- Legal entity registration documents current and uploaded
- Consortium meets minimum partner number and country distribution rules
- All partners eligible under the call (no debarment, sanctions checks done)
- Co-financing percentage meets minimum
- Project duration within call limits
Substantive content
- Executive summary contains all six required elements
- Problem statement cites macro, sector, and micro evidence
- Theory of Change or LogFrame logic is unbroken from activities to impact
- Each objective is SMART and matched to at least one indicator
- Indicators include baseline, target, and source of verification
- Work plan has clear deliverables, milestones, and dependencies
- Risk register includes at least five specific, mitigated risks
Budget and finance
- Budget cost categories match the funder's eligible cost rules
- Each line has a calculation basis and justification
- Indirect cost rate applied correctly
- Cost-per-beneficiary benchmarked against sector norms
- Audited financial statements attached for required years
Cross-cutting and submission
- Gender, climate, digital, and inclusion dimensions integrated throughout
- All mandatory annexes attached and signed
- Submission completed at least 24 hours before the deadline (portal failures are common)
13. The reviewer's perspective
Internal evaluator guidelines from EU agencies, USAID review panels, and major foundations share one feature: reviewers are time-constrained, often reviewing 5 to 15 proposals against the same criteria within a fixed window. They are scoring against a rubric, not searching for hidden brilliance.
Practical implications for the writer:
- Use the funder's exact terminology in headings and section titles
- Mirror the structure of the call's evaluation criteria in your table of contents
- Make scores easy to assign by signposting evidence ("Evidence supporting Excellence criterion 1.1 follows below")
- Use bold sparingly to highlight quantified results, not adjectives
- Include a one-page summary of how the proposal addresses each evaluation criterion
14. Common reasons for rejection
Aggregating evaluator comments from EU and USAID public summary reports, the most frequent rejection reasons are:
- Insufficient alignment with the call priorities (cited in roughly 35 percent of rejected EU proposals)
- Weak or generic problem statement without quantified evidence
- LogFrame inconsistencies between activities, outputs, and outcomes
- Unrealistic or unjustified budget
- Insufficient consortium expertise or geographic coverage
- Vague sustainability and impact pathways
- Cross-cutting themes treated superficially
- Submission errors: missing annexes, incorrect format, late upload
Each one is preventable with a structured review. None is a result of bad luck.
15. After submission: the often-forgotten phase
Whether funded or not, every submission produces information your organization should capture.
If funded, read your evaluation summary closely. Reviewers' minor comments often become the basis for cuts to your budget or scope during grant agreement negotiation.
If rejected, request the evaluation summary report, which EU programmes and many other public funders are required to provide. Analyze each weakness and decide whether the proposal can be revised for the next call.
In both cases, build an internal proposal library with reusable elements: organizational descriptions, capacity sections, partner profiles, validated budget norms.
Top-performing organizations approach grant writing as a system, not a sequence of one-off submissions. A reusable proposal library can reduce drafting time for the next submission by 40 to 60 percent without compromising quality.
A winning grant proposal is not a feat of writing. It is the product of donor research, structured logic, evidence-based content, and disciplined review. The organizations that consistently win are not those with the most ambitious ideas. They are those who treat each submission as a scoreable, repeatable process.
The framework above works across funders, sectors, and regions, from Horizon Europe to USAID, from foundation grants to municipal innovation calls. It is the starting point for the practitioner library at i-grants.com, where you will also find specialized guides on individual programmes, donor analyses, and templates ready for adaptation.
