MEAL Glossary
A–Z glossary of Monitoring, Evaluation, Accountability & Learning (MEAL) terms. Definitions are in plain, practical language and draw on widely used international references, including FAO, OECD (DAC evaluation criteria), ALNAP, and USAID (CLA/adaptive management).
A
- Accountability
- The obligation to explain decisions, take responsibility for results, and respond to stakeholders (especially affected people).
- Accountability to Affected Populations (AAP)
- Putting community voice, feedback, and safety at the center of program decisions.
- Activity
- A task or action carried out to produce outputs (e.g., training sessions, site visits).
- Adaptive management
- Making structured changes to plans and implementation based on evidence and learning.
- Adequacy
- Whether the scale/coverage of assistance is "enough" relative to the need.
- After-action review (AAR)
- A short reflection session after an activity/event to capture what worked, what didn't, and what to improve.
- Assumption
- A condition outside the project's control that must hold true for the logic to work.
- Attribution
- The extent to which observed changes can be credibly linked to the intervention (vs other factors).
B
- Baseline
- The starting value of an indicator before the intervention begins.
- Benchmark
- A reference point/standard used for comparison (internal or external).
- Beneficiary
- A person/group intended to directly benefit from the intervention.
- Bias
- A systematic error that leads to distorted findings (e.g., selection bias, recall bias).
C
- Case study
- An in-depth qualitative analysis of a program, site, or group.
- Causal pathway
- The "how and why" chain connecting activities to results.
- Change management
- A structured approach to shifting people/processes to a desired future state.
- Cluster sampling
- Sampling groups (clusters) first (e.g., villages), then selecting respondents inside clusters.
- Coherence (OECD criterion)
- How well an intervention fits with other interventions and policies (avoid duplication/contradiction).
- Compliance monitoring
- Checking adherence to rules, standards, SOPs, safeguards, or contracts.
- Confidence interval
- A range likely to contain the true value of an estimate (with a stated probability).
- Conflict sensitivity
- Understanding how an intervention interacts with conflict dynamics and adjusting to avoid harm.
- Control group
- A comparison group that does not receive the intervention (in experiments/quasi-experiments).
- Cost-effectiveness analysis
- Comparing options by cost per unit of outcome (e.g., cost per additional child immunized).
- Counterfactual
- What would have happened without the intervention (the "no-project" scenario).
- Coverage
- Who/what proportion is reached (often disaggregated by location/group).
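Several of the C terms (confidence interval, control group, counterfactual) are statistical. As a minimal sketch of the first, the snippet below computes a 95% confidence interval for a sample mean using the normal approximation; the helper name and the survey scores are illustrative assumptions, not from any standard.

```python
import math
import statistics

def mean_confidence_interval(values, z=1.96):
    """95% confidence interval for a mean (normal approximation).

    Illustrative helper: z = 1.96 gives roughly 95% coverage when the
    sample is large enough for the normal approximation to hold.
    """
    mean = statistics.fmean(values)
    se = statistics.stdev(values) / math.sqrt(len(values))  # standard error of the mean
    return mean - z * se, mean + z * se

# Hypothetical survey scores (e.g., household dietary diversity)
scores = [4, 6, 5, 7, 5, 6, 4, 5, 6, 7, 5, 6]
low, high = mean_confidence_interval(scores)
print(f"95% CI: ({low:.2f}, {high:.2f})")
```

With a larger sample the interval narrows, which is one reason sample size matters in survey design.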
D
- Data disaggregation
- Splitting data by sex, age, disability, geography, income, etc. to see differences.
- Data quality
- Fitness of data for use (accuracy, completeness, timeliness, consistency, integrity).
- Data triangulation
- Using multiple sources/methods to cross-check and strengthen findings.
- Desk review
- Reviewing documents and secondary data (reports, budgets, admin records).
- Difference-in-differences (DiD)
- A quasi-experimental method comparing change over time between treated and comparison groups.
- Do No Harm
- Designing and implementing in ways that minimize unintended negative effects.
- Double counting
- Counting the same person/output more than once; an indicator design/reporting error.
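The difference-in-differences entry above reduces to simple arithmetic once the four group averages are known. A minimal sketch, with hypothetical coverage figures:

```python
def diff_in_diff(treated_before, treated_after, comparison_before, comparison_after):
    """DiD estimate: the treated group's change minus the comparison group's change.

    Relies on the parallel-trends assumption: without the intervention,
    both groups would have changed by the same amount.
    """
    return (treated_after - treated_before) - (comparison_after - comparison_before)

# Hypothetical immunization coverage (%) at baseline and endline
effect = diff_in_diff(40.0, 65.0, 42.0, 50.0)
print(effect)  # 17.0 percentage points
```

Subtracting the comparison group's change nets out trends that would have happened anyway, which is the core of the counterfactual logic.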
E
- Effect
- Any change (intended/unintended; positive/negative) linked to an intervention.
- Effectiveness (OECD criterion)
- The extent to which objectives/results are achieved.
- Efficiency (OECD criterion)
- How well resources (time, money, staff) are converted into results.
- Endline
- Final measurement of indicators at the end of an intervention.
- Equity
- Fairness in processes and results: who benefits, who is left behind (often cross-cutting).
- Ethics (research/evaluation)
- Protecting participants: informed consent, confidentiality, safety, minimizing risk.
- Evaluation
- A systematic assessment of design, implementation, and results to judge value and learn.
- Evaluation question (EQ)
- A focused question that the evaluation is designed to answer.
- Evidence
- Verifiable information used to support findings (documents, records, observations, data).
- External evaluation
- Conducted by independent evaluators outside the implementing team.
F
- Feedback
- Community/stakeholder comments, complaints, and suggestions used to improve delivery.
- Feedback and response mechanism (FRM)
- The channel + process to collect, track, and respond to feedback.
- Fidelity
- Whether activities were delivered as intended (dose, quality, targeting).
- Focus group discussion (FGD)
- Facilitated group conversation to explore perceptions/experiences.
- Formative evaluation
- Early-stage evaluation to improve design and implementation.
- Frequency
- How often an indicator is measured/reported (weekly, monthly, quarterly).
G
- Gender analysis
- Assessing gender roles, constraints, power, and risks to design better interventions.
- Generalizability
- Whether findings apply beyond the sampled group/context.
- Goal (impact-level)
- The highest-level intended change (often long-term).
- Governance
- Rules, roles, and accountability arrangements guiding decision-making.
- Grievance mechanism
- A formal channel to raise and resolve concerns/complaints safely.
H
- Harmonization
- Aligning indicators, tools, and reporting across partners to reduce duplication.
- Human-centered design
- Designing services around user needs and experiences, iteratively testing improvements.
I
- Impact (OECD criterion)
- Broader, longer-term positive/negative changes linked to the intervention (intended or not).
- Impact evaluation
- Designs aimed at estimating causal effects (experimental or strong quasi-experimental).
- Indicator
- A measurable sign of change used to track progress.
- Informed consent
- A participant's voluntary agreement after understanding purpose, risks, and rights.
- Input
- Resources used (funds, staff, equipment, time).
- Internal evaluation
- Conducted by the implementing organization's own staff/unit.
- Intersectionality
- Considering overlapping vulnerabilities (e.g., gender + disability + poverty).
J
- Judgement
- The evaluative conclusion about merit/worth based on evidence and criteria.
- Justification
- Clear rationale for methods, sampling, and decisions in MEAL work.
K
- Key informant interview (KII)
- Interview with someone who has specialized knowledge (officials, leaders, staff).
- KPI (Key Performance Indicator)
- A priority indicator used for high-level performance tracking.
L
- Learning agenda
- A prioritized set of learning questions that the program commits to answering.
- Lesson learned
- A practical insight supported by evidence that can improve future work.
- Logframe (Logical Framework Matrix)
- A structured results framework (objectives, indicators, means of verification, assumptions).
- Logic model
- A visual map linking inputs → activities → outputs → outcomes → impact.
M
- MEAL
- Monitoring, Evaluation, Accountability & Learning: using evidence for performance, transparency, and improvement.
- M&E (Monitoring & Evaluation)
- Monitoring is continuous tracking; evaluation is periodic assessment of value and results.
- Meta-analysis
- Combining results from multiple studies/evaluations to derive broader conclusions.
- Methodology
- The overall approach and rationale for how data will be collected and analyzed.
- Midline
- Measurement taken mid-way to check progress and adjust.
- Mixed methods
- Combining quantitative (numbers) and qualitative (narratives) methods.
- Monitoring
- Routine collection and analysis of information to track implementation and indicator progress.
- Most Significant Change (MSC)
- A participatory method collecting "change stories" and selecting the most important ones.
N
- Needs assessment
- Determining what people need, how severe it is, and who is most affected.
- Negative result
- A finding showing no improvement or deterioration; still important learning.
- Non-response
- Missing responses that may bias survey results if systematic.
O
- Operational monitoring
- Tracking day-to-day delivery, timelines, procurement, staffing, and bottlenecks.
- Outcome
- A short- to medium-term change in behavior, capacity, access, or practice.
- Outcome harvesting
- Identifying outcomes first, then working backward to see contribution and evidence.
- Output
- A direct product/service delivered (e.g., road sections completed, staff trained).
P
- Participatory monitoring
- Communities/stakeholders actively help define indicators, collect data, and interpret findings.
- Performance evaluation
- Examines how and why results were achieved; often focuses on implementation and outcomes.
- Process evaluation
- Studies how implementation happened (fidelity, quality, barriers, enablers).
- Program theory
- The underlying logic explaining why the intervention should work.
- Proxy indicator
- An indirect measure used when direct measurement is difficult (with clear limitations).
- PSEA
- Prevention of Sexual Exploitation and Abuse: policies and controls to prevent/respond.
Q
- Qualitative data
- Non-numeric information (interviews, observations, open-ended responses).
- Quality assurance (QA)
- Steps to ensure MEAL products meet standards (tools, training, checks).
- Quality control (QC)
- Checks during data collection/entry to detect errors early.
- Quantitative data
- Numeric information used for counting, measuring, and statistical analysis.
R
- RBM (Results-Based Management)
- Managing by defining results, measuring progress, and using evidence for decisions.
- Relevance (OECD criterion)
- Fit of objectives/design to needs, priorities, and context.
- Reliability
- Consistency of a measure (would it give similar results if repeated?).
- Reporting
- Communicating performance, learning, and accountability information to stakeholders.
- Results chain
- The sequence from activities to outputs to outcomes to impact.
- Risk register
- A list of risks with likelihood, impact, mitigation actions, and owners.
S
- Safeguarding
- Preventing and responding to harm, including SEA/SH, child protection, and exploitation.
- Sample
- A subset of a population selected for data collection.
- Sampling frame
- The list/source from which the sample is drawn.
- Satisfaction survey
- Captures user perceptions of quality, access, fairness, timeliness.
- SMART indicator
- Specific, Measurable, Achievable, Relevant, Time-bound.
- Stakeholder
- Anyone who affects or is affected by the intervention.
- Standard operating procedure (SOP)
- Step-by-step instructions to standardize MEAL processes.
- Sustainability (OECD criterion)
- Likelihood that benefits continue after support ends (financial, institutional, social, environmental).
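Drawing a sample from a sampling frame (both defined in this section) can be sketched in a few lines. The household IDs and the fixed seed below are illustrative assumptions; in practice the frame is a real list such as a household register or facility roster.

```python
import random

def draw_simple_random_sample(frame, k, seed=7):
    """Draw k units from a sampling frame without replacement."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible and auditable
    return rng.sample(frame, k)

# Hypothetical frame of 200 registered households
frame = [f"HH-{i:03d}" for i in range(1, 201)]
sample = draw_simple_random_sample(frame, k=30)
print(len(sample))  # 30
```

Recording the seed alongside the frame version is a simple way to make the selection verifiable later.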
T
- Target
- The intended value to achieve for an indicator by a given time.
- Theory of Change (ToC)
- A detailed explanation of how change is expected to happen, including assumptions and context.
- Third-party monitoring (TPM)
- Independent monitoring conducted by an external party to increase credibility and reach.
- Time series
- Data collected repeatedly over time to see trends and seasonality.
- Triangulation
- Cross-checking data through multiple sources/methods (strengthens confidence).
U
- Unintended effects
- Positive/negative changes not planned in the design.
- Uptake
- The extent to which a service/product is adopted or used by the intended group.
- Utilization-focused evaluation
- Designing evaluations for intended users and their decisions (use is central).
V
- Validity
- Whether a tool measures what it claims to measure.
- Value for money (VfM)
- Assessing economy, efficiency, effectiveness, and equity of spending (context-specific).
- Verification
- Checking reported results against evidence (documents, photos, site checks, registers).
W
- Weighting
- Assigning importance to criteria/indicators when scoring (must be transparent).
- With/without comparison
- Comparing those who received the intervention with those who did not (with caution about bias).
- Workplan
- A time-bound plan of activities, responsibilities, and milestones.
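The weighting entry in this section can be illustrated with a small scoring helper. The criteria names and weights below are hypothetical, not a prescribed scheme; the point is that the weights are explicit and sum to 1.

```python
def weighted_score(scores, weights):
    """Combine criterion scores into one number using transparent weights.

    Illustrative sketch: weights must sum to 1, and both the criteria and
    the numbers below are made-up examples.
    """
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(scores[c] * w for c, w in weights.items())

weights = {"relevance": 0.4, "effectiveness": 0.4, "efficiency": 0.2}
option = {"relevance": 4, "effectiveness": 3, "efficiency": 5}
print(f"{weighted_score(option, weights):.1f}")  # 3.8
```

Publishing the weight table alongside the scores is what makes the ranking transparent and contestable.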
X
- XLS/Database audit trail
- A record of edits/entries showing who changed what and when (useful for data integrity).
Y
- Yield (program yield)
- The "return" from activities, e.g., number successfully completing training or adopting practices (define carefully).
Z
- Zero tolerance
- A strict policy stance (often used for fraud/SEA), paired with reporting channels and enforcement.