14. When Can You Compare EPDs? (And When You Can’t)

The promise of EPDs is comparability. Standardised methodology through Product Category Rules should let you compare products fairly and make informed decisions. But comparison isn’t automatic. Two products with EPDs aren’t necessarily comparable, and comparing them without checking compatibility can lead to poor decisions based on misleading information.

EN 15804 states bluntly: “EPD that are not in a building context are not tools to compare construction products.” This isn’t bureaucratic caution. It’s recognition that valid comparison requires careful consideration of context, functional equivalence, and methodological alignment.

This guide explains when EPD comparison is valid, when it’s misleading, and how to compare properly. Whether you’re an architect selecting materials, a procurement manager evaluating suppliers, or a manufacturer positioning products, understanding comparison rules prevents misuse.

The Fundamental Principle: Building Context

For construction products, EPD comparison must happen in building context, not in isolation. You don’t compare two insulation products by looking at their Module A1-A3 carbon numbers. You compare how those products perform in an actual building considering all life cycle stages.

Why building context matters:

Two insulation products might show different Module A1-A3 manufacturing impacts. Product A: 15 kg CO₂e per m². Product B: 25 kg CO₂e per m². Product A looks better, right?

But Product B might have better thermal performance, requiring less thickness to achieve the same U-value. When you model both in a building, Product B’s superior thermal performance saves more operational energy (Module B6) than the extra manufacturing impact costs. Total building-level impact over 60 years might favour Product B despite higher production impacts.
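A minimal sketch of that arithmetic, using a simple degree-day estimate of heat loss; the conductivities, climate, carbon factor, thickness, and service life below are assumptions for illustration, not values from any real EPD:

```python
# Illustrative building-context comparison of two insulation products.
# All figures are hypothetical; a real comparison needs actual EPD data,
# project-specific energy modelling, and matching life cycle modules.

HDD = 3000          # heating degree days (K·days per year), assumed climate
GRID_CARBON = 0.15  # kg CO2e per kWh of delivered heat, assumed
SERVICE_LIFE = 60   # years
THICKNESS = 0.16    # m, same installed thickness for both products

products = {
    "A": {"a1_a3": 15.0, "conductivity": 0.040},  # kg CO2e per m2, W/(m·K)
    "B": {"a1_a3": 25.0, "conductivity": 0.030},
}

for name, p in products.items():
    u_value = p["conductivity"] / THICKNESS                  # W/(m2·K), insulation layer only
    annual_loss = u_value * HDD * 24 / 1000                  # kWh heat loss per m2 per year
    operational = annual_loss * GRID_CARBON * SERVICE_LIFE   # Module B6-style, kg CO2e per m2
    total = p["a1_a3"] + operational
    print(f"Product {name}: A1-A3 {p['a1_a3']:.0f}, U {u_value:.3f}, "
          f"operational {operational:.0f}, total {total:.0f} kg CO2e per m2")
```

With these particular assumptions, Product B comes out lower over 60 years despite its higher A1-A3 figure; change the climate, the heat source, or the installed thicknesses and the ranking can shift, which is exactly the point of building context.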

This isn’t theoretical. It’s exactly why EN 15804 prohibits simple isolated comparison. Products interact with building systems. Use stage performance affects operational impacts. Replacement intervals matter. End-of-life scenarios in specific locations influence total impacts. Context determines which product genuinely performs better.

What building context requires:

  • Complete life cycle perspective (all modules A through D)
  • Functional equivalence in the specific application
  • Consideration of how products affect building operational performance
  • Realistic scenarios for the actual project location and conditions
  • Understanding of interaction with other building systems

For non-construction products, similar principles apply. Comparison requires functional equivalence and complete life cycle consideration, even if “building context” isn’t literally relevant.

Valid Comparison: The Requirements Checklist

EPD comparison is valid when these conditions are met:

Same Product Category and PCR

Requirement: Products must be in the same category, ideally following the same PCR or clearly compatible PCRs.

Why: PCRs define system boundaries, allocation rules, data quality requirements, and scenarios. Different PCRs mean different methodology. Results aren’t comparable even if products seem similar.

Example of valid comparison: Two concrete mixes, both with EPDs following EN 15804 and the same concrete PCR from EPD International. System boundaries match, impact assessment methods align, scenarios are consistent.

Example of invalid comparison: Concrete EPD from EPD International following one PCR versus concrete EPD from a regional programme following a different PCR with different allocation rules. Even though both are concrete EPDs, methodology differences make comparison questionable.

How to check: Look in the EPD document for the PCR reference. It states the PCR title, version, and issuing body. If EPDs reference the same PCR version, this requirement is satisfied. If they reference different PCRs, investigate whether the PCRs are harmonised or compatible.

Functional Equivalence

Requirement: Products must serve the same function with equivalent performance in the specific application.

Why: Comparing products that do different things or perform at different levels is meaningless. A low-performance product with low impacts versus a high-performance product with higher impacts isn’t a fair comparison unless you account for the functional differences.

Example of valid comparison: Two floor tiles with equivalent durability, slip resistance, and aesthetic properties. They serve identical functions in the application.

Example of invalid comparison: Comparing an engineered timber beam to a steel beam without considering that they might require different connection details, have different spanning capabilities, or need different fire protection. Even if both are “structural beams,” functional differences affect building-level impacts.

How to check: Verify technical specifications are equivalent for the intended application. Strength, durability, thermal performance, acoustic properties, or whatever characteristics matter for function should be comparable.

Complete Life Cycle Coverage

Requirement: Compare the same life cycle modules between products.

Why: Comparing one product’s cradle-to-gate impacts (A1-A3 only) to another’s cradle-to-grave impacts (A1-A3 + B + C + D) makes the first product look artificially better.

Example of valid comparison: Both EPDs report Modules A1-A3, C1-C4, and D. You compare matching modules: A1-A3 to A1-A3, C to C, D to D. You can see where differences occur.

Example of invalid comparison: Product A’s EPD reports only A1-A3. Product B’s EPD reports A1-A3 + C + D. If you compare just the A1-A3 totals, you’re comparing properly. If you compare Product A’s A1-A3 to Product B’s A1-A3+C+D total, you’re comparing different scopes.

How to check: Look at which modules each EPD includes. Compare only modules that both EPDs report. If one reports more modules than the other, limit comparison to overlapping modules or request data for missing modules.
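A small sketch of that module-matching check; the dictionary layout and the numbers are illustrative, not taken from any real EPD:

```python
# Compare only the life cycle modules both EPDs actually report.
# Values are hypothetical kg CO2e per declared unit.

product_a = {"A1-A3": 15.0}                              # cradle-to-gate only
product_b = {"A1-A3": 25.0, "C1-C4": 3.0, "D": -4.0}     # cradle-to-gate plus end of life

shared = sorted(set(product_a) & set(product_b))
missing = sorted(set(product_a) ^ set(product_b))

print(f"Comparable modules: {shared}")           # ['A1-A3']
print(f"Reported by only one EPD: {missing}")    # ['C1-C4', 'D'] -- request data or exclude

for module in shared:
    print(f"{module}: A = {product_a[module]}, B = {product_b[module]}")
```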

Same Functional or Declared Unit

Requirement: Compare EPDs on the same basis (per kg, per m², per functional unit).

Why: Results reported per kg for one product and per m² for another aren’t directly comparable without conversion.

Example of valid comparison: Both insulation products report per m² at declared thickness. Direct comparison works if thickness is equivalent or you adjust for thermal performance.

Example of invalid comparison: One product reports per m², another per m³, a third per kg. Without converting to common units considering density and application thickness, numbers aren’t comparable.

How to check: Note the declared or functional unit in each EPD. Convert to common basis if needed using product density, thickness, or coverage information.
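A minimal sketch of such a conversion, assuming the hypothetical density, thickness, and impact figures below; a real conversion would take these from the EPDs’ product descriptions:

```python
# Convert declared units to a common basis before comparing.
# Density, thickness, and impact values are illustrative only.

# Product X declares per kg; Product Y declares per m2 at a stated thickness.
x_gwp_per_kg = 1.2        # kg CO2e per kg
x_density = 140.0         # kg per m3
y_gwp_per_m2 = 9.0        # kg CO2e per m2 at declared thickness
y_thickness = 0.05        # m

application_thickness = 0.05  # m of product needed in this application

# Bring both to kg CO2e per m2 installed at the application thickness.
# Scaling the per-m2 value linearly with thickness is a simplification.
x_gwp_per_m2 = x_gwp_per_kg * x_density * application_thickness
y_gwp_per_m2_adjusted = y_gwp_per_m2 * (application_thickness / y_thickness)

print(f"X: {x_gwp_per_m2:.1f} kg CO2e/m2, Y: {y_gwp_per_m2_adjusted:.1f} kg CO2e/m2")
```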

Compatible Impact Assessment Methods

Requirement: EPDs should use the same impact assessment methods and characterisation factors.

Why: Different methods can yield different numerical results even for identical emissions. The same inventory characterised with CML factors, with ReCiPe, or with the EN 15804 reference methods produces different climate change numbers.

Example of valid comparison: Both EPDs follow EN 15804+A2 using European Commission characterisation factors. Impact assessment methodology is identical.

Example of invalid comparison: One EPD uses EN 15804 methods. Another uses TRACI methods common in North America. Climate change values aren’t directly comparable because characterisation approaches differ.

How to check: EPDs state which impact assessment methods they used, usually in the LCA methodology section. Verify methods match or understand how differences affect results.

Consistent Scenarios

Requirement: For life cycle stages using scenarios (transport, use, end-of-life), scenarios should be consistent or adjusted for comparison.

Why: Different assumptions about transport distances, replacement intervals, or recycling rates affect results. Comparing optimistic scenarios for one product to pessimistic scenarios for another skews comparison.

Example of valid comparison: Both EPDs assume 500 km transport to site, 60-year service life, and 70% end-of-life recycling based on current regional practice. Scenarios align.

Example of invalid comparison: Product A assumes 300 km transport, 40-year service life requiring one replacement, and 50% recycling. Product B assumes 600 km transport, 60-year service life with no replacement, and 80% recycling. Scenario differences confound product differences.

How to check: Read scenario descriptions in EPDs carefully. Note assumptions about transport, service life, maintenance, and end-of-life. If scenarios differ substantially, consider adjusting calculations to use consistent assumptions.
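One possible adjustment, sketched below, is rescaling each EPD’s Module A4 transport result to the project’s actual distance, assuming impacts scale roughly linearly with distance for the same transport mode; all figures are illustrative:

```python
# Put both EPDs' transport scenarios (Module A4) on a common project distance.
# Assumes impacts scale roughly linearly with distance for the same transport mode.

project_distance_km = 450.0

epds = {
    "Product A": {"a4_gwp": 1.2, "scenario_km": 300.0},   # kg CO2e per declared unit
    "Product B": {"a4_gwp": 2.1, "scenario_km": 600.0},
}

for name, epd in epds.items():
    adjusted = epd["a4_gwp"] * project_distance_km / epd["scenario_km"]
    print(f"{name}: declared A4 {epd['a4_gwp']:.1f} at {epd['scenario_km']:.0f} km, "
          f"adjusted to {adjusted:.2f} at {project_distance_km:.0f} km")
```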

Invalid Comparison: Common Mistakes

Several comparison approaches fail to meet validity requirements:

Comparing Across Product Categories

The mistake: Comparing insulation to windows, or concrete to steel, using EPD numbers directly.

Why it’s invalid: Different product categories serve different functions. They follow different PCRs with different system boundaries and rules. Functional equivalence doesn’t exist.

What to do instead: Compare at building level, assessing how different combinations of products affect total building environmental performance. Use building assessment tools that account for how products interact.

Cherry-Picking Modules

The mistake: Highlighting only modules where your product looks good while ignoring others.

Why it’s invalid: It presents an incomplete picture. A product might have low Module A but high Module B impacts. Comparing only Module A misleads.

What to do instead: Present complete life cycle data. If certain modules matter most for the application, explain why while still disclosing all modules.

Ignoring Functional Differences

The mistake: Comparing products with different performance levels without accounting for implications.

Why it’s invalid: Lower environmental impact with lower performance might not be better if you need more product to achieve the required function.

What to do instead: Adjust comparison to functional equivalence. If products have different thermal conductivity, calculate quantity needed to achieve the same U-value. Compare based on equivalent function, not equal mass or area.
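A minimal sketch of that adjustment for insulation, using the standard relationship that the required thickness equals thermal conductivity divided by the target U-value; the conductivities and per-m³ impacts are assumed for illustration:

```python
# Normalise two insulation products to the same thermal function (target U-value)
# before comparing. Conductivities and impacts are illustrative.

TARGET_U = 0.20   # W/(m2·K) required in the application

products = {
    "A": {"conductivity": 0.040, "gwp_per_m3": 90.0},   # W/(m·K), kg CO2e per m3
    "B": {"conductivity": 0.030, "gwp_per_m3": 160.0},
}

for name, p in products.items():
    # Thickness needed for the target U-value (insulation layer only): d = lambda / U
    thickness = p["conductivity"] / TARGET_U
    gwp_per_m2 = p["gwp_per_m3"] * thickness   # impact per m2 at functionally equivalent thickness
    print(f"Product {name}: needs {thickness*1000:.0f} mm, "
          f"{gwp_per_m2:.1f} kg CO2e per m2 at U = {TARGET_U}")
```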

Comparing Different Module D Assumptions

The mistake: Treating Module D as equivalent to other modules and comparing products where Module D assumptions differ substantially.

Why it’s invalid: Module D is scenario-dependent and sits beyond the system boundary. Comparing optimistic recycling scenarios to pessimistic ones doesn’t reflect genuine product differences.

What to do instead: Compare excluding Module D, or ensure scenarios are realistic and consistent. Understand that Module D represents potential future benefits dependent on infrastructure and markets.
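A small sketch of the reporting convention that keeps Module D visible but separate; the figures are illustrative:

```python
# Report totals with and without Module D so scenario-dependent benefits stay visible
# rather than silently offsetting production impacts. Values are illustrative.

modules = {"A1-A3": 25.0, "C1-C4": 3.0}   # kg CO2e per declared unit
module_d = -6.0                            # potential benefit beyond the system boundary

total_a_to_c = sum(modules.values())
print(f"A-C total: {total_a_to_c:.1f} kg CO2e")
print(f"Module D (reported separately): {module_d:.1f} kg CO2e")
print(f"A-C plus D (scenario-dependent): {total_a_to_c + module_d:.1f} kg CO2e")
```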

Focusing Only on Climate Change

The mistake: Comparing only Global Warming Potential while ignoring other impact categories.

Why it’s invalid: Products optimised for low carbon might have high water use, acidification, or resource depletion. Single-indicator comparison misses tradeoffs.

What to do instead: Review multiple impact categories. If climate change is priority, state that clearly, but acknowledge other impacts exist.

How to Compare EPDs Properly

Follow this systematic approach:

Step 1: Verify Comparability Prerequisites

Before comparing, check:

  • Do both EPDs follow the same or compatible PCRs?
  • Are products functionally equivalent in the application?
  • Are declared/functional units compatible?
  • Do EPDs report the same life cycle modules?
  • Are impact assessment methods aligned?

If any answer is no, comparison requires additional work to address incompatibilities or may not be valid.
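A sketch of how that screening might look in code; the metadata fields and example values are hypothetical, and in practice you would read them from the EPD documents themselves:

```python
# A checklist-style prerequisite screen before any numerical comparison.
# The metadata fields are hypothetical; populate them from the EPD documents.

def comparison_prerequisites(epd_a: dict, epd_b: dict) -> list[str]:
    """Return a list of unmet prerequisites; empty means comparison can proceed."""
    problems = []
    if epd_a["pcr"] != epd_b["pcr"]:
        problems.append("Different PCRs -- check whether they are harmonised")
    if epd_a["declared_unit"] != epd_b["declared_unit"]:
        problems.append("Different declared/functional units -- convert before comparing")
    if epd_a["lcia_method"] != epd_b["lcia_method"]:
        problems.append("Different impact assessment methods -- results not directly comparable")
    if not set(epd_a["modules"]) & set(epd_b["modules"]):
        problems.append("No overlapping life cycle modules")
    return problems

epd_a = {"pcr": "EN 15804+A2 / c-PCR concrete", "declared_unit": "1 m3",
         "lcia_method": "EF 3.1", "modules": {"A1-A3", "C1-C4", "D"}}
epd_b = {"pcr": "EN 15804+A2 / c-PCR concrete", "declared_unit": "1 m3",
         "lcia_method": "EF 3.1", "modules": {"A1-A3"}}

issues = comparison_prerequisites(epd_a, epd_b)
print(issues or "Prerequisites met -- compare overlapping modules in building context")
```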

Step 2: Establish Building Context

Define the specific application:

  • What building type and location?
  • What performance requirements?
  • What service life are you assessing?
  • What other building systems interact with these products?

This context determines which life cycle stages matter most and how to interpret differences.

Step 3: Compare Matching Modules

Extract results for matching modules from both EPDs:

  • Compare Module A1-A3 production impacts
  • Compare Module B use stage impacts if reported
  • Compare Module C end-of-life impacts
  • Consider Module D separately with scenario assessment

Look at multiple impact categories, not just climate change.
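One way to keep several impact categories in view at once is a simple module-by-category matrix, sketched below with illustrative values and EN 15804-style indicator abbreviations (GWP, AP, WDP):

```python
# Lay out matching modules across several impact categories, not just GWP.
# All values are illustrative; units differ by indicator.

results = {
    "Product A": {"A1-A3": {"GWP": 15.0, "AP": 0.05, "WDP": 2.0},
                  "C1-C4": {"GWP": 1.0,  "AP": 0.01, "WDP": 0.2}},
    "Product B": {"A1-A3": {"GWP": 25.0, "AP": 0.03, "WDP": 1.1},
                  "C1-C4": {"GWP": 0.8,  "AP": 0.01, "WDP": 0.1}},
}

modules = ["A1-A3", "C1-C4"]
categories = ["GWP", "AP", "WDP"]

header = f"{'Module':8} {'Category':9} " + " ".join(f"{name:>10}" for name in results)
print(header)
for module in modules:
    for cat in categories:
        row = " ".join(f"{results[name][module][cat]:>10.2f}" for name in results)
        print(f"{module:8} {cat:9} {row}")
```

Laying results out this way makes tradeoffs visible: in this illustrative data, Product B has higher production GWP but lower acidification and water use.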

Step 4: Assess Scenarios

Check scenario consistency:

  • Do transport distances align with project location?
  • Are service life assumptions realistic?
  • Are end-of-life scenarios appropriate for local infrastructure?

Adjust scenarios if needed to create a fair comparison.

Step 5: Consider Building-Level Implications

Model products in actual building context:

  • How do thermal properties affect operational energy?
  • How do durability differences affect replacement needs?
  • How do products interact with other building systems?

Use building LCA tools (OneClickLCA, Tally, etc.) that properly integrate EPD data into building assessment.

Step 6: Interpret Results Holistically

Consider:

  • Which life cycle stages dominate impacts?
  • Are differences between products significant or marginal?
  • Do tradeoffs exist between impact categories?
  • What uncertainty affects results?

Don’t reduce comparison to “Product A is better.” Explain where products differ, why differences exist, and what matters most in your context.

Comparison Tools and Software

Several tools support proper EPD comparison in building context:

Building LCA software (OneClickLCA, Tally, Athena Impact Estimator, Bionova, eLCA) integrates EPD data into building models. These tools account for quantities, building energy interactions, and life cycle aggregation properly.

EPD platforms (EPD International’s search, IBU’s database) let you filter and compare EPDs within categories. They don’t do building-level assessment but help identify comparable declarations.

Spreadsheet templates can support comparison if you systematically extract data, align modules, and apply consistent scenarios. Manual but transparent.

Good tools prevent common mistakes by enforcing compatibility checks and requiring building context.

When Manufacturers Compare Their Products to Competitors

Marketing comparison claims using EPDs require extra care:

Follow advertising standards. Claims must be truthful, substantiated, and not misleading. UK Advertising Standards Authority and similar bodies regulate environmental claims.

Ensure genuine comparability. Don’t compare your EPD to competitors’ EPDs following different PCRs or with different assumptions.

Disclose comparison basis. State clearly what you’re comparing, which modules, and under what scenarios.

Avoid selective presentation. Show complete impacts, not just favourable categories or modules.

Update regularly. Competitor EPDs may be revised or superseded. Ensure comparisons remain current.

Consider legal review. Comparative claims can face legal challenge if perceived as misleading.

Comparison is legitimate marketing, but it must be done properly to avoid greenwashing accusations or regulatory action.

What Green Building Schemes Say About Comparison

LEED, BREEAM, and other schemes have specific requirements:

LEED awards points for using products with EPDs but doesn’t require product comparison. Points come from having EPDs, not from having the “best” EPD values.

BREEAM similarly credits EPD use without mandating comparison. The emphasis is transparency rather than performance thresholds.

Level(s) and other European frameworks may incorporate EPD data into performance assessment but with specific rules about how comparison and aggregation work.

Schemes recognise that proper comparison requires expertise and building context. They encourage EPD use while being cautious about simplistic comparison.

When You’re Not Sure Whether Comparison Is Valid

If compatibility is unclear:

Consult experts. LCA professionals or EPD consultants can assess whether comparison is methodologically sound.

Contact programme operators. They can clarify whether EPDs following their PCRs are comparable.

Review carefully. Read both EPDs completely, checking methodology sections for differences.

Be conservative. If significant doubts exist, present products separately rather than claiming direct comparability.

Invalid comparison is worse than no comparison. It misleads decision makers and undermines EPD credibility.

The Bigger Picture: EPDs as Information Tools

EPDs provide environmental transparency. Comparison is one use but not the only use. EPDs also:

  • Document product environmental profiles for records
  • Support corporate sustainability reporting
  • Enable supply chain environmental data tracking
  • Demonstrate compliance with regulations
  • Show environmental improvement over time
  • Inform product development decisions

Focusing excessively on comparison sometimes misses these broader benefits. Even when direct product comparison isn’t valid, EPDs provide valuable information about environmental performance.

Key Takeaways for Valid Comparison

Valid EPD comparison requires:

  1. Same product category and compatible PCRs
  2. Functional equivalence in the specific application
  3. Building context consideration for construction products
  4. Complete life cycle perspective (matching modules)
  5. Consistent scenarios and assumptions
  6. Multiple impact category assessment
  7. Transparent disclosure of comparison basis

Invalid comparison attempts include:

  • Comparing across different product categories
  • Using different PCRs or programme operators without checking compatibility
  • Ignoring functional differences
  • Cherry-picking favourable modules or impact categories
  • Comparing without building context
  • Using inconsistent scenarios

When done properly, EPD comparison supports informed decisions about product selection, procurement, and building design. When done improperly, it misleads and undermines the credibility that verification and standardisation create.

The complexity exists for good reason. Environmental performance is multi-dimensional, context-dependent, and affected by numerous factors. EPDs provide the data. Proper comparison methodology ensures that data gets used wisely rather than simplistically. Taking time to compare correctly produces insights worth having. Rushing to invalid comparison produces numbers not worth trusting.