Table of Contents
- How Manipulation Happens
- Detecting Manipulation
- ISO Requirements for Comparison
- Verification as Safeguard
- Intentional versus Unintentional Bias
- Self-LCA Credibility Issues
- PCRs Reduce Manipulation Opportunities
- Greenwashing Risks
- Protecting Against Manipulation
- Legitimate Variability versus Manipulation
- Professional Ethics
- Bottom Line
Yes. LCA involves numerous methodological choices that can be selected to favour particular outcomes. Legitimate uncertainty creates opportunities for manipulation through selective boundaries, favourable allocation rules, and cherry-picked impact categories.
How Manipulation Happens
Understanding manipulation techniques helps identify questionable studies and protects against misleading environmental claims.
Boundary Manipulation
System boundaries can be drawn to include favourable stages and exclude unfavourable ones.
A product with low manufacturing impacts but high raw material extraction impacts might use cradle-to-gate boundaries. This captures the good manufacturing performance while excluding upstream burdens.
Conversely, a product with problematic manufacturing but cleaner raw materials might use gate-to-gate boundaries, showing only the manufacturing stage without upstream context.
End-of-life exclusion benefits products with disposal problems. Packaging that creates landfill burdens looks better if the assessment stops at consumer use.
Allocation Choices
Multifunctional processes allow allocation choices that substantially affect product-specific results.
Economic allocation shifts impacts toward higher-value co-products. A refinery might allocate more impacts to diesel than petrol if diesel commands higher prices. Change the allocation basis and diesel looks cleaner.
Mass allocation favours lightweight high-value products. Precious metals from mining operations receive minimal allocated impacts when mass-based rules apply, despite driving the economic viability of mining.
System expansion versus allocation produces different results. The choice between approaches can be justified either way, creating flexibility for favourable outcomes.
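The swing between allocation bases can be sketched with a few lines of arithmetic. The figures below are purely illustrative assumptions (a hypothetical refinery splitting one shared burden between diesel and petrol), not real refinery data:

```python
# Illustrative only: hypothetical refinery co-product figures, not real data.
total_impact = 100.0  # kg CO2e attributed to the shared process

# Assumed co-product outputs: mass in kg, price in EUR/kg
products = {
    "diesel": {"mass": 60.0, "price": 1.40},
    "petrol": {"mass": 40.0, "price": 1.60},
}

def allocate(products, total, basis):
    """Split a shared impact across co-products by mass or economic value."""
    weights = {
        name: p["mass"] * (p["price"] if basis == "economic" else 1.0)
        for name, p in products.items()
    }
    total_weight = sum(weights.values())
    return {name: total * w / total_weight for name, w in weights.items()}

for basis in ("mass", "economic"):
    print(basis, allocate(products, total_impact, basis))
```

With these assumed numbers, diesel carries 60 kg CO2e under mass allocation but only about 56.8 kg under economic allocation — the same process, the same data, and a different headline figure depending solely on the chosen basis.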
Impact Category Selection
Assessing only favourable impact categories creates misleading impressions.
A product might have low carbon emissions but high water consumption. Carbon-only assessment highlights the positive while hiding the negative.
Material substitution creates category trade-offs. Bioplastics might reduce fossil resource depletion but increase land use and eutrophication. Showing resource benefits without land impacts tells an incomplete story.
Selective reporting of impact categories misleads even when the underlying LCA work is comprehensive. “Lower environmental impact” might reference one category while ignoring others where performance is worse.
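The single-category claim can be made concrete with hypothetical normalised scores (assumed values, lower is better, product B as the baseline):

```python
# Illustrative only: hypothetical normalised impact scores (lower is better).
product_a = {"climate": 0.6, "water": 1.8, "land_use": 1.4}
product_b = {"climate": 1.0, "water": 1.0, "land_use": 1.0}  # baseline

# The carbon-only claim: report just the category where A wins.
print("climate only, A better:", product_a["climate"] < product_b["climate"])

# The full picture: A is worse in two of three categories.
worse = [c for c in product_a if product_a[c] > product_b[c]]
print("categories where A is worse:", worse)
```

Both statements are computed from the same data; only the carbon-only framing supports a “lower environmental impact” claim.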
Functional Unit Gaming
Poorly chosen functional units can favour specific products.
Comparing “one widget” versus “one thingamajig” makes no sense if they provide different service levels. The functional unit should enable fair comparison based on equivalent function.
Duration assumptions affect comparisons. A durable product looks worse than a disposable alternative if the assessment period is shorter than the durable product’s lifetime.
Usage assumptions matter for products with use-phase impacts. Assuming infrequent use reduces use-phase burdens. Assuming intensive use increases them. The baseline affects which option appears better.
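The duration effect is easy to demonstrate with assumed figures. The sketch below compares a hypothetical durable product against a disposable one over different assessment periods, buying replacements as the disposable wears out:

```python
import math

# Illustrative only: hypothetical impacts per unit (kg CO2e) and lifetimes.
durable = {"impact_per_unit": 50.0, "lifetime_years": 10}
disposable = {"impact_per_unit": 8.0, "lifetime_years": 1}

def impact_over_period(product, years):
    """Total impact of delivering the service for `years`,
    purchasing replacement units as each one reaches end of life."""
    units_needed = math.ceil(years / product["lifetime_years"])
    return units_needed * product["impact_per_unit"]

for period in (1, 3, 10):
    print(period, "years:",
          "durable =", impact_over_period(durable, period),
          "disposable =", impact_over_period(disposable, period))
```

Over a one-year assessment period the disposable product appears far cleaner (8 vs 50 kg CO2e); over the durable product’s full ten-year lifetime the ranking reverses (50 vs 80 kg CO2e). Choosing the short period is exactly the gaming the text describes.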
Data Selection Bias
Choosing data that favours your product creates biased results.
Using your own efficient facility’s data while comparing against industry average data for competitors makes you look better. Fair comparison requires either both site-specific or both industry average data.
Optimistic assumptions about recycling rates, energy efficiency, or manufacturing yields improve results. Pessimistic assumptions about alternatives worsen their performance.
Geographic mismatches benefit certain products. Comparing European production (cleaner grid) against Asian production (fossil-heavy grid) might reflect reality or might be cherry-picked to favour European manufacturers.
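The site-specific versus industry-average mismatch can be shown with four assumed numbers (all hypothetical, chosen only to illustrate the pairing problem):

```python
# Illustrative only: hypothetical manufacturing impacts (kg CO2e per unit).
your_site = 4.0           # your most efficient facility
your_industry_avg = 6.0   # your sector's average
rival_site = 3.5          # rival's most efficient facility
rival_industry_avg = 5.0  # rival sector's average

# Cherry-picked pairing: your best site vs the rival's industry average.
print("cherry-picked, you win:", your_site < rival_industry_avg)

# Fair pairings (like-for-like data) tell the opposite story.
print("site vs site, you win:", your_site < rival_site)
print("average vs average, you win:", your_industry_avg < rival_industry_avg)
```

Only the mismatched pairing supports the “we’re cleaner” claim; both like-for-like comparisons reverse it, which is why ISO-compliant comparison requires equivalent data quality on both sides.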
Temporal Mismatches
Using current data for your product but outdated data for alternatives creates artificial advantages.
Technology improves over time. Comparing your 2024 process against competitor data from 2018 might show improvement that reflects technological progress rather than genuine superiority.
Prospective assessment of your own product against historical assessment of alternatives stacks the deck. Either both should be prospective or both historical.
Detecting Manipulation
Several red flags suggest potentially manipulated studies:
Lack of Transparency
Opaque methodology documentation hides manipulation. If the study doesn’t clearly state boundaries, allocation methods, data sources, and assumptions, question the results.
ISO 14044 requires comprehensive documentation. Absence of this detail suggests the study might not withstand scrutiny.
Unusual Methodological Choices
Methodology deviating from standard practice without justification warrants scepticism.
Non-standard allocation when standard approaches exist might indicate favourable selection. Unusual impact category combinations might cherry-pick favourable metrics.
Narrow boundaries excluding significant life cycle stages should trigger questions about what’s hidden beyond boundaries.
Extreme Results
Results showing dramatically better performance than expected need explanation. Genuine innovation creates improvements, but 50% better across all categories than established alternatives suggests something might be amiss.
Implausibly good results might reflect optimistic assumptions, favourable data selection, or methodological choices rather than actual performance.
Missing Verification
Unverified comparative assertions should be viewed sceptically. Independent review catches manipulation and questionable methodology.
ISO 14044 requires critical review for publicly disclosed comparative studies. Absence of verification for comparative claims indicates non-compliance or avoidance of scrutiny.
ISO Requirements for Comparison
ISO 14044 establishes requirements specifically to prevent manipulation in comparative assertions:
Equivalent functional units – Compared products must provide equivalent function.
Equivalent system boundaries – Boundaries must be equivalent across compared systems.
Same methodology – Impact assessment methods and allocation procedures should be consistent.
Current data – Data currency should be equivalent across compared products.
Critical review – Third-party verification is required for publicly disclosed comparative assertions.
These requirements reduce manipulation opportunities but compliance isn’t universal.
Verification as Safeguard
Third-party verification substantially reduces manipulation risk.
Verifiers check methodology against ISO standards, question unjustified assumptions, examine data quality, and ensure consistent comparison methods.
Programme operator oversight adds another layer. EPD programmes check PCR compliance and methodological appropriateness before publication.
However, internal unverified studies face no external scrutiny. These might serve legitimate purposes but shouldn’t support public environmental claims without verification.
Intentional versus Unintentional Bias
Not all questionable methodology results from intentional manipulation.
Unintentional Bias
Inexperienced practitioners make poor methodological choices without manipulation intent. Legitimate uncertainty about appropriate boundaries, allocation rules, or impact categories can lead to suboptimal choices.
Confirmation bias affects interpretation. Practitioners might unconsciously emphasise favourable findings while downplaying unfavourable results.
Resource constraints force compromises. Limited budgets might prevent comprehensive data collection, forcing reliance on estimates that happen to favour certain outcomes.
Intentional Manipulation
Deliberate manipulation involves selecting methodology known to produce favourable results.
This might include testing multiple allocation approaches and reporting only the most favourable, running sensitivity analyses but publishing only the central case, or selecting impact categories that highlight strengths while omitting weaknesses.
Distinguishing intent proves difficult. Questionable methodology might reflect ignorance or manipulation. Either way, the results are unreliable.
Self-LCA Credibility Issues
Manufacturers assessing their own products face credibility challenges even without manipulation.
The conflict of interest is apparent. An organisation has incentive to show favourable results. Even rigorous honest assessment faces scepticism because manipulation is possible.
This explains requirements for third-party verification. Independent review provides credibility that self-assessment lacks.
PCRs Reduce Manipulation Opportunities
Product Category Rules standardise methodology for specific product types. Following PCRs constrains methodological choices that enable manipulation.
PCRs specify:
- Required system boundaries
- Mandatory impact categories
- Allocation procedures
- Data quality requirements
- Cut-off criteria
These specifications remove discretion that enables favourable selection. All products of a type follow the same rules, enabling fair comparison.
Greenwashing Risks
LCA manipulation serves greenwashing when it creates misleading environmental claims.
Selective reporting makes products appear better than they are. Vague claims like “lower environmental impact” might reference one favourable category while ignoring multiple unfavourable categories.
Comparing incomparable products through functional unit manipulation creates false superiority claims. Choosing comparison points that favour your product isn’t legitimate comparison.
Absence of verification enables greenwashing. Public environmental claims should rest on verified LCA. Unverified studies supporting public claims warrant scepticism.
Protecting Against Manipulation
Several practices reduce manipulation risk:
Demand transparency – Require full methodology documentation. Opacity enables manipulation.
Require verification – Independent review catches manipulation and poor methodology.
Check PCR compliance – Product Category Rules constrain methodological flexibility.
Question extraordinary claims – Dramatically better performance needs explanation.
Seek multiple impact categories – Comprehensive assessment reveals trade-offs that single-category assessment hides.
Compare methodology – When comparing products, verify methodological consistency.
Consider source credibility – Manufacturer self-assessment carries less weight than independent study.
Legitimate Variability versus Manipulation
Not all study differences indicate manipulation. Legitimate factors create variability:
Different research questions justify different boundaries. A manufacturing optimisation study appropriately uses narrower boundaries than a product comparison study.
Data availability constraints force reasonable compromises. Using available data beats conducting no assessment.
Methodological development means earlier studies might use outdated approaches. This reflects scientific progress rather than manipulation.
Genuine uncertainty means different reasonable assumptions produce different results. This is legitimate variability, not manipulation.
The distinction is intent and transparency. Legitimate variability is acknowledged and documented. Manipulation is hidden and presents biased results as comprehensive truth.
Professional Ethics
LCA practitioners have ethical obligations to conduct and report honestly.
Professional bodies and certification programmes establish ethical standards. Violating these standards through manipulation damages professional reputation and undermines LCA credibility generally.
Client pressure to produce favourable results creates ethical dilemmas. Professional practitioners should refuse to manipulate methodology even when clients request it. Reputation and professional integrity matter more than individual projects.
Bottom Line
LCA can be manipulated through methodological choices. Transparency, verification, and standardisation reduce but don’t eliminate manipulation risks.
Users of LCA results should evaluate methodology, demand verification for comparative claims, and treat unverified manufacturer self-assessment sceptically.
Practitioners should conduct assessments honestly, document methodology thoroughly, and refuse to manipulate results even under client pressure.
The methodology’s value depends on trustworthy application. Manipulation undermines LCA credibility and damages environmental decision-making.