Comparability of Central Baltic Lake GIG results on macrophyte and phytoplankton composition
Marcel van den Berg / Centre for Water Management, The Netherlands
Brussels, 5 March

Transcript
Page 1

Brussels, 5 March

Comparability of Central Baltic Lake GIG results on macrophyte and phytoplankton composition

The Netherlands

Marcel van den Berg / Centre for Water Management

Page 2

CB Lake GIG (ex DK) has agreed that their results are comparable (enough), but the EC does not agree. The EC has proposed a comparability criterion for option 3 intercalibration that is not consistent with option 2 results.

Situation

Page 3

1. The CB GIG has clear criteria for comparability, consistent with those agreed in option 2 intercalibration (across GIGs: rivers/lakes/coastal).

2. Assessment of reference sites yields c. 70% classified as ‘high’ for sites within a MS's own territory and c. 40% (20-90%) across the GIG (some incomparability is thus not only inevitable but needed in order to comply with the normative definitions).

3. The averaged opinion of all MSs in the exercise shows an acceptable relationship with pressure indicators, given all limitations.

4. Comparison is made not at site level but at lake level (= waterbody level).

5. The remaining mismatches between MSs are due not to the size of the GIG but to the definitions of the types.

6. The level of confidence of the weighted-average comparability indicator has been determined and is fairly acceptable.

Why do we think that the results are acceptable?

Page 4

In some cases the data are too incomplete to fully apply a method; the comparison was improved by testing the full national method against the incomplete method (DE and BE).

Small differences in EQRs may result in different classes, specifically when the EQR values are close to the class boundaries (by definition, the misclassification rate for sites exactly on a boundary is 50%).

The typology is not ideally suited, and in particular does not account for lake size; e.g. there is almost no overlap in data availability between the largest lakes in Belgium and the smallest lakes in other MSs.

Why do we think that the results are acceptable? - Constraints
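The boundary effect mentioned above can be illustrated with a small simulation (a minimal sketch, not GIG code; the boundary value, noise level and function name are my own assumptions):

```python
import random

def misclassification_rate(true_eqr, boundary=0.6, noise_sd=0.05, n=100_000, seed=1):
    """Fraction of noisy observations falling on the wrong side of a class
    boundary, assuming Gaussian measurement noise around the true EQR."""
    rng = random.Random(seed)
    true_side = true_eqr >= boundary
    wrong = sum(
        ((true_eqr + rng.gauss(0, noise_sd)) >= boundary) != true_side
        for _ in range(n)
    )
    return wrong / n

# A site exactly on the boundary is misclassified about half the time...
print(misclassification_rate(0.60))   # ~0.5
# ...while a site well inside a class is almost never misclassified.
print(misclassification_rate(0.75))   # ~0.0
```

This is why a raw count of class mismatches penalises sites near boundaries even when the underlying EQRs agree closely.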

Page 5

Use of a weighted average of misclassification of 0.25 quality class (= consistent with 0.05 EQR units).

Use of a frequency of misclassification of ca. 35% agreement (= 60% with a small stretch). This criterion is comparable with the statistical noise accepted in option 2 intercalibration as expressed in R2 (option 2 agreed on R2 = 0.5; this criterion is not always achieved, but the results are included in decisions for good reasons).

Use of correlation coefficients of one MS against the average (significance P < 0.001).

CB Lake GIG has applied criteria consistent with other intercalibration results (e.g. option 2 rivers)

Page 6

CB Lake GIG has applied criteria consistent with other intercalibration results

| Waterbody | MS | Class (MS1) | Class (MS2) | Absolute class difference | Weighted class difference (MS1 - MS2) |
|---|---|---|---|---|---|
| 1 | MS1 | M | B | 2 | +2 |
| 2 | MS2 | G | H | 1 | -1 |
| 3 | MS1 | P | P | 0 | 0 |
| 4 | MS2 | H | H | 0 | 0 |
| Total | | | | 3/4 = 0.75 (EC, criterion 0.5: not comparable) | 1/4 = 0.25 (CB Lake GIG, criterion 0.25: comparable*; all option 2, criterion 0.25: comparable**) |
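The arithmetic behind the two measures in the table can be sketched as follows (a minimal illustration; the class labels and the 0.5 / 0.25 thresholds come from the slides, the helper names are mine):

```python
# Status classes ordered from Bad to High, mapped to integer scores.
CLASS_SCORE = {"B": 0, "P": 1, "M": 2, "G": 3, "H": 4}

def class_differences(pairs):
    """Signed class differences (MS1 score minus MS2 score) per waterbody."""
    return [CLASS_SCORE[a] - CLASS_SCORE[b] for a, b in pairs]

def absolute_average(pairs):
    """EC-style measure: average of absolute class differences."""
    diffs = class_differences(pairs)
    return sum(abs(d) for d in diffs) / len(diffs)

def weighted_average(pairs):
    """CB Lake GIG measure: signed differences, so opposite errors cancel."""
    diffs = class_differences(pairs)
    return sum(diffs) / len(diffs)

# Example from the table: (MS1 class, MS2 class) for four waterbodies.
pairs = [("M", "B"), ("G", "H"), ("P", "P"), ("H", "H")]
print(absolute_average(pairs))   # 0.75 -> above the EC criterion of 0.5
print(weighted_average(pairs))   # 0.25 -> meets the GIG criterion of 0.25
```

The same four waterbodies are thus "not comparable" under the absolute measure but "comparable" under the weighted one, which is exactly the disagreement the slide illustrates.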

Page 7

CB Lake GIG has applied criteria consistent with other intercalibration results

| Waterbody | MS | Class (MS1) | Class (MS2) | Absolute class difference | Weighted class difference (MS1 - MS2) |
|---|---|---|---|---|---|
| 1 | MS1 | H | H | 0 | 0 |
| 2 | MS2 | G | G | 0 | 0 |
| 3 | MS1 | M | P | 1 | +1 |
| 4 | MS2 | P | B | 1 | +1 |
| Total | | | | 2/4 = 0.5 (EC, criterion 0.5: comparable) | 2/4 = 0.5 (CB Lake GIG, criterion 0.25: not comparable; all option 2, criterion 0.25: not comparable) |

Page 8

Ultimate test: transpose the results of option 3 to option 2. Example: macrophytes, type LCB2, G/M boundary.

[Figure: G/M boundary positions on the common-metric EQR scale (0 to 0.7) for UK, EE, LV, NL, BE and DE.]

Page 9

From option 3 to option 2: what about R2 for MS methods vs the common metric?

| MS | Lakes, macrophytes | Rivers, macroinvertebrates |
|---|---|---|
| UK | 0.46 | 0.73 |
| BE | 0.58 | 0.72 |
| NL | 0.75 | 0.29 |
| LV or LT | 0.83 | 0.37 |
| EE or PL | 0.74 | 0.68 |
| DE | 0.23 | 0.52 |

Considering misclassifications, the results are acceptable.

Page 10

Reference sites are predicted correctly as ‘high status’ for 70% of the sites within a MS's own territory, but this percentage of ‘high’ classifications is on average c. 40% (15-90%) when a MS applies its method outside its territory.

Reference sites

Page 11

Relation between the averaged normalised EQRs of MSs vs pressure indicators, and the position of reference sites.

[Figure: scatter plot of average normalised EQR (AVG, 0 to 1) against mean chlorophyll-a (Chl_mn_v, log scale), with reference sites (REF_Clean_list) marked and a fit line for all data; R Sq Linear = 0.252.]

Page 12

Making the GIG smaller is not necessarily a good solution…

[Figure: correlation between MS assessments (0 to 1) plotted against the distance between MSs (0 to 2000); fit line y = -2E-05x + 0.4511, R2 = 0.0019.]

Page 13

Confidence of the Weighted Average

Page 14

The CB Lake GIG work is comparable enough; the remaining absolute disagreement is due to typological limitations and to the fact that MS methods are best suited to their own territory.

Improvement for macrophytes can be achieved by agreement at the European scale on common indicators, e.g. maximum colonised depth.

Improvement for phytoplankton can be achieved by agreeing on combination rules, and by using more sites in the comparison. BUT: it will be proposed to the GIG to exclude the results for phytoplankton composition, because the combination rules used by MSs are not clear.

Conclusion

Page 15

The comparability paper has to be rewritten slightly (but crucially) and has to be extended with more criteria for both the credibility and the acceptability of intercalibration results in the decision.

Way forward / Recommendation

Page 16

Criteria for acceptability and credibility of intercalibration results **Example**

| GIG | Ref values compared | Reference values harmonised | Reference selection criteria agreed | Relation with pressure demonstrated | Average level of agreement on G/M | Absolute level of agreement of classification |
|---|---|---|---|---|---|---|
| CB Lake GIG | + | + | + | + | ++ | +/- |
| Coastal chlorophyll-a | + | +/- | - | -? | + | +/- or n.a. |
| River phytobenthos | ++ | - | +/- | + | ++ | +/- |