Data Sources and Methods
EPA ensures the scientific integrity of the climate change indicators through a rigorous development process, as described below. For every indicator, EPA also develops technical documentation that describes the data sources and analytical methods used. Access the technical documentation on each indicator’s webpage.
On this page:
EPA's Approach to Developing Indicators
- How does EPA identify candidate indicators?
- What criteria does EPA use to evaluate candidate indicators?
- How does EPA screen candidate indicators?
- How does EPA develop indicators?
- How are EPA’s indicators reviewed?
- Are EPA’s existing indicators ever revised?
How does EPA identify candidate indicators?
EPA identifies candidate indicators through coordinated outreach, stakeholder engagement, and review of the latest scientific literature. EPA then screens and selects each indicator using a standard set of criteria that consider data availability and quality, transparency of the analytical methods, and the indicator’s relevance to climate change. This process ensures that all selected indicators are evaluated consistently, are based on credible data, and can be transparently documented.
Key considerations for new indicators include:
- Filling gaps in the existing indicator set to make it more comprehensive.
- Newly available, or in some cases improved, data sources that have been peer-reviewed and are publicly available from government agencies, academic institutions, and other organizations.
- Analytical development of indicators through existing partnerships and collaborative efforts within and external to EPA (e.g., development of streamflow metrics in partnership with the U.S. Geological Survey for the benefit of the partner agencies as well as key programs within EPA’s Office of Water).
- Indicators that communicate key aspects of climate change and are understandable to various audiences, including the general public.
What criteria does EPA use to evaluate candidate indicators?
EPA chooses indicators that meet a set of 10 criteria that consider data quality, transparency of analytical methods, and relevance to climate change. Based on the availability of these data, some indicators present a single measure or variable while others have multiple measures, reflecting different data sources or different ways to group, characterize, or zoom in on the data.
The criteria EPA uses to select indicators are:
- Trends over time: Data are available to show trends over time. Ideally, these data will be long-term, covering enough years to support climatically relevant conclusions. Data collection must be comparable across time and space. Indicator trends have appropriate resolution for the data type.
- Actual observations: The data consist of actual measurements (observations) or derivations thereof. These measurements are representative of the target population.
- Broad geographic coverage: Indicator data are national in scale or have national significance. The spatial scale is adequately supported with data that are representative of the region/area.
- Peer-reviewed data (peer-review status of indicator and quality of underlying source data): Indicator and underlying data are sound. The data are credible, reliable, and have been peer-reviewed and published.
- Uncertainty: Information on sources of uncertainty is available. Variability and limitations of the indicator are understood and have been evaluated.
- Usefulness: The indicator informs issues of national importance and addresses issues important to human or natural systems. It complements existing indicators.
- Connection to climate change: The relationship between the indicator and climate change is supported by published, peer-reviewed science and data. A climate signal is evident among stressors, even if the indicator itself does not yet show a climate signal. The relationship to climate change is easily explained.
- Transparency, reproducibility, and objectivity: The data and analysis are scientifically objective, and methods are transparent. Biases, if known, are documented, minimal, or judged to be reasonable.
- Understandability by the public: The data provide a straightforward depiction of observations and are understandable to the average reader.
- Feasibility to construct: The indicator can be constructed or reproduced within a reasonable timeframe. Data sources allow routine updates of the indicator.
How does EPA screen candidate indicators?
EPA researches, screens, and selects indicators through an objective, transparent process that considers the scientific integrity of each candidate indicator, the availability of data, and the value of including the candidate indicator. EPA conducts the screening process in two stages. As an initial screen, each candidate indicator is evaluated against five of the 10 criteria to assess whether it is reasonable to evaluate and screen the indicator further. These “Tier 1” criteria are peer-reviewed data, feasibility to construct, usefulness, understandability by the public, and connection to climate change. Indicators that reasonably meet these criteria are researched further; indicators that do not are eliminated from consideration. Some of the candidate indicators ruled out at this stage could become viable indicators in the future (e.g., indicators that do not yet have published data or need further investigation into methods).
Indicators deemed appropriate for additional screening are assessed against the “Tier 2” criteria: transparency, reproducibility, and objectivity; broad geographic coverage; actual observations; trends over time; and uncertainty. EPA then weighs the findings against the complete set of 10 criteria to decide whether each indicator should move forward.
The distinction between Tier 1 and Tier 2 criteria is not intended to suggest that one group is necessarily more important than the other. Rather, EPA determined that a reasonable approach was to consider which criteria must be met before proceeding further and to narrow the list of candidate indicators before applying the remaining criteria.
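The two-stage screen described above can be sketched as a simple filter. This is an illustrative sketch only: the criterion names follow EPA's published list, but the `meets_criterion` judgment function is hypothetical, standing in for the expert evaluation EPA actually performs.

```python
# Tier 1 ("initial screen") criteria: failing any of these eliminates
# a candidate before further research.
TIER_1 = [
    "peer-reviewed data",
    "feasibility to construct",
    "usefulness",
    "understandability by the public",
    "connection to climate change",
]

# Tier 2 criteria, applied to candidates that survive the initial screen.
TIER_2 = [
    "transparency, reproducibility, and objectivity",
    "broad geographic coverage",
    "actual observations",
    "trends over time",
    "uncertainty",
]


def screen(candidates, meets_criterion):
    """Return the candidates that reasonably meet all 10 criteria.

    `meets_criterion(candidate, criterion) -> bool` is a hypothetical
    stand-in for the expert judgment applied at each stage.
    """
    # Stage 1: eliminate candidates that fail any Tier 1 criterion.
    tier_1_pass = [
        c for c in candidates
        if all(meets_criterion(c, crit) for crit in TIER_1)
    ]
    # Stage 2: assess survivors against the remaining Tier 2 criteria.
    return [
        c for c in tier_1_pass
        if all(meets_criterion(c, crit) for crit in TIER_2)
    ]
```

The point of the two-stage structure is efficiency, not priority: cheap-to-judge criteria are applied first so that only a narrowed list receives the full assessment.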
How does EPA develop indicators?
Based on the results of the screening process, the most promising indicators are developed into proposed indicator summaries. EPA consults the published literature, subject matter experts, and online databases to obtain data for each of these indicators. Upon acquiring sound data and technical documentation, EPA prepares a set of possible graphics for each indicator, along with a summary table that describes the proposed metric(s), data sources, limitations, and other relevant information. Summary information is reviewed by EPA technical staff, and then the indicator concepts that meet the screening criteria are formally approved to begin development.
Graphics, summary text, and technical documentation for all proposed new or revised indicators are developed in accordance with the format established for the original 24 indicators in the 2010 indicator report. An additional priority for development is to make sure that each indicator communicates effectively to a non-technical audience without misrepresenting the underlying data and source(s) of information. Regional features (e.g., Community Connections) are developed in the same manner.
How are EPA's indicators reviewed?
The complete indicator packages (graphics, summary text, and technical documentation) undergo EPA internal review, data provider/collaborator review, and an independent peer review.
Indicators are reviewed at various stages of development by EPA technical staff and by multiple levels of management within the Agency. Organizations and individuals who collected and/or compiled the data (e.g., the National Oceanic and Atmospheric Administration and the U.S. Geological Survey) review the relevant indicators. EPA also enlists technical experts to assist in reviewing new indicators or revisions.
Each new indicator also undergoes independent peer review by subject-matter experts external to EPA. EPA’s indicator compilation reports (i.e., the 2010, 2012, 2014, and 2016 print editions) have also gone through independent external peer review. EPA follows the procedures in EPA’s Peer Review Handbook (the most recent edition is the fourth, EPA/100/B-15/001) for reports that provide influential scientific information. A contractor manages the reviews under the direction of a designated EPA peer review leader, who prepares a peer-review plan, the scope of work for the review contract, and the charge for the reviewers. The peer review leader plays no role in producing the indicators or reports.
EPA’s peer-review process also includes a quality-control check by the peer review leader to ensure that the authors took sufficient action and provided an adequate response for every peer review and re-review comment.
Are EPA’s existing indicators ever revised?
Existing indicators are re-evaluated to ensure they are relevant, comprehensive, and sustainable. The process of re-evaluating indicators includes monitoring the availability of newer data, eliciting expert review, and assessing indicators in light of new science. EPA improves existing indicators by adding or replacing metrics or underlying data sources. These revisions involve obtaining new data sets and vetting their scientific validity. For example, when a new independent data set and analysis of trends in ocean heat content became available, EPA updated the Ocean Heat indicator to include the new data series, making the indicator more comprehensive.
Indicator-Specific Technical Documentation
EPA compiles technical documentation for every indicator to ensure that it is fully transparent — so readers can learn where the data come from, how each indicator was calculated, and how accurately each indicator represents the intended environmental condition. EPA uses a standard technical documentation format that includes 13 elements for each indicator:
- Indicator description
- Revision history
- Data sources
- Data availability
- Data collection (methods)
- Indicator derivation (calculation steps)
- Quality assurance and quality control
- Comparability over time and space
- Data limitations
- Sources of uncertainty (and quantitative estimates, if available)
- Sources of variability (and quantitative estimates, if available)
- Statistical/trend analysis (if any has been conducted)
- References
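The 13-element format above can be thought of as a fixed record structure. The following is an illustrative sketch only, not an EPA artifact: the field names are paraphrases of the list above, chosen here for readability.

```python
from dataclasses import dataclass, fields


@dataclass
class TechnicalDocumentation:
    """One record per indicator, mirroring EPA's 13-element format."""
    indicator_description: str
    revision_history: str
    data_sources: str
    data_availability: str
    data_collection: str             # methods
    indicator_derivation: str        # calculation steps
    qa_qc: str                       # quality assurance and quality control
    comparability: str               # over time and space
    data_limitations: str
    sources_of_uncertainty: str      # and quantitative estimates, if available
    sources_of_variability: str      # and quantitative estimates, if available
    statistical_trend_analysis: str  # if any has been conducted
    references: str


# The standard format has exactly 13 elements.
assert len(fields(TechnicalDocumentation)) == 13
```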
Readers can access technical documentation for any indicator through a link on each indicator’s webpage.