Autonomous Correction of Sensor Data Applied to Building Technologies Utilizing Statistical Processing Methods
Energy Informatics, 2012, Atlanta, GA
October 6, 2012
Charles C. Castello, Ph.D., Oak Ridge National Laboratory, Oak Ridge, TN 37831-6720, [email protected]
Joshua R. New, Ph.D., Oak Ridge National Laboratory, Oak Ridge, TN 37831-6720, [email protected]
• Energy consumption in the U.S. is a critical area of concern: residential and commercial buildings consume approximately 40% of total primary energy.
• Inefficient existing buildings are retrofitted with new and innovative technologies that help curb energy consumption:
– Standing seam metal roofs that exploit infrared-reflective paint pigments to boost solar reflectance
– Pella® triple-pane, low-emittance, argon-filled windows
– ClimateMaster®'s high-efficiency, water-to-air heat pump for space conditioning
– ClimateMaster®'s high-efficiency, water-to-water heat pump for hot water heating
– Fantech Energy Recovery Ventilator (ERV), which replaces the air inside a home with fresh air
Background
• There is substantial research on improving energy efficiency in commercial buildings and residential homes.
• This research raises several fundamental concerns relevant to sensors used to collect a wide variety of variables.
• Sensors are an imperative tool in analyzing these technologies and determining their impact.
– An example is ORNL's ZEBRAlliance Project, which consists of 4 homes:
• 1st home: 279 sensors
• 2nd home: 279 sensors
• 3rd home: 321 sensors
• 4th home: 339 sensors
ZEBRAlliance Project
(Photos of the four ZEBRAlliance homes: Home #1, Home #2, Home #3, Home #4.)
The majority of these sensors measure temperature, humidity, pressure, and energy.
• The ZEBRAlliance Project broke ground on September 26, 2008.
• It is a research project and multi-faceted educational campaign to convince consumers that energy-efficient homes can be:
– Real
– Desirable
– Affordable
• Located in the Crossroads at Wolf Creek subdivision in Oak Ridge, TN.
• Energy savings of approximately 55-60% compared with traditional new construction.
• Data is being collected over a span of three years.
• After the research period has ended, all four homes will be sold.
More on the ZEBRAlliance Project
• Most sensors in ORNL's ZEBRAlliance project have a 15-minute resolution, with approximately 80 sensors having a 1-minute resolution:
– 9,352 data points in an hour
– 224,448 in a day
– 1,571,136 in a week
– 81,699,072 in a year
• Many issues arise with this number of data points being collected in such a real-world experiment, specifically data corruption from:
– A sensor failing to produce data.
– Fouling of the sensor's interface due to the environment, which produces inaccurate data.
– A sensor's calibration being incorrect, producing inaccurate data.
– Data logger failure, which causes missing data.
Dealing with Large Amounts of Data
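The data-volume arithmetic above can be checked directly. This is a minimal sketch assuming the sensor counts from the ZEBRAlliance slide (1,218 total sensors across the four homes, ~80 at 1-minute resolution, the rest at 15-minute resolution) and a 52-week year, which matches the yearly figure quoted:

```python
# Reproduce the data-point counts quoted in the slide, under the stated
# assumptions: ~80 sensors at 1-minute resolution, the remainder at
# 15-minute resolution, and a 52-week year.
ONE_MIN_SENSORS = 80
ALL_SENSORS = 279 + 279 + 321 + 339            # 1,218 sensors across 4 homes
FIFTEEN_MIN_SENSORS = ALL_SENSORS - ONE_MIN_SENSORS

per_hour = ONE_MIN_SENSORS * 60 + FIFTEEN_MIN_SENSORS * 4   # 9,352
per_day = per_hour * 24                                     # 224,448
per_week = per_day * 7                                      # 1,571,136
per_year = per_week * 52                                    # 81,699,072

print(per_hour, per_day, per_week, per_year)
```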
• There are currently two approaches that are widely used for validation of data:
– Analytical redundancy
– Hardware redundancy
• Analytical redundancy uses data from multiple sensors to predict a sensor's value.
– An example is using a temperature and a humidity channel to predict heat flux.
– As the number of sensors increases, the complexity of the model increases.
• Hardware redundancy is not always possible due to the need for an increased amount of resources:
– Sensors
– Data acquisition
Current Data Validation Methods
• Sensor data validation techniques are needed that minimize the amount of required resources:
– Hardware
– Software
• Sensor data validation techniques are also required to be:
– Automated
– Able to handle large amounts of data
– Able to handle different types of data
– Able to correct missing and corrupt data
• Data collected through ORNL's ZEBRAlliance project has a maximum error rate of 14.59% (per month) for data logger and sensor failure (this does not include sensor fouling and calibration error).
Sensor Validation Techniques
• Data collected by sensors in residential buildings is not only used to determine the impact of energy efficiency technologies, but also for:
– Control
– Modeling
• As an example of the influence inaccurate data has on models, Nonhebel (1994) studied the use of inaccurate data to predict crop growth.
– Using inaccurate data caused deviations of up to 30% in simulated yields.
• Comparable studies have not been done on the influence inaccurate data has on building applications.
• Given the error rate on the previous slide, imagine a 14.59% deviation in cost or simple payback for investing in an energy efficiency technology for your home. This inaccuracy is substantial!
Effects of Inaccurate Data
• Statistical methods from Bo et al. (2009), originally used to predict wireless field strength:
– Least squares
– Maximum likelihood estimation
– Segmentation averaging
– Threshold based
• These were modified to meet the needs of fault detection and sensor data prediction for building applications.
• Artificial gaps are introduced into the dataset by randomly removing portions of existing data to test the accuracy of the auto-correcting algorithms.
• This is accomplished by randomly generating training and testing subsets:
– The training set represents 70% of the original dataset.
– The testing set represents 30% of the original dataset.
Statistical Processing Methods
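The gap-generation step above can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and the fixed seed are assumptions for reproducibility:

```python
import random

def make_artificial_gaps(data, test_fraction=0.3, seed=42):
    """Randomly hold out a fraction of samples as 'missing' (the testing set);
    the remaining samples form the training set used to fit each model."""
    rng = random.Random(seed)
    n = len(data)
    test_idx = set(rng.sample(range(n), int(round(n * test_fraction))))
    training = {i: v for i, v in enumerate(data) if i not in test_idx}
    testing = {i: data[i] for i in sorted(test_idx)}
    return training, testing

# 70% of samples remain for training, 30% become artificial gaps for testing
train, test = make_artificial_gaps(list(range(100)))
print(len(train), len(test))  # 70 30
```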
• Each sensor is treated as an independent variable, and sensor values are predicted based upon a variable-sized window of observations.
• A prediction model is generated for each window of observations, and correction occurs if values are missing or corrupt:
– Interpolation
– Extrapolation
• An observation window of size w is used to predict the sensor's data value for each time-step within the observation window.
• The observation window moves forward by w time-steps (no overlap), and a prediction for each sample within the observation window is calculated.
• Actual vs. predicted values are compared using:
– Root-mean-square error (RMSE)
– Relative error
– Absolute error
Predict Missing Data
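A minimal sketch of the windowed scheme above, assuming a simple least-squares line fit per window; the paper evaluates several statistical models, so this only illustrates the non-overlapping window mechanics, with NaN marking missing data:

```python
import numpy as np

def window_predictions(y, w):
    """For each non-overlapping window of w observations, fit a least-squares
    line to the available (non-NaN) samples and predict every time-step in
    that window. A sketch of the windowed scheme, not the authors' code."""
    y = np.asarray(y, dtype=float)
    pred = np.empty_like(y)
    for start in range(0, len(y), w):
        idx = np.arange(start, min(start + w, len(y)))
        seg = y[idx]
        ok = ~np.isnan(seg)
        # Fit y = a*t + b on the non-missing samples of this window
        a, b = np.polyfit(idx[ok], seg[ok], 1)
        pred[idx] = a * idx + b
    return pred

y = [1.0, 2.0, np.nan, 4.0, 5.0, 6.0, np.nan, 8.0]
print(window_predictions(y, 4))  # fills both NaNs from each window's line fit
```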
• Suppose we have 18 samples of temperature data with a 15-minute resolution (4½ hours).
• However, a chunk of data is missing. How do we correct this?
– Generate a model based on the data that we have obtained.
– Interpolate based on that model for the missing data values.
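As an illustration of the interpolation step, a hypothetical 18-sample temperature series (the values are invented for this sketch) with a missing chunk can be filled by linear interpolation between the known neighbors; the paper's methods fit a statistical model over the window instead:

```python
import numpy as np

# Hypothetical 18-sample, 15-minute temperature series (°F); values invented.
t = np.arange(18)
temp = np.array([68.0, 68.2, 68.5, 68.9, 69.4, 70.0,
                 np.nan, np.nan, np.nan, np.nan,      # missing chunk (1 hour)
                 72.6, 72.9, 73.1, 73.2, 73.2, 73.1, 72.9, 72.6])

have = ~np.isnan(temp)
filled = temp.copy()
# Linear interpolation between the last value before the gap and the first after
filled[~have] = np.interp(t[~have], t[have], temp[have])
print(filled[6:10])  # the four reconstructed samples
```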
• Root-mean-square error (RMSE):
  RMSE = √( (1/w) · Σ r(s)² ), summed over the w time-steps s of the observation window
– where:
• r is the residual value between the actual and predicted data
• Relative error:
  e_rel(s) = |r(s)| / |y(s)|
– where:
• n is the current time-step
• s represents the first time-step of the observation window
• y(s) is the actual sensor data
• r(s) is the residual corresponding to y(s)
• Absolute error:
  e_abs(s) = |r(s)| / (y_max − y_min)
– where:
• y_max and y_min are the maximum and minimum sensor data values, respectively, of the sensor dataset Y
Performance Metrics
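The three metrics can be sketched as follows, assuming one plausible reading of the variable definitions on this slide: the relative error normalizes the residual by the actual sensor value, and the absolute error normalizes it by the range of the sensor dataset. The exact formulas in the paper may differ:

```python
import numpy as np

def rmse(residuals):
    """Root-mean-square error over a window of residuals r(s)."""
    r = np.asarray(residuals, dtype=float)
    return np.sqrt(np.mean(r ** 2))

def relative_error(r, y):
    """Residual normalized by the actual sensor value (assumed reading)."""
    return abs(r) / abs(y)

def absolute_error(r, y_max, y_min):
    """Residual normalized by the sensor dataset's range (assumed reading)."""
    return abs(r) / (y_max - y_min)

print(rmse([3.0, -4.0]))                # sqrt((9 + 16) / 2)
print(relative_error(0.5, 70.0))        # 0.5 °F residual at 70 °F
print(absolute_error(0.5, 90.0, 40.0))  # 0.5 °F residual over a 50 °F range
```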
• Taken from ORNL's ZEBRAlliance project, specifically:
– Temperature (°F): "Z09_T_ERV_IN_Avg", temperature of the ERV intake from outside
– Humidity (%RH): "Z09_RH_ERVin_Avg", humidity of the ERV intake from outside
– Energy usage (Wh): "A01_WH_fridge_tot", energy of the refrigerator
• Data was collected using Campbell Scientific's CR1000 data logger.
• There are four homes in the ZEBRAlliance project.
– Data was taken from the 2nd home
– During the 2010 calendar year (N=35,040)
• Technologies in home #2:
– Advanced framing
– High-efficiency fluorescent lighting
– Energy Star appliances
– Water-to-air heat pumps (WAHP)
– Water-to-water heat pumps (WWHP)
• Four statistical processing methods are used to validate temperature, humidity, and energy data in residential buildings for autonomous detection and correction of missing or corrupt sensor data:
– Least squares
– Maximum likelihood estimation
– Segmentation averaging
– Threshold based
• Independent data validation is accomplished using observation windows (i.e., subsets of samples) with w observations to build a model of the data.
• Data validation and correction occur for each successive observation window within the sensor dataset using interpolation and/or extrapolation for missing and corrupt data.
Summary
• Results
– The threshold-based technique performed best with:
• Temperature (c=2)
• Humidity (c=2)
• Energy data (c=1)
– It is anticipated that temperature, relative humidity, and energy data would follow similar patterns in other buildings, but additional studies are needed to confirm the degree to which these results generalize across other buildings.
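One plausible reading of the threshold-based technique with parameter c (an assumption for illustration; the paper's exact rule may differ) flags a sample as corrupt when it deviates from the observation window's mean by more than c standard deviations:

```python
import numpy as np

def threshold_flags(window, c):
    """Flag samples deviating from the window mean by more than c standard
    deviations. One plausible reading of 'threshold based' with parameter c;
    not necessarily the paper's exact rule."""
    w = np.asarray(window, dtype=float)
    mu, sigma = w.mean(), w.std()
    return np.abs(w - mu) > c * sigma

# A hypothetical temperature window with one corrupt spike
data = [70.1, 70.3, 70.2, 70.4, 95.0, 70.2, 70.3, 70.1]
print(threshold_flags(data, 2))  # only the spike is flagged
```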
• Other Future Research Work
– Studying methods other than statistical ones, such as:
• Filtering
• Machine learning
– Other data types will also be investigated:
• Heat flux
• Airflow
• Liquid flow