Distributed Inference and Query Processing for RFID Tracking and Monitoring
Zhao Cao*, Charles Sutton+, Yanlei Diao*, Prashant Shenoy*
*University of Massachusetts, Amherst    +University of Edinburgh
Feb 24, 2016
Applications of RFID Technology
[Figure: tagged items scanned by RFID readers]

RFID Deployment on a Global Scale

A single item's tag is read at successive points in the supply chain:
• Tag id: 01.001298.6EF.0A, Time: 2008-01-12 14:30:00, Manufacturer: X Ltd., Expiration date: Oct 2011
• Tag id: 01.001298.6EF.0A, Reader id: 3478, Time: 2008-01-15 06:10:00
• Tag id: 01.001298.6EF.0A, Reader id: 5140, Time: 2008-01-21 08:15:00
• Tag id: 01.001298.6EF.0A, Reader id: 6647, Time: 2008-01-30 15:00:00
• Tag id: 01.001298.6EF.0A, Reader id: 7990, Time: 2008-02-04 09:10:00
• Tag id: 01.001298.6EF.0A, Reader id: 5140, Time: 2008-02-10 12:40:00
Tracking and Monitoring Queries
Path Queries:
• List the path taken by an item through the supply chain.
• Report if a pallet has deviated from its intended path.

Containment Queries:
• Alert if a flammable item is not packed in a fireproof case.
• Verify that food containing peanuts is never exposed to other food cases.

Hybrid Queries:
• For any frozen food placed outside a cooling box, alert if it has been exposed to room temperature for 6 hours.

Answering these queries requires object locations and history, containment among items, cases, and pallets, and sensor data.
Challenges in RFID Data Stream Processing
Q1: For any frozen food placed outside a cooling box, raise an alert if it has been exposed to room temperature for 6 hours.
Sensor Stream: (time, location, temperature)
RFID Stream: (time, tag_id, reader_id)

1. RFID data streams are not directly queriable: they carry no location or containment information.
2. RFID data is incomplete and noisy: readings can be missing, and readers in adjacent zones produce overlapped reads.

[Figure: a shelf layout with items 1-6 at locations D, E, F, illustrating a missing read of item 4 and overlapped reads across zones]
3. Scale inference and query processing to numerous sites and millions of objects.
A Scalable, Distributed RFID Stream Processing System
Raw RFID Stream (time, tag_id, reader_id)
  → Distributed Location & Containment Inference
  → Queriable RFID Stream (time, tag_id, location, container)
  → Distributed Query Processing
  → Monitoring result (time, tag_id, query result)
I. Location and Containment Inference – Intuition
Containment Inference: co-location history.

[Figure: items 1-6 packed into cases 1 and 2, observed at reader locations A, then B/C, then D/E/C, then F/E/D over epochs t=1 to t=4]

Example: item 5 is contained in case 2, item 6 is contained in case 2, and case 2 is in location C at t=3.

Location Inference: smoothing over containment.

Iterative procedure: location and containment estimates refine each other.

Containment Changes: change point detection, e.g., detecting that the containment between case 1 and item 4 has changed.
(1) Our Probabilistic Graphical Model
[Figure: graphical model over T epochs and C containers; hidden container location $l_c$ and object location $l_o$ generate, via the sensor model, the per-reader RFID reading variables $x_c$ and $y_o$; containment is a 0/1 edge between hidden variables]

Hidden variables: true object and container locations.
Evidence variables: RFID readings.
Independence assumptions:
• Independence among containers
• Independence over time
Containment: edges between hidden variables.

RFID sensor model, parameterized by read rate and overlap rate (read rates are sampled and updated periodically):

$$p(x_{trc} \mid l_{tc}) = \begin{cases} \pi(r, l_{tc}) & \text{if } x_{trc} = 1 \text{ (tag read)} \\ 1 - \pi(r, l_{tc}) & \text{if } x_{trc} = 0 \text{ (otherwise)} \end{cases}$$

Joint probability:

$$p(l, x, y) = \prod_{t=1}^{T} \prod_{c=1}^{C} p(l_{tc}) \prod_{r \in R} p(x_{trc} \mid l_{tc}) \prod_{o : (o,c) \in C} p(y_{tro} \mid l_{to})$$
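The sensor model can be sketched as a small function. This is a hypothetical illustration: the zone-adjacency map, the rates, and the noise floor below are assumed values, not taken from the slides.

```python
# Hypothetical sketch of the RFID sensor model p(x_trc | l_tc).
# pi(r, l) is the probability that reader r detects a tag whose true
# location is l: the read rate in r's own zone, the overlap rate in an
# adjacent zone, and a tiny noise floor elsewhere (all assumed values).

READ_RATE = 0.8     # assumed main read rate
OVERLAP_RATE = 0.5  # assumed overlap rate for neighboring zones
NOISE = 1e-6        # assumed false-positive floor for distant zones

def pi(reader_zone, true_loc, adjacency):
    """Detection probability of a tag at true_loc by the reader in reader_zone."""
    if true_loc == reader_zone:
        return READ_RATE
    if true_loc in adjacency.get(reader_zone, ()):
        return OVERLAP_RATE
    return NOISE

def sensor_prob(x, reader_zone, true_loc, adjacency):
    """p(x | l): Bernoulli likelihood of observing x (1 = tag read, 0 = not read)."""
    p = pi(reader_zone, true_loc, adjacency)
    return p if x == 1 else 1.0 - p
```

Overlapped reads fall out naturally: a tag on shelf E has probability 0.5 of also being reported by the adjacent shelf D reader.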
(2) Location and Containment Inference using EM

An iterative algorithm (RFINFER) in the EM framework:
• E-Step: compute the posterior of each container's location, given the current guess about the containment relations.
• M-Step (customized): choose the best containment relation by maximizing the log likelihood as a function of the containment C:

$$L(C) = \sum_{t=1}^{T} \sum_{c=1}^{C} \log \sum_{l_{tc}} p(l_{tc}) \prod_{r \in R} p(x_{trc} \mid l_{tc}) \prod_{o : (o,c) \in C} p(y_{tro} \mid l_{to})$$

Iterate until the containment relations no longer change. The final posterior values are used to determine the locations of containers and objects.

Theorem 1. The RFINFER algorithm converges, and the resulting values are a local maximum of the likelihood defined in Eq. (3).
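As a toy illustration of the M-step's intuition (not the authors' implementation), the sketch below scores each (object, case) pair by co-location strength, here simplified to the number of epochs in which both were seen by a common reader, and assigns each object to the strongest case. The data structures are assumptions for the example.

```python
def colocation_strength(obj_reads, case_reads):
    """Number of epochs in which the object and the case share a reader.

    obj_reads / case_reads map epoch -> set of reader ids that saw the tag.
    """
    return sum(1 for t, readers in obj_reads.items()
               if readers & case_reads.get(t, set()))

def infer_containment(objects, cases):
    """Toy M-step: assign each object to the case with the strongest
    co-location history. objects/cases map id -> {epoch: set of readers}."""
    return {o: max(cases, key=lambda c: colocation_strength(reads, cases[c]))
            for o, reads in objects.items()}
```

In the full algorithm this scoring is weighted by the E-step's location posteriors rather than raw read sets, which is what lets it tolerate missing and overlapped reads.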
(3) Change Point Detection -- Intuition
A statistical approach based on hypothesis testing:
• Null hypothesis: no containment change in [0, T].
• Alternative hypothesis: containment change at some time t', 0 ≤ t' ≤ T.

[Figure: items and cases moving through reader locations A, B/C, D/E/C, F/E/D over epochs t=1 to t=4, as on the intuition slide]

The test statistic compares the best split of the history against a single stable containment:

$$\Delta_o(T) = \max_{t' \in [0, T]} \left( L(C_{0:t'}) + L(C_{t':T}) \right) - L(C_{0:T})$$

• If Δ exceeds a threshold δ, report a change; otherwise, no change.
• δ is obtained offline by sampling hypothetical observation sequences from the model with stable containment (e.g., using the maximum likelihood).
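The split-versus-stable comparison can be sketched with a toy Bernoulli likelihood standing in for the containment likelihood L(C); the Bernoulli stand-in is an assumption made purely for illustration.

```python
import math

def bern_loglik(seq):
    """Max log likelihood of a 0/1 sequence under a single Bernoulli rate
    (a toy stand-in for the slide's L(C_a:b) under one stable containment)."""
    n, k = len(seq), sum(seq)
    if k == 0 or k == n:
        return 0.0  # degenerate MLE fits the segment perfectly
    p = k / n
    return k * math.log(p) + (n - k) * math.log(1 - p)

def delta(seq):
    """Best split score minus single-segment score; large => likely change."""
    full = bern_loglik(seq)
    best = max(bern_loglik(seq[:t]) + bern_loglik(seq[t:])
               for t in range(1, len(seq)))
    return best - full
```

A sequence whose statistics shift midway scores a much larger Δ than a stationary one, which is exactly the signal thresholded by δ.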
(5) Implementation and Optimizations
[Figure: the graphical model from slide (1), annotated with complexities]

Both the E-step and the M-step have high complexity, O(TCOR²) over T epochs, C containers, O objects, and R readers. Inference is run every few seconds.

Change point detection:
• Runs at the end of each inference.
• Sums up quantities memoized during inference, adding little extra overhead.

Optimizations reduce the complexity from O(TCOR²) to O(TC + TO), and further to O(C + O):
• Location restriction: each object is read in only a few locations.
• Containment restriction: a container includes a small set of objects.
• Candidate pruning: for an object, consider only containers observed frequently in the first few epochs and in several recent epochs.
• History truncation: further eliminate the factor of T.
• Memoization: reuse values from the previous iteration of EM.
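Candidate pruning can be sketched as follows. The window sizes, the co-occurrence structure, and the simplification of "observed frequently" to "observed at all in the window" are assumptions for illustration.

```python
def candidate_containers(cooccur, first_k=3, recent_k=3):
    """cooccur is a time-ordered list of sets: the containers co-located
    with the object at each epoch. Keep only containers observed in the
    first first_k or the most recent recent_k epochs, pruning the rest
    (a simplified stand-in for 'observed frequently' in those windows)."""
    window = cooccur[:first_k] + cooccur[-recent_k:]
    keep = set()
    for seen in window:
        keep |= seen
    return keep
```

This shrinks the candidate set the M-step must score from all C containers to the handful an object could plausibly have entered or left with.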
II. Distributed Processing with State Migration
SELECT tag_id, A[].temp
FROM ( SELECT RSTREAM(R.tag_id, R.loc, T.temp)
       FROM RFIDStream [NOW] AS R,
            TempStream [PARTITION BY sensor_id ROW 1] AS T
       WHERE (R.container != 'cooling box' OR R.container IS NULL)
         AND R.loc = T.loc AND T.temp > 10°C
     ) AS GlobalStream S
[ PATTERN SEQ(A+)
  WHERE A[i].tag_id = A[1].tag_id
    AND A[A.len].time > A[1].time + 6 hrs ]
[Figure: three sites (Site 1, Site 2, Site 3), each running local inference and local query processing, with a global processing tier above; RFID readings (tag, reader, time) and sensor readings feed inference, which emits object events (tag, loc, cont, ...); inference and query state migrate between sites as objects move]

Query: Raise an alert if a frozen product has been placed outside a cooling box for 6 hours.
Minimize Inference State via History Truncation

[Figure: an object moves past an entry door (t=0~90), a belt (t=100~105), and shelves A and B (t=120~200); epochs are marked as read vs. not read (R/NR) and co-located vs. not co-located (C/NC)]

Strength of co-location, computed in the M-step of inference:
• Periodically find a critical region, CR, over the history.
• Later inference runs on CR plus the recent history H'.
• When an object leaves a site, compress CR to a single weight (its co-location strength) to minimize the transferred state.
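The compression step at site exit can be sketched minimally, assuming co-location is recorded as one 0/1 flag per epoch (an assumed encoding for illustration):

```python
def compress_state(coloc_flags, recent=5):
    """Collapse the critical region (everything before the recent window)
    into a single co-location strength, keeping the recent history in full.
    Returns (CR weight, recent history H')."""
    cr, h = coloc_flags[:-recent], coloc_flags[-recent:]
    return sum(cr), h
```

The receiving site then seeds its inference with one scalar per (object, container) pair instead of the object's full read history.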
Minimize Query Processing State via Sharing
Global query processing:
• A query state per object per query.
• As an object leaves a site, transfer its query state to the next site.

Sharing query states based on stable containment:
• At the exit, objects in a container have the same location and container (but possibly different histories).
• Share their query states using a centroid-based method:
  - Find the most representative query state.
  - Compress the other, similar query states by storing only the differences.

[Figure: query states such as [1, 2, 3, 4, ...] before and after compression]
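The centroid-based sharing step can be sketched as below. Representing each query state as a flat dict of fields and measuring similarity by symmetric difference of field assignments are assumptions for illustration.

```python
def share_states(states):
    """Pick the state closest to all others as the representative, then
    store every other state as only the fields where it differs.
    Assumes all states share the same field names (same query schema)."""
    def dist(a, b):
        return len(set(a.items()) ^ set(b.items()))
    rep = min(states, key=lambda s: sum(dist(s, t) for t in states))
    diffs = [{k: v for k, v in s.items() if rep.get(k) != v} for s in states]
    return rep, diffs

def restore(rep, diff):
    """Rebuild a full state from the representative plus its stored diff."""
    return {**rep, **diff}
```

For a case of near-identical item states, most diffs are empty, which is where the reported reduction in query state size comes from.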
Implementation and Evaluation

• Implemented inference, distributed inference, and distributed query processing.
• Instrumented an RFID lab in a warehouse setting.
• Developed a simulator for a network of warehouses:
  - Number of warehouses (N): 1-10
  - Frequency of pallet injection: 1 every 60 seconds
  - Cases per pallet: 5
  - Items per case: 20
  - Main read rate of readers (RR): [0.6, 1], default 0.8
  - Overlap rate for shelf readers (OR): [0.2, 0.8], default 0.5
  - Non-shelf reader frequency: 1 every second
  - Shelf reader frequency: 1 every 10 seconds
  - Frequency of anomalies (FA): 1 every 10 to 120 seconds
Single Site, Stable Containment
Three methods: history truncation (CR), simple windowing (W), and naïve (all history).
Metrics: accuracy of location and containment inference, and time cost of inference.

Results:
• All three methods offer high accuracy for location.
• Simple windowing has poor accuracy for containment inference.
• Using all history hurts performance.
• History truncation (CR) is best in both accuracy and performance, and is insensitive to trace length.
Evaluation of a Lab RFID Deployment
Trace settings:
• T1: RR=0.85, OR=0.25
• T2: RR=0.85, OR=0.5
• T3: RR=0.7, OR=0.25
• T4: RR=0.7, OR=0.5
• T5 to T8 extend T1 to T4 with 3 items moved across cases and 1 item removed.

Baseline: improved SMURF (window-based temporal smoothing), augmented with containment inference and change detection.

Our algorithm:
(1) Location error rates are low.
(2) Containment error rates are low under stable containment.
(3) Containment changes cause more errors, especially given more noise (lower read rates or higher overlap rates).

SMURF produces many more errors: simple temporal smoothing misses opportunities.
Results for Distributed Inference w. State Migration
• Experiment setting: 10 warehouses, each with up to 150,000 items, totaling 1.5 million items.
• Compared algorithms: State Migration (CR), No State Migration (None), and Centralized.

Communication cost (bytes transferred):

          Centralized    None   CR
RR=0.6    125,895,500    0      225,890
RR=0.7    145,858,950    0      223,790
RR=0.8    166,746,235    0      225,890
RR=0.9    187,589,810    0      225,890

• The naïve method with no state transfer has a high error rate.
• The centralized method incurs a huge amount of transferred data.
• Our method (CR) performs close to the centralized method in accuracy but with an 830x reduction in communication cost.
Results for Distributed Query Processing
Q1: report frozen food that has been placed outside a cooling box for 3 hours.
Q2: report frozen food that has been exposed to temperature over 10 degrees for 10 hours.

• The overall accuracy (F-measure) of query results is high (>89%).
• Query state sharing yields up to a 10x reduction in query state size.
• The accuracy and query state reduction ratio of Q1 are lower than those of Q2, because Q1 combines location and containment while Q2 uses only inferred location.

                                  RR=0.6   RR=0.7   RR=0.8   RR=0.9
Q1  F-measure (%)                 89.2     94       95.1     96
    State w/o sharing (bytes)     65,500   66,000   67,037   67,000
    State w/ sharing (bytes)      6,986    5,737    5,589    5,156
Q2  F-measure (%)                 93.5     96.1     97.3     97.5
    State w/o sharing (bytes)     80,248   85,510   87,029   87,000
    State w/ sharing (bytes)      7,296    6,108    5,341    5,273
Summary and Future Work
Summary:
• Novel inference techniques that provide accurate estimates of object locations and containment relationships in noisy, dynamic environments.
• Distributed inference and query processing techniques that minimize the computation state transferred.
• Experimental results demonstrate the accuracy, efficiency, and scalability of our techniques, and their superiority over existing methods.

Future work:
• Exploit local tag memory for distributed inference, such as utilizing aggregate tag memory and fault tolerance.
• Extend the work to probabilistic query processing.
• Explore smoothing over object (entity) relations in other data cleaning problems.
Thank You!