PROP: A Scalable and Reliable P2P Assisted Proxy Streaming System
Lei Guo, Songqing Chen, and Xiaodong Zhang
Computer Science Department, College of William and Mary
Dec 13, 2015
Media Streaming on the Internet
• Rapidly growing applications
  – Scientific data retrieval and processing
  – Commercial applications
  – Education and professional training
  – Entertainment
• Challenges
  – Large size of media objects
  – Real-time requirements for media content delivery
Existing Systems
• Content Delivery Network (CDN)
  – Performance effective but very expensive
  – Needs dedicated hardware and administration
• Proxy-based media caching
  – Cost effective but not scalable
  – Limited storage and bandwidth; single point of failure
• Client-based P2P collaboration
  – Scalable and cost effective, but no QoS guarantee
  – Non-dedicated service
  – Peers come and go frequently
PROP: Design Rationale and Objectives
• Integrate proxy caching and P2P collaboration techniques
• Coordinate the proxy and its P2P clients: their functions are complementary
  – The media proxy works as a backup server
    • To provide a reliable and dedicated service
  – Clients self-organize into a P2P system
    • To provide a scalable and cost-effective service
• Build a scalable and reliable streaming proxy system for VoD in a cost-effective manner
Outline
• Introduction
• System architecture
• Resource management
• Performance evaluation
• Conclusion
Infrastructure Overview
[Figure: a media server on the Internet; a firewall and the media proxy at the edge; intranet clients self-organize into a P2P overlay (a Content Addressable Network) of DHT nodes]
System Components
• Streaming proxy
  – Interface between the system and media servers
  – Bootstrap site of the system
• P2P overlay of users, in which each peer is
  – A client
  – A streaming server
  – An index server and router
Media Proxy
• Bootstrap
• Fetch media data from media server
• Cache media objects by segment
• Serve media data for clients
[Figure: a new client joins; the media proxy fetches data from the media server over the Internet and serves clients A and B]
Peer as a Client

• Receive data
• Playback
• Cache data locally
Peer as a Streaming Server
• Receive requests
• Stream media data

[Figure: the peer streaming server serves clients A and B from its local cache]
Peer as an Index Server/Router

[Figure: a request's segment ID is hashed to a key; if the key falls in this peer's DHT key space, the peer answers from its segment index (meta data plus pointers to serving peers, including the proxy); otherwise the request is forwarded to another peer via the DHT routing table]
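The key-space check and forwarding step on this slide can be sketched as follows. The `Peer` class, the 16-bit key width, and the closest-neighbor forwarding rule are illustrative assumptions, not details taken from the system:

```python
import hashlib

class Peer:
    """Sketch of a DHT node in the overlay (illustrative, not the paper's code)."""
    def __init__(self, space_lo, space_hi):
        self.space = (space_lo, space_hi)  # key range this peer owns
        self.index = {}                    # segment_id -> index entry
        self.neighbors = []                # routing table: other peers

    def owns(self, key):
        lo, hi = self.space
        return lo <= key < hi

    def lookup(self, segment_id):
        # hash the segment ID into the key space
        key = int(hashlib.sha1(segment_id.encode()).hexdigest(), 16) % 2**16
        if self.owns(key):                 # key in my key space: answer locally
            return self.index.get(segment_id)
        # otherwise forward to the neighbor whose range starts closest to the key
        nxt = min(self.neighbors, key=lambda p: abs(p.space[0] - key))
        return nxt.lookup(segment_id)
```

With two peers splitting the key space, a lookup issued at either peer reaches the entry stored at the owning peer.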
Basic Operations
• Publishing and unpublishing media segments
  – publish (segment_id, location)
  – unpublish (segment_id, location)
• Requesting and serving media segments
  – request (segment_id, URL)
• Getting and updating segment meta data
  – update_info (segment_id, data)
  – get_info (segment_id)
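A minimal sketch of these operations over a dictionary-backed index. The dictionary layout and field names are illustrative; in the real system the index is distributed across peers via the DHT:

```python
# Illustrative in-memory segment index: segment_id -> locations + meta data.
segment_index = {}

def publish(segment_id, location):
    """Register a location (peer or proxy) holding this segment."""
    entry = segment_index.setdefault(segment_id, {"locations": set(), "meta": {}})
    entry["locations"].add(location)

def unpublish(segment_id, location):
    """Remove a location that no longer caches this segment."""
    entry = segment_index.get(segment_id)
    if entry:
        entry["locations"].discard(location)

def update_info(segment_id, data):
    """Merge new meta data (access counters, timestamps, ...) into the entry."""
    entry = segment_index.setdefault(segment_id, {"locations": set(), "meta": {}})
    entry["meta"].update(data)

def get_info(segment_id):
    """Return the segment's meta data, or None if it is unknown."""
    entry = segment_index.get(segment_id)
    return entry["meta"] if entry else None
```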
Peer Serves Streaming
[Figure: the request is routed through the DHT overlay to the index node; the index returns a ready serving peer, and the media data is then streamed point to point]
Proxy Fetches Data
[Figure: when the index lookup returns no serving peer (NULL), the index node asks the proxy to fetch the segment from the media server; the proxy then publishes the segment in the index]
Outline
• Introduction
• System architecture
• Resource management
• Performance evaluation
• Conclusion
Streaming Server Selection
• The index maintains a list of serving peers, including the proxy
• The proxy works as a backup server and takes over media streaming when necessary
• Peer with the largest available serving capacity is selected as the serving peer
[Figure: of two candidate serving peers with 100 Kbps and 1 Mbps of available bandwidth, the 1 Mbps peer is selected]
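The selection rule can be sketched as below; the tuple representation of peers and the capacity-versus-demand fallback check are assumptions for illustration:

```python
def select_server(serving_peers, proxy, demand_kbps):
    """Pick the serving peer with the largest available capacity; fall back
    to the proxy (the backup server) if no peer can sustain the demand.
    serving_peers: list of (name, available_kbps) tuples (illustrative)."""
    if serving_peers:
        name, cap = max(serving_peers, key=lambda p: p[1])
        if cap >= demand_kbps:
            return name
    return proxy

# e.g. with peers at 100 Kbps and 1 Mbps as on the slide:
select_server([("A", 100), ("B", 1000)], "proxy", 500)  # -> "B"
```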
Data Management
• Each client maintains its cache separately (e.g., with LRU)?
  – Not efficient: popular media data have too many replicas
  – Not effective: a single streaming session can flush all cached data
    • One hour of streaming video may consume more than 100 MB of cache
  – Even worse in some cases: many media objects are accessed only once
    • People are not interested in viewing the same movie repeatedly
• Exploit the locality of all clients collectively?
  – Global cache: each peer manages its cached data based on the accesses of all clients, instead of its own
  – Keep both popular and unpopular media data
  – Consider both the popularity and the replica count of media objects
Popularity of Media Segments

Segment meta data (kept in the segment index, together with the pointers to serving peers):
  – T0: first access time
  – Tr: most recent access time
  – Ssum: total accessed bytes
  – S0: segment size
  – n: # of requests
  – r: # of copies

Popularity at time t:

  p(t) = (Ssum / S0) · (1 / (Tr − T0)) · min(1, ((Tr − T0) / n) / (t − Tr))

  – The first two factors give the average access rate in the past
  – The min term estimates the probability of future access: (Tr − T0) / n is the average access interval in the past, so when the time since the last access (t − Tr) grows much larger than it, the future access probability is small
  – The popularity of media objects fades as time goes by
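A small sketch of the popularity computation, assuming the definition reads p(t) = (Ssum/S0) · 1/(Tr − T0) · min(1, ((Tr − T0)/n)/(t − Tr)), i.e., the past average access rate damped by the estimated probability of a future access; all names mirror the meta-data fields on the slide:

```python
def popularity(t, T0, Tr, Ssum, S0, n):
    """Segment popularity: average access rate in the past, damped by the
    probability of a future access (fades as the segment goes unaccessed)."""
    past = Tr - T0
    if past <= 0 or n == 0:
        return 0.0
    rate = (Ssum / S0) / past                            # average access rate
    future = min(1.0, (past / n) / max(t - Tr, 1e-9))    # future access prob.
    return rate * future
```

A recently accessed segment keeps a higher popularity than one with the same access history whose last access is long past.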
Utility of Cached Media Segments
• Media data are cached as media accesses progress
• Each media segment may have multiple copies cached in the system
• The popularity of media objects/segments
  – follows a heavy-tailed distribution
  – varies over time
• Define the segment utility function:

  u(t) = ((log p − log pmin) · (log pmax − log p)) / r²,
  with log p − log pmin ≥ 0 and log pmax − log p ≥ 0

  – Utility is small both for segments with small popularity and a large number of replicas, and for segments with large popularity and a large number of replicas
• The segment with the smallest utility should be evicted
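A sketch of utility-based eviction, assuming u = (log p − log pmin)(log pmax − log p) / r² as on the slide; the list-of-tuples segment representation is illustrative:

```python
import math

def utility(p, r, p_min, p_max):
    """Segment utility: small when the segment is unpopular (p near p_min)
    or popular but heavily replicated (p near p_max, large r)."""
    if p <= 0:
        return 0.0
    a = max(math.log(p) - math.log(p_min), 0.0)
    b = max(math.log(p_max) - math.log(p), 0.0)
    return a * b / (r * r)

def evict(segments, p_min, p_max):
    """segments: list of (segment_id, popularity, replicas); return the id
    of the segment with the smallest utility, the eviction victim."""
    return min(segments, key=lambda s: utility(s[1], s[2], p_min, p_max))[0]
```

Dividing by r² means that, of two equally popular segments, the one with more cached copies in the system loses a replica first.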
Distribution of Segment Replicas

• Resilient to peer failure
  – Segments of an object should be distributed across multiple peers instead of a single peer
• Balancing the load of media serving
  – Popular segments should have more copies in the system

[Figure: a media object of segments 0–5; the popular early segments have more cached copies across the proxy and peers than the unpopular later ones]

Use replacement operations to achieve such a distribution
Cache Replacement

• Proxy cache replacement
  – popularity-based
  – segments with the smallest popularity are replaced
  – better than LRU
• Peer cache replacement
  – utility-based
  – segments with the smallest utility value are replaced

[Figure: two objects of segments 0–5; the proxy evicts the least popular segments, while peers evict the segments with the smallest utility]
Fault Tolerance
• Graceful degradation when the proxy fails
  – The DHT still works (no single point of failure)
  – Clients fetch data from the media server directly
• Peer failure
  – Each peer replicates its neighbors' DHT state and their index of cached data
  – When a peer and all its neighbors fail (small probability)
    • The peer that detects this situation initiates a broadcast, so that all peers republish their cached contents and validate the serving-peer lists in the index
Outline
• Introduction
• System architecture
• Resource management
• Performance evaluation
• Conclusion
Performance Evaluation
• Metrics
  – Streaming jitter byte ratio
  – Delayed startup ratio
  – Byte hit ratio
• Simulations
  – Proxy caching system
  – Pure P2P system without proxy
  – Our system
Workload Summary
• HP Corporate Media Solutions trace: REAL
• Synthetic workloads: WEB and PART
  – Media object popularity follows a Zipf distribution:
    p_i = f_i / Σ_{i=1..N} f_i,  where f_i = 1 / i^α
  – Request arrivals follow a Poisson distribution:
    p(x, λ) = e^(−λ) · λ^x / x!,  x = 0, 1, 2, ...

          # of req   # of obj   # of peers   size (GB)   length (min)   duration
  REAL    9000       403        800          20          6–131          10 days
  WEB     15188      400        800          51          2–120          1 day
  PART    15188      400        800          51          2–120          1 day
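The two distributions above can be generated as follows; the value of α, the fixed seed, and the helper names are illustrative choices, not parameters from the workloads:

```python
import random

def zipf_probs(N, alpha=1.0):
    """Object popularities p_i = f_i / sum(f), with f_i = 1 / i**alpha (Zipf)."""
    f = [1.0 / (i ** alpha) for i in range(1, N + 1)]
    total = sum(f)
    return [x / total for x in f]

def poisson_interarrivals(rate, count, rng=random.Random(42)):
    """Poisson request arrivals are equivalent to exponentially distributed
    inter-arrival times with mean 1/rate."""
    return [rng.expovariate(rate) for _ in range(count)]
```

Requests are then generated by sampling object IDs from `zipf_probs` and spacing them by the exponential inter-arrival times.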
Simulation Results
• Overall Performance
• Proxy load changes
• Replacement policies
• Routing hops
Streaming Jitter Byte Ratio
Delayed Startup Ratio
Byte Hit Ratio
Simulation Results
• Overall Performance
• Proxy load changes
• Replacement policies
• Routing hops
Proxy Load
Simulation Results
• Overall Performance
• Proxy load changes
• Replacement policies
• Routing hops
Streaming Jitter Byte Ratio
Delayed Startup Ratio
Byte Hit Ratio
Simulation Results
• Overall Performance
• Proxy load changes
• Replacement policies
• Routing hops
Routing Hops
Outline
• Introduction
• System architecture
• Resource management
• Performance evaluation
• Conclusion
Conclusion
• Proposed a scalable and reliable P2P-assisted media streaming system
• Addressed the limitations of
  – the poor scalability of proxy caching systems
  – the unreliable QoS of pure P2P systems
• Proposed global replacement policies for
  – the proxy, to maximize its cache utilization
  – the peers, to optimize data distribution across the system
Thank you!