Data Scheduling for Multi-item and Transactional Requests in On-demand Broadcast. Nitin Prabhu, Vijay Kumar. MDM 2005.



Outline

• Introduction

• Related Work

• Background and Motivation

• Scheduling Algorithm

• Experimental Results

• Conclusion and Future Work

Introduction

• Data broadcasting over a wireless channel
  – has become a common delivery method
  – offers high scalability
  – satisfies multiple requests with a single transmission
  – utilizes bandwidth efficiently

• The broadcast scheduling algorithm is the main component of a broadcasting system
  – it determines data latency and access time

• Broadcast systems
  – push-based
  – pull-based
  – hybrid

Introduction (Cont.)

• Push-based
  – the server periodically broadcasts a schedule built from clients' access histories
  – does not reflect the current data access pattern

• Pull-based (on-demand)
  – clients explicitly request data items from the server
  – the server broadcasts the data requests pending in the service queue
  – generally performs better

• Hybrid
  – a mixture of the push and pull approaches

Introduction (Cont.)

• This paper focuses on an on-demand multi-item broadcast system
  – uses bandwidth efficiently
  – provides timely access to data
  – most users seek specific information

Related Work

• Push-based
  – deliver data based on access probability

• Hybrid
  – a part of the channel bandwidth is reserved for pull requests

• Pull-based
  – J. Wong
    • FCFS (First Come First Served)
    • MRF (Most Requests First)
    • MRFL (MRF with Lowest access probability on ties)
    • LWF (Longest Wait First)

Related Work (Cont.)

• Xuan, Sen, et al.
  – each request is associated with a deadline
  – on-demand scheduling with EDF (Earliest Deadline First) performs better

• Aksoy, Franklin
  – R*W
    • R: number of pending requests for a data item
    • W: waiting time of the oldest of those requests
  – select the data item with the maximum R*W value

• Vincenzo Liberatore
  – uses access-pattern dependencies and access probabilities
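The R*W policy above can be sketched in a few lines; the data layout (a map from items to request arrival times) is an illustrative assumption, not from the paper.

```python
# Hypothetical sketch of Aksoy and Franklin's R*W: at each broadcast tick,
# pick the item with the largest R * W, where R is the number of pending
# requests for the item and W is the wait of the oldest such request.

def pick_rxw(pending, now):
    """pending maps item -> list of request arrival times."""
    def score(item):
        arrivals = pending[item]
        r = len(arrivals)          # R: pending requests for the item
        w = now - min(arrivals)    # W: wait of the oldest pending request
        return r * w
    return max(pending, key=score)

pending = {"d1": [0, 2, 3], "d2": [1], "d3": [0]}
# at tick 5: d1 -> 3*5 = 15, d2 -> 1*4 = 4, d3 -> 1*5 = 5
print(pick_rxw(pending, 5))  # d1
```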

Background and Motivation

• On-demand scheduling schemes for single-item requests
  – there is a single server
  – a client sends a query or update transaction through the uplink channel
  – the server broadcasts the relevant information over the downlink, where the user retrieves the result
  – all data items are assumed to be available on the server and of equal size, hence they have equal service time
  – the broadcast duration of one data item is a broadcast tick

Background and Motivation (Cont.)

• Current broadcast scheduling schemes
  – make decisions at the data-item level
  – do not consider all the transactions that requested the items
  – cause consistency problems
  – increase access time

Background and Motivation (Cont.)

• For example:
  – if d1 and d2 are updated before the 7th broadcast tick, a transaction computing d1 + d2 - d7 gets an inconsistent result

Background and Motivation (Cont.)

• Indexing schemes have tried to overcome this problem

• With an index, a mobile client can check whether its required data is in the current broadcast cycle
  – if NOT, the mobile client can sleep and tune in to the next broadcast cycle's index

Scheduling Algorithm

• Consistency
  – a database state: <a set of ordered pairs of data items, their values>
  – data items are related through some constraints
  – mechanisms apply updates at periodic intervals
  – the client gets a consistent view of the database

• Response time

Scheduling Algorithm (Cont.)

• Tuning time
  – the time a client spends listening to the broadcast channel to access its data items
  – a measure of the power consumed
  – definitions:
    • n = size of the database D (total number of data items)
    • Rdi = number of requests for data item di
    • TDi = set of data items accessed by transaction ti, TDi ⊆ D
    • numi = number of data items accessed by transaction ti, numi = |TDi|
    • Tavg = average transaction size in terms of data items

Scheduling Algorithm (Cont.)

• Temperature of a transaction
  – a measure of the number of hot data items that a transaction accesses
  – Tempi = the temperature of transaction ti
  – Tempi = (Σ Rdj) / numi, summed over all dj ∈ TDi
  – Ex: TD1 = {d1, d2, d7}, Rd1 = 5, Rd2 = 4, Rd7 = 1, num1 = 3
    Temp1 = (5 + 4 + 1) / 3 = 3.33

• Scheduling decisions are taken at every broadcast cycle
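The temperature formula above can be computed directly; a minimal sketch using the slide's own example (function and variable names are illustrative):

```python
def temperature(td, req_counts):
    """Temp_i = (sum of Rd_j for d_j in TD_i) / num_i, where num_i = |TD_i|."""
    return sum(req_counts[d] for d in td) / len(td)

# The slide's example: TD1 = {d1, d2, d7}, Rd1 = 5, Rd2 = 4, Rd7 = 1
req = {"d1": 5, "d2": 4, "d7": 1}
print(round(temperature({"d1", "d2", "d7"}, req), 2))  # 3.33
```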

Scheduling Algorithm (Cont.)

• Broadcast cycle
  – in push-based systems the content is the same in every cycle
  – in this paper the content may vary depending on the current workload at the server
  – purpose
    • indexing can be used
    • updates can be applied to the database at cycle intervals

Scheduling Algorithm (Cont.)

• Definitions
  – Request: the set of pending transactions
  – Bset: data items selected for the current cycle
  – Tlist: the set of transactions used to fill the current Bset

• Protocol at the server
  1. Calculating the temperature of transactions
    – calculate the temperature of all transactions
  2. Sorting the Request list
    – by (Tempi * Wi) values
  3. Selecting transactions for the current broadcast cycle
    – identify Bset and Tlist
    – K is the length of the broadcast cycle
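Steps 2 and 3 can be sketched as follows. The tuple layout and the fit test |Bset ∪ TDi| ≤ K are assumptions; the slides only say the cycle holds K data items.

```python
def select_for_cycle(request, K):
    """request: list of (tid, TD_set, temp, wait). Sort by Temp*W (step 2),
    then greedily admit transactions whose items still fit in the K-slot
    cycle (step 3). Returns (Bset, Tlist)."""
    order = sorted(request, key=lambda r: r[2] * r[3], reverse=True)
    bset, tlist = set(), []
    for tid, td, temp, wait in order:
        if len(bset | td) <= K:    # assumed fit test for the cycle
            bset |= td
            tlist.append(tid)
    return bset, tlist

reqs = [("t1", {"a", "b"}, 3, 2),        # Temp*W = 6
        ("t2", {"b", "c", "d"}, 2, 2),   # Temp*W = 4
        ("t3", {"e", "f", "g"}, 1, 5)]   # Temp*W = 5
bset, tlist = select_for_cycle(reqs, 4)
print(sorted(bset), tlist)  # ['a', 'b', 'c', 'd'] ['t1', 't2']
```

Note that t3 is skipped even though it outranks t2: its three items would overflow the 4-slot cycle, while t2 overlaps the already-selected items.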

Scheduling Algorithm (Cont.)

  4. Arranging data items within the broadcast cycle
    – Broadcast denotes the order of all data items to be broadcast in the current cycle

Scheduling Algorithm (Cont.)

  c). Calculate for every transaction ti ∈ Tlist
    1. the overlap of transaction ti with the Broadcast set: overlapi = Broadcast ∩ TDi
    2. the number of data items remaining to be broadcast, denoted remi, where remi = |TDi - overlapi|

  d). Select the transaction ti with the highest value of (|overlapi| / remi); if there is a tie, select the one with the highest Tempi * Wi value
    1. Broadcast = Broadcast ∪ (TDi - (Broadcast ∩ TDi))
    2. Tlist = Tlist - ti
    3. if |Broadcast| ≤ K then go to step c
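Steps c and d above can be sketched as one greedy loop. Sorting the appended items alphabetically is an assumption made for determinism; the slides do not fix the internal order of a transaction's items.

```python
def arrange(tlist, td, temp_w):
    """Repeatedly pick the transaction with the highest overlap/rem ratio
    against the items already scheduled (ties broken by Temp*W) and append
    its not-yet-scheduled items."""
    order, remaining = [], list(tlist)
    while remaining:
        def key(t):
            scheduled = set(order)
            rem = td[t] - scheduled                 # items still to place
            overlap = len(td[t] & scheduled)        # items already placed
            ratio = overlap / len(rem) if rem else float("inf")
            return (ratio, temp_w[t])               # tie-break by Temp*W
        best = max(remaining, key=key)
        order += sorted(td[best] - set(order))
        remaining.remove(best)
    return order

td = {"t1": {"d1", "d2"}, "t2": {"d2", "d3"}, "t3": {"d4"}}
temp_w = {"t1": 5, "t2": 3, "t3": 4}
print(arrange(["t1", "t2", "t3"], td, temp_w))  # ['d1', 'd2', 'd3', 'd4']
```

With the empty schedule all ratios start at 0, so the first pick falls back to the Temp*W tie-break, matching the slides' starting point.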

Scheduling Algorithm (Cont.)

  5. Indexing
    – adapt the (1, m) indexing mechanism
    – an index at the beginning of every broadcast cycle (major index) and after every K/m broadcast slots (minor index)
    – the ith minor index covers the (K - i*(K/m)) data items yet to be broadcast
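One way the (1, m) interleaving in step 5 could be realized is sketched below; the slot naming ("major", "minor1", ...) is illustrative.

```python
def interleave_index(broadcast, m):
    """Place a major index at the start of the cycle and a minor index
    after every K/m data slots, where K = cycle length."""
    step = len(broadcast) // m
    out = ["major"]
    for i, item in enumerate(broadcast):
        if i and i % step == 0:
            out.append(f"minor{i // step}")
        out.append(item)
    return out

cycle = ["d1", "d2", "d7", "d3", "d4", "d6", "d5"]
print(interleave_index(cycle, 7))
# ['major', 'd1', 'minor1', 'd2', 'minor2', 'd7', 'minor3', 'd3',
#  'minor4', 'd4', 'minor5', 'd6', 'minor6', 'd5']
```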

Scheduling Algorithm (Cont.)

  6. Broadcasting data
    – in the order determined by the Broadcast set

  7. Filtering transactions
    – at the end of the current broadcast cycle
    – remove from Request the transactions that were fully served, even those not selected in Tlist
    – two categories of arrivals:
      1. before the current broadcast cycle
      2. after the current cycle has begun
    – let Tbegin be the time when the broadcast cycle started
    – let Ti be the arrival time of transaction ti

    For every transaction that arrived before the end of the current broadcast cycle:
      If (Ti < Tbegin) then { If (TDi ∩ Broadcast = TDi) then Request = Request - ti }
      else
        1. Select the next minor index (MI) broadcast after time Ti.
        2. If (TDi ∩ MI = TDi) then Request = Request - ti.
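The filtering step above can be sketched as follows. Representing each minor index as a (broadcast-time, items-still-to-come) pair is an assumption about the data layout.

```python
def filter_requests(request, broadcast, minors, arrival, t_begin):
    """request: tid -> TD set. Drop a transaction if it arrived before
    T_begin and all its items were broadcast, or if it arrived during the
    cycle and the next minor index after its arrival covers it."""
    def next_minor(t):
        for when, items in minors:       # minors sorted by broadcast time
            if when > t:
                return items
        return None
    kept = {}
    for tid, td in request.items():
        if arrival[tid] < t_begin:
            served = td <= broadcast                 # TDi ∩ Broadcast = TDi
        else:
            mi = next_minor(arrival[tid])
            served = mi is not None and td <= mi     # TDi ∩ MI = TDi
        if not served:
            kept[tid] = td
    return kept

request = {"t1": {"d1", "d2"}, "t2": {"d4"}, "t3": {"d2"}}
arrival = {"t1": 1, "t2": 2, "t3": 6}
minors = [(6.5, {"d2", "d3"})]
print(sorted(filter_requests(request, {"d1", "d2", "d3"},
                             minors, arrival, t_begin=5)))  # ['t2']
```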

Scheduling Algorithm (Cont.)

• Protocol at the client

Example

• Arrival times: T1=1, T2=2, T3=2, T4=3, T5=6; Tbegin=5

• Step 1 (calculate temperatures)
  – Temp1=3.33, Temp2=3.25, Temp3=3.33, Temp4=3.25

• Step 2 (compute Tempi*Wi)
  – TW1=9.99, TW2=6.5, TW3=6.66, TW4=3.25 => t1 > t3 > t2 > t4

• Step 3 (select transactions), K=7
  – t1: Tlist={t1}, Bset={d1,d2,d7}
  – t3: Tlist={t1,t3}, Bset={d1,d2,d3,d4,d7}
  – t2: Tlist={t1,t2,t3}, Bset={d1,d2,d3,d4,d6,d7}
  – t4: Tlist={t1,t2,t3,t4}, Bset={d1,d2,d3,d4,d5,d6,d7}

• Step 4 (arrangement)
  – t1: Tlist={t1}, Broadcast={d1,d2,d7}
    • overlap2/rem2 = 2/2 = 1, overlap3/rem3 = 1/2, overlap4/rem4 = 2/2 = 1, TW3 > TW2
  – t3: Tlist={t1,t3}, Broadcast={d1,d2,d7,d3,d4}
    • overlap2/rem2 = 3/1 = 3, overlap4/rem4 = 3/1 = 3, TW2 > TW4
  – t2: Tlist={t1,t2,t3}, Broadcast={d1,d2,d7,d3,d4,d6}
  – t4: Tlist={t1,t2,t3,t4}, Broadcast={d1,d2,d7,d3,d4,d6,d5}

Example (Cont.)

• Step 5 (indexing): m=7, K/m=1
  – major, d1, minor1, d2, minor2, d7, minor3, d3, minor4, d4, minor5, d6, minor6, d5

• Step 6 (broadcast)

• Step 7 (filtering)
  – Request={t1,t2,t3,t4,t5}
  – t1 served: Request={t2,t3,t4,t5}
  – t2 served: Request={t3,t4,t5}
  – t3 served: Request={t4,t5}
  – t4 served: Request={t5}
  – t5: TD5 ∩ minor2 ≠ TD5, so t5 remains: Request={t5}

Experimental Results

• Simulation environment
  – a simulation model built with CSIM
  – broadcast ticks are used to measure simulated time

Experimental Results (Cont.)

• Performance compared with FCFS, MRF, and R*W

Experimental Results (Cont.)

• Effect of the broadcast period, and comparison of aborts with FCFS, MRF, and R*W

Experimental Results (Cont.)

• Effect of a shift in the hot spot on transaction waiting time
  – hot spot of 1000 items (10% of the database)

Conclusion and Future Work

• Contribution
  – addresses scheduling of multi-item and transactional requests in an on-demand broadcast environment

• Future work
  – dynamically determining the optimal size of the broadcast cycle period

• Open problems
  – the overhead of scheduling and the transmission time were not considered
  – the approach in this paper is closer to a hybrid one
