László Böszörményi
Distributed Multimedia Systems
6. Multimedia Servers




What is special? - Everything

• Architecture
• Process scheduling
• Disk scheduling
• Network management
• File placement
• Data acquisition
• Why?
  – Huge amount of data
  – Large files
  – Continuous delivery


Streaming Media Paradigms
• Local Streaming
  – Single movie from local disk – enough resources
• Store-and-Display
  – Movie is downloaded as a whole and streamed locally
• Progressive Download – local disk as big buffer
  – Quality > bandwidth possible (e.g. for movie trailers)
• Remote Streaming
  – Local memory as smoothing buffer
  – Low start-up delay (starts almost immediately)
  – Content is not stored locally (good for copyright)
  – Support for live events
  – Hard to avoid hiccups (disruptions, jitter etc.)


Stream Characteristics

• Constant bit-rate (CBR)
• Variable bit-rate (VBR)

[Figure: frames X0, X1, X2, … flow from the SM server through the stages server retrieval, network delivery, and buffer delay to display at the decoder; each frame occupies one time period (Tp = display time) per stage. If a frame (here X3) misses its display time, a hiccup occurs.]


6.1. Simple, single-node architecture

[Figure: a single-node server consists of a storage subsystem (different devices: disk, RAID, CD, DVD, tape robot), a processor subsystem hosting the data server, application server, and control server, and a network subsystem (different connections: TCP, UDP, Ethernet, RTP) toward the clients; data flow and control flow are separated.]


6.1.1. Application server

• Receives and preprocesses application requests
• Application specific
  – Videoconferencing, Video on Demand . . .

[Figure: the application server connects a user interface (to clients) and service gateways (to services) with a contents database, a billing database, and a user database.]


6.1.2. Control server

• Admission control
• Resource selection, reservation and optimization

[Figure: the control server comprises an administrator interface, a configuration database, and modules for server initialization, server optimization, admission control, and server shutdown.]


6.1.3. Data server

• Multimedia file or data retrieval
• Stream delivery

[Figure: the data server couples a data importer (e.g. video camera or frame grabber), a buffer manager, the file system, and a data exporter.]


6.2. Distributed architecture

• Enhanced capacity
  – The server consists of multiple resources: nodes, disks, network controllers
• Must be scalable
  – Additional clients can be served by adding nodes
  – Communication between nodes might be a bottleneck
• Several possible architectures
  – Special purpose hardware
  – Regular workstations connected by a fast switched network (e.g. ATM)
  – Regular workstations, regular connections


6.2.1. Partitioned Server

• Wastes server and disk capacity (replicated data)

[Figure: two independent partitions, Server-1 and Server-2, each serving its own clients from its own disks.]

Server capacity | Req. peak capacity | Number of servers | Excess capacity
      50        |        1562        |        32         |       60
     100        |        1333        |        14         |       40

Statistical analysis assumes: average 1000 users, 0.1% rejections


6.2.2. Externally Switched Servers

• Optimal usage of the servers, but disk capacity is still wasted

[Figure: Server-1 and Server-2, each with its own disks and control server, are connected to the clients through a network switch, so any server can serve any client.]

Server capacity | Req. peak capacity | Number of servers | Excess capacity
      50        |        1081        |        22         |       10
     100        |        1081        |        11         |       10

Statistical analysis assumes: average 1000 users, 0.1% rejections


6.2.3. Fully Switched Servers

• Optimal usage of server and disk capacities

[Figure: as in the externally switched case, Server-1 and Server-2 with control servers connect to the clients through a network; in addition, an I/O switch connects the servers to all disks.]


6.3. Client Session Scheduling

• Logical channel
  – Resources needed for continuous delivery, e.g. disk, buffer, CPU, network, display screen
• Pipelining
  – The logical channel builds a pipeline, e.g. block_i is displayed; block_(i+1) is buffered at the display; block_(i+2) and block_(i+3) are on the network; block_(i+4) is in a NIC buffer …
  – Total transmission delay < time to display?
  – It is sufficient that no component starves in the pipeline
  – delay_j ≤ delay_(j+1) must be guaranteed
• Buffering and prefetching may be helpful


6.3.1. Logical and control channels

[Figure: inside the video server a resource manager drives the channel scheduler, network scheduler, disk scheduler, and CPU scheduler; a videoconference manager and a video-on-demand manager handle the video conference and VoD clients across the network.]


6.3.2. Logical Channel Setup
• Component selection
  – Optimization
    • E.g. load balancing, max. cache hit ratio (affinity routing)
  – Component characteristics – e.g. avoid slow switching
  – Server topology – e.g. regard only connected nodes
  – Path-based component selection
    • Bottleneck bandwidth: smallest available bandwidth on a path
    • Select the path with the largest bottleneck bandwidth
• Resource reservation
  – Reserve enough resources on the selected path
  – Select the cheapest resource set (cost function)
  – For VoD systems often only one choice
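The path-based selection rule above (pick the path whose smallest link bandwidth is largest) can be sketched as follows. This is an illustrative sketch, not code from the course: the link names, bandwidth values, and helper functions are all made-up assumptions.

```python
# Sketch (assumed example, not from the slides): select the path with
# the largest bottleneck bandwidth.

def bottleneck(path, bw):
    """Smallest available bandwidth on a path of links."""
    return min(bw[link] for link in path)

def select_path(paths, bw):
    """Pick the path whose bottleneck bandwidth is largest."""
    return max(paths, key=lambda p: bottleneck(p, bw))

# Available bandwidth per link (e.g. Mbps) -- purely illustrative:
bw = {("disk", "cpu"): 80, ("cpu", "nic1"): 40,
      ("cpu", "nic2"): 25, ("nic1", "net"): 30, ("nic2", "net"): 90}

paths = [
    [("disk", "cpu"), ("cpu", "nic1"), ("nic1", "net")],  # bottleneck 30
    [("disk", "cpu"), ("cpu", "nic2"), ("nic2", "net")],  # bottleneck 25
]
best = select_path(paths, bw)
print(bottleneck(best, bw))  # -> 30: the first path wins
```

Resource reservation would then run along `best` only, reserving at most the bottleneck bandwidth on every link.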


6.3.3. QoS Parameters

• Bandwidth requirement
  – For variable bandwidth requirements: the average
  – E.g. 1.5 Mbps for MPEG-1
• Peak bandwidth requirement
  – Necessary for applications with variable bandwidth
  – Maximum duration of a peak
  – Burst size (work ahead): max. amount above average
• Delay
  – Important in video conferencing, less important in VoD
• Loss probability
  – Strict for control data, looser for multimedia data


6.3.4. QoS Specification

• Explicit QoS specification
  – Puts the burden on the application
  – Provides more flexibility; applications may adapt themselves to the available bandwidth
    • E.g. smaller window, only key frames etc.
• Implicit QoS specification
  – Puts the burden on the server
  – Special attributes and files may be stored on the server
    • QoS attributes such as required bandwidth
    • Special files for fast forward/reverse
  – Transparent – easier to migrate from non-multimedia to multimedia applications


6.3.5. Capacity Estimation

• Static table with bandwidth values from the manufacturers – poor solution
• Calibration – ideally off-line, maybe partly on-line
  – Input: actual configuration as a component graph
  – Aggregate those components whose capacities cannot be distinguished (e.g. N disks striped together)
  – For each source-destination path compute the weight as a fraction of the overall flow
  – Play back MM streams causing saturation (maximal flows without violating QoS requirements)
  – Output: table of the capacities of the components


6.4. Client Request Scheduling

• Channel allocation for sessions
  – Wastage in intervals between clip displays
• Channel allocation on demand
  – Too costly for many short clip requests
• Resource reservation in the network is expensive
  – A more rigid, static scheme is desirable
• Resource reservation in the server is cheaper
  – A more dynamic reservation principle can be used
  – Statistical sharing of resources, based on user inactivity (# active users > capacity of channels)
• Data-centered vs. user-centered scheduling


6.4.1. Video popularity, Zipf’s law

• Surprisingly general rule of popularity
  – p_i = c / i^(1-α)   (α: parameter, c: normalization constant)
  – For α = 0: p_i = c / i, i.e. c, c/2, c/3, c/4, c/5, c/6, … c/20

[Figure: the population of the 20 largest cities in the U.S. follows the same law (NY = 1, LA = 2, Chicago = 3, …).]
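The Zipf popularities above can be computed directly; a small sketch (the catalog size N = 20 and α = 0 are just the values used in the slide's example):

```python
# Sketch: Zipf popularity p_i = c / i^(1-alpha), normalized over n
# videos; alpha = 0 reproduces the c, c/2, c/3, ... pattern above.

def zipf(n, alpha=0.0):
    weights = [1.0 / i ** (1 - alpha) for i in range(1, n + 1)]
    c = 1.0 / sum(weights)              # normalization constant
    return [c * w for w in weights]

p = zipf(20)
assert abs(sum(p) - 1.0) < 1e-12        # probabilities sum to 1
print(round(p[0] / p[1], 2))            # -> 2.0  (p_1 = 2 * p_2)
print(round(p[0] / p[19], 2))           # -> 20.0 (p_1 = 20 * p_20)
```

Such a distribution is what makes caching the few hottest videos so effective later in this chapter.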


6.4.2. Customer reneging
• Reneging is likely if the waiting time is too long
• T_ren is unknown, modeled as a random variable
• Channel scheduling can influence it
• Minimum reneging time
  – Clients are assumed to be willing to wait at least T_ren,min
  – T_ren = T_ren,min + T_exp  (T_ren,min: constant; T_exp: exponentially distributed)
  – Preallocate capacity for hot videos
  – Multicast them at time intervals of T_ren,min
  – For hot videos a T_wait,max ≤ T_ren,min can be guaranteed
  – For the rest the general model holds
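The reneging-time model T_ren = T_ren,min + T_exp can be sampled in a few lines. A sketch under assumed parameters (T_ren,min = 60 s and a 120 s mean for the exponential part are illustrative, not from the course):

```python
# Sketch: sampling the reneging-time model quoted above.
# T_REN_MIN and MEAN_EXP are assumed, illustrative values.
import random

T_REN_MIN = 60.0      # every client waits at least this long (s)
MEAN_EXP = 120.0      # mean of the exponential part (s)

def sample_t_ren(rng):
    # expovariate takes the rate lambda = 1 / mean
    return T_REN_MIN + rng.expovariate(1.0 / MEAN_EXP)

rng = random.Random(42)
samples = [sample_t_ren(rng) for _ in range(100_000)]
assert min(samples) >= T_REN_MIN   # nobody reneges before T_ren,min
mean = sum(samples) / len(samples)
print(round(mean))                 # close to T_REN_MIN + MEAN_EXP = 180
```

Multicasting hot videos every T_ren,min then guarantees that no client of a hot video waits past its minimum reneging time.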


6.4.3. Schedulers
• Objectives
  – Minimize long-term reneging probability
  – Minimize short-term peak reneging probability
  – Minimize average and/or variance of waiting time
  – Fairness – equal reneging probability for all requests
    • Hot and cold videos may differ
  – Minimize resume delay
• Hierarchical architecture
  – Low-level scheduler allocates channels to waiting requests (e.g. VCR control and batching)
  – High-level scheduler controls the channel allocation rate based on expected future load


6.4.4. Broadcasting vs. MoD

• Live Broadcasting
  – Media goes from live camera & microphone to the server
  – The server multicasts it with a high time-to-live value
  – Duration is not necessarily known in advance
  – Requires high bandwidth: users tune in at roughly the same time
• Media on Demand
  – Acts as a bank of VCR devices
  – Duration is known in advance
  – May be unicast: easy to implement, high bandwidth
  – IP multicasting may reduce the bandwidth requirement
  – Short startup delay and VCR control may be difficult


6.4.5. Types of Media on Demand
• True Media on Demand (TMoD)
  – Any media stream, at any time, any kind of VCR control
• Near Media on Demand (NMoD)
  – One of the three conditions above does not hold (typically conditions 2 and 3)
• Quasi Media on Demand (QMoD)
  – A stream is started if at least k requests are available
• Closed-loop or client-pull
  – Full interactive user control (TMoD is closed-loop)
• Open-loop MoD or server-push
  – Non-interactive periodic broadcast (NMoD may be both open and closed loop)


6.4.6. VCR control operations

• VCR pause/resume, interactive video applications
  – Keeping resources between consecutive clips is wasteful
• Contingency channel policy
  – A pool (a contingent) reserved only for actions such as resume
  – For other requests allocate from a free pool
• VCR fast-forward, fast-backward
  – Special files (every 10th frame) – wastes storage, rigid
  – Fast playback at the client – wastes network bandwidth
  – Scan – read the file at the server non-sequentially
    • In push systems the server has to know the encoding format
    • In pull systems easy, if the client knows which frames to read


6.4.7.1. Multicast Techniques (1)
• Batching
  – Requests for the same stream within a time period p are grouped together in one batch
  – Startup delay ≤ p
  – The channel is busy for the whole time
  – Early requests in a batch have to wait for latecomers
• Relaying
  – A relay multicasts incoming streams to “friendly” clients
• Chaining
  – The stream is pipelined through a chain of caching clients
  – The multicasting source is at clients instead of the server


6.4.7.2. Multicast Techniques (2)
• Adaptive piggybacking or dynamic batching
  – Later requests are merged with existing ones via block caching + slight rate change
  – Bad for audio

[Figure: a) two users watch the same movie 10 sec out of sync; b) the two streams are merged into one.]


6.4.7.3. Multicast Techniques (3)
• Patching (also called dynamic caching)
  – Merges batching channels by buffering
  – Assume two batches B_i and B_j stream the same media m over the channels C_i and C_j with starting times t_i and t_j (t_j > t_i)
    • I.e. j comes later; e.g. B_i starts at 10:00, B_j at 10:10
  – Clients of B_j buffer the stream of C_i while playing the new start-up multicast on C_j
  – After the patching period (pp = t_j - t_i) C_j can be released
    • After 10 minutes B_j can be streamed from the buffer (10:10–10:20) and use C_i for filling the buffer with the next 10 minutes
  – Most channels are only used to patch missing portions
  – Improvement of C_j: |m| / pp  (|m| is the length of the movie)
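The channel-holding arithmetic of patching is worth spelling out with the slide's own numbers (a 2-hour movie, second batch 10 minutes late):

```python
# Sketch: patching channel economics from the example above.
# Without patching, channel C_j streams the whole movie; with
# patching, C_j is held only for the patching period pp = t_j - t_i.

MOVIE_LEN = 120   # |m| in minutes (2-hour movie)
PP = 10           # patching period: B_j starts 10 minutes after B_i

holding_without_patching = MOVIE_LEN   # C_j busy for the whole movie
holding_with_patching = PP             # C_j released after pp
improvement = MOVIE_LEN / PP           # the |m| / pp factor

print(improvement)  # -> 12.0: C_j is busy only 1/12th of the time
```

The later B_j arrives (larger pp), the smaller the gain, which is why patching pays off most for closely spaced requests.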


6.4.7.4. Periodic broadcasting

[Figure: a new stream of the same movie starts every 5 time units.]


6.4.7.5. Periodic broadcasting with VCR
• Buffering ΔT minutes at each client, who can read 2 streams
  – The “future” part is buffered at start or after a jump, from stream_(i-1)
  – Stream_i can be dropped after ΔT has been buffered
• Small movements: served locally from the buffer
• Larger jumps: refill the buffer or start a private stream


6.4.8. Time-varying workload
• Peaks at certain hours – often predictable
• Don't allow all resources to be allocated in a low-load period
• High-level allocation-rate scheduling policies:
• On-demand allocation – no control
  – Does not solve the problem of cyclic periods
• Forced-wait – implicit control
  – The first request of every video is forced to wait T_min
• Pure rate control – explicit control
  – Uniform channel allocation rate
  – Allocates a maximum number of channels in a fixed interval – channels are never exhausted


6.5. Process Scheduling
• Periodic processes displaying movies
  – Frame rates and CPU time differ for each process
  – Necessary (not sufficient) requirement of schedulability:
    ∑(i=1..m) C_i / P_i ≤ 1   (P: period; C: CPU time per period)
  – Example: P_A = 30, C_A = 10; P_B = 40, C_B = 15; P_C = 50, C_C = 5
    10/30 + 15/40 + 5/50 = 0.808
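The necessary condition above is a one-liner to check, using the example's own process parameters:

```python
# Sketch: the necessary schedulability test for the three periodic
# processes above (P in msec, C = CPU time per period).

procs = {"A": (30, 10), "B": (40, 15), "C": (50, 5)}  # name: (P, C)

utilization = sum(c / p for p, c in procs.values())
print(round(utilization, 3))   # -> 0.808, as on the slide
assert utilization <= 1        # necessary (not sufficient) condition
```

Because the condition is only necessary, utilization ≤ 1 does not by itself guarantee that a given scheduler meets every deadline.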


6.5.1. Rate Monotonic Scheduling

• Used for processes which meet the conditions
  – Each periodic process must complete within its period
  – No process depends on any other process
  – Each process needs the same CPU time in each burst
  – Non-periodic processes have no deadline
  – Process preemption occurs instantaneously
• Schedulability: ∑(i=1..m) C_i / P_i ≤ m(2^(1/m) - 1)
  – CPU utilization bound: m = 3: 0.780; m → ∞: → ln 2 (0.693)
• Fixed priority = frequency of the triggering event
  – priority = 33 for running every 30 msec (33 times/sec), 25 for every 40 msec, 20 for every 50 msec, …
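The RMS bound can be evaluated against the example's utilization in a few lines:

```python
# Sketch: the RMS schedulability bound m * (2^(1/m) - 1) for m = 3,
# compared with the utilization of the example (P_A=30, C_A=10, ...).

m = 3
bound = m * (2 ** (1 / m) - 1)
print(round(bound, 3))              # -> 0.78 (the 0.780 on the slide)

utilization = 10/30 + 15/40 + 5/50  # 0.808...
print(utilization > bound)          # -> True: RMS is NOT guaranteed here
```

This is exactly the situation of the next slide: with utilization 0.808 above the 0.780 bound, RMS may still happen to work ("RMS has luck"), but it is not guaranteed to.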


6.5.2. Earliest Deadline First Scheduling (1)

• EDF schedules the process with the earliest deadline
• No need for periodicity nor for constant run time

[Figure: RMS and EDF schedules of the example; A and B have the same deadline and A preempts B. RMS has luck here: CPU utilization = 0.808 (> 0.780).]


6.5.3. Earliest Deadline First Scheduling (2)
• No chance for RMS
  – A needs longer (C_A = 15): 15/30 + 15/40 + 5/50 = 0.975
  – CPU utilization 0.975 (m = 3, >> 0.780)


6.6. Multimedia File System Paradigms

• Pull Server
  – rather traditional view
• Push Server
  – fits better for a/v


6.6.1. Placing a File on a Single Disk
• Contiguous allocation is advantageous
  – The content of a video server usually changes slowly
  – Fragmentation therefore also progresses slowly
• Interleaving – avoids seeks between video, audio, and text files
  – Video, audio, and text in a single contiguous file per movie
  – Unwanted audio or text must be read – disadvantage

[Figure: interleaved layout – Frame 1, audio frame, text frame, Frame 2, audio frame, text frame, Frame 3, …]


6.6.2. Further File Placement Strategies
a) Small disk blocks (1–2 KB) – disk block smaller than a frame
  • Each frame is stored on a contiguous run of blocks
  • Index: pointer + block counter per frame
b) Large disk blocks (e.g. 256 KB) – disk block larger than a frame
  • Big internal fragmentation
  • Index: like an i-node + the number of the first frame in each block
(Typical frame size = 16 KB)


6.6.3. Trade-offs small vs. large blocks
• Constant Time Length (CTL) – better with small blocks
• Constant Data Length (CDL) – better with large blocks
• Frame index (small blocks)
  – Heavier RAM usage during movie play (the frame index itself is big)
  – Little disk wastage
  – Disk management needs to find consecutive runs of blocks
  – Double buffering is easier to implement (1 frame / read-write)
  – Fast-forward can be easily implemented by skipping non-I frames
• Block index (no splitting of frames over blocks)
  – Lower RAM usage
  – Major disk wastage
• Block index (splitting frames over blocks allowed)
  – Low RAM usage
  – No disk wastage
  – Extra seeks


6.6.4. Buffering
• Double buffering
  – Producer and consumer swap the buffers in every run
• Triple buffering (each buffer may also be double)
  1. Retrieval buffer (e.g. disk to memory)
  2. Intermediate buffer (memory to memory)
  3. Delivery buffer (e.g. memory to network)

[Figure: a producer and a consumer sharing a double buffer.]


6.6.5. Placing Files for NVoD Broadcast
• 30 frames/sec, a new stream starting every 5 min, 24 streams for a 2-hour movie
• 24 frames – one from each stream – build a single disk block
• Minimal seek time – we read block by block, track by track
• Consecutive streams are 30 * 60 * 5 = 9000 frames apart; a stream that is just starting reads from the beginning of the movie
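The arithmetic behind this layout is compact enough to write out. A sketch with the slide's numbers (the helper `frame_of` is an illustrative name, not from the course):

```python
# Sketch: frame offsets for the NVoD layout above -- 30 frames/sec,
# a new stream every 5 minutes, 24 concurrent streams of a 2-hour movie.

FPS = 30
START_INTERVAL = 5 * 60                        # seconds between starts
OFFSET = FPS * START_INTERVAL                  # 9000 frames, as above
N_STREAMS = (2 * 60 * 60) // START_INTERVAL    # 24 streams

def frame_of(j, f, movie_frames=FPS * 2 * 60 * 60):
    """Frame that stream j displays while stream 0 displays frame f."""
    return (f - j * OFFSET) % movie_frames

print(OFFSET, N_STREAMS)   # -> 9000 24
print(frame_of(1, 9000))   # -> 0: stream 1 is just starting
```

Storing the 24 per-stream frames of each instant in one block means one sequential read per time slot serves all streams.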


6.6.6. Placing Multiple Files on a Single Disk
• Organ-pipe distribution of files on the server – head in the middle
  – Most popular movie in the middle of the disk
  – Next most popular on either side, etc.
  – 1000 movies + Zipf distribution: the head will stay 30% of the time at the top 5 movies
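The organ-pipe order (most popular in the middle, the rest alternating outward) can be sketched as follows; the five-movie input is an illustrative assumption:

```python
# Sketch: organ-pipe placement -- popularity rank 1 is the most
# popular movie; the output is the disk order from one edge to the
# other, with popularity peaking in the middle.

def organ_pipe(ranks):
    left, right = [], []
    for i, movie in enumerate(ranks):
        (left if i % 2 else right).append(movie)
    # reverse the left side so popularity falls off away from the center
    return left[::-1] + right

print(organ_pipe([1, 2, 3, 4, 5]))  # -> [4, 2, 1, 3, 5]
```

Rank 1 lands in the center position, so under a Zipf workload the head spends most of its time near the middle of the disk.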


6.6.7. Placing Files on a “Disk Farm”

a) No striping – easy to implement, bad balance
b) Same striping pattern for all files – the start disks may become hot
c) Staggered striping – staggers the starting blocks
d) Random striping


6.6.8. Striping by frame vs. block

• Striping by frame
  – Frame 1 goes to disk 1, frame 2 to disk 2, etc.
  – Frames typically have different sizes
  – Frames are read one at a time: no speed-up for individual movies, but the total load is better balanced
• Striping by block
  – The system can easily start reading multiple blocks
  – The buffer requirement may become high
    • E.g. 1000 active users, 4 disks, 256 KB blocks require 1 GB of RAM buffer – still acceptable on a strong server
• Wide resp. narrow striping
  – Uses all disks resp. only a subset of the disks for striping


6.6.10. File Caching

• Most movies are stored on DVD or tape
  – Copy to disk when needed
  – Results in a large startup time
• Keep the most popular movies on disk
  – Good for popular movies, unfair against the others
• Keep the first few minutes of all movies on disk
  – Start the movie from this prefix while the remainder is fetched


6.6.11. Static Disk Scheduling

• In one round, each movie asks for one frame
  – A round is 33.3 msec for NTSC, 40 msec for PAL
• Requests are optimized as usual (e.g. SCAN)

[Figure: per stream, the order in which disk requests are processed within a round.]


6.6.12. Dynamic Disk Scheduling

• Scan-EDF algorithm
  – Uses deadlines & cylinder numbers for scheduling
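The core of Scan-EDF is a two-level sort: primarily by deadline, and among requests sharing the earliest deadline, by cylinder number in SCAN order. A sketch with made-up request tuples:

```python
# Sketch: Scan-EDF ordering. Requests are served by deadline first;
# ties on the deadline are broken by cylinder number (SCAN order).
# The request tuples (stream, deadline_ms, cylinder) are illustrative.

requests = [("A", 40, 700), ("B", 40, 120), ("C", 33, 500), ("D", 40, 300)]

scan_edf = sorted(requests, key=lambda r: (r[1], r[2]))
print([r[0] for r in scan_edf])  # -> ['C', 'B', 'D', 'A']
```

C goes first because its deadline (33 ms) is earliest; B, D, and A share the 40 ms deadline, so they are served in cylinder order to minimize seeks.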


6.7. Comparison of MoD Servers

• Real-time delivery
  – Strong or weak periodic playback, use of RTP + RTSP
• Scalability
  – Horizontal: the number of server nodes can grow
  – Vertical: the capacity of the existing nodes can grow
• Fault tolerance – redundancy available
• Transparency – the client sees only one system
• Per-client system cost
  – cost_client = required_bandwidth_client * duration_client
• Security
  – Intellectual property management (MPEG-21)


6.7.1. The Berkeley VoD System
– Enterprise NMoD – data acquisition to archive servers (AS)
– The AS selects the best video file server (VFS)
– MPEG-1 CMOs on the archive server + tertiary storage devices (tape jukebox or optical disk)
– Video file servers cache CMOs on disk
– CMOs delivered by unicast over UDP

Service model:   Pull (closed)
Scalability:     Medium, h+v
Architecture:    Distributed
Organization:    Static
RT streaming:    No
Fault tolerance: Yes
Per-client cost: Very high
Security:        None


6.7.2. The Tiger VoD System
– Large-scale NMoD (Microsoft)
– Wide striping
– Highly parallel
– Constant bitrate

Service model:   Pull (closed)
Scalability:     Medium, h+v
Architecture:    Distributed
Organization:    Static
RT streaming:    Yes
Fault tolerance: Yes
Per-client cost: Very high
Security:        None


6.7.3. The Darwin Streaming Server
– Open-source version of the Apple QuickTime Streaming Server
– MPEG-1, 2, 4; RTP, RTSP; uni- + multicast
– Skip protection®: uses excess bandwidth to buffer data ahead faster than real time
– Broadcasting of live and prerecorded video via relays (relay → destination server → client)

                Broadcast      TMoD
Service model:  Push (open)    Pull (closed)
Scalability:    High, h+v      Medium, v
Architecture:   Distributed    Centralized
Organization:   Static         Static
Streaming:      Yes            Yes
Fault tol.:     Yes            No
Cost/client:    Very low       Very high
Security:       None           Stream ACLs


6.7.4. The Helix Universal Server
– Real Networks®, partly open source
– Many formats + protocols
– Splitting: a) push, b) pull
• Chain of transmitters and receivers (unicast → multicast)

               Broadcast      TMoD
Serv. model    Push (open)    Pull (closed)
Scalability    High, h+v      High, h+v
Architecture   Distributed    Distributed
Organization   Static         Static
RT streaming   Yes            Yes
Fault tol.     Yes            Yes
Cost/client    Very low       Very low
Security       Yes            Yes


6.8. A case study – IRS
• Integrated Real-Time Resource Scheduling (IRS)
• Multimedia applications on the server
– Consist of dependent tasks
– Every task is bound to one specific resource
– Task sequences are reiterated over the application's lifetime
– Each iteration takes a fixed period P

(Figure: Task Dependency Graph with period P – the tasks read data from disk, transmit it over the network, and display it on the local screen)


Coordinated multiple-resource scheduling

• The application programmer has to specify
– The requirement for each resource
• e.g. number of bytes for transmission, jitter for computations
– The task precedence graph (TPG)
• Under these constraints the IRS framework
– Allocates heterogeneous resources
– Computes a delay budget (deadline per period) for each task
– Checks admission
– Relaxes delay budgets if possible, to balance resources
– Allows local schedulers to define desirability metrics
• e.g. the disk scheduler may prefer requests in SCAN order
– Actual CPU need is calculated in probation mode


Task deadline assignment

• Basic notions
– Work (e.g. Mbit), delay budget (s), capacity (Mbit/s)
• Minimal delay budget L_Ni of a task Task_Ni per period
– L_Ni = needed work_Ni / available capacity of resource_Ni
(available capacity = maximum capacity_Ni − Σ used capacities)
• Admission control for application N
– ∀ L_Ni are feasible ∧ Σ L_Ni ≤ P_N
• Relaxation of deadlines to balance the resource load
– If the period is larger, spare room is available
– Distribute the remaining resource capacities either
• equally, or
• corresponding to the expected demand


Deadline-sensitive SCAN (DSSCAN)

• Consolidates SCAN and EDF
• Two queues
1. Ordered by start-deadline
2. Ordered by SCAN order
• DSSCAN algorithm
1. Pick the first request from the start-deadline queue (EDF)
2. Pick the first request from the SCAN queue
3. Schedule request 2, unless the deadline of request 1 would be missed
4. Rearrange the SCAN queue if request 1 was selected
• Can be extended by a third queue for interactive I/O
– Served fast (before queue 2), but without causing deadline misses


Start-deadline calculation

• Normally only the completion deadline (ED_i) and the delay budget (X_i) are known
• It is better to use the start-deadline
– If the start-deadline of the first element can be satisfied, then the same holds for all requests in the queue
– Only the first element of the queue must be regarded
• (Re-)calculation of the start-deadline:

Current = ED_Ndisk;                    // iterate starting at the latest end-deadline
for (i = Ndisk; i >= 1; i--) {
    SD_i = min(Current, ED_i) - X_i;   // start-deadline of the i-th task
    Current = SD_i;
}

(Figure: timeline with end-deadlines ED1..ED3, delay budgets X1..X3 and the derived start-deadlines SD1..SD3)


IRS Implementation Architecture

(Figure: layered architecture – in user space, the multimedia application sits on the user-level IRS library; a system-call interface for IRS leads into kernel space, which contains the admission controller, RT file-system operations, a resource-usage monitor and the global scheduler, on top of the disk, CPU and network schedulers)


IRS API

• Starting a new thread to execute the new graph in graph_func
graph_id = create_graph(graph_func, period);
• Registration of a periodic read/write task
t_id1 = register_read(fd, bytes_per_period, buff, graph_id);
t_id2 = register_write(fd, bytes_per_period, buff, graph_id);
• Registration of a periodic computational task
t_id3 = register_compute(compute_func, jitter, graph_id);
• Definition of the dependency graph via pairs of dependencies
depend (t_id2, t_id1, graph_id); ...
• Main body of graph_func:
while (some_condition) exec_graph(graph_id);
• On each call to exec_graph, IRS strives to execute all tasks
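Putting the calls together, a client of this API might look like the following pseudo-C sketch (the identifiers follow the slide; the read → compute → write dependency order is an assumption matching the earlier task graph, and the fragment is not compilable against a real IRS installation):

```c
/* Pseudo-C sketch only: API names taken from the slide, types assumed. */
void graph_func(void) {
    while (some_condition)
        exec_graph(graph_id);   /* IRS tries to run all tasks once per period */
}

int main(void) {
    graph_id = create_graph(graph_func, period);

    /* one disk-read task, one computation, one network-write task */
    t_id1 = register_read(fd, bytes_per_period, buff, graph_id);
    t_id3 = register_compute(compute_func, jitter, graph_id);
    t_id2 = register_write(net_fd, bytes_per_period, buff, graph_id);

    /* dependency pairs: compute after read, write after compute */
    depend(t_id3, t_id1, graph_id);
    depend(t_id2, t_id3, graph_id);
}
```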