Anirban Sinha, Charles "Buck" Krasic (University of British Columbia); Ashvin Goel (University of Toronto)

A. Traditional Scheduling Approach
• Multi-level feedback queue algorithm: hasn't changed in 30 years.
• Separates CPU-intensive and IO-intensive jobs.
• Priority based.
• Breaks down for mixed CPU- and IO-intensive jobs, like video applications, security-enabled web servers, databases, etc.
• Using real-time priority leads to starvation and livelocks.
• Behavior can be hard to predict: deadlocks, livelocks, or priority inversion may occur.
• Poor adaptation for adaptive time-sensitive workloads.

B. O(1) Scheduler

C. Pure Fairshare Scheduling
• Time-based approach, as opposed to priority.
• No starvation; overall fairness in the system.
• Better balance between desktop and server performance needs.
• Benefits from recent infrastructural components:
  • fine-grained time accounting;
  • high-resolution timers;
  • effective data structures (heaps, red-black trees, etc.).

Q: Can we do better?
A: Yes, by combining fair sharing with cooperation.

D. Overview of Our Approach: Cooperative Polling
• The coop_poll() system call.
• User space: best-effort tasks and cooperative tasks.
• Kernel space: the kernel task scheduler, with an event queue (coop) and a virtual time queue (fairshare).

Step 1: Cooperative tasks inform the kernel of their most important event parameters.
Step 2: The kernel inserts this information in its own event queue, which contains event info for all coop tasks.
Step 3: The kernel chooses the next task to run by inspecting the head of the virtual time queue. The task with the smallest virtual time gets chosen.
Step 4: If a cooperative task is selected from the virtual time queue, the kernel instead selects the task at the top of the coop event queue. It calculates the task's timeslice based on the nearest deadline among the other coop tasks.
Step 5: The kernel informs the most important task of the next most important event of the other coop tasks.

E. Overview of Our Implementation
• Virtual-time based.
• One new system call: coop_poll().
• Uses efficient heaps for priority queues.
• Benefits from high-resolution one-shot timers and precise time accounting in the kernel.
• We use playback of multiple videos to represent a rich workload of multiple time-sensitive applications.

F. Pure Fairshare vs. Cooperative Approach
• Dispatcher latency: actual minus requested dispatch time.
[Figure: Dispatcher latency with increasing videos]
• Under pure fairshare, the latency increases quickly under heavy load with increasing videos.
[Figure: Frame rate of all 10 simultaneous videos]
• Some of the videos experience noticeable interruptions.
[Figure: Context switch rate; throughput as a % of the single-player case]
• Fairshare at the finest granularity has 5x the latency of coop, yet its context switch rate is 2x worse.

G. Coordinated Adaptation
• Maintain overall fairness.
• Allow cooperation between time-sensitive tasks via the kernel:
  • gives preferential treatment to time-sensitive tasks within the boundaries of fairness;
  • facilitates uniform fidelity across tasks.
[Figure: Frame rate of all 12 videos at overload]
• The videos are able to maintain a uniform quality even at overload.

H. Conclusion
Coop + fairshare:
• gives better timeliness (smaller latency) even under overload;
• facilitates coordinated adaptation for multiple adaptive tasks;
• informed context switching is cache efficient, leading to a better timeliness-throughput balance.
The cooperative approach leverages application information to context switch in a much more strategic fashion.