Robust Quality Management for Differentiated Imprecise Data Services*

Mehdi Amirijoo, Jörgen Hansson
Dept. of Computer and Information Science
Linköping University, Sweden
{meham,jorha}@ida.liu.se

Sang H. Son
Dept. of Computer Science
University of Virginia, Charlottesville, Virginia, USA
[email protected]

Svante Gunnarsson
Dept. of Electrical Engineering
Linköping University, Sweden
[email protected]

Abstract

Several applications, such as web services and e-commerce, operate in open environments where workload characteristics, such as the load applied on the system and the worst-case execution times, are inaccurate or even unknown in advance. This implies that transactions submitted to a real-time database cannot be subject to exact schedulability analysis, given the lack of a priori knowledge of the workload. In this paper we propose an approach, based on feedback control, for managing the quality of service of real-time databases that provide imprecise and differentiated services, given inaccurate workload characteristics. For each service class, the database operator specifies the quality of service requirements by explicitly declaring the precision requirements of the data and of the results of the transactions. The performance evaluation shows that our approach provides reliable quality of service even in the face of varying load and inaccurate execution time estimates.

1. Introduction

Lately the demand for real-time data services, provided by real-time databases (RTDBs), has increased, and applications used in, e.g., manufacturing, web servers, and e-commerce are becoming increasingly sophisticated in their data needs. In these applications it is desirable to process user requests within their deadlines using fresh data. In dynamic systems, such as web servers and sensor networks with non-uniform access patterns, the workload of the databases cannot be precisely predicted and, hence, the databases can become overloaded. As a result, deadline misses and freshness violations may occur during the transient overloads. To address this problem we propose a quality of service (QoS) sensitive approach, based on imprecise computation [13], to guarantee a set of requirements on the behavior of the database, even in the presence of unpredictable workloads.

* This work was funded, in part, by CUGS (the National Graduate School in Computer Science, Sweden), CENIIT (Center for Industrial Information Technology) under contract 01.07, NSF grants CCR-0098269 and IIS-0208758, and ISIS (Information Systems for Industrial Control and Supervision).

In this paper we employ the notion of imprecise computation [13] on transactions as well as data, i.e., we allow data objects to deviate, to a certain degree, from their corresponding values in the external environment. However, imprecise computation will not by itself solve the problems caused by transient overload, as there is an upper limit on the amount of resources that can be traded off for QoS. Instead of attempting to provide service to all the workload submitted to the RTDB, we may service only a subset, representing the most important parts, of that workload. Previous work on service differentiation in real-time systems [3] and RTDBs [10, 8] focuses on the importance or criticality of the transactions. Transactions are usually classified into classes with regard to their importance and it is assumed that the more important classes receive the best QoS. We consider the importance of a class and the QoS that the class requires to be disjoint and, hence, importance and QoS are two orthogonal entities. This is in contrast to less general approaches, e.g., value-driven scheduling [3], where the deadline miss ratio of important transactions is lower than that of less important transactions. For example, consider an embedded vehicle control application [7] where there is a set of tasks with different importance. The fuel ignition task


is very important, whereas engine monitoring tasks are considered less important. However, the fuel ignition task may require less precise data compared to the engine monitoring tasks, which may require very precise data. Consequently, the QoS demand of the monitoring tasks is higher although they are less important compared to the ignition task.

In this paper we present an architecture to manage QoS, defined in terms of data precision and transaction precision, to support service differentiation over multiple classes. To the best of our knowledge, this is the first paper describing performance management of multiple classes in real-time databases that support imprecise computation at the transaction and data object level. As the first contribution we present a QoS specification model supporting orthogonality between importance and QoS. The expressive power of the QoS specification model allows a database operator¹ or database designer to specify not only the desired nominal system performance, but also the worst-case system performance and system adaptability in the face of unexpected failures or load variation. The second contribution is an architecture and two algorithms, based on feedback control scheduling [16, 14], for managing the QoS as given by the QoS specification. The performance studies show that the suggested algorithms give robust performance of RTDBs, in terms of transaction and data precision, even for transient overloads and with inaccurate execution time estimates of the transactions. We achieve resource isolation among the different classes, and we show that during overloads transactions are rejected in a strictly hierarchical fashion based on importance. Finally, the experimental results show that our approach supports orthogonality between class importance and class QoS needs.

The rest of this paper is organized as follows. A problem formulation is given in Section 2. In Section 3, the assumed database model is given. In Section 4, we present an approach for QoS management, and in Section 5, the results of the performance evaluation are presented. In Section 6, we give an overview of related work, followed by Section 7, where conclusions and future work are discussed.

2. Problem Formulation

In our database model, data objects in an RTDB are updated by update transactions, e.g., sensor values, while user transactions represent user requests, e.g., complex read-write operations. We apply the notion of imprecision at the data object and user transaction level. Increasing the resources allocated to the update transactions results in greater quality of data (QoD), as the imprecision of the data objects decreases. Similarly, increasing the resources for user transactions results in greater quality of user transactions, for brevity referred to as quality of transaction (QoT), as the imprecision of the results produced by user transactions decreases. Intuitively, sufficiently precise data values stored in the database are regarded to have no effect on the result of a transaction. If temporal consistency constraints are satisfied, then such imprecision is admissible.

¹ By a database operator we mean an agent, human or computer, that supervises and operates the database, including setting the QoS.

Let $SVC = \{svc_1, \dots, svc_c, \dots, svc_{|SVC|}\}$ denote the set of service classes, where $|SVC|$ denotes the number of service classes. User transactions are classified into service classes based on their importance, where the first level $svc_1$ holds the most important or critical transactions, the second level $svc_2$ holds the less important transactions, and so on. We introduce the notion of transaction error (denoted $te_i$), inherited from the imprecise computation model [13], to measure the quality of a transaction $T_i$. Here, the quality of the result given by a transaction depends on the processing time allocated to the transaction. The transaction returns more precise results, i.e., lower $te_i$, as it receives more processing time. Further, for a data object stored in the RTDB and representing a real-world variable, we can allow a certain degree of deviation compared to the real-world value. If such deviation can be tolerated, arriving updates may be discarded during transient overloads. To measure data quality we introduce the notion of data error (denoted $de_i$), which gives an indication of how much the value of a data object $d_i$ stored in the RTDB deviates from the corresponding real-world value, which is given by the latest arrived transaction updating $d_i$. Note that the latest arrived transaction updating $d_i$ may have been discarded and, hence, $d_i$ may hold the value of an earlier update transaction.

For a service class $svc_c$ we can then specify the desired QoS of $svc_c$ in terms of the $te_i$ that the transactions in $svc_c$ produce and the data error of the data objects that the transactions in $svc_c$ read. We observe that a data object may be accessed by several transactions in different service classes and, hence, there may be different precision requirements put upon the data item. It is clear that we need to ensure that the data error of the data object complies with the needs of all transactions and, consequently, any data error conflicts must be resolved by satisfying the needs of the transaction with the strongest requirement. If the access patterns of the transactions are known in advance, we may use that information to keep the data objects precise. However, in dynamic systems with unpredictable access patterns we must instead predict the access patterns of the transactions during run-time and adapt the precision of the data objects such that the transactions accessing them have precise readings.

Finally, it is important that the $te_i$ of terminated transactions in the same service class does not vary significantly from one transaction to another. Here it is emphasized that large deviations between $te_i$ must be minimized to ensure


QoS fairness. In summary, the goals of this work are to: (i) establish a model for expressing QoS requirements in terms of data error and transaction error for each service class, (ii) develop an architecture and a set of algorithms for managing data error and transaction error such that the given QoS specification for each service level is satisfied, and (iii) minimize the variation of transaction error among admitted transactions, i.e., maximize QoS fairness.

3. Data and Transaction Model

We consider a main-memory database model with one CPU as the main processing element, and the following data and transaction models. In our data model, data objects are classified into two classes, temporal and non-temporal [17]. For temporal data we only consider base data, i.e., data objects that hold the view of the real world and are updated by sensors. A base data object $d_i$ is considered temporally inconsistent or stale if the current time is later than the timestamp of $d_i$ plus the absolute validity interval $avi_i$ of $d_i$, i.e., $currenttime > timestamp_i + avi_i$. For a data object $d_i$, let the data error $de_i = \phi(cv_i, v_j)$ be a non-negative function of the current value $cv_i$ of $d_i$ and the value $v_j$ of the latest arrived transaction that updated $d_i$, or that was to update $d_i$ but was discarded. The function $\phi$ may, for example, be defined as the absolute deviation between $cv_i$ and $v_j$, i.e., $de_i = |cv_i - v_j|$, or the relative deviation, given by $de_i = \frac{|cv_i - v_j|}{|cv_i|}$. To capture the QoD demands of the different service classes, we model the data error as perceived by the transactions in $svc_c$ as $de_i \times def_c$, where $def_c$ denotes the data error factor of the transactions in $svc_c$. The greater $def_c$ is, the greater the transactions in $svc_c$ perceive the data error to be. We define the weighted data error as $wde_i = de_i \times def_{d,i}$, where $def_{d,i}$ is the maximum data error factor of the transactions accessing $d_i$.
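For illustration, a minimal sketch of these definitions (the function names are ours; the relative form assumes $cv_i \neq 0$):

```python
def is_stale(current_time: float, timestamp_i: float, avi_i: float) -> bool:
    # d_i is stale once the current time exceeds its timestamp plus its
    # absolute validity interval avi_i.
    return current_time > timestamp_i + avi_i

def data_error(cv_i: float, v_j: float, relative: bool = True) -> float:
    """de_i as the relative (or absolute) deviation between the stored
    value cv_i and the latest arrived update value v_j."""
    return abs(cv_i - v_j) / abs(cv_i) if relative else abs(cv_i - v_j)

def weighted_data_error(de_i: float, def_di: float) -> float:
    # wde_i = de_i * def_{d,i}, where def_{d,i} is the maximum data error
    # factor among the transactions accessing d_i.
    return de_i * def_di
```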

Update transactions arrive periodically and may only write to base data objects. User transactions arrive aperiodically and may read temporal data and read/write non-temporal data. User and update transactions ($T_i$) are composed of one mandatory subtransaction $m_i$ and $|O_i| \geq 0$ optional subtransactions $o_{i,j}$, where $o_{i,j}$ is the $j$th optional subtransaction of $T_i$. For the remainder of the paper, we let $t_{i,j}$ denote the $j$th subtransaction of $T_i$. Since updates do not use complex logical or numerical operations, we assume that each update transaction consists only of a single mandatory subtransaction, i.e., $|O_i| = 0$. We use the milestone approach [13] to transaction imprecision. Thus, we divide transactions into subtransactions according to milestones. A mandatory subtransaction is completed when it is completed in the traditional sense. The mandatory subtransaction gives an acceptable result and must be computed to completion before the transaction deadline.

Figure 1. Contribution of $|COS_i|$ to $te_i$ for error-function orders $n = 0.5$, $n = 1$, and $n = 2$ ($|COS_i|$ on the horizontal axis from 0 to 10, $te_i$ on the vertical axis from 0 to 1).

The optional subtransactions may be processed if there is enough time or resources available. While it is assumed that all subtransactions of a transaction $T_i$ arrive at the same time, the first optional subtransaction (if any), $o_{i,1}$, becomes ready for execution when the mandatory subtransaction, $m_i$, is completed. In general, an optional subtransaction $o_{i,j}$ becomes ready for execution when $o_{i,j-1}$ (where $1 < j \leq |O_i|$) completes. We set the deadline of every subtransaction $t_{i,j}$ to the deadline of the transaction $T_i$. A subtransaction is terminated if it has completed or has missed its deadline. A transaction $T_i$ is terminated when $o_{i,|O_i|}$ completes or when one of its subtransactions misses its deadline. In the latter case, all subtransactions that are not completed are terminated as well.

For a user transaction $T_i$, we use an error function [5] to approximate its corresponding transaction error, given by

$$te_i(|COS_i|) = 1 - \left(\frac{|COS_i|}{|O_i|}\right)^{n_i},$$

where $n_i$ is the order of the error function and $|COS_i|$ denotes the number of completed optional subtransactions. By choosing $n_i$ we can model and support multiple types of transactions showing different error characteristics (see Figure 1).
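For illustration, a minimal sketch of this error function (the function name is ours):

```python
def transaction_error(completed_optional: int, total_optional: int, n: float) -> float:
    """Error function te_i(|COS_i|) = 1 - (|COS_i| / |O_i|)^n_i.

    A transaction with no optional subtransactions is fully precise
    once its mandatory subtransaction completes (te_i = 0)."""
    if total_optional == 0:
        return 0.0
    return 1.0 - (completed_optional / total_optional) ** n

# With n < 1 the error drops quickly for the first completed optional
# subtransactions; with n > 1 most of the error remains until the end.
print(transaction_error(5, 10, 0.5))  # ~0.293
print(transaction_error(5, 10, 2.0))  # 0.75
```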

4. Approach

Below we describe an approach for managing the performance of an RTDB in terms of transaction and data quality. First, we define QoS and how it can be specified. An overview of the feedback control scheduling architecture is then given, followed by issues related to the modeling of the architecture and the design of controllers. We refer to the presented approach as Robust Quality Management of Differentiated Imprecise Data Services (RDS).

4.1. Performance Metrics and QoS Specification

We apply the following steady-state and transient-state performance metrics [14] to each service class. $Terminated_c(k)$ denotes the set of terminated transactions in service class $svc_c$ during the interval $((k-1)T, kT]$, where $T$ is the sampling period. For the rest of this paper, we sometimes drop $k$ where the notion of time is not important.

Proceedings of the 25th IEEE International Real-Time Systems Symposium (RTSS 2004)

1052-8725/04 $20.00 © 2004 IEEE

Page 4: Robust quality management for differentiated imprecise data services

• The average transaction error of admitted user transactions,

$$ate_c(k) = 100 \times \frac{\sum_{i \in Terminated_c(k)} te_i}{|Terminated_c(k)|} \;(\%),$$

gives the precision of the results produced by user transactions.

• The data precision requirement of user transactions is given using $def_c$.

• Data precision is manipulated by managing the data error of the data objects, which is done by considering an upper bound for the weighted data error, the maximum weighted data error $mwde$. An update transaction $T_j$ is discarded if the weighted data error of the data object $d_i$ to be updated by $T_j$ is less than or equal to $mwde$ (i.e., $wde_i \leq mwde$). If $mwde$ increases, more update transactions are discarded, degrading the quality of data. Setting $mwde$ to zero results in the highest data precision, while setting $mwde$ to one results in the lowest data precision allowed.

• Overshoot $M^c_p$ is the worst-case system performance in the transient system state (see Figure 2) and is given in percent. Overshoot is applied to $ate_c$.

• Settling time $T^c_s$ is the time for the transient overshoot to decay and reach the steady-state performance (see Figure 2); hence, it is a measure of system adaptability, i.e., of how fast the system converges towards the desired performance. Settling time is applied to $ate_c$.

• To measure QoS fairness among admitted transactions, we introduce the standard deviation of transaction error,

$$sdte_c(k) = \sqrt{\frac{\sum_{i \in Terminated_c(k)} (100 \times te_i - ate_c(k))^2}{|Terminated_c(k)| - 1}},$$

which gives a measure of how much the transaction error of terminated transactions deviates from the average transaction error.

• The admission percentage,

$$ap_c = 100 \times \frac{|Admitted_c(k)|}{|Submitted_c(k)|} \;(\%),$$

where $|Admitted_c(k)|$ is the number of admitted transactions and $|Submitted_c(k)|$ is the number of submitted transactions in $svc_c$.
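For illustration, a minimal sketch of the per-class steady-state metrics over one sampling period (the names and the handling of empty sets are our assumptions):

```python
import math

def class_metrics(te_errors, n_admitted, n_submitted):
    """Compute ate_c, sdte_c, and ap_c for one class and one period.

    te_errors holds the te_i of the transactions terminated in svc_c
    during the period ((k-1)T, kT]."""
    n = len(te_errors)
    ate = 100.0 * sum(te_errors) / n if n else 0.0
    sdte = (math.sqrt(sum((100.0 * te - ate) ** 2 for te in te_errors) / (n - 1))
            if n > 1 else 0.0)
    ap = 100.0 * n_admitted / n_submitted if n_submitted else 100.0
    return ate, sdte, ap
```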

We define QoD in terms of $mwde$: an increase in QoD refers to a decrease in $mwde$, while a decrease in QoD refers to an increase in $mwde$. Similarly, we define QoT for a service class $svc_c$ in terms of $ate_c$: QoT for $svc_c$ increases as $ate_c$ decreases, and decreases as $ate_c$ increases. The QoS specification is given in terms of $def_c$ and a set of steady-state target levels, or references, $ate^c_r$ for $ate_c$.

Figure 2. Definition of settling time ($T_s$) and overshoot ($M_p$): the controlled value converges to within ±2% of its steady-state level.

Turning to the QoD requirement specification, for service class $svc_c$ we want $de_i \leq \epsilon$, where $\epsilon$ is an arbitrary data error bound. Assume that several transactions, including those in $svc_c$, with different precision requirements access $d_i$. Then it must hold that $def_{d,i} \geq def_c$, since $def_{d,i}$ is the maximum data error factor of the transactions accessing $d_i$. $d_i$ is least precise when $mwde$ is equal to one and, hence, $de_i \times def_c \leq de_i \times def_{d,i} = wde_i \leq 1$. From this we conclude that $de_i \leq \frac{1}{def_c}$, so by setting $def_c$ to $\frac{1}{\epsilon}$ we satisfy the QoD requirement of the transactions in $svc_c$. As an example, a QoS specification consists of one tuple $\{ate^c_r, def_c, T^c_s, M^c_p\}$ per service class $svc_1, \dots, svc_4$, e.g., with $M^c_p = 35\%$ for all classes. Note that such a specification can give a more important class a weaker QoS requirement than a less important class, i.e., a greater $ate_r$ and a smaller $def$. This shows the orthogonality of importance and QoS requirements.
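To make the shape of such a specification concrete, here is a hypothetical four-class specification written as a configuration table; every numeric value except the 35% overshoot bound used later in the evaluation is a made-up placeholder, not taken from the paper:

```python
# Hypothetical QoS specification, one tuple {ate_r, def, T_s, M_p} per class.
qos_spec = {
    "svc1": {"ate_r": 10.0, "def": 5.0,  "T_s": 60.0, "M_p": 35.0},
    "svc2": {"ate_r": 40.0, "def": 2.0,  "T_s": 60.0, "M_p": 35.0},  # important, lax QoS
    "svc3": {"ate_r": 20.0, "def": 10.0, "T_s": 60.0, "M_p": 35.0},  # less important, strict QoS
    "svc4": {"ate_r": 30.0, "def": 5.0,  "T_s": 60.0, "M_p": 35.0},
}
# Here svc2 has a greater ate_r and a smaller def than svc3, even though
# svc2 is more important: class importance is orthogonal to class QoS needs.
```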

4.2. QoD Classes

We need to make sure that the data precision requirements of the transactions in all service levels are met. An initial approach would be to simply block a transaction accessing a data object that does not satisfy its precision requirement. However, this may lead to many deadline misses and, hence, decreased performance. To lower the blocking time we must instead classify the data objects according to the precision requirements of the transactions accessing them, where each class of data objects represents a data precision requirement. The data classification must be adaptive, since the access patterns of the transactions may change during run-time. A sporadic transaction with a high precision requirement may access a data object only once during the entire operation of the RTDB. Keeping the precision of that data object at a high level may result in a waste of resources, since few or no update transactions are discarded.

Conceptually, when a user transaction with a very high data precision requirement accesses a data object, we classify the data object into a higher QoD class representing greater precision.


Figure 3. QoD classification of the data object $d_i$: the data error factor $def_{d,i}$ plotted over time.

Figure 4. QoS management architecture using feedback control. User transactions enter per-class admission controllers ($AC_1, \dots, AC_{|SVC|}$) and are served by $server_1, \dots, server_{|SVC|}$; update transactions feed the Transaction Handler (FM, BS, CC) directly. A Monitor feeds $ate_1, \dots, ate_{|SVC|}$ back to the ATE Controllers, whose outputs $\Delta c^{ATE}_1, \dots, \Delta c^{ATE}_{|SVC|}$ enter the Capacity Allocator / Overload Resolver, which sets the capacities $c_1, \dots, c_{|SVC|}$ and rejection amounts $r_1, \dots, r_{|SVC|}$, while the Precision Controller / QoD Manager sets $mwde$ for the update transactions.

If, after a while, no transaction with an equal or greater precision requirement accesses that data object, we move the data object to a lower QoD class, representing a lower precision requirement. Once a transaction $T_i$ accesses a data object $d_i$ and the data error factor $def_c$ of $T_i$ is greater than the current data error factor $def_{d,i}$ of $d_i$, we set $def_{d,i}$ to $def_c$; hence, $d_i$ is raised to the QoD class representing $def_c$. The data error factor $def_{d,i}$ then decreases linearly over time, moving through lower QoD classes until a transaction with a higher data error factor accesses $d_i$, as shown in Figure 3: when a transaction in $svc_c$ accesses $d_i$ with $def_c$ greater than $def_{d,i}$, $def_{d,i}$ is set to $def_c$ to adapt to the new precision requirement, and as long as no transactions with higher precision requirements access $d_i$, the precision requirement of $d_i$ is relaxed by lowering $def_{d,i}$. This way the system adapts to changes in access patterns.
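A minimal sketch of this raise-and-decay rule (the linear decay rate is our assumption; the paper only states that $def_{d,i}$ decreases linearly over time):

```python
class QodClassifier:
    """Tracks def_{d,i} for one data object d_i."""

    def __init__(self, decay_per_second: float):
        self.decay = decay_per_second  # assumed linear decay rate
        self.def_di = 0.0              # current data error factor of d_i
        self.last_t = 0.0

    def on_access(self, t: float, def_c: float) -> None:
        """Called when a transaction in svc_c (factor def_c) accesses d_i."""
        self._age(t)
        if def_c > self.def_di:
            self.def_di = def_c        # raise d_i to the stricter QoD class

    def _age(self, t: float) -> None:
        # Relax the precision requirement linearly while no stricter
        # transaction accesses the object.
        self.def_di = max(0.0, self.def_di - self.decay * (t - self.last_t))
        self.last_t = t
```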

4.3. Feedback Control Scheduling Architecture

The architecture of our QoS management scheme is given in Figure 4. Update transactions have higher priority than user transactions and are, upon arrival, ordered in an update ready queue according to the earliest deadline first (EDF) scheduling policy (for an elaborate discussion of EDF see, e.g., [3]). To provide individual QoS guarantees for each user transaction class we have to enforce isolation among the classes by bounding the execution time of the transactions in each class. This is achieved by using deferrable servers [3], which enable us to limit the resource consumption of transactions, while a lower average response time is enforced for the transactions in higher service classes. Let $server_c$ denote the server for $svc_c$ and $c_c$ denote the capacity of $server_c$. We assign priorities to the servers according to their importance, i.e., $server_c$ has higher priority than $server_{c+1}$. Upon activation, $server_c$ serves any pending user transactions in its ready queue within the limit of $c_c$ or until no more user transactions are waiting, at which point $server_c$ becomes suspended and $server_{c+1}$ becomes active. Note that $server_c$ is reactivated if new user transactions arrive and $c_c$ is greater than zero. The capacity is replenished with the sampling period $T$.
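As an illustration of this server discipline, here is a minimal sketch of the dispatch and replenishment rules (the data structures and function names are ours):

```python
def pick_server(servers):
    """servers: list ordered by importance (server_1 first); each has
    .capacity (seconds remaining this period) and .queue (pending
    user transactions). The highest-priority server with remaining
    capacity and pending work runs; it suspends when either runs out."""
    for s in servers:
        if s.capacity > 0 and s.queue:
            return s
    return None  # idle, or only update transactions are runnable

def replenish(servers, capacities):
    # Called at each sampling instant kT with the allocator's output c_c.
    for s, c in zip(servers, capacities):
        s.capacity = c
```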

The transaction handler manages the execution of thetransactions. It consists of a freshness manager (FM), a unitmanaging the concurrency control (CC), and a basic sched-uler (BS). The FM checks the freshness before accessinga data object, using the timestamp and the absolute valid-ity interval of the data. We employ two-phase locking withhighest priority (2PL-HP) [1] for concurrency control. 2PL-HP is chosen since it is free from priority inversion and haswell-known behavior. We consider two different schedulingalgorithms as basic schedulers: (i) Earliest Deadline First(EDF), where transactions are processed in the order deter-mined by their absolute deadlines, and (ii) Highest ErrorFirst (HEF) [2], where transactions are processed in the or-der determined by their transaction error, i.e., the next trans-action to run is the one with the greatest transaction error.For both basic schedulers (EDF and HEF) the mandatorysubtransactions have higher priority than the optional sub-transactions and, hence, scheduled before them. We referto RDSEDF when EDF is used as a BS, correspondingly toRDSHEF when HEF is used as BS.

At each sampling instant $kT$, the controlled variables $ate_c$ are monitored and fed into the ATE Controllers, which compare the performance references $ate^c_r$ with $ate_c$ to get the current performance errors. Based on this, each ATE Controller computes a requested change $\Delta c^{ATE}_c$ to $c_c$. If $ate_c$ is higher than $ate^c_r$, then a positive $\Delta c^{ATE}_c$ is returned, requesting an increase in the capacity so that $ate_c$ is lowered to its reference. The requested changes in capacities are given to the Capacity Allocator, which distributes the capacities according to the class level. During overloads it may not be possible to accommodate all requested capacities. Instead, the QoD is lowered, resulting in more discarded update transactions; hence, more resources can be allocated for user transactions. If the lowest data quality


is reached and no more update transactions can be discarded, the amount of capacity $r_c$ that is not accommodated is returned to the admission controller, which rejects transactions with a total execution time of $r_c$. Now, a controller computes a change in capacity such that $ate_c(k+1)$ equals $ate^c_r$. This change in capacity is based on an observed $ate_c(k)$, which depends on the admitted load and $c_c(k)$. However, due to the unpredictable workload applied on the database, the admitted load may increase (for example, consider the most important service class, where no transactions are rejected). To suppress large overshoots we react in advance by informing the Overload Resolver to modify the capacities when the current admitted load is greater than the admitted load during the previous sampling interval. If, for a service class $svc_c$, the execution time of the admitted transactions in the current period is greater than in the previous period, then an increase in capacity $\Delta c^{AC}_c$ equal to the difference in admitted execution time is requested for $svc_c$. The capacities of $svc_c, \dots, svc_{|SVC|}$ are then recomputed if the increase in capacity can be accommodated. If $\Delta c^{AC}_c$ cannot be accommodated, then more transactions in $svc_c$, corresponding to $\Delta c^{AC}_c$, are rejected.

We have modeled the controlled system using Z-transform theory [6]. The transfer function of the model describing $ate$ in terms of $\Delta c^{ATE}_c$ is given by

$$P(z) = \frac{ate(z)}{\Delta c^{ATE}_c(z)} = \frac{G_c}{z - 1},$$

where $G_c$ is the derivative of the function relating $ate_c$ and $c_c$ in the vicinity of $ate^c_r$. We have tuned $G_c$ by measuring $ate$ for different capacities and taking the slope at $ate_r$. The ATE Controller is implemented using a P-controller tuned with root locus [6].
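For illustration, a minimal sketch of one such P-controller (the gain value is a placeholder; in the paper the gain is obtained by root-locus tuning against the model above):

```python
K_P = 0.05  # hypothetical proportional gain [capacity units per % error]

def ate_controller(ate_ref: float, ate_measured: float) -> float:
    """Return the requested capacity change Delta c^ATE_c at instant kT.

    A positive error (ate above its reference) requests more capacity
    for server_c so that ate_c is driven back down to ate_ref."""
    error = ate_measured - ate_ref
    return K_P * error
```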

4.4. Data and Transaction Error Management

4.4.1. Capacity Allocation. Figure 5 shows how $c_c$, $r_c$, and $mwde$ are computed. To simplify the presentation of the algorithms we denote the execution time of the update transactions by the capacity of the update transactions, although there is no server for update transactions. Let $c_{Update}(k)$ denote the capacity, or equivalently the execution time, of the update transactions. Since the arrival patterns of the update transactions vary, we take a moving average of the update transaction capacity to smooth out great variations (line 1). We start allocating capacities with respect to the service levels, starting with $svc_1$. The requested capacity $c^c_{req}(k+1)$ of service class $svc_c$ is the sum of the previously assigned capacity $c^c(k)$ and the requested change in capacity $\Delta c^{ATE}_c(k+1)$ (line 4). If the total available capacity is less than the requested capacity, then we degrade QoD by calling ChangeUpdateC with the amount of update capacity to free (lines 5-7); see Section 4.4.2 for an elaborate description of ChangeUpdateC. If the requested capacity still cannot be accommodated, we enforce the capacity adjustment by rejecting more transactions (lines 10-11). One key concept in the capacity allocation algorithm is that we employ an optimistic admission policy: we start by admitting all transactions and, in the face of an overload, we reject transactions until the overload is resolved. This contrasts with a pessimistic admission policy, where the admission percentage is increased as long as the system is not overloaded. We believe that the pessimistic admission policy is not suitable, as some critical transactions, i.e., the transactions in higher service classes, are initially discarded. Continuing with the algorithm description, if the requested capacity can be accommodated, we try to reduce the number of rejected transactions (lines 12-21). Finally, after the capacity allocation we check whether there is any spare capacity $c_s$, and if so we upgrade QoD within the limits of $c_s$. If QoD cannot be further upgraded, i.e., $mwde = 0$, then we distribute $c_s$ among the servers (lines 23-30).

4.4.2. QoD Management. The precision of the data is controlled by the QoD Manager, which sets $mwde(k)$ depending on the requested change in update capacity $\Delta c_{Update}(k)$. Rejecting an update results in a decrease in update capacity. We define the gained capacity,

$$gc(k) = \sum_{T_i \in Discarded(k)} eet_i,$$

as the capacity gained as a result of rejecting one or more updates during period $k$, where $Discarded(k)$ is the set of discarded update transactions during the period $((k-1)T, kT]$ and $eet_i$ is the estimated execution time of the update transaction $T_i$. In our approach, we profile the system, measure $gc$ for different values of $mwde$, and linearize the relationship between the two, i.e., $mwde = \beta \times gc$. Further, since RTDBs are dynamic systems in which the behavior of the system and the environment changes, the relation between $gc$ and $mwde$ is adjusted on-line. This is done by measuring $gc(k)$ for a given $mwde(k)$ during each sampling period and updating $\beta$. Having the relationship between $gc$ and $mwde$, we introduce the function

$$h(\Delta c_{Update}(k)) = \beta \times (gc(k) - \Delta c_{Update}(k)),$$

which returns an $mwde$ given $\Delta c_{Update}(k)$ and the linear relation between $gc$ and $mwde$. Since $mwde$ can be neither greater than one nor less than zero, we use the function

$$f(\Delta c_{Update}(k)) = \begin{cases} 1 & \text{if } h(\Delta c_{Update}(k)) > 1 \\ h(\Delta c_{Update}(k)) & \text{if } 0 \leq h(\Delta c_{Update}(k)) \leq 1 \\ 0 & \text{if } h(\Delta c_{Update}(k)) < 0 \end{cases}$$

to enforce this requirement. Now that we have arrived at $f$ it is straightforward to compute $mwde$, as shown in Figure 6. The function ChangeUpdateC returns the estimated change in update capacity given the requested change $\Delta c_{Update}(k)$:

ChangeUpdateC($\Delta c_{Update}(k)$)
  $mwde(k+1) \leftarrow f(\Delta c_{Update}(k))$
  Return $\frac{1}{\beta} \times (mwde(k) - mwde(k+1))$ {return the estimated change in update transaction capacity}

Figure 6. The QoD management algorithm.
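A minimal sketch of this QoD-management computation under the stated linearization $mwde = \beta \times gc$ (the names and sign conventions are ours):

```python
def h(delta_c_update: float, gc_k: float, beta: float) -> float:
    """Map a requested change in update capacity to an mwde via the
    linear relation mwde = beta * gc; freeing update capacity
    (a negative delta) raises gc and therefore mwde."""
    return beta * (gc_k - delta_c_update)

def f(delta_c_update: float, gc_k: float, beta: float) -> float:
    # Clamp h(.) to [0, 1], since mwde is bounded by definition.
    return min(1.0, max(0.0, h(delta_c_update, gc_k, beta)))

def change_update_c(delta_c_update: float, state: dict) -> float:
    """ChangeUpdateC: set mwde(k+1) and return the estimated change in
    update transaction capacity (negative when capacity is freed)."""
    mwde_next = f(delta_c_update, state["gc"], state["beta"])
    delta = (state["mwde"] - mwde_next) / state["beta"]
    state["mwde"] = mwde_next
    return delta
```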


ComputeCapacity($\Delta c^{ATE}_1(k+1), \dots, \Delta c^{ATE}_{|SVC|}(k+1)$)
1: c_Update(k) ← α × c_Update(k) + (1 − α) × c_Update(k−1) {moving average of the update capacity; α is the smoothing coefficient}
2: c(k+1) ← c_Update(k) {the total capacity is set to the update capacity}
3: for c ← 1 to |SVC| do
4:   c_req^c(k+1) ← max(0, c^c(k) + Δc_ATE^c(k+1)) {compute the requested capacity}
5:   if T − c(k+1) < c_req^c(k+1) then {if the total available capacity is less than the requested capacity}
6:     c(k+1) ← c(k+1) + ChangeUpdateC(T − c(k+1) − c_req^c(k+1)) {try to degrade QoD to free capacity}
7:   end if
8:   c^c(k+1) ← min(T − c(k+1), c_req^c(k+1)) {assign capacity to server_c}
9:   c(k+1) ← c(k+1) + c^c(k+1) {update the total capacity}
10:  if c^c(k+1) < c_req^c(k+1) then {if the assigned capacity is less than the requested capacity}
11:    r^c(k+1) ← r^c(k) + c_req^c(k+1) − c^c(k+1) {reject more transactions}
12:  else if r^c(k) > 0 then {if the requested capacity was accommodated and we rejected transactions during the previous period}
13:    if T − c(k+1) < r^c(k) then {if the available capacity is less than the rejected capacity in the previous period}
14:      c(k+1) ← c(k+1) + ChangeUpdateC(T − c(k+1) − r^c(k)) {try to degrade QoD to reject as few transactions as possible}
15:    end if
16:    r^c(k+1) ← max(0, r^c(k) − (T − c(k+1))) {lower the transaction rejection}
17:    c^c(k+1) ← c^c(k+1) + r^c(k) − r^c(k+1) {allocate more capacity to accommodate the decrease in transaction rejection}
18:    c(k+1) ← c(k+1) + r^c(k) − r^c(k+1) {update the total capacity}
19:  else {if the requested capacity is accommodated and we did not reject transactions during the previous period}
20:    r^c(k+1) ← 0 {do not reject transactions during the next period}
21:  end if
22: end for
23: c_s(k+1) ← T − c(k+1) {compute the spare capacity that is not used by the classes}
24: if c_s(k+1) > 0 then {if there is spare capacity}
25:   c_s(k+1) ← max(0, c_s(k+1) − ChangeUpdateC(c_s(k+1))) {try to upgrade QoD as much as possible}
26:   for c ← 1 to |SVC| do
27:     {if there is still spare capacity left after the QoD upgrade then give more capacity to the class servers}
28:     c^c(k+1) ← c^c(k+1) + c_s(k+1)/|SVC| {divide the spare capacity evenly between the classes}
29:   end for
30: end if

Figure 5. The capacity allocation algorithm.

5. Performance Evaluation

5.1. Experimental Goals

The performance evaluation is undertaken by a set of simulation experiments in which a set of parameters is varied: (i) the load ($load$), since computational systems may show different behavior for different loads, especially when the system is overloaded, and (ii) the execution time estimation error ($esterr$), since exact execution time estimates of transactions are often not known.

5.2. Simulation Setup

The simulated workload consists of update and user transactions, which access data and perform virtual arithmetic/logical operations on the data. In the experiments, one simulation run lasts for 10 minutes of simulated time. For all the performance data, we have taken the average of 10 simulation runs and derived 95% confidence intervals. We use the QoS specification given in Section 4.1. The workload models of the update and user transactions are described as follows, using the notation that the attribute $x_i$ refers to the transaction $T_i$, and $x_i[t_{i,j}]$ is associated with the subtransaction $t_{i,j}$ of $T_i$.

Data and Update Transactions. The database holds 1000 temporal data objects ($d_i$), where each data object is updated by a stream ($stream_i$, $1 \leq i \leq 1000$). The period ($p_i$) is uniformly distributed in the range (100 ms, 50 s), i.e., $U(100\,\mathrm{ms}, 50\,\mathrm{s})$, and the estimated execution time ($eet_i$) is given by $U(\dots\,\mathrm{ms}, \dots\,\mathrm{ms})$. The average update value ($av_i$) of each $stream_i$ is given by $U(\dots, \dots)$. Upon periodic generation of an update, $stream_i$ gives the update an actual execution time drawn from the normal distribution $N(eet_i, \sqrt{eet_i})$ and a value ($v_i$) according to $N(av_i, \sqrt{av_i \times varfactor})$, where $varfactor$ is uniformly distributed in (0, 0.5). The deadline is set to $arrivaltime_i + p_i$. We define the data error as the relative deviation between $cv_i$ and $v_j$, i.e., $de_i = 100 \times \frac{|cv_i - v_j|}{|cv_i|} \;(\%)$.

User Transactions. Each $source_i$ generates a transaction $T_i$ consisting of one mandatory subtransaction and $|O_i|$, uniformly distributed between 1 and 10, optional subtransaction(s). The estimated (average) execution time of the mandatory and the optional subtransactions ($eet_i[t_{i,j}]$) is given by $U(\dots\,\mathrm{ms}, \dots\,\mathrm{ms})$. The estimation error $esterr$ is used to introduce an execution time estimation error in the average execution time, given by $aet_i[t_{i,j}] = (1 + esterr) \times eet_i[t_{i,j}]$. Further, upon generation of a transaction, $source_i$ associates an actual execution time with each subtransaction $t_{i,j}$, given by $N(aet_i[t_{i,j}], \sqrt{aet_i[t_{i,j}]})$. The deadline is set to $arrivaltime_i + eet_i \times slackfactor$, where the slack factor is uniformly distributed according to $U(\dots, \dots)$. In the performance evaluation we consider four service classes,


where 20%, 20%, 30%, and 30% of the workload is distributed to $svc_1, \dots, svc_4$, respectively.
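For illustration, a sketch of the user-transaction generator under the stated distributions; the execution time bounds and the slack range are our placeholders where the setup's exact values are not given here:

```python
import random

EET_LO, EET_HI = 0.001, 0.008    # assumed subtransaction eet bounds [s]
SLACK_LO, SLACK_HI = 1.0, 40.0   # assumed slack factor range

def generate_user_transaction(now: float, esterr: float) -> dict:
    n_opt = random.randint(1, 10)                    # |O_i| ~ U(1, 10)
    eets = [random.uniform(EET_LO, EET_HI) for _ in range(1 + n_opt)]
    aets = [(1.0 + esterr) * e for e in eets]        # estimation error applied
    actual = [max(0.0, random.gauss(a, a ** 0.5)) for a in aets]
    deadline = now + sum(eets) * random.uniform(SLACK_LO, SLACK_HI)
    return {"eet": eets, "actual": actual, "deadline": deadline}
```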

5.3. Baseline

To the best of our knowledge, there has been no earlier work on techniques for managing data imprecision and transaction imprecision while satisfying QoS or QoD requirements for differentiated services. For this reason we compare RDS_EDF and RDS_HEF with a baseline, called Admit-All, where all transactions are admitted and scheduled with EDF, and no QoD management is performed. This way we can study the impact of the workload on the system, e.g., how $ate$ is affected by increasing workload.

5.4. Experiment 1: Results of Varying Load

We measure $ate$, $ap$, $mwde$, and $sdte$, applying loads from 40% to 700%. The execution time estimation error is set to zero (i.e., $esterr = 0$). Figure 7 shows $ate$, $ap$, and $mwde$. The dashed lines indicate references, while the dash-dotted lines give the overshoot. The confidence intervals for $ap$ and $ate$ are less than ±…, the confidence interval for $mwde$ is less than ±…, and the confidence interval for $sdte$ is less than ±….

As we can see from Figure 7(a), the $ate$ of the service classes increases with increasing load. Admit-All violates the QoS specification, as $ate_c$ exceeds $ate^c_r \times (100\% + M^c_p)$, while RDS_EDF and RDS_HEF produce $ate$ reaching the references and, hence, satisfying the QoS specification. As the load increases, the admission percentage of the lowest service class, $svc_4$, decreases and more transactions are rejected. This means that when $ap_4$ becomes zero, $ate_4$ is equal to zero, as we always measure the average transaction error over admitted transactions. From Figure 7(a) we see that $ate_4$ starts decreasing at loads equal to 180%, but starts increasing again at 240% load. We have tuned the ATE Controllers for loads around 100%; the effect of this is a delay until $ap_4$ reaches zero as the load increases. As the capacity assigned to the server of $svc_4$ is zero for loads greater than 240% and $ap_4$ does not decrease to zero instantaneously, the average $ate_4$ becomes greater than zero. This is further shown in Section 5.6, where we discuss the transient performance of the algorithms.

Turning to the admission percentage in Figure 7(b), we observe the strictly hierarchical admission policy, where the lowest service class suffers at the expense of the higher service classes. $ap_4$ starts decreasing at 140% load, and when all transactions in $svc_4$ are rejected, $ap_3$ starts decreasing. Note that the graphs show the average $ap$ over the entire run, so we cannot observe a zero $ap$, as we start with 100% admission at the beginning of the run; taking the average of $ap_4$ at steady state would yield a zero average $ap_4$ for loads greater than 240%.

Figure 7. Experiment 1: Varying load. (a) $ate_1, \dots, ate_4$ (%) versus load (%); (b) $ap_1, \dots, ap_4$ (%) versus load (%); (c) $mwde$ versus load (%); each plotted for RDS_EDF, RDS_HEF, and Admit-All.

Turning to Figure 7(c), we see that $mwde$ starts increasing at 100% load, trying to lower the update load so that no user transactions are rejected. Studying $sdte$, we note that the difference between RDS_HEF and RDS_EDF is not significant; however, in all cases RDS_HEF provides a lower $sdte$ than RDS_EDF, with a maximum difference of …% observed.

From the QoS specification (see Section 4.1), a more important class may have a weaker QoS requirement than a less important class, i.e., a greater $ate_r$. Figure 7 shows that the more important class then indeed settles at the greater $ate$ (i.e., its delivered QoS is lower)


and at the same time its admission percentage is greater than or equal to that of the less important class (i.e., transactions of the less important class are rejected in favor of those of the more important class). This result clearly shows that RDS supports orthogonality between importance and QoS requirements. In summary, we have shown that RDS_HEF and RDS_EDF provide robust and reliable performance consistent with the QoS specification for varying load, as the $ate$ of all classes stays below the specified overshoot. The admission mechanism enforces the desired strictly hierarchical admission policy, and RDS_HEF provides more QoS fairness than RDS_EDF. The latter is consistent with observations from experiments where service differentiation was not used [2].

5.5. Experiment 2: Results of Varying esterr

In Experiment 1 (Section 5.4) we examined the behavior of the algorithms for varying load, assuming no execution time estimation error. In reality, exact execution times are not available, and for this reason we evaluate the behavior of the algorithms as the execution time estimation error increases. We measure $ate$, $ap$, and $mwde$ and apply a 300% load. The execution time estimation error $esterr$ is varied between -0.4 and 3 in steps of 0.2. This way we examine the effects of both overestimation and underestimation of the execution time. Figure 8 shows $ate$, $ap$, and $mwde$. The confidence interval for $ap$ is less than ±…, the confidence interval for $ate$ is less than ±…, and the confidence interval for $mwde$ is less than ±…. Ideally, the performance of the RTDB should not be affected by execution time estimation errors; this corresponds to little or no variation in $ate$, $ap$, and $mwde$ as $esterr$ changes. As shown in Figure 8, $ate$, $ap$, and $mwde$ do not change significantly with varying $esterr$. From this we conclude that RDS_HEF and RDS_EDF are insensitive to changes in the execution time estimation error, which means that RDS copes with inaccurate execution times while satisfying the QoS specification.

5.6. Experiment 3: Transient Performance

Studying the average performance is often not enough when dealing with dynamic systems. Therefore we study the transient performance of RDS_EDF; we do not include the performance of RDS_HEF, as it behaves similarly to RDS_EDF. We measure $ate$, $ap$, and $c$. The load is set to 200% and $esterr$ is set to zero. Figure 9 shows the transient behavior of RDS_EDF. The dash-dotted lines indicate overshoot, whereas the dashed lines represent references. The overshoots of $ate_1, \dots, ate_3$ are measured to be 2.18% ($ate_1$ equal to 30.65% at time 20 s), 33.24% ($ate_2$ equal to 26.65% at time 15 s), and 38.21% ($ate_3$ equal to 55.28% at time 10 s), while the overshoot requirement given by the QoS specification is 35% for all classes.

Figure 8. Experiment 2: Varying execution time estimation error. (a) $ate_1, \dots, ate_4$ (%) versus $esterr$; (b) $ap_1, \dots, ap_4$ (%) versus $esterr$; (c) $mwde$ versus $esterr$; plotted for RDS_EDF and RDS_HEF.

Hence, the overshoot requirements for the most important service classes $svc_1$ and $svc_2$ are satisfied. The overshoot for $svc_4$ is undefined, since the steady-state value of $ate_4$ is zero.

As we can see, $ate_4$ reaches 100% and it takes a while until $ate_4$ becomes zero. Using feedback control, we are able to react to changes only when the controlled variable has changed; hence, when $ate_4$ is greater than $ate^4_r$, the ATE Controller for $svc_4$ computes a positive $\Delta c^{ATE}_4$, requesting more capacity.


Figure 9. Experiment 3: Transient performance. (a) $ate_1, \dots, ate_4$ (%) over time (s); (b) $ap_1, \dots, ap_4$ (%) over time (s); (c) capacities $c_1, \dots, c_4$ over time (s).

As the assigned capacity of $server_4$ is less than the requested capacity ($c_4$ is near zero), transactions are rejected instead, according to the capacity allocation algorithm (see Figure 5). The rejection rate increases as $\Delta c^{ATE}_4$ increases and, hence, a larger $\Delta c^{ATE}_4$ results in improved suppression of $ate_4$ and faster convergence to zero. The magnitude of $\Delta c^{ATE}_4$ increases with the magnitude of the P-controller parameter. Now, we have tuned the controllers at 100% load, and considering that the applied load in this experiment is 200%, the magnitude of the P-controller parameter is not sufficiently large to efficiently suppress $ate_4$.

In other experiments, not presented in this paper, we have observed significantly improved $ate_4$ suppression and faster QoS adaptation when the applied load equals the load at which the controllers were tuned. By increasing the magnitude of the P-controller parameter as the applied load increases, better QoS adaptation is achieved. One way to deal with changing system properties, e.g., the load applied on the system, is to use gain scheduling or adaptive control [18], where the behavior of the controlled system is monitored at run-time and the controllers are adapted accordingly. In our case, the RTDB would react to a higher applied load by increasing the magnitude of the P-controller parameter such that faster QoS adaptation is achieved. We believe that using gain scheduling, where the parameters of the P-controllers change according to the current applied load, would result in a substantial performance gain with respect to faster QoS adaptation. In our future work we plan to use gain scheduling to update the control parameters.
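A minimal sketch of the gain scheduling proposed here as future work; the operating points and gains below are illustrative assumptions:

```python
# (load, K_P) pairs: gains tuned off-line at a few operating points.
SCHEDULE = [(1.0, 0.05), (2.0, 0.12), (4.0, 0.25)]

def scheduled_gain(load: float) -> float:
    """Return the P-controller gain tuned for the operating point
    nearest to the currently measured load."""
    return min(SCHEDULE, key=lambda lk: abs(lk[0] - load))[1]

def ate_controller(ate_ref: float, ate_meas: float, load: float) -> float:
    # Same P-control law as before, but with a load-dependent gain.
    return scheduled_gain(load) * (ate_meas - ate_ref)
```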

6. Related Work

Liu et al. [13] and Hansson et al. [9] presented algorithms for minimizing the total error and total weighted error of a set of tasks. The latter cannot be applied to our problem, since we want to control a set of performance metrics such that they converge towards a set of references given by a QoS specification. The query processor APPROXIMATE [19] produces monotonically improving answers as the allocated computation time increases. The relational database system CASE-DB can produce approximate answers to queries within certain deadlines [15]. Lee et al. studied the performance of real-time transaction processing where updates can be skipped [12]. In contrast to the above-mentioned work, we have introduced imprecision at both the data object and transaction level and expressed QoS in terms of data and transaction imprecision. Rajkumar et al. presented a QoS model, called Q-RAM, for applications that must satisfy requirements along multiple dimensions such as timeliness and data quality [11]. However, they assume that the amount of resources an application requires is known and accurate; otherwise optimal resource allocation cannot be made. Kang et al. used a feedback control scheduling architecture to balance the load of user and update transactions for differentiated services [10], where the database operator can specify miss ratio and utilization requirements. However, in that work performance isolation between classes is not implemented and, consequently, orthogonality between class priority and class QoS requirements cannot be realized. In this paper we have extended our previous work [2] on managing QoS to support service differentiation, including a new QoS specification


model that supports orthogonality between importance and QoS requirements, and a new QoS management algorithm. Feedback control scheduling has received special attention in the past few years [14, 16, 4]. However, none of these works has addressed QoS management of imprecise real-time data services.

7. Conclusions and Future Work

In this paper we have argued for the increasing need for RTDBs that are able to react to changes in their environment in a timely manner, and we have presented an approach for specifying and managing QoS, in terms of transaction and data precision, for differentiated and imprecise real-time data services. The expressive power of the QoS specification model allows a database operator or database designer to specify not only the desired nominal performance, but also the worst-case system performance and system adaptability in the face of unexpected failures or load variation. Further, the QoS specification model allows the database operator to specify the importance and the QoS requirements of the transactions independently. The presented QoS management algorithms, RDS_HEF and RDS_EDF, give a robust and controlled behavior of RTDBs in terms of transaction and data precision, even for transient overloads and with inaccurate run-time estimates of the transactions.

In the current work all transactions in a service class have the same QoS requirement. In future work we will extend the transaction and service model such that each service class may have multiple QoS requirements. We also plan to implement gain scheduling [18] to update the parameters of the P-controllers such that the control action can be adapted depending on the system properties.

References

[1] R. Abbott and H. Garcia-Molina. Scheduling real-time transactions: A performance evaluation. ACM Transactions on Database Systems, 17:513-560, 1992.

[2] M. Amirijoo, J. Hansson, and S. H. Son. Error-driven QoS management in imprecise real-time databases. In Proceedings of the Euromicro Conference on Real-Time Systems (ECRTS), 2003.

[3] G. C. Buttazzo. Hard Real-Time Computing Systems. Kluwer Academic Publishers, 1997.

[4] G. C. Buttazzo and L. Abeni. Adaptive workload management through elastic scheduling. Journal of Real-Time Systems, 23(1/2), July/September 2002. Special Issue on Control-Theoretical Approaches to Real-Time Computing.

[5] J. Chung and J. W. S. Liu. Algorithms for scheduling periodic jobs to minimize average error. In Proceedings of the Real-Time Systems Symposium (RTSS), 1988.

[6] G. F. Franklin, J. D. Powell, and M. Workman. Digital Control of Dynamic Systems. Addison-Wesley, third edition, 1998.

[7] T. Gustafsson and J. Hansson. Data management in real-time systems: a case of on-demand updates in vehicle control systems. In Proceedings of the Real-Time Applications Symposium (RTAS), 2004.

[8] J. Hansson, S. H. Son, J. A. Stankovic, and S. F. Andler. Dynamic transaction scheduling and reallocation in overloaded real-time database systems. In Proceedings of the Conference on Real-Time Computing Systems and Applications (RTCSA), 1998.

[9] J. Hansson, M. Thuresson, and S. H. Son. Imprecise task scheduling and overload management using OR-ULD. In Proceedings of the Conference on Real-Time Computing Systems and Applications (RTCSA), 2000.

[10] K.-D. Kang, S. H. Son, and J. A. Stankovic. Service differentiation in real-time main memory databases. In Proceedings of the International Symposium on Object-oriented Real-time Distributed Computing, 2002.

[11] C. Lee, J. Lehoczky, R. Rajkumar, and D. Siewiorek. On quality of service optimization with discrete QoS options. In Proceedings of the 5th Real-Time Technology and Applications Symposium (RTAS), 1999.

[12] V. Lee, K. Lam, S. H. Son, and E. Chan. On transaction processing with partial validation and timestamps ordering in mobile broadcast environments. IEEE Transactions on Computers, 51(10):1196-1211, 2002. Special issue on Database Management Systems and Mobile Computing.

[13] J. W. S. Liu, W.-K. Shih, K.-J. Lin, R. Bettati, and J.-Y. Chung. Imprecise computations. Proceedings of the IEEE, 82, Jan 1994.

[14] C. Lu, J. A. Stankovic, G. Tao, and S. H. Son. Feedback control real-time scheduling: Framework, modeling and algorithms. Journal of Real-Time Systems, 23(1/2), July/September 2002. Special Issue on Control-Theoretical Approaches to Real-Time Computing.

[15] G. Ozsoyoglu, S. Guruswamy, K. Du, and W.-C. Hou. Time-constrained query processing in CASE-DB. IEEE Transactions on Knowledge and Data Engineering, 7(6), 1995.

[16] S. Parekh, N. Gandhi, J. Hellerstein, D. Tilbury, T. Jayram, and J. Bigus. Using control theory to achieve service level objectives in performance management. Journal of Real-Time Systems, 23(1/2), July/September 2002. Special Issue on Control-Theoretical Approaches to Real-Time Computing.

[17] K. Ramamritham. Real-time databases. International Journal of Distributed and Parallel Databases, (1), 1993.

[18] K. J. Åström and B. Wittenmark. Adaptive Control. Addison-Wesley, second edition, 1995.

[19] S. V. Vrbsky and J. W. S. Liu. APPROXIMATE - a query processor that produces monotonically improving approximate answers. IEEE Transactions on Knowledge and Data Engineering, 5(6):1056-1068, December 1993.
