Actor-Oriented Programming for Wireless Sensor Networks

Elaine Cheong

Electrical Engineering and Computer Sciences
University of California at Berkeley

Technical Report No. UCB/EECS-2007-112

http://www.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-112.html

August 30, 2007


Copyright © 2007, by the author(s). All rights reserved.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.


Actor-Oriented Programming for Wireless Sensor Networks

by

Elaine Cheong

B.S. (University of Maryland, College Park) 2000
M.S. (University of California, Berkeley) 2003

A dissertation submitted in partial satisfaction of the requirements for the degree of

Doctor of Philosophy

in

Engineering – Electrical Engineering and Computer Sciences

in the

GRADUATE DIVISION
of the

UNIVERSITY OF CALIFORNIA, BERKELEY

Committee in charge:
Professor Edward A. Lee, Chair

Professor Eric A. Brewer
Professor Paul K. Wright

Fall 2007


Actor-Oriented Programming for Wireless Sensor Networks

Copyright 2007

by

Elaine Cheong


Abstract

Actor-Oriented Programming for Wireless Sensor Networks

by

Elaine Cheong

Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences

University of California, Berkeley

Professor Edward A. Lee, Chair

Wireless sensor networking is an emerging area of embedded systems that has the potential to revolutionize our lives at home and at work, with wide-ranging applications, including environmental monitoring and conservation, manufacturing and industrial control, business asset management, seismic and structural monitoring, transportation, health care, and home automation. Building sensor networks today requires piecing together a variety of hardware and software components, each with different design methodologies and tools, making it a challenging and error-prone process.

In this dissertation, I advocate using an actor-oriented approach to designing, generating, programming, and simulating wireless sensor network applications. Actor-oriented programming provides a common high-level language that unifies the programming interface between the operating system, node-centric, middleware, and macroprogramming layers of a sensor network application.

This dissertation presents the TinyGALS (Globally Asynchronous, Locally Synchronous) programming model, which is built on the TinyOS programming model. TinyGALS is implemented in the galsC programming language, which provides constructs to systematically build concurrent tasks called actors. The galsC compiler toolsuite provides high-level type checking and code generation facilities and allows developers to deploy actor-oriented programs on actual sensor node hardware.
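The globally asynchronous, locally synchronous discipline summarized above can be sketched in a few lines. The following Python sketch is purely illustrative and is not galsC syntax: within an actor, behavior runs as a direct synchronous call; between actors, tokens travel through FIFO queues drained by a scheduler. All names (Actor, fire, run) are assumptions made for this sketch.

```python
from collections import deque

class Actor:
    """Locally synchronous: an actor's internal behavior runs as a direct call.
    Globally asynchronous: its outputs are queued, not delivered immediately."""
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler     # function: input token -> list of output tokens
        self.out_queues = []       # FIFO links to downstream actors

    def connect(self, queue):
        self.out_queues.append(queue)

    def fire(self, token):
        for result in self.handler(token):
            for q in self.out_queues:
                q.append(result)   # enqueue instead of calling the receiver

def run(links):
    """Minimal scheduler: drain inter-actor queues until the system is quiescent."""
    progress = True
    while progress:
        progress = False
        for queue, receiver in links:
            while queue:
                receiver.fire(queue.popleft())
                progress = True

readings = []
sense = Actor("sense", lambda raw: [raw * 2])                # e.g. scale a raw sample
report = Actor("report", lambda v: readings.append(v) or [])
link = deque()
sense.connect(link)
sense.fire(21)          # queues an event; the report actor has not yet run
run([(link, report)])   # the scheduler delivers the queued token
# readings is now [42]
```

The point of the decoupling is that the sender never blocks on the receiver, which is the property TinyGALS exploits for interrupt-driven sensor node software.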

This dissertation then describes Viptos (Visual Ptolemy and TinyOS), a joint modeling and design environment for wireless networks and sensor node software. Viptos is built on Ptolemy II, an actor-oriented graphical modeling and simulation environment for embedded systems, and TOSSIM, an interrupt-level discrete-event simulator for TinyOS networks.

This dissertation also presents methods for using higher-order actors with various metaprogramming and generative programming techniques that enable wireless sensor network application developers to create high-level, parameterizable descriptions and automatically generate sensor network simulation scenarios from these models.

All of the tools I developed and describe in this dissertation are open-source and freely available on the web. The networked embedded computing community can use these tools and the knowledge shared in this dissertation to improve the way we program wireless sensor networks.

Professor Edward A. Lee
Dissertation Committee Chair


To the teachers

who have challenged, encouraged, and guided me through life.


Café Ptolemy
Executive Chefs: Elaine Cheong and Andrew Mihal

Appetizers
Avocado Smash and Double-Decker Baked Quesadillas
Corn Pancakes with Green Onions, Actors, Crème Fraîche, Avocado, and Cherry Tomatoes

Salads
Fuyu Persimmon, Avocado & Watercress Salad with Discrete-Event Miso Dressing
Corn and Cherry Tomato Salad with Arugula

Soups
Mexican Chicken Soup
Butternut Squash Bisque
Why-the-Chicken-Crossed-the-Model Santa Fe-Tastic Tortilla Soup

Entrees
Chicken Pot Pie
Fettuccine with Concurrent Meyer Lemon Butter Sauce
Chicken Tikka Masala
Farfalle with Ptalon Pesto, Feta, and Cherry Tomatoes
Butternut Squash Lasagna
Cumin Crusted Chicken with Cotija and Mango-Garlic Sauce with Green Onion Pesto Mashed Motes
Butternut Squash, Metacabbage & Pancetta Risotto with Basil Oil

Desserts
Marillenknödel: Austrian apricot dumplings
Zwetschgendatschi: Bavarian plum delicacy

Drinks
Plum Granita with Limoncello
Vinho do IOPorto
Lychee-flavoured Ramune (ラムネ)


Contents

List of Figures
List of Tables

1 Introduction
  1.1 Wireless Sensor Networks
  1.2 Actor-Oriented Programming
  1.3 Actor-Oriented Programming for Wireless Sensor Networks
  1.4 Previously Published Work

2 Background
  2.1 TinyOS
    2.1.1 NesC syntax
    2.1.2 TinyOS execution model
  2.2 Ptolemy II
    2.2.1 VisualSense
  2.3 Summary

3 TinyGALS and galsC
  3.1 The TinyGALS Programming Model and galsC Language
    3.1.1 Programming constructs and language syntax
    3.1.2 Execution model and language semantics
    3.1.3 Link model within actors
    3.1.4 Type inference and type checking
    3.1.5 Summary
  3.2 Concurrency and Determinacy Issues
    3.2.1 Concurrency
    3.2.2 Determinacy
    3.2.3 Summary
  3.3 Code Generation
    3.3.1 Links and connections
    3.3.2 Communication
    3.3.3 TinyGUYS
    3.3.4 System initialization and start of execution
    3.3.5 Scheduling
    3.3.6 Memory usage
  3.4 Example
  3.5 Summary

4 Viptos
  4.1 Design
    4.1.1 Representation of nesC components
    4.1.2 Transformation of nesC components
    4.1.3 Generation of code for target deployment
    4.1.4 Generation of code for simulation
    4.1.5 Simulation of TinyOS in Viptos
  4.2 Performance Evaluation
    4.2.1 Comparison to TOSSIM
    4.2.2 Radio
  4.3 Summary

5 Metaprogramming for Wireless Sensor Networks
  5.1 Generative Programming and Metaprogramming
  5.2 Higher-order Functions, Actors, and Components
  5.3 Ptalon
    5.3.1 A simple example
    5.3.2 Reconfiguration in Ptalon
  5.4 Specifying WSN Applications Programmatically
    5.4.1 Motivation
    5.4.2 Small World
    5.4.3 Parameter Sweep
    5.4.4 Higher-order actors
    5.4.5 Discussion
  5.5 Summary

6 Related Work
  6.1 TinyGALS and galsC
    6.1.1 Non-blocking
    6.1.2 MPI
    6.1.3 Port-Based Objects
    6.1.4 Click
    6.1.5 Click and Ptolemy II
    6.1.6 Timed Multitasking
  6.2 Design, Simulation, and Deployment Environments
    6.2.1 Design and simulation environments
    6.2.2 TinyOS development and editing environments
    6.2.3 Programming and deployment environments
  6.3 Summary

7 Conclusion

Bibliography

List of Figures

1.1 Object-oriented design vs. actor-oriented design. Source: Edward A. Lee.
1.2 WSN landscape.
2.1 Sample nesC source code.
2.2 Illustration of an actor-oriented model (top) in Ptolemy II and its hierarchical abstraction (bottom).
2.3 XML representation of the Sinewave source.
3.1 Graphical representation of the SenseTag application.
3.2 Source code for the TimerC and TimerM components.
3.3 Source code for TimerActor and SenseActor.
3.4 Source code for the SenseTag application.
3.5 Directed acyclic graphs (DAGs) within actors.
3.6 A single-output, multiple-input connection.
3.7 Type checking example.
3.8 A self-loop actor triggered by an interrupt.
3.9 Two events are produced at the same time.
3.10 A single interrupt.
3.11 One or more interrupts where actors have delayed output.
3.12 Active system state after one interrupt.
3.13 Active system state determined by adding the active system state after one non-interleaved interrupt.
3.14 Code generation for the SenseTag application.
3.15 TinyGALS scheduling algorithm.
3.16 Sensor array for object detection and reporting.
3.17 Top-level, per-node view of the object detection application.
4.1 Sample nesC source code.
4.2 SenseToLeds application in Viptos.
4.3 Generated MoML by nc2moml for TimerC.nc
4.4 Generated MoML by ncapp2moml for SenseToLeds.nc
4.5 TOSSIM scheduling algorithm.
4.6 Viptos version of TOSSIM scheduling algorithm.
4.7 Send and receive application in Viptos.
4.8 Multi-hop routing in Viptos.
4.9 Execution time of the SenseToLeds application as a function of the number of nodes. Each simulation ran for 300.0 virtual seconds.
4.10 Execution time of a radio send and receive model in Viptos as a function of the number of senders and receivers. Each simulation ran for 120.0 virtual seconds.
5.1 MultipleNodesMoML.ptln
5.2 PtalonActor in Ptolemy II.
5.3 MultipleNodesMoML.xml
5.4 Small World in Ptolemy II.
5.5 ParameterSweep version of Small World in Ptolemy II.
5.6 Modal model for changing parameter values of Small World model in Ptolemy II.
5.7 SDF model for changing parameter values of Small World model in Ptolemy II.
5.8 ParameterSweep version of Small World model with MultiInstanceComposite in Ptolemy II.
5.9 Ptalon code for SmallWorld (SmallWorld.ptln).
5.10 Ptalon version of Small World in Ptolemy II.
5.11 Excerpt of MoML code for Ptalon version of Small World.
6.1 An example Click element. Source: Eddie Kohler.
6.2 A simple Click configuration with sequence diagram. Source: Eddie Kohler.
6.3 Flowchart for Click configuration shown in Figure 6.2. Source: Eddie Kohler.
6.4 Click vs. TinyGALS.
6.5 A sensor network application.
6.6 Pull processing across multiple nodes; a configuration for the application in Figure 6.5.

List of Tables

3.1 Summary of valid types of links in TinyGALS/galsC.
3.2 Generated code for ports in galsC.
3.3 Generated code for parameters (TinyGUYS) in galsC.
4.1 Representation scheme for nesC components in Viptos.
5.1 Comparison of number of bytes between different implementations of SmallWorld.

Acknowledgments

I would like to thank Jie Liu for his invaluable guidance throughout my graduate career. I would also like to thank Feng Zhao for giving me the opportunity to work with him and Jie at both PARC and Microsoft Research.

For their feedback, advice, and/or help with hardware and software, without which this dissertation would not be possible, I would like to thank: Christopher Brooks, Adam Cataldo, Phoebus Chen, Prabal Dutta, David Gay, Jörn Janneck, Eddie Kohler, Jackie Leung, Judy Liebman, Xiaojun Liu, Andrew Mihal, Steve Neuendorffer, L. Parke, Roberto Passerone, John Reekie, Mary Stewart, Rob Szewczyk, Heather Taylor, Kamin Whitehouse, Yang Zhao, and the rest of the Ptolemy Group and NEST Group.

I would like to thank my undergraduate advisor, David B. Stewart, for introducing me to embedded systems and starting me on this path.

Finally, I would like to thank my dissertation committee, Edward A. Lee, Eric Brewer, and Paul Wright, for their feedback. I would especially like to thank my advisor, Edward, for his advice and support throughout all these years.

Recipes courtesy of Food Network and Cooking Fresh from the Bay Area, as well as Backen köstlich wie noch nie, Cook’s Illustrated, Epicurious, and Greens Restaurant.


Chapter 1

Introduction

In his classic software engineering text, The Mythical Man Month: Essays on Software Engineering [17], Frederick P. Brooks, Jr. discusses high-level languages as an essential tool for increasing programmer productivity:

Surely the most powerful stroke for software productivity, reliability, and simplicity has been the progressive use of high-level languages for programming. Most observers credit that development with at least a factor of five in productivity, and with concomitant gains in reliability, simplicity, and comprehensibility.

What does a high-level language accomplish? It frees a program from much of its accidental complexity. An abstract program consists of conceptual constructs: operations, datatypes, sequences, and communication. The concrete machine program is concerned with bits, registers, conditions, branches, channels, disks, and such. To the extent that the high-level language embodies the constructs wanted in the abstract program and avoids all lower ones, it eliminates a whole level of complexity that was never inherent in the program at all.

The above passage was originally published in 1975. In the twentieth-anniversary edition (1995) of the text, Brooks elaborates further:

Most past progress in software productivity has come from eliminating noninherent difficulties such as awkward machine languages and slow batch turnaround. There are not a lot more of these easy pickings. Radical progress is going to have to come from attacking the essential difficulties of fashioning complex conceptual constructs.

The most obvious way to do this recognizes that programs are made up of conceptual chunks much larger than the individual high-level language statement—subroutines, or modules, or classes. If we can limit design and building so that we only do the putting together and parameterization of such chunks from prebuilt collections, we have radically raised the conceptual level, and eliminated the vast amounts of work and the copious opportunities for error that dwell at the individual statement level.


Although twelve years have passed since Brooks wrote the above passage, it still applies today to new high-level programming concepts. An application area where these concepts are particularly needed is that of wireless sensor networks. This dissertation presents methods for raising the conceptual level of building wireless sensor network applications, using actor-oriented programming concepts.

1.1 Wireless Sensor Networks

Wireless sensor networking is an emerging area of embedded systems that has the potential to revolutionize our lives at home and at work, with wide-ranging applications, including environmental monitoring and conservation, manufacturing and industrial control, business asset management, seismic and structural monitoring, transportation, health care, and home automation [107]. Wireless sensor networks provide a way to create flexible, tetherless, automated data collection and monitoring systems.

Unlike traditional networked systems, a sensor network is constrained by finite on-board battery power and limited network communication bandwidth. In addition, sensor networks are spatially aware and are more closely linked to geographic location and the physical environment than centralized systems. A sensor node in a typical sensor network has a battery, a microprocessor, and a small amount of memory for signal processing and task scheduling. Each node is equipped with one or more sensing devices, such as sensors for visible or infrared light, changing magnetic field, electrical resistance, acceleration or vibration, pH, humidity, or temperature; acoustic microphone arrays; and/or video or still cameras. Each sensor node communicates wirelessly with a few other neighboring nodes within its radio communication range [107]. A wireless sensor network may also be augmented with a higher tier of more powerful, wired nodes with greater network capacity and computation power, as in the Tenet architecture [36]. Nodes in this higher tier are sometimes called masters [36] or microservers [67].

Building sensor networks today requires piecing together a variety of hardware and software components, each with different design methodologies and tools, making it a challenging and error-prone process. Typical networked embedded system software development may require the design and implementation of device drivers, network stack protocols, scheduler services, application-level tasks, and partitioning of tasks across multiple nodes. Little or no integration exists among the tools necessary to create these software components, mostly because the interactions between the programming models are poorly understood. In addition, these tools typically have little infrastructure for building models and interactions that are not part of their original scope or software design paradigms.

1.2 Actor-Oriented Programming

Actor-oriented programming is a high-level programming concept that can increase software productivity, reliability, and simplicity. A brief history of actor research up to 1993 is summarized by Agha [3] and excerpted and extended here.

The actor model was originally proposed by Carl Hewitt, though the meaning of the term has evolved over time. In his model, Hewitt proposes actors as an approach to modeling intelligence as a society of communicating knowledge-based problem-solving experts. One can view each of the experts as a society that can be further decomposed until reaching the primitive actors of the system. These actors are objects that interact in a purely local way by sending messages to one another. Hewitt [46] showed that control structures can be represented as patterns of message passing between simple actors with a conditional construct but no local state.

Gul Agha extended the notion of actor to include history-sensitive behavior necessary for shared, mutable data objects [1]. He intended actors to be used as a paradigm for exploiting parallelism on massively parallel architectures and as a suitable language for concurrency [2]. Agha assumes that each actor encapsulates a thread of control.

Edward A. Lee generalized the notion of actors and applied it to software design for concurrent systems [49]. Instead of object-oriented design, which emphasizes inheritance and procedural interfaces, he suggests the term actor-oriented design for a refactored software architecture, where instead of objects, software components are parameterized actors with ports. Ports and parameters define the interface of an actor. A port represents an interaction with other actors, but unlike a method, it does not have call-return semantics. The precise semantics depends on the model of computation, but conceptually it represents signaling between components. Unlike Agha’s actors, Lee’s actors are not required to encapsulate a thread of control [60]. Hewitt and Agha view actors as a universal concept; everything in the system is an actor that responds to messages. Lee distinguishes data tokens, which encapsulate data and do not interact with one another, from actors, which exchange and process data [74].

This dissertation uses Lee’s concept of actors. Like Neuendorffer [74], I view actor-oriented programming as an approach to system-level design. Actors are concurrent dataflow-oriented components that specify behavior abstractly without relying on low-level implementation constructs such as function calls, threads, or distributed computing infrastructure [74]. In traditional object-oriented programming, what flows through an object is sequential control. In other words, things happen to objects. In actor-oriented programming, what flows through an object is evolving data. In other words, actors make things happen (see Figure 1.1).

Actor-oriented programming and object-oriented programming are duals of each other, similar to Lauer and Needham’s concept of the duality of message-oriented systems and procedure-oriented systems [58]. Lauer and Needham explain that though “no real system precisely agrees with either model in all respects,” “most modern operating systems can be usefully classified using them. Some systems are implemented in a style which is very close in spirit to one model or the other. Other systems are able to be partitioned into subsystems, each of which corresponds to one of the models, and which are coupled by explicit interface mechanisms.” They conclude that “the considerations for choosing which model to adopt in a given system are not found in the applications which that system is meant to support. Instead, they lie in the substrate upon which the system is built,” “i.e., machine architecture and/or programming environment—on which the process and synchronization facilities are implemented. The factors and design decisions of the system upon which the process and synchronization facilities are built are the things which make one or the other style more attractive or more tedious.” They suggest that a message-oriented (actor-oriented) style is best when it is easy to allocate message blocks and queue messages but difficult to build a protected procedure call mechanism. Other constraints are those “imposed by the machine architecture and hardware,” such as the “organization of real and virtual memory, the size of the stateword which must be saved on every context switch, the ease with which scheduling and dispatching can be done, the arrangement of peripheral devices and interrupts, and the architecture of the instruction set and the programmable registers.”

Actor-oriented programming and other message-oriented systems are well-suited to embedded systems and other highly concurrent systems, where a variety of peripheral devices and interrupts must be accessed frequently, with a fast response rate, and memory space is at a premium (and memory protection is often not provided in the underlying infrastructure). Actor-oriented programming can be combined with object-oriented programming and other procedure-oriented systems in a structured way to achieve the best of both worlds.

Page 21: Actor-Oriented Programming for Wireless Sensor Networks · 2007-08-30 · Actor-Oriented Programming for Wireless Sensor Networks by Elaine Cheong B.S. (University of Maryland, College

Figure 1.1: Object-oriented design vs. actor-oriented design. In the established, object-oriented view, what flows through an object is sequential control (things happen to objects); in the actor-oriented alternative, what flows through an object is evolving data (actors make things happen). Source: Edward A. Lee.

1.3 Actor-Oriented Programming for Wireless Sensor Networks

Wireless sensor networks are highly concurrent systems, with concurrency at many different

levels. In this dissertation, I advocate using an actor-oriented approach to designing, generating,

programming, and simulating wireless sensor network applications. Existing approaches to building

wireless sensor networks can be divided into four layers, as shown in the vertical axis of Figure 1.2.

The operating system approach forms the bottom-most layer, whose focus is to provide basic

programming abstractions to allow a program to run on the sensor node hardware. Examples include

TinyOS [48], SOS [40], Contiki [27], MantisOS [13], NutOS [11], Linux, and .NET.1

The node-centric approach forms the next layer above the operating system layer. Software

in the node-centric layer runs on a single node on top of the operating system, and more abstract

programming models are used, which makes programming easier for the user. Examples include

Mate [64], SNACK [37], Token Machines [75], and the Object State Model [53].

The middleware approach forms the third layer, which begins to include programming ab-

stractions that allow the user to address multiple nodes. Examples include directed diffusion [50],

1Many of the examples shown in Figure 1.2 rely on either simulation with a combination of TOSSIM and gdb, or emulation for the Atmel AVR microcontroller instruction set. Instruction-level emulation lies below the operating system approach and is not shown in the figure.


Figure 1.2: WSN landscape.


abstract regions [101], Hood [102], IDSQ (information-driven sensor querying) [108], and embed-

ded web services [67].

The macroprogramming approach forms the top layer, which allows the user to create an appli-

cation by programming the wireless sensor network as a whole, rather than programming individual

nodes separately. Macroprogramming is also known as programming the ensemble. Examples in-

clude TinyDB [70], Agilla [30], and actorNet [56].

The process of building a wireless sensor network can be divided into three stages of devel-

opment: design, simulation, and deployment. However, existing development tools are disjoint

and difficult to integrate. Unfortunately, wireless sensor networks are often deployed in resource-

constrained environments. These constraints dictate that sensor network problems are best ap-

proached in a holistic manner, by jointly considering the physical, networking, and application

layers and making major design trade-offs across the layers [107].

Most existing work focuses on only one stage of development, rather than an integrated ap-

proach. Many of the tools shown in Figure 1.2 rely on the TOSSIM TinyOS simulator for operating

system-level simulation and testing, and TinyViz for visualization.

Simulation tools that fall somewhere between the middleware and node-centric layers include

ns-2, SensorSim, OPNET, OMNeT++, J-Sim, Prowler, and Em*. These tools, with the exception of

Em*, are usually stand-alone and not designed for hardware deployment.2

PIECES (Programming and Interaction Environment for Collaborative Embedded Systems)

[68] is a higher-level simulation tool implemented in a mixed Java-Matlab environment, though it

does not translate easily to actual deployment. Other tools are programming models or languages

that focus solely on design, and not simulation, including Semantics Streams [103], DSN (Declara-

tive Sensornet) [94], Regiment [76], Kairos [38], DHT (Distributed Hash Table), and UML (Unified

Modeling Language).

In this dissertation, actor-oriented programming provides a common high-level language that

unifies the programming interface across the four application layers and between the different stages

of development, from design to simulation and testing, and to deployment. The developer can

choose the model of computation, or communication model between actors, that best fits the target

application.

The goal of this work is to create integrated tools and programming models for networked

embedded application developers to model and simulate their algorithms and quickly transition to

2Chapter 6 contains a more detailed discussion of these simulation tools.


testing their software on real hardware in the field, while allowing them to use the model of com-

putation most appropriate for each part of the system. Chapter 2 introduces the reader to TinyOS,

a runtime environment for wireless sensor nodes. I use TinyOS as an interface for the bottom-most

layer, the operating system approach. Chapter 2 also introduces Ptolemy II, a Java-based software

framework with a graphical user interface, which allows construction of actor-oriented models of

computation. Chapter 3 describes an actor-oriented, node-centric model called TinyGALS for pro-

gramming individual sensor nodes. Chapter 4 introduces an actor-oriented modeling and design

environment for wireless sensor networks. This tool, called Viptos, encompasses multiple layers

and lies above the operating system approach. Chapter 5 describes various techniques for using

higher-order actors to generate multiple simulation scenarios for design and test of wireless sensor

network applications. Chapter 6 discusses related work, and Chapter 7 concludes this dissertation.

1.4 Previously Published Work

Some of the material in this dissertation was previously published in technical reports or con-

ference proceedings. A summary of how these papers have been incorporated into this dissertation

follows.

TinyGALS: A Programming Model for Event-Driven Embedded Systems [23] was the first pa-

per published on this topic, and it was extended into a master’s report, Design and Implementation

of TinyGALS: A Programming Model for Event-Driven Embedded Systems [20]. The language

implemented for the programming model described in these two papers was redesigned and reim-

plemented as part of the nesC compiler and described in galsC: A Language for Event-Driven

Embedded Systems [24], which was later condensed, revised, and published under the same title

[25]. These four publications are combined and updated to form the basis of Chapter 3 and part of

Chapter 6.

Viptos: A Graphical Development and Simulation Environment for TinyOS-based Wireless

Sensor Networks [22] was the first paper published on this topic, and it was revised, updated, and

extended as Joint Modeling and Design of Wireless Networks and Sensor Node Software [21]. These

two publications are combined and updated to form the basis of Chapter 4 and part of Chapter 6.


Chapter 2

Background

In this chapter, I present TinyOS, one of the most popular software tool suites in the wireless

sensor network research and development community. I also present Ptolemy II, the current version

of one of the most influential actor-oriented design frameworks. Together, these tools form the

background knowledge required for understanding the implementation of the tools and techniques

presented later in this dissertation.

2.1 TinyOS

TinyOS [47, 48] is an open-source runtime environment designed for sensor network nodes

known as motes. TinyOS has a large user base—over 500 research groups and companies use

TinyOS on the Berkeley/Crossbow motes. It has been ported to over a dozen platforms and numer-

ous sensor boards, and new releases see over 10,000 downloads. TinyOS differs from traditional

operating system models in that events drive the behavior of the system. Using this type of exe-

cution, battery-operated nodes can preserve energy by entering a sleep mode when no interesting

events are happening. According to the TinyOS website [95], “TinyOS’s event-driven execution

model enables fine-grained power management yet allows the scheduling flexibility made neces-

sary by the unpredictable nature of wireless communication and physical world interfaces.”

In this section, I present the details of the nesC syntax and the TinyOS execution model. Note

that in this dissertation, I focus on TinyOS 1.x. TinyOS 2.x is a rewritten implementation of TinyOS

1.x that provides users with a cleaner interface. All material presented in this dissertation can easily

be transferred to TinyOS 2.x.


2.1.1 NesC syntax

TinyOS provides a library of reusable software components written in nesC, an extension to

the C programming language. A TinyOS application connects these components using a wiring

specification that is independent of the component implementation. Some TinyOS components are

thin wrappers around hardware, though most are software modules which process data. The dis-

tinction is invisible to the developer. Decomposing different OS services into separate components

allows unused services to be excluded from the application. Figure 2.1(a) shows a TinyOS program

called SenseToLeds that displays the value of a photosensor in binary on the LEDs of a mote. The

TinyOS component library includes those “wired” together in SenseToLeds: Main, SenseToInt

(shown in Figure 2.1(b)), IntToLeds, TimerC, and DemoSensorC.

A nesC component may expose a set of interfaces. Each interface is a set of methods. A

method may be either an event or a command, where an event is usually called “upwards” from

a hardware interrupt handler, and a command is usually called “downwards” from the application

code. A nesC component provides methods that it implements, and uses methods that are imple-

mented by other components. A nesC component is either a configuration that contains a wiring

of other components, or a module that contains an implementation of its interface methods. NesC

interfaces may also be parameterized to provide multiple instances of the same interface. In Figure

2.1(a), SenseToLeds is a configuration that exposes no interface methods. The TimerC.Timer

interface is parameterized. The Timer interface of SenseToInt connects to a unique instance of the

corresponding interface of TimerC. If another component connects to the TimerC.Timer interface,

it connects to a different instance. Each timer can be initialized with different periods.

2.1.2 TinyOS execution model

TinyOS contains a single thread of control managed by the scheduler, which may be interrupted

by hardware events. Component methods encapsulate hardware interrupt handlers. These methods

may transfer the flow of control to another component by calling a uses method. Computation

performed in a sequence of method calls must be short, or it may delay the processing of other

events.

There are two sources of concurrency in TinyOS: tasks and events. Tasks are a deferred compu-

tation mechanism. A long-running computation can be encapsulated in a task, which a component

method posts to the scheduler task queue. The post operation returns immediately, deferring the

computation until the scheduler executes the task later. The TinyOS scheduler processes the tasks


configuration SenseToLeds {} implementation {
  components Main, SenseToInt, IntToLeds,
      TimerC, DemoSensorC as Sensor;
  Main.StdControl -> SenseToInt;
  Main.StdControl -> IntToLeds;
  SenseToInt.Timer -> TimerC.Timer[unique("Timer")];
  SenseToInt.TimerControl -> TimerC;
  SenseToInt.ADC -> Sensor;
  SenseToInt.ADCControl -> Sensor;
  SenseToInt.IntOutput -> IntToLeds;
}

(a)

module SenseToInt {
  provides {
    interface StdControl;
  }
  uses {
    interface Timer;
    interface StdControl as TimerControl;
    interface ADC;
    interface StdControl as ADCControl;
    interface IntOutput;
  }
} implementation {
  ...
}

(b)

Figure 2.1: Sample nesC source code.

in the queue in FIFO order whenever it is not executing an interrupt handler. Tasks run to com-

pletion and do not preempt each other. Events signify either an event from the environment or the

completion of a split-phase operation. Split-phase operations are long-latency operations where op-

eration request and completion are separate functions. Commands are typically requests to execute

an operation. If the operation is split-phase, the command returns immediately and completion is

signaled later with an event; non-split-phase operations do not have completion events. Events also

run to completion, but they may preempt the execution of a task or another event. Resource con-

tention is typically handled through explicit rejection of concurrent requests. Because tasks execute

non-preemptively, TinyOS has no blocking operations. TinyOS execution is ultimately driven by

events representing hardware interrupts.

2.2 Ptolemy II

Ptolemy II, a modeling and design framework for concurrent systems, and VisualSense, an

extension to Ptolemy II that supports modeling and simulation of wireless sensor networks, form the

basis of the tools described in this dissertation. In this section, I excerpt and summarize information

from Hylands, et al. [49] and Baldwin, et al. [8].

The Ptolemy Project conducts foundational and applied research in software-based design techniques for embedded systems. It studies heterogeneous modeling, simulation, and design of concurrent systems. The focus is on embedded systems, particularly those that mix technologies including

analog and digital electronics, hardware and software, and electronics and mechanical devices. The

focus is also on systems that are complex in the sense that they mix widely different operations, such

as networking, signal processing, feedback control, mode changes, sequential decision making, and

user interfaces.

Ptolemy II is the current software infrastructure of the Ptolemy Project and is published freely

in open-source form. It serves as a laboratory for experimenting with design techniques. Executable

models are constructed under a model of computation, which is the set of the “laws of physics” that

govern the interaction of components in the model.1 If a model describes a mechanical system,

then the model of computation may literally be the laws of physics. More commonly, however,

the model of computation is a set of rules that are more abstract, and provide a framework within

which a designer builds models. A set of rules that govern the interaction of components is called

the semantics of the model of computation. A model of computation may have more than one

semantics, in that there might be distinct sets of rules that impose identical constraints on behavior.

Most, but not all, models of computation in Ptolemy II support actor-oriented design. This

contrasts with, and complements, object-oriented design by emphasizing concurrency and com-

munication between components. Components called actors execute and communicate with other

actors in a model, as illustrated in Figure 2.2. A director, which is a component specific to the model

of computation used, controls the execution of a model. Actors, like objects, have a well-defined

component interface. This interface abstracts the internal state and behavior of an actor, and restricts

how an actor interacts with its environment. The interface includes ports that represent points of

communication for an actor, and parameters which are used to configure the operation of an actor.

Often, parameter values are part of the a priori configuration of an actor and do not change when

a model is executed, but not always. The “port/parameters” shown in Figure 2.2 function as both

ports and parameters.

Central to actor-oriented design are the communication channels that pass data from one port

to another according to some messaging scheme. Whereas with object-oriented design, components

interact primarily by transferring control through method calls, in actor-oriented design, they inter-

act by sending messages through channels.2 The use of channels to mediate communication implies

1These components are not the same as TinyOS/nesC components, though Chapter 4 explores the relationship between Ptolemy II components and TinyOS/nesC components.

2These channels may be wired or wireless. The next section discusses wireless channels in more detail.

Page 29: Actor-Oriented Programming for Wireless Sensor Networks · 2007-08-30 · Actor-Oriented Programming for Wireless Sensor Networks by Elaine Cheong B.S. (University of Maryland, College

13

Figure 2.2: Illustration of an actor-oriented model (top) in Ptolemy II and its hierarchical abstraction (bottom). The model shows a director, an annotation, port/parameters, and an external port.

Page 30: Actor-Oriented Programming for Wireless Sensor Networks · 2007-08-30 · Actor-Oriented Programming for Wireless Sensor Networks by Elaine Cheong B.S. (University of Maryland, College

14

that actors interact only with the channels to which they are connected and not directly with other

actors. A relation is an object used to represent the (wired) interconnection. Models, like actors,

may also define an external interface. The interface of a model is called its hierarchical abstrac-

tion. This interface consists of external ports and external parameters, which are distinct from the

ports and parameters of the individual actors in the model. The external ports of a model can be

connected by channels to other external ports of the model or to the ports of actors that compose

the model. External parameters of a model can be used to determine the values of the parameters of

actors inside the model.

Taken together, the concepts of models, actors, ports, parameters, and channels describe the

abstract syntax of actor-oriented design. This syntax can be represented concretely in several ways:

graphically, such as in a bubble-and-arc or block-and-arrow diagram; in XML (Extensible Markup

Language), such as in Figure 2.3; or in a program designed to a specific API (Application Program-

ming Interface), such as in SystemC. Ptolemy II offers all three alternatives. It is important to realize

that the syntactic structure of an actor-oriented design says little about its semantics. The semantics

is largely orthogonal to the syntax and is determined by a model of computation. The model of

computation might give operational rules for executing a model. These rules determine when actors

perform internal computation, update their internal state, and perform external communication. The

model of computation also defines the nature of communication between components.

2.2.1 VisualSense

VisualSense [8] is a modeling and simulation framework for wireless sensor networks that

builds on Ptolemy II. This framework supports actor-oriented definition of sensor nodes, wireless

communication channels, physical media such as acoustic channels, and wired subsystems. The

software architecture consists of a set of base classes for defining wireless channels and sensor

nodes, a library of subclasses that provide specific wireless channel models and node models, and

an extensible visualization framework. Custom nodes can be defined by subclassing the base classes

and defining the behavior in Java or by creating composite models using any of several Ptolemy II

modeling environments. Custom wireless channels can be defined by subclassing the Wireless-

Channel base class and by attaching functionality defined in Ptolemy II models.

To support this style of modeling, VisualSense uses a specialization of the discrete-event (DE)

domain of Ptolemy II. The DE domain of Ptolemy II [15] provides execution semantics where in-

teraction between components occurs via events with time stamps. A sophisticated calendar-queue


<?xml version="1.0" standalone="no"?>
<!DOCTYPE class PUBLIC "-//UC Berkeley//DTD MoML 1//EN"
    "http://ptolemy.eecs.berkeley.edu/xml/dtd/MoML_1.dtd">
<class name="Sinewave" extends="ptolemy.actor.TypedCompositeActor">
  <property name="samplingFrequency" class="ptolemy.data.expr.Parameter"
      value="8000.0"/>
  <property name="SDF Director" class="ptolemy.domains.sdf.kernel.SDFDirector"/>
  <property name="frequency" class="ptolemy.actor.parameters.PortParameter"
      value="440.0"/>
  <property name="phase" class="ptolemy.actor.parameters.PortParameter"
      value="0.0"/>
  <port name="frequency" class="ptolemy.actor.parameters.ParameterPort">
    <property name="input"/>
  </port>
  <port name="phase" class="ptolemy.actor.parameters.ParameterPort">
    <property name="input"/>
  </port>
  <port name="output" class="ptolemy.actor.TypedIOPort">
    <property name="output"/>
  </port>
  <entity name="Ramp" class="ptolemy.actor.lib.Ramp">
    <property name="firingCountLimit" class="ptolemy.data.expr.Parameter"
        value="0"/>
    <property name="init" class="ptolemy.data.expr.Parameter" value="0"/>
    <property name="step" class="ptolemy.actor.parameters.PortParameter"
        value="(frequency*2*PI/samplingFrequency)"/>
  </entity>
  <entity name="TrigFunction" class="ptolemy.actor.lib.TrigFunction"/>
  <entity name="Const" class="ptolemy.actor.lib.Const">
    <property name="value" class="ptolemy.data.expr.Parameter" value="phase"/>
  </entity>
  <entity name="AddSubtract" class="ptolemy.actor.lib.AddSubtract"/>
  <relation name="relation3" class="ptolemy.actor.TypedIORelation"/>
  <relation name="relation4" class="ptolemy.actor.TypedIORelation"/>
  <relation name="relation" class="ptolemy.actor.TypedIORelation"/>
  <relation name="relation2" class="ptolemy.actor.TypedIORelation"/>
  <link port="output" relation="relation3"/>
  <link port="Ramp.output" relation="relation"/>
  <link port="TrigFunction.input" relation="relation4"/>
  <link port="TrigFunction.output" relation="relation3"/>
  <link port="Const.output" relation="relation2"/>
  <link port="AddSubtract.plus" relation="relation"/>
  <link port="AddSubtract.plus" relation="relation2"/>
  <link port="AddSubtract.output" relation="relation4"/>
</class>

Figure 2.3: XML representation of the Sinewave source.


scheduler is used to efficiently process events in chronological order. The DE domain has a for-

mal semantics that ensures determinate execution of deterministic models [59], although stochastic

models for Monte Carlo simulation are also well supported. The precision in the semantics prevents

the unexpected behavior that sometimes occurs due to modeling idiosyncrasies in some modeling

frameworks.

The DE domain in Ptolemy II supports models with dynamically changing interconnection

topologies. Changes in connectivity are treated as mutations of the model structure. The software is

carefully architected to support multithreaded access to this mutation capability. Thus, one thread

can be executing a simulation of the model while another changes the structure of the model, for

example by adding, deleting, or moving actors, or changing the connectivity between actors. The

results are predictable and consistent.

The most straightforward uses of the DE domain in Ptolemy II are similar to other discrete-

event modeling frameworks such as ns [77], OPNET [78], and VHDL. Components (which are

called actors) have ports, and the ports are interconnected to model the communication topology.

Ptolemy II provides a visual editor for constructing DE models as block diagrams. VisualSense is a

subclass of the DE modeling framework in Ptolemy II that is specifically intended to model sensor

networks. In particular, it removes the need for explicit connections between ports, and instead

associates ports with wireless channels by name (e.g., “RadioChannel”). Connectivity can then be

determined on the basis of the physical locations of the components. The algorithm for determining

connectivity is itself encapsulated in a component as a wireless channel model, and hence can be

developed by the model builder. In VisualSense, sensor nodes themselves can be modeled in Java,

or more interestingly, using more conventional DE models (as block diagrams) or other Ptolemy II

models (such as dataflow models, finite-state machines, or continuous-time models).

Ptolemy II and VisualSense permit customized icons for components in a model. Visual de-

pictions of systems can help to offset the increased complexity that is introduced by heterogeneous

modeling, and to lend insight into the behavior of models. For example, a sensor node can have as

an icon a translucent circle that represents (roughly or exactly) its transmission range.

Another feature of Ptolemy II and VisualSense is a sophisticated type system [105]. In this

type system, actors, parameters, and ports can all impose constraints on types, and a type resolution

algorithm identifies the most specific types that satisfy all the constraints. By default, the type

system in Ptolemy II includes a type constraint for each connection in a block diagram. However,

in wireless models, these connections do not represent all the type constraints. In particular, every

actor that sends data to a wireless channel requires that every recipient from that channel be able to


accept that data type. VisualSense imposes this constraint in the WirelessChannel base class, so

unless a particular model builder needs more sophisticated constraints, the model builder does not

need to specify particular data types in the model. They are inferred from the ultimate sources of

the data and propagated throughout the model.

2.3 Summary

This chapter summarized background information on TinyOS and Ptolemy II, so that the reader

can understand the underlying implementation of the tools and techniques presented in the following

chapters.


Chapter 3

TinyGALS and galsC

Networked embedded software designers face issues such as managing computation as well

as communication, maintaining consistent state across multiple tasks, handling irregular interrupts,

avoiding concurrency errors, and conserving power. These tasks become even more challenging

when the resources of the hardware platforms are too limited, in terms of CPU speed and memory

size, to host a full-scale modern operating system. Traditional technologies for developing em-

bedded software, inherited from writing device drivers and optimizing assembly code to achieve

a fast response and small memory footprint, do not scale with the growing complexity of today’s

applications. Despite the fact that “high-level” languages such as C and C++ have recently replaced

assembly language as the dominant embedded software programming languages, most of these

high-level languages are designed for writing sequential programs to run on an operating system

and fail to handle concurrency intrinsically.

Event-driven embedded software is similar to hardware, where conceptually concurrent com-

ponents are activated by incoming signals (or events). Event-driven execution is particularly suitable

for untethered devices such as sensor network nodes, since a node can go into a sleep mode to pre-

serve energy when no interesting events are happening. For many networked embedded systems,

there is a fundamental gap between this event-driven execution model and sequential programming

languages.

The TinyGALS (Globally Asynchronous and Locally Synchronous) programming model [23]

aims to fill this gap by providing language constructs to systematically build concurrent tasks called

actors. At the application level, these actors communicate with each other asynchronously via

message passing. Within each actor, components communicate synchronously via method calls, as

in most imperative languages.


The terms “synchronous,” “asynchronous,” and “globally asynchronous, locally synchronous

(GALS)” mean different things to different communities, thus causing confusion. The circuit and

processor design communities use these terms for synchronous and asynchronous circuits, where

synchronous refers to circuits that are driven by a common clock [51]. In the system modeling

community, synchronous often refers to computational steps and communication (propagation of

computed signal values) that take no time (or, in practice, very little time compared to the intervals

between successive arrivals of input signals). GALS then refers to a modeling paradigm that uses

events and handshaking to integrate subsystems that share a common tick (an abstract notion of an

instant in time) [10]. The TinyGALS notion of synchronous and asynchronous, however, is consistent with the usage of these terms in distributed programming paradigms [72]. In this chapter, synchronous means that the software flow of control transfers immediately to another component and the calling code blocks awaiting return. Steps do not take infinite time; control eventually returns to the calling code. Asynchronous means that the software flow of control does not transfer immediately to another component; execution of the other component is decoupled. Thus, the TinyGALS

programming model is globally asynchronous and locally synchronous in terms of transfer of the

flow of control.

In order to incorporate shared variable semantics where only the latest value matters, a set of

guarded yet synchronous variables (called TinyGUYS) is provided at the system level for actors to

exchange global information “lazily.” Access to these variables is thread-safe, yet components can

quickly read their values. In this programming model, application developers have precise control

over the concurrency in the system, and they can develop software components without the burden

of thinking about multiple threads.
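The last-value-wins behavior of a TinyGUYS variable can be illustrated with a small C sketch. This is not the generated galsC code; the names guys_read, guys_write, and guys_commit are hypothetical, and a real implementation would mask interrupts around the write and the commit:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical sketch of a TinyGUYS-style guarded global variable.
 * Readers see the last committed value without blocking; writers buffer
 * a new value, which the scheduler commits between actor firings
 * (only the latest pending write survives). */

static uint16_t guys_value;    /* committed copy, read without locking */
static uint16_t guys_pending;  /* buffered write */
static bool     guys_dirty;

static uint16_t guys_read(void) { return guys_value; }

static void guys_write(uint16_t v) {
    guys_pending = v;          /* overwrites any earlier pending write */
    guys_dirty = true;
}

static void guys_commit(void) { /* run by the scheduler between firings */
    if (guys_dirty) {
        guys_value = guys_pending;
        guys_dirty = false;
    }
}
```

A reader that calls guys_read() between a write and the next commit still sees the old value, which is the "lazy" exchange of global information described above.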

galsC [24, 25] is a language that implements the TinyGALS programming model. This language has a type system that spans synchronous and asynchronous communication boundaries. galsC takes advantage of the nesC specification for TinyOS 1.x. TinyOS/nesC components provide an interface abstraction that is consistent with synchronous communication via method calls. However, concurrent tasks in TinyOS are not exposed as part of the galsC component interface. Lack of explicit management of concurrency forces TinyOS component developers to manage concurrency by themselves (locking and unlocking semaphores), which makes TinyOS applications difficult to develop. The galsC language provides basic concurrency constructs, and the galsC compiler generates executable code, including an application-specific operating system scheduler, from high-level

specifications. This generative approach allows further analysis of concurrency problems, such as

race conditions, at a high level. Automatically generated code also reduces implementation and debugging time, since the developer does not need to reimplement standard constructs (e.g., communication ports, queues, functions, and guards on variables).

In a reactive, event-driven system, most of the processor time is spent waiting for an external

trigger or event. A reasonably small amount of additional code to enhance software modularity will

not greatly affect the performance of the system. As an actor-oriented, high-level programming language, the TinyGALS/galsC framework can greatly improve software productivity and encourage

component reuse.

The design of TinyGALS is influenced by the trend of introducing formal concurrency models

in embedded software. In particular, synchronous languages try to compile away concurrent executions based on the synchronous (zero-time execution) assumption [39]. When it is not possible

to compile away concurrency, the port-based object (PBO) model [92] has a global shared variable

space mediating component interaction. Various dataflow models [73] use FIFO queues to separate

flow of control. The POLIS codesign approach [7] uses an event-driven model for both hardware

and software execution. To some extent, TinyGALS is closer to system-level hardware/software

codesign languages, such as SystemC [12] and VCC [57], than embedded software languages such

as nesC.

The TinyGALS approach differs from coordination models like those discussed above, in that

it allows designers to directly control the concurrent execution and sizes of buffers between asynchronous actors. At the same time, it uses a thread-safe global data space to store messages that

do not trigger reactions. Components in the TinyGALS model are entirely sequential, and they are

easy to develop and backwards compatible with most legacy software. TinyGALS programs do not

rely on the existence of an operating system. Instead, the galsC compiler generates the scheduling

framework as part of the application. The galsC compiler and toolsuite is built on the nesC 1.1.1

compiler and toolsuite for the wireless sensor network nodes known as the Berkeley motes.

The remainder of this chapter is organized as follows. Section 3.1 describes the TinyGALS

programming model and galsC language. Section 3.2 discusses concurrency and determinacy issues in TinyGALS programs. Section 3.3 explains a code generation technique based on the two-level execution hierarchy and a system-level scheduler. Section 3.4 describes a sample application

implemented in galsC. Section 3.5 summarizes this chapter.


[Figure: TimerActor (components Counter, TimerC, Trigger) connects through its trigger output port and a queue of size 64 to the trigger input port of SenseActor (components SenseToInt, Photo); both actors share the parameter uint16_t count = 0, and each exports StdControl interfaces through actorControl.]

Figure 3.1: Graphical representation of the SenseTag application.

3.1 The TinyGALS Programming Model and galsC Language

This section uses a simple sensing application to illustrate the TinyGALS programming model

and galsC syntax and semantics. In this example, shown in Figure 3.1, a hardware clock triggers

the system to update a time tick counter. A downsampled clock signal triggers the system to read

the light intensity level from a photoresistor at a lower rate. Reading the sensor may take time. The

system tags the resulting sensor value with the latest value of the counter and sends it downstream

for further processing.
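The behavior just described can be sketched as a host-side C simulation. The downsampling period of 4 and the fixed reading returned by read_photo() are assumptions for this sketch, not values taken from the application:

```c
#include <stdint.h>

/* Illustrative simulation of the SenseTag dataflow: every clock tick
 * increments a shared counter; every 4th tick "reads" the sensor and
 * tags the reading with the latest counter value. */

typedef struct { uint16_t value; uint16_t tag; } tagged_t;

static uint16_t count = 0;                       /* time-tick counter */

static uint16_t read_photo(void) { return 42; }  /* stand-in for the ADC */

/* One hardware clock tick. Returns 1 if a tagged sample was produced. */
static int sense_tag_tick(uint32_t tick, tagged_t *out) {
    count++;                  /* TimerActor: update the counter */
    if (tick % 4 == 0) {      /* downsampled trigger for SenseActor */
        out->value = read_photo();
        out->tag = count;     /* tag with the latest counter value */
        return 1;
    }
    return 0;
}
```

Because the sensor value is tagged with whatever the counter holds when the reading completes, a slow sensor read still yields a correctly timestamped sample.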

Section 3.1.1 introduces the basic constructs in the TinyGALS programming model and the

syntax of the galsC programming language. Section 3.1.2 explains the semantics of TinyGALS

and galsC. Section 3.1.3 describes valid links in TinyGALS/galsC, and Section 3.1.4 discusses type

inference and checking in galsC.

3.1.1 Programming constructs and language syntax

There are three basic constructs in TinyGALS and galsC: components, actors, and applications.

This section presents the abstract TinyGALS notation, as well as the concrete galsC syntax, for each

construct.


TinyGALS Components

Components are the most basic elements of a TinyGALS program. A TinyGALS component C is a 5-tuple:

C = (PROVIDES_C, USES_C, COMPONENTS_C, LINKS_C, V_C),   (3.1)

where PROVIDES_C and USES_C are the sets of methods that constitute the interface of C, COMPONENTS_C is the set of components that form C, LINKS_C is the set of relations among the interface methods of the components (including those of C), and V_C is the set of internal variables that carry the state of C from one invocation of an interface method of C to another.

A component that provides an interface (in PROV IDESC) contains an implementation of the

interface method(s), whereas a component that uses, or requires, an interface (in USESC) expects

another component to implement the interface. Thus, a component is like an object in most object-oriented programming languages, but with explicit definition of the external methods it uses.

Components in galsC are written in the nesC programming language. Syntactically, a component is defined in two parts: an interface definition and an implementation. A component is either

a module or a configuration. The implementation of a module contains executed code, whereas the

implementation of a configuration only contains a list of components and the links between their

interface methods.
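The provides/uses relationship can be approximated in plain C, where a used interface is a function pointer that the enclosing configuration wires up. This is only an analogy for exposition; nesC resolves these links statically at compile time, and all names below are hypothetical:

```c
#include <stdint.h>

/* Plain-C analogy for nesC's provides/uses interfaces. */

typedef struct {
    /* the "uses" side: a method this component expects someone to provide */
    int (*set_rate)(uint8_t interval, uint8_t scale);
} clock_if_t;

static clock_if_t clock_if;   /* wired by the enclosing configuration */

/* the "provides" side: an init method in the spirit of StdControl.init() */
static int std_control_init(void) {
    /* configure the clock through the used interface, as TimerM does */
    return clock_if.set_rate(230, 3);
}

/* a concrete provider that the configuration links to the used interface */
static int fake_set_rate(uint8_t interval, uint8_t scale) {
    return interval > 0 && scale > 0;   /* pretend the call succeeded */
}
```

The component body only needs the type signature of set_rate; which implementation it reaches is decided entirely by the wiring, mirroring the text above.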

Figure 3.2 shows the source code for the TimerC configuration used in Figure 3.1. TimerC contains a module named TimerM that implements the provided interface methods.

configuration TimerC {
  provides interface Timer[uint8_t id];
  provides interface StdControl;
} implementation {
  components TimerM, ClockC, ...;
  TimerM.Clock -> ClockC;
  ...
  StdControl = TimerM.StdControl;
  Timer = TimerM.Timer;
}

module TimerM {
  provides interface Timer[uint8_t id];
  provides interface StdControl;
  uses interface Clock;
  ...
} implementation {
  // Each bit represents a timer state.
  uint32_t mState;
  uint8_t mScale, mInterval;
  ...
  command result_t StdControl.init() {
    mState = 0;
    mScale = 3;
    mInterval = 230;
    return call Clock.setRate(mInterval, mScale);
  }
  ...
}

Figure 3.2: Source code for the TimerC and TimerM components.

Using the tuple notation given in Equation 3.1, the TimerC component can be defined as¹

C = (PROVIDES_C = {Timer, StdControl},
     USES_C = ∅,
     COMPONENTS_C = {TimerM, ClockC, ...},
     LINKS_C = {(TimerM.Clock, ClockC.Clock),
                ...,
                (StdControl, TimerM.StdControl),
                (Timer, TimerM.Timer)},
     V_C = ∅).

¹The interface keyword in nesC refers to a set of methods. So, the Timer interface refers to the set containing Timer.start(char, uint32_t), Timer.stop(), and Timer.fired(); and the StdControl interface refers to the set containing StdControl.init(), StdControl.start(), and StdControl.stop(). nesC allows the shorthand notation of linking two interfaces of the same type, which means that each of the individual methods is linked. For brevity, the TinyGALS notation used in this chapter lists only the name of a given interface, rather than the individual methods in the interface.


TinyGALS Actors

Actors are the major building blocks of a TinyGALS program, encompassing one or more TinyGALS components. A TinyGALS actor R is a 6-tuple:

R = (INPORTS_R, OUTPORTS_R, PARAMETERS_R, COMPONENTS_R, LINKS_R, INIT_R),   (3.2)

where INPORTS_R and OUTPORTS_R are the sets that specify the input ports and output ports of R, respectively; PARAMETERS_R is the set of external variables (global variables that can be both read and written²); COMPONENTS_R is the set of components that form the actor; LINKS_R is the set that specifies the relations among the interface methods of the components (PROVIDES_C and USES_C in Equation 3.1), the input and output ports of R (INPORTS_R and OUTPORTS_R), and the parameters of R (PARAMETERS_R); and INIT_R is the list of initialization methods that belong to the components in COMPONENTS_R.

Actors are different from components; INPORTS_R and OUTPORTS_R of an actor R are not the same as PROVIDES_C and USES_C of a component C. PROVIDES_C and USES_C refer to component methods and may be linked to actor ports in INPORTS_R and OUTPORTS_R. PROVIDES_C and USES_C are executable, whereas INPORTS_R and OUTPORTS_R are not. However, LINKS_R of an actor R is similar to LINKS_C of a component C. The only difference is that the relations in LINKS_R may also include actor ports and parameters.

The galsC syntax for an actor is similar to that of a galsC (or nesC) configuration component.

The interface of an actor consists of a set of input and/or output ports and a set of parameters. An

actor implementation contains a list of components and links. A link can join a component interface

method to one of four types of endpoints: (1) another component interface method, (2) a port, (3) a

parameter, or (4) some combination of these.³ An actor may also contain an actorControl section

which exports the StdControl interface of any of its components to the application level for system

initialization (e.g., for initializing hardware components).

Figure 3.3 shows the source code for TimerActor, which contains the TimerC component,

whose source code was shown in Figure 3.2. TimerActor has an output port named trigger, which is

linked to a component interface method, Trigger.trigger. A different component interface method,

Counter.IntOutput.output, writes to the count parameter. TimerActor exports the StdControl

interfaces of Counter and Trigger for system initialization. Figure 3.3 also shows the source code for

²Refer to information on TinyGUYS in Section 3.1.2.
³Sections 3.1.2 and 3.1.3 describe links in more detail, including which configurations of components within an actor are valid.


SenseActor. Its output port output is connected to the concatenation of the component interface

method SenseToInt.IntOutput.output and the value read from the count parameter. Figure 3.1

shows a graphical representation of the actors.

Using the tuple notation given in Equation 3.2, TimerActor can be defined as⁴

R = (INPORTS_R = ∅,
     OUTPORTS_R = {trigger},
     PARAMETERS_R = {count},
     COMPONENTS_R = {Counter, TimerC, Trigger},
     LINKS_R = {(Counter.Timer, TimerC.Timer[0]),
                (Counter.IntOutput.output, count),
                (Trigger.Timer, TimerC.Timer[1]),
                (Trigger.TimerControl, TimerC.TimerControl),
                (Trigger.trigger, trigger)},
     INIT_R = [Counter.StdControl, Trigger.StdControl]).

The semantics of the execution of components within an actor are discussed in more detail in

Section 3.1.2.

TinyGALS Application

At the top level of a TinyGALS program, actors are connected to form a complete application. A TinyGALS application A is a 5-tuple:

A = (GLOBALS_A, ACTORS_A, VARMAPS_A, CONNECTIONS_A, START_A),   (3.3)

where GLOBALS_A is the set of global variables; ACTORS_A is the list of actors that form A; VARMAPS_A is a set of mappings, each of which maps a global variable in GLOBALS_A to a parameter (PARAMETERS_R in Equation 3.2) of an actor in ACTORS_A⁵; CONNECTIONS_A is the set of the relations between actor input and output ports; and START_A is the list of input ports of actors in the application. CONNECTIONS_A of an application A differs from LINKS_R of an actor R in

⁴The unique() function in nesC is a constant function that evaluates to a constant at compile time. If the program contains n calls to unique() with the same identifier string (in this example, "Timer"), each call returns a different unsigned integer in the range {0, ..., n−1}.
⁵Refer to information on TinyGUYS in Section 3.1.2.


actor TimerActor {
  port {
    out trigger;
  } parameter {
    uint16_t count;
  } implementation {
    components Counter, TimerC, Trigger;
    Counter.Timer -> TimerC.Timer[unique("Timer")];
    Counter.IntOutput.output -> count;
    Trigger.Timer -> TimerC.Timer[unique("Timer")];
    Trigger.TimerControl -> TimerC;
    Trigger.trigger -> trigger;
    actorControl {
      Counter.StdControl;
      Trigger.StdControl;
    }
  }
}

actor SenseActor {
  port {
    in trigger;
    out output;
  } parameter {
    uint16_t count;
  } implementation {
    components SenseToInt, Photo;
    SenseToInt.ADC -> Photo;
    SenseToInt.ADCControl -> Photo;
    trigger -> SenseToInt.trigger;
    (SenseToInt.IntOutput.output, count) -> output;
    actorControl {
      SenseToInt.StdControl;
    }
  }
}

Figure 3.3: Source code for TimerActor and SenseActor.


that connections between actors contain an implicit queue, whereas links inside an actor (between

components) do not.

A galsC program is created by writing a galsC application file that contains zero or more

parameters (global variables) and an implementation containing a list of actors, mappings, and

connections, as well as an application start section. A mapping associates application parameters

(global names) with actor parameters (local names). A connection connects actor output ports with

actor input ports, with an optional declaration of the port queue size (defaults to size one). Section

3.1.2 describes which configurations of actors within an application are valid.

Figure 3.4 shows the source code for the SenseTag application, which contains TimerActor,

SenseActor, and some downstream actors. The application contains a parameter (global variable)

named count, which is initialized to zero and connected to the corresponding parameters of TimerActor and SenseActor. The output port trigger of TimerActor is connected to the corresponding

input port of SenseActor, with a queue size of 64. The appstart section declares that an initial

token is to be placed in the input port of SenseActor. Note that arguments (initial data) may also

be passed to the port.

Using the tuple notation given in Equation 3.3, the example application can be defined as

A = (GLOBALS_A = {count},
     ACTORS_A = [TimerActor, SenseActor, ...],
     VARMAPS_A = {(count, TimerActor.count),
                  (count, SenseActor.count)},
     CONNECTIONS_A = {(TimerActor.trigger, SenseActor.trigger),
                      (SenseActor.output, ...)},
     START_A = [SenseActor.trigger()]).

3.1.2 Execution model and language semantics

This section discusses the semantics of execution within a component, between components

within an actor, and between actors within an application. It also includes a discussion of the

conditions for well-formedness of an application.


application SenseTag {
  parameter {
    uint16_t count = 0;
  } implementation {
    actor TimerActor, SenseActor, ...;
    count = TimerActor.count;
    count = SenseActor.count;
    TimerActor.trigger =[64]=> SenseActor.trigger;
    SenseActor.output => ...;
    appstart {
      SenseActor.trigger();
    }
  }
}

Figure 3.4: Source code for the SenseTag application.

Assumptions

The TinyGALS architecture is intended for a platform with a single processor. All memory is

statically allocated; there is no dynamic memory allocation. A TinyGALS program runs in a single

thread of execution (single stack), which may be interrupted by the hardware. A piece of code

is reentrant if multiple simultaneous, interleaved, or nested invocations do not interfere with each

other. This section assumes that interrupt handlers are not reentrant, but that interrupts are masked

while servicing them (interleaved invocations of the same interrupt handler are disabled). However,

other (different) interrupts may occur while servicing an interrupt. There are no other sources of

preemption other than hardware interrupts. This section discusses constraints on what constitutes

a valid configuration of components within an actor when using components that contain interrupt

handlers in which interrupts are enabled. These constraints are necessary for avoiding unexpected

reentrancy, which may lead to race conditions and other nondeterminacy issues. Methods that do

not access component state will not suffer from race conditions, but may suffer from reentrancy

problems. To simplify the discussion, this section assumes that all methods may potentially access

component state. This discussion also assumes the existence of a clock, which is used to order

events.


TinyGALS Components

There are three cases in which a component C may begin execution: (1) an interrupt arrives

from the hardware that C encapsulates, (2) an event arrives on the actor input port linked to one of

the interface methods of C, or (3) another component calls one of the interface methods of C. In

the first case, the component is a source component and when activated by a hardware interrupt,

the corresponding interrupt service routine runs. Source components do not connect to any actor

input ports. In the second case, the component is a triggered component, and the event triggers the

execution of a provided method. Both source components and triggered components may call other

components via required methods. This results in the third case, where the component is a called

component. Once activated, a component executes to completion. That is, the interrupt service

routine or method finishes.

Reentrancy problems may arise if a component is both a source component and a triggered

component. An event on a linked actor input port may trigger the execution of a component method.

While the method runs, an interrupt may arrive, leading to possible race conditions if the interrupt

modifies internal variables (internal state) of the same component. Therefore, to improve the ease

of analyzability of the system and eliminate the need to make components reentrant, source com-

ponents must not also be triggered components, and vice versa. The same argument also applies to

source components and called components. Therefore, it is necessary that source components only

have outputs (required methods) and no inputs (provided methods). Additional rules for linking

components together are detailed in the next section.

In Figure 3.2, mState, mScale, and mInterval are internal variables of component TimerM.

When the StdControl.init() method of TimerM is called, the component calls the Clock.setRate()

method with the values of mInterval and mScale as its arguments. The call keyword indicates that

the Clock.setRate() method is called synchronously (explained further in the next section). The

component only needs to know the type signature of Clock.setRate(), but it does not matter to

which component the method is linked.

TinyGALS Actors

The flow of control between components within a TinyGALS actor occurs on links. A link

is a relation within an actor between its port(s), parameter(s), and component method(s). Section

3.1.3 discusses the exact specifics of what types of links are valid. Links represent synchronous

communication via method calls. When a component calls a required method with the call keyword,


the flow of control in the actor is immediately transferred to the callee component or port. The

external method can return a value through the call just as in a normal method call.⁶ The graph of

the components and the links between them is an abstraction of the call graph of the methods within

an actor, where the methods associated with a single component are grouped together.

The execution of actors is controlled by the scheduler in the TinyGALS runtime system. There

are two cases in which an actor R may begin execution: (1) a triggered component is activated, or

(2) a source component is activated. In the first case, the scheduler activates the component method

linked to an input port of R in response to an event sent to R by another actor. In the second case,

R contains a source component which has received a hardware interrupt. Notice that in this case,

R may interrupt the execution of another actor. An actor is considered to have finished executing

when the components inside of it have finished executing and control has returned to the scheduler.

As discussed in the previous section, preemption of the normal thread of execution by an

interrupt may lead to reentrancy problems. Therefore, TinyGALS places some restrictions on what

configurations of components within an actor are allowed.

Cycles within actors (between components) are not allowed; otherwise, reentrant components would be required.⁷ Therefore, any valid configuration of components within an actor can be modeled as a directed acyclic graph (DAG).
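A compiler can verify this acyclicity requirement with a standard depth-first-search cycle check over the component link graph. The sketch below is illustrative and not the actual galsC analysis; nodes are components, and adj[i][j] != 0 means component i calls a method of component j:

```c
/* DFS cycle check over a component link graph (illustrative sketch). */
#define MAX_NODES 8

static int has_cycle_from(int adj[MAX_NODES][MAX_NODES], int n,
                          int node, int state[MAX_NODES]) {
    state[node] = 1;                    /* node is on the current DFS path */
    for (int next = 0; next < n; next++) {
        if (!adj[node][next]) continue;
        if (state[next] == 1) return 1; /* back edge: a cycle exists */
        if (state[next] == 0 && has_cycle_from(adj, n, next, state)) return 1;
    }
    state[node] = 2;                    /* fully explored */
    return 0;
}

/* Returns 1 if the graph has a cycle, i.e., the actor is not a valid DAG. */
static int has_cycle(int adj[MAX_NODES][MAX_NODES], int n) {
    int state[MAX_NODES] = {0};
    for (int i = 0; i < n; i++)
        if (state[i] == 0 && has_cycle_from(adj, n, i, state))
            return 1;
    return 0;
}
```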

A source DAG is formed by starting with a source component and following all forward links

between it and other components in the actor, as in Figure 3.5(a). A triggered DAG is similar to a

source DAG but starts with a triggered component instead, as in Figure 3.5(b).

Race conditions and reentrancy problems may occur if source DAGs and triggered components

are connected within an actor. In Figure 3.5(c), the source DAG (C1, C3) is connected to the triggered

DAG (C2, C3). Race conditions and reentrancy problems may occur if C3 is running in a scheduled

context and an interrupt causes C1 to preempt C3. One can relax the restriction on cycles between

components and only disallow cycles in method call chains between components by first separating

the methods within a component into separate source and triggered components.

If all interrupts are masked during interrupt handling (interrupts are disabled), then no additional restrictions on source DAGs are needed. However, if interrupts are not masked (interrupts are enabled), then a source DAG must not be connected to any other source DAG within the same actor.

Triggered DAGs can be connected to other triggered DAGs, since with a single thread of execution, it is not possible for a triggered component to preempt a component in any other triggered

⁶In TinyOS, the return value indicates whether the command completed successfully or not.
⁷Recursion within components is allowed. However, the recursion must be bounded for the system to be live.


[Figure: (a) Actor A contains a source DAG (C1 -> C2), activated by a hardware interrupt. (b) Actor B contains a triggered DAG (C1 -> C2), activated by the arrival of an event at the actor input port. (c) In Actor C, source component C1 and triggered component C2 are both linked to C3; when a source DAG is connected to a triggered DAG in this way, race conditions and reentrancy problems may occur.]

Figure 3.5: Directed acyclic graphs (DAGs) within actors.

DAG. Recall that once triggered, the components in a triggered DAG execute to completion.

TinyGALS places restrictions on what connections are allowed between component methods

and actor ports, since some configurations may lead to nondeterministic component firing order. Let

us first assume that both actor input ports and actor output ports are totally ordered (using the order

of the ports declared in the port section of the actor definition file), but that components are not ordered. As discussed earlier, the configuration of components inside an actor must not contain cycles

and must follow the rules above regarding source and triggered DAGs. Then actor input ports may

be associated either with one provided method of a single component C or with one or more actor

output ports. Likewise, required component methods may be associated with either one provided

method of a single component C or with one or more actor output ports.8 Provided component

methods may be associated with any number or combination of required component methods and

actor input ports, but they may not be associated with actor output ports. Likewise, actor output

ports may be associated with any number or combination of required component methods and actor

input ports. However, if we assume that neither actor input ports nor actor output ports are ordered,

then actor input ports and required component methods may only be associated with either a single

method or with a single output port.

In Figure 3.3, the implementation section of the TimerActor declares that whenever component

Trigger calls trigger(), an event is produced at the trigger output port; the implementation section

of the SenseActor definition declares that whenever the trigger input port is triggered (explained

⁸In the existing TinyOS constructs, one caller (a required component method) can have multiple callees. The interpretation is that when the caller calls, all the callees are called in a possibly non-deterministic order. A combination of the callees' return values is returned to the caller. Although multiple callees are not part of the TinyGALS semantics, they are supported by the galsC software tools for TinyOS compatibility.


in the next section), the trigger() method of component SenseToInt is called.

TinyGALS Application

Each input port of an actor has a FIFO (first-in, first-out) queue. During execution of a

TinyGALS application, communication between actors occurs asynchronously through these queues.

When a component within an actor calls a method that is linked to an output port, the arguments of

the call are converted into events called tokens. A copy of the token is placed in the event queue

of each input port connected to the output port. Later, the TinyGALS scheduler removes the token from the event queue and calls the method that is linked to the input port with the contents of the token as its arguments. The queue separates the flow

of control between actors; the call to the output port returns immediately, and the component within

the actor can proceed. Communication between actors is also possible without the transfer of data.

In this case, an empty message (token) transferred between ports acts as a trigger for activation of

the receiving actor. Tokens are placed in input port queues atomically, so other source components

cannot interrupt this operation. The scheduler processes tokens in the order in which they are generated. Tokens are dropped if the input port queue is full; the programmer is currently responsible

for selecting the correct queue size. Note that since each input port of an actor R is linked to a

component method, each token that arrives on any input port of R corresponds to a future invocation

of the component(s) in R. When the system is not responding to interrupts or events on input ports,

the system does nothing (i.e., sleeps).
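The put/get discipline of an input port queue might look like the following C sketch. The names are hypothetical, and a real generated runtime would mask interrupts around port_put so that tokens are enqueued atomically:

```c
#include <stdint.h>
#include <stdbool.h>

/* One input-port queue with TinyGALS-style semantics: fixed capacity,
 * FIFO order, tokens dropped when the queue is full. */

#define QUEUE_CAPACITY 64

typedef struct {
    uint16_t buf[QUEUE_CAPACITY];  /* token payloads */
    uint16_t head, count;
} port_queue_t;

static bool port_put(port_queue_t *q, uint16_t token) {
    if (q->count == QUEUE_CAPACITY)
        return false;                      /* queue full: drop the token */
    q->buf[(q->head + q->count) % QUEUE_CAPACITY] = token;
    q->count++;
    return true;
}

static bool port_get(port_queue_t *q, uint16_t *token) {
    if (q->count == 0)
        return false;                      /* nothing pending */
    *token = q->buf[q->head];
    q->head = (q->head + 1) % QUEUE_CAPACITY;
    q->count--;
    return true;
}
```

The caller of port_put returns immediately whether or not the token was accepted, matching the decoupled flow of control described above.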

The execution of a TinyGALS system begins with the initialization of all methods specified in

INIT_Ri for all actors Ri. The order in which actors are initialized is the same as the order in which

they are listed in the application configuration file. The order in which methods are initialized for

a single actor is the same as the order in which they are listed in the actor configuration file. After

actor initialization, the TinyGALS runtime system places an initial token at each system start port; these are the input port(s) declared in the appstart section of the application configuration file. If

initial arguments to the port were declared in the application configuration file, these are stored in

the token. For example, in Figure 3.4, the application starts when the runtime system places an

initial token at the input port trigger of SenseActor. The TinyGALS scheduler passes the token

to the linked component method, and the components in the triggered DAG of the starting actor

execute to completion. They may generate one or more events at the output port(s) of the actor.

During execution, interrupts may occur and preempt the normal thread of execution. However,


[Figure: output port A_out of Actor A is connected to input port B_in of Actor B and to input port C_in of Actor C.]

Figure 3.6: A single-output, multiple-input connection.

control eventually returns to the normal thread of execution.

The TinyGALS semantics do not define exactly when the input port is triggered. Section 3.2

discusses the ramifications of token generation order on the determinacy of the system. The current

galsC implementation processes the tokens in the order that they are generated as defined by the

hardware clock. Tokens generated at the same logical time are ordered according to the global

ordering of actor input ports, which the next paragraph discusses. The runtime system maintains

a global event queue which keeps track of the tokens in all actor input port queues in the system.

Currently, the runtime system activates the actors corresponding to the tokens in the global event

queue using FIFO scheduling. More sophisticated scheduling algorithms can be substituted, such

as ones that take care of timing and energy concerns.
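
A minimal sketch of this FIFO activation loop follows; the port and handler names are invented, and the real runtime is generated C code rather than Python:

```python
from collections import deque

# Sketch (not the galsC runtime) of FIFO activation from a global event
# queue: each entry names the input port whose linked component method
# should run next, in the order the tokens were generated.
log = []
port_queues = {"SenseActor.trigger": deque()}
handlers = {"SenseActor.trigger": lambda tok: log.append(("sense", tok))}
global_event_queue = deque()

def put_token(port, token):
    port_queues[port].append(token)      # token into the port's queue
    global_event_queue.append(port)      # matching event, global order

def run_scheduler():
    while global_event_queue:            # FIFO scheduling of actors
        port = global_event_queue.popleft()
        token = port_queues[port].popleft()
        handlers[port](token)            # triggered actor runs to completion

put_token("SenseActor.trigger", 1)
put_token("SenseActor.trigger", 2)
run_scheduler()
assert log == [("sense", 1), ("sense", 2)]
```

A more sophisticated scheduler would replace the `popleft` policy with one aware of timing or energy, as the text notes.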

The previous section discussed limitations on the configuration of links between components

within an actor. Connections between actors are much less restrictive. Cycles are allowed between

actors. This does not lead to reentrancy problems because the queue on an actor input port acts as a

delay in the loop. Actor output ports may be connected to one or more actor input ports, and actor

input ports may be connected to one or more actor output ports. A single-output, multiple-input

connection acts as a fork. For example, in Figure 3.6, every token produced by A_out is duplicated

and triggers both B_in and C_in. Tokens that are produced at the same “time” are processed with

respect to the global input port ordering. Input ports are first ordered by actor order, as they appear

in the application configuration file, then in the order in which they are declared in the actor con-

figuration file. A multiple-output, single-input connection has a merge semantics, such that tokens

from multiple sources are merged into a single stream in the order that the tokens are produced.

This type of merge does not introduce any additional sources of nondeterminacy. See Section 3.2

for a discussion of interrupts and their effect on the order of events in the global event queue.
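
The fork semantics of Figure 3.6 can be sketched as follows, with illustrative port names:

```python
from collections import deque

# Sketch of the single-output, multiple-input fork: a token produced at
# A_out is duplicated onto both destination queues. B_in precedes C_in
# in the global input-port order (actor order, then declaration order),
# so simultaneous tokens would be serviced in that order.
ports = {"B_in": deque(), "C_in": deque()}
fanout = {"A_out": ["B_in", "C_in"]}

def produce(out_port, token):
    for dest in fanout[out_port]:
        ports[dest].append(token)        # one copy per destination port

produce("A_out", "t")
assert list(ports["B_in"]) == ["t"]      # duplicated, not shared
assert list(ports["C_in"]) == ["t"]
```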


TinyGUYS

The TinyGALS programming model has the advantages that actors become decoupled through

message passing and are easy to develop independently. However, each message passed triggers the

scheduler and activates a receiving actor, which may quickly become inefficient if there is global

state that must be updated frequently. The TinyGUYS (Guarded Yet Synchronous) mechanism

provides a way for actors to share global data safely. This is implemented as the parameter feature

in the galsC programming language.

One must be very careful when implementing global data spaces in concurrent programs. Sev-

eral actors may access the same global variables at the same time. It is possible that while an actor

is reading the variables, an interrupt may occur and preempt the read. The interrupt service routine

may modify the global variables. When the actor resumes reading the remaining variables after

handling the interrupt, it may see an inconsistent state. In the TinyGUYS mechanism, global vari-

ables (parameters) are guarded. Actors may read a parameter synchronously (i.e., without delay).

However, writes to the parameter are asynchronous in the sense that all writes are delayed. A write

to a TinyGUYS global variable is actually a write to a copy of the global variable. One can think of

this as a write buffer of size one. Because there is only one buffer per global variable, the last actor

to write to the variable “wins”, i.e., the last value written will be the new value of the global variable.

Parameters are updated atomically by the scheduler only when it is safe (i.e., after an actor finishes

and before the scheduler triggers the next actor). One can think of this as a way of formalizing race

conditions. Section 3.2.2 discusses how to eliminate race conditions.
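
The guarded-variable discipline described above can be sketched as follows; `GuardedVar` and its method names are illustrative, not the galsC parameter API:

```python
# Sketch of the TinyGUYS mechanism: reads are synchronous, writes go to
# a write buffer of size one, and the scheduler commits the buffer only
# at a safe point between actor iterations. The last writer wins.
class GuardedVar:
    def __init__(self, value):
        self._value = value        # value seen by all readers
        self._buffer = None        # write buffer of size one
        self._dirty = False

    def get(self):                 # synchronous read, no delay
        return self._value

    def put(self, value):          # delayed (buffered) write
        self._buffer = value       # a later write overwrites an earlier one
        self._dirty = True

    def commit(self):              # called by the scheduler between actors
        if self._dirty:
            self._value = self._buffer
            self._dirty = False

count = GuardedVar(0)
count.put(1)
count.put(2)                       # "last writer wins"
assert count.get() == 0            # readers still see the old value
count.commit()                     # safe point: scheduler updates
assert count.get() == 2
```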

TinyGUYS have global names that are mapped to the local parameter names of each actor. A

component interface method or an actor port can write to a parameter by calling a connected function

with a single argument. A component interface method or an actor port can read a parameter when

the method or port is invoked by passing the parameter value as one of the arguments. This design

does not require parameter names to appear inside the component name space. One can develop

components in their own scope, independent of the connected parameters.

In TimerActor in Figure 3.3, the Counter.IntOutput.output method has a single argument

which is written to the count parameter whenever the method is called. In SenseActor in Figure

3.3, the count parameter is passed as the last argument to the output port.


3.1.3 Link model within actors

A link x → y inside an actor consists of a source x and a target y.9 The equations below use

regular expressions to describe possible entities of x and y, where l is the local name of a parameter,

p is an actor port name, and f is a component interface function (method):

source = (l)∗ (p | f) (l)∗    (3.4)

target = l | p | f    (3.5)

A trigger is a port or function that appears as the source of a link. A port is triggered when

the scheduler invokes it with the first token in its queue. A function is triggered when it is called by

another function.

A link x → y is valid if the number of arguments and the types of the arguments of the source

match those of the target when the arguments on each side of the arrow are concatenated separately,

similar to the notion of record types [100]. Additionally, a source port must be an input port and

a target port must be an output port, and a source function must be a required method and a target

function must be a provided method. The return type of a trigger must also match that of the target.

For example, suppose f1 is a required method with exactly two arguments. The link (f1, l1) → p1 is valid if p1 is an output port that has exactly three arguments whose types match those of the

left hand side (i.e., the types of the first two arguments of p1 must match those of f1, and the type

of the last argument of p1 must match that of l1) and if the return type of f1 matches that of p1.

Using the regular expression model, the following enumerates the valid types of links, where l

in (t, l) is an abbreviation for any number of parameters appearing before or after the trigger t:

• Without parameters

– p1 → p2 [When the input port p1 is triggered, transfer the token directly from p1 to the

output port p2.]

– p1 → f1 [When the input port p1 is triggered, trigger a function f1.]

– f1 → p1 [When the function f1 is triggered, create a token from the arguments of the

function f1 and send it to the output port p1.]

– f1 → f2 [When the function f1 is triggered, trigger another function f2.]

9This model also applies to connections at the application level. However, the discussed port directions must be reversed: a source port must be an output port and a target port must be an input port. Also, global parameter names should be used instead of local parameter names. Note that functions do not appear at the application level.


Table 3.1: Summary of valid types of links in TinyGALS/galsC.

No parameters    Parameter GET     Parameter PUT
p1 → p2          (p1, l) → p2      p → l
p1 → f1          (f1, l) → p1      f → l

                                   Parameter GET/PUT
f1 → p1          (p1, l) → f1      (p, l1) → l2
f1 → f2          (f1, l) → f2      (f, l1) → l2

• With parameters

– Parameter GET

∗ (p1, l)→ p2 [When the input port p1 is triggered, concatenate the arguments of p1

with the current value of the parameter(s) l, and send the resulting token directly to

the output port p2.]

∗ ( f1, l) → p1 [When the function f1 is triggered, concatenate the arguments of f1

with the current value of the parameter(s) l, and send the resulting token to the

output port p1.]

∗ (p1, l) → f1 [When the input port p1 is triggered, concatenate the arguments of

p1 with the current value of the parameter(s) l, and trigger a function f1 with the

corresponding arguments.]

∗ ( f1, l) → f2 [When the function f1 is triggered, concatenate the arguments of f1

with the current value of the parameter(s) l, and trigger another function f2 with the

corresponding arguments.]

– Parameter PUT

∗ p → l [When the input port p is triggered, write its argument to a parameter l.]

∗ f → l [When the function f is triggered, write its argument to a parameter l.]

– Parameter GET/PUT

∗ (p, l1)→ l2 [When the input port p is triggered, read the current value of the source

parameter l1 and write it to the target parameter l2.]

∗ ( f , l1)→ l2 [When the function f is triggered, read the current value of the source

parameter l1 and write it to the target parameter l2.]

For links with no parameters, the trigger either (a) triggers the connected function or (b) passes

a token to the connected output port. In a parameter GET (read) link, the parameter value(s) are


[Figure 3.7: Type checking example. A call to f() in actor A is connected through the ports and parameters of actor B to a function f() {...} in actor C; the connections are labeled with types τ1 through τ8, with the known types τ1, τ3, τ5, τ8 shown in bold.]

appended to the trigger’s argument list and passed to the connected function or port. In a param-

eter PUT (write) link, the trigger writes its argument to the parameter. In a parameter GET/PUT

(read/write) link, the trigger causes the source parameter to be read and its value stored in the target

parameter. Note that for the number of arguments to match, the trigger in a parameter PUT link must

have only one argument, and the trigger in a parameter GET/PUT link must have no arguments.
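
The validity rule can be sketched as a check on concatenated argument-type lists. The two-argument `f1` follows the earlier example, but the concrete type names are invented:

```python
# Sketch of the link-validity rule: concatenate the argument types on
# each side of the arrow and require an exact match, along with
# matching return types. Types are plain strings; this is not the
# actual galsC checker.
def link_valid(source_args, target_args, source_ret, target_ret):
    return source_args == target_args and source_ret == target_ret

# (f1, l1) -> p1 from the example: f1 has two arguments, parameter l1
# contributes one more, so p1 must have exactly three matching arguments.
f1_args, l1_type, f1_ret = ["int", "float"], "char", "void"
p1_args, p1_ret = ["int", "float", "char"], "void"

assert link_valid(f1_args + [l1_type], p1_args, f1_ret, p1_ret)
assert not link_valid(f1_args, p1_args, f1_ret, p1_ret)  # arity mismatch
```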

What are the semantics of multiple links (i.e., fanout from a function)? For example, what is

the order of computation if one has f1 → l1 and f1 → f2? Or if one has f1 → l1 and f1 → p? In

TinyGALS, the write to the parameter occurs first, before any additional computation or transfer

of control. The buffered parameter value may then get overwritten in the later computation. This

policy provides a consistent view of ordering in the system.

3.1.4 Type inference and type checking

The galsC compiler performs high level type inferencing on the connection graph of an appli-

cation. There are two parts to the type inference system: connections with ports, and connections

with parameters but no ports.10

Ports

In galsC, ports are untyped. The actual types of ports are inferred from the connection graph

of a galsC program. In Figure 3.7, actor A contains a component which has a call to function f with

type signature τ1. The input port of actor B is the target of the concatenation of the output port of

A with a parameter with type τ3. The output port of B is the target of the concatenation of the input

port of B and a parameter with type τ5. The output port of B is directly connected to the input port

of actor C. The input port of C is a trigger for a function with type signature τ8. The known types

(τ1,τ3,τ5,τ8) are shown in bold.

10Connections containing only functions are checked with the nesC type checker.


One can write a type equation for each connection in the system:

τ1 = τ2

τ2 × τ3 = τ4

τ4 × τ5 = τ6

τ6 = τ7

τ7 = τ8

One can then solve the set of equations to determine the types of the ports. A valid system has a

unique solution to the set of equations. The galsC compiler derives types for all ports in the system

by matching the return type and the argument types of all connected upstream and downstream

functions. The galsC compiler detects a type error when the set of equations conflicts with itself or

is unsolvable.
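
The substitution process can be sketched directly on the five equations; tuples stand in for product (concatenated argument) types, and the concrete types are invented for illustration:

```python
# Sketch of solving the port type equations of Figure 3.7 by
# substitution, starting from the known types tau1, tau3, tau5, tau8.
t1, t3, t5, t8 = ("int",), ("float",), ("char",), ("int", "float", "char")

t2 = t1                # tau1 = tau2
t4 = t2 + t3           # tau2 x tau3 = tau4
t6 = t4 + t5           # tau4 x tau5 = tau6
t7 = t6                # tau6 = tau7
assert t7 == t8        # tau7 = tau8: the system has a consistent solution

# If tau8 disagreed with the derived tau7, this final check would fail,
# which is how a conflicting (type-erroneous) set of equations shows up.
```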

Parameters

The type system for parameter connections without ports is straightforward, since there are

only two types of connections: (1) mappings between a global name and a local name, and (2) links

between a function and a local name. Since the types of all of these sources and targets are known,

the type checker merely verifies that all the types in a connection match each other.

3.1.5 Summary

In TinyOS, many components that are wrappers for device drivers are “split phase”, which

means that they are actually both source and triggered components. A higher level component

can call the device driver component to ask for data. This call returns immediately. Later, the

device driver component interrupts with the ready data. The hidden source aspect of these types

of components may lead to TinyOS configurations with race conditions or other synchronization

problems. Although the TinyOS architecture allows components to reject concurrent requests, it is

up to the software developer to write thread-safe code. This job is quite difficult, especially after

components are wired together and may have interleaved events. The previous sections showed

how the TinyGALS component model enables users to analyze potential sources of concurrency

problems more easily by identifying source, triggered, and called components and defined what

kinds of links and connections between components, ports, and parameters are valid.


3.2 Concurrency and Determinacy Issues

Concurrency management is a significant concern in event-driven systems. Poorly imple-

mented systems may suffer from deadlock (i.e., where no tasks can proceed due to blocking on

a shared resource), livelock (i.e., where the system falls into an endless loop and responds to no further

interrupts), and race conditions (i.e., where shared variables are accessed by multiple threads at the

same time).

This section only considers concurrency issues on single processor platforms. In TinyGALS,

all memory is statically allocated; there is no dynamic memory allocation. A TinyGALS program

runs in a single thread of execution (single stack), which may be interrupted by the hardware. An

actor A may begin execution when: (1) the scheduler activates A in response to an event at its

input port, or (2) an interrupt service component within A is triggered by an external interrupt. The

execution activated by the scheduler is called the scheduled context, and the execution triggered

by interrupts is called the interrupt context. Since all scheduled executions of actors are in the

scheduled context and controlled sequentially by the scheduler, the only possibility for cross-actor

concurrent execution is when one actor is in the scheduled context, and one or more other actors are

in an interrupt context.

3.2.1 Concurrency

There are two mechanisms for actors to communicate in TinyGALS: event queues (ports) and

guarded global variables (parameters). Blocking on shared resources (e.g., a blocking read) is not

part of the semantics across actors. Thus,

Theorem 1. Deadlock is not possible across actors.

In event-driven systems, since there are critical system operations, such as enqueuing and

dequeuing events, which are atomic, it is possible for a scheduler to retain control and disable

interrupts indefinitely. In Figure 3.8, the Loop actor is first triggered by an internal interrupt, which

produces an event (token) at the output port. The event loops back to the input port where it is

inserted into the event queue. Interestingly, there is a direct link between the input port and the

output port inside the actor. Can this self-loop prevent further interrupts from entering the system?

Once the event is enqueued, the scheduler first dequeues the event with interrupts disabled,

then calls the function connected to the inside of the input port (in this case the put() function of

the output port). Within the put() function, the code that inserts the event back into the event queue


[Figure 3.8: A self-loop actor triggered by an interrupt. The output port of actor Loop feeds back to its own input port.]

is also atomic. So, without a careful implementation of the scheduler, there is a risk of livelock.

However, in the galsC scheduler, interrupts are enabled between dequeuing the event and enqueuing

the event, so future interrupts will not be blocked. Thus,

Theorem 2. Livelock is not possible across actors.
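
The argument can be sketched as a small simulation: even though each pass atomically re-enqueues the self-loop event, a pending interrupt is serviced in the window where interrupts are enabled. All names are illustrative:

```python
from collections import deque

# Sketch of why the self-loop in Figure 3.8 cannot cause livelock: the
# scheduler re-enables interrupts between dequeuing one event and
# enqueuing the next, so a pending interrupt gets through even though
# every event re-enqueues itself.
queue = deque(["loop"])
pending_interrupts = deque(["irq"])
serviced = []

for _ in range(3):                       # three scheduler passes
    event = queue.popleft()              # atomic dequeue
    # interrupts are enabled here: service anything pending
    if pending_interrupts:
        serviced.append(pending_interrupts.popleft())
    queue.append(event)                  # self-loop: atomic re-enqueue

assert serviced == ["irq"]               # the interrupt was not starved
```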

Race conditions are another major concurrency concern. Since there is shared data between

actors, an actor may be in the midst of writing the data when another actor tries to read the data.

Two actors may also try to write to a shared variable at the same time.

There are two forms of shared data across actors: tokens and parameters. Tokens are stored

in event queues, and access to them is atomic and controlled by the scheduler. Parameters, as

discussed in the previous section, are always guarded; their value updates are again controlled by

the scheduler (where the last value written wins). Thus,

Theorem 3. Race conditions are not possible across actors.

As a result of these claims, concurrency errors will not happen at the application level across

actors. So, programmers can focus on concurrency issues within each actor, which is a problem

with a much smaller scope. These issues were discussed in Section 3.1.2.

3.2.2 Determinacy

Notice that the lack of concurrency errors does not mean TinyGALS programs are determinis-

tic. The system state of a TinyGALS program consists of (1) the internal state of all components, (2)

the contents of the global event queue11 and (3) the values of all global parameters. The question of

determinacy is that given a unique initial state of a TinyGALS program and a set of known interrupts

(in terms of both interrupt time and value), will the program have a unique state trajectory indepen-

dent of the execution/CPU speed? Note that single thread sequential programs, where all inputs are

11The global event queue is defined as the ordered sequence of tokens in the event queues of all actor ports.


[Figure 3.9: Two events are produced at the same time. Actor R produces a token (event, t0) on each of two output ports.]

read into the system, are determinate. Concurrent models, such as Kahn process networks, which

sacrifice real-time properties, can also be determinate [52]. However, for event-driven systems,

determinacy may be sacrificed for reactiveness.

This section analyzes the determinacy property of TinyGALS programs, beginning with def-

initions for a TinyGALS system, system state (including quiescent system state and active system

state), actor iteration (in response to an interrupt and in response to an event), and system execution.

This section also reviews the conditions for well-formedness of a TinyGALS system.

Definition 1 (System). A system consists of an application and a global event queue. Recall from

Equation 3.3 that an application is defined as

A = (GLOBALS_A, ACTORS_A, VARMAPS_A, CONNECTIONS_A, START_A).

Recall that the input port associated with a connection between actors has a FIFO queue for

ordering and storing events destined for the input port. The global event queue provides an ordering

for tokens in all input port queues. Whenever a token is stored in an input port queue, a repre-

sentation of this event is also inserted into the global event queue. Thus, events that are produced

earlier in time with respect to the system clock appear in the global event queue before events that

are produced later in time. Events that are produced at the same time (e.g., as in Figures 3.9 or

3.6) are ordered first by order of appearance in the application's actors list (ACTORS_A), then by order

of appearance in the actor's input ports list (INPORTS′_R, which is an ordered list created from the

actor's input ports set INPORTS_R).
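
This tie-breaking rule can be sketched as a sort key; the actor and port indices below are invented for illustration:

```python
# Sketch of the global event ordering of Definition 1: events are
# ordered by production time, then by the actor's position in the
# application actors list, then by the port's position in the actor's
# input ports list.
actor_index = {"TimerActor": 0, "SenseActor": 1}
port_index = {("TimerActor", "fire"): 0,
              ("SenseActor", "trigger"): 0, ("SenseActor", "data"): 1}

def event_key(event):
    time, actor, port = event
    return (time, actor_index[actor], port_index[(actor, port)])

# Three events produced at the same system-clock time.
events = [(5, "SenseActor", "data"), (5, "TimerActor", "fire"),
          (5, "SenseActor", "trigger")]
events.sort(key=event_key)
assert [e[1:] for e in events] == [("TimerActor", "fire"),
                                   ("SenseActor", "trigger"),
                                   ("SenseActor", "data")]
```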

Definition 2 (System state). The system state consists of four main items:

1. The values of all internal variables of all components (V_Ci).

2. The contents of the global event queue.

3. The contents of all of the queues associated with actor input ports in the application.


4. The values of all TinyGUYS (GLOBALSA).

Recall that the global event queue contains the events in the system, but the actor input ports

contain the data associated with the event, encapsulated as a token.

The system state is either quiescent or active:

Definition 2.1 (Quiescent system state). A system state is quiescent if there are no events in the

global event queue, and hence, no events in any of the actor input port queues in the system.

Definition 2.2 (Active system state). A system state is active if there is at least one event in the

global event queue, and hence, at least one event in the queue of at least one actor input port.

Note that a TinyGALS system starts in an active system state, since execution begins by trig-

gering an actor input port.

Execution of the system can be partitioned into actor iterations based on component execution.

Definition 3 (Component execution). A source component is activated when the hardware it en-

capsulates receives an interrupt. A triggered or called component C is activated when one of its

provided methods is called. Component execution is the execution of the code in the body of the

interrupt service routine or method through which the component has been activated. Note that the

code executed upon component activation may call other methods in the same component or in a

linked component. Component execution also includes execution of all external code until control

returns and execution of the code body has completed.

Definition 4 (Actor iteration). An iteration of an actor R is the execution of a subset of the compo-

nents inside of R in response to either an interrupt or an event at an input port.

The following defines these two types of actor iterations in more detail, including what is meant

by “subset of components.”

Definition 4.1 (Actor iteration in response to an interrupt). Suppose actor R is iterated in response

to interrupt I. Let C be the component that contains the interrupt handler of I. Recall from Section

3.1.2 that C therefore must be a source component. Create a source DAG D by starting with C and

following all forward links between C and other components in R. Iteration of the actor consists of

the execution of the components in D beginning with C. Note that iteration of the actor may cause

it to produce one or more events on its output port(s).


Definition 4.2 (Actor iteration in response to an event). Suppose actor R is iterated in response to

an event E stored at the head of one of its input port queues, Q. Let C be the component linked

to the input port of Q. Recall from Section 3.1.2 that C therefore must be a triggered component.

Create a triggered DAG D by starting with C and following all forward links between C and other

components in R. Iteration of the actor consists of the execution of the components in D beginning

with C. As with the interrupt case, iteration of the actor may cause it to produce one or more events

on its output port(s).
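
The DAG construction shared by Definitions 4.1 and 4.2 can be sketched as forward reachability over component links; the component graph below is invented:

```python
from collections import deque

# Sketch of building the DAG executed in an actor iteration: start with
# the activated component C and follow all forward links between C and
# the other components in the actor.
links = {"C": ["D", "E"], "D": ["F"], "E": [], "F": []}

def forward_dag(start):
    seen, order = {start}, []
    frontier = deque([start])
    while frontier:                   # breadth-first forward reachability
        c = frontier.popleft()
        order.append(c)
        for nxt in links.get(c, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return order

dag = forward_dag("C")
assert dag == ["C", "D", "E", "F"]    # execution begins with C
```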

The following discusses how to choose the actor iteration order.

Definition 5 (System execution). Given a system state and zero or more interrupts, system execution

is the iteration of actors until the system reaches a quiescent state. The order in which actors are

executed is the same as the order of events in the global event queue.

Conditions for well-formedness Below is a summary of the conditions that the components

within a single TinyGALS actor must satisfy to be well-formed and avoid concurrency problems, as

discussed in Sections 3.1.2 and 3.2.1.

• Source components may neither also be triggered components nor called components.

• Cycles among components within an actor are not allowed, but loops around actors are al-

lowed.

• Component source DAGs and triggered DAGs must be disconnected.

• Component source DAGs must not be connected to other source DAGs, but triggered DAGs

may be connected to other triggered DAGs. This assumes that an interrupt whose handler is

running is masked, but other interrupts are not.

• Outgoing component methods may be associated with a single method of another component,

or with one or more output ports.

• Input ports may be associated with a single method of a single component, or with one or

more output ports.


[Figure 3.10: A single interrupt. From quiescent state q0, interrupt I starts a sequence of actor iterations r0, r1, ..., rn through active states a_{0,0}, a_{0,1}, ..., a_{0,n−1}, ending in quiescent state q1.]

Determinacy

Given the definitions in the previous section, this section first discusses determinism of a

TinyGALS system in the case of a single interrupt occurring in a quiescent state. This section

then discusses determinism for one or more interrupts during actor iteration in the cases (1) where

there are no global variables and (2) where there are global variables.

In the intuitive notion of determinacy, given an initial quiescent system state and a set of inter-

rupts that occur at known times, the system always produces the same outputs and ends up in the

same state after responding to the interrupts.

Theorem 4 (Determinacy). A system is determinate if, for each quiescent state and a single inter-

rupt, there is only one system execution path.

Recall that a TinyGALS system starts in an active system state. The application start port is

an actor input port which is in turn linked to a component C inside the actor. The component C

is a triggered component, which is part of a DAG. Components in this triggered DAG execute and

may generate events at the output port(s) of the actor. System execution proceeds until the system

reaches a quiescent state. From this quiescent state, one can analyze the determinacy of a TinyGALS

system.

Figure 3.10 depicts iteration of a TinyGALS system between two quiescent states due to ac-

tivation by an interrupt I. A TinyGALS system is determinate, since the system execution path is

the order in which the actors are iterated, and in each of the steps r_0, r_1, ..., r_n, the actor selected is

determined by the order of events in the global event queue.

What if one or more interrupts occur during an actor iteration, that is, between quiescent states,

as is usually true in an event-driven system?


[Figure 3.11: One or more interrupts where actors have delayed output. Interrupts I1, I2, ..., In occur during the iteration begun by I0, taking the system from q0 through active states a^x_{0,0}, a^x_{0,1}, ..., a^x_{0,n} to q1.]

Determinacy of a system without global variables. This section first examines the case where

there are no TinyGUYS global variables.

Consider an actor R that contains a component C which produces events on the output ports of

R. Suppose the iteration of actor R is interrupted one or more times. Since source DAGs must not be

connected to triggered DAGs, the interrupt(s) cannot cause the production of events on output ports

of R that would be used in the case of a normal uninterrupted iteration. However, the interrupt(s)

may cause insertion of events into other actor input port queues, and hence insertions into the global

event queue. Depending on the relative timing between the interrupts and the production of events

by C at the output ports of R, the order of events in the global event queue may not be consistent

between multiple runs of the system if the same interrupts occur during the same actor iteration, but

at slightly different times. This is a source of non-determinacy.

A partial solution for reducing non-determinacy in the system is to delay producing outputs

from the actor being iterated until the end of its iteration. This approach is taken by models of

computation such as timed multitasking [69] and Giotto [44]. If one knows the order of interrupts,

then one can predict the state of the system after a single actor iteration even if it is interrupted one or

more times. Figure 3.11 shows a system execution in which a single actor iteration is interrupted by

multiple interrupts. In the TinyGALS notation, a^i_{j,k} refers to an active system state after an interrupt

I_i starting from quiescent state q_j and after actor iteration r_k. In Figure 3.11, the superscript x in a^x_{j,k}

is a shorthand for the sequence of interrupts I_0, I_1, I_2, ..., I_n.

In order to determine the value of active system state a^x_{j,k}, one can “add” the combined system

states. Suppose active state a^1_{0,0} would be the next state after an iteration of the actor corresponding

to interrupt I_1 from quiescent state q_0, and that active state a^2_{0,0} would be the next state after an

iteration of the actor corresponding to interrupt I_2 from q_0. This is illustrated in Figure 3.12.

This section assumes that the handlers for interrupts I_1, I_2, ..., I_n execute quickly enough such

that they are not interleaved (e.g., I_2 does not interrupt the handling of I_1). Then the system state


[Figure 3.12: Active system state after one interrupt. From quiescent state q0, interrupt I_i leads to active state a^i_{0,0}.]

[Figure 3.13: Active system state determined by adding the active system state after one non-interleaved interrupt. From q0, interrupts I0, I1, I2, ..., In produce the running sums a^1_{0,0}, then a^1_{0,0} + a^2_{0,0}, ..., up to a^1_{0,0} + a^2_{0,0} + ... + a^n_{0,0} + a^0_{0,0}.]

before the completion of the iteration of actor R in response to interrupt I_0, but after the completion

of the interrupt handlers for interrupts I_1 and I_2, would be a^1_{0,0} + a^2_{0,0}, where the value of this

expression is the system state in which the new events produced in active system state a^2_{0,0} are

inserted (or “appended”) into the corresponding actor input port queues in active system state a^1_{0,0}.

One can extend this to any finite number of interrupts, I_n, as shown in Figure 3.13. It is necessary

that the number of interrupts be finite for liveness of the system. From a performance perspective,

it is also necessary that interrupt handling be fast enough that the handling of the first interrupt I_0

completes in a reasonable length of time. If the interrupts are interleaved, one must add the system

state (append actor input port queue contents) in the order in which the interrupt handlers finish.

Another solution, which leads to greater predictability in the system, is to preschedule actor

iterations. That is, if an interrupt occurs, a sequence of actor iterations is scheduled and executed,

during which interrupts are masked. One can also queue interrupts in order to eliminate preemption.

Then, system execution is deterministic for a fixed sequence of interrupts. However, both of these approaches reduce the reactiveness of the system.


Determinacy of a system with global variables. This section now discusses system determinacy

in the case where there are TinyGUYS global variables.

Suppose that actor R writes to a global variable. Also suppose that the iteration of actor R is

interrupted, and a component in the interrupting source DAG writes to the same global variable.

Then without timing information, one cannot predict the final value of the global variable at the end

of the iteration. (Note that, when read, a global variable always contains the same value throughout an entire actor iteration.) As currently defined, the state of the system after the iteration of actor R

is interrupted by one or more interrupts is highly dependent on the time at which the components in

R write to the global variable(s). There are several possible alternatives for eliminating this source

of nondeterminacy.

Solution 1 Allow only one writer for each TinyGUYS global variable.

Solution 2 Allow multiple writers, but only if they can never write at the same time. That is, if a

component in a triggered DAG writes to a TinyGUYS global variable, no component in

any source DAG can be a writer (but components in other triggered DAGs are allowed

since they cannot execute at the same time). Likewise, if a component in a source DAG

writes to a TinyGUYS global variable then no component in any triggered DAG can be

a writer. Components in other source DAGs are only allowed to write if all interrupts

are masked.

Solution 3 Delay writes to a TinyGUYS global variable by an iterating actor until the end of the

iteration.

Solution 4 Prioritize writes such that once a high priority writer has written to the TinyGUYS global

variables, lower priority writes are lost.

3.2.3 Summary

A TinyGALS program is determinate in a restricted case, where there is pure reactive execution. That is, interrupts occur only at quiescent states. This may require that the processing speed be

quick enough to process all triggered execution before the next interrupt occurs. An extreme version

of this case is the “synchronous” assumption in synchronous/reactive models, where the processing

speed is infinitely fast, and it takes zero time to react to external events [39].

In general, a TinyGALS program is non-determinate. The source of non-determinacy is the

preemptive handling of interrupts. Suppose that while an actor is being iterated, it is interrupted by


another actor. If both of these actors produce events at their output ports, the order of events in the

global event queue may not be consistent when the system is executed at different speeds. If both

of these actors write to a global variable (i.e., a parameter), then without exact timing information,

one cannot predict the final value of the global variable at the end of the iteration.

However, event-driven systems are usually designed to be reactive. In these cases, interrupts

should be considered high-priority events that affect the system state as soon as possible.

3.3 Code Generation

The highly structured architecture of the TinyGALS model enables automatic generation of

the communication and scheduling code for galsC programs, allowing software developers to avoid

writing error-prone concurrency control code. The galsC compiler takes advantage of a real compiler backend. The galsC toolset is an extension of the nesC 1.1.1 toolset, and can compile both nesC and galsC programs. The galsC compiler uses traditional compiler techniques, including type checking, dead code elimination, and function inlining. The galsC compiler also inherits the data-race detection feature of nesC. The detection feature is modified for galsC, since the decoupling of

execution through ports eliminates some possible sources of race conditions. The galsC compiler

uses the link model described in Section 3.1.3 to check links and connections, and to infer and check

types in the system graph of ports, parameters, and functions (methods).

Given the definitions for the components, actors, and application, the galsC compiler automatically generates all the code necessary for (1) component links and actor connections, (2) communication between actors, (3) TinyGUYS global variable reads and writes, and (4) system

initialization and start of execution. The output of the galsC compiler can be cross-compiled for any

platform used with TinyOS, including the Berkeley motes.

The discussion throughout this section uses the example system illustrated in Figure 3.14. This

is an annotated version of the SenseTag application example shown in Figure 3.1 at the beginning

of this chapter. Tables 3.2 and 3.3 show a summary of the generated functions and data structures

for galsC.

This section also gives an overview of the implementation of the TinyGALS scheduler and

how it interacts with TinyOS, as well as data on the memory usage of TinyGALS.


[Figure: TimerActor (containing the TimerC and Trigger components) and SenseActor (containing the SenseToInt and Photo components), annotated with the generated code: the TinyGALS scheduler with GALSC_sched_init(), GALSC_sched_start(), and GALSC_eventqueue[]; the port functions SenseActor$trigger$put() and SenseActor$trigger$get(); the port queue SenseActor$trigger$arg0[64] with SenseActor$trigger$head and SenseActor$trigger$count; and the parameter storage GALSC_params and GALSC_params_buffer for uint16_t count = 0.]

Figure 3.14: Code generation for the SenseTag application.

Table 3.2: Generated code for ports in galsC.

Function or variable name   Per port^12   Function   Description
GALSC_sched_init()                        X          Initialize scheduler data structures.
GALSC_sched_start()                       X          Put initial tokens into input port queues.
GALSC_eventqueue[]                                   Event queue for the TinyGALS scheduler.
actor$port$put()            X             X          Put token into input port queue.
actor$port$get()            X             X          Get token out of input port queue.
actor$port$argi[]           X                        Queue for the ith argument of the input port.^13
actor$port$head             X                        Points to the beginning of the input port queue.^13
actor$port$count            X                        Number of tokens in the input port queue.


Table 3.3: Generated code for parameters (TinyGUYS) in galsC.

Function or variable name   Per parameter^14   Function   Description
GALSC_params                                              Contains all of the parameters.
GALSC_params_buffer                                       Copy of GALSC_params.
parameter$put()             X                  X          Write to parameter buffer.
parameter$get()             X                  X          Read from parameter.

3.3.1 Links and connections

The compiler generates a set of aliases and mapping functions that create the links between components, as well as the connections between actors. The mapping functions for the links between components are the same as in the original nesC compiler; these are intermediate functions that call the destination function.

In the example in Figure 3.14, for the links between the TimerControl interfaces of the Trigger and TimerC components, the galsC compiler generates an alias and a mapping function for each method of the interface. For the init() method of the TimerControl interface^15, the alias and destination for the link is TimerM$StdControl$init() (see Figure 3.2 for the source code of the TimerC and TimerM components). The galsC compiler generates a mapping function named Trigger$TimerControl$init(), which calls TimerM$StdControl$init(). The galsC compiler also

generates similar aliases and mapping functions for connections between actors, though the called

function is a put() or get() function for an actor port, as detailed in the next section.

3.3.2 Communication

The compiler automatically generates a set of scheduler data structures and functions for each

connection between actors.

For each input port of an actor, the compiler generates a queue of width m and length n, where

m is the number of arguments in the linked component method, and n is the length specified by the

programmer in the application definition file. If the linked component method has no arguments,

then as an optimization, the compiler does not generate a queue for the port, but it still reserves space for events in the scheduler event queue. The compiler also generates a pointer and a counter for each input port to keep track of the location and number of tokens in the queue.

12 "Per port" indicates that this function or variable is generated for each input port. If not indicated, there is only one instance of the function or variable for the entire galsC program.

13 This variable is not generated if the port has no arguments (i.e., the token contains no data).

14 "Per parameter" indicates that this function or variable is generated for each parameter. If not indicated, there is only one instance of the function or variable for the entire galsC program.

15 Here, TimerControl is an alias for StdControl that is explicitly declared in the declaration of the Trigger component using the as keyword in nesC.

In the example in Figure 3.14, for the definition of the trigger input port of SenseActor, the galsC compiler generates an input port queue of length 64 called SenseActor$trigger$arg0[]^16, as well as the variables SenseActor$trigger$head and SenseActor$trigger$count.

The galsC compiler also generates a put() and get() function for each input port. The put()

function handles the actual copying of data to the input port queue. The put() function also adds the

port identifier to the scheduler event queue so that the scheduler activates the actor at a later time.

For each link between a component method and an actor output port, the galsC compiler generates a mapping function, as described in the previous section. The mapping function is called

whenever a method of a component wishes to write to an output port, which in turn calls the linked

input port put() function.

In the example in Figure 3.14, the galsC compiler generates a mapping function TimerActor$Trigger$trigger() for the trigger method of component Trigger in TimerActor, and generates functions SenseActor$trigger$put() and SenseActor$trigger$get() for the input port trigger of SenseActor. The mapping function TimerActor$Trigger$trigger() in turn calls SenseActor$trigger$put() to insert data into the queue. It modifies SenseActor$trigger$head and SenseActor$trigger$count to keep track of the queue contents.

If the queue is full when attempting to insert data, one can adopt one of several strategies. The galsC scheduler currently takes the simple approach of dropping events that occur

when the queue is full. However, an alternate method is to generate a callback function which

attempts to re-queue the event at a later time. Yet another approach would be to place a higher

priority on more recent events by deleting the oldest event in the queue to make room for the new

event.

For each link between a component method and an actor input port, the galsC compiler also

generates a mapping function, as described in the previous section. The mapping function calls

the get() function of the linked input port. When the scheduler activates an actor via an input

port, the system first calls this generated function to remove data from the input port queue and

pass it to the component method. In the example, the system calls SenseActor$trigger$get()

when the scheduler activates SenseActor to remove data queued in SenseActor$trigger$arg0[0].

The scheduler also modifies SenseActor$trigger$head and SenseActor$trigger$count before calling the trigger() method of the SenseToInt component with the newly removed data as the argument.

16 TimerActor.Trigger.trigger() is a method with one argument.

3.3.3 TinyGUYS

The compiler generates a pair of data structures and a pair of access functions for each

TinyGUYS global variable declared in the application definition. The pair of data structures consists

of a data storage location of the type specified in the actor definition that uses the global variable,

along with a buffer for the storage location. The pair of access functions consists of a get() function

that returns the value of the global variable, and a put() function that stores a new value for the

variable in the variable’s buffer. The mapping functions generated for the component connections

to TinyGUYS parameters call these put() and get() functions. A generated flag indicates whether

the scheduler needs to update the variables by copying data from their buffers.

For the example in Figure 3.14, the galsC compiler generates a global variable named GALSC_params.count, along with a buffer named GALSC_params_buffer.count. The code generator also creates functions count$put() and count$get().

3.3.4 System initialization and start of execution

The code generator creates a system-level initialization function called GALSC_sched_init(),

which initializes the scheduler data structures. The code generator also connects the StdControl

interfaces listed in the actorControl section of each actor to the Main component used in TinyOS

to initialize the system. The order of actors listed in the application definition determines the order

in which the interfaces are connected.

The code generator also creates an application start function called GALSC_sched_start().

This function places initial tokens into the input port queues specified in the appstart section of the

application definition.

In the source code shown in Figure 3.4, SenseActor.trigger() is listed in the appstart section of the application definition. Therefore, the GALSC_sched_start() function calls the SenseActor$trigger$put() function at the start of the system.

3.3.5 Scheduling

Execution of a TinyGALS system begins in the scheduler, which performs all of the runtime

initialization. Figure 3.15 shows the TinyGALS scheduling algorithm. There is a single scheduler


if there is an event in the global event queue then
    if any TinyGUYS have been modified then
        Copy buffered values into variables.
    end if
    Get token corresponding to event out of input port.
    Pass value to the method linked to the input port.
else if there is a TinyOS task then
    Take task out of task queue.
    Run task.
end if

Figure 3.15: TinyGALS scheduling algorithm.

in TinyGALS which checks the global event queue for events. If the global event queue contains an

event, the scheduler first copies buffered values into the actual storage for any modified TinyGUYS

global variables. The scheduler removes the token corresponding to the event from the appropriate

actor input port and passes the value of the token to the component method linked to the input port.

If the global event queue contains no events, the scheduler runs any posted TinyOS tasks. The

algorithm loops until there are no events or TinyOS tasks, at which point the system goes to sleep.

The TinyGALS scheduler is a two-level scheduler. TinyGALS actors run at the highest priority,

and TinyOS tasks run at the lowest priority. Note that the TinyOS scheduler is included as a subset

of the TinyGALS scheduler for backwards compatibility with TinyOS tasks. If TinyOS tasks are

not used, the TinyGALS scheduler is about the same size as the original TinyOS scheduler.

The TinyGALS programming model removes the need for TinyOS tasks. Both triggered actors

in TinyGALS and tasks in TinyOS provide a method for deferring computation. However, TinyOS

tasks are not explicitly defined in the interface of the component, so it is difficult for a developer

wiring off-the-shelf components together to predict what non-interrupt driven computations will run

in the system. In TinyOS, tasks must be short; lengthy operations should be spread across multiple

tasks. However, since there is no communication between tasks, the only way to share data is

through the internal state of a component. The user must write synchronization code to ensure that

there are no race conditions when multiple threads of execution access this data. TinyGALS actors,

on the other hand, allow the developer to explicitly define “tasks” at the application level, which is

a more natural way to write applications. The asynchronous and synchronous parts of the system

are clearly separated to provide a well-defined model of computation, which leads to programs that


are easier to debug. The globally asynchronous nature of TinyGALS provides a way for tasks to

communicate. The developer has no need to write synchronization code when using TinyGUYS to

share data between tasks; the galsC compiler automatically generates the code.

3.3.6 Memory usage

TinyGALS provides an improved programming model in exchange for a minimal application-dependent increase in code size for scheduling and communication between actors. For a simple galsC photosensor application, the initialization and scheduling code is 662 bytes, compared to 564 bytes for the original nesC code. The get() and put() functions for a port with one argument of type uint8_t together use 208 bytes. The get() and put() functions for a parameter of type uint16_t use 30 bytes. The scheduler event queue size is equal to the sum of the user-allocated sizes for each port

connection (depends on the size of the data type).

Thus, memory usage of a TinyGALS application is determined mainly by the user-specified

queue sizes and the total number of ports in the system. The TinyGALS communication framework

is very lightweight, since event queues are generated as application-specific data structures.

3.4 Example

To illustrate the effectiveness of the galsC language, consider a classical sensor network application that detects and monitors point-source targets. A set of sensor nodes (motes) is deployed in

a 2-D field. To simplify the discussion, assume that the motes are deployed on a perturbed grid, as

shown in Figure 3.16. The goal of the sensor network is to detect moving objects modeled as point

signal sources, and to report the detection to a central base station, located at the lower left corner

of the field. Note that the goal here is to illustrate the language, rather than to develop sophisticated

algorithms to solve the problem optimally.

Assume that the motes know their locations on the grid and the grid size. The application

primarily consists of two tasks: (1) exchanging local sensor readings to determine the “leader”

responsible for reporting a detection, and (2) multi-hop forwarding of the report messages to the base

station. For simplicity, the leader election is achieved by having every mote periodically broadcast

a packet containing the location of the mote and its sensor reading. These packets also serve as

beacons to establish a multi-hop routing structure.

The multi-hop routing is implemented as a routing tree rooted at the base station. Assume


[Figure: grid of motes with the base station at the lower left corner.]

Figure 3.16: Sensor array for object detection and reporting.

that no mote has the global topology of the network; a mote finds out its parent in the tree by

eavesdropping on other messages. These messages include sensor reading broadcasts and forwarded

report messages. Every message contains the hop count of the sender, which indicates the level of

the sender in the routing tree. For example, the mote directly connected to the base station has hop

count 0. Whenever it broadcasts a message, every node that can overhear the message notes that

it is probably one hop away from the base station. The reachable nodes of a wireless broadcast

may have a complicated shape, as illustrated by the dashed line in Figure 3.16. To compensate for

the unreliable and sometimes asymmetric wireless communication links, a mote maintains a list

of senders it has heard in the past T seconds and chooses the most reliable one (measured by, for

example, a trade-off between low hop count and message repeatability) as its parent node. It then

calculates its own hop count from its parent’s hop count.

Figure 3.17 shows a high-level view of the galsC implementation of the object detection application. All motes run identical code, modulo their locations. Two types of event sources

drive the execution of a mote—clock interrupts and received messages. Similar to the example from

the beginning of the chapter in Figure 3.1, the TimerActor handles clock interrupts and updates the

latest timer count in a parameter named timeCount. Every half second, TimerActor emits a token

that triggers the SenseAndSend actor.

The MessageReceiver actor receives messages from the radio and chooses an action based

on the message type:


• If the message is a local broadcast, the actor updates the neighborReadings table. Note that

since only the latest neighbor sensor reading matters, the overriding semantics of TinyGUYS

variables is a natural fit.

• Also for each broadcast message, the actor updates an internal routing table by looking at

the repetition frequency of the sender node. Note that it requires the timeCount value to

determine the rate of the messages heard. Whenever there is a change of the desired parent

node, and thus this node’s hop count, it updates the parentNode and hopCount parameters.

• If the message is a forwarding message, the actor sends the content of the message to the

downstream MessageForwarder actor.

The SenseAndSend actor activates the ADC (analog-to-digital converter) to get a sensor

reading. Once the sensor reading is available, the actor queues a local broadcast of the sensor

reading. The actor also compares its own reading with the latest values from its neighbors.17 If

this mote has the highest sensor reading (i.e., it is closest to the signal source), SenseAndSend

generates a report message and queues it with the MessageForwarder actor.

Both the LocalBroadcast actor and the MessageForwarder actor send out packets with this

mote's hopCount so that other motes can use it to build the multi-hop routing tree. The MessageForwarder actor also takes the parentNode ID as part of its input token, merged with the requests

from SenseAndSend and MessageReceiver.

3.5 Summary

This chapter described the TinyGALS programming model for event-driven embedded systems

such as sensor networks, and the galsC programming language that implements the programming

model. The globally asynchronous, locally synchronous model allows developers to use high-level

constructs such as ports and parameters to create thread-safe, multitasking programs based on the

actor model.

At the local level, software components are linked via synchronous method calls to form actors. At the global level, actors communicate with each other asynchronously via message passing,

which separates the flow of control between actors. A complementary model called TinyGUYS is a

guarded yet synchronous model designed to allow thread-safe sharing of global state between actors

via parameters without explicitly passing messages.

17 Here, the neighbors are defined as the motes directly above, below, left, and right of this mote in the grid.


[Figure: the actors TimerActor, SenseAndSend, LocalBroadcast, MessageReceiver, and MessageForwarder, connected through the parameters timeCount, neighborReadings, hopCount, and parentNode.]

Figure 3.17: Top-level, per-node view of the object detection application.

This chapter also described a type system for checking connections across synchronous and asynchronous communication boundaries. The galsC compiler automatically generates communication and scheduling code for programs specified in the galsC language, which allows developers to avoid writing error-prone task synchronization code. Having a well-structured concurrency model at the application level greatly reduces the risk of concurrency errors, such as deadlock and race conditions. The galsC compiler extends the nesC compiler, which allows galsC to have traditional type checking, dead code elimination, and function inlining, as well as checking for possible race conditions. The language and compiler are implemented for the Berkeley motes and extend TinyOS/nesC by providing a higher programming abstraction level than the TinyOS primitives.


Chapter 4

Viptos

In The Mythical Man Month [17], Frederick P. Brooks, Jr. writes about requirements refinement

and rapid prototyping:

The hardest single part of building a software system is deciding precisely what to build. No other part of the conceptual work is so difficult as establishing the detailed technical requirements, including all the interfaces to people, to machines, and to other software systems. No other part of the work so cripples the resulting system if done wrong. No other part is more difficult to rectify later.

Therefore the most important function that software builders do for their clients is the iterative extraction and refinement of the product requirements. For the truth is, the clients do not know what they want. They usually do not know what questions must be answered, and they almost never have thought of the problem in the detail that must be specified. Even the simple answer—“Make the new software system work like our old manual information-processing system”—is in fact too simple. Clients never want exactly that. Complex software systems are, moreover, things that act, that move, that work. The dynamics of that action are hard to imagine. So in planning any software activity, it is necessary to allow for an extensive iteration between the client and the designer as part of the system definition.

Brooks later quotes Harel, author of STATEMATE [42], in the twentieth-anniversary edition

of The Mythical Man Month [17]:

Harel argues strongly that much of the conceptual construct of software is inherently topological in nature and these relationships have natural counterparts in spatial/graphical representations:

Using appropriate visual formalisms can have a spectacular effect on engineers and programmers. Moreover this effect is not limited to mere accidental issues; the quality and expedition of their very thinking was found to be improved. Successful system development in the future will revolve around


visual representations. We will first conceptualize, using the “proper” entities and relationships, and then formulate and reformulate our conceptions as a series of increasingly more comprehensive models represented in an appropriate combination of visual languages. A combination it must be, since system models have several facets, each of which conjures up different kinds of mental images.

As discussed in Chapter 1, most existing tools for wireless sensor networks focus on either

design, simulation, or deployment. None of these allow extensive iteration between design and

implementation, especially in an intuitive, visual manner.

This chapter presents Viptos (Visual Ptolemy and TinyOS), a joint modeling and design environment for wireless networks and sensor node software. Viptos is built on Ptolemy II, a graphical modeling and simulation environment for embedded systems, and TOSSIM, an interrupt-level

discrete-event simulator for homogeneous TinyOS networks. TinyOS was chosen because of its

large and active user base in the wireless sensor network community, and its event-driven execution

model, which ties in well with an actor-oriented approach.

A TinyOS program consists of a graph of components that are written in an object-oriented

style using nesC [32], an extension to the C programming language. TinyOS application developers

can use TOSSIM [65], a TinyOS simulator for the PC that can execute nesC programs designed for

a mote. TOSSIM contains a discrete-event simulation engine, which allows modeling of various

hardware and other interrupt events. Although a large community uses TinyOS in simulation to

develop and test various algorithms and protocols, they face some key limitations when using the

nesC/TinyOS/TOSSIM programming toolsuite. Users may choose from a few built-in radio connectivity models in TOSSIM, but it is difficult to use other models. TOSSIM can efficiently model

large homogeneous networks where the same nesC code is run on every simulated node, but it does

not allow simulation of networks that contain different programs. Additionally, a TinyOS program

consists of a graph of mostly pre-existing nesC components; users must write their programs in

a multi-file, text-based format, even though a graphical block diagram programming environment

would be much more intuitive. Similar barriers to integrated design and deployment exist for other

popular wireless sensor network development platforms, as discussed in Chapter 1.

To address these problems, consider VisualSense [8], a Ptolemy II-based graphical modeling

and simulation framework for wireless sensor networks that supports actor-oriented definition of

sensor nodes, wireless communication channels, physical media such as acoustic channels, and

wired subsystems. VisualSense, however, does not provide a mechanism for transitioning from a

sensor network application developed within the framework to an implementation for real hard-

Page 77: Actor-Oriented Programming for Wireless Sensor Networks · 2007-08-30 · Actor-Oriented Programming for Wireless Sensor Networks by Elaine Cheong B.S. (University of Maryland, College

61

ware without rewriting the code from scratch for the target platform. VisualSense mainly provides

an abstract, mathematically-based modeling environment, and node models must be created from

scratch.

Integrating TinyOS and VisualSense combines the best of both worlds. TinyOS provides a

platform that works on real hardware with a library of components that implement low-level rou-

tines. VisualSense provides a graphical modeling environment that supports hierarchical, hetero-

geneous systems. The result, Viptos, allows networked embedded systems developers to construct

block and arrow diagrams to create TinyOS programs from any standard library of TinyOS com-

ponents written in nesC. Viptos automatically transforms the diagram into a nesC program that can

be compiled and downloaded from within the graphical environment onto any TinyOS-supported

target platform. Viptos also includes the full capabilities of VisualSense, including modeling of

communication channels, networks, and non-TinyOS nodes. It presents a major improvement over

VisualSense by allowing developers to refine high-level wireless sensor network simulations down

to real-code simulation and deployment, and adds much-needed capabilities to TOSSIM by allowing

simulation of heterogeneous networks. Viptos provides a bridge between Ptolemy II and TOSSIM

by providing interrupt-level simulation of actual TinyOS programs, with packet-level simulation of

the network, while allowing the developer to use other models of computation available in Ptolemy

II for modeling the physical environment and other parts of the system. This framework allows

application developers to easily transition between high-level design and simulation of algorithms

to low-level implementation, simulation, and deployment.

The work presented in this chapter has three main contributions. First, it addresses a need for a

unified wireless sensor network development environment that allows abstract modeling and refine-

ment to low-level simulation and deployment. Second, it provides insights into the integration of

the semantics of two different simulation systems, with different representations of software compo-

nents, programming languages, type systems, and schedulers. Third, it shows through evaluation

that the implementation of the combined system is linearly scalable in the number of nodes, and

even without aggressive performance tuning, can simulate moderately large, heterogeneous sensor

networks effectively.

Section 4.1 describes the architecture of the integrated TinyOS and Ptolemy II toolchain and

investigates the semantics of this interface. Section 4.2 evaluates the performance of Viptos. Section

4.3 summarizes this chapter. Related work is presented separately, in Chapter 6 (Section 6.2).


4.1 Design

Viptos provides an integrated toolchain for designing, simulating, and deploying sensor net-

work applications by integrating the programming and execution models and the component li-

braries of two systems: Ptolemy II/VisualSense and TinyOS/TOSSIM. This section describes the

architecture of this integrated system in detail, including the representation of nesC components, the

transformation of the nesC components into this representation, the generation of deployment and

simulation code for TinyOS programs developed in Viptos, and the simulation of sensor network

models that include nodes running TinyOS.

4.1.1 Representation of nesC components

Let us review the basics of the nesC programming language used in TinyOS. A nesC compo-

nent exposes a set of interfaces. An interface consists of a set of methods. A method is known as

either a command or an event. A nesC component implements its provides methods and expects

other components to implement its uses methods. A nesC component is either a configuration that

contains a wiring of other components, or a module that contains an implementation of its interface

methods. A TinyOS program consists of a set of nesC components, where the top-level file that

describes the application is a nesC component that exposes no interface methods.

Figure 4.1(a) shows a TinyOS program called SenseToLeds that displays the value of a pho-

tosensor in binary on the LEDs of a mote. SenseToLeds contains a wiring of the components

Main, SenseToInt (whose source code is shown in Figure 4.1(b)), IntToLeds, TimerC, and De-

moSensorC. These components are just a few of the nesC components that are available in the

TinyOS component library.

NesC interfaces can also be parameterized to provide multiple instances of the same interface

in a single component. In Figure 4.1(a), the TimerC.Timer interface is parameterized. The Timer

interface of SenseToInt connects to a unique instance of the corresponding interface of TimerC.

If another component connects to the TimerC.Timer interface, it connects to a different instance.

Each timer can be initialized with different periods.

In Ptolemy II, basic executable code blocks are called actors and may contain input and/or

output ports. A port may be a simple port that allows only a single connection, or it may be a

multiport that allows multiple connections. Fan-in to, or fan-out from, simple ports may be achieved

by placing a relation in the path of the connection. A code block is stored in a class, and an actor is

an instance of the class.


configuration SenseToLeds {} implementation {
  components Main, SenseToInt, IntToLeds,
    TimerC, DemoSensorC as Sensor;

  Main.StdControl -> SenseToInt;
  Main.StdControl -> IntToLeds;
  SenseToInt.Timer ->
    TimerC.Timer[unique("Timer")];
  SenseToInt.TimerControl -> TimerC;
  SenseToInt.ADC -> Sensor;
  SenseToInt.ADCControl -> Sensor;
  SenseToInt.IntOutput -> IntToLeds;
}

(a)

module SenseToInt {
  provides {
    interface StdControl;
  }
  uses {
    interface Timer;
    interface StdControl as TimerControl;
    interface ADC;
    interface StdControl as ADCControl;
    interface IntOutput;
  }
} implementation {
  ...
}

(b)

Figure 4.1: Sample nesC source code.

Table 4.1: Representation scheme for nesC components in Viptos.

NesC construct                          Ptolemy II construct   Ptolemy II graphical icon
component                               class                  block
uses interface                          output port            outward pointing triangle
provides interface                      input port             inward pointing triangle
non-parameterized interface             simple port            black triangle
single-index parameterized interface1   multiport              white triangle
fan-in or fan-out                       relation               black diamond

Viptos uses the representation scheme shown in Table 4.1 for the various parts of nesC com-

ponents. Figure 4.2(c) shows a graphical representation in Viptos of the equivalent wiring diagram

for the SenseToLeds configuration shown in Figure 4.1(a). Relations are represented by diamond-

shaped icons. Note that the TimerC component in Figure 4.2(c) provides a parameterized interface,

or input multiport, as indicated by the white triangle pointing into the block. Non-parameterized

interfaces, or simple ports, are represented by black triangles.

Viptos can serve as a program design and editing environment—users design programs by

manipulating the Ptolemy II graphical icons on the screen, then generate code using the automatic

process described later in Sections 4.1.3 and 4.1.4.

1 Although multiple-index, parameterized interfaces are allowed in nesC, Viptos does not support them, since they are not used in practice and do not appear in any existing components in the TinyOS component library.


4.1.2 Transformation of nesC components

As the implementation for representing nesC components, Viptos uses MoML (Modeling

Markup Language) [61], an XML-based language used in Ptolemy II to specify interconnections

of parameterized, hierarchical components. As discussed previously, a nesC component is either

a subcomponent of an application if it exposes interface methods, or a top-level application if it

does not. Viptos treats subcomponents and top-level applications differently when transforming

nesC files into MoML. For nesC subcomponents, Viptos provides a tool called nc2moml; for nesC

top-level applications, Viptos provides a tool called ncapp2moml.

The nc2moml tool harvests TinyOS nesC component files and converts them into MoML class

files. The initial version of nc2moml was a modification of the source code of the nesC 1.1 compiler.

The current version of nc2moml uses the XML output feature of the nesC 1.2 compiler, which de-

couples nc2moml from nesC compiler version updates. Both versions of nc2moml generate MoML

syntax that specifies the name of the component, as well as the name and input/output direction

of each port, and whether they are multiports. Viptos uses the resulting MoML files to display

TinyOS components as a library of graphical blocks. The user may drag and drop components from

the library onto the workspace and create connections between component interfaces by clicking

and dragging between ports. Figure 4.3 shows the generated MoML code for the TimerC compo-

nent referenced in Figure 4.1(a). Figure 4.2(c) shows a TinyOS program created graphically using

components from the converted library.
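The conversion that nc2moml performs can be pictured with a small sketch. The helper below is hypothetical (the real tool is built on the XML output of the nesC 1.2 compiler and JDOM, not Python); it only shows how a component name and its interfaces might map to a MoML class with ports, in the spirit of Figure 4.3:

```python
import xml.etree.ElementTree as ET

def component_to_moml(name, source, ports):
    # Hypothetical helper sketching what nc2moml emits: a MoML <class>
    # element with one <port> per nesC interface. `ports` maps an
    # interface name to a (direction, is_multiport) pair.
    cls = ET.Element("class", name=name,
                     extends="ptolemy.domains.ptinyos.lib.NCComponent")
    ET.SubElement(cls, "property", name="source", value=source)
    for pname, (direction, multi) in ports.items():
        port = ET.SubElement(cls, "port",
                             {"class": "ptolemy.actor.IOPort"}, name=pname)
        ET.SubElement(port, "property", name=direction)
        if multi:
            ET.SubElement(port, "property", name="multiport")
    return ET.tostring(cls, encoding="unicode")

moml = component_to_moml(
    "TimerC", "$CLASSPATH/tos/system/TimerC.nc",
    {"StdControl": ("input", False), "Timer": ("input", True)})
print(moml)
```

The parameterized TimerC.Timer interface becomes a multiport, while StdControl stays a simple input port, matching the representation scheme of Table 4.1.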

The ncapp2moml tool harvests TinyOS nesC application files and converts them into Viptos

MoML model files. Unlike the TinyOS component files examined by nc2moml, TinyOS application

files in nesC do not have interfaces. The ncapp2moml tool uses information about the nesC wiring

graph and the referenced interfaces in the XML output from the nesC 1.2 compiler to generate

MoML syntax that specifies a model containing the class corresponding to each nesC component

used, the relations required at each port, and the links between the ports and relations such that

the connections in the model correspond to the connections between interfaces in the nesC file.

ncapp2moml can also automatically embed the converted TinyOS application into a template model

containing a representation of the hardware interface of the node and optionally, a default physical

environment. Figure 4.4 shows an example of a portion of the MoML code generated from the

SenseToLeds.nc file shown in Figure 4.1(a).

For both nc2moml and ncapp2moml, Viptos uses the NDReader Java class provided in the

nesC 1.2 compiler distribution to parse nesC XML output and create nesC-specific data structures.



Figure 4.2: SenseToLeds application in Viptos.


<?xml version="1.0"?>

<!DOCTYPE plot PUBLIC "-//UC Berkeley//DTD MoML 1//EN"

"http://ptolemy.eecs.berkeley.edu/xml/dtd/MoML_1.dtd">

<class name="TimerC"

extends="ptolemy.domains.ptinyos.lib.NCComponent">

<property name="source"

value="$CLASSPATH/tos/system/TimerC.nc" />

<property name="_displayedName" class="..."

value="TimerC" />

<port name="StdControl" class="ptolemy.actor.IOPort">

<property name="input" />

<property name="_showName" class="..." />

</port>

<port name="Timer" class="ptolemy.actor.IOPort">

<property name="input" />

<property name="multiport" />

<property name="_showName" class="..." />

</port>

</class>

Figure 4.3: Generated MoML by nc2moml for TimerC.nc

The tools use JDOM 1.0 to construct and generate XML output. Viptos does not use XSLT (Exten-

sible Stylesheet Language Transformations) because the generated MoML files are not complex.

4.1.3 Generation of code for target deployment

When a user compiles a TinyOS program for an actual sensor node, the nesC compiler automat-

ically searches the TinyOS component library paths for included components, including directories

containing the components that encapsulate the hardware components specific to the target plat-

form, such as the clock, radio, and sensors. The nesC compiler generates a pre-processed C file,

which it can send to a cross compiler for the target hardware.

Viptos can transform a model of a TinyOS program (as in Figure 4.2(c)) into a nesC file. Note

that this is the opposite of ncapp2moml, which means that it is possible to convert back and forth

between Viptos models and nesC files. Viptos does this transformation by means of a director

called PtinyOS Director, which controls code generation, simulation, and deployment to target

hardware for a single node. A user can configure the PtinyOS Director (Figure 4.2(d)) to compile

the generated nesC code to any target supported by the TinyOS make system, including cross-


...

<entity name="MicaCompositeActor"

class="ptolemy.domains.ptinyos.lib.MicaCompositeActor">

...

<entity name="DemoSensorC"

class="tos.sensorboards.micasb.DemoSensorC" />

<entity name="TimerC" class="tos.system.TimerC" />

<entity name="Main" class="tos.system.Main" />

<entity name="SenseToInt"

class="tos.lib.Counters.SenseToInt" />

<entity name="IntToLeds"

class="tos.lib.Counters.IntToLeds" />

<relation name="relation1"

class="ptolemy.actor.IORelation" />

<relation name="relation2"

class="ptolemy.actor.IORelation" />

<relation name="relation3"

class="ptolemy.actor.IORelation" />

<relation name="relation4"

class="ptolemy.actor.IORelation" />

<relation name="relation5"

class="ptolemy.actor.IORelation" />

...

<link relation="relation1" port="Main.StdControl"/>

<link port="IntToLeds.StdControl" relation="relation2"/>

<link relation1="relation2" relation2="relation1"/>

<link port="SenseToInt.StdControl" relation="relation3"/>

<link relation1="relation3" relation2="relation1"/>

<link relation="relation4" port="SenseToInt.Timer"/>

<link port="TimerC.Timer" relation="relation5"/>

<link relation1="relation5" relation2="relation4"/>

...

</entity>

...

Figure 4.4: Generated MoML by ncapp2moml for SenseToLeds.nc


compilation to target hardware, or TOSSIM for external simulation. The user can also download

code to the target hardware from the Viptos interface.

Running the model in Figure 4.2(c) causes the PtinyOS Director to generate a nesC compo-

nent file for SenseToLeds, equivalent to that shown in Figure 4.1(a). The director also generates

a makefile that includes all of the paths necessary for compilation to target hardware, which Viptos

uses internally and that users can run externally.

4.1.4 Generation of code for simulation

When a user compiles a TinyOS program for simulation with TOSSIM, the nesC compiler

follows the procedure described in the previous section, but with the TinyOS scheduler and device

drivers replaced with TOSSIM code. Thus, the TOSSIM executable image depends on the particular

TinyOS program specified by the user.

The Viptos simulation environment provides more capabilities than TOSSIM alone. In addition

to simulating wireless sensor node(s) running TinyOS, Viptos users can model and simulate the

physical environment, radio channels, wired subsystems, and other wireless nodes, including non-

TinyOS nodes. The user can take advantage of the hierarchical, heterogeneous nature of Ptolemy II

to create detailed models of physical phenomena such as light, temperature, and sound; as well as

models of entities such as buildings, servers, microservers, and other nodes. Developers may choose

from diverse models of computation, such as continuous-time, dataflow, synchronous/reactive, time-

triggered, and Kahn process networks. Users may also interface to live data through Ptolemy II

library blocks such as those that interface with the microphone or the IP (Internet Protocol) network.

A common actor-oriented programming and execution model unifies these modeling capabilities.

Figure 4.2(a) shows a basic example with models of a light source and a sensor node.

As a template for modeling a real wireless sensor node, Viptos provides a model of the hard-

ware interface of a Mica mote with sensor board. This hardware representation includes ports

for the ADC (analog-to-digital converter) channels connected to sensors that include a thermistor,

photoresistor, microphone, magnetometer, and accelerometer; and ports for the LEDs and radio

communication. Figure 4.2(b) shows this graphically.

Running the model in Figure 4.2(b) causes the PtinyOS Director to generate a nesC file and

a makefile. If the user specified the ptII simulation target as the target compilation platform, the

PtinyOS Director then compiles the nesC file against a custom version of TOSSIM to create a

shared library. The PtinyOS Director also generates a Java wrapper to load the shared library


into Viptos so that the PtinyOS Director can run the shared library via JNI (Java Native Interface)

method calls, which Viptos uses to allow calls between the C-based TOSSIM environment and the

Java-based Ptolemy II environment. To avoid duplicate functionality, Viptos relies on the nesC

compiler to do a complete analysis of the connected nesC interface methods at the TinyOS level to

detect incorrect usage of commands or events marked with the async keyword and hence possible

race conditions.

4.1.5 Simulation of TinyOS in Viptos

This section explains how Viptos simulates TinyOS programs and discusses the integration of

the TOSSIM and Ptolemy II framework in terms of scheduling, type system, radio and I/O, and

support for multiple nodes and multi-hop routing.

Scheduling

Let us review the basics of the TinyOS scheduling model. In TinyOS, there is a single thread of

control managed by the scheduler, which may be interrupted by hardware events. NesC component

methods encapsulate hardware interrupt handlers. Methods may transfer the flow of control to

another component by calling a uses method. Computation performed in a sequence of method

calls must be short, or it may block the processing of other events. A long-running computation can

be encapsulated in a task, which a method posts to the scheduler task queue. The TinyOS scheduler

processes the tasks in the queue in FIFO order whenever it is not executing an interrupt handler.

Tasks are atomic with respect to other tasks and do not preempt other tasks.

TOSSIM is a discrete-event simulator for TinyOS. Its scheduler contains a task queue similar

to the regular TinyOS scheduler, as well as an ordered event queue. An event in this queue has a

time stamp implemented as a long long in C (a 64-bit integer on most systems). The smallest time

resolution is equal to 1/(4 MHz), or 250 ns, the original CPU clock period of the Rene/Mica motes.

Upon initialization, TOSSIM inserts a boot-up event into the event queue. The TOSSIM sched-

uler begins its main loop by processing all tasks in the task queue in FIFO order. If there is an event

in the event queue, the TOSSIM scheduler updates the simulated system time with the time stamp

of the new event and then processes the event. The processing of an event may cause new tasks

to be posted to the task queue and new events to be created with time stamps possibly equal to the

current time stamp. In TOSSIM, all components call the queue_insert_event() function to insert

new events into the event queue. Figure 4.5 summarizes the scheduling algorithm.


while (true) {
  while there are TinyOS tasks {
    Process them.
  }
  if the event queue is not empty {
    Set the TOSSIM time to the time of the next event.
    Handle the event.
  }
}

Figure 4.5: TOSSIM scheduling algorithm.
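As a concrete illustration, the loop of Figure 4.5 can be mimicked in a few lines of Python. This is an illustrative sketch only, with invented names throughout; the real scheduler is C code inside TOSSIM:

```python
import heapq
from collections import deque

class TossimSched:
    """Illustrative rendering of the Figure 4.5 loop (not the real C code)."""
    def __init__(self):
        self.time = 0            # simulated system time
        self.tasks = deque()     # FIFO TinyOS task queue
        self.events = []         # heap of (time stamp, seq, handler)
        self._seq = 0
        self.log = []

    def post_task(self, task):
        self.tasks.append(task)

    def insert_event(self, t, handler):   # cf. queue_insert_event()
        heapq.heappush(self.events, (t, self._seq, handler))
        self._seq += 1

    def run(self):
        while self.tasks or self.events:
            while self.tasks:             # drain all pending tasks first...
                self.tasks.popleft()(self)
            if self.events:               # ...then advance time to the next event
                self.time, _, h = heapq.heappop(self.events)
                h(self)

sim = TossimSched()
sim.insert_event(4, lambda s: s.log.append(("event", s.time)))
sim.insert_event(0, lambda s: s.post_task(
    lambda s2: s2.log.append(("boot task", s2.time))))
sim.run()
print(sim.log)
```

The boot event at time 0 posts a task, and the FIFO drain guarantees that task runs before the simulator advances to the event at time 4.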

At the top level of a model, Viptos uses a specialization of the discrete-event (DE) domain of

Ptolemy II [15] created for modeling wireless systems in VisualSense. The DE domain provides ex-

ecution semantics where interactions between components occur via events with time stamps. The

DE domain uses a sophisticated calendar-queue scheduler to efficiently process events in chronolog-

ical order. Formal semantics ensure determinate execution of deterministic models [59], although

the DE domain also supports stochastic models for Monte Carlo simulation. The precision in the

semantics prevents the unexpected behavior that sometimes occurs due to modeling idiosyncrasies

in some modeling frameworks. In Viptos, the specialized DE director may control one or more node

models.

In Viptos, a node model contains an instance of PtinyOS Director, which compiles and loads

a custom copy of TOSSIM that simulates the code for a single node. Viptos controls the execution

of TOSSIM by using customized TOSSIM scheduler and device driver functions that notify Viptos

of all TOSSIM events. Viptos uses a modified TOSSIM queue_insert_event() function that also

makes a JNI call to insert an event with the TOSSIM time stamp into the event queue of the Ptolemy

II discrete-event scheduler (DE director) that controls the PtinyOS Director.2 Thus, Viptos uses

the same event time stamps as TOSSIM.

At each event time stamp, Viptos calls the custom TOSSIM scheduler to process the event.

The main loop updates the TOSSIM system time, processes an event in the TOSSIM event queue,

and then processes all tasks in the task queue. If the TOSSIM event queue contains another event

with the current TOSSIM system time, the scheduler processes that event along with any tasks that

2 The JNI call uses fireAt() with the TOSSIM system time as the argument.


do {
  if the event queue of this instance of TOSSIM is not empty {
    Set the TOSSIM time to the time of the next event.
    Handle the event.
  }
  while there are TinyOS tasks {
    Process them.
  }
} while (the event queue is not empty and the time of the next event
         is the same as the current TOSSIM time)

Figure 4.6: Viptos version of TOSSIM scheduling algorithm.

may have been generated. This last step is repeated until there are no other events with the current

TOSSIM system time. Note that the order in the main loop of the custom TOSSIM scheduler

is opposite that of the original TOSSIM, which processes all tasks before updating the TOSSIM

system time and processing an event in the TOSSIM event queue. This change is required in order

to guarantee causal execution in Viptos, since tasks may generate events with the current TOSSIM

time stamp. Otherwise, new events may have a time stamp that is before the current Ptolemy II

system time. Figure 4.6 summarizes the scheduling algorithm.

Viptos supports models with dynamically changing interconnection topologies and treats chan-

ges in connectivity as mutations of the model structure. The software is carefully architected to sup-

port multithreaded access to this mutation capability. Thus, one thread can be executing a simulation

of the model while another changes the structure of the model, e.g., by adding, deleting, or moving

actors, or changing the connectivity between actors. The results are predictable and consistent.

Type system

NesC components in TinyOS and TOSSIM use the type system provided by the C program-

ming language. Ptolemy II provides its own type system, in which actors, parameters, and ports

may all impose constraints on types. A type resolution algorithm identifies the most specific types

that satisfy all the constraints. Communication between actors in Ptolemy II occurs through typed

tokens.

Viptos composes these two type systems, the C type system and the Ptolemy II type system,

so that static type analysis can be performed. A special Java base class created for Viptos, called


TypeOpaqueCompositeActor, allows a Ptolemy II actor’s ports to have types, but does not require

that the actors inside use the Ptolemy II type system. This facilitates the embedding of a different

type system within Ptolemy II. A Viptos submodel containing nesC components uses a subclass of

this base class, called PtinyOSCompositeActor, so that the components can use the C type system.

Viptos performs automatic type conversion between the two type systems during simulation.

Viptos uses JNI functions in the custom copy of TOSSIM to automatically convert between the C

types used in TOSSIM and the token types used in Ptolemy II. Since the data communicated between

TOSSIM and Ptolemy II only involve a mote’s hardware interface, Viptos can limit type conversion

to the data types required by the ADC interface, the LEDs, and the packets sent and received over

the radio. The types provided by C, however, usually do not match the actual data types of the

hardware interface. As a result, TinyOS and TOSSIM use arbitrary data types to represent values

with different bit widths.

The ADC channels of a mote use 10-bit unsigned values. TOSSIM represents an ADC value

with an unsigned short integer masked for 10-bit usage. Sensor data modeled in Ptolemy II typically

use tokens with values of type double. When TOSSIM requests an ADC value, Viptos automatically

performs the lossy conversion from a double-valued token in Ptolemy II to a masked unsigned short

integer value in TOSSIM.

Although LED state is binary, TOSSIM represents an LED value with a char. When TOSSIM

updates the state of the LEDs, Viptos automatically converts the char in TOSSIM into a boolean-

valued token in Ptolemy II, which Viptos uses to change the animation state of the simulated LEDs.
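Assuming the masking and truncation described above, the ADC and LED conversions can be sketched as follows; the exact bit-level behavior is inferred from the text, not copied from the Viptos sources:

```python
ADC_MASK = 0x3FF   # the ADC channels deliver 10-bit unsigned values (0..1023)

def double_to_adc(sample: float) -> int:
    # Lossy conversion from a Ptolemy II double-valued token to the
    # masked unsigned short that TOSSIM expects: the fraction is
    # truncated and the result is masked to 10 bits.
    return int(sample) & ADC_MASK

def led_char_to_bool(led: int) -> bool:
    # TOSSIM stores binary LED state in a char; Ptolemy II receives
    # a boolean-valued token.
    return led != 0

print(double_to_adc(512.7))    # 512: fraction truncated
print(double_to_adc(1024.0))   # 0: value masked to 10 bits
```

Both directions of the conversion are cheap, which is what lets Viptos perform them transparently inside the JNI glue on every hardware-interface access.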

In TOSSIM, TinyOS packets are represented by a C data structure containing a char array. In

order to maintain a standard endian format and enable easy parsing of packets, Viptos represents

TinyOS packets using Ptolemy II string tokens. Viptos automatically converts between the TOSSIM

char array representation and the Ptolemy II string token representation whenever a node transmits

or receives a packet.

Radio and I/O

TOSSIM has built-in models for per-node ADC values and for radio connectivity between

multiple nodes, as well as an interface for manually setting the per-node and per-link values and

probabilities.

In Viptos and VisualSense, the algorithm for determining radio connectivity is itself encapsu-

lated in a component as a channel model, and hence can be developed by the model builder. Both


Viptos and VisualSense provide several built-in models, including AtomicWirelessChannel, De-

layChannel, LimitedRangeChannel, ErasureChannel, and PowerLossChannel (see the left-

hand pane of Figure 4.7(a)). Both tools can determine connectivity on the basis of the physical

locations of the components.

Viptos overrides the built-in ADC and radio models and LED device drivers in TOSSIM so

that they send data to, and receive data from, the ports of the node model. This allows the simulated

node to interact with user-created models, such as sources of light (e.g., Figure 4.2(e)), temperature

gradients, radio channels, and other nodes.

In the DE domain of Ptolemy II, tokens received at the input port of an actor cause the actor to

fire at the time of the token time stamp. The actor usually consumes the token, at which point the

port becomes empty. In Viptos, the node model may receive tokens at the ADC ports that represent

new values. To reconcile the difference in timing between when the simulated environment makes a

new ADC value available and when the simulated node reads its ADC ports, Viptos uses a Ptolemy

II PortParameter instead of a Port for the ADC ports. This usage of PortParameter makes the

port value persistent between updates, such that when the TinyOS program requests data from the

ADC port, the program gets the value of the most recently received token.
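This design choice can be illustrated by contrasting a consuming port with a persistent one. The class names below are illustrative sketches, not the actual Ptolemy II API:

```python
class SimplePort:
    """A consuming DE port: reading the token empties the port."""
    def __init__(self):
        self._token = None
    def receive(self, token):
        self._token = token
    def get(self):
        token, self._token = self._token, None
        return token

class SimplePortParameter:
    """Persistent behavior like the ADC ports: the most recently
    received token stays readable until the next update."""
    def __init__(self, initial=0.0):
        self._value = initial
    def receive(self, token):
        self._value = token
    def get(self):
        return self._value

adc = SimplePortParameter()
adc.receive(0.73)             # the environment publishes a new light level
print(adc.get(), adc.get())   # repeated reads both see the latest value
```

With the persistent variant, the simulated node can poll its ADC at its own rate, regardless of when the environment model last produced a sample.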

Figure 4.2(a) shows an example containing a model of a light source and a node running the

SenseToLeds TinyOS program. Viptos transmits light source data to the sensor node by means

of a photo port (Figure 4.2(b)) associated with a LimitedRangeChannel named PhotoChannel

(Figure 4.2(a)).

Multiple nodes and multi-hop routing

TOSSIM simulates one or more nodes with the same TinyOS program by maintaining a copy

of the state of each component for each simulated node. The nesC compiler has built-in support

for generating arrays to store these copies, so that users do not need to modify the TinyOS program

source code when compiling for TOSSIM.

Viptos simultaneously simulates multiple nodes with possibly different programs by embed-

ding multiple node models, with each TinyOS node containing a different PtinyOS Director, into

the Wireless domain (the specialized DE domain). To prevent namespace collision between dif-

ferent simulated TinyOS programs, Viptos separately compiles and loads a shared library for each

node. Viptos performs this by passing a unique name for each node to the nesC compiler, which the

compiler then inserts into the TOSSIM source code by means of macros. Since Viptos models have


a global discrete-event scheduler, all nodes operate on the same time reference.

Figure 4.7 shows an example model containing two nodes that communicate over a lossless

radio channel (AtomicWirelessChannel) with full connectivity. The node on the left contains

the CntToLedsAndRfm TinyOS program, which maintains a counter on a 4 Hz timer, displays

the counter value on the LEDs, and sends it over the radio in a TinyOS packet. The node on the

right contains the RfmToLeds TinyOS program, which listens for radio packets and displays any

received counter values on the LEDs. A user can easily replace the radio channel model by deleting

it and dragging in a different channel model from the menu in the left-hand pane.

Though the application shown in Figure 4.7 uses broadcast, Viptos also supports multi-hop

routing. Viptos accomplishes this by passing a node ID to the nesC compiler for each custom copy

of TOSSIM. The modified TOSSIM code uses this node ID where it would normally be used in

TinyOS, instead of using the default TOSSIM value of the index of the array containing the state of

the nodes.

Viptos allows users to indicate globally the name of the base station in the PtinyOS Direc-

tor configuration screen, as shown in Figure 4.2(d). Viptos includes a multi-hop routing demonstration

that models a network with multiple TinyOS nodes running the Surge multi-hop routing protocol

application, shown in Figure 4.8, where the base station is node 0.

4.2 Performance Evaluation

This section evaluates the scalability of Viptos in terms of execution time as the number of

nodes increases. It separately evaluates the execution time of applications without radio usage,

and the execution time of applications with radio usage, in order to determine the scalability of

communication within the framework.

I collected timing information on an Intel Pentium M 760 processor (2.0 GHz, 2 MB L2 Cache,

533 MHz FSB) with 1024 MB of SDRAM, running Ubuntu 6.06 LTS (Dapper Drake) with Linux

kernel 2.6.15-27-386. The tools I used included nesC 1.2.7a, gcc 3.4.3, TinyOS 1.x, and Sun Java

VM 1.4.2_13-b06 with a heap size of 512 MB. In order to run large models, I increased the maxi-

mum number of open file descriptors allowed in the Bash shell from a default of 1024 to 20000 with

the ulimit -n command.

To eliminate timing variance due to random boot times, I set all nodes to boot at virtual time

0.0 seconds. I did not set the TOSSIM DBG environment variable, which affects which event debug

messages get generated. I sent all printed debug messages (on stdout or stderr) from all copies of


Figure 4.7: Send and receive application in Viptos.


Figure 4.8: Multi-hop routing in Viptos.


TOSSIM to /dev/null, to eliminate timing variance from printing to the screen under X11.

4.2.1 Comparison to TOSSIM

This section uses the SenseToLeds application to evaluate the scalability of Viptos as the

number of nodes increases and to compare it to TOSSIM.

For TOSSIM, I used the /usr/bin/time command to measure the execution time of the Sense-

ToLeds application from the tinyos-1.x CVS tree. I discarded the timing measurement for the first

run in each experiment to eliminate timing variance due to caching.

For Viptos, I instrumented the PtinyOS Director with calls to the Java Date().getTime()

and Runtime.getRuntime() methods to measure elapsed time while running the SenseToLeds

application displayed in Figure 4.2. I eliminated the model of the environment in order to make a

fair comparison to TOSSIM, since TOSSIM uses random ADC values by default. For models with

multiple nodes, I used the timing information from the last node to start, since nodes must wait until

Viptos invokes all internal copies of TOSSIM before simulation can proceed because they all operate

on the same time reference. For a given number of nodes, I collected multiple runs from the same

instantiation of Viptos. I discarded the timing measurement for the first run in each experiment to

eliminate timing delay due to loading of new Java classes, instantiation of Java objects, and caching.

For modeling additional nodes, I copied and pasted existing nodes into the graph. I saved the model,

restarted Viptos, and took additional measurements.

To measure the overhead due to integrating TOSSIM with Ptolemy II, I started timing right be-

fore Viptos invoked the internal copy of TOSSIM. This does not include the overhead of running the

nesC compiler and loading the TOSSIM shared object into memory. To eliminate timing delay due

to waiting for remaining threads to join, I stopped timing at the beginning of wrapup(), since thread

joining is only necessary for running the model multiple times within a graphical environment. To

reduce timing variance due to Java garbage collection, I instrumented Viptos to call System.gc()

to perform garbage collection before starting the timing measurement.
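The measurement procedure described above can be sketched as a small harness: force a garbage collection, start the clock, run the workload, and stop before any cleanup. The workload below is a stand-in for invoking TOSSIM, and the class and method names are invented:

```java
public class TimingHarness {
    interface Workload { void run(); }

    static long timeMillis(Workload w) {
        System.gc();                       // reduce GC variance, as described in the text
        long start = System.currentTimeMillis();
        w.run();
        // Stop before any thread joining or other cleanup would run.
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        long elapsed = timeMillis(() -> {
            long sum = 0;                  // stand-in workload
            for (int i = 0; i < 1_000_000; i++) sum += i;
        });
        System.out.println("elapsed ms: " + elapsed);
    }
}
```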

This section does not present timing overhead in Viptos for opening files; running the nesC,

gcc, and Java compilers; or loading shared objects. This overhead scales linearly with the number of

nodes, and is on the order of a few seconds for small models, and several minutes for large models.

Figure 4.9 shows the average execution time of the SenseToLeds application with a virtual

run time of 300.0 seconds for an increasing number of nodes. The figure shows that Viptos has

more overhead when compared to TOSSIM, but that both simulators scale linearly in the number


Figure 4.9: Execution time of the SenseToLeds application as a function of the number of nodes. Each simulation ran for 300.0 virtual seconds.

of nodes. So, in exchange for slightly increased execution time, the user gains increased modeling

and simulation capabilities and flexibility, and an interactive, graphical programming environment.

Using a least squares linear regression, the results show that approximately 410 nodes can be simu-

lated in 300.0 real seconds or less, which means that Viptos can simulate networks up to this size in

real time. The exact number for any given application depends on the fidelity of simulation required

and the complexity of the application.
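The capacity estimate above can be reproduced mechanically: fit t = a·n + b by least squares to (node count, execution time) measurements, then solve for the largest n whose predicted time fits the real-time budget. The data points below are invented for illustration; only the method mirrors the text:

```java
public class RealTimeCapacity {
    // Ordinary least-squares fit of y = a*x + b; returns {a, b}.
    static double[] fit(double[] x, double[] y) {
        int m = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < m; i++) {
            sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
        }
        double a = (m * sxy - sx * sy) / (m * sxx - sx * sx);
        double b = (sy - a * sx) / m;
        return new double[]{a, b};
    }

    // Largest node count n with a*n + b <= budget (seconds of real time).
    static int maxNodes(double a, double b, double budget) {
        return (int) Math.floor((budget - b) / a);
    }

    public static void main(String[] args) {
        double[] n = {10, 50, 100, 200};
        double[] t = {8, 37, 74, 147};   // illustrative timings, not measured data
        double[] ab = fit(n, t);
        System.out.println(maxNodes(ab[0], ab[1], 300.0));
    }
}
```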

4.2.2 Radio

This section evaluates the scalability of models that use the radio using the same techniques

described in the previous section. I created a model similar to that of the SendAndReceiveCnt ap-

plication shown in Figure 4.7. The model uses a lossless radio channel model with full connectivity,

and a varying number of senders and receivers. Senders send packets at 4 Hz. To eliminate timing

variance due to the graphical interface, I disabled animation of the LEDs. This analysis used a vir-

tual run time of 120.0 seconds for all nodes. The plot in Figure 4.10 shows the average execution

time for this model.


Figure 4.10: Execution time of a radio send and receive model in Viptos as a function of the number of senders and receivers. Each simulation ran for 120.0 virtual seconds.

The plot shows that the main determinant of execution time is the total number of nodes.

The number of senders versus receivers has no noticeable effect. The execution time of the model

increases linearly with the number of nodes, whether or not the radio is used.

4.3 Summary

This chapter described an extensible actor-oriented software framework for modeling sensor

networks. This tool, called Viptos, builds upon Ptolemy II and TinyOS, and provides an integrated

graphical design and simulation environment. Viptos allows users to easily transition from high-

level, hierarchical, heterogeneous modeling to low-level implementation, simulation, and deploy-

ment. This chapter showed that Viptos simulation performance is scalable: execution time scales

linearly as a function of the number of nodes, and even without aggressive performance tuning, Viptos

can simulate moderately large sensor networks effectively.


Chapter 5

Metaprogramming for Wireless Sensor

Networks

In The Mythical Man Month [17], Frederick P. Brooks, Jr. asserts that “radically better software

robustness and productivity are to be had only by moving up a level, and making programs by the

composition of modules, or objects.”

Chapter 3 explained how to build wireless sensor node programs from pre-existing

TinyOS/nesC components, using an actor-oriented framework called galsC. Chapter 4 explained

how to build wireless sensor network applications graphically from pre-existing, actor-oriented

components and pre-existing TinyOS/nesC components, using an actor-oriented framework called

Viptos. This chapter explains how to programmatically specify the wireless sensor network appli-

cation itself through a variety of techniques that combine higher-order actors or components, with

generative programming and metaprogramming.

5.1 Generative Programming and Metaprogramming

Generative programming and metaprogramming are very similar concepts. Like Sztipanovits

and Karsai [93], I use the term “generative programming” in a broad sense: systems or components

of systems are automatically generated from a specification written in one or more textual or graphi-

cal domain-specific languages [26]. According to Wikipedia [104], metaprogramming is the writing

of computer programs that write or manipulate other programs (or themselves) as their data. The

terms “generative programming” and “metaprogramming” are often used interchangeably. In this

dissertation, however, I differentiate between them—a metaprogram does not necessarily generate


a new program or system, although it may accept other programs or systems as input.

The benefits of metaprogramming are best described by Brooks in The Mythical Man Month

[17], where he discusses them in the context of using shrink-wrapped software packages as compo-

nents:

The metaprogramming concept is not new, only resurgent and renamed. In the early 1960s, computer vendors and many big management information systems (MIS) shops had small groups of specialists who crafted whole application programming languages out of macros in assembly language...Now the chunks offered by the metaprogrammer are many times larger than those macros.

The shrink-wrapped package provides a big module of function, with an elaborate but proper interface, and its internal conceptual structure does not have to be designed at all...Next-level application builders get richness of function, a shorter development time, a tested component, better documentation, and radically lower cost.

In Actor-Oriented Metaprogramming by Neuendorffer [74], actor-oriented models are viewed

as descriptions of concurrent software architectures, i.e., structured metaprograms. Neuendorffer

describes a metaprogramming system that transforms actor-oriented models in Ptolemy II into self-

contained Java code, where partial evaluation is used as a way to generate more efficient programs.

He argues that partial evaluation generally requires less explicit specification by a programmer than

other metaprogramming techniques. It is particularly effective in this use case, since a generic

actor specification is specialized to a particular role in the model, and both the generic actor and

specialized actor perform the same role and produce the same behavior.

5.2 Higher-order Functions, Actors, and Components

Related to metaprogramming is the concept shared by higher-order functions, higher-order

actors, and higher-order components.

According to Reekie [82] (emphasis mine),

A higher-order function takes a function argument or produces a function result...Higher-order functions are one of the more powerful features of functional programming languages, as they can be used to capture patterns of computation...[S]ome higher-order functions encapsulate common types of processes; other higher-order functions capture common interconnection patterns, such as serial and parallel connection; yet others represent various linear, mesh, and tree-structured interconnection patterns...Vector iterators are higher-order functions that apply a function across all elements of a vector. In effect, each of them captures a particular pattern of iteration, allowing the programmer to re-use these patterns without risk of error. This is one of the most persuasive arguments in favour of inclusion of higher-order functions in a programming language.


Reekie then explains how the concept of higher-order functions can be applied to actors [82]:

...the map actor [in Visual Haskell] takes a function as its parameter, which it applies to each element of its input channel. If f is known, an efficient implementation of map(f) can be generated; if not, the system must support dynamic creation of functions since it will not have knowledge of f until run-time. An actor of this kind mimics higher-order functions in functional languages, and could therefore be called a higher-order actor.

Lee and Parks [62] explain that “dataflow processes with state cover many of the commonly

used higher-order functions in Haskell. The most basic use of icons in [the Ptolemy Classic] visual

syntax may therefore be viewed as implementing a small set of built-in higher-order functions.”

Higher-order actors gain their power from a key restriction: “the replacement actor is specified by

a parameter, not by an input stream. Thus [the system] avoid[s] embedding unevaluated closures in

streams.”
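A minimal sketch of this restriction: the replacement function is a constructor parameter fixed before execution, while only plain data flows through the stream. The class and method names are illustrative, not Ptolemy code:

```java
import java.util.function.IntUnaryOperator;

// A toy higher-order "Map" actor: the function f is a parameter supplied at
// construction time, never a value carried on the input stream, so no
// unevaluated closures are embedded in streams.
public class MapActor {
    private final IntUnaryOperator f;   // the replacement "actor", a parameter

    MapActor(IntUnaryOperator f) { this.f = f; }

    // Apply f to every token of the input stream.
    int[] fire(int[] stream) {
        int[] out = new int[stream.length];
        for (int i = 0; i < stream.length; i++) out[i] = f.applyAsInt(stream[i]);
        return out;
    }

    public static void main(String[] args) {
        MapActor doubler = new MapActor(x -> 2 * x);
        int[] out = doubler.fire(new int[]{1, 2, 3});
        System.out.println(out[0] + "," + out[1] + "," + out[2]); // prints "2,4,6"
    }
}
```

Because f is known at instantiation time, an efficient specialized implementation could in principle be generated, which is exactly the point of the restriction quoted above.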

Reekie explains higher-order actors in Ptolemy Classic [82]:

Special blocks represent multiple invocations of a “replacement actor.” The Map actor, for example, is a generalised form of mapV [the vector iterator higher-order function]. At compile time, Map is replaced by the specified number of invocations of its replacement actor...Unlike mapV, Map can accept a replacement actor with arity > 1; in this case, the vector of input streams is divided into groups of the appropriate arity (and the number of invocations of the replacement actor reduced accordingly).

The requirement that the number of invocations of an actor be known at compile-time ensures that static scheduling and code generation techniques will still be effective. Further work is required to explore forms of higher-order function mid-way between fully-static and fully-dynamic. For example, a code generator that produces a loop with an actor as its body, but with number of loop iterations unknown, could still execute very efficiently.

Just as functions may serve as arguments to higher-order functions in functional programming

languages, components may serve as parameters to higher-order components in composition lan-

guages, or languages for constructing networks of components [19]. Like higher-order functions

in Visual Haskell [82], higher-order components are the most powerful feature of these types of

languages, since they capture patterns of instantiation and interconnection between components. In

a higher-order composition language such as Ptalon [19], the structure of a system is effectively pa-

rameterizable, and the parameters may be other systems. An interesting aspect of Ptalon is that it is,

to quote Reekie, “mid-way between fully-static and fully-dynamic.” The next section investigates

Ptalon in more detail.


5.3 Ptalon

Ptalon [18, 19] is a higher-order composition language for constructing higher-order com-

ponents in Ptolemy II. Cataldo proved mathematically that higher-order components can lead to

succinct syntactic descriptions of large systems, which minimizes the amount of input a system

designer must provide to create a new system, thus enabling a form of scalability in system design

[19].

Using the definitions presented in Section 5.1, Ptalon is both a generative programming system

and a metaprogramming language, since Ptalon automatically generates components from a speci-

fication written in a textual language, and Ptalon accepts components as arguments (inputs) to other

components.

Ptalon makes it easy to parameterize a component with the number and types of subcomponents

that should be generated within the component. A developer can use Ptalon to easily generate sensor

network applications and configurations; the specified subcomponents may be different types of

wireless sensor nodes running various individual programs.

In Cataldo’s original Ptalon implementation for Ptolemy II [19], a higher-order component

is called a PtalonActor. Components passed as parameters to these higher-order components are

atomic actors (i.e., they are specified in Java, the underlying programming language of Ptolemy II).

This original implementation assumes that models containing higher-order components are static;

arguments to a higher-order component cannot change once specified.

I have improved the Ptalon system for evaluating parameters such that the values of Ptalon

parameters can be changed at run-time. I have also improved the Ptolemy II implementation of

Ptalon to allow composite actors in addition to atomic actors. That is, an application developer

can specify an actor not only with a Java file, but also with an XML file containing an arbitrary

collection of actors.

The following sections present an example that uses the improved version of Ptalon and explain

the implementation of the parameter reconfiguration capabilities.

5.3.1 A simple example

Ptalon code is written in a simple declarative style. Figure 5.1 shows a sample Ptalon file that

specifies a component containing n components of type RelayNode, with varying values for the

nodes’ range and location parameters. The value of the local variable i is set by the for loop,

whereas the value of the parameter n is specified externally. Ptalon uses the Ptolemy II expression


language to evaluate all values within double brackets ([[ ]]).

To use Ptalon within Ptolemy II, a user places a new PtalonActor in a Ptolemy II graph. The

PtalonActor parameter configuration window initially shows a blank value for the ptalonCodeLo-

cation parameter. Once the user sets this parameter to reference a Ptalon file, the PtalonActor then

reconfigures its parameter configuration window to show the parameters declared in the Ptalon file,

for which the user can then give values.

Figure 5.2(a) shows a Ptolemy II model containing an instance of a PtalonActor called Mul-

tipleNodesMoML that references the Ptalon file in Figure 5.1. A user specifies the value of n as a

parameter of the PtalonActor, as shown in Figure 5.2(b).

The Ptalon compiler is implemented within Ptolemy II and is invoked as soon as the Ptalon-

Actor is set to reference a particular Ptalon file. The Ptalon compiler consists of multiple phases.

In its initial phase, the Ptalon compiler parses the Ptalon file and creates an abstract syntax tree

(AST). The first populator phase of the Ptalon compiler occurs next, in which the Ptalon compiler

instantiates any entities that do not depend on unknown parameter values. The second populator

phase of the Ptalon compiler begins only when the values of all parameters of the PtalonActor are

known. The Ptalon compiler walks the AST and creates the remaining entities. The Ptalon compiler

creates all entities as part of the PtalonActor submodel.

Figure 5.2(c) shows the components generated inside the PtalonActor. In this example, each

generated component is a node, which is itself a composite actor containing other components, as

shown in Figure 5.2(d).

Figure 5.3 shows the XML code for the model shown in Figure 5.2(a). Note that since the

PtalonActor automatically populates itself with actors, a PtalonActor only needs to save its pa-

rameter values, and not its internal configuration.

Ptalon can also be integrated with Viptos (see Chapter 4). An application developer can start

with regular components that use pre-existing Ptolemy II domains, then refine and replace these

components with a real code implementation that uses TinyOS. This allows simulation of abstract

and concrete node and environment models with various parameters, and eventual validation against

a real-world implementation. I have implemented Ptalon-based versions of the SenseToLeds and

SendAndReceiveCnt examples presented in Chapter 4, which allow a user to change a parameter

to specify different numbers of TinyOS nodes.


MultipleNodesMoML is {
    actor node = ptolemy.domains.wireless.demo.SmallWorld.RelayNode;
    parameter n;
    for i initially [[ 1 ]] [[ i <= n ]] {
        node( range := [[ 40 + 10 * i ]],
              _location := [[ [100*i, 100*i] ]] );
    } next [[ i + 1 ]]
}

Figure 5.1: MultipleNodesMoML.ptln

5.3.2 Reconfiguration in Ptalon

In his dissertation [82], Reekie discusses actor parameters:

Execution of an actor proceeds in two distinct phases: i) instantiation of the actor with its parameters; and ii) execution of the actor on its stream arguments...Lee stresses the difference between parameter arguments and stream arguments in Ptolemy: parameters are evaluated during an initialisation phase; streams are evaluated during the main execution phase. As a result, code generation can take place with the parameters known, but with the stream data unknown. Thus, the separation between parameters and streams—and between compile-time and run-time values—is both clear and compulsory.

What happens if a so-called “compile-time” parameter value changes at run-time? If the value

of a PtalonActor parameter changes, it may cause the internal configuration of the PtalonActor

to change. The Ptalon compiler implementation in Ptolemy II uses two steps to handle any change

to the value of a PtalonActor parameter. First, the compiler deletes the internal representation of

all entities and relations in the PtalonActor, while preserving existing ports. Second, the Ptalon

compiler restarts itself in its initial phase (as described in the previous section), and reuses existing

ports whenever possible during the populator phase. The Ptalon compiler proceeds through the

population phase, using the newly assigned value of the parameter, as well as existing values for

any other parameters.

The value of a PtalonActor parameter may be an actual token that has a type corresponding to one in the

Ptolemy II token type lattice, or it may be a reference to a model parameter. For the latter option,

a change in the value of the referenced model parameter results in a change to the actual value of

the PtalonActor parameter, which necessitates a reconfiguration of the PtalonActor. Neuendorffer


Figure 5.2: PtalonActor in Ptolemy II.


<?xml version="1.0" standalone="no"?>

<!DOCTYPE entity PUBLIC "-//UC Berkeley//DTD MoML 1//EN"

"http://ptolemy.eecs.berkeley.edu/xml/dtd/MoML_1.dtd">

<entity name="MultipleNodesMoML" class="ptolemy.actor.TypedCompositeActor">


<entity name="MultipleNodesMoML" class="ptolemy.actor.ptalon.PtalonActor">

<configure>

<ptalon file="ptolemy.actor.ptalon.demo.MultipleNodes.MultipleNodesMoML">

<ptalonExpressionParameter name="n" value="3"/>

</ptalon>

</configure>

</entity>

</entity>

Figure 5.3: MultipleNodesMoML.xml

[74] enumerated the ways in which reconfiguration of model parameters may occur in Ptolemy II,

which I summarize and extend here:

• Interactive editing. A user may change parameters in Ptolemy II through interactive editing of

the model, usually via a dialog box associated with the model, parameter, or actor of interest.

• Modal model. A modal model is an extended version of a finite state machine, in which each

state of the finite state machine contains a dataflow model, or refinement, that is active in that

particular state. Essentially, the active dataflow model replaces the finite state machine until

the state machine makes a state transition. Finite state machine transitions can reconfigure

parameters of the target state’s refinement when the transition is taken. The Ptolemy II user

manual [16] contains more details on constructing modal models.

• Reconfiguration port. Also known as a PortParameter, a reconfiguration port is a special

form of dataflow input port. Ptolemy II binds each reconfiguration port to a parameter of the

port’s actor, and tokens received through the port reconfigure the parameter.

• Reconfiguration actor. The SetVariable actor is a special actor that has a single input port.

Ptolemy II associates this actor with a parameter of the containing model. The actor con-

sumes a single token during each firing and reconfigures the associated parameter during the

quiescent point after the firing.

Another way reconfiguration of model parameters may occur in Ptolemy II is through the use

of higher-order actors (I do not include PtalonActor as part of this discussion):


• ModelReference and VisualModelReference. The ModelReference and VisualModelRef-

erence actors are both atomic actors that can execute a model specified by a file or URL

(Uniform Resource Locator). A developer can use these actors to define an actor whose firing

behavior is given by a complete execution of another model. The developer can add

ports to an instance of this actor. If the actor has input ports, then on each firing, before exe-

cuting the referenced model, the actor reads an input token from the input port, if there is one,

and uses it to set the value of a top-level parameter in the referenced model that has the same

name as the port, if there is one.

• RunCompositeActor. This actor is almost the same as ModelReference and VisualModel-

Reference, only it is a composite actor instead of an atomic actor. The actor executes the

contained model completely, as if it were a top-level model, on each firing. The actor also

uses tokens at an input port to set the value of a top-level parameter with the same name in

the contained model.

• ModelDisplay. This actor opens a window to display the specified model. The model devel-

oper can provide inputs that are MoML strings that the actor applies to the specified model.

The developer can use this, for example, to create animations by changing parameter values.

5.4 Specifying WSN Applications Programmatically

In this section, I present methods for specifying wireless sensor network applications program-

matically by combining in various ways higher-order actors in Ptolemy II with an improved version

of VisualSense/Viptos, and I explain when a particular method might be most applicable.

5.4.1 Motivation

“On the Credibility of Manet Simulations” [4], an article by Andel and Yasinsac, summarizes

various articles that question the credibility of published simulations results in the mobile ad hoc

network (MANET) research community. Problems cited include lack of independent repeatabil-

ity, lack of statistical validity, use of inappropriate radio models, improper/nonexistent validation,

unrealistic application traffic, improper precision, and lack of sensitivity analysis.

Andel and Yasinsac's proposed solution to the first problem, lack of independent repeatability,

is to properly document all settings. Since publication venues have limited space, they suggest


including only major settings and/or providing all settings as external references to research web

pages, which should include freely available code/models and applicable data sets.

Ptolemy II is well-suited to address this problem, as well as many of the other problems cited. It

is an open-source tool whose source code is freely distributable and modifiable. Ptolemy II models

are simple XML files that are easy to publish on the web, and the Ptolemy II version number with

which they are built is automatically stored in the XML file.

The models described in the following sections show that with the techniques introduced in

this dissertation, wireless sensor network simulations are easily repeatable. I also discuss how these

techniques can address the other problems cited.

5.4.2 Small World

The SmallWorld example shown in Figure 5.4 illustrates a phenomenon in which an ad hoc

network that is less reliable but has longer ranges achieves connectivity with fewer hops, on average,

than a network that is more reliable but has shorter ranges. Franceschetti and

Meester showed that on average, fewer hops are needed when the range increases [31].

Figure 5.4 shows the SmallWorld model as originally implemented in VisualSense. Each node

in the sensor network rebroadcasts the first message it receives. When the user runs the model, an

Initiator component, shown in Figure 5.4(b), broadcasts a message. A node turns red if it receives

the message in one hop; it turns green if it receives it in more than one hop. It stays white if it

never receives the message. The model plots a histogram of the number of nodes that receive the

message after one hop, after two hops, etc. If the user increases the range above sureRange, then

the probability of delivery drops according to the formula shown, which keeps the expected number

of recipients roughly constant. The NodeRandomizer actor randomizes the locations of the nodes

at the beginning of each run. All nodes (not including the Initiator) have the same implementation

as shown in Figure 5.2(d).
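The rebroadcast behavior described above amounts to a breadth-first traversal: since each node relays only the first message it hears, the hop at which a node first receives the message is its BFS depth from the Initiator. The sketch below computes those hop counts over a plain adjacency list, which stands in for the radio-channel model; all names are invented:

```java
import java.util.*;

public class HopHistogram {
    // adjacency[i] lists the nodes within range of node i.
    // Returns, for each node, the hop count at which it first receives the
    // message (-1 if it never receives it), i.e. the data the histogram plots.
    static int[] hops(int[][] adjacency, int initiator) {
        int n = adjacency.length;
        int[] hop = new int[n];
        Arrays.fill(hop, -1);                  // -1: never receives the message
        ArrayDeque<Integer> frontier = new ArrayDeque<>();
        hop[initiator] = 0;
        frontier.add(initiator);
        while (!frontier.isEmpty()) {          // BFS = rebroadcast of first reception
            int u = frontier.poll();
            for (int v : adjacency[u]) {
                if (hop[v] == -1) { hop[v] = hop[u] + 1; frontier.add(v); }
            }
        }
        return hop;
    }

    public static void main(String[] args) {
        int[][] adj = { {1}, {0, 2}, {1} };    // a 3-node chain
        System.out.println(Arrays.toString(hops(adj, 0))); // prints "[0, 1, 2]"
    }
}
```

In the model's terms, a node with hop count 1 turns red, a node with a larger hop count turns green, and a node left at -1 stays white.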

5.4.3 Parameter Sweep

I now introduce two different models, both of which perform the same set of experiments,

where a slightly modified version of the SmallWorld model (shown in Figure 5.5) is run as a sub-

model with the same sets of changing parameter values. There are only a few differences between

this modified version and the original version: (1) the modified version stores the histogram data in

a file whose name is specified by a new parameter, (2) the modified version has an additional

parameter created to allow node location randomization to be controlled externally, and (3) the modified

version uses non-zero random seeds so that each run is repeatable. Both top-level models store the

settings used as part of the model itself, and no additional configuration files are needed. The initial

location of the nodes (not including the Initiator) is not significant. I will call this version of the

SmallWorld model the ParameterSweep version.

Modal model

Figure 5.6(a) shows a modal model in which the main state (named state and highlighted in

green) contains a refinement (Figure 5.6(b)). This refinement is an SDF (synchronous dataflow)

model containing a VisualModelReference actor with three different ports, one for each of the

parameters to be changed (range, resetOnEachRun, and fileName) in the ParameterSweep

version of SmallWorld (Figure 5.5).

The modal model sweeps over the parameter values such that runs_i different random node

layouts are simulated and, for each node layout, runs_j different ranges are simulated. The

transitions in the modal model change the counters i and j and set the parameter values for each

run. Each run of the model creates an output file with the stored histogram

data. This model allows application developers to create simulation scenarios that are independently

repeatable, and to validate their algorithms by quickly creating new simulation scenarios via a few

simple parameter value changes.
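The metaprogram the modal model encodes is a nested sweep. The following minimal Python sketch makes it concrete; `run_small_world`, the file naming, and the JSON payload are illustrative stand-ins for the VisualModelReference invocation and histogram output, not Ptolemy II API.

```python
import itertools
import json
import os
import tempfile

def run_small_world(range_, reset_on_each_run, file_name):
    # Stand-in for one run of the ParameterSweep SmallWorld submodel:
    # writes a placeholder "histogram" to the requested output file.
    histogram = {"range": range_, "reset": reset_on_each_run}
    with open(file_name, "w") as f:
        json.dump(histogram, f)

def sweep(runs_i, runs_j, ranges, out_dir):
    """Simulate runs_i node layouts; for each layout, runs_j ranges."""
    files = []
    for i, j in itertools.product(range(runs_i), range(runs_j)):
        # Randomize node locations only at the start of each layout (j == 0),
        # so all ranges within a layout see the same node placement.
        reset = (j == 0)
        name = os.path.join(out_dir, "run_{}_{}.json".format(i, j))
        run_small_world(ranges[j], reset, name)
        files.append(name)
    return files
```

With two layouts and three ranges, the sweep produces six repeatable runs, each with its own output file.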

Dataflow

Figure 5.7 shows an SDF model that accomplishes the same objective as the modal model

in Figure 5.6: it simulates the ParameterSweep version of the SmallWorld model with

runs_i different node layouts and runs_j different ranges. The SDF model uses dataflow actors that

send the simulation parameters directly to a VisualModelReference actor with the same ports as

those in the modal model. Just as in the modal model, the VisualModelReference in the SDF

model references the ParameterSweep version of SmallWorld (Figure 5.5).

Notice that for this particular application, the values of the parameters are more readily

apparent in the SDF model than in the modal model. For simulations where the parameter values are

known a priori, a dataflow language provides a more intuitive interface for specifying these settings.

However, if the user wants to create simulation scenarios with dynamically derived parameter

values, e.g., for the purposes of sensitivity analysis [83], a modal model might be a more appropriate

Figure 5.4: Small World in Ptolemy II.

Figure 5.5: ParameterSweep version of Small World in Ptolemy II.

Figure 5.6: Modal model for changing parameter values of Small World model in Ptolemy II.

choice. For example, the user can feed output from the SmallWorld model back into the modal

model, which can then automatically select new parameter settings on the basis of noise level or

network connectivity. In other words, the application developer can choose the most appropriate

domain-specific language to specify the metaprogram.

5.4.4 Higher-order actors

Since most of the nodes in the SmallWorld application have the same implementation, one

might also consider using a higher-order actor to specify the nodes. This section considers two

different methods, the first using a MultiInstanceComposite actor, and the second using a Ptalon-

Actor.

MultiInstanceComposite

In his dissertation [74], Neuendorffer introduces higher-order components (actors) (emphasis

mine):

In many cases it is useful to build parameterized structures in actor-oriented models. Such programmatically generated structures are called higher-order components to emphasize their similarity to higher-order functions in functional languages. A parameter which is used to determine the structure of a higher-order component is a structural parameter.

The MultiInstanceComposite actor in Ptolemy II is one example of a simple higher-order component. Just before a model is executed, this actor replicates itself a number of times determined by a structural parameter. This actor is often used in situations where a model contains repetitive structures that are awkward to build by hand, or when the number of repetitions is specified by a parameter.

A similar feature also existed in Ptolemy Classic. As described by Lee and Parks [62], a

user could graphically specify the number of instances of an actor in Ptolemy Classic, either by

implication (by graphically specifying the number of instances of upstream actors), or directly (by

graphically instantiating the desired number). Ptolemy Classic took advantage of higher-order func-

tions by allowing a user to specify the number of instances of an actor by modifying the parameters

of a bus icon (a line connecting the boxes representing the actors), rather than the visual representa-

tion. Ptolemy Classic also allowed the user to visually represent the replacement function in a way

that is conceptually similar to using a box inside of the icon for a higher-order function.

Figure 5.8 shows the SmallWorld application in Ptolemy II, where a MultiInstanceCompos-

ite creates all of the nodes, each of which has an implementation identical to that in Figure 5.2(d).

Figure 5.7: SDF model for changing parameter values of Small World model in Ptolemy II.

Figure 5.8: ParameterSweep version of Small World model with MultiInstanceComposite in Ptolemy II.

MultiInstanceComposite generates the nodes in its container, which means that the location

parameters of the generated nodes are easily accessible and remain relative to the Initiator actor in

the container. No other changes to the model are required. The model shown in Figure 5.8 has the

same behavior as that in Figure 5.5.
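The replication step that MultiInstanceComposite performs can be sketched in a few lines. The class and method names below are illustrative only (they mirror the idea of a structural parameter driving pre-execution replication, not the actual Ptolemy II implementation).

```python
# Hedged sketch of a higher-order component: a structural parameter (`count`)
# decides, just before execution, how many copies of a prototype node to
# instantiate in the container. Not the Ptolemy II MultiInstanceComposite API.

class MultiInstance:
    def __init__(self, prototype_factory, count):
        self.prototype_factory = prototype_factory
        self.count = count          # structural parameter
        self.instances = []

    def preinitialize(self):
        # Replication happens just before the model runs, so changing the
        # structural parameter reconfigures the model without manual editing.
        self.instances = [self.prototype_factory(i) for i in range(self.count)]

def make_node(i):
    # Each generated node carries its own location parameter in the container,
    # analogous to the generated RelayNode instances in Figure 5.8.
    return {"name": "node{}".format(i), "_location": [10.0 * i, 10.0 * i]}
```

Changing `count` from 3 to 49 regenerates the network with no other edits, which is the point of the higher-order style.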

Ptalon

Ptalon is a natural fit for specifying model parameters programmatically, since it can specify

the structure of the model itself, not just values of actor parameters. One can use Ptalon to generate

the SmallWorld application shown in Figure 5.5. Figure 5.9 shows the required Ptalon code. The

first section of code declares all of the actor types needed in the model. The second section declares

four parameters: channelName, reportChannelName, range, and n. The third section declares

an output port named output. The remainder of the file instantiates the components.

Ptalon automatically generates names of actor instances. However, in VisualSense and Viptos,

wireless ports are parameterized by the name of the wireless channel on which they receive or trans-

mit. So, the Ptalon file shown in Figure 5.9 uses the parameters channelName and reportChan-

nelName to specify concrete names for the channels. The parameter range specifies the radio

range of the nodes. The parameter n specifies the number of nodes to create.

Figure 5.10(a) shows a Ptolemy II model containing a PtalonActor named SmallWorld that

refers to the Ptalon code in Figure 5.9. Note that this model is similar to the model shown in Fig-

ure 5.5, except that Ptalon generates the nodes, wireless channels, NodeRandomizer, and Wire-

lessToWired converter, as shown in Figure 5.10(c). The PtalonActor also contains an output port,

through which the actor transmits the data to be recorded.

Figure 5.10(b) shows the values of the PtalonActor parameters. Note that all of the parameters,

except the number of nodes, refer to model parameters with the same name. Also note that the

resetOnEachRun parameter in Figure 5.9 is not explicitly declared as a Ptalon parameter. Because

parameters in Ptolemy II use a form of lazy evaluation (changes to parameter values may not be

propagated until they are used at run time), the user must create a Ptalon parameter as a mirror of

any Ptolemy parameters that should be evaluated before run time. I explicitly declare these variables

because they are useful for visualization, e.g., to verify visually that the ranges are correct, before

running the model. The Ptalon model will still run correctly, even if the range parameter is not

declared as a Ptalon parameter. Figure 5.11 shows an excerpt of the MoML code for the model in

Figure 5.10.

5.4.5 Discussion

A user can control both the MultiInstanceComposite (Figure 5.8) and PtalonActor (Figure

5.10) versions of SmallWorld with either the modal model or the SDF model discussed previously,

with no modifications required. One advantage of using higher-order actors such as MultiInstance-

Composite and PtalonActor is that they enable run-time reconfiguration (e.g., the number of nodes

in the model can be controlled programmatically).

Another advantage of higher-order actors is that they require fewer bytes to express the model.

Table 5.1 shows a comparison of the three different ways presented for implementing the Small-

World application. For all files, I removed all extra white space (tabs, spaces, and extra linefeeds),

in addition to annotations and comments that were not constant across all models. The first column

SmallWorld is {
    /* Actor types */
    actor node = ptolemy.actor.ptalon.demo.SmallWorld.RelayNode;
    actor channel = ptolemy.domains.wireless.lib.LimitedRangeChannel;
    actor initiator = ptolemy.actor.ptalon.demo.SmallWorld.Initiator;
    actor wirelessToWired = ptolemy.domains.wireless.lib.WirelessToWired;
    actor nodeRandomizer = ptolemy.domains.wireless.lib.NodeRandomizer;

    /* Ptalon parameters */
    parameter channelName;
    parameter reportChannelName;
    parameter range;
    parameter n;

    /* Port declaration */
    outport output;

    /* Instantiation of components */
    channel( defaultProperties := [[ {range=range} ]],
             lossProbability := [[ 1.0 - probability ]],
             seed := [[ 1L ]],
             name := [[ channelName ]] );
    channel( seed := [[ 1L ]],
             name := [[ reportChannelName ]] );
    wirelessToWired( inputChannelName := [[ reportChannelName ]],
                     payload := output,
                     _location := [[ [0.0, 0.0] ]] );
    initiator( _location := [[ [230.0, 345.0] ]] );
    nodeRandomizer( maxPrecision := [[ 3 ]],
                    randomizeInInitialize := [[ true ]],
                    resetOnEachRun := [[ resetOnEachRun ]],
                    range := [[ {{100.0, 400.0}, {200.0, 500.0}} ]],
                    seed := [[ 1L ]] );
    for i initially [[ 1 ]] [[ i <= n ]] {
        node( nodePropagationDelay := [[ nodePropagationDelay ]],
              range := [[ range ]],
              haloColor := [[ {0.0, 0.5, 0.5, probability*visualDensity} ]],
              randomize := [[ randomize ]],
              _location := [[ [10.0 * i, 10.0 * i] ]] );
    } next [[ i + 1 ]]
}

Figure 5.9: Ptalon code for SmallWorld (SmallWorld.ptln).

Figure 5.10: Ptalon version of Small World in Ptolemy II.

...

<entity name="SmallWorld" class="ptolemy.actor.ptalon.PtalonActor">

<property name="_location" class="ptolemy.kernel.util.Location" value="[240.0, 210.0]">

</property>

<configure>

<ptalon file="ptolemy.actor.ptalon.demo.SmallWorld.SmallWorld">

<ptalonExpressionParameter name="n" value="49"/>

<ptalonExpressionParameter name="channelName" value="channelName"/>

<ptalonExpressionParameter name="reportChannelName" value="reportChannelName"/>

<ptalonExpressionParameter name="range" value="range"/>

</ptalon>

</configure>

</entity>

...

Figure 5.11: Excerpt of MoML code for Ptalon version of Small World.

is the ParameterSweep version of SmallWorld as shown in Figure 5.5. The second column is the

MultiInstanceComposite implementation, as shown in Figure 5.8. The third column is the Ptalon

implementation as shown in Figure 5.10.

Note that in all of the non-Ptalon versions of the SmallWorld application (Figures 5.4, 5.5, and

5.8), the code for the Initiator actor is stored in the model itself. For the Ptalon model, however, the

MoML code for the Initiator actor must be stored externally so that the actor can be referenced in

the Ptalon file. Also note that the code for the RelayNode actor in the ParameterSweep version

is stored externally, whereas the code for the RelayNode actor in the MultiInstanceComposite

version must be stored in the MultiInstanceComposite itself. Even though the RelayNode code

is stored external to the ParameterSweep version, the parameter values for each node must still

be stored internally. The difference in the number of bytes between the ParameterSweep and

MultiInstanceComposite versions would be even greater if there were more nodes, since the Pa-

rameterSweep version requires 705 bytes for each additional node to store the parameter values

and the instance declaration. Increasing the number of nodes in the implementations in the Mul-

tiInstanceComposite and PtalonActor versions requires no extra bytes (except if the number of

digits in the number of nodes exceeds two, in which case there is an extra byte for each digit).
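The scaling claim above is easy to check arithmetically. The sketch below uses the base sizes from Table 5.1 and the 705-bytes-per-node figure from the text; the function names and the linear-growth model are illustrative, not measurements.

```python
# Back-of-the-envelope model of file-size growth with node count.
# Constants come from Table 5.1 (SmallWorld.xml sizes) and the text
# (705 bytes per additional node); the linear extrapolation is mine.

PER_NODE_BYTES = 705
NODES_IN_MODEL = 49  # the models shown use n = 49 nodes

def parameter_sweep_size(n, size_at_49=55320):
    """Estimated ParameterSweep SmallWorld.xml size for n nodes."""
    return size_at_49 + PER_NODE_BYTES * (n - NODES_IN_MODEL)

def ptalon_size(n, base=16882):
    """The Ptalon SmallWorld.xml is essentially independent of n
    (ignoring the extra byte per additional digit in the node count)."""
    return base
```

At 149 nodes the ParameterSweep file would grow by about 70 kB while the Ptalon file stays the same, which is why the gap widens with network size.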

The main difference between using MultiInstanceComposite and PtalonActor for this par-

ticular application is that one cannot visualize the generated components using the MultiInstance-

Composite actor. Additionally, the MultiInstanceComposite actor must be opaque, i.e., have a

director, so that its Actor interface methods (preinitialize(), ..., wrapup()) are invoked during model

Table 5.1: Comparison of number of bytes between different implementations of SmallWorld.

    ParameterSweep             with MultiInstanceComposite    with Ptalon
    SmallWorld.xml    55320    SmallWorld.xml    48212        SmallWorld.xml     16882
    RelayNode.xml     28228                                   RelayNode.xml      28314
                                                              Initiator.xml       5292
                                                              SmallWorld.ptln     1151
    Total             83548    Total             48212        Total              51639

initialization. Ptalon imposes no such constraint on the PtalonActor component.

Ptalon has an advantage over the other methods in that it makes model structure easier to express.

For example, the application developer can modify the Ptalon code to cycle through a number

of different types of radio channel models, in order to test the behavior of a routing algorithm

under different channel assumptions. This is not possible with MultiInstanceComposite alone

(one would need to use a Case actor or other similar actor to achieve the same results). Ptalon also

allows the user to specify heterogeneous networks more easily. With MultiInstanceComposite,

a user would need to create a new instance of the actor for each type of duplicated node in the

network.

In general, flexibility in specifying simulation parameters is extremely important. In Le-

ung’s survey of the 70 full-length papers from prominent wireless sensor networking conferences,

IPSN/SPOTS 2007¹ and SenSys 2006², over one hundred different simulation parameters were

used, with very few repeated counts [63]. These results show that parameter choices are largely

application-dependent, and that there are few standard benchmarks.

5.5 Summary

In this chapter, I demonstrated how higher-order components provide a powerful way to build

wireless sensor network applications. Combined with generative programming and metaprogram-

ming techniques, sensor network developers can easily specify experimental simulation setups pro-

grammatically using a variety of techniques, including modal models, dataflow, and higher-order

actors. Developers can choose the method that best fits the particular application. They can then

¹International Conference on Information Processing in Sensor Networks and Track on Sensor Platforms, Tools and Design Methods.

²Conference on Embedded Networked Sensor Systems.

refine these simulations to real-world implementations using a technology such as Viptos (presented

in Chapter 4).

Chapter 6

Related Work

This chapter details information on work related to TinyGALS and galsC, as well as work

related to Viptos and the metaprogramming techniques for wireless sensor networks discussed in

earlier chapters.

6.1 TinyGALS and galsC

This section summarizes the features of several related operating systems and software ar-

chitectures, and discusses how they relate to TinyGALS and galsC. Herlihy’s method for building

non-blocking operations, as well as the message passing interface (MPI) offer concurrency and

communication alternatives to those used in TinyGALS. The SVAR (state variable) mechanism of

PBOs (port-based objects) and FPBOs (featherweight port-based objects) influenced the design of

TinyGUYS. The Click Modular Router project has interesting parallels to the TinyGALS model of

computation, as do Ptolemy II, the CI (component interaction) domain, and the TM (Timed Multi-

tasking) domain.

6.1.1 Non-blocking

Herlihy proposes a methodology in [45] for constructing non-blocking and wait-free imple-

mentations of concurrent objects. Programmers implement data objects as stylized sequential pro-

grams, with no explicit synchronization. Each sequential operation is automatically transformed

into a non-blocking or wait-free operation via a collection of synchronization and memory man-

agement techniques. However, operations may not have any side-effects other than modifying the

memory block occupied by the object. Unlike TinyGALS, this technique does not address the need

for inter-object communication when composing components. Additionally, this methodology re-

quires additional copying of memory, which may become expensive for large objects.

6.1.2 MPI

MPI (Message Passing Interface) is the de facto standard library interface for writing mes-

sage passing programs on high-performance parallel computing platforms [104]. MPI provides

virtual topology, synchronization and communication functionality between a set of processes that

have been mapped to processing nodes. Interface functions include point-to-point, rendezvous-type

send/receive operations (including synchronous, asynchronous, buffered, and ready forms); choos-

ing between a Cartesian or graph-like logical process topology; exchanging data between process

pairs (send/receive operations); combining partial results of computations (gather and reduce oper-

ations); synchronizing nodes (barrier operation); as well as obtaining network-related information

such as the number of processes in the computing session, identity of the current processor to which

a process is mapped, and neighboring processes accessible in a logical topology.

MPI was originally targeted for distributed memory systems, though implementations for

shared memory systems have appeared as these platforms have become more popular. In MPI,

all parallelism is explicit; the programmer is responsible for correctly identifying parallelism and

implementing parallel algorithms using MPI constructs. The number of tasks dedicated to run a

parallel program is static. New tasks cannot be dynamically spawned during run time, though the

new MPI-2 standard addresses this issue. The advantages of MPI over older message passing li-

braries are portability (because MPI has been implemented for almost every distributed memory

architecture) and speed (because each implementation is in principle optimized for the hardware on

which it runs).

Hempel and Walker [43] summarize MPI and its alternatives:

The main function of MPI is to communicate data from one process to another. Other mechanisms, such as TCP/IP and CORBA, do essentially the same thing. MPI provides a level of abstraction appropriate for communication of data in scientific computing, whereas TCP/IP is geared to low-level network transport, and CORBA to client-server interactions...

The idea of communicating sequential processes as a model for parallel execution was developed by C.A.R. Hoare in the 1970s, and is the basis of the message passing paradigm. This paradigm assumes a distributed process memory model, i.e., each process has its own local address space. Processes co-operate to perform a task by independently computing with their local data and communicating data with other processes

by explicitly exchanging messages. Technically, this message passing is normally realized by calls to library functions, which, for example, send or receive a message, or broadcast some data to a whole group of processes. Message passing provides the most explicit way of programming a parallel computer with physically distributed memory, and is well-suited to this type of machine since there is a good match between the distributed memory model and the distributed hardware...

[Although regarded as competing standards], MPI and PVM [Parallel Virtual Machine] were designed for different uses. PVM was originally intended for use on networks of workstations (NOWs) and addresses issues such as heterogeneity, fault tolerance, interoperability, and resource management—its message passing capabilities are not very sophisticated. The design of MPI focused on message passing capabilities, and it is intended to attain high performance on tightly-coupled, homogeneous parallel architectures.

MPI has its detractors, such as Per Brinch Hansen, inventor of Concurrent Pascal, the first

concurrent programming language. In his evaluation of MPI [41], he states,

The MPI routines for synchronous message passing work as expected. However, asynchronous communication is dangerously insecure. It is possible to call a user procedure that inputs a message in a local variable and returns before the input has been completed. This time-dependent error may change a variable, which (conceptually) no longer exists, and therefore may be reused by unrelated procedure calls! Twenty years ago, Concurrent Pascal proved that nontrivial parallel programs can be written exclusively in a secure programming language. The Message-Passing Interface follows in the footsteps of the Unix threads library: both extend a sequential programming language with subroutines for parallel execution and data communication. Personally, I regard the attempt to replace a parallel programming language and its compiler with insecure procedures as a step backwards in programming technology.

However, MPI has had considerable impact on the development of middleware and other tools

for wireless sensor networks.

OMNeT++ [84, 98], discussed later in Section 6.2, supports parallel distributed simulation

using one of various communication mechanisms, including MPI, named pipes, or the file system.

There are some constraints, however: (1) modules can only communicate by sending messages (no

direct method call or member access) unless they are mapped to the same processor, (2) no global

variables are allowed, (3) a module may not send directly to a submodule of another module, unless

the modules are mapped to the same processor, (4) lookahead must be present in the form of link

delays, (5) currently only static topologies are supported.

Welsh and Mainland take inspiration from MPI in their approach to abstract regions [101], a

family of spatial operators that capture local communication within regions of a wireless sensor

network, which may be defined in terms of radio connectivity, geographic location, or other node

properties. They state that

[MPI] provides a unified interface for message passing across a large family of parallel machines. MPI hides the details of the communication hardware and provides efficient implementations of common collective operations, such as broadcast and reduction. MPI has been extremely successful in the parallel processing community as it is high-level enough to shield programmers from most of the details of the underlying machine, yet low-level enough to permit extensive application-specific optimizations. We wish to provide communication interfaces that serve a similar role for sensor networks.

Bakshi and Prasanna [6] have a similar goal in their library of structured communication prim-

itives. In their system, “structured communication” refers to a routing problem where the com-

munication pattern is known in advance, with example patterns including one-to-all (broadcast),

all-to-one (data gather), many-to-many, all-to-all, and permutation.

UW-API (University of Wisconsin-Madison’s Application Programmer’s Interface) [5, 81] for

sensor network communication is motivated by MPI. Some of the UW-API primitives are to be

invoked by a single sensor node; others are for collective communication, to be invoked simultane-

ously by a group of nodes in a geographic region. All operations take place on regions, which users

can create with specific primitives. Barrier synchronization is also supported for the sensor nodes

that lie within a region.

The Open Source Cluster Application Resources (OSCAR) package [29, 66] is an integrated

software bundle designed for high performance cluster computing. OSCAR provides the standard

Message Passing Interface (MPI) for communication between the parallel computing processes. It

has been used in sensor network applications to parallelize data fusion processes, where the sensor

network sends its data to the computing cluster through a gateway node.

The actor model underlying TinyGALS/galsC, Viptos, and Ptolemy II also uses message passing.

However, it is more comprehensive than MPI, in that the actor model specifies scheduling and

execution semantics, in addition to communication primitives.

6.1.3 Port-Based Objects

The port-based object (PBO) [92] is a software abstraction for designing and implementing

dynamically reconfigurable real-time software. The software framework was developed for the

Chimera multiprocessor real-time operating system (RTOS). A PBO is an independent concurrent

process, and there is no explicit synchronization with other processes. PBOs may execute either

periodically or aperiodically. A PBO communicates with other PBOs only through its input ports

and output ports. PBOs may also have resource ports that connect to sensors and actuators via

I/O device drivers, which are not PBOs. Configuration constants are used to reconfigure generic

components for use with specific hardware or applications.

PBOs communicate with each other via state variables stored in global and local tables. Every

input and output port and configuration constant is defined as a state variable (SVAR) in the global

table, which is stored in shared memory. A PBO can only access its local table, which contains only

the subset of data from the global table that is needed by the PBO. Since every PBO has its own local

table, no explicit synchronization is needed to read from or write to a state variable. Consistency

between the global and local tables is maintained by the SVAR mechanism, and updates to the

tables only occur at predetermined times. The system updates configuration constants only during

initialization of the PBO. The system updates the state variables corresponding to input ports prior to

the execution of each cycle of a periodic PBO, or before the processing of each event for an aperiodic

PBO. During its cycle, a PBO may update the state variables corresponding to the PBO’s output

ports at any time. The system updates these values in the global table only after the PBO completes

its processing for that cycle or event. The system performs all transfers between the local and

global tables as critical sections. Although there is no explicit synchronization or communication

among processes, multiple accesses to the same SVAR in the global table are mutually exclusive,

which creates potential implicit blocking. The system uses spin-locks to lock the global table, and

it assumes that the amount of data communicated via the ports on each cycle of a PBO is relatively

small. It is guaranteed that the task holding the global lock is on a different processor and will not

be preempted, thus it will release the lock shortly. If the total time that a CPU is locked to transfer a

state variable is small compared to the resolution of the system clock, then there is negligible effect

on the predictability of the system due to this mechanism locking the local CPU. Since there is only

one lock, there is no possibility of deadlock. A task busy-waits with the local processor locked until

it obtains the lock and goes through its critical section.

Echidna [9] is a related real-time operating system designed for smaller, single-processor, em-

bedded microcontrollers. The design is based on the featherweight port-based object (FPBO) [91].

The application programmer interface (API) for the FPBO is identical to that of the PBO. In an

RTOS, PBOs are separate processes, whereas FPBOs all share the same context. The Chimera PBO

implementation uses data replication to maintain data integrity and avoid race conditions. The Echidna FPBO implementation takes advantage of context sharing to eliminate the need for local tables,

which is especially important since memory in embedded processors is a limited resource. Access


to global data must still be performed as a critical section to maintain data integrity. However,

instead of using semaphores, Echidna constrains when preemption can occur.

To summarize, in both the PBO and FPBO models, software components only communicate

with other components via SVARs, which are similar to global variables. Updates to an SVAR are

made atomically, and the components always read the latest value of the SVAR. The SVAR concept

is the motivation behind the TinyGALS strategy of always reading the latest value of a TinyGUYS

parameter. However, in TinyGALS, since components within a module may be tightly coupled

in terms of data dependency, updates to TinyGUYS are buffered until a module has completed

execution. This is more closely related to the local tables in the Chimera PBO implementation than

the global tables in the Echidna FPBO implementation. However, there is no possibility of blocking

when using the TinyGUYS mechanism.
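The TinyGUYS discipline just described can be sketched as follows. This is an illustrative model, not the actual TinyGALS runtime; the class name TinyGuys and its methods are invented for this sketch.

```cpp
#include <optional>

// Sketch of a TinyGUYS parameter: reads always return the latest committed
// value, while writes made during a module's execution are buffered and
// applied only when the module completes its iteration.
template <typename T>
class TinyGuys {
    T committed_;
    std::optional<T> pending_;  // buffered write, if any
public:
    explicit TinyGuys(T init) : committed_(init) {}

    T read() const { return committed_; }   // always the latest committed value
    void write(T v) { pending_ = v; }       // buffered during the iteration

    // Called when the module finishes execution: apply the buffered write.
    void commit() {
        if (pending_) {
            committed_ = *pending_;
            pending_.reset();
        }
    }
};
```

Note that neither read() nor write() can block, which mirrors the claim above that the TinyGUYS mechanism introduces no possibility of blocking.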

6.1.4 Click

Click [54, 55] is a flexible, modular software architecture for creating routers. A Click router

configuration consists of a directed graph, where the vertices are called elements and the edges are

called connections. This section provides a detailed description of the constructs and processing in

Click and compares it to TinyGALS.

Elements in Click A Click element is a software module which usually performs a simple com-

putation as a step in packet processing. An element is implemented as a C++ object that may

maintain private state. Each element belongs to a single element class, which specifies the code

that should be executed when the element processes a packet, as well as the element’s initialization

procedure and data layout. An element can have any number of input and output ports. There are

three types of ports: push, pull, and agnostic. In Click diagrams, push ports are drawn in black,

pull ports in white, and agnostic ports with a double outline. Each element supports one or more

method interfaces, through which elements communicate at runtime. Every element supports the simple

packet-transfer interface, but elements can create and export arbitrary additional interfaces. An el-

ement may also have an optional configuration string which contains additional arguments to pass

to the element at router initialization time. The Click configuration language allows users to define

compound elements, which are router configuration fragments that behave like element classes. At

initialization time, each use of a compound element is compiled into the corresponding collection

of simple elements.


Figure 6.1: An example Click element: a Tee(2) element with its input port, output ports, element class, and configuration string labeled. Source: Eddie Kohler.

Figure 6.1 shows an example Click element that belongs to the Tee element class, which sends

a copy of each incoming packet to each output port. The element has one input port. The element is

initialized with the configuration string “2”, which in this case configures the element to have two

output ports.

Connections in Click A Click connection represents a possible path for packet handoff and at-

taches the output port of an element to the input port of another element. A connection is imple-

mented as a single virtual function call. A connection between two push ports is a push connection,

where packet handoff along the connection is initiated by the source element (or source end, in the

case of a chain of push connections). A connection between two pull ports is a pull connection,

where packet handoff along the connection is initiated by the destination element (or destination

end, in the case of a chain of pull connections). An agnostic port behaves as a push port when

connected to push ports and as a pull port when connected to pull ports, but each agnostic port must

be used exclusively as either push or pull. In addition, if packets arriving on an agnostic input might

be emitted immediately on an agnostic output, then both input and output must be used in the same

way (either push or pull). When a Click router is initialized, the system propagates constraints until

every agnostic port has been assigned to either push or pull. A connection between a push port

and a pull port is illegal. Every push output and every pull input must be connected exactly once.

However, push inputs and pull outputs can be connected more than once. There are no implicit

queues on input and output ports, which means that they do not carry the associated performance

and complexity costs. Queues in Click must be defined explicitly and appear as Queue elements.

A Queue has a push input port (responds to pushed packets by enqueuing them) and a pull output

port (responds to pull requests by dequeuing packets and returning them).
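A Queue element of the kind just described can be sketched as follows. Packet and Queue here are simplified stand-ins for Click's C++ classes, with an invented capacity constructor and drop counter for illustration.

```cpp
#include <cstddef>
#include <deque>

struct Packet { int id; };

// Sketch of a Click-style Queue: a push input port that enqueues packets
// (dropping them when the queue is full) and a pull output port that
// dequeues packets and returns them.
class Queue {
    std::deque<Packet> q_;
    std::size_t capacity_;
public:
    std::size_t drops = 0;
    explicit Queue(std::size_t capacity) : capacity_(capacity) {}

    // Push input port: enqueue, or drop if the queue is full.
    void push(Packet p) {
        if (q_.size() >= capacity_) { ++drops; return; }
        q_.push_back(p);
    }

    // Pull output port: dequeue a packet into `out`; false if empty.
    bool pull(Packet& out) {
        if (q_.empty()) return false;
        out = q_.front();
        q_.pop_front();
        return true;
    }
};
```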

Another type of element is the Click packet scheduler. This is an element with multiple pull

inputs and one pull output. The element reacts to requests for packets by choosing one of its inputs,

pulling a packet from it, and returning the packet. If the chosen input has no packets ready, the


Figure 6.2: A simple Click configuration with sequence diagram: FromDevice, Null, Null, and ToDevice, showing the push(p) and pull() calls and their returns as a packet p is received, enqueued, dequeued, and sent. Source: Eddie Kohler.

scheduler usually tries other inputs. Both Queue elements and scheduling elements have a single

pull output, so to an element downstream, the elements are indistinguishable. This leads to an ability

to create virtual queues, which are compound elements that act like queues but implement behavior

more complex than FIFO (first in, first out) queuing.

Click runtime system Click runs as a kernel thread inside the Linux 2.2 kernel. The kernel thread

runs the Click router driver, which loops over the task queue and runs each task using stride schedul-

ing [99]. A task is an element that needs special access to CPU time. An element should place itself

on the task queue if the element frequently initiates push or pull requests without receiving a corre-

sponding request. Most elements are never placed on the task queue; they are implicitly scheduled

when their push() or pull() methods are called. Since Click runs in a single thread, a call to push()

or pull() must return to its caller before another task can begin. The router continues to process each

pushed packet, following it from element to element along a path in the router graph (a chain of

push() calls, or a chain of pull() calls), until the packet is explicitly stored or dropped (and simi-

larly for pull requests). The placement of Queues in the configuration graph determines how CPU

scheduling may be performed. For example, device-handling elements such as FromDevice and

ToDevice place themselves on Click’s task queue. When activated, FromDevice polls the device’s

receive DMA (direct memory access) queue for newly arrived packets and pushes them through

the configuration graph. ToDevice examines the device’s transmit DMA queue for empty slots and

pulls packets from its input. Click is a pure polling system; the device never interrupts the processor.

Timers are another way of activating an element besides tasks. An element can have any

number of active timers, where each timer calls an arbitrary method when it fires. Click timers are

implemented using Linux timer queues.


Figure 6.3: Flowchart for Click configuration shown in Figure 6.2: a packet is polled from the receive DMA ring and pushed to the Queue, where it is enqueued if the Queue has room and dropped otherwise; when the device is ready to transmit, the packet is pulled from the Queue and enqueued on the transmit DMA ring. Source: Eddie Kohler.


Figure 6.2 shows a simple Click router configuration with a push chain (FromDevice and Null)

and a pull chain (Null and ToDevice). The two chains are separated by a Queue element. The Null

element simply passes a packet from its input port to its output port; it performs no processing on

the packet. Note that in the sequence diagram in Figure 6.2, time moves downwards. Control flow

moves forward during a push sequence, and moves backward during a pull sequence. Data flow (in

this case, the packet p) always moves forwards.

Figure 6.3 illustrates the basic execution sequence of Figure 6.2. When the task corresponding

to FromDevice is activated, the element polls the receive DMA ring for a packet. FromDevice

calls push() on its output port, which calls the push() method of Null. The push() method of Null

calls push() on its output port, which calls the push() method of the Queue. The Queue element

enqueues the packet if its queue is not full; otherwise it drops the packet. The calls to push()

then return in the reverse order. Later, the task corresponding to ToDevice is activated. If there

is an empty slot in its transmit DMA ring, ToDevice calls pull() on its input port, which calls the

pull() method of Null. The pull() method of Null calls pull() on its input port, which calls the pull()

method of the Queue. The Queue element dequeues the packet and returns it through the return

of the pull() calls.
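The chain of push() calls in this walkthrough can be sketched as follows: each element forwards the packet downstream via a (virtual) method call until the packet reaches a sink. Element, Null, and Sink are simplified stand-ins, not Click's actual classes.

```cpp
#include <vector>

struct Pkt { int id; };

// Sketch of push forwarding: a connection is a single (virtual) function
// call, so a push chain is a chain of nested push() calls that return in
// reverse order once the packet is stored or dropped.
struct Element {
    Element* output = nullptr;  // single downstream connection
    virtual void push(Pkt p) { if (output) output->push(p); }
    virtual ~Element() = default;
};

// Passes the packet from its input to its output unchanged.
struct Null : Element {};

// Terminates the chain by storing the packet (standing in for a Queue).
struct Sink : Element {
    std::vector<Pkt> received;
    void push(Pkt p) override { received.push_back(p); }
};
```

A pull chain works symmetrically, with each pull() call delegating upstream and the packet traveling back through the returns.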

Overhead in Click Modularity in Click results in two main sources of overhead. The first source

of overhead comes from passing packets between elements. This leads to one or two virtual function

calls, each of which involves loading the relevant function pointer from a virtual function table,

as well as an indirect jump through that function pointer. This overhead is avoidable—the Click

distribution contains a tool to eliminate all virtual function calls from a Click configuration. The

second source of overhead comes from unnecessarily general element code. Kohler, et al. found

that element generality had a relatively small effect on Click’s performance since not many elements

in a particular configuration offered much opportunity for specialization [55].

Comparison of Click to TinyGALS An element in Click is comparable to a component in

TinyGALS in the sense that both are objects with private state. Rules in Click on connecting ele-

ments together are similar to those for connecting components in TinyGALS: push outputs must be

connected exactly once, but push inputs may be connected more than once (see Sections 3.1.2 and

3.2.2). Both types of objects (Click elements and TinyGALS components) communicate with other

objects via method calls. In Click, there is no fundamental difference between push processing and

pull processing at the method-call level; both push and pull processing are sets of method calls that


Figure 6.4: Click vs. TinyGALS: a Click push chain (elements C1 through C4) connected through a queue to a Click pull chain (elements C5 and C6), overlaid with a TinyGALS view in which the elements are grouped into actors A, B, and C; control flow, data flow, event and task invocation, and the points where the scheduler invokes an actor from the event queue are annotated.

differ only in name. However, the direction of control flow with respect to data flow is opposite in the two types of processing. Push processing can be thought of as event-driven

computation (if one ignores the polling aspect of Click), where control and data flow downstream

in response to an upstream event. Pull processing can be thought of as demand-driven computation,

where control flows upstream in order to compute data needed downstream.

Figure 6.4 provides a more detailed analysis of the difference in control and data flow between

Click and TinyGALS. Figure 6.4 shows a push processing chain of four elements connected to a

queue, which is connected to a pull processing chain of two elements. In Click, control begins at

element C1 and flows to the right and returns after it reaches the Queue. Data (a packet) flows

to the right until it reaches the Queue. Visualizing this configuration as a TinyGALS model, with

elements C1 and C2 grouped into an actor A and elements C3 and C4 grouped into an actor B,

shows that a TinyGALS actor forms a boundary for control flow.

Note that a compound element in Click does not form the boundary of control flow. In Click, if

an element inside of a compound element calls a method on its output, control flows to the connected

element (recall that a compound element is compiled to a chain of simple elements). In TinyGALS,

data flow within an actor is not represented explicitly. Data flow between components in an actor


can have a direction different from the link arrow direction, unlike in Click, where data flow has the

same direction as the connection. Data flow between actors always has the same direction as the

connection arrow direction, although TinyGUYS provides a possible hidden avenue for data flow

between actors.

Also note that the Click Queue element is not equivalent to the queue on a TinyGALS actor

input port. In Click, arrival of data in a queue does not cause downstream objects to be scheduled,

as it does in TinyGALS. This highlights the fact that Click configurations cannot have two push chains

(where the end elements are activated as tasks) separated by a Queue. Additionally, since Click

is a pure polling system, it does not respond to events immediately, unlike TinyGALS, which is

interrupt-driven and allows preemption to occur in order to process events. Much of this is because

Click’s design is motivated by high throughput routers, whereas TinyGALS is motivated by power-

and resource-constrained hardware platforms; a TinyGALS system goes to sleep when there are no

external events to which to respond.

Aside from the polling/interrupt-driven difference, push processing in Click is equivalent to

synchronous communication between components in a TinyGALS actor. Pull processing in Click,

however, does not have a natural equivalent in TinyGALS. In Figure 6.4, elements C5 and C6 are

grouped into an actor C. If one reverses the arrow directions inside of actor C, control flow in this

new TinyGALS model is the same as in Click. However, elements C5 and C6 may have to be

rewritten to reflect the fact that C6 is now a source object, rather than a sink object.

In Click, execution is synchronous within each push (or pull) chain, but execution is asyn-

chronous between chains, which are separated by a Queue element. From this global point of view,

the execution model of Click is quite similar to the globally asynchronous, locally synchronous

execution model of TinyGALS.

Unlike TinyGALS, elements in Click have no way of sharing global data. The only way of

passing data between Click elements is to add annotations to a packet (information attached to the

packet header, but which is not part of the packet data).

Unlike Click, TinyGALS does not contain timers associated with elements, although this can

be emulated by linking a CLOCK component with an arbitrary component. Also unlike Click, the

TinyGALS model does not contain a task queue.1

1Although, for backwards compatibility with TinyOS, the TinyGALS runtime system implementation supports TinyOS tasks, which are long-running computations placed in the task queue by a TinyOS component method. The scheduler runs tasks in the task queue only after processing all events in the event queue. Additionally, tasks can be preempted by hardware interrupts. See Section 3.3.5 for more information.


Figure 6.5: A sensor network application: four nodes A, B, C, and D, with north indicated.

Figure 6.6: Pull processing across multiple nodes (nodes A, B, C, and D); a configuration for the application in Figure 6.5.

Pull processing in sensor networks Although TinyGALS does not currently use pull processing,

the following example by Jie Liu given in Yang Zhao’s paper [109] illustrates a situation in which

pull processing is desirable for eliminating unnecessary computation. Figure 6.5 shows a sensor

network application in which four nodes cooperate to detect intruders. Each node is only capable

of detecting intruders within a limited range and has a limited battery life. Communication with

other nodes consumes more power than performing local computations, so nodes should send data

only when necessary. Node A has more power and functionality than other nodes in the system. It

is known that an intruder is most likely to come from the west, somewhat likely to come from the

south, but very unlikely to come from the east or north. Under these assumptions, node A may want

to pull data from other nodes only when needed. Figure 6.6 shows one possible configuration for

this kind of pull processing. The center component is similar to the Click scheduler element. This

example also demonstrates a way to perform distributed multitasking. Node D (and others) may

be free to perform other computations while node A performs most of the intrusion detection. This

could be an extension to the current single-node architecture of TinyGALS.
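The center scheduler-like component in this example could be sketched as follows: node A tries its neighbors in order of prior intrusion likelihood (west first, then south) and pulls data only until one responds. PullScheduler and NodeSource are invented names for illustration, not part of TinyGALS or Click.

```cpp
#include <functional>
#include <optional>
#include <utility>
#include <vector>

// A pull() on a node either returns a sensor reading or nothing.
using NodeSource = std::function<std::optional<int>()>;

// Sketch of demand-driven polling: inputs are ordered by how likely an
// intruder is to appear in that direction, and later (less likely) nodes
// are queried only if earlier ones have no data.
class PullScheduler {
    std::vector<NodeSource> inputs_;  // ordered by prior likelihood
public:
    void add_input(NodeSource s) { inputs_.push_back(std::move(s)); }

    // Try each input in priority order; stop at the first one with data.
    std::optional<int> pull() {
        for (auto& in : inputs_) {
            if (auto r = in()) return r;
        }
        return std::nullopt;
    }
};
```

Because a node is only queried when upstream (more likely) sources are empty, communication, and hence power, is spent only when the data is actually needed.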


6.1.5 Click and Ptolemy II

The MESCAL project has created a tool called Teepee [71], which is based on Ptolemy II and

implements the Click model of computation.

The CI (component interaction) domain [16] in Ptolemy II models systems that contain both

event-driven and demand-driven styles of computation. CI is motivated by the push/pull interaction

between data producers and consumers in middleware services such as the CORBA event service.

CI actors can be active (i.e., have their own thread of execution) or passive (triggered by an active

actor). There is a natural correlation between the CI domain and Click. CI and Click could be

leveraged to create an implementation of TinyGALS in Ptolemy II, possibly using the ClassWrapper actor to model TinyGALS components.

6.1.6 Timed Multitasking

Timed multitasking (TM) [69] is an event-triggered programming model that takes a time-

centric approach to real-time programming but controls timing properties through deadlines and

events rather than time triggers.

Software components in TM are called actors, due to the implementation of TM in Ptolemy

II. An actor represents a sequence of reactions, where a reaction is a finite piece of computation.

Actors have state, which carries from one reaction to another. Actors can only communicate with

other actors and the physical world through ports. Unlike method calls in object-oriented models,

interaction with the ports of an actor may not directly transfer the flow of control to another actor.

Actors in a TM model declare their computing functionality and also specify their execution

requirements in terms of trigger conditions, execution time, and deadlines. The system activates an

actor when its trigger condition is satisfied. If there are enough resources at run time, then the system

grants the actor at least the declared execution time before it reaches its deadline. The system makes

the results of the execution available to other actors and the physical world only at the deadline time.

In cases where an actor cannot finish by its deadline, the TM model includes an overrun handler to

preserve the timing determinism of all other actors and allow an actor that violates the deadline to

come to a quiescent state.

A trigger condition can be built using real-time physical events, communication packets, and/or

messages from other actors. Triggers must be responsible, which means that once triggered, an actor

should not need any additional data to complete its finite computation. Therefore, actors are never

blocked on reading. The communication among the actors has event semantics, in which, unlike


state semantics, every piece of data is produced and consumed exactly once. The event semantics

can be implemented by FIFO queues. Conceptually, the sender of a communication is never blocked

on writing.

Liu and Lee [69] describe a method for synthesizing the interfaces and interactions among TM actors in an imperative language such as C. There are two types of actors: interrupt service routines

(ISRs) respond to external events, and tasks are triggered entirely by events produced by peer actors.

These two types do not intersect. In a TM model, an ISR usually appears as a source actor or

a port that transfers events into the model. ISRs do not have triggering rules, and outputs are

made immediately available as trigger events to downstream actors. An ISR is synthesized as an

independent thread. Tasks have a much richer set of interfaces than ISRs and have a set of methods

that define the split-phase reaction of a task. The TM runtime system uses an event dispatcher to

trigger a task when a new event is received at its port. Events on a connection between two actors are

represented by a global data structure, which contains the communicating data, a mutual-exclusion

lock to guard the access to the variable if necessary, and a flag indicating whether the event has been

consumed.
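The per-connection event structure just described can be sketched as follows. TmEvent is an illustrative name, not taken from the TM implementation; the consumed flag is represented here by the optional being empty.

```cpp
#include <mutex>
#include <optional>

// Sketch of a TM connection's global event structure: the communicated
// data, a mutual-exclusion lock guarding access to it, and an indication
// of whether the event has been consumed. Event semantics: every piece of
// data is produced and consumed exactly once.
template <typename T>
class TmEvent {
    std::mutex m_;
    std::optional<T> data_;  // empty once consumed (or before production)
public:
    void produce(T v) {
        std::lock_guard<std::mutex> g(m_);
        data_ = v;           // the sender is never blocked on writing
    }

    // Returns the event if present and marks it consumed.
    std::optional<T> consume() {
        std::lock_guard<std::mutex> g(m_);
        auto r = data_;
        data_.reset();       // consumed exactly once
        return r;
    }
};
```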

Section 3.2.2 suggested that a partial method of reducing non-determinacy in TinyGALS pro-

grams due to one or more interrupts during an actor iteration is to delay producing outputs from an

actor until the end of its iteration. This is similar to the TM method of only producing outputs at the

end of an actor’s deadline.

6.2 Design, Simulation, and Deployment Environments

A number of frameworks for designing, simulating, and deploying wireless systems exist,

though none include all of the capabilities of Viptos. Some information presented in this section is

excerpted from papers on VisualSense [8] and Viptos [21, 22].

6.2.1 Design and simulation environments

ns-2 [77] is a well-established, open-source network simulator. It is a discrete-event simulator

with extensive support for simulating TCP/IP, routing, and multicast protocols over wired and wire-

less (local and satellite) networks. Wireless and mobility support in ns-2 comes from the Monarch

project, which provides channel models and wireless network layer components in the physical,

link, and routing layers [14].


SensorSim [79] builds on ns-2 and claims power models and sensor channel models. A power

model consists of an energy provider (the battery) and a set of energy consumers (CPU, radio, and

sensors). An energy consumer can have several modes, each corresponding to a different trade-off

between performance and power. The sensor channels model the dynamic interaction between the

physical environment and the sensor nodes. SensorSim also claims hybrid simulation in which real

sensor nodes can participate. Unfortunately, SensorSim is no longer under development and will

not be publicly released.

OPNET Modeler [78] is a commercial tool that offers sophisticated modeling and simulation

of communication networks. An OPNET model is hierarchical, where the top level contains the

communication nodes and the topology of the network. Each node can be constructed from software

components, called processes, in a block-diagram fashion, and each process can be constructed

using finite state machine (FSM) models. It uses a discrete-event simulator to execute the entire

model. In conventional OPNET models, nodes are connected by static links. The OPNET Wireless

Module provides support for wireless and mobile communications. It uses a 13-stage “transceiver

pipeline” to dynamically determine the connectivity and propagation effects among nodes. Users

can specify transceiver frequency, bandwidth, power, and other characteristics. The transceiver

pipeline stages use these characteristics to calculate the average power level of the received signals to

determine whether the receiver can receive this signal. OPNET also supports antenna gain patterns

and terrain models.

OMNeT++ [98] is an open source tool for discrete-event modeling. With the Mobility Frame-

work extension, it shares many concepts, solutions, and features with OPNET. But instead of using

FSM models for processes, OMNeT++ defines a component interface for the basic module, with an

object-oriented approach similar to the abstract semantics of Ptolemy II [28]. The NesCT tool of

the EYES WSN project allows users to run TinyOS applications directly in OMNeT++ simulations.

J-Sim [97] is an open-source, component-based, compositional network simulation environment developed entirely in Java. A new wireless sensor framework [90] builds upon the autonomous component architecture (ACA) and the extensible internetworking framework (INET)

of J-Sim, and provides an object-oriented definition of (1) target, sensor and sink nodes, (2) sen-

sor and wireless communication channels, and (3) physical media such as seismic channels, and

mobility models and power models (both energy-producing and energy-consuming components).

Application-specific models can be defined by sub-classing classes in the simulation framework

and customizing their behaviors. It also includes a set of classes and mechanisms to realize net-

work emulation. This new framework extends the notion of network emulation to Berkeley Mica


mote-based wireless sensor networks. Physical environment data from the network is extracted with

SerialForwarder, a utility distributed with TinyOS that collects TinyOS packets sent to a mote base

station attached to a PC and forwards them through the serial port.

Prowler [86] is a probabilistic wireless network simulator running under MATLAB and can

simulate wireless distributed systems, from the application to the physical communication layer.

Although Prowler provides a generic simulation environment, its current target platform is the

Berkeley Mica mote running TinyOS. Prowler is an event-driven simulator that can be set to op-

erate in either deterministic mode (to produce replicable results while testing the application) or in

probabilistic mode (to simulate the nondeterministic nature of the communication channel and the

low-level communication protocol of the motes). It can incorporate an arbitrary number of motes, on

an arbitrary (possibly dynamic) topology, and it was designed to be easily embedded into optimization

algorithms.

Em* [34] is a toolsuite for developing sensor network applications on Linux-based hardware

platforms called microservers. It supports deployment, simulation, emulation, and visualization of

live systems, both real and simulated. EmTOS [35] is an extension to Em* that enables an entire

nesC/TinyOS application to run as a single module in an Em* system. The EmTOS wrapper library

is similar to the TOSSIM simulated device library. Em* modules are implemented as user-space

processes that communicate through message passing via device files. This means that the minimum

granularity of a timer is 10 milliseconds, corresponding to the Linux jiffy clock that is part of the

scheduler in the Linux 2.4 kernel. Thus, EmTOS modules are restricted to using the Linux scheduler

as the main programming model.

GloMoSim (Global Mobile system Simulator), from UCLA, is a scalable environment for

parallel simulation of wireless systems [106]. It relies on Parsec, a C-based simulation language

for sequential and parallel execution of discrete-event simulation models. GloMoSim is designed to

be extensible and composable: the communication protocol stack for wireless networks is divided

into a set of layers, each with its own API, similar to the OSI (Open Systems Interconnection)

seven-layer network architecture. Bagrodia founded Scalable Network Technologies, Inc., which

expanded and further developed GloMoSim into a commercial tool called QualNet, which supports

both wired and wireless networks.

TinyViz [65] is a Java-based graphical user interface for TOSSIM. TinyViz supports soft-

ware plugins that watch for events coming from the simulation—such as debug messages and radio

messages—and react by drawing information on the display, setting simulation parameters, or ac-

tuating the simulation itself, for example, by setting the sensor values that simulated motes read.


TinyViz includes a radio model plugin with two built-in models: “Empirical” (based on an outdoor

trace of packet connectivity with the RFM1000 radios) and “Fixed radius” (all motes within a given

fixed distance of each other have perfect connectivity, and no connectivity to other motes).

Other simulators used in the TinyOS community for cycle accurate simulation/emulation of

the Atmel AVR (processor used in the Mica mote series) instruction set include ATEMU [80] and

Avrora [96]. ATEMU simulates a byte-oriented interface to the radio and its transmissions at the

bit level with precise timing. Avrora works at the byte level with precise timing, and its simulation

speed scales much better than ATEMU's for large numbers of nodes. Both support simulation of

heterogeneous networks.

All of these systems provide extension points where model builders can define functionality by

adding code. Some are also open-source software, like Viptos. None provide the ability to transition

from high-level modeling to real code simulation and deployment. All except Em* provide some

form of discrete-event simulation, but none provide the ability that Viptos inherits from Ptolemy II to

integrate diverse models of computation, such as continuous-time, dataflow, synchronous/reactive,

and time-triggered. This capability can be used, for example, to model the physical environment, as

well as the physical dynamics of mobility of sensor nodes, their digital circuits, energy consumption

and production, signal processing, or real-time software behavior. In the other simulators, such

models would have to be built with low-level code. Viptos and Ptolemy II support hierarchical nesting of heterogeneous

models of computation [28]. They also appear to be unique among these modeling environments

in that FSM models can be arbitrarily nested with other models; i.e., they are not restricted to be

leaf nodes [33]. They also appear to be the only frameworks to provide a modern type system at the

actor level (vs. the code level) [105].
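The discrete-event semantics that most of these simulators share can be captured by a small kernel: timestamped events are processed in chronological order from a priority queue, and each handler may schedule future events. The Python sketch below is purely illustrative and far simpler than the DE director in Ptolemy II:

```python
import heapq

def run_discrete_event(initial_events, until):
    """Minimal discrete-event kernel: pop events in timestamp order;
    handlers may schedule further events via the callback."""
    queue = list(initial_events)      # (time, seq, handler) tuples
    heapq.heapify(queue)
    seq = len(queue)                  # tie-breaker for equal timestamps

    def schedule(time, handler):
        nonlocal seq
        heapq.heappush(queue, (time, seq, handler))
        seq += 1

    trace = []
    while queue:
        time, _, handler = heapq.heappop(queue)
        if time > until:
            break
        trace.append(time)
        handler(time, schedule)
    return trace

# A node that sends a beacon every 10 time units:
def beacon(t, schedule):
    schedule(t + 10, beacon)

print(run_discrete_event([(0, 0, beacon)], until=35))  # [0, 10, 20, 30]
```

Integrating other models of computation, as Ptolemy II does, amounts to letting a nested model (continuous-time, dataflow, and so on) decide what happens inside a single event's handler.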

DyMND-EE [110] is a wireless sensor network simulator based on Ptolemy II. It uses Em*

to run nesC code in a Linux environment using the FUSD kernel module to provide connections be-

tween simulated nodes and the DyMND-EE simulation manager. DyMND-EE is similar to Viptos,

except that it requires modification to the nesC source code in order to use simulated sensor and

other devices. One interesting part of this project is the DyMND Execution Sequencer (DES) user

interface, which allows users to graphically specify the deployment topology of a sensor network in-

cluding target positions and trajectories, and to generate an XML configuration file. DES generates

a model encompassing the full sensor network from this XML configuration and existing Ptolemy

II descriptions of the required actors, along with any properties files required by external runtime

environments like Em*. This generative technique is similar to the metaprogramming techniques

presented in Chapter 5.
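To give a flavor of this generative style, the sketch below renders a deployment topology as an XML configuration file. The element and attribute names are invented for illustration and do not reflect DyMND-EE's actual configuration schema:

```python
from xml.etree import ElementTree as ET

def topology_to_config(nodes):
    """Render a deployment topology (name -> (x, y) position) as XML.
    Tag and attribute names here are hypothetical."""
    root = ET.Element("deployment")
    for name, (x, y) in nodes.items():
        ET.SubElement(root, "node", name=name, x=str(x), y=str(y))
    return ET.tostring(root, encoding="unicode")

print(topology_to_config({"mote1": (0, 0), "mote2": (25, 40)}))
```

A second generative step, as in DES, would then expand each `node` element into a full Ptolemy II actor description plus any properties files needed by external runtimes such as Em*.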

6.2.2 TinyOS development and editing environments

GRATIS II (Graphical Development Environment for TinyOS) is built on top of GME 3

(Generic Modeling Environment). The TinyOS component library is available as graphical blocks

within GRATIS II. Given a valid model, the GRATIS II code generator can transform all the in-

terface and wiring information into a set of nesC target files. However, GRATIS II was developed

mainly for static analysis of TinyOS component graphs and does not support simulation.

TinyDT is a TinyOS 1.x plugin for the Eclipse platform that implements an IDE (integrated

development environment) for TinyOS/nesC development. This open source project features syntax

highlighting of nesC code, code navigation, code completion for interface members, support for

multiple target platforms and sensor boards, automatic build support, team development support

(through Eclipse-CVS integration), and support for multiple TinyOS source trees. TinyDT uses a

Java-based nesC parser implemented using ANTLR to build an in-memory representation of the

actual nesC application, which includes component hierarchy, wirings, interfaces and the JavaDoc

style nesC documentation. TinyOS IDE is another Eclipse plugin that supports TinyOS project

development and provides nesC syntax highlighting. Both TinyDT and TinyOS IDE complement

Viptos in that they can be used to create and edit the source code for new TinyOS library compo-

nents, which nc2moml can then import into Viptos for simulation.

6.2.3 Programming and deployment environments

Sun Microsystems Laboratories has created a Java-based wireless sensor network platform

called Sun SPOT (Small Programmable Object Technology) [88]. The Sun SPOT is based on a

32-bit 180 MHz ARM920T core with 512 KB of RAM and 4 MB of flash memory, and a CC2420

802.15.4 radio with an effective range of about 80 meters. It runs the Squawk VM [85], a small

J2ME-compliant Java virtual machine. The Sun SPOT supports multiple concurrently running ap-

plications.

SPOTWorld is “an integrated management, deployment, debugging and programming tool”

for Sun SPOTs [87, 89]. This graphical tool can run stand-alone or be integrated with NetBeans.

SPOTWorld depicts each automatically discovered Sun SPOT, and users can manage each device,

e.g., to get device status information, set a persistent name property, deploy code, reset the device, or

start any of the available applications. Users can graphically address individual running applications

in order to pause, resume, or exit each one. SPOTWorld also has an experimental feature that allows

users to drag an application from one SPOT to the next, even as the application runs. SPOTWorld

enables the user to compile a collection of applications and deploy the resulting file over the air to

selected Sun SPOTs. The developers of the Sun SPOT and SPOTWorld have expressed interest in

integrating features of Viptos with SPOTWorld.

6.3 Summary

This chapter presented a number of frameworks related to TinyGALS/galsC, Viptos, and the

metaprogramming techniques presented in earlier chapters. Future versions of these tools can ben-

efit from cross-fertilization of the techniques presented in this dissertation.

Chapter 7

Conclusion

Developing software for wireless sensor networks today is an error-prone and tedious process

that involves patching together many different tools and techniques, usually using very low-level

code. This dissertation discussed raising the conceptual level of designing, simulating, and de-

ploying wireless sensor network applications by using actor-oriented programming tools and tech-

niques. Actor-oriented programming provides a way to unify the layers and stages of application

development—between the operating system, node-centric, middleware, and macroprogramming

layers; and between the deployment, simulation, and design stages.

TinyGALS provides a globally asynchronous, locally synchronous programming model that

combines an actor-oriented (message-oriented) model with an object-oriented (procedure-oriented)

model that allows application developers to use high-level actors as a first-order programming con-

cept, but still allows them to use a low-level programming model when needed. This combination

balances fast response with an easy-to-understand programming model that puts application tasks

first. galsC is a language that implements the TinyGALS programming model, and this dissertation

described its syntax and the high-level type checking, concurrency error detection, and scheduling

and communication code generation facilities provided by the galsC compiler.
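Setting aside galsC's concrete C-based syntax, the GALS pattern itself can be illustrated in a few lines of Python: communication between modules is asynchronous (senders enqueue a message and return immediately), while execution within a module is synchronous (each handler runs to completion). All class and method names below are invented:

```python
from collections import deque

class GalsModule:
    """A locally synchronous module with an asynchronous input queue."""
    def __init__(self):
        self.inbox = deque()

    def post(self, msg):
        # Globally asynchronous: the sender enqueues and returns at once.
        self.inbox.append(msg)

    def step(self):
        # The scheduler dequeues one message and runs its handler
        # synchronously to completion.
        if self.inbox:
            return self.handle(self.inbox.popleft())

class Counter(GalsModule):
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle(self, msg):
        self.count += 1  # ordinary synchronous computation
        return self.count

c = Counter()
c.post("tick"); c.post("tick")
print(c.step(), c.step())  # 1 2
```

In TinyGALS the analogous queues sit between actors, and the galsC compiler generates the scheduling and communication code that this sketch writes by hand.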

Viptos provides an integrated, actor-oriented design, simulation, and deployment environment

for wireless sensor network applications. Application developers can use Viptos to create abstract

models of their intended systems and refine them down to low-level code that can be transferred to

target hardware.

Various metaprogramming and generative programming techniques described in this disser-

tation using higher-order actors in Ptalon, Ptolemy II, VisualSense, and/or Viptos enable wireless

sensor network application developers to create high-level descriptions or models and automatically

generate sensor network simulation scenarios.

All of the tools I developed and described in this dissertation are open-source and freely avail-

able on the web. The networked embedded computing community can use these tools and the

knowledge shared in this dissertation to improve the way we program wireless sensor networks.

Bibliography

[1] Gul Agha, Svend Frølund, WooYoung Kim, Rajendra Panwar, Anna Patterson, and Daniel

Sturman. Abstraction and modularity mechanisms for concurrent computing. IEEE Parallel

and Distributed Technology: Systems and Applications, 1(2):3–14, 1993.

[2] Gul A. Agha. ACTORS: A Model of Concurrent Computation in Distributed Systems. The

MIT Press Series in Artificial Intelligence. MIT Press, Cambridge, 1986.

[3] Gul A. Agha, Ian A. Mason, Scott F. Smith, and Carolyn L. Talcott. A foundation for actor

computation. Journal of Functional Programming, 7(1):1–72, 1997.

[4] Todd R. Andel and Alec Yasinsac. On the credibility of MANET simulations. Computer,

39(7):48–54, July 2006.

[5] Amol Bakshi and Viktor K. Prasanna. Algorithm design and synthesis for wireless sensor

networks. In ICPP ’04: Proceedings of the 2004 International Conference on Parallel Pro-

cessing (ICPP’04), pages 423–430, Washington, DC, USA, 2004. IEEE Computer Society.

[6] Amol Bakshi and Viktor K. Prasanna. Structured communication in single hop sensor net-

works. In Proceedings of the First European Workshop on Wireless Sensor Networks (EWSN

2004), pages 138–153, January 2004.

[7] Felice Balarin, Massimiliano Chiodo, Paolo Giusto, Harry Hsieh, Attila Jurecska, Luciano

Lavagno, Alberto Sangiovanni-Vincentelli, Ellen M. Sentovich, and Kei Suzuki. Synthesis

of software programs for embedded control applications. IEEE Transactions on Computer-

Aided Design of Integrated Circuits and Systems, 18(6):834–849, June 1999.

[8] Philip Baldwin, Sanjeev Kohli, Edward A. Lee, Xiaojun Liu, and Yang Zhao. Modeling of

sensor nets in Ptolemy II. In IPSN’04: Proceedings of the Third International Symposium

on Information Processing in Sensor Networks, pages 359–368, New York, NY, USA, 2004.

ACM Press.

[9] Kathleen Baynes, Chris Collins, Eric Fiterman, Brinda Ganesh, Paul Kohout, Christine Smit,

Tiebing Zhang, and Bruce Jacob. The performance and energy consumption of embedded

real-time operating systems. IEEE Transactions on Computers, 52(11):1454–1469, 2003.

[10] Albert Benveniste, Benoît Caillaud, and Paul Le Guernic. From synchrony to asynchrony. In

CONCUR ’99: Proceedings of the 10th International Conference on Concurrency Theory,

pages 162–177, London, UK, 1999. Springer-Verlag.

[11] Jan Beutel. Fast-prototyping using the BTnode platform. In DATE ’06: Proceedings of

the Conference on Design, Automation and Test in Europe, pages 977–982, Leuven, Belgium,

2006. European Design and Automation Association.

[12] J. Bhasker. A SystemC Primer, Second Edition. Star Galaxy Publishing, 2004.

[13] Shah Bhatti, James Carlson, Hui Dai, Jing Deng, Jeff Rose, Anmol Sheth, Brian Shucker,

Charles Gruenwald, Adam Torgerson, and Richard Han. MANTIS OS: an embedded multi-

threaded operating system for wireless micro sensor platforms. Mobile Networks and Appli-

cations, 10(4):563–579, 2005.

[14] Josh Broch, David A. Maltz, David B. Johnson, Yih-Chun Hu, and Jorjeta Jetcheva. A perfor-

mance comparison of multi-hop wireless ad hoc network routing protocols. In MobiCom ’98:

Proceedings of the 4th Annual ACM/IEEE International Conference on Mobile Computing

and Networking, pages 85–97, New York, NY, USA, 1998. ACM Press.

[15] Christopher Brooks, Edward A. Lee, Xiaojun Liu, Stephen Neuendorffer, Yang Zhao, and

Haiyang Zheng (eds.). Heterogeneous concurrent modeling and design in Java (Volume 3:

Ptolemy II domains). Technical Report UCB/ERL M05/23, EECS Department, University

of California, Berkeley, July 2005.

[16] Christopher Brooks, Edward A. Lee, Xiaojun Liu, Stephen Neuendorffer, Yang Zhao, and

Haiyang Zheng (eds.). Heterogeneous concurrent modeling and design in Java (Volume 1:

Introduction to Ptolemy II). Technical Report UCB/EECS-2007-7, EECS Department, Uni-

versity of California, Berkeley, 11 January 2007.

[17] Frederick P. Brooks, Jr. The Mythical Man-Month: Essays on Software Engineering, 20th

Anniversary Edition. Addison Wesley Longman, Inc., 1995.

[18] Adam Cataldo, Elaine Cheong, Thomas Huining Feng, Edward A. Lee, and Andrew Christo-

pher Mihal. A formalism for higher-order composition languages that satisfies the Church-

Rosser property. Technical Report UCB/EECS-2006-48, EECS Department, University of

California, Berkeley, 9 May 2006.

[19] James Adam Cataldo. The Power of Higher-Order Composition Languages in System Design.

PhD thesis, EECS Department, University of California, Berkeley, 18 December 2006.

[20] Elaine Cheong. Design and implementation of TinyGALS: A programming model for event-

driven embedded systems. Master’s thesis, University of California, Berkeley, Berkeley, CA,

USA 94720, May 2003. Published as Technical Memorandum UCB/ERL M03/14.

[21] Elaine Cheong, Edward A. Lee, and Yang Zhao. Joint modeling and design of wireless

networks and sensor node software. Technical Report UCB/EECS-2006-150, EECS Depart-

ment, University of California, Berkeley, November 2006.

[22] Elaine Cheong, Edward A. Lee, and Yang Zhao. Viptos: A graphical development and

simulation environment for TinyOS-based wireless sensor networks. Technical Report

UCB/EECS-2006-15, EECS Department, University of California, Berkeley, February 2006.

[23] Elaine Cheong, Judy Liebman, Jie Liu, and Feng Zhao. TinyGALS: A programming model

for event-driven embedded systems. In Proceedings of the Eighteenth Annual ACM Sympo-

sium on Applied Computing, pages 698–704, March 2003.

[24] Elaine Cheong and Jie Liu. galsC: A language for event-driven embedded systems. Memo-

randum UCB/ERL M04/7, University of California, Berkeley, April 2004.

[25] Elaine Cheong and Jie Liu. galsC: A language for event-driven embedded systems. In Pro-

ceedings of Design, Automation and Test in Europe (DATE05), 7–11 March 2005.

[26] Krzysztof Czarnecki. Overview of generative software development. In Unconventional Pro-

gramming Paradigms (UPP) 2004, volume 3566/2005 of Lecture Notes in Computer Science,

pages 326–341. Springer Berlin / Heidelberg, 2005.

[27] Adam Dunkels, Björn Grönvall, and Thiemo Voigt. Contiki – a lightweight and flexible

operating system for tiny networked sensors. In Proceedings of the First IEEE Workshop on

Embedded Networked Sensors (EmNetS-I), Tampa, Florida, USA, November 2004.

[28] Johan Eker, Jorn W. Janneck, Edward A. Lee, Jie Liu, Xiaojun Liu, Jozsef Ludvig, Stephen

Neuendorffer, Sonia Sachs, and Yuhong Xiong. Taming heterogeneity—the Ptolemy ap-

proach. Proceedings of the IEEE, 91(1):127–144, January 2003.

[29] D. J. Ferreira, M. A. R. Dantas, A. R. Pinto, C. Montez, and Martius Rodriguez. A mid-

dleware for OSCAR and wireless sensor network environments. In HPCS ’07: Proceedings

of the 21st International Symposium on High Performance Computing Systems and Applica-

tions, Washington, DC, USA, 2007. IEEE Computer Society.

[30] Chien-Liang Fok, Gruia-Catalin Roman, and Chenyang Lu. Mobile agent middleware for

sensor networks: An application case study. In Proceedings of the 4th International Confer-

ence on Information Processing in Sensor Networks (IPSN’05), pages 382–387. IEEE, April

2005.

[31] Massimo Franceschetti and Ronald Meester. Navigation in small world networks: a scale-

free continuum model. Journal of Applied Probability, 43(4):1173–1180, 2006.

[32] David Gay, Phil Levis, Rob von Behren, Matt Welsh, Eric Brewer, and David Culler. The

nesC language: A holistic approach to networked embedded systems. In Proceedings of

Programming Language Design and Implementation (PLDI) 2003, June 2003.

[33] Alain Girault, Bilung Lee, and Edward A. Lee. Hierarchical finite state machines with mul-

tiple concurrency models. IEEE Transactions On Computer-Aided Design Of Integrated

Circuits and Systems, 18(6):742–760, June 1999.

[34] Lewis Girod, Jeremy Elson, Alberto Cerpa, Thanos Stathopoulos, Nithya Ramanathan, and

Deborah Estrin. EmStar: A software environment for developing and deploying wireless

sensor networks. In ATEC’04: Proceedings of the USENIX Annual Technical Conference

2004, pages 283–296, Berkeley, CA, USA, 2004. USENIX Association.

[35] Lewis Girod, Thanos Stathopoulos, Nithya Ramanathan, Jeremy Elson, Deborah Estrin, Eric

Osterweil, and Tom Schoellhammer. A system for simulation, emulation, and deployment

of heterogeneous sensor networks. In SenSys ’04: Proceedings of the 2nd International

Conference on Embedded Networked Sensor Systems, pages 201–213, New York, NY, USA,

2004. ACM Press.

[36] Omprakash Gnawali, Ki-Young Jang, Jeongyeup Paek, Marcos Vieira, Ramesh Govindan,

Ben Greenstein, August Joki, Deborah Estrin, and Eddie Kohler. The Tenet architecture for

tiered sensor networks. In SenSys ’06: Proceedings of the 4th International Conference on

Embedded Networked Sensor Systems, pages 153–166, New York, NY, USA, 2006. ACM

Press.

[37] Ben Greenstein, Eddie Kohler, and Deborah Estrin. A sensor network application construc-

tion kit (SNACK). In SenSys ’04: Proceedings of the 2nd International Conference on Em-

bedded Networked Sensor Systems, pages 69–80, New York, NY, USA, 2004. ACM Press.

[38] Ramakrishna Gummadi, Omprakash Gnawali, and Ramesh Govindan. Macro-programming

wireless sensor networks using Kairos. In Proceedings of the International Conference on

Distributed Computing in Sensor Systems (DCOSS), volume 3560/2005 of Lecture Notes in

Computer Science, pages 126–140. Springer Berlin / Heidelberg, 2005.

[39] Nicolas Halbwachs. Synchronous Programming of Reactive Systems. Kluwer Academic

Publishers, 1993.

[40] Chih-Chieh Han, Ram Kumar, Roy Shea, Eddie Kohler, and Mani Srivastava. A dynamic

operating system for sensor nodes. In MobiSys ’05: Proceedings of the 3rd International

Conference on Mobile Systems, Applications, and Services, pages 163–176, New York, NY,

USA, 2005. ACM Press.

[41] Per Brinch Hansen. An evaluation of the message-passing interface. ACM SIGPLAN Notices,

33(3):65–72, 1998.

[42] David Harel, Hagi Lachover, Amnon Naamad, Amir Pnueli, Michal Politi, Rivi Sherman,

Aharon Shtull-Trauring, and Mark Trakhtenbrot. STATEMATE: A working environment for

the development of complex reactive systems. IEEE Transactions on Software Engineering,

16(4):403–414, April 1990.

[43] Rolf Hempel and David W. Walker. The emergence of the MPI message passing standard for

parallel computing. Computer Standards & Interfaces, 21(1):51–62, 1999.

[44] Thomas A. Henzinger, Benjamin Horowitz, and Christoph Meyer Kirsch. Embedded control

systems development with Giotto. In Proceedings of the ACM SIGPLAN Workshop on Lan-

guages, Compilers and Tools for Embedded Systems (LCTES’01), pages 64–72, New York,

NY, USA, 2001. ACM Press.

[45] Maurice Herlihy. A methodology for implementing highly concurrent data objects. ACM

Transactions on Programming Languages and Systems, 15(5):745–770, November 1993.

[46] Carl Hewitt. Viewing control structures as patterns of passing messages. Journal of Artificial

Intelligence, 8(3):323–364, 1977.

[47] Jason Hill. A software architecture supporting networked sensors. Master’s thesis, University

of California, Berkeley, 2000.

[48] Jason Hill, Robert Szewczyk, Alec Woo, Seth Hollar, David Culler, and Kristofer Pister. Sys-

tem architecture directions for networked sensors. In Proceedings of the Ninth International

Conference on Architectural Support for Programming Languages and Operating Systems,

pages 93–104. ACM Press, 2000.

[49] Christopher Hylands, Edward Lee, Jie Liu, Xiaojun Liu, Stephen Neuendorffer, Yuhong

Xiong, Yang Zhao, and Haiyang Zheng. Overview of the Ptolemy project. Technical Re-

port UCB/ERL M03/25, EECS Department, University of California, Berkeley, July 2003.

[50] Chalermek Intanagonwiwat, Ramesh Govindan, and Deborah Estrin. Directed diffusion: a

scalable and robust communication paradigm for sensor networks. In MobiCom ’00: Pro-

ceedings of the 6th Annual International Conference on Mobile Computing and Networking,

pages 56–67, New York, NY, USA, 2000. ACM Press.

[51] Anoop Iyer and Diana Marculescu. Power and performance evaluation of globally asyn-

chronous locally synchronous processors. In Proceedings of the 29th Annual International

Symposium on Computer Architecture, pages 158–168. IEEE Computer Society, 2002.

[52] Gilles Kahn. The semantics of a simple language for parallel programming. In Proceedings

of the IFIP Congress 74, pages 471–475, Paris, France, 1974. International Federation for

Information Processing, North-Holland Publishing Company.

[53] Oliver Kasten and Kay Römer. Beyond event handlers: programming wireless sensors with

attributed state machines. In IPSN ’05: Proceedings of the 4th International Symposium on

Information Processing in Sensor Networks, pages 45–52, Piscataway, NJ, USA, 2005. IEEE

Press.

[54] Eddie Kohler. The Click Modular Router. PhD thesis, Massachusetts Institute of Technology,

November 2000.

[55] Eddie Kohler, Robert Morris, Benjie Chen, John Jannotti, and M. Frans Kaashoek. The Click

modular router. ACM Transactions on Computer Systems (TOCS), 18(3):263–297, 2000.

[56] YoungMin Kwon, Sameer Sundresh, Kirill Mechitov, and Gul Agha. ActorNet: an actor

platform for wireless sensor networks. In AAMAS ’06: Proceedings of the Fifth International

Joint Conference on Autonomous Agents and Multiagent Systems, pages 1297–1300, New

York, NY, USA, 2006. ACM Press.

[57] William W. LaRue, Sherry Solden, and Bishnupriya Bhattacharya. Functional and perfor-

mance modeling of concurrency in VCC. In Concurrency and Hardware Design, Advances

in Petri Nets, pages 191–227, London, UK, 2002. Springer-Verlag.

[58] Hugh C. Lauer and Roger M. Needham. On the duality of operating system structures. In

Proc. Second International Symposium on Operating Systems. IRIA, October 1978. Reprinted in

Operating Systems Review, 13(2):3–19, April 1979.

[59] Edward A. Lee. Modeling concurrent real-time processes using discrete events. Annals of

Software Engineering, 7(1-4):25–45, 1999.

[60] Edward A. Lee. Embedded software. Advances in Computers, 56, 2002.

[61] Edward A. Lee and Steve Neuendorffer. MoML – a modeling markup language in XML –

version 0.4. Technical Report UCB/ERL M00/12, EECS Department, University of Califor-

nia, Berkeley, 2000.

[62] Edward A. Lee and Thomas M. Parks. Dataflow process networks. Proceedings of the IEEE,

83(5):773–801, May 1995.

[63] Man-Kit Leung. Reviving the value of WSN simulation results through Viptos extensions.

Spring 2007 EE290Q (Wireless Sensor Networks) Class Project Report, 9 May 2007.

[64] Philip Levis and David Culler. Maté: a tiny virtual machine for sensor networks. In ASPLOS-

X: Proceedings of the 10th International Conference on Architectural Support for Program-

ming Languages and Operating Systems, pages 85–95, New York, NY, USA, 2002. ACM

Press.

[65] Philip Levis, Nelson Lee, Matt Welsh, and David Culler. TOSSIM: accurate and scalable

simulation of entire TinyOS applications. In Proceedings of the 1st International Conference

on Embedded Networked Sensor Systems (SenSys 2003), pages 126–137. ACM Press, 2003.

[66] Hong Lin, John Rushing, Sara J. Graves, Steve Tanner, and Evans Criswell. Real time target

tracking with binary sensor networks and parallel computing. In Proceedings of 2006 IEEE

International Conference on Granular Computing, pages 112–117, 10-12 May 2006.

[67] Jie Liu, Elaine Cheong, and Feng Zhao. Semantics-based optimization across uncoordinated

tasks in networked embedded systems. In EMSOFT ’05: Proceedings of the 5th ACM In-

ternational Conference on Embedded Software, pages 273–281, New York, NY, USA, 2005.

ACM Press.

[68] Jie Liu, Maurice Chu, Juan Liu, James Reich, and Feng Zhao. State-centric programming for

sensor-actuator network systems. IEEE Pervasive Computing, 2(4):50–62, 2003.

[69] Jie Liu and Edward A. Lee. Timed multitasking for real-time embedded software. IEEE

Control Systems Magazine, pages 65–75, February 2003.

[70] Samuel Madden, Michael J. Franklin, Joseph M. Hellerstein, and Wei Hong. The design

of an acquisitional query processor for sensor networks. In SIGMOD ’03: Proceedings of

the 2003 ACM SIGMOD International Conference on Management of Data, pages 491–502,

New York, NY, USA, 2003. ACM Press.

[71] Andrew Mihal and Kurt Keutzer. Mapping concurrent applications onto architectural plat-

forms. In Axel Jantsch and Hannu Tenhunen, editors, Networks on Chip, chapter 3, pages

39–59. Kluwer Academic Publishers, 2003.

[72] Thomas J. Mowbray, William A. Ruh, and Richard M. Soley. Inside CORBA: Distributed

Object Standards and Applications. Addison-Wesley, 1997.

[73] Walid A. Najjar, Edward A. Lee, and Guang R. Gao. Advances in the dataflow computational

model. Parallel Computing, 25(1):1907–1929, January 1999.

[74] Stephen A. Neuendorffer. Actor-Oriented Metaprogramming. PhD thesis, EECS Department,

University of California, Berkeley, 2005.

[75] Ryan Newton, Arvind, and Matt Welsh. Building up to macroprogramming: an intermediate

language for sensor networks. In IPSN ’05: Proceedings of the 4th International Symposium

on Information Processing in Sensor Networks, pages 37–44, Piscataway, NJ, USA, 2005.

IEEE Press.

[76] Ryan Newton, Greg Morrisett, and Matt Welsh. The Regiment macroprogramming system.

In IPSN ’07: Proceedings of the 6th International Conference on Information Processing in

Sensor Networks, pages 489–498, New York, NY, USA, 2007. ACM Press.

[77] The network simulator - ns-2. http://www.isi.edu/nsnam/ns.

[78] OPNET Technologies, Inc. OPNET Modeler. http://www.opnet.com.

[79] Sung Park, Andreas Savvides, and Mani B. Srivastava. SensorSim: a simulation framework

for sensor networks. In MSWIM ’00: Proceedings of the 3rd ACM International Workshop

on Modeling, Analysis and Simulation of Wireless and Mobile Systems, pages 104–111, New

York, NY, USA, 2000. ACM Press.

[80] Jonathan Polley, Dionysys Blazakis, Jonathan McGee, Dan Rusk, John S. Baras, and Manish

Karir. ATEMU: A fine-grained sensor network simulator. In Proceedings of the First IEEE

Communications Society Conference on Sensor and Ad Hoc Communications and Networks

(SECON’04), pages 145–152, 2004.

[81] Parmesh Ramanathan, Kewal Saluja, Kuang-Ching Wang, and Thomas Clouqueur. UW-API:

A network routing application programmer’s interface (draft version 1.2). Technical report,

University of Wisconsin-Madison, Department of Electrical and Computer Engineering, 29

October 2001.

[82] Hideki John Reekie. Realtime Signal Processing: Dataflow, Visual, and Functional Pro-

gramming. PhD thesis, University of Technology at Sydney, 1995.

[83] Thomas J. Santner, Brian J. Williams, and William I. Notz. The Design and Analysis of

Computer Experiments. Springer Series in Statistics. Springer-Verlag New York, Inc., 2003.

[84] Y. Ahmet Sekercioglu, Andras Varga, and Gregory K. Egan. Parallel simulation made easy

with OMNeT++. In Proceedings of the 15th European Simulation Symposium (ESS’03),

pages 493–499, October 2003.

[85] Doug Simon, Cristina Cifuentes, Dave Cleal, John Daniels, and Derek White. Java™ on

the bare metal of wireless sensor devices: The Squawk Java virtual machine. In VEE ’06:

Proceedings of the 2nd International Conference on Virtual Execution Environments, pages

78–88, New York, NY, USA, 2006. ACM Press.

[86] Gyula Simon, Peter Volgyesi, Miklos Maroti, and Akos Ledeczi. Simulation-based optimiza-

tion of communication protocols for large-scale wireless sensor networks. In Proceedings

2003 IEEE Aerospace Conference, volume 3, pages 3-1339–3-1346, 8–15 March 2003.

[87] Randall B. Smith. SPOTWorld and the Sun SPOT. In IPSN ’07: Proceedings of the 6th

International Conference on Information Processing in Sensor Networks, pages 565–566,

New York, NY, USA, 2007. ACM Press.

[88] Randall B. Smith, Cristina Cifuentes, and Doug Simon. Enabling Java™ for small wireless

devices with Squawk and SpotWorld. In 2nd Workshop on Building Software for Pervasive

Computing, 16 October 2005.

[89] Randall B. Smith, Bernard Horan, John Daniels, and Dave Cleal. Programming the world

with Sun SPOTs. In OOPSLA ’06: Companion to the 21st ACM SIGPLAN Conference on

Object-Oriented Programming Systems, Languages, and Applications, pages 706–707, New

York, NY, USA, 2006. ACM Press.

[90] Ahmed Sobeih, Wei-Peng Chen, Jennifer C. Hou, Lu-Chuan Kung, Ning Li, Hyuk Lim,

Hung-Ying Tyan, and Honghai Zhang. J-Sim: A simulation environment for wireless sensor

networks. In ANSS ’05: Proceedings of the 38th Annual Symposium on Simulation, pages

175–187, Washington, DC, USA, 2005. IEEE Computer Society.

[91] David B. Stewart and Robert A. Brown. Grand challenges in mission-critical systems: Dy-

namically reconfigurable real-time software for flight control systems. In Workshop on Real-

Time Mission-Critical Systems in conjunction with the 1999 Real-Time Systems Symposium,

November 1999.

[92] David B. Stewart, Richard A. Volpe, and Pradeep K. Khosla. Design of dynamically re-

configurable real-time software using port-based objects. IEEE Transactions on Software

Engineering, 23(12):759–776, December 1997.

[93] Janos Sztipanovits and Gabor Karsai. Generative programming for embedded systems. In

GPCE ’02: Proceedings of the 1st ACM SIGPLAN/SIGSOFT Conference on Generative Pro-

gramming and Component Engineering, pages 32–49, London, UK, 2002. Springer-Verlag.

[94] Arsalan Tavakoli, David Chu, Joseph Hellerstein, Philip Levis, and Scott Shenker. Declarative sensornet architecture. In International Workshop on Wireless Sensor Network Architecture (WWSNA 2007), 25–27 April 2007.

[95] TinyOS community forum: An open-source OS for the networked sensor regime. http://www.tinyos.net.

[96] Ben Titzer, Daniel Lee, and Jens Palsberg. Avrora: Scalable sensor network simulation with precise timing. In Proceedings of IPSN ’05, Fourth International Conference on Information Processing in Sensor Networks, pages 477–482, 2005.

[97] Hung-Ying Tyan. Design, Realization and Evaluation of a Component-Based Compositional Software Architecture for Network Simulation. PhD thesis, The Ohio State University, 2002.

[98] Andras Varga. The OMNeT++ discrete event simulation system. In Proceedings of the European Simulation Multiconference (ESM’2001), 6–9 June 2001.

[99] Carl A. Waldspurger and William E. Weihl. Stride scheduling: Deterministic proportional-share resource management. Technical Report MIT/LCS/TM-528, Massachusetts Institute of Technology, Cambridge, MA, USA, June 1995.

[100] Mitchell Wand. Type inference for record concatenation and multiple inheritance. In Proceedings of the Fourth Annual Symposium on Logic in Computer Science, pages 92–97, Piscataway, NJ, USA, 1989. IEEE Press.

[101] Matt Welsh and Geoff Mainland. Programming sensor networks using abstract regions. In NSDI ’04: Proceedings of the 1st Conference on Symposium on Networked Systems Design and Implementation, pages 29–42, Berkeley, CA, USA, 2004. USENIX Association.

[102] Kamin Whitehouse, Cory Sharp, Eric Brewer, and David Culler. Hood: A neighborhood abstraction for sensor networks. In MobiSys ’04: Proceedings of the 2nd International Conference on Mobile Systems, Applications, and Services, pages 99–110, New York, NY, USA, 2004. ACM Press.

[103] Kamin Whitehouse, Feng Zhao, and Jie Liu. Semantic Streams: A framework for composable semantic interpretation of sensor data. In K. Romer, H. Karl, and F. Mattern, editors, The Third European Workshop on Wireless Sensor Networks (EWSN), volume 3868 of Lecture Notes in Computer Science, pages 5–20. Springer-Verlag Berlin Heidelberg, 2006.

[104] Wikipedia. http://www.wikipedia.org/.

[105] Yuhong Xiong. An Extensible Type System for Component-Based Design. PhD thesis, EECS Department, University of California, Berkeley, 2002.

[106] Xiang Zeng, Rajive Bagrodia, and Mario Gerla. GloMoSim: A library for parallel simulation of large-scale wireless networks. In Proceedings of the 12th Workshop on Parallel and Distributed Simulation (PADS ’98), pages 154–161, 26–29 May 1998.

[107] Feng Zhao and Leonidas Guibas. Wireless Sensor Networks: An Information Processing Approach. Elsevier/Morgan-Kaufmann, 2004.

[108] Feng Zhao, Jie Liu, Juan Liu, Leonidas Guibas, and James Reich. Collaborative signal and information processing: An information directed approach. Proceedings of the IEEE, 91(8):1199–1209, August 2003.

[109] Yang Zhao. A study of Click, TinyGALS and CI. http://ptolemy.eecs.berkeley.edu/~ellen_zh/click_tinygals_ci.pdf, April 2003.

[110] Andrew L. Zimdars, James Yang, and Prasanta Bose. End-to-end prototyping and validation for health management sensor networks. In Proceedings of 2005 IEEE Aerospace Conference, pages 3820–3830, 5–12 March 2005.