
Telemark University College, Faculty of Technology

DAQ training course

Introduction to SCADA systems, OPC, Real-time systems and DAQ systems.

© Nils-Olav Skeie (NOS)

January 5, 2011


Contents

Preface vii

I SCADA systems 1

1 Industrial IT systems 2
1.1 Control system 2
1.2 Process control system 2
1.3 Distributed Systems 4
1.4 System Reliability 5
1.4.1 Introduction 5
1.4.2 Estimation 6
1.4.3 Computation 6
1.5 Redundancy 7
1.5.1 Communication 8
1.6 Cluster 8
1.7 The Functions of a Computer Control System 8

2 SCADA 9
2.1 Introduction 9
2.1.1 User Interface (UI) 9
2.1.2 Database 10
2.1.3 Alarm system 10
2.2 SCADA Overview 10
2.3 SCADA control and monitoring devices 14
2.3.1 RTU 14
2.3.2 DCS 15
2.3.3 PLC 16
2.3.4 PAC 16
2.4 RTU Subsystems 16
2.4.1 Open or Closed Control Loops 16
2.4.2 PID 16
2.4.3 CNC Machines 17
2.4.4 Robots 17
2.4.5 Instrumentation 17
2.5 Superior SCADA systems 17
2.5.1 ERP 17
2.5.2 MES 18
2.5.3 IMS 18
2.6 Security 18
2.6.1 Safety system 19
2.6.2 Shutdown system 19
2.7 Documentation 20
2.8 Development 20
2.9 Future 20


II OPC 22

3 Introduction 23
3.1 Background 23
3.2 Operating systems 24
3.3 Software application 25
3.4 Communication model 27
3.4.1 Client/server model 27
3.4.2 Publisher/subscriber model 28

4 OPC specification 30
4.1 Introduction 30
4.2 Communication 30
4.3 OPC Common 33
4.4 OPC Data Access (OPC DA) 33
4.5 OPC Alarms & Events (OPC AE) 36
4.6 OPC Data eXchange (OPC DX) 37
4.7 OPC Historical Data Access (OPC HDA) 38
4.8 OPC Batch 39
4.9 OPC Complex Data (OPC CD) 41
4.10 OPC Security 41
4.11 OPC XML-DA 43
4.12 OPC Command 44
4.13 OPC UA 45

5 OPC system 46
5.1 Introduction 46
5.2 Why use OPC 46
5.3 OPC development 46
5.4 OPC test 47

6 OPC exercise 48

III Real-Time System 55

7 Introduction 56
7.1 Synchronization 56
7.2 Programming 57
7.3 Embedded system 57
7.4 CPU and microcontrollers 58
7.5 Example 58

8 Specifications 60
8.1 Descriptions 60
8.2 Properties 61

9 System architecture 62
9.1 Scheduling 62
9.2 States 65
9.3 Strategies 65

10 Synchronization 67
10.1 Semaphore 67
10.2 Events 68
10.3 Interprocess communication (IPC) 70
10.3.1 Pipes 70
10.3.2 Message queue 70
10.3.3 Shared memory 70
10.4 Communication Protocols 70


10.4.1 Token Ring 70
10.4.2 CSMA/CD 70

11 Resources 72
11.1 Deadlock 72
11.2 Critical region 74

12 Software modules 76
12.1 Instruction time 76
12.2 Software application 77
12.2.1 Process 77
12.2.2 Thread 79
12.2.3 Task 79
12.3 Core and Multicore 81
12.4 Input monitoring 81
12.4.1 Example 82
12.4.2 Priority 82
12.5 Watchdog 83

13 Design 85

14 Programming 86
14.1 Introduction 86
14.2 Memory allocation 86
14.3 Posix.4 87
14.4 C# example 87

15 Operating systems 90
15.1 RTOS requirement 90
15.2 Driver 92
15.3 Windows / Linux 92
15.3.1 Windows history 94
15.3.2 Windows CE or Linux 94
15.3.3 Windows XP Embedded 95
15.4 QNX 95
15.5 VxWorks 95

16 RT system 96
16.1 Benefits of any RTOS 96
16.2 Cost of RTOS 96
16.3 Contents of a RTOS 96

IV DAQ systems 97

17 Sensor overview 98
17.1 Sensor device types 99
17.1.1 Passive or Active 99
17.1.2 Absolute or Relative 100
17.1.3 Point or Continuous Measurement 100
17.1.4 Contact or non-contact 100
17.1.5 Invasive or Intrusive 100
17.2 Sensor device properties 100
17.2.1 Concepts for Conversion 101
17.2.2 Concepts for Operating Conditions 102
17.2.3 Concepts for Accuracy 102
17.3 Sensor output signals 106
17.4 Dynamic measurement 106
17.5 MEMS 107


18 Signal condition systems 108
18.1 Amplification 109
18.1.1 Bandwidth distortions 109
18.1.2 Common-mode rejection ratio (CMRR) 111
18.1.3 Input and output loading 113
18.2 Attenuation 114
18.3 Filtering 114
18.3.1 Low pass filter 115
18.3.2 High pass filter 119
18.3.3 FIR or IIR filter 119
18.4 Differentiation 120
18.5 Integration 120
18.6 Linearization 120
18.7 Combiner 120
18.8 Conversion 120
18.8.1 Low-level analog voltage signal 120
18.8.2 High-level analog voltage signal 121
18.8.3 Current-loop analog signal 121
18.8.4 Digital signal 122
18.9 Noise 122

19 Data Acquisition Systems 124
19.1 Introduction 124
19.2 Digital representation of numbers 127
19.2.1 Integers 127
19.2.2 Floating numbers 129
19.3 ASCII codes 129
19.4 DAQ parts 130
19.4.1 Counters 130
19.4.2 Digital inputs 130
19.4.3 Digital outputs 131
19.4.4 Multiplexer 131
19.4.5 Digital to Analog Converters 133
19.4.6 Analog to Digital Converter 134
19.4.7 Resolution 139
19.4.8 Reference Voltage 140
19.4.9 Single-Ended and Differential Inputs 140
19.4.10 Number of channels 140
19.4.11 Scaling 141
19.4.12 Range, Gain and Measured Precision 141
19.4.13 Software calibration 141
19.4.14 Transfer of A/D conversion to system memory 141
19.5 Range check of signal values 142

20 Communication 143
20.1 Communication architectures 143
20.1.1 Current loop communication 143
20.1.2 Serial communication 145
20.1.3 Network communication 145
20.1.4 Instrument control buses 145
20.1.5 Wireless communication 145
20.2 Wireless Sensor 146
20.2.1 Bar Codes 147
20.2.2 RFID 150
20.2.3 RFID or Bar Codes 150
20.2.4 GPS 152
20.3 Wireless sensor network 152
20.3.1 ZigBee 155


20.3.2 Bluetooth 156
20.3.3 Wireless HART 156
20.3.4 Wireless Cooperation Team 157
20.3.5 Comparison of wireless standards 157
20.4 Distributed Systems 157

21 Discrete Sampling 159
21.1 Sampling-rate theorem 159
21.2 A/D conversion 160
21.3 Simultaneous Sample and Hold 162
21.4 Aliasing 163
21.5 Oversampling 163
21.6 Folding diagram 165
21.7 Spectral analysis of Time varying signals 166
21.8 Spectral Analysis using the Fourier transform 167
21.9 FFT diagram 167
21.10 Selecting the sampling rate and filtering 169
21.11 Dynamic range of the filter and A/D converter 169
21.12 Time interleaved A/D converters 170
21.13 Nyquist Frequency 170

22 Logging 171
22.1 Sensor data 171
22.2 Historical data 171
22.3 Trend curves 171

23 Statistical analysis of Experimental data 173
23.1 Introduction 173
23.2 General concepts and definitions 173
23.2.1 Definitions 173
23.2.2 Measure of central tendency 174
23.2.3 Measures of dispersion 174
23.3 Histogram 175
23.3.1 Examples using the room temperatures 176
23.4 Probability 176
23.4.1 Probability Distribution Functions 176
23.4.2 Some probability distribution functions with engineering applications 177
23.4.3 Parameter estimation 179
23.4.4 Criterion for rejecting questionable data points 180
23.4.5 Correlation of experimental data 180
23.5 Uncertainty budget 184

24 Calibration 185
24.1 Introduction 185
24.2 Calibration process 187
24.3 Calibration of sensors 187
24.4 Calibration Certificate 189

V Documentation 190

25 Guidelines for planning experiments 191
25.1 Overview of experimental tasks 191
25.1.1 Problem definition 191
25.1.2 Experimental design 191
25.1.3 Experimental construction and development 192
25.1.4 Data gathering 192
25.1.5 Data analysis 192
25.1.6 Interpreting the results 192


25.1.7 Conclusion and reporting 192
25.2 Activities in experimental projects 192
25.2.1 Scheduling 192
25.2.2 Cost Estimation 192
25.2.3 Dimensional analysis 193
25.2.4 Determining the Test Rig Scale 193
25.2.5 Uncertainty Analysis 193
25.2.6 Calibration/testing 193
25.2.7 Test Matrix and Test Sequence 194
25.2.8 Documenting Experimental Activities 194
25.2.9 Group projects 194

26 Meetings 195

27 Guidelines for documenting experiments 197
27.1 Informal report 197
27.2 Formal report 197
27.3 References 198
27.3.1 Harvard style 198
27.3.2 Vancouver style 198
27.4 Article or paper 198

Bibliography 199

Index 201


Preface

This document contains sections for SCADA systems, the OPC protocol, real-time systems, DAQ systems, and sensor signal conversions. The history of this document is:

Revision  Changes / Extensions                         Who  Date
0.1       First version for the DAQ workshop at TUC    NOS  5-JAN-11

© Nils-Olav Skeie: Permission is granted to distribute single copies of this document for non-commercial use, as long as it is distributed as a whole in its original form, and the name of the author is mentioned.


Part I

SCADA systems


Chapter 1

Industrial IT systems

The process industry is using more and more IT systems, and a PC is preferred due to the low price and the software available. A process IT system can be a monitoring system or a control system, or any combination. The monitoring system is just for monitoring a process and contains only input modules and some sort of I/O devices for the user information. The control system consists of a monitoring part used for input of information and an output part for controlling the process, together with some sort of I/O devices for the user information. These systems can be stand-alone systems consisting of only a single computer, or distributed systems consisting of several, tens or even hundreds of computers interconnected in different ways.

1.1 Control system

A control system is a device or set of devices used for managing the behavior of another system or a process. The control system will take the input from one or several sensing devices and perform some sort of action on a set of actuators, depending on the input signals and the algorithm. Two different types of control systems exist: logic control and feedback control.

A logic control system, or sequential control system, responds to "simple" input signals, often on/off signals. Normally these systems perform a sequence of operations triggered by an input signal, which is why they are also called sequential control systems.

A feedback control system, or linear control system, uses "continuous" feedback signals from the system in the control algorithm. A PID controller is an example of such a controller, where the difference between a set point signal and the feedback signal is used for control. Also note that some physical systems are not controllable.

Fuzzy logic is a combination of logic control and feedback control, combining some of the design simplicity of logic control with the utility of feedback control.

An automated system is a collection of devices working together to accomplish tasks or produce a product or family of products. The main functions of a control system are shown in Figure 1.1, where a control system is connected to a SCADA system.

1.2 Process control system

A process control system will monitor and control some sort of process. Processes can be described by their starting and stopping points, and by the kinds of changes that take place in between. The types of processes are shown in Figure 1.2 and can be:

- discrete; found in many manufacturing, motion and packaging applications. Robotic assembly, such as that found in automotive production, can be characterized as discrete process control. Most discrete manufacturing involves the production of discrete pieces of product (www.wikipedia.org 2006),

- batch; batch jobs can be stored up during working hours and then executed during the evening or whenever the computer is idle. Batch processing is particularly useful for operations that require the computer or a peripheral device for an extended period of time. Once a batch job begins, it continues until it is done or until an error occurs. Note that batch processing implies that there is no interaction with the user while the program is being executed,

- continuous; often, a physical system is represented through variables that are smooth and uninterrupted in time. This is a system that should run all the time, often named a 24x7 system, meaning running 24 hours a day, 7 days a week. The control of the water temperature in a heating jacket, for example, is an example of continuous process control. Some important continuous processes are the production of fuels, chemicals and plastics. Continuous processes, in manufacturing, are used to produce very large quantities of product per year (millions to billions of pounds) (www.wikipedia.org 2006),

- hybrid; applications being some sort of combination of discrete, batch and continuous process control.

Figure 1.1: The main functions for a process control system; monitoring, control and interconnections.

Figure 1.2: The types of processes that can be monitored and/or controlled by a process control system.

Figure 1.3: A distributed computer system.

1.3 Distributed Systems

An industrial system can be a single computer system or a distributed system with several devices interconnected. A single system is often used on very small processes or plants, but normally several different IT systems are cooperating for solving the monitoring and/or controlling tasks. The reasons for using several systems or devices are:

1. to exploit the functions of different systems or devices,

2. redundancy,

3. better overview and structure,

4. troubleshooting.

The drawbacks of using several systems are the price and a more complex system. No perfect single system exists for controlling purposes, so a cooperation between different systems is preferred. Such a system is shown in Figure 1.3.

A distributed control system (DCS) refers to a control system, usually of a manufacturing system, process or any kind of dynamic system, in which the controller elements are distributed throughout the system, with each component sub-system controlled by one or more controllers. The entire system of controllers is connected by some sort of network for communication and monitoring (www.wikipedia.org 2006).

A distributed system will have a more complex system structure than a single system, but each sub-device will end up with a simpler structure than a single system. The distributed system can also have a higher degree of redundancy.

One example of a distributed system is shown in Figure 1.4. The system contains several controllers for measurement and control of different parts of the plant, local displays, a local area network for sharing information, and several display systems for remote operations.

Page 13: DAQ_Training_Course.pdf

CHAPTER 1. INDUSTRIAL IT SYSTEMS 5

Figure 1.4: A distributed system using a set of local controllers, a local area network (LAN), and systems for remote operations (Caro 2004).

1.4 System Reliability

1.4.1 Introduction

The main branches of reliability (Rausand & Høyland 2004):

1. hardware reliability,

(a) the physical approach; technical items,

(b) the actuarial approach; operating loads and strength,

2. software reliability,

3. human reliability.

Some system reliability definitions (Rausand & Høyland 2004):

1. Reliability; the ability of an item to perform the required function, under given environmental and operational conditions and for a stated period of time.

2. Quality; the totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs.

3. Availability; the ability of an item (under combined aspects of its reliability, maintainability, and maintenance support) to perform its required function at a stated instant of time or over a stated period of time.

4. Maintainability; the ability of an item, under stated conditions of use, to be retained in, or restored to, a state in which it can perform its required functions, when maintenance is performed under stated conditions and using prescribed procedures and resources.

5. Safety; freedom from those conditions that can cause death, injury, occupational illness, or damage to or loss of equipment or property.

6. Security; dependability with respect to prevention of deliberate hostile actions.

7. Dependability; the collective term used to describe the availability performance and its influencing factors: reliability performance, maintainability performance, and maintenance support performance.

Page 14: DAQ_Training_Course.pdf

CHAPTER 1. INDUSTRIAL IT SYSTEMS 6

Figure 1.5: The failure rates over time for a given system or subsystem. The shape is often known as the "bathtub" function.

1.4.2 Estimation

Applications for estimating system reliability (Rausand & Høyland 2004):

1. risk analysis,

2. environmental protection,

3. quality,

4. optimization of maintenance and operation,

5. engineering design,

6. verification of quality/reliability.

1.4.3 Computation

Computation of reliability (Olsson & Rosen 2003):

It is assumed that the possible errors are independent events, i.e. that they do not depend on each other. This assumption is correct as long as a faulty component does not influence the others and have a causal effect on their functionality (Olsson & Rosen 2003). Using n components in the system, these components will operate or be faulty as:

n = n_o(t) + n_f(t)

where n_o(t) is the number of operating components and n_f(t) is the number of faulty components; both numbers are functions of time, while n is constant. The reliability function R(t) is defined as follows (Olsson & Rosen 2003):

R(t) = \frac{n_o(t)}{n} = 1 - \frac{n_f(t)}{n}

A measure of the system is the MTTF (Mean Time To Failure) given as:

MTTF = \int_0^{\infty} R(t)\, dt = \frac{1}{\lambda}

where λ is the fault rate.

The fault rate for hardware devices often gives a shape known as the "bathtub" function, with several early faults in the beginning, a section with random faults (constant fault rate), and ending with wear-out faults. This shape is shown in Figure 1.5.

The availability of the system is measured as an average value of the time intervals in which the system operates correctly, called the MTBF (Mean Time Between Failures).

Page 15: DAQ_Training_Course.pdf

CHAPTER 1. INDUSTRIAL IT SYSTEMS 7

Figure 1.6: The mean time between failures for a device or a system (www.wikipedia.org 2006).

The average time interval in which the system is not working is called the MTTR (Mean Time To Repair). The availability of a system A is defined as (Olsson & Rosen 2003):

A = \frac{MTBF}{MTBF + MTTR}
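As a numerical illustration (the figures below are assumed for this example, not taken from the course material): with a constant fault rate of \lambda = 10^{-4} faults per hour, the mean time to failure is

MTTF = \frac{1}{\lambda} = \frac{1}{10^{-4}\ \mathrm{h}^{-1}} = 10\,000\ \mathrm{h}

and a system with MTBF = 2000 h and MTTR = 8 h has an availability of

A = \frac{2000}{2000 + 8} \approx 0.996

i.e. roughly 99.6 % uptime.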

1.5 Redundancy

Redundancy is the duplication of critical components of a system with the intention of increasing the reliability of the system, usually in the case of a backup or fail-safe (www.wikipedia.org 2006). There can be 3 types of redundancy (Fortuna, Graziani, Rizzo & Xibilia 2007):

1. Physical redundancy; physically replicating the components to be used,

2. Analytic redundancy; the redundant source will be a mathematical model of the component,

3. Knowledge redundancy; the redundant source consists of heuristic information about the system.

Another way of dividing the forms of redundancy is the following (www.wikipedia.org 2006):

1. hardware;

(a) dual modular redundant (DMR) has duplicated elements which work in parallel to provide one form of redundancy,

(b) triple modular redundancy (TMR) is a fault tolerant form of N-modular redundancy, in which three systems perform a process and the result is processed by a voting system to produce a single output. If any one of the three systems fails, the other two systems can correct and mask the fault. If the voter fails then the complete system will fail. However, in a good TMR system the voter is much more reliable than the other TMR components (a minimal voting sketch is shown after this list),

2. information;

(a) error detection and correction,

(b) soft sensor,

(c) disk arrays,

3. time;

4. software.
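As a minimal sketch of the 2-out-of-3 voting principle used in TMR (the channel values and the method name are assumptions made for this illustration, not taken from any standard):

using System;

class TmrVoterDemo
{
    // Majority vote: any single faulty channel is masked by the other two.
    static bool Vote(bool a, bool b, bool c)
    {
        return (a && b) || (a && c) || (b && c);
    }

    static void Main()
    {
        bool channel1 = true, channel2 = true, channel3 = false;   // channel 3 assumed faulty
        Console.WriteLine(Vote(channel1, channel2, channel3));     // prints True: the fault is masked
    }
}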

An important property when deciding any form of redundancy is the Mean Time Between Failures (MTBF). MTBF is the mean time between failures of a system, see Figure 1.6. The calculation of the MTBF will be:

MTBF = \frac{\sum(\text{downtime} - \text{uptime})}{\text{number of failures}}

Calculations of MTBF assume that a system is "renewed", i.e. fixed, after each failure, and then returned to service immediately after the failure. The downtime and the uptime values are according to Figure 1.6, meaning that the MTBF adds up the time the system is working between failures and divides it by the number of failures. The average time between failing and being returned to service is termed mean down time (MDT) or mean time to repair (MTTR) (www.wikipedia.org 2006).
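The following is a small sketch of the MTBF, MTTR and availability calculations described above; the timestamps are assumed for the example:

using System;

class AvailabilityDemo
{
    static void Main()
    {
        // Assumed operating history in hours: the system starts at t = 0 and fails three times.
        double[] upStart   = { 0, 500, 1000 };   // start of each operating (uptime) period
        double[] downStart = { 490, 995, 1480 }; // start of each failure (downtime)

        int failures = downStart.Length;
        double operating = 0, repairTime = 0;

        for (int i = 0; i < failures; i++)
        {
            operating += downStart[i] - upStart[i];          // time from uptime start to the next failure
            if (i + 1 < upStart.Length)
                repairTime += upStart[i + 1] - downStart[i]; // time from failure until returned to service
        }

        double mtbf = operating / failures;
        double mttr = repairTime / (failures - 1);
        double availability = mtbf / (mtbf + mttr);

        Console.WriteLine($"MTBF = {mtbf:F1} h, MTTR = {mttr:F1} h, A = {availability:P1}");
    }
}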

Page 16: DAQ_Training_Course.pdf

CHAPTER 1. INDUSTRIAL IT SYSTEMS 8

1.5.1 Communication

Several nodes, computers with communication capabilities, are often connected to the same communication medium. The communication medium, called a bus, can be wire, optical fiber, or wireless. A single failure of the wire or fiber can stop all communication in the system. Several types of redundancy exist:

- ring redundancy; both ends of the communication medium are connected to the master,

- sub-ring redundancy; only parts of the network have redundancy,

- master redundancy; several masters monitoring the traffic of the network.

1.6 Cluster

A computer cluster is a group of linked computers, working together closely so that in many respects they form a single computer (www.wikipedia.org 2010). These computers are normally interconnected through fast local area networks. Clusters of computers are usually deployed to improve performance and/or availability over that of a single computer. A cluster of computers is typically much more cost-effective than a single computer of comparable speed or availability.

Redundancy is duplication of computers, while a cluster is computers working together.

1.7 The Functions of a Computer Control System

The main functions of a process control system are shown in Figure 1.1. These functions are:

1. process monitoring; modules for collecting and interpreting data from the plant,

2. control; modules for controlling some parameters for the plant,

3. connections; interconnections between the process monitoring and control modules for processing of input and output data, for feedback and automatic control.


Chapter 2

SCADA

2.1 Introduction

Supervisory Control And Data Acquisition (SCADA) is an industrial control system monitoring and controlling a process, or separate control systems. A SCADA system is only a software application. A SCADA system usually consists of the following subsystems (www.wikipedia.org 2006):

1. a Human-Machine Interface or HMI is the apparatus which presents process data to a human operator, and through which the human operator monitors and controls the process,

2. a supervisory (computer) system, gathering (acquiring) data on the process and sending commands (control) to the process,

3. Remote Terminal Units (RTUs) connecting to sensors in the process, converting sensor signals to digital data and sending digital data to the supervisory system,

4. Communication infrastructure connecting the supervisory system to the Remote Terminal Units,

5. an integrated alarm system.

A SCADA system must consist of a lot of software modules, at least a monitoring module to get the information from the process and a control module to "write" information back to the process. Figure 2.1 shows the most common software modules (or sub-modules) in a SCADA system. Some of the important SCADA subsystems (or sub-modules) are described below.

2.1.1 User Interface (UI)

A User Interface is the device or module which presents process data to a human operator, and through which the human operator controls the process. The User Interface (UI) is also known as the:

1. Graphical User Interface (GUI); today almost all user interfaces are graphical,

2. Man-Machine Interface (MMI), the same as a UI,

3. Human-Machine Interface (HMI), the same as a UI.

A UI can be anything from simple devices like buttons and lamps up to complex systems on several computer screens or an overhead screen. The presentation of the process information is very important to let the operator focus on the important information only. The separation into information levels is a common solution for control systems. The system has an overview presentation and the operator can select a specific part to get more detailed information. Depending on the size of the plant and type of information there can be several layers of information. An example of such a system is shown in Figure 2.2. Figure 2.3 shows 2 computer screens with a graphical UI with process information and control options for a plant. Figure 2.4 shows an overview GUI of a process or plant, and a set of GUIs with details from several parts of the same process or plant.

The amount of information on the screen has to be adapted to the usage of the system, and it is important not to overload the GUI with information. Figure 2.5 shows two different modes of presentation of the same information. Which presentation will be best for the operator?


Figure 2.1: An overview of some of the software modules that can be part of a SCADA system.

2.1.2 Database

A database is a structured collection of data stored in a computer system. The structure is achieved by organizing the data according to a database model (www.wikipedia.org 2006). The model most used today is the relational model, but other models exist, such as the hierarchical model or the network model. In a process system the database is used for (a minimal code illustration follows the list below):

1. configuration or setup data; how the process system is structured, with references to the input and output values,

2. runtime data; the current values of the process system,

3. historical data; a history of the "current" values of the process system.
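The following is a minimal sketch of how these three kinds of data can appear for a single process tag; the class layout, tag name and values are assumed for illustration only and are not taken from any specific SCADA product:

using System;
using System.Collections.Generic;

class Tag
{
    // Configuration (setup) data: structure and references for the tag.
    public string Name = "TT-101";
    public string Unit = "degC";
    public double HighAlarmLimit = 80.0;

    // Runtime data: the current value of the tag.
    public double CurrentValue;

    // Historical data: a history of the "current" values.
    public List<(DateTime Time, double Value)> History = new List<(DateTime Time, double Value)>();

    public void Update(double value)
    {
        CurrentValue = value;
        History.Add((DateTime.UtcNow, value)); // a real system would store this in the database
    }
}

class TagDemo
{
    static void Main()
    {
        var tag = new Tag();
        tag.Update(72.5);
        Console.WriteLine($"{tag.Name} = {tag.CurrentValue} {tag.Unit}");
    }
}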

2.1.3 Alarm system

A process system needs an alarm system for presenting the alarms and the actions from the operators. The alarm system should be an integrated part of the monitoring and control systems to give an overview of all the alarms.

2.2 SCADA Overview

Figure 2.6 shows how these devices, functions, and/or modules can be interconnected in an industrial IT system. An industrial IT system can, however, combine the devices and/or modules in any combination, and this course will try to give a background for understanding these devices and the different ways of interconnecting them. A SCADA system is a software module running on an industrial computer, often with an alarm system, a database, and a UI. The SCADA system will communicate with a set of external devices, denoted as RTUs, being a DCS, a PLC, or a PAC depending on the control purposes. The RTUs are physical, hardware devices, working as distributed modules in the process system. The PID module can be a software module of these RTUs. These RTUs will be standalone units, but the SCADA system can interact with these devices to monitor the operation or change the control parameters.

The layered interconnection between the ERP system, the MES, the IMS, and the SCADA system is shown in Figure 2.7. This figure shows the layer priorities of the systems and the information used in these systems. The ERP, the MES, and the IMS all depend on the SCADA systems; the information used and the decisions taken in the ERP, the MES, and the IMS depend on trustworthy information from the SCADA system. The management systems will be useless if the SCADA system is not able to deliver trustworthy information.


Figure 2.2: Several layers of information in a process system, from overview information about the plant down to detailed information about a sensor device (Olsson & Rosen 2003).


Figure 2.3: A computer screen based UI for a plant (www.analogdevices.com: SEP-08).

Figure 2.4: The GUI of a SCADA system showing an overview of a total process (or plant) (www.abb.com:dec-09).


Figure 2.5: The same type of information presented in two different systems. Which system will you prefer to use?

Figure 2.6: An overview of a set of devices and interconnections that can be used in an industrial IT system.


Figure 2.7: The layered connections between an ERP system, MES, IMS, and the SCADA system (from John Baaserud, Baze Technology).

Figure 2.8: A gas power plant at Kårstø in Rogaland in the western part of Norway (Photo: Dag Magne Søyland/StatoilHydro).

An example of a plant using SCADA systems is shown in Figure 2.8. This is a gas power plant at Kårstø in Rogaland, in the western part of Norway, consisting of several SCADA systems monitoring and controlling the functions of the plant.

Some of the SCADA systems available are Wonderware, Citect, and iFix.

2.3 SCADA control and monitoring devices

A SCADA system normally uses a set of sub-devices for monitoring and controlling a plant. The size of the plant will decide the distribution of devices, but normally the SCADA system should be independent of the "physical" monitoring and control of the plant. These devices can be RTUs, DCSs, PLCs and PACs, and any combination of these devices can be interconnected to function as such a device for the SCADA system.

2.3.1 RTU

Remote Terminal Units (RTUs) are distributed computer systems in a larger control system where each RTU will control and/or monitor only a part of the plant. Often an RTU is used for each closed control loop to better maintain the control strategies of the plant. The RTU will be an industrial computer system consisting of (see Figure 2.9):

- inputs; for analog and/or digital inputs,

- outputs; for analog and/or digital outputs,

- communication; for communication with the SCADA system (serial lines or network, wired or wireless),

- processor and memory,

- software dedicated to the functions of the RTU.

Figure 2.9: The modules of the RTU, with I/O for reading and writing values to process equipment like sensors and actuators, and communication to some sort of SCADA system.

Figure 2.10: A set of DCSs are used for monitoring and controlling a plant. Each DCS can consist of a single computer or a set of computers in a network.

The RTU will often use a real-time operating system and an operating system customized to the operation of the RTU. The RTU is a standalone physical unit working without any interactions from any remote systems; however, the remote system (SCADA) can monitor the device and also change any control parameters of the RTU. Available RTUs are the DCS, the PLC, and the PAC.

2.3.2 DCS

A Distributed Computer System (DCS) is an RTU used mainly for analog I/O to the control system. The DCS often has a modular, distributed, but integrated architecture (Mackay, Wright, Park & Reynders 2004). The DCS is dedicated to a specific task in the control system, often some analog monitoring and/or control loop. These devices may also have a small display for a local UI (Mackay et al. 2004). The DCS is used for control purposes in a distributed system; an industrial process will consist of a set of DCSs, each controlling only a part of the process. The DCS itself can be a single computer or a set of computers in a network, depending on the complexity and/or requirements of redundancy for the control system. To complicate the structure, a PLC can also be part of the DCS network.

An example of a plant controlled by a number of DCSs is shown in Figure 2.10. The DCSs are connected in a network. Each DCS can consist of a single computer or several computers interconnected in a private network or the common network. The network of computers for a DCS can be any combination of computers for measurement and control, both DCS and PLC.


Figure 2.11: The operation of a PLC program. An image of the input bits will be copied to the memory, the program will perform the operations depending on the input image in memory, generating an output image. At the end of the cycle time the output pins will be updated.

2.3.3 PLC

A Programmable Logic Controller (PLC) is an RTU used mainly for digital I/O for the control system. These devices are primarily used for sequence control based on on/off inputs and outputs, but today these devices may also include analog I/O (Mackay et al. 2004). The PLC is often a single computer. The PLC will have a cycle time: at a specific time a copy of the input state will be copied to the memory, the PLC program will run and, depending on the input states and the program, an output state will be generated in memory. At the end of the cycle time the output state image will be copied to the output pins. This way the input state will be steady while the program is running, but there will always be a delay from the input states to the output states in a PLC system. This is shown in Figure 2.11.
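To make the scan-cycle description above concrete, here is a minimal sketch of the read-execute-write loop; the I/O helper methods, the logic and the 10 ms cycle time are assumed for illustration and do not describe any particular PLC:

using System;
using System.Threading;

class PlcScanDemo
{
    // Assumed helpers standing in for the physical I/O of the PLC.
    static bool[] ReadPhysicalInputs() => new bool[] { true, false };
    static void WritePhysicalOutputs(bool[] outputs) { /* drive the output pins here */ }

    static void Main()
    {
        const int cycleTimeMs = 10;              // assumed fixed cycle (scan) time
        var outputImage = new bool[1];

        while (true)
        {
            // 1) Copy the input state to memory (the input image) so it stays steady.
            bool[] inputImage = ReadPhysicalInputs();

            // 2) Run the PLC program against the stable input image.
            outputImage[0] = inputImage[0] && !inputImage[1];

            // 3) At the end of the cycle, copy the output image to the output pins.
            WritePhysicalOutputs(outputImage);

            Thread.Sleep(cycleTimeMs);           // wait for the next cycle
        }
    }
}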

2.3.4 PAC

A Programmable Automation Controller (PAC) is a compact controller that combines the features and capabilities of a PC-based control system with those of a typical PLC. A PAC provides the reliability of a PLC and the task flexibility and computing power of a PC.

2.4 RTU Subsystems

2.4.1 Open or Closed Control Loops

Control systems can be with or without a monitoring part: open control loops without any feedback from the process, and closed control loops with feedback (monitoring) from the process.

2.4.2 PID

A proportional-integral-derivative controller (PID controller) is a generic control loop feedback mechanism (controller) widely used in industrial control systems. A PID controller attempts to correct the error between a measured process variable and a desired setpoint by calculating and then outputting a corrective action that can adjust the process accordingly (www.wikipedia.org 2006).

The PID controller calculation (algorithm) involves three separate parameters: the Proportional, the Integral and the Derivative values. The Proportional value determines the reaction to the current error, the Integral value determines the reaction based on the sum of recent errors, and the Derivative value determines the reaction based on the rate at which the error has been changing. The weighted sum of these three actions is used to adjust the process, as shown in Figure 2.12, where Kp is the proportional factor, Ki is the integral factor, and Kd is the derivative factor.

The PID function or controller is controlling the process using some type of analog actuator(s). The PID function is often a part of the functionality of the RTU in the SCADA system, only a software module. Some applications may require using only one or two modes to provide the appropriate system control, giving a PI, PD, or just a P controller.


Figure 2.12: An overview of the PID controller (www.wikipedia.org 2006).

The PID function is often a software module of the DCS, the PLC, and the PAC devices.
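As a rough illustration of the weighted sum of the three actions, the following is a minimal discrete-time PID sketch; the gains, sample time, setpoint and the crude process response used here are assumed for the example, and a real controller would also handle output limits and integrator wind-up:

using System;

class PidDemo
{
    static void Main()
    {
        double kp = 2.0, ki = 0.5, kd = 0.1;     // assumed proportional, integral and derivative factors
        double dt = 0.1;                          // sample time in seconds
        double setpoint = 50.0;                   // desired value
        double measurement = 20.0;                // assumed process variable
        double integral = 0.0;
        double previousError = setpoint - measurement;

        for (int k = 0; k < 5; k++)
        {
            double error = setpoint - measurement;                  // current error
            integral += error * dt;                                 // sum of recent errors
            double derivative = (error - previousError) / dt;       // rate of change of the error
            double output = kp * error + ki * integral + kd * derivative; // weighted sum of the three actions
            previousError = error;

            Console.WriteLine($"u[{k}] = {output:F2}");
            measurement += 0.05 * output * dt;                      // crude stand-in for the process response
        }
    }
}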

2.4.3 CNC Machines

Computer Numerical Control (CNC) machines are using programmed operations for machine tools. These machine tools, powered mechanical devices, are operated only by the software program downloaded into the device.

2.4.4 Robots

Robots are �exible tools in automated systems.

2.4.5 Instrumentation

Instrumentation consists of both sensors and actuators; sensors for monitoring the process and actuators for controlling the process. Figure 2.13 shows a part of a plant with pipes, sensors, and actuators. The actuators are mainly valves, the orange devices. Some of the sensors are used for feedback information of the valve positions, the small green devices on top of the valves.

2.5 Superior SCADA systems

2.5.1 ERP

Enterprise resource planning (ERP) is an enterprise-wide information system designed to coordinate all the resources, information, and activities needed to complete business processes such as order fulfillment or billing (www.wikipedia.org 2006).

An ERP system supports most of the business systems that maintain, in a single database, the data needed for a variety of business functions such as Manufacturing, Supply Chain Management, Financials, Projects, Human Resources and Customer Relationship Management. The following are steps of a data migration strategy that can help with the success of an ERP implementation (www.wikipedia.org 2006):

1. Identifying the data to be migrated,

2. Determining the timing of data migration,

3. Generating the data templates,

4. Freezing the tools for data migration,

5. Deciding on migration related setups,

6. Deciding on data archiving.


Figure 2.13: A part of a plant with several pipes, valves as actuators, and sensors reading the valve positions (Photo from Automatisering 07/2009).

2.5.2 MES

Manufacturing execution systems (MES) serve as the intermediary between a business system such as ERP and a manufacturer's plant floor control equipment. MES helps to manage production scheduling and sequencing, creating an audit trail for track and trace, and delivering work instructions to shop floor workers. MES is also known as Operations Management Software (OMS).

The key focus of an MES system is traceability, to be able to figure out:

1. where a product is manufactured,

2. when a product is manufactured,

3. any sub-devices used to manufacture this product,

4. any claims, warnings, or errors of this product.

2.5.3 IMS

An Information Management System (IMS) is an information system that makes low-level information available on all levels in the organization. This system is often divided into the following subsystems:

- Laboratory Information Management System (LIMS); information system for lab data: samples, analysis, results, instrument management, etc.,

- Process Information Management System (PIMS); information system with focus on process data and data acquisition to a real-time database. In the process industry the PIMS and the IMS are the same system.

2.6 Security

A SCADA system must also contain control logic for safety and shutdown functions. Such a system can also be named a SAS (Safety and Automation System).


2.6.1 Safety system

Safety Integrity Level (SIL) is a set of requirements for the processing chain: reading, evaluating, and responding. For a process system or a SCADA system, the SIL covers the sensor devices, the processing units (RTUs), and the actuators. The SIL consists of a set of numbers with 1 as the lowest level. The weakest part in a chain will decide the SIL for the whole chain. One way to obtain a higher SIL is to use failsafe controllers. These controllers will evaluate safety-relevant field signals and switch to, or stay in, a safe condition in the event of faults (PROFIsafe 2009). In a failsafe controller, safety-oriented operations are processed in two different paths (algorithms) and the results are compared at the end of the algorithms. If there is any deviation, a fault has occurred in one of the paths, and the controller will switch to a safe condition. These controllers must have extensive self-diagnostic facilities (PROFIsafe 2009).
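As a rough illustration of the two-path principle (the signal, the limit and the messages are assumed for this example and are not taken from PROFIsafe):

using System;

class FailsafeDemo
{
    // Two independently implemented evaluations (paths) of the same safety condition.
    static bool TripPathA(double pressure) => pressure > 10.0;
    static bool TripPathB(double pressure) => !(pressure <= 10.0);

    static void Main()
    {
        double pressure = 9.2;                   // assumed safety-relevant field signal
        bool a = TripPathA(pressure);
        bool b = TripPathB(pressure);

        if (a != b)
        {
            // Deviation between the two paths: a fault has occurred, switch to the safe condition.
            Console.WriteLine("Path deviation detected: switching to safe condition");
        }
        else if (a)
        {
            Console.WriteLine("Trip condition: switching to safe condition");
        }
        else
        {
            Console.WriteLine("Normal operation");
        }
    }
}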

is only possible if the system is failsafe (PROFIsafe 2009). Safety will be regarding hardware, software,and communication modules and devices in the system.There exists a lot of di¤erent standards and regulations regarding the safety matter for equipment, we

humans, and the environment. Some international, other local to di¤erent countries, or local extensionsor limitations to international standards and regulations.
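Returning to the two-path principle of a failsafe controller, the following minimal Python sketch evaluates the same field signal in two independently written paths and forces a safe condition if the results deviate. The signal names, trip limit, and return values are assumptions made only for illustration.

def path_a(level_mm, trip_limit=1950):
    # First implementation of the trip evaluation.
    return level_mm >= trip_limit

def path_b(level_mm, trip_limit=1950):
    # Second, independently written implementation of the same evaluation.
    return (trip_limit - level_mm) <= 0

def failsafe_evaluate(level_mm):
    a, b = path_a(level_mm), path_b(level_mm)
    if a != b:
        # The two paths disagree: a fault has occurred in one of them,
        # so the controller switches to the safe condition.
        return "SAFE_STATE"
    return "TRIP" if a else "NORMAL"

print(failsafe_evaluate(1900))   # NORMAL
print(failsafe_evaluate(2000))   # TRIP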

IEC61508 Safety Standard

IEC 61508 is an international standard focusing on safety-related systems that incorporate electrical, electronic and/or programmable electronic (E/E/PE) instruments and devices. The standard is mainly used in the automation and process control industry, but is more and more accepted for applications in other industries. The IEC 61508 standard is divided into 7 parts:

1. General requirements,

2. Requirements for (E/E/PE) safety-related systems,

3. Software requirements,

4. Definitions and abbreviations,

5. Examples of methods for the determination of safety integrity levels,

6. Guidelines on the application of IEC 61508-2 and IEC 61508-3,

7. Overview of measures and techniques.

2.6.2 Shutdown system

The function of a shutdown system is to protect the environment, the plant, and humans in case any state of the process goes beyond predefined boundaries. A plant normally has several levels of protection, like:

- Process Control System (PCS) for daily operation of the plant, using the alarm system of the SCADA system,

- Process Shutdown System for controlled process shutdown (PSD),

- Emergency Shutdown System in case of an emergency situation (ESD),

- Fire & Gas System (FGS) to detect fire and initiate automatic shutdown,

- Mechanical devices like Pressure Safety Valves (PSV) to avoid overpressure when the previous systems fail.


Figure 2.14: The development system for a SCADA system consists of a set of standard modules, and can be extended with developed modules for a specific plant, process, or production system.

2.7 Documentation

A SCADA system can be documented in different ways. Some of the diagrams are:

1. P&ID (Process and Instrumentation Diagram); a family of functional one-line diagrams showing hull, mechanical and electrical systems like piping, instrumentation and cable block diagrams,

2. P&ID (Piping and Instrumentation Diagram/Drawing); a schematic diagram showing piping, equipment and instrumentation connections within process units (www.wikipedia.org 2010),

3. SCD (System Control Diagrams); mainly used for SAS, integrating the control logic so it is easier to check the logic "connection" to other areas in the system.

The documentation uses different levels and symbols to show the contents of the control and monitoring systems.

2.8 Development

A SCADA system has to be developed and configured for a specific plant, process or production system. A modern SCADA system like System Platform from Wonderware (2010), Proficy iFIX (Intellution) from General Electric (GE) (2010), or CitectSCADA and ClearSCADA from Schneider Electric (2010) consists of a set of standard modules and building blocks. The building blocks are normally object oriented, meaning that a new system consists of the standard modules and any combination of the building blocks. New or extended building blocks can normally be developed easily using object oriented development principles. A developed building block consists of system functions (business layer) and specific HMI elements. Figure 2.14 shows a SCADA system with the standard modules and the developed modules; often the developed modules are mainly for the business layer, but also for communication with special hardware devices and for specific HMI elements.

Since SCADA systems can be of any size and combination of devices, the SCADA software must be configured for each process system.

2.9 Future

Analysis from Frost & Sullivan1 has shown that the SCADA market in Europe was worth about $1325 million in 2009. The same market is estimated to reach about $1900 million in 2016. The analysis covered the areas of oil and gas, power, water and wastewater, and others, including plant-level SCADA (food and beverage, pharmaceuticals, chemicals and pulp and paper) and automotive and transportation. The reason for the increasing popularity is the standardization of the systems and building blocks used in the systems, which provides operational effectiveness for a relatively low capital investment.

1According to Control Engineering Europe (www.controlengeurope.com) 5-OCT-2010. See also www.frost.com.


One of the big challenges confronting SCADA is cyber security. Better education of plant-level operators and engineers, as well as system integrators and other SCADA developers, about the benefits and importance of providing security is also necessary to ensure system security.


Part II

OPC


Chapter 3

Introduction

3.1 Background

In every area of the industry there is a move from proprietary solutions to open, vendor-independent standards. As well as reducing costs, this allows the choice of components according to their performance and reduces the dependence on suppliers. The important part in data communication is the protocol, which defines a set of rules for data exchange between software applications. Figure 3.1 shows 2 computers exchanging information using a wired communication link. The protocol should define how the computers will exchange this information and the specification of the information in the data messages sent between the computers. The field bus defines how several computers can be connected to the same network in an industrial system. The bus protocol must define the structure of the data messages sent between the computers, but also how to use the "bus" as a common communication link, since only one computer can transmit data on the "bus" at a time. Figure 3.2 shows several computers connected on a bus using a bus protocol for controlling the communication between the computers. Very often the protocol defines the structure of the communication between the computers and, to some degree, the contents of the data messages. However, the details about specific data values and the meaning of specific bits in the protocols are not defined clearly. How can the software applications in a network system exchange a specific data value and know the status of this data value? This is one of the reasons for the OPC standard.

The specification of the Dynamic Data Exchange (DDE) protocol from 1987 provided a first solution for the data exchange between MS-Windows based applications. The main drawback of this solution was low bandwidth, not very well suited for real-time systems. High bandwidth will be a major requirement for automation systems where exchange of data is important. See Figure 3.3.

The specification of the Object Linking and Embedding (OLE) protocol, from 1990, a distributed object system and protocol, provided better bandwidth. OLE is said to be the evolution of DDE. While DDE was limited to transferring limited amounts of data between two running applications, OLE was capable of maintaining active links between two documents or even embedding one type of document within another. The main benefit of using OLE, next to reduced document size, is the ability to create a master document: references to data in this document can be made, and when the data in the master document changes, the change takes effect in the referencing document. See Figure 3.3.

The OLE protocol later evolved to become an architecture for software components known as the Component Object Model (COM), given that the documents can be objects as well. Both OLE and COM were developed for communication on a single computer, but the Network OLE protocol later evolved into the Distributed Component Object Model (DCOM) protocol, allowing software components distributed across several networked computers to communicate with each other.

DCOM and OLE were used for developing the open standard "OLE for Process Control" (OPC).

Figure 3.1: A protocol is needed when 2 computers are going to exchange information.


Figure 3.2: A protocol is needed when several computers are going to exchange information. The protocols must define the structure of the data messages sent between the computers, but also how the computers should cooperate to use the bus as a common communication link.

Figure 3.3: The background for the OPC standard.

OPC was the original name of an open standard specification developed in 1996 by an industrial automation industry task force. The standard specifies the communication of real-time plant data between control devices from different manufacturers. The background for the OPC standard is shown in Figure 3.3.

After the initial release, the OPC Foundation1 was created to maintain the standard. Today the OPC standard is a series of standard specifications, and the standard is often called Open Process Control (OPC).

3.2 Operating systems

The OPC specification was based on the OLE, COM, and DCOM technologies developed by Microsoft for the Microsoft Windows operating system family. The specification defined a standard set of objects, interfaces and methods for use in process control and manufacturing automation applications to facilitate interoperability.

OPC was designed to bridge Windows based applications and process control hardware and software applications. It is an open standard that permits a consistent method of accessing field data from plant floor devices. This method remains the same regardless of the type and source of data.

COM and DCOM were proprietary protocols from Microsoft, meaning that at first OPC could only be used on systems based on the Windows operating system.

1The WEB site: http://www.opcfoundation.org (4-FEB-07).


Figure 3.4: A 3-layer software application.

Several other operating systems now support the DCOM protocol, like Solaris (Sun), Unix, VMS (Digital), Linux and AIX (IBM), so OPC can be used on different operating systems.

3.3 Software application

The term SCADA refers to a large-scale, distributed measurement (and control) system, see Figure 3.9. A SCADA system includes input/output signal hardware, controllers, HMI, networks, communication, a database and software (www.wikipedia.org 2006).

A programmable automation controller (PAC) is a compact controller that combines the features and capabilities of a PC-based control system with those of a typical programmable logic controller (PLC), see Figure 3.8. PACs are most often used in industrial settings for process control, data acquisition, remote equipment monitoring, machine vision, and motion control. Additionally, because they function and communicate over popular network interface protocols like TCP/IP, OLE for Process Control (OPC) and SMTP, PACs are able to transfer data from the machines they control to other machines and components in a networked control system or to application software and databases (www.wikipedia.org 2006).

The SCADA and PAC software often consists of a 3-layer application model with the following layers (see Figure 3.4, and the minimal sketch after the list):

1. GUI; the presentation layer,

2. Business layer; the calculation, monitoring logic, analysis and so on,

3. Data layer; the process data, events, alarms and so on.
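As a minimal sketch of this layering, the example below separates presentation, logic, and data access. The class names, the tag name and the alarm limit are assumptions chosen only for the example, not part of any SCADA product.

# Data layer: holds process data, events and alarms.
class DataLayer:
    def __init__(self):
        self.tags = {"TT-101": 21.5}      # hypothetical tag name

    def read(self, tag):
        return self.tags[tag]

# Business layer: calculations and monitoring logic.
class BusinessLayer:
    def __init__(self, data):
        self.data = data

    def high_alarm(self, tag, limit):
        return self.data.read(tag) > limit

# Presentation layer (GUI): only formats and shows results.
def show(tag, alarm):
    print(f"{tag}: {'HIGH ALARM' if alarm else 'OK'}")

data = DataLayer()
logic = BusinessLayer(data)
show("TT-101", logic.high_alarm("TT-101", limit=80.0))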

The SCADA and PAC software is a distributed measurement system getting its information from different distributed computer equipment (DCE), so a 3-layered software application on a SCADA or PAC system can be as shown in Figure 3.5.

A complex software system and the exchange of data will depend on a lot of different protocols. What if a new function has to be integrated into the system? To solve the problems regarding the exchange of data and the connections between all the modules, one of the solutions can be OPC. One solution for the system in Figure 3.5 is shown in Figure 3.6.

As can be seen in Figure 3.6, the system has a much simpler structure, and any extensions or changes are easy, as the information is available through the OPC protocol. The drawback is that every system must have support for the OPC protocol. A more specific figure is shown in Figure 3.7, showing 2 applications as OPC clients using information from 3 OPC servers.

The OPC server is the software module with some sort of data access and has one or several protocols for interfacing the I/O hardware modules. The server offers the OPC protocol towards the other software modules in the system. All OPC clients must also have the OPC protocol for communicating with the OPC servers. The complexity of the software is much higher for the server than for the clients.


Figure 3.5: A general SCADA or PAC software application with a lot of different protocols between the software modules (Krogh 2005).

Figure 3.6: SCADA or PAC software using OPC as protocol for interconnection (Krogh 2005).

Figure 3.7: Applications in a system with OPC clients and OPC servers.


Figure 3.8: A PAC system as a combination of a PC and a PLC system.

Figure 3.9: A SCADA system with distributed I/O as HMI, PLC, DCE, and PAC.

3.4 Communication model

A SCADA (Supervisory Control And Data Acquisition) or PAC (Programmable Automation Controller) system will always be a distributed system, and from a communication point of view a distributed system consists of a service provider and a service user. A PAC system, as a combination of a PC and a PLC (Programmable Logic Controller), is shown in Figure 3.8. A SCADA system, with distributed I/O such as HMI (Human Machine Interface), PLC, DCE (Distributed Computer Equipment), and PAC, is shown in Figure 3.9. As shown, a PAC can be part of a SCADA system, indicating that a SCADA system normally is a more complex system than a PAC system.

In these systems there is a lot of information; some subsystems have the information and other subsystems need this information. The service provider and the service user must be logically connected for the user to get information from the provider. The service user will ask the service provider for information. See Figure 3.10.

This logical connection can be described using two different models:

1. Client / server model,

2. Publisher / subscriber model.

OPC supports both models and distinguishes only between synchronous and asynchronous services. With asynchronous services another request can be answered before this one; there is no fixed relationship between the order of the requests from the clients and the order of the answers back to the clients.

3.4.1 Client/server model

In a client/server model the server is the owner of the data (or resource) and the client, or clients, must poll the server to exchange data. The advantage of a client/server model is that several clients can access the data (or resource) at the server at the same time, while without a server only one "client" can access the data (or resource) at a time.

Figure 3.10: User and provider messages.


Figure 3.11: Client/server model with a request and a response.

Figure 3.12: Several clients connected to a server.

Remember that you should not have copies of your data in the process system; the data should be available in only one location, and the client/server model is a good way of dealing with the data resources.

The communication between the client and the server is determined by the OPC protocol. The sequence always starts with a client sending a service request to the server. The answer from the server will be a service response. Figure 3.11 shows the request and response in a client/server model.

Normally several clients connect to a server, as shown in Figure 3.12. One problem with the client/server model is the polling (requests) from the client: the client should request a new set of data every time the data set is used. In a real-time system this can cause a lot of unnecessary requests; a better way can be the usage of the publisher/subscriber model.
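A minimal sketch of the polling pattern in the client/server model follows; the request/response interface shown here is an assumption for illustration, not the OPC wire protocol.

class Server:
    """The server owns the data (or resource)."""
    def __init__(self):
        self.data = {"TT-101": 21.5}          # hypothetical tag

    def service_request(self, tag):
        return self.data.get(tag)             # service response

class Client:
    def __init__(self, server):
        self.server = server

    def poll(self, tag):
        # The client must send a new request every time the value is needed.
        return self.server.service_request(tag)

client = Client(Server())
for _ in range(3):
    print(client.poll("TT-101"))              # one request per use of the data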

3.4.2 Publisher/subscriber model

The publisher/subscriber model assumes a cyclic data supply by the publisher, where the data transfer from the publisher depends on either an external request or an internal event (e.g. a timer). The client, or the clients, must first subscribe to data and define the type of subscription (request and/or events), and the server will give a response when the type of subscription is activated, without the need for polling from the client. One or more clients can subscribe to the same type of data, as shown in Figure 3.13.

Will the publisher be a client or a server? The server is the "owner" of the data, and the most logical solution is to let the server also be the publisher.

The differences between these communication models are shown in Figure 3.14, showing the messages in the time domain. Notice the number of messages: the client/server model needs more network traffic to get the same amount of information from the server.

The client/server model can be either synchronous or asynchronous. These types of read and write operations are shown in more detail in Figure 3.15, where the different read and write operations will depend on the implementation of the clients.
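For comparison, here is a sketch of the publisher/subscriber model, where the server (acting as publisher) pushes updates to registered callbacks instead of being polled. The method names are assumptions for the example.

class Publisher:
    """Server acting as publisher: owns the data and pushes updates."""
    def __init__(self):
        self.subscribers = []
        self.value = 0.0

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def update(self, new_value):
        # Internal event (e.g. a timer or a changed measurement):
        # the publisher notifies every subscriber, no polling needed.
        self.value = new_value
        for callback in self.subscribers:
            callback(new_value)

pub = Publisher()
pub.subscribe(lambda v: print("subscriber 1 got", v))
pub.subscribe(lambda v: print("subscriber 2 got", v))
pub.update(42.0)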

Figure 3.13: A publisher/subscriber model with 1 publisher and 2 subscribers.


Figure 3.14: The client/server model shown on the top and the publisher/subscriber model shown at the bottom, both in the time domain. Notice the difference in traffic load between these models.

Figure 3.15: The synchronous, asynchronous, and subscription-based read and write operations between an OPC server and an OPC client. Client/server types on the top and publisher/subscriber type at the bottom (Kirrmann 2007).


Chapter 4

OPC specification

4.1 Introduction

The OPC protocol consists of a set of specifications:

1. Common,

2. Data access (DA),

3. Alarm and events (AE),

4. Data exchange (DX),

5. Historical Data Access (HDA)

6. Batch,

7. Complex data,

8. XML,

9. Security,

10. Commands,

11. Unified Architecture (UA).

The specifications are developed in working groups in the OPC Foundation1, and only members have access to drafts, pre-releases and information from the working groups. The released specifications are, however, available for everybody2. The specifications often deal with the data on the server side, giving a set of different servers for the different specifications.

The interconnection between the different specifications is shown in Figure 4.1.

Each specification is a description of a server, a software module that can be running on a node in the system. Note that several servers can be running on the same node in the system. In small systems most probably all the servers will be running on the same node. Figure 4.2 shows the usage of some of the OPC servers, the connection to the OPC protocol and the plant. Note that the OPC protocol is the connection between the systems and also the connection point for the OPC clients.

4.2 Communication

COM and DCOM are the basis for the OPC communication, and the COM/DCOM connection consists of:

1. Objects,

2. Interfaces.


Figure 4.1: The interconnection of the OPC specifications.

Figure 4.2: The connections between some of the OPC servers in a process system.

Figure 4.3: A DCOM client and server with an interface connection.


Figure 4.4: Local communication in the same process (application).

Figure 4.5: The interprocess communication between the OPC client and the OPC server using COM.

The interface is the connection point between the client and the server object; the client is only able to "see" the contents of the DCOM server through the interface connection, see Figure 4.3. The DCOM server will be like a "black box" for the client, as the client does not know anything about the internal functionality of the server. A set of functions is available at the main interface, like:

1. AddRef(); increments the reference count of the server object,

2. Release(); decrements the reference count of the server object,

3. QueryInterface(); asks the server whether it supports a given interface, i.e. requests information about the functionality of the server.

The COM/DCOM solution gives the same functionality both on local and remote systems.

It is the responsibility of the client to find the interface of the server component and connect to it. If the client and the server are in the same software application, which is not very likely but possible, the client will connect directly to the server, as shown in Figure 4.4. All the code for the application will then be in the same process, giving a fast and direct connection. This solution is only possible for very special systems, and will not utilize OPC. A more usual way of doing the communication is between two different processes on the same computer. This communication is often called InterProcess Communication (IPC), and a lot of different IPC mechanisms, also called middleware, exist. OPC uses COM and DCOM as the IPC, as shown in Figure 4.5. The COM module is used as the IPC "channel" for OPC on a local machine.

IPC can also be used between processes on different computers, but the IPC must then support network connections as well. COM is only for IPC on the same computer, while DCOM can be used for communication between two processes on two different computers. The usage of DCOM is shown in Figure 4.6.

The network support can use different protocols, and a set of protocols that DCOM can use is shown in Figure 4.7. The most used OPC standards in the process industry are:

1The WEB site: http://www.opcfoundation.org (4-FEB-07)
2The specification is available at: http://www.opcfoundation.org/ → Downloads → Specifications.

Figure 4.6: Usage of DCOM for COM communication between two computers.


Figure 4.7: Network connections in DCOM.

1. OPC DA (Data Access),

2. OPC AE (Alarm & Events),

3. OPC HDA (Historical Data Access).

4.3 OPC Common

Usage:

1. common definitions for several of the OPC specifications,

2. instructions for registration of OPC software modules.

OPC Common interfaces:

1. IOPCServerList; Find the OPC servers on a computer,

2. IOPCCommon; lets the client define the language,

3. IOPCShutdown; callback to the client,

Registration of OPC software modules: using the Windows registry (or ini files).

4.4 OPC Data Access (OPC DA)

The current Data Access specification 3.0 has 19 interfaces and 69 methods (functions). Specification 1.0A is from 1997, and specification 2.0 from late 1998. The functions differ between specifications, so it is important to know the specification number of the DA server. The usage of the OPC DA is:

1. reading of measurement values,

2. calculation and estimation of values,

3. writing of values,

The OPC DA server implements a set of services, and the clients use these services. Figure 4.8 shows an example of a system using an OPC DA server.

Tags are used a lot in the process industry, and a tag is normally assigned to a piece of information. A tag consists of a name describing a single point of information, meaning that a process system (plant) consists of hundreds or even thousands of tags. The figure shows that the DA server contains one tag for each measurement point and controller point in the plant, and it is the responsibility of the DA server to get (or set) the information from the controllers. This is one of the reasons for the complexity of the servers: they need drivers for a lot of controllers and/or measurement systems.

The OPC servers have different rooms for grouping of items and adding access rights and names, and the client can use a group index instead of an item index. The group concept is important, and one or more


Figure 4.8: An example of a system with an OPC DA server and OPC clients using tags for the I/O values. Note that the OPC DA server will not use OPC for communication with the I/O devices (Kirrmann 2007).

items can be added to the same group. The group information is stored on the server, but the server lets the client maintain the group information, and the client can also browse the name space of the server. See Figure 4.9, where the name space contains information about the items on the server. The name space is the area in the server where all the group and tag information is stored.

A more detailed figure is shown in Figure 4.10, showing the tree structure of the group and tag information in an OPC DA server. The information is stored in a root, several levels of branches containing the groups, and a leaf level containing the tags. Each tag indicates a specific measurement point or controller point in a process.

Exchange of measurement values can be done by groups, or by items belonging to a group. The

read/write operations between the server and the client to access the values can be:

1. reading or writing synchronously,

2. reading or writing asynchronously,

3. reading as a subscription.

Figure 4.9: A client connection with a DA server group with a set of I/O items.


Figure 4.10: The name space information of a DA server (Kirrmann 2007).

Figure 4.11: A deadband, meaning that the output will not change even if the input is changing (the system is "dead").

The subscription based operation can depend on the following settings:

1. deadband variation (in %); the deadband is an area of a signal range (or band) where no action occurs (the system is dead), see Figure 4.11 and the sketch after this list. A deadband is a way of data compression: remove some of the data but keep the information,

2. minimum time interval (in seconds),

3. for each item in the group, or the whole group.
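A minimal sketch of deadband-based reporting follows, assuming a percentage deadband over a hypothetical signal span; the function name and numbers are illustrative only.

def deadband_filter(samples, span, deadband_pct):
    """Report a sample only when it has moved more than the deadband
    (given in % of the signal span) away from the last reported value."""
    reported = []
    last = None
    limit = span * deadband_pct / 100.0
    for value in samples:
        if last is None or abs(value - last) > limit:
            reported.append(value)   # change is large enough: report it
            last = value
        # otherwise: inside the deadband, the output does not change
    return reported

# Signal span 0..100 with a 2 % deadband: small wiggles are compressed away.
print(deadband_filter([50.0, 50.5, 51.0, 53.0, 52.9, 60.0], span=100.0, deadband_pct=2.0))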

A sample of a value saved in the OPC DA server has the following descriptions:

1. value (only the current value, no history),

2. quality; like GOOD, BAD (unknown error), CONFIG_ERROR, DEVICE_ERROR, SENSOR_ERROR, COMM_FAILURE, ...,

3. timestamp (Coordinated Universal Time (UTC) / Greenwich Mean Time (GMT)),

4. access rights,


5. properties; SI unit, scaling, description, ...
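The sketch below models such a sample as a small Python data structure; the field names are assumptions chosen for the example and are not the OPC DA type definitions.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DaSample:
    value: float                       # only the current value, no history
    quality: str                       # e.g. "GOOD", "BAD", "SENSOR_ERROR"
    timestamp: datetime                # UTC timestamp
    access_rights: str = "read"        # "read", "write" or "read/write"
    properties: dict = field(default_factory=dict)   # SI unit, scaling, description

sample = DaSample(
    value=87.3,
    quality="GOOD",
    timestamp=datetime.now(timezone.utc),
    properties={"unit": "degC", "description": "Reactor temperature"},
)
print(sample.value, sample.quality)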

The server has a lot of interfaces and methods that can be used by the clients.

Some of the server interfaces are: IOPCCommon, IOPCServer, IOPCServerPublicGroups, IOPCBrowseServerAddressSpace.

Some of the IOPCServer methods are: AddGroup, GetErrorString, GetGroupByName, GetStatus, RemoveGroup, CreateGroupEnumerator.

Some of the interfaces for the group object are: IOPCItemMgt, IOPCGroupStateMgt.

Some of the IOPCItemMgt methods are: AddItems, ValidateItems, RemoveItems, SetActiveState, SetClientHandles, SetDataTypes.

4.5 OPC Alarms & Events (OPC AE)

Usage:

1. monitoring of events,

2. reports of events.

Meaning:

1. discrete alarms,

2. "level" alarms; changes in the process value,

3. warnings,

4. information.

The alarms and events handled by the OPC AE server will then be:

- alarms on sensor devices,

- alarms on sensor values/data,

- alarms on control parameters,

- status of hardware connections,

- status of systems and subsystems.

Usage of the OPC AE server will be:

1. Detection of alarms and/or events from one or more sources,

2. publishing to one or more clients using subscriptions (including a filter),

3. type of clients can be GUI systems and separate alarm systems.

There are 3 different types of events:

1. simple, a simple event in the system,

2. condition, a condition in the system, can be several events,

3. tracking, an external event, often caused by an operator or an external system.

An example of an OPC client having an event subscription on an OPC AE server is shown in Figure 4.12.

The structure of the connection between a client and an OPC AE server is shown in Figure 4.13, showing the connection sequence. The sequence will be:

1. the client connects to the AE server,

2. the client sets up a subscription request to the AE server, getting a connection point (CP),

3. the client configures the connection point (CPC),

4. the connection point (CP) sends an event when the condition configured for the connection point occurs.
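A rough sketch of this subscribe/configure/callback sequence follows; the class and method names are assumptions used only to illustrate the pattern, not the actual OPC AE interfaces.

class AeServer:
    def __init__(self):
        self.connection_points = []

    def subscribe(self, callback, severity_filter=0):
        # Steps 2-3: create and configure a connection point (CP/CPC).
        cp = {"callback": callback, "min_severity": severity_filter}
        self.connection_points.append(cp)
        return cp

    def raise_event(self, source, severity, message):
        # Step 4: notify every CP whose configured filter matches.
        for cp in self.connection_points:
            if severity >= cp["min_severity"]:
                cp["callback"](source, severity, message)

server = AeServer()                                    # step 1: the client connects
server.subscribe(lambda s, sev, m: print("ALARM", s, sev, m), severity_filter=500)
server.raise_event("LT-102", severity=700, message="High level")   # delivered
server.raise_event("LT-102", severity=100, message="Info only")    # filtered out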


Figure 4.12: An OPC client and an OPC AE server with a group for condition events.

Figure 4.13: The connection sequence between a client and an AE server in an OPC system.

4.6 OPC Data eXchange (OPC DX)

Usage:

1. Configuration and data exchange between different systems,

2. Use existing specifications if possible,

3. Define a standard for system configuration.

OPC Data Access is often used for vertical information exchange, while OPC Data eXchange is often used for horizontal information exchange. This means that OPC DA is used between servers and clients, while OPC DX is used between servers, as shown in Figure 4.14.

An OPC DX server is an OPC DA server with DX extensions. The extension consists of 2 different items:

1. readable; a data source,

Figure 4.14: The normal difference between OPC Data Access and OPC Data eXchange.


Figure 4.15: The readable / connectable connection between OPC DA and OPC DX servers.

2. connectable; a data destination.

A connection is made between the readable and the connectable; the connection is saved on the destination side, and it is the responsibility of the OPC DX server to read the value of the readable on the OPC DA server and update the connectable value. See Figure 4.15.

4.7 OPC Historical Data Access (OPC HDA)

The reasons/usages for an OPC HDA server:

1. Reading of historical values,

2. Tools for historical values,

3. Tools for database clients.

Server functions:

1. reading and writing data for process and time-series databases,

2. access to the name space of the DA server,

3. historical values with attributes, timestamps and quality,

4. support for annotation and aggregation,

5. support for replay (playback) of historical values.

The HDA specification can give a range of extra server functionality, from a simple server for reading of trend data only, to a complex server with a lot of extra functions.

Value attributes for saving a new sample:

1. maximum time interval; a new value has to be saved after this time interval,

2. minimum time interval; a new value shall NOT be saved during this time interval,

3. exception deviation; minimum change for saving a new value,

4. exception deviation type; absolute value, percent of new value, or percent of the value span (HighEntryLimit - LowEntryLimit),

5. High Entry Limit; the upper limit for a valid value,

6. Low Entry Limit; the lower limit for a valid value.
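A simplified sketch of how these attributes could be combined when deciding whether to store a new historical sample follows; the attribute names mirror the list above, while the decision logic itself is an assumption made for illustration.

def should_save(new, last_value, seconds_since_last, attrs):
    """Decide whether a new sample is stored in the historian."""
    if not (attrs["low_entry_limit"] <= new <= attrs["high_entry_limit"]):
        return False                                 # outside the valid value range
    if seconds_since_last < attrs["min_interval_s"]:
        return False                                 # too soon after the last save
    if seconds_since_last >= attrs["max_interval_s"]:
        return True                                  # must save after the max interval
    deviation = abs(new - last_value)
    if attrs["deviation_type"] == "percent_of_span":
        span = attrs["high_entry_limit"] - attrs["low_entry_limit"]
        deviation = 100.0 * deviation / span
    return deviation >= attrs["exception_deviation"]

attrs = {"min_interval_s": 1, "max_interval_s": 3600,
         "exception_deviation": 2.0, "deviation_type": "percent_of_span",
         "low_entry_limit": 0.0, "high_entry_limit": 100.0}
print(should_save(52.5, 50.0, 10, attrs))   # True: the value moved 2.5 % of the span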

Timestamps:

1. absolute time; the reference is UTC (Coordinated Universal Time, the same as the old GMT),

2. relative time;

(a) Keywords: NOW, YEAR, MONTH, WEEK, DAY, HOUR, MINUTE, SECOND

(b) Syntax: Keyword ± Offset
(c) Offset: Y, MO, W, D, H, M, S


Figure 4.16: The contents of a batch procedure.

3. Example: daily report:

(a) Start=DAY-1D

(b) Stop=DAY
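As an illustration, here is a small sketch that resolves the daily-report example above into absolute times; it only handles the DAY keyword and the D offset, so it is far from a full HDA relative-time parser.

from datetime import datetime, timedelta, timezone

def resolve(expr, now=None):
    """Resolve a tiny subset of relative time: 'DAY' and 'DAY-<n>D'."""
    now = now or datetime.now(timezone.utc)
    day = now.replace(hour=0, minute=0, second=0, microsecond=0)   # start of today
    if expr == "DAY":
        return day
    keyword, offset = expr.split("-")          # e.g. "DAY-1D"
    assert keyword == "DAY" and offset.endswith("D")
    return day - timedelta(days=int(offset[:-1]))

start, stop = resolve("DAY-1D"), resolve("DAY")
print("Daily report from", start, "to", stop)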

Annotation (Comments):

1. text,

2. username,

3. timestamp.

Aggregation:

1. Calculation only on request,

2. no saving of the calculated values,

3. Optional extension,

4. Types: Average, Minimum, Maximum, Start, Stop, Count, etc.

Playback:

1. Playback of the historical data from the OPC HDA server to a client,

(a) define the speed and duration,

(b) define values and aggregation.

2. Useful for testing, simulation, and teaching,

3. Optional extension.

4.8 OPC Batch

Batch is the execution of a series of programs (jobs) on a computer without human interaction. A batch consists of a set of operations, as shown in Figure 4.16.

The definition of a batch process (Furenes 2009): "Processes that lead to the production of finite quantities of material by subjecting quantities of input materials to a defined order of processing actions using one or more pieces of equipment."

Some characteristics of a batch process (Furenes 2009):


Figure 4.17: A batch process with input variables, output variables, measured variables, and manipulated variables (Furenes 2009).

1. Run intermittently to produce low-volume and high-value products,

2. Have dynamic nature of operation, no steady state,

3. Have a finite time of operation,

4. Frequent repetition of the same process.

The reasons for the OPC batch server:

1. S88 standard for batch control (IEC 61512-1)

2. Easy to configure batch processes,

3. Easy to operate batch processes.

An overview of a batch process is shown in Figure 4.17. The process will have some input variables, output variables, measured variables, and manipulated variables. The run time of one batch and the batch run index will be two time variables.

The S88 standard contains:

1. Recipe (prescription) handling,

2. Production planning,

3. Process control,

4. Monitoring.

Examples of batch processes:

1. making a report; the process must collect a set of data, organize the data, make the report pages, and send the pages to the printing system,

2. baking a cake; the process is shown in Figure 4.18.


Figure 4.18: Baking a cake; an example of a batch process (Furenes 2009).

4.9 OPC Complex Data (OPC CD)

The reasons/usage for an OPC Complex Data server:

1. Able to use more complex data types than OPC DA,

2. Structure of simple items or complex items,

3. The client should read both the structure and the values,

4. Only extensions to the OPC Data Access.

Complex data:

1. Consists of simple or complex items,

2. Unlimited number of nested levels,

3. Structures can be:

(a) arrays,

(b) structures (database records),

(c) arrays of structures,

(d) arrays and structures.

Figure 4.19 shows the usage of an OPC CD server to extend the OPC DA server with complex data structures for the OPC clients.

4.10 OPC Security

The reasons for security control:

1. Control of the access of data in the system,

(a) Requirement for physical security of data,


Figure 4.19: The OPC clients are using complex data structures when communicating with the DA server. The DA server is extended with a CD server to support these complex data structures.

(b) Requirement for confidentiality of data.

2. OPC is an open standard

(a) Anybody can make an OPC client and access data,

(b) Using a wireless network, no physical connection is necessary.

OPC is based on security in Windows:

1. User access in the system (principals)

2. User has to be member of a group (principals)

3. Access certificates,

4. Security objects,

5. Access control lists,

6. Reference monitor,

7. Communication channels,

8. Authorization,

9. Impersonation (acting as another principal).

COM/DCOM objects:

1. Security objects,

(a) the principal of the client must have access to the COM/DCOM objects,

(b) using subscriptions the principal of the server must have access to the client,

(c) using the application DCOMcnfg.exe for access configuration.

The DCOMcnfg.exe application is shown in Figure 4.20.

Recommended security settings for OPC servers (Krogh 2005):

1. Authentication Level: Connect,

2. Impersonation Level: Identify,

3. OPC servers should be running on one specified account,


Figure 4.20: The DCOMcnfg.exe application.

4. Use DCOMcnfg to allow users to access the servers,

5. Use DCOMcnfg to allow users to start the servers.

An OPC server can use 3 different levels of security:

1. No security,

2. DCOM security: security on users, no security access on objects,

3. OPC security: No security or DCOM security, and security access on objects.

4.11 OPC XML-DA

The reasons/usages for an OPC XML Data Access server:

1. better integration of systems that are not tightly connected,

(a) systems on different operating systems,

(b) systems in different application domains,

2. better usage of OPC on the internet.

Extensible Markup Language (XML) :

1. text based,

2. rules for structured information,

3. focus on information, not presentation.

XML example:

<STUDENT>
  <NAME>
    <FIRST> ABC </FIRST>
    <MIDDLE> DEF </MIDDLE>
    <LAST> GHI </LAST>
  </NAME>
  <UNIVERSITY>
    <CODE> TUC </CODE>
    <WEB> www.hit.no </WEB>
  </UNIVERSITY>
</STUDENT>


Figure 4.21: The message sequence in an XML-DA server.

SOAP:

1. Simple Object Access Protocol,

2. Communication protocol,

3. XML based,

4. OPC XML is using SOAP.

Operation mode using XML will be:

1. The OPC client makes an XML document (from input from the user),

2. Sending the XML document to the server,

3. The server extracts the information from the document,

4. Making new information based on the client information,

5. Sending a new XML document back to the client,

6. The client will present the information for the user or extract the necessary information.
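A minimal sketch of this request/response exchange using Python's standard XML library follows; the document structure is invented for the example and is not the OPC XML-DA schema.

import xml.etree.ElementTree as ET

# 1-2: the client builds an XML request document and "sends" it.
request = ET.Element("ReadRequest")
ET.SubElement(request, "Item").text = "TT-101"        # hypothetical tag name
request_doc = ET.tostring(request)

# 3-5: the server extracts the item name and builds a response document.
def server_handle(doc):
    item = ET.fromstring(doc).find("Item").text
    response = ET.Element("ReadResponse")
    value = ET.SubElement(response, "Value", attrib={"item": item, "quality": "GOOD"})
    value.text = "21.5"
    return ET.tostring(response)

# 6: the client parses the response and presents the value.
response_doc = server_handle(request_doc)
value = ET.fromstring(response_doc).find("Value")
print(value.get("item"), "=", value.text, "quality:", value.get("quality"))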

The message sequence is shown in Figure 4.21.

OPC XML can be used instead of DCOM based OPC Data Access, as DCOM protocols may have problems with firewalls.

Will OPC XML replace DCOM based OPC?

1. SOAP/XML is basis for the new .NET communication technology,

2. SOAP/XML will be available on all new Microsoft operating systems,

3. Poor real-time support,

4. Overhead in the communication protocol.

4.12 OPC Command

OPC Commands can be used for configuration of servers and control of state-based operations, but the solutions are often specific to a specific server. OPC commands are often XML based.

A command:

1. Takes a long time,

2. changes the state of the server.


Figure 4.22: The contents of the OPC-UA specification. The specification contains a new communication standard and has a better integration of the OPC-DA, the OPC-HDA, the OPC-AE, the OPC-CD, and the OPC-DX specifications (www.opcfoundation.com: jan-09).

4.13 OPC UA

OPC Unified Architecture is a new architecture:

1. using protocols based on XML and .NET technology from Microsoft, which will influence all the OPC specifications. DCOM is often based on DLLs (Dynamic Link Libraries), giving a lot of problems with different versions ("DLL hell"), and DCOM communication has problems getting through firewalls. Very often most of the security has to be switched off to let the OPC system work.

2. with better integration between the different OPC specifications, such as the OPC-DA, the OPC-DX, the OPC-CD, the OPC-AE, and the OPC-HDA specifications. Today a server is normally installed for each of the specifications; why not combine the specifications in fewer servers?

3. focusing more on services; OPC-UA will be based more on the Service Oriented Architecture (SOA), with a focus on services, not functions.

4. for better integration on non-Microsoft systems, allowing an easier integration of systems not having a tight coupling. This includes systems like embedded systems and systems communicating over the internet, to mention some.

Figure 4.22 shows an overview of the OPC-UA specification.

There are three primary factors that influenced the decision for moving forward with the OPC UA architecture (www.matrikon.com 2007):

1. The major OPC installation base is on Microsoft platforms, while Microsoft is focusing its future efforts on Web Services and SOA applications. In addition, there is increasing pressure from end users looking for OPC support on Linux and other non-Windows platforms.

2. OPC is no longer a simple point-to-point solution, and is becoming the backbone of increasingly complex OPC architectures that involve multiple specifications. Vendors and users require a single interface that exposes the key functional areas of OPC.

3. The OPC Foundation gets more and more requests from clients and other institutions to leverage its standards to aid those who are defining other industry standards at a granularity below the interface level.


Chapter 5

OPC system

5.1 Introduction

OPC today consists of a lot of standards, and each of the standards is implemented as a server. An OPC system will therefore consist of a set of servers having the data and a set of clients using the data. These servers can be installed on one or several computers depending on the structure of the SCADA system. The important aspect here is that every software module that should be integrated into this SCADA system must support the OPC protocol.

The implementation effort for the OPC protocol depends on the functionality of the client or the server. Normally the implementation of OPC in a client is less time consuming than the implementation in a server.

5.2 Why use OPC

1. Using standard network technology,

(a) proven,

(b) good performance,

(c) using existing networks,

(d) security and availability,

(e) cost,

(f) knowledge,

(g) wired and wireless.

2. open communication standard,

(a) independent,

(b) many suppliers,

(c) good performance.

5.3 OPC development

Two main reasons for developing your own client or server:

1. Control,

2. Performance.

It is easy to develop a client: from a couple of days and up, depending on the functionality of the client. A server is much more work, up to almost a year (from 3 to 12 months), depending on the knowledge of the specifications. The servers should also be approved by the OPC Foundation.

The types of knowledge needed (Krogh 2005):


1. OPC speci�cations,

2. Windows security,

3. OOP (Object Oriented Programming),

4. Developing tools,

5. COM/DCOM,

6. OPC toolkits.

5.4 OPC test

The OPC system should be tested before it is installed in a real plant. The best way of testing such a system is to use a simulation mode of the server if the hardware is not available. Most OPC servers have a simulation mode, and the next chapter shows the usage of a freeware OPC server from Matrikon1 in simulation mode. This server has a limited number of variables but can be used for smaller tests.

1Matrikon: See web page www.matrikon.com.


Chapter 6

OPC exercise

Install the MatrikonOPCSimulation.EXE on the computer (Windows version only) and start the OPC server. Note that this installation will only install the Simulation server, not the DDE server. The DDE server must also be installed if using data from the Microsoft Excel application. Figure 6.1 shows the startup window of the OPC simulation server.

Click on the Alias Configuration line, and a new window is shown on the right side of the main window. Right click on the line and select insert new alias item, as shown in Figure 6.2.

Insert a new value, call the value Temp, and select the value as a Triangle Waves type of Int4, as shown in Figure 6.3. An item called Temp is now defined in the OPC server.

Let us use the Matrikon OPC Explorer as an OPC client to display the item Temp from the OPC server. The startup window of the OPC client and the Temp item defined in the OPC server is shown in Figure 6.4.

First right click on the server name to connect to the server. Select connection to the OPC simulation server on localhost. The client should now connect to the server, and the icon in front of the server name should change to show that the client is connected. See Figure 6.5 for the connect menu.

When the client is connected, a group must be added to be able to connect to the item(s) on the server. Right click on the server name again, and select the Add Group option as shown in Figure 6.6. Select Add Group and call the new group TempGroup, as shown in Figure 6.7. Other parameters such as Update Rate and % Deadband can be set on the group as well.

Select OK to save the group, and right click on the group name to add items to the group. Select Add Items in the menu, shown in Figure 6.8. Select New Items from the menu, and a new window showing the items for this group will be presented. The window is shown in Figure 6.9.

Click on the Configured Alias folder name and the available tags will be shown in the lower window. Double click on the Temp tag and the tag name will show in the Tag ID line on top of the window. Use the button with the arrow to transfer the tag name to the added-tag window, shown in Figure 6.10.

Then select File → Validate Tags to validate the tag. Use File → Update and Return to save the tag in the group, and the application will return to the main window of the OPC client. The OPC client will now display the value of the item, as shown in Figure 6.11.

More practice:

1. Try to add more items to the group, with different types of values and so on.

2. Try to use another client, for example MATLAB if you have the OPC Toolbox.
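3. Try to read the item from a small script. The sketch below assumes the third-party OpenOPC for Python package (a separate install) and that it runs on the same Windows machine as the Matrikon simulation server:

import OpenOPC

opc = OpenOPC.client()                      # DCOM-based OPC DA client
opc.connect('Matrikon.OPC.Simulation.1')    # the simulation server used above

# Read our alias item a few times; read() returns (value, quality, timestamp).
for _ in range(5):
    value, quality, timestamp = opc.read('Temp')
    print(value, quality, timestamp)

opc.close()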


Figure 6.1: The startup window of the OPC simulation server.

Figure 6.2: Insert of new alias items in the OPC server.


Figure 6.3: Insert an item called Temp being a Triangle Waves type as Int4.

Figure 6.4: Start of the OPC client on top of the OPC server.


Figure 6.5: Right click on the server name to get the connect menu option.

Figure 6.6: Right click on a connected OPC server to get the Add Group menu option.


Figure 6.7: Add a new group to the OPC server from the OPC client.

Figure 6.8: Right click on the group name to get a menu for adding item(s) to the group.


Figure 6.9: The window for adding new items to a specific group on the OPC server.

Figure 6.10: The Temp item from the OPC server is added to our group.


Figure 6.11: The OPC client is displaying the value of the Temp item from the OPC server.


Part III

Real-Time System


Chapter 7

Introduction

A real-time system means a computer based system where one or more of the applications must be able to "synchronise" with a physical process. "Real-time" means that the computer system is monitoring the states of the physical process and must respond to changes in one or more of these states within a maximum time. A real-time system can then be used for monitoring of different parameters in the physical process for presentation, warnings, alarm situations and for control. The control is possible by regulation of the input variables to the physical process. A typical system is shown in Figure 7.1, where a real-time system is influenced by sensors in the physical process and the real-time system uses the information from the sensors for control of the input variables to the physical process.

NOTE: A real-time system does not mean as fast as possible; a good design of a real-time system just

means as fast as necessary to satisfy the requirements of the system.

7.1 Synchronization

The applications of the real-time system must run together with the physical process, so the real-time system must be able to manage simultaneity. The solution is often to run several applications on the computer system or on different computers in a distributed system. These solutions require some sort of synchronization between the applications, and between the applications and the physical process. Probably the most used way of synchronization between the applications and the physical process is the usage of sensors, while the synchronization between the applications uses global variables or messages. This is shown in Figure 7.2.

When several applications are running "simultaneously" on a computer system, there must also be some control of the usage of the resources in the computer system. Resources can be both hardware and software, like I/O units1, global variables in the software, the CPU, memory, disk etc. One example is the printer device: only one application can use the printer at a time. If several users are printing at the same time, the text will be mixed. This is also shown in Figure 7.2.

1 I/O units: Input and/or output units.
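A minimal sketch of such resource sharing between two "simultaneous" activities follows, using a lock so that only one of them uses the shared printer resource at a time; the task names and report contents are illustrative only.

import threading

printer_lock = threading.Lock()

def print_report(name, lines):
    # Only one task may use the printer resource at a time,
    # otherwise the text from several users would be mixed.
    with printer_lock:
        for line in lines:
            print(f"[{name}] {line}")

t1 = threading.Thread(target=print_report, args=("task A", ["page 1", "page 2"]))
t2 = threading.Thread(target=print_report, args=("task B", ["page 1", "page 2"]))
t1.start(); t2.start()
t1.join(); t2.join()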

Figure 7.1: A real-time system for monitoring and control of a physical process.


Figure 7.2: Synchronization in a real-time system.

Figure 7.3: The sequential dataflow of an office software application.

7.2 Programming

Programming of office systems follows a sequential dataflow, meaning that the application is started and executes a set of operations in a fixed sequence. The system will use the data already available in the system or ask the user, and will end when the operations are finished. This is shown in Figure 7.3.

A real-time system will be running in parallel with a physical process and must be running as long as the physical process is running. The process may be running all the time, and the real-time system must then be a 24×7 system2. A real-time system executes its operations depending on events from the physical process and must react within a specific time to these events. Data will not already be available in the system; the real-time system must read the data from the physical process when needed. This is shown in Figure 7.4.

7.3 Embedded system

An embedded system is a special-purpose system in which the computer is completely encapsulated by the device it controls, a system dedicated to a specific purpose. An embedded system does not necessarily have any real-time requirements, but often the concept "embedded system" is used for "real-time system" and vice versa. The differences are:

2 24×7: 24 hours × 7 days: the system must be running all the time, and cannot be stopped or restarted!


Figure 7.4: The event based operations of a real time software application.

Figure 7.5: The main parts of a Central Processing Unit (CPU); the important parts for a real-time system are the registers and the PC.

1. An embedded system is often a system that should work without interaction with a user, and will often be a "black box".

2. A real-time system runs in parallel with a physical process, with requirements for simultaneousness, and the reaction to external events has to happen within a specific time.

Remark 1 A real-time system will very often be an embedded system, while an embedded system does not need to be a real-time system.

7.4 CPU and microcontrollers

The CPU, Central Processing Unit, is the main unit in a computer system. The CPU consists of a set of registers, a program counter (PC) holding the address of the next CPU instruction, an Arithmetic Logical Unit (ALU) for the operations, and logic for the control of the memory addresses. In real-time systems a lot of I/O is often needed, so a microcontroller can be used instead. The main contents of a CPU are shown in Figure 7.5. A microcontroller is a CPU with different types of I/O units integrated. The CPU or the microcontroller will normally be the most important resource in a real-time system.

7.5 Example

In Figure 7.6 a real-time system is used for monitoring and controlling the liquid level in a buffer tank. The buffer tank should never be empty nor full, and the liquid level is monitored using a high level switch at 19.5 liters and a low level switch at 0.5 liters. The output from the switches is low when not covered by liquid and high when covered by liquid, with a delay of 0.08 to 0.1 second. The pump is controlled by a


Figure 7.6: Buffer tank with low and high control of the level, a pump, and a real-time system controlling the pump based on the low and high level sensors.

simple ON/OFF signal, and the pumping rate is 1 liter per second. The delay for the ON/OFF control of the pump is maximum 0.2 second.

What will be the real-time requirements for this system? Will it be a real-time system?

The real-time requirement is to start or stop the pump based on the sensor signals without the buffer tank being overfilled or emptied. The pump capacity is 1 liter per second, giving 0.5 seconds to start or stop the pump from the low or high level switch. Let us use the high level switch, as this has the largest delay time. The real-time requirement for the system will be:

t_RT,req = t_available,MAX - t_pump,MAX - t_sensor,MAX = 0.5 - 0.2 - 0.1 = 0.2 s

meaning that the system will work if the RT system can deliver the pump signal within 0.2 second. If the system cannot react within 0.2 second, it will not be a real-time system.
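A small sketch of this timing budget, together with the corresponding ON/OFF logic, follows; it assumes the pump drains the tank, and the function name and the exact control rule are illustrative assumptions.

# Timing budget for the buffer tank example.
t_available_max = 0.5   # s, 0.5 l margin above the high level switch at 1 l/s
t_pump_max = 0.2        # s, worst-case pump ON/OFF delay
t_sensor_max = 0.1      # s, worst-case level switch delay
t_rt_req = t_available_max - t_pump_max - t_sensor_max
print(f"The RT system must deliver the pump signal within {t_rt_req:.1f} s")

def pump_command(high_switch, low_switch, pump_running):
    """Simple ON/OFF control, assuming the pump empties the tank."""
    if high_switch:              # level above 19.5 l: start the pump
        return True
    if not low_switch:           # level below 0.5 l: stop the pump
        return False
    return pump_running          # between the switches: keep the current state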


Chapter 8

Specifications

8.1 Descriptions

Definition 2 A real-time system is a system that reacts at the right time, in a predictable way, to an unexpected external event1.

Definition 3 A real-time system is a system where the calculations not only depend on a logically correct execution, but also on the time when the result is available2.

The requirements for a system to be a real-time system:

1. deadline; the real-time system must detect any changes of the states of the physical process within a specific time. The system is defined as failing if these deadlines are not kept,

2. simultaneousness; the system must be able to handle several changes in the physical process at the same time, and all of these changes must be detected within their deadlines. This requirement demands parallelism in the system; the solution may be to use multitasking and/or a distributed system,

3. resources; the real-time system has a limited set of resources like the CPU, memory, disk, I/O devices and synchronization variables in the software, so these resources must be shared. It is important that the requirement for simultaneousness is kept even with a limited set of resources.

A real-time system can be defined as a "hard" or "soft" system having the following requirements:

1. a "hard" real-time system;

(a) delay will NEVER be accepted,

(b) information is useless if given at a wrong time,

(c) the system will fail if the deadline is not kept,

(d) "shall not miss a deadline",

(e) Examples:

i. Anti-lock Braking System (ABS) in a car,
ii. control systems for fighter aircraft.

2. a "soft" real-time system;

(a) delay can be accepted, but the cost may be higher,

(b) lower performance can be accepted if delayed,

(c) "should not miss a deadline",

(d) examples:

i. IP telephony,
ii. film/video streaming from the internet.

1 Martin Timmerman, Belgium.
2 Hans Christian Lønstad, Data Respons ASA, Norway.


8.2 Properties

1. Event: a change in the physical process resulting in an event into the real-time system. An event is something that comes from outside the system or from a part of the system. An event can be a message sent to one or more of the tasks in the real-time system,

2. Task / Process / Thread: to be able to have simultaneousness in the real-time system, the problem should be divided into smaller parts depending on the deadlines for these parts. Each of these parts can then be designed as a software module like a task, process or thread,

3. Multitasking: needed to have simultaneousness in the real-time system, as all the tasks, processes, or threads must run in parallel,

4. Scheduler: the service in the real-time system responsible for the multitasking, using a set of rules to select the next task to run3,

5. Preemption: important events in the physical process may require a fast decision, faster than the scheduler service can give. A real-time system must therefore be able to be preemptive: to interrupt the running task and start a new task. Note that the deadline must still be held for the interrupted task,

6. Interrupt: an event in the real-time system used for temporarily stopping the running task and starting a new task,

7. Interrupt latency: the time from when an interrupt occurs until the new task is running,

8. Priority: the tasks will be assigned different priority levels when analysing and designing the system, depending on the importance of each task. The scheduler will always try to run the tasks with the highest priorities first,

9. Watchdog: a hardware device monitoring the whole system, which restarts the system if the software fails.

3Both Windows and Linux are multitasking operating systems containing a scheduler.


Chapter 9

System architecture

The architecture of a real-time system depends on the size of the physical process that shall be monitored and/or controlled, the real-time requirements of the physical process, and security. The size of the physical process defines the number of input signals and output signals (sensors and actuators) and the real-time requirements for these signals. Security means, among other things, the requirements for Mean Time To Failure (MTTF) and Mean Time To Repair (MTTR), and what happens if a deadline cannot be held.

The architecture can be a single system, a duplicated single system for redundancy, or a distributed system. Distributed systems can involve distribution of the microcontrollers (CPU cards), distribution of I/O units, and distribution of resources, among others. Networks and data buses are important elements in a distributed system, as these elements are the communication links between the distributed units. A real-time system is often a distributed system in one way or another, especially with distributed I/O. A single system and a distributed system are shown in Figure 9.1.

In a distributed system the real-time functions/services in the operating system are important, since tasks and processes cannot use common areas in memory for communication or synchronization.

9.1 Scheduling

The job for the real-time system is the total mission for the device and its associated hardware, consisting of multiple tasks (multitasking). A job is divided into a set of tasks. In a real-time system an application is divided into a set of independent modules, the tasks. The modules are divided in such a way that each module should fulfil its own time-critical events. There are two types of schedulers:

1. Long term scheduler (batch job scheduler, not useful for real-time systems),

2. Short term scheduler (CPU scheduler, useful for real-time systems).

The CPU is an important resource in a real-time system due to the requirements of deadlines and simultaneousness. The simultaneousness is solved by letting the different software contexts be active in short time slices.

Figure 9.1: A "single" system and a distributed system.


Figure 9.2: The principles of multitasking, macro view and CPU view.

Figure 9.2 shows the principle of multitasking, the macro view to the left and the CPU view to the right. The macro view shows that the contexts are running simultaneously, for instance over the last 10 minutes, while the CPU view shows the details, for instance for the last second.

The task, a software module, will be controlled by the scheduler. Without a scheduler only one task can run, called a single task system. A single task system is an application that runs in an endless loop. Any real-time part of the application must be solved by interrupt service routines (ISR). This solution is used in small systems, simple systems, or if the real-time behavior is not critical.

The CPU view in Figure 9.2 shows how the scheduler is working. Each real-time task runs for a short period of time before the next real-time task is started. The scheduler can be either non-preemptive or preemptive:

1. non-preemptive scheduler: a task will use the CPU until it releases the CPU to the next task.

(a) Every context performs a number of operations before it calls a function in the scheduler. Every context must then consist of a main function and a state machine, and this requires good knowledge when developing the software. The reason is that every context has the responsibility to run for only a short time and then call the scheduler function. See Figure 9.3 for this type of scheduler.

2. preemptive scheduler: the CPU can be taken away from the task during execution.

(a) The normal way of making the scheduler is to use a scheduler task controlled by an interrupt. This interrupt is controlled by a hardware signal from a timer, and the timer gives a hardware interrupt to activate the scheduler at fixed time intervals. The interrupt with the highest priority is used so the scheduler task can abort all other contexts. Both Windows and Linux use 20 ms, but the interval can be adjusted according to the maximum number of tasks that can run simultaneously. Figure 9.4 shows the principle of a scheduler controlled by an interrupt. At fixed time intervals the timer issues an interrupt to the CPU, the current context is aborted, and the scheduler context is started. The scheduler will save the context of the aborted task (CPU registers, PC and stack), check which task is to be started, and restore the context of this task. The running time of the scheduler should be as short as possible as this is overhead in the real-time system.

(b) An extension of the interrupt controlled scheduler is to use shorter time intervals only for checking if the running context should be aborted. Normally the running context will run just as long as in item 2a, but the system can react faster, for instance on external events. This solution gives a faster reaction to events, but also more overhead since the scheduler will be active more often.

Remember that the interrupt and/or the scheduler can abort a task at any time, at any instruction in the software task. This is important to keep in mind when analysing and designing the system. Any task can be aborted at any time, and any of the other tasks can be the next active task.


Figure 9.3: A non-preemptive scheduler; each task will finish its operations before the scheduler gets the control.

Figure 9.4: A scheduler controlled by interrupt.


Figure 9.5: The states and relationships of a task or process.

A dispatcher is a software module (often a part of the scheduler) that gives control of the CPU to the selected task by:

1. switching context (load the CPU registers, set the PC and the stack segment),

2. starting the task (jump to the code pointed to by the Program Counter (PC)).

9.2 States

Using a scheduler gives the tasks different states. The scheduler will decide which task will run next and must then know which of the tasks are ready to run. Remember that a task can also be waiting, for instance for an external event, and it is a waste of time to start a task that is just waiting. Therefore the context of each task has a state, being:

1. Offline; the task is not loaded into memory, no context exists for this task yet,

2. Waiting; the task is waiting for a resource. The scheduler will check the waiting condition every time it is active to see if the state can be changed from waiting to ready,

3. Ready; the task is ready to run,

4. Running; the task that is running; only one task can be running at a time (on a single core CPU).

The relation between the different states is shown in Figure 9.5. The scheduler has tables or lists containing information about every task in the system, including the state.

9.3 Strategies

When the scheduler is activated, the running context is aborted and saved. The scheduler will then check the waiting list (table/queue) to see if any of the waiting tasks can be moved to the ready list or table. Then the scheduler will check the ready list to decide which task runs next. The scheduler must use a set of criteria to select the next task for running. The goals when using such a set of criteria are:

1. CPU utilization: minimize overhead, by keeping the CPU as busy as possible,

2. Throughput: number of processes or tasks completed per unit time,

3. Turnaround time: from creation time to termination time. Turnaround time = start time + time waiting in the ready queue + time executing on the CPU + time doing I/O,

4. Response time: from creation time to �rst output,

5. Fairness: each process or task should have a fair share of the CPU.

Some scheduling strategies based on these criteria are:


Figure 9.6: The round robin scheduling algorithm.

1. cooperation; used when the scheduler is not controlled by an interrupt, but relies on cooperation between the tasks. Used for non-preemptive schedulers,

2. first-come, first-served (FCFS); each task runs until it blocks or terminates. Used for non-preemptive schedulers,

3. shortest job first (SJF); the task in the ready queue with the shortest CPU running time is run first. Used for non-preemptive schedulers,

4. round robin; using the sequence of the ready list (queue) to decide which task to start next. Used for preemptive schedulers (a preemptive version of FCFS); works if all the tasks have the same priority. This is shown in Figure 9.6,

5. priority; the task with the highest priority is started first. May give problems (starvation) for tasks with low priority, but this can be solved by raising the priority while waiting in the ready list (queue).

Priority inversion: if a task with a high priority must wait for a resource used by a task with a lower priority, the priority of the low priority task will be increased until the resource is released. This is the responsibility of the scheduler and is used for preemptive schedulers.

Other criteria can also be used, and the criteria can be important when selecting an RTOS. In some RTOS it is also possible to select the set of criteria to be used when building or even starting the system.
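
As a minimal illustration of the round robin strategy (not code from the course material), the C# sketch below keeps the ready tasks in a queue, runs the task at the head for one time slice, and puts it back at the tail; the task names and the RunNext method are purely hypothetical.

using System;
using System.Collections.Generic;

// Minimal round robin selection: take the task at the head of the ready
// queue, let it run for one time slice, and put it back at the tail.
class RoundRobinScheduler
{
    private readonly Queue<string> readyQueue = new Queue<string>();

    public void AddTask(string taskName) => readyQueue.Enqueue(taskName);

    // One scheduling decision: returns the name of the task given the CPU.
    public string RunNext()
    {
        string task = readyQueue.Dequeue();   // head of the ready queue
        Console.WriteLine("Running " + task + " for one time slice");
        readyQueue.Enqueue(task);             // back to the tail of the queue
        return task;
    }
}

class Demo
{
    static void Main()
    {
        var scheduler = new RoundRobinScheduler();
        scheduler.AddTask("Task#1");
        scheduler.AddTask("Task#2");
        scheduler.AddTask("Task#3");
        for (int slice = 0; slice < 6; slice++)
            scheduler.RunNext();              // Task#1, Task#2, Task#3, Task#1, ...
    }
}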


Chapter 10

Synchronization

Normally a task has to be synchronised with another task in some way, and a real-time system has to offer a set of services for synchronization like semaphores, events, interprocess communication and shared memory.

10.1 Semaphore

A semaphore is the simplest form of synchronization and has two basic functions. These functions are:

1. request; the scheduler will move the task to the wait queue if the semaphore is already "occupied" by another task. The function name can be wait()1,

2. release; releases the semaphore, and the scheduler will move the "blocked" tasks from the wait queue to the ready queue, possibly freeing a blocked thread2.

A semaphore can be a binary variable holding only "0" or "1", or an unsigned integer (char/short/integer) holding "0" or a positive number. The real-time system must then support a set of semaphore services; often only two operations are needed: one operation for increasing the value and one operation for decreasing the value. The operation for decreasing the value will only do so if the value is 1 or greater. If the value is 0 the task will wait until the value becomes 1 before it can continue. The names of these operations can be wait() and release(), but the names may vary from real-time system to real-time system. Release() will always increment the value, while wait() will decrement the value, but only if the value is 1 or higher. The wait() function will contain at least an operation for checking the value and an operation for decrementing the value if it is 1 or greater. The semaphore service will guarantee that the scheduler will not abort these operations, making them a kind of critical region.

The create() function makes a semaphore, stored in the operating system, that can be available for all tasks in the system. The semaphore services must be part of the real-time system so that every task has access to these services. One of the tasks must create the semaphore, and then this task and the other tasks can use the semaphore. Normally all resources will be protected by semaphores used for controlling the access to the resources. A task that is doing a wait() for a resource will be put in the wait queue until the resource is ready for usage.

The coding of the semaphore can be sketched as:

Semaphore sem1 = 1 ;

release()
{
    sem1 = 1 ;
}

wait()
{
    if (sem1 == 1)
    {
        sem1 = 0 ;
        return OK ;
    }
    else
    {
        // Move the task to the wait queue
        // Do NOT use a wait loop !!!!
    }
}

1 In C# (.NET): use WaitOne() for requesting a semaphore.
2 In C# (.NET): use Release() for releasing a semaphore.
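
A minimal C# sketch of the same idea, using the .NET Semaphore class whose WaitOne() and Release() methods are mentioned in the footnotes; the task names and the "work" inside the protected section are made up for illustration only.

using System;
using System.Threading;

class SemaphoreDemo
{
    // A semaphore with an initial value of 1: only one task at a time
    // may hold the (hypothetical) shared resource.
    static readonly Semaphore sem1 = new Semaphore(1, 1);

    static void UseResource(string taskName)
    {
        sem1.WaitOne();                      // request: wait() in the text above
        try
        {
            Console.WriteLine(taskName + " is using the resource");
            Thread.Sleep(100);               // pretend to work with the resource
        }
        finally
        {
            sem1.Release();                  // release: release() in the text above
        }
    }

    static void Main()
    {
        var t1 = new Thread(() => UseResource("Task#1"));
        var t2 = new Thread(() => UseResource("Task#2"));
        t1.Start(); t2.Start();
        t1.Join();  t2.Join();
    }
}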


Figure 10.1: The RTS system with the software process, software threads and the displays.


Example 4 A real-time system has 5 displays, located at 5 different locations, which shall all be updated from the same parameter in the physical process. The displays are updated from a text buffer of 8 characters showing the parameter as a text string. The system contains a process for reading the values from the physical process, calculating the parameter, and converting the parameter to the text string in the text buffer. The process will also start 5 threads for displaying the value of the text string on the 5 displays. See Figure 10.1.

Let us assume the current value of the text buffer is 0.9876 and the new calculated value from the process is 1.0123. The operation for converting the floating-point value (in binary form) to the text string requires some CPU time, updating one character at a time in the text buffer.

The "simultaneous" requirement of the system means that the process only has time to update the first character in the buffer before the first thread is started. The value in the text buffer will now be 1.9876, and the first thread will write this value on its display and end its execution. The process will then be started again and update the next character in the text buffer, now containing 1.0876. Then the next thread will be started and will display the current value of the text buffer on its display. The process will have time to update only one character in the text buffer between each display thread. The "simultaneous" requirement is handled by letting the process and the threads run for a short period of time each, as shown in Figure 10.2.

This kind of multitasking uses the scheduler in the real-time operating system to switch between the context of the process and the threads; the problem is the synchronization of the text buffer. The results on the displays will be as shown in Table 10.1, showing the values on the different displays and the correct value.

How will you solve this problem using synchronization? You can use semaphores or events; events will be the best solution for this type of synchronization.

10.2 Events

An event is used if several tasks are waiting for a semaphore and all of these tasks are going to do an operation where a semaphore is not needed.

One example can be the reading of a value that has been calculated or estimated by a task. The task writing the new value will use a semaphore to update the value, but after the value has been updated


Figure 10.2: The execution of the process and the threads "simultaneously".

Table 10.1: The displayed values on the different displays when the parameter value is updated from 0.9876 to 1.0123.

Display   Displayed value   Correct value
1         1.9876            1.0123
2         1.0876            1.0123
3         1.0176            1.0123
4         1.0126            1.0123
5         1.0123            1.0123

several tasks can read the value without the usage of a semaphore. All tasks wanting to read the value will use a wait() function and be put in the wait queue of the scheduler. When the writing task has changed the value and updated the synchronization, all waiting tasks will get an event and can read the new value. See Figure 10.3 where task #1 is updating the value using an event semaphore, and task #2 to task #n are only receiving an event when they can read the value. This means that the tasks are moved from the wait queue to the ready queue of the scheduler. Real-time systems supporting events normally have the operations set(), clear() and wait() for events.

Figure 10.3: Using synchronization events to read an updated value. Task #1 is updating the value, and task #2 to task #n are only reading the value.
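
A small C# sketch of this pattern, assuming .NET's ManualResetEventSlim as the event object (its Set()/Reset()/Wait() roughly correspond to the set()/clear()/wait() operations mentioned above); the shared value and task numbers are illustrative only.

using System;
using System.Threading;

class EventDemo
{
    static double sharedValue;                 // the calculated parameter
    static readonly ManualResetEventSlim valueReady = new ManualResetEventSlim(false);

    static void Writer()
    {
        sharedValue = 1.0123;    // update the value (protected update in a real system)
        valueReady.Set();        // set(): wake up all waiting readers
    }

    static void Reader(int id)
    {
        valueReady.Wait();       // wait(): blocked until the writer signals the event
        Console.WriteLine("Task #" + id + " read " + sharedValue);
    }

    static void Main()
    {
        for (int i = 2; i <= 5; i++)
        {
            int id = i;
            new Thread(() => Reader(id)).Start();
        }
        Thread.Sleep(100);       // let the readers reach Wait() first
        Writer();                // task #1 updates the value and signals the event
    }
}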


10.3 Interprocess communication (IPC)

Semaphores and events are primitive operations and are difficult to use if communication of data or messages is necessary between the tasks. It is however important to notice that semaphores and events are used by the more complex operations as well.

10.3.1 Pipes

A pipe is a FIFO3 buffer created in a common data area of the real-time system to be used as a read/write buffer for the tasks. A pipe can then be used as a simple communication channel between two tasks, as one task can write data to the buffer and the other task can read data from the buffer.

If a task is reading from an empty pipe, the task will be moved to the wait queue until the other task writes data into the buffer.
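
A C# sketch of the same idea, assuming a BlockingCollection<T> as the FIFO buffer (a reader that finds the buffer empty blocks, just like the pipe described above); the producer/consumer names and sizes are illustrative.

using System;
using System.Collections.Concurrent;
using System.Threading;

class PipeDemo
{
    // A bounded FIFO buffer shared between the two tasks.
    static readonly BlockingCollection<int> pipe = new BlockingCollection<int>(boundedCapacity: 8);

    static void Producer()
    {
        for (int i = 0; i < 5; i++)
        {
            pipe.Add(i);                      // write into the pipe (blocks if the pipe is full)
            Thread.Sleep(50);
        }
        pipe.CompleteAdding();                // signal that no more data will be written
    }

    static void Consumer()
    {
        foreach (int value in pipe.GetConsumingEnumerable())   // blocks while the pipe is empty
            Console.WriteLine("Read " + value + " from the pipe");
    }

    static void Main()
    {
        var writer = new Thread(Producer);
        var reader = new Thread(Consumer);
        writer.Start(); reader.Start();
        writer.Join();  reader.Join();
    }
}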

10.3.2 Message queue

Message queues are a set of pipes used as post boxes, all created in the common data area of the real-time system. These message queues can then be used for sending messages and data between the tasks in the system. Synchronization can be done by sending commands to other tasks and waiting for the answers. If a task tries to read from an empty message queue, the task will be moved to the wait queue of the scheduler. If the write queue of another task is full, the task is also put in the wait queue until the message queue can be written to.

Synchronization can also be done by a master task sending commands to the other tasks about what to do. The other tasks will perform the command and then return to wait for the next command, as sketched below.
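
A hedged C# sketch of the master/worker pattern just described, using two BlockingCollection<string> instances as the command and answer queues; the command strings and task roles are made up for illustration.

using System;
using System.Collections.Concurrent;
using System.Threading;

class MessageQueueDemo
{
    // One queue for commands from the master, one queue for answers from the worker.
    static readonly BlockingCollection<string> commands = new BlockingCollection<string>();
    static readonly BlockingCollection<string> answers  = new BlockingCollection<string>();

    static void Worker()
    {
        while (true)
        {
            string cmd = commands.Take();            // wait for the next command
            if (cmd == "STOP") break;
            answers.Add("done: " + cmd);             // perform the command and answer
        }
    }

    static void Main()
    {
        new Thread(Worker).Start();
        foreach (string cmd in new[] { "read sensor", "log value" })
        {
            commands.Add(cmd);                        // master sends a command
            Console.WriteLine(answers.Take());        // and waits for the answer
        }
        commands.Add("STOP");
    }
}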

10.3.3 Shared memory

Shared memory is a common data area where the tasks can read and write random locations in the buffer. With pipes and normal message queues a FIFO buffer is used, while with shared memory any location in the buffer can be read and written.

10.4 Communication Protocols

As time is an important parameter in an RTOS, knowledge about the maximum time for receiving or transmitting data is important to be able to calculate the deadlines for the functions in the RTOS. Protocols are used for receiving and transmitting data between two or more devices in a system, often distributed systems, and the protocols will therefore be important for the RTOS. The real-time requirement is normally higher at the control level and field level. Today, protocols that use the Ethernet standard and follow IEEE 802.3 are important in order to be able to use standard infrastructure building blocks.

erties are: a) direct cross tra�c between the nodes in the network, b) network topology. Direct crosstra�c means that no master is necesarry, the messages should be sent as broadcast mode messages andthe information will be avialable for all the nodes in the network at the same time. This will give amore e¤ective communication, less tra�c load in the network, and more capaciyt available on the master.Mixture of network topologies gives a system that easly can be upgraded and/or extended.

10.4.1 Token Ring

The token ring protocol uses a token which defines the owner of the communication media. The node holding the token will be the master of the network. The master node must send the token to the next node when finishing the network operation.

10.4.2 CSMA/CD

The Carrier Sense Multiple Access / Collision Detect (CSMA/CD) protocol defines that every node can be the master whenever the communication media is needed. If several nodes are trying to use the

3 FIFO: First In First Out.


Figure 10.4: The basic cycle of the POWERLINK communication (www.wikipedia.org 2010).

media at the same time, a collision will occur, and both nodes will stay off the media for a random time before retrying to use the media.

This protocol cannot be used in real-time systems as the protocol delay is not defined; it is not deterministic. Different industrial Ethernet protocols have therefore been developed, some based on hardware extensions and others on software extensions only. Hardware extensions mean that standard Ethernet devices for CSMA/CD cannot be used, as in Profinet IRT (Isochronous Real-Time). Other protocols like Ethernet POWERLINK use only software and are compatible with existing network devices like routers, gateways, etc. These protocols use a combination of polling and timeslices where only one node can use the communication media in specific timeslices.

Ethernet POWERLINK

Ethernet POWERLINK is a deterministic real-time protocol for standard Ethernet communication systems. It is an open protocol managed by the Ethernet POWERLINK Standardization Group (EPSG) (www.wikipedia.org 2010). It is based on standard Ethernet, but extends the protocol with a mixed polling and timeslicing mechanism. This gives:

• transfer of time-critical data in short isochronous cycles with configurable response time,

• time-synchronisation of all nodes in the network with high precision (µs),

• transmission of less time-critical data in a reserved asynchronous channel.

The communication media is controlled by a Managing Node (MN), and the overall cycle time depends on the amount of isochronous data, asynchronous data, and the number of nodes to be polled during each cycle. The basic cycle consists of the following phases (see also Figure 10.4):

1. Start Phase: the MN sends out a synchronization message to all nodes, called "Start of Cycle" (SoC),

2. Isochronous Phase: the MN calls each node to transfer time-critical data (PollReq and PollRes messages). Since all nodes are listening to all data during this phase, the communication system provides a producer-consumer relationship. Time slots are used for each addressed node,

3. Asynchronous Phase: the MN grants the usage of the communication media to one particular node for sending non real-time data (SoA and AsyncData messages). Standard IP-based protocols and addressing can be used during this phase.

The quality of the real-time behaviour depends on the precision of the basic cycle time. The duration of the isochronous and the asynchronous phases can be configured. POWERLINK extends the Data Link Layer with time slots for each node in the isochronous phase, so that only a single node has access to the communication media at a time. The number of slaves (Controlled Nodes, CN) can vary from cycle to cycle as not all the CNs need to be polled in every cycle. CNs with lower priority can share the same time slot, giving a multiplexing of the lower priority slaves.


Chapter 11

Resources

A system with requirements for simultaneousness must also have some requirements for access to the resources in the system. A resource can be a hardware or software device that two or more tasks must be able to share. A resource cannot be used by several tasks at the same time, so the resource has to be reserved by a task before the task can use it. The task must also release the resource when it has finished using the resource.

The reservation and releasing of resources is often difficult, especially to debug when it is not working.

Normal problems are:

1. Forget to reserve a resource before using it,

2. Forget to release the resource after using it.

A good piece of advice is to reserve only one resource at a time and release the resource before reserving another resource. Deadlock is a situation that can arise if several tasks are trying to reserve a set of resources.

11.1 Deadlock

Deadlock arises if task #1 has reserved resource A and needs resource B to finish its operation, while task #2 has reserved resource B and needs resource A to finish its operation. These tasks will then never be able to finish their operations.

// Task #1
resourceA.wait() ; // wait for resource A
resourceB.wait() ; // wait for resource B

// Task #2
resourceB.wait() ; // wait for resource B
resourceA.wait() ; // wait for resource A

Example 5 In an alarm system all important alarms shall be saved on both disk and paper. Several tasks are used for alarm checking, and these tasks are going to log the alarm on the disk (one resource) and write it on the printer (another resource). The scheduler will run task #1, which detects an alarm situation and reserves the disk. Then the scheduler will run task #2, which also detects an alarm situation; task #2 is written by another programmer and reserves the printer first. Task #1 is then activated again, writes the alarm message to the disk and tries to reserve the printer. As the printer is already reserved, task #1 will be put in the wait queue and task #2 will be started. Task #2 will write the message on the printer and try to reserve the disk for logging the alarm message. Task #2 will also be put in the wait queue because the disk resource is already reserved. And both tasks will be in the wait queue for ever ...

Remark 6 Also note that the watchdog task will not detect this type of error, and will not reset the system!


Table 11.1: The tasks of the alarm system using the wait() and release() functions.

Step   Task#1             Task#2
N      ..                 ..
N+1    wait(disk)         wait(printer)
N+2    log alarm          print alarm
N+3    wait(printer)      wait(disk)
N+4    print alarm        log alarm
N+5    release(printer)   release(disk)
N+6    release(disk)      release(printer)
N+7    ..                 ..

Figure 11.1: A scenario without deadlock is shown to the left, and a scenario with deadlock is shown to the right. The tasks are reserving the disk and printer for logging the alarm states.

Table 11.1 shows the usage of the wait() and release() functions for the example.

Two scenarios for the tasks in Table 11.1 are shown in Figure 11.1. The scenario to the left is without the deadlock situation, where each task is reserving the disk and printer resources for logging the alarm states. In this scenario the tasks do not request the disk and printer resources at exactly the same time, and a task releases the resources before the next task requests the same resource. The scenario to the right shows a deadlock situation, because both tasks are requesting the resources at the same time and will not release any resource before both resources are reserved. How to avoid the deadlock situation? Reserve and release only one resource at a time!

Deadlock can also arise in systems using messages for synchronization between the tasks. The system below has 2 tasks that should be synchronized by the x and y messages, shown with the code below. The receive() method will wait until a message is received before returning the control back to the task.

// Task #1
receive() ; // Wait for the next message
send(x) ;   // Send message x

// Task #2
receive() ; // Wait for the next message
send(y) ;   // Send message y

This example will give a deadlock because both tasks will wait for a new message. How can deadlock be avoided in this example1?

1 HINT: Change the sequence of the rx/tx functions in the tasks, so that the two tasks have different sequences.


Figure 11.2: Reservation of plane seats from di¤erent airports.

11.2 Critical region

It is often necessary to protect a sequence of commands, in some sort of high level programming language, to prevent the scheduler from aborting this sequence. Such a sequence is called a critical region and will never be aborted by the scheduler or another interrupt function; this must be a service of the real-time operating system.
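
On a general purpose OS there is no user-level service for disabling the scheduler, but the closely related idea of a mutual-exclusion region can be sketched in C# with the lock statement; this is an illustrative analogue only, and the seat counter and method name below are made up, not the reservation system of Example 7.

using System;
using System.Threading;

class CriticalRegionDemo
{
    static readonly object regionLock = new object();
    static int seatCount = 10;                      // hypothetical shared data

    // The check and the update must not be split by a task switch,
    // so both are placed inside the same protected region.
    static bool ReserveSeat()
    {
        lock (regionLock)
        {
            if (seatCount > 0)
            {
                seatCount--;                        // check and update as one unit
                return true;
            }
            return false;
        }
    }

    static void Main()
    {
        for (int i = 0; i < 3; i++)
            new Thread(() => Console.WriteLine("Reserved: " + ReserveSeat())).Start();
    }
}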

Example 7 An example of the need for a critical region is a reservation system for an airplane. Two passengers are checking in, one at Oslo/Gardermoen and one at Torp/Sandefjord, both at exactly the same time. Both passengers are going to the USA using the same plane from Schiphol/Amsterdam. See Figure 11.2 for the airports and the reservation system.

Both passengers want seat 9A, and without a critical region the sequence may be:

1. The system in Oslo/Gardermoen is calling the reservation system for checking of seat 9A,

2. The system in Torp/Sandefjord is calling the reservation system for checking of seat 9A,

3. The scheduler on the reservation system starts the task from Oslo/Gardermoen,

4. The reservation system checks that seat 9A is free,

5. The scheduler on the reservation system is switching to the task for Torp/Sandefjord,

6. The reservation system checks that seat 9A is free,

7. The scheduler on the reservation system is switching to the task for Oslo/Gardermoen,

8. The reservation system informs the system in Oslo/Gardermoen that seat 9A is taken,

9. The scheduler on the reservation system is switching to the task for Torp/Sandefjord,

10. The reservation system informs the system in Torp/Sandefjord that seat 9A is taken.

Using a critical region on the free/taken sequence, the reservation will now be:

1. The system in Oslo/Gardermoen is calling the reservation system for checking of seat 9A,

2. The system in Torp/Sandefjord is calling the reservation system for checking of seat 9A,

3. The scheduler on the reservation system starts the task from Oslo/Gardermoen,

4. The task for Oslo/Gardermoen is entering a critical region,

5. The reservation system checks that seat 9A is free,

6. The reservation system informs the system in Oslo/Gardermoen that seat 9A is taken,


7. The task for Oslo/Gardermoen is leaving the critical region,

8. The scheduler on the reservation system is switching to the task for Torp/Sandefjord,

9. The reservation system informs the system in Torp/Sandefjord that seat 9A is busy.

As the critical region will disable the scheduler and other interrupts, it must be as short as possible. Some questions:

1. Never ever wait or try to reserve a resource inside a critical region. Why?

2. Can you implement a critical region using one semaphore?


Chapter 12

Software modules

When analyzing and designing the real-time system, the system will be divided into tasks. Designing applications for a real-time system is always challenging. One way to decrease the complexity of these applications is to use a task-oriented design and divide the project into different modules (or tasks). Each module is then responsible for a specific sub part of the application. With such a system it is important to be able to specify that some modules (or tasks) are more important than others. The application can then be divided into several tasks, and each task can be developed as a software application or program, but the tasks need to intercommunicate with each other.

After compiling the software, an image of the system's memory will be made, and this image must be

transferred to the target's disk, EPROM1 or FLASH2 memory, depending on the type of real-time system. Figure 12.1 shows one way of developing software for a real-time system: the software is developed using a "standard" development system and must be uploaded to the real-time system for final testing. The software is uploaded to the real-time system and started using some type of boot loader3. The

software will be located in memory with a code area (read-only), a data area (both read-only and read-write), a working area4, and a stack area. The stack is located from the top of the memory and is used for temporary storage. The contents of the memory are shown in Figure 12.2, loaded from a storage device to the left and loaded from an EPROM/FLASH system to the right.

The running software is called a context, consisting of:

1. location of the application code, application data and stack in the address range,

2. the contents of all the CPU registers,

3. the content of the program counter (PC), pointing to the next program instruction to be performed.

Every program task is developed as a standalone application that will be implemented in the real-time system as a task or a process.

12.1 Instruction time

An instruction cycle is the time period during which a computer processes a machine language instruction from its memory, or the sequence of actions that the central processing unit (CPU) performs to execute

1 EPROM: Erasable Programmable Read Only Memory; can also be ROM, PROM and EEPROM (Electrically Erasable PROM).
2 FLASH: similar to EEPROM, but erasing can only be done in blocks or for the entire chip.
3 Boot loader: a software device copying the software from the storage device into the working memory of the system.
4 The working area is often called the heap.

Figure 12.1: Developing software for real-time systems.


Figure 12.2: The context of a running software task.

each machine code instruction in a program. The name fetch-and-execute cycle is commonly used for the instruction cycle. The instruction cycle will vary from instruction to instruction, and a CPU normally has a large set of instructions. The operations in an instruction cycle are:

1. Fetch the instruction; read the contents of the memory location given by the program counter (PC),

2. Decode the instruction; the instruction register will hold the instruction,

3. Execute the instruction; the control unit is instructed by the instruction register how this instruction shall be executed, often using the arithmetic logic unit (ALU),

4. Store the results; any results will be stored in memory.

In some systems it can be necessary to disassemble the application code to get a list of the CPU instructions. The sequence of the CPU instructions can be used to calculate the exact execution time of the application or a part of the application.

An example of an instruction set for a CPU is shown in Figure 12.3. This figure shows a few instructions for the 80386 CPU, giving the instruction codes and the number of clock cycles.
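
As a hypothetical worked example (the cycle count and clock frequency below are chosen for illustration, not taken from Figure 12.3): an instruction that takes 2 clock cycles on a 16 MHz CPU needs

t = cycles / f_clk = 2 / (16 x 10^6 Hz) = 125 ns,

so a sequence of 1000 such instructions takes about 125 µs. Summing the cycle counts of the disassembled instructions in this way gives the execution time of that code path.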

12.2 Software application

A running application is a context and can be either a process, a task or a thread.

12.2.1 Process

Processes are used for program tasks in bigger real-time systems and require more memory and hardware devices. The most important hardware unit is the MMU5, a unit converting the addresses from a virtual (logical) memory area to the physical memory area of the system. Normally real-time systems using 32 and 64 bit microprocessors (CPUs) are using an MMU, and the software applications are implemented as

5 MMU: Memory Management Unit.


Figure 12.3: The first page of the instruction set for the 80386 CPU, the "father" of the CPUs used in Windows computers today (Int 1988).


Figure 12.4: The memory layout and the PID list in a real-time operating system.

processes. The reason for the MMU is a more effective utilization of the memory and to ensure that no process can access memory areas outside its own memory area. The advantage is that when a software application contains an error, the application will not be able to influence other applications in the system. The disadvantage is that a software application will not be able to access data from other applications. How shall these processes then be able to exchange data or be synchronized? The solution is to use operating system services for data exchange and synchronization.

The operating system must maintain the MMU table and the process ID (PID) lists, shown in Figure 12.4. The operating system also contains services for synchronization between the processes, and in modern operating systems threads can also be used.

12.2.2 Thread

A thread, also called a "lightweight" process, must be started by a process; it will have a new code memory area (with its own program counter), but will have access to the same data memory area as the process. A process can start a number of threads; all of them will have separate code memory areas but all will share the same data memory area.

Figure 12.5 shows a process containing a code memory area and a data memory area. The process has

also started a number of threads, each having a separate code memory area but all of them having a common data memory area.

When the process and the threads share the same data area, synchronization becomes important. Normally a thread will be started to perform a small part of the program task, and often this small part of the program task will take some time.

Threads can be very useful for I/O functions, because:

1. error checks can be done in the thread and will not influence the execution of the process,

2. scaling is easy, just start a new thread if more I/O is necessary,

3. when waiting for an event, only the thread will wait for the event; the rest of the application will run as normal.

The drawback when using threads is synchronization; in a real-time system the process and the threads must be executed "simultaneously", and Example 4 shows the resulting synchronization problem.

12.2.3 Task

Smaller real-time systems using 4, 8, or 16 bit microprocessors (CPUs) do not have an MMU, and there will not be any mapping between virtual (logical) and physical memory.


Figure 12.5: A single process with a number of threads.

Without an MMU it is not possible to have boundary checks between the program tasks, and these program tasks are called tasks. A task can always access the whole memory area, as shown in Figure 12.6, showing the task ID list and the memory where each task can access the whole memory area.

The advantages of using tasks (and no MMU):

1. faster real-time operating system as no conversion between logical and physical address is necessary,

2. all tasks can access the whole memory area, making it easy and fast to exchange data between the tasks,

3. easy to synchronize the tasks.

The disadvantages of using tasks (and no MMU):

1. Difficult to trace a software problem, as a task can destroy data for the other tasks (which one was first?),

2. All access to common data has to be synchronized.

From time to time the terms process and task are mixed in the literature; the basis is that a process is a software application on a mainframe computer while a task is a software application on a microcomputer. A common description of a job in a real-time system is a task, so this can be the reason for this mixture. It is however important to understand the technical differences between a process and a task.

A process:

1. is using a logical or virtual memory area (needs an MMU),

2. must use operating system services for communication and synchronization with other processes,

3. will not destroy the data of other processes in the system if it contains a software error.

A task:

1. is using a physical memory area,

2. can communicate directly with other tasks using memory buffers, but should of course use OS functions for synchronization,

3. can destroy the data of other tasks in the system if an error occurs.


Figure 12.6: A real-time system with 4 tasks with common data area and no check of memory addresses.

Figure 12.7: A LabVIEW application with two independent loops running on two different cores of a multicore processor (www.ni.com: jan-10).

12.3 Core and Multicore

A CPU can contain one core or several cores. A core is a single CPU with the registers, program counter (PC), memory and I/O controller, and the arithmetic logic unit. The core is the CPU resource, and multicore means that the system has several CPU resources. The cores will use the same memory and I/O devices and address range, meaning that the tasks/processes can share resources the same way as in a single core system. A multicore system will have several run queues, meaning that the scheduler has to decide the next running task for all the cores. Figure 12.7 shows how two independent loops can utilize the two cores in a multicore processor, and the sketch below shows the same idea in code.
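
A hedged C# sketch of the same idea: two independent loops are started as separate tasks, and on a multicore CPU the operating system scheduler is free to place them on different cores (the loop bodies are placeholders).

using System;
using System.Threading.Tasks;

class MulticoreDemo
{
    static void Loop(string name)
    {
        // An independent loop; with more than one core the two loops
        // can truly run in parallel instead of being time-sliced.
        for (int i = 0; i < 5; i++)
            Console.WriteLine(name + " iteration " + i);
    }

    static void Main()
    {
        Task loopA = Task.Run(() => Loop("Loop A"));
        Task loopB = Task.Run(() => Loop("Loop B"));
        Task.WaitAll(loopA, loopB);      // wait for both loops to finish
    }
}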

12.4 Input monitoring

A real-time system must be preemptive, meaning that a software context can be aborted by another software context. This is solved by interrupts, either hardware interrupts or software interrupts. Hardware interrupts are also very useful when monitoring an event in the physical process. This monitoring can be done in two different ways:

1. interrupt,


Figure 12.8: Interrupt control in the upper part and polling in the lower part. Notice the waste of CPU time in the lower part.

Figure 12.9: Using an interrupt task for reading the keyboard characters.

2. polling.

Using an interrupt, the real-time system utilizes the preemptive support and switches the software context to read the event, and then switches back to the interrupted context. See the top section of Figure 12.8. Using polling, the real-time system must have a process/task checking the event all the time, wasting a lot of unnecessary CPU power. See the bottom section of Figure 12.8.

Interrupts should be used for digital signals (state changes), while polling should be used for analog values. The analog values vary all the time, so a new value should be read at every polling time. The polling time can be the sampling rate of the monitoring system.
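
A minimal C# sketch of a polling task for an analog value, assuming a hypothetical ReadAnalogInput() function and using sleep() so the task leaves the CPU between samples (the 100 ms sampling interval is arbitrary).

using System;
using System.Threading;

class PollingTask
{
    static readonly Random fakeSensor = new Random();

    // Hypothetical stand-in for reading an analog input channel.
    static double ReadAnalogInput() => fakeSensor.NextDouble() * 10.0;

    static void Main()
    {
        const int samplingIntervalMs = 100;        // polling time = sampling rate
        for (int sample = 0; sample < 10; sample++)
        {
            double value = ReadAnalogInput();      // read a new value every polling time
            Console.WriteLine("Sample " + sample + ": " + value.ToString("F3"));
            Thread.Sleep(samplingIntervalMs);      // give the CPU back until the next sample
        }
    }
}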

12.4.1 Example

A keyboard is a good example of a device using interrupts. When a key is pressed, the current software context is interrupted, the software context for the keyboard is started, the pressed key is read and saved in memory, and the interrupted software context is resumed. See Figure 12.9.

Using polling, the keyboard must be polled fast enough not to lose any keys, meaning that the keyboard has to be polled at least every 500 ms all the time.

12.4.2 Priority

A microprocessor often has a lot of interrupts, both hardware and software. These interrupts have different priorities, and an interrupt task with lower priority will be aborted by an interrupt task with higher priority. The right priorities for the interrupts must then be assigned during the analysis and design of the real-time system.


Figure 12.10: Priority inversion where a low priority task TaskL has exclusive access to a resource and prevents the high priority task TaskH from running (www.iar.com: Nov-08).

Figure 12.11: Priority inversion with inheritance, where a low priority task TaskL has exclusive access to a resource but inherits the high priority from TaskH (www.iar.com: Nov-08).

Priority inversion

A system with multiple tasks running in parallel may in some cases let a low priority task cause a high priority task to be halted. In systems where different tasks need exclusive access to a resource, priority inversion can occur: a task with a high priority is in effect given a low priority for a specific period of time. A low priority task gets to run and starts to use a resource with exclusive access. If a higher priority task needs the same resource, it halts until the low priority task has released the resource. This effectively gives the high priority task a priority level beneath the low priority task.

This can be a problem for the task with the high priority because its time requirements are "harder" than those of the tasks with lower priority. This is shown in Figure 12.10, where the high priority task, TaskH, is given a priority lower than the low priority task, TaskL, in the period marked with priority inversion.

There are different solutions to this problem; the most common ones are:

1. Priority inheritance; let the low priority task inherit the priority from the tasks waiting for the resource,

2. Disabling task switching to protect critical sections; turn off the switching when using the resources.

Priority inheritance will let a low priority task inherit the highest priority from any task waiting for the resource. This is shown in Figure 12.11, where the low priority task TaskL inherits the priority from the high priority task TaskH. The low priority task TaskL will finish using the resource and let the high priority task TaskH finish before the medium priority task TaskM is started.

12.5 Watchdog

The watchdog is a monitoring function of the RTOS system: a hardware device, often a counter, that will reset the CPU after a specific amount of time. A task with the lowest priority is used for resetting the hardware device, and the system has to be designed so that the watchdog task will not reset the system when everything is


Figure 12.12: The watchdog is a hardware device, a counter, using the hardware reset signal to reset the CPU.

Figure 12.13: The watchdog counts the number of pulses up to a limit at which the output will be activated. The reset signal will force the counter to start counting from 0 again.

working OK. If a software error (or system error) occurs, the watchdog will not be reset and the system will be restarted. It is however important to understand that the watchdog will only catch software faults like endless loops or crashes, but not problems like deadlock. Why?

Figure 12.12 shows the principle of the watchdog. The watchdog is a hardware device, a counter,

connected to an oscillator or a crystal. The counter will count a number of pulses, and the output will go high when this number of pulses is reached. The counter output is connected to the hardware reset input of the CPU, resetting the CPU and thereby restarting the whole real-time system.

The reset task in the system has the responsibility of resetting the counter, making it start from 0, so that it will never reset the CPU. Resetting the CPU is an error condition and should normally not happen. Figure 12.13 shows the function of the watchdog. The watchdog will count the number of pulses, and the output will be activated when a specific number of pulses has been counted. The reset task should reset the watchdog counter before this specific number of pulses is reached, forcing the counter to start from 0 again.
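
A hedged C# sketch of the reset ("kick") task: KickWatchdog() is a hypothetical stand-in for writing to the watchdog hardware, and the timeout value is assumed; the kick interval must be shorter than the watchdog timeout.

using System;
using System.Threading;

class WatchdogKicker
{
    // Hypothetical hardware access: restart the watchdog counter from 0.
    static void KickWatchdog() => Console.WriteLine("watchdog counter reset");

    static void Main()
    {
        const int watchdogTimeoutMs = 1000;      // assumed hardware timeout
        const int kickIntervalMs = watchdogTimeoutMs / 2;

        // Lowest-priority task: if higher-priority tasks hang in an endless
        // loop, this task never runs, the counter is not reset, and the
        // watchdog restarts the system.
        while (true)
        {
            KickWatchdog();
            Thread.Sleep(kickIntervalMs);
        }
    }
}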


Chapter 13

Design

A real-time system has a set of time requirements, and these time requirements are the key focus for the design. The functional requirements, often controlled by the time requirements, require a distribution of the functions. This distribution is often solved by dividing the system into sub tasks, and in a real-time system the distribution is structured so that one time requirement, or time requirements belonging together, end up in the same sub task.

The sub tasks in a real-time system will often run concurrently, putting stronger restraints on the design (Ripps 1989). The rules for real-time tasking are (Ripps 1989):

1. Use structured design (makes the application easier to design, implement, test, and maintain);

(a) The main activities should each be assigned to at least one separate task,

(b) Closely related functions should be kept in the same task,

(c) Functions that deal with the same inputs, data, and/or output should be kept in the same task,

(d) Activity with I/O devices should be kept in a separate task; this gives better error handling and scaling.

2. Try to keep the CPU(s) always busy with productive work;

(a) Use the operating system's wait function instead of a wait loop.

3. Functions with different attributes must be assigned to different tasks;

(a) Functions with different requirements should be assigned to separate tasks,

(b) Functions with significantly different levels of urgency must be assigned to separate tasks,

(c) Functions that proceed at different time scales should be assigned to separate tasks.


Chapter 14

Programming

14.1 Introduction

When programming a real-time system it is important to focus on the deadlines, the simultaneousness, and the usage of resources. The software system has to be as fast as possible, never waste any CPU time, and minimize the usage of the resources.

If the program is going to wait for an event, never ever use a waiting loop; the solution is to tell the scheduler that the task has finished and will wait for a period of time. The normal operation is sleep(time), where the waiting time is time in milliseconds. Whenever a task performs this sleep() function, the scheduler is told to place the task in the waiting queue and start the next task in the ready queue. Every time the scheduler is activated, it will check the tasks in the waiting queue, and the remaining time will be decremented until it reaches 0. At 0 the task will be moved to the ready queue again. A common trick is to use sleep(0), which will start the next task in the ready queue and place the aborted task at the back of the ready queue.

The program flow in a real-time application will depend on external events, as distinct from office applications using a sequential program flow controlled by the user and the data. A real-time application responds to external events, for instance from sensors, while office applications respond to user interactions and available data.

Some advice when designing real-time systems:

1. Always think synchronization when using resources,

2. Always think synchronization when changing global data,

3. Always use a sleep() function if the application has to wait (or can wait),

4. State machines are useful in real-time systems.

14.2 Memory allocation

Memory allocation is a service of the operating system and is also treated as a resource in a real-time system. The normal pattern is to allocate memory when needed and release the memory after usage. This should be avoided in a real-time system because the memory will become fragmented and the software will eventually not be able to allocate more memory. This is especially important for 24x7 systems, as the memory allocation service (table) will never be reset. The solution for the real-time system can be to allocate all the memory at startup time, and just reuse this memory while running, as sketched below.

Memory allocation is used even more in an object oriented system, as every new() method allocates memory for the class. Some programming languages like C++ use a delete() method to release memory. In programming languages like C# and Java a garbage collector runs in the background, releasing memory for objects with no references. The garbage collector will use CPU time, causing problems with the deadlines, and the memory will be fragmented, giving problems for the allocation function.
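
A minimal C# sketch of the allocate-at-startup idea: a fixed pool of buffers is created once, and tasks borrow and return buffers instead of allocating new ones while running (the buffer size and pool size are arbitrary).

using System;
using System.Collections.Concurrent;

class BufferPool
{
    private readonly ConcurrentBag<byte[]> pool = new ConcurrentBag<byte[]>();

    // Allocate everything once, at startup time.
    public BufferPool(int bufferCount, int bufferSize)
    {
        for (int i = 0; i < bufferCount; i++)
            pool.Add(new byte[bufferSize]);
    }

    // Borrow a pre-allocated buffer; no allocation happens while running.
    public byte[] Rent() => pool.TryTake(out byte[] buffer)
        ? buffer
        : throw new InvalidOperationException("pool exhausted");

    public void Return(byte[] buffer) => pool.Add(buffer);
}

class Demo
{
    static void Main()
    {
        var pool = new BufferPool(bufferCount: 8, bufferSize: 256);
        byte[] buf = pool.Rent();     // use the buffer for one sample/message
        buf[0] = 42;
        pool.Return(buf);             // reuse instead of letting the GC collect it
    }
}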


14.3 Posix.4

Portable Operating System Interface (Posix) is a standard from IEEE, and the purpose of Posix is to have portable source code for the applications. The Posix standard defines a set of system functions that can be used to make the source code more portable, and Posix.4 defines a set of real-time functions. Using a Posix.4 compatible real-time system can be useful when developing real-time systems.

14.4 C# example

An application consisting of the source code listed below shows how C# can be used for threads, waiting and simultaneousness. The main program starts 3 threads, and all 4 tasks (the process and the threads) run simultaneously, each waiting for a while and printing a message on the screen in each loop. The application consists of 2 classes, a thread class (ThreadClass) and the main program (ThreadEks).

Both the main program and the threads use the sleep() function to let the scheduler abort the context, move the context to the waiting queue and start the next context in the run queue. The application uses different waiting times for the main program and the threads, and every context writes its status on the screen. The output of the application is shown in Figure 14.1, showing the state of the main program and the threads.

The application starts in the ThreadEks class, in the Main function. The function of the main application is:

1. Write the text string " Start of main program " on the screen,

2. Create the first thread class, set the name Thread#1 and the wait time to 200ms;

3. Create the second thread class, set the name Thread#2 and the wait time to 300ms;

4. Create the third thread class, set the name Thread#3 and the wait time to 400ms;

5. Then entering a loop counting to 30. In each loop, print a dot on the screen and wait 100ms;

6. End the main program by writing " End of main program ".

What will be the duration of the main program?

Duration=__________

The functions of the thread applications will be:

1. Set the name and wait time of the thread,

2. Create the threading function of the thread class (the run function),

3. Start the threading function (the run function),

4. Write the text string " Starting Thread#n ",

5. Loop 5 times, waiting the wait time for the thread and then writing the thread name and the loop counter on the screen,

6. End the thread by writing " Ending Thread#n ".

What will be the duration of each thread?

Thread#1=________; Thread#2=__________; Thread#3=___________


using System;
using System.Threading;

// Example in using Sleep() and threads in C#
namespace ThreadEks
{
    /// <summary>
    /// Thread class
    /// </summary>
    class ThreadClass
    {
        int loop_cnt;
        int loop_delay;
        Thread cThread;

        public ThreadClass(string name, int delay)
        {
            loop_cnt = 0;
            loop_delay = delay;
            cThread = new Thread(new ThreadStart(this.run));
            cThread.Name = name;
            cThread.Start();
        }

        // The main function in the ThreadClass
        void run()
        {
            Console.WriteLine(" Starting " + cThread.Name);
            do
            {
                loop_cnt++;
                Thread.Sleep(loop_delay);
                Console.WriteLine(" " + cThread.Name + ": Loop=" + loop_cnt);
            } while (loop_cnt < 5);
            // Ending of the thread
            Console.WriteLine(" Ending " + cThread.Name);
        }
    }

    // The application
    class ThreadEks
    {
        /// <summary>
        /// Start of the main program
        /// </summary>
        static void Main(string[] args)
        {
            Console.WriteLine(" Start of main program ");
            // Making 3 threads ..
            ThreadClass ct1 = new ThreadClass("Thread#1", 200);
            ThreadClass ct2 = new ThreadClass("Thread#2", 300);
            ThreadClass ct3 = new ThreadClass("Thread#3", 400);
            // Wait while the threads are running ...
            for (int cnt = 0; cnt < 30; cnt++)
            {
                Console.Write(".");
                Thread.Sleep(100);
            }
            // End of main program
            Console.WriteLine(" End of main program ");
        }
    }
}
///////////////////////////// EOC ////////////////////////////////////


Figure 14.1: The output from the C# application consisting of the main program and 3 threads.

Figure 14.2: The execution of the main process and the 3 threads.

Console.WriteLine(" End of main program ") ;}

}}///////////////////////////// EOC ////////////////////////////////////

The timing of the main process and the 3 threads from Figure 14.1 is shown in Figure 14.2.


Chapter 15

Operating systems

Most of you know Windows as an operating system, and some of you also know Linux. These operating systems are designed as General Purpose Operating Systems (GPOS), meaning that they shall be general operating systems, but they are not very suitable as a Real-Time Operating System (RTOS). Today's versions of Windows and Linux are probably more Network Operating Systems (NOS) than GPOS.

Time is an important parameter in an RTOS, and the system needs to fulfill the requirements of the deadlines. This means that every function in the OS must have an indication of the maximum time used before the control is given back to the RTOS application. Without this information it is not possible to use the OS, or its functions, in an RTOS. An RTOS is sometimes called a real-time multitasking kernel. An RTOS is a software component that ensures an efficient processing of time-critical events by dividing the application into multiple independent modules called tasks.

developer is responsible for protecting memory from mutual access by using speci�c RTOS tools likesemaphores, and Direct-Message Passing RTOS where data is encapsulated in messages used for bothinter-process communication and synchronization.A lot of operating systems are designed as RTOS, both in complexity and price. The most popular

RTOS1 is shown in Figure 15.1, showing VxWorks, XP Embedded, Windows CE, DSP/BIOS, Red HatLinux and QNX on the top.A lot of di¤erent factors is important when selecting a RTOS, especially factors like price of the

development system, price for each license and the development tools. Technical factors will be thenumber of interrupt levels, the number of priority levels and the strategy of the scheduler.

15.1 RTOS requirement

The requirements for an RTOS are:

1. multitasking,

2. preemptible,

3. the number of priority levels should be at least 16, preferably 64 or 256. The number of levels depends on the number of tasks in the system,

4. predictable synchronization mechanisms,

5. interrupt latency (depending on hardware),

6. several interrupt levels (depending on hardware).

Some of the important specifications for some OS systems are listed in Table 15.1.

1 From July 2006.


Figure 15.1: The popular RTOS in July 2006, from EmbeddedSystems Europa June/July 2006, page 18.

Table 15.1: Some of the important OS specifications for real-time systems.

Requirements      VxWorks   XP Emb.     Win CE   Linux   QNX
Multitasking      Yes       Yes         Yes      Yes     Yes
Preemptible       Yes       Yes         Yes      Yes     Yes
Priority levels   256       16 or 32?   256      32      32


Figure 15.2: A simple OS with a set of drivers and basic OS functions. The applications will use the drivers and the basic OS functions.

15.2 Driver

The driver is the software module used for communication between the RTOS and the hardware. Any operating system uses a Hardware Abstraction Layer (HAL) for communication with a virtual driver. The driver will be a software module for communication between the virtual driver and the hardware. The modules form a layered structure like:

Operating system services      OS software
Hardware Abstraction Layer     OS software
Driver                         Software
Hardware

The driver can be a problem for Windows and Linux systems as the driver often is developed for a GPOS and not an RTOS. In an RTOS the response time is important, and the driver must be developed for a fast response from the hardware to the RTOS. The driver must also be robust (reliable), as a real-time system very often will be a 24x7 system.

Figure 15.2 shows a simple OS or RTOS with drivers and a set of basic OS functions. The application will use only the drivers and the OS functions to communicate with the hardware; the application should NOT contain any hardware-specific coding.

15.3 Windows / Linux

Windows and Linux are not RTOS, even if the Linux kernel 2.6 and later has much better real-time performance than before. One solution that can be used if you still want to use Windows or Linux is the solution shown in Figure 15.3. The solution is to use a real-time kernel (a simple real-time operating system) where the standard operating system, like Windows or Linux, is running as the process with the lowest priority. The real-time kernel will be the scheduler for the real-time processes and the operating system process. The scheduler in the standard operating system will be the scheduler for the standard operating system processes. This system will then have both real-time support and the standard operating system support, and the standard operating system can even be installed from the standard CD (or DVD).

Another solution (www.ardence.com) is a real-time solution for Windows where the real-time system is running in parallel with Windows, see Figure 15.4. The communication between Windows and the real-time subsystem is solved using IPC, a driver and a DLL2. This way real-time processes and Windows processes can communicate and be synchronised.

Figure 15.5 shows a solution for Linux or Symbian OS where the OS is part of the User Level Code; only an RT Executive is part of the Privileged Level Code.

Why do we want to use Windows or Linux?

1. Developer knowledge,

2. Development tools,

2 DLL: Dynamic Link Library.


Figure 15.3: One solution to have real-time support together with a standard OS like Windows and Linux.

Figure 15.4: A real-time solution for Windows (www.ardence.com).

Figure 15.5: A real-time solution for Linux or Symbian with a high degree of system security, as most of the code is in User Level. This solution is for different types of phones.


Figure 15.6: A "DOS" terminal showing some commands and the results from these commands. The terminal has only a text-based user interface; it is not a graphical user interface.

3. Debugging.

15.3.1 Windows history

Windows started with Microsoft DOS (MS-DOS) in 1982, the standard operating system for the first PCs. MS-DOS was a single-user, single-task operating system with a "DOS" terminal for typing the commands. Figure 15.6 shows such a text-based user interface; it is also known as a "DOS" window. The history of the Windows operating system is:

Windows   Major changes
"DOS"     Text-based user interface and only one "DOS" window.
95        Greatly simplified interface (graphical); much more friendly to the average user.
98        Improved multimedia capabilities and built-in Internet functionality.
2000      Industrial-strength Windows NT code base, but in a much more polished package.
XP        Unified the Win9x and WinNT/2K code bases; allowed businesses to standardize on one OS.
Vista     Added too much security, preventing a lot of WinXP applications from running.
7         Merged the good solutions from XP and Vista.

15.3.2 Windows CE or Linux

When selecting an RTOS, should you select Windows CE, Linux, or a specific RTOS?

1. Windows XP Embedded is an embedded OS, not an RTOS,

2. License fee for WCE; Linux is "free",

3. WCE has a subset of the Windows API,

4. WCE from v3.0 (2000) is an RTOS, from v4.2 it is a stable RTOS, and from v6.0 (2006) Visual Studio can be used as a tool,

5. The startup cost is higher for Linux than Windows CE,

6. Windows CE has only one distribution, Linux has several,


7. Windows CE has a footprint from 300 KB, but between 8-12 MB is usual with display support etc. With Explorer the footprint will be about 20 MB. A normal Linux footprint is about 1-4 MB, minimum from 400 KB. A normal Linux system is about 8-16 MB,

8. The documentation of Linux is much better than for Windows CE.

15.3.3 Windows XP Embedded

Windows XP Embedded is an OS built of modules and components.

15.4 QNX

QNX Neutrino is another popular RTOS that is both time tested and field proven3. The QNX Neutrino RTOS is said to be a microkernel operating system, meaning that every driver, application, protocol stack, and file system runs outside the kernel, in the safety of memory-protected user space. Any component can fail, and will be automatically restarted without affecting other components or the kernel.
Technology overview (www.qnx.com, Dec. 2008):

1. High availability solution;

(a) Process watchdog for application monitoring and recovery, self-healing inter-process communications, and restartable device drivers and operating system services,

(b) Virtually any component, even a low-level driver, can fail without damaging the kernel or other components,

(c) Process model ensures that if a component fails, QNX Neutrino can cleanly terminate it and reclaim any resources it was using; no need to reboot.

2. Essential networking technologies including IPv4, IPv6, IPSec, FTP, HTTP, SSH, and Telnet,

3. Photon microGUI, a full-featured embedded graphical user interface,

4. Integrated file systems for flash devices and rotating media,

5. System visibility and debugging support,

6. Supported by QNX Momentics, the Eclipse based integrated development environment,

7. Full memory protection where the OS can immediately identify the component responsible, at the exact instruction,

8. Instrumented kernel and visualization tools that trace system events including interrupts, thread state changes, synchronization, CPU utilization and more,

9. Scalability; Scale large or small using only the desired components,

10. Take advantage of built-in multiprocessing capabilities to harness the power of multi-core processors,

11. Simplify the design of fault-tolerant clusters with built-in transparent distributed processing,

12. Portability;

(a) Maximize application portability with extensive support for the POSIX standard, which allows quick migration from Linux, Unix, and other open source programs,

(b) Target the best hardware platform for an embedded system and get up and running quickly with runtime support and BSPs for popular chipsets, including MIPS, PowerPC, SH-4, ARM, StrongARM, XScale, and x86,

13. Field-tested binaries (drivers, applications, custom OS services, and so on) can be reused across entire product lines.

15.5 VxWorks

3 QNX has delivered RTOS since the beginning of the 1980s; see www.qnx.com for more information.


Chapter 16

RT system

16.1 Benefits of any RTOS

The benefits of using an RTOS (Schultz 1999):

1. It is easier to get all the details right when breaking a set of jobs into tasks,

2. Multitasking can make it easy to respond to real-time demands,

3. Intertask communication provides a solid method of controlling execution order and timing,

4. Tasks can be independent,

5. Tasks will be small and easier to manage,

6. Commercial or build-your-own?

(a) commercial RTOS will be fully debugged and tested,

(b) commercial RTOS will have a license fee,

(c) commercial RTOS will be more general (efficiency and memory size),

16.2 Cost of RTOS

The costs of using an RTOS (Schultz 1999):

1. The hardware must have a specific timer and interrupt structure,

2. Multitasking costs time,

3. Time to learn programming the system.

16.3 Contents of an RTOS

1. Scheduler

2. Semaphore


Part IV

DAQ systems


Chapter 17

Sensor overview

A sensor device is often defined as a device that receives a signal or stimulus and responds with an electrical signal (Kester 1999), (Fraden 2004). A sensor is a device that senses either an absolute value or a change in a physical quantity. The concepts sensor and transducer are often mixed, but a transducer is defined as a converter of one type of energy into another type of energy. A transducer converting a type of energy into an electrical signal can be a sensor. Sensor signals are often incompatible with the inputs of measurement systems, so the sensor signal must be conditioned. An overview of a measurement system is shown in Figure 17.1.

A sensor is used for measuring various physical properties such as temperature, force, pressure, flow, position, light intensity, etc. These properties act as the stimulus to the sensor, and the sensor output is a measurement of this property. The stimulus is the quantity, property, or condition that is sensed and converted to an electrical signal.

Sensors cannot operate by themselves; they must be part of a larger system consisting of a measurement system and a computer system as shown in Figure ??. The important parts in this course will be the sensor device, consisting of the sensing device, signal conditioning and converters, and the measurement system. The sensing device will be the device that receives a signal or stimulus and responds with some type of electrical signal. These electrical signals will always be analog signals, meaning time-continuous signals where some time-varying feature of the signal is a representation of some other time-varying quantity (www.wikipedia.org 2006). The primary disadvantage of an analog signal is the influence of noise, meaning random variation, and another disadvantage is that a computer can only use digital signals. Digital signals are digital representations of discrete-time signals, which are often derived from analog signals (www.wikipedia.org 2006). An important part of the sensor device will then be the signal conditioning and converters. As shown in Figure ?? the signal conditioning and the converters can be part of both the sensor devices and the measurement systems, meaning that not all sensor devices can be connected to all measurement systems.

The sensor device normally consists of the sensing device, a signal conditioning device, and some sort of transducer or transmitter as shown in Figure 17.2. This sensor device can be connected to a measurement system using a standard interface or a standard sensor device bus. A sensor device bus is an interface where several sensors can be connected to the same wires, often with some sort of addressing for the sensor devices.

Sensor devices can also have proprietary interfaces, meaning that the sensor devices can only be connected to a measurement system designed for these sensors. This is shown in Figure 17.3.

Figure 17.1: The main modules in a measurement and control system. The sensors and the actuators are an important part of such a system.


Figure 17.2: A sensor device with a standard interface.

Figure 17.3: A sensor device with a proprietary interface.

A transducer converts a physical phenomenon to an electrical signal, so the sensing device and the signal conditioning device in Figures 17.2 and 17.3 will be a transducer. A transmitter provides a specific output signal, very often a standard interface signal like 4-20 mA1, HART2, RS-4853 or other types of bus signals (analog or digital). A sensor transmitter will always have a transducer included, as the transmitter is only used for transmitting the electrical signal to another device.

17.1 Sensor device types

Sensor devices can range from very simple devices to complex devices.

17.1.1 Passive or Active

A sensor device can be either passive or active. In passive sensors the property of the device changes without any additional energy consumption from the electrical circuits connected to the sensor, while active sensors require an operating signal provided by an excitation circuit (Fraden 2004). Passive sensors directly generate an electric signal in response to an external stimulus; energy is only needed to amplify the analog signal. Examples of passive sensors are a thermocouple, where a voltage will be generated depending on the temperature, a photodiode, where the voltage over the diode will depend on the light, and a piezoelectric sensor, where the voltage will depend on the pressure on the piezoelectric element. Active sensors need external power for operation, where the sensor device is modifying a signal depending on the stimulus. Examples of active sensors are temperature dependent resistors (RTD) like the PT-100, where the current (excitation signal) will generate an output voltage depending on the resistance, light dependent resistors (LDR), where the current (excitation signal) will generate an output voltage depending on the resistance, or an ultrasonic sensor, where a sound signal will generate an output voltage or current.

The difference is important if the sensor is going to be located in a hazardous area, i.e. an area with a potentially explosive atmosphere. Active sensors must then be equipped with an extra device to limit the amount of electrical energy used in the hazardous area to avoid explosions under any possible fault condition. The regulation of equipment in hazardous areas is in Europe controlled by the ATEX4 directives and guidelines.

Active sensors require external power for the operation, called an excitation signal (Fraden 2004). This signal is used by the sensor device to produce the output signal of the sensor device. An example is shown in Figure 17.4, showing a current loop sensor using the same wires for both power and signal.

1 4-20 mA: Current loop, an analog signal.
2 HART: A current loop with digital information added.
3 RS-485: A digital signal.
4 ATEX: ATmosphères Explosibles.


Figure 17.4: An active sensor shown as a current loop sensor (4-20 mA).

The output from the sensor will be a current, and using a resistor the current can be converted to a voltage. Normally a 4-20 mA standard is used, meaning that the electronics inside the sensor uses 4 mA, and the electronics will adjust the output current between 0 and 16 mA above this, depending on the stimulus. This 4-20 mA standard is used a lot in industry due to better noise immunity, since current is used for signal communication and the same two wires are used for both power and signal.

17.1.2 Absolute or Relative

A sensor device can also be an absolute or relative sensing device. An absolute sensor device detects a stimulus in reference to an absolute physical scale that is independent of the measurement conditions, whereas a relative sensor detects a stimulus relative to another stimulus (Fraden 2004).

17.1.3 Point or Continuous Measurement

A sensor device can be used for point measurement or continuous measurement. In point measurement the output from the sensor device will indicate the presence or not of some property at a specified point. Continuous measurement will give a continuous output signal, normally proportional to the property of the measurand.

17.1.4 Contact or non-contact

A sensor device can be a contact device or a non-contact device. A contact device must be in contact with the medium to measure the property, while the non-contact device will not be in physical contact for measuring the property. A non-contact device will then depend on a transfer of energy to the sensor for measuring the property. This energy transfer can either be radiation, or a reflected signal transmitted from the sensor device.

17.1.5 Invasive or Intrusive

A sensor device can also be invasive or intrusive, indicating the influence on the measurand. Invasive often means that the sensor will be in contact with the measurand, while intrusive means that the sensor device will be disturbing the measurand as well.

Expressions like online, offline, and inline can also be used, where online means invasive, inline means intrusive, and offline means non-contact. Often online is mixed up with automatic, meaning measuring the measurand in real time.

This means that a sensor device can be either passive or active, point or continuous, and contact or non-contact. A contact sensor device can be either invasive or intrusive.

17.2 Sensor device properties

A sensor device converts the value of the measurand to an output signal of the sensor device as shown in Figure 17.5. There are a lot of important properties of the sensor device describing this conversion, and these properties can be divided into 3 different groups:


Figure 17.5: The conversion of the measurand to the output of the sensor device.

1. concepts for the conversion,

2. concepts for the operating conditions,

3. concepts for the accuracy.

17.2.1 Concepts for Conversion

The concepts for the conversion are:

1. Lower range value (LRV ); the lowest value of the measurand the sensor device can convert,

2. Upper range value (URV ); the highest value of the measurand the sensor device can convert,

3. Range; the interval between the lower range value and the upper range value (range = high - low),

4. Lower range limit; the lowest value that can be adjusted,

5. Upper range limit; the highest value that can be adjusted,

6. Overrange; the sensor device can be destroyed if the measurand is above this value,

7. Overrange limit; the sensor device will be destroyed if the measurand is above this value.

8. Unidirectional; zero is either the lower or the upper range value (0 °C to +125 °C),

9. Bidirectional; zero is between the lower and upper range value (-25 °C to +125 °C),

10. Suppressed-zero; zero is below the lower range value (10 °C to +125 °C),

11. Elevated-zero; zero is above the upper range value (-125 °C to -25 °C).

Sensor output:

1. Lower output value (LOV ); the output value of the sensor device for the lower range value,

2. Upper output value (UOV ); the output value of the sensor device for the upper range value.

Knowing the measurand, the output value of the sensor will be:

output_value = LOV + ((measurand - LRV) / (URV - LRV)) · (UOV - LOV)

What will the measurand be if we know the output value of the sensor device?

Example 8 You have a temperature sensor with a range of [-20 °C, 80 °C] and an output range of 4-20 mA. Show that the output current is 12 mA when the temperature is 30 °C.
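A minimal sketch of the conversion formula above, used to verify Example 8; the function names are chosen only for this sketch.

# Linear conversion between measurand and sensor output, and its inverse.
def sensor_output(measurand, LRV, URV, LOV, UOV):
    """output_value = LOV + (measurand - LRV)/(URV - LRV) * (UOV - LOV)."""
    return LOV + (measurand - LRV) / (URV - LRV) * (UOV - LOV)

def measurand_from_output(output_value, LRV, URV, LOV, UOV):
    """Answers: what is the measurand if we know the output value?"""
    return LRV + (output_value - LOV) / (UOV - LOV) * (URV - LRV)

# Example 8: 30 degC in a [-20, 80] degC / 4-20 mA sensor gives 12 mA
print(sensor_output(30.0, LRV=-20.0, URV=80.0, LOV=4.0, UOV=20.0))           # -> 12.0
print(measurand_from_output(12.0, LRV=-20.0, URV=80.0, LOV=4.0, UOV=20.0))   # -> 30.0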


Figure 17.6: The operating conditions of a sensor device (from (Olsen 2005)).

Figure 17.7: Random errors when sensing the measurand.

17.2.2 Concepts for Operating Conditions

The concepts of operating condition (environment) (Olsen 2005):

1. reference operating conditions; a small working area where the accuracy of the sensor device is valid,

2. normal operating conditions; a working area where the sensor device is constructed for operation,

3. operative limits; limits for the working area where the sensor device can operate without being destroyed,

4. transportation and storage conditions; the area of the measurand where the sensor device can be held without being destroyed or any need for recalibration.

Figure 17.6 shows these operating conditions of a sensor device.

17.2.3 Concepts for Accuracy

The output value from the sensor device will vary from reading to reading; the value will NOT be a constant value even if the measurand seems to be constant. This is due to a set of small random errors like noise, etc. See Figure 17.7.

The variation of the output value will, however, be like a normal distribution, where the mean should be the most correct value for the measurand and the standard deviation often depends on the accuracy of the sensor. The formula for the normal distribution is:

f(x) = (1 / (σ·√(2π))) · e^(-(x - μ)² / (2σ²))

where σ is the standard deviation for the population and μ is the mean of the population. Often the normal distribution is denoted by:

x ~ N(μ, σ²)


Figure 17.8: A set of normal distribution curves with different μ and σ.

where N() is the normal distribution, μ is the mean, and σ² is the variance. As a rule of thumb for the normal distribution:

Values   Range
68%      ±σ
95%      ±2σ
99.7%    ±3σ

meaning that 68% of all measured values should be within one standard deviation of the mean value. A set of normal distribution curves is shown in Figure 17.8.

Accuracy properties:

1. Sensitivity; the smallest change in the measurand that can be detected by the device (Δoutput/Δinput), the absolute change, often in voltage,

2. Resolution; the smallest portion of the measurand that can be observed by the device, the relative change, often depending on the range and often given as a number of bits,

3. Repeatability; the closeness of successive measurements carried out under the same conditions,

4. Reproducibility; the closeness of successive measurements carried out with a stated change in conditions,

5. Accuracy; the closeness of the measurement and the measurand,

6. Absolute accuracy; the closeness of the measurement and the measurand,

7. Relative accuracy; the closeness of the measurement and a reference value,

8. Error; the deviation between the measurement and measurand,

9. Random error; the deviation of a single measurement from the mean of a large number of measurements of the same measurand, caused by random influences (see Figure 17.10 and the sketch after this list):

(random_error = measured_value - average_of_readings)

10. Systematic error; the deviation of the mean of a large number of measurements of the same measurand from the measurand itself, caused by systematic influences (see Figure 17.10):

(systematic_error = average_of_readings - measurand)


Figure 17.9: Sensor hysteresis showing the differences when the measurand is increasing or decreasing.

Figure 17.10: Systematic error and random error for a sensor device.

11. Uncertainty; An estimate of a possible error in the measurement,

12. Time drift; changes in accuracy over a long period of time (1 year?).

13. Hysteresis error; the difference in the sensor device output for a specific measurand depending on whether the previous measurand was lower or higher. See Figure 17.9.

14. Nonlinearity error; the maximum deviation from a straight line.
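A minimal sketch of the random and systematic error definitions in items 9 and 10; the readings are made-up numbers used only for illustration.

# Repeated readings of the same measurand, with a "true" value of 20.0
measurand = 20.0
readings = [20.4, 20.6, 20.5, 20.7, 20.3]

average_of_readings = sum(readings) / len(readings)

systematic_error = average_of_readings - measurand           # item 10
random_errors = [r - average_of_readings for r in readings]  # item 9, per reading

print(f"systematic error: {systematic_error:+.2f}")
print("random errors:", [f"{e:+.2f}" for e in random_errors])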

Accuracy and precision

Accuracy and precision are important when doing measurements and depend on the entire DAQ system. Accuracy defines how close the measurement is to the real value, and precision is how repeated measurements under unchanged conditions show the same results. Some examples of accuracy and precision are shown in Figure 17.11, the relationship between accuracy and precision and the time domain of the signal is shown in Figure 17.12, and the relationship between accuracy and precision is shown in Figure 17.13. The precision indicates the repeatability of the device.

Accuracy is one of the most important considerations for measurement, and often the accuracy is given as a percentage of the full range output, stated as FRO (Full Range Output)5. When using the sensor device as an input device to a model, the repeatability will be more important than the accuracy. Why? The model must be trained with proper data, and it does not matter if the sensor device signals are not that accurate as long as the sensor device signals are repeatable.

Accuracy, resolution and repeatability

Figure 17.14 shows different combinations of resolution, accuracy and repeatability. The repeatability can be high even if the resolution or the accuracy is low. Using a model, the model can be trained to deal with lower resolution and/or accuracy, but not with low repeatability.

error = measured_value - measurand_value

Example 9 You buy a pressure sensor with the measuring range [1 bar, 10 bar] and an accuracy of ±0.5% FRO6. Your measurement range is [1 bar, 5 bar]; what will be the minimum and maximum accuracy

5 FRO: Also called Full Scale (FS) or Full Scale Output (FSO).
6 FRO: Full Range Output, also given as Full Output (FO) or Full Scale (FS).


Figure 17.11: Some examples of accuracy and precision (Mat 1999).

Figure 17.12: The relationship between precision, accuracy, and the signal in the time domain (Olsson & Piani 1998).

Figure 17.13: The relationship between accuracy and precision (www.wikipedia.org 2010).


Figure 17.14: The relationship between resolution, accuracy, and repeatability. The repeatability can be high even if the resolution or the accuracy is low (www.keithley.com, Oct. 2010).

for your measurement range? The minimum accuracy will be ±1% at 5 bar and ±5% at 1 bar, as the accuracy is 10 · 0.5/100 = ±0.05 bar = ±50 mbar. The solution will be to always select a measurement range of the sensor device as close as possible to your measurement range!
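A minimal sketch verifying Example 9: a fixed ±0.5% FRO error over a 1-10 bar sensor gives a fixed absolute error, so the relative error grows at the low end of the measurement range. The variable names are chosen for this sketch.

fro = 10.0                      # full range output of the sensor, in bar
accuracy_fro = 0.5 / 100        # +/-0.5% of FRO

absolute_error = fro * accuracy_fro                          # bar
print(f"absolute error: +/-{absolute_error*1000:.0f} mbar")  # +/-50 mbar

for pressure in (5.0, 1.0):                                  # ends of the [1, 5] bar range
    relative = absolute_error / pressure * 100
    print(f"at {pressure:.0f} bar: +/-{relative:.0f}% of reading")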

External errors:

1. interference errors like vibrations, noise, and the power supply unit (PSU),

2. wrong use of the sensor device like wrong calibration, using it outside the normal operating conditions, or wrong interface "connections".

17.3 Sensor output signals

The output from a sensor device can be either an analog signal or a digital signal. This will not be the range of the sensor device, but the interface signal for connection to a DAQ system. The output signal variables can be:

1. current signal; normally 4-20 mA, often used in noisy environments,

2. voltage signal; most commonly used interface signal, three important properties:

(a) amplitude,

(b) frequency,

(c) duration.

3. bandwidth; the range of the frequencies present in the measured signal, all sensors have a low and a high limit for measurement.

17.4 Dynamic measurement

When the measurand is unchanging in time and the measurement system is showing the same value in response to the measurand, the measurement process is said to be static. When the measurand is changing in time and the measurement system is not showing an instantaneous response, the measurement process is said to be dynamic. When the measurement system is dynamic there is usually an error introduced into the measurement, and actions must be taken to minimize this error. Dynamic responses of a measurement system can usually be placed into one of three categories:

1. zero order; responds instantly to the measurand, although no measurement system is truly zero order,

2. first order; see Response A in Figure 17.15,


Figure 17.15: The dynamic response of a system with a pulse on the input. Response A is a first order response and Response B is a second order response.

Figure 17.16: The dynamic response of a sensor; T0 is the dead time, Td is the delay time, Tp is the peak time, Mp is the peak value, and Ts is the settling time (Olsson & Piani 1998).

3. second order; see Response B in Figure 17.15.

The sensor will also have some dynamics, but this information is normally not included in the datasheets of the sensors. The dynamic response of a sensor can be tested with a step response. Figure 17.16 shows the parameters that describe the response of a sensor, and these parameters should be as small as possible (Olsson & Piani 1998). The dynamic parameters are (Olsson & Piani 1998):

• dead time (T0); the time between the first change of the physical value and the first change in the output signal of the sensing device,

• rise time; the time it takes to pass from 10% to 90% of the steady state response,

• delay time (Td); the time to reach 50% of the steady state response,

• peak time (Tp); the time to reach the first peak,

• settling time (Ts); the time when the sensor step response is within a certain percentage (e.g. ±5%) of the steady-state value.

17.5 MEMS

Micro-Electro-Mechanical Systems (MEMS) is the integration of mechanical elements, sensors, actuators, and electronics on a common silicon substrate. The electronics will be fabricated as integrated circuits on the silicon substrate, while the micromechanical components will be fabricated by etching away parts of the silicon substrate or adding new structural layers to form the mechanical and electromechanical devices.


Chapter 18

Signal condition systems

This section will focus on measurement systems with electrical signals. The sensing device has an electrical output, meaning that the electrical property is caused by a change of the measurand. Often the measurand will make a change in a resistance, capacitance, or a voltage of the sensing device, but in some cases the output can be dependent on current, frequency, or electric charge as well. Figure 18.1 shows the signal conditioning part of a sensor device.

The focus will be on sensing devices with an electrical output, which have the following advantages over mechanical devices:

1. ease of transmitting the measurement signal from the sensing device to the measurement system,

2. ease of amplifying, filtering, and otherwise modifying the signal,

3. ease of converting the signal to a digital signal for monitoring and control,

4. ease of logging the signal.

Electrical sensing devices are normally called sensors, but can also be called transducers, gages, cells, pickups and transmitters. A measurement system can be as shown in Figures ??, 17.2, or 17.3. The sensing devices will be the focus of later chapters; in this chapter we will focus on the signal conditioning device. The most common functions of the signal conditioning device or stage are:

1. amplification,

2. attenuation,

3. filtering,

4. differentiation,

5. integration,

6. linearization,

7. combining the measured signal with a reference signal,

8. converting the signal to an output signal (often voltage or current).

Figure 18.1: The signal condition part of a sensor device.


Figure 18.2: An amplifier with the input voltage Vi and the output voltage Vo (Wheeler & Ganji 2004).

9. electrical isolation; high-voltage transients, safety or grounding.

10. multiplexing; mixing several signals,

11. excitation source.

The signal conditioning function will be one or several combinations of the listed functions and will be a very important part of the sensor device.

18.1 Amplification

Sensing devices often produce low voltage signals (µV or mV), and since these signals are difficult to transmit over wires, the signal should be amplified. See Figure 18.2 for an overview of an amplifier used for changing the low input voltage; the change is due to the gain property of the amplifier.

An amplifier to be used in a measurement system will often be an instrumentation amplifier. An instrumentation amplifier is a type of differential amplifier that has been outfitted with input buffers, which eliminate the need for input impedance matching and thus make the amplifier particularly suitable for use in measurement and test equipment. Additional characteristics include very low DC offset, low drift, low noise, very high open-loop gain, very high common-mode rejection ratio, and very high input impedances. Instrumentation amplifiers are used where great accuracy and stability of the circuit, both short- and long-term, are required (www.wikipedia.org 2006).

Let the low voltage be Vi (input voltage) and the output voltage of the amplifier be Vo; the gain G

will be:

G = Vo / Vi

The gain can be any value, but is often within [1, 1000], or a decrease within [0, 1]. Gain is normally given on a logarithmic scale, expressed in decibels (dB) as:

GdB = 20 log10 G = 20 log10 (Vo / Vi)

Example 10 The output voltage of a sensing device is maximum 5 mV, and you need a maximum voltage of 5 V as input to your measurement system. Show that the gain of the instrumentation amplifier should be 60 dB.
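A minimal sketch verifying Example 10 with the gain and decibel formulas above.

import math

v_in = 5e-3    # maximum sensing device output, 5 mV
v_out = 5.0    # required maximum input to the measurement system, 5 V

gain = v_out / v_in
gain_db = 20 * math.log10(gain)

print(f"G = {gain:.0f}  ->  {gain_db:.0f} dB")   # G = 1000 -> 60 dB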

The amplifier used for changing the sensing signal, normally amplifying the signal, can also change the signal in other ways, for example frequency distortion and/or phase distortion.

The design of the amplifier will depend on the sensor resistor; the sensor resistor can be placed between the load and circuit ground or between the supply and the load. Low side current sensing is with the sensor resistor between the load and circuit ground; high side current sensing is with the sensor resistor between the supply and the load. Figure 18.3 shows the high side sensing circuit to the left and the low side sensing circuit to the right. The value of the resistor should be as low as possible to keep power dissipation in check, but high enough to generate a voltage detectable by the amplifier.

18.1.1 Bandwidth distortions

Amplifiers have different gains for different frequencies; therefore the bandwidth of the amplifier is important. The gain will always be reduced for (low and) high frequencies of a signal amplifier. The


Figure 18.3: The high side sensing circuit to the left and the low side sensing circuit to the right (Electronic Engineering Times, Oct-09).

Figure 18.4: The bandwidth of an amplifier with the -3 dB cutoff frequencies (f_low and f_high) (Wheeler & Ganji 2004).

bandwidth is a parameter describing the frequency area. The bandwidth is defined as the frequencies between the low and high frequencies where the gain is reduced by 3 dB; see Figure 18.4. The bandwidth is measured in Hertz (Hz). The reduction of the gain is:

Gain = 1/√2 ≈ 0.707 ≈ -3 dB

meaning that the output voltage is ≈ 71% of the maximum output voltage.

Due to the bandwidth, the amplification of the different frequencies will be different, giving frequency distortion. Figure 18.5 shows the frequency distortion of a square wave input signal. The square wave signal contains a wide range of harmonics, and since an amplifier has different gain factors for each frequency due to the bandwidth, this gives the frequency distortion.

Figure 18.5: Frequency distortion of a square-wave input signal (Wheeler & Ganji 2004).


Figure 18.6: The phase angle of a sine wave signal (Wheeler & Ganji 2004).

Figure 18.7: A phase angle response diagram of an amplifier (Wheeler & Ganji 2004).

Normally the gain will be relatively constant over the bandwidth, but the phase angle, another property of the output signal, can change significantly. If the input signal of the amplifier is expressed as:

Vi(t) = Vmi · sin(2πft)

where f is the frequency and Vmi is the maximum amplitude of the input sine wave, the output signal will be:

Vo(t) = G · Vmi · sin(2πft + φ)

where φ is called the phase angle, see Figure 18.6. Normally the phase shift will not be a problem, but for complicated periodic waveforms it may result in a problem called phase distortion.

The phase angle response diagram is shown in Figure 18.7, showing the phase angle versus the logarithm of the frequency. The combination of the bandwidth diagram in Figure 18.4 and the phase-angle response diagram is called the Bode diagram of a dynamic system.

It can be shown that with a linear variation of the phase angle with frequency, the output signal will only be delayed or advanced in time. A non-linear phase angle will disturb the output signal, giving phase distortion. This is shown in Figure 18.8, where (a) is the input signal; (b) is the output signal with a linear phase angle; and (c) is the output signal with a non-linear phase angle.

18.1.2 Common-mode rejection ratio (CMRR)

The common-mode rejection ratio (CMRR) of a differential amplifier (or other device) measures the tendency of the device to reject input signals common to both input leads (www.wikipedia.org 2006). As shown in Figure 18.2 an instrumentation amplifier will have two inputs, and the connection of these inputs can be differential mode or common mode. In differential mode the input voltage is applied between the two input terminals as shown in Figure 18.9(a). When the same input voltage is applied to the two input terminals, relative to ground, the input is a common-mode voltage. An ideal instrumentation amplifier will not produce an output signal from a common-mode voltage, but real amplifiers will. The common-mode rejection ratio is defined as:


Figure 18.8: A signal with a linear and non-linear phase angle variation with frequency: (a) input signal; (b) linear variation of phase angle; (c) non-linear variation of phase angle.

Figure 18.9: Common-mode rejection ratio, the differential mode connection to the left and the common mode connection to the right.


Figure 18.10: A simple model of the sensing device (a) and the amplifier (b) (Wheeler & Ganji 2004).

Figure 18.11: A connection of the sensing device model (a), the amplifier model (b), and the load (c) (Wheeler & Ganji 2004).

CMRR = 20 log10 (Gdiff / Gcm)

expressed in decibels. Gdiff is the gain in differential mode and Gcm is the gain in common mode. Since the signals of interest often appear in differential mode and noise signals often appear in common mode, a high value of CMRR is desirable (often more than 100 dB).

18.1.3 Input and output loading

Connecting sensing devices, signal conditioning devices, and measurement systems can give problems with input and output loading for these devices and systems. See Figure 18.1. To analyse these problems, simple models of these devices can be used. The sensing device can be modeled as a voltage generator Vs in series with a resistor Rs. This model is shown in Figure 18.10(a). This model shows how the sensing device will behave if someone makes a connection. An equivalent model can describe the instrumentation amplifier as well, shown in Figure 18.10(b).

The output voltage of the sensing device, Vs, will then depend on the load of the device. Generally, the input load should be as high as possible and the output load as low as possible. A model of the complete system is shown in Figure 18.11 with the sensing device (a), the amplifier (b), and the load (c). The current from the sensing device will be:

Ii = Vs / (Rs + Ri)

giving the input voltage of the amplifier to be:

Vi = Ri · (Vs / (Rs + Ri)) = Ri·Vs / (Rs + Ri)

Let Ri >> Rs, then Vi ≈ Vs. The output of the amplifier will then be:

VL = RL·G·Vi / (Ro + RL) = (RL / (Ro + RL)) · G · (Ri·Vs / (Rs + Ri))    (18.1)


Figure 18.12: A dividing network of resistors used for signal attenuation.

Let Ri >> Rs and RL >> Ro; then an approximation of Equation 18.1 will be:

VL = G · Vs
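A minimal sketch of the loading model above (Equation 18.1), comparing the exact load voltage with the ideal VL = G · Vs approximation; the component values are made up for illustration.

def load_voltage(Vs, Rs, Ri, G, Ro, RL):
    """Exact load voltage for the source-amplifier-load model in Figure 18.11."""
    Vi = Ri * Vs / (Rs + Ri)            # input loading of the sensing device
    return RL * G * Vi / (Ro + RL)      # output loading of the amplifier

Vs, G = 5e-3, 1000.0                    # 5 mV source, gain 1000
exact = load_voltage(Vs, Rs=100.0, Ri=1e6, G=G, Ro=10.0, RL=1e5)
ideal = G * Vs

print(f"exact: {exact:.4f} V, ideal: {ideal:.4f} V")  # close, since Ri >> Rs and RL >> Ro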

18.2 Attenuation

In some situations the amplified signal will be higher than the input range of the next device, and the output signal should be reduced, also known as attenuation. A dividing network of resistors can be used for signal attenuation and is shown in Figure 18.12. The current Ii will be:

Ii = Vi / (R1 + R2)

and the output voltage will be:

Vo = R2 · Ii = R2 · (Vi / (R1 + R2)) = R2·Vi / (R1 + R2)

Remember that these equations assume that R2 << RL, where RL is any load resistance.

18.3 Filtering

Often an input signal will be complicated, a sum of many different frequencies and amplitudes. To be able to remove some of these frequencies, filtering is used. Two very common situations where filtering is used are noise and aliasing. Aliasing concerns sampling and will be discussed later in the course. Noise is unwanted frequency components of the signal, and a filter can remove these unwanted components.

Filters can either be hardware devices (combinations of resistors, capacitors and/or inductances) or be implemented in software. Hardware filters are used if filtering is necessary before the measurement (DAQ) system; software filters can only be applied to the signal after the digital conversion.

There are four different types of filters:

1. low-pass filter; see Figure 18.13 (a), passband at low frequencies, stopband at high frequencies,

2. high-pass filter; see Figure 18.13 (b), passband at high frequencies, stopband at low frequencies,

3. band-pass filter; see Figure 18.13 (c),

4. band-stop filter; see Figure 18.13 (d).

As shown in Figure 18.13 the filters have a corner frequency, fc, indicating the frequency for the signal attenuation. A very large number of hardware filters exist; four classes of filters are most used. Each filter class has unique characteristics that make it suitable for a particular application. These four classes are:

1. Butterworth; maximally flat in the passband but not a crisp cut-off into the stopband, see Figure 18.14,

2. Chebyshev; crisp cut-off into the stopband but ripples in the passband, see Figure 18.15,

3. elliptic; a very crisp transition between passband and stopband, but has ripples in both passband and stopband,


Figure 18.13: Four different types of filters: (a) lowpass filter; (b) highpass filter; (c) bandpass filter; (d) bandstop filter (Wheeler & Ganji 2004).

4. Bessel; a good linear variation of the phase angle with frequency in the passband, but a lower roll-off rate than Butterworth filters. See Figure 18.16.

A common characteristic is the filter order. The higher the order, the greater the attenuation outside the corner frequencies (in the stopband) will be.

18.3.1 Low pass filter

A low pass filter is used for smoothing a set of the last input values (present values) and will give some delay of the actual input signal. The differences between the present value, a filtered value, a predicted value, and a trend value are shown in Figure 18.17.

The low pass filter response is shown in Figure 18.18, where the cut-off frequency, fc, is where the amplitude is -3 dB or 1/√2.

Hardware low pass filter

The simplest and cheapest filter available is a single pole RC filter, a resistor (R) in series with the signal and a capacitor (C) between the signal and ground. The filter rolls off at 6 dB per octave (20 dB per decade) above the corner frequency at:

fc = 1 / (2πRC)

A simple low pass filter can be designed using only a resistor and a capacitor as shown in Figure 18.19. The time constant for the filter will be:

τ = 1 / ω_cutoff = R · C

In discrete time a low pass filter will be:

y(k) = (1 - α) y(k-1) + α u(k)


Figure 18.14: Gain of a lowpass Butterworth filter as a function of filter order and frequency (Wheeler & Ganji 2004).

Figure 18.15: Gain of a lowpass Chebyshev filter as a function of filter order and frequency (Wheeler & Ganji 2004).

Figure 18.16: Phase angle variation with frequency for the Bessel and Butterworth classes of filters (Wheeler & Ganji 2004).


Figure 18.17: Filter values are delayed values of the present value, smoothing several of the last present values. Past values are used for trending and predicted values are estimates into the future.

Figure 18.18: The response of a low pass filter.

Figure 18.19: A first order LP filter using a resistor and a capacitor.


Figure 18.20: The input (solid line) signal and the output (dotted line) signal for an IIR low pass filter.

Figure 18.21: A moving average filter.

where the filter constant α is a number between 0 and 1. The filter constant is:

α = dT / (RC + dT)

where dT is the sampling time. Knowing the sampling time and the filter constant, the RC factor is:

RC = dT · (1 - α) / α

Software low pass filter

A low pass filter is often implemented as an exponential filter or as a moving average filter. The exponential filter is an infinite impulse response (IIR) (Ifeachor & Jervis 2002) filter, meaning that there is feedback in the filter. The box-car average, moving average and weighted moving average filters are finite impulse response (FIR) (Ifeachor & Jervis 2002) filters, meaning that there is no feedback in these filters. The exponential filter will be:

yk = α·xk + (1 - α)·yk-1

where yk is the new filter value and xk is the new sensor value. The filter is very easy to implement in software, as only the previous value is needed for each calculation. Figure 18.20 shows the input signal (solid line) and the output signal (dotted line) of such a filter.

A moving average filter is shown in Figure 18.21. The filter requires a ring buffer in software to save the last N values from the sensor device, and an average calculation for each reading of a new sensor value. The output value yk will be:

yk = (Σ_{i=1}^{N} xi) / N

where yk will be the filter output value at step k estimated from the N sensor device values. Figure 18.22 shows the input signal (solid line) and the output signal (dotted line) of such a filter.
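A minimal sketch of the two software low pass filters described above, the exponential (IIR) filter and the moving average (FIR) filter; the sample data is made up, and alpha and the window length N are free design parameters.

from collections import deque

def exponential_filter(samples, alpha):
    """IIR low pass: y_k = alpha*x_k + (1 - alpha)*y_(k-1)."""
    y = samples[0]                       # initialize with the first sample
    out = []
    for x in samples:
        y = alpha * x + (1 - alpha) * y  # only the previous output is stored
        out.append(y)
    return out

def moving_average_filter(samples, N):
    """FIR low pass: y_k = mean of the last N samples (ring buffer)."""
    window = deque(maxlen=N)             # acts as the ring buffer
    out = []
    for x in samples:
        window.append(x)
        out.append(sum(window) / len(window))
    return out

noisy = [0, 0, 10, 10, 9, 11, 10, 10, 30, 10, 10]   # a step with a noise spike
print(exponential_filter(noisy, alpha=0.5))
print(moving_average_filter(noisy, N=4))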


Figure 18.22: The input (solid line) signal and the output (dotted line) signal for an FIR low pass filter.

Figure 18.23: A simple high pass filter using a capacitor (C) and a resistor (R).

18.3.2 High pass filter

The simplest and cheapest filter available is a single pole RC filter, a resistor (R) and a capacitor (C) as shown in Figure 18.23. The filter rolls off at 6 dB per octave (20 dB per decade) below the corner frequency at:

fc = 1 / (2πRC)

The time constant for the filter will be:

τ = 1 / ω_cutoff = R · C

In discrete time a high pass filter will be:

y(k) = α·y(k-1) + α·(u(k) - u(k-1))

where the filter constant α is a number between 0 and 1. The filter constant is:

α = RC / (RC + dT)

where dT is the sampling time. Knowing the sampling time and the filter constant, the RC factor is:

RC = dT · α / (1 - α)
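A minimal sketch of the discrete-time high pass filter above, with alpha derived from an assumed RC time constant and sampling time dT.

def high_pass(samples, RC, dT):
    """y(k) = alpha*y(k-1) + alpha*(u(k) - u(k-1)), with alpha = RC/(RC + dT)."""
    alpha = RC / (RC + dT)
    y_prev, u_prev = 0.0, samples[0]
    out = []
    for u in samples:
        y = alpha * y_prev + alpha * (u - u_prev)   # passes changes, blocks DC
        out.append(y)
        y_prev, u_prev = y, u
    return out

# A constant (DC) input gives a decaying response only where the level changes
print(high_pass([1.0] * 5 + [2.0] * 5, RC=0.1, dT=0.01))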

18.3.3 FIR or IIR filter

The Finite Impulse Response (FIR) filter is based on feedforward, while the Infinite Impulse Response (IIR) filter is based on both feedforward and feedback. Suggestions for using these filters:

• IIR: use if a sharp cut-off is wanted and the data rate is high,

• FIR: use if a linear phase is wanted and the data rate is low.

A FIR filter is more complicated than an IIR filter, but also more flexible.


Figure 18.24: A voltage to current converter to the left, a frequency to voltage converter in the middle, and a frequency to current converter to the right.

18.4 Differentiation

A method to compute the rate at which an output signal Vo is changing with respect to the input signal Vi. The rate of change is called the derivative of Vo with respect to Vi. The derivative of a curve will be the slope of a line that is tangent to the curve (www.wikipedia.org 2006). One usage will be for compression (remove the data, keep the information).

18.5 Integration

The integral is the area of a region in the xy-plane bounded by a graph, the x axis, and the vertical boundary lines (www.wikipedia.org 2006).

A = ∫_a^b f(x) dx

Integration means summing the area, not the values. The usage is for sensor devices sensing only changes of a property; integration can then be used to get the total value.

18.6 Linearization

Finding a linear approximation to a function at a given point (www.wikipedia.org 2006).

18.7 Combiner

A combiner can be used for the output, to combine the output signal with:

1. a reference signal to add or subtract an offset,

2. modulation.

Several other ways of combining a set of signals exist as well.

18.8 Conversion

There exist converters for voltage to current, frequency to voltage, and frequency to current. Normally the DAQ system has voltage inputs; use voltage for short distances and current for larger distances between the sensor and the measurement system. Block diagrams of some of the converters are shown in Figure 18.24.

18.8.1 Low-level analog voltage signal

A low-level analog voltage signal, below 100 mV, is common from sensing devices. It is difficult to transmit such signals over long distances due to noise (ambient electric and magnetic fields can induce voltages in the signal wires). An instrumentation amplifier should be used to make a high-level signal.


Figure 18.25: Two systems communicating using voltage, the output voltage Vo and the input voltage Vi.

18.8.2 High-level analog voltage signal

A standard level of an analog voltage signal is 0-10 V, and it can be transmitted over a distance of 10-30 m without major problems. The limitation in distance is due to the resistance of the wire, shown in Figure 18.25.

The output voltage is Vo, the output resistance is Ro, the wire resistance is Rw, the input resistance is Ri, and the input voltage is Vi. Ri should be much larger than Ro, giving the current:

I = Vo / (Rw + Ri)

The input voltage will be:

Vi = Ri · I = Vo·Ri / (Rw + Ri) ≈ Vo

as long as the Ri is much larger than the Rw.

18.8.3 Current-loop analog signal

The output of the sensor is converted to a current signal instead of a voltage signal. A standard signal is 4-20 mA, meaning that the power consumption of the sensor is 4 mA and the range for the measurand will be 4-20 mA, giving a span of 16 mA. The signal can be transmitted over a distance of up to 3 km without major problems. See Figure 17.4. The current loop sensor contains a current generator to convert the value of the measurand to a current signal, and the sensor needs a minimum voltage to operate this current generator. Often this minimum voltage is about 9 V. Most DAQ systems have only voltage inputs, and then the current output from the sensor must be converted to a voltage signal. A high precision resistor is often used for this purpose; the temperature specification is often the most important property of the high precision resistor.

information to the analog signal. A protocol is a set of rules de�ning how two or several computerscan communicate. The protocol may de�ne both hardware and software requirements, or only hardwarerequirements.The current-loop signal is more immune to noise then a voltage signal due to the lower input im-

pedance of a current loop signal. A DAQ system with an input range of 0V to 5V with a noise signalof 1�A will have:

• with an input resistance of 1 MΩ, the input noise will be:

Unoise = R · I = 1 MΩ · 1 µA = 1 V

being 20% of the voltage range.

• with a current signal of 4-20 mA, the input resistance will be:

R = U / I = 5 V / 20 mA = 250 Ω

giving an input noise of:

Unoise = R · I = 250 Ω · 1 µA = 0.25 mV

being 0.005% of the voltage range.
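A minimal sketch of the noise comparison above: the same 1 µA noise current induces very different noise voltages for a high-impedance voltage input and a low-impedance current-loop input, assuming a 0-5 V input range.

i_noise = 1e-6          # 1 uA noise current
v_range = 5.0           # 0-5 V DAQ input range

for name, r_in in (("voltage input, 1 Mohm", 1e6),
                   ("current loop, 5 V / 20 mA", 5.0 / 20e-3)):
    u_noise = r_in * i_noise                       # Ohm's law
    print(f"{name}: {u_noise:.6f} V = {u_noise / v_range * 100:.3f}% of range")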


18.8.4 Digital signal

The best way is to convert the analog signal to a digital signal as close to the sensing device as possible to avoid noise problems. The digital signal is only a set of voltage pulses, being 0 if below a voltage limit and 1 if above a voltage limit, with an illegal band between these voltage limits. These signals will then be more immune to noise problems.

The most used standards for transmitting digital signals are the RS-232C, RS-422 and RS-485 standards. RS-232C is limited to about 10 m, while RS-485 is limited to about 1.2 km. Standards such as USB, FireWire and Ethernet all use differential signalling similar to RS-485 as the transport medium.

The transmitter and receiver need a protocol to communicate, and different protocols exist. In the process industry a set of fieldbuses exists with different types of protocols like Profibus, CAN bus, and Fieldbus Foundation. These fieldbuses are all digital buses.

18.9 Noise

In transmitting analog signals across a process plant or factory floor, one of the most critical requirements is the protection of data integrity. However, when a data acquisition system is transmitting low level analog signals over wires, some signal degradation is unavoidable and will occur due to noise and electrical interference. Noise and signal degradation are two basic problems in analog signal transmission. Noise is considered to be any measurement that is not part of the phenomenon of interest.

Noise can be categorized into two broad categories:

1. internal noise

2. external noise.

While internal noise is generated by components associated with the signal itself, external noise results when natural or man-made electrical or magnetic phenomena influence the signal as it is being transmitted. Noise limits the ability to correctly identify the sent message and therefore limits information transfer.

Some of the sources of internal and external noise include:

1. Electromagnetic interference (EMI);

2. Radio-frequency interference (RFI);

3. Leakage paths at the input terminals;

4. Turbulent signals from other instruments;

5. Electrical charge pickup from power sources;

6. Switching of high-current loads in nearby wiring;

7. Self-heating due to resistance changes;

8. Electrical motors;

9. High-frequency transients and pulses passing into the equipment;

10. Improper wiring and installation;

11. Signal conversion error;

12. Uncontrollable process disturbances.

Electronic noise exists in all circuits and devices as a result of thermal noise, also referred to as Johnson noise. The lower the temperature, the lower this thermal noise is. Semiconductor devices can also contribute flicker noise and generation-recombination noise. In any electronic circuit, there also exist random variations in current or voltage caused by the random movement of the electrons carrying the current as they are jolted around by thermal energy (www.wikipedia.org 2006).

Figure 18.26 shows the typical noise sources in a measurement system.

Advice to avoid noise:


Figure 18.26: Some possible noise sources in a measurement system.

1. Use shielded cables,

2. Terminate the shielding at only one end of the cable,

3. Terminate all shieldings and groundings at one point,

4. Try to separate analog and digital grounding terminations,

5. Use current loop for transmitting analog signals,

6. Convert to digital signals as close to the data source as possible,

7. Use high quality power supplies.


Chapter 19

Data Acquisition Systems

19.1 Introduction

The Data Acquisition (DAQ) system is the connection between the sensor devices, the actuator devices, and the computer system. One of the main purposes is to convert analog signals from the real world to digital representations for computer systems. Figure 19.1 shows a DAQ system with N sensors connected. The common subsystems of a DAQ system are:

1. Analog input; signals from analog sensors,

2. Analog output; signals to analog actuators,

3. Digital input; signals from on/off sensors,

4. Digital output; signals to on/off actuators,

5. Counters; counting the frequency, period, or the number of events,

6. Timers; output event controls or pulse train generation.

The DAQ system can be different types of modules connected to the computer's I/O ports (parallel, serial, PCMCIA, USB, FireWire, SCSI, network, wireless...) or cards inserted into the slots (PCI, ISA) on the motherboard of the computer. An overview of the connections is shown in Figure 19.2. The connection to the computer can be an internal card or an external device, while the sensor connections most often will be an external box.

Important factors of a DAQ system:

1. Interface: The connection between the DAQ system and the computer system. An internal or external system, connected to the intranet or internet? Cable or wireless?

2. Signal conditioning: The analog to digital conversion of the input signals and the digital to analog conversion of the output signals,

3. Number of analog channels: The number and range of the analog input and output signals,

Figure 19.1: A DAQ system with a set of sensors connected.


Figure 19.2: The connections of the sensors and the DAQ system to a computer.

4. Sampling rate: The time to convert an analog signal to a digital signal,

5. Resolution: The smallest value change the system can detect.

6. Accuracy: This is a function of many variables in the system, including A/D nonlinearity, amplifier nonlinearity, gain and offset errors, drift, and noise.

7. Digital I/O: The number of digital input and output signals.

An example of a complete system is shown in Figure 19.3. This system contains a set of sensors, the measurement system, the motor control system, and the MMI (Man Machine Interface). This is a distributed system containing a set of microcontroller modules and serial communication between the modules. The type of serial communication will decide the maximum distances between the different modules.

Figure 19.4 shows the general structure of a measurement system containing the following devices:

1. the sensing device; part of the sensor device,

2. the signal conditioning device; normally part of the sensor device,

3. the signal processing device; normally part of the DAQ system,

4. the data presentation device; normally part of the computer system.

The signal processing device will be the DAQ system, collecting the values from the sensors, most often analog signals, converting these signals to digital representations of these analog signals, and maybe doing some preprocessing of the digital signals as well. An overview of the main components of the analog input section of a DAQ system, where the analog to digital converter (ADC) is part of the DAQ, is shown in Figure 19.5.

The DAQ system, with an ADC, will normally consist of the following components:

1. ADC; an important part of the DAQ system, converting the analog signal to a digital representation of the analog signal. The ADC is often an expensive component, so the DAQ system normally has only one ADC.

2. Mux; the multiplexer is used to connect more sensors to the single analog to digital converter (ADC), often 4, 8, 16, or 32 inputs connected to one ADC.

3. µC; the microcontroller used for controlling the multiplexer and the ADC (see the code sketch further below).

(a) The multiplexer must be connected to the right sensor,

(b) the ADC must be told to start the conversion. The conversion from analog to digital will always take some time (µs or ms),

(c) the ADC will inform the microcontroller when the conversion is finished,

(d) the converted digital value will be read from the ADC,

(e) select the next channel of the multiplexer.


Figure 19.3: The system blocks of a distributed system containing sensors, measurement system, control system, and MMI (Man Machine Interface) (Cravotta 2008).

Figure 19.4: General structure of a measurement system for a single sensor device (Bentley 2005).

Figure 19.5: An overview of the main components of the analog input section of a DAQ system for reading sensor device values.


Figure 19.6: The conversion of the continuous analog signal to a discrete signal at a specific time. The arrows on the top indicate the specific times for each sensor.

Figure 19.7: The electrical representation of digital information in a computer.

4. Signal conditioning; some additional conversion of the digital value.

The conversion from analog to digital is called sampling: the reduction of a continuous signal to a discrete signal. Sampling means to get a value at a specific time in the time domain. Figure 19.6 shows the conversion from an analog continuous signal to a discrete signal using a MUX and an ADC.
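As a concrete illustration of this acquisition sequence, the following Python sketch polls one channel at a time. The functions select_channel, start_conversion, conversion_finished and read_result are hypothetical stand-ins for whatever register or driver calls a real DAQ device provides; the loop only mirrors steps (a)-(e) above.

    def read_all_channels(num_channels, select_channel, start_conversion,
                          conversion_finished, read_result):
        """Poll every multiplexer channel once and return the raw ADC values."""
        samples = []
        for channel in range(num_channels):
            select_channel(channel)            # (a) connect the MUX to the right sensor
            start_conversion()                 # (b) tell the ADC to start the conversion
            while not conversion_finished():   # (c) wait until the ADC reports finished
                pass
            samples.append(read_result())      # (d) read the converted digital value
        return samples                         # (e) the next channel is selected on the next pass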

19.2 Digital representation of numbers

Numbers used by humans are normally represented in base 10 (decimal), but this is not practical for a computer. A more practical base for computers is base 2, as this base can be represented as a voltage being present or not. A voltage will indicate a "1" and no voltage (0V) will indicate a "0". See Figure 19.7 for the electrical levels, where +V indicates a "1" and 0V indicates a "0". The dotted lines are the limit voltages for detecting a "1" or a "0".

19.2.1 Integers

A number can then be represented by a set of voltages, and the conversion between base 10 and base 2 will be (for the range [0, 255]):

N_10 = b_7·2^7 + b_6·2^6 + b_5·2^5 + b_4·2^4 + b_3·2^3 + b_2·2^2 + b_1·2^1 + b_0·2^0

where N_10 is the number in base 10 and b_n are the bit values in base 2. The highest bit (b_7) is called the Most Significant Bit (MSB) and the lowest bit (b_0) is called the Least Significant Bit (LSB). The number of bits will normally match the word size of the computer; the most used sizes are 8 bits, 16 bits, 32 bits, or 64 bits. It is common to break long binary numbers up into segments of 8 bits.

Example 11 Convert the binary number 00100101_2 to a decimal number. The result is 37_10.

The conversion from a base 10 number to a base 2 number is done by repeatedly dividing the number by 2 and using the remainders as the base 2 digits.

Example 12 The 8 bit representation of 101_10 will be:


Division        Remainder   Bit       Bit value
101/2 = 50      1           0 (LSB)   1
50/2 = 25       0           1         2
25/2 = 12       1           2         4
12/2 = 6        0           3         8
6/2 = 3         0           4         16
3/2 = 1         1           5         32
1/2 = 0         1           6         64
0               0           7 (MSB)   128

Sum of the bit values with remainder 1: 1 + 4 + 32 + 64 = 101

The result is the remainders, starting with the LSB, giving that 101_10 = 01100101_2. These numbers can represent only positive decimal integers, so how do we represent negative decimal integers? Negative numbers are normally represented by 2's complement, meaning that an 8 bit integer will represent the range [-128, 127] instead of the range [0, 255]. The way of converting a negative decimal number using 2's complement is:

1. convert the integer to binary as if it were a positive integer,

2. invert all the bits, changing all "0" to "1" and all "1" to "0",

3. add 1 LSB to the final result.

Example 13 The 8 bit binary representation of -101_10 and -6_10 will then be:

Step   -101_10                  -6_10
1      101_10 = 01100101_2      00000110_2
2      10011010                 11111001
3      10011011                 11111010

The MSB normally indicates whether the decimal number is positive or negative: the number is negative if MSB = "1", and positive if MSB = "0". An integer of 8 bits is limited to the range [-128, 127] or [0, 255], depending on signed or unsigned interpretation. Larger ranges require more bytes, and word integers and long integers can be used. Word integers are often limited to 16 bits and long integers to 32 bits, but this may depend on the CPU architecture. A 64 bit CPU architecture may have different limitations than a 32 bit CPU architecture. The CPU architectures also differ in the order of the bytes in multibyte integers. The order can be either "Little Endian" or "Big Endian".

Little Endian means that the low order byte is stored in the first (lowest) address and the high order byte in the last (highest) address. Big Endian means that the low order byte is stored in the last address and the high order byte in the first address. Intel CPU architectures (used in most PCs) use the Little Endian order, while Motorola CPU architectures (used in many Macs) use the Big Endian order.

A long integer may consist of four bytes: byte 0, byte 1, byte 2 and byte 3. Byte 0 is the low order byte and byte 3 is the high order byte. The long integer value will be:

value_long = Byte_0 + 256·(Byte_1 + 256·(Byte_2 + 256·Byte_3))

but these bytes will be stored differently depending on the Endian architecture. The address positions in memory will be:

Endian    Adr#0    Adr#1    Adr#2    Adr#3
Little    Byte 0   Byte 1   Byte 2   Byte 3
Big       Byte 3   Byte 2   Byte 1   Byte 0

Using 2's complement we can represent positive and negative integers, but what about floating point numbers?
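A small Python sketch can illustrate both points: the 2's complement bit pattern of a negative integer, and the byte order of a multibyte integer. The struct module is used only to show the Little/Big Endian packing; the values are chosen for illustration.

    import struct

    def to_twos_complement(value, bits=8):
        """Return the 2's complement bit pattern of a (possibly negative) integer."""
        if value < 0:
            value = (1 << bits) + value      # e.g. -101 -> 256 - 101 = 155
        return format(value, '0{}b'.format(bits))

    print(to_twos_complement(-101))          # '10011011', as in Example 13
    print(to_twos_complement(-6))            # '11111010'

    # Endianness: the same 32-bit integer stored as four bytes.
    n = 0x01020304
    print(struct.pack('<I', n))              # little endian: b'\x04\x03\x02\x01'
    print(struct.pack('>I', n))              # big endian:    b'\x01\x02\x03\x04'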


Figure 19.8: The separation of the sign, mantissa, and the exponent in a floating point number (www.wikipedia.org 2006).

19.2.2 Floating point numbers

Floating point numbers are represented by separating the number into 3 parts: the sign, the mantissa, and the exponent. The number

-4.28 · 10^3.2

will have a negative sign, 4.28 will be the mantissa (fraction), and 3.2 will be the exponent. See Figure 19.8 for the separation of the different bits in a floating point number. Floating points can be represented by single or double precision, and both representations are standardized by IEEE (IEEE 754). The data for the single and double precision formats are listed below:

                 Single               Double
Size in bits     32                   64
Sign bit         1 (bit 31)           1 (bit 63)
Exponent bits    8 (bits 23-30)       11 (bits 52-62)
Mantissa bits    23 (bits 0-22)       52 (bits 0-51)
Value            about ±3.4 · 10^38   about ±1.8 · 10^308

The exponent is biased by 2^(N-1) - 1, where N is the number of bits for the exponent. This means that for single precision the exponent is adjusted by 127. If the exponent is 6, the biased exponent will be 6 + 127 = 133, and the mantissa will be adjusted according to the exponent.
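The bit layout can be inspected directly. The following Python sketch packs a value as a single precision float and extracts the sign, the biased exponent, and the mantissa bits; the example value 64.0 (= 1.0 · 2^6) is chosen to match the biased exponent 6 + 127 = 133 mentioned above.

    import struct

    def float32_fields(x):
        """Split a single precision float into sign, biased exponent, and mantissa bits."""
        bits = struct.unpack('>I', struct.pack('>f', x))[0]
        sign     = (bits >> 31) & 0x1
        exponent = (bits >> 23) & 0xFF        # biased by 127
        mantissa = bits & 0x7FFFFF
        return sign, exponent, mantissa

    sign, exponent, mantissa = float32_fields(64.0)
    print(sign, exponent, exponent - 127, mantissa)   # 0 133 6 0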

19.3 ASCII codes

The computer uses binary signals, but base 2 numbers become long to write. Instead base 8 (octal) or base 16 (hex) is used. In octal, groups of 3 bits are used as one digit, and in hex, groups of 4 bits are used as one digit. The conversion between base 2, 8, 10, and 16 for the range [0_10, 15_10] is:

Base 2   Base 8   Base 10   Base 16
0000     00       00        0
0001     01       01        1
..       ..       ..        ..
0111     07       07        7
1000     10       08        8
1010     12       10        A
..       ..       ..        ..
1111     17       15        F

When computers are communicating with the outside world there must be some sort of protocol¹ defining the conversion between binary numbers and characters for the outside world. These codes will be different from plain binary numbers, and the most common code is ASCII, the American Standard Code for Information Interchange. Standard ASCII uses 7 bits (128 characters); extended versions use 8 bits to represent 256 characters. Today Unicode is used to extend

¹ Protocol: A set of rules (hardware and/or software) defining how to exchange information between two systems.


Figure 19.9: The input and output sections of a DAQ system. The left side is the input section (analog and digital), the right part is the output section (analog and digital), and the lower section in the middle is the computer I/O (digital bus).

the ASCII code for different types of languages and types of media. An example of the ASCII codes is [Character to Base 10 Code]:

Character   Base 10     Character   Base 10     Character   Base 10
<Space>     32          0           48          A           65
!           33          1           49          ..          ..
(           40          ..          ..          Z           90
)           41          8           56          ..          ..
+           43          9           57          a           97
-           45          ..          ..          ..          ..

The codes 0-31 are special control characters.
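Python's built-in formatting can be used to move between these bases and to look up ASCII codes, matching the tables above:

    value = 0b1010                      # 10 in base 10
    print(format(value, 'b'))           # '1010'  (base 2)
    print(format(value, 'o'))           # '12'    (base 8)
    print(format(value, 'x').upper())   # 'A'     (base 16)

    print(ord(' '), ord('0'), ord('A'), ord('a'))   # 32 48 65 97
    print(chr(90))                                  # 'Z'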

19.4 DAQ parts

The main part of a DAQ system is the analog to digital converter (ADC), as shown in Figure 19.5. Often a DAQ system also contains support for digital inputs and outputs and counters as well; see Figure 19.9 for a complete DAQ system.

19.4.1 Counters

Counters are input and output signals that can be used for timing purposes. The input counters can be used to count a number of pulses or to measure the time between changes of the input signals. These signals are often connected to internal counters or the interrupt system. The output counters can be used for generating frequency signals.

19.4.2 Digital inputs

Digital input signals are only ON/OFF signals, and often the number of digital input signals will be a multiple of the data width of the system. This means that the number of digital inputs can for example be 8, 16, 24, or 32. Digital inputs can be used if the process change is a change between two states. This should be converted to an electrical signal being on or off.

Another type of digital input useful in data acquisition applications is the hardware trigger. This allows an external event to start an action, often as an interrupt signal to the system. Important aspects of digital inputs are the:

1. input range of the digital inputs,


Figure 19.10: A digital input using only GND as reference; the external input will be independent of the voltage in your measurement system.

2. input current,

3. noise protection.

The 0V (or ground) signal is a good reference, and by using pull-up resistors and a diode, the 0V signal can be used as the only reference. A capacitor can also be used for noise reduction and/or for debouncing. An example of a digital input is shown in Figure 19.10.

19.4.3 Digital outputs

Digital output signals are, just like the digital input signals, only ON/OFF signals. Digital outputs are used to control equipment that is going to be turned only ON and OFF. Often these outputs will control relays that can switch any type of signal. A relay can be used both as a Normally Open (NO) and a Normally Closed (NC) device. Important aspects of digital outputs are the:

1. output range of the digital outputs,

2. output current.

Often an output amplifier will be used to amplify the output current for controlling the relay. This amplifier can be a digital inverter or a single transistor. However, always remember to add a diode across the relay coil. An example of a digital output with an amplifier, a diode, and a relay is shown in Figure 19.11. This Figure also shows the usage of a transistor for amplification of the digital output signal from your measurement system, as these signals often have low current driving capacity. The digital output is often 5-25 mA, while a relay often needs 100-500 mA to operate.

The relay is an inductor and may create problems when the transistor (or any other switch) is turning the current off. The voltage across the inductor in the relay is:

v = L·(di/dt)

where L is the inductance of the relay and i is the current flowing in the inductor. When the current is switched off, the voltage across the actuator can become very high during the switching phase (Olsson & Piani 1998). The diode in Figure 19.11 is used to reduce the voltage spikes. Figure 19.12 shows the current in the actuator to the left and the voltage across the actuator to the right.

19.4.4 Multiplexer

The multiplexer is an electronic switch used to select the right input channel for the analog to digital conversion. The MUX seems like a simple device, but one important property is crosstalk. Crosstalk is interference between the channels of the MUX, meaning that the input on one channel will not be the same as the output. The reason may be interference between the input channels. The crosstalk should be as low as possible, meaning that the interference between the channels is low.


Figure 19.11: A digital output using a relay, a diode for protection, and a transistor for amplification of the digital output signal from your measurement system.

Figure 19.12: The current and voltage in an inductive actuator when turned off (Olsson & Piani 1998).


Figure 19.13: A resistor ladder for a digital to analog converter (www.wikipedia.org 2006).

Figure 19.14: The output range of a DAC in a DAQ system.

19.4.5 Digital to Analog Converters

In a process system the output of control values can be just as important as the input of sensor signals, and the DAQ system can be used for both input of analog values and output of analog values.

A Resistor Ladder, or R-2R Ladder, is the most simple and inexpensive way to perform digital-to-analog conversion, using repetitive arrangements of precision resistor networks in a ladder-like configuration (www.wikipedia.org 2006). Other types of DAC exist as well, but a DAC is normally simpler than an ADC. A resistor ladder DAC is shown in Figure 19.13.

The digital inputs or bits (Bit 0 to Bit 4) range from the most significant bit (MSB) to the least significant bit (LSB). The bits are switched between either 0V or VREF, and depending on the state and location of the bits, OUT will vary between 0V and VREF. VREF will be the same voltage as for a logic 1. See Figure 19.14. The detailed output of the DAC, shown in Figure 19.15, will be steps, and the number of steps and the resolution depend on the number of bits. The output voltage from the DAC will be:

V_O = (V_MAX - V_MIN)/2^N · D_V

where [V_MIN, V_MAX] is the output range of the DAC, N is the number of bits, and D_V is the digital output value.

Figure 19.15: The output of a DAC.


Example 14 D_V = 128, N = 8, V_MAX = 5V, V_MIN = 0V gives V_O = 2.5V.
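A one-line function is enough to evaluate this formula; the values below reproduce Example 14 (this is only an illustration of the formula, not of any specific DAC):

    def dac_output(digital_value, n_bits, v_min=0.0, v_max=5.0):
        """Output voltage of an ideal DAC: Vo = (Vmax - Vmin) / 2^N * Dv."""
        return (v_max - v_min) / (2 ** n_bits) * digital_value

    print(dac_output(128, 8, 0.0, 5.0))   # 2.5, as in Example 14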

DAC specifications:

1. Settling time: Period required for a D/A converter to respond to a full-scale set point change.

2. Linearity: This refers to the device's ability to accurately divide the reference voltage into evenly sized increments.

3. Range: The reference voltage sets the limit on the output voltage achievable.

4. Output control: amplifiers and signal conditioners are often needed to drive a final control element.

5. Output filter: A low-pass filter may also be used to smooth out the discrete steps in the output.

19.4.6 Analog to Digital Converter

The analog to digital converter is used for converting the analog signal from the output of the MUX to a digital representation of the analog signal. A set of problems arises when doing the analog to digital conversion, like:

1. the digital conversion will not be an exact representation of the analog signal,

2. the conversion will take some time so the analog signal must be stored while converting.

The representation of the analog signal will very seldom be exact, due to the limited number of bits. Assume an analog signal in the range [0V, 10V] and only 2 bits of resolution in the system. 2 bits gives 2^2 = 4 numbers (0, 1, 2, 3), so the analog voltage range must be divided into 4 steps, giving (10V - 0V)/4 = 2.5V for each step. The conversion will then be:

Value          LSB   MSB   Number
[0V, 2.5V]     0     0     0
[2.5V, 5V]     1     0     1
[5V, 7.5V]     0     1     2
[7.5V, 10V]    1     1     3

In general the output of an ADC will have 2^N possible values, where N is the number of bits used for conversion in the ADC. Normally an ADC in a practical solution will have 8, 12, 14, or 16 bits. The resolution of the converter will be:

R = (High - Low)/2^N

ADCs can vary widely, but there are 4 important properties of a converter:

1. the number of bits used for conversion. The greater the number of bits, the more accurate the representation of the analog input (8/10/12/14/16/18/20/24 bits),

2. the input range. The input range can be unipolar (0V to +10V, or -5V to 0V) or bipolar (-5V to +5V),

3. the reference voltage of the converter. This voltage will be part of the accuracy of the converter (often given as ppm/°C),

4. the conversion speed, the time for converting the analog input to a digital representation (µs or ms).

Figure 19.16 shows the details of an A/D converter consisting of an analog section and a digital section. The analog signal is fed to the converter, and the converter is started by a Start Conversion signal. The converter will report the Conversion Finished signal when the digital representation of the analog value is available in the digital section of the converter. The Read Value signal is used for reading the digital value from the converter. The reference voltage is an important property for the accuracy of the A/D converter. On some converters this is controlled by internal logic, on other converters by external logic.


Figure 19.16: The details of an Analog to Digital Converter.

Figure 19.17: The principles of a unipolar single-slope analog to digital converter (Wheeler & Ganji 2004).


Different types of ADC exist; let us use a unipolar single-slope integrating converter to demonstrate the analog to digital conversion process. A block diagram of the converter is shown in Figure 19.17, with the analog input signal on the top left side. A start signal, lower left side, will start the conversion. The start signal will:

1. set the "lock" of the input signal, as the input signal must be constant during the conversion,

2. reset the digital output of the counter,

3. reset the integrator,

4. reset and start the counter using the clock input.

The clock signal is used by the counter to let the digital output be a representation of the analog value. In parallel with the counter, the integrator starts ramping up a voltage, and the output from the integrator is compared with the analog value. When the integrator has a higher voltage than the analog voltage, the counter stops and the digital representation of the analog value can be read from the digital output of the counter.

The comparator will normally contain a "sample and hold" on the analog value input to keep the analog value constant while the conversion is performed. The conversion time will depend on the speed of the integrator, and often there is a relation between speed and resolution as shown:

Precision   Speed      Typical converter types
Low         High       Interpolation, Folding
Medium      Medium     Successive approximation, Algorithmic
High        Low        Integration, Oversampling, Sigma-Delta

The most used types of ADC are:

1. Sigma-Delta ADC,

2. successive-approximation ADC.

Successive-approximation

A successive-approximation ADC is the most commonly used type of analog to digital converter. A DAC is used for converting a digital representation to an analog value; a DAC is very simple compared to an ADC. The principle of successive approximation is shown in Figure 19.18, using a comparator, a DAC, and a control module including a binary counter. The main difference from the single-slope integrator is the usage of the DAC instead. The comparator also includes a sample and hold circuit (S&H) to hold the input value while searching. The binary counter starts with the most significant bit (MSB) and works towards the least significant bit (LSB) using the clock input, giving a fast conversion. The DAC converts the output of the binary counter to an analog value which is compared with the analog input value in the comparator. When these analog values are approximately equal, the counter stops and the binary counter value is available as the digital output value.

This design offers an effective compromise among resolution, speed, and cost. In this type of design, an internal DAC and a single comparator are used to narrow in on the unknown voltage by turning the bits in the DAC until the voltages match within the least significant bit. Raw sampling speed for successive approximation converters is in the range of 50 kHz to 1 MHz.
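A minimal Python sketch of the successive approximation search is shown below. The dac function models an assumed ideal 8-bit DAC spanning 0-5 V, and sample_and_hold is simply the held input voltage as a number; a real converter performs the same comparison in hardware, one bit per clock.

    def successive_approximation(sample_and_hold, dac, n_bits):
        """Binary search for the input voltage, one bit per clock, MSB first."""
        code = 0
        for bit in reversed(range(n_bits)):         # MSB -> LSB
            trial = code | (1 << bit)               # set the bit under test
            if dac(trial) <= sample_and_hold:       # comparator: keep the bit if DAC <= input
                code = trial
        return code

    dac = lambda d: 5.0 / 256 * d                   # assumed ideal 8-bit, 0-5 V DAC
    print(successive_approximation(2.0, dac, 8))    # 102 for a 2.0 V input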

Sigma-delta

A sigma-delta ADC uses a 1-bit DAC, filtering, and oversampling to achieve very accurate conversions. The conversion accuracy is controlled by the input reference and the input clock rate.


Figure 19.18: The principle of a successive approximation A/D converter.

The primary advantage of a sigma-delta converter is high resolution. The flash and successive approximation ADCs use a resistor ladder or resistor string. The problem with these is that the accuracy of the resistors directly affects the accuracy of the conversion result. Although modern ADCs use very precise, laser-trimmed resistor networks, some inaccuracies still persist in the resistor ladders. The sigma-delta converter does not have a resistor ladder but instead takes a number of samples to converge on a result.

The primary disadvantage of the sigma-delta converter is speed. Because the converter works by oversampling the input, the conversion takes many clock cycles. For a given clock rate, the sigma-delta converter is slower than other converter types. Or, to put it another way, for a given conversion rate, the sigma-delta converter requires a faster clock. Another disadvantage of the sigma-delta converter is the complexity of the digital filter that converts the duty cycle information to a digital output word. Figure 19.19 shows a simplified sigma-delta ADC.

Figure 19.20 shows a sigma-delta converter from Analog Devices. The A/D converter, AD7190, consists of 4 analog inputs, a multiplexer, a sigma-delta A/D converter, and a signal conditioning unit converting the digital representation for serial communication of the data. The device also contains a temperature sensor that can be used as an extra input for temperature compensation.

Integrating

This type of A/D converter integrates an unknown input voltage for a specific period of time, then integrates it back down to zero. This time is compared to the amount of time taken to perform a similar integration on a known reference voltage. The relative times required and the known reference voltage then yield the unknown input voltage. Integrating converters with 12 to 18-bit resolution are available, at raw sampling rates of 10-500 kHz.

Because this type of design effectively averages the input voltage over time, it also smooths out signal noise. And, if an integration period is chosen that is a multiple of the AC line frequency, excellent common mode noise rejection is achieved. More accurate and more linear than successive approximation converters, integrating converters are a good choice for low-level voltage signals.

The integrating solution uses more time when the analog signal is closer to the high end of the range, while the successive approximation converter uses a kind of binary search, so its conversion time does not depend much on the analog signal.

Polarity

A DAQ system can convert either unipolar or bipolar signals, or both. A unipolar signal contains only zero and positive values. A bipolar signal contains zero, negative, and positive values. The input devices will decide the type of signals, and the DAQ system should adapt to the type of signals to be able to exploit the resolution of the A/D converter. Figure 19.21 shows the difference between unipolar and bipolar signals.


Figure 19.19: A simplified sigma-delta ADC with examples of the signal levels (www.wikipedia.org 2006).

Figure 19.20: The block diagram of a sigma-delta analog to digital converter, the AD7190 from Analog Devices (www.analog.com; FEB-09).


Figure 19.21: The difference between unipolar and bipolar signals.

Range

In bipolar converters 2's complement is used, which starts with a binary number of -2^N/2 at the lower end, 0 in the middle, and 2^N/2 - 1 at the top of the range. The output of a 2's complement A/D converter will then be:

D_O = int((V_i - V_rl)/(V_ru - V_rl) · 2^N) - 2^N/2

where V_i is the analog input voltage, V_ru is the upper value of the input range, V_rl is the lower value of the input range, N is the number of bits, and D_O is the digital output. A unipolar converter will have the output:

D_O = int((V_i - V_rl)/(V_ru - V_rl) · 2^N)

Example 15 If the input voltage is 2V, the range is [-5V, +5V], and the number of bits is 8, the digital output will be:

D_O = int((2 - (-5))/(5 - (-5)) · 2^8) - 2^8/2 = int(7 · 256/10) - 128 = 179 - 128 = 51
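The bipolar conversion formula can be expressed directly in code; the call below reproduces Example 15 (an illustration of the formula only, not of any specific converter):

    def adc_bipolar_output(v_in, v_low, v_high, n_bits):
        """Digital output of a 2's complement (bipolar) ADC:
        Do = int((Vi - Vrl)/(Vru - Vrl) * 2^N) - 2^N/2."""
        return int((v_in - v_low) / (v_high - v_low) * 2 ** n_bits) - 2 ** n_bits // 2

    print(adc_bipolar_output(2.0, -5.0, 5.0, 8))   # 51, as in Example 15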

19.4.7 Resolution

Central to the performance of an A/D converter is its resolution, often expressed in bits. An A/D converter essentially divides the analog input range into 2^N bins, where N is the number of bits. Since the output of an ADC changes in discrete steps (one LSB), there will be a resolution error, also known as the quantizing error:

±0.5 LSB

The input resolution error will then be:

±0.5 · (V_ru - V_rl)/2^N  [V]

The input resolution error for the example above (range [-5, 5] and 8 bits) will be:

±0.5 · (10/256) V = ±19.5 mV

The "standard" resolution² of A/D converters ranges from about 12 bits to about 22 bits, depending on the price and the conversion time.

Resolution, precision, and accuracy are often mixed up. The difference between resolution and precision: resolution is the fineness to which an instrument can be read, and precision is the fineness to which an instrument can be read repeatably and reliably. This means that the difference between resolution and

² as of 2008


Figure 19.22: The difference between differential inputs and single ended inputs using an amplifier.

precision is repeatability. The difference between precision and accuracy is correctness. See Figures 17.11, 17.13, and 17.12.

19.4.8 Reference Voltage

A voltage reference is an electronic device (circuit or component) that produces a fixed (constant) voltage to the ADC irrespective of the loading on the device, power supply variation, and temperature. It is also known as a voltage source, but in the strict sense of the term, a voltage reference often sits at the heart of a voltage source (www.wikipedia.org 2006). Voltage references are used in ADCs and DACs to specify the input or output voltage ranges.

19.4.9 Single-Ended and Differential Inputs

Another important consideration when specifying analog data acquisition hardware is whether to use single-ended or differential inputs. In short, single-ended inputs are less expensive but can be problematic if differences in ground potential exist (www.wikipedia.org 2006).

In a single-ended configuration, the signal sources and the input to the amplifier are referenced to ground. This is adequate for high level signals when the difference in ground potential is relatively small. A difference in ground potentials, however, will create an error-causing current flow through the ground conductor, otherwise known as a ground loop (www.wikipedia.org 2006). The input voltage is measured with reference to ground and compared against the reference voltage.

Differential inputs, in contrast, connect both the positive and negative inputs of the amplifier to both ends of the actual signal source. Any ground-loop induced voltage appears in both ends and is rejected as common-mode noise. The downside of differential connections is that they are essentially twice as expensive as single-ended inputs; an eight-channel analog input board can handle only four differential inputs (www.wikipedia.org 2006). The input voltage is measured as the difference between the input lines and compared against the reference voltage.

Figure 19.22 shows the difference between these inputs for an amplifier, showing that DI (differential) inputs do not use the ground as reference for the input signals. DI inputs will also require more input connections, as each input has a separate signal and return connection. The input signals to the DAQ system are first connected to the multiplexer. The number of connections will depend on whether DI or SE (single-ended) inputs are used; normally the number of SE inputs will be twice the number of DI inputs. The inputs must be either all DI or all SE inputs; it is not possible to mix SE and DI inputs on the same DAQ device. If one of the inputs needs to be connected as a DI input, all the inputs must be DI inputs. See Figure 19.23. The advice is to use DI inputs if:

• the input signal has a low level, normally less than 1 volt,

• the wires connecting the signal are longer than 3 meters,

• one of the input signals uses a reference different from the ground reference.

19.4.10 Number of channels

It is important to acknowledge that a multiplexer does reduce the frequency with which data points are acquired, and that the Nyquist sample-rate criterion still must be observed. During a typical data


Figure 19.23: The usage of SE or DI inputs in a DAQ system.

acquisition process, individual channels are read in turn sequentially. This is called standard, or distributed, sampling. A reading of all channels is called a scan. Because each channel is acquired and converted at a slightly different time, however, a skew in sample time is created between data points.

19.4.11 Scaling

Because A/D converters work best on signals in the 1-10 V range, low voltage signals may need to be amplified before conversion, either individually or after multiplexing on a shared circuit. Conversely, high voltage signals may need to be attenuated.

Amplifiers can also boost an A/D converter's resolution of low-level signals. For example, a 12-bit A/D converter with a gain of 4 can digitize a signal with the same resolution as a 14-bit converter with a gain of 1. It is important to note, however, that fixed-gain amplifiers, which essentially multiply all signals proportionately, increase sensitivity to low voltage signals but do not extend the converter's dynamic range.

Programmable gain amplifiers (PGAs), on the other hand, can be configured to automatically increase the gain as the signal level drops, effectively increasing the system's dynamic range. A PGA with three gain levels set three orders of magnitude apart can make a 12-bit converter behave more like an 18-bit converter. This function does, however, slow down the sample rate.

19.4.12 Range, Gain and Measured Precision

The range of the DAQ system can often be configured to different ranges (gains) like [-10V, +10V], [-5V, +5V], [-2V, +2V], and [-1V, +1V]. The precision will depend on the output signal of the sensor device and the corresponding input range of the DAQ system. The input range should be as close as possible to the sensor signal range to have as good precision as possible.

19.4.13 Software calibration

Sometimes you need an accurate reference, more accurate than the product cost will support. When manual adjustment is out of the question, the software can compensate for reference voltage variations. This is typically done by providing a known, precise input, which is used to calibrate the ADC. This reference can be very precise (and very expensive) because only a few are needed for the production line.
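A common software approach is a two-point (gain and offset) correction. The sketch below is a minimal illustration with assumed raw readings for two known reference inputs; it is not tied to any particular ADC.

    def two_point_calibration(raw_low, raw_high, ref_low, ref_high):
        """Return a function mapping raw ADC counts to calibrated values,
        based on two known reference inputs (simple gain/offset model)."""
        gain = (ref_high - ref_low) / (raw_high - raw_low)
        offset = ref_low - gain * raw_low
        return lambda raw: gain * raw + offset

    # Assumed readings: 410 counts for a 1.000 V reference, 3690 counts for 9.000 V.
    calibrate = two_point_calibration(410, 3690, 1.0, 9.0)
    print(round(calibrate(2048), 3))   # calibrated voltage for a raw reading of 2048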

19.4.14 Transfer of A/D conversion to system memory

The A/D converter will use some time to convert the analog signal to a digital representation, and normally the A/D converter will inform the controller when the conversion has finished. The controller will read the converted value from the A/D converter and write the value to a specific location in the system memory. A First In First Out (FIFO) buffer can be part of the signal conditioning unit, reading the converted values from the A/D converter. The transfer from the A/D converter or the FIFO buffer to the system memory can be done in

different ways. These are:


Figure 19.24: The usage of limit checks and validation checks for a value read from a sensor (Skeie 2008).

1. polling; the controller waits for the A/D converter to finish the conversion. This is seldom used, as it is a waste of the controller's time,

2. interrupt; the controller gets a signal from the A/D converter every time the operation has finished. The controller will enter an interrupt function only when informed by the A/D converter,

3. Direct Memory Access (DMA) transfer; the A/D converter needs a memory interface (DMA controller) and will transfer the converted data to system memory without involvement of the controller. The A/D converter (DMA controller) and the controller cannot use the memory at the same time; normally the DMA controller will have priority, and the controller must wait for memory access while the memory is in use by the DMA controller. The controller can however continue with any other tasks that do not access the system memory.

19.5 Range check of signal values

Reading values from sensors can give good, wrong, or illegal values. It is important to have some sort of value checks of the sensor device signals. Some of these value checks are (Pettersen 1984):

1. limit checks; checking that the sensor value is within the valid range of the sensor,

2. validation checks; checking that the sensor value is within a "window" of the last value,

3. redundancy checks; checking with other sensors.

Limit checks and validation checks can always be used, as they can be included in the software. The usage of limit checks and validation checks is shown in Figure 19.24.
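A minimal sketch of the first two checks could look as follows; the limits, the window size, and the return format are illustrative assumptions, not taken from the course material:

    def check_sensor_value(value, last_value, low_limit, high_limit, max_change):
        """Limit check (value inside the sensor range) and validation check
        (value inside a window around the previous value)."""
        if not (low_limit <= value <= high_limit):
            return False, "limit check failed"
        if last_value is not None and abs(value - last_value) > max_change:
            return False, "validation check failed"
        return True, "ok"

    print(check_sensor_value(21.4, 21.1, 0.0, 100.0, 5.0))    # (True, 'ok')
    print(check_sensor_value(150.0, 21.1, 0.0, 100.0, 5.0))   # (False, 'limit check failed')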


Chapter 20

Communication

The communication between the sensor devices and the DAQ system can use single cables (point to point), a bus (multidrop), or wireless links. The communication uses some sort of electrical interface and a protocol for communication. The protocol is a set of rules defining the hardware and software of the communication. Figure 20.1 shows the measurement system with the sensor devices, the DAQ system, and the communication part between these devices. The communication can be based on point to point connections or a bus, with either cable or wireless based connections.

20.1 Communication architectures

Figure 20.2 shows a point to point connection to the left, and bus connections in the middle and to the right. Wireless is a type of bus connection without the wire.

20.1.1 Current loop communication

Current loop communication uses the same pair of cables for both the power and the signal. The signal is very immune to noise and can be used over a long distance. Each sensor must be connected with a set of separate cables, and only one analog signal can be read. Figure 20.3 shows a 4-20 mA sensor device connected to a measurement system where both the signal and the power to the sensor device use the same pair of cables.

The popular 4-20 mA interface is a current loop communication signal where the analog signal from the sensor varies between 4 mA and 20 mA. The sensor device often contains an A/D converter and a D/A converter to convert the signal from the sensing device to the signal for the analog output device. The signal from the sensing device is normally non-linear, and the conversion also contains the linearization of the signal. The output of a 4-20 mA device is normally a linear signal.

A pressure sensor device using the 4-20 mA interface is shown in Figure 20.4. The signal from the pressure sensor is converted to a digital signal, compensated by the temperature (linearized), and converted to a 4-20 mA signal using the D/A converter and an amplifier.
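Converting the measured loop current back to an engineering value is a simple linear scaling, assuming a linear 4-20 mA transmitter. The sketch below uses an assumed 0-10 bar pressure range and simple sanity limits on the current; both are illustrative choices, not part of the course material.

    def current_to_value(current_ma, range_low, range_high):
        """Convert a 4-20 mA loop current to an engineering value,
        assuming a linear transmitter (4 mA = range_low, 20 mA = range_high)."""
        if not 3.8 <= current_ma <= 20.5:          # simple sanity limits on the loop current
            raise ValueError("loop current out of range: %.2f mA" % current_ma)
        return range_low + (current_ma - 4.0) / 16.0 * (range_high - range_low)

    # A pressure transmitter spanning 0-10 bar (assumed range):
    print(current_to_value(12.0, 0.0, 10.0))   # 5.0 bar at mid-scale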

Figure 20.1: The communication between the sensor devices and the measurement system. The transmitter device of the sensor device will be responsible for the interface and protocol.



Figure 20.2: Point to point and bus connections.

Figure 20.3: The current loop communication using one pair of cables for both the power and the signal for the sensor device (www.analogservices.com: feb-2010).

Figure 20.4: The block diagram of a 4-20 mA pressure sensor with a sensing device and a temperature sensor for temperature compensation. The output signal can be with or without the HART interface in addition to the analog 4-20 mA signal.


Current loop communication can be used for both analog and digital signals. The HART protocol is an extension where a digital signal can be added to the analog signal for more information on the same connection. Often HART is used in conjunction with the 4-20 mA interface.

20.1.2 Serial communication

Serial communication here means mainly RS-485, a simple bus where several devices can be connected at the same time. The signals are digital signals, and the structure must follow a master/slave principle. The master is in charge of the communication at all times, requesting data from the slaves in a cyclic manner. RS-485 defines only the physical interface; the software services will depend on the vendors. Serial communication can be either point to point communication or bus communication.

20.1.3 Network communication

Network communication works the same way as an RS-485 bus, but has a defined software protocol so sensors from different vendors can be used. The protocol will also define the way the master/slave principle should work. Network communication will be bus communication.

20.1.4 Instrument control buses

Different instrumentation buses exist, and the purpose of these buses is to provide I/O abstraction and instrument abstraction using device drivers. These buses are network based.

1. LXI: LAN based bus,

2. USB: Serial bus,

3. IEEE-1394 / FireWire: Serial bus

4. IEEE-488 / GPIB: Based on the HP Interface Bus (HP-IB), now the General Purpose Interface Bus: 8-bit parallel bus, maximum 15 devices, parallel port,

5. PXI: Computer bus (PCI eXtensions for Instrumentation), internal cards,

6. PCMCIA: Personal Computer Memory Card International Association, often one or two slots on a PC, parallel port.

20.1.5 Wireless communication

Wireless communication is network communication without the cables. When several wireless sensors are interconnected, the system becomes a wireless sensor network (WSN). The wireless system however requires a gateway for the connection between the sensors and the DAQ system. Advantages of wireless communication:

1. Avoid cabling,

2. Easy to install new sensor devices.

Each device in the network is called a node, and a node in an active network must consist of a transmitter/receiver unit (radio), a microcontroller unit, a sensor device, and a power supply unit. This is shown in Figure 20.5.

Wireless communication can use standalone nodes or nodes connected in a network. A network of sensor nodes is called a sensor network, and a wireless sensor network when wireless nodes are used. A sensor network is a collection of sensor devices cooperating to measure the sensing parameters, with a single point of connection. Things to take into consideration when using wireless communication and wireless nodes:

1. Power consumption,

2. Number of nodes in the network,


Figure 20.5: The contents of an active node in a wireless sensor network.

Figure 20.6: The relationship between wireless standards, the data rate, the range, and the type of transfer "objects" (www.iar.com: dec-08).

3. Security.

Some of the standard wireless protocols are shown in Figure 20.6. This Figure shows the relationship between some of the wireless standards, the data rate, the range, and the type of transfer "objects". The ranges are divided into personal area networks (PAN), local area networks (WLAN), and wide area networks (WWAN). Short range in the figure is 10 m to 100 m, and long range may be kilometers. Low data rate is kilobytes and high data rate is up to gigabytes. A set of standards exists, like RFID, Bluetooth, ZigBee and WiFi:

1. RFID is a passive sensor, only transmitting the information when "asked" by a transmitter.

2. Bluetooth is an active sensor system, and has a limitation of 8 concurrent nodes, one master and 7 slaves.

3. ZigBee is an active sensor system. The maximum number of nodes for ZigBee is 65535, with different types of architectures. One architecture used a lot is the mesh network, meaning that a node in the network only connects to the nearest neighbour nodes.

20.2 Wireless Sensor

The motivations for using wireless technology are:


Figure 20.7: The structure of a bar code (www.taltech.com: jun-09).

1. no need for a cable,

2. installation in remote and hostile areas,

3. temporary and mobile installations,

4. provides larger flexibility,

5. enables new types of applications.

20.2.1 Bar Codes

Bar codes are like a printed version of the Morse code. Different bar and space patterns are used to represent different characters. Sets of these patterns are grouped together to form a "symbology". There are many types of bar code symbologies, each having their own special characteristics and features. Most symbologies were designed to meet the needs of a specific application or industry.

Bar codes provide a simple and inexpensive method of encoding text information that is easily read by inexpensive electronic readers. Bar coding also allows data to be collected rapidly and with extreme accuracy. A bar code consists of a series of parallel, adjacent bars and spaces. Predefined bar and space patterns or "symbologies" are used to encode small strings of character data into a printed symbol. Bar codes can be thought of as a printed type of the Morse code, with narrow bars (and spaces) representing dots, and wide bars representing dashes. A bar code reader decodes a bar code by scanning a light source across the bar code and measuring the intensity of light reflected back by the white spaces. The pattern of reflected light is detected with a photodiode which produces an electronic signal that exactly matches the printed bar code pattern. This signal is then decoded back to the original data by inexpensive electronic circuits. Due to the design of most bar code symbologies, it does not make any difference if you scan a bar code from right to left or from left to right.

or more data characters, optionally one or two check characters and a stop pattern. Figure 20.7 showsthe basic structure of a bar code.There are a variety of di¤erent types of bar code encoding schemes or "symbologies", each of which

were originally developed to ful�ll a speci�c need in a speci�c industry. Several of these symbologieshave matured into de-facto standards that are used universally today throughout most industries.

Bar Code reader

There are several different types of bar code readers available, each using a slightly different technology: pen type readers (e.g. bar code wands), laser scanners, CCD readers and camera-based readers. Pen type readers consist of a light source and a photo diode that are placed next to each other in the tip of a pen or wand.

• Pen type readers and laser scanners; the photo diode measures the intensity of the light reflected back from the light source and generates a waveform that is used to measure the widths of the bars and spaces in the bar code. Dark bars in the bar code absorb light and white spaces reflect light, so that the voltage waveform generated by the photo diode is an exact duplicate of the bar and space pattern in the bar code. This waveform is decoded by the scanner in a manner similar to the way Morse code dots and dashes are decoded.

• CCD readers; use an array of hundreds of tiny light sensors lined up in a row in the head of the reader. Each sensor can be thought of as a single photo diode that measures the intensity of the


Figure 20.8: A UPC bar code with product type=6, manufacturer code=39382, product code=00039, and checksum=3 (www.howstuffworks.com: jun-09).

light immediately in front of it. The important difference between a CCD reader and a pen or laser scanner is that the CCD reader measures emitted ambient light from the bar code, whereas pen or laser scanners measure reflected light of a specific frequency originating from the scanner itself.

• Camera based readers; use a small video camera to capture an image of a bar code. The reader then uses digital image processing techniques to decode the bar code. Video cameras use the same CCD technology as in a CCD bar code reader, except that instead of having a single row of sensors, a video camera has hundreds of rows of sensors arranged in a two dimensional array so that they can generate an image.

Barcode protocols

Code 39 The Normal CODE 39 is a variable length symbology that can encode the following 44 characters: 1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ-. *$/+%. Code 39 is the most popular symbology in the non-retail world and is used extensively in manufacturing, military, and health applications. Each Code 39 bar code is framed by a start/stop character represented by an asterisk (*). The asterisk is reserved for this purpose and may not be used in the body of a message.

The FULL ASCII version of Code 39 is a modification of the NORMAL (standard) version that can encode the complete 128 ASCII character set (including asterisks). The Full ASCII version is implemented by using the four characters $/+% as shift characters to change the meanings of the rest of the characters in the Normal Code 39 character set.

UPC Universal Product Code. UPC-A is a 12 digit, numeric symbology used in retail applications. UPC-A symbols consist of 11 data digits and one check digit. The first digit is a number system digit that normally represents the type of product being identified. The following 5 digits are a manufacturer code and the next 5 digits are used to identify a specific product. Figure 20.8 shows an example of a UPC bar code. The coding of the numbers is:

Number   Width
0        3 2 1 1
1        2 2 2 1
2        2 1 2 2
3        1 4 1 1
4        1 1 3 2
5        1 2 3 1
6        1 1 1 4
7        1 3 1 2
8        1 2 1 3
9        3 1 1 2


The bar code starts with the standard start code of 1-1-1 (bar-space-bar) and ends with the same code; the stop character is a 1-1-1 (bar-space-bar). In Figure 20.8, starting from the left is the start code: a one-unit-wide black bar followed by a one-unit-wide white space followed by a one-unit-wide black bar (bar-space-bar). Then all the 12 numbers follow as combinations of black bars and white spaces, and the bar code ends with the stop character. Note that in the middle there is a standard 1-1-1-1-1 (space-bar-space-bar-space), which is important because it means the numbers on the right are optically inverted!

The last number is the check digit, used as a checksum of the bar code. The check algorithm is (using the example from Figure 20.8):

1. Add together the value of all of the digits in odd positions (digits 1, 3, 5, 7, 9 and 11); 6 + 9 + 8 + 0 + 0 + 9 = 32,

2. Multiply that number by 3; 32 · 3 = 96,

3. Add together the value of all of the digits in even positions (digits 2, 4, 6, 8 and 10); 3 + 3 + 2 + 0 + 3 = 11,

4. Add this sum to the value in step 2; 96 + 11 = 107,

5. The check digit is the number that must be added to the value in step 4 to make it a multiple of 10; 107 + 3 = 110.
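The same algorithm can be written compactly in Python; the digits below are the 11 data digits from Figure 20.8:

    def upc_check_digit(digits):
        """Check digit for the 11 data digits of a UPC-A code, following the steps above."""
        odd_sum = sum(digits[0::2])     # digits 1, 3, 5, 7, 9 and 11
        even_sum = sum(digits[1::2])    # digits 2, 4, 6, 8 and 10
        total = odd_sum * 3 + even_sum
        return (10 - total % 10) % 10

    print(upc_check_digit([6, 3, 9, 3, 8, 2, 0, 0, 0, 3, 9]))   # 3, as in Figure 20.8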

UPC numbers are assigned to specific products and manufacturers by the Uniform Code Council (UCC).

EAN

The European Article Numbering (EAN) system (also called JAN in Japan) is a European version of UPC. It uses the same size requirements and a similar encoding scheme as the UPC codes.

EAN-8 encodes 8 numeric digits consisting of two country code digits, five data digits and one check digit. B-Coder will accept up to 7 numeric digits for EAN-8. B-Coder will automatically calculate the check digit for you. If you enter less than 7 digits or if you enter any digits other than 0 to 9, B-Coder will display a warning message. If the option "Enable Invalid Message Warnings" in the Preferences menu is not selected and you do not enter 7 digits, B-Coder will left pad short messages with zeros and truncate longer messages so that the total length is 7.

EAN-13 is the European version of UPC-A. The difference between EAN-13 and UPC-A is that EAN-13 encodes a 13th digit into the parity pattern of the left six digits of a UPC-A symbol. This 13th digit, combined with the 12th digit, usually represents a country code.

Summary

Symbology   Data capacity
UPC-A       12 numeric digits - 11 user specified and 1 check digit.
UPC-E       7 numeric digits - 6 user specified and 1 check digit.
EAN-8       8 numeric digits - 7 user specified and 1 check digit.
EAN-13      13 numeric digits - 12 user specified and 1 check digit.
Code 39     Variable length alphanumeric data - the practical upper limit is dependent on the scanner and is typically between 20 and 40 characters.
Code 93     Code 128 is more efficient at encoding data than Code 39 or Code 93. Code 128 is the best choice for most general bar code applications.
Code 128    Code 39 and Code 128 are both very widely used while Code 93 is rarely used.

Barcode system

The bar code system must consist of a bar code reader and a computer, as shown in Figure 20.9. The bar code application on the computer converts the bar code to some valid information in the computer. The advantage is that the bar code on the items can stay the same, while the data can be updated in the computer only.


Figure 20.9: The usage of a bar code reader in a computer system. The bar code is converted to valid information by a computer application.

20.2.2 RFID

Radio-frequency identification (RFID) is an automatic identification method, relying on storing and remotely retrieving data using devices called RFID tags or transponders.

An RFID tag is an object that can be applied to or incorporated into a product, animal, or person for the purpose of identification using radio waves. Some tags can be read from several meters away and beyond the line of sight of the reader. There are two types of RFID tags: active RFID tags, which contain a battery (up to six years with active sleep mode), and passive RFID tags, which have no battery. Passive RFID tags use the electrical current induced in the antenna by the incoming radio frequency signal to provide power for the CMOS integrated circuit in the tag to power up and transmit a response. Most RFID tags contain at least two parts:

1. an integrated circuit for storing and processing information, modulating and demodulating a (RF) signal, and other specialized functions,

2. an antenna for receiving and transmitting the signal. Chipless RFID allows for discrete identification of tags without an integrated circuit, thereby allowing tags to be printed directly onto assets at a lower cost than traditional tags.

Figure 20.10 shows a block diagram of an RFID transponder chip. The only external component is the coil, or antenna, connected to the Coil1 and Coil2 pins. The voltage induced in the coil is fed to a full wave rectifier block and used as the voltage generator for the chip. The voltage is available between VDD and VSS. The clock frequency is generated from the clock extractor block. This clock frequency is used by the sequencer to clock out the data in the memory array block. The data, 64 bits, are shifted out as serial data into the data encoder block. In the data encoder the data is processed and fed to the data modulator block according to a protocol. The data modulator drives the antenna coil and produces an amplitude modulated HF signal with the data in the side bands. Different protocols are used in RFID systems for coding the data, and both the transmitter and the receiver must agree on using the same protocol.

The usage of an RFID reader in a computer system is shown in Figure 20.11. The RFID code is converted by a computer application to some sort of valid information. The advantage of this solution is that any change is updated in the computer application, not in the tag code. One example is the price of an item: when the price is changed, only the conversion table has to be updated, not the RFID tags on the items.

This is the same solution as with a bar code system, as shown in Figure 20.9. An advantage of the RFID system is the distance between the items and the RFID reader; this distance can be longer, and fewer obstacles can stop the communication between the items and the reader.
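In software this conversion is typically just a lookup table maintained on the computer. The sketch below is a hypothetical illustration; the code string and the item data are invented for the example, and the same table would serve a bar code or an RFID reader.

    # Hypothetical conversion table: the tag or bar code on the item never
    # changes, only this table is updated (for example when the price changes).
    item_table = {
        "639382000393": {"name": "Example item", "price": 12.50},
    }

    def lookup(code):
        """Convert a scanned bar code or RFID tag code to item information."""
        return item_table.get(code)

    print(lookup("639382000393"))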

20.2.3 RFID or Bar Codes

Both RFID and bar codes can be used as forms of automated data collection. RFID uses a tag applied to a product in order to identify and track it via radio waves, using an active sensor (an integrated circuit and an antenna). A bar code is an optical representation of data represented by the width and spacing of parallel lines, using a passive sensor (a printed label).


Figure 20.10: Block diagram of the EM4100 transponder chip (EM Microelectronic: jun-09).

Figure 20.11: The usage of an RFID reader in a computer system; the RFID code is converted to valid information by a computer application.


Figure 20.12: The globe with a set of satellites, some visible and some invisible from a specific location on the globe (www.wikipedia.org 2006).

• Advantages of RFID; RFID technology is more comprehensive than barcode technology, allowing tags to be read from a greater distance. In addition, RFID tags can be read much faster than barcodes, because barcodes require a direct line of sight, whereas about 40 RFID tags can be read at once.

• Advantages of barcodes; barcodes are cheaper than RFID technology. Barcode tags are also much lighter and smaller than RFID tags, making them easier to use.

20.2.4 GPS

The global positioning system (GPS) is a satellite navigation system with up to 32 (nominally 24) satellites orbiting the earth in 12 hour orbits (paths). The satellites are distributed such that, on average, there are 12 satellites visible in each hemisphere (above the horizon). The satellites are time synchronized using onboard atomic clocks, and continuously transmit the time and other information. The transmission uses a common carrier, but each satellite has its own pseudo random number sequence. Another encrypted carrier is available for military purposes using a "super secret GPS password". The non-encrypted carrier can be used by any GPS receiver. The earth with the satellites is shown in Figure 20.12, with a set of visible and non-visible satellites. The GPS receiver is a satellite receiver which listens for the signals and measures the time of arrival, comparing it to the GPS time when the data was sent. This information provides a pseudo-range to each satellite that is received, and the range is used to compute the position. Four satellite signals are needed to get a three-dimensional position.

is located and uses the strongest signals to compute the position. The more satellites it uses, the betterwill the accuracy be.In order to know which satellite is where, will all satellites from time to time transmit a �database�

containing the orbital data for all the satellites. The GPS receiver should store this database in nonvolatilememory to preserve this information between power cycles.At a cold start (power up) must the GPS receiver cycles through all possible satellites codes until

it receives a satellite signal with su¢ cient signal to noise ratio to download the orbital information andcurrent time. The cold starts can take up to 15 minutes, depending on the GPS receivers. A warn start,for example a watchdog reset, or moved a large distance, the startup time is often less than 2 minutes.The protocol used for many GPS receivers is the NMEA-0183 (National Marine Electronics Asso-

ciation). This protocol uses a simple ASCII, serial communication protocol that de�nes how data istransmitted in a �sentence�from one transmitter to one receiver at a time.

20.3 Wireless sensor network

Wireless sensor networks are networks of small, battery-powered, memory-constrained devices named sensor nodes, which have the capability of wireless communication over a restricted area. A sensor node is a device in a wireless sensor network that is capable of performing some processing, gathering sensory

Page 161: DAQ_Training_Course.pdf

CHAPTER 20. COMMUNICATION 153

Figure 20.13: A control system based on wired sensor network (Skavhaug & Pettersen 2007).

Figure 20.14: A system controlling a plant or a process using a closed control loop.

information and communicating with other connected nodes in the network. A node is a connectionpoint for data transmission.The usage of wireless networks is being more and more popular, including wireless sensor networks.

It should be possible to achieve up to 10% reduction of construction cost by utilizing wireless instrumen-tation in new plants and facilities1 . A control system using cabling for the sensors and the actuators isshown in Figure 20.13.The system needs to receive the sensor data at the right time and need to know if any problem with

the sensor data like errors, no contact, and so on. The system also needs to control the actuators atthe right time and to receive the right feedback. The control loop is closed2 as long as the connectionbetween the control system, the actuators and the sensors is OK.A system controlling a plant or a process using measurement and control in a closed control loop is

shown in Figure 20.14.A control system using wireless communication for the sensors and the actuators is shown in Figure

20.15.1Dag Sjong, StatoilHydro, at Servomøtet 2007.2A closed control loop is using feedback from the plant/process for the control algorithm.

Figure 20.15: A control system based on a wireless sensor network (Skavhaug & Pettersen 2007).


Figure 20.16: The communication and sensing functions of an RFD, an FFD, and a coordinator in a sensor network.

The control loop will now be based on wireless communication, and the connection should still be OK, but what about the safety of the system? Is it better to use cables? Or should the system be designed to have the same safety level with wireless communication? The system can, for example, be designed with safety states, meaning that different parts will enter a safe condition if the communication fails. A network with active sensor nodes often consists of 3 different types of sensor devices:

1. Reduced Functional Devices (RFD), often sensing devices that only send information,

2. Fully Functional Devices (FFD), sensing devices that both send and receive (route) information,

3. Coordinator, often only one device in the network, coordinating the network and acting as the gateway to other systems.

An overview of the sensing and networking functionality of such devices is shown in Figure 20.16. Some specifications and/or requirements for a wireless sensor network:

1. Sensors spread across a geographical area using no cable at all!

2. Each sensor node has: wireless communication capability, some level of intelligence for signal processing, and networking of the data.

3. Sensor node requirements: small, battery powered, a radio with a range of tens of meters, an embedded processor, and storage.

4. Network requirements: a large number of sensors (1,000 to 10,000 nodes), low energy use, network self-organization, querying ability, and different topologies, but mesh networks are used most (regularly distributed networks that allow transmission only to the nearest nodes).

5. Low power usage: communication is the most energy-consuming operation, and transmitting one bit costs about the same energy as 1000 instructions, so process data within the network wherever possible!

6. Protocols;

(a) International Standards

i. ZigBee
ii. Bluetooth
iii. WirelessHART
iv. Smart Sensors (IEEE 1451)

(b) Proprietary Solutions

i. Dust Networks
ii. Sensicast

(c) Smart Home Networks

i. X-10 protocol
ii. Consumer Electronic Bus (CEBus)
iii. LonWorks


Figure 20.17: The different types of network nodes in a ZigBee network.

20.3.1 ZigBee

ZigBee is the name of a specification for a suite of high-level communication protocols using small, low-power digital radios based on the IEEE 802.15.4 standard for wireless personal area networks (WPANs) (www.wikipedia.org 2006).

The raw data rate is 250 kbit/s per channel in the 2.4 GHz band, 40 kbit/s per channel in the 915 MHz band, and 20 kbit/s in the 868 MHz band. Transmission range is between 10 and 75 meters (33-246 feet), although it is heavily dependent on the particular environment. The maximum output power of the radios is generally 0 dBm (1 mW). The basic channel access mode specified by IEEE 802.15.4-2003 is "carrier sense, multiple access/collision avoidance" (CSMA/CA).

ZigBee protocols are intended for use in embedded applications requiring low data rates and low power consumption. ZigBee's current focus is to define a general-purpose, inexpensive, self-organizing mesh network that can be used for industrial control, embedded sensing, medical data collection, smoke and intruder warning, building automation, home automation (domotics), etc. The resulting network will use very small amounts of power, so individual devices might run for a year or two on the originally installed battery.

The software is designed to be easy to develop on small, cheap microprocessors. The radio design used by ZigBee has been carefully optimized for low cost in large-scale production. It has few analog stages and uses digital circuits wherever possible.

All the routers are mains-powered devices (lamps, heat pumps, lighting fixtures, smoke alarms) and the "end" devices are battery-powered (switches, thermostats, motion detectors).

Protocol                  Type      Range        Data rate
ZigBee (IEEE 802.15.4)    2.4 GHz   10 m - 75 m  250 kbit/s
                          915 MHz   10 m - 75 m  40 kbit/s
                          868 MHz   10 m - 75 m  20 kbit/s

The number of ZigBee devices in a network depends on the topology, but can be up to 65535 nodes. The ZigBee standard includes three different ZigBee devices (nodes):

• ZigBee End Device (ZED): A device containing just enough functionality to communicate with a parent node; the device cannot relay data from other devices.

• ZigBee Router (ZR): This device functions as a router, passing data from other devices, in addition to acting as a ZED.

• ZigBee Coordinator (ZC): The gateway to other networks, forming the root of the network tree. A ZigBee network will only have one ZC. The ZED and/or ZR functionality can also be part of the ZC.

An example of a ZigBee network is shown in Figure 20.17.


20.3.2 Bluetooth

Bluetooth is an industrial specification for wireless personal area networks (WPANs). Bluetooth provides a way to connect and exchange information between devices such as mobile phones, laptops, PCs, printers, digital cameras, and video game consoles via a secure, globally unlicensed, short-range radio frequency (www.wikipedia.org 2006).

Class      Power     Range
Class #1   100 mW    ≈ 100 m
Class #2   2.5 mW    ≈ 10 m
Class #3   1 mW      ≈ 1 m

A Bluetooth device playing the role of the "master" can communicate with up to 7 devices playing the role of the "slave". This network, a group of up to 8 devices (1 master and 7 slaves), is called a piconet. A piconet is an ad-hoc computer network of devices using Bluetooth technology protocols to allow one master device to interconnect with up to seven active slave devices. Up to 255 further slave devices can be inactive, or parked, which the master device can bring into active status at any time.

At any given time, data can be transferred between the master and one slave, but the master switches rapidly from slave to slave in a round-robin fashion. (Simultaneous transmission from the master to multiple slaves is possible, but not used much in practice.) Either device may switch the master/slave role at any time.

The Bluetooth specification allows connecting 2 or more piconets together to form a scatternet, with some devices acting as a bridge by simultaneously playing the master role in one piconet and the slave role in another piconet.

The Bluetooth protocol operates in the license-free ISM band at 2.45 GHz. In order to avoid interfering with other protocols which use the 2.45 GHz band, the Bluetooth protocol divides the band into 79 channels (each 1 MHz wide) and changes channels up to 1600 times per second. Implementations with versions 1.1 and 1.2 reach speeds of 723.1 kbit/s. Version 2.0 implementations feature Bluetooth Enhanced Data Rate (EDR) and thus reach 2.1 Mbit/s. Technically, version 2.0 devices have a higher power consumption, but the three times faster rate reduces the transmission times, effectively reducing consumption to half that of 1.x devices (assuming equal traffic load).

Version   Data rate      Note
1.1       723.1 kbit/s
1.2       723.1 kbit/s
2.0       2.1 Mbit/s     EDR

20.3.3 Wireless HART

WirelessHART is a wireless mesh network communications protocol for process automation applications. It adds wireless capabilities to the HART protocol while maintaining compatibility with existing HART devices, commands, and tools. Gateways enable communication between the wireless HART devices and host applications in the existing plant communications network. The network uses IEEE 802.15.4 compatible radios operating in the 2.4 GHz radio band. The radios use channel hopping for communication security and reliability, as well as TDMA-synchronized, latency-controlled communication between devices on the network.

WirelessHART properties
Radio standard     IEEE 802.15.4-2006
Frequency band     2.4 GHz
Channel hopping    Yes, packet basis
Distance           maximum 250 m
Topologies         Mesh and Star


Figure 20.18: An overview of a WirelessHART network with a set of wireless nodes and two gateways (www.hartcomm.org: nov-09).

20.3.4 Wireless Cooperation Team

The Wireless Cooperation Team (WCT) is a collaboration between the Fieldbus Foundation, the HART Communication Foundation, and Profibus. The team tries to agree on a wireless technology for the manufacturing and process industries worldwide. The team is developing an interface specification and compliance guidelines to integrate a universally accepted wireless solution into the HART, Foundation Fieldbus, Profibus, and Profinet communications networks. The team was founded in 2008.

20.3.5 Comparison of wireless standards

Figure 20.19 shows a comparison of some of the key properties of the wireless standards.

20.4 Distributed Systems

Distributed computing deals with hardware and software systems containing more than one processing element or storage element, concurrent processes, or multiple programs, running under a loosely or tightly controlled regime.

In distributed computing, a program is split up into parts that run simultaneously on multiple computers communicating over a network. Distributed computing is a form of parallel computing, but parallel computing is most commonly used to describe program parts running simultaneously on multiple processors in the same computer. Both types of processing require dividing a program into parts that can run simultaneously, but distributed programs often must deal with heterogeneous environments, network links of varying latencies, and unpredictable failures in the network or the computers. An example of a distributed system is shown in Figure 19.3.

A common architecture is a client/server architecture, where the server is the owner of a resource and the clients request information from the server.


Figure 20.19: Comparison of some of the key properties of the WiFi, the Bluetooth, and the ZigBee standards.


Chapter 21

Discrete Sampling

Digital systems record signals at discrete times and record no information about the signal between these times, see Figure 19.6. Sampling thus means taking snapshots of analog signals at discrete times. The time interval between the samples is kept constant in most applications and is known as the sampling interval; its reciprocal is the sampling rate. The designer or user of such digital systems, like a DAQ system, must be aware of the problems of recording at discrete times and take the right actions to get the correct information from the analog signals. The conversion from an analog value to a binary number is called quantization. The binary number consists of a number of bits used for the conversion, normally in the range of 8 to 24 bits.

Figure 21.1 shows how a DAQ system separates the analog and digital sections of a measurement system; the digital section will only contain a digital representation of the analog signal at discrete time intervals. This is shown in the right part of the figure.
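As a small illustration of quantization, the sketch below maps an analog voltage to an N-bit code and back; the 0-5 V range and 12-bit resolution are assumed values, not taken from the text.

```python
# A minimal sketch of quantization: analog voltage -> N-bit code -> voltage.
def quantize(voltage: float, v_min: float = 0.0, v_max: float = 5.0, bits: int = 12) -> int:
    """Return the integer code an ideal ADC would produce for the given voltage."""
    levels = 2 ** bits - 1
    voltage = min(max(voltage, v_min), v_max)      # clamp to the input range
    return round((voltage - v_min) / (v_max - v_min) * levels)

def code_to_voltage(code: int, v_min: float = 0.0, v_max: float = 5.0, bits: int = 12) -> float:
    """Convert the code back to the voltage it represents."""
    return v_min + code / (2 ** bits - 1) * (v_max - v_min)

code = quantize(2.37)
print(code, code_to_voltage(code))   # the difference is the quantization error
```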

21.1 Sampling-rate theorem

When a computer system is used for recording analog signals at discrete times, the measurement reads a value only at specific times, with no measurements between these times. How should the user of the system know that he is getting all the necessary information from the analog signal at only these specific times?

The rate of measurement at these specific times is known as the sampling rate, and it is important to select the right sampling rate for the measurement. An incorrect selection of the sampling rate can lead to misleading results (Wheeler & Ganji 2004).

Let us start with a 10 Hz sine wave, shown in Figure 21.2, that we want to sample. The frequency of the sine wave in Figure 21.2 is 10 Hz, and we will try to sample with 5, 11, 18, and 20.1 samples per second. A 10 Hz sine wave has a periodic time of 100 ms ($T = 1/f$), and using 5 samples per second will give exactly the same value of the sine wave in every second period of the wave. Using 5 samples per second will then just give a constant value, as shown in Figure 21.3. Using 10 samples per second will give the same value, but at every period of the sine wave.

Figure 21.1: The DAQ system consists of an analog section and a digital section. The sensing part will always be analog, and somewhere in the system the DAQ will convert an analog value to a digital representation. This is shown in the right part of the figure.

Figure 21.2: A 10 Hz sine wave to be sampled (Wheeler & Ganji 2004).

Figure 21.3: A 10 Hz sine wave sampled with 5 or 10 samples per second (Wheeler & Ganji 2004).

This shows that the sampling frequency must be at least higher than the frequency of the signal to be sampled, but how much higher? Let us check with 11 samples per second, one sample per second higher than the frequency of the sine wave. Figure 21.4 shows the result of taking 11 samples per second of the 10 Hz sine wave signal. The result is a sine wave, but compare the time scale with Figure 21.2: the result is a sine wave with another frequency.

Raising the number of samples per second to 18 gives the result in Figure 21.5. The result is much better than in Figure 21.4, but the sampling is still not good enough. Raising the number of samples per second to 20.1, as shown in Figure 21.6, gives a signal with the same frequency information as in Figure 21.2. It turns out that for any sampling rate greater than twice the highest frequency $f_m$, the frequency $f_m$ will be apparent in the sampled information. This is the sampling-rate theorem:

$$f_s \geq 2 \cdot f_m$$

stating that the sampling frequency $f_s$ should be at least twice the highest frequency component of the original signal in order to reconstruct the original waveform (frequency) correctly (Wheeler & Ganji 2004).
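The effect can be reproduced numerically. The sketch below (a minimal example using numpy, not from the text, which refers to figures from Wheeler & Ganji) samples the 10 Hz sine at a too low and at a sufficient rate and reports the dominant frequency seen by the sampled data.

```python
# A minimal sketch of aliasing: an 11 S/s rate makes a 10 Hz sine look like 1 Hz.
import numpy as np

f_signal = 10.0            # Hz, the sine wave to be sampled
for fs in (11.0, 25.0):    # one rate below 2*f_signal, one safely above
    n = int(2.0 * fs)                          # two seconds worth of samples
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f_signal * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    print(f"fs = {fs:4.0f} S/s -> dominant frequency {freqs[spectrum.argmax()]:.1f} Hz")
```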

21.2 A/D conversion

A/D systems are called analog input subsystems, as they convert analog signals to digital signals for a computer. These analog input subsystems typically consist of eight or sixteen input channels, but only the number of channels actually used will influence the sampling rate of the system. Sampling will always be a time-dependent operation, and the analog signal should be kept constant during the conversion period. A sample and hold (S/H) circuit is normally used, consisting of a signal buffer, an electronic switch, and a capacitor.

Figure 21.4: The 10 Hz sine wave signal sampled with 11 samples per second (Wheeler & Ganji 2004).

Figure 21.5: The 10 Hz sine wave signal sampled with 18 samples per second (Wheeler & Ganji 2004).

Figure 21.6: Sampling of the sine wave at a sampling frequency of 2 · f.

Figure 21.7: The sampling of eight consecutive analog input channels, the sampling period, and the channel skew.

The operation of the S/H section will be:

1. The electronic switch connects the capacitor to the input signal through the signal buffer, and the capacitor is charged to the input voltage,

2. The electronic switch then disconnects the input signal, and the capacitor holds a "constant" voltage while the A/D converter is converting the analog voltage to a digital representation of the input signal,

3. The S/H section will be common for analog input systems with several input channels,

4. The charging is repeated for every conversion cycle of the A/D converter.

The DAQ system contains an analog multiplexer to be able to sample signals from several channels. The A/D converter is often an expensive component, while the analog multiplexer is a much cheaper component. When several channels share one A/D converter, the maximum sampling rate of each channel will be:

$$\text{Maximum sampling rate per channel} = \frac{\text{Maximum sampling rate of the A/D converter}}{\text{Number of channels used}}$$

Using a multiplexer means that the input channels, or analog input signals, are scanned one after the other. The sampling period is therefore the time from when one specific input channel is converted until the same input channel is converted again. This is the reason why the maximum sampling rate depends on the number of analog channels in use, and it means that the designer of the DAQ system should only scan the analog inputs that are actually used, not all available input channels. The maximum sampling rate for the DAQ system is obtained when using only one input channel. When several channels are used, those channels cannot be sampled simultaneously, and a time gap will exist between consecutive sampled channels. This time gap is called the channel skew. The timing for sampling the analog channels, the sampling period, and the channel skew are shown in Figure 21.7.
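The per-channel rate formula is easy to check numerically; the sketch below uses an assumed 100 kS/s converter, a value not taken from the text.

```python
# A minimal sketch of the per-channel rate when one ADC is multiplexed.
def max_rate_per_channel(adc_rate_hz: float, channels_used: int) -> float:
    """Maximum sampling rate per channel = ADC rate / number of channels used."""
    return adc_rate_hz / channels_used

adc_rate = 100_000       # 100 kS/s A/D converter (assumed value)
for n in (1, 4, 8):
    print(f"{n} channel(s): {max_rate_per_channel(adc_rate, n):,.0f} S/s per channel")
```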

signals, a sample and hold section, and the analog to digital (A/D) converter. These devices are inter-connected as a DAQ system in Figure 21.8.

21.3 Simultaneous Sample and Hold

In some systems the channel skew cannot be tolerated; all the channels must be read at exactly the same time. Then Simultaneous Sample and Hold is used to sample all input signals at the same time and hold the values until the A/D converter has converted all of them. Either a set of electronic switches and capacitors can be used, or separate A/D converters for each input channel. Figure 21.9 shows the usage of Simultaneous Sample and Hold logic.


Figure 21.8: A DAQ system consisting of an analog multiplexer, a sample and hold section, and an analog to digital converter.

Figure 21.9: The sampling of eight consecutive analog input channels with a Simultaneous Sample and Hold section.

21.4 Aliasing

If the sampling frequency is too low, the discrete-time values will not be a correct representation of the continuous-time values, as shown in Figure 21.10. This is known as aliasing. Figure 21.4 also shows aliasing, as the sampling frequency is too low to give the correct information. One way of avoiding aliasing is to add a hardware filter before the A/D converter to remove frequency components above half the sampling frequency, see Figure 21.11.

The LP filter can be located before or after the mux, normally after the mux. If special filtering of the different input signals is wanted, use an LP filter before the mux, but remember that every input signal then needs its own filter. Using a filter after the mux, only one LP filter is needed.

The design of an RC filter amounts to defining the cut-off frequency for the filter; this frequency is defined by the frequency components in the input signals and the sampling frequency of the ADC. The cut-off frequency will be:

$$f_{cut\text{-}off} = \frac{1}{2 \pi \tau} = \frac{1}{2 \pi R C}$$
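The cut-off formula can be evaluated directly; the component values in the sketch below are assumptions for illustration, not values from the text.

```python
# A minimal sketch of the first-order RC low-pass cut-off: f_c = 1 / (2*pi*R*C).
import math

def rc_cutoff_hz(r_ohm: float, c_farad: float) -> float:
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

def c_for_cutoff(f_c_hz: float, r_ohm: float) -> float:
    """Capacitance needed for a wanted cut-off frequency with a given resistor."""
    return 1.0 / (2.0 * math.pi * r_ohm * f_c_hz)

print(rc_cutoff_hz(10e3, 100e-9))   # 10 kOhm and 100 nF -> about 159 Hz
print(c_for_cutoff(50.0, 10e3))     # capacitance needed for a 50 Hz cut-off
```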

21.5 Oversampling

In some cases it is desirable to have a sampling frequency considerably higher than twice the desired system bandwidth, so that a digital filter can be used in exchange for a weaker analog anti-aliasing filter. This process is known as oversampling, and the filter will now be part of the signal conditioning section in Figure 21.11. Different types of software filters exist and are used for data-cleaning purposes. An effective data-cleaning filter should satisfy two important properties (Pearson 2005):

1. outliers (entries in a dataset that are anomalous with respect to the behavior of the other entries (Pearson 2005)) should be replaced with data values that are more consistent with the local variation of the nominal sequence,

2. the filter should cause no or little change in the nominal data sequence.

Figure 21.10: The aliasing problem when the sampling frequency is too low. The upper part of the figure shows a sufficiently high sampling frequency, while the lower part shows a too low sampling frequency, ending up with a straight-line signal as the digital representation of the signal.

Figure 21.11: A lowpass filter used for avoiding aliasing in the measurement system.

Digital filter techniques include exponential, box-car averaging, moving average, and weighted moving average (also called Savitzky-Golay or Hampel) filtering, Fourier transformation, correlation analysis, and others (Meier & Zünd 2000).

Figure 21.12: A folding diagram used for estimation of the alias frequency (Wheeler & Ganji 2004).

21.6 Folding diagram

It is possible to estimate the lowest alias frequency using a folding diagram. The folding diagram can be used to predict the alias frequency based on the signal frequency and the sampling rate, like:

1. Compute the folding frequency: $f_N = f_s/2$, where $f_s$ is the sampling frequency,

2. Compute the folding diagram index: $f_m/f_N$, where $f_m$ is the sampled signal frequency,

3. Find the index $f_m/f_N$ in the folding diagram, draw a vertical line to the lowest line, and read the folding diagram index on the lowest line (base line),

4. The alias frequency will be: folding diagram index $\cdot\ f_N$.

The folding diagram is shown in Figure 21.12.

Example 16 A system has a sampling frequency of 100 Hz and a maximum frequency of interest of 80 Hz. As the sampling frequency is not twice the frequency of interest, there will be an alias frequency, but where?

1. The folding frequency: $f_N = 100/2 = 50$ Hz,

2. Folding diagram index: $id = 80/50 = 1.6$,

3. Index read from the folding diagram = 0.4,

4. The lowest alias frequency will be: $f_A = 0.4 \cdot 50 = 20$ Hz.

Your measurement system will read a signal with a new frequency because the sampling frequency is wrong. What is the correct sampling frequency?


Figure 21.13: A sawtooth waveform.

Example 17 A system has a sampling frequency of 100 Hz and a maximum frequency of interest of 100 Hz. As the sampling frequency is not twice the frequency of interest, there will be an alias frequency, but where?

1. The folding frequency: $f_N = 100/2 = 50$ Hz,

2. Folding diagram index: $id = 100/50 = 2$,

3. Index read from the folding diagram = 0.0,

4. The lowest alias frequency will be: $f_A = 0.0 \cdot 50 = 0$ Hz.

Note that the signal is not zero, but a signal with zero frequency, a steady signal.
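The folding procedure can also be expressed as a small formula: the alias appears at the distance from the signal frequency to the nearest integer multiple of the sampling frequency. The sketch below is a minimal check of the two examples above; it is one common way to compute the lowest alias, not the folding-diagram method itself.

```python
# A minimal sketch of the lowest alias frequency for a signal of frequency f_m
# sampled at f_s: the distance to the nearest integer multiple of f_s.
def alias_frequency(f_m: float, f_s: float) -> float:
    return abs(f_m - f_s * round(f_m / f_s))

print(alias_frequency(80.0, 100.0))    # Example 16: 20 Hz
print(alias_frequency(100.0, 100.0))   # Example 17: 0 Hz (a steady signal)
print(alias_frequency(10.0, 11.0))     # the 10 Hz sine sampled at 11 S/s: 1 Hz
```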

21.7 Spectral analysis of Time varying signals

A sensor signal often consists of a sum of signals with different frequencies. An example is the sawtooth waveform, which is a simple waveform but consists of a set of frequencies. See Figure 21.13 for a sawtooth waveform.

The method used to determine the frequency components is known as Fourier series analysis. The lowest frequency in a periodic wave is $f_0$ and is called the fundamental or first harmonic frequency. The first harmonic frequency has a period $T_0$ and angular frequency $\omega_0 = 2\pi f_0$, where $f_0 = 1/T_0$. It can be shown that any periodic function $f(t)$ can be represented by the sum of a constant and a series of sine and cosine waves (Wheeler & Ganji 2004). The representation is:

$$f(t) = a_0 + a_1 \cos \omega_0 t + a_2 \cos 2\omega_0 t + \ldots + a_n \cos n\omega_0 t + b_1 \sin \omega_0 t + b_2 \sin 2\omega_0 t + \ldots + b_n \sin n\omega_0 t$$

The constant $a_0$ is the time average of the function over the period $T$:

$$a_0 = \frac{1}{T} \int_0^T f(t)\, dt$$

and the constants $a_n$ will be:

$$a_n = \frac{2}{T} \int_0^T f(t) \cos n\omega_0 t \, dt$$

and the constants $b_n$ will be:

$$b_n = \frac{2}{T} \int_0^T f(t) \sin n\omega_0 t \, dt$$

By using Fourier series analysis, the frequency components of any signal can be determined, and the number of frequency components to handle can be decided for the measurement system.


21.8 Spectral Analysis using the Fourier transform

The Fourier transform is a generalization of the Fourier series and can be applied to any practical function by using the Fast Fourier Transform (FFT). The FFT starts with the Fourier series, but uses the complex exponential form (Wheeler & Ganji 2004). The sine and cosine functions are represented as:

$$\cos x = \frac{e^{jx} + e^{-jx}}{2} \qquad \sin x = \frac{e^{jx} - e^{-jx}}{2j}$$

where $j = \sqrt{-1}$. The signal can then be stated as:

$$f(t) = \sum_{n=-\infty}^{\infty} c_n e^{j n \omega_0 t}$$

where:

$$c_n = \frac{1}{T} \int_{-T/2}^{T/2} f(t)\, e^{-j n \omega_0 t}\, dt$$

If a longer value of $T$ is selected, the lowest frequency is reduced. By making $T$ infinite, the frequency becomes a continuous variable, leading to the concept of the Fourier transform. The Fourier transform of a function $f(t)$ is defined as:

$$F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-j\omega t}\, dt$$

$F(\omega)$ is a continuous complex-valued function. Once a Fourier transform has been determined, the original function $f(t)$ can be recovered from the inverse Fourier transform:

$$f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{j\omega t}\, d\omega$$

Using a measurement system and an A/D converter, the data are measured only at discrete times. The Discrete Fourier Transform (DFT) can be used for the analysis of data taken at discrete times over a finite time interval. The DFT is defined as:

$$F(k\,\Delta f) = \sum_{n=0}^{N-1} f(n\,\Delta t)\, e^{-j(2\pi k \Delta f)(n \Delta t)} \qquad k = 0, 1, 2, \ldots, N-1$$

where $N$ is the number of samples taken during a time period $T$. The frequency increment $\Delta f$ is equal to $1/T$ and the time increment $\Delta t$ is equal to $T/N$.

21.9 FFT diagram

Consider the function

$$f(t) = 2 \sin 2\pi 10 t + \sin 2\pi 15 t$$

shown in Figure 21.14. The signal is composed of two sine waves with frequencies of 10 Hz and 15 Hz. Using 128 samples over one second, the FFT is shown in Figure 21.15.

The FFT diagram shows the frequency components of the signal; in this signal the frequency components are 10 Hz and 15 Hz. The sampling frequency must be at least 30 Hz for this signal.

These analyses will not be part of this course, but belong in a signal analysis course. The FFT is a software tool that can be used for analysing signals and finding the maximum frequency component of the sensor signals.


Figure 21.14: The function $f(t) = 2 \sin 2\pi 10 t + \sin 2\pi 15 t$ (Wheeler & Ganji 2004).

Figure 21.15: The FFT analysis of the function $f(t) = 2 \sin 2\pi 10 t + \sin 2\pi 15 t$ (Wheeler & Ganji 2004).


21.10 Selecting the sampling rate and �ltering

A signal will consist of a set of sine and cosine waves with different frequencies; how should these frequency components be sampled and filtered? Remember that to avoid aliasing, the sampling frequency must be greater than twice the maximum frequency of the signal, not just the frequency of interest.

1. Find the maximum frequency of the signal by analyzing the signal and/or the system. If in any doubt, use the FFT or DFT if the signal is available,

2. Define the maximum frequency of interest,

3. Check the sampling rate of the measurement system,

(a) remember that the sampling rate will be equal for all input channels of a measurement system,

(b) a low-pass filter is necessary if the sampling rate is too low compared to the maximum frequency of the signal; this is often called an anti-aliasing filter,

(c) remember that a filter will not remove other frequencies, only reduce them!,

4. Consider a filter, either a hardware filter or a software filter,

(a) to avoid aliasing, use a hardware filter,

(b) if oversampling is possible, use a software filter,

5. Remember that many sensing devices eliminate unwanted frequencies at the sensing stage,

6. Try to avoid system noise at all stages in the measurement system.

Some guidelines for making good measurements:

1. Maximize the precision and accuracy,

2. Minimize the noise,

3. Match the A/D converter range to the sensor range.

21.11 Dynamic range of the �lter and A/D converter

To calculate the signal attenuation, the dynamic range of the A/D converter is important. The dynamic range is:

$$G_{dynamic\ range} = 20 \cdot \log_{10}\left(2^N\right)\ \mathrm{dB}$$

where N is the number of bits of the A/D converter. For monopolar 8- and 12-bit converters, the dynamic range will be:

        Dynamic Range (dB)
Bits    Monopolar    Bipolar
8       48           42
12      72           66

For bipolar converters, N is reduced by one, since one bit is used as a sign bit. We then need to define the corner frequency of the filter so that the attenuation removes the unwanted frequencies using both the filter and the A/D converter. Normally a filter has a corner frequency $f_c$ and an attenuation rate in dB per octave (an octave is a doubling of the frequency). The number of octaves will then be:

$$N_{oct} = \frac{dynamic\ range}{filter\ attenuation\ rate}$$

and can be used to define the corner frequency of the filter, the sampling rate, and the number of bits of the A/D converter. The maximum frequency $f_m$ can then be evaluated from

Figure 21.16: A time interleaved conversion of an analog signal using two A/D converters.

$$f_m = f_c \cdot 2^{N_{oct}}$$

and the sampling rate will be twice the maximum frequency $f_m$. If the required sampling rate is too high, a higher filter attenuation rate must be considered. The attenuation of a Butterworth filter will be:

Butterworth attenuation
first order     6 dB/octave
eighth order    48 dB/octave

but remember that a higher order of the filter also gives a larger phase shift in the passband.
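The formulas of this section chain together naturally; the sketch below follows them with assumed values (a 12-bit monopolar converter, an eighth-order Butterworth filter, and a 100 Hz corner frequency), none of which come from the text.

```python
# A minimal sketch chaining the formulas: dynamic range -> octaves -> f_m -> f_s.
import math

bits = 12                      # monopolar A/D converter (assumed)
filter_attenuation = 48.0      # dB/octave, eighth-order Butterworth
f_c = 100.0                    # filter corner frequency in Hz (assumed)

dynamic_range = 20.0 * math.log10(2 ** bits)     # about 72 dB
n_oct = dynamic_range / filter_attenuation        # number of octaves
f_m = f_c * 2 ** n_oct                             # maximum frequency
f_s = 2.0 * f_m                                    # required sampling rate

print(f"dynamic range {dynamic_range:.0f} dB, {n_oct:.2f} octaves")
print(f"f_m = {f_m:.0f} Hz, sampling rate = {f_s:.0f} Hz")
```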

21.12 Time interleaved A/D converters

The multiplexing of the analog input signals and the sampling frequency are limitations for the A/D conversion. Several options are available to overcome these limitations:

1. use one A/D converter for each input; a more expensive solution for both physical space and cost,

2. use a faster A/D converter; a more expensive solution for cost,

3. use time interleaved A/D converters; a more expensive solution for physical space.

Figure 21.16 shows a time interleaved solution with 2 A/D converters. With this solution, the maximum frequency can be twice that of a solution using only one A/D converter.

21.13 Nyquist Frequency

The Nyquist frequency is half the sampling frequency and is sometimes called the folding frequency, or the cut-off frequency, of a sampling system (www.wikipedia.org 2006).

The sampling theorem shows that aliasing can be avoided if the Nyquist frequency is greater than the bandwidth, or maximum component frequency, of the signal being sampled (www.wikipedia.org 2006).


Chapter 22

Logging

Logged data are valuable for any check of the experiment, any check of the data, or any check of the trend of the data. Logging can be built into the measurement system, into the monitoring and control system (the usual solution), or into an external system.

22.1 Sensor data

The logged data should be:

1. the sensor value,

2. the converted value (unit value),

3. the time,

4. the status of the sensor value.

Figure 22.1 shows an example of logging where the logging is built into the system, and data are logged to file.

22.2 Historical data

Historical data are sensor data logged at a time different from the current time. Saving sensor data at specific time intervals will give historical data. Historical data must be saved in specific files or in a database for later use. Saving the data in files will preserve the historical data if the monitoring and/or control application is stopped, or in case of a system failure or breakdown.

22.3 Trend curves

A trend curve is a utility (or tool) to display historical data: all data, or data between two specific times (a time interval). Figure 22.2 shows the trending, filtering, and prediction of a signal, where the trend is the signal in the past. The trend can be drawn as a curve between each value or as a smoothing curve, as shown in the figure.

Figure 22.1: The structure of an internal logging module.


Figure 22.2: The trending, filtering and prediction of a signal or value.


Chapter 23

Statistical analysis of Experimental data

Reading signals from sensor devices usually introduces a certain amount of randomness which can affect the conclusions drawn from the results. This chapter deals with some important statistical methods that can be used when drawing these conclusions.

23.1 Introduction

Randomness will always be part of the signals from sensor devices, even if the value to be measured is fixed. This randomness is due to uncontrollable variables affecting the measurand and a lack of precision in the measurement system. The errors are of two types:

1. systematic errors; repeatable errors that can be minimized by calibration of the measurement system,

2. random errors; errors handled by statistical analysis, both in planning the experiments and in evaluating the results of the experiments.

This means that some analysis should be performed on the values, or sets of values, from the DAQ system. Figure 23.1 shows a monitoring system receiving digital values. These values should be validated due to the randomness of the signals from the sensor devices.

23.2 General concepts and de�nitions

23.2.1 De�nitions

1. Population; the entire collection of measurements,

2. Sample; a representative subset of the population used for the experiments,

3. Sample space; the set of all possible outcomes of an experiment,

Figure 23.1: Some analysis should be performed on the digital values from the DAQ system to validate the data.


4. Random variable; a numerical value from every experiment, continuous or discrete,

5. Distribution function; a graphical or mathematical relationship used to represent the values of the random variable,

6. Parameter; a numerical attribute of the entire population (for example the average of the random variable),

7. Event; the outcome of a random experiment,

8. Statistic; a numerical attribute of the sample (for example the average of the sample),

9. Probability; the chance of occurrence of an event in the experiment,

10. Confidence interval: an estimate of a population parameter,

11. Confidence level: how likely the confidence interval is to contain the population parameter,

12. Confidence coefficient: same as confidence level.

23.2.2 Measure of central tendency

The most used parameter is the mean:

$$\bar{x} = \frac{x_1 + x_2 + \ldots + x_n}{n} = \sum_{i=1}^{n} \frac{x_i}{n}$$

where $x_i$ are the values of the sample data and $n$ is the number of measurements. In a population with a finite number of elements, $N$, the mean is often denoted by the symbol $\mu$. Two other properties are the median and the mode.

1. Median; if the measured values are arranged in ascending or descending order, the median is thevalue in the center of the set.

2. Mode; the most frequently occurring value.

23.2.3 Measures of dispersion

Dispersion is the spread or variability of the data. The deviation of each measurement is defined as

$$d_i = x_i - \bar{x}$$

The mean deviation is defined as

$$\bar{d} = \sum_{i=1}^{n} \frac{|d_i|}{n}$$

For a population with a finite number of elements, the population standard deviation (sigma) is defined as:

$$\sigma = \sqrt{\frac{\sum_{i=1}^{N} (x_i - \mu)^2}{N}}$$

The sample standard deviation is defined as:

$$S = \sqrt{\frac{\sum_{i=1}^{N} (x_i - \bar{x})^2}{N - 1}}$$

The sample standard deviation is used when the data of a sample are used to estimate the population standard deviation. If the number of measurements is more than 30, the sample standard deviation is a good approximation of the population standard deviation:

$$n > 30 \implies S \approx \sigma$$


Figure 23.2: The histogram of the temperature values.

The variance is defined as

$$\mathrm{variance} = \begin{cases} \sigma^2 & \text{for the population} \\ S^2 & \text{for a sample} \end{cases}$$

23.3 Histogram

Example 18 Measure the temperature of this room every 5 minutes. The temperatures for one hour can then be:

1 = 22.3   2 = 21.9   3 = 22.8   4 = 21.9   5 = 22.3
6 = 21.9   7 = 22.8   8 = 22.3   9 = 22.3   10 = 21.5
11 = 22.3  12 = 21.9

What will be the temperature of this room? First of all, the randomness seems to be related to the precision of the A/D converter, since the steps are about 0.4 °C. One way to visualize the data is to use a histogram; a histogram divides the results into bins and shows the number of values in each bin. A histogram of the values, created by Matlab, is shown in Figure 23.2.

The vertical axis indicates the number of values for each value step of the horizontal axis. The bins are the value steps of the horizontal axis, and the bin width (size) for a given number of bins is calculated as:

$$k = \frac{x_{max} - x_{min}}{n}$$

where $n$ is the number of bins and $k$ is the width of each bin. Each bin will then contain a number of observations (sensor device readings), as shown in Figure 23.2. A histogram can be used for both continuous and discrete random variables, using a line for continuous random variables and boxes for discrete random variables. Figure 23.2 shows discrete random variables and should give a good idea about the mean of the temperature and the variance. Some guidelines for the histogram:

1. Select between 5 - 15 bins,

2. Use the same width of each bin,

3. Cover the entire range of the data,


4. The bins should not overlap.

A histogram can be used to make an assumption about the mean of the values, the shape of the distribution, and the range of the values. A histogram can be a useful tool for getting a fast overview of the data distribution.

23.3.1 Examples using the room temperatures

The values for the temperature in the room will then be (using Matlab):

mean                 $\bar{x} = 22.2$ °C
median               $x_m = 22.3$ °C
standard deviation   $S = 0.38$ °C
variance             $S^2 = 0.15$ °C²
mode                 $m = 22.3$ °C
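The same statistics can be reproduced with the Python standard library (the text itself uses Matlab); the sketch below uses the 12 readings from Example 18.

```python
# A minimal sketch reproducing the statistics of the 12 temperature readings.
import statistics

temps = [22.3, 21.9, 22.8, 21.9, 22.3, 21.9, 22.8, 22.3, 22.3, 21.5, 22.3, 21.9]

print("mean              ", round(statistics.mean(temps), 1))       # 22.2
print("median            ", statistics.median(temps))                # 22.3
print("standard deviation", round(statistics.stdev(temps), 2))       # 0.38 (sample)
print("variance          ", round(statistics.variance(temps), 2))    # 0.15
print("mode              ", statistics.mode(temps))                  # 22.3
```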

23.4 Probability

Probability is a numerical value expressing the likelihood of the occurrence of an event relative to all possibilities in a sample space. For example, when tossing a die, the probability of each number of the die is 1/6.

The probability of occurrence of an event A is defined as the number of successful occurrences (m) divided by the total number of possible outcomes (n) in a sample space, evaluated for n >> 1, giving

$$\text{probability of event } A = P(A) = \frac{m}{n}$$

Some properties of probability:

1. A probability is always a number between 0 and 1: $0 \leq P(x_i) \leq 1$,

2. If an event is certain to occur: $P(A) = 1$,

3. If an event will never occur: $P(A) = 0$,

4. If an event $\bar{A}$ is the complement of event A, meaning that if event A occurs, $\bar{A}$ cannot occur: $P(\bar{A}) = 1 - P(A)$,

5. If the events A and B are mutually exclusive, meaning that the probability of simultaneous occurrence of A and B is zero, the probability of event A or B is: $P(A \text{ or } B) = P(A) + P(B)$,

6. If the events A and B are independent of each other, meaning that their occurrences do not depend on each other, the probability of both events is: $P(AB) = P(A) \cdot P(B)$,

7. The probability of occurrence of event A or B or both is represented by $P(A \cup B)$ (A union B): $P(A \cup B) = P(A) + P(B) - P(AB)$.

Example: In a measurement system there is a 2% chance that a sensor is defective and a 0.5% chance that the DAQ system is defective. The probability of both a defective sensor and a defective DAQ system is: $P(AB) = P(A) \cdot P(B) = 0.02 \cdot 0.005 = 0.0001$, giving 0.01%. The probability of having at least one of them defective is: $P(A \cup B) = P(A) + P(B) - P(AB) = 0.02 + 0.005 - 0.0001 = 0.0249$, giving 2.49%.

23.4.1 Probability Distribution Functions

An important function of statistics is to use information from a sample to predict the behavior of a population. This approach is called the use of an empirical distribution.

Experience has shown that the distribution of a random variable often follows certain mathematical functions. Sample data can then be used to compute parameters of these mathematical functions, and the mathematical functions can be used to predict properties of the population. These functions are divided into two main groups:

1. probability mass functions; used for discrete random variables,

2. probability density functions; used for continuous random variables.


Probability mass function

1. The sum of all probabilities: $\sum_{i=1}^{n} P(x_i) = 1$,

2. The mean of the population: $\mu = \sum_{i=1}^{n} x_i \cdot P(x_i)$ (also called the expected value of x),

3. The variance of the population: $\sigma^2 = \sum_{i=1}^{n} (x_i - \mu)^2 \cdot P(x_i)$.

Probability density function

1. The probability in a finite interval: $P(a \leq x \leq b) = \int_a^b f(x)\, dx$,

2. The mean of the population: $\mu = \int_{-\infty}^{\infty} x \cdot f(x)\, dx$,

3. The variance of the population: $\sigma^2 = \int_{-\infty}^{\infty} (x - \mu)^2 \cdot f(x)\, dx$.

23.4.2 Some probability distribution functions with engineering applications

The most common probability distribution functions are described briefly.

Binomial distribution

Describes discrete random variables that can have only two possible outcomes (success and failure). The following conditions must be satisfied:

1. Each trial in the experiment can have only the two possible outcomes of success or failure,

2. The probability of success remains constant throughout the experiment (p),

3. The experiment consists of n independent trials.

The properties are:

1. The probability: $P(r) = \binom{n}{r} p^r (1-p)^{n-r}$ where $n$ is the number of independent trials, $p$ is the probability of success (constant throughout the experiment), and $r$ is the number of successes, at most $n$. The factor $\binom{n}{r} = \frac{n!}{r!(n-r)!}$ is called n combination r.

2. The expected number of successes: $\mu = np$,

3. The standard deviation: $\sigma = \sqrt{np(1-p)}$

Exercise 19 A manufacturer of sensor devices claims that only 10% of the sensor devices must be repaired within the warranty period. What is the probability that 5 sensors in a batch of 20 sensors need repair during the warranty period?

Solution: Success is $100\% - 10\% = 90\% = 0.9$, $\binom{n}{r} = \binom{20}{15} = \frac{20!}{15!(20-15)!} = 15504$, $P(15) = \binom{20}{15}\, 0.9^{15} (1-0.9)^5 = 0.032$, meaning that the probability is 3.2% that exactly 5 sensors out of 20 need repair.
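The result is easy to check numerically; the sketch below is a minimal verification of Exercise 19, not part of the original text.

```python
# A minimal sketch checking Exercise 19 with the binomial formula.
from math import comb

n, r, p_success = 20, 15, 0.9           # 15 sensors survive, i.e. 5 need repair
p = comb(n, r) * p_success**r * (1 - p_success)**(n - r)
print(f"P(exactly 5 of 20 need repair) = {p:.3f}")   # about 0.032
```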

Poisson distribution

Used to estimate the number of random occurrences of an event in a specified interval of time or space if the average number of occurrences is already known. Two assumptions underlie the Poisson distribution:

1. the probability of occurrence of an event is the same for any two intervals of the same length,

2. the probability of occurrence of an event is independent of the occurrence of other events.

The properties are:


Figure 23.3: The normal distribution with different means and variations.

1. The probability: $P(x) = \frac{e^{-\lambda} \lambda^x}{x!}$ where $\lambda$ is the mean number of occurrences during the interval of interest.

2. The mean (expected value): $E(x) = \mu = \lambda$,

3. The standard deviation: $\sigma = \sqrt{\lambda}$

Exercise 20 On average there will be 3 wrong readings from a sensing device every minute. Find the probability of no wrong readings in the next minute.

Solution: $\lambda = 3$ and $x = 0$, giving $P(x) = \frac{e^{-3}\, 3^0}{0!} = 0.05$.
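Again, a minimal numeric check (not part of the original text):

```python
# A minimal sketch checking Exercise 20 with the Poisson formula.
from math import exp, factorial

lam, x = 3, 0
p = exp(-lam) * lam**x / factorial(x)
print(f"P(no wrong readings in the next minute) = {p:.3f}")   # about 0.050
```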

Normal distribution (Gaussian)

A simple distribution function that is useful for a large number of common problems involving continuous random variables. The normal probability density function is:

$$f(x) = \frac{1}{\sigma \sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

where $\sigma$ is the standard deviation of the population and $\mu$ is the mean of the population. Often the normal distribution is denoted by:

$$x \sim N(\mu, \sigma^2)$$

where $N()$ is the normal distribution, $\mu$ is the mean, and $\sigma^2$ is the variance. The normal distribution is shown in Figure 23.3 with different means ($\mu$, "my") and variations ($\sigma$, "sigma").

Exercise 21 Use the temperature example values for mean and variation to make a normal distribution model of the temperature sensor.
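One possible sketch of such a model (an assumption on how the exercise could be approached, not a given solution) simply evaluates the normal probability density function with the mean and standard deviation from the room temperature example.

```python
# A minimal sketch of a normal distribution model of the temperature sensor,
# using the mean and standard deviation from the room temperature example.
from math import exp, pi, sqrt

mu, sigma = 22.2, 0.38

def normal_pdf(x: float) -> float:
    return 1.0 / (sigma * sqrt(2 * pi)) * exp(-(x - mu) ** 2 / (2 * sigma ** 2))

for temp in (21.5, 22.2, 22.9):
    print(f"f({temp}) = {normal_pdf(temp):.3f}")
```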

The normal distribution function is used a lot in engineering, for sensor devices and sensor measurements. The confidence intervals for the normal distribution function are:


Confidence interval    Confidence level (%)
±1σ                    68.3
±2σ                    95.4
±3σ                    99.7
±3.5σ                  99.96

A confidence interval is an estimate of a population parameter. Instead of estimating the parameter by a single value, an interval to include the parameter is given. Thus, confidence intervals are used to indicate the reliability of an estimate. How likely the interval is to contain the parameter is determined by the confidence level or confidence coefficient. Increasing the desired confidence level will widen the confidence interval (www.wikipedia.org 2006).

23.4.3 Parameter estimation

In many experiments the sample size is small relative to the population; how can the mean and standard deviation of the whole population then be estimated?

Population mean

The mean: $\mu = \bar{x} \pm \delta$, or $\bar{x} - \delta \leq \mu \leq \bar{x} + \delta$, where $\delta$ is an uncertainty and $\bar{x}$ is the sample mean. The interval from $\bar{x} - \delta$ to $\bar{x} + \delta$ is called the confidence interval for the mean. The confidence interval depends on the confidence level, the probability that the population mean will fall within the specified interval:

$$\text{confidence level} = P(\bar{x} - \delta \leq \mu \leq \bar{x} + \delta)$$

The confidence level is normally expressed in terms of a variable $\alpha$ called the level of significance:

$$\text{confidence level} = 1 - \alpha$$

where $\alpha$ is the probability that the mean will fall outside the confidence interval.

The central limit theorem makes it possible to estimate the confidence interval with a suitable confidence level. The central limit theorem states that if a sufficiently large number of samples $n$ is taken from a population, the sample means $\bar{x}_i$ will follow a normal distribution, and the standard deviation of these means is given by:

$$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}$$

The standard deviation of the mean is also called the standard error of the mean. Important conclusions from the central limit theorem:

1. If the original population is normally distributed, the distribution of the $\bar{x}_i$ is normally distributed,

2. If the original population is not normally distributed and n is large (n > 30), the distribution of the $\bar{x}_i$ is normally distributed,

3. If the original population is not normally distributed and n < 30, the distribution of the $\bar{x}_i$ is only approximately normally distributed.

If the sample size is large, the central limit theorem can be used directly. Since $\bar{x}$ is normally distributed, we can define the statistic:

$$z = \frac{\bar{x} - \mu}{\sigma_{\bar{x}}}$$

and estimate the confidence interval on z. This is shown graphically in Figure 23.4. If $z = 0$, $\bar{x}$ has the value of the population mean $\mu$. The true value of $\mu$ will, however, lie somewhere in the confidence interval $\left(-z_{\alpha/2},\ z_{\alpha/2}\right)$, and the probability that z lies in the confidence interval is

$$P(z) = 1 - \alpha$$


Figure 23.4: Concept of the confidence interval of the mean (Wheeler & Ganji 2004).

This gives:

$$P\left(-z_{\alpha/2} \leq z \leq z_{\alpha/2}\right) = 1 - \alpha$$

Substituting for z:

$$P\left(-z_{\alpha/2} \leq \frac{\bar{x} - \mu}{\sigma_{\bar{x}}} \leq z_{\alpha/2}\right) = 1 - \alpha$$

Substituting for $\sigma_{\bar{x}}$:

$$P\left(-z_{\alpha/2} \leq \frac{\bar{x} - \mu}{\sigma/\sqrt{n}} \leq z_{\alpha/2}\right) = 1 - \alpha$$

Rearranged:

$$P\left(\bar{x} - z_{\alpha/2} \frac{\sigma}{\sqrt{n}} \leq \mu \leq \bar{x} + z_{\alpha/2} \frac{\sigma}{\sqrt{n}}\right) = 1 - \alpha$$

meaning that:

$$\mu = \bar{x} \pm z_{\alpha/2} \frac{\sigma}{\sqrt{n}}$$

with confidence level $= 1 - \alpha$.
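As a worked illustration, the sketch below estimates a 95% confidence interval for the mean of the room temperature example, using $z_{\alpha/2} = 1.96$ and the sample standard deviation as an estimate of $\sigma$; these choices (and ignoring the small-sample t-distribution correction) are assumptions made for the sketch.

```python
# A minimal sketch of a 95% confidence interval for the mean room temperature.
from math import sqrt
import statistics

temps = [22.3, 21.9, 22.8, 21.9, 22.3, 21.9, 22.8, 22.3, 22.3, 21.5, 22.3, 21.9]
x_bar = statistics.mean(temps)
s = statistics.stdev(temps)
z = 1.96                                    # z_{alpha/2} for alpha = 0.05
delta = z * s / sqrt(len(temps))

print(f"mu = {x_bar:.2f} +/- {delta:.2f} degC (95% confidence)")
```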

23.4.4 Criterion for rejecting questionable data points

In some experiments one or more measured values appear to be out of line with the rest of the data. These data are known as wild or outlier data points and should be removed from the data set. One approach is to define data outside the ±3σ confidence interval as outliers, but remember that it can be wrong to remove any of these data, as they can describe problems with the sensing devices or the measurement system.

23.4.5 Correlation of experimental data

Correlation coefficient

Scatter due to random errors is a common characteristic of virtually all measurements. In some cases the scatter may be so large that it is difficult to detect a trend. See Figure 23.5, where sub-figure (a) shows a strong relationship between x and y, in sub-figure (b) there seems to be no relationship between x and y, and in sub-figure (c) we cannot be certain.


Figure 23.5: Data showing significant scatter (Wheeler & Ganji 2004).


Figure 23.6: The relationship and the correlation coefficient.

A statistical parameter called the correlation coefficient can be used for checking the trend of a data set. If we have two variables, x and y, and our experiment yields a set of n data pairs $[(x_i, y_i),\ i = 1, \ldots, n]$, the linear correlation coefficient will be (sample correlation):

$$r_{xy} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\left[\sum_{i=1}^{n} (x_i - \bar{x})^2 \sum_{i=1}^{n} (y_i - \bar{y})^2\right]^{1/2}}$$

where

$$\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n} \qquad \bar{y} = \frac{\sum_{i=1}^{n} y_i}{n}$$

Remember that these equations are only valid for the sample correlation, not for the whole population correlation. The sample correlation coefficient, $r_{xy}$, indicates the relationship between the measured variables x and y. The range of $r_{xy}$ is $[-1, 1]$, where -1 indicates a perfectly linear relationship with a negative slope, 0 indicates no relationship, and +1 indicates a perfectly linear relationship with a positive slope. See Figure 23.6.

The correlation coefficient $r_{x,y}$ for a population, between two random variables x and y with expected values $\mu_x$ and $\mu_y$ and standard deviations $\sigma_x$ and $\sigma_y$, is defined as:

$$r_{x,y} = \frac{\mathrm{cov}(x, y)}{\sigma_x \cdot \sigma_y} = \frac{E\left[(x - \mu_x)(y - \mu_y)\right]}{\sigma_x \cdot \sigma_y}$$

Least-squares linear fit

It is a common requirement in experimentation to fit sample data to mathematical functions such as straight lines and exponentials. One of the most used functions is a straight line:

$$Y = ax + b$$

fitted from a set of n pairs of data $(x_i, y_i)$. To estimate the constants a and b, the method of least squares (or linear regression) is used to fit the data. For each value of $x_i$ there will be an error:

$$e_i = Y_i - y_i$$

and the square of the error is:

$$e_i^2 = (Y_i - y_i)^2 = (a x_i + b - y_i)^2$$

The sum of the squared errors is:

$$E = \sum_{i=1}^{n} (a x_i + b - y_i)^2$$

The solution for a and b is then found by minimizing E, setting the derivatives to zero:

$$\frac{\partial E}{\partial a} = 0 = \sum_{i=1}^{n} 2 (a x_i + b - y_i) x_i$$

$$\frac{\partial E}{\partial b} = 0 = \sum_{i=1}^{n} 2 (a x_i + b - y_i)$$


These two equations can be solved simultaneously for a and b (Wheeler & Ganji 2004):

$$a = \frac{n \sum x_i y_i - \left(\sum x_i\right)\left(\sum y_i\right)}{n \sum x_i^2 - \left(\sum x_i\right)^2}$$

$$b = \frac{\sum x_i^2 \sum y_i - \left(\sum x_i\right)\left(\sum x_i y_i\right)}{n \sum x_i^2 - \left(\sum x_i\right)^2}$$

The resulting line, $Y = ax + b$, is the least-squares best fit for the data. The fit of the data can be judged by the coefficient of determination, given by:

$$r^2 = 1 - \frac{\sum (a x_i + b - y_i)^2}{\sum (y_i - \bar{y})^2}$$

For engineering data, $r^2$ should be at least 0.8; a value of 0.8 - 0.9 is a good indication of a linear regression.
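The closed-form expressions above translate directly into code. The sketch below implements them; the calibration-style data pairs are hypothetical values chosen only to exercise the formulas.

```python
# A minimal sketch of the least-squares straight-line fit Y = a*x + b and r^2.
def linear_fit(x, y):
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    a = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    b = (sxx * sy - sx * sxy) / (n * sxx - sx ** 2)
    y_mean = sy / n
    ss_res = sum((a * xi + b - yi) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - y_mean) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical data pairs: sensor output (x) versus reference value (y).
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 2.0, 3.9, 6.1, 8.0]
a, b, r2 = linear_fit(x, y)
print(f"Y = {a:.2f}x + {b:.2f}, r^2 = {r2:.3f}")
```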

Outliers in x-y data sets

Having a set of data pairs $(x_i, y_i)$, it is possible to check for outliers in the data set. First make a least-squares best fit of the data and plot the line and the data. A visual check of the data may show data with much larger deviations from the line than other data, and these may be treated as outliers. Another solution is to calculate the residuals for the data set:

$$e_i = Y_i - y_i$$

and plot these residuals ($e_i$). Assuming a normal distribution of the residuals, we can expect 95% of the residuals to be within the range of $\pm 2\sigma$. This can, however, give complications such as:

1. the number of samples in the data set being small, giving an invalid range for the confidence interval,

2. the data not being linear, giving higher values of the residuals.

Multiple and polynomial regression

Regression analysis is more general than the least-squares best fit. In multiple regression, the function will be:

$$Y = a_0 + a_1 \hat{x}_1 + a_2 \hat{x}_2 + \ldots + a_k \hat{x}_k$$

with several independent variables, $\hat{x}_1, \ldots, \hat{x}_k$. The $\hat{x}$'s can be independent variables or functions of the independent variables, like:

$$\hat{x}_1 = x_1 \qquad \hat{x}_2 = x_2 \qquad \hat{x}_3 = x_1 \cdot x_2$$

The way of solving is the same as for simple linear regression, using the error:

$$e_i = Y_i - y_i = a_0 + a_1 \hat{x}_{1i} + a_2 \hat{x}_{2i} + \ldots + a_k \hat{x}_{ki} - y_i$$

The sum of all squared errors is then:

$$E = \sum \left(a_0 + a_1 \hat{x}_{1i} + a_2 \hat{x}_{2i} + \ldots + a_k \hat{x}_{ki} - y_i\right)^2$$

and E is then minimized by partially differentiating with respect to each a and setting all resulting equations to zero.

Many physical relationships cannot be represented by a simple straight line, but can easily be fitted with a polynomial. The form of a polynomial regression equation is:

$$Y = a_0 + a_1 x + a_2 x^2 + \ldots + a_k x^k$$

where k is the degree of the polynomial. This can be solved by statistical programs by input of the data and the order of the polynomial desired. It can also be solved as a special case of multiple regression by making the $\hat{x}$'s be $x, x^2, x^3$, etc.


23.5 Uncertainty budget

An uncertainty analysis must be performed when dealing with sensors. The analysis takes into consideration the uncertainty of all devices in your system, mainly the sensor devices, the A/D converter, and any actuators. The budget is a table showing the uncertainty of the values used in your system. An example of such a table can be:

Sensor          Range   Accuracy   Note
A/D converter
Sensor type A   ...     ...
Sensor type B   ...     ...        ...
Sensor type C   ...     ...
...             ...     ...
Device type A   ...     ...        ...
Device type B   ...     ...

The table must contain all devices with the necessary parameters and/or properties that are important for the accuracy of your measurement system. Do NOT include devices that will not influence the accuracy.


Chapter 24

Calibration

24.1 Introduction

Calibration is a "measuring" comparison between two sensors or measurement systems. One sensor or measurement system is to be validated or controlled; the other sensor or measurement system is the "truth". The "truth" has to be a sensor or measurement system calibrated according to national or international standards with traceability. A sensor device requires periodic calibration in order to maintain the accuracy and traceability to recognised industry standards.

In the past 10 to 15 years a lot of low-maintenance sensor devices have been introduced to the market. Field engineers or instrument engineers have a perception that these sensor devices can work for a long time between calibrations (up to ten years), but this is not necessarily true. The performance often depends on the cleanliness and adjustment of pipes and electrodes, not to mention corrosion within the pipe or electrodes.

The most used calibration interval is annual calibration, but semiannual or even quarterly calibration is used. The problem is that the sensor devices normally must be removed from the plant and sent to a calibration laboratory. The best solution is, however, to have a rig so the sensor devices can be calibrated on site. An example of a plant with an integrated calibration system is shown in Figure 24.1.

As a rule of thumb, the "truth" sensor or measurement system should have an accuracy 3 - 5 times better than the sensor or measurement system to be validated.

The reason for calibration is to document the error of measurement and the uncertainties in the measurements. The calibration should end up in some sort of document, like a certificate, for the validated sensor or measurement system. The calibration certificate documents the accuracy of the sensor or measurement system. The main reasons for calibration are:

1. optimize the process,

2. increase capacity,

3. higher quality,

4. safety,

5. quality of measurements.

The calibration of the instruments should be controlled by a system. The system can be manual or fully automatic, or often something in between. Figure 24.2 shows the relationship between the savings of time and cost, the quality, and the type of calibration system. The main reasons for using a calibration system are:

1. Improve efficiency; cut production down-time, simplify and automate calibration work,

2. save costs; optimize the calibration frequency and cut production down-time,

3. improve quality; automation of calibration data.


Figure 24.1: An example of how the sensor devices can be calibrated on site. The calibration system requires a calibration management "level" and a calibration and documentation "level", shown as the levels between the instruments and the plants (www.beamex.com: JAN-10).

Figure 24.2: The relationship of savings and quality regarding the type of calibration system (www.beamex.com: JAN-10).


24.2 Calibration process

When to calibrate a sensor or measurement system:

1. Fiscal measurement (trade measurement),

2. Requirements (e.g. fuel pumps and weights),

3. Regulations,

4. Certified companies,

5. Final control/test.

Calibration is not necessary for:

1. Check measurements in production,

2. Production without specifications.

Accuracy:

1. is a measure of how well the sensor or the measurement system is able to measure the "truth" value,

2. is documented through the following measurement errors during calibration:

(a) nonlinearity; an error specifying the maximum deviation of the real transfer function from the approximating straight line (Fraden 2004),

(b) hysteresis; an error specifying the maximum deviation of the output at a given input when that input is approached from opposite directions (Fraden 2004),

(c) repeatability; an error specifying the inability of a sensor or measurement system to represent the same value under identical conditions (Fraden 2004).

3. is stated as:

(a) percent of the range, most often full scale range (FS),

(b) percent of current reading (o.r. or RDG); a worked example follows this list.
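As an illustration with invented numbers: a pressure sensor with a 0 to 10 bar range and a stated accuracy of 0.5 % FS has an error band of about ±0.05 bar anywhere in the range, while 0.5 % of reading at a 2 bar reading corresponds to about ±0.01 bar. Near full scale the two statements are comparable, but at low readings the percent-of-reading specification is much tighter.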

24.3 Calibration of sensors

Sensors can be calibrated by first performing a calibration operation and then a validation operation. In the calibration operation the sensor is tested with different environment parameters within the range of the sensor, and a set of calibration data is stored for the sensor. This is shown in Figure 24.3.

The calibration certificate will contain part of the log data. The log data can now be used for making a calibration set for the sensor, stored either in the sensor or as a calibration data file for an external system. The sensor with the calibration data must be validated as shown in Figure 24.4.

The sensor with the calibration data will also have some sort of a calibration certificate. If the calibration data are downloaded into the sensor, this will be the final calibration certificate. If the calibration data are on an external file/system, this will often be an additional calibration certificate.

Figure 24.5 shows a non-linear sensor signal calibrated to a linear output signal. The nonlinearity will often depend on the specific sensor element, so every sensor device will have different calibration data. The calibration data can be used in different ways:

1. Stored and used in the sensor device. The output from the sensor device is a calibrated signal. The sensor device needs storage and calculation capabilities,

2. Stored in the sensor device. The output from the sensor device is a non-linear signal, and the measurement system must calibrate the signals using data read from the sensor device,

3. Stored in the measurement system. The output from the sensor device is a non-linear signal, and the measurement system must calibrate the signals using data stored in the measurement system; a minimal sketch of this option is given after this list.
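A minimal Python sketch of the third option, assuming the calibration data are stored in the measurement system as a table of (raw signal, true value) points and that piecewise-linear interpolation between the calibration points is acceptable; the calibration table below is invented.

# Minimal sketch: linearizing a non-linear sensor signal with stored calibration data.
# The calibration table below is invented; in practice it comes from the calibration log.
import numpy as np

# Calibration points: raw sensor output (e.g. volts) versus the "truth" value (e.g. degC).
raw_points  = np.array([0.10, 0.45, 0.90, 1.60, 2.50])
true_points = np.array([0.0, 25.0, 50.0, 75.0, 100.0])

def calibrate(raw_signal):
    """Convert a raw reading to a calibrated value by piecewise-linear interpolation."""
    return np.interp(raw_signal, raw_points, true_points)

print(calibrate(1.2))          # a single reading
print(calibrate([0.3, 2.0]))   # several readings at once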


Figure 24.3: Calibration of a sensor.

Figure 24.4: The validation of a calibrated sensor.

Figure 24.5: The calibration of a non-linear sensor signal to a linear output signal.


Figure 24.6: Part of a calibration certificate of a Schaevitz pressure sensor. The list shows the sensor output in mV at different temperatures and pressures in the pressure range of the sensor. The serial number and type of sensor are shown at the top of the certificate (a copy from KROHNE Skarpenord in Norway).

24.4 Calibration Certificate

Figure 24.6 shows the calibration certificate for a specific pressure sensor; the type and serial number are stated in the certificate. The output is in mV and is measured at different pressures and temperatures within the range of the sensor.
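As an illustration of how such certificate data could be used in a measurement system, the following sketch assumes an invented table of mV outputs over a grid of pressures and temperatures, interpolates the table to the operating temperature, and then inverts the (monotonic) mV-versus-pressure curve to obtain the pressure; none of the values are taken from the actual certificate.

# Minimal sketch: using certificate-style data (sensor output in mV at different
# pressures and temperatures) to convert a measured mV value to pressure.
# All values below are invented for illustration only.
import numpy as np

pressures_bar = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
temps_degC    = np.array([0.0, 25.0, 50.0])
# output_mV[i, j] = sensor output at temps_degC[i] and pressures_bar[j]
output_mV = np.array([
    [0.0, 19.8, 39.9, 60.2, 80.4, 100.5],
    [0.0, 20.0, 40.0, 60.0, 80.0, 100.0],
    [0.0, 20.3, 40.4, 60.5, 80.7, 100.9],
])

def pressure_from_mv(measured_mv, temp_c):
    """Interpolate the table to the operating temperature, then invert mV -> pressure."""
    # mV-versus-pressure curve at the operating temperature (one value per pressure point)
    curve = np.array([np.interp(temp_c, temps_degC, output_mV[:, j])
                      for j in range(len(pressures_bar))])
    # invert the monotonic curve: pressure as a function of the measured mV value
    return np.interp(measured_mv, curve, pressures_bar)

print(pressure_from_mv(50.1, temp_c=30.0))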


Part V

Documentation


Chapter 25

Guidelines for planning experiments

This chapter contains guidelines for designing and documenting experimental tasks.

25.1 Overview of an experimental task

In order to have the best results from experimental tasks a systematic approach should be taken. The steps for an experimental task will be:

1. Problem definition,

2. Design of the experiment,

3. Construction and development for the experiment,

4. Data gathering,

5. Data analysis,

6. Interpreting the results,

7. Conclusion(s) and reporting.

25.1.1 Problem definition

Very often engineers spend insufficient time analyzing and defining the problem, starting the design process without being aware of what the problem actually is or having an overview of the possible options.

25.1.2 Experimental design

This step is a major portion of any experimental program and will include some or all of the following parts:

1. Determining the schedule and costs. The engineer should always start any experiment by making a time schedule, for example a Gantt chart. This will force the engineer to think of all sub-tasks in the experiment and try to estimate the time usage of each sub-task,

2. Search for information, often a literature survey,

3. Determining the experimental approach,

4. Determining the analytical model(s) used to analyze the data,

5. Specifying the measured variables,

6. Selecting the experimental units like instruments etc.,

7. Estimating the experimental uncertainties,


8. Determining the test matrix, values of independent variables to be tested,

9. Performing a mechanical design of the test rig. Remember to take into account that devices for the test rig have different delivery times,

10. Specifying the test procedure.

25.1.3 Experimental construction and development

This is often the most expensive portion of the experimental task, both in time and cost. Building the test rig will take some time, and there will always be some sort of problem with ordering the parts, delivery times, and/or the verification of the test rig. Take this into consideration when making the time schedule.

25.1.4 Data gathering

This part covers gathering the data from the experiments after the test rig has been verified and debugged.

25.1.5 Data analysis

Computer programs will very often be used for the data analysis, either ready-made programs or programs developed for this purpose. Remember to add time for designing, testing, and debugging these programs as well.

25.1.6 Interpreting the results

After the data analysis period the data must be interpreted. Logical reasons should be developed to explain the trends in the data, and remember to comment on all types of anomalous data. Comparison and validation between different types of experiments are important.

25.1.7 Conclusion and reporting

The conclusion will be the results of the data interpretation. These results should often be documented in a report.

25.2 Activities in experimental projects

Experimental projects mean doing experimental tests that will take some time and cost, and the results will very often be of interest to others.

25.2.1 Scheduling

Scheduling is an important task for getting a detailed overview of all the sub-tasks in order to estimate the delivery date and important dates during the project. It is impossible to verify the schedule in advance, but making it will give the engineer some practice in structuring a project and estimating the time of the sub-tasks; a small scheduling sketch is given after the list below. Planning is part of scheduling and is important because:

1. Other projects may depend on this project,

2. Better utilization of the resources,

3. Complete all tasks before a deadline,

4. Be able to adjust the activity as early as possible,

5. The cost of the project.
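As an illustrative sketch of the kind of overview a schedule gives (the task names and durations are invented, and the tasks are assumed to run strictly one after another):

# Minimal sketch: estimating start and finish dates for sequential sub-tasks.
# Task names and durations are invented; a real plan would also handle parallel tasks.
from datetime import date, timedelta

tasks = [
    ("Problem definition",        3),   # duration in days (invented estimates)
    ("Experimental design",       5),
    ("Build and verify test rig", 10),
    ("Data gathering",            4),
    ("Data analysis and report",  6),
]

start = date(2011, 2, 1)
for name, days in tasks:
    finish = start + timedelta(days=days)
    print(f"{name:28s} {start}  ->  {finish}")
    start = finish           # the next task starts when this one finishes

print("Estimated delivery date:", start)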

25.2.2 Cost Estimation

Cost estimation will be the cost of time or usage of all the resources available in the project, any construction of a test rig, and the devices needed for collecting the data.


25.2.3 Dimensional analysis

This analysis is used to figure out the number of variables in an experiment, i.e. to find the number of dimensional variables and dimensionless variables. The Buckingham π theorem states: if there are m dimensional variables describing a physical phenomenon, [A1, A2, ..., Am], then there exists a functional relationship between these variables (Wheeler & Ganji 2004):

f(A1, A2, ..., Am) = 0

This means that if the list is complete (and relevant), there exists a solution in nature to the problem. Then there exists a set of (m - n) dimensionless variables [Π1, Π2, ..., Π(m-n)], where n is the number of fundamental dimensions involved, describing the same physical problem (Wheeler & Ganji 2004). These dimensionless parameters are related by another functional relationship (Wheeler & Ganji 2004):

F(Π1, Π2, ..., Π(m-n)) = 0

The results of this theorem are (Wheeler & Ganji 2004):

1. A physical problem can be described using a suitable set of dimensionless parameters,

2. The set of dimensionless parameters has fewer members than the set of dimensional variables.

Several techniques can be used to determine a set of dimensionless parameters from the dimensional variables. The resulting set of dimensionless parameters describing a physical problem is not unique; there are usually alternative sets, but some are preferable for practical and historical reasons. The goal is to reduce the number of dimensional variables, to simplify the experiments.
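As a classical illustration (not an example taken from this course): for a simple pendulum the period T, the length L, and the gravitational acceleration g give m = 3 dimensional variables built from n = 2 fundamental dimensions (length and time), so the theorem predicts m - n = 1 dimensionless parameter, for example Π1 = T · sqrt(g/L). Instead of varying T, L, and g separately, the experiment only has to establish that Π1 is constant, which is exactly the reduction in experimental work the theorem promises.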

25.2.4 Determining the Test Rig Scale

The test rig scale will depend on the physical phenomena that you are going to study and the type of measurements you are going to use. Use the dimensional variables to decide the size and effect of the rig.

25.2.5 Uncertainty Analysis

The uncertainty analysis is always important for any engineering experiment, as you will always deal with some type of measurement. The uncertainty analysis will be important in both the design phase and the data analysis phase.

In the design phase the equipment must be selected with sufficient accuracy so the data can be analyzed in the analysis phase. The result should be a table forming an uncertainty budget.

The analysis phase must contain some information about the systematic and random errors, and estimate the various loading and installation errors.
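A standard way to carry an uncertainty budget from the design phase into the analysis phase (stated here as a general reminder, with an invented numerical example) is the root-sum-square propagation of independent uncertainties. If the result R is computed from measured quantities x1, ..., xn with uncertainties u1, ..., un, then

u_R = sqrt( (dR/dx1 · u1)^2 + (dR/dx2 · u2)^2 + ... + (dR/dxn · un)^2 )

where dR/dxi are the partial derivatives of R with respect to each measured quantity. For a product such as an electric power measurement P = V · I this reduces to (u_P/P)^2 = (u_V/V)^2 + (u_I/I)^2, so 1 % uncertainty in both the voltage and the current gives approximately 1.4 % uncertainty in the power.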

25.2.6 Calibration/testing

Before starting the experiments the rig should be tested and calibrated. These are also called the shakedown tests. The rig should first be tested with known input and output parameters to check the correct function of the rig. Then the rig should often be calibrated so you know the relationship between a set of specific inputs and the corresponding outputs. The testing can be done in two different ways:

1. Static mode: When the rig is empty or not operating, the output signals should be known,

2. Special cases: Using a set of predefined input signals/states should give a known set of output signals.

It is very important to verify that the rig is working correctly before starting the experiments!
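A minimal sketch of how such shakedown checks could be automated, assuming the rig's output signals can be read by some DAQ call and that expected values and tolerances are known for a few predefined states; all names and numbers are invented placeholders.

# Minimal sketch: shakedown test of a rig against predefined input states.
# read_outputs() is a placeholder for whatever DAQ call the rig actually uses.
def read_outputs(state):
    # Placeholder: return the measured output signals for the given rig state.
    return {"level_m": 0.02, "flow_lpm": 0.0}

# Expected outputs and tolerances for each predefined state (invented values).
checks = {
    "empty tank, pumps off": {"level_m": (0.0, 0.05), "flow_lpm": (0.0, 0.1)},
}

for state, expected in checks.items():
    measured = read_outputs(state)
    for signal, (nominal, tol) in expected.items():
        ok = abs(measured[signal] - nominal) <= tol
        print(f"{state}: {signal} = {measured[signal]} "
              f"(expected {nominal} +/- {tol}) -> {'OK' if ok else 'FAIL'}")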


Figure 25.1: The results from the test matrix (Wheeler & Ganji 2004).

25.2.7 Test Matrix and Test Sequence

An experiment seeks to determine the relationship between one or more dependent variables, the responses, and a set of independent variables, often called factors. A test matrix can be used for these factors to define the test conditions for the experiments. Define the range for each factor and define the values that deserve some attention for each factor.

For many factors the test matrix can be complex. One solution can be to make a test matrix for a set of the factors, varying a subset, and plot the results from all these experiments in one diagram. One example of such a table is shown as:

            Y values
X values    Y1    Y2    Y3    Y4    Y5
X1
X2
X3

This table contains 15 measurements; the number of measurements will depend on the selected values for the factors. The combinations of these experiments are shown in Figure 25.1.

The test sequence is also important: which conditions of the experiments should be run in which order. In some experiments the order can be random; in other types of experiments the order must be defined before starting. This depends highly on the type of experiment, but should be defined during the planning of the tests.
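A small sketch for generating the full set of test conditions from the factor values, and for shuffling the run order in cases where a random test sequence is allowed; the factor names and values are invented.

# Minimal sketch: build a full-factorial test matrix and (optionally) randomize the run order.
import itertools
import random

factors = {
    "X": [10, 20, 30],                 # e.g. flow rate set points (invented values)
    "Y": [1.0, 1.5, 2.0, 2.5, 3.0],    # e.g. valve openings (invented values)
}

test_matrix = list(itertools.product(*factors.values()))   # 3 x 5 = 15 conditions
random.shuffle(test_matrix)    # only if the experiment allows a random test sequence

for run, condition in enumerate(test_matrix, start=1):
    settings = dict(zip(factors.keys(), condition))
    print(f"Run {run:2d}: {settings}")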

25.2.8 Documenting Experimental Activities

Documentation may be boring, but it is still very important. Experiments may be individual or group based. Individual experiments mean that only you will plan the experiments, do the experiments, and make the documentation. Group-based experiments will also involve some more project activities.

25.2.9 Group projects

Larger experimental projects must consist of several engineers working together. It is important that these engineers are working together as a group, not as individuals. This means that they should have some regular meetings to discuss the progress and the sub-tasks in the project. These meetings can be divided into formal and informal meetings. Group projects should always have a project leader.


Chapter 26

Meetings

Informal meetings

Small meetings, often short meetings once a week. Good advice is to include an agenda in an email prior to these meetings, often sent from the project leader.

Formal meetings

Formal meetings include a "Notice of meeting" and a "Minutes of meeting" and are often used in connection with milestones in the project. The milestones will be part of the planning and should be part of the schedule. The "Notice of meeting" should contain:

1. Name of the project; which project is the meeting for,

2. Place; the location, often a building and room number,

3. Date and time,

4. The name of the members of the meeting,

5. The agenda.

The "Minutes of meeting" should contain:

1. Name of the project,

2. The name of the members of the meeting (indicate who was present and who was not present),

3. The date and time,

4. A table of the subjects discussed in the meeting. Very often this table contains three columns: the ID, the subject, and the person responsible. An example of such a table can be:

ID   Subject                    Responsible
1    Subject #1 discussed ..
2    Subject #2 discussed ..    Name and date
..   ..
N    Subject #N discussed ..    Name and date

Effective meetings

1. Determine whether the meeting is necessary or not,

2. Be punctual; do not let the latecomers control the start of the meeting!

3. Be prepared,


4. Have an objective, "what should be the result of the meeting",

5. Have an agenda,

6. Start with the most important items,

7. Be clear about the responsibilities,

8. Document the meeting.


Chapter 27

Guidelines for documenting experiments

A report will be a document that communicates the work that has been carried out during some sort of experiment, group work, literature study, evaluation, or any combination of these. It is important to figure out who will be the readers of the report and the competence, experience and/or skills of these readers, and adjust the contents of the report to these readers.

27.1 Informal report

An informal report is mainly used for internal purposes, and should normally be as brief as possible. The main focus is to document an experiment, a meeting, or a discussion. This report should normally include an introduction, body, conclusion and recommendations. Always start the report with to whom the report is going, whom it is from, the date, and the subject. After the conclusion include your contact information if needed, at least the company, email and phone number. The "Recommendations" section can be used to list any other people who support the information in your report.

27.2 Formal report

A report shall communicate the work done and the results of the work. Remember that the grading or any decisions will normally be based only on the information in the report. Therefore, focus on getting all the necessary information for the readers into the report. A formal report should contain:

1. Title page; title, author, and date,

2. Abstract (or summary); brief problem description, brief description of methods, and the mainresults,

3. Preface; what the report is about, any changes of tasks, and credits,

4. List of tables and figures (optional)

5. Table of Contents

6. Introduction; short about the background, previous work, and new work,

7. Problem description; process and equipment description, measurement setup, etc.,

8. Theory; model and/or method development,

9. Methods; Method development,

10. Results; simulation results, model fitting, optimization results,

11. Discussion; the results, whether they are as expected or what the causes are, uncertainty, remaining work,

12. Conclusions; focus on the results and what you have learned,


13. Appendices; task description, details, listings, data sheets, etc.,

14. References; necessary information for finding the references,

15. Index (optional).

27.3 References

Literature references can be done in several ways, and it is up to the writer to select the right style. The most used styles are the Harvard style and the Vancouver style. Do NOT mix these styles within the same document.

References to web pages should be avoided, but if you really need to use such a reference, you should include the author (if the name of the author is not available, you may use the name of the organization/institution) as well as the title of the web article, the URL, and the date when you accessed the web page.

27.3.1 Harvard style

The Harvard style is recommended for making references. In this style, the reference in the text body should be placed in parentheses after (or in some cases inside) the sentence. It should include the author's name and the year of publication (example: (Flor, 2006)).

The references should be listed in detail in a separate list at the end of the report. They should be listed alphabetically.

27.3.2 Vancouver style

You should use the Vancouver style instead if you decide not to use the Harvard style (the name-and-year style). In this style, the references in the text are given as numbers in brackets (example: [3]), and the references in the reference list should be listed using the numbers in brackets as well. When using this style, the references should be given according to the order of appearance in the body of the text.

27.4 Article or paper

An article or paper is used to document any research work and should contain the following sections:

1. Introduction; introduce the problem you will be discussing in your article or write a short story of your experience with the problem,

2. Methods; discuss the methods that you want to use on the problem or challenge you outlined in the introduction. Break up each point into separate paragraphs,

3. Results; discuss the results of any experiments or simulations used with the methods. Break up each point into separate paragraphs,

4. Discussion; this should include a brief summary of the article.

A mnemonic rule can be IMRAD (Introduction, Methods, Results, And Discussion).


Bibliography

Bentley, J. P. (2005), Principles of Measurement Systems, 4th edn, Pearson Education Limited, Essex, England.

Caro, R. H. (2004), Automation Network Selection, ISA - The Instrumentation, Systems, and Automation Society.

Cravotta, R. (2008), 'Sensor-rich designs', EDN Europa pp. 24-28. www.edn-europa.com.

Fortuna, L., Graziani, S., Rizzo, A. & Xibilia, M. (2007), Soft Sensors for Monitoring and Control of Industrial Processes, Springer, London, UK. ISBN 1-84628-479-1.

Fraden, J. (2004), Handbook of Modern Sensors, 3rd edn, Springer, USA.

Furenes, B. (2009), 'Methods for batch-to-batch optimization of parallel production lines', PhD trial lecture.

Ifeachor, E. C. & Jervis, B. W. (2002), Digital Signal Processing, A Practical Approach, 2nd edn, Pearson Education Limited, Essex, England.

Int (1988), 386 Microprocessor, High performance 32-bit CHMOS microprocessor with integrated memory management.

Kester, W. (1999), Practical Design Techniques for Sensor Signal Conditioning, Analog Devices (Prentice Hall), Norwood, MA, USA.

Kirrmann, H. (2007), OLE for process control (OPC), Technical report, ABB Research Centre, Baden, Switzerland. Industrial Automation, OPC, Data Access Specification.

Krogh, E. (2005), OPC Seminar, Prediktor AS, Fredrikstad, Norway. www.prediktor.no (feb-07).

Mackay, S., Wright, E., Park, J. & Reynders, D. (2004), Practical Industrial Data Networks; Design, Installation and Troubleshooting, Elsevier (Newnes), Oxford, UK.

Mat (1999), Data Acquisition Toolbox User's Guide, 5th edn.

Meier, P. C. & Zünd, R. E. (2000), Statistical Methods in Analytical Chemistry, 2nd edn, John Wiley and Sons, Inc., New York, NY 10158-0012, USA.

Olsen, O. A. (2005), Instrumenteringsteknikk (in Norwegian only), Tapir Akademisk Forlag, Trondheim, Norway.

Olsson, G. & Piani, G. (1998), Computer Systems for Automation and Control, 2nd edn, Prentice Hall International (UK) Ltd., London, UK.

Olsson, G. & Rosen, C. (2003), Industrial Automation - Application, Structures and Systems, Lund University, Lund, Sweden.

Pearson, R. K. (2005), Mining Imperfect Data, Dealing with Contamination and Incomplete Records, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, USA.

Pettersen, O. (1984), Sanntidsprogrammering for Prosess-Styring (in Norwegian), 4th edn, Tapir forlag, Trondheim, Norway. ISBN 82-519-0263-0.

PROFIsafe (2009), 'PC-based, but safe!', Control Engineering Europe November/December, 28-30. www.controlengeurope.com.

Rausand, M. & Høyland, A. (2004), System Reliability Theory, Models, Statistical Methods, and Applications, 2nd edn, Wiley Interscience, John Wiley and Sons, Inc., Hoboken, New Jersey, USA.

Ripps, D. L. (1989), An Implementation Guide to Real-Time Programming, Prentice-Hall, Inc.

Schultz, T. W. (1999), C and the 8051, Building Efficient Applications, Prentice Hall PTR, New Jersey, USA.

Skavhaug, A. & Pettersen, S. (2007), 'Wireless technology - something for safety-related applications?'. Wireless seminar at HydroStatoil/Trondheim, 13-DEC-2007.

Skeie, N.-O. (2008), Soft Sensors for Level Estimation, PhD thesis, The Norwegian University of Science and Technology (NTNU).

Wheeler, A. J. & Ganji, A. R. (2004), Introduction to Engineering Experimentation, 2nd edn, Pearson, USA.

www.matrikon.com (2007), 'Matrikon'. OPC information.

www.wikipedia.org (2006), 'Wikipedia'. Wikipedia, the free encyclopedia.

www.wikipedia.org (2010), 'Wikipedia'. Wikipedia, the free encyclopedia.


Index

4-20 mA, 143
accuracy, 139
ADC
  integrating, 137
  sigma-delta, 136
  successive-approximation, 136
amplifier, 109
ATEX, 99
bandwidth, 110
big endian, 128
Bluetooth, 154
Bode diagram, 111
bytes, 127
calibration, 185
  process, 187
  sensor, 187
channel skew, 162
check
  limit, 142
  redundancy, 142
  validation, 142
CMRR, 111
COM
  Component Object Model, 23
common mode rejection ratio, 111
communication
  current loop, 143
  network, 145
  serial, 145
  wireless, 145
control loop, 153
critical region, 74
data acquisition system, 124
DDE
  Dynamic Data Exchange, 23
deadband, 35
deadlock, 72
distributed system, 157
DMA, 142
event, 61, 68
filter
  band-pass, 114
  band-stop, 114
  Bessel, 115
  Butterworth, 114
  Chebyshev, 114
  finite impulse response (FIR), 118
  high-pass, 114
  infinite impulse response, 118
  limit check, 142
  low-pass, 114
  moving average, 118
  redundancy check, 142
  validation check, 142
Firewire, 145
global positioning system, 152
GPIB, 145
GPS, 152
Graphical User Interface (GUI), 9
HAL
  Hardware Abstraction Layer, 92
HART, 99, 156
HART protocol, 121, 145
histogram, 175
Human-Machine Interface (HMI), 9
input loading, 113
input signal
  ADC, 134
  analog, 134
  differential (DI), 140
  digital, 130
  multiplexer, 131
  single ended (SE), 140
instrumentation bus, 145
interprocess communication
  IPC, 70
interrupt, 61, 82, 142
latency, 61
little endian, 128
LXI, 145
Man-Machine Interface (MMI), 9
mean time between failures, 7
MEMS, 107
model
  client/server, 27
  instrumentation amplifier, 113
  publisher/subscriber, 28
  sensing device, 113
multitasking, 61
noise, 122
outlier, 163
output loading, 113
output signal
  analog, 133
  DAC, 133
  digital, 131
PCMCIA, 145
polling, 82, 142
Posix.4
  Programming, 87
precision, 139
preemption, 61
priority, 61, 82
priority inheritance, 83
priority inversion, 83
protocol, 121, 122, 129, 145
  CSMA/CD, 70
  Ethernet POWERLINK, 71
  HART, 121, 145
  Profinet IRT, 71
  token ring, 70
  wireless, 146, 154
PXI, 145
real-time
  deadline, 60
  definition, 60
  event, 61
  interrupt, 61
  interrupt latency, 61
  multitasking, 61
  preemption, 61
  priority, 61
  resource, 60
  scheduler, 61
  simultaneousness, 60
redundancy, 7
reference voltage, 140
resolution, 139
resource, 72
RFID, 150
RS-485, 145
RTOS requirement, 90
Sample and Hold (S/H), 162
sampling
  aliasing, 163
SAS
  Safety and Automation System, 18
satellite navigation system, 152
SCADA, 9
scheduler, 61, 62
  strategies, 65
semaphore, 67
sensor, 98
  absolute, 100
  active, 99
  passive, 99
  relative, 100
sensor filter
  finite impulse response (FIR), 118
  infinite impulse response (IIR), 118
sigma-delta, 136
signal
  amplification, 109
  attenuation, 114
  combiner, 120
  conversion, 120
  differentiation, 120
  filtering, 114
  integration, 120
  linearization, 120
signal converters, 120
  frequency to current, 120
  frequency to voltage, 120
  voltage to current, 120
SIL
  Safety Integrity Level, 19
Simultaneous Sample and Hold, 162
Smart Sensors, 154
SOA, 45
software
  process, 77
  task, 80
  thread, 79
successive-approximation, 136
tag, 33
task state
  offline, 65
  ready, 65
  running, 65
  waiting, 65
transducer, 98
transmitter, 99
uncertainty
  analysis, 184
  budget, 184
USB, 145
User Interface (UI), 9
watchdog, 61
Wireless
  Bluetooth, 156
  ZigBee, 155
WirelessHART, 154, 156
XML
  Extensible Markup Language, 43
ZigBee, 154
  devices, 155