Unobtrusive Fall Detection and Prevention: Extending From a Prototype Test to a Pilot Trial

Henriette Rau, Jacob Grieger, Christian Marzahl, Peter Penndorf, and Martin Staemmler

University of Applied Sciences, ETI, Stralsund, Germany

{henriette.rau, jacob.grieger, christian.marzahl, martin.staemmler}@fh-stralsund.de, [email protected]

Abstract: Fall detection based on images is rated obtrusive and costly. This paper presents an unobtrusive fall detection system which allows nearly invisible positioning under furniture as well as cost-efficient integration and scalability in retirement homes via WLAN. The system classifies events using image analysis and notifies the caregiver if an alarming event occurs. It was successfully tested in a nursing home and a retirement home and refined based on the experience gained.

    Keywords: fall detection, fall prevention, context-aware, standards, AAL, complex event processing, pilot trial, evaluation

    1 Introduction

Due to the demographic change in Germany, the number of elderly people is continuously increasing. Statistics show that the risk of falling, the number of recorded falls and the seriousness of injuries increase exponentially from the age of 70 onwards [Fun07]. Therefore, the prevention and detection of falls is an important topic for ambient assisted living (AAL) research.

This paper presents lessons learned from a prototype test of an unobtrusive fall detection system and how they were used to design and implement a pilot trial in a retirement home. The initial system used for the prototype test is introduced in Chapter 2. The lessons learned, presented in Chapter 3, detail the re-engineering of both the system design and the integration for the pilot trial in a retirement home. Chapter 4 gives first trial results and a short discussion.

    2 Initial Prototype Test

Based on previous work [Mar12], this chapter describes the architecture of the initial fall detection and prevention system and the experience gained from running the prototype test. The prototype test was performed from August to December 2011 in two inhabitants' rooms of a nursing home. Its objective was to obtain initial feedback on the prototype developed.


2.1 System Design

    Figure 1 shows the three main components of the system used for the prototype test. These components are described in the following.

Figure 1: Overview of the System Components (fall detection sends events and status to central processing, which sends alarms to a smart phone / work station)

    2.1.1 Fall Detection

    The fall detection component involves the Microsoft Kinect [Kin12] camera and a computer for image analysis and event generation. These devices are connected via USB (Figure 2a).

Figure 2a: Fall Detection Components (the Kinect delivers 3D images via USB to the computer for image analysis and event generation)

Figure 2b: Central Processing Unit (event database with event storage and event analysis; alarm database with alarm storage and alarm configuration)

The Kinect camera delivers 3D depth images to the image analysis unit. This unit processes the data image by image with distinct algorithms (e.g. segmentation, feature extraction) and classifies the current situation using a decision tree [Mar12]. The classification process generates an event and calculates the probability of its occurrence. The implemented event types are:

- Fall (person fell)
- Activity (person wandering in the room)
- Feet (feet in front of the bed)
- Bath (bath door opened)
- Exit (room door opened)
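To make the classification step more concrete, the following minimal Python sketch shows how per-frame features could be fed to a decision tree that returns an event type together with a probability. The feature set, the training data and the use of scikit-learn are illustrative assumptions; the actual features and the trained model are described in [Mar12] and are not reproduced here.

```python
# Illustrative sketch only: the features, training data and model below are
# placeholder assumptions, not the feature set or decision tree from [Mar12].
import numpy as np
from sklearn.tree import DecisionTreeClassifier

EVENT_TYPES = ["Fall", "Activity", "Feet", "Bath", "Exit"]

# Assumed per-frame features of the segmented person:
# [centroid height in m, bounding-box aspect ratio, vertical velocity in m/s]
X_train = np.array([
    [0.2, 2.5, -1.8],   # low centroid, wide box, fast downward motion -> Fall
    [1.0, 0.6,  0.1],   # upright, slow motion                         -> Activity
    [0.3, 1.2,  0.0],   # low region in front of the bed               -> Feet
    [1.1, 0.7,  0.0],   # upright near the bath door                   -> Bath
    [1.0, 0.8,  0.2],   # upright near the room door                   -> Exit
])
y_train = np.arange(len(EVENT_TYPES))

classifier = DecisionTreeClassifier().fit(X_train, y_train)

def classify_frame(features) -> tuple[str, float]:
    """Return (event type, probability) for one depth frame's feature vector."""
    probabilities = classifier.predict_proba([features])[0]
    best = int(np.argmax(probabilities))
    return EVENT_TYPES[best], float(probabilities[best])

print(classify_frame([0.25, 2.3, -1.5]))   # e.g. ('Fall', 1.0)
```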

Generated events, containing the event type and data such as timestamp, room ID and probability, are sent via cable or wirelessly to the central processing component. Furthermore, the fall detection component generates a heartbeat signal enabling the central processing component to detect unavailability of the fall detection system.
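A minimal sketch of what such event and heartbeat messages could look like is given below. The JSON structure, the field names and the HTTP endpoint are assumptions for illustration; the paper does not specify the actual transport format.

```python
# Sketch of the event and heartbeat messages the fall detection component
# sends to central processing. The JSON field names, the endpoint URL and the
# use of HTTP are illustrative assumptions, not the project's actual protocol.
import json
import time
import urllib.request

CENTRAL_PROCESSING_URL = "http://central-processing.local/events"  # hypothetical

def send(payload: dict) -> None:
    """POST a JSON payload to the central processing web service."""
    request = urllib.request.Request(
        CENTRAL_PROCESSING_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

def send_event(event_type: str, room_id: str, probability: float) -> None:
    # Event message: type, timestamp, room ID and the classifier's probability.
    send({"kind": "event", "event_type": event_type, "room_id": room_id,
          "probability": probability, "timestamp": time.time()})

def send_heartbeat(room_id: str) -> None:
    # Periodic heartbeat so central processing can detect an unavailable
    # fall detection component.
    send({"kind": "heartbeat", "room_id": room_id, "timestamp": time.time()})
```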

    2.1.2 Central Processing

The central processing unit is implemented as a bundle of web services with distinct assignments (Figure 2b). A web service receives detected events and stores them in an event database. A time-triggered process analyses the stored events belonging to the same context (i.e. room or inhabitant) of the last ten minutes to detect conditions for raising an alarm (event-condition-action). These alarm condition rules have been configured manually in the alarm database for each context and consist of constraints (e.g. time or mutual exclusion of events) and if-then rules taking event probabilities into account. If the analysed data matches an alarm condition, an alarm actuator executes the appropriate alarm action. These actions have also been configured manually in the alarm database and comprise notifications sent to smart phones via an Asterisk [Abo12] server or to work stations via a web interface.
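The event-condition-action step can be pictured roughly as follows. The data model, the probability threshold and the example rule are illustrative assumptions; the real alarm conditions are configured per context in the alarm database.

```python
# Rough sketch of the time-triggered event-condition-action analysis described
# above. The data model, the threshold and the example rule are assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Event:
    event_type: str      # "Fall", "Activity", "Feet", "Bath" or "Exit"
    room_id: str         # context the event belongs to
    probability: float   # probability attached by the classification process
    timestamp: datetime

WINDOW = timedelta(minutes=10)   # only events of the last ten minutes are analysed

def fall_alarm_condition(events: list[Event], room_id: str, now: datetime) -> bool:
    """Example if-then rule: alarm if a high-probability Fall occurred in the
    room and no Exit event was registered afterwards (mutual exclusion)."""
    recent = [e for e in events
              if e.room_id == room_id and now - e.timestamp <= WINDOW]
    falls = [e for e in recent
             if e.event_type == "Fall" and e.probability >= 0.8]
    if not falls:
        return False
    last_fall = max(falls, key=lambda e: e.timestamp)
    return not any(e.event_type == "Exit" and e.timestamp > last_fall.timestamp
                   for e in recent)

def evaluate_context(events: list[Event], room_id: str) -> None:
    """Time-triggered action part: execute the configured alarm action."""
    if fall_alarm_condition(events, room_id, datetime.now()):
        print(f"ALARM: possible fall in room {room_id} - notify the caregiver")
```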

Figure 3: Alarm Notification Process (central processing sends alarm notifications over LAN/WLAN to the caregiver's smart phone or work station)


2.1.3 Smart Phone / Workstation

In most cases the receiver of an alarm notification is a caregiver assigned to the room, inhabitant or ward the alarm originates from. Figure 3 shows that the caregiver may be notified either via work station or via smart phone in order to provide immediate assistance.

    2.2 Test Accomplishments and Experiences of the Prototype Test Phase

The prototype test revealed some unwanted behaviour of the system, e.g. false-positive alarms, which resulted in an immediate correction of algorithms and rules. Thus, the false-positive alarm rate could be reduced. During the prototype test one real fall occurred, which was correctly detected. The images could also show what happened before, during and after the fall. Using only depth images ensures privacy through the anonymous detection of activities and falls. The prototype test also showed the need to protect the system's hardware from access by unauthorized persons and from mechanical influences, e.g. shifting the camera or pulling the power plug. To reduce the computational effort, the initial image analysis was enhanced to ignore non-moving objects and thereby assign more effort to the event classification. Furthermore, following the need to synchronize the client and server clocks, NTP has been established. Without proper synchronisation, false-negative alarms occurred because a client's timestamp could appear earlier than the server's event registration.
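One of these refinements, ignoring non-moving objects, can be pictured as a simple frame-differencing step on the depth images. The threshold and the use of NumPy are illustrative assumptions, not the project's actual implementation.

```python
# Sketch of masking out non-moving objects before event classification, so the
# computational effort is spent on the moving person. The threshold value is
# an illustrative assumption.
import numpy as np

MOTION_THRESHOLD_MM = 50  # assumed minimum depth change to count as motion

def motion_mask(previous_depth: np.ndarray, current_depth: np.ndarray) -> np.ndarray:
    """Return a boolean mask of pixels whose depth changed noticeably."""
    return np.abs(current_depth.astype(np.int32)
                  - previous_depth.astype(np.int32)) > MOTION_THRESHOLD_MM

def moving_foreground(previous_depth: np.ndarray, current_depth: np.ndarray) -> np.ndarray:
    """Zero out static pixels; only the moving parts are passed on to the
    segmentation and classification steps."""
    mask = motion_mask(previous_depth, current_depth)
    return np.where(mask, current_depth, 0)
```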

During the prototype test, the probability of an event was calculated for every single image. To minimize fluctuating probability values, analysing two or more consecutively captured images has been identified as a target for the pilot trial. For a better understanding of the context leading to a fall, the alarm notification of the pilot trial needs to be enhanced by saving the 3D images captured a few minutes prior to the detected event using a data queue.
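Both refinements can be sketched as a small buffering step in front of event generation. The frame rate, the look-back period and the smoothing window below are illustrative assumptions.

```python
# Sketch of the two refinements planned for the pilot trial: smoothing the
# event probability over consecutive images and keeping a bounded queue of
# recent 3D depth images so the minutes before a detected event can be
# reviewed. Frame rate and window sizes are illustrative assumptions.
from collections import deque

FRAME_RATE = 30            # assumed depth frames per second
LOOKBACK_MINUTES = 3       # assumed length of the pre-event image history
SMOOTHING_FRAMES = 5       # number of consecutive images to average over

frame_queue = deque(maxlen=FRAME_RATE * 60 * LOOKBACK_MINUTES)
probability_window = deque(maxlen=SMOOTHING_FRAMES)

def on_new_frame(depth_image, frame_probability: float) -> float:
    """Buffer the frame for later review and return the smoothed probability
    used for event generation instead of the single-image value."""
    frame_queue.append(depth_image)
    probability_window.append(frame_probability)
    return sum(probability_window) / len(probability_window)

def images_before_event():
    """Return the buffered depth images captured prior to the detected event."""
    return list(frame_queue)
```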

    3 Designing the Pilot Trial

After improving the algorithms through the prototype test, a pilot trial was performed to analyse the system's suitability to operate permanently in a retirement home. The following descriptions are in general terms and do not focus on this particular home's technical installation requirements, since these vary from home to home.

To eliminate possible obstacles in the new retirement home, a two-week pre-trial with healthy test persons was performed in March 2012. After that, the main field trial in two inhabitants' rooms started in May and will end in August 2012.


3.1 System Design

The key objective for the pilot trial was to further reduce the false-positive rate of 63% obtained in the pre-trial. Therefore, both the certainty of the sensor data evaluation and the impact of context information had to be increased. The lessons learned from the prototype test led to extensive changes of the system architecture on various levels. Table 1 shows the most significant changes.

Table 1: Changes to the System Architecture

Prototype Test | Pilot Trial
one sensor: Kinect + image processing | various sensors: Kinect + image processing and door contact sensors
single data evaluation | data fusion and evaluation
sensor data is processed centrally with one rule set | sensor data is handled as an event, transported over event streams and processed in an event processing network (EPN) comprising event processing agents (EPAs) with specialized rule sets
inference at context level | inference at situation level
alarm notification | alarm notification and image display

    The event-driven approach applied to the prototype test was extended by the complex event processing (CEP) architecture in order to achieve:

- Actuality: Real-time data acquisition and analysis for immediate alarm notification
- Efficiency: Ability to handle large data volumes from various sensors [Etz10]
- Scalability: Capability to scale up the number of sensors and EPAs
- Agility: Possibility to add additional sensors and additional EPAs
- Maintainability: Separation of processing logic through EPAs

The initial design included one sensor, i.e. the Kinect and the image processing unit. To gain better certainty, context information and additional sensors such as door contacts are included in the pilot trial. Sensor data with corresponding context is fused to generate relevant context information. A reasoning engine infers the current situation from the different types of context information. Following the approach of CEP [Bru10], all sensor data is handled and represented as a simple event. Figure 4 shows the event streams associated with different levels of data processing.


Figure 4: Sensor data and event streams at the different levels of data processing
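As a rough illustration of the EPN idea, the sketch below shows one possible event processing agent that fuses the image-based Exit event with the door contact sensor of the same room before emitting a derived situation. The class names, the time window and the confidence handling are assumptions and do not reflect the project's actual rule sets.

```python
# Sketch of one event processing agent (EPA) in an event processing network:
# it fuses the image-based "Exit" event with the door contact sensor of the
# same room before emitting a derived situation. Names, the time window and
# the confidence handling are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable

@dataclass
class SimpleEvent:
    source: str        # "kinect" or "door_contact"
    event_type: str    # e.g. "Exit" or "DoorOpened"
    room_id: str
    probability: float
    timestamp: datetime

class RoomExitEPA:
    """Emits a derived 'RoomLeft' situation only if the image-based Exit event
    is confirmed by a door contact event within a short time window."""

    def __init__(self, emit: Callable[[SimpleEvent], None],
                 window: timedelta = timedelta(seconds=30)):
        self.emit = emit            # next stage of the event processing network
        self.window = window
        self.pending: dict[str, SimpleEvent] = {}   # room_id -> last Exit event

    def on_event(self, event: SimpleEvent) -> None:
        if event.source == "kinect" and event.event_type == "Exit":
            self.pending[event.room_id] = event
        elif event.source == "door_contact" and event.event_type == "DoorOpened":
            exit_event = self.pending.pop(event.room_id, None)
            if exit_event and event.timestamp - exit_event.timestamp <= self.window:
                # Confirmation by a second sensor increases the certainty.
                self.emit(SimpleEvent("epa", "RoomLeft", event.room_id,
                                      min(1.0, exit_event.probability + 0.1),
                                      event.timestamp))
```

Several such agents with specialized rule sets could then be chained over event streams, which is what the event processing network in Table 1 refers to.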