1. HYPOTHESIS and OVERVIEW
We hypothesize that an environmentally aware, network-connected, robotic assistive device will be able
to prevent a number of problem events (particularly falls) and reduce mean response times to several
classes of key health problems by way of risk assessment and unobtrusive monitoring. This will
significantly reduce costs and lighten the burden on existing health care resources. Figure 1 depicts the
overall proposed system, which will incorporate a robotic smart walker, an in-house base station, and a
remote analytics server. The walker will provide environment-aware mobility assistance as well as
recording sensor data about the user and home activities. These data will be passed wirelessly to the
base station, which will compress and transmit them over a wired Internet connection for analysis,
visualization, and secure storage at the remote server, and will also store navigational knowledge for
later use by the walker.
In order to ensure maximum accuracy in fall detection, health condition prognosis, and
environment awareness, the sensors involved will include at least visible and infrared light imaging,
omnidirectional and pan/tilt directional audio, accelerometers, and pulse/heartbeat sensors. Research
performed will include cutting-edge feature preprocessing and adaptation, navigation in dynamic
cluttered environments, multimodal signal compression, and statistical models of day-to-day health.
These will support generation of symptomatic alerts of both long-term problems (tremor progression,
changes in respiration or sweating, altered voiding regimen) and immediate problems (falls, distress).
Patient health histories will be stored in compliance with emerging medical standards.
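The long-/short-term alert generation described above can be illustrated with a minimal sketch. This is not the proposed statistical model; the thresholds, window sizes, and daily heart-rate input are hypothetical choices for illustration. An immediate alert fires on any single out-of-range reading, while a long-term alert fires when a recent average drifts away from an earlier baseline.

```python
from statistics import mean, stdev

# Hypothetical thresholds for illustration only.
IMMEDIATE_HR_LIMIT = 150.0   # bpm: flag any single reading above this
DRIFT_Z_LIMIT = 2.0          # flag when recent mean drifts > 2 sigma from baseline

def generate_alerts(daily_hr, baseline_days=7, recent_days=3):
    """Return (immediate, long_term) alert flags from a daily heart-rate history."""
    immediate = any(hr > IMMEDIATE_HR_LIMIT for hr in daily_hr)
    if len(daily_hr) < baseline_days + recent_days:
        return immediate, False          # not enough history for drift detection
    baseline = daily_hr[:baseline_days]
    recent = daily_hr[-recent_days:]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return immediate, False
    z = abs(mean(recent) - mu) / sigma   # standardized drift of the recent mean
    return immediate, z > DRIFT_Z_LIMIT
```

In a deployed system the baseline would adapt per patient (the person-to-person adaptation objective) rather than use fixed windows, but the two-timescale structure is the same.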
2. OBJECTIVES FOR PERIOD OF PROPOSED WORK
Listed here are the objectives for the proposed project, with additional description in Sections 3 and 4:
- Research, design, and test a novel unobtrusive smart walker and a base station
- Research, implement, and test navigation algorithms for the smart walker based upon
unobtrusiveness criteria
- Research, implement, and test novel algorithms for smart walker environmental awareness
- Research, implement, and test biosignal modeling and person-to-person adaptation
- Research, implement, and test novel algorithms for biosignal processing
- Research, implement, and test novel post-fall detection algorithms based upon audiovisual event
detection and visual pose estimation
- Research, implement, and test novel medicine dosing event detection algorithms based upon
audiovisual object recognition and visual pose estimation
- Research, implement, and test novel boosted features based upon SLAM and visual context
- Research, implement, and test novel long-/short-term biosignal alerts based upon sensor history
- Develop an implementable architecture that lays out the major alternatives for partitioning of
work among processing nodes and associated information exchange among nodes
Figure 1. Proposed system (smart walker, base station, server)
- Develop implementable criteria for processing of sensed data for local handling versus forwarding
downstream to meet the effectiveness and efficiency objectives of the overall architecture
- Develop an implementable approach to metadata characterization of patient data compatible with
existing health information technology standards
3. INTELLECTUAL MERIT
Research performed will include cutting-edge feature preprocessing and adaptation, navigation in
dynamic cluttered environments, activity recognition, fall risk assessment based upon pose, audiovisual
fall detection, statistical models of day-to-day health, symptomatic alerts of long-term (medicine
noncompliance) and immediate problems (falls), and health data management. Results in all areas are
expected to build upon the state of the art and improve the fields of robotics and health monitoring in the
context of service-oriented architectures for ambient assisted living.
4. LITERATURE REVIEW AND RESEARCH CHALLENGES
As this is a collaborative proposal, there are four main components (teams):
Geriatric Need, Unobtrusive Smart Walker Design, Event Understanding and Alert Generation, and
Service Oriented Architecture for Health IT.
4.1 Geriatric Need (Rochester General Hospital System – RGHS)
The population of the United States is an aging population: the US Census Bureau estimates that the
population of people age 65 years or older will increase from 39.6 million (12.9%) in 2009 to 72.1 million
(19.3%) by 2030 [1]. With this shift in demographics comes an increased demand for health care resources,
with the availability of these resources recognized as a key indicator of well-being [2]. Clinical studies
show that 33% of U.S. adults aged 65 or older suffer falls in a given year [3, 4], with almost one-third
requiring medical attention and further falls occurring for close to 60% within a year. Direct and indirect
costs of falls are on the order of billions annually (e.g., $19 billion in 2000 [5]), some or all of which can be
prevented through risk factor minimization and immediate responder notification [6, 7, 8]. Likewise,
nonadherence to prescribed medical regimens is estimated to be as high as 38% [9], in spite of existing
methods of determining capacity to manage medications [10], resulting in reduced effectiveness and
potential for relapse. Our proposal is aimed at minimizing high-cost independent living health risks (such
as falls and medication nonadherence) for the elderly through robotics.
As it stands, a large proportion of the elderly population makes use of assistive technology (26% of
the chronically disabled without personal care, 58% with some personal care [11]). Mobility devices are
by far the most common, surpassing all anatomical, hearing, and vision devices. Therefore, we propose
the research and development of a robotic system that duplicates the benefits of a mobility device
(exercise, stability) while unobtrusively identifying risks and maximizing information for medical staff.
This research will advance the state of the art across several fields while meeting a key societal need.
4.2 Unobtrusive Smart Walker Design (Electrical and Microelectronic Engineering, RIT)
4.2.1 Assistive Robotics
Assistive devices have remained crucial throughout the advancement of humankind. Canes, eyeglasses,
hearing aids, and work tools have been instrumental in technological and intellectual development.
Recently, intelligent robotic systems have been used to help the elderly, disabled, blind, and injured.
Research has been done on aiding the blind through senses other than vision via a mobility aid called
a Smart Blind Cane, which uses a smartphone-based collision warning system (built around sensors such
as ultrasonic range and temperature) and text-to-speech to notify the user of their surroundings [37]. Other
related devices include a thermal digital device that lets visually impaired individuals visualize works of
art by touch, glasses that measure time between blinks, and a device that uses sensors to smell [39]. For
elderly visually impaired people who have difficulty using guide dogs or long canes, walkers or walking
frames with wheels have been designed to help them navigate; one such device is called PAM-AID [38].
In general, there have been many devices and robots researched for walking assistance and helping
the elderly in nursing homes and hospitals, with wheelchairs and walkers the most commonly used.
Researchers have tried to augment these walking aids with robots and sensors to make them smart and
autonomous. Exoskeleton devices [42] and robotic suits [55, 56, 57] have also been used as mobility and
rehabilitation aids. In one of these experiments, an anthropomorphic approach was taken by mounting a
robotic Manus® arm [41] on a wheelchair and letting the user control the arm through a user-friendly
interface [40]. Along with walking, assistance is also needed in standing and sitting. Devices have been
prototyped that help with all three operations to prevent injuries like fractures and dislocation of bones
while trying to stand or sit [43, 45]. Intelligent wheelchairs with obstacle avoidance and independent
control can be used to carry the elderly and/or disabled or move independently in a smart environment
as a sensorial extension, and have been implemented on at least one occasion [44]. Another such
implementation is a call-to-service walking helper robot that comes and provides assistance by locating
the user in a smart environment [35]. In many cases, however, the user desires assistance that respects
their freedom and control within their ability, yet stays within an envelope of safety [49, 50]. For this
reason, semi-autonomous mobility aids have been designed that take user intent into account [46, 47] or
are interactive [48]. Figure 2 shows some of these devices.
Figure 2. Examples of assistive walking and posture devices, left to right [35, 26, 49, 50]
Further examples explicitly requiring patient effort include exoskeletons supporting leg alignment
[42], full robotic suits [54, 55], robotic arms and exoskeletons for rehabilitation of upper limbs with weak
muscles [58] and patients post-stroke [52], orthotic ankle devices [53] and pelvic assistance [59] for gait
disorders, and cable-driven machines for rehabilitation [56]. Also developed is an exoskeleton for eating
that simulates upper limb motor patterns in a healthy person and can be controlled by wrist action [51].
Many of these assisted elderly and disabled people live in nursing homes or in assisted living facilities with
constant assistance and monitoring. Moreover, caregivers need help in managing assistive devices. For
this reason, there has been work in robotics to push and steer wheelchairs on a fixed path [60] or come
when called [35], and dual-arm robots to help dress patients [64]. Models have been suggested to make
homes and/or hospitals context-aware environments sensitive and responsive to the presence of humans
using embedded systems, sensor networks, and robots [62]; based on feedback from health care
organizations, efforts are being made to improve the behavior, appearance, and functionality of such
systems [59]. Other related efforts include mapping fleets of helper robots, camera-based patient tracking
[61] and audiovisual human-robot interaction [65]. Studies on the social and psychological effects of
robotic-assisted activities on elderly people [66, 67] show improved moods and less depression. Research
on socially-acceptable robots [68] and graphical user interfaces for the elderly are underway [69]. This
extends to wearable life-sign sensor vests for prognosis [73], shoes with acceleration sensors to detect falls
and gait disabilities [74], and floors with sensors to determine positioning and falls [63]. Children with
disabilities and suffering from autism or cerebral palsy have also benefited from robots and robotic toys
that help them learn and interact [70, 71, 72].
Research Challenge
There have been several efforts to develop smart assistive mobility devices for the elderly. However,
these efforts have been limited to one or two aspects of care, and the resulting devices are obtrusive. In
aiding the elderly, special care must be taken to design socially acceptable devices that are minimally
obtrusive. Furthermore, most prototypes have not been networked, limiting their use and reporting
capabilities. To assist the elderly without disrupting their daily lives, the assistive device needs to be
unobtrusive, semi-autonomous, and networked. Thus, in this proposed work we plan to design a walker
that is functionally similar to a manual walker but is also networked to collect, monitor, and report life
signs.
4.2.2 Mobile Robot Navigation
In mobile robotics, navigation is one of the most fundamental and challenging tasks in robot autonomy;
hence, a lot of research has been done to address challenges such as positioning and orientation,
analyzing surroundings (mapping), conveying this information to the robot, and choosing a subsequent
appropriate action or path to take. Navigation is twofold: local and global. Local navigation, often called
reactive control, is how the robot learns or plans local paths using the sensory inputs without prior
complete knowledge of the environment. Global navigation, also called deliberate control, is where the
robot learns or plans the global paths based upon complete knowledge about the environment [26].
Additionally, challenges in navigating indoors and outdoors are addressed differently. An indoor
environment can be made smart by using a network of sensors or by placing identifying landmarks.
However, in an outdoor environment, all the sensors generally have to be on the robot and hence the
robot has to be independent of the environment to navigate.
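The local/global distinction above can be made concrete with a toy sketch of reactive (local) control. The three-beam sensor layout and the discrete commands are illustrative assumptions, not the walker's actual interface: the controller reacts to current range readings only, with no map or prior knowledge of the environment.

```python
def reactive_step(ranges, clear_dist=0.5):
    """One cycle of reactive (local) control from range-sensor readings.

    ranges: distances in meters at [left, front, right].
    Returns a discrete steering command; no map is consulted.
    """
    left, front, right = ranges
    if front > clear_dist:
        return "forward"                 # path ahead is clear
    # Front blocked: turn toward the more open side
    return "turn_left" if left > right else "turn_right"
```

A deliberative (global) planner, by contrast, would choose whole paths from a map; the reactive layer only keeps the robot safe between planner updates.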
Sensors generally used include RFID (Radio Frequency Identification) tags, sonar or ultrasonic range
sensors, cameras, magnetic sensors, and laser or infrared scanners. Vision-based navigation is the most
common, and many research efforts combine image processing with range sensing (odometry). For example, in
an unknown indoor environment, local navigation is achieved either through monocular vision-based
simultaneous localization and mapping (SLAM) [16, 17] by mounting a camera on the robot facing
upwards and extracting features like lamps, doors, and corners, with laser scans for odometry [12, 13]; or
by using stereo vision and extracting 3D features from static and dynamic obstacles [14, 15]. For
navigation in unknown environments without vision, networks of RFID tags or passive RF sensors [18,
19], magnetic landmarks [22], and ultrasonic sensors [23] have been used. A sensor network can also be
used for SLAM by using triangulation localization methods [20, 21], including coupling with limited
sensing [24]. SLAM variants include complete coverage navigation (CCN), where the robot passes
through all points in a changing environment [25]; hierarchical Q-learning for coupled global and local
navigation in high-accuracy tasks [26]; predictive SLAM (P-SLAM) for exploration [27]; uncertainty-
based algorithms [28]; and probabilistic localization using indoor GPS and laser scanners [29],
topological models [31], or occupancy maps [30], among many more. Further research includes algorithms for
multiple robots such as reconnaissance and surveillance for a large urban environment [33] and adaptive
road mapping [32].
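On the deliberative side, global planning over an occupancy map reduces to graph search. As an illustrative sketch (not any of the cited algorithms), breadth-first search over a binary occupancy grid finds a shortest 4-connected path:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest 4-connected path on a binary occupancy grid (0 = free, 1 = occupied).

    Returns a list of (row, col) cells from start to goal inclusive, or None
    if the goal is unreachable. Breadth-first search gives the shortest path
    because every grid step has unit cost.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # walk parent links back to start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and step not in came_from:
                came_from[step] = cell
                frontier.append(step)
    return None
```

Practical planners replace the binary grid with a probabilistic occupancy map updated by SLAM, and unit costs with travel or risk costs, but the search structure is the same.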
Research Challenge
The main challenge in navigating the Smart Walker is to move with precision under passive motion,
where the patient holds and pushes the walker. In addition, environmental awareness is critical for the
walker when deciding between active and passive motion. Related applications of mobile robot navigation
include call-to-service mobile robots for elderly patients (Walbot), which use ZigBee sensor networks for
localization and tracking of the caller [35], service robots for transportation in hospitals [34], and an
autonomous vehicle for indoor surveillance [36].
4.2.3 Biosignal Processing and Classification
Analyzing and understanding human biosignals has been an important research area with practical
applications in everyday life. Brain-computer interfacing (BCI) is a research area that studies controlling
devices by processing and understanding brain activity via Electroencephalogram (EEG) signals [75-78].
Similarly, assistive robots [79] are being developed using eye [80] or muscle signals [81-87] to provide
control inputs. The efficiency for all of these applications depends on being able to process and classify
biosignals. Generally speaking, the most frequently used biosignals are Electroencephalogram (EEG),
Electrooculogram (EOG), and Electromyogram (EMG). Recorded EEG/EMG/EOG signals can serve as
control inputs for real-time systems such as BCI-based assistive devices or detection systems for brain
abnormalities [76]. Therefore, most of the applications that utilize biosignals require careful consideration
in terms of collection, processing, and information extraction [79, 80, 85-95]. Biosignal applications
include human computer interfaces [75, 76, 96], emotional state classification [88], mobility device control
[97], prosthetic arm control [81, 82, 86], and detecting driver drowsiness [92].
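A core step shared by these applications is reducing high-dimensional biosignal features to a few informative components. A minimal sketch of PCA as used in such pipelines (the input layout, one feature row per analysis window, is an assumption for illustration):

```python
import numpy as np

def pca_features(windows, n_components=2):
    """Project biosignal feature vectors onto their top principal components.

    windows: (n_samples, n_features) array, e.g. one row of per-channel
    statistics per analysis window. Returns an (n_samples, n_components)
    array of scores on the leading components.
    """
    X = windows - windows.mean(axis=0)        # center each feature
    # SVD of the centered data is equivalent to eigendecomposition of the
    # covariance matrix, and numerically more stable.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T
```

The leading components capture the directions of greatest variance, so a downstream classifier sees a compact representation rather than raw per-channel features.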
In preprocessing, filtering is commonplace, coupled with feature extraction methodologies including
Principal Component Analysis (PCA) [77, 78, 93], Analysis of Variance (ANOVA) [77], Linear