Human Activity Recognition Using Machine Learning & IoT Devices, Part I

Introduction

Coming from a background in computer science, I was familiar with Machine Learning and its capabilities, although admittedly I had never considered the impact it could have on the world of healthcare. After joining the team here at RxDataScience, I was tasked with implementing a decision tree to detect what activity a person in a room was undertaking, using data collected via a radio frequency identification (RFID) tag and sensors mounted in the corners of the room.
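To make that concrete, here is a minimal sketch of the kind of decision-tree classifier involved, using scikit-learn. The feature layout and activity labels are assumptions for illustration (random placeholder data stands in for the RFID signal-strength readings); Part II will cover the actual algorithm and data in detail.

```python
# Illustrative sketch only: the feature columns and labels below are
# placeholders, not the real RFID dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))     # hypothetical features, e.g. RSSI from
                                   # corner antennas plus the antenna ID
y = rng.integers(0, 4, size=1000)  # hypothetical activity labels, e.g.
                                   # 0=on bed, 1=on chair, 2=lying, 3=walking

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

One appeal of a decision tree in this setting is interpretability: capping `max_depth` keeps the learned rules small enough to inspect, which matters when care staff need to understand why an alert was raised.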

Wearable Technology

In recent years the market surrounding wearable technology has grown exponentially, and with good reason: the shift indicates how healthcare is transforming from treatment to prevention. A range of devices is currently in circulation across the industry, from smart wristbands to devices which appear to be adhesive bandages but are in fact skin-worn circuits. The data from these sensors is collected and stored before being analysed, with key decisions then made upon the results.


Activity Recognition

The purpose of classifying what activity a person is undertaking at a given time is to allow computers to provide assistance and guidance to that person before or during a task. The difficulty lies in how diverse our movements are as we perform our day-to-day tasks.

The Risk of Falls

The long-term goal of collecting this data is to mitigate the risk of bedside falls among the elderly using lightweight sensors. Falls in the elderly have become a growing topic in recent years as demographics have shifted towards an ageing population. The Centers for Disease Control and Prevention reports that each year 29 million older people fall, with 2.8 million of these cases treated in emergency departments, leading to Medicare costs of $31 billion. These falls account for the deaths of 28,000 American citizens annually. While death is obviously the worst-case scenario, a fall has many possible outcomes, ranging from bone fractures to traumatic brain injuries to depression (Cdc.gov, 2018).

Previous studies in this area were based around test subjects wearing larger, battery-powered sensors strapped to their chest. These wireless receivers require maintenance while being expensive and bulky. As a result they have proved unpopular with those asked to wear them and have been deemed unfit for the task at hand (Ruan, 2016).

Goals

By recognising and classifying the activity a person is taking part in, we can determine what is normal and what is abnormal activity for them, and therefore whether or not they require attention from facility staff. With accurate classification we can decrease the rate of false alarms and improve response times, as live sensor data can be processed and alarms raised when certain thresholds are exceeded (Wickramasinghe et al., 2017). A fall can be detected by studying the acceleration of the individual in the various planes of movement: a decrease in acceleration followed by a sharp increase suggests a fall has occurred. When stationary, our measured acceleration is 1 g, and perfect free fall would register 0 g. In practice a falling person will not reach 0 g, and the recorded value will be closer to 0.5 g. When the person then hits the floor, a large spike in acceleration is recorded (Gjoreski et al., 2014).
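As a rough illustration of that dip-then-spike pattern, the sketch below scans accelerometer magnitudes for a near-free-fall reading followed shortly by an impact spike. The 0.6 g and 1.8 g thresholds and the 50-sample window are illustrative assumptions, not tuned values from the cited papers.

```python
import numpy as np

def detect_fall(acc_xyz, low_g=0.6, high_g=1.8, window=50):
    """acc_xyz: (n, 3) array of accelerometer samples, in units of g.

    Flags a fall when the acceleration magnitude drops below low_g
    (approaching free fall) and then spikes above high_g (impact)
    within the next `window` samples. Thresholds are assumptions.
    """
    mag = np.linalg.norm(acc_xyz, axis=1)  # magnitude is ~1 g at rest
    for i in np.flatnonzero(mag < low_g):
        if np.any(mag[i:i + window] > high_g):
            return True
    return False

# Simulated trace: rest (~1 g), brief dip towards free fall, impact spike.
trace = np.zeros((200, 3))
trace[:, 2] = 1.0          # gravity on the z-axis while stationary
trace[100:110, 2] = 0.4    # near-free-fall dip
trace[112, 2] = 2.5        # impact
print(detect_fall(trace))  # True
```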

Machine Learning and Activity Recognition

For the variety of reasons outlined above, there have been many attempts to use machine learning algorithms to accurately classify a person’s activity, so much so that Google has created an Activity Recognition API for developers to embed in their mobile applications. Its applicability here is limited, however, as it targets much coarser classifications such as running and driving.


This is the first part of a two-part series. Part II will take a more technical look at the algorithm used in my classification of the data provided by the UCI Machine Learning Repository.

Wickramasinghe, A., Ranasinghe, D., Fumeaux, C., Hill, K. and Visvanathan, R. (2017). Sequence Learning with Passive RFID Sensors for Real-Time Bed-Egress Recognition in Older People. IEEE Journal of Biomedical and Health Informatics, 21(4), pp.917-929.

Ruan, W. (2016). Unobtrusive human localization and activity recognition for supporting independent living of the elderly. 2016 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops).

Gjoreski, H., Kozina, S., Gams, M. and Lustrek, M. (2014). RAReFall — Real-time activity recognition and fall detection system. 2014 IEEE International Conference on Pervasive Computing and Communication Workshops (PERCOM WORKSHOPS).

Cdc.gov. (2018). Important Facts about Falls | Home and Recreational Safety | CDC Injury Center. [online] Available at: [Accessed 9 Mar. 2018].

Declan Corrigan is a graduate of the Queen’s University of Belfast in the field of Computer Science. After working in a number of verticals, he has settled in the field of healthcare and pharmaceutical data with partner company RxDataScience. Declan is keen to see how he, along with his colleagues, can impact the way data is collected, stored and analysed using kdb+ in-memory tables.