An efficient human activity recognition model based on deep learning approaches
Abstract
Human Activity Recognition (HAR) has gained traction in recent years in diverse areas such as surveillance, entertainment, teaching, and healthcare, using wearable and smartphone sensors. Such environments and systems rely on activity recognition, which aims to identify the actions, characteristics, and goals of one or more individuals from a temporal series of observations streamed from one or more sensors. Various HAR models have been described in the literature. In recent years, deep learning algorithms have been shown to perform well in HAR, but they typically demand substantial computational resources, which hinders their efficient deployment in applications. This paper presents a lightweight deep learning model for HAR with low computational requirements, making it well suited to real-time applications. A generic HAR framework for smartphone sensor data is proposed, combining Long Short-Term Memory (LSTM) networks for modeling the time-series domain with a standard Convolutional Neural Network (CNN) for classification. The findings demonstrate that the proposed model surpasses many of the deployed deep learning and machine learning techniques.
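As a rough illustration of the hybrid architecture the abstract describes, the following is a minimal sketch of a CNN-LSTM model for windowed smartphone sensor data. It assumes the TensorFlow/Keras API and hypothetical input dimensions (128-sample windows, 9 sensor channels, 6 activity classes, as in common smartphone HAR benchmarks); the paper's exact layer configuration is not specified here.

    # Minimal sketch of a CNN-LSTM hybrid for smartphone HAR.
    # Window length, channel count, and class count are assumptions,
    # not taken from the paper.
    from tensorflow.keras import layers, models

    def build_har_model(window_len=128, n_channels=9, n_classes=6):
        model = models.Sequential([
            # Conv1D layers extract local motion patterns from each window.
            layers.Conv1D(64, kernel_size=3, activation="relu",
                          input_shape=(window_len, n_channels)),
            layers.Conv1D(64, kernel_size=3, activation="relu"),
            layers.MaxPooling1D(pool_size=2),
            # LSTM models temporal dependencies across the pooled features.
            layers.LSTM(64),
            layers.Dropout(0.5),
            # Softmax head assigns the window to one activity class.
            layers.Dense(n_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    model = build_har_model()
    model.summary()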
Keywords
Human Activity Recognition; Convolutional Neural Network; Deep Learning; Long Short-Term Memory