A Deep One-Pass Learning based on Pre-Training Weights for Smartphone-Based Recognition of Human Activities and Postural Transitions


  • Setthanun Thongsuwan Advanced Artificial Intelligence (AAI) Research Laboratory, Department of Computer Science, King Mongkut's Institute of Technology Ladkrabang, Bangkok 10520
  • Praveen Agarwal Department of Mathematics, Anand International College of Engineering, Jaipur 303012
  • Saichon Jaiyen Advanced Artificial Intelligence (AAI) Research Laboratory, Department of Computer Science, King Mongkut's Institute of Technology Ladkrabang, Bangkok 10520




Human activity recognition, Machine learning, Deep learning, Convolutional neural network, Feature learning, Classification, Extreme gradient boosting, XGBoost, Pre-trained weights


We describe a new deep learning model, Deep One-Pass Learning (DOPL), for smartphone-based recognition of human activities and postural transitions based on pre-trained weights. DOPL consists of several stacked convolutional layers that learn features of the input automatically, followed by Extreme Gradient Boosting (XGBoost) as the last layer for predicting class labels. DOPL is much faster in the training phase because its input weights are the optimal weights obtained from the pre-trained weights module, so it does not have to re-adjust weights repeatedly. Further, we replaced the final fully connected layer with XGBoost to increase predictive efficiency. In the worst case, our model demonstrated an accuracy of 99.2% on the smartphone sensor database, significantly better than CNN or XGBoost alone, as well as several other models assessed.
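The pipeline the abstract describes — frozen convolutional filters applied once as a feature extractor, with a gradient-boosted tree ensemble in place of the final fully connected layer — can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the random filters (standing in for pre-trained weights), the synthetic two-class "sensor" signals, and scikit-learn's `GradientBoostingClassifier` (standing in for XGBoost) are all assumptions made for a self-contained example.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def conv_features(X, W):
    """One-pass feature extraction: fixed (frozen) 1-D convolution,
    ReLU, and global average pooling. No backpropagation is involved.
    X: (n_samples, signal_len), W: (n_filters, kernel_len)."""
    n, _ = X.shape
    f, _ = W.shape
    out = np.empty((n, f))
    for j in range(f):
        # valid-mode convolution of every sample with filter j
        conv = np.stack([np.convolve(x, W[j], mode="valid") for x in X])
        out[:, j] = np.maximum(conv, 0.0).mean(axis=1)  # ReLU + pooling
    return out

# Synthetic two-class signals: class 1 carries a higher-frequency component.
t = np.linspace(0, 1, 128)
X0 = np.sin(2 * np.pi * 3 * t) + 0.3 * rng.standard_normal((200, 128))
X1 = np.sin(2 * np.pi * 12 * t) + 0.3 * rng.standard_normal((200, 128))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

W = rng.standard_normal((8, 9))   # stand-in for pre-trained filters
F = conv_features(X, W)           # features extracted in a single pass
# Boosted trees replace the final fully connected layer.
clf = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(F, y)
acc = clf.score(F, y)
```

Because the convolutional weights are fixed, the only training cost is fitting the tree ensemble on the extracted features — the source of the speed-up the abstract claims for the training phase.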




M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu and X. Zheng, TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems, software available from http://tensorflow.org/ (2015).

L. Breiman, J. H. Friedman, R. A. Olshen and C. J. Stone, in Classification and Regression Trees, Wadsworth and Brooks, Monterey, CA (1984), https://www.bibsonomy.org/bibtex/27f293aa2bdfd10960ef36928f2795f1d/machinelearning.

L. Breiman, Random forests, Machine Learning, 45(1) (2001), 5–32, DOI: 10.1023/A:1010933404324.

T. F. Chan, G. H. Golub and R. J. LeVeque, Updating formulae and a pairwise algorithm for computing sample variances, in COMPSTAT 1982 5th Symposium held at Toulouse 1982, Physica-Verlag HD, Heidelberg, 30–41 (1982).

C.-C. Chang and C.-J. Lin, LIBSVM: A library for support vector machines, ACM Transactions on Intelligent Systems and Technology, 2(3) (2011), 1–27, software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.

T. Chen and C. Guestrin, XGBoost: A scalable tree boosting system, in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16), San Francisco, California, USA, 785–794, ACM, New York, USA (2016), DOI: 10.1145/2939672.2939785.

A. Defazio, F. R. Bach and S. Lacoste-Julien, SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives, Advances in Neural Information Processing Systems, abs/1407.0202 (2014), 1–1, http://arxiv.org/abs/1407.0202, retrieved on 13 August 2018.

S. Dieleman, J. Schlüter, C. Raffel, E. Olson, S. K. Sønderby, D. Nouri, D. Maturana, M. Thoma, E. Battenberg, J. Kelly, J. De Fauw, M. Heilman, diogo149, B. McFee, H. Weideman, takacsg84, peterderivaz, Jon, instagibbs, K. Rasul, CongLiu, Britefury, and J. Degrave, Lasagne: First Release (2015), DOI: 10.5281/zenodo.27878.

D. Dua and C. Graff, UCI Machine Learning Repository, https://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones, University of California, Irvine, School of Information and Computer Sciences (2015).

R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang and C.-J. Lin, LIBLINEAR: A Library for Large Linear Classification, Journal of Machine Learning Research, 9 (2008), 1871–1874, http://dl.acm.org/citation.cfm?id=1390681.1442794.

J. H. Friedman, Greedy function approximation: A gradient boosting machine, The Annals of Statistics 29(5) (2001), 1189–1232, DOI: 10.1214/aos/1013203451.

J. H. Friedman, Stochastic gradient boosting, Computational Statistics & Data Analysis 38(4) (2002), 367–378, DOI: 10.1016/S0167-9473(01)00065-2.

P. Geurts, D. Ernst and L. Wehenkel, Extremely randomized trees, Machine Learning 63(1) (2006), 3–42, DOI: 10.1007/s10994-006-6226-1.

I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, MIT Press, http://www.deeplearningbook.org (2016).

S. Gross and M. Wilber, Training and Investigating Residual Nets, http://torch.ch/blog/2016/02/04/resnets.html (2016).

T. Hastie, R. Tibshirani and J. Friedman, The Elements of Statistical Learning, 2nd edition, Springer (2009).

P. He, X. Jiang, T. Sun and H. Li, Computer graphics identification combining convolutional and recurrent neural networks, IEEE Signal Processing Letters 25(9) (2018), 1369–1373, DOI: 10.1109/LSP.2018.2855566.

K. He, X. Zhang, S. Ren and J. Sun, Deep residual learning for image recognition, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–778 (2016).

G. E. Hinton, Connectionist learning procedures, Artificial Intelligence 40(1) (1989), 185–234, DOI: 10.1016/0004-3702(89)90049-0.

A. Jabri, A. Joulin and L. van der Maaten, Revisiting visual question answering baselines, in Computer Vision – ECCV 2016, Springer International Publishing, Cham., 727–739 (2016).

W. Jiang and Z. Yin, Human activity recognition using wearable sensors by deep convolutional neural networks, in Proceedings of the 23rd ACM International Conference on Multimedia (MM '15), Brisbane, Australia, 1307–1310, ACM, New York, USA (2015), DOI: 10.1145/2733373.2806333.

E. Kim, S. Helal and D. Cook, Human activity recognition and pattern discovery, IEEE Pervasive Computing 9(1) (2010), 48–53, DOI: 10.1109/MPRV.2010.7.

A. Krizhevsky, I. Sutskever and G. E. Hinton, ImageNet classification with deep convolutional neural networks, in Proceedings of the 25th International Conference on Neural Information Processing Systems, NIPS'12, Lake Tahoe, Nevada, 1097–1105, http://dl.acm.org/citation.cfm?id=2999134.2999257, Curran Associates Inc., USA (2012).

Y. Lecun, L. Bottou, Y. Bengio and P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE 86(11) (1998), 2278–2324, DOI: 10.1109/5.726791.

S. Lee, T. Chen, L. Yu and C. Lai, Image classification based on the boost convolutional neural network, IEEE Access 6 (2018), 12755–12768, DOI: 10.1109/ACCESS.2018.2796722.

J. Lemley, S. Bazrafkan and P. Corcoran, Deep learning for consumer devices and services: pushing the limits for machine learning, artificial intelligence, and computer vision, IEEE Consumer Electronics Magazine 6(2) (2017), 48–56, DOI: 10.1109/MCE.2016.2640698.

G. Liang, H. Hong, W. Xie and L. Zheng, Combining convolutional neural network with recursive neural network for blood cell image classification, IEEE Access 6 (2018), 36188–36197, DOI: 10.1109/ACCESS.2018.2846685.

D. Liciotti, M. Bernardini, L. Romeo and E. Frontoni, A sequential deep learning application for recognising human activities in smart homes, Neurocomputing (2019), DOI: 10.1016/j.neucom.2018.10.104.

V. Lioutas, N. Passalis and A. Tefas, Explicit ensemble attention learning for improving visual question answering, Pattern Recognition Letters 111 (2018), 51–57, DOI: 10.1016/j.patrec.2018.04.031.

H. Liu and L. Wang, Gesture recognition for human-robot collaboration: a review, International Journal of Industrial Ergonomics 68 (2018), 355–367, DOI: 10.1016/j.ergon.2017.02.004.

Z. Lu, K. Tong, X. Zhang, S. Li and P. Zhou, Myoelectric pattern recognition for controlling a robotic hand: a feasibility study in stroke, IEEE Transactions on Biomedical Engineering 66(2) (2019), 365–372, DOI: 10.1109/TBME.2018.2840848.

T. Mikolov, K. Chen, G. Corrado and J. Dean, Efficient estimation of word representations in vector space, (2013), 1–12, https://arxiv.org/abs/1301.3781.

H. Nguyen, L. Kieu, T. Wen and C. Cai, Deep learning methods in transportation domain: a review, IET Intelligent Transport Systems 12(9) (2018), 998–1004, DOI: 10.1049/iet-its.2018.0064.

H. F. Nweke, Y. W. Teh, M. A. Al-Garadi and U. R. Alo, Deep learning algorithms for human activity recognition using mobile and wearable sensor networks: state of the art and research challenges, Expert Systems with Applications 105 (2018), 233–261, DOI: 10.1016/j.eswa.2018.03.056.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot and E. Duchesnay, Scikit-learn: machine learning in Python, Journal of Machine Learning Research 12 (2011), 2825–2830.

C. N. Phyo, T. T. Zin and P. Tin, Deep learning for recognizing human activities using motions of skeletal joints, IEEE Transactions on Consumer Electronics 65(2) (2019), 243–252, DOI: 10.1109/TCE.2019.2908986.

J.-L. Reyes-Ortiz, L. Oneto, A. Samà, X. Parra and D. Anguita, Transition-aware human activity recognition using smartphones, Neurocomputing 171 (2016), 754–767, DOI: 10.1016/j.neucom.2015.07.085.

C. A. Ronao and S.-B. Cho, Deep convolutional neural networks for human activity recognition with smartphone sensors, in Neural Information Processing, 46–53 (2015), Springer International Publishing, Cham.

C. A. Ronao and S.-B. Cho, Evaluation of deep convolutional neural network architectures for human activity recognition with smartphone sensors, in Proceedings of the KIISE Korea Computer Congress, Korea, 858–860 (2015).

C. A. Ronao and S.-B. Cho, Human activity recognition with smartphone sensors using deep learning neural networks, Expert Systems with Applications 59 (2016), 235–244.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke and A. Rabinovich, Going deeper with convolutions, in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1–9 (2015), DOI: 10.1109/CVPR.2015.7298594.

Theano Development Team, Theano: A Python framework for fast computation of mathematical expressions, CoRR (2016), 1–19, http://arxiv.org/abs/1605.02688, retrieved on 13 August 2018.

M. Wainberg, D. Merico, A. Delong and B. J. Frey, Deep learning in biomedicine, Nature Biotechnology 36 (2018), 829–838, DOI: 10.1038/nbt.4233.

J. Wang, Y. Chen, S. Hao, X. Peng and L. Hu, Deep learning for sensor-based activity recognition: a survey, Pattern Recognition Letters 119 (2019), 3–11, DOI: 10.1016/j.patrec.2018.02.010.

J. Wang, Y. Ma, L. Zhang, R. X. Gao and D. Wu, Deep learning for smart manufacturing: methods and applications, Journal of Manufacturing Systems (Special Issue on Smart Manufacturing) 48 (2018), 144–156, DOI: 10.1016/j.jmsy.2018.01.003.

Q. Wu, D. Teney, P. Wang, C. Shen, A. Dick and A. van den Hengel, Visual question answering: a survey of methods and datasets, Computer Vision and Image Understanding 163 (2017), 21–40, DOI: 10.1016/j.cviu.2017.05.001.

H.-F. Yu, F.-L. Huang and C.-J. Lin, Dual coordinate descent methods for logistic regression and maximum entropy models, Machine Learning 85(1) (2011), 41–75, DOI: 10.1007/s10994-010-5221-8.

P. Zham, D. K. Kumar, P. Dabnichki, S. A. Poosapadi and S. Raghav, Distinguishing different stages of Parkinson's disease using composite index of speed and pen-pressure of sketching a spiral, Frontiers in Neurology 8 (2017), 435, DOI: 10.3389/fneur.2017.00435.

Z. Zhang, S. Shan, Y. Fang and L. Shao, Deep learning for pattern recognition, Pattern Recognition Letters 119 (2019), 1–2, DOI: 10.1016/j.patrec.2018.10.028.




How to Cite

Thongsuwan, S., Agarwal, P., & Jaiyen, S. (2019). A Deep One-Pass Learning based on Pre-Training Weights for Smartphone-Based Recognition of Human Activities and Postural Transitions. Communications in Mathematics and Applications, 10(3), 541–560. https://doi.org/10.26713/cma.v10i3.1269



Research Article