Markerless Gait Classification Employing 3D IR-UWB Physiological Motion Sensing

Abstract—Human gait refers to the propulsion achieved by the effort of human limbs, a reflex progression resulting from the rhythmic reciprocal bursts of flexor and extensor activity. Several quantitative models are followed by health professionals to diagnose gait abnormality. Marker-based gait quantification is considered the gold standard by the research and health communities: it reconstructs motion in 3D and provides parameters to measure gait. However, it is an expensive and intrusive technique, limited by soft tissue artefacts, prone to incorrect marker positioning, and associated with skin sensitivity problems. Hence, markerless, swiftly deployable, non-intrusive, camera-less prototypes would be a game-changing possibility, and an example is proposed here. This paper illustrates a 3D gait motion analyser employing impulse radio ultra-wide band (IR-UWB) wireless technology. The prototype can measure 3D motion and determine quantitative parameters with respect to the anatomical reference planes. Knee angles have been calculated from the gait by applying vector algebra. Simultaneously, the model has been corroborated against a popular markerless camera-based 3D motion capture system, the Kinect sensor. Bland and Altman (B&A) statistics have been applied to the proposed prototype and Kinect sensor results to verify the measurement agreement. Finally, the proposed prototype has been combined with popular supervised machine learning techniques, such as k-nearest neighbour (kNN) and support vector machine (SVM), and the deep learning technique deep neural multilayer perceptron (DMLP) network to automatically recognize gait abnormalities, with promising results presented.

There are two types of 3D mocap models, marker-based and markerless [5]. Video-based optoelectronic techniques are the conventional marker-based approach. Also, patients often suffer skin sensitivity issues from the adhesive tape and electrode markers used. Thus, markerless or non-contact 3D gait analysis has gained increasing interest in the biomechanics and biomedical community. Conventional markerless 3D gait estimation is performed by employing multiple cameras or camera sensors to determine kinetics and kinematics [6]. The video data frames are synchronized from different view-points to reconstruct movement information. Currently, the biomechanics and biomedical communities collaborate with computer vision, adopting conventional machine learning (ML) and deep learning (DL) to recognize gait abnormality. Viewing angle and frame synchronization maintenance are the most demanding tasks for this method. Markerless 3D gait models are classified into two further groups, model-free and model-based approaches. The model-free approaches use a likelihood function to identify joints, pose, and body shape, whereas the model-based approaches use a priori knowledge of the human body to estimate gait. However, current model-free and model-based gait research exploiting ML generally focuses on person identification, not on identifying or diagnosing walking abnormalities or disorders, and for identification the wearable sensing tool has hitherto remained the preferred research field in motion analysis [7]. These investigations largely make use of smartphone inbuilt inertial sensors, accelerometers, and gyroscopes. For instance, Muaaz and Mayrhofer developed an Android application employing smartphone accelerometers to analyse walking data and establish the identity of an individual, in order to prevent zero-effort and live minimal-effort impersonation attacks [8].
Gadaleta and Rossi developed a gait-recognition-based user authentication system, calculating acceleration, orientation, and angular velocity features using a convolutional neural network (CNN) and classifying them through a one-class support vector machine (SVM) [9]. Zou et al. employed a hybrid neural network architecture to confirm an individual's walk, collecting the inertial sensor data from an accelerometer and gyroscope via a smartphone. The gait features were extracted through a deep CNN (DCNN) in a time-series fashion and segregated with a long short-term memory (LSTM) network [10]. On the contrary, other types of gait identification or biometric research consider image and video frames to analyse unique postural characteristics of walk. For example, Wolf et al. created a 3D CNN for human walk identification, taking gray-scale images and optical flow as the input, that is invariant to clothing, walking speed, and viewing angle [11]. Tang et al. proposed a method to overcome a limited number of gait view data, assuming the 3D shape shares a common view surface. Walking image shapes were formed via the Laplacian deformation energy function, inpainting gait silhouettes which were re-projected onto the 2D space to construct partial gait energy images. These partial gait view images were fed into the system to classify the person from an arbitrary view [12]. Usually, gait biometric algorithms operate on a single person; however, walking characteristics change when a person walks with multiple persons. This was addressed by Chen et al., computing human graphlets and integrating them into a tracking-by-detection method to obtain a person's complete silhouette, with the attributes determined using a latent conditional random field (L-CRF) model. In the present work, B&A statistics have been measured between the proposed IR-UWB prototype and the Kinect sensor results to verify their agreement.
Finally, the proposed prototype has been incorporated with popular supervised machine learning (ML) as well as the deep neural multilayer perceptron (DMLP) techniques to investigate their potential to automatically recognize gait abnormalities with the said system. The proposed markerless prototype would permit large-scale, local community-based testing, would not restrict patients with marker attachments, and would allow them to walk comfortably, more naturally and freely during diagnosis. Being easily deployable and contact free, the set-up would innately be low cost per patient and highly scalable. The cost and inconvenience of dedicated labs and complex, single-use consumable markers could be avoided, along with the necessity of cleaning and potentially re-sterilizing wearable instruments. Patients are additionally relieved from skin irritation, and there is the potential to perform the test remotely, enabling the social distancing requirements of the future. The model operates with very low power, which would bypass the power management issues of patches and wearables, and also provides a business model shift from consumable sales to a service model approach. This provides valuable, insightful information that could change the nature of modern gait healthcare. The detailed experimental set-up, the proposed method, result analysis, conclusion and future research direction are presented in the following sections.

III. METHOD

A number of phases are involved in the study. Ethical clearance was required to conduct this research. Human participants were recruited upon acceptance of the ethical statement and examined through IR-UWB radar and a Kinect Xbox sensor in an anechoic environment. Knee angles have been determined from the motion data captured using these two devices. The knee angles computed from the IR-UWB radar have been fed into ML and DL after confirming their correctness by comparing outputs with the Kinect's knee angle findings. The steps involved here are summarised and shown in Figure 2. The module is a pulsed Doppler radio transceiver. The anechoic experimental environment is shown in Figure 3b. Let the height of the moving limb from the ground at a particular time be h. If the moving limb deviates at an azimuth angle ϕ, the travelled distances are the vectors OA and OC, each with a specific propagation delay. Thus, the change of distance is DA at the delay interval ∆t. Therefore, ϕ is calculated from the radian measure, and the equivalent degree conversion is ϕ = (|DA| / |OA|) × (180/π).
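The degree conversion above (reconstructed here from the radian measure, i.e., arc length over radius) can be sketched directly; the function name and the example distances are illustrative assumptions, not values from the paper:

```python
import math

def azimuth_deg(d_travel: float, d_change: float) -> float:
    """Azimuth deviation of a moving limb, from the radian measure
    (arc length / radius) with the equivalent degree conversion.

    d_travel -- travelled distance |OA| to the limb (metres)
    d_change -- change of distance |DA| over the delay interval (metres)
    """
    phi_rad = d_change / d_travel        # radian measure: arc / radius
    return phi_rad * 180.0 / math.pi     # equivalent degree conversion

# e.g. a 5 cm lateral shift observed at 2 m range (hypothetical numbers)
angle = azimuth_deg(2.0, 0.05)
```

The small-angle approximation is implicit here: for the limb deviations seen over one delay interval, the chord |DA| is treated as the arc subtending ϕ.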

Let the coordinates of each back-scattered pulse returning from an obstacle, such as a human body, have its motion width span, distance, and height from the radar denoted as a, r, and h respectively. Thus each pulse can be considered a vector and represented as a î + r ĵ + h k̂ after finding a, r, h, where î, ĵ, and k̂ are the unit vectors of 3D space. The 'a priori' properties of vectors and the human body have been applied further to measure the knee angles (shown in Figure 3c).

Human gait creates angles between the thigh and shank muscles during walking, where the angle increases during muscle extension and decreases during flexion. This knee angle variation is significant for gait characterization. Figure 4b shows a human walking posture where four points have been assumed for the thigh and shank of the left and right legs respectively. These yield thigh and shank vectors for each leg in Euclidean n-space, whose component forms carry subscripts a, r, h representing the distances along the î, ĵ, and k̂ directions respectively. The dot product of the vectors from each leg provides the acute angle, γ_L or γ_R, between them, whereas the measurement of the obtuse angles (β_L and β_R) is anatomically more significant. The acute left knee angle has been determined as described in Eq. 2, writing T_L and S_L for the left thigh and shank vectors:

γ_L = cos⁻¹( (T_L · S_L) / (|T_L| |S_L|) )    (2)

Similarly, the acute right knee angle γ_R has been calculated. Subsequently, the obtuse knee angles for the left and right legs are β_L = 180° − γ_L and β_R = 180° − γ_R.
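The dot-product computation of the acute angle and its obtuse complement can be sketched as follows; the vector values are hypothetical, and the absolute value of the cosine is taken so the returned angle is acute, as the text requires:

```python
import numpy as np

def knee_angles(thigh, shank):
    """Acute angle gamma between the thigh and shank vectors from the
    dot product, and the anatomically significant obtuse knee angle
    beta = 180 - gamma. Vectors are in a*i + r*j + h*k component form."""
    cos_g = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    # abs() makes the angle acute; clip guards against rounding past 1.0
    gamma = np.degrees(np.arccos(np.clip(abs(cos_g), 0.0, 1.0)))
    return gamma, 180.0 - gamma

# hypothetical left-leg thigh and shank vectors (a, r, h components)
gamma_L, beta_L = knee_angles(np.array([0.10, 0.20, 0.45]),
                              np.array([0.05, 0.12, 0.40]))
```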
The dot product provides the acute angle between these two vectors, whereas the inner knee angle would be the obtuse angle between them. The acute angle has been denoted by γ′_L and detailed in Eq. 3, where L_Tk and L_Sk are the left thigh and shank vectors obtained from the Kinect skeleton:

γ′_L = cos⁻¹( (L_Tk · L_Sk) / (|L_Tk| |L_Sk|) )    (3)

Therefore, the inner knee angle or obtuse knee angle for the left leg is β′_L = 180° − γ′_L. Similarly, the acute knee angle γ′_R between R_Tk and R_Sk for the right leg has been determined, where the obtuse angle or inner knee angle for the right leg is β′_R = 180° − γ′_R.

Differences were found between the measurements of knee angles from the IR-UWB system and the Kinect, thus the outcomes have been compared using Bland and Altman (B&A) plot analysis. The B&A is a graphical approach [31] based on the level of agreement between two quantitative measurements, studying the mean difference and constructing limits of agreement to assess the association between methods. Let the measured knee angles of participants from the proposed and Kinect systems be k_p and k_k respectively, the mean of each pair of knee angles be m_k, the difference between paired knee angles be d_k, and the standard deviation of the differences obtained for the knee angles be s_k. The graphical approach plots d_k against m_k and constructs the 95% limits of agreement as the mean difference ± 1.96 s_k.

Kernel functions allow an SVM to separate non-linear data in high dimensions. Subsequently, linear and quadratic kernel based SVMs have been denoted by SVM_L and SVM_Q respectively. The state-of-the-art classification technique, deep learning, has also been studied and implemented for the gait pattern recognition task. Hence, a deep neural multilayer perceptron (DMLP) network has been designed and implemented for the classification task. The network comprises four hidden layers, where the rectified linear activation function (ReLU) and cross-entropy have been employed as the activation and loss function respectively.
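The B&A quantities defined above (paired means m_k, differences d_k, bias, and limits of agreement) can be computed directly; the paired angle values below are hypothetical examples, not measurements from the study:

```python
import numpy as np

def bland_altman(k_p, k_k):
    """Bland-Altman agreement statistics for paired knee angles measured
    by the proposed system (k_p) and the Kinect (k_k): per-pair means m_k,
    differences d_k, bias (mean difference), and the 95% limits of
    agreement bias +/- 1.96 * s_k."""
    m_k = (k_p + k_k) / 2.0            # x axis of the B&A plot
    d_k = k_p - k_k                    # y axis of the B&A plot
    bias = d_k.mean()
    s_k = d_k.std(ddof=1)              # sample SD of the differences
    return m_k, d_k, bias, (bias - 1.96 * s_k, bias + 1.96 * s_k)

# hypothetical paired knee-angle measurements (degrees)
prop = np.array([168.0, 155.2, 171.3, 162.8])
kin  = np.array([168.9, 155.6, 172.1, 163.4])
_, _, bias, (lo, hi) = bland_altman(prop, kin)
```

A negative bias here, as in the paper's plots, means the Kinect reads higher than the proposed IR-UWB model on average.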
The ground truth UWB gait data information has been created during the data collection phase by observing simultaneous skeletons of participants visualized via the Kinect interface.

VI. CROSS VALIDATION & PERFORMANCE EVALUATION
A cross-validation technique has been used to assess predictive outcomes and select models to develop SML prototypes. Model selection by cross-validation has been implemented by repeated random sub-sampling of the data, which is also known as Monte Carlo cross-validation. The dataset has been randomly partitioned to select the training set (initialised with 5% of the data) and validation set (started with the remaining 95% of the data). This process is repeated to identify the appropriate training-testing dataset ratio and the stage of overfitting. Each model then ran for 10 rounds to acquire the appropriate ratio. Subsequently, the performance metrics have been aggregated and averaged over all the rounds. A number of appropriate and accepted statistical metrics, for example accuracy, sensitivity, and specificity [34], have been used to scrutinize the implemented classifiers' performance.

The x, y, and z axes signify gait motion width, distance from the radar, and height of movement respectively. Motion from the system appears like the letter 'W', displaying the symmetry of the human body, with three areas labelled P1, P2, and P3. Here, the area P1 reflects the hip joint of this particular participant, while P1 to P2 and P1 to P3 denote the change of position of the human body due to gait motion when one leg is lifted from the ground and the other leg makes contact with the ground to push the body forward during walking. The person walked back and forth in front of the radar (along a 3 m testbed) during the observation times, creating the distinct areas (P1, P2, and P3) in 3D. The distance between the bottoms of the P2 and P3 areas represents the step base width, i.e., the perpendicular distance between two steps during gait.

(a) Front view of IR-UWB 3D response from a normal walk.
(b) Side view of IR-UWB 3D response from a normal walk.
(e) Variation of knee angles determined from proposed model.
(f) Variation of knee angles determined from Kinect skeleton.
(g) 3D human motion captured by IR-UWB from spastic gait.
(i) Changes of knee angles determined from proposed model for spastic gait.
(j) Changes of knee angles determined from Kinect skeleton for spastic gait.

The B&A plots (Figure 6) are presented in Section IV to support the knee angle measurement performed by the proposed model. Figures 6a and 6b display the B&A plots of the knee angle measurements taken by both systems for twenty normal and four abnormal gaits respectively. The x and y axes represent the mean of the two measurements and the differences between the two paired measurements respectively. Both methods have some degree of error, with the B&A plot indicating the relationship and agreement between these two methods for non-contact gait analysis. Figure 6a shows that the bias, or mean of differences, is -0.653, signifying that the second method, the Kinect, consistently displays 0.653 degree units more than the proposed IR-UWB model, and 95% of the differences are within d_k ± 1.96 s_k for the knee angle measurements. In addition, Figure 6b displays a bias of -2.277 when measuring the abnormal gaits.

Among the kNN configurations, kNN_F attained high accuracy and sensitivity (Figures 7a and 7b) and 92% specificity (shown in Figure 7c). The balance between the metrics indicates that kNN_F can classify both normal and abnormal patterns with approximately the same high precision. Therefore, kNN_F demonstrates a better overall performance than the other tested kNNs.
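The repeated random sub-sampling protocol of Section VI, applied to a Euclidean-distance kNN of the kind discussed here, can be sketched as follows. This is a minimal illustration: the exact kNN_C/F/M configurations are not spelled out in this excerpt, so the k value, feature layout, and synthetic knee-angle data below are all hypothetical:

```python
import numpy as np
from collections import Counter

def knn_predict(X_tr, y_tr, X_va, k=1):
    """Majority vote among the k nearest training samples under
    simple Euclidean distance (k=1 resembles a fine-grained kNN)."""
    preds = []
    for q in X_va:
        order = np.argsort(np.linalg.norm(X_tr - q, axis=1))
        votes = y_tr[order[:k]]
        preds.append(Counter(votes.tolist()).most_common(1)[0][0])
    return np.array(preds)

def metrics(y_true, y_pred):
    """Accuracy, sensitivity (abnormal = positive class), specificity."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return ((tp + tn) / len(y_true),
            tp / max(tp + fn, 1),
            tn / max(tn + fp, 1))

def monte_carlo_cv(X, y, train_frac, rounds=10, k=1, seed=0):
    """Repeated random sub-sampling (Monte Carlo) cross-validation:
    a fresh random train/validation split each round at the given
    training fraction, metrics averaged over all rounds."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(rounds):
        idx = rng.permutation(len(y))
        n_tr = max(k, int(train_frac * len(y)))
        tr, va = idx[:n_tr], idx[n_tr:]
        scores.append(metrics(y[va], knn_predict(X[tr], y[tr], X[va], k)))
    return np.mean(scores, axis=0)  # mean accuracy, sensitivity, specificity

# synthetic knee-angle features: 80% "normal" (0), 20% "abnormal" (1),
# mirroring the dataset skew mentioned in the text
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(160, 3, (40, 4)), rng.normal(120, 3, (10, 4))])
y = np.array([0] * 40 + [1] * 10)
acc, sens, spec = monte_carlo_cv(X, y, train_frac=0.3, rounds=10, k=1)
```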

Subsequently, the SVM is investigated with two different kernel functions to acquire the hyperplane that can separate participants with normal and abnormal gait patterns using the proposed UWB gait prototype. Figures 7a, 7b, and 7c present the SVM results. A gait pattern vector x is targeted for classification via the decision value c = Σ_i w_i k(s_i, x) + b, where s_i is a support vector, w_i is its weight, b is the bias, and the linear kernel function is k. The vector x is considered a member of the normal gait group when c ≥ 0, or of the abnormal gait group otherwise. This creates a hyperplane that achieved lower accuracy but better sensitivity. Among the implemented SVMs, SVM_L produces the highest sensitivity of 99.60% with 15% training data, shown in Figure 7b, indicating an acceptably efficient performance in identifying abnormal gait among both the kNNs and SVMs. However, the specificity is 41.90% (shown in Figure 7c), demonstrating a weaker performance in identifying persons with normal gait, though the probability of identifying abnormal gaits is better in this case. SVM_Q has been employed to obtain an improved testing accuracy to differentiate normal and abnormal gaits by minimizing the gap between the two groups. The considered quadratic function is min_x (1/2) xᵀHx + cᵀx subject to Ax ≤ b, where c is a real-valued vector, H is a real symmetric matrix, A is a real matrix, b is a real vector, and the notation Ax ≤ b means that every entry of the vector Ax is less than or equal to the corresponding entry of the vector b. The quadratic programming aims to discover the vector x which minimizes that function. Cross validation has also been implemented for the experiment with SVM_Q. The model creates a hyperplane to classify gait subjects and achieved a maximum testing accuracy of 91.70%, where sensitivity is 97.50% and specificity is 77.80% with 95% training data (shown in Figures 7a, 7b, and 7c), identifying normal and abnormal subjects.
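The decision rule c = Σ_i w_i k(s_i, x) + b can be illustrated with a small sketch. The support vectors, weights, and the quadratic kernel form (sᵀx + 1)² below are illustrative assumptions, not parameters from the trained models:

```python
import numpy as np

def svm_decision(x, support_vectors, weights, bias, kernel):
    """c = sum_i w_i * k(s_i, x) + b; x falls in the normal gait
    group when c >= 0 and in the abnormal group otherwise."""
    c = sum(w * kernel(s, x) for s, w in zip(support_vectors, weights)) + bias
    return c, ("normal" if c >= 0 else "abnormal")

def linear_kernel(s, x):           # SVM_L
    return float(np.dot(s, x))

def quadratic_kernel(s, x):        # SVM_Q (one common quadratic form)
    return (float(np.dot(s, x)) + 1.0) ** 2

# two illustrative support vectors with opposite-signed weights
sv = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]
w, b = [1.0, -1.0], 0.0
c, label = svm_decision(np.array([2.0, 0.0]), sv, w, b, linear_kernel)
```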
A small number of abnormal gaits are misclassified, but the low specificity implies that many normal gait patterns are wrongly predicted as abnormal gaits by SVM_Q, i.e., a high number of false positives. Thus, the low specificity reduces the appropriateness of SVM_Q for this study.

The DMLP has been configured with four hidden layers of 288, 192, 144, and 115 neurons in the first, second, third, and fourth layers respectively. The Adam optimization algorithm and the cross-entropy loss function have been employed (results shown in Figures 7a, 7b, and 7c). The specificity achieved by DMLP is the highest among all the classifiers investigated here.

One issue which contributed to the high accuracy attained by most of the algorithms, but resulted in a variation in the achieved sensitivity and specificity results, is the skewness of the dataset, i.e., only 20% of the dataset contains abnormal UWB gait data. Concisely, kNN_C, kNN_F, kNN_M, SVM_Q, and DMLP all attained high accuracy, indicating a high proportion of correct predictions out of the total number of predictions. But the decision boundaries are biased towards either normal or abnormal gaits due to the data skewness. For example, kNN_C, kNN_F, kNN_M, SVM_L, and SVM_Q attained high sensitivity, signifying correct abnormal gait recognition, whereas kNN_F and DMLP achieved significantly higher specificity than the other algorithms, indicating correct normal gait prediction. However, the proposed study aims to achieve balanced metrics (i.e., high accuracy, sensitivity, and specificity) through one of the implemented classifiers. Computing the optimal boundary condition for the SVM and the weights for the deep MLP becomes difficult when datasets are imbalanced. However, here the kNNs, particularly kNN_F, were not affected by the imbalance problem, and kNN_F is the one algorithm here that attained high scores for all the metrics.
Although kNN_F follows a rudimentary or "lazy" approach, its simple Euclidean distance computation performed better in separating normal and abnormal gait and was found to be more efficient than the other tested classifiers. The kNN_F performance demonstrates that abnormal and normal gait can be recognised based on the knee angles computed from the 3D IR-UWB model, even when a data imbalance exists.
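A forward-pass sketch of the DMLP described above (four ReLU hidden layers of 288, 192, 144, and 115 neurons, with cross-entropy loss) follows. The input width, the He-style weight initialisation, and the two-class softmax output are assumptions made for illustration, and the Adam training loop is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# layer sizes from the paper: input -> 288 -> 192 -> 144 -> 115 -> 2 classes
# (the input width of 64 is illustrative; it depends on the feature length)
sizes = [64, 288, 192, 144, 115, 2]
params = [(rng.normal(0, np.sqrt(2 / m), (m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    """Forward pass: ReLU on the four hidden layers, softmax output."""
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)           # ReLU activation
    W, b = params[-1]
    z = x @ W + b
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)     # class probabilities

def cross_entropy(probs, labels):
    """Cross-entropy loss (optimised with Adam in the paper, not shown)."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

probs = forward(rng.normal(size=(3, 64)))        # a batch of 3 feature vectors
```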

VIII. DISCUSSION & CONCLUSION
The proposed gait recognition work here is the first study to investigate normal and abnormal gait patterns using walking features from a 3D gait model without markers, mobile inertial sensors, and/or cameras. Camera-based 3D gait recognition research concentrates mainly on person identification in different circumstances, such as distinct viewing angles, with and without backpacks, occlusion by other objects, etc. However, the proposed work focuses on employing such a system for scalable gait health purposes. Direct comparison between the performance of the proposed work and other camera-based works is difficult, as they have different outcomes and testing requirements, but a quantitative contrast can be made to understand the advantages of the proposed study. A comparison has been performed and summarised in Table I, where the most recent gait recognition studies (discussed in Section II) have been included with their accuracy or rank-1 accuracy, sensitivity, specificity, device used, and study focus. The accuracies of the other algorithms have been reported with minimum and maximum values.