https://irjet.net/archives/V5/i12/IRJET-V5I12145.pdf
2020, IRJET
Gesture recognition is a language technology that aims to interpret gestures, usually derived from hand or face movements, by means of mathematical algorithms. Several approaches based on instrumented gloves and cameras have been proposed for interpreting signs. The proposed work deals with a wearable sensor-based glove: sensor-based recognition systems are used to obtain gesture information. In existing systems, the reverse process of converting text back into signs is complex, as it requires an additional sensor-equipped glove, making it both complicated and costly. The proposed system provides two-way communication, in which a deaf or mute individual and a hearing individual can communicate with each other without the aid of a translator. The prototype is an application of hand gesture technology: hand movements (gestures) are uploaded into the developed application, models are trained using various algorithms, and sign language is converted into text that can be understood by hearing people.
Sensors
The interactions between humans and unmanned aerial vehicles (UAVs), whose applications are increasing in the civilian field rather than the military field, are a popular future research area. Human–UAV interaction is a challenging problem because UAVs move in three-dimensional space. In this paper, we present an intelligent human–UAV interaction approach in real time based on machine learning using wearable gloves. The proposed approach offers scientific contributions such as a multi-mode command structure, machine-learning-based recognition, task scheduling algorithms, real-time usage, robust and effective use, and high accuracy rates. For this purpose, two wearable smart gloves working in real time were designed. The signal data obtained from the gloves were processed with machine-learning-based methods, and classified multi-mode commands were included in the human–UAV interaction process via the interface according to the task scheduling algorithm to facilitate sequential ...
Sensors
We propose a sign language recognition system based on wearable electronics and two different classification algorithms. The wearable electronics were made of a sensory glove and inertial measurement units to gather finger, wrist, and arm/forearm movements. The classifiers were k-Nearest Neighbors with Dynamic Time Warping (a non-parametric method) and Convolutional Neural Networks (a parametric method). Ten sign-words were considered from the Italian Sign Language: cose, grazie, maestra, together with words with international meaning such as google, internet, jogging, pizza, television, twitter, and ciao. The signs were repeated one-hundred times each by seven people, five males and two females, aged 29–54 y ± 10.34 (SD). The adopted classifiers performed with an accuracy of 96.6% ± 3.4 (SD) for the k-Nearest Neighbors plus the Dynamic Time Warping and of 98.0% ± 2.0 (SD) for the Convolutional Neural Networks. Our system was made of wearable electronics among the mo...
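The non-parametric branch of this approach, k-Nearest Neighbors over Dynamic Time Warping distances, can be illustrated with a toy sketch. The sequences and labels below are invented for illustration; this is not the authors' implementation, which operated on multi-channel glove and IMU data.

```python
def dtw_distance(a, b):
    # Classic dynamic-programming DTW between two 1-D sequences:
    # d[i][j] holds the minimal cumulative alignment cost of a[:i] and b[:j].
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]


def knn_dtw(train, query, k=1):
    # train: list of (sequence, label) pairs; majority vote of the k nearest
    # training sequences under DTW distance.
    dists = sorted((dtw_distance(seq, query), lab) for seq, lab in train)
    votes = [lab for _, lab in dists[:k]]
    return max(set(votes), key=votes.count)
```

Because DTW warps the time axis, a query performed slower or faster than a training example still aligns cheaply, which is why kNN+DTW is a common baseline for variable-speed gesture sequences.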
2021, Sensors (Basel, Switzerland)
Wearable sensor technology has gradually extended its usability into a wide range of well-known applications. Wearable sensors can typically assess and quantify the wearer’s physiology and are commonly employed for human activity detection and quantified self-assessment. Wearable sensors are increasingly utilised to monitor patient health, rapidly assist with disease diagnosis, and help predict and often improve patient outcomes. Clinicians use various self-report questionnaires and well-known tests to report patient symptoms and assess their functional ability. These assessments are time consuming and costly and depend on subjective patient recall. Moreover, measurements may not accurately demonstrate the patient’s functional ability whilst at home. Wearable sensors can be used to detect and quantify specific movements in different applications. The volume of data collected by wearable sensors during long-term assessment of ambulatory movement can become immense. This...
Electronics
The study proposed the classification and recognition of hand gestures using electromyography (EMG) signals for controlling an upper limb prosthesis. In this research, the EMG signals were measured through an embedded system by wearing a MYO gesture control band. In order to observe the behavior of these movements, EMG data were acquired from 10 healthy subjects (five males and five females) performing four upper limb movements. After extracting the EMG data from the MYO, a supervised classification approach was applied to recognize the different hand movements. The classification was performed with a 5-fold cross-validation technique using quadratic discriminant analysis (QDA), support vector machine (SVM), random forest, gradient boosted, ensemble (bagged tree), and ensemble (subspace k-Nearest Neighbors) classifiers. The execution of these classifiers shows an overall accuracy of 83.9% in the case of ensemble (bagged tree), which is higher than other clas...
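The 5-fold cross-validation protocol named above can be sketched independently of any particular classifier. The round-robin split below is one simple variant, not necessarily the exact partitioning (or shuffling) used in the study:

```python
def kfold_splits(indices, k=5):
    # Round-robin assignment of sample indices to k folds, then each fold
    # is held out once as the test set while the rest form the training set.
    folds = [indices[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test
```

Each sample appears in exactly one test fold, so averaging accuracy over the k folds uses every labeled example for evaluation exactly once.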
Sensors
Sign language was designed to allow hearing-impaired people to interact with others. Nonetheless, knowledge of sign language is uncommon in society, which leads to a communication barrier with the hearing-impaired community. Many studies of sign language recognition utilizing computer vision (CV) have been conducted worldwide to reduce such barriers. However, this approach is restricted by the visual angle and highly affected by environmental factors. In addition, CV usually involves the use of machine learning, which requires collaboration of a team of experts and utilization of high-cost hardware utilities; this increases the application cost in real-world situations. Thus, this study aims to design and implement a smart wearable American Sign Language (ASL) interpretation system using deep learning, which applies sensor fusion that “fuses” six inertial measurement units (IMUs). The IMUs are attached to all fingertips and the back of the hand to recognize sign language gestures; t...
PLOS ONE
Journal of NeuroEngineering and Rehabilitation
Stroke is one of the main causes of long-term disability worldwide, placing a large burden on individuals and society. Rehabilitation after stroke consists of an iterative process involving assessments and specialized training, aspects often constrained by limited resources of healthcare centers. Wearable technology has the potential to objectively assess and monitor patients inside and outside clinical environments, enabling a more detailed evaluation of the impairment and allowing the individualization of rehabilitation therapies. The present review aims to provide an overview of wearable sensors used in stroke rehabilitation research, with a particular focus on the upper extremity. We summarize results obtained by current research using a variety of wearable sensors and use them to critically discuss challenges and opportunities in the ongoing effort towards reliable and accessible tools for stroke rehabilitation. Finally, suggestions concerning data acquisition and processing to...
Gesturing is an instinctive way of communicating to present a specific meaning or intent. Therefore, research into sign language interpretation using gestures has been explored progressively during recent decades to serve as an auxiliary tool for deaf and mute people to blend into society without barriers. In this paper, a smart sign language interpretation system using a wearable hand device is proposed to meet this purpose. This wearable system utilizes five flex sensors, two pressure sensors, and a three-axis inertial motion sensor to distinguish the characters in the American Sign Language alphabet. The entire system consists of three modules: a sensor module and a processing module on the wearable device, and a mobile application module serving as the display unit. Sensor data are collected and analyzed using a built-in embedded support vector machine classifier. Subsequently, the recognized alphabet is further transmitted to a mobile device through Bluetooth low energy wireless communication. An Android-based mobile application was developed with a text-to-speech function that converts the received text into audible voice output. Experiment results indicate that a true sign language recognition accuracy rate of 65.7% can be achieved on average in the first version without pressure sensors. A second version of the proposed wearable system with the fusion of pressure sensors on the middle finger increased the recognition accuracy rate dramatically to 98.2%. The proposed wearable system outperforms existing vision-based methods; for instance, background lighting and other environmental factors are critical for vision-based processing but do not affect the proposed system.
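The core idea of mapping a vector of flex-sensor readings to a letter can be shown with a toy nearest-centroid classifier standing in for the paper's embedded SVM. The centroid values below are invented (normalized 0 = straight finger, 1 = fully bent), not calibration data from the study:

```python
import math

# Hypothetical calibrated centroids: mean normalized readings of five flex
# sensors (index, middle, ring, little, thumb) for three example ASL letters.
CENTROIDS = {
    "A": [0.9, 0.9, 0.9, 0.9, 0.1],  # four fingers curled, thumb extended
    "B": [0.1, 0.1, 0.1, 0.1, 0.9],  # fingers straight, thumb folded
    "S": [0.9, 0.9, 0.9, 0.9, 0.9],  # full fist
}


def classify_letter(flex):
    # Return the letter whose centroid is closest in Euclidean distance;
    # a stand-in for the embedded SVM decision rule described in the paper.
    return min(CENTROIDS, key=lambda c: math.dist(flex, CENTROIDS[c]))
```

Letters like A and S differ only in the thumb channel here, which mirrors why the paper's second version added pressure sensing to disambiguate visually similar handshapes.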
This paper explores the recognition of hand gestures based on a data glove equipped with motion, bending and pressure sensors. We selected 31 natural and interaction-oriented hand gestures that can be adopted for general-purpose control of and communication with computing systems. The data glove is custom-built, and contains 13 bend sensors, 7 motion sensors, 5 pressure sensors and a magnetometer. We present the data collection experiment, as well as the design, selection and evaluation of a classification algorithm. As we use a sliding window approach to data processing, our algorithm is suitable for stream data processing. Algorithm selection and feature engineering resulted in a combination of linear discriminant analysis and logistic regression with which we achieve an accuracy of over 98.5% on a continuous data stream scenario. When removing the computationally expensive FFT-based features, we still achieve an accuracy of 98.2%.
2021
In this study, information from surface electromyogram (sEMG) signals was used to recognize cigarette smoking. The sEMG signals collected from the lower arm were used in two different ways: (1) as an individual predictor of smoking activity and (2) as an additional sensor/modality along with the inertial measurement unit (IMU) to augment recognition performance. A convolutional and a recurrent neural network were utilized to recognize smoking-related hand gestures. The model was developed and evaluated with leave-one-subject-out (LOSO) cross-validation on a dataset from 16 subjects who performed ten activities of daily living, including smoking. The results show that smoking detection using only the sEMG signal achieved an F1-score of 75% in person-independent cross-validation. The combination of sEMG and IMU improved the F1-score to 84%, while the IMU alone achieved 81%. The study showed that using only sEMG signals would not provide superior cigarette smoking detection perfo...
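Leave-one-subject-out (LOSO) cross-validation, the evaluation protocol used above, holds out all data from one subject per fold so that accuracy reflects generalization to unseen people. A minimal sketch, with an invented sample layout:

```python
def loso_splits(samples):
    # samples: list of (subject_id, features, label) tuples.
    # Each fold trains on every subject except one and tests on that one,
    # giving a person-independent performance estimate.
    subjects = sorted({s for s, _, _ in samples})
    for held_out in subjects:
        train = [x for x in samples if x[0] != held_out]
        test = [x for x in samples if x[0] == held_out]
        yield held_out, train, test
```

With 16 subjects, as in the study, this yields 16 folds whose F1-scores are averaged to obtain the reported person-independent figures.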
2021, IRJET
Sign language plays a vital role in communication between audio-vocally challenged and hearing people. In this paper, we propose a new approach that recognizes hand gestures based on Indian Sign Language and converts them into text and speech output. This system uses a vision-based technique in which the hand gestures and facial expressions are captured using a web camera. The captured images are processed with image processing and classified with a neural network and OpenCV to recognize the hand gestures and facial expressions, which are converted into text and speech using microcontroller-based hardware (Raspberry Pi).
Applied Sciences
Hand gesture-based sign language recognition is a prosperous application of human–computer interaction (HCI), where the deaf community, hard of hearing, and deaf family members communicate with the help of a computer device. To help the deaf community, this paper presents a non-touch sign word recognition system that translates the gesture of a sign word into text. However, the uncontrolled environment, perspective light diversity, and partial occlusion may greatly affect the reliability of hand gesture recognition. From this point of view, a hybrid segmentation technique including YCbCr and SkinMask segmentation is developed to identify the hand and extract the feature using the feature fusion of the convolutional neural network (CNN). YCbCr performs image conversion, binarization, erosion, and eventually filling the hole to obtain the segmented images. SkinMask images are obtained by matching the color of the hand. Finally, a multiclass SVM classifier is used to classify the hand...
Applied Sciences
Hand gesture recognition is a crucial task for the automated translation of sign language, which enables communication for the deaf. This work proposes the usage of a magnetic positioning system for recognizing the static gestures associated with the sign language alphabet. In particular, a magnetic positioning system, which is comprised of several wearable transmitting nodes, measures the 3D position and orientation of the fingers within an operating volume of about 30 × 30 × 30 cm, where receiving nodes are placed at known positions. Measured position data are then processed by a machine learning classification algorithm. The proposed system and classification method are validated by experimental tests. Results show that the proposed approach has good generalization properties and provides a classification accuracy of approximately 97% on 24 alphabet letters. Thus, the feasibility of the proposed gesture recognition system for the task of automated translation of the sign language...
2021
Surface electromyography (sEMG) is a non-invasive method of measuring neuromuscular potentials generated when the brain instructs the body to perform both fine and coarse locomotion. This technique has seen extensive investigation over the last two decades, with significant advances in both the hardware and signal processing methods used to collect and analyze sEMG signals. While early work focused mainly on medical applications, there has been growing interest in utilizing sEMG as a sensing modality to enable next-generation, high-bandwidth, and natural human-machine interfaces. In the first part of this review, we briefly overview the human skeletomuscular physiology that gives rise to sEMG signals followed by a review of developments in sEMG acquisition hardware. Special attention is paid towards the fidelity of these devices as well as form factor, as recent advances have pushed the limits of user comfort and high-bandwidth acquisition. In the second half of the article, we explo...
2019
Activity classification is a task where we need to identify a sequence of gestures over a period of time. It is challenging without visual cues, relying only on hand movements. There are several applications of activity classification without visual cues in science and technology, and in this paper we propose a solution based on EMG and IMU features from the Myo Gesture Control Armband. We capture the temporal features of different hand gestures in multiple ways and apply machine learning and new deep learning techniques. Our approach is very promising: we are able to distinguish the eating activity from other activities with 94.76% accuracy.
IEEE Access
Sensors
Globally, cigarette smoking is widespread among all ages, and smokers struggle to quit. The design of effective cessation interventions requires an accurate and objective assessment of smoking frequency and smoke exposure metrics. Recently, wearable devices have emerged as a means of assessing cigarette use. However, wearable technologies have inherent limitations, and their sensor responses are often influenced by wearers’ behavior, motion and environmental factors. This paper presents a systematic review of current and forthcoming wearable technologies, with a focus on sensing elements, body placement, detection accuracy, underlying algorithms and applications. Full-texts of 86 scientific articles were reviewed in accordance with the Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) guidelines to address three research questions oriented to cigarette smoking, in order to: (1) Investigate the behavioral and physiological manifestations of cigarette smoking ...
Indonesian Journal of Electrical Engineering and Computer Science
Robotic prosthetics are increasingly adopted as an enabling technology for amputees. They are vital not only for activities of daily living but also for displaying expression and affection. A vital element of such a system is an intelligent model that can identify signatures from the remaining limb that can be mapped to specific effector movements. Therefore, the study proposes the use of the forearm electromyogram to classify between different types of hand gestures: fingers spread, wave out, wave in, fist, double tap, and relaxed state. Data are acquired from 32 subjects using the Myo armband. Initially, a total of 248 time- and frequency-domain features are extracted from the eight-channel device. Neighborhood component analysis reduced them to fourteen features. A hand gesture classification model based on the electromyogram signal has been successfully developed using a support vector machine, with an overall accuracy of 97.4% for training and 88.0% for testing.
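Time-domain feature extraction of the kind mentioned above typically starts from a handful of standard per-window statistics. The sketch below computes four widely used EMG features; it illustrates the general idea, not the study's exact 248-feature set:

```python
import math


def emg_features(window):
    # Common time-domain EMG features computed over one analysis window:
    # MAV (mean absolute value), RMS (root mean square),
    # WL (waveform length), and ZC (zero-crossing count).
    n = len(window)
    mav = sum(abs(x) for x in window) / n
    rms = math.sqrt(sum(x * x for x in window) / n)
    wl = sum(abs(window[i] - window[i - 1]) for i in range(1, n))
    zc = sum(1 for i in range(1, n) if window[i - 1] * window[i] < 0)
    return {"MAV": mav, "RMS": rms, "WL": wl, "ZC": zc}
```

Computing such features per channel of an eight-channel armband and concatenating them is how feature counts in the hundreds arise before dimensionality reduction such as neighborhood component analysis.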
2018, Sensors (Basel, Switzerland)
Pattern recognition of electromyography (EMG) signals can potentially improve the performance of myoelectric control for upper limb prostheses with respect to current clinical approaches based on direct control. However, the choice of features for classification is challenging and impacts long-term performance. Here, we propose the use of EMG raw signals as direct inputs to deep networks with intrinsic feature extraction capabilities recorded over multiple days. Seven able-bodied subjects performed six active motions (plus rest), and EMG signals were recorded for 15 consecutive days with two sessions per day using the MYO armband (MYB, a wearable EMG sensor). The classification was performed by a convolutional neural network (CNN) with raw bipolar EMG samples as the inputs, and the performance was compared with linear discriminant analysis (LDA) and stacked sparse autoencoders with features (SSAE-f) and raw samples (SSAE-r) as inputs. CNN outperformed (lower classification error) bo...
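The "intrinsic feature extraction" a CNN performs on raw EMG rests on learned 1-D convolution filters sliding over the signal. A minimal sketch of that core operation, with an invented kernel (in a trained network the kernel weights are learned, and many filters are stacked with pooling layers):

```python
def conv1d(signal, kernel):
    # Valid-mode 1-D cross-correlation: slide the kernel over the signal
    # and emit one dot product per position, as a CNN layer does per filter.
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]


def relu(xs):
    # The usual nonlinearity applied after each convolution.
    return [max(0.0, x) for x in xs]
```

For example, the kernel `[1, -1]` responds to local signal changes, a hand-crafted analogue of the edge-like filters a CNN learns from raw bipolar EMG samples.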
2021
2021, IEEE/CAA Journal of Automatica Sinica
Electromyography (EMG) has already been broadly used in human-machine interaction (HMI) applications. Determining how to decode the information inside EMG signals robustly and accurately is a key problem for which we urgently need a solution. Recently, many EMG pattern recognition tasks have been addressed using deep learning methods. In this paper, we analyze recent papers and present a literature review describing the role that deep learning plays in EMG-based HMI. An overview of typical network structures and processing schemes will be provided. Recent progress in typical tasks such as movement classification, joint angle prediction, and force/torque estimation will be introduced. New issues, including multimodal sensing, inter-subject/inter-session, and robustness toward disturbances will be discussed. We attempt to provide a comprehensive analysis of current research by discussing the advantages, challenges, and opportunities brought by deep learning. We hope that deep learning can aid in eliminating factors that hinder the development of EMG-based HMI systems. Furthermore, possible future directions will be presented to pave the way for future research.
2018, Proceedings of the International Conference on New Interfaces for Musical Expression
Hands are important anatomical structures for musical performance, and recent developments in input device technology have allowed rather detailed capture of hand gestures using consumer-level products. While in some musical contexts, detailed hand and finger movements are required, in others it is sufficient to communicate discrete hand postures to indicate selection or other state changes. This research compared three approaches to capturing hand gestures where the shape of the hand, i.e. the relative positions and angles of finger joints, are an important part of the gesture. A number of sensor types can be used to capture information about hand posture, each of which has various practical advantages and disadvantages for music applications. This study compared three approaches, using optical, inertial and muscular information, with three sets of 5 hand postures (i.e. static gestures) and gesture recognition algorithms applied to the device data, aiming to determine which methods are most effective.
2022, IRJET
The number of amputees with upper limb loss has risen dramatically in recent years; in India alone, there are 16,500 amputees every year. Only a few commercially available prosthetic arms provide the basic functionality of the original arm, and even those are quite expensive. By using the latest engineering technologies, the functionality of a prosthetic arm can be improved tremendously by implementing machine learning algorithms, image processing, and artificial neural networks, and the cost of building these arms can be reduced by using 3D printing technology. Many other studies based on these technologies have been successfully carried out.
Sensors
In this paper, a customizable wearable 3D-printed bionic arm is designed, fabricated, and optimized for a right arm amputee. An experimental test has been conducted for the user, where control of the artificial bionic hand is accomplished successfully using surface electromyography (sEMG) signals acquired by a multi-channel wearable armband. The 3D-printed bionic arm was designed for the low cost of 295 USD, and was lightweight at 428 g. To facilitate a generic control of the bionic arm, sEMG data were collected for a set of gestures (fist, spread fingers, wave-in, wave-out) from a wide range of participants. The collected data were processed and features related to the gestures were extracted for the purpose of training a classifier. In this study, several classifiers based on neural networks, support vector machine, and decision trees were constructed, trained, and statistically compared. The support vector machine classifier was found to exhibit an 89.93% success rate. Real-time ...
2021, IRJET
According to the World Health Organization, the deaf and mute community makes up about 5% of the world population, approximately 430 million people. This number could rise to 2.5 billion by 2050, i.e., one in ten people would suffer from some kind of hearing disorder. These differently abled people are often neglected by society and face various problems, such as lack of basic education, lack of platforms to express themselves to people who do not understand sign language, and difficulty finding jobs to fulfil their basic needs. According to the WHO, among the unaddressed issues faced by mute and deaf people are the almost 32 million children who suffer from hearing loss in developing countries, where limited budgets constrain education and other facilities that could help these people compete with the world; consequently, the unemployment rate among such adults is quite high, further leading to social isolation, loneliness, and depression. This survey therefore focuses on comparing various currently available communication platforms and services for these differently abled people and identifying the services they need. It also compares techniques for building platforms to bridge the gap between these people and people lacking knowledge of sign language.
Sensors
Wearable assistive robotics is an emerging technology with the potential to assist humans with sensorimotor impairments to perform daily activities. This assistance enables individuals to be physically and socially active, perform activities independently, and recover quality of life. These benefits to society have motivated the study of several robotic approaches, developing systems ranging from rigid to soft robots with single and multimodal sensing, heuristics and machine learning methods, and from manual to autonomous control for assistance of the upper and lower limbs. This type of wearable robotic technology, being in direct contact and interaction with the body, needs to comply with a variety of requirements to make the system and assistance efficient, safe and usable on a daily basis by the individual. This paper presents a brief review of the progress achieved in recent years, the current challenges and trends for the design and deployment of wearable assistive robotics inc...
2019, Handbook of Flexible and Stretchable Electronics
2021, Journal of Applied Technology and Innovation
Sign language recognition devices have gained tremendous attention in recent years for helping the speech- and hearing-impaired community. The idea of fusing technology and sign language knowledge into a smart system is still being explored and developed worldwide across many different sign languages. In this paper, Malaysian Sign Language is given importance: five Malaysian Sign Language words are selected for recognition and prediction using a new combination of sensors compared to previous research on Malaysian Sign Language recognition and prediction. The sensors used are one MPU9250, one MyoWare, and two Force Sensitive Resistor sensors. A 1D CNN time-series model is implemented, with prediction accuracy ranging from 80 to 91 percent.
2021, IRJET
The noble aim behind this project is to design a health care system helpful for paralyzed and mute people. Mute individuals throughout the world use sign language for communication, and advances in embedded systems provide scope to design and develop a translator system that converts sign language into speech. Since sign language is used primarily by the deaf, but also by hearing people who have problems speaking, the approach used in this analysis is vision-based. The gloves used are fitted with flex sensors in three dimensions to collect data from every position of finger and hand motion, so as to differentiate and distinguish every word of a particular sign. Heart attack is a major cause of death among both men and women; however, its occurrence cannot always be predicted. The most common device used to detect heart-related problems is an EKG machine, which is reliable but not mobile enough to be used for continuously monitoring a heart patient. This project develops an algorithm for detecting a heart attack and, upon detection, alerting doctors, family members, and emergency services. Hence, we introduce a smart health care system that takes care of the problems and needs of paralyzed and mute people and also helps in the detection of heart attacks.
Sensors
Wearable technology can be employed to elevate the abilities of humans to perform demanding and complex tasks more efficiently. Armbands capable of surface electromyography (sEMG) are attractive and noninvasive devices from which human intent can be derived by leveraging machine learning. However, the sEMG acquisition systems currently available tend to be prohibitively costly for personal use or sacrifice wearability or signal quality to be more affordable. This work introduces the 3DC Armband designed by the Biomedical Microsystems Laboratory at Laval University: a wireless, 10-channel, 1000 sps, dry-electrode, low-cost (∼150 USD) myoelectric armband that also includes a 9-axis inertial measurement unit. The proposed system is compared with the Myo Armband by Thalmic Labs, one of the most popular sEMG acquisition systems. The comparison is made by employing a new offline dataset featuring 22 able-bodied participants performing eleven hand/wrist gestures while wearing the two armba...
ACTA IMEKO
Gesture-based control potentially eliminates the need for wearisome physical controls and facilitates easy interaction between a human and a robot. At the same time, it is intuitive and enables a natural means of control. In this paper, we present and evaluate a framework for gesture recognition using four wearable Inertial Measurement Units (IMUs) to indirectly control a mobile robot. Six gestures involving different hand and arm motions are defined. A novel algorithm based on an Online Lazy Neighborhood Graph (OLNG) search is used to recognise and classify the gestures online. A software framework is developed to control a robotic platform through integrating our gesture recognition algorithm with a Robot Operating System (ROS), which is in turn used to trigger predefined robot behaviours. Experiments show that the framework is able to correctly detect and classify six different gestures in real time with average success rates of 81.61 % and 81.67 %, while keeping the false-positi...
2020, IRJET
Internet of Things (IoT) is one of the trending technologies in this digital era, and developments in network infrastructure and automated devices have boosted its reach. The IoT can be defined as a network of internet-connected things (e.g., computers, vehicles, and sensors). These interconnected things exchange data among themselves and help people access internet-connected devices, applications, and services anytime and anywhere. Technological advancements have mainly focused on making large profits by enabling faster machine-to-machine communication, yet a challenge still exists in providing efficient and better interaction between human beings and IoT systems. A user can monitor or configure internet-connected things at home, in offices, and in other places to control functions such as temperature, humidity, lighting, and energy efficiency. Users can be of different types, and the dissemination of IoT depends greatly on them, because children, elderly people, and various kinds of disabled persons are the intended customers of specific IoT devices. To make them comfortable with these products, their interaction with IoT devices should be smooth and easy. In this paper we evaluate some early published interaction systems developed by others; this will clarify the working principles they used and help integrate various methods into a better interaction system free of the limitations they experienced.
Applied Sciences
Numerous applications of human–machine interfaces, e.g., dedicated to persons with disabilities, require contactless handling of devices or systems. The purpose of this research is to develop a hands-free head-gesture-controlled interface that can support persons with disabilities to communicate with other people and devices, e.g., the paralyzed to signal messages or the visually impaired to handle travel aids. The hardware of the interface consists of a small stereovision rig with a built-in inertial measurement unit (IMU). The device is to be positioned on a user’s forehead. Two approaches to recognize head movements were considered. In the first approach, for various time window sizes of the signals recorded from a three-axis accelerometer and a three-axis gyroscope, statistical parameters were calculated such as: average, minimum and maximum amplitude, standard deviation, kurtosis, correlation coefficient, and signal energy. For the second approach, the focus was put onto direct...
Frontiers in Robotics and AI
2019
Human–computer interfacing is widely used to create more integrated and user-friendly alternative methods of communication between humans and computers. Various devices have been developed to give a new dimension to the way a user interacts with computers and machines. They allow multiple modes of machine input, unlike conventional human interface devices (HIDs) such as mice and keyboards, removing constraints so that users can interact with devices in other, innovative ways. The focus of this work is to illustrate an example of a Human-Computer Interaction (HCI) device that uses electromyography (EMG) sensors, together with an accelerometer and gyroscope placed inside an armband, to detect the EMG signals and gestures of the forearm. The processed data from the microcontroller are then used to control a robotic car wirelessly. The prototype device can be made more wearable, user-friendly, and portable with the implementation of a more compact armband in prospective proje...
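The gesture-to-drive-command mapping such an armband controller implies can be sketched as below. The gesture labels, command names, and threshold are hypothetical; the RMS activation gate is a common EMG practice, not necessarily the authors' method.

```python
# Hypothetical mapping from classified forearm gestures to drive commands
# for a wireless robotic car (all names are illustrative).
GESTURE_TO_COMMAND = {
    "fist": "STOP",
    "wave_in": "TURN_LEFT",
    "wave_out": "TURN_RIGHT",
    "fingers_spread": "FORWARD",
}

def rms(samples):
    """Root-mean-square of a raw EMG window, a common activation measure."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def command_for(gesture, emg_window, threshold=0.2):
    """Issue a drive command only when muscle activation exceeds a rest-level
    threshold, so idle arm posture does not move the car."""
    if rms(emg_window) < threshold:
        return "IDLE"
    return GESTURE_TO_COMMAND.get(gesture, "IDLE")
```

In practice, the microcontroller would run this decision loop per window and transmit the resulting command over the wireless link.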
2021
Stroke is the third leading cause of death in the USA, with an equally high number of survivors. Post-stroke rehabilitation is an important part of recovery, as it helps patients relearn skills lost when stroke affects part of the brain. Stroke rehabilitation can help the subject regain independence and improve their quality of life. During the rehabilitation process, a subject is expected to perform a certain number of exercises and to constantly try to use their affected limb. However, due to social constraints and psychological stress, subjects tend to perform these activities only in a closed environment, and hence only in the presence of a physiotherapist, i.e., during their rehabilitation in a hospital, thereby dampening the rehabilitation process. This thesis aims at laying the foundation for a wearable device to track the recovery of a stroke patient during both in-hospital and at-home upper-limb rehabilitation, depending on the muscular activ...
2021, IRJET
In this paper, we put forward the concept of a real-time visual-audio translator interface. In this modern age, advances in ubiquitous computing have made natural user interfaces essential, and one of the major drawbacks in our society is the communication barrier faced by disabled or handicapped persons. We develop an application capable of real-time gesture recognition that translates sign language and hand gestures into English phrases in the form of text and audio. The image data is pre-processed using a combinational algorithm, and recognition is performed using ring-node detection, the angles between the fingers, and template matching, i.e., image processing. The translated text is then converted to audio played through a speaker, so that a deaf or mute person can communicate with a blind person as well as with other people. In another stage, a blind person can also communicate by speaking: the application takes the speech from a microphone and converts it into text shown on the display. With this application, we can make communication between disabled persons and the rest of the world much easier.
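The finger-angle and template-matching step described above can be illustrated with a minimal sketch. This is not the authors' algorithm: the landmark layout, template values, and nearest-template distance are assumptions chosen to show the idea.

```python
import math

def finger_angles(wrist, fingertips):
    """Angles (radians) between consecutive wrist->fingertip vectors,
    given 2-D landmark coordinates from the image-processing stage."""
    def angle(p, q):
        vx, vy = p[0] - wrist[0], p[1] - wrist[1]
        wx, wy = q[0] - wrist[0], q[1] - wrist[1]
        dot = vx * wx + vy * wy
        norm = math.hypot(vx, vy) * math.hypot(wx, wy)
        # Clamp to guard against floating-point drift outside [-1, 1].
        return math.acos(max(-1.0, min(1.0, dot / norm)))
    return [angle(a, b) for a, b in zip(fingertips, fingertips[1:])]

def match_template(angles, templates):
    """Nearest-template matching: return the gesture label whose stored
    angle vector has the smallest sum of squared differences."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(angles, templates[label]))
    return min(templates, key=dist)
```

A recognized label would then be mapped to its English phrase and passed to the text-to-speech stage.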
2020, Biomedical Engineering Letters
Sensors
Neurorobotic augmentation (e.g., robotic assist) is now in regular use to support individuals suffering from impaired motor functions. A major unresolved challenge, however, is the excessive cognitive load necessary for the human–machine interface (HMI). Grasp control remains one of the most challenging HMI tasks, demanding simultaneous, agile, and precise control of multiple degrees-of-freedom (DoFs) while following a specific timing pattern in the joint and human–robot task spaces. Most commercially available systems use either an indirect mode-switching configuration or a limited sequential control strategy, limiting activation to one DoF at a time. To address this challenge, we introduce a shared autonomy framework centred around a low-cost multi-modal sensor suite fusing: (a) mechanomyography (MMG) to estimate the intended muscle activation, (b) camera-based visual information for integrated autonomous object recognition, and (c) inertial measurement to enhance intention predic...
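One simple way to picture the multi-modal fusion described here is a weighted combination of per-modality confidences gating the grasp decision. This is a hypothetical sketch, not the paper's shared-autonomy framework; the weights and threshold are illustrative.

```python
# Illustrative decision-fusion sketch: combine confidence scores from
# MMG (muscle activation), vision (object recognition), and the IMU
# (intention prediction) into one grasp-intent score in [0, 1].
def fuse_intent(mmg_conf, vision_conf, imu_conf, weights=(0.5, 0.3, 0.2)):
    """Weighted average of modality confidences; weights are assumptions."""
    scores = (mmg_conf, vision_conf, imu_conf)
    return sum(w * s for w, s in zip(weights, scores))

def should_grasp(mmg_conf, vision_conf, imu_conf, threshold=0.6):
    """Trigger the grasp only when fused intent clears a threshold,
    avoiding explicit mode-switching between DoFs."""
    return fuse_intent(mmg_conf, vision_conf, imu_conf) >= threshold
```

In contrast to sequential one-DoF-at-a-time control, a fused score like this lets the controller act on all modalities in a single decision step.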