A Review Paper on Involuntary Human Motion Acknowledgement to Assist Feature Robotic Skills

Authors

  • Dr. Nidhi Mishra, Dr. F Rahman, Mr. Kapil Kelkar

DOI:

https://doi.org/10.17762/msea.v71i4.820

Abstract

Recognizing human activities from video sequences or still images is a difficult task because of issues such as background clutter, partial occlusion, and changes in scale, viewpoint, lighting, and appearance. A system that can recognize multiple activities is required for a wide variety of applications, including video surveillance, human-computer interaction, and robots that characterize human behavior. In this article, we present a comprehensive review of recent and state-of-the-art research developments in human activity classification. We propose a taxonomy of human activity research strategies and then evaluate the benefits and drawbacks of each approach. In particular, we divide human activity classification algorithms into two broad categories according to whether or not they use data from multiple modalities: those that do not use multimodal data and those that do. Each of these categories is then broken down into sub-categories that indicate how the methods model human behavior and which kinds of activities they target. In addition, we present a thorough study of the publicly available human activity classification datasets and examine the characteristics that an ideal human activity recognition dataset should satisfy. Finally, we discuss several open questions in human activity recognition and reflect on promising directions for future research.

Humanoid robots conventionally employ a template-based dialogue system. Such a system can respond appropriately within a designated discourse domain, but it cannot react effectively to information that falls outside that domain, and because it lacks an emotion recognition component, its dialogue rules are hand-crafted rather than automatically generated. To address this, an open-domain dialogue system for a humanoid robot was built together with an emotion analysis model based on a deep neural network, which is used to assess the emotions of the interaction partner. Language processing, coding, feature analysis, and Word2vec are all essential components of this emotional state analysis. The training procedure for the humanoid robot's emotional state analysis is then described, along with the results and their implications.
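The abstract describes emotional state analysis built from language processing, coding, feature analysis, Word2vec, and a deep neural network, but gives no implementation details. The following is a minimal, self-contained sketch of that general pipeline rather than the authors' implementation: it assumes gensim (>= 4.0) for Word2vec and uses scikit-learn's MLPClassifier as a stand-in for the deep emotion network; the utterances, labels, and hyperparameters are toy examples.

```python
# Sketch only (not the paper's implementation): sentence-level emotion
# classification from averaged Word2vec features and a small feed-forward net.
import numpy as np
from gensim.models import Word2Vec              # assumes gensim >= 4.0
from sklearn.neural_network import MLPClassifier

# Toy interaction utterances with emotion labels (1 = positive, 0 = negative).
sentences = [
    "i am so happy to see you".split(),
    "this is wonderful thank you".split(),
    "i feel sad and tired today".split(),
    "this is terrible and i am upset".split(),
]
labels = np.array([1, 1, 0, 0])

# Learn word embeddings from the (toy) corpus.
w2v = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=200, seed=0)

def sentence_vector(tokens, model):
    """Average the Word2vec vectors of the tokens to get one sentence feature."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

X = np.vstack([sentence_vector(s, w2v) for s in sentences])

# Small feed-forward network standing in for the deep emotion classifier.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
clf.fit(X, labels)

query = sentence_vector("i am happy today".split(), w2v).reshape(1, -1)
print(clf.predict(query))   # predicted emotion label for a new utterance
```

In practice the model described in the paper would be trained on a labelled emotion corpus with a deeper network, but the feature-extraction and classification steps follow the same shape.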
As science and technology advance, robots have gradually made their way into every facet of human life; they are used in manufacturing, the military, home healthcare, education, and laboratories [1]. The three guiding principles of robotics [2] hold that the ultimate goal of robot development is for robots to behave like humans, assist people in performing tasks more effectively, and accomplish shared goals [3]. To achieve this in human-robot cooperation, people need to interact with the robot more effectively [4, 5]. In traditional human-computer interaction, a person enters data through a keyboard, mouse, or other manual input device, while the computer presents output through a display and other peripherals. This mode of interaction depends on supplementary equipment, and in practice a computer is not accessible to everyone [6]. The natural channels of communication between people and machines include speech, vision, touch, hearing, and proximity [7]; this manner of interaction is both familiar and productive, allowing humans and robots to work together more efficiently [8]. The emotion analysis model of the humanoid robot can assess and detect the emotional information of the interaction partner during the interaction [9]. During interaction, the partner's language carries a wealth of emotional information, and its textual content reflects human cognition at a high level.
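The introduction argues that the interaction partner's language carries the emotional information the robot needs. As a purely illustrative sketch, the snippet below shows how a detected emotion label might condition an open-domain reply; the keyword lexicon is only a placeholder for the deep-network emotion model described above, and the word lists, function names, and reply strings are hypothetical.

```python
# Illustrative only: emotion-conditioned reply selection for a robot dialogue.
NEGATIVE_WORDS = {"sad", "angry", "tired", "terrible", "upset"}
POSITIVE_WORDS = {"happy", "glad", "great", "wonderful", "thanks"}

def detect_emotion(utterance: str) -> str:
    """Crude keyword placeholder for the deep-network emotion analysis model."""
    tokens = set(utterance.lower().split())
    if tokens & NEGATIVE_WORDS:
        return "negative"
    if tokens & POSITIVE_WORDS:
        return "positive"
    return "neutral"

def choose_reply(utterance: str) -> str:
    """Pick an emotion-aware reply instead of a fixed template answer."""
    replies = {
        "negative": "I'm sorry to hear that. Would you like to talk about it?",
        "positive": "That's great to hear! Tell me more.",
        "neutral": "I see. Could you tell me a bit more?",
    }
    return replies[detect_emotion(utterance)]

print(choose_reply("I feel sad and tired today"))
```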

Published

2022-09-16

How to Cite

Dr. Nidhi Mishra, Dr. F Rahman, Mr. Kapil Kelkar. (2022). A Review Paper on Involuntary Human Motion Acknowledgement to Assist Feature Robotic Skills. Mathematical Statistician and Engineering Applications, 71(4), 2621–2630. https://doi.org/10.17762/msea.v71i4.820

Issue

Vol. 71 No. 4 (2022)

Section

Articles