
HUMAN ACTION USER INTERFACE

Most electronic machines, such as computers, industrial control systems and appliances, are designed to be operated by pushing buttons (keys) or by touching touch-sensitive screens with fingers. Although fingers touch input devices very often, the interaction between the fingers and the input devices is very limited. This paper proposes a new user interface, called a Human Action User Interface (HAUI), that enhances human interaction capability by using a human action recognition technique. The system identifies body actions, including the movement of a finger in the air, and identifies the corresponding element on the display device with which the user wants to interact. This framework enables a user to virtually hold commands in his or her own finger actions in the air, so the user can perform actions on any object appearing on the screen. For the system to understand the user's hand movements, the user has to sit in front of the screen. For added convenience, an image of the user is displayed on the screen, so that when they move their hand in the air they can see the same change in the image embedded in the user form.

DESIGNING THE HUMAN ACTION USER INTERFACE:

As shown in the figure below, the backbone of the Human Action User Interface (HAUI) is the camera and the following five modules:

1. Image recorder (usually a web camera)
2. Static and dynamic image filter
3. Envelope detector
4. Coordinate detector
5. Event trigger
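The flow through the five modules can be sketched in Python. This is a minimal, hypothetical sketch of the architecture only: the class and method names are assumptions chosen for illustration, not part of the original design.

```python
# Hypothetical sketch of the five-module HAUI pipeline.
# All module interfaces here are illustrative assumptions.

class HAUIPipeline:
    def __init__(self, recorder, image_filter, envelope_detector,
                 coordinate_detector, event_trigger):
        self.recorder = recorder                        # 1. image recorder (web camera)
        self.image_filter = image_filter                # 2. static/dynamic image filter
        self.envelope_detector = envelope_detector      # 3. envelope detector
        self.coordinate_detector = coordinate_detector  # 4. coordinate detector
        self.event_trigger = event_trigger              # 5. event trigger

    def step(self):
        """Run one pass: frames -> dynamic points -> apex -> screen position -> event."""
        frame_pair = self.recorder.next_frame_pair()
        dynamic = self.image_filter.dynamic_points(frame_pair)
        apex = self.envelope_detector.apex_point(dynamic)
        if apex is None:          # no motion detected in this pair of frames
            return None
        screen_xy = self.coordinate_detector.to_screen(apex)
        self.event_trigger.fire(screen_xy)
        return screen_xy
```

Each module can then be developed and replaced independently, which matches the modular decomposition listed above.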

RECORDING USER MOTION:

The first step is to record the motion of the user in order to process the action the user performs. Here the recording is done by a high-precision web camera connected to the system, and the video is stored in a high-quality MPEG format. This recorded video plays an important part in the system. The sequence of steps carried out while recording the user's motion can be explained by means of the following diagram.

The video is recorded and preserved on the local disk so that it can be accessed later. At the same time, the captured video is converted into individual frames, and these frames are then sent to the next stage for processing.
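The frame-extraction step above can be sketched as picking frames from the decoded sequence at a fixed interval. This is a minimal sketch, assuming the video has already been decoded into a frame list with a known frame rate; the function name and parameters are illustrative, not from the original design.

```python
def sample_frames(frames, fps, interval_s=0.2):
    """Pick frames spaced roughly interval_s seconds apart.

    frames: decoded frame sequence (any per-frame objects)
    fps: capture frame rate of the recording
    Returns the subset of frames handed to the next processing stage.
    """
    step = max(1, round(fps * interval_s))  # frames to skip between samples
    return frames[::step]
```

For example, a 30 fps recording sampled at 0.2 s intervals keeps every sixth frame.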

STATIC AND DYNAMIC IMAGE FILTER:

We now have the images (frames) in hand. The next important step is to find the envelope and hence the apex point. This can be done by following the procedure given in the figure below.

Here two consecutive images, taken 0.2 seconds apart, are compared with each other to find the static and dynamic points. The static points are the areas of the picture that represent static objects, such as a chair, a wall or a bench. The dynamic points are the points that represent moving objects. More precisely, the extraction of the envelope and the apex point can be explained as follows.

To Find The Apex Point:

When we start to work with the system, the system begins capturing images of the person. Subsequent images are then taken at intervals of 0.2 seconds, and every point of each image is compared with the corresponding point of the previous one. The movement of the head and the finger in front of the system helps in finding the envelope and hence the apex point; the subsequent images mainly help in finding the apex points.
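The comparison of two frames taken 0.2 s apart can be sketched as simple frame differencing. This is a minimal illustration, assuming grayscale frames represented as 2-D lists of intensities and taking the apex as the topmost dynamic point; the threshold value and function names are assumptions, not from the original paper.

```python
def dynamic_points(prev, curr, threshold=30):
    """Compare two frames pixel by pixel and return the coordinates whose
    intensity changed by more than `threshold` (the dynamic points).
    Pixels that did not change are the static points (background)."""
    points = []
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            if abs(p - c) > threshold:
                points.append((x, y))
    return points

def apex_point(points):
    """Take the apex as the topmost dynamic point (smallest y), or None
    if nothing moved between the two frames."""
    return min(points, key=lambda pt: pt[1]) if points else None
```

A real implementation would also smooth out camera noise before thresholding, but the core idea is the same subtraction of consecutive frames.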

Comparing these two pictures, we can see that the user has moved a little, so some pixels will definitely have changed. Pictures are therefore taken every 0.2 seconds, and the apex points of the head and the finger are found. Now that the apex points and the envelope have been traced, the remaining work is to map the computed coordinate onto the screen. Two more modules do this job: the Coordinate Detector computes the position on the screen with which the user wants to interact, and the Event Trigger fires the corresponding event of the object present at the position determined by the Coordinate Detector.

USAGE OF THE HUMAN ACTION USER INTERFACE:

-Contactless operation. The simplest use of the Human Action User Interface (HAUI) is to support contactless operation. Many user interfaces employ a mode-conversion method to make it possible to specify many commands with fewer buttons or keys; in a GUI environment, such as Windows XP or Linux KDE, object selection modes are switched by pressing keys. Here, however, there is no contact between the user and the computer system, so the system is highly stable.
-Maintenance-free system. Since operation is contactless, there is no chance of physical damage to the input hardware, so the system is maintenance free.
-More reliable. Since there is no mechanical contact between the user and the system, the system is more reliable.
-Effective replacement for the mouse. All the actions previously done with the mouse can be done by moving the fingers in the air.
-Automated scrolling. The system detects the movement of the finger in the air, and if it finds the user's finger at the bottom of the screen it automatically starts scrolling.
-Action tracking mechanism. The system records a log of the actions the user performs on the computer system, along with their picture, which is useful for future reference.
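The Coordinate Detector's mapping from camera coordinates to screen coordinates, and the bottom-of-screen check behind automated scrolling, can be sketched as follows. This is a minimal sketch under stated assumptions: a simple proportional mapping with an optional horizontal mirror (so on-screen motion matches the user's view of their own image), and an illustrative 90 % margin for the auto-scroll trigger. None of these names or constants come from the original design.

```python
def to_screen(apex, frame_size, screen_size, mirror=True):
    """Map an apex point in camera-frame coordinates to screen coordinates.

    The horizontal axis is mirrored by default so the cursor moves the
    same way the user sees their displayed image move.
    """
    fx, fy = frame_size
    sx, sy = screen_size
    x, y = apex
    if mirror:
        x = fx - 1 - x
    return (x * sx // fx, y * sy // fy)

def should_autoscroll(screen_y, screen_height, margin=0.9):
    """Trigger automatic scrolling when the finger sits in the bottom
    10% of the screen (margin is an illustrative assumption)."""
    return screen_y >= screen_height * margin
```

The Event Trigger would then fire the appropriate event (click, scroll, and so on) for whatever object lies at the returned screen position.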