Contextual behavior of the prosthesis

Hello everyone! I am continuing my series of articles about my bionic hand prosthesis project.

Part 1

Part 2

Part 3

Part 4 <- You are here

In the previous article, we examined various hand control schemes and drew the following conclusions. Direct control of all three degrees of freedom of the hand is complex and demands a great deal of coordination and concentration; the most convenient scheme turned out to be the one in which the user switches the hand into one of several static states, whichever suits the current situation best. In principle, this is nothing new: many commercial bionic prostheses have long followed this path. They let the user choose the hand's state, i.e. the pose in which the wrist and fingers are bent, from a predetermined set, where each pose is designed for interacting with a specific kind of object.

This is a good, simple solution, but it creates several difficulties. The first and most important is that most bionic prostheses are transradial, that is, designed to replace an arm lost below the elbow joint. The elbow remains mobile and lets the user position the bionic hand much more precisely. In my project, I am trying to solve the harder problem of a transhumeral prosthesis, one that replaces an arm lost above the elbow joint. The second difficulty is that fixed hand poses give only limited control over the object being manipulated, and this limitation has to be compensated by auxiliary movements of other body parts: turning the hand further using the remaining joints of the arm and torso, and shifting the whole body relative to the object.

Both of these challenges can be addressed by giving the prosthesis some behavior of its own that complements the user's actions and makes the task easier. Again, this is not news. One of the cutting-edge areas of research in prosthetics is using artificial intelligence to recognize the user's intentions and then adjust the prosthesis's own movements so that, ideally, it helps the user perform the intended action. Of course, at this stage AI solutions are not reliable: recognition accuracy falls short of 100%, training takes a long time, and the potential cost of such technology may be unacceptable for many patients. Those who have read my previous articles know that I am currently working in the paradigm of conventional deterministic algorithms, and it is within this paradigm that I propose to tackle the problem: to give the prosthesis its own contextual behaviors.

To better understand the direction of thought, let me digress to a seemingly unrelated topic. The picture shows a series of screenshots from my favorite game, Half-Life 2. If any readers have not played this masterpiece, an ode to physics and interactivity in video games, I will briefly describe what is happening in them. Each screenshot shows an interaction with some game object: pulling a lever, opening a door with a rotary handle, turning a valve, lifting and carrying a can and a box. And these screenshots have something in common: none of them shows the hands of the protagonist, Gordon Freeman, a silent physicist in exo-armor. The game mechanics are such that you walk up to an object and simply press the action button, after which the object is manipulated, and this manipulation raises no questions from the player because it happens exactly as expected. Modern games try to add hand animations when interacting with such elements, but this does not change the essence of what is happening: you approach an object, express a desire to interact with it in a certain way, and the game mechanics do the rest. Most everyday manual operations in real life work in roughly the same way: we do not consciously control individual joints; we have a general intention to do something with an object, and our hands obediently carry it out.

From this example, we can conclude that for certain actions with certain objects it is possible to implement contextual behavior of the prosthesis: the user performs only general movements, for example, brings the hand closer to the object, gives a command to grab, gives a command to turn, and the prosthesis carries out the rest of the action itself. It places the fingers at the most effective points for holding the object, turns the hand and elbow so that the desired action is performed, and maintains a constant orientation of the object in space or, conversely, moves it along the desired trajectory.
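As a minimal illustration of one such behavior, here is a sketch in Python of how the wrist pitch could be computed to keep a held object level relative to the ground. It assumes a simple planar arm whose pitch angles add up; the joint names and angle conventions are my own assumptions, not the actual kinematics of the prosthesis.

```python
# Minimal sketch: keep a held object level by compensating the wrist pitch.
# Assumes a planar arm where the pitch angles of shoulder, elbow and wrist
# simply add up; the real kinematics of the prosthesis may differ.

def level_wrist_pitch(shoulder_pitch_deg: float,
                      elbow_pitch_deg: float,
                      target_object_pitch_deg: float = 0.0) -> float:
    """Return the wrist pitch that keeps the object at the target pitch
    relative to the ground, given the current shoulder and elbow pitch."""
    return target_object_pitch_deg - (shoulder_pitch_deg + elbow_pitch_deg)


if __name__ == "__main__":
    # Example: shoulder raised 30 deg, elbow bent 45 deg -> the wrist must
    # pitch -75 deg so a held spoon stays horizontal.
    print(level_wrist_pitch(30.0, 45.0))
```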

We have figured out what contextual behavior is; now let's move on. How do we define a set of such behaviors? At first glance, it seems impossible to cover the whole variety of objects and ways of interacting with them, and maybe that is so, but for everyday, typical operations the list of behaviors turns out to be not that large.

In research on bionic hand prostheses, evaluating the prosthesis's effectiveness plays a major role, and various sets of tests, i.e. tasks that the prosthesis must cope with, have been proposed for this evaluation. For those interested, I suggest a large review article on such tests. At the moment there is no single standardized test, but if we run through them all, we will see something in common: most tests ask the user of the prosthesis to perform certain operations with a number of objects, either the most common objects in everyday life or a set of abstract geometric shapes imitating them. This means that we already have an initial set of objects, and of behaviors for those objects, that we need to implement. As an example, here is a table from the test by Linda Resnick and her colleagues, which lists the objects and the actions performed with them.

Implementing all of this is not that easy, so I started small: I tried to implement contextual behaviors for two objects, a spoon and a screwdriver. The screwdriver is not on the list; that is my own initiative, since as an electronics and modeling hobbyist I interact with a screwdriver much more often than with the tools in the list, scissors and a hammer.

The results can be seen in the demo below, after which I will give brief comments on what is happening in the video:

The first thing I want to mention about the new version of the prosthesis is that the gyroscope has disappeared from the head, and the hand is now a self-contained device. There are several reasons for this decision. The first and most important is that head control, although it adds precision, forces the user to constantly manage their head movements, which is not very convenient. The second reason is aesthetics and ergonomics. We are not yet in cyberpunk, where wires sticking out of the head are commonplace, so a head sensor, even a wireless one camouflaged to match skin color or hidden in the hair, would still attract unwanted questions and cause inconvenience. Finally, as it turned out, head control is redundant for the kind of behavior I am trying to implement, so at this stage I am abandoning it, though I may well return to it in the future.

Without the head tilt sensor, the plane along which the hand moves (see the first part for details) is controlled by a long press of the button: press and hold the button, move the plane, and release. The button, let me remind you, simulates a myo sensor, so in reality control happens by contracting a muscle (for example, the biceps). Thus a press, a double press, and a hold correspond to a short contraction, two consecutive contractions, and holding the muscle in a tensed state (readers can try this on themselves and play with their biceps; just make sure your colleagues don't see you if you're in the office). A sketch of how these three gestures can be told apart is shown below.
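Here is a minimal sketch, in Python, of how a single button (or a thresholded myo signal) could be classified into press, double press, and hold events. The timing thresholds and the polling interface are my own assumptions for illustration, not taken from the project's actual code.

```python
import time
from enum import Enum, auto


class Gesture(Enum):
    PRESS = auto()         # one short contraction
    DOUBLE_PRESS = auto()  # two quick contractions
    HOLD = auto()          # contraction held in a tensed state


class ButtonGestureDetector:
    """Classify a single digital input into press / double press / hold.

    Call update() regularly with the current button state; it returns a
    Gesture when one is recognized, otherwise None. Thresholds are
    illustrative assumptions.
    """

    HOLD_TIME = 0.6    # seconds pressed before it counts as a hold
    DOUBLE_GAP = 0.3   # max pause between presses of a double press

    def __init__(self):
        self._pressed_since = None
        self._released_at = None
        self._pending_press = False
        self._hold_reported = False

    def update(self, pressed: bool, now: float | None = None):
        now = time.monotonic() if now is None else now

        if pressed:
            if self._pressed_since is None:
                self._pressed_since = now
                self._hold_reported = False
            elif (not self._hold_reported
                  and now - self._pressed_since >= self.HOLD_TIME):
                # Button kept down long enough: report a hold once.
                self._hold_reported = True
                self._pending_press = False
                return Gesture.HOLD
            return None

        # Button is released.
        if self._pressed_since is not None:
            was_hold = self._hold_reported
            self._pressed_since = None
            if was_hold:
                return None
            if self._pending_press:
                # Second short press within the gap: double press.
                self._pending_press = False
                return Gesture.DOUBLE_PRESS
            # First short press: wait to see if a second one follows.
            self._pending_press = True
            self._released_at = now
            return None

        if (self._pending_press
                and now - self._released_at > self.DOUBLE_GAP):
            # No second press arrived in time: it was a single press.
            self._pending_press = False
            return Gesture.PRESS
        return None
```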

The video demonstrates two modes of operation: spoon mode and screwdriver mode. Each mode implements a small state machine in which the prosthesis switches sequentially from one state to the next. In the starting state, the prosthesis moves without any restrictions, but the hand tries to maintain a certain angle relative to the ground so that the object can be grasped conveniently. Grasping happens on a button press. After that, the behavior of the two modes diverges.

For the spoon, the hand tries in all states to maintain its orientation relative to the ground so that the spoon it holds keeps its orientation too; this is needed so that we do not tip the food off while carrying it on the spoon. By pressing the button, we can completely immobilize the prosthesis so that it stays in its last position, which lets us scoop up food with the spoon by rotating the forearm. Finally, when we bring the spoon to the head, the hand automatically turns the spoon toward the head to make eating easier.

For the screwdriver, the state machine is more complex. After grabbing the screwdriver, we pass through two states: in the first, we set the distance of the hand from the head; in the second, the elbow joint is fixed and we set the angle of the hand, and hence of the screwdriver, relative to the ground. This state is nothing more than the pose mode from the previous article. After that, we enter the final state, in which holding the button rotates the screwdriver clockwise, and releasing it rotates it back counterclockwise automatically. Moving forward through the states is done with a single button press, and moving back with a double press. A sketch of such a state machine is given below.
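Here is a minimal sketch in Python of how such a per-object state machine could be organized, using the screwdriver mode as an example. The state names, transitions, and handler methods are illustrative assumptions, not the prosthesis's actual control code.

```python
from enum import Enum, auto


# Gesture mirrors the button/myo classifier sketched earlier.
class Gesture(Enum):
    PRESS = auto()
    DOUBLE_PRESS = auto()
    HOLD = auto()


class ScrewdriverMode:
    """Illustrative state machine for the screwdriver mode:
    free movement -> set distance -> set angle -> turning."""

    class State(Enum):
        FREE = auto()          # unrestricted movement, hand keeps grasp angle
        SET_DISTANCE = auto()  # after grasping: choose the hand's distance
        SET_ANGLE = auto()     # elbow fixed: choose the screwdriver's angle
        TURNING = auto()       # hold to turn clockwise, release to turn back

    ORDER = [State.FREE, State.SET_DISTANCE, State.SET_ANGLE, State.TURNING]

    def __init__(self):
        self.state = self.State.FREE

    def on_gesture(self, gesture: Gesture) -> None:
        i = self.ORDER.index(self.state)
        if gesture is Gesture.PRESS and i + 1 < len(self.ORDER):
            self.state = self.ORDER[i + 1]   # single press: next state
        elif gesture is Gesture.DOUBLE_PRESS and i > 0:
            self.state = self.ORDER[i - 1]   # double press: previous state

    def on_button_held(self, held: bool) -> str:
        # In the final state the held button drives the rotation.
        if self.state is self.State.TURNING:
            return "rotate clockwise" if held else "rotate back"
        return "no rotation"


if __name__ == "__main__":
    mode = ScrewdriverMode()
    mode.on_gesture(Gesture.PRESS)   # FREE -> SET_DISTANCE (after grasping)
    mode.on_gesture(Gesture.PRESS)   # SET_DISTANCE -> SET_ANGLE
    mode.on_gesture(Gesture.PRESS)   # SET_ANGLE -> TURNING
    print(mode.state, mode.on_button_held(True))
```

The spoon mode would be a second, simpler machine of the same shape, with its states implementing the orientation hold, the full freeze, and the turn toward the head.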

Conclusion

So, we have managed to implement contextual behavior for two objects, which is a good start. Next, we can work on quantity and start writing modes for other objects and actions. The question remains of how the user will switch between them. Of course, switching could be done the way some commercial models do it, via a phone and a dedicated app, but I would like all of the prosthesis's functionality to be concentrated in the prosthesis itself, without external devices. So far, my idea is to put a small display on the prosthesis and switch modes, for example, by rotating the forearm (like the menu on early iPods with the click wheel). It is also worth thinking about algorithms that would smooth out the prosthesis's movements and make them more natural and less abrupt.

But the most important thing I would like to tackle in the near future is the mechanics. The small SG90 servomotors that move the hand no longer satisfy me even for testing: they do not always withstand even idle loads, when the hand is not holding anything, and as a result they constantly fail. I do not know whether this is due to insufficient power supply to the motors or to a breakdown in the internal logic of their controllers, but it is time to think about driving the hand with more powerful motors. Two problems immediately arise here. The first is weight. This problem is not that big, and I am sure it can be solved with some kind of passive arm support, like those found in construction exoskeletons. The second, more serious problem is where to place these motors and how to transmit their movement to the hand, possibly through a movable elbow joint. I have found several interesting solutions, but all of them require parts with high manufacturing precision, so this stage of the project will most likely drag on, and I will probably finally have to put aside the cardboard and glue and get acquainted with 3D printing.

In the near future, I will focus on another project related to this one: its virtual twin, which I mentioned in the previous article. That project is already in development, and I will tell you about it soon. In the meantime, I say goodbye; thank you all for your attention! As always, the source code and all materials can be found on the project's GitHub page.
