From a disabled person to a cyborg with an AI hand

The future is here, and that is no exaggeration. In our publication Third Eye for the Blind, we talked about how several ultrasonic sensors can make life easier for blind people. Today we are talking about a cybernetic hand based on deep learning whose decoding accuracy exceeds 95%. The article also includes the impressions of a daredevil who decided to test the technology on himself; that is who you see in the cover image.


You can listen to this story (in English) on SoundCloud.

There were 600 new articles about the Transformer architecture this week. What should I do? Randomly choose several of them, publish them with practically no changes (apart from a few small things), and perhaps improve them a little?

I hope you are not too discouraged by such an introduction, but please understand me correctly: the Transformer architecture is so popular today that everyone is simply drowning in reports about it. Of course, it is an amazing architecture, it can be extremely useful in many cases, and it is no accident that most researchers are crazy about it, but there are other things in the field of artificial intelligence (AI) that are, believe me, no less and perhaps even more fascinating! Don't worry, I will naturally keep talking about impressive projects based on the Transformer architecture in NLP, machine image recognition and many other areas. I think this architecture is very promising, but simply retelling the content of new papers, making only cosmetic changes to them, is not that interesting to me.

As an example, I can mention a couple of papers published in March that talk about using the Transformer architecture for image classification. These works are quite similar to each other, and I have already talked about one of them (see below). I believe that from them you can get a fairly complete picture of the current state of the Transformer architecture used for machine image recognition.

Related article: Can the Transformer architecture replace CNN in machine image recognition?

Now let's turn to the real topic of this article! It has nothing to do with the Transformer architecture or even GANs, and there are no buzzwords in it (except, perhaps, the word "cyberpunk"), yet it is one of the coolest applications of AI I have come across lately! It can solve the pressing problems of many people and drastically change their lives for the better. Of course, it does not look as spectacular as, say, turning a human face into an anime or cartoon character, but nothing beats its usefulness.

I present to you "A Portable, Self-Contained Neuroprosthetic Hand with Deep Learning-Based Finger Control" by Nguyen, Drealan et al. Or, in the words of one of the authors, this is a "cyberpunk" hand!

But first, I would like to remind you of the free NVIDIA GTC event coming up next week. You will get plenty of interesting news from the world of artificial intelligence, and if you subscribe to my newsletter, you will receive a prize from the Deep Learning Institute, which I manage. If you are interested in this offer, check out my previous video, in which I talked about this prize.

Now let’s take a closer look at this unique and amazing new work.

In this work, deep learning technologies are built into the neuroprosthesis itself, allowing real-time control of the movements of its individual fingers. A person who lost his hand 14 years ago can now move the artificial fingers as if they were his own! Command latency ranges from 50 to 120 milliseconds, and movement accuracy from 95 to 99%. The work shows that embedding deep neural networks directly into wearable biomedical devices is not only possible, but also extremely effective!

A real cyborg!

The project uses the NVIDIA Jetson Nano module, which is specially designed for deploying AI systems in stand-alone applications. This made it possible to run a GPU and powerful libraries such as TensorFlow and PyTorch inside the prosthesis itself. The authors of the project say: "When implementing our neural decoder, we found the most suitable compromise between size, power and performance." The main goal of this work is to deploy deep learning neural decoders efficiently on a portable device suitable for long-term, real-world use in clinical practice.

Neuroprosthesis

Naturally, there are many technical subtleties that I will not cover here (I am hardly an expert in them!). For example, I will not talk about how nerve fibers and bioelectronic elements are connected to each other, which microchips allow simultaneous neural recording and stimulation, or how the software and hardware behind the real-time motor decoding system are implemented. If you would like to know more about this, you can refer to the corresponding papers; they are easy to find through the links below. Let's just look at the deep learning principles implemented in this amazing invention. The innovative idea was to equip the motor decoding system with deep learning technologies while keeping the computational load manageable on the Jetson Nano platform.

NVIDIA Jetson Nano data flow diagram

The figure shows the data flow on the Jetson Nano platform. First, peripheral nerve signals from the amputated arm are sent to the platform. The data is then preprocessed, and this step is very important: the system takes a window of raw neural data, computes its main time-domain characteristics, and loads them into the model. The preprocessed data corresponds to one second of neural data recorded from the amputated arm and cleaned of noise sources. These features are then fed into the deep learning model, and the end result is control over the movement of each finger. There are five output sets in total, one per finger.
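To make the preprocessing step more concrete, here is a minimal sketch of time-domain feature extraction over a 1-second window. The specific features (mean absolute value, RMS, waveform length, zero crossings), the channel count and the sampling rate are my own assumptions for illustration; the paper's exact feature set is not described here.

```python
import numpy as np

def time_domain_features(window: np.ndarray) -> np.ndarray:
    """Compute simple time-domain features for one 1-second window.

    `window` has shape (channels, samples). The feature set below (mean
    absolute value, RMS, waveform length, zero-crossing count) is a common
    choice for neural/EMG decoding and is only an illustration; the paper's
    exact features may differ.
    """
    mav = np.mean(np.abs(window), axis=1)                       # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=1))                 # root mean square
    wl = np.sum(np.abs(np.diff(window, axis=1)), axis=1)        # waveform length
    zc = np.sum(np.diff(np.sign(window), axis=1) != 0, axis=1)  # zero crossings
    return np.concatenate([mav, rms, wl, zc])                   # one flat feature vector

# Hypothetical usage: 16 recording channels sampled at 1 kHz for one second.
raw = np.random.randn(16, 1000)
features = time_domain_features(raw)
print(features.shape)  # (64,) in this toy configuration
```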

How does the authors' model actually work? It starts with a convolutional layer, which is used to extract different representations of the input data. In this case there are 64 convolutions, each obtained with a different filter, so the layer produces 64 different views of the signal.
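As a rough illustration of that first stage, the sketch below builds a 1-D convolutional layer with 64 filters in PyTorch (one of the libraries mentioned above). The input length and kernel size are assumptions for the example, not values from the paper.

```python
import torch
import torch.nn as nn

# A minimal sketch of the first stage: a 1-D convolution with 64 filters,
# each producing one "view" of the preprocessed neural features.
# The input length and kernel size are illustrative assumptions.
conv = nn.Conv1d(in_channels=1, out_channels=64, kernel_size=3, padding=1)

x = torch.randn(1, 1, 512)  # a batch of one 1-second feature vector (512 features)
views = conv(x)
print(views.shape)          # torch.Size([1, 64, 512]) -> 64 different representations
```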

The filters are network parameters that the system learned during training so that it can properly control the prosthesis once it is connected. We know that time matters a great deal here, since the fingers have to move smoothly, so Gated Recurrent Units (GRUs) were chosen to capture this time-dependent aspect of decoding the data.

The GRUs tell the model what the hand was doing a moment ago (what was encoded previously) and what it needs to do next (what is currently being decoded). In simple terms, a GRU is nothing more than an improved version of the recurrent neural network, or RNN.

GRUs address a computational problem of plain RNNs: they add gates so that only the relevant information about past inputs is kept during the recurrent process, instead of carrying everything seen so far forward at every step.

Essentially, these gates decide what information should be passed on to the output. As in any recurrent neural network, in our case one second of data, represented as 512 features, is processed iteratively by the GRUs. At each step the GRU receives the current input and the previous output and produces the next output. The GRU can therefore be viewed as an optimization of the "basic" recurrent neural network architecture. At the last stage, the decoded information is sent to linear layers, which convert it into a probability value for each individual finger.
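Putting the pieces together, here is a hedged PyTorch sketch of such a decoder: a GRU that processes the 512-dimensional feature vector while carrying its previous state forward, followed by a linear head that outputs one probability per finger. The hidden size and other details are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class FingerDecoder(nn.Module):
    """Illustrative decoder: feature vector -> GRU -> per-finger probabilities.

    Only the overall structure (gated recurrent unit followed by a linear
    output with one probability per finger) follows the description above;
    the layer sizes are assumptions made for this example.
    """
    def __init__(self, n_features=512, hidden=128, n_fingers=5):
        super().__init__()
        self.gru = nn.GRU(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_fingers)

    def forward(self, x, h=None):
        # x: (batch, time_steps, n_features); h carries the previous GRU state,
        # i.e. what the hand was doing at the previous step.
        out, h = self.gru(x, h)
        logits = self.head(out[:, -1])   # take the latest time step
        return torch.sigmoid(logits), h  # one probability per finger

decoder = FingerDecoder()
window = torch.randn(1, 1, 512)          # one 1-second feature vector
probs, state = decoder(window)
print(probs.shape)                       # torch.Size([1, 5]) -> five fingers
```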

As the article explains, the authors tried many different architectures and arrived at the most computationally efficient model, which still works with amazing accuracy of more than 95%.

We now have a general idea of how the model works and how accurate it is, but some questions remain. For example, how does a person using the neuroprosthesis feel? How real are his sensations? How well does the prosthesis work? And the question everyone is interested in: can such a prosthesis replace a real hand?

Here is what the patient himself says:

I understand that this thing still needs some work. It should have more "lifelike" functions for performing everyday tasks, so that you don't have to think about what position the hand is in or which mode it is programmed for. It needs to work like this: I saw it, I reached out, I took it. […] Ideally, I should feel not a prosthesis on my body, but an ordinary hand. I guess we'll get to that. I believe it!

For me, this invention is the most incredible example of the use of artificial intelligence technologies.

This invention is capable of improving the quality of human life, and there is nothing more honorable than that goal. I hope you enjoyed this article. You can also watch the video version, where you can see the movements of a real cyborg hand with your own eyes. Thanks for reading. I already said it in the video, but I will repeat it here: "This is insanely cool!"

Changing the whole world is a very big goal, and it is practically unattainable. But we are quite capable of changing some part of it. Such prostheses, and the software behind them, can make the world a better place for many people who, for whatever reason, have lost parts of their bodies. If you lack the knowledge to implement your ideas, take a look at our advanced course on Machine Learning and Deep Learning; perhaps it is you who will teach prostheses to respond to the slightest nerve impulses.

Find out how to level up in other specialties or master them from scratch:

Other professions and courses
Links
  • [1] Nguyen & Drealan et al. (2021). A Portable, Self-Contained Neuroprosthetic Hand with Deep Learning-Based Finger Control.

  • [2] Luu & Nguyen et al. (2021). Deep Learning-Based Approaches for Decoding Motor Intent from Peripheral Nerve Signals.

  • [3] Nguyen et al. (2021). Redundant Crossfire: A Technique to Achieve Super-Resolution in Neurostimulator Design by Exploiting Transistor Mismatch. https://experts.umn.edu/en/publications/redundant-crossfire-a-technique-to-achieve-super-resolution-in-ne

  • [4] Nguyen & Xu et al. (2020). A Bioelectric Neural Interface Towards Intuitive Prosthetic Control for Amputees.
