Final Project Post

Project Title: Táltos-oid (Formerly posted under InGlove)

Project Team: Curt Henrichs

Sentence Description:

Human augmentation device providing users with an extra thumb for everyday tasks.

Video:

Poster Image:

The following image is the poster presented at the Wearable Technology showcase.

Project Description:

The long-term goal of this project is to experiment with augmenting the human body. Specifically, I am interested in understanding how a supernumerary robotic finger interacts with its user during the typical tasks of everyday life (in essence, what it means to live with such a device) in order to better understand the design challenges and applications in a generalized manner. This is motivated by a gap in the nascent literature on supernumerary robotic limbs, which tends toward narrow application foci instead of exploring these broad questions.

To achieve this goal, the supernumerary robotic finger must be a functional appendage, meaning a well-articulated finger paired with an intuitive interface to command it. I consider a well-articulated finger to be one that provides a sufficient range of motion: a user should be able to understand the appendage as a finger or thumb, thereby building on existing grasping knowledge. Regarding intuitive control, the finger must capture the user's intentions in a manner that facilitates the augmented task. The intention signal-to-noise ratio must therefore be high enough to be actionable, yet the interface must not burden the user.

Táltos-oid is designed to address the outlined goal by providing a four-degree-of-freedom robotic finger and a flex sensor glove that provides gesture data to control the finger. Táltos-oid takes a function-first perspective, though the aesthetics of the design should still be reasonably thought out. In the following paragraphs I will discuss the robotic finger, flex sensor glove, and data-processing subsystems.

The robotic finger is composed of custom-designed 3D printed plastic components actuated by four high-torque micro-servos. The rotational joints roughly map to the articulation of a human thumb, with a greater range of motion where the micro-servos make it feasible. A soft interface layer was constructed with neoprene and leather to provide comfortable mounting of the finger to the human hand. The design of the interface is inspired by wrist and thumb braces, with a 3D shape held by the plastic and leather. Furthermore, the interface is adjustable with two Velcro straps, providing some invariance to hand size. Finally, control of the servo motors is handled through an I2C servo driver.
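
Concretely, commanding a joint through the driver boils down to converting a joint angle into a PWM pulse. Below is a minimal sketch of that conversion, assuming a PCA9685-style 12-bit driver running at 50 Hz and a 500–2500 µs pulse range; the exact driver model, frequency, and pulse limits are my assumptions, so check the servo datasheet before reusing the numbers.

```python
# Sketch: mapping a joint angle to a 12-bit tick count for an I2C servo
# driver. Assumes a PCA9685-style driver at 50 Hz and a 500-2500 us
# pulse range -- both assumptions, not measured values.

PWM_FREQ_HZ = 50                       # standard hobby-servo frame rate
PERIOD_US = 1_000_000 / PWM_FREQ_HZ    # 20,000 us per frame
MIN_PULSE_US = 500                     # pulse width at 0 degrees (assumed)
MAX_PULSE_US = 2500                    # pulse width at 180 degrees (assumed)

def angle_to_ticks(angle_deg: float, resolution_bits: int = 12) -> int:
    """Convert a joint angle (0-180 deg) to driver ticks."""
    angle_deg = max(0.0, min(180.0, angle_deg))
    pulse_us = MIN_PULSE_US + (MAX_PULSE_US - MIN_PULSE_US) * angle_deg / 180.0
    ticks_per_frame = 2 ** resolution_bits
    return round(pulse_us / PERIOD_US * ticks_per_frame)

print(angle_to_ticks(90))  # mid-range pulse (~1500 us -> ~307 ticks)
```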

For the flex sensor glove, I started by miniaturizing and innovating on the DIY flex sensors documented by Plusea. Specifically, I introduced a node in the center of the flex sensor to provide voltage, thereby allowing two joints to be captured as parallel resistors. I sewed on the flex sensors using a manikin hand to support the material. I then used silver-nylon conductive thread and Bare Conductive electronic ink to connect the flex sensors to wires coming from the microcontroller (through a wire-to-thread interface). The wire-to-thread interface is composed of either a 3-pin or 2-pin male header with conductive thread tied and wrapped around the long end. I applied conductive ink to the threads to secure them. After drying, the long ends of the pins are pushed through a piece of electrical tape, sandwiching the conductive thread between the electrical tape and the plastic spacer of the header. A bit of hot glue then secures the interface. This interface is sewn into the glove, the wires attached, and more hot glue applied to secure the assembly.

Finally, to read the flex sensor signal, the wires are connected to an analog mux and 3.3V. When the microcontroller selects a channel, that sensor is placed in series with a resistor to ground, creating a voltage divider that the ESP32's ADC can read. For the custom flex sensors, I found 1 kΩ to be a reasonable resistor value. If using a commercial flex sensor this may change (e.g., the Adafruit 4.5″ flex sensor works well with 10 kΩ).
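
As a sanity check on the divider, the sensor's resistance can be recovered from the ADC reading. A minimal sketch follows, assuming the wiring above (sensor to 3.3V, fixed resistor to ground) and the ESP32's 12-bit readings; the function itself is illustrative, not project code.

```python
# Sketch: recovering a flex sensor's resistance from a voltage divider
# reading. Assumes the sensor sits between 3.3 V and the ADC node, with
# a fixed resistor from the ADC node to ground, and a 12-bit ADC.

V_SUPPLY = 3.3
ADC_MAX = 4095        # ESP32 ADCs are 12-bit (0-4095)

def sensor_resistance(adc_reading: int, r_fixed_ohms: float = 1_000.0) -> float:
    """Invert the divider: V_out = V_supply * R_fixed / (R_sensor + R_fixed)."""
    v_out = V_SUPPLY * adc_reading / ADC_MAX
    if v_out <= 0:
        return float("inf")   # open circuit / no reading
    return r_fixed_ohms * (V_SUPPLY / v_out - 1.0)

# A mid-scale reading implies the sensor matches the fixed resistor.
print(sensor_resistance(2048))   # ~1000 ohms with the 1 kOhm divider
```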

The microcontroller and data-processing subsystem is the last major component of this project. It consists of an ESP32 microcontroller and a PC connected over Bluetooth. The ESP32 was selected because it provides Bluetooth onboard along with WiFi (not used for this project, but available for the future). The ESP32 connects to the I2C servo driver and the analog mux as noted previously. The firmware is developed around a serial JSON API that pushes current-state updates to the connected PC and provides polling functions for sensor data. The JSON API also provides a command to set the joint states of the finger.
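
For illustration, a minimal PC-side exchange over this kind of API might look like the sketch below. The port name, baud rate, and message fields (`sensors`, `set_joints`) are assumptions of mine, not the firmware's actual schema.

```python
# Sketch: exchanging newline-delimited JSON with the ESP32 over a
# Bluetooth serial port. The schema ("sensors", "set_joints") and the
# port name are assumed for illustration; the real firmware may differ.

import json
import serial  # pyserial

with serial.Serial("/dev/rfcomm0", 115200, timeout=1.0) as port:
    # Read one pushed state update from the firmware.
    line = port.readline().decode("utf-8").strip()
    if line:
        state = json.loads(line)
        print("sensor readings:", state.get("sensors"))

    # Command the four finger joints (degrees).
    command = {"set_joints": [90, 45, 30, 10]}
    port.write((json.dumps(command) + "\n").encode("utf-8"))
```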

On the PC, I wrote several Python scripts to assist in development and operation of the machine learning model that controls the finger. The first script worth noting is a training script, which provides a command-line interface to set joint states and capture sensor and joint data to a CSV file. The second is the model generation script, which takes in the CSV file and outputs a trained model, based on user selection, as a Python pickle file. Finally, a multi-threaded model runner script captures the pushed state from the serial port, calculates joints with the trained model, and commands the finger joints through the serial JSON API.
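
Condensed, the model-generation step looks roughly like the sketch below. The CSV layout (sensor columns first, four joint columns last), the hidden-layer sizes, and the file names are illustrative assumptions, not the actual script.

```python
# Sketch: the model-generation step, condensed. Assumes a CSV whose
# leading columns are raw sensor readings and whose last four columns
# are the commanded joint angles -- the column layout is an assumption.

import pickle
import numpy as np
from sklearn.neural_network import MLPRegressor

data = np.loadtxt("training_data.csv", delimiter=",", skiprows=1)
X, y = data[:, :-4], data[:, -4:]      # sensors -> four joint angles

# One network predicts all four joints at once (as described above).
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
model.fit(X, y)

with open("finger_model.pkl", "wb") as f:
    pickle.dump(model, f)
```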

For the machine learning models, I worked with linear regression, AdaBoost random forest regression, and a neural network. The linear regression and random forest approaches consisted of four models, each trained on a single joint, whereas the neural network was trained to generate all joints in one model. All of these models are trained on a supervised problem of mapping raw sensor data to joint states. Additionally, I explored a couple of pre- and post-processing steps on this data. First, I applied PCA to the sensor data and compared the model trained using PCA against the raw-data model. With no noticeable difference, I opted for the raw-data model, if only for simplicity. Then I applied a moving average to the sensor data, again comparing against the default case. I found the moving average helped the random forest and linear regression approaches, but the neural network did not benefit from this addition. Then I tried post-processing the joint values to prevent some of the jitter produced by the models (except the neural network, as it was already fairly stable). I found that a moving average on the joint values did benefit the random forest and linear regression approaches. Finally, I tried both a sensor and a joint moving average, which made for a stable but lagged response from the SRF. Given these results, I concluded that the neural network was the best approach. In terms of implementation, I used Python's scikit-learn library to develop these models.
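
The smoothing in question is a plain moving average over a sliding window; a minimal sketch follows (the window size is an illustrative choice, not the value used in the project).

```python
# Sketch: moving-average smoothing as applied to sensor readings (and,
# in some variants, to predicted joint values). Window size is an
# illustrative choice.

from collections import deque
import numpy as np

class MovingAverage:
    def __init__(self, window: int = 5):
        self._buffer = deque(maxlen=window)

    def update(self, sample) -> np.ndarray:
        """Push one reading, return the mean of the current window."""
        self._buffer.append(np.asarray(sample, dtype=float))
        return np.mean(self._buffer, axis=0)

smoother = MovingAverage(window=5)
for reading in ([10, 20], [12, 22], [30, 18]):
    print(smoother.update(reading))
```

Applying one instance to the sensor vector and a second to the predicted joints reproduces the stable-but-lagged behavior noted above, since each window adds latency.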

Given the description of what Táltos-oid is, the next topic is how to wear and use it. While it is technically a modular system, with the glove, finger, and microcontroller connected by wires that mate with header pins, I have found it far easier to keep the connections intact and simply put the components on in the following order: glove -> microcontroller wrist-strap -> robotic finger. After putting on the device, connect the two USB power cables to the USB ports on the microcontroller wrist-strap. When powered up, the finger holds a predefined resting position. The user can then start the model runner script with the neural network model and begin grasping objects. The model was trained such that a fully closed hand causes the SRF to rest its tip on the fingers, mirroring a typical biological thumb state. When the hand is fully open, the model stretches out the finger, assuming it is grasping a large object. Between these extremes, the finger moves to anticipate the size of the object being grasped (even if no object is being grasped, as there is no way to discern that state).

Feelings toward Project:

I am very pleased with the way the robotic finger turned out, generally pleased with the microcontroller subsystem, and a bit disappointed with the glove. While I will cover these in the successes and challenges sections, it is worth noting that the disappointment with the glove is due to a failure in my design and implementation, not in the concept behind it. Specifically, the glove failed during the showcase event, which left no time in the term to correct it. Thus, I feel the project is not the best it could be. From a different perspective, though, I am glad the glove failed, as I can now learn from this failure with the intention of not repeating the same design decisions in the future (regardless of project).

Successes:

Though the glove did fail, as mentioned, I was able to construct a simple replacement that demonstrated some of the capabilities of the project. Recall that my long-term goal is to develop a human augmentation device for everyday tasks and to understand the challenges, design considerations, etc. that the space offers. To that end, I would consider this a good step in that direction. The technical platform of the finger is a good first iteration; the microcontroller system works well enough for the task at hand; and the simplified glove made for the final demo illustrates the intuitive control. The challenges I faced during the design are also worth capturing, as they are part of the journey toward answering my questions.

If I had to give a quantitative score, I would give an 8.5 out of 10 against my expectations. The missing piece is a user study to start asking the questions outlined.

Challenges:

Like any ambitious project, one does not know ahead of time all that will be encountered. While this presents challenges, it also affords the opportunity to pivot and to test the concepts brought into the project. My project was no exception, with pivots on several key decisions, presented here in roughly chronological order.

The first challenge encountered in this project was that the flex sensors available for purchase are built on a flexible PCB, which restricts movement and forces the glove into a fixed shape instead of adapting to the wearer's hand. An InGlove team member (when I was on that team) found Plusea's work on a flex sensor that is soft and conforms to the body.

The second challenge was redesigning the robotic finger, as it initially had three joints instead of the current four. While three joints are sufficient for grasping most smaller objects, they are insufficient for grasping large or awkward objects that need support below the palm of the wearer. Fortunately, I designed the robotic finger in anticipation of such changes (by using a modular 3D printed component system). As an aside, there is also a piece of sanded-down Command hook that provides the necessary angle between two flex sensors; this could easily be replaced by a 3D printed component.

The third challenge was the interaction between Bluetooth and ADC readings. The ESP32 sells itself as having two onboard ADCs, giving approximately half of its pins ADC capability. However, when using the radio (either Bluetooth or WiFi), ADC2 cannot be used for external sensing. I also found issues with ADC1 when using the radio, though it generally worked. To handle this challenge, I used an analog mux that I had purchased on a whim for a different project.
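
To illustrate the workaround, here is a MicroPython-flavored sketch of scanning the sensors through the mux on an ADC1 pin. The GPIO numbers and the 16-channel mux with four select lines are assumptions; the actual firmware may be structured differently.

```python
# Sketch: reading several flex sensors through an analog mux on a
# single ADC1 pin, MicroPython-flavored. GPIO numbers and the
# 16-channel mux are illustrative assumptions.

from machine import ADC, Pin

SELECT_PINS = [Pin(n, Pin.OUT) for n in (25, 26, 27, 14)]  # mux S0-S3
adc = ADC(Pin(34))            # GPIO34 is on ADC1, usable with the radio
adc.atten(ADC.ATTN_11DB)      # full 0-3.3 V input range

def read_channel(channel: int) -> int:
    """Select a mux channel, then sample the shared ADC pin."""
    for bit, pin in enumerate(SELECT_PINS):
        pin.value((channel >> bit) & 1)
    return adc.read()         # 12-bit reading, 0-4095

readings = [read_channel(ch) for ch in range(8)]
```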

The fourth challenge and pivot was using an I2C servo driver to generate the PWM signals for the micro-servos instead of driving them directly from the ESP32. While working with the ESP32, I found that the PWM signal would dip in voltage when the servos were moving. Using a second power source helps (and I did implement it), but I also went a step further and purchased a driver board that offloads the control task from the microcontroller. I have tested with a single battery and the system still functions correctly, so this change was not pointless.

The current failures that I am facing (and will address) are torque issues at the robot fingertip and developing a new glove approach.

The torque issue is a consequence of driving the joint directly from the servo. One approach is gearing: if I reduced the joint's range to 90 degrees with a 2:1 reduction, I could double the torque output. Another approach would be to purchase micro-servos with an even higher torque rating (though this would be more expensive than adding two gears). For this version of the finger, I think the torque is sufficient to express the concept.
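
The tradeoff is just the gear ratio: halving the range doubles the torque. A quick illustration with made-up servo numbers:

```python
# Sketch: the range-for-torque tradeoff behind the 90-degree idea.
# A 2:1 reduction doubles joint torque but halves joint range
# (ignoring friction losses). The servo numbers are made up.

def geared_joint(servo_torque_kg_cm, servo_range_deg, ratio):
    """Return (joint torque, joint range) for a gear reduction `ratio`."""
    return servo_torque_kg_cm * ratio, servo_range_deg / ratio

print(geared_joint(2.0, 180.0, 2.0))  # -> (4.0 kg*cm, 90.0 deg)
```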

Designing a new glove approach means more exploration of technologies, implementation strategies, and testing. As I mentioned before, I am glad the glove failed, as it instructs me on the next steps I need to take to improve the design. So far I understand the issues with the glove to be:

  • Megaohm-scale resistance between the flex sensors and wire terminals.
  • Some of the terminals failed outright.
  • Some of the conductive threads, while still attached, seemed to have increased resistivity (perhaps from friction and skin contact).
  • The conductive ink used as a conductive glue turned out to be water soluble, and thus not great if the user's sweat breaks down the glove.

The last challenge is more of a hypothesis. Specifically, context and environment awareness is important for the machine learning algorithm to make sense of grasp intention and the resulting joint states. A user can grasp different objects that look nearly identical at the level of the flex sensors yet require different sets of joint states from the robotic finger. This made machine learning challenging, as seemingly valid data could cause performance to drop drastically.

Next Steps:

If the course were extended another week or two, I would rework the broken glove to get a fully functional demo for the blog. Additionally, I would try to run a mini UX study with convenience sampling from the class.

A short while further into the future, I would rework my approach to glove and finger tracking with either a new interface design, COTS flex sensors, or IMUs on a flex PCB. I would also like to explore haptic feedback that signals the joint configuration to the user.

Further into the future, I would like to explore environmental context with 3D sensing and other interface modalities for gathering intention. With 3D sensing, the finger could make grasp adjustments, avoid collisions with non-target objects, and start to make predictions about workflow. With respect to other interface modalities, I would be interested in EMG instead of flex sensors to capture gesture intention. I would also be interested in a brain-based approach like EEG or fNIRS (perhaps to augment the finger control captured by other means).

Finally, I want (and need) to present this project in an experimental setting in order to evaluate the research goals I laid out. I am currently thinking about a day-long autobiographical study, user experience studies, and field studies (in manufacturing, health care, etc.).

Materials List (cost per item, with quantity in parentheses):

  1. Gloves [prototype and main] – free (N/A) and $14.99 (N/A)
  2. ESP32 dev board – $15.00 (1)
  3. High-torque micro servo – $9.95 (4)
  4. Flex sensor, 4.5 inches – $12.95 (2)
  5. 3D printed SRF – ~$14.00 (N/A)
  6. PCBs – N/A (1)
  7. I2C servo driver board – $14.95 (1)
  8. Analog mux – $5.50 (1)
  9. Velostat – free (N/A)
  10. Electrical tape – N/A (N/A)
  11. Wires, resistors, etc. – free (N/A)