Covering material / diffusing material (potentially)
Jacket
Weekly Accomplishments
Set up sonar sensor tracking on the Arduino Mega.
Use the sonar sensor to build a prototype MIDI device.
Leap Motion Mechanism
How Leap Motion works, accuracy, general applications
Hardware and software; compatibility with IoT devices
3D print a Leap Motion case
First software prototype for theremin
Motion trace: proximity and height change
Data Transfer and MIDI encode/decode
Run on Arduino / Raspberry Pi
(Optimization) De-noise.
Select a jacket.
Design the jacket.
Image/Video
Changes to our approach
We originally wanted to design a basic circuit and sensors to make the sensing work. William got the sonar working on Wednesday, so as a backup plan and baseline approach we will design a theremin using the sonar sensors and integrate it into the jacket.
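As a rough starting point, the sketch below shows the kind of loop we have in mind for the sonar theremin: time the echo, convert it to a distance, and map the hand's distance to a pitch. The sensor type (HC-SR04 style), pin numbers, and frequency range are assumptions, not our final design.

```cpp
// Minimal sonar-theremin sketch (Arduino). Assumes an HC-SR04-style sensor on
// hypothetical pins TRIG_PIN/ECHO_PIN and a small speaker on SPEAKER_PIN.
const int TRIG_PIN = 7;
const int ECHO_PIN = 8;
const int SPEAKER_PIN = 9;

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  Serial.begin(9600);
}

void loop() {
  // Trigger a 10 us pulse and time the echo.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000UL);  // ~30 ms timeout

  if (duration > 0) {
    long distanceCm = duration / 58;              // rough HC-SR04 conversion
    long clamped = constrain(distanceCm, 5, 50);  // usable hand range
    int freq = map(clamped, 5, 50, 880, 220);     // closer hand = higher pitch
    tone(SPEAKER_PIN, freq);
    Serial.println(distanceCm);                   // for later MIDI mapping / debugging
  } else {
    noTone(SPEAKER_PIN);                          // no hand detected
  }
  delay(30);
}
```

The same distance reading could later be mapped to a MIDI note number instead of a raw frequency once the MIDI encode/decode step is in place.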
In search of better gesture recognition, we are also turning our attention to the Leap Motion. With the Leap Motion we can capture richer and more sensitive gesture information, such as grabbing, trembling, and large vertical movements, within its well-defined sensing range.
Our project is composed of three subprojects: the hand gesture / position glove and two applications for said glove.
For the first part of the final class project, we would like to make smart gloves that help the user send remote signals using hand gestures as commands. While many similar gloves exist as projects, a lot of them do not make use of the wide variety of sensors available on the market. Additionally, some of the projects seem inefficient, being fairly bulky for what they can do. We want to explore how those existing gloves could be improved not just in functionality but also in aesthetics. Thus, we would work on this project not as an invention, but rather as a demonstration/experiment.
1.1 FUNCTIONALITY:
The glove will use pressure, motion, and flex sensors to capture various gestures and map these gesture-controlled commands to the application subprojects detailed below. Communication will be handled with Bluetooth or WiFi depending on the application being implemented.
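As a sketch of what this sensing loop might look like (the pins, thresholds, and the serial link standing in for Bluetooth/WiFi are all placeholders), the glove reads each sensor and emits coarse gesture events for the application side to interpret:

```cpp
// Assumes a flex sensor and a pressure sensor wired as voltage dividers into
// hypothetical analog pins A0/A1; thresholds are placeholders to be replaced
// by calibration values.
const int FLEX_PIN = A0;
const int PRESSURE_PIN = A1;
const int FLEX_BENT_THRESHOLD = 600;       // placeholder ADC value
const int PRESSURE_PRESS_THRESHOLD = 400;  // placeholder ADC value

void setup() {
  Serial.begin(9600);  // stand-in for the Bluetooth/WiFi link
}

void loop() {
  int flex = analogRead(FLEX_PIN);
  int pressure = analogRead(PRESSURE_PIN);

  // Map raw readings to coarse gesture events; the phone/PC side decides
  // what command each event triggers.
  if (flex > FLEX_BENT_THRESHOLD) {
    Serial.println("EVENT:FINGER_CURL");
  }
  if (pressure > PRESSURE_PRESS_THRESHOLD) {
    Serial.println("EVENT:FINGERTIP_PRESS");
  }
  delay(100);
}
```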
1.2 USAGE:
The typical process for using the glove will start with the user placing the glove on their hand and performing calibration. While the details of calibration will need to be worked through as we develop the software, it will most likely take the form of the user positioning their hand in a predefined pose. Depending on the end application, we could either provide the user with a list of predefined guidelines that maps gestures to tasks or allow customization of that mapping. In the latter case, we would have to run experiments on the kinds of gestures that are read most reliably and provide the user with guidelines on how to make customization most effective.
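A rough sketch of what the calibration step could look like, assuming five flex sensors on hypothetical analog pins: while the wearer holds the predefined pose, we average a few seconds of readings to get a per-sensor baseline that later readings are compared against.

```cpp
// Calibration sketch: the wearer holds the predefined pose for a few seconds
// while a baseline is recorded for each flex sensor. Pins and timing are assumptions.
const int NUM_SENSORS = 5;
const int SENSOR_PINS[NUM_SENSORS] = {A0, A1, A2, A3, A4};
int baseline[NUM_SENSORS];

void calibrate(unsigned long durationMs) {
  long sums[NUM_SENSORS] = {0};
  int samples = 0;
  unsigned long start = millis();
  while (millis() - start < durationMs) {
    for (int i = 0; i < NUM_SENSORS; i++) {
      sums[i] += analogRead(SENSOR_PINS[i]);
    }
    samples++;
    delay(10);
  }
  for (int i = 0; i < NUM_SENSORS; i++) {
    baseline[i] = sums[i] / samples;  // average reading in the reference pose
  }
}

void setup() {
  Serial.begin(9600);
  Serial.println("Hold the calibration pose...");
  calibrate(3000);  // 3 seconds of samples
  Serial.println("Calibration done.");
}

void loop() {
  // Later readings are compared against baseline[] to classify gestures.
  for (int i = 0; i < NUM_SENSORS; i++) {
    Serial.print(analogRead(SENSOR_PINS[i]) - baseline[i]);
    Serial.print(i < NUM_SENSORS - 1 ? ',' : '\n');
  }
  delay(100);
}
```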
We now describe, separately, the idea and plan for Curt's supernumerary robotic finger and for Vedant and Shruthi's wearable home automation.
2. SUPERNUMERARY ROBOTIC FINGER
Using the glove developed as hand position input, Curt will construct a supernumerary robotic finger mounted next to the left hand's pinky finger. This digit will mirror the biological thumb's location and joint structure. The robotic finger will map the hand gesture to user intention, which in turn maps to a joint configuration for the finger. By the end of the term, a simple hard-coded heuristic function will be developed to perform this mapping.
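As an illustration of what such a hard-coded heuristic might look like (the pins, curl threshold, and servo angles are placeholders, not a final design), a single coarse hand state is mapped directly to a joint configuration for the robotic finger:

```cpp
// Sketch of the hard-coded heuristic: a coarse hand state read from the glove
// is mapped to joint angles for two SRF servos. All values are illustrative.
#include <Servo.h>

Servo baseJoint;   // proximal joint of the robotic thumb
Servo tipJoint;    // distal joint

const int INDEX_FLEX_PIN = A0;
const int CURL_THRESHOLD = 600;  // placeholder, set from calibration

void setup() {
  baseJoint.attach(5);
  tipJoint.attach(6);
}

void loop() {
  int indexFlex = analogRead(INDEX_FLEX_PIN);

  if (indexFlex > CURL_THRESHOLD) {
    // Heuristic: a curled index finger means "grasp", so the SRF closes
    // to oppose the fingers.
    baseJoint.write(120);
    tipJoint.write(90);
  } else {
    // Open hand: park the SRF out of the way.
    baseJoint.write(30);
    tipJoint.write(10);
  }
  delay(50);
}
```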
Curt is primarily developing this project out of his own curiosity. Specifically, Curt would like to wear the device for a full day to record hand position data, record failures and inconveniences, record interactions with others and their perceptions, and explore contexts of applicability. This in turn allows Curt to further develop a machine learning algorithm, iterative design improvements, HCI insight, and a more general SRF usage taxonomy, respectively.
As for the eventual end-user, this technology could potentially augment any life task; however, I am mostly interested in applying it to manufacturing and construction, where the ability to do self-handovers is an important aspect of the task. An example would be screwing an object overhead while on a ladder. The constraint is that a person should keep three points of contact while holding both the object and the screwdriver. If they need to do all three, they may lean their abdomen onto the ladder, which is less effective than a grasp. Instead, with several robotic fingers (or a robotic limb), the object and screwdriver could be effectively held and manipulated while grasping the ladder. Another example that should relate to this class is soldering, where the part(s), soldering iron, and solder all need to be secured. This could be overcome with an SRF thumb feeding the solder to the tip of the soldering iron while the other hand holds the parts down.
In the SRF literature, Curt is proposing a modest incremental improvement. Wu & Asada worked with flex sensors; however, they were only interested in the first three fingers and did not attempt to model the hand position directly. Ariyanto, Setiawan, & Arifin focused on developing a lower cost version of Wu & Asada's work. One of Leigh and Maes' works uses a Myo EMG sensor, which is not included in this project. They also present work with modular robotic fingers, though they never explore the communication between finger and human in that piece. Finally, Meraz, Sobajima, Aoyama, et al. focus on body schema, where they remap a wearer's existing thumb to the robotic thumb.
3. WEARABLE HOME AUTOMATION:
While these gloves could have many different functions, we would like to focus on using these gloves as part of a smart home, including functions like controlling the TV, smart lights, as well as a Google Home/Alexa (through a speaker). This could especially be useful for people with disabilities (scenarios where voice based communication is not easy), or even just a lazy person.
This project utilizes ideas from computational gesture recognition to create a wearable device. We use sensor systems and microcontrollers in conjunction with a mobile application (preferably Android), along with fundamentals of circuit design. Our project management technique will mix waterfall and iterative models, keeping in mind the available timeline, so as to create a viable solution for home automation.
The idea is to have the glove communicate with an Android application via either Bluetooth or a WiFi module, and the phone in turn can control several other devices. Since applications like Google Assistant have powerful AI technology integrated into them, can we extend those capabilities beyond the phone onto a wearable fabric?
It also brings in the concept of a portable Google Home of sorts, meaning we would not need to install a Google Home in every room. This project is meant to be pragmatic, and the consumer is the general population. It could also be of extra help to people with disabilities.
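A minimal sketch of the glove side of this link, assuming an ESP32 and its Arduino BluetoothSerial library (the pin, threshold, and event string format are placeholders the Android app would need to agree on):

```cpp
// Glove-to-phone link sketch, assuming an ESP32 with BluetoothSerial; the
// Android app pairs with the device name and interprets one-line event strings.
#include "BluetoothSerial.h"

BluetoothSerial SerialBT;
const int FLEX_PIN = 34;          // hypothetical ADC-capable GPIO for one flex sensor
const int CURL_THRESHOLD = 600;   // placeholder threshold

void setup() {
  SerialBT.begin("SmartGlove");   // Bluetooth device name seen by the phone
}

void loop() {
  if (analogRead(FLEX_PIN) > CURL_THRESHOLD) {
    // The phone app maps this event to an assistant command,
    // e.g. toggling a smart light.
    SerialBT.println("GESTURE:INDEX_CURL");
    delay(500);                   // crude debounce so one curl = one event
  }
  delay(20);
}
```

On the phone side, the app would map each received event string to an assistant command or smart-home action.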
4. INSPIRATION
The SRF takes inspiration from Wu & Asada (along with other work using flex sensors to detect finger movement); Meraz, Sobajima, Aoyama, et al. provide the inspiration for using a thumb as the added digit; and Leigh and Maes' work on modular fingers is the inspiration for how Curt constructs the wrist connection. The novelty is bringing these pieces together to form a wearable on which one can run a long-term usability test.
https://ieeexplore.ieee.org/document/6629884
http://maestroglove.com/
SRF:
[1] F. Y. Wu and H. H. Asada, Implicit and Intuitive Grasp Posture Control for Wearable Robotic Fingers: A Data-Driven Method Using Partial Least Squares (2016)
[2] M. Ariyanto, R. Ismail, J. D. Setiawan and Z. Arifin, Development of low cost supernumerary robotic fingers as an assistive device (2017)
[3] Sang-won Leigh and Pattie Maes, Body Integrated Programmable Joints Interface (2016)
[4] S. Leigh, H. Agrawal and P. Maes, Robotic Symbionts: Interweaving Human and Machine Actions (2018)
[5] F. Y. Wu and H. H. Asada, “Hold-and-manipulate” with a single hand being assisted by wearable extra fingers (2015)
[6] Segura Meraz, N., Sobajima, M., Aoyama, T. et al., Modification of body schema by use of extra robotic thumb (2018)
We find significant work in the general area of gesture recognition using a glove; however, those gesture interpretations have been applied in different domains. We did find a project or two where gloves were used for home automation, but the choice of sensors differs from what we plan to use (flex and pressure).
The following sketches were developed for the project pitch and are compiled here due to their similarity. The important aspects to note on the glove are the flex sensors used to capture finger movement, the touch sensors on the fingertips, and an inertial measurement unit to capture hand orientation.
Glove:
SRF:
5. MATERIALS
Electronics:
Microcontroller with Bluetooth and WiFi (e.g., ESP32, Adafruit Flora, Particle Photon, Arduino; possibly an external Bluetooth or WiFi module)
Flex sensors
Resistive pressure/force sensors
Vibration motor
IMU (Gyroscope, accelerometer)
Micro servos (For SRF)
Infrared LED emitter / receiver
Clothing / Physical Materials:
Glove
Wrist brace (For SRF)
3D printed components (case, mounting)
6. SKILLSET LIST:
Curt:
I come into this project with experience soldering, designing circuits, and programming. My talent is mainly in embedded systems programming with C. I will need to master sewing and other soft-materials skills in order to hack a glove in an aesthetically pleasing fashion. Another skill hurdle will be 3D printing; I gained some familiarity as a hobbyist during undergrad but was never formally trained. Finally, I will need to further hone my project management skills given the ambitious scope laid out here.
Vedant:
While I have some experience with programming, this project would require a lot of microcontroller programming, so that would be the main thing I would have to master. The project would also require some knowledge of IoT so I would be looking more into that as well.
Additionally, I have experience using tools in the makerspace (3d printing, laser cutting), so if I have time to focus on the aesthetics, I could use those skills to help me improve the looks. However, it is a given that I will also have to learn soldering well to compact the design, as well as sewing to make the glove look nice.
Shruthi:
I have prior experience with Arduino programming and a decent understanding of C and Python. I can also contribute to integrating the sensor data with a mobile application. I am more comfortable debugging in an IDE than directly on a hardware device, and I need to improve my skill set in this direction. I could also help with the aesthetic aspect of the glove, such as sewing and stitching; however, I do not have significant experience in these areas.
7. TIMELINE
7.1 MILESTONE 1
Technology shown to work (March 25)
Glove
Glove w/ all flex sensors
Position of hand captured, data transmitted to PC/phone for processing / visualization
IMU captures absolute orientation, data transmitted to PC/phone for processing / visualization
Power supply and integration started
SRF
Robotic finger 3D printed
Assistant
Test whether Bluetooth or a WiFi module works best for connecting to the Google Assistant
Look into the Google Assistant API and see what functions/commands can be given using a third-party app
7.2 MILESTONE 2
Technology works in wearable configuration (April 8)
Note that after this point, the project focus splits from being glove-focused to application-focused. Curt will take on the SRF subproject, and Shruthi & Vedant will take on the home assistant subproject.
Glove
Refinement of aesthetics
Power supply and integration complete
SRF
Robotic finger controlled
Brace developed
Assistant
Basic functioning app developed
Required IR codes and assistant commands obtained
7.3 MILESTONE 3
Technology and final wearable fully integrated (April 22)
Glove
Error reduction, final iteration on design
SRF
If time allows, a one-person “user study” to capture longer-term data
Otherwise, refinement of the finger control / design
Assistant
Fully functioning app that receives commands from gloves and sends commands to Google Assistant app to control smart home
Aesthetic gloves that house the electronics, battery and sensors ergonomically
8. FALLBACK PLAN
Glove:
We will be starting with an experiment with one finger to see if one flex sensor is sufficient to capture the gesture information necessary. The answer found may show that one flex sensor is sufficient for the assistant subproject but not the SRF project. Alternatively it could be that flex sensors themselves are not sufficient. To that end, this experiment will be done early in order to integrate the findings into our design.
The minimum we will consider a success is a glove that can output preprogrammed gesture commands loosely coupled to the state of the fingers. For example, curling the index finger will be a command.
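A sketch of this minimal fallback behavior, with the pin and threshold as placeholders to be replaced once the one-finger experiment is done:

```cpp
// Minimal fallback: one flex sensor, one preprogrammed command, emitted only
// on the open-to-curled transition so a single curl fires a single command.
const int FLEX_PIN = A0;
const int CURL_THRESHOLD = 600;   // placeholder
bool wasCurled = false;

void setup() {
  Serial.begin(9600);
}

void loop() {
  bool curled = analogRead(FLEX_PIN) > CURL_THRESHOLD;
  if (curled && !wasCurled) {
    // Rising edge: the finger just curled, so fire the command once.
    Serial.println("CMD:INDEX_CURL");
  }
  wasCurled = curled;
  delay(50);
}
```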
SRF:
Curt will have to learn more about 3D printing to develop the SRF physical components. Additionally, a mounting system needs to be developed. Depending on how time consuming this step is, Curt may need to develop a simple, proof of concept, heuristic for the final demonstration. This could result in non-ideal finger movement that does not track the intent of the wearer, however failure of the algorithm in some cases will be deemed acceptable as this is still an open research area.
Assistant:
We will integrate the sensor signal data and establish communication with a phone via either Bluetooth or WiFi. We would probably begin by controlling a smart light or a single device and eventually try to build a larger integration with the Google Home app. We would first develop a prototype, a rather simple proof of concept, and try to get a single end-to-end communication channel functional. However, since this is still in the experimental stages, we cannot guarantee the accuracy and reliability of its function.
Researchers have consistently demonstrated over the past three or four years that image and facial recognition techniques are highly susceptible to attack. Many are not designed to be robust in such a manner, making them vulnerable. I aim to create temporary tattoos or other articles of clothing that can disguise the wearer from facial or object recognition. Potentially, this tattoo could not only obscure the wearer but force the AI to classify them as a different person or object.
Technical Details
Researchers at Carnegie Mellon showed two years ago that it was possible to create psychedelic-looking glasses that could massively impact how that person's face was classified by AI [1]. Since then, a number of studies have had similar success attacking classifiers using a variety of techniques. An open source project dedicated to this idea, CVDazzle, has produced many “anti-faces” to conceal the wearer. However, both Carnegie Mellon's and CVDazzle's techniques are relatively obvious to a human observer. I aim to create a temporary tattoo that, while looking “normal,” has slight, humanly undetectable modifications that obfuscate the user's face or body to image detection algorithms. This has been done by [2], although solely on a pixel-by-pixel basis and not in the real world.
[3] Turning a banana into a toaster
A team at Google found that a small patch, applied near an object, could disrupt image classifiers. Many of these techniques, however, relied on access to the internal workings of the classification algorithm. In [4], a team from MIT showed a “black box” approach that attacked Google's Cloud Vision. With an evolutionary algorithm, they were able to reduce the time taken to obfuscate an image by multiple orders of magnitude. Using a combination of the aforementioned techniques, I would aim to create patterns for temporary tattoos. Ultimately, the goal would be a tattoo that is innocuous to humans yet potent against a classification algorithm.
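To make the black-box idea concrete, the toy sketch below shows the basic query-and-mutate loop such an evolutionary attack relies on; classifierScore() is a stand-in for a real black-box API call (e.g., a cloud vision query), so this is illustrative only, not a working attack.

```cpp
// Toy (1+1)-style black-box attack loop: mutate a patch, query the classifier,
// keep the mutation if it lowers confidence in the true label.
#include <cstdlib>
#include <iostream>
#include <vector>

const int PATCH_SIZE = 16 * 16 * 3;  // small RGB patch, flattened

// Hypothetical black-box: returns the classifier's confidence in the *true*
// label for an image carrying this patch. The attack tries to minimize it.
double classifierScore(const std::vector<int>& patch) {
  double s = 0.0;
  for (int v : patch) s += v;            // placeholder scoring, not a real model
  return s / (255.0 * patch.size());
}

int main() {
  std::vector<int> patch(PATCH_SIZE, 128);      // start from a gray patch
  double best = classifierScore(patch);

  for (int iter = 0; iter < 10000; ++iter) {
    std::vector<int> candidate = patch;
    // Mutate a handful of random pixels (the "evolution" step).
    for (int k = 0; k < 8; ++k) {
      int idx = std::rand() % PATCH_SIZE;
      candidate[idx] = std::rand() % 256;
    }
    double score = classifierScore(candidate);  // one black-box query
    if (score < best) {                         // keep mutations that hurt the true label
      best = score;
      patch = candidate;
    }
  }
  std::cout << "final true-label confidence: " << best << "\n";
  return 0;
}
```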
An example: what you see, what Google’s Cloud Vision or FaceID sees
Actually fabricating the tattoos would be trivial: tattoo paper is cheap and widely available for use with color printers. Likely the most challenging aspect of the project would be translating simulated pattern success into a real-world demonstration where lighting and shadows are inconsistent.
Potentially, other objects and fabrics could be demonstrated, but their fabrication is more challenging.
Goals
The purpose of this project is more experimental. Attempts will be made to make these tattoos look normal, but the main purpose will be to successfully attack commercial face recognition technology.
Applications
The implications of this technology, if successful, are widespread. By simply concealing a wearer’s face, security technology at airports and face-ID technology in large cities like London or New York could be massively compromised for little investment.
If fooling a classifier into recognizing you as a different person is also possible, a whole host of new vulnerabilities are exposed. For example, if Apple’s face ID can be exploited, phones and iPads would instantly be vulnerable.
For the final project, I would like to make a synchronization suit that could assist people who need to learn poses and moves. In general, it provides the wearer with feedback on which part of the body to move and where to move it. It could therefore be especially helpful for performers who need to synchronize their moves.
A good example of how the synchronization suit could be useful is learning a dance. As a dancer, I have been leading practices and teaching dance routines for some time, and it takes a lot of my energy and time. If we had a wearable that gave everyone on a team real-time feedback on how to move their body, it could improve the whole team's efficiency and save a lot of time. The idea was first inspired by the yoga learning helper mentioned in class discussion.
Functions
Following are the two main function modes of the suit I can imagine so far.
Real Time Feedback – Each sync suit can be set as a teacher or a learner. When set as a learner, it receives signals from a suit set as a teacher and tells the wearer whether his or her moves are synchronized with the teacher's. This mode works when there is more than one suit.
Pre-Record Moves – When working alone, the suit allows the wearer to record a sequence of moves (e.g., a dance or a fitness pattern) which can later be compared with the moves of another wearer (either by sending the recording to another suit or by giving the suit to another wearer). At the end of the comparison, the suit reports a sync rate to the user.
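As a rough sketch of the pre-record mode (the joint pins, frame count, and tolerance are assumptions, and the frame buffer assumes a board with enough RAM, e.g. an Arduino Mega):

```cpp
// Pre-record mode sketch: store a short sequence of joint readings, then
// compare a second attempt against it and report a sync rate.
const int NUM_JOINTS = 6;
const int NUM_FRAMES = 100;                   // ~10 s at 10 Hz; needs ~1.2 KB of RAM
const int JOINT_PINS[NUM_JOINTS] = {A0, A1, A2, A3, A4, A5};
const int TOLERANCE = 40;                     // allowed ADC difference per joint

int recording[NUM_FRAMES][NUM_JOINTS];

void captureFrame(int frame[]) {
  for (int j = 0; j < NUM_JOINTS; j++) frame[j] = analogRead(JOINT_PINS[j]);
}

void setup() {
  Serial.begin(9600);

  Serial.println("Recording teacher sequence...");
  for (int f = 0; f < NUM_FRAMES; f++) { captureFrame(recording[f]); delay(100); }

  Serial.println("Now repeat the sequence...");
  long matched = 0;
  for (int f = 0; f < NUM_FRAMES; f++) {
    int live[NUM_JOINTS];
    captureFrame(live);
    bool inSync = true;
    for (int j = 0; j < NUM_JOINTS; j++) {
      if (abs(live[j] - recording[f][j]) > TOLERANCE) inSync = false;
    }
    if (inSync) matched++;
    delay(100);
  }
  Serial.print("Sync rate: ");
  Serial.print(100.0 * matched / NUM_FRAMES);
  Serial.println("%");
}

void loop() {}
```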
Implementation
Here are some choices I would have to make or difficulties I would face with making the suit.
How do we collect data on the teacher's moves? – Sensors that capture position change would be very helpful here. To trace the teacher's whole-body movement as clearly as possible, sensors would be placed at each key joint of the body (e.g., wrist, elbow, shoulder, chest, hip, knee).
In what ways should feedback be provided? – Since the wearer could potentially teach or learn many poses and moves, the feedback should be simple and easy to perceive in any pose. The idea so far is to run lighted strips along the sides of the arms and legs as the real-time feedback channel. Whether a body part is in the correct position would be shown in binary colors (e.g., green for yes and red for no); because the strips run along the sides of the whole body, they cover as much area as they can and should be easy to see (see the sketch after this list).
How do we tell the wearer where to move? – This will probably be the hardest part of making the suit. Should the guidance be given visually, or transmitted physically through a light force on the body? This is an important choice that should be considered and discussed further.
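For the binary-color feedback described above, a minimal sketch might compare each joint reading against the teacher's value and drive a green/red indicator per joint; the pins, target values, and tolerance here are placeholders (in the real mode the targets would arrive from the teacher suit rather than being hard-coded).

```cpp
// Per-joint binary feedback sketch: green when the joint matches the target
// pose within tolerance, red when the wearer needs to move that joint.
const int NUM_JOINTS = 3;
const int JOINT_PINS[NUM_JOINTS] = {A0, A1, A2};
const int GREEN_PINS[NUM_JOINTS] = {2, 4, 6};
const int RED_PINS[NUM_JOINTS]   = {3, 5, 7};
const int TARGET[NUM_JOINTS]     = {512, 300, 700};  // placeholder teacher pose
const int TOLERANCE = 40;

void setup() {
  for (int j = 0; j < NUM_JOINTS; j++) {
    pinMode(GREEN_PINS[j], OUTPUT);
    pinMode(RED_PINS[j], OUTPUT);
  }
}

void loop() {
  for (int j = 0; j < NUM_JOINTS; j++) {
    bool inPosition = abs(analogRead(JOINT_PINS[j]) - TARGET[j]) <= TOLERANCE;
    digitalWrite(GREEN_PINS[j], inPosition ? HIGH : LOW);  // green: synchronized
    digitalWrite(RED_PINS[j],   inPosition ? LOW : HIGH);  // red: move this joint
  }
  delay(50);
}
```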
Other Thoughts
Since colors are displayed based on whether moves are synchronized, this could also be used in performances to create a different visual experience.
In the future, choreographies or move patterns could perhaps be designed and edited in software and imported into the suit.
For the final class project, I would like to make smart gloves that help the user send remote signals using hand gestures as commands. While these gloves could have many functions, I would like to focus on using them as part of a smart home, including functions like controlling the TV and smart lights, as well as a Google Home/Alexa (through a speaker). This could be especially useful for people with disabilities, or even just a very lazy person.
While many similar gloves exist as projects, a lot of them do not make use of the wide variety of sensors available on the market. Additionally, some of the projects seem inefficient, being fairly bulky for what they can do. I want to explore how those existing gloves could be improved not just in functionality but also in aesthetics. Thus, I would work on this project not as an invention, but rather as a demonstration/experiment.
The functionality of the glove would include things like a force sensitive/capacitive touch option where a user could clap to turn on/off lights, as well as gesture-controlled commands that can control TV functions.
Possible 3d printed housings/case for the electronics
Skills/concepts
While I have some experience with programming, this project would require a lot of microcontroller programming, so that would be the main thing I would have to master. The project would also require some knowledge of IoT so I would be looking more into that as well.
Additionally, I have experience using tools in the makerspace (3d printing, laser cutting), so if I have time to focus on the aesthetics, I could use those skills to help me improve the looks. However, it is a given that I will also have to learn soldering well to compact my design, as well as sewing to make the glove look nice.
Timeline
March 16: The sensor inputs can be used to output various signals to the devices
April 4: The sensors are integrated into a glove
April 20: The glove is a lot sleeker than the first prototype, and improvement in error reduction
I propose to make a supernumerary robotic finger (SRF) with a position detection glove. This will consist of a robotic thumb mounted near the pinky, mirroring the biological thumb's location and movement. The thumb will be constructed with 3D printed mounting brackets and high torque micro servos. It will be mounted to a modified wrist guard, as used in skateboarding, in order to distribute its weight onto the wrist. The wrist guard must not impede normal hand function. Finally, a series of flex sensors mounted onto a glove will detect the joint state of the wearer's fingers. This, along with an inertial measurement unit, will report the hand pose information to an algorithm that controls the SRF. For this project the algorithm will be constrained to a simple hard-coded heuristic approach, but future work will use data-driven AI to learn the wearer's intent from their hand position.
Target Audience
I am primarily developing this project out of my own curiosity. Specifically, I would like to wear the device for a full day to record hand position data, record failures and inconveniences, record interactions with others and their perceptions, and explore contexts of applicability. This in turn allows me to further develop a machine learning algorithm, iterative design improvements, HCI insight, and a more general SRF usage taxonomy, respectively.
As for the eventual end-user, this technology could potentially augment any life task; however, I am mostly interested in applying it to manufacturing and construction, where the ability to do self-handovers is an important aspect of the task. An example would be screwing an object overhead while on a ladder. The constraint is that a person should keep three points of contact while holding both the object and the screwdriver. If they need to do all three, they may lean their abdomen onto the ladder, which is less effective than a grasp. Instead, with several robotic fingers (or a robotic limb), the object and screwdriver could be effectively held and manipulated while grasping the ladder. Another example that should relate to this class is soldering, where the part(s), soldering iron, and solder all need to be secured. This could be overcome with an SRF thumb feeding the solder to the tip of the soldering iron while the other hand holds the parts down.
My Motivation
Academically, I am motivated by the research opportunities in this space; there are many unanswered questions because the technology has not yet been popularized. My deeper interest, though, stems from a philosophy I adhere to: humans are not the end state of evolution. I am excited to construct technology that affects not only my physical appearance but my physical capacities.
Inspiration and Novel Proposal
As far as I am aware, I am providing a modest incremental improvement on the SRF literature. Wu & Asada worked with flex sensors; however, they were only interested in the first three fingers and did not attempt to model the hand position directly [1,5]. Ariyanto, Setiawan, & Arifin focused on developing a lower cost version of Wu & Asada's work [2]. One of Leigh and Maes' works uses a Myo EMG sensor, which is not included in this project [3]. They also present work with modular robotic fingers, though they never explore the communication between finger and human in that piece [4]. Finally, Meraz, Sobajima, Aoyama, et al. focus on body schema, where they remap a wearer's existing thumb to the robotic thumb [6].
My project will take inspiration from Wu & Asada (along with other work using flex sensors to detect finger movement); Meraz, Sobajima, Aoyama, et al. provide the inspiration for using a thumb as the added digit; and Leigh and Maes' work on modular fingers is the inspiration for how I construct the wrist connection. The novelty is bringing these pieces together into a wearable on which I can run a long-term usability test with myself as the subject.
Figure 1: Supernumerary robotic fingers found in literature.
Design
Figure 2 displays my thoughts on how the glove will be laid out. My first experiment will be whether one flex sensor is sufficient to capture the joint position of a finger. The IMU is mounted on the glove instead of the wrist in order to capture the fingers' absolute orientation more accurately. Figure 3 shows the mounting location of the finger along with sketches of the robotic finger itself. This is inspired by Wu & Asada's work, though I am going to build mine with micro servos instead of at standard scale. These sketches are subject to change as I construct the glove.
Figure 2: Glove sketch with alternate approaches.
Figure 3: Robotic finger sketch (top) fusion 3D image, (bottom) top and bottom view of finger mounting position.
Materials
Electronics
ESP32 – Microcontroller w/ Bluetooth and Wi-Fi
Flex sensors
Micro servo motors
Resistive pressure sensor
Vibration motor
IMU
Clothing
Glove (Light weight, breathable)
Snowboarding / skateboarding arm brace
My Skills
Have experience in:
Soldering
Circuit design
Programming
Need to master:
Sewing / other soft materials knowledge and skills
Project management
3D printing
Timeline
Milestone 0 – Initial Prototype (February 24)
Glove w/ several flex sensors.
Determine either approach 1 or 2
Detect both span between fingers and flex of a finger
Power supply not a concern
Milestone 1 – Technology shown to work (March 16)
Glove w/ flex sensors
Position of hand captured, data transmitted to PC for processing / visualization
IMU captures absolute orientation, data transmitted to PC for processing / visualization
Power supply and integration started
(If time) Robotic finger 3D printed
Milestone 2 – Technology works in wearable configuration (April 6)
Milestone 3 – Technology and final wearable fully integrated (April 20)
(If time) “user” study
Potential Challenges
The first major challenge I have accounted for is that one flex sensor may not be enough to determine the joint state of a finger. Thus, as I outline later, my first experiment is to see whether this is the case. The next is that I may not have time to develop all components. This would be the worst case, but if it does happen, I will prioritize the sensor glove over the fingers. While I have to learn more about 3D printing, I am not a novice, so given access and time I should be able to print several parts for a robotic finger. Finally, the algorithm converting pose to finger position will most likely be a simple heuristic based on gesture. This could mean inadvertent triggering of a movement even when that was not the intent. While that is a failure of the algorithm, it is the aspect I am least concerned with for the term.
First Step
I have already purchased (though as of writing this still in shipping) the parts I need for my first hardware experiment. I need to figure out whether I can determine which of three finger joints is bending using one 4.5” flex sensor. If this is successful, then I will use 5 of these to capture pose information. Otherwise I will need to purchase 10 2” flex sensors to detect the finger position in one direction. As for the spread between fingers, I plan on using 1” flex sensors but for the initial prototype 2” flex sensors will work. I also plan on using an IMU to determine absolute orientation with respect to the Earth. All of this will be mounted on a relatively thin winter glove that I have purchased.
I will need to sew on the flex sensors for the one finger and one spread sense, along with some mounting for the IMU. This will be connected to an Arduino Uno clone that I have, to report the data back to my PC for visualization. The most challenging portion of this prototype is developing the code to determine position, followed by visualizing the hand state.
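A first cut of the data-reporting sketch could be as simple as the following, with the pin and baud rate as assumptions and IMU columns to be added once that part arrives; the PC side just parses the CSV lines and plots them.

```cpp
// First-prototype sketch: stream the single flex sensor to the PC as one CSV
// line per sample for visualization.
const int FLEX_PIN = A0;

void setup() {
  Serial.begin(115200);
  Serial.println("millis,flex_raw");  // CSV header for the PC-side plotter
}

void loop() {
  Serial.print(millis());
  Serial.print(',');
  Serial.println(analogRead(FLEX_PIN));
  delay(20);  // ~50 Hz, plenty for hand motion
}
```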
Inspiration References
[1] F. Y. Wu and H. H. Asada, Implicit and Intuitive Grasp Posture Control for Wearable Robotic Fingers: A Data-Driven Method Using Partial Least Squares, IEEE Transactions on Robotics, vol. 32, no. 1, pp. 176-186, Feb. 2016. doi: 10.1109/TRO.2015.2506731
[2] M. Ariyanto, R. Ismail, J. D. Setiawan and Z. Arifin, Development of low cost supernumerary robotic fingers as an assistive device, 2017 4th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI), Yogyakarta, 2017, pp. 1-6. doi: 10.1109/EECSI.2017.8239172
[3] Sang-won Leigh and Pattie Maes. 2016. Body Integrated Programmable Joints Interface. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, 6053-6057. DOI: 10.1145/2858036.2858538
[4] S. Leigh, H. Agrawal and P. Maes, Robotic Symbionts: Interweaving Human and Machine Actions, IEEE Pervasive Computing, vol. 17, no. 2, pp. 34-43, Apr.-Jun. 2018. doi: 10.1109/MPRV.2018.022511241
[5] F. Y. Wu and H. H. Asada, “Hold-and-manipulate” with a single hand being assisted by wearable extra fingers, 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, 2015, pp. 6205-6212. doi: 10.1109/ICRA.2015.7140070
[6] Segura Meraz, N., Sobajima, M., Aoyama, T. et al. Modification of body schema by use of extra robotic thumb, Robomech J (2018) 5: 3. doi: 10.1186/s40648-018-0100-3