
Machine Learning Based Gesture Detection Watch (ESP8266)

Updated: 2021-08-16 11:49:21

Gesture technology keeps evolving and making people's lives more convenient: it is intuitive, easy to use, and makes your interactions with the gadgets and things around you feel futuristic. So, to follow the trend, we will take the watch we built in the previous tutorial and add some machine learning to see if we can detect the gesture being performed. In an upcoming tutorial, we may use this gesture detection to build some really awesome projects on top of it.


Step 1: Story time!

[Image: Story time.png]


I made a post on my Instagram page about all the new features I would implement in this version of the watch, but I ended up dropping the "micro USB port for charging", the "press and hold to turn the circuit on or off", and the "heart rate monitoring".


So I want to add some ML to the project. It should be easy compared to the electronics; it's just a bunch of code that we have to copy and paste from Stack Overflow. If you want to learn more about implementing ML on embedded systems, check out the two links, TinyML and Gesture Detection. One explains how to use TensorFlow (TinyML) on Arduino, and the other explains how to use basic ML algorithms on Arduino. I referenced a lot from the latter link, which is very easy to follow and works on microcontrollers with very little memory, like the Arduino Nano and Uno.


Step 2: PCB Assembly

[Image: PCB Assembly.png]


I gathered all the SMD components for the project and arranged them in a place where I could easily access them without making a mess. Then all that was left was soldering!


Simply follow the circuit diagram and solder the components onto the PCB accordingly. To make soldering easier, work from the smaller SMD components [resistors, capacitors, regulators] up to the larger through-hole components [MPU6050, OLED]. I also used 3M tape to secure the LiPo cells between the board and the OLED display during the soldering process.


I had a hard time finding the right regulator for the project, so in my past videos I just used the AMS1117 because it was cheaper and easier to find. But to make this project more efficient than previous builds, I have provided two options on the PCB: you can use either the MCP1700 or the LSD3985. In my case, I used the LSD3985 and ignored the MCP1700; you can use either option depending on availability.


Step 3: Program the watch

[Image: Program the watch.png]


To simplify programming, I left some space on the PCB so that you can just plug in the FTDI module and start programming. To program the board, you must first put the ESP8266 into flash mode: while the board is connected to the PC, simply press and hold the button connected to GPIO-0 of the ESP-12E.


To test whether the board is working properly, just upload the code from my previous tutorial [GitHub link] and check that all the functions (such as NTP time, and flick-to-wake to change screens) work. If everything works, you are done with the hardware part.


Step 4: Machine Learning? [Part 1]

[Image: Machine learning.png]


Machine learning sounds fancy and complex, but trust me, some ML algorithms are easier to understand than many non-ML algorithms. With most conventional algorithms, when the computer needs to find the answer to a problem, we have to tell it the exact sequence of steps it must perform to get the result. A simple example is multiplication: if we want the answer to 2 times 5, we can tell the computer to perform repeated additions. Notice that here we are telling the computer exactly what to do to get the answer.
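To make that concrete, here is a minimal Python sketch of multiplication done as repeated addition, where we spell out every step of the procedure for the computer:

# Classical programming: we tell the computer the exact steps to follow.
def multiply(a, b):
    result = 0
    for _ in range(b):  # add 'a' to the running total, 'b' times
        result += a
    return result

print(multiply(2, 5))  # prints 10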


Step 5: Machine Learning? [Part 2]

[Image: Machine Learning5.png]

ML works no differently; we just give the computer a bunch of questions along with the corresponding answers and let it figure out the method or process, so that it can answer new questions without us having to program the process manually. For example, finding an apple in a photo is very easy for humans, but it is very difficult for us to code all the characteristics of an apple in a way the computer understands. It is not impossible, but it is very tedious and difficult. Instead, wouldn't it be great if we could write an algorithm that could learn on its own just by looking at 1000 pictures of apples? Another advantage of ML algorithms is that they might even come up with a way of finding apples in photos that we never thought of. So ML is a very interesting area to explore. I'm really not the best person to explain machine learning and artificial intelligence; I'm just writing about what I've learned. But if you've read this far, you should check out my YouTube channel, and if you haven't subscribed to the channel yet, you probably should right now! Trust me, it's definitely worth it!


Step 6: Classification

[Image: Classification.png]

There are many methods and techniques in machine learning for solving problems, and for our gesture detection I will use a technique called classification. Why classification? After watching a repeated motion for a while, the human eye can seemingly predict the data: if I perform the same movement off screen, you can still guess what it is just by looking at the chart, and the best part is that you can do this for other gestures and movements too. That's because our brain assigns different names to different patterns. Similarly, if we show these data patterns to an ML algorithm enough times, it will try to understand the data and sort the samples into different groups, or, so to speak, classify them into different classes. The next time it sees a similar pattern in the data, it will figure out which motion or gesture it is. That is why we need classification.
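As a toy illustration of classification (not the watch's actual model), the Python sketch below hands a classifier a few labelled examples and lets it sort a new sample into one of the classes. The two-value "motion" features here are made up purely for demonstration:

# Toy classification demo with made-up 2-value "motion" features.
from sklearn.neighbors import KNeighborsClassifier

X = [[0.9, 0.1], [0.8, 0.2],   # samples labelled "swipe_left"
     [0.1, 0.9], [0.2, 0.8]]   # samples labelled "swipe_right"
y = ["swipe_left", "swipe_left", "swipe_right", "swipe_right"]

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)  # learn pattern -> class
print(clf.predict([[0.85, 0.15]]))                   # -> ['swipe_left']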


Step 7: Collect sensor data for training

[Image: Collect sensor data for training.png]

Now that we have a basic understanding of ML, we can start collecting the data used to train the algorithm. The tutorial I followed had a rather clumsy way of collecting data through the serial monitor, since I had to wear the device on my wrist while performing the gestures. To solve this problem, I made the data collection wireless: I used the on-chip flash memory of the ESP8266 and, to make things easier, showed the status of the collected and saved data on the OLED display. If you want to do the same, compile and upload the Data_collection.ino file to your watch. Once the code is uploaded, keep your hand stationary as soon as the device starts up so that it can calibrate the accelerometer and gyroscope. Once that's done, you're ready to start collecting data! Simply press the button connected to GPIO-0 so that the device starts a new recording, then start moving your hand to record the motion. The effort to make data collection wireless is definitely worth it! It's much easier to collect each motion about 25-30 times without any problems (the more samples you have, the better the algorithm performs).


Step 8: Process the data

[Image: Process the data.png]

You can now dump the collected data to the serial monitor: power off the circuit, connect the FTDI, and press the program button again with the serial monitor open on the PC. This will dump all the data to the serial monitor. Then simply copy and paste it into a .txt file. Each recording is separated by the phrase "new function", so you will know which data belongs to which motion. Then use Excel to split the text file into 3 CSV files for the swipe-left gesture, the swipe-right gesture, and the slap gesture. This completes the data collection part. Ideally, this data should not be used directly; it should be processed to remove noise so that the algorithm can make more accurate predictions. But I didn't want to make the whole project more complicated, so I skipped all that and went straight to the training part.
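If you prefer to skip Excel, here is a minimal Python sketch of the same splitting step. It assumes the dump was saved as serial_dump.txt, that the recordings for the three gestures were captured back to back in equal numbers, and that each recording is preceded by the literal marker "new function"; the output file names are my own choice:

# Split the serial dump into one CSV per gesture (assumed file names).
with open("serial_dump.txt") as f:
    recordings = [r.strip() for r in f.read().split("new function") if r.strip()]

names = ["swipe_left.csv", "swipe_right.csv", "slap.csv"]
per_gesture = len(recordings) // len(names)   # e.g. ~25-30 recordings each

for i, name in enumerate(names):
    chunk = recordings[i * per_gesture:(i + 1) * per_gesture]
    with open(name, "w") as out:
        out.write("\n".join(chunk))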


Step 9: Train the model

[Image: Train the model.png]

This is the part where you teach the ML algorithm to detect gestures. I used Python to train the model and convert it to a C file that we can use in the Arduino IDE. You can get this file from my GitHub repository: open the "Classifier.py" file in the "Python training code" folder, where we read the CSV files and train the model on the previously recorded gestures. If your files are named differently, just update the fileName list in the Python script so that it trains the model on the data you collected. Running this code creates a "model.h" file, which contains the model trained to recognize the 3 gestures we captured. If you just want to test the model, paste your "model.h" file into the "Test Gesture Detection" folder and open the Arduino file in that folder. Then simply compile and upload the code to your watch.
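For reference, here is a minimal sketch of what that training step looks like; the Classifier.py in the repository is the authoritative version. It assumes each CSV holds one flattened IMU recording per row, and it uses scikit-learn for the classifier plus the micromlgen package (the approach used in the gesture tutorial linked earlier) to export the trained model as a C header:

# Train a classifier on the gesture CSVs and export it as model.h.
import numpy as np
from sklearn.svm import SVC
from micromlgen import port

fileName = ["swipe_left.csv", "swipe_right.csv", "slap.csv"]  # your CSVs

X_parts, y_parts = [], []
for label, name in enumerate(fileName):
    data = np.loadtxt(name, delimiter=",", ndmin=2)  # one recording per row
    X_parts.append(data)
    y_parts.append(np.full(len(data), label))

X, y = np.vstack(X_parts), np.concatenate(y_parts)
clf = SVC(kernel="linear").fit(X, y)

# port() converts the trained model to plain C for use in the Arduino IDE.
with open("model.h", "w") as f:
    f.write(port(clf, classmap={0: "swipe left", 1: "swipe right", 2: "slap"}))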


Step 10: Model inference


Once the code is uploaded to the microcontroller, our ML algorithm no longer learns; it just uses the pre-trained model we created earlier. This is called inference. Once the code is uploaded successfully, perform any of the gestures and the OLED display should show which gesture you are performing. In my case, the model works about 95% of the time; it just sometimes has a hard time recognizing the right-swipe gesture. It could be that I collected noisy data, or that I didn't perform the movement consistently while collecting it. Either way, 95% is good enough for me, and we can do a lot with this gesture detection!


