This is a demo of machine learning in the browser. It's built on TensorFlow.js and a pre-trained image-recognition model called MobileNet. Using a webcam, we extend the MobileNet model to recognize four new gestures or poses. After training, the model recognizes your pose and triggers the corresponding action.
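For readers curious what this looks like in code, here is a minimal sketch (not the demo's actual source) using the standard `@tensorflow/tfjs` and `@tensorflow-models/mobilenet` packages; the `webcam` element ID is only an illustrative assumption:

```ts
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';

async function setup() {
  // Load the pre-trained MobileNet model.
  const net = await mobilenet.load();

  // Open the webcam; assumes a <video id="webcam"> element exists on the page.
  const video = document.getElementById('webcam') as HTMLVideoElement;
  const webcam = await tf.data.webcam(video);

  // Capture one frame as a tensor and run MobileNet's stock classifier on it.
  const frame = await webcam.capture();
  const predictions = await net.classify(frame);
  console.log(predictions); // e.g. [{ className: '...', probability: 0.87 }, ...]
  frame.dispose();          // free the tensor once we're done with it
}

setup();
```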
The orange meter next to each button shows the confidence score, or degree of certainty. This is the computer saying, for example: "I'm about 70% certain you just made the play gesture." Sometimes the camera sees elements from more than one of your training examples, which is why multiple confidence meters may move at once. In this project, only gestures with a confidence score of 80% or higher actually trigger the corresponding action.
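A rough sketch of that 80% gate is below; the `triggerAction` helper and the gesture names are hypothetical, and `confidences` stands in for whatever per-gesture scores the classifier returns:

```ts
const THRESHOLD = 0.8;

// Hypothetical action dispatcher; in the real demo this would control playback.
function triggerAction(label: string) {
  console.log(`Triggering action for gesture: ${label}`);
}

// `confidences` maps gesture label -> score, e.g. { play: 0.72, pause: 0.21, ... }.
function maybeTrigger(confidences: Record<string, number>) {
  // Pick the gesture the classifier is most certain about.
  const [label, score] = Object.entries(confidences)
    .reduce((best, cur) => (cur[1] > best[1] ? cur : best));

  // Only act when that gesture's meter reads 80% or higher.
  if (score >= THRESHOLD) {
    triggerAction(label); // e.g. 'play' starts playback
  }
}
```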
Ease of use, speed, and privacy are the primary advantages. Web projects are easier to deploy and access than mobile apps, and the browser is free and works on both Windows and Mac. Training the model in the browser, using a technique called transfer learning, is much faster than training in the cloud. And once the session ends, your images are discarded, which keeps them private.
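Transfer learning here typically means reusing MobileNet's internal activations as features and training only a small classifier on top of them. The following is a hedged sketch of that pattern, assuming the common `@tensorflow-models/knn-classifier` approach (the actual demo's implementation may differ):

```ts
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

// The KNN classifier and its examples live only in browser memory,
// so the training images are gone when the page is closed.
const classifier = knnClassifier.create();

// Record one labelled training example, e.g. addExample(net, frame, 'play').
function addExample(net: mobilenet.MobileNet, frame: tf.Tensor3D, label: string) {
  const activation = net.infer(frame, true); // true => return the embedding, not class logits
  classifier.addExample(activation, label);
  activation.dispose();
}

// Classify a new frame: returns the best label plus per-gesture confidences.
async function predict(net: mobilenet.MobileNet, frame: tf.Tensor3D) {
  const activation = net.infer(frame, true);
  const result = await classifier.predictClass(activation);
  activation.dispose();
  return result; // e.g. { label: 'play', confidences: { play: 0.9, ... }, ... }
}
```

Because the classifier only needs a handful of examples per gesture on top of frozen MobileNet features, training feels nearly instantaneous compared with retraining a full network in the cloud.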
Take the tutorial - it's excellent!
Simplest Starting Point
More TensorFlow.js Demos
Learn More w/ 3Blue1Brown Videos: Intro to machine learning & computer vision
Try the YouTube API