Here's a two-minute video that explains the project in a more exciting way.

Like, Comment and Subscribe!!

GitHub: https://github.com/Andeyn/KinectAvatar

Abstract

Using the Microsoft Kinect, I developed a game and AI based on my childhood favorite show, “Avatar: The Last Airbender,” in which the player controls the character with hand motions.

The user plays against an AI that generates a combination of moves to defeat the player based on the player’s previous moves.

Depending on the player's hand position, the player can signal the character to fire, shoot, deflect, or jump in order to defend against or defeat the AI.

The Microsoft Kinect

"The Kinect contains three vital pieces that work together to detect your motion and create your physical image on the screen: an RGB color VGA video camera, a depth sensor, and a multi-array microphone. The camera detects the red, green, and blue color components as well as body-type and facial features."

How It Works

This game was developed in Python, drawing on several modules and APIs: the Pygame module to render smooth game visuals, Microsoft's Kinect API together with the PyKinect2 module to track the player's body and joint positions, and TensorFlow's machine learning library to drive the AI.
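As a rough sketch of how these pieces fit together, the loop below initializes Pygame and a PyKinect2 body-tracking runtime, then polls for new body frames each frame. The window size and frame rate are illustrative assumptions, not the project's actual settings.

```python
import pygame
from pykinect2 import PyKinectV2, PyKinectRuntime

pygame.init()
screen = pygame.display.set_mode((960, 540))  # hypothetical window size
clock = pygame.time.Clock()

# Ask the Kinect runtime for body-tracking frames only.
kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Body)

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    if kinect.has_new_body_frame():
        bodies = kinect.get_last_body_frame()
        # ... joint positions from `bodies` drive the character (see below) ...

    pygame.display.flip()
    clock.tick(60)  # cap the game loop at an assumed 60 frames per second

pygame.quit()
```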

The Microsoft Kinect sensor provides the (x, y) coordinates of each of the player's joints. Using those data points, I map joint positions to in-game actions so that the player's body controls the game.
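To make that concrete, here is a minimal sketch of how a tracked body's joint coordinates could be turned into one of the four actions. The specific joints compared and the thresholds are illustrative guesses, not the project's actual gesture tuning.

```python
from pykinect2 import PyKinectV2

def classify_action(body):
    """Map hand positions (relative to other joints) to a game action."""
    joints = body.joints
    rhand = joints[PyKinectV2.JointType_HandRight].Position
    lhand = joints[PyKinectV2.JointType_HandLeft].Position
    head = joints[PyKinectV2.JointType_Head].Position
    shoulder = joints[PyKinectV2.JointType_ShoulderRight].Position
    hip = joints[PyKinectV2.JointType_SpineBase].Position

    if rhand.y > head.y and lhand.y > head.y:  # both hands above the head
        return "jump"
    if rhand.x > shoulder.x + 0.5:             # right hand swept far out
        return "deflect"
    if rhand.y > shoulder.y:                   # right hand at shoulder height
        return "shoot"
    if rhand.y < hip.y:                        # right hand dropped low
        return "fire"
    return None  # no recognized gesture this frame

# Continuing the frame loop from the sketch above, where `bodies` is the
# latest body frame: only tracked bodies carry valid joint data.
for body in bodies.bodies:
    if body.is_tracked:
        action = classify_action(body)
```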

AI Logic

The AI takes in a list of the player's previous moves and analyzes it to calculate its next optimal move. Using foundational concepts of reinforcement learning, the algorithm rewards or punishes the AI depending on the resulting score. For instance, if the AI shoots and misses, it is punished, teaching it that shooting was not beneficial given the player's previous moves. TensorFlow is used to implement this machine learning algorithm.
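The project's exact training setup isn't shown here, but the following is a minimal TensorFlow sketch of the reward/punish idea described above: a tiny policy network reads a one-hot encoding of the player's recent moves, and a reward-weighted update makes successful moves more likely and failed ones less likely. The four move names come from the abstract; the three-move history window, network size, and learning rate are assumptions.

```python
import numpy as np
import tensorflow as tf

MOVES = ["fire", "shoot", "deflect", "jump"]
HISTORY = 3  # assumed: how many recent player moves the AI looks at

# A small policy network: recent player moves in, a score per AI move out.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu",
                          input_shape=(HISTORY * len(MOVES),)),
    tf.keras.layers.Dense(len(MOVES)),  # logits over the AI's possible moves
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

def encode(player_moves):
    """One-hot encode the player's last HISTORY moves into a flat vector."""
    vec = np.zeros((HISTORY, len(MOVES)), dtype=np.float32)
    for i, move in enumerate(player_moves[-HISTORY:]):
        vec[i, MOVES.index(move)] = 1.0
    return vec.reshape(1, -1)

def choose_move(player_moves):
    """Sample the AI's next move from the policy's distribution."""
    logits = model(encode(player_moves))
    return int(tf.random.categorical(logits, num_samples=1)[0, 0])

def learn(player_moves, action, reward):
    # Reward-weighted update: a positive reward makes the chosen move more
    # likely in this situation, a negative reward ("punishment") less likely.
    with tf.GradientTape() as tape:
        logits = model(encode(player_moves))
        log_prob = tf.nn.log_softmax(logits)[0, action]
        loss = -reward * log_prob
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```

After each exchange, the game could call `choose_move(...)` to pick the AI's move and then `learn(...)` with, say, a reward of +1 for a hit and -1 for a miss.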