This is a bachelor thesis project with the aim of letting the quantity of dance movement in the room control the music volume, using the Kinect for Windows hardware.
You can read all reports here.
|Still in its packaging.|
It's the first week - the first day, even - of my bachelor thesis project. I got my Kinect for Windows unit a couple of days ago and have been setting it up and trying it out, though I haven't gone into any coding yet.
|Time to get hacking!|
This week I will mainly focus on planning the report. Before the week is over I will have written a skeleton report which I can then fill in continuously over the course of the project. I will also have found all the reference material I need for the report. Lastly, I will have created the structure of the API.
As of right now I have a rudimentary skeleton report with some background filled in. I am currently going through previous research in the field of human-computer interaction and gesture-based input.
I have found a few sources which I will use as references in my report. The first is the book "Designing Interactive Systems" by David Benyon. This is where I learned most of the principles I used when designing the user experience of Stoffi. This project should continue to build upon those principles and strive for simplicity and intuitiveness. The second source is a paper by Ernesto L. Andrade and Robert B. Fisher titled "Simulation of Crowd Problems for Computer Vision". I have only read the abstract so far, but I hope it contains some valuable information on how to model crowds. The third source is titled "Human Detection Using Depth Information by Kinect", written by Lu Xia, Chia-Chih Chen and J. K. Aggarwal. I will read it and explore how well the depth map suits my particular problem.
On the wiki I have started to draw up a rudimentary API. I will probably add some way of accessing the raw data, so that callers can piggyback on my device initialization and management if they need to do their own fine-grained analysis.
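To illustrate the shape I have in mind, here is a minimal sketch of such an API. Everything here is hypothetical: the class and method names (`ActivityMeter`, `on_raw_frame`, `process`) and the simple frame-differencing metric are my own placeholders, not the final design, and a real implementation would pull depth frames from the Kinect SDK rather than take them as plain lists.

```python
# Hypothetical API sketch. The core object owns the frame stream,
# computes a "quantity of movement" value from it, and also hands the
# raw frames to any caller who registered for them.

class ActivityMeter:
    """Maps the amount of movement between depth frames to a 0..1 level."""

    def __init__(self, threshold=10):
        self._threshold = threshold   # per-pixel change that counts as motion
        self._previous = None         # last depth frame seen
        self._raw_listeners = []      # callers doing their own analysis

    def on_raw_frame(self, callback):
        """Register a callback that receives every raw depth frame,
        piggybacking on this object's device management."""
        self._raw_listeners.append(callback)

    def process(self, depth_frame):
        """Feed one depth frame (a flat list of depth values) and return
        the fraction of pixels that moved since the previous frame."""
        for callback in self._raw_listeners:
            callback(depth_frame)
        if self._previous is None:
            self._previous = depth_frame
            return 0.0
        moved = sum(
            1 for a, b in zip(self._previous, depth_frame)
            if abs(a - b) > self._threshold
        )
        self._previous = depth_frame
        return moved / len(depth_frame)
```

A caller that just wants the volume level would only call `process` (or, in the real API, subscribe to its output), while a caller with its own analysis needs would additionally register through `on_raw_frame`.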