Monday, May 28, 2012

Project Dance Controller: Report 1


This post is part of Project Dance Controller.

It's a bachelor thesis project with the aim of letting the quantity of dance movements in the room control the volume level using the Kinect for Windows hardware.

You can read all reports here.

Last week

This last week I have been going through some previous research in the area of crowd analysis and human detection. I think that the best approach for me is to use the depth image that I get from the IR sensor. This makes it easier to detect humans against cluttered backgrounds or when occlusion occurs.

The skeleton tracking only allows for tracking two persons at a time and puts some serious constraints on poses and positions, making it unusable for my scenario.

I read a research report by Ikemura and Fujiyoshi where they used a depth image from a time-of-flight (TOF) camera to detect humans in real time. They used a window-based approach and were able to get the detection calculations down to 100 ms on an Intel 3 GHz CPU. Their approach was not very robust against certain poses and positions, though, and it wasn't able to handle occlusions very well.
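Just to make the idea concrete for myself - this is my own toy sketch in Python, not their method or code, and looks_like_human is a made-up stand-in for their trained classifier - a window-based scan over a depth array could look roughly like this:

import numpy as np

def looks_like_human(window):
    # Made-up stand-in for a classifier trained on depth features.
    # Here it just returns a dummy score based on depth variation.
    return float(np.std(window) > 0.1)

def scan_depth_image(depth, win_h=128, win_w=64, stride=16):
    # Slide a fixed-size window over the depth image and keep the
    # windows that the classifier flags as human-like.
    detections = []
    h, w = depth.shape
    for y in range(0, h - win_h + 1, stride):
        for x in range(0, w - win_w + 1, stride):
            window = depth[y:y + win_h, x:x + win_w]
            if looks_like_human(window) > 0.5:
                detections.append((x, y, win_w, win_h))
    return detections

# A fake 480x640 depth frame in metres, standing in for real sensor data.
frame = np.random.rand(480, 640).astype(np.float32) * 4.0
print(len(scan_depth_image(frame)), "candidate windows")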

The work of Xia, Chen and Aggarwal from the University of Texas presented a different approach to detecting humans from depth images. They used the Kinect for Xbox 360 device for retrieving the depth array and detected humans in three steps. First they narrowed down all areas where a human head may be using 2D chamfer distance matching. They then confirmed all heads by fitting a 3D model onto the area. Lastly, they expanded the region from the head to include the rest of the visible body. This method improves on the window-based method by Ikemura and Fujiyoshi but it still suffers from some limitations. It won't work very well if the person is wearing a hat or if part of the head is hidden.
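Again just as a thought experiment, here is a rough Python sketch of stage one only, with a made-up contour template and thresholds - nothing from their paper - to show roughly what 2D chamfer matching against a depth edge map means:

import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_head_candidates(depth, template_edges, threshold=2.0):
    # Crude edge map: large depth discontinuities.
    gy, gx = np.gradient(depth)
    edges = np.hypot(gx, gy) > 0.15
    # Distance from every pixel to its nearest edge pixel.
    dist = distance_transform_edt(~edges)
    h, w = depth.shape
    candidates = []
    for y in range(0, h - 40, 8):
        for x in range(0, w - 40, 8):
            # Average distance from the template's contour points to the
            # nearest image edge; a small value means a head-like contour.
            score = np.mean([dist[y + dy, x + dx] for dy, dx in template_edges])
            if score < threshold:
                candidates.append((x, y, score))
    return candidates

# A made-up semicircular "top of head" contour (pixel offsets).
template = [(int(20 - 18 * np.cos(a)), int(20 + 18 * np.sin(a)))
            for a in np.linspace(0, np.pi, 24)]
frame = np.random.rand(480, 640).astype(np.float32) * 4.0
print(len(chamfer_head_candidates(frame, template)), "head candidates")

The 3D model fit and the region growing over the rest of the body would then come on top of something like this.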

In addition to reading up on some previous research I was also able to install my newly arrived Kinect for Windows device and write a skeleton report which I will be uploading to the repository on Wednesday when I get back home again.

Challenges

I anticipated that it would be difficult to pick out good reference material and I still think I need to find more. But the four reports I've read so far have given me some very important insight into just what kinds of methods may work and which won't.

This week

This week I will start to implement a skeleton library and document the API calls. This will help me get the API structure ready so I can see early on what works and what does not. The most basic thing my API should provide is some rudimentary detection of whether or not a Kinect device is connected to the computer. It should also provide some raw data from the depth sensor (possibly all raw data) so that the caller can get fine-grained control if needed. That way, if someone wants to use my dance quantifier in their own Kinect code, they can piggy-back on my initialization and sensor detection.
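To give a feel for the shape I have in mind, here is a very rough sketch of the surface. All names are hypothetical and I'm writing it in Python here just for brevity - this is not the actual library code:

class DanceQuantifier:
    # Hypothetical sketch: device detection, raw depth access and a
    # single dance-quantity value.

    def __init__(self):
        self._device = None
        self._raw_listeners = []

    def is_device_connected(self):
        # Rudimentary check for whether a Kinect device is attached.
        return self._device is not None

    def on_raw_depth_frame(self, callback):
        # Let callers piggy-back on my initialization and get the raw
        # depth frames for their own fine-grained analysis.
        self._raw_listeners.append(callback)

    def dance_quantity(self):
        # The main output: a value in [0, 1] describing how much
        # movement is going on in the room right now.
        raise NotImplementedError("the fun part comes later")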

Challenges

I think the hardest part this week will be to find a good balance in the API between full coverage and simplicity. I need to make it very easy to use but also provide full control of the device for advanced developers who want to make their own Kinect code work with mine.

Monday, May 21, 2012

Project Dance Controller: Report 0

This post is part of Project Dance Controller.

It's a bachelor thesis project with the aim of letting the quantity of dance movements in the room control the volume level using the Kinect for Windows hardware.

You can read all reports here.

This week

Still in its packaging.

It's the first week - the first day, even - of my bachelor thesis project. I got my Kinect for Windows unit a couple of days ago and have been setting it up and trying it out. I haven't gone into any coding yet, though.

Time to get hacking!

This week I will mainly focus on planning the report. Before the week is over I will have written down a skeleton report which I can then continuously fill in during the course of the project. I will also have found all the reference material I need for the report. Lastly, I will have created the structure of the API.

As of right now I have a rudimentary skeleton report with some background filled in. I am currently going through some previous research in the field of human-computer interaction and gesture based input.

I have found a few sources which I will use as references in my report. The first is the book "Designing Interactive Systems" by David Benyon. This is where I learned most of the principles I used when designing the user experience of Stoffi. This project should continue to build upon those principles and strive for simplicity and intuitiveness. The second source is a paper by Ernesto L. Andrade and Robert B. Fisher titled "Simulation of Crowd Problems for Computer Vision". So far I have only read the abstract, but I hope it contains some valuable information on how to deal with modelling crowds. The third source is titled "Human Detection Using Depth Information by Kinect", written by Lu Xia, Chia-Chih Chen and J. K. Aggarwal. I will read it and explore the suitability of the depth map for solving my particular problem.

On the wiki I have started to draw up a rudimentary API. I will probably add some way of accessing the raw data to allow the caller to piggy-back on my device initialization and management if they need their own fine-grained analysis.
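Roughly what I mean by piggy-backing, seen from the caller's side (a toy Python sketch with a fake stand-in for the planned library - none of these names are final):

import numpy as np

class FakeQuantifier:
    # Stand-in for the planned library, just to show the calling pattern.
    def __init__(self):
        self._listeners = []
    def on_raw_depth_frame(self, callback):
        self._listeners.append(callback)
    def _emit(self, frame):
        # The real library would call this once per frame from the sensor.
        for cb in self._listeners:
            cb(frame)

def my_analysis(depth_frame):
    # The caller's own fine-grained analysis on the raw depth data.
    print("mean distance to scene (m):", round(float(np.mean(depth_frame)), 2))

q = FakeQuantifier()
q.on_raw_depth_frame(my_analysis)
q._emit(np.random.rand(480, 640) * 4.0)  # simulate one incoming frame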

Challenges

The biggest challenge will be to pick out the best reference material. There's tonnes of stuff out there and I need to find previous work which is as similar as possible to my project.

Sunday, May 13, 2012

Beta update: bug fixes and one intelligent queue

It's time for yet another beta upgrade. Stoffi got heavily tested by our dear beta tester Hylton a couple of days ago, and I've finally been able to fix all the bugs he found. A whopping 35 bugs have been squashed, along with some minor tweaks to make the interface a bit more consistent and responsive (especially in the new cloud department).

I know that this is beta, but when I got this awesome feature request I just couldn't resist. I had to implement it.

Now, when you queue stuff, Stoffi will know in which order you selected the tracks and put them in the queue in that order. So if you hold down the Shift key and select four tracks in random order, then right-click and select Queue, the tracks will end up in the queue with the track you selected first in the top spot and the track you selected last in the last spot, no matter the order of the tracks in the actual list. Pretty awesome.
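In code terms the queue simply preserves selection order instead of list order; something like this (an illustration only, not the actual Stoffi code):

# The track list as it appears on screen:
tracklist = ["Track A", "Track B", "Track C", "Track D", "Track E"]

# Suppose you selected D, then B, then E, then A:
selection_order = ["Track D", "Track B", "Track E", "Track A"]

# The old behaviour would queue them in list order:
old_queue = [t for t in tracklist if t in selection_order]

# The new behaviour keeps the order you clicked them in:
new_queue = list(selection_order)

print(old_queue)  # ['Track A', 'Track B', 'Track D', 'Track E']
print(new_queue)  # ['Track D', 'Track B', 'Track E', 'Track A']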

This is the fourth beta of our upcoming new version of Stoffi, and I will continue to hope (as I always do) that it will be the last before we can go stable. But if you find any bugs or problems, report them and I'll fix them. Stable should mean something. :)

Cheers!

(Oh, and as always, Stoffi will upgrade itself automatically if you're on the beta channel, otherwise you can get Stoffi Beta here.)

Thursday, May 10, 2012

Project Dance Controller: Introduction

It's another great moment for Stoffi. I have once again been able to combine my love for this music player with my studies. This semester is the last one before I get my bachelor degree in computer science here at Uppsala University, and my thesis will be to bring some really kick-ass awesomeness to Stoffi: I will make it increase the volume the more you dance for it!

My thesis is being done under the department of human-computer interaction and I will use the Kinect for Windows hardware from Microsoft. Unsurprisingly, the SDK is only available for Windows, but there is some third-party stuff out there which means I could bring this to more platforms in the future if there's time and a desire for it. Always good to know.

The work will be going on for about ten weeks and I am starting on May 21st with two weeks of structuring and planning. This will be followed by three weeks of designing and implementing a library. If there's time I will integrate this into Stoffi, but I am giving that a lower priority since the main goal is to create the actual cool algorithms which will analyse your movements. I can always integrate it with Stoffi later if I need to.

This means that I will extend our existing plugin platform to allow for a new class of plugins: manipulation plugins. I figure stuff like this (volume manipulation), pitch shifting, auto-tune and so on could go into that category.
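To make that a bit more concrete, here is a rough sketch of what a manipulation plugin could look like (hypothetical Python, not the real plugin API, and all names are made up):

class ManipulationPlugin:
    # Hypothetical base class: a plugin that manipulates playback
    # rather than, say, visualizing it or filtering the library.
    def process(self, audio_state):
        # Return an updated audio state (volume, pitch, ...).
        raise NotImplementedError

class DanceVolumePlugin(ManipulationPlugin):
    # The dance controller as a manipulation plugin: map the amount
    # of movement in the room (0..1) to the player volume (0..100).
    def __init__(self, quantifier):
        self.quantifier = quantifier  # e.g. the Kinect-based library

    def process(self, audio_state):
        audio_state["volume"] = int(100 * self.quantifier())
        return audio_state

# Quick test with a fake quantifier that always reports 40 % movement:
plugin = DanceVolumePlugin(lambda: 0.4)
print(plugin.process({"volume": 75, "pitch": 1.0}))  # volume becomes 40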

Just as with our Hackathon in 2011, there will be a report each week where I will summarize the work done and any challenges I encountered, and give a prognosis of how the coming week will go.

Sunday, May 6, 2012

Internet radio streaming

Just got a very cool new feature implemented in Stoffi: Internet radio streaming.

This feature was requested about a year ago, but I've finally found the time to get around to implementing it. The result is pretty neat.

First of all there's a new navigation item on the left-hand side called "Radio" where all your radio stations will appear. We plan on adding some preloaded stations later on, but for now you'll have to add each station manually. You do this by clicking "Add" and then choosing "Radio station".



A dialog will appear where you can enter the URL for the radio station. Stoffi will take either the direct URL to the stream, or the URL to a playlist (such as .pls or .m3u) which contains the streaming URL. If you keep the dialog open it will show the title of the radio station once it has loaded. If you close it by clicking Add before the station has fully loaded (it may take a while depending on your Internet connection), the mouse cursor will change to indicate that it's loading, and once it finishes the new station (or stations, if the playlist contains several) will appear in the list.
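Behind the scenes the playlist case mostly boils down to fetching the file and pulling the stream URLs out of it; roughly like this (a simplified Python sketch, not the actual Stoffi code):

def extract_stream_urls(playlist_text, kind):
    # Pull stream URLs out of the body of a .pls or .m3u playlist.
    urls = []
    for line in playlist_text.splitlines():
        line = line.strip()
        if kind == "pls" and line.lower().startswith("file"):
            # .pls entries look like: File1=http://example.com/stream
            urls.append(line.split("=", 1)[1])
        elif kind == "m3u" and line and not line.startswith("#"):
            # In .m3u every non-comment, non-empty line is a URL (or path).
            urls.append(line)
    return urls

pls = "[playlist]\nFile1=http://example.com/stream1\nTitle1=Example Radio\n"
m3u = "#EXTM3U\n#EXTINF:-1,Example Radio\nhttp://example.com/stream1\n"
print(extract_stream_urls(pls, "pls"))
print(extract_stream_urls(m3u, "m3u"))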

This is pretty basic stuff - what you'd expect from a music player able to stream Internet radio. But things get pretty cool when you combine this with the awesome power of Stoffi. One example is that you can create a mixed queue with local files, YouTube tracks and a radio station.



This should arrive in the alpha channel pretty soon. I will probably release it some time after I release some beta stuff.

In the meantime you can help me by suggesting some radio stations we should ship with Stoffi. The more stations we have the better. What station do you want to see preloaded in Stoffi?

Enjoy!