MS Kinect as occupancy sensor...
Posted: Sunday 07 February 2016 12:08
Am pondering...
I have 2x Kinect for Xbox 360, attached to a Mac mini running Processing 2. I use these for developing interactive stage shows & visuals via Isadora.
Shouldn't be too difficult to knock up a quick sketch utilising difference flobs (persistent blob tracking): the point cloud returned from the depth camera gets a learned baseline (the empty room plus a cat-sized tolerance, unless you also wanted to know how many cats were in the room), and any deviation from that baseline = occupied. The number of flobs then represents the number of occupants, pushed to a virtual sensor via HTTP. Alternatively, OpenNI could be brought into the equation for skeleton identification, but OpenNI is now out of development (so limited lifespan) and very CPU intensive. (See the Processing sketch at https://github.com/PatchworkBoy/isadora ... ng_Mac.pde which already does the bulk of the work - it can expose skeleton count via OSC, and exposes the video feeds to Syphon. Standard stuff.)
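The baseline-deviation idea could be sketched roughly like this (plain Java rather than a full Processing sketch; the class name, tolerance, and blob-size numbers are all made-up illustrations, not values from the sketch linked above):

```java
// Hypothetical sketch of the baseline approach: learn an "empty room" depth
// frame once, then count pixels that deviate beyond a cat-sized tolerance.
// A real version would group deviating pixels into blobs (flobs); here the
// deviating area is just divided by a rough person-sized pixel count.
class BaselineOccupancy {
    static final int TOLERANCE_MM = 250;   // ignore deviations smaller than ~a cat (assumed value)
    static final int PERSON_PIXELS = 500;  // rough pixel area of one person (assumed value)

    final int[] baseline;  // learned empty-room depth, one value (mm) per pixel

    BaselineOccupancy(int[] emptyRoomDepth) {
        this.baseline = emptyRoomDepth.clone();
    }

    // Count depth pixels deviating from the learned baseline beyond tolerance.
    // Depth value 0 means "no reading" on the Kinect, so those are skipped.
    int deviatingPixels(int[] frame) {
        int count = 0;
        for (int i = 0; i < frame.length; i++) {
            if (frame[i] > 0 && Math.abs(frame[i] - baseline[i]) > TOLERANCE_MM) {
                count++;
            }
        }
        return count;
    }

    // Crude occupant estimate: deviating area / expected per-person area.
    int estimateOccupants(int[] frame) {
        return deviatingPixels(frame) / PERSON_PIXELS;
    }
}
```

The resulting count would then be pushed to the virtual sensor over HTTP whenever it changes.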
Could simplify it to report basic occupancy rather than the number of occupants by just applying a depth threshold to the depth image, which would reduce CPU usage if necessary - foundation work I've already done within https://m.youtube.com/watch?v=u7s_dlXzR_I
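The cheaper threshold-only variant amounts to something like this (again an illustrative sketch, not code from the linked video; the threshold and pixel count are assumptions):

```java
// Simplified occupancy check: no learned baseline, just flag the room as
// occupied when enough depth pixels fall closer than a fixed threshold.
class DepthThresholdOccupancy {
    // depthMm: one depth reading (mm) per pixel; 0 = no reading on the Kinect.
    // thresholdMm / minPixels: tuning values, e.g. 2000 mm and a few hundred pixels.
    static boolean occupied(int[] depthMm, int thresholdMm, int minPixels) {
        int close = 0;
        for (int d : depthMm) {
            if (d > 0 && d < thresholdMm) {
                if (++close >= minPixels) return true;  // early exit keeps CPU cost low
            }
        }
        return false;
    }
}
```

One boolean per frame is all that needs pushing to the virtual sensor, so this could run comfortably alongside the show visuals.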
The RGB feed could also be exposed for use as a lux meter, to confirm whether the lights are on or off.
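As a lux meter the RGB feed would just need its average brightness compared against a threshold - a minimal sketch, assuming pixels packed as 0xRRGGBB (as Processing's pixels[] array delivers them) and an arbitrary on/off threshold:

```java
// Rough "lux meter" from the RGB frame: mean luminance as a proxy for
// lights on/off. Not calibrated lux - just a relative brightness check.
class RgbLightCheck {
    // Mean luma of the frame, 0..255, using Rec. 709 weights.
    static double meanLuma(int[] rgb) {
        double sum = 0;
        for (int px : rgb) {
            int r = (px >> 16) & 0xFF;
            int g = (px >> 8) & 0xFF;
            int b = px & 0xFF;
            sum += 0.2126 * r + 0.7152 * g + 0.0722 * b;
        }
        return sum / rgb.length;
    }

    static boolean lightsOn(int[] rgb, double threshold) {
        return meanLuma(rgb) > threshold;  // threshold would need tuning per room
    }
}
```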
Much interest? Worth me putting effort into it? Any pros/cons spring to mind?