Archive for ‘image recognition’ Category
On May 17, 2011 Ryerson’s Interactive Computing Applications and Design Group (ICAD) demonstrated their latest projects. The session starts with a demonstration of using Microsoft Kinect hardware to control a computer mouse. Next, the group shows the use of a gestural interface to control Google Earth, followed by a demo of using Kinect to control an avatar in Second Life.
The session continues with a demonstration of a potential application to control a small Arduino-based robot over Bluetooth using gestures. Following this, the ICAD staff show the use of Kinect as a tracking and control mechanism for a Pan-Tilt-Zoom (PTZ) camera. This approach allows them to track up to five people without active trackers. The data from the Kinect camera is used to instruct the PTZ camera where to “look”. Once a person is identified (by putting up their hand) the Kinect will try to track the person around the room and make sure the PTZ camera follows the person as well. Switching the tracked person is done by raising one’s hand.
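ICAD hasn’t published their code, but the tracking logic described above is easy to picture: the Kinect reports each tracked person’s joint positions in camera space, and those get converted into pan/tilt angles for the PTZ camera. Here is a minimal sketch of that idea; the coordinate convention, joint names and the “raised hand switches tracking” rule are assumptions based on the description, not their actual implementation.

```python
import math

def kinect_to_ptz(x, y, z):
    """Convert a Kinect joint position (metres, camera space) into
    pan/tilt angles (degrees) for a PTZ camera co-located with the
    Kinect. x: right(+)/left(-), y: up(+)/down(-), z: distance."""
    pan = math.degrees(math.atan2(x, z))
    tilt = math.degrees(math.atan2(y, z))
    return pan, tilt

def pick_tracked_person(skeletons, current_id=None):
    """Return the id of the skeleton to track. A person raising a
    hand above their head takes over; otherwise keep the current one.
    skeletons: {id: {"head": (x,y,z), "hand_right": (x,y,z)}}"""
    for sid, joints in skeletons.items():
        if joints["hand_right"][1] > joints["head"][1]:
            return sid
    return current_id
```

A person standing 2 m away and 2 m to the right would yield a pan of 45 degrees, which the PTZ camera would then be told to slew to.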
Their last demo will show a gesture-based keyboard that will eventually be tied into an interactive phonebook application, where the user can type the name of a contact using gestures and automatically dial the number through a VoIP application (e.g., Google Talk).
Individual project videos below….
1) Kinect Windows Mouse Interface
2) Kinect Google Earth Interface
3) Kinect Second Life Interface
4) Kinect Bluetooth Robot Interface
5) Kinect Tracker-Cam Interface
6) Kinect Interactive Phonebook
Posted on 21:40, January 16th, 2011 by Many Ayromlou
Great little video on how to set up AR marker recognition under QC. It even has nice mellow background music :-).
Toronto Nuit Blanche was a blast. For those of you who don’t know:
This year, local Toronto artist and Ryerson Image Arts student Mike Lawrie and I entered an independent project — Multitorch — under the Ryerson University/Faculty of Communications And Design’s Lightup the Night. The project involves a 23’x13′ (26′ diagonal) projection weighing in at 4096×2048 pixels, driven by a multitouch engine. Up to 10 infrared LED torches are handed out to the audience, and the system allows them to interact with the projection in front of them. As far as we know this is the largest (and highest resolution) multitouch screen deployed to date. The project uses the CCV (Community Core Vision) tool for tracking, OSC (Open Sound Control) for communication and a 4,500-line custom Java visual engine. Here is a short five-minute timelapse video.
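For those curious how the CCV-to-engine hookup works: CCV tracks the IR blobs and broadcasts them over OSC using the TUIO protocol, where each cursor arrives as normalised (0–1) coordinates that the visual engine maps onto the projection’s pixels. The sketch below illustrates that mapping and the cursor bookkeeping; it is a simplified stand-in (the message tuples here mimic TUIO “set”/“alive” commands), not the actual 4,500-line Java engine.

```python
def update_cursors(cursors, message, width=4096, height=2048):
    """Maintain a dict of active touch cursors from simplified TUIO
    messages. 'set' updates a cursor's position (normalised x/y in
    [0,1], origin top-left, as CCV sends); 'alive' lists the session
    ids still present, so stale cursors get removed."""
    cmd = message[0]
    if cmd == "set":
        sid, x, y = message[1], message[2], message[3]
        cursors[sid] = (int(x * width), int(y * height))
    elif cmd == "alive":
        alive = set(message[1:])
        for sid in list(cursors):
            if sid not in alive:
                del cursors[sid]
    return cursors
```

With up to ten torches in play, each raised torch becomes one cursor; a touch at the centre of the screen maps to pixel (2048, 1024) on the 4096×2048 canvas.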
Posted on 14:50, June 26th, 2008 by Many Ayromlou
Just came across Idée’s new baby, TinEye. Have you ever wondered why you can’t just go to Google Images (or a similar image search engine) and look for images based on image content rather than tags, names and such? Well, it’s because it’s damn hard to do, and frankly until now I haven’t seen one that actually worked properly. That said, I think the guys at TinEye have it figured out quite nicely. Their system does NOT use keywords, text, names or tags. They have developed a proprietary image identification technology that creates an image fingerprint for a given image. This allows them to do amazing partial matches, even if the image has been cropped, resized or modified. Although their database of images is not as large as Google’s, their algorithms run circles around pretty much every other technology in this field.
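TinEye’s fingerprinting technology is proprietary, so nobody outside knows exactly how it works — but the general idea of a perceptual fingerprint can be illustrated with a much simpler (and unrelated) technique, the difference hash: reduce the image to a tiny grayscale grid, record which pixels are brighter than their right-hand neighbours, and compare fingerprints by Hamming distance. Small distances survive resizing and mild edits, which is what makes partial matching possible at all.

```python
def dhash(pixels):
    """Difference hash of a grayscale image given as a 2-D list of
    brightness values, pre-scaled to a small grid (e.g. 9x8). Each
    bit records whether a pixel is brighter than its right
    neighbour, so the hash depends on gradients, not exact values."""
    bits = 0
    for row in pixels:
        for a, b in zip(row, row[1:]):
            bits = (bits << 1) | (1 if a > b else 0)
    return bits

def hamming(h1, h2):
    """Count differing bits between two hashes -- a small distance
    suggests the images are near-duplicates."""
    return bin(h1 ^ h2).count("1")
```

Two copies of the same photo at different sizes shrink to nearly the same grid, so their hashes land a few bits apart, while unrelated photos land far apart — a toy version of the “fingerprint” matching described above.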