Archive for ‘Research’ Category
Posted on 20:23, September 22nd, 2012 by Many Ayromlou
(Via MAKE Magazine)
On May 17, 2011 Ryerson’s Interactive Computing Applications and Design group (ICAD) demonstrated their latest projects. The session starts with a demonstration of using Microsoft Kinect hardware to control a computer mouse. Next, the group shows the use of a gestural interface to control Google Earth, followed by a demo of using Kinect to control an avatar in Second Life.
The session continues with a demonstration of a potential application to control a small Arduino-based robot over Bluetooth using gestures. Following this, the ICAD staff show the use of Kinect as a tracking and control mechanism for a Pan-Tilt-Zoom (PTZ) camera. This approach allows them to track up to five people without active trackers. The data from the Kinect camera is used to instruct the PTZ camera where to “look”. Once a person is identified (by putting up their hand), the Kinect will try to track that person around the room and make sure the PTZ camera follows them as well. Switching the tracked person is done by raising one’s hand.
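The core of that Kinect-to-PTZ step is just a coordinate mapping. Here is a minimal sketch of the idea (not ICAD’s actual code): convert a tracked person’s normalized position in the Kinect image into pan/tilt offsets for the PTZ camera. The field-of-view values are assumptions for illustration.

```python
# Hypothetical sketch: map a tracked position in the Kinect's image
# (normalized 0..1 coordinates) to pan/tilt offsets for a PTZ camera.
# FOV values below are illustrative assumptions, not measured specs.

KINECT_H_FOV = 57.0  # assumed horizontal field of view, degrees
KINECT_V_FOV = 43.0  # assumed vertical field of view, degrees

def to_pan_tilt(x_norm, y_norm):
    """Convert a normalized (0..1) image position to pan/tilt offsets
    in degrees relative to the camera's current orientation."""
    pan = (x_norm - 0.5) * KINECT_H_FOV
    tilt = (0.5 - y_norm) * KINECT_V_FOV  # image y grows downward
    return pan, tilt

# A person standing dead centre needs no camera movement:
print(to_pan_tilt(0.5, 0.5))  # (0.0, 0.0)
```

A real rig would also smooth these offsets over time so the PTZ camera doesn’t twitch with every frame of skeleton jitter.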
Their last demo shows a gesture-based keyboard that will eventually be tied into an interactive phonebook application, where the user can type the name of a contact using gestures and automatically dial the number through a VoIP application (e.g. Google Talk).
Individual project videos below….
1) Kinect Windows Mouse Interface
2) Kinect Google Earth Interface
3) Kinect Second Life Interface
4) Kinect Bluetooth Robot Interface
5) Kinect Tracker-Cam Interface
6) Kinect Interactive Phonebook
Have you ever put your email address online, in a comment for example? Were you concerned about spambots harvesting it? If so, did you use some weird trick (e.g. replacing @ with “at”) to try to hide your address from spambots? That’s where Albion Research’s Email Address Obfuscator comes in handy. Follow this link, type your real email address into the field and click “Obfuscate”. Their program will spit out two different “encodings” of your email that will be readable to humans (and clickable), but will cause havoc for email-harvesting spambots. Really cool and innovative.
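The basic trick behind tools like this is simple. Here’s a rough sketch of one common approach (not Albion Research’s actual algorithm): encode every character of a mailto: link as an HTML numeric entity. Browsers render it normally, but naive harvesters grepping for `user@host` patterns see only entity soup.

```python
# Sketch of HTML-entity email obfuscation (one of several common
# techniques; not the actual Albion Research implementation).

def obfuscate(email):
    """Return an <a> tag whose href and label are numeric entities."""
    encoded = "".join(f"&#{ord(c)};" for c in "mailto:" + email)
    label = "".join(f"&#{ord(c)};" for c in email)
    return f'<a href="{encoded}">{label}</a>'

print(obfuscate("me@example.com"))
```

Determined harvesters can decode entities, of course, which is why tools like this offer multiple encodings; it raises the cost enough that most drive-by scrapers move on.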
Toronto Nuit Blanche was a blast. For those of you who don’t know:
Nuit Blanche (literally White Night or All-Nighter in French) is an annual all-night arts festival. Its exact beginning is disputed between Paris, St Petersburg, and Berlin, but, taking elements from all of these, the idea of a night-time festival of the arts has spread around the world since 1997, taking hold from Montreal to Madrid and Lima to Leeds. A Nuit Blanche will typically have museums, private and public art galleries, and other cultural institutions open and free of charge, with the centre of the city itself being turned into a de facto art gallery, providing space for art installations, performances (music, film, dance, performance art), themed social gatherings, and other activities.
This year the local Toronto artist and Ryerson Image Arts student Mike Lawrie and I entered an independent project, Multitorch, under Ryerson University/Faculty of Communication and Design’s Light up the Night. The project involves a 23′×13′ (26′ diagonal) projection weighing in at 4096×2048 pixels, driven by a multitouch engine. Up to 10 infrared LED torches are handed out to the audience, and the system allows them to interact with the projection in front of them. As far as we know this is the largest (and highest-resolution) multitouch screen deployed to date. The project uses the CCV (Community Core Vision) tool for tracking, OSC (Open Sound Control) for communication and a 4,500-line custom Java visual engine. Here is a short five-minute timelapse video.
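For those curious how the tracker talks to the visual engine: OSC messages are just an address pattern, a type-tag string, and big-endian arguments, all padded to 4-byte boundaries. Here’s a minimal encoder sketch in Python (the `/multitorch/blob` address is made up for illustration; CCV actually speaks the TUIO profile, e.g. `/tuio/2Dcur`).

```python
import struct

# Minimal OSC message encoder, for illustration only. Real projects
# would use a library or the TUIO profile that CCV emits.

def osc_pad(b):
    """Null-terminate a byte string and pad it to a 4-byte boundary."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address, *floats):
    """Build an OSC message with float32 arguments."""
    msg = osc_pad(address.encode())
    msg += osc_pad(("," + "f" * len(floats)).encode())
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian float32
    return msg

# One torch at normalized screen position (0.25, 0.75):
packet = osc_message("/multitorch/blob", 0.25, 0.75)
print(len(packet))  # always a multiple of 4
```

The Java engine on the other end just parses these packets and treats each torch blob like a touch point.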
Posted on 19:21, January 11th, 2009 by Many Ayromlou
Bug Labs announced five new BUGmodules at the 2009 Consumer Electronics Show in Las Vegas. At the Bug Labs Test Kitchen the team showcased several innovative new BUG applications which fully demonstrate the endless possibilities of BUG, the open source modular consumer electronics platform. Each BUGmodule represents a specific gadget function (e.g. a camera, a keyboard, a video output, etc.) that can be snapped to the BUGbase, a programmable Linux-based mini-computer with four available BUGmodule slots.
The five new BUGmodules are:
These five modules complement the initial batch of BUGmodules, including BUGlocate (GPS), BUGcam2MP (digital camera), BUGmotion (motion sensor and accelerometer) and BUGview (touchscreen LCD). And with the recent addition of BUGvonHippel, a breadboard module enabling users to add virtually any interface to their BUGbase, developers are given more control in making BUG the center of their device universe.
I’ve been a fan of the potential of augmented reality for some time, but the limitation of having to print out and use those funky registration images has always been there. A lot of people are working on solving/helping this problem. One of the groups that has come up with a novel approach is Mobilizy, a small team based in Austria.
Mobilizy have developed one of the hottest applications available for the new Android G-1 phone, called Wikitude. You see, instead of using registration images for pattern recognition and image substitution, they use the GPS, digital compass and camera on the G-1 to deliver one of the first really practical augmented reality applications, excellent for travel and tourism.
In what Mobilizy has dubbed “CamView” mode, users can hold the phone’s camera up to a spectacular mountain range and see the names and heights displayed as an overlay mapped onto the mountains in the camera view. Users can look out of an airplane window to see what is down there, or walk through a city like Seville, Spain, hold the phone’s camera up to a building, and have Wikitude tell them what it is.
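The geometry behind this kind of GPS/compass AR is surprisingly simple. A back-of-envelope sketch (not Mobilizy’s code): compute the bearing from the phone’s GPS fix to a point of interest, and draw the label only if that bearing falls within the camera’s horizontal field of view around the compass heading.

```python
import math

# Illustrative sketch of GPS/compass AR overlay logic; the FOV value
# and function names are assumptions, not Wikitude's actual API.

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def in_view(bearing, heading, h_fov=50.0):
    """Is a POI at `bearing` visible given the compass `heading`?"""
    diff = (bearing - heading + 180) % 360 - 180  # wrap to [-180, 180)
    return abs(diff) <= h_fov / 2

# A POI due east of the user, phone pointing roughly east:
b = bearing_deg(0.0, 0.0, 0.0, 1.0)
print(round(b), in_view(b, 80.0))
```

Once a POI is known to be in view, its horizontal screen position is just the bearing difference scaled by pixels-per-degree.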
Check out the demo video below for more detail/clarification.
Posted on 12:41, August 18th, 2008 by Many Ayromlou
More crazy image-enhanced video rendering papers from the University of Washington being presented at Siggraph ’08. I just can’t get enough of these new applications that combine crappy video with some still frames to produce eye-popping results. Most of the experiments in this video were done using a standard video camera and a hi-res still camera. The results were combined, some secret sauce was added, and you end up with these killer results. I for one can’t wait for editing packages to include some of these research topics as new features… Can you say UNREAL :-)
Posted on 11:24, August 14th, 2008 by Many Ayromlou
Hmmmm……Ugggghhh…..Yet another MS research juicy fruit that no one outside of Redmond is gonna be able to play with. I guess this is a further development or an offshoot of PhotoSynth that MS presented at Siggraph in Boston. This one is much more polished and seems to actually have a purpose (see the end of the video). The days of QTVR are numbered if MS ever decides to make this project a reality.
Yep, Microsoft is at it again (actually MS Research, to be more precise). Show up at Siggraph, present a juicy paper, get everyone salivating and then, well… not sure… hide :-). I don’t get it. I’ll give you an example: a couple of years ago a bunch of MS Research guys showed up at Siggraph in Boston (I think) and showed this amazing application, PhotoSynth, that would stitch pictures taken by random tourists from different internet sources into a brilliant 3D model. It was fantastic, but other than a demo application, it’s nowhere to be found.
MS, are you listening? You’re a software company; stop producing software no one wants/needs (Vista/Office anyone?) and realize some of these apps the research people are working on.
Anyways, rant off. Now for this year’s amazing app. The tool is called Unwrap Mosaic and is described as the Photoshop of video editing tools. Watch the video here. Imagine being able to take a video and change something inside it just like you would in Photoshop… without having to go to every single frame of that video. The technology behind UM allows for changes by unwrapping the objects contained in the video into a flat image. It would be incredibly difficult to update the video in its original form, but making objects flat allows the new objects to be mapped into the correct positions. In the old days (like 1-2 years ago) 3D artists had to manually map things in 3D onto models and then composite them into the video… well, no more. This is amazing… kudos to MS Research. More info here.
Posted on 21:53, March 28th, 2008 by Many Ayromlou
A friend passed this on today (thanks Jeremy). If you use a mic in your day-to-day business (or even if you’re an occasional iChat/Skype user), you should check this out. RevoLabs have introduced a new line of wireless microphones that come with RF Armor. What does that mean? Well, the next time your GSM phone rings, syncs or receives email, your microphone won’t go all crazy. Plus, their Solo mics come in three different types:
Yeah baby! If you’re gonna telecine your Super 8 summer trip reels, why not do it using the RED Digital Cinema Camera at glorious (or is it gruesome?) 4K. All those scratches and nicks blown up to 4K… Yummm. Well, I guess film restorers will be back in business. The rig is a prototype Workprinter XP made by Movie Stuff specifically for the RED camera. The Workprinter’s “trigger out” interfaces directly to the RED’s GPI input to trigger capture in stop-motion mode at up to 30 frames per second in the RED’s 4K mode. I wonder if they’re gonna do a 16mm version of this rig as well. Now that would be a cheap 16mm telecine :-).
Posted on 20:23, March 14th, 2008 by Many Ayromlou
Posted on 13:09, February 12th, 2008 by Many Ayromlou
Canon is using Iris watermarking to take photographer’s copyright protection to the next level. A new Canon patent application (Pub. No.: US 2008/0025574 A1) reveals the next step in digital watermarking – Iris Registration. The process is as follows:
Original and more details via Photography Bay.
Posted on 12:19, December 17th, 2007 by Many Ayromlou
If you’d like to see some of the most prolific engineers and scientists of our time talk about how we got to where we are in computing, head over to the Computer History Museum channel on YouTube. Oh, and if you’re ever in Northern California, take a side trip to Mountain View and visit the museum in person; I did.
Speaking of hand tracking, here is a video of a guy playing around with an unknown system (looks a bit like Linux). Very cool demo and almost perfect tracking. Not sure if it’s IR or not; you can see him in the corner of the screen, but I can’t quite tell how it’s done. Anyways, I’m posting it since it’s one of the better ones I’ve seen. From the description:
A C++ computer vision application to emulate the mouse and the keyboard in any application using hand gestures and a low-cost webcam.
Posted on 22:11, October 1st, 2007 by Many Ayromlou
So here it is, the web 2.0 app you’ve all been waiting for. We’d covered Content-Aware Image Resizing before in two of our articles here and here. Now it looks like rsizr is actually a working 2.0 app that can do this type of Seam Carving. Try it out… it’s magic.
Well, many of you have probably been wondering why N.E.R.D. has been a bit slow for the past couple of months. August was a bit of a nightmare month (although an enjoyable nightmare for the most part). I got a chance to go to Siggraph ’07 in San Diego, followed by a European trip to wrap up the other project I’ve been working on (Comedia II) at Ryerson. That trip passed through Amsterdam (WOOHOO) and ended in Stuttgart with a successful demonstration of our high-resolution, low-bandwidth screen sharing app, which was part of the Comedia II deliverables.
The screen sharing app basically uses a Blackmagic Design Intensity card to capture, encode and deliver the DVI output of a CAD/CAM workstation to a remote site and, with the addition of our home-brew pointer control system, allows multiple remote audiences to hold collaborative engineering design review sessions.
September was pretty much spent planning and implementing our demo for the GLIF conference in Prague. This was a demonstration put together by some of the CineGrid consortium members. The demo involved connecting three sites (Ryerson University‘s DCinema Lab, Calit2 at UCSD and Barrandov Studios in Prague) via 10GigE optical connections in a layer-2 network. Below you’ll find the overall network diagram prepared by Alan Verlo.
The idea behind the demo was as follows (point form to make it a bit easier to visualize):
1) DCinema footage was shot in Prague last weekend (Sept. 15-16) using a DALSA Origin 4K DCinema Camera.
On Tuesday morning we started the two-way HD conference, connected the front end of the Baselight system to the back end, and after some adjustments had the system up and running with 2K proxy output in Toronto and Prague. The demo was a “first in the world” and will be (at least I think so) the first of many more to come out of our lab and its collaboration with CineGrid partners around the world… so stay tuned. I’ve included a bunch of pictures I took during the build and the actual demo; the official CineGrid press release is coming soon, and I will try to post the video we shot at our end of the first session (it’s in DVCProHD and I need to book one of our suites to edit it together).
Hot on the heels of our coverage of Image Slicing and Stretching paper titled Seam Carving for Content-Aware Image Resizing (Shai Avidan and Ariel Shamir), here is a fully working prototype of the shrinking part of the paper by Patrick Swieskowski. So how long do you think it will take for Adobe to snag these guys up?……
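If you’re wondering what seam carving actually does under the hood, here’s a toy sketch of the shrinking step from the Avidan/Shamir paper in pure Python: compute a simple gradient energy for a grayscale image, find the minimum-energy vertical seam with dynamic programming, and remove it. (A real implementation works on colour images and repeats this per column removed.)

```python
# Toy seam carving sketch: remove one low-energy vertical seam from a
# grayscale image given as a list of rows of pixel intensities.

def energy(img):
    """Simple gradient-magnitude energy at each pixel."""
    h, w = len(img), len(img[0])
    return [[abs(img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)])
             + abs(img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x])
             for x in range(w)] for y in range(h)]

def remove_vertical_seam(img):
    """Find the cheapest top-to-bottom seam by DP and delete it."""
    e = energy(img)
    h, w = len(e), len(e[0])
    cost = [e[0][:]]  # cumulative minimum energy per pixel
    for y in range(1, h):
        row = []
        for x in range(w):
            best = min(cost[y - 1][max(x - 1, 0):min(x + 2, w)])
            row.append(e[y][x] + best)
        cost.append(row)
    # Backtrack the cheapest seam from bottom to top.
    x = min(range(w), key=lambda i: cost[-1][i])
    out = []
    for y in range(h - 1, -1, -1):
        out.append(img[y][:x] + img[y][x + 1:])
        if y:
            neighbours = range(max(x - 1, 0), min(x + 2, w))
            x = min(neighbours, key=lambda i: cost[y - 1][i])
    return out[::-1]

img = [[10, 10, 200, 10],
       [10, 200, 10, 10],
       [200, 10, 10, 10]]
print(len(remove_vertical_seam(img)[0]))  # 3: one column removed
```

The magic of the paper is that removing seams of low energy shrinks the image while leaving the visually important regions untouched.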