Archive for ‘Disruptive Technology’ Category

The Birth and Rise of Ethernet: A History – Input Output

datePosted on 11:27, August 29th, 2012 by Many Ayromlou


Nowadays, we take Ethernet for granted. We plug a cable into a wall jack or into a switch and we get the network. What’s to think about?

But it didn’t start that way. In the 60s and 70s, networks were ad hoc hodgepodges of technologies with little rhyme and less reason. But then Robert “Bob” Metcalfe was asked to create a local area network (LAN) for Xerox’s Palo Alto Research Center (PARC). His creation, Ethernet, changed everything.

Back in 1972, Metcalfe, David Boggs, and other members of the PARC team assigned to the networking problem weren’t thinking of changing the world. They only wanted to enable PARC’s Xerox Altos (the first personal workstations with a graphical user interface and the Mac’s spiritual ancestor) to connect to and use the world’s first laser printer, the Scanned Laser Output Terminal.

(Via h30565.www3.hp.com)

Well, it had to happen sooner or later. YDreams and Canesta Inc. have announced a partnership that will redefine Augmented Reality. We’ve all seen AR demos where “real” markers are recognized by machine-vision engines and replaced with “artificial” objects in the video stream. Canesta has taken the next step and built a camera that can provide a real-time, markerless depth map of the scene it’s shooting.

Canestavision sensor chips fundamentally work in a manner similar to radar, where the distance to remote objects is calculated by measuring the time it takes a burst of radio waves to make the round trip from a transmitting antenna to a reflective object (like a metal airplane) and back. In the case of these chips, however, a burst of unobtrusive light is transmitted instead.

The chips, which are not fooled by ambient light, then either time how long it takes the pulse to reflect back to each pixel, using high-speed on-chip timers in one method, or simply count the number of returning photons, an indirect measure of the distance, in another.

In either case, the result is an array of “distances” that provides a mathematically accurate, dynamic “relief” map of the surfaces being imaged. The image and distance information is then handed off to an on-chip processor running Canesta’s proprietary imaging software that further refines the 3-D representation before sending it off chip to the OEM application.
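Canesta’s on-chip processing is proprietary, but the core time-of-flight relationship is simple: light travels to the surface and back, so the distance is half the round trip. A minimal sketch, assuming the sensor hands us an array of per-pixel round-trip times (the function and array names are hypothetical):

```python
import numpy as np

# Speed of light in metres per second.
C = 299_792_458.0

def tof_depth_map(round_trip_times_s: np.ndarray) -> np.ndarray:
    """Convert per-pixel round-trip times (seconds) into distances (metres).

    The light pulse travels out to the object and back, so the distance
    to the surface is half the round-trip path: d = c * t / 2.
    """
    return C * round_trip_times_s / 2.0

# Example: a 4x4 sensor whose pixels all report ~6.67 ns round trips,
# i.e. surfaces roughly one metre away.
times = np.full((4, 4), 6.67e-9)
print(tof_depth_map(times))  # ~1.0 m everywhere
```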

YDreams has taken Canesta’s technology and applied it to AR.

To date, implementing augmented reality solutions has only been possible with very specialized techniques.  By working together with Canesta, we look forward to making augmented reality a part of everyday life,

said Ivan Franco, YDreams’ R&D Director.

Until now Augmented Reality has delivered limited applications to the general public, mostly offering 3-D objects on top of visually obtrusive markers. By using Canesta’s 3-D vision sensors, YDreams applications can do real-time capture of any object in 3-D, without the aid of any special markers or enhancements.

A new large-format multi-touch technology launched today by DISPLAX, a developer of interactive technologies, will transform any non-conductive flat or curved surface into a multitouch screen. The DISPLAX Multitouch Technology, believed to be the first of its kind, has been developed based on a transparent thinner-than-paper polymer film. When applied to glass, plastic or wood, the surface becomes interactive.

It can be applied to flat or curved, opaque as well as transparent surfaces up to three metres across the diagonal. It is hypersensitive, allowing users to interact with an enabled surface not just by touching it but, for the first time, by blowing on it, opening up new possibilities for future applications. Currently, the technology can detect up to 16 fingers on a 50-inch screen. The number of detectable fingers is expected to increase as development progresses.

Based on patent-pending projected capacitive technology, DISPLAX Multitouch Technology uses a controller that processes the multiple input signals it receives from a grid of nanowires embedded in the film attached to the enabled surface. Each time a finger is placed on the screen, or a user blows on the surface, a small electrical disturbance is caused. The microprocessor controller analyses this data and decodes the location of each input on the grid to track finger and air-flow movements. The DISPLAX Multitouch controller combined with a projected capacitive nanowired film is a lightweight and highly scalable solution, ranging from seven inches (18 centimetres) to three metres across the diagonal, thus opening up a wide range of commercial applications suitable for indoor or outdoor displays.
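DISPLAX’s decoding firmware is proprietary, but the general idea of locating a touch on a wire grid can be sketched: a disturbance peaks on nearby row and column wires, and a centroid around the peak gives sub-wire precision. A minimal single-touch sketch (all names hypothetical; a real controller would segment multiple peaks to track 16 fingers):

```python
import numpy as np

def locate_touches(row_deltas: np.ndarray, col_deltas: np.ndarray,
                   threshold: float = 0.5) -> list[tuple[float, float]]:
    """Estimate a touch coordinate from capacitance changes on a wire grid.

    row_deltas/col_deltas hold the disturbance measured on each horizontal
    and vertical nanowire. A touch shows up as a peak on one row profile
    and one column profile; a weighted centroid gives sub-wire accuracy.
    """
    def centroid(profile: np.ndarray) -> float | None:
        if profile.max() < threshold:
            return None  # no touch strong enough on this axis
        idx = np.arange(len(profile))
        return float((idx * profile).sum() / profile.sum())

    y, x = centroid(row_deltas), centroid(col_deltas)
    return [(x, y)] if x is not None and y is not None else []

# One finger near row 3, column 7 of a small 8x10 grid:
rows = np.array([0, 0, .2, 1, .2, 0, 0, 0], dtype=float)
cols = np.array([0, 0, 0, 0, 0, 0, .3, 1, .3, 0], dtype=float)
print(locate_touches(rows, cols))  # ~[(7.0, 3.0)]
```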

The DISPLAX Multitouch Technology will begin shipping in July 2010. The prices will be very competitive and depend on size.

Features:

  • Multi-touch detection: DISPLAX™ Multitouch Technology detects 16 fingers simultaneously on a 50-inch screen (to increase as technology development progresses).
  • Air-movement detection: DISPLAX™ Multitouch Technology detects when someone blows on the surface, measuring the intensity and direction of the air flow.
  • Large and small: As small as 18 cm and as large as three meters across the diagonal and thinner than paper.
  • Transparent: DISPLAX™ Multitouch Technology is completely transparent and allows the user to see through any transparent surface it is adhered to.
  • Light-weight: A 50-inch screen weighs about 300 grams, making it easily transportable and easy to install.
  • Versatile: Can be applied to any non-conductive flat or curved surface including glass, plastic and wood less than 15 mm thick. It works in daylight or at night, indoors or outdoors, and is not affected by light conditions.
  • Durable: When using rear projection, the film is applied to the reverse of the surface, protecting it from scratches or other damage, with no need for contact with the material.

Google adds Auto Captioning to YouTube…..

datePosted on 14:05, November 19th, 2009 by Many Ayromlou

Wow…..Another amazing feature brought to you just in time for X-Mas by the Google…..Auto Captioning, or Auto-Cap. You might be wondering, Caption-Schmaption…..why? Well, first on the list would be accessibility, which is self-explanatory, but there’s also searchability and auto-translation. You see, once a video has been captioned, Google can provide searchability: you can do word searches and literally jump to the point in the video where the word is mentioned…..That’s really cool. Auto-translation is another natural fit; once you’ve got the English captions, you can do machine translation to any of the other 51 languages Google’s translation engine supports.

To achieve this, Google combined its automatic speech recognition (ASR) technology with the YouTube caption system to offer automatic captions. Auto-caps uses the same voice recognition algorithms as Google Voice to automatically generate captions for video. The captions will not always be perfect (check out the video below for an amusing example), but even when they’re off they can still be helpful, and the technology will continue to improve with time.

In addition to automatic captions, Google is also launching automatic caption timing, or auto-timing, to make it significantly easier to create captions manually. With auto-timing, you no longer need special expertise to create your own captions in YouTube. All you need to do is create a simple text file with all the words in the video, and Google will use its ASR technology to figure out when the words are spoken and create captions for your video. This should significantly lower the barrier for video owners who want to add captions but don’t have the time or resources to create professional caption tracks.
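Google’s alignment pipeline is internal, but the output side is easy to picture: given per-word timings from an ASR aligner (hypothetical here), you can group words into standard SRT cues and, for the searchability point above, jump straight to where a word is spoken. A minimal sketch:

```python
def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def words_to_srt(word_timings, max_words=7):
    """Group (word, start_s, end_s) tuples into numbered SRT cues."""
    cues = []
    for i in range(0, len(word_timings), max_words):
        chunk = word_timings[i:i + max_words]
        start, end = chunk[0][1], chunk[-1][2]
        text = " ".join(w for w, _, _ in chunk)
        cues.append(f"{len(cues) + 1}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(cues)

# Hypothetical aligner output for a short clip:
timings = [("hello", 0.0, 0.4), ("and", 0.5, 0.6), ("welcome", 0.6, 1.1)]
print(words_to_srt(timings))

# Searchability: jump to the first mention of a word.
hit = next(t for t in timings if t[0] == "welcome")
print(f"'welcome' spoken at {hit[1]}s")
```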

For now, auto-caps is only visible on a handful of partner channels (UC Berkeley, Stanford, MIT, Yale, UCLA, Duke, UCTV, Columbia, PBS, National Geographic, Demand Media, UNSW and most Google & YouTube channels). Auto-timing, on the other hand, is rolling out globally for all English-language videos on YouTube.

THANKS GOOGLE :-)

National Film Board of Canada’s New iPhone/iPod Touch App….

datePosted on 11:20, October 22nd, 2009 by Many Ayromlou

I usually don’t tend to write about apps, but this one got my attention. Download the FREE NFB app and you get access to over a thousand films, documentaries, animations and trailers on your iPhone or iPod Touch. I think (hope) that this move will be the trickle before the storm that finally opens the floodgates of media archives being made available to people everywhere. It is a real shame that these works are usually housed/guarded in some concrete bunker, available only to specialists. I will not bore you with my opinions on archives/copyrights/rights management of our collective cultural treasures by the “high priests”……Let’s just say I’m crawling out of my skin in joy that the NFB has taken the (hopefully) first step :-).

Apertus: Open Source DCinema……

datePosted on 16:22, August 7th, 2009 by Many Ayromlou

Yep, those crazy open source hackers over at dvinfo.net have done it again. You thought the RED camera brought about a revolution in dcinema? Well, you ain’t seen nothing yet. Apertus is built around the Elphel 353, a free-software and open-hardware camera. The Elphel camera this entire project is based on is basically an excellent security camera that can do some real magic. It uses an Aptina CMOS Bayer-pattern sensor with an optical format of 1/2.5″ (5.70mm x 4.28mm) and a native resolution of 2592×1944 (5 megapixels). It features a 12-bit ADC and supports region of interest, on-chip binning and decimation. Aptina claims that the chip has 70dB of dynamic range at full resolution and 76dB when using 2×2 binning. The camera has a standard C-mount but ships with an adapter ring that allows CS-mount lenses to be fitted as well.

The recording resolution can be freely adjusted to anything from 16×16 to 2592×1944 in 16-pixel steps. This includes Apertus AMAX (2224×1251), Apertus CIMAX (2592×1120), 2K (2048×1536), Full HD (1920×1080), HD (1280×720) and, of course, all lower-resolution SD formats like DV PAL, DV NTSC, etc.
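Since sizes move in 16-pixel steps, a requested resolution has to be snapped to the grid. A minimal sketch (rounding to the nearest step is my assumption; the Elphel firmware may truncate instead), which also shows why “Full HD” lands on 1920×1088 in the table below:

```python
def snap_resolution(width: int, height: int,
                    step: int = 16,
                    max_w: int = 2592, max_h: int = 1944) -> tuple[int, int]:
    """Snap a requested frame size to the camera's 16-pixel grid."""
    def snap(v: int, vmax: int) -> int:
        v = round(v / step) * step   # nearest multiple of 16 (assumption)
        return max(step, min(v, vmax))
    return snap(width, max_w), snap(height, max_h)

print(snap_resolution(1920, 1080))  # (1920, 1088): 1080 is not a multiple of 16
print(snap_resolution(4000, 3000))  # (2592, 1944): clamped to the sensor
```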

Standard | Resolution | Record Mode | Max. FPS
Apertus AMAX (16:9) | 2224×1251 | JP4 RAW | 24
Apertus CIMAX (2.35:1) | 2592×1120 | JP4 RAW | 24.2
Full HD (1080p) | 1920×1088 | color | 25.2
Full HD (1080p) | 1920×1088 | JP4 RAW | 30.9
2K | 2048×1088 | color | 23.9
2K | 2048×1088 | JP4 RAW | 29.5
HD (720p) | 1280×720 (2×2 binning) | color | 46.2
HD (720p) | 1280×720 (2×2 binning) | JP4 RAW | 46.2
HD (720p) | 1280×720 | color | 57.9
HD (720p) | 1280×720 | JP4 RAW | 60
NTSC DV | 640×480 | color / JP4 RAW | 126
NTSC DV | 640×480 (3×3 binning) | color / JP4 RAW | 82
NTSC 16:9 | 854×480 | color / JP4 RAW | 110
PAL DV | 720×576 | color / JP4 RAW | 100
PAL DV | 720×576 (3×3 binning) | color / JP4 RAW | 66
PAL DV 16:9 | 1024×576 | color / JP4 RAW | 84

The lower the resolution, the higher the maximum possible frame rate. At the full sensor size (5 megapixels) the maximum frame rate is 10 fps in normal color mode and 15 fps in JP4 RAW mode. JP4 achieves higher frame rates in general because some in-camera calculations are skipped and need to be applied later in postproduction (like debayering/demosaicing).

The RAW recording mode in Apertus is called JP4 RAW. Because certain in-camera compression steps can be skipped, JP4 RAW allows higher recording speeds, resulting in more fps. JP4 RAW requires postprocessing (a DNG converter) but in return offers the highest possible image quality.
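Demosaicing is the main step deferred to post. A minimal half-resolution sketch of the idea, assuming an RGGB mosaic layout (the actual sensor pattern and the DNG converter’s full-resolution interpolation differ):

```python
import numpy as np

def debayer_rggb_half(raw: np.ndarray) -> np.ndarray:
    """Demosaic an RGGB Bayer mosaic at half resolution.

    Each 2x2 cell [[R, G], [G, B]] becomes one RGB pixel, averaging the
    two green samples. Real converters (e.g. the JP4 DNG workflow)
    interpolate at full resolution instead.
    """
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2].astype(np.float64) + raw[1::2, 0::2]) / 2
    b = raw[1::2, 1::2]
    return np.dstack([r, g, b])

# A fake full-sensor mosaic in the 12-bit ADC range:
mosaic = np.random.randint(0, 4096, (1944, 2592), dtype=np.uint16)
rgb = debayer_rggb_half(mosaic)
print(rgb.shape)  # (972, 1296, 3)
```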

The following connectors are available on the camera body:

  • SATA: Can be used to connect any external SATA device that is supported under Linux (external hard drives, RAID arrays, etc.)
  • Ethernet: 100 Mbit network with PoE (48 V)
  • USB: USB 1.1 with 5 V power supply
  • IDE: Used to connect an internal HDD
  • RS232: Access to the console and debug output

The camera also supports the following recording media:

  • Optional internal IDE 1.8″ HDD
  • 2 internal CF Card Slots
  • External SATA connector to connect any SATA device (Linux support required)

And if that’s not enough for you, there is an extra bonus that comes from the camera’s ability to shoot Full HD in portrait (upright) mode. Upright screens are basically 1080p screens mounted sideways (portrait mode). This type of mounting is becoming increasingly popular for events, exhibitions and advertising. If you want to spare yourself the hassle of building a rig to mount the camera rotated 90 degrees, you can whip out your Apertus rig and just start recording. This will give you a 1088×1920 image that’s ready for portrait playback.
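Assuming the sensor still scans out a landscape 1920×1088 frame when the camera is mounted sideways, rotating it for upright playback is a one-liner; a minimal sketch:

```python
import numpy as np

# A landscape Full HD frame as recorded: 1088 rows x 1920 columns x RGB.
frame = np.zeros((1088, 1920, 3), dtype=np.uint8)

# Rotate 90 degrees so the footage plays back upright on a portrait screen.
portrait = np.rot90(frame)
print(portrait.shape)  # (1920, 1088, 3)
```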

Get Wavy…..

datePosted on 12:45, June 14th, 2009 by Many Ayromlou

Yep, Google has taken the giant step for mankind and introduced its collaboration platform….Wave. This is absolutely amazing: a mixture of email, IM, bulletin boards, versioning systems and wikis, with a dash of Google magic…..Man, I can’t wait for my account…..Oh, and did I mention it’s free and open source? Yes, Google is giving it away for you to install and play with on your own server. The introduction video is longish but definitely well worth the time. It’s out of this world……

BTW…..ROSY F*CKING ROCKS…..Go and check it out at about 1:12:00 into the video.

Touchless, Gestural Interface, Powered by Electrostatics

datePosted on 11:16, May 6th, 2009 by Many Ayromlou

Great video showing a bizarre and novel way of creating a gesture-based interface. You literally touch nothing….air…..and the interface does the rest. Pretty interesting project. According to Justin Schunick of the team at Northeastern University, the interface uses an array of copper electrodes to sense changes in the electric field created by the device. The black material covering the electrodes shows how the interface can be hidden beneath surfaces to create a completely invisible interface; it is simple black felt you can buy at any fabric store. The total cost of this prototype was around $60 USD.

They wrote custom C++ software to communicate with the microcontroller running the show. This enables the device to be used as a new type of XYZ computer mouse. Think a Nintendo Wii controller without the controller, or Minority Report without the gloves. This could be revolutionary as far as HCI goes.
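The team’s firmware isn’t public, so here is just one plausible way to map electrode readings to an XYZ position: take a signal-weighted centroid of the electrode locations for X and Y, and use total signal strength (which grows as the hand gets closer) for Z. Everything below is a hypothetical sketch, not their method:

```python
import numpy as np

# Known (x, y) positions of the copper electrodes on the board, in cm.
ELECTRODES = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)

def estimate_xyz(signals: np.ndarray) -> tuple[float, float, float]:
    """Estimate hand position from per-electrode field disturbances.

    XY: centroid of electrode positions weighted by signal strength.
    Z:  inverse of total signal (a closer hand disturbs the field more).
    Both mappings are illustrative assumptions.
    """
    w = signals / signals.sum()
    x, y = (ELECTRODES * w[:, None]).sum(axis=0)
    z = 1.0 / signals.sum()
    return float(x), float(y), float(z)

# Hand hovering nearest the first electrode:
print(estimate_xyz(np.array([4.0, 1.0, 1.0, 1.0])))
```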

Teradici PCoIP makes me happy: Initial Review….

datePosted on 13:48, March 20th, 2009 by Many Ayromlou

So after covering the initial announcement of Teradici’s PC-over-IP product, I received an email asking if I wanted to review the product. I said yes, since we have been looking for a remoting technology to consolidate all our lab PCs and Macs at the university, and we could potentially end up using a Teradici or similar product in the future.

I’ve only spent 4-5 hours with the hardware, and some of that time was wasted because I didn’t do my usual RTFM. So please keep that in mind as you read through. The package comes in two parts: the PCoIP Host Card, a x1 PCI Express card with the Tera1200 PCoIP host chip, and the PCoIP Desktop Portal, a small device that houses the Tera1100 PCoIP Portal chip and connectors.

The host card has just two ports: a high-density dual DVI port, which is where the supplied dual-display DVI dongle hooks up, and an Ethernet port that provides the network link to the portal device. The two DVI connectors at the end of the dongle connect to one or both of your graphics card’s DVI outputs and “steal” their signals.
The portal device comes with the following remoted ports (in addition to the Ethernet port that carries the remoted devices):
  • 4 x USB ports (2 on front, 2 on back)
  • 1 x Audio out (on the back)
  • 1 x Audio in + 1 x Audio out (on the front)
  • 2 x DVI ports (on the back, which correspond to the one or two DVI ports on the “remote” PC’s graphics card).

The portal device is kinda neat. I don’t know if Teradici is planning on selling it as a standalone unit, but if all you want is a remote desktop via the RDP protocol (MS Windows only), you could just buy the portal device and use it as a “dumb terminal” for your PC.

I did run into a small snag as I was setting up the portal. On one of the Ethernet ports in my office, which works perfectly with my other Macs and PCs, the portal device’s link light would not come on…..not sure why. The issue was quickly resolved (after a bit of head scratching) when I switched to another port. The big head-scratcher on the portal side was the fact that no matter what I did, my el-cheapo MS Digital Media 3000 keyboard and MS Comfort Optical 3000 mouse did not work on the portal’s USB ports. Not sure why, but one nice thing about my work is that I’ve got a private stash of just about every keyboard/mouse combination known to man, so I quickly swapped in an Apple keyboard and mouse and everything was happy.

The host card requires NO SOFTWARE, which is a blessing, but it does require you to read the manual on the supplied CD. I didn’t (since it looked so simple at first glance) and had some problems. It took about 30 minutes to realize that the card has a jumper that determines whether it gets its power from a small onboard power connector (the default, at least on my card) or from the PCI Express bus. Now, I realize that this might have been a demo card and consequently might have had the jumper in the wrong position, but I really hope the shipping cards default to drawing power from the PCI Express bus. Better yet, a small switch on the face plate would have done the job.

Once I figured out the power situation, the rest was a breeze. The “physical” machine all of a sudden found a couple of USB ports, an Apple mouse and keyboard, plus my LG W2252 panel, which was now listed as a secondary monitor (I used the primary DVI on the graphics card in clone mode to feed a “local” monitor next to the machine).

Well, now on to the tests. None of these scenarios are scientific; they are based on what I see students doing on a day-to-day basis. The “logical” distance between the portal and the remoted PC is four GigE switches, and they are on two different subnets. The portal is super snappy; the mouse and keyboard feel like they are hooked into the “real” machine. I even had a couple of our staff members come and test my new Quadzilla PC (I showed them the tiny portal device and told them it was a quad-core AMD machine) and they could not tell the difference. Once they were told about the remoting concept, I literally saw a couple of jaws drop. It is truly an amazing experience to sit in front of the portal and have a 1 ms delay on a routed/switched network connection across the building to the Quadzilla.

Now, our network is fast internally at the university (1 Gbps to every desktop with a 10 Gbps backbone), but the PCoIP system seems to work quite nicely even on 100 Mbps segments. Just for argument’s sake I grabbed a cheap Linksys 4-port 100 Mbps “switch/hub” and stuck it between the portal device and the wall connection, and I’m happy to report that there was absolutely NO difference in performance.

The hardest thing I’ve thrown at the system was playing back the HD versions of Big Buck Bunny and Elephants Dream, and aside from a super tiny delay there is no visual loss that I can see. The system uses about 50-65 Mbps of bandwidth in this default mode and delivers a solid 30 fps to the portal. Keep in mind that this is on scenes where literally every pixel on the screen is changing at 30 fps. Normal bandwidth usage is about 1-3 Mbps for average web browsing/Excel/Word applications, and there are options in the web-based admin interface to squeeze this down if you need to (the default is set to zero, meaning full speed ahead). I will cover the web-based admin interface in more detail as soon as I get a chance to play with it more.
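A quick back-of-the-envelope check puts those numbers in perspective: full-screen 1080p at 30 fps is about 1.5 Gbps uncompressed, so 50-65 Mbps works out to roughly a 25:1 reduction.

```python
# Raw bandwidth of an uncompressed 1920x1080, 24-bit, 30 fps stream:
raw_bps = 1920 * 1080 * 24 * 30   # ~1.49 Gbps
pcoip_bps = 60e6                  # midpoint of the observed 50-65 Mbps

print(f"raw: {raw_bps / 1e9:.2f} Gbps")            # raw: 1.49 Gbps
print(f"reduction: ~{raw_bps / pcoip_bps:.0f}:1")  # reduction: ~25:1
```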

The PCoIP system is Mac and PC compatible. I will be doing a Mac test run as soon as I get hold of the Mac firmware, so stay tuned.

All in all, I have to admit that the system has gone far beyond my expectations. I’m now dreaming of a day when PC graphics can be transmitted wirelessly right off the graphics card over a fast, low-latency wireless network….Mmmmm, wireless GPUs :-).

Sixth Sense: You really need to see this.

datePosted on 22:45, March 14th, 2009 by Many Ayromlou

Well, I watched it and came across a couple of comments. First was “Holodeck is now one step closer” and right below it “Skynet is nearly Self-aware”. I guess I’ll leave it up to you to decide if you want to go on a “Star” trek or Terminate now :-). Just watch it, it’s 8 minutes of wonder.
