Wednesday, January 26, 2011

G-Speak or Why John Underkoffler is my hero

Hi readers. To all my fellow geeks out there, this is mind-blowing!

Do you remember the film Minority Report? And do you remember the user interface (UI) that Tom Cruise's character uses, the one where he waves his hands around to manipulate the videos of future crimes? Well... John Underkoffler, an inventor and the scientific advisor for the film, has, together with his team at Oblong Industries, created a real-life version of the UI in the movie. It is called the g-speak (yes, in lower case) spatial operating environment. The inspirations for g-speak are many. One is Underkoffler's view that it is time for a new operating system, since nothing fundamentally new has appeared since the creation of the Mac and Windows OS (and others with a similar look).

The other is the desire to introduce the concept of space to machines and programs. As Underkoffler puts it, programs and computers are "hideously insensate when it comes to space".

The g-speak SOE is made up of three parts. The first is gestural input/output, which pairs high-definition output with high-fidelity input. Input is by hand gestures, movement and pointing: finger and hand motions are tracked to 0.1 mm at 100 Hz, and the system supports two-handed and multi-user input. This effectively gets rid of the mouse and keyboard, although the software still allows input from those two devices alongside the gestural input.
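To give a feel for the kind of spatial maths that pointing input involves, here is a minimal sketch of my own (not Oblong's code): given a tracked hand position and pointing direction, find where the pointing ray hits a flat screen.

```python
# Minimal illustrative sketch (not Oblong's actual code): intersect a
# tracked pointing ray with a screen to find the pointed-at spot.
# The screen is modelled as the plane z = 0; positions are in millimetres.

def point_on_screen(hand_pos, direction):
    """Return the (x, y) point where the pointing ray hits the
    screen plane z = 0, or None if the ray points away from it."""
    hx, hy, hz = hand_pos
    dx, dy, dz = direction
    if dz >= 0:          # parallel to, or pointing away from, the screen
        return None
    t = -hz / dz         # distance along the ray to reach z = 0
    return (hx + t * dx, hy + t * dy)

# A hand 2 metres from the screen, pointing straight at it:
print(point_on_screen((100.0, 50.0, 2000.0), (0.0, 0.0, -1.0)))
```

In a real system this calculation would run on every tracking sample (100 times a second here), so the cursor follows the hand with no perceptible lag.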

The second part is 'recombinant networking', meaning that the g-speak platform allows for multi-computer collaboration: data can be displayed and shared among many devices. Recombinant networking also means the platform supports the integration of legacy (old) applications into g-speak, and a legacy application can be adapted with very little new code.

The third part is 'real-world pixels'. This means that the platform can recognise real-world objects and accept input from them. G-speak can also work with multiple screens.

In the video below, John Underkoffler demonstrates the g-speak platform and tells the story of g-speak's origins. Another mind-blowing video:




And here is an overview of the g-speak:

g-speak overview 1828121108 from john underkoffler on Vimeo.


Story sources: http://oblong.com/

http://oblong.com/blog/

http://www.ted.com/talks/lang/eng/john_underkoffler_drive_3d_data_with_a_gesture.html

SixthSense

Hi readers. I would like to present to you SixthSense, made by Pranav Mistry and his team at the Fluid Interfaces Group at the MIT Media Lab. SixthSense is a wearable device that allows the user to interact with digital information that is overlaid onto the real world. The user interacts using natural hand gestures. This in effect is a form of augmented reality.

The SixthSense prototype consists of a pocket projector, a mirror and a camera. All of these are arranged in a pendant-like wearable device and are connected to a mobile computing device in the user's pocket. The projector displays various kinds of digital information onto almost any available surface: a wall, a piece of paper, or even your hand. These surfaces can then be used as interfaces. The camera, meanwhile, tracks the user's hand gestures and the objects in the surroundings. The system tracks hand gestures with the aid of coloured markers placed on the tips of the user's fingers. Multi-touch and multi-user interaction are also possible because the system can track any number of uniquely coloured markers.
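To give a feel for how tracking a coloured fingertip marker might work, here is a toy sketch of my own (not Mistry's actual code): keep the pixels whose colour is close to the marker's colour, then take the centroid of those pixels as the fingertip position.

```python
# Toy illustrative sketch (not the actual SixthSense code): find a
# coloured fingertip marker in a frame by thresholding on colour,
# then taking the centroid of the matching pixels.

def find_marker(frame, target, tol=30):
    """frame: 2D grid of (r, g, b) pixels. Returns the (row, col)
    centroid of pixels within `tol` of `target` per channel, or None."""
    hits = []
    tr, tg, tb = target
    for r, row in enumerate(frame):
        for c, (pr, pg, pb) in enumerate(row):
            if abs(pr - tr) <= tol and abs(pg - tg) <= tol and abs(pb - tb) <= tol:
                hits.append((r, c))
    if not hits:
        return None
    return (sum(r for r, _ in hits) / len(hits),
            sum(c for _, c in hits) / len(hits))

# A tiny 3x3 "frame": black everywhere except a red marker in one corner.
black, red = (0, 0, 0), (255, 40, 40)
frame = [[black, black, black],
         [black, black, red],
         [black, red,   red]]
print(find_marker(frame, red))
```

Running one such detector per marker colour, frame after frame, gives the stream of fingertip positions that the gesture recognition is built on.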

There are many useful and fun applications of this system. It lets users carry a computer with them, but with the digital information projected into the real world rather than confined to a screen. For example, the user can ask the system to project a map onto any surface and then manipulate the map by hand gestures: to zoom in, simply point two fingers at the map and increase the distance between them. Hand gestures can also be interpreted as instructions. For instance, drawing a circle on your wrist will project an analog watch there. Users can take pictures using a 'framing' gesture, and the photos can be viewed on any available surface. The system can also present more information about an object by projecting the information onto the object itself; for example, a newspaper can show live news video connected to the piece the reader is looking at.
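The pinch-to-zoom idea above boils down to very simple arithmetic: the zoom factor is just the ratio of the current fingertip distance to the fingertip distance when the gesture started. A small sketch of my own illustrating the idea (not the SixthSense source):

```python
import math

def zoom_factor(start_a, start_b, now_a, now_b):
    """Map a two-finger gesture to a zoom factor: the ratio of the
    current fingertip distance to the starting fingertip distance."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return dist(now_a, now_b) / dist(start_a, start_b)

# Fingers start 100 px apart and spread to 200 px apart -> 2x zoom in.
print(zoom_factor((0, 0), (100, 0), (0, 0), (200, 0)))  # -> 2.0
```

Pinching the fingers together gives a factor below 1, i.e. a zoom out, so one formula covers both directions of the gesture.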

By the way, Pranav Mistry says on his website that the prototype can be put together for only USD 350. He even plans to make the system open source and will soon post instructions showing us regular people how to build our own prototype!

To be honest, all these words do not do justice to this jaw-dropping technology. So, here are two videos of Pranav Mistry and Pattie Maes (Mistry's boss) demonstrating the technology:







Dean Kamen and his prosthetic arm

Hi readers. According to a very senior person in the US Department of Defense, 1,600 soldiers have come home to the States missing at least one full arm, from the shoulder to the fingers. About 24 of those 1,600 will have lost both arms. And all the military has been able to give them are crude prosthetic arms.


After some persuading, Dean Kamen, founder of DEKA Research and Development, and his team created the DEKA Arm.




The Arm, by the way, is funded by the Defense Advanced Research Projects Agency and the US Army Research Office.


The Arm has 14 degrees of freedom, as opposed to the 21 degrees of freedom in the human arm. However, Kamen assures us that we don't really need the degrees of freedom in the last two fingers. Each of the 14 actuators in the arm can sense temperature and pressure, so the arm can tell whether the object it is holding is soft or hard.
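The point of putting pressure sensing in each actuator is closed-loop grip control: squeeze until the sensed pressure reaches a target, then stop, so an egg and a hammer get different grip forces. A toy proportional-control sketch of my own (not DEKA's actual controller):

```python
# Toy illustrative sketch (not DEKA's control code): close the grip
# until the sensed pressure reaches a target, using simple
# proportional control on the pressure error.

def close_grip(target_pressure, sense, step_gain=0.5, max_steps=100):
    """Increase the grip force until sense(force) reports roughly the
    target pressure. Returns the final force applied."""
    force = 0.0
    for _ in range(max_steps):
        error = target_pressure - sense(force)
        if abs(error) < 0.01:
            break                     # close enough: stop squeezing
        force += step_gain * error    # push harder (or ease off) in proportion
    return force

# Pretend the sensed pressure simply equals the applied force (a rigid object):
final = close_grip(target_pressure=2.0, sense=lambda f: f)
print(round(final, 2))  # settles near the 2.0 target
```

With a softer object, `sense` would report less pressure for the same force, so the same loop would automatically squeeze further before stopping.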
The Arm won't look like it does in the picture above. Instead, DEKA will conduct CAT and MRI scans of the person's good arm and produce a silicone rubber coating for the Arm. The coating will then be painted to replicate the look of the good arm.
For a demonstration of the arm:





Tuesday, January 25, 2011

3D Sound

Hi readers. In a sign that companies are trying to 3D-fy everything, BBC Radio began testing 3D sound technology in December. According to my source article, the technology allows surround sound to be delivered in broadcasts through the use of specially positioned speakers. Public testing is still a long way off, although the BBC has already tried it out on a small group of listeners.

With this technology, the developers hope that online listeners with the proper equipment will get a more immersive sound experience. Car radios also stand to gain, since surround sound is not really effective in cars. The developers also aim to bring the technology to TV broadcasts.

All this sounds a lot like existing surround sound technology, which makes me side with the author of my source article: whether this becomes popular remains to be seen.

Story source:

http://techland.time.com/2010/12/16/are-you-ready-for-3d-radio/

Voice Control for your car




Hi readers. Ford announced at the recent Consumer Electronics Show that its 2012 Mustangs will come with the SYNC software system pre-installed. This technology, by the way, is already included in the current line-up of Ford Fiestas. The SYNC system allows drivers to control their entertainment system using the voice recognition software in their smartphones, via the SYNC AppLink phone app.

Once the smartphone is plugged in, its screen blacks out and the driver can control it using the car's entertainment touch screen or by voice. The SYNC system gives drivers access to their phone apps plus a voice-activated navigation system.

On the downside, the system currently only works with Android and BlackBerry phones, with an iPhone version coming out soon. What about Nokia?

All in all though, this is pretty cool. Any bets on when they'll create a car like KITT from Knight Rider?

Story source: http://techland.time.com/2011/01/06/voice-control-comes-to-mustang-with-ford-sync-phone-app/

Next-Gen Goggles


Hi readers. Here's something new: Recon Instruments' Transcend goggles. They are GPS-enabled and show real-time data on a heads-up display built into the goggles themselves. Among the data displayed are speed, latitude/longitude, altitude, vertical distance travelled, temperature, time and a stopwatch/timer.

The goggles will run the Android OS, which lets programmers create personalised apps that can be downloaded onto the goggles. They will also have integrated maps and a buddy-finding system. As if that weren't enough to blow customers away, the goggles can also record video, and users can access their messages, contacts and music files through Bluetooth.

There are two versions of this product. One, the SPPX, has a lens that automatically adjusts to the brightness of the surroundings; it retails for USD 500. The SPX version, which only has a polarized lens, will sell for USD 400.

Time to start writing that Christmas wish list, folks.

Story source: http://techland.time.com/2010/09/30/recon-instruments-goggles-a-gps-based-dashboard-for-your-eyeball/

http://techland.time.com/2011/01/06/high-tech-goggles-coming-soon-next-gen-gps-technology-bluetooth-android/

Wireless Electricity

Hi readers. This is actually old news, but I just found out about it and it amazed me. What I'm talking about is wireless electricity, or what its creators call WiTricity.

This technology was created by Dr. Marin Soljacic and a group of theoretical physicists at the Massachusetts Institute of Technology, who managed to light a 60-watt lightbulb from 2 metres away. The efficiency they achieved at the time was about 50%. The inspiration came when Dr. Soljacic was woken for the third night in a row by his wife's phone beeping because it was running out of power. With all the electricity flowing around the house, he thought, why couldn't the phone charge itself so he could sleep?

The technology works by what the team calls resonant energy transfer, an extension of how standard transformers work. Transformers increase or decrease the voltage of an alternating current, and the coils inside a transformer transfer energy across very short distances.

So Dr. Soljacic figured out how to make transformer-like coils transfer energy over much larger distances by using resonance.

In simple terms, this is how it works. A coil is driven by a radio-frequency amplifier so that it resonates, producing a magnetic field that pulses at a very high alternating-current frequency. If you bring another device near the coil that is tuned to work at exactly that frequency, the device and the coil couple strongly, and magnetic energy can be transferred to the device. In other words, electrical energy is turned into magnetic energy at the coil, and the device turns that magnetic energy back into electrical energy it can use.
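The key to the strong coupling above is that both coils are tuned to the same resonant frequency, which for a simple coil-plus-capacitor circuit is f = 1/(2π√(LC)). A quick back-of-the-envelope calculation (the component values here are illustrative picks of mine, not WiTricity's actual design):

```python
import math

def resonant_frequency(inductance_h, capacitance_f):
    """Resonant frequency f = 1 / (2*pi*sqrt(L*C)) of an LC circuit,
    with inductance in henries and capacitance in farads."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values: a 25 microhenry coil with a 10 picofarad capacitor.
f = resonant_frequency(25e-6, 10e-12)
print(f"{f / 1e6:.1f} MHz")  # a resonance in the ~10 MHz range
```

Two coils built to resonate at the same frequency exchange energy efficiently, while everything else nearby (people, furniture) is far off resonance and absorbs almost nothing, which is why the scheme is safe and selective.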

For a demonstration of the technology and a more detailed explanation, watch this video:





Story source: http://www.ted.com/talks/eric_giler_demos_wireless_electricity.html