Wednesday, January 26, 2011

G-Speak or Why John Underkoffler is my hero

Hi readers. To all my fellow geeks out there, this is mind-blowing!

Do you remember the film Minority Report? And do you remember the user interface (UI) that Tom Cruise's character uses, the one where he waves his hands around to manipulate the videos of future crimes? Well... John Underkoffler, the scientific advisor for the film and an inventor, has, together with his team at Oblong Industries, created a real-life version of the UI in the movie. It is called the g-speak (yes, with a small 'g') spatial operating environment. The inspirations for g-speak are many; among them is Underkoffler's conviction that we are overdue for a new operating system, since nothing fundamentally new has appeared since the creation of the Mac and Windows OS (and others with a similar look).

The other reason is the desire to introduce the concept of space to machines and programs. As Underkoffler puts it, programs and computers are "hideously insensate when it comes to space".

The g-speak SOE is made up of three parts. The first is gestural input/output, which pairs high-definition output with high-fidelity input. Input is by hand gestures, movement and pointing: finger and hand motions are tracked to 0.1 mm at 100 Hz, and the system supports two-handed and multi-user input. This effectively does away with the mouse and keyboard, although the software still accepts input from these two devices alongside the gestural input.
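Pointing at a wall-sized display in room coordinates ultimately comes down to simple geometry. The sketch below is my own illustration, not Oblong's actual API: given a tracked hand position and pointing direction, it finds where the pointing ray hits the screen plane.

```python
# Illustrative sketch (not g-speak's real code): intersecting a pointing
# ray with a screen plane, all in room coordinates (e.g. millimetres).

def point_on_screen(hand_pos, hand_dir, screen_origin, screen_normal):
    """All arguments are (x, y, z) tuples. Returns the 3-D point where the
    pointing ray meets the screen plane, or None if pointing away from it."""
    denom = sum(d * n for d, n in zip(hand_dir, screen_normal))
    if abs(denom) < 1e-9:
        return None  # ray is parallel to the screen
    diff = tuple(o - h for o, h in zip(screen_origin, hand_pos))
    t = sum(d * n for d, n in zip(diff, screen_normal)) / denom
    if t < 0:
        return None  # screen is behind the hand
    return tuple(h + t * d for h, d in zip(hand_pos, hand_dir))

# A hand one metre in front of a screen lying in the z=0 plane,
# pointing straight at it:
hit = point_on_screen((100.0, 500.0, 1000.0), (0.0, 0.0, -1.0),
                      (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```

A real system would run this at the tracker's full 100 Hz rate and map the 3-D hit point into the pixel grid of whichever screen was struck.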

The second part is 'recombinant networking', meaning that the g-speak platform allows for multi-computer collaboration: data can be displayed and shared among many devices. Recombinant networking also means that the platform supports the integration of legacy (old) applications into g-speak; an existing application can be adapted with very little new code.

The third part is 'real world pixels'. This means that the platform can recognise real world objects and can accept input from them. G-speak can also work with multiple screens.

In the video below, John Underkoffler demonstrates the g-speak platform and recounts the origins of g-speak. Another mind-blowing video:




And here is an overview of the g-speak:

g-speak overview 1828121108 from john underkoffler on Vimeo.


Story sources: http://oblong.com/

http://oblong.com/blog/

http://www.ted.com/talks/lang/eng/john_underkoffler_drive_3d_data_with_a_gesture.html

SixthSense

Hi readers. I would like to present to you SixthSense, made by Pranav Mistry and his team at the Fluid Interfaces Group at the MIT Media Lab. SixthSense is a wearable device that allows the user to interact with digital information that is overlaid onto the real world. The user interacts using natural hand gestures. This in effect is a form of augmented reality.

The SixthSense prototype consists of a pocket projector, a mirror and a camera. These are arranged in a pendant-like wearable device and connected to a mobile computing device in the user's pocket. The projector displays various kinds of digital information onto almost any available surface, such as a wall, a piece of paper, or even your hand, and these surfaces then serve as interfaces. The camera, meanwhile, tracks the user's hand gestures and the objects in the surroundings. The system follows hand gestures with the aid of coloured markers placed on the tips of the user's fingers. Multi-touch and multi-user interaction are also possible, as the system can track any number of uniquely coloured markers.
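The colour-marker tracking described above boils down to finding, in each camera frame, the centre of the pixels matching each fingertip's colour. Here is a toy sketch of that idea (my own illustration, not Mistry's code; a real implementation would run on live video, typically with a library like OpenCV, with one colour per finger):

```python
# Illustrative colour-marker tracking: find the centroid of all pixels
# whose colour is within a tolerance of the marker colour.

def marker_centroid(image, target, tolerance=30):
    """image: 2-D grid of (r, g, b) pixels. Returns the (row, col) centroid
    of pixels within `tolerance` of `target` on every channel, or None."""
    rows, cols, count = 0, 0, 0
    for r, row in enumerate(image):
        for c, pixel in enumerate(row):
            if all(abs(p - t) <= tolerance for p, t in zip(pixel, target)):
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None
    return (rows / count, cols / count)

# Tiny 3x3 "frame" with a red marker occupying the right-hand column:
RED, BLACK = (255, 0, 0), (0, 0, 0)
frame = [[BLACK, BLACK, RED],
         [BLACK, BLACK, RED],
         [BLACK, BLACK, RED]]
centroid = marker_centroid(frame, RED)
```

Running the same search once per marker colour gives one fingertip position per colour, which is what makes the multi-finger, multi-user interaction possible.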

There are many useful and fun applications for this system. It lets users carry a computer with them, with the digital information projected into the real world rather than confined to a screen. For example, the user can ask the system to project a map onto any surface and manipulate it by hand gestures: to zoom in, simply point two fingers at the map and increase the distance between them. Hand gestures can also be interpreted as instructions. For instance, drawing a circle on your wrist projects an analog watch. Users can take pictures using a 'framing' gesture and view the photos on any available surface. The system can also present more information about an object by projecting it onto the object itself; a newspaper, for example, can show live news video connected to the piece the reader is looking at.
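The pinch-to-zoom gesture above has pleasingly simple arithmetic behind it. As a hypothetical sketch (not SixthSense's actual code), the zoom factor is just the ratio of the current finger separation to the separation when the gesture began:

```python
import math

# Illustrative pinch-to-zoom: scale follows the ratio of fingertip distances.

def finger_distance(a, b):
    """a, b: (x, y) fingertip positions in pixels."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def zoom_factor(start_a, start_b, now_a, now_b):
    """Ratio of current finger separation to the separation at gesture start."""
    start = finger_distance(start_a, start_b)
    if start == 0:
        return 1.0  # degenerate gesture; leave the map unchanged
    return finger_distance(now_a, now_b) / start

# Fingers start 100 px apart and spread to 200 px: the map doubles in scale.
factor = zoom_factor((0, 0), (100, 0), (0, 0), (200, 0))
```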

By the way, Pranav Mistry says on his website that the prototype can be put together for only USD 350. He even plans to make the system open source and will soon post instructions showing us regular people how to build our own!

To be honest, all these words do not do justice to this jaw-dropping technology. So, here are two videos of Pranav Mistry and Pattie Maes (Mistry's boss) demonstrating the technology:







Dean Kamen and his prosthetic arm

Hi readers. According to a very senior person in the US Department of Defense, about 1,600 soldiers have come home to the States missing at least one full arm, from the shoulder to the fingers. About 24 of those 1,600 will have lost both arms. And all the military has been able to give them are crude prosthetic arms.


After some persuading, Dean Kamen, founder of DEKA Research and Development, and his team created the DEKA Arm.




The Arm by the way is funded by the Defense Advanced Research Projects Agency and the US Army Research Office.


The Arm has 14 degrees of freedom, as opposed to the 21 degrees of freedom in the human arm; however, Kamen assures us that we don't miss the degrees of freedom in the last two fingers. Each of the 14 actuators in the arm can sense temperature and pressure, so the arm can tell whether the object it is holding is soft or hard.
The Arm won't look like the one in the picture above. Instead, DEKA will take CAT and MRI scans of the person's good arm and produce a silicone rubber sleeve to coat the Arm, which is then painted to replicate the look of the other, good arm.
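To see why pressure sensing lets the arm tell soft from hard, consider a toy model (my own sketch, not DEKA's control code): as the fingers close, a hard object makes the sensed pressure rise steeply, while a soft object yields, so pressure rises slowly. Comparing that slope to a threshold gives a crude classification.

```python
# Toy soft/hard grip classifier: stiffness = pressure rise per mm of closure.
# The threshold and units are illustrative assumptions, not DEKA parameters.

def classify_grip(closure_mm, pressure_kpa, stiffness_threshold=5.0):
    """closure_mm / pressure_kpa: equal-length samples taken while closing.
    Returns 'hard' if pressure rises faster than the threshold per mm."""
    dc = closure_mm[-1] - closure_mm[0]
    dp = pressure_kpa[-1] - pressure_kpa[0]
    if dc == 0:
        return 'unknown'  # fingers haven't moved; nothing to infer yet
    return 'hard' if dp / dc > stiffness_threshold else 'soft'

# Squeezing 4 mm into a rubber ball vs. a steel rod:
soft = classify_grip([0, 2, 4], [0, 3, 6])    # 1.5 kPa/mm
hard = classify_grip([0, 2, 4], [0, 30, 60])  # 15 kPa/mm
```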
For a demonstration of the arm:





Tuesday, January 25, 2011

3D Sound

Hi readers. In a sign that companies are trying to 3D-fy everything, BBC Radio began testing 3D sound technology in December. According to my source article, the technology will deliver surround sound from broadcasts through the use of specially positioned speakers. Public testing is still a long way off, although the BBC has already tried it out on a few listeners.

With this technology, the developers hope that online listeners with the proper equipment will get a more immersive sound experience. Car radios, where surround sound is not really effective, also stand to gain. The developers aim to bring the technology to TV broadcasts as well.
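Positional audio of any kind ultimately comes down to distributing a source's energy across speakers. A minimal, standard example (not the BBC's actual algorithm) is constant-power stereo panning: the gains follow a cosine/sine law so the perceived loudness stays constant as a sound moves from left to right.

```python
import math

# Constant-power panning: left^2 + right^2 == 1 for every position,
# so total acoustic power is the same wherever the source sits.

def pan_gains(position):
    """position: -1.0 (hard left) .. +1.0 (hard right).
    Returns (left_gain, right_gain)."""
    angle = (position + 1.0) * math.pi / 4.0  # map position to 0..pi/2
    return (math.cos(angle), math.sin(angle))

left, right = pan_gains(0.0)  # a centred source gets equal gains
```

Surround and 3D systems generalise the same idea to more speakers (and, for headphones, to frequency-dependent filters that mimic how our ears localise sound).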

All this sounds much like current surround sound technology, which makes me side with my source's author in saying that whether it catches on remains to be seen.

Story source:

http://techland.time.com/2010/12/16/are-you-ready-for-3d-radio/

Voice Control for your car




Hi readers. Ford announced at the recent Consumer Electronics Show that its 2012 Mustangs will come with the SYNC software system pre-installed. The technology, by the way, is already included in the current line-up of Ford Fiestas. The SYNC system lets users control their entertainment system using the voice recognition software on smartphones that have the SYNC AppLink app installed.

Once a smartphone is plugged in, its screen blacks out and the driver can control it using the car's entertainment touch screen or by voice. The SYNC system gives drivers access to their phone apps plus a voice-activated navigation system.

On the downside, the system currently only works with Android and BlackBerry phones, with an iPhone version coming soon. What about Nokia?

All in all though, this is pretty cool. Any bets on when they'll create a car like KITT from Knight Rider?

Story source: http://techland.time.com/2011/01/06/voice-control-comes-to-mustang-with-ford-sync-phone-app/

Next-Gen Goggles


Hi readers. Here's something new: Recon Instruments' Transcend goggles. They are GPS-enabled and show real-time data on a heads-up display built into the goggles themselves. Among the data displayed are speed, latitude/longitude, altitude, vertical distance travelled, temperature, time and a stopwatch/timer.
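Most of those readouts fall straight out of successive GPS fixes. As an illustration (my own sketch, not Recon's firmware), speed can be derived by taking two timestamped latitude/longitude fixes, computing the great-circle (haversine) distance between them, and dividing by the elapsed time:

```python
import math

# Speed from two GPS fixes via the haversine great-circle distance.

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two points given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def speed_kmh(fix1, fix2):
    """Each fix is (lat_deg, lon_deg, time_s). Returns speed in km/h."""
    dist = haversine_m(fix1[0], fix1[1], fix2[0], fix2[1])
    dt = fix2[2] - fix1[2]
    return (dist / dt) * 3.6 if dt > 0 else 0.0

# Two fixes 10 seconds apart on a ski run (about 111 m covered):
v = speed_kmh((46.0000, 7.0000, 0.0), (46.0010, 7.0000, 10.0))
```

Altitude, vertical distance and the stopwatch come similarly cheaply once the GPS stream and a clock are available.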

The goggles run the Android OS, which lets programmers create personalized apps that can be downloaded onto the goggles. They will also have integrated maps and a buddy-finding system. As if that weren't enough to blow customers away, the goggles can record video, and users can access their messages, contacts and music files through Bluetooth.

There are two versions of this product. One, the SPPX, has a lens that automatically adjusts to the brightness of the surroundings; it retails for USD 500. The SPX version, which has only a polarized lens, will sell for USD 400.

Time to start writing that Christmas wish list, folks.

Story source: http://techland.time.com/2010/09/30/recon-instruments-goggles-a-gps-based-dashboard-for-your-eyeball/

http://techland.time.com/2011/01/06/high-tech-goggles-coming-soon-next-gen-gps-technology-bluetooth-android/

Wireless Electricity

Hi readers. This is actually old news, but I just found out about it and it amazed me. What I'm talking about is wireless electricity, or what its creators call WiTricity.

The technology was created by Dr. Marin Soljacic and a group of theoretical physicists at the Massachusetts Institute of Technology, who managed to light a 60-watt lightbulb from 2 metres away at an efficiency of about 50%. The inspiration came when Dr. Soljacic was woken for the third night in a row by his wife's phone beeping because it was running out of power. With all the electricity flowing around the house, he thought, why couldn't the phone charge itself so he could sleep?

The technology works by what the team calls resonant energy transfer. It builds on the principle behind standard transformers, which step the electrical energy in an alternating current up or down: the coils in a transformer transfer energy, but only over very short distances.

So Dr. Soljacic figured out how to make coils transfer energy over much larger distances by using resonance.

In simple terms, this is how it works. A coil is driven by a radio-frequency amplifier so that it resonates, pulsing its magnetic field at a very high alternating-current frequency. Bring another device nearby that is tuned to work at exactly that frequency, and the two will couple strongly, allowing magnetic energy to be transferred to the device. In other words, electrical energy is turned into magnetic energy, and the receiving device turns that magnetic energy back into electrical energy it can use.
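The resonance at the heart of the scheme is that of an LC circuit: a coil with inductance L and capacitance C "rings" at f = 1 / (2π√(LC)), and only a receiver tuned to the same frequency couples strongly. The component values below are illustrative, not WiTricity's actual coil parameters:

```python
import math

# Resonant frequency of an LC circuit: f = 1 / (2*pi*sqrt(L*C)).

def resonant_frequency_hz(inductance_h, capacitance_f):
    """inductance in henries, capacitance in farads; returns hertz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# A 25 uH coil with 100 pF of capacitance resonates at a few MHz,
# the general frequency range the MIT demonstration operated in:
f = resonant_frequency_hz(25e-6, 100e-12)
```

Tuning both coils to the same f is what makes the transmitter couple to the intended receiver while mostly ignoring everything else in the room.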

For a demonstration of the technology and a more detailed explanation, watch this video:





Story source: http://www.ted.com/talks/eric_giler_demos_wireless_electricity.html

Monday, January 24, 2011

Finally! Mind-control!


Hi readers.

The above picture is the EPOC neuroheadset from Emotiv, a new personal interface for human-computer interaction. It works somewhat like a traditional EEG, which reads and records brain waves. With an EEG, however, medical personnel must attach a hairnet of sensors to the scalp using a conductive gel, which takes time and is not very comfortable. A full EEG headset also costs thousands of dollars.

The EPOC neuroheadset, however, costs only USD 300. The developers at Emotiv have also achieved a breakthrough with an algorithm that 'unfolds' the brain, so that electrical signals can be mapped closer to their sources and more effectively. You see, most of the brain's important functions are located at its surface, but our brains are folded differently from one another, so the location of the signal for a particular thought or action differs from person to person. The unfolding algorithm lets the headset work for many people.

The EPOC neuroheadset is a 14-channel, high-resolution, neuro-signal acquisition and processing headset. It is wireless, giving the user complete freedom of movement. Its 14 sensors are positioned for accurate spatial resolution, and a gyroscope provides positional information for cursor and camera controls. The dongle is USB-compatible, and the battery provides 12 hours of use.
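Processing signals like these typically starts by measuring how much power a channel carries at a frequency of interest (for example the roughly 10 Hz alpha rhythm). A compact, standard way to do that for a single frequency is the Goertzel algorithm, sketched below. This is a generic signal-processing illustration, not Emotiv's proprietary pipeline:

```python
import math

# Goertzel algorithm: relative power of a sampled signal at one frequency.

def goertzel_power(samples, sample_rate, target_hz):
    """Returns the (unnormalised) power of `samples` at `target_hz`."""
    n = len(samples)
    k = int(0.5 + n * target_hz / sample_rate)  # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# One second of a pure 10 Hz wave sampled at 128 Hz has far more power
# at 10 Hz than at 20 Hz:
rate = 128
signal = [math.sin(2 * math.pi * 10 * t / rate) for t in range(rate)]
p10 = goertzel_power(signal, rate, 10.0)
p20 = goertzel_power(signal, rate, 20.0)
```

Band powers like these, computed per channel, are the sort of features a classifier can then be trained on to recognise a particular user's mental commands.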

After training the machine to recognise your particular brain waves, you can use them to control many different types of applications and programs. Among the many uses would be creating art, illustrations or music with your mind.

Of course, for the gamers out there, you can now play games using your thoughts or facial expressions. Real-world applications also abound: using the headset, it is now possible to control robots or run a smart home with just a thought.

Finally, this product could allow the disabled to control a wheelchair, communicate with others and essentially live a normal life.

Of course, the uses don't stop here. With increased cooperation among programmers and researchers, who knows what this machine could be used for.

Below is a video of Ms Tan Le, head of Emotiv, demonstrating the device:



Story source: http://www.emotiv.com/

Sunday, January 23, 2011

The Fourth Dimension



Hello readers. I'm sure you've all noticed that 3D is being marketed as the next big thing, what with 3D TVs and 3D glasses being produced in bulk. The recent CES even showcased glasses-free 3D entertainment.


And although the 3D revolution is only just heating up, one forward-looking company, Scent Sciences, is already offering ScentScape, a machine which the company says will provide "the extra dimension of scent to gaming, entertainment and other consumer markets." The best thing about it is that it'll cost only USD 70.


Unfortunately, ScentScape currently works only with Windows; users can't plug it into their Macs or home theater systems just yet. It comes with 20 basic scents per cartridge, and a cartridge will last at least 200 hours even with heavy use. The company can also customize the scents to your requirements. The machine comes with a separate "volume control" to tweak the strength of the smells, and it is quite small (3.5 x 4.25 x 5.5 inches).

With this machine, users can experience a truly immersive game: when synced to the game, they can smell burning rubber while drifting cars, or catch the scent of the ocean in the background. And for a further USD 20, users can get the ScentEditor system, which lets you add scents to your home videos.

Only time will tell if this device will take off.

Story source: http://scentsciences.com/

Thursday, January 20, 2011

Perceptive Pixel

Hello readers! Glad you stopped by. This is my first post for this blog. I am not much of a blogger so forgive me for any clunkiness. I will be writing mainly about technological advancements. These will include high tech devices for both commercial and non-commercial uses. Whenever possible, I'll write about products that may not be household names.

My blog is titled 'Not Science Fiction' as sometimes the developments in the world of science and technology can sound so amazing that some of us may think it is a hoax or something out of a science fiction novel. Therefore, the title acts as an assurance to you readers that whatever is written here is 'not science fiction'.

First up are two products from a company called Perceptive Pixel. The company, founded by Jeff Han, is considered a leader in the research, development and production of multi-touch interfaces (eat your heart out, Apple!).

The two products are the Multi-Touch LCD Display and the Multi-Touch Wall. Jeff Han demonstrated the underlying multi-touch technology at a 2006 TED conference. The products are used mainly in medical imaging, defense, geo-intelligence, broadcast and various other non-commercial fields.

The Display and the Wall are pressure sensitive rather than heat sensitive, so one can use a glove or a stylus to control them. They also allow many users to work on the screen together, as the screen supports an unlimited number of touch points.
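Supporting many simultaneous fingers means the system must decide, frame after frame, which detected contact belongs to which finger. A common baseline for this (my own sketch, not Perceptive Pixel's SDK) is greedy nearest-neighbour matching on position:

```python
# Assign stable touch ids across frames by matching each tracked finger
# to the nearest new detection; unmatched detections get fresh ids.

def match_contacts(tracked, detections, max_dist=50.0):
    """tracked: {touch_id: (x, y)} from the previous frame.
    detections: list of (x, y) found in the current frame.
    Returns an updated {touch_id: (x, y)}; lifted fingers are dropped."""
    updated, used = {}, set()
    next_id = max(tracked, default=0) + 1
    for tid, (tx, ty) in tracked.items():
        best, best_d = None, max_dist
        for i, (x, y) in enumerate(detections):
            d = ((x - tx) ** 2 + (y - ty) ** 2) ** 0.5
            if i not in used and d < best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            updated[tid] = detections[best]
    for i, pos in enumerate(detections):
        if i not in used:  # a new finger touched down
            updated[next_id] = pos
            next_id += 1
    return updated

# Two fingers move slightly while a third touches down:
prev = {1: (10.0, 10.0), 2: (100.0, 100.0)}
now = match_contacts(prev, [(12.0, 11.0), (101.0, 99.0), (300.0, 300.0)])
```

With stable ids in hand, gestures such as pinches, rotations and multi-user drags can each be interpreted per finger pair.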

Perceptive Pixel also includes a Software Development Kit (SDK) so users can integrate their existing software with the Wall or create entirely new software for their purposes.

And the best thing about it, I think, is that it uses natural movements and does away with the keyboard and mouse.

Of course, all these words probably won't impress you much, so below is a video from YouTube showing some of the Wall's capabilities. Trust me, it'll blow your mind:





Story Source: http://www.perceptivepixel.com/