Tuesday, March 24, 2009

Sparkle Brings Second Life to the iPhone and iPod Touch

Genkii has launched a new iPhone application in the App Store called Sparkle IM. It is a Second Life client for the iPhone and iPod Touch that lets users send and receive IMs directly on the mobile device.

Sparkle IM is just the first virtual-world app from Genkii; the full Sparkle Second Life iPhone client is expected to add 3D presence as well. Stay tuned!

Virtual worlds could also be rendered on the server side, as demonstrated earlier by Vollee and Liveplace.

Tuesday, March 17, 2009

Virtual Cocoon Helmet to Mimic All Five Senses in Virtual Worlds

We rely on our senses to interact with the world around us. Do we actually need to travel somewhere to experience it fully? The "…towards Real Virtuality" EPSRC Research Cluster project aims to develop a "virtual cocoon" through which people can interact naturally with the world without actually traveling or being put in a particular, potentially dangerous, real situation. All five senses will be stimulated to provide a rich sensory "real virtuality" experience.

The virtual cocoon will revolutionize the way in which we do business by providing low-cost, high-confidence, high-quality multi-sensory knowledge directly at your current location. This will significantly change, for example, purchasing via the internet, because you could smell the flowers, feel the fabric of a dress, try out a sofa for comfort, or examine products in any desired lighting condition, all before you buy them. The virtual cocoon could even be linked to, for example, Google Earth, to enable you to investigate the ambiance of a restaurant on the other side of the world when you are planning your trip.

The Virtual Cocoon headset is a nice accessory for 3D immersive surfing of the metaverse. It is a great gadget for gaming, entertainment and virtual tourism. Too bad that it is somewhat expensive for now.

Thursday, March 12, 2009

SixthSense Wearable Device by MIT Media Lab

At the MIT Media Lab's new Fluid Interfaces Group, Pattie Maes researches the tools we use to work with information and connect with one another. She works with Pranav Mistry, the genius behind SixthSense, a wearable device that enables new interactions between the real world and the world of data. Their SixthSense demo was the buzz of this year's TED conference. It's a wearable device with a projector that paves the way for profound interaction with our environment. Imagine "Minority Report" and then some.

We've evolved over millions of years to sense the world around us. When we encounter something, someone or some place, we use our five natural senses to perceive information about it; that information helps us make decisions and choose the right actions to take. But arguably the most useful information is not naturally perceivable with our five senses, namely the data, information and knowledge that mankind has accumulated about everything, which is increasingly available online. Although the miniaturization of computing devices allows us to carry computers in our pockets, there is no link between our digital devices and our interactions with the physical world. SixthSense bridges this gap, bringing intangible, digital information out into the tangible world, and allowing us to interact with this information via natural hand gestures. SixthSense frees information from its confines by seamlessly integrating it with reality, and thus making the entire world your computer.

The SixthSense prototype consists of a pocket projector, a mirror and a camera. The hardware components are coupled in a pendant-like, mobile wearable device. Both the projector and the camera are connected to the mobile computing device in the user's pocket. The projector projects visual information, enabling surfaces, walls and physical objects around us to be used as interfaces, while the camera recognizes and tracks the user's hand gestures and physical objects using computer-vision techniques. The software processes the video stream captured by the camera and tracks the locations of the colored markers on the tips of the user's fingers. These movements and arrangements are interpreted as gestures that act as interaction instructions for the projected application interfaces.
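The marker-tracking step described above boils down to a color threshold followed by a centroid computation per frame. The sketch below is a minimal pure-NumPy illustration of that idea; the function names, the per-pixel color tolerance, and the coarse gesture labels are assumptions of mine, not the actual SixthSense software, which uses far more robust computer-vision tracking.

```python
import numpy as np

def track_marker(frame_rgb, marker_rgb, tol=30):
    """Return the (x, y) centroid of pixels within `tol` of the
    fingertip marker's color, or None if no marker is visible.
    (Hypothetical sketch of color-based marker tracking.)"""
    diff = np.abs(frame_rgb.astype(int) - np.asarray(marker_rgb)).max(axis=-1)
    ys, xs = np.nonzero(diff <= tol)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def classify_motion(prev_xy, curr_xy, threshold=10):
    """Map the marker's displacement between two frames onto a coarse
    gesture label, mimicking how tracked motion drives the interface."""
    dx = curr_xy[0] - prev_xy[0]
    dy = curr_xy[1] - prev_xy[1]
    if abs(dx) < threshold and abs(dy) < threshold:
        return "hold"
    if abs(dx) >= abs(dy):
        return "pan-right" if dx > 0 else "pan-left"
    return "pan-down" if dy > 0 else "pan-up"
```

Feeding each camera frame through `track_marker` and each consecutive pair of centroids through `classify_motion` yields a stream of gesture labels that a projected interface could act on, which is the essence of the pipeline described above.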

The SixthSense prototype implements several applications that demonstrate the usefulness, viability and flexibility of the system. The map application lets the user navigate a map displayed on a nearby surface using hand gestures similar to those supported by multi-touch systems, zooming in, zooming out or panning with intuitive hand movements. The drawing application lets the user draw on any surface by tracking the movements of the index fingertip. SixthSense also recognizes freehand gestures: for example, it implements a gestural camera that takes photos of the scene the user is looking at by detecting the 'framing' gesture, and the user can then stop by any surface or wall and flick through the photos taken. The user can also draw icons or symbols in the air with the index finger, and the system recognizes these symbols as interaction instructions: drawing a magnifying-glass symbol opens the map application, while drawing an '@' symbol lets the user check e-mail. SixthSense also augments the physical objects the user is interacting with by projecting additional information onto them. For example, a newspaper can show live video news, dynamic information can be provided on a regular piece of paper, and drawing a circle on the user's wrist projects an analog watch.
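Once a drawn symbol has been recognized, mapping it to an application amounts to a small dispatch table. The sketch below is purely illustrative; the symbol names and action strings are made up here and are not taken from the SixthSense software.

```python
# Hypothetical mapping from a recognized gesture symbol to an action,
# mirroring how SixthSense binds drawn symbols to applications.
GESTURE_ACTIONS = {
    "magnifying_glass": "open map application",
    "@": "check e-mail",
    "framing": "take photo",
    "circle_on_wrist": "project analog watch",
}

def dispatch(symbol):
    """Return the action bound to a recognized symbol, or 'ignore'
    when the symbol is not a known interaction instruction."""
    return GESTURE_ACTIONS.get(symbol, "ignore")
```

A table-driven design like this keeps the recognizer decoupled from the applications: adding a new gesture-triggered application is a one-line change.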

The current prototype system costs approximately $350 to build.

The innovative Sixth Sense project demonstrates that the computer is no longer a distinct object, but a source of intelligence that’s embedded in our environment. By outfitting ourselves with digital accessories, we can continually learn from (and teach) our surroundings. The uses of this tech -- from healthcare to home furnishings, warfare to supermarkets -- are powerful and increasingly real.

Tuesday, March 10, 2009

Wolfram Alpha Computational Knowledge Engine to Answer Questions

Stephen Wolfram is known for his ambitious projects Mathematica and A New Kind of Science. But in recent years he has been hard at work on a still more ambitious project called Wolfram|Alpha, which will launch in just two months, in May 2009.
Stephen Wolfram is building something new, and it is really impressive and significant. In fact, it may be as important for the Web (and the world) as Google, but for a different purpose. It's not a "Google killer"; it does something different. It's an "answer engine" rather than a search engine, and it is not even a natural-language search engine. The True Knowledge answer engine shares some similarities with Wolfram Alpha, but there are also considerable differences.

Wolfram Alpha actually computes the answers to a wide range of questions that have factual answers. It doesn't simply contain huge amounts of manually entered question-and-answer pairs, nor does it search for answers in a database of facts. Instead, it understands certain kinds of questions and then computes their answers. This almost gets us to what people thought computers would be able to do 50 years ago!
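The lookup-versus-computation distinction can be made concrete with a toy sketch. The example below is entirely my own illustration, not how Wolfram|Alpha works internally: a stored question-answer table can only return what someone entered in advance, while a computational engine derives a fresh answer on demand from structured data.

```python
from datetime import date

# Lookup-style knowledge: only questions stored in advance can be answered.
STORED_ANSWERS = {"capital of France": "Paris"}

def answer(question):
    """Toy answer engine: fall back to computing date arithmetic for
    questions of the form 'days between YYYY-MM-DD and YYYY-MM-DD'."""
    if question in STORED_ANSWERS:
        return STORED_ANSWERS[question]
    prefix = "days between "
    if question.startswith(prefix):
        a, b = question[len(prefix):].split(" and ")
        # The answer is derived on demand, not retrieved from storage:
        # no table could pre-enumerate every pair of dates.
        return abs((date.fromisoformat(b) - date.fromisoformat(a)).days)
    return None
```

The date question illustrates the point: the number of possible date pairs is far too large to store, but trivially small to compute.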

Stephen Wolfram has introduced his new invention on the Wolfram Blog: Wolfram|Alpha is Coming! Check out Nova Spivack's public twine for more details; it also appears as a guest post on TechCrunch.