Another stunning 3D mapping service goes live: EveryScape isn't an online world, it's the world online.
EveryScape takes you from the streets to the sidewalks and through the doors of the world's cities and towns. Unlike Microsoft Virtual Earth and Google Maps, EveryScape lets users explore both the outside and the inside of major cities.
EveryScape has launched in beta with four destinations: Boston, New York City, Miami and Aspen, and plans to quickly expand the list to ten cities in 2007. Users are not only encouraged to tour the various cities and towns, but also to collaborate and share in their very creation. EveryScape's technology transforms digital still photos from inexpensive cameras into 3D models, and the company hopes to recruit users to become "scape artists" and upload such photos.
Check out the experience in this launch video:
The 3D virtual reality experience is based on scapes. A scape is a three-dimensional, photo-realistic experience of a city, street or business. In a scape, anyone with a Flash-capable web browser can move around seamlessly and look in any direction via a 360-degree panoramic photograph.
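For intuition, looking around a scape amounts to picking which part of the panoramic photograph to show for the current view direction. The sketch below assumes an equirectangular panorama, which EveryScape has not confirmed as its format; it is purely illustrative.

```python
def panorama_pixel(yaw_deg, pitch_deg, width, height):
    """Map a view direction (yaw, pitch in degrees) to the pixel in an
    equirectangular 360-degree panorama that sits at the centre of the view.
    Yaw: 0..360 around the horizon; pitch: -90 (straight down) .. +90 (straight up)."""
    u = (yaw_deg % 360.0) / 360.0     # horizontal fraction of the panorama
    v = (90.0 - pitch_deg) / 180.0    # vertical fraction, top row = +90 pitch
    return int(u * (width - 1)), int(v * (height - 1))

# Example: looking due east (yaw 90), slightly upward, in an 8192x4096 scape image.
print(panorama_pixel(90, 10, 8192, 4096))  # -> (2047, 1820)
```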
The company aims to get real-world businesses to establish their own scapes so that potential customers can experience them online. A scaped business can be engaging, immersive and realistic.
Standard pricing for the service is $180 a year for a business listing displayed along with a photo of the outside. For $250 a year, a business can add one inside photo; two or three inside photos cost $400 and $500 a year, respectively.
Monday, October 29, 2007
Tuesday, October 16, 2007
3D Birds Eye Views in Virtual Earth
Microsoft reached another milestone in virtual reality today. Live Search Maps v2 (codenamed Gemini) is out with amazing new features! The coolest of them is 3D Birds Eye views, based on aerial photos taken from different angles and stitched together, just like in Photosynth.
This feature is described in the Virtual Earth blog as:
"Basically, as you navigate the virtual world the camera is snapped to the same parameters the real-world camera had at the time the scene was captured. As you rotate, you will first see virtual 3D buildings and terrain just as the corresponding scene is loaded and overlaid. if you are zoomed out past a single image, a series of white outlines hint at where to click to bring in a new image, very much like the Photosynth UI. smooth camera tweening links the scenes creating an amazing tapestry of the highest resolution aerial image online."
Check out their full post to get the details on Birds Eye View and many other new features such as:
- 1-Click Directions - Also known as “party maps”, this is a single permalink that you can send out, and each person can get directions to it from their house with a single click.
- Route Around Traffic - It’s a check box option to have it automatically route around traffic jams, based on the live traffic data.
- Data Import - Import GeoRSS, GPX and even some KML (see the GeoRSS parsing sketch after this list).
- Birds Eye View in 3D - A way to view Birds Eye imagery while in 3D mode. It’s kinda weird, but very cool.
- 3D Tours and Videos of Collections - You can build a “tour” (fly around, look at stuff, etc), then share it with others by sending them a simple URL. They can control the tour using DVD-style controls.
- 3D Modelling - Using Dassault Systèmes' 3DVIA tool, you can create 3D buildings in VE.
- Collection Search and Explore - A search engine for more content.
- Enhanced Detail Pages - More info about each business listed in VE
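To make the Data Import item concrete, here is a minimal sketch of reading GeoRSS points with Python's standard library. The feed contents are made up for illustration; the service does the equivalent parsing on its end.

```python
import xml.etree.ElementTree as ET

# A tiny Atom feed with GeoRSS points, the kind of data the importer accepts.
FEED = """<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:georss="http://www.georss.org/georss">
  <entry><title>Space Needle</title><georss:point>47.6205 -122.3493</georss:point></entry>
  <entry><title>Pike Place Market</title><georss:point>47.6097 -122.3422</georss:point></entry>
</feed>"""

NS = {"atom": "http://www.w3.org/2005/Atom", "georss": "http://www.georss.org/georss"}
for entry in ET.fromstring(FEED).findall("atom:entry", NS):
    title = entry.findtext("atom:title", namespaces=NS)
    lat, lon = map(float, entry.findtext("georss:point", namespaces=NS).split())
    print(f"{title}: {lat}, {lon}")
```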
Monday, October 15, 2007
IT Futurology: What Could Happen to the Web by 2012?
What will happen when the MySpace generation grows up? When employees want Facebook rather than a phonebook? When your monthly report is on your iPhone, your spreadsheet is on your wiki, your e-mail has moved to Google, you haven't got a home directory (or a PC) any more, and blogging is no longer a buzzword but simply what everybody does to stay employed? Tune in to Alec Muffett's entertaining presentation on blip.tv to find out!
Alec projects current trends and discusses the possibility of a terabyte iPod by 2012. For comparison, a recent Hitachi announcement projects 4 TB desktop and 1 TB notebook hard disk drives by 2011. Until then we can buy the industry's first cost-effective desktop terabyte drive, the Hitachi 0A34915 1 TB 7200 RPM 32 MB cache SATA II hard drive.
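A quick back-of-the-envelope check on that projection; the starting capacity and the implied growth rate below are my own assumptions, not figures from the talk.

```python
# How fast would storage capacity have to grow for a 1 TB iPod by 2012,
# starting from the 160 GB iPod Classic of 2007? Purely illustrative arithmetic.
start_gb, target_gb, years = 160, 1024, 5
required_growth = (target_gb / start_gb) ** (1 / years) - 1
print(f"Required annual capacity growth: {required_growth:.0%}")  # roughly 45%
```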
Check out Alec Muffett's blog for more on IT futurology and security.
Wednesday, October 3, 2007
Natural Language Search - Mining the Web for Meaning
Do you have a question? Chances are you can find the answer in Wikipedia's 2+ million articles or somewhere else on the web. How?
The Present: Keyword based search
How do we search today? Search engines use bots to crawl the web for documents, process them, and then build an index; Google had indexed more than 25 billion web pages by 2006. In response to a user's query, the search engine consults this huge index to find a set of matching documents. So far so good. It then tries to rank the potential matches to present the most relevant results first. The ranking of the matches, and possibly the short presentation of each result, are tailored to the query and to any other information available to the search engine.
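For intuition, here is a toy version of the indexing step: an inverted index that maps each keyword to the set of documents containing it, with a query answered by intersecting those sets. This is a bare sketch, nothing like a production engine.

```python
from collections import defaultdict

# Toy corpus standing in for crawled pages.
docs = {
    "doc1": "steve jobs introduced the ipod in 2001",
    "doc2": "the ipod nano is a portable music player",
    "doc3": "steve jobs spoke at the conference",
}

# Build the inverted index: keyword -> set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.lower().split():
        index[word].add(doc_id)

# Answer a keyword query by intersecting the posting sets.
query = "steve ipod"
matches = set.intersection(*(index.get(w, set()) for w in query.lower().split()))
print(matches)  # {'doc1'}
```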
It is challenging to rank potentially thousands or millions of matches to get relevant results, yet relevance is critical, especially for mobile users. Keyword-based search engines such as Google rank pages using a number of criteria and features: PageRank (link graph analysis), keyword frequency, keyword proximity, and many others. Many of these smart algorithms are discussed in my earlier post on Building Smart Web 2.0 Applications. Google was clearly the innovator in this area, which made it the undisputed leader in the search space. What is the next step to improve relevance?
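As an illustration of the link-graph criterion, here is a bare-bones PageRank computed by power iteration over a tiny made-up link graph (0.85 is the commonly cited damping factor).

```python
# Bare-bones PageRank by power iteration over a tiny, made-up link graph.
links = {
    "a": ["b", "c"],   # page a links to b and c
    "b": ["c"],
    "c": ["a"],
}
damping, n = 0.85, len(links)
rank = {page: 1.0 / n for page in links}

for _ in range(50):
    new_rank = {page: (1 - damping) / n for page in links}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

print({page: round(score, 3) for page, score in rank.items()})
```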
The Promise of the Semantic Web
The natural language of web pages is difficult for computers to understand and process. The vision of the Semantic Web promises information that is understandable by computers, so that they can perform more of the tedious work involved in finding, sharing and combining information on the web. Realizing that vision would require authors and publishers to make their information easier for computers to process by using special markup languages.
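To make "understandable by computers" concrete, the Semantic Web expresses facts as subject-predicate-object triples that programs can query directly. The sketch below uses plain Python tuples and made-up example URIs rather than a real RDF library.

```python
# Facts expressed as subject-predicate-object triples, the core idea behind
# RDF-style markup. The URIs below are illustrative, not a published vocabulary.
triples = [
    ("http://example.org/steve_jobs", "http://example.org/vocab/ceoOf", "http://example.org/apple"),
    ("http://example.org/apple", "http://example.org/vocab/makes", "http://example.org/ipod"),
]

def objects(subject, predicate):
    """Return every object asserted for a given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# A program can now answer the question without parsing any prose.
print(objects("http://example.org/apple", "http://example.org/vocab/makes"))
```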
There are many projects that aim to capture the knowledge of the world in a structured form that software can process; the most interesting of them are Freebase and Google Base. However, most of the web remains unstructured text, so the idea of the Semantic Web stays largely unrealized. How is it possible to mine the meaning of those billions of web pages?
The Future: Natural Language Search
Imagine if you could ask a search engine the following question and get relevant results: "what did steve jobs say about the iPod?"
True natural language queries have linguistic structure that keyword-oriented search engines ignore. This includes queries where the function words matter, where word order means something, and where relationships can be stated explicitly and easily. Instead of ignoring the function words, a natural language search engine respects their meaning and uses it to return better results.
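A crude way to see the difference: a keyword engine throws the function words away, while a natural language engine keeps the structure they carry. The toy pattern below is purely illustrative and bears no resemblance to Powerset's actual parser.

```python
import re

query = "what did steve jobs say about the iPod?"

# Keyword view: drop the function words and keep a bag of terms.
stopwords = {"what", "did", "about", "the", "a", "say"}
keywords = [w for w in re.findall(r"\w+", query.lower()) if w not in stopwords]
print(keywords)  # ['steve', 'jobs', 'ipod']

# Structured view: a toy pattern that keeps track of who said what about what.
match = re.match(r"what did (?P<subject>.+?) say about (?P<topic>.+?)\??$", query, re.I)
if match:
    print({"subject": match.group("subject"), "verb": "say", "topic": match.group("topic")})
    # {'subject': 'steve jobs', 'verb': 'say', 'topic': 'the iPod'}
```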
In fact, one of the most buzzed-about startups at the TechCrunch 40 conference aims to build exactly such a natural language search engine. It is a big challenge.
Powerset has licensed key Natural Language Processing (NLP) technology from Xerox PARC. Their search engine examines the actual meaning and relationships of the words in each sentence, both in the web pages it indexes and in the queries it receives.
The NLP technology they're using has been under development for more than 30 years. Their unfair advantage is that Powerset has cut the time to index one sentence from two minutes down to one second. Currently they're limited to a select few sources to crawl: Wikipedia, the New York Times, and ontological resources like Freebase and WordNet. They plan to use Amazon's EC2 and build out their own data centers to scale. Indexing billions of web pages will take time, but natural language search is certainly an interesting wave in the ocean of web innovations.
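Some rough arithmetic puts that speed-up in perspective; the article count and sentences-per-article figure below are my own ballpark assumptions, not Powerset's numbers.

```python
# Rough, single-machine arithmetic on Powerset-style indexing throughput.
articles = 2_000_000          # roughly the size of English Wikipedia in 2007
sentences_per_article = 100   # assumed average
seconds_per_sentence = 1      # Powerset's reported per-sentence indexing time

total_seconds = articles * sentences_per_article * seconds_per_sentence
print(f"CPU-years on one core: {total_seconds / (3600 * 24 * 365):.1f}")   # ~6.3
print(f"Days across 1,000 cores: {total_seconds / 1000 / 86400:.1f}")      # ~2.3
```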
Check out Powerset's blog or sign up for Powerset Labs to experience their latest technology.