Web GIS in practice V: 3-D interactive and real-time mapping in Second Life
© Boulos and Burden; licensee BioMed Central Ltd. 2007
Received: 15 November 2007
Accepted: 27 November 2007
Published: 27 November 2007
This paper describes technologies from Daden Limited for geographically mapping and accessing live news stories/feeds, as well as other real-time, real-world data feeds (e.g., Google Earth KML feeds and GeoRSS feeds) in the 3-D virtual world of Second Life, by plotting and updating the corresponding Earth location points on a globe or some other suitable form (in-world), and further linking those points to relevant information and resources. This approach enables users to visualise, interact with, and even walk or fly through, the plotted data in 3-D. Users can also do the reverse: put pins on a map in the virtual world, and then view the data points on the Web in Google Maps or Google Earth. The technologies presented thus serve as a bridge between mirror worlds like Google Earth and virtual worlds like Second Life. We explore the geo-data display potential of virtual worlds and their likely convergence with mirror worlds in the context of the future 3-D Internet or Metaverse, and reflect on the potential of such technologies and their future possibilities, e.g. their use to develop emergency/public health virtual situation rooms to effectively manage emergencies and disasters in real time. The paper also covers some of the issues associated with these technologies, namely user interface accessibility and individual privacy.
When Google Earth and Google Maps first appeared, many people marvelled at the ability to zoom in on almost any part of the planet and see objects at little more than 1 m resolution. However, the imagery is static and relatively out of date. What made Google Earth come alive was the ability to create so-called 'network links'–displays of data, often captured in real time–which could be overlaid on the basic Google Earth mapping.
This capability is now (in November 2007) two years old. Whilst so-called 'mirror worlds', such as Google Earth, have developed little further, the major innovation of the last 18 months has been the rise in popularity of 'virtual worlds', such as Second Life [3, 4] (the US Department of Defense has been using virtual worlds since the 1990s). Here again, it is the interfacing of the virtual space to real-world data which can start to open up new possibilities in the ways that we view and analyse geographic data.
This paper describes some of the tools developed by Daden Limited to explore the geo-data display potential of virtual and mirror worlds, and reflects on the potential of such technologies, their future possibilities, and some of the associated issues like user interface accessibility and individual privacy. Before looking at the tools in detail, it is worth putting these new technologies into context.
A group of US companies and institutions active in this area recently published a Metaverse Roadmap. This proposed that there were four emerging technologies that make up the so-called Metaverse–a digital domain equivalent to the atom-based domain of our physical lives. These technologies are:
• Mirror worlds–digital representations of our own atom-based world, such as Google Earth, Google Maps, and Microsoft Virtual Earth 3D;
• Virtual worlds–digital representations of any space, imagined or real, such as Second Life;
• Lifelogging–the digital capture of information about people and objects in the real (or digital) worlds; and
• Augmented reality–sensory overlays of digital information on the real (or even virtual) world, e.g., using head-up displays (HUDs).
Whilst there have been prototypes of systems in all four of these areas over the last 20 years or so (remember the Virtuality headsets of the early 1990s), it is only in the last couple of years that these technologies have reached a maturity where they can be considered for serious use. Even then, their adoption is likely to follow the order above, and useful, widespread deployment of some may yet be a decade or so away.
The Metaverse 1.0 Consortium is a related group whose more than 40 participants include large and small/medium enterprises, as well as several research institutes and universities, from eight participating countries. Among the participants are IBM, Philips, Forthnet, Alcatel-Lucent, Telefonica I&D, Siemens IT, Barco, Geosim Systems Ltd., Technical University Eindhoven, Utrecht University, Technical University of Twente, Fraunhofer Rostock, Nazuka and Bertelsmann.
Metaverse 1.0 will provide a standardised global framework enabling interoperability between various virtual and mirror worlds (virtual-virtual and virtual-mirror worlds interoperability), and between them and the real world (sensors, actuators, vision and rendering, social and welfare systems, banking, insurance, travel, real estate and many others), enabling the realisation of 'mixed (real + virtual/mirror) reality' applications. The framework will be mainly driven by a set of selected application domains, including training, learning and simulation, eInclusion, and support for the elderly, disabled and minorities, among other domains.
Within this paper we will primarily focus on mirror worlds and virtual worlds.
The problem with using RSS feeds with Google Earth is that most such feeds do not contain geographically coded information. A case in point is the BBC's World News RSS feed. To plot such a feed onto Google Earth successfully, we had to develop a three-stage process:
Capture the feed;
Parse it for geographic information, and geocode it; and
Convert the data to KML.
Since RSS feeds are designed for public consumption by Web browsers, they can be captured very simply and efficiently by any software programme that can make HTTP (Hypertext Transfer Protocol) requests out over the Internet. We do all our work in Perl (since most of our work is text-based rather than number- or object-based), and Perl offers a library called LWP to make this capture easy. The captured feed is simply presented to the rest of the programme as one very long text string. Since we are dealing with pure text, the capture time is often under a second–significantly less time than it takes a Web page to load.
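The capture step is small enough to sketch. The authors' pipeline is in Perl with LWP; the fragment below is purely an illustrative Python equivalent (our assumption, not the original code), showing the fetch-to-string step using only the standard library:

```python
from urllib.request import urlopen

def capture_feed(url: str) -> str:
    # Fetch the feed over HTTP and hand the rest of the programme
    # one long text string, as described in the text.
    with urlopen(url, timeout=10) as response:
        return response.read().decode("utf-8", errors="replace")
```

The returned string is then passed unmodified to the parsing and geocoding stage.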
This is the real challenge. From the simple text information in the feed we need to try to identify the geographic location of each item. For our work so far we have developed our own geocoder. This is a database of every country in the world, every major city, and every major airport; the software searches the title (and optionally the description) of the item for a place name it recognises, and then assigns the corresponding geographic position to the item (cf. Metacarta's GeoParsing). If a location is not found, the item is removed from the stream. We have also developed more detailed gazetteers for specific geographies (e.g., to village level in the UK), and it is possible to develop other bespoke gazetteers for specific clients and feeds. If data are already postcoded, then we can use postcode look-up services (such as Postcode Anywhere) to convert from postcode to lat/long.
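The gazetteer approach can be sketched as follows. The three-entry dictionary is a toy stand-in for the full country/city/airport database, and the matching logic is our own simplification rather than Daden's actual geocoder:

```python
# Minimal gazetteer: place name -> (lat, lon). A stand-in for the full
# country/city/airport database described in the text.
GAZETTEER = {
    "london": (51.51, -0.13),
    "paris": (48.86, 2.35),
    "atlanta": (33.75, -84.39),
}

def geocode_item(title: str, description: str = ""):
    """Scan an item's title (then, optionally, its description) for a
    recognised place name. Return (lat, lon), or None so that the caller
    can drop the item from the stream, as the text describes."""
    for text in (title, description):
        for word in text.lower().replace(",", " ").split():
            if word in GAZETTEER:
                return GAZETTEER[word]
    return None
```

For example, `geocode_item("Flooding hits London")` resolves to London's coordinates, while an item with no recognised place name returns `None` and is removed.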
There are already some standards for geocoding RSS and similar data, such as ICBM and GeoRSS. Our application, which we call NewsGlobe, can identify when these formats are being used, obviating the need for it to do its own geocoding.
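Detecting pre-geocoded items might look like the sketch below, which handles two common encodings, GeoRSS-Simple points and W3C Geo tags (an illustration of the idea, not NewsGlobe's production parser; real feeds vary in namespace usage):

```python
import re

def extract_geotag(item_xml: str):
    """Look for GeoRSS-Simple (<georss:point>lat lon</georss:point>) or
    W3C Geo (<geo:lat>/<geo:long>) tags in an RSS item. Return (lat, lon),
    or None so the caller can fall back to gazetteer geocoding."""
    m = re.search(r"<georss:point>\s*([-\d.]+)\s+([-\d.]+)\s*</georss:point>",
                  item_xml)
    if m:
        return float(m.group(1)), float(m.group(2))
    lat = re.search(r"<geo:lat>\s*([-\d.]+)\s*</geo:lat>", item_xml)
    lon = re.search(r"<geo:long>\s*([-\d.]+)\s*</geo:long>", item_xml)
    if lat and lon:
        return float(lat.group(1)), float(lon.group(1))
    return None
```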
When the feed is activated, Google Earth calls the NewsGlobe web service with the URL and parameters. NewsGlobe then makes its own HTTP GET request to the target feed URL, receives the RSS file, parses it as above, builds the KML file, and returns this file to Google Earth, which displays it. Unless otherwise specified, simple Google Earth markers are used. Each marker is assigned a label based on the <title> field for the item, and a pop-up description based on the <description> field of the item.
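The KML-building step can be illustrated with a minimal generator that maps each item's title to the marker label and its description to the pop-up, as described above. The item tuple shape is an assumption for illustration, not the authors' code:

```python
from xml.sax.saxutils import escape

def build_kml(items):
    """Convert geocoded items (title, description, lat, lon) into a
    minimal KML document of Placemarks, of the kind served back to
    Google Earth for display."""
    placemarks = []
    for title, desc, lat, lon in items:
        placemarks.append(
            "<Placemark>"
            f"<name>{escape(title)}</name>"
            f"<description>{escape(desc)}</description>"
            # KML coordinates are written longitude,latitude,altitude
            f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
            "</Placemark>"
        )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
        + "".join(placemarks) + "</Document></kml>"
    )
```

Note the longitude-first ordering inside `<coordinates>`, a common stumbling block when generating KML.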
NewsGlobe was released and announced in July 2005. Within three months it was being used over a third of a million times a month. Usage has (thankfully) reduced since then, but it is still used every day to plot a range of news feeds.
Since their development, Daden has continued to offer the NewsGlobe services for Google Maps and Google Earth for free on an as-is, non-commercial basis (details are at ). Readers are invited to make use of them for their own data.
Whilst Daden has been in Internet-based virtual worlds since around 1996, and joined Second Life in 2004, it was not until Linden Lab (the creator of Second Life) released the llHTTPRequest functionality in the summer of 2006 that we felt that virtual worlds really had the opportunity to become serious business tools. Today one can find many RSS feed readers in various places around Second Life, as well as in-world scripted objects for posting blog entries from within Second Life, and a myriad of other objects that use LSL (Linden Scripting Language) HTTP Request and related functions to access the Internet and online databases outside Second Life. It was therefore fairly natural for us, when looking at ways to demonstrate the potential of these worlds, to go back to NewsGlobe and see whether we could achieve the same thing in a virtual world.
Visually, DataGlobe is represented in Second Life by a 5 m tall globe showing a photographic whole-Earth image. This being a virtual world, one can instruct the globe to grow (to the 10 m limit of Second Life) or shrink (to a 1 m limit). One can also command it to rotate and tilt, and even change from photographic mapping to schematic mapping. If users get bored with the globe, they can even tell it to morph into a 2-D map.
The operation is very similar to NewsGlobe, and again most of the code is re-used. As well as taking an ungeocoded RSS feed, DataGlobe can also take a KML feed–i.e., the data feeds used by Google Earth–which has the advantage of being already geocoded. Instead of returning a KML file, NewsGlobe now returns a pipe-delimited text file, one line per record. Given Second Life's script memory restrictions, the NewsGlobe API has been extended to allow the user to specify ways of limiting the amount of data returned to Second Life. For instance, title and description fields can be truncated to N characters, image links can be excluded, and lat/long values can be rounded to integer or single decimal values.
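Those data-reduction options can be illustrated with a small record formatter. The parameter names here are hypothetical, not the real NewsGlobe API:

```python
def format_record(title, desc, lat, lon, max_chars=32, round_coords=True):
    """Emit one pipe-delimited line per item, trimmed to fit Second Life's
    script memory limits: text fields truncated to max_chars, lat/long
    rounded to a single decimal place when round_coords is set."""
    if round_coords:
        lat, lon = round(lat, 1), round(lon, 1)
    return "|".join([title[:max_chars], desc[:max_chars], str(lat), str(lon)])
```

In Second Life the receiving script then splits each line on `|` to recover the fields.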
DataGlobe is available for free for non-commercial use. Please IM Corro Moseley (David Burden's avatar name) in Second Life for details.
As well as the NewsGlobe based version of DataGlobe, Daden also produced three other Second Life systems to explore mapping opportunities (available to view at ):
• A Second Life to Web mapping tool, where the Second Life user places markers out on a map and names them, and then touches the map to generate a Google Earth or Google Maps data feed, which can be viewed on the Web, i.e., the reverse of DataGlobe; and
• A UK map which shows weather by having intelligent "clouds", with each cloud fetching its own weather from the Yahoo! weather feed (cf. NOAA's 3-D real time USA weather data visualisation/map in Second Life ).
One of the wonderful things about Second Life is that in many ways it works just like real life. Daden was working on the DataGlobe when its Second Life neighbour, Hayduke Ebisu (this is his Second Life name; he is based in the USA), who is active in environmental issues in both Second Life and real life, saw what the group was doing. He said that he had someone Daden should meet. That person was Stephane Zugzwang (another Second Life name, based in France). (Second Life is very much about social networking and collaboration.) Stephane had built a 'VR Room' in Second Life. This is similar to the 720-degree 'bubble photos' sometimes found on the Web, where one can pan and zoom in all directions. In the Second Life version, the photo is pasted onto the inside of a huge 20 m sphere, with the viewer's avatar standing inside and looking at the image all around her/him.
GeoGlobe received extensive blog coverage and can be seen at . The technology has since been used by the Swedish Embassy in Second Life to display locations of Swedish Embassies around the world. Readers interested in using GeoGlobe for their own data are invited to contact Corro Moseley in Second Life.
Whilst the NewsGlobe data were interesting, we still felt that there was more we could do in this area – particularly showing how real-time data could be used, and how we could move away from the 'pin-on-a-map' metaphor.
One of the Google Earth Dynamic Link Layers that had most impressed us was one by US flight tracking company Fboweb.com. Fboweb.com are official agents for the US Federal Aviation Administration's (FAA) Aircraft Situation Display to Industry (ASDI) data. ASDI is a feed of all the radar tracks of aircraft around the USA, provided either in real time or with a five-minute delay for security purposes. Fboweb.com used these data to produce a KML feed of the aircraft coming in to land at Los Angeles International Airport (LAX). One can zoom in on LAX and see the tracks of the aircraft, each identified with its flight number. Could we bring this feed into Second Life?
We did experiment with trying to animate aircraft between locations, but at the 10 m map size we could not make them move slowly enough. It should be possible if a larger (100 m+) map were used, although this would also raise new issues about how we 'rez' (resolve an object in Second Life) the planes: there is a 10 m rez range limit, so we would need multiple 'aircraft generators'!
The visualisation was launched at the National Business Aviation Association (NBAA) Convention held in Atlanta, Georgia on 25–27 September 2007, and has since been covered by both virtual world and aviation media.
Of course, the important point about this demonstration is not the aircraft or the feed, but the way in which almost any sort of real-time or near real-time data (and even live GPS–Global Positioning System–data feeds) can be visualised in a virtual world in ways that would be impossible in real life.
In truth, almost any visualisation that we do in a virtual world could be achieved using a (probably bespoke or high-powered) desktop PC application. However, for us there are some undoubted advantages to doing such visualisations in a virtual world:
• A single platform to learn, many uses–a single platform can serve for the modelling and visualisation of a wide range of data, reducing the learning curve. Where else can one see both DNA molecules and civil aviation traffic being visualised at the same time?
• Human behaviour modelling–in applied epidemiology, a virtual world, being a social network, can be used as a unique disease modelling tool that incorporates important human behaviours for applied simulation modelling of infectious diseases;
• Instant sharing–the visualisation is instantly shareable with anyone who has a broadband connection and suitable PC anywhere on the globe, or it can be made private;
• A unique experience–being able to walk or fly an avatar through the data gives us a more immediate and personal awareness of it than being a third-party viewer of the data. This may lead to unique insights about the data, which would be impossible or highly unlikely within a third-person system.
Note that this latter renewed appreciation of data is about more than just subjective camera viewpoints (which can also be achieved with a desktop package). It appears to tap into a more primitive and natural appreciation of our surroundings, and of the scale and location of the items in them relative to ourselves. From a psychological perspective, we "become" our avatar, experiencing things as the avatar sees them, rather than as a passive, real-world observer. We would love to see more research into this immersive nature of virtual world environments.
Despite its scripting limitations, Second Life remains a haven for content creators, as opposed to virtual worlds like There, where users cannot build anything completely of their own and are limited to choosing from pre-determined sets of semi-customisable objects. However, some virtual worlds are starting to appear that can import 3-D models from Google's 3D Warehouse, a free and extensive library of 3-D models, something Second Life currently cannot do.
Google 3D Warehouse models can also be used in Google Earth. Google Earth uses the open COLLADA 3-D modelling format. Using a Google program called SketchUp, users from around the world have built thousands of COLLADA models and made them freely available through Google 3D Warehouse. Microsoft's equivalent to Google's developments is their Virtual Earth mirror world with 3DVIA.
In a recent article entitled 'Second Earth: The World Wide Web will soon be absorbed into the World Wide Sim: an environment combining elements of Second Life and Google Earth' and published in MIT Technology Review, Wade Roush discusses what will happen when Second Life and Google Earth, or services like them, actually meet. Roush rightly argues that "while Second Life and Google Earth are commonly mentioned as likely forebears of the Metaverse, no one thinks that Linden Lab and Google will be its lone rulers". What is coming is a larger digital environment combining elements of these technologies–the Metaverse or '3-D Internet'. Indeed, in January 2007, IBM predicted that the 3-D Internet will be one of the top five innovations that will change the way we live over the next five years.
The line between the real world and its virtual representations will soon start blurring [4, 41]. The entire world is getting "wired" without wires: tiny radio and Internet-connected sensor chips are being attached these days to everything worth monitoring, including the human body [43, 44]. But the real challenge is to organise and present the vast amounts of data these sensors generate in forms that diagnosticians and decision makers can make sense of. 'Reality mining' is the term that MIT Media Lab researchers and others are using for this emerging specialty. And, as Roush asserts, what better place (and metaphor) to mine reality collaboratively than in a social virtual space, where getting underneath, around, and inside data-rich representations of real-world objects is effortless?
Today, mirror worlds like Google Earth present a real 'individual privacy challenge' by enabling everyone to see the full details of an individual's street/home, backyard and car on a 3-D colour map, along with corresponding online users' annotations and photos (thanks to GPS-enabled digital cameras), and even some relevant sounds and YouTube videos. Google Earth has become like a layered 3-D Wikipedia of the planet that anyone can edit and add to. It is becoming increasingly easy to link an individual's home, work and the leisure/shopping and other locations they visit online or in the real world to other multimedia Web information about them and their family (compiled and published by them, their family, friends and colleagues, and others, in different contexts and at different times and places around the Web), using the many Web 2.0/social networking information sources and mashup tools that are available today, e.g. Yahoo! Pipes and Microsoft Popfly, among many others. These trends will only increase as 3-D geobrowsers like Google Earth become more and more integrated into 3-D social networks/3-D virtual worlds over the coming years. Indeed, "knowledge shall increase" (Daniel 12:4).
People who are disabled in the real world sometimes face significant barriers to participation in 3-D mirror and virtual worlds like Second Life and Google Earth. These worlds are essentially visually based environments, and as such the blind and visually impaired are excluded. However, researchers are currently working on spatialised audio interfaces for these worlds [51, 52]. Voice communication in 3-D virtual worlds, which can be seen as an accessibility enhancer for some people who have difficulty communicating via typed text, is a challenge for the hard of hearing and deaf. But again, technologies are being developed that automatically convert the spoken word to Sign Language, using speech recognition to animate an avatar. Easy, smooth movement through 3-D mirror and virtual worlds requires more fine motor control than some disabled users possess. Motion-sensitive controllers that use multi-axial accelerometer-based sensors, like the Wiimote, and enhanced 3-D mouse navigation could offer hope here. Many disability barriers can also be overcome through individualised coaching and mentoring support provided to new users with special needs to help them have a better 3-D mirror and virtual world experience, including special registration, orientation and support portals designed for this purpose (both out-world on the flat Web and in-world).
Location-aware mobile device interfaces to the future Metaverse [55–57] will also need to be carefully designed to make them accessible and usable not just by the disabled, but also by the non-disabled, given the smaller screen sizes of these devices, their special input modalities, their intended applications/uses (e.g., in mobile augmented reality), and other particularities.
To us, this whole journey from Google Earth to virtual worlds like Second Life has been about the democratisation of GIS (Geographic Information Systems), so that they are no longer only associated with big proprietary names and solutions. New technologies like those described in this paper bring Web 2.0 approaches to GIS, and indeed this whole area has been referred to as Geography 2.0. They let almost anyone with a modicum of programming skills mash up varying data sources with equally varying display modes to create unique visualisations of data, and then combine those mashups with yet others to create ever more fascinating and useful images of the datasphere.
And this is just the start. Technologies such as Yahoo! Pipes will let non-programmers create mashups and novel visualisations without the need to programme, combining data feeds and filters with almost Lego-like simplicity. MIT's SIMILE project, for example, has created Timeline, a kind of "Google Maps" for time series data, whether one is tracking eons, milliseconds, or both. And in the next few years we are bound to see a collision between the Metaverse technologies of mirror worlds, virtual worlds, augmented reality and lifelogging, and be able to capture, visualise and analyse data in ways that only a decade ago we could only dream of.
Interested readers are further referred to the online section entitled 'Second Life GIS' at , which features many news items and pointers directly related to the topic of this paper.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.