Thursday, March 12, 2009
Photosynth, and Its Possibilities
Here is the link, if you can't view the movie.
This is one of the coolest things I've ever seen. Photosynth examines images for similarities to each other and uses that information to estimate the shape of the subject and the vantage point each photo was taken from. That information is then used to recreate the space and use it as a canvas to display and navigate through the photos. Used in conjunction with the vast amount of photographic data on the web, it allows an immersive recreation of the most interesting landmarks and events (there's a great one of Obama's inaugural address on the site). The program is freely available on the site (there's no Mac software for creating Photosynth canvases, but you can view others', and there is an iPhone app). You can just take a bunch of pictures of whatever, upload them, and voilà! Your room is now shared with the rest of the world as a 3D environment. Think of all the Google Street View image files, low- and high-flying balloon and plane images, and tourist photos on Flickr, all combined into a navigable, virtual landscape. (There are actually a couple of cool skyview Photosynths on the main page.)
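To make the "examines images for similarities" step a little more concrete, here's a toy sketch of the kind of feature matching such systems build on. The descriptors, images, and numbers below are all made up for illustration; real pipelines extract thousands of high-dimensional descriptors per photo, but the matching logic (nearest neighbor plus a ratio test to reject ambiguous matches) is the same idea.

```python
# Toy sketch of feature matching: each "image" is a list of keypoint
# descriptors (here, fake 4-number vectors). We match descriptors across
# two images by nearest neighbor, keeping only unambiguous matches.

def distance(a, b):
    # Euclidean distance between two descriptor vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Return index pairs (i, j) where desc_a[i] matches desc_b[j]."""
    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: distance(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        # Ratio test: accept only if the best match is clearly
        # better than the runner-up.
        if distance(d, desc_b[best]) < ratio * distance(d, desc_b[second]):
            matches.append((i, best))
    return matches

# Two "photos" of the same scene: image B sees the first two features
# slightly perturbed, plus a feature image A doesn't have, and A has
# one ambiguous feature that gets rejected.
image_a = [[0.0, 0.0, 1.0, 1.0], [5.0, 5.0, 2.0, 2.0], [2.0, 6.0, 4.0, 4.0]]
image_b = [[0.1, 0.0, 1.0, 1.1], [5.1, 4.9, 2.0, 2.0], [3.0, 7.0, 7.0, 7.0]]

print(match_descriptors(image_a, image_b))  # [(0, 0), (1, 1)]
```

From enough such pairwise matches across many photos, the reconstruction stage can triangulate both the 3D positions of the matched points and the camera position each photo was taken from.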
Another important extension, as Aguera says in the video above, is the huge amount of semantic information attached to image files on the internet, such as tags for image searches, or embedded metadata about location, subject, and so on.
This, in combination with camera-equipped mobile phones, is a huge step toward an approach some people call augmented reality. A combination of image matching, GPS coordinates, compass orientation, and an internet full of knowledge about the world is leading more and more to an up-to-date, wiki-style portal of information wherever you are, whether it's getting bus times, product or company info, book reviews, or the types of wires inside the stoplight pole across the street. Right now, it's being pursued through increasingly high-performance mobile devices like iPhone apps and this new Microsoft device, but even now there are previews of what it will be like when these types of human-computer interfaces are completely ephemeral. Here is a cool demo of this type of "sixth sense" by Pattie Maes.
I was thinking, though, that this software could be used for even more than all that. The image-recognition step, which plots tons of points on an image and cross-examines them against other images, could be used to synthesize huge amounts of visual information for many purposes. For instance, with the wealth of high-resolution digital images of species on the internet, one could import all of them into a massive Photosynth-ish compilation, using the software to match phenotypic congruences and arrange them into a more or less continuous morphological line, and simply watch it morph through the line of the just-over-2-million identified species (and the many unidentified ones scattered throughout the web). It's been discussed before how the internet and widespread high-res digital photography will help skyrocket our list of known species, reducing the daunting gap between the 2 million known and perhaps 100 million unknown species in the world and bringing back an old-school, Linnaean taxonomy of searching and labeling. At the very least, this is a profound art project waiting to happen, if not an invaluable tool for understanding our world in new ways.
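The "continuous morphological line" could be sketched as a simple ordering problem: once each species' photos are boiled down to a feature vector, greedily chain each species to its most visually similar unvisited neighbor. The species names and morphology numbers below are invented stand-ins for whatever descriptors a Photosynth-like matcher would actually extract.

```python
# Hedged sketch: greedy nearest-neighbor chaining over per-species
# feature vectors, producing an ordering that morphs gradually from
# one body plan to the next.

def similarity_chain(features):
    """Greedy nearest-neighbor ordering, starting from the first item."""
    names = list(features)
    order = [names.pop(0)]
    while names:
        last = features[order[-1]]
        # Re-sort remaining species by squared distance to the last
        # species in the chain, then append the closest one.
        names.sort(key=lambda n: sum((a - b) ** 2
                                     for a, b in zip(features[n], last)))
        order.append(names.pop(0))
    return order

# Toy morphology vectors: (body elongation, limb count, symmetry score).
species = {
    "eel":      (9.0, 0.0, 1.0),
    "snake":    (9.5, 0.0, 1.0),
    "lizard":   (6.0, 4.0, 1.0),
    "frog":     (2.0, 4.0, 1.0),
    "starfish": (1.0, 5.0, 5.0),
}

print(similarity_chain(species))
# ['eel', 'snake', 'lizard', 'frog', 'starfish']
```

A greedy chain is the crudest possible version of this; with millions of species one would want something smarter, but even this captures the "watch it morph" idea of stepping through neighbors by visual similarity.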
In fact, the same could be done for almost every physical structure, giving a fresh perspective, for instance, on the morphology of musical technology over the years, or of medieval armor. This type of technology allows for an invaluable synthesis and cross-reference of visual information of all types. And synthesis is exactly what we need to see more of on the internet, with its hopelessly disconnected and unorganized nodes of information lying strewn about in the most unlikely corners.
Another huge catalyst towards a fully integrated, universally connected consciousness!