Mobile Visual Rhymer Design

28 March 2005
for Professor Julian Bleecker, CTIN541b
by Justin Hall, Yuechuan Ke, Aaron Meyers, and Doo-yul Park

Our project originated with a desire to bring a serendipitous image searching experience to mobile platforms. Browse the world and see what else it might look like, through a mobile computing lens.

In our original design, a user uploads a picture, and the computer gives back a number of similar-looking images. The server performs an interpretive act - expanding the user's perceptions with procedural reference. Inexact Image Recognition. In this way, the mobile phone can act as a portal to other locations in the physical world - the image framed by the user's camera phone provides glimpses of other places.
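
A minimal sketch of the kind of inexact matching we imagined, assuming Python with the Pillow imaging library and a local folder of candidate photos; here a simple color-histogram distance stands in for a real image-recognition engine:

    # Sketch only: rank stored photos by color-histogram similarity to an upload.
    # Pillow and the library_dir of candidate images are assumptions for
    # illustration, not part of the actual prototype.
    import os
    from PIL import Image

    def fingerprint(path, size=(64, 64)):
        # Downscale and take the RGB histogram as a crude visual fingerprint.
        return Image.open(path).convert("RGB").resize(size).histogram()

    def distance(h1, h2):
        # Sum of absolute differences between histogram bins.
        return sum(abs(a - b) for a, b in zip(h1, h2))

    def visual_rhymes(uploaded_path, library_dir, count=5):
        # Return the count most similar-looking images from the library.
        query = fingerprint(uploaded_path)
        scored = []
        for name in os.listdir(library_dir):
            if name.lower().endswith((".jpg", ".jpeg", ".png")):
                candidate = os.path.join(library_dir, name)
                scored.append((distance(query, fingerprint(candidate)), candidate))
        scored.sort()
        return [path for _, path in scored[:count]]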

The engine we were handed did not support the kind of fuzzy, inexact matches we would need for the visual rhyming we intended. We drew up the prototype below, which sacrifices serendipity for folksonomy - allowing people to tag the world around them and find other related images through those textual tags.
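
The tag-based version is simpler to sketch. Assuming each stored image carries a set of user-entered tags (the image URLs below are hypothetical), matching is just tag overlap rather than any visual analysis:

    # Sketch only: folksonomy matching - rank stored images by shared tags.
    def tag_matches(new_tags, tagged_images, count=5):
        # tagged_images maps an image URL to the set of tags earlier users entered.
        wanted = {t.lower() for t in new_tags}
        scored = []
        for url, tags in tagged_images.items():
            overlap = len(wanted & {t.lower() for t in tags})
            if overlap:
                scored.append((overlap, url))
        scored.sort(reverse=True)
        return [url for _, url in scored[:count]]

    # Hypothetical example: a photo tagged "neon" and "alley" pulls up earlier street shots.
    library = {
        "http://flickr.com/photos/a.jpg": {"neon", "tokyo", "night"},
        "http://flickr.com/photos/b.jpg": {"alley", "neon"},
        "http://flickr.com/photos/c.jpg": {"beach", "day"},
    }
    print(tag_matches(["Neon", "alley"], library))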

In addition, we have a nascent Flash prototype on the right - hinting at the handset-based interactivity we were hoping for.

Upload an image
  Is it a match to the database?
    Yes - show 5 matching tag images from Flickr
    No - solicit a title/keywords, then show the entry
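
In code terms, the flow above might read roughly like this; everything here (the database dictionary, the Flickr stand-in, the keyword prompt) is a hypothetical placeholder for the pieces the prototype would actually use:

    # Sketch only: the upload flow from the diagram above.
    def flickr_images_for_tags(tags, count=5):
        # Stand-in for a Flickr tag search; a real version would call the Flickr API.
        return ["photo tagged '%s'" % t for t in tags][:count]

    def handle_upload(image_id, database):
        entry = database.get(image_id)
        if entry:
            # Yes - show 5 matching tag images from Flickr.
            return flickr_images_for_tags(entry["tags"], count=5)
        # No - solicit a title and keywords, store the new entry, then show it.
        title = input("Title: ")
        tags = [t.strip() for t in input("Keywords (comma-separated): ").split(",")]
        database[image_id] = {"title": title, "tags": tags}
        return database[image_id]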

Mockup experience

If the fuzzy version of the search were working, this is the kind of results page we would like to see: a serendipity circular.