I’ve been fascinated by many of the early attempts I’ve seen to create group photo albums. With the continued proliferation of smartphones, it seems like someone should have solved the problem of creating unified photo albums with photos taken from distributed devices.
I had hopes that the original version of Color would fulfill that promise, and I’ve seen other products such as afolio and Flock take worthy shots at solving this problem – it’s still too early to know whether they will succeed, but they’re using interesting approaches. Thinking about the problem of creating group albums also helped me better understand some of the possible thinking behind Facebook launching their own Camera app.
In thinking about this problem and playing with a few of the products in the market today, there are 3 things that jumped out at me as design / product decisions that each of these companies has had to wrestle with in building a product. And the decisions made along these lines have a profound impact on the overall experience.
1. Explicit or implicit album creation?
In my opinion, the most consequential decision is whether a service in this space asks the user to explicitly create an album or not. The advantage of explicit album creation is clear – the act of creating the album is itself an unambiguous signal that an event is happening. With explicit album creation, you can also ask the user for other interesting metadata (name of the album, location, etc.). The other advantage of explicit album creation is that it creates a container for all of the relevant photos for a given event – there’s less need to guess which photos should be grouped together and which should not. In theory, more structured data around albums and photos should make it easier to match up albums for the same event after the fact.
The disadvantages to explicit album creation are also clear – you’re introducing friction to the process by asking the user to create an album. If I want to take a picture, I just want to take a picture. Asking me to add explicit metadata like an album name and other details gets in the way of simply taking photos.
For implicit album creation to work, though, the system has to be smart about using all of the metadata about photos to make sure that you get the matching right. That can include everything from date and time of photos, location (if they are geotagged), other people in the photos, etc. In an even more ideal world, you’d have data about who’s in the picture (tagged people) and the ability to do some sort of matching to get the sense whether the photos were in fact taken in the same place.
Without good smarts around implicit data, you end up with lots of false positives. For example, in Flock I’ve seen a few shared albums where I understand why they were created, but they were not in fact from the same event. My friend and I both took some photos in San Francisco on the same day, but we were not together and were pretty far apart. Still, someone will figure this stuff out with clever hacks. One simple approach would be to compare groups of photos for similarity based on how they look, who appears in them, etc. Making it feel like magic will require clever use of those implicit signals.
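To make the implicit-grouping idea concrete, here’s a minimal sketch of the kind of heuristic these apps might start from: cluster photos by time gap and geotag distance. All of the names and thresholds here are my own invention, not any real product’s API, and the tight distance cutoff is exactly what would separate the San Francisco false positive described above.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

# Hypothetical photo record -- just the implicit signals a matcher might use.
@dataclass
class Photo:
    owner: str
    taken_at: datetime
    lat: float
    lon: float

def haversine_km(a: Photo, b: Photo) -> float:
    """Great-circle distance between two photos' geotags, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def group_implicitly(photos, max_gap=timedelta(hours=2), max_km=0.5):
    """Greedy single pass over time-sorted photos: a photo joins the current
    album only if it's within max_gap AND max_km of the previous photo.
    Loosen max_km and two people in the same city on the same day get
    merged into one album -- the false-positive failure mode above."""
    albums = []
    for p in sorted(photos, key=lambda p: p.taken_at):
        if albums:
            last = albums[-1][-1]
            if p.taken_at - last.taken_at <= max_gap and haversine_km(p, last) <= max_km:
                albums[-1].append(p)
                continue
        albums.append([p])
    return albums
```

A real system would layer face tagging and visual similarity on top of this, but even the crude version shows why threshold choices drive the false-positive rate.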
2. How do you build the attendee graph? One of the other challenges in building group or shared photos is figuring out whose content should be included in a shared album. In some cases, this is easy because all of the people at the event are already friends on Facebook. But oftentimes this is not the case – I’ve been to many events where people there are not my friends on Facebook (people do go to events to meet people, after all) or even friends-of-friends. So how do you build the attendee graph? Do you have everyone use a common hashtag or code for their photos? Do you allow people to invite others by SMS or email? One of the key things for making group or shared photo albums work is making it easy for people to specify (or otherwise detect) the people whose photos should be contributed. Again, this is an area where I think some smart hacking could figure this stuff out.
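Both mechanisms mentioned above – a shared hashtag/code and SMS or email invites – reduce to merging contributor sets per event. This is a hypothetical sketch under assumed data shapes (the event codes and function names are illustrative, not any real product’s API):

```python
from collections import defaultdict

def build_attendee_graph(hashtag_posts, invites):
    """Merge two signals into one attendee set per event.

    hashtag_posts: (user, event_code) pairs from photos tagged with a
        shared code the organizer handed out.
    invites: (inviter, invitee, event_code) triples from SMS/email invites.
    Returns a dict mapping event_code -> set of contributors whose photos
    should flow into that event's shared album.
    """
    attendees = defaultdict(set)
    for user, code in hashtag_posts:
        attendees[code].add(user)
    for inviter, invitee, code in invites:
        attendees[code].add(inviter)   # inviting implies attending
        attendees[code].add(invitee)
    return dict(attendees)
```

The interesting product work is in the "otherwise detect" path – inferring attendance from the implicit signals in section 1 rather than requiring either explicit mechanism.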
3. Native camera or in-app camera for photo capture? One of the other things I’ve been thinking about (and appreciating) more lately is the decision to use the phone’s native camera or to require the user to open an app and use a “camera” from within the app. As an app developer, by controlling the camera and requiring the user to use “your” camera, you have a bit more control over what data you grab in the background. And you can control the interface a bit more and potentially do some interesting things. For example, if you’re Facebook, having the Facebook Camera app means that you can make it much easier for photos to be uploaded to Facebook at the point of capture, as opposed to waiting for users to remember to do so after the fact.
I’m really curious about how this space will evolve. If you have thoughts, feel free to leave me a comment or send me a message on Twitter @chudson.