A new patent from Apple has surfaced, and it looks like the company is still working on algorithms that analyze an image and determine exactly what the computer is looking at, photo by photo and subject by subject. While we’ve seen something similar roll out in iPhoto (Faces), it sounds like Apple is also working on identifying structures and other objects in a photo.
The patent, filed today and dug up by Apple Insider, uses the term “faceprint” for the signature the technology generates from a photo. From the sounds of it, the patent aims to analyze and characterize exactly what is in a photo based on a series of dimension calculations; if the numbers don’t meet a certain threshold, the tag is rejected.
From Apple Insider:
[quote]“After a photo is analyzed, the management application compares the faceprint with other generated faceprints stored locally or remotely. If a match is discovered, the software tags the person with an identity, such as Tom Hanks. This stage relies on face recognition technology that assigns a reliability score to the faceprint, thus allowing for more accurate matches. If a score falls below a certain threshold, the system rejects the match.”[/quote]
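The flow described in the quote is essentially nearest-neighbor matching with a rejection threshold. Here is a minimal sketch of that idea; the vector representation of a faceprint, the cosine-similarity score, and the threshold value are all illustrative assumptions, not details from the patent.

```python
import math

def cosine_similarity(a, b):
    # Score two faceprint vectors by the angle between them (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_faceprint(new_print, library, threshold=0.9):
    """Return the best-matching identity, or None if no score clears the threshold."""
    best_name, best_score = None, 0.0
    for name, stored_print in library.items():
        score = cosine_similarity(new_print, stored_print)
        if score > best_score:
            best_name, best_score = name, score
    # Reject the match entirely when the reliability score is too low.
    return best_name if best_score >= threshold else None

# Hypothetical stored faceprints, keyed by identity.
library = {"Tom Hanks": [0.8, 0.1, 0.5], "Stranger": [0.1, 0.9, 0.2]}
print(match_faceprint([0.79, 0.12, 0.48], library))  # prints "Tom Hanks"
```

A real system would use high-dimensional embeddings rather than three-element lists, but the reject-below-threshold step works the same way at any scale.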
If anything, the patent sounds like an extension of what is already included in iPhoto. Apple’s photo gallery application already determines which people are in a photo and tags them for you automatically. I’ve yet to see anything (I could be wrong) that does the same for popular objects or buildings. Ideally, iPhoto would be able to tell you that your honeymoon photo was taken in Paris, right next to the Eiffel Tower, and then place it on a map for you without any manual input. Then again, isn’t that what the GPS coordinates on our photos are for now?
As always, these patents rarely point to actual consumer goods coming down the pipeline, but they do give us some idea of what Apple has been tinkering with in the deep confines of Cupertino.