A wearable depth-sensing camera may soon give sightless people a better way to master their environment.
Eelke Folmer and Vinitha Khambadkar think blind people could do without their white canes and instead navigate with a camera around their necks that gives spoken guidance in response to hand gestures.
Folmer and Khambadkar, researchers at the University of Nevada, presented the technology last week at the ACM Symposium on User Interface Software and Technology. Known as the Gestural Interface for Remote Spatial Perception, or GIST, the system uses a Microsoft Kinect sensor to analyze and identify objects in its field of view. “GIST lets you extract information from your environment,” Folmer says.
The Nevada research draws on the ideas of MIT’s Sixth Sense project, an “augmented reality” effort in which a wearable device projects information onto the physical world and lets the user interact with it by waving, pointing, or making other hand gestures. GIST works in the opposite direction: it collects data in response to hand gestures, augmenting the severely reduced spatial perception of blind users.
For example, if someone wearing GIST makes a “V” sign with the index and middle fingers, the device identifies the dominant color in the area framed by the fingers. If the user holds out a closed fist, the system reports whether a person is in that direction and how far away he or she is. (See the researchers’ brief demonstration video below.)
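To make that interaction concrete, here is a minimal sketch of a gesture-to-response mapping of the kind described above. It is an illustration only, not GIST’s actual code: the gesture labels, the FrameSummary fields, and the spoken phrases are assumptions, and the real work of recognizing gestures and extracting color and people from the Kinect’s depth and color streams is stubbed out.

```python
# Illustrative sketch only; not the researchers' implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FrameSummary:
    """Stand-in for information extracted from one Kinect frame (assumed fields)."""
    dominant_color: str                 # dominant color in the framed region
    person_distance_m: Optional[float]  # distance to a detected person, if any

def describe_color(frame: FrameSummary) -> str:
    """'V' sign: report the dominant color in the area framed by the fingers."""
    return f"The dominant color is {frame.dominant_color}."

def locate_person(frame: FrameSummary) -> str:
    """Closed fist: report whether a person is in that direction, and how far."""
    if frame.person_distance_m is None:
        return "No one is in that direction."
    return f"A person is about {frame.person_distance_m:.1f} meters away."

# Dispatch table: each recognized hand gesture maps to one spoken response.
GESTURE_HANDLERS = {
    "v_sign": describe_color,
    "closed_fist": locate_person,
}

def respond(gesture: str, frame: FrameSummary) -> str:
    handler = GESTURE_HANDLERS.get(gesture)
    return handler(frame) if handler else "Gesture not recognized."

if __name__ == "__main__":
    frame = FrameSummary(dominant_color="blue", person_distance_m=2.4)
    print(respond("v_sign", frame))       # The dominant color is blue.
    print(respond("closed_fist", frame))  # A person is about 2.4 meters away.
```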
Gestures aren’t intended to be the only means of interaction with GIST, however, thanks to the Kinect’s ability to recognize objects, faces, and speech. As Folmer explains: “You say to the sensor something like ‘This is my cup.’ You put it down on the table and say, ‘Hey, where’s my cup?’ It’ll say that it’s right in front of you.” The next trick is figuring out how to continue tracking the object when it moves farther away or behind the user.
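The cup interaction boils down to remembering where a named object was last seen and answering a later query with a direction and distance. The sketch below illustrates that lookup under assumed names and a simplified wearer-centered coordinate frame; it is not the researchers’ implementation, and the hard part, keeping the stored position current as the object or the user moves, is exactly the tracking problem Folmer describes.

```python
# Illustrative sketch only; names and data layout are assumptions.
import math
from typing import Dict, Tuple

# Last-known object positions in meters, in the wearer's frame of reference:
# x = meters to the right, z = meters ahead.
object_memory: Dict[str, Tuple[float, float]] = {}

def register_object(name: str, x: float, z: float) -> str:
    """Handle 'This is my cup': remember where the named object was seen."""
    object_memory[name] = (x, z)
    return f"Okay, I'll remember your {name}."

def locate_object(name: str) -> str:
    """Handle 'Where's my cup?': answer with a direction and distance."""
    if name not in object_memory:
        return f"I haven't seen your {name}."
    x, z = object_memory[name]
    distance = math.hypot(x, z)
    if abs(x) < 0.3:
        direction = "right in front of you"
    elif x > 0:
        direction = "to your right"
    else:
        direction = "to your left"
    return f"Your {name} is {direction}, about {distance:.1f} meters away."

if __name__ == "__main__":
    print(register_object("cup", 0.1, 0.6))  # Okay, I'll remember your cup.
    print(locate_object("cup"))              # Your cup is right in front of you, about 0.6 meters away.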
The researchers also plan to see whether GIST could effectively tell its wearers who is in front of them, by comparing the faces of people it detects to a small database that the user could set up with voice commands. If that doesn’t work, Folmer says, the researchers might try a system that identifies people based on their body shape.
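One plausible way to implement the planned face identification, sketched below purely as an assumption rather than the researchers’ method, is nearest-neighbor matching: a face detector and feature extractor (not shown) turn each detected face into a fixed-length feature vector, which is compared against a small gallery the user has enrolled by voice.

```python
# Illustrative sketch only; the detector/embedder and threshold are assumptions.
import math
from typing import Dict, List

gallery: Dict[str, List[float]] = {}  # name -> stored face feature vector

def enroll(name: str, features: List[float]) -> None:
    """Voice command 'This is Alice': store one reference vector per person."""
    gallery[name] = features

def identify(features: List[float], threshold: float = 0.6) -> str:
    """Return the closest enrolled name, or report an unknown person."""
    best_name, best_dist = None, float("inf")
    for name, ref in gallery.items():
        dist = math.dist(features, ref)  # Euclidean distance between vectors
        if dist < best_dist:
            best_name, best_dist = name, dist
    if best_name is None or best_dist > threshold:
        return "Someone I don't recognize is in front of you."
    return f"{best_name} is in front of you."

if __name__ == "__main__":
    enroll("Alice", [0.1, 0.9, 0.3])
    enroll("Bob", [0.8, 0.2, 0.5])
    print(identify([0.12, 0.88, 0.31]))  # Alice is in front of you.
```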
GIST will soon benefit from the new Kinect 2.0 sensor, which improves on the original’s tracking and can recognize individual fingers rather than just whole hands. The caveat is that the new Kinect is bigger and bulkier, which makes it ill-suited to prolonged wear around the neck.
But Folmer believes it’s just a matter of time until Kinect-like sensors are small enough to fit into smartphones; in fact, the process is already under way (see “Depth-Sensing Cameras Head to Mobile Devices”). More broadly, he argues that as mainstream computing devices get smaller, they will become increasingly useful to vision-impaired people, because they will rely on interfaces, such as speech, that blind users have been using for decades.