Making the Invisible Visible via Google Glass?
Food for thought: Will Google Glass app developers incorporate the analysis algorithms discussed in the video above?
From Chapter 4: Brainstorming the Wizer (visor with augmented intelligence built-in)
Eventually he spoke again: “So, maybe the AI accesses the current frame, references time of day, and using a gestalt subroutine, figures something is not right with a person in a slumped position. The skeletal overlay could do that–”
I interrupted him. “Uh huh. Yeah, I’ll leave the jargon to you. In essence what I mean is if the AI sees a person lying askew on the floor it can figure something’s not right by accessing feedback from the cameras. Facial expressions can easily be isolated from a stereo pair and perhaps pulse irregularity can be gleaned from image brightness, right?”
“Pulse reading from image brightness would depend on the fidelity of the image, but facial expressions, yes,” he said. “And if someone’s collapsed on the floor, chances are there will be a skeletal mismatch with the superimposed human IK rig.” — Memories with Maya
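The "pulse from image brightness" idea in the dialogue corresponds to a real research area, remote photoplethysmography (rPPG): the skin's brightness fluctuates faintly with each heartbeat, and the dominant frequency of that fluctuation gives the pulse rate. Below is a minimal, hypothetical sketch of the core signal-processing step, assuming NumPy and a precomputed per-frame mean-brightness trace; the helper name `estimate_pulse_bpm` and the synthetic test signal are my own illustration, not anything from the book or video.

```python
import numpy as np

def estimate_pulse_bpm(brightness, fps):
    """Estimate heart rate from a per-frame mean-brightness trace.

    Subtract the mean to remove the baseline, then pick the dominant
    FFT peak inside the plausible human heart-rate band
    (0.7-4.0 Hz, i.e. 42-240 beats per minute).
    """
    signal = np.asarray(brightness, dtype=float)
    signal = signal - signal.mean()
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0

# Synthetic demo: 20 s of video at 30 fps, a faint 72-bpm ripple
# riding on a constant skin brightness, plus camera noise.
fps, seconds, true_bpm = 30, 20, 72
t = np.arange(fps * seconds) / fps
trace = (128
         + 0.5 * np.sin(2 * np.pi * (true_bpm / 60.0) * t)
         + 0.2 * np.random.default_rng(0).standard_normal(t.size))
print(round(estimate_pulse_bpm(trace, fps)))  # → 72
```

As the character notes, this depends heavily on image fidelity: on a real camera the pulse ripple is a fraction of a brightness level, so practical systems average over a skin region, track the face, and band-pass filter before the frequency estimate.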
The video was sent in by Michael Smith, a reader of the story. It's encouraging to see that the book has genuine Hard Science fans among its readers, people who care about plausible science.