Today’s pedestrian navigation relies heavily on top-down maps, and hasn’t yet made the leap to 3D point-of-view navigation like many in-car systems have. A major reason for this is that GPS+compass navigation lacks the precision needed to create a compelling experience on a handheld device. Several meters and several degrees of error create an unpleasant, jittery experience — even in-car systems occasionally make mistakes about what road you’re on — and the precision required on the road is far lower than what’s needed on foot.
So how do you build an inexpensive navigation system that is sub-meter precise, with degree-accurate heading?
Four months ago, Occipital built a system that achieves this level of precision by using standard mobile video as an auxiliary position sensor. After an approximate GPS position is established, video frames are transmitted to a server, where they are compared against a vast database of street-level imagery captured by earthmine. Each earthmine image is backed by a dense 3D point cloud. Using all of this information, we are able to estimate the user’s position to within a meter, as well as three precise angles of orientation (6 degrees of freedom altogether).
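The core geometric step behind this kind of system — the post doesn’t spell out Occipital’s exact method — is camera resection: once features in a video frame are matched to database imagery whose pixels carry 3D coordinates, the 2D–3D correspondences determine the camera’s 6DoF pose. Below is a minimal, self-contained sketch of one classical way to do this, the Direct Linear Transform (DLT); the numbers and camera parameters are synthetic, not earthmine data.

```python
import numpy as np

def dlt_resection(X, x):
    """Estimate a 3x4 projection matrix P from n >= 6 correspondences
    between 3D world points X (n,3) and 2D image points x (n,2),
    by solving the homogeneous system A p = 0 in least squares."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Xh = [Xw, Yw, Zw, 1.0]
        # Two rows per correspondence, from x cross (P X) = 0
        A.append([0, 0, 0, 0] + [-c for c in Xh] + [v * c for c in Xh])
        A.append(Xh + [0, 0, 0, 0] + [-u * c for c in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)   # smallest singular vector, up to scale

def camera_center(P):
    """The camera's world position is the right null vector of P."""
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]
    return C[:3] / C[3]

# Synthetic check: build a camera with a known position, project points,
# then recover the position from the 2D-3D correspondences alone.
rng = np.random.default_rng(0)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics
th = np.deg2rad(10)
R = np.array([[np.cos(th), 0, np.sin(th)],
              [0, 1, 0],
              [-np.sin(th), 0, np.cos(th)]])                 # yaw rotation
C_true = np.array([1.0, 2.0, -5.0])                          # camera position
t = -R @ C_true
P_true = K @ np.hstack([R, t[:, None]])

X = rng.uniform(-2, 2, size=(12, 3)) + [0, 0, 5]   # points in front of camera
xh = (P_true @ np.hstack([X, np.ones((12, 1))]).T).T
x = xh[:, :2] / xh[:, 2:]                          # perspective divide

P_est = dlt_resection(X, x)
C_est = camera_center(P_est)
print(C_est)  # recovers the camera's world position
```

In practice a production system would use RANSAC over the matches to reject outliers and refine the pose with nonlinear optimization, but the 6-degree-of-freedom answer (position plus three orientation angles) falls out of exactly this kind of correspondence geometry.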
Tomorrow’s pedestrian navigation won’t be top-down; it will be superimposed on the world in front of you.