The geospatial community is abuzz with discussions concerning the growing abundance of remotely-sensed data: specifically, how recent improvements to the spatial, spectral, and temporal resolution of geospatial data create opportunities to conduct more detailed analysis. Many speculate about what future imaging technology will yield and how traditional desktop analysis workstation configurations will evolve into more flexible, distributed computing models. Whatever the topic, much of this discussion is framed around a persistent paradigm of airborne and spaceborne overhead sensor technology. Given the growth of mobile image capture devices and private or municipal surveillance systems, anyone involved in deriving actionable information from remotely-sensed pixels will likely find the industry discourse shifting focus down to ground level.
Although taking pictures with mobile phones is not a new concept, the ubiquity of high-quality imagery and video from them is unprecedented. A June 2012 figure cited by the CTIA wireless association reflects a 101% wireless penetration rate in the United States, up from 81.3% in 2007. This figure means that the number of active wireless device subscriptions in the country now surpasses the number of citizens. So, when a noteworthy event occurs, it’s safe to assume that someone got it on “film,” and that the captured image or video contains more pixels of information than most decent purpose-built digital cameras of a few years past. The dynamics of lower manufacturing costs and improved capabilities responsible for much of this mobile device boom have had a similar effect on the growth of video surveillance implementations.
After the recent bombing at the Boston Marathon, the prevalence of mobile image capture devices proved to be an invaluable resource for law enforcement personnel. The city made a plea to those at the scene of the tragedy to submit any photos or video they had captured, in the hope that information could be gleaned to assist in the apprehension of the perpetrators. In addition to cell phone images, local police commandeered video footage recorded by municipal and private surveillance cameras in the area, which resulted in untold volumes of data through which to comb. The extent of the image analysis technology brought to bear in this particular investigation by the police department is not publicly known, but it is safe to say that the use of imagery as a source of intelligence for law enforcement purposes is here to stay.
The Boston Police conceded that high-tech facial recognition software was not directly responsible for the recent speedy apprehension of the bombing suspects. However, there is no arguing that the abundance of localized video and imagery of the crime scene provided police with unprecedented information through which to comb for clues. Extracting information from image data collected by mobile phones and surveillance cameras may still be an intensive and manual process compared to some of the advanced image processing and analysis routines used with other forms of remotely-sensed data. But advanced analysis methods in the traditional realm of airborne and spaceborne remote sensing science and technology had humble beginnings too: they were conceived from manual interpretation methods, nourished by an increasing availability of data with which to experiment, and improved to meet growing customer and operational requirements. Image and video data collected by street-level platforms exist under similar circumstances. It won’t be surprising to see this novel source of spatial data drive innovative approaches to image analysis, and to see it fully integrated into the broader discourse of the geospatial industry.