We’ve Moved!

Visit our new blog location at www.exelisvis.com!


ENVI and ArcGIS: An Example of Moving Beyond Interoperability

In the geospatial industry, the term “interoperability” has become a bit of a buzzword over the last several years.  Interoperability is simply the ability of diverse systems to work together.  At a high level, I consider systems interoperable when they play nicely with each other.  The nicer they play, the less work users must do to move back and forth between them.  In the world of GIS, this generally means that data created in one geospatial environment can be quickly pushed to another, letting users combine multiple geospatial platforms and take advantage of the strengths of each.

As GIS has grown, the need for interoperability has become almost universally understood.  GIS technology is multidisciplinary by nature.  The power of GIS lies in its ability to pull information from many sources together to illustrate connections, relationships, and patterns that might not be obvious in any single data set.  This fusion of data enables organizations to make better decisions based on all relevant factors.  The process has become increasingly complex as data sources have multiplied and geospatial software providers face an ever-growing number of data types to support.  To address this, the geospatial industry has evolved a set of concepts, standards, and technologies for implementing GIS interoperability.  This has proven highly beneficial for the industry as a whole because it allows data to be integrated between organizations and across applications and industries.

For GIS users, the increased availability of data collected by remote sensing platforms has elevated imagery from a basic contextual backdrop to a rich source of geographic information from which to create foundational data layers.  This sea change in the use of remotely sensed data in GIS has been helped along by advances in remote sensing software that have consolidated spectral science and raster analysis methods into higher-level, solutions-based tools.

Specialized image analysis tools like ENVI provide GIS-related capabilities for creating, editing, and exporting valuable data to a GIS environment.  Through a partnership with Esri, Exelis VIS has taken steps toward moving beyond software solutions that are merely interoperable with Esri’s ArcGIS platform, working toward a level of integration that makes extracting usable data from remotely sensed sources and pushing it to ArcGIS virtually seamless.  This includes the ability to create data in ENVI and send it directly to ArcGIS Desktop, or even drag and drop data directly into ArcGIS Online.  Exelis VIS has also created a suite of analysis tools and workflows that can be accessed directly through ArcToolbox, so the capabilities of both software packages are available through the same interface.

This may not seem like a big deal, but to me it is quite amazing how far this technology has come in the past several years.  With the growth of the cloud and the introduction of web-based platforms such as ArcGIS Online, I expect the integration of powerful tools from a variety of sources into an easily customizable GIS environment that suits the specific needs of the user to continue on its current trajectory.


Looking Above for Help Below

I was in central London last weekend, and you can’t help but notice all of the work going on above ground to support Crossrail, a railway link that passes directly through central London and will join Maidenhead and Heathrow in the west of London to Shenfield and Abbey Wood in the east. It is the largest construction project in Europe. At its heart are approximately 25 miles of new tunnels to be constructed beneath the streets of London. Tunneling beneath a city is a huge task: varying geology, an array of sewers, tunnels, and pipes, and the foundations of buildings surely make it one of the most delicate and complicated of engineering projects. I would have thought that one of the biggest challenges posed by any tunneling project would be to mitigate effects on surface structures and reduce land movement and subsidence.

Whilst thinking about the depths of the earth below me in London, I looked to the sky above to consider what remote sensing could offer for monitoring land movement and subsidence.  Indeed, even in a cloud-plagued city like London, satellite Interferometric Synthetic Aperture Radar (InSAR) can measure and map changes on the Earth’s surface as small as a few millimeters.  By bouncing radar signals off the ground from the same point in space at different times, a radar satellite can measure changes in the distance between the satellite and the ground as the land surface subsides.  Mapping ground surface changes through the interferograms derived from InSAR data can surely help construction and engineering companies understand any subsidence caused by tunneling.
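To put some numbers on that millimeter-scale claim, here is a minimal sketch of the core InSAR conversion from interferometric phase change to line-of-sight displacement. The wavelength and phase values are illustrative (a generic C-band radar), not taken from any particular mission or Crossrail monitoring campaign.

```python
import numpy as np

# For repeat-pass InSAR, the radar path is two-way, so an unwrapped phase
# change of delta_phi radians corresponds to a line-of-sight displacement of
#     d = (wavelength / (4 * pi)) * delta_phi
WAVELENGTH_M = 0.056  # ~5.6 cm, a typical C-band radar wavelength (illustrative)

def phase_to_los_displacement_m(delta_phase_rad):
    """Convert unwrapped interferometric phase change (radians) to
    line-of-sight displacement (meters). Sign conventions vary by processor."""
    return (WAVELENGTH_M / (4 * np.pi)) * delta_phase_rad

# One full fringe (2*pi) is half a wavelength of motion, about 28 mm...
print(phase_to_los_displacement_m(2 * np.pi) * 1000)  # -> 28.0 mm
# ...so even an eighth of a fringe is already millimeter-scale.
print(phase_to_los_displacement_m(np.pi / 4) * 1000)  # -> 3.5 mm
```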

I wonder whether change detection techniques like InSAR are used to manage the impact of large-scale construction projects.


Exelis brings fun, games, and veterans assistance to Esri® this year!

We’re bringing something new to this year’s Esri® International User Conference. In an effort to educate more GIS users about the value of the information contained in geospatial imagery, we have designed some interactive, touch-screen games. These games are for anyone and everyone interested in learning more about the types of information contained in imagery: someone who knows there’s something useful to be gleaned from imagery but is unsure what that is or how to go about retrieving it. The games challenge players to visually extract information from imagery within a certain amount of time. Naturally, visual exploitation is slow and resource-intensive, so beating the clock is often harder than it sounds.

In the game shown below, you are an intel officer running change detection over the nuclear site located in Natanz, Iran (data courtesy of DigitalGlobe™ Inc.). You can see the before and after pictures, and you are invited to manually identify areas of change between the two images.

[Screenshot: the Natanz before-and-after change detection game]

The player is then provided with information about ENVI change detection, and the ENVI analysis result is overlaid on the ‘after’ image. The two images demonstrate the accuracy of the image analysis workflow and the value of the software without confusing the player with over-complicated demos and analysis.

[Screenshot: ENVI change detection results overlaid on the ‘after’ image]
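For anyone curious what the automated side looks like, here is a toy sketch of pixel-based change detection by simple image differencing. This is a generic illustration of the concept, not ENVI’s actual change detection algorithm, and the threshold is arbitrary.

```python
import numpy as np

def simple_change_mask(before, after, threshold=0.15):
    """Toy change detection: flag pixels whose absolute difference between two
    co-registered reflectance images exceeds a threshold. Real workflows add
    radiometric normalization, cloud masking, and smarter decision rules."""
    diff = np.abs(after.astype(float) - before.astype(float))
    return diff > threshold

# Illustrative use with synthetic stand-ins for two co-registered images
rng = np.random.default_rng(0)
before = rng.random((100, 100))
after = before.copy()
after[40:60, 40:60] += 0.5  # simulate new construction in one block
mask = simple_change_mask(before, after)
print(f"{mask.mean():.1%} of pixels flagged as changed")  # -> 4.0%
```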

There will be three different scenarios to play at this year’s UC; if you want to learn what the other two are, you’ll just have to stop by the booth and play! As an added bonus, for every person who plays one (or all) of the games, we’ll donate $5 to a local veterans organization via our non-profit volunteer organization, Exelis Action Corps. At this year’s User Conference, we’ve partnered with a local California organization called Team Rubicon, which unites the skills of military veterans with first responders to assist in disaster situations. Team Rubicon will receive each $5 donation made as a result of playing the games.

So if you’re in San Diego this year, be sure to stop by our booth, play some games, and give some assistance to those who have served our country. Who knew helping people could be so much fun!


How Does Google Get Accurate Geoinformation for Google Maps?

Have you ever wondered how Google Maps manages to have reasonably accurate map data for so much of the world? I ran across an interesting video from Google’s I/O 2013 conference about their Ground Truth project, which compiles and refines data from various authoritative sources to populate Google Maps. The project mapped 43 countries in its first five years and is working to expand to new ones.

It’s a really enormous job. Think about having to keep track of which directions you can turn from each road at every intersection in 43 different countries. For one thing, that’s obviously a lot of intersections. For another, the answers change over time. So the first two ingredients in Google’s magical formula for providing accurate map data are a huge number of people and a truly massive collection of geoinformation.

With all of those people and data, Google’s approach is to take the highest-quality raw map data and successively clean it up with satellite and aerial imagery and with their own panoramic, street-level Street View imagery. Most of this is done with Google’s own internal, homegrown mapping tool, Atlas. In addition to providing an interface for manual corrections, Atlas uses algorithms to automate certain tasks. For example, it has algorithms that check street names in the maps against street signs visible in Street View imagery.
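Google hasn’t published how Atlas works internally, but conceptually that kind of check might resemble fuzzy matching between OCR’d sign text and the map’s street names. A purely hypothetical sketch using only Python’s standard library:

```python
from difflib import SequenceMatcher

def name_matches_sign(map_name: str, ocr_text: str, min_ratio: float = 0.8) -> bool:
    """Hypothetical check (not Google's algorithm): does noisy OCR'd
    street-sign text plausibly match the street name stored in the map?"""
    a = map_name.lower().strip()
    b = ocr_text.lower().strip()
    return SequenceMatcher(None, a, b).ratio() >= min_ratio

print(name_matches_sign("Market Street", "MARKET STREET"))   # True
print(name_matches_sign("Market Street", "MARKET STREE1"))   # True: OCR read '1' for 't', ratio ~0.92
print(name_matches_sign("Market Street", "Mission Street"))  # False: different street
```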

The video briefly shows a really neat feature in Atlas (see minute 9:12) that starts with a top-down view of an area and then lets you browse around that view with a fish-eye viewer showing Street View data under the cursor. It’s hard to describe in words, but quite slick and intuitive when you see it.

[Image: the Atlas fisheye viewer over a top-down map view]

Google’s internal, homegrown Atlas software provides the ability to see a fisheye-lens view of a particular location. The fisheye view shows Google’s panoramic, street-level Street View data. (Credit: Stephen Shankland/CNET)

I was also impressed by Google’s process of repeatedly revising previously mapped areas. An important source of updates is Google Maps users themselves, who can report mistakes directly through the Google Maps interface. Google also has a browser-based product called Mapmaker that allows interested users to add their own map information and make their own changes to Google’s maps. Changes made this way are then moderated by Google staff to ensure they conform to Google’s policies. This is how Google Maps is able to provide data for 200 countries, far more than Google has addressed in their Ground Truth program.

If you ever use Google Maps and wonder how it all works, this video is worth a view.


Image Analysis: Making the Most of Your Time at the Shore

If hand digitizing geographic features is the bane of a GIS technician’s existence, tasking someone to manually delineate ephemeral features of, say, the banks of the Mississippi River is downright inhumane.  But someone has to do it; maintaining accurate extents of water bodies is critical for those responsible for maritime transportation operations, ecological research, and flood assessments.  If the trend of global climate change continues, the need to analyze shorelines and update feature databases will only grow.  Could this eventuality presage untold hours of servitude behind digitizing tablets?  Thankfully, the pace of geospatial innovation and the interoperability between GIS and remote sensing technology offer another, more efficient option for meeting this challenge.

[Image: river shoreline]

Reflecting on past trends in geospatial mapping techniques, the availability of higher-resolution imagery from remote sensing platforms marks a sea change for the GIS community.  Once imagery was recognized as a source of accurate and timely geographic information rather than simply a contextual map backdrop, users began to identify and trace features of interest directly onscreen, making the traditional method of hand-digitizing water features from paper maps for inclusion in electronic spatial analysis models seem archaic.  However, heads-up digitization of features like rivers and shorelines is still tedious and time consuming; the highly manual process entails many mouse clicks and a skilled GIS technician with a steady hand.  Manual efforts are expensive in both time and money, which proves to be a budgetary challenge for projects that require ongoing updates and analysis, like coastal mapping.  A more efficient method for extracting coastline information from image data is required.

Over several decades, image processing algorithms have been refined to interpret the spatial, spectral, and textural characteristics of the data contained within a pixel and its relationships to neighboring pixels.  Since these remote sensing techniques yield features of interest represented as vectorized spatial objects, they are highly interoperable with GIS analysis workflows.  Moreover, these feature extraction algorithms can be automated to comb through large amounts of geospatial data, making the delineation of coastline features an ideal application.  The rapid production of this geographic information is invaluable for updating existing maps or feeding decision support systems that address critical questions relating to land-water interfaces.
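As a concrete, if simplified, example of the kind of spectral rule these algorithms apply, the Normalized Difference Water Index (NDWI) separates water from land using the green and near-infrared bands, after which the water/land boundary can be traced and vectorized.  A minimal sketch, assuming generic co-registered reflectance arrays rather than any particular sensor or commercial workflow:

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Classify water using McFeeters' NDWI = (Green - NIR) / (Green + NIR).
    Water absorbs strongly in the NIR, so NDWI > 0 is a common first-cut rule.
    green, nir: co-registered reflectance arrays; returns a boolean water mask."""
    green, nir = green.astype(float), nir.astype(float)
    ndwi = (green - nir) / (green + nir + 1e-10)  # epsilon avoids divide-by-zero
    return ndwi > threshold

def shoreline_pixels(water_mask):
    """Boundary of the water mask: water pixels touching at least one land
    pixel (edge wraparound ignored for brevity). A production workflow would
    vectorize this boundary into GIS-ready polylines."""
    neighbors_all_water = (
        np.roll(water_mask, 1, axis=0) & np.roll(water_mask, -1, axis=0) &
        np.roll(water_mask, 1, axis=1) & np.roll(water_mask, -1, axis=1)
    )
    return water_mask & ~neighbors_all_water
```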

The prospects for more timely and efficient methods to map changing coastlines are bright.  Today, remote sensing platforms allow us to repeatedly capture information over vast coastal areas that are too costly or dangerous to access from the ground.  Soon, we will witness an explosion of new image data produced by a growing number of remote sensing platforms such as Unmanned Aerial Vehicles (UAVs) and small-scale satellites.  With a burgeoning image archive accessible via published image services, and the ability to run automated coastline feature extraction routines for ongoing and historical analysis, the only uncertainty is this:  What will you do with the time previously allocated to hand digitizing shorelines?


Landsat 8 Sensor Improvements Benefit GEOINT

People following the Landsat Data Continuity Mission (LDCM) know that NASA handed the controls over to the USGS on May 30, 2013 and Landsat 8 was born. Landsat 8 builds on a 40+ year heritage of earth resources remote sensing by providing free access to multispectral imagery on a global scale.

Landsat imagery has long been used in Defense and Intelligence circles as a valuable source of GEOINT to monitor land cover change, assess agricultural yields, and as a visualization backdrop for training and battlefield simulations.

New sensors on Landsat 8, the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS), provide significant improvements over the Thematic Mapper (TM) and Enhanced Thematic Mapper (ETM+) instruments on previous Landsat missions. This post will explore a few ways these improvements could lead to greater adoption of Landsat data in GEOINT operations.

The Landsat 8 OLI carries two new spectral bands. The first is a deep-blue channel in the visible portion of the spectrum. Information collected in this band is useful for characterizing coastal water and atmospheric aerosols. From a Defense and Intelligence perspective, this band could help produce more accurate near-shore water depth assessments, a key component of maritime mission planning.
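One published technique the new band could feed is the band-ratio bathymetry approach of Stumpf et al. (2003), which estimates shallow-water depth from the ratio of log-transformed blue and green reflectances. A minimal sketch; the coefficients m1 and m0 are placeholders that in practice must be fit against surveyed depths for each site:

```python
import numpy as np

def log_ratio_depth_m(r_blue, r_green, m1=50.0, m0=40.0, n=1000.0):
    """Band-ratio bathymetry after Stumpf et al. (2003).
    r_blue, r_green: water-leaving reflectance in a blue band (e.g., the new
    OLI coastal/aerosol band) and a green band. m1 and m0 are site-specific
    coefficients calibrated against known soundings (placeholder values here);
    n is a fixed constant chosen to keep both logarithms positive."""
    ratio = np.log(n * np.asarray(r_blue)) / np.log(n * np.asarray(r_green))
    return m1 * ratio - m0
```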

[Image: Landsat 8 imagery in a GEOINT context]

The second new band on the OLI covers a known water absorption feature in the shortwave infrared region of the spectrum and is strategically positioned to detect cirrus clouds. It feeds a new Quality Assurance overlay included with each Landsat 8 product; together they indicate the presence of clouds, water, and snow. These data could enable more accurate change detection results, as clouds are often responsible for false alarms when conducting reflectance-based analysis between dates. Intelligence organizations depend on accurate, global-scale change detection to assess whether their foundation data (i.e., base maps) are current.
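As a sketch of how an analyst might put those flags to work before running change detection, the snippet below unpacks cloud and cirrus flags from a Landsat 8 QA raster. The bit positions assume the Collection 1 Level-1 BQA layout (bit 4 = cloud, bits 11-12 = cirrus confidence); confirm the layout against the documentation for the specific product generation you are using.

```python
import numpy as np

CLOUD_BIT = 4           # single-bit cloud flag (assumed Collection 1 layout)
CIRRUS_CONF_SHIFT = 11  # 2-bit cirrus confidence: 0 none ... 3 high

def cloud_mask(bqa):
    """Pixels flagged as cloud in the QA band (uint16 array)."""
    return ((bqa >> CLOUD_BIT) & 1) == 1

def high_cirrus_mask(bqa):
    """Pixels with high (3) cirrus confidence."""
    return ((bqa >> CIRRUS_CONF_SHIFT) & 0b11) == 0b11

def valid_for_change_detection(bqa_t1, bqa_t2):
    """Pixels clear of cloud and high-confidence cirrus in BOTH dates, so that
    reflectance differences are less likely to be weather false alarms."""
    flagged = (cloud_mask(bqa_t1) | high_cirrus_mask(bqa_t1) |
               cloud_mask(bqa_t2) | high_cirrus_mask(bqa_t2))
    return ~flagged
```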

The last improvement I’ll mention is the increased signal-to-noise ratio achieved by moving from a whisk-broom to a push-broom sensor design. The push-broom design essentially allows Landsat 8 to take a longer look at the ground, increasing the sensitivity of the radiance data collected. The improved signal-to-noise ratio may slightly increase what is visually interpretable in the imagery, but it has larger implications for quantitative methods such as vegetation analysis, land cover classification, and sub-pixel material classification. The increased radiometric sensitivity may move Defense and Intelligence analysts to select Landsat 8 over higher spatial resolution assets to delineate cover and concealment areas (e.g., dense vegetation), map the extent of water inundation, or perform a broad-area search for manmade objects that are out of place.
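To illustrate why radiometric sensitivity matters for quantitative work: one component of that sensitivity is quantization depth, and Landsat 8 quantizes to 12 bits versus 8 bits for TM/ETM+. The toy example below quantizes a dark, sparsely vegetated pixel at both depths before computing an NDVI; the reflectance values are made up for the demonstration, but the point stands that finer quantization preserves a weak vegetation signal the coarser depth rounds away.

```python
import numpy as np

def quantize(reflectance, bits):
    """Round a reflectance in [0, 1] to the nearest representable level
    at the given bit depth."""
    levels = 2 ** bits - 1
    return np.round(reflectance * levels) / levels

red, nir = 0.046, 0.048  # made-up dark pixel with a weak vegetation signal
for bits in (8, 12):
    r, n = quantize(red, bits), quantize(nir, bits)
    print(bits, "bits -> NDVI =", round((n - r) / (n + r), 4))
# 8 bits  -> NDVI = 0.0     (red and NIR round to the same level; signal lost)
# 12 bits -> NDVI = 0.0234  (the weak vegetation signal survives)
```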

What do you think? Will these improvements lead to new or more accurate applications in the Defense and Intelligence sector?
