Integration heightens insight into geospatial imagery

Tagging practices and data standards make it simpler to merge geospatial images, maps and related data.

Satellites and unmanned aerial vehicles now carry many types of sensors that give warfighters a beneficial yet dizzying mix of thermal and radar images, in addition to conventional photos. Gleaning information from a stack of similar images isn’t easy, so geospatial intelligence providers are working on new techniques that make it possible to combine them into integrated images.

Blending inputs taken from different angles and elevations requires a lot of work, beginning with proper tagging so that it’s easy to find the right data and integrate images taken at the same time. Powerful computers and human analysts are also relying on several standards that help them blend many inputs into visuals that can help warfighters accomplish their missions.


Combining images taken by multiple sensors located at various elevations and viewing angles requires advances in many areas that range from tagging to standards development and improved techniques for distributing data to warfighters in the field. The emergence of lightweight portables such as smart phones and tablets is driving increased demand for technologies and techniques that make it easy to find all the necessary files and retrieve them quickly.

“As smart devices become more prevalent, there is increased emphasis on the need to provide better tagging of information so that it can be easily discovered by users,” said Matt Cro, AGC Imagery Systems Branch chief at the Army Geospatial Center. “The use of complex, multisource images also requires more efficient ways to stream the data to these devices.”

AGC is the Army’s knowledge center for geospatial expertise, and it supplies geospatial products to units in contact.

System developers must also give analysts and users ways to blend imagery so they can see all the information provided by multiple sensors. That includes tools that make it possible to layer images and change their properties to highlight different aspects of the integrated image.

“Our Army Geospatial Enterprise GeoGlobe has an opacity tool to allow a user to look at varied transparency levels for all of the raster data, whether it is a map or an image,” said Glenn Frano, AGC Terrain Analysis Branch chief. “The user can, if desired, keep a transparency level set as well for their work.”
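
The opacity control Frano describes boils down to alpha blending of co-registered raster layers. The Python sketch below illustrates that general idea only; it is not GeoGlobe code, and the arrays and the 40 percent opacity setting are invented for the example.

```python
import numpy as np

def blend_layers(base, overlay, opacity):
    """Alpha-blend an overlay raster onto a base raster.

    base, overlay: float arrays in [0, 1] with identical shape
                   (e.g., an orthophoto and a thermal layer already
                   registered to the same grid).
    opacity:       overlay transparency setting, 0.0 = invisible,
                   1.0 = fully opaque.
    """
    opacity = float(np.clip(opacity, 0.0, 1.0))
    return (1.0 - opacity) * base + opacity * overlay

# Example: view a thermal layer at 40 percent opacity over a base map.
base_map = np.random.rand(512, 512, 3)   # stand-in for map/orthophoto pixels
thermal  = np.random.rand(512, 512, 3)   # stand-in for a co-registered thermal layer
combined = blend_layers(base_map, thermal, opacity=0.4)
```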

Many agencies are involved in related efforts. For example, the National Geospatial-Intelligence Agency's Rapid Delivery of Online GEOINT program is driving to make satellite imagery available in less than 24 hours. Commercial providers such as GeoEye and DigitalGlobe are working to enhance their Web hosting and dissemination systems.

“Accessibility is a big thrust; we’re investing in enterprise-wide technology and techniques to make imagery and value-added products available online and on demand,” said Brian O’Toole, GeoEye’s CTO.

Tagging: Standards are it

When analysts and users work with many different types of files, they need techniques that make those files easy to find. Standards-based tagging is one of the most important tools available.

Standards ensure that all file names and descriptive data are categorized using exactly the same techniques, such as ordering the categories in a common format, making it simpler for humans and computers to search for the data they want. Military personnel are working closely with a range of standards bodies to ensure that their efforts are complementary.
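
To see why a common format matters, consider a minimal, hypothetical metadata record. The Python sketch below invents its own field names rather than following any specific NGA or OGC schema, but it shows how consistently filled fields make simple discovery queries possible.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ImageRecord:
    """Hypothetical metadata record. A real system would follow an agreed
    standard, but the idea is the same: every producer fills in the same
    fields the same way."""
    file_name: str
    sensor: str              # e.g., "EO", "IR", "SAR"
    collected: datetime
    bbox: tuple              # (min_lon, min_lat, max_lon, max_lat)
    tags: list = field(default_factory=list)

def find(records, sensor=None, tag=None):
    """Simple discovery query over consistently tagged records."""
    hits = records
    if sensor is not None:
        hits = [r for r in hits if r.sensor == sensor]
    if tag is not None:
        hits = [r for r in hits if tag in r.tags]
    return hits

catalog = [
    ImageRecord("img_0001.ntf", "IR", datetime(2011, 6, 1, 4, 30),
                (44.30, 33.20, 44.45, 33.35), ["bridge", "night"]),
    ImageRecord("img_0002.ntf", "EO", datetime(2011, 6, 1, 4, 32),
                (44.30, 33.20, 44.45, 33.35), ["bridge"]),
]
print(find(catalog, sensor="IR", tag="bridge"))
```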

“We are working very closely with the NGA and industry to ensure that we use standard formats across all of our sensors and processing platforms as well as in our exploitation systems,” Cro said. “We also need to make sure that products are disseminated in standard formats. This enables the information to be easily cataloged and discovered by other users.”

Those efforts involve standards bodies such as the Open Geospatial Consortium, which writes specifications adopted by organizations as diverse as Oracle, Google Earth and the Homeland Security Department. However, military users have requirements that haven’t yet been addressed. Security classification, for example, is still handled with differing formats, prompting some top commanders to call for improvements.

“We need to find a way to tag information by its security level,” said Vice Adm. William McRaven, commander of the Special Operations Command. “Something like unclassified data’s border is green, if it’s secure it’s blue. We need to look at how we can use artificial intelligence to quickly tell people the level of classification.”
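
McRaven’s color-coding idea can be pictured as a simple lookup from a classification marking to a display color. In the Python sketch below, only the green and blue entries follow his example (reading “secure” as secret); everything else is a placeholder, and a real system would follow the applicable security guidance rather than a hard-coded table.

```python
# Toy mapping from classification marking to a display border color.
# Only UNCLASSIFIED -> green and SECRET -> blue follow the example in
# the quote above; anything unrecognized falls back to a neutral color.
BORDER_COLORS = {
    "UNCLASSIFIED": "green",
    "SECRET": "blue",
}

def border_color(marking: str, default: str = "gray") -> str:
    """Return the border color for a classification marking."""
    return BORDER_COLORS.get(marking.strip().upper(), default)

print(border_color("unclassified"))  # -> "green"
print(border_color("SECRET"))        # -> "blue"
```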

As some researchers focus on improved techniques for challenges such as security, others are working on additional methods that will give warfighters more capabilities. They note that tagging advances have already made military imagery more valuable while setting the stage for further upgrades.

“Improved tagging labels images with a currency, richness and relevance not found in traditional classification systems,” said Doug Caldwell, physical scientist at AGC’s Engineer Research and Development Center-Topographic Engineering Center. “Imagery tagging, for example, brings one of the most powerful and successful Web 2.0 concepts to the military. Soldier-generated tags are quick, inexpensive and scalable, representing an efficient, effective value-additive means of identifying and sharing feature-rich, information-laden imagery with other soldiers.”

One facet of tagging is dispersing this labeling rather than forcing operators at centralized sites to tag information they aren’t familiar with. Training warfighters to tag data in the field adds some work for them, but soldier-generated tags provide meaningful benefits now and will continue to do so as use patterns change.

“Better tagging of the source and its significance is important,” Cro said. “We also need to provide direct access from the platform to the point of use as opposed to spending resources moving the video to a central point and then disseminating it. The information is often very perishable and needs to be in the user’s hands as quickly as possible. It’s also important to realize that the information derived from the video may be more useful than the video stream, so automated queuing and screening and the dissemination of alerts become important.”

This focus on direct access at the point of use is likely to become more important as improved networks extend data availability closer to the edge. Warfighters could benefit significantly if they quickly get imagery from a nearby team rather than waiting for it to arrive after being routed through a central server.

This aspect of real-time networking includes adopting smart phones and tablets that can be carried deep into the field. Fielding these compact portables will require additional advances in technology and changes in use patterns.

Smaller screens won’t be able to display large maps or photos, which will often force users to determine precisely what they need. For example, warfighters might want to eschew a wide-area view and focus on a two-block area that can be easily discerned on a smart-phone screen. At the same time, personnel in the field might not want to rely totally on network connectivity.

“Usually, you’ll want to load the image onto the phone before going into the field,” said Wes Hildebrandt, chief systems architect at GeoEye Analytics. “Most soldiers don’t want to go into battle relying on the connection when they don’t need to.”

That’s especially true when warfighters go into areas where connections may be disconnected, intermittent or limited in bandwidth. Modern devices have a fair amount of flash memory, so storing files ahead of time provides insurance in case connections are unavailable.
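
Loading imagery ahead of time, as Hildebrandt suggests, is essentially a pre-caching job: enumerate the map tiles covering the area of interest and store them in local flash so the device can fall back on them when the network drops. The sketch below uses the standard Web Mercator tile math; the tile server URL is a placeholder, not a real endpoint.

```python
import math
import os
import urllib.request

TILE_URL = "https://example.mil/tiles/{z}/{x}/{y}.png"   # placeholder endpoint
CACHE_DIR = "tile_cache"

def deg2num(lat, lon, zoom):
    """Convert lat/lon to Web Mercator (slippy map) tile indices."""
    n = 2 ** zoom
    lat_rad = math.radians(lat)
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def precache(min_lat, min_lon, max_lat, max_lon, zoom):
    """Download every tile covering the bounding box into local storage."""
    x0, y_south = deg2num(min_lat, min_lon, zoom)   # tile y grows southward
    x1, y_north = deg2num(max_lat, max_lon, zoom)
    for x in range(min(x0, x1), max(x0, x1) + 1):
        for y in range(min(y_north, y_south), max(y_north, y_south) + 1):
            path = os.path.join(CACHE_DIR, str(zoom), str(x), f"{y}.png")
            if os.path.exists(path):
                continue                             # already cached
            os.makedirs(os.path.dirname(path), exist_ok=True)
            urllib.request.urlretrieve(TILE_URL.format(z=zoom, x=x, y=y), path)

# Pre-load a small area (roughly a few blocks at zoom 17) before stepping off:
# precache(33.300, 44.350, 33.305, 44.356, zoom=17)
```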

Although some users need to store backup files, most will focus on getting updates transferred via networks. To facilitate this, more suppliers are storing images in the cloud. Those images can be accessed via the Internet, making it simple for users in any region or military branch to find them.

“Data is available through Open Geospatial Consortium-based Web client service, so any users can access and consume images for any application,” O’Toole said.
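
The services O’Toole refers to include OGC standards such as the Web Map Service (WMS). The sketch below assembles a WMS 1.3.0 GetMap request; the parameter names come from the published specification, while the server URL and layer name are placeholders for illustration.

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width=1024, height=1024,
                   crs="EPSG:4326", fmt="image/png"):
    """Build an OGC WMS 1.3.0 GetMap URL.

    For EPSG:4326 in WMS 1.3.0 the bounding box follows the CRS axis
    order, i.e. (min_lat, min_lon, max_lat, max_lon).
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

# Placeholder server and layer name -- any OGC-compliant client and
# service would exchange requests of this general form.
print(wms_getmap_url("https://imagery.example.com/wms",
                     "recent_ortho",
                     (33.30, 44.35, 33.31, 44.36)))
```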

Layering imagery

During the past few years, the sensors found in satellites, UAVs and terrestrial observation points have expanded far beyond basic photography. Data gathered by radar and thermal and infrared imagers brings valuable information that helps warfighters better understand what’s happening so they can plot their strategies more efficiently.

The rapid pace of change in the semiconductor industry is a key factor in making more data available. Fast microprocessors and systems make it simpler to combine these sensor inputs into a single image. “There have been a lot of advances that let us more accurately blend more sources so everything can be viewed in one image,” Hildebrandt said.

Advances in high-performance computing, often achieved by combining several computers into a single system, are among the biggest enablers. When large numbers of fast microprocessors transform small pieces of a large image, the components can be adjusted so that all the elements fit together precisely.

These adjustments are needed because images are taken from different altitudes and from different angles, making it tough to register multiple layers so they integrate properly. Maps created using satellite imagery will often form the basis of these blended images.

“You really need an accurate foundational map to register other inputs,” O’Toole said. “When people start fusing maps with other data, it often happens downstream.”
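
Registering an off-angle image to a foundational map is commonly done with feature matching and a projective transform. The sketch below uses OpenCV’s ORB features and RANSAC homography estimation as one widely used approach; it is a generic illustration rather than the pipeline any of the organizations quoted here actually uses, and the file names are placeholders.

```python
import cv2
import numpy as np

def register_to_base(base_path, new_path):
    """Warp a newly collected image onto a base map using ORB feature
    matching and a RANSAC-estimated homography."""
    base = cv2.imread(base_path, cv2.IMREAD_GRAYSCALE)
    new = cv2.imread(new_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(4000)
    kp_base, des_base = orb.detectAndCompute(base, None)
    kp_new, des_new = orb.detectAndCompute(new, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_new, des_base), key=lambda m: m.distance)[:500]

    src = np.float32([kp_new[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_base[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects mismatches caused by viewpoint and altitude differences.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = base.shape
    return cv2.warpPerspective(new, H, (w, h))   # new image in the base map's frame

# registered = register_to_base("base_map.png", "uav_frame.png")
```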

Although those multisource images can provide more information for warfighters, blending multiple files together correctly is not a simple task done behind the scenes by powerful servers. Human input remains a critical factor, both in determining what types of data can be put together and in making sure the integrated images are processed correctly.

“We also need to be able to provide better training and capabilities for advanced geospatial intelligence processing as the problem sets become more difficult and the variety of information becomes greater,” Cro said. “Full-motion video and other intelligence information linked spatially and temporally to the standard and sharable geospatial foundation becomes very powerful in the decision-making process with regard to exploitation, behavior pattern and other factors.”

Although high-performance computers make blending feasible, human intervention remains important. Imagery often comes from other groups or agencies, so representatives from all of these entities will probably want to play a role in the blending and the analysis that follows.

“Multidisciplinary teams bring together people familiar with different types of intelligence,” Hildebrandt said. “They can help narrow down the areas of interest.”

The need for human insight is growing rather than shrinking as computing platforms become more powerful. When users ask for integrated imagery, people must step in to process the various formats. That’s driving demand for improved tools that help image analysts keep pace with growing demands.

“We need to ensure that our analysts are provided with training and tools that allow them to access and work with imagery and other information from a variety of sources,” Cro said. “This will include better tools for working with video and the development of more automated tools for tagging and exploitation of data.”

As the amount of video streaming from UAVs and other sensors grows, these tools will also help reduce fatigue for analysts who must review hours of video. When there isn’t a reliable capability for automated queuing or screening across the large number of video sources, the entire burden falls on the analyst, which can lead to mistakes.

One way to reduce analyst fatigue is by having computers compare incoming video to previous imagery so humans don’t have to bother with unchanged imagery. Image processors can compare images on a pixel-by-pixel basis, determining when something of potential importance has changed. That will reduce the volume of data that analysts must examine while also reducing demand for bandwidth.

“There’s a lot of interest in change detection,” Hildebrandt said. “That way you don’t have to stream in the entire image; you only stream in the pixels you need.”
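
A minimal version of that change detection is a pixel-by-pixel comparison against reference imagery, with a threshold deciding which pixels are different enough to pass along. The sketch below is illustrative only; the threshold and minimum-change fraction are made-up parameters, not values drawn from any fielded system.

```python
import numpy as np

def detect_changes(reference, current, threshold=0.1, min_fraction=0.01):
    """Compare the current frame to reference imagery pixel by pixel.

    reference, current: float arrays in [0, 1] with identical shape.
    Returns (changed_mask, changed_pixels) -- the mask and only those
    pixel values that differ, which is all that needs to be streamed.
    Returns (None, None) when too little has changed to alert an analyst.
    """
    diff = np.abs(current.astype(float) - reference.astype(float))
    changed_mask = diff > threshold
    if changed_mask.mean() < min_fraction:
        return None, None            # nothing worth an analyst's time
    return changed_mask, current[changed_mask]

# Example with synthetic frames: a bright patch appears in the scene.
ref = np.zeros((256, 256))
cur = ref.copy()
cur[100:160, 40:100] = 1.0
mask, pixels = detect_changes(ref, cur)
print(None if mask is None else f"{pixels.size} changed pixels to stream")
```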

Although much of the focus is on imagery, written documents can also hold critical information. E-mail messages from forces in the region can provide helpful intelligence to warfighters beyond the original recipients, and captured documents can also yield very useful material.

Many companies provide tools that help users search text files. Some, such as CACI International, combine many types of software with their own search programs to perform extensive searches.

“We leverage tons of third-party software so our tools can handle everything from translation to named entity extraction, where we automatically pick out names, dates and other data from the files,” said Jeffrey Perona, a systems engineer at CACI. These entity extraction searches are performed in the document’s native language to eliminate issues that arise for words such as “Gaddafi” that have multiple spellings, he added.
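
Named entity extraction of the kind Perona describes can be approximated with off-the-shelf NLP libraries. The sketch below uses spaCy’s multilingual model as a stand-in rather than CACI’s tooling; the model name is just one readily available option, and in practice the text would be processed in its native language, per Perona’s point.

```python
import spacy

# Multilingual NER model used here as a stand-in; install with
#   pip install spacy && python -m spacy download xx_ent_wiki_sm
nlp = spacy.load("xx_ent_wiki_sm")

def extract_entities(text):
    """Pull named entities (people, places, organizations) out of a document."""
    doc = nlp(text)
    return [(ent.text, ent.label_) for ent in doc.ents]

print(extract_entities("Reports from Benghazi mention Muammar Gaddafi and NATO."))
```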