In the realm of military geospatial intelligence, the need to quickly share information with the right people continues to run up against a mounting challenge: how to ingest and process the mushrooming volumes of live video and other data streaming from sensors in the skies and on the ground.
Agencies such as the National Geospatial-Intelligence Agency (NGA) have made significant strides in gathering, analyzing and getting information into the hands of decision-makers at all levels, thanks to a variety of advances in video streaming and analytics technologies.
But to compound that challenge, there are ongoing questions about the level and speed of analysis needed before analysts can share geospatial intelligence with warfighters.
The technologies and debate over how best to marshal geospatial intelligence arise from the underlying desire to shorten the sensor-to-shooter cycle by enabling warfighters to receive and act on real-time intelligence. The pendulum is swinging toward giving warfighters greater access to raw data, even as the debate continues over how to make that information useful.
“Some information is disseminated directly from the sensor to operational forces for immediate use,” said Maj. Gen. Bradley Heithold, commander of the Air Force Intelligence, Surveillance and Reconnaissance Agency. “In many instances, these data are useful without any additional analysis. Clearly, our emphasis is ensuring timely access to critical information. But there will continue to be a critical need for the more traditional, in-depth analysis to support both ongoing operations and the broader community's requirements."
Given the exponential growth in platforms and sensors and the resulting volume of data, the military is relying increasingly on network-centric solutions to ensure that data is available to users across the enterprise. In turn, that has forced the military services and intelligence agencies to increasingly adhere to established standards to ensure that, for example, information collected from Air Force and Army sensors is available in the right format, can be entered into compliant management systems and is easily discoverable by all users.
However, the explosion of full-motion video and other geospatial intelligence has increased the need for new tools and capabilities to minimize human involvement in processing data.
“Not all data collected requires human intervention to extract useful nuggets of information,” Heithold said. “Automated tools to aid the exploitation of data are critical, as is the ability to make information available to operators at the lowest levels."
But getting all those tools to work together remains a huge challenge, say project managers and others.
Video in Hand
When full-motion video flows directly from a sensor to warfighters, they usually view it on one of the several thousand one-system remote video terminal (OSRVT) systems in the field in Iraq and Afghanistan. Those terminals are handheld units from which soldiers and Marines can stream full-motion video from Army unmanned aerial vehicles, including the Raven, Shadow, Hunter and preproduction Sky Warrior Extended Range Multi-Purpose aircraft, in addition to some joint, manned platforms.
“Without a doubt, the best tool we have put into place to decrease the timelines of the kill chain is the OSRVT,” said Col. Gregory Gonzalez, project manager of the Army's unmanned aircraft systems. “It is not a program of record, but we received supplemental funding to put a capability into theater that allows our soldiers to receive full-motion video from whatever aircraft is flying within their line of sight. It allows the soldiers to see instantly what the unmanned systems are looking at from the air.”
A variation on the OSRVT allows Apache helicopter pilots to also stream live video into the cockpit via an unmanned aircraft system. In what the Army calls a manned/unmanned teaming arrangement, an Apache pilot in the rear seat and co-pilot/gunner in the front seat can see what the UAV is viewing. They can target objects or enemies in the UAV's view and fire on them without needing to directly see them.
“That really gets to the heart of how we shorten the kill chain — you provide the shooters with the direct feeds,” Gonzalez said.
Until recently, those video feeds weren’t necessarily encrypted. But the military services have started efforts to encrypt all video transmitted from UAVs since it was discovered that Iraqi insurgents had hacked into Predator transmissions using inexpensive, commercial hardware. Likewise, the military services are modifying the OSRVTs on the ground to decrypt the encrypted transmissions.
L-3 Communications provides the OSRVTs, and it has already added encryption to its latest system, known as Rover 5. AAI, which builds the Shadow UAV for the Army and is also part of the OSRVT program, links its live video stream with map coordinates so soldiers can reference where that video is coming from.
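The idea of pairing a live video stream with map coordinates can be sketched as a per-frame metadata record that is looked up by timestamp. The record and field names below are invented for illustration; fielded systems encode this kind of metadata with dedicated standards rather than an ad hoc structure.

```python
from bisect import bisect_left
from dataclasses import dataclass

@dataclass(frozen=True)
class GeoTag:
    """Hypothetical metadata pairing a video timestamp with ground coordinates."""
    timestamp: float   # seconds since the start of the feed
    lat: float         # latitude of the sensor footprint center, degrees
    lon: float         # longitude, degrees

def tag_for_frame(tags: list[GeoTag], t: float) -> GeoTag:
    """Return the most recent geotag at or before time t (tags sorted by timestamp)."""
    i = bisect_left([g.timestamp for g in tags], t)
    if i < len(tags) and tags[i].timestamp == t:
        return tags[i]
    if i == 0:
        raise ValueError("no geotag precedes this frame")
    return tags[i - 1]

tags = [GeoTag(0.0, 33.30, 44.40), GeoTag(1.0, 33.31, 44.41), GeoTag(2.0, 33.32, 44.42)]
print(tag_for_frame(tags, 1.5).lat)  # most recent fix before t=1.5 → 33.31
```

With this lookup, any frame a soldier pauses on can be referenced back to the patch of ground the sensor was covering at that moment.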
L-3 has built about 350 Rover 5s and has contracts for about 800 units, said Aaron Baker, technical program manager at the L-3 Rover Program Office in Salt Lake City. Rover 5 is a significantly smaller version of Rover 4 and doesn’t need an antenna or Panasonic Toughbook to view video.
Rover 4 and 5 are unidirectional systems, meaning that the aircraft only sends video to a terminal. For the next generation, the Army plans to introduce a bidirectional capability in the form of Rover 6. With that system, a soldier who receives streaming video could, in some circumstances, take control of the sensor that is transmitting the video and steer the aircraft to look at anything.
Like Rover 4, Rover 6 will include a ruggedized laptop, which will also give it the ability to display high-definition video.
“One of the things we’re doing with the Rover product is allowing for high-definition video to come through Rover as a pass-through to a laptop, which can be displayed in high-def,” Baker said. “If you have enough horsepower, you can display the high-def in native resolution.”
Baker said L-3 is also engaged in a research and development effort on the Rover 5 to “see if we can dumb down the HD so it can be watched on a standard screen.” The company doesn't have any customers yet for such a capability, but there have been lots of discussions, he added.
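"Dumbing down" HD for a standard screen amounts to scaling the frame to fit a smaller raster while preserving aspect ratio. A minimal sketch of that calculation (the resolutions here are illustrative, not L-3's actual targets):

```python
def fit_resolution(src_w: int, src_h: int, max_w: int, max_h: int) -> tuple[int, int]:
    """Scale (src_w, src_h) to fit within (max_w, max_h), preserving aspect ratio."""
    scale = min(max_w / src_w, max_h / src_h, 1.0)  # never upscale
    return round(src_w * scale), round(src_h * scale)

# 1080p downscaled to fit a 640x480 standard-definition raster
print(fit_resolution(1920, 1080, 640, 480))  # → (640, 360)
```

The width-limited result (640 x 360 rather than 640 x 480) is why a 16:9 HD feed appears letterboxed on a 4:3 standard screen.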
Better Networking Also Shortens the Cycle
Video is only a part of the picture. Situational awareness that shortens the sensor-to-shooter cycle also depends on advances in networking technologies that fuse data from multiple sensors into a single presentation and let many warfighters simultaneously view that information in real time.
One of the most important networked systems that the Army has introduced during the past couple of years is the Base Expeditionary Targeting and Surveillance Systems-Combined (BETSS-C) system. It incorporates several elements: the Rapid Aerostat Initial Deployment (RAID) system, Cerberus mobile surveillance towers, Rapid Deployment Integrated Surveillance System (RDISS) and Force Protection Suite (FPS).
RAID is a tower-mounted electro-optical infrared sensor to detect motion outside the perimeter of a forward operating base. RDISS also is a fixed surveillance system that includes two pan-tilt-zoom cameras and eight fixed cameras with infrared capability for visual surveillance immediately outside protective barriers. The cameras all feed into a workstation that supports 24-hour continuous recording with video files that can be saved and stored for later analysis. A single soldier can monitor 10 or more cameras at one location.
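One soldier watching 10 or more cameras implies tiling the feeds on a single display. A tiny sketch of choosing a near-square grid for N feeds (purely illustrative, not how any fielded workstation lays out its video):

```python
import math

def grid_for(n_feeds: int) -> tuple[int, int]:
    """Smallest near-square (rows, cols) grid that holds n_feeds tiles."""
    cols = math.ceil(math.sqrt(n_feeds))
    rows = math.ceil(n_feeds / cols)
    return rows, cols

print(grid_for(10))  # ten cameras tile into a 3x4 grid → (3, 4)
```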
FPS includes pan-tilt-zoom cameras, a thermal imaging system, an AN/PPS-5C Man-Portable Surveillance and Target Acquisition Radar and Battlefield Anti-Intrusion System unattended ground sensors to detect enemy activity. All of those sensors connect to the Tactical Automated Security System for command and control.
Cerberus consists of a trailer-mounted tower with multiple detection and assessment systems on a single mobile platform. Cerberus uses three detection sensors: ground surveillance radar, video motion detection on staring cameras, and optional unattended ground sensors. Multiple towers can be networked to provide a common operating picture.
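Networking multiple towers into a common operating picture means, at minimum, merging each tower's time-stamped detections into one chronological stream for the operator. A minimal sketch, assuming each tower already emits detections sorted by time (the tower names and detection fields are invented):

```python
import heapq

def common_picture(*tower_feeds):
    """Merge per-tower detection lists (each sorted by timestamp) into one chronological stream."""
    return list(heapq.merge(*tower_feeds, key=lambda d: d[0]))

tower_a = [(10.0, "radar", "track 7"), (14.2, "camera", "motion, NE fence")]
tower_b = [(12.5, "ground sensor", "seismic hit")]

for t, sensor, event in common_picture(tower_a, tower_b):
    print(f"{t:6.1f}s  {sensor:13s} {event}")
```

Because each feed is already time-ordered, `heapq.merge` interleaves them without re-sorting everything, which matters when many towers stream continuously.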
“At the tactical level, these persistence capabilities continue to improve the sensor-to-shooter cycle as interoperability and networking are improved,” said Army Col. Linda Herbert, project manager of Night Vision/Reconnaissance, Surveillance and Target Acquisition (NV/RSTA). “When these family of systems work together as one network, there is greater ability for the soldier to see the enemy in multiple locations at the same time."
“When that information is networked and brought down into the base defense operations center, the battle captains can see multiple things at the same time," Herbert said. "They see it, they can act on it quickly, and it shortens the entire cycle.”
In years past, soldiers would report the information up the chain of command, and it would finally reach decision-makers, who could take a while to act on it. With persistent capabilities, the model now is a flat network in which visual information, rather than voice, informs leaders at all echelons simultaneously, facilitating a faster decision cycle.
For example, PM NV/RSTA has upgraded BETSS-C’s ground station software to share point of origin/point of impact information among various systems and the Counter Rocket, Artillery and Mortar system, which is essentially a land-based version of the Phalanx Gatling gun commonly found on naval vessels. By integrating sensors, imagery, interception, warning systems and other components to detect, locate and combat incoming fire, BETSS-C is able to provide effective perimeter security for forward operating bases, Herbert said.
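Point-of-origin data presumes the counter-fire radar has already back-solved the launch point from its track. A real system uses full ballistic modeling; as a deliberately simplified illustration, two early radar fixes on a rising round can be linearly back-extrapolated to ground level to approximate where it was fired from (the coordinates below are invented):

```python
def estimate_origin(fix1, fix2):
    """Linearly back-extrapolate two (ground_distance_m, altitude_m) radar fixes
    to altitude zero. A crude stand-in for the ballistic back-solve a real
    counter-fire radar performs."""
    (x1, h1), (x2, h2) = fix1, fix2
    slope = (h2 - h1) / (x2 - x1)   # climb per meter of ground distance
    return x1 - h1 / slope          # ground distance where the line hits altitude zero

# Fixes at 100 m out / 50 m up, then 200 m out / 150 m up
print(estimate_origin((100.0, 50.0), (200.0, 150.0)))  # → 50.0 (estimated launch point)
```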
Quick decision-making is part of shortening the cycle, but it is not the only factor. There also is situational knowledge and understanding gained through persistent surveillance systems, which helps commanders choose the appropriate response to a situation.
“The quicker you see the incident, the more apt we are to engage on the enemy…to stop an IED explosion,” Herbert said. “Bear in mind, though, that not all incidents involved an immediate lethal response. Many times, these sensors catch the bad guys placing an IED into the ground. But it is not only speed or the time factor of seeing them doing that. We also have to be able to assess the information correctly."
“You may see something quickly, but it may not be a bad guy. We don’t want there to be a situation of friendly fire. It’s important to identify the enemy and the exact act that is taking place. Certainly, time is a key part of that, but the situational understanding that is gained through these sensors is also critical.”
Sensors Need to Work Together
The BETSS-C technology was procured through a rapid development program whose initial fielding depended greatly on commercial hardware to speed the technology's introduction. “The objective was to get the sensors out on the field so we could get more eyes on the target as soon as possible,” Herbert said.
With the initial deployment complete, PM NV/RSTA has a number of upgrades planned for the BETSS-C family of systems in the near term. One of the first is development of a common graphical user interface, which is an effort that has already received funding.
“With development of the common graphical user interface, a soldier viewing feeds on a monitor from a RAID tower, for example, will have the same interface on his system as the one that is on the servers,” said Tony Budzichowski, program management division chief for the BETSS-C program.
The first iteration of the graphical user interface is expected to be ready in six to eight months, with an enhanced version planned for late 2011.
To deal with the limited networking ability of existing systems, another upgrade relates to the networking plan for the BETSS-C elements. The main goal of the next phase is to enhance the interconnection of those systems and improve the interoperability of BETSS-C with other sensor systems on the battlefield. That will involve a number of technical and engineering challenges in an effort to develop plug-and-play interfaces so that all the sensor systems on the battlefield can interoperate.
“This is a huge undertaking,” Herbert said. “Our four systems within the BETSS-C family are interoperable with each other. Those four systems can be plugged in together so we get video feeds coming into the BDOC. But what we’re looking at right now is how do we network the BETSS-C family of systems with other systems that are currently on the battlefield."
“Even within my night-vision portfolio of sensors, I am looking to work the architecture for how all my systems can be plug and play. We’ve had many conversations with CENTCOM, and they are very excited about that concept. From a warfighter standpoint, they want to see all new systems that come to the field be able to plug in and network with each other so that you can get video feeds from all these different type of sensors coming into one PC monitor in the BDOC. So that is the path forward that we are on right now.”
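In software terms, the plug-and-play goal Herbert describes usually comes down to agreeing on one common interface that every sensor adapter implements, so the monitoring station never needs sensor-specific code. A sketch in that spirit, with class and method names invented for illustration:

```python
from abc import ABC, abstractmethod

class SensorFeed(ABC):
    """Common interface a battlefield sensor adapter would implement."""
    @abstractmethod
    def latest(self) -> dict:
        """Return the newest observation as a uniform record."""

class RadarAdapter(SensorFeed):
    def latest(self) -> dict:
        return {"source": "radar", "kind": "track", "bearing_deg": 42.0}

class CameraAdapter(SensorFeed):
    def latest(self) -> dict:
        return {"source": "camera", "kind": "motion", "bearing_deg": 118.5}

def monitor(feeds: list[SensorFeed]) -> list[dict]:
    """The operations-center display loop only ever sees the common interface."""
    return [f.latest() for f in feeds]

for obs in monitor([RadarAdapter(), CameraAdapter()]):
    print(obs["source"], obs["kind"])
```

A new sensor then "plugs in" by shipping one adapter class, with no change to the display side — which is the essence of the architecture Herbert says CENTCOM wants for the BDOC.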