
Protecting Missions From Cyber Attack With Real-Time Risk Maps

Submitted by [email protected] on Mon, 05/01/2023 - 00:00

Perhaps the greatest challenge in protecting mission-critical systems from cyberattack is that there are so many possible ways an adversary could strike. A shipboard missile system in the Pacific, for example, might be disabled by an adversary that jams satellites, spoofs sensors, disrupts command-and-control communications, or shuts off power to the cooling system of a building, a thousand miles away, that houses DoD computer servers. A single component of a mission-critical system might have dozens of such vulnerabilities, some well known to cyber defenders—but potentially many others that are commonly overlooked.

The task of charting a system’s complex web of cyber dependencies, when done manually, can take months, even years. And even then, defense organizations often can’t capture the full range of downstream vulnerabilities that can endanger a mission.

However, new approaches, which take advantage of advances in machine learning and modeling and simulation, are now making it possible for the joint forces to create comprehensive maps of cyber risk to mission. With these maps, defense organizations can get a clear view of where their mission systems are most vulnerable to cyberattack, often in real time. Organizations can then prioritize their resources to best protect their most important missions.

Building A Risk Map Of “Probable” Dependencies

Defense organizations usually have a good understanding of their information technology (IT)—their computer-connected systems—and so can protect those components with traditional cyber defenses. However, organizations don’t always know all the ways their computer networks rely on operational technology (OT), which can range from HVAC systems on a base to radar sensors on a ship.

Organizations theoretically could connect much of their operational technology to their computer networks. However, they’re reluctant to do so, because it would greatly expand the attack surface, providing many more ways a cyber attacker could gain access to the system. Unfortunately, that leaves defense organizations with limited visibility into their OT vulnerabilities. For example, an organization’s high-priority communications network might be using only one of 25 antennas at an airbase, but the organization doesn’t know exactly which one it is. Tracking down the right antenna would take time, and it isn’t feasible to manually go into that level of detail for every possible piece of OT. A single Navy base might have thousands of complex system dependencies.

However, defense organizations can take a different approach, by creating a map of probable dependencies with the help of machine learning. For example, an organization might not have the resources to fully protect all 25 antennas at the airbase, just to make sure the one being used by the high-priority network is covered. But if it could narrow down the number to four or so—based on the types of antennas commonly used with such networks—it might be feasible to put protections in place.

Machine learning can play a key role here. The first step is to provide machine learning models with the known IT and OT dependencies of various mission systems across the DoD, based on knowledge gathered manually over the years. The models would then look for patterns in the data, and predict a given system’s most likely dependencies—for example, certain types of antennas used with certain types of mission systems. To make sure the machine learning models are accurate, cyber analysts would do regular spot checks, and work with AI experts to tweak the models as necessary.
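The pattern-matching step can be illustrated with a toy sketch in which simple co-occurrence counts stand in for a trained machine-learning model; all system and antenna names here are hypothetical:

```python
from collections import Counter, defaultdict

# Known (mission system type, dependency) pairs, as gathered manually
# over the years. All names are invented for illustration.
known_dependencies = [
    ("hf_comms", "antenna_type_a"), ("hf_comms", "antenna_type_b"),
    ("hf_comms", "antenna_type_a"), ("hf_comms", "antenna_type_d"),
    ("satcom", "antenna_type_c"), ("satcom", "antenna_type_c"),
]

# Count how often each dependency co-occurs with each system type.
counts = defaultdict(Counter)
for system_type, dep in known_dependencies:
    counts[system_type][dep] += 1

def probable_dependencies(system_type, top_n=4):
    """Return the top-N most likely dependencies for a system type."""
    return [dep for dep, _ in counts[system_type].most_common(top_n)]

print(probable_dependencies("hf_comms"))  # antenna_type_a ranks first
```

In practice the counts would be replaced by a trained model, with analysts spot-checking its predictions as described above.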

Modeling And Simulation To Play Out Risk Scenarios

Once organizations have created a map of probable mission dependencies, they can use modeling and simulation to gain a deeper understanding of the vulnerabilities. By playing out various scenarios, the modeling and simulation might show, for example, how damage to computer servers on the ground could disable a particular satellite array, which in turn could prevent GPS signals from updating a carrier group’s inertial navigation. With such scenarios, defense organizations can gain insight into which vulnerabilities would have the most impact on a mission, and so know where to focus their efforts.
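The kind of downstream analysis described above can be sketched as a traversal of a dependency graph; the graph below is hypothetical and greatly simplified:

```python
# Edges point from a component to the components and missions that
# depend on it (a hypothetical chain from the scenario above).
dependents = {
    "ground_servers": ["satellite_array"],
    "satellite_array": ["gps_signal"],
    "gps_signal": ["carrier_inertial_nav"],
    "carrier_inertial_nav": [],
}

def downstream_impact(compromised):
    """Everything transitively affected if one component is damaged."""
    affected, frontier = set(), [compromised]
    while frontier:
        node = frontier.pop()
        for dep in dependents.get(node, []):
            if dep not in affected:
                affected.add(dep)
                frontier.append(dep)
    return affected

print(downstream_impact("ground_servers"))
```

Running many such traversals, one per scenario, is what lets planners rank vulnerabilities by mission impact.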

At the same time, defense organizations can use modeling and simulation to identify alternative paths if a mission dependency is compromised. For example, modeling and simulation might find that a high-priority mission system could quickly and successfully switch from one set of sensors to another—or perhaps could use the bulk of another system’s IT and OT dependencies if necessary.
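A minimal sketch of the alternative-path idea, assuming invented sensor sets and mission requirements:

```python
# Hypothetical: which mission functions each sensor set provides.
sensor_sets = {
    "primary": {"radar", "gps", "ir"},
    "backup_a": {"radar", "ir"},
    "backup_b": {"radar", "gps"},
}
required = {"radar", "gps"}  # what the mission actually needs

def viable_alternatives(compromised):
    """Sensor sets, other than the compromised one, that still cover
    every required mission function."""
    return [name for name, caps in sensor_sets.items()
            if name != compromised and required <= caps]

print(viable_alternatives("primary"))  # only backup_b covers radar + gps
```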

All this information can be presented to cyber analysts and decision-makers with user-friendly dashboards and other visualization tools that show, at a glance, where potential vulnerabilities lie. The dashboard might show, for example, a mission system’s 100 or so probable dependencies, identifying the ones that are not fully protected.

Real-Time Monitoring Of Cyber Risk To Mission

Creating a map of mission dependencies is not a one-and-done job. On any given system, components are constantly being switched in and out as technology and requirements change. And as missions change as well, they might take on new vulnerabilities. Once the map of dependencies is created, however, it becomes easier to keep track of changes. Cyber analysts can log new IT and OT components as they come online.

Because the modeling and simulation is run continuously, with each change it automatically looks for newly created vulnerabilities, and possible alternate paths if a mission dependency is compromised.

Protecting Missions Under Active Cyber Attack

Real-time monitoring of cyber risk to mission is critical if a system is under attack. Analysts can be alerted if a particular dependency is being attacked or has already been compromised. The alerts would show the likely impact to the mission—which could be minor or major—and present analysts with alternatives.

In some cases, the rerouting of dependencies might be automatic—for example, a missile system might move from one set of sensors to another. Other situations might require cyber analysts and decision-makers to step in to do the rerouting, using the dashboards and other visualization tools as guides.

With the help of machine learning, modeling and simulation, and other advanced approaches, defense organizations can build real-time cyber maps that show the often hidden ways missions could be degraded by adversaries. Organizations can use the maps to plug vulnerabilities as they arise, and move quickly to protect missions under active cyberattack.


Kevin Coggins ([email protected]) is a Booz Allen vice president working across the complex landscape of weapons systems, critical infrastructure, cyber, space and intelligence—including leading the firm’s PNT business. His journey as a force recon Marine, weapons system engineer, tech startup founder, Army SES and industry executive has enabled a unique perspective on solving the myriad of technology challenges facing the warfighter.

Dale Savoy ([email protected]) leads Booz Allen’s cyber warfare domain efforts in vulnerability and mission risk analysis. His focus is on defending DoD weapon systems and critical infrastructure from cyberattack, through mission-dependency mapping and vulnerability management.

Capt. Alan MacQuoid ([email protected]) is a leader in weapon systems and critical infrastructure cyber risk assessment and mitigation efforts. He has over 35 years of experience integrating kinetic and non-kinetic effects with emphasis on cyber across all domains of warfare.

boozallen.com/defense


By Kevin Coggins, Dale Savoy, and Captain Alan MacQuoid, U.S. Navy (Ret.)

Sponsored by Booz Allen Hamilton

Quantum Sensing: A New Approach To Maintaining PNT In GPS-Denied Environments

Submitted by [email protected] on Sat, 04/01/2023 - 12:00

In the event of a conflict or confrontation, the joint and allied force could lose access to satellite capabilities, most notably GPS. Ships, submarines, and aircraft would need to rely almost entirely on other technologies for positioning, navigation and timing (PNT), particularly inertial systems.

Unfortunately, because inertial navigation devices such as gyroscopes and accelerometers lose accuracy over time—and wouldn’t be able to be recalibrated in a GPS-denied environment—inertial navigation would be reliable for only a limited period.
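The scale of the problem is easy to see with a back-of-the-envelope calculation: a constant, uncorrected accelerometer bias integrates twice, so position error grows with the square of time. The sketch below is deliberately simplified and ignores filtering and other error sources:

```python
def position_error(bias_m_s2, seconds):
    """Position error from a constant accelerometer bias: integrating
    acceleration twice gives 0.5 * bias * t^2."""
    return 0.5 * bias_m_s2 * seconds ** 2

# A bias of roughly 100 micro-g (about 1e-3 m/s^2), an illustrative
# value, produces kilometers of drift within an hour without GPS fixes.
err_m = position_error(1e-3, 3600)
print(f"{err_m / 1000:.1f} km of drift after one hour")
```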

But an emerging technology, quantum sensing, offers the possibility of increasing the accuracy of inertial navigation by orders of magnitude, greatly extending operational availability in GPS-denied environments.

The idea behind quantum sensing is fairly straightforward. Essentially, quantum refers to the realm that exists at the atomic and sub-atomic level. That realm is extremely sensitive to minute changes in the environment—changes that cannot be detected in the everyday world. Quantum sensing harnesses that sensitivity, allowing measurements that are far more precise than what is possible through conventional approaches.

Although using quantum for inertial navigation is a technology of the future, that future may not be far away. Quantum sensing is already used in atomic clocks—including in military satellites—and in devices such as MRI machines. Government and private researchers are making rapid advances in quantum sensing for inertial navigation, and some devices may be ready for deployment by the military in as little as five years, according to the NATO Review.

For that to happen, however, defense organizations need to take steps now to make sure that the quantum gyroscopes and other devices being developed are practical for current and future ships, submarines, and airplanes. Quantum sensing devices typically have demanding size, weight, and power requirements, and researchers are now focusing on ways to make them work for the Navy and other services.

It’s important that defense organizations develop deep expertise in quantum sensing, and take the lead in driving the requirements, so that the quantum devices can be deployed as soon as possible. China is now aggressively pursuing quantum sensing for inertial navigation, and could leave the U.S. behind.

HOW QUANTUM SENSING WORKS

The behavior of atoms, particles of light, and other denizens of the quantum realm can reveal a great deal about what is happening in the larger physical world. For example, when a cloud of atoms inside a vacuum is in an excited state, the atoms become highly sensitive to the gravitational field around them. By looking at the patterns the atoms form, quantum devices can create a picture of the gravitational field around a ship or submarine. With repeated readings as the ship moves, that picture becomes increasingly detailed. Onboard computers can then overlay the picture with maps of Earth’s gravitational field to determine the ship’s precise location.

An entirely different type of quantum sensing can measure the surrounding magnetic field, also helping to plot a ship’s location. With a quantum magnetometer, a tiny wire made of special materials is made so cold that it has virtually no electrical resistance. This eliminates “noise” on the wire, so that when an electrical charge is sent through it, the wire becomes highly sensitive to the magnetic field at the atomic level. The device takes a series of measurements to determine the surrounding magnetic field, which can then be compared to magnetic field maps of the world.
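The comparison against stored field maps can be sketched as a simple correlation search; the magnetic values below are invented for illustration:

```python
# A hypothetical 1-D magnetic anomaly map (nanotesla) sampled along
# a known track.
magnetic_map = [10.0, 12.5, 11.0, 9.5, 14.0, 13.0, 8.0, 10.5]

def best_match(measured):
    """Slide the measured window along the map and return the offset
    with the smallest squared error, i.e. the most likely position."""
    best_off, best_err = None, float("inf")
    for off in range(len(magnetic_map) - len(measured) + 1):
        err = sum((m - magnetic_map[off + i]) ** 2
                  for i, m in enumerate(measured))
        if err < best_err:
            best_off, best_err = off, err
    return best_off

print(best_match([9.5, 14.0, 13.0]))  # offset 3 matches exactly
```

Real systems work in two or three dimensions and weigh many noisy readings, but the matching principle is the same.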

Additional types of quantum sensing can aid other aspects of inertial navigation. A quantum gyroscope, for example, uses the wave nature of atoms to measure angular rotation. An atomic clock sets its watch by the predictable rate that excited atoms decay. A quantum accelerometer measures the movement of super-cooled atoms.

What all these quantum devices have in common is that they are self-contained and completely independent of GPS or other outside communications. In addition, because measurements in the quantum realm are far more accurate than with conventional approaches, quantum inertial navigation can be relied upon for much longer periods.

MOVING FROM THE LAB TO THE REAL WORLD

While quantum sensing devices have been proven to work, with the exception of atomic clocks they are generally too large to be of practical use for inertial navigation. For example, the refrigerators needed to supercool the wires in quantum magnetometers can take up a great deal of space—and what works in a laboratory may not fit on a submarine. In the lab, some optics-based quantum sensors feature a collection of mirrors, glass plates, lasers, and various electronics that sit on a platform the size of a dining room table.

Much of the research now being done on quantum sensing, including in DoD laboratories such as the U.S. Naval Research Laboratory, is focused on how to make the devices small enough to fit on ships, submarines, and airplanes without a significant drop-off in accuracy and precision.

A key challenge is that it’s often difficult to determine how well a smaller and lighter device, with reduced power requirements, will perform until it has been built. In addition, each type of quantum sensing device has its own complex set of trade spaces. Manufacturers may have to experiment with a number of prototypes to get the right balance of size and performance. This process might in some cases be too costly to be feasible—and too time-consuming for the DoD to keep pace with adversaries in the race for quantum sensing.


One solution is for defense organizations to use modeling and simulation to test how particular quantum devices would work in the real world. This can be done by building models based on research data. Many research papers have been published describing different approaches to quantum sensing devices, and this information—along with data from various prototypes that have been built so far—can be used to build the models.

By continuing to play a major role in the ongoing research—including with modeling and simulation—the joint force can gain the information and expertise needed to drive the requirements for quantum sensing, rather than relying entirely on industry. Such an approach can significantly speed the adoption of quantum sensing for inertial navigation, helping to extend operational availability in GPS-denied environments.


Kevin Coggins ([email protected]) is a Booz Allen vice president working across the complex landscape of weapons systems, critical infrastructure, cyber, space and intelligence—including leading the firm’s PNT business. His journey as a force recon Marine, weapons system engineer, tech startup founder, Army SES and industry executive has enabled a unique perspective on solving the myriad of technology challenges facing the warfighter.

Dr. Jake Farinholt ([email protected]) is a senior lead scientist at Booz Allen, where he leads the firm’s overall quantum business in the national security sector, as well as the firmwide quantum sensing business. For more than a decade, he has provided expertise in quantum technologies to the intelligence community, as well as to the Navy and other defense organizations.

Dr. Oney Soykal ([email protected]) is a physicist at Booz Allen specializing in quantum computing. He develops quantum systems for research in academia, industry, and government, and provides technical analysis and management support to multiple DARPA and IARPA programs.

boozallen.com/defense


By Kevin Coggins, Dr. Jake Farinholt, and Dr. Oney Soykal

Sponsored by Booz Allen Hamilton

Creating A Digital OPLAN Environment To Integrate Allies And Partners In The Indo-Pacific

Submitted by [email protected] on Wed, 02/01/2023 - 00:00

Comprehensive operation plans (OPLANs) can help integrate the U.S. and its allies and partners across the Indo-Pacific—but to stay ahead of fast-moving changes in the region, it is increasingly important that the plans be frequently and rapidly updated. The challenge is that OPLANs tend to be static documents that often must be updated manually, a process that can be cumbersome, time-consuming, and incomplete.

However, by bringing their OPLANs into an interactive digital planning environment, the joint forces can use what’s known as “rapid modeling and simulation,” aided by AI, to test and refine their OPLANs—often as fast as conditions change. And they can use that same modeling and simulation to help put the plans into action in a confrontation.

A digital planning environment can be particularly valuable in integrating the coalition in the Indo-Pacific as a combined force of forces. The digital environment brings together vast amounts of data from across the coalition, making it possible to run tens of thousands of simulations to help planners determine how the U.S. and its allies and partners can work together in optimal ways.

And because the digital environment is interactive, planners can experiment hands-on with scenarios of their own—moving red or blue force assets in a particular area of the South China Sea, for example, and then watching as the AI-aided modeling and simulation predicts how a confrontation is likely to play out.

Planners can collaborate at the same time from multiple locations across the Indo-Pacific, including from allied and partner nations.

Nothing about this approach takes away decision making from planners or commanders. Rather, it gives them more hard data to work with, often in near-real time. They still need to use their experience, knowledge, and judgment to evaluate the data and update the OPLANs as they see fit.

BUILDING THE DIGITAL OPLAN ENVIRONMENT

Advances in data science are now making it possible to bring together and integrate an almost unlimited amount of OPLAN data from any number of sources. This includes all of the relevant time-phased force-deployment data now in spreadsheets, PowerPoint presentations, and other formats, which can be digitized through natural language processing and other techniques. Current OPLAN data can be combined with a wide range of unstructured data, from sources such as real-time intelligence reports, satellite imagery, acoustic signatures, and infrared thermography.

In addition, defense organizations can bring in large amounts of information about our potential adversaries, including detailed historical data—for example, how they have responded to certain activities by the joint forces in the past.

With this approach, all of the available data is ingested into a common, cloud-based repository, such as a data lake, and tagged with metadata. This breaks down stove-piped databases and makes it possible to analyze the entire repository of information all at once.

Although the data is consolidated, it is actually more secure than it would be in scattered, traditional databases. By tagging the data at the cell level, defense organizations can tightly control who has access to each piece of data and under what circumstances.
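A minimal sketch of the idea, with invented record names and releasability tags:

```python
# Each value carries its own metadata; reads are filtered by the
# caller's attributes. All names and tags are hypothetical.
records = [
    {"value": "fuel_status_base_x",
     "tags": {"releasable_to": {"US", "AUS"}}},
    {"value": "sigint_report_17",
     "tags": {"releasable_to": {"US"}}},
]

def readable(records, nation):
    """Return only the values this nation is cleared to see."""
    return [r["value"] for r in records
            if nation in r["tags"]["releasable_to"]]

print(readable(records, "AUS"))  # the SIGINT report is filtered out
```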

TESTING AND REFINING OPLANS WITH RAPID MODELING AND SIMULATION

Once defense organizations have created a digital planning environment, they can test and refine their OPLANs with modeling and simulation, taking advantage of the combined information in the data lake to factor in tens of thousands of variables. With the help of AI, new rapid modeling and simulation tools can play out OPLANs’ courses of action, along with the branches and sequels, to determine the probability of coalition success every step of the way.

Planners might find, for example, that some bases would be at risk of running out of fuel or munitions during a conflict, or that certain U.S. aircraft would likely be more successful than others in particular missions. The AI might recommend courses of action, or specific branches and sequels, that planners may not have considered.
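The probability-of-success estimate can be sketched as a simple Monte Carlo run. The courses of action and per-step success probabilities below are invented; real models would draw on far richer data:

```python
import random

# Each course of action is a chain of steps (e.g. logistics hold,
# air defenses survive, strike succeeds), each with a success probability.
courses = {
    "coa_1": [0.95, 0.80, 0.90],
    "coa_2": [0.99, 0.70, 0.85],
}

def p_success(steps, trials=100_000, seed=0):
    """Estimate the chance that every step in the chain succeeds."""
    rng = random.Random(seed)
    wins = sum(all(rng.random() < p for p in steps) for _ in range(trials))
    return wins / trials

for name, steps in courses.items():
    print(name, round(p_success(steps), 3))
```

Branches and sequels would appear as alternative chains, each scored the same way.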

At the same time, advanced visualization tools, including interactive maps showing coalition and adversary forces, would allow planners to test out possible new scenarios. They might plug in different types of aircraft, for example, to see which are likely to be most effective, or pair manned and unmanned systems. Interactive visualization tools can also allow them to pose critical questions, such as whether a particular action would have a higher likelihood of success than others, but would cost more lives.

A digital environment also enables planners to take advantage of an emerging form of AI, known as reinforcement learning, to help predict adversaries’ first moves and subsequent actions. By analyzing vast amounts of data about a country—including its military capabilities, its doctrine, and its past actions—reinforcement learning can create an “AI agent” to represent that country in modeling and simulation. A unique feature of reinforcement learning is that it allows the AI agent to pursue its own best interest, so that in modeling and simulation it would behave much like that country would.

RAPIDLY UPDATING OPLANS


Just as important, a digital environment makes it possible for planners to update OPLANs almost as fast as conditions change. New information—such as changes in coalition or adversary logistics and capabilities—is constantly fed into the digital environment. Ongoing AI-aided modeling and simulation quickly recalculates how current OPLANs are likely to play out and makes new recommendations.

Planners can see, often in near-real time, how they might need to modify their OPLANs. If they do decide to make changes, they can run their updated OPLANs through another round of modeling and simulation and see the new predicted outcomes. They can then continue to refine the plans as needed.

The same approach can help the joint forces make a seamless transition from operation plans to execution plans. As conditions rapidly cascade in a crisis or conflict, for example, decision-makers can quickly see the actions they might take that have the highest probability of success. Because the AI has already worked out tens of thousands of scenarios with the OPLANs, it can take advantage of what it has already learned to stitch together—in near-real time— new recommended courses of action.

The joint forces have a wealth of data available for operation planning. An interactive digital planning environment, along with AI-aided modeling and simulation, would allow them to take full advantage of that data to keep OPLANs updated and help integrate the allies and partners into a joint force of forces.


Maj. Gen. David E. Clary ([email protected]) is a principal at Booz Allen, where he leads the firm’s support to coalition warfighters in the Republic of Korea.

Kevin Contreras ([email protected]) leads Booz Allen’s delivery of digital solutions for the rapid modeling, simulation, and experimentation of multi-domain concepts for DoD and global defense clients.

Doug Hamrick ([email protected]) leads Booz Allen’s development of AI-enabled predictive maintenance and supply-chain capabilities for clients throughout the DoD and other federal agencies.

boozallen.com/defense


By Maj. Gen. David Clary, U.S. Air Force (Retired), Kevin Contreras, and Doug Hamrick

Sponsored by Booz Allen Hamilton

SPY-6: The Future of Navy Integrated Missile Defense

Submitted by [email protected] on Sun, 01/01/2023 - 00:00

The following is an excerpt from an interview between Bill Hamblet, Editor-in-Chief of Proceedings, and Scott Spence, the Executive Director of Naval Integrated Solutions at Raytheon Missiles and Defense.

HAMBLET: What is SPY-6 designed to do? What are its threat targets and what advantages does it offer over other radars?

SPENCE: SPY-6 is an integrated air and missile defense radar. It can cover both missions simultaneously. It was designed to be modular and scalable, for all the different threats as well as the different ships it will go on. The first SPY-6, V1, is the 37 RMA [radar module assembly] radar, the largest in the family. It will go on the Flight III destroyers, starting with the USS Jack Lucas (DDG-125).

HAMBLET: It isn’t just for destroyers, correct?

SPENCE: No, SPY-6 V2 and V3 will go on amphibious ships and carriers. Overall, the radar will go on seven classes of Navy ships: Flight-III destroyers, Flight IIA backfit destroyers, the Ford-class carriers, and it will be backfitted onto the older aircraft carriers and amphibious assault ships.

HAMBLET: Earlier this year, Raytheon Missiles and Defense was awarded a $651 million contract with options totaling up to $2.5 billion for full-rate production for up to 31 Navy ships. What’s the significance of that award?

SPENCE: It shows the Navy’s commitment to the radar as their signature program. It is being delivered across all the different variants, driving down acquisition and O&M costs for years to come.

HAMBLET: How is SPY-6 easier to maintain than earlier versions?

SPENCE: The radar only needs two tools to be maintained. It uses a common software baseline across all platforms, allowing the Navy to make a fix or add a capability into the software baseline and deliver it to all ships that need that capability. Modularity allows common training across all platforms. The largest cost of any system is O&M. Driving down those costs is critical to ensuring affordability for years to come.

HAMBLET: How does SPY-6 enable distributed maritime operations?

SPENCE: This radar is going to see farther and see smaller objects at longer distances, providing a better picture of the battlespace. Second, there are advanced capabilities being developed, including network cooperative radar, that allow the radars to communicate among themselves to provide a better picture of the battlespace. Gallium nitride technology in the transmitters allows it to create more power and see farther. Increased receiver sensitivity allows it to better process that information.

HAMBLET: Can SPY-6 integrate with other systems the Navy has fielded?

SPENCE: Yes. It is combat-management-system agnostic, so it can provide data to whatever combat management system needs it.

HAMBLET: Other countries are buying and building Aegis-class ships. Is there foreign interest in SPY-6?

SPENCE: International partners want to work with the U.S. Navy, and the best way is to use the same technology. Because SPY-6 is combat-management-system agnostic, it can integrate with many different systems in multiple navies across the world.

HAMBLET: How does SPY-6 address the missile threats the Chinese military is fielding?

SPENCE: We’ve participated in flight testing with the Missile Defense Agency and Navy on hypersonic threat profiles. Because it can see smaller targets at greater range, SPY-6 creates additional battlespace to handle those threats. The more time we can give sailors to react to incoming threats, the better they’ll be able to defeat them.

More here: Proceedings Podcast Episode 294: Raytheon discusses the U.S. Navy’s SPY-6 radar


Sponsored by Raytheon Missiles & Defense

How AI Can Help Integrate Allies And Partners In The Indo-Pacific

Submitted by [email protected] on Thu, 12/01/2022 - 00:00

One of the challenges in integrating the U.S. and its allies and partners in the Indo-Pacific is that there is a great deal of complexity in how a potential adversary might engage each country in different ways leading up to a conflict—tactically, strategically, economically, and politically. And there is just as much complexity in how each country might respond in its own way.

It is difficult for wargaming and exercises to fully capture this complexity, with its clues to effective mission-partner integration. However, an emerging form of AI known as reinforcement learning can play an important role. Essentially, this technology makes it possible for each country in a virtual wargame—whether an adversary, the U.S., an ally, or a partner—to be represented by its own AI “agent.”

Each agent—a sophisticated algorithm—brings together and analyzes vast amounts of data about that country, including its military capabilities, its political and economic environment, and its posture toward the other nations. A unique feature of reinforcement learning is that it allows the AI agent to pursue its own best interest, so that in a wargame representing a country, the AI behaves much like that country would.

This can provide valuable insight into the often-difficult challenges of mission-partner integration. For example, an AI agent representing a critical partner in the Indo-Pacific might discover, over multiple scenarios, that certain security cooperation activities would likely elicit economic or diplomatic pressures from an adversary, and that the best course of action would be to disengage and remain neutral.

Or, the AI agent might find that if allies or partners have certain defensive weapons or other protections in place before a conflict, that would deter—or at least defer—adversary aggression. Such AI-informed scenarios can help map out the steps needed to make sure our allies and partners get the capabilities they need to maximize deterrence.

Defense organizations are already beginning to use reinforcement learning in operational planning, by wargaming how opposing forces might engage tactically in battle. But reinforcement learning can go even further, by helping to integrate the U.S. and its allies and partners in the Indo-Pacific through all phases of competition, crisis, and conflict, to help create a force of forces.

How Reinforcement Learning Works

With reinforcement learning, algorithms try to achieve specific goals, and get rewarded when they do. Using trial and error, the algorithms test out random possible actions. The closer those actions get the algorithms to their goals, the higher their score. If the actions move the algorithms away from their goals, the score drops.

In this way, the algorithms can rapidly work through thousands or even hundreds of thousands of scenarios, in a game-like setting, to determine the best course of action. With each iteration, they learn more about what works and what doesn’t, and get closer and closer to the optimal solution.
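That trial-and-error loop is the core of Q-learning. The toy sketch below trains an agent to walk along a five-cell line to a rewarded goal state; it is a generic textbook example, not any operational system:

```python
import random

# The agent learns, by trial and error, to move right along a 5-cell
# line to a rewarded goal state (the last cell).
N_STATES, ACTIONS = 5, [-1, 1]   # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2
rng = random.Random(0)

for _ in range(500):                      # training episodes
    s = 0
    while s != N_STATES - 1:              # until the goal is reached
        # Epsilon-greedy: mostly exploit, sometimes explore.
        a = (rng.choice(ACTIONS) if rng.random() < eps
             else max(ACTIONS, key=lambda b: Q[(s, b)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0    # reward only at the goal
        # Q-learning update: nudge the estimate toward reward plus
        # the discounted value of the best next action.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right from every state.
policy = [max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N_STATES - 1)]
print(policy)
```

Multi-agent wargaming replaces this single agent and its one-dimensional world with many agents, each with its own goals and action space, but the learning loop is the same.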

Because the algorithms can perceive their environment in a virtual wargame, and participate autonomously, they are considered to be AI agents. Reinforcement learning is well suited for wargaming: an AI agent can take a side and play a role, trying to achieve its own specific goals and learning as it goes along. Just as important, multiple agents in a wargame—for example, representing various allies and partners in the Indo-Pacific—can learn how to best work together to achieve common goals in the face of an adversary.

Virtual wargaming is just one example of how reinforcement learning can assist defense organizations. It can also help optimize weapons pairing, the kill chain process, cybersecurity, and other challenges.

How Reinforcement Learning Is Trained

The process of integrating allies and partners with reinforcement learning begins by bringing together a wide range of data about a particular country. In addition to information on the country’s military and other resources, it can include its recent history—for example, how an ally’s economy and politics were affected by outside pressures in the past, and how the country responded when faced with certain pressures from an adversary. All this information teaches the AI agent what kinds of actions it might see from agents representing other countries, and what kinds of actions it can take on its own.

At the same time, the AI agent is provided with that country’s goals, based on the knowledge of experts on its culture, politics, economy, military, and other areas. The agent is then programmed to use the actions at its disposal to achieve those goals. While it may be impossible to capture the full picture of a country—or the complete international environment—even limited AI agents, interacting with one another, can provide important insights. And as new information about countries is added into the mix, AI agents continually learn.
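The setup described above might be sketched as follows. The agent, goal weights, and actions here are entirely hypothetical placeholders; in practice, the weights would come from subject-matter experts and the action effects from historical and resource data:

```python
from dataclasses import dataclass, field

@dataclass
class CountryAgent:
    """Illustrative agent: goals are weighted objectives; actions have per-goal effects."""
    name: str
    goals: dict            # goal -> importance weight (from subject-matter experts)
    actions: dict          # action -> {goal: effect} (from historical/resource data)
    history: list = field(default_factory=list)

    def choose_action(self):
        # Pick the action whose effects best advance the weighted goals
        def score(action):
            effects = self.actions[action]
            return sum(self.goals.get(g, 0) * e for g, e in effects.items())
        best = max(self.actions, key=score)
        self.history.append(best)
        return best

# Hypothetical example: a partner weighing deterrence against economic stability
partner = CountryAgent(
    name="Partner A",
    goals={"deter_aggression": 0.7, "economic_stability": 0.3},
    actions={
        "joint_exercise": {"deter_aggression": 0.8, "economic_stability": -0.2},
        "stay_neutral": {"deter_aggression": -0.5, "economic_stability": 0.4},
    },
)
assert partner.choose_action() == "joint_exercise"
```

As new information about a country is added, the goal weights and action effects would be updated, which is how the agents "continually learn" in the sense described above.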

Reinforcement Learning In Action

In a virtual wargame, AI agents for the adversary, the U.S., and various allies and partners enter a scenario and begin interacting with each other autonomously—each balancing its own strengths and weaknesses to achieve its goals the best way possible. In one scenario, for example, an adversary might try to use economic or diplomatic coercion against a number of different allies and partners at the same time, or launch sophisticated disinformation campaigns designed to pit countries against one another and break apart the coalition.

With each country pursuing its own best interest, the AI agents can reveal how they might work together against the adversary, or splinter from the others. A partner in the Pacific might decide to provide some assets to the coalition, but not others. An ally might be particularly susceptible to an adversary’s disinformation campaign, and refuse to cooperate with other allies or partners. These kinds of scenarios can suggest actions the U.S. and its allies and partners might take, which they can then try out as the virtual wargame continues.

A wargame can play out with hundreds of thousands of iterations, giving the AI agents the chance to try out any number of possibilities and find the best solutions. Throughout the process, domain experts continually verify the AI agents' goals and actions, making sure they accurately reflect the real world.

Reinforcement learning doesn't replace current approaches to wargaming, planning, and other activities. Rather, it is a powerful tool to aid decision-making, as leaders seek to integrate the U.S. and its mission partners into a potent force of forces in the Indo-Pacific.


Lt. Col. Michael Collat ([email protected]) is a Booz Allen principal leading the delivery of data analytics, counter-malign foreign influence, and digital training solutions across USINDOPACOM. A former Air Force intelligence and communications officer, he has also led projects delivering cyber fusion processes, information operations assessments, and regional maritime and aerospace strategies.

Vincent Goldsmith ([email protected]) is a Booz Allen solutions architect providing transformational technical delivery across USINDOPACOM. He focuses on wargaming, modeling and simulation, immersive, cloud, and AI solutions, and he partners with warfighters in region to integrate the latest innovative technology into their baselines, to advance the mission.

BOOZALLEN.COM/DEFENSE


By Lt. Col. Michael Collat, U.S. Air Force (Retired) and Vincent Goldsmith
Sponsored by Booz Allen Hamilton

Protecting Classified Algorithms In Unmanned Systems In The Pacific

Submitted by [email protected] on Sat, 10/01/2022 - 00:00

In the coming years, the joint forces will increasingly use artificial intelligence in unmanned systems in the Pacific. Many of the algorithms will be mission-specific and classified, making them potential targets of adversaries who may try to steal or disrupt them.

Protecting classified algorithms in unmanned systems in the Pacific presents a unique set of challenges. Unmanned systems may operate closer to adversaries than manned systems. And with unmanned systems, humans may not be available to detect attacks on the AI and take corrective measures.

However, by adopting a series of rigorous protections across the entire lifecycle of the algorithms—through all stages of development and deployment—and by building in resiliency, the joint forces can help keep classified algorithms in unmanned systems secure.

Protecting The Algorithms During Development

Often, many of the essential elements of a machine learning algorithm will be built in an unclassified environment, to take advantage of the expertise and innovations of the wider organization. The algorithm is then moved into a classified environment, where mission-specific and other classified elements are added.

It’s critical that algorithms be protected while still in the unclassified environment. If an algorithm is stolen, an adversary may figure out its purpose and methods—even if it hasn’t yet been configured for a specific mission—and potentially develop countermeasures.

The joint forces can help protect the algorithms for unmanned systems in their early, unclassified stages through government-run AI/ML factories. Instead of relying on the industrial sector—which may not apply cybersecurity consistently—these factories can impose rigorous security controls through all phases of algorithm development, including both unclassified and classified. Many defense organizations are already moving toward this level of security with other types of software factories, and they can achieve the same goals with factories that specifically develop AI and ML.

At the same time, the joint forces can require that vendors adopt a comprehensive set of cybersecurity techniques when developing algorithms. Such measures include real-time threat-sharing, so that companies can take advantage of their collective knowledge, and cyber-as-a-service, so that there is active monitoring of systems and networks rather than just snapshot audits.

Protecting The Algorithms During Transfer And Testing

Extra protection is also needed when transferring algorithms from unclassified to classified environments, and when moving algorithms between the labs doing the development and testing. The longtime practice of moving electronic information from one system to another by people—known as the "sneakernet"—carries a risk that malware could be placed on the laptops, disks, and other items used in the transfers. With advances in technology, there is now more security in an infrastructure that allows direct connections between systems with different security classifications, especially on research and engineering networks.

The joint forces can also take steps to protect classified algorithms for unmanned systems during the testing itself. When algorithms are being tested in real-world conditions, adversaries may be able to determine how they're being used, or even steal them. One solution is to use digital engineering to test the algorithms with modeling and simulation. This not only keeps the algorithms from being exposed to adversaries during testing—it also makes it possible to simulate cyberattacks and model different defenses.

Protecting The Algorithms During Deployment

Classified algorithms require particularly rigorous protections once they're deployed in unmanned systems. If a cyberattack corrupts the data being analyzed by the algorithms—or compromises the AI/ML systems themselves—humans may not be immediately aware that something is wrong.

One way of reducing the risk is to develop automated responses to data drift or model drift. If the data coming in from sensors is significantly different from what might be expected—potentially indicating a cyberattack—the AI/ML system might automatically shut down, or switch to data from other types of sensors. There is both an art and a science to identifying patterns in the data that might suggest a cyberattack, and establishing the thresholds that will trigger the automated responses.
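A minimal sketch of such an automated response might look like this, assuming a simple statistical definition of drift (the thresholds, sensor values, and response labels are illustrative assumptions, not a fielded design):

```python
import statistics

def drift_response(baseline, incoming, z_threshold=3.0, max_outliers=0.3):
    """Sketch of an automated drift check: flag a possible cyberattack when too
    many incoming readings fall far outside the baseline distribution."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1e-9  # guard against a zero-variance baseline
    outliers = sum(1 for x in incoming if abs(x - mu) / sigma > z_threshold)
    if outliers / len(incoming) > max_outliers:
        return "FAILOVER"   # e.g., shut down or switch to another sensor type
    return "NOMINAL"

baseline = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]
assert drift_response(baseline, [10.0, 10.1, 9.9, 10.2]) == "NOMINAL"
assert drift_response(baseline, [25.0, 24.0, 26.0, 10.0]) == "FAILOVER"
```

Setting the `z_threshold` and `max_outliers` values is exactly the "art and science" of thresholds the text describes: too tight and normal variation triggers failover, too loose and an attack goes unnoticed.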

Another step is to make it more difficult for a cyberattack on one AI/ML system on an unmanned vehicle to spread to other components of the vehicle—for example, from algorithms analyzing radar data to ones analyzing video feeds or signals intelligence. Here, the solution is to create a separate security boundary for each AI/ML system on the unmanned platform. This makes it possible to more tightly control the flow of data from one system to another, and to cut the connections between systems, if necessary, to keep a cyberattack from spreading.

Additional steps can help protect classified algorithms in the event an unmanned vehicle is captured by an adversary. Along with anti-tamper measures—which can make it difficult for an adversary to access and possibly reverse engineer a captured AI/ML system—the joint forces can apply an approach known as disaggregation.

An AI/ML system—one that analyzes radar data, for example—typically has a complex collection of mission algorithms. With disaggregation, no single unmanned vehicle in a mission has all the algorithms. Each does just a portion of the analysis and sends its piece of the puzzle to a central processing location. The goal is that even if adversaries can overcome the anti-tamper measures on a captured AI/ML system, they won't be able to glean enough information to unlock the secrets of the system and its algorithms.
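The disaggregation idea can be illustrated with a short sketch. The algorithm names, the two-vehicle split, and the fusion step are hypothetical stand-ins for a real mission architecture:

```python
# Each unmanned vehicle carries only a slice of the mission algorithms;
# a central node fuses the partial results.

def partition(algorithms, n_vehicles):
    """Round-robin split so no single vehicle holds the full algorithm set."""
    slices = [[] for _ in range(n_vehicles)]
    for i, alg in enumerate(algorithms):
        slices[i % n_vehicles].append(alg)
    return slices

def central_fusion(partials):
    """Combine partial analyses into the full mission picture."""
    merged = {}
    for p in partials:
        merged.update(p)
    return merged

algorithms = ["detect", "classify", "track", "predict"]
slices = partition(algorithms, 2)
assert all(len(s) < len(algorithms) for s in slices)  # no vehicle has everything

# Each vehicle runs its slice and reports only its piece of the puzzle
partials = [{alg: f"{alg}-result" for alg in s} for s in slices]
assert set(central_fusion(partials)) == set(algorithms)
```

Capturing one vehicle here yields at most half the algorithm set, which is the security property disaggregation is meant to provide.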

Protecting The Algorithms With Resiliency

If cyber protections do fail, the classified algorithms on an unmanned vehicle need to be replaced as quickly as possible with new and better algorithms to maintain the mission. However, with conventional approaches, algorithms can’t easily be switched in and out—often the entire AI/ML system has to be rearchitected, which can take months. In addition, algorithms and other components in a system are often so interdependent that fixing one problem—such as switching out an algorithm—can create other, unexpected problems in the system, leading to rework and more delays.

Once again, the modular approach provides an advantage. Using open architectures and other open techniques, the joint forces can build AI/ML systems that make it possible to quickly plug-and-play new algorithms and other components. In addition to helping maintain the mission, this has other benefits. AI/ML developers can regularly tweak the classified algorithms and replace them proactively—before any cyberattack—to make it difficult for adversaries to build up information on them. Plug-and-play also makes repurposing classified algorithms from one mission to the next easier and more secure.
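One common way to get this plug-and-play behavior is a registry behind a stable interface. The sketch below is a generic illustration of the pattern, not any particular program's architecture; the slot name and versions are made up:

```python
class AlgorithmRegistry:
    """Sketch of an open, modular design: algorithms register behind a common
    interface, so a replacement can be dropped in without rearchitecting."""
    def __init__(self):
        self._algos = {}

    def register(self, slot, fn):
        self._algos[slot] = fn      # swapping = re-registering the same slot

    def run(self, slot, data):
        return self._algos[slot](data)

registry = AlgorithmRegistry()
registry.register("radar_classifier", lambda d: "v1:" + d)
assert registry.run("radar_classifier", "ping") == "v1:ping"

# Proactively replace the algorithm; callers are unaffected
registry.register("radar_classifier", lambda d: "v2:" + d)
assert registry.run("radar_classifier", "ping") == "v2:ping"
```

Because callers only see the slot's interface, an algorithm can be rotated proactively without touching the rest of the system, which is the point of the modular approach described above.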

Protecting classified algorithms on unmanned systems in the Pacific presents its own set of challenges. But by constructing strong cyber defenses throughout the algorithms' entire lifecycle, and by emphasizing resiliency, the joint forces can take steps to meet those challenges.


Jandria Alexander ([email protected]) is a nationally recognized cybersecurity expert and a vice president at Booz Allen who leads the firm's business for NAVSEA and S&T, including unmanned systems, resilient platform and weapon systems, data science, and enterprise digital transformation strategy and solutions for Navy clients.

Mike Morgan ([email protected]) is a principal at Booz Allen who leads the firm’s NAVAIR line of business. He has over 20 years of experience supporting NAVAIR programs with a focus on systems development and cybersecurity for unmanned systems and C4ISR solutions.

BOOZALLEN.COM/DEFENSE


By Jandria Alexander and Mike Morgan
Sponsored by Booz Allen Hamilton

How AI Can Help The Joint Forces With Persistent Targeting

Submitted by [email protected] on Sun, 05/01/2022 - 00:00

One of the thorniest challenges in the Indo-Pacific is persistent targeting—how can the joint forces keep track of a constantly changing array of often fast-moving targets, over vast open spaces, against adversaries adept at hiding what they're doing? How can you make sure you're always matching up the right sensors with the right targets, and at exactly the right times, so you can maintain custody of critical targets with the needed handoff from one sensor to the next?

These are complicated problems that require rapidly bringing together and analyzing, in real time, a growing ocean of information on both targets and sensors—something that is becoming increasingly difficult using conventional manual approaches. However, those are just the kinds of problems that artificial intelligence solutions are well suited to handle. With advances in machine learning and other forms of AI, the joint force now has the tools and opportunity to make an exponential leap in persistent targeting in the Indo-Pacific and elsewhere.

Gaining Situational Awareness

Establishing and improving situational awareness through the use of AI starts with a robust capability to gather, store, and process large amounts of data. Fortunately, today there are data platforms that can securely bring together the full range of data that the joint forces collect on targets and sensors. These platforms can seamlessly accept data from any source, and in any format, and make it fully available to AI and other data fusion and analytic applications.

The application of trained AI models to these large sets of data can then result in rapid target identification, factoring in current or last known locations, as well as other target characteristics. These models can also correlate other sensor information about a target, such as patterns in the electromagnetic, acoustic, and IR signatures.

Predicting Target Paths


Properly trained AI models can also predict where targets are likely to go, so operators can optimize potential sensor-to-sensor handoffs to maintain persistent targeting and help commanders maneuver their forces in advance of adversary action. The AI models do this by analyzing historical data on adversary targets and actions, looking for behaviors and patterns, such as where those targets have gone in the past in particular circumstances. For example, when there's a certain combination of adversary aircraft flying in a "package"—such as two tankers, four bombers, and six fighters—what kinds of missions did such a group execute in the past, and what flight paths did they tend to take? How have such patterns been changed in the past by our responses, and by other factors, such as the weather?


The power of AI comes from its ability to combine vast amounts of historical data with the current context from any number of sources, such as intelligence, political developments, and weather. This can then provide commanders with likely paths for targets of interest and assign confidence and probability values to the different potential target movements.
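As a simplified illustration of attaching probability values to potential target movements, a first-order model can count past transitions between zones and report the likely next moves with confidence values. The zones and historical tracks below are made up; real systems would fuse far richer context:

```python
from collections import Counter, defaultdict

def build_model(tracks):
    """First-order Markov sketch: count historical transitions between zones."""
    transitions = defaultdict(Counter)
    for track in tracks:
        for a, b in zip(track, track[1:]):
            transitions[a][b] += 1
    return transitions

def predict_next(model, current):
    """Return likely next zones with probability (confidence) values."""
    counts = model[current]
    total = sum(counts.values())
    return {zone: n / total for zone, n in counts.most_common()}

# Hypothetical historical tracks over named patrol zones
tracks = [["A", "B", "C"], ["A", "B", "D"], ["A", "B", "C"]]
model = build_model(tracks)
probs = predict_next(model, "B")
assert probs["C"] == 2/3 and probs["D"] == 1/3
assert predict_next(model, "A") == {"B": 1.0}
```

Conditioning these counts on context such as weather or package composition would be the next step toward the richer prediction described above.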

Predicting Sensor Accuracy


AI solutions can also identify which available sensors are best suited to maintain target custody, and can continuously perform sensor-target pairings, at machine speed, with automated handoffs—across large geographies with multiple targets and multiple sensors. For example, based on the historical data, which types of sensors have been most successful in tracking targets with certain characteristics? Which sensors are most accurate in a particular combination of environmental factors? AI models, for example, can account for water depth, sound-velocity profiles, and arrival path in tracking a submarine, and also factor in the sensor's position relative to the target. Such AI solutions can then help optimize the sensor-target pairing, ensuring the right sensor is on the right target at the right time.
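One simple way to frame sensor-target pairing is as an assignment problem over historical success scores. The greedy sketch below is illustrative only; the sensor names, targets, and scores are assumptions, and operational systems would use far richer optimization:

```python
def pair_sensors(scores):
    """Greedy sketch: repeatedly assign the best-scoring free sensor/target pair.
    `scores[(sensor, target)]` would come from historical tracking success."""
    assignments = {}
    used_sensors, used_targets = set(), set()
    for (sensor, target), _ in sorted(scores.items(), key=lambda kv: -kv[1]):
        if sensor not in used_sensors and target not in used_targets:
            assignments[target] = sensor
            used_sensors.add(sensor)
            used_targets.add(target)
    return assignments

# Hypothetical scores for two sonars tracking two submarine contacts
scores = {
    ("sonar_1", "sub_a"): 0.9,
    ("sonar_2", "sub_a"): 0.6,
    ("sonar_1", "sub_b"): 0.4,
    ("sonar_2", "sub_b"): 0.7,
}
assert pair_sensors(scores) == {"sub_a": "sonar_1", "sub_b": "sonar_2"}
```

Re-running this assignment as targets move and scores change is, in miniature, the continuous machine-speed pairing the text describes.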


AI also can look many moves ahead, to identify the best sensors—not just for the upcoming handoff, but for the next handoff and the ones after that. As the targets move, AI models can continually update "best-sensor-to-use" calculations, in the same way that a smartphone map application continually reconfigures for the fastest route. The ability to project a complex target-tracking scenario five, ten, or twenty moves ahead at machine speed can provide commanders with a huge information edge in a rapidly unfolding scenario.


Prioritizing And Orchestrating The Sensors


It’s not uncommon that a particular sensor is needed for two different targets at the same time. How does the commander decide? Here again AI can help. It starts by evaluating the targets themselves and ingesting the commander’s target prioritization and the likelihood of the loss of target custody. For example, a commander may prioritize a highly accurate sensor for a high-priority target. But if the custody of that high-priority target can be assured with a different sensor for a short period of time, then the highly accurate sensor could potentially be re-tasked and then returned to the high priority target without any mission degradation. That would free
up the more accurate sensor to provide information on a target that might otherwise be difficult to acquire. The promise of AI is that it can sort out much of this complexity in real time to maintain persistent targeting and custody on multiple targets in an ever-changing environment. AI solutions can also deal with changing commander priorities, changing environmental factors, sensor degradation, and adversary counteractions all at machine speed—delivering the commander a synchronized battlespace-awareness plan optimized for both sensor and targets.


These AI solutions also learn over time. As they get “smarter,” they can better sort out which combinations of sensors are most effective at tracking which targets and under which conditions. As models incorporate more data and the results of human decision making across many different scenarios, they will also improve anomaly detection, target path prediction, and synchronized sensor target pairing.


Staying Ahead Of Adversaries


As the battlespace in the Indo-Pacific and other areas of interest becomes increasingly complex and crowded, and as adversaries get more skillful at hiding their intentions, persistent targeting will only get more difficult. Integrating AI solutions into today’s operations can give the joint forces a strategic edge.

Lt. Gen. Chris Bogdan ([email protected]) is a Booz Allen senior vice president who leads the firm's aerospace business, delivering solutions to DoD, NASA, and commercial clients. As a 34-year U.S. Air Force officer and test pilot, he flew more than 30 different aircraft types and was the Program Executive Officer for the F-35 Joint Strike Fighter Program for the Air Force, U.S. Navy, U.S. Marine Corps, and 11 allied nations.

Patrick Biltgen, Ph.D. ([email protected]) is the director of AI mission engineering at Booz Allen, leading data analytics and AI development for space and intelligence programs. He is the author of Activity-Based Intelligence: Principles and Applications, and recipient of the 2018 Intelligence and National Security Alliance (INSA) Edwin Land Industry Award.

BOOZALLEN.COM/DEFENSE


By Lt. Gen. Chris Bogdan, U.S. Air Force (Ret.) and Patrick Biltgen, Ph.D.
Sponsored by Booz Allen Hamilton

Strengthening JADC2 In The Pacific With Line-Of-Sight Communications

Submitted by [email protected] on Sat, 01/01/2022 - 00:00

Back in the 1990s, when the U.S. military still relied primarily on line-of-sight rather than satellites for C4ISR and other communications, the Office of Naval Research developed and tested a breakthrough approach—a self-organizing mesh network for Navy line-of-sight communications.

With this network, a ship, for example, can send radar data far beyond the horizon, using ships, planes and Navy stations in a series of line-of-sight relays. Algorithms chart the most efficient path from one line-of-sight platform to the next. Data might travel half a dozen or more “hops” before reaching its ultimate destination.

As innovative as the research was, the mesh network was never put into operation—satellite communications were quickly coming into their own in the Navy and the other services, and there was no longer a pressing demand for line-of-sight relays to go beyond the horizon.

There may be a need for such a mesh network again. In the event of a conflict in the Pacific, satellite communications could be degraded or denied, undermining the effectiveness of Joint All-Domain Command and Control (JADC2). If that were to happen, the DoD would need to rely on line-of-sight networks for sensor, command-and-control, and other data. Unfortunately, current approaches to line-of-sight networks have significant limitations—such networks tend to be inefficient and unstable over long distances.

However, by bringing back the mesh relay network developed by the Navy in the 1990s—and updating it with AI and infrastructure improvements—the DoD can strengthen its ability to maintain JADC2 in a satellite-denied environment.

Current Approaches To Line Of Sight

One of the weaknesses of current line-of-sight networks is that they try to create a global topology, or map, that shows all the connections between various platforms, as well as the most efficient communications routes. Satellite networks can create such global topologies because every platform can "see" the satellites. However, it is much more difficult for line-of-sight networks to create fully comprehensive maps.

Line-of-sight communications must be conducted at relatively low power to avoid giving away the platforms' locations to adversaries. But lower power means lower bandwidth, or capacity. And when line-of-sight networks try to create a global topology, they often end up using most of the available bandwidth just maintaining the map. Each time there's a change in connectivity—with a ship or plane moving into or out of line-of-sight—the routers and algorithms on the network's platforms have to completely update the global topology. This intensive router-to-router traffic between platforms not only crowds out intelligence information, sometimes there's not even enough bandwidth for the router traffic itself. This can be a particular issue for U.S. forces in the Pacific, where airborne and seaborne platforms are constantly moving in and out of sight of one another. A global topology is typically not sustainable in a frequently changing line-of-sight environment.

Advantages Of The Mesh Network

Instead of trying to create a global topology, the mesh network developed by the Navy in the 1990s uses an innovative relay system that moves data one line-of-sight hop at a time.

Here’s how it works: For example, say a UAV needs to send radar data to a number of ships, planes, and bases beyond the horizon in the Pacific. With the mesh network, the UAV and all of the platforms within its line of sight are using their routers and algorithms to communicate with one another. In essence, they’re creating a highly localized network map.

It wouldn’t be practical for the UAV to send its data to all of its line-of-sight neighbors—that would create too much network traffic. Instead, the UAV determines which neighbors have the most line-of-sight connec- tions of their own and sends its data only to them. In the next step, the platforms that get the UAV’s data relay it to their own line-of-sight neighbors that have the most connections. This process is repeated, from one group of line-of-sight platforms to the next, until the UAV’s data reaches its ultimate destinations.

A major advantage of this approach is that data moves throughout the network with the minimum number of platform-to-platform relays. This makes the most efficient use of line-of-sight’s limited bandwidth, freeing up capacity for intelligence data. And because the fewest possible platforms are relaying the data from one hop to the next, it lowers the risk of detection by adversaries. There’s another benefit: Unlike line-of-sight networks that try to create global topologies, the mesh network is self-healing—it seamlessly incorporates constant changes in connectivity.

The latest advances in AI have the ability to make the mesh network far more powerful than Navy researchers envisioned in the 1990s. In particular, AI can help maximize routing and network efficiency, by determining which platforms, and which data transmissions, have the highest priority based on the operational mission and the commander’s intent.

Building A Mature Line-Of-Sight Infrastructure

Mesh networks alone, however, are not enough. In order for them to operate efficiently—even with AI—they need to be part of an infrastructure that is geared toward line-of-sight communications, not just satellites. For example, in recent years sensors have been increasingly designed to stream data through satellite communications. However, it is difficult for lower bandwidth, line-of-sight communications to manage and consume streamed data. Too much data from too many sensors will bog down a line-of-sight network.

This means that sensors will need to operate differently in a satellite-degraded or -denied environment—instead of streaming oceans of data, they will only be able to send the most relevant bits of information. Here again AI can help, by selecting the most relevant sensor data based on mission, evaluating network conditions, and determining how much data can be sent at a given time.
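Such triage can be sketched as a simple budgeted selection: send the highest-relevance detections that fit the available bandwidth. The detections, relevance scores, and budget below are illustrative assumptions:

```python
def triage(detections, budget_kb):
    """Sketch: instead of streaming everything, send only the most mission-relevant
    detections that fit in the current line-of-sight bandwidth budget."""
    sent, used = [], 0
    for det in sorted(detections, key=lambda d: -d["relevance"]):
        if used + det["size_kb"] <= budget_kb:
            sent.append(det["id"])
            used += det["size_kb"]
    return sent

detections = [
    {"id": "contact_1", "relevance": 0.9, "size_kb": 40},
    {"id": "contact_2", "relevance": 0.2, "size_kb": 80},
    {"id": "contact_3", "relevance": 0.7, "size_kb": 30},
]
assert triage(detections, budget_kb=75) == ["contact_1", "contact_3"]
```

In practice the budget itself would be updated continuously from measured network conditions, so the same sensor sends more in a permissive environment and less when bandwidth tightens.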

In addition, sensors will need to be specifically designed to accommodate line-of-sight communications. One example of the way this is being done now: With some small UAVs, the resolution on the cameras is intentionally lower, and the frame rates are intentionally slower, so that the video can be processed more easily through line-of-sight communications.

A line-of-sight infrastructure also calls for changes to the routers and algorithms that communicate with one another to form a mesh network. The DoD now largely relies on commercial, proprietary routers and algorithms that are specifically designed for global topologies. With open operating systems and other open approaches, the DoD can develop routers and algorithms tailored to line-of-sight communications.

U.S. forces in the Pacific may someday need to transition from satellite to line-of-sight communications in order to maintain JADC2. By leveraging the mesh relay network the Navy developed in the 1990s, updating it with the latest AI, and developing a mature line-of-sight communications infrastructure, the DoD can help meet that challenge.

BOOZALLEN.COM/DEFENSE


Mike Morgan ([email protected]) is a principal at Booz Allen who leads the firm’s NAVAIR line of business. He has over 20 years of experience supporting NAVAIR programs with a focus on systems development and cybersecurity for unmanned systems and C4ISR solutions.

Steve Tomita ([email protected]) is a principal and director of technology and digital engineering at Booz Allen, where he has been driving innovation and capability delivery to the Navy and DoD for 20 years.

Cliff Warner ([email protected]) is a chief engineer at Booz Allen. He led the research on the mesh relay network for the Office of Naval Research in the 1990s when he was with what is now Naval Information Warfare Center Pacific. He currently develops and analyzes system-of-system architectures for Navy clients.


By Mike Morgan, Steven Tomita, and Cliff Warner
Sponsored by Booz Allen Hamilton

Making Digital Engineering For Unmanned Systems More Open

Submitted by [email protected] on Wed, 12/01/2021 - 00:00

Unmanned maritime systems (UMS) are poised to become a leading-edge capability for the Navy in potentially contested environments in the Western Pacific. As this unfolds, China will likely respond by aggressively introducing new methods and solutions to blunt the UMS’ effectiveness. The Navy will then need to introduce even more advanced sensors, analytics and other technologies – which the Chinese in turn will seek to counter as quickly as they can.

The result may be a supercharged, ongoing technology race between the Navy’s unmanned capabilities and China’s countermeasures. If the Navy is to win that race, it is crucial that new capabilities be developed and fielded with digital engineering—but not the way digital engineering for the Navy is commonly practiced today. A new approach is needed, one that takes digital engineering out of the mostly exclusive realm of original equipment manufacturers (OEMs), and makes it more open to the Navy, and to a wider range of industry and other partners.

The Problem: Limited Insight Into Design Data

Currently, most digital engineering practiced for major Navy programs of record and other projects is conducted by OEMs in their own digital environments. Because these environments are largely closed, the Navy lacks real-time insight into the design data. The OEMs typically do their design work in their own digital environments, and then extract limited data points and present them to the Navy in contractual artifacts like spreadsheets, PowerPoint presentations, and pdf files. These artifacts are usually delivered only at major milestone design reviews.

This makes it difficult for the Navy to flag problems or gain detailed insight before a design goes to testing. Not only does the Navy have to wait until the end of a design phase to obtain the artifacts, the artifacts themselves may not have all the data Navy engineers need to fully evaluate and influence the design. This often results in extensive rework and other delays. Much of the speed that digital engineering offers the Navy is simply lost.

Closed OEM digital environments also hamper the ability of the Navy to tap innovation within the wider technology development community. Other providers normally have limited access to the information they might need—including design and configuration data, system architectures and key interfaces—to determine whether they might possess new solutions to offer the Navy. While some of this information may be contained in legacy documents, it could take weeks or months to sort out—and even then it might not be enough. Here again, the Navy loses out on the potential of digital engineering.

Shared Digital Engineering Environments

If the Navy is to take full advantage of digital engineering for unmanned systems, the design work needs to be conducted in common, or shared digital environments. Shared digital environments can take several different forms, but in essence they provide multiple parties with common access to design data. They might be sponsored or managed by the Navy, by OEMs, or by other entities. The Navy is already moving toward shared digital environments, and now has the opportunity to build on that progress.

In a shared digital environment, the Navy can see the same design data the OEM is working with, and so can spot potential problems in real time, without needing to refer to artifacts at a later date. For example, if an OEM is developing a new side-scan sonar for an unmanned underwater vehicle, the Navy can provide much faster review, analysis and feedback across the entire lifecycle of the design—all of which would help get the sonar integrated, tested and fielded more rapidly.

Opening up digital engineering environments also fosters competition and innovation, by bringing in the wider community of technology providers, including academia and non-traditional defense contractors. Shared digital environments give providers earlier and deeper insight into what the Navy needs. And the more providers that can look at the problem, the greater chance that one of them will say, “We know how to solve it.”

More Open Architectures, Less Vendor-Lock

One of the keys to rapid technology insertion in unmanned systems is the ability to plug-and-play the best new technologies from across the provider community. This requires open architectures, so that any provider can build solutions that will seamlessly integrate with current systems. Shared digital engineering environments do much to encourage these open architectures. That’s because shared environments aren’t effective unless the architectures let everyone in. Shared digital engineering environments and open architectures go hand-in-hand; each promotes the other.
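The plug-and-play idea can be sketched in code. In this hypothetical example (the interface name and methods are invented for illustration), the platform depends only on an openly published contract, so any provider who implements it can integrate without access to another vendor's proprietary internals.

```python
# Hypothetical sketch of an open-architecture payload contract.
from abc import ABC, abstractmethod

class Payload(ABC):
    """The openly published interface every payload provider builds against."""

    @abstractmethod
    def identify(self) -> str:
        """Return a unique payload identifier."""

    @abstractmethod
    def sense(self) -> dict:
        """Return one frame of sensor data."""

class VendorASonar(Payload):
    # A non-traditional provider's side-scan sonar, built only against the contract.
    def identify(self) -> str:
        return "vendor-a-side-scan-sonar"

    def sense(self) -> dict:
        return {"type": "side-scan", "returns": []}

def integrate(payloads: list) -> list:
    # The platform depends only on the interface, never on a specific vendor.
    return [p.identify() for p in payloads]

print(integrate([VendorASonar()]))  # prints: ['vendor-a-side-scan-sonar']
```

Swapping in a competing sonar is then a one-line change at the call site, which is exactly the kind of rapid technology insertion the article describes.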

At the same time, this approach substantially reduces vendor-lock. When other providers have direct insight into design data—rather than just legacy documents—the Navy is less dependent on the OEMs for system updates and upgrades. And with open architectures, the Navy is no longer locked into an OEM’s proprietary approaches. Naturally, all of this must occur under appropriate levels of cybersecurity to prevent intrusions, manipulations, and theft of cutting-edge technical data—even as we reap the benefits of open architectures.

Faster Adoption Of Digital Engineering

Shared digital environments are the key to digital engineering not only for emerging platforms such as unmanned systems, but also for the Navy’s transformational technologies for critical priorities, including Project Overmatch. Shared digital environments speed this wider adoption of digital engineering.

Currently, each OEM typically has its own set of digital engineering tools and techniques, which are often incompatible with one another. Common digital environments encourage common approaches, making it easier for the Navy to take digital engineering out of isolated pockets and scale it across any number of projects.
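One common way to bridge incompatible toolchains is a tool-neutral interchange format. The sketch below uses JSON as a stand-in for the interchange standards real MBSE toolchains use (such as XMI or ReqIF); the function names and field names are illustrative assumptions, not from any actual environment.

```python
# Hypothetical sketch: round-tripping design data through a tool-neutral format.
import json

def export_model(name: str, parameters: dict) -> str:
    """Serialize a design model from one toolchain into a neutral interchange blob."""
    return json.dumps({"model": name, "parameters": parameters}, sort_keys=True)

def import_model(blob: str) -> dict:
    """Load the neutral blob into another toolchain's native structures."""
    return json.loads(blob)

blob = export_model("uuv-side-scan-sonar", {"frequency_khz": 455, "swath_width_m": 150})
restored = import_model(blob)
print(restored["parameters"]["frequency_khz"])  # prints: 455
```

Once every tool can read and write the neutral format, design data stops being trapped in any one vendor's environment.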

Building On The Navy’s Progress

The Navy is already moving toward shared digital environments. One example is the planned Rapid Autonomy Integration Laboratory (RAIL), which will test new autonomous capabilities for unmanned maritime vehicles. Another example is The Forge, where the Navy can rapidly develop, test and distribute software upgrades to the Aegis and the Ship Self-Defense System (SSDS) platforms.

Both RAIL and The Forge are Navy-sponsored shared digital environments. This model of government-industry collaboration gives the Navy full access to the digital environments and taps the innovation of the wider community of technology providers.

By building on the successes of these and other shared digital environments, the Navy has the opportunity to unlock the full power of digital engineering for unmanned vehicles on the leading edge in the Pacific, and for initiatives across the Navy.


Check out more sponsored articles.

View All
Article By Line
By Brian Abbe, Commander Eric Billies, U.S. Navy (Retired), and Mike LaPierre

BRIAN ABBE ([email protected]) is the client service officer for Booz Allen’s Navy/Marine Corps business. He leads the development of solutions and technologies for the Navy and Marine Corps in areas such as unmanned systems; information warfare; biometrics; antitamper; air traffic control; position, navigation, and timing; augmented reality/virtual reality; and fabrication and prototyping.

COMMANDER ERIC BILLIES ([email protected]), a retired surface warfare officer, leads Booz Allen’s business in the Pacific Northwest, helping Navy clients chart innovative approaches for USV/UUV employment and driving immersive tech (VR/AR/XR) across Booz Allen’s Global Defense Group.

MIKE LAPIERRE ([email protected]) is a senior systems engineer at Booz Allen specializing in developmental engineering and platform HW/SW integration using MBSE and digital engineering-based analyses.

BOOZALLEN.COM/DEFENSE
Sponsor Name
Booz Allen Hamilton

Making Digital Engineering For Unmanned Systems More Open

Submitted by [email protected] on Mon, 11/01/2021 - 00:00

Unmanned maritime systems (UMS) are poised to become a leading-edge capability for the Navy in potentially contested environments in the Western Pacific. As this unfolds, China will likely respond by aggressively introducing new methods and solutions to blunt the UMS’ effectiveness. The Navy will then need to introduce even more advanced sensors, analytics, and other technologies—which China, in turn, will seek to counter as quickly as it can.

The result may be a supercharged, ongoing technology race between the Navy’s unmanned capabilities and China’s countermeasures. If the Navy is to win that race, it is crucial that new capabilities be developed and fielded with digital engineering—but not the way digital engineering for the Navy is commonly practiced today. A new approach is needed, one that takes digital engineering out of the mostly exclusive realm of original equipment manufacturers (OEMs), and makes it more open to the Navy, and to a wider range of industry and other partners.

The Problem: Limited Insight Into Design Data

Currently, most digital engineering for major Navy programs of record and other projects is conducted by OEMs in their own digital environments. Because these environments are largely closed, the Navy lacks real-time insight into the design data. The OEMs do their design work internally, then extract limited data points and present them to the Navy in contractual artifacts such as spreadsheets, PowerPoint presentations, and PDF files. These artifacts are usually delivered only at major milestone design reviews.

