

Challenges to Acquiring C4ISR Systems Based on Service Oriented Architecture
By Lee Zimmerman and Antonio Siordia - April-June 2010
The Space and Naval Warfare Systems Command develops the standards, policies and integrated architectures for innovative and interoperable command, control, communications, computers, intelligence, surveillance and reconnaissance (C4ISR) solutions that meet warfighters' requirements. The departments of the Navy and Defense are in the midst of a multi-year transition from legacy stove-piped system architectures to a service-oriented architecture (SOA) approach. Stand-alone applications are migrating to reusable services within a system, and systems are now starting to share services.

The next step in this evolution may be to do away with systems altogether and instead field loosely integrated suites of services that can be deployed across multiple platforms, including ships, command centers and tactical assets, to enable net-centric operations for the Navy and joint customers. Although there are numerous benefits to applying SOA to C4ISR systems, significant challenges remain, which we will explore.

How Do You Buy a Services-based System?
Let's start with a scenario. A program, Alpha, might develop a service (for example, a unique data visualization solution) for its own use and would logically scale the supporting infrastructure to the demand it expects. Alpha's developers would follow net-centric guidelines and share their new service in a directory, where users associated with program Bravo discover it and find it useful. As a result, usage of the service suddenly doubles and the supporting infrastructure might be overwhelmed.

With current acquisition practices, both Alpha and Bravo are out of luck: the service was built to meet Alpha's specific requirements, and there is no funding to upgrade the service infrastructure to support the Bravo users. Because this was not a planned dependency, there is also no service level agreement (SLA) between the Alpha and Bravo programs, and consequently no guarantee that the service won't go away or change its interface (application programming interface, or API) or its data format. This reuse of services would be good news, but only if the acquisition system can react by shifting resources to the Alpha program and putting an SLA in place to recognize the reuse and the new inter-program dependency.
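The Alpha/Bravo dependency problem can be made concrete with a small sketch. The Python below is purely illustrative (the `ServiceEntry` class, its field names and the capacity figures are invented for this article, not drawn from any DoD system); it shows how a service registry that records every consuming program would make unplanned demand, and the need for an SLA, visible:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a registry entry that records each consuming
# program, so capacity shortfalls and unplanned dependencies are visible.
@dataclass
class ServiceEntry:
    name: str
    version: str
    capacity_rps: int  # requests/sec the funded infrastructure supports
    consumers: dict = field(default_factory=dict)  # program -> expected rps

    def register_consumer(self, program: str, expected_rps: int) -> None:
        self.consumers[program] = expected_rps

    def demand_rps(self) -> int:
        return sum(self.consumers.values())

    def overloaded(self) -> bool:
        return self.demand_rps() > self.capacity_rps

viz = ServiceEntry("data-visualization", "1.0", capacity_rps=100)
viz.register_consumer("Alpha", 90)   # planned use, sized into the budget
viz.register_consumer("Bravo", 90)   # unplanned reuse doubles the demand
print(viz.overloaded())  # True: demand (180) exceeds capacity (100)
```

In practice such an entry would also carry the SLA terms themselves (availability targets, interface-stability guarantees), so a new consumer registration could trigger both a capacity review and a funding discussion.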

In fact, this scenario is limited because it still speaks in terms of programs or systems. If you take the DoD and Navy visions for net-centric operations, enabled by rapidly fielded, reusable and user-configurable applications, to their logical extension, it makes more sense for organizations to develop interoperable services rather than complete systems. An integrating organization could then assemble a collection of services, arranged into workflows, to meet the combined mission requirements for a particular platform. Instead of systems, services become the key unit of functionality, which has huge implications for how we define, buy, test, accredit and field capabilities.

This approach shifts technical, schedule and cost risk from relatively self-contained programs of record (POR) to solutions composed of services developed by multiple programs and organizations. Project and acquisition program managers are unlikely to be pleased with having their success depend on how well other programs execute. Instead, they may prefer to have their resource sponsors fund them to supply individual services rather than whole systems.

Today, we integrate applications into systems and then systems into platforms. In the near future, this model could conceivably shift to integrating services directly into platforms, bypassing the system level altogether. This raises the question of who is responsible for integrating these services, as well as how to partition requirements among services and then assign services to developers.

Once you have partitioned requirements to the service level, you then have to distribute funding appropriately. Further, as the previous scenario shows, funding decisions are not just limited to the development phase, but have to be revisited throughout the life cycle of each service.

The Network Enabled Command and Control (NECC) program took one approach to allocating requirements and funding. Funding for the services’ (Air Force, Army, Marine Corps and Navy) programs of record for command and control systems was shifted to the Defense Information Systems Agency (DISA) to meet an agreed upon set of joint requirements. DISA, in turn, partitioned the money back to the services to develop sub-elements of the overall solution. Unfortunately, a perceived loss of both control and funding by the services led to significant “push back” against the NECC program.

The Global Command and Control System – Integrated Imagery and Intelligence (GCCS-I3) program, on the other hand, illustrates a “coalition of the willing” approach where individual programs retained their own funding but adopted a shared technical approach. The Consolidated Afloat Networks and Enterprise Services (CANES) development approach is midway between NECC and GCCS-I3 — all hardware funding shifted from individual programs of record to CANES, but the programs retained funding and responsibility for software development. In addition, CANES provides enterprise services that programs of record have been adopting, either voluntarily or by mandate.

Once requirements and funding are allocated to multiple services, two major aspects of development remain that need coordination and control: schedule and technical standards. Ideally, schedule alignment for services should not be an issue; services would be seamlessly interoperable, could undergo continuous development, and we could just implement the most recent version of a service as needed. In cases where hosting the service is not required in-house, we can simply link the desired service into our workflows.

That remains a future goal, however: for the foreseeable future, integrators of services at the system or platform level will still need to worry about interoperability and capability issues for each service tied to specific releases. There will therefore have to be some forcing function, typically integrated test events, to align development schedules and services. Services that fail these events are not fielded in that baseline and fall back to the next release.

That potential failure of services to work together is what we hope to prevent with appropriate technical standards. For the most part, we do not care what goes on inside a service, giving developers considerable creative and technical latitude. We do, however, care very much about the interactions between services, so well-defined, rigidly enforced standards for data formats, inter-service communication and the use of common services (e.g., security services) are critically important.
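As a hedged illustration of what "rigidly enforced" interaction standards might look like in practice, the sketch below validates a hypothetical inter-service message envelope. The field names and the version rule are invented for this article and do not represent any actual DoD standard:

```python
# Hypothetical envelope standard: every inter-service message must carry
# these fields and a supported major schema version (values illustrative).
REQUIRED_FIELDS = {"message_id", "schema_version", "producer", "payload"}
SUPPORTED_MAJOR = 2

def validate_envelope(msg: dict) -> list[str]:
    """Return a list of standards violations (empty means compliant)."""
    errors = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - msg.keys())]
    version = msg.get("schema_version", "")
    major = version.split(".")[0] if version else ""
    if not major.isdigit() or int(major) != SUPPORTED_MAJOR:
        errors.append(f"unsupported schema_version: {version!r}")
    return errors

ok = {"message_id": "42", "schema_version": "2.1",
      "producer": "track-service", "payload": {}}
bad = {"message_id": "43", "schema_version": "1.0", "payload": {}}
print(validate_envelope(ok))   # []
print(validate_envelope(bad))  # missing producer, unsupported version
```

The point of such a check is that it can be run automatically at the interface, leaving the service's internals entirely up to its developers.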

How Do You Test a Services-based System?
The fundamental challenge in testing services-based systems is the realization that it is not practical to test all possible combinations of services. Although not every service can be used in combination with every other service, the number of possible compositions and permutations still approaches an N-factorial problem.
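A quick calculation shows why exhaustive testing is impractical. The sketch below counts the distinct compositions of up to five services drawn from inventories of various sizes (the inventory sizes are illustrative):

```python
from math import comb

# Even a modest service inventory makes exhaustive composition testing
# impractical: the number of possible k-service combinations explodes.
def compositions(n_services: int, max_size: int) -> int:
    """Count distinct compositions of 2..max_size services."""
    return sum(comb(n_services, k) for k in range(2, max_size + 1))

for n in (10, 50, 100):
    print(n, compositions(n, 5))
# 10 services  ->        627 compositions
# 100 services -> 79,375,395 compositions
```

And this counts only membership of a composition, not the orderings or workflow permutations, which grow even faster.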

One way to address the challenge is to identify the critical mission threads for any given “system” (i.e., a deployed collection of services) and make sure you test all the service compositions that support those mission threads. This has the advantage of ensuring that what must work, works, and in theory, should test almost all of the services in at least one workflow because the majority of services we should be fielding are those supporting key missions.
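A minimal sketch of this coverage argument follows, with invented service and mission-thread names: testing only the threads still exercises most of the inventory, and it makes the untested remainder explicit.

```python
# Hypothetical inventory and mission threads (names are illustrative).
inventory = {"auth", "track", "imagery", "chat", "weather",
             "logistics", "mapping", "alerts"}
mission_threads = {
    "air-defense":  ["auth", "track", "mapping", "alerts"],
    "strike":       ["auth", "imagery", "mapping", "track"],
    "humanitarian": ["auth", "logistics", "weather", "mapping"],
}

# Every service appearing in at least one thread gets tested in context.
covered = set().union(*mission_threads.values())
untested = inventory - covered
print(sorted(untested))  # ['chat'] -- not exercised by any thread
```

A service that supports no mission thread, like the hypothetical `chat` service here, is itself a question for the acquisition process: why is it being fielded?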

A more generalized approach would be to test several mission threads against a collection of services to create an approved baseline of services known to work together and to then pick from this baseline of services to field on a specific platform. This is similar to the Global Command and Control System – Maritime “segment” approach where all segments are tested, but not every ship gets all segments. With either of these two approaches, the test environment presents challenges due to the sheer scope of the services required.

Using CANES as an example, there are Navy-specific enterprise services running locally on a ship, but they need to be able to hand off user credentials and requests for information to services that are off the ship (typically through Net-Centric Enterprise Services). To fully test CANES you need the full CANES baseline (ideally spread across representations of several platforms), the naval telecommunications and Global Information Grid infrastructure that provides the connection, and all the other services (external to CANES) that support the mission threads under test.

For this reason, SPAWAR and the Program Executive Office for C4I have developed the Enterprise Engineering and Certification labs and process to provide the very large-scale, distributed, end-to-end environment supporting this type of testing.

A key element of this approach is to get the certifying agencies involved in planning test events so that a small number of large-scale end-to-end tests can be performed to serve as the development, the interoperability and, perhaps, operational acceptance tests. This is supported by having the certification agencies involved in defining graduation criteria as services move up through maturity levels, as well as having them ensure that the test environment includes the platforms, mission threads and services necessary to meet test requirements.

Another demand on the test environment is the level of instrumentation, or sensors, that SOA testing will require. We need to be able to collect the data needed to validate key performance parameters and other performance criteria, both at the individual service level and across the entire mission thread workflow. This baseline, end-to-end, mission thread-based test approach is workable, but it does not answer the speed-to-capability requirement that is one of the selling points of the SOA model.
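One way to picture that instrumentation, as an assumed sketch rather than the actual E2C tooling, is per-service timing that rolls up into a mission-thread total, which can then be checked against a thread-level performance threshold (the service names, sleep durations and 50 ms figure are all invented):

```python
import time
from contextlib import contextmanager

# Sketch: per-service latency measurements rolled up to a mission-thread
# total, so a thread-level KPP can be validated alongside service KPPs.
timings: dict[str, float] = {}

@contextmanager
def measured(service: str):
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[service] = timings.get(service, 0.0) + time.perf_counter() - start

with measured("auth"):     # stand-in for a real service call
    time.sleep(0.01)
with measured("track"):
    time.sleep(0.01)

thread_total = sum(timings.values())
print(f"thread latency: {thread_total * 1000:.1f} ms (threshold: 50 ms)")
```

The same two-level roll-up applies to other criteria (throughput, error rates): collect at the service boundary, aggregate along the workflow.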

To achieve speed-to-capability, we also need the ability to test an updated version of a single service. This could be accomplished by plugging the updated service into the full baseline test environment and re-running the full multiple-mission-thread test. However, there needs to be a less resource-intensive way of understanding the touch points for a single service and testing those elements of the service interaction, as well as verifying correct service functionality.
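One less resource-intensive approach, sketched below with an invented dependency graph, is to derive a service's touch points (its direct providers and its direct consumers) and re-test only those interactions when the service changes:

```python
# Hypothetical dependency graph: edges point from consumer to provider.
depends_on = {
    "coa-planner": ["track", "mapping"],
    "track":       ["auth"],
    "mapping":     ["auth", "imagery"],
    "alerts":      ["track"],
}

def touch_points(updated: str) -> set[str]:
    """Services whose interactions with `updated` must be retested:
    its direct providers plus every direct consumer."""
    providers = set(depends_on.get(updated, []))
    consumers = {svc for svc, deps in depends_on.items() if updated in deps}
    return providers | consumers

print(sorted(touch_points("track")))  # ['alerts', 'auth', 'coa-planner']
```

Re-testing just these interfaces, plus the updated service's own functional tests, is far cheaper than re-running every mission thread, at the cost of trusting that the interface standards really do isolate the rest of the baseline.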

How Do You Certify and Accredit (C&A) a Services-based System?
The challenges of achieving service/system C&A in a services-based net-centric environment range from the very newness of SOA and its evolving security model, to establishing trust among various service providers, to decades of C&A experience rooted in DoD's traditional stove-piped information systems.

The modular, dynamic, distributed design of a SOA system goes against the underlying approach to C&A today. In a traditional system, you can define the exact components that make up the system, where they are located, what version of every software element they are running and all the physical and logical interfaces. With that knowledge, you can identify known weaknesses and verify, through testing, that vulnerabilities have been addressed.

A SOA system does not have well-defined boundaries because the services that compose a system can be hosted anywhere on any hardware platform running any operating system. In addition, in a system design that allows for dynamic composition of services, we are unable to specify the full range of services that will be working together. Clearly, a new approach to C&A is going to be required.

How Do You Field a Services-based System?
Most of the challenges of fielding a SOA system have been addressed in previous sections; however, there are a few system delivery issues that still need to be addressed. Typically, a system is a collection of software applications, and a version of that system is a specific collection of specific versions of those applications. That concept can still carry forward in a SOA environment, if a single enterprise controls all of the services that make up a system. For example, the CANES program can develop, test and accredit a baseline version of the system and field that version to some collection of platforms. That works today because most, if not all, of the services in the CANES environment are running locally and the CANES program can ensure that the different versions of CANES can talk to each other.

However, in a distributed, joint, net-centric environment, this is not the most likely scenario. The likely future environment includes hundreds of different services, developed by different organizations and enterprises, both public and private, updated continuously and used by creative service members in ways not originally intended. Clearly, this requires a new approach to the concept of configuration management and release schedules that extends beyond the boundaries of any single enterprise.
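One convention such cross-enterprise configuration management could borrow is semantic versioning, in which interface compatibility is encoded in the version number itself. The sketch below shows the common convention (compatible within a major version, broken across one); it is an illustration, not a stated DoD policy:

```python
# Semantic-versioning sketch: a consumer built against one version of a
# service interface stays compatible only within the same major version.
def compatible(required: str, offered: str) -> bool:
    """True if a service offering `offered` satisfies a consumer
    that was built against interface version `required`."""
    req_major, req_minor, *_ = (int(x) for x in required.split("."))
    off_major, off_minor, *_ = (int(x) for x in offered.split("."))
    return off_major == req_major and off_minor >= req_minor

print(compatible("2.1.0", "2.3.5"))  # True: same major, newer minor
print(compatible("2.1.0", "3.0.0"))  # False: major bump breaks consumers
```

A convention like this lets each enterprise release on its own schedule while still giving integrators a mechanical check on whether a fielded composition can absorb an update.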

The Way Ahead for C4ISR Acquisition
The technical challenges of implementing SOA systems, while not easy, are well-known. As illustrated by the cancellation of the NECC program, the programmatic challenges of implementing SOA may be greater. Realizing this vision of purely services-based systems is going to require significant changes in how we:
• Define requirements in terms of mission threads and services;
• Allocate requirements across multiple organizations for implementation of services;
• Allocate resources for the sustainment of services;
• Test services;
• Accredit systems composed of services; and
• Sustain systems composed of interdependent services.

Team SPAWAR is engaged on several fronts to help bring about the necessary changes. First, Team SPAWAR personnel have leadership roles on many of the programs that are leading the way to SOA, including: GCCS-I3, NECC, CANES and the Command and Control Rapid Prototype Continuum (C2RPC), to name a few.

Second, SPAWAR’s Net-Centric Engineering and Integration competency has created communities of interest to share SOA best practices and to develop SOA standards and policies. Finally, Team SPAWAR personnel are actively engaged in the current information technology acquisition reform efforts recently enacted into law in Section 804 of the Fiscal Year 2010 Defense Authorization Act.

Lee Zimmerman is the national competency lead for net-centric engineering and integration for SPAWAR.
Antonio Siordia is on the net-centric engineering and integration staff for SPAWAR Systems Center Pacific.


CHIPS is an official U.S. Navy website sponsored by the Department of the Navy (DON) Chief Information Officer, the Department of Defense Enterprise Software Initiative (ESI) and the DON's ESI Software Product Manager Team at Space and Naval Warfare Systems Center Pacific.

Online ISSN 2154-1779; Print ISSN 1047-9988