
Web Service security and the human dimension of SOA roadmap

In most non-trivial SOA landscapes, keeping track of the constantly evolving integrations among systems is hard unless there is a clearly identified way to publish and find the appropriate pieces of information. An overview of the IT landscape, defining what is currently or will be connected to what, is a prerequisite for maintaining the environment. Without it, teams typically end up with the feeling of a “Spaghetti Oriented Environment” and a reluctance to start anything big.

This statement sounds obvious, but it is not always taken into account in practice. Some organizations either have no such centralized control of integration in place or have stopped using it because it “just got in the way of anything”. At best, this means that the integration information lives in the heads of a few key individuals, which is risky. More often, teams in such environments do not dare to update service contracts “in case something is still relying on them” and instead duplicate them whenever an update is needed, which runs counter to the very purpose of SOA.

Sometimes a good idea needs only a few steps back to be applied correctly. In this post, I explain why I think the need for an SOA roadmap should motivate the presence of security access restrictions on most Web Services, including non-sensitive ones.

Why is such a simple idea hard in practice?

Several factors can motivate teams to skip this important documentation step:

  • The urgency of other important short-term tasks and the feeling that the team is constantly “extinguishing fires”, with no time for anything else
  • The lack of a clearly identified central repository (such as an SOA registry/repository) in which to publish and look up such information, or the failure to use one that exists
  • The lack of centralized governance overseeing the integrations

From a human-factors point of view, this situation can be worsened by the “I have enough already” syndrome. Within complex multi-team/multi-project environments, individuals who are already overwhelmed by the problems at hand typically do not take the initiative to hunt for hard-to-find (and hard-to-solve) dependency problems with other projects. We need to anticipate this and proactively assist those teams, keeping in mind that the other problems they are dealing with are, of course, important as well.

The root cause of the above is a feeling that it is easier to skip the validation/documentation steps of an integration whenever possible. We have to reverse this feeling by advertising the value of centralized integration information as well as by raising the difficulty of implementing undocumented integrations.

What we need

We need an easy-to-use process that collects, validates and publishes current and future dependencies among systems. A key aspect is to keep it simple and close to the people who will actually use it, in a “just enough governance” fashion.

The four main components seem to be:

  • A clear procedure for requesting a new integration or updating an existing one. This includes validation from both business and technical perspectives, ensuring that the environment remains as clean and as future-proof as possible. If an EA effort is in place, most of those requests originate from and are routed to the EA team, which makes this step trivial! In practice, such requests will also come from project teams when they identify a required dependency during the detailed design or implementation phases.
  • A clearly identified and easy-to-access repository in which to look up the current and planned integrations. This repository must include the versioning of each future dependency as well as a deprecation/decommissioning plan.
  • A team responsible for updating the central repository and keeping the roadmap up to date. This would typically be the EA team, if available.
  • At the technical level, the impossibility of performing an integration if the above three components have not been involved. This should prevent “phantom dependencies” that remain hidden until a contract update triggers a problem.

This fourth component should in practice be an enterprise-wide IT principle stating that each Web Service implementation must require security authorization of the calling application. This does not preclude other security mechanisms when the service requires them, for example transporting a ticket with the identity of the human user who initiated the original business action (both REST and SOAP allow several security tokens to be present simultaneously).
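To make this fourth component concrete, here is a minimal sketch, in plain Java, of a per-application credential check. The application names, secrets and the in-memory registry are purely hypothetical; in a real setup the registry of known callers would be backed by the central integration repository:

```java
import java.util.Map;

public class CallerAuthorization {
    // Hypothetical registry mapping each known client application to its credential.
    // In practice this would be fed from the central integration repository.
    private static final Map<String, String> KNOWN_CALLERS = Map.of(
            "crm-frontend", "secret-crm",
            "billing-batch", "secret-billing");

    // Reject any call whose application id/credential pair is not registered:
    // an unregistered caller cannot create a "phantom dependency".
    public static boolean isAuthorized(String appId, String credential) {
        String expected = KNOWN_CALLERS.get(appId);
        return expected != null && expected.equals(credential);
    }

    public static void main(String[] args) {
        System.out.println(isAuthorized("crm-frontend", "secret-crm")); // true
        System.out.println(isAuthorized("rogue-app", "whatever"));      // false
    }
}
```

The check is deliberately lightweight, in line with the low-risk approach described below: its purpose is bookkeeping of integrations, not defense against determined attackers.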

The implementation of this principle must be made easy, typically by attaching technical documentation and code samples to the IT principle. Because we do not expect colleagues to be hacking each other, this can take a very lightweight, low-risk approach; the point is simply to make sure it is easier to involve the EA team than to put a phantom dependency in place. When using SOAP, my recommendation would be a simple WS-UsernameToken policy with one username/password pair per client application. When using REST, a well-known mechanism is HMAC: computing a keyed hash over part of the request together with a nonce and/or an expiration date (this mechanism is similar to the one used by Amazon S3).
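As an illustration of the REST variant, here is a sketch of HMAC request signing in Java. It assumes a hypothetical scheme in which the client signs the HTTP method, path, expiration timestamp and nonce with a per-application shared secret; all names and values below are invented for the example, and the actual string-to-sign layout would be defined in the documentation attached to the IT principle:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class HmacRequestSigner {

    // Build the string to sign from the parts of the request we want to protect,
    // then compute an HMAC-SHA256 over it with the per-application shared secret.
    public static String sign(String secret, String method, String path,
                              long expiresEpochSeconds, String nonce) throws Exception {
        String stringToSign = method + "\n" + path + "\n"
                + expiresEpochSeconds + "\n" + nonce;
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] rawHmac = mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(rawHmac);
    }

    public static void main(String[] args) throws Exception {
        // The client sends these values plus the signature (e.g. in HTTP headers);
        // the server, which knows the same secret for this application, recomputes
        // the signature and rejects the call on mismatch or expiry.
        String signature = sign("per-app-shared-secret", "GET", "/orders/42",
                1700000000L, "n-8f3a2c");
        System.out.println(signature);
    }
}
```

On the server side, the check would also verify that the expiration timestamp is still in the future and that the nonce has not been seen before, which is what makes replayed requests fail.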


In this post, I have tried to explain why I think a simple security policy, systematically put in place on each Web Service, helps keep track of the IT landscape and ensures that no “phantom dependencies” exist out of sight of the SOA governance team. This security policy must be simple to implement, supported by helper documents, and not necessarily very strong: just enough to ensure the EA team is aware of all integration implementations.

Reference: Web Service security and the human dimension of SOA roadmap from our JCG partner Svend Vanderveken at the Svend blog.
