OpenSSO – Secure Token Server

I think that the demise of OpenSSO has been greatly exaggerated. There are still positions open for people with OpenSSO skills, and there are many forums with people asking for help solving OpenSSO/OpenAM problems.

One question that comes up regularly is how to configure OpenSSO as a Secure Token Server.

This blog is the first in a series that will describe how to deploy OpenSSO to protect Oracle WebLogic resources by configuring it as a Secure Token Server.

A Secure Token Server is a third-party broker that allows a Web Service client to authenticate and receive a security token, which is then sent to a Web Service Provider. The Web Service Provider validates the token and verifies that it came from a trusted Secure Token Server. It then uses the token to make authentication and authorization decisions.
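OpenSSO's STS speaks WS-Trust, so the client's token request is a RequestSecurityToken message. A minimal sketch is below; the element names come from the WS-Trust 2005/02 namespace, but the token type and endpoint address are illustrative assumptions, not OpenSSO defaults:

```xml
<!-- Client -> STS: request a SAML token to present to a Web Service Provider -->
<wst:RequestSecurityToken
    xmlns:wst="http://schemas.xmlsoap.org/ws/2005/02/trust"
    xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
    xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <wst:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1</wst:TokenType>
  <wst:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</wst:RequestType>
  <!-- The provider the token will be presented to (placeholder URL) -->
  <wsp:AppliesTo>
    <wsa:EndpointReference>
      <wsa:Address>http://wsp.example.com/StockService</wsa:Address>
    </wsa:EndpointReference>
  </wsp:AppliesTo>
</wst:RequestSecurityToken>
```

The STS authenticates the caller, issues the token in its RequestSecurityTokenResponse, and the provider then only needs to trust the STS signing certificate rather than every individual client.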

Create and Deploy the SSL Certificates

This deployment uses self-signed certificates. The following instructions describe how to create and install them using OpenSSL and keytool.

  1. Create the root certificate.
  2. Create the trusted certificates store.
  3. Create the keys and signing requests.
  4. Sign the requests.
  5. Create the keystores.
  6. Add the public certificates to the keystores.

It is assumed that openssl.cnf has already been created.

Create the root certificate

openssl req -new -x509 -extensions v3_ca -keyout private/cakey.pem -out cacert.pem -days 365 -config openssl.cnf

Create the trusted certificates store

openssl x509 -outform DER -in cacert.pem -out cacert.cert
keytool -import -trustcacerts -keystore cacerts -storepass changeit -noprompt -alias cacert -file cacert.cert

Create the Keys and Signing Requests


openssl req -new -nodes -out clientReq.pem -keyout private/clientKey.pem -config openssl.cnf


openssl req -new -nodes -out serverReq.pem -keyout private/serverKey.pem -config openssl.cnf


openssl req -new -nodes -out openssoReq.pem -keyout private/openssoKey.pem -config openssl.cnf
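Each of the commands above will prompt for the subject fields defined in openssl.cnf. If you prefer a non-interactive run, the subject can be supplied with -subj; a sketch with placeholder names:

```shell
# Generate an unencrypted key and CSR in one step, no prompts
openssl req -new -nodes -newkey rsa:2048 \
  -subj "/CN=demo-client" \
  -keyout demo-clientKey.pem -out demo-clientReq.pem

# Inspect the request before sending it off for signing
openssl req -in demo-clientReq.pem -noout -subject
```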

Sign the Requests


openssl ca -out clientCert.pem -config openssl.cnf -infiles clientReq.pem


openssl ca -out serverCert.pem -config openssl.cnf -infiles serverReq.pem


openssl ca -out openssoCert.pem -config openssl.cnf -infiles openssoReq.pem
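Note that `openssl ca` relies on the CA directory layout configured in openssl.cnf (index.txt, serial, newcerts/). If you only need to sign a single request against the root key and don't have that layout, `openssl x509 -req` is a lighter alternative. The sketch below is self-contained, regenerating a placeholder CA and CSR first:

```shell
# Placeholder root CA and client CSR (see the earlier steps)
openssl req -new -x509 -newkey rsa:2048 -nodes -subj "/CN=Demo Root CA" \
  -keyout demo-cakey.pem -out demo-cacert.pem -days 365
openssl req -new -nodes -newkey rsa:2048 -subj "/CN=demo-client" \
  -keyout demo-clientKey.pem -out demo-clientReq.pem

# Sign the CSR directly with the CA key, creating a serial file on the fly
openssl x509 -req -in demo-clientReq.pem \
  -CA demo-cacert.pem -CAkey demo-cakey.pem -CAcreateserial \
  -out demo-clientCert.pem -days 365

# Verify the issued certificate chains back to the root
openssl verify -CAfile demo-cacert.pem demo-clientCert.pem
```

The verify step should report OK; if it doesn't, the certificate was not issued by the CA you expect.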

Create the Keystores

The following instructions use the ImportKey class to import the keys into the Java keystore.


  • Convert both the key and the certificate to DER format
    openssl pkcs8 -topk8 -nocrypt -in private/clientKey.pem -inform PEM -out clientKey.der -outform DER
    openssl x509 -in clientCert.pem -inform PEM -out clientCert.der -outform DER
  • Import the files into the JKS
    java ImportKey clientKey.der clientCert.der
  • Copy and rename the keystore
    copy <home directory>\keystore.ImportKey client.jks
  • Change keystore password:
    keytool -keystore client.jks -storepasswd
  • Change certificate password:
    keytool -keypasswd -alias importkey -keypass importkey -new changeit -keystore client.jks
  • Change the alias
    keytool -keystore client.jks -storepass changeit -changealias -alias importkey -keypass changeit -destalias client
  • Check the Keystore Contents
    keytool -list -v -keystore client.jks


  • Convert both the key and the certificate to DER format
    openssl pkcs8 -topk8 -nocrypt -in private/serverKey.pem -inform PEM -out serverKey.der -outform DER
    openssl x509 -in serverCert.pem -inform PEM -out serverCert.der -outform DER
  • Import the files into the JKS
    java ImportKey serverKey.der serverCert.der
  • Copy and rename the keystore
    copy <home directory>\keystore.ImportKey server.jks
  • Change keystore password:
    keytool -keystore server.jks -storepasswd
  • Change certificate password:
    keytool -keypasswd -alias importkey -keypass importkey -new changeit -keystore server.jks
  • Change the alias
    keytool -keystore server.jks -storepass changeit -changealias -alias importkey -keypass changeit -destalias server
  • Check the Keystore Contents
    keytool -list -v -keystore server.jks


  • Convert both the key and the certificate to DER format
    openssl pkcs8 -topk8 -nocrypt -in private/openssoKey.pem -inform PEM -out openssoKey.der -outform DER
    openssl x509 -in openssoCert.pem -inform PEM -out openssoCert.der -outform DER
  • Import the files into the JKS
    java ImportKey openssoKey.der openssoCert.der
  • Copy and rename the keystore
    copy <home directory>\keystore.ImportKey opensso.jks
  • Change keystore password:
    keytool -keystore opensso.jks -storepasswd
  • Change certificate password:
    keytool -keypasswd -alias importkey -keypass importkey -new changeit -keystore opensso.jks
  • Change the alias
    keytool -keystore opensso.jks -storepass changeit -changealias -alias importkey -keypass changeit -destalias opensso
  • Check the Keystore Contents
    keytool -list -v -keystore opensso.jks

Add the Public Certificates to the Keystores


  • Add the Client Public Certificate
    keytool -importcert -alias client -trustcacerts -keystore server.jks -storepass changeit -file clientCert.der
  • Add the OpenSSO Public Certificate
    keytool -importcert -alias opensso -trustcacerts -keystore server.jks -storepass changeit -file openssoCert.der
  • Check the contents of the Keystore
    keytool -list -v -keystore server.jks
  • Add the Server Public Certificate
    keytool -importcert -alias server -trustcacerts -keystore client.jks -storepass changeit -file serverCert.der
  • Add the OpenSSO Public Certificate
    keytool -importcert -alias opensso -trustcacerts -keystore client.jks -storepass changeit -file openssoCert.der
  • Check the contents of the Keystore
    keytool -list -v -keystore client.jks


  • Add the Client Public Certificate
    keytool -importcert -alias client -trustcacerts -keystore opensso.jks -storepass changeit -file clientCert.der
  • Add the Server Public Certificate
    keytool -importcert -alias server -trustcacerts -keystore opensso.jks -storepass changeit -file serverCert.der
  • Check the contents of the Keystore
    keytool -list -v -keystore opensso.jks

That’s it for now.  I’ll post the next installment next week.


I think that Arsene Wenger has bought well so far in this transfer window, but he has to get a more disciplined and defensive-minded midfielder in to replace Alex Song.  With two weeks of the transfer window remaining, I’ve decided not to give my opinion on the team until it closes.



Oracle Unified Directory

This is a quick post about Oracle Unified Directory.

Oracle released Oracle Unified Directory (OUD) with very little fanfare in July last year and has now updated it to OUD 11gR2 as part of the Oracle Identity Management 11gR2 suite of products.

For those of you who don’t know, OUD is based on Sun’s OpenDS project and has three components in common with ODSEE:

  • Directory Server
  • Proxy Server
  • Replication Server

The Directory Server provides the main LDAP functionality, the Proxy Server can be used for proxying LDAP requests, and the Replication Server is used for replication from one OUD instance to another OUD or even ODSEE server.

At first I didn’t see the point in Oracle releasing another lightweight directory server, until I took a closer look at the product.  In addition to the services mentioned above it has virtualization capabilities and Oracle’s Directory Integration Platform, which allows for the synchronization of data with other directory servers such as Active Directory.  Oracle has also been optimizing OUD for the SPARC T4-1 hardware.

This makes me wonder what the future is for ODSEE.

I’ve had limited experience with OUD but can confirm that it works well as an OpenAM data store.

Blogging Again

After a short hiatus I’m finally blogging again.

Some might think that I needed to recover from England’s unimaginative showing at the Euro’s and their inevitable exit on penalties in the quarter finals.

This is not true; I’ve been busy working.

One of the things I’ve done is to get more familiar with the available open source cloud offerings, in particular looking at OpenStack, Eucalyptus and CloudStack.

I used Martin Loschwitz’s excellent instructions here for the installation of OpenStack on a Lenovo T5010 laptop running Ubuntu 12.04 Precise Pangolin.

A couple of things to note:

  • Hardware virtualization must be turned on at the BIOS level otherwise the VM fails to start with spawning errors.
  • There is only one NIC on this laptop so I created a virtual adapter for the second NIC.
  • Don’t forget to create the LVM volume group called nova-volumes.  This is mentioned at the end of step 1 but no instructions are given.  For those who need them:

dd if=/dev/zero of=MY_FILE_PATH bs=100M count=10
losetup --show -f MY_FILE_PATH
apt-get install lvm2
vgcreate nova-volumes /dev/loop0

I also installed OpenStack on an ESXi virtual machine.  There are lots of instructions for installing it on VirtualBox but very few for installing it on VMware.  The issue is the requirement for hardware virtualization support.

It seems that there may be a way around this with VMware’s vSphere 5, but I didn’t want to start reconfiguring the company ESXi server, so I created an Ubuntu 12.04 virtual machine and installed DevStack by following Sam Johnston’s instructions here.  DevStack is a documented shell script from Rackspace Cloud Builders that builds a complete OpenStack development environment; it installed in less than fifteen minutes.

I shall now get familiar with the APIs and try to determine how easy it is to integrate with Open Source provisioning software.

Cloud Computing and Security

I briefly discussed cloud provisioning in a previous post and am now going to take a closer look at cloud computing and security.

What is cloud computing?

This is computing that leverages the internet as a tool to enable remote computers to share memory, processing, network capacity, software and other IT services on-demand. The cloud paradigm provides utility computing and allows businesses to pay for what they use.

The National Institute of Standards and Technology (NIST) defines cloud computing thus:
“Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

The basic architecture of the cloud can be described as a cloud pyramid which is composed of three segments: Cloud Infrastructure at the bottom, Cloud Platforms in the middle and Cloud Applications at the top.

At the application level of the cloud, clients are served Software as a Service (SaaS) resources and acquire access to fully functioning standard computer software.

At the platform level clients are served Platform as a Service (PaaS) resources and pass the responsibility of the creation and maintenance of the computer platform to the service provider.  However, clients have to create or install their own third-party applications.

At the infrastructure level clients are served Infrastructure as a Service (IaaS) resources and are responsible for building and maintaining their own platforms and applications.

All services provided by a cloud provider will fall into one of these three segments.

Public, Private, Community and Hybrid Clouds

When most people discuss cloud computing they generally mean the public cloud, where a provider makes computing resources publicly available over the internet using a pay-as-you-use model, with the resources shared between all subscribers.  However, there are also three other deployment models.

A private cloud is similar to a public cloud but the resources are used by one organization. This paradigm eliminates many of the cost-benefits of public cloud computing but allows for virtualization to simulate resource allocation while assuring a more secure operating environment.

A community cloud is similar to a public cloud but all the clients have shared concerns such as mission, security requirements or policy and compliance considerations. It may be managed by the organizations themselves or by a third party, and may exist on premises or off premises.  This paradigm reduces the cost-benefits of the public cloud because the costs are spread between fewer clients.

A hybrid cloud is a combination of a public and private cloud. This is becoming very popular and currently has two paradigms in use.

  • All operations are run in a private cloud with the public cloud used to increase capacity for expected and unexpected spikes in demand.
  • For the more security conscious organizations data stores containing sensitive and proprietary information are kept in the private cloud and everything else is stored in the public cloud.


There are currently no standards for cloud security. This has led to three competing organizations being formed to develop security guidelines and protocols:

  • Cloud Security Alliance
  • Open Data Center Alliance
  • Cloud Standards Customer Council

The Cloud Security Alliance is a not-for-profit organization that promotes the use of best practices for providing security assurance within cloud computing environments.

The Open Data Center Alliance is a consortium of large IT consumers intent on developing standards for interoperable cloud computing. The organization was initiated by Intel as a means to push its Cloud 2015 vision, of which the Intel Expressway Cloud Access 360 (or McAfee Cloud Identity Manager) is the first product.

The Cloud Standards Customer Council is backed by IBM and CA and is focused on the standards, security and interoperability issues around moving to the cloud.  IBM has entered the cloud identity field by releasing Tivoli Federated Identity Manager (TFIM) and the TFIM Business Gateway as its cloud identity and access management solution.

The two solutions use different approaches to identity and access management for the cloud.

The Intel approach is to use an SSO portal that allows an authenticated user to select a service with each cloud solution having its own connectors. It supports simple username/password authentication and strong authentication using one time passwords. Authentication can be done against the enterprise data store.

The IBM approach uses a federated trust model where the cloud applications grant user access based on their trust of the identity provider.

August Thirteenth

We work with both cloud service providers and clients to implement user authentication and provisioning services using industry best practices and open source software. Check out our website.

Arsenal: Season’s Review

The season is over and Arsenal limped into third place with a great deal of help from West Brom’s dodgy keeper.

The game followed a familiar pattern: Arsenal scored, West Brom equalized and then took the lead.  I was so certain that this was going to happen that I didn’t even get upset; I just wondered whether we’d have the bottle to fight back and win the game.  We did, but it was a close-run thing.

Why does Arsenal concede so many goals?  The goals conceded over the last five years don’t make for pleasant reading: 31, 37, 41, 43 and 49.  It’s obvious that our defending is getting worse, but nothing has been done to rectify the problem.

Are our defenders really bad?  Actually, I don’t think so.  Individually we have some good defenders; the problem is that collectively we have a bad defense.  Does anyone actually practice defending at Arsenal? The emphasis seems to be on attack, with our defenders regularly in advanced positions in midfield and attack.  This is borne out by the number of times our defenders are caught too far forward, allowing the opposition to launch rapid counter-attacks that often result in a goal conceded.

I’ve come to the conclusion that any team in relegation trouble hopes to play Arsenal.  The game plan is simple: drop deep and defend around the penalty box, get in the Arsenal players’ faces and don’t allow them time on the ball, have a couple of players hang around just inside the half-way line, then wait for a misplaced pass or an interception.  With all of the Arsenal team camped inside the opposition half, one quick pass up-field and Arsenal are in trouble.

It’s a tactic that teams from Manchester United to Wigan to QPR have used over the years, but we continue to fall for it and concede stupid goals. This wouldn’t happen if the defenders were told that their primary task is to defend and they were disciplined enough to know when to go forward and when to hang back.  The hope is that Steve Bould will help to instill this mentality into our defense.

As for the attacking side of Arsenal’s game, in a nutshell it’s predictable.  Passing the ball from one side of the pitch to the other is not going to break down a well-drilled team.  When playing against teams that bring most players back to defend the penalty area, it’s very difficult to find the pass to open them up, and it’s even more difficult to find the room for a strike on goal.  Alex Song is a good passer of the ball, but he’s not the type of player we need trying to open up a defense.  His problem is that he gets the ball, looks up and then makes the pass.  By the time he makes the pass, his intentions have been read by a defender and the pass is more often than not cut out.  We need a player who can see the pass before he gets the ball and has the skill to make it as soon as he gets it; you know, someone like Cesc Fabregas, except we sold him and didn’t buy a replacement.  Mikel Arteta has been badly missed in the last few games because, whereas most of our passes from midfield are easy to read, he moves the ball on quickly when he gets it, giving defenders little time to read and intercept the pass.

Talking of Alex Song, he needs to be reminded that he’s a defensive midfielder, not a play-maker.

With the right mix of players I think we would be able to open up any team.  Barcelona play a very similar way and have to overcome the same problems.  However, they vary their attacks and have Lionel Messi, who has good close control and can ghost past players to make room for himself or others.  The closest player we had who could give us this type of attacking variety was Samir Nasri, but we sold him and didn’t get a replacement.  With both Fabregas and Nasri gone, we were left with midfield players who were pretty much alike.  Very few of them can carry the ball and individually hurt the opposition, and we don’t have anyone with the vision to make the killer passes. Consequently, we struggled against teams at the bottom of the table because they were quite happy to defend deeply and hit us on the break.

The upcoming transfer window is going to be interesting.  It will give us an insight into Arsene Wenger’s ambitions for next season.  If he rests on his laurels again and doesn’t shake up the squad, then it shows that he’s content just to keep trying to qualify for the Champions League and is not interested in competing for the title or actually winning the Champions League.  I don’t include the domestic cup competitions because, with a bit of luck, anyone can win those.

If he brings in players who will give some more variety to our attack, buys a defensive midfielder who will actually play defensive midfield, and addresses our defensive attitude, then I believe we’ll be on the right track to competing with the Manchester clubs and Chelsea. Even though those three clubs will be spending a lot of money in the summer, at least two of them (Manchester United and Chelsea) currently depend heavily on players in their mid to late thirties, so a good portion of their budgets will be spent buying replacements. We have the opposite problem: we need to bring in more experience.

I honestly believe we have the core of a very good squad.  If the squad’s mix of experience and youth is improved, then I think we can compete.  However, if we follow the route of waiting for players such as Abou Diaby to come back (how many games did he play this year before getting injured again?), we will go back to our annual cycle: an injury crisis, followed by a mini revival, followed by struggling to get results from February to the end of the season.  Only this time, if the teams around us get their act together, we won’t qualify for the Champions League.

OK, next post it’s back to the day job.