This post discusses why cloud identity is such a difficult problem to resolve.
This blog post is the final installment in a series describing how to deploy OpenSSO to protect Oracle WebLogic resources by configuring it as a Secure Token Server.
Modify the WSS Configuration Files
The following steps describe how to configure OpenSSO WSS Agents.
- Download the openssowssagents.zip file. This file contains the OpenSSO Web Services Security Agents, which are based on JAX-WS handlers.
- Expand the zip file.
- Copy the resources/keystore.jks, resources/.keypass, and resources/.storepass files to a convenient directory. This directory is represented by @KEYSTORE_LOCATION@ in the <wssagents_unzip_location>/config/AMConfig.properties file.
- Update AMConfig.properties for StockService as follows:
##### Following properties need to be updated for OpenSSO WSS Agents #####
com.iplanet.services.debug.directory=C:/Documents and Settings/Administrator/StockService/Debug
# Security credentials to read the configuration data
##### End of properties for OpenSSO WSS Agents #####
- Update AMConfig.properties for StockClient as follows:
##### Following properties need to be updated for OpenSSO WSS Agents #####
com.iplanet.services.debug.directory=C:/Documents and Settings/Administrator/StockClient/Debug
# Security credentials to read the configuration data
##### End of properties for OpenSSO WSS Agents #####
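The property blocks above are abridged. Under the security-credentials comment, the WSS agents also need the application user name and password that the client SDK uses to authenticate to the OpenSSO server. As an illustrative sketch only (the property names are the standard OpenSSO client SDK ones, and the password value here is a placeholder, not from the original post):

```
##### Following properties need to be updated for OpenSSO WSS Agents #####
com.iplanet.services.debug.directory=C:/Documents and Settings/Administrator/StockService/Debug

# Security credentials to read the configuration data
# (the agent authenticator profile created on the OpenSSO server)
com.sun.identity.agents.app.username=agentAuth
com.iplanet.am.service.password=changeit
##### End of properties for OpenSSO WSS Agents #####
```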
Modify the StockService Application
- Expand the unsecured StockService.war file that was created previously using the following command:
jar -xvf StockService.war
- Create a lib directory under the WEB-INF directory if lib does not exist already. Then, copy all <wssagents_unzip_location>/lib/*.jar files to the WEB-INF/lib directory.
- Copy the <wssagents_unzip_location>/config/AMConfig.properties file to the WEB-INF/classes directory.
- Merge the <wssagents_unzip_location>/config/server_handlers.xml file into the existing configuration file WEB-INF/classes/com/sun/stockquote.
The files must be merged so that the following handler is the first handler in the handler chain:
- Build the secured StockService application into the StockService.war file with the following command:
jar -cvf StockService.war *
- Deploy the StockService.war file to the WebLogic container.
- Access the web service WSDL with the following URL:
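Taken together, the repackaging steps above amount to a short shell sequence. This is a sketch rather than a tested script: WSS_AGENT_HOME stands in for <wssagents_unzip_location> (its path here is an assumption), and the server_handlers.xml merge still has to be done by hand.

```shell
# Sketch of the StockService repackaging steps. Run from a scratch directory
# containing the unsecured StockService.war. WSS_AGENT_HOME is an assumed path.
repackage_stockservice() {
  WSS_AGENT_HOME=/opt/openssowssagents

  jar -xvf StockService.war                      # expand the unsecured WAR
  mkdir -p WEB-INF/lib                           # create lib if it does not exist
  cp "$WSS_AGENT_HOME"/lib/*.jar WEB-INF/lib/    # copy the agent jars
  cp "$WSS_AGENT_HOME"/config/AMConfig.properties WEB-INF/classes/
  # merge server_handlers.xml by hand so the OpenSSO handler comes first
  jar -cvf StockService.war *                    # rebuild the secured WAR
}
```

The same sequence applies to StockClient, substituting the WAR name and adding the client filter to web.xml.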
Modify the StockClient Application
- Expand the unsecured StockClient.war file that was created previously using the following command:
jar -xvf StockClient.war
- Create a lib directory under the WEB-INF directory if lib does not exist already. Then, copy all <wssagents_unzip_location>/lib/*.jar files to the WEB-INF/lib directory.
- Copy the previously updated AMConfig.properties file into the WEB-INF/classes directory.
- Merge the <wssagents_unzip_location>/config/server_handlers.xml file into the existing configuration file WEB-INF/classes/server_handlers.xml.
The following handler must be the first handler in the handler chain:
- Add the following client filter to the WEB-INF/web.xml file.
- Build the secured StockClient application into the StockClient.war file with the following command:
jar -cvf StockClient.war *
- Deploy the StockClient.war file to the WebLogic container.
- Access the web service client with a URL of the following form:
- The URL redirects to the OpenSSO Authentication service UI for end-user authentication to the default authentication module, as shown in the following figure:
- After successfully authenticating to the OpenSSO server, the browser is redirected to the StockClient application page.
- Click GetQuote to display the web service response from the StockService application.
This blog post is the third installment in a series describing how to deploy OpenSSO to protect Oracle WebLogic resources by configuring it as a Secure Token Server.
Update the Web Service Provider Profile
The following steps describe how to create the Web Service Provider profile.
- Click Web Service Provider. Under Agent, click WSP.
- Select all Security Mechanisms and set Preserve Security Headers in Message to true.
- Select the checkboxes for Is Request Signature Verified and Is Response Signed.
- Enter the Web Service End Point URL:
- Enter the key aliases, keystore location and password.
- Save the changes.
Update the Agent Authenticator Profile
This Agent Authenticator (agentAuth) acts as the application user. It authenticates WSS agents to the OpenSSO server through the OpenSSO client SDK in order to retrieve agent profiles or configurations from the OpenSSO server. To do its job, agentAuth requires permission to read the configuration information of the newly created WSC and WSP agent profiles.
Set the agentAuth read permission as follows:
- Select the Agent Authenticator tab.
- Under Agent, click agentAuth to edit it.
- Under the heading Agent Profiles allowed ensure that WSC and WSP are selected.
- Save the changes and log out of OpenSSO.
Edit the Security Token Service Configuration Parameters
Log onto OpenSSO and navigate to Configuration -> Global -> Security Token Service.
Make the following changes:
- In the Token Issuance Attributes section, change the Issuer to Self-CA and the Certificate Alias Name to opensso.
- In the Key Store section change the Private Key Alias and the Public Key Alias of Web Service (WS-Trust) Client to opensso.
- In the Token Validation Attributes section change the Trusted Issuers to cacert:Self-CA.
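Both alias settings above assume that a certificate with the alias opensso actually exists in the agent keystore copied earlier (keystore.jks). A quick sanity check with the JDK keytool is sketched below; @KEYSTORE_LOCATION@ is the directory you copied the keystore into, and keytool will prompt for the keystore password, since the .storepass file holds it in encoded form.

```shell
# Sketch: confirm the "opensso" alias exists in the agent keystore.
# @KEYSTORE_LOCATION@ is the directory chosen when copying keystore.jks.
check_opensso_alias() {
  keytool -list -keystore @KEYSTORE_LOCATION@/keystore.jks | grep -i "opensso"
}
```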
Change the Cookie c66Encode Flag
The c66Encode flag resolves a problem whereby some application servers return the wrong cookie ID if certain characters are used in the ID. Enabling c66 encoding ensures that those characters are not used in the cookie ID.
Follow these steps to turn c66 encoding on:
- Log onto the OpenSSO console and navigate to Configuration -> Servers and Sites.
- Click the Default Servers Settings button.
- Select the Advanced tab.
- Change the value of com.iplanet.am.cookie.c66Encode to true.
- Save the changes.
- Restart the OpenSSO web container.
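For scripted deployments, the same server property can in principle be set with the ssoadm command-line tool instead of the console. The sketch below assumes ssoadm is installed and configured and that amadmin's password sits in a protected file; check `ssoadm update-server-cfg` in your version's reference for the exact options.

```shell
# Hypothetical sketch: set the c66Encode server property via the ssoadm CLI.
# Assumes amadmin's password is stored in /tmp/pwd (chmod 400) -- both the
# server name "default" and the password file path are assumptions.
set_c66encode() {
  ssoadm update-server-cfg \
    -s default \
    -u amadmin \
    -f /tmp/pwd \
    -a com.iplanet.am.cookie.c66Encode=true
}
```

A restart of the OpenSSO web container is still required afterwards.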
I read with concern Mat Honan’s blog about how his Google, Apple and Twitter accounts were hacked.
My main concern, though, is the differing policies among the service providers for resetting user account passwords. As this case demonstrates, a clever hacker can exploit those differences to gain access to an account by getting information from one service provider and using it as proof of identity at another.
Take a look at Mat’s blog here to see what happened.
What can we learn from this?
The first lesson is that everyone needs to learn and employ personal security safeguards when using the internet.
Mat admits to doing the following things wrong:
- He didn’t back-up his data.
- He daisy-chained the accounts.
- He used the same e-mail address across several accounts.
- He should have created a unique recovery e-mail address that’s not associated with other services
In addition to the items that he highlighted, he should have made better use of the security options available to him. For instance, had he enabled Google's two-factor authentication, this incident might not have occurred.
With regard to the service providers, the hacker was able to take advantage of inconsistencies in the security policies between Apple and Amazon to get the information needed to access the Apple account.
This is an issue that will have to be addressed. Hopefully, the service providers will come together to create a solution.
Now that we’re halfway through the first international break it’s a good time to review what I’ve seen of Arsenal so far this season.
I’ve watched all of Arsenal’s games so far and I must admit that I’ve been impressed with the team’s defensive discipline. True, two of the teams they’ve played were more interested in not losing than in trying to win, but the Sunderland and Stoke games were the type of games that Arsenal could have lost in the past because of their defensive indiscipline.
In defense we can pair any two from Mertesacker, Vermaelen and Koscielny in the middle of the defense. I had my doubts about Gibbs and Jenkinson but it’s amazing what defensive coaching can do and both have improved. We have Sagna coming back soon but I still have a concern about Santos, great going forward but will he ever be able to defend?
In midfield we have a lot of options. Arsene Wenger is using Arteta and Diaby as the midfield platform to protect the defense and launch attacks with Diaby given license to carry the ball and Arteta playing deeper. It’s worked so far but I would prefer that at least one of those players was a true defensive midfielder. Against the better teams we may come under pressure in this area because, although Arteta is a tenacious tackler he’s not a defensive midfielder and is more effective when he plays further forward. As for Diaby, he’s showing what we’ve missed over the last two to three years with his injury problems and the more games he plays the more important he will be to the team. I just hope that he can play a full season without serious injury.
Some people have compared Cazorla negatively to Fabregas. To me they are two different types of players with Cazorla moving the ball quicker and causing more problems to the opposition because he is two footed and prepared to shoot on sight. I think that Diaby, Arteta and Cazorla will form the basis of our midfield when fit with other players coming in as required.
Up front, Podolski and Giroud are going to cause opposition defenses a lot of problems this season. True, Giroud hasn’t scored yet, but he will, and when he gets his confidence back he’ll be a real handful. The problem is that we don’t have much in reserve up front unless Chamakh discovers his mojo again and starts playing the way he did when he first joined the club.
The whole team is now working as a defensive unit with the forwards and midfield tracking back to make the team hard to break down. Against Liverpool, for instance, there were times when it seemed that Podolski was playing left back. It was heartening to see and goes to show that Arsene hasn’t been doing much work on the defensive side of the game over the last few years if Steve Bould can make such a big difference in so little time.
We have sterner tests coming but I feel optimistic that the current team is better equipped to pass them.
After a short hiatus I’m finally blogging again.
Some might think that I needed to recover from England’s unimaginative showing at the Euro’s and their inevitable exit on penalties in the quarter finals.
This is not true; I’ve been busy working.
One of the things I’ve done is to get more familiar with the available open source cloud offerings, in particular looking at OpenStack, Eucalyptus and CloudStack.
I used Martin Loschwitz’s excellent instructions here for the installation of OpenStack on a Lenovo T5010 laptop running Ubuntu 12.04 Precise Pangolin.
A couple of things to note:
- Hardware virtualization must be turned on at the BIOS level otherwise the VM fails to start with spawning errors.
- There is only one NIC on this laptop so I created a virtual adapter for the second NIC.
- Don’t forget to create the LVM volume group called nova-volumes. This is mentioned at the end of step 1 but no instructions are given. For those who need them:
dd if=/dev/zero of=MY_FILE_PATH bs=100M count=10
losetup --show -f MY_FILE_PATH
apt-get install lvm2
vgcreate nova-volumes /dev/loop0
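One caveat worth noting about the commands above: a loop device attached with losetup does not survive a reboot, so the nova-volumes volume group will appear to vanish until the backing file is re-attached. A sketch of the recovery steps (MY_FILE_PATH is the backing file created above):

```shell
# Sketch: re-attach the loopback backing file and re-activate the volume group
# after a reboot. MY_FILE_PATH is the file created with dd above.
reattach_nova_volumes() {
  losetup --show -f MY_FILE_PATH   # attaches the file and prints the loop device chosen
  vgchange -a y nova-volumes       # re-activate the nova-volumes volume group
}
```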
I also installed OpenStack on an ESXi virtual machine. There are lots of instructions for installing it on VirtualBox but very few for installing it on VMware. The issue is the requirement for hardware virtualization support.
It seems that there may be a way around this with VMware’s vSphere 5, but I didn’t want to start reconfiguring the company ESXi server, so I created an Ubuntu 12.04 virtual machine and installed DevStack by following Sam Johnston’s instructions here. DevStack is a documented shell script from Rackspace Cloud Builders that builds a complete OpenStack development environment; it installed in less than fifteen minutes.
I shall now get familiar with the APIs and try to determine how easy it is to integrate with Open Source provisioning software.
I briefly discussed cloud provisioning in a previous post and am now going to take a closer look at cloud computing and security.
What is cloud computing?
This is computing that leverages the internet as a tool to enable remote computers to share memory, processing, network capacity, software and other IT services on-demand. The cloud paradigm provides utility computing and allows businesses to pay for what they use.
The National Institute of Standards and Technology (NIST) defines cloud computing thus:
Cloud Computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
The basic architecture of the cloud can be described as a cloud pyramid which is composed of three segments: Cloud Infrastructure at the bottom, Cloud Platforms in the middle and Cloud Applications at the top.
At the application level of the cloud, clients are served Software as a Service (SaaS) resources and acquire access to fully functioning standard computer software.
At the platform level, clients are served Platform as a Service (PaaS) resources and pass responsibility for the creation and maintenance of the computing platform to the service provider. However, clients have to create or install their own third-party applications.
At the infrastructure level, clients are served Infrastructure as a Service (IaaS) resources and are responsible for building and maintaining their own platforms and applications.
All services provided by a cloud provider will fall into one of these three segments.
Public, Private, Community and Hybrid Clouds
When most people discuss cloud computing they generally mean the public cloud, where a provider makes computing resources publicly available over the internet using a pay-as-you-use model, with the resources shared between all subscribers. However, there are also three other cloud models.
A private cloud is similar to a public cloud but the resources are used by one organization. This paradigm eliminates many of the cost-benefits of public cloud computing but allows for virtualization to simulate resource allocation while assuring a more secure operating environment.
A community cloud is similar to a public cloud but all the clients have shared concerns such as mission, security requirements or policy and compliance considerations. It may be managed by the organizations or a third-party and may exist on premise or off premise. This paradigm reduces the cost-benefits of public cloud because the costs are spread between fewer clients.
A hybrid cloud is a combination of a public and private cloud. This is becoming very popular and currently has two paradigms in use.
- All operations are run in a private cloud with the public cloud used to increase capacity for expected and unexpected spikes in demand.
- For the more security conscious organizations data stores containing sensitive and proprietary information are kept in the private cloud and everything else is stored in the public cloud.
There are currently no standards for cloud security. This has led to the creation of three competing organizations formed to develop security guidelines and protocols:
- Cloud Security Alliance
- Open Data Center Alliance
- Cloud Standards Customer Council
The Cloud Security Alliance is a not-for-profit organization that promotes the use of best practices for providing security assurance within cloud computing environments.
The Open Data Center Alliance is a consortium of large IT consumers intent on developing standards for interoperable Cloud Computing. The organization was initiated by Intel as a means to push its Cloud 2015 vision of which the Intel Expressway Cloud Access 360 (or McAfee Cloud Identity Manager) is its first product.
The Cloud Standards Customer Council is backed by IBM and CA and is focused on the standards, security and interoperability issues around moving to the cloud. IBM has entered the cloud identity field by releasing the Tivoli Federated Identity Manager (TFIM) and TFIM Business Gateway as their cloud identity and access management solution.
The two solutions use different approaches to identity and access management for the cloud.
The Intel approach is to use an SSO portal that allows an authenticated user to select a service with each cloud solution having its own connectors. It supports simple username/password authentication and strong authentication using one time passwords. Authentication can be done against the enterprise data store.
The IBM approach uses a federated trust model where the cloud applications grant user access based on their trust of the identity provider.
We work with both cloud service providers and clients to implement user authentication and provisioning services using industry best practices and open source software. Check out our website.
A plan to create a standard protocol to ease provisioning of corporate users to cloud services should be approved as an IETF working group early next month.