June 16, 2017 at 5:56 pm #17729
With the advent of amster and the ease with which you can export/import config, I’m struggling to understand the benefits of using an external config store. My goal is to simplify my architecture and reduce maintenance. So if there is not a compelling reason to use an external config store, I would like to remove it and simply use an embedded config store.
I operate a cluster of 2 servers in an OpenAM site. I elected to use an external config store (2 actually) because I wanted to ensure both OpenAM servers had the same/shared config. I also didn’t want a single point of failure. So if the app server hosting OpenAM went down, the other OpenAM server would still continue to function with a clone of the config used by the other OpenAM instance. The config stores are on separate servers from OpenAM.
To be clear, I always assumed embedded config-stores did NOT replicate with other embedded config stores. Can someone confirm/deny this?
However, now that we have amster, in higher environments like UAT/PROD where I will never be making manual configuration changes, I’m not sure an external config store makes sense. Instead, each OpenAM server in the cluster will have its own embedded config store. I will use amster to deploy the same config to each in a matter of seconds. And that’s it. Because it’s so easy to import config into AM5, I feel the original need of a replicating external config store is no longer necessary. I can (hopefully) feel confident that both servers in the site have the exact same config.
I am most concerned about maintaining high availability. Server load is relatively low in my use case. I do NOT support self-registering OIDC clients, so I’m hopeful my config data is static once it’s imported.
Does anyone see any concerns with taking this approach? I’d love to hear your thoughts.
Andrew

June 16, 2017 at 11:56 pm #17738
The embedded configuration stores can in fact be set up to replicate to each other, but for production deployment the recommendation is to use an external configuration store which is replicated for high availability.
As you allude to in your post, not everything in the configuration store is “immutable”. Things like OIDC / OAuth dynamic clients and UMA per-user policies are good examples.
We are investigating how to support an immutable style for future releases, but I can’t offer any concrete timelines right now as to when that might happen.
If you don’t use those dynamic features, and you have a very disciplined approach to managing configuration drift, what you are proposing should work. But keep in mind it is not a configuration that we test or QA.

June 19, 2017 at 5:13 pm #17743
Thanks. That’s some helpful insight. We are not using any of those dynamic features you described, so I might simplify our deployment and use the embedded config store.
In general, I find the concept of using an LDAP directory to store configuration overkill and perhaps outdated. I’m curious if there are any plans to move to something like a Hazelcast in-memory data grid to share the configuration among servers. This feels like a really nice light touch solution that could work really well. However, there is still the issue of persisting the config once all the servers go down. This is where something like amster could be used to write the config to json files. On start-up, OpenAM would simply read the json files into the in-memory data grid.
I’m just brainstorming… I welcome anything to make the ForgeRock stack simpler. While it’s very robust, it’s a bit of a DevOps nightmare right now (my personal opinion, obviously).

June 19, 2017 at 11:12 pm #17747
I’m curious if there are any plans to move to something like a Hazelcast in-memory data grid to share the configuration among servers. This feels like a really nice light touch solution that could work really well. However, there is still the issue of persisting the config once all the servers go down.
This would really be a variant of the current “mutable” model. i.e. this is pretty much what a shared DJ configuration store gives you today. We are working to make it easier to spin up DJ instances – so this should help.
What are your thoughts on an “immutable” deployment style, where configuration is read into memory at bootstrap time, and production changes are *not* permitted? Changing configuration (adding a policy, etc) would require configuration promotion (using some CI/CD tool) from one environment to another.

June 20, 2017 at 5:59 pm #17751
I very much like the idea of an immutable configuration. That would certainly simplify the promotion and deployment process.
Just thinking this through, I’m trying to imagine how I would update the config so that it makes it into the higher environments. For simplicity’s sake, let’s assume I just have three environments: DEV, QA, and PROD.
DEV is where I do whatever I want and experiment. Once I’m satisfied, I would export my config. Next, I would need to swap my environment-specific values into the config files. (Amster supports variables to make this a little easier, but this doesn’t really save much effort since the variables in the config files are overwritten the next time I perform an export. I have already raised this issue with ForgeRock support, ticket #21830.) Let’s assume I use the config files with variables. I have a single version of my config folder in Git on master, and then three properties files, one for each of my environments. After that, I just use Amster to deploy my config to all servers in QA. Full stop. We’re done, right? Then we just repeat the Amster import for PROD.
In practice that works nicely. Most of the pain is still around externalizing the config values from what’s exported by Amster. For stuff like SAML2 metadata, it’s a bit uglier to put into flat properties files; YAML files might work better. I realize this isn’t the problem we’re trying to solve here, but thought I would mention it when discussing the full ‘deployment story’.
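For illustration, here is roughly how I picture the substitution step as a shell sketch (the %KEY% placeholder convention, file names, and directory layout are all my own invention, not something Amster defines):

```shell
#!/bin/sh
set -e

# Stand-in for a real `amster export-config` output plus a per-environment
# properties file (both fabricated here purely for illustration).
mkdir -p config-export env
printf '{"serverUrl": "%%SERVER_URL%%"}\n' > config-export/Server.json
printf 'SERVER_URL=https://qa.example.com/openam\n' > env/qa.properties

ENV=qa
OUT_DIR="build/${ENV}"
rm -rf "$OUT_DIR"
mkdir -p build
cp -r config-export "$OUT_DIR"

# Replace each %KEY% token in every exported JSON file with the value
# from the target environment's properties file.
while IFS='=' read -r key value; do
  [ -z "$key" ] && continue
  find "$OUT_DIR" -name '*.json' -exec sed -i "s|%${key}%|${value}|g" {} +
done < "env/${ENV}.properties"

cat "$OUT_DIR/Server.json"
```

The untouched export stays in Git; only the generated `build/<env>` copy is fed to the Amster import, so a re-export never clobbers the substituted values.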
Let me know when it’s done ;)

October 30, 2017 at 8:51 pm #19394
andrew.s (Participant)
Out of curiosity, is it possible today to deploy 2 OpenAM servers in a site/cluster configuration using Amster? I like this idea of not having replicating config stores; each server will just use an embedded config store. We’d deploy a total of 2 times, once to each server. As alluded to above, we don’t have any dynamic content (like self-registering OAuth2 clients).
Is this possible today with Amster? We’re trying to test this but are getting some weird errors when importing the same config onto both servers. It doesn’t seem to like how we’ve defined Servers/01.json and Servers/03.json.
Or, for site deployments do we have to stick with the more traditional deployment tools (UI or ssoadm)?
Andrew

January 4, 2018 at 1:12 pm #20358
The answer to Andrew’s last question is important to us as well. It was posted on October 30, 2017 and still no answer. Would someone kindly reply? Thanks.

January 4, 2018 at 3:22 pm #20362
The AM servers must share the CTS, must use sticky sessions, and you need to be aware of the limitations such as SAML SLO (the devops guide discusses limitations).
What Andrew is proposing to do should be possible, but it is not yet a supported configuration. i.e. You have to be *very* careful doing this.
We are still working to separate out the mutable / immutable bits from the configuration store, which will make this “cattle” style of deployment much easier.

January 4, 2018 at 6:42 pm #20365
Thanks for the quick response, Warren. I have some follow-up questions on AM 5.5.1.
- Would multiple AM installations via Amster work with an external configuration store?
- Why would clustered AM servers sharing a common CTS require sticky sessions? Wasn’t this limitation addressed in previous versions of AM with Crosstalk?
- Given a populated configuration store exists, how would additional AM instances be automatically added to the cluster? For our production clusters using OpenAM 12 & 13 we do this via scripting replicated configuration with the configurator JAR. I am assuming that the configurator and ssoadm have been deprecated now in favor of Amster.
Thanks.

January 4, 2018 at 7:25 pm #20366
1. Yes. An external configuration store is still recommended.
2. Authentication is not yet stateless and is not synced via the CTS. It must start and finish on the same server instance (hence sticky sessions, or x-talk if you have independent server instances). This will be addressed in a future release.
3. The goal is to add new AM instances to the cluster by simply cloning instances. Eventually there will no longer be unique “server” instances (it’s all just cattle…). You can do this today with amster in AM 5.5 *assuming* you are OK with the known limitations (authN must be sticky, no SAML SLO, etc.).
Hope that helps.

January 10, 2018 at 6:03 pm #20464
Your response does help, thanks.
- We are definitely planning to use an external configuration/CTS store as part of our upgrade to AM 5.5.1.
- I don’t understand why sticky sessions or crosstalk is required for authentication if the CTS store is shared and external.
I do understand why cluster-wide logout may be an issue due to local token caching. But I believe that can be solved by using aspects to implement a distributed cache like Oracle Coherence.
- We have prototyped a cluster installation by hand and are working on scripting automated deployment/configuration for our cloud-based instances. Here are the steps roughly:
- Set up external configuration and CTS stores in the same DS 5.5.0 instance
- Deploy first AM 5.5.1 war
- Use Amster’s install-openam (pointing to external config/CTS DS) command followed by an import-config command on the first AM instance
- Deploy subsequent AM 5.5.1 wars
- Use Amster’s install-openam (pointing to external config/CTS DS) command on subsequent AM instances. They automatically pick up their configuration from the already-populated store.
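In script form, the install/import steps might look something like the sketch below. The install-openam option names are from my reading of the Amster 5.5 reference and may not match your version exactly; the hostnames, ports, paths, and the ADMIN_PWD/DS_PWD environment variables are placeholders:

```shell
# First instance: install against the external config/CTS store, then import.
/opt/amster/amster <<EOF
install-openam --serverUrl https://am1.example.com:8443/openam \
  --adminPwd "${ADMIN_PWD}" --acceptLicense \
  --cfgStore dirServer --cfgStoreHost ds.example.com --cfgStorePort 1389 \
  --cfgStoreDirMgr "cn=Directory Manager" --cfgStoreDirMgrPwd "${DS_PWD}"
connect https://am1.example.com:8443/openam -k /opt/amster/amster_rsa
import-config --path /opt/deploy/config-export
:exit
EOF

# Subsequent instances: install only; they pick up the configuration
# already present in the shared external store.
/opt/amster/amster <<EOF
install-openam --serverUrl https://am2.example.com:8443/openam \
  --adminPwd "${ADMIN_PWD}" --acceptLicense \
  --cfgStore dirServer --cfgStoreHost ds.example.com --cfgStorePort 1389 \
  --cfgStoreDirMgr "cn=Directory Manager" --cfgStoreDirMgrPwd "${DS_PWD}"
:exit
EOF
```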
Does that sound OK?
Unfortunately, Amster seems to be a CLI-only tool. We need to invoke Amster commands from a bootstrap script with no human in the loop. Do you know how this can be done?
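One option we are looking at, assuming Amster’s script mode works as documented (i.e. passing a script file on the command line runs the commands non-interactively; paths here are placeholders):

```shell
# deploy.amster would contain the install-openam / import-config commands;
# passing it as an argument runs Amster in script (non-interactive) mode.
/opt/amster/amster /opt/deploy/deploy.amster
```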
Thanks!

January 10, 2018 at 6:48 pm #20469
Authentication does not currently use the CTS. The state is held in memory, and this is why the transaction needs to start and finish on the same server (hence sticky LB). This is being addressed as part of a future release.
For your step 2.e you do not need to run amster again. Clone the boot.json and the keystore.jceks from the original instance, and use that to bootstrap subsequent instances. Again – the usual caveats apply (sticky LB, no SAML SLO, etc.)

January 10, 2018 at 9:20 pm #20478
Thanks for the quick reply, Warren.
I believe there is a fundamental misunderstanding on my part regarding authentication. We simply call the CREST API /authenticate endpoint and receive a token upon success. That token is passed in the expected cookie header for subsequent requests to AM (e.g. to get user attributes). It seems to me that authentication involves a single request/response cycle, after which a token is created and stored in the CTS for all AM instances to share. So I’m wondering what state is held in memory during authentication.
I have tried cloning the boot.json and associated keystore files. I also modified the boot.json “instance” value. It did not work for me, but I will attempt it again. We were thinking that this may not even be an option for us, since our AM cluster nodes will each be on a separate PaaS instance. So we would have to store the boot.json and keystore files in some persistent location and have our bootstrap script copy them to the local file system. It may be simpler to just do an Amster install-openam. Is there an issue with this Amster-based approach?

January 10, 2018 at 9:53 pm #20483
Authentication can be multi-step (two-factor authentication, for example). In your case you are probably OK if your CREST call always completes in a single call.
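On the cloning option, a rough sketch (the file names are those found in a typical AM 5.5 install; paths are placeholders, so adjust to your own layout):

```shell
# On the original instance: bundle the bootstrap artifacts.
tar -C /home/am/openam -czf /tmp/am-bootstrap.tgz \
  boot.json keystore.jceks .storepass .keypass

# On each new instance: unpack before first startup, and make sure the
# ~/.openamcfg marker file (which points the container at this
# configuration directory) is in place as well.
tar -C /home/am/openam -xzf /tmp/am-bootstrap.tgz
```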
I think cloning will be simpler (just make sure you have the same keystores, .keystore password, boot.json, etc – and don’t forget the ~/.openamcfg), but doing it with Amster should also work.

March 6, 2018 at 8:00 am #21098
manishjain (Participant)
Hi Warren,
Would it be possible to provide more details on the cloning?
Currently we create a different configuration for the primary and secondary OpenAM instances, where we provide the hostname of the primary on the secondary instances (using ssoadm and the configurator tool).
In the Amster world we are planning to do this with Amster, but we are not sure how to handle the primary/secondary distinction. Would cloning boot.json solve this, so that we can use an Amster export (of the dev environment) to configure the various OpenAM instances, keeping the DJ config as part of the provisioning as well?
At a high level, these would be the deployment steps using Amster:
1. Baseline one of the Dev/Cert environments and take an Amster export.
2. Deploy the OpenAM WAR (with custom modules), plus the Amster tool, ssoadm, and the configurator tool, in the first OpenAM environment.
3. Run the install-openam command with environment-specific properties stored in a YAML or properties file.
4. Run the import-config command with the export taken in step 1.
5. Run a test script to do a basic health check.
Would it be possible to provide more details on how we can use cloning as a standard deployment practice with Amster?