Forum Replies Created

Viewing 15 posts - 1 through 15 (of 16 total)
  • #22287

    I think I figured it out (answered my own question). Just to provide some context, we already have a small web app that acts as the front end the users see when they authenticate. It communicates with OpenAM on the back end over REST.

    We utilize the OAuth2/OIDC auth module to enable ForgeRock to act as a relying party and federate authentication to a downstream IdP (any IdP that supports OIDC). The trickiest part was figuring out how to get OpenAM to send the authorization_code back to my Login app instead of to OpenAM itself. We did this by modifying the ORIG_URL cookie that OpenAM sent back to my Login app when it started the auth flow (with a back-channel POST), changing its value to the URL of my Login app (/federatedComplete). Once my Login app receives the authorization code (step 10), it completes a back-channel POST to OpenAM to finish the authentication process and create an OpenAM session.

    Whether you require the user to have a profile in your user data source (OpenDJ) depends on your realm settings. Since we use SAML2 in this environment, we do require it, which means a user is auto-created in OpenDJ if one does not already exist. But OpenAM/OpenDJ never see the actual password the user used to authenticate with the federated IdP.

    Here are the high-level steps. A lot of implementation details were left out, but hopefully this helps others get started if they have a similar use case.

    1. User visits relying-party app.
    2. Relying party starts auth flow with FR (SAML2 or OIDC).
    3. FR redirects user to Login app to authenticate.
    4. User enters only their username (email) in the Login app.
    5. The login app decides if the user can/should authenticate with ForgeRock or some other IdP (like AD, or an external client IdP).
    5a. If the Login app decides the user should authenticate with ForgeRock, then the user enters their password in the Login app.
    5b. The user completes the normal authentication process with ForgeRock to create a ForgeRock session (and iPlanetDirectoryPro cookie).
    5c. Skip to step 13.
    6. The Login app starts the OIDC auth flow with the federated IdP, indirectly. It does this through a back-channel REST call to ForgeRock; the Login app doesn’t speak to the federated IdP directly. ForgeRock has an OIDC Authentication Module for each registered IdP. The Login app first requests the federated auth URL.
    7. ForgeRock starts an auth session for the OIDC Auth Module and returns the authentication URL for the federated IdP.
    8. The Login app redirects to the federated auth URL. This essentially starts another OIDC flow, this time between ForgeRock and the other IdP. But now ForgeRock is the relying party wanting identity information from the federated IdP. The redirect_uri registered with the federated IdP points back to ForgeRock.
    9. The user authenticates with the federated IdP. Upon success, it redirects to ForgeRock with the OIDC authorization_code.
    10. ForgeRock redirects to the Login app’s /federatedComplete endpoint to complete the OIDC flow with the OIDC Auth Module.
    11. The Login app performs a back-channel REST call to ForgeRock with the authorization_code (and related auth cookies set in step 7) to create a ForgeRock session.
    12. ForgeRock creates a session and returns the session ID.
    13. The Login app creates a cookie from the ForgeRock session ID and returns it to the browser as the iPlanetDirectoryPro cookie.
    14. The Login app completes a final redirect back to ForgeRock to complete the OIDC flow initiated by the relying-party in step 2.
    15. ForgeRock returns the authorization_code to the relying-party (for the first OIDC flow).
    16. The relying party completes the OIDC flow to exchange the authorization_code for an id_token.
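The steps above can be sketched in code. This is a hypothetical illustration of step 11, the Login app's back-channel call that trades the authorization_code for an OpenAM session; the endpoint path, parameter names, and cookie handling are assumptions and will vary by OpenAM version and module configuration:

```python
from urllib.parse import urlencode

def build_session_request(openam_base, auth_code, state, auth_cookies):
    """Builds the back-channel request that trades the authorization_code
    (received at /federatedComplete in step 10) for an OpenAM session.
    Endpoint and parameter names are illustrative assumptions."""
    params = {
        "code": auth_code,   # authorization_code from the federated IdP
        "state": state,      # opaque state value echoed back by the IdP
    }
    return {
        "method": "POST",
        "url": f"{openam_base}/json/authenticate?{urlencode(params)}",
        # Auth cookies captured in step 7 tie this call to the pending
        # OIDC Auth Module session inside OpenAM.
        "cookies": dict(auth_cookies),
    }

req = build_session_request(
    "https://openam.example.com/openam",
    "abc123", "xyz", {"authId": "eyJ..."})
```

The returned session ID from this call is what the Login app wraps in the iPlanetDirectoryPro cookie in step 13.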

    #21534

    Thanks Peter. We use the authorize flow today. This gives me hope that this is possible using the simpler OAuth2 protocols. The SAML2 proxy config seems pretty complicated, and it makes my head hurt thinking about supporting it across 4 environments with multiple IdPs. I would love a simpler solution.

    After staring at the documentation for a few hours, I’m unfortunately still not seeing an obvious path from the OAuth2 auth modules to mimicking the IdP proxy flow. Since OpenID Connect is so straightforward, it almost seems easier to just do this in a simple web app, bypassing OpenAM completely. But before I try to reinvent the wheel, I was hoping you would be so generous as to outline, at a high level, how you imagine OpenAM could do this? I have my own Login site/UI.
    I realize this is asking a lot, so I understand if it’s beyond the scope of this community forum.

    Questions I have:
    * How would OpenAM proxy the OIDC authorization request to the downstream IdP after an OIDC authorization request comes in to OpenAM?
    * Along those lines, how would we create/mimic IdP Finder functionality? I could probably do this in my UI app after the user enters their username, but somehow I need to send this IdP info to OpenAM to start the OIDC authorization request to that IdP.
    * How does OpenAM translate the id_token received from the downstream IdP to an OpenAM id_token? Is this a custom OIDC Claims script?

    Right now I’m just trying to understand how much of this could be accomplished in OpenAM versus in my Login app (which is currently registered in the OAuth2 Provider ‘Custom Login URL Template’).

    Any help is appreciated,
    Andrew

    #21491

    I’d like to re-ask my earlier question which was never answered:

    Is there an equivalent feature to provide IdP Proxy in OpenAM using OpenID Connect instead of SAML2?

    Andrew

    #19394

    Out of curiosity, is it possible today to deploy 2 OpenAM servers in a site/cluster configuration using Amster? I like the idea of not replicating config stores, so each server would just use an embedded config store. We’d deploy a total of 2 times, once to each server. As alluded to above, we don’t have any dynamic content (like self-registering OAuth2 clients).

    Is this possible today with Amster? We’re trying to test this but getting some weird errors when importing the same config onto both servers. It doesn’t seem to like how we’ve defined Servers/01.json and Servers/03.json.

    Or, for site deployments do we have to stick with the more traditional deployment tools (UI or ssoadm)?

    Andrew

    #19063

    Here is the article on the different password schemes: https://backstage.forgerock.com/knowledge/kb/article/a44757687

    But that’s different from the process/dependencies needed to build your own password scheme plugin.

    #19007

    Is there an equivalent feature in OpenAM using OpenID Connect instead of SAML2?

    #18263

    I think we found a solution to our logout problems. Instead of using 2 IdPs, we stuck with a single IdP in our circle of trust. We then configured 2 authentication contexts. Each auth context specified the auth chain to use and the minimum auth level it would accept. As mentioned above, the LDAP module sets the highest auth level (compared to the Persistent Cookie module). Finally, we configured the extended metadata of each SP to use a specific authentication context when authenticating.

    This way, when an SP initiates single sign-on, it calls our single IdP and authenticates through the appropriate auth chain. The other SP uses the SAME IdP, but invokes a different auth chain through a different auth context.

    And finally, when SLO is initiated, it just works because all SPs were authenticated by the same IdP (in the same COT).

    #18254

    So we have the authentication part working beautifully. We are using two IdPs in a single circle of trust, but they are configured to use different auth chains and have different minimum auth levels. The IdP that uses the persistent cookie has a lower required auth level than the one that requires full authentication (username/password). So the Persistent Cookie module has an auth level of 0 (and is sufficient). The LDAP auth module, next in the chain, has an auth level of 5 but is marked as optional. In the normal auth chain the LDAP module is marked as required and also has an auth level of 5. If a session is created with an auth level of 5, the user can log into any SP. But if a user only authenticates with the persistent cookie, only the auth chain for app B will allow the user to proceed.

    That’s all great, but now we have logout issues. If I have logged into both app A and app B, then when I attempt an SP-initiated single logout, the logout request that OpenAM eventually sends to the other SP fails because it comes from an IdP that the other app doesn’t recognize (a different IdP entity name/issuer than what the SP imported from the other IdP’s metadata).

    Is there a way to fix this? I’d settle for just having the SP and OpenAM sessions ended while the other SPs live on until their app sessions end. But I can’t figure out how to configure this with OpenAM. If an SP sends a logout request, then OpenAM attempts to send a logout request to all SPs, even if they’re in a different circle of trust or were authenticated by a different IdP and auth chain.

    Thoughts? I feel like we’re really close to solving this but I’m not seeing many options to control the logout process.

    #18251

    Thanks, we’ll look into that. One issue we’re having with this setup is that our single-logout process doesn’t seem to work across 2 IdPs. Somewhat surprisingly, it seems logging out of one SP causes OpenAM to send logout requests to both SPs, even though they are using different IdPs.

    #17751

    I very much like the idea of an immutable configuration. That would certainly simplify the promotion and deployment process.

    Just thinking this through, I’m trying to imagine how I would update the config so that it makes it into the higher environments. For simplicity’s sake, let’s assume I have just three environments: DEV, QA, and PROD.

    DEV is where I do whatever I want and experiment. Once I’m satisfied, I export my config. Next, I need to swap my environment-specific values into the config files. (Amster supports variables to make this a little easier, but that doesn’t really save much effort since the variables in the config files are overwritten the next time I perform an export. I have already raised this issue with ForgeRock support, ticket #21830.) Let’s assume I use the config files with variables. I have a single version of my config folder in Git on master, and then three properties files, one for each of my environments. After that, I just use Amster to deploy my config to all servers in QA. Full stop. We’re done, right? Then we just repeat the Amster import for PROD.

    In practice that works nicely. Most of the pain is still around externalizing the config values from what’s exported by Amster. For things like SAML2 metadata, it’s a bit uglier to put into flat properties files; YAML files might work better. I realize this isn’t the problem we’re trying to solve here, but I thought I would mention it when discussing the full ‘deployment story’.
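The per-environment substitution step described above could look roughly like this. A minimal sketch, assuming a flat key=value properties file per environment and Amster-style ${variable} placeholders in the exported config; the file layout and variable names are illustrative, not Amster's actual conventions:

```python
import re

def load_properties(text):
    """Parses simple key=value lines, skipping blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

def substitute(config_text, props):
    """Replaces every ${name} placeholder with its property value,
    leaving unknown placeholders untouched so the gap stays visible."""
    def repl(match):
        return props.get(match.group(1), match.group(0))
    return re.sub(r"\$\{([^}]+)\}", repl, config_text)

qa_props = load_properties("openam.url=https://qa.example.com/openam")
config = '{"issuer": "${openam.url}/oauth2"}'
print(substitute(config, qa_props))
# -> {"issuer": "https://qa.example.com/openam/oauth2"}
```

Running this over the exported config folder once per environment, just before the Amster import, keeps a single config version in Git with only the properties files differing between DEV, QA, and PROD.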

    Let me know when it’s done ;)

    #17743

    Thanks. That’s some helpful insight. We are not using any of those dynamic features you described, so I might simplify our deployment and use the embedded config store.

    In general, I find the concept of using an LDAP directory to store configuration overkill and perhaps outdated. I’m curious whether there are any plans to move to something like a Hazelcast in-memory data grid to share the configuration among servers. That feels like a light-touch solution that could work really well. However, there is still the issue of persisting the config once all the servers go down. This is where something like Amster could be used to write the config to JSON files. On start-up, OpenAM would simply read the JSON files into the in-memory data grid.
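A toy version of that start-up idea, purely for illustration: configuration persisted as JSON files on disk is read into one in-memory map at boot, standing in for a shared data grid like Hazelcast. None of this is OpenAM code; the directory layout is made up:

```python
import json
from pathlib import Path

def load_config(config_dir):
    """Reads every *.json file under config_dir into a single in-memory
    dict, keyed by the file's relative path (e.g. 'realms/root.json').
    A real deployment would publish this map into the shared grid."""
    store = {}
    root = Path(config_dir)
    for path in sorted(root.rglob("*.json")):
        store[path.relative_to(root).as_posix()] = json.loads(path.read_text())
    return store
```

On shutdown, the inverse operation would serialize the map back to JSON files, giving a plain-text config that versions cleanly in Git.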

    I’m just brainstorming. I welcome anything that makes the ForgeRock stack simpler. While it’s very robust, it’s a bit of a DevOps nightmare right now (my personal opinion, obviously).

    #16340

    Is a more accurate release date available now?

    #16269

    Hi Rondini,

    In your post about a custom login site in a SAML2 flow…

    You just need to remember to set up the right cookie (iPlanetDirectoryPro) and redirect to the IdP endpoint, like “../openam/SSOPOST/metaAlias/idp” or “../openam/SSORedirect/metaAlias/idp”, after authentication, and leave the rest of the service to OpenAM.

    Are you suggesting that we have the SP first redirect the user to our custom login page to authenticate? If so, do we first need to capture the SAML2.0 parameters in the POST body, like ‘RelayState’ and ‘SAMLRequest’? I assume we do, so that we can later replay them to /openam/SSOPOST/metaAlias/idp after successful authentication; the custom login site would temporarily store them in session state. In a sense, we’re asking the user to log into our custom page, independent of the SAML2.0 process, and then resuming the SAML2 handshake after successful authentication. Does this sound right?

    I’m also trying to figure out what happens if the iPlanetDirectoryPro cookie is still in the browser when the user visits the custom login site. Should we attempt to validate that first in order to avoid an unnecessary authentication? Does the /openam/SSOPOST/metaAlias/idp endpoint do anything with that session cookie? Any guidance is appreciated.
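The capture-then-replay idea discussed above could be sketched like this: the custom login page stashes SAMLRequest/RelayState from the SP's POST, and after authentication renders an auto-submitting form back to the IdP endpoint. The metaAlias path, and the use of a dict for session state, are assumptions for illustration:

```python
from html import escape

IDP_ENDPOINT = "/openam/SSOPOST/metaAlias/idp"  # assumed metaAlias path

def capture_saml_params(form):
    """Pulls the SAML2 POST parameters out of the SP's request so they can
    be held in session state while the user authenticates."""
    return {k: form[k] for k in ("SAMLRequest", "RelayState") if k in form}

def replay_form(saved, idp_url=IDP_ENDPOINT):
    """After successful login, renders an auto-submitting HTML form that
    replays the captured parameters to the IdP endpoint, resuming SAML2."""
    fields = "".join(
        f'<input type="hidden" name="{escape(k)}" value="{escape(v)}"/>'
        for k, v in saved.items())
    return (f'<form method="post" action="{escape(idp_url)}">{fields}</form>'
            '<script>document.forms[0].submit();</script>')

saved = capture_saml_params({"SAMLRequest": "PHNhbWw+...", "RelayState": "/app"})
page = replay_form(saved)
```

The iPlanetDirectoryPro cookie set during authentication rides along on the browser's POST to the IdP endpoint, which is what lets OpenAM associate the replayed request with the new session.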

    #15508

    I found this post very helpful. I need to do something similar. I’m a little confused about where to find the required jars to build my extension. Ludo, you mention this can be done through Maven for newer versions of OpenDJ. Is this the repository we would use to find the dependencies: https://maven.forgerock.org/repo ?

    If there is any draft documentation about building a custom password scheme, I would appreciate it. We’re looking to extend SaltedSHA256PasswordStorageSchemeCfg to import credentials from an outside system that used a slightly different hashing scheme.

    Thanks,
    Andrew

    #12170

    Thanks for the insight Peter. Given the disclaimer in the documentation:

    it is recommended that you keep the Top Level administrator account name to amadmin.

    I think I’ll keep the default name for the time being.

    Andrew
