Forum Replies Created

Viewing 15 posts - 1 through 15 (of 15 total)
    #6296
     Matt Mencel
    Participant

    Moved my connection info to that datasource file. Works. Brilliant! :)

    #6294
     Matt Mencel
    Participant

    Ah….there’s a new conf file there called datasource.jdbc-default.json. Looks like that’s where I need to be looking.

    #6188
     Matt Mencel
    Participant

    …and maybe solved…

    https://backstage.forgerock.com/#!/docs/openidm/3.1.0/integrators-guide/chap-synchronization#recon-query-optimization

    Reconciliation of 13500 records takes 1 minute and 20 seconds. That’s MUCH better!

    I tried this earlier but didn’t get it to work….but then I realized that I was using the CSV source attribute “tagnum” instead of “__NAME__” which OpenIDM uses for the primary naming attribute. So now the query optimization works.

    From what I understand it loads everything on source/target into memory and then does the reconciliation.

    In my mapping I’ve added these….

    "targetQuery" : {
                    "_queryFilter" : "(tagnum sw \"\")"
    },

    "sourceQuery" : {
                    "_queryFilter" : "(__NAME__ sw \"\")"
    }

    #6185
     Matt Mencel
    Participant

    I didn’t have taskThreads specified in the config for the managed object, so I believe it used the default of 10, if I read the docs correctly. I decided to specify it and set it to 20.
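
    For reference, here’s roughly where taskThreads ended up for me — in my case it went in the mapping in conf/sync.json (the mapping and connector names here are just my sketch):

    ```json
    {
        "name" : "systemMicrosAccounts_managedMicros",
        "source" : "system/MICROS/account",
        "target" : "managed/micros",
        "taskThreads" : 20
    }
    ```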

    I created the table and set up the explicitMapping and all the objectToColumn maps. When I reconcile I can see data filling the new table in MySQL, so that works.

    It’s still pretty slow. It seems like the further along the reconciliation gets, the slower the inserts get. It seems like what you said initially, Jake: maybe it’s verifying each insert to make sure it’s unique, and the further into the recon it gets, the slower it gets because there’s more data to search through.

    I don’t think I have any “unique” policy set on this managed object, is it on by default? Where would I see if that is enabled?

    Does creating an INDEX on the MySQL attributes help (or hurt)?
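
    In case I try it, this is the sketch I have in mind, using the table and columns from my explicit mapping (names assumed — and worth benchmarking either way, since indexes speed up lookups but add cost to every insert):

    ```sql
    -- Secondary indexes on the columns the recon queries filter on.
    CREATE INDEX idx_micros_tagnum ON micros (tagnum);
    CREATE INDEX idx_micros_serial ON micros (serial);
    ```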

    Thanks,
    Matt

    #6174
     Matt Mencel
    Participant

    So I can add an explicit mapping to repo.jdbc.json, something like this…

    "explicitMapping" : {
                "managed/micros" : {
                    "table" : "micros",
                    "objectToColumn" : {
                        "_id" : "objectid",
                        "_rev" : "rev",
                        "tagnum" : "tagnum",
                        "serial" : "serial"
                    }
                },
    

    Do I need to specify every attribute from the CSV file? Or just the ones I might want to search/index on?

    Also, what then generates this table in the MySQL database? Do I have to do that too, or does this explicit mapping tell OpenIDM to go do that for me?
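
    EDIT: from what I can tell, the integrator’s guide expects you to create the explicit table yourself. Here’s the kind of MySQL table I’d sketch to match the objectToColumn map above — column types are my guess, modeled on the stock openidm tables:

    ```sql
    CREATE TABLE micros (
        objectid VARCHAR(38)  NOT NULL,   -- holds _id
        rev      VARCHAR(38)  NOT NULL,   -- holds _rev
        tagnum   VARCHAR(255),
        serial   VARCHAR(255),
        PRIMARY KEY (objectid)
    );
    ```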

    Do I have to customize any of the SQL queries that are up in the explicit tables section?

    "explicitTables" : {
                "get-users-of-direct-role" : "select objectid from ${_dbSchema}.${_table} where find_in_set(${role},replace(substring(roles,2,(length(roles) - 2)),'\"',''))",
                "query-all-ids" : "SELECT objectid FROM ${_dbSchema}.${_table}",
                "query-all-ids-count" : "SELECT COUNT(objectid) AS total FROM ${_dbSchema}.${_mainTable} obj INNER JOIN ${_dbSchema}.objecttypes objtype ON obj.objecttypes_id = objtype.id WHERE objtype.objecttype = ${_resource}",
                "for-internalcredentials" : "select * FROM ${_dbSchema}.${_table} WHERE objectid = ${uid}",
    #6172
     Matt Mencel
    Participant

    OK…sorry about the confusion, I’m pretty new to this.

    I’m using MySQL. I looked through the repo.jdbc.json file and I’ve not created anything in there for my “MICROS” managed object. I see some stuff for managed/user and some defaults.

          "genericMapping" : {
                "managed/*" : {
                    "mainTable" : "managedobjects",
                    "propertiesTable" : "managedobjectproperties",
                    "searchableDefault" : true
                },
                "managed/user" : {
                    "mainTable" : "managedobjects",
                    "propertiesTable" : "managedobjectproperties",
                    "searchableDefault" : false,
                    "properties" : {
                        "/userName" : {
                            "searchable" : true
                        },
                        "/givenName" : {
                            "searchable" : true
                        },
                        "/sn" : {
                            "searchable" : true
                        },
                        "/mail" : {
                            "searchable" : true
                        },
                        "/accountStatus" : {
                            "searchable" : true
                        },
                        "/roles" : {
                            "searchable" : true
                        },
                        "/sunset" : {
                            "searchable" : true
                        }
                    }
                },
    

    So if anything, my managed/micros object is just using whatever the defaults are. There’s apparently more to creating a new managed object than just adding it through the UI. I have to set up the MySQL table info in the repo.jdbc.json file too?
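
    If I stay generic, I’m guessing managed/micros could get its own entry like managed/user has, so only the properties I actually search on are indexed — a sketch (property name from my CSV):

    ```json
    "managed/micros" : {
        "mainTable" : "managedobjects",
        "propertiesTable" : "managedobjectproperties",
        "searchableDefault" : false,
        "properties" : {
            "/tagnum" : {
                "searchable" : true
            }
        }
    }
    ```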

    EDIT: I’m reading about explicit mappings now in the integrator’s guide. That’s probably what I need to do to speed things up…..instead of depending on the Generic Mapping where all the data is stuffed into a single field.

    Matt

    #6170
     Matt Mencel
    Participant

    It’s actually a new managed object called MICROS. The CSV file I’m testing with is inventory data. The __NAME__ field is ‘tagnum’. I found that this is set in the file provisioner.openicf-MICROS.json.

    "uniqueAttribute" : "tagnum"

    So I should remove that? Or is there a way to ignore that just during full reconciliation?

    Thanks,
    Matt

    #6167
     Matt Mencel
    Participant

    Forgot to mention that… 3.1.0

    #5661
     Matt Mencel
    Participant

    I removed my custom schema files from the config/schema directory and restarted OpenDJ. Reconnected the Control Panel and I still get this error.

    Matt

    #5600
     Matt Mencel
    Participant

    Thanks!

    #5554
     Matt Mencel
    Participant

    Thanks Chris, I’ll give that a try. I’m assuming the ‘delete target’ in this case doesn’t actually mean to delete the attribute in the target (Managed User), just to delete the update action for that target attribute?

    Matt

    #5552
     Matt Mencel
    Participant

    Hi Chris,

    The CSV is authoritative for most of the standard attributes like first/middle/last names, address and phone, etc. So I need to continue to be able to feed updates into Managed User from that connection.

    LDAP will be authoritative for username, email, and a few other attributes. LDAP will have two mappings to Managed User, one as target and one as source.

    CSV -> Managed User
    LDAP <-> Managed User

    So I still need some kind of conditional on a couple attributes from the CSV source so they don’t overwrite the values that LDAP is feeding into Managed User.
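
    Something like this is what I’m picturing, if property-level conditions work the way the integrator’s guide describes — the attribute and the ldapManaged flag are made up for the example:

    ```json
    {
        "source" : "mail",
        "target" : "mail",
        "condition" : {
            "type" : "text/javascript",
            "source" : "object.ldapManaged !== true"
        }
    }
    ```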

    Thanks,
    Matt

    #1870
     Matt Mencel
    Participant

    First attempt at an OpenIDM Chef cookbook. I don’t think it’s complete yet, but it should get a basic OpenIDM instance set up with a MySQL DB. My LWRP for editing repo.jdbc.json seems to be working too, and I think it’s thread safe as long as the OpenIDM service pays attention to file locks.

    https://github.com/MattMencel/chef-openidm

    #1821
     Matt Mencel
    Participant

    Hi Tim,

    I’m attempting to write an “Edit JSON” LWRP that will modify only the few entries necessary and leave the rest of the file alone. I’m using Ruby’s flock to lock the file and prevent write collisions.

    Haven’t got it working in Chef yet, but it works running the code manually from the command line.
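
    For what it’s worth, the core of it in plain Ruby looks roughly like this (outside the LWRP wrapper — the file name and keys below are placeholders):

    ```ruby
    require 'json'

    # Edit a JSON config file in place while holding an exclusive lock,
    # so two concurrent writers can't collide mid-write.
    def edit_json(path)
      File.open(path, File::RDWR) do |f|
        f.flock(File::LOCK_EX)            # block until we own the lock
        data = JSON.parse(f.read)         # load the whole config
        yield data                        # caller changes only what it needs
        f.rewind
        f.write(JSON.pretty_generate(data))
        f.flush
        f.truncate(f.pos)                 # drop leftover bytes from the old file
      end                                 # lock released on close
    end

    # Usage: tweak one entry and leave everything else alone, e.g.
    #   edit_json('repo.jdbc.json') { |cfg| cfg['maxBatchSize'] = 100 }
    ```

    The exclusive flock just means a second writer blocks until the first finishes — though of course it only helps if every writer (OpenIDM included) honors the lock.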

    Once I’ve got that working I may publish my OpenIDM cookbook to the Chef Supermarket….or minimally to my Github account so someone better than me can take it and make it better. :)

    Matt

    #1591
     Matt Mencel
    Participant

    ludo,

    I wasn’t aware of the password policy differences, but I haven’t really thought that far ahead yet. Not a big deal though as I could just replicate the old pw policies in OpenDJ through whatever method is available there.

    The passwords themselves….I think I have another way to load those if they don’t sync.

    Thanks for the link to the lsc-project….that looks like it might do the trick for syncing data.

    Will there be any gotchas to watch out for when exporting/importing the schema? We also make heavy use of ACLs in Sun DS so I’ll have to figure that out too…. I can tell already this is going to be lots of fun. :)

    Matt
