
This topic has 6 replies, 2 voices, and was last updated 4 years, 8 months ago by Chris Ridd.

  • Author
    Posts
  • #7999
     hailaeos
    Participant

Does anyone know the size limit of a static group, i.e. how large it can be?

I am using OpenDJ 3.0.

I am trying to create a group with a 150k member list and the system bonks out.
I have tried increasing the heap size, GC settings and other memory sizes, and nothing seems to take.
I have tried using import-ldif, but all that does is wipe out userRoot, and then I have to restore from backup.

My workaround so far is to create 3 groups of 50k each, i.e. alumni01, 02, 03, and then create a group that contains the 3 groups, and that works.

Here is what my LDIF file looks like, minus 99% of the entries, but you’ll get the gist:
    dn: cn=ALUMNI,ou=DUGroups,o=du.edu,o=universityofdenver
    cn: ALUMNI
    description: 22 ALUMNI
    objectclass: top
    objectClass: groupOfNames
    member: uid=870000001,ou=people,o=du.edu,o=universityofdenver
    member: uid=870000002,ou=people,o=du.edu,o=universityofdenver
    member: uid=870000003,ou=people,o=du.edu,o=universityofdenver

    Has anyone come up against this before?

    H

    #8006
     Chris Ridd
    Participant

    Define “bonks out”!

    You can definitely create static groups with large numbers of members, but you do need to take care to do this efficiently in your LDAP client – our SDK dev guide has some suggestions here. Or are you just trying to import them via import-ldif?
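For example (purely illustrative, not necessarily what the SDK guide suggests), instead of one huge add you can create the group small and then grow it with batched modify operations, applying LDIF like the following repeatedly with ldapmodify, keeping each batch well under the connection handler’s max-request-size:
dn: cn=ALUMNI,ou=DUGroups,o=du.edu,o=universityofdenver
changetype: modify
add: member
member: uid=870000001,ou=people,o=du.edu,o=universityofdenver
member: uid=870000002,ou=people,o=du.edu,o=universityofdenver
# further member values go here, a few thousand per batch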

isMemberOf can efficiently check membership of large static groups. If you find your client applications are trying to return the contents of the group in any way, a soft reference entry cache covering the groups may be beneficial, at an increased cost in memory.
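For example, a client can check whether a user is in the group with a base search on the user entry rather than reading the group at all (host taken from your command, bind credentials below are placeholders):
ldapsearch --hostname ldap01-vld --port 636 --useSSL --trustAll \
 --bindDN "cn=Directory Manager" --bindPassword password \
 --baseDN "uid=870000001,ou=people,o=du.edu,o=universityofdenver" \
 --searchScope base "(objectClass=*)" isMemberOf

A soft reference entry cache scoped to the group entries can be set up with dsconfig roughly along these lines (double-check the property names with dsconfig interactively):
dsconfig create-entry-cache \
 --cache-name "Groups Cache" \
 --type soft-reference \
 --set enabled:true \
 --set cache-level:1 \
 --set include-filter:"(objectClass=groupOfNames)" \
 --hostname ldap01-vld --port 4444 \
 --bindDN "cn=Directory Manager" --bindPassword password \
 --trustAll --no-prompt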

However, generally speaking, static groups aren’t very useful above a certain size. You should really look into using dynamic groups instead.

    #8017
     hailaeos
    Participant

“Bonks out”:
I have a group called alumni that has 153,000 members; it’s coming from an Oracle database. When I create an alumni.ldif file:
    dn: cn=ALUMNI,ou=DUGroups,o=du.edu,o=universityofdenver
    cn: ALUMNI
    description: 22 ALUMNI
    objectclass: top
    objectClass: groupOfNames
    member: uid=870000001,ou=people,o=du.edu,o=universityofdenver
    member: uid=870000002,ou=people,o=du.edu,o=universityofdenver
    member: uid=870000003,ou=people,o=du.edu,o=universityofdenver

OpenDJ 3 won’t add it because it runs out of memory:
ldapmodify --hostName ldap01-vld --port 636 --bindDN --trustAll --useSSL --noPropertiesFile --defaultAdd --filename ALUMNI.ldif
    Processing ADD request for cn=ALUMNIX,ou=DUGroups,o=du.edu,o=universityofdenver
    ADD operation failed
    Result Code: 80 (Other)
    Additional Information: Unchecked exception during database transaction: Requested size=8298822 exceeds maximum size=4194304

I have tried increasing this size every which way: through java.properties, and also via the ds-cfg-max-request-size attribute in config.ldif. Setting it to anything other than “5 megabytes” doesn’t change anything, so…..

I decided to break it up and bring all the members into LDAP in small chunks, 5k at a time. That works until I get to around 75k, then it starts to fail. I ran ldapmodify with -v to get verbose output and got this:
    [21/02/2016:21:35:56 -0700] category=TOOLS seq=2548 severity=FINEST msg=MODIFY operation failed exception=LDAPException: MODIFY operation failed (LDAPModify.java:398 LDAPModify.java:1117 LDAPModify.java:510)
    MODIFY operation failed
    Result Code: 80 (Other)
    Additional Information: Unchecked exception during database transaction: Requested size=4320004 exceeds maximum size=4194304

So… my workaround is going to be the following:
alumni broken down into 3 groups, alumni01, 02, 03, with a master group containing the 3 groups; when I do that I can manage it through the effective use of Perl (yeah! Perl).
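In case it is useful to anyone else, the nested layout ends up looking roughly like this (the sub-group names are just my naming; only the relevant attributes shown):
dn: cn=ALUMNI,ou=DUGroups,o=du.edu,o=universityofdenver
objectClass: top
objectClass: groupOfNames
cn: ALUMNI
member: cn=ALUMNI01,ou=DUGroups,o=du.edu,o=universityofdenver
member: cn=ALUMNI02,ou=DUGroups,o=du.edu,o=universityofdenver
member: cn=ALUMNI03,ou=DUGroups,o=du.edu,o=universityofdenver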

I have searched high and low and cannot find any reference to the upper limit for entries in a static group, though I will be the first to admit I am untrained (I get training at the end of March).

I will check out the SDK dev guide. I stopped trying to use import-ldif since it would just wipe out my userRoot every time I used it. I will also look into using dynamic groups.
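It looks like import-ldif replaces the backend contents unless it is run with --append, which would explain the userRoot wipes. An append-style offline import might look like the line below (assuming the default userRoot backend ID), though I have not verified whether it avoids the same size limit:
import-ldif --backendID userRoot --append --ldifFile ALUMNI.ldif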

    h

    #8019
     Chris Ridd
    Participant

    Yes, changing the connection handler’s max-request-size property will be necessary for a large operation like this. I used 10mb for mine, and when I used a JE backend the add of a group with 150k members worked.
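For reference, that change can be made with dsconfig along these lines (the handler name, host and credentials below are just what I’d expect on a default install, adjust for yours):
dsconfig set-connection-handler-prop \
 --handler-name "LDAPS Connection Handler" \
 --set max-request-size:10mb \
 --hostname ldap01-vld --port 4444 \
 --bindDN "cn=Directory Manager" --bindPassword password \
 --trustAll --no-prompt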

    However when I used the new PDB backend, I got the same error you did. I don’t think it is “running out of memory” like you think, but it seems like you might be hitting an internal limitation in PDB. Can you please raise a bug in our JIRA for this? https://bugster.forgerock.org

    So for the time being, create your backends using JE.
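If you’re building the backend from scratch anyway, a JE one can be created with dsconfig roughly like this (backend name, base DN and credentials are assumptions based on your DNs above):
dsconfig create-backend \
 --backend-name userRoot \
 --type je \
 --set enabled:true \
 --set base-dn:o=universityofdenver \
 --hostname ldap01-vld --port 4444 \
 --bindDN "cn=Directory Manager" --bindPassword password \
 --trustAll --no-prompt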

    As for limits in static groups, there is no real theoretical upper limit which is why you can’t find one documented. The limits are practical and will vary according to how you’re accessing the groups, how you’re updating them and how frequently, and your hardware. Personally I wouldn’t create static groups with more than about 50k entries – but that’s just a “finger in the air” and absolutely not any fundamental limit.

    You should REALLY investigate using dynamic groups instead. You will get better performance and they scale much better than static groups.
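To give a concrete picture, a dynamic group is just an entry with the groupOfURLs objectclass and a memberURL whose LDAP URL selects the members. The filter below is invented; you’d use whatever attribute actually marks alumni in your people entries:
dn: cn=ALUMNI-dynamic,ou=DUGroups,o=du.edu,o=universityofdenver
objectClass: top
objectClass: groupOfURLs
cn: ALUMNI-dynamic
memberURL: ldap:///ou=people,o=du.edu,o=universityofdenver??sub?(eduPersonAffiliation=alum)

isMemberOf works with dynamic groups too, so membership checks look the same to clients.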

    #8025
     hailaeos
    Participant

I rebuilt my test server with a JE backend and increased the size to 20mb, and it brought in one group of over a million member entries, no issue.

So I have determined that the PDB backend has a limitation: it doesn’t read the connection handler value of 5mb being changed to 20mb at all, but JE does read it.

    I will put a bug in.

    Thanks for the help

    #8035
     Chris Ridd
    Participant

    The problem isn’t the connection handler value – that limit really is being applied in all cases – but rather there is a limitation in the size of values (the max size in the error is 4MB) in the PDB database.

    #8108
     Chris Ridd
    Participant
