Identity Manager JBoss application server stops processing when a cluster node goes down

Document ID:  TEC1382052
Last Modified Date:  08/09/2017

Products

  • CA Identity Manager

Releases

  • CA Identity Manager:Release:12.6.5

Components

  • IdentityMinder(Identity Manager):IDMGR
Problem:

I currently have a 3-node JBoss cluster using unicast. When one node goes down, the other 2 nodes cannot process data during that time frame.

"Error Messages: destination address = JMS.Queue.Iam.IM.JMS.queue run time status detail queue is blocked"

Environment:
Identity Manager 12.6 SP5, JBoss 6.3
Resolution:

To resolve this, we need to make adjustments to the standalone-full-ha.xml file (typically located under JBOSS_HOME/standalone/configuration) on each node.

1) Adjust the following lines

    <property name="num_initial_members">2</property>
    <property name="port_range">1</property>
    <property name="timeout">2000</property>

to match these values:

    <property name="num_initial_members">3</property>
    <property name="port_range">0</property>
    <property name="timeout">12000</property>
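For context, these properties normally live inside the JGroups discovery protocol of the tcp stack used for unicast clustering. The sketch below assumes a TCPPING-based stack; node1/node2/node3 and the jgroups-tcp socket binding are placeholders for your environment, not values from this document:

    <stack name="tcp">
        <transport type="TCP" socket-binding="jgroups-tcp"/>
        <!-- TCPPING discovery: initial_hosts must list all three cluster members (placeholder hostnames) -->
        <protocol type="TCPPING">
            <property name="initial_hosts">node1[7600],node2[7600],node3[7600]</property>
            <property name="num_initial_members">3</property>
            <property name="port_range">0</property>
            <property name="timeout">12000</property>
        </protocol>
        <!-- the remaining protocols of the stack are left unchanged -->
    </stack>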

2) Give the hornetq-server elements meaningful names, such as “active-node-pair-A”:

 

<hornetq-server name="active-node-pair-A">

3) The active node should look like the following. Some of these lines must be added on top of what is there by default, so it is important to verify this section and make sure all of the lines below are present.

<hornetq-server name="active-node-pair-A">
    <persistence-enabled>true</persistence-enabled>
    <security-enabled>false</security-enabled>
    <cluster-user>guest</cluster-user>
    <cluster-password>guest</cluster-password>
    <backup>false</backup>
    <allow-failback>true</allow-failback>
    <failover-on-shutdown>true</failover-on-shutdown>
    <shared-store>false</shared-store>
    <journal-type>NIO</journal-type>
    <journal-file-size>102400</journal-file-size>
    <journal-min-files>2</journal-min-files>
    <check-for-live-server>true</check-for-live-server>
    <backup-group-name>pair-A</backup-group-name>
    <paging-directory path="live-hornetq-pair-A/paging"/>
    <bindings-directory path="live-hornetq-pair-A/bindings"/>
    <journal-directory path="live-hornetq-pair-A/journal"/>
    <large-messages-directory path="live-hornetq-pair-A/large-messages"/>

4) Validate that the connectors look as follows. You have to adjust the “netty” connector in this section so that its socket-binding references the new “messaging-pair-A” binding rather than the default “messaging” binding. Notice the in-vm server-id is 0; this value increases by 1 for each backup hornetq-server and has to be adjusted accordingly (see the sketch after step 5).

<connectors>
    <netty-connector name="netty" socket-binding="messaging-pair-A"/>
    <netty-connector name="netty-throughput" socket-binding="messaging-throughput">
        <param key="batch-delay" value="50"/>
    </netty-connector>
    <in-vm-connector name="in-vm" server-id="0"/>
</connectors>

5) Same as step 4, but validate this for the acceptors:

<acceptors>
    <netty-acceptor name="netty" socket-binding="messaging-pair-A"/>
    <netty-acceptor name="netty-throughput" socket-binding="messaging-throughput">
        <param key="batch-delay" value="50"/>
        <param key="direct-deliver" value="false"/>
    </netty-acceptor>
    <in-vm-acceptor name="in-vm" server-id="0"/>
</acceptors>
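To illustrate the server-id rule from step 4, here is a minimal sketch of the same two sections on the first backup hornetq-server defined on this node (see step 8), assuming it serves pair-B; the throughput entries are omitted for brevity:

    <connectors>
        <netty-connector name="netty" socket-binding="messaging-pair-B"/>
        <!-- in-vm server-id incremented by 1 relative to the live server on this node -->
        <in-vm-connector name="in-vm" server-id="1"/>
    </connectors>

    <acceptors>
        <netty-acceptor name="netty" socket-binding="messaging-pair-B"/>
        <in-vm-acceptor name="in-vm" server-id="1"/>
    </acceptors>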

6) Remove any additional broadcast groups and discovery groups that are configured, leaving a single one of each. You will also need to set the jgroups-channel to the same value across the board so that the nodes do not have any communication issues.

<broadcast-groups>
    <broadcast-group name="bg-group">
        <jgroups-stack>tcp</jgroups-stack>
        <jgroups-channel>TestingClusterChannel</jgroups-channel>
        <broadcast-period>5000</broadcast-period>
        <connector-ref>netty</connector-ref>
    </broadcast-group>
</broadcast-groups>

<discovery-groups>
    <discovery-group name="dg-group">
        <jgroups-stack>tcp</jgroups-stack>
        <jgroups-channel>TestingClusterChannel</jgroups-channel>
        <refresh-timeout>10000</refresh-timeout>
    </discovery-group>
</discovery-groups>

7) Under cluster-connections, make sure the discovery group reference matches the name of your actual discovery group. See below:

<cluster-connections>
    <cluster-connection name="my-cluster">
        <address>jms</address>
        <connector-ref>netty</connector-ref>
        <discovery-group-ref discovery-group-name="dg-group"/>
    </cluster-connection>
</cluster-connections>

8) Configure backup hornetq-servers for the remaining 2 nodes of the cluster, using the same naming scheme on all nodes. If the primary on a node is pair-A, the backups will be:

    <hornetq-server name="backup-node-pair-B">

    <hornetq-server name="backup-node-pair-C">

If the primary is node B, then the backups will be:

    <hornetq-server name="backup-node-pair-A">

    <hornetq-server name="backup-node-pair-C">

A sketch of a complete backup definition follows below.
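The contents of each backup definition are not spelled out above. As a rough illustration, a backup server mirrors the live server from step 3 but with <backup>true</backup>, a backup-group-name matching the pair it protects, and its own set of directories. A minimal sketch for backup-node-pair-B (verify every line against your live configuration before using it):

    <hornetq-server name="backup-node-pair-B">
        <persistence-enabled>true</persistence-enabled>
        <security-enabled>false</security-enabled>
        <cluster-user>guest</cluster-user>
        <cluster-password>guest</cluster-password>
        <!-- backup=true marks this server as the standby for pair-B -->
        <backup>true</backup>
        <allow-failback>true</allow-failback>
        <failover-on-shutdown>true</failover-on-shutdown>
        <shared-store>false</shared-store>
        <journal-type>NIO</journal-type>
        <backup-group-name>pair-B</backup-group-name>
        <!-- directory names are illustrative; use a path layout consistent with step 3 -->
        <paging-directory path="backup-hornetq-pair-B/paging"/>
        <bindings-directory path="backup-hornetq-pair-B/bindings"/>
        <journal-directory path="backup-hornetq-pair-B/journal"/>
        <large-messages-directory path="backup-hornetq-pair-B/large-messages"/>

Its connectors and acceptors follow steps 4-5, using the messaging-pair-B socket binding and an incremented in-vm server-id.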

 

9) Repeat steps 2-7 for the remaining backup nodes.

10) We need to update the messaging ports. To do this, search for the following:

<socket-binding name="messaging" port="5445"/>

Replace this line with these three lines:

<socket-binding name="messaging-pair-A" port="5445"/>
<socket-binding name="messaging-pair-B" port="5545"/>
<socket-binding name="messaging-pair-C" port="5645"/>

 

11) These steps need to be applied on all 3 nodes.

Once these configuration changes have been made, please cycle the app servers and ensure each node starts up successfully.

**Please always keep backup copies of configuration files before modifying them**
