08 December, 2017

Liferay DXP clustering

Hello Friends,

If you have been looking for simple steps to set up DXP clustering, this post walks through them.

Note: For Liferay Portal 7 CE clustering, see https://community.liferay.com/news/new-clustering-code-for-liferay-portal-community/

Steps for Liferay DXP clustering:

Liferay Setup:

- Set up Liferay DXP on 2 separate servers that can be accessed via separate IP addresses.
- Both servers must be reachable from each other over the network.
- We are not going to set up 2 DXP instances on a single server, since that is a poor architecture for production.
- Set up the first instance and let it populate the database.
- When you set up the second DXP instance, point it to the same database you used for the first instance (see the sketch below). We are not going to use separate read and write database servers here; a single database serves reads and writes for both instances. For a production setup, you can consider separate database servers for read and write operations.
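
For example, a minimal sketch of the shared JDBC settings, assuming the database runs on a hypothetical host dbhost.example.com; both nodes carry identical lines in portal-ext.properties:

# Identical on Node-1 and Node-2: both instances point at the same database
jdbc.default.driverClassName=oracle.jdbc.driver.OracleDriver
jdbc.default.url=jdbc:oracle:thin:@//dbhost.example.com:1521/xe
jdbc.default.username=username
jdbc.default.password=password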

Liferay configuration for clustering

portal-ext.properties configuration for Node-1 and Node-2

liferay.home=/u01/app/oracle/product/fmw1/user_projects/domains

jdbc.default.driverClassName=oracle.jdbc.driver.OracleDriver
jdbc.default.url=jdbc:oracle:thin:@//localhost:1540/xe
jdbc.default.username=username
jdbc.default.password=password
setup.wizard.enabled=false

# For clustering settings
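# DBStore keeps Documents and Media files in the database so both nodes serve the same files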
dl.store.impl=com.liferay.portal.store.db.DBStore
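# Enable Cluster Link, Liferay's JGroups-based channel between the nodes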
cluster.link.enabled=true
cluster.link.channel.properties.control=/u01/app/oracle/product/fmw1/user_projects/domains/base_domain/tcp.xml
cluster.link.channel.properties.transport.0=/u01/app/oracle/product/fmw1/user_projects/domains/base_domain/tcp.xml
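# Skip address auto-detection and bind each channel to this node's own IP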
cluster.link.autodetect.address=
cluster.link.bind.addr["cluster-link-control"]=10.10.36.111
cluster.link.bind.addr["cluster-link-udp"]=10.10.36.111
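# Replicate Ehcache content and Lucene index writes across the cluster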
ehcache.cluster.link.replication.enabled=true
lucene.replicate.write=true
cluster.executor.debug.enabled=true
cluster.link.channel.system.properties=jgroups.bind_addr:${cluster.link.bind.addr["cluster-link-udp"]},jgroups.bind_interface:eth0


In the property file above, use your own server's IP address for the bind addresses (and replace localhost in the JDBC URL with your shared database host if the database is remote). The tcp.xml file is covered below.
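
On Node-2, only the bind addresses change; a minimal sketch, assuming Node-2's IP is 10.10.36.112 (the second address in the TCPPING list below):

# Node-2: bind the Cluster Link channels to this node's own IP
cluster.link.bind.addr["cluster-link-control"]=10.10.36.112
cluster.link.bind.addr["cluster-link-udp"]=10.10.36.112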

tcp.xml configuration for Node-1 and Node-2

- You can get this file from the Liferay Foundation.lpkg file.
- Extract the lpkg, then extract com.liferay.portal.cluster.multiple-1.0.11.jar from it, where you can see tcp.xml (a sketch of the extraction follows this list).
- Copy the tcp.xml file locally and update it as shown below; the key change is the TCPPING discovery section listing both nodes. Here we are using TCPPING; you can also use JDBC_PING and other discovery methods that Liferay supports.
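
A minimal extraction sketch, assuming a default bundle layout where the lpkg sits under [Liferay Home]/osgi/marketplace (the exact jar version may differ in your installation); both file types are ordinary zip archives:

cd /path/to/liferay-home/osgi/marketplace
unzip -o "Liferay Foundation.lpkg" -d /tmp/lpkg
unzip -o /tmp/lpkg/com.liferay.portal.cluster.multiple-1.0.11.jar -d /tmp/cluster-jar
find /tmp/cluster-jar -name tcp.xml

The updated tcp.xml looks like this: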

<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns="urn:org:jgroups"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">
    <TCP bind_port="7800"
          singleton_name="liferay_jgroups_tcp"
         recv_buf_size="${tcp.recv_buf_size:5M}"
         send_buf_size="${tcp.send_buf_size:5M}"
         max_bundle_size="64K"
         max_bundle_timeout="30"
         use_send_queues="true"
         sock_conn_timeout="300"

         timer_type="new3"
         timer.min_threads="4"
         timer.max_threads="10"
         timer.keep_alive_time="3000"
         timer.queue_max_size="500"

         thread_pool.enabled="true"
         thread_pool.min_threads="2"
         thread_pool.max_threads="8"
         thread_pool.keep_alive_time="5000"
         thread_pool.queue_enabled="true"
         thread_pool.queue_max_size="10000"
         thread_pool.rejection_policy="discard"

         oob_thread_pool.enabled="true"
         oob_thread_pool.min_threads="1"
         oob_thread_pool.max_threads="8"
         oob_thread_pool.keep_alive_time="5000"
         oob_thread_pool.queue_enabled="false"
         oob_thread_pool.queue_max_size="100"
         oob_thread_pool.rejection_policy="discard"/>

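    <!-- TCPPING discovery: list every node as IP[port]; update these addresses for your environment -->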
    <TCPPING async_discovery="true"
             initial_hosts="${jgroups.tcpping.initial_hosts:10.10.36.111[7800],10.10.36.112[7800]}"
             port_range="2"/>
    <MERGE3  min_interval="10000"
             max_interval="30000"/>
    <FD_SOCK/>
    <FD timeout="3000" max_tries="3" />
    <VERIFY_SUSPECT timeout="1500"  />
    <BARRIER />
    <pbcast.NAKACK2 use_mcast_xmit="false"
                   discard_delivered_msgs="true"/>
    <UNICAST3 />
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
                   max_bytes="4M"/>
    <pbcast.GMS print_local_addr="true" join_timeout="2000"
                view_bundling="true"/>
    <MFC max_credits="2M"
         min_threshold="0.4"/>
    <FRAG2 frag_size="60K"  />
    <!--RSVP resend_interval="2000" timeout="10000"/-->
    <pbcast.STATE_TRANSFER/>
</config>

- Put this tcp.xml file on the classpath of each server and restart both servers (nodes); one way to do this is sketched below.
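
A minimal sketch for a WebLogic domain like the one in the paths above (for a Tomcat bundle you would set CLASSPATH in setenv.sh instead); the directory is whichever one holds your tcp.xml:

# In $DOMAIN_HOME/bin/setUserOverrides.sh: prepend the directory containing
# tcp.xml to the server classpath, then restart the server
EXT_PRE_CLASSPATH=/u01/app/oracle/product/fmw1/user_projects/domains/base_domain
export EXT_PRE_CLASSPATH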

Set up a shared Elasticsearch server

You can refer to https://customer.liferay.com/documentation/7.0/deploy/-/official_documentation/deployment/configuring-elasticsearch-for-liferay-0 for setting up an Elasticsearch server. Point both nodes to this server so that both nodes share the same index.
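
For reference, a minimal sketch of the remote-mode setting for the DXP 7.0 Elasticsearch connector, assuming a hypothetical Elasticsearch host at 10.10.36.120; deploy the same file on both nodes as [Liferay Home]/osgi/configs/com.liferay.portal.search.elasticsearch.configuration.ElasticsearchConfiguration.config:

# Same on both nodes: point the search connector at the shared Elasticsearch server
operationMode="REMOTE"
transportAddresses="10.10.36.120:9300"
clusterName="LiferayElasticsearchCluster"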

How to verify

- Look at the server log on both nodes; you should see entries showing that the nodes have found each other (with print_local_addr="true", JGroups prints the local address and the current cluster view at startup).
- You can also create a web content article or document on one node and check it on the other node; it should appear there without reindexing or clearing the database cache. If the nodes do not see each other, check port connectivity as sketched below.
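
A quick connectivity check of the JGroups TCP port (7800 in the tcp.xml above); a minimal sketch, run on Node-1 against Node-2's IP:

# Is the JGroups channel listening on this node?
ss -tln | grep 7800
# Can this node reach the other node's JGroups port?
nc -zv 10.10.36.112 7800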

References:
- Liferay DXP Clustering
- How to Install Liferay DXP in a Clustered Environment
- Managing Liferay DXP's Distributed Cache
