You can create a managed master
or repeater in the managed configuration. It is recommended to use
only managed nodes as part of your production system.
Before you begin
Review the managed search topologies and select the one that meets your business needs.
Note: This task applies only to topology 2 or 3, as a managed master or repeater must be a member of the Solr cluster. If you have a business need to index in production, you must use one of these topologies. The following steps therefore show how to change one of the search cluster's managed subordinate servers into either a master or a repeater.
About this task
The following list shows the high-level steps that are associated
with creating a managed master node or managed repeater:
- A managed subordinate node is identified to become the managed
master or repeater.
- The repeater server JVM is updated in the deployment manager to
point to the matching managed search templates based on its role.
- Search query requests are disabled on the selected node.
- A server transport is opened for other subordinate servers to
use for replication.
- The managed search templates are redeployed with the correct new
repeater or master hostname and password.
Procedure
- Identify a managed subordinate node that will become the
managed master or repeater. In this task, this node will be referred
to as the repeater node.
- Update the repeater server JVM properties to point to the matching managed search templates. A scripted alternative is sketched after these substeps.
- In the deployment manager, go to the search subordinate cluster members.
- For the target cluster member, go to the custom properties.
- Click the solr.solr.home property and modify it to point to the repeater templates:
./installedApps/demo_search_cell/Search_demo.ear/managed-solr/repeater/solr/home
- Save the property to the configuration.
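If you prefer to script the property change, wsadmin can make the same update. The following Jython sketch is illustrative only: the cell, node, and server names (demo_search_cell, searchNode01, search_repeater) are placeholders for your environment, and the sketch assumes that solr.solr.home already exists as a JVM custom property.
# wsadmin -lang jython sketch: repoint solr.solr.home on the repeater server
serverId = AdminConfig.getid('/Cell:demo_search_cell/Node:searchNode01/Server:search_repeater/')
jvm = AdminConfig.list('JavaVirtualMachine', serverId).splitlines()[0]
newHome = './installedApps/demo_search_cell/Search_demo.ear/managed-solr/repeater/solr/home'
# Find the existing custom property and update its value
for prop in AdminConfig.list('Property', jvm).splitlines():
    if AdminConfig.showAttribute(prop, 'name') == 'solr.solr.home':
        AdminConfig.modify(prop, [['value', newHome]])
AdminConfig.save()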
- Disable forwarding of search query requests to the repeater node. A scripted alternative for the configured weight is sketched after these substeps.
- In the deployment manager, go to the search subordinate cluster members.
- Set the runtime and configured weight to 0 for the repeater server.
- Save the changes to the master configuration.
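The configured weight can also be set with wsadmin. This Jython sketch is an illustration that assumes the placeholder names search_cluster and search_repeater; the runtime weight must still be changed in the administrative console.
# wsadmin -lang jython sketch: set the repeater member's configured weight to 0
member = AdminConfig.getid('/ServerCluster:search_cluster/ClusterMember:search_repeater/')
AdminConfig.modify(member, [['weight', '0']])
AdminConfig.save()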
- Open a server transport for other subordinate servers to use for replication. A scripted sketch for the host alias is shown after these substeps.
- In the deployment manager, go to the search subordinate cluster members.
- On the repeater cluster member, go to the web container transport chains.
- Click New to create a new transport chain named replicationTransport. Then, click Next.
- Specify a port name, for example, replicationPort, and a port number, for example, 3636. Ensure that the host value is set to *.
- Click Save and Save to configuration.
- Go to the virtual host's host aliases.
- Create new host aliases that point to the new replication port number that you created earlier in this step.
- Click Save and close and Save to configuration.
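Creating the host alias can also be scripted. This Jython sketch assumes the default_host virtual host and the example port 3636; adjust both values for your environment.
# wsadmin -lang jython sketch: add a host alias for the replication port
vhost = AdminConfig.getid('/VirtualHost:default_host/')
AdminConfig.create('HostAlias', vhost, [['hostname', '*'], ['port', '3636']])
AdminConfig.save()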
- Restart the repeater server.
- Generate the web server plug-ins by selecting your web server and clicking Generate Plug-in. Then, after the plug-ins are generated, select Propagate Plug-in.
- Restart the search server.
- Redeploy the managed search templates with the correct new master or repeater host name and password.
Note: If the deployed managed-solr templates do not include the repeater or master templates, ensure that you regenerate the template that corresponds to the managed node you created earlier in this task (for example, the repeater template).
- Open the solrhome/solr.xml file in both the repeater and subordinate templates for editing.
- Update the replication port number to the new value that you specified when creating the transport chain.
For example:
<core instanceDir="MC_masterCatalogId/en_US/CatalogEntry/" name="MC_masterCatalogId_CatalogEntry_en_US">
  <property name="master.server.url" value="hostname:port"/>
  <property name="replication.enable.slave" value="true"/>
  <property name="solr.replication.pollInterval" value="00:00:30"/>
</core>
Note: This step is required because all the nodes are subordinates when the search cluster is first set up. You can skip this step if you already configured a repeater in the loading.properties file when setting up the search index structure in the managed configuration.
- Save your changes and close the file.
- Package the new solr.xml file into the search.zip archive so that it can be deployed with the deployment manager in the next task. A repackaging sketch follows.
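Repackaging can be scripted if you prefer. The following Python sketch rewrites search.zip with the updated file; the entry name inside the archive (solr/home/solr.xml here) is an assumption, so confirm it against your own search.zip before running.
# Python sketch: replace solr.xml inside search.zip
import shutil, zipfile

src = 'search.zip'
tmp = 'search.zip.tmp'
updated = 'solrhome/solr.xml'   # the file you just edited
entry = 'solr/home/solr.xml'    # assumed entry name inside the archive

with zipfile.ZipFile(src) as zin, zipfile.ZipFile(tmp, 'w', zipfile.ZIP_DEFLATED) as zout:
    # Copy every entry except the old solr.xml, then add the updated copy
    for item in zin.infolist():
        if item.filename != entry:
            zout.writestr(item, zin.read(item.filename))
    zout.write(updated, entry)
shutil.move(tmp, src)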