Shared Disk (SD) secondary
SD secondary servers share disk space — except temporary dbspaces — with the primary
server. This is typically done through a network-based clustered file system. Adding a
new SD secondary to a cluster is very easy and can be done in a few seconds once the
shared disk is prepared. Because SD secondary nodes leverage the primary's disks and can
be brought up easily and quickly, they are well-suited for scale-out scenarios. On SD
secondary servers, checkpoints are synchronized with the primary: a checkpoint at the
primary server completes only after the checkpoint at the SD secondary completes. SD
secondary servers support the committed read and committed read last committed
isolation levels, as well as
dirty read. An SD secondary can be promoted to a primary server with a single command:

`onmode -d make primary <name of SD server>`

Because an SD secondary server is so close to the primary (in other words, it shares
the same disk), it is often the best type of server to fail over to first if the
primary encounters a problem.
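Putting this into practice, the command sequence for attaching a new SD secondary and later promoting it might look like the sketch below. The server aliases `g_prim` and `g_sds1` are hypothetical, and the secondary's onconfig is assumed to already point at the shared disk:

```
# On the primary: declare it the SDS primary so SD secondaries can attach
# ("g_prim" is a hypothetical server alias).
onmode -d set SDS primary g_prim

# On the new SD secondary: with SDS_ENABLE, SDS_PAGING, and SDS_TEMPDBS
# configured against the shared disk, simply start the server.
oninit

# Failover: promote the SD secondary ("g_sds1" is a hypothetical alias).
onmode -d make primary g_sds1
```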
Configuration parameter | Server type | Supported values | Description |
---|---|---|---|
SDS_ENABLE | Primary, SD secondary | | Use this to allow SD secondaries to be added to the cluster |
SDS_PAGING | SD secondary | `<absolute path for paging file 1>,<absolute path for paging file 2>` | Two paging files must be configured to bring up an SDS node |
SDS_TEMPDBS | SD secondary | `<dbspace_name>,<path>,<pagesize in KB>,<offset in KB>,<size in KB>` | Temporary dbspace information for the SD secondary node. You can configure up to 16 SDS_TEMPDBS entries. Example: `SDS_TEMPDBS sdstmpdbs1,/work/dbspaces/sdstmpdbs1,2,0,16000` |
SDS_TIMEOUT | Primary | >= 0 seconds | Used at the primary to decide how long to wait for an acknowledgement from an SD server. If no acknowledgement occurs, the primary acts to shut the SD server down. The default value is 20 seconds |
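Taken together, a minimal onconfig fragment for an SD secondary might look like the following sketch. The paths, dbspace name, and page size are illustrative assumptions, not values from this document:

```
SDS_ENABLE  1                                               # allow this instance to run as an SD secondary
SDS_PAGING  /ifmx/sds/page_1,/ifmx/sds/page_2               # two paging files are required
SDS_TEMPDBS sdstmpdbs1,/work/dbspaces/sdstmpdbs1,2,0,16000  # 2 KB pagesize, 0 KB offset, 16000 KB size
```

SDS_TIMEOUT, by contrast, is set in the primary server's onconfig, since it governs how long the primary waits for acknowledgements from SD secondaries.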