Database performance tuning for WebSphere Commerce and pureScale
To improve database performance, tune WebSphere Commerce and pureScale.
Registry settings
- DB2_DATABASE_CF_MEMORY
- DB2_MCR_RECOVERY_PARALLELISM_CAP
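These DB2 registry variables are set at the instance level with the db2set command. The following is only a sketch of the mechanics; the values shown are placeholders, not tuning recommendations, and most registry changes take effect only after an instance restart.
db2set DB2_DATABASE_CF_MEMORY=value
db2set DB2_MCR_RECOVERY_PARALLELISM_CAP=value
db2set -all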
Set up WebSphere Commerce server affinity to DB2 pureScale member
By default, a WebSphere Commerce server can connect to any DB2 pureScale member by using workload balancing. This behavior occurs when the enableSysplexWLB custom property in the WebSphere Application Server data source is set to true, and it can be sufficient for workloads that are not performance- or volume-intensive.
An approach for achieving higher performance is to set enableSysplexWLB to false, so that the data source connects to a specific pureScale member. Failover can then be configured with the enableClientAffinitiesList, clientRerouteAlternateServerName, and clientRerouteAlternatePortNumber properties of the data source. Manual load balancing can be achieved by configuring a similar number of servers for each DB2 pureScale member.
WebSphere Commerce data sources are created by default at the server scope. You can instead create a data source at the node scope to group child servers and simplify the management of data source changes. A node-scoped data source lets you assign WebSphere Commerce nodes to a pureScale member rather than work with many servers and their individual data sources.
- Using the WebSphere Application Server administrative console, go to the data sources panel.
- Select the data source to be configured.
- In Common and required data source properties, complete all the information for the target DB2 pureScale member.
- In Custom properties, add or edit the following pureScale-related properties and their values:
  - enableSysplexWLB = false. Turns off workload balancing among the database members.
  - enableClientAffinitiesList = 1. The IBM Data Server Driver for JDBC and SQLJ enables client affinities for cascaded failover support. Only the servers that are specified in the clientRerouteAlternateServerName and clientRerouteAlternatePortNumber properties are retried; the driver does not attempt to reconnect to any other servers.
  - affinityFailbackInterval = 300. The number of seconds to wait after the first transaction boundary before failing back to the primary server. Set this value if you want to fail back to the primary server.
  - enableSeamlessFailover = 1. The IBM Data Server Driver for JDBC and SQLJ uses seamless failover, which means that the driver does not throw an SQLException with error code -4498 after a failed connection is successfully reestablished when the following conditions are true: the connection was not being used for a transaction at the time the failure occurred, and there are no outstanding global resources (such as global temporary tables or open, held cursors) or connection states that prevent a seamless failover to another server.
- In the client reroute settings for the data source, set up the DB2 pureScale members to use for failover. Add the host names of the DB2 pureScale members to fail over to, separated by commas, in the Alternate server names field. Then, add their corresponding DB2 server listener port values, separated by commas, in the Alternate port numbers field.
- Save your changes and restart the WebSphere Commerce servers to use the new data source settings.
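To confirm which pureScale member a given connection is using after these changes, you can run a quick query from a database session that is opened with the same connection settings. This is only a verification sketch; CURRENT MEMBER is the DB2 special register that returns the member number serving the connection.
VALUES CURRENT MEMBER;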
Append mode for tables
When append mode is enabled, new rows are always added at the end of the table instead of being placed into free space within existing pages, which reduces the free-space searching and page contention that heavy inserts otherwise cause.
- To enable a table for append mode, use the following DB2 command:
ALTER TABLE table_name APPEND ON
- To disable a table if it is already in append mode, use the following DB2 command:
ALTER TABLE table_name APPEND OFF
- To check whether a table is using append mode, use the db2look utility to get the details of the table, or query the catalog as shown in the example after this list.
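As a lighter-weight check than generating DDL with db2look, the APPEND_MODE flag can be read directly from the SYSCAT.TABLES catalog view; the table name below is a placeholder. A value of Y means that append mode is on; N means that it is off.
SELECT TABSCHEMA, TABNAME, APPEND_MODE
FROM SYSCAT.TABLES
WHERE TABNAME = 'TABLE_NAME';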
The drawbacks of using append mode are that deletions waste space, reads cost more, and clustered indexes are not possible.
Analysis that uses DB2 snapshots or table functions helps validate which tables are suitable for append mode. It is important to consider the table usage pattern during a workload that includes administrative and maintenance operations.
An example of a table with high insert activity is the ORDERS table: orders are constantly created, rarely deleted, and read only periodically.
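A sketch of identifying insert-heavy, rarely deleted tables with the MON_GET_TABLE monitoring table function; the counters reflect activity since the database was activated, so run it after a representative workload period. The row limit is arbitrary.
SELECT TABSCHEMA, TABNAME,
       SUM(ROWS_INSERTED) AS TOTAL_INSERTS,
       SUM(ROWS_DELETED)  AS TOTAL_DELETES,
       SUM(ROWS_READ)     AS TOTAL_READS
FROM TABLE(MON_GET_TABLE('', '', -2)) AS T
GROUP BY TABSCHEMA, TABNAME
ORDER BY TOTAL_INSERTS DESC
FETCH FIRST 20 ROWS ONLY;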
Increase table space extent size
When heavy inserts occur in tables or indexes, new extents are allocated to the table space, and each extent allocation involves internal message traffic within the pureScale environment. Larger extent sizes for the table spaces that hold heavily inserted objects help reduce this internal communication.
The table space extent size cannot be altered after creation. To implement this change, you must create table spaces with larger extent sizes, create the heavily inserted tables and their associated objects (indexes, aliases, constraints, and triggers) in the larger-extent table spaces, move the data from the existing tables to the newly created tables, and then drop the existing tables. One way to perform this task is to use the db2look utility to capture the object definitions for modification and an INSERT from SELECT approach to move the data, as illustrated in the sketch below.
An example of a table with heavy inserts is the CTXDATA table, where data is inserted every time that the site gets a hit.
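A minimal sketch of the migration, using placeholder names for the table space, schema, and table; the extent size of 64 pages is only an example.
-- Placeholder names; 64 pages is an illustrative extent size.
CREATE TABLESPACE large_extent_ts EXTENTSIZE 64;
-- LIKE copies only the column definitions; re-create indexes, constraints,
-- and triggers (for example, from db2look output) before switching over.
CREATE TABLE schema_name.table_name_new LIKE schema_name.table_name IN large_extent_ts;
INSERT INTO schema_name.table_name_new SELECT * FROM schema_name.table_name;
DROP TABLE schema_name.table_name;
RENAME TABLE schema_name.table_name_new TO table_name;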
Drop unused indexes
Each insert, update, and delete operation in a table leads to similar operations for each of the associated indexes. To reduce the overhead of maintaining indexes, keep the minimal set of required indexes.
drop index schema_name.index_name;
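One way to find candidate indexes is to look for indexes with no recorded scans; the following sketch joins the MON_GET_INDEX monitoring table function with SYSCAT.INDEXES. The counters reset when the database is deactivated, and an apparently unused index might still be required for uniqueness or constraint enforcement, so verify before dropping.
SELECT S.INDSCHEMA, S.INDNAME, S.TABNAME,
       SUM(M.INDEX_SCANS) AS TOTAL_SCANS
FROM TABLE(MON_GET_INDEX('', '', -2)) AS M
JOIN SYSCAT.INDEXES AS S
  ON S.TABSCHEMA = M.TABSCHEMA
 AND S.TABNAME = M.TABNAME
 AND S.IID = M.IID
GROUP BY S.INDSCHEMA, S.INDNAME, S.TABNAME
HAVING SUM(M.INDEX_SCANS) = 0
ORDER BY S.TABNAME, S.INDNAME;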
Use random indexes
If the column values are always ascending or descending and heavy inserts occur, the corresponding index inserts all target the same page to preserve the key order, which leads to page contention. A random index stores the column values in random order, which reduces the page contention but introduces some overhead for data retrieval.
To create an index with random ordering, drop the original index and then re-create it with the RANDOM option. To have a primary key benefit from a random index, drop the primary key, create the random index, and then re-create the primary key, as in the sketch that follows the CREATE INDEX statement.
CREATE INDEX schema_name.index_name ON schema_name.table_name (column_name RANDOM);
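A sketch of converting a primary key to use a random index, with placeholder names throughout; when the primary key is re-added, DB2 can typically reuse the existing unique random index to enforce it instead of creating a new ascending index.
-- Placeholder table, column, index, and constraint names.
ALTER TABLE schema_name.table_name DROP PRIMARY KEY;
CREATE UNIQUE INDEX schema_name.index_name ON schema_name.table_name (column_name RANDOM);
ALTER TABLE schema_name.table_name ADD CONSTRAINT constraint_name PRIMARY KEY (column_name);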
Inline LOBs
When a LOB is stored separately from the row data, an extra I/O operation is required to insert, update, or fetch that data. The performance of LOB access can be improved by placing the LOB inline with the row data. The limit on the inline LOB length is the size of the base table row, minus the size of the existing data columns, minus 4 bytes of overhead. LOBs that cannot be stored inline are stored as separate storage objects.
alter table table_name alter column LOB_column_name set inline length size;
reorg table table_name longlobdata;
For example, the PX_PROMOTION table might be a table that benefits from inline LOBs.
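To choose a suitable inline length, you can estimate how much space the existing LOB values would need if they were inlined. The following sketch uses the ADMIN_EST_INLINE_LENGTH built-in function with placeholder table and column names; the function returns -1 for values that cannot be inlined.
SELECT MAX(ADMIN_EST_INLINE_LENGTH(LOB_column_name)) AS MAX_INLINE_LENGTH,
       SUM(CASE WHEN ADMIN_EST_INLINE_LENGTH(LOB_column_name) = -1 THEN 1 ELSE 0 END) AS NOT_INLINABLE
FROM schema_name.table_name;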