stagingcopy utility
The stagingcopy utility copies data from the production database to the production-ready data on a staging or authoring server. You cannot use this command if RFQs are on your production system.
- Your staging and production environments must be at the same maintenance level to successfully run the stagingcopy utility.
- If the site uses the web feed utility, export the contents of the CMFEEDLOG table, then import the contents after the stagingcopy utility completes.
For DB2, see the EXPORT command and IMPORT command for more information. For example:
export to /backupdata/cmfbackup.del of del select * from cmfeedlog
import from /backupdata/cmfbackup.del of del insert into cmfeedlog
For Oracle, see your Oracle product documentation for more information. For example:
exp userid/password FILE=cmfeedlog.dmp TABLES=cmfeedlog
imp userid/password FILE=cmfeedlog.dmp FROMUSER=userid TABLES=cmfeedlog IGNORE=Y
- If Archive Logging is enabled, running the stagingcopy utility can result in a non-linear increase in the amount of disk space that is used by the destination database. The amount of disk space that is used by the Archive Logs can be up to 4 - 8 times the final size of the destination database. Ensure that you monitor your file system size to avoid running out of disk space for the destination database.
- Log on as a user profile that has a CCSID other than 65535.
- Start a Qshell session.
- Run the script by using the following format for the
utility:
WC_installdir/bin/stagingcopy.sh (parameters . . .)
Note: The user parameters must always be specified.
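For illustration, a minimal example invocation follows. The host names, port, database names, user IDs, and passwords are hypothetical placeholders, and the dash-prefixed parameter-name style is assumed rather than confirmed here:
WC_installdir/bin/stagingcopy.sh -scope _all_ -sourcedb proddbhost.example.com:50000/MALL -sourcedb_user prodadmin -sourcedb_passwd prodpassword -destdb stagedbhost.example.com:50000/STAGING -destdb_user stageadmin -destdb_passwd stagepassword
The individual parameters are described in the Parameter values section that follows.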
Utility command
The stagingcopy utility has one of the following file names, depending on your platform:
- stagingcopy.sh
- stagingcopy.bat
Parameter values
- dbtype
- Required.
- Specify one of the following values, depending on your database type:
- DB2
- The default database type for a DB2 database. You can omit the dbtype parameter from the command.
- Oracle
- Specify Oracle for an Oracle database.
- DB2/OS400
- Specify DB2/OS400 when you use the native JDBC driver on IBM i.
- DB2/OS400ToolBox
- Specify DB2/OS400ToolBox when you use the IBM Toolbox for Java JDBC driver on IBM i.
- scope
- Required: The level of scope for the copy to the staging or authoring server. Specify one of the
following levels:
- _all_
- Copies records that are related both to the site and to all merchants. Records are copied in the following order:
- Site records are copied from STGSITETAB
- Site records are copied from STGMRSTTAB
- Merchant records are copied from STGMERTAB
- Merchant records are copied from STGMRSTTAB
- _site_
- Copies only site-related records. These records are data that is common to all merchants. For example, the language and country or region code that is used by the system. This data comes from the STGSITETAB table.
- _merchant_
- Copies the records that are related to individual merchants only. For example, store information is customized for individual merchants, and rows from the store tables could be specific for each merchant. You must copy all data for all merchants, not just data for one individual merchant. This data comes from the STGMERTAB table.
If you do not set your scope to _all_:
- Copy site data before merchant data, since the site data is used by all merchants. Otherwise, your copy fails because of a mismatch between the foreign and primary keys. An example command sequence is shown after the notes that follow.
- When you use the cleanup_stage_db parameter to clean the site data, merchant data can also be deleted because of the cascade delete. Clean the merchant data first, followed by the site data; then copy the site data, followed by the merchant data.
Note:
- These tables are referenced from only the staging or authoring server, not from production.
- MEMBER table data is not cleaned for user members: MEMBER table records with TYPE column value of 'U'. However, if there are CASCADE DELETE foreign key constraints, cleanup of non-user members can result in the loss of data in tables that retain relationships with user members. For example, MBRREL, MBRGRPMBR, and MBRROLE. If information in these relationship tables must be retained, make sure that a backup of the destination database (staging or authoring) exists.
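For example, if you copy by scope rather than with _all_, a hedged two-step sequence might look like the following, where [connection parameters] stands for the same hypothetical sourcedb, destdb, user, and password values that are shown in the earlier example invocation:
WC_installdir/bin/stagingcopy.sh -scope _site_ [connection parameters]
WC_installdir/bin/stagingcopy.sh -scope _merchant_ [connection parameters]
Running the _site_ copy first avoids the foreign key mismatch that is described in the preceding notes.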
- dbtable
- The name of any specific table to be copied. All records in this table are copied, provided the records are within the scope that is specified by the scope parameter; otherwise, no records are copied.
- sourcedb
- Required: The name of the database on the production server. This value must be a valid, full JDBC URL or conform to one of the following conditions:
- For DB2, use the Type 4 database name. The Type 4 database name is prefixed with the database server and port, for example, db_server:db_port/db_name. Note: The Type 2 database name is deprecated and must not be used.
- For Oracle, use the TNS entry name.
- For IBM i, review the following information:
- If the dbtype parameter is DB2/OS400, specify the name of the database on the production server, as displayed in the relational database directory.
- If the dbtype parameter is DB2/OS400ToolBox, specify the host name of the server on which the production database resides.
If you use script_file, consider setting script_sourcedb as well.
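For illustration, hedged sample values for the sourcedb parameter follow; the host name, port, database name, and TNS entry name are hypothetical placeholders:
- Type 4 database name: proddbhost.example.com:50000/MALL
- Full JDBC URL for DB2: jdbc:db2://proddbhost.example.com:50000/MALL
- Oracle TNS entry name: PRODDB, as defined in the client tnsnames.ora file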
- sourcedb_user
- Required: The logon ID of the database administrator who created the source database schema.
For IBM i, specify the user profile that is associated with the WebSphere Commerce instance. This profile is the same as the source database schema.
- sourcedb_passwd
- Required: The password of the logon ID that is specified by the sourcedb_user parameter.
- sourcedb_schema
- Optional: Specifies the schema on the source database where all operations are conducted. Specifically, this schema should have all database objects that are required by an active WebSphere Commerce instance. When not specified, this value defaults to the schema active on the source database when a connection is established.
- destdb
- Required: The name of the database on the staging or authoring server. This value must be a valid, full JDBC URL or conform to one of the following conditions:
- For DB2, use the Type 4 database name. The Type 4 database name is prefixed with the database server and port, for example, db_server:db_port/db_name. Note: The Type 2 database name is deprecated and must not be used.
- For Oracle, use the TNS entry name.
- For IBM i, review the following information:
- If the dbtype parameter is DB2/OS400, specify the name of the database on the staging or authoring server, as displayed in the relational database directory.
- If the dbtype parameter is DB2/OS400ToolBox, specify the host name of the server on which the staging or authoring database resides.
If you use script_file, consider setting script_destdb as well.
- destdb_user
- Required: The logon ID of the database administrator who created the staging or authoring database schema. This parameter is mandatory when you use a remote database.
For IBM i, specify the user profile that is associated with the WebSphere Commerce instance. This profile is the same as the staging or authoring database schema.
- destdb_passwd
- Required: The password of the logon ID that is specified by the destdb_user parameter. If not specified, the system prompts you to enter the password. This parameter is mandatory when you use a remote database.
- destdb_schema
- Optional: Specifies the schema on the destination database where all operations are conducted. Specifically, this schema should have all database objects that are required by an active WebSphere Commerce instance. When not specified, this value defaults to the schema active on the destination database when a connection is established.
- script_file
- Optional. The name of the SQL script file that is generated by the
stagingcopy utility when you use export and import to copy the production
database to the production-ready data within the specified scope. The script file also contains the delete statements to clean the production-ready data if you use the default value or specify cleanup_stage_db yes.
Before you run the script, verify that you have enough disk space to hold the exported tables. The script file is created in the directory from which you invoke the stagingcopy utility.
- Consider setting the script_sourcedb and script_destdb options with script_file.
Also, before you run the script, you must set the correct code page for your DB2 command-line session. Otherwise, you might lose data during the character conversion. Since the WebSphere Commerce database uses Unicode characters, you must also set the DB2 client session to a Unicode code page. You can do so by setting the operating system environment variable DB2CODEPAGE. Note: You must terminate the current session and start a new command-line session for the new environment variable to take effect. For example, in a UNIX shell:
export DB2CODEPAGE=1208
db2 terminate
In a Windows command prompt:
set DB2CODEPAGE=1208
db2 terminate
For DB2, issue the following command to run the script file:
db2 -vtd# -f script_file_name
For Oracle, issue the following command to run the script file:
sqlplus /nolog @script_file_name
Support for IBM i databases to use the script_file parameter might be limited. Even though the generated file contains native IBM i SQL, it relies on IMPORT and EXPORT commands that are available only in DB2 CLP to copy data from the source to the destination database. If you have access to DB2 CLP, the generated file can be run successfully by using the following steps:
- Catalog the IBM i database server as a node on the DB2 CLP system:
CATALOG TCPIP NODE ibmi_node_name REMOTE ibmi_db_server SERVER ibmi_db_server_port
- Catalog the database at the IBM i server, locally:
CATALOG DATABASE ibmi_db_name AS local_db_alias AT NODE ibmi_node_name AUTHENTICATION DCS
- Create a DCS entry for the local database alias:
CATALOG DCS DB local_db_alias AS local_db_alias
- Refer to local_db_alias in either the script_sourcedb or script_destdb option, as necessary.
- ibmi_node_name
- Refers to the locally stored alias for the remote IBM i system.
- ibmi_db_server
- Refers to the host name of the remote IBM i system.
- ibmi_db_server_port
- Refers to the port on ibmi_db_server that accepts database connection requests. The value is usually 446.
- ibmi_db_name
- Refers to the name of the database on ibmi_db_server.
- local_db_alias
- Refers to the locally stored alias for the remote IBM i database.
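For illustration, a hedged worked example of the preceding catalog steps follows; the node name, host name, database name, and alias are hypothetical placeholders:
CATALOG TCPIP NODE IBMINODE REMOTE myibmi.example.com SERVER 446
CATALOG DATABASE MYIBMIDB AS STGIBMI AT NODE IBMINODE AUTHENTICATION DCS
CATALOG DCS DB STGIBMI AS STGIBMI
You could then pass STGIBMI as the value of the script_sourcedb or script_destdb option.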
Note: These steps must be completed for both the source and destination.
- script_sourcedb
- Optional. Database specification to be used for the source database when database commands and
statements are output to the file specified by script_file.
The default value is the same as the value that is specified for sourcedb. If script_file is not specified, script_sourcedb and its value are ignored.
You can set this option if the sourcedb value is not of a type that can be understood by the command-line or file interpreter of the underlying database. For example, use the sourcedb parameter to specify the full JDBC URL, and use the script_sourcedb parameter to specify a locally cataloged source database identifier or alias in the file that is generated by script_file.
- script_destdb
- Optional. Database specification to be used for the destination database when database commands
and statements are output to the file specified by script_file.
The default value is the same as the value that is specified for destdb. If script_file is not specified, script_destdb and its value are ignored.
You can set this option if the destdb value is not of a type that can be understood by the command-line or file interpreter of the underlying database. For example, use the destdb parameter to specify the full JDBC URL, and use the script_destdb parameter to specify a locally cataloged destination database identifier or alias in the file that is generated by script_file.
- script_prtsourcepswd
- Optional. Specifies whether the password for the source database user is to be printed with database connection statements to the file generated by script_file.
- script_prtdestpswd
- Optional. Specifies whether the password for the destination database user is to be printed with database connection statements to the file generated by script_file.
- instance
- Optional: The instance name that is local to the machine where you intend to run the stagingcopy utility.
- cleanup_stage_db
- Optional. Specify whether to clean the staging tables before the data is copied. If you do not
specify this parameter, staging tables are automatically cleaned before copying data. Specify one of
the following values:
- yes
- Cleans the staging tables before copying data. Merchant data can be deleted because of the cascade delete.
- no
- Do not clean the staging tables before copying data. Nothing is deleted from the staging tables. Your copy might fail if the copied data causes a conflict or duplicate key on primary keys or unique indexes.
- only
- Cleans the staging tables but no data is copied from the production database.
If you specify the scope parameter, clean and copy the merchant data after you clean and copy the site data.
- batchsize
- Optional. Turns on or off SQL batch updates and specifies the number of consolidated change log
records to include in one SQL batch.
If you do not specify this parameter, the batchsize parameter is set to a value of 100.
Setting the batchsize parameter to a value of 0 (zero) turns off SQL batch update.
Turn off SQL batch updates if you are publishing any of the following changes from the production-ready data to the production server:
- Using a workspace to delete a WebSphere Commerce object that involves the MEMBER table. This table includes objects such as users, organizations, customer segments, member groups, customer territory groups, or customer price groups.
When SQL batch update is turned on, change log records are sorted by change type (insert, update, or delete). Each batch contains changes of one type only. For example, if you have 102 insert changes and the batchsize parameter is set to 100, 2 SQL batches are created: one batch has 100 insert operations and the other has two insert operations.
Using SQL batch updates improves the speed with which the stagingcopy utility updates the destination database.
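For illustration, SQL batch updates could be turned off by adding the batchsize parameter to the command; this hedged sketch reuses the hypothetical [connection parameters] from the earlier examples:
WC_installdir/bin/stagingcopy.sh -scope _all_ -batchsize 0 [connection parameters]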
- log
- Optional. The path and name of the file in which the stagingcopy utility
records its activities and errors. The timestamp is appended to the file name, for example,
myLog_yyyy.mm.dd_hh.mm.ss.zzz.log. If this parameter is
not specified, a log file that is called
stagingcopy_yyyy.mm.dd_hh.mm.ss.zzz.log
is created in one of the following log directories, depending on your platform:
- WC_installdir/logs
- WC_userdir/instances
- WC_installdir\logs
- transaction
- Optional. Specifies the number of records after which a commit is issued. This number applies to
records that are modified during both the cleanup and copy steps of
StagingCopy.
Note: When used with script_file, transaction requires invocation of an SQL statement to estimate the number of DELETE statements to output to the file that is being generated. For example:
select count(*) num_recs from <tablename> [<scope-filtering-predicate>]
The number of DELETE statements that are generated is 1 + (num_recs / n). A commit is appended after each DELETE statement.
By default, for each table, a commit is issued once after the cleanup step and once after the copy step. Values of n less than or equal to 0 revert functionality to the default behavior.
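For example, assuming integer division, if a table contains 250 records within the specified scope and transaction is set to 100 (n = 100), the generated file contains 1 + (250 / 100) = 3 DELETE statements for that table, each followed by a commit.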
- paramfile
- Optional. Specifies the path to the parameter file that includes command-line arguments and
values. Each argument and value needs to be in the format
argument=value
with a single argument and value on each line in the file. Any passwords within this parameter file must be encrypted.
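For illustration, a hedged sample parameter file follows; the values are the same hypothetical placeholders that are used in the earlier examples, and the password strings are stand-ins for properly encrypted values:
scope=_all_
sourcedb=proddbhost.example.com:50000/MALL
sourcedb_user=prodadmin
sourcedb_passwd=encrypted_password_string
destdb=stagedbhost.example.com:50000/STAGING
destdb_user=stageadmin
destdb_passwd=encrypted_password_string
Assuming the dash-prefixed parameter-name style shown earlier, the utility could then read the file with a command such as:
WC_installdir/bin/stagingcopy.sh -paramfile parameter_file_path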