What's new in SQL Syntax for Informix®, Version 11.70
This publication includes information about new features and changes in existing functionality.
Joins with lateral references
In queries that join result tables in the FROM clause, you can now use the LATERAL keyword to reference previous table and column aliases in the FROM clause. The LATERAL keyword must immediately precede any query in the FROM clause that defines a derived table as its result set, if that query references any table or column that appears earlier in the left-to-right order of FROM clause syntax elements. For SELECT statements that join derived tables, lateral table and column references comply with the ISO/ANSI standard for SQL syntax, and can improve performance. Lateral references are also valid in DELETE, UPDATE, and CREATE VIEW statements that include derived tables.
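A minimal sketch of a lateral reference, using hypothetical customer and orders tables (the table and column names are illustrative, not from this publication):

```sql
SELECT c.customer_num, t.total
  FROM customer AS c,
       LATERAL (SELECT SUM(o.ship_charge) AS total
                  FROM orders AS o
                 WHERE o.customer_num = c.customer_num) AS t;
```

The derived table t references the alias c, which appears earlier in the FROM clause, so the LATERAL keyword must immediately precede it.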
Faster creation of foreign-key constraints
When you run the ALTER TABLE ADD CONSTRAINT statement, some foreign-key constraints can be created faster if the table has a unique index or a primary-key constraint that is already defined on the columns in the foreign-key constraint. Foreign-key constraints are not created faster, however, if the constraint key or index key includes columns of user-defined or opaque data types, including BOOLEAN and LVARCHAR, or if other restrictions are true for the foreign-key constraint or for the referenced table.
Reference: Creating foreign-key constraints when an index exists on the referenced table
Reference: Enabling foreign-key constraints when an index exists on the referenced table
Temporarily prevent constraint validation
You can significantly increase the speed of loading or migrating large tables by temporarily preventing the database server from validating foreign-key referential constraints. You can disable the validation of constraints when you create constraints or change the mode of constraints to ENABLED or FILTERING.
The NOVALIDATE keyword prevents the database server from checking every row for referential integrity during ALTER TABLE ADD CONSTRAINT and SET CONSTRAINTS operations on foreign-key constraints. When those statements finish running, the database server automatically resumes referential-integrity enforcement of those constraints in subsequent DML operations. Use this feature only on tables whose enabled foreign-key constraints are free of violations, or when the referential constraints can be validated after the tables are loaded or migrated to the target database.
Reference: Creating foreign-key constraints in NOVALIDATE modes
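As a sketch (the table and constraint names are hypothetical), the NOVALIDATE keyword follows the constraint definition:

```sql
-- Skip row-by-row checking while the constraint is created
ALTER TABLE child_tab
  ADD CONSTRAINT (FOREIGN KEY (parent_id)
      REFERENCES parent_tab CONSTRAINT fk_child_parent NOVALIDATE);

-- Or when changing the mode of an existing constraint
SET CONSTRAINTS fk_child_parent ENABLED NOVALIDATE;
```

After either statement completes, the server enforces the constraint as usual for subsequent DML operations.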
Control the duration and frequency of connection attempts
Previously, you might set the INFORMIXCONTIME and INFORMIXCONRETRY environment variables in the client environment before you started the database server. The values specified the number of seconds that the client session spends trying to connect to the database server, and the number of connection attempts. As of this fix pack, you can also control the duration and frequency of connection attempts with the SET ENVIRONMENT statement.
Reference: SET ENVIRONMENT statement
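A session-level sketch, assuming the SET ENVIRONMENT options mirror the environment-variable names (as the reference title suggests):

```sql
-- Spend up to 60 seconds per connection attempt, retrying up to 3 times
SET ENVIRONMENT INFORMIXCONTIME '60';
SET ENVIRONMENT INFORMIXCONRETRY '3';
```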
Defining separators for fractional seconds in date-time values
Now you can control which separator to use in the character-string representation of fractional seconds. To define a separator between seconds and fractional seconds, you must include a literal character between the %S and %F directives when you set the GL_DATETIME or DBTIME environment variable, or when you call the TO_CHAR function. By default, a separator is not used between seconds and fractional seconds. Previously, the ASCII 46 character, a period ( . ), was inserted before the fractional seconds, regardless of whether the formatting string included an explicit separator for the two fields.
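A sketch with the TO_CHAR function, assuming a hypothetical table t1 with a DATETIME column dt_col; here a colon is the explicit literal separator between the %S and %F directives:

```sql
SELECT TO_CHAR(dt_col, '%Y-%m-%d %H:%M:%S:%F3')
  FROM t1;
```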
Coordinating transactions within a high-availability cluster
To avoid problems caused by asynchronous log processing across server sessions or a cluster, use the new CLUSTER_TXN_SCOPE configuration parameter or the new SET ENVIRONMENT CLUSTER_TXN_SCOPE command. You can control whether a transaction commit can be returned to a client application before the transaction is applied in another server session or on another cluster node.
Reference: CLUSTER_TXN_SCOPE session environment option
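For example (the 'CLUSTER' value is illustrative; see the reference for the full list of scopes):

```sql
-- Wait until the transaction is applied across the cluster
-- before the commit is returned to the client
SET ENVIRONMENT CLUSTER_TXN_SCOPE 'CLUSTER';
```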
Enhanced support for OUT and INOUT parameters in SPL routines
SPL user-defined routines and C user-defined routines with OUT or INOUT arguments can be invoked from other SPL routines. The OUT and INOUT return values can be processed as statement-local variables or as local SPL variables of SQL data types. The SPL routines that are invoked from SPL routines support all data types except BYTE, TEXT, BIGSERIAL, SERIAL, and SERIAL8. The C routines that are invoked from SPL routines support all data types except BYTE, TEXT, BIGSERIAL, SERIAL, SERIAL8, and ROW.
Reference: Specifying INOUT Parameters for a User-Defined Routine
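A minimal SPL sketch (routine and table names are hypothetical; see the reference for the exact parameter-passing syntax):

```sql
CREATE PROCEDURE get_count(OUT p_count INT)
    SELECT COUNT(*) INTO p_count FROM customer;
END PROCEDURE;

CREATE PROCEDURE caller_proc()
    DEFINE n INT;
    CALL get_count(n);  -- OUT value is received in the local SPL variable n
END PROCEDURE;
```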
IFX_BATCHEDREAD_INDEX environment option
Use the IFX_BATCHEDREAD_INDEX environment option of the SET ENVIRONMENT statement of SQL to control whether the optimizer automatically fetches a set of keys from an index buffer for a session. The environment option enables or disables the value of the BATCHEDREAD_INDEX configuration parameter for a session.
Reference: IFX_BATCHEDREAD_INDEX session environment option
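For example (assuming '1' enables and '0' disables, following the usual convention for these options):

```sql
-- Override the BATCHEDREAD_INDEX configuration parameter for this session
SET ENVIRONMENT IFX_BATCHEDREAD_INDEX '1';
```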
Increased SQL statement length
The maximum length of SQL statements and SPL routines is 4 GB. The only exception is the CREATE VIEW statement, which is restricted to 2 MB. The extended length is valid when you use Client SDK 3.70.xC5 and JDBC 3.70.xC5. Previously, SQL statements were restricted to 64 KB in length.
Easier setup of faster consistency checking
When you increase the speed of consistency checking by creating an index on the ifx_replcheck shadow column, you no longer need to include the conflict resolution shadow columns in the replicated table. In the CREATE TABLE statement, the WITH REPLCHECK keywords do not require the WITH CRCOLS keywords.
Reference: Add or drop specialized columns
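A sketch (table and index names are hypothetical, and the index composition is an assumption; see the reference for the exact requirements):

```sql
CREATE TABLE inventory (
    item_id INT PRIMARY KEY,
    qty     INT
) WITH REPLCHECK;  -- WITH CRCOLS is no longer required here

-- Index the ifx_replcheck shadow column for faster consistency checking
CREATE UNIQUE INDEX ix_replck ON inventory (item_id, ifx_replcheck);
```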
Data sampling for update statistics operations
If you have a large index with more than 100,000 leaf pages, you can generate index statistics based on sampling when you run UPDATE STATISTICS statements in LOW mode. Gathering index statistics from sampled data can increase the speed of the update statistics operations. To enable sampling, set the USTLOW_SAMPLE configuration parameter or the USTLOW_SAMPLE option of the SET ENVIRONMENT statement.
Reference: SET ENVIRONMENT statement
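For example (the table name is hypothetical, and 'ON' is assumed as the enabling value):

```sql
SET ENVIRONMENT USTLOW_SAMPLE 'ON';
UPDATE STATISTICS LOW FOR TABLE big_tab;  -- index statistics may now be sampled
```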
Automatic read-ahead operations
You can enable the database server to use read-ahead operations automatically to improve performance. Most queries can benefit from processing the query while asynchronously retrieving the data required by the query. The database server can automatically use asynchronous operations for data or it can avoid them if the data for the query is already cached. Use the AUTO_READAHEAD configuration parameter to configure automatic read-ahead operations for all queries, and use the SET ENVIRONMENT AUTO_READAHEAD statement to configure automatic read-ahead operations for a particular session. The RA_THRESHOLD configuration parameter is deprecated with this release.
Reference: SET ENVIRONMENT statement
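A session-level sketch ('1' is assumed to enable the standard automatic mode; see the reference for the full list of values):

```sql
-- Let the server decide when automatic read-ahead helps this session
SET ENVIRONMENT AUTO_READAHEAD '1';
```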
Built-in SQL compatibility functions for string manipulation and trigonometric support
The Informix® database server supports new built-in SQL string manipulation functions. These functions return either a character string derived from an argument to the function, or an integer that describes a string argument. This release also provides two built-in trigonometric support functions, which convert the units of angular measurement of a numeric expression argument from radians into degrees, or from degrees into radians. These built-in SQL functions can simplify the migration to the Informix® database server of applications developed for other database servers.
Reference: List of Expressions
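The trigonometric conversion functions are conventionally named DEGREES and RADIANS (an assumption here; confirm the names in the List of Expressions reference). A quick sketch:

```sql
SELECT DEGREES(3.14159265) AS half_turn_deg,  -- approximately 180
       RADIANS(180)        AS half_turn_rad   -- approximately 3.14159
  FROM systables WHERE tabid = 1;
```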
Table and column aliases in DML statements
The SQL parser supports new contexts for declaring aliases in SELECT, DELETE, and UPDATE statements.
Reference: Declaring an alias for the table
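For example, a sketch of an alias declared in an UPDATE statement (table and column names are hypothetical):

```sql
UPDATE orders AS o
   SET o.ship_charge = 0
 WHERE o.order_num = 1001;
```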
Case-insensitive queries on NCHAR and NVARCHAR text strings
In previous HCL Informix® releases, strings stored in all Informix® databases were treated as case-sensitive by database operations. For example, a query for the string "McDavid" returns "McDavid" but not "mcdavid", "MCDAVID", or "Mcdavid". Operations designed to disregard the case of text strings required a bts index or a functional index for each query. In this release, a database is still created as case-sensitive by default. However, you can use the NLSCASE INSENSITIVE option with the CREATE DATABASE statement to create a database that ignores the case of text strings. For example, querying "McDavid" returns "McDavid", "mcdavid", "MCDAVID", and "Mcdavid". A case-insensitive database ignores letter case only on NCHAR and NVARCHAR data types, but it treats the other built-in character data types (CHAR, LVARCHAR, and VARCHAR) as case-sensitive. You cannot include both case-sensitive and case-insensitive databases in a distributed query.
Reference: Specifying NLSCASE case sensitivity
Reference: NCHAR and NVARCHAR expressions in case-insensitive databases
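For example (clause order is assumed; see the CREATE DATABASE reference):

```sql
-- Queries on NCHAR/NVARCHAR columns in this database ignore letter case
CREATE DATABASE testdb WITH LOG NLSCASE INSENSITIVE;
```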
Fewer users require administrator access (UNIX™, Linux™)
Informix® software can be installed and administered without root privileges on UNIX™ and Linux™ operating systems. The user who performs the non-root installation does not need to be a system administrator, and this user becomes the database server administrator (DBSA), so the database server can run on the computer without user informix and group informix. This feature is useful for deploying the server in embedded software solutions, or for situations where running software that uses root privileges raises security concerns. A non-root installation does not support high-availability replication, OpenAdmin Tool (OAT) for Informix®, the ON-Bar utility, or role separation.
Reference: ALTER USER statement (UNIX, Linux)
Reference: CREATE DEFAULT USER statement (UNIX, Linux)
Reference: CREATE USER statement (UNIX, Linux)
Reference: DROP USER statement (UNIX, Linux)
New editions and product names
HCL Informix® Dynamic Server editions were withdrawn and new Informix® editions are available. Some products were also renamed. The publications in the Informix® library pertain to the renamed editions and products. For more information about the Informix® product family, go to http://www.ibm.com/software/data/informix/.
Partitioning table and index storage by an INTERVAL strategy
You can define a storage distribution strategy for tables or indexes that partitions data into a set of fragments that are each based on an interval value of the fragment key, which must be a column expression that references a single column of a numeric, DATE, or DATETIME data type. When rows are inserted that do not fit in the range fragments, the database server automatically creates new interval fragments without DBA intervention. This kind of fragmentation strategy is useful when all possible fragment key values in a growing table are not known, and the DBA does not want to allocate fragments for data that is not yet loaded.
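A sketch of a range interval strategy (the dbspace names dbs1 and dbs2 are hypothetical):

```sql
CREATE TABLE orders (
    order_num  INT,
    order_date DATE
)
FRAGMENT BY RANGE (order_date)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))   -- one new fragment per month
    STORE IN (dbs1, dbs2)
    PARTITION p0 VALUES < DATE('01/01/2010') IN dbs1;
```

Rows beyond the initial range cause the server to create new monthly fragments automatically.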
Partitioning table and index storage by a LIST strategy
You can define a storage distribution strategy for tables or indexes that partitions data into a set of fragments that are each based on a list of discrete values of the fragment key. Each value in the list must be unique among the lists for fragments of the same object. Query performance can improve through fragment elimination when the fragment key for a table has a finite set of values, and queries on the table specify equality predicates on the fragment key.
Reference: Fragmenting by LIST
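A sketch of a LIST strategy (table, partition, and dbspace names are hypothetical):

```sql
CREATE TABLE customers (
    cust_id INT,
    state   CHAR(2)
)
FRAGMENT BY LIST (state)
    PARTITION p_west VALUES ('CA', 'OR', 'WA') IN dbs1,
    PARTITION p_east VALUES ('NY', 'NJ')       IN dbs2,
    PARTITION p_rest REMAINDER                 IN dbs3;
```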
Fragment-level statistics
In previous releases, data distributions for fragmented tables were calculated at the table level to optimize query plans. This release supports a finer granularity of statistics for fragmented tables: the statistics are calculated and stored at the individual fragment level. Set the new STATLEVEL property of fragmented tables to specify whether TABLE or FRAGMENT is the granularity for data distributions, or set it to AUTO to allow the database server to automatically choose the granularity of the distribution statistics for each fragmented table.
Reference: Statistics options of the ALTER TABLE statement
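For example (the table name is hypothetical):

```sql
-- Calculate and store distributions per fragment for this table
ALTER TABLE orders STATLEVEL FRAGMENT;
```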
Automatic detection of stale statistics
You can enable Informix® to automatically detect which table or fragment and index statistics are stale, and only refresh the stale statistics when the UPDATE STATISTICS statement is run. By default, statistics are refreshed when 10% of the data is stale. You can use the STATCHANGE property when a table is created or altered to set the minimum percentage of change that is required for the data to be considered stale. The database server refreshes statistics only if the data has changed beyond that threshold since the distribution statistics were last calculated.
Reference: AUTO_STAT_MODE session environment option
Reference: Using the FORCE and AUTO keywords
Reference: SET ENVIRONMENT statement
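For example (the table name is hypothetical, and 'ON' is assumed as the enabling value for AUTO_STAT_MODE):

```sql
-- Treat statistics as stale only after 20% of the rows change
ALTER TABLE orders STATCHANGE 20;
-- Refresh only stale statistics during UPDATE STATISTICS
SET ENVIRONMENT AUTO_STAT_MODE 'ON';
```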
ALTER FRAGMENT support for list and range interval fragments
You can now apply the ALTER FRAGMENT statement of SQL to fragments of an existing database table or index that uses a list or range interval fragmentation strategy. Supported syntax includes ADD, ATTACH, DETACH, DROP, INIT, and MODIFY options. This support provides greater flexibility in data storage, and can improve the performance of data warehouse operations on large tables whose distribution strategy calls for fragments to be periodically archived, dropped, and replaced with new fragments.
Reference: Examples of the MODIFY clause with interval fragments
Dropping fragment-level statistics automatically after ALTER FRAGMENT operations
After the ALTER FRAGMENT statement of SQL redistributes data rows, the database server automatically drops fragment distribution statistics for a table or index from the system catalog. Supported syntax includes the ATTACH, DETACH, DROP, and INIT options. The next UPDATE STATISTICS operation rebuilds all fragment-level distributions, so that the query optimizer does not choose a query execution plan that is based on statistics that the ALTER FRAGMENT operation has made stale.
Improved concurrency while redefining table storage distributions
The new ONLINE option to the ALTER FRAGMENT ON TABLE statement of SQL can change the storage distribution of tables that use an interval fragmentation scheme. Applying an intent exclusive lock to the surviving table, rather than an exclusive lock, eliminates downtime for the table during ALTER FRAGMENT operations that attach or detach a fragment, or that modify the interval transition value of the table. Other users can run SELECT, UPDATE, DELETE, INSERT, and MERGE statements in parallel to the ALTER FRAGMENT ON TABLE ONLINE statement.
Reference: The ONLINE keyword in ALTER FRAGMENT operations
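A sketch (table and partition names are hypothetical):

```sql
-- Detach an old fragment while other sessions continue DML on orders
ALTER FRAGMENT ONLINE ON TABLE orders
    DETACH PARTITION p0 orders_2010;
```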
Debugging Informix® SPL routines with Optim™ Development Studio
Previous versions of Informix® support the TRACE statement of the SPL language to identify logical errors in SPL routines by examining the values of variables, arguments, return values, and error codes at run time. This release supports significantly enhanced capabilities for analyzing and correcting errors in SPL routines through line-by-line debugging sessions, using the Optim™ Development Studio debugger for Informix® SPL procedures and functions, or the Informix® Database Add-Ins for Visual Studio debugger for Informix® SPL procedures.
Reference: Debugging SPL routines
Reference: Starting an SPL debugging session with Optim Development Studio
Reference: Debugging SPL procedures with Database Add-Ins for Visual Studio
Query optimizer support for star-schema and snowflake-schema queries
This release provides enhanced query optimizer support for operations on tables for which star-schema dependencies exist between a fact table and a set of dimension tables. (A primary key column in each dimension table corresponds to a foreign key in the fact table.) New STAR_JOIN, FACT, AVOID_STAR_JOIN, and AVOID_FACT optimizer directives enable users to influence the execution plans for such queries. Similar query optimizer support is available for operations on tables within a snowflake schema. Here the data that could be organized as a single dimension table of a star schema is instead normalized into multiple tables for separate levels of the dimension. In a data warehouse environment, decomposing dimensions into snowflake structures can sometimes achieve better performance than star-join queries that join fact tables to very large dimension tables. You can use the new star-join optimizer directives to enhance query performance in warehousing applications. In addition, the SET OPTIMIZATION statement supports new syntax to define a general optimization environment for all SQL statements in the session.
Reference: Star-Join Directives
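A sketch of the directives in comment form (the sales and product schema is hypothetical):

```sql
SELECT --+ FACT(s), STAR_JOIN
       p.prod_name, SUM(s.amount)
  FROM sales s, product p
 WHERE s.prod_id = p.prod_id
 GROUP BY p.prod_name;
```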
Query optimizer support for multi-index scans
Queries in earlier releases typically use no more than one index to scan each table for qualifying rows. In this release, you can specify new access-method optimizer directives so that the query optimizer can combine one or multiple B-tree indexes and the Boolean operations in the WHERE clause to fetch qualifying data rows. Using these directives can provide better performance than full-table scans, both for OLTP queries and for data warehousing applications that query large tables.
Reference: Access-Method Directives
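A sketch, assuming MULTI_INDEX is among the new access-method directives (confirm the directive names in the reference) and that t1 has B-tree indexes on columns a and b:

```sql
SELECT --+ MULTI_INDEX(t1)
       *
  FROM t1
 WHERE a = 5 OR b = 7;  -- the optimizer can combine both indexes
```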
Session-level control of how much memory can be allocated to a query
The SET ENVIRONMENT statement supports new BOUND_IMPL_PDQ and IMPLICIT_PDQ session environment options. The sqexplain output file of the SET EXPLAIN statement can display the settings of these variables, the calculated memory limit, and the IMPLICIT_PDQ value that was granted for the query. These session environment options can improve query performance and database server throughput in online transaction processing of large tables, and in typical data warehousing applications.
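A session-level sketch ('ON' is assumed as a valid value; the options also accept other settings per the SET ENVIRONMENT documentation):

```sql
-- Let the server grant PDQ memory implicitly, within configured bounds
SET ENVIRONMENT IMPLICIT_PDQ 'ON';
SET ENVIRONMENT BOUND_IMPL_PDQ 'ON';
```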
Simplified administration of users without operating system accounts (UNIX™, Linux™)
In previous releases, each user who needed to access the database server also needed an operating system account on the host computer. Now you can configure Informix® so that users who are authenticated by an external authentication service (such as Kerberos or Microsoft™ Active Directory) can connect to Informix®. The new USERMAPPING configuration parameter specifies whether such users can access the database server, and whether any of those users can have administrative privileges. When Informix® is configured to allow user mapping, you can still control which externally authenticated users are allowed to connect to Informix® and their privileges.
Reference: Surrogate user properties (UNIX, Linux)
Reference: Revoking database server access from mapped users (UNIX, Linux)
Trusted connections improve security for multiple-tier application environments
You can define trusted contexts, which can then be used to establish trusted connections between an application server and the Informix® database server on a network. Trusted connections let you set the identity of each specific user accessing a database through the middle-tier server, which facilitates discretionary access control and auditing based on user identity. Without a trusted connection in such an environment, each action on a database is performed with the single user ID of the middle-tier server, potentially lessening granular control and oversight of database security.
Reference: CREATE TRUSTED CONTEXT statement
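A sketch of a trusted context definition (the names, address, and exact clause order are illustrative; see the CREATE TRUSTED CONTEXT reference for the precise syntax):

```sql
CREATE TRUSTED CONTEXT appserver_ctx
    USER appserver_id
    ATTRIBUTES (ADDRESS '192.0.2.10')
    ENABLE
    WITH USE FOR joe, maria WITHOUT AUTHENTICATION;
```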
Syntax support for DDL statements with IF [NOT] EXISTS conditions
Now you can include the IF NOT EXISTS keywords in SQL statements that create a database object (or a database). You can also include the IF EXISTS keywords in SQL statements that destroy a database object (or a database). If the condition is false, the CREATE or DROP operation has no effect, but no error is returned to the application. Support for the IF EXISTS and IF NOT EXISTS keywords in DDL statements simplifies the migration to Informix® of SQL applications that were originally developed for other database servers that support this syntax.
Reference: CREATE DATABASE statement
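For example (table names are hypothetical):

```sql
CREATE TABLE IF NOT EXISTS audit_log (id INT, msg VARCHAR(200));
DROP TABLE IF EXISTS scratch_tab;  -- no error if scratch_tab is absent
```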
Simplified SQL syntax for defining database tables
Removing restrictions on the order in which column attributes can be defined in Data Definition Language (DDL) statements of the SQL language simplifies the syntax rules for column definitions in the CREATE TABLE and ALTER TABLE statements. The specifications for default values can precede or follow any constraint definitions. The NOT NULL constraint does not need to be listed first if additional constraints are defined. The constraints (on a single column or on a set of multiple columns) can be defined in any order within the constraint specifications, and that list of constraint definitions can be followed (or preceded) by the default value, if a default is defined on the column. In addition, the list of constraints can include the NULL keyword to indicate that the column can accept NULL values. The NULL constraint cannot be specified with NOT NULL or PRIMARY KEY in the constraint list. This support by the Informix® SQL parser for table definitions written in other dialects of the SQL language can simplify migration to this Informix® release of data management applications that were originally developed for other database servers.
Reference: Single-Column Constraint Format
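A sketch (table and column names are hypothetical):

```sql
CREATE TABLE accounts (
    id     INT PRIMARY KEY,
    -- the DEFAULT clause may now follow the constraint specifications
    status CHAR(1) NOT NULL CHECK (status IN ('A', 'C')) DEFAULT 'A',
    notes  VARCHAR(80) NULL   -- explicit NULL constraint
);
```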
Deferred extent sizing of tables created with default storage specifications
In the CREATE TABLE statement of previous versions of Informix®, all permanent tables are created with storage allocated for a first extent. If the statement does not define a size for the extent, by default the first extent is either 16 kilobytes or 4 pages (if 16 kilobytes is too small to produce 4 pages for the table). In this release, the default size of the first extent is the same as in earlier releases. However, if the CREATE TABLE statement does not include an IN dbspace clause, an EXTENT specification, or a NEXT EXTENT specification, storage for the table is allocated when the first data row is inserted. This behavior can conserve disk space in applications that create hundreds or thousands of tables, but only a subset of those tables are typically used to store data.
Reference: Deferred extent storage allocation
Specifying the extent size when user-defined indexes are created
In earlier releases, when a user defines an index with the CREATE INDEX statement, the database server calculates the extent sizes in the storage partition where the index resides. In this release, the CREATE INDEX statement supports new syntax to specify the first extent size and the next extent size when the index is defined. The existing CREATE INDEX statement has been extended to support a new EXTENT SIZE clause, similar to the SQL syntax for defining table extent sizes in the CREATE TABLE and ALTER TABLE statements.
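For example (the index and table names are hypothetical; extent sizes are specified in kilobytes, as for tables):

```sql
CREATE INDEX ix_cust_lname ON customer (lname)
    EXTENT SIZE 32 NEXT SIZE 32;
```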
Reduced overhead for foreign key constraints on very large child tables
Foreign key constraints are associated with an index on the child table that the constraint references. For child tables with a very large number of rows, but only a few distinct foreign key values, DML operations using the index can impose substantial overhead on the server, compared to sequentially scanning the child table. For these cases, the ALTER TABLE ADD CONSTRAINT FOREIGN KEY statement of SQL can now include the optional INDEX DISABLED keywords. These keywords disable the index when the foreign key constraint is created, and can improve the efficiency of insert, delete, and update operations on very large child tables. (In CREATE TABLE statements that define foreign key constraints, the existing syntax is unchanged.)
Reference: Multiple-Column Constraint Format
Reference: Using the INDEX DISABLED keywords in a foreign key definition
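A sketch (table and constraint names are hypothetical; see the reference for the exact keyword placement):

```sql
ALTER TABLE child_tab
  ADD CONSTRAINT (FOREIGN KEY (parent_id)
      REFERENCES parent_tab CONSTRAINT fk_big_child INDEX DISABLED);
```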
Less root node contention with forest of trees indexes
If you have many concurrent users who routinely experience delays due to root node contention, you might improve query performance if you convert your B-tree index to a forest of trees index. A forest of trees index is similar to a B-tree index, but has multiple root nodes and potentially fewer levels. You create forest of trees indexes with the new HASH ON clause of the CREATE INDEX statement of SQL.
Reference: CREATE INDEX statement
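For example (the index, table, and bucket count are illustrative):

```sql
-- Spread the root into 100 hashed buckets to reduce root-node contention
CREATE INDEX ix_fot ON employee (emp_id)
    HASH ON (emp_id) WITH 100 BUCKETS;
```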
SQL expressions as arguments to the COUNT function
In earlier releases, queries can call the built-in COUNT function to return the number of qualifying rows, or the total number of non-NULL values (or of unique non-NULL values) in a specified column. This release extends the domain of COUNT arguments to SQL expressions that other aggregates accept, including CASE expressions. Current restrictions on the arguments to other SQL aggregate functions also apply to COUNT.
Reference: Aggregate Expressions
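For example (the table and column names are hypothetical):

```sql
-- Count only rows that satisfy a condition, using a CASE expression
SELECT COUNT(CASE WHEN status = 'A' THEN 1 END) AS active_cnt
  FROM accounts;
```

The CASE expression yields NULL for non-matching rows, and COUNT ignores NULL values.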
Replicate tables without primary keys
If you do not want to have a primary key, or want to be able to update the primary key, on tables replicated by Enterprise Replication, you can use the ERKEY shadow columns in place of a primary key.
Reference: Using the WITH ERKEY Keywords
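For example (the table name is hypothetical):

```sql
-- ERKEY shadow columns substitute for a primary key on replicated tables
CREATE TABLE repl_tab (
    col1 INT,
    col2 CHAR(10)
) WITH ERKEY;
```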