Custom caching
HCL Cache extends the capabilities of DynaCache and introduces remote caching. Therefore, additional configuration options are available for custom caches. Custom caches can be configured using the Cache Configuration YAML file for extensions.
Custom caches are declared in the WebSphere configuration and accessed with the DistributedMap interface. Migrated custom caching code does not require modification to use the HCL Cache.
The size of a cache is used as the starting point for local caching. Disk offload is not available; remote caching is recommended instead. See Local and Remote Caching for details.
Registering Custom Caches in the WebSphere Configuration
- Transaction server container
- When custom caches are added with the Transaction server run-engine command, they are automatically mapped to the HCL Cache provider by default.
- Liberty containers
- Custom caches defined in the configDropins/overrides directory must explicitly specify the HCL Cache cacheProviderName, as in the following example:
<?xml version="1.0" encoding="UTF-8"?>
<server>
    <distributedMap id="services/cache/CustomCache1"
                    memorySizeInEntries="2000"
                    memorySizeInMB="100"
                    cacheProviderName="hcl-cache"/>
</server>
Configuring HCL Cache Options
Custom caches can be tuned with the Cache Configuration YAML file for extensions. For example, the following configuration disables the local cache for services/cache/MyCustomCache so that it uses remote caching only:
cacheConfigs:
  ...
  services/cache/MyCustomCache:
    remoteCache:
      enabled: true
    localCache:
      enabled: false
- Using the HCL Cache for in-memory data storage
- The HCL Cache is traditionally used for caching scenarios in which entries that are not found in the cache can be regenerated by the application. With the incorporation of the remote cache, which allows for large amounts of data storage, the HCL Cache can also be used as a temporary in-memory database. In its default configuration, the HCL Cache runs maintenance processes that remove cache entries when needed to avoid out-of-memory conditions, which can result in the loss of cache entries. If the objects stored in the cache cannot be regenerated, the Low Memory Maintenance process for the specific cache must be disabled to avoid data loss:
services/cache/MyCustomCache:
  remoteCache:
    onlineLowMemoryMaintenance:
      enabled: false
Low Memory Maintenance can continue to work on other caches. If the caches that disable Low Memory Maintenance require a significant amount of memory, the memory made available to Redis (maxmemory) might need to be retuned. The Redis persistence options might also need to be updated to a more durable configuration (for example, enabling both AOF and RDB).
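For illustration only, assuming Redis is configured directly through redis.conf (the exact mechanism depends on how your Redis deployment is managed and is not defined by the HCL Cache documentation), a more durable persistence configuration that enables both AOF and RDB could look like the following sketch:

appendonly yes          # enable the append-only file (AOF)
appendfsync everysec    # fsync the AOF once per second
save 900 1              # also take an RDB snapshot if at least 1 key changed in 900 seconds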
Accessing a Cache with the DistributedMap Interface
import javax.naming.InitialContext;
import com.ibm.websphere.cache.DistributedMap;

// Obtain the cache reference using its JNDI name
InitialContext ctx = new InitialContext();
DistributedMap myCustomCache = (DistributedMap) ctx.lookup("services/cache/MyCustomCache");

// Insert into the cache
myCustomCache.put("cacheId", myCacheEntryObject);
final int priority = 1;
// The time in seconds that the cache entry should remain in the cache. The default value is -1, which means the entry does not time out.
final int timeToLive = 1800;
// The time in seconds that the cache entry should remain in the local cache if not accessed.
final int inactivityTime = 900;
// The sharing policy is not supported by the HCL Cache
final int sharingPolicy = 0;
final String [] dependencyIds = new String [] {"dependencyId1", "dependencyId2"};
myCustomCache.put("cacheId", myCacheEntryObject, priority, timeToLive, inactivityTime, sharingPolicy, dependencyIds );
// Read an object from cache
Object cachedObject = myCustomCache.get("cacheId");
// Invalidate by dependency id
myCustomCache.invalidate("dependencyId1");
// Empty cache
myCustomCache.clear();
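As a usage sketch only (the helper class, key naming, and regenerateValue method are hypothetical and not part of the product API), the cache-aside pattern described above, where an entry that is not found in the cache is regenerated by the application and stored for later requests, could combine the DistributedMap calls as follows:

import javax.naming.InitialContext;
import javax.naming.NamingException;
import com.ibm.websphere.cache.DistributedMap;

public class CustomCacheHelper {

    private final DistributedMap cache;

    public CustomCacheHelper() throws NamingException {
        // Look up the custom cache once and reuse the reference
        InitialContext ctx = new InitialContext();
        cache = (DistributedMap) ctx.lookup("services/cache/MyCustomCache");
    }

    // Cache-aside lookup: return the cached value if present; otherwise
    // regenerate it, store it with a TTL and a dependency ID, and return it.
    public String getValue(String id) {
        String cacheId = "value:" + id;
        String value = (String) cache.get(cacheId);
        if (value == null) {
            value = regenerateValue(id);
            // priority 1, 30-minute TTL, 15-minute local inactivity timeout,
            // sharing policy 0 (not supported), one dependency ID for group invalidation
            cache.put(cacheId, value, 1, 1800, 900, 0, new String[] { "group:" + id });
        }
        return value;
    }

    // Invalidate all entries tagged with the dependency ID for this id
    public void invalidate(String id) {
        cache.invalidate("group:" + id);
    }

    // Hypothetical placeholder for the application logic that rebuilds the value
    private String regenerateValue(String id) {
        return "value-for-" + id;
    }
}

Tagging entries with a dependency ID, as in this sketch, allows a group of related entries to be removed with a single invalidate call rather than clearing the whole cache.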