The documentation provided here describes how to use the Design Server.
A Flow node invokes a sub-flow within a flow. Sub-flows need not be wired to other nodes; it is valid to have a flow that is composed of multiple sub-flows that each have listeners.
This documentation describes how flows can be run from the REST API, from listeners, or on a schedule.
This documentation describes the function and use of the Cache Read and Cache Write nodes.
The Cache Read node reads key/value pairs from the global cache or from flow variables. Optionally, it can delete keys from the cache after reading them.
This documentation describes the settings available for the Cache Read node.
This documentation describes Design Server projects.
A Map node invokes a Link map within a flow.
Source and Target nodes provide Link with an outside-in approach to developing integrations. The inside-out approach means that you begin with a map, add inputs and outputs, and then configure these to access resources. Connections might be created in the process. The outside-in approach implements the integration from the opposite direction: you begin with the outside interfaces, and then connect them by providing maps and other artifacts.
This documentation describes how to configure the Cache Read node and the Cache Write node.
Determines whether to read values from flow variables or from the cache.
The format of the input and output data. For the Delimited format, see the sketch after these settings.
The Key Delimiter is enabled when the Data Format is Delimited.
The Record Delimiter is enabled when the Data Format is Delimited.
The Include Key is enabled when the Data Format is Delimited.
Prefix the key values with this value. This property can include flow variables.
Determines whether or not to delete the matched keys from the cache after reading the value.
When executed, the node reports the property values and matched keys in the log.
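As a minimal illustration of how the Delimited-format settings can combine, the following plain-Java sketch formats matched key/value pairs using a key delimiter, a record delimiter, the Include Key option, a key prefix, and delete-after-read behavior. The property values, the requested keys, and the in-memory map standing in for the global cache are all assumptions made for this sketch; it is not the node's implementation.

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DelimitedCacheReadSketch {

    // Hypothetical values standing in for the node's Delimited-format properties.
    static final String KEY_DELIMITER = "=";        // separates key from value in a record
    static final String RECORD_DELIMITER = "\n";    // separates records
    static final boolean INCLUDE_KEY = true;        // emit "key=value" rather than just "value"
    static final String KEY_PREFIX = "order-";      // prepended to each key before the lookup
    static final boolean DELETE_AFTER_READ = true;  // remove matched keys once they are read

    public static void main(String[] args) {
        // Stand-in for the global cache (the node can read the global cache or flow variables).
        Map<String, String> cache = new LinkedHashMap<>();
        cache.put("order-1001", "shipped");
        cache.put("order-1002", "pending");
        cache.put("invoice-77", "open");

        // Keys requested from the cache; the configured prefix is applied to each one.
        List<String> requestedKeys = List.of("1001", "1002");

        StringBuilder output = new StringBuilder();
        for (String key : requestedKeys) {
            String fullKey = KEY_PREFIX + key;
            String value = cache.get(fullKey);
            if (value == null) {
                continue; // unmatched keys produce no record
            }
            if (output.length() > 0) {
                output.append(RECORD_DELIMITER);
            }
            if (INCLUDE_KEY) {
                output.append(fullKey).append(KEY_DELIMITER);
            }
            output.append(value);
            if (DELETE_AFTER_READ) {
                cache.remove(fullKey); // matched keys are deleted after their value is read
            }
        }

        System.out.println(output);                              // two delimited records
        System.out.println("left in cache: " + cache.keySet()); // [invoice-77]
    }
}

Running the sketch prints two delimited records and shows that only the unmatched key remains in the cache.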
The Cache Write node writes key/value pairs to the global cache or to flow variables, based on the scope property.
The Clone node has one input and two output terminals. This node clones, or copies, the input data from the single input terminal to both output terminals.
Fails the flow execution.
The Format Converter node can be used to quickly convert data from one format to another.
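As a rough illustration of such a conversion (a hypothetical CSV-to-JSON example, not the node's configuration or a list of its supported formats), a small Java sketch:

public class FormatConversionSketch {
    public static void main(String[] args) {
        // Invented CSV record; this shows just one possible conversion.
        String csvHeader = "id,status";
        String csvRow = "1001,shipped";

        String[] names = csvHeader.split(",");
        String[] values = csvRow.split(",");

        // Build the equivalent JSON object field by field.
        StringBuilder json = new StringBuilder("{");
        for (int i = 0; i < names.length; i++) {
            if (i > 0) {
                json.append(", ");
            }
            json.append("\"").append(names[i]).append("\": \"").append(values[i]).append("\"");
        }
        json.append("}");

        System.out.println(json); // {"id": "1001", "status": "shipped"}
    }
}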
Invokes a Java class that performs user-defined functionality on the input, based on the properties specified for the class.
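For illustration only, the sketch below shows the kind of user-defined Java class such a node might invoke. The class name, the process method, and the prefix parameter standing in for a node property are all hypothetical; the actual interface required by the Java node is defined by HCL Link and is not shown here.

// Hypothetical user-defined class; the real contract expected by the Java node may differ.
public class UppercasePayload {

    // Example of user-defined functionality driven by a node property:
    // "prefix" stands in for a property configured on the node.
    public String process(String inputData, String prefix) {
        return prefix + inputData.toUpperCase();
    }

    public static void main(String[] args) {
        UppercasePayload handler = new UppercasePayload();
        System.out.println(handler.process("order accepted", "LOG: "));
        // prints: LOG: ORDER ACCEPTED
    }
}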
The Join node gathers the individual results and appends them to a single output file or terminal.
This node logs the raw data from the node input to a file and propagates the data from the input terminal to the output terminal.
The REST Client flow node provides a simple and powerful way to access REST services. It can be used to invoke REST APIs directly, or it can use prebuilt configurations that define the APIs of the service.
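For context, the following standalone Java sketch performs the kind of direct REST call that the node encapsulates. It uses the standard java.net.http client and a placeholder URL; it is not the node's configuration syntax.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestCallSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; a REST Client node would carry the URL,
        // method, headers, and body as node properties instead.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/orders/1001"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}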
The Route node provides a way to route data conditionally to one or more outputs of the node. The node makes its decision by evaluating a condition on a flow variable and, based on the result, sends the data to output 1, output 2, or both.
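A minimal sketch of that decision, in plain Java with an assumed flow variable name and threshold (the real node is configured through properties rather than code):

import java.util.Map;

public class RouteDecisionSketch {
    public static void main(String[] args) {
        // Hypothetical flow variable; "recordCount" and the threshold are invented.
        Map<String, Integer> flowVariables = Map.of("recordCount", 250);

        String data = "...payload...";
        boolean condition = flowVariables.get("recordCount") > 100;

        // The condition result decides whether output 1, output 2, or both receive the data.
        if (condition) {
            System.out.println("output1 <- " + data); // large batches
        } else {
            System.out.println("output2 <- " + data); // small batches
        }
    }
}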
This node suspends the execution of the flow for the specified number of milliseconds.
Use a Split node when CSV data processing needs to be divided up. This might be the case when processing becomes excessively time consuming, for example when large CSV data is sent to the flow input terminal, or when it becomes memory intensive because complex data validation is performed. The Split node can be used for these and other data processing tasks that can be done in parallel.
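To make the parallelism concrete, here is a small self-contained Java sketch that splits CSV records, validates them in parallel, and joins the results into a single output, roughly the division of work between a Split node and a Join node. The CSV content and the validation rule are invented for the example.

import java.util.List;
import java.util.stream.Collectors;

public class SplitJoinSketch {
    public static void main(String[] args) {
        // Invented CSV payload; a flow would receive this on its input terminal.
        String csv = "id,amount\n1,10\n2,-3\n3,42";

        // "Split": break the payload into individual records (skipping the header).
        List<String> records = csv.lines().skip(1).collect(Collectors.toList());

        // Process each record in parallel, e.g. an expensive validation,
        // then "Join": append the individual results into a single output.
        String joined = records.parallelStream()
                .map(record -> {
                    String[] fields = record.split(",");
                    boolean valid = Integer.parseInt(fields[1]) >= 0;
                    return fields[0] + (valid ? ": ok" : ": invalid amount");
                })
                .collect(Collectors.joining("\n"));

        System.out.println(joined);
    }
}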
A Flow that has a Map node as its first node, and uses a File adapter for an input, can enable that node's input to be a Watch.
Flow audits are a way to retrieve more verbose information about a flow instance.
Flow variables are process data variables that accompany the flow execution and are accessible to all nodes in the flow while it runs under the flow executor/engine context.
How to use the HCL Link Design Server.
Use the schema designer to define, modify, and view schemas. A schema describes the syntax, structure, and semantics of your data.