- Map Node
A Map Node invokes an HCL Link map within a flow.
- Flow Node
A Flow node invokes a sub-flow within a flow.
- Source and Target Nodes
Source and Target Nodes provide HCL Link with an outside-in approach to developing integrations.
- Request Node
The Request Node has a single input request terminal and a single output response terminal.
- Cache Read and Write Nodes
Cache Read and Cache Write node functionality is described in this section.
- Clone Node
The Clone Node has one input and two output terminals.
- Decision Node
The Decision Node routes input data to the true or false terminal, depending on the outcome of its condition.
- Fail Node
The Fail Node causes the flow execution to fail.
- Format Converter Node
The Format Converter Node can be used to quickly convert data from one format to another.
- JSON Read Node
This documentation describes the function and use of the JSON Read Node.
- JSON Transform Node
The JSON Transform node is used to transform JSON documents from one form to another.
- Java Node
The Java Node invokes a Java class, performing user-defined functionality on the input data according to the properties specified for the class (see the sketch after this list).
- Join Node
The Join Node gathers individual results and appends them to a single output file or terminal.
- Log Node
The Log Node writes the raw data from its input to a log file and propagates the data from the input to the output terminal.
- Passthrough Node
The Passthrough Node propagates data from the input to the output terminal.
- REST Client Node
The REST Client Node provides a simple and powerful way to access REST services.
- Route Node
The Route Node provides a way to route data conditionally to one or more of its outputs. The node evaluates a condition against a flow variable and, based on the result, sends the data to output 1, output 2, or both (a conceptual sketch follows this list).
- Sleep Node
The Sleep Node suspends execution of the flow for the specified number of milliseconds.
- Split Node
A Split Node should be used when there is a need to split CSV data processing, for example when processing a single large CSV input becomes excessively time consuming (see the sketch after this list).
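
The following is a minimal sketch of the kind of user-defined Java class a Java Node could invoke, assuming the node passes the input data and its configured properties to a processing method. The class name, method signature, and property names are illustrative assumptions, not the actual HCL Link Java Node contract.

```java
import java.util.Map;

// Hypothetical user-defined class of the kind a Java Node might invoke.
// The class name, method signature, and property map are illustrative
// assumptions, not the actual HCL Link Java Node interface.
public class MaskAccountNumbers {

    /**
     * Applies user-defined logic to the node's input data.
     *
     * @param input      the raw input data passed to the node
     * @param properties configuration properties specified on the node
     * @return the transformed data to emit on the output terminal
     */
    public String process(String input, Map<String, String> properties) {
        // Read a property that controls the behaviour, with a default value.
        String maskChar = properties.getOrDefault("maskCharacter", "*");

        // Example user-defined functionality: mask all but the last four
        // digits of any digit sequence in the input.
        return input.replaceAll("\\d(?=\\d{4})", maskChar);
    }
}
```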
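The routing behaviour described for the Route Node can be pictured as a condition on a flow variable selecting one or both outputs. The sketch below, with an assumed flow variable named region and an assumed routing rule, illustrates the idea only; it does not show how Route Node conditions are actually configured.

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual illustration of Route Node behaviour: a condition on a flow
// variable decides whether the data goes to output 1, output 2, or both.
// The flow variable "region" and the rule below are illustrative assumptions.
public class RouteNodeConcept {

    /** Returns the output terminals the data should be sent to. */
    static List<String> route(String region) {
        List<String> outputs = new ArrayList<>();
        if ("EU".equals(region) || "GLOBAL".equals(region)) {
            outputs.add("output1");   // condition for output 1 is true
        }
        if (!"EU".equals(region)) {
            outputs.add("output2");   // condition for output 2 is true
        }
        return outputs;               // "GLOBAL" satisfies both conditions
    }

    public static void main(String[] args) {
        System.out.println(route("EU"));     // [output1]
        System.out.println(route("US"));     // [output2]
        System.out.println(route("GLOBAL")); // [output1, output2]
    }
}
```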
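As a rough illustration of why splitting helps with large CSV input, the sketch below divides CSV rows into fixed-size chunks so that each chunk can be processed independently. The chunk size and method names are assumptions made for illustration, not part of the Split Node configuration.

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual illustration of splitting CSV input: dividing the rows into
// fixed-size chunks lets each chunk be processed independently.
public class SplitCsvConcept {

    /** Splits CSV lines (excluding the header row) into chunks of at most chunkSize rows. */
    static List<List<String>> split(List<String> csvLines, int chunkSize) {
        List<String> rows = csvLines.subList(1, csvLines.size()); // skip the header row
        List<List<String>> chunks = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += chunkSize) {
            chunks.add(new ArrayList<>(rows.subList(i, Math.min(i + chunkSize, rows.size()))));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<String> csv = List.of("id,amount", "1,10", "2,20", "3,30", "4,40", "5,50");
        split(csv, 2).forEach(System.out::println); // three chunks of at most 2 rows each
    }
}
```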