- Maps
A map node invokes a LNK map within a flow.
- Flow
A Flow node invokes a subflow within a flow. Subflows need not be wired to other nodes; it is valid to have a flow that is composed of multiple subflows that each have listeners.
- Source and Target
Source and Target nodes provide Link with an outside-in approach to developing integrations. With the inside-out approach, you begin with a map, add inputs and outputs, and then configure these to access resources; connections might be created in the process. The outside-in approach implements the integration from the opposite direction: you begin with the outside interfaces and then connect them by providing maps and other artifacts.
- Cache Read and Write
This documentation describes the function and use of the Cache Read and Cache Write nodes.
- Clone
The Clone node has one input and two output terminals. It clones, or copies, the data from the single input terminal to both output terminals.
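The behavior can be pictured with a small sketch in plain Java (not the LNK API): the input payload is copied so that each output terminal receives its own copy of the data.

```java
import java.util.Arrays;

/** Conceptual sketch of Clone behavior (plain Java, not the LNK API): one input, two outputs. */
public class CloneSketch {

    /** Holder for the two output terminals. */
    public record Outputs(byte[] output1, byte[] output2) { }

    /** Copy the input payload so that each output terminal gets its own copy. */
    public static Outputs cloneInput(byte[] input) {
        return new Outputs(
                Arrays.copyOf(input, input.length),   // output terminal 1
                Arrays.copyOf(input, input.length));  // output terminal 2
    }

    public static void main(String[] args) {
        Outputs out = cloneInput("order-12345".getBytes());
        System.out.println(new String(out.output1()));  // order-12345
        System.out.println(new String(out.output2()));  // order-12345
    }
}
```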
- Decision
- Fail
Fails the flow execution.
- Format Converter
The Format Converter node can be used to quickly convert data from one format to another.
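As a rough picture of what a format conversion involves, the plain-Java sketch below turns one CSV record into a JSON object. It is illustrative only and says nothing about which formats the node itself supports or how it performs the conversion.

```java
import java.util.List;
import java.util.StringJoiner;

/** Illustrative conversion of a CSV record into a JSON object; not the node's implementation. */
public class CsvToJsonSketch {

    /** Pair each header with the matching value and emit a flat JSON object. */
    static String toJson(List<String> headers, List<String> values) {
        StringJoiner json = new StringJoiner(", ", "{", "}");
        for (int i = 0; i < headers.size(); i++) {
            json.add("\"" + headers.get(i) + "\": \"" + values.get(i) + "\"");
        }
        return json.toString();
    }

    public static void main(String[] args) {
        System.out.println(toJson(List.of("id", "name", "qty"),
                                  List.of("42", "Widget", "7")));
        // {"id": "42", "name": "Widget", "qty": "7"}
    }
}
```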
- JAVA
The JAVA node invokes a Java class that performs user-defined functionality on the input, based on the properties specified for the Java class.
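As a rough illustration, a user-supplied class might look like the sketch below. The class name, constructor, and `process` method are hypothetical; the actual interface the JAVA node expects is defined by the product, and node properties are only assumed here to arrive through the constructor.

```java
/**
 * Hypothetical user-defined class of the kind a JAVA node might invoke.
 * The class name, method signature, and property handling are assumptions, not the LNK contract.
 */
public class UppercaseTransformer {

    private final String targetEncoding;

    /** Properties configured on the node are assumed, for illustration, to be passed in here. */
    public UppercaseTransformer(String targetEncoding) {
        this.targetEncoding = targetEncoding;
    }

    /** Apply the user-defined logic to the input payload and return the result. */
    public byte[] process(byte[] input) throws java.io.UnsupportedEncodingException {
        String text = new String(input, targetEncoding);
        return text.toUpperCase().getBytes(targetEncoding);
    }

    public static void main(String[] args) throws Exception {
        UppercaseTransformer t = new UppercaseTransformer("UTF-8");
        System.out.println(new String(t.process("hello flow".getBytes("UTF-8")), "UTF-8"));
    }
}
```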
- Join
The Join node gathers the individual results and appends them to a single output file or terminal.
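Conceptually, the node behaves like the following plain-Java sketch (not the LNK API), which appends each partial result, in order, to one combined output.

```java
import java.util.ArrayList;
import java.util.List;

/** Conceptual sketch of Join behavior: append each partial result to a single output. */
public class JoinSketch {

    /** Gather the individual results, in order, into one output. */
    static List<String> join(List<List<String>> partialResults) {
        List<String> output = new ArrayList<>();
        for (List<String> part : partialResults) {
            output.addAll(part);   // append this result to the single output
        }
        return output;
    }

    public static void main(String[] args) {
        List<List<String>> parts = List.of(
                List.of("row-1", "row-2"),
                List.of("row-3"),
                List.of("row-4", "row-5"));
        System.out.println(join(parts));  // [row-1, row-2, row-3, row-4, row-5]
    }
}
```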
- Log
The Log node writes the raw data from its input terminal to a file and propagates the input data to the output terminal.
- REST Client
The REST Client flow node provides a simple and powerful way to access REST services. It can be used to invoke REST APIs directly, or it can use prebuilt configurations that define the APIs of the service. The node supports:
- Route
The Route node provides a way to route data conditionally to one or more outputs of the node. The node evaluates a condition on a flow variable and, based on the result, sends the data to output 1, output 2, or both.
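The routing decision can be sketched in plain Java as follows; the condition, the flow-variable value, and the "send to both outputs when the value is empty" rule are illustrative assumptions, not product behavior.

```java
import java.util.EnumSet;
import java.util.Set;
import java.util.function.Predicate;

/** Conceptual sketch of Route behavior (not the LNK API). */
public class RouteSketch {

    enum Output { OUTPUT_1, OUTPUT_2 }

    /** Evaluate a condition against a flow-variable value and decide which output(s) receive the data. */
    static Set<Output> route(String flowVariable, Predicate<String> condition) {
        if (flowVariable == null || flowVariable.isEmpty()) {
            return EnumSet.allOf(Output.class);               // send to both (illustrative rule)
        }
        return condition.test(flowVariable)
                ? EnumSet.of(Output.OUTPUT_1)                 // condition true  -> output 1
                : EnumSet.of(Output.OUTPUT_2);                // condition false -> output 2
    }

    public static void main(String[] args) {
        Predicate<String> isPriority = v -> v.equals("PRIORITY");
        System.out.println(route("PRIORITY", isPriority));   // [OUTPUT_1]
        System.out.println(route("STANDARD", isPriority));   // [OUTPUT_2]
        System.out.println(route("", isPriority));           // [OUTPUT_1, OUTPUT_2]
    }
}
```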
- Sleep
This node suspends the execution of the flow for the specified number of milliseconds.
- Split
A Split node should be used when CSV data processing needs to be split into pieces that can run in parallel. This is useful when processing becomes excessively time consuming, for example when large CSV data is sent to the flow input terminal, or memory intensive, for example when complex data validation is performed. The Split node can be used for these and other data processing tasks that can be done in parallel.
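The idea behind splitting can be pictured with the plain-Java sketch below (this is not the Split node's implementation): the CSV rows are divided into chunks that are processed on separate threads, and the partial results are gathered afterwards, which is the role a Join node plays in a flow.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Conceptual sketch of splitting CSV work for parallel processing (plain Java, not the LNK node). */
public class SplitSketch {

    /** Divide the CSV rows into fixed-size chunks that can be processed independently. */
    static List<List<String>> split(List<String> csvRows, int chunkSize) {
        List<List<String>> chunks = new ArrayList<>();
        for (int i = 0; i < csvRows.size(); i += chunkSize) {
            chunks.add(csvRows.subList(i, Math.min(i + chunkSize, csvRows.size())));
        }
        return chunks;
    }

    /** Stand-in for per-chunk work such as validation; returns the number of rows handled. */
    static int processChunk(List<String> chunk) {
        return (int) chunk.stream().filter(row -> row.contains(",")).count();
    }

    public static void main(String[] args) throws Exception {
        List<String> rows = List.of("a,1", "b,2", "c,3", "d,4", "e,5");

        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Future<Integer>> results = new ArrayList<>();
        for (List<String> chunk : split(rows, 2)) {
            Callable<Integer> task = () -> processChunk(chunk);  // each chunk runs on its own thread
            results.add(pool.submit(task));
        }

        int processed = 0;
        for (Future<Integer> f : results) {
            processed += f.get();   // gathering partial results, as a downstream Join node would
        }
        pool.shutdown();
        System.out.println("Rows processed: " + processed);  // Rows processed: 5
    }
}
```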