
Output

Display results from an LLM chain

What is Output?

Screenshot: the Output node, showing a required 'Output' field with a dropdown set to 'Text' and a 'String' input anchor on the left.

The Output is the terminal node in your workflow: it displays processed text or results from a Large Language Model (LLM) chain. It can handle several output formats, including text, JSON, and files, making it versatile across different use cases.

How to use it?

The Output is straightforward to use. Follow these steps to integrate it into your workflow:

  1. Add Output:

    • Drag and drop the Output into your workflow canvas from the Output category.
  2. Connect Input Anchors:

    • Link the output of the preceding node (e.g., an API Action or LLM) to the Input anchor of the Output.
    • The Output accepts different types of data inputs: Text, JSON, and File.
  3. Configure Input Anchors:

    • Select the type of data you are expecting (Text, JSON, or File) by configuring the input options.
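Conceptually, the wiring described in steps 2 and 3 could be sketched as the following flow definition. This is an illustrative fragment only; the field names (`nodes`, `edges`, `outputType`, `targetAnchor`) are assumptions for clarity, not the tool's actual export format:

```json
{
  "nodes": [
    { "id": "llm_1", "type": "OpenAI LLM" },
    { "id": "output_1", "type": "Output", "config": { "outputType": "Text" } }
  ],
  "edges": [
    { "source": "llm_1", "target": "output_1", "targetAnchor": "Input" }
  ]
}
```

The key point is that the Output node's only connection is its Input anchor, and its only configuration is the expected data type.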

Features

  • The Output renders Markdown responses as formatted text.
  • The Output can be configured to display JSON or file outputs. This is useful when dealing with more complex data structures or when you need to visualize file outputs directly.
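To make the three display modes concrete, here is a minimal sketch of how a terminal output node might render each supported input type. The function name `render_output` and its behavior are illustrative assumptions, not the tool's actual implementation:

```python
import json

def render_output(value, output_type="Text"):
    """Return a display string for the given output type (hypothetical sketch)."""
    if output_type == "Text":
        # Plain or Markdown text is passed through; the UI handles formatting.
        return str(value)
    if output_type == "JSON":
        # Pretty-print structured data so complex results are easy to inspect.
        return json.dumps(value, indent=2)
    if output_type == "File":
        # For files, show a reference to the file rather than raw bytes.
        return f"[file: {value}]"
    raise ValueError(f"Unsupported output type: {output_type}")

print(render_output({"answer": 42}, "JSON"))
```

Configuring the node's type dropdown is analogous to choosing the `output_type` branch here: the same upstream value can be shown as raw text, pretty-printed JSON, or a file reference.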

Additional Information

  • Ensure your preceding nodes (like OpenAI LLM or API Action) are correctly configured to generate the expected data type.
  • The Output does not require any additional input parameters, making it a plug-and-play solution for displaying results.

By following these steps, you can effectively utilize the Output to display results from various processing nodes in your workflow.