Large Language Model
Process text and generate human-like responses with advanced AI models.
What is the Large Language Model?
The Large Language Model (LLM) node is a powerful component that leverages state-of-the-art language models from providers like AWS Bedrock, OpenAI, and Google AI. This tool processes text inputs, generates human-like responses, and performs various language tasks based on the selected model and configuration.
How to use it?
1. Select a Model:
   - Choose from various models provided by AWS Bedrock, OpenAI, or Google AI.
   - Each model has its own strengths and capabilities; refer to the provider's documentation for details.
2. Configure Model Parameters:
   - Set the `temperature` (default: 0.7) to control output randomness.
   - Adjust the `topP` value (default: 1.0) to influence output diversity.
   - For AWS Bedrock models, specify the `region` (default: us-east-1).
3. Provide Input:
   - Connect a string input to the "Prompt Text" anchor.
   - For supported models, you can provide an image input for multimodal tasks.
4. Set Up Output:
   - Choose between "Text" (complete response) or "Streaming" (real-time chunks) output types.
5. Execute the Component:
   - Run your workflow to process the input through the selected language model.
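The `temperature` and `topP` parameters both reshape the model's next-token probability distribution before sampling. As a rough illustration (a pure-Python sketch of the general technique, not this node's actual implementation), temperature rescales the logits before the softmax, while top-p keeps only the smallest set of tokens whose cumulative probability reaches `topP`:

```python
import math

def softmax_with_temperature(logits, temperature=0.7):
    """Convert logits to probabilities; lower temperature sharpens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, top_p=1.0):
    """Keep the smallest set of tokens whose cumulative probability reaches top_p,
    then renormalize. With top_p=1.0 every token stays in play."""
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for idx, p in ranked:
        kept.append((idx, p))
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(p for _, p in kept)
    return {idx: p / total for idx, p in kept}

logits = [2.0, 1.0, 0.5, -1.0]      # toy next-token scores
cold = softmax_with_temperature(logits, temperature=0.2)  # near-greedy
hot = softmax_with_temperature(logits, temperature=1.5)   # flatter, more random
print(max(cold), max(hot))  # the top token dominates more at low temperature
print(top_p_filter(softmax_with_temperature(logits), top_p=0.9))
```

Lowering `temperature` or `topP` makes outputs more deterministic; raising them makes outputs more varied, at the cost of coherence.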
Example Task: Text Summarization Workflow
Objective: Create a workflow that accepts user-provided text and produces a concise summary.
Step-by-Step Setup
1. Add Text Input:
   - Add a Text Input node to your canvas.
   - Edit the template text to be: `Summarize the following text:`
   - Add an input variable and click the created variable to make use of it in the text. For more details on this step, see the text input documentation.
2. Add Input Node:
   - Add an Input node.
   - Change its name to `ArticleText`.
   - Change its type to text.
3. Add Large Language Model:
   - Locate the Large Language Model node in your node selection.
   - Add it to the canvas.
   - Configure it with your API key and select an appropriate model.
4. Add Output:
   - Add a String Output node to display the result.
5. Connect Components:
   - Link the output of the `ArticleText` Input node to the created variable in the Text Input node.
   - Link the Text Input node's output to the "Prompt Text" anchor of the Large Language Model node.
   - Link the Large Language Model's output to the String Output node.
This completes the workflow.
Execution
To execute the workflow described in the step-by-step guide, add your article text to the `ArticleText` Input node's value field and run the workflow.
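Conceptually, the wiring above substitutes the Input node's value into the Text Input template and hands the result to the model as its prompt. A minimal sketch of that substitution (the variable and function names here are illustrative, not the tool's internal API):

```python
# Hypothetical recreation of the workflow's prompt assembly.
# The template mirrors the Text Input node; {ArticleText} is the input variable.
template = "Summarize the following text:\n{ArticleText}"

def build_prompt(article_text: str) -> str:
    """Substitute the Input node's value into the Text Input template."""
    return template.format(ArticleText=article_text)

prompt = build_prompt("Large language models generate text from prompts.")
print(prompt)  # the prompt the LLM node would receive on its "Prompt Text" anchor
```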
Additional Information
- Authentication: Set up the necessary API keys and permissions for your chosen provider.
- Multimodal Inputs: Some models support image inputs. Check the model's documentation for supported formats.
- Parameter Tuning: Experiment with `temperature` and `topP` values to balance creativity and coherence.
- Ethical Considerations: Be aware of potential biases in AI-generated content. Review outputs before use in sensitive applications.
- Stay Updated: Providers frequently release new models with improved capabilities.
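If you also call these providers directly outside this node, credentials are conventionally supplied through environment variables rather than hard-coded. A hedged sketch of collecting them, using variable names the official SDKs commonly read (`OPENAI_API_KEY`, `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY`, `GOOGLE_API_KEY`); check each provider's documentation for the exact names it expects:

```python
import os

def load_credentials():
    """Collect provider credentials from the environment; None means not configured.
    Variable names follow common SDK conventions, not this tool's configuration."""
    return {
        "openai": os.environ.get("OPENAI_API_KEY"),
        "aws": {
            "access_key_id": os.environ.get("AWS_ACCESS_KEY_ID"),
            "secret_access_key": os.environ.get("AWS_SECRET_ACCESS_KEY"),
            "region": os.environ.get("AWS_REGION", "us-east-1"),  # node default
        },
        "google": os.environ.get("GOOGLE_API_KEY"),
    }

creds = load_credentials()
missing = [name for name, value in creds.items()
           if value is None or (isinstance(value, dict) and None in value.values())]
if missing:
    print(f"Providers without complete credentials: {missing}")
```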
Troubleshooting
- If you encounter rate limits, check your API usage and consider upgrading your plan.
- For unexpected outputs, review your prompt and try adjusting the temperature or topP values.
- Ensure your AWS region is compatible with the chosen Bedrock model.
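For the rate-limit case, a common client-side mitigation is retrying with exponential backoff. A generic sketch of the pattern (the exception class and the call being retried are placeholders for your provider SDK's equivalents):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the provider SDK's rate-limit exception."""

def call_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` on rate-limit errors, doubling the wait each attempt
    with a little jitter; re-raise once retries are exhausted."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Demo: a fake call that is rate-limited twice, then succeeds.
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError("slow down")
    return "summary text"

print(call_with_backoff(flaky_call, base_delay=0.01))
```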