Advanced fields for Google Vertex AI actions on Zapier
When using Google Vertex AI actions in your Zaps, you can adjust the parameters of the large language model (LLM) to improve its results.
You can adjust these parameters:
- Temperature
- Max Output Token
- topP
- topK
Temperature
Temperature controls the degree of randomness in the model's responses. A lower value makes the output more deterministic, selecting the most probable tokens, while a higher value produces more unexpected, creative responses.
You can set the temperature between 0.0 and 1.0.
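To see why a lower temperature is more deterministic, here is a minimal sketch of how temperature scaling works mathematically. The function name and logit values are illustrative, not part of the Vertex AI API: the model's raw scores (logits) are divided by the temperature before being converted to probabilities, so small temperatures exaggerate the gap between likely and unlikely tokens.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to probabilities, scaled by temperature.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random). A temperature of
    exactly 0 is typically treated as greedy decoding instead, since
    dividing by zero is undefined.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
low = softmax_with_temperature(logits, 0.1)   # nearly all mass on token 0
high = softmax_with_temperature(logits, 1.0)  # probability spread out more
```

With a temperature of 0.1 the top token's probability approaches 1, so the same prompt tends to produce the same answer every run; at 1.0 the alternatives keep meaningful probability, so responses vary more.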
Max Output Tokens
Max Output Tokens is how you define the maximum number of tokens to generate as a response. A token is approximately 4 characters, so 100 tokens correspond to roughly 60-80 words.
You can set the Max Output Tokens between 1 and 1024.
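If you want to estimate how large a limit you need, the 4-characters-per-token rule of thumb above can be applied directly. This helper is a hypothetical illustration (real tokenizers vary by model and language), not a Vertex AI or Zapier function:

```python
def estimate_tokens(text):
    """Rough token estimate using the ~4 characters per token rule."""
    return max(1, round(len(text) / 4))

estimate_tokens("hello world")  # 11 characters -> about 3 tokens
```

For example, if you expect responses of around 300 words, budgeting roughly 400-500 output tokens leaves comfortable headroom within the 1-1024 range.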
topP
The topP field controls how the model selects tokens using nucleus sampling: tokens are considered from most probable to least probable until the sum of their probabilities reaches the topP value, and only that set is eligible for selection. It works differently from temperature, which scales the randomness of every token; topP instead sets a cumulative probability threshold on which tokens can be chosen at all.
A lower topP will result in more focused text, whereas a higher topP will allow more diverse text.
You can set the topP between 0.0 and 1.0.
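The cumulative-threshold behavior can be sketched in a few lines. This is an illustrative implementation of nucleus sampling's filtering step (the function name and probabilities are made up for the example), not code from Vertex AI itself:

```python
def top_p_filter(probs, top_p):
    """Return the indices of the smallest set of most-probable tokens
    whose cumulative probability reaches top_p (nucleus sampling).
    The model then samples only from this set."""
    ranked = sorted(enumerate(probs), key=lambda pair: pair[1], reverse=True)
    kept, cumulative = [], 0.0
    for index, p in ranked:
        kept.append(index)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

probs = [0.5, 0.25, 0.125, 0.125]
top_p_filter(probs, 0.7)   # keeps the top two tokens (0.5 + 0.25 >= 0.7)
top_p_filter(probs, 1.0)   # keeps every token
```

Notice that a small topP like 0.7 cuts the candidate pool down to the few most likely tokens (focused text), while 1.0 leaves every token in play (diverse text).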
topK
The topK field specifies the number of most probable tokens the model considers when generating a response. For example, a topK of 10 means the model only samples from the 10 most probable tokens at each step. This can help reduce randomness in the generated text while still allowing for some creativity.
You can set the topK between 1 and 40.
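The topK filtering step above can be sketched the same way as topP. Again, this is an illustrative example with made-up probabilities, not Vertex AI code:

```python
def top_k_filter(probs, top_k):
    """Return the indices of the top_k most probable tokens; the model
    samples only from these, discarding all lower-ranked tokens."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    return ranked[:top_k]

probs = [0.1, 0.4, 0.3, 0.2]
top_k_filter(probs, 2)  # keeps indices 1 and 2 (the two most probable)
```

Setting topK to 1 is fully greedy (the single most probable token is always chosen), while larger values admit progressively less likely tokens. In practice topK and topP are applied together: a token must survive both filters to be eligible.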