One of my agents concludes with a "Post to Slack" tool.
I'm finding that, despite explicit (forceful) instructions to run the tool, the agent completes all the other steps but never calls it. (Posting to Slack is the purpose of the entire agent.) I know I can work around this by building a workflow with a Slack query and managing the agent response, but that's defeating the object of the agentic capability, right?
Am I missing something, a best practice, or somewhere I should RTFM?
Thanks in advance
Leon
(more context below)
IN AGENT CONFIGURATION/INSTRUCTION:
8. Finally, run the postToSlack tool:
params:
{
  slackMessage1: A brief 1-2 sentence summary of the above, using either a ✅ if everything looks good or a ⚠️ if there are warnings as above.
  slackMessage2: A more detailed text summary - keep analysis of positives (passes) very short, but explain any failures of the test in more detail - using slackMoji for bullets
}
In the agent inputs (invocation):
{
  "messages": [
    {
      "role": "user",
      "content": "Please run all tasks - do not pause for user interaction or wait for approval for webpage or slack interactions. You must run the Slack notification stage at the end."
    }
  ],
  "action": "invoke"
}
ok so I think the best answer you'll get will be from one of the Retool engineers, but there tends to be a common misconception that "agentic" means one specific type or way of doing something. It's possible to have an agentic workflow that involves 1 agent with lots of instructions, like what you've shown, which is fully autonomous. It's also possible to have 8 agents with only a few instructions each, all organized/directed by another agent. You could also have hardcoded API calls for steps 1-7, then step 8 is a call to an LLM summarizing steps 1-7. so when you ask "that's defeating the object of the agentic capability, right?"
I'll say no. no it isn't. in your case, you still have steps 1-7 being controlled by an LLM; moving 1 or 2 steps to pure code still leaves the process as a WHOLE agentic. in fact, if you need absolute control over the contents or format of the messages sent to Slack (like adding "thanks" to the end of every message, or echoing messages to a log), you're better off using a workflow. you save on token length by taking that part out of the instructions, and you make the LLM less likely to skip or ignore that step... because it isn't there lol (prompt engineering is finicky at best and can only go so far due to context length and what I like to call attention span).
If 100% of the decision making is up to an LLM, mistakes can be made by the model. Sometimes taking back control, even just for a little bit, gives more consistent/reliable results and is always cheaper. The general rule of thumb I go by: pretend LLMs have the attention span of a grade schooler... the more instructions you give a kid, the more instructions won't be fully completed or even attempted.
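To make the "take it out of the instructions" idea concrete, here's a rough sketch (the URL and function name are mine, and it assumes a standard Slack incoming webhook, nothing Retool-specific). The LLM only produces the summary text; plain code handles the "thanks", the logging, and the post:

```typescript
// Rough sketch: deterministic post-processing of an LLM-produced summary.
// Assumes a standard Slack incoming webhook; all names are illustrative.
const SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/YOUR/WEBHOOK/URL";

async function postSummaryToSlack(llmSummary: string): Promise<void> {
  // Deterministic formatting the model can't skip or mangle,
  // e.g. always appending "thanks" and echoing to a log:
  const text = `${llmSummary.trim()}\n\nthanks`;
  console.log("Posting to Slack:", text);

  const res = await fetch(SLACK_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }), // incoming webhooks accept { text: "..." }
  });
  if (!res.ok) throw new Error(`Slack post failed with status ${res.status}`);
}
```

Those deterministic lines cost zero tokens and can never be skipped or reworded by the model.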
You're definitely not alone: agents skipping over "post to Slack" (or any crucial final tool) is a classic headache. Forceful instructions can help, but even then, LLM agents tend to zone out on the last step if the sequence is long or complex.
Personally, I wouldn't sweat "defeating the object" by putting the Slack step into a workflow. "Agentic" doesn't have to mean everything is handled by the LLM in one big blob (especially if the thing you want it to do is in any way deterministic). A super common pattern is: let the agent handle all the thinking, summarizing, and chaining of steps it's good at, then hand off mission-critical side-effects (posting to Slack, updating a DB, etc.) to a workflow or tool for reliability.
If anything, I’d argue it’s more robust this way:
- You get all the benefits of agentic decision-making without risking skipped steps.
- You have total control over formatting, logging, and error handling.
- Much easier to debug or enforce business logic if Slack messages are deterministic.
Sometimes, giving the agent "final say" on everything is like trying to get a toddler to remember an 8-step bedtime routine. The more you offload to workflows, the less gets lost in translation (and the cheaper and more reliable it is).
| Agent-Only Approach | Hybrid Approach (Agent + Workflow) |
| --- | --- |
| Pros: Fully autonomous, flexible, easy to iterate | Pros: Reliable execution, deterministic, more control |
| Cons: Can skip steps, less reliable for side-effects | Cons: Slightly less pure agentic, but pragmatic |
For mission-critical or external actions, a workflow or explicit API call is better for reliability.
You can still keep your LLM agent fully in charge of the "smart" steps; just reserve deterministic actions (Slack posts, external emails, DB writes) for workflows/tools, as in the sketch below.
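Here's a minimal sketch of that hybrid shape, just to illustrate the split (`invokeAgent` and `postToSlackStep` are hypothetical stand-ins for however you invoke your agent and your deterministic workflow step, not real Retool APIs):

```typescript
// Sketch of the hybrid pattern: the agent does the "smart" steps, then
// plain code guarantees the side-effect. `invokeAgent` and `postToSlackStep`
// are hypothetical stand-ins, not real Retool APIs.
async function invokeAgent(input: {
  messages: { role: string; content: string }[];
  action: string;
}): Promise<string> {
  // ...call your agent here and return its final answer...
  return "stubbed summary of steps 1-7";
}

async function postToSlackStep(message: string): Promise<void> {
  // ...deterministic workflow step, e.g. a Slack query or webhook post...
  console.log("Would post to Slack:", message);
}

async function runChecksAndNotify(): Promise<void> {
  const summary = await invokeAgent({
    messages: [{ role: "user", content: "Run all checks and return a summary." }],
    action: "invoke",
  });

  // Because this is code rather than an instruction, the model can't skip it:
  await postToSlackStep(summary);
}

runChecksAndNotify();
```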
TL;DR:
Agentic design is all about combining autonomous decision-making with reliable execution. Workflows for the last mile aren’t cheating, they’re just professional pragmatism. If you ever want to see a sample setup or trade prompt ideas, happy to help!
Let us know if you’re interested in code/patterns for blending workflow + agents in Retool!
Thanks for the replies. I have various agents, with a mix of workflow integrations (the agent does the LLM thought-chain, workflow steps do the 'deterministic' parts) and full-fat agents (which can behave oddly from time to time).
I appreciate the difference between them, and equally appreciate the responses!
no problem at all! AI is so new that buzzwords become popular with people outside the industry without much thought for how the industry currently defines those words. The best example of this is watching a Programmer and an Artist argue over color... they'll spend hours yelling until finally someone notices they're talking about the exact same thing, just one uses the professional term and the other the socially accepted equivalent.
Anywho, feel free to let us know if you want any help improving your agents that behave oddly occasionally.
The gist being: there is some room for "prompt engineering" to drill down and enforce that a tool is always used.
Making sure the prompt isn't too big is a key factor in reducing surprises in the final agent action. As @bobthebear points out, delegating tasks to different agents can greatly reduce this.
Just to clarify: are you asking about having a parent agent use a tool that calls other agents?
You can do that by selecting the agent option for a new tool.
In the tool's description, you would then do some 'prompt engineering' to explain to the parent agent how and when to use this.
Currently, this process involves the parent agent passing a string to the subagent, and the subagent doing its work and then passing a final string response back to the parent agent in its answer.
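For anyone sketching this out, the contract is effectively just strings in both directions. A rough illustration (all names here are made up for the example, not a real Retool API):

```typescript
// Conceptual sketch of the parent <-> subagent tool boundary described
// above: a string goes in, a string comes back. Names are illustrative.
type SubagentTool = (task: string) => Promise<string>;

const analyzeTestRun: SubagentTool = async (task) => {
  // ...the subagent runs its own instructions against `task`...
  return `Summary for: ${task}`;
};

// The parent agent "calls the tool" with a string and reads a string back:
async function parentStep(): Promise<void> {
  const answer = await analyzeTestRun("Check last night's regression run");
  console.log(answer);
}

parentStep();
```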
I am curious about n8n's documentation on this for their setup. I can definitely look to get some official docs put together to mirror our use case.