Q&A Recap
Thanks to everyone who joined us live for the final session of AI Build Week! Below is the Q&A recap.
Workflow Design & Logic
Q: When should I use a workflow vs. an agent?
A: It depends on how predictable your process is. If the path from input to output is clear and consistent, use a workflow — that’s what we did for the “BlueSky → Slack” demo.
If you’re dealing with more open-ended tasks — like deciding which tool to use or what question to ask next — an agent is a better fit. Agents are good at reasoning and adapting. Workflows are great for reliability and control.
Q: Why do we need a separate step to filter posts if the AI is already scoring them?
A: The LLM scores each post from 1 to 10 — but it doesn’t decide what counts as “relevant.” That happens in the next step, where we use a simple code block to filter out anything below 4.
This gives us flexibility — we can tweak the score threshold without touching the AI prompt.
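For reference, that filter step can be just a few lines. Here's a minimal sketch of the code block, assuming the scoring step is named `scorePosts` and returns an array of `{ post, score }` objects (both names are illustrative):

```js
// Minimal filter block (sketch): keep only posts the LLM scored 4+.
// Assumes a prior step `scorePosts` returned [{ post, score }, ...].
const THRESHOLD = 4; // tweak relevance here without touching the prompt

const scored = scorePosts.data;
return scored.filter(item => item.score >= THRESHOLD);
```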
Q: Why batch submit all the posts to the LLM instead of looping over them one by one?
A: In this case, batching made the workflow cleaner and faster. We added clear instructions in the prompt: “Score each post independently,” and it worked well — even with 4–5 posts at a time.
If you notice weird behavior (like the model getting confused or inconsistent), looping over each item individually is a safer fallback.
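For a concrete picture, here's a rough sketch of how a code block might assemble that batched prompt (the step name `newPosts` and the exact wording are illustrative, not the demo's actual code):

```js
// Build one prompt that covers all posts at once (sketch).
const posts = newPosts.data;

const prompt = [
  "Score each post independently from 1 to 10 for relevance.",
  "Return one score per post, in the same order.",
  "",
  ...posts.map((p, i) => `Post ${i + 1}: ${p.text}`),
].join("\n");

return prompt;
```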
Prompting & Output Format
Q: How do you get the LLM to return clean JSON (without backticks or markdown)?
A: This is a common issue. The trick is to be extremely specific in the prompt.
We used:
“Return only raw JSON. No formatting, no backticks, no markdown.”
Even then, some models might still wrap things in triple backticks. You can also add a second LLM step to clean up the output or use a code block to validate and parse it.
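If you go the code-block route, the cleanup can be small. A sketch, assuming the LLM step is named `llmStep` (illustrative):

```js
// Tolerate code fences the model may add, then parse (sketch).
const raw = llmStep.data;
const cleaned = raw
  .replace(/^```(?:json)?\s*/i, "") // strip a leading fence
  .replace(/```\s*$/, "")           // strip a trailing fence
  .trim();

try {
  return JSON.parse(cleaned);
} catch (err) {
  throw new Error(`LLM did not return valid JSON: ${err.message}`);
}
```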
Scheduling & Reliability
Q: How do you avoid reposting the same content to Slack every time the workflow runs?
A: We store the timestamp of the latest successful run in RetoolDB. Then, each time the workflow starts, it fetches only new posts from BlueSky — anything newer than that saved timestamp.
It’s a simple pattern that makes the whole thing feel smart and dependable.
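In code-block terms, the pattern looks roughly like this (step and column names are illustrative, not the exact demo code):

```js
// "Only new posts" sketch: compare against the saved timestamp.
// `getLastRun` is a RetoolDB query returning the last successful run time;
// `fetchPosts` returns the latest posts from BlueSky.
const lastRun = new Date(getLastRun.data[0].last_run_at);

const newPosts = fetchPosts.data.filter(
  post => new Date(post.createdAt) > lastRun
);

// A later step writes the current time back to RetoolDB,
// so the next run picks up exactly where this one left off.
return newPosts;
```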
Q: What does the concurrency limit do?
A: It makes sure only one run of your workflow happens at a time. If a new run gets triggered while one is already in progress, it either waits or skips, depending on your settings.
This is super useful if you’re updating shared resources like a database or sending Slack messages, especially with long-running workflows.
Q: Can I schedule workflows to run automatically?
A: Yep! Use the “Schedule” trigger in the workflow builder. You can run it every few minutes, every hour, or use cron syntax if you want more control.
For this demo, we set it up to run hourly at :40 past the hour.
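For reference, the cron expression for that schedule would be `40 * * * *` (minute 40 of every hour).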
Integrations
Q: What was the Slack bot you used?
A: That was a Slack resource configured in Retool. Once it’s set up, you can use steps like chat.postMessage to send messages directly from your workflows.
Q: Can I send one message per post?
A: Absolutely. Just wrap your Slack step inside a Loop block, and it’ll send a message for each post. In this session, we used a sequential loop and added a short delay so Slack didn’t get rate-limited.
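If you'd rather handle the pacing in a single code block instead of a Loop block, the pattern looks like this (`postToSlack` is an illustrative stand-in for whatever sends one message, not a real Retool function):

```js
// Sequential-send sketch: one message per post, with a pause between sends.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function postToSlack(post) {
  // Stand-in for the Slack step's chat.postMessage call.
  console.log(`Would post to Slack: ${post.text}`);
}

const posts = [{ text: "First post" }, { text: "Second post" }];

for (const post of posts) {
  await postToSlack(post); // one message per post
  await sleep(1000);       // ~1 message/second to stay under rate limits
}
```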
Q: Can I integrate with other tools that aren’t on the Retool integrations list?
A: Yes! Retool works with any tool that has an API — REST, GraphQL, even SOAP.
Shoutout to Alexius for asking about Tableau — that’s not native yet, but you can hit their REST API from a Retool resource.
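As a rough sketch, here's what signing in to Tableau's REST API could look like from a code block, assuming the workflow runtime exposes fetch. Treat the endpoint, API version, and payload shape as assumptions to verify against Tableau's docs, and keep real secrets in the resource configuration rather than in code:

```js
// Illustrative Tableau REST API sign-in (verify against Tableau's docs).
const response = await fetch(
  "https://your-tableau-server/api/3.x/auth/signin", // placeholder URL/version
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      credentials: {
        personalAccessTokenName: "my-token-name",     // illustrative
        personalAccessTokenSecret: "my-token-secret", // don't hardcode in practice
        site: { contentUrl: "" },
      },
    }),
  }
);

return response.json();
```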
Agents, Memory & Dynamic Use Cases
Q: Can agents store memory between runs?
A: Not automatically — but you can simulate it. Store relevant data (like conversation history or user actions) in RetoolDB, and pass it into the prompt when the agent runs again.
That’s how we handled memory in the RetoolGPT demo earlier this week.
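A sketch of that pattern, with illustrative names (`getHistory` as the RetoolDB query, and a `message` field on the trigger payload):

```js
// Simulated memory sketch: replay stored history into the prompt.
// `getHistory` is a RetoolDB query returning prior turns for this user.
const history = getHistory.data
  .map(turn => `${turn.role}: ${turn.content}`)
  .join("\n");

// Prepend the history so the agent "remembers" earlier runs.
return `Conversation so far:\n${history}\n\nUser: ${startTrigger.data.message}`;
```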
Q: Can agents decide which resource to use dynamically?
A: Not yet, but we’re getting close. For now, you can give the agent access to multiple tools and use prompt logic to guide it. You can also run evaluations to see if it’s choosing the right one. It’s a bit hacky, but it works for more advanced use cases.
Q: Can I add human approvals into a workflow?
A: Not yet. Internally, we have explored a human-in-the-loop concept, where you can pause a workflow on a given step and wait for someone to take action, but we don't have a timeline for when this will be available.
LLM Evaluation & Exploratory Use Cases
Q: I’m working with structured and unstructured data — should I use workflows or agents?
A: Start with a workflow if you know the steps you want (like: extract → summarize → post).
If your goal is more exploratory — like “help me figure out what to do with this data” — try an agent. You can always prototype with a prompt in an app first and go from there.
Q: What if my workflow fails and I’m not sure why?
A: This is where run history is your best friend. You can inspect each step’s input/output and debug from there. It’s especially helpful when you’re working with LLMs — sometimes they silently change behavior, and seeing what actually came back can save hours.
Resources & Follow-Up
Q: Are the sessions recorded?
A: Yep — all sessions from AI Build Week can be viewed on our YouTube channel and in the Community Hub. Each community post has a Q&A like this one!
Got more questions? Drop them below! And if you end up building something inspired by AI Build Week, we’d love to see it in the Community Show & Tell category.