How do Agent instructions work?

Hello there,

I’m trying to understand how Agent instructions work.

Here’s my test:

I created a very basic Agent with these instructions:

As per my understanding, the instructions should behave like a system prompt and therefore steer the response.

But when I use the Agent in the chat, it completely ignores the directive:

The Eval shows the same effect:

And here is the old-fashioned way using Retool AI, which, instead, works as expected:

Can you clarify what happens behind the scenes during an Agent run?

Thanks

Silly question, Fabio: have you deployed the changes you made to the instructions?

Hi @MiguelOrtiz

I didn’t, because I thought the Chats/Evals sections within the Agent editor reflected the current version in the editor.

I deploy an Agent like I deploy a Workflow, to be used by apps.

BTW, just to be sure, I deployed it, and nothing changed in the editor.

Now that you’ve told me this, I did a further test: since I had deployed the Agent, I called it from an in-app chat component, but got the same result, as you can see here:

And to complete my test session, I also tried giving the same instructions as context in an Assistant/User message in the Eval dataset, again with no effect.

Best

Hi @abusedmedia,

I understand the confusion. The issue is that Agents are fundamentally different tools from AI Actions (queries).

The AI Action is a thin layer on top of the LLM’s API, where we give you direct access to the system prompt and the user input, whereas the Agent has quite a long internal system prompt to make sure it successfully calls tools.
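To make the difference concrete, here is a minimal sketch of how the two request shapes might differ. All names and prompt text are illustrative assumptions, not Retool’s actual internals; the point is only that an AI Action passes your system prompt through untouched, while an Agent embeds your instructions inside a larger internal prompt:

```python
# Hypothetical sketch of the two request shapes. The function names and
# internal prompt text are invented for illustration; they are NOT
# Retool's real implementation.

def ai_action_messages(system_prompt: str, user_input: str) -> list[dict]:
    # Thin layer: your system prompt goes straight to the model.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

def agent_messages(instructions: str, user_input: str) -> list[dict]:
    # The Agent composes its own internal system prompt (tool-calling
    # rules, loop control, etc.) and embeds your instructions inside it,
    # so your directive is one input among many, not the whole prompt.
    internal_prompt = (
        "You are an agent that can call tools...\n"
        "...many more internal directives...\n"
        f"User-provided instructions: {instructions}"
    )
    return [
        {"role": "system", "content": internal_prompt},
        {"role": "user", "content": user_input},
    ]
```

Under this (assumed) shape, a directive like “always reply with X” competes with the internal prompt in the Agent case, which is consistent with the behavior described above.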

The instructions guide the behavior, but there are multiple AI steps happening within the agent that make it less suited to extremely fine-grained output formatting like in that example.

If the goal is to get a specific structure of JSON back from the LLM, the AI Action is probably the right tool for the job.
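As a sketch of that workflow (the prompt wording and helper function are assumptions for illustration, not a Retool API): with an AI Action you own the entire system prompt, so you can demand strict JSON and then parse and validate the reply yourself.

```python
import json

# Hypothetical system prompt for an AI Action; with direct access to the
# system prompt you can pin down the exact output structure you want.
SYSTEM_PROMPT = (
    'Reply ONLY with a JSON object of the form '
    '{"status": string, "value": number}. No prose, no markdown.'
)

def parse_reply(raw_reply: str) -> dict:
    # Parse the model's reply and fail loudly if it drifted from the
    # requested structure, instead of silently accepting bad output.
    data = json.loads(raw_reply)
    if set(data) != {"status", "value"}:
        raise ValueError(f"unexpected keys: {sorted(data)}")
    return data

# Simulated model reply standing in for the real LLM call:
print(parse_reply('{"status": "ok", "value": 42}'))
# → {'status': 'ok', 'value': 42}
```

The validation step matters because even a well-prompted model can occasionally return malformed output, so checking the structure in code keeps the downstream query deterministic.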

I understand it’s not ideal that Agents can’t give fine-grained outputs, whether for formatting, tool use, or a hard-coded value as in your example. There is just a lot more going on under the hood to give Agents the ability to be non-deterministic for use cases with nuance.

If your goal is to have very tight control over what an LLM is returning, AI Actions (queries) will be the better tool for deterministic outputs.


Thanks @Jack_T for the thorough explanation.


Hey @Jack_T! I’m in a similar situation where AI Actions are probably better suited to my use case than Agents. My main pull towards Agents is the Evals we can set up to test behaviour and spot regressions. Is there an equivalent we could use for AI Actions? I’d love to make sure that we’re not causing regressions in some use cases as we tweak our system prompts.

Hi @cosmick,

Great question! The Evals tool is really powerful, which is one of the pros of using Agents.

I don’t believe we have the same tooling for AI Actions, unfortunately, but I can definitely make a feature request for that :+1:


Thanks, @Jack_T! That would be immensely helpful for us :grin: