AI Summarisation Action Stops Abruptly With No Error Message

Hello, I have developed an application where a user can upload a .txt transcript of a conversation they had and, using Retool AI, summarise it at the click of a button. The application includes an upload button, a default prompt outlining how the summary should be structured, and a button to trigger an AI summarisation action.

The .txt file is cleansed as follows (I am aware that special characters are a known issue, so I fixed that by removing them):
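Roughly speaking, the cleanup step looks like this (a simplified sketch; the function name and exact character ranges are illustrative, not the app's real transformer code):

```typescript
// Illustrative cleanup for the uploaded transcript; the regexes here are an assumption.
function cleanTranscript(raw: string): string {
  return raw
    .replace(/[^\x20-\x7E\n\t]/g, " ") // drop non-printable / special characters
    .replace(/[ \t]{2,}/g, " ")        // collapse repeated whitespace
    .trim();
}
```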

The input to the AI action is as follows:


and, as you can see, I have connected to Meta's Llama 3 using AWS Bedrock.
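For reference, outside of Retool an equivalent direct call to Llama 3 on Bedrock would look roughly like this (a sketch using the AWS SDK; the model ID, region, and parameter values are assumptions, since Retool's resource handles the actual request):

```typescript
import { BedrockRuntimeClient, InvokeModelCommand } from "@aws-sdk/client-bedrock-runtime";

// Sketch of a direct Llama 3 call on Bedrock; region, model ID, and parameters are illustrative.
const client = new BedrockRuntimeClient({ region: "us-east-1" });

async function summariseWithLlama3(prompt: string): Promise<string> {
  const response = await client.send(
    new InvokeModelCommand({
      modelId: "meta.llama3-8b-instruct-v1:0",
      contentType: "application/json",
      accept: "application/json",
      body: JSON.stringify({
        prompt,
        max_gen_len: 512, // hard cap on generated tokens; if hit, stop_reason is "length"
        temperature: 0.5,
      }),
    })
  );
  const result = JSON.parse(new TextDecoder().decode(response.body));
  // result.stop_reason reports whether generation finished ("stop") or was cut off ("length").
  return result.generation;
}
```

One thing worth noting is that `max_gen_len` is a hard cap on generated tokens, so in principle a long summary could be cut off mid-sentence without any error being raised.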

The main issues I can clearly observe are:

  • No error message is given and the summarisation ends abruptly partway through
  • Some aspects of the prompt are ignored

I can most likely fix the second issue by adjusting the prompt; however, no amount of work seems to fix the first one. As there are no error messages in the logs, I cannot work out what is causing the AI action to stop abruptly before the summary is finished (I know it is unfinished because it ends in the middle of a sentence).

Happy to send more detailed logs upon request.

I think the issue lies with the prompt. The input box seems to be intended only for the text to be summarised, not for additional instructions on how to structure the summary. It would be good if someone from the Retool team could confirm this, please.

Hello @Diogo_Mota!

Thank you for the well-written feedback. I can double check with our eng team about how and where users should find any error messages that are being returned.

Do you have any examples of specific errors you were expecting that were not returned?

From your description it sounds like the model started responding properly but stopped in the middle. I didn't know this was possible :sweat_smile: I thought a request either works and runs to completion or errors immediately.

I'm not sure if there is a polling issue where the connection is lost; that could be the reason for an abrupt end with no corresponding error message...

From personal testing, if I give directions on how to format a response in the "Input" field of an AI block in workflows, the model is able to follow these directions.

How many aspects/directions are you giving the model in "Input"? Have you tried using the "System Message" part of the AI block to give the model your directions?

Your screenshot of the workflow AI code block looks different from mine. Are you self-hosting? If so, what version are you on?

@retoolT
Hi Jack! Thank you for getting back to me.

I decided to change the model, and it seems that it now always finishes the summary! Not sure what was wrong with Llama 3.

However, the formatting issue remains, and it is quite non-deterministic. Usually the first summarisation I request after opening the application is the one where the model ignores all instructions and just uses bullet points. After requesting another summarisation (by pressing the button again, as this is not a chatbot), the model takes the instructions into consideration. The instructions are passed into the input text before the text to be summarised, roughly as sketched below.
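Concretely, the input is assembled along these lines (a simplified sketch; the variable names and instruction text are illustrative):

```typescript
// Sketch of how the summarisation input is currently assembled; names and values are illustrative.
const formattingInstructions =
  "Write an overview paragraph first, then key decisions, then action items.";
const cleanedTranscript = "...cleaned transcript text...";

// The instructions are prepended to the transcript, all in the single Input field.
const input = `${formattingInstructions}\n\n${cleanedTranscript}`;
```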

I am using the Summarisation AI action in a Retool application, so I can't seem to find the "System Message" field you mention. The version is 3.68.00.

That's great to hear that changing the model allowed for the summary to always finish!

I had high hopes for Llama, but I am sure Meta would love your feedback on their forums :face_with_peeking_eye:

Thank you for the added details.

That is an interesting pattern: the first call comes out with the default bullet points, but after making the 'exact same' request again the model catches on and follows the formatting instructions given at the beginning of the input :melting_face:

Well, it looks like in our next update we are rolling out an additional input text box for the Summarization AI action called "System Message", which should hopefully fix this exact issue.

From what I've seen internally, it appears that this input will be used to communicate instructions or provide context to the model at the beginning of a conversation. So fingers crossed that after this release it will standardize the model's understanding of the context (formatting) in which it should be answering :crossed_fingers: Stay tuned for the release!
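For anyone reading along, the general idea behind a system message is to separate standing instructions from the content being summarised, roughly like a chat-style payload (a generic sketch; this is not Retool's internal request format):

```typescript
// Generic chat-style payload illustrating the split; not Retool's internal format.
const request = {
  messages: [
    // Standing instructions move into the system message instead of being prepended to the input.
    { role: "system", content: "Summarise transcripts as: overview, key decisions, action items." },
    // The Input field then carries only the text to be summarised.
    { role: "user", content: "...cleaned transcript text..." },
  ],
};
```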