Pitching the `assumptions_made` metadata field for AI Agents
I’ve been thinking a lot lately about the current state of AI and large language models (LLMs). People are building really cool stuff with AI agents and chatbots that can call tools or use plugins, but there’s clearly room for growth and innovation.
One thing I’ve noticed is that these tools and plugins often use APIs. And whenever you have an AI agent or chatbot using an LLM to call an API, there are assumptions being made based on what the user says.
Let me give you an example to show what I mean. Imagine I ask an AI agent to book a dinner date for me and my wife. I might say something like:
“Make us dinner reservations at Malone’s next Friday.”
There are a few assumptions embedded in that request:

- The party size: "us" probably means two, but the agent has to infer that
- Which Friday "next Friday" refers to
- What time we want to eat, since I didn't give one
- Which Malone's location, if there's more than one nearby
The LLM has to make a “best guess” based on the context it has, ask follow-up questions, or refer to long-term memory if I’ve clarified before. Different people will have different preferences for how many assumptions they’re comfortable with the AI making.
assumptions_made
So here’s my idea: what if we create a loosely standardized metadata field called `assumptions_made`? This field would be populated by the LLM to list out the key assumptions it made when generating its response. Humans make assumptions constantly, so it makes sense that surfacing them explicitly as part of the way LLMs “work” would help them improve.
Having an explicit list of assumptions could be really powerful. It would be possible to:

- Let users quickly spot and correct a wrong assumption instead of getting a confidently wrong result
- Debug agent behavior by logging which assumptions led to which actions
- Tune how many assumptions a model makes before it stops and asks for clarification
I think there’s probably an optimal balance: enough assumptions to keep interactions low-effort, but not so many that accuracy suffers.
Where I think `assumptions_made` could be extra powerful is when we have AI agents communicating with each other. Imagine agent A makes a request to agent B. Along with its response to agent A, agent B also provides its `assumptions_made`.
Now agent A has valuable extra context! It can see if agent B made any incorrect assumptions and provide clarification. Or it can determine that it needs more info from the user before proceeding.
Multi-agent systems haven’t really hit production yet due to a variety of issues, but I believe something like this could be crucial for them to operate reliably. Without it, small incorrect assumptions can snowball and hurt consistency.
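To make that concrete, here’s a minimal Python sketch of what that loop might look like. The payload shape and the function names are all hypothetical, just one way you could wire it up:

```python
# Hypothetical agent-to-agent exchange. Agent B returns its answer plus
# the assumptions it made; agent A decides whether to proceed or clarify.

def agent_b_handle(request: str) -> dict:
    # Stand-in for a real LLM call; the payload shape is illustrative.
    return {
        "response": "Booked Malone's for 2 at 7:00 PM next Friday.",
        "assumptions_made": [
            "Assumed a 7:00 PM reservation since no time was given",
            "Assumed the downtown Malone's location",
        ],
    }

def agent_a_review(payload: dict, acceptable: set[str]) -> list[str]:
    # Return the assumptions agent A can't accept on the user's behalf.
    return [a for a in payload["assumptions_made"] if a not in acceptable]

payload = agent_b_handle("Make us dinner reservations at Malone's next Friday.")
flagged = agent_a_review(
    payload,
    acceptable={"Assumed a 7:00 PM reservation since no time was given"},
)

if flagged:
    # In a real system, agent A might ask the user or re-prompt agent B here.
    print("Need clarification on:", flagged)
else:
    print(payload["response"])
```

The interesting part is that agent A never has to guess *why* agent B answered the way it did; the assumptions travel with the response.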
The great thing is we can start experimenting with this right now! If you’re using ChatGPT, try this to get it to add a note to its long-term memory:
Always include an "assumptions_made" field and value at the end of your responses where you list some assumptions you made in your response.
And if you’re using custom apps built on LLM provider APIs, you can add the above to your system prompt.
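For example, with the OpenAI Python SDK that could look something like this (the model name is just a placeholder; use whichever chat model you have access to):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    'Always include an "assumptions_made" field and value at the end of '
    "your responses where you list some assumptions you made in your response."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model should work
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Make us dinner reservations at Malone's next Friday."},
    ],
)

print(response.choices[0].message.content)
```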
Then start chatting and see what assumptions the LLM is making! I’d love to hear what you discover.
In terms of implementation, `assumptions_made` could be a simple JSON field or just an addendum to the end of the LLM’s output. The key is to have a standardized way to communicate assumptions. I’m excited to see where this idea goes! I think it has the potential to make AI agents much more robust and adaptable.
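If you go the JSON route, pulling the assumptions back out is trivial. A quick sketch, assuming we’ve asked the model to reply with a JSON object containing `answer` and `assumptions_made` keys (the `answer` key is my own placeholder, not part of any standard):

```python
import json

# Example model output, assuming we asked for JSON with these two keys.
raw_output = """
{
  "answer": "Reservation booked for 2 at Malone's next Friday at 7:00 PM.",
  "assumptions_made": [
    "Assumed a party of two",
    "Assumed a 7:00 PM reservation time"
  ]
}
"""

parsed = json.loads(raw_output)
for assumption in parsed.get("assumptions_made", []):
    print("-", assumption)
```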
Let me know your thoughts! Do you see value in an `assumptions_made` standard? How else could we improve communication between AI agents?
- Joseph
Sign up for my email list to know when I post more content like this. I also post my thoughts on Twitter/X.