Desperately trying to understand the OpenAI Cookbook examples

There is inconsistency in how to do function calling. I want to get the basics right, but all the examples seem to leave the basics out on purpose. :slight_smile:

OK, here it goes:
When doing tool calls, say I get a response indicating I have to make two tool_calls, I call the tools and get the results. Then what?

Append the result messages to the conversation and call the API again!
But should I also append the tool_call information from the response? Who knows, OpenAI didn't say.

And if I look into the examples, it's even worse.
In this example: [How to call functions with chat models | OpenAI Cookbook] it says:

# Append the message to messages list
response_message = response.choices[0].message
messages.append(response_message)

Hmm... OK, simply copy the message from the response into the request, that's fine.
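
If I follow that cookbook's pattern to the end, my understanding is that after appending the assistant message you append one result message per tool call (call_my_tool here is just a placeholder for your own dispatch code):

# Append the assistant message that contains the tool_calls
messages.append(response_message)

# Then append one "tool" message per tool call, with the matching tool_call_id
for tool_call in response_message.tool_calls:
    result = call_my_tool(tool_call.function.name, tool_call.function.arguments)  # placeholder dispatch
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": result,
    })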
But in this cookbook:
[Structured Outputs for Multi-Agent Systems | OpenAI Cookbook]
It says:

conversation_messages.append([tool_call.function for tool_call in response.choices[0].message.tool_calls])

Holy cow, tool_call.function is two levels down from response.message. More importantly, it's not even a valid message: there's no "role" on tool_call.function!

Now what exactly should I do? Follow the new example? I would certainly need to rework my code to be able to include tool_call.function in the messages.

Or is it a bug in the example? Does the code in the examples actually run? Or do you think it's just an explanation, not actual code?

Hi,

So the structure is:

  1. A prompt that contains the function definitions and a normal prompt, along with the user input/data you wish to work on.

  2. The model responds with either a standard reply or a reply that indicates a tool/function should be used and what, if any, data should be sent to that function/tool.

  3. The function or tool is executed, the result of that function/tool call is appended to the existing prompt chain, and the API is called again. This time the AI will use the results from that call to generate a new response, which you can then present to the user or process further (see the sketch below).
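
Here is a minimal sketch of that loop using the Python SDK and the Chat Completions endpoint. The model name, the get_weather tool and its arguments are just placeholders; the important part is the shape of the messages:

import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> str:
    # Stand-in for your real function/tool.
    return json.dumps({"city": city, "temperature_c": 21})

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What is the weather in Paris?"}]

# 1. First call: the model may answer directly or ask for tool calls.
response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
assistant_message = response.choices[0].message

if assistant_message.tool_calls:
    # 2. Append the assistant message itself (it carries the tool_calls),
    #    then one "tool" message per call, keyed by the matching tool_call_id.
    messages.append(assistant_message)
    for tool_call in assistant_message.tool_calls:
        args = json.loads(tool_call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": get_weather(**args),
        })
    # 3. Second call: the model now sees the tool results and replies normally.
    response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

print(response.choices[0].message.content)

The key detail is that the assistant message you append already carries the tool_calls, and each "tool" message refers back to one of them by its tool_call_id.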

Thanks for the explanation, but then what is this in the example?
These are certainly not the results of the tool_calls, nor the message returned by the LLM. As you can see, tool_call.function is about two levels down from response.choices[0].message. Should we simply say:
conversation_messages.append(response.choices[0].message)
?

-------------The following is from the example in:
https://cookbook.openai.com/examples/structured_outputs_multi_agent

conversation_messages.append([tool_call.function for tool_call in response.choices[0].message.tool_calls])

The example seems to say I need to do something extra besides what you said:
The function or tool is executed, the result of that function/tool call is appended to the existing prompt chain
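
To make my point concrete, here is roughly what the two objects look like if you print them as dicts (the values are made up; the structure is the point):

# response.choices[0].message -- an actual message, with a role
{
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_abc123",
            "type": "function",
            "function": {"name": "get_weather", "arguments": "{\"city\": \"Paris\"}"}
        }
    ]
}

# tool_call.function -- just a name and arguments, no role at all
{"name": "get_weather", "arguments": "{\"city\": \"Paris\"}"}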

Are you building a multi-agent system?

Yes, I am. But first I am trying to understand the basics. The examples contradict each other, so I am not sure which one to follow.

There's a great video about function calling here; it takes you through the basics with code examples: