
Function Calling and Structured Responses

Products:

  • Dataset: Function calling dataset v3 (Purchase here).
  • Pre-trained models (v3, latest):
  • Script/Notebook to train for structured responses:
    • Purchase the v3 script here (for use with the function_calling_v3 dataset).
    • Purchase the v2 script here (for use with the function_calling_extended dataset). DEPRECATED.

OR buy access to the ADVANCED-fine-tuning repo, which includes the fine-tuning scripts for function calling.

  • Function-calling inference scripts:
    • Purchase only the function-calling inference scripts HERE.
    • Purchase as part of the full ADVANCED-inference repo HERE.
  • Pre-trained models (v2) – DEPRECATED:
    • Yi 200k context 34B (purchase here) and 6B (purchase here) models.
    • Llama 2 70B tuned for function calling. Purchase here.
    • Mistral 7B tuned for function calling. Visit here.
    • Deepseek Coder 1.3B, 6.7B and 33B fine-tuned for function calling. Visit here.
    • CodeLlama 34B tuned for function calling. Purchase here.
    • Llama 2 13B tuned for function calling. Purchase here.
  • v2 dataset – DEPRECATED. Purchase here.

Video Tutorials

Function Calling Dataset, Training and Inference

Fine-tuning for structured responses:

Supervised fine-tuning:

45 thoughts on “Function Calling and Structured Responses”

  1. Hello Ronan McGovern,
    Hope you are doing well!
    I have gone through your YouTube video on fLlama 2 (function calling) and it looks awesome. I would like to learn more about the fLlama 2 model. If you don't mind, could you provide free trial access to the fLlama 2 13B model with limited requests, so that we can test it before opting to purchase the model?
    Thanks,
    Santhosh

        1. Cool! Looking forward to what you can get out of this model. Since this new pre-trained model ranks #1 on Hugging Face, I think it will be highly recommended. Considering its 34B size, if you can get it working well with function calling, it could be a great seller.

  2. Great!
    1. What are the differences between Trelis/Yi-34B-200K-Llamafied-function-calling-adapters-v2 and Trelis/Yi-34B-200K-Llamafied-function-calling-v2?
    2. If I want to learn how to fine-tune for function calling, the Dataset and the Script/Notebook to train for structured responses are what I need to buy, aren't they?
    3. You said, "btw, I'm getting very poor results on the model. I'll see what the authors say, but it seems a supervised fine-tune is required at the very least." What happened? Are the results good enough now?

    Thanks for your work!

    1. 1. You'll get access to both. The adapter model allows you to load the base model and then apply the adapter on top (see the sketch below). It can be useful if you wish to apply multiple adapters. It's a specific, advanced use case.

      2. Yes, the Dataset and the notebook to train.

      3. Yes, the prompt format wasn't clear and that was affecting the model. I've now put the correct prompt format in the model card.
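
      For illustration, applying the adapter over the base model with the peft library looks roughly like this (a minimal sketch; the base model ID here is an assumption – use whichever llamafied base the adapter's model card specifies):

          from transformers import AutoModelForCausalLM, AutoTokenizer
          from peft import PeftModel

          base_id = "01-ai/Yi-34B-200K"  # assumption: substitute the llamafied base from the adapter card
          adapter_id = "Trelis/Yi-34B-200K-Llamafied-function-calling-adapters-v2"

          model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
          tokenizer = AutoTokenizer.from_pretrained(adapter_id)

          # Apply the LoRA adapter on top of the base weights
          model = PeftModel.from_pretrained(model, adapter_id)

          # Optionally merge the adapter in for faster inference
          model = model.merge_and_unload()

      Several adapters can then be swapped over the same base without re-downloading the full 34B weights.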

  3. I purchased a license for the dataset – "Trelis/function_calling_v3".
    I am getting the following error:
    “Trelis/function_calling_v3 is awaiting a review from the repo authors.”
    How long does it take to authorize a new user?

  4. I am interested in offering function calling in a SaaS application. Can you explain how the "per user" license is to be understood in this context? Is every end user of the SaaS application a user? Thanks a lot!

    1. Howdy!

      The simple way to think about "user" is: who has access to the weights?

      – If you (the user) are building a SaaS application containing a model, where you are hosting the model on your own server (or a server you rent), then your customers don't need a license.
      – If you are in a company where multiple devs are working on a project using the weights, they would each buy access.
      – If your customers are hosting the model (as opposed to just inferencing, or using a SaaS that makes use of your hosted endpoint or service), then they would need to buy access.

    1. Howdy Giuseppe, I love the question:

      – TinyLlama is very weak in chat format. It has trouble finishing sentences. Phi-2 is a better model (but not permitted for commercial use, although I have petitioned Microsoft about that).
      – I have tried fine-tuning tiny models and I find that they can either a) function call or b) chat. I've tried a lot of things, and when they can function call they can no longer answer basic queries (like how many planets are in our solar system).

      Would it even be useful if the model could ONLY function call? What do you think?

      1. Thank you for sharing your insights!

        I think it could be interesting to also have an ONLY-function-calling model, especially considering local LLMs for small devices.

        Perhaps an approach to test could be to always have (and maybe fine-tune on) a "conversational_response" function with an "answer" parameter, as sketched below. Do you think it could work?

        Best,
        Giuseppe
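
        For illustration, the idea amounts to adding one catch-all function to the tool list, so that even plain-text replies come back as function calls. A minimal sketch of such a schema (the names are hypothetical, taken from the suggestion above, not from any product on this page):

            # Hypothetical catch-all tool: ordinary chat replies are emitted
            # as calls to this function instead of as free-form text.
            conversational_response = {
                "name": "conversational_response",
                "description": "Respond conversationally when no other function applies.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "answer": {
                            "type": "string",
                            "description": "The plain-text reply to show the user.",
                        }
                    },
                    "required": ["answer"],
                },
            }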

  5. Which of the fine-tuned models for function calling would be best to run on my local machine? I can run dolphin-2.6-mistral-7b.Q3_K_M.gguf decently on my MacBook, so I was hoping that your Mistral 7B fine-tuned for functions would work as well.

  6. While training OpenChat for function calling, I'm getting this error:

    TypeError: MistralForCausalLM.forward() got an unexpected keyword argument 'loss_mask'

    1. Can you post on the GitHub repo if you have purchased access? If you only bought the scripts, then please share here:
      – what specific script you are using for training
      – what you changed versus the base script
      – where, specifically, the error occurs

  7. I'm using the fine_tuning_function_calling_v3 notebook.

    I have changed the following:

    FROM:

        base_model = "meta-llama/Llama-2-7b-chat-hf"
        config = LoraConfig(
            r=8,  # typically use 8 for models 7B or larger and use 128 for smaller models
            lora_alpha=32,  # typically use 8 for models 7B or larger and use 128 for smaller models
            # r=128,
            # lora_alpha=512,
            target_modules=[
                "q_proj",
                "k_proj",
                "v_proj",
                "o_proj",
                "gate_proj",
                "up_proj",
                "down_proj",
                "lora_magnitude_vector",  # required for DoRA
            ],
            lora_dropout=0.1,
            bias="none",
            task_type="CAUSAL_LM",
            # use_dora=True
        )
        B_INST, E_INST = "[INST] ", " [/INST]"  # Llama 2 or Mistral style

    TO:

        base_model = "openchat/openchat_3.5"

        config = LoraConfig(
            r=8,
            lora_alpha=32,
            target_modules=[
                "self_attn.q_proj",
                "self_attn.k_proj",
                "self_attn.v_proj",
                "self_attn.o_proj",
                "mlp.gate_proj",
                "mlp.up_proj",
                "mlp.down_proj",
            ],
            lora_dropout=0.1,
            bias="none",
            task_type="CASUAL_LM",
        )
        B_INST, E_INST = "GPT4 Correct User: ", "GPT4 Correct Assistant:"

    And I'm getting the following error:

        TypeError: MistralForCausalLM.forward() got an unexpected keyword argument 'loss_mask'
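
    For context on what this error typically means: a loss_mask column is presumably added by the notebook's data collator (so that loss is computed only on selected tokens), but MistralForCausalLM.forward() does not accept a loss_mask keyword, so the column must be consumed before the forward pass. A common pattern for this (a minimal sketch under that assumption, not the notebook's actual code) is to pop the mask out of the batch in a custom Trainer:

        import torch
        from transformers import Trainer

        class MaskedLossTrainer(Trainer):
            # Sketch: apply a per-token loss_mask without passing it to forward()
            def compute_loss(self, model, inputs, return_outputs=False):
                loss_mask = inputs.pop("loss_mask")  # remove before calling the model
                labels = inputs.pop("labels")
                outputs = model(**inputs)

                # Shift so that tokens < n predict token n (standard causal LM loss)
                logits = outputs.logits[..., :-1, :].contiguous()
                labels = labels[..., 1:].contiguous()
                mask = loss_mask[..., 1:].contiguous().float().view(-1)

                loss_fct = torch.nn.CrossEntropyLoss(reduction="none")
                token_loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))

                # Average only over unmasked (trained-on) tokens
                loss = (token_loss * mask).sum() / mask.sum().clamp(min=1.0)
                return (loss, outputs) if return_outputs else loss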

  8. Could you do a function-calling fine-tune of the Aya 7B or 13B models? That could be quite useful for people who need multilingual function calling. I'm not sure how well this might work, but right now GPT-3.5 seems to be the only model for this use case.

          1. Yeah, I'm not sure how good real-world performance is. But I don't know of any other open-source model supporting these languages either… maybe doing some language-specific fine-tuning on Mistral 7B or Phi or similar models could work? And then training it for function calling?

  9. I am using your model to send emails using the function-calling feature, and I want it to use some info from my database, like sender name, email, and phone number, when crafting the email. It performs well at following the user's prompt and creating the email body, but it ignores the sender information that I am providing as a system prompt. I tried using the "user" and "system" roles, but it still doesn't work. What should I do?

    1. What specific model are you using? Can you please comment in the issues/community section of that model on Hugging Face?

      Regarding the system message, it is probably best to include that information in the user's first message instead (see the sketch below). Any further questions – just post on the relevant HF repo.
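
      For illustration, folding the database details into the first user turn might look like this (a minimal sketch; the names and values are hypothetical, and the exact chat format depends on the specific model):

          # Hypothetical sender details pulled from your database
          sender_info = "Sender: Jane Doe <jane@example.com>, phone +1-555-0100."
          user_request = "Write a follow-up email about tomorrow's meeting."

          # Prepend the would-be system prompt to the first user message
          messages = [{"role": "user", "content": sender_info + "\n\n" + user_request}]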
