Full repo access includes:
- Direct Preference Optimization (DPO) Notebook [Purchase the individual script here]
- Supervised Fine-Tuning Notebook [Purchase the individual script here]
- Unsupervised Fine-Tuning Notebook [Purchase the individual script here]
- Quantization Scripts for AWQ and GGUF [Purchase the individual script here]
- Fine-tuning for Function Calling and Structured Responses v3 script here (for use with the function_calling_v3 dataset).
- Purchase the v2 script here (for use with the function_calling_extended dataset). DEPRECATED.
- Scripts to generate and use Embeddings [Purchase the individual script here]
- Q&A Dataset Generation Script
- LLM Comparison Script (compare models on code generation, passkey retrieval, website generation and reverse responses). [Purchase the individual script here]
- Priority support over public forums. (Typical responses within 1-3 days).
I purchased access and I’m not seeing how to get to the repo (jroberge16)
Howdy Josh, I just added you there yesterday evening. Right now I manually add anyone who buys – within 24 hours of purchase. I’m working to automate the process. Cheers, Ronan
Hey, I purchased the script and had a few questions on the documentation, but never got any reply!
Adeel, thanks for writing here and by email. Your email went to spam but I have fished it out and responded.
Your question related to training on a Mac M1 or M2. While the same high-level approaches can be used, the scripts are quite different, as the best way is to use Llama.cpp – which is written in C/C++, not Python. I have just added some notes to this ADVANCED fine-tuning repo to help people get started, but detailed scripts for fine-tuning on a Mac are not included.
Hi, I’ve purchased the supervised fine-tuning script, but it doesn’t seem to include the Q&A generation Python files you show in the video. Could you grant me the corresponding GitHub permissions?
No problem – yes, that script is included in the repo, so I’ve refunded the individual script purchase. You should now have GitHub access.
Hello, please contact me – I would like to make a purchase. I want my assistant to be able to call functions. Greetings.
You can purchase the dataset, training script and do the training yourself AND/OR buy pre-trained models here: https://trelis.com/function-calling/
Hi,
I want to buy access to the GitHub repo, but I wanted to ask whether it is also possible to fine-tune M2M models (machine translation) via one of the offered scripts, such as https://huggingface.co/facebook/nllb-200-3.3B or https://huggingface.co/facebook/m2m100_1.2B. Thank you very much for your reply and advice. I wish you a wonderful day.
The scripts are for causal language models (e.g. GPT-type). M2M models use different loading code and often different dataset formats. If you’re only doing M2M models, you might learn something from the repo, but I can’t strongly recommend it.
Hi,
thank you for the feedback. No problem – I don’t need M2M only, so I’ll buy the repo. I am a beginner and everything is explained beautifully in the videos, so I’m looking forward to it – great job! Do you have any plans to expand your scripts repo with M2M soon? Alternatively, is it possible to commission such an M2M fine-tuning script separately? If so, and you are open to it, where can I contact you? Thank you and have a nice day.
Currently I don’t have plans to do M2M, it’s not something I’ve had as much demand for.
And yes, you’ll see my email with the purchase receipt so feel free to respond there if there are issues.
Hello there,
I was following the video below and wanted access to just the script that helps create the dataset; however, I do not see it in the Jupyter notebook.
Can I please get access to that? The Google Colab / Jupyter notebook doesn’t have what I need.
https://www.youtube.com/watch?v=egnf8L-EUJU&t=1s
Howdy, I just reached out to you by email, cheers, Ronan
Hello, when I use QLoRA_Fine_Tuning_SUPERVISED.ipynb with my own dataset and the model set to model_id = "meta-llama/Llama-2-7b-chat-hf", I get this error:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Can you provide some solutions?
I have invited you to my Colab.
Thank you for your help
Howdy, since you purchased repo access, you can post there by creating an issue. That’s my preferred way to respond as it helps all others who have purchased the repo.
Please include a code snippet for replication in your post. Thanks
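For readers hitting the same RuntimeError: it generally means the loss tensor is not connected to any trainable parameter – for example, when the quantized base model is fully frozen and no LoRA adapters were attached (in PEFT, `prepare_model_for_kbit_training` plus `get_peft_model` normally handle this), or when gradient checkpointing is enabled without `model.enable_input_require_grads()`. A minimal PyTorch sketch of the cause and the generic fix (the layer and shapes here are placeholders, not taken from the notebook):

```python
import torch

# Stand-in "model": a single linear layer with all parameters frozen,
# mimicking a fully frozen quantized base model with no trainable adapters.
model = torch.nn.Linear(4, 1)
for p in model.parameters():
    p.requires_grad_(False)

x = torch.randn(2, 4)
loss = model(x).sum()

try:
    loss.backward()  # fails: nothing in the graph requires grad
except RuntimeError as e:
    print("Reproduced:", e)

# Generic fix: ensure the parameters you intend to train require grad
# (with PEFT/QLoRA, attaching LoRA adapters achieves exactly this).
for p in model.parameters():
    p.requires_grad_(True)

loss = model(x).sum()
loss.backward()  # now succeeds
print(model.weight.grad is not None)  # True
```

If you are using the notebook as shipped, check that the PEFT setup cells ran before the trainer was created; skipping them leaves the model with zero trainable parameters, which produces this exact error.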