Advanced Fine-tuning Scripts

Full repo access includes:

Watch an overview of the ADVANCED fine-tuning repo

Video Tutorials

Embeddings:


Supervised Fine-Tuning:


Unsupervised Fine-Tuning:


Preparing Fine-tuning Datasets:


Quantization:


16 thoughts on “Advanced Fine-tuning Scripts”

    1. Adeel, thanks for writing here and by email. Your email went to spam but I have fished it out and responded.

      Your question related to doing training on a Mac M1 or M2. While the same high-level approaches can be used, the scripts are quite different, as the best route is Llama.cpp – which is written in C/C++, not Python. I have just added some notes to this ADVANCED fine-tuning repo to help people get started, but detailed scripts for fine-tuning on a Mac are not included.

  1. Hi, I’ve purchased the supervised learning script, but it doesn’t seem to include the QA-generation Python files you show in the video. Will you grant me the corresponding GitHub permissions?

    1. The scripts are for causal language models (e.g. GPT-style). M2M models would use different loading code and often different dataset formats. If you’re only working with M2M models, you might still learn something from the repo, but I can’t strongly recommend it.

      1. Hi,
        thank you for the feedback. No, I don’t need M2M only, so I’ll buy the repo. I am a beginner and everything is explained beautifully in the videos, so I’m looking forward to it – great job! Do you have any plans to expand your scripts repo with M2M soon? Alternatively, is it possible to commission such an M2M fine-tuning script separately? If so, and you are open to it, where can I contact you? Thank you and have a nice day.

        1. Currently I don’t have plans to do M2M; it’s not something I’ve had much demand for.

          And yes, you’ll see my email with the purchase receipt so feel free to respond there if there are issues.
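To make the causal-vs-M2M distinction above concrete: causal models (GPT/Llama style) are loaded with `AutoModelForCausalLM` and trained on a single concatenated text stream, while M2M-style translation models (e.g. `facebook/m2m100_418M`) are seq2seq models loaded with `AutoModelForSeq2SeqLM` and fed separate source/target fields. A minimal sketch of the two record shapes – the field names here are illustrative, not taken from the repo:

```python
# Causal LM (GPT/Llama style): prompt and completion are concatenated into
# one text field; the model learns next-token prediction over the whole stream.
# Such models are loaded with transformers.AutoModelForCausalLM.
def causal_record(prompt: str, response: str) -> dict:
    return {"text": f"{prompt}\n{response}"}

# Seq2seq / M2M style: source and target stay in separate fields, and the
# language pair is signalled to the tokenizer rather than embedded in the text.
# Such models are loaded with transformers.AutoModelForSeq2SeqLM.
def seq2seq_record(src: str, tgt: str,
                   src_lang: str = "en", tgt_lang: str = "fr") -> dict:
    return {"source": src, "target": tgt,
            "src_lang": src_lang, "tgt_lang": tgt_lang}

print(causal_record("Translate to French: Hello", "Bonjour"))
print(seq2seq_record("Hello", "Bonjour"))
```

Because the record shapes differ, the tokenization and collation steps differ too, which is why causal-LM scripts don’t transfer directly to M2M fine-tuning.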

  2. Hello, when I use QLoRA_Fine_Tuning_SUPERVISED.ipynb with my own dataset and the model set to model_id = "meta-llama/Llama-2-7b-chat-hf", I get this error:
    RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

    Can you suggest a solution?
    I have invited you to view my Colab.

    Thank you for your help

    1. Howdy, since you purchased repo access, you can post there by creating an issue. That’s my preferred way to respond as it helps all others who have purchased the repo.

      Please include a code snippet for replication in your post. Thanks
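For readers hitting the same RuntimeError: it generally means that nothing in the computation graph requires gradients – typically because all base-model parameters were frozen before the LoRA adapters were attached, or because gradient checkpointing severed the input graph. A minimal pure-PyTorch reproduction and fix (the tiny linear model here is a stand-in, not the notebook's code):

```python
import torch

# Reproduce the error: every parameter is frozen, so the loss has no grad_fn.
model = torch.nn.Linear(4, 2)  # stand-in for a fully frozen base model
for p in model.parameters():
    p.requires_grad = False

loss = model(torch.randn(3, 4)).sum()
try:
    loss.backward()
except RuntimeError as e:
    print(e)  # "element 0 of tensors does not require grad and does not have a grad_fn"

# Fix: ensure something in the graph is trainable
# (in QLoRA, that role is played by the LoRA adapter weights).
for p in model.parameters():
    p.requires_grad = True

loss = model(torch.randn(3, 4)).sum()
loss.backward()  # now succeeds
print(model.weight.grad is not None)  # True
```

In a QLoRA setup, the usual remedies are to call `peft.prepare_model_for_kbit_training(model)` before `get_peft_model`, and `model.enable_input_require_grads()` when gradient checkpointing is enabled – though the exact cause in any given notebook needs checking against the code.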
