Includes:
- Deploy a language model to an EC2 server (e.g. AWS).
- Deploy APIs for Llama 70B in one click.
- Deploy a 100k+ long-context Yi API in one click.
- Includes prompt formats and guidance for Llama 2, Mistral, Yi and DeepSeek models.
- Run inference with function-calling models.
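To illustrate why per-model prompt formats matter, here is a minimal sketch of the Llama 2 chat template for a single-turn prompt (the helper name is my own; Mistral, Yi and DeepSeek each use different templates):

```python
def llama2_prompt(system: str, user: str) -> str:
    # Llama 2 chat format: the system prompt is wrapped in <<SYS>> tags
    # inside the first [INST] block, followed by the user message.
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"
```

Sending a raw question without this wrapping tends to degrade chat-tuned model output, which is why the repo documents the format for each model family.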
If I want to deploy CodeLlama on AWS, does the repo provide instructions and setup files for that scenario?
Yes, if you are running an EC2 server on AWS with a GPU, this repo provides install instructions.
Separately, the repo explains how to use RunPod, which is cheaper per hour unless you have lots of free AWS credits.
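Once a server is running, clients typically call it over HTTP with a JSON body. As a minimal sketch of building such a request (the field names `prompt` and `max_tokens` are assumptions here; the exact schema depends on the serving framework the repo sets up):

```python
import json

def build_completion_request(prompt: str, max_tokens: int = 256) -> str:
    # Hypothetical JSON body for a text-completion endpoint; adjust the
    # field names to match whatever API server you deploy.
    body = {"prompt": prompt, "max_tokens": max_tokens}
    return json.dumps(body)
```

The returned string can then be POSTed to the server's completion endpoint with any HTTP client.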
Hello Ronan,
Hope you are doing well.
Quick question. If I were to purchase all of your repos, would you be willing to offer a bundle/better price? I believe your work will expedite our research.
Thanks
Krishna
Howdy Krishna, no discounts, but I adjust the price of the repos as I add more content, so there's a benefit to buying early.
Cheers, Ronan