In this tutorial, Chris shows you how to run the Vicuna 13B and Alpaca AI models locally using Python.
llama-cpp-python (https://github.com/abetlen/llama-cpp-...) is a very cool new package that allows you to run LLaMA-based models such as Stanford Alpaca, Vicuna, or Baize locally using Python. Although these models are not as powerful as ChatGPT or GPT-4, being able to run capable LLMs locally opens up a range of possibilities. The package provides Python bindings to the native llama.cpp C/C++ library.
He explains the differences between Vicuna and Alpaca, shows you how to download the Vicuna model, and then walks through installing llama-cpp-python and building a basic Python app that lets you query both the Vicuna and Alpaca models and compare their responses.
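To give a sense of what such an app looks like, here is a minimal sketch using llama-cpp-python's `Llama` class. The model file paths and the Q/A prompt format are assumptions for illustration — you would point them at whichever ggml weights you have downloaded:

```python
# Minimal sketch of querying local ggml models with llama-cpp-python.
# Install with: pip install llama-cpp-python
# The model paths below are placeholders -- download the weights separately.
import os


def format_prompt(question: str) -> str:
    # A simple question/answer prompt format; adjust to suit your model.
    return f"Q: {question} A:"


def query_model(model_path: str, question: str, max_tokens: int = 128) -> str:
    from llama_cpp import Llama  # imported lazily so the helper above works standalone

    llm = Llama(model_path=model_path)
    out = llm(format_prompt(question), max_tokens=max_tokens, stop=["Q:"])
    # Completions are returned in an OpenAI-style dict.
    return out["choices"][0]["text"].strip()


if __name__ == "__main__":
    # Hypothetical local paths -- compare the two models on the same question.
    for path in ("./models/vicuna-13b.ggml.bin", "./models/alpaca-7b.ggml.bin"):
        if os.path.exists(path):
            print(path, "->", query_model(path, "What is the capital of France?"))
```

Running the same question through both models side by side like this makes the stylistic differences between Vicuna and Alpaca easy to see.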
If you want to build Python apps against AI LLMs, this is the video for you.