Llama2 WebUI

04 May, 2024

What is Llama2 WebUI?

Llama2 WebUI allows you to run Llama 2 with a Gradio web UI on GPU or CPU from anywhere (Linux/Windows/Mac).


Llama2 WebUI Features


Method 1: From PyPI

Terminal window
pip install llama2-wrapper

The newest llama2-wrapper>=0.1.14 supports llama.cpp’s gguf models.

If you would like to use older ggml models, install llama2-wrapper<=0.1.13 or manually install llama-cpp-python==0.1.77.

Method 2: From Source:

Terminal window
git clone https://github.com/liltom-eth/llama2-webui.git
cd llama2-webui
pip install -r requirements.txt

Install Issues:

bitsandbytes >= 0.39 may not work on older NVIDIA GPUs. In that case, to use LOAD_IN_8BIT, you may have to downgrade:

  • pip install bitsandbytes==0.38.1

bitsandbytes also needs a special install on Windows:

Terminal window
pip uninstall bitsandbytes
pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.0-py3-none-win_amd64.whl


Start Chat UI

Run the chatbot with the web UI:

Terminal window
python app.py

app.py will load the default config from .env, which uses llama.cpp as the backend to run the llama-2-7b-chat.Q4_0.gguf model for inference. The model will be downloaded automatically:

Terminal window
Running on backend llama.cpp.
Use default model path: ./models/llama-2-7b-chat.Q4_0.gguf
Start downloading model to: ./models/llama-2-7b-chat.Q4_0.gguf

You can also customize MODEL_PATH, BACKEND_TYPE, and the model configs in the .env file to run different llama2 models on different backends (llama.cpp, transformers, gptq).
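For example, a minimal .env pointing at a GPTQ model might look like this (the variable names MODEL_PATH, BACKEND_TYPE, and LOAD_IN_8BIT appear elsewhere in this document; the values shown here are illustrative):

```
MODEL_PATH=./models/Llama-2-7b-Chat-GPTQ
BACKEND_TYPE=gptq
LOAD_IN_8BIT=False
```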

Start Code Llama UI

We provide a code completion / filling UI for Code Llama.

The base model Code Llama and the extended model Code Llama - Python are not fine-tuned to follow instructions. They should be prompted so that the expected answer is the natural continuation of the prompt, which makes these two models suited to code filling and code completion.

Here is an example running CodeLlama code completion on the llama.cpp backend:

Terminal window
python code_completion.py --model_path ./models/codellama-7b.Q4_0.gguf


codellama-7b.Q4_0.gguf can be downloaded from TheBloke/CodeLlama-7B-GGUF.

Code Llama - Instruct is trained on natural-language instruction inputs paired with expected outputs, which improves its ability to understand what a prompt is asking for. That means the instruct models can be used in a chatbot-like app.

Example: run CodeLlama chat on the gptq backend:

Terminal window
python app.py --backend_type gptq --model_path ./models/CodeLlama-7B-Instruct-GPTQ/ --share True


CodeLlama-7B-Instruct-GPTQ can be downloaded from TheBloke/CodeLlama-7B-Instruct-GPTQ

Use llama2-wrapper for Your App

Use llama2-wrapper as your local llama2 backend to answer questions and more (colab example):

Terminal window
# pip install llama2-wrapper
from llama2_wrapper import LLAMA2_WRAPPER, get_prompt
llama2_wrapper = LLAMA2_WRAPPER()
# Default running on backend llama.cpp.
# Automatically downloading model to: ./models/llama-2-7b-chat.ggmlv3.q4_0.bin
prompt = "Do you know Pytorch"
answer = llama2_wrapper(get_prompt(prompt), temperature=0.9)
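Under the hood, get_prompt wraps the raw question in Llama 2's chat template. A minimal sketch of that template (an illustration of the format, not the library's actual implementation):

```python
def build_llama2_prompt(message: str,
                        system_prompt: str = "You are a helpful assistant.") -> str:
    # Llama 2 chat format: the system prompt sits inside <<SYS>> tags,
    # and the whole turn is wrapped in [INST] ... [/INST]
    return f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{message} [/INST]"

prompt = build_llama2_prompt("Do you know Pytorch")
```

The wrapped prompt is what gets passed to the model for generation.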

Run a gptq llama2 model on an Nvidia GPU (colab example):

Terminal window
from llama2_wrapper import LLAMA2_WRAPPER
llama2_wrapper = LLAMA2_WRAPPER(backend_type="gptq")
# Automatically downloading model to: ./models/Llama-2-7b-Chat-GPTQ

Run llama2-7b with bitsandbytes 8-bit by specifying a model_path:

Terminal window
from llama2_wrapper import LLAMA2_WRAPPER
llama2_wrapper = LLAMA2_WRAPPER(
    model_path="./models/Llama-2-7b-chat-hf",
    backend_type="transformers",
    load_in_8bit=True,
)

Check the API documentation for more usage examples.

Start OpenAI Compatible API

llama2-wrapper offers a web server that acts as a drop-in replacement for the OpenAI API. This allows you to use Llama2 models with any OpenAI-compatible clients, libraries, or services.

Start the FastAPI server:

Terminal window
python -m llama2_wrapper.server

It will use llama.cpp as the backend by default to run the llama-2-7b-chat.ggmlv3.q4_0.bin model.

Start the FastAPI server with the gptq backend:

Terminal window
python -m llama2_wrapper.server --backend_type gptq

Navigate to http://localhost:8000/docs to see the OpenAPI documentation.
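Because the server mirrors the OpenAI API, any OpenAI-style client can talk to it. As a sketch, the request body a client would POST to the chat-completions endpoint might look like this (the endpoint path and model name are assumptions based on the OpenAI API shape, not taken from this project):

```python
import json

# Chat-completion request body for an OpenAI-compatible server,
# e.g. POST http://localhost:8000/v1/chat/completions
payload = {
    "model": "llama-2-7b-chat",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Do you know Pytorch?"},
    ],
    "temperature": 0.9,
}
body = json.dumps(payload)
```

Any HTTP client or the official OpenAI SDK (pointed at the local base URL) can send this request.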

Basic settings

  • -h, --help: Show this help message.
  • --model_path: The path to the model to use for generating completions.
  • --backend_type: Backend for llama2; options: llama.cpp, gptq, transformers.
  • --max_tokens: Maximum context size.
  • --load_in_8bit: Whether to use bitsandbytes to run the model in 8-bit mode (transformers models only).
  • --verbose: Whether to print verbose output to stderr.
  • --host: API address.
  • --port: API port.


Benchmark

Run the benchmark script to measure performance on your device; benchmark.py will load the same .env as app.py:

Terminal window
python benchmark.py

You can also select the iter, backend_type, and model_path the benchmark will run with (overriding the .env settings):

Terminal window
python benchmark.py --iter NB_OF_ITERATIONS --backend_type gptq

By default, the number of iterations is 5. You can set it to any value for a faster or more accurate result, but please only report results obtained with at least 5 iterations.

This colab example also shows how to benchmark a gptq model on a free Google Colab T4 GPU.
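A tokens-per-second average over several iterations, in the spirit of what benchmark.py reports, can be sketched like this (the helper is an illustration, not the script's actual code; generate is assumed to return the number of tokens produced):

```python
import time

def average_tokens_per_sec(generate, prompt: str, iterations: int = 5) -> float:
    # Time each generation run and average the per-run speeds
    speeds = []
    for _ in range(iterations):
        start = time.perf_counter()
        n_tokens = generate(prompt)  # assumed to return a token count
        elapsed = time.perf_counter() - start
        speeds.append(n_tokens / elapsed)
    return sum(speeds) / len(speeds)
```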

Some benchmark results:

| Model | Precision | Device | RAM / GPU VRAM | Speed (tokens/sec) | Load time (s) |
|---|---|---|---|---|---|
| Llama-2-7b-chat-hf | 8 bit | NVIDIA RTX 2080 Ti | 7.7 GB VRAM | 3.76 | 641.36 |
| Llama-2-7b-Chat-GPTQ | 4 bit | NVIDIA RTX 2080 Ti | 5.8 GB VRAM | 18.85 | 192.91 |
| Llama-2-7b-Chat-GPTQ | 4 bit | Google Colab T4 | 5.8 GB VRAM | 18.19 | 37.44 |
| llama-2-7b-chat.ggmlv3.q4_0 | 4 bit | Apple M1 Pro CPU | 5.4 GB RAM | 17.90 | 0.18 |
| llama-2-7b-chat.ggmlv3.q4_0 | 4 bit | Apple M2 CPU | 5.4 GB RAM | 13.70 | 0.13 |
| llama-2-7b-chat.ggmlv3.q4_0 | 4 bit | Apple M2 Metal | 5.4 GB RAM | 12.60 | 0.10 |
| llama-2-7b-chat.ggmlv3.q2_K | 2 bit | Intel i7-8700 | 4.5 GB RAM | 7.88 | 31.90 |

Download Llama-2 Models

Llama 2 is a collection of pre-trained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters.

Llama-2-7b-Chat-GPTQ contains the GPTQ model files for Meta's Llama 2 7b Chat. GPTQ 4-bit Llama-2 models require less GPU VRAM to run.

Model List

| Model Name | set MODEL_PATH in .env | Download URL |
|---|---|---|
| TheBloke/Llama-2-7b-Chat-GGUF | /path-to/llama-2-7b-chat.Q4_0.gguf | [Link](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q4_0.gguf) |

Running the 4-bit model Llama-2-7b-Chat-GPTQ needs a GPU with 6 GB of VRAM.

Running the 4-bit model llama-2-7b-chat.ggmlv3.q4_0.bin needs a CPU with 6 GB of RAM. There is also a list of other 2-, 3-, 4-, 5-, 6-, and 8-bit GGML models available from TheBloke/Llama-2-7B-Chat-GGML.

Download Script

These models can be downloaded through:

Terminal window
python -m llama2_wrapper.download --repo_id TheBloke/CodeLlama-7B-Python-GPTQ
python -m llama2_wrapper.download --repo_id TheBloke/Llama-2-7b-Chat-GGUF --filename llama-2-7b-chat.Q4_0.gguf --save_dir ./models

Or clone directly with git:

Terminal window
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone [email protected]:meta-llama/Llama-2-7b-chat-hf

To download the official Llama 2 models, you need to request access at https://ai.meta.com/llama/ and also enable access on repos like meta-llama/Llama-2-7b-chat-hf. Requests are typically processed within hours.

For GPTQ models like TheBloke/Llama-2-7b-Chat-GPTQ and GGML models like TheBloke/Llama-2-7B-Chat-GGML, you can download directly without requesting access.


Env Examples

There are some examples in the ./env_examples/ folder.

| Model Setup | Example .env |
|---|---|
| Llama-2-7b-chat-hf 8-bit (transformers backend) | .env.7b_8bit_example |
| Llama-2-7b-Chat-GPTQ 4-bit (gptq transformers backend) | .env.7b_gptq_example |
| Llama-2-7B-Chat-GGML 4-bit (llama.cpp backend) | .env.7b_ggmlv3_q4_0_example |
| Llama-2-13b-chat-hf (transformers backend) | .env.13b_example |

Run on Nvidia GPU

Running requires around 14 GB of GPU VRAM for Llama-2-7b and 28 GB for Llama-2-13b.

If you are running on multiple GPUs, the model will be loaded across them automatically, splitting the VRAM usage. That allows you to run Llama-2-7b (which requires 14 GB of GPU VRAM) on a setup like two GPUs with 11 GB of VRAM each.

Run bitsandbytes 8 bit

If you do not have enough memory, you can set LOAD_IN_8BIT to True in .env. This can reduce memory usage by about half, with slightly degraded model quality. It is compatible with the CPU, GPU, and Metal backends.

Llama-2-7b with 8-bit compression can run on a single GPU with 8 GB of VRAM, such as an NVIDIA RTX 2080 Ti, RTX 4080, T4, or V100 (16 GB).
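The VRAM figures above follow from simple arithmetic on parameter count and precision. A back-of-the-envelope estimate for the weights alone (activations and the KV cache add overhead on top) can be sketched as:

```python
def weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
    # Bytes needed for the weights alone: params * bits / 8
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

fp16_7b = weight_memory_gb(7, 16)  # 16-bit weights for a 7B model
int8_7b = weight_memory_gb(7, 8)   # 8-bit compression halves that
```

This matches the figures above: about 14 GB for 7B at 16-bit, halved to about 7 GB in 8-bit mode.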

Run GPTQ 4 bit

If you want to run a 4-bit Llama-2 model like Llama-2-7b-Chat-GPTQ, set BACKEND_TYPE to gptq in .env, as in the example .env.7b_gptq_example.

Make sure you have downloaded the 4-bit model from Llama-2-7b-Chat-GPTQ and set MODEL_PATH and the other arguments in the .env file.

Llama-2-7b-Chat-GPTQ can run on a single GPU with 6 GB of VRAM.

If you encounter an error like NameError: name 'autogptq_cuda_256' is not defined, try installing a prebuilt AutoGPTQ wheel:

Terminal window
pip install https://github.com/PanQiWei/AutoGPTQ/releases/download/v0.3.0/auto_gptq-0.3.0+cu117-cp310-cp310-linux_x86_64.whl

Run on CPU

Running a Llama-2 model on CPU requires the llama.cpp dependency and its Python bindings, which are already installed.

Download a GGML model like llama-2-7b-chat.ggmlv3.q4_0.bin following the Download Llama-2 Models section. The llama-2-7b-chat.ggmlv3.q4_0.bin model requires at least 6 GB of RAM to run on CPU.

Copy a config like .env.7b_ggmlv3_q4_0_example from env_examples to .env.

Run the web UI with python app.py.

Mac Metal Acceleration

For Mac users, you can also set up Metal acceleration by reinstalling the llama.cpp dependencies:

Terminal window
pip uninstall llama-cpp-python -y
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
pip install 'llama-cpp-python[server]'


AMD/Nvidia GPU Acceleration

If you would like to use an AMD or Nvidia GPU for acceleration, check llama.cpp's GPU build options (e.g. cuBLAS for Nvidia, hipBLAS/ROCm for AMD).