To install models manually from HuggingFace, follow the steps below.
As an example, if we wanted to download Pygmalion-6b, we would enter the following repository link into GitHub Desktop:
https://huggingface.co/PygmalionAI/pygmalion-6b
For the local path, set it to one of the following folders, depending on your backend:

[KoboldAI Folder]/models
[Oobabooga Folder]/text-generation-webui/models

For GPTQ models, check for a file with 4bit- in its name. If no such file exists, rename the safetensors file you see so it is named 4bit. If you see an Xg value in the filename, incorporate it into the new name (i.e. if the model has 128g in the filename, make the new filename 4bit-128g instead).

You can also clone the repository with git. Set the local path to the same backend folders listed above, make sure your terminal is in that folder, and run:

git clone <repo link>
Replace <repo link> with the HuggingFace repository link. As an example, if we wanted to download Pygmalion-6b, the command would look like this:
git clone https://huggingface.co/PygmalionAI/pygmalion-6b
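The GPTQ renaming rule described above can be sketched as shell commands. The filename and the temporary directory here are hypothetical stand-ins; the real file comes from the repository you cloned.

```shell
# Sketch of the GPTQ rename rule, using a temporary directory and a
# hypothetical filename (model-128g.safetensors stands in for the
# file you actually downloaded).
dir=$(mktemp -d)
cd "$dir"
touch model-128g.safetensors

# The name contains the group size "128g", so incorporate it into the
# new name: 4bit-128g
mv model-128g.safetensors 4bit-128g.safetensors

ls "$dir"
```

If the original filename had no group size, the target name would simply be 4bit.safetensors instead.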
GGML models are trickier to download than base and GPTQ models. To download them, follow these steps.
For this tutorial, we will be using this GGML repository by concedo.
Some repositories may have multiple versions of .bin files to download. In this case, download the version that has ggml in its name. You may also notice different qX_X and f16 variants. We recommend downloading a model between q4_0 and q5_1 for GGML models.
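As an illustration of the selection rule above, here is a small shell sketch using hypothetical filenames (a real repository's file list will differ):

```shell
# Hypothetical list of .bin files a GGML repository might offer.
files="pygmalion-6b-ggml-q4_0.bin
pygmalion-6b-ggml-q8_0.bin
pygmalion-6b-ggml-f16.bin"

# Keep only the ggml builds in the recommended q4_0..q5_1 range:
echo "$files" | grep ggml | grep -E 'q(4_[01]|5_[01])'
# prints: pygmalion-6b-ggml-q4_0.bin
```

Here q8_0 is filtered out as above the recommended range, and f16 is the unquantized variant.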
Move the .bin file you downloaded to where KoboldCPP is saved, or to [Oobabooga Folder]/text-generation-webui/models for Oobabooga.

Alternatively, you can download the file with wget. You will need a secondary device to get the link to the model, and you should install wget on your system if it isn't already installed. Make sure your terminal instance is in the folder where KoboldCPP is saved, or in [Oobabooga Folder]/text-generation-webui/models for Oobabooga.
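Before downloading, you can check whether wget is available. This is a generic POSIX check, not specific to any backend:

```shell
# Verify wget is present; print a hint if it is missing.
if command -v wget >/dev/null 2>&1; then
    echo "wget found: $(command -v wget)"
else
    echo "wget not found; install it with your package manager"
fi
```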
On your secondary device, copy the model's download link (Copy Link), then run:

wget <link>
Replace <link> with the link you copied over.