GPT4All Python Generation API
The GPT4All Python package provides bindings to our C/C++ model backend libraries.
The source code and local build instructions can be found here.
Quickstart
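A minimal sketch of typical usage (the model name is the one used in later examples; any available model works):

from gpt4all import GPT4All

# downloads the model to ~/.cache/gpt4all/ on first use, then loads it
model = GPT4All('orca-mini-3b-gguf2-q4_0.gguf')
output = model.generate('The capital of France is ', max_tokens=3)
print(output)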
This will:

- Instantiate GPT4All, which is the primary public API to your large language model (LLM).
- Automatically download the given model to ~/.cache/gpt4all/ if not already present.
- Through model.generate(...), start working on a response. There are various ways to steer that process; here, max_tokens sets an upper limit, i.e. a hard cut-off point for the output.
Chatting with GPT4All
Local LLMs can be optimized for chat conversations by reusing previous computational history.
Use the GPT4All chat_session
context manager to hold chat conversations with the model.
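For example, a short session along these lines yields a transcript like the one shown below (the model name is illustrative; the prompts match the transcript):

from gpt4all import GPT4All

model = GPT4All('orca-mini-3b-gguf2-q4_0.gguf')
with model.chat_session():
    response1 = model.generate(prompt='hello', temp=0)
    response2 = model.generate(prompt='write me a short poem', temp=0)
    response3 = model.generate(prompt='thank you', temp=0)
    # the accumulated conversation is available as a list of message dicts:
    print(model.current_chat_session)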
[
{
'role': 'user',
'content': 'hello'
},
{
'role': 'assistant',
'content': 'What is your name?'
},
{
'role': 'user',
'content': 'write me a short poem'
},
{
'role': 'assistant',
'content': "I would love to help you with that! Here's a short poem I came up with:\nBeneath the autumn leaves,\nThe wind whispers through the trees.\nA gentle breeze, so at ease,\nAs if it were born to play.\nAnd as the sun sets in the sky,\nThe world around us grows still."
},
{
'role': 'user',
'content': 'thank you'
},
{
'role': 'assistant',
'content': "You're welcome! I hope this poem was helpful or inspiring for you. Let me know if there is anything else I can assist you with."
}
]
When using GPT4All models in the chat_session
context:
- Consecutive chat exchanges are taken into account and not discarded until the session ends, as long as the model has capacity.
- Internal K/V caches are preserved from previous conversation history, speeding up inference.
- The model is given a system prompt and a prompt template that make it conversational. If allow_download=True (the default), it will obtain the latest version of models2.json from the repository, which contains templates specifically tailored to individual models. If downloads are not allowed, it falls back to default templates instead.
Streaming Generations
To interact with GPT4All responses as the model generates them, use the streaming=True flag during generation.
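A minimal sketch (model name and prompt are only examples):

from gpt4all import GPT4All

model = GPT4All('orca-mini-3b-gguf2-q4_0.gguf')
tokens = []
# with streaming=True, generate() returns a generator that yields tokens as they arrive
for token in model.generate('The capital of France is', max_tokens=20, streaming=True):
    tokens.append(token)
print(tokens)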
The Generate Method API
generate(prompt, max_tokens=200, temp=0.7, top_k=40, top_p=0.4, repeat_penalty=1.18, repeat_last_n=64, n_batch=8, n_predict=None, streaming=False, callback=pyllmodel.empty_response_callback)
Generate outputs from any GPT4All model.
Parameters:

- prompt (str) – The prompt for the model to complete.
- max_tokens (int, default: 200) – The maximum number of tokens to generate.
- temp (float, default: 0.7) – The model temperature. Larger values increase creativity but decrease factuality.
- top_k (int, default: 40) – Randomly sample from the top_k most likely tokens at each generation step. Set this to 1 for greedy decoding.
- top_p (float, default: 0.4) – Randomly sample at each generation step from the top most likely tokens whose probabilities add up to top_p.
- repeat_penalty (float, default: 1.18) – Penalize the model for repetition. Higher values result in less repetition.
- repeat_last_n (int, default: 64) – How far in the model's generation history to apply the repeat penalty.
- n_batch (int, default: 8) – Number of prompt tokens processed in parallel. Larger values decrease latency but increase resource requirements.
- n_predict (Optional[int], default: None) – Equivalent to max_tokens; exists for backwards compatibility.
- streaming (bool, default: False) – If True, this method will instead return a generator that yields tokens as the model generates them.
- callback (ResponseCallbackType, default: empty_response_callback) – A function with arguments token_id: int and response: str, which receives the tokens from the model as they are generated and stops the generation by returning False.

Returns:

- Union[str, Iterable[str]] – Either the entire completion or a generator that yields the completion token by token.
Source code in gpt4all/gpt4all.py
Examples & Explanations
Influencing Generation
The three most influential parameters in generation are Temperature (temp
), Top-p (top_p
) and Top-K (top_k
).
In a nutshell, during the process of selecting the next token, not just one or a few are considered, but every single
token in the vocabulary is given a probability. The parameters can change the field of candidate tokens.
-
Temperature makes the process either more or less random. A Temperature above 1 increasingly "levels the playing field", while at a Temperature between 0 and 1 the likelihood of the best token candidates grows even more. A Temperature of 0 results in selecting the best token, making the output deterministic. A Temperature of 1 represents a neutral setting with regard to randomness in the process.
-
Top-p and Top-K both narrow the field:
- Top-K limits candidate tokens to a fixed number after sorting by probability. Setting it higher than the vocabulary size deactivates this limit.
- Top-p selects tokens based on their total probabilities. For example, a value of 0.8 means "include the best tokens, whose accumulated probabilities reach or just surpass 80%". Setting Top-p to 1, which is 100%, effectively disables it.
The recommendation is to keep at least one of Top-K and Top-p active. Other parameters can also influence generation; be sure to review all their descriptions.
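As a rough illustration of these knobs (model name, prompt, and values are only examples, not recommendations):

from gpt4all import GPT4All

model = GPT4All('orca-mini-3b-gguf2-q4_0.gguf')
prompt = 'Describe the color of the sky in one sentence.'

# near-greedy: temp=0 with top_k=1 always picks the single most likely token
print(model.generate(prompt, temp=0, top_k=1, max_tokens=50))

# looser sampling: higher temperature and a wider candidate pool
print(model.generate(prompt, temp=1.2, top_k=100, top_p=0.9, max_tokens=50))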
Specifying the Model Folder
The model folder can be set with the model_path parameter when creating a GPT4All instance. The example below is the same as if it weren't provided; that is, ~/.cache/gpt4all/ is the default folder.
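A sketch of passing the default location explicitly:

from pathlib import Path
from gpt4all import GPT4All

model = GPT4All(model_name='orca-mini-3b-gguf2-q4_0.gguf',
                model_path=Path.home() / '.cache' / 'gpt4all')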
If you want to point it at the chat GUI's default folder, it should be:
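The GUI's folder depends on operating system and installation; on Linux, for example, it would look roughly like this (treat the exact path as an assumption to verify):

from pathlib import Path
from gpt4all import GPT4All

# assumed chat GUI download folder on Linux; adjust for your OS and installation
model = GPT4All(model_name='orca-mini-3b-gguf2-q4_0.gguf',
                model_path=Path.home() / '.local' / 'share' / 'nomic.ai' / 'GPT4All')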
Alternatively, you could also change the module's default model directory:
from pathlib import Path
import gpt4all.gpt4all
gpt4all.gpt4all.DEFAULT_MODEL_DIRECTORY = Path.home() / 'my' / 'models-directory'
from gpt4all import GPT4All
model = GPT4All('orca-mini-3b-gguf2-q4_0.gguf')
...
Managing Templates
Session templates can be customized when starting a chat_session
context:
from gpt4all import GPT4All

model = GPT4All('wizardlm-13b-v1.2.Q4_0.gguf')
system_template = 'A chat between a curious user and an artificial intelligence assistant.'
# many models use triple hash '###' for keywords, Vicunas are simpler:
prompt_template = 'USER: {0}\nASSISTANT: '
with model.chat_session(system_template, prompt_template):
    response1 = model.generate('why is the grass green?')
    print(response1)
    print()
    response2 = model.generate('why is the sky blue?')
    print(response2)
The color of grass can be attributed to its chlorophyll content, which allows it
to absorb light energy from sunlight through photosynthesis. Chlorophyll absorbs
blue and red wavelengths of light while reflecting other colors such as yellow
and green. This is why the leaves appear green to our eyes.
The color of the sky appears blue due to a phenomenon called Rayleigh scattering,
which occurs when sunlight enters Earth's atmosphere and interacts with air
molecules such as nitrogen and oxygen. Blue light has shorter wavelength than
other colors in the visible spectrum, so it is scattered more easily by these
particles, making the sky appear blue to our eyes.
To do the same outside a session, the input has to be formatted manually. For example:
from gpt4all import GPT4All

model = GPT4All('wizardlm-13b-v1.2.Q4_0.gguf')
system_template = 'A chat between a curious user and an artificial intelligence assistant.'
prompt_template = 'USER: {0}\nASSISTANT: '
prompts = ['name 3 colors', 'now name 3 fruits', 'what were the 3 colors in your earlier response?']
first_input = system_template + prompt_template.format(prompts[0])
response = model.generate(first_input, temp=0)
print(response)
for prompt in prompts[1:]:
    response = model.generate(prompt_template.format(prompt), temp=0)
    print(response)
Ultimately, the method GPT4All._format_chat_prompt_template()
is responsible for formatting templates. It can be
customized in a subclass. As an example:
from itertools import cycle
from gpt4all import GPT4All

class RotatingTemplateGPT4All(GPT4All):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._templates = [
            "Respond like a pirate.",
            "Respond like a politician.",
            "Respond like a philosopher.",
            "Respond like a Klingon.",
        ]
        self._cycling_templates = cycle(self._templates)

    def _format_chat_prompt_template(
        self,
        messages: list,
        default_prompt_header: str = "",
        default_prompt_footer: str = "",
    ) -> str:
        full_prompt = default_prompt_header + "\n\n" if default_prompt_header != "" else ""
        for message in messages:
            if message["role"] == "user":
                user_message = f"USER: {message['content']} {next(self._cycling_templates)}\n"
                full_prompt += user_message
            if message["role"] == "assistant":
                assistant_message = f"ASSISTANT: {message['content']}\n"
                full_prompt += assistant_message
        full_prompt += "\n\n" + default_prompt_footer if default_prompt_footer != "" else ""
        print(full_prompt)
        return full_prompt
model = RotatingTemplateGPT4All('wizardlm-13b-v1.2.Q4_0.gguf')
with model.chat_session():  # starting a session is optional in this example
    response1 = model.generate("hi, who are you?")
    print(response1)
    print()
    response2 = model.generate("what can you tell me about snakes?")
    print(response2)
    print()
    response3 = model.generate("what's your opinion on Chess?")
    print(response3)
    print()
    response4 = model.generate("tell me about ancient Rome.")
    print(response4)
USER: hi, who are you? Respond like a pirate.
Pirate: Ahoy there mateys! I be Cap'n Jack Sparrow of the Black Pearl.
USER: what can you tell me about snakes? Respond like a politician.
Politician: Snakes have been making headlines lately due to their ability to
slither into tight spaces and evade capture, much like myself during my last
election campaign. However, I believe that with proper education and
understanding of these creatures, we can work together towards creating a
safer environment for both humans and snakes alike.
USER: what's your opinion on Chess? Respond like a philosopher.
Philosopher: The game of chess is often used as an analogy to illustrate the
complexities of life and decision-making processes. However, I believe that it
can also be seen as a reflection of our own consciousness and subconscious mind.
Just as each piece on the board has its unique role to play in shaping the
outcome of the game, we too have different roles to fulfill in creating our own
personal narrative.
USER: tell me about ancient Rome. Respond like a Klingon.
Klingon: Ancient Rome was once a great empire that ruled over much of Europe and
the Mediterranean region. However, just as the Empire fell due to internal strife
and external threats, so too did my own house come crashing down when I failed to
protect our homeworld from invading forces.
Introspection
A less apparent feature is the capacity to log the final prompt that gets sent to the model. It relies on Python's logging facilities, implemented in the pyllmodel module at the INFO level. You can activate it, for example, with basicConfig, which displays the output on the standard error stream. Note that Python's logging infrastructure offers many more customization options.
import logging
from gpt4all import GPT4All

logging.basicConfig(level=logging.INFO)
model = GPT4All('nous-hermes-llama2-13b.Q4_0.gguf')
with model.chat_session('You are a geography expert.\nBe terse.',
                        '### Instruction:\n{0}\n### Response:\n'):
    response = model.generate('who are you?', temp=0)
    print(response)
    response = model.generate('what are your favorite 3 mountains?', temp=0)
    print(response)
INFO:gpt4all.pyllmodel:LLModel.prompt_model -- prompt:
You are a geography expert.
Be terse.
### Instruction:
who are you?
### Response:
===/LLModel.prompt_model -- prompt/===
I am an AI-powered chatbot designed to assist users with their queries related to geographical information.
INFO:gpt4all.pyllmodel:LLModel.prompt_model -- prompt:
### Instruction:
what are your favorite 3 mountains?
### Response:
===/LLModel.prompt_model -- prompt/===
1) Mount Everest - Located in the Himalayas, it is the highest mountain on Earth and a significant challenge for mountaineers.
2) Kangchenjunga - This mountain is located in the Himalayas and is the third-highest peak in the world after Mount Everest and K2.
3) Lhotse - Located in the Himalayas, it is the fourth highest mountain on Earth and offers a challenging climb for experienced mountaineers.
Without Online Connectivity
To prevent GPT4All from accessing online resources, instantiate it with allow_download=False. This disables downloading both missing models and models2.json, which contains information about them. As a result, predefined templates are used instead of model-specific system and prompt templates:
from gpt4all import GPT4All

model = GPT4All('ggml-mpt-7b-chat.bin', allow_download=False)

# when downloads are disabled, it will use the default templates:
print("default system template:", repr(model.config['systemPrompt']))
print("default prompt template:", repr(model.config['promptTemplate']))
print()

# even when inside a session:
with model.chat_session():
    assert model.current_chat_session[0]['role'] == 'system'
    print("session system template:", repr(model.current_chat_session[0]['content']))
    print("session prompt template:", repr(model._current_prompt_template))
Interrupting Generation
The simplest way to stop generation is to set a fixed upper limit with the max_tokens
parameter.
If you know exactly when a model should stop responding, you can add a custom callback, like so:
from gpt4all import GPT4All

model = GPT4All('orca-mini-3b-gguf2-q4_0.gguf')

def stop_on_token_callback(token_id, token_string):
    # one sentence is enough:
    if '.' in token_string:
        return False
    else:
        return True

response = model.generate('Blue Whales are the biggest animal to ever inhabit the Earth.',
                          temp=0, callback=stop_on_token_callback)
print(response)
API Documentation
GPT4All
Python class that handles instantiation, downloading, generation and chat with GPT4All models.
Source code in gpt4all/gpt4all.py
__init__(model_name, model_path=None, model_type=None, allow_download=True, n_threads=None, device='cpu', verbose=False)
Constructor
Parameters:

- model_name (str) – Name of GPT4All or custom model. Including ".gguf" file extension is optional but encouraged.
- model_path (Optional[Union[str, PathLike[str]]], default: None) – Path to the directory containing the model file or, if the file does not exist, where to download the model. Default is None, in which case models will be stored in ~/.cache/gpt4all/.
- model_type (Optional[str], default: None) – Model architecture. This argument currently does not have any functionality and is just used as a descriptive identifier for the user. Default is None.
- allow_download (bool, default: True) – Allow API to download models from gpt4all.io. Default is True.
- n_threads (Optional[int], default: None) – Number of CPU threads used by GPT4All. Default is None, in which case the number of threads is determined automatically.
- device (Optional[str], default: 'cpu') – The processing unit on which the GPT4All model will run. It can be set to:
    - "cpu": Model will run on the central processing unit.
    - "gpu": Model will run on the best available graphics processing unit, irrespective of its vendor.
    - "amd", "nvidia", "intel": Model will run on the best available GPU from the specified vendor.
  Alternatively, a specific GPU name can also be provided, and the model will run on the GPU that matches the name if it's available. Default is "cpu".

Note: If a selected GPU device does not have sufficient RAM to accommodate the model, an error will be thrown, and the GPT4All instance will be rendered invalid. It's advised to ensure the device has enough memory before initiating the model.
Source code in gpt4all/gpt4all.py
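For instance, requesting the best available GPU and a fixed thread count could look like this (model name and values are illustrative; see the note above about GPU memory):

from gpt4all import GPT4All

model = GPT4All('orca-mini-3b-gguf2-q4_0.gguf', device='gpu', n_threads=8)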
chat_session(system_prompt='', prompt_template='')
Context manager to hold an inference optimized chat session with a GPT4All model.
Parameters:

- system_prompt (str, default: '') – An initial instruction for the model.
- prompt_template (str, default: '') – Template for the prompts, with {0} being replaced by the user message.
Source code in gpt4all/gpt4all.py
download_model(model_filename, model_path, verbose=True, url=None)
staticmethod
Download model from https://gpt4all.io.
Parameters:

- model_filename (str) – Filename of model (with .gguf extension).
- model_path (Union[str, PathLike[str]]) – Path to download model to.
- verbose (bool, default: True) – If True (default), print debug messages.
- url (Optional[str], default: None) – The model's remote URL (e.g. may be hosted on HF).

Returns:

- str – Model file destination.
Source code in gpt4all/gpt4all.py
generate(prompt, max_tokens=200, temp=0.7, top_k=40, top_p=0.4, repeat_penalty=1.18, repeat_last_n=64, n_batch=8, n_predict=None, streaming=False, callback=pyllmodel.empty_response_callback)
Generate outputs from any GPT4All model.
Parameters:

- prompt (str) – The prompt for the model to complete.
- max_tokens (int, default: 200) – The maximum number of tokens to generate.
- temp (float, default: 0.7) – The model temperature. Larger values increase creativity but decrease factuality.
- top_k (int, default: 40) – Randomly sample from the top_k most likely tokens at each generation step. Set this to 1 for greedy decoding.
- top_p (float, default: 0.4) – Randomly sample at each generation step from the top most likely tokens whose probabilities add up to top_p.
- repeat_penalty (float, default: 1.18) – Penalize the model for repetition. Higher values result in less repetition.
- repeat_last_n (int, default: 64) – How far in the model's generation history to apply the repeat penalty.
- n_batch (int, default: 8) – Number of prompt tokens processed in parallel. Larger values decrease latency but increase resource requirements.
- n_predict (Optional[int], default: None) – Equivalent to max_tokens; exists for backwards compatibility.
- streaming (bool, default: False) – If True, this method will instead return a generator that yields tokens as the model generates them.
- callback (ResponseCallbackType, default: empty_response_callback) – A function with arguments token_id: int and response: str, which receives the tokens from the model as they are generated and stops the generation by returning False.

Returns:

- Union[str, Iterable[str]] – Either the entire completion or a generator that yields the completion token by token.
Source code in gpt4all/gpt4all.py
list_models()
staticmethod
Fetch model list from https://gpt4all.io/models/models2.json.
Returns:

- List[ConfigType] – Model list in JSON format.
Source code in gpt4all/gpt4all.py
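A small sketch of inspecting the list (the exact keys come from models2.json, so 'filename' is an assumption here):

from gpt4all import GPT4All

models = GPT4All.list_models()  # requires internet access
for entry in models:
    # each entry is a config dict; 'filename' is assumed to be one of its keys
    print(entry.get('filename'))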
retrieve_model(model_name, model_path=None, allow_download=True, verbose=False)
staticmethod
Find model file, and if it doesn't exist, download the model.
Parameters:

- model_name (str) – Name of model.
- model_path (Optional[Union[str, PathLike[str]]], default: None) – Path to find model. Default is None, in which case the path is set to ~/.cache/gpt4all/.
- allow_download (bool, default: True) – Allow API to download model from gpt4all.io. Default is True.
- verbose (bool, default: False) – If True, print debug messages. Default is False.

Returns:

- ConfigType – Model config.