
GPT4All Python Generation API

The GPT4All python package provides bindings to our C/C++ model backend libraries. The source code and local build instructions can be found here.

Quickstart

pip install gpt4all
from gpt4all import GPT4All
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
output = model.generate("The capital of France is ", max_tokens=3)
print(output)
1. Paris

This will:

  • Instantiate GPT4All, which is the primary public API to your large language model (LLM).
  • Automatically download the given model to ~/.cache/gpt4all/ if not already present.
  • Through model.generate(...), the model starts working on a response. There are various ways to steer that process; here, max_tokens sets an upper limit, i.e. a hard cut-off for the length of the output.

Chatting with GPT4All

Local LLMs can be optimized for chat conversations by reusing previous computational history.

Use the GPT4All chat_session context manager to hold chat conversations with the model.

model = GPT4All(model_name='orca-mini-3b-gguf2-q4_0.gguf')
with model.chat_session():
    response1 = model.generate(prompt='hello', temp=0)
    response2 = model.generate(prompt='write me a short poem', temp=0)
    response3 = model.generate(prompt='thank you', temp=0)
    print(model.current_chat_session)
[
   {
      'role': 'user',
      'content': 'hello'
   },
   {
      'role': 'assistant',
      'content': 'What is your name?'
   },
   {
      'role': 'user',
      'content': 'write me a short poem'
   },
   {
      'role': 'assistant',
      'content': "I would love to help you with that! Here's a short poem I came up with:\nBeneath the autumn leaves,\nThe wind whispers through the trees.\nA gentle breeze, so at ease,\nAs if it were born to play.\nAnd as the sun sets in the sky,\nThe world around us grows still."
   },
   {
      'role': 'user',
      'content': 'thank you'
   },
   {
      'role': 'assistant',
      'content': "You're welcome! I hope this poem was helpful or inspiring for you. Let me know if there is anything else I can assist you with."
   }
]

When using GPT4All models in the chat_session context:

  • Consecutive chat exchanges are retained rather than discarded until the session ends, as long as the model has capacity (see the sketch after this list).
  • Internal K/V caches are preserved from previous conversation history, speeding up inference.
  • The model is given a system and prompt template which make it chatty. If allow_download=True (the default), it obtains the latest version of models2.json from the repository, which contains templates specifically tailored to individual models. If downloads are not allowed, it falls back to default templates instead.
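A minimal sketch of how that carried-over context can be used, assuming the same orca-mini-3b-gguf2-q4_0.gguf model as above: the follow-up question only makes sense because the first exchange is still part of the session.

from gpt4all import GPT4All
model = GPT4All('orca-mini-3b-gguf2-q4_0.gguf')
with model.chat_session():
    # the first answer becomes part of the session history ...
    print(model.generate('Name one famous landmark in Paris.', temp=0))
    # ... so the follow-up can refer to it with a pronoun
    print(model.generate('How tall is it?', temp=0))
# outside the context manager, the history is reset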

Streaming Generations

To interact with GPT4All responses as the model generates, use the streaming=True flag during generation.

from gpt4all import GPT4All
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
tokens = []
for token in model.generate("The capital of France is", max_tokens=20, streaming=True):
    tokens.append(token)
print(tokens)
[' Paris', ' is', ' a', ' city', ' that', ' has', ' been', ' a', ' major', ' cultural', ' and', ' economic', ' center', ' for', ' over', ' ', '2', ',', '0', '0']
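Streaming also works inside a chat session. As a small sketch (the prompt text is arbitrary), the tokens can be printed as they arrive instead of being collected into a list:

from gpt4all import GPT4All
model = GPT4All('orca-mini-3b-gguf2-q4_0.gguf')
with model.chat_session():
    for token in model.generate('why is the sky blue?', max_tokens=100, streaming=True):
        # print each token as soon as it is produced
        print(token, end='', flush=True)
    print()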

The Generate Method API

generate(prompt, max_tokens=200, temp=0.7, top_k=40, top_p=0.4, repeat_penalty=1.18, repeat_last_n=64, n_batch=8, n_predict=None, streaming=False, callback=pyllmodel.empty_response_callback)

Generate outputs from any GPT4All model.

Parameters:

  • prompt (str) –

    The prompt for the model to complete.

  • max_tokens (int, default: 200 ) –

    The maximum number of tokens to generate.

  • temp (float, default: 0.7 ) –

    The model temperature. Larger values increase creativity but decrease factuality.

  • top_k (int, default: 40 ) –

    Randomly sample from the top_k most likely tokens at each generation step. Set this to 1 for greedy decoding.

  • top_p (float, default: 0.4 ) –

    Randomly sample at each generation step from the top most likely tokens whose probabilities add up to top_p.

  • repeat_penalty (float, default: 1.18 ) –

    Penalize the model for repetition. Higher values result in less repetition.

  • repeat_last_n (int, default: 64 ) –

    How far back in the model's generation history to apply the repeat penalty.

  • n_batch (int, default: 8 ) –

    Number of prompt tokens processed in parallel. Larger values decrease latency but increase resource requirements.

  • n_predict (Optional[int], default: None ) –

    Equivalent to max_tokens, exists for backwards compatibility.

  • streaming (bool, default: False ) –

    If True, this method will instead return a generator that yields tokens as the model generates them.

  • callback (ResponseCallbackType, default: empty_response_callback ) –

    A function with arguments token_id:int and response:str, which receives the tokens from the model as they are generated and stops the generation by returning False.

Returns:

  • Union[str, Iterable[str]]

    Either the entire completion or a generator that yields the completion token by token.

Source code in gpt4all/gpt4all.py
def generate(
    self,
    prompt: str,
    max_tokens: int = 200,
    temp: float = 0.7,
    top_k: int = 40,
    top_p: float = 0.4,
    repeat_penalty: float = 1.18,
    repeat_last_n: int = 64,
    n_batch: int = 8,
    n_predict: Optional[int] = None,
    streaming: bool = False,
    callback: pyllmodel.ResponseCallbackType = pyllmodel.empty_response_callback,
) -> Union[str, Iterable[str]]:
    """
    Generate outputs from any GPT4All model.

    Args:
        prompt: The prompt for the model to complete.
        max_tokens: The maximum number of tokens to generate.
        temp: The model temperature. Larger values increase creativity but decrease factuality.
        top_k: Randomly sample from the top_k most likely tokens at each generation step. Set this to 1 for greedy decoding.
        top_p: Randomly sample at each generation step from the top most likely tokens whose probabilities add up to top_p.
        repeat_penalty: Penalize the model for repetition. Higher values result in less repetition.
        repeat_last_n: How far back in the model's generation history to apply the repeat penalty.
        n_batch: Number of prompt tokens processed in parallel. Larger values decrease latency but increase resource requirements.
        n_predict: Equivalent to max_tokens, exists for backwards compatibility.
        streaming: If True, this method will instead return a generator that yields tokens as the model generates them.
        callback: A function with arguments token_id:int and response:str, which receives the tokens from the model as they are generated and stops the generation by returning False.

    Returns:
        Either the entire completion or a generator that yields the completion token by token.
    """

    # Preparing the model request
    generate_kwargs: Dict[str, Any] = dict(
        temp=temp,
        top_k=top_k,
        top_p=top_p,
        repeat_penalty=repeat_penalty,
        repeat_last_n=repeat_last_n,
        n_batch=n_batch,
        n_predict=n_predict if n_predict is not None else max_tokens,
    )

    if self._is_chat_session_activated:
        # check if there is only one message, i.e. system prompt:
        generate_kwargs["reset_context"] = len(self.current_chat_session) == 1
        self.current_chat_session.append({"role": "user", "content": prompt})

        prompt = self._format_chat_prompt_template(
            messages=self.current_chat_session[-1:],
            default_prompt_header=self.current_chat_session[0]["content"]
            if generate_kwargs["reset_context"]
            else "",
        )
    else:
        generate_kwargs["reset_context"] = True

    # Prepare the callback, process the model response
    output_collector: List[MessageType]
    output_collector = [
        {"content": ""}
    ]  # placeholder for the self.current_chat_session if chat session is not activated

    if self._is_chat_session_activated:
        self.current_chat_session.append({"role": "assistant", "content": ""})
        output_collector = self.current_chat_session

    def _callback_wrapper(
        callback: pyllmodel.ResponseCallbackType,
        output_collector: List[MessageType],
    ) -> pyllmodel.ResponseCallbackType:
        def _callback(token_id: int, response: str) -> bool:
            nonlocal callback, output_collector

            output_collector[-1]["content"] += response

            return callback(token_id, response)

        return _callback

    # Send the request to the model
    if streaming:
        return self.model.prompt_model_streaming(
            prompt=prompt,
            callback=_callback_wrapper(callback, output_collector),
            **generate_kwargs,
        )

    self.model.prompt_model(
        prompt=prompt,
        callback=_callback_wrapper(callback, output_collector),
        **generate_kwargs,
    )

    return output_collector[-1]["content"]
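The callback parameter is not limited to stopping generation early. As a minimal sketch (logging_callback below is our own helper, not part of the API), a callback can simply observe each token and its ID and keep generation running by returning True:

from gpt4all import GPT4All
model = GPT4All('orca-mini-3b-gguf2-q4_0.gguf')

def logging_callback(token_id, token_string):
    # inspect every token as it is generated; returning True lets generation continue
    print(f'{token_id}: {token_string!r}')
    return True

response = model.generate('The capital of France is', max_tokens=10, temp=0,
                          callback=logging_callback)
print(response)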

Examples & Explanations

Influencing Generation

The three most influential parameters in generation are Temperature (temp), Top-p (top_p) and Top-K (top_k). In a nutshell, when selecting the next token, not just one or a few candidates are considered; every single token in the vocabulary is assigned a probability. These parameters change the field of candidate tokens.

  • Temperature makes the process either more or less random. A Temperature above 1 increasingly "levels the playing field", while at a Temperature between 0 and 1 the likelihood of the best token candidates grows even more. A Temperature of 0 results in selecting the best token, making the output deterministic. A Temperature of 1 represents a neutral setting with regard to randomness in the process.

  • Top-p and Top-K both narrow the field:

    • Top-K limits candidate tokens to a fixed number after sorting by probability. Setting it higher than the vocabulary size deactivates this limit.
    • Top-p selects tokens based on their total probabilities. For example, a value of 0.8 means "include the best tokens, whose accumulated probabilities reach or just surpass 80%". Setting Top-p to 1, which is 100%, effectively disables it.

The recommendation is to keep at least one of Top-K and Top-p active. Other parameters can also influence generation; be sure to review all their descriptions.
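As a hedged sketch of how these settings are passed to generate() (outputs will vary by model; the prompt is arbitrary), compare a near-deterministic configuration with a more exploratory one:

from gpt4all import GPT4All
model = GPT4All('orca-mini-3b-gguf2-q4_0.gguf')
prompt = 'Write a one-sentence slogan for a coffee shop.'

# near-greedy: temperature 0 with Top-K 1 always picks the most likely token
focused = model.generate(prompt, temp=0, top_k=1)

# more exploratory: higher temperature and a wider candidate field
creative = model.generate(prompt, temp=1.0, top_k=100, top_p=0.95)

print('focused: ', focused)
print('creative:', creative)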

Specifying the Model Folder

The model folder can be set with the model_path parameter when creating a GPT4All instance. The example below is the same as if it weren't provided; that is, ~/.cache/gpt4all/ is the default folder.

from pathlib import Path
from gpt4all import GPT4All
model = GPT4All(model_name='orca-mini-3b-gguf2-q4_0.gguf',
                model_path=(Path.home() / '.cache' / 'gpt4all'),
                allow_download=False)
response = model.generate('my favorite 3 fruits are:', temp=0)
print(response)
My favorite three fruits are apples, bananas and oranges.

If you want to point it at the chat GUI's default folder, it depends on the operating system:

macOS:

from pathlib import Path
from gpt4all import GPT4All

model_name = 'orca-mini-3b-gguf2-q4_0.gguf'
model_path = Path.home() / 'Library' / 'Application Support' / 'nomic.ai' / 'GPT4All'
model = GPT4All(model_name, model_path)

Windows:

from pathlib import Path
from gpt4all import GPT4All
import os

model_name = 'orca-mini-3b-gguf2-q4_0.gguf'
model_path = Path(os.environ['LOCALAPPDATA']) / 'nomic.ai' / 'GPT4All'
model = GPT4All(model_name, model_path)

Linux:

from pathlib import Path
from gpt4all import GPT4All

model_name = 'orca-mini-3b-gguf2-q4_0.gguf'
model_path = Path.home() / '.local' / 'share' / 'nomic.ai' / 'GPT4All'
model = GPT4All(model_name, model_path)
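If the same script has to work on more than one operating system, the folder can be chosen at runtime. A minimal sketch based on the three locations above (assuming a standard installation of the chat GUI):

import os
import sys
from pathlib import Path
from gpt4all import GPT4All

model_name = 'orca-mini-3b-gguf2-q4_0.gguf'
if sys.platform == 'darwin':    # macOS
    model_path = Path.home() / 'Library' / 'Application Support' / 'nomic.ai' / 'GPT4All'
elif sys.platform == 'win32':   # Windows
    model_path = Path(os.environ['LOCALAPPDATA']) / 'nomic.ai' / 'GPT4All'
else:                           # Linux and other Unix-likes
    model_path = Path.home() / '.local' / 'share' / 'nomic.ai' / 'GPT4All'
model = GPT4All(model_name, model_path)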

Alternatively, you could also change the module's default model directory:

from pathlib import Path
import gpt4all.gpt4all
gpt4all.gpt4all.DEFAULT_MODEL_DIRECTORY = Path.home() / 'my' / 'models-directory'
from gpt4all import GPT4All
model = GPT4All('orca-mini-3b-gguf2-q4_0.gguf')
...

Managing Templates

Session templates can be customized when starting a chat_session context:

from gpt4all import GPT4All
model = GPT4All('wizardlm-13b-v1.2.Q4_0.gguf')
system_template = 'A chat between a curious user and an artificial intelligence assistant.'
# many models use triple hash '###' for keywords, Vicunas are simpler:
prompt_template = 'USER: {0}\nASSISTANT: '
with model.chat_session(system_template, prompt_template):
    response1 = model.generate('why is the grass green?')
    print(response1)
    print()
    response2 = model.generate('why is the sky blue?')
    print(response2)
The color of grass can be attributed to its chlorophyll content, which allows it
to absorb light energy from sunlight through photosynthesis. Chlorophyll absorbs
blue and red wavelengths of light while reflecting other colors such as yellow
and green. This is why the leaves appear green to our eyes.

The color of the sky appears blue due to a phenomenon called Rayleigh scattering,
which occurs when sunlight enters Earth's atmosphere and interacts with air
molecules such as nitrogen and oxygen. Blue light has shorter wavelength than
other colors in the visible spectrum, so it is scattered more easily by these
particles, making the sky appear blue to our eyes.

To do the same outside a session, the input has to be formatted manually. For example:

model = GPT4All('wizardlm-13b-v1.2.Q4_0.gguf')
system_template = 'A chat between a curious user and an artificial intelligence assistant.'
prompt_template = 'USER: {0}\nASSISTANT: '
prompts = ['name 3 colors', 'now name 3 fruits', 'what were the 3 colors in your earlier response?']
first_input = system_template + prompt_template.format(prompts[0])
response = model.generate(first_input, temp=0)
print(response)
for prompt in prompts[1:]:
    response = model.generate(prompt_template.format(prompt), temp=0)
    print(response)
1) Red
2) Blue
3) Green

1. Apple
2. Banana
3. Orange

The colors in my previous response are blue, green and red.

Ultimately, the method GPT4All._format_chat_prompt_template() is responsible for formatting templates. It can be customized in a subclass. As an example:

from itertools import cycle
from gpt4all import GPT4All

class RotatingTemplateGPT4All(GPT4All):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._templates = [
            "Respond like a pirate.",
            "Respond like a politician.",
            "Respond like a philosopher.",
            "Respond like a Klingon.",
        ]
        self._cycling_templates = cycle(self._templates)

    def _format_chat_prompt_template(
        self,
        messages: list,
        default_prompt_header: str = "",
        default_prompt_footer: str = "",
    ) -> str:
        full_prompt = default_prompt_header + "\n\n" if default_prompt_header != "" else ""
        for message in messages:
            if message["role"] == "user":
                user_message = f"USER: {message['content']} {next(self._cycling_templates)}\n"
                full_prompt += user_message
            if message["role"] == "assistant":
                assistant_message = f"ASSISTANT: {message['content']}\n"
                full_prompt += assistant_message
        full_prompt += "\n\n" + default_prompt_footer if default_prompt_footer != "" else ""
        print(full_prompt)
        return full_prompt
model = RotatingTemplateGPT4All('wizardlm-13b-v1.2.Q4_0.gguf')
with model.chat_session():  # starting a session is optional in this example
    response1 = model.generate("hi, who are you?")
    print(response1)
    print()
    response2 = model.generate("what can you tell me about snakes?")
    print(response2)
    print()
    response3 = model.generate("what's your opinion on Chess?")
    print(response3)
    print()
    response4 = model.generate("tell me about ancient Rome.")
    print(response4)
USER: hi, who are you? Respond like a pirate.

Pirate: Ahoy there mateys! I be Cap'n Jack Sparrow of the Black Pearl.

USER: what can you tell me about snakes? Respond like a politician.

Politician: Snakes have been making headlines lately due to their ability to
slither into tight spaces and evade capture, much like myself during my last
election campaign. However, I believe that with proper education and
understanding of these creatures, we can work together towards creating a
safer environment for both humans and snakes alike.

USER: what's your opinion on Chess? Respond like a philosopher.

Philosopher: The game of chess is often used as an analogy to illustrate the
complexities of life and decision-making processes. However, I believe that it
can also be seen as a reflection of our own consciousness and subconscious mind.
Just as each piece on the board has its unique role to play in shaping the
outcome of the game, we too have different roles to fulfill in creating our own
personal narrative.

USER: tell me about ancient Rome. Respond like a Klingon.

Klingon: Ancient Rome was once a great empire that ruled over much of Europe and
the Mediterranean region. However, just as the Empire fell due to internal strife
and external threats, so too did my own house come crashing down when I failed to
protect our homeworld from invading forces.

Introspection

A less apparent feature is the ability to log the final prompt that gets sent to the model. It relies on Python's logging facilities, which the pyllmodel module uses at the INFO level. You can activate it, for example, with logging.basicConfig(), which displays messages on the standard error stream. Python's logging infrastructure offers many more customization options.

import logging
from gpt4all import GPT4All
logging.basicConfig(level=logging.INFO)
model = GPT4All('nous-hermes-llama2-13b.Q4_0.gguf')
with model.chat_session('You are a geography expert.\nBe terse.',
                        '### Instruction:\n{0}\n### Response:\n'):
    response = model.generate('who are you?', temp=0)
    print(response)
    response = model.generate('what are your favorite 3 mountains?', temp=0)
    print(response)
INFO:gpt4all.pyllmodel:LLModel.prompt_model -- prompt:
You are a geography expert.
Be terse.

### Instruction:
who are you?
### Response:

===/LLModel.prompt_model -- prompt/===
I am an AI-powered chatbot designed to assist users with their queries related to geographical information.
INFO:gpt4all.pyllmodel:LLModel.prompt_model -- prompt:
### Instruction:
what are your favorite 3 mountains?
### Response:

===/LLModel.prompt_model -- prompt/===
1) Mount Everest - Located in the Himalayas, it is the highest mountain on Earth and a significant challenge for mountaineers.
2) Kangchenjunga - This mountain is located in the Himalayas and is the third-highest peak in the world after Mount Everest and K2.
3) Lhotse - Located in the Himalayas, it is the fourth highest mountain on Earth and offers a challenging climb for experienced mountaineers.

Without Online Connectivity

To prevent GPT4All from accessing online resources, instantiate it with allow_download=False. This disables downloading both missing models and models2.json, which contains information about them. As a result, the predefined default templates are used instead of model-specific system and prompt templates:

from gpt4all import GPT4All
model = GPT4All('ggml-mpt-7b-chat.bin', allow_download=False)
# when downloads are disabled, it will use the default templates:
print("default system template:", repr(model.config['systemPrompt']))
print("default prompt template:", repr(model.config['promptTemplate']))
print()
# even when inside a session:
with model.chat_session():
    assert model.current_chat_session[0]['role'] == 'system'
    print("session system template:", repr(model.current_chat_session[0]['content']))
    print("session prompt template:", repr(model._current_prompt_template))
default system template: ''
default prompt template: '### Human: \n{0}\n### Assistant:\n'

session system template: ''
session prompt template: '### Human: \n{0}\n### Assistant:\n'
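Because allow_download=False also prevents fetching a missing model file, the model has to be present on disk already. A hedged sketch of one way to prepare that, using the retrieve_model() helper documented below on a machine that does have connectivity, and then reusing the local file offline:

from pathlib import Path
from gpt4all import GPT4All

model_name = 'orca-mini-3b-gguf2-q4_0.gguf'
model_dir = Path.home() / '.cache' / 'gpt4all'

# with connectivity: download the file into model_dir if it is not already there
config = GPT4All.retrieve_model(model_name, model_path=model_dir, allow_download=True)
print('model file at:', config['path'])

# later, offline (or after copying the .gguf file to another machine):
offline_model = GPT4All(model_name, model_path=model_dir, allow_download=False)
print(offline_model.generate('hello', max_tokens=20))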

Interrupting Generation

The simplest way to stop generation is to set a fixed upper limit with the max_tokens parameter.

If you know exactly when a model should stop responding, you can add a custom callback, like so:

from gpt4all import GPT4All
model = GPT4All('orca-mini-3b-gguf2-q4_0.gguf')

def stop_on_token_callback(token_id, token_string):
    # one sentence is enough:
    if '.' in token_string:
        return False
    else:
        return True

response = model.generate('Blue Whales are the biggest animal to ever inhabit the Earth.',
                          temp=0, callback=stop_on_token_callback)
print(response)
 They can grow up to 100 feet (30 meters) long and weigh as much as 20 tons (18 metric tons).
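The same mechanism can enforce a token budget. A minimal sketch (budget_callback is our own helper, not part of the API) that stops generation after a fixed number of tokens:

from gpt4all import GPT4All
model = GPT4All('orca-mini-3b-gguf2-q4_0.gguf')

def budget_callback(budget):
    # build a callback that stops generation once `budget` tokens have been produced
    state = {'count': 0}
    def callback(token_id, token_string):
        state['count'] += 1
        return state['count'] < budget
    return callback

response = model.generate('Write a story about a whale.',
                          temp=0, callback=budget_callback(25))
print(response)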

API Documentation

GPT4All

Python class that handles instantiation, downloading, generation and chat with GPT4All models.

Source code in gpt4all/gpt4all.py
class GPT4All:
    """
    Python class that handles instantiation, downloading, generation and chat with GPT4All models.
    """

    def __init__(
        self,
        model_name: str,
        model_path: Optional[Union[str, os.PathLike[str]]] = None,
        model_type: Optional[str] = None,
        allow_download: bool = True,
        n_threads: Optional[int] = None,
        device: Optional[str] = "cpu",
        verbose: bool = False,
    ):
        """
        Constructor

        Args:
            model_name: Name of GPT4All or custom model. Including ".gguf" file extension is optional but encouraged.
            model_path: Path to directory containing model file or, if file does not exist, where to download model.
                Default is None, in which case models will be stored in `~/.cache/gpt4all/`.
            model_type: Model architecture. This argument currently does not have any functionality and is just used as
                descriptive identifier for user. Default is None.
            allow_download: Allow API to download models from gpt4all.io. Default is True.
            n_threads: number of CPU threads used by GPT4All. Default is None, in which case the number of threads is determined automatically.
            device: The processing unit on which the GPT4All model will run. It can be set to:
                - "cpu": Model will run on the central processing unit.
                - "gpu": Model will run on the best available graphics processing unit, irrespective of its vendor.
                - "amd", "nvidia", "intel": Model will run on the best available GPU from the specified vendor.
                Alternatively, a specific GPU name can also be provided, and the model will run on the GPU that matches the name if it's available.
                Default is "cpu".

                Note: If a selected GPU device does not have sufficient RAM to accommodate the model, an error will be thrown, and the GPT4All instance will be rendered invalid. It's advised to ensure the device has enough memory before initiating the model.
        """
        self.model_type = model_type
        self.model = pyllmodel.LLModel()
        # Retrieve model and download if allowed
        self.config: ConfigType = self.retrieve_model(model_name, model_path=model_path, allow_download=allow_download, verbose=verbose)
        if device is not None:
            if device != "cpu":
                self.model.init_gpu(model_path=self.config["path"], device=device)
        self.model.load_model(self.config["path"])
        # Set n_threads
        if n_threads is not None:
            self.model.set_thread_count(n_threads)

        self._is_chat_session_activated: bool = False
        self.current_chat_session: List[MessageType] = empty_chat_session()
        self._current_prompt_template: str = "{0}"

    @staticmethod
    def list_models() -> List[ConfigType]:
        """
        Fetch model list from https://gpt4all.io/models/models2.json.

        Returns:
            Model list in JSON format.
        """
        resp = requests.get("https://gpt4all.io/models/models2.json")
        if resp.status_code != 200:
            raise ValueError(f'Request failed: HTTP {resp.status_code} {resp.reason}')
        return resp.json()

    @staticmethod
    def retrieve_model(
        model_name: str,
        model_path: Optional[Union[str, os.PathLike[str]]] = None,
        allow_download: bool = True,
        verbose: bool = False,
    ) -> ConfigType:
        """
        Find model file, and if it doesn't exist, download the model.

        Args:
            model_name: Name of model.
            model_path: Path to find model. Default is None in which case path is set to
                ~/.cache/gpt4all/.
            allow_download: Allow API to download model from gpt4all.io. Default is True.
            verbose: If True, print debug messages. Default is False.

        Returns:
            Model config.
        """

        model_filename = append_extension_if_missing(model_name)

        # get the config for the model
        config: ConfigType = DEFAULT_MODEL_CONFIG
        if allow_download:
            available_models = GPT4All.list_models()

            for m in available_models:
                if model_filename == m["filename"]:
                    config.update(m)
                    config["systemPrompt"] = config["systemPrompt"].strip()
                    config["promptTemplate"] = config["promptTemplate"].replace(
                        "%1", "{0}", 1
                    )  # change to Python-style formatting
                    break

        # Validate download directory
        if model_path is None:
            try:
                os.makedirs(DEFAULT_MODEL_DIRECTORY, exist_ok=True)
            except OSError as exc:
                raise ValueError(
                    f"Failed to create model download directory at {DEFAULT_MODEL_DIRECTORY}: {exc}. "
                    "Please specify model_path."
                )
            model_path = DEFAULT_MODEL_DIRECTORY
        else:
            model_path = str(model_path).replace("\\", "\\\\")

        if not os.path.exists(model_path):
            raise ValueError(f"Invalid model directory: {model_path}")

        model_dest = os.path.join(model_path, model_filename).replace("\\", "\\\\")
        if os.path.exists(model_dest):
            config.pop("url", None)
            config["path"] = model_dest
            if verbose:
                print("Found model file at", model_dest, file=sys.stderr)

        # If model file does not exist, download
        elif allow_download:
            url = config.pop("url", None)

            config["path"] = GPT4All.download_model(model_filename, model_path, verbose=verbose, url=url)
        else:
            raise ValueError("Failed to retrieve model")

        return config

    @staticmethod
    def download_model(
        model_filename: str,
        model_path: Union[str, os.PathLike[str]],
        verbose: bool = True,
        url: Optional[str] = None,
    ) -> str:
        """
        Download model from https://gpt4all.io.

        Args:
            model_filename: Filename of model (with .gguf extension).
            model_path: Path to download model to.
            verbose: If True (default), print debug messages.
            url: The model's remote URL (e.g. it may be hosted on Hugging Face).

        Returns:
            Model file destination.
        """

        def get_download_url(model_filename):
            if url:
                return url
            return f"https://gpt4all.io/models/gguf/{model_filename}"

        # Download model
        download_path = os.path.join(model_path, model_filename).replace("\\", "\\\\")
        download_url = get_download_url(model_filename)

        def make_request(offset=None):
            headers = {}
            if offset:
                print(f"\nDownload interrupted, resuming from byte position {offset}", file=sys.stderr)
                headers['Range'] = f'bytes={offset}-'  # resume incomplete response
            response = requests.get(download_url, stream=True, headers=headers)
            if response.status_code not in (200, 206):
                raise ValueError(f'Request failed: HTTP {response.status_code} {response.reason}')
            if offset and (response.status_code != 206 or str(offset) not in response.headers.get('Content-Range', '')):
                raise ValueError('Connection was interrupted and server does not support range requests')
            return response

        response = make_request()

        total_size_in_bytes = int(response.headers.get("content-length", 0))
        block_size = 2**20  # 1 MB

        with open(download_path, "wb") as file, \
                tqdm(total=total_size_in_bytes, unit="iB", unit_scale=True) as progress_bar:
            try:
                while True:
                    last_progress = progress_bar.n
                    try:
                        for data in response.iter_content(block_size):
                            file.write(data)
                            progress_bar.update(len(data))
                    except ChunkedEncodingError as cee:
                        if cee.args and isinstance(pe := cee.args[0], ProtocolError):
                            if len(pe.args) >= 2 and isinstance(ir := pe.args[1], IncompleteRead):
                                assert progress_bar.n <= ir.partial  # urllib3 may be ahead of us but never behind
                                # the socket was closed during a read - retry
                                response = make_request(progress_bar.n)
                                continue
                        raise
                    if total_size_in_bytes != 0 and progress_bar.n < total_size_in_bytes:
                        if progress_bar.n == last_progress:
                            raise RuntimeError('Download not making progress, aborting.')
                        # server closed connection prematurely - retry
                        response = make_request(progress_bar.n)
                        continue
                    break
            except Exception:
                if verbose:
                    print("Cleaning up the interrupted download...", file=sys.stderr)
                try:
                    os.remove(download_path)
                except OSError:
                    pass
                raise

        if os.name == 'nt':
            time.sleep(2)  # Sleep for a little bit so Windows can remove file lock

        if verbose:
            print("Model downloaded at:", download_path, file=sys.stderr)
        return download_path

    def generate(
        self,
        prompt: str,
        max_tokens: int = 200,
        temp: float = 0.7,
        top_k: int = 40,
        top_p: float = 0.4,
        repeat_penalty: float = 1.18,
        repeat_last_n: int = 64,
        n_batch: int = 8,
        n_predict: Optional[int] = None,
        streaming: bool = False,
        callback: pyllmodel.ResponseCallbackType = pyllmodel.empty_response_callback,
    ) -> Union[str, Iterable[str]]:
        """
        Generate outputs from any GPT4All model.

        Args:
            prompt: The prompt for the model to complete.
            max_tokens: The maximum number of tokens to generate.
            temp: The model temperature. Larger values increase creativity but decrease factuality.
            top_k: Randomly sample from the top_k most likely tokens at each generation step. Set this to 1 for greedy decoding.
            top_p: Randomly sample at each generation step from the top most likely tokens whose probabilities add up to top_p.
            repeat_penalty: Penalize the model for repetition. Higher values result in less repetition.
            repeat_last_n: How far back in the model's generation history to apply the repeat penalty.
            n_batch: Number of prompt tokens processed in parallel. Larger values decrease latency but increase resource requirements.
            n_predict: Equivalent to max_tokens, exists for backwards compatibility.
            streaming: If True, this method will instead return a generator that yields tokens as the model generates them.
            callback: A function with arguments token_id:int and response:str, which receives the tokens from the model as they are generated and stops the generation by returning False.

        Returns:
            Either the entire completion or a generator that yields the completion token by token.
        """

        # Preparing the model request
        generate_kwargs: Dict[str, Any] = dict(
            temp=temp,
            top_k=top_k,
            top_p=top_p,
            repeat_penalty=repeat_penalty,
            repeat_last_n=repeat_last_n,
            n_batch=n_batch,
            n_predict=n_predict if n_predict is not None else max_tokens,
        )

        if self._is_chat_session_activated:
            # check if there is only one message, i.e. system prompt:
            generate_kwargs["reset_context"] = len(self.current_chat_session) == 1
            self.current_chat_session.append({"role": "user", "content": prompt})

            prompt = self._format_chat_prompt_template(
                messages=self.current_chat_session[-1:],
                default_prompt_header=self.current_chat_session[0]["content"]
                if generate_kwargs["reset_context"]
                else "",
            )
        else:
            generate_kwargs["reset_context"] = True

        # Prepare the callback, process the model response
        output_collector: List[MessageType]
        output_collector = [
            {"content": ""}
        ]  # placeholder for the self.current_chat_session if chat session is not activated

        if self._is_chat_session_activated:
            self.current_chat_session.append({"role": "assistant", "content": ""})
            output_collector = self.current_chat_session

        def _callback_wrapper(
            callback: pyllmodel.ResponseCallbackType,
            output_collector: List[MessageType],
        ) -> pyllmodel.ResponseCallbackType:
            def _callback(token_id: int, response: str) -> bool:
                nonlocal callback, output_collector

                output_collector[-1]["content"] += response

                return callback(token_id, response)

            return _callback

        # Send the request to the model
        if streaming:
            return self.model.prompt_model_streaming(
                prompt=prompt,
                callback=_callback_wrapper(callback, output_collector),
                **generate_kwargs,
            )

        self.model.prompt_model(
            prompt=prompt,
            callback=_callback_wrapper(callback, output_collector),
            **generate_kwargs,
        )

        return output_collector[-1]["content"]

    @contextmanager
    def chat_session(
        self,
        system_prompt: str = "",
        prompt_template: str = "",
    ):
        """
        Context manager to hold an inference optimized chat session with a GPT4All model.

        Args:
            system_prompt: An initial instruction for the model.
            prompt_template: Template for the prompts with {0} being replaced by the user message.
        """
        # Code to acquire resource, e.g.:
        self._is_chat_session_activated = True
        self.current_chat_session = empty_chat_session(system_prompt or self.config["systemPrompt"])
        self._current_prompt_template = prompt_template or self.config["promptTemplate"]
        try:
            yield self
        finally:
            # Code to release resource, e.g.:
            self._is_chat_session_activated = False
            self.current_chat_session = empty_chat_session()
            self._current_prompt_template = "{0}"

    def _format_chat_prompt_template(
        self,
        messages: List[MessageType],
        default_prompt_header: str = "",
        default_prompt_footer: str = "",
    ) -> str:
        """
        Helper method for building a prompt from list of messages using the self._current_prompt_template as a template for each message.

        Args:
            messages:  List of dictionaries. Each dictionary should have a "role" key
                with value of "system", "assistant", or "user" and a "content" key with a
                string value. Messages are organized such that "system" messages are at top of prompt,
                and "user" and "assistant" messages are displayed in order. Assistant messages get formatted as
                "Response: {content}".

        Returns:
            Formatted prompt.
        """

        if isinstance(default_prompt_header, bool):
            import warnings

            warnings.warn(
                "Using True/False for the 'default_prompt_header' is deprecated. Use a string instead.",
                DeprecationWarning,
            )
            default_prompt_header = ""

        if isinstance(default_prompt_footer, bool):
            import warnings

            warnings.warn(
                "Using True/False for the 'default_prompt_footer' is deprecated. Use a string instead.",
                DeprecationWarning,
            )
            default_prompt_footer = ""

        full_prompt = default_prompt_header + "\n\n" if default_prompt_header != "" else ""

        for message in messages:
            if message["role"] == "user":
                user_message = self._current_prompt_template.format(message["content"])
                full_prompt += user_message
            if message["role"] == "assistant":
                assistant_message = message["content"] + "\n"
                full_prompt += assistant_message

        full_prompt += "\n\n" + default_prompt_footer if default_prompt_footer != "" else ""

        return full_prompt
__init__(model_name, model_path=None, model_type=None, allow_download=True, n_threads=None, device='cpu', verbose=False)

Constructor

Parameters:

  • model_name (str) –

    Name of GPT4All or custom model. Including ".gguf" file extension is optional but encouraged.

  • model_path (Optional[Union[str, PathLike[str]]], default: None ) –

    Path to directory containing model file or, if file does not exist, where to download model. Default is None, in which case models will be stored in ~/.cache/gpt4all/.

  • model_type (Optional[str], default: None ) –

    Model architecture. This argument currently does not have any functionality and is just used as a descriptive identifier for the user. Default is None.

  • allow_download (bool, default: True ) –

    Allow API to download models from gpt4all.io. Default is True.

  • n_threads (Optional[int], default: None ) –

    Number of CPU threads used by GPT4All. Default is None, in which case the number of threads is determined automatically.

  • device (Optional[str], default: 'cpu' ) –

    The processing unit on which the GPT4All model will run. It can be set to:

    - "cpu": Model will run on the central processing unit.
    - "gpu": Model will run on the best available graphics processing unit, irrespective of its vendor.
    - "amd", "nvidia", "intel": Model will run on the best available GPU from the specified vendor.

    Alternatively, a specific GPU name can also be provided, and the model will run on the GPU that matches the name if it's available. Default is "cpu".

    Note: If a selected GPU device does not have sufficient RAM to accommodate the model, an error will be thrown, and the GPT4All instance will be rendered invalid. It's advised to ensure the device has enough memory before initiating the model.

Source code in gpt4all/gpt4all.py
def __init__(
    self,
    model_name: str,
    model_path: Optional[Union[str, os.PathLike[str]]] = None,
    model_type: Optional[str] = None,
    allow_download: bool = True,
    n_threads: Optional[int] = None,
    device: Optional[str] = "cpu",
    verbose: bool = False,
):
    """
    Constructor

    Args:
        model_name: Name of GPT4All or custom model. Including ".gguf" file extension is optional but encouraged.
        model_path: Path to directory containing model file or, if file does not exist, where to download model.
            Default is None, in which case models will be stored in `~/.cache/gpt4all/`.
        model_type: Model architecture. This argument currently does not have any functionality and is just used as
            descriptive identifier for user. Default is None.
        allow_download: Allow API to download models from gpt4all.io. Default is True.
        n_threads: number of CPU threads used by GPT4All. Default is None, in which case the number of threads is determined automatically.
        device: The processing unit on which the GPT4All model will run. It can be set to:
            - "cpu": Model will run on the central processing unit.
            - "gpu": Model will run on the best available graphics processing unit, irrespective of its vendor.
            - "amd", "nvidia", "intel": Model will run on the best available GPU from the specified vendor.
            Alternatively, a specific GPU name can also be provided, and the model will run on the GPU that matches the name if it's available.
            Default is "cpu".

            Note: If a selected GPU device does not have sufficient RAM to accommodate the model, an error will be thrown, and the GPT4All instance will be rendered invalid. It's advised to ensure the device has enough memory before initiating the model.
    """
    self.model_type = model_type
    self.model = pyllmodel.LLModel()
    # Retrieve model and download if allowed
    self.config: ConfigType = self.retrieve_model(model_name, model_path=model_path, allow_download=allow_download, verbose=verbose)
    if device is not None:
        if device != "cpu":
            self.model.init_gpu(model_path=self.config["path"], device=device)
    self.model.load_model(self.config["path"])
    # Set n_threads
    if n_threads is not None:
        self.model.set_thread_count(n_threads)

    self._is_chat_session_activated: bool = False
    self.current_chat_session: List[MessageType] = empty_chat_session()
    self._current_prompt_template: str = "{0}"
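A hedged sketch of passing some of these options; whether device='gpu' actually works depends on the available hardware and the model's memory requirements (see the note above):

from gpt4all import GPT4All

# run on the best available GPU and use 8 CPU threads for the CPU-side work
model = GPT4All('orca-mini-3b-gguf2-q4_0.gguf', device='gpu', n_threads=8)
print(model.generate('hello', max_tokens=20))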
chat_session(system_prompt='', prompt_template='')

Context manager to hold an inference optimized chat session with a GPT4All model.

Parameters:

  • system_prompt (str, default: '' ) –

    An initial instruction for the model.

  • prompt_template (str, default: '' ) –

    Template for the prompts with {0} being replaced by the user message.

Source code in gpt4all/gpt4all.py
@contextmanager
def chat_session(
    self,
    system_prompt: str = "",
    prompt_template: str = "",
):
    """
    Context manager to hold an inference optimized chat session with a GPT4All model.

    Args:
        system_prompt: An initial instruction for the model.
        prompt_template: Template for the prompts with {0} being replaced by the user message.
    """
    # Code to acquire resource, e.g.:
    self._is_chat_session_activated = True
    self.current_chat_session = empty_chat_session(system_prompt or self.config["systemPrompt"])
    self._current_prompt_template = prompt_template or self.config["promptTemplate"]
    try:
        yield self
    finally:
        # Code to release resource, e.g.:
        self._is_chat_session_activated = False
        self.current_chat_session = empty_chat_session()
        self._current_prompt_template = "{0}"
download_model(model_filename, model_path, verbose=True, url=None) staticmethod

Download model from https://gpt4all.io.

Parameters:

  • model_filename (str) –

    Filename of model (with .gguf extension).

  • model_path (Union[str, PathLike[str]]) –

    Path to download model to.

  • verbose (bool, default: True ) –

    If True (default), print debug messages.

  • url (Optional[str], default: None ) –

    The model's remote URL (e.g. it may be hosted on Hugging Face).

Returns:

  • str

    Model file destination.

Source code in gpt4all/gpt4all.py
@staticmethod
def download_model(
    model_filename: str,
    model_path: Union[str, os.PathLike[str]],
    verbose: bool = True,
    url: Optional[str] = None,
) -> str:
    """
    Download model from https://gpt4all.io.

    Args:
        model_filename: Filename of model (with .gguf extension).
        model_path: Path to download model to.
        verbose: If True (default), print debug messages.
        url: The model's remote URL (e.g. it may be hosted on Hugging Face).

    Returns:
        Model file destination.
    """

    def get_download_url(model_filename):
        if url:
            return url
        return f"https://gpt4all.io/models/gguf/{model_filename}"

    # Download model
    download_path = os.path.join(model_path, model_filename).replace("\\", "\\\\")
    download_url = get_download_url(model_filename)

    def make_request(offset=None):
        headers = {}
        if offset:
            print(f"\nDownload interrupted, resuming from byte position {offset}", file=sys.stderr)
            headers['Range'] = f'bytes={offset}-'  # resume incomplete response
        response = requests.get(download_url, stream=True, headers=headers)
        if response.status_code not in (200, 206):
            raise ValueError(f'Request failed: HTTP {response.status_code} {response.reason}')
        if offset and (response.status_code != 206 or str(offset) not in response.headers.get('Content-Range', '')):
            raise ValueError('Connection was interrupted and server does not support range requests')
        return response

    response = make_request()

    total_size_in_bytes = int(response.headers.get("content-length", 0))
    block_size = 2**20  # 1 MB

    with open(download_path, "wb") as file, \
            tqdm(total=total_size_in_bytes, unit="iB", unit_scale=True) as progress_bar:
        try:
            while True:
                last_progress = progress_bar.n
                try:
                    for data in response.iter_content(block_size):
                        file.write(data)
                        progress_bar.update(len(data))
                except ChunkedEncodingError as cee:
                    if cee.args and isinstance(pe := cee.args[0], ProtocolError):
                        if len(pe.args) >= 2 and isinstance(ir := pe.args[1], IncompleteRead):
                            assert progress_bar.n <= ir.partial  # urllib3 may be ahead of us but never behind
                            # the socket was closed during a read - retry
                            response = make_request(progress_bar.n)
                            continue
                    raise
                if total_size_in_bytes != 0 and progress_bar.n < total_size_in_bytes:
                    if progress_bar.n == last_progress:
                        raise RuntimeError('Download not making progress, aborting.')
                    # server closed connection prematurely - retry
                    response = make_request(progress_bar.n)
                    continue
                break
        except Exception:
            if verbose:
                print("Cleaning up the interrupted download...", file=sys.stderr)
            try:
                os.remove(download_path)
            except OSError:
                pass
            raise

    if os.name == 'nt':
        time.sleep(2)  # Sleep for a little bit so Windows can remove file lock

    if verbose:
        print("Model downloaded at:", download_path, file=sys.stderr)
    return download_path
generate(prompt, max_tokens=200, temp=0.7, top_k=40, top_p=0.4, repeat_penalty=1.18, repeat_last_n=64, n_batch=8, n_predict=None, streaming=False, callback=pyllmodel.empty_response_callback)

Generate outputs from any GPT4All model.

Parameters:

  • prompt (str) –

    The prompt for the model to complete.

  • max_tokens (int, default: 200 ) –

    The maximum number of tokens to generate.

  • temp (float, default: 0.7 ) –

    The model temperature. Larger values increase creativity but decrease factuality.

  • top_k (int, default: 40 ) –

    Randomly sample from the top_k most likely tokens at each generation step. Set this to 1 for greedy decoding.

  • top_p (float, default: 0.4 ) –

    Randomly sample at each generation step from the top most likely tokens whose probabilities add up to top_p.

  • repeat_penalty (float, default: 1.18 ) –

    Penalize the model for repetition. Higher values result in less repetition.

  • repeat_last_n (int, default: 64 ) –

    How far back in the model's generation history to apply the repeat penalty.

  • n_batch (int, default: 8 ) –

    Number of prompt tokens processed in parallel. Larger values decrease latency but increase resource requirements.

  • n_predict (Optional[int], default: None ) –

    Equivalent to max_tokens, exists for backwards compatibility.

  • streaming (bool, default: False ) –

    If True, this method will instead return a generator that yields tokens as the model generates them.

  • callback (ResponseCallbackType, default: empty_response_callback ) –

    A function with arguments token_id:int and response:str, which receives the tokens from the model as they are generated and stops the generation by returning False.

Returns:

  • Union[str, Iterable[str]]

    Either the entire completion or a generator that yields the completion token by token.

Source code in gpt4all/gpt4all.py
def generate(
    self,
    prompt: str,
    max_tokens: int = 200,
    temp: float = 0.7,
    top_k: int = 40,
    top_p: float = 0.4,
    repeat_penalty: float = 1.18,
    repeat_last_n: int = 64,
    n_batch: int = 8,
    n_predict: Optional[int] = None,
    streaming: bool = False,
    callback: pyllmodel.ResponseCallbackType = pyllmodel.empty_response_callback,
) -> Union[str, Iterable[str]]:
    """
    Generate outputs from any GPT4All model.

    Args:
        prompt: The prompt for the model to complete.
        max_tokens: The maximum number of tokens to generate.
        temp: The model temperature. Larger values increase creativity but decrease factuality.
        top_k: Randomly sample from the top_k most likely tokens at each generation step. Set this to 1 for greedy decoding.
        top_p: Randomly sample at each generation step from the top most likely tokens whose probabilities add up to top_p.
        repeat_penalty: Penalize the model for repetition. Higher values result in less repetition.
        repeat_last_n: How far back in the model's generation history to apply the repeat penalty.
        n_batch: Number of prompt tokens processed in parallel. Larger values decrease latency but increase resource requirements.
        n_predict: Equivalent to max_tokens, exists for backwards compatibility.
        streaming: If True, this method will instead return a generator that yields tokens as the model generates them.
        callback: A function with arguments token_id:int and response:str, which receives the tokens from the model as they are generated and stops the generation by returning False.

    Returns:
        Either the entire completion or a generator that yields the completion token by token.
    """

    # Preparing the model request
    generate_kwargs: Dict[str, Any] = dict(
        temp=temp,
        top_k=top_k,
        top_p=top_p,
        repeat_penalty=repeat_penalty,
        repeat_last_n=repeat_last_n,
        n_batch=n_batch,
        n_predict=n_predict if n_predict is not None else max_tokens,
    )

    if self._is_chat_session_activated:
        # check if there is only one message, i.e. system prompt:
        generate_kwargs["reset_context"] = len(self.current_chat_session) == 1
        self.current_chat_session.append({"role": "user", "content": prompt})

        prompt = self._format_chat_prompt_template(
            messages=self.current_chat_session[-1:],
            default_prompt_header=self.current_chat_session[0]["content"]
            if generate_kwargs["reset_context"]
            else "",
        )
    else:
        generate_kwargs["reset_context"] = True

    # Prepare the callback, process the model response
    output_collector: List[MessageType]
    output_collector = [
        {"content": ""}
    ]  # placeholder for the self.current_chat_session if chat session is not activated

    if self._is_chat_session_activated:
        self.current_chat_session.append({"role": "assistant", "content": ""})
        output_collector = self.current_chat_session

    def _callback_wrapper(
        callback: pyllmodel.ResponseCallbackType,
        output_collector: List[MessageType],
    ) -> pyllmodel.ResponseCallbackType:
        def _callback(token_id: int, response: str) -> bool:
            nonlocal callback, output_collector

            output_collector[-1]["content"] += response

            return callback(token_id, response)

        return _callback

    # Send the request to the model
    if streaming:
        return self.model.prompt_model_streaming(
            prompt=prompt,
            callback=_callback_wrapper(callback, output_collector),
            **generate_kwargs,
        )

    self.model.prompt_model(
        prompt=prompt,
        callback=_callback_wrapper(callback, output_collector),
        **generate_kwargs,
    )

    return output_collector[-1]["content"]
list_models() staticmethod

Fetch model list from https://gpt4all.io/models/models2.json.

Returns:

  • List[ConfigType]

    Model list in JSON format.

Source code in gpt4all/gpt4all.py
@staticmethod
def list_models() -> List[ConfigType]:
    """
    Fetch model list from https://gpt4all.io/models/models2.json.

    Returns:
        Model list in JSON format.
    """
    resp = requests.get("https://gpt4all.io/models/models2.json")
    if resp.status_code != 200:
        raise ValueError(f'Request failed: HTTP {resp.status_code} {resp.reason}')
    return resp.json()
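For instance, a minimal sketch that prints the downloadable model filenames (requires connectivity; each entry is a dictionary built from models2.json):

from gpt4all import GPT4All

for entry in GPT4All.list_models():
    print(entry['filename'])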
retrieve_model(model_name, model_path=None, allow_download=True, verbose=False) staticmethod

Find model file, and if it doesn't exist, download the model.

Parameters:

  • model_name (str) –

    Name of model.

  • model_path (Optional[Union[str, PathLike[str]]], default: None ) –

    Path to find model. Default is None in which case path is set to ~/.cache/gpt4all/.

  • allow_download (bool, default: True ) –

    Allow API to download model from gpt4all.io. Default is True.

  • verbose (bool, default: False ) –

    If True, print debug messages. Default is False.

Returns:

  • ConfigType

    Model config.

Source code in gpt4all/gpt4all.py
@staticmethod
def retrieve_model(
    model_name: str,
    model_path: Optional[Union[str, os.PathLike[str]]] = None,
    allow_download: bool = True,
    verbose: bool = False,
) -> ConfigType:
    """
    Find model file, and if it doesn't exist, download the model.

    Args:
        model_name: Name of model.
        model_path: Path to find model. Default is None in which case path is set to
            ~/.cache/gpt4all/.
        allow_download: Allow API to download model from gpt4all.io. Default is True.
        verbose: If True, print debug messages. Default is False.

    Returns:
        Model config.
    """

    model_filename = append_extension_if_missing(model_name)

    # get the config for the model
    config: ConfigType = DEFAULT_MODEL_CONFIG
    if allow_download:
        available_models = GPT4All.list_models()

        for m in available_models:
            if model_filename == m["filename"]:
                config.update(m)
                config["systemPrompt"] = config["systemPrompt"].strip()
                config["promptTemplate"] = config["promptTemplate"].replace(
                    "%1", "{0}", 1
                )  # change to Python-style formatting
                break

    # Validate download directory
    if model_path is None:
        try:
            os.makedirs(DEFAULT_MODEL_DIRECTORY, exist_ok=True)
        except OSError as exc:
            raise ValueError(
                f"Failed to create model download directory at {DEFAULT_MODEL_DIRECTORY}: {exc}. "
                "Please specify model_path."
            )
        model_path = DEFAULT_MODEL_DIRECTORY
    else:
        model_path = str(model_path).replace("\\", "\\\\")

    if not os.path.exists(model_path):
        raise ValueError(f"Invalid model directory: {model_path}")

    model_dest = os.path.join(model_path, model_filename).replace("\\", "\\\\")
    if os.path.exists(model_dest):
        config.pop("url", None)
        config["path"] = model_dest
        if verbose:
            print("Found model file at", model_dest, file=sys.stderr)

    # If model file does not exist, download
    elif allow_download:
        url = config.pop("url", None)

        config["path"] = GPT4All.download_model(model_filename, model_path, verbose=verbose, url=url)
    else:
        raise ValueError("Failed to retrieve model")

    return config