
GPT4All Node.js API

Native Node.js LLM bindings for all.

yarn add gpt4all@latest

npm install gpt4all@latest

pnpm install gpt4all@latest

The original GPT4All TypeScript bindings are now out of date.

  • New bindings created by jacoobes, limez and the Nomic AI community, for all to use.
  • The Node.js API has made strides to mirror the Python API. It is not 100% mirrored, but many pieces of the API resemble their Python counterparts.
  • Everything should work out of the box.
  • See API Reference

Chat Completion

import { createCompletion, loadModel } from '../src/gpt4all.js'

const model = await loadModel('mistral-7b-openorca.Q4_0.gguf', { verbose: true });

const response = await createCompletion(model, [
    { role: 'system', content: 'You are meant to be annoying and unhelpful.' },
    { role: 'user', content: 'What is 1 + 1?' }
]);

console.log(response.choices[0].message);

Embedding

import { createEmbedding, loadModel } from '../src/gpt4all.js'

const model = await loadModel('ggml-all-MiniLM-L6-v2-f16', { verbose: true });

const fltArray = createEmbedding(model, "Pain is inevitable, suffering optional");

Build Instructions

  • binding.gyp is the compile config
  • Tested on Ubuntu. Everything seems to work fine.
  • Tested on Windows. Everything works fine.
  • Sparse testing on macOS.
  • MinGW also works to build the gpt4all-backend. HOWEVER, this package works only with MSVC-built DLLs.


Requirements

  • git
  • node.js >= 18.0.0
  • yarn
  • node-gyp
    • all of its requirements
  • (unix) gcc version 12
  • (win) msvc version 143
    • Can be obtained with Visual Studio 2022 build tools
  • python 3
  • On Windows and Linux, building GPT4All requires the complete Vulkan SDK. You may download it from https://vulkan.lunarg.com/.
  • macOS users do not need Vulkan, as GPT4All will use Metal instead.

Build (from source)

git clone https://github.com/nomic-ai/gpt4all.git
cd gpt4all/gpt4all-bindings/typescript

  • The below shell commands assume the current working directory is typescript.

  • To build and rebuild:

yarn build:backend

  • The llama.cpp git submodule for gpt4all may be absent. If this is the case, make sure to run the following in the llama.cpp parent directory:

git submodule update --init --depth 1 --recursive

This will build platform-dependent dynamic libraries, located in runtimes/(platform)/native. The only current way to use them is to put them in the current working directory of your application, that is, WHEREVER YOU RUN YOUR NODE APPLICATION.

  • llama-xxxx.dll is required.
  • Depending on which model you are using, you'll need to select the proper model loader.
    • For example, if you are running a Mosaic MPT model, you will need to select the mpt-(buildvariant).(dynamiclibrary) file, as in the sketch below.


Test

yarn test

Source Overview


src/

  • Extra functions to help aid devex
  • Typings for the native node addon
  • The JavaScript interface

test/

  • Simple unit tests for some exported functions.
  • More advanced AI testing is not handled.

spec/

  • Average look and feel of the API
  • Should work assuming a model and libraries are installed locally in the working directory

index.cc

  • The bridge between Node.js and C. Where the bindings are.

prompt.cc

  • Handles prompting and inference of models in a threadsafe, asynchronous way.

Known Issues

  • Why your model may be spewing bull 💩:
    • The downloaded model is broken (just reinstall or download from the official site)
    • That's it so far


Roadmap

This package is in active development, and breaking changes may happen until the API stabilizes. Here's the current todo list:

  • [x] prompt models via a threadsafe function in order to have proper non blocking behavior in nodejs
  • [ ] ~~createTokenStream, an async iterator that streams each token emitted from the model. Planning on following this example~~ May not implement unless someone else can complete
  • [x] proper unit testing (integrate with circle ci)
  • [x] publish to npm under alpha tag gpt4all@alpha
  • [x] have more people test on other platforms (mac tester needed)
  • [x] switch to new pluggable backend
  • [ ] NPM bundle size reduction via optionalDependencies strategy (need help)
    • Should include prebuilds to avoid painful node-gyp errors
  • [ ] createChatSession ( the python equivalent to create_chat_session )

API Reference



Full list of models available. DEPRECATED!! These model names are outdated and this type will not be maintained; please use a string literal instead.


List of GPT-J Models

Type: ("ggml-gpt4all-j-v1.3-groovy.bin" | "ggml-gpt4all-j-v1.2-jazzy.bin" | "ggml-gpt4all-j-v1.1-breezy.bin" | "ggml-gpt4all-j.bin")


List of Llama Models

Type: ("ggml-gpt4all-l13b-snoozy.bin" | "ggml-vicuna-7b-1.1-q4_2.bin" | "ggml-vicuna-13b-1.1-q4_2.bin" | "ggml-wizardLM-7B.q4_2.bin" | "ggml-stable-vicuna-13B.q4_2.bin" | "ggml-nous-gpt4-vicuna-13b.bin" | "ggml-v3-13b-hermes-q5_1.bin")


List of MPT Models

Type: ("ggml-mpt-7b-base.bin" | "ggml-mpt-7b-chat.bin" | "ggml-mpt-7b-instruct.bin")


List of Replit Models

Type: "ggml-replit-code-v1-3b.bin"


Model architecture. This argument currently does not have any functionality and is just used as a descriptive identifier for the user.

Type: ModelType


Callback for controlling token generation

Type: function (tokenId: number, token: string, total: string): boolean
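A minimal sketch of such a callback, following the signature above. That returning false halts generation is an assumption here.

// Stream each token to stdout, and stop generating once the
// accumulated response grows past 500 characters.
const responseCallback = (tokenId, token, total) => {
    process.stdout.write(token)
    return total.length < 500 // returning false is assumed to halt generation
}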


InferenceModel represents an LLM which can make chat predictions, similar to GPT transformers.


Delete and clean up the native model.

Returns void


EmbeddingModel represents an LLM which can create embeddings, which are float arrays


Delete and clean up the native model.

Returns void


LLModel class representing a language model. This is a base class that provides common functionality for different types of language models.


Initialize a new LLModel.

  • path string Absolute path to the model file.
  • Throws Error If the model file does not exist.

Either 'gpt', 'mpt', or 'llama', or undefined.

Returns (ModelType | undefined)


The name of the model.

Returns string


Get the size of the internal state of the model. NOTE: This state data is specific to the type of model you have created.

Returns number the size in bytes of the internal state of the model


Get the number of threads used for model inference. The default is the number of physical cores your computer has.

Returns number The number of threads used for model inference.


Set the number of threads used for model inference.

  • newNumber number The new number of threads.

Returns void
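A sketch of reading and changing the thread count. It assumes the loaded model exposes the underlying LLModel instance as llm; the method names follow the descriptions above.

import { loadModel } from '../src/gpt4all.js'

const model = await loadModel('mistral-7b-openorca.Q4_0.gguf')
const llmodel = model.llm // assumption: the raw LLModel handle

console.log(llmodel.threadCount()) // defaults to the number of physical cores
llmodel.setThreadCount(4)          // pin inference to 4 threads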


Prompt the model with a given input and optional parameters. This is the raw output from the model. Use the exported prompt function for a value.


Returns Promise<string> The result of the model prompt.


Embed text with the model. Keep in mind that not all models can embed text (only BERT can embed as of 07/16/2023). Use the exported createEmbedding function for a value.

  • text string The prompt input.
  • params Optional parameters for the prompt context.

Returns Float32Array The resulting embedding.


Whether the model is loaded or not.

Returns boolean


Where to search for the pluggable backend libraries


Returns void


Where to get the pluggable backend libraries

Returns string


Initiate a GPU by a string identifier.

  • memory_required number Should be in the range size_t or will throw
  • device_name string 'amd' | 'nvidia' | 'intel' | 'gpu' | gpu name. See LoadModelOptions.device for more information.

Returns boolean


From C documentation

Returns boolean True if a GPU device is successfully initialized, false otherwise.


GPUs that are usable for this LLModel

  • nCtx number Maximum size of context window
  • Throws any if hasGpuDevice returns false (I think)

Returns Array<GpuDevice>
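A hedged sketch of GPU discovery, reusing the llmodel handle from the thread-count sketch above. The method names listGpu and initGpuByString are assumptions based on the descriptions here; only the parameter names come from the documentation.

const gpus = llmodel.listGpu(2048) // nCtx: maximum size of context window
for (const gpu of gpus) {
    console.log(gpu) // GpuDevice objects describing this machine's GPUs
}

// 4 GiB of required memory, targeting any nvidia device; returns false on failure.
const ok = llmodel.initGpuByString(4 * 1024 ** 3, 'nvidia')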


Delete and clean up the native model.

Returns void


An object that contains GPU data on this machine.


Same as VkPhysicalDeviceType.

Type: number


Options that configure a model's behavior.


Loads a machine learning model with the specified name. The de facto way to create a model. By default this will download a model from the official GPT4All website if one is not present at the given path.


Returns Promise<(InferenceModel | EmbeddingModel)> A promise that resolves to an instance of the loaded LLModel.
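For instance, a sketch of loading each kind of model. The type option shown for the embedding case is an assumption; it reflects how loadModel decides which of the two classes to return.

import { loadModel } from '../src/gpt4all.js'

// Inference model; downloaded from the official site if absent locally.
const chat = await loadModel('mistral-7b-openorca.Q4_0.gguf', { verbose: true })

// Embedding model; the `type` option here is an assumption.
const embedder = await loadModel('ggml-all-MiniLM-L6-v2-f16', { type: 'embedding' })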


The Node.js equivalent of the Python binding's chat_completion.


Returns CompletionReturn The completion result.


The Node.js moral equivalent of the Python binding's Embed4All().embed(). meow


Returns Float32Array The embedding result.


Extends Partial<LLModelPromptContext>

The options for creating the completion.


Indicates if verbose logging is enabled.

Type: boolean


Template for the system message. Will be put before the conversation with %1 being replaced by all system messages. Note that if this is not defined, system messages will not be included in the prompt.

Type: string


Template for user messages, with %1 being replaced by the message.

Type: string


The initial instruction for the model, on top of the prompt

Type: string


The last instruction for the model, appended to the end of the prompt.

Type: string
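Putting those template options together, a hedged sketch; the option names are assumptions inferred from the descriptions above.

import { createCompletion, loadModel } from '../src/gpt4all.js'

const model = await loadModel('mistral-7b-openorca.Q4_0.gguf')

const response = await createCompletion(model, [
    { role: 'system', content: 'You are a concise assistant.' },
    { role: 'user', content: 'What is the capital of France?' }
], {
    verbose: true,
    systemPromptTemplate: '### System:\n%1\n', // %1 = all system messages
    promptTemplate: '### User:\n%1\n',         // %1 = the user message
    promptHeader: 'Answer in one sentence.',   // placed on top of the prompt
    promptFooter: '### Response:'              // appended to the end of the prompt
})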


A message in the conversation, identical to OpenAI's chat message.


The role of the message.

Type: ("system" | "assistant" | "user")


The message content.

Type: string


The number of tokens used in the prompt.

Type: number


The number of tokens used in the completion.

Type: number


The total number of tokens used.

Type: number


The result of the completion, similar to OpenAI's format.


The model used for the completion.

Type: string


Token usage report.

Type: {prompt_tokens: number, completion_tokens: number, total_tokens: number}


The generated completions.

Type: Array<CompletionChoice>


A completion choice, similar to OpenAI's format.


Response message

Type: PromptMessage


Model inference arguments for generating completions.


The size of the raw logits vector.

Type: number


The size of the raw tokens vector.

Type: number


The number of tokens in the past conversation.

Type: number


The number of tokens possible in the context window.

Type: number


The number of tokens to predict.

Type: number


The top-k logits to sample from. Top-K sampling selects the next token only from the top K most likely tokens predicted by the model. It helps reduce the risk of generating low-probability or nonsensical tokens, but it may also limit the diversity of the output. A higher value for top-K (e.g., 100) will consider more tokens and lead to more diverse text, while a lower value (e.g., 10) will focus on the most probable tokens and generate more conservative text. 30 - 60 is a good range for most tasks.

Type: number


The nucleus sampling probability threshold. Top-P limits the selection of the next token to a subset of tokens with a cumulative probability above a threshold P. This method, also known as nucleus sampling, finds a balance between diversity and quality by considering both token probabilities and the number of tokens available for sampling. When using a higher value for top-P (e.g., 0.95), the generated text becomes more diverse. On the other hand, a lower value (e.g., 0.1) produces more focused and conservative text. The default value is 0.4, which aims to be the middle ground between focus and diversity; for more creative tasks a higher top-P value is beneficial, with about 0.5 - 0.9 being a good range.

Type: number


The temperature to adjust the model's output distribution. Temperature is like a knob that adjusts how creative or focused the output becomes. Higher temperatures (e.g., 1.2) increase randomness, resulting in more imaginative and diverse text. Lower temperatures (e.g., 0.5) make the output more focused, predictable, and conservative. When the temperature is set to 0, the output becomes completely deterministic, always selecting the most probable next token and producing identical results each time. A safe range would be around 0.6 - 0.85, but you are free to search for the value that fits best for you.

Type: number


The number of predictions to generate in parallel. By splitting the prompt every N tokens, prompt-batch-size reduces RAM usage during processing. However, this can increase the processing time as a trade-off. If the N value is set too low (e.g., 10), long prompts with 500+ tokens will be most affected, requiring numerous processing runs to complete the prompt processing. To ensure optimal performance, setting the prompt-batch-size to 2048 allows processing of all tokens in a single run.

Type: number


The penalty factor for repeated tokens. Repeat-penalty can help penalize tokens based on how frequently they occur in the text, including the input prompt. A token that has already appeared five times is penalized more heavily than a token that has appeared only one time. A value of 1 means that there is no penalty and values larger than 1 discourage repeated tokens.

Type: number


The number of last tokens to penalize. The repeat-penalty-tokens N option controls the number of tokens in the history to consider for penalizing repetition. A larger value will look further back in the generated text to prevent repetitions, while a smaller value will only consider recent tokens.

Type: number


The percentage of context to erase if the context window is exceeded.

Type: number
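As a worked example, these inference arguments might be passed through createCompletion's options. The field names are assumptions matching the descriptions above.

import { createCompletion, loadModel } from '../src/gpt4all.js'

const model = await loadModel('mistral-7b-openorca.Q4_0.gguf')

const response = await createCompletion(model, [
    { role: 'user', content: 'Write a haiku about the sea.' }
], {
    nPredict: 128,       // tokens to predict
    topK: 40,            // within the suggested 30 - 60 range
    topP: 0.4,           // the documented default nucleus threshold
    temp: 0.7,           // within the suggested 0.6 - 0.85 range
    nBatch: 8,           // prompt batch size
    repeatPenalty: 1.18, // > 1 discourages repeated tokens
    repeatLastN: 64,     // history window for the repeat penalty
    contextErase: 0.5    // fraction of context erased on overflow
})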


Creates an async generator of tokens


Returns AsyncGenerator<string> The stream of generated tokens
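A sketch of consuming such a generator. generateTokens is a hypothetical name here; substitute whatever identifier the binding actually exports for this function.

import { loadModel, generateTokens } from '../src/gpt4all.js' // generateTokens: hypothetical name

const model = await loadModel('mistral-7b-openorca.Q4_0.gguf')

// Print tokens as they are emitted by the model.
for await (const token of generateTokens(model, 'Tell me a story.')) {
    process.stdout.write(token)
}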


From the Python API: models will be stored in (homedir)/.cache/gpt4all/

Type: string


From the Python API: the default path for dynamic libraries to be stored. You may separate paths with a semicolon to search multiple areas. This searches DEFAULT_DIRECTORY/libraries, cwd/libraries, and finally cwd.

Type: string


Default model configuration.

Type: ModelConfig


Default prompt context.

Type: LLModelPromptContext


Default model list url.

Type: string


Initiates the download of a model file. By default this downloads without waiting. Use the returned controller to alter this behavior.

  • modelName string The model to be downloaded.
  • options DownloadOptions to pass into the downloader. Default is { location: (cwd), verbose: false }.

const download = downloadModel('ggml-gpt4all-j-v1.3-groovy.bin')
download.promise.then(() => console.log('Downloaded!'))

  • Throws Error If the model already exists in the specified location.
  • Throws Error If the model cannot be found at the specified url.

Returns DownloadController object that allows controlling the download process.


Options for the model download process.


Location to download the model. Default is process.cwd(), or the current working directory.

Type: string


Debug mode -- check how long it took to download in seconds

Type: boolean


Remote download URL. Defaults to the official GPT4All download URL for <modelName>.

Type: string


MD5 sum of the model file. If this is provided, the downloaded file will be checked against this sum. If the sums do not match, an error will be thrown and the file will be deleted.

Type: string


Model download controller.


Cancels the download request when called.

Type: function (): void


A promise resolving to the downloaded model's config once the download is done.

Type: Promise<ModelConfig>
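Tying the download pieces together, a short usage sketch built from the documented cancel and promise members; the five-minute timeout is illustrative.

import { downloadModel } from '../src/gpt4all.js'

const controller = downloadModel('ggml-gpt4all-j-v1.3-groovy.bin', { verbose: true })

// Give up if the download takes longer than five minutes.
const timeout = setTimeout(() => controller.cancel(), 5 * 60 * 1000)

const config = await controller.promise // resolves to the downloaded model's config
clearTimeout(timeout)
console.log('Downloaded!', config)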