Frequently Asked Questions


Which language models are supported?

We support models with a llama.cpp implementation that have been uploaded to Hugging Face.

Which embedding models are supported?

We support SBert and Nomic Embed Text v1 & v1.5.


What software do I need?

All you need to do is install GPT4All on your Windows, Mac, or Linux computer.

Which SDK languages are supported?

Our SDK is in Python for usability, but it consists of light bindings around llama.cpp implementations that we contribute to for efficiency and accessibility on everyday computers.
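A minimal sketch of using the Python SDK, assuming the `gpt4all` package is installed (`pip install gpt4all`); the model filename below is an example, and the model is downloaded on first use:

```python
def chat(prompt: str, model_name: str = "Meta-Llama-3-8B-Instruct.Q4_0.gguf") -> str:
    """Load a GGUF model and generate a single reply.

    The import is deferred so this sketch can be read and imported even
    without the gpt4all package installed.
    """
    from gpt4all import GPT4All

    model = GPT4All(model_name)  # downloads the model file on first use
    with model.chat_session():   # keeps conversation context for the session
        return model.generate(prompt, max_tokens=128)
```

A session created with `chat_session()` retains prior turns, so repeated `generate` calls inside it behave like a conversation rather than independent prompts.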

Is there an API?

Yes, you can run your model in server mode with our OpenAI-compatible API, which you can enable in Settings.
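As a sketch of what a client looks like, assuming the server is enabled in Settings and listening on the default `http://localhost:4891` (the model name in the payload is an example; use whichever model you have loaded):

```python
import json
import urllib.request

API_URL = "http://localhost:4891/v1/chat/completions"

def build_payload(prompt: str, model: str = "Llama 3 8B Instruct") -> dict:
    """Assemble an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 200,
        "temperature": 0.7,
    }

def ask(prompt: str) -> str:
    """POST the request to the local server and return the reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint follows the OpenAI request/response schema, existing OpenAI client libraries can also be pointed at the local base URL instead of hand-rolling requests like this.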

Can I monitor a GPT4All deployment?

Yes, GPT4All integrates with OpenLIT so you can deploy LLMs with user interactions and hardware usage automatically monitored for full observability.

Is there a command line interface (CLI)?

Yes, we provide a lightweight CLI built on the Python client. We welcome further contributions!


What hardware do I need?

GPT4All can run on CPU, Metal (Apple Silicon M1+), and GPU.

What are the system requirements?

Your CPU needs to support AVX or AVX2 instructions, and you need enough RAM to load a model into memory.
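On Linux, a quick way to check whether your CPU advertises AVX or AVX2 is to look at the flags in `/proc/cpuinfo` (macOS users can inspect `sysctl -a` instead):

```shell
# Prints "avx2" or "avx" if supported, otherwise a notice.
grep -om1 'avx2' /proc/cpuinfo \
  || grep -om1 'avx' /proc/cpuinfo \
  || echo "no AVX support detected"
```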