Select a model provider

Continue makes it easy to swap between LLM providers. You can either click the "+" button next to the model dropdown to configure a provider in the GUI, or add one manually to your config.json. Once configured, you can switch between models in the model selection dropdown.
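As a rough sketch of what a manual entry looks like, a model is added under the "models" array in config.json (the exact field values here, such as the title and model name, are illustrative assumptions):

```json
{
  "models": [
    {
      "title": "GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "YOUR_API_KEY"
    }
  ]
}
```

Each entry pairs a provider with a specific model, and the "title" is simply the label shown in the dropdown.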

In addition to selecting a model provider, you will need to decide which LLM to use.


You can run a model on your local computer using:

Once you have it running, configure it in the GUI or add it manually to your config.json.
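For example, assuming you are running a local model with Ollama (one common choice; the model name below is an illustrative assumption), the config.json entry might look like:

```json
{
  "models": [
    {
      "title": "Llama 3 (local)",
      "provider": "ollama",
      "model": "llama3"
    }
  ]
}
```

No API key is needed here, since the model is served from your own machine.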


You can deploy a model in your AWS, GCP, Azure, or other clouds using:

If the API you use is OpenAI-compatible, you can use the "openai" provider in config.json and set apiBase to point at your server. Otherwise, you may need to wire up a new LLM object in config.ts. Learn how to do this here.
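A minimal sketch of pointing the "openai" provider at a self-hosted, OpenAI-compatible endpoint (the title, model name, and apiBase URL below are placeholder assumptions for your deployment):

```json
{
  "models": [
    {
      "title": "My Self-Hosted Model",
      "provider": "openai",
      "model": "MODEL_NAME",
      "apiBase": "http://localhost:8000/v1"
    }
  ]
}
```

The key idea is that apiBase overrides the default OpenAI endpoint, so requests go to your server instead.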


Open-source LLMs

You can deploy open-source LLMs on a service using:

Commercial LLMs

You can use commercial LLMs via APIs using:
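As one hedged example of a commercial API provider entry, a config.json block for Anthropic might look like the following (the model identifier is an illustrative assumption; check the provider's documentation for current model names):

```json
{
  "models": [
    {
      "title": "Claude 3.5 Sonnet",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-latest",
      "apiKey": "YOUR_API_KEY"
    }
  ]
}
```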