Advanced Features
Models
Through Cursor Chat, Ctrl/⌘ K, and Terminal Ctrl/⌘ K, you can easily switch between different models.
Model Dropdown
Below the AI input box, you'll see a dropdown menu that lets you choose which model to use. By default, Cursor has these models available:
GPT-4o
GPT-4
Claude 3.5 Sonnet
cursor-small
cursor-small is Cursor's custom model that, while not as intelligent as GPT-4, is faster and available for unlimited use.
You can add other models under Cursor Settings > Models > Model Names.
What context window does Model X use?
In chat, we currently limit prompts to about 20,000 tokens (or less if the model doesn't support that much context). For Cmd-K, we limit to about 10,000 tokens to balance time to first token (TTFT) and quality. Long context chat uses the model's maximum context window.
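If you want a rough sense of whether a prompt fits within these limits, you can estimate its token count yourself. The sketch below uses the tiktoken library with the cl100k_base encoding as an assumption; Cursor's exact tokenization isn't documented, so treat the counts as estimates only.

```python
# Rough token estimate against the approximate limits quoted above
# (~20,000 tokens for chat, ~10,000 for Cmd-K). The tokenizer choice
# is an assumption; Cursor's internal tokenization may differ.
import tiktoken

CHAT_LIMIT = 20_000   # approximate chat limit
CMD_K_LIMIT = 10_000  # approximate Cmd-K limit

def estimate_tokens(text: str) -> int:
    # cl100k_base is the encoding used by GPT-4-era models; treat the
    # result as an estimate, not an exact count.
    encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(text))

prompt = open("my_prompt.txt").read()
tokens = estimate_tokens(prompt)
print(f"{tokens} tokens; fits in chat: {tokens <= CHAT_LIMIT}, "
      f"fits in Cmd-K: {tokens <= CMD_K_LIMIT}")
```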
Custom API Keys
Cursor allows you to input your own API keys from various LLM providers so you can send any number of AI messages at your own cost. When you use custom API keys, we use them when calling the LLM provider.
To use your own API key, go to Cursor Settings > Models > OpenAI API Key and enter your API key. Then click the "Verify" button; once verification succeeds, your OpenAI API key will be enabled.
Some Cursor features (like Tab completion, applying from chat, and Composer) require specialized models and cannot be used with custom API keys. Custom API keys only work with features that use standard models from providers like OpenAI, Anthropic, and Google.
OpenAI API Key
You can get your own API key from the OpenAI platform.
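Before pasting a key into Cursor, it can be useful to confirm the key works at all. The following is a minimal sketch using the official openai Python SDK, independent of Cursor; it makes a cheap authenticated request that fails if the key is invalid. This is not how Cursor itself verifies keys.

```python
# Quick sanity check for an OpenAI API key, outside of Cursor.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # your key from the OpenAI platform

# Listing models is an inexpensive request that only succeeds with a valid key.
models = client.models.list()
print("Key is valid; available models include:",
      [m.id for m in models.data][:5])
```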
Anthropic API Key
Similar to OpenAI, you can also set up your own Anthropic API key so you can use Claude-based models at your own cost.
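As a sketch, you can check an Anthropic key the same way using the official anthropic SDK; the model name below is just an example of a Claude model your key might have access to.

```python
# Quick sanity check for an Anthropic API key, outside of Cursor.
import anthropic

client = anthropic.Anthropic(api_key="sk-ant-...")
response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # example model name
    max_tokens=32,
    messages=[{"role": "user", "content": "Reply with OK if you can read this."}],
)
print(response.content[0].text)
```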
Google API Key
For Google API keys, you can set up your own API key so you can use Google models like gemini-1.5-flash-500k at your own cost.
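A similar sketch for a Google key uses the google-generativeai SDK. Note that gemini-1.5-flash-500k appears to be Cursor's label for a long-context variant; the public SDK model name used below is gemini-1.5-flash.

```python
# Quick sanity check for a Google API key, outside of Cursor.
import google.generativeai as genai

genai.configure(api_key="AIza...")  # your Google AI API key
model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Reply with OK if you can read this.")
print(response.text)
```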
Azure Integration
Finally, you can also set up your own Azure API key so you can use Azure OpenAI models at your own cost.
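Azure OpenAI differs from the other providers in that it needs an endpoint URL, an API version, and a deployment name in addition to the key. The sketch below uses the AzureOpenAI client from the openai SDK; the endpoint and deployment name are placeholders for your own Azure resource, not values Cursor provides.

```python
# Quick sanity check for an Azure OpenAI setup, outside of Cursor.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # placeholder endpoint
    api_key="...",
    api_version="2024-02-01",
)
response = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # your deployment name, not the base model name
    messages=[{"role": "user", "content": "Reply with OK if you can read this."}],
    max_tokens=16,
)
print(response.choices[0].message.content)
```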
Will my API key be stored or leave my device?
Your API keys are not stored, but they are sent with each request to our servers. All requests are routed through our backend, since that is where we do final prompt construction.
Shadow Workspace
Shadow workspace is an optional setting that you can configure to improve the quality of AI-generated code. Only some features use it.
If you enable shadow workspace, background AIs can request linting for code they write. This creates a hidden window on your computer to ensure your coding experience isn't affected.
Shadow workspace increases Cursor's memory usage, so we recommend only enabling this feature when you have sufficient RAM.
You can read more about shadow workspace in our blog post.