---
title: 'Development'
description: 'Learn how to preview changes locally'
---

<Info>
**Prerequisite**: You should have installed Node.js (version 18.10.0 or higher).
</Info>

Step 1. Install Mintlify on your OS:

<CodeGroup>

```bash npm
npm i -g mintlify
```

```bash yarn
yarn global add mintlify
```

</CodeGroup>

Step 2. Go to where the docs are located (where you can find `mint.json`) and run the following command:

```bash
mintlify dev
```

The documentation website is now available at `http://localhost:3000`.

### Custom Ports

Mintlify uses port 3000 by default. You can use the `--port` flag to customize the port Mintlify runs on. For example, use this command to run on port 3333:

```bash
mintlify dev --port 3333
```

You will see an error like this if you try to run Mintlify on a port that's already taken:

```md
Error: listen EADDRINUSE: address already in use :::3000
```

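If you hit this error, another process is already bound to the port. On macOS or Linux you can find it with `lsof` (a general Unix tool, not part of the Mintlify CLI), then stop it or pick a different port:

```bash
# Show the process currently listening on port 3000
lsof -i :3000

# Either stop that process, or run Mintlify on another port
mintlify dev --port 3333
```
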
## Mintlify Versions

Each CLI version is linked to a specific version of Mintlify. Please update the CLI if your local website looks different from production.

<CodeGroup>

```bash npm
npm i -g mintlify@latest
```

```bash yarn
yarn global upgrade mintlify
```

</CodeGroup>

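If you're not sure which version you have installed, you can check your package manager's global tree (shown here for npm and yarn classic):

```bash
# Show the globally installed mintlify version (npm)
npm list -g mintlify

# yarn classic equivalent
yarn global list --pattern mintlify
```
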
## Deployment

<Tip>
Unlimited editors available under the [Startup Plan](https://mintlify.com/pricing).
</Tip>

You should see the following if the deployment went through successfully:

<Frame>
<img src="/images/checks-passed.png" style={{ borderRadius: '0.5rem' }} />
</Frame>

## Troubleshooting

Here's how to solve some common problems when working with the CLI.

<AccordionGroup>
  <Accordion title="Mintlify is not loading">
    Update to Node v18. Run `mintlify install` and try again.
  </Accordion>
  <Accordion title="No such file or directory on Windows">
    Go to the `C:/Users/Username/.mintlify/` directory and remove the `mint`
    folder. Then open Git Bash in this location and run `git clone
    https://github.com/mintlify/mint.git`.

    Then repeat step 2 by running `mintlify dev`.
  </Accordion>
  <Accordion title="Getting an unknown error">
    Try navigating to the root of your device and deleting the `~/.mintlify` folder.
    Then run `mintlify dev` again.
  </Accordion>
</AccordionGroup>

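For the "unknown error" case above, the reset amounts to the following (note that this deletes Mintlify's local cache):

```bash
# Remove Mintlify's local cache folder, then start the dev server again
rm -rf ~/.mintlify
mintlify dev
```
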
Curious about what changed in a CLI version? [Check out the CLI changelog.](/changelog/command-line)

## Environment Variable Configuration

The simplest way to get started with Skyvern is to set environment variables in a `.env` file at the root of the project. This file is loaded by the application at runtime, and the values are used to configure the system.

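For instance, you could create a minimal `.env` like this (the values shown are the OpenAI settings covered in the next section):

```bash
# Create a .env file at the project root
cat > .env <<'EOF'
ENABLE_OPENAI=true
OPENAI_API_KEY=sk-xxxxxxxxxxxxx
LLM_KEY=OPENAI_GPT4O
EOF
```
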
### LLM Configuration

Skyvern works with multiple LLM providers. You need to set the following environment variables to configure your LLM provider:

```bash
# One of the below must be set to true
ENABLE_OPENAI=true
# ENABLE_ANTHROPIC=true
# ENABLE_AZURE=true
# ENABLE_BEDROCK=true
# ENABLE_GEMINI=true
# ENABLE_NOVITA=true
# ENABLE_OPENAI_COMPATIBLE=true

# Set your LLM provider API key
OPENAI_API_KEY=sk-xxxxxxxxxxxxx

# Set which model to use
LLM_KEY=OPENAI_GPT4O
```

If you're using OpenAI, you'll need to set `ENABLE_OPENAI=true` and provide your `OPENAI_API_KEY`. The `LLM_KEY` specifies which model to use, and should match one of the registered models in the `LLMConfigRegistry`.

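Other providers follow the same pattern: enable the provider, supply its API key, and point `LLM_KEY` at a registered model. As a sketch, an Anthropic setup might look like this (the exact `LLM_KEY` values are defined by Skyvern's `LLMConfigRegistry`; the one below is illustrative):

```bash
ENABLE_ANTHROPIC=true
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxx
# Illustrative key name; check the LLMConfigRegistry for the exact registered keys
LLM_KEY=ANTHROPIC_CLAUDE3.5_SONNET
```
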
#### Using Custom OpenAI-compatible Models

Skyvern can also use any OpenAI-compatible LLM API endpoint. This is useful for connecting to alternative providers that follow the OpenAI API format or to self-hosted models. This feature is implemented using [liteLLM's OpenAI-compatible provider support](https://docs.litellm.ai/docs/providers/openai_compatible).

To enable this:

```bash
# Enable OpenAI-compatible mode
ENABLE_OPENAI_COMPATIBLE=true

# Required configuration
OPENAI_COMPATIBLE_MODEL_NAME=gpt-3.5-turbo # The model name supported by your endpoint (REQUIRED)
OPENAI_COMPATIBLE_API_KEY=your-api-key # API key for your endpoint (REQUIRED)
OPENAI_COMPATIBLE_API_BASE=https://your-api-endpoint.com/v1 # Base URL for your endpoint (REQUIRED)

# Optional configuration
OPENAI_COMPATIBLE_API_VERSION=2023-05-15 # Optional API version
OPENAI_COMPATIBLE_MAX_TOKENS=4096 # Optional, defaults to LLM_CONFIG_MAX_TOKENS
OPENAI_COMPATIBLE_TEMPERATURE=0.0 # Optional, defaults to LLM_CONFIG_TEMPERATURE
OPENAI_COMPATIBLE_SUPPORTS_VISION=false # Optional, defaults to false
OPENAI_COMPATIBLE_ADD_ASSISTANT_PREFIX=false # Optional, defaults to false
OPENAI_COMPATIBLE_MODEL_KEY=OPENAI_COMPATIBLE # Optional custom key to register the model with
```

**Important Note**: When using this feature, the model name you provide will be prefixed with "openai/" in the liteLLM configuration. This is how liteLLM determines the routing for OpenAI-compatible providers.

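As a sanity check that your endpoint really speaks the OpenAI API format, you can query it directly. This is a generic OpenAI-style request, independent of Skyvern:

```bash
# Send a minimal chat completion request to the configured endpoint
curl "$OPENAI_COMPATIBLE_API_BASE/chat/completions" \
  -H "Authorization: Bearer $OPENAI_COMPATIBLE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```
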
This feature allows you to use models from providers like:

- Together.ai
- Anyscale
- Mistral
- Self-hosted models, e.g., using LM Studio or a local vLLM server (see the sketch after this list)
- Any API endpoint that follows the OpenAI API format

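For example, a local vLLM server, which serves an OpenAI-compatible API (by default at `http://localhost:8000/v1`), could be wired up like this. The model name is illustrative:

```bash
ENABLE_OPENAI_COMPATIBLE=true
OPENAI_COMPATIBLE_MODEL_NAME=mistralai/Mistral-7B-Instruct-v0.2 # Illustrative; use the model your server loads
OPENAI_COMPATIBLE_API_KEY=not-needed # Some local servers ignore the key, but a value must be set
OPENAI_COMPATIBLE_API_BASE=http://localhost:8000/v1
```
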
Once configured, you can set `LLM_KEY=OPENAI_COMPATIBLE` to use this model as your primary LLM.

For more detailed information about OpenAI-compatible providers supported by liteLLM, see their [documentation](https://docs.litellm.ai/docs/providers/openai_compatible).

### Running the Server