# Installation
## With Python

Make sure you have Python 3.8, 3.9, or 3.10 installed, then from a terminal run:

```bash
pip install libretranslate
libretranslate [args]
```

⚠️ Newer versions of Python, such as 3.12 or 3.13, are currently not supported.
Then open a web browser to http://localhost:5000
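Besides the web UI, a running instance can be queried programmatically through the REST API. Below is a minimal sketch using only the Python standard library, assuming a default instance at `http://localhost:5000`; the `build_translate_request` helper is our own name, not part of LibreTranslate:

```python
import json
import urllib.request

def build_translate_request(q, source="en", target="es",
                            url="http://localhost:5000/translate"):
    """Build a POST request for the /translate endpoint."""
    payload = json.dumps({
        "q": q,
        "source": source,
        "target": target,
        "format": "text",
    }).encode("utf-8")
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )

req = build_translate_request("Hello, world!", source="en", target="es")
try:
    # Sending the request requires a running LibreTranslate instance:
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(json.load(resp)["translatedText"])
except OSError:
    pass  # no local instance reachable
```

The response is a JSON object whose `translatedText` field contains the translation.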
By default LibreTranslate installs support for all available languages. To load only certain languages and reduce startup time, use the `--load-only` argument:

```bash
libretranslate --load-only en,es,fr
```
Check the arguments list for more options.
## With Docker

You can also run the application with Docker. First clone the repository:

```bash
git clone https://github.com/LibreTranslate/LibreTranslate
cd LibreTranslate
```

Then on Linux/macOS run `./run.sh [args]`, or on Windows run `run.bat [args]`.
You can use hardware acceleration to speed up translations on a GPU machine with CUDA 12.4.1 and nvidia-docker installed.
Run this version with:

```bash
docker compose -f docker-compose.cuda.yml up -d --build
```
## With WSGI and Gunicorn

```bash
pip install gunicorn
gunicorn --bind 0.0.0.0:5000 'wsgi:app'
```
You can pass application arguments directly to Gunicorn via:
```bash
gunicorn --bind 0.0.0.0:5000 'wsgi:app(api_keys=True)'
```
## With Kubernetes

See the Medium article by JM Robles and the improved k8s.yaml by @rasos.
### Helm Chart

Based on @rasos' work, you can now install LibreTranslate on Kubernetes using Helm. A Helm chart is available in the helm-chart repository, where you can find more details.

You can quickly install LibreTranslate on Kubernetes with the following commands:

```bash
helm repo add libretranslate https://libretranslate.github.io/helm-chart/
helm repo update
helm search repo libretranslate
helm install libretranslate libretranslate/libretranslate --namespace libretranslate --create-namespace
```
## Arguments

Argument | Description | Default |
---|---|---|
--host | Set host to bind the server to | 127.0.0.1 |
--port | Set port to bind the server to | 5000 |
--char-limit | Set character limit | No limit |
--req-limit | Set maximum number of requests per minute per client (outside of limits set by api keys) | No limit |
--req-limit-storage | Storage URI to use for request limit data storage. See Flask Limiter | memory:// |
--req-time-cost | Considers a time cost (in seconds) for request limiting purposes. If a request takes 10 seconds and this value is set to 5, the request cost is 2. | No cost |
--batch-limit | Set maximum number of texts to translate in a batch request | No limit |
--ga-id | Enable Google Analytics on the API client page by providing an ID | Disabled |
--frontend-language-source | Set frontend default language - source | auto |
--frontend-language-target | Set frontend default language - target | locale |
--frontend-timeout | Set frontend translation timeout (ms) | 500 |
--api-keys-db-path | Use a specific path inside the container for the local database. Can be absolute or relative | db/api_keys.db |
--api-keys-remote | Use this remote endpoint to query for valid API keys instead of using the local database | Use local db |
--get-api-key-link | Show a link in the UI where to direct users to get an API key | No API link displayed |
--shared-storage | Shared storage URI to use for multi-process data sharing (e.g. when using gunicorn) | memory:// |
--secondary | Mark this instance as a secondary instance to avoid conflicts with the primary node in multi-node setups | Primary |
--load-only | Set available languages | All |
--threads | Set number of threads | 4 |
--metrics-auth-token | Protect the /metrics endpoint by allowing only clients that have a valid Authorization Bearer token | No auth |
--url-prefix | Add prefix to URL: example.com:5000/url-prefix/ | / |
--debug | Enable debug environment | Disabled |
--ssl | Whether to enable SSL | Disabled |
--api-keys | Enable API keys database for per-client rate limits when --req-limit is reached | Disabled |
--require-api-key-origin | Require use of an API key for programmatic access to the API, unless the request origin matches this domain | Disabled |
--require-api-key-secret | Require use of an API key for programmatic access to the API, unless the client also sends a secret match | Disabled |
--require-api-key-fingerprint | Require use of an API key for programmatic access to the API, unless the client also matches a fingerprint | Disabled |
--under-attack | Enable under attack mode. When enabled, requests must be made with an API key | Disabled |
--suggestions | Allow user suggestions | Disabled |
--disable-files-translation | Disable file translation | File translation enabled |
--disable-web-ui | Disable the web UI | Web UI enabled |
--update-models | Update language models at startup | Disabled |
--metrics | Enable the /metrics endpoint for exporting Prometheus usage metrics | Disabled |
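To illustrate the arithmetic behind `--req-time-cost`: a request's cost against the per-minute limit is its duration divided by the configured time cost, so a 10-second request with `--req-time-cost 5` counts as 2 requests. The function below is our own illustrative sketch of that rule, not LibreTranslate's actual implementation:

```python
import math

def request_cost(duration_seconds, req_time_cost):
    """Illustrative sketch: cost counted against --req-limit
    when --req-time-cost is set."""
    return max(1, math.ceil(duration_seconds / req_time_cost))

# A request that takes 10 seconds with --req-time-cost 5 costs 2 requests:
print(request_cost(10, 5))  # 2
```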
Each argument has an equivalent environment variable that can be used instead. Environment variables override the default values but have lower priority than command-line arguments, and are particularly useful when used with Docker. The environment variable name is the upper snake case of the equivalent command argument's name with an `LT_` prefix, e.g. `--char-limit` → `LT_CHAR_LIMIT`.
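The naming convention can be expressed in a few lines (a sketch of the rule described above, not code from LibreTranslate):

```python
def arg_to_env_var(arg):
    """Map a CLI argument name to its LT_-prefixed environment variable."""
    return "LT_" + arg.lstrip("-").replace("-", "_").upper()

print(arg_to_env_var("--char-limit"))  # LT_CHAR_LIMIT
```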
## Update

### Software

If you installed with pip:

```bash
pip install -U libretranslate
```

If you're using Docker:

```bash
docker pull libretranslate/libretranslate
```
### Language Models

Start the program with the `--update-models` argument, for example `libretranslate --update-models` or `./run.sh --update-models`. Setting `--update-models` will update models regardless of whether updates are available.

Alternatively, you can run the `scripts/install_models.py` script.
## Prometheus Metrics

LibreTranslate has Prometheus exporter capabilities when you pass the `--metrics` argument at startup (disabled by default). When metrics are enabled, a `/metrics` endpoint is mounted on the instance:

```
# HELP libretranslate_http_requests_in_flight Multiprocess metric
# TYPE libretranslate_http_requests_in_flight gauge
libretranslate_http_requests_in_flight{api_key="",endpoint="/translate",request_ip="127.0.0.1"} 0.0
# HELP libretranslate_http_request_duration_seconds Multiprocess metric
# TYPE libretranslate_http_request_duration_seconds summary
libretranslate_http_request_duration_seconds_count{api_key="",endpoint="/translate",request_ip="127.0.0.1",status="200"} 0.0
libretranslate_http_request_duration_seconds_sum{api_key="",endpoint="/translate",request_ip="127.0.0.1",status="200"} 0.0
```
You can then configure `prometheus.yml` to read the metrics:

```yaml
scrape_configs:
  - job_name: "libretranslate"
    # Needed only if you use --metrics-auth-token
    # authorization:
    #   credentials: "mytoken"
    static_configs:
      - targets: ["localhost:5000"]
```
To secure the `/metrics` endpoint, you can also use `--metrics-auth-token mytoken`.
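When the endpoint is protected this way, clients must send the token as an Authorization Bearer header. A minimal sketch with the Python standard library, assuming a local instance and a token of `mytoken` (the helper name is our own):

```python
import urllib.request

def build_metrics_request(token, url="http://localhost:5000/metrics"):
    """Build a GET request for /metrics carrying a Bearer token."""
    return urllib.request.Request(
        url, headers={"Authorization": "Bearer " + token}
    )

req = build_metrics_request("mytoken")
# urllib.request.urlopen(req) would fetch the metrics from a running instance.
```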
If you use Gunicorn, make sure to create a directory for storing multiprocess metrics data and set `PROMETHEUS_MULTIPROC_DIR`:

```bash
mkdir -p /tmp/prometheus_data
rm /tmp/prometheus_data/*
export PROMETHEUS_MULTIPROC_DIR=/tmp/prometheus_data
gunicorn -c scripts/gunicorn_conf.py --bind 0.0.0.0:5000 'wsgi:app(metrics=True)'
```