
Installation

Make sure you have Python 3.8, 3.9 or 3.10 installed, then from a terminal run:

pip install libretranslate
libretranslate [args]

⚠️ Newer versions of Python such as 3.12 or 3.13 are currently not supported.

Then open a web browser to http://localhost:5000
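To check the server from the command line instead, here is a minimal sketch using curl against the /translate endpoint; the JSON field names (q, source, target, format) follow the public LibreTranslate API:

# Sketch: translate a short string on a locally running instance
curl -s -X POST http://localhost:5000/translate \
  -H "Content-Type: application/json" \
  -d '{"q": "Hello world", "source": "en", "target": "es", "format": "text"}'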

By default LibreTranslate will install support for all available languages. To only load certain languages and reduce startup time, you can use the --load-only argument:

libretranslate --load-only en,es,fr

Check the arguments list for more options.

You can also run the application with Docker. First clone the repository:

git clone https://github.com/LibreTranslate/LibreTranslate
cd LibreTranslate

Then, on Linux/macOS, run ./run.sh [args]; on Windows, run run.bat [args].
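
If you prefer to call Docker directly instead of the helper scripts, a rough equivalent is the sketch below, which assumes the published libretranslate/libretranslate image and the default port:

# Sketch: run the published image and expose the default port
docker run -ti --rm -p 5000:5000 libretranslate/libretranslate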

You can use hardware acceleration to speed up translations on a GPU machine with CUDA 12.4.1 and nvidia-docker installed.

Run this version with:

docker compose -f docker-compose.cuda.yml up -d --build

You can also run LibreTranslate with a WSGI server such as Gunicorn:

pip install gunicorn
gunicorn --bind 0.0.0.0:5000 'wsgi:app'

You can pass application arguments directly to Gunicorn via:

gunicorn --bind 0.0.0.0:5000 'wsgi:app(api_keys=True)'

For Kubernetes deployments, see the Medium article by JM Robles and the improved k8s.yaml by @rasos.

Based on @rasos' work, you can also install LibreTranslate on Kubernetes using Helm. A Helm chart is available in the helm-chart repository, where you can find more details.

You can quickly install LibreTranslate on Kubernetes with the following commands:

helm repo add libretranslate https://libretranslate.github.io/helm-chart/
helm repo update
helm search repo libretranslate
helm install libretranslate libretranslate/libretranslate --namespace libretranslate --create-namespace
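
To reach the service locally once the chart is installed, you can port-forward it with kubectl. In this sketch the namespace matches the command above, while the Service name libretranslate is an assumption and may differ depending on your release name and chart values:

# Sketch: forward local port 5000 to the (assumed) libretranslate Service
kubectl port-forward -n libretranslate svc/libretranslate 5000:5000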

LibreTranslate supports the following command-line arguments:

| Argument | Description | Default |
| --- | --- | --- |
| --host | Set host to bind the server to | 127.0.0.1 |
| --port | Set port to bind the server to | 5000 |
| --char-limit | Set character limit | No limit |
| --req-limit | Set maximum number of requests per minute per client (outside of limits set by API keys) | No limit |
| --req-limit-storage | Storage URI to use for request limit data storage. See Flask Limiter | memory:// |
| --req-time-cost | Considers a time cost (in seconds) for request limiting purposes. If a request takes 10 seconds and this value is set to 5, the request cost is 2. | No cost |
| --batch-limit | Set maximum number of texts to translate in a batch request | No limit |
| --ga-id | Enable Google Analytics on the API client page by providing an ID | Disabled |
| --frontend-language-source | Set frontend default language - source | auto |
| --frontend-language-target | Set frontend default language - target | locale |
| --frontend-timeout | Set frontend translation timeout | 500 |
| --api-keys-db-path | Use a specific path inside the container for the local database. Can be absolute or relative | db/api_keys.db |
| --api-keys-remote | Use this remote endpoint to query for valid API keys instead of using the local database | Use local db |
| --get-api-key-link | Show a link in the UI where to direct users to get an API key | No API link displayed |
| --shared-storage | Shared storage URI to use for multi-process data sharing (e.g. when using Gunicorn) | memory:// |
| --secondary | Mark this instance as a secondary instance to avoid conflicts with the primary node in multi-node setups | Primary |
| --load-only | Set available languages | All |
| --threads | Set number of threads | 4 |
| --metrics-auth-token | Protect the /metrics endpoint by allowing only clients that have a valid Authorization Bearer token | No auth |
| --url-prefix | Add prefix to URL: example.com:5000/url-prefix | / |
| --debug | Enable debug environment | Disabled |
| --ssl | Whether to enable SSL | Disabled |
| --api-keys | Enable API keys database for per-client rate limits when --req-limit is reached | Disabled |
| --require-api-key-origin | Require use of an API key for programmatic access to the API, unless the request origin matches this domain | Disabled |
| --require-api-key-secret | Require use of an API key for programmatic access to the API, unless the client also sends a matching secret | Disabled |
| --require-api-key-fingerprint | Require use of an API key for programmatic access to the API, unless the client also matches a fingerprint | Disabled |
| --under-attack | Enable under attack mode. When enabled, requests must be made with an API key | Disabled |
| --suggestions | Allow user suggestions | Disabled |
| --disable-files-translation | Disable files translation | Enabled |
| --disable-web-ui | Disable web UI | Enabled |
| --update-models | Update language models at startup | Disabled |
| --metrics | Enable the /metrics endpoint for exporting Prometheus usage metrics | Disabled |

Each argument has an equivalent environment variable that can be used instead. The environment variables override the default values but have lower priority than the command-line arguments, and they are particularly useful when used with Docker. The environment variable names are the upper_snake_case form of the equivalent command argument's name with an LT_ prefix, e.g. --char-limit -> LT_CHAR_LIMIT.
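
For example, --char-limit and --load-only from the table above map to LT_CHAR_LIMIT and LT_LOAD_ONLY. A sketch of passing them to the libretranslate/libretranslate Docker image:

# Sketch: configure the container through environment variables
docker run -ti --rm -p 5000:5000 \
  -e LT_CHAR_LIMIT=5000 \
  -e LT_LOAD_ONLY=en,es,fr \
  libretranslate/libretranslate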

To update LibreTranslate, run the following if you installed with pip:

pip install -U libretranslate

If you’re using Docker:

docker pull libretranslate/libretranslate

To update the language models, start the program with the --update-models argument, for example libretranslate --update-models or ./run.sh --update-models. Setting --update-models will update the models regardless of whether updates are available.

Alternatively, you can run the scripts/install_models.py script.
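
If you are working from a source checkout, the script can be invoked directly; this sketch assumes you are in the repository root with the project's Python dependencies installed:

# Sketch: download/update language models via the helper script
python scripts/install_models.py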

LibreTranslate has Prometheus exporter capabilities when you pass the --metrics argument at startup (disabled by default). When metrics are enabled, a /metrics endpoint is mounted on the instance:

http://localhost:5000/metrics

# HELP libretranslate_http_requests_in_flight Multiprocess metric
# TYPE libretranslate_http_requests_in_flight gauge
libretranslate_http_requests_in_flight{api_key="",endpoint="/translate",request_ip="127.0.0.1"} 0.0
# HELP libretranslate_http_request_duration_seconds Multiprocess metric
# TYPE libretranslate_http_request_duration_seconds summary
libretranslate_http_request_duration_seconds_count{api_key="",endpoint="/translate",request_ip="127.0.0.1",status="200"} 0.0
libretranslate_http_request_duration_seconds_sum{api_key="",endpoint="/translate",request_ip="127.0.0.1",status="200"} 0.0

You can then configure prometheus.yml to read the metrics:

scrape_configs:
  - job_name: "libretranslate"
    # Needed only if you use --metrics-auth-token
    # authorization:
    #   credentials: "mytoken"
    static_configs:
      - targets: ["localhost:5000"]

To secure the /metrics endpoint you can also use --metrics-auth-token mytoken.
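
Clients then need to send the token as an Authorization Bearer header; for example (mytoken mirrors the value passed to --metrics-auth-token):

# Sketch: query the protected /metrics endpoint with the bearer token
curl -H "Authorization: Bearer mytoken" http://localhost:5000/metrics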

If you use Gunicorn, make sure to create a directory for storing multiprocess metrics data and set PROMETHEUS_MULTIPROC_DIR:

mkdir -p /tmp/prometheus_data
rm /tmp/prometheus_data/*
export PROMETHEUS_MULTIPROC_DIR=/tmp/prometheus_data
gunicorn -c scripts/gunicorn_conf.py --bind 0.0.0.0:5000 'wsgi:app(metrics=True)'