Installation Guide
This document describes the installation process of the Camunda Optimize distribution, as well as various configuration possibilities available after initial installation.
Before proceeding with the installation, please read the article about supported environments.
Prerequisites
If you intend to run Optimize on your local machine, please make sure you have a supported JRE (Java Runtime Environment) installed. Refer to the Java Runtime section for the list of supported runtimes.
Demo distribution with Elasticsearch
The Optimize Demo distribution comes with an Elasticsearch instance. The supplied Elasticsearch server is not customized or tuned by Camunda in any manner. It is intended to make the process of trying out Optimize as easy as possible. The only requirement in addition to the demo distribution itself is a running engine (ideally on localhost).
To install the demo distribution containing Elasticsearch, please download the archive with the latest version from the download page and extract it to the desired folder. After that, start Optimize by running the script optimize-demo.sh on Linux and Mac:
./optimize-demo.sh
or optimize-demo.bat on Windows:
.\optimize-demo.bat
The script ensures that a local version of Elasticsearch is started and waits until it has become available. Then, it starts Optimize, ensures it is running and automatically opens a tab in a browser to make it very convenient for you to try out Optimize.
In case you need to start only an Elasticsearch instance, without starting Optimize (e.g. to perform a reimport), you can use the elasticsearch-startup.sh script on Linux and Mac:
./elasticsearch-startup.sh
or elasticsearch-startup.bat on Windows:
.\elasticsearch-startup.bat
Production distribution without a database
This distribution is intended to be used in production. To install it, take the following steps:
- Download the production archive, which contains all the required files to start up Camunda Optimize without a database.
- Configure the database connection to connect to your pre-installed Elasticsearch/OpenSearch instance and configure the Camunda 7 connection to connect Optimize to your running engine.
- Start your Optimize instance by running the script optimize-startup.sh on Linux and Mac:
./optimize-startup.sh
or optimize-startup.bat on Windows:
.\optimize-startup.bat
Dockerized installation
The Optimize Docker images can be used in production. They are hosted on our dedicated Docker registry and are available only to enterprise customers who have purchased Optimize. You can browse the available images in our Docker registry after logging in with your credentials.
Make sure to log in correctly:
$ docker login registry.camunda.cloud
Username: your_username
Password: ******
Login Succeeded
After that, configure the database connection to connect to your pre-installed Elasticsearch instance and configure the Camunda Platform connection to connect Optimize to your running engine. For very simple use cases with only one Camunda Engine and one Elasticsearch node you can use environment variables instead of mounting configuration files into the Docker container:
Getting started with the Optimize Docker image
Heads Up!
Not all Optimize features are supported when using OpenSearch as a database. For a full list of the features that are currently supported, please refer to the Camunda 7 OpenSearch features.
Full local setup
To start the Optimize Docker image and connect to a Camunda 7 and an Elasticsearch instance already running locally, you can run the following command:
docker run -d --name optimize --network host \
registry.camunda.cloud/optimize-ee/optimize:3.14.0
If you wish to connect to an OpenSearch database instead, please make sure to additionally set the environment variable CAMUNDA_OPTIMIZE_DATABASE to opensearch.
docker run -d --name optimize --network host \
-e CAMUNDA_OPTIMIZE_DATABASE=opensearch \
registry.camunda.cloud/optimize-ee/optimize:3.14.0
Connect to remote Camunda 7 and database
If, however, your Camunda 7 and Elasticsearch instances reside on a different host, you may provide their locations via the corresponding environment variables:
docker run -d --name optimize -p 8090:8090 -p 8091:8091 \
-e OPTIMIZE_CAMUNDABPM_REST_URL=http://yourCamBpm.org/engine-rest \
-e OPTIMIZE_ELASTICSEARCH_HOST=yourElasticHost \
-e OPTIMIZE_ELASTICSEARCH_HTTP_PORT=9200 \
registry.camunda.cloud/optimize-ee/optimize:3.14.0
Alternatively, for OpenSearch:
docker run -d --name optimize -p 8090:8090 -p 8091:8091 \
-e OPTIMIZE_CAMUNDABPM_REST_URL=http://yourCamBpm.org/engine-rest \
-e CAMUNDA_OPTIMIZE_DATABASE=opensearch \
-e CAMUNDA_OPTIMIZE_OPENSEARCH_HOST=yourOpenSearchHost \
-e CAMUNDA_OPTIMIZE_OPENSEARCH_HTTP_PORT=9205 \
registry.camunda.cloud/optimize-ee/optimize:3.14.0
Available Environment Variables
There is only a limited set of configuration keys exposed via environment variables. These mainly serve the purpose of testing and exploring Optimize. For production configurations, we recommend following the setup described in the documentation on configuration using an environment-config.yaml file.
The most important environment variables you may have to configure are related to the connection to the Camunda 7 REST API, as well as Elasticsearch/OpenSearch:
- OPTIMIZE_CAMUNDABPM_REST_URL: The base URL that will be used for connections to the Camunda Engine REST API (default: http://localhost:8080/engine-rest)
- OPTIMIZE_CAMUNDABPM_WEBAPPS_URL: The endpoint where to find the Camunda web apps for the given engine (default: http://localhost:8080/camunda)
For an Elasticsearch installation:
- OPTIMIZE_ELASTICSEARCH_HOST: The address/hostname under which the Elasticsearch node is available (default: localhost)
- OPTIMIZE_ELASTICSEARCH_HTTP_PORT: The port number used by Elasticsearch to accept HTTP connections (default: 9200)
- CAMUNDA_OPTIMIZE_ELASTICSEARCH_SECURITY_USERNAME: The username for authentication in environments where a secured Elasticsearch connection is configured.
- CAMUNDA_OPTIMIZE_ELASTICSEARCH_SECURITY_PASSWORD: The password for authentication in environments where a secured Elasticsearch connection is configured.
For an OpenSearch installation:
- CAMUNDA_OPTIMIZE_DATABASE: The database type to connect to, in this case opensearch (default: elasticsearch)
- CAMUNDA_OPTIMIZE_OPENSEARCH_HOST: The address/hostname under which the OpenSearch node is available (default: localhost)
- CAMUNDA_OPTIMIZE_OPENSEARCH_HTTP_PORT: The port number used by OpenSearch to accept HTTP connections (default: 9205)
- CAMUNDA_OPTIMIZE_OPENSEARCH_SECURITY_USERNAME: The username for authentication in environments where a secured OpenSearch connection is configured.
- CAMUNDA_OPTIMIZE_OPENSEARCH_SECURITY_PASSWORD: The password for authentication in environments where a secured OpenSearch connection is configured.
A complete sample can be found within Connect to remote Camunda 7 and database.
Furthermore, there are also environment variables specific to the event-based process feature that you may make use of:
- OPTIMIZE_CAMUNDA_BPM_EVENT_IMPORT_ENABLED: Determines whether this instance of Optimize should convert historical data to event data usable for event-based processes (default: false)
- OPTIMIZE_EVENT_BASED_PROCESSES_USER_IDS: An array of user IDs that are authorized to administer event-based processes (default: [])
- OPTIMIZE_EVENT_BASED_PROCESSES_IMPORT_ENABLED: Determines whether this Optimize instance performs event-based process instance import (default: false)
Additionally, there are also runtime-related environment variables such as:
- OPTIMIZE_JAVA_OPTS: Allows you to configure/overwrite Java Virtual Machine (JVM) parameters; defaults to -Xms1024m -Xmx1024m -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=256m.
In case you want to make use of the Optimize Public API, you can also set one of the following variables:
- SPRING_SECURITY_OAUTH2_RESOURCESERVER_JWT_JWK_SET_URI: Complete URI to get public keys for JWT validation, e.g. https://weblogin.cloud.company.com/.well-known/jwks.json. For more details, see public API authentication.
- OPTIMIZE_API_ACCESS_TOKEN: Secret static shared token to be provided to the secured REST API in the authorization header. Will be ignored if SPRING_SECURITY_OAUTH2_RESOURCESERVER_JWT_JWK_SET_URI is also set. For more details, see public API authentication.
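As an illustrative sketch of the static-token option (the endpoint path in the comment below is a placeholder, not a documented route; see public API authentication for the real endpoints), the token is sent as a Bearer token in the Authorization header:

```shell
# Sketch: calling the secured Optimize REST API with a static access token.
# "mySecretToken" must match the OPTIMIZE_API_ACCESS_TOKEN configured on the instance.
OPTIMIZE_API_ACCESS_TOKEN="mySecretToken"
AUTH_HEADER="Authorization: Bearer ${OPTIMIZE_API_ACCESS_TOKEN}"
# Example invocation against a running Optimize instance (path illustrative):
#   curl -H "$AUTH_HEADER" http://localhost:8090/api/public/...
echo "$AUTH_HEADER"
```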
You can also adjust logging levels using environment variables as described in the logging configuration.
License key file
If you want the Optimize Docker container to automatically recognize your license key file, you can use standard Docker means to make the file available inside the container. Replace ABSOLUTE_PATH_ON_HOST_TO_LICENSE_FILE with the absolute path to the license key file on your host in the following command:
docker run -d --name optimize -p 8090:8090 -p 8091:8091 \
-v ABSOLUTE_PATH_ON_HOST_TO_LICENSE_FILE:/optimize/config/OptimizeLicense.txt:ro \
registry.camunda.cloud/optimize-ee/optimize:3.14.0
Configuration using a yaml file
In a production environment, the limited set of environment variables is usually not sufficient, so you will want to prepare a custom environment-config.yaml file. Refer to the configuration section of the documentation for the available configuration parameters.
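As a rough sketch, such a file can look like the following (the keys shown are assumptions based on the Elasticsearch connection and engine configuration sections; verify them against the configuration reference for your Optimize version):

```yaml
# Hypothetical environment-config.yaml sketch; host names are placeholders.
es:
  connection:
    nodes:
      - host: yourElasticHost
        httpPort: 9200
engines:
  camunda-bpm:
    name: default
    rest: http://yourCamBpm.org/engine-rest
```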
You need to mount this configuration file into the Optimize Docker container to apply it. Replace ABSOLUTE_PATH_ON_HOST_TO_CONFIG_FILE with the absolute path to the environment-config.yaml file on your host in the following command:
docker run -d --name optimize -p 8090:8090 -p 8091:8091 \
-v ABSOLUTE_PATH_ON_HOST_TO_CONFIG_FILE:/optimize/config/environment-config.yaml:ro \
registry.camunda.cloud/optimize-ee/optimize:3.14.0
In managed Docker container environments like Kubernetes, you may set this up using ConfigMaps.
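For example, a minimal sketch of such a ConfigMap might look as follows (the names and the embedded configuration keys are placeholders, not official manifests):

```yaml
# Hypothetical Kubernetes ConfigMap holding the Optimize configuration file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: optimize-config
data:
  environment-config.yaml: |
    es:
      connection:
        nodes:
          - host: yourElasticHost
            httpPort: 9200
```

The ConfigMap can then be mounted into the Optimize container at /optimize/config/environment-config.yaml via a volume and volumeMount in the pod spec.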
Usage
You can start using Optimize right away by opening the following URL in your browser: http://localhost:8090
Then, you can use the users from the Camunda 7 engine to log in to Optimize. For details on how to configure user access, consult the user access management section.
Before you can fully utilize all features of Optimize, you need to wait until all data has been imported. A green circle in the footer indicates when the import is finished.
Health - Readiness
To check whether Optimize is ready to use, you can make use of the health-readiness endpoint, exposed as part of Optimize’s REST API.
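For example, assuming the readiness endpoint is exposed under /api/readyz (an assumption; verify the exact path in the REST API documentation for your Optimize version), readiness can be checked with a single HTTP request:

```shell
# Hypothetical path: returns HTTP 200 once Optimize is ready to serve requests;
# -f makes curl exit with a non-zero code otherwise.
curl -f http://localhost:8090/api/readyz
```

In container environments, the same request can serve as an HTTP readiness probe for the Optimize container.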
Configuration
All distributions of Optimize come with a predefined set of configuration options that can be overwritten by the user, based on current environment requirements. To do that, have a look into the folder named environment. It contains two files: one called environment-config.yaml, with values that override the default Optimize properties, and another called environment-logback.xml, which sets the logging configuration.
You can see all supported values and read about logging configuration here.
Optimize Web Container Configuration
Please refer to the configuration section on container settings for more information on how to adjust the Optimize Web Container configuration.
Elasticsearch configuration
You can customize the Elasticsearch connection settings as well as the index settings.
Camunda Platform configuration
To perform an import and provide the full set of features, Optimize requires a connection to the REST API of the Camunda engine. For details on how to configure the connection to the Camunda Platform, please refer to the Camunda Platform configuration section.
Import of the data set
By default, Optimize comes without any data available. To start using all the features of the system, you have to perform a data import from the Camunda Platform. This process is triggered automatically when starting Optimize.
If you are interested in the details of the import, please refer to the dedicated import overview section.
Hardware Resources
We recommend carefully choosing the hardware resources that are allocated to the server running Optimize.
Please be aware that Optimize uses data structures that differ from the data stored by the Camunda Platform Engine. The final amount of hard drive space Optimize requires will depend on your replication settings, but as a rule of thumb, you can expect Optimize to use about 30% of the space that your relational database is using.
How your Data Influences Elasticsearch Requirements
The Elasticsearch requirements are heavily influenced by the makeup of your data set. This is mainly because Optimize creates one instance index per definition, so the number of indices in your Elasticsearch instance grows with the number of definitions you have deployed. This is why we recommend a minimum of 1 GB of Elasticsearch heap space to provide for all non-instance indices, plus additional space in relation to how many definitions and instances your data set has.
By default, Optimize uses one shard per instance index, and performance tests have shown that a shard size of 10 GB is enough for approximately 1 million instances. Elasticsearch recommends aiming for 20 shards or fewer per GB of heap memory, so you will need 1 GB of additional heap memory per 20 definitions. Elasticsearch also recommends a shard size between 10 and 50 GB, so if you expect your definitions to have more than 5 million instances, we recommend increasing the number of shards per instance index accordingly in Optimize's index configurations.
Please note that these guidelines are based on test data that may deviate from yours. If your instance data, for example, includes a large number of variables, this may result in a larger shard size. In this case, we recommend that you test the import with realistic data and adjust the number of shards accordingly.
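The rules of thumb above can be sketched as simple arithmetic (the numbers mirror the second example scenario below; the shard-per-index count and heap ratios are the guideline values, not hard limits):

```shell
# Sizing sketch based on the guidelines above:
#  - shards for instance indices = definitions x shards per instance index
#  - ES heap = 1 GB base + 1 GB per 20 shards (rounded up)
definitions=40
shards_per_index=2
total_shards=$((definitions * shards_per_index))
heap_gb=$(( (total_shards + 19) / 20 + 1 ))
echo "instance-index shards: $total_shards, suggested ES heap: ${heap_gb} GB"
# prints: instance-index shards: 80, suggested ES heap: 5 GB
```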
Example Scenarios
Heads Up!
Exact hardware requirements highly depend on a number of factors, such as the size of the data, network speed, and the current load on the engine and its underlying database. Therefore, we cannot guarantee that the following requirements will satisfy every use case.
20 Definitions with less than 50k Instances per Definition
We recommend using one shard per instance index, so 20 shards overall for instance indices alone. Aiming for 20 shards per GB of Elasticsearch JVM heap space results in 1 GB of heap memory in addition to the base requirement of 1 GB. Based on performance tests, a shard size of 10 GB should be enough for up to 1 million instances per definition, so you can expect the instance index shards to be no larger than 10 GB.
- Camunda Optimize:
- 2 CPU Threads
- 512 MB RAM
- Elasticsearch:
- 2 CPU Threads
- 4 GB RAM (2 GB JVM Heap Memory, see setting JVM heap size)
- local SSD storage recommended
40 Definitions with up to 10 Million Instances per Definition
We recommend using two shards per instance index, so 80 shards for instance indices alone. Aiming for 20 shards per GB of Elasticsearch JVM heap space results in 4 GB of heap memory in addition to the base requirement of 1 GB. Based on performance tests, a shard size of 10 GB is enough for approximately 1 million instances per definition, so in this scenario you can expect a shard size of 50 GB for each instance index shard.
- Camunda Optimize:
- 2 CPU Threads
- 2 GB RAM
- Elasticsearch:
- 4 CPU Threads
- 10 GB RAM (5 GB JVM Heap Memory, see setting JVM heap size)
- local SSD storage recommended
Recommended Additional Configurations
Adjust engine heap size
Sending huge process definition diagrams via the REST API might cause the engine to crash if the engine heap size is inadequately limited. Thus, it is recommended to increase the heap size of the engine to at least 2 GB, e.g. by adding the following Java command line property when starting the engine:
-Xmx2048m
Also, it is recommended to decrease the deployment cache size to 500, e.g. by:
<property name="cacheCapacity" value="500" />
Adjust Optimize heap size
By default, Optimize is configured with 1 GB of JVM heap memory. Depending on your setup and actual data, you might still encounter situations where you need more than this default for seamless operation of Optimize. To increase the maximum heap size, you can set the environment variable OPTIMIZE_JAVA_OPTS and provide the desired JVM system properties, e.g. for 2 GB of heap:
OPTIMIZE_JAVA_OPTS=-Xmx2048m
Maximum result limits for queries
Engine queries can potentially consume a lot of memory. To mitigate this risk, you can limit the number of results a query can return. If you do this, it is highly recommended that you set the queryMaxResultsLimit setting to 10000 so that the Optimize import works without any problems. This value should still be low enough that you don't run into any problems with the previously mentioned heap configurations.
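If your engine is configured via the same XML properties mechanism as the cacheCapacity example above, this could look like the following sketch (verify the property name and placement for your engine distribution):

```xml
<property name="queryMaxResultsLimit" value="10000" />
```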