Installation Guide

This document describes the installation process of the Camunda Optimize distribution, as well as various configuration possibilities available after initial installation.

Before proceeding with the installation, please read the article about supported environments.

Prerequisites

If you intend to run Optimize on your local machine, make sure you have a supported JRE (Java Runtime Environment) installed; refer to the Java Runtime section to see which runtimes are supported.
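
To quickly check which Java runtime is available on your machine, you can print its version from the command line:

java -version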

Demo Distribution with Elasticsearch

The Optimize Demo distribution comes with an Elasticsearch instance. The supplied Elasticsearch server is not customized or tuned by Camunda in any manner. It is intended to make the process of trying out Optimize as easy as possible. The only requirement in addition to the demo distribution itself is a running engine (ideally on localhost).

To install the demo distribution containing Elasticsearch, please download the archive with the latest version from the download page and extract it to the desired folder. After that, start Optimize by running the script optimize-demo.sh on Linux and Mac:

./optimize-demo.sh

or optimize-demo.bat on Windows:

.\optimize-demo.bat

The script ensures that a local version of Elasticsearch is started and waits until it has become available. It then starts Optimize, ensures it is running, and automatically opens a browser tab so you can conveniently try out Optimize.

In case you need to start an Elasticsearch instance only, without starting Optimize (e.g. to perform a reimport), you can use the elasticsearch-startup.sh script:

./elasticsearch-startup.sh

or elasticsearch-startup.bat on Windows:

.\elasticsearch-startup.bat
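
If you only want to verify that the local Elasticsearch instance is up and reachable, a plain HTTP request against its default port should return the cluster information (assuming Elasticsearch is listening on the default port 9200):

curl http://localhost:9200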

Production Distribution without Elasticsearch

This distribution is intended to be used in production. To install it, first download the production archive, which contains all the files required to start up Camunda Optimize without Elasticsearch. After that, configure the Elasticsearch connection to connect to your pre-installed Elasticsearch instance and configure the Camunda Platform connection to connect Optimize to your running engine. You can then start your Optimize instance by running the script optimize-startup.sh on Linux and Mac:

./optimize-startup.sh

or optimize-startup.bat on Windows:

.\optimize-startup.bat

Production Docker image without Elasticsearch

The Optimize Docker images can be used in production. They are hosted on our dedicated Docker registry and are available only to enterprise customers who have purchased Optimize. You can browse the available images in our Docker registry after logging in with your credentials.

Make sure to log in correctly:

$ docker login registry.camunda.cloud
Username: your_username
Password: ******
Login Succeeded
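
Once logged in, you can pull the Optimize image for the version you want to run, for example:

docker pull registry.camunda.cloud/optimize-ee/optimize:3.14.0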

After that, configure the Elasticsearch connection to connect to your pre-installed Elasticsearch instance and configure the Camunda Platform connection to connect Optimize to your running engine. For very simple use cases with only one Camunda Engine and one Elasticsearch node, you can use environment variables instead of mounting configuration files into the Docker container:

Getting started with the Optimize Docker image

Full local setup

To start the Optimize Docker image and connect it to a Camunda Platform and Elasticsearch instance already running locally, you could run the following command:

docker run -d --name optimize --network host \
           registry.camunda.cloud/optimize-ee/optimize:3.14.0

Connect to remote Camunda Platform and Elasticsearch

If, however, your Camunda Platform and Elasticsearch instances reside on a different host, you can provide their locations via the corresponding environment variables:

docker run -d --name optimize -p 8090:8090 -p 8091:8091 \
           -e OPTIMIZE_CAMUNDABPM_REST_URL=http://yourCamBpm.org/engine-rest \
           -e OPTIMIZE_ELASTICSEARCH_HOST=yourElasticHost \
           -e OPTIMIZE_ELASTICSEARCH_HTTP_PORT=9200 \
           registry.camunda.cloud/optimize-ee/optimize:3.14.0

Available Environment Variables

Only a limited set of configuration keys is exposed via environment variables. These mainly serve the purpose of testing and exploring Optimize; for production configurations, we recommend following the setup described in the section Configuration using an environment-config.yaml file.

The most important environment variables you may have to configure are related to the connection to the Camunda Platform REST API as well as Elasticsearch:

  • OPTIMIZE_CAMUNDABPM_REST_URL base URL that will be used for connections to the Camunda Engine REST API (default: http://localhost:8080/engine-rest)
  • OPTIMIZE_CAMUNDABPM_WEBAPPS_URL endpoint where to find the Camunda webapps for the given engine (default: http://localhost:8080/camunda)
  • OPTIMIZE_ELASTICSEARCH_HOST the address/hostname under which the Elasticsearch node is available (default: localhost)
  • OPTIMIZE_ELASTICSEARCH_HTTP_PORT port number used by Elasticsearch to accept HTTP connections (default: 9200)

A complete sample can be found here: Connect to remote Camunda Platform and Elasticsearch.

Furthermore, there are environment variables specific to the Event Based Process feature that you may make use of (see the example after this list):

  • OPTIMIZE_CAMUNDA_BPM_EVENT_IMPORT_ENABLED determines whether this instance of Optimize should convert historical data to event data usable for Event Based Processes (default: false)
  • OPTIMIZE_EVENT_BASED_PROCESSES_USER_IDS an array of user ids that are authorized to administer Event Based Processes (default: [])
  • OPTIMIZE_EVENT_BASED_PROCESSES_IMPORT_ENABLED determines whether this Optimize instance performs Event Based Process instance import (default: false)
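
As an illustration, these flags can be passed to the Docker container like any other environment variable. The user id shown for OPTIMIZE_EVENT_BASED_PROCESSES_USER_IDS is only an example, and the exact array syntax may differ in your setup:

docker run -d --name optimize -p 8090:8090 -p 8091:8091 \
           -e OPTIMIZE_CAMUNDA_BPM_EVENT_IMPORT_ENABLED=true \
           -e OPTIMIZE_EVENT_BASED_PROCESSES_IMPORT_ENABLED=true \
           -e OPTIMIZE_EVENT_BASED_PROCESSES_USER_IDS="['demo']" \
           registry.camunda.cloud/optimize-ee/optimize:3.14.0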

Additionally, there are also runtime related environment variables such as:

  • OPTIMIZE_JAVA_OPTS allows you to configure/overwrite Java Virtual Machine (JVM) parameters; defaults to -Xms1024m -Xmx1024m -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=256m

In case you want to make use of the Optimize Public API, you can also set ONE of the following variables:

  • SPRING_SECURITY_OAUTH2_RESOURCESERVER_JWT_JWK_SET_URI Complete URI to get public keys for JWT validation, e.g. https://weblogin.cloud.company.com/.well-known/jwks.json. For more details see Public API Authorization
  • OPTIMIZE_API_ACCESS_TOKEN Secret static shared token to be provided to the secured REST API on access in the authorization header. Will be ignored if SPRING_SECURITY_OAUTH2_RESOURCESERVER_JWT_JWK_SET_URI is also set. For more details see Public API Authorization
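
For example, to protect the Public API with a static access token when running the Docker image, you could pass the variable on startup (the token value below is only a placeholder):

docker run -d --name optimize -p 8090:8090 -p 8091:8091 \
           -e OPTIMIZE_API_ACCESS_TOKEN=mySecretToken \
           registry.camunda.cloud/optimize-ee/optimize:3.14.0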

You can also adjust logging levels using environment variables as described in the logging configuration.

License key file

If you want the Optimize Docker container to automatically recognize your license key file, you can use standard Docker means to make the license key file available inside the container. The following command does this; replace ABSOLUTE_PATH_ON_HOST_TO_LICENSE_FILE with the absolute path to the license key file on your host:

docker run -d --name optimize -p 8090:8090 -p 8091:8091 \
           -v ABSOLUTE_PATH_ON_HOST_TO_LICENSE_FILE:/optimize/config/OptimizeLicense.txt:ro \
           registry.camunda.cloud/optimize-ee/optimize:3.14.0

Configuration using an environment-config.yaml file

In a production environment, the limited set of environment variables is usually not sufficient, so you will want to prepare a custom environment-config.yaml file. Please refer to the Configuration section of the documentation for the available configuration parameters.
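
As a rough illustration, such a file could override the Elasticsearch and engine connection settings as shown below. This excerpt is only a sketch using the placeholders from the examples above; please check the Configuration section for the exact keys and defaults applicable to your version:

es:
  connection:
    nodes:
    - host: yourElasticHost
      httpPort: 9200
engines:
  'camunda-bpm':
    name: default
    rest: 'http://yourCamBpm.org/engine-rest'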

Similar to the license key file, you then need to mount this configuration file into the Optimize Docker container to apply it. The following command does this; replace ABSOLUTE_PATH_ON_HOST_TO_CONFIGURATION_FILE with the absolute path to the environment-config.yaml file on your host:

docker run -d --name optimize -p 8090:8090 -p 8091:8091 \
           -v ABSOLUTE_PATH_ON_HOST_TO_CONFIGURATION_FILE:/optimize/config/environment-config.yaml:ro \
           registry.camunda.cloud/optimize-ee/optimize:3.14.0

In managed container environments like Kubernetes, you can set this up using ConfigMaps.
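
For example, a ConfigMap holding the environment-config.yaml could be created as follows (the ConfigMap name optimize-config is just an example) and then mounted into the Optimize container at /optimize/config/:

kubectl create configmap optimize-config --from-file=environment-config.yaml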

Usage

You can start using Optimize right away by opening the following URL in your browser: http://localhost:8090

Then you can use the users from the Camunda Platform to log in to Optimize. For details on how to configure user access, please consult the user access management section.

Before you can fully utilize all features of Optimize, you need to wait until all data has been imported. A green circle in the footer indicates when the import is finished.

Health - Readiness

To check whether Optimize is ready to use, you can make use of the health-readiness endpoint, exposed as part of Optimize’s REST API.
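
For example, a startup script or container orchestrator could poll this endpoint until it responds successfully. The snippet below assumes the readiness endpoint is served under /api/readyz on the default port 8090; please verify the exact path in the REST API documentation for your version:

curl -f http://localhost:8090/api/readyz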

Configuration

All distributions of Optimize come with a predefined set of configuration options that can be overwritten by the user, based on current environment requirements. To do that, have a look into the folder named environment. There are two files, one called environment-config.yaml with values that override the default Optimize properties and another called environment-logback.xml, which sets the logging configuration.

You can see all supported values and read about logging configuration here.

Optimize Web Container Configuration

Please refer to the configuration section on container settings for more information on how to adjust the Optimize Web Container configuration.

Elasticsearch configuration

You can customize the Elasticsearch connection settings as well as the index settings.

Camunda Platform configuration

To perform an import and provide the full set of features, Optimize requires a connection to the REST API of the Camunda engine. For details on how to configure the connection to the Camunda Platform, please refer to the Camunda Platform configuration section.

Import of the data set

By default, Optimize comes without any data available. To start using all the features of the system, you have to perform a data import from the Camunda Platform. This process is triggered automatically when starting Optimize.

If you are interested in the details of the import, please refer to the dedicated import overview section.

Hardware Resources

We recommend carefully choosing the hardware resources allocated to the server that runs Optimize.

Please be aware that Optimize uses data structures that are different from the data stored by the Camunda Platform Engine. The final amount of hard drive space required by Optimize will depend on your replication settings, but as a rule of thumb you can expect Optimize to use about 30% of the space that your relational database is using.

How your Data Influences Elasticsearch Requirements

The Elasticsearch requirements are heavily influenced by the makeup of your data set. This is mainly because Optimize creates one instance index per definition, so the number of indices in your Elasticsearch instance will grow with the number of definitions you have deployed. This is why we recommend a minimum of 1 GB of Elasticsearch heap space to provide for all non-instance indices, plus additional space in relation to how many definitions and instances your data set has. By default, Optimize uses one shard per instance index, and performance tests have shown that a shard size of 10 GB is enough for approximately 1 million instances. Elasticsearch recommends aiming for 20 shards or fewer per GB of heap memory, so you will need 1 GB of additional heap memory per 20 definitions. Elasticsearch also recommends a shard size between 10 and 50 GB, so if you expect your definitions to have more than 5 million instances, we recommend you increase the number of shards per instance index accordingly in Optimize’s index configurations.

Please note that these guidelines are based on test data that may deviate from yours. If your instance data includes a large number of variables, for example, this may result in a larger shard size. In this case we recommend that you test the import with realistic data and adjust the number of shards accordingly.

Example Scenarios

Heads Up!

Exact hardware requirements highly depend on a number of factors such as: size of the data, network speed, current load on the engine and its underlying database. Therefore, we cannot guarantee that the following requirements will satisfy every use case.

20 Definitions with less than 50k Instances per Definition

We recommend using one shard per instance index, so 20 shards overall for instance indices alone. Aiming for 20 shards per GB of Elasticsearch JVM heap space results in 1 GB of heap memory in addition to the base requirement of 1 GB. Based on performance tests, a shard size of 10 GB should be enough for up to 1 million instances per definition, so you can expect the instance index shards to be no larger than 10 GB.

40 Definitions with up to 10 Million Instances per Definition

We recommend using two shards per instance index, so 80 shards for instance indices alone. Aiming for 20 shards per GB of Elasticsearch JVM heap space results in 4 GB of heap memory in addition to the base requirement of 1 GB. Based on performance tests, a shard size of 10 GB is enough for approximately 1 million instances per definition, so in this scenario you can expect a shard size of 50 GB for each instance index shard.
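
As an illustration, the number of shards per instance index could then be raised in the environment-config.yaml. The key shown below is a sketch based on the index settings described in the Configuration section; please verify it there before applying it:

es:
  settings:
    index:
      number_of_shards: 2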

Recommended Additional Configurations

Adjust engine heap size

Sending huge process definition diagrams via the REST API might cause the engine to crash if the engine heap size is too small. It is therefore recommended to increase the heap size of the engine to at least 2 GB, e.g. by adding the following Java command line property when starting the engine:

-Xmx2048m
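
How this flag is passed depends on how the engine is run; for a Tomcat-based distribution, for example, it could be exported via JAVA_OPTS before starting the server (shown here only as a sketch for that kind of setup):

export JAVA_OPTS="-Xmx2048m"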

Also, it is recommended to decrease the deployment cache size to 500, e.g. by setting:

<property name="cacheCapacity" value="500" />

Adjust Optimize heap size

By default, Optimize is configured with 1 GB of JVM heap memory. Depending on your setup and actual data, you might still encounter situations where you need more than this default for seamless operation of Optimize. To increase the maximum heap size, you can set the environment variable OPTIMIZE_JAVA_OPTS and provide the desired JVM system properties, e.g. for 2 GB of heap:

OPTIMIZE_JAVA_OPTS=-Xmx2048m
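
When running the Docker image, the same setting can be passed to the container as an environment variable, for example:

docker run -d --name optimize -p 8090:8090 -p 8091:8091 \
           -e OPTIMIZE_JAVA_OPTS="-Xmx2048m" \
           registry.camunda.cloud/optimize-ee/optimize:3.14.0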

Maximum result limits for queries

Engine queries can potentially consume a lot of memory. To mitigate this risk, you can limit the number of results a query can return. If you do this, it is highly recommended that you set the value of the queryMaxResultsLimit setting to 10000 so that the Optimize import works without any problems. This value should still be low enough that you don’t run into issues with the previously mentioned heap configurations.
