Configuration

Logging

Camunda Optimize provides logging facilities that are preconfigured to use the INFO logging level, which provides minimal output of information in the log files. This level can be adjusted in the environment-logback.xml configuration file.

Even though you could configure logging levels for all packages, it is recommended to set the logging level for the Optimize system only, using the exact package reference as follows:

<logger name="org.camunda.optimize" level="info" />

In general, all log levels provided by logback are supported. Optimize itself supports the following log levels:

  • info: shows only errors and the most important information.
  • debug: in addition to info, writes information about the scheduling process, alerting, and the import into the log file.
  • trace: like debug, but in addition writes all requests sent to the engine as well as all queries towards Elasticsearch to the log file.
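
For example, to let Optimize write debug output, adjust the level of the logger shown above in the environment-logback.xml file. This is a minimal sketch: only the level attribute changes, the rest of the shipped logback configuration stays as-is.

<logger name="org.camunda.optimize" level="debug" />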

System configuration

All distributions of Camunda Optimize come with a predefined set of configuration options that can be overwritten by the user, based on current environment requirements. To do that, have a look into the folder named environment. It contains a file called environment-config.yaml with values that override the default Optimize properties.

The configuration file contains a YAML object, each field of which holds the configuration values of one specific logical part of the Camunda Optimize system. You can see a sample configuration file with all possible configuration fields and their default values here.

The following sections describe the individual fields of the main YAML object, along with their respective YAML path, default value, and description.

Heads Up!

For changes in the configuration to take effect, you need to restart Optimize!

Java System Properties & OS Environment variable placeholders

To externalize configuration properties from the environment-config.yaml, Optimize provides variable placeholder support.

The order in which placeholders are resolved is the following:

  1. Java System Properties
  2. OS Environment variables

The placeholder format is ${VARIABLE_NAME} and allows you to refer to a value of Java System Property or OS Environment variable of your choice. The name of the variable needs to consist of lowercase or uppercase letters, digits and underscore _ and shall not begin with a digit. The corresponding regular expression is ([a-zA-Z_]+[a-zA-Z0-9_]*).

The following example illustrates the usage:

auth:
  token:
    secret: ${AUTH_TOKEN_SECRET}

Given this variable is set before Optimize is started, e.g. on Unix systems with:

export AUTH_TOKEN_SECRET=sampleTokenValue

The value will get resolved at startup to sampleTokenValue.

However, if the same variable is also provided as a Java System Property, e.g. by passing -DAUTH_TOKEN_SECRET=othertokenValue to the Optimize startup script:

./optimize-startup.sh -DAUTH_TOKEN_SECRET=othertokenValue

The value would get resolved to othertokenValue, as Java System Properties have precedence over OS Environment variables.

Note for Windows Users:

To pass Java System Properties to the provided Windows Batch script optimize-startup.bat, you have to put them into double quotes when using the cmd.exe shell:

optimize-startup.bat "-DAUTH_TOKEN_SECRET=othertokenValue"

and for Windows PowerShell in three double quotes:

./optimize-startup.bat """-DAUTH_TOKEN_SECRET=othertokenValue"""

Authentication And Security

These values control internal mechanisms of Optimize related to authentication of users against the system and lifetime of web session tokens.

YAML Path Default Value Description
auth.token.lifeMin 15 Optimize uses token-based authentication to keep track of which users are logged in. Define the lifetime of the token in minutes.
auth.token.secret null Optional secret used to sign authentication tokens; it is recommended to use a secret of at least 64 characters. If set to `null`, a random secret will be generated with each startup of Optimize.
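
As an illustration only, the following environment-config.yaml snippet extends the token lifetime to 30 minutes and supplies the signing secret via the placeholder mechanism described above; the variable name AUTH_TOKEN_SECRET and the lifetime value are example choices, not required names.

auth:
  token:
    lifeMin: 30                    # example: extend session token lifetime to 30 minutes
    secret: ${AUTH_TOKEN_SECRET}   # resolved from a System Property or OS Environment variable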

Container

Settings related to the embedded Jetty container, which serves the Optimize application.

YAML Path Default Value Description
container.host localhost A host name or IP address, to identify a specific network interface on which to listen.
container.ports.http 8090 A port number that will be used by Optimize to process HTTP connections. If set to null, ~ or left empty, http connections won't be accepted.
container.ports.https 8091 A port number that will be used by Optimize to process secure HTTPS connections.
container.keystore.location keystore.jks HTTPS requires an SSL Certificate. When you generate an SSL Certificate, you are creating a keystore file and a keystore password for use when the browser interface connects. Location of keystore file.
container.keystore.password optimize Password of keystore file.
container.status.connections.max 10 Maximum number of web socket connections accepted for status report.
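
A sketch of the corresponding environment-config.yaml block, sticking to the default host and ports and externalizing only the keystore password through a placeholder; the variable name KEYSTORE_PASSWORD is an example, not a predefined name.

container:
  host: localhost
  ports:
    http: 8090                      # set to null, ~ or leave empty to disable plain HTTP
    https: 8091
  keystore:
    location: keystore.jks
    password: ${KEYSTORE_PASSWORD}  # example of externalizing the keystore password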

Connection to Camunda BPM platform

Configuration for engines used to import data. Please note that you always have to have at least one engine configured. You can configure multiple engines to import data from. Each engine configuration should have a unique alias associated with it, represented below by ${engineAlias}.

YAML Path Default Value Description
engines.${engineAlias}.name default The name of the engine that will be used to import data.
engines.${engineAlias}.rest http://localhost:8080/engine-rest A base URL that will be used for connections to the Camunda Engine REST API.
engines.${engineAlias}.importEnabled true Determines whether this instance of Optimize should import definition & historical data from this engine.
engines.${engineAlias}.authentication.enabled false Toggles basic authentication on or off. When enabling basic authentication, please be aware that you also need to adjust the user and password values.
engines.${engineAlias}.authentication.user When basic authentication is enabled, this user is used to authenticate against the engine.

Note: when enabled, it is required that the user has

  • READ & READ_HISTORY permission on the Process and Decision Definition resources
  • READ permission on the Authorization, Group and User resources
to enable users to log in and Optimize to import the engine data.

engines.${engineAlias}.authentication.password When basic authentication is enabled, this password is used to authenticate against the engine.
engines.${engineAlias}.webapps.endpoint http://localhost:8080/camunda Defines the endpoint where to find the camunda webapps. This allows Optimize to directly link to the other Camunda Web Applications, e.g. to jump from Optimize directly to a dedicated process instance in Cockpit
engines.${engineAlias}.webapps.enabled true Enables/disables linking to other Camunda Web Applications
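
A minimal sketch of a single engine entry in environment-config.yaml; the alias myEngine as well as the user and password values are placeholders that you would replace with your own.

engines:
  # "myEngine" is an example alias (${engineAlias} above); choose any unique name
  myEngine:
    name: default
    rest: http://localhost:8080/engine-rest
    importEnabled: true
    authentication:
      enabled: true
      user: optimizeUser            # example credentials for basic authentication
      password: optimizePassword
    webapps:
      endpoint: http://localhost:8080/camunda
      enabled: true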

Engine Common Settings

Settings used by Optimize, which are common among all configured engines, such as REST API endpoint locations, timeouts, etc.

YAML Path Default Value Description
engine-commons.connection.timeout 0 Maximum time in milliseconds Optimize should wait without a connection to the engine before a timeout is triggered. A value of zero means to wait an infinite amount of time.
engine-commons.procdef.resource /process-definition The engine endpoint to the process definition.
engine-commons.procdef.xml /xml The engine endpoint to the process definition xml.
engine-commons.decision-definition.resource /decision-definition The engine endpoint to the decision definitions.
engine-commons.decision-definition.xml /xml The relative engine endpoint to the decision definition xmls.
engine-commons.read.timeout 0 Maximum time a request to the engine should last, before a timeout triggers. A value of zero means to wait an infinite amount of time.
engine-commons.user.validation.resource /identity/verify The engine endpoint for the user validation.
import.data.activity-instance.maxPageSize 10000 Determines the page size for historic activity instance fetching.
import.data.process-definition-xml.maxPageSize 2 Determines the page size for process definition xml model fetching. Should be a low value, as large models will lead to memory or timeout problems.
import.data.process-instance.maxPageSize 10000 Determines the page size for historic process instance fetching.
import.data.variable.maxPageSize 10000 Determines the page size for historic variable instance fetching.
import.data.user-task-instance.maxPageSize 10000 Determines the page size for historic user task instance fetching.
import.data.user-operation-log-entry.maxPageSize 10000 Determines the page size for historic user operation log entry fetching.
import.data.decision-definition-xml.maxPageSize 2 Determines the page size for decision definition xml model fetching. Should be a low value, as large models will lead to memory or timeout problems.
import.data.decision-instance.maxPageSize 10000 Overwrites the maximum page size for historic decision instance fetching.
import.data.dmn.enabled true Determines if the DMN/decision data, such as decision definitions and instances should be imported.
import.elasticsearchJobExecutorThreadCount 1 Number of threads being used to process the import jobs per data type that are writing data to elasticsearch.
import.elasticsearchJobExecutorQueueSize 5 Adjust the queue size of the import jobs per data type that store data to elasticsearch. A too large value might cause memory problems.
import.handler.backoff.interval 5000 Interval in milliseconds which is used for the backoff time calculation.
import.handler.backoff.max 15 Once all pages are consumed, the import scheduler component will start scheduling fetching tasks in increasing periods of time, controlled by the "backoff" counter.
import.handler.backoff.isEnabled true Tells if the backoff is enabled or not.
import.indexType import-index The name of the import index type.
import.importIndexStorageIntervalInSec 10 States how often the import index should be stored to Elasticsearch.
import.currentTimeBackoffMilliseconds 2000 This is the time interval the import backs off from the current tip of the time during the ongoing import cycle. This ensures that potentially missed concurrent writes in the engine are reread going back by the amount of this time interval.
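
The dotted YAML paths above map directly onto nested keys in environment-config.yaml. As an illustrative sketch, the following raises both engine timeouts to 30 seconds and disables the import backoff; the chosen values are examples, and keys that are not listed keep their defaults.

engine-commons:
  connection:
    timeout: 30000   # example: 30 s connection timeout instead of waiting indefinitely
  read:
    timeout: 30000   # example: 30 s read timeout
import:
  handler:
    backoff:
      isEnabled: false   # example: turn the import backoff off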

Elasticsearch

Settings related to Elasticsearch.

Connection settings

Everything related to building up the connection to Elasticsearch.

Please note that you can define a number of connection points in a cluster. Therefore, everything under es.connection.nodes is a list of nodes Optimize can connect to. If you have built an Elasticsearch cluster with several nodes, it is recommended to define several connection points so that, in case one node fails, Optimize is still able to talk to the cluster.

YAML Path Default Value Description
es.connection.timeout 10000 Maximum time without a connection to Elasticsearch that Optimize should wait before a timeout triggers.
es.connection.nodes[*].host localhost The address/hostname under which the Elasticsearch node is available.
es.connection.nodes[*].httpPort 9200 A port number used by Elasticsearch to accept HTTP connections.
es.connection.proxy.enabled false Whether an HTTP proxy should be used for requests to elasticsearch.
es.connection.proxy.host null The proxy host to use, must be set if es.connection.proxy.enabled = true.
es.connection.proxy.port null The proxy port to use, must be set if es.connection.proxy.enabled = true.
es.connection.proxy.sslEnabled false Whether this proxy is using a secured connection (https).
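
An example connection block for a two-node cluster; the second host name is a placeholder for one of your own Elasticsearch nodes.

es:
  connection:
    timeout: 10000
    nodes:
    - host: localhost               # first cluster node
      httpPort: 9200
    - host: es-node-2.example.com   # placeholder for a second node
      httpPort: 9200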

Index settings

YAML Path Default Value Description
es.settings.index.number_of_replicas 1 How often the data should be replicated in case of node failure.
es.settings.index.number_of_shards 5 How many shards should be used in the cluster.

Note: this property only applies the first time Optimize is started and the schema/mapping is deployed to Elasticsearch. If you want this property to take effect again, you need to delete all indices (and with them all data) and restart Optimize.

es.settings.index.refresh_interval 2s How long Elasticsearch waits until the documents are available for search. A positive value defines the duration in seconds. A value of -1 means that a refresh needs to be done manually.

Security settings

Define a secured connection to be able to communicate with a secured Elasticsearch instance.

YAML Path Default Value Description
es.security.username The basic authentication (x-pack) username
es.security.password The basic authentication (x-pack) password
es.security.ssl.enabled false Used to enable or disable TLS/SSL for the HTTP connection.
es.security.ssl.certificate The path to a PEM encoded file containing the certificate (or certificate chain) that will be presented to clients when they connect.
es.security.ssl.certificate_authorities [] A list of paths to PEM encoded CA certificate files that should be trusted, e.g. ['/path/to/ca.crt'].
Note: if you are using a public CA that is already trusted by the Java runtime, you do not need to set the certificate_authorities.
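
A sketch of the security block for a TLS-secured Elasticsearch instance with basic authentication; the user name, the ES_PASSWORD placeholder, and the certificate paths are illustrative values only.

es:
  security:
    username: optimize            # example x-pack user
    password: ${ES_PASSWORD}      # externalized via the placeholder mechanism
    ssl:
      enabled: true
      certificate: '/path/to/optimize.crt'
      certificate_authorities: ['/path/to/ca.crt']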

Email

Settings for the email server used to send email notifications, e.g. when an alert is triggered.

YAML Path Default Value Description
email.enabled false A switch to control email sending process.
email.username Username of your SMTP server.
email.password Corresponding password for the given user of your SMTP server.
email.address Email address that can be used to send notifications.
email.hostname The SMTP server name.
email.port 587 The SMTP server port. This port is also used as the SSL port for the secured connection.
email.securityProtocol None States how the connection to the server should be secured. Possible values are 'NONE', 'STARTTLS' or 'SSL/TLS'.
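
For example, to send alert notifications through an SMTP server secured with STARTTLS, a configuration could look as follows; host name, account, and addresses are placeholders.

email:
  enabled: true
  username: optimize-notifications   # example SMTP account
  password: ${SMTP_PASSWORD}         # externalized via the placeholder mechanism
  address: optimize@example.com      # sender address for notifications
  hostname: smtp.example.com
  port: 587
  securityProtocol: 'STARTTLS'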

History Cleanup Settings

Settings for automatic cleanup of historic process/decision instances based on their end time.

YAML Path Default Value Description
historyCleanup.enabled false Switch to activate the history cleanup. [true/false]
historyCleanup.cronTrigger '0 1 * * *' Cron expression to schedule when the cleanup should be executed; defaults to 1 o'clock at night. As the cleanup can cause considerable load on the underlying Elasticsearch database, it is recommended to schedule it outside of office hours. You can use either the default Cron (5 fields) or the Spring Cron (6 fields) expression format here.

For details on the format please refer to: Cron Expression Description, Spring Cron Expression Documentation
historyCleanup.ttl 'P2Y' Global time to live (TTL) period for process/decision instance data, measured from their end date; once the TTL has passed, the process/decision instance will get cleaned up. The format of the string is an ISO_8601 duration. The default value is 2 years.

For details on the notation refer to: https://en.wikipedia.org/wiki/ISO_8601#Durations

Note: The time component of the ISO_8601 duration is not supported. Only years (Y), months (M) and days (D) are.
historyCleanup.processDataCleanupMode 'all' Global type of the cleanup to perform for process instances, possible values:

'all' - delete everything related and including the process instance that passed the defined ttl

'variables' - only delete variables of a process instance Note: This doesn't affect the decision instance cleanup which always deletes the whole instance.
historyCleanup.perProcessDefinitionConfig A list of process definition specific configuration parameters that will overwrite the global cleanup settings for the specific process definition identified by its ${key}.
historyCleanup.perProcessDefinitionConfig.${key}.ttl Time to live to use for process instances of the process definition with the ${key}.
historyCleanup.perProcessDefinitionConfig.${key}.processDataCleanupMode Cleanup mode to use for process instances of the process definition with the ${key}.
historyCleanup.perDecisionDefinitionConfig A list of decision definition specific configuration parameters that will overwrite the global cleanup settings for the specific decision definition identified by its ${key}.
historyCleanup.perDecisionDefinitionConfig.${key}.ttl Time to live to use for decision instances of the decision definition with the ${key}.
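
A sketch of a cleanup configuration that enables the global cleanup and overrides it for one process and one decision definition; the keys invoice and invoiceClassification stand in for your own definition keys, and the nesting simply follows the YAML paths listed above.

historyCleanup:
  enabled: true
  cronTrigger: '0 1 * * *'          # run at 1 o'clock at night
  ttl: 'P2Y'                        # global default: keep 2 years of history
  processDataCleanupMode: 'all'
  perProcessDefinitionConfig:
    invoice:                        # example process definition key
      ttl: 'P6M'
      processDataCleanupMode: 'variables'
  perDecisionDefinitionConfig:
    invoiceClassification:          # example decision definition key
      ttl: 'P6M'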

Other

Settings of the plugin subsystem, serialization format, variable import, and Camunda endpoint.

YAML Path Default Value Description
plugin.variableImport.basePackages Look in the given base package list for variable import adaption plugins. If empty, the import is not influenced.
plugin.authenticationExtractor.basePackages Looks in the given base package list for authentication extractor plugins. If empty, the standard Optimize authentication mechanism is used.
plugin.engineRestFilter.basePackages Look in the given base package list for engine rest filter plugins. If empty, the REST calls are not influenced.
serialization.engineDateFormat yyyy-MM-dd'T'HH:mm:ss.SSSZ Define a custom date format that should be used (it should be the same as in the engine). Please note that engines prior to 7.8 use yyyy-MM-dd'T'HH:mm:ss as the default date format.
export.csv.limit 1000 Maximum number of records returned by CSV export
export.csv.offset 0 Position of first record which will be exported in CSV
sharing.enabled true Enable/disable the possibility to share reports and dashboards
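
For example, to register a variable import plugin and raise the CSV export limit, a snippet could look as follows; the package name is a placeholder, and the list notation for basePackages is an assumption based on the "base package list" wording above.

plugin:
  variableImport:
    basePackages: ['com.example.optimize.plugin']   # placeholder package containing the plugin
export:
  csv:
    limit: 5000   # example: allow larger CSV exports than the default 1000 records
    offset: 0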
