Configuration
Logging
Camunda Optimize provides logging facilities that are preconfigured to use the
INFO logging level, which provides minimal output of information in the log files.
This level can be adjusted using the environment-logback.xml
configuration file.
Even though you could potentially configure logging levels for all packages, it is recommended to set logging levels only for the following three Optimize parts, using the exact package references as follows:
- Optimize runtime environment:
<logger name="org.camunda.optimize" level="info" />
- Optimize upgrade:
<logger name="org.camunda.optimize.upgrade" level="info">
<appender-ref ref="UPGRADE"/>
</logger>
- Communication to Elasticsearch:
<logger name="org.elasticsearch" level="warn" />
To define the granularity of the information shown in the log you can set one of the following log levels:
- error: shows errors only.
- warn: like error, but displays warnings as well.
- info: logs everything from warn plus the most important information about state changes or actions in Optimize.
- debug: in addition to info, writes information about the scheduling process and alerting, as well as the import of the engine data.
- trace: like debug, but in addition writes all requests sent to the Camunda engine as well as all queries towards Elasticsearch to the log output.
System configuration
All distributions of Camunda Optimize come with a predefined set of configuration options that can be overwritten by the user, based on the requirements of the current environment. To do that, have a look into the folder named environment. It contains a file called environment-config.yaml with values that override the default Optimize properties.
The configuration file contains a YAML object; each of its fields holds the configuration values of one specific logical part of the Camunda Optimize system. You can see a sample configuration file with all possible configuration fields and their default values here.
In the following sections you will find the description and default value of each field inside the main YAML object, together with its respective YAML path.
Heads Up!
For changes in the configuration to take effect, you need to restart Optimize!
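As an illustration, an environment-config.yaml only needs to contain the keys you want to override; all other settings keep their defaults. The port and token lifetime values in the following sketch are purely illustrative:

container:
  ports:
    http: 8590        # overrides the default HTTP port (8090)
security:
  auth:
    token:
      lifeMin: 30     # shortens the default token lifetime of 60 minutes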
Java System Properties & OS Environment variable placeholders
To externalize configuration properties from the environment-config.yaml, Optimize provides variable placeholder support.
The order in which placeholders are resolved is the following:
- Java System Properties
- OS Environment variables
The placeholder format is ${VARIABLE_NAME} and allows you to refer to the value of a Java System Property or OS Environment variable of your choice. The VARIABLE_NAME may contain only lowercase or uppercase letters, digits and underscore (_) characters, and must not begin with a digit. The corresponding regular expression is ([a-zA-Z_]+[a-zA-Z0-9_]*).
The following example illustrates the usage:
security:
  auth:
    token:
      secret: ${AUTH_TOKEN_SECRET}
Given this variable is set before Optimize is started, e.g. on Unix systems with:
export AUTH_TOKEN_SECRET=sampleTokenValue
the value will be resolved at startup to sampleTokenValue.
However, if the same variable is also provided as a Java System Property, e.g. by passing -DAUTH_TOKEN_SECRET=othertokenValue to the Optimize startup script:
./optimize-startup.sh -DAUTH_TOKEN_SECRET=othertokenValue
the value will be resolved to othertokenValue, as Java System Properties have precedence over OS Environment variables.
Note for Windows Users:
To pass Java System Properties to the provided Windows batch script optimize-startup.bat, you have to put them into double quotes when using the cmd.exe shell:
optimize-startup.bat "-DAUTH_TOKEN_SECRET=othertokenValue"
and for the Windows PowerShell into three double quotes:
./optimize-startup.bat """-DAUTH_TOKEN_SECRET=othertokenValue"""
Default values
For variable placeholders it is also possible to provide default values using the following format: ${VARIABLE_NAME:DEFAULT_VALUE}. The DEFAULT_VALUE can contain any character except }.
The following example illustrates the usage:
security:
  auth:
    token:
      secret: ${AUTH_TOKEN_SECRET:defaultSecret}
Security
These values control Optimize's security-related mechanisms, e.g. security headers and authentication.
YAML Path | Default Value | Description |
---|---|---|
security.auth.token.lifeMin | 60 | Optimize uses token-based authentication to keep track of which users are logged in. Define the lifetime of the token in minutes. |
security.auth.token.secret | null | Optional secret used to sign authentication tokens; it is recommended to use a secret of at least 64 characters. If set to `null`, a random secret will be generated with each startup of Optimize. |
security.auth.superUserIds | [] | List of user IDs that are granted full permission to all collections, reports & dashboards. Note: For reports, these users still need to be granted access to the corresponding process/decision definitions in Camunda BPM Admin. See Authorization Management. |
security.responseHeaders.HSTS.max-age | 31536000 | HTTP Strict Transport Security (HSTS) is a web security policy mechanism which helps to protect websites against protocol downgrade attacks and cookie hijacking. The time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS. If you set the number to a negative value no HSTS header is sent. |
security.responseHeaders.HSTS.includeSubDomains | true | HTTP Strict Transport Security (HSTS) is a web security policy mechanism which helps to protect websites against protocol downgrade attacks and cookie hijacking. If this optional parameter is specified, this rule applies to all of the site’s subdomains as well. |
security.responseHeaders.X-XSS-Protection | 1; mode=block | This header enables the cross-site scripting (XSS) filter in your browser. Possible values are `0` (disables the filter), `1` (enables the filter) and `1; mode=block` (enables the filter and blocks rendering of the page if an attack is detected). |
security.responseHeaders.X-Content-Type-Options | true | Setting this header will prevent the browser from interpreting files as a MIME type other than the one specified in the Content-Type HTTP header (e.g. treating text/plain as text/css). |
security.responseHeaders.Content-Security-Policy | base-uri 'self' | A Content Security Policy (CSP) has a significant impact on the way browsers render pages. By default, Optimize uses the base-uri directive, which restricts the URLs that can be used to the Optimize pages. Find more details in [Mozilla's Content Security Policy Guide](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy). |
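As a sketch of how these table entries map to the YAML structure, a security section granting full permissions to a single (purely illustrative) user ID could look like this:

security:
  auth:
    token:
      lifeMin: 60
      secret: ${AUTH_TOKEN_SECRET}   # resolved via the placeholder support described above
    superUserIds: ['demo']           # illustrative user ID
  responseHeaders:
    HSTS:
      max-age: 31536000
      includeSubDomains: true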
Container
Settings related to the embedded Jetty container, which serves the Optimize application.
YAML Path | Default Value | Description |
---|---|---|
container.host | localhost | A host name or IP address used to identify a specific network interface on which to listen. |
container.ports.http | 8090 | A port number that will be used by Optimize to process HTTP connections. If set to null, ~ or left empty, HTTP connections won't be accepted. |
container.ports.https | 8091 | A port number that will be used by Optimize to process secure HTTPS connections. |
container.keystore.location | keystore.jks | HTTPS requires an SSL Certificate. When you generate an SSL Certificate, you are creating a keystore file and a keystore password for use when the browser interface connects. Location of keystore file. |
container.keystore.password | optimize | Password of keystore file. |
container.status.connections.max | 10 | Maximum number of web socket connections accepted for status report. |
container.accessUrl | null | Optional URL to access Optimize (used for links to Optimize in e.g. alert emails). If no value is specified, the container host and port are used instead. |
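For illustration, a container section binding Optimize to all network interfaces and configuring an external access URL could look like the following sketch; the host, ports and URL are examples, not recommendations:

container:
  host: 0.0.0.0                              # listen on all network interfaces
  ports:
    http: 8090
    https: 8091
  keystore:
    location: keystore.jks
    password: optimize
  accessUrl: 'https://optimize.example.com'  # illustrative external URL used e.g. in alert emails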
Connection to Camunda BPM platform
Configuration for the engines used to import data. Please note that you have to have at least one engine configured at all times. You can configure multiple engines to import data from. Each engine configuration should have a unique alias associated with it, represented by ${engineAlias} below.
YAML Path | Default Value | Description |
---|---|---|
engines.${engineAlias}.name | default | The process engine's name on the platform; this is the unique engine identifier on the platform's REST API. |
engines.${engineAlias}.defaultTenant.id | null | A default tenantId to be injected on data from this engine where no tenant is configured in the engine itself. This property is only relevant in the context of a `One Process Engine Per Tenant` setup. For details, consult the Multi-Tenancy documentation. |
engines.${engineAlias}.defaultTenant.name | null | The name used for this default tenant when displayed in the UI. |
engines.${engineAlias}.rest | http://localhost:8080/engine-rest | A base URL that will be used for connections to the Camunda Engine REST API. |
engines.${engineAlias}.importEnabled | true | Determines whether this instance of Optimize should import definition & historical data from this engine. |
engines.${engineAlias}.eventImportEnabled | false | Determines whether this instance of Optimize should convert historical data to event data usable for event based processes. |
engines.${engineAlias}.authentication.enabled | false | Toggles basic authentication on or off. When enabling basic authentication, please be aware that you also need to adjust the values of user and password. |
engines.${engineAlias}.authentication.user | | When basic authentication is enabled, this user is used to authenticate against the engine. Note: when enabled, it is required that the user has the appropriate permissions in the engine. |
engines.${engineAlias}.authentication.password | | When basic authentication is enabled, this password is used to authenticate against the engine. |
engines.${engineAlias}.webapps.endpoint | http://localhost:8080/camunda | Defines the endpoint where to find the Camunda web applications. This allows Optimize to link directly to the other Camunda web applications, e.g. to jump from Optimize directly to a dedicated process instance in Cockpit. |
engines.${engineAlias}.webapps.enabled | true | Enables/disables linking to other Camunda web applications. |
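Putting these settings together, a single engine configuration could look like the following sketch; the alias camunda-bpm, the credentials and the URLs are examples only:

engines:
  'camunda-bpm':                               # ${engineAlias}, a unique alias of your choice
    name: default
    rest: 'http://localhost:8080/engine-rest'
    importEnabled: true
    eventImportEnabled: false
    defaultTenant:
      id: null
      name: null
    authentication:
      enabled: true
      user: optimize-user                      # illustrative credentials
      password: ${ENGINE_PASSWORD}             # resolved via the placeholder support described above
    webapps:
      endpoint: 'http://localhost:8080/camunda'
      enabled: true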
Engine Common Settings
Settings used by Optimize, which are common among all configured engines, such as REST API endpoint locations, timeouts, etc.
YAML Path | Default Value | Description |
---|---|---|
engine-commons.connection.timeout | 0 | Maximum time in milliseconds without connection to the engine that Optimize should wait until a timeout is triggered. A value of zero means to wait an infinite amount of time. |
engine-commons.procdef.resource | /process-definition | The engine endpoint to the process definition. |
engine-commons.procdef.xml | /xml | The engine endpoint to the process definition xml. |
engine-commons.decision-definition.resource | /decision-definition | The engine endpoint to the decision definitions. |
engine-commons.decision-definition.xml | /xml | The relative engine endpoint to the decision definition xmls. |
engine-commons.read.timeout | 0 | Maximum time a request to the engine may last before a timeout triggers. A value of zero means to wait an infinite amount of time. |
engine-commons.user.validation.resource | /identity/verify | The engine endpoint for the user validation. |
import.data.activity-instance.maxPageSize | 10000 | Determines the page size for historic activity instance fetching. |
import.data.process-definition-xml.maxPageSize | 2 | Determines the page size for process definition xml model fetching. Should be a low value, as large models will lead to memory or timeout problems. |
import.data.process-definition.maxPageSize | 10000 | Determines the page size for process definition entities fetching. |
import.data.process-instance.maxPageSize | 10000 | Determines the page size for historic process instance fetching. |
import.data.variable.maxPageSize | 10000 | Determines the page size for historic variable instance fetching. |
import.data.user-task-instance.maxPageSize | 10000 | Determines the page size for historic user task instance fetching. |
import.data.identity-link-log.maxPageSize | 10000 | Determines the page size for historic identity link log fetching. |
import.data.decision-definition-xml.maxPageSize | 2 | Determines the page size for decision definition xml model fetching. Should be a low value, as large models will lead to memory or timeout problems. |
import.data.decision-definition.maxPageSize | 10000 | Determines the page size for decision definition entities fetching. |
import.data.decision-instance.maxPageSize | 10000 | Overwrites the maximum page size for historic decision instance fetching. |
import.data.tenant.maxPageSize | 10000 | Overwrites the maximum page size for tenant fetching. |
import.data.group.maxPageSize | 10000 | Overwrites the maximum page size for groups fetching. |
import.data.authorization.maxPageSize | 10000 | Overwrites the maximum page size for authorizations fetching. |
import.data.dmn.enabled | true | Determines if the DMN/decision data, such as decision definitions and instances, should be imported. |
import.data.user-task-worker.enabled | true | Determines if the user task worker data, such as assignee or candidate group of a user task, should be imported. |
import.elasticsearchJobExecutorThreadCount | 1 | Number of threads being used to process the import jobs per data type that are writing data to Elasticsearch. |
import.elasticsearchJobExecutorQueueSize | 5 | Adjust the queue size of the import jobs per data type that store data to Elasticsearch. Too large a value might cause memory problems. |
import.handler.backoff.interval | 5000 | Interval in milliseconds which is used for the backoff time calculation. |
import.handler.backoff.max | 15 | Once all pages are consumed, the import scheduler component will start scheduling fetching tasks in increasing periods of time, controlled by a "backoff" counter. |
import.handler.backoff.isEnabled | true | Tells whether the backoff is enabled or not. |
import.indexType | import-index | The name of the import index type. |
import.importIndexStorageIntervalInSec | 10 | States how often the import index should be stored to Elasticsearch. |
import.currentTimeBackoffMilliseconds | 10000 | This is the time interval the import backs off from the current tip of time during the ongoing import cycle. This ensures that potentially missed concurrent writes in the engine are reread by going back by the amount of this time interval. |
import.identitySync.includeUserMetaData | true | Whether to include metadata (firstName, lastName, email) when synchronizing users. If disabled, only user IDs will be shown in the user search and in collection permissions. |
import.identitySync.cronTrigger | 0 */2 * * * | Cron expression defining when the identity sync should run; defaults to every second hour. You can either use the default Cron (5 fields) or the Spring Cron (6 fields) expression format here. |
import.identitySync.maxPageSize | 10000 | The max page size when multiple users or groups are iterated during the import. |
import.identitySync.maxEntryLimit | 100000 | The entry limit of the user/group search cache; if you need more entries, you can increase that limit. When increasing the limit, keep in mind to account for it by increasing the JVM heap memory as well. Please refer to Adjust Optimize heap size on how to configure the heap size. |
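The following sketch shows how a few of these import tuning knobs nest in the configuration file; the values shown are illustrative, not tuning recommendations:

engine-commons:
  connection:
    timeout: 0            # wait indefinitely for the engine connection
  read:
    timeout: 0
import:
  data:
    process-instance:
      maxPageSize: 5000   # illustrative: smaller pages for historic process instances
  elasticsearchJobExecutorThreadCount: 1
  handler:
    backoff:
      interval: 5000
      max: 15
      isEnabled: true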
Elasticsearch
Settings related to Elasticsearch.
Connection settings
Everything related to building up the connection to Elasticsearch.
Please note that you can define a number of connection points in a cluster. Therefore, everything under es.connection.nodes is a list of nodes Optimize can connect to.
If you have built an Elasticsearch cluster with several nodes, it is recommended to define several connection points so that, in case one node fails, Optimize is still able to talk to the cluster.
YAML Path | Default Value | Description |
---|---|---|
es.connection.timeout | 10000 | Maximum time in milliseconds without connection to Elasticsearch that Optimize should wait until a timeout triggers. |
es.connection.nodes[*].host | localhost | The address/hostname under which the Elasticsearch node is available. |
es.connection.nodes[*].httpPort | 9200 | A port number used by Elasticsearch to accept HTTP connections. |
es.connection.proxy.enabled | false | Whether an HTTP proxy should be used for requests to Elasticsearch. |
es.connection.proxy.host | null | The proxy host to use, must be set if es.connection.proxy.enabled = true. |
es.connection.proxy.port | null | The proxy port to use, must be set if es.connection.proxy.enabled = true. |
es.connection.proxy.sslEnabled | false | Whether this proxy is using a secured connection (https). |
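For example, a connection to a two-node Elasticsearch cluster would be configured as a list under es.connection.nodes; the hostnames below are illustrative:

es:
  connection:
    timeout: 10000
    nodes:
      - host: es-node-1.example.com   # illustrative hostnames
        httpPort: 9200
      - host: es-node-2.example.com
        httpPort: 9200
    proxy:
      enabled: false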
Index settings
YAML Path | Default Value | Description |
---|---|---|
es.settings.index.prefix | optimize | The prefix prepended to all Optimize index and alias names. Custom values allow you to operate multiple isolated Optimize instances on one Elasticsearch cluster. Note: Changing this after Optimize has already been run will create new, empty indexes. |
es.settings.index.number_of_replicas | 1 | How often the data should be replicated in case of node failure. |
es.settings.index.number_of_shards | 5 | How many shards should be used in the cluster for process instance and decision instance indices. All other indices consist of a single shard. Note: this property only applies the first time Optimize is started and the schema/mapping is deployed on Elasticsearch. If you want this property to take effect again, you need to delete all indexes (and with them all data) and restart Optimize. |
es.settings.index.refresh_interval | 2s | How long Elasticsearch waits until the documents are available for search. A positive value defines the duration in seconds. A value of -1 means that a refresh needs to be done manually. |
es.settings.index.nested_documents_limit | 10000 | Optimize uses nested documents to store list information such as activities or variables belonging to a process instance. So this setting defines the maximum number of activities/variables that a single process instance can contain. This limit helps to prevent out of memory errors and should be used with care. |
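A sketch of the corresponding index settings block; the values simply restate the defaults from the table above:

es:
  settings:
    index:
      prefix: optimize
      number_of_replicas: 1
      number_of_shards: 5
      refresh_interval: 2s
      nested_documents_limit: 10000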
Elasticsearch Security
Define a secured connection to be able to communicate with a secured Elasticsearch instance.
YAML Path | Default Value | Description |
---|---|---|
es.security.username | | The basic authentication (x-pack) username. |
es.security.password | | The basic authentication (x-pack) password. |
es.security.ssl.enabled | false | Used to enable or disable TLS/SSL for the HTTP connection. |
es.security.ssl.certificate | | The path to a PEM encoded file containing the certificate (or certificate chain) that will be presented to clients when they connect. |
es.security.ssl.certificate_authorities | [] | A list of paths to PEM encoded CA certificate files that should be trusted, e.g. ['/path/to/ca.crt']. Note: if you are using a public CA that is already trusted by the Java runtime, you do not need to set the certificate_authorities. |
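A secured connection could be sketched as follows; the username, password placeholder and certificate paths are examples only:

es:
  security:
    username: optimize                              # illustrative x-pack user
    password: ${ES_PASSWORD}                        # resolved via the placeholder support described above
    ssl:
      enabled: true
      certificate: '/path/to/optimize.crt'          # illustrative path
      certificate_authorities: ['/path/to/ca.crt']  # illustrative CA path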
Email
Settings for the email server used to send email notifications, e.g. when an alert is triggered.
YAML Path | Default Value | Description |
---|---|---|
email.enabled | false | A switch to control the email sending process. |
email.address | | Email address that is used to send notifications. |
email.hostname | | The SMTP server name. |
email.port | 587 | The SMTP server port. This port is also used as the SSL port for the secure connection. |
email.authentication.enabled | | A switch to control email server authentication. |
email.authentication.username | | Username of your SMTP server. |
email.authentication.password | | Corresponding password to the given user of your SMTP server. |
email.authentication.securityProtocol | | States how the connection to the server should be secured. Possible values are 'NONE', 'STARTTLS' or 'SSL/TLS'. |
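An email configuration using an authenticated SMTP server might look like the following sketch; the server name, sender address and credentials are illustrative:

email:
  enabled: true
  address: optimize-alerts@example.com   # illustrative sender address
  hostname: smtp.example.com             # illustrative SMTP server
  port: 587
  authentication:
    enabled: true
    username: optimize
    password: ${SMTP_PASSWORD}           # resolved via the placeholder support described above
    securityProtocol: 'STARTTLS'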
Alert Notification Webhooks
Settings for webhooks which can receive custom alert notifications. You can configure multiple webhooks, which will be available to select from when creating or editing alerts. Each webhook configuration should have a unique, human-readable name which will appear in the Optimize UI.
YAML Path | Default Value | Description |
---|---|---|
webhookAlerting.webhooks.${webhookName}.url | | The URL of the webhook. |
webhookAlerting.webhooks.${webhookName}.headers | | A map of the headers of the request to be sent to the webhook. |
webhookAlerting.webhooks.${webhookName}.httpMethod | | The HTTP method of the request to be sent to the webhook. |
webhookAlerting.webhooks.${webhookName}.defaultPayload | | The payload of the request to be sent to the webhook. This should include the placeholder '{{ALERT_MESSAGE}}', which indicates where the content of the alert is to be inserted into the payload. |
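As a sketch, a webhook posting the alert text as JSON could be configured like this; the webhook name, URL and payload structure are illustrative and depend entirely on the receiving service:

webhookAlerting:
  webhooks:
    'myAlertWebhook':                                   # ${webhookName}, shown in the Optimize UI
      url: 'https://webhooks.example.com/optimize'      # illustrative endpoint
      headers:
        'Content-Type': 'application/json'
      httpMethod: 'POST'
      defaultPayload: '{"text": "{{ALERT_MESSAGE}}"}'   # the placeholder is replaced with the alert content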
History Cleanup Settings
Settings for automatic cleanup of historic process/decision instances based on their end time.
YAML Path | Default Value | Description |
---|---|---|
historyCleanup.cronTrigger | '0 1 * * *' | Cron expression to schedule when the cleanup should be executed; defaults to 1 o'clock at night. As the cleanup can cause considerable load on the underlying Elasticsearch database, it is recommended to schedule it outside of office hours. You can either use the default Cron (5 fields) or the Spring Cron (6 fields) expression format here. |
historyCleanup.ttl | 'P2Y' | Global time to live (ttl) period for process/decision/event data. The relevant property differs between entities: for process data it's the `endTime` of the process instance, for decision data it's the `evaluationTime`, and for ingested events it's the `time` field. The format of the string is an ISO_8601 duration; the default value is 2 years. For details on the notation refer to: https://en.wikipedia.org/wiki/ISO_8601#Durations Note: The time component of the ISO_8601 duration is not supported; only years (Y), months (M) and days (D) are. |
historyCleanup.processDataCleanup.enabled | false | A switch to activate the history cleanup of process data. [true/false] |
historyCleanup.processDataCleanup.cleanupMode | 'all' | Global type of cleanup to perform for process instances. Possible values: 'all' - delete everything related to and including the process instances that passed the defined ttl; 'variables' - only delete the variables of a process instance. Note: This doesn't affect the decision instance cleanup, which always deletes the whole instance. |
historyCleanup.processDataCleanup.batchSize | 10000 | Defines the batch size in which Camunda engine process instance data gets cleaned up. It may be reduced if requests fail due to request size constraints. In most cases, this should not be necessary and has only been experienced when connecting to an AWS Elasticsearch instance. |
historyCleanup.processDataCleanup.perProcessDefinitionConfig | | A list of process definition specific configuration parameters that overwrite the global cleanup settings for the process definition identified by its ${key}. |
historyCleanup.processDataCleanup.perProcessDefinitionConfig.${key}.ttl | | Time to live to use for process instances of the process definition with the given ${key}. |
historyCleanup.processDataCleanup.perProcessDefinitionConfig.${key}.cleanupMode | | Cleanup mode to use for process instances of the process definition with the given ${key}. |
historyCleanup.decisionDataCleanup.enabled | false | A switch to activate the history cleanup of decision data. [true/false] |
historyCleanup.decisionDataCleanup.perDecisionDefinitionConfig | | A list of decision definition specific configuration parameters that overwrite the global cleanup settings for the decision definition identified by its ${key}. |
historyCleanup.decisionDataCleanup.perDecisionDefinitionConfig.${key}.ttl | | Time to live to use for decision instances of the decision definition with the given ${key}. |
historyCleanup.ingestedEventCleanup.enabled | false | A switch to activate the history cleanup of ingested event data. [true/false] |
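For illustration, the following sketch enables the process data cleanup globally and overrides the ttl for one hypothetical process definition key; the key and ttl values are examples:

historyCleanup:
  cronTrigger: '0 1 * * *'
  ttl: 'P2Y'
  processDataCleanup:
    enabled: true
    cleanupMode: 'all'
    perProcessDefinitionConfig:
      invoice:                   # hypothetical process definition key (${key})
        ttl: 'P6M'
        cleanupMode: 'variables'
  decisionDataCleanup:
    enabled: false
  ingestedEventCleanup:
    enabled: false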
Localization
Define the languages that can be used by Optimize.
YAML Path | Default Value | Description |
---|---|---|
localization.availableLocales | ['en','de'] | All locales available in the Optimize frontend. Note: for locales other than the default there must be a `<localeCode>.json` file available under `./environment/localization`. |
localization.fallbackLocale | 'en' | The fallback locale is used if a locale is requested that is not available in availableLocales. The fallbackLocale is required to be present in `localization.availableLocales`. |
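For completeness, a localization block that simply restates the defaults from the table above looks like this:

localization:
  availableLocales: ['en', 'de']
  fallbackLocale: 'en'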
UI Configuration
Customize how Optimize looks, e.g. by adjusting the logo, header background color, etc.
YAML Path | Default Value | Description |
---|---|---|
ui.header.textColor | 'dark' | Determines the color theme of the text in the header. Currently 'dark' and 'light' are supported. |
ui.header.pathToLogoIcon | 'logo/camunda_icon.svg' | Path to the logo that is displayed in the header of Optimize. |
ui.header.backgroundColor | '#FFFFFF' | A hex encoded color that should be used as background color for the header. Default color is white. |
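A sketch of a customized header; the color values are arbitrary examples:

ui:
  header:
    textColor: 'light'
    pathToLogoIcon: 'logo/camunda_icon.svg'
    backgroundColor: '#990000'    # illustrative hex color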
Event Based Process Configuration
Configuration of the Optimize event based process feature.
YAML Path | Default Value | Description |
---|---|---|
eventBasedProcess.authorizedUserIds | [] | A list of userIds that are authorized to manage (Create, Update, Publish & Delete) event based processes. |
eventBasedProcess.eventImport.enabled | false | Determines whether this Optimize instance performs event based process instance import. |
eventBasedProcess.eventImport.maxPageSize | 10000 | The batch size of events being correlated to process instances of event based processes. |
eventBasedProcess.eventIndexRollover.scheduleIntervalInMinutes | 10 | The interval in minutes at which to check whether the conditions for a rollover of eligible indices are met, triggering one if required. This value should be greater than 0. |
eventBasedProcess.eventIndexRollover.maxIndexSizeGB | 50 | Specifies the maximum total Index size for events (excluding replicas). When shards get too large, query performance can slow down and rolling over an index can bring an improvement. Using this configuration, a rollover will occur when triggered and the current event index size matches or exceeds the maxIndexSizeGB threshold. |
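For illustration, the following sketch enables event import for event based processes and restates the rollover defaults; the user IDs are illustrative:

eventBasedProcess:
  authorizedUserIds: ['demo', 'mary']   # illustrative user IDs
  eventImport:
    enabled: true
    maxPageSize: 10000
  eventIndexRollover:
    scheduleIntervalInMinutes: 10
    maxIndexSizeGB: 50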
Event Ingestion REST API Configuration
Configuration of the Optimize Event Ingestion REST API for Event Based Processes.
YAML Path | Default Value | Description |
---|---|---|
eventBasedProcess.eventIngestion.accessToken | null | Secret token to be provided on the Ingestion REST API when ingesting data. If set to `null`, a random token will be generated with each startup of Optimize and logged. It is recommended to configure a value if the [Event Ingestion REST API](../) is to be used. |
eventBasedProcess.eventIngestion.maxBatchRequestBytes | 10485760 | Content length limit for an ingestion REST API bulk request in bytes. Requests exceeding this limit will be rejected. Defaults to 10MB. In case this limit is raised, you should carefully tune the heap memory accordingly; see Adjust Optimize heap size on how to do that. |
eventBasedProcess.eventIngestion.maxRequests | 5 | The maximum number of event ingestion requests that can be serviced at any given time. |
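The ingestion settings nest under the same eventBasedProcess key; the token placeholder and its default in this sketch are illustrative:

eventBasedProcess:
  eventIngestion:
    accessToken: ${EVENT_INGESTION_TOKEN:mySecretToken}   # illustrative placeholder with default
    maxBatchRequestBytes: 10485760
    maxRequests: 5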
Other
Settings for the plugin subsystem, serialization format, variable import, and the Camunda endpoint.
YAML Path | Default Value | Description |
---|---|---|
plugin.directory | ./plugin | Defines the directory path in the local Optimize file system which should be checked for plugins. |
plugin.variableImport.basePackages | | Look in the given base package list for variable import adaption plugins. If empty, the import is not influenced. |
plugin.authenticationExtractor.basePackages | | Look in the given base package list for authentication extractor plugins. If empty, the standard Optimize authentication mechanism is used. |
plugin.engineRestFilter.basePackages | | Look in the given base package list for engine REST filter plugins. If empty, the REST calls are not influenced. |
plugin.decisionInputImport.basePackages | | Look in the given base package list for decision input import adaption plugins. If empty, the import is not influenced. |
plugin.decisionOutputImport.basePackages | | Look in the given base package list for decision output import adaption plugins. If empty, the import is not influenced. |
serialization.engineDateFormat | yyyy-MM-dd'T'HH:mm:ss.SSSZ | Define a custom date format that should be used (should be the same as in the engine). |
export.csv.limit | 1000 | Maximum number of records returned by the CSV export. Note: Increasing this value comes at a memory cost for the Optimize application that varies based on the actual data. As a rough guideline, an export of a raw data report with 50,000 records containing 8 variables on each instance can cause temporary heap memory peaks of up to ~200MB, with the actual CSV file having a size of ~20MB. Please adjust the heap memory accordingly; see Adjust Optimize heap size on how to do that. |
sharing.enabled | true | Enable/disable the possibility to share reports and dashboards. |
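Finally, a sketch combining a few of these settings; the plugin base package is a hypothetical example:

plugin:
  directory: ./plugin
  variableImport:
    basePackages: ['com.example.optimize.plugin.variable']   # hypothetical plugin package
serialization:
  engineDateFormat: yyyy-MM-dd'T'HH:mm:ss.SSSZ
export:
  csv:
    limit: 1000
sharing:
  enabled: true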