Configuration
Logging
Camunda Optimize provides logging facilities that are preconfigured to use the
INFO logging level, which produces minimal output in the log files. This level
can be adjusted in the environment-logback.xml configuration file.
In general, all log levels provided by Logback are supported. Optimize itself
provides additional information on the DEBUG and TRACE levels. The DEBUG level
writes information about the scheduling process as well as the import progress
to the log file. The TRACE level additionally logs all requests sent to the
engine as well as all queries sent to Elasticsearch.
Although logging levels could be configured for all packages, it is recommended to set the logging level for the Optimize system only, using an exact package reference:
<logger name="org.camunda.optimize" level="debug" />
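A complete environment-logback.xml built around this logger could be sketched as follows. The appender name and log file path are illustrative assumptions, not defaults shipped with Optimize:

```xml
<configuration>
  <!-- Illustrative file appender; adjust the path to your environment -->
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>logs/optimize.log</file>
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Raise the level only for the Optimize packages -->
  <logger name="org.camunda.optimize" level="debug" />

  <!-- Everything else stays at INFO -->
  <root level="info">
    <appender-ref ref="FILE" />
  </root>
</configuration>
```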
System configuration
All distributions of Camunda Optimize come with a predefined set of configuration options that can be overwritten by the user, based on the requirements of the current environment. To do that, have a look into the folder named environment. It contains a file called environment-config.yaml whose values override the default Optimize properties.
The configuration file contains a YAML object, each field of which holds the configuration values for one specific logical part of the Camunda Optimize system. You can see a sample configuration file with all possible configuration fields and their default values here.
The following sections describe the individual fields inside the main YAML object, along with their respective YAML paths, default values, and descriptions.
Heads Up!
For changes in the configuration to take effect, you need to restart Optimize!
Authentication And Security
These values control internal mechanisms of Optimize related to authentication of users against the system and lifetime of web session tokens.
YAML Path | Default Value | Description |
---|---|---|
auth.defaultAuthentication.password | admin | Default password that is automatically added the first time Optimize is started. |
auth.defaultAuthentication.user | admin | Default user name that is automatically added the first time Optimize is started. |
auth.defaultAuthentication.creationEnabled | true | Enables the creation of the default Optimize user. If you are using Optimize in production, you should disable this property so that only the users from the engine are used. |
auth.token.lifeMin | 15 | Optimize uses token-based authentication to keep track of which users are logged in. Define the lifetime of the token in minutes. |
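For example, to disable the default user in production and shorten the session token lifetime, the corresponding section of environment-config.yaml could be overridden like this (the values shown are illustrative, not recommendations):

```yaml
auth:
  defaultAuthentication:
    creationEnabled: false   # do not create the default admin user in production
  token:
    lifeMin: 10              # web session tokens expire after 10 minutes
```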
Container
Settings related to the embedded Jetty container, which serves the Optimize application.
YAML Path | Default Value | Description |
---|---|---|
container.host | localhost | A host name or IP address used to identify a specific network interface on which to listen. NOTE: changing this property also requires adjustments in the startup script start-optimize.sh on Linux or start-optimize.bat on Windows. |
container.ports.http | 8090 | A port number that will be used by the embedded Jetty server to process HTTP connections. NOTE: changing this property also requires adjustments in the startup script start-optimize.sh on Linux or start-optimize.bat on Windows. |
container.ports.https | 8091 | A port number that will be used by the embedded Jetty server to process secure HTTPS connections. NOTE: changing this property also requires adjustments in the startup script start-optimize.sh on Linux or start-optimize.bat on Windows. |
container.keystore.location | keystore.jks | HTTPS requires an SSL Certificate. When you generate an SSL Certificate, you are creating a keystore file and a keystore password for use when the browser interface connects. Location of keystore file. |
container.keystore.password | optimize | Password of keystore file. |
container.status.connections.max | 10 | Maximum number of web socket connections accepted for status report. |
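A container section overriding the defaults might be sketched as follows; remember that port changes must be mirrored in start-optimize.sh (Linux) or start-optimize.bat (Windows):

```yaml
container:
  host: 0.0.0.0        # listen on all network interfaces (illustrative)
  ports:
    http: 8090
    https: 8091
  keystore:
    location: keystore.jks
    password: optimize
```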
Connection to Camunda BPM platform
Configuration for the engines used to import data. Please note that you must have
at least one engine configured at all times. You can configure multiple engines
to import data from. Each engine configuration must have a unique alias associated
with it, represented below by ${engineAlias}.
YAML Path | Default Value | Description |
---|---|---|
engines.${engineAlias}.name | default | The name of the engine that will be used to import data. |
engines.${engineAlias}.rest | http://localhost:8080/engine-rest | A base URL that will be used for connections to the Camunda Engine REST API. |
engines.${engineAlias}.authentication.accessGroup | | With the specified group ID, only engine users that are part of the group can access Optimize. |
engines.${engineAlias}.authentication.enabled | false | Toggles basic authentication on or off. When enabling basic authentication, please be aware that you also need to adjust the values of the user and password fields. |
engines.${engineAlias}.authentication.password | | When basic authentication is enabled, this password is used to authenticate against the engine. |
engines.${engineAlias}.authentication.user | | When basic authentication is enabled, this user is used to authenticate against the engine. |
engines.${engineAlias}.enabled | true | Determines whether the engine should be considered connected. Disable only for testing! |
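Using the alias myEngine (an illustrative name, substituted for ${engineAlias}), an engine entry could look like this:

```yaml
engines:
  myEngine:                # the engine alias; choose any unique name
    name: default
    rest: http://localhost:8080/engine-rest
    enabled: true
    authentication:
      enabled: false
      user: ''
      password: ''
```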
Engine Common Settings
Settings used by Optimize, which are common among all configured engines, such as REST API endpoint locations, timeouts, etc.
YAML Path | Default Value | Description |
---|---|---|
engine-commons.connection.timeout | 10000 | Maximum time in milliseconds Optimize should wait for a connection to the engine before a timeout is triggered. |
engine-commons.groups.resource | /identity/groups | The engine endpoint to verify group memberships of users. |
engine-commons.hai.count | /history/activity-instance/count | The engine endpoint to the historic activity instance count. |
engine-commons.hai.resource | /history/activity-instance | The engine endpoint to the historic activity instances. |
engine-commons.history.procinst.count | /history/process-instance/count | The engine endpoint to the historic process instance count. |
engine-commons.history.procinst.resource | /history/process-instance | The engine endpoint to the historic process instances. |
engine-commons.history.variable.count | /history/variable-instance/count | The engine endpoint to the historic variable instance count. |
engine-commons.history.variable.resource | /history/variable-instance | The engine endpoint to the historic variable instances. |
engine-commons.procdef.count | /process-definition/count | The engine endpoint to the process definition count. |
engine-commons.procdef.resource | /process-definition | The engine endpoint to the process definition. |
engine-commons.procdef.xml | /xml | The engine endpoint to the process definition xml. |
engine-commons.read.timeout | 15000 | Maximum time a request to the engine may last before a timeout is triggered. |
engine-commons.user.validation.resource | /identity/verify | The engine endpoint for the user validation. |
import.data.activity-instance.maxPageSize | 10000 | Overwrites the maximum page size for historic activity instance fetching. |
import.data.activity-instance.elasticsearchType | event | Name used to identify the index for storing information about historic activity instances already imported to Optimize. |
import.data.process-definition.maxPageSize | 1000 | Overwrites the maximum page size for process definitions fetching. |
import.data.process-definition.elasticsearchType | process-definition | Name used to identify the index for storing data of process definitions already imported to Optimize. |
import.data.process-definition-xml.maxPageSize | 2 | Overwrites the maximum page size for process definition xml model fetching. Should be a low value, as large models will lead to memory or timeout problems. |
import.data.process-definition-xml.elasticsearchType | process-definition-xml | Name used to identify the index for storing data of process definition XMLs already imported to Optimize. |
import.data.process-instance.maxPageSize | 1000 | Overwrites the maximum page size for historic process instance fetching. |
import.data.process-instance.finishedIdTrackingType | finished-process-instance-id-tracking | The name of the finished process instance (pi) tracking type that is used to find pi’s that were already imported. |
import.data.process-instance.unfinishedIdTrackingType | unfinished-process-instance-id-tracking | The name of the unfinished process instance (pi) tracking type that is used to find pi’s that were already imported. |
import.data.process-instance.elasticsearchType | process-instance | Name used to identify the index storing data of imported process instances. |
import.data.variable.maxPageSize | 1000 | Overwrites the maximum page size for historic variable instance fetching. |
import.data.variable.elasticsearchType | variable | Name used to identify the index storing data of imported variable instances. |
import.elasticsearchJobExecutorThreadCount | 2 | Number of threads being used to process the import jobs that are writing data to elasticsearch. |
import.elasticsearchJobExecutorQueueSize | 100 | Adjust the queue size of the import jobs that store data to Elasticsearch. Too large a value might cause memory problems. |
import.maxPageSize | 1000 | The data is fetched from the engine in pages. Define maximum size or, to be precise, a range for the number of entities that should be fetched at once for each import type. |
import.handler.backoff.interval | 5000 | Interval which is used for the backoff time calculation. |
import.handler.backoff.max | 15 | Once all pages are consumed, the import scheduler component will start scheduling fetching tasks at increasing intervals, controlled by a "backoff" counter. This value is the maximum the counter can reach. |
import.handler.backoff.isEnabled | true | States whether the backoff is enabled or not. |
import.indexType | import-index | The name of the import index type. |
import.scrollImportIndexType | scroll-import-index | The name of the id set based import index type |
import.importIndexStorageIntervalInSec | 10 | States how often the import index should be stored to Elasticsearch. |
import.writer.numberOfRetries | 5 | Number of retries when there is a version conflict in Elasticsearch during the import. |
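Putting the import tuning knobs from the table above together, an override could be sketched as follows (the values shown are the documented defaults, repeated here only to illustrate the YAML structure):

```yaml
import:
  maxPageSize: 1000                        # entities fetched per page from the engine
  elasticsearchJobExecutorThreadCount: 2   # threads writing import jobs to Elasticsearch
  elasticsearchJobExecutorQueueSize: 100   # too large a value can cause memory problems
  handler:
    backoff:
      isEnabled: true
      interval: 5000                       # base interval for the backoff calculation
      max: 15                              # upper bound of the backoff counter
```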
Elasticsearch
Settings related to Elasticsearch.
YAML Path | Default Value | Description |
---|---|---|
es.analyzer.name | case_sensitive | Defines the name of the analyzer. |
es.analyzer.tokenfilter | standard | Token filter applied to every token. |
es.analyzer.tokenizer | whitespace | Tokenizer used to process tokens within a query. |
es.connection.timeout | 10000 | Maximum time Optimize should wait for a connection to Elasticsearch before a timeout is triggered. |
es.heatmap.duration.targetValueType | duration-target-value | The name of the duration target value type. |
es.host | localhost | A hostname on which the Elasticsearch TCP listener is available. |
es.port | 9300 | A port number used by Elasticsearch to accept TCP connections. |
es.index | optimize | An index name used to create all Camunda Optimize types, shards, etc. |
es.sampler.interval | 5000 | Connection sampler interval set on the client. |
es.scrollTimeout | 60000 | Maximum time a request to Elasticsearch may last before a timeout is triggered. |
es.settings.index.number_of_replicas | 0 | How many replicas of the data should be kept in case of node failure. Note that the more replicas you define, the slower Elasticsearch becomes. |
es.settings.index.number_of_shards | 1 | How many shards should be used in the cluster. |
es.settings.index.refresh_interval | 2s | How long Elasticsearch waits until the documents are available for search. A positive value defines the duration in seconds. A value of -1 means that a refresh needs to be done manually. |
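A minimal es section pointing Optimize at a remote cluster might look like this (the host name is a placeholder):

```yaml
es:
  host: es.example.com   # placeholder host name of your Elasticsearch node
  port: 9300             # Elasticsearch TCP transport port
  index: optimize
  settings:
    index:
      number_of_replicas: 1   # one replica tolerates a single node failure
      number_of_shards: 1
      refresh_interval: 2s
```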
Alerting
Settings for the email server used to send email notifications when an alert is triggered.
YAML Path | Default Value | Description |
---|---|---|
alerting.email.enabled | false | A switch to enable or disable the sending of alert emails. |
alerting.email.username | | Username of your SMTP server. |
alerting.email.password | | Corresponding password for the given user of your SMTP server. |
alerting.email.address | | Email address that is used to send the alerts. |
alerting.email.hostname | | The SMTP server name. |
alerting.email.port | 587 | The SMTP server port. This port is also used as the SSL port for the secure connection. |
alerting.email.securityProtocol | None | States how the connection to the server should be secured. Possible values are 'NONE', 'STARTTLS' or 'SSL/TLS'. |
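An alerting section for an SMTP server with STARTTLS could be sketched like this; the host name, credentials, and sender address are placeholders:

```yaml
alerting:
  email:
    enabled: true
    hostname: smtp.example.com     # placeholder SMTP host
    port: 587
    securityProtocol: STARTTLS
    username: optimize-alerts      # placeholder credentials
    password: changeme
    address: alerts@example.com    # placeholder sender address for alert emails
```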
Other
Settings for the plugin subsystem, the serialization format, and the variable import.
YAML Path | Default Value | Description |
---|---|---|
plugin.variableImport.basePackages | | Look in the given base package list for variable import adaption plugins. If empty, the import is not influenced. |
serialization.engineDateFormat | yyyy-MM-dd'T'HH:mm:ss.SSSZ | Define a custom date format that should be used (it should match the format used by the engine). Please note that engines prior to 7.8 use yyyy-MM-dd'T'HH:mm:ss as the default date format. |
serialization.optimizeDateFormat | yyyy-MM-dd'T'HH:mm:ss.SSSZ | The date format used internally by Optimize. |
variable.maxValueListSize | 15 | States the maximum number of values that are shown for the user in the variable filter selection. |
export.csv.limit | 1000 | Maximum number of records returned by the CSV export. |
export.csv.offset | 0 | Position of the first record to be exported to CSV. |
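For instance, to export a larger CSV window starting from the beginning, these two fields could be overridden as follows (the limit shown is illustrative):

```yaml
export:
  csv:
    limit: 5000   # return up to 5000 records per export
    offset: 0     # start from the first record
```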