User Guide

Overview

Welcome to the Camunda BPM user guide! Camunda BPM is a Java-based framework for process automation. This document contains information about the features provided by the Camunda BPM platform.

Camunda BPM is built around the process engine component. The following illustration shows the most important components of Camunda BPM along with some typical user roles.

Process Engine & Infrastructure

Web Applications

  • REST API The REST API allows using the process engine from a remote application or a JavaScript application. (Note: the documentation of the REST API is maintained as a separate document.)
  • Camunda Tasklist A web application for human workflow management and user tasks that allows process participants to inspect their workflow tasks and navigate to task forms in order to work on the tasks and provide data input.
  • Camunda Cockpit A web application for process monitoring and operations that allows you to search for process instances, inspect their state and repair broken instances.
  • Camunda Admin A web application for user management that allows you to manage users, groups and authorizations.
  • Camunda Cycle A web application for synchronizing BPMN 2.0 process models between different modeling tools and modelers.

Additional Tools

  • Camunda Modeler: Eclipse plugin for process modeling.
  • camunda-bpmn.js: JavaScript framework for parsing, rendering and executing BPMN 2.0 from XML source.

Getting Started

Getting started Tutorials »

The getting started tutorials can be found at http://docs.camunda.org/guides/getting-started-guides/.

Architecture Overview

camunda BPM is a Java-based framework. The main components are written in Java and we have a general focus on providing Java developers with the tools they need for designing, implementing and running business processes and workflows on the JVM. Nevertheless, we also want to make the process engine technology available to non-Java developers. This is why camunda BPM also provides a REST API which allows you to build applications that connect to a remote process engine.

camunda BPM can be used either as a standalone process engine server or embedded inside custom Java applications. The embeddability requirement is at the heart of many architecture decisions within camunda BPM. For instance, we work hard to make the process engine a lightweight component with as few dependencies on third-party libraries as possible. Furthermore, the embeddability motivates programming model choices such as the capability of the process engine to participate in Spring-managed or JTA transactions, and the threading model.

Process Engine Architecture

  • Process Engine Public API: Service-oriented API allowing Java applications to interact with the process engine. The different responsibilities of the process engine (i.e. Process Repository, Runtime Process Interaction, Task Management, ...) are separated out into individual services. The public API features a command-style access pattern: threads entering the process engine are routed through a Command Interceptor which is used for setting up thread context such as transactions.
  • BPMN 2.0 Core Engine: this is the core of the process engine. It features a lightweight execution engine for graph structures (PVM - Process Virtual Machine), a BPMN 2.0 parser which transforms BPMN 2.0 XML files into Java objects and a set of BPMN Behavior implementations (providing the implementation for BPMN 2.0 constructs such as Gateways or Service Tasks).
  • Job Executor: the Job Executor is responsible for processing asynchronous background work such as Timers or asynchronous continuations in a process.
  • The Persistence Layer: the process engine features a persistence layer responsible for persisting process instance state to a relational database. We use the MyBatis mapping engine for object relational mapping.

Required third-party libraries

The process engine depends on the following third-party libraries:

camunda BPM platform architecture

camunda BPM platform is a flexible framework which can be deployed in different scenarios. This section provides an overview of the most common deployment scenarios.

Embedded Process Engine

In this case the process engine is added as an application library to a custom application. This way the process engine can easily be started and stopped with the application lifecycle. It is possible to run multiple embedded process engines on top of a shared database.
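
For illustration, tying the engine to the application lifecycle might look as follows (a minimal sketch using the bootstrapping API described later in this guide):

ProcessEngine processEngine = ProcessEngineConfiguration
  .createStandaloneProcessEngineConfiguration()
  .buildProcessEngine(); // build the engine when the application starts

// ... use the engine services while the application is running ...

processEngine.close(); // close the engine when the application shuts down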

Shared, container-managed Process Engine

In this case the process engine is started inside the runtime container (Servlet Container, Application Server, ...). The process engine is provided as a container service and can be shared by all applications deployed inside the container. The concept can be compared to a JMS Message Queue which is provided by the runtime and can be used by all applications. There is a one-to-one mapping between process deployments and applications: the process engine keeps track of the process definitions deployed by an application and delegates execution to the application in question.

Standalone (Remote) Process Engine Server

In this case the process engine is provided as a network service. Different applications running on the network can interact with the process engine through a remote communication channel. The easiest way to make the process engine accessible remotely is to use the built-in REST API. Different communication channels such as SOAP web services or JMS are possible but need to be implemented by users.

Clustering Model

In order to provide scale-up or fail-over capabilities, the process engine can be distributed to different nodes in a cluster. Each process engine instance must then connect to a shared database.

The individual process engine instances do not maintain session state across transactions. Whenever the process engine runs a transaction, the complete state is flushed out to the shared database. This makes it possible to route subsequent requests which do work in the same process instance to different cluster nodes. This model is very simple and easy to understand and imposes limited restrictions when it comes to deploying a cluster installation. As far as the process engine is concerned, there is also no difference between setups for scale-up and setups for fail-over (as the process engine keeps no session state between transactions).

The process engine job executor is also clustered and runs on each node. This way, there is no single point of failure as far as the process engine is concerned. The job executor can run in both homogeneous and heterogeneous clusters.

Multi-Tenancy Model

To serve multiple, independent parties with one Camunda installation, the process engine supports multi-tenancy. The following multi-tenancy models are supported:

  • Table-level data separation by using different database schemas or databases,
  • Row-level data separation by using a tenant marker.

Users should choose the model which fits their data separation needs. Camunda's APIs provide access to processes and related data specific to each tenant. More details can be found in the multi-tenancy section.

Web Application Architecture

The camunda BPM web applications are based on a RESTful architecture.

Frameworks used:

Additional custom frameworks developed by camunda hackers:

  • camunda-bpmn.js: camunda BPMN 2.0 Javascript libraries
  • ngDefine: integration of AngularJS into RequireJS powered applications
  • angular-data-depend: toolkit for implementing complex, data heavy AngularJS applications

Supported Environments

You can run the camunda BPM platform in every Java-runnable environment. camunda BPM is supported with our QA infrastructure in the following environments. Here you can find more information about our enterprise support.

Container / Application Server

  • Apache Tomcat 6 / 7
  • JBoss Application Server 7.1 / JBoss EAP 6.0
  • Glassfish 3.1
  • IBM Websphere Application Server 8.0 / 8.5

Databases

  • MySQL 5.1
  • Oracle 10g / 11g
  • IBM DB2 9.7
  • PostgreSQL 9.1
  • Microsoft SQL Server 2008 R2 / 2012 (see Configuration Note)
  • H2 1.3

Web Browser

  • Google Chrome latest
  • Mozilla Firefox latest
  • Internet Explorer 9 / 10

Java Runtime

  • Java 6 / 7

Eclipse (for camunda modeler)

  • Eclipse Indigo / Juno

Process Engine Bootstrapping

You have a number of options to configure and create a process engine, depending on whether you use an application-managed or a shared, container-managed process engine.

Application Managed Process Engine

You manage the process engine as part of your application. The following ways exist to configure it:

Shared, Container Managed Process Engine

A container of your choice (e.g. Tomcat, JBoss, Glassfish or WebSphere) manages the process engine for you. The configuration is carried out in a container-specific way; see Runtime Container Integration for details.

ProcessEngineConfiguration bean

The camunda engine uses the ProcessEngineConfiguration bean to configure and construct a standalone Process Engine. There are multiple subclasses available that can be used to define the process engine configuration. These classes represent different environments and set defaults accordingly. It's a best practice to select the class that most closely matches your environment, to minimize the number of properties needed to configure the engine. The following classes are currently available:

  • org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration The process engine is used in a standalone way. The engine itself will take care of the transactions. By default, the database will only be checked when the engine boots (and an exception is thrown if there is no database schema or the schema version is incorrect).
  • org.camunda.bpm.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration This is a convenience class for unit testing purposes. The engine itself will take care of the transactions. An H2 in-memory database is used by default. The database will be created and dropped when the engine boots and shuts down. When using this, probably no additional configuration is needed (except when using for example the job executor or mail capabilities).
  • org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration To be used when the process engine is used in a Spring environment. See the Spring integration section for more information.
  • org.camunda.bpm.engine.impl.cfg.JtaProcessEngineConfiguration To be used when the engine runs in standalone mode, with JTA transactions.

Bootstrap a Process Engine using Java API

You can configure the process engine programmatically by creating the right ProcessEngineConfiguration object or using one of the pre-defined ones:

ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration();
ProcessEngineConfiguration.createStandaloneInMemProcessEngineConfiguration();

Now you can call the buildProcessEngine() operation to create a Process Engine:

ProcessEngine processEngine = ProcessEngineConfiguration.createStandaloneInMemProcessEngineConfiguration()
  .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_FALSE)
  .setJdbcUrl("jdbc:h2:mem:my-own-db;DB_CLOSE_DELAY=1000")
  .setJobExecutorActivate(true)
  .buildProcessEngine();

Configure Process Engine using Spring XML

The easiest way to configure your Process Engine is through an XML file called camunda.cfg.xml. Using that you can simply do:

ProcessEngine processEngine = ProcessEngines.getDefaultProcessEngine();

The camunda.cfg.xml file must contain a bean that has the id processEngineConfiguration; select the ProcessEngineConfiguration class that best fits your needs:

<bean id="processEngineConfiguration" class="org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration">

This will look for a camunda.cfg.xml file on the classpath and construct an engine based on the configuration in that file. The following snippet shows an example configuration:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

  <bean id="processEngineConfiguration" class="org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration">

    <property name="jdbcUrl" value="jdbc:h2:mem:camunda;DB_CLOSE_DELAY=1000" />
    <property name="jdbcDriver" value="org.h2.Driver" />
    <property name="jdbcUsername" value="sa" />
    <property name="jdbcPassword" value="" />

    <property name="databaseSchemaUpdate" value="true" />

    <property name="jobExecutorActivate" value="false" />

    <property name="mailServerHost" value="mail.my-corp.com" />
    <property name="mailServerPort" value="5025" />
  </bean>

</beans>

Note that the configuration XML is in fact a Spring configuration. This does not mean that the camunda engine can only be used in a Spring environment! We are simply leveraging the parsing and dependency injection capabilities of Spring internally for building up the engine.

The ProcessEngineConfiguration object can also be created programmatically using the configuration file. It is also possible to use a different bean id:

ProcessEngineConfiguration.createProcessEngineConfigurationFromResourceDefault();
ProcessEngineConfiguration.createProcessEngineConfigurationFromResource(String resource);
ProcessEngineConfiguration.createProcessEngineConfigurationFromResource(String resource, String beanName);
ProcessEngineConfiguration.createProcessEngineConfigurationFromInputStream(InputStream inputStream);
ProcessEngineConfiguration.createProcessEngineConfigurationFromInputStream(InputStream inputStream, String beanName);

It is also possible not to use a configuration file, and create a configuration based on defaults (see the different supported classes for more information).

ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration();
ProcessEngineConfiguration.createStandaloneInMemProcessEngineConfiguration();

All these ProcessEngineConfiguration.createXXX() methods return a ProcessEngineConfiguration that can further be tweaked if needed. After calling the buildProcessEngine() operation, a ProcessEngine is created as explained above.

Configure Process Engine in bpm-platform.xml

The bpm-platform.xml file is used to configure camunda BPM platform in the following distributions:

  • Apache Tomcat
  • Glassfish Application Server
  • IBM Websphere Application Server

The <process-engine ... /> XML tag allows defining a process engine:

<?xml version="1.0" encoding="UTF-8"?>
<bpm-platform xmlns="http://www.camunda.org/schema/1.0/BpmPlatform" 
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://www.camunda.org/schema/1.0/BpmPlatform http://www.camunda.org/schema/1.0/BpmPlatform">

  <job-executor>
    <job-acquisition name="default" />
  </job-executor>

  <process-engine name="default">
    <job-acquisition>default</job-acquisition>
    <configuration>org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration</configuration>
    <datasource>java:jdbc/ProcessEngine</datasource>

    <properties>
      <property name="history">full</property>
      <property name="databaseSchemaUpdate">true</property>
      <property name="authorizationEnabled">true</property>
    </properties>

  </process-engine>
</bpm-platform>

See Deployment Descriptor Reference for complete documentation of the syntax of the bpm-platform.xml file.

Configure Process Engine in processes.xml

The process engine can also be configured and bootstrapped using the META-INF/processes.xml file. See Section on processes.xml file for details.

See Deployment Descriptor Reference for complete documentation of the syntax of the processes.xml file.

Process Engine API

Services API

The Java API is the most common way of interacting with the engine. The central starting point is the ProcessEngine, which can be created in several ways as described in the configuration section. From the ProcessEngine, you can obtain the various services that contain the workflow/BPM methods. The ProcessEngine and the services objects are thread-safe, so you can keep a reference to one of them for the lifetime of the server.

ProcessEngine processEngine = ProcessEngines.getDefaultProcessEngine();

RuntimeService runtimeService = processEngine.getRuntimeService();
RepositoryService repositoryService = processEngine.getRepositoryService();
TaskService taskService = processEngine.getTaskService();
ManagementService managementService = processEngine.getManagementService();
IdentityService identityService = processEngine.getIdentityService();
HistoryService historyService = processEngine.getHistoryService();
FormService formService = processEngine.getFormService();

ProcessEngines.getDefaultProcessEngine() will initialize and build a process engine the first time it is called and afterwards always return the same process engine. Proper creation and closing of all process engines can be done with ProcessEngines.init() and ProcessEngines.destroy().

The ProcessEngines class will scan for all camunda.cfg.xml and activiti-context.xml files. For all camunda.cfg.xml files, the process engine will be built in the typical way: ProcessEngineConfiguration.createProcessEngineConfigurationFromInputStream(inputStream).buildProcessEngine(). For all activiti-context.xml files, the process engine will be built in the Spring way: First the Spring application context is created and then the process engine is obtained from that application context.

All services are stateless. This means that you can easily run camunda BPM on multiple nodes in a cluster, each going to the same database, without having to worry about which machine actually executed previous calls. Any call to any service can be handled by any node, regardless of where previous calls were executed.

The RepositoryService is probably the first service needed when working with the camunda engine. This service offers operations for managing and manipulating deployments and process definitions. Without going into much detail here, a process definition is the Java counterpart of a BPMN 2.0 process. It is a representation of the structure and behaviour of each of the steps of a process. A deployment is the unit of packaging within the engine. A deployment can contain multiple BPMN 2.0 XML files and any other resource. The choice of what is included in one deployment is up to the developer. It can range from a single process BPMN 2.0 XML file to a whole package of processes and relevant resources (for example, the deployment 'hr-processes' could contain everything related to HR processes). The RepositoryService allows you to deploy such packages. Deploying a deployment means it is uploaded to the engine, where all processes are inspected and parsed before being stored in the database. From that point on, the deployment is known to the system and any process included in the deployment can be started.
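
Deploying such a package might look as follows (a minimal sketch; the deployment name and resource paths are illustrative):

repositoryService.createDeployment()
  .name("hr-processes")
  .addClasspathResource("org/example/hr/hire-employee.bpmn20.xml")
  .addClasspathResource("org/example/hr/fire-employee.bpmn20.xml")
  .deploy();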

Furthermore, this service allows you to

  • Query on deployments and process definitions known to the engine.
  • Suspend and activate process definitions. Suspending means no further operations can be done on them, while activation is the opposite operation.
  • Retrieve various resources such as files contained within the deployment or process diagrams that were auto-generated by the engine.

While the RepositoryService is rather about static information (i.e. data that doesn't change, or at least not a lot), the RuntimeService is quite the opposite. It deals with starting new process instances of process definitions. As said above, a process definition defines the structure and behaviour of the different steps in a process. A process instance is one execution of such a process definition. For each process definition there are typically many instances running at the same time. The RuntimeService is also the service which is used to retrieve and store process variables. This is data which is specific to the given process instance and can be used by various constructs in the process (e.g. an exclusive gateway often uses process variables to determine which path is chosen to continue the process). The RuntimeService also allows you to query process instances and executions. Executions are a representation of the 'token' concept of BPMN 2.0. Basically, an execution is a pointer to where the process instance currently is. Lastly, the RuntimeService is used whenever a process instance is waiting for an external trigger and the process needs to be continued. A process instance can have various wait states and this service contains various operations to 'signal' the instance that the external trigger has been received and the process instance can be continued.
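
Continuing a process instance that is waiting in a wait state might look as follows (a minimal sketch; the execution id and variable name are illustrative):

// provide data for the continued execution, then signal the waiting execution
runtimeService.setVariable(executionId, "orderApproved", true);
runtimeService.signal(executionId);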

Tasks that need to be performed by actual human users of the system are core to the process engine. Everything around tasks is grouped in the TaskService, such as

  • Querying tasks assigned to users or groups.
  • Creating new standalone tasks. These are tasks that are not related to a process instance.
  • Manipulating to which user a task is assigned or which users are in some way involved with the task.
  • Claiming and completing a task. Claiming means that someone decided to be the assignee for the task, meaning that this user will complete the task. Completing means 'doing the work of the task'. Typically this is filling in a form of sorts.
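
A typical claim-and-complete cycle might look as follows (a minimal sketch; the user id is illustrative):

// find a task the user is a candidate for, then claim and complete it
Task task = taskService.createTaskQuery()
  .taskCandidateUser("kermit")
  .singleResult();
taskService.claim(task.getId(), "kermit");
taskService.complete(task.getId());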

The IdentityService is pretty simple. It allows the management (creation, update, deletion, querying, ...) of groups and users. It is important to understand that the core engine actually doesn't do any checking on users at runtime. For example, a task could be assigned to any user, but the engine does not verify if that user is known to the system. This is because the engine can also be used in conjunction with services such as LDAP, Active Directory, etc.
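
Managing users and groups might look as follows (a minimal sketch; all ids and names are illustrative):

User user = identityService.newUser("kermit");
user.setFirstName("Kermit");
identityService.saveUser(user);

Group group = identityService.newGroup("management");
group.setName("Management");
identityService.saveGroup(group);

identityService.createMembership("kermit", "management");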

The FormService is an optional service, meaning that the camunda engine can perfectly be used without it, without sacrificing any functionality. This service introduces the concept of a start form and a task form. A start form is a form that is shown to the user before the process instance is started, while a task form is the form that is displayed when a user wants to complete a task. You can define these forms in the BPMN 2.0 process definition. This service exposes this data in a way that is easy to work with. But again, this is optional, as forms don't need to be embedded in the process definition.

The HistoryService exposes all historical data gathered by the engine. When executing processes, a lot of data can be kept by the engine (this is configurable), such as process instance start times, who did which tasks, how long it took to complete the tasks, which path was followed in each process instance, etc. This service mainly exposes query capabilities to access this data.
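
For example, querying for all finished instances of a given process definition might look as follows (a minimal sketch):

List<HistoricProcessInstance> finishedInstances = historyService
  .createHistoricProcessInstanceQuery()
  .processDefinitionKey("invoice")
  .finished()
  .list();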

The ManagementService is typically not needed when coding custom applications. It allows you to retrieve information about the database tables and table metadata. Furthermore, it exposes query capabilities and management operations for jobs. Jobs are used in the engine for various things such as timers, asynchronous continuations, delayed suspension/activation, etc. Later on, these topics will be discussed in more detail.
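
Working with the ManagementService might look as follows (a minimal sketch):

// list the jobs that are currently executable
List<Job> executableJobs = managementService.createJobQuery()
  .executable()
  .list();

// retrieve the row count per database table
Map<String, Long> tableCounts = managementService.getTableCount();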

Javadocs:

For more detailed information on the service operations and the engine API, see the javadocs.

Query API

Querying data from the engine is possible in multiple ways:

  • Java Query API: Fluent Java API to query engine entities (like ProcessInstances, Tasks, ...).
  • REST Query API: REST API to query engine entities (like ProcessInstances, Tasks, ...).
  • Native Queries: Provide your own SQL queries to retrieve engine entities (like ProcessInstances, Tasks, ...) if the Query API lacks the possibilities you need (e.g. OR conditions).
  • Custom Queries: Use completely custom queries and your own MyBatis mapping to retrieve your own value objects or join engine data with domain data.
  • SQL Queries: Use database SQL queries for use cases like reporting.

The recommended way is to use one of the Query APIs.

The Java Query API allows you to program completely typesafe queries with a fluent API. You can add various conditions to your queries (all of which are applied together as a logical AND) and precisely one ordering. The following code shows an example:

List<Task> tasks = taskService.createTaskQuery()
  .taskAssignee("kermit")
  .processVariableValueEquals("orderId", "0815")
  .orderByDueDate().asc()
  .list();

You can find more information on this in the Javadocs.

REST Query API

The Java Query API is exposed as REST service as well, see REST documentation for details.

Native Queries

Sometimes you need more powerful queries, e.g. queries using an OR operator or restrictions you cannot express using the Query API. For these cases, we introduced native queries, which allow you to write your own SQL queries. The return type is defined by the Query object you use and the data is mapped into the correct objects, e.g. Task, ProcessInstance, Execution, etc. Since the query will be fired at the database, you have to use table and column names as they are defined in the database; this requires some knowledge about the internal data structure and it is recommended to use native queries with care. The table names can be retrieved via the API to keep the dependency as small as possible.

List<Task> tasks = taskService.createNativeTaskQuery()
  .sql("SELECT * FROM " + managementService.getTableName(Task.class) + " T WHERE T.NAME_ = #{taskName}")
  .parameter("taskName", "gonzoTask")
  .list();

long count = taskService.createNativeTaskQuery()
  .sql("SELECT count(*) FROM " + managementService.getTableName(Task.class) + " T1, "
         + managementService.getTableName(VariableInstanceEntity.class) + " V1 WHERE V1.TASK_ID_ = T1.ID_")
  .count();

Custom Queries

For performance reasons it might sometimes be desirable not to query the engine objects but your own value or DTO objects collecting data from different tables, possibly including your own domain classes.

SQL Queries

The table layout is pretty straightforward - we concentrated on making it easy to understand. Hence it is OK to run SQL queries against it, e.g. for reporting use cases. Just make sure that you do not mess up the engine data by updating the tables without knowing exactly what you are doing.

Process Engine Concepts

This section explains some core process engine concepts that are used in both the process engine API and the internal process engine implementation. Understanding these fundamentals makes it easier to use the process engine API.

Process Definitions

A process definition defines the structure of a process. You could say that the process definition is the process. camunda BPM uses BPMN 2.0 as its primary modeling language for modeling process definitions.

BPMN 2.0 Reference

camunda BPM comes with two BPMN 2.0 References:

  • The BPMN 2.0 Modeling Reference introduces the fundamentals of BPMN 2.0 and helps you to get started modeling processes. (Make sure to read the Tutorial as well.)
  • The BPMN 2.0 Implementation Reference covers the implementation of the individual BPMN 2.0 constructs in camunda BPM. You should consult this reference if you want to implement and execute BPMN processes.

In camunda BPM you can deploy processes to the process engine in BPMN 2.0 XML format. The XML files are parsed and transformed into a process definition graph structure. This graph structure is executed by the process engine.

Querying for Process Definitions

You can query for all deployed process definitions using the Java API and the ProcessDefinitionQuery made available through the RepositoryService. Example:

List<ProcessDefinition> processDefinitions = repositoryService.createProcessDefinitionQuery()
    .processDefinitionKey("invoice")
    .orderByProcessDefinitionVersion()
    .asc()
    .list();

The above query returns all deployed process definitions for the key invoice ordered by their version property.

You can also query for process definitions using the REST API.

Keys and Versions

The key of a process definition (invoice in the example above) is the logical identifier of the process. It is used throughout the API, most prominently for starting process instances (see section on process instances). The key of a process definition is defined using the id property of the corresponding <process ... > element in the BPMN 2.0 XML file:

<process id="invoice" name="invoice receipt" isExecutable="true">
  ...
</process>

If you deploy multiple processes with the same key, they are treated as individual versions of the same process definition by the process engine.

Suspending Process Definitions

Suspending a process definition disables it temporarily, i.e. it cannot be instantiated while it is suspended. The RepositoryService Java API can be used to suspend a process definition. Similarly, you can activate a process definition to undo this effect.
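
A minimal sketch (the process definition key is illustrative):

repositoryService.suspendProcessDefinitionByKey("invoice");
// no new instances of "invoice" can be started while it is suspended
repositoryService.activateProcessDefinitionByKey("invoice");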

Process Instances

A process instance is an individual execution of a process definition. The relation of the process instance to the process definition is the same as the relation between Object and Class in Object Oriented Programming (the process instance playing the role of the object and the process definition playing the role of the class in this analogy).

The process engine is responsible for creating process instances and managing their state. If you start a process instance which contains a wait state, for example a user task, the process engine must make sure that the state of the process instance is captured and stored inside a database until the wait state is left (the user task is completed).

Starting a Process Instance

The simplest way to start a process instance is by using the startProcessInstanceByKey(...) method offered by the RuntimeService:

ProcessInstance instance = runtimeService.startProcessInstanceByKey("invoice");

You may optionally pass in a couple of variables:

Map<String, Object> variables = new HashMap<String,Object>();
variables.put("creditor", "Nice Pizza Inc.");
ProcessInstance instance = runtimeService.startProcessInstanceByKey("invoice", variables);

Process variables are available to all tasks in a process instance and are automatically persisted to the database in case the process instance reaches a wait state.

It is also possible to start a process instance using the REST API.

Querying for Process Instances

You can query for all currently running process instances using the ProcessInstanceQuery offered by the RuntimeService:

runtimeService.createProcessInstanceQuery()
    .processDefinitionKey("invoice")
    .variableValueEquals("creditor", "Nice Pizza Inc.")
    .list();

The above query would select all process instances for the invoice process where the creditor is Nice Pizza Inc.

You can also query for process instances using the REST API.

Interacting with a Process Instance

Once you have performed a query for a particular process instance (or a list of process instances), you may want to interact with it. There are multiple possibilities to interact with a process instance, most prominently:

  • Triggering it (make it continue execution):
    • Using the RuntimeService.signal(...) method.
  • Canceling it:
    • Using the RuntimeService.deleteProcessInstance(...) method.

If your process uses User Tasks, you can also interact with the process instance using the TaskService API.

Suspending Process Instances

Suspending a process instance is helpful if you want to ensure that it is not executed any further. For example, if process variables are in an undesired state, you can suspend the instance and change the variables safely.

In detail, suspension means disallowing all actions that change the token state (i.e. the activities that are currently executed) of the instance. For example, it is not possible to signal an event or complete a user task for a suspended process instance, as these actions would continue the process instance execution. Nevertheless, actions like setting or removing variables are still allowed, as they do not change the token state.

Also, when suspending a process instance, all tasks belonging to it will be suspended. Therefore, it will no longer be possible to invoke actions that have effects on the task's lifecycle (i.e. user assignment, task delegation, task completion, ...). However, any actions not touching the lifecycle like setting variables or adding comments will still be allowed.

A process instance can be suspended by using the suspendProcessInstanceById(...) method of the RuntimeService. Similarly it can be reactivated again.

If you would like to suspend all process instances of a given process definition, you can use the method suspendProcessDefinitionById(...) of the RepositoryService and specify the suspendProcessInstances option.
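
Both variants might look as follows (a minimal sketch; the ids are illustrative):

// suspend and reactivate a single process instance
runtimeService.suspendProcessInstanceById(processInstanceId);
runtimeService.activateProcessInstanceById(processInstanceId);

// suspend a process definition together with all of its process instances
repositoryService.suspendProcessDefinitionById(processDefinitionId, true, null);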

Executions

If your process instance contains multiple execution paths (like for instance after a parallel gateway), you must be able to differentiate the currently active paths inside the process instance. In the following example, the two user tasks "receive payment" and "ship order" can be active at the same time.

Internally the process engine creates two concurrent executions inside the process instance, one for each concurrent path of execution. Executions are also created for scopes, for example if the process engine reaches an Embedded Sub Process or in case of Multi Instance.

Executions are hierarchical and all executions inside a process instance span a tree, the process instance being the root node of the tree. Note: the process instance itself is an execution.

Local Variables

Executions can have local variables. Local variables are only visible to the execution itself and its children, but not to siblings or parents in the execution tree. Local variables are usually used if a part of the process works on some local data object or if an execution works on one item of a collection in case of multi instance.

In order to set a local variable on an execution, use the setVariableLocal method provided by the RuntimeService:

runtimeService.setVariableLocal(executionId, variableName, value);

Querying for executions

You can query for executions using the ExecutionQuery offered by the RuntimeService:

runtimeService.createExecutionQuery()
    .processInstanceId(someId)
    .list();

The above query returns all executions for a given process instance.

You can also query for executions using the REST API.

Activity Instances

The activity instance concept is similar to the execution concept but takes a different perspective. While an execution can be imagined as a token moving through the process, an activity instance represents an individual instance of an activity (task, subprocess, ...). The concept of the activity instance is thus more state-oriented.

Activity instances also span a tree, following the scope structure provided by BPMN 2.0. Activities that are "on the same level of subprocess" (i.e. part of the same scope, contained in the same subprocess) will have their activity instances at the same level in the tree.

Examples:

  • Process with two parallel user tasks after a parallel gateway: in the activity instance tree you will see two activity instances below the root instance, one for each user task.
  • Process with two parallel multi instance user tasks after a parallel gateway: in the activity instance tree, all instances of both user tasks will be listed below the root activity instance. Reason: all activity instances are at the same level of subprocess.
  • User task inside an embedded subprocess: the activity instance tree will have 3 levels: the root instance representing the process instance itself, below it an activity instance representing the instance of the embedded subprocess, and below this one, the activity instance representing the user task.

Retrieving an Activity Instance

Currently activity instances can only be retrieved for a process instance:

ActivityInstance rootActivityInstance = runtimeService.getActivityInstance(processInstance.getProcessInstanceId());

You can retrieve the activity instance tree using the REST API as well.
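
Since the result is a tree, it can be traversed recursively. Printing it might look as follows (a minimal sketch; the helper method is illustrative):

void printActivityInstanceTree(ActivityInstance instance, String indent) {
  System.out.println(indent + instance.getActivityId());
  for (ActivityInstance child : instance.getChildActivityInstances()) {
    printActivityInstanceTree(child, indent + "  ");
  }
}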

Identity & Uniqueness

Each activity instance is assigned a unique id. The id is persistent: if you invoke getActivityInstance(...) multiple times, the same activity instance ids will be returned for the same activity instances. (However, there might be different executions assigned; see below.)

Relation to Executions

The Execution concept in the process engine is not completely aligned with the activity instance concept because the execution tree is in general not aligned with the activity / scope concept in BPMN. In general, there is an n-to-1 relationship between Executions and ActivityInstances, i.e. at a given point in time, an activity instance can be linked to multiple executions. In addition, it is not guaranteed that the same execution that started a given activity instance will also end it. The process engine performs several internal optimizations concerning the compacting of the execution tree which might lead to executions being reordered and pruned. This can lead to situations where a given execution starts an activity instance but another execution ends it. Another special case is the process instance: if the process instance is executing a non-scope activity (for example a user task) below the process definition scope, it will be referenced by both the root activity instance and the user task activity instance.

Note: If you need to interpret the state of a process instance in terms of a BPMN process model, it is usually easier to use the activity instance tree as opposed to the execution tree.

Delegation Code

Delegation Code allows you to execute external Java code or evaluate expressions when certain events occur during process execution.

There are different types of Delegation Code:

  • Java Delegates can be attached to a BPMN ServiceTask.
  • Execution Listeners can be attached to any event within the normal token flow, e.g. starting a process instance or entering an activity.
  • Task Listeners can be attached to events within the user task lifecycle, e.g. creation or completion of a user task.

You can create generic delegation code and configure this via the BPMN 2.0 XML using so-called Field Injection.

Java Delegate

To implement a class that can be called during process execution, this class needs to implement the org.camunda.bpm.engine.delegate.JavaDelegate interface and provide the required logic in the execute method. When process execution arrives at this particular step, it will execute the logic defined in that method and leave the activity in the default BPMN 2.0 way.

Let's create, for example, a Java class that can be used to change a process variable String to uppercase. This class needs to implement the org.camunda.bpm.engine.delegate.JavaDelegate interface, which requires us to implement the execute(DelegateExecution) method. It's this operation that will be called by the engine and which needs to contain the business logic. Process instance information such as process variables can be accessed and manipulated through the DelegateExecution interface (click on the link for a detailed Javadoc of its operations).

public class ToUppercase implements JavaDelegate {

  public void execute(DelegateExecution execution) throws Exception {
    String var = (String) execution.getVariable("input");
    var = var.toUpperCase();
    execution.setVariable("input", var);
  }

}

Note: there will be only one instance of that Java class created for the serviceTask it is defined on. All process instances share the same class instance that will be used to call execute(DelegateExecution). This means that the class must not use any member variables and must be thread-safe, since it can be executed simultaneously from different threads. This also influences the way field injection is handled.

The classes that are referenced in the process definition (i.e. by using camunda:class) are NOT instantiated during deployment. Only when a process execution arrives for the first time at the point in the process where the class is used, an instance of that class will be created. If the class cannot be found, a ProcessEngineException will be thrown. The reason for this is that the environment (and more specifically the classpath) when you are deploying is often different from the actual runtime environment.

Activity Behavior

Instead of writing a Java Delegate, it is also possible to provide a class that implements the org.camunda.bpm.engine.impl.pvm.delegate.ActivityBehavior interface. Implementations then have access to the more powerful ActivityExecution which, for example, also allows them to influence the control flow of the process. Note however that this is not a very good practice and should be avoided as much as possible. It is advised to use the ActivityBehavior interface only for advanced use cases and if you know exactly what you're doing.

Field Injection

It's possible to inject values into the fields of the delegated classes. The following types of injection are supported:

  • Fixed string values
  • Expressions

If available, the value is injected through a public setter method on your delegated class, following the Java Bean naming conventions (e.g. field firstName has setter setFirstName(...)). If no setter is available for that field, the value of the private member will be set on the delegate (but using private fields is strongly discouraged - see warning below).

Regardless of the type of value declared in the process-definition, the type of the setter/private field on the injection target should always be org.camunda.bpm.engine.delegate.Expression.

Private fields cannot always be modified! It does not work with e.g. CDI beans (because you have proxies instead of real objects) or with some SecurityManager configurations. Please always use a public setter method for the fields you want to have injected!

The following code snippet shows how to inject a constant value into a field. Field injection is supported when using the class attribute. Note that we need to declare an extensionElements XML element before the actual field injection declarations, which is a requirement of the BPMN 2.0 XML Schema.

<serviceTask id="javaService"
             name="Java service invocation"
             camunda:class="org.camunda.bpm.examples.bpmn.servicetask.ToUpperCaseFieldInjected">
  <extensionElements>
      <camunda:field name="text" stringValue="Hello World" />
  </extensionElements>
</serviceTask>

The class ToUpperCaseFieldInjected has a field text which is of type org.camunda.bpm.engine.delegate.Expression. When calling text.getValue(execution), the configured string value Hello World will be returned.
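
The delegate class itself might look as follows (a sketch; storing the result in a process variable named var is an assumption for illustration):

public class ToUpperCaseFieldInjected implements JavaDelegate {

  private Expression text; // injected by the engine

  public void execute(DelegateExecution execution) throws Exception {
    // resolves to the configured string value "Hello World"
    String value = (String) text.getValue(execution);
    execution.setVariable("var", value.toUpperCase());
  }
}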

Alternatively, for long texts (e.g. an inline e-mail) the camunda:string sub element can be used:

<serviceTask id="javaService"
             name="Java service invocation"
             camunda:class="org.camunda.bpm.examples.bpmn.servicetask.ToUpperCaseFieldInjected">
  <extensionElements>
    <camunda:field name="text">
      <camunda:string>
        Hello World
      </camunda:string>
    </camunda:field>
  </extensionElements>
</serviceTask>

To inject values that are dynamically resolved at runtime, expressions can be used. Those expressions can use process variables, CDI or Spring beans. As already noted, an instance of the Java class is shared among all process instances in a service task. To have dynamic injection of values in fields, you can inject value and method expressions in an org.camunda.bpm.engine.delegate.Expression which can be evaluated/invoked using the DelegateExecution passed in the execute method.

<serviceTask id="javaService" name="Java service invocation"
             camunda:class="org.camunda.bpm.examples.bpmn.servicetask.ReverseStringsFieldInjected">

  <extensionElements>
    <camunda:field name="text1">
      <camunda:expression>${genderBean.getGenderString(gender)}</camunda:expression>
    </camunda:field>
    <camunda:field name="text2">
       <camunda:expression>Hello ${gender == 'male' ? 'Mr.' : 'Mrs.'} ${name}</camunda:expression>
    </camunda:field>
  </extensionElements>
</serviceTask>

The example class below uses the injected expressions and resolves them using the current DelegateExecution.

public class ReverseStringsFieldInjected implements JavaDelegate {

  private Expression text1;
  private Expression text2;

  public void execute(DelegateExecution execution) {
    String value1 = (String) text1.getValue(execution);
    execution.setVariable("var1", new StringBuffer(value1).reverse().toString());

    String value2 = (String) text2.getValue(execution);
    execution.setVariable("var2", new StringBuffer(value2).reverse().toString());
  }
}

Alternatively, you can also set the expressions as an attribute instead of a child element, to make the XML less verbose:

<camunda:field name="text1" expression="${genderBean.getGenderString(gender)}" />
<camunda:field name="text1" expression="Hello ${gender == 'male' ? 'Mr.' : 'Mrs.'} ${name}" />

Since the Java class instance is reused, the injection only happens once, when the serviceTask is called for the first time. When the fields are altered by your code, the values won't be re-injected, so you should treat them as immutable and not make any changes to them.

Execution Listener

Execution listeners allow you to execute external Java code or evaluate an expression when certain events occur during process execution. The events that can be captured are:

  • Start and ending of a process instance.
  • Taking a transition.
  • Start and ending of an activity.
  • Start and ending of a gateway.
  • Start and ending of intermediate events.
  • Ending a start event or starting an end event.

The following process definition contains 3 execution listeners:

<process id="executionListenersProcess">
  <extensionElements>
    <camunda:executionListener
        event="start"
        class="org.camunda.bpm.examples.bpmn.executionlistener.ExampleExecutionListenerOne" />
  </extensionElements>

  <startEvent id="theStart" />

  <sequenceFlow sourceRef="theStart" targetRef="firstTask" />

  <userTask id="firstTask" />

  <sequenceFlow sourceRef="firstTask" targetRef="secondTask">
    <extensionElements>
      <camunda:executionListener
          class="org.camunda.bpm.examples.bpmn.executionListener.ExampleExecutionListenerTwo" />
    </extensionElements>
  </sequenceFlow>

  <userTask id="secondTask">
    <extensionElements>
      <camunda:executionListener expression="${myPojo.myMethod(execution.event)}" event="end" />
    </extensionElements>
  </userTask>

  <sequenceFlow sourceRef="secondTask" targetRef="thirdTask" />

  <userTask id="thirdTask" />

  <sequenceFlow sourceRef="thirdTask" targetRef="theEnd" />

  <endEvent id="theEnd" />
</process>

The first execution listener is notified when the process starts. The listener is an external Java class (like ExampleExecutionListenerOne) and should implement the org.camunda.bpm.engine.delegate.ExecutionListener interface. When the event occurs (in this case a start event) the method notify(DelegateExecution execution) is called.

public class ExampleExecutionListenerOne implements ExecutionListener {

  public void notify(DelegateExecution execution) throws Exception {
    execution.setVariable("variableSetInExecutionListener", "firstValue");
    execution.setVariable("eventReceived", execution.getEventName());
  }
}

It is also possible to use a delegation class that implements the org.camunda.bpm.engine.delegate.JavaDelegate interface. These delegation classes can then be reused in other constructs, such as a delegation for a serviceTask.

The second execution listener is called when the transition is taken. Note that the listener element doesn't define an event, since only take events are fired on transitions. Values in the event attribute are ignored when a listener is defined on a transition.

The last execution listener is called when the activity secondTask ends. Instead of using the class attribute on the listener declaration, an expression is defined which is evaluated/invoked when the event is fired.

<camunda:executionListener expression="${myPojo.myMethod(execution.eventName)}" event="end" />

As with other expressions, execution variables are resolved and can be used. Because the execution implementation object has a property that exposes the event name, it's possible to pass the event-name to your methods using execution.eventName.

Execution listeners also support using a delegateExpression, similar to a service task.

<camunda:executionListener event="start" delegateExpression="${myExecutionListenerBean}" />

Task Listener

A task listener is used to execute custom Java logic or an expression upon the occurrence of a certain task-related event.

A task listener can only be added in the process definition as a child element of a user task. Note that this also must happen as a child of the BPMN 2.0 extensionElements element and in the camunda namespace, since a task listener is a construct specific to the camunda engine.

<userTask id="myTask" name="My Task" >
  <extensionElements>
    <camunda:taskListener event="create" class="org.camunda.bpm.MyTaskCreateListener" />
  </extensionElements>
</userTask>

A task listener supports the following attributes:

  • event (required): the type of task event on which the task listener will be invoked. Possible events are:
    • create: occurs when the task has been created and all task properties are set.
    • assignment: occurs when the task is assigned to somebody. Note: when process execution arrives in a userTask, first an assignment event will be fired, before the create event is fired. This might seem an unnatural order, but the reason is pragmatic: when receiving the create event, we usually want to inspect all properties of the task including the assignee.
    • complete: occurs when the task is completed and just before the task is deleted from the runtime data.
  • class: the delegation class that must be called. This class must implement the org.camunda.bpm.engine.delegate.TaskListener interface.

    public class MyTaskCreateListener implements TaskListener {
    
      public void notify(DelegateTask delegateTask) {
        // Custom logic goes here
      }
    
    }

    It is also possible to use field injection to pass process variables or the execution to the delegation class. Note that only a single instance of the delegation class is created (as is the case with any class delegation in the engine), which means that the instance is shared between all process instance executions.

  • expression: (cannot be used together with the class attribute) specifies an expression that will be executed when the event happens. It is possible to pass the DelegateTask object and the name of the event (using task.eventName) as parameters to the called object.

    <camunda:taskListener event="create" expression="${myObject.callMethod(task, task.eventName)}" />
  • delegateExpression: allows to specify an expression that resolves to an object implementing the TaskListener interface, similar to a service task.

    <camunda:taskListener event="create" delegateExpression="${myTaskListenerBean}" />

Field Injection on Listener

When using listeners configured with the class attribute, field injection can be applied. This is exactly the same mechanism as described for Java Delegates, which contains an overview of the possibilities provided by field injection.

The fragment below shows a simple example process with an execution listener with fields injected:

<process id="executionListenersProcess">
  <extensionElements>
    <camunda:executionListener class="org.camunda.bpm.examples.bpmn.executionListener.ExampleFieldInjectedExecutionListener" event="start">
      <camunda:field name="fixedValue" stringValue="Yes, I am " />
      <camunda:field name="dynamicValue" expression="${myVar}" />
    </camunda:executionListener>
  </extensionElements>

  <startEvent id="theStart" />
  <sequenceFlow sourceRef="theStart" targetRef="firstTask" />

  <userTask id="firstTask" />
  <sequenceFlow sourceRef="firstTask" targetRef="theEnd" />

  <endEvent id="theEnd" />
</process>

The actual listener implementation may look like the following:

public class ExampleFieldInjectedExecutionListener implements ExecutionListener {

  private Expression fixedValue;

  private Expression dynamicValue;

  public void notify(DelegateExecution execution) throws Exception {
    String value =
      fixedValue.getValue(execution).toString() +
      dynamicValue.getValue(execution).toString();

    execution.setVariable("var", value);
  }
}

The class ExampleFieldInjectedExecutionListener concatenates the two injected fields (one fixed and the other dynamic) and stores the result in the process variable var.

@Deployment(resources = {
  "org/camunda/bpm/examples/bpmn/executionListener/ExecutionListenersFieldInjectionProcess.bpmn20.xml"
})
public void testExecutionListenerFieldInjection() {
  Map<String, Object> variables = new HashMap<String, Object>();
  variables.put("myVar", "listening!");

  ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("executionListenersProcess", variables);

  Object varSetByListener = runtimeService.getVariable(processInstance.getId(), "var");
  assertNotNull(varSetByListener);
  assertTrue(varSetByListener instanceof String);

  // Result is a concatenation of fixed injected field and injected expression
  assertEquals("Yes, I am listening!", varSetByListener);
}

Throwing BPMN Errors from Delegation Code

In the example above, the error event is attached to a Service Task. In order to get this to work, the Service Task has to throw the corresponding error. This is done by using a provided Java exception class from within your Java code (e.g. in the JavaDelegate):

public class BookOutGoodsDelegate implements JavaDelegate {
  public void execute(DelegateExecution execution) throws Exception {
    try {
        ...
    } catch (NotOnStockException ex) {
        throw new BpmnError(NOT_ON_STOCK_ERROR);
    }
  }
}

Process Versioning

Versioning of process definitions

Business processes are by nature long running. Process instances may last for weeks or months. In the meantime, the state of the process instance is stored in the database. But sooner or later you might want to change the process definition, even if there are still running instances.

This is supported by the process engine:

  • If you redeploy a changed process definition, you get a new version in the database.
  • Running process instances will keep running in the version they were started with.
  • New process instances will run in the new version - unless specified explicitly.
  • Migrating process instances to a new version is supported within certain limits.

So you can see different versions in the process definition table, and the process instances are linked to them:

Which version will be used

When you start a process instance

  • by key: It starts an instance of the latest deployed version of the process definition with the given key.
  • by id: It starts an instance of the deployed process definition with the database id. By using this you can start a specific version.

The default and recommended usage is to just use startProcessInstanceByKey and always use the latest version:

processEngine.getRuntimeService().startProcessInstanceByKey("invoice"); 
// will use the latest version (2 in our example)

If you want to specifically start an instance of an old process definition, use a Process Definition Query to find the correct ProcessDefinition id and startProcessInstanceById:

ProcessDefinition pd = processEngine.getRepositoryService().createProcessDefinitionQuery()
    .processDefinitionKey("invoice")
    .processDefinitionVersion(1).singleResult();
processEngine.getRuntimeService().startProcessInstanceById(pd.getId());

When you use BPMN CallActivities you can configure which version is used:

<callActivity id="callSubProcess" calledElement="checkCreditProcess"
  camunda:calledElementBinding="latest|deployment|version"
  camunda:calledElementVersion="17">
</callActivity>

The options are

  • latest: use the latest version of the process definition (as with startProcessInstanceByKey).
  • deployment: use the process definition in the version matching the version of the calling process. This works if they are deployed within one deployment - as then they are always versioned together (see Process Application Deployment for more details).
  • version: specify the version hard coded in the XML.

Key vs. ID of a process definition

You might have spotted that two different columns exist in the process definition table with different meanings:

  • Key: The key is the unique identifier of the process definition in the XML, so its value is read from the id attribute in the XML:

    <bpmn2:process id="invoice" ...
  • Id: The id is the database primary key, an artificial key normally composed of the key, the version and a generated id (note that the id may be shortened to fit into the database column, so there is no guarantee that it is built this way).
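
To illustrate the difference, the following sketch lists all versions sharing the key invoice, each with its own database id (it reuses the repository service shown above):

List<ProcessDefinition> definitions = processEngine.getRepositoryService()
    .createProcessDefinitionQuery()
    .processDefinitionKey("invoice")
    .orderByProcessDefinitionVersion().asc()
    .list();

// every entry shares the key "invoice" but has a distinct id and version
for (ProcessDefinition definition : definitions) {
  System.out.println(definition.getId() + " (version " + definition.getVersion() + ")");
}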

Version Migration

Sometimes it is necessary to migrate (upgrade) running process instances to a new version, for example because you added an important new task or fixed a bug. In this case, the running process instances can be migrated to the new version.

Please note that migration can only be applied if a process instance is currently in a persistent wait state; see Transactions in Processes.

Heads-Up: Due to the risks and limitations mentioned below, this is considered an advanced use case. It is not available via the public API, but you can use an internal command:

public void migrateVersion() {
   String processInstanceId = "71712c34-af1d-11e1-8950-08002700282e";
   int newVersion = 2;
   SetProcessDefinitionVersionCmd command = 
      new SetProcessDefinitionVersionCmd(processInstanceId, newVersion);
   ((ProcessEngineImpl) ProcessEngines.getDefaultProcessEngine())
        .getProcessEngineConfiguration()
        .getCommandExecutorTxRequired().execute(command);
}

Risks and limitations of Version Migration

Process version migration is not an easy topic in itself. Migrating process instances to a new version works only if, for all currently existing executions and running tokens, the "current activity" with the same id still exists in the new process definition, and the scopes, sub executions, jobs and so on are still valid.

Hence the cases in which this simple instance migration works are limited. The following examples will cause problems:

  • If the new version introduces a new (message / signal / timer) boundary event attached to an activity, process instances which are waiting at this activity cannot be migrated (since the activity is a scope in the new version and not a scope in the old version).
  • If the new version introduces a new (message / signal / timer) boundary event attached to a subprocess, process instances which are waiting in an activity contained by the subprocess can be migrated, but the event will never trigger (event subscription / timer not created when entering the scope).
  • If the new version removes a (message / signal / timer) boundary event attached to an activity, process instances which are waiting at this activity cannot be migrated.
  • If the new version removes a timer boundary event attached to a subprocess, process instances which are waiting at an activity contained by the subprocess can be migrated. If the timer job is triggered (executed by the job executor) it will fail. The timer job is removed with the scope execution.
  • If the new version removes a signal or message boundary event attached to a subprocess, process instances which are waiting at an activity contained by the subprocess can be migrated. The signal/message subscription already exists but cannot be triggered anymore. The subscription is removed with the scope execution.
  • If a new version changes field injection on Java classes, you might end up setting attributes on a Java class which no longer exist or, the other way round, be missing attributes.

Other important aspects to think of when doing version migration are:

  • Execution: Migration can lead to situations where some activities from the old or new process definition might never have been executed for some process instances. Keep this in mind; you might have to deal with this in some of your own migration scripts.
  • Traceability and Audit Trail: Is the produced audit trail still valid if some entries point to version 1 and some to version 2? Do all activities still exist in the new process definition?
  • Reporting: Your reports may be broken or show strange figures if they get confused by the version mishmash.
  • KPI Monitoring: Let's assume you introduced new KPIs; for migrated process instances you might get only parts of the figures. Does this do any harm to your monitoring?

If you cannot migrate your process instance you have a couple of alternatives, for example:

  • Keep running the old version (as described at the beginning).
  • Cancel the old process instance and start a new one. The challenge is to skip activities that were already executed and "jump" to the right wait state. This is currently a difficult task; you could leverage Message Start Events here. We are currently discussing providing more support for this via Migration Points. Sometimes you can also work around it by adding some magic to your code, by deploying some mocks during a migration phase, or by another creative solution.
  • Cancel the old process instances and start a new one in a completely customized migration process definition.

So there is actually no "standard" way; if in doubt, discuss the right solution for your environment with us.

Database Configuration

There are two ways to configure the database that the camunda engine will use. The first option is to define the JDBC properties of the database:

  • jdbcUrl: JDBC URL of the database.
  • jdbcDriver: implementation of the driver for the specific database type.
  • jdbcUsername: username to connect to the database.
  • jdbcPassword: password to connect to the database.

Note that internally the engine uses Apache MyBatis for persistence.

The data source that is constructed based on the provided JDBC properties will have the default MyBatis connection pool settings. The following attributes can optionally be set to tweak that connection pool (taken from the MyBatis documentation):

  • jdbcMaxActiveConnections: The maximum number of active connections that the connection pool can contain at any time. Default is 10.
  • jdbcMaxIdleConnections: The maximum number of idle connections that the connection pool can contain at any time.
  • jdbcMaxCheckoutTime: The amount of time in milliseconds a connection can be 'checked out' from the connection pool before it is forcefully returned. Default is 20000 (20 seconds).
  • jdbcMaxWaitTime: This is a low-level setting that gives the pool a chance to print a log status and re-attempt the acquisition of a connection in the case that it is taking unusually long (to avoid failing silently forever if the pool is misconfigured). Default is 20000 (20 seconds).

Example database configuration:

<property name="jdbcUrl" value="jdbc:h2:mem:camunda;DB_CLOSE_DELAY=1000" />
<property name="jdbcDriver" value="org.h2.Driver" />
<property name="jdbcUsername" value="sa" />
<property name="jdbcPassword" value="" />

Alternatively, a javax.sql.DataSource implementation can be used (e.g. DBCP from Apache Commons):

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" >
  <property name="driverClassName" value="com.mysql.jdbc.Driver" />
  <property name="url" value="jdbc:mysql://localhost:3306/camunda" />
  <property name="username" value="camunda" />
  <property name="password" value="camunda" />
  <property name="defaultAutoCommit" value="false" />
</bean>

<bean id="processEngineConfiguration" class="org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration">

    <property name="dataSource" ref="dataSource" />
    ...

Note that camunda does not ship with a library that allows you to define such a data source, so you have to make sure that the libraries (e.g. from DBCP) are on your classpath.

The following properties can be set, regardless of whether you are using the JDBC or data source approach:

  • databaseType: it is normally not necessary to specify this property as it is automatically detected from the database connection metadata. It should only be specified if automatic detection fails. Possible values: {h2, mysql, oracle, postgres, mssql, db2}. This property is required when not using the default H2 database. This setting determines which create/drop scripts and queries will be used. See the 'Supported Databases' section for an overview of which types are supported.
  • databaseSchemaUpdate: sets the strategy for handling the database schema on process engine boot and shutdown.
    • false (default): Checks the version of the DB schema against the library when the process engine is being created and throws an exception if the versions don't match.
    • true: Upon building the process engine, a check is performed and an update of the schema is performed if it is necessary. If the schema doesn't exist, it is created.
    • create-drop: Creates the schema when the process engine is being created and drops the schema when the process engine is being closed.
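
For example, a development setup that lets the engine create or update the schema automatically could specify the following (a sketch; for production the default false is usually preferable so that schema versions are checked explicitly):

<property name="databaseSchemaUpdate" value="true" />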

Supported Databases

For information on supported databases please refer to Supported Environments.

Here are some sample JDBC URLs:

  • h2: jdbc:h2:tcp://localhost/camunda
  • mysql: jdbc:mysql://localhost:3306/camunda?autoReconnect=true
  • oracle: jdbc:oracle:thin:@localhost:1521:xe
  • postgres: jdbc:postgresql://localhost:5432/camunda
  • db2: jdbc:db2://localhost:50000/camunda
  • mssql: jdbc:sqlserver://localhost:1433/camunda

Database Table Names Explained

The table names all start with ACT. The second part is a two-character identification of the use case of the table. This use case will also roughly match the service API.

  • ACT_RE_*: 'RE' stands for repository. Tables with this prefix contain 'static' information such as process definitions and process resources (images, rules, etc.).
  • ACT_RU_*: 'RU' stands for runtime. These are the runtime tables that contain the runtime data of process instances, user tasks, variables, jobs, etc. The engine only stores the runtime data during process instance execution and removes the records when a process instance ends. This keeps the runtime tables small and fast.
  • ACT_ID_*: 'ID' stands for identity. These tables contain identity information, such as users, groups, etc.
  • ACT_HI_*: 'HI' stands for history. These are the tables that contain historic data, such as past process instances, variables, tasks, etc.
  • ACT_GE_*: general data, which is used in various use cases.

Additional database schema configuration

Business Key

Since the release of camunda-bpm Alpha 9, the unique constraint for the business key has been removed from the runtime and history tables as well as from the database schema create and drop scripts. If you rely on the constraint, you can add it manually to your schema by issuing the following SQL statements:

db2

Runtime: create unique index ACT_UNIQ_RU_BUS_KEY on ACT_RU_EXECUTION(UNI_PROC_DEF_ID, UNI_BUSINESS_KEY);
History: create unique index ACT_UNIQ_HI_BUS_KEY on ACT_HI_PROCINST(UNI_PROC_DEF_ID, UNI_BUSINESS_KEY);

h2

Runtime: alter table ACT_RU_EXECUTION add constraint ACT_UNIQ_RU_BUS_KEY unique(PROC_DEF_ID_, BUSINESS_KEY_);
History: alter table ACT_HI_PROCINST add constraint ACT_UNIQ_HI_BUS_KEY unique(PROC_DEF_ID_, BUSINESS_KEY_);

mssql

Runtime: create unique index ACT_UNIQ_RU_BUS_KEY on ACT_RU_EXECUTION (PROC_DEF_ID_, BUSINESS_KEY_) where BUSINESS_KEY_ is not null;
History: create unique index ACT_UNIQ_HI_BUS_KEY on ACT_HI_PROCINST (PROC_DEF_ID_, BUSINESS_KEY_) where BUSINESS_KEY_ is not null;

mysql

Runtime: alter table ACT_RU_EXECUTION add constraint ACT_UNIQ_RU_BUS_KEY UNIQUE (PROC_DEF_ID_, BUSINESS_KEY_);
History: alter table ACT_HI_PROCINST add constraint ACT_UNIQ_HI_BUS_KEY UNIQUE (PROC_DEF_ID_, BUSINESS_KEY_);

oracle

Runtime: create unique index ACT_UNIQ_RU_BUS_KEY on ACT_RU_EXECUTION
         (case when BUSINESS_KEY_ is null then null else PROC_DEF_ID_ end,
         case when BUSINESS_KEY_ is null then null else BUSINESS_KEY_ end);
History: create unique index ACT_UNIQ_HI_BUS_KEY on ACT_HI_PROCINST
         (case when BUSINESS_KEY_ is null then null else PROC_DEF_ID_ end,
         case when BUSINESS_KEY_ is null then null else BUSINESS_KEY_ end);

postgres

Runtime: alter table ACT_RU_EXECUTION add constraint ACT_UNIQ_RU_BUS_KEY UNIQUE (PROC_DEF_ID_, BUSINESS_KEY_);
History: alter table ACT_HI_PROCINST add constraint ACT_UNIQ_HI_BUS_KEY UNIQUE (PROC_DEF_ID_, BUSINESS_KEY_);

Custom Configuration for Microsoft SQL Server

Microsoft SQL Server implements the READ_COMMITTED isolation level differently than most databases and does not play well with the process engine's optimistic locking scheme. As a result, you may suffer from deadlocks when putting the process engine under high load.

If you experience deadlocks in your MSSQL installation, you must execute the following statements in order to enable SNAPSHOT isolation:

ALTER DATABASE [process-engine]
SET ALLOW_SNAPSHOT_ISOLATION ON

ALTER DATABASE [process-engine]
SET READ_COMMITTED_SNAPSHOT ON

where [process-engine] is the name of your database.

History and Audit Event Log

The History Event Stream provides audit information about executed process instances.

The process engine maintains the state of running process instances inside the database. This includes writing (1.) the state of a process instance to the database as it reaches a wait state and reading (2.) the state as process execution continues. We call this database the runtime database. In addition to maintaining the runtime state, the process engine creates an audit log providing audit information about executed process instances. We call this event stream the history event stream (3.). The individual events which make up this event stream are called History Events and contain data about executed process instances, activity instances, changed process variables and so forth. In the default configuration, the process engine will simply write (4.) this event stream to the history database. The HistoryService API allows querying this database (5.). The history database and the history service are optional components; if the history event stream is not logged to the history database, or if the user chooses to log events to a different database, the process engine is still able to work and to populate the history event stream. This is possible because the BPMN 2.0 Core Engine component does not read state from the history database. It is also possible to configure the amount of data logged, using the historyLevel setting in the process engine configuration.

Since the process engine does not rely on the presence of the history database for generating the history event stream, it is possible to provide different backends for storing the history event stream. The default backend is the DbHistoryEventHandler which logs the event stream to the history database. It is possible to exchange the backend and provide a custom storage mechanism for the history event log.

Choosing a History Level

The history level controls the amount of data the process engine provides via the history event stream. The following settings are available out of the box:

  • NONE: no history events are fired.
  • ACTIVITY: the following events are fired:
    • Process Instance START, UPDATE, END: fired as process instances are being started, updated, ended
    • Activity Instance START, UPDATE, END: fired as activity instances are being started, updated, ended
    • Task Instance CREATE, UPDATE, COMPLETE, DELETE: fired as task instances are being created, updated (i.e. re-assigned, delegated etc.), completed or deleted.
  • AUDIT: in addition to the events provided by history level ACTIVITY, the following events are fired:
    • Variable Instance CREATE, UPDATE, DELETE: fired as process variables are created, updated and deleted. The default history backend (DbHistoryEventHandler) writes variable instance events to the historic variable instance database table. Rows in this table are updated as variable instances are updated, meaning that only the last value of a process variable will be available.
  • FULL: in addition to the events provided by history level AUDIT, the following additional events are fired:
    • Form property UPDATE: fired as form properties are being created and/or updated.
    • The default history backend (DbHistoryEventHandler) writes historic variable updates to the database. This makes it possible to inspect the intermediate values of a process variable using the history service.

If you need to customize the amount of history events logged, you can provide a custom implementation of HistoryEventProducer and wire it into the process engine configuration.

Setting the History Level

The history level can be provided as a property in the process engine configuration. Depending on how the process engine is configured, the property can be set using Java code:

ProcessEngine processEngine = ProcessEngineConfiguration
  .createProcessEngineConfigurationFromResourceDefault()
  .setHistory(ProcessEngineConfiguration.HISTORY_FULL)
  .buildProcessEngine();

Or it can be set using Spring XML or a deployment descriptor (bpm-platform.xml, processes.xml). When using the camunda BPM JBoss subsystem, the property can be set through the JBoss configuration (standalone.xml, domain.xml).

<property name="history">audit</property>

Note that when using the default history backend, the history level is stored in the database and cannot be changed later.

The default History Implementation

The default history backend writes History Events to the appropriate database tables. The database tables can then be queried using the History Service or the REST API.

History Entities

There are five History entities, which - in contrast to the runtime data - will also remain present in the DB after process instances have been completed:

  • HistoricProcessInstances containing information about current and past process instances.
  • HistoricProcessVariables containing information about the latest state a variable held in a process instance.
  • HistoricActivityInstances containing information about a single execution of an activity.
  • HistoricTaskInstances containing information about current and past (completed and deleted) task instances.
  • HistoricDetails containing various kinds of information related to either a historic process instance, an activity instance or a task instance.

Querying History

The HistoryService exposes the methods createHistoricProcessInstanceQuery(), createHistoricProcessVariableQuery(), createHistoricActivityInstanceQuery(), createHistoricDetailQuery() and createHistoricTaskInstanceQuery() which can be used for querying history.

Below are a few examples which show some of the possibilities of the query API for history. A full description of the possibilities can be found in the javadocs, in the org.camunda.bpm.engine.history package.

HistoricProcessInstanceQuery

Get the first ten HistoricProcessInstances that are finished and which took the most time to complete (the longest duration) of all finished processes with definition 'XXX'.

historyService.createHistoricProcessInstanceQuery()
  .finished()
  .processDefinitionId("XXX")
  .orderByProcessInstanceDuration().desc()
  .listPage(0, 10);

HistoricActivityInstanceQuery

Get the last HistoricActivityInstance of type 'serviceTask' that has been finished in any process that uses the processDefinition with id XXX.

historyService.createHistoricActivityInstanceQuery()
  .activityType("serviceTask")
  .processDefinitionId("XXX")
  .finished()
  .orderByHistoricActivityInstanceEndTime().desc()
  .listPage(0, 1);

HistoricProcessVariableQuery

Get all HistoricProcessVariables from a finished process instance with id 'XXX', ordered by variable name.

historyService.createHistoricProcessVariableQuery()
  .processInstanceId("XXX")
  .orderByVariableName().desc()
  .list();

HistoricDetailQuery

The next example gets all variable updates that have been done in the process with id 123. Only HistoricVariableUpdates will be returned by this query. Note that it is possible for a certain variable name to have multiple HistoricVariableUpdate entries, one for each time the variable was updated in the process. You can use orderByTime (the time the variable update was done) or orderByVariableRevision (revision of the runtime variable at the time of the update) to find out in which order they occurred.

historyService.createHistoricDetailQuery()
  .variableUpdates()
  .processInstanceId("123")
  .orderByVariableName().asc()
  .list();

The last example gets all variable updates that were performed on the task with id "123". This returns all HistoricVariableUpdates for variables that were set on the task (task local variables), and NOT on the process instance.

historyService.createHistoricDetailQuery()
  .variableUpdates()
  .taskId("123")
  .orderByVariableName().asc()
  .list();

HistoricTaskInstanceQuery

Get ten HistoricTaskInstances that are finished and which took the most time to complete (the longest duration) of all tasks.

historyService.createHistoricTaskInstanceQuery()
  .finished()
  .orderByHistoricTaskInstanceDuration().desc()
  .listPage(0, 10);

Get HistoricTaskInstances that are deleted with a delete reason that contains "invalid", which were last assigned to user 'jonny'.

historyService.createHistoricTaskInstanceQuery()
  .finished()
  .taskDeleteReasonLike("%invalid%")
  .taskAssignee("jonny")
  .listPage(0, 10);

Providing a custom History Backend

In order to understand how to provide a custom history backend, it is useful to first look at the history architecture in more detail:

Whenever the state of a runtime entity is changed, the core execution component of the process engine fires History Events. In order to make this flexible, the actual creation of the History Events, as well as populating them with data from the runtime structures, is delegated to the History Event Producer. The producer is handed the runtime data structures (such as an ExecutionEntity or a TaskEntity), creates a new History Event and populates it with data extracted from the runtime structures.

The event is next delivered to the History Event Handler which constitutes the History Backend. The drawing above contains a logical component named event transport. This represents the channel between the process engine core component producing the events and the History Event Handler. In the default implementation, events are delivered to the History Event Handler synchronously and inside the same JVM. It is however conceptually possible to send the event stream to a different JVM (maybe running on a different machine) and to make delivery asynchronous. A good fit might be a transactional message queue (JMS).

Once the event has reached the History Event Handler, it can be processed and stored in some kind of datastore. The default implementation writes events to the History Database such that they can be queried using the History Service.

Exchanging the History Event Handler with a custom implementation allows users to plug in a custom History Backend. In order to do so, two main steps are required:

  • Provide a custom implementation of the HistoryEventHandler interface.
  • Wire the custom implementation in the process engine configuration.

Note that if you provide a custom implementation of the HistoryEventHandler and wire it with the process engine, you override the default DbHistoryEventHandler. The consequence is that the process engine will stop writing to the history database and you will not be able to use the history service for querying the audit log. If you do not want to replace the default behavior but only provide an additional event handler, you need to write a composite History Event Handler which dispatches events to a collection of handlers.
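
As an illustration, a minimal custom handler might look like the following sketch (it assumes the HistoryEventHandler interface with its handleEvent and handleEvents methods from the org.camunda.bpm.engine.impl.history.handler package; the class name is made up):

public class ConsoleHistoryEventHandler implements HistoryEventHandler {

  public void handleEvent(HistoryEvent historyEvent) {
    // forward the event to a custom store; here we simply print it
    System.out.println("history event: " + historyEvent);
  }

  public void handleEvents(List<HistoryEvent> historyEvents) {
    for (HistoryEvent historyEvent : historyEvents) {
      handleEvent(historyEvent);
    }
  }
}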

Process Definition Cache

All process definitions are cached (after they are parsed) to avoid hitting the database every time a process definition is needed and because process definition data doesn't change.

Transactions in Processes

The process engine is a piece of passive Java Code, which works in the Thread of the client. For instance, if you have a web application allowing users to start a new process instance and a user clicks on the corresponding button, some thread from the application server's http-thread-pool will invoke the API method runtimeService.startProcessInstanceByKey(...), thus entering the process engine and starting a new process instance. We call this "borrowing the client thread".

On any such external trigger (i.e. start a process, complete a task, signal an execution), the engine runtime is going to advance in the process until it reaches wait states on each active path of execution. A wait state is a task which is performed later, which means that the engine persists the current execution to the database and waits to be triggered again. For example in case of a user task, the external trigger on task completion causes the runtime to execute the next bit of the process until wait states are reached again (or the instance ends). In contrast to user tasks, a timer event is not triggered externally. Instead it is continued by an internal trigger. That is why the engine also needs an active component, the job executor, which is able to fetch registered jobs and process them asynchronously.

Wait States

We talked about wait states as transaction boundaries, where the process state is stored in the database, the thread returns to the client and the transaction is committed. The following BPMN elements are always wait states:

  • Message Event
  • Timer Event
  • Signal Event
  • Event-Based Gateway
And keep in mind that Asynchronous Continuations can add transaction boundaries to other tasks as well.

Transaction Boundaries

The transition from one such stable state to another is always part of a single transaction, meaning that it succeeds as a whole or is rolled back on any kind of exception occurring during its execution. This is illustrated in the following example:

We see a segment of a BPMN process with a user task, a service task and a timer event. The timer event marks the next wait state. Completing the user task and validating the address is therefore part of the same unit of work, so it should succeed or fail atomically. That means that if the service task throws an exception we want to rollback the current transaction, such that the execution tracks back to the user task and the user task is still present in the database. This is also the default behavior of the process engine.

In 1, an application or client thread completes the task. In that same thread, the engine runtime is now executing the service task and advances until it reaches the wait state at the timer event (2). Then it returns control to the caller (3), potentially committing the transaction (if it was started by the engine).

Asynchronous Continuations

In some cases this behavior is not desired. Sometimes we need custom control over transaction boundaries in a process, in order to be able to scope logical units of work. Consider the following process fragment:

This time we are completing the user task, generating an invoice and then sending that invoice to the customer. Here the generation of the invoice is not part of the same unit of work, so we do not want to roll back the completion of the user task if generating the invoice fails. What we want the engine to do is complete the user task (1), commit the transaction and return control to the calling application (2).

Then we want to generate the invoice asynchronously, in a background thread. A pool of background threads is managed by the job executor. It periodically checks the database for asynchronous jobs, i.e. units of work in the process runtime.

So behind the scenes, when we reach the generate invoice task, we persist a job in the database, queueing it for later execution. This job is then picked up by the job executor and executed (3). We also give the local job executor a little hint that there is a new job, to improve performance. In order to use this feature, we can use the camunda:async="true" extension in the BPMN 2.0 XML. For example, the service task would look like this:

<serviceTask id="service1" name="Generate Invoice" camunda:class="my.custom.Delegate" camunda:async="true" />

camunda:async can be specified on the following BPMN task types: task, serviceTask, scriptTask, businessRuleTask, sendTask, receiveTask, userTask, subProcess and callActivity. On a user task, receive task or other wait states, the additional async continuation allows us to execute the start execution listeners in a separate thread/transaction.

A start event may also be declared as asynchronous in the same way as above by the attribute camunda:async="true". On instantiation, the process instance will be created and persisted in the database, but execution will be deferred. Also, execution listeners will not be invoked synchronously. This can be helpful in various situations such as heterogeneous clusters, when the execution listener class is not available on the node that instantiates the process.
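
Such a start event declaration might look as follows:

<startEvent id="start" camunda:async="true" />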

Rollback on Exception

We want to emphasize that in case of an unhandled exception the current transaction gets rolled back and the process instance remains in the last wait state (safe point). The following image visualizes that.

If an exception occurs when calling startProcessInstanceByKey the process instance will not be saved to the database at all.

Reasoning for this design

The solution sketched above regularly leads to discussion, as people expect the process engine to stop in the task that caused the exception. Also, other BPM suites often implement every task as a wait state. But the approach has a couple of advantages:

  • In test cases you know the exact state of the engine after the method call, which makes assertions on the process state or service call results easy.
  • In production code the same is true, allowing you to use synchronous logic if required, for example because you want to present a synchronous user experience in the front-end, as shown in the Tutorial "UI Mediator".
  • The execution is plain Java computation, which is very efficient and performant.
  • You can always switch to 'async=true' if you need different behavior.

But there are consequences which you should keep in mind:

  • In case of exceptions, the state is rolled back to the last persistent wait state of the process instance. This might even mean that the process instance will never be created! You cannot easily trace the exception back to the node in the process that caused it; you have to handle the exception in the client.
  • Parallel process paths are not executed in parallel in terms of Java threads; the different paths are executed sequentially, since we only have and use one thread.
  • Timers cannot fire before the transaction is committed to the database. Timers are explained in more detail later, but they are triggered by the only active part of the process engine where we use our own threads: the Job Executor. Hence they run in their own thread, which receives the due timers from the database. But in the database the timers are not visible before the current transaction is committed. So the following timer will never fire:

Transaction Integration

The process engine can either manage transactions on its own ("Standalone" transaction management) or integrate with a platform transaction manager.

Standalone Transaction Management

If the process engine is configured to perform standalone transaction management, it always opens a new transaction for each command which is executed. To configure the process engine to use standalone transaction management, use the org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration:

ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration()
  ...
  .buildProcessEngine();

The use cases for standalone transaction management are situations where the process engine does not have to integrate with other transactional resources such as secondary data sources or messaging systems.

Note: in the Tomcat distribution the process engine is configured to use standalone transaction management.

Transaction Manager Integration

The process engine can be configured to integrate with a transaction manager (or transaction management systems). Out of the box, the process engine supports integration with Spring and JTA transaction management. More information can be found in the corresponding chapters.

The use cases for transaction manager integration are situations where the process engine needs to integrate with

  • transaction focused programming models such as Java EE or Spring (think about transaction scoped JPA entity managers in Java EE),
  • other transactional resources such as secondary datasources, messaging systems or other transactional middleware like the web services stack.

The Job Executor

A job is an explicit representation of a task to trigger process execution. A job is created whenever a wait state is reached during process execution that has to be triggered internally. This is the case when a timer event or a task marked for asynchronous execution (see transaction boundaries) is encountered. The job executor has two responsibilities: job acquisition and job execution. The following diagram illustrates this:

Job Executor Activation

By default, the JobExecutor is activated when the process engine boots. For unit testing scenarios it is cumbersome to work with this background component. Therefore the Java API offers ways to query for jobs (ManagementService.createJobQuery) and execute them by hand (ManagementService.executeJob), which allows you to control job execution from within a unit test. To avoid interference with the job executor, it can be switched off.

Specify

<property name="jobExecutorActivate" value="false" />

in the process engine configuration when you don't want the JobExecutor to be activated upon booting the process engine.
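
With the job executor switched off, pending jobs can then be queried and executed by hand from the test, along these lines (a minimal sketch; it assumes a managementService reference and a started processInstance):

List<Job> jobs = managementService.createJobQuery()
  .processInstanceId(processInstance.getId())
  .list();

for (Job job : jobs) {
  // executing the job synchronously makes the test deterministic
  managementService.executeJob(job.getId());
}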

Job Acquisition

Job acquisition is the process of retrieving jobs from the database that are to be executed next. To this end, jobs must be persisted to the database together with properties determining whether a job can be executed. For example, a job created for a timer event may not be executed before the defined time span has passed.

Persistence

Jobs are persisted to the database, in the ACT_RU_JOB table. This database table has the following columns (among others):

ID_ | REV_ | LOCK_EXP_TIME_ | LOCK_OWNER_ | RETRIES_ | DUEDATE_

Job acquisition is concerned with polling this database table and locking jobs.

Acquirable Jobs

A job is acquirable, i.e. a candidate for execution, if it fulfills all of the following conditions:

  • it is due, meaning that the value in the DUEDATE_ column is in the past
  • it is not locked, meaning that the value in the LOCK_EXP_TIME_ column is in the past
  • its retries have not elapsed, meaning that the value in the RETRIES_ column is greater than zero.

In addition, the process engine has a concept of suspending a process definition and a process instance. A job is only acquirable if neither the corresponding process instance nor the corresponding process definition is suspended.

The two Phases of Job Acquisition

Job acquisition has two phases. In the first phase, the job executor queries for a configurable number of acquirable jobs. If at least one job can be found, it enters the second phase: locking the jobs. Locking is necessary in order to ensure that jobs are executed exactly once. In a clustered scenario, it is customary to operate multiple job executor instances (one for each node) that all poll the same ACT_RU_JOB table. Locking a job ensures that it is only acquired by a single job executor instance. Locking a job means updating its values in the LOCK_EXP_TIME_ and LOCK_OWNER_ columns. The LOCK_EXP_TIME_ column is updated with a timestamp that lies in the future, the intuition being that we want to lock the job until that date is reached. The LOCK_OWNER_ column is updated with a value uniquely identifying the current job executor instance. In a clustered scenario this could be a node name uniquely identifying the current cluster node.

The situation where multiple job executor instances attempt to lock the same job concurrently is accounted for by using optimistic locking (see REV_ column).

After having locked a job, the job executor instance has effectively reserved a time slot for executing the job: once the date written to the LOCK_EXP_TIME_ column is reached it will be visible to job acquisition again. In order to execute the acquired jobs, they are passed to the acquired jobs queue.

Job Execution

Thread Pool

Acquired jobs are executed by a thread pool. The thread pool consumes jobs from the acquired jobs queue, which is an in-memory queue with a fixed capacity. When an executor starts executing a job, the job is first removed from the queue.

In the scenario of an embedded process engine, the default implementation for this thread pool is a java.util.concurrent.ThreadPoolExecutor. However, this is not allowed in Java EE environments, so there we hook into the application server's thread management capabilities. See the platform-specific information in the Runtime Container Integration section on how this is achieved.

Failed Jobs

Upon failure of job execution, e.g. if a service task invocation throws an exception, the job will be retried a number of times (by default 3). It is not immediately retried and added back to the acquisition queue; instead, the value of the RETRIES_ column is decremented. The process engine thus performs bookkeeping for failed jobs. After updating the RETRIES_ column, the executor moves on to the next job. This means that the failed job will automatically be retried once the LOCK_EXP_TIME_ date has expired.
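
Once the retries of a job are exhausted, an operator can reset them via the ManagementService, which makes the job acquirable again (a sketch; the job reference would typically come from a job query):

// give the failed job three fresh retries
managementService.setJobRetries(job.getId(), 3);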

In real life it is useful to configure the retry strategy, i.e. the number of times a job is retried and the delay between retries (the LOCK_EXP_TIME_ value). In the camunda engine, this can be configured as an extension element of a task in the BPMN 2.0 XML:

<definitions ... xmlns:camunda="http://activiti.org/bpmn">
  ...
  <serviceTask id="failingServiceTask" camunda:async="true" camunda:class="org.camunda.engine.test.cmd.FailingDelegate">
    <extensionElements>
      <camunda:failedJobRetryTimeCycle>R5/PT5M</camunda:failedJobRetryTimeCycle>
    </extensionElements>
  </serviceTask>
  ...
</definitions>

The configuration follows the ISO 8601 standard for repeating time intervals. In the example, R5/PT5M means that the maximum number of retries is 5 (R5) and the delay between retries is 5 minutes (PT5M).

Similarly, the following example defines three retries after 5 seconds each for a boundary timer event:

<boundaryEvent id="BoundaryEvent_1" name="Boundary event" attachedToRef="Freigebenden_zuordnen_143">
  <extensionElements>
    <camunda:failedJobRetryTimeCycle>R3/PT5S</camunda:failedJobRetryTimeCycle>
  </extensionElements>
  <outgoing>SequenceFlow_3</outgoing>
  <timerEventDefinition id="sid-ac5dcb4b-58e5-4c0c-b30a-a7009623769d">
    <timeDuration xsi:type="tFormalExpression" id="sid-772d5012-17c2-4ae4-a044-252006933a1a">PT10S</timeDuration>
  </timerEventDefinition>
</boundaryEvent>

Recap: a retry may be required if there are any failures during the transaction which follows the timer.

Concurrent Job Execution

The Job Executor makes sure that jobs from a single process instance are never executed concurrently. Why is this? Consider the following process definition:

We have a parallel gateway followed by three service tasks which all perform an asynchronous continuation. As a result of this, three jobs are added to the database. Once such a job is present in the database it can be processed by the job executor. It acquires the jobs and delegates them to a thread pool of worker threads which actually process the jobs. This means that using an asynchronous continuation, you can distribute the work to this thread pool (and in a clustered scenario even across multiple thread pools in the cluster).

This is usually a good thing. However it also bears an inherent problem: consistency. Consider the parallel join after the service tasks. When the execution of a service task is completed, we arrive at the parallel join and need to decide whether to wait for the other executions or whether we can move forward. That means, for each branch arriving at the parallel join, we need to take a decision whether we can continue or whether we need to wait for one or more other executions from the other branches.

This requires synchronization between the branches of execution. The engine addresses this problem with optimistic locking. Whenever we take a decision based on data that might not be current (because another transaction might modify it before we commit), we make sure to increment the revision of the same database row in both transactions. This way, whichever transaction commits first wins and the other ones fail with an optimistic locking exception. This solves the problem in the case of the process discussed above: if multiple executions arrive at the parallel join concurrently, they all assume that they have to wait, increment the revision of their parent execution (the process instance) and then try to commit. Whichever execution is first will be able to commit and the other ones will fail with an optimistic locking exception. Since the executions are triggered by a job, the job executor will retry to perform the same job after waiting for a certain amount of time and will hopefully pass the synchronizing gateway this time.

However, while this is a perfectly fine solution from the point of view of persistence and consistency, this might not always be desirable behavior at a higher level, especially if the execution has non-transactional side effects, which will not be rolled back by the failing transaction. For instance, if the book concert tickets service does not share the same transaction as the process engine, we might book multiple tickets if we retry the job. That is why jobs of the same process instance are processed exclusively by default.

Exclusive Jobs

An exclusive job cannot be performed at the same time as another exclusive job from the same process instance. Consider the process shown in the section above: if the jobs corresponding to the service tasks are treated as exclusive, the job executor will make sure that they are not executed concurrently. Instead, it will ensure that whenever it acquires an exclusive job from a certain process instance, it also acquires all other exclusive jobs from the same process instance and delegates them to the same worker thread. This enforces sequential execution of the jobs.

Exclusive jobs are the default configuration. All asynchronous continuations and timer events are thus exclusive by default. If you want a job to be non-exclusive, you can configure it as such using camunda:exclusive="false". For example, the following service task would be asynchronous but non-exclusive:
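
<!-- mirrors the earlier async example; camunda:exclusive="false" opts the job out of exclusive execution -->
<serviceTask id="service1" name="Generate Invoice" camunda:class="my.custom.Delegate" camunda:async="true" camunda:exclusive="false" />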

Is this a good solution? Some people have asked whether it is. Their concern was that it would prevent you from doing things in parallel and would thus be a performance problem. Again, two things have to be taken into consideration:

  • It can be turned off if you are an expert and know what you are doing (and have understood this section). Other than that, it is more intuitive for most users if things like asynchronous continuations and timers just work.
  • It is actually not a performance issue. Performance is an issue under heavy load. Heavy load means that all worker threads of the job executor are busy all the time. With exclusive jobs, the engine will simply distribute the load differently. Exclusive jobs mean that jobs from a single process instance are performed sequentially by the same thread. But consider: you have more than one single process instance, and jobs from other process instances are delegated to other threads and executed concurrently. This means that with exclusive jobs the engine will not execute jobs from the same process instance concurrently, but it will still execute multiple instances concurrently. From an overall throughput perspective this is desirable in most scenarios, as it usually leads to individual instances being completed more quickly.

The Job Executor and Multiple Process Engines

In the case of a single, application-embedded process engine, the job executor setup is the following:

There exists a single job table that the engine adds jobs to and the acquisition consumes from. Creating a second embedded engine would therefore create another acquisition thread and execution thread-pool.

In larger deployments, however, this quickly leads to a poorly manageable situation. When running camunda BPM on Tomcat or an application server, the platform allows you to declare multiple process engines shared by multiple process applications. With respect to job execution, one job acquisition may serve multiple job tables (and thus process engines), and a single thread pool may be used for execution.

This setup enables centralized monitoring of job acquisition and execution. See the platform-specific information in the Runtime Container Integration section on how the thread pooling is implemented on the different platforms.

Different job acquisitions can also be configured differently, e.g. to meet business requirements like SLAs. For example, the acquisition's timeout when no more executable jobs are present can be configured differently per acquisition.

Which job acquisition a process engine is assigned to can be specified in the declaration of the engine, i.e. either in the processes.xml deployment descriptor of a process application or in the camunda BPM platform descriptor. The following is an example configuration that declares a new engine and assigns it to the job acquisition named default, which is created when the platform is bootstrapped.

<process-engine name="newEngine">
  <job-acquisition>default</job-acquisition>
  ...
</process-engine>

Job acquisitions have to be declared in the BPM platform's deployment descriptor, see the container-specific configuration options.

Cluster Setups

When running the camunda platform in a cluster, there is a distinction between homogeneous and heterogeneous setups. We define a cluster as a set of network nodes that all run the camunda BPM platform against the same database (at least for one engine on each node). In the homogeneous case, the same process applications (and thus custom classes like JavaDelegates) are deployed to all of the nodes, as depicted below.

In the heterogeneous case, this is not given, meaning that some process applications are deployed to only a subset of the nodes.

Job Execution in Heterogeneous Clusters

A heterogeneous cluster setup as described above poses additional challenges to the job executor. Both nodes declare the same engine, i.e. they run against the same database. This means that jobs will be inserted into the same table. However, in the default configuration, the job acquisition thread of node 1 will lock any executable jobs in that table and submit them to the local job execution pool. This means that jobs created in the context of process application B (i.e. on node 2) may be executed on node 1 and vice versa. As the job execution may involve classes that are part of B's deployment, you are likely to see a ClassNotFoundException or the like.

To prevent the job acquisition on node 1 from picking up jobs that belong to node 2, the process engine can be configured as deployment aware by setting the following property in the process engine configuration:

<process-engine name="default">
  ...
  <properties>
    <property name="jobExecutorDeploymentAware">true</property>
    ...
  </properties>
</process-engine>

Now the job acquisition thread on node 1 will only pick up jobs that belong to deployments made on that node, which solves the problem. Digging a little deeper, the acquisition will only pick up jobs that belong to deployments registered with the engines it serves. Every deployment is registered automatically. In addition, you can explicitly register and unregister single deployments with an engine by using the ManagementService methods registerDeploymentForJobExecutor(deploymentId) and unregisterDeploymentForJobExecutor(deploymentId). It also offers a method getRegisteredDeployments() to inspect the currently registered deployments.
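
For example (a sketch, assuming a managementService reference and the id of an existing deployment):

// explicitly register a deployment so its jobs are acquired by this engine
managementService.registerDeploymentForJobExecutor(deploymentId);

// inspect which deployments are currently registered
Set<String> registeredDeployments = managementService.getRegisteredDeployments();

// stop acquiring jobs for that deployment again
managementService.unregisterDeploymentForJobExecutor(deploymentId);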

As this is configurable at the engine level, you can also work with a mixed setup in which some deployments are shared between all nodes and some are not. You can assign the globally shared process applications to an engine that is not deployment aware and the others to a deployment-aware engine, both possibly running against the same database. This way, jobs created in the context of the shared process applications will be executed on any cluster node, while the others are only executed on their respective nodes.

Multi-Tenancy

Multi-tenancy addresses the case in which a single Camunda installation should serve more than one tenant. For each tenant, certain guarantees of isolation should be made. For example, one tenant's process instances should not interfere with those of another tenant.

Multi-tenancy can be achieved on different levels of data isolation. On the one end of the spectrum, different tenants' data can be stored in different databases by configuring multiple process engines, while on the other end of the spectrum, runtime entities can be associated with tenant markers and are stored in the same tables. In between these two extremes, it is possible to separate tenant data into different schemas or tables.

Recommended Approach:

We recommend the approach of multiple process engines (i.e., isolation into different databases/schemas/tables) over the tenant marker approach as it is more robust and easier to use.

One Process Engine Per Tenant

Database-, schema-, and table-based multi-tenancy can be enabled by configuring one process engine per tenant. Each process engine can be configured to point to a different portion of the database. While they are isolated in that sense, they may all share computational resources such as a data source (when isolating via schemas or tables) or a thread pool for asynchronous job execution. Furthermore, the Camunda API offers convenient access to different process engines based on a tenant identifier.

Data isolation

Database, schema or table level

Advantages

  • Strict data separation
  • Hardly any performance overhead for application servers due to resource sharing
  • In case one tenant's database state is inconsistent, no other tenant is affected
  • Camunda Cockpit, Tasklist, and Admin offer tenant-specific views out of the box by switching between different process engines

Disadvantages

  • Additional process engine configuration necessary
  • No out-of-the-box support for tenant-independent queries

Implementation

Working with different process engines for multiple tenants comprises the following steps:

  • Configuration of process engines
  • Deployment of process definitions for different tenants to their respective engines
  • Access to a process engine based on a tenant identifier via the Camunda API

Configuration

Multiple process engines can be configured in a configuration file or via the Java API. Each engine should be given a name that is related to a tenant such that it can be identified based on the tenant. For example, each engine can be named after the tenant it serves. See the Process Engine Bootstrapping section for details.
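
Programmatically, a named engine per tenant might be built along these lines (a sketch; engine name and JDBC URL are illustrative):

ProcessEngine tenant1Engine = ProcessEngineConfiguration
  .createStandaloneProcessEngineConfiguration()
  .setProcessEngineName("tenant1")
  .setJdbcUrl("jdbc:h2:mem:tenant1")
  .buildProcessEngine();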

The process engine configuration can be adapted to achieve either database-, schema- or table-based isolation of data. If different tenants should work on entirely different databases, they have to use different jdbc settings or different data sources. For schema- or table-based isolation, a single data source can be used which means that resources like a connection pool can be shared among multiple engines. The configuration option databaseTablePrefix can be used to configure database access in this case.

For background execution of processes and tasks, the process engine has a component called job executor. The job executor periodically acquires jobs from the database and submits them to a thread pool for execution. For all process applications on one server, one thread pool is used for job execution. Furthermore, it is possible to share the acquisition thread between multiple engines. This way, resources are still manageable even when a large number of process engines is used. See the section The Job Executor and Multiple Process Engines for details.

Multi-tenancy settings can be applied in the various ways of configuring a process engine. The following is an example of a bpm-platform.xml file that specifies engines for two tenants that share the same database but work on different schemas:

<?xml version="1.0" encoding="UTF-8"?>
<bpm-platform xmlns="http://www.camunda.org/schema/1.0/BpmPlatform"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://www.camunda.org/schema/1.0/BpmPlatform http://www.camunda.org/schema/1.0/BpmPlatform">

  <job-executor>
    <job-acquisition name="default" />
  </job-executor>

  <process-engine name="tenant1">
    <job-acquisition>default</job-acquisition>
    <configuration>org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration</configuration>
    <datasource>java:jdbc/ProcessEngine</datasource>

    <properties>
      <property name="databaseTablePrefix">TENANT_1.</property>

      <property name="history">full</property>
      <property name="databaseSchemaUpdate">true</property>
      <property name="authorizationEnabled">true</property>
    </properties>
  </process-engine>

  <process-engine name="tenant2">
    <job-acquisition>default</job-acquisition>
    <configuration>org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration</configuration>
    <datasource>java:jdbc/ProcessEngine</datasource>

    <properties>
      <property name="databaseTablePrefix">TENANT_2.</property>

      <property name="history">full</property>
      <property name="databaseSchemaUpdate">true</property>
      <property name="authorizationEnabled">true</property>
    </properties>
  </process-engine>
</bpm-platform>

Deployment

When developing process applications, i.e. process definitions and supplementary code, some processes may be deployed to every tenant's engine while others are tenant-specific. The processes.xml deployment descriptor that is part of every process application offers this kind of flexibility through the concept of process archives. One application can contain any number of process archive deployments, each of which can be deployed to a different process engine with different resources. See the section on the processes.xml deployment descriptor for details.

The following is an example that deploys different process definitions for two tenants. It uses the configuration property resourceRootPath that specifies a path in the deployment that contains process definitions to deploy. Accordingly, all the processes under processes/tenant1 on the application's classpath are deployed to engine tenant1, while all the processes under processes/tenant2 are deployed to engine tenant2.

<process-application
  xmlns="http://www.camunda.org/schema/1.0/ProcessApplication"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

  <process-archive name="tenant1-archive">
    <process-engine>tenant1</process-engine>
    <properties>
      <property name="resourceRootPath">classpath:processes/tenant1/</property>

      <property name="isDeleteUponUndeploy">false</property>
      <property name="isScanForProcessDefinitions">true</property>
    </properties>
  </process-archive>

  <process-archive name="tenant2-archive">
    <process-engine>tenant2</process-engine>
    <properties>
      <property name="resourceRootPath">classpath:processes/tenant2/</property>

      <property name="isDeleteUponUndeploy">false</property>
      <property name="isScanForProcessDefinitions">true</property>
    </properties>
  </process-archive>

</process-application>

Access

In order to access a specific tenant's process engine at runtime, it has to be identified by its name. The Camunda engine offers access to named engines in various programming models:

  • Plain Java API: Via the ProcessEngineService any named engine can be accessed (see the sketch after this list).
  • CDI Integration: Named engine beans can be injected out of the box. The built-in CDI bean producer can be specialized to access the engine of the current tenant dynamically.
  • Via JNDI on JBoss/Wildfly: On JBoss and Wildfly, every container-managed process engine can be looked up via JNDI.
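
For illustration, a minimal sketch of the plain Java API variant, assuming the engines are named as in the bpm-platform.xml example above:

// look up the process engine of a specific tenant by its name
ProcessEngineService processEngineService = BpmPlatform.getProcessEngineService();
ProcessEngine tenant1Engine = processEngineService.getProcessEngine("tenant1");

// any of the engine's services can then be used as usual
tenant1Engine.getRuntimeService().startProcessInstanceByKey("some process");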

A Tenant Marker Per Process Instance

The least isolated approach is to add a tenant-specific marker in the form of a process variable to running processes. This marker identifies the tenant in whose context the process instance is running. In order to access only the data of a specific tenant, many process engine queries allow filtering by process variables. A calling application must make sure to filter according to the correct tenant.

Data isolation

Row level, with applications responsible for filtering

Advantages

  • Straightforward querying for data across multiple tenants as the data for all tenants is organized in the same tables.

Disadvantages

  • Requires tenant-aware queries
  • Querying with process variables may reduce performance.
  • Risk of disclosing data that belongs to other tenants because of bugs or careless application programming.

Implementation

Working with tenant markers comprises the following aspects:

  • Instantiating tenant markers
  • Querying for process entities of different tenants
Instantiating

A tenant marker can be added to a process instance by passing it as a process variable on instantiation:

Map<String, Object> variables = new HashMap<String, Object>();
variables.put("TENANT_ID", "tenant1");

runtimeService.startProcessInstanceByKey("some process", variables);

For process definitions that are specific to a single tenant, it is also possible to use an execution listener on the start event that immediately sets the variable after instantiation.
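
A minimal sketch of such an execution listener, assuming it is attached to the start event of a tenant-specific process definition:

public class TenantMarkerExecutionListener implements ExecutionListener {

  public void notify(DelegateExecution execution) throws Exception {
    // set the tenant marker right after the process instance is created
    execution.setVariable("TENANT_ID", "tenant1");
  }
}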

Querying

Process applications that retrieve tenant-specific data must ensure that they filter by the tenant marker in order to isolate data between tenants. The following is a query that retrieves all process instances for tenant tenant1:

List<ProcessInstance> processInstances =
  runtimeService.createProcessInstanceQuery()
    .variableValueEquals("TENANT_ID", "tenant1")
    .list();

Other queries like task and execution queries offer the same filtering capabilities. For correlation via the RuntimeService#correlateMessage methods, tenant-specific correlation can be achieved by adding the tenant marker as a correlation key:

runtimeService.createMessageCorrelation("someMessage")
  .processInstanceVariableEquals("TENANT_ID", "tenant1")
  .correlate();

Logging

We use Java Logging (java.util.logging) to avoid imposing any third-party logging dependencies.

Incidents

Incidents are notable events that happen in the process engine. Such incidents usually indicate some kind of problem related to process execution. Examples are a failed job whose retries have elapsed (retries = 0), indicating that an execution is stuck and manual administrative action is necessary to repair the process instance, or a process instance that has entered an error state, modelled for example as a BPMN Error Boundary Event or a User Task explicitly marked as "error state". If such incidents arise, the process engine fires an internal event which can be handled by a configurable incident handler.

In the default configuration, the process engine writes incidents to the process engine database. You may then query the database for different types and kinds of incidents using the IncidentQuery exposed by the RuntimeService:

runtimeService.createIncidentQuery()
  .processDefinitionId("someDefinition")
  .list();

Incidents are stored in the ACT_RU_INCIDENT database table.

If you want to customize the incident handling behavior, it is possible to replace the default incident handlers in the process engine configuration and provide custom implementations (see below).

Incident Types

There are different types of incidents. Currently the process engine supports the following incidents:

  • Failed Job: this type of incident is raised when automatic retries for a Job (Timer or Asynchronous continuation) have elapsed. The incident indicates that the corresponding execution is stuck and will not continue automatically. Administrative action is necessary. The incident is resolved when the job is manually executed or when the retries for the corresponding job are reset to a value > 0.
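
For example, an administrator (or administrative tooling) may resolve such an incident by resetting the retries of the failed job through the ManagementService; a minimal sketch:

// find a job whose retries have elapsed
Job failedJob = managementService.createJobQuery()
  .noRetriesLeft()
  .singleResult();

// resetting the retries to a value > 0 resolves the incident
managementService.setJobRetries(failedJob.getId(), 1);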

(De-)Activating Incidents

The process engine allows you to configure, per incident type, whether incidents of that type should be raised or not.

The following properties are available in the org.camunda.bpm.engine.ProcessEngineConfiguration class:

  • createIncidentOnFailedJobEnabled: indicates whether Failed Job incidents should be raised.

Implementing custom Incident Handlers

Incident Handlers are responsible for handling incidents of a certain type (see Incident Types above).

An Incident Handler implements the following interface:

public interface IncidentHandler {

  public String getIncidentHandlerType();

  public void handleIncident(String processDefinitionId, String activityId, String executionId, String configuration);

  public void resolveIncident(String processDefinitionId, String activityId, String executionId, String configuration);

}

The handleIncident method is called when a new incident is created. The resolveIncident method is called when an incident is resolved. If you want to provide a custom incident handler implementation, you can replace one or multiple incident handlers using the following method:

org.camunda.bpm.engine.impl.cfg.ProcessEngineConfigurationImpl.setCustomIncidentHandlers(List<IncidentHandler>)

An example of a custom incident handler could be a handler which, in addition to the default behavior, also sends an email to an administrator.
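
The following is a minimal sketch of such a handler, implementing the interface shown above. The sendAdministratorEmail(...) helper is hypothetical and stands in for your own notification mechanism:

public class NotifyingIncidentHandler implements IncidentHandler {

  public String getIncidentHandlerType() {
    // handle Failed Job incidents
    return "failedJob";
  }

  public void handleIncident(String processDefinitionId, String activityId, String executionId, String configuration) {
    // ... create the incident as the default handler would ...
    sendAdministratorEmail("Incident raised for execution " + executionId);
  }

  public void resolveIncident(String processDefinitionId, String activityId, String executionId, String configuration) {
    // ... resolve the incident as the default handler would ...
  }

  protected void sendAdministratorEmail(String message) {
    // hypothetical helper: plug in your mail infrastructure here
  }
}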

Process Engine Plugins

The process engine configuration can be extended through process engine plugins: a process engine plugin is an extension to the process engine configuration.

A plugin must provide an implementation of the ProcessEnginePlugin interface.
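
The interface resides in the org.camunda.bpm.engine.impl.cfg package and defines callbacks for the different phases of engine bootstrapping:

public interface ProcessEnginePlugin {

  // invoked before the process engine configuration is initialized
  void preInit(ProcessEngineConfigurationImpl processEngineConfiguration);

  // invoked after the process engine configuration is initialized
  void postInit(ProcessEngineConfigurationImpl processEngineConfiguration);

  // invoked after the process engine has been built
  void postProcessEngineBuild(ProcessEngine processEngine);

}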

Configuring Process Engine Plugins

Process engine plugins can be configured programmatically through the process engine configuration, in a Spring XML application context, or declaratively in the bpm-platform.xml / processes.xml deployment descriptors.

The following is an example of how to configure a process engine plugin in the bpm-platform.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<bpm-platform xmlns="http://www.camunda.org/schema/1.0/BpmPlatform"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.camunda.org/schema/1.0/BpmPlatform http://www.camunda.org/schema/1.0/BpmPlatform">

  <job-executor>
    <job-acquisition name="default" />
  </job-executor>

  <process-engine name="default">
    <job-acquisition>default</job-acquisition>
    <configuration>org.camunda.bpm.engine.impl.cfg.JtaProcessEngineConfiguration</configuration>
    <datasource>jdbc/ProcessEngine</datasource>

    <plugins>
      <plugin>
        <class>org.camunda.bpm.engine.MyCustomProcessEnginePlugin</class>
        <properties>
          <property name="boost">10</property>
          <property name="maxPerformance">true</property>
          <property name="actors">akka</property>
        </properties>
      </plugin>
    </plugins>
  </process-engine>

</bpm-platform>

A process engine plugin class must be visible to the classloader which loads the process engine classes.

List of built-in Process Engine Plugins

The following is a list of built-in process engine plugins:

  • The LDAP Identity Provider Plugin (see the Identity Service section below)
  • The Administrator Authorization Plugin (see the Authorization Service section below)

Identity Service

The identity service is an API abstraction over various User / Group repositories. The basic entities are:

  • User: a user identified by a unique Id
  • Group: a group identified by a unique Id
  • Membership: the relationship between users and groups

Example:

User demoUser = processEngine.getIdentityService()
  .createUserQuery()
  .userId("demo")
  .singleResult();

camunda BPM distinguishes between read-only and writable user repositories. A read-only user repository provides read-only access to the underlying user / group database. A writable user repository allows write access to the user database which includes creating, updating and deleting users and groups.
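
With a writable user repository, users, groups and memberships can be managed through the IdentityService; a brief sketch:

// create a user
User john = identityService.newUser("john");
john.setPassword("s3cr3t");
identityService.saveUser(john);

// create a group and add the user to it
Group sales = identityService.newGroup("sales");
identityService.saveGroup(sales);
identityService.createMembership("john", "sales");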

In order to provide a custom identity provider implementation, the following interfaces can be implemented:

  • ReadOnlyIdentityProvider: provides read-only access to the underlying user repository
  • WritableIdentityProvider: additionally provides write access to the user repository

The Database Identity Service

The database identity service uses the process engine database for managing users and groups. This is the default identity service implementation used if no alternative identity service implementation is provided.

The Database Identity Service implements both ReadOnlyIdentityProvider and WritableIdentityProvider, providing full CRUD functionality for Users, Groups and Memberships.

The LDAP Identity Service

The LDAP identity service provides read-only access to an LDAP-based user / group repository. The identity service provider is implemented as a Process Engine Plugin and can be added to the process engine configuration. In that case it replaces the default Database Identity Service.

In order to use the LDAP identity service, the camunda-identity-ldap.jar library has to be added to the classloader of the process engine.

<dependency>
  <groupId>org.camunda.bpm.identity</groupId>
  <artifactId>camunda-identity-ldap</artifactId>
  <version>${camunda.version}</version>
</dependency>


The following is an example of how to configure the LDAP Identity Provider Plugin using Spring XML:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
  <bean id="processEngineConfiguration" class="org.camunda.bpm.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration">
    ...
    <property name="processEnginePlugins">
      <list>
        <ref bean="ldapIdentityProviderPlugin" />
      </list>
    </property>
  </bean>
  <bean id="ldapIdentityProviderPlugin" class="org.camunda.bpm.identity.impl.ldap.plugin.LdapIdentityProviderPlugin">
    <property name="serverUrl" value="ldap://localhost:3433/" />
    <property name="managerDn" value="uid=daniel,ou=office-berlin,o=camunda,c=org" />
    <property name="managerPassword" value="daniel" />
    <property name="baseDn" value="o=camunda,c=org" />

    <property name="userSearchBase" value="" />
    <property name="userSearchFilter" value="(objectclass=person)" />
    <property name="userIdAttribute" value="uid" />
    <property name="userFirstnameAttribute" value="cn" />
    <property name="userLastnameAttribute" value="sn" />
    <property name="userEmailAttribute" value="mail" />
    <property name="userPasswordAttribute" value="userpassword" />

    <property name="groupSearchBase" value="" />
    <property name="groupSearchFilter" value="(objectclass=groupOfNames)" />
    <property name="groupIdAttribute" value="ou" />
    <property name="groupNameAttribute" value="cn" />
    <property name="groupMemberAttribute" value="member" />
  </bean>
</beans>

The following is an example of how to configure the LDAP Identity Provider Plugin in bpm-platform.xml / processes.xml:

<process-engine name="default">
  <job-acquisition>default</job-acquisition>
  <configuration>org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration</configuration>
  <datasource>java:jdbc/ProcessEngine</datasource>

  <properties>...</properties>

  <plugins>
    <plugin>
      <class>org.camunda.bpm.identity.impl.ldap.plugin.LdapIdentityProviderPlugin</class>
      <properties>

        <property name="serverUrl">ldap://localhost:4334/</property>
        <property name="managerDn">uid=jonny,ou=office-berlin,o=camunda,c=org</property>
        <property name="managerPassword">s3cr3t</property>

        <property name="baseDn">o=camunda,c=org</property>

        <property name="userSearchBase"></property>
        <property name="userSearchFilter">(objectclass=person)</property>

        <property name="userIdAttribute">uid</property>
        <property name="userFirstnameAttribute">cn</property>
        <property name="userLastnameAttribute">sn</property>
        <property name="userEmailAttribute">mail</property>
        <property name="userPasswordAttribute">userpassword</property>

        <property name="groupSearchBase"></property>
        <property name="groupSearchFilter">(objectclass=groupOfNames)</property>
        <property name="groupIdAttribute">ou</property>
        <property name="groupNameAttribute">cn</property>

        <property name="groupMemberAttribute">member</property>

      </properties>
    </plugin>
  </plugins>

</process-engine>

Administrator Authorization Plugin: The LDAP Identity Provider Plugin is usually used in combination with the Administrator Authorization Plugin, which allows you to grant administrator authorizations to a particular LDAP User / Group.

The LDAP Identity Provider provides the following configuration properties:

Property Description
`serverUrl` The URL of the LDAP server to connect to.
`managerDn` The absolute DN of the manager user of the LDAP directory.
`managerPassword` The password of the manager user of the LDAP directory.
`baseDn` The base DN identifies the root of the LDAP directory. It is appended to all DN names composed when searching for users or groups. Example: `o=camunda,c=org`
`userSearchBase` Identifies the node in the LDAP tree under which the plugin should search for users. Must be relative to `baseDn`. Example: `ou=employees`
`userSearchFilter` LDAP query string used when searching for users. Example: `(objectclass=person)`
`userIdAttribute` Name of the user Id property. Example: `uid`
`userFirstnameAttribute` Name of the firstname property. Example: `cn`
`userLastnameAttribute` Name of the lastname property. Example: `sn`
`userEmailAttribute` Name of the email property. Example: `mail`
`userPasswordAttribute` Name of the password property. Example: `userpassword`
`groupSearchBase` Identifies the node in the LDAP tree under which the plugin should search for groups. Must be relative to `baseDn`. Example: `ou=roles`
`groupSearchFilter` LDAP query string used when searching for groups. Example: `(objectclass=groupOfNames)`
`groupIdAttribute` Name of the group Id property. Example: `ou`
`groupNameAttribute` Name of the group Name property. Example: `cn`
`groupTypeAttribute` Name of the group Type property. Example: `cn`
`groupMemberAttribute` Name of the member attribute. Example: `member`
`acceptUntrustedCertificates` Accept untrusted certificates if the LDAP server uses SSL. Warning: we strongly advise against using this property; install the certificate into the JDK key store instead.
`useSsl` Set to `true` if the LDAP connection uses SSL. Default: `false`
`initialContextFactory` Value for the `java.naming.factory.initial` property. Default: `com.sun.jndi.ldap.LdapCtxFactory`
`securityAuthentication` Value for the `java.naming.security.authentication` property. Default: `simple`
`usePosixGroups` Indicates whether posix groups are used. If `true`, the connector uses a simple (unqualified) user id instead of the full DN when querying for groups by group member. Default: `false`

Authorization Service

camunda BPM provides a resource-oriented authorization framework.

Authorizations: An Authorization assigns a set of Permissions to an identity to interact with a given Resource.

Examples

  • User 'jonny' is authorized to start new instances of the 'invoice' process
  • Group 'marketing' is not authorized to cancel process instances.
  • Group 'marketing' is not allowed to use the tasklist application.
  • Nobody is allowed to edit process variables in the cockpit application, except the distinct user 'admin'.

Identities: camunda BPM distinguishes between two types of identities: users and groups. Authorizations can either range over all users (userId = ANY), an individual user or a group of users.

Permissions: A Permission defines the way an identity is allowed to interact with a certain resource. Examples of permissions are CREATE, READ, UPDATE, DELETE, ... See Permissions for a set of built-in permissions.

A single authorization object may assign multiple permissions to a single user and resource:

authorization.addPermission(Permissions.READ);
authorization.addPermission(Permissions.UPDATE);
authorization.addPermission(Permissions.DELETE);

On top of the built-in permissions, camunda BPM allows using custom permission types.
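
A custom permission type can be defined by implementing the Permission interface; a hedged sketch (the permission name and bit value are illustrative and must not collide with the values of the built-in permissions):

public enum CustomPermissions implements Permission {

  // illustrative custom permission with a free bit value
  GENERATE_REPORT("GENERATE_REPORT", 32);

  private final String name;
  private final int value;

  CustomPermissions(String name, int value) {
    this.name = name;
    this.value = value;
  }

  public String getName() {
    return name;
  }

  public int getValue() {
    return value;
  }
}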

Resources: Resources are the entities the user interacts with. Examples of resources are Groups, Users, Process Definitions, Process Instances, Tasks, ...

Built-In Resources

The following resources are currently supported by the authorization framework:

  • Application (cockpit, tasklist, ...)
  • Authorization
  • Group
  • Group Membership
  • User

On top of the built-in resources, the camunda BPM framework supports defining custom resources. Authorization on custom resources will not be automatically performed by the framework but can be performed by a process application.

Authorization Types: There are three types of authorizations:

  • Global Authorizations (AUTH_TYPE_GLOBAL) range over all users and groups (userId = ANY) and are usually used for fixing the "base" permissions for a resource.
  • Grant Authorizations (AUTH_TYPE_GRANT) range over users and groups and grant a set of permissions. Grant authorizations are commonly used for adding permissions to a user or group that the global authorization revokes.
  • Revoke Authorizations (AUTH_TYPE_REVOKE) range over users and groups and revoke a set of permissions. Revoke authorizations are commonly used for revoking permissions from a user or group that the global authorization grants.

Authorization Precedence: Authorizations may range over all users, an individual user or a group of users, and they may apply to an individual resource instance or all instances of the same type (resourceId = ANY). The precedence is as follows:

  • An authorization applying to an individual resource instance takes precedence over an authorization applying to all instances of the same resource type.
  • An authorization for an individual user takes precedence over an authorization for a group.
  • A Group authorization takes precedence over a GLOBAL authorization.
  • A Group REVOKE authorization takes precedence over a Group GRANT authorization.

Creating an Authorization

An authorization is created between a user/group and a resource. It describes the user/group's permissions to access that resource. An authorization may express different permissions, such as the permission to READ, UPDATE or DELETE the resource. (See Authorization for details.)

In order to grant the permission to access a certain resource, an authorization object is created:

Authorization auth = authorizationService.createNewAuthorization();

// The authorization object can be configured either for a user or a group:
auth.setUserId("john");
//  -OR-
auth.setGroupId("management");

//and a resource:
auth.setResource("processDefinition");
auth.setResourceId("2313");

// finally the permissions to access that resource can be assigned:
auth.addPermission(Permissions.READ);

// and the authorization object is saved:
authorizationService.saveAuthorization(auth);

As a result, the given user or group will have permission to READ the referenced process definition.

The Administrator Authorization Plugin

camunda BPM has no explicit concept of "administrator". An administrator in camunda BPM is a user who has been granted all authorizations on all resources.

In the camunda BPM distribution, the invoice example application creates a user with the id demo and assigns administrator authorizations to this user. In addition, the camunda Admin Web application allows you to create an initial administrator user if no user is present in the database (when using the Database Identity Service or a custom implementation providing READ / UPDATE access to the user repository).

This is not the case when using the LDAP Identity Service. The LDAP identity service only has read access to the user repository and the "Create Initial User" dialog will not be displayed.

In this case you can use the Administrator Authorization Plugin for making sure administrator authorizations are created for a particular LDAP User or Group.

The following is an example of how to configure the Administrator Authorization Plugin in bpm-platform.xml / processes.xml:

<process-engine name="default">
  ...
  <plugins>
    <plugin>
      <class>org.camunda.bpm.engine.impl.plugin.AdministratorAuthorizationPlugin</class>
      <properties>
        <property name="administratorUserName">admin</property>
      </properties>
    </plugin>
  </plugins>
</process-engine>

The plugin will make sure that administrator authorizations (ALL permissions) are granted on all resources whenever the process engine is started.

Note: It is not necessary to configure all LDAP users and groups which should have administrator authorization. It is usually enough to configure a single user, log into the web application with that user and create additional authorizations using the User Interface.

Complete list of configuration properties:

Property Description
`administratorUserName` The name of the administrator user. If this name is set to a non-null and non-empty value, the plugin will create user-level Administrator authorizations on all built-in resources.
`administratorGroupName` The name of the administrator group. If this name is set to a non-null and non-empty value, the plugin will create group-level Administrator authorizations on all built-in resources.

Process Diagram Visualization

A BPMN process diagram is a great place to visualize information about your process. You have two options to do this:

  • BPMN JavaScript libraries for rendering BPMN 2.0 directly in the browser
  • Java Process Diagram API using deployed PNG images

We generally recommend the JavaScript libraries, but using the Process Diagram API can be considered if

  • You use browsers not capable of the JavaScript rendering (see Supported Environments)
  • You want to use the exact visualization of your business modeler to improve Business IT Alignment

BPMN JavaScript libraries

We provide BPMN JavaScript libraries which can render BPMN 2.0 process models in your browser.

Go to camunda-bpmn.js for libraries and documentation.

Process Diagram API

When using the Process Diagram API you can deploy a PNG image together with your BPMN 2.0 Process Model. You then have an API to query the image and normalized coordinates for the process model. With this information you can easily visualize anything on the process model. The following image shows an example using a BPMN 2.0 model from Adonis (see Roundtrip with other BPMN 2.0 Modelers):

Our Invoice Showcase is a process application that also uses the Process Diagram API to show details of the current process instance to end users working on user tasks.

Preconditions

In order to use the Process Diagram API you need to deploy a process diagram together with your process. You can use:

  • PNG or
  • JPEG format.

The deployment can be done by any deployment mechanism camunda BPM offers. For instance, if you use WAR deployment, you just need to place the image right next to the BPMN 2.0 XML file of your process (meaning in the same folder). The camunda Modeler automatically creates an image and saves it to the right location each time you save it. It is important that both files have the same name, e.g.,

camunda-invoice.bpmn and
camunda-invoice.png.

Maven will add them to the built artifact and the platform will take care of deploying them to the process engine. The deployer only looks for files with the extensions .png or .jpg to identify process diagrams.

The BPMN 2.0 XML file of your process must contain Diagram Interchange data. This is a special section containing the positions and dimensions of the elements in the process diagram. Any modeling tool that conforms to BPMN 2.0 should be able to export this as part of its regular BPMN 2.0 XML export. Here is an example of what it looks like:

...
<bpmndi:BPMNDiagram id="sid-02bd9186-9a09-4ef7-b17d-95bc9385c7ab">
  <bpmndi:BPMNPlane id="sid-2cd25826-e553-4573-ad62-be3d38904386" bpmnElement="invoice-process">
    <bpmndi:BPMNShape id="Process_Engine_1_gui" bpmnElement="Process_Engine_1" isHorizontal="true">
      <omgdc:Bounds height="488.0" width="1126.0" x="0.0" y="0.0"/>
    </bpmndi:BPMNShape>
    ...
  </bpmndi:BPMNPlane>
</bpmndi:BPMNDiagram>
...

Getting the Process Diagram

If you have deployed a process diagram into the engine, you can retrieve it using the method getProcessDiagram() of the RepositoryService, which takes a process definition id as an argument and returns an InputStream with the content of the process diagram image.

In a Web application you can, e.g., write a Servlet to provide process diagrams (this code is taken from the Invoice Showcase, see ProcessDiagramServlet.java):

@WebServlet(value = "/processDiagram", loadOnStartup = 1)
public class ProcessDiagramServlet extends HttpServlet {

  @Inject
  private RepositoryService repositoryService;

  @Override
  protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    String processDefinitionId = request.getParameter("processDefinitionId");
    InputStream processDiagram = repositoryService.getProcessDiagram(processDefinitionId);
    response.setContentType("image/png");
    response.getOutputStream().write(IOUtils.toByteArray(processDiagram));
  }
}

Getting Coordinates of Process Diagram Elements

The method getProcessDiagramLayout() of the RepositoryService takes a process definition id as an argument and returns a DiagramLayout object. This object provides x and y coordinates as well as width and height for all elements of the process diagram.

DiagramLayout processDiagramLayout = repositoryService.getProcessDiagramLayout(processInstance.getProcessDefinitionId());
List<DiagramNode> nodes = processDiagramLayout.getNodes();
for (DiagramNode node : nodes) {
  String id     = node.getId();
  Double x      = node.getX();
  Double y      = node.getY();
  Double width  = node.getWidth();
  Double height = node.getHeight();
  // TODO: do something with the coordinates
}

These coordinates are given as pixels relative to the upper left corner of the image returned by getProcessDiagram(), i.e., you can take them directly and draw or render something on top of the image. Be creative!

Hint: If you have problems with the positions not fitting exactly, try adding a pool around your process.

Creating an Overlay on top of a Process Diagram

To give you some inspiration of what you can do with the Process Diagram API, we take another look at the code of the Invoice Showcase. It uses JSF, HTML and CSS to highlight the current activity of a given process instance.

A CDI bean looks up the currently active activities in the RuntimeService and gets position and dimension of these activities using DiagramLayout.getNode() (see ProcessDiagramController.java):

@Named
public class ProcessDiagramController {
  ...
  public List<DiagramNode> getActiveActivityBoundsOfLatestProcessInstance() {
    ArrayList<DiagramNode> list = new ArrayList<DiagramNode>();
    ProcessInstance processInstance = getCurrentProcessInstance();
    if (processInstance != null) {
      DiagramLayout processDiagramLayout = repositoryService.getProcessDiagramLayout(processInstance.getProcessDefinitionId());
      List<String> activeActivityIds = runtimeService.getActiveActivityIds(processInstance.getId());
      for (String activeActivityId : activeActivityIds) {
        list.add(processDiagramLayout.getNode(activeActivityId));
      }
    }
    return list;
  }
}

This bean is then invoked by a JSF page, which displays the process diagram from the Servlet shown above and places tokens on top of it (see taskTemplate.xhtml).

<div style="position: relative">
 <img src="processDiagram?processDefinitionId=#{task.processDefinitionId}" />

 <ui:repeat
  value="#{processDiagramController.getActiveActivityBoundsOfLatestProcessInstance()}"
  var="bounds">
  <img src="token.png" style="
    position: absolute;
    left: #{bounds.x + bounds.width - 25}px;
    top: #{bounds.y - 15}px;
    z-index: 1;"/>
  </ui:repeat>
</div>

Alternatively, you can also draw a rectangle around a node:

<div style="position: relative">
    <p:graphicImage
        value="processDiagram?processDefinitionId=#{task.processDefinitionId}" />
    <ui:repeat
        value="#{processDiagramController.getActiveActivityBoundsOfLatestProcessInstance()}"
        var="bounds">
        <div style="
          position: absolute;
          left: #{bounds.x - 1}px;
          top: #{bounds.y - 1}px;
          width: #{bounds.width - 2}px;
          height: #{bounds.height - 2}px;
          border: 2px solid rgb(181, 21, 43);
          border-radius: 5px; -moz-border-radius: 5px;"></div>
    </ui:repeat>
</div>

Overview

A Process Application is an ordinary Java Application that uses the camunda process engine for BPM and Workflow functionality. Most such applications will start their own process engine (or use a process engine provided by the runtime container), deploy some BPMN 2.0 process definitions and interact with process instances derived from these process definitions. Since most process applications perform very similar bootstrapping, deployment and runtime tasks, we generalized this functionality into a Java class which is named - Surprise! - ProcessApplication. The concept is similar to the javax.ws.rs.core.Application class in JAX-RS: adding the process application class allows you to bootstrap and configure the provided services.

Adding a ProcessApplication class to your Java Application provides your application with the following services:

  • Bootstrapping embedded process engine(s) or looking up container managed process engine(s). You can define multiple process engines in a file named processes.xml which is added to your application. The ProcessApplication class makes sure this file is picked up and the defined process engines are started and stopped as the application is deployed / undeployed.
  • Automatic deployment of classpath BPMN 2.0 resources. You can define multiple deployments (process archives) in the processes.xml file. The process application class makes sure the deployments are performed upon deployment of your application. Scanning your application for process definition resource files (ending in .bpmn20.xml or .bpmn) is supported as well.
  • Resolution of application-local Java Delegate implementations and Beans in case of a multi-application deployment. The process application class allows your Java application to expose your local Java Delegate implementations or Spring / CDI beans to a shared, container managed process engine. This way you can start a single process engine that dispatches to multiple process applications that can be (re-)deployed independently.

Transforming an existing Java Application into a Process Application is easy and non-intrusive. You simply have to add:

  • A Process Application class: The Process Application class constitutes the interface between your application and the process engine. There are different base classes you can extend to reflect different environments (e.g., Servlet vs. EJB Container).
  • A processes.xml file in META-INF: The deployment descriptor file allows you to provide a declarative configuration of the deployment(s) this process application makes to the process engine. It can be empty (see the empty processes.xml section) and serve as a simple marker file. If it is not present, the engine will still start up, but auto-deployment will not be performed.

Heads-up! You might want to check out the Getting Started Tutorial first as it explains the creation of a process application step by step, or the Project Templates for Maven, which give you a complete running process application out of the box.

The Process Application class

You can delegate the bootstrapping of the process engine and process deployment to a process application class. The basic ProcessApplication functionality is provided by the org.camunda.bpm.application.AbstractProcessApplication base class. Based on this class, there is a set of environment-specific subclasses that realize integration within a specific environment:

  • ServletProcessApplication: To be used for Process Applications in a Servlet Container like Apache Tomcat.
  • EjbProcessApplication: To be used in a Java EE application server like JBoss, Glassfish or WebSphere Application Server.
  • EmbeddedProcessApplication: To be used when embedding the process engine in an ordinary Java SE application.
  • SpringProcessApplication: To be used for bootstrapping the process application from a Spring Application Context.

In the following, we walk through the different implementations and discuss where and how they can be used.

The ServletProcessApplication

All Servlet Containers

The Servlet Process Application is supported on all containers. Read the note about Servlet Process Application and EJB / Java EE containers.

Packaging: WAR (or embedded WAR inside EAR)

The ServletProcessApplication class is the base class for developing Process Applications based on the Servlet Specification (Java Web Applications). The servlet process application implements the javax.servlet.ServletContextListener interface, which allows it to participate in the deployment lifecycle of your Web application.

The following is an example of a Servlet Process Application:

package org.camunda.bpm.example.loanapproval;

import org.camunda.bpm.application.ProcessApplication;
import org.camunda.bpm.application.impl.ServletProcessApplication;

@ProcessApplication("Loan Approval App")
public class LoanApprovalApplication extends ServletProcessApplication {
  // empty implementation
}

Notice the @ProcessApplication annotation. This annotation fulfills two purposes:

  • providing the name of the ProcessApplication: You can provide a custom name for your process application using the annotation: @ProcessApplication("Loan Approval App"). If no name is provided, a name is automatically detected. In case of a ServletProcessApplication, the name of the ServletContext is used.
  • triggering auto-deployment: In a Servlet 3.0 container, the annotation is sufficient for making sure that the process application is automatically picked up by the servlet container and automatically added as a ServletContextListener to the Servlet Container deployment. This functionality is realized by a javax.servlet.ServletContainerInitializer implementation named org.camunda.bpm.application.impl.ServletProcessApplicationDeployer which is located in the camunda-engine module. The implementation works both for embedded deployment of the camunda-engine.jar as a web application library in the WEB-INF/lib folder of your WAR file and for deployment of the camunda-engine.jar as a shared library (i.e., in Apache Tomcat's global lib/ folder) of your application server. The Servlet 3.0 Specification foresees both deployment scenarios. In case of embedded deployment, the ServletProcessApplicationDeployer is notified once, when the web application is deployed. In case of deployment as a shared library, the ServletProcessApplicationDeployer is notified for each WAR file containing a class annotated with @ProcessApplication (as required by the Servlet 3.0 Specification).

This means that in case you deploy to a Servlet 3.0 compliant container (such as Apache Tomcat 7) annotating your class with @ProcessApplication is sufficient.

There is a project template for Maven called `camunda-archetype-servlet-war`, which gives you a complete running project based on a ServletProcessApplication.

Deploying to Apache Tomcat 6 or other Pre-Servlet 3.0 Containers

In a Pre-Servlet 3.0 container such as Apache Tomcat 6 (or JBoss Application Server 5, for that matter), you need to manually register your ProcessApplication class as a Servlet Context Listener in the Servlet Container. This can be achieved by adding a listener element to your WEB-INF/web.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">

  <listener>
    <listener-class>org.my.project.MyProcessApplication</listener-class>
  </listener>

</web-app>

Using the ServletProcessApplication inside an EJB / Java EE Container such as Glassfish or JBoss

You can use the ServletProcessApplication inside an EJB / Java EE Container such as Glassfish or JBoss. Process application bootstrapping and deployment will work in the same way. However, you will not be able to use all Java EE features at runtime. In contrast to the EjbProcessApplication (see next section), the ServletProcessApplication does not perform proper Java EE cross-application context switching. When the process engine invokes Java Delegates from your application, only the Context Class Loader of the current Thread is set to the classloader of your application. This does allow the process engine to resolve Java Delegate implementations from your application, but the container will not perform an EE context switch to your application. As a consequence, if you use the ServletProcessApplication inside a Java EE container, you will not be able to use features like:

  • using CDI beans and EJBs as JavaDelegate Implementations in combination with the Job Executor,
  • using @RequestScoped CDI Beans with the Job Executor,
  • looking up JNDI resources from the application's naming scope

If your application does not use such features, it is perfectly fine to use the ServletProcessApplication inside an EE container; in that case you only get servlet specification guarantees.

The EjbProcessApplication

Java EE 6 Container only

The EjbProcessApplication is supported in Java EE 6 containers or better. It is not supported on Servlet Containers like Apache Tomcat. It may be adapted to work inside Java EE 5 Containers.

Packaging: JAR, WAR, EAR

The EjbProcessApplication is the base class for developing Java EE based Process Applications. An Ejb Process Application class itself must be deployed as an EJB.

In order to add an Ejb Process Application to your Java Application, you have two options:

  • Bundling the camunda-ejb-client: we provide a generic, reusable EjbProcessApplication implementation (named org.camunda.bpm.application.impl.ejb.DefaultEjbProcessApplication) bundled as a maven artifact. The simplest possibility is to add this implementation as a maven dependency to your application.
  • Writing a custom EjbProcessApplication: if you want to customize the behavior of the EjbProcessApplication, you can write a custom subclass of the EjbProcessApplication class and add it to your application.

Both options are explained in greater detail below.

Bundling the camunda-ejb-client Jar

The most convenient option for deploying a process application to an Ejb Container is adding the following maven dependency to your maven project:

<dependency>
  <groupId>org.camunda.bpm.javaee</groupId>
  <artifactId>camunda-ejb-client</artifactId>
  <version>${camunda.version}</version>
</dependency>

The camunda-ejb-client contains a reusable default implementation of the EjbProcessApplication as a Singleton Session Bean with auto-activation.

This deployment option requires that your project is a composite deployment (such as a WAR or EAR) since you need to add a library JAR file. You could of course use something like the maven shade plugin for adding the class contained in the camunda-ejb-client artifact to a JAR-based deployment.

We always recommend using the camunda-ejb-client over deploying a custom EjbProcessApplication class unless you want to customize the behavior of the EjbProcessApplication. There is a project template for Maven called `camunda-archetype-ejb-war`, which gives you a complete running project based on the camunda-ejb-client.

Deploying a custom EjbProcessApplication class

If you want to customize the behavior of the EjbProcessApplication class, you have the option of writing a custom EjbProcessApplication class. The following is an example of such an implementation:

@Singleton
@Startup
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
@TransactionAttribute(TransactionAttributeType.REQUIRED)
@ProcessApplication
@Local(ProcessApplicationInterface.class)
public class MyEjbProcessApplication extends EjbProcessApplication {

  @PostConstruct
  public void start() {
    deploy();
  }

  @PreDestroy
  public void stop() {
    undeploy();
  }

}

Expose servlet context path using a custom EjbProcessApplication

If your application is a WAR (or a WAR inside an EAR) and you want to use embedded or external task forms inside the Tasklist application, then your custom EjbProcessApplication must expose the servlet context path of your application as a property. This enables the Tasklist to resolve the path to the embedded or external task forms.

To this end, your custom EjbProcessApplication must be extended with a Map of properties and a getter method for that Map, as follows:

@Singleton
@Startup
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
@TransactionAttribute(TransactionAttributeType.REQUIRED)
@ProcessApplication
@Local(ProcessApplicationInterface.class)
public class MyEjbProcessApplication extends EjbProcessApplication {

  protected Map<String, String> properties = new HashMap<String, String>();

  @PostConstruct
  public void start() {
    deploy();
  }

  @PreDestroy
  public void stop() {
    undeploy();
  }

  public Map<String, String> getProperties() {
    return properties;
  }

}

Furthermore, to provide the servlet context path a custom javax.servlet.ServletContextListener must be added to your application. Inside your custom implementation of the ServletContextListener you have to

  • inject your custom EjbProcessApplication using the @EJB annotation,
  • resolve the servlet context path and
  • expose the servlet context path through the ProcessApplicationInfo#PROP_SERVLET_CONTEXT_PATH property inside your custom EjbProcessApplication.

This can be done as follows:

public class ProcessArchiveServletContextListener implements ServletContextListener {

  @EJB
  private ProcessApplicationInterface processApplication;

  public void contextInitialized(ServletContextEvent contextEvent) {

    String contextPath = contextEvent.getServletContext().getContextPath();

    Map<String, String> properties = processApplication.getProperties();
    properties.put(ProcessApplicationInfo.PROP_SERVLET_CONTEXT_PATH, contextPath);
  }

  public void contextDestroyed(ServletContextEvent arg0) {
  }

}

Finally the custom ProcessArchiveServletContextListener has to be added to your WEB-INF/web.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">

  <listener>
    <listener-class>org.my.project.ProcessArchiveServletContextListener</listener-class>
  </listener>

  ...

</web-app>

Invocation Semantics of the EjbProcessApplication

The fact that the EjbProcessApplication exposes itself as a Session Bean Component inside the EJB container

  • determines the invocation semantics when invoking code from the process application, and
  • determines the nature of the ProcessApplicationReference held by the process engine.

When the process engine invokes the Ejb Process Application, it gets EJB invocation semantics. For example, if your process application provides a JavaDelegate implementation, the process engine will call the EjbProcessApplication's execute(java.util.concurrent.Callable) method and, from that method, invoke the JavaDelegate. This makes sure that

  • the call is intercepted by the EJB container and "enters" the process application legally.
  • the JavaDelegate may take advantage of the EjbProcessApplication's invocation context and resolve resources from the component's environment (such as a java:comp/BeanManager).
                   Big pile of EJB interceptors
                                |
                                |  +--------------------+
                                |  |Process Application |
                  invoke        v  |                    |
 ProcessEngine ----------------OOOOO--> Java Delegate   |
                                   |                    |
                                   |                    |
                                   +--------------------+

When the EjbProcessApplication registers with a process engine (see ManagementService#registerProcessApplication(String, ProcessApplicationReference)), the process application passes a reference to itself to the process engine. This reference allows the process engine to reference the process application. The EjbProcessApplication takes advantage of the Ejb Container's naming context and passes a reference containing the EjbProcessApplication's Component Name to the process engine. Whenever the process engine needs access to the process application, the actual component instance is looked up and invoked.

The EmbeddedProcessApplication

All containers

The EmbeddedProcessApplication can only be used with an embedded process engine and does not provide auto-activation.

Packaging: JAR, WAR, EAR

The org.camunda.bpm.application.impl.EmbeddedProcessApplication can only be used in combination with an embedded process engine. Usage in combination with a Shared Process Engine is not supported as the class performs no process application context switching at runtime.

The Embedded Process Application also does not provide auto-startup. You need to manually call the deploy method of your process application:

// instantiate the process application
MyProcessApplication processApplication = new MyProcessApplication();

// deploy the process application
processApplication.deploy();

// interact with the process engine
ProcessEngine processEngine = BpmPlatform.getDefaultProcessEngine();
processEngine.getRuntimeService().startProcessInstanceByKey(...);

// undeploy the process application
processApplication.undeploy();

Where the class MyProcessApplication could look like this:

@ProcessApplication(
    name="my-app",
    deploymentDescriptors={"path/to/my/processes.xml"}
)
public class MyProcessApplication extends EmbeddedProcessApplication {

}

The SpringProcessApplication

Supported on

The spring process application is currently not supported on JBoss AS 7.

Packaging: JAR, WAR, EAR

The org.camunda.bpm.engine.spring.application.SpringProcessApplication class allows bootstrapping a process application through a Spring Application Context. You can either reference the SpringProcessApplication class from an XML-based application context configuration file or use an annotation-based setup.

If your application is a web application, you should use org.camunda.bpm.engine.spring.application.SpringServletProcessApplication as it provides support for exposing the servlet context path through the ProcessApplicationInfo#PROP_SERVLET_CONTEXT_PATH property.

SpringServletProcessApplication

We recommend using SpringServletProcessApplication whenever the deployment is a web application. Using this class requires the org.springframework:spring-web module to be on the classpath.

Configuring a Spring Process Application

The following shows an example of how to bootstrap a SpringProcessApplication inside a Spring application context XML file:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

  <bean id="invoicePa" class="org.camunda.bpm.engine.spring.application.SpringServletProcessApplication" />

</beans>

(Remember that you additionally need a META-INF/processes.xml file.)

Process Application Name

The SpringProcessApplication will use the bean name (id="invoicePa" in the example above) as the auto-detected name for the process application. Make sure to provide a unique process application name here (unique across all process applications deployed on a single application server instance). As an alternative, you can provide a custom subclass of SpringProcessApplication (or SpringServletProcessApplication) and override the getName() method, as sketched below.
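
A minimal sketch of such a subclass (the class and process application names are illustrative):

public class InvoiceProcessApplication extends SpringServletProcessApplication {

  @Override
  public String getName() {
    // must be unique across all process applications on the server instance
    return "invoice-pa";
  }
}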

Configuring a Managed Process Engine using Spring

If you use a Spring Process Application, you may want to configure your process engine inside the Spring application context XML file (as opposed to the processes.xml file). In this case, you must use the org.camunda.bpm.engine.spring.container.ManagedProcessEngineFactoryBean class for creating the process engine object instance. In addition to creating the process engine object, this implementation registers the process engine with the BPM Platform infrastructure so that the process engine is returned by the ProcessEngineService. The following is an example of how to configure a managed process engine using Spring:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="dataSource" class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy">
        <property name="targetDataSource">
            <bean class="org.springframework.jdbc.datasource.SimpleDriverDataSource">
                <property name="driverClass" value="org.h2.Driver"/>
                <property name="url" value="jdbc:h2:mem:camunda;DB_CLOSE_DELAY=1000"/>
                <property name="username" value="sa"/>
                <property name="password" value=""/>
            </bean>
        </property>
    </bean>

    <bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
        <property name="dataSource" ref="dataSource"/>
    </bean>

    <bean id="processEngineConfiguration" class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">
        <property name="processEngineName" value="default" />
        <property name="dataSource" ref="dataSource"/>
        <property name="transactionManager" ref="transactionManager"/>
        <property name="databaseSchemaUpdate" value="true"/>
        <property name="jobExecutorActivate" value="false"/>
    </bean>

    <!-- using ManagedProcessEngineFactoryBean allows registering the ProcessEngine with the BpmPlatform -->
    <bean id="processEngine" class="org.camunda.bpm.engine.spring.container.ManagedProcessEngineFactoryBean">
        <property name="processEngineConfiguration" ref="processEngineConfiguration"/>
    </bean>

    <bean id="repositoryService" factory-bean="processEngine" factory-method="getRepositoryService"/>
    <bean id="runtimeService" factory-bean="processEngine" factory-method="getRuntimeService"/>
    <bean id="taskService" factory-bean="processEngine" factory-method="getTaskService"/>
    <bean id="historyService" factory-bean="processEngine" factory-method="getHistoryService"/>
    <bean id="managementService" factory-bean="processEngine" factory-method="getManagementService"/>

</beans>

The processes.xml deployment descriptor

The processes.xml deployment descriptor contains the deployment metadata for a process application. The following is a simple example of a processes.xml deployment descriptor:

<process-application
  xmlns="http://www.camunda.org/schema/1.0/ProcessApplication"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

  <process-archive name="loan-approval">
    <process-engine>default</process-engine>
    <properties>
      <property name="isDeleteUponUndeploy">false</property>
      <property name="isScanForProcessDefinitions">true</property>
    </properties>
  </process-archive>

</process-application>

A single deployment (process-archive) is declared. The process archive has the name loan-approval and is deployed to the process engine with the name default. Two additional properties are specified:

  • isDeleteUponUndeploy: this property controls whether the undeployment of the process application should entail that the process engine deployment is deleted from the database. The default setting is false. If this property is set to true, undeployment of the process application leads to the removal of the deployment (including process instances) from the database.
  • isScanForProcessDefinitions: if this property is set to true, the classpath of the process application is automatically scanned for process definition resources. Process definition resources must end in .bpmn20.xml or .bpmn.

See Deployment Descriptor Reference for complete documentation of the syntax of the processes.xml file.

Empty processes.xml

The processes.xml file may optionally be empty (left blank). In this case, default values are used. The empty processes.xml corresponds to the following configuration:

<process-application
  xmlns="http://www.camunda.org/schema/1.0/ProcessApplication"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

  <process-archive>
    <properties>
      <property name="isDeleteUponUndeploy">false</property>
      <property name="isScanForProcessDefinitions">true</property>
    </properties>
  </process-archive>

</process-application>

The empty processes.xml will scan for process definitions and perform a single deployment to the default process engine.

Location of the processes.xml file

The default location of the processes.xml file is META-INF/processes.xml. The camunda BPM platform will parse and process all processes.xml files on the classpath of a process application. Composite process applications (WAR / EAR) may carry multiple subdeployments providing a META-INF/processes.xml file.

In an Apache Maven-based project, add the processes.xml file to the src/main/resources/META-INF folder.

Custom location for the processes.xml file

If you want to specify a custom location for the processes.xml file, you need to use the deploymentDescriptors property of the @ProcessApplication annotation:

@ProcessApplication(
    name="my-app",
    deploymentDescriptors={"path/to/my/processes.xml"}
)
public class MyProcessApp extends ServletProcessApplication {

}

The provided path(s) must be resolvable through the ClassLoader#getResourceAsStream(String) method of the classloader returned by the AbstractProcessApplication#getProcessApplicationClassloader() method of the process application.

Multiple distinct locations are supported.

Configuring process engines in the processes.xml file

The processes.xml file can also be used for configuring one or multiple process engine(s). The following is an example of a configuration of a process engine inside a processes.xml file:

<process-application
xmlns="http://www.camunda.org/schema/1.0/ProcessApplication"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

  <process-engine name="my-engine">
    <configuration>org.camunda.bpm.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration</configuration>
  </process-engine>

  <process-archive name="loan-approval">
    <process-engine>my-engine</process-engine>
    <properties>
      <property name="isDeleteUponUndeploy">false</property>
      <property name="isScanForProcessDefinitions">true</property>
    </properties>
  </process-archive>

</process-application>

The <configuration>...</configuration> element allows specifying the name of a process engine configuration class to be used when building the process engine, as sketched below.
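
For illustration, a minimal sketch of such a configuration class (the class name is hypothetical; it would be referenced by its fully qualified name inside the <configuration> element):

public class MyCustomConfiguration extends StandaloneInMemProcessEngineConfiguration {

  public MyCustomConfiguration() {
    // adjust configuration defaults here, e.g. the history level
    setHistory(HISTORY_FULL);
  }

}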

Process Application Deployment

When deploying a set of BPMN 2.0 files to the process engine, a process deployment is created. The deployment is stored in the process engine database so that, when the process engine is stopped and restarted, the process definitions can be restored from the database and execution can continue. When a process application performs a deployment, it will, in addition to the database deployment, create a registration for this deployment with the process engine. This is illustrated in the following figure:

Deployment of the process application "invoice.war" is illustrated on the left hand side:

  1. The process application "invoice.war" deploys the invoice.bpmn file to the process engine.
  2. The process engine checks the database for a previous deployment. In this case, no such deployment exists. As a result, a new database deployment deployment-1 is created for the process definition.
  3. The process application is registered for the deployment-1 and the registration is returned.

When the process application is undeployed, the registration for the deployment is removed (see right hand side of the illustration above). After the registration is cleared, the deployment is still present in the database.

The registration allows the process engine to load additional Java classes and resources from the process application when executing the processes. In contrast to the database deployment, which can be restored whenever the process engine is restarted, the registration of the process application is kept as in-memory state. This in-memory state is local to an individual cluster node, allowing us to undeploy or redeploy a process application on a particular cluster node without affecting the other nodes and without having to restart the process engine. If the Job Executor is deployment aware, job execution will also stop for jobs created by this process application. As a consequence, however, the registration also needs to be re-created when the application server is restarted. This happens automatically if the process application takes part in the application server deployment lifecycle. For instance, ServletProcessApplications are deployed as ServletContextListeners; when the servlet context is started, the listener creates the deployment and registration with the process engine. The redeployment process is illustrated in the next figure:

(a) Left hand side: invoice.bpmn has not changed:

  1. The process application "invoice.war" deploys the invoice.bpmn file to the process engine.
  2. The process engine checks the database for a previous deployment. Since deployment-1 is still present in the database, the process engine compares the xml content of the database deployment with the invoice.bpmn file from the process application. In this case, both xml documents are identical, which means that the existing deployment can be resumed.
  3. The process application is registered for the existing deployment deployment-1.

(b) Right hand side: invoice.bpmn has changed:

  1. The process application "invoice.war" deploys the invoice.bpmn file to the process engine.
  2. The process engine checks the database for a previous deployment. Since deployment-1 is still present in the database, the process engine compares the xml content of the database deployment with the invoice.bpmn file from the process application. In this case, changes are detected which means that a new deployment must be created.
  3. The process engine creates a new deployment deployment-2, containing the updated invoice.bpmn process.
  4. The process application is registered for the new deployment deployment-2 AND the existing deployment deployment-1.

Resuming the previous deployment (deployment-1) is a feature called resumePreviousVersions, which is activated by default. If you want to deactivate this feature, you have to set the isResumePreviousVersions property to false in the processes.xml file:

<process-application
  xmlns="http://www.camunda.org/schema/1.0/ProcessApplication"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

  <process-archive name="loan-approval">
    ...
    <properties>
      ...
      <property name="isResumePreviousVersions">false</property>
    </properties>
  </process-archive>

</process-application>

Process Application Event Listeners

The process engine supports defining two types of event listeners: Task Event Listeners and Execution Event Listeners. Task Event Listeners allow reacting to task events (a task is created, assigned or completed). Execution Listeners allow reacting to events fired as execution progresses through the diagram: activities are started and ended, and transitions are taken.

When using the Process Application API, the process engine makes sure that events are delegated to the right process application. For example, assume there is a process application deployed as "invoice.war" which deploys a process definition named "invoice". The invoice process has a task named "archive invoice". The application "invoice.war" further provides a Java class that implements the ExecutionListener interface and is configured to be invoked whenever the END event is fired on the "archive invoice" activity. The process engine makes sure that the event is delegated to the listener class located inside the process application:

On top of the Execution and Task Listeners which are explicitly configured in the BPMN 2.0 Xml, the process application API supports defining a global ExecutionListener and a global TaskListener which are notified about all events happening in the processes deployed by a process application:

@ProcessApplication
public class InvoiceProcessApplication extends ServletProcessApplication {

  public TaskListener getTaskListener() {
    return new TaskListener() {
      public void notify(DelegateTask delegateTask) {
        // handle all Task Events from Invoice Process
      }
    };
  }

  public ExecutionListener getExecutionListener() {
    return new ExecutionListener() {
      public void notify(DelegateExecution execution) throws Exception {
        // handle all Execution Events from Invoice Process
      }
    };
  }
}

In order to use the global Process Application Event Listeners, you need to activate the corresponding Process Engine Plugin:

<process-engine name="default">
  ...
  <plugins>
    <plugin>
      <class>org.camunda.bpm.application.impl.event.ProcessApplicationEventListenerPlugin</class>
    </plugin>
  </plugins>
</process-engine>

Note that the plugin is activated by default in the pre-packaged camunda BPM distributions.

The Process Application Event Listener interface is also a good place for adding the CdiEventListener bridge if you want to use CDI events in combination with the shared process engine.

Maven Project Templates (Archetypes)

We provide several project templates for Maven, which are also called archetypes. They enable a quick start for developing process applications with the camunda BPM platform.

Overview of available Maven Archetypes

The following archetypes are currently provided. They are distributed via our Maven repository: https://app.camunda.com/nexus/content/repositories/camunda-bpm/

  • Process Application (EJB, WAR): Process application that uses a shared camunda BPM engine in a Java EE container, e.g. JBoss AS 7. Contains: camunda EJB client, camunda CDI integration, BPMN process, Java delegate as CDI bean, JSF-based start and task forms, configuration for JPA (Hibernate), JUnit test with in-memory engine, Arquillian test for JBoss AS 7, Ant build script for one-click deployment in Eclipse.
  • Process Application (Servlet, WAR): Process application that uses a shared camunda BPM engine in a Servlet container, e.g. Apache Tomcat. Contains: Servlet process application, BPMN process, Java delegate, HTML5-based start and task forms, JUnit test with in-memory engine, Arquillian test for JBoss AS 7, Ant build script for one-click deployment in Eclipse.

Usage in Eclipse IDE

Summary

  1. Add archetype catalog (Preferences -> Maven -> Archetypes -> Add Remote Catalog):

    https://app.camunda.com/nexus/content/repositories/camunda-bpm/

  2. Create Maven project from archetype (File -> New -> Project... -> Maven -> Maven Project)

Detailed Instructions

  1. Go to Preferences -> Maven -> Archetypes -> Add Remote Catalog
  2. Enter the following URL and description, click Verify... to test the connection and, if that worked, click OK to save the catalog.

    Catalog File: https://app.camunda.com/nexus/content/repositories/camunda-bpm/

    Description: camunda BPM platform

Now you should be able to use the archetypes when creating a new Maven project in Eclipse:

  1. Go to File -> New -> Project... and select Maven -> Maven Project
  2. Select a location for the project or just keep the default setting.
  3. Select the archetype from the catalog that you created before.
  4. Specify Maven coordinates and camunda version and finish the project creation.

The resulting project should look like this:

Troubleshooting

Sometimes, the creation of the very first Maven project fails in Eclipse. If that happens to you, just try again; most of the time the second try works. If the problem persists, contact us.

Usage on Commandline

Interactive

Run the following command in a terminal to generate a project. Maven will allow you to select an archetype and ask you for all parameters needed to configure it:

mvn archetype:generate -Dfilter=org.camunda.bpm.archetype: -DarchetypeCatalog=https://app.camunda.com/nexus/content/repositories/camunda-bpm

Full Automation

The following command completely automates the project generation and can be used in shell scripts or Ant builds:

mvn archetype:generate \
  -DinteractiveMode=false \
  -DarchetypeRepository=https://app.camunda.com/nexus/content/repositories/camunda-bpm \
  -DarchetypeGroupId=org.camunda.bpm.archetype \
  -DarchetypeArtifactId=camunda-archetype-ejb-war \
  -DarchetypeVersion=7.0.0 \
  -DgroupId=org.example.camunda.bpm \
  -DartifactId=camunda-bpm-ejb-project \
  -Dversion=0.0.1-SNAPSHOT \
  -Dpackage=org.example.camunda.bpm.ejb

Source Code and Customization

You can also customize the project templates for your own technology stack. Just fork them on GitHub!

BPM Platform Services

To inspect the current state of configured process engines and deployed process applications, the class org.camunda.bpm.BpmPlatform offers access to the ProcessEngineService and the ProcessApplicationService.

ProcessEngineService

The ProcessEngineService can be accessed by calling BpmPlatform.getProcessEngineService(). It offers access to the default process engine, as well as any process engine by its name as specified in the process engine configuration. It returns ProcessEngine objects from which any services for a specific engine can be accessed.
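For example, the following sketch retrieves the default engine as well as a named engine (the name "my-engine" is an assumption):

ProcessEngineService engineService = BpmPlatform.getProcessEngineService();

// the default process engine
ProcessEngine defaultEngine = engineService.getDefaultProcessEngine();

// a process engine by its configured name
ProcessEngine myEngine = engineService.getProcessEngine("my-engine");

// any engine service can then be obtained from the returned ProcessEngine
RuntimeService runtimeService = defaultEngine.getRuntimeService();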

ProcessApplicationService

The ProcessApplicationService is accessible via BpmPlatform.getProcessApplicationService(). It provides details on the process application deployments made on the application server it is running on. That means that it does not provide a global view across all nodes in a cluster.

Given a process application name, a ProcessApplicationInfo object can be retrieved that contains details on the deployments made by this process application. These correspond to the process archives declared in processes.xml.

Furthermore, application-specific properties can be retrieved such as the servlet context path in case of a servlet process application.
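For example, the following sketch inspects the deployments of a process application (the application name "invoice-app" is an assumption):

ProcessApplicationService applicationService = BpmPlatform.getProcessApplicationService();

// names of all process applications deployed on this server node
Set<String> applicationNames = applicationService.getProcessApplicationNames();

// deployment details for a single process application
ProcessApplicationInfo info = applicationService.getProcessApplicationInfo("invoice-app");
List<ProcessApplicationDeploymentInfo> deployments = info.getDeploymentInfo();

// application-specific properties, e.g. the servlet context path
String contextPath = info.getProperties().get(ProcessApplicationInfo.PROP_SERVLET_CONTEXT_PATH);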

JNDI Bindings for BPM Platform Services

The BPM Platform Services (i.e. Process Engine Service and Process Application Service) are provided via JNDI Bindings with the following JNDI names:

  • Process Engine Service: java:global/camunda-bpm-platform/process-engine/ProcessEngineService!org.camunda.bpm.ProcessEngineService
  • Process Application Service: java:global/camunda-bpm-platform/process-engine/ProcessApplicationService!org.camunda.bpm.ProcessApplicationService

On Glassfish 3.1.1 and JBoss AS 7 you can look up one of these BPM platform services directly using the above JNDI names. On Apache Tomcat 7, however, some additional steps are necessary before such a lookup works.

JNDI Bindings on Apache Tomcat 7

To use the JNDI bindings for BPM platform services on Apache Tomcat 7, you have to add the file META-INF/context.xml to your process application and declare the following resource links (a sketch using the resource link names we propose below):
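<Context>
  <ResourceLink name="ProcessEngineService"
    global="global/camunda-bpm-platform/process-engine/ProcessEngineService!org.camunda.bpm.ProcessEngineService"
    type="org.camunda.bpm.ProcessEngineService" />

  <ResourceLink name="ProcessApplicationService"
    global="global/camunda-bpm-platform/process-engine/ProcessApplicationService!org.camunda.bpm.ProcessApplicationService"
    type="org.camunda.bpm.ProcessApplicationService" />
</Context>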

These elements are used to create a link to the global JNDI Resources defined in $TOMCAT_HOME/conf/server.xml.

Furthermore, declare the dependency on the JNDI binding inside the WEB-INF/web.xml deployment descriptor.

<resource-ref>
  <description>Process Engine Service</description>
  <res-ref-name>ProcessEngineService</res-ref-name>
  <res-type>org.camunda.bpm.ProcessEngineService</res-type>
  <res-auth>Container</res-auth>
</resource-ref>

<resource-ref>
  <description>Process Application Service</description>
  <res-ref-name>ProcessApplicationService</res-ref-name>
  <res-type>org.camunda.bpm.ProcessApplicationService</res-type>
  <res-auth>Container</res-auth>
</resource-ref>
...

Note: You can choose different resource link names for the Process Engine Service and the Process Application Service. The resource link name has to match the value of the <res-ref-name> element inside the corresponding <resource-ref> element in WEB-INF/web.xml. We propose the name ProcessEngineService for the Process Engine Service and ProcessApplicationService for the Process Application Service.

In order to look up a BPM platform service, you have to use the resource link name to get the linked global resource. For example:

  • Process Engine Service: java:comp/env/ProcessEngineService
  • Process Application Service: java:comp/env/ProcessApplicationService

If you have declared resource link names other than the proposed ones, you have to use java:comp/env/$YOUR_RESOURCE_LINK_NAME to look up the corresponding BPM platform service.
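For example, a component inside the process application could resolve the Process Engine Service programmatically. A minimal sketch, using the proposed resource link name:

import javax.naming.InitialContext;
import javax.naming.NamingException;

import org.camunda.bpm.ProcessEngineService;

public class ProcessEngineServiceLookup {

  public ProcessEngineService lookupProcessEngineService() throws NamingException {
    // resolves the resource link declared in WEB-INF/web.xml
    return (ProcessEngineService) new InitialContext().lookup("java:comp/env/ProcessEngineService");
  }
}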

The camunda JBoss AS 7 Subsystem

Distribution & Installation Guide

If you download a pre-packaged distribution from camunda.org, the camunda JBoss subsystem is already installed in the application server.

Read the installation guide in order to learn how to install the camunda JBoss subsystem into your JBoss Server.

camunda BPM provides advanced integration for JBoss Application Server 7 in the form of a custom JBoss AS 7 Subsystem.

The most prominent features are:

  • Deploy the process engine as a shared JBoss module.
  • Configure the process engine in standalone.xml / domain.xml and administer it through the JBoss Management System.
  • Process Engines are native JBoss Services with service lifecycle and dependencies.
  • Automatic deployment of BPMN 2.0 processes (through the Process Application API).
  • Use a managed Thread Pool provided by JBoss Threads in combination with the Job Executor.

Configuring a process engine in standalone.xml / domain.xml

Using the camunda JBoss AS 7 Subsystem, it is possible to configure and manage the process engine through the JBoss Management Model. The most straightforward way is to add the process engine configuration to the standalone.xml file of the JBoss server:

<subsystem xmlns="urn:org.camunda.bpm.jboss:1.1">
    <process-engines>
        <process-engine name="default" default="true">
            <datasource>java:jboss/datasources/ProcessEngine</datasource>
            <history-level>full</history-level>
            <properties>
                <property name="jobExecutorAcquisitionName">default</property>
                <property name="isAutoSchemaUpdate">true</property>
                <property name="authorizationEnabled">true</property>
            </properties>
        </process-engine>
    </process-engines>
    <job-executor>
        <thread-pool-name>job-executor-tp</thread-pool-name>
        <job-acquisitions>
            <job-acquisition name="default">
                <acquisition-strategy>SEQUENTIAL</acquisition-strategy>
                <properties>
                    <property name="lockTimeInMillis">300000</property>
                    <property name="waitTimeInMillis">5000</property>
                    <property name="maxJobsPerAcquisition">3</property>
                </properties>
            </job-acquisition>
        </job-acquisitions>
    </job-executor>
</subsystem>

It should be easy to see that the configuration consists of a single process engine which uses the Datasource java:jboss/datasources/ProcessEngine and is configured to be the default process engine. In addition, the Job Executor currently uses a single Job Acquisition also named default.

If you start up your JBoss AS 7 server with this configuration, it will automatically create the corresponding services and expose them through the management model.

Providing a custom process engine configuration class

It is possible to provide a custom process engine configuration class on JBoss AS 7. To this end, provide the fully qualified classname of the class in the standalone.xml file:

<process-engine name="default" default="true">
    <datasource>java:jboss/datasources/ProcessEngine</datasource>
    <configuration>org.my.custom.ProcessEngineConfiguration</configuration>
    <history-level>full</history-level>
    <properties>
        <property name="myCustomProperty">true</property>
        <property name="lockTimeInMillis">300000</property>
        <property name="waitTimeInMillis">5000</property>
    </properties>
</process-engine>

The class org.my.custom.ProcessEngineConfiguration must be a subclass of org.camunda.bpm.engine.impl.cfg.JtaProcessEngineConfiguration.

The properties map can be used to invoke primitive-valued setters (Integer, String, Boolean) that follow the Java Bean conventions. In the case of the example above, the class would provide a method like:

public void setMyCustomProperty(boolean myCustomProperty) {
  ...
}

Module dependency of custom configuration class

If you configure the process engine in standalone.xml and provide a custom configuration class packaged inside its own module, the camunda-jboss-subsystem module needs to have a module dependency on the module providing the class.

If you fail to do this, you will see the following error log:

Caused by: org.camunda.bpm.engine.ProcessEngineException: Could not load 'foo.bar': the class must be visible from the camunda-jboss-subsystem module.
        at org.camunda.bpm.container.impl.jboss.service.MscManagedProcessEngineController.createProcessEngineConfiguration(MscManagedProcessEngineController.java:187) [camunda-jboss-subsystem-7.0.0-alpha8.jar:]
        at org.camunda.bpm.container.impl.jboss.service.MscManagedProcessEngineController.startProcessEngine(MscManagedProcessEngineController.java:138) [camunda-jboss-subsystem-7.0.0-alpha8.jar:]
        at org.camunda.bpm.container.impl.jboss.service.MscManagedProcessEngineController$3.run(MscManagedProcessEngineController.java:126) [camunda-jboss-subsystem-7.0.0-alpha8.jar:]

Extending a process engine using process engine plugins

It is possible to extend a process engine using the process engine plugins concept. You specify the process engine plugins in standalone.xml / domain.xml for each process engine separately, as shown below:

<subsystem xmlns="urn:org.camunda.bpm.jboss:1.1">
    <process-engines>
        <process-engine name="default" default="true">
            <datasource>java:jboss/datasources/ProcessEngine</datasource>
            <history-level>full</history-level>
            <properties>
                ...
            </properties>
            <plugins>
                <plugin>
                    <class>org.camunda.bpm.engine.MyCustomProcessEnginePlugin</class>
                    <properties>
                        <property name="boost">10</property>
                        <property name="maxPerformance">true</property>
                        <property name="actors">akka</property>
                    </properties>
                </plugin>
            </plugins>
        </process-engine>
    </process-engines>
    ...
</subsystem>

You have to provide the fully qualified classname between the <class> tags. Additional properties can be specified using the <properties> element. The restrictions that apply to providing a custom process engine configuration class are also valid for process engine plugins:

  • The plugin class must be visible to the camunda-jboss-subsystem module.
  • The properties map can be used to invoke primitive-valued setters (Integer, String, Boolean) that follow the Java Bean conventions.
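A process engine plugin is an implementation of the org.camunda.bpm.engine.impl.cfg.ProcessEnginePlugin interface. The following sketch matches the example configuration above; the properties and their meaning are just illustrations:

import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.impl.cfg.ProcessEngineConfigurationImpl;
import org.camunda.bpm.engine.impl.cfg.ProcessEnginePlugin;

public class MyCustomProcessEnginePlugin implements ProcessEnginePlugin {

  protected Integer boost;
  protected Boolean maxPerformance;
  protected String actors;

  // setters invoked by the subsystem for the <property> elements above
  public void setBoost(Integer boost) { this.boost = boost; }
  public void setMaxPerformance(Boolean maxPerformance) { this.maxPerformance = maxPerformance; }
  public void setActors(String actors) { this.actors = actors; }

  public void preInit(ProcessEngineConfigurationImpl processEngineConfiguration) {
    // customize the configuration before it is initialized
  }

  public void postInit(ProcessEngineConfigurationImpl processEngineConfiguration) {
    // customize the configuration after it has been initialized
  }

  public void postProcessEngineBuild(ProcessEngine processEngine) {
    // react to the fully built process engine
  }
}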

Looking up a Process Engine in JNDI

The camunda JBoss subsystem provides the same JNDI bindings for the ProcessApplicationService and the ProcessEngineService as provided on other containers. In addition, the camunda JBoss subsystem creates JNDI Bindings for all managed process engines, allowing us to look them up directly.

The global JNDI bindings for process engines follow the pattern

java:global/camunda-bpm-platform/process-engine/$PROCESS_ENGINE_NAME

If a process engine is named "engine1", it will be available using the name java:global/camunda-bpm-platform/process-engine/engine1.

Note that when looking up the process engine, using a declarative mechanism (like @Resource or referencing the resource in a deployment descriptor) is preferred over a programmatic way. The declarative mechanism makes the application server aware of our dependency on the process engine service and allows it to manage that dependency for us. See also: Managing Service Dependencies.

Looking up a Process Engine from JNDI using Spring

On JBoss AS 7, Spring users should always create a resource-ref for the process engine in web.xml (see Managing Service Dependencies) and then look up the local name in the java:comp/env/ namespace. For an example, see this Quickstart.

Managing the process engine through the JBoss Management System

In order to inspect and change the management model, we can use one of the several JBoss management clients available.

Inspecting the configuration

It is possible to inspect the configuration using the CLI (Command Line Interface, jboss-cli.bat/sh):

You are disconnected at the moment. Type 'connect' to connect to the server or 'help' for the list of supported commands.
[disconnected /] connect
[standalone@localhost:9999 /] cd /subsystem=camunda-bpm-platform
[standalone@localhost:9999 subsystem=camunda-bpm-platform] :read-resource(recursive=true)
{
    "outcome" => "success",
    "result" => {
        "job-executor" => {"default" => {
            "thread-pool-name" => "job-executor-tp",
            "job-acquisitions" => {"default" => {
                "acquisition-strategy" => "SEQUENTIAL",
                "name" => "default",
                "properties" => {
                    "lockTimeInMillis" => "300000",
                    "waitTimeInMillis" => "5000",
                    "maxJobsPerAcquisition" => "3"
                }
            }}
        }},
        "process-engines" => {"default" => {
            "configuration" => "org.camunda.bpm.container.impl.jboss.config.ManagedJtaProcessEngineConfiguration",
            "datasource" => "java:jboss/datasources/ProcessEngine",
            "default" => true,
            "history-level" => "full",
            "name" => "default",
            "properties" => {
                "jobExecutorAcquisitionName" => "default",
                "isAutoSchemaUpdate" => "true"
            }
        }}
    }
}

Stopping a Process Engine through the JBoss Management System

Once the process engine is registered in the JBoss management model, it is possible to control it through the management API. For example, you can stop it through the CLI:

[standalone@localhost:9999 subsystem=camunda-bpm-platform] cd process-engines=default
[standalone@localhost:9999 process-engines=default] :remove
{"outcome" => "success"}

This removes the process engine and all dependent services. This means that if you remove a process engine the application server will stop all deployed applications which use the process engine.

Declaring Service Dependencies

In order for this to work, and also to avoid race conditions at deployment time, it is necessary that each application explicitly declares dependencies on the process engines it uses. The Managing Service Dependencies section below explains how to declare such dependencies.

Starting a Process Engine through the JBoss Management System

It is also possible to start a new process engine at runtime:

[standalone@localhost:9999 subsystem=camunda-bpm-platform] /subsystem=camunda-bpm-platform/process-engines=my-process-engine/:add(name=my-process-engine,datasource=java:jboss/datasources/ExampleDS)
{"outcome" => "success"}

One of the nice features of the JBoss AS 7 Management System is that it will

  • persist any changes to the model in the underlying configuration file. This means that if you start a process engine using the command line interface, the configuration will be added to standalone.xml / domain.xml such that it is available when the server is restarted.
  • distribute the configuration in the cluster and start / stop the process engine on all servers part of the same domain.

Using the JBoss JConsole Extensions

In some cases, you may find it more convenient to use the JBoss JConsole extension for starting a process engine.

The JConsole plugin allows you to inspect the management model graphically and build operations using a wizard. In order to start the JBoss JConsole plugin, start the jconsole.bat/sh file provided in the JBoss distribution. More Information in the JBoss Docs.

Managing Classpath Dependencies

Implicit module dependencies

Classpath dependencies are automatically managed for you if you use the Process Application API.

When using the camunda BPM JBoss AS subsystem, the process engine classes are deployed as a JBoss module. The module is named org.camunda.bpm.camunda-engine and is deployed in the folder $JBOSS_HOME/modules/org/camunda/bpm/camunda-engine.

By default, the application server will not add this module to the classpath of applications. If an application needs to interact with the process engine, we must declare a module dependency in the application. This can be achieved using either an implicit or an explicit module dependency.

Implicit module dependencies with the Process Application API

When using the Process Application API (ie. when deploying either a ServletProcessApplication or an EjbProcessApplication), the camunda JBoss Subsystem will detect the @ProcessApplication class in the deployment and automatically add a module dependency between the application and the process engine module. As a result, we don't have to declare the dependency ourselves. It is called an implicit module dependency because it is not explicitly declared but can be derived by inspecting the application and seeing that it provides a @ProcessApplication class.

Explicit module dependencies

If an application does not use the process application API but still needs the process engine classes to be added to its classpath, an explicit module dependency is required. JBoss AS 7 has different mechanisms for achieving this. The simplest way is to add a manifest entry to the MANIFEST.MF file of the deployment. The following example illustrates how to generate such a dependency using the maven WAR plugin:

<build>
   ...
   <plugins>
     <plugin>
       <groupId>org.apache.maven.plugins</groupId>
       <artifactId>maven-war-plugin</artifactId>
       <configuration>
          <archive>
             <manifestEntries>
                <Dependencies>org.camunda.bpm.camunda-engine</Dependencies>
             </manifestEntries>
          </archive>
       </configuration>
     </plugin>
   </plugins>
</build>

As a result, the application server will add the process engine module to the classpath of the application.

Managing Service Dependencies

Implicit service dependencies

Service dependencies are automatically managed for you if you use the Process Application API.

The camunda JBoss subsystem manages process engines as JBoss Services in the JBoss Module Service Container. In order for the Module Service Container to provide the process engine service(s) to the deployed applications, it is important that the dependencies are known. Consider the following example:

There are three applications deployed and two process engine services exist. Application 1 and Application 2 are using Process Engine 1 and Application 3 is using Process Engine 2.

Implicit Service Dependencies

When using the Process Application API (ie. when deploying either a ServletProcessApplication or an EjbProcessApplication), the camunda JBoss Subsystem will detect the @ProcessApplication class in the deployment and automatically add a service dependency between the process application component and the process engine service. This makes sure the process engine is available when the process application is deployed.

Explicit Service Dependencies

If an application does not use the process application API but still needs to interact with a process engine, it is important to declare the dependency on the process engine service explicitly. If we fail to declare the dependency, there is no guarantee that the process engine is available to the application.

  • When the application server is started, it will bring up services concurrently. If it is not aware of the dependency between the application and the process engine, the application may start before the process engine, potentially resulting in exceptions if the process engine is accessed from some deployment listener (like a servlet context listener or a @PostConstruct callback of an Enterprise Bean).
  • If the process engine is stopped while the application is deployed, the application server must stop the application as well.

The simplest way to add an explicit dependency on the process engine is to bind the process engine in the application's local naming space. For instance, we can add the following resource reference to the web.xml file of a web application:

<resource-ref>
  <res-ref-name>processEngine/default</res-ref-name>
  <res-type>org.camunda.bpm.engine.ProcessEngine</res-type>
  <mapped-name>java:global/camunda-bpm-platform/process-engine/default</mapped-name>
</resource-ref>

This way, the global process engine resource java:global/camunda-bpm-platform/process-engine/default is available locally under the name processEngine/default. Since the application server is aware of this dependency, it will make sure the process engine service exists before starting the application and it will stop the application if the process engine is removed.

The same effect can be achieved using the @Resource Annotation:

@Stateless
public class PaComponent {

  @Resource(mappedName="java:global/camunda-bpm-platform/process-engine/default")
  private ProcessEngine processEngine;

  @Produces
  public ProcessEngine getProcessEngine() {
    return processEngine;
  }

}

Overview

The camunda-engine Spring framework integration is located inside the camunda-engine-spring module and can be added to Apache Maven-based projects through the following dependency:

<dependency>
  <groupId>org.camunda.bpm</groupId>
  <artifactId>camunda-engine-spring</artifactId>
  <version>${camunda.version}</version>
</dependency>

The camunda-engine-spring artifact should be added as a library to the process application.

Process Engine Configuration

You can use a Spring application context Xml file for bootstrapping the process engine. You can bootstrap both application-managed and container-managed process engines through Spring.

Configuring an application-managed Process Engine

The ProcessEngine can be configured as a regular Spring bean. The starting point of the integration is the class org.camunda.bpm.engine.spring.ProcessEngineFactoryBean. That bean takes a process engine configuration and creates the process engine. This means that the creation and configuration of properties for Spring is the same as documented in the configuration section. For Spring integration the configuration and engine beans will look like this:

<bean id="processEngineConfiguration"
      class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">
    ...
</bean>

<bean id="processEngine"
      class="org.camunda.bpm.engine.spring.ProcessEngineFactoryBean">
  <property name="processEngineConfiguration" ref="processEngineConfiguration" />
</bean>


Note that the processEngineConfiguration bean uses the SpringProcessEngineConfiguration class.

Configuring a container-managed Process Engine as a Spring Bean

If you want the process engine to be registered with the BpmPlatform ProcessEngineService, you must use org.camunda.bpm.engine.spring.container.ManagedProcessEngineFactoryBean instead of the ProcessEngineFactoryBean shown in the example above. In that case the constructed process engine object is registered with the BpmPlatform, can be referenced for creating process application deployments, and is exposed through the runtime container integration.
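A minimal sketch, reusing the processEngineConfiguration bean from the example above:

<bean id="processEngine"
      class="org.camunda.bpm.engine.spring.container.ManagedProcessEngineFactoryBean">
  <property name="processEngineConfiguration" ref="processEngineConfiguration" />
</bean>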

Spring Transaction Integration

We'll explain the SpringTransactionIntegrationTest found in the Spring examples of the distribution step by step. Below is the Spring configuration file that we use in this example (you can find it in SpringTransactionIntegrationTest-context.xml). The section shown below contains the dataSource, transactionManager, processEngine and the process engine services.

When passing the DataSource to the SpringProcessEngineConfiguration (using property "dataSource"), the camunda engine uses a org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy internally, which wraps the passed DataSource. This is done to make sure the SQL connections retrieved from the DataSource and the Spring transactions play well together. This implies that it's no longer needed to proxy the dataSource yourself in Spring configuration, although it's still allowed to pass a TransactionAwareDataSourceProxy into the SpringProcessEngineConfiguration. In this case no additional wrapping will occur.

Make sure when declaring a TransactionAwareDataSourceProxy in Spring configuration yourself, that you don't use it for resources that are already aware of Spring-transactions (e.g. DataSourceTransactionManager and JPATransactionManager need the un-proxied dataSource).

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd">

  <bean id="dataSource" class="org.springframework.jdbc.datasource.SimpleDriverDataSource">
    <property name="driverClass" value="org.h2.Driver" />
    <property name="url" value="jdbc:h2:mem:camunda;DB_CLOSE_DELAY=1000" />
    <property name="username" value="sa" />
    <property name="password" value="" />
  </bean>

  <bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource" />
  </bean>

  <bean id="processEngineConfiguration" class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">
    <property name="dataSource" ref="dataSource" />
    <property name="transactionManager" ref="transactionManager" />
    <property name="databaseSchemaUpdate" value="true" />
    <property name="jobExecutorActivate" value="false" />
  </bean>

  <bean id="processEngine" class="org.camunda.bpm.engine.spring.ProcessEngineFactoryBean">
    <property name="processEngineConfiguration" ref="processEngineConfiguration" />
  </bean>

  <bean id="repositoryService" factory-bean="processEngine" factory-method="getRepositoryService" />
  <bean id="runtimeService" factory-bean="processEngine" factory-method="getRuntimeService" />
  <bean id="taskService" factory-bean="processEngine" factory-method="getTaskService" />
  <bean id="historyService" factory-bean="processEngine" factory-method="getHistoryService" />
  <bean id="managementService" factory-bean="processEngine" factory-method="getManagementService" />

...
</beans>

The remainder of that Spring configuration file contains the beans and configuration that we'll use in this particular example:

<beans>
  ...
  <tx:annotation-driven transaction-manager="transactionManager"/>

  <bean id="userBean" class="org.camunda.bpm.engine.spring.test.UserBean">
    <property name="runtimeService" ref="runtimeService" />
  </bean>

  <bean id="printer" class="org.camunda.bpm.engine.spring.test.Printer" />

</beans>

First, the application context is created in any of the usual Spring ways. In this example you could use a classpath XML resource to configure our Spring application context:

ClassPathXmlApplicationContext applicationContext =
    new ClassPathXmlApplicationContext("mytest/SpringTransactionIntegrationTest-context.xml");


or, since it is a test:

@ContextConfiguration("classpath:mytest/SpringTransactionIntegrationTest-context.xml")

Then we can get the service beans and invoke methods on them. The ProcessEngineFactoryBean will have added an extra interceptor to the services that applies Propagation.REQUIRED transaction semantics on the engine service methods. So, for example, we can use the repositoryService to deploy a process like this:

RepositoryService repositoryService = (RepositoryService) applicationContext.getBean("repositoryService");
String deploymentId = repositoryService
  .createDeployment()
  .addClasspathResource("mytest/hello.bpmn20.xml")
  .deploy()
  .getId();

The other way around also works. In this case, the Spring transaction will be around the userBean.hello() method and the engine service method invocation will join that same transaction.

UserBean userBean = (UserBean) applicationContext.getBean("userBean");
userBean.hello();

The UserBean looks like this. Remember from the Spring bean configuration above that we injected the runtimeService into the userBean.

public class UserBean {

  // injected by Spring
  private RuntimeService runtimeService;

  @Transactional
  public void hello() {
    // here you can do transactional stuff in your domain model
    // and it will be combined in the same transaction as
    // the startProcessInstanceByKey to the RuntimeService
    runtimeService.startProcessInstanceByKey("helloProcess");
  }

  public void setRuntimeService(RuntimeService runtimeService) {
    this.runtimeService = runtimeService;
  }
}

Automatic Resource Deployment

Spring integration also has a special feature for deploying resources. In the process engine configuration, you can specify a set of resources. When the process engine is created, all those resources will be scanned and deployed. Filtering is in place to prevent duplicate deployments: new deployments are written to the engine database only when the resources have actually changed. This makes sense in a lot of use cases where the Spring container is rebooted often (e.g. testing).

Here's an example:

<bean id="processEngineConfiguration" class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">
  ...
  <property name="deploymentResources" value="classpath*:/mytest/autodeploy.*.bpmn20.xml" />
</bean>

<bean id="processEngine" class="org.camunda.bpm.engine.spring.ProcessEngineFactoryBean">
  <property name="processEngineConfiguration" ref="processEngineConfiguration" />
</bean>

Expression Resolving

When using the ProcessEngineFactoryBean, by default all expressions in the BPMN processes will also 'see' all the Spring beans. It is possible to limit the beans you want to expose in expressions, or even to expose no beans at all, using a map that you can configure. The example below exposes a single bean (printer), available for use under the key "printer". To expose no beans at all, pass an empty list as the 'beans' property on the SpringProcessEngineConfiguration. When no 'beans' property is set, all Spring beans in the context will be available.

<bean id="processEngineConfiguration"
      class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">
  ...
  <property name="beans">
    <map>
      <entry key="printer" value-ref="printer" />
    </map>
  </property>
</bean>

<bean id="printer" class="org.camunda.bpm.engine.spring.test.transaction.Printer" />

Now the exposed beans can be used in expressions: for example, the SpringTransactionIntegrationTest hello.bpmn20.xml shows how a method on a Spring bean can be invoked using a UEL method expression:

<definitions id="definitions" ...>

  <process id="helloProcess">

    <startEvent id="start" />
    <sequenceFlow id="flow1" sourceRef="start" targetRef="print" />

    <serviceTask id="print" camunda:expression="#{printer.printMessage()}" />
    <sequenceFlow id="flow2" sourceRef="print" targetRef="end" />

    <endEvent id="end" />

  </process>

</definitions>

Where Printer looks like this:

public class Printer {

  public void printMessage() {
    System.out.println("hello world");
  }
}


And the Spring bean configuration (also shown above) looks like this:

<beans ...>
  ...

  <bean id="printer" class="org.camunda.bpm.engine.spring.test.transaction.Printer" />
</beans>

Expression resolving with the Shared Process Engine

In a shared process engine deployment scenario, you have a process engine which dispatches to multiple applications. In this case, there is not a single Spring application context; instead, each application may maintain its own application context. The process engine cannot use a single expression resolver for a single application context but must delegate to the appropriate process application, depending on which process is currently executed.

This functionality is provided by the org.camunda.bpm.engine.spring.application.SpringProcessApplicationElResolver. This class is a ProcessApplicationElResolver implementation delegating to the local application context. Expression resolving then works in the following way: the shared process engine checks which process application corresponds to the process it is currently executing. It then delegates to that process application for resolving expressions. The process application delegates to the SpringProcessApplicationElResolver which uses the local Spring application context for resolving beans.

The SpringProcessApplicationElResolver class is automatically detected if the camunda-engine-spring module is included as a library of the process application, not as a global library.

Spring-based Testing

When integrating with Spring, business processes can be tested very easily (in scope 2, see Testing Scopes) using the standard camunda testing facilities. The following example shows how a business process is tested in a typical Spring-based unit test:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:org/camunda/bpm/engine/spring/test/junit4/springTypicalUsageTest-context.xml")
public class MyBusinessProcessTest {

  @Autowired
  private RuntimeService runtimeService;

  @Autowired
  private TaskService taskService;

  @Autowired
  @Rule
  public ProcessEngineRule processEngineRule;

  @Test
  @Deployment
  public void simpleProcessTest() {
    runtimeService.startProcessInstanceByKey("simpleProcess");
    Task task = taskService.createTaskQuery().singleResult();
    assertEquals("My Task", task.getName());

    taskService.complete(task.getId());
    assertEquals(0, runtimeService.createProcessInstanceQuery().count());

  }
}

Note that for this to work, you need to define a ProcessEngineRule bean in the Spring configuration (which is injected by auto-wiring in the example above).

<bean id="processEngineRule" class="org.camunda.bpm.engine.test.ProcessEngineRule">
  <property name="processEngine" ref="processEngine" />
</bean>

Overview

The camunda-engine-cdi module provides programming model integration with CDI (Context and Dependency Injection). CDI is the Java EE 6 standard for Dependency Injection. The camunda-engine-cdi integration leverages both the configurability of the camunda engine and the extensibility of CDI. The most prominent features are:

  • A custom El-Resolver for resolving CDI beans (including EJBs) from the process,
  • Support for @BusinessProcessScoped beans (CDI beans the lifecycle of which is bound to a process instance),
  • Declarative control over a process instance using annotations,
  • The Process Engine is hooked-up to the CDI event bus,
  • Works with both Java EE and Java SE,
  • Support for unit testing.

Maven Dependency

In order to use the camunda-engine-cdi module inside your application, you must include the following Maven dependency:

<dependency>
  <groupId>org.camunda.bpm</groupId>
  <artifactId>camunda-engine-cdi</artifactId>
  <version>7.x</version>
</dependency>

Replace 'x' with your camunda BPM version.

There is a project template for Maven called `camunda-archetype-ejb-war`, which gives you a complete running project including the CDI integration.

Process Engine Configuration

Documentation for this part has yet to be written.

Jta Transaction Integration

The process engine transaction management can integrate with JTA. In order to use JTA transaction manager integration, you need to use the

  • org.camunda.bpm.engine.impl.cfg.JtaProcessEngineConfiguration for Jta Integration only
  • org.camunda.bpm.engine.cdi.CdiJtaProcessEngineConfiguration for additional CDI Expression resolving support.

Note 1: The shared process engine distributions for Java EE Application Servers (Wildfly, JBoss, Glassfish, IBM Websphere Application Server, Oracle Weblogic Application Server) provide JTA integration out of the box.

Note 2: The process engine requires access to an implementation of javax.transaction.TransactionManager. Not all application servers provide such an implementation. Most notably WebSphere and Weblogic historically did not provide this implementation. In order to achieve JTA Transaction Integration on these containers, users should use the Spring Framework Abstraction and configure the process engine using the SpringProcessEngineConfiguration.
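A sketch of such a Spring-based configuration, assuming the container exposes a JTA transaction manager that Spring's JtaTransactionManager can auto-detect:

<bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager" />

<bean id="processEngineConfiguration" class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">
  <property name="transactionManager" ref="transactionManager" />
  ...
</bean>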

Expression Resolving

The camunda-engine-cdi library exposes CDI beans via Expression Language, using a custom resolver. This makes it possible to reference beans from the process:

<userTask id="authorizeBusinessTrip" name="Authorize Business Trip"
                        camunda:assignee="#{authorizingManager.account.username}" />
</script>

Where "authorizingManager" could be a bean provided by a producer method:

@Inject
@ProcessVariable
private Object businessTripRequesterUsername;

@Produces
@Named
public Employee authorizingManager() {
  // use a named parameter instead of string concatenation to avoid injection issues
  TypedQuery<Employee> query = entityManager.createQuery(
      "SELECT e FROM Employee e WHERE e.account.username = :username", Employee.class);
  query.setParameter("username", businessTripRequesterUsername);
  Employee employee = query.getSingleResult();
  return employee.getManager();
}

We can use the same feature to call a business method of an EJB in a service task, using the camunda:expression="myEjb.method()" extension. Note that this requires a @Named annotation on the MyEjb class. A sketch of such an EJB is shown below.
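Class and method names here are examples:

@Named
@Stateless
public class MyEjb {

  public void method() {
    // business logic invoked from the service task expression
  }
}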

Contextual Programming Model

In this section we briefly look at the contextual process execution model used by the camunda-engine-cdi extension. A BPMN business process is typically a long-running interaction comprised of both user and system tasks. At runtime, a process is split up into a set of individual units of work, performed by users and/or application logic. In camunda-engine-cdi, a process instance can be associated with a CDI scope, the association representing a unit of work. This is particularly useful if a unit of work is complex, for instance if the implementation of a UserTask is a complex sequence of different forms and "non-process-scoped" state needs to be kept during this interaction. In the default configuration, process instances are associated with the "broadest" active scope, starting with the conversation and falling back to the request if the conversation context is not active.

Associating a Conversation with a Process Instance

When resolving @BusinessProcessScoped beans, or injecting process variables, we rely on an existing association between an active CDI scope and a process instance. camunda-engine-cdi provides the org.camunda.bpm.engine.cdi.BusinessProcess bean for controlling the association, most prominently:

  • the startProcessBy*(...) methods, mirroring the respective methods exposed by the RuntimeService, which start and subsequently associate a business process,
  • resumeProcessById(String processInstanceId), associating the process instance with the provided id,
  • resumeTaskById(String taskId), associating the task with the provided id (and, by extension, the corresponding process instance).

Once a unit of work (for example a UserTask) is completed, the completeTask() method can be called to disassociate the conversation/request from the process instance. This signals the engine that the current task is completed and makes the process instance proceed.
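A minimal sketch of this interaction, from a hypothetical request-handling bean:

import javax.inject.Inject;
import javax.inject.Named;

import org.camunda.bpm.engine.cdi.BusinessProcess;

@Named
public class BusinessTripTaskController {

  @Inject
  private BusinessProcess businessProcess;

  public void open(String taskId) {
    // associates the current scope with the task's process instance
    businessProcess.startTask(taskId);
  }

  public void submit() {
    // completes the task and disassociates the scope from the process instance
    businessProcess.completeTask();
  }
}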

Note that the BusinessProcess-bean is a @Named bean, which means that the exposed methods can be invoked using expression language, for example from a JSF page. The following JSF2 snippet begins a new conversation and associates it with a user task instance, the id of which is passed as a request parameter (e.g. pageName.jsf?taskId=XX):

<f:metadata>
  <f:viewParam name="taskId" />
  <f:event type="preRenderView" listener="#{businessProcess.startTask(taskId, true)}" />
</f:metadata>

Declaratively controlling the Process

camunda-engine-cdi allows declaratively starting process instances and completing tasks using annotations. The @org.camunda.bpm.engine.cdi.annotation.StartProcess annotation allows starting a process instance either by "key" or by "name". Note that the process instance is started after the annotated method returns. Example:

@StartProcess("authorizeBusinessTripRequest")
public String submitRequest(BusinessTripRequest request) {
  // do some work
  return "success";
}

Depending on the configuration of the camunda engine, the code of the annotated method and the starting of the process instance will be combined in the same transaction. The @org.camunda.bpm.engine.cdi.annotation.CompleteTask-annotation works in the same way:

  @CompleteTask(endConversation=false)
  public String authorizeBusinessTrip() {
      // do some work
      return "success";
  }

The @CompleteTask annotation offers the possibility to end the current conversation. The default behavior is to end the conversation after the call to the engine returns. Ending the conversation can be disabled, as shown in the example above.

Working with @BusinessProcessScoped beans

Using camunda-engine-cdi, the lifecycle of a bean can be bound to a process instance. To this end, a custom context implementation is provided, namely the BusinessProcessContext. Instances of BusinessProcessScoped beans are stored as process variables in the current process instance. BusinessProcessScoped beans need to be PassivationCapable (for example Serializable). The following is an example of a process-scoped bean:

@Named
@BusinessProcessScoped
public class BusinessTripRequest implements Serializable {
  private static final long serialVersionUID = 1L;
  private String startDate;
  private String endDate;
  // ...
}

Sometimes we want to work with process-scoped beans in the absence of an association with a process instance, for example before starting a process. If no process instance is currently active, instances of BusinessProcessScoped beans are temporarily stored in a local scope (i.e. the conversation or the request, depending on the context). If this scope is later associated with a business process instance, the bean instances are flushed to the process instance.

Built-In Beans

  • The ProcessEngine as well as the services are available for injection: @Inject ProcessEngine, RepositoryService, TaskService, ...
  • A specific named ProcessEngine and its services can be injected by adding the qualifier @ProcessEngineName("someEngine")
  • The current process instance and task can be injected: @Inject ProcessInstance, Task,
  • The current business key can be injected: @Inject @BusinessKey String businessKey,
  • The current process instance id can be injected: @Inject @ProcessInstanceId String pid.

Process variables are available for injection. camunda-engine-cdi supports

  • type-safe injection of @BusinessProcessScoped beans using @Inject [additional qualifiers] Type fieldName
  • unsafe injection of other process variables using the @ProcessVariable(name?) qualifier:

    @Inject
    @ProcessVariable
    private Object accountNumber;
    
    @Inject
    @ProcessVariable("accountNumber")
    private Object account;

In order to reference process variables using EL, we have similar options:

  • @Named @BusinessProcessScoped beans can be referenced directly,
  • other process variables can be referenced using the ProcessVariables-bean: #{processVariables['accountNumber']}

Injecting a process engine based on contextual data

While a specific process engine can be accessed by adding the qualifier @ProcessEngineName("name") to the injection point, this requires knowing at design time which process engine will be used. A more flexible approach is to resolve the process engine at runtime based on contextual information such as the logged-in user. In this case, @Inject can be used without a @ProcessEngineName annotation.

To implement resolution from contextual data, the producer bean org.camunda.bpm.engine.cdi.impl.ProcessEngineServicesProducer must be extended. The following code implements a contextual resolution of the engine by the currently authenticated user. Note that which contextual data is used and how it is accessed is entirely up to you.

@Specializes
public class UserAwareEngineServicesProvider extends ProcessEngineServicesProducer {

  // User can be any object containing user information from which the tenant can be determined
  @Inject
  private UserInfo user;

  @Specializes @Produces @RequestScoped
  public ProcessEngine processEngine() {

    // okay, maybe this should involve some more logic ;-)
    String engineForUser = user.getTenant();

    ProcessEngine processEngine =  BpmPlatform.getProcessEngineService().getProcessEngine(engineForUser);
    if(processEngine != null) {
      return processEngine;

    } else {
      return ProcessEngines.getProcessEngine(engineForUser, false);

    }
  }

  @Specializes @Produces @RequestScoped
  public RuntimeService runtimeService() {
    return processEngine().getRuntimeService();
  }

  @Specializes @Produces @RequestScoped
  public TaskService taskService() {
    return processEngine().getTaskService();
  }

  ...
}

The above code makes selecting the process engine based on the current user's tenant completely transparent. For each request, the currently authenticated user is retrieved and the correct process engine is looked up. Note that the class UserInfo represents any kind of context object that identifies the current tenant. For example, it could be a JAAS principal. The produced engine can be accessed in the following way:

@Inject
private RuntimeService runtimeService;

Cdi Event Bridge

The process engine can be hooked up to the CDI event bus. We call this the "Cdi Event Bridge". This allows us to be notified of process events using standard CDI event mechanisms. In order to enable CDI event support for an embedded process engine, enable the corresponding parse listener in the configuration:

<property name="postBpmnParseHandlers">
  <list>
    <bean class="org.camunda.bpm.engine.cdi.impl.event.CdiEventSupportBpmnParseListener" />
  </list>
</property>

Now the engine is configured for publishing events using the CDI event bus.

Note: The above configuration can be used in combination with an embedded process engine. If you want to use this feature in combination with the shared process engine in a multi-application environment, you need to add the CdiExecutionListener as a Process Application event listener. See the next section.

The following gives an overview of how process events can be received in CDI beans. In CDI, we can declaratively specify event observers using the @Observes-annotation. Event notification is type-safe. The type of process events is org.camunda.bpm.engine.cdi.BusinessProcessEvent. The following is an example of a simple event observer method:

public void onProcessEvent(@Observes BusinessProcessEvent businessProcessEvent) {
  // handle event
}

This observer would be notified of all events. If we want to restrict the set of events the observer receives, we can add qualifier annotations:

  • @BusinessProcessDefinition: restricts the set of events to a certain process definition. Example:

    public void onBillingProcessEvent(@Observes @BusinessProcessDefinition("billingProcess") BusinessProcessEvent evt) {
      // handle event
    }
  • @StartActivity: restricts the set of events by a certain activity. The following, for example, is invoked whenever an activity with the id "shipGoods" is entered:

    public void onShipGoodsStarted(@Observes @StartActivity("shipGoods") BusinessProcessEvent evt) {
      // handle event
    }
  • @EndActivity: restricts the set of events by a certain activity. The following, for example, is invoked whenever an activity with the id "shipGoods" is left:

    public void onShipGoodsEnded(@Observes @EndActivity("shipGoods") BusinessProcessEvent evt) {
      // handle event
    }
  • @TakeTransition: restricts the set of events by a certain transition; see the example below.
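    For example (the sequence flow id "flow1" is hypothetical), a transition-restricted observer looks analogous to the others:

    public void onTransitionTaken(@Observes @TakeTransition("flow1") BusinessProcessEvent evt) {
      // handle event
    }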

The qualifiers named above can be combined freely. For example, in order to receive all events generated when leaving the "shipGoods" activity in the "shippingProcess", we could write the following observer method:

public void beforeShippingGoods(@Observes @BusinessProcessDefinition("shippingProcess") @EndActivity("shipGoods") BusinessProcessEvent evt) {
  // handle event
}

In the default configuration, event listeners are invoked synchronously and in the context of the same transaction. CDI transactional observers (only available in combination with Java EE / EJB) allow you to control when the event is handed to the observer method. Using transactional observers, we can, for example, ensure that an observer is only notified if the transaction in which the event is fired succeeds:

public void onShipmentSucceeded(
  @Observes(during=TransactionPhase.AFTER_SUCCESS) @BusinessProcessDefinition("shippingProcess") @EndActivity("shipGoods") BusinessProcessEvent evt) {

  // send email to customer
}

The Cdi Event Bridge in a Process Application

In order to use the Cdi Event Bridge in combination with a multi-application deployment and the shared process engine, the CdiExecutionListener needs to be added as a Process Application Execution Event Listener.

Example configuration for Servlet Process Application:

@ProcessApplication
public class InvoiceProcessApplication extends ServletProcessApplication {

  protected ExecutionListener cdiExecutionListener = new CdiExecutionListener();

  public ExecutionListener getExecutionListener() {
    return cdiExecutionListener;
  }
}

Example configuration for Ejb Process Application:

@Singleton
@Startup
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
@TransactionAttribute(TransactionAttributeType.REQUIRED)
@ProcessApplication
@Local(ProcessApplicationInterface.class)
public class MyEjbProcessApplication extends EjbProcessApplication {

  protected ExecutionListener cdiExecutionListener = new CdiExecutionListener();

  @PostConstruct
  public void start() {
    deploy();
  }

  @PreDestroy
  public void stop() {
    undeploy();
  }

  public ExecutionListener getExecutionListener() {
    return cdiExecutionListener;
  }
}

Overview

When testing Process Applications you first have to be clear on what scope you want to test. Often Process Applications orchestrate various existing services, meaning a Process Application test quickly becomes an integration test. We differentiate the following scopes when testing Process Applications:

  • Testing process definitions only, as isolated as possible.
  • Testing your process application including e.g. CDI or EJB beans.
  • Integration testing of your applications with other deployments or services (maybe deployed as mock services) on your application server.
  • End-to-end integration test including all external systems.

Unit Testing

Business processes are an integral part of software projects and they should be tested in the same way normal application logic is tested: with unit tests. Since the camunda engine is an embeddable Java engine, writing unit tests for business processes is as simple as writing regular unit tests.

camunda supports both the JUnit 3 and JUnit 4 styles of unit testing. In the JUnit 3 style, the ProcessEngineTestCase must be extended. This makes the ProcessEngine and the services available through protected member fields. In the setUp() of the test, the process engine will be initialized by default with the camunda.cfg.xml resource on the classpath. To specify a different configuration file, override the getConfigurationResource() method. Process engines are cached statically over multiple unit tests when the configuration resource is the same.

By extending ProcessEngineTestCase, you can annotate test methods with @Deployment. Before the test is run, a resource file of the form testClassName.testMethod.bpmn20.xml in the same package as the test class will be deployed. At the end of the test, the deployment will be deleted, including all related process instances, tasks, etc. The @Deployment annotation also supports setting the resource location explicitly; see the example below and the Javadocs for more details.
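
For example, an explicit location can be given like this (the resource path is hypothetical):

@Deployment(resources = { "org/example/invoiceProcess.bpmn20.xml" })
public void testStartInvoiceProcess() {
  // ...
}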

Taking all that into account, a JUnit 3 style test looks as follows:

public class MyBusinessProcessTest extends ProcessEngineTestCase {

  @Deployment
  public void testSimpleProcess() {
    runtimeService.startProcessInstanceByKey("simpleProcess");

    Task task = taskService.createTaskQuery().singleResult();
    assertEquals("My Task", task.getName());

    taskService.complete(task.getId());
    assertEquals(0, runtimeService.createProcessInstanceQuery().count());
  }
}

To get the same functionality when using the JUnit 4 style of writing unit tests, the ProcessEngineRule must be used. Through this rule, the process engine and services are available through getters. As with the ProcessEngineTestCase (see above), including this rule will enable the use of the @Deployment annotation (see above for an explanation of its use and configuration) and it will look for the default configuration file on the classpath. Process engines are statically cached over multiple unit tests when using the same configuration resource.

The following code snippet shows an example of using the JUnit 4 style of testing and the usage of the ProcessEngineRule.

public class MyBusinessProcessTest {

  @Rule
  public ProcessEngineRule processEngineRule = new ProcessEngineRule();

  @Test
  @Deployment
  public void ruleUsageExample() {
    RuntimeService runtimeService = processEngineRule.getRuntimeService();
    runtimeService.startProcessInstanceByKey("ruleUsage");

    TaskService taskService = processEngineRule.getTaskService();
    Task task = taskService.createTaskQuery().singleResult();
    assertEquals("My Task", task.getName());

    taskService.complete(task.getId());
    assertEquals(0, runtimeService.createProcessInstanceQuery().count());
  }
}
Our Project Templates for Maven give you a complete running project including a JUnit test out of the box.

Debugging unit tests

When using the in-memory H2 database for unit tests, the following instructions allow you to easily inspect the data in the engine database during a debugging session. The steps are described for Eclipse, but the mechanism should be similar for other IDEs.

Suppose we have put a breakpoint somewhere in our unit test. In Eclipse this is done by double-clicking in the left margin next to the code.

If we now run the unit test in debug mode (right-click in the test class, select 'Debug As' and then 'JUnit Test'), the test execution halts at our breakpoint, where we can now inspect the variables of our test.

To inspect the data, open the 'Display' window (if this window isn't there, open Window->Show View->Other and select Display) and type (code completion is available): org.h2.tools.Server.createWebServer("-web").start()

Select the line you've just typed and right-click on it. Now select 'Display' (or execute the shortcut instead of right-clicking).

Now open a browser, go to http://localhost:8082, fill in the JDBC URL of the in-memory database (by default this is jdbc:h2:mem:camunda) and hit the connect button.

You can now see the engine database and use it to understand how and why your unit test is executing your process in a certain way.
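
If you prefer not to type the expression into the Display view each time, the same console can be started from code. A minimal sketch, assuming the h2 dependency is available on the test classpath:

import org.h2.tools.Server;

public class H2ConsoleStarter {

  public static void main(String[] args) throws Exception {
    // starts the H2 web console on http://localhost:8082 (the default port)
    Server server = Server.createWebServer("-web").start();
    System.out.println("H2 console available at " + server.getURL());
  }
}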

Using Mocks to test your Process Application


Using Arquillian to test your Process Application

In Java EE environments we frequently use JBoss Arquillian to test Process Applications, because it makes bootstrapping the engine simple. We will add more documentation on this soon - for the moment please refer to the Arquillian Getting Started Guide.

Our Project Templates for Maven give you a complete project already containing a running Arquillian test.

What is Cycle?

With Cycle you can synchronize the BPMN diagrams in your business analyst's BPMN tool with the technically executable BPMN 2.0 XML files your developers edit with their modeler (e.g. in Eclipse). Depending on your tool, this enables both forward and reverse engineering, while you can store your BPMN 2.0 XML files in different repositories (e.g. SVN, file system or FTP servers).

Although business and IT use different BPMN tools, the process models stay in sync: with camunda Cycle you can synchronize BPMN diagrams in the tool chain anytime, for forward engineering as well as reverse engineering. By connecting and continuously synchronizing the process models in both environments, we keep business and IT aligned. This is what we call a full working BPM roundtrip.

The typical use cases are:

  • Synchronize a BPMN 2.0 diagram with an executable diagram (Forward Engineering)
  • Update the executable diagram and synchronize the changes with the original BPMN 2.0 diagram (Reverse Engineering)
  • Create executable diagrams out of a BPMN 2.0 diagram (Forward Engineering)

Cycle is part of the camunda BPM distribution and ready to use at http://localhost:8080/cycle. At first startup you will be prompted to create an admin user. If you are new to Cycle, have a look at our Hands-On Cycle Tutorial.

Connector Configuration

To connect Cycle to a suitable repository you can set up one of the following connectors:

  • Signavio Connector
  • Subversion Connector
  • File System Connector

Furthermore, you get information about how to configure User Credentials for your connector.

Signavio Connector

For directly accessing your process models stored in Signavio, you must set up a Signavio Connector. For Signavio's SaaS edition a typical setup uses globally provided credentials, meaning that every Cycle user connects to the repository with the same credentials. If you are behind a proxy, you can configure that here as well.

Hit Test to check if Cycle can find the folder you specified.

Subversion Connector

Use the Subversion Connector to connect to a subversion repository such as SVN or GitHub. You must specify the URL (including subfolders, if you want to point directly to a certain folder in the repository). If user credentials are mandatory, you can provide them either globally or individually for each Cycle user, e.g. a GitHub repository with globally provided credentials.

Hit Test to check if Cycle can find the folder you specified.

File System Connector

Use the File System Connector to use models stored on your local system. Select the File System Connector as connector plugin. The variable ${user.home} points to the directory of your OS user account. You can also choose an absolute path like C:\MyFolder.

Hit Test to check if Cycle can find the folder you specified.

User Credentials

If your repository requires a login you can choose between credentials provided by the user or globally provided ones. Globally provided credentials can be set directly in the connector setup menu and are valid for every Cycle user.

To set up credentials provided by the user, you need to enter the My Profile menu and select add credentials for your connector.

Hit Test to check if the credentials are valid.

BPMN 2.0 Roundtrip

When we talk about a roundtrip, we are talking about the synchronization of BPMN 2.0 diagrams between the business perspective and the technical perspective. This synchronization is based on the standard BPMN 2.0 XML format. As only executable processes matter on the technical side, Cycle provides the functionality to extract these processes out of models from the business side, where manual (non-executable) processes can be modeled as well. This extraction mechanism is what we call Pool Extraction. With Cycle, you can do this synchronization in both directions.

Step 1: Setup the Connector

Setup a suitable connector for your repository as described in the section Connector Configuration. In this walkthrough we use a Signavio Connector with user provided credentials.

Hit Test to check if Cycle can access your Signavio account.

Step 2: Add process model from the repository

In the left box of your roundtrip, click Add Process Model, pick a name for your modeling tool and choose the Signavio connector from the connector dropdown. Cycle now connects to Signavio, so after a short time you can navigate through the chosen repository to select your process model.

After you hit Add, Cycle will save a link to the process model you selected, and offer you a preview image in the left box of your roundtrip. It also says that the process model has not yet been synchronized, which is true. Changes on the diagram in Signavio will be updated automatically by Cycle.

Step 3: Create BPMN file for execution

Hit Create and choose the location where you want the BPMN 2.0 XML file to be stored. In our example, we want to store it on our local file system, in a workspace we use with our Eclipse IDE. After hitting Create, Cycle will connect to Signavio, request the BPMN 2.0 XML and save it to the location you specified. Please note that no diagram picture will be displayed until an image file of the diagram is stored in the folder. Cycle indicates that both models are "in sync" now.

Heads up! If your process model is a collaboration diagram, Cycle will perform a Pool Extraction, which means that only pools flagged as executable will be included.

Step 4: Edit BPMN File

Now Cycle shows you that your roundtrip consists of the BPMN diagram stored in Signavio (left side) and the BPMN 2.0 file stored in your file repository (right side). You can also see that the two process models are currently in sync, as well as the date and time of the last synchronization.

You can now either check out the BPMN 2.0 XML from your subversion repository or open it directly from your local drive. In both cases, you can now edit it inside your Eclipse IDE using the camunda Modeler.

Step 5: Reverse Engineering

After you have worked on the executable process model, the models are out of sync, indicated by the red label "change since last sync" on the side where the change happened.

You can now hit the sync button in the corresponding direction (in our case from right to left). Afterwards you will be prompted to confirm the synchronization, with the possibility to add a commit message.

Now both models are synchronized again, indicated by the green labels "in sync" on both sides.

Please note! The BPMN 2.0 modeling tool must support the complete BPMN 2.0 standard and must be able to export/serialize process diagrams in valid BPMN 2.0 XML files. For more detailed information about requirements and suitable tools check the section "Roundtrip with other Tools" in our Cycle Tutorial.

Pool Extraction

During a roundtrip from the business perspective to a technical process diagram, Cycle checks which pools are flagged as "executable". Only those pools will actually be synchronized into the executable process model, so you don't have to bother with huge diagrams describing manual flows. We call this feature "Pool Extraction". When you synchronize the executable diagram back with the original diagram, the "non-executable" pools will be merged back into the diagram. No information gets lost.

The following example shows the relevant XML tag:

<process id="sid-8E90631B-169F-4CD8-9C6B-1F31121D0702" name="MyPool" isExecutable="true">

Engine Attributes

An executable process model usually contains engine-specific attributes in the BPMN 2.0 XML. So we have to make sure that these attributes are not lost during a roundtrip with another tool. The BPMN 2.0 standard explicitly defines an extension mechanism for these attributes in the XML. That means that a proper BPMN 2.0 import and export functionality must keep the engine attributes, even if they are added as an engine extension.

The camunda BPM process engine uses a multitude of attributes for configuration purposes, which can be set up in the camunda Modeler. Cycle retains these attributes during the roundtrip. Here is an example:

The XML export from the Signavio modeler contains no engine attributes:

<serviceTask completionQuantity="1" id="sid-01234"
                   implementation="webService"
                   isForCompensation="false"
                   name="MyService"
                   startQuantity="1"/>

After an update with the camunda Modeler, camunda-specific engine attributes such as camunda:class and camunda:failedJobRetryTimeCycle were added:

<serviceTask id="sid-01234" camunda:class="java.lang.Object"
             camunda:async="true"
             name="MyService" 
             implementation="webService">
  <extensionElements>
    <camunda:failedJobRetryTimeCycle>R3/PT10M</camunda:failedJobRetryTimeCycle>
  </extensionElements>
  <incoming>sid-3DED1BA0-77FC-4768-AA3E-0B60A81850EA</incoming>
  <outgoing>sid-E6D3AB73-386C-4260-82B9-CB740B82001F</outgoing>
</serviceTask>

After synchronization back to Signavio, the original Signavio information like completionQuantity, isForCompensation and startQuantity was merged back:

<serviceTask camunda:async="true" camunda:class="java.lang.Object"
            completionQuantity="1" 
            id="sid-01234" 
            isForCompensation="false" 
            name="MyService" 
            startQuantity="1">
   <extensionElements>
      <camunda:failedJobRetryTimeCycle xmlns:camunda="http://activiti.org/bpmn">R3/PT10M</camunda:failedJobRetryTimeCycle>
   </extensionElements>
   <incoming>sid-3DED1BA0-77FC-4768-AA3E-0B60A81850EA</incoming>
   <outgoing>sid-E6D3AB73-386C-4260-82B9-CB740B82001F</outgoing>
</serviceTask>

What is Tasklist?

The Tasklist is a demo web application that allows you to work on User Tasks. The Tasklist is part of the camunda BPM distribution and ready to use at http://localhost:8080/camunda/app/tasklist.

Notice: The Tasklist is a demo application only. You may use it as a basis for your own projects or as an inspiration to write your own.

Find additional information about how to use the Tasklist in our Developing Process Applications tutorial.

Human Workflow Management

In the following example we walk through a typical human workflow scenario. The Tasklist has four demo users which belong to different user groups. Sign in with the user demo and start a process instance.

Start a Process

To start a process instance via the Tasklist, hit the dropdown button and select a process. If no process is listed, please check that your process is deployed correctly.

Depending on whether you have defined a start form for your process, it will be displayed now. Otherwise you get the notification that no form has been defined for starting the process. In this case, click Start process using generic form. The generic task form allows you to enter variables for your process.

In our example you have to enter the desired values and hit Start Process to step through.

Working on Tasks / Task Completion

Tasks that are assigned to you are listed on the Tasklist main page where you can hit the button to start working on a task.

In our example task form you are asked to assign an approver for your invoice. Enter the name of a colleague who should be assigned to approve the task. Have a look at the Task Overview: the assigned task is now in your colleague's folder.

If no task form is defined for a Start Event you will be forwarded to a generic form. In a generic form you can define the input data yourself.

When you complete a task by submitting the task form, the task is completed and the process continues in the engine.

Furthermore, you can visualize the process model by clicking on the symbol. By highlighting the current task, the visualization also shows you your task in the context of the whole process.

User and Group Task Overview

In the User and Group Task overview you can see how many tasks are assigned to you, to the different groups and to your colleagues. Have a look in your colleagues' folders: you can see their tasks, but you are not able to work on them.

The folder Inbox contains all tasks that are assigned to your user groups. Like the tasks of a group, they are ready to be claimed.

(Un-)Claim a Task

If a task is assigned to a group, more than one person sees it at once. In order to avoid different people working on it at the same time, the task needs to be claimed first. By claiming a task you become the assignee and the task is moved to your personal task folder ("My Tasks"). Hit the button and select claim.

Users can also unclaim a task by selecting unclaim. The task will then go back to the associated user group.

You can bulk (un-)claim tasks after selecting multiple tasks via ctrl + click.

Delegate a Task

When you delegate a task to one of your colleagues, they have the possibility to take a look at the task and give it back to you. This can be helpful if you want to get help or feedback from a colleague. Delegating a task does not mean assigning it to someone else: after delegation, you are still the assignee of the task. If you do not want to be the assignee of a task, use the (un-)claim function.

You can bulk delegate tasks after selecting multiple tasks via ctrl + click.

User Assignment

Tasks can be directly assigned to a user, to a candidate group (group) or to a candidate list (multiple users). Compared to the direct assignment of tasks, the tasks of candidate groups or candidate lists are not yet assigned; they must be claimed by a user who is part of the list or belongs to the group. Depending on this affiliation, the Tasklist displays tasks in different folders.

In the properties panel in your camunda Modeler you can configure all relevant attributes.

To determine which user or user group is able to work with a task you can set the following extension attributes for your User Task:

  • Assignee: directly assigns a user

    <userTask id="theTask" name="my task" camunda:assignee="John"></userTask>
  • Candidate User: makes a user a candidate for a task

    <userTask id="theTask" name="my task" camunda:candidateUsers="John, Mary"></userTask>
  • Candidate Group: makes the users of a group candidates for a task

    <userTask id="theTask" name="my task" camunda:candidateGroups="management, accountancy"></userTask>

You can define Candidate Users and Candidate Groups on the same task. Find more detailed information regarding the extension attributes for User Tasks here.

Task Forms

The Tasklist can work with different types of forms. To implement a Task Form in your application you have to connect the form resource with the BPMN 2.0 element in your process diagram. Suitable BPMN 2.0 elements for calling Task Forms are the Start Event and the User Task.

Embedded Task Forms

To add an embedded Task Form to your application simply create an HTML file and attach it to a User Task or a Start Event in your process model. Add a folder src/main/webapp/forms to your project folder and create a FORM_NAME.html file containing the relevant content for your form. The following example shows a simple form with two input fields:

<form class="form-horizontal">
  <div class="control-group">
    <label class="control-label">Customer ID</label>
    <div class="controls">
      <input form-field type="string" name="customerId"></input>
    </div>
  </div>
  <div class="control-group">
    <label class="control-label">Amount</label>
    <div class="controls">
      <input form-field type="number" name="amount"></input>
    </div>
  </div>
</form>

To configure the form in your process open the process in your Eclipse IDE with the camunda Modeler and select the desired User Task or Start Event. Open the properties view and enter embedded:app:forms/FORM_NAME.html as Form Key. The relevant XML tag looks like this:

<userTask id="theTask" camunda:formKey="embedded:app:forms/FORM_NAME.html"
          camunda:candidateUsers="John, Mary"
          name="my Task">

External Task Forms

If you want to call a Task Form that is not part of your application, you can add a reference to the desired form. The referenced Task Form is configured similarly to the embedded Task Form: open the properties view and enter FORM_NAME.html as Form Key. The relevant XML tag looks like this:

<userTask id="theTask" camunda:formKey="FORM_NAME.html"
          camunda:candidateUsers="John, Mary"
          name="my Task">

The Tasklist creates the URL following this pattern:

"../.." + contextPath (of process application) + "/" + "app" + formKey (from BPMN 2.0 XML) + "processDefinitionKey=" + processDefinitionKey + "&callbackUrl=" + callbackUrl;

When you have completed the task, the callback URL will be called.

Generic Task Forms

The generic form will be used whenever you have not added a dedicated form for a User Task or a Start Event.

Hit the button to add a variable that will be passed to the process instance upon task completion. State a variable name, select the type and enter the desired value. Enter as many variables as you need. After hitting the Complete Task button, the process instance contains the entered values. Generic Task Forms can be very helpful during the development stage, as you do not need to implement all Task Forms before you can run a workflow. For debugging and testing, this concept has many benefits as well.

Task Lifecycle

The diagram below shows the task lifecycle and the transitions supported by camunda BPM. To learn how to programmatically work with the lifecycle in your application, refer to the Java API Reference.

What is Cockpit?

With camunda BPM Cockpit you can monitor and administer your running process instances. The Cockpit architecture allows the use of plugins to extend the functionality, so you can individually adapt the tool to your requirements.

Start Page View

On the start page of Cockpit you get an overview of the installed plugins - at least you will see the two pre-installed plugins. Additionally installed plugins will be automatically added below the existing ones.

Deployed Processes (List)

With this plugin you can easily observe the state of a process definition. Green and red dots signal running and failed jobs. At this level, a red dot signifies that there is at least one process instance or sub process instance with an unresolved incident. You can localize the problem by using the Process Definitions View.

Deployed Processes (Icons)

This plugin gives you an overview of all processes deployed on the engine and displays them as rendered process models. In addition, you get information about how many instances of the process are currently running and about the process state. Green and red dots signal running and failed jobs. Click on the model to get to the Process Definitions View.

Multi Tenancy

If you are working with more than one engine, you can select the desired engine via a dropdown. Cockpit then provides all information for the selected engine.

Process Definitions View

The Process Definitions View provides you with information about the definition and the status of a process. On the left hand side you can easily survey the versions of the process and how many instances of each version are running. Incidents of all running process instances are displayed together with an instance counter label in the corresponding rendered diagram, so it is easy to locate failed activities in the process. Use the mouse to navigate through the diagram: by turning the mouse wheel you can zoom in and out; hold the left mouse button pressed to slide the diagram in the desired direction.

In the tab `Process Instances` all running instances are listed in a table view. Besides information about start time, business key and state, you can select an instance by ID and drill down to the Process Instance View.
The tab `Called Process Definitions` displays the called child processes. In the column Called Process Definition, the names of the called sub processes are listed. Click on a name to display the process in the Process Definitions View. Please note that a filter called Parent is automatically set for the process, so that you only see the instances that belong to the parent process. In the column Activity you can select the instance that is calling the child process.

Filter

The filter function on the left hand side of the Process Definitions View allows you to find certain instances by filtering for variables or business keys, or by selecting the version of a process. Beyond that, you can combine different filters as a logical AND relation. Filter expressions on variables must be specified as variableName OPERATOR value, where the operator may be one of the following terms: =, !=, >, >=, <, <=, like. Apart from the like operator, the operator expression does not have to be separated by spaces. The like operator is for string variables only; you can use % as a wildcard in the value expression. String values must be properly enclosed in "". An example follows below.
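
For illustration (the variable names are hypothetical), valid filter expressions look like this:

amount>=100
invoiceId="A-123"
customerName like "%GmbH%"

Only the like expression needs spaces around the operator; string values are enclosed in double quotes.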

Process Instance Detail View

Open the Process Instance View by selecting a process instance from the Process Definitions View instance list. This view allows you to drill down into a single process instance and explore its running activities as well as the variables, tasks, jobs, etc.

Beside the diagram view, the process will be displayed as an Activity Instance Tree View. Variables that belong to the instance will be listed in a variables table of the Detailed Information Panel. You can select single or multiple ('ctrl + click') flow nodes in the interactive BPMN 2.0 diagram, or you can select an activity instance within the activity tree view. As the diagram, tree view and variables table correspond with each other, a selected flow node will also be selected in the tree and its associated variables will be shown, and vice versa.

Activity Instance Tree

The activity instance tree contains a node for each activity that is currently active in the process instance. It allows you to select activity instances to explore their details. At the same time, the selected instance will be marked in the rendered process diagram and the corresponding variables will be listed in the Detailed Information Panel.

Detailed Information Panel

Use the Detailed Information Panel to get an overview of the variables, incidents, called process instances and user tasks that the process instance contains. Depending on the activity instance selected in the rendered diagram, the panel lists the corresponding information. You can also focus on an activity instance via a scope link in the table.
In addition to the instance information, you can edit variables or change the assignee of user tasks.

Adding Variables

Hit the button on the right hand side to add variables to a process instance. You can choose between different data types. Please note that a variable will be overwritten if you add a new variable with an existing name.
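
The same effect can be achieved through the Java API; a short sketch (the variable name and value are made up):

// adds the variable, or overwrites it if a variable with this name already exists
runtimeService.setVariable(processInstanceId, "amount", 42);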

Editing Variables

Hit the symbol in the Detailed Information Panel to edit variables. This feature allows you to change the value of variables as well as their type. Validation of the date format and of integer values happens on the client side. If you enter NULL, the variable will be converted to a string.

Cancel a Process Instance

When you select a single process instance you can cancel it in the Process Instance View.

Hit the button on the right hand side. After you have confirmed this step, the runtime data of the cancelled instance will be deleted.
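
Programmatically, cancelling a process instance corresponds to a call like the following (the delete reason is free text):

// cancels the process instance and removes its runtime data
runtimeService.deleteProcessInstance(processInstanceId, "cancelled in Cockpit");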

Failed Jobs

Unresolved incidents of a process instance or a sub process instance are indicated by Cockpit as failed jobs. To localize which instance of a process failed, Cockpit allows you to drill down to the unresolved incident by using the process status dots. Hit a red status dot of the affected instance in the Process Definitions View to get an overview of all incidents. The `incidents` tab in the Detailed Information Panel lists the failed activities with additional information. Furthermore, you can drill down to the failing instance of a sub process.

Retry a Failed Job

To resolve a failed job, use the button on the right hand side. Select the corresponding instance in the confirmation dialog and the engine will re-trigger this job and increment its retry value in the database.
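
The same operation is available through the Java API; a short sketch (the retry count of 3 is just an example):

// resets the retries so that the job executor picks the job up again
managementService.setJobRetries(jobId, 3);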

Plugins

Cockpit defines a plugin concept to add your own functionality without being forced to extend or hack the Cockpit web application. You can add plugins at various plugin points, e.g. the start page.

The nature of a cockpit plugin

A cockpit plugin is a Maven JAR project that is included in the cockpit web application as a library dependency. It provides a server-side and a client-side extension to cockpit.


On the server-side, it can extend cockpit with custom SQL queries and JAX-RS resource classes. Queries (defined via MyBatis) may be used to squeeze additional intel out of an engine database or to execute custom engine operations. JAX-RS resources on the other hand extend the cockpit API and expose data to the client-side part of the plugin.

On the client-side, a plugin may include AngularJS modules to extend the cockpit web application. Via those modules, a plugin provides custom views and services.

File structure

The basic skeleton of a cockpit plugin looks as follows:

cockpit-plugin/
├── src/
|   ├── main/
|   |   ├── java/
|   |   |   └── org/my/plugin/
|   |   |       ├── db/
|   |   |       |   └── MyDto.java                                    (5)
|   |   |       ├── resource/
|   |   |       |   ├── MyPluginRootResource.java                     (3)
|   |   |       |   └── ...                                           (4)
|   |   |       └── MyPlugin.java                                     (1)
|   |   └── resources/
|   |       ├── META-INF/services/
|   |       |   └── org.camunda.bpm.cockpit.plugin.spi.CockpitPlugin  (2)
|   |       └── org/my/plugin/
|   |           ├── queries/
|   |           |   └── sample.xml                                    (6)
|   |           └── assets/                                           (7)
|   |               └── app/
|   |                   ├── plugin.js                                 (8)
|   |                   ├── view.html
|   |                   └── ...
|   └── test/
|       ├── java/
|       |   └── org/my/plugin/
|       |       └── MyPluginTest.java
|       └── resources/
|           └── camunda.cfg.xml
└── pom.xml

As runtime relevant resources it defines

  1. a plugin main class
  2. a META-INF/services entry that publishes the plugin to cockpit
  3. a plugin root JAX-RS resource that wires the server-side API
  4. other resources that are part of the server-side API
  5. data transfer objects used by the resources
  6. mapping files that provide additional cockpit queries as MyBatis mappings
  7. resource directory from which client-side plugin assets are served as static files
  8. a main file that bootstraps the client-side plugin in an AngularJS / RequireJS environment
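
For illustration, the plugin main class (1) could look as follows. This is a minimal sketch assuming the AbstractCockpitPlugin convenience base class from the cockpit plugin API; the fully qualified class name is then registered in the services file (2):

package org.my.plugin;

import org.camunda.bpm.cockpit.plugin.spi.impl.AbstractCockpitPlugin;

public class MyPlugin extends AbstractCockpitPlugin {

  public static final String ID = "my-plugin";

  @Override
  public String getId() {
    return ID;
  }
}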

What is Admin?

Along with the camunda web applications we ship Admin, accessible via http://localhost:8080/camunda/app/admin/. Admin is a small application that allows you to configure users and groups via the engine's Identity Service. Furthermore, you can connect camunda Admin to your LDAP system.

Initial User Setup

On first access of a process engine through Cockpit or Tasklist a setup screen will be shown. That screen allows you to configure an initial user account with administrator rights.

Administrator users are not global but per engine. Thus, you will need to set up an admin user for every single engine.

My Profile

In the My Profile menu you can edit your personal account settings like:

  • Profile: Change your name or email address. You cannot change the user account ID!
  • Account: Change your password or delete your account. Be careful, deleting cannot be undone.
  • Groups: This menu lists all groups of which you are a member. With administrator rights you can assign your account to available groups.

Administrator Account

Users who belong to the group camunda-admin have administrator privileges. There must be at least one member in this group; otherwise the initial setup screen appears. Besides user and group management, as an administrator you are able to define authorization rules for users and groups, to control access permissions for applications and to set the visibility of users and groups.

In the following you will learn how to use an administrator account with the help of a simple use case. You will create a group with two users who will be able to work together in Tasklist.

Users

The Users administration menu allows you to add, edit and delete user profiles.

Log in with your admin account and add two new users. Give them a unique ID and a password you can remember.

Groups

The Groups administration menu allows you to add, edit and delete user groups.

Create a new group called support and add the new users to the group. To do so, go back to the Users menu and edit the new accounts: in the Groups menu you can add the user to the support group.

Authorizations

Manage authorizations for users, groups and applications. Define which user or group has access to the applications and which users are visible to other groups or direct group members.

Application Access

Set the authorizations for the new group and the created users. First you have to define which applications the members of your new group have access to. Select the Application menu and create a new Application Authorization rule. The group members should be able to access Tasklist, so add the following rule:

Now every member of the group support can use Tasklist.

Furthermore, you want one of the new users to get access to Cockpit. Therefore add a new user-specific rule:

This specific rule is valid for the user lemmy only and grants him additional access authorization.

Log in with the new user accounts and test whether you can access the desired application.

Member Visibility

Depending on the user's authorizations, Tasklist will show you information about your colleagues and groups. Currently you can only see the group folder support but not your colleague. To change that, log in to the admin application as administrator, enter the Users Authorization menu and create the following rules:

Now every member of the group support is able to see the new users lemmy and ozzy.

LDAP Connection

If you connect the camunda BPM platform to your LDAP identity service, you have read-only access to the users and groups. Create new users and groups via the LDAP system, not in the admin application. Find more information about how to configure the process engine to use the LDAP identity service here.