Welcome to the Camunda BPM user guide! Camunda BPM is a Java-based framework for process automation. This document contains information about the features provided by the Camunda BPM platform.
Camunda BPM is built around the process engine component. The following illustration shows the most important components of Camunda BPM along with some typical user roles.
Process Engine & Infrastructure
Web Applications
Additional Tools
Camunda is a flexible framework which can be used in different contexts. See Architecture Overview for more details. Based on how you want to use camunda, you can choose a different distribution.
Camunda provides separate runtime downloads for community users and enterprise subscription customers:
Download the full distribution if you want to use a shared process engine or if you want to get to know camunda quickly, without any additional setup or installation steps required*.
The full distribution bundles
* Note that if you download the full distribution for an open source application server/container, the container itself is included. For example, if you download the tomcat distribution, tomcat itself is included and the camunda binaries (process engine and webapplications) are pre-installed into the container. This is not true for the the Oracle Weblogic and IBM Websphere downloads. These downloads do not include the application servers themselves.
See Installation Guide for additional details.
Download the standalone web application distribution if you want to use Cockpit, Tasklist, Admin applications as a self-contained WAR file with an embedded process engine.
The standalone web application distribution bundles
The standalone web application can be deployed to any of the supported application servers.
The Process engine configuration is based on the Spring Framework. If you want to change the
database configuration, edit the WEB_INF/applicationContext.xml
file inside the WAR file.
See Installation Guide for additional details.
Camunda Modeler is an Eclipse based modeling Tool for BPMN 2.0. Camunda Modeler can be downloaded from the community download page.
The getting started tutorials can be found at http://docs.camunda.org/guides/getting-started-guides/.
camunda BPM is a Java-based framework. The main components are written in Java and we have a general focus on providing Java developers with the tools they need for designing, implementing and running business processes and workflows on the JVM. Nevertheless, we also want to make the process engine technology available to Non-Java developers. This is why camunda BPM also provides a REST API which allows you to build applications connecting to a remote process engine.
camunda BPM can be used both as a standalone process engine server or embedded inside custom Java applications. The embeddability requirement is at the heart of many architecture decisions within camunda BPM. For instance, we work hard to make the process engine component a lightweight component with as little dependencies on third-party libraries as possible. Furthermore, the embeddability motivates programming model choices such as the capabilities of the process engine to participate in Spring Managed or JTA transactions and the threading model.
Required third-party libraries
See section on third-party libraries.
camunda BPM platform is a flexible framework which can be deployed in different scenarios. This section provides an overview over the most common deployment scenarios.
In this case the process engine is added as an application library to a custom application. This way the process engine can easily be started and stopped with the application lifecycle. It is possible to run multiple embedded process engines on top of a shared database.
In this case the process engine is started inside the runtime container (Servlet Container, Application Server, ...). The process engine is provided as a container service and can be shared by all applications deployed inside the container. The concept can be compared to a JMS Message Queue which is provided by the runtime and can be used by all applications. There is a one-to-one mapping between process deployments and applications: the process engine keeps track of the process definitions deployed by an application and delegates execution to the application in question.
In this case the process engine is provided as a network service. Different applications running on the network can interact with the process engine through a remote communication channel. The easiest way for making the process engine accessible remotely is to use the built-in REST api. Different communication channels such as SOAP Webservices or JMS are possible but need to be implemented by users.
In order to provide scale-up or fail-over capabilities, the process engine can be distributed to different nodes in a cluster. Each process engine instance must then connect to a shared database.
The individual process engine instances do not maintain session state across transactions. Whenever the process engine runs a transaction, the complete state is flushed out to the shared database. This makes it possible to route subsequent requests which do work in the same process instance to different cluster nodes. This model is very simple and easy to understand and imposes limited restrictions when it comes to deploying a cluster installation. As far as the process engine is concerned there is also no difference between setups for scale-up and setups for fail-over (as the process engine keeps no session state between transactions).
The process engine job executor is also clustered and runs on each node. This way, there is no single point of failure as far as the process engine is concerned. The job executor can run in both homogeneous and heterogeneous clusters.
To serve multiple, independent parties with one Camunda installation, the process engine supports multi-tenancy. The following multi tenancy models are supported:
Users should choose the model which fits their data separation needs. Camunda's APIs provide access to processes and related data specific to each tenant. More details can be found in the multi-tenancy section.
The camunda BPM web applications are based on a RESTful architecture.
Frameworks used:
Additional custom frameworks developed by camunda hackers:
You can run the camunda BPM platform in every Java-runnable environment. camunda BPM is supported with our QA infrastructure in the following environments. Here you can find more information about our enterprise support.
Please find the supported environments for version 7.0 here.
Camunda BPM is developed by Camunda as an open source project in collaboration with the community. The "core project" (namely "Camunda BPM platform") is the basis for the Camunda BPM product which is provided by Camunda as a commercial offering. The commercial Camunda BPM product contains additional (non-open source) features and is provided to Camunda BPM customers with service offerings such as enterprise support and bug fix releases.
camunda supports the community in its effort to build additional community extensions under the camunda BPM umbrella. Such community extensions are maintained by the community and are not part of the commercial camunda BPM product. camunda does not support community extensions as part of its commercial services to enterprise subscription customers.
The following is a list of current (unsupported) community extensions:
Do you have a great idea around open source BPM you want to share with the world? Awesome! camunda will support you in building your own community extension. Have a look at our contribution guidelines to find out how to propose a community project.
In the following section all third-party libraries are listed on which components of the camunda platform depend.
The process engine depends on the following third-party libraries:
Additional optional dependencies:
The REST API depends on the following third-party libraries:
Additional optional dependencies:
The Spring support depends on the following third-party libraries:
The camunda Webapp (Cockpit, Tasklist, Admin) depends on the following third-party libraries:
Cycle depends on the following third-party libraries:
Javascript dependencies:
Java dependencies:
The camunda Modeler depends on the following third-party libraries:
The camunda platform provides a public API. This section covers the definition of the public API and backwards compatibility for version updates.
camunda BPM public API is limited to the following items:
Java API:
camunda-engine
: all non implementation Java packages (package name does not contain impl
)camunda-engine-spring
: all non implementation Java packages (package name does not contain impl
)camunda-engine-cdi
: all non implementation Java packages (package name does not contain impl
)HTTP API (REST API):
camunda-engine-rest
: HTTP interface (set of HTTP requests accepted by the REST API as documented in REST API reference). Java classes are not part of the public API.The camunda versioning scheme follows the MAJOR.MINOR.PATCH pattern put forward by Semantic Versioning. camunda will maintain public API backwards compatibility for MINOR version updates. Example: Update from version 7.1.x to 7.2.x will not break the public API.
You have a number of options to configure and create a process engine depending on whether you use a application managed or a shared, container managed process engine.
You manage the process engine as part of your application. The following ways exist to configure it:
A container of your choice (e.g. Tomcat, JBoss, GlassFish or WebSphere) manages the process engine for you. The configuration is carried out in a container specific way, see Runtime Container Integration for details.
The camunda engine uses the ProcessEngineConfiguration bean to configure and construct a standalone Process Engine. There are multiple subclasses available that can be used to define the process engine configuration. These classes represent different environments, and set defaults accordingly. It's a best practice to select the class the matches (the most) your environment, to minimize the number of properties needed to configure the engine. The following classes are currently available:
org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration
The process engine is used in a standalone way. The engine itself will take care of the transactions. By default, the database will only be checked when the engine boots (and an exception is thrown if there is no database schema or the schema version is incorrect).org.camunda.bpm.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration
This is a convenience class for unit testing purposes. The engine itself will take care of the transactions. An H2 in-memory database is used by default. The database will be created and dropped when the engine boots and shuts down. When using this, probably no additional configuration is needed (except when using for example the job executor or mail capabilities).org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration
To be used when the process engine is used in a Spring environment. See the Spring integration section for more information.org.camunda.bpm.engine.impl.cfg.JtaProcessEngineConfiguration
To be used when the engine runs in standalone mode, with JTA transactions.You can configure the process engine programmatically by creating the right ProcessEngineConfiguration object or use some pre-defined one:
ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration();
ProcessEngineConfiguration.createStandaloneInMemProcessEngineConfiguration();
Now you can call the buildProcessEngine()
operation to create a Process Engine:
ProcessEngine processEngine = ProcessEngineConfiguration.createStandaloneInMemProcessEngineConfiguration()
.setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_FALSE)
.setJdbcUrl("jdbc:h2:mem:my-own-db;DB_CLOSE_DELAY=1000")
.setJobExecutorActivate(true)
.buildProcessEngine();
The easiest way to configure your Process Engine is via through an XML file called camunda.cfg.xml
. Using that you can simply do:
ProcessEngine processEngine = ProcessEngines.getDefaultProcessEngine()
The camunda.cfg.xml
must contain a bean that has the id processEngineConfiguration
, select the best fitting ProcessEngineConfiguration
class suiting your needs:
<bean id="processEngineConfiguration" class="org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration">
This will look for an camunda.cfg.xml
file on the classpath and construct an engine based on the configuration in that file. The following snippet shows an example configuration:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="processEngineConfiguration" class="org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration">
<property name="jdbcUrl" value="jdbc:h2:mem:camunda;DB_CLOSE_DELAY=1000" />
<property name="jdbcDriver" value="org.h2.Driver" />
<property name="jdbcUsername" value="sa" />
<property name="jdbcPassword" value="" />
<property name="databaseSchemaUpdate" value="true" />
<property name="jobExecutorActivate" value="false" />
<property name="mailServerHost" value="mail.my-corp.com" />
<property name="mailServerPort" value="5025" />
</bean>
</beans>
Note that the configuration XML is in fact a Spring configuration. This does not mean that the camunda engine can only be used in a Spring environment! We are simply leveraging the parsing and dependency injection capabilities of Spring internally for building up the engine.
The ProcessEngineConfiguration object can also be created programmatically using the configuration file. It is also possible to use a different bean id:
ProcessEngineConfiguration.createProcessEngineConfigurationFromResourceDefault();
ProcessEngineConfiguration.createProcessEngineConfigurationFromResource(String resource);
ProcessEngineConfiguration.createProcessEngineConfigurationFromResource(String resource, String beanName);
ProcessEngineConfiguration.createProcessEngineConfigurationFromInputStream(InputStream inputStream);
ProcessEngineConfiguration.createProcessEngineConfigurationFromInputStream(InputStream inputStream, String beanName);
It is also possible not to use a configuration file, and create a configuration based on defaults (see the different supported classes for more information).
ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration();
ProcessEngineConfiguration.createStandaloneInMemProcessEngineConfiguration();
All these ProcessEngineConfiguration.createXXX()
methods return a ProcessEngineConfiguration
that can further be tweaked if needed. After calling the buildProcessEngine()
operation, a ProcessEngine
is created as explained above.
The bpm-platform.xml
file is used to configure camunda BPM platform in the following distributions:
The <process-engine ... />
xml tag allows defining a process engine:
<?xml version="1.0" encoding="UTF-8"?>
<bpm-platform xmlns="http://www.camunda.org/schema/1.0/BpmPlatform"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.camunda.org/schema/1.0/BpmPlatform http://www.camunda.org/schema/1.0/BpmPlatform">
<job-executor>
<job-acquisition name="default" />
</job-executor>
<process-engine name="default">
<job-acquisition>default</job-acquisition>
<configuration>org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration</configuration>
<datasource>java:jdbc/ProcessEngine</datasource>
<properties>
<property name="history">full</property>
<property name="databaseSchemaUpdate">true</property>
<property name="authorizationEnabled">true</property>
</properties>
</process-engine>
</bpm-platform>
See Deployment Descriptor Reference for complete documentation of the syntax of the bpm-platform.xml
file.
The process engine can also be configured and bootstrapped using the META-INF/processes.xml
file. See Section on processes.xml file for details.
See Deployment Descriptor Reference for complete documentation of the syntax of the processes.xml
file.
The Java API is the most common way of interacting with the engine. The central starting point is the ProcessEngine, which can be created in several ways as described in the configuration section. From the ProcessEngine, you can obtain the various services that contain the workflow/BPM methods. ProcessEngine and the services objects are thread safe. So you can keep a reference to 1 of those for a whole server.
ProcessEngine processEngine = ProcessEngines.getDefaultProcessEngine();
RuntimeService runtimeService = processEngine.getRuntimeService();
RepositoryService repositoryService = processEngine.getRepositoryService();
TaskService taskService = processEngine.getTaskService();
ManagementService managementService = processEngine.getManagementService();
IdentityService identityService = processEngine.getIdentityService();
HistoryService historyService = processEngine.getHistoryService();
FormService formService = processEngine.getFormService();
ProcessEngines.getDefaultProcessEngine()
will initialize and build a process engine the first time it is called and afterwards always return the same process engine. Proper creation and closing of all process engines can be done with ProcessEngines.init()
and ProcessEngines.destroy()
.
The ProcessEngines class will scan for all camunda.cfg.xml and activiti-context.xml files. For all camunda.cfg.xml
files, the process engine will be built in the typical way: ProcessEngineConfiguration.createProcessEngineConfigurationFromInputStream(inputStream).buildProcessEngine()
. For all activiti-context.xml
files, the process engine will be built in the Spring way: First the Spring application context is created and then the process engine is obtained from that application context.
All services are stateless. This means that you can easily run camunda BPM on multiple nodes in a cluster, each going to the same database, without having to worry about which machine actually executed previous calls. Any call to any service is idempotent regardless of where it is executed.
The RepositoryService is probably the first service needed when working with the camunda engine. This service offers operations for managing and manipulating deployments and process definitions. Without going into much detail here, a process definition is a Java counterpart of BPMN 2.0 process. It is a representation of the structure and behavior of each of the steps of a process. A deployment is the unit of packaging within the engine. A deployment can contain multiple BPMN 2.0 xml files and any other resource. The choice of what is included in one deployment is up to the developer. It can range from a single process BPMN 2.0 xml file to a whole package of processes and relevant resources (for example the deployment 'hr-processes' could contain everything related to hr processes). The RepositoryService allows to deploy such packages. Deploying a deployment means it is uploaded to the engine, where all processes are inspected and parsed before being stored in the database. From that point on, the deployment is known to the system and any process included in the deployment can now be started.
Furthermore, this service allows to
While the RepositoryService is rather about static information (ie. data that doesn't change, or at least not a lot), the RuntimeService is quite the opposite. It deals with starting new process instances of process definitions. As said above, a process definition defines the structure and behavior of the different steps in a process. A process instance is one execution of such a process definition. For each process definition there typically are many instances running at the same time. The RuntimeService also is the service which is used to retrieve and store process variables. This is data which is specific to the given process instance and can be used by various constructs in the process (eg. an exclusive gateway often uses process variables to determine which path is chosen to continue the process). The RuntimeService also allows to query on process instances and executions. Executions are a representation of the 'token' concept of BPMN 2.0. Basically an execution is a pointer pointing to where the process instance currently is. Lastly, the RuntimeService is used whenever a process instance is waiting for an external trigger and the process needs to be continued. A process instance can have various wait states and this service contains various operations to 'signal' the instance that the external trigger is received and the process instance can be continued.
Tasks that need to be performed by actual human users of the system are core to the process engine. Everything around tasks is grouped in the TaskService, such as
The IdentityService is pretty simple. It allows the management (creation, update, deletion, querying, ...) of groups and users. It is important to understand that the core engine actually doesn't do any checking on users at runtime. For example, a task could be assigned to any user, but the engine does not verify if that user is known to the system. This is because the engine can also used in conjunction with services such as LDAP, active directory, etc.
The FormService is an optional service. Meaning that the camunda engine can perfectly be used without it, without sacrificing any functionality. This service introduces the concept of a start form and a task form. A start form is a form that is shown to the user before the process instance is started, while a task form is the form that is displayed when a user wants to complete a form. You can define these forms in the BPMN 2.0 process definition. This service exposes this data in an easy way to work with. But again, this is optional as forms don't need to be embedded in the process definition.
The HistoryService exposes all historical data gathered by the engine. When executing processes, a lot of data can be kept by the engine (this is configurable) such as process instance start times, who did which tasks, how long it took to complete the tasks, which path was followed in each process instance, etc. This service exposes mainly query capabilities to access this data.
The ManagementService is typically not needed when coding custom application. It allows to retrieve information about the database tables and table metadata. Furthermore, it exposes query capabilities and management operations for jobs. Jobs are used in the engine for various things such as timers, asynchronous continuations, delayed suspension/activation, etc. Later on, these topics will be discussed in more detail.
For more detailed information on the service operations and the engine API, see the Javadocs.
To query data from the engine is possible in multiple ways:
The recommended way is to use on of the Query APIs.
The Java Query API allows to program completely typesafe queries with a fluent API. You can add various conditions to your queries (all of which are applied together as a logical AND) and precisely one ordering. The following code shows an example:
List<Task> tasks = taskService.createTaskQuery()
.taskAssignee("kermit")
.processVariableValueEquals("orderId", "0815")
.orderByDueDate().asc()
.list();
You can find more information on this in the Javadocs.
The Java Query API is exposed as REST service as well, see REST documentation for details.
Sometimes you need more powerful queries, e.g. queries using an OR operator or restrictions you can not express using the Query API. For these cases, we introduced native queries, which allow you to write your own SQL queries. The return type is defined by the Query object you use and the data is mapped into the correct objects, e.g. Task, ProcessInstance, Execution, etc. Since the query will be fired at the database you have to use table and column names as they are defined in the database, this requires some knowledge about the internal data structure and it is recommended to use native queries with care. The table names can be retrieved via the API to keep the dependency as small as possible.
List<Task> tasks = taskService.createNativeTaskQuery()
.sql("SELECT count(*) FROM " + managementService.getTableName(Task.class) + " T WHERE T.NAME_ = #{taskName}")
.parameter("taskName", "aOpenTask")
.list();
long count = taskService.createNativeTaskQuery()
.sql("SELECT count(*) FROM " + managementService.getTableName(Task.class) + " T1, "
+ managementService.getTableName(VariableInstanceEntity.class) + " V1 WHERE V1.TASK_ID_ = T1.ID_")
.count();
For performance reasons it might sometimes be desirable not to query the engine objects but some own value or DTO objects collecting data from different tables - maybe including your own domain classes.
The table layout is pretty straightforward - we concentrated on making it easy to understand. Hence it is OK to do SQL queries for e.g. reporting use cases. Just make sure that you do not mess up the engine data by updating the tables without exactly knowing what you are doing.
This section explains some core process engine concepts that are used in both the process engine API and the internal process engine implementation. Understanding these fundamentals makes it easier to use the process engine API.
A process definition defines the structure of a process. You could say that the process definition is the process. camunda BPM uses BPMN 2.0 as its primary modeling language for modeling process definitions.
camunda BPM comes with two BPMN 2.0 References:
In camunda BPM you can deploy processes to the process engine in BPMN 2.0 XML format. The XML files are parsed and transformed into a process definition graph structure. This graph structure is executed by the process engine.
You can query for all deployed process definitions using the Java API and the ProcessDefinitionQuery
made available through the RepositoryService
. Example:
List<ProcessDefinition> processDefinitions = repositoryService.createProcessDefinitionQuery()
.processDefinitionKey("invoice")
.orderByProcessDefinitionVersion()
.asc()
.list();
The above query returns all deployed process definitions for the key invoice
ordered by their version
property.
You can also query for process definitions using the REST API.
The key of a process definition (invoice
in the example above) is the logical identifier of the process. It is used throughout the API, most prominently for starting process instances (see section on process instances). The key of a process definition is defined using the id
property of the corresponding <process ... >
element in the BPMN 2.0 XML file:
<process id="invoice" name="invoice receipt" isExecutable="true">
...
</process>
If you deploy multiple processes with the same key, they are treated as individual versions of the same process definition by the process engine.
Suspending a process definition disables it temporarily in that it cannot be instantiated while it is suspended. The RuntimeService
Java API can be used to suspend a process definition. Similarly, you can activate a process definition to undo this effect.
A process instance is an individual execution of a process definition. The relation of the process instance to the process definition is the same as the relation between Object and Class in Object Oriented Programming (the process instance playing the role of the object and the process definition playing the role of the class in this analogy).
The process engine is responsible for creating process instances and managing their state. If you start a process instance which contains a wait state, for example a user task, the process engine must make sure that the state of the process instance is captured and stored inside a database until the wait state is left (the user task is completed).
The simplest way to start a process instance is by using the startProcessInstanceByKey(...)
method offered by the RuntimeService:
ProcessInstance instance = runtimeService.startProcessInstanceByKey("invoice");
You may optionally pass in a couple of variables:
Map<String, Object> variables = new HashMap<String,Object>();
variables.put("creditor", "Nice Pizza Inc.");
ProcessInstance instance = runtimeService.startProcessInstanceByKey("invoice", variables);
Process variables are available to all tasks in a process instance and are automatically persisted to the database in case the process instance reaches a wait state.
It is also possible to start a process instance using the REST API.
You can query for all currently running process instances using the ProcessInstanceQuery
offered by the RuntimeService
:
runtimeService.createProcessInstanceQuery()
.processDefinitionKey("invoice")
.variableValueEquals("creditor", "Nice Pizza Inc.")
.list();
The above query would select all process instances for the invoice
process where the creditor
is Nice Pizza Inc.
.
You can also query for process instances using the REST API.
Once you have performed a query for a particular process instance (or a list of process instances), you may want to interact with it. There are multiple possibilities to interact with a process instance, most prominently:
RuntimeService.deleteProcessInstance(...)
method.If your process uses User Task, you can also interact with the process instance using the TaskService API.
Suspending a process instance is helpful, if you want ensure that it is not executed any further. For example, if process variables are in an undesired state, you can suspend the instance and change the variables safely.
In detail, suspension means to disallow all actions that change token state (i.e. the activities that are currently executed) of the instance. For example, it is not possible to signal an event or complete a user task for a suspended process instance, as these actions will continue the process instance execution subsequently. Nevertheless, actions like setting or removing variables are still allowed, as they do not change token state.
Also, when suspending a process instance, all tasks belonging to it will be suspended. Therefore, it will no longer be possible to invoke actions that have effects on the task's lifecycle (i.e. user assignment, task delegation, task completion, ...). However, any actions not touching the lifecycle like setting variables or adding comments will still be allowed.
A process instance can be suspended by using the suspendProcessInstanceById(...)
method of the RuntimeService
. Similarly it can be reactivated again.
If you would like to suspend all process instances of a given process definition, you can use the method suspendProcessDefinitionById(...)
of theRepositoryService
and specify the suspendProcessInstances
option.
If your process instance contains multiple execution paths (like for instance after a parallel gateway), you must be able to differentiate the currently active paths inside the process instance. In the following example, two user tasks receive payment and ship order can be active at the same time.
Internally the process engine creates two concurrent executions inside the process instance, one for each concurrent path of execution. Executions are also created for scopes, for example if the process engine reaches a Embedded Sub Process or in case of Multi Instance.
Executions are hierarchical and all executions inside a process instance span a tree, the process instance being the root-node in the tree. Note: the process instance itself is an execution.
Executions can have local variables. Local variables are only visible to the execution itself and its children but not to siblings of parents in the execution tree. Local variables are usually used if a part of the process works on some local data object or if an execution works on one item of a collection in case of multi instance.
In order to set a local variable on an execution, use the setVariableLocal
method provided by the runtime service.
runtimeService.setVariableLocal(name, value);
You can query for executions using the ExecutionQuery
offered by the RuntimeService
:
runtimeService.createProcessInstanceQuery()
.processInstanceId(someId)
.list();
The above query returns all executions for a given process instance.
You can also query for executions using the REST API.
The activity instance concept is similar to the execution concept but takes a different perspective. While an execution can be imagined as a token moving through the process, an activity instance represents an individual instance of an activity (task, subprocess, ...). The concept of the activity instance is thus more state-oriented.
Activity instances also span a tree, following the scope structure provided by BPMN 2.0. Activities that are "on the same level of subprocess" (ie. part of the same scope, contained in the same subprocess) will have their activity instances at the same level in the tree
Examples:
Currently activity instances can only be retrieved for a process instance:
ActivityInstance rootActivityInstance = runtimeService.getActivityInstance(processInstance.getProcessInstanceId());
You can retrieve the activity instance tree using the REST API as well.
Each activity instance is assigned a unique Id. The id is persistent, if you invoke this method multiple times, the same activity instance ids will be returned for the same activity instances. (However, there might be different executions assigned, see below)
The Execution concept in the process engine is not completely aligned with the activity instance concept because the execution tree is in general not aligned with the activity / scope concept in BPMN. In general, there is a n-1 relationship between Executions and ActivityInstances, ie. at a given point in time, an activity instance can be linked to multiple executions. In addition, it is not guaranteed that the same execution that started a given activity instance will also end it. The process engine performs several internal optimizations concerning the compacting of the execution tree which might lead to executions being reordered and pruned. This can lead to situations where a given execution starts an activity instance but another execution ends it. Another special case is the process instance: if the process instance is executing a non-scope activity (for example a user task) below the process definition scope, it will be referenced by both the root activity instance and the user task activity instance.
Note: If you need to interpret the state of a process instance in terms of a BPMN process model, it is usually easier to use the activity instance tree as opposed to the execution tree.
The camunda process engine includes a component named the Job Executor. The Job Executor is scheduling component responsible for performing asynchronous background work. Consider the example of a Timer Event: whenever the process engine reached the timer event, it will stop execution, persist the current state to the database and create a job to resume execution in the future. A job has a duedate which is calculated using the timer expression provided in BPMN XML.
When a process is deployed, the process engine creates a Job Definition for each activity in the process which will create jobs at runtime. This allows you to query information about timers and asynchronous continuations in your processes.
Using the management service, you can query for jobs. The following selects all jobs which are due after a certain date:
managementService.createJobQuery()
.duedateHigherThan(someDate)
.list()
It is possible to query for jobs using the REST Api.
Using the management service, you can also query for job definitions. The following selects all job definitions form a specific process definition:
managementService.createJobDefinitionQuery()
.processDefinitionKey("orderProcess")
.list()
The result will contain information about all timers and asynchronous continuations in the order process.
It is possible to query for jobs using the REST Api.
Job suspension prevents jobs from being executed. Suspension of job execution can be controlled on different levels:
managementService.suspendJob(...)
API or transitively when suspending a Process Instance or a Job Definition.Job suspension by Job definition allows you to suspend all instances of a certain timer or an asynchronous continuation. Intuitively, this allows you to suspend a certain activity in a process in a way that all process instances will advance until they have reached this activity and then not continue since the activity is suspended.
Let's assume there is a process deployed with key orderProcess
which contains a service task named processPayment
. The service task has an asynchronous continuation configured which causes it to be executed by the job executor. The following example shows how you can prevent the processPayment
service from being executed:
List<JobDefinition> jobDefinitions = managementService.createJobDefinitionQuery()
.processDefinitionKey("orderProcess")
.activityIdIn("processPayment")
.list();
for (JobDefinition jobDefinition : jobDefinitions) {
managementService.suspendJobDefinitionById(jobDefinition.getId(), true);
}
Delegation Code allows you to execute external Java code or evaluate expressions when certain events occur during process execution.
There are different types of Delegation Code:
You can create generic delegation code and configure this via the BPMN 2.0 XML using so called Field Injection.
To implement a class that can be called during process execution, this class needs to implement the org.camunda.bpm.engine.delegate.JavaDelegate
interface and provide the required logic in the execute
method. When process execution arrives at this particular step, it
will execute this logic defined in that method and leave the activity
in the default BPMN 2.0 way.
Let's create for example a Java class that can be used to change a
process variable String to uppercase. This class needs to implement
the org.camunda.bpm.engine.delegate.JavaDelegate
interface, which requires us to implement the execute(DelegateExecution)
method. It's this operation that will be called by the engine and
which needs to contain the business logic. Process instance
information such as process variables and other can be accessed and
manipulated through the DelegateExecution interface (click on the link for
a detailed Javadoc of its operations).
public class ToUppercase implements JavaDelegate {
public void execute(DelegateExecution execution) throws Exception {
String var = (String) execution.getVariable("input");
var = var.toUpperCase();
execution.setVariable("input", var);
}
}
Note: there will be only one instance of that Java class created for the serviceTask it is
defined on. All process-instances share the same class instance that
will be used to call
The classes that are referenced in the process definition (i.e. by using
camunda:class
) are NOT instantiated
during deployment. Only when a process execution arrives for the
first time at the point in the process where the class is used, an
instance of that class will be created. If the class cannot be found,
an ProcessEngineException
will be thrown. The reasoning for this is that the environment (and
more specifically the classpath) when you are deploying is often different from the actual runtime
environment.
Instead of writing a Java Delegate is also possible to provide a class that implements the org.camunda.bpm.engine.impl.pvm.delegate.ActivityBehavior
interface. Implementations have then access to the more powerful ActivityExecution
that for example also allows to influence the control flow of the process. Note however
that this is not a very good practice, and should be avoided as much as possible. So, it is advised to use the ActivityBehavior
interface only for advanced use cases and if you know exactly what
you're doing.
It's possible to inject values into the fields of the delegated classes. The following types of injection are supported:
If available, the value is injected through a public setter method on
your delegated class, following the Java Bean naming conventions (e.g.
field firstName
has setter setFirstName(...)
).
If no setter is available for that field, the value of private
member will be set on the delegate (but using private fields is highly not recommended - see warning below).
Regardless of the type of value declared in the process-definition, the type of the
setter/private field on the injection target should always be org.camunda.bpm.engine.delegate.Expression
.
Private fields cannot always be modified! It is not working with e.g. CDI beans (because you have proxies instead of real objects) or with some SecurityManager configurations. Please always use a public setter-method for the fields you want to have injected!
The following code snippet shows how to inject a constant value into a field.
Field injection is supported when using the extensionElements
XML element before the actual field injection
declarations, which is a requirement of the BPMN 2.0 XML Schema.
<serviceTask id="javaService"
name="Java service invocation"
camunda:class="org.camunda.bpm.examples.bpmn.servicetask.ToUpperCaseFieldInjected">
<extensionElements>
<camunda:field name="text" stringValue="Hello World" />
</extensionElements>
</serviceTask>
The class ToUpperCaseFieldInjected
has a field
text
which is of type org.camunda.bpm.engine.delegate.Expression
.
When calling text.getValue(execution)
, the configured string value
Hello World
will be returned.
Alternatively, for longs texts (e.g. an inline e-mail) the
<serviceTask id="javaService"
name="Java service invocation"
camunda:class="org.camunda.bpm.examples.bpmn.servicetask.ToUpperCaseFieldInjected">
<extensionElements>
<camunda:field name="text">
<camunda:string>
Hello World
</camunda:string>
</camunda:field>
</extensionElements>
</serviceTask>
To inject values that are dynamically resolved at runtime, expressions
can be used. Those expressions can use process variables, CDI or Spring
beans. As already noted, an instance of the Java class is shared among
all process-instances in a service task. To have dynamic injection of
values in fields, you can inject value and method expressions in a
org.camunda.bpm.engine.delegate.Expression
which can be evaluated/invoked using the DelegateExecution
passed in the execute
method.
<serviceTask id="javaService" name="Java service invocation"
camunda:class="org.camunda.bpm.examples.bpmn.servicetask.ReverseStringsFieldInjected">
<extensionElements>
<camunda:field name="text1">
<camunda:expression>${genderBean.getGenderString(gender)}</camunda:expression>
</camunda:field>
<camunda:field name="text2">
<camunda:expression>Hello ${gender == 'male' ? 'Mr.' : 'Mrs.'} ${name}</camunda:expression>
</camunda:field>
</ extensionElements>
</ serviceTask>
The example class below uses the injected expressions and resolves
them using the current DelegateExecution
.
public class ReverseStringsFieldInjected implements JavaDelegate {
private Expression text1;
private Expression text2;
public void execute(DelegateExecution execution) {
String value1 = (String) text1.getValue(execution);
execution.setVariable("var1", new StringBuffer(value1).reverse().toString());
String value2 = (String) text2.getValue(execution);
execution.setVariable("var2", new StringBuffer(value2).reverse().toString());
}
}
Alternatively, you can also set the expressions as an attribute instead of a child-element, to make the XML less verbose.
<camunda:field name="text1" expression="${genderBean.getGenderString(gender)}" />
<camunda:field name="text1" expression="Hello ${gender == 'male' ? 'Mr.' : 'Mrs.'} ${name}" />
Since the Java class instance is reused, the injection only happens once, when the serviceTask is called the first time. When the fields are altered by your code, the values won't be re-injected so you should treat them as immutable and don't make any changes to them.
Execution listeners allow you to execute external Java code or evaluate an expression when certain events occur during process execution. The events that can be captured are:
The following process definition contains 3 execution listeners:
<process id="executionListenersProcess">
<extensionElements>
<camunda:executionListener
event="start"
class="org.camunda.bpm.examples.bpmn.executionlistener.ExampleExecutionListenerOne" />
</extensionElements>
<startEvent id="theStart" />
<sequenceFlow sourceRef="theStart" targetRef="firstTask" />
<userTask id="firstTask" />
<sequenceFlow sourceRef="firstTask" targetRef="secondTask">
<extensionElements>
<camunda:executionListener
class="org.camunda.bpm.examples.bpmn.executionListener.ExampleExecutionListenerTwo" />
</extensionElements>
</sequenceFlow>
<userTask id="secondTask">
<extensionElements>
<camunda:executionListener expression="${myPojo.myMethod(execution.event)}" event="end" />
</extensionElements>
</userTask>
<sequenceFlow sourceRef="secondTask" targetRef="thirdTask" />
<userTask id="thirdTask" />
<sequenceFlow sourceRef="thirdTask" targetRef="theEnd" />
<endEvent id="theEnd" />
</process>
The first execution listener is notified when the process starts. The listener is an external Java-class (like ExampleExecutionListenerOne) and should implement the org.camunda.bpm.engine.delegate.ExecutionListener
interface. When the event occurs (in this case end event) the method notify(DelegateExecution execution)
is called.
public class ExampleExecutionListenerOne implements ExecutionListener {
public void notify(DelegateExecution execution) throws Exception {
execution.setVariable("variableSetInExecutionListener", "firstValue");
execution.setVariable("eventReceived", execution.getEventName());
}
}
It is also possible to use a delegation class that implements the org.camunda.bpm.engine.delegate.JavaDelegate
interface. These delegation classes can then be reused in other constructs, such as a delegation for a serviceTask.
The second execution listener is called when the transition is taken. Note that the listener element doesn't define an event, since only take events are fired on transitions. Values in the event attribute are ignored when a listener is defined on a transition.
The last execution listener is called when activity secondTask ends. Instead of using the class on the listener declaration, a expression is defined instead which is evaluated/invoked when the event is fired.
<camunda:executionListener expression="${myPojo.myMethod(execution.eventName)}" event="end" />
As with other expressions, execution variables are resolved and can be used. Because the execution implementation object has a property that exposes the event name, it's possible to pass the event-name to your methods using execution.eventName.
Execution listeners also support using a delegateExpression, similar to a service task.
<camunda:executionListener event="start" delegateExpression="${myExecutionListenerBean}" />
A task listener is used to execute custom Java logic or an expression upon the occurrence of a certain task-related event.
A task listener can only be added in the process definition as a child element of a user task. Note that this also must happen as a child of the BPMN 2.0 extensionElements and in the camunda namespace, since a task listener is a construct specific for the camunda engine.
<userTask id="myTask" name="My Task" >
<extensionElements>
<camunda:taskListener event="create" class="org.camunda.bpm.MyTaskCreateListener" />
</extensionElements>
</userTask>
A task listener supports following attributes:
class: the delegation class that must be called. This class must implement the org.camunda.bpm.engine.impl.pvm.delegate.TaskListener
interface.
public class MyTaskCreateListener implements TaskListener {
public void notify(DelegateTask delegateTask) {
// Custom logic goes here
}
}
It is also possible to use field injection to pass process variables or the execution to the delegation class. Note that an instance of the delegation class is created upon process deployment (as is the case with any class delegation in the engine), which means that the instance is shared between all process instance executions.
expression: (cannot be used together with the class attribute): specifies an expression that will be executed when the event happens. It is possible to pass the DelegateTask object and the name of the event (using task.eventName) as parameter to the called object.
<camunda:taskListener event="create" expression="${myObject.callMethod(task, task.eventName)}" />
delegateExpression: allows to specify an expression that resolves to an object implementing the TaskListener interface, similar to a service task.
<camunda:taskListener event="create" delegateExpression="${myTaskListenerBean}" />
When using listeners configured with the class attribute, field injection can be applied. This is exactly the same mechanism as described for Java Delegates, which contains an overview of the possibilities provided by field injection.
The fragment below shows a simple example process with an execution listener with fields injected:
<process id="executionListenersProcess">
<extensionElements>
<camunda:executionListener class="org.camunda.bpm.examples.bpmn.executionListener.ExampleFieldInjectedExecutionListener" event="start">
<camunda:field name="fixedValue" stringValue="Yes, I am " />
<camunda:field name="dynamicValue" expression="${myVar}" />
</camunda:executionListener>
</extensionElements>
<startEvent id="theStart" />
<sequenceFlow sourceRef="theStart" targetRef="firstTask" />
<userTask id="firstTask" />
<sequenceFlow sourceRef="firstTask" targetRef="theEnd" />
<endEvent id="theEnd" />
</process>
The actual listener implementation may look like the following:
public class ExampleFieldInjectedExecutionListener implements ExecutionListener {
private Expression fixedValue;
private Expression dynamicValue;
public void notify(DelegateExecution execution) throws Exception {
String value =
fixedValue.getValue(execution).toString() +
dynamicValue.getValue(execution).toString();
execution.setVariable("var", value);
}
}
The class ExampleFieldInjectedExecutionListener
concatenates the 2 injected fields (one fixed an the other dynamic) and stores this in the process variable var.
@Deployment(resources = {
"org/camunda/bpm/examples/bpmn/executionListener/ExecutionListenersFieldInjectionProcess.bpmn20.xml"
})
public void testExecutionListenerFieldInjection() {
Map<String, Object> variables = new HashMap<String, Object>();
variables.put("myVar", "listening!");
ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("executionListenersProcess", variables);
Object varSetByListener = runtimeService.getVariable(processInstance.getId(), "var");
assertNotNull(varSetByListener);
assertTrue(varSetByListener instanceof String);
// Result is a concatenation of fixed injected field and injected expression
assertEquals("Yes, I am listening!", varSetByListener);
}
It is possible to access the public API services (RuntimeService
, TaskService
, RepositoryService
...) from delegation code. The following is an example showing
how to access the TaskService
from a JavaDelegate
implementation.
public class DelegateExample implements JavaDelegate {
public void execute(DelegateExecution execution) throws Exception {
TaskService taskService = execution.getProcessEngineServices().taskService();
taskService.createTaskQuery()...;
}
}
In the above example the error event is attached to a Service Task. In order to get this to work the Service Task has to throw the corresponding error. This is done by using a provided Java exception class from within your Java code (e.g. in the JavaDelegate):
public class BookOutGoodsDelegate implements JavaDelegate {
public void execute(DelegateExecution execution) throws Exception {
try {
...
} catch (NotOnStockException ex) {
throw new BpmnError(NOT_ON_STOCK_ERROR);
}
}
}
Business Processes are by nature long running. The process instances will maybe last for weeks, or months. In the meantime the state of the process instance is stored to the database. But sooner or later you might want to change the process definition even if there are still running instances.
This is supported by the process engine:
So you can see different version in the process definition table and the process instances are linked to this:
When you start a process instance
The default and recommended usage is to just use startProcessInstanceByKey
and always use the latest version:
processEngine.getRuntimeService().startProcessInstanceByKey("invoice");
// will use the latest version (2 in our example)
If you want to specifically start an instance of an old process definition, use a Process Definition Query to find the correct ProcessDefinition id and startProcessInstanceById
:
ProcessDefinition pd = processEngine.getRepositoryService().createProcessDefinitionQuery()
.processDefinitionKey("invoice")
.processDefinitionVersion(1).singleResult();
processEngine.getRuntimeService().startProcessInstanceById(pd.getId());
When you use BPMN CallActivities you can configure which version is used:
<callActivity id="callSubProcess" calledElement="checkCreditProcess"
camunda:calledElementBinding="latest|deployment|version"
camunda:calledElementVersion="17">
</callActivity>
The options are
startProcessInstanceByKey
).You might have spotted that two different columns exist in the process definition table with different meanings:
Key: The key is the unique identifier of the process definition in the XML, so its value is read from the id attribute in the XML:
<bpmn2:process id="invoice" ...
Id: The id is the database primary key and an artificial key normally combined out of the key, the version and a generated id (note that the ID may be shortened to fit into the database column, so there is no guarantee that the id is built this way).
Sometimes it is necessary to migrate (upgrade) running process instances to a new version, maybe you added an important new task or even fixed a bug. In this case we can migrate the running process instances to the new version.
Please not that migration can only be applied if a process instance is currently in a persistent wait state, see Transactions in Processes.
public void migrateVersion() {
String processInstanceId = "71712c34-af1d-11e1-8950-08002700282e";
int newVersion = 2;
SetProcessDefinitionVersionCmd command =
new SetProcessDefinitionVersionCmd(processInstanceId, newVersion);
((ProcessEngineImpl) ProcessEngines.getDefaultProcessEngine())
.getProcessEngineConfiguration()
.getCommandExecutorTxRequired().execute(command);
}
Process Version Migration is not an easy topic on itself. Migrating process instances to a new version works only if:
Hence the cases, in which this simple instance migration works, are limited. The following examples will cause problems: If the new version introduces a new (message / signal / timer) boundary event attached to an activity, process instances which are waiting at this activity cannot be migrated (since the activity is a scope in the new version and not a scope in the old version).
Other important aspects to think of when doing version migration are:
If you cannot migrate your process instance you have a couple of alternatives, for example:
So there is actually not "the standard" way, in doubt discuss with us right solution for your environment.
There are two ways to configure the database that the camunda engine will use. The first option is to define the JDBC properties of the database:
jdbcUrl
: JDBC URL of the database.jdbcDriver
: implementation of the driver for the specific database type.jdbcUsername
: username to connect to the database.jdbcPassword
: password to connect to the database.Note that internally the engine uses Apache MyBatis for persistence.
The data source that is constructed based on the provided JDBC properties will have the default MyBatis connection pool settings. The following attributes can optionally be set to tweak that connection pool (taken from the MyBatis documentation):
jdbcMaxActiveConnections
: The number of active connections that the connection pool at maximum at any time can contain. Default is 10.jdbcMaxIdleConnections
: The number of idle connections that the connection pool at maximum at any time can contain.jdbcMaxCheckoutTime
: The amount of time in milliseconds a connection can be 'checked out' from the connection pool before it is forcefully returned. Default is 20000 (20 seconds).jdbcMaxWaitTime
: This is a low level setting that gives the pool a chance to print a log status and re-attempt the acquisition of a connection in the case that it's taking unusually long (to avoid failing silently forever if the pool is mis-configured) Default is 20000 (20 seconds).Example database configuration:
<property name="jdbcUrl" value="jdbc:h2:mem:camunda;DB_CLOSE_DELAY=1000" />
<property name="jdbcDriver" value="org.h2.Driver" />
<property name="jdbcUsername" value="sa" />
<property name="jdbcPassword" value="" />
Alternatively, a javax.sql.DataSource
implementation can be used (e.g. DBCP from Apache Commons):
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" >
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="jdbc:mysql://localhost:3306/camunda" />
<property name="username" value="camunda" />
<property name="password" value="camunda" />
<property name="defaultAutoCommit" value="false" />
</bean>
<bean id="processEngineConfiguration" class="org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration">
<property name="dataSource" ref="dataSource" />
...
Note that camunda does not ship with a library that allows to define such a data source. So you have to make sure that the libraries (e.g. from DBCP) are on your classpath.
The following properties can be set, regardless of whether you are using the JDBC or data source approach:
databaseType
: it's normally not necessary to specify this property as it is automatically analyzed from the database connection meta data. Should only be specified in case automatic detection fails. Possible values: {h2, mysql, oracle, postgres, mssql, db2}. This property is required when not using the default H2 database. This setting will determine which create/drop scripts and queries will be used. See the 'supported databases' section for an overview of which types are supported.databaseSchemaUpdate
: allows to set the strategy to handle the database schema on process engine boot and shutdown.false
(default): Checks the version of the DB schema against the library when the process engine is being created and throws an exception if the versions don't match.true
: Upon building the process engine, a check is performed and an update of the schema is performed if it is necessary. If the schema doesn't exist, it is created.create-drop
: Creates the schema when the process engine is being created and drops the schema when the process engine is being closed.For information on supported databases please refer to Supported Environments.
Here are some sample JDBC urls:
The table names all start with ACT. The second part is a two-character identification of the use case of the table. This use case will also roughly match the service API.
Since the release of camunda-bpm 7.0.0-alpha9, the unique constraint for the business key is removed in the runtime and history tables and the database schema create and drop scripts. If you rely on the constraint, you can add it manually to your schema by issuing following sql statements:
db2
Runtime:
alter table ACT_RU_EXECUTION add UNI_BUSINESS_KEY varchar (255) not null generated always as (case when "BUSINESS_KEY_" is null then "ID_" else "BUSINESS_KEY_" end);
alter table ACT_RU_EXECUTION add UNI_PROC_DEF_ID varchar (64) not null generated always as (case when "PROC_DEF_ID_" is null then "ID_" else "PROC_DEF_ID_" end);
create unique index ACT_UNIQ_RU_BUS_KEY on ACT_RU_EXECUTION(UNI_PROC_DEF_ID, UNI_BUSINESS_KEY);
History:
alter table ACT_HI_PROCINST add UNI_BUSINESS_KEY varchar (255) not null generated always as (case when "BUSINESS_KEY_" is null then "ID_" else "BUSINESS_KEY_" end);
alter table ACT_HI_PROCINST add UNI_PROC_DEF_ID varchar (64) not null generated always as (case when "PROC_DEF_ID_" is null then "ID_" else "PROC_DEF_ID_" end);
create unique index ACT_UNIQ_HI_BUS_KEY on ACT_HI_PROCINST(UNI_PROC_DEF_ID, UNI_BUSINESS_KEY);
h2
Runtime:
alter table ACT_RU_EXECUTION add constraint ACT_UNIQ_RU_BUS_KEY unique(PROC_DEF_ID_, BUSINESS_KEY_);
History:
alter table ACT_HI_PROCINST add constraint ACT_UNIQ_HI_BUS_KEY unique(PROC_DEF_ID_, BUSINESS_KEY_);
mssql
Runtime:
create unique index ACT_UNIQ_RU_BUS_KEY on ACT_RU_EXECUTION (PROC_DEF_ID_, BUSINESS_KEY_) where BUSINESS_KEY_ is not null;
History:
create unique index ACT_UNIQ_HI_BUS_KEY on ACT_HI_PROCINST (PROC_DEF_ID_, BUSINESS_KEY_) where BUSINESS_KEY_ is not null;
mysql
Runtime:
alter table ACT_RU_EXECUTION add constraint ACT_UNIQ_RU_BUS_KEY UNIQUE (PROC_DEF_ID_, BUSINESS_KEY_);
History:
alter table ACT_HI_PROCINST add constraint ACT_UNIQ_HI_BUS_KEY UNIQUE (PROC_DEF_ID_, BUSINESS_KEY_);
oracle
Runtime:
create unique index ACT_UNIQ_RU_BUS_KEY on ACT_RU_EXECUTION
(case when BUSINESS_KEY_ is null then null else PROC_DEF_ID_ end,
case when BUSINESS_KEY_ is null then null else BUSINESS_KEY_ end);
History:
create unique index ACT_UNIQ_HI_BUS_KEY on ACT_HI_PROCINST
(case when BUSINESS_KEY_ is null then null else PROC_DEF_ID_ end,
case when BUSINESS_KEY_ is null then null else BUSINESS_KEY_ end);
postgres
Runtime:
alter table ACT_RU_EXECUTION add constraint ACT_UNIQ_RU_BUS_KEY UNIQUE (PROC_DEF_ID_, BUSINESS_KEY_);
History:
alter table ACT_HI_PROCINST add constraint ACT_UNIQ_HI_BUS_KEY UNIQUE (PROC_DEF_ID_, BUSINESS_KEY_);
Microsoft SQL Server implements the isolation level READ_COMMITTED
different
from most databases and does not play together well with the process engine's
optimistic locking scheme. As a result you may suffer from deadlocks when
putting the process engine under high load.
If you experience deadlocks in your MSSQL installation, you must execute the following statements in order to enable SNAPSHOT isolation:
ALTER DATABASE [process-engine]
SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE [process-engine]
SET READ_COMMITTED_SNAPSHOT ON
where [process-engine]
contains the name of your database.
The History Event Stream provides audit information about executed process instances.
The process engine maintains the state of running process instances inside the database. This includes writing (1.) the state of a process instance to the database as it reaches a wait state and reading (2.) the state as process execution continues. We call this database the runtime database. In addition, to maintaining the runtime state, the process engine creates an audit log providing audit information about executed process instances. We call this event stream the history event stream (3.). The individual events which make up this event stream are called History Events and contain data about executed process instances, activity instances, changed process variables and so forth. In the default configuration, the process engine will simply write (4.) this event stream to the history database. The HistoryService
API allows querying this database (5.). The history database and the history service are optional components; if the history event stream is not logged to the history database or if the user chooses to log events to a different database, the process engine is still able to work and it is still able to populate the history event stream. This is possible because the BPMN 2.0 Core Engine component does not read state from the history database. It is also possible to configure the amount of data logged, using the historyLevel
setting in the process engine configuration.
Since the process engine does not rely on the presence of the history database for generating the history event stream, it is possible to provide different backends for storing the history event stream. The default backend is the DbHistoryEventHandler
which logs the event stream to the history database. It is possible to exchange the backend and provide a custom storage mechanism for the history event log.
The history level controls the amount of data the process engine provides via the history event stream. The following settings are available out of the box:
NONE
: no history events are fired.ACTIVITY
: the following events are fired:AUDIT
: in addition to the events provided by history level ACTIVITY
, the following events are fired:FULL
: in addition to the events provided by history level AUDIT
, the following additional events are fired:If you need to customize the amount of history events logged, you can provide a custom implementation HistoryEventProducer and wire it in the process engine configuration.
The history level can be provided as a property in the process engine configuration. Depending on how the process engine is configured, the property can be set using Java Code
ProcessEngine processEngine = ProcessEngineConfiguration
.createProcessEngineConfigurationFromResourceDefault()
.setHistory(ProcessEngineConfiguration.HISTORY_FULL)
.buildProcessEngine();
Or it can be set using Spring Xml or a deployment descriptor (bpm-platform.xml, processes.xml). When using the camunda BPM jboss Subsystem, the property can be set through jBoss configuration (standalone.xml, domain.xml).
<property name="history">audit</property>
Note that when using the default history backend, the history level is stored in the database and cannot be changed later.
The default history database writes History Events to the appropriate database tables. The database tables can then be queried using the History Service
or using the REST API.
There are seven History entities, which - in contrast to the runtime data - will also remain present in the DB after process instances have been completed:
HistoricProcessInstances
containing information about current and past process instances.HistoricProcessVariables
containing information about the latest state a variable held in a process instance.HistoricActivityInstances
containing information about a single execution of an activity.HistoricTaskInstances
containing information about current and past (completed and deleted) task instances.HistoricDetails
containing various kinds of information related to either a historic process instances, an activity instance or a task instance.HistoricIncidents
containing information about current and past (ie. deleted or resolved) incidents.UserOperationLogEntry
log entry containing information about an operation performed by a user. This is used for logging actions such as creating a new task, completing a task, ...The HistoryService exposes the methods createHistoricProcessInstanceQuery()
, createHistoricProcessVariableQuery()
, createHistoricActivityInstanceQuery()
, createHistoricDetailQuery()
, createHistoricTaskInstanceQuery()
, createHistoricIncidentQuery()
and createUserOperationLogQuery()
which can be used for querying history.
Below are a few examples which show some of the possibilities of the query API for history. Full description of the possibilities can be found in the the javadocs, in the org.camunda.bpm.engine.history
package.
HistoricProcessInstanceQuery
Get the first ten HistoricProcessInstances
that are finished and which took the most time to complete (the longest duration) of all finished processes with definition 'XXX'.
historyService.createHistoricProcessInstanceQuery()
.finished()
.processDefinitionId("XXX")
.orderByProcessInstanceDuration().desc()
.listPage(0, 10);
HistoricActivityInstanceQuery
Get the last HistoricActivityInstance
of type 'serviceTask' that has been finished in any process that uses the processDefinition with id XXX.
historyService.createHistoricActivityInstanceQuery()
.activityType("serviceTask")
.processDefinitionId("XXX")
.finished()
.orderByHistoricActivityInstanceEndTime().desc()
.listPage(0, 1);
HistoricProcessVariableQuery
Get all HistoricProcessVariables from a finished process instance with id 'xxx' ordered by variable name.
historyService.createHistoricProcessVariableQuery()
.processInstanceId("XXX")
.orderByVariableName.desc()
.list();
HistoricDetailQuery
The next example gets all variable-updates that have been done in process with id 123. Only HistoricVariableUpdates will be returned by this query. Note that it's possible for a certain variable name to have multiple HistoricVariableUpdate entries, one for each time the variable was updated in the process. You can use orderByTime (the time the variable update was done) or orderByVariableRevision (revision of runtime variable at the time of updating) to find out in what order they occurred.
historyService.createHistoricDetailQuery()
.variableUpdates()
.processInstanceId("123")
.orderByVariableName().asc()
.list()
The last example gets all variable updates that were performed on the task with id "123". This returns all HistoricVariableUpdates for variables that were set on the task (task local variables), and NOT on the process instance.
historyService.createHistoricDetailQuery()
.variableUpdates()
.taskId("123")
.orderByVariableName().asc()
.list()
HistoricTaskInstanceQuery
Get ten HistoricTaskInstances that are finished and which took the most time to complete (the longest duration) of all tasks.
historyService.createHistoricTaskInstanceQuery()
.finished()
.orderByHistoricTaskInstanceDuration().desc()
.listPage(0, 10);
Get HistoricTaskInstances that are deleted with a delete reason that contains "invalid", which were last assigned to user 'jonny'.
historyService.createHistoricTaskInstanceQuery()
.finished()
.taskDeleteReasonLike("%invalid%")
.taskAssignee("jonny")
.listPage(0, 10);
HistoricIncidentQuery
Query for all resolved incidents:
historyService.createHistoricIncidentQuery()
.resolved()
.list();
UserOperationLogQuery
Query for all operations performed by user "jonny":
historyService.createUserOperationLogQuery()
.userId("jonny")
.listPage(0, 10);
In order to understand how to provide a custom history backend, it is useful to first look at a more in-detail view on the history architecture:
Whenever the state of a runtime entity is changed, the core execution component of the process engine fires History Events. In order to make this flexible, the actual creation of the History Events as well as populating the history events with data from the runtime structures is delegated to the History Event Producer. The producer is handed in the runtime data structures (such as an ExecutionEntity or a TaskEntity), creates a new History Event and populates it with data extracted from the runtime structures.
The event is next delivered to the History Event Handler which constitutes the History Backend. The drawing above contains a logical component named event transport. This is supposed to represent the channel between the process engine core component producing the events and the History Event Handler. In the default implementation, events are delivered to the History Event Handler synchronously and inside the same JVM. It is however conceptually possible to send the event stream to a different JVM (maybe running on a different machine) and making delivery asynchronous. A good fit might be a transactional message Queue (JMS).
Once the event has reached the History Event Handler, it can be processed and stored in some kind of datastore. The default implementation writes events to the History Database such that they can be queried using the History Service.
Exchanging the History Event Handler with a custom implementation allows users to plug in a custom History Backend. In order to do so, two main steps are required:
Note that if you provide a custom implementation of the HistoryEventHandler and wire it with the process engine, you override the default DbHistoryEventHandler. The consequence is that the process engine will stop writing to the history database and you will not be able to use the history service for querying the audit log. If you do not want to replace the default behavior but only provide an additional event handler, you need to write a composite History Event Handler which dispatches events a collection of handlers.
All process definition are cached (after they're parsed) to avoid hitting the database every time a process definition is needed and because process definition data doesn't change.
The process engine is a piece of passive Java Code, which works in the Thread of the client. For instance, if you have a web application allowing users to start a new process instance and a user clicks on the corresponding button, some thread from the application server's http-thread-pool will invoke the API method runtimeService.startProcessInstanceByKey(...)
, thus entering the process engine and starting a new process instance. We call this "borrowing the client thread".
On any such external trigger (i.e. start a process, complete a task, signal an execution), the engine runtime is going to advance in the process until it reaches wait states on each active path of execution. A wait state is a task which is performed later, which means that the engine persists the current execution to the database and waits to be triggered again. For example in case of a user task, the external trigger on task completion causes the runtime to execute the next bit of the process until wait states are reached again (or the instance ends). In contrast to user tasks, a timer event is not triggered externally. Instead it is continued by an internal trigger. That is why the engine also needs an active component, the job executor, which is able to fetch registered jobs and process them asynchronously.
We talked about wait states as transaction boundaries where the process state is stored to the database, the Thread returns to the client and the transaction is committed. The following BPMN elements are always wait states:
And keep in mind that Asynchronous Continuations can add transaction boundaries to other tasks as well.
The transition from one such stable state to another stable state is always part of one transaction, meaning that it succeeds as a whole or is rolled back on any kind of exception occuring during its execution. This is illustrated in the following example:
We see a segment of a BPMN process with a user task, a service task and a timer event. The timer event marks the next wait state. Completing the user task and validating the address is therefore part of the same unit of work, so it should succeed or fail atomically. That means that if the service task throws an exception we want to rollback the current transaction, such that the execution tracks back to the user task and the user task is still present in the database. This is also the default behavior of the process engine.
In 1, an application or client thread completes the task. In that same thread the engine runtime is now executing the service task and advances until it reaches the wait state at the timer event (2). Then it returns the control to the caller (3) potentially committing the transaction (if it was started by the engine).
In some cases this behavior is not desired. Sometimes we need custom control over transaction boundaries in a process, in order to be able to scope logical units of work. Consider the following process fragment:
This time we are completing the user task, generating an invoice and then send that invoice to the customer. This time the generation of the invoice is not part of the same unit of work so we do not want to rollback the completion of the usertask if generating an invoice fails. So what we want the engine to do is complete the user task (1), commit the transaction and return the control to the calling application (2).
Then we want to generate the invoice asynchronously, in a background thread. A pool of background threads is managed by the job executor. It periodically checks the database for asynchronous jobs, i.e. units of work in the process runtime.
So behind the scenes, when we reach the generate invoice task, we are persisting a job in the database, queuing it for later execution. This job is then picked up by the job executor and executed (3). We are also giving the local job executor a little hint that there is a new job, to improve performance. In order to use this feature, we can use the camunda:async="true"
extension in the BPMN 2.0 XML. So for example, the service task would look like this:
<serviceTask id="service1" name="Generate Invoice" camunda:class="my.custom.Delegate" camunda:async="true" />
camunda:async
can be specified on the following bpmn task types: task
, serviceTask
, scriptTask
, businessRuleTask
, sendTask
, receiveTask
, userTask
, subProcess
and callActivity
. On a user task, receive task or other wait states, the additional async continuation allows us to execute the start execution listeners in a separate thread/transaction.
A start event may also be declared as asynchronous in the same way as above by the attribute camunda:async="true"
. On instantiation, the process instance will be created and persisted in the database, but execution will be deferred. Also, execution listeners will not be invoked synchronously. This can be helpful in various situations such as heterogeneous clusters, when the execution listener class is not available on the node that instantiates the process.
We want to emphasis that in case of a non handled exception the current transaction gets rolled back and the process instance is in the last wait state (safe point). The following image visualizes that.
If an exception occurs when calling startProcessInstanceByKey
the process instance will not be saved to the database at all.
The above sketched solution normally leads to discussion as people expect the process engine to stop in the task caused an exception. Also other BPM suites often implement every task as wait state. But the approach has a couple of advantages:
But there are consequences which you should keep in mind:
The process engine can either manage transactions on its own ("Standalone" transaction management) or integrate with a platform transaction manager.
If the process engine is configured to perform standalone transaction management, it always opens a
new transaction for each command which is executed. To configure the process engine to use
standalone transaction management, use the
org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration
:
ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration()
...
.buildProcessEngine();
The usecases for standalone transaction management are situations where the process engine does not have to integrate with other transactional resources such as secondary datasources or messaging systems.
Note: in the tomcat distribution the process engine is configured using standalone transaction management.
The process engine can be configured to integrate with a transaction manager (or transaction management systems). Out of the box, the process engine supports integration with Spring and JTA transaction management. More information can be found in the following chapters:
The usecase for transaction manager integration are situations where the process engine needs to integrate with
A job is an explicit representation of a task to trigger process execution. A job is created whenever a wait state is reached during process execution that has to be triggered internally. This is the case when a timer event or a task marked for asynchronous execution (see transaction boundaries) is approached. The job executor has two responsibilities: job acquisition and job execution. The following diagram illustrates this:
By default, the JobExecutor is activated when the process engine boots. For unit testing scenarios it is cumbersome to work with this background component. Therefore the Java API offers to query for (ManagementService.createJobQuery
) and execute jobs (ManagementService.executeJob
) by hand, which allows to control job execution from within a unit test. To avoid interference with the job executor, it can be switched off.
Specify
<property name="jobExecutorActivate" value="false" />
in the process engine configuration when you don't want the JobExecutor to be activated upon booting the process engine.
Job acquisition is the process of retrieving jobs from the database that are to be executed next. Therefore jobs must be persisted to the database together with properties determining whether a job can be executed. For example, a job created for a timer event may not be executed before the defined time span has passed.
Jobs are persisted to the database, in the ACT_RU_JOB
table. This database table has the following columns (among others):
ID_ | REV_ | LOCK_EXP_TIME_ | LOCK_OWNER_ | RETRIES_ | DUEDATE_
Job acquisition is concerned with polling this database table and locking jobs.
A job is acquirable, i.e. a candidate for execution, if it fulfills all following conditions:
DUEDATE_
column is in the pastLOCK_EXP_TIME_
column is in the pastRETRIES_
column is greater than zero.In addition, the process engine has a concept of suspending a process definition and a process instance. A job is only acquirable if neither the corresponding process instance nor the corresponding process definition are suspended.
Job acquisition has two phases. In the first phase the job executor queries for a configurable amount of acquirable jobs. If at least one job can be found, it enters the second phase, locking the jobs. Locking is necessary in order to ensure that jobs are executed exactly once. In a clustered scenario, it is accustom to operate multiple job executor instances (one for each node) that all poll the same ACT_RU_JOB table. Locking a job ensures that it is only acquired by a single job executor instance. Locking a job means updating its values in the LOCK_EXP_TIME_ and LOCK_OWNER_ columns. The LOCK_EXP_TIME_ column is updated with a timestamp signifying a date that lies in the future. The intuition behind this is that we want to lock the job until that date is reached. The LOCK_OWNER_ column is updated with a value uniquely identifying the current job executor instance. In a clustered scenario this could be a node name uniquely identifying the current cluster node.
The situation where multiple job executor instances attempt to lock the same job concurrently is accounted for by using optimistic locking (see REV_ column).
After having locked a job, the job executor instance has effectively reserved a time slot for executing the job: once the date written to the LOCK_EXP_TIME_ column is reached it will be visible to job acquisition again. In order to execute the acquired jobs, they are passed to the acquired jobs queue.
By default the job executor does not impose an order in which acquirable jobs are acquired. This means that the job acquisition order depends on the database and its configuration. That's why the job acquisition is assumed to be nondeterministic. The intention for this is to keep the job acquisition query simple and fast.
But this simple job acquisition query can lead to disadvantages in some situations. In theory job starvation is possible if there are always too many jobs to acquire and the database returns the acquirable jobs in a manner that some jobs are never returned. Another observation could be that timer execution is delayed in a high load scenario, meaning that the execution date of a timer job can be significantly later than its actual due date. This is not unexpected behavior since the due date only specifies the earliest date a job can be executed and not the date of the actual execution. However, in some scenarios it may be preferable to acquire timer jobs as soon as they become available to execute.
To address the previously described issues, the job acquisition query can be controlled by the process engine configuration. Currently, two options are supported:
jobExecutorPreferTimerJobs
. If set to true
, the job executor
will acquire all acquirable timer jobs before other job types. This
doesn't specify a order within types of jobs which are acquired.
jobExecutorAcquireByDueDate
. If set to true
, the job executor will
acquire jobs by ascending due date. Where an asynchronous continuation
has its creation date set as due date, it is immediately executable.
If both options (jobExecutorPreferTimerJobs
and jobExecutorAcquireByDueDate
)
are set to true
the job executor will first acquire timer jobs and after that
asynchronous continuation jobs. And also sort these jobs within the type ascending
by due date.
false
by default and should only be
activated if required by the use case. The options alter the used job
acquisition query and may affect its performance. That's why we also advise to
add an index on the corresponding column(s) of the ACT_RU_JOB
table.
jobExecutorPreferTimerJobs | jobExecutorAcquireByDueDate | Recommend Index |
---|---|---|
true |
false |
ACT_RU_JOB(TYPE_ DESC) |
false |
true |
ACT_RU_JOB(DUEDATE_ ASC) |
true |
true |
ACT_RU_JOB(TYPE_ DESC, DUEDATE_ ASC) |
Acquired jobs are executed by a thread pool. The thread pool consumes jobs from the acquired jobs queue. The acquired jobs queue is an in-memory queue with a fixed capacity. When an executor starts executing a job, it is first removed from the queue.
In the scenario of an embedded process engine, the default implementation for this thread pool is a java.util.concurrent.ThreadPoolExecutor
. However, this is not allowed in Java EE environments. There we hook into the application server capabilities of thread management. See the platform-specific information in the Runtime Container Integration section on how this achieved.
Upon failure of job execution, e.g. if a service task invocation throws an exception, a job will be retried a number of times (by default 3). It is not immediately retried and added back to the acquisition queue, but the value of the RETRIES_ column is decreased. The process engine thus performs bookkeeping for failed jobs. After updating the RETRIES_ column, the executor moves on to the next job. This means that the failed job will automatically be retried once the LOCK_EXP_TIME_ date is expired.
In real life it is useful to configure the retry strategy, i.e. the number of times a job is retried and when it is retried, so the LOCK_EXP_TIME_. In the camunda engine, this can be configured as an extension element of a task in the BPMN 2.0 XML:
<definitions ... xmlns:camunda="http://activiti.org/bpmn">
...
<serviceTask id="failingServiceTask" camunda:async="true" camunda:class="org.camunda.engine.test.cmd.FailingDelegate">
<extensionElements>
<camunda:failedJobRetryTimeCycle>R5/PT5M</camunda:failedJobRetryTimeCycle>
</extensionElements>
</serviceTask>
...
</definitions>
The configuration follows the ISO_8601 standard for repeating time intervals. In the example, R5/PT5M
means that the maximum number of retries is 5 (R5
) and the delay of retry is 5 minutes (PT5M
).
Similarly, the following example defines three retries after 5 seconds each for a boundary timer event:
<definitions ... xmlns:camunda="http://activiti.org/bpmn">
...
<boundaryEvent id="BoundaryEvent_1" name="Boundary event" attachedToRef="Freigebenden_zuordnen_143">
<extensionElements>
<camunda:failedJobRetryTimeCycle>R3/PT5S</camunda:failedJobRetryTimeCycle>
</extensionElements>
<outgoing>SequenceFlow_3</outgoing>
<timerEventDefinition id="sid-ac5dcb4b-58e5-4c0c-b30a-a7009623769d">
<timeDuration xsi:type="tFormalExpression" id="sid-772d5012-17c2-4ae4-a044-252006933a1a">PT10S</timeDuration>
</timerEventDefinition>
</boundaryEvent>
...
</definitions>
Recap: a retry may be required, if there are any failures during the transaction which follows the timer.
The Job Executor makes sure that jobs from a single process instance are never executed concurrently. Why is this? Consider the following process definition:
We have a parallel gateway followed by three service tasks which all perform an asynchronous continuation. As a result of this, three jobs are added to the database. Once such a job is present in the database it can be processed by the job executor. It acquires the jobs and delegates them to a thread pool of worker threads which actually process the jobs. This means that using an asynchronous continuation, you can distribute the work to this thread pool (and in a clustered scenario even across multiple thread pools in the cluster).
This is usually a good thing. However it also bears an inherent problem: consistency. Consider the parallel join after the service tasks. When the execution of a service task is completed, we arrive at the parallel join and need to decide whether to wait for the other executions or whether we can move forward. That means, for each branch arriving at the parallel join, we need to take a decision whether we can continue or whether we need to wait for one or more other executions from the other branches.
This requires synchronization between the branches of execution. The engine addresses this problem with optimistic locking. Whenever we take a decision based on data that might not be current (because another transaction might modify it before we commit), we make sure to increment the revision of the same database row in both transactions. This way, whichever transaction commits first wins and the other ones fail with an optimistic locking exception. This solves the problem in the case of the process discussed above: if multiple executions arrive at the parallel join concurrently, they all assume that they have to wait, increment the revision of their parent execution (the process instance) and then try to commit. Whichever execution is first will be able to commit and the other ones will fail with an optimistic locking exception. Since the executions are triggered by a job, the job executor will retry to perform the same job after waiting for a certain amount of time and hopefully this time pass the synchronizing gateway.
However, while this is a perfectly fine solution from the point of view of persistence and consistency, this might not always be desirable behavior at a higher level, especially if the execution has non-transactional side effects, which will not be rolled back by the failing transaction. For instance, if the book concert tickets service does not share the same transaction as the process engine, we might book multiple tickets if we retry the job. That is why jobs of the same process instance are processed exclusively by default.
An exclusive job cannot be performed at the same time as another exclusive job from the same process instance. Consider the process shown in the section above: if the jobs corresponding to the service tasks are treated as exclusive, the job executor will make sure that they are not executed concurrently. Instead, it will ensure that whenever it acquires an exclusive job from a certain process instance, it also acquires all other exclusive jobs from the same process instance and delegates them to the same worker thread. This enforces sequential execution of the jobs.
Exclusive Jobs are the default configuration. All asynchronous continuations and timer events are thus exclusive by default. In addition, if you want a job to be non-exclusive, you can configure it as such using camunda:exclusive="false"
. For example, the following service task would be asynchronous but non-exclusive.
<serviceTask id="service" camunda:expression="${myService.performBooking(hotel, dates)}" camunda:async="true" camunda:exclusive="false" />
Is this a good solution? We had some people asking whether it was. Their concern was that it would prevent you from doing things in parallel and would thus be a performance problem. Again, two things have to be taken into consideration:
In the case of a single, application-embedded process engine, the job executor setup is the following:
There exists a single job table that the engine adds jobs to and the acquisition consumes from. Creating a second embedded engine would therefore create another acquisition thread and execution thread-pool.
In larger deployments however, this quickly leads to a poorly manageable situation. When running camunda BPM on Tomcat or an application server, the platform allows to declare multiple process engines shared by multiple process applications. With respect to job execution, one job acquisition may serve multiple job tables (and thus process engines) and a single thread-pool for execution may be used.
This setup enables centralized monitoring of job acquisition and execution. See the platform-specific information in the Runtime Container Integration section on how the thread pooling is implemented on the different platforms.
Different job acquisitions can also be configured differently, e.g. to meet business requirements like SLAs. For example, the acquisition's timeout when no more executable jobs are present can be configured differently per acquisition.
To which job acquisition a process engine is assigned can be specified in the declaration of the engine, so either in the processes.xml
deployment descriptor of a process application or in the camunda BPM platform descriptor. The following is an example configuration that declares a new engine and assigns it to the job acquisition named default
, which is created when the platform is bootstrapped.
<process-engine name="newEngine">
<job-acquisition>default</job-acquisition>
...
</process-engine>
Job acquisitions have to be declared in the BPM platform's deployment descriptor, see the container-specific configuration options.
When running the camunda platform in a cluster, there is a distinction between homogeneous and heterogeneous setups. We define a cluster as a set of network nodes that all run the camunda BPM platform against the same database (at least for one engine on each node). In the homogeneous case, the same process applications (and thus custom classes like JavaDelegates) are deployed to all of the nodes, as depicted below.
In the heterogeneous case, this is not given, meaning that some process applications are deployed to only part of the nodes.
A heterogeneous cluster setup as described above poses additional challenges to the job executor. Both platforms declare the same engine, i.e. they run against the same database. This means that jobs will be inserted into the same table. However, in the default configuration the job acquisition thread of node 1 will lock any executable jobs of that table and submit them to the local job execution pool. This means, jobs created in the context of process application B (so on node 2), may be executed on node 1 and vice versa. As the job execution may involve classes that are part of B's deployment, you are likely going to see a ClassNotFoundExeception
or any of the likes.
To prevent the job acquisition on node 1 from picking jobs that belong to node 2, the process engine can be configured as deployment aware, by the setting following property in the process engine configuration:
<process-engine name="default">
...
<properties>
<property name="jobExecutorDeploymentAware">true</property>
...
</properties>
</process-engine>
Now, the job acquisition thread on node 1 will only pick up jobs that belong to deployments made on that node, which solves the problem. Digging a little deeper, the acquisition will only pick up those jobs that belong to deployments that were registered with the engines it serves. Every deployment gets automatically registered. Additionally, one can explicitly register and unregister single deployments with an engine by using the ManagementService
methods registerDeploymentForJobExecutor(deploymentId)
and unregisterDeploymentForJobExecutor(deploymentId)
. It also offers a method getRegisteredDeployments()
to inspect the currently registered deployments.
As this is configurable on engine level, you can also work in a mixed setup, when some deployments are shared between all nodes and some are not. You can assign the globally shared process applications to an engine that is not deployment aware and the others to a deployment aware engine, probably both running against the same database. This way, jobs created in the context of the shared process applications will get executed on any cluster node, while the others only get executed on their respective nodes.
Multi-tenancy regards the case in which a single Camunda installation should serve more than one tenant. For each tenant, certain guarantees of isolation should be made. For example, one tenant's process instances should not interfere with those of another tenant.
Multi-tenancy can be achieved on different levels of data isolation. On the one end of the spectrum, different tenants' data can be stored in different databases by configuring multiple process engines, while on the other end of the spectrum, runtime entities can be associated with tenant markers and are stored in the same tables. In between these two extremes, it is possible to separate tenant data into different schemas or tables.
Recommended Approach:
We recommend the approach of multiple process engines (i.e., isolation into different databases/schemas/tables) over the tenant marker approach as it is more robust and easier to use.
Database-, schema-, and table-based multi-tenancy can be enabled by configuring one process engine per tenant. Each process engine can be configured to point to a different portion of the database. While they are isolated in that sense, they may all share computational resources such as a data source (when isolating via schemas or tables) or a thread pool for asynchronous job execution. Furthermore, the Camunda API offers convenient access to different process engines based on a tenant identifier.
Database, schema or table level
Working with different process engines for multiple tenants comprises the following steps:
Multiple process engines can be configured in a configuration file or via Java API. Each engine should be given a name that is related to a tenant such that it can be identified based on the tenant. For example, each engine can be named after the tenant it serves. See the Process Engine Bootstrapping section for details.
The process engine configuration can be adapted to achieve either database-, schema- or table-based isolation of data. If different tenants should work on entirely different databases, they have to use different jdbc settings or different data sources. For schema- or table-based isolation, a single data source can be used which means that resources like a connection pool can be shared among multiple engines. The configuration option databaseTablePrefix can be used to configure database access in this case.
For background execution of processes and tasks, the process engine has a component called job executor. The job executor periodically acquires jobs from the database and submits them to a thread pool for execution. For all process applications on one server, one thread pool is used for job execution. Furthermore, it is possible to share the acquisition thread between multiple engines. This way, resources are still manageable even when a large number of process engines is used. See the section The Job Executor and Multiple Process Engines for details.
Multi-tenancy settings can be applied in the various ways of configuring a process engine. The following is an example of a bpm-platform.xml file that specifies engines for two tenants that share the same database but work on different schemas:
<?xml version="1.0" encoding="UTF-8"?>
<bpm-platform xmlns="http://www.camunda.org/schema/1.0/BpmPlatform"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.camunda.org/schema/1.0/BpmPlatform http://www.camunda.org/schema/1.0/BpmPlatform">
<job-executor>
<job-acquisition name="default" />
</job-executor>
<process-engine name="tenant1">
<job-acquisition>default</job-acquisition>
<configuration>org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration</configuration>
<datasource>java:jdbc/ProcessEngine</datasource>
<properties>
<property name="databaseTablePrefix">TENANT_1.</property>
<property name="history">full</property>
<property name="databaseSchemaUpdate">true</property>
<property name="authorizationEnabled">true</property>
</properties>
</process-engine>
<process-engine name="tenant2">
<job-acquisition>default</job-acquisition>
<configuration>org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration</configuration>
<datasource>java:jdbc/ProcessEngine</datasource>
<properties>
<property name="databaseTablePrefix">TENANT_2.</property>
<property name="history">full</property>
<property name="databaseSchemaUpdate">true</property>
<property name="authorizationEnabled">true</property>
</properties>
</process-engine>
</bpm-platform>
When developing process applications, i.e., process definitions and supplementary code, some processes may be deployed to every tenant's engine while others are tenant-specific. The processes.xml deployment descriptor that is part of every process application offers this kind of flexibility by the concept of process archives. One application can contain any number of process archive deployments, each of which can be deployed to a different process engine with different resources. See the section on the processes.xml deployment descriptor for details.
The following is an example that deploys different process definitions for two tenants. It uses the configuration property resourceRootPath
that specifies a path in the deployment that contains process definitions to deploy. Accordingly, all the processes under processes/tenant1
on the application's classpath are deployed to engine tenant1
, while all the processes under processes/tenant2
are deployed to engine tenant2
.
<process-application
xmlns="http://www.camunda.org/schema/1.0/ProcessApplication"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<process-archive name="tenant1-archive">
<process-engine>tenant1</process-engine>
<properties>
<property name="resourceRootPath">classpath:processes/tenant1/</property>
<property name="isDeleteUponUndeploy">false</property>
<property name="isScanForProcessDefinitions">true</property>
</properties>
</process-archive>
<process-archive name="tenant2-archive">
<process-engine>tenant2</process-engine>
<properties>
<property name="resourceRootPath">classpath:processes/tenant2/</property>
<property name="isDeleteUponUndeploy">false</property>
<property name="isScanForProcessDefinitions">true</property>
</properties>
</process-archive>
</process-application>
In order to access a specific tenant's process engine at runtime, it has to be identified by its name. The Camunda engine offers access to named engines in various programming models:
The least isolated approach is to add tenant-specific markers in form of a process variable to running processes. This marker identifies the tenant in which context the process instance is running. In order to access only data for a specific tenant, many process engine queries allow to filter by process variables. A calling application must make sure to filter according to the correct tenant.
Row level with applications responsible for filtering
Working with tenant markers comprises the following aspects:
A tenant marker can be added to a process instance by passing it as a process variable on instantiation:
Map<String, Object> variables = new HashMap<String, Object>();
variables.put("TENANT_ID", "tenant1");
runtimeService.startProcessInstanceByKey("some process", variables);
For process definitions that are specific to a single tenant, it is also possible to use an execution listener on the start event that immediately sets the variable after instantiation.
Process applications that retrieve tenant-specific data must ensure that they filter by the tenant marker in order to isolate data between tenants. The following is a query that retrieves all process instances for tenant tenant1
:
List<ProcessInstance> processInstances =
runtimeService.createProcessInstanceQuery()
.variableValueEquals("TENANT_ID", "tenant1")
.list();
Other queries like task and execution queries offer the same filtering capabilities. For correlation via the RuntimeService#correlateMessage
methods, tenant-specific correlation can be achieved by adding the tenant marker as a correlation key like:
runtimeService.createMessageCorrelation("someMessage")
.processInstanceVariableEquals("TENANT_ID", "tenant1")
.correlate();
We use Java Logging to avoid any third party logging requirements.
Incidents are notable events that happen in the process engine. Such incidents usually indicate some kind of problem related to process execution. Examples of such incidents may be a failed job with elapsed retries (retries = 0), indicating that an execution is stuck and manual administrative action is necessary for repairing the process instance. Or the fact that a process instance has entered an error state which could be modeled as a BPMN Error Boundary event or a User Task explicitly marked as "error state". If such incidents arise, the process engine fires an internal event which can be handled by a configurable incident handler.
In the default configuration, the process engine writes incidents to the process engine database. You may then query the database for different types and kinds of incidents using the IncidentQuery
exposed by the RuntimeService
:
runtimeService.createIncidentQuery()
.processDefinitionId("someDefinition")
.list();
Incidents are stored in the AC_RU_INCIDENT database table.
If you want to customize the incident handling behavior, it is possible to replace the default incident handlers in the process engine configuration and provide custom implementations (see below).
There are different types of incidents. Currently the process engine supports the following incidents:
The process engine allows you to configure on an incident type basis whether certain incidents should be raised or not.
The following properties are available in the org.camunda.bpm.engine.ProcessEngineConfiguration
class:
createIncidentOnFailedJobEnabled
: indicates whether Failed Job incidents should be raised.Incident Handlers are responsible for handling incidents of a certain type (see Incident Types below).
An Incident Handler implements the following interface:
public interface IncidentHandler {
public String getIncidentHandlerType();
public void handleIncident(String processDefinitionId, String activityId, String executionId, String configuration);
public void resolveIncident(String processDefinitionId, String activityId, String executionId, String configuration);
}
The handleIncident
method is called when a new incident is created. The resolveIncident
method is called when an incident is resolved. If you want to provide a custom incident handler implementation you can replace one or multiple incident handlers using the following method:
org.camunda.bpm.engine.impl.cfg.ProcessEngineConfigurationImpl.setCustomIncidentHandlers(List<IncidentHandler>)
An example of a custom incident handler could be a handler which, in addition to the default behavior also sends an email to an administrator.
The process engine configuration can be extended through process engine plugins. A process engine plugin is an extension to the process engine configuration.
A plugin must provide an implementation of the ProcessEnginePlugin interface.
Process engine plugins can be configured
The following is an example of how to configure a process engine plugin in bpm-platform.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<bpm-platform xmlns="http://www.camunda.org/schema/1.0/BpmPlatform"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.camunda.org/schema/1.0/BpmPlatform http://www.camunda.org/schema/1.0/BpmPlatform ">
<job-executor>
<job-acquisition name="default" />
</job-executor>
<process-engine name="default">
<job-acquisition>default</job-acquisition>
<configuration>org.camunda.bpm.engine.impl.cfg.JtaProcessEngineConfiguration</configuration>
<datasource>jdbc/ProcessEngine</datasource>
<plugins>
<plugin>
<class>org.camunda.bpm.engine.MyCustomProcessEnginePlugin</class>
<properties>
<property name="boost">10</property>
<property name="maxPerformance">true</property>
<property name="actors">akka</property>
</properties>
</plugin>
</plugins>
</process-engine>
</bpm-platform>
A process engine plugin class must be visible to the classloader which loads the process engine classes.
The following is a list of built-in process engine plugins:
The identity service is an API abstraction over various User / Group repositories. The basic entities are
Example:
User demoUser = processEngine.getIdentityService()
.createUserQuery()
.userId("demo")
.singleResult();
camunda BPM distinguishes between read-only and writable user repositories. A read-only user repository provides read-only access to the underlying user / group database. A writable user repository allows write access to the user database which includes creating, updating and deleting users and groups.
In order to provide a custom identity provider implementation, the following interfaces can be implemented:
The database identity service uses the process engine database for managing users and groups. This is the default identity service implementation used if no alternative identity service implementation is provided.
The Database Identity Service implements both ReadOnlyIdentityProvider
and WritableIdentityProvider
providing full CRUD functionality in Users, Groups and Memberships.
The LDAP identity service provides read-only access to an LDAP-based user / group repository. The identity service provider is implemented as a Process Engine Plugin and can be added to the process engine configuration. In that case it replaces the default Database Identity Service.
In order to use the LDAP identity service, the camunda-identity-ldap.jar
library has to be added to the classloader of the process engine.
<dependency>
<groupId>org.camunda.bpm.identity</groupId>
<artifactId>camunda-identity-ldap</artifactId>
<version>${camunda.version}</version>
</dependency>
The following is an example of how to configure the LDAP Identity Provider Plugin using Spring XML:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="processEngineConfiguration" class="org.camunda.bpm.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration">
...
<property name="processEnginePlugins">
<list>
<ref bean="ldapIdentityProviderPlugin" />
</list>
</property>
</bean>
<bean id="ldapIdentityProviderPlugin" class="org.camunda.bpm.identity.impl.ldap.plugin.LdapIdentityProviderPlugin">
<property name="serverUrl" value="ldap://localhost:3433/" />
<property name="managerDn" value="uid=daniel,ou=office-berlin,o=camunda,c=org" />
<property name="managerPassword" value="daniel" />
<property name="baseDn" value="o=camunda,c=org" />
<property name="userSearchBase" value="" />
<property name="userSearchFilter" value="(objectclass=person)" />
<property name="userIdAttribute" value="uid" />
<property name="userFirstnameAttribute" value="cn" />
<property name="userLastnameAttribute" value="sn" />
<property name="userEmailAttribute" value="mail" />
<property name="userPasswordAttribute" value="userpassword" />
<property name="groupSearchBase" value="" />
<property name="groupSearchFilter" value="(objectclass=groupOfNames)" />
<property name="groupIdAttribute" value="ou" />
<property name="groupNameAttribute" value="cn" />
<property name="groupMemberAttribute" value="member" />
</bean>
</beans>
The following is an example of how to configure the LDAP Identity Provider Plugin in bpm-platform.xml / processes.xml:
<process-engine name="default">
<job-acquisition>default</job-acquisition>
<configuration>org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration</configuration>
<datasource>java:jdbc/ProcessEngine</datasource>
<properties>...</properties>
<plugins>
<plugin>
<class>org.camunda.bpm.identity.impl.ldap.plugin.LdapIdentityProviderPlugin</class>
<properties>
<property name="serverUrl">ldap://localhost:4334/</property>
<property name="managerDn">uid=jonny,ou=office-berlin,o=camunda,c=org</property>
<property name="managerPassword">s3cr3t</property>
<property name="baseDn">o=camunda,c=org</property>
<property name="userSearchBase"></property>
<property name="userSearchFilter">(objectclass=person)</property>
<property name="userIdAttribute">uid</property>
<property name="userFirstnameAttribute">cn</property>
<property name="userLastnameAttribute">sn</property>
<property name="userEmailAttribute">mail</property>
<property name="userPasswordAttribute">userpassword</property>
<property name="groupSearchBase"></property>
<property name="groupSearchFilter">(objectclass=groupOfNames)</property>
<property name="groupIdAttribute">ou</property>
<property name="groupNameAttribute">cn</property>
<property name="groupMemberAttribute">member</property>
</properties>
</plugin>
</plugins>
</process-engine>
Administrator Authorization Plugin The LDAP Identity Provider Plugin is usually used in combination with the Administrator Authorization Plugin which allows you to grant administrator authorizations for a particular LDAP User / Group.
The LDAP Identity Provider provides the following configuration properties:
Property | Description |
---|---|
serverUrl |
The url of the LDAP server to connect to. |
managerDn |
The absolute DN of the manager user of the LDAP directory. |
managerPassword |
The password of the manager user of the LDAP directory |
baseDn |
The base DN: identifies the root of the LDAP directory. Is appended to all DN names composed for searching for users or groups. Example: |
userSearchBase |
Identifies the node in the LDAP tree under which the plugin should search for users. Must be relative to Example: |
userSearchFilter |
LDAP query string used when searching for users. Example: |
userIdAttribute |
Name of the user Id property. Example: |
userFirstnameAttribute |
Name of the firstname property. Example: |
userLastnameAttribute |
Name of the lastname property. Example: |
userEmailAttribute |
Name of the email property. Example: |
userPasswordAttribute |
Name of the password property. Example: |
groupSearchBase |
Identifies the node in the LDAP tree under which the plugin should search for groups. Must be relative to Example: |
groupSearchFilter |
LDAP query string used when searching for groups. Example: |
groupIdAttribute |
Name of the group Id property. Example: |
groupNameAttribute |
Name of the group Name property. Example: |
groupTypeAttribute |
Name of the group Type property. Example: |
groupMemberAttribute |
Name of the member attribute. Example: |
acceptUntrustedCertificates |
Accept of untrusted certificates if LDAP server uses Ssl. Warning: we strongly advise against using this property. Better install untrusted certificates to JDK key store. |
useSsl |
Set to true if LDAP connection uses SSL. Default: |
initialContextFactory |
Value for the |
securityAuthentication |
Value for the |
usePosixGroups |
Indicates whether posix groups are used. If true, the connector will use a simple
(unqualified) user id when querying for groups by group member instead of the full DN.
Default: |
A BPMN process diagram is a formidable place to visualize information around your process. You have two options to do this:
We generally recommend the JavaScript libraries, but using the Process Diagram API can be considered if
We provide BPMN JavaScript libraries which can render BPMN 2.0 process models in your browser.
Go to camunda-bpmn.js for libraries and documentation.
When using the Process Diagram API you can deploy a PNG image together with your BPMN 2.0 Process Model. Then you have an API to query the image and normalized coordinates for the process model. With these informations you can easily visualize anything on the process model. The following image shows an example using an BPMN 2.0 model from Adonis (see Roundtrip with other BPMN 2.0 Modelers):
Our Invoice Showcase is a process application that also uses the Process Diagram API showing details of the current process instance to end users working on user tasks.
In order to use the Process Diagram API you need to deploy a process diagram together with your process. You can use:
The deployment can be done by any deployment mechanism camunda BPM offers. For instance if you use WAR deployment, you just need to place the image right next to the BPMN 2.0 XML file of your process (meaning in the same folder). The camunda Modeler automatically creates an image and saves it to the right direction each time you save if. It is important that both files have the same name, e.g.,
camunda-invoice.bpmn and
camunda-invoice.png.
Maven will add them to the built artifact and the platform will take care of deploying it to the process engine. The deployer (only) looks for files with the extensions .png or .jpg to identify process diagrams.
The BPMN 2.0 XML file of your process must contain Diagram Interchange data. This is a special section containing positions and dimensions of the elements in the process diagram. Any modeling tool that conforms to BPMN 2.0 should be able to export this as part of its regular BPMN 2.0 XML export. Here is an example of how it looks like:
...
<bpmndi:BPMNDiagram id="sid-02bd9186-9a09-4ef7-b17d-95bc9385c7ab">
<bpmndi:BPMNPlane id="sid-2cd25826-e553-4573-ad62-be3d38904386" bpmnElement="invoice-process">
<bpmndi:BPMNShape id="Process_Engine_1_gui" bpmnElement="Process_Engine_1" isHorizontal="true">
<omgdc:Bounds height="488.0" width="1126.0" x="0.0" y="0.0"/>
</bpmndi:BPMNShape>
...
</bpmndi:BPMNPlane>
</bpmndi:BPMNDiagram>
...
If you have deployed a process diagram into the engine, you can retrieve it using the method getProcessDiagram()
of the RepositoryService, which takes a process definition id as an argument and returns an InputStream
with the content of the process diagram image.
In a Web application you can, e.g., write a Servlet to provide process diagrams (this code is taken from the Invoice Showcase, see ProcessDiagramServlet.java):
@WebServlet(value = "/processDiagram", loadOnStartup = 1)
public class ProcessDiagramServlet extends HttpServlet {
@Inject
private RepositoryService repositoryService;
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
String processDefinitionId = request.getParameter("processDefinitionId");
InputStream processDiagram = repositoryService.getProcessDiagram(processDefinitionId);
response.setContentType("image/png");
response.getOutputStream().write(IOUtils.toByteArray(processDiagram));
}
}
The method getProcessDiagramLayout()
of the RepositoryService takes a process definition id as an argument and returns a DiagramLayout object. This object provides x
and y
coordinates as well as width
and height
for all elements of the process diagram.
DiagramLayout processDiagramLayout = repositoryService.getProcessDiagramLayout(processInstance.getProcessDefinitionId());
List<DiagramNode> nodes = processDiagramLayout.getNodes();
for (DiagramNode node : nodes) {
String id = node.getId();
Double x = node.getX();
Double y = node.getY();
Double width = node.getWidth();
Double height = node.getHeight();
// TODO: do some thing with the coordinates
}
These coordinates are given as pixels relative to the upper left corner of the image returned by getProcessDiagram()
, i.e., you can take them directly and draw or render something on top the image. Be creative!
Hint: If you have problems with the positions not fitting exactly try to add a pool around your process.
To give you some inspiration of what you can do with the Process Diagram API, we have another look at the code of the Invoice Showcase . It uses JSF, HTML and CSS to highlight the current activity of a given process instance.
A CDI bean looks up the currently active activities in the RuntimeService and gets position and dimension of these activities using DiagramLayout.getNode()
(see ProcessDiagramController.java):
@Named
public class ProcessDiagramController {
...
public List<DiagramNode> getActiveActivityBoundsOfLatestProcessInstance() {
ArrayList<DiagramNode> list = new ArrayList<DiagramNode>();
ProcessInstance processInstance = getCurrentProcessInstance();
if (processInstance != null) {
DiagramLayout processDiagramLayout = repositoryService.getProcessDiagramLayout(processInstance.getProcessDefinitionId());
List<String> activeActivityIds = runtimeService.getActiveActivityIds(processInstance.getId());
for (String activeActivityId : activeActivityIds) {
list.add(processDiagramLayout.getNode(activeActivityId));
}
}
return list;
}
This bean is then invoked by a JSF page, which displays the process diagram from the Servlet shown above and places tokens on top of it (see taskTemplate.xhtml).
<div style="position: relative">
<img src="processDiagram?processDefinitionId=#{task.processDefinitionId}" />
<ui:repeat
value="#{processDiagramController.getActiveActivityBoundsOfLatestProcessInstance()}"
var="bounds">
<img src="token.png" style="
position: absolute;
left: #{bounds.x + bounds.width - 25}px;
top: #{bounds.y - 15}px;
z-index: 1;"/>
</ui:repeat>
</div>
However, you can also draw a rectangle around a node.
<div style="position: relative">
<p:graphicImage
value="processDiagram?processDefinitionId=#{task.processDefinitionId}" />
<ui:repeat
value="#{processDiagramController.getActiveActivityBoundsOfLatestProcessInstance()}"
var="bounds">
<div style="
position: absolute;
left: #{bounds.x - 1}px;
top: #{bounds.y - 1}px;
width: #{bounds.width - 2}px;
height: #{bounds.height - 2}px;
border: 2px solid rgb(181, 21, 43);
border-radius: 5px; -moz-border-radius: 5px;"></div>
</ui:repeat>
</div>
A Process Application is an ordinary Java Application that uses the camunda process engine for BPM and Worklow functionality. Most such applications will start their own process engine (or use a process engine provided by the runtime container), deploy some BPMN 2.0 process definitions and interact with process instances derived from these process definitions. Since most process applications perform very similar bootstrapping, deployment and runtime tasks, we generalized this functionality into a Java Class which is named - Surprise! - ProcessApplication
. The concept is similar to the javax.ws.rs.core.Application
class in JAX-RS: adding the process application class allows you to bootstrap and configure the provided services.
Adding a ProcessApplication
class to your Java Application provides your applications with the following services:
processes.xml
which is added to your application. The ProcessApplication class makes sure this file is picked up and the defined process engines are started and stopped as the application is deployed / undeployed.processes.xml
file. The process application class makes sure the deployments are performed upon deployment of your application. Scanning your application for process definition resource files (engine in .bpmn20.xml or .bpmn) is supported as well.Transforming an existing Java Application into a Process Application is easy and non-intrusive. You simply have to add:
Heads-up! You might want to checkout the Getting Started Tutorial first as it explains the creation of a process application step by step or the Project Templates for Maven, which gives you a complete running process application out of the box.
You can delegate the bootstrapping of the process engine and process deployment to a process application class. The basic ProcessApplication functionality is provided by the org.camunda.bpm.application.AbstractProcessApplication
base class. Based on this class there is a set of environment-specific sub classes that realize integration within a specific environment:
In the following section, we walk through the different implementations and discuss where and how they can be used.
All Servlet Containers
The Servlet Process Application is supported on all containers. Read the note about Servlet Process Application and EJB / Java EE containers.
Packaging: WAR (or embedded WAR inside EAR)
The ServletProcessApplication
class is the base class for developing Process Applications based on the Servlet Specification (Java Web Applications). The servlet process application implements the javax.servlet.ServletContextListener
interface which allows it to participate in the deployment lifecycle of your Web application
The following is an example of a Servlet Process Application:
package org.camunda.bpm.example.loanapproval;
import org.camunda.bpm.application.ProcessApplication;
import org.camunda.bpm.application.impl.ServletProcessApplication;
@ProcessApplication("Loan Approval App")
public class LoanApprovalApplication extends ServletProcessApplication {
// empty implementation
}
Notice the @ProcessApplication
annotation. This annotation fulfills two purposes:
@ProcessApplication("Loan Approval App")
. If no name is provided, a name is automatically detected. In case of a ServletProcessApplication, the name of the ServletContext is used.javax.servlet.ServletContainerInitializer
implementation named org.camunda.bpm.application.impl.ServletProcessApplicationDeployer
which is located in the camunda-engine module. The implementation works for both embedded deployment of the camunda-engine.jar as a web application library in the WEB-INF/lib
folder of your WAR file or for the deployment of the camunda-engine.jar as a shared library in the shared library (i.e. Apache Tomcat global lib/
folder) directory of your application server. The Servlet 3.0 Specification foresees both deployment scenarios. In case of embedded deployment, the ServletProcessApplicationDeployer
is notified once, when the webapplication is deployed. In case of deployment as a shared library, the ServletProcessApplicationDeployer
is notified for each WAR file containing a class annotated with @ProcessApplication
(as required by the Servlet 3.0 Specification).This means that in case you deploy to a Servlet 3.0 compliant container (such as Apache Tomcat 7) annotating your class with @ProcessApplication
is sufficient.
There is a project template for Maven called camunda-archetype-servlet-war
, which gives you a complete running project based on a ServletProcessApplication.
In a Pre-Servlet 3.0 container such as Apache Tomcat 6 (or JBoss Application Server 5 for that matter), you need manually register your ProcessApplication class as Servlet Context Listener in the Servlet Container. This can be achieved by adding a listener element to your WEB-INF/web.xml
file:
<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
<listener>
<listener-class>org.my.project.MyProcessApplication</listener-class>
</listener>
</web-app>
You can use the ServletProcessApplication inside an EJB / Java EE Container such as Glassfish or JBoss. Process application bootstrapping and deployment will work in the same way. However, you will not be able to use all Java EE features at runtime. In contrast to the EjbProcessApplication
(see next section), the ServletProcessApplication
does not perform proper Java EE cross-application context switching. When the process engine invokes Java Delegates from your application, only the Context Class Loader of the current Thread is set to the classloader of your application. This does allow the process engine to resolve Java Delegate implementations from your application but the container will not perform an EE context switch to your application. As a consequence, if you use the ServletProcessApplciation inside a Java EE container, you will not be able to use features like:
If your application does not use such features, it is perfectly fine to use the ServletProcessApplication inside an EE container. In that case you only get servlet specification guarantees.
Java EE 6 Container only
The EjbProcessApplication is supported in Java EE 6 containers or higher. It is not supported on Servlet Containers like Apache Tomcat. It may be adapted to work inside Java EE 5 Containers.
Packaging: JAR, WAR, EAR
The EjbProcessApplication is the base class for developing Java EE based Process Applications. An Ejb Process Application class itself must be deployed as an EJB.
In order to add an Ejb Process Application to your Java Application, you have two options:
org.camunda.bpm.application.impl.ejb.DefaultEjbProcessApplication
) bundled as a maven artifact. The simplest possibility is to add this implementation as a maven dependency to your application.Both options are explained in greater detail below.
The most convenient option for deploying a process application to an Ejb Container is by adding the following maven dependency to you maven project:
<dependency>
<groupId>org.camunda.bpm.javaee</groupId>
<artifactId>camunda-ejb-client</artifactId>
<version>${camunda.version}</version>
</dependency>
The camunda-ejb-client contains a reusable default implementation of the EjbProcessApplication as a Singleton Session Bean with auto-activation.
This deployment option requires that your project is a composite deployment (such as a WAR or EAR) since you need to add a library JAR file. You could of course use something like the maven shade plugin for adding the class contained in the camunda-ejb-client artifact to a JAR-based deployment.
camunda-archetype-ejb-war
, which gives you a complete running project based on the camunda-ejb-client.
If you want to customize the behavior of the the EjbProcessApplication class you have the option of writing a custom EjbProcessApplication class. The following is an example of such an implementation:
@Singleton
@Startup
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
@TransactionAttribute(TransactionAttributeType.REQUIRED)
@ProcessApplication
@Local(ProcessApplicationInterface.class)
public class MyEjbProcessApplication extends EjbProcessApplication {
@PostConstruct
public void start() {
deploy();
}
@PreDestroy
public void stop() {
undeploy();
}
}
If your application is a WAR
(or a WAR
inside an EAR
) and you want to use embedded or external task forms inside the Tasklist application, then your custom EjbProcessApplication must expose the servlet context path of your application as a property. This enables the Tasklist to resolve the path to the embedded or external task forms.
Therefore your custom EjbProcessApplication must be extended by a Map
and a getter-method for that Map
as follows:
@Singleton
@Startup
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
@TransactionAttribute(TransactionAttributeType.REQUIRED)
@ProcessApplication
@Local(ProcessApplicationInterface.class)
public class MyEjbProcessApplication extends EjbProcessApplication {
protected Map<String, String> properties = new HashMap<String, String>();
@PostConstruct
public void start() {
deploy();
}
@PreDestroy
public void stop() {
undeploy();
}
public Map<String, String> getProperties() {
return properties;
}
}
Furthermore, to provide the servlet context path a custom javax.servlet.ServletContextListener
must be added to your application. Inside your custom implementation of the ServletContextListener
you have to
@EJB
annotation,ProcessApplicationInfo#PROP_SERVLET_CONTEXT_PATH
property inside your custom EjbProcessApplication.This can be done as follows:
public class ProcessArchiveServletContextListener implements ServletContextListener {
@EJB
private ProcessApplicationInterface processApplication;
public void contextInitialized(ServletContextEvent contextEvent) {
String contextPath = contextEvent.getServletContext().getContextPath();
Map<String, String> properties = processApplication.getProperties();
properties.put(ProcessApplicationInfo.PROP_SERVLET_CONTEXT_PATH, contextPath);
}
public void contextDestroyed(ServletContextEvent arg0) {
}
}
Finally the custom ProcessArchiveServletContextListener
has to be added to your WEB-INF/web.xml
file:
<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
<listener>
<listener-class>org.my.project.ProcessArchiveServletContextListener</listener-class>
</listener>
...
</web-app>
The fact that the EjbProcessApplication exposes itself as a Session Bean Component inside the EJB container determines
ProcessApplicationReference
held by the process engine.When the process engine invokes the Ejb Process Application, it gets EJB invocation semantics. For example, if your process application provides a JavaDelegate
implementation, the process engine will call the EjbProcessApplication's execute(java.util.concurrent.Callable)
method and from that method invoke JavaDelegate
. This makes sure that
JavaDelegate
may take advantage of the EjbProcessApplication's invocation context and resolve resources from the component's environment (such as a java:comp/BeanManager
).Big pile of EJB interceptors | | +--------------------+ | |Process Application | invoke v | | ProcessEngine ----------------OOOOO--> Java Delegate | | | | | +--------------------+
When the EjbProcessApplication registers with a process engine (see ManagementService#registerProcessApplication(String, ProcessApplicationReference)
, the process application passes a reference to itself to the process engine. This reference allows the process engine to reference the process application. The EjbProcessApplication takes advantage of the Ejb Containers naming context and passes a reference containing the EJBProcessApplication's Component Name to the process engine. Whenever the process engine needs access to process application, the actual component instance is looked up and invoked.
All containers
The EmbeddedProcessApplication can only be used with an embedded process engine and does not provide auto-activation.
Packaging: JAR, WAR, EAR
The org.camunda.bpm.application.impl.EmbeddedProcessApplication
can only be used in combination with an embedded process engine. Usage in combination with a Shared Process Engine is not supported as the class performs no process application context switching at runtime.
The Embedded Process Application also does not provide auto-startup. You need to manually call the deploy method of your process application:
// instantiate the process application
MyProcessApplication processApplication = new MyProcessApplication();
// deploy the process application
processApplication.deploy();
// interact with the process engine
ProcessEngine processEngine = BpmPlatform.getDefaultProcessEngine();
processEngine.getRuntimeService().startProcessInstanceByKey(...);
// undeploy the process application
processApplication.undeploy();
Where the class MyProcessApplication
could look like this:
@ProcessApplication(
name="my-app",
deploymentDescriptors={"path/to/my/processes.xml"}
)
public class MyProcessApplication extends EmbeddedProcessApplication {
}
Supported on
The spring process application is currently not supported on JBoss AS 7.
Packaging: JAR, WAR, EAR
The org.camunda.bpm.engine.spring.application.SpringProcessApplication
class allows bootstrapping a process application through a Spring Application Context. You can either reference the SpringProcessApplication class from an Xml-based application context configuration file or use an annotation-based setup.
If your application is a WebApplication you should use org.camunda.bpm.engine.spring.application.SpringServletProcessApplication
as it provides support for exposing the servlet context path through the ProcessApplicationInfo#PROP_SERVLET_CONTEXT_PATH
property.
We recommend to always use SpringServletProcessApplication unless the deployment is not a web application. Using this class requires the
org.springframework:spring-web
module to be on the classpath.
The following shows an example of how to bootstrap a SpringProcessApplication inside a spring application context Xml file:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="invoicePa" class="org.camunda.bpm.engine.spring.application.SpringServletProcessApplication" />
</beans>
(Remember that you additionally need a META-INF/processes.xml
file.
The SpringProcessApplication will use the bean name (id="invoicePa"
in the example above) as auto-detected name for the process application. Make sure to provide a unique process application name here (unique across all process applications deployed on a single application server instance.) As an alternative, you can provide a custom subclass of SpringProcessApplication (or SpringServletProcessApplication) and override the getName()
method.
If you use a Spring Process Application, you may want to configure your process engine inside the spring application context Xml file (as opposed to the processes.xml file). In this case, you must use the org.camunda.bpm.engine.spring.container.ManagedProcessEngineFactoryBean
class for creating the process engine object instance. In addition to creating the process engine object, this implementation registers the process engine with the BPM Platform infrastructure so that the process engine is returned by the ProcessEngineService
. The following is an example of how to configure a managed process engine using Spring.
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="dataSource" class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy">
<property name="targetDataSource">
<bean class="org.springframework.jdbc.datasource.SimpleDriverDataSource">
<property name="driverClass" value="org.h2.Driver"/>
<property name="url" value="jdbc:h2:mem:camunda;DB_CLOSE_DELAY=1000"/>
<property name="username" value="sa"/>
<property name="password" value=""/>
</bean>
</property>
</bean>
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource"/>
</bean>
<bean id="processEngineConfiguration" class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">
<property name="processEngineName" value="default" />
<property name="dataSource" ref="dataSource"/>
<property name="transactionManager" ref="transactionManager"/>
<property name="databaseSchemaUpdate" value="true"/>
<property name="jobExecutorActivate" value="false"/>
</bean>
<!-- using ManagedProcessEngineFactoryBean allows registering the ProcessEngine with the BpmPlatform -->
<bean id="processEngine" class="org.camunda.bpm.engine.spring.container.ManagedProcessEngineFactoryBean">
<property name="processEngineConfiguration" ref="processEngineConfiguration"/>
</bean>
<bean id="repositoryService" factory-bean="processEngine" factory-method="getRepositoryService"/>
<bean id="runtimeService" factory-bean="processEngine" factory-method="getRuntimeService"/>
<bean id="taskService" factory-bean="processEngine" factory-method="getTaskService"/>
<bean id="historyService" factory-bean="processEngine" factory-method="getHistoryService"/>
<bean id="managementService" factory-bean="processEngine" factory-method="getManagementService"/>
</beans>
The processes.xml deployment descriptor contains the deployment metadata for a process application. The following example is a simple example of a processes.xml
deployment descriptor:
<process-application
xmlns="http://www.camunda.org/schema/1.0/ProcessApplication"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<process-archive name="loan-approval">
<process-engine>default</process-engine>
<properties>
<property name="isDeleteUponUndeploy">false</property>
<property name="isScanForProcessDefinitions">true</property>
</properties>
</process-archive>
</process-application>
A single deployment (process-archive) is declared. The process archive has the name loan-approval and is deployed to the process engine with the name default. Two additional properties are specified:
isDeleteUponUndeploy
: this property controls whether the undeployment of the process application should entail that the process engine deployment is deleted from the database. The default setting is false. If this property is set to true, undeployment of the process application leads to the removal of the deplyoment (including process instances) from the database.isScanForProcessDefinitions
: if this property is set to true, the classpath of the process application is automatically scanned for process definition resources. Process definition resources must end in .bpmn20.xml
or .bpmn
.See Deployment Descriptor Reference for complete documentation of the syntax of the processes.xml
file.
The processes.xml may optionally be empty (left blank). In this case default values are used. The empty processes.xml corresponds to the following configuration:
<process-application
xmlns="http://www.camunda.org/schema/1.0/ProcessApplication"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<process-archive>
<properties>
<property name="isDeleteUponUndeploy">false</property>
<property name="isScanForProcessDefinitions">true</property>
</properties>
</process-archive>
</process-application>
The empty processes.xml will scan for process definitions and perform a single deployment to the default process engine.
The default location of the processes.xml file is META-INF/processes.xml
. The camunda BPM platform will parse and process all processes.xml files on the classpath of a process application. Composite process applications (WAR / EAR) may carry multiple subdeployments providing a META-INF/processes.xml file.
In an apache maven based project, add the the processes.xml file to the src/main/resources/META-INF
folder.
If you want to specify a custom location for the processes.xml file, you need to use the deploymentDescriptors
property of the @ProcessApplication
annotation:
@ProcessApplication(
name="my-app",
deploymentDescriptors={"path/to/my/processes.xml"}
)
public class MyProcessApp extends ServletProcessApplication {
}
The provided path(s) must be resolvable through the ClassLoader#getResourceAsStream(String)
-Method of the classloader returned by the AbstractProcessApplication#getProcessApplicationClassloader()
method of the process application.
Multiple distinct locations are supported.
The processes.xml file can also be used for configuring one or multiple process engine(s). The following is an example of a configuration of a process engine inside a processes.xml file:
<process-application
xmlns="http://www.camunda.org/schema/1.0/ProcessApplication"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<process-engine name="my-engine">
<configuration>org.camunda.bpm.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration</configuration>
</process-engine>
<process-archive name="loan-approval">
<process-engine>my-engine</process-engine>
<properties>
<property name="isDeleteUponUndeploy">false</property>
<property name="isScanForProcessDefinitions">true</property>
</properties>
</process-archive>
</process-application>
The <configuration>...</configuration>
property allows specifying the name of a process engine configuration class to be used when building the process engine.
When deploying a set of BPMN 2.0 files to the process engine, a process deployment is created. The process deployment is performed to the process engine database so that when the process engine is stopped and restarted, the process definitions can be restored from the database and execution can continue. When a process application performs a deployment, in addition to the database deployment it will create a registration for this deployment with the process engine. This is illustrated in the following figure:
Deployment of the process application "invoice.war" is illustrated on the left hand side:
deployment-1
is created for the process definition.deployment-1
and the registration is returned.When the process application is undeployed, the registration for the deployment is removed (see right hand side of the illustration above). After the registration is cleared, the deployment is still present in the database.
The registration allows the process engine to load additional Java Classes and resources from the process application when executing the processes. In contrast to the database deployment, which can be restored whenever the process engine is restarted, the registration of the process application is kept as in-memory state. This in-memory state is local to an individual cluster node, allowing us to undeploy or redeploy a process application on a particular cluster node without affecting the other nodes and without having to restart the process engine. If the Job Executor is deployment aware, job execution will also stop for jobs created by this process application. However, as a consequence, the registration also needs to be re-created when the application server is restarted. This happens automatically if the process application takes part in the application server deployment lifecycle. For instance, ServletProcessApplications are deployed as ServletContextListeners and when the servlet context is started, it creates the deployment and registration with the process engine. The redeployment process is illustrated in the next figure:
(a) Left hand side: invoice.bpmn has not changed:
deployment-1
is still present in the database, the process engine compares the xml content of the database deployment with the bpmn20.xml file from the process application. In this case, both xml documents are identical which means that the existing deployment can be resumed.deployment-1
.(b) Right hand side: invoice.bpmn has changed:
deployment-1
is still present in the database, the process engine compares the xml content of the database deployment with the invoice.bpmn file from the process application. In this case, changes are detected which means that a new deployment must be created.deployment-2
, containing the updated invoice.bpmn process.deployment-2
AND the existing deployment deployment-1
.The resuming of the previous deployment (deployment-1) is a feature called resumePreviousVersions
and is activated by default. If you want to deactivate this feature, you have to set the property to false
in processes.xml file:
<process-application
xmlns="http://www.camunda.org/schema/1.0/ProcessApplication"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<process-archive name="loan-approval">
...
<properties>
...
<property name="isResumePreviousVersions">false</property>
</properties>
</process-archive>
</process-application>
The process engine supports defining two types of event listeners: Task Event Listeners and Execution Event Listeners. Task Event listeners allow to react to Task Events (Task are Created, Assigned, Completed). Execution Listeners allow to react to events fired as execution progresses to the diagram: Activities are Started, Ended and Transitions are being taken.
When using the Process Application API, the process engine makes sure that Events are delegated to the right Process Application. For example, assume there is a Process Application deployed as "invoice.war" which deploys a process definition named "invoice". The invoice process has a task named "archive invoice". The application "invoice.war" further provides a Java Class implementing the ExecutionListener interface and is configured to be invoked whenever the END event is fired on the "archive invoice" activity. The process engine makes sure that the event is delegated to the listener class located inside the process application:
On top of the Execution and Task Listeners which are explicitly configured in the BPMN 2.0 Xml, the process application API supports defining a global ExecutionListener and a global TaskListener which are notified about all events happening in the processes deployed by a process application:
@ProcessApplication
public class InvoiceProcessApplication extends ServletProcessApplication {
public TaskListener getTaskListener() {
return new TaskListener() {
public void notify(DelegateTask delegateTask) {
// handle all Task Events from Invoice Process
}
};
}
public ExecutionListener getExecutionListener() {
return new ExecutionListener() {
public void notify(DelegateExecution execution) throws Exception {
// handle all Execution Events from Invoice Process
}
};
}
}
In order to use the global Process Application Event Listeners, you need to activate the corresponding Process Engine Plugin:
<process-engine name="default">
...
<plugins>
<plugin>
<class>org.camunda.bpm.application.impl.event.ProcessApplicationEventListenerPlugin</class>
</plugin>
</plugins>
</process-engine>
Note that the plugin is activated by default in the pre-packaged camunda BPM distributions.
The Process Application Event Listener interface is also a good place for adding the CdiEventListener bridge if you want to use Cdi Events with in combination with the shared process engine.
We provide several project templates for Maven, which are also called Archetypes. They enable a quickstart for developing process applications using the camunda-BPM-platform.
The following archetypes are currently provided. They are distributed via our Maven repository: https://app.camunda.com/nexus/content/repositories/camunda-bpm/
Archetype | Description |
---|---|
Process Application (EJB, WAR) | Process application that uses a shared camunda BPM engine in a Java EE Container, e.g. JBoss AS7. Contains: camunda EJB client, camunda CDI Integration, BPMN Process, Java Delegate as CDI bean, JSF-based start and task forms, configuration for JPA (Hibernate), JUnit Test with in-memory engine, Arquillian Test for JBoss AS7, Ant build script for one-click deployment in Eclipse |
Process Application (Servlet, WAR) | Process application that uses a shared camunda BPM engine in a Servlet Container, e.g. Apache Tomcat. Contains: Servlet Process Application, BPMN Process, Java Delegate, HTML5-based start and task forms, JUnit Test with in-memory engine, Arquillian Test for JBoss AS7, Ant build script for one-click deployment in Eclipse |
Add archetype catalog (Preferences -> Maven -> Archetypes -> Add Remote Catalog):
https://app.camunda.com/nexus/content/repositories/camunda-bpm/
Enter the following URL and description, click on Verify... to test the connection and if that worked on OK to save the catalog.
Catalog File: https://app.camunda.com/nexus/content/repositories/camunda-bpm/
Description: camunda BPM platform
Now you should be able to use the archetypes when creating a new Maven project in Eclipse:
The resulting project should look like this:
Sometimes, the creation of the very first Maven project fails in Eclipse. If that happens to you, just try it again. Most of the times the second try works. If the problem persists, contact us.
Run the following command in a terminal to generate a project. Maven will allow you to select an archetype and ask you for all parameters needed to configure it:
mvn archetype:generate -Dfilter=org.camunda.bpm.archetype: -DarchetypeCatalog=https://app.camunda.com/nexus/content/repositories/camunda-bpm
The following command completely automates the project generation an can be used in shellscipts or Ant builds:
mvn archetype:generate \ -DinteractiveMode=false \ -DarchetypeRepository=https://app.camunda.com/nexus/content/repositories/camunda-bpm \ -DarchetypeGroupId=org.camunda.bpm.archetype \ -DarchetypeArtifactId=camunda-archetype-ejb-war \ -DarchetypeVersion=7.0.0 \ -DgroupId=org.example.camunda.bpm \ -DartifactId=camunda-bpm-ejb-project \ -Dversion=0.0.1-SNAPSHOT \ -Dpackage=org.example.camunda.bpm.ejb
You can also customize the project templates for your own technology stack. Just fork them on GitHub!
To inspect the current state of configured process engines and deployed process applications, the class org.camunda.bpm.BpmPlatform
offers access to the ProcessEngineService
and the ProcessApplicationService
.
The ProcessEngineService can be accessed by calling BpmPlatform.getProcessEngineService()
. It offers access to the default process engine, as well as any process engine by its name as specified in the process engine configuration. It returns ProcessEngine
objects from which any services for a specific engine can be accessed.
The ProcessApplicationService is accessible via BpmPlatform.getProcessApplicationService()
. It provides details on the process application deployments made on the application server it is running on. That means that it does not provide a global view across all nodes in a cluster.
Given a process application name, a ProcessApplicationInfo
object can be retrieved that contains details on the deployments made by this process application. These correspond to the process archives declared in processes.xml.
Furthermore, application-specific properties can be retrieved such as the servlet context path in case of a servlet process application.
The BPM Platform Services (i.e. Process Engine Service and Process Application Service) are provided via JNDI Bindings with the following JNDI names:
java:global/camunda-bpm-platform/process-engine/ProcessEngineService!org.camunda.bpm.ProcessEngineService
java:global/camunda-bpm-platform/process-engine/ProcessApplicationService!org.camunda.bpm.ProcessApplicationService
On Glassfish 3.1.1 and on JBoss AS 7 you can do a lookup with the JNDI names to get one of these BPM Platform Services. However, on Apache Tomcat 7 you have to do quite a bit more to be able to do a lookup to get one of these BPM Platform Services.
To use the JNDI Bindings for Bpm Platform Services on Apache Tomcat 7 you have to add the file META-INF/context.xml
to your process application and add the following ResourceLinks:
<Context>
<ResourceLink name="ProcessEngineService"
global="global/camunda-bpm-platform/process-engine/ProcessEngineService!org.camunda.bpm.ProcessEngineService"
type="org.camunda.bpm.ProcessEngineService" />
<ResourceLink name="ProcessApplicationService"
global="global/camunda-bpm-platform/process-engine/ProcessApplicationService!org.camunda.bpm.ProcessApplicationService"
type="org.camunda.bpm.ProcessApplicationService" />
</Context>
These elements are used to create a link to the global JNDI Resources defined in $TOMCAT_HOME/conf/server.xml
.
Furthermore, declare the dependency on the JNDI binding inside the WEB-INF/web.xml
deployment descriptor.
<web-app>
<resource-ref>
<description>Process Engine Service</description>
<res-ref-name>ProcessEngineService</res-ref-name>
<res-type>org.camunda.bpm.ProcessEngineService</res-type>
<res-auth>Container</res-auth>
</resource-ref>
<resource-ref>
<description>Process Application Service</description>
<res-ref-name>ProcessApplicationService</res-ref-name>
<res-type>org.camunda.bpm.ProcessApplicationService</res-type>
<res-auth>Container</res-auth>
</resource-ref>
...
</web-app>
Note: You can choose different resource link names for the Process Engine Service and Process Application Service. The resource link name has to match the value inside the <res-ref-name>
-element inside the corresponding <resource-ref>
-element in WEB-INF/web.xml
. We propose the name ProcessEngineService
for the Process Engine Service and ProcessApplicationService
for the Process Application Service.
In order to do a lookup for a Bpm Platform Service you have to use the resource link name to get the linked global resource. For example:
java:comp/env/ProcessEngineService
java:comp/env/ProcessApplicationService
If you have declared other resource link names than we proposed, you have to use java:comp/env/$YOUR_RESOURCE_LINK_NAME
to do a lookup to get the corresponding Bpm Platform Service.
Distribution & Installation Guide
If you download a pre-packaged distribution from camunda.org, the camunda JBoss subsystem is readily installed into the application server
Read the installation guide in order to learn how to install the camunda JBoss subsystem into your JBoss Server.
camunda BPM provides advanced integration for JBoss Application Server 7 in the form of a custom JBoss AS 7 Subsystem.
The most prominent features are:
Using the camunda JBoss AS 7 Subsystem, it is possible to configure and manage the process engine through the JBoss Management Model. The most straightforward way is to add the process engine configuration to the standalone.xml
file of the JBoss Server:
<subsystem xmlns="urn:org.camunda.bpm.jboss:1.1">
<process-engines>
<process-engine name="default" default="true">
<datasource>java:jboss/datasources/ProcessEngine</datasource>
<history-level>full</history-level>
<properties>
<property name="jobExecutorAcquisitionName">default</property>
<property name="isAutoSchemaUpdate">true</property>
<property name="authorizationEnabled">true</property>
</properties>
</process-engine>
</process-engines>
<job-executor>
<thread-pool-name>job-executor-tp</thread-pool-name>
<job-acquisitions>
<job-acquisition name="default">
<acquisition-strategy>SEQUENTIAL</acquisition-strategy>
<properties>
<property name="lockTimeInMillis">300000</property>
<property name="waitTimeInMillis">5000</property>
<property name="maxJobsPerAcquisition">3</property>
</properties>
</job-acquisition>
</job-acquisitions>
</job-executor>
</subsystem>
It should be easy to see that the configuration consists of a single process engine which uses the Datasource java:jboss/datasources/ProcessEngine
and is configured to be the default
process engine. In addition, the Job Executor currently uses a single Job Acquisition also named default.
If you start up your JBoss AS 7 server with this configuration, it will automatically create the corresponding services and expose them through the management model.
It is possible to provide a custom Process Engine Configuration class on JBoss AS 7. To this extent, provide the fully qualified classname of the class in the standalone.xml
file:
<process-engine name="default" default="true">
<datasource>java:jboss/datasources/ProcessEngine</datasource>
<configuration>org.my.custom.ProcessEngineConfiguration</configuration>
<history-level>full</history-level>
<properties>
<property name="myCustomProperty">true</property>
<property name="lockTimeInMillis">300000</property>
<property name="waitTimeInMillis">5000</property>
</properties>
</process-engine>
The class org.my.custom.ProcessEngineConfiguration
must be a subclass of org.camunda.bpm.engine.impl.cfg.JtaProcessEngineConfiguration
.
The properties map can be used for invoking primitive valued setters (Integer, String, Boolean) that follow the Java Bean conventions. In the case of the example above, the class would provide a method named
public void setMyCustomProperty(boolean boolean) {
...
}
Module dependency of custom configuration class
If you configure the process engine in standalone.xml
and provide a custom configuration class packaged inside an own module, the camunda-jboss-subsystem module needs to have a module dependency on the module providing the class.
If you fail to do this, you will see the following error log:
Caused by: org.camunda.bpm.engine.ProcessEngineException: Could not load 'foo.bar': the class must be visible from the camunda-jboss-subsystem module. at org.camunda.bpm.container.impl.jboss.service.MscManagedProcessEngineController.createProcessEngineConfiguration(MscManagedProcessEngineController.java:187) [camunda-jboss-subsystem-7.0.0-alpha8.jar:] at org.camunda.bpm.container.impl.jboss.service.MscManagedProcessEngineController.startProcessEngine(MscManagedProcessEngineController.java:138) [camunda-jboss-subsystem-7.0.0-alpha8.jar:] at org.camunda.bpm.container.impl.jboss.service.MscManagedProcessEngineController$3.run(MscManagedProcessEngineController.java:126) [camunda-jboss-subsystem-7.0.0-alpha8.jar:]
It is possible to extend a process engine using the process engine plugins concept.
You specify the process engine plugins in standalone.xml
/ domain.xml
for each process engine separately as shown below:
<subsystem xmlns="urn:org.camunda.bpm.jboss:1.1">
<process-engines>
<process-engine name="default" default="true">
<datasource>java:jboss/datasources/ProcessEngine</datasource>
<history-level>full</history-level>
<properties>
...
</properties>
<plugins>
<plugin>
<class>org.camunda.bpm.engine.MyCustomProcessEnginePlugin</class>
<properties>
<property name="boost">10</property>
<property name="maxPerformance">true</property>
<property name="actors">akka</property>
</properties>
</plugin>
</plugins>
</process-engine>
</process-engines>
...
</subsystem>
You have to provide the fully qualified classname between the <class>
tags. Additional properties can be specified using the <properties>
element.
The restrictions, which apply for providing a custom process engine configuration class, are also valid for the process engine plugins:
The camunda JBoss subsystem provides the same JNDI bindings for the ProcessApplicationService and the ProcessEngineService as provided on other containers. In addition, the camunda JBoss subsystem creates JNDI Bindings for all managed process engines, allowing us to look them up directly.
The global JNDI bindings for process engines follow the pattern
java:global/camunda-bpm-platform/process-engine/$PROCESS_ENGINE_NAME
If a process engine is named "engine1", it will be available using the name java:global/camunda-bpm-platform/process-engine/engine1
.
Note that when looking up the process engine, using a declarative mechanism (like @Resource
or referencing the resource in a deployment descriptor) is preferred over a programmatic way. The declarative mechanism makes the application server aware of our dependency on the process engine service and allows it to manage that dependency for us. See also: Managing Service Dependencies.
A declarative mechanism like @Resource
could be
@Resource(mappedName = "java:global/camunda-bpm-platform/process-engine/$PROCESS_ENGINE_NAME"
Looking up a Process Engine from JNDI using Spring
On JBoss AS 7, spring users should always create a resource-ref for the process engine in web.xml and then lookup the local name in the java:comp/env/
namespace. For an example, see this Quickstart.
In oder to inspect and change the management model, we can use one of the multiple JBoss Management clients available.
It is possible to inspect the configuration using the CLI (Command Line Interface, jboss-cli.bat/sh):
You are disconnected at the moment. Type 'connect' to connect to the server or 'help' for the list of supported commands. [disconnected /] connect [standalone@localhost:9999 /] cd /subsystem=camunda-bpm-platform [standalone@localhost:9999 subsystem=camunda-bpm-platform] :read-resource(recursive=true) { "outcome" => "success", "result" => { "job-executor" => {"default" => { "thread-pool-name" => "job-executor-tp", "job-acquisitions" => {"default" => { "acquisition-strategy" => "SEQUENTIAL", "name" => "default", "properties" => { "lockTimeInMillis" => "300000", "waitTimeInMillis" => "5000", "maxJobsPerAcquisition" => "3" } }} }}, "process-engines" => {"default" => { "configuration" => "org.camunda.bpm.container.impl.jboss.config.ManagedJtaProcessEngineConfiguration", "datasource" => "java:jboss/datasources/ProcessEngine", "default" => true, "history-level" => "full", "name" => "default", "properties" => { "jobExecutorAcquisitionName" => "default", "isAutoSchemaUpdate" => "true" } }} } }
Once the process engine is registered in the JBoss Management Model, it is possible to control it thorough the management API. For example, you can stop it through the CLI:
[standalone@localhost:9999 subsystem=camunda-bpm-platform] cd process-engines=default [standalone@localhost:9999 process-engines=default] :remove {"outcome" => "success"}
This removes the process engine and all dependent services. This means that if you remove a process engine the application server will stop all deployed applications which use the process engine.
Declaring Service Dependencies
In order for this to work, but also in order to avoid race conditions at deployment time, it is necessary that each application explicitly declares dependencies on the process engines it is using. Learn how to declare dependencies
It is also possible to start a new process engine at runtime:
[standalone@localhost:9999 subsystem=camunda-bpm-platform] /subsystem=camunda-bpm-platform/process-engines=my-process-engine/:add(name=my-process-engine,datasource=java:jboss/datasources/ExampleDS) {"outcome" => "success"}
One of the nice features of the JBoss AS 7 Management System is that it will
standalone.xml
/ domain.xml
such that it is available when the server is restarted.In some cases, you may find it more convenient to use the JBoss JConsole extension for starting a process engine.
The JConsole plugin allows you to inspect the management model graphically and build operations using a wizard. In order to start the JBoss JConsole plugin, start the jconsole.bat/sh file provided in the JBoss distribution. More Information in the JBoss Docs.
Implicit module dependencies
Classpath dependencies are automatically managed for you if you use the Process Application API.
When using the camunda BPM JBoss AS subsystem, the process engine classes are deployed as jboss module. The module is named
org.camunda.bpm.camunda-engine
and is deployed in the folder $JBOSS_HOME/modules/org/camunda/bpm/camunda-engine
.
By default, the Application server will not add this module to the classpath of applications.If an application needs to interact with the process engine, we must declare a module dependency in the application. This can be achieved using either an implicit or an explicit module dependency.
When using the Process Application API (ie. when deploying either a ServletProcessApplication or an EjbProcessApplication), the camunda JBoss Subsystem will detect the @ProcessApplication
class in the deployment and automatically add a module dependency between the application and the process engine module. As a result, we don't have to declare the dependency ourselves. It is called an implicit module dependency because it is not explicitly declared but can be derived by inspecting the application and seeing that it provides a @ProcessApplication
class.
If an application does not use the process application API but still needs the process engine classes to be added to its classpath, an explicit module dependency is required. JBoss AS 7 has different mechanisms for achieving this. The simplest way is to add a manifest entry to the MANIFEST.MF file of the deployment. The following example illustrates how to generate such a dependency using the maven WAR plugin:
<build>
...
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<configuration>
<archive>
<manifestEntries>
<Dependencies>org.camunda.bpm.camunda-engine</Dependencies>
</manifestEntries>
</archive>
</configuration>
</plugin>
</plugins>
</build>
As a result, the Application Service will add the process engine module to the classpath of the application.
Implicit service dependencies
Service dependencies are automatically managed for you if you use the Process Application API.
The camunda JBoss subsystem manages process engines as JBoss Services in the JBoss Module Service Container. In order for the Module Service Container to provide the process engine service(s) to the deployed applications, it is important that the dependencies are known. Consider the following example:
There are three applications deployed and two process engine services exist. Application 1 and Application 2 are using Process Engine 1 and Application 3 is using Process Engine 2.
When using the Process Application API (ie. when deploying either a ServletProcessApplication or an EjbProcessApplication), the camunda JBoss Subsystem will detect the @ProcessApplication
class in the deployment and automatically add a service dependency between the process application component and the process engine module. This makes sure the process engine is available when the process application is deployed.
If an application does not use the process application API but still needs to interact with a process engine, it is important to declare the dependency on the process engine service explicitly. If we fail to declare the dependency, there is no guarantee that the process engine is available to the application.
The simplest way to add an explicit dependency on the process engine is to bind the process engine in application's local naming space. For instance, we can add the following resource reference to the web.xml
file of a web application:
<resource-ref>
<res-ref-name>processEngine/default</res-ref-name>
<res-type>org.camunda.bpm.engine.ProcessEngine</res-type>
<mapped-name>java:global/camunda-bpm-platform/process-engine/default</mapped-name>
</resource-ref>
This way, the global process engine resource java:global/camunda-bpm-platform/process-engine/default
is available locally under the name processEngine/default
. Since the application server is aware of this dependency, it will make sure the process engine service exists before starting the application and it will stop the application if the process engine is removed.
The same effect can be achieved using the @Resource Annotation:
@Stateless
public class PaComponent {
@Resource(mappedName="java:global/camunda-bpm-platform/process-engine/default")
private ProcessEngine processEngine;
@Produces
public ProcessEngine getProcessEngine() {
return processEngine;
}
}
The camunda-engine spring framework integration is located inside the camunda-engine-spring module and can be added to apache maven-based projects through the following dependency:
<dependency>
<groupId>org.camunda.bpm</groupId>
<artifactId>camunda-engine-spring</artifactId>
<version>${camunda.version}</version>
</dependency>
The camunda-engine-spring
artifact should be added as a library to the process application.
You can use a Spring application context XML file for bootstrapping the process engine. You can bootstrap both application-managed and container-managed process engines through Spring.
The ProcessEngine can be configured as a regular Spring bean. The starting point of the integration is the class org.camunda.bpm.engine.spring.ProcessEngineFactoryBean
. That bean takes a process engine configuration and creates the process engine. This means that the creation and configuration of properties for Spring is the same as documented in the configuration section. For Spring integration the configuration and engine beans will look like this:
<bean id="processEngineConfiguration"
class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">
...
</bean>
<bean id="processEngine"
class="org.camunda.bpm.engine.spring.ProcessEngineFactoryBean">
<property name="processEngineConfiguration" ref="processEngineConfiguration" />
</bean>
Note that the processEngineConfiguration bean uses the SpringProcessEngineConfiguration class.
If you want the process engine to be registered with the BpmPlatform ProcessEngineService, you must use org.camunda.bpm.engine.spring.container.ManagedProcessEngineFactoryBean instead of the ProcessEngineFactoryBean shown in the example above. In that case the constructed process engine object is registered with the BpmPlatform and can be referenced for creating process application deployments and exposed through the runtime container integration.
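A minimal sketch of such a bean declaration, assuming it is wired in the same way as the ProcessEngineFactoryBean shown above:
<bean id="processEngine" class="org.camunda.bpm.engine.spring.container.ManagedProcessEngineFactoryBean">
  <property name="processEngineConfiguration" ref="processEngineConfiguration" />
</bean>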
We'll explain the SpringTransactionIntegrationTest
found in the Spring examples of the distribution step by step. Below is the Spring configuration file that we use in this example (you can find it in SpringTransactionIntegrationTest-context.xml
). The section shown below contains the dataSource
, transactionManager
, processEngine
and the process engine services.
When passing the DataSource to the SpringProcessEngineConfiguration
(using property "dataSource"), the camunda engine uses a org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy
internally, which wraps the passed DataSource. This is done to make sure the SQL connections retrieved from the DataSource and the Spring transactions play well together. This implies that you no longer need to proxy the dataSource yourself in the Spring configuration, although you are still allowed to pass a TransactionAwareDataSourceProxy into the SpringProcessEngineConfiguration. In this case no additional wrapping will occur.
When declaring a TransactionAwareDataSourceProxy in your Spring configuration yourself, make sure that you do not use it for resources that are already aware of Spring transactions (e.g. DataSourceTransactionManager and JPATransactionManager need the un-proxied dataSource).
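To make this concrete, here is a minimal sketch (bean names are illustrative): the proxy wraps the raw dataSource, while the transaction manager keeps the un-proxied dataSource.
<bean id="transactionAwareDataSource" class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy">
  <property name="targetDataSource" ref="dataSource" />
</bean>
<!-- the transaction manager must reference the un-proxied dataSource -->
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
  <property name="dataSource" ref="dataSource" />
</bean>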
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:tx="http://www.springframework.org/schema/tx"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd">
<bean id="dataSource" class="org.springframework.jdbc.datasource.SimpleDriverDataSource">
<property name="driverClass" value="org.h2.Driver" />
<property name="url" value="jdbc:h2:mem:camunda;DB_CLOSE_DELAY=1000" />
<property name="username" value="sa" />
<property name="password" value="" />
</bean>
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
<bean id="processEngineConfiguration" class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">
<property name="dataSource" ref="dataSource" />
<property name="transactionManager" ref="transactionManager" />
<property name="databaseSchemaUpdate" value="true" />
<property name="jobExecutorActivate" value="false" />
</bean>
<bean id="processEngine" class="org.camunda.bpm.engine.spring.ProcessEngineFactoryBean">
<property name="processEngineConfiguration" ref="processEngineConfiguration" />
</bean>
<bean id="repositoryService" factory-bean="processEngine" factory-method="getRepositoryService" />
<bean id="runtimeService" factory-bean="processEngine" factory-method="getRuntimeService" />
<bean id="taskService" factory-bean="processEngine" factory-method="getTaskService" />
<bean id="historyService" factory-bean="processEngine" factory-method="getHistoryService" />
<bean id="managementService" factory-bean="processEngine" factory-method="getManagementService" />
...
</beans>
The remainder of that Spring configuration file contains the beans and configuration that we'll use in this particular example:
<beans>
...
<tx:annotation-driven transaction-manager="transactionManager"/>
<bean id="userBean" class="org.camunda.bpm.engine.spring.test.UserBean">
<property name="runtimeService" ref="runtimeService" />
</bean>
<bean id="printer" class="org.camunda.bpm.engine.spring.test.Printer" />
</beans>
First, the application context is created using any of the usual Spring mechanisms. In this example, you could use a classpath XML resource to configure the Spring application context:
ClassPathXmlApplicationContext applicationContext =
new ClassPathXmlApplicationContext("mytest/SpringTransactionIntegrationTest-context.xml");
or, since it is a test:
@ContextConfiguration("classpath:mytest/SpringTransactionIntegrationTest-context.xml")
Then we can get the service beans and invoke methods on them. The ProcessEngineFactoryBean will have added an extra interceptor to the services that applies Propagation.REQUIRED transaction semantics on the engine service methods. So, for example, we can use the repositoryService to deploy a process like this:
RepositoryService repositoryService = (RepositoryService) applicationContext.getBean("repositoryService");
String deploymentId = repositoryService
.createDeployment()
.addClasspathResource("mytest/hello.bpmn20")
.addClasspathResource("mytest/hello.png")
.deploy()
.getId();
The other way around also works. In this case, the Spring transaction will be around the userBean.hello() method and the engine service method invocation will join that same transaction.
UserBean userBean = (UserBean) applicationContext.getBean("userBean");
userBean.hello();
The UserBean looks like this. Remember, in the Spring bean configuration above we injected the runtimeService into the userBean.
public class UserBean {
// injected by Spring
private RuntimeService runtimeService;
@Transactional
public void hello() {
// here you can do transactional stuff in your domain model
// and it will be combined in the same transaction as
// the startProcessInstanceByKey to the RuntimeService
runtimeService.startProcessInstanceByKey("helloProcess");
}
public void setRuntimeService(RuntimeService runtimeService) {
this.runtimeService = runtimeService;
}
}
Spring integration also has a special feature for deploying resources. In the process engine configuration, you can specify a set of resources. When the process engine is created, all those resources will be scanned and deployed. There is filtering in place that prevents duplicate deployments: new deployments are only created if the resources have actually changed. This makes sense in a lot of use cases where the Spring container is rebooted frequently (e.g. testing).
Here's an example:
<bean id="processEngineConfiguration" class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">
...
<property name="deploymentResources" value="classpath*:/mytest/autodeploy.*.bpmn20" />
<property name="deploymentResources">
<list>
<value>classpath*:/mytest/autodeploy.*.bpmn20</value>
<value>classpath*:/mytest/autodeploy.*.png</value>
</list>
</property>
</bean>
<bean id="processEngine" class="org.camunda.bpm.engine.spring.ProcessEngineFactoryBean">
<property name="processEngineConfiguration" ref="processEngineConfiguration" />
</bean>
When using the ProcessEngineFactoryBean, by default, all expressions in the BPMN processes will also 'see' all the Spring beans. It is possible to limit the beans visible in expressions, or to expose no beans at all, using a map that you can configure. The example below exposes a single bean (printer), available under the key "printer". To expose no beans at all, pass an empty list as the 'beans' property on the SpringProcessEngineConfiguration. When no 'beans' property is set, all Spring beans in the context will be available.
<bean id="processEngineConfiguration"
class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">
...
<property name="beans">
<map>
<entry key="printer" value-ref="printer" />
</map>
</property>
</bean>
<bean id="printer" class="org.camunda.bpm.engine.spring.test.transaction.Printer" />
Now the exposed beans can be used in expressions: for example, the SpringTransactionIntegrationTest hello.bpmn20.xml shows how a method on a Spring bean can be invoked using a UEL method expression:
<definitions id="definitions" ...>
<process id="helloProcess">
<startEvent id="start" />
<sequenceFlow id="flow1" sourceRef="start" targetRef="print" />
<serviceTask id="print" camunda:expression="#{printer.printMessage()}" />
<sequenceFlow id="flow2" sourceRef="print" targetRef="end" />
<endEvent id="end" />
</process>
</definitions>
Where Printer looks like this:
public class Printer {
public void printMessage() {
System.out.println("hello world");
}
}
And the Spring bean configuration (also shown above) looks like this:
<beans ...>
...
<bean id="printer" class="org.camunda.bpm.engine.spring.test.transaction.Printer" />
</beans>
In a shared process engine deployment scenario, you have a process engine which dispatches to multiple applications. In this case, there is not a single Spring application context; each application may maintain its own application context. The process engine cannot use a single expression resolver for a single application context but must delegate to the appropriate process application, depending on which process is currently executed.
This functionality is provided by the org.camunda.bpm.engine.spring.application.SpringProcessApplicationElResolver
. This class is a ProcessApplicationElResolver implementation delegating to the local application context. Expression resolving then works in the following way: the shared process engine checks which process application corresponds to the process it is currently executing. It then delegates to that process application for resolving expressions. The process application delegates to the SpringProcessApplicationElResolver which uses the local Spring application context for resolving beans.
The SpringProcessApplicationElResolver
class is automatically detected if the camunda-engine-spring
module is included as a library of the process application, not as a global library.
When integrating with Spring, business processes can be tested very easily (in scope 2, see Testing Scopes) using the standard camunda testing facilities. The following example shows how a business process is tested in a typical Spring-based unit test:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:org/camunda/bpm/engine/spring/test/junit4/springTypicalUsageTest-context.xml")
public class MyBusinessProcessTest {
@Autowired
private RuntimeService runtimeService;
@Autowired
private TaskService taskService;
@Autowired
@Rule
public ProcessEngineRule processEngineRule;
@Test
@Deployment
public void simpleProcessTest() {
runtimeService.startProcessInstanceByKey("simpleProcess");
Task task = taskService.createTaskQuery().singleResult();
assertEquals("My Task", task.getName());
taskService.complete(task.getId());
assertEquals(0, runtimeService.createProcessInstanceQuery().count());
}
}
Note that for this to work, you need to define a ProcessEngineRule bean in the Spring configuration (which is injected by auto-wiring in the example above).
<bean id="processEngineRule" class="org.camunda.bpm.engine.test.ProcessEngineRule">
<property name="processEngine" ref="processEngine" />
</bean>
The camunda-engine-cdi module provides programming model integration with Cdi (Contexts and Dependency Injection), the Java EE 6 standard for dependency injection. The camunda-engine-cdi integration leverages both the configuration of the camunda engine and the extensibility of Cdi. The most prominent features are described in the following sections.
In order to use the camunda-engine-cdi module inside your application, you must include the following Maven dependency:
<dependency>
<groupId>org.camunda.bpm</groupId>
<artifactId>camunda-engine-cdi</artifactId>
<version>7.x</version>
</dependency>
Replace 'x' with your camunda BPM version.
You can also use the Maven archetype camunda-archetype-ejb-war, which gives you a complete running project including the Cdi integration.
Documentation for this part has yet to be written.
The process engine transaction management can integrate with JTA. In order to use the JTA transaction manager integration, you need to use
org.camunda.bpm.engine.impl.cfg.JtaProcessEngineConfiguration for JTA integration only, or
org.camunda.bpm.engine.cdi.CdiJtaProcessEngineConfiguration for additional Cdi expression resolving support.
Note 1: The shared process engine distributions for Java EE Application Servers (Wildfly, JBoss, Glassfish, IBM Websphere Application Server, Oracle Weblogic Application Server) provide JTA integration out of the box.
Note 2: The process engine requires access to an implementation of
javax.transaction.TransactionManager
. Not all application servers provide such an implementation. Most notably WebSphere and Weblogic historically did not provide this implementation. In order to achieve JTA Transaction Integration on these containers, users should use the Spring Framework Abstraction and configure the process engine using the SpringProcessEngineConfiguration.
The camunda-engine-cdi
library exposes Cdi beans via Expression Language, using a custom resolver. This makes it possible to reference beans from the process:
<userTask id="authorizeBusinessTrip" name="Authorize Business Trip"
camunda:assignee="#{authorizingManager.account.username}" />
Where "authorizingManager" could be a bean provided by a producer method:
@Inject
@ProcessVariable
private Object businessTripRequesterUsername;
@Produces
@Named
public Employee authorizingManager() {
TypedQuery<Employee> query = entityManager.createQuery("SELECT e FROM Employee e WHERE e.account.username='"
+ businessTripRequesterUsername + "'", Employee.class);
Employee employee = query.getSingleResult();
return employee.getManager();
}
We can use the same feature to call a business method of an EJB in a service task, using the camunda:expression="myEjb.method()"
-extension.
Note that this requires a @Named
-annotation on the MyEjb-class.
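A minimal sketch of what this can look like; the bean name myEjb, the method doWork() and the task id are illustrative:
<serviceTask id="processRequest" camunda:expression="#{myEjb.doWork()}" />

@Named
@Stateless
public class MyEjb {
  public void doWork() {
    // business logic invoked by the service task expression
  }
}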
In this section we briefly look at the contextual process execution model used by the camunda-engine-cdi extension. A BPMN business process is typically a long-running interaction, comprised of both user and system tasks. At runtime, a process is split up into a set of individual units of work, performed by users and/or application logic. In camunda-engine-cdi, a process instance can be associated with a Cdi scope, the association representing a unit of work. This is particularly useful if a unit of work is complex, for instance if the implementation of a UserTask is a complex sequence of different forms and "non-process-scoped" state needs to be kept during this interaction. In the default configuration, process instances are associated with the "broadest" active scope, starting with the conversation and falling back to the request if the conversation context is not active.
When resolving @BusinessProcessScoped beans or injecting process variables, we rely on an existing association between an active Cdi scope and a process instance. camunda-engine-cdi provides the org.camunda.bpm.engine.cdi.BusinessProcess bean for controlling the association.
Once a unit of work (for example a UserTask) is completed, the completeTask() method can be called to disassociate the conversation/request from the process instance. This signals to the engine that the current task is completed and makes the process instance proceed.
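The following sketch illustrates the association API, assuming the BusinessProcess methods setVariable(), startProcessByKey() and completeTask(); the process key is illustrative:
public class BusinessTripController {
  @Inject
  private BusinessProcess businessProcess;

  public void begin() {
    // associate the current conversation/request with a new process instance
    businessProcess.setVariable("requester", "kermit");
    businessProcess.startProcessByKey("authorizeBusinessTripRequest");
  }

  public void finish() {
    // disassociate the conversation/request and signal the engine to proceed
    businessProcess.completeTask();
  }
}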
Note that the BusinessProcess-bean is a @Named bean, which means that the exposed methods can be invoked using expression language, for example from a JSF page. The following JSF2 snippet begins a new conversation and associates it with a user task instance, the id of which is passed as a request parameter (e.g. pageName.jsf?taskId=XX
):
<f:metadata>
<f:viewParam name="taskId" />
<f:event type="preRenderView" listener="#{businessProcess.startTask(taskId, true)}" />
</f:metadata>
camunda-engine-cdi allows you to declaratively start process instances and complete tasks using annotations. The @org.camunda.bpm.engine.cdi.annotation.StartProcess annotation allows you to start a process instance either by "key" or by "name". Note that the process instance is started after the annotated method returns. Example:
@StartProcess("authorizeBusinessTripRequest")
public String submitRequest(BusinessTripRequest request) {
// do some work
return "success";
}
Depending on the configuration of the camunda engine, the code of the annotated method and the starting of the process instance will be combined in the same transaction. The @org.camunda.bpm.engine.cdi.annotation.CompleteTask
-annotation works in the same way:
@CompleteTask(endConversation=false)
public String authorizeBusinessTrip() {
// do some work
return "success";
}
The @CompleteTask
annotation offers the possibility to end the current conversation. The default behavior is to end the conversation after the call to the engine returns. Ending the conversation can be disabled, as shown in the example above.
Using camunda-engine-cdi, the lifecycle of a bean can be bound to a process instance. To this end, a custom context implementation is provided, namely the BusinessProcessContext. Instances of BusinessProcessScoped beans are stored as process variables in the current process instance. BusinessProcessScoped beans need to be PassivationCapable (for example, Serializable). The following is an example of a process scoped bean:
@Named
@BusinessProcessScoped
public class BusinessTripRequest implements Serializable {
private static final long serialVersionUID = 1L;
private String startDate;
private String endDate;
// ...
}
Sometimes we want to work with process scoped beans in the absence of an association with a process instance, for example before starting a process. If no process instance is currently active, instances of BusinessProcessScoped beans are temporarily stored in a local scope (i.e., the conversation or the request, depending on the context). If this scope is later associated with a business process instance, the bean instances are flushed to the process instance.
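A short sketch of this behavior, combining the BusinessTripRequest bean from above with the @StartProcess annotation (the setter is assumed to exist on the bean):
@Inject
private BusinessTripRequest businessTripRequest;

@StartProcess("authorizeBusinessTripRequest")
public String submitRequest() {
  // no process instance is associated yet: the bean state lives in the
  // conversation/request and is flushed to the process instance once it starts
  businessTripRequest.setStartDate("2014-01-01");
  return "success";
}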
You can inject the ProcessEngine and its services (RepositoryService, TaskService, ...) using @Inject; a specific named engine can be selected with the @ProcessEngineName('someEngine') qualifier.
Process variables are available for injection as well. camunda-engine-cdi supports
type-safe injection of @BusinessProcessScoped beans using @Inject [additional qualifiers] Type fieldName, and
unsafe injection of other process variables using the @ProcessVariable(name?) qualifier:
@Inject
@ProcessVariable
private Object accountNumber;
@Inject
@ProcessVariable("accountNumber")
private Object account;
In order to reference process variables using EL, we have similar options:
@Named @BusinessProcessScoped beans can be referenced directly, and
other process variables can be referenced using the processVariables map, e.g. #{processVariables['accountNumber']}.
While a specific process engine can be accessed by adding the qualifier @ProcessEngineName('name') to the injection point, this requires the engine to be known at design time. A more flexible approach is to resolve the process engine at runtime, based on contextual information such as the logged-in user. In this case, @Inject can be used without a @ProcessEngineName annotation.
To implement resolution from contextual data, the producer bean org.camunda.bpm.engine.cdi.impl.ProcessEngineServicesProducer
must be extended. The following code implements a contextual resolution of the engine by the currently authenticated user. Note that which contextual data is used and how it is accessed is entirely up to you.
@Specializes
public class UserAwareEngineServicesProvider extends ProcessEngineServicesProducer {
// User can be any object containing user information from which the tenant can be determined
@Inject
private UserInfo user;
@Specializes @Produces @RequestScoped
public ProcessEngine processEngine() {
// okay, maybe this should involve some more logic ;-)
String engineForUser = user.getTenant();
ProcessEngine processEngine = BpmPlatform.getProcessEngineService().getProcessEngine(engineForUser);
if(processEngine != null) {
return processEngine;
} else {
return ProcessEngines.getProcessEngine(engineForUser, false);
}
}
@Specializes @Produces @RequestScoped
public RuntimeService runtimeService() {
return processEngine().getRuntimeService();
}
@Specializes @Produces @RequestScoped
public TaskService taskService() {
return processEngine().getTaskService();
}
...
}
The above code makes selecting the process engine based on the current user's tenant completely transparent. For each request, the currently authenticated user is retrieved and the correct process engine is looked up. Note that the class UserInfo
represents any kind of context object that identifies the current tenant. For example, it could be a JAAS principal. The produced engine can be accessed in the following way:
@Inject
private RuntimeService runtimeService;
The process engine can be hooked up to the Cdi event bus. We call this the "Cdi Event Bridge". This allows us to be notified of process events using standard Cdi event mechanisms. In order to enable Cdi event support for an embedded process engine, enable the corresponding parse listener in the configuration:
<property name="postParseListeners">
<list>
<bean class="org.camunda.bpm.engine.cdi.impl.event.CdiEventSupportBpmnParseListener" />
</list>
</property>
Now the engine is configured for publishing events using the Cdi event bus.
Note: The above configuration can be used in combination with an embedded process engine. If you want to use this feature in combination with the shared process engine in a multi application environment, you need to add the CdiEventListener as Process Application event listener. See next section.
The following gives an overview of how process events can be received in Cdi beans. In Cdi, we can declaratively specify event observers using the @Observes-annotation. Event notification is type-safe. The type of process events is org.camunda.bpm.engine.cdi.BusinessProcessEvent. The following is an example of a simple event observer method:
public void onProcessEvent(@Observes BusinessProcessEvent businessProcessEvent) {
// handle event
}
This observer would be notified of all events. If we want to restrict the set of events the observer receives, we can add qualifier annotations:
@BusinessProcessDefinition
: restricts the set of events to a certain process definition. Example:
public void onProcessEvent(@Observes @BusinessProcessDefinition("billingProcess") BusinessProcessEvent businessProcessEvent) {
// handle event
}
@StartActivity
: restricts the set of events by a certain activity. The following method is invoked whenever an activity with the id "shipGoods" is entered:
public void onActivityEvent(@Observes @StartActivity("shipGoods") BusinessProcessEvent businessProcessEvent) {
// handle event
}
@EndActivity
: restricts the set of events by a certain activity. The following method is invoked whenever an activity with the id "shipGoods" is left:
public void onActivityEvent(@Observes @EndActivity("shipGoods") BusinessProcessEvent businessProcessEvent) {
// handle event
}
@TakeTransition
: restricts the set of events by a certain transition.
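By analogy with the other qualifiers, an observer restricted to a transition could look like this (the transition id "flow1" is illustrative):
public void onTransitionEvent(@Observes @TakeTransition("flow1") BusinessProcessEvent businessProcessEvent) {
  // handle event
}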
@CreateTask
: restricts the set of events by a certain task. The following is invoked whenever a task with the definition key (id in BPMN XML) "approveRegistration" is created:
public void onTaskEvent(@Observes @CreateTask("approveRegistration") BusinessProcessEvent businessProcessEvent) {
// handle event
}
@AssignTask
: restricts the set of events by a certain task. The following is invoked whenever a task with the definition key (id in BPMN XML) "approveRegistration" is assigned:
public void onTaskEvent(@Observes @AssignTask("approveRegistration") BusinessProcessEvent businessProcessEvent) {
// handle event
}
@CompleteTask
: restricts the set of events by a certain task. The following is invoked whenever a task with the definition key (id in BPMN XML) "approveRegistration" is completed:
public void onTaskEvent(@Observes @CompleteTask("approveRegistration") BusinessProcessEvent businessProcessEvent) {
// handle event
}
@DeleteTask
: restricts the set of events by a certain task. The following is invoked whenever a task with the definition key (id in BPMN XML) "approveRegistration" is deleted:
public void onTaskEvent(@Observes @DeleteTask("approveRegistration") BusinessProcessEvent businessProcessEvent) {
// handle event
}
The qualifiers named above can be combined freely. For example, in order to receive all events generated when leaving the "shipGoods" activity in the "shipmentProcess", we could write the following observer method:
public void beforeShippingGoods(@Observes @BusinessProcessDefinition("shippingProcess") @EndActivity("shipGoods") BusinessProcessEvent evt) {
// handle event
}
In the default configuration, event listeners are invoked synchronously and in the context of the same transaction. Cdi transactional observers (only available in combination with Java EE / EJB) allow you to control when the event is handed to the observer method. Using transactional observers, we can for example ensure that an observer is only notified if the transaction in which the event is fired succeeds:
public void onShipmentSucceeded(
@Observes(during=TransactionPhase.AFTER_SUCCESS) @BusinessProcessDefinition("shippingProcess") @EndActivity("shipGoods") BusinessProcessEvent evt) {
// send email to customer
}
Note: BusinessProcessEvent.getTask will return an instance of DelegateTask (in case the event is a task event). If the listener is invoked after the transaction has completed, the DelegateTask object cannot be used for modifying variables.
In order to use the Cdi Event Bridge in combination with a multi-application deployment and the shared process engine, the CdiEventListener needs to be added as a Process Application Execution Event Listener.
Example configuration for Servlet Process Application:
@ProcessApplication
public class InvoiceProcessApplication extends ServletProcessApplication {
protected CdiEventListener cdiEventListener = new CdiEventListener();
public ExecutionListener getExecutionListener() {
return cdiEventListener;
}
public TaskListener getTaskListener() {
return cdiEventListener;
}
}
Example configuration for Ejb Process Application:
@Singleton
@Startup
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
@TransactionAttribute(TransactionAttributeType.REQUIRED)
@ProcessApplication
@Local(ProcessApplicationInterface.class)
public class MyEjbProcessApplication extends EjbProcessApplication {
protected CdiEventListener cdiEventListener = new CdiEventListener();
@PostConstruct
public void start() {
deploy();
}
@PreDestroy
public void stop() {
undeploy();
}
public ExecutionListener getExecutionListener() {
return cdiEventListener;
}
public TaskListener getTaskListener() {
return cdiEventListener;
}
}
When testing Process Applications you first have to be clear on what scope you want to test. Often Process Applications orchestrate various existing services, which means that a Process Application test quickly becomes an integration test. The following picture shows the scopes we differentiate when testing Process Applications:
Business processes are an integral part of software projects and they should be tested in the same way normal application logic is tested: with unit tests. Since the camunda engine is an embeddable Java engine, writing unit tests for business processes is as simple as writing regular unit tests.
camunda supports both the JUnit 3 and JUnit 4 styles of unit testing. In the JUnit 3 style, the ProcessEngineTestCase must be extended. This will make the ProcessEngine and the services available through protected member fields. In the setUp() of the test, the processEngine will be initialized by default with the camunda.cfg.xml resource on the classpath. To specify a different configuration file, override the getConfigurationResource() method. Process engines are cached statically over multiple unit tests when the configuration resource is the same.
By extending ProcessEngineTestCase, you can annotate test methods with Deployment. Before the test is run, a resource file of the form testClassName.testMethod.bpmn20.xml, in the same package as the test class, will be deployed. At the end of the test the deployment will be deleted, including all related process instances, tasks, etc. The Deployment annotation also supports setting the resource location explicitly. See the Javadocs for more details.
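For example, a test method could point the annotation to a specific file (the path and process key are illustrative):
@Deployment(resources = "org/example/approve-loan.bpmn")
public void testApproveLoan() {
  // the deployment uses the given resource instead of the
  // testClassName.testMethod.bpmn20.xml naming convention
  runtimeService.startProcessInstanceByKey("approveLoan");
}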
Taking all that into account, a JUnit 3 style test looks as follows:
public class MyBusinessProcessTest extends ProcessEngineTestCase {
@Deployment
public void testSimpleProcess() {
runtimeService.startProcessInstanceByKey("simpleProcess");
Task task = taskService.createTaskQuery().singleResult();
assertEquals("My Task", task.getName());
taskService.complete(task.getId());
assertEquals(0, runtimeService.createProcessInstanceQuery().count());
}
}
To get the same functionality when using the JUnit 4 style of writing unit tests, the ProcessEngineRule must be used. Through this rule, the process engine and services are available through getters. As with the ProcessEngineTestCase (see above), including this rule will enable the use of the Deployment annotation (see above for an explanation of its use and configuration) and it will look for the default configuration file on the classpath. Process engines are statically cached over multiple unit tests when using the same configuration resource.
The following code snippet shows an example of using the JUnit 4 style of testing and the usage of the ProcessEngineRule.
public class MyBusinessProcessTest {
@Rule
public ProcessEngineRule processEngineRule = new ProcessEngineRule();
@Test
@Deployment
public void ruleUsageExample() {
RuntimeService runtimeService = processEngineRule.getRuntimeService();
runtimeService.startProcessInstanceByKey("ruleUsage");
TaskService taskService = processEngineRule.getTaskService();
Task task = taskService.createTaskQuery().singleResult();
assertEquals("My Task", task.getName());
taskService.complete(task.getId());
assertEquals(0, runtimeService.createProcessInstanceQuery().count());
}
}
When using the in-memory H2 database for unit tests, the following instructions allow you to easily inspect the engine database during a debugging session. The screenshots here are taken in Eclipse, but the mechanism should be similar for other IDEs.
Suppose we have put a breakpoint somewhere in our unit test. In Eclipse this is done by double-clicking in the left border next to the code:
If we now run the unit test in debug mode (right-click in test class, select 'Run as' and then 'JUnit test'), the test execution halts at our breakpoint, where we can now inspect the variables of our test as shown in the right upper panel.
To inspect the data, open the 'Display' window (if this window isn't there, open Window->Show View->Other and select Display) and type (code completion is available): org.h2.tools.Server.createWebServer("-web").start()
Select the line you've just typed and right-click on it. Now select 'Display' (or execute the shortcut instead of right-clicking).
Now open up a browser and go to http://localhost:8082, and fill in the JDBC URL to the in-memory database (by default this is jdbc:h2:mem:camunda), and hit the connect button.
You can now see the engine database and use it to understand how and why your unit test is executing your process in a certain way.
In Java EE environments we frequently use JBoss Arquillian to test Process Applications, because it makes bootstrapping the engine very simple. We will add more documentation on this soon - for the moment please refer to the Arquillian Getting Started Guide.
The camunda BPMN model API provides a simple and lightweight library for parsing, creating, editing and writing BPMN 2.0 XML files. The model API makes it easy to extract information from an existing process definition or to create a completely new one without manual XML parsing. The BPMN model API is based on a general XML model API which is useful for general XML processing.
Note: Currently the BPMN model API does not fully support the whole BPMN 2.0 specification. The list of already supported BPMN 2.0 elements can be found in the source code package org.camunda.bpm.model.bpmn.instance.
To create a new BPMN model from scratch you have to create an empty BPMN model instance with the following method:
BpmnModelInstance modelInstance = Bpmn.createEmptyModel();
The next step is to create a BPMN definitions element, set the target namespace on it and add it to the newly created empty model instance.
Definitions definitions = modelInstance.newInstance(Definitions.class);
definitions.setTargetNamespace("http://camunda.org/examples");
modelInstance.setDefinitions(definitions);
After that you usually want to add a process to your model. This follows the same 3 steps as the creation of the BPMN definitions element:
Process process = modelInstance.newInstance(Process.class);
process.setId("process");
definitions.addChildElement(process);
To simplify this repetitive procedure you can use a helper method like this one:
protected <T extends BpmnModelElementInstance> T createElement(BpmnModelElementInstance parentElement, String id, Class<T> elementClass) {
T element = modelInstance.newInstance(elementClass);
element.setAttributeValue("id", id, true);
parentElement.addChildElement(element);
return element;
}
After you have created the elements of your process, like the start event, tasks, gateways and end event, you have to connect the elements with sequence flows. Again, this follows the same 3 steps of element creation and can be simplified by the following helper method.
public SequenceFlow createSequenceFlow(Process process, FlowNode from, FlowNode to) {
String identifier = from.getId() + "-" + to.getId();
SequenceFlow sequenceFlow = createElement(process, identifier, SequenceFlow.class);
process.addChildElement(sequenceFlow);
sequenceFlow.setSource(from);
from.getOutgoing().add(sequenceFlow);
sequenceFlow.setTarget(to);
to.getIncoming().add(sequenceFlow);
return sequenceFlow;
}
After you have created your process, you can validate the model against the BPMN 2.0 specification and convert it to an XML string, or save it to a file or stream.
// validate the model
Bpmn.validateModel(modelInstance);
// convert to string
String xmlString = Bpmn.convertToString(modelInstance);
// write to output stream
OutputStream outputStream = [...];
Bpmn.writeModelToStream(outputStream, modelInstance);
// write to file
File file = new File(...);
Bpmn.writeModelToFile(file, modelInstance);
With the basic helper methods from above it is very easy and straightforward to create simple processes. First, create a process with a start event, a user task and an end event.
The following code creates this process using the helper methods from above (without the DI elements).
// create an empty model
BpmnModelInstance modelInstance = Bpmn.createEmptyModel();
Definitions definitions = modelInstance.newInstance(Definitions.class);
definitions.setTargetNamespace("http://camunda.org/examples");
modelInstance.setDefinitions(definitions);
// create the process
Process process = createElement(definitions, "process-with-one-task", Process.class);
// create start event, user task and end event
StartEvent startEvent = createElement(process, "start", StartEvent.class);
UserTask task1 = createElement(process, "task1", UserTask.class);
task1.setName("User Task");
EndEvent endEvent = createElement(process, "end", EndEvent.class);
// create the connections between the elements
createSequenceFlow(process, startEvent, task1);
createSequenceFlow(process, task1, endEvent);
// validate and write model to file
Bpmn.validateModel(modelInstance);
File file = File.createTempFile("bpmn-model-api-", ".bpmn");
Bpmn.writeModelToFile(file, modelInstance);
Even more complex processes can be created with a few lines of code using the standard BPMN model API.
// create an empty model
BpmnModelInstance modelInstance = Bpmn.createEmptyModel();
Definitions definitions = modelInstance.newInstance(Definitions.class);
definitions.setTargetNamespace("http://camunda.org/examples");
modelInstance.setDefinitions(definitions);
// create the process
Process process = createElement(definitions, "process-with-parallel-gateway", Process.class);
// create elements
StartEvent startEvent = createElement(process, "start", StartEvent.class);
ParallelGateway fork = createElement(process, "fork", ParallelGateway.class);
ServiceTask task1 = createElement(process, "task1", ServiceTask.class);
task1.setName("Service Task");
UserTask task2 = createElement(process, "task2", UserTask.class);
task2.setName("User Task");
ParallelGateway join = createElement(process, "join", ParallelGateway.class);
EndEvent endEvent = createElement(process, "end", EndEvent.class);
// create flows
createSequenceFlow(process, startEvent, fork);
createSequenceFlow(process, fork, task1);
createSequenceFlow(process, fork, task2);
createSequenceFlow(process, task1, join);
createSequenceFlow(process, task2, join);
createSequenceFlow(process, join, endEvent);
// validate and write model to file
Bpmn.validateModel(modelInstance);
File file = File.createTempFile("bpmn-model-api-", ".bpmn");
Bpmn.writeModelToFile(file, modelInstance);
If you have already created a BPMN model and want to process it with the BPMN model API, you can import it with the following methods:
// read a model from a file
File file = new File("PATH/TO/MODEL.bpmn");
BpmnModelInstance modelInstance = Bpmn.readModelFromFile(file);
// read a model from a stream
InputStream stream = [...]
BpmnModelInstance modelInstance = Bpmn.readModelFromStream(stream);
After you have imported your model, you can search for elements by their id or by the type of the elements:
// find element instance by ID
StartEvent start = (StartEvent) modelInstance.getModelElementById("start");
// find all elements of the type task
ModelElementType taskType = modelInstance.getModel().getType(Task.class);
Collection<ModelElementInstance> taskInstances = modelInstance.getModelElementsByType(taskType);
For every element instance you can now read and edit the attribute values. You can do this by either using the provided helper methods or the generic XML model API. If you added custom attributes to the BPMN elements you can always access them with the generic XML model API.
StartEvent start = (StartEvent) modelInstance.getModelElementById("start");
// read attributes by helper methods
String id = start.getId();
String name = start.getName();
// edit attributes by helper methods
start.setId("new-id");
start.setName("new name");
// read attributes by generic XML model API (with optional namespace)
String custom1 = start.getAttributeValue("custom-attribute");
String custom2 = start.getAttributeValueNs("custom-attribute-2", "http://camunda.org/custom");
// edit attributes by generic XML model API (with optional namespace)
start.setAttributeValue("custom-attribute", "new value");
start.setAttributeValueNs("custom-attribute", "http://camunda.org/custom", "new value");
You can also access the child elements of an element, or references to other elements. For example, a sequence flow references a source and a target element, while a flow node (like a start event, task etc.) has child elements for incoming and outgoing sequence flows.
For example, the following BPMN model of a simple process was created with the BPMN model API:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<definitions targetNamespace="http://camunda.org/examples" xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL">
<process id="process-with-one-task">
<startEvent id="start">
<outgoing>start-task1</outgoing>
</startEvent>
<userTask id="task1">
<incoming>start-task1</incoming>
<outgoing>task1-end</outgoing>
</userTask>
<endEvent id="end">
<incoming>task1-end</incoming>
</endEvent>
<sequenceFlow id="start-task1" sourceRef="start" targetRef="task1"/>
<sequenceFlow id="task1-end" sourceRef="task1" targetRef="end"/>
</process>
</definitions>
You can now use the BPMN model API to get the source and target flow node of the sequence flow with the ID start-task1.
// read bpmn model from file
BpmnModelInstance modelInstance = Bpmn.readModelFromFile(new File("/PATH/TO/MODEL.bpmn"));
// find sequence flow by id
SequenceFlow sequenceFlow = (SequenceFlow) modelInstance.getModelElementById("start-task1");
// get the source and target element
FlowNode source = sequenceFlow.getSource();
FlowNode target = sequenceFlow.getTarget();
// get all outgoing sequence flows of the source
Collection<SequenceFlow> outgoing = source.getOutgoing();
assert(outgoing.contains(sequenceFlow));
With these references you can easily create helper methods for different use cases. For example, if you want to find the flow nodes following a task or a gateway, you can use a helper method like the following:
public Collection<FlowNode> getFollowingFlowNodes(FlowNode node) {
Collection<FlowNode> followingFlowNodes = new ArrayList<FlowNode>();
for (SequenceFlow sequenceFlow : node.getOutgoing()) {
followingFlowNodes.add(sequenceFlow.getTarget());
}
return followingFlowNodes;
}
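Using the model from above, the helper can be applied like this (the ids are taken from the earlier XML example):
FlowNode task1 = (FlowNode) modelInstance.getModelElementById("task1");
Collection<FlowNode> following = getFollowingFlowNodes(task1);
// "following" now contains the end event with the id "end"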
To create simple BPMN processes we provide a fluent builder API. With this API you can easily create basic processes in a few lines of code. In the generate process fluent api quickstart we demonstrate how to create a rather complex process with 5 tasks and 2 gateways in less than 50 lines of code.
The fluent builder API is by no means complete, but it provides you with basic elements such as events, tasks, gateways and embedded subprocesses.
To create an empty model instance with a new process, the method Bpmn.createProcess() is used. After this you can add as many tasks and gateways as you like. At the end you must call done() to return the generated model instance. For example, a simple process with one user task can be created like this:
BpmnModelInstance modelInstance = Bpmn.createProcess()
.startEvent()
.userTask()
.endEvent()
.done();
To add a new element you have to call a function which is named like the element to add. Additionally you can set attributes of the last created element.
So for example let's set the name of the process and mark it as executable and also give the user task a name.
BpmnModelInstance modelInstance = Bpmn.createProcess()
.name("Example process")
.executable()
.startEvent()
.userTask()
.name("Some work to do")
.endEvent()
.done();
As you can see, a sequential process is really simple and straightforward to model but often you want
branches and a parallel execution path, which is also possible with the fluent builder API. Just add
a parallel or exclusive gateway and model the first path till an end event or another gateway. After that,
call the moveToLastGateway()
method and you return to the last gateway and can model the next path.
BpmnModelInstance modelInstance = Bpmn.createProcess()
.startEvent()
.userTask()
.parallelGateway()
.scriptTask()
.endEvent()
.moveToLastGateway()
.serviceTask()
.endEvent()
.done();
This example models a process with a user task after the start event followed by a parallel gateway with two parallel outgoing execution paths, each with a task and an end event.
Normally you want to add conditions on outgoing flows of an exclusive gateway which is also simple with
the fluent builder API. Just use the method condition()
and give it a label and an expression.
BpmnModelInstance modelInstance = Bpmn.createProcess()
.startEvent()
.userTask()
.exclusiveGateway()
.name("What to do next?")
.condition("Call an agent", "#{action = 'call'}")
.scriptTask()
.endEvent()
.moveToLastGateway()
.condition("Create a task", "#{action = 'task'}")
.serviceTask()
.endEvent()
.done();
If you want to use the moveToLastGateway()
method but have multiple incoming
sequence flows at your current position you have to use the generic
moveToNode
method with the id of the gateway. This could for example happen
if you add a join gateway to your process. For this purpose and for loops we
added the connectTo(elementId)
method.
BpmnModelInstance modelInstance = Bpmn.createProcess()
.startEvent()
.userTask()
.parallelGateway("fork")
.serviceTask()
.parallelGateway("join")
.moveToNode("fork")
.userTask()
.connectTo("join")
.moveToNode("fork")
.scriptTask()
.connectTo("join")
.endEvent()
.done();
This example creates a process with three parallel execution paths which all join in the second gateway. Notice that the first call of moveToNode is not necessary, because at this point the joining gateway only has one incoming sequence flow; it was used for consistency. The following example uses connectTo(elementId) to model a feedback loop:
BpmnModelInstance modelInstance = Bpmn.createProcess()
.startEvent()
.userTask()
.id("question")
.exclusiveGateway()
.name("Everything fine?")
.condition("yes", "#{fine}")
.serviceTask()
.userTask()
.endEvent()
.moveToLastGateway()
.condition("no", "#{!fine}")
.userTask()
.connectTo("question")
.done();
This example creates a process with an exclusive gateway and a feedback loop in the second execution path.
To create an embedded subprocess with the fluent builder you can directly add it to your process building or you could detach it and create flow elements of the subprocess later on.
// Directly define the subprocess
BpmnModelInstance modelInstance = Bpmn.createProcess()
.startEvent()
.subProcess()
.camundaAsync()
.embeddedSubProcess()
.startEvent()
.userTask()
.endEvent()
.subProcessDone()
.serviceTask()
.endEvent()
.done();
// Detach the subprocess building
modelInstance = Bpmn.createProcess()
.startEvent()
.subProcess("subProcess")
.serviceTask()
.endEvent()
.done();
SubProcess subProcess = (SubProcess) modelInstance.getModelElementById("subProcess");
subProcess.builder()
.camundaAsync()
.embeddedSubProcess()
.startEvent()
.userTask()
.endEvent();
With the fluent builder API you can not only create processes, you can also extend existing processes.
For example imagine a process containing a parallel gateway with the id gateway
. You now want to
add another execution path to it for a new service task which has to be executed every time.
BpmnModelInstance modelInstance = Bpmn.readModelFromFile(new File("PATH/TO/MODEL.bpmn"));
ParallelGateway gateway = (ParallelGateway) modelInstance.getModelElementById("gateway");
gateway.builder()
.serviceTask()
.name("New task")
.endEvent();
Another use case is to insert new tasks between existing elements. Imagine a process containing a user task with the id task1, which is followed by a service task, and you now want to add a script task and a user task between these two.
BpmnModelInstance modelInstance = Bpmn.readModelFromFile(new File("PATH/TO/MODEL.bpmn"));
UserTask userTask = (UserTask) modelInstance.getModelElementById("task1");
SequenceFlow outgoingSequenceFlow = userTask.getOutgoing().iterator().next();
FlowNode serviceTask = outgoingSequenceFlow.getTarget();
userTask.getOutgoing().remove(outgoingSequenceFlow);
userTask.builder()
.scriptTask()
.userTask()
.connectTo(serviceTask.getId());
If you use Delegation Code you can access the BPMN model instance and current element of the executed process. If a BPMN model is accessed it will be cached to avoid redundant database queries.
If your class implements the org.camunda.bpm.engine.delegate.JavaDelegate
interface you can access the BPMN model instance
and the current flow element. In the following example the JavaDelegate
was added to a service task in the BPMN model.
Therefore the returned flow element can be cast to a ServiceTask.
public class ExampleServiceTask implements JavaDelegate {
public void execute(DelegateExecution execution) throws Exception {
BpmnModelInstance modelInstance = execution.getBpmnModelInstance();
ServiceTask serviceTask = (ServiceTask) execution.getBpmnModelElementInstance();
}
}
If your class implements the org.camunda.bpm.engine.delegate.ExecutionListener
interface you can access the BPMN model instance
and the current flow element. As an Execution Listener can be added to several elements (like the process, events, tasks, gateways and sequence flows), it cannot be guaranteed which type the flow element will be.
public class ExampleExecutionListener implements ExecutionListener {
public void notify(DelegateExecution execution) throws Exception {
BpmnModelInstance modelInstance = execution.getBpmnModelInstance();
FlowElement flowElement = execution.getBpmnModelElementInstance();
}
}
If your class implements the org.camunda.bpm.engine.delegate.TaskListener
interface you can access the BPMN model instance
and the current user task since a Task Listener can only be added to a user task.
public class ExampleTaskListener implements TaskListener {
public void notify(DelegateTask delegateTask) {
BpmnModelInstance modelInstance = delegateTask.getBpmnModelInstance();
UserTask userTask = delegateTask.getBpmnModelElementInstance();
}
}
It is also possible to access the BPMN model instance by the process definition id using the Repository Service, as the following (incomplete) test sample code shows. Please see the generate-jsf-form quickstart for a complete example.
public void testRepositoryService() {
runtimeService.startProcessInstanceByKey(PROCESS_KEY);
String processDefinitionId = repositoryService.createProcessDefinitionQuery()
.processDefinitionKey(PROCESS_KEY).singleResult().getId();
BpmnModelInstance modelInstance = repositoryService.getBpmnModelInstance(processDefinitionId);
}
Custom extension elements are a standardized way to extend the BPMN model. The camunda extension elements are fully implemented in the BPMN model API but unknown extension elements can also easily be accessed and added.
Every BPMN BaseElement
can have a child element of the type extensionElements
.
This element can contain all sorts of extension elements. To access the
extension elements you have to call the getExtensionElements()
method and
if no such child element exists you must create one first.
StartEvent startEvent = modelInstance.newInstance(StartEvent.class);
ExtensionElements extensionElements = startEvent.getExtensionElements();
if (extensionElements == null) {
extensionElements = modelInstance.newInstance(ExtensionElements.class);
startEvent.setExtensionElements(extensionElements);
}
Collection<ModelElementInstance> elements = extensionElements.getElements();
After that you can add or remove extension elements to the collection.
CamundaFormData formData = modelInstance.newInstance(CamundaFormData.class);
extensionElements.getElements().add(formData);
extensionElements.getElements().remove(formData);
You can also access a query-like interface to filter the extension elements.
extensionElements.getElementsQuery().count();
extensionElements.getElementsQuery().list();
extensionElements.getElementsQuery().singleResult();
extensionElements.getElementsQuery().filterByType(CamundaFormData.class).singleResult();
Additionally, there are some shortcuts to add new extension elements. You can use the namespaceUri and the elementName to add your own extension elements, or you can use the class of a known extension element type, e.g. the camunda extension elements. The extension element is added to the BPMN element and returned so that you can set attributes or add child elements.
ModelElementInstance element = extensionElements.addExtensionElement("http://example.com/bpmn", "myExtensionElement");
CamundaExecutionListener listener = extensionElements.addExtensionElement(CamundaExecutionListener.class);
Another helper method exists for the fluent builder API, which allows you to add previously defined extension elements.
CamundaExecutionListener camundaExecutionListener = modelInstance.newInstance(CamundaExecutionListener.class);
camundaExecutionListener.setCamundaClass("org.camunda.bpm.MyJavaDelegte");
startEvent.builder()
.addExtensionElement(camundaExecutionListener);
With Cycle you can synchronize the BPMN diagrams in your business analyst's BPMN tool with the technically executable BPMN 2.0 XML files your developers edit with their modeler (e.g. in Eclipse). Depending on your tool we can realize forward- and reverse engineering, while you can store your BPMN 2.0 XML files in different repositories (e.g. SVN, file system or FTP servers).
Although business and IT use different BPMN tools, the process models stay in sync: with camunda Cycle you can synchronize BPMN diagrams in the tool chain anytime, for forward engineering as well as reverse engineering. By connecting and continuously synchronizing the process models in both environments, we keep business and IT aligned. This is what we call a full working BPM roundtrip.
The typical use cases are:
Cycle is part of our camunda BPM distribution and ready to use by opening http://localhost:8080/cycle. At the first start-up you will be prompted to create an admin user. If you are new to Cycle, have a look at our Hands-On Cycle Tutorial.
To connect Cycle to a suitable repository you can set up one of the following connectors:
Furthermore you get information about how to configure User Credentials for your connector.
For directly accessing your process models stored in Signavio, you must set up a Signavio Connector. The picture to the left shows a connector setup for Signavio's SaaS edition with globally provided credentials, meaning that every Cycle user connects to the repository with the same credentials. If you are behind a proxy, you could configure that here as well.
Hit Test
to check if Cycle can find the folder you specified.
Use the subversion plugin to connect to a subversion repository like SVN or GitHub. You must specify the URL (including subfolders, if you want to directly point to a certain folder in the subversion repository). If user credentials are mandatory, you can provide them either globally or individually for each Cycle user. In the picture to the left you see a connector setup for a GitHub repository. The user credentials are provided globally.
Hit Test
to check if Cycle can find the folder you specified.
Use the File System Connector to use models stored on your local system. Select the File System Connector as connector plugin. The variable ${user.home}
points to the directory of your OS user account. You can also choose an absolute path like C:\MyFolder
.
Hit Test
to check if Cycle can find the folder you specified.
To set up credentials provided by the user you need to enter the My Profile
menu and select add credentials
for your connector.
Hit Test
to check if the credentials are valid.
When we are talking about a Roundtrip we are talking about the synchronization of BPMN 2.0 diagrams between the business perspective and the technical perspective. This synchronization is based on the standard BPMN 2.0 XML format. As on the technical side only executable processes matter, Cycle provides the functionality to extract these processes out of models from the business side where manual processes (not executable) can be modeled as well. This extraction mechanism is what we call Pool Extraction. With Cycle, you can do this synchronization in both directions.
Set up a suitable connector for your repository as described in the section Connector Configuration. In this walkthrough we use a Signavio Connector with user provided credentials.
Hit Test
to check if Cycle can access your Signavio account.
In the left box of your roundtrip, click on Add Process Model
, pick a name for your modeling tool and choose the Signavio connector from the connector's dropdown. Cycle now connects with Signavio, so after a short time you can navigate through the chosen repository to select your process model.
After you hit Add
, Cycle will save a link to the process model you selected and offer you a preview image in the left box of your roundtrip. It also says that the process model has not yet been synchronized, which is true. Changes to the diagram in Signavio will be picked up automatically by Cycle.
Hit Create and choose the location where you want the BPMN 2.0 XML file to be stored. In our example, we want to store it on our local file system, in a workspace we use with our Eclipse IDE. After hitting Create, Cycle connects to Signavio, requests the BPMN 2.0 XML and saves it to the location you specified. Please note that no diagram picture will be displayed until an image file of the diagram is stored in the folder. Cycle indicates that both models are "in sync" now.
Heads up! If your process model is a collaboration diagram, Cycle performs a Pool Extraction, which means that only pools flagged as executable are taken into account.
Now Cycle shows you that your roundtrip consists of the BPMN diagram stored in Signavio (left side) and the BPMN 2.0 file stored in your file repository (right side). You can also see that the two process models are currently in sync, as well as the date and time of the last synchronization.
You can now either check out the BPMN 2.0 XML from your Subversion repository or open it directly from your local drive. In both cases you can edit it inside your Eclipse IDE using the camunda Modeler.
After you have worked on the executable process model the models are out of sync, indicated by the red label "change since last sync" on the side where the change happened.
You can now hit the sync button in the corresponding direction (in our case from right to left). Afterwards you will be prompted to confirm the synchronization with the option to add a commit message.
Now both models are synchronized again, indicated by the green labels "in sync" on both sides.
During a roundtrip from a business perspective to a technical process diagram, Cycle checks which pools are flagged as "executable". Only those pools are synchronized into the executable process model, so you don't have to bother with huge diagrams describing manual flows. We call this feature "Pool Extraction". When you synchronize the executable diagram back with the original diagram, the "non-executable" pools are merged back into the diagram. No information gets lost.
The following example shows the relevant XML tag:
<process id="sid-8E90631B-169F-4CD8-9C6B-1F31121D0702" name="MyPool" isExecutable="true">
An executable process model usually contains engine-specific attributes in the BPMN 2.0 XML, so we have to make sure that these attributes are not lost during a roundtrip with another tool. The BPMN 2.0 standard explicitly defines an extension mechanism for such attributes in the XML. This means that a proper BPMN 2.0 import and export functionality must maintain the engine attributes, even if they are added as an engine extension.
The camunda BPM Process Engine uses a multitude of attributes for configuration purposes, which can be set up in the camunda Modeler. Cycle retains these attributes during the roundtrip. Here is an example:
The XML export from the Signavio modeler contains no engine attributes:
<serviceTask completionQuantity="1" id="sid-01234"
implementation="webService"
isForCompensation="false"
name="MyService"
startQuantity="1"/>
After the update with the camunda Modeler, camunda:class and camunda:async as well as the failedJobRetryTimeCycle extension element have been added as camunda-specific engine attributes:
<definitions ... xmlns:camunda="http://activiti.org/bpmn" xmlns:fox="http://www.camunda.com/fox">
...
<serviceTask id="sid-01234" camunda:class="java.lang.Object"
camunda:async="true"
name="MyService"
implementation="webService">
<extensionElements>
<fox:failedJobRetryTimeCycle>R3/PT10M</fox:failedJobRetryTimeCycle>
</extensionElements>
<incoming>sid-3DED1BA0-77FC-4768-AA3E-0B60A81850EA</incoming>
<outgoing>sid-E6D3AB73-386C-4260-82B9-CB740B82001F</outgoing>
</serviceTask>
...
</definitions>
After synchronization back to Signavio, the original Signavio information like completionQuantity, isForCompensation and startQuantity was merged back:
<definitions ... xmlns:camunda="http://activiti.org/bpmn" xmlns:fox="http://www.camunda.com/fox">
...
<serviceTask camunda:async="true" camunda:class="java.lang.Object"
completionQuantity="1"
id="sid-01234"
isForCompensation="false"
name="MyService"
startQuantity="1">
<extensionElements>
<fox:failedJobRetryTimeCycle>R3/PT10M</fox:failedJobRetryTimeCycle>
</extensionElements>
<incoming>sid-3DED1BA0-77FC-4768-AA3E-0B60A81850EA</incoming>
<outgoing>sid-E6D3AB73-386C-4260-82B9-CB740B82001F</outgoing>
</serviceTask>
...
</definitions>
The Tasklist is a demo web application to provide you with the possibility to work on User Tasks. The Tasklist is part of our camunda BPM distribution and ready to use by opening http://localhost:8080/camunda/app/tasklist.
Notice: The Tasklist is only a demo application. You may use it as a basis for your own projects or simply as inspiration for writing your own.
Find additional information about how to use the Tasklist in our Developing Process Applications tutorial.
In the following example we walk through a typical human workflow scenario. The Tasklist has four demo users which belong to different user groups. Sign in with the user demo and start a process instance.
To start a process instance via the Tasklist hit the dropdown button and select a process. If there is no process listed please verify that your process is deployed correctly.
Depending on whether you have defined a start form for your process, it will be displayed now. Otherwise you get a notification that no form has been defined for starting the process. In this case, click Start process using generic form. The generic task form allows you to enter variables for your process.
In our example you have to insert the desired values and hit Start Process to continue to the next step.
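Under the hood, the values entered in such a start form become variables of the newly created process instance. A minimal sketch of the corresponding Java API call, assuming an available processEngine; the process definition key "invoice" is a hypothetical example:

RuntimeService runtimeService = processEngine.getRuntimeService();

// the values entered in the form become process variables
Map<String, Object> variables = new HashMap<String, Object>();
variables.put("amount", 100L);

// "invoice" is a hypothetical process definition key
runtimeService.startProcessInstanceByKey("invoice", variables);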
Tasks that are assigned to you are listed on the Tasklist main page where you can hit the button to start working on a task.
In our example task form you are asked to assign an approver for your invoice. Enter a colleague's name who should be assigned to approve the task. Have a look at the Task Overview. The assigned task is now in your colleague's folder.
If no task form is defined for a Start Event you will be forwarded to a generic form. In a generic form you can define the input data yourself.
When you complete a task by submitting the task form, the task is completed and the process continues in the engine.
In the User and Group Task overview you can see how many tasks are assigned to you, the different groups and your colleagues. Have a look in your colleagues' folder. You will see that you can see their tasks but you are not able to work on them.
The folder Inbox contains all tasks that are assigned to your user groups. Like any group task, they are ready to be claimed.
If tasks are assigned to a group, they are visible to more than one person. In order to avoid different people working on the same task at the same time, it first needs to be claimed. By claiming a task you become its assignee and the task is moved to your personal tasks folder ("My Tasks"). Hit the button and select claim.
Users can also unclaim a task by selecting unclaim. The task then goes back to the associated user group.
You can bulk (un-)claim tasks after selecting multiple tasks via Ctrl + click.
You can bulk delegate tasks after selecting multiple tasks via Ctrl + click.
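Claiming and unclaiming as described above map to the engine's TaskService. A minimal sketch, assuming a processEngine and a known taskId:

TaskService taskService = processEngine.getTaskService();

// claim: the user "demo" becomes the assignee of the task
taskService.claim(taskId, "demo");

// unclaim: resetting the assignee returns the task to its candidate group
taskService.setAssignee(taskId, null);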
Tasks can be directly assigned to a user, to a candidate group (group) or to a candidate list (multiple users). Compared to the direct assignment of tasks, the tasks of candidate groups or candidate lists are not yet assigned. They must be claimed by a user who is part of the list or belongs to the group. Depending on this affiliation the Tasklist displays tasks in different folders.
In the properties panel in your camunda Modeler you can configure all relevant attributes.
To determine which user or user group is able to work on a task you can set the following extension attributes for your User Task:
Assignee: directly assign a task to a user
<userTask id="theTask" name="my task" camunda:assignee="John" ></userTask>
Candidate User: makes a user a candidate for a task
<userTask id="theTask" name="my task" camunda:candidateUsers="John, Mary" ></userTask>
Candidate Group: makes a user group a candidate for a task
<userTask id="theTask" name="my task" camunda:candidateGroups="management, accountancy" ></userTask>
You can define the Candidate User and the Candidate Group on the same task. Find more detailed information regarding extension attributes for User Tasks here.
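These assignment attributes determine which tasks a task query returns. A sketch of how Tasklist-style queries look in the Java API, assuming a taskService obtained from the process engine:

// tasks directly assigned to John ("My Tasks")
List<Task> myTasks = taskService.createTaskQuery()
  .taskAssignee("John")
  .list();

// unassigned tasks John is a candidate for (directly or via group membership)
List<Task> claimableTasks = taskService.createTaskQuery()
  .taskCandidateUser("John")
  .list();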
The Tasklist can work with different types of forms. To implement a Task Form in your application you have to connect the form resource with the BPMN 2.0 element in your process diagram. Suitable BPMN 2.0 elements for calling Tasks Forms are the Start Event and the User Task.
Out of the box, camunda Tasklist supports four different kinds of task forms: Embedded Task Forms, Generated Task Forms, Referenced Task Forms and Generic Task Forms. Each of them is described in the sections below.
When embedding the process engine into a custom application, you can integrate the process engine with any form technology such as Java Server Faces, Java Swing and Java FX, Rest-based Javascript web applications and many more.
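The integration point for such custom form technologies is the engine's FormService: the custom UI collects the values and hands them to the engine when the task is completed. A minimal sketch, assuming a processEngine and a known taskId:

FormService formService = processEngine.getFormService();

// values collected by the custom UI (JSF, Swing, JavaScript via REST, ...)
Map<String, Object> properties = new HashMap<String, Object>();
properties.put("approved", Boolean.TRUE);

// completes the task and passes the form values to the engine in one step
formService.submitTaskForm(taskId, properties);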
To add an embedded Task Form to your application simply create an HTML file and attach it to a User Task or a Start Event in your process model.
Add a folder src/main/webapp/forms to your project folder and create a FORM_NAME.html file containing the relevant content for your form. The following example shows a simple form with two input fields:
<form class="form-horizontal">
<div class="control-group">
<label class="control-label">Customer ID</label>
<div class="controls">
<input form-field type="string" name="customerId"></input>
</div>
</div>
<div class="control-group">
<label class="control-label">Amount</label>
<div class="controls">
<input form-field type="number" name="amount"></input>
</div>
</div>
</form>
To configure the form in your process, open the process in your Eclipse IDE with the camunda Modeler and select the desired User Task or Start Event. Open the properties view and enter embedded:app:forms/FORM_NAME.html as Form Key. The relevant XML tag looks like this:
<userTask id="theTask" camunda:formKey="embedded:app:forms/FORM_NAME.html"
camunda:candidateUsers="John, Mary"
name="my Task">
To create an embedded task form read the Creating Embedded Task Forms section.
The camunda process engine supports generating HTML Task Forms based on Form Data Metadata provided in BPMN 2.0 XML. Form Data Metadata is a set of BPMN 2.0 vendor extensions provided by camunda, allowing you to define form fields directly in BPMN 2.0 XML:
<userTask id="usertask" name="Task">
<extensionElements>
<camunda:formData>
<camunda:formField
id="firstname" label="Firstname" type="string">
<camunda:validation>
<camunda:constraint name="maxlength" config="25" />
<camunda:constraint name="required" />
</camunda:validation>
</camunda:formField>
<camunda:formField
id="lastname" label="Lastname" type="string">
<camunda:validation>
<camunda:constraint name="maxlength" config="25" />
<camunda:constraint name="required" />
</camunda:validation>
</camunda:formField>
<camunda:formField
id="dateOfBirth" label="Date of Birth" type="date" />
</camunda:formData>
</extensionElements>
</userTask>
Form Metadata can also be edited graphically using the camunda Modeler.
This form would look like this in the camunda Tasklist:
As you can see, the <camunda:formData ... /> element is provided as a child element of the BPMN <extensionElements> element. Form Metadata consists of multiple Form Fields which represent individual input fields where a user has to provide some value or selection.
A form field can have the following attributes:
Attribute | Explanation
---|---
id | Unique id of the form field; corresponds to the name of the process variable to which the value of the form field is added when the form is submitted.
label | The label to be displayed next to the form field.
type | The data type of the form field (e.g., string, long, boolean, date, enum).
defaultValue | Value to be used as a default (pre-selection) for the field.
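Programmatically, this metadata is available through the engine's FormService. A sketch that reads the form fields of the example above; taskId is an assumption:

TaskFormData formData = processEngine.getFormService().getTaskFormData(taskId);

for (FormField formField : formData.getFormFields()) {
  // prints e.g. "firstname (Firstname): string"
  System.out.println(formField.getId() + " (" + formField.getLabel() + "): "
      + formField.getTypeName());
}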
Validation can be used for specifying frontend and backend validation of form fields. camunda BPM provides a set of built-in form field validators and an extension point for plugging in custom validators.
Validation can be configured for each form field in BPMN 2.0 XML:
<camunda:formField
id="firstname" name="Firstname" type="string">
<camunda:validation>
<camunda:constraint name="maxlength" config="25" />
<camunda:constraint name="required" />
</camunda:validation>
</camunda:formField>
As you can see, you can provide a list of validation constraints for each Form Field.
The following built-in validators are supported out of the box:
Validator | Explanation
---|---
required | Applicable to all types. Validates that a value is provided for the form field. Rejects 'null' values and empty strings.
minlength | Applicable to string fields. Validates the minimum length of the text content. Accepts 'null' values.
maxlength | Applicable to string fields. Validates the maximum length of the text content. Accepts 'null' values.
min | Applicable to numeric fields. Validates the minimum value of a number. Accepts 'null' values.
max | Applicable to numeric fields. Validates the maximum value of a number. Accepts 'null' values.
readonly | Applicable to all types. Makes sure no input is submitted for the given form field.
camunda BPM supports custom validators. Custom validators are referenced using their fully qualified classname or an expression. Expressions can be used for resolving Spring or CDI @Named beans:
<camunda:formField
id="firstname" name="Firstname" type="string">
<camunda:validation>
<camunda:constraint name="validator" config="com.asdf.MyCustomValidator" />
<camunda:constraint name="validator" config="${validatorBean}" />
</camunda:validation>
</camunda:formField>
A custom validator implements the org.camunda.bpm.engine.impl.form.validator.FormFieldValidator interface:
public class CustomValidator implements FormFieldValidator {

  public boolean validate(Object submittedValue, FormFieldValidatorContext validatorContext) {

    // get access to the current execution
    DelegateExecution e = validatorContext.getExecution();

    // get access to all form fields submitted in the form submit
    Map<String, Object> completeSubmit = validatorContext.getSubmittedValues();

    // ... do some custom validation of the submittedValue

    // return true if the submitted value is valid, false otherwise
    return true;
  }

}
If the process definition is deployed as part of a ProcessApplication deployment, the validator instance is resolved using the process application classloader and / or the process application Spring Application Context / CDI Bean Manager in case of an expression.
If you want to call a Task Form that is not part of your application, you can add a reference to the desired form. The Referenced Task Form is configured similarly to the Embedded Task Form. Open the properties view and enter app:FORM_NAME.html as Form Key. The relevant XML tag looks like this:
<userTask id="theTask" camunda:formKey="app:FORM_NAME.html"
camunda:candidateUsers="John, Mary"
name="my Task">
The Tasklist creates the URL according to the pattern:
"../.." + contextPath (of process application) + "/" + "app" + formKey (from BPMN 2.0 XML) + "processDefinitionKey=" + processDefinitionKey + "&callbackUrl=" + callbackUrl;
When you have completed the task, the callback URL will be called.
The generic form will be used whenever you have not added a dedicated form for a User Task or a Start Event. After hitting the Complete Task button, the process instance contains the entered values. Generic Task Forms can be very helpful during the development stage, so you do not need to implement all Task Forms before you can run a workflow. For debugging and testing, this concept has many benefits as well.
Embedded task forms are plain HTML documents containing input fields that map to process variables. Each input must be annotated with a form-field attribute and must declare the type and name of the mapped variable. A simple process variable mapping input is shown below:
<input form-field type="boolean" name="myBoolean" />
Input fields are HTML input fields of the form
<input form-field type="[type]" name="[variableName]" />
The following variable types are supported on input fields: boolean, string, number and date. The mapping between variable types and rendered input is as follows:
Variable Type | Input Type
---|---
boolean | checkbox
string | text
number | number
date | datetime
Select boxes are HTML <select> elements of the form
<select form-field type="[type]" name="[variableName]" form-values="[optionsVarName]">
<option value="[value]">[label]</option>
<option value="[value]">[label]</option>
</select>
The following parameters are supported:
Parameter | Explanation
---|---
type | The data type of the select box.
variableName | The name of the process variable to which this input field should be bound.
optionsVarName | The name of a process variable providing the select options.
value | This value is used as the value when submitting the select box.
label | This label is displayed to the user.
A simple example of a select box binding to the process variable approver:
<select form-field type="string" name="approver">
<option value="demo">Demo</option>
<option value="john">Jonny</option>
<option value="peter">Peter Meter</option>
</select>
Select options can also be loaded from a process variable:
<select form-field type="string" name="approver" form-values="names">
</select>
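For this to work, the names variable has to exist when the form is rendered, e.g., set earlier in the process. A sketch using the Java API; executionId and the option values are assumptions:

// provide the select options as a list of strings in the "names" variable
runtimeService.setVariable(executionId, "names",
    Arrays.asList("demo", "john", "peter"));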
Radio buttons are HTML <input> elements of the form
<input form-field type="radio" name="[variableName]" value="[value]">
Note: currently the radio button only supports string variables.
Parameter | Explanation
---|---
variableName | The name of the process variable
value | The value of the radio button
Radio Buttons are usually used as a group:
<input form-field type="radio" name="approver" value="jonny"> Jonny <br>
<input form-field type="radio" name="approver" value="mary"> Mary
Textareas are HTML <textarea> elements of the form
<textarea form-field name="[variableName]"></textarea>
Note: currently the textarea only supports string variables.
Parameter | Explanation
---|---
variableName | The name of the process variable
This is an example of the textarea:
<textarea form-field name="selectedName"></textarea>
You can reference the value of a process variable using the formVariable() method:
The user {{formVariable('selectedName')}} should approve this request!
Form validation may be added via AngularJS validation directives that are available for text input, checkbox and number input.
For example, the following snippet validates the form input against the pattern 00-00:
<input form-field type="string" name="myString" ng-pattern="/\d{2}-\d{2}/" />
To query the validation state of a form, you may use the variablesForm variable that is available in the scope of the embedded task form:
<input form-field type="string" name="myString" ng-pattern="/\d{2}-\d{2}/" />
<p ng-if="variablesForm.$invalid">
Your form contains errors!
</p>
<p ng-if="variablesForm.myString.$invalid">
The form input <em>myString</em> is not valid. Allowed pattern: <code>00-00</code>.
</p>
Based on the validation state of a form, a form submit button will either be disabled (form has errors) or enabled (form is ok).
It is possible to inject custom JavaScript code into the scope of an embedded form. To do that, the script must be wrapped in a <script form-script type="text/form-script"></script> block.
Inside the script, the variable $scope is provided to bind functions, such as form input change listeners, to it. With these change listeners, advanced validation may be carried out.
Check out the AngularJS documentation on ngModel to learn more about how to interact with form elements.
<input form-field type="string" name="myString" ng-change="myStringChanged()" />
<script form-script type="text/form-script">
  $scope.myStringChanged = function() {
    var formField = $scope.variablesForm.myString,
        value = formField.$modelValue;

    // value must equal 'cat'; the validity is set on the form field controller
    if (value != 'cat') {
      formField.$setValidity('catEntered', false);
    } else {
      formField.$setValidity('catEntered', true);
    }
  };
</script>
The above example binds a change listener to the input named myString. Inside the change listener, the form field's value is retrieved. Using the value, a validation is performed (it must equal cat) and the form field's validation state is updated accordingly.
In case you would like to have access to internal services such as $http, e.g., to perform validation against a backend, you may use the inject hook provided inside a form script:
<script form-script type="text/form-script">
  inject([ '$scope', '$http', function($scope, $http) {

    $scope.myStringChanged = function() {
      var formField = $scope.variablesForm.myString,
          value = formField.$modelValue;

      $http.get("...?myString=" + value).success(function(data) {
        // the validity is set on the form field controller
        if (data == "ok") {
          formField.$setValidity('backendOk', true);
        } else {
          formField.$setValidity('backendOk', false);
        }
      });
    };
  }]);
</script>
The example performs backend validation of the form field value using the $http service.
Note that you may want to debounce the backend validation rather than firing one query per user interaction.
The diagram below shows the task lifecycle and the transitions supported by camunda BPM. To learn how to work with the lifecycle programmatically in your application, refer to the Java API Reference.
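As a rough sketch, the main lifecycle transitions correspond to the following TaskService calls (taskId assumed; see the Java API Reference for the authoritative description):

TaskService taskService = processEngine.getTaskService();

taskService.claim(taskId, "demo");        // unassigned -> assigned to "demo"
taskService.delegateTask(taskId, "john"); // assigned -> delegated to "john"
taskService.resolveTask(taskId);          // delegated -> resolved (back to the owner)
taskService.complete(taskId);             // ends the lifecycle of the task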
Some visual aspects of the web interface can be configured in the _vars.less file (located in webapps/camunda-webapp/webapp/src/main/webapp/assets/styles/) as follows:

Header colors: you can change the values of the @main-color and @main-darker variables.

Header logo: you can either override the app-logo.png image file located in webapps/camunda-webapp/webapp/src/main/webapp/assets/img/tasklist/ or override the @logo-tasklist variable to point to another image file.
With camunda BPM Cockpit you can monitor and administrate your running process instances. The Cockpit architecture allows you to use plugins to extend the functionality, so you can individually adapt the tool to your personal requirements.
On the start page of Cockpit you get an overview of the installed plugins - you will see at least two pre-installed plugins. Additionally installed plugins will automatically be added below the existing ones.
The Process Definition View provides you with information about the definition and the status of a process. On the left hand side you can easily survey the versions of the process and how many instances of each version are running. Incidents of all running process instances are displayed together with an instance counter label in the corresponding rendered diagram, so it is easy to locate failed activities in the process. Use the mouse to navigate through the diagram: by turning the mouse wheel you can zoom in and out; hold the left mouse button pressed to pan the diagram in the desired direction.
In the tab Process Instances, all running instances are listed in a tabular view. Besides information about start time, business key and state, you can select an instance by ID and drill down into the Process Instance View.
The tab Called Process Definitions displays the called child processes. In the column Called Process Definition the names of the called sub processes are listed. Click on the name to display the process in the Process Definition View. Please note that a filter called Parent is automatically set for the process so that you only see the instances that belong to the parent process. In the column Activity you can select the instance that is calling the child process.
The tab Job Definitions displays the job definitions that are linked to this process definition. You can see the name of the activity, the type of the job, its configuration and its state. You can also suspend and re-activate the job definition (see Job Definition Suspension for more information).
The filter function on the left hand side of the Process Definition View allows you to find certain instances by filtering for variables, business keys, start time and date, or by selecting the version of a process. Beyond that, you can combine different filters as a logical AND relation. Filter expressions on variables must be specified as variableName OPERATOR value, where the operator may be one of the following: =, !=, >, >=, <, <=, like. Apart from the like operator, the operator expressions do not have to be separated by spaces. The like operator is for string variables only. You can use % as a wildcard in the value expression, e.g., customerName like "Jon%". String values must be properly enclosed in " ".
Open the Process Instance View by selecting a process instance from the Process Definition View instance list. This view allows you to drill down into a single process instance and explore its running activities as well as the variables, tasks, jobs, etc.
Besides the diagram view, the process is displayed as an Activity Instance Tree View. Variables that belong to the instance are listed in a variables table of the Detailed Information Panel. You can select single or multiple (Ctrl + click) flow nodes in the interactive BPMN 2.0 diagram, or you can select an activity instance within the activity tree view. As diagram, tree view and variables table correspond with each other, the selected flow node will also be selected in the tree and the associated variables will be shown, and vice versa.
The activity instance tree contains a node for each activity that is currently active in the process instance. It allows you to select activity instances to explore their details. Concurrently the selected instance will be marked in the rendered process diagram and the corresponding variables will be listed in the Detailed Information Panel.
Use the Detailed Information Panel to get an overview of the variables, incidents, called process instances and user tasks that the process instance contains. Depending on the selected activity instance in the rendered diagram, the panel lists the corresponding information. You can also focus on the activity instance via a scope link in the table.
In addition to the instance information you can edit variables or change the assignees of user tasks.
In the Incidents tab you can click on the Incident message name, which will open the stacktrace of the selected incident. In the Incidents tab you can also increment the number of retries for a failed job by hitting the button and in the User Tasks tab you can manage the groups for the selected user task by hitting the button.
Hit the button on the right hand side to add variables to a process instance. You can choose between different data types. Please note that variables will be overwritten if you add a new variable with an existing name.
Hit the symbol in the Detailed Information Panel to edit variables. This feature allows you to change the value of variables as well as their type. A validation of the date format and of integer values happens on the client side. If you enter NULL, the variable will be converted to a string type.
Enterprise Feature
Please note that this feature is only included in the enterprise edition of the camunda BPM platform; it is not available in the community edition. Check the camunda enterprise homepage for more information or get your free trial version.
At the top right of the Process Definition View and the Process Instance View, you can hit the History Button to access the historical view.
In the historical view of the Process Definition you see an overview of all of the running and completed process instances. On the left side of the screen, a Filter can be applied and you have the option of selecting to only see process instances in a specific state. Running and completed instances can be selected.
In the historical view of the process instance you see instance-specific information. On the left side of the screen, a Filter can be applied and you have the option of selecting to only see process instances in specific states. Running, completed and canceled process instances can be viewed as well as task-specific activity states.
You can access various information regarding the specific instance by selecting the applicable tab at the bottom of the screen. Among other details, you can view the Audit Log of an instance, which includes detailed information about the activities that took place within the process instance. In the Action column of the Variables tab, you can see a log of the variables which were used in the instance and in the User Tasks tab you can see a log of the User Tasks of the instance.
Unresolved incidents of a process instance or a sub process instance are indicated by Cockpit as failed jobs. To localize which instance of a process failed, Cockpit allows you to drill down to the unresolved incident by using the process status dots. Hit a red status dot of the affected instance in the Process Definition View to get an overview of all incidents. The Incidents tab in the Detailed Information Panel lists the failed activities with additional information. Furthermore, you have the possibility of drilling down to the failing instance of a sub process.
Enterprise Feature
Please note that the following feature is only included in the enterprise edition of the camunda BPM platform; it is not available in the community edition. Check the camunda enterprise homepage for more information or get your free trial version.
In the Process Definition View and in the Process Instance View you have the option of suspending the selected process definition or the process instance that you are viewing by using the button on the right hand side.
If you suspend the process definition, you can prevent the process definition from being instantiated. No further operations can be done while the process definition is in the suspended state. You can simply re-activate the process definition by using the button on the right hand side. You also have the option of suspending/reactivating all process instances of the process definition as well as defining if the process definition (and process instances) should instantly be suspended/reactivated or at a specific time in a confirmation dialog. You can find more information about the functionality of this in the Suspending process definitions section of the Process Engine chapter.
If you suspend the process instance, you can prevent the process instance from being executed any further. This includes suspending all tasks included in the process instance. You can simply re-activate the process instance by using the button on the right hand side. You can find more information about the functionality of this in the Suspending process instances section of the Process Engine chapter.
In the Process Definition View you have the option of suspending a job definition. This can be done by using the button displayed in the Action column of the Job Definitions tab at the bottom of the screen. By doing this, you can prevent this job definition from being processed in all process instances of the selected process definition. You can simply re-activate the job definition by using the button in the same Action column. You can find more information about the functionality of this in the Suspending and activating job execution section of the Process Engine chapter.
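The Cockpit operations described above correspond to the following engine Java API calls; a sketch assuming the respective services and ids are available:

// suspend the process definition including all of its instances, immediately
repositoryService.suspendProcessDefinitionById(processDefinitionId, true, null);

// suspend a single process instance (including all of its tasks)
runtimeService.suspendProcessInstanceById(processInstanceId);

// suspend a job definition including the jobs that already exist
managementService.suspendJobDefinitionById(jobDefinitionId, true, null);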
Cockpit defines a plugin concept that lets you add your own functionality without having to extend or hack the Cockpit web application. Plugins can contribute to various plugin points, e.g., the start page, as shown in the following example:
A Cockpit plugin is a Maven JAR project that is included in the Cockpit web application as a library dependency. It provides a server-side and a client-side extension to Cockpit.
The integration of a plugin into the overall cockpit architecture is depicted in the following figure.
On the server-side, it can extend cockpit with custom SQL queries and JAX-RS resource classes. Queries (defined via MyBatis) may be used to squeeze additional intel out of an engine database or to execute custom engine operations. JAX-RS resources on the other hand extend the cockpit API and expose data to the client-side part of the plugin.
On the client-side, a plugin may include AngularJS modules to extend the Cockpit web application. Via those modules a plugin provides custom views and services.
The basic skeleton of a cockpit plugin looks as follows:
cockpit-plugin/
├── src/
| ├── main/
| | ├── java/
| | | └── org/my/plugin/
| | | ├── db/
| | | | └── MyDto.java (5)
| | | ├── resource/
| | | | ├── MyPluginRootResource.java (3)
| | | | └── ... (4)
| | | └── MyPlugin.java (1)
| | └── resources/
| | ├── META-INF/services/
| | | └── org.camunda.bpm.cockpit.plugin.spi.CockpitPlugin (2)
| | └── org/my/plugin/
| | ├── queries/
| | | └── sample.xml (6)
| | └── assets/app/ (7)
| | └── app/
| | ├── plugin.js (8)
| | ├── view.html
| | └── ...
| └── test/
| ├── java/
| | └── org/my/plugin/
| | └── MyPluginTest.java
| └── resources/
| └── camunda.cfg.xml
└── pom.xml
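The plugin class MyPlugin.java (1) from the skeleton implements the CockpitPlugin SPI. A minimal sketch, assuming the package layout shown above; AbstractCockpitPlugin is the base class provided by the Cockpit plugin API:

package org.my.plugin;

import org.camunda.bpm.cockpit.plugin.spi.impl.AbstractCockpitPlugin;

public class MyPlugin extends AbstractCockpitPlugin {

  // unique id under which the plugin is registered with Cockpit
  public static final String ID = "my-plugin";

  @Override
  public String getId() {
    return ID;
  }
}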
As runtime relevant resource, it defines a META-INF/services entry that publishes the plugin to Cockpit.

You can exclude some plugins from the interface by adding a cam-exclude-plugins attribute to the HTML base tag of the page loading the interface.
The content of the attribute is a comma-separated list formatted like <plugin.key>:<feature.id>.
If the feature ID is not provided, the whole plugin will be excluded.
This example will completely deactivate the action buttons on the right side of the process instance view.
<base href="/"
cam-exclude-plugins="cockpit.processInstance.runtime.action" />
In this example we deactivate the definition list in the cockpit dashboard but keep the diagram previews and disable the job retry action button:
<base href="/"
cam-exclude-plugins="cockpit.dashboard:process-definition-tiles,
cockpit.processInstance.runtime.action:job-retry-action" />
Here you can see the various points at which you are able to add your own plugins:

cockpit.dashboard
cockpit.processDefinition.runtime.tab
cockpit.processInstance.runtime.tab
cockpit.processDefinition.runtime.action
cockpit.processInstance.runtime.action
cockpit.processDefinition.view
cockpit.processInstance.view
cockpit.processDefinition.diagram.overlay
cockpit.processInstance.diagram.overlay
cockpit.jobDefinition.action
Here is an example of how to configure where you place your plugin:
var ViewConfig = [ 'ViewsProvider', function(ViewsProvider) {
ViewsProvider.registerDefaultView('cockpit.processDefinition.view', {
id: 'runtime',
priority: 20,
label: 'Runtime'
});
}];
For more information on creating and configuring your own plugin, please see How to develop a Cockpit plugin.
Some visual aspects of the web interface can be configured in the _vars.less file (located in webapps/camunda-webapp/webapp/src/main/webapp/assets/styles/) as follows:

Header colors: you can change the values of the @main-color and @main-darker variables.

Header logo: you can either override the app-logo.png image file located in webapps/camunda-webapp/webapp/src/main/webapp/assets/img/cockpit/ or override the @logo-cockpit variable to point to another image file.
Along with the camunda web applications we ship Admin, accessible via http://localhost:8080/camunda/app/admin/. Admin is a small application that allows you to configure users and groups via the engine's Identity Service. Furthermore, you can connect camunda Admin to your LDAP system.
On first access of a process engine through Cockpit or Tasklist a setup screen will be shown. This screen allows you to configure an initial user account with administrator rights.
Administrator users are not global but per engine. Thus, you will need to set up an admin user for each separate engine.
In the My Profile menu you can edit your personal account settings, such as:
Users who belong to the group camunda-admin have administrator privileges. There must be at least one member in this group, otherwise the initial setup screen appears. Besides user and group management, as an administrator you are able to define authorization rules for users and groups, to control access permissions for applications and to set the visibility of users and groups.
In the following sections you will learn how to use an administrator account with the help of a simple use case. You will create a group with two users who will be able to work together in Tasklist.
The Users menu allows you to add, edit and delete user profiles.
Login with your admin account and add two new users. Give them a unique ID and a password you can remember.
The Groups menu allows you to add, edit and delete user groups.
Create a new group called support and add the new users to the group. To do so, go back to the Users menu and edit the new accounts. In the menu Groups you can add the users to the support group.
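Admin performs these operations through the engine's IdentityService. A minimal sketch of the equivalent Java calls; the password is a placeholder, the ids are the example values from this walkthrough:

IdentityService identityService = processEngine.getIdentityService();

// create a user with id and password (password is a placeholder)
User user = identityService.newUser("lemmy");
user.setPassword("s3cret");
identityService.saveUser(user);

// create the group and add the user to it
Group group = identityService.newGroup("support");
group.setName("Support");
identityService.saveGroup(group);

identityService.createMembership("lemmy", "support");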
Manage authorizations for users, groups and applications. Define which users or groups have access to the applications, and which users are visible to other groups or to direct group members.
Set the authorizations for the new group and the created users. First you have to define which application the members of your new group have access to. Select the Application menu and create a new Application Authorization rule. The group members should be able to access Tasklist, so add the following rule:
Now every member of the group 'support' can use Tasklist.
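Such a rule can also be created through the engine's AuthorizationService. A sketch of the rule granting the group support access to Tasklist, assuming the service is obtained from the process engine:

AuthorizationService authorizationService = processEngine.getAuthorizationService();

// GRANT the group "support" ACCESS to the application "tasklist"
Authorization auth = authorizationService.createNewAuthorization(Authorization.AUTH_TYPE_GRANT);
auth.setGroupId("support");
auth.setResource(Resources.APPLICATION);
auth.setResourceId("tasklist");
auth.addPermission(Permissions.ACCESS);
authorizationService.saveAuthorization(auth);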
Furthermore, you want one of the new users to get access to Cockpit. Therefore add a new user-specific rule:
This specific rule is only valid for the user 'lemmy' and provides him with additional authorization.
Login with the new user accounts and test if you can access the desired application.
Depending on the user's authorization, Tasklist will show you information about your colleagues and groups. Currently, you can only see the group folder support but not your colleague. To change that, log in to the Admin application as administrator, enter the Users Authorization menu and create the following rules:
Now every member of the group support is able to see the new users lemmy and ozzy.
If you connect the camunda BPM platform to the LDAP identity service, you have read-only access to users and groups. Create new users and groups via the LDAP system, not in the Admin application. Find more information about how to configure the process engine to use the LDAP identity service here.
Some visual aspects of the web interface can be configured in the _vars.less file (located in webapps/camunda-webapp/webapp/src/main/webapp/assets/styles/) as follows:

Header colors: you can change the values of the @main-color and @main-darker variables.

Header logo: you can either override the app-logo.png image file located in webapps/camunda-webapp/webapp/src/main/webapp/assets/img/admin/ or override the @logo-admin variable to point to another image file.
The camunda Modeler is an open source BPMN 2.0 modeling plugin for Eclipse which focuses on seamless modeling of process and collaboration diagrams. The camunda Modeler supports the complete BPMN 2.0 standard.
After you have installed the camunda Modeler in Eclipse you can start to set up your environment. The modeler IDE is split into the following three parts:
This view provides a hierarchical view of the resources in your workspace. Projects and files are displayed here. To open the Project Explorer, click Window / Show View / Other... / General / Project Explorer. In the Project Explorer you can add, delete and rename files. Furthermore, you can copy files from or to the explorer.
The Properties Panel allows you to maintain BPMN and camunda BPM vendor extensions in your diagrams. To open this view, click Window / Show View / Other... / General / Properties.
To open the diagram canvas, right-click on a *.bpmn file in the Project Explorer and select Open With / Bpmn2 Diagram Editor. On the right hand side of the screen, the Palette offers you all BPMN 2.0 elements grouped into different sections. You can add elements to your diagram by dragging and dropping them onto the Diagram Canvas.
Before you can create a BPMN file, you need a project. You can create projects by right-clicking in the Project Explorer and selecting New / Project, or via the menu File / New / Project .... A simple General / Project is suitable for working with BPMN 2.0 files. For process application development, select a Java Project.
To add a new BPMN 2.0 file, select File / New / Other / BPMN / BPMN 2.0 Diagram. You can choose a location for the new file. Please note that this input is mandatory.
Now you can start to create a BPMN 2.0 model. Add the desired elements from the palette on the right hand side by dragging and dropping them onto the diagram canvas. Alternatively, you can add new elements by using the context menu that appears when you hover over an element in the diagram. The type of an element can easily be changed by the morph function in the context menu.
In the properties panel you can view and edit the element-specific attributes, grouped into different tabs. Select the desired element and start editing the properties.
You can extend the modeler to ship reusable custom tasks through custom task providers.
The following functionality is exposed to custom task providers and thus usable when implementing custom tasks:
Head over to the custom task tutorial to learn more about how to provide custom tasks. You may also check out the advanced custom task example that showcases most of the options.