This is documentation about engine internals. If you are a Camunda user, i.e., you develop process applications, please consult the User Guide.
This guide is for people developing the Camunda BPM platform itself and anyone interested in it. The concepts described here are mostly not part of the public API but are internals. The contents of this chapter may change at any time.
In the following sections, we will dig a little deeper into the engine's internal representation of process instances, the execution tree and its properties.
In order to execute processes, the process engine has a hierarchical representation of process instances. This representation consists of so-called executions. Whenever process execution reaches a wait state, the hierarchy of executions (in the following also referred to as execution tree) is persisted to the database table ACT_RU_EXECUTION
and read from there when execution continues. The tree roughly corresponds to the hierarchy of active BPMN activity instances but is not the same. The following describes how an execution tree is structured and what an execution represents.
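As an illustration of this persistence, the tree can be pictured as rows with a parent pointer. The following sketch uses SQLite and column names modeled after Camunda's ACT_RU_EXECUTION table (ID_, PARENT_ID_, ACT_ID_, IS_SCOPE_, IS_CONCURRENT_); it is a toy model, not the engine's real DDL:

```python
import sqlite3

# Toy model of ACT_RU_EXECUTION; the column names follow Camunda's schema,
# but this is an illustration, not the actual create script.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ACT_RU_EXECUTION (
        ID_ TEXT PRIMARY KEY,
        PARENT_ID_ TEXT,
        ACT_ID_ TEXT,
        IS_SCOPE_ INTEGER,
        IS_CONCURRENT_ INTEGER
    )""")

# Persist a small tree: root execution 1 with two concurrent,
# non-scope children executing activities A and B.
conn.executemany(
    "INSERT INTO ACT_RU_EXECUTION VALUES (?, ?, ?, ?, ?)",
    [("1", None, None, 1, 0),
     ("2", "1", "A", 0, 1),
     ("3", "1", "B", 0, 1)])

# Reading the tree back when execution continues: fetch the children
# of the process instance via the parent pointer.
children = conn.execute(
    "SELECT ID_, ACT_ID_ FROM ACT_RU_EXECUTION "
    "WHERE PARENT_ID_ = '1' ORDER BY ID_").fetchall()
print(children)  # prints: [('2', 'A'), ('3', 'B')]
```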
Before considering runtime state, let's define the notion of activities the process engine has. These include BPMN activities, gateways, events, as well as the process definition itself. Activities may be scopes or not. An activity is a scope if it is a variable scope according to BPMN and Camunda (examples: process definition, subprocesses, multi-instance activities, activities with input/output mappings) or if it defines a context in which events can be received (examples: any activity with a boundary event, any activity containing an event subprocess, any catching intermediate event, any event-based gateway).
Consider the following example:
Out of these, the following activities are scope activities:
The following activities are not scope activities:
The execution tree's structure and properties depend on whether the current activities are scopes or not.
An execution is an entity responsible for executing a part of a process instance. Many other instance-related entities are linked to it. These are:
Representing a process instance by more than one execution (in fact, a tree of executions) makes it convenient to manage the related entities mentioned above. For example, when an execution is removed, e.g., because it reaches an end event, all variables that reference it can be easily removed, while variables referencing other, still active executions are kept. In short, the lifetime and validity of an execution define the lifetime and validity of all the entities that reference it.
In the hierarchical structure of executions, different executions have different roles. These are reflected by a set of properties. The basic execution properties are the following:
Regardless of how complex the structure of a process instance is, the execution tree is composed of four basic patterns. By combining these patterns, an execution tree can represent arbitrary states of BPMN processes. The patterns are the following:
Activity A is not a scope, because it is not a variable scope according to BPMN and has no boundary event.
When A is active, the process instance is represented by the following execution tree:
There is a single execution 1, which is the process instance itself. That means it is responsible for executing the process definition. The process definition is a scope, so the attribute isScope is true. 1 is currently executing activity A, which explains the activityId setting.
Again, activities A and B are not scope activities.
When both A and B are active, the execution tree has the following structure:
Now there are three executions. Starting from the root, execution 1 is again the process instance and a scope, because the process definition defines a variable scope. It has two children, 2 and 3. Each of these is concurrent (isConcurrent is true) and the activities they execute are not scopes (isScope is false). Execution 1 has a null activityId because it is not responsible for actively executing an activity; 2 and 3, in contrast, have a non-null activityId.
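This tree can be sketched with a hypothetical Execution class whose properties mirror the ones above (this is not the engine's ExecutionEntity):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Execution:
    """Toy execution node with the basic execution properties."""
    id: str
    is_scope: bool = False
    is_concurrent: bool = False
    activity_id: Optional[str] = None
    children: List["Execution"] = field(default_factory=list)

# The concurrent-activities pattern: process instance 1 with two
# concurrent, non-scope children executing A and B.
one = Execution("1", is_scope=True)
one.children = [
    Execution("2", is_concurrent=True, activity_id="A"),
    Execution("3", is_concurrent=True, activity_id="B"),
]

assert one.activity_id is None  # the root does not execute an activity itself
assert all(c.is_concurrent and not c.is_scope for c in one.children)
```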
This time, activity A is a scope activity. Due to the boundary event, it defines a context in which events can be received. This context is valid for the lifetime of the activity instance. It therefore requires an extra execution.
Thus, when A is active, the execution tree looks as follows:
There are two scope executions. Execution 1 is the scope execution for the process definition scope, while 2 is the scope execution for activity A. Having two separate scope executions is important because, when activity A completes, only the event subscriptions of execution 2 should be removed.
Both A and B are scope activities and are located in the same parent scope.
When both are active, the execution tree is the following:
The executions 2 and 3 exist solely to represent the concurrency in the scope of 1. The scope executions 4 and 5 are responsible for executing the activities.
As mentioned above, there are different kinds of scope and non-scope activities. The above patterns are not specific to plain user tasks and user tasks with timer boundary events; they apply to all cases of non-scope activities (represented by plain user tasks) and scope activities (represented by user tasks with boundary events). To apply these patterns to a different type of activity, you only have to know whether it is a scope; then you can apply the corresponding pattern. For example, for a single task with an input/output mapping (i.e., a scope task), the execution tree looks exactly the same as in the pattern Single Scope Activity.
With arbitrary structures of activities and nesting in subprocesses, the execution tree becomes more complex than the four patterns shown above. However, the tree is only a composition of these patterns. Let's look at an example:
When both activities, A and B, are active, the tree looks as follows:
Notice how the execution structure of 1, 2, 3 and 4 is a mixed instance of the patterns Concurrent Non-Scope Activities and Concurrent Scope Activities. Execution 4 is responsible for executing the subprocess scope. Since activity B is a scope, 4 has a child execution 5. This is an instance of the pattern Single Scope Activity.
In general, each scope execution in a complex tree is the root of a pattern instance. Here, these scope executions are 1 and 4 with the above-mentioned patterns.
There are certain invariants for a consistent execution tree. The following statements should always hold:

- Executions that are actively executing an activity have a non-null activityId. All other executions have a null activityId.

If you understand why these invariants hold, you have very likely understood the contents of this chapter :)
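The activityId invariant can be checked mechanically. A minimal sketch, using a hypothetical Execution class rather than the engine's real entities:

```python
class Execution:
    """Toy execution node; not the engine's ExecutionEntity."""
    def __init__(self, id, activity_id=None, children=()):
        self.id = id
        self.activity_id = activity_id
        self.children = list(children)

def activity_id_invariant_holds(execution):
    """Inner executions must have a null activityId; leaves that execute
    an activity must have a non-null one."""
    if execution.children:
        return (execution.activity_id is None
                and all(activity_id_invariant_holds(c) for c in execution.children))
    return execution.activity_id is not None

# The concurrent-activities pattern: root 1 with leaves executing A and B.
tree = Execution("1", children=[Execution("2", "A"), Execution("3", "B")])
print(activity_id_invariant_holds(tree))  # prints: True
```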
Above, we have considered the static execution tree at a specific point in time. Of course, this tree changes during the course of a process instance, for example when activities start or complete, or a parallel gateway is executed. This is the task of the Process Virtual Machine (PVM). It is responsible for transforming the execution tree such that it always represents the current execution state according to the four patterns. For example, before starting the execution of a scope activity, the PVM makes sure to create a new child execution of the current execution and sets the properties accordingly.
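The last step can be sketched as follows; the names (enter_scope_activity, etc.) are hypothetical and do not correspond to the real PVM API:

```python
class Execution:
    """Toy execution node; not the engine's ExecutionEntity."""
    def __init__(self, id, is_scope=False, activity_id=None, parent=None):
        self.id = id
        self.is_scope = is_scope
        self.activity_id = activity_id
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

def enter_scope_activity(current, activity_id, new_id):
    """Before a scope activity starts, a PVM-style transformation creates
    a child scope execution that carries the activityId from now on."""
    current.activity_id = None
    return Execution(new_id, is_scope=True, activity_id=activity_id, parent=current)

process_instance = Execution("1", is_scope=True)
scope_execution = enter_scope_activity(process_instance, "A", "2")

assert scope_execution.parent is process_instance
assert scope_execution.is_scope and scope_execution.activity_id == "A"
assert process_instance.activity_id is None
```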
The following sections describe how we handle the development of SQL scripts and their adjustments, including patch level fixes. After reading this guide, you will know what is required to write the different SQL scripts, where they are located and how they are tested.
The Camunda Engine uses SQL databases as its storage backend, so it is necessary to develop the data definition language (DDL) for each supported database type, such as DB2.
The create and drop scripts contain all necessary logic to create the required schema for the Engine and Identity mechanism. This includes the creation and deletion of tables, foreign keys and indexes.

The naming convention for a create/drop script is:

activiti.${database_type}.${action}.${purpose}.sql

where ${database_type} is the database identifier, such as db2 or h2. ${action} specifies whether the script creates or drops the database schema. ${purpose} describes which part of the database schema is managed by the script; for example, the purpose engine denotes the creation of the necessary runtime engine tables.

A complete naming example for a create script is activiti.db2.create.engine.sql, for a drop script activiti.mssql.drop.history.sql.
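The convention can be expressed as a small helper that assembles such file names (create_drop_script_name is a hypothetical function, not part of the code base):

```python
def create_drop_script_name(database_type, action, purpose):
    """Build a script name following the convention
    activiti.${database_type}.${action}.${purpose}.sql."""
    if action not in ("create", "drop"):
        raise ValueError("action must be 'create' or 'drop'")
    return f"activiti.{database_type}.{action}.{purpose}.sql"

print(create_drop_script_name("db2", "create", "engine"))
# prints: activiti.db2.create.engine.sql
print(create_drop_script_name("mssql", "drop", "history"))
# prints: activiti.mssql.drop.history.sql
```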
The create and drop scripts are tested using the Engine test suite, which is executed against all supported databases. During the tests, the scripts are executed through the engine to create the necessary tables before the tests run and to drop them afterwards.
The existing create and drop scripts for the Engine can be found here.
As an open source project, Camunda BPM provides SQL upgrade scripts for all supported databases to migrate between minor versions. These scripts contain all schema changes necessary to upgrade the existing schema of the previous minor version to the following minor version, including the creation, alteration and deletion of tables, foreign keys and indexes.
The naming convention for creating an upgrade script is:

${database_type}_${purpose}_${old_minor_version}_to_${new_minor_version}.sql

where ${purpose} in this case denotes what is affected when you execute the script; currently engine is the only purpose. The placeholders ${old_minor_version} and ${new_minor_version} describe the two minor versions involved.

Examples: db2_engine_7.2_to_7.3.sql or mysql_engine_7.1_to_7.2.sql.
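The convention can also be captured in a regular expression, e.g., to parse or validate upgrade script names (a sketch; the character classes are assumptions about valid identifiers):

```python
import re

# Sketch: parse upgrade script names of the form
# ${database_type}_${purpose}_${old_minor}_to_${new_minor}.sql
UPGRADE_PATTERN = re.compile(
    r"^(?P<database_type>[a-z0-9]+)_(?P<purpose>[a-z]+)_"
    r"(?P<old_minor>\d+\.\d+)_to_(?P<new_minor>\d+\.\d+)\.sql$")

m = UPGRADE_PATTERN.match("db2_engine_7.2_to_7.3.sql")
print(m.group("database_type"), m.group("old_minor"), m.group("new_minor"))
# prints: db2 7.2 7.3
```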
The upgrade scripts are tested using the database upgrade project. During the execution of the project, the following steps are performed:
The source files can be found here.
When our customers or community users discover SQL schema related problems in a released minor version, we create so-called SQL patch level scripts. These scripts apply the necessary fixes for the bug, nothing more. Patch level and upgrade scripts have no intersection, i.e., they do not contain the same statements. Patch level scripts are released in a patch version of a specific minor version.
To create a patch level script, follow these guidelines:
1. Identify which databases and which Camunda BPM patch versions are affected.

2. Following the naming convention, create the corresponding patch scripts with the fix for each affected minor version branch. Start by creating them on master in the upgrade folder of the sql-scripts project of the distribution, located here. If master is affected too, the patch for it is done in the create and drop scripts only: since master is not yet released, its database changes are not shipped in a patch script.

3. Add the fix to the create and drop scripts of each affected database on each affected minor version branch, except master, where this is already done through the create scripts.

4. Make sure you have added all patch scripts for all versions on master in the upgrade folder. On each minor version branch, only those patch scripts are added that belong to the branch's minor version or a previously released minor version. E.g., on master, all patch scripts are available; on 7.2, only patch scripts affecting 7.2 and lower minor versions are available, i.e., 7.1 and 7.0; on 7.1, only patch scripts regarding 7.1 and lower are available.
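The branch rule can be stated as a small predicate (a sketch with versions simplified to major.minor strings; belongs_on_branch is a hypothetical helper):

```python
def belongs_on_branch(patch_minor, branch_minor):
    """A patch script is present on a minor branch iff it targets that
    branch's minor version or a previously released one."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(patch_minor) <= as_tuple(branch_minor)

# On the 7.2 branch, scripts for 7.2, 7.1 and 7.0 are present; 7.3 is not.
present = [v for v in ("7.0", "7.1", "7.2", "7.3") if belongs_on_branch(v, "7.2")]
print(present)  # prints: ['7.0', '7.1', '7.2']
```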
To get the testing right, some modifications must be made to the patch-new-schema section of the sql-maven-plugin and to the generate-test-resources phase configuration of the maven-antrun-plugin in the database upgrade and instance migration tests.
Regarding the maven-antrun-plugin, it is necessary to add a touch command for the patch script of the branch you are currently working on. This generates a dummy patch file during testing for databases where no patch is required.
The sql-maven-plugin is responsible for creating the old schema of the previous minor version, patching the old schema, upgrading to the new minor version, patching the new minor version, and finally dropping the schema after the tests.
However, a manual modification is only needed if the bug is not also present in the previous minor version. If the bug is present in the previous minor version, the patch is applied automatically in the patch-old-schema execution of the pom.xml. If this is not the case, you have to add it manually to the patch-new-schema execution of the pom.xml.
Document the new patch level scripts by adding them to the list of available SQL patch scripts in the migration guide. For each patch script file, describe: the affected Camunda BPM minor version, the full name of the patch file, a description of what it fixes, the affected databases and a link to the concrete CAM issue in our issue tracker. If the same fix is in multiple patch scripts, e.g., on different branches, also mention those patch scripts. This is important so that users know they may have already applied the fix through another patch script from a previous minor version branch.
The context: the minor versions 7.1 and 7.2 are affected, as well as the currently developed version 7.3 (i.e., the master branch).

Steps:

Identify the affected database and Camunda BPM versions: db2; 7.1.9, 7.2.4 and 7.3.0-SNAPSHOT.
Create the corresponding patch scripts for each affected branch: for 7.2, create the patch file db2_engine_7.2_patch_7.2.4_to_7.2.5.sql; for 7.1, create the patch file db2_engine_7.1_patch_7.1.9_to_7.1.10.sql.

Put all created patch scripts into the upgrade folder on master.

Since master is also affected, the fix for it has to be done in the activiti.db2.create.engine.sql and activiti.db2.drop.engine.sql scripts.

Add the fix to activiti.db2.create.engine.sql and activiti.db2.drop.engine.sql on the master, 7.2 and 7.1 branches.
Overview of the new files in the upgrade folder on each affected branch:

On master, the upgrade folder now looks like this:

upgrade
|-- db2_engine_7.1_patch_7.1.9_to_7.1.10.sql
|-- db2_engine_7.2_patch_7.2.4_to_7.2.5.sql

On 7.2, add the db2_engine_7.2_patch_7.2.4_to_7.2.5.sql and db2_engine_7.1_patch_7.1.9_to_7.1.10.sql patch scripts. The upgrade folder should then look like this:

upgrade
|-- db2_engine_7.1_patch_7.1.9_to_7.1.10.sql
|-- db2_engine_7.2_patch_7.2.4_to_7.2.5.sql

On 7.1, only the db2_engine_7.1_patch_7.1.9_to_7.1.10.sql patch script is added. The upgrade folder looks like this:

upgrade
|-- db2_engine_7.1_patch_7.1.9_to_7.1.10.sql
The following lines need to be added to the pom.xml of the database upgrade and instance migration tests; for each test project, add them to the maven-antrun-plugin and the sql-maven-plugin. On master and 7.2, nothing is added, because the patches of the previous minor version are applied automatically. On 7.1, the patch scripts for 7.1 must be added to the patch-new-schema section, because the 7.0 minor version does not apply the patch script as an old version, as happens on master and 7.2.
For 7.1:
...
<artifactId>maven-antrun-plugin</artifactId>
...
<target>
  ...
  <!-- patches for current minor version if any -->
  ...
  <!-- NEWLY ADDED FILES -->
  <touch file="${project.build.directory}/scripts-current/sql/upgrade/${database.type}_engine_${camunda.current.majorVersion}.${camunda.current.minorVersion}_patch_${camunda.current.majorVersion}.${camunda.current.minorVersion}.9_to_${camunda.current.majorVersion}.${camunda.current.minorVersion}.10.sql" />
</target>
...
<artifactId>sql-maven-plugin</artifactId>
<executions>
  ...
  <execution>
    <id>patch-new-schema</id>
    ...
    <configuration>
      <srcFiles>
        ...
        <!-- NEWLY ADDED FILES -->
        <srcFile>${project.build.directory}/scripts-current/sql/upgrade/${database.type}_engine_${camunda.current.majorVersion}.${camunda.current.minorVersion}_patch_${camunda.current.majorVersion}.${camunda.current.minorVersion}.9_to_${camunda.current.majorVersion}.${camunda.current.minorVersion}.10.sql</srcFile>
      </srcFiles>
    </configuration>
  </execution>
  ...
Add notes on the patch scripts to the list of available SQL Patch scripts.
For the patch script fixing 7.1.9, the following values are added:

- Camunda BPM minor version: 7.1
- Patch file: $DATABASE_engine_7.1_patch_7.1.9_to_7.1.10.sql (the same fix is also contained in $DATABASE_engine_7.2_patch_7.2.4_to_7.2.5.sql)
- Affected databases: DB2
- Issue: CAM-3565

For the patch script fixing 7.2.4, the following values are added:

- Camunda BPM minor version: 7.2
- Patch file: $DATABASE_engine_7.2_patch_7.2.4_to_7.2.5.sql (the same fix is also contained in $DATABASE_engine_7.1_patch_7.1.9_to_7.1.10.sql)
- Affected databases: DB2
- Issue: CAM-3565
The naming convention for creating a patch level script is:

${database_type}_${purpose}_${minor_version}_patch_${patch_version_with_bug}_to_${fix_patch_version}.sql

where ${database_type} describes the affected database, e.g., db2, mysql and so on. The placeholder ${purpose} denotes what is affected when you execute the script; currently the only purpose is engine. ${minor_version} is the version of the branch you are on, e.g., 7.2 on the 7.2 branch. ${patch_version_with_bug} describes the patch level version affected by the bug and ${fix_patch_version} describes the patch level version that contains the fix.

Examples: db2_engine_7.2_patch_7.2.4_to_7.2.5.sql or mssql_engine_7.1_patch_7.1.9_to_7.1.10.sql.
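As with the upgrade scripts, the patch level convention can be captured in a regular expression for validation (a sketch; the character classes are assumptions about valid identifiers):

```python
import re

# Sketch: validate patch level script names of the form
# ${database_type}_${purpose}_${minor}_patch_${buggy_patch}_to_${fix_patch}.sql
PATCH_PATTERN = re.compile(
    r"^(?P<database_type>[a-z0-9]+)_(?P<purpose>[a-z]+)_"
    r"(?P<minor_version>\d+\.\d+)_patch_"
    r"(?P<patch_version_with_bug>\d+\.\d+\.\d+)_to_"
    r"(?P<fix_patch_version>\d+\.\d+\.\d+)\.sql$")

m = PATCH_PATTERN.match("db2_engine_7.2_patch_7.2.4_to_7.2.5.sql")
print(m.group("minor_version"), m.group("fix_patch_version"))
# prints: 7.2 7.2.5
```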
The patch level scripts are tested using the same mechanism as the upgrade scripts.
The files can be found here, together with the upgrade scripts.