Data Flow & Context
The Execution Context
Every node receives an ExecutionContext instance that contains everything it needs to operate. Understanding the context is essential for building correct workflows.
class ExecutionContext
{
    public array $input;       // Output from the immediately preceding node
    public array $config;      // This node's configuration
    public array $sourceData;  // Original data from the trigger node
    public Execution $execution;
    public Node $node;

    // Methods
    public function getConfig(string $key, mixed $default = null): mixed;
    public function getFullContext(): array;
    public function setVar(string $key, mixed $value): void;
    public function getVar(string $key, mixed $default = null): mixed;
    public function getNodeResult(string $nodeId): array;
}

Data Propagation Model
Sequential $input
Each node receives the output of its direct predecessor as $context->input:
Trigger → [output: {timestamp: ...}]
↓ input
Data Model → [output: {data: [{id:1,...}, {id:2,...}]}]
↓ input
Filter → [output: {data: [{id:1,...}]}] (filtered)
↓ input
Send Mail

Source Data $sourceData
The trigger's original payload is preserved throughout the entire workflow in $context->sourceData. This allows any node to access the original event data even after it has been transformed by intermediate nodes.
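Inside a PHP Code node, this means the trigger payload stays reachable even after intermediate nodes have reshaped $input. A minimal sketch using a stand-in class (StubContext is invented for illustration and only mirrors the two properties involved):

```php
<?php
// Stand-in for the real ExecutionContext, for illustration only.
class StubContext
{
    public function __construct(
        public array $input,      // output of the previous node (already transformed)
        public array $sourceData, // untouched trigger payload
    ) {}
}

$context = new StubContext(
    input: ['data' => [['id' => 1]]],  // e.g. after a Filter node
    sourceData: ['event' => 'user.created', 'user_id' => 42],
);

// Even though $input has been filtered, the trigger payload is intact:
$originalUserId = $context->sourceData['user_id']; // 42
```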
Global Variables
The Set Variable node writes to $context->vars. Any downstream node can read these values using {{variables.key_name}} in templates or $context->getVar('key_name') in PHP Code nodes.
Variables are persisted in the context JSON column of the Execution record, so they survive across async job dispatches.
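Setting and reading a variable can be sketched as follows. The StubContext class below is a hypothetical stand-in that implements only the two method signatures shown above, without the JSON-column persistence:

```php
<?php
// Illustrative sketch of the variable store; the real class also
// persists $vars into the Execution's context JSON column.
class StubContext
{
    private array $vars = [];

    public function setVar(string $key, mixed $value): void
    {
        $this->vars[$key] = $value;
    }

    public function getVar(string $key, mixed $default = null): mixed
    {
        return $this->vars[$key] ?? $default;
    }
}

$context = new StubContext();
$context->setVar('order_total', 99.5);

$context->getVar('order_total');          // 99.5
$context->getVar('missing', 'fallback');  // 'fallback'
```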
Full Context Object
$context->getFullContext() returns a merged array combining:
[
    // Trigger payload (top-level)
    'event' => '...',
    'user_id' => 42,

    // Input from previous node
    'input' => [...],

    // Global variables
    'variables' => [
        'my_var' => 'value',
    ],

    // Results from named nodes
    'nodes' => [
        'node_uuid' => ['output' => [...]],
    ],

    // Current iteration item (inside For Each)
    'item' => [...],
]

This merged array is what {{tag}} resolution runs against.
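Tag resolution can be pictured as a dot-path lookup against that merged array. The resolvePath() helper below is invented for illustration; the real resolver may differ in details such as escaping and defaults:

```php
<?php
// Hypothetical dot-path resolver, shown only to illustrate how a tag
// like {{variables.my_var}} maps onto the merged context array.
function resolvePath(array $context, string $path): mixed
{
    $value = $context;
    foreach (explode('.', $path) as $segment) {
        if (!is_array($value) || !array_key_exists($segment, $value)) {
            return null; // unresolved tags fall back to null here
        }
        $value = $value[$segment];
    }
    return $value;
}

$full = [
    'event'     => 'user.created',
    'user_id'   => 42,
    'input'     => ['data' => [['id' => 1]]],
    'variables' => ['my_var' => 'value'],
];

resolvePath($full, 'variables.my_var'); // 'value'
resolvePath($full, 'user_id');          // 42
```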
Data Normalization
Voodflow normalises data passing through nodes:
- Arrays are passed as PHP arrays (not objects)
- Eloquent models are converted to arrays via ->toArray() before entering the pipeline
- Null values are preserved
- Non-serializable objects are logged as warnings and stripped
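These rules can be sketched as a single recursive pass. This is an illustrative approximation, not the engine's actual code: any object exposing toArray() stands in for an Eloquent model, and a stripped object is represented here as null rather than being removed:

```php
<?php
// Rough sketch of the normalization pass described above.
function normalize(mixed $value): mixed
{
    if (is_array($value)) {
        return array_map('normalize', $value); // recurse; keys are preserved
    }
    if (is_object($value) && method_exists($value, 'toArray')) {
        return normalize($value->toArray());   // models become plain arrays
    }
    if (is_object($value)) {
        // Non-serializable object: the real engine logs a warning and
        // strips it; this sketch simplifies that to null.
        return null;
    }
    return $value; // scalars and null pass through unchanged
}

// Stand-in for an Eloquent model, for illustration only.
class FakeModel
{
    public function toArray(): array
    {
        return ['id' => 7, 'name' => 'demo'];
    }
}

$normalized = normalize(['model' => new FakeModel(), 'count' => null]);
// ['model' => ['id' => 7, 'name' => 'demo'], 'count' => null]
```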
Context Size Management
The context object grows as nodes add data. For large datasets, avoid accumulating all records in the context. Instead:
- Use the Aggregate node to reduce large arrays into summaries
- Use the Transform node to project only the fields you need downstream
- Set VOODFLOW_CONTEXT_SIZE_WARNING_KB to be warned when the context grows large
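A rough sketch of how such a size check might look. The contextSizeKb() helper and the 256 KB fallback threshold are assumptions for illustration, not documented behaviour:

```php
<?php
// Hypothetical size check matching the warning knob mentioned above.
function contextSizeKb(array $context): float
{
    return strlen(json_encode($context)) / 1024;
}

// Fall back to an assumed 256 KB when the env var is not set.
$limitKb = (float) (getenv('VOODFLOW_CONTEXT_SIZE_WARNING_KB') ?: 256);

$context = [
    'data' => array_fill(0, 1000, ['id' => 1, 'payload' => str_repeat('x', 64)]),
];

if (contextSizeKb($context) > $limitKb) {
    // In the real engine this would be a logged warning, not an exception.
    error_log(sprintf('context is %.1f KB (limit %.0f KB)', contextSizeKb($context), $limitKb));
}
```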