The magic behind Siemplify is the Data Processing Engine (DPE). This component automatically analyzes, processes, and aggregates data to create Siemplify Cases. On the technical side, the Data Processing Engine is designed as a massively parallel framework that parses raw data, enriches it, maps and models it into a graph structure, groups related Alerts into a Case, and stores it in the database.
The DPE can be configured to suit any data type (Alerts, Enrichment, etc.) or data source (Splunk, QRadar, ELK, etc.), providing flexibility in the pre-processing and processing stages.
The following example shows how Splunk record fields are mapped into specific fields of the Siemplify data model.
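As an illustration of what such a mapping rule might look like, here is a minimal Python sketch. The field names on both sides are assumptions for the example; the actual mapping is configured in the Siemplify Console, not in code.

```python
# Hypothetical mapping table from raw Splunk field names to
# Siemplify data-model field names (illustrative names only).
SPLUNK_TO_SIEMPLIFY = {
    "_time": "StartTime",
    "src_ip": "SourceAddress",
    "dest_ip": "DestinationAddress",
    "signature": "RuleName",
}

def map_record(raw_record: dict) -> dict:
    """Translate a raw record, keeping only fields the model knows about."""
    return {
        model_field: raw_record[raw_field]
        for raw_field, model_field in SPLUNK_TO_SIEMPLIFY.items()
        if raw_field in raw_record
    }

splunk_record = {
    "_time": "2021-01-01T00:00:00Z",
    "src_ip": "10.0.0.5",
    "dest_ip": "198.51.100.7",
    "signature": "Brute Force Attempt",
}
mapped = map_record(splunk_record)
```

Unknown raw fields are simply dropped, which mirrors the idea that only fields with a configured mapping end up in the Siemplify data model.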
Following that, the DPE will model the mapped fields into a graph representation that is stored in the database.
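Conceptually, the graph modeling step links the entities extracted from each alert. The sketch below is a simplified stand-in for the DPE's internal representation, assuming (for illustration only) that each mapped alert carries `SourceAddress` and `DestinationAddress` entities:

```python
from collections import defaultdict

def build_entity_graph(alerts: list[dict]) -> dict[str, set]:
    """Link entities that appear together in the same alert (adjacency sets)."""
    adjacency = defaultdict(set)
    for alert in alerts:
        entities = [alert["SourceAddress"], alert["DestinationAddress"]]
        for a in entities:
            for b in entities:
                if a != b:
                    adjacency[a].add(b)
    return adjacency

graph = build_entity_graph([
    {"SourceAddress": "10.0.0.5", "DestinationAddress": "198.51.100.7"},
    {"SourceAddress": "10.0.0.5", "DestinationAddress": "203.0.113.9"},
])
```

Representing alerts this way makes the later grouping step natural: alerts whose entities land in the same connected region of the graph are candidates for the same Case.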
Both data mapping and data modeling rules can be updated using the Siemplify Console.
An SDK is also available for developing new data processing actions. The SDK contains the data model, common methods for working with different data formats, transformation functions, and more. The DPE also includes an internal dynamic data mapper that performs configurable, multi-level mapping from raw data to the Siemplify data models.
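A data processing action built with the SDK might look like the transformation sketched below. This is not the actual Siemplify SDK API; it assumes, for illustration, that an action is a plain callable that receives and returns a dict-shaped alert, and it uses Python's standard `ipaddress` module to normalize address fields:

```python
import ipaddress

def normalize_addresses(alert: dict) -> dict:
    """Example transformation action: canonicalize IP address fields.

    Non-IP values (hostnames, empty fields) are left untouched.
    """
    out = dict(alert)
    for field in ("SourceAddress", "DestinationAddress"):
        value = out.get(field)
        if value:
            try:
                out[field] = str(ipaddress.ip_address(value))
            except ValueError:
                pass  # not a literal IP address; keep the original value
    return out

normalized = normalize_addresses({"SourceAddress": "2001:0DB8::0001"})
```

Small, composable transformations like this are the kind of step the pre-processing stage can chain before mapping and modeling take place.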
Multiple instances of DPE can run in parallel, either on a single or multiple nodes, to allow scaling out.