This article describes the results of performance benchmark tests performed using Siemplify 5.5.0 software in March 2020. The tests were conducted on Siemplify-recommended hardware and software so that the results are reproducible in customer production environments.

Siemplify Performance Benchmark Test

The benchmark test was performed on dedicated virtual servers running CentOS 7.5 with the following specifications. Note that these are the recommended specifications for an all-in-one (AIO) deployment.

  • 16 CPU Cores
  • 32 GB RAM
  • 800 GB SSD
  • Siemplify version: 5.5.0

Benchmarking Process

The performance benchmark was executed using an automated test that injected batches of 50, 100, 500, and 1000 alerts and ran each alert's automated playbook to completion.

Ingestion Process

The ingestion process used the File Processor, which ingests alerts from JSON files. The files were ingested in batches.
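To make the alert shape concrete, the sketch below generates a batch of synthetic JSON alert files like those used in this test (each alert carries 5 security events and 23 entities, as described later in this article). All field names here are illustrative assumptions, not the documented File Processor schema.

```python
import json
import os

def build_alert(alert_id: int) -> dict:
    """Build a synthetic alert payload with 5 security events and 23
    entities, matching the benchmark's alert shape. Field names are
    assumptions for illustration only."""
    return {
        "alert_id": f"bench-{alert_id}",
        "events": [{"event_id": i, "type": "synthetic"} for i in range(5)],
        "entities": [{"identifier": f"entity-{i}"} for i in range(23)],
    }

def write_batch(directory: str, count: int) -> list:
    """Write `count` alerts as individual JSON files into the directory
    that the File Processor would watch."""
    paths = []
    for i in range(count):
        path = os.path.join(directory, f"alert_{i}.json")
        with open(path, "w") as fh:
            json.dump(build_alert(i), fh)
        paths.append(path)
    return paths
```

A batch run would simply call `write_batch(ingest_dir, 1000)` to stage the 1000-alert test case.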

The benchmarking process was executed using an automated test that performs batch alert parsing, mapping and classification, ingestion, and execution of a specified playbook. The measured result is the total time from receipt of the first alert to closure of the last case.

Each ingested alert includes 5 security events and 23 entities.
One playbook is executed per alert.
Each playbook performs the following actions:

  • Create insight
  • Add comment
  • Assign user
  • Change priority
  • Add tag
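The measurement described above (wall-clock time from the first injected alert until every case is closed) can be sketched as a simple polling loop. The `inject_alerts` and `all_cases_closed` callables are hypothetical hooks into the system under test, not Siemplify APIs.

```python
import time

def run_benchmark(inject_alerts, all_cases_closed, batch_size,
                  poll_interval=1.0):
    """Measure wall-clock time from injecting the first alert until every
    case is closed. `inject_alerts` and `all_cases_closed` are assumed
    hooks supplied by the test harness."""
    start = time.monotonic()
    inject_alerts(batch_size)       # batch-inject N alerts
    while not all_cases_closed():   # poll until the last case closes
        time.sleep(poll_interval)
    return time.monotonic() - start  # total elapsed seconds
```

Using a monotonic clock avoids skew from NTP adjustments during long runs.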

Results

The tables below show the average time for each test.

Single Server Test

Number of alerts executed in parallel   Time to complete all ingestion   Time to complete all Ontology & Grouping   Time to complete all playbooks
50                                      0.4s                             48s                                        2.1m
100                                     0.7s                             1.4m                                       4.1m
500                                     3.2s                             2.4m                                       6.12m
1000                                    6.1s                             4.6m                                       12.21m

Semi-Scale deployment (3 servers: 1 app server, 1 DPU server, 1 main DB node)

Number of alerts executed in parallel   Time to complete all ingestion   Time to complete all Ontology & Grouping   Time to complete all playbooks
50                                      0.25s                            24s                                        1.1m
100                                     0.4s                             48s                                        2.1m
500                                     1.7s                             1.7m                                       3.8m
1000                                    3.2s                             3.1m                                       7.4m
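The "time to complete all playbooks" column implies an end-to-end throughput figure that is useful for capacity planning. The snippet below derives it, treating the reported values as decimal minutes (an assumption; the article does not state the notation).

```python
def throughput_per_minute(alerts: int, playbook_minutes: float) -> float:
    """Alerts processed per minute, implied by the total time to
    complete all playbooks for a batch."""
    return alerts / playbook_minutes

# From the tables above (values read as decimal minutes):
# Single server, 1000 alerts: 1000 / 12.21 -> ~81.9 alerts/min
# Semi-scale,    1000 alerts: 1000 / 7.4   -> ~135.1 alerts/min
```

On these figures, the semi-scale deployment roughly doubles the single-server alert throughput at the 1000-alert batch size.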

NOTE: The data in this performance benchmark were processed without data compression. Results may vary based on many factors, including hardware specifications, system configuration, operating system and Linux distribution version, and the types of actions performed.
