Scaling IRIS+ Professional Hardware¶
This chapter outlines how to scale hardware for IRIS+ Professional to support up to 4,096 channels. For deployments exceeding that limit, multiple query nodes are required; future updates to this document will cover such advanced scaling scenarios.
Architecture¶
The diagram below illustrates the architecture of IRIS+ Professional. Indexer nodes ingest video streams via the RTSP protocol and generate metadata streams.
- The input video bit rate typically ranges from 1 to 5 Mbps per stream.
- Indexer nodes are equipped with GPUs to decode video and run deep learning models on incoming streams. They also include HDDs for video storage.
- Each indexer node significantly reduces the data rate of incoming video, compressing it to just 0.1 Mbps per metadata stream. As a result, 10,000 video streams generate an aggregated metadata rate of approximately 1 Gbps.
- The query node stores and processes metadata in real time or on demand for forensic analysis. To maximize I/O speed, query nodes use multiple NVMe SSDs (up to 24 per node).
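The aggregate bandwidth figure above can be checked with a quick calculation (a Python sketch; the stream count and per-stream metadata rate are the figures quoted in this chapter):

```python
# Aggregate metadata bandwidth arriving at the query node, using the
# per-stream metadata rate quoted above (~0.1 Mbps per stream).
streams = 10_000
metadata_mbps_per_stream = 0.1

aggregate_mbps = streams * metadata_mbps_per_stream
print(f"{aggregate_mbps / 1000:.1f} Gbps")  # → 1.0 Gbps
```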
Scaling Indexer Nodes¶
To determine the required hardware for the indexer nodes, consider the following parameters:
- Number of channels
- Average channel bit rate
- Video retention period
Standard Channel Definition¶
All calculations below assume standard channels, which have the following characteristics:
- Maximum resolution: 4K @ 10 FPS or 1080p @ 30 FPS
- Maximum bit rate: 10 Mbps
- Frame analysis rate: 4 FPS
- Feature vectors per frame: 4
- Speed/range tradeoff: Balanced
Note
If your channels differ from these specifications, additional calculations may be needed, potentially increasing resource costs.
GPU Requirements¶
The table below shows the maximum number of standard channels each GPU model can support. To determine the required number of GPUs, divide the total number of channels by the per-GPU capacity and round up.
GPU Model | Maximum Standard Channels |
---|---|
RTX 2000 ADA | 16 |
RTX 4000 ADA | 35 |
L4 | 27 |
A100 | 54 |
RTX 4060 | 21 |
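The GPU-count rule above can be sketched in Python (capacities copied from the table; the function name is illustrative):

```python
import math

# Maximum standard channels per GPU model, from the table above.
GPU_CAPACITY = {
    "RTX 2000 ADA": 16,
    "RTX 4000 ADA": 35,
    "L4": 27,
    "A100": 54,
    "RTX 4060": 21,
}

def gpus_required(channels: int, gpu_model: str) -> int:
    """Round up: a partially used GPU still counts as a whole GPU."""
    return math.ceil(channels / GPU_CAPACITY[gpu_model])

print(gpus_required(1000, "RTX 4000 ADA"))  # → 29
```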
Storage Requirements (HDD)¶
The system requires HDD storage for video streams based on the specified retention period.
The table below shows the required storage per channel (in GB) based on the bit rate and retention period.
Retention Period / Bit Rate | 0.5 Mbps | 1 Mbps | 2 Mbps | 4 Mbps |
---|---|---|---|---|
1 day | 6 | 11 | 22 | 44 |
10 days | 54 | 108 | 216 | 432 |
20 days | 108 | 216 | 432 | 864 |
30 days | 162 | 324 | 648 | 1296 |
To calculate the total required storage, multiply these values by the total number of channels.
Alternatively, use the following formula, which reproduces the table values (rounded up to the nearest GB):

Storage per channel (GB) = Bit rate (Mbps) × Retention (days) × 86,400 ÷ 8,000 ≈ Bit rate (Mbps) × Retention (days) × 10.8
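The storage-per-channel figures in the table above can be reproduced with a short calculation (decimal units assumed, i.e. 1 GB = 10⁹ bytes, with values rounded up to match the table):

```python
import math

def storage_gb_per_channel(bitrate_mbps: float, retention_days: int) -> int:
    """HDD storage per channel: bit rate x seconds per day, bits -> bytes -> GB.
    Rounds up to the nearest GB, matching the sizing table above."""
    gb = bitrate_mbps * retention_days * 86_400 / 8 / 1000
    return math.ceil(gb)

print(storage_gb_per_channel(1, 10))  # → 108
print(storage_gb_per_channel(2, 30))  # → 648
```

Multiply the result by the total number of channels to get the total required HDD capacity.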
Sizing¶
Once you have determined the required number of GPUs and total storage capacity, distribute them across nodes efficiently.
- GPU Allocation: The maximum number of GPUs per server depends on the manufacturer, typically ranging from 6 to 8 GPUs per node.
- Channel Distribution: After assigning GPUs to nodes, determine the number of channels per node based on GPU capacity.
- Storage Sizing: Allocate storage per node to match the channel count and retention requirements.
Each node should include an NVMe SSD (at least 250 GB) for the operating system.
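The three sizing steps above can be combined into a single sketch (a hypothetical helper; the per-GPU capacity, GPUs per node, and GB-per-channel inputs come from the tables in this chapter):

```python
import math

def size_indexer_nodes(channels, channels_per_gpu, gpus_per_node,
                       storage_gb_per_channel):
    """Return (node count, channels per node, HDD TB per node)."""
    gpus_total = math.ceil(channels / channels_per_gpu)          # GPU allocation
    nodes = math.ceil(gpus_total / gpus_per_node)                # node count
    channels_per_node = math.ceil(channels / nodes)              # channel distribution
    storage_per_node_tb = channels_per_node * storage_gb_per_channel / 1000
    return nodes, channels_per_node, storage_per_node_tb

# 1000 channels on RTX 4000 ADA (35 channels/GPU), 6 GPUs per node,
# 1 Mbps streams retained for 10 days (108 GB per channel):
print(size_indexer_nodes(1000, 35, 6, 108))  # → (5, 200, 21.6)
```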
CPU and RAM Allocation¶
The following table shows the required CPU cores and RAM for various channel counts.
Number of Channels | CPU Cores (Total) | RAM (GB) |
---|---|---|
16 | 8 | 16 |
32 | 12 | 32 |
64 | 20 | 32 |
128 | 32 | 48 |
256 | 64 | 80 |
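A minimal lookup over the table above, picking the smallest row that covers the requested channel count (the helper name and the behavior for counts beyond the table are assumptions):

```python
# (max channels, CPU cores, RAM in GB) — rows copied from the table above.
CPU_RAM_TABLE = [
    (16, 8, 16),
    (32, 12, 32),
    (64, 20, 32),
    (128, 32, 48),
    (256, 64, 80),
]

def cpu_ram_for(channels: int) -> tuple[int, int]:
    """Return (CPU cores, RAM GB) for the smallest covering table row."""
    for max_channels, cores, ram_gb in CPU_RAM_TABLE:
        if channels <= max_channels:
            return cores, ram_gb
    raise ValueError("more than 256 channels: distribute across more nodes")

print(cpu_ram_for(200))  # → (64, 80)
```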
Scaling the Query Node¶
The query node server is optimized for high I/O performance and must support multiple NVMe SSDs to handle large-scale data processing efficiently.
The table below provides a scaling guide for the query node based on the number of channels.
It assumes a maximum of 24 NVMe SSDs.
Number of Channels | NVMe SSDs | SSD Size (TB) | RAM (GB) | CPU Cores |
---|---|---|---|---|
32 | 1 | 1 | 64 | 16 |
64 | 2 | 1 | 64 | 24 |
128 | 4 | 1 | 64 | 32 |
256 | 8 | 1 | 128 | 48 |
512 | 16 | 1 | 256 | 64 |
1024 | 24 | 2 | 512 | 96 |
2048 | 24 | 3 | 512 | 128 |
4096 | 24 | 6 | 1024 | 192 |
This scaling ensures sufficient storage, memory, and compute power to meet increasing channel demands while maintaining optimal query performance.
For deployments exceeding 4,096 channels, multiple query nodes are required; future updates to this document will cover such advanced scaling scenarios.
SSD Requirements
Query nodes are optimized to maximize I/O speed; all SSDs must be high-speed NVMe drives.
Example Configuration¶
For a deployment handling 1000 channels with the following parameters:
- Bit rate per channel: 1 Mbps
- Retention period: 10 days
- Channel type: Standard
Based on the sizing rules above, the system requires 5 indexer nodes with the following specifications:
Component | Specification |
---|---|
GPUs | 6× RTX 4000 ADA |
CPU | 64 cores |
RAM | 80 GB |
Storage | 4× 8TB HDDs (32TB total) + 1× NVMe SSD (250GB for OS) |
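As a rough check on the indexer storage in this example (assuming the 1,000 channels are split evenly across the 5 nodes, with the 108 GB-per-channel figure from the HDD table for 1 Mbps over 10 days):

```python
# Per-node HDD storage check for the example deployment.
channels_per_node = 1000 / 5        # 200 channels per indexer node
gb_per_channel = 108                # HDD table: 1 Mbps, 10-day retention
required_tb = channels_per_node * gb_per_channel / 1000
provisioned_tb = 4 * 8              # 4x 8 TB HDDs per node

print(required_tb, provisioned_tb)  # → 21.6 32
```

The 32 TB provisioned per node comfortably covers the 21.6 TB required, leaving headroom for filesystem overhead and bit-rate spikes.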
Based on the scaling table, the system requires the following query node configuration:
Number of Channels | NVMe SSDs | SSD Size (TB) | RAM (GB) | CPU Cores |
---|---|---|---|---|
1024 | 24 | 2 | 512 | 96 |
With these node requirements in mind, the example below shows a matching configuration with specific hardware models:
CyberServe Xeon SP2-412G-GPU G3¶
- Gigabyte G492-H80 - 4U - Xeon SP - 12x SATA - Dual 10Gb/s LAN - Redundant 2200W
- 2x Intel Xeon Gold 6430 Processor – 32 Cores each (60M Cache, 2.10/3.60 GHz) 270W
- 10x 8GB 3200MHz DDR4 ECC Registered DIMM Module (Total: 80 GB RAM)
- 4x 8TB Enterprise Class SATA 7200RPM - 3.5" - Enterprise Class Drive
- 480GB Solidigm / Intel SSD D3-S4520 DataCentre SERIES 2.5IN SATA3
- 6x NVIDIA RTX4000 ADA - 6144 CUDA Cores, 20GB ECC GDDR6
CyberServe EPYC EP2 224 NVMe-G G4¶
- CyberServe R283-Z92-AAE1 2U, 24x 2.5" Hot-Swappable Gen4 NVMe / SATA / SAS
- 2x AMD EPYC 9654 - 96 Cores each, 2.4/3.7GHz, 384MB Cache (360 Watt)
- 4x 128GB 4800MT/s DDR5 ECC Registered DIMM Module
- 24x Intel / Solidigm P5520 1.9TB Drive - 2.5 NVMe U.2 PCIe-4.0 Drive