Videos¶
IRIS+ Professional uses both live and recorded (historic) videos as its inputs. The videos can be added from RTSP streams or uploaded from your local storage. The videos are then processed by the Indexer, which extracts the necessary information from the video stream. The extracted information is then used by the queries to generate alerts.
Manage Videos¶
Make sure to first consult the video requirements.
Videos are grouped into folders. On first login, your Workspace will be displayed, which acts as the root folder for all videos and queries - videos are created here by default. You can create subfolders to organize your videos better.
Videos can be filtered by their status (e.g., Indexing, Uploading, Error) or by the folder they are in.
Add Camera¶
- On the left sidebar, navigate to the folder where you would like to add videos.
- Click the +Add button in the upper right corner of the screen, then select the type of video source you would like from the dropdown menu.
- Type the RTSP URL of your camera into the text field, then click Test RTSP.
If the camera meets all requirements, the test succeeds and a thumbnail preview of the stream is displayed on the right, together with metadata such as the resolution, frame rate, and codec of your stream. (For an optional way to sanity-check the URL outside the application, see the sketch after these steps.)
Note that the camera is not registered yet at this point.
- If the test is successful, click Add Camera to register the camera and start indexing.
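If Test RTSP fails, it can help to rule out network or credential problems outside the application first. The following is a minimal sketch using OpenCV and is independent of IRIS+ Professional; the URL shown is a placeholder for your own stream.

```python
# Hypothetical pre-check, independent of IRIS+ Professional:
# verify that an RTSP URL is reachable and inspect its basic properties.
import cv2

rtsp_url = "rtsp://user:password@192.168.1.10:554/stream1"  # placeholder, replace with your camera's URL

cap = cv2.VideoCapture(rtsp_url)
if not cap.isOpened():
    raise RuntimeError("Could not open the RTSP stream; check the URL, credentials, and network access")

ok, frame = cap.read()
if not ok:
    raise RuntimeError("Stream opened but no frame could be read")

width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS)

print(f"Resolution: {width}x{height}, reported FPS: {fps:.1f}")
cap.release()
```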
Video details¶
- Camera name: A name identifying your camera.
- Technical name: A unique identifier for the camera.
- Description: Describe the camera here.
- Retention time (in days, 1 by default): The number of days for which the video data is kept; after this period, the data is deleted.
Analysis¶
This bar shows the capacity of the Indexer to process video streams.
Analysis Parameters¶
Note
Currently, it is not possible to edit indexing parameters after the camera has been added. If you need to change them, delete the camera and add it again with the new parameters.
Here you can set the parameters for indexing the video.
- Detector FPS (4 by default): The number of frames the detector will analyse per second.
Warning
A higher FPS lets the detector analyse more frames, which improves detection accuracy, but it also increases GPU usage. The default value of 4 FPS is a good compromise between accuracy and processing needs. Consult the hardware requirements for more information.
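As a rough, back-of-the-envelope illustration of why this setting drives GPU load, the sketch below estimates the detector's frame throughput; the number of cameras is an assumption, adjust it to your own setup.

```python
# Rough sizing sketch (not an official sizing tool): frames the detector
# must process per hour for a given Detector FPS setting.
detector_fps = 4   # default value
cameras = 8        # assumed number of streams

frames_per_hour = detector_fps * 3600 * cameras
print(f"{frames_per_hour:,} detector frames per hour across {cameras} cameras")
# 4 FPS * 3600 s * 8 cameras = 115,200 frames per hour;
# doubling the FPS doubles this load, which is why GPU usage grows with the setting.
```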
Enables the extraction of attributes from the video stream, for all object types (on by default).
- Number of feature vectors (2 by default): The number of feature vectors to extract from the video. The optimal number depends on how many objects appear in the scene. For scenes with little or rare activity, leave the default value. As the number of objects increases, consider raising the number of feature vectors so that no objects are missed.
Enables the extraction of face attributes from the video stream (off by default).
- Number of feature vectors (2 by default): The number of feature vectors to extract from the video. The optimal number depends on how many faces appear in the scene. For scenes with little or rare activity, leave the default value. As the number of faces increases, consider raising the number of feature vectors so that no faces are missed.
This feature is currently unavailable for editing. It will be supported in a future release.
Enables the extraction of attributes from the video background (on by default).
- Max background vector calculations per frame (1 by default): The maximum number of background vector calculations per frame. The optimal value depends on how likely the background is to change. If the background is static, set it to 1. If the background changes frequently (e.g. drone footage or a PTZ camera), set it to 2 or more.
Feature vectors
Feature vectors are quantifiable attributes extracted from the video stream. They are used to identify objects in the video and can be used for various purposes, such as object tracking, classification, and recognition.
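To make the idea concrete, the sketch below compares two feature vectors with cosine similarity. The vectors and the similarity measure are illustrative assumptions only; they do not describe IRIS+ Professional's internal matching logic.

```python
# Illustrative only: feature vectors are numeric descriptors, so two detections
# can be compared with a similarity measure such as cosine similarity.
import numpy as np

vector_a = np.array([0.12, 0.80, 0.33, 0.05])  # made-up descriptor of one detection
vector_b = np.array([0.10, 0.78, 0.35, 0.07])  # made-up descriptor of another detection

cosine_similarity = np.dot(vector_a, vector_b) / (np.linalg.norm(vector_a) * np.linalg.norm(vector_b))
print(f"similarity: {cosine_similarity:.3f}")  # values close to 1.0 suggest the same object
```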
Click Add Camera to register your stream and start indexing.
Upload Video¶
- On the left sidebar, navigate to the folder where you would like to add videos.
- Click the +Add button in the upper right corner of the screen, then select the type of video source you would like from the dropdown menu.
- Click Upload Video and select the video file from your local storage.
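Optionally, you can inspect a file's resolution, frame rate, and duration before uploading it, for example when deciding on the Start date and time or the Detector FPS. The sketch below uses OpenCV and is independent of IRIS+ Professional; the file name is a placeholder.

```python
# Hypothetical pre-check, independent of IRIS+ Professional:
# read a local file's resolution, frame rate, and duration before uploading it.
import cv2

path = "recording.mp4"  # placeholder, replace with your own file

cap = cv2.VideoCapture(path)
if not cap.isOpened():
    raise RuntimeError("Could not open the video file")

fps = cap.get(cv2.CAP_PROP_FPS)
frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
duration_s = frames / fps if fps else 0

print(f"{width}x{height} @ {fps:.1f} FPS, ~{duration_s:.0f} s")
cap.release()
```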
Video details¶
- Video name: A name identifying your video, auto-filled by default.
- Technical name: A unique identifier for the video.
- Description (optional): Describe the video here.
- Start date and time: The value set here is used for event timestamp generation. Type it or select one by clicking the calendar icon. The current date and time are used by default.
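To illustrate how the Start date and time relates to event timestamps, the sketch below offsets an assumed event position in the recording from an assumed start time; the exact mechanism in IRIS+ Professional may differ.

```python
# Illustrative sketch of the timestamping idea (not the product's exact logic):
# an event detected partway into the recording gets an absolute timestamp
# by offsetting it from the Start date and time set above.
from datetime import datetime, timedelta

start = datetime(2024, 5, 17, 8, 0, 0)             # assumed Start date and time
event_offset = timedelta(minutes=12, seconds=30)   # assumed position of the event in the video

event_timestamp = start + event_offset
print(event_timestamp)  # 2024-05-17 08:12:30
```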
Analysis Parameters¶
Note
Currently, it is not possible to edit the parameters after the video has been added. If you need to change them, delete the video and add it again with the desired parameters.
Here you can set the parameters for indexing the video.
- Detector FPS (4 by default): The number of frames the detector will analyse per second.
Warning
The higher the FPS, the more accurate the detection will be. However, this also increases processing time; it is recommended to leave the FPS at the default value of 4.
Enables the extraction of attributes from the video stream, for all object types (on by default).
- Number of feature vectors (2 by default): The number of feature vectors to extract from the video. The optimal number depends on how many objects appear in the scene. For scenes with little or rare activity, leave the default value. As the number of objects increases, consider raising the number of feature vectors so that no objects are missed.
Enables the extraction of face attributes from the video stream (off by default).
- Number of feature vectors (2 by default): The number of feature vectors to extract from the video. The optimal number depends on how many faces appear in the scene. For scenes with little or rare activity, leave the default value. As the number of faces increases, consider raising the number of feature vectors so that no faces are missed.
This feature is currently unavailable for editing. It will be supported in a future release.
Enables the extraction of attributes from the video background (on by default).
- Max background vector calculations per frame (1 by default): The maximum number of background vector calculations per frame. The optimal value depends on how likely the background is to change. If the background is static, set it to 1. If the background changes frequently (e.g. drone footage or a PTZ camera), set it to 2 or more.
Feature vectors
Feature vectors are quantifiable object attributes extracted from the video stream. They are used to identify objects in the video and can be used for various purposes, such as object tracking, classification, and recognition.
Click Upload Video to register your video and start indexing.
Next steps¶
Once you have added your videos, you can proceed to running queries.