
Installation

IRIS+ Professional can be installed using Ansible to automate the deployment process. This guide provides step-by-step instructions.

Tip

It is recommended to have a high-speed network connection, as several gigabytes of data will be downloaded during installation.

Docker credentials required

You will need valid Docker credentials to proceed with the installation. Make sure you have your Docker username and password ready, as they will be required in later steps. If you do not have these credentials, please contact technicalsupport@irisity.com to obtain them before continuing.

Installation via Ansible requires two types of machines:

  • Control Machine: The machine where Ansible is installed. It runs the playbooks and coordinates operations.

  • Target Machine: The machines where IRIS+ Professional is to be installed or updated. Target machines are classified by deployment type, which can be core or indexer. There must be at least one core node and one indexer node, in either distributed or standalone mode. Indexer nodes can be scaled horizontally.

You must have SSH capability from the Control to the Target machines.
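If key-based SSH from the control machine to the targets is not yet set up, one way to do it is sketched below; the user, host, and port are placeholders, so substitute your own values.

```shell
# Generate a key pair on the control machine, if one does not already exist
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""

# Install the public key on a target node (placeholder user/host/port)
ssh-copy-id -i ~/.ssh/id_ed25519.pub -p 22 ubuntu@192.0.2.10

# Confirm passwordless login works
ssh -p 22 ubuntu@192.0.2.10 'echo ok'
```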

Info

A single machine can function as both the control and the target machine in an Ansible setup. This means you can install Ansible on the same machine as IRIS+ Professional and include it in the inventory, allowing it to manage itself along with other machines. This configuration enables the machine to execute Ansible tasks on itself as well as on other target systems.


Prerequisites

Installation requires sudo rights.

Logging

Logging is based on your journald configuration. Changes to parameters such as log retention time and disk space usage can be made by modifying the journald configuration.
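As an illustrative sketch, retention time and disk usage can be capped in /etc/systemd/journald.conf; the values below are examples, not recommendations.

```ini
# /etc/systemd/journald.conf
[Journal]
SystemMaxUse=4G        # cap total disk space used by persistent journal logs
MaxRetentionSec=30day  # discard entries older than 30 days
```

Apply changes with sudo systemctl restart systemd-journald.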

Ubuntu

1. System update

Make sure your operating system is up-to-date.

sudo apt update && \
sudo apt upgrade -y && \
sudo reboot

2. Add Docker GPG key and repository

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg && \
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

3. (Indexer nodes only) Add NVIDIA Docker GPG key and repository

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg && \
echo 'deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://nvidia.github.io/libnvidia-container/stable/deb/$(ARCH) /' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list > /dev/null

4. Install required dependencies

Indexer nodes:

sudo apt update && \
sudo apt -y install gpg ca-certificates curl software-properties-common docker-ce docker-ce-cli docker-compose-plugin containerd.io python3-pip python3-venv nvidia-driver-580 nvidia-docker2 openssh-server && \
sudo reboot

Core nodes:

sudo apt update && \
sudo apt -y install gpg ca-certificates curl software-properties-common docker-ce docker-ce-cli docker-compose-plugin containerd.io python3-pip python3-venv openssh-server && \
sudo reboot

Debian

1. System update

Make sure your operating system is up-to-date.

sudo apt update && \
sudo apt upgrade -y && \
sudo reboot

2. Add Docker GPG key and repository

curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg && \
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

3. (Indexer nodes only) Add NVIDIA Docker GPG key and repository

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg && \
echo 'deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://nvidia.github.io/libnvidia-container/stable/deb/$(ARCH) /' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list > /dev/null

4. (Indexer nodes only) Install NVIDIA CUDA keyring

wget https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/cuda-keyring_1.1-1_all.deb -P /tmp && \
sudo dpkg -i /tmp/cuda-keyring_1.1-1_all.deb && \
sudo rm /tmp/cuda-keyring_1.1-1_all.deb

5. Install required dependencies

Indexer nodes:

sudo apt update && \
sudo apt -y install linux-headers-$(uname -r) nvidia-driver cuda-drivers ca-certificates curl software-properties-common docker-ce docker-ce-cli docker-compose-plugin containerd.io python3-pip python3-venv nvidia-docker2 openssh-server && \
sudo reboot

Core nodes:

sudo apt update && \
sudo apt -y install ca-certificates curl software-properties-common docker-ce docker-ce-cli docker-compose-plugin containerd.io python3-pip python3-venv openssh-server && \
sudo reboot

You can verify that Docker and the NVIDIA GPU driver are installed correctly by running the following commands:

nvidia-smi
docker run hello-world
Expected output
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.65.06              Driver Version: 580.65.06      CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 2070        Off |   00000000:01:00.0 Off |                  N/A |
| 23%   34C    P8             36W /  175W |       1MiB /   8192MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
  (amd64)
3. The Docker daemon created a new container from that image which runs the
  executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
  to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/get-started/

Installation

1. Install Ansible

Note that the installer requires Ansible to be installed on the control machine.

sudo apt -y install python3 python3-pip python3-venv && \
python3 -m venv ~/.professional_venv && \
source ~/.professional_venv/bin/activate && \
pip3 install ansible==10.7.0 jmespath dacite pandas && \
deactivate

2. Download and extract the Ansible installer

wget https://professional.irisity.com/docs/r28/ansible_playbook.tar -P /tmp && \
mkdir -p ~/ansible-installer && \
tar -xvf /tmp/ansible_playbook.tar -C ~/ansible-installer && \
rm /tmp/ansible_playbook.tar && \
cd ~/ansible-installer

3. Set up the Ansible inventory

Info

If your control machine and target machine are both on a single machine, 127.0.0.1 can be used for ansible_host.

Set up the Ansible inventory file under inventories/all.yaml.

all:
  hosts:
    YOUR_TARGET_NODE_NAME_HERE:
      ansible_host:  # TARGET_NODE_IP_HERE
      ansible_port:  # TARGET_NODE_PORT_HERE
  • Replace YOUR_TARGET_NODE_NAME_HERE with the name of your target node.

  • ansible_host: The IP address or domain name of the target node.

  • ansible_port: The SSH port number of the target node.
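A filled-in inventory with one core and one indexer node might look like the following; the node names, IP addresses, and ports are placeholders.

```yaml
all:
  hosts:
    core-node:
      ansible_host: 192.0.2.10
      ansible_port: 22
    indexer-node-1:
      ansible_host: 192.0.2.11
      ansible_port: 22
```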

Checking connectivity

You can check connectivity via the nc command:

nc -z TARGET_NODE_IP_HERE TARGET_NODE_PORT_HERE && echo "Connectivity succeeded!" || echo "Connectivity failed!"

4. Set up variables

Create a directory for host-specific variables in the document root of the installer and copy the node-class-specific example variables:

mkdir -p host_vars/YOUR_TARGET_NODE_NAME_HERE && \
cp examples/NODE_TYPE/vars.yaml host_vars/YOUR_TARGET_NODE_NAME_HERE/

  • NODE_TYPE: The type of the node, which can be core, indexer, or standalone. This is used to determine the role of the node in the distributed system.

Contact technicalsupport@irisity.com for docker credentials.

Fill in host_vars/YOUR_TARGET_NODE_NAME_HERE/vars.yaml:

  • The ansible_user parameter, which specifies the remote user account that Ansible uses to connect to managed hosts via SSH.

  • The docker_username and docker_password variables.

  • init_sysadmin_email: The initial system administrator email address.

  • init_sysadmin_pwd: The initial system administrator password.

  • metadata_broker_ip: The network endpoint (IP address) of the core machine that the indexer node must be able to access for proper operation.

  • metadata_storage_limit_mb: The maximum size of the metadata (indexes, imagesets and results) storage in megabytes. This is used to limit the amount of metadata that can be stored on the core node.

  • volume_kafka_folder: The root folder for the message broker data directory. This is where the message broker will store its data. By default, it is set to /var/lib/u-query/kafka.

  • internal_encryption_enabled: Can be true or false. If set to true, remote indexer nodes (for standalone nodes: new nodes; for core nodes: any indexer node) will communicate with the core/standalone node via TLS encryption only. A CA certificate is automatically generated during installation.

    • Using your own CA certificate

      Set the internal_connections_custom_ca_enabled variable to true to use your own CA certificate for internal encryption. You must provide the CA certificate file and its key at host_files/YOUR_TARGET_NODE_NAME_HERE/tls.

      If your CA certificate’s key has a passphrase, you need to use the internal_connections_custom_ca_key_passphrase parameter.

Adding Indexers via TLS

In a distributed deployment, the core node and indexer nodes need to communicate with each other. If the above variable is set to true, when adding new indexer nodes to the deployment, make sure to follow the steps in the Add Indexers with TLS tab in the Indexers section.

  • primary_indexer_ip: The network endpoint (IP address) of the indexer machine that the core node must be able to access for proper operation. Optional in case of standalone deployment.

If encryption is enabled, the Indexers will also need to be added via TLS encryption.

  • videostorage_limit_gb (10 minimum): The maximum size of the video storage in gigabytes. This is used to limit the amount of video that can be stored on the indexer node.

  • volume_videostorage_folder: The root folder for video storage. This is where the video files will be stored. By default, it is set to /var/lib/u-query/video_storage.
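To illustrate, a vars.yaml might contain entries like the following. The exact set of variables depends on the example file for your node type, and every value here is a placeholder:

```yaml
# host_vars/YOUR_TARGET_NODE_NAME_HERE/vars.yaml (illustrative values only)
ansible_user: ubuntu
docker_username: YOUR_DOCKER_USERNAME
docker_password: YOUR_DOCKER_PASSWORD
init_sysadmin_email: admin@example.com
init_sysadmin_pwd: CHANGE_ME
metadata_storage_limit_mb: 10240
volume_kafka_folder: /var/lib/u-query/kafka
internal_encryption_enabled: false
primary_indexer_ip: 192.0.2.11
videostorage_limit_gb: 100
volume_videostorage_folder: /var/lib/u-query/video_storage
```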

HTTPS settings

You can enable HTTPS connectivity using the https_enabled variable in host_vars/YOUR_TARGET_NODE_NAME_HERE/vars.yaml. If you need HTTPS connectivity, select the certificate_type:

  • In the case of 'official_ca', our reverse proxy will attempt to generate a Let's Encrypt signed certificate.

    • It is required to fill in the domain variable as well.

    • An automatic renewal process is included.

    • The domain's A record must be a public IPv4 address that points to your target node.

    • The domain must be publicly accessible, since HTTP-01 validation is used for certificate generation and automatic renewal.

  • In the case of 'self_signed', our reverse proxy will attempt to generate a self-signed certificate.

    • It is required to fill in the domain variable as well.

    • An automatic renewal process is included.

  • In the case of 'custom', you can provide your own certificate and private key pair.

    • It is not required to fill in the domain variable.

    1. Create a directory relative to the installer's docroot:

      mkdir -p host_files/YOUR_TARGET_NODE_NAME_HERE/ssl

    2. Place the following in the created directory:

      • certificate.crt, which represents the certificate.

      • certificate.key, which represents the private key.
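If you only want to exercise the 'custom' path in a test environment, a throwaway self-signed pair can be generated with openssl as sketched below; the CN is a placeholder, and real certificates should be used in production.

```shell
mkdir -p host_files/YOUR_TARGET_NODE_NAME_HERE/ssl

# Create a self-signed certificate and key valid for one year (test use only)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=iris.example.com" \
  -keyout host_files/YOUR_TARGET_NODE_NAME_HERE/ssl/certificate.key \
  -out host_files/YOUR_TARGET_NODE_NAME_HERE/ssl/certificate.crt
```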

5. Start the installation

Start the Ansible playbook for installation:

source ~/.professional_venv/bin/activate && \
ansible-playbook start.yaml -t install && \
deactivate

Tip

If you can authenticate via an SSH private key, you can leave the value of SSH password blank. If there is no password requirement for sudo commands, you can leave the value of BECOME password blank as well.

You will be prompted for your SSH and sudo passwords for authenticating the target node.

SSH password:
BECOME password[defaults to SSH password]:

Note that installation may take up to 30 minutes.

Adding Indexers

TLS encryption

Adding Indexers with TLS encryption requires installation of the core node with TLS support. In that case, follow the steps in the Add Indexers with TLS tab below.

Add Indexers

  1. Go to Settings → Indexers.

Indexers

Note that the port number for the Video storage service shown above has changed from previous versions to 7177.

All Indexers added to IRIS+ Professional are listed here, along with their properties. You can also add new Indexers and delete existing ones.

  2. To add a new Indexer, fill in the form on the right side of the screen, then click Test to check the connection to the Indexer. If the connection is successful, click Submit to add the Indexer to IRIS+ Professional.

Add Indexers with TLS

  1. Go to Settings → Indexers.

Indexers

Note that the port number for the Video storage service shown above has changed from previous versions to 7177.

Fill in the Register new host fields, then download the storage and indexer certificate files. Click Test to check the connection to the Indexer. If the connection is successful, click Submit.

  1. Log in to the remote indexer node via SSH.

  2. Navigate to the /opt/u-query/ directory and open the .env file in a text editor.

  3. Set the INDEXER_NODE_TLS_ENABLED variable to true.

  4. Copy the downloaded JSON files:

    *-indexer-cert.json to /opt/u-query/config/certs/indexer-cert.json

    *-storage-cert.json to /opt/u-query/config/certs/storage-cert.json

  5. (Re)start the services:

sudo COMPOSE_FILE=$(ls compose*.yaml | sort -r | paste -sd:) docker compose up -d