Installation

Viseron runs exclusively in Docker.
First of all, choose the appropriate Docker container for your machine.

Builds are published to Docker Hub.

HOW DO I CHOOSE?

Have a look at the supported architectures below.

Supported architectures

Viseron's images support multiple architectures such as amd64, aarch64 and armhf.
Pulling roflcoopter/viseron:latest should automatically pull the correct image for your machine.
The exception is if you need a specific container, e.g. the CUDA version. In that case you will need to specify your desired image explicitly.

The images available are:

| Image | Architecture | Description |
| --- | --- | --- |
| roflcoopter/viseron | multiarch | Multiarch image |
| roflcoopter/aarch64-viseron | aarch64 | Generic aarch64 image, with RPi4 hardware accelerated decoding/encoding |
| roflcoopter/amd64-viseron | amd64 | Generic image |
| roflcoopter/amd64-cuda-viseron | amd64 | Image with CUDA support |
| roflcoopter/rpi3-viseron | armhf | Built specifically for the RPi3 with hardware accelerated decoding/encoding |
| roflcoopter/jetson-nano-viseron | aarch64 | Built specifically for the Jetson Nano with: GStreamer hardware accelerated decoding, FFmpeg hardware accelerated decoding, CUDA support |

Running Viseron

Below are a few examples of how to run Viseron.
Both docker and docker-compose examples are given.

IMPORTANT

You have to change the values between the brackets {} to match your setup.

64-bit Linux machine
docker run --rm \
-v {recordings path}:/recordings \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-p 8888:8888 \
--name viseron \
roflcoopter/viseron:latest
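
The same setup can be expressed with docker-compose. This is an untested sketch following the compose example later on this page; replace the bracketed paths to match your setup:

```yaml
version: "2.4"

services:
  viseron:
    image: roflcoopter/viseron:latest
    container_name: viseron
    volumes:
      - {recordings path}:/recordings
      - {config path}:/config
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 8888:8888
```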
64-bit Linux machine with VAAPI (Intel NUC for example)
docker run --rm \
-v {recordings path}:/recordings \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-p 8888:8888 \
--name viseron \
--device /dev/dri \
roflcoopter/viseron:latest
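
If you use docker-compose instead, the extra --device /dev/dri flag maps to the devices key of the service. A sketch of the relevant addition:

```yaml
    # add to the viseron service definition
    devices:
      - /dev/dri:/dev/dri
```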
64-bit Linux machine with NVIDIA GPU
docker run --rm \
-v {recordings path}:/recordings \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-p 8888:8888 \
--name viseron \
--runtime=nvidia \
roflcoopter/amd64-cuda-viseron:latest
caution

Make sure NVIDIA Docker is installed.
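
For docker-compose, the --runtime=nvidia flag and the CUDA image map to the following service keys (an untested sketch):

```yaml
    # use the CUDA image and the NVIDIA runtime in the viseron service
    image: roflcoopter/amd64-cuda-viseron:latest
    runtime: nvidia
```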

On a Jetson Nano
docker run --rm \
-v {recordings path}:/recordings \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-p 8888:8888 \
--name viseron \
--runtime=nvidia \
--privileged \
roflcoopter/jetson-nano-viseron:latest
caution

Running with --privileged is required so the container gets access to all the devices needed for hardware acceleration.
You can probably avoid this by manually mounting all the needed devices, but this is not something I have looked into.
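
In docker-compose terms, the Jetson Nano command above corresponds roughly to these service keys (an untested sketch):

```yaml
    # use the Jetson Nano image with the NVIDIA runtime and privileged mode
    image: roflcoopter/jetson-nano-viseron:latest
    runtime: nvidia
    privileged: true
```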

On a RaspberryPi 4
docker run --rm \
--privileged \
-v {recordings path}:/recordings \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-v /dev/bus/usb:/dev/bus/usb \
-v /opt/vc/lib:/opt/vc/lib \
-p 8888:8888 \
--name viseron \
--device=/dev/video10:/dev/video10 \
--device=/dev/video11:/dev/video11 \
--device=/dev/video12:/dev/video12 \
--device /dev/bus/usb:/dev/bus/usb \
roflcoopter/viseron:latest
caution

Viseron is quite RAM intensive, mostly because of the object detection.
I do not recommend using an RPi unless you have a Google Coral EdgeTPU.
The CPU is not fast enough and you might run out of memory.

tip

Configure a substream if you plan on running Viseron on an RPi.
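
The RPi4-specific flags above translate to docker-compose roughly as follows (an untested sketch; replace the bracketed paths):

```yaml
version: "2.4"

services:
  viseron:
    image: roflcoopter/viseron:latest
    container_name: viseron
    privileged: true
    volumes:
      - {recordings path}:/recordings
      - {config path}:/config
      - /etc/localtime:/etc/localtime:ro
      - /dev/bus/usb:/dev/bus/usb
      - /opt/vc/lib:/opt/vc/lib
    devices:
      - /dev/video10:/dev/video10
      - /dev/video11:/dev/video11
      - /dev/video12:/dev/video12
      - /dev/bus/usb:/dev/bus/usb
    ports:
      - 8888:8888
```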

RaspberryPi 3b+
docker run --rm \
--privileged \
-v {recordings path}:/recordings \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-v /opt/vc/lib:/opt/vc/lib \
-p 8888:8888 \
--name viseron \
--device /dev/vchiq:/dev/vchiq \
--device /dev/vcsm:/dev/vcsm \
--device /dev/bus/usb:/dev/bus/usb \
roflcoopter/viseron:latest
caution

Viseron is quite RAM intensive, mostly because of the object detection.
I do not recommend using an RPi unless you have a Google Coral EdgeTPU.
The CPU is not fast enough and you might run out of memory.

tip

To make use of hardware accelerated decoding/encoding you might have to increase the allocated GPU memory.
To do this edit /boot/config.txt and set gpu_mem=256 and then reboot.

tip

Configure a substream if you plan on running Viseron on an RPi.
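
A docker-compose sketch of the RPi3 command above (untested; replace the bracketed paths):

```yaml
version: "2.4"

services:
  viseron:
    image: roflcoopter/viseron:latest
    container_name: viseron
    privileged: true
    volumes:
      - {recordings path}:/recordings
      - {config path}:/config
      - /etc/localtime:/etc/localtime:ro
      - /opt/vc/lib:/opt/vc/lib
    devices:
      - /dev/vchiq:/dev/vchiq
      - /dev/vcsm:/dev/vcsm
      - /dev/bus/usb:/dev/bus/usb
    ports:
      - 8888:8888
```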

Viseron will start up immediately and serve the Web UI on port 8888.
Please proceed to the next chapter on how to configure Viseron.

VAAPI

VAAPI hardware acceleration support is built into every amd64 container.
To utilize it you need to add --device /dev/dri to your docker command.

EdgeTPU

EdgeTPU support is also included in all containers.
To use it, add -v /dev/bus/usb:/dev/bus/usb --privileged to your docker command.
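
In docker-compose, these flags correspond roughly to the following additions to the service (an untested sketch):

```yaml
    # add to the viseron service definition to expose an EdgeTPU
    privileged: true
    volumes:
      - /dev/bus/usb:/dev/bus/usb
```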

User and Group Identifiers

When using volumes (-v flags), permission issues can occur between the host and the container. To solve this, you can specify the user ID (PUID) and group ID (PGID) as environment variables to the container.

Docker command
docker run --rm \
-v {recordings path}:/recordings \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-p 8888:8888 \
--name viseron \
-e PUID=1000 \
-e PGID=1000 \
roflcoopter/viseron:latest
Docker Compose

Example docker-compose

version: "2.4"

services:
  viseron:
    image: roflcoopter/viseron:latest
    container_name: viseron
    volumes:
      - {recordings path}:/recordings
      - {config path}:/config
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 8888:8888
    environment:
      - PUID=1000
      - PGID=1000

Ensure the volumes are owned on the host by the user you specify. In this example PUID=1000 and PGID=1000.

tip

To find the UID and GID of your current user you can run this command on the host:

id your_username_here
note

Viseron runs as root (PUID=0 and PGID=0) by default.
This is because it can be problematic to get hardware acceleration and/or EdgeTPUs to work properly for everyone.
The s6-overlay init scripts do a good job at fixing permissions for other users, but you may still face some issues if you choose to not run as root.
If you do have issues, please open an issue and I will do my best to fix them.