FFmpeg

FFmpeg enables Viseron to read frames from cameras.

FFmpeg can be quite complex to work with, but in most cases the automatically generated commands will fit your needs.

Hardware acceleration is available on a wide variety of systems.

Configuration

Configuration example
ffmpeg:
  camera:
    camera_one:
      name: Camera 1
      host: 192.168.XX.X
      port: 554
      path: /Streaming/Channels/101/
      username: !secret camera_one_user
      password: !secret camera_one_pass
      substream:
        path: /Streaming/Channels/102/
        stream_format: rtsp
        port: 554
      mjpeg_streams:
        my_stream:
          width: 100
          height: 100
          draw_objects: true
          rotate: 45
          mirror: true
        objects:
          draw_objects: true
          draw_zones: true
          draw_motion: true
          draw_motion_mask: true
          draw_object_mask: true
      recorder:
        idle_timeout: 5
        frame_timeout: 10
ffmpeg (map, required)
FFmpeg configuration.

Camera

A camera domain fetches frames from a camera source and distributes them to other domains in Viseron.

Hardware acceleration

Viseron supports both hardware accelerated decoding and encoding.
This offloads your CPU and gives you a significant performance increase.

Supported hardware:

  • NVIDIA GPU
  • VA-API on compatible Intel CPUs
  • Raspberry Pi 3
  • Raspberry Pi 4
  • NVIDIA Jetson Nano

Viseron will detect what system you are running on and automagically utilize the hardware acceleration.

danger

The Jetson Nano support is very limited in FFmpeg. If you have a Nano, I suggest looking at the gstreamer component instead.

Default FFmpeg decoder command

A default FFmpeg decoder command is generated. To use hardware acceleration, the command varies a bit depending on the Docker container you use.

Commands

NVIDIA GPU support in the roflcoopter/amd64-cuda-viseron image:

ffmpeg -hide_banner -loglevel error -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -fflags +genpts -stimeout 5000000 -use_wallclock_as_timestamps 1 -vsync 0 -c:v h264_cuvid -rtsp_transport tcp -i rtsp://{username}:{password}@{host}:{port}{path} -f rawvideo -pix_fmt nv12 pipe:1

This means that you do not have to set hwaccel_args unless you have a specific need to change the default command (for example, changing h264_cuvid to hevc_cuvid).

Custom FFmpeg decoder command

You can customize the generated command through the config. It can be a bit hard to get right, so it is not recommended unless you know what you are doing.

The command is built up like this:

"ffmpeg" + global_args + input_args + hwaccel_args + codec + "-rtsp_transport tcp -i " + (stream url) + " -vf " + video_filters + output_args

Each entry in video_filters is appended to the command, separated by a ,.
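To illustrate the assembly described above, here is a small Python sketch. It is not Viseron's actual implementation; the helper name and argument values are hypothetical, but it shows how the pieces and the comma-joined video_filters fit together:

```python
# Illustrative sketch of how the decoder command is assembled.
# build_ffmpeg_command is a hypothetical helper, not part of Viseron.
def build_ffmpeg_command(stream_url, global_args, input_args,
                         hwaccel_args, codec, video_filters, output_args):
    cmd = ["ffmpeg"] + global_args + input_args + hwaccel_args + codec
    cmd += ["-rtsp_transport", "tcp", "-i", stream_url]
    if video_filters:
        # Each video_filters entry is joined with a comma into one -vf value
        cmd += ["-vf", ",".join(video_filters)]
    return cmd + output_args

cmd = build_ffmpeg_command(
    "rtsp://user:pass@192.168.0.10:554/Streaming/Channels/101/",
    ["-hide_banner", "-loglevel", "error"],
    ["-fflags", "nobuffer"],
    [],
    ["-c:v", "h264"],
    ["transpose=2", "transpose=2"],
    ["-f", "rawvideo", "-pix_fmt", "nv12", "pipe:1"],
)
# cmd now contains "-vf", "transpose=2,transpose=2" before the output args
```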

Config example to rotate image 180 degrees
ffmpeg:
  camera:
    camera_1:
      ...
      video_filters: # These filters rotate the images processed by Viseron
        - transpose=2
        - transpose=2
      recorder:
        video_filters: # These filters rotate the recorded video
          - transpose=2
          - transpose=2

And the resulting command looks like this:

ffmpeg -hide_banner -loglevel error -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -fflags +genpts -use_wallclock_as_timestamps 1 -vsync 0 -stimeout 5000000 -c:v h264_cuvid -rtsp_transport tcp -i rtsp://*****:*****@192.168.**.**:554/Streaming/Channels/101/ -vf transpose=2,transpose=2,fps=1.0 -f rawvideo -pix_fmt nv12 pipe:1

MJPEG Streams

Viseron will serve MJPEG streams of all cameras.

Dynamic streams

The dynamic streams are automatically created for each camera. They utilize query parameters to control what is displayed on the stream.

Example URL: http://localhost:8888/<camera_identifier>/mjpeg-stream

Query parameters

A number of query parameters are available to instruct Viseron to resize the stream or draw different things on the image.
To utilize a parameter you append it to the URL after a ?. To add multiple parameters you separate them with &, like this:

http://localhost:8888/<camera name slug>/mjpeg-stream?<parameter1>=<value>&<parameter2>=<value>
Available query parameters:

  • width (int): frame will be resized to this width
  • height (int): frame will be resized to this height
  • draw_objects (any): if set to a truthy value (true, 1 etc.), found objects will be drawn
  • draw_object_mask (any): if set to a truthy value (true, 1 etc.), configured object masks will be drawn
  • draw_motion (any): if set to a truthy value (true, 1 etc.), detected motion will be drawn
  • draw_motion_mask (any): if set to a truthy value (true, 1 etc.), configured motion masks will be drawn
  • draw_zones (any): if set to a truthy value (true, 1 etc.), configured zones will be drawn
  • mirror (any): if set to a truthy value (true, 1 etc.), mirror the image horizontally
  • rotate (any): degrees to rotate the image; positive/negative values rotate clockwise/counterclockwise respectively
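For example, a stream URL with several parameters can be assembled like this (the camera identifier front_door and the parameter values are placeholders):

```python
from urllib.parse import urlencode

# Hypothetical camera identifier and example parameter values.
params = {"width": 640, "height": 360, "draw_objects": 1, "draw_zones": 1}
url = f"http://localhost:8888/front_door/mjpeg-stream?{urlencode(params)}"
# url == "http://localhost:8888/front_door/mjpeg-stream?width=640&height=360&draw_objects=1&draw_zones=1"
```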
danger

If you are going to have more than one consumer of the stream, it is better to configure your own static MJPEG streams. This is because each dynamic stream processes its frames individually, duplicating the processing.

Static streams

The static MJPEG streams work exactly like the dynamic streams, but instead of defining the query parameters in the URL, they are defined in config.yaml.
The benefit of using these predefined streams is that frame processing happens only once.
This means that you can theoretically have as many streams open as you want without increasing the load on your machine.

Config example
<component that provides camera domain>:
  camera:
    front_door:
      ...
      mjpeg_streams:
        my-big-front-door-stream:
          width: 100
          height: 100
          draw_objects: true
        my-small-front-door-stream:
          width: 100
          height: 100
          draw_objects: true
          draw_zones: true
          draw_object_mask: true

The config example above would give you two streams, available at these endpoints:
http://localhost:8888/front_door/mjpeg-streams/my-big-front-door-stream
http://localhost:8888/front_door/mjpeg-streams/my-small-front-door-stream

FFprobe Stream Information

Viseron needs to know the width, height, FPS and audio/video codecs of your stream.
FFprobe is used on initialization to figure all this information out.

Some cameras don't play nicely with this and fail to report some information.
To circumvent this you can manually specify the stream information.

FFprobe timeout

Sometimes FFprobe fails to connect to the stream and times out.
If this is a recurring issue, you should specify all of width, height, fps, codec and audio_codec manually. Viseron will then not need to call FFprobe, and startup will be significantly faster.
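For example, a camera with all stream information specified manually might look like this (the codec values are placeholders for a 1080p H.264 stream with AAC audio):

```yaml
ffmpeg:
  camera:
    camera_one:
      ...
      width: 1920
      height: 1080
      fps: 25
      codec: h264
      audio_codec: aac
```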

Recoverable Errors

Sometimes FFmpeg prints errors which are not fatal, such as [h264 @ 0x55b1e115d400] error while decoding MB 0 12, bytestream 114567.
Viseron always performs a sanity check on the FFmpeg decoder command with -loglevel fatal.
If Viseron gets stuck on an error that you believe is not fatal, you can add a subset of that error to ffmpeg_recoverable_errors.
So to ignore the error above you would add this to your configuration:

ffmpeg_recoverable_errors:
  - error while decoding MB

Recorder

FFmpeg segments are used to handle recordings.
FFmpeg writes small 5 second segments of the stream to disk, and when a recording starts, Viseron finds the appropriate segments and concatenates them together.
The reason for using segments instead of simply starting the recorder on an event is to support the lookback feature, which makes it possible to record before an event actually happened.
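To illustrate the idea, here is a simplified Python sketch of lookback-based segment selection. This is not Viseron's actual segment logic; the function names, segment naming scheme and timestamps are all made up for the example:

```python
# Simplified sketch of lookback-based segment selection;
# not Viseron's actual implementation.
SEGMENT_LENGTH = 5  # seconds per segment, matching the 5 second segments above

def select_segments(segment_start_times, event_start, event_end, lookback):
    """Return segments overlapping [event_start - lookback, event_end]."""
    window_start = event_start - lookback
    return [
        start for start in segment_start_times
        if start + SEGMENT_LENGTH > window_start and start <= event_end
    ]

def concat_list(segments):
    """Build the concat demuxer input that could be fed to ffmpeg via '-i -'."""
    return "".join(f"file '/segments/{start}.mp4'\n" for start in segments)

# Segments written at t=0,5,10,...; event at t=17..22 with 5 s lookback.
chosen = select_segments([0, 5, 10, 15, 20, 25], 17, 22, 5)
# chosen == [10, 15, 20]: the segment starting at t=10 still covers t=12,
# the start of the lookback window.
```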

The default concatenation command
ffmpeg -hide_banner -loglevel error -y -protocol_whitelist file,pipe -f concat -safe 0 -i - -c:v copy {outfile.mp4}

If you want to re-encode the video you can choose codec, video_filters and optionally hwaccel_args.

Store segments in memory

To place the segments in memory instead of writing them to disk, you can mount a tmpfs disk in the container. This will use more memory but reduce the load on your hard drives.

Example tmpfs configuration

Example Docker command

docker run --rm \
  -v {recordings path}:/recordings \
  -v {config path}:/config \
  -v /etc/localtime:/etc/localtime:ro \
  -p 8888:8888 \
  --tmpfs /tmp \
  --name viseron \
  roflcoopter/viseron:latest

Example docker-compose

version: "2.4"

services:
  viseron:
    image: roflcoopter/viseron:latest
    container_name: viseron
    volumes:
      - {recordings path}:/recordings
      - {config path}:/config
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 8888:8888
    tmpfs:
      - /tmp
/config/config.yaml
recorder:
  segments_folder: /tmp

Substream

Using the substream is a great way to reduce the system load from FFmpeg.
When configured, two FFmpeg processes will spawn:

  • One that reads the main stream and creates segments for recordings. Codec -c:v copy is used so practically no resources are used.
  • One that reads the substream and pipes frames to Viseron for motion/object detection.

To really benefit from this you should reduce the framerate of the substream to match the lowest fps set for either motion or object detection.
It is also a good idea to change the resolution to something lower than the main stream.
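For example, if your slowest detector runs at 1 fps, the substream could be configured like this (the path and resolution are placeholders for whatever your camera offers):

```yaml
ffmpeg:
  camera:
    camera_one:
      ...
      substream:
        port: 554
        path: /Streaming/Channels/102/
        width: 640
        height: 360
        fps: 1
```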

Rotating video

If you rotate your camera 90 or 180 degrees, you can rotate the video in Viseron to match.
To do this you can use the video_filters option in the config.

note

If you are rotating the video 90 degrees, you need to tell Viseron the width and height of the video, which should be the opposite of the camera's real resolution. If you have a camera with 1920x1080 resolution, you need to set width: 1080 and height: 1920 in the config.

Config to rotate 90 degrees clockwise
/config/config.yaml
ffmpeg:
  camera:
    camera_one:
      name: Camera 1
      host: 192.168.XX.X
      port: 554
      path: /Streaming/Channels/101/
      username: !secret camera_one_user
      password: !secret camera_one_pass
      video_filters: # Rotate the frames processed by Viseron
        - transpose=1
      width: 1080 # Width of the rotated video = height of the camera
      height: 1920 # Height of the rotated video = width of the camera
      recorder:
        idle_timeout: 5
        video_filters: # Rotate the recorded videos
          - transpose=1

Raw command

If you want to use a custom FFmpeg command, you can do so by specifying the raw_command option. Viseron needs to be able to read frames, so you must make sure to output frames to stdout. By default this is done by using pipe:1 as the output file.

You also need to make sure that you are outputting frames in the raw format (-f rawvideo) that Viseron expects.

The third consideration is that small segments need to be saved to disk for processing by the recorder. This is done by using the format -f segment.

To get the hang of it you can start by using the default command and then modify it to your liking.

Raw command with substream

If you specify substream, Viseron will use the substream for motion/object detection, which means that the raw command for the substream has to output to pipe:1.
The main stream raw command will be used for recordings, so it has to output segments using -f segment.

Example using both main and substream
/config/config.yaml
ffmpeg:
  camera:
    camera_one:
      name: Camera 1
      host: 192.168.XX.X
      port: 554
      path: /onvif_camera/profile.0
      username: !secret camera_one_user
      password: !secret camera_one_pass
      width: 1920
      height: 1080
      fps: 30
      substream:
        port: 554
        path: /onvif_camera/profile.1
        width: 1920
        height: 1080
        fps: 1
        raw_command: | # Output to pipe:1
          ffmpeg -rtsp_transport tcp -i rtsp://user:pass@192.168.XX.X:554/onvif_camera/profile.1 -vf fps=1.0 -f rawvideo -pix_fmt nv12 pipe:1
      raw_command: | # Output segments to /segments/viseron_vscode_camera
        ffmpeg -rtsp_transport tcp -i rtsp://user:pass@192.168.XX.X:554/onvif_camera/profile.0 -f segment -segment_time 5 -reset_timestamps 1 -strftime 1 -c:v copy /segments/viseron_vscode_camera/%Y%m%d%H%M%S.mp4
danger

Most of the configuration options are ignored when using raw_command.

note

If you create a command that works well for your particular hardware, please share it with the community!