Google Coral EdgeTPU

The Coral EdgeTPU provides fast, efficient, private and offline AI inferencing capabilities in multiple form factors, such as a USB accessory or a PCIe module.

tip

The edgetpu component can also run on the CPU with compatible TensorFlow Lite models.
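Running the object detector on the CPU could look like the minimal sketch below. The device: cpu value is taken from the configuration example on this page; that the device option is also accepted under object_detector (mirroring image_classification) is an assumption.

edgetpu:
  object_detector:
    # cpu is assumed to be a valid device value here as well,
    # mirroring image_classification in the example below
    device: cpu
    cameras:
      camera_one:
        fps: 1
        labels:
          - label: person
            confidence: 0.8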

Configuration

Configuration example
edgetpu:
  object_detector:
    cameras:
      camera_one:
        fps: 1
        labels:
          - label: person
            confidence: 0.8
          - label: cat
            confidence: 0.8
      camera_two:
        fps: 1
        scan_on_motion_only: false
        labels:
          - label: dog
            confidence: 0.8
            trigger_recorder: false
  image_classification:
    device: cpu
    cameras:
      camera_two:
        labels:
          - dog

Object detector

An object detector scans an image to identify multiple objects and their position.

tip

Object detectors can be taxing on the system, so it is wise to combine them with a motion detector.
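A sketch of that combination, where frames are only scanned for objects while motion is detected. The mog2 component name is an assumption here; scan_on_motion_only appears in the configuration example above.

edgetpu:
  object_detector:
    cameras:
      camera_one:
        fps: 1
        scan_on_motion_only: true
        labels:
          - label: person
            confidence: 0.8
# mog2 is assumed as the name of a motion detector component
mog2:
  motion_detector:
    cameras:
      camera_one:
        fps: 1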

Labels

Labels are used to tell Viseron what objects to look for and keep recordings of. The available labels depend on what detection model you are using.

The max/min width/height is used to filter out any unreasonably large/small objects to reduce false positives.
Objects can also be filtered out with the use of an optional mask.
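A sketch of such a size filter on a label. The option names height_min, height_max, width_min and width_max, and that values are expressed as fractions of the frame, are assumptions here.

edgetpu:
  object_detector:
    cameras:
      camera_one:
        labels:
          - label: person
            confidence: 0.8
            # option names and fraction-of-frame units are assumptions
            width_min: 0.05
            width_max: 0.9
            height_min: 0.1
            height_max: 0.9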

Zones

Zones are used to define areas in the camera's field of view where you want to look for certain objects (labels).
Say you have a camera facing the sidewalk and have labels set up to record the label person.
This would cause Viseron to start recording people who are walking past the camera on the sidewalk. Not ideal.
To remedy this, you define a zone which covers only the area that you are actually interested in, excluding the sidewalk.

edgetpu:
  object_detector:
    cameras:
      camera_one:
        ...
        zones:
          - name: sidewalk
            coordinates:
              - x: 522
                y: 11
              - x: 729
                y: 275
              - x: 333
                y: 603
              - x: 171
                y: 97
            labels:
              - label: person
                confidence: 0.8
                trigger_recorder: true
tip

See Mask for how to get the coordinates for a zone.

Mask

Masks are used to exclude certain areas in the image from object detection. If a detected object has its lower portion inside the mask, it will be discarded.

The coordinates form a polygon around the masked area.
To easily generate coordinates you can use a tool like image-map.net.
Just upload an image from your camera, choose the Poly shape and start drawing your mask.
Then click Show me the code! and adapt it to the config format.
Coordinates coords="522,11,729,275,333,603,171,97" should be turned into this:

edgetpu:
  object_detector:
    cameras:
      camera_one:
        ...
        mask:
          - coordinates:
              - x: 522
                y: 11
              - x: 729
                y: 275
              - x: 333
                y: 603
              - x: 171
                y: 97


Pre-trained models

The included models are placed inside the /detectors/models/edgetpu folder.

There are three models:

  • SSD MobileNet V2
  • EfficientDet-Lite3
  • SSDLite MobileDet

The default model is EfficientDet-Lite3 since it features higher precision than the others, at the cost of slightly higher latency.

More information on these models, as well as more object detector models, can be found on the coral.ai website.
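Switching to one of the other included models could look like the sketch below. The model_path and label_path option names, and the exact file names inside /detectors/models/edgetpu, are assumptions here.

edgetpu:
  object_detector:
    # option names and file names below are assumptions
    model_path: /detectors/models/edgetpu/ssdlite_mobiledet.tflite
    label_path: /detectors/models/edgetpu/labels.txt
    cameras:
      camera_one:
        fps: 1
        labels:
          - label: person
            confidence: 0.8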

Image classification

Image classification runs as a post processor when a specific object is detected.

Image classification works by labeling an image with known objects and providing a confidence score. Classifier models are typically more fine-tuned than object detector models, which allows them to make more detailed detections.

The downside to this is that classifiers can only give one result, unlike object detectors, which can detect multiple different objects in a frame.

But if you combine the two, you can achieve some cool results.
For instance, you could set up the object detector to look for bird, and the image classifier with a model that is trained to recognize different birds.
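A sketch of that bird setup, reusing the structure from the configuration example at the top of the page. It assumes a classification model trained on bird species has been installed separately.

edgetpu:
  object_detector:
    cameras:
      camera_one:
        fps: 1
        labels:
          - label: bird
            confidence: 0.7
  image_classification:
    # assumes a bird-species classification model has been configured
    cameras:
      camera_one:
        labels:
          - bird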

Labels

Labels are used to tell Viseron when to run a post processor.

Any label configured under the object_detector for your camera can be added to the post processor's labels section.

note

Only objects that are tracked by an object_detector can be sent to a post_processor. The object also has to pass all of its filters (confidence, height, width, etc.).

Pre-trained models

The included model is MobileNet V3. It is placed inside the /classifiers/models/edgetpu folder. It was chosen because it has high accuracy and low latency.

More image classification models can be found on the coral.ai website.

There you will also find information to help you decide whether to switch to another model.