Quadrilateral detection models locate instances in an image and output, for each instance, its category and a convex quadrilateral enclosing it.
Datasets follow this structure:
endpoint_url/bucket
├── prefix/images/
├── prefix/instances.yaml
└── prefix/metadata.yaml
Dataset images are placed directly inside images/ (subdirectories are ignored).
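For illustration, a dataset stored in an S3-compatible bucket could be enumerated as in the sketch below. This is a minimal example assuming boto3; the endpoint URL, bucket, and prefix values are placeholders, not part of the dataset specification.

# Minimal sketch, assuming an S3-compatible object store reachable through boto3.
# The endpoint URL, bucket, and prefix below are placeholders.
import boto3

s3 = boto3.client("s3", endpoint_url="https://storage.example.com")

def list_dataset_images(bucket: str, prefix: str) -> list[str]:
    # Return the keys of images placed directly inside <prefix>/images/.
    # Delimiter="/" keeps the listing shallow, so keys in subdirectories are ignored.
    keys = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=f"{prefix}/images/", Delimiter="/"):
        keys.extend(obj["Key"] for obj in page.get("Contents", []) if not obj["Key"].endswith("/"))
    return keys

print(list_dataset_images("my-bucket", "my-prefix"))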
The metadata file looks something like this:
task: quadrilateral detection
annotations: instances.yaml
categories: [cat1, cat2, cat3]
The annotations field specifies the name of the file containing the ground-truth annotations.
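As a rough sketch, assuming PyYAML and that the prefix/ contents have been downloaded locally, the metadata can be parsed like this:

# Sketch only: parse metadata.yaml and resolve the annotations file name.
# Assumes the files from prefix/ are available locally and PyYAML is installed.
import yaml

with open("metadata.yaml") as f:
    metadata = yaml.safe_load(f)

assert metadata["task"] == "quadrilateral detection"
annotations_file = metadata["annotations"]   # "instances.yaml"
categories = metadata["categories"]          # ["cat1", "cat2", "cat3"]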
Here's an example of an annotations file:
000.jpg:
- category: cat2
  quad: [[0, 3], [6, 7], [5, 10], [1, 8]] # [[x, y], [x, y], [x, y], [x, y]]
- category: cat1
  quad: [[4, 2], [7, 7], [7, 9], [2, 5]]
001.jpg: [] # no instances in this image
# ...
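Each image filename maps to a (possibly empty) list of instances, and every quad lists its four vertices as [x, y] pairs. The sketch below, which assumes PyYAML and a locally downloaded instances.yaml, loads the file and sanity-checks each annotation; the is_convex helper is illustrative and not part of the format.

# Sketch only: parse instances.yaml and validate each annotation.
import yaml

categories = ["cat1", "cat2", "cat3"]  # as listed in metadata.yaml

def is_convex(quad: list[list[float]]) -> bool:
    # Convex when all turns go the same way: the cross products of consecutive
    # edges are either all strictly positive or all strictly negative.
    crosses = []
    for i in range(4):
        (x1, y1), (x2, y2), (x3, y3) = quad[i], quad[(i + 1) % 4], quad[(i + 2) % 4]
        crosses.append((x2 - x1) * (y3 - y2) - (y2 - y1) * (x3 - x2))
    return all(c > 0 for c in crosses) or all(c < 0 for c in crosses)

with open("instances.yaml") as f:
    annotations = yaml.safe_load(f)

for image_name, instances in annotations.items():
    for inst in instances:  # an empty list means the image has no instances
        assert inst["category"] in categories
        assert len(inst["quad"]) == 4 and is_convex(inst["quad"])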