View-invariant models produce similar representations for different transformed
versions ("views") of the same source image.
Taking rotation as an example transformation, a view-invariant model would produce
similar representations for differently rotated versions of the same image, which
can be a useful property for downstream tasks in a transfer-learning setting.
Training such a model is a self-supervised task: the views are generated from the
images themselves, so no manual labels are required.
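As a rough illustration, the sketch below trains a toy encoder to map an image and a rotated view of it to nearby embeddings. The encoder architecture, the fixed 90-degree rotation, and the plain cosine-similarity loss are illustrative assumptions, not this project's actual method; practical approaches (e.g. contrastive losses with negative pairs) add mechanisms that keep the embeddings from collapsing to a constant.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms.functional as TF

# Hypothetical encoder: any network mapping images to embedding vectors.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 64),
)

def view_invariance_loss(images: torch.Tensor) -> torch.Tensor:
    """Encourage similar embeddings for a batch and its rotated views."""
    # Second "view" of each image: the same image rotated by 90 degrees.
    rotated = TF.rotate(images, angle=90.0)
    z1 = F.normalize(encoder(images), dim=1)
    z2 = F.normalize(encoder(rotated), dim=1)
    # Maximize cosine similarity between the two views of each image.
    return -(z1 * z2).sum(dim=1).mean()

# Usage: one optimization step on a random batch (stand-in for real data).
batch = torch.rand(8, 3, 64, 64)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
optimizer.zero_grad()
loss = view_invariance_loss(batch)
loss.backward()
optimizer.step()
```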
Datasets follow this structure:
endpoint_url/bucket
└── prefix/images/
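Assuming the data lives in an S3-compatible object store, images under this layout
can be enumerated with boto3 roughly as follows. The `endpoint_url`, `bucket`, and
`prefix` values are placeholders taken from the tree above; substitute your own.

```python
import boto3

# Placeholder endpoint matching the layout above; substitute your own store.
s3 = boto3.client("s3", endpoint_url="https://endpoint_url")
paginator = s3.get_paginator("list_objects_v2")

# Collect every object key under the images prefix.
image_keys = []
for page in paginator.paginate(Bucket="bucket", Prefix="prefix/images/"):
    for obj in page.get("Contents", []):
        image_keys.append(obj["Key"])

# Download one object to a local file as a quick sanity check.
if image_keys:
    s3.download_file("bucket", image_keys[0], "local_image.jpg")
```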