Once a model is trained, you can make predictions by sending it an input
image. You have several options:
Try it inside the browser
Just drag and drop an image and see the visualized result. This is
useful for quickly sanity-checking a model.
Call the managed REST endpoint
Trained models are automatically deployed to a REST endpoint, which
you can easily call from your app or scripts.
The first call will be slower because the server has to download your
model; subsequent calls are much faster because the model is already
cached. If the model goes unused for long enough, it may be evicted from
the cache to make room for more frequently used models. If you need
consistently low-latency inference, deploy the model yourself.
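As a sketch of what a call to the managed endpoint might look like, here is a minimal Python example. The endpoint URL, the bearer-token auth scheme, and the multipart field name `image` are all assumptions for illustration; check your project's API reference for the actual values.

```python
import requests

# Hypothetical values -- replace with the endpoint and key from your project.
ENDPOINT = "https://example.com/v1/models/my-model/predict"
API_KEY = "YOUR_API_KEY"

def predict(image_path: str) -> dict:
    """Send an image to the managed REST endpoint and return the JSON result."""
    with open(image_path, "rb") as f:
        response = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},  # assumed multipart field name
            timeout=60,  # the first call may be slow while the model loads
        )
    response.raise_for_status()
    return response.json()
```

A generous timeout on the first request accounts for the cold-start delay described above; once the model is cached, you can tighten it.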
Export the model weights
Model exports are license-free: you can use them for any purpose,
including commercial ones.
Here are the available formats: