Tasks

Overview

Once you have model(s) and dataset(s) on the platform, you can create a task that organizes them under one logical unit. Within a task, you can iterate on your models and datasets in three ways (broadly speaking):
  1. Uncover errors
  2. Generate data to re-train your model against these errors
  3. Create unit/regression tests to raise the bar for the next version
You can only upload models & datasets that share the same shape, i.e. the same class names. While at least one model is required to create a task, a dataset is not strictly required, because you can generate tests from template strings and run them against your model.
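To see why a dataset isn't needed to start probing a model, here is a minimal sketch of template-string testing in plain Python. The `predict` function, the template, and the slot fills are placeholders for illustration, not the platform's API.

```python
from itertools import product

# Placeholder for your model's prediction function (hypothetical).
# Swap in whatever callable returns class names for your task.
def predict(text: str) -> str:
    return "negative" if "hate" in text else "positive"

# A template string with slots, expanded into concrete test inputs.
template = "I {verb} this {noun}."
fills = {
    "verb": ["love", "hate"],
    "noun": ["movie", "restaurant"],
}
expected = {"love": "positive", "hate": "negative"}

failures = []
for verb, noun in product(fills["verb"], fills["noun"]):
    text = template.format(verb=verb, noun=noun)
    if predict(text) != expected[verb]:
        failures.append(text)

print(f"{len(failures)} template-generated tests failed")
```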

Linked Models & Datasets

You may have several models with different architectures that all accomplish the same conceptual task, e.g. sentiment classification. For each architecture, you may also have a number of versions from re-training the model and/or re-collecting data. A task houses all of these models and datasets, allowing you to compare performance across them.
More info on comparing general performance is in the "Run Reports" section.
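As a rough illustration of the kind of comparison a task enables, the sketch below evaluates two model versions on the same labeled dataset. The `predict_v1`/`predict_v2` callables and the toy data are assumptions made for the example, not the platform's API.

```python
# Two stand-in model versions for the same sentiment task (hypothetical).
def predict_v1(text: str) -> str:
    return "positive" if "good" in text.lower() else "negative"

def predict_v2(text: str) -> str:
    return "positive" if any(w in text.lower() for w in ("good", "great")) else "negative"

# A shared labeled dataset of (text, class name) pairs.
dataset = [
    ("The food was good", "positive"),
    ("Great service all around", "positive"),
    ("Terrible experience", "negative"),
]

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

for name, model in [("v1", predict_v1), ("v2", predict_v2)]:
    print(f"{name}: accuracy = {accuracy(model, dataset):.2f}")
```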

Test Suites & Reports

Within the task page, you can also create test suites to run across your models and generate more data to add back to your datasets. More on this in the "Test Suites & Reports" section.