Video and Images

This is where you can view the productivity and quality metrics for your annotators and see the types of mistakes they are making.

To see annotator performance for a task, click on the task name.

Productivity Metrics

a) Users: Number of users who have attempted the task.

b) Man hours: Total time spent by all users on this task.

c) Annotations Made: The total number of annotations made. For video annotation projects, an object in one frame counts as one annotation. For example, a car present in ten frames counts as ten annotations.

d) Making - Time per Annotation

\frac{Total\ time\ spent\ on\ maker\ task}{Total\ annotations\ made}

e) Annotations Edited: Total annotations edited or added in the second loop. The second loop occurs when a reviewer rejects a job.

f) Editing - Time per Annotation

\frac{Total\ time\ spent\ in\ the\ second\ loop}{Total\ annotations\ edited\ in\ the\ second\ loop}

g) Frames Solved: Total frames submitted by the Maker.

h) Frames Edited: Total frames submitted by the maker in the second loop.

i) Time per frame - Making

\frac{Total\ time\ spent\ on\ maker\ task}{Total\ frames\ solved}

j) Time per frame - Editing

\frac{Total\ time\ spent\ in\ the\ second\ loop}{Total\ frames\ edited}
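Taken together, the formulas above reduce to four ratios. Below is a minimal Python sketch, assuming you already have the raw totals shown on the dashboard; the function and variable names are illustrative, not part of the product:

```python
def productivity_metrics(
    maker_time: float,        # total time spent on the maker task (e.g. seconds)
    second_loop_time: float,  # total time spent in the second loop (after a rejection)
    annotations_made: int,    # all annotations made (an object in one frame = one annotation)
    annotations_edited: int,  # annotations edited or added in the second loop
    frames_solved: int,       # frames submitted by the maker
    frames_edited: int,       # frames submitted by the maker in the second loop
) -> dict:
    """Per-annotation and per-frame productivity ratios, guarding against division by zero."""
    def ratio(num, den):
        return num / den if den else 0.0

    return {
        "making_time_per_annotation": ratio(maker_time, annotations_made),
        "editing_time_per_annotation": ratio(second_loop_time, annotations_edited),
        "making_time_per_frame": ratio(maker_time, frames_solved),
        "editing_time_per_frame": ratio(second_loop_time, frames_edited),
    }


# Example: a car present in 10 frames counts as 10 annotations.
print(productivity_metrics(3600, 600, 1200, 150, 400, 50))
```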

Quality Metrics

a) Annotations Approved: Annotations approved by the reviewer with or without any change.

b) Precision, Recall, and Accuracy: After a reviewer edits a job, every annotation made by the maker and the reviewer is compared to determine TP (True Positive), FP (False Positive), and FN (False Negative), as sketched after the list below.

  • True Positive: If the reviewer does not make any modifications to an annotation, it will be a true positive.

  • False Positive: If the reviewer makes any modification to an annotation, it is counted as a false positive. The change can be:

    • Geometry change (IOU < 0.99)

    • Class Change

    • Attribute Change

    • Reviewer deletes an annotation

  • False Negative: If the maker misses an annotation.
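Below is a minimal Python sketch of this comparison, assuming each annotation is a simple record with a box, a class, and attributes, and that maker and reviewer annotations are matched by a shared ID; the helper names and the accuracy formula (TP / (TP + FP + FN)) are illustrative assumptions, not the product's exact implementation:

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
             + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
    return inter / union if union else 0.0


def classify(maker, reviewer):
    """Count TP, FP, FN by comparing the maker's and reviewer's versions of each annotation."""
    tp = fp = 0
    for ann_id, m in maker.items():
        r = reviewer.get(ann_id)
        if r is None:
            fp += 1                                    # reviewer deleted the annotation
        elif (iou(m["box"], r["box"]) < 0.99           # geometry change
              or m["class"] != r["class"]              # class change
              or m["attributes"] != r["attributes"]):  # attribute change
            fp += 1
        else:
            tp += 1                                    # unchanged by the reviewer
    fn = sum(1 for ann_id in reviewer if ann_id not in maker)  # annotations the maker missed
    return tp, fp, fn


maker = {
    "a1": {"box": (0, 0, 10, 10), "class": "Car", "attributes": {"occluded": False}},
    "a2": {"box": (20, 20, 30, 30), "class": "Car", "attributes": {}},
}
reviewer = {
    "a1": {"box": (0, 0, 10, 10), "class": "Car", "attributes": {"occluded": False}},  # unchanged -> TP
    "a2": {"box": (20, 20, 30, 30), "class": "Truck", "attributes": {}},               # class change -> FP
    "a3": {"box": (40, 40, 50, 50), "class": "Car", "attributes": {}},                 # maker missed -> FN
}
tp, fp, fn = classify(maker, reviewer)
precision = tp / (tp + fp) if tp + fp else 0.0           # 0.5
recall = tp / (tp + fn) if tp + fn else 0.0              # 0.5
accuracy = tp / (tp + fp + fn) if tp + fp + fn else 0.0  # assumed definition; ~0.33
```

Precision drops when the reviewer changes or deletes the maker's annotations, while recall drops when the maker misses annotations that the reviewer adds.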

c) Questions solved: Total jobs solved by the Maker.

d) Questions reviewed: Total Jobs approved by the reviewer with or without any change.

e) Questions rejected: Total jobs rejected by the reviewers.

Mistake Classification

You can see the types of mistakes being made in this task, which will help you provide better feedback to your annotators.

1. Classwise Distribution

In this table, you can see the class-level distribution of mistakes. You can identify: a) the classes where your annotators are making mistakes, b) the types of mistakes made, and c) the users who are making mistakes for a given class.

2. Class and attribute mislabelling

Now that you know the types of mistakes made by your annotators, you can dig deeper to see which classes and attributes are the most confusing for them.

a) Correct class/attribute: Correct value of the class/attribute marked by the reviewer.

b) Incorrect class/attribute: Incorrect value of the class/attribute, as marked by the maker.

c) Users: Name of the users who are mislabelling class/attributes.

d) Incorrect annotations: Total annotations that are incorrect due to a mislabelling. The percentage value in this column indicates the share of all annotations impacted by that mislabelling. For example:

  • Total Annotations: 1000 (for all the classes)

  • Correct class: Car

  • Incorrect class: Truck

  • Incorrect Annotations: 100

For the above case, the percentage of incorrect annotations will be 10%. It indicates that 10% of all annotations are incorrect because the class Car was labeled as Truck.

Incorrect\ Annotations\ (\%) = \frac{Annotations\ incorrect\ due\ to\ a\ mislabelling}{Total\ annotations} \times 100
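As a quick check, the calculation is a single ratio. A minimal Python sketch of the worked example above (the function name is illustrative):

```python
def incorrect_annotation_pct(incorrect_annotations: int, total_annotations: int) -> float:
    """Percentage of all annotations affected by one class/attribute mislabelling."""
    return 100.0 * incorrect_annotations / total_annotations if total_annotations else 0.0


# 100 Car annotations labelled as Truck out of 1,000 total annotations -> 10.0
print(incorrect_annotation_pct(100, 1000))
```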

Tip: You can go to the User Details tab to see all the metrics for a particular user.
