## Tracking model performance

After a successful training run, you can open a detailed view of the model to check its statistics or perform additional conversions.

The first section displays basic information about the model, such as:

- Name of the trained model,
- Type of operation (Classification or Detection),
- Framework used,
- Name of the pretrained model,
- Datasets used,
- Categories used.

The *Training Details* section shows general information about the trained model: statistics calculated from the accuracy and loss functions (their detailed description can be found in the documentation of the chosen framework).

- Model accuracy
- Model loss
- Training time

### Classification accuracy

For a classification model, the following accuracy measures are calculated for each class:

- Precision - TP/(TP + FP):
  - TP - True Positive, the number of pictures correctly attributed to a given class,
  - FP - False Positive, the number of pictures incorrectly attributed to a given class.
- Recall - TP/(TP + FN):
  - FN - False Negative, the number of pictures of a given class incorrectly attributed to a different class.
- F1-score - 2 * precision * recall / (precision + recall).
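The three per-class measures above can be sketched directly from the TP/FP/FN counts. This is a minimal illustration with made-up counts, not the platform's own implementation:

```python
def precision(tp, fp):
    # Fraction of images assigned to a class that truly belong to it
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    # Fraction of a class's images that the model actually found
    return tp / (tp + fn) if (tp + fn) else 0.0

def f1_score(p, r):
    # Harmonic mean of precision and recall
    return 2 * p * r / (p + r) if (p + r) else 0.0

# Illustrative counts: 90 tigers labeled correctly, 10 non-tigers
# labeled as tigers, 5 tigers labeled as something else
p = precision(tp=90, fp=10)   # 0.9
r = recall(tp=90, fn=5)
f1 = f1_score(p, r)
```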

You can then analyze interactive graphs showing the change in Accuracy/Val Accuracy and Loss/Val Loss values against epochs.

A Confusion Matrix is another element that will help you assess whether your model has been trained correctly.

The confusion matrix is an N×N matrix in which the columns correspond to the correct decision classes and the rows correspond to the recognitions made by the trained model. The number at the intersection of a column and a row is the number of images from the column's class that the classifier assigned to the row's class.
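Filling such a matrix is straightforward. A small sketch (class indices and sample labels are invented for illustration), using the same orientation as described above, rows for predictions and columns for the true classes:

```python
def confusion_matrix(true_labels, predicted_labels, n_classes):
    # Rows index the model's recognitions, columns index the correct classes
    cm = [[0] * n_classes for _ in range(n_classes)]
    for true, pred in zip(true_labels, predicted_labels):
        cm[pred][true] += 1
    return cm

# Illustrative classes: 0 = tiger, 1 = lion
cm = confusion_matrix([0, 0, 0, 1, 1], [0, 0, 1, 1, 1], n_classes=2)
# cm[0][0] = tigers recognized as tigers,
# cm[1][0] = tigers recognized as lions
```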

Efficiency percentage ranges for diagonal cells (the range is inverted for non-diagonal cells):

- Very low efficiency [0% - 20%],
- Low efficiency [21% - 40%],
- Medium efficiency [41% - 60%],
- High efficiency [61% - 80%],
- Very high efficiency [81% - 100%].
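The band lookup, including the inversion for non-diagonal cells, can be sketched as follows (the exact boundary handling of the product's UI is an assumption here):

```python
def efficiency_band(percent, diagonal=True):
    # Off-diagonal cells use the inverted scale: fewer wrong
    # recognitions means higher efficiency
    value = percent if diagonal else 100 - percent
    if value <= 20:
        return "very low"
    if value <= 40:
        return "low"
    if value <= 60:
        return "medium"
    if value <= 80:
        return "high"
    return "very high"
```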

Diagonal cells represent correct recognitions. For example: the current model recognized 90.00% of the tiger photos correctly, which falls into the very high efficiency range (81%-100%), so the cell has been colored green.

Every row sums to 100%. The non-diagonal cell to the left represents photos in which tigers were recognized as lions. Thanks to the range inversion, this cell is also colored green: the fewer wrong recognitions, the more effective the model.

- In short: the more green cells in the matrix, the better the trained model. Each cell that is not green lowers the overall efficiency of the model according to the percentage range shown in the cell (and its corresponding color). A model whose matrix is entirely red is completely useless.

### Detection accuracy

For a detection model, the following statistics are extracted for each class:

- True Positive (TP) - the number of detections correctly recognizing a given class,
- False Positive (FP) - the number of detections falsely recognizing a given class,
- Average precision - TP/(TP + FP),
- Recall - TP/(TP + FN),
- Intersection over Union (IoU) = area of overlap / area of union:
  - area of overlap - the area shared by the actual and predicted label,
  - area of union - the total area covered by the actual and predicted label,
- F1-score - 2 * precision * recall / (precision + recall).
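IoU for two axis-aligned boxes can be sketched as below. The `(x1, y1, x2, y2)` corner convention is an assumption for illustration; the actual label format depends on the framework:

```python
def iou(box_a, box_b):
    # Boxes given as (x1, y1, x2, y2); IoU = area of overlap / area of union
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Corners of the intersection rectangle
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    overlap = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - overlap
    return overlap / union if union else 0.0
```

A perfect detection gives IoU = 1.0; disjoint boxes give 0.0.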

You can then analyze the interactive graphs showing how the Precision and Loss values change, sampled every hundred iterations.

The Precision value is calculated after the *Burn In* process is completed.