Performance Metrics
Performance metrics provide insights into how well a model is performing.
Suppose you have the following model (its confusion matrix has 8 instances in total, of which 4 are predicted correctly):
1. Accuracy (ACC): ACC measures the ratio of correctly predicted instances to the total instances:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
For the model above, Accuracy = 4/8 ≈ 50%.

2. Precision (PREC): PREC measures the ratio of correctly predicted positive observations to the total predicted positives [from the predicted positives, see how many are actually positive]:
Precision = TP / (TP + FP)
Example: spam classification, where a false positive (a genuine mail marked as spam) is costly.

3. Recall (Sensitivity): RECALL measures the ratio of correctly predicted positive observations to all actual positives [from the actual positives, see how many are predicted positive]:
Recall = TP / (TP + FN)
Example: cancer detection, where a false negative (a missed cancer case) is costly.

4. F Score: The F score is the harmonic mean of PREC and RECALL, balancing both metrics:
F1 = 2 × (Precision × Recall) / (Precision + Recall)
Example: "Tomorrow the stock market will crash" has a different cost of error from the company's and the user's point of view, so we can't base the decision on Precision or Recall alone. In such cases we use the F score, or more generally the F-Beta score:
Fβ = (1 + β²) × (Precision × Recall) / (β² × Precision + Recall)
Beta = 1 gives the F1 score. If FP is more important than FN, decrease Beta (e.g. β = 0.5, the F0.5 score). If FN is more important than FP, increase Beta (e.g. β = 2, the F2 score).

5. Specificity (SPEC): SPEC measures the ratio of correctly predicted negative observations to all actual negatives:
Specificity = TN / (TN + FP)
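To make metrics 1-5 concrete, here is a minimal Python sketch (assuming scikit-learn is installed; the y_true/y_pred labels are made-up illustrative values chosen so that accuracy comes out to 4/8, not the actual model from this post):

# Minimal sketch of the classification metrics defined above.
# y_true / y_pred are made-up labels, not the model from this post.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, fbeta_score, confusion_matrix)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 1, 0, 0]

print("Accuracy :", accuracy_score(y_true, y_pred))           # 4/8 = 0.5
print("Precision:", precision_score(y_true, y_pred))           # TP / (TP + FP)
print("Recall   :", recall_score(y_true, y_pred))              # TP / (TP + FN)
print("F1       :", fbeta_score(y_true, y_pred, beta=1))       # harmonic mean
print("F0.5     :", fbeta_score(y_true, y_pred, beta=0.5))     # weights precision more
print("F2       :", fbeta_score(y_true, y_pred, beta=2))       # weights recall more

# Specificity has no direct scikit-learn helper; derive it from the
# confusion matrix as TN / (TN + FP).
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Specificity:", tn / (tn + fp))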
6. ROC-AUC: ROC-AUC measures the area under the Receiver Operating Characteristic (ROC) curve, indicating how well the model distinguishes between classes. A higher AUC suggests a better model. For more details, see: https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc
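A minimal sketch, again assuming scikit-learn; note that roc_auc_score expects predicted scores or probabilities rather than hard class labels (the values below are made up):

# ROC-AUC sketch: pass the predicted probability of the positive class.
from sklearn.metrics import roc_auc_score

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.9, 0.2, 0.4, 0.8, 0.6, 0.1, 0.3, 0.2]  # P(class = 1) per instance

print("ROC-AUC:", roc_auc_score(y_true, y_score))  # 1.0 = perfect separation, 0.5 = random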
7. Mean Absolute Error (MAE) and Mean Squared Error (MSE): For regression problems, MAE and MSE are used. MAE is the average of the absolute differences between predicted and actual values, while MSE is the average of the squared differences:
MAE = (1/n) × Σ |yᵢ − ŷᵢ|
MSE = (1/n) × Σ (yᵢ − ŷᵢ)²
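And a minimal sketch for the regression metrics, using scikit-learn with made-up targets:

# MAE = mean of |y - y_hat|, MSE = mean of (y - y_hat)^2
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5,  0.0, 2.0, 8.0]

print("MAE:", mean_absolute_error(y_true, y_pred))  # (0.5 + 0.5 + 0.0 + 1.0) / 4 = 0.5
print("MSE:", mean_squared_error(y_true, y_pred))   # (0.25 + 0.25 + 0.0 + 1.0) / 4 = 0.375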