matchzoo.metrics

Submodules

Package Contents
class matchzoo.metrics.Precision(k: int = 1, threshold: float = 0.0)

    Bases: matchzoo.engine.base_metric.RankingMetric

    Precision metric.
    ALIAS = 'precision'

    __repr__(self)

        Returns: Formatted string representation of the metric.
    __call__(self, y_true: np.array, y_pred: np.array)

        Calculate precision@k.

        Example:
            >>> y_true = [0, 0, 0, 1]
            >>> y_pred = [0.2, 0.4, 0.3, 0.1]
            >>> Precision(k=1)(y_true, y_pred)
            0.0
            >>> Precision(k=2)(y_true, y_pred)
            0.0
            >>> Precision(k=4)(y_true, y_pred)
            0.25
            >>> Precision(k=5)(y_true, y_pred)
            0.2

        Parameters:
            - y_true – The ground truth label of each document.
            - y_pred – The predicted score of each document.

        Returns: Precision @ k.

        Raises: ValueError: len(r) must be >= k.
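To see where the doctest numbers come from, here is a minimal sketch of precision@k. It is illustrative only, not MatchZoo's actual implementation, though it reproduces the values above; note the denominator stays k even when fewer than k documents exist, which is how Precision(k=5) yields 0.2 here:

```python
import numpy as np

def precision_at_k(y_true, y_pred, k=1, threshold=0.0):
    """Fraction of the k highest-scored documents whose label exceeds threshold."""
    order = np.argsort(y_pred)[::-1]       # indices sorted by descending score
    top_k = np.asarray(y_true)[order][:k]  # labels of the top-k documents
    return float(np.sum(top_k > threshold)) / k

precision_at_k([0, 0, 0, 1], [0.2, 0.4, 0.3, 0.1], k=4)  # 0.25
```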
class matchzoo.metrics.DiscountedCumulativeGain(k: int = 1, threshold: float = 0.0)

    Bases: matchzoo.engine.base_metric.RankingMetric

    Discounted cumulative gain metric.
    ALIAS = ['discounted_cumulative_gain', 'dcg']

    __repr__(self)

        Returns: Formatted string representation of the metric.
    __call__(self, y_true: np.array, y_pred: np.array)

        Calculate discounted cumulative gain (DCG).

        Relevance is positive real values or binary values.

        Example:
            >>> y_true = [0, 1, 2, 0]
            >>> y_pred = [0.4, 0.2, 0.5, 0.7]
            >>> DiscountedCumulativeGain(1)(y_true, y_pred)
            0.0
            >>> round(DiscountedCumulativeGain(k=-1)(y_true, y_pred), 2)
            0.0
            >>> round(DiscountedCumulativeGain(k=2)(y_true, y_pred), 2)
            2.73
            >>> round(DiscountedCumulativeGain(k=3)(y_true, y_pred), 2)
            2.73
            >>> type(DiscountedCumulativeGain(k=1)(y_true, y_pred))
            <class 'float'>

        Parameters:
            - y_true – The ground truth label of each document.
            - y_pred – The predicted score of each document.

        Returns: Discounted cumulative gain.
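The doctest values are consistent with a gain of 2**label - 1 discounted by the natural logarithm of the position, with positions whose label does not exceed threshold contributing nothing. A minimal sketch under those assumptions (illustrative, not the library's code):

```python
import math

def dcg_at_k(y_true, y_pred, k=1, threshold=0.0):
    """DCG@k: gain (2**label - 1) discounted by ln(position + 2)."""
    if k <= 0:
        return 0.0
    # Rank labels by descending predicted score.
    ranked = [label for _, label in
              sorted(zip(y_pred, y_true), key=lambda pair: pair[0], reverse=True)]
    return float(sum((2.0 ** label - 1.0) / math.log(2.0 + i)
                     for i, label in enumerate(ranked[:k]) if label > threshold))

round(dcg_at_k([0, 1, 2, 0], [0.4, 0.2, 0.5, 0.7], k=2), 2)  # 2.73
```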
class matchzoo.metrics.MeanReciprocalRank(threshold: float = 0.0)

    Bases: matchzoo.engine.base_metric.RankingMetric

    Mean reciprocal rank metric.
    ALIAS = ['mean_reciprocal_rank', 'mrr']

    __repr__(self)

        Returns: Formatted string representation of the metric.
    __call__(self, y_true: np.array, y_pred: np.array)

        Calculate the reciprocal of the rank of the first relevant item.

        Example:
            >>> import numpy as np
            >>> y_pred = np.asarray([0.2, 0.3, 0.7, 1.0])
            >>> y_true = np.asarray([1, 0, 0, 0])
            >>> MeanReciprocalRank()(y_true, y_pred)
            0.25

        Parameters:
            - y_true – The ground truth label of each document.
            - y_pred – The predicted score of each document.

        Returns: Mean reciprocal rank.
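The computation can be sketched as follows, assuming a document counts as relevant when its label exceeds threshold (a minimal illustration, not MatchZoo's actual code):

```python
import numpy as np

def mean_reciprocal_rank(y_true, y_pred, threshold=0.0):
    """Reciprocal of the 1-based rank of the first relevant document."""
    order = np.argsort(y_pred)[::-1]  # descending by predicted score
    for rank, label in enumerate(np.asarray(y_true)[order], start=1):
        if label > threshold:
            return 1.0 / rank
    return 0.0  # no relevant document in the list

mean_reciprocal_rank([1, 0, 0, 0], [0.2, 0.3, 0.7, 1.0])  # 0.25
```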
class matchzoo.metrics.MeanAveragePrecision(threshold: float = 0.0)

    Bases: matchzoo.engine.base_metric.RankingMetric

    Mean average precision metric.
    ALIAS = ['mean_average_precision', 'map']

    __repr__(self)

        Returns: Formatted string representation of the metric.
    __call__(self, y_true: np.array, y_pred: np.array)

        Calculate mean average precision.

        Example:
            >>> y_true = [0, 1, 0, 0]
            >>> y_pred = [0.1, 0.6, 0.2, 0.3]
            >>> MeanAveragePrecision()(y_true, y_pred)
            1.0

        Parameters:
            - y_true – The ground truth label of each document.
            - y_pred – The predicted score of each document.

        Returns: Mean average precision.
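For a single query, average precision is the mean of precision@i taken at each position i that holds a relevant document. A minimal sketch (illustrative, not the library's implementation):

```python
import numpy as np

def average_precision(y_true, y_pred, threshold=0.0):
    """Mean of precision@i over every rank i holding a relevant document."""
    order = np.argsort(y_pred)[::-1]      # descending by predicted score
    labels = np.asarray(y_true)[order]
    hits, total = 0, 0.0
    for i, label in enumerate(labels, start=1):
        if label > threshold:
            hits += 1
            total += hits / i             # precision at this relevant position
    return total / hits if hits else 0.0

average_precision([0, 1, 0, 0], [0.1, 0.6, 0.2, 0.3])  # 1.0
```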
class matchzoo.metrics.NormalizedDiscountedCumulativeGain(k: int = 1, threshold: float = 0.0)

    Bases: matchzoo.engine.base_metric.RankingMetric

    Normalized discounted cumulative gain metric.
    ALIAS = ['normalized_discounted_cumulative_gain', 'ndcg']

    __repr__(self)

        Returns: Formatted string representation of the metric.
    __call__(self, y_true: np.array, y_pred: np.array)

        Calculate normalized discounted cumulative gain (NDCG).

        Relevance is positive real values or binary values.

        Example:
            >>> y_true = [0, 1, 2, 0]
            >>> y_pred = [0.4, 0.2, 0.5, 0.7]
            >>> ndcg = NormalizedDiscountedCumulativeGain
            >>> ndcg(k=1)(y_true, y_pred)
            0.0
            >>> round(ndcg(k=2)(y_true, y_pred), 2)
            0.52
            >>> round(ndcg(k=3)(y_true, y_pred), 2)
            0.52
            >>> type(ndcg()(y_true, y_pred))
            <class 'float'>

        Parameters:
            - y_true – The ground truth label of each document.
            - y_pred – The predicted score of each document.

        Returns: Normalized discounted cumulative gain.
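NDCG divides DCG@k by the DCG@k of the ideal ranking, i.e. the documents sorted by their true labels. A self-contained sketch consistent with the doctest above, assuming the same natural-log discounted gain as in the DiscountedCumulativeGain entry (illustrative, not the library's code):

```python
import math

def dcg_at_k(y_true, y_pred, k=1, threshold=0.0):
    """DCG@k: gain (2**label - 1) discounted by ln(position + 2)."""
    if k <= 0:
        return 0.0
    ranked = [label for _, label in
              sorted(zip(y_pred, y_true), key=lambda pair: pair[0], reverse=True)]
    return float(sum((2.0 ** label - 1.0) / math.log(2.0 + i)
                     for i, label in enumerate(ranked[:k]) if label > threshold))

def ndcg_at_k(y_true, y_pred, k=1, threshold=0.0):
    """DCG@k normalized by the DCG@k of the label-sorted (ideal) ranking."""
    ideal = dcg_at_k(y_true, y_true, k, threshold)  # score the perfect ordering
    return dcg_at_k(y_true, y_pred, k, threshold) / ideal if ideal else 0.0

round(ndcg_at_k([0, 1, 2, 0], [0.4, 0.2, 0.5, 0.7], k=2), 2)  # 0.52
```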
class matchzoo.metrics.Accuracy

    Bases: matchzoo.engine.base_metric.ClassificationMetric

    Accuracy metric.
    ALIAS = ['accuracy', 'acc']

    __repr__(self)

        Returns: Formatted string representation of the metric.
    __call__(self, y_true: np.array, y_pred: np.array)

        Calculate accuracy.

        Example:
            >>> import numpy as np
            >>> y_true = np.array([1])
            >>> y_pred = np.array([[0, 1]])
            >>> Accuracy()(y_true, y_pred)
            1.0

        Parameters:
            - y_true – The ground truth label of each document.
            - y_pred – The predicted scores of each document.

        Returns: Accuracy.
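In the doctest, y_pred holds one row of class scores per example and the predicted class is the arg-max of each row. A minimal sketch of the computation (illustrative, not the library's implementation):

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of examples whose arg-max class matches the true label."""
    predicted = np.argmax(np.asarray(y_pred), axis=1)  # one class per row
    return float(np.mean(predicted == np.asarray(y_true)))

accuracy([1], [[0, 1]])  # 1.0
```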
class matchzoo.metrics.CrossEntropy

    Bases: matchzoo.engine.base_metric.ClassificationMetric

    Cross entropy metric.
    ALIAS = ['cross_entropy', 'ce']

    __repr__(self)

        Returns: Formatted string representation of the metric.
    __call__(self, y_true: np.array, y_pred: np.array, eps: float = 1e-12)

        Calculate cross entropy.

        Example:
            >>> y_true = [0, 1]
            >>> y_pred = [[0.25, 0.25], [0.01, 0.90]]
            >>> CrossEntropy()(y_true, y_pred)
            0.7458274358333028

        Parameters:
            - y_true – The ground truth label of each document.
            - y_pred – The predicted scores of each document.
            - eps – Log loss is undefined for p=0 and p=1, so probabilities are clipped to max(eps, min(1 - eps, p)).

        Returns: Cross entropy.
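The metric is the mean negative log of the probability assigned to each true class, with probabilities clipped as described for eps. A minimal sketch (illustrative; it matches the doctest value to six decimal places but may differ from the library's result in the last digits):

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean negative log-probability of the true class, with clipping."""
    probs = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0 - eps)
    rows = np.arange(len(y_true))  # one row of class probabilities per example
    return float(-np.mean(np.log(probs[rows, y_true])))

round(cross_entropy([0, 1], [[0.25, 0.25], [0.01, 0.90]]), 6)  # 0.745827
```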
matchzoo.metrics.list_available() → list