The scoring parameter can also be a callable instead of one of the predefined metric strings; such a callable compares model predictions against the ground truth and returns a single score. However, if you want to use a scoring function that takes additional parameters, such as fbeta_score, you need to generate an appropriate scoring object. The simplest way to generate such a callable is make_scorer, which converts score functions (discussed below in Function for prediction-error metrics) into callables that can be used for model evaluation.
One typical use case is to wrap an existing scoring function from the library with non-default values for its parameters, such as the beta parameter of the fbeta_score function:
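For example, the following minimal sketch wraps fbeta_score with beta=2 and passes the resulting scorer to GridSearchCV (the toy dataset, estimator, and parameter grid are placeholder choices for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

# Bind beta=2 so the metric weights recall more heavily than precision;
# the result is a callable that GridSearchCV and cross_val_score accept.
ftwo_scorer = make_scorer(fbeta_score, beta=2)

X, y = make_classification(random_state=0)
grid = GridSearchCV(LinearSVC(), param_grid={"C": [1, 10]},
                    scoring=ftwo_scorer)
grid.fit(X, y)
print(grid.best_params_)
```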
The second use case is to build a completely new and custom scorer object from a simple Python function (a sketch of this is given after the parameter list below).
make_scorer takes the following parameters:
- the function you want to wrap,
- whether it is a score (greater_is_better=True) or a loss (greater_is_better=False),
- whether the function takes predictions as input (needs_threshold=False) or needs confidence scores (needs_threshold=True),
- any additional parameters to pass to the function, such as beta for fbeta_score.
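Putting these together, here is a minimal sketch of a custom scorer built from a hypothetical loss function (my_custom_loss_func is defined only for this illustration; smaller values are better, hence greater_is_better=False):

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import make_scorer

def my_custom_loss_func(y_true, y_pred):
    # A made-up loss: log of (1 + largest absolute error).
    diff = np.abs(y_true - y_pred).max()
    return np.log1p(diff)

# greater_is_better=False marks this as a loss; the resulting scorer
# negates the value so that higher scores still mean better models.
loss_scorer = make_scorer(my_custom_loss_func, greater_is_better=False)

X = [[1], [1]]
y = [0, 1]
clf = DummyClassifier(strategy="most_frequent").fit(X, y)

print(my_custom_loss_func(np.array(y), clf.predict(X)))  # raw loss value
print(loss_scorer(clf, X, y))                            # same value, negated
```

Note that the scorer is called with the fitted estimator and the data, not with precomputed predictions; it calls predict internally and then applies the wrapped function.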