Knowledge check
This script was prepared for the first forester “hands-on”, conducted inside the MI2 Data Lab for its researchers and R enthusiasts. The main goal of this event is to check the completeness of the forester package, find bugs, check the clarity of the user interface, and gather ideas for new modules which could increase the value of the package.
To achieve this, during the “hands-on” we will briefly introduce the forester package and show the participants how to use its functionalities. After the workshop part, the participants will have time to test the package on their own ML datasets prepared beforehand.
The outline of the workshop is presented below. For each task, we will define WHY we are asking you to do it and WHAT you are supposed to do.
To run the code, we have to install the forester package from our GitHub repository, load the required packages, and get to know the primary dataset.
```r
# Install the required packages: DALEX for model explanations,
# devtools for installing packages from GitHub.
install.packages('DALEX')
install.packages('devtools')
library(devtools)

# Install the development version of the forester from GitHub.
install_github('ModelOriented/forester')
library(DALEX)
library(forester)

# Load and preview the lisbon dataset.
data(lisbon)
View(lisbon)
```
To better understand our dataset, we will use the forester `check_data()` function, which was designed to provide basic dataset information and additional details describing the dataset’s quality. As our target column is named ‘Price’, we pass it to `check_data()`. Typically this function is run inside `train()`, so here we have to assign the output to a variable ourselves.
```r
check <- check_data(lisbon, 'Price', verbose = TRUE)
```
The data check report correctly detected the issues present in the data, such as high correlation within a subset of the columns, the presence of duplicate and static columns, unequal quantile bins, and a suspicious ID column.
The forester is not picky and accepts datasets of various formats and quality (e.g., they may contain NA values). Now you will check how the forester sees the lisbon dataset.
Based on the `check_data()` function, write down the following information about the dataset:

- number of columns
- static columns
- dominating values
- duplicates
- number of NA values
- number of correlated pairs of numerical columns
- number of correlated pairs of categorical columns
- number of possible outliers
- whether there is an ID column
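If you want to sanity-check some of these answers independently of the report, a few base R calls are enough:

```r
ncol(lisbon)                      # number of columns
sum(is.na(lisbon))                # number of NA values
sum(duplicated(as.list(lisbon)))  # number of duplicated columns
```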
The first task is designed to introduce you to the main function, called `train()`. This function conducts the whole AutoML process, including data preprocessing, model training, and evaluation.
In this task, you are asked to assign the output of `train()` to the variable `output1`. Instead of using the default parameters, we want to shorten the computation time, so we ask you to limit the number of Bayesian optimization iterations to 0, hide the text output of the function, and set the number of random search iterations to 5. In the end, print the ranked list from the output object.
```r
output1 <- train(data = lisbon,
                 y = 'Price',
                 bayes_iter = 0,    # no Bayesian optimization
                 verbose = FALSE,   # hide the text output
                 random_evals = 5)  # 5 random search iterations
output1$score_test
```
The output is a fairly complex object that is later used by other functions. The most important part, however, is the `ranked_list`, which evaluates all trained models. The list is sorted by the main metric of the given task (classification or regression). Can you guess which metric is the main one for the regression task?
The second task aims to familiarize you with the `train()` function’s parameters. As plenty of the processes hidden behind `train()` are complex and can also be tuned, these options have to be accessible from the `train()` interface.
This time you are asked to experiment with the other parameters and compare what they do. Try to create the output with the ranger, xgboost, and decision_tree models only. Then set the number of Bayesian optimization iterations to 5 and the number of random search iterations to 3. What are the differences between the behavior of these two hyperparameter tuning methods? Finally, turn on the text output of the function.
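A possible solution is sketched below; we assume the set of models is selected with the `engine` argument (see `?train` for details). Note that the later tasks reuse this `output2` object.

```r
output2 <- train(data = lisbon,
                 y = 'Price',
                 engine = c('ranger', 'xgboost', 'decision_tree'),  # assumed argument name
                 bayes_iter = 5,    # 5 Bayesian optimization iterations
                 random_evals = 3,  # 3 random search iterations
                 verbose = TRUE)    # turn the text output on
output2$score_test
```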
The `check_data()` function only suggests hypothetical problems with the dataset, but the `train()` function also offers turning on an advanced preprocessing option.
```r
output3 <- train(data = lisbon,
                 y = 'Price',
                 bayes_iter = 0,
                 verbose = FALSE,
                 random_evals = 5,
                 advanced_preprocessing = TRUE)  # turn the advanced preprocessing on
output3$score_test[1, ]
```
Compare the results of the training with advanced preprocessing (`output3`) and without it (`output1`). Have the results improved?
```r
output3$score_test[1, ]
output1$score_test[1, ]
```
As the forester team, our goal is to make the training process quick, simple, and understandable, even for newcomers. Our extension tools are one of the points that differentiate us from other AutoML frameworks. In this chapter, we will use these extra functions to show the full capabilities of the forester.
This time we will ask you to use the output created in the previous task and explore four functions provided by the forester:

- Save the output with the `save()` function. (Task 4)
- Take a subset of the `lisbon` data frame and run the `predict_new()` method on it. The method enables the user to make predictions on data unseen by the models before. Why is this function needed? (Task 5)
- Create explainers with the `explain()` method and use some DALEX visualization on the outcome. (Task 6)
- Generate a `report()` outcome. (Task 7)
```r
save(train = output2,
     name = 'hands_on_save',
     return_name = TRUE)
```
```r
# Treat rows 50-100 of lisbon as new, unseen observations.
new_lisbon <- lisbon[50:100, ]
predict_new(train_out = output2,
            data = new_lisbon)
```
Explaining the model helps you understand it better. The forester does not include its own functions for explaining the models but uses the ready-made DALEX package. The simple `explain()` function creates an object adapted to the DALEX functions.
Create a DALEX explainer from the best model of `output2`. Then compute the feature importance: use the `DALEX::model_parts()` function and present it in a chart (the `plot()` function).
```r
exp_list <- forester::explain(models = output2$best_models[[1]],
                              test_data = output2$test_data,
                              y = output2$y)
exp <- exp_list$xgboost_bayes  # the explainer of the best model
p1 <- DALEX::model_parts(exp)  # feature importance
plot(p1)
```
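Feature importance is just one of the DALEX visualizations; as an extra step beyond the task, you can also draw partial dependence profiles for the same explainer:

```r
# Partial dependence profiles for the numerical features of the best model.
p2 <- DALEX::model_profile(exp)
plot(p2)
```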
For convenient storage and comparison of the models’ training results, the forester offers a report-generating function containing the most important information about the data and the training. Thanks to this, it is possible to store the results of the work in independent files and come back to them at any time.
Generate a report based on `output2`. Then, based on the report, find out which model is the best and assess the dispersion of the training and test sets. Compare the feature importance chart to the one created in Task 6.
```r
report(train_output = output2,
       output_file = 'hands_on_report')
```
The forester offers some of the most commonly used metrics for model evaluation; however, it does not preclude using a different, proprietary metric. Applying a self-made metric allows a fuller use of the forester’s potential for the individual needs of the user.
Use the `train()` function with the default parameters, but with your own metric added. It’s a good idea to turn off the Bayesian optimization and decrease the number of random search iterations to build the models faster.

Example:
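A minimal sketch, assuming the custom metric is passed through the `metric_function` and `metric_function_name` arguments (see `?train` for the exact interface):

```r
# A self-made metric: mean absolute percentage error (lower is better).
mape <- function(predictions, observed) {
  mean(abs((observed - predictions) / observed))
}

# The argument names for custom metrics are assumed here.
output4 <- train(data = lisbon,
                 y = 'Price',
                 bayes_iter = 0,    # turn the Bayesian optimization off
                 random_evals = 3,  # fewer random search iterations
                 metric_function = mape,
                 metric_function_name = 'mape')
output4$score_test
```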
As the forester team, we ask you to join the discussion about the features that should be improved and about what you would change in or add to the forester. If you find any bug, please post it on our GitHub Issues page: https://github.com/ModelOriented/forester/issues.