GradientBoostingRegressor(*, loss='squared_error', learning_rate=0.1, n_estimators=100, subsample=1.0, criterion='friedman_mse', min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_depth=3, min_impurity_decrease=0.0, init=None, random_state=None, max_features=None, alpha=0.9, verbose=0, max_leaf_nodes=None, warm_start=False, validation_fraction=0.1, n_iter_no_change=None, tol=0.0001, ccp_alpha=0.0)

Gradient boosting (GB) builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions. In each stage a regression tree is fit on the negative gradient of the loss function.

The training data X has shape (n_samples, n_features). Internally, it will be converted to dtype=np.float32, and a sparse matrix, if provided, is converted as well.