
XGBoost in Data Competitions: Parameter Tuning in Practice (Complete Workflow)

wuaiqiu


This post performs parameter tuning on top of my previous post, "Feature Selection in Scikit-learn, Regression Prediction with XGBoost, and Model Optimization in Practice," so please read that one first before reading this.

Most of the work I described earlier was about feature selection; here I want to write down some modest experience with tuning XGBoost parameters. I had seen plenty of related material online, almost all of it translated from a single English blog post, and worse, many of those articles explain the steps incompletely, which easily leaves newcomers thoroughly confused. Being a newcomer myself, I fell into quite a few pits along the way, and I hope this post helps you avoid them! Now, on to the main topic.


First, we are in luck: scikit-learn provides a function that makes tuning much easier:

sklearn.model_selection.GridSearchCV

Commonly used parameters:

estimator: the estimator to tune. For a competition using XGBoost, this is the model you build, e.g. model = xgb.XGBRegressor(**other_params)

param_grid: a dict (or list of dicts) giving the parameter values to search over, e.g. cv_params = {"n_estimators": [550, 575, 600, 650, 675]}

scoring: the evaluation metric. The default is None, in which case the estimator's own score method is used. It can also be a string such as scoring="roc_auc" (which metric is appropriate depends on the model), or a callable whose signature is scorer(estimator, X, y). The available scoring strings are listed in the scikit-learn documentation:

See: http://scikit-learn.org/stable/modules/model_evaluation.html

In this walkthrough I use the r2 score; of course you can choose whatever fits your actual needs.
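To make the scorer signature concrete, here is a minimal sketch of my own (not from the original post) of a callable scorer that reproduces scoring="r2":

from sklearn.metrics import r2_score

# A callable passed to scoring must accept (estimator, X, y) and return a float.
def r2_scorer(estimator, X, y):
    return r2_score(y, estimator.predict(X))

# Equivalent to scoring="r2":
# GridSearchCV(estimator=model, param_grid=cv_params, scoring=r2_scorer, cv=5)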

When you start tuning, it is common to initialize the parameters to some reasonable values first:

learning_rate: 0.1
n_estimators: 500
max_depth: 5
min_child_weight: 1
subsample: 0.8
colsample_bytree: 0.8
gamma: 0
reg_alpha: 0
reg_lambda: 1

Link: XGBoost common parameters reference table

You can set the initial values according to your own situation; the values above are just rules of thumb.

Tuning generally proceeds in the following order:

1. The optimal number of boosting iterations: n_estimators

if __name__ == "__main__":
    trainFilePath = "dataset/soccer/train.csv"
    testFilePath = "dataset/soccer/test.csv"
    data = pd.read_csv(trainFilePath)
    X_train, y_train = featureSet(data)
    X_test = loadTestData(testFilePath)
    cv_params = {"n_estimators": [400, 500, 600, 700, 800]}
    other_params = {"learning_rate": 0.1, "n_estimators": 500, "max_depth": 5, "min_child_weight": 1, "seed": 0,
                    "subsample": 0.8, "colsample_bytree": 0.8, "gamma": 0, "reg_alpha": 0, "reg_lambda": 1}
    model = xgb.XGBRegressor(**other_params)
    optimized_GBM = GridSearchCV(estimator=model, param_grid=cv_params, scoring="r2", cv=5, verbose=1, n_jobs=4)
    optimized_GBM.fit(X_train, y_train)
    evalute_result = optimized_GBM.grid_scores_  # grid_scores_ is legacy API; scikit-learn >= 0.20 uses cv_results_ instead
    print("Results for each candidate: {0}".format(evalute_result))
    print("Best parameter values: {0}".format(optimized_GBM.best_params_))
    print("Best model score: {0}".format(optimized_GBM.best_score_))

At this point I have to highlight one crucial detail in the code:

In model = xgb.XGBRegressor(**other_params), the two asterisks must never be omitted! Many people miss this, and since plenty of online tutorials appear to be copied from one another without ever being run, they write model = xgb.XGBRegressor(other_params). Unfortunately, run that way the code raises the following error:

xgboost.core.XGBoostError: b"Invalid Parameter format for max_depth expect int but value...

If you don't believe me, see this link: xgboost issue

That was a lesson learned the hard way: if you never run the code yourself, you never know what bugs will surface!
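To see why the two asterisks matter, here is a tiny sketch of my own (the function below is a hypothetical stand-in, independent of XGBoost): ** unpacks a dict into keyword arguments, while passing the dict directly binds the whole dict to the first positional parameter, handing the constructor a dict where it expects an int.

def make_model(max_depth=3, learning_rate=0.1):  # hypothetical stand-in for XGBRegressor
    return (max_depth, learning_rate)

params = {"max_depth": 5, "learning_rate": 0.05}
print(make_model(**params))  # (5, 0.05): dict keys become keyword arguments
print(make_model(params))    # ({'max_depth': 5, 'learning_rate': 0.05}, 0.1): the whole dict lands in max_depth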

The output is:

[Parallel(n_jobs=4)]: Done 25 out of 25 | elapsed: 1.5min finished
Results for each candidate: [mean: 0.94051, std: 0.01244, params: {"n_estimators": 400}, mean: 0.94057, std: 0.01244, params: {"n_estimators": 500}, mean: 0.94061, std: 0.01230, params: {"n_estimators": 600}, mean: 0.94060, std: 0.01223, params: {"n_estimators": 700}, mean: 0.94058, std: 0.01231, params: {"n_estimators": 800}]
Best parameter values: {"n_estimators": 600}
Best model score: 0.9406056804545407

The output shows that the best number of iterations is 600. But we shouldn't accept that as the final answer yet: the spacing between candidate values was large, so I tested another set at a finer granularity:

cv_params = {"n_estimators": [550, 575, 600, 650, 675]}
other_params = {"learning_rate": 0.1, "n_estimators": 600, "max_depth": 5, "min_child_weight": 1, "seed": 0,
                "subsample": 0.8, "colsample_bytree": 0.8, "gamma": 0, "reg_alpha": 0, "reg_lambda": 1}

The output is:

[Parallel(n_jobs=4)]: Done 25 out of 25 | elapsed: 1.5min finished
Results for each candidate: [mean: 0.94065, std: 0.01237, params: {"n_estimators": 550}, mean: 0.94064, std: 0.01234, params: {"n_estimators": 575}, mean: 0.94061, std: 0.01230, params: {"n_estimators": 600}, mean: 0.94060, std: 0.01226, params: {"n_estimators": 650}, mean: 0.94060, std: 0.01224, params: {"n_estimators": 675}]
Best parameter values: {"n_estimators": 550}
Best model score: 0.9406545392685364

Sure enough, the best iteration count moved to 550. You might ask whether to keep shrinking the granularity. I'd say it depends on your situation: the finer the grid, the more precise the result, so refine as far as your precision needs demand; I won't carry it further here.
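If you do want to keep refining, one way to mechanize it is a coarse-to-fine loop that re-centers the grid on the current optimum and halves the step each round. This is only a sketch of my own (the starting point and step sizes are arbitrary choices), assuming model, X_train, and y_train already exist:

best_n, step = 600, 100
while step >= 25:
    grid = {"n_estimators": [max(1, best_n - step), best_n, best_n + step]}
    search = GridSearchCV(estimator=model, param_grid=grid, scoring="r2", cv=5, n_jobs=4)
    search.fit(X_train, y_train)
    best_n = search.best_params_["n_estimators"]  # re-center on the new optimum
    step //= 2  # halve the step: 100 -> 50 -> 25, then stop
print("refined n_estimators:", best_n)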

2. The next parameters to tune are min_child_weight and max_depth

Note: each time you finish tuning a parameter, update the corresponding entry in other_params to the optimal value (a sketch of doing this programmatically follows the results below).

cv_params = {"max_depth": [3, 4, 5, 6, 7, 8, 9, 10], "min_child_weight": [1, 2, 3, 4, 5, 6]}
other_params = {"learning_rate": 0.1, "n_estimators": 550, "max_depth": 5, "min_child_weight": 1, "seed": 0,
                "subsample": 0.8, "colsample_bytree": 0.8, "gamma": 0, "reg_alpha": 0, "reg_lambda": 1}

The output is:

[Parallel(n_jobs=4)]: Done 42 tasks | elapsed: 1.7min
[Parallel(n_jobs=4)]: Done 192 tasks | elapsed: 12.3min
[Parallel(n_jobs=4)]: Done 240 out of 240 | elapsed: 17.2min finished
Results for each candidate: [mean: 0.93967, std: 0.01334, params: {"min_child_weight": 1, "max_depth": 3}, mean: 0.93826, std: 0.01202, params: {"min_child_weight": 2, "max_depth": 3}, mean: 0.93739, std: 0.01265, params: {"min_child_weight": 3, "max_depth": 3}, mean: 0.93827, std: 0.01285, params: {"min_child_weight": 4, "max_depth": 3}, mean: 0.93680, std: 0.01219, params: {"min_child_weight": 5, "max_depth": 3}, mean: 0.93640, std: 0.01231, params: {"min_child_weight": 6, "max_depth": 3}, mean: 0.94277, std: 0.01395, params: {"min_child_weight": 1, "max_depth": 4}, mean: 0.94261, std: 0.01173, params: {"min_child_weight": 2, "max_depth": 4}, mean: 0.94276, std: 0.01329...]
Best parameter values: {"min_child_weight": 5, "max_depth": 4}
Best model score: 0.94369522247392

The output shows the best values: {"min_child_weight": 5, "max_depth": 4}. (I truncated part of the output because it is very long; the same applies below.)
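As for the note above about keeping other_params in sync, one small convenience (my own sketch, assuming optimized_GBM has just finished fitting) is to fold best_params_ straight back in rather than editing the dict by hand:

other_params.update(optimized_GBM.best_params_)  # overwrite the tuned keys with the grid-search optimum
model = xgb.XGBRegressor(**other_params)         # rebuild the model for the next tuning round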

3. Next we tune gamma:

cv_params = {"gamma": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]}
other_params = {"learning_rate": 0.1, "n_estimators": 550, "max_depth": 4, "min_child_weight": 5, "seed": 0,
                "subsample": 0.8, "colsample_bytree": 0.8, "gamma": 0, "reg_alpha": 0, "reg_lambda": 1}

The output is:

[Parallel(n_jobs=4)]: Done 30 out of 30 | elapsed: 1.5min finished
Results for each candidate: [mean: 0.94370, std: 0.01010, params: {"gamma": 0.1}, mean: 0.94370, std: 0.01010, params: {"gamma": 0.2}, mean: 0.94370, std: 0.01010, params: {"gamma": 0.3}, mean: 0.94370, std: 0.01010, params: {"gamma": 0.4}, mean: 0.94370, std: 0.01010, params: {"gamma": 0.5}, mean: 0.94370, std: 0.01010, params: {"gamma": 0.6}]
Best parameter values: {"gamma": 0.1}
Best model score: 0.94369522247392

The output shows the best value: {"gamma": 0.1}. (Note that all six candidates score identically here, so the search simply returns the first one.)

4. Then come subsample and colsample_bytree:

cv_params = {"subsample": [0.6, 0.7, 0.8, 0.9], "colsample_bytree": [0.6, 0.7, 0.8, 0.9]}
other_params = {"learning_rate": 0.1, "n_estimators": 550, "max_depth": 4, "min_child_weight": 5, "seed": 0,
                "subsample": 0.8, "colsample_bytree": 0.8, "gamma": 0.1, "reg_alpha": 0, "reg_lambda": 1}

The output shows the best values: {"subsample": 0.7, "colsample_bytree": 0.7}

5. Right after that, reg_alpha and reg_lambda:

cv_params = {"reg_alpha": [0.05, 0.1, 1, 2, 3], "reg_lambda": [0.05, 0.1, 1, 2, 3]}
other_params = {"learning_rate": 0.1, "n_estimators": 550, "max_depth": 4, "min_child_weight": 5, "seed": 0,
                "subsample": 0.7, "colsample_bytree": 0.7, "gamma": 0.1, "reg_alpha": 0, "reg_lambda": 1}

The output is:

[Parallel(n_jobs=4)]: Done 42 tasks | elapsed: 2.0min
[Parallel(n_jobs=4)]: Done 125 out of 125 | elapsed: 5.6min finished
Results for each candidate: [mean: 0.94169, std: 0.00997, params: {"reg_alpha": 0.01, "reg_lambda": 0.01}, mean: 0.94112, std: 0.01086, params: {"reg_alpha": 0.01, "reg_lambda": 0.05}, mean: 0.94153, std: 0.01093, params: {"reg_alpha": 0.01, "reg_lambda": 0.1}, mean: 0.94400, std: 0.01090, params: {"reg_alpha": 0.01, "reg_lambda": 1}, mean: 0.93820, std: 0.01177, params: {"reg_alpha": 0.01, "reg_lambda": 100}, mean: 0.94194, std: 0.00936, params: {"reg_alpha": 0.05, "reg_lambda": 0.01}, mean: 0.94136, std: 0.01122, params: {"reg_alpha": 0.05, "reg_lambda": 0.05}, mean: 0.94164, std: 0.01120...]
Best parameter values: {"reg_alpha": 1, "reg_lambda": 1}
Best model score: 0.9441561344357595

The output shows the best values: {"reg_alpha": 1, "reg_lambda": 1}

6. Last comes learning_rate; at this stage it's usual to test smaller learning rates:

cv_params = {"learning_rate": [0.01, 0.05, 0.07, 0.1, 0.2]}
other_params = {"learning_rate": 0.1, "n_estimators": 550, "max_depth": 4, "min_child_weight": 5, "seed": 0,
                "subsample": 0.7, "colsample_bytree": 0.7, "gamma": 0.1, "reg_alpha": 1, "reg_lambda": 1}

The output is:

[Parallel(n_jobs=4)]: Done 25 out of 25 | elapsed: 1.1min finished
Results for each candidate: [mean: 0.93675, std: 0.01080, params: {"learning_rate": 0.01}, mean: 0.94229, std: 0.01138, params: {"learning_rate": 0.05}, mean: 0.94110, std: 0.01066, params: {"learning_rate": 0.07}, mean: 0.94416, std: 0.01037, params: {"learning_rate": 0.1}, mean: 0.93985, std: 0.01109, params: {"learning_rate": 0.2}]
Best parameter values: {"learning_rate": 0.1}
Best model score: 0.9441561344357595

The output shows the best value: {"learning_rate": 0.1}

We can see clearly that the best model score rose steadily as the parameters were tuned, which confirms that tuning does help. Note, though, that the best score did not improve by much. Keep in mind that this score is computed with the scoring function set earlier, i.e.:

optimized_GBM = GridSearchCV(estimator=model, param_grid=cv_params, scoring="r2", cv=5, verbose=1, n_jobs=4)

that is, the scoring="r2" argument. In practice you may well need other scoring functions to judge how good a model is.
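Switching metrics only means changing that one argument. As a hedged example of my own, here is the same search scored with scikit-learn's built-in neg_mean_squared_error (error metrics are negated so that higher always means better):

# Same search, different metric; neg_mean_squared_error is a built-in scikit-learn scoring string.
optimized_GBM = GridSearchCV(estimator=model, param_grid=cv_params,
                             scoring="neg_mean_squared_error", cv=5, verbose=1, n_jobs=4)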

Finally, we plug the best parameter combination into the model, train it, and produce the predictions:

def trainandTest(X_train, y_train, X_test):
    # XGBoost training; the parameters below are the best combination we just tuned
    model = xgb.XGBRegressor(learning_rate=0.1, n_estimators=550, max_depth=4, min_child_weight=5, seed=0,
                             subsample=0.7, colsample_bytree=0.7, gamma=0.1, reg_alpha=1, reg_lambda=1)
    model.fit(X_train, y_train)
    # Predict on the test set
    ans = model.predict(X_test)
    ans_len = len(ans)
    id_list = np.arange(10441, 17441)
    data_arr = []
    for row in range(0, ans_len):
        data_arr.append([int(id_list[row]), ans[row]])
    np_data = np.array(data_arr)
    # Write results to file
    pd_data = pd.DataFrame(np_data, columns=["id", "y"])
    # print(pd_data)
    pd_data.to_csv("submit.csv", index=None)
    # Plot feature importance
    # plot_importance(model)
    # plt.show()

And that essentially wraps up the tuning process. As I said above, tuning does help model accuracy, but only to a limited extent. The biggest improvements still come from data cleaning, feature selection, feature fusion, model ensembling, and the like!

Finally, here is the complete code (a disclaimer: the code quality isn't great, so just borrow the overall approach):

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @File  : soccer_value.py
# @Author: Huangqinjian
# @Date  : 2018/3/22
# @Desc  :
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn import preprocessing
from sklearn import metrics
from sklearn.preprocessing import Imputer  # removed in scikit-learn >= 0.22; newer versions use sklearn.impute.SimpleImputer
from sklearn.grid_search import GridSearchCV  # legacy module; scikit-learn >= 0.18 uses sklearn.model_selection
from hyperopt import hp  # imported but not used in this script


# Load the training data
def featureSet(data):
    imputer = Imputer(missing_values="NaN", strategy="mean", axis=0)
    imputer.fit(data.loc[:, ["rw", "st", "lw", "cf", "cam", "cm"]])
    x_new = imputer.transform(data.loc[:, ["rw", "st", "lw", "cf", "cam", "cm"]])
    le = preprocessing.LabelEncoder()
    le.fit(["Low", "Medium", "High"])
    att_label = le.transform(data.work_rate_att.values)
    # print(att_label)
    def_label = le.transform(data.work_rate_def.values)
    # print(def_label)
    data_num = len(data)
    XList = []
    for row in range(0, data_num):
        tmp_list = []
        tmp_list.append(data.iloc[row]["club"])
        tmp_list.append(data.iloc[row]["league"])
        tmp_list.append(data.iloc[row]["potential"])
        tmp_list.append(data.iloc[row]["international_reputation"])
        tmp_list.append(data.iloc[row]["pac"])
        tmp_list.append(data.iloc[row]["sho"])
        tmp_list.append(data.iloc[row]["pas"])
        tmp_list.append(data.iloc[row]["dri"])
        tmp_list.append(data.iloc[row]["def"])
        tmp_list.append(data.iloc[row]["phy"])
        tmp_list.append(data.iloc[row]["skill_moves"])
        tmp_list.append(x_new[row][0])
        tmp_list.append(x_new[row][1])
        tmp_list.append(x_new[row][2])
        tmp_list.append(x_new[row][3])
        tmp_list.append(x_new[row][4])
        tmp_list.append(x_new[row][5])
        tmp_list.append(att_label[row])
        tmp_list.append(def_label[row])
        XList.append(tmp_list)
    yList = data.y.values
    return XList, yList


# Load the test data
def loadTestData(filePath):
    data = pd.read_csv(filepath_or_buffer=filePath)
    imputer = Imputer(missing_values="NaN", strategy="mean", axis=0)
    imputer.fit(data.loc[:, ["rw", "st", "lw", "cf", "cam", "cm"]])
    x_new = imputer.transform(data.loc[:, ["rw", "st", "lw", "cf", "cam", "cm"]])
    le = preprocessing.LabelEncoder()
    le.fit(["Low", "Medium", "High"])
    att_label = le.transform(data.work_rate_att.values)
    # print(att_label)
    def_label = le.transform(data.work_rate_def.values)
    # print(def_label)
    data_num = len(data)
    XList = []
    for row in range(0, data_num):
        tmp_list = []
        tmp_list.append(data.iloc[row]["club"])
        tmp_list.append(data.iloc[row]["league"])
        tmp_list.append(data.iloc[row]["potential"])
        tmp_list.append(data.iloc[row]["international_reputation"])
        tmp_list.append(data.iloc[row]["pac"])
        tmp_list.append(data.iloc[row]["sho"])
        tmp_list.append(data.iloc[row]["pas"])
        tmp_list.append(data.iloc[row]["dri"])
        tmp_list.append(data.iloc[row]["def"])
        tmp_list.append(data.iloc[row]["phy"])
        tmp_list.append(data.iloc[row]["skill_moves"])
        tmp_list.append(x_new[row][0])
        tmp_list.append(x_new[row][1])
        tmp_list.append(x_new[row][2])
        tmp_list.append(x_new[row][3])
        tmp_list.append(x_new[row][4])
        tmp_list.append(x_new[row][5])
        tmp_list.append(att_label[row])
        tmp_list.append(def_label[row])
        XList.append(tmp_list)
    return XList


def trainandTest(X_train, y_train, X_test):
    # XGBoost training
    model = xgb.XGBRegressor(learning_rate=0.1, n_estimators=550, max_depth=4, min_child_weight=5, seed=0,
                             subsample=0.7, colsample_bytree=0.7, gamma=0.1, reg_alpha=1, reg_lambda=1)
    model.fit(X_train, y_train)
    # Predict on the test set
    ans = model.predict(X_test)
    ans_len = len(ans)
    id_list = np.arange(10441, 17441)
    data_arr = []
    for row in range(0, ans_len):
        data_arr.append([int(id_list[row]), ans[row]])
    np_data = np.array(data_arr)
    # Write results to file
    pd_data = pd.DataFrame(np_data, columns=["id", "y"])
    # print(pd_data)
    pd_data.to_csv("submit.csv", index=None)
    # Plot feature importance
    # plot_importance(model)
    # plt.show()


if __name__ == "__main__":
    trainFilePath = "dataset/soccer/train.csv"
    testFilePath = "dataset/soccer/test.csv"
    data = pd.read_csv(trainFilePath)
    X_train, y_train = featureSet(data)
    X_test = loadTestData(testFilePath)
    # Predict the final result
    # trainandTest(X_train, y_train, X_test)
    """
    The parameter-tuning code follows
    """
    #
    # cv_params = {"n_estimators": [400, 500, 600, 700, 800]}
    # other_params = {"learning_rate": 0.1, "n_estimators": 500, "max_depth": 5, "min_child_weight": 1, "seed": 0,
    #                 "subsample": 0.8, "colsample_bytree": 0.8, "gamma": 0, "reg_alpha": 0, "reg_lambda": 1}
    #
    # cv_params = {"n_estimators": [550, 575, 600, 650, 675]}
    # other_params = {"learning_rate": 0.1, "n_estimators": 600, "max_depth": 5, "min_child_weight": 1, "seed": 0,
    #                 "subsample": 0.8, "colsample_bytree": 0.8, "gamma": 0, "reg_alpha": 0, "reg_lambda": 1}
    #
    # cv_params = {"max_depth": [3, 4, 5, 6, 7, 8, 9, 10], "min_child_weight": [1, 2, 3, 4, 5, 6]}
    # other_params = {"learning_rate": 0.1, "n_estimators": 550, "max_depth": 5, "min_child_weight": 1, "seed": 0,
    #                 "subsample": 0.8, "colsample_bytree": 0.8, "gamma": 0, "reg_alpha": 0, "reg_lambda": 1}
    #
    # cv_params = {"gamma": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]}
    # other_params = {"learning_rate": 0.1, "n_estimators": 550, "max_depth": 4, "min_child_weight": 5, "seed": 0,
    #                 "subsample": 0.8, "colsample_bytree": 0.8, "gamma": 0, "reg_alpha": 0, "reg_lambda": 1}
    #
    # cv_params = {"subsample": [0.6, 0.7, 0.8, 0.9], "colsample_bytree": [0.6, 0.7, 0.8, 0.9]}
    # other_params = {"learning_rate": 0.1, "n_estimators": 550, "max_depth": 4, "min_child_weight": 5, "seed": 0,
    #                 "subsample": 0.8, "colsample_bytree": 0.8, "gamma": 0.1, "reg_alpha": 0, "reg_lambda": 1}
    #
    # cv_params = {"reg_alpha": [0.05, 0.1, 1, 2, 3], "reg_lambda": [0.05, 0.1, 1, 2, 3]}
    # other_params = {"learning_rate": 0.1, "n_estimators": 550, "max_depth": 4, "min_child_weight": 5, "seed": 0,
    #                 "subsample": 0.7, "colsample_bytree": 0.7, "gamma": 0.1, "reg_alpha": 0, "reg_lambda": 1}
    #
    # cv_params = {"learning_rate": [0.01, 0.05, 0.07, 0.1, 0.2]}
    # other_params = {"learning_rate": 0.1, "n_estimators": 550, "max_depth": 4, "min_child_weight": 5, "seed": 0,
    #                 "subsample": 0.7, "colsample_bytree": 0.7, "gamma": 0.1, "reg_alpha": 1, "reg_lambda": 1}
    #
    # model = xgb.XGBRegressor(**other_params)
    # optimized_GBM = GridSearchCV(estimator=model, param_grid=cv_params, scoring="r2", cv=5, verbose=1, n_jobs=4)
    # optimized_GBM.fit(X_train, y_train)
    # evalute_result = optimized_GBM.grid_scores_  # legacy attribute; newer scikit-learn exposes cv_results_
    # print("Results for each candidate: {0}".format(evalute_result))
    # print("Best parameter values: {0}".format(optimized_GBM.best_params_))
    # print("Best model score: {0}".format(optimized_GBM.best_score_))

For more material, you're welcome to check out my GitChat:
