
Machine Learning - Decision Trees

This chapter introduces decision trees, an important supervised learning algorithm. There are several decision-tree algorithms, such as CART (based on the Gini index, usable for classification or regression), ID3 (which selects features by information gain), and C4.5 (similar to ID3, but which selects features by the information gain ratio). This article covers ID3.

It covers the following topics:

  1. How decision trees work
  2. Information, entropy, and information gain
  3. How to split the data set
  4. How to choose the best split
  5. Building the decision tree
  6. Visualizing the decision tree
  7. Testing the decision tree
  8. Storing and loading the decision tree
  9. Pros and cons of ID3
  10. Case study: predicting contact lens type with a decision tree

Some of the content is adapted from Machine Learning in Action.


How decision trees work

Problem statement:

Suppose a data set S contains M elements, each of which has attributes (features) X1, X2, ..., Xk and belongs to one concrete class from the set of classes C1, C2, ..., Cn. Given some element outside S with attribute values (x1, x2, ..., xk), determine which class that element belongs to.

Approach:

The idea of a decision tree is to first convert the data set, by some procedure, into a tree (the decision tree), and then test the target element's attribute values x1, x2, ..., xk in the order the tree prescribes; the class at the leaf node finally reached is the class of the target element. Building the decision tree is relatively time-consuming, but classifying with it is very efficient.

Example:

Suppose that, by some procedure, we have obtained a decision tree (shown as a figure in the original article) for deciding whether to go on a blind date, based on whether the other person is fair-skinned, rich, and beautiful.

Now, given a person A with attributes (fair, rich, beautiful), the decision tree leads to a leaf that says to go, so the date is worth going on. Given another person B with attributes (not rich, fair, not beautiful), the tree leads to the leaf "hesitate", meaning we should think it over first. As you can see, a decision tree can determine an element's class very quickly.
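
Since the figure itself is not reproduced here, the sketch below stands in for it. The tree structure is purely hypothetical (my own guess, not taken from the original figure), written in the nested-dict form that later sections of this article use:

    # A hypothetical blind-date decision tree in the nested-dict form used later in this article.
    date_tree = {'rich': {'yes': 'go',
                          'no': {'fair': {'yes': {'beautiful': {'yes': 'go', 'no': 'hesitate'}},
                                          'no': 'do not go'}}}}

    # Person A (fair, rich, beautiful): rich == 'yes' -> 'go'
    # Person B (not rich, fair, not beautiful): rich == 'no', fair == 'yes', beautiful == 'no' -> 'hesitate'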

Information, entropy, and information gain

In the ID3 algorithm, to build a reasonable decision tree we need the concepts of information, entropy, and information gain.

Information:

Suppose the data set S has n classes C1, C2, ..., Cn. The information of the i-th class is defined as:

l(C_i) = -\log_2 p(C_i)

where p(C_i) is the probability with which class C_i occurs among all classes. From this formula, the larger the probability, the closer the information is to 0; the smaller the probability, the larger the information. Note that information is defined on the classes, not on the individual elements of the set.
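
As a quick illustration (a minimal standalone sketch, not part of the modules built below), the information of a class can be computed directly from its probability:

    import math

    def class_information(p):
        # Information of a class that occurs with probability p (0 < p <= 1).
        return -math.log2(p)

    print(class_information(1.0))   # -0.0 -> a class with probability 1 carries no information
    print(class_information(0.5))   # 1.0
    print(class_information(0.25))  # 2.0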

Entropy:

Entropy is also called Shannon entropy, after Claude Shannon, who introduced it in the twentieth century. It describes how mixed the elements of a data set S are: the larger the entropy, the more mixed the data; the smaller, the more uniform. Entropy is defined as the expected value of information:

H = -\sum_{i=1}^{n} p(C_i) \log_2 p(C_i)

Note that entropy is built on information, so it too is defined over the classes in the data set rather than over its elements. For example, suppose S has 100 elements whose attribute values are all different and vary widely, but all of them belong to the same class Ci. The entropy of S is then 0, meaning its elements are completely uniform, with no disorder at all (they all share one class).
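
A quick sanity check of this claim (a minimal standalone sketch, separate from the entropy module built below):

    import math

    def entropy_of_counts(counts):
        # Entropy from class counts, e.g. {'yes': 100} or {'yes': 2, 'no': 3}.
        total = sum(counts.values())
        return sum(-(c / total) * math.log2(c / total) for c in counts.values())

    print(entropy_of_counts({'same_class': 100}))    # 0.0 -> a single class, perfectly uniform
    print(entropy_of_counts({'yes': 50, 'no': 50}))  # 1.0 -> two equally likely classes, maximally mixed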

Information gain:

Information gain is defined as a reduction in entropy. Suppose the data set S initially has entropy H_1, and after some operation its entropy becomes H_2. The information gain brought by that operation is:

\Delta H = H_1 - H_2

If ΔH is positive, the operation reduces the disorder of S and makes its elements more uniform; if ΔH is negative, the operation makes the elements of S more mixed. The example below computes the entropy of a data set.

Create a Python module entropy.py with the following code:

import math


def cal_entropy(data_set):
    """
    Calculate the entropy of a data set. It does not matter how many attributes the
    elements have; only the last value of each element, its category (label), is used.
    :param data_set: data set
    :return: entropy of the data set
    """
    labels = {}
    for data in data_set:
        label = data[-1]
        if label not in labels.keys():
            labels[label] = 0
        labels[label] += 1
    entropy = 0.0
    for label in labels:
        prob = float(labels[label]) / len(data_set)
        entropy += (-prob * math.log2(prob))
    return entropy


if __name__ == '__main__':
    data_set = [[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
    entropy = cal_entropy(data_set)
    print(entropy)

    data_set = [[1, 1, 'yes'], [1, 1, 'not sure'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
    entropy = cal_entropy(data_set)
    print(entropy)

Output:

D:\work\python_workspace\machine_learning\venv\Scripts\python.exe D:/work/python_workspace/machine_learning/decision_tree/entropy.py
0.9709505944546686
1.3709505944546687
Process finished with exit code 0

As we can see, the elements of data set 1 are more uniform than those of data set 2. Intuitively, data set 1 contains only two classes, yes and no, while data set 2 adds a third class, not sure, so data set 2 is more mixed.
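
These two values can be verified by hand from the class frequencies (a short worked calculation added here for clarity):

H_{\text{set 1}} = -\frac{2}{5}\log_2\frac{2}{5} - \frac{3}{5}\log_2\frac{3}{5} \approx 0.9710

H_{\text{set 2}} = -\frac{1}{5}\log_2\frac{1}{5} - \frac{1}{5}\log_2\frac{1}{5} - \frac{3}{5}\log_2\frac{3}{5} \approx 1.3710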

How to split the data set

Each element of a data set has many attributes (features), and we need to determine the order in which to split on them. Once that order is known, building the final decision tree is straightforward. To work this out, we first need to be able to split a data set.

Let the original data set be S, with attributes X1, X2, ..., Xk. If we split S on the i-th attribute with some value v, we obtain a new subset Si, which contains every element of S whose i-th attribute equals v, with the i-th attribute itself removed from each element (fewer remaining attributes helps the recursion converge).

Example:

Create a module split_data_set.py with the following code:

def split_data_set(data_set, axis, value):
    """
    Split the data set on the given axis and value. Only elements whose value on the
    axis equals the given value are returned, and the returned elements do not include
    the value at the axis index.
    :param data_set: data set
    :param axis: axis index
    :param value: given value
    :return: sub data set
    """
    return_data_set = []
    for data in data_set:
        if data[axis] == value:
            reduced_data_set = data[:axis]
            reduced_data_set.extend(data[axis + 1:])
            return_data_set.append(reduced_data_set)
    return return_data_set


if __name__ == '__main__':
    data_set = [[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
    print("Original data set: %r" % data_set)
    result = split_data_set(data_set, 0, 1)
    print("Split index 0, value 1: %r" % result)
    result = split_data_set(data_set, 1, 0)
    print("Split index 1, value 0: %r" % result)

Output:

D:\work\python_workspace\machine_learning\venv\Scripts\python.exe D:/work/python_workspace/machine_learning/decision_tree/split_data_set.py
Original data set: [[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
Split index 0, value 1: [[1, 'yes'], [1, 'yes'], [0, 'no']]
Split index 1, value 0: [[1, 'no']]
Process finished with exit code 0

How to choose the best split

Now that we know how to split a data set, which attribute gives the best split? We should prefer the attribute that yields the largest information gain, because the resulting subsets are then as uniform as possible, which keeps the decision tree small and shallow.

Suppose the data set S has n classes C1, C2, ..., Cn, and splitting on the i-th attribute over all of its values (vi1, vi2, ..., vim) produces m subsets Si1, Si2, ..., Sim. Using the entropy formula above, their entropies are Hi1, Hi2, ..., Him, and the combined entropy of these m subsets is:

H_i = \sum_{j=1}^{m} p(S_{ij}) H_{ij}

where p(S_{ij}) is the weight of subset S_{ij} relative to the original set S, i.e. the number of elements in S_{ij} divided by the number of elements in S.

Since the elements of S_{ij} no longer contain the i-th attribute, this combined entropy measures how uniform the data is with respect to the remaining attributes. We compute this entropy for a split on each attribute in turn, and choose the attribute whose split gives the largest information gain as the best split.

Example:

Create a module best_feature.py with the following code:

import decision_tree.entropy as entropy
import decision_tree.split_data_set as sp


def choose_best_feature_to_split(data_set):
    """Return the index of the feature whose split yields the largest information gain."""
    feature_size = len(data_set[0]) - 1
    base_entropy = entropy.cal_entropy(data_set)
    best_info_gain = 0.0
    best_feature = -1
    for feature_index in range(feature_size):
        # All element feature values on this axis; this is a set, so duplicated values are removed
        feature_set = set([element[feature_index] for element in data_set])
        feature_entropy = 0.0
        for feature_value in feature_set:
            sub_data_set = sp.split_data_set(data_set, feature_index, feature_value)
            prob = len(sub_data_set) / float(len(data_set))
            feature_entropy += prob * entropy.cal_entropy(sub_data_set)
        info_gain = base_entropy - feature_entropy
        # print("Info gain on feature %r is %r" % (feature_index, info_gain))
        if info_gain > best_info_gain:
            best_info_gain = info_gain
            best_feature = feature_index
    return best_feature


if __name__ == "__main__":
    data_set = [[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
    best_feature = choose_best_feature_to_split(data_set)
    print("The best feature is: %r" % best_feature)

Output:

D:\work\python_workspace\machine_learning\venv\Scripts\python.exe D:/work/python_workspace/machine_learning/decision_tree/best_feature.py
The best feature is: 0
Process finished with exit code 0

The output is 0, meaning that for this data set, splitting on the first attribute is the best split.
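
This can be verified by hand (a short worked calculation added here for clarity). The base entropy is H(S) ≈ 0.9710. Splitting on feature 0 produces the subsets [[1, 'yes'], [1, 'yes'], [0, 'no']] and [[1, 'no'], [1, 'no']]:

\frac{3}{5}\left(-\frac{2}{3}\log_2\frac{2}{3} - \frac{1}{3}\log_2\frac{1}{3}\right) + \frac{2}{5}\cdot 0 \approx 0.5510, \qquad \Delta H \approx 0.4200

Splitting on feature 1 produces the subsets [[1, 'yes'], [1, 'yes'], [0, 'no'], [0, 'no']] and [[1, 'no']]:

\frac{4}{5}\cdot 1.0 + \frac{1}{5}\cdot 0 = 0.8, \qquad \Delta H \approx 0.1710

Feature 0 yields the larger information gain, so it is chosen.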

Building the decision tree

We can build the tree recursively: find the best split attribute of the given set, split the set into subsets on that attribute, then find the best split attribute within each subset and split again, and so on. The recursion stops when either of two conditions is met: (1) all elements of the subset belong to the same class, or (2) all attributes have been used up; if different classes still remain at that point, the majority class is taken as the final class.

Example:

Create a module create_tree.py with the following code:

import operator
import decision_tree.best_feature as bf
import decision_tree.split_data_set as sp


def majority_count(class_list):
    """Return the class that appears most often in class_list."""
    class_count = {}
    for value in class_list:
        if value not in class_count.keys():
            class_count[value] = 0
        class_count[value] += 1
    sorted_class_count = sorted(class_count.items(), key=operator.itemgetter(1), reverse=True)
    return sorted_class_count[0][0]


def create_decision_tree(data_set, labels):
    class_list = [element[-1] for element in data_set]
    # Stop condition 1: all elements belong to the same class
    if class_list.count(class_list[0]) == len(class_list):
        return class_list[0]
    # Stop condition 2: no attributes left, return the majority class
    if len(data_set[0]) == 1:
        return majority_count(class_list)
    best_feature = bf.choose_best_feature_to_split(data_set)
    best_feature_label = labels[best_feature]
    tree = {best_feature_label: {}}
    del labels[best_feature]
    feature_set = set([element[best_feature] for element in data_set])
    for feature in feature_set:
        sub_labels = labels[:]
        sub_data_set = sp.split_data_set(data_set, best_feature, feature)
        tree[best_feature_label][feature] = create_decision_tree(sub_data_set, sub_labels)
    return tree


if __name__ == "__main__":
    data_set = [[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
    labels = ['A', 'B']
    tree = create_decision_tree(data_set, labels)
    print(tree)

Note: A is the name of the first attribute and B is the name of the second attribute.

Output:

D:\work\python_workspace\machine_learning\venv\Scripts\python.exe D:/work/python_workspace/machine_learning/decision_tree/create_tree.py
{'A': {0: 'no', 1: {'B': {0: 'no', 1: 'yes'}}}}
Process finished with exit code 0

The decision tree is output as a dict, whose keys are attribute labels and attribute values in alternation. Note that this decision tree is an ordinary tree, not a binary tree, because an attribute may take more than two values.

Visualizing the decision tree

We can draw the decision tree with the matplotlib library to make it easier to understand at a glance.

Example:

For the decision tree above, {'A': {0: 'no', 1: {'B': {0: 'no', 1: 'yes'}}}}, we draw its figure. Create a module plot_tree.py with the following code:

import matplotlib.pyplot as plt

decisionNode = dict(boxstyle="sawtooth", fc="0.8")
leafNode = dict(boxstyle="round4", fc="0.8")
arrow_args = dict(arrowstyle="<-")


def get_num_leafs(tree):
    num_leafs = 0
    first_str = list(tree.keys())[0]
    second_dict = tree[first_str]
    for key in second_dict.keys():
        # Test whether the node is a dictionary; if not, it is a leaf node
        if type(second_dict[key]).__name__ == 'dict':
            num_leafs += get_num_leafs(second_dict[key])
        else:
            num_leafs += 1
    return num_leafs


def get_tree_depth(tree):
    max_depth = 0
    first_str = list(tree.keys())[0]
    second_dict = tree[first_str]
    for key in second_dict.keys():
        # Test whether the node is a dictionary; if not, it is a leaf node
        if type(second_dict[key]).__name__ == 'dict':
            this_depth = 1 + get_tree_depth(second_dict[key])
        else:
            this_depth = 1
        if this_depth > max_depth:
            max_depth = this_depth
    return max_depth


def plot_node(node_txt, center_pt, parent_pt, node_type):
    create_plot.ax1.annotate(node_txt, xy=parent_pt, xycoords='axes fraction',
                             xytext=center_pt, textcoords='axes fraction',
                             va="center", ha="center", bbox=node_type, arrowprops=arrow_args)


def plot_mid_text(center_pt, parent_pt, txt_str):
    x_mid = (parent_pt[0] - center_pt[0]) / 2.0 + center_pt[0]
    y_mid = (parent_pt[1] - center_pt[1]) / 2.0 + center_pt[1]
    create_plot.ax1.text(x_mid, y_mid, txt_str, va="center", ha="center", rotation=30)


def plot_tree(tree, parent_pt, node_txt):  # the first key tells you what feature was split on
    num_leafs = get_num_leafs(tree)  # this determines the x width of this tree
    depth = get_tree_depth(tree)
    first_str = list(tree.keys())[0]  # the text label for this node
    cntr_pt = (plot_tree.xOff + (1.0 + float(num_leafs)) / 2.0 / plot_tree.totalW, plot_tree.yOff)
    plot_mid_text(cntr_pt, parent_pt, node_txt)
    plot_node(first_str, cntr_pt, parent_pt, decisionNode)
    second_dict = tree[first_str]
    plot_tree.yOff = plot_tree.yOff - 1.0 / plot_tree.totalD
    for key in second_dict.keys():
        # Test whether the node is a dictionary; if not, it is a leaf node
        if type(second_dict[key]).__name__ == 'dict':
            plot_tree(second_dict[key], cntr_pt, str(key))  # recursion
        else:  # it's a leaf node, plot the leaf node
            plot_tree.xOff = plot_tree.xOff + 1.0 / plot_tree.totalW
            plot_node(second_dict[key], (plot_tree.xOff, plot_tree.yOff), cntr_pt, leafNode)
            plot_mid_text((plot_tree.xOff, plot_tree.yOff), cntr_pt, str(key))
    plot_tree.yOff = plot_tree.yOff + 1.0 / plot_tree.totalD


def create_plot(tree):
    fig = plt.figure(1, facecolor='white')
    fig.clf()
    axprops = dict(xticks=[], yticks=[])
    create_plot.ax1 = plt.subplot(111, frameon=False, **axprops)  # no ticks
    # create_plot.ax1 = plt.subplot(111, frameon=False)  # ticks for demo purposes
    plot_tree.totalW = float(get_num_leafs(tree))
    plot_tree.totalD = float(get_tree_depth(tree))
    plot_tree.xOff = -0.5 / plot_tree.totalW
    plot_tree.yOff = 1.0
    plot_tree(tree, (0.5, 1.0), '')
    plt.show()


def retrieve_tree(i):
    list_of_tree = [{'A': {0: 'no', 1: {'B': {0: 'no', 1: 'yes'}}}},
                    {'A': {0: 'no', 1: {'B': {0: {'head': {0: 'no', 1: 'yes'}}, 1: 'no'}}}}]
    return list_of_tree[i]


if __name__ == '__main__':
    tree = retrieve_tree(0)
    print(tree)
    create_plot(tree)

Output:

D:\work\python_workspace\machine_learning\venv\Scripts\python.exe D:/work/python_workspace/machine_learning/decision_tree/plot_tree.py
{'A': {0: 'no', 1: {'B': {0: 'no', 1: 'yes'}}}}

A matplotlib window showing the tree figure also opens (the call to plt.show()).

Testing the decision tree

Once the decision tree has been built, we want to use it to classify given data. The method is to walk through the tree's nodes in order, following the attribute values of the given data, until a class label is reached.

Example:

Create a module classify.py with the following code:

import decision_tree.create_tree as ct


def classify(tree, feature_labels, test_vec):
    """Walk down the tree using the attribute values in test_vec and return the class label."""
    first_str = list(tree.keys())[0]
    second_dict = tree[first_str]
    feat_index = feature_labels.index(first_str)
    for value in second_dict.keys():
        if test_vec[feat_index] == value:
            if type(second_dict[value]).__name__ == 'dict':
                class_label = classify(second_dict[value], feature_labels, test_vec)
            else:
                class_label = second_dict[value]
    return class_label


if __name__ == '__main__':
    data_set = [[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
    labels = ['A', 'B']
    tree = ct.create_decision_tree(data_set, ['A', 'B'])
    print(tree)
    obj_leabl = classify(tree, labels, [1, 0])
    print("The label of [1, 0] is %r" % obj_leabl)
    obj_leabl = classify(tree, labels, [1, 1])
    print("The label of [1, 1] is %r" % obj_leabl)
    obj_leabl = classify(tree, labels, [0, 1])
    print("The label of [0, 1] is %r" % obj_leabl)

Output:

D:\work\python_workspace\machine_learning\venv\Scripts\python.exe D:/work/python_workspace/machine_learning/decision_tree/classify.py
{'A': {0: 'no', 1: {'B': {0: 'no', 1: 'yes'}}}}
The label of [1, 0] is 'no'
The label of [1, 1] is 'yes'
The label of [0, 1] is 'no'
Process finished with exit code 0

Storing and loading the decision tree

Building a decision tree often takes a fair amount of time. We can store a built decision tree on disk and load it when needed, so the tree does not have to be rebuilt every time. This can be done with Python's built-in pickle module.

Example:

Create a module store.py with the following code:

import decision_tree.create_tree as ct
import decision_tree.classify as cl
import pickle


def store_tree(tree, file_path):
    with open(file_path, 'wb') as f:
        pickle.dump(tree, f)


def restore_tree(file_path):
    with open(file_path, 'rb') as f:
        return pickle.load(f)


if __name__ == '__main__':
    data_set = [[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'], [0, 1, 'no'], [0, 1, 'no']]
    tree = ct.create_decision_tree(data_set, ['A', 'B'])
    print(tree)
    store_tree(tree, './my_tree.data')
    tree = restore_tree('./my_tree.data')
    print(tree)
    obj_leabl = cl.classify(tree, ['A', 'B'], [1, 1])
    print("The label of [1, 1] is %r" % obj_leabl)

Output:

D:\work\python_workspace\machine_learning\venv\Scripts\python.exe D:/work/python_workspace/machine_learning/decision_tree/store.py
{'A': {0: 'no', 1: {'B': {0: 'no', 1: 'yes'}}}}
{'A': {0: 'no', 1: {'B': {0: 'no', 1: 'yes'}}}}
The label of [1, 1] is 'yes'
Process finished with exit code 0

As shown, the built decision tree can be stored in the file my_tree.data and then loaded back from that file.

Pros and cons of ID3

The decision trees discussed so far are all based on ID3. The algorithm's advantage is that it is relatively simple and easy to implement. Its drawbacks are that it can overfit when the tree grows too many branches and it has no way to prune unnecessary leaf nodes; in addition, ID3 can only split nominal (categorical) data and cannot handle continuous numeric values.
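
A common workaround for the last limitation is to discretize a continuous attribute before building the tree. The sketch below only illustrates the idea; the helper name and threshold are my own and are not part of the original modules:

    def discretize(data_set, axis, threshold):
        # Replace a continuous value on the given axis with 1 (>= threshold) or 0 (< threshold).
        result = []
        for data in data_set:
            copy = data[:]
            copy[axis] = 1 if copy[axis] >= threshold else 0
            result.append(copy)
        return result

    # e.g. turn a continuous "age in years" column into a binary "age >= 30" feature:
    # binary_set = discretize(raw_data_set, axis=0, threshold=30)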

Case study: predicting contact lens type with a decision tree

Finally, we put everything together with a practical case: using a decision tree to predict whether a patient should be fitted with contact lenses and, if so, which lens material to use.

Attributes of the data set and their values:

Attribute                       Values
age (patient age)               young, pre (pre-presbyopic), presbyopic
prescript (prescription)        myope (nearsighted), hyper (farsighted)
astigmatic (astigmatic or not)  yes, no
tearRate (tear production)      normal, reduced

Note: the text in parentheses explains each attribute; the attribute names and values themselves appear exactly as in the data file.

Classes of the data set:

Class        Meaning
no lenses    the patient should not be fitted with contact lenses
soft         fit soft contact lenses
hard         fit hard contact lenses

Save the data set to a file lenses.txt (the columns are tab-separated, since the code below splits each line on '\t'). The columns are, in order: age, prescription, astigmatic, tear rate, and class.

young myope no reduced no lenses
young myope no normal soft
young myope yes reduced no lenses
young myope yes normal hard
young hyper no reduced no lenses
young hyper no normal soft
young hyper yes reduced no lenses
young hyper yes normal hard
pre myope no reduced no lenses
pre myope no normal soft
pre myope yes reduced no lenses
pre myope yes normal hard
pre hyper no reduced no lenses
pre hyper no normal soft
pre hyper yes reduced no lenses
pre hyper yes normal no lenses
presbyopic myope no reduced no lenses
presbyopic myope no normal no lenses
presbyopic myope yes reduced no lenses
presbyopic myope yes normal hard
presbyopic hyper no reduced no lenses
presbyopic hyper no normal soft
presbyopic hyper yes reduced no lenses
presbyopic hyper yes normal no lenses

Create a module lenses.py with the following code:

import decision_tree.create_tree as ct
import decision_tree.plot_tree as pt
import decision_tree.classify as cf


def get_tree():
    with open('./lenses.txt') as f:
        lenses = [inst.strip().split('\t') for inst in f.readlines()]
    tree = ct.create_decision_tree(lenses, get_lense_labels())
    return tree


def get_lense_labels():
    labels = ['age', 'prescript', 'astigmatic', 'tearRate']
    return labels


def plot_tree():
    tree = get_tree()
    pt.create_plot(tree)


def test_classify():
    tree = get_tree()
    print(tree)
    labels = get_lense_labels()
    test_data = ['young', 'hyper', 'yes', 'normal']
    result = cf.classify(tree, labels, test_data)
    print("The label of data %r is %r" % (test_data, result))


if __name__ == '__main__':
    test_classify()
    plot_tree()

Output:

D:\work\python_workspace\machine_learning\venv\Scripts\python.exe D:/work/python_workspace/machine_learning/decision_tree/lenses.py
{'tearRate': {'normal': {'astigmatic': {'no': {'age': {'pre': 'soft', 'presbyopic': {'prescript': {'myope': 'no lenses', 'hyper': 'soft'}}, 'young': 'soft'}}, 'yes': {'prescript': {'myope': 'hard', 'hyper': {'age': {'pre': 'no lenses', 'presbyopic': 'no lenses', 'young': 'hard'}}}}}}, 'reduced': 'no lenses'}}
The label of data ['young', 'hyper', 'yes', 'normal'] is 'hard'

As we can see, the tree correctly decides whether the patient should be fitted with contact lenses and which lens material to use.
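
As a small follow-up (a sketch combining the earlier store module with this case, not part of the original article; it assumes all modules live in the decision_tree package and that lenses.txt is in the working directory), the lenses tree can be persisted and reused without rebuilding it:

    import decision_tree.store as st
    import decision_tree.classify as cf
    import decision_tree.lenses as ls

    tree = ls.get_tree()
    st.store_tree(tree, './lenses_tree.data')      # build once, save to disk
    tree = st.restore_tree('./lenses_tree.data')   # later: load instead of rebuilding
    print(cf.classify(tree, ls.get_lense_labels(),
                      ['presbyopic', 'myope', 'yes', 'normal']))  # 'hard', per the tree printed above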
