
Machine Learning (Tom M. Mitchell), Reading Notes, Part 3: Chapter 2


1. Introduction (about machine learning)

2. Concept Learning and the General-to-Specific Ordering

3. Decision Tree Learning

4. Artificial Neural Networks

5. Evaluating Hypotheses

6. Bayesian Learning

7. Computational Learning Theory

8. Instance-Based Learning

9. Genetic Algorithms

10. Learning Sets of Rules

11. Analytical Learning

12. Combining Inductive and Analytical Learning

13. Reinforcement Learning


2. Concept Learning and the General-to-Specific Ordering

In this chapter we consider the problem of automatically inferring the general definition of some concept, given examples labeled as members or nonmembers of the concept. This task is commonly referred to as concept learning, or approximating a boolean-valued function from examples.

Concept learning: Inferring a boolean-valued function from training examples of its input and output.

A CONCEPT LEARNING TASK: "days on which my friend Aldo enjoys his favorite water sport". To summarize, the EnjoySport concept learning task requires learning the set of days for which EnjoySport = Yes, describing this set by a conjunction of constraints over the instance attributes. In general, any concept learning task can be described by the set of instances over which the target function is defined, the target function, the set of candidate hypotheses considered by the learner, and the set of available training examples. The definition of the EnjoySport concept learning task in this general form is given in Table 2.2.
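The "conjunction of constraints over the instance attributes" representation can be sketched in a few lines of Python. This is an illustrative sketch, not the book's code: a hypothesis constrains each of the six attributes to a specific value or to "?" (any value acceptable); the sample days are modeled on the positive and negative examples of Table 2.1, and their exact attribute values should be checked against the book.

```python
# Sketch of the EnjoySport hypothesis representation (assumption: a hypothesis
# is a tuple of per-attribute constraints; "?" accepts any value, a literal
# value accepts only itself; the book's "empty" constraint matches nothing).
ATTRIBUTES = ("Sky", "AirTemp", "Humidity", "Wind", "Water", "Forecast")

def matches(hypothesis, instance):
    """True iff every attribute constraint is satisfied by the instance."""
    return all(h in ("?", value) for h, value in zip(hypothesis, instance))

# Hypothesis: Aldo enjoys the sport on sunny, warm days with strong wind.
h = ("Sunny", "Warm", "?", "Strong", "?", "?")

day1 = ("Sunny", "Warm", "Normal", "Strong", "Warm", "Same")   # a positive day
day3 = ("Rainy", "Cold", "High", "Strong", "Warm", "Change")   # a negative day

print(matches(h, day1))  # True
print(matches(h, day3))  # False
```

A hypothesis classifies an instance as positive exactly when all its constraints are satisfied, which is why the task reduces to finding the right conjunction.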


Notation: Throughout this book, we employ the following terminology when discussing concept learning problems. The set of items over which the concept is defined is called the set of instances, which we denote by X. In the current example, X is the set of all possible days, each represented by the attributes Sky, AirTemp, Humidity, Wind, Water, and Forecast. The concept or function to be learned is called the target concept, which we denote by c. In general, c can be any boolean-valued function defined over the instances X; that is, c : X -> {0, 1}. In the current example, the target concept corresponds to the value of the attribute EnjoySport (i.e., c(x) = 1 if EnjoySport = Yes, and c(x) = 0 if EnjoySport = No).

When learning the target concept, the learner is presented a set of training examples, each consisting of an instance x from X, along with its target concept value c(x) (e.g., the training examples in Table 2.1). Instances for which c(x) = 1 are called positive examples, or members of the target concept. Instances for which c(x) = 0 are called negative examples, or nonmembers of the target concept. We will often write the ordered pair <x, c(x)> to describe the training example consisting of the instance x and its target concept value c(x). We use the symbol D to denote the set of available training examples.

Given a set of training examples of the target concept c, the problem faced by the learner is to hypothesize, or estimate, c. We use the symbol H to denote the set of all possible hypotheses that the learner may consider regarding the identity of the target concept. Usually H is determined by the human designer's choice of hypothesis representation. In general, each hypothesis h in H represents a boolean-valued function defined over X; that is, h : X -> {0, 1}. The goal of the learner is to find a hypothesis h such that h(x) = c(x) for all x in X.
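The whole cast of symbols (X, c, D, H, and the consistency goal h(x) = c(x)) can be made concrete with a toy instance space small enough to enumerate. This is a minimal sketch under stated assumptions: the two binary attributes and the target concept "sunny days" are invented for illustration, and H is the space of "?"-conjunctions only (the book's all-negative hypothesis is omitted for brevity).

```python
# Minimal sketch of the notation: X (instance space), c (target concept),
# D (training examples <x, c(x)>), H (hypothesis space), and the learner's
# goal of finding h consistent with c. Attribute values are hypothetical.
from itertools import product

values = {"Sky": ("Sunny", "Rainy"), "Wind": ("Strong", "Weak")}

def c(x):
    """Target concept: enjoy the sport exactly on sunny days."""
    return 1 if x[0] == "Sunny" else 0

X = list(product(*values.values()))   # all 4 possible instances
D = [(x, c(x)) for x in X]            # training examples <x, c(x)>

# H: conjunctions of per-attribute constraints, where "?" matches anything.
H = list(product(("Sunny", "Rainy", "?"), ("Strong", "Weak", "?")))

def h_of(hyp):
    """Turn a constraint tuple into a boolean-valued function h : X -> {0, 1}."""
    return lambda x: 1 if all(hc in ("?", xv) for hc, xv in zip(hyp, x)) else 0

consistent = [hyp for hyp in H if all(h_of(hyp)(x) == cx for x, cx in D)]
print(consistent)  # [('Sunny', '?')]
```

Here exactly one hypothesis in H satisfies h(x) = c(x) on every instance; in realistic tasks X cannot be enumerated, which is why later sections order H from general to specific instead of searching it exhaustively.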

The inductive learning hypothesis: Any hypothesis found to approximate the target function well over a sufficiently large set of training examples will also approximate the target function well over other unobserved examples.
