http://blog.csdn.net/pipisorry/article/details/52247679
This post covers feature preprocessing (standardization, scaling/normalization, unit-norm normalization, feature binarization, missing-value handling) and label preprocessing (label binarization, multi-label binarization).
[Mean, variance and the covariance matrix]
Note: always be clear about what is being normalized — the features or the samples.
Standardization: mean removal and variance scaling
Data standardization: when the values of a single feature vary over a very wide range, or clearly do not follow a Gaussian (normal) distribution, many estimators perform poorly. In practice the exact shape of the feature distribution is usually ignored; each feature's mean is removed and the values are divided by the feature's standard deviation, thereby scaling and centering the data.
Note: the test set must receive exactly the same preprocessing as the training set (standardization, data transformation, etc.).
[Data standardization / normalization]
from sklearn import preprocessing
preprocessing.scale(X)  # equivalent to StandardScaler below
def scale(X, axis=0, with_mean=True, with_std=True, copy=True)
Note that scikit-learn assumes that all features are centered around zero and have variance in the same order. The formula is (X - X_mean) / X_std, computed separately for each feature/column.
Parameters:
X: {array-like, sparse matrix}. Arrays and matrices both work; 1-D input used to be accepted but raises an error since version 0.19.
Author's note: X must not contain NaN, which is the biggest difference from handling the data directly with pandas [pandas notes: data cleaning — missing, duplicate and replacement values]. To locate the rows/columns containing NaN: print(np.where(np.isnan(dense_features_df))).
axis: int, default 0; the axis along which means and standard deviations are computed. 0 standardizes each feature (column) independently, 1 standardizes each sample (row).
with_mean: boolean, default True; center the data to zero mean.
with_std: boolean, default True; scale the data to unit variance.
This is the classic z-score standardization (zero-mean normalization).
Example: scale
>>> from sklearn import preprocessing
>>> import numpy as np
>>> X = np.array([[ 1., -1.,  2.],
...               [ 2.,  0.,  0.],
...               [ 0.,  1., -1.]])
>>> X_scaled = preprocessing.scale(X)

>>> X_scaled
array([[ 0.  ..., -1.22...,  1.33...],
       [ 1.22...,  0.  ..., -0.26...],
       [-1.22...,  1.22..., -1.06...]])
One possible way to handle 1-D data: convert it to 2-D first, then flatten the result back to 1-D:
cn = preprocessing.scale([[p] for _, _, p in cn]).reshape(-1)
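A self-contained version of the same pattern (the array here is just illustrative):
import numpy as np
from sklearn import preprocessing

p = np.array([3., 7., 11.])                                # a 1-D feature
p_scaled = preprocessing.scale(p.reshape(-1, 1)).reshape(-1)  # 2-D in, flatten back to 1-D
print(p_scaled)
# [-1.22474487  0.          1.22474487]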
The data transformed by scale has zero mean and unit variance (variance equal to 1):
>>> X_scaled.mean(axis=0)
array([ 0., 0., 0.])

>>> X_scaled.std(axis=0)
array([ 1., 1., 1.])
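As a quick sanity check, this agrees with the per-column formula (X - X_mean) / X_std given above:
>>> np.allclose(X_scaled, (X - X.mean(axis=0)) / X.std(axis=0))
True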
Standardization is normally fitted on the training set first; the test set must then be standardized with the same mean and variance, so the standardization parameters learned on the training set should be saved.
The preprocessing module further provides a utility class StandardScaler that implements the Transformer API to compute the mean and standard deviation on a training set so as to be able to later reapply the same transformation on the testing set. This class is hence suitable for use in the early steps of a sklearn.pipeline.Pipeline:
>>> scaler = preprocessing.StandardScaler().fit(X)
>>> scaler
StandardScaler(copy=True, with_mean=True, with_std=True)

>>> scaler.mean_
array([ 1. ..., 0. ..., 0.33...])

>>> scaler.scale_
array([ 0.81..., 0.81..., 1.24...])

>>> scaler.transform(X)
array([[ 0.  ..., -1.22...,  1.33...],
       [ 1.22...,  0.  ..., -0.26...],
       [-1.22...,  1.22..., -1.06...]])
The scaler instance can then be used on new data to transform it the same way it did on the training set:
>>> scaler.transform([[-1., 1., 0.]])
array([[-2.44..., 1.22..., -0.26...]])
It is possible to disable either centering or scaling by passing with_mean=False or with_std=False to the constructor of StandardScaler. [StandardScaler]
[Standardization, or mean removal and variance scaling]
StandardScaler examples
Example
from sklearn.preprocessing import StandardScaler
data = [[0, 1], [0, 3], [1, 2], [1, 2]]
scaler = StandardScaler()
scaler.fit(data)
print(scaler.mean_)
print(scaler.scale_)
print(scaler.transform(data))
print(scaler.transform([[2, 2]]))
[0.5 2. ]
[0.5 0.70710678]
[[-1. -1.41421356]
[-1. 1.41421356]
[ 1. 0. ]
[ 1. 0. ]]
Example 1
''' dense feature process'''
df_dense = df.loc[:, args.dense_cols]
# with open(os.path.join(REAL_PATH, '../data/scaler.pkl'), 'rb') as f:
#     scaler = pickle.load(f)
scaler = preprocessing.StandardScaler().fit(df_dense)
with open(os.path.join(REAL_PATH, '../data/scaler.pkl'), 'wb') as f:
    pickle.dump(scaler, f)
dense_index = df_dense.index
df_dense = pd.DataFrame(scaler.transform(df_dense), index=dense_index, columns=args.dense_cols)
Example 2
def preprocess():
    if not os.path.exists(os.path.join(DIR, train_file1)) or not os.path.exists(os.path.join(DIR, test_file1)) or 0:
        xy = np.loadtxt(os.path.join(DIR, train_file), delimiter=',', dtype=float)
        x, y = xy[:, 0:-1], xy[:, -1]
        scaler = preprocessing.StandardScaler().fit(x)
        xy = np.hstack([scaler.transform(x), y.reshape(-1, 1)])  # y must be 2-D for hstack
        np.savetxt(os.path.join(DIR, train_file1), xy, fmt='%.7f')
        x_test = np.loadtxt(os.path.join(DIR, test_file), delimiter=',', dtype=float)
        x_test = scaler.transform(x_test)
        np.savetxt(os.path.join(DIR, test_file1), x_test, fmt='%.7f')
    else:
        print('data loading...')
        xy = np.loadtxt(os.path.join(DIR, train_file1), dtype=float)
        x_test = np.loadtxt(os.path.join(DIR, test_file1), dtype=float)
    return xy[:, 0:-1], xy[:, -1], x_test
Note:
A Pipeline can simplify this process (see Pipeline and FeatureUnion: combining estimators; translated article: scikit-learn: 4.1. Pipeline and FeatureUnion: combining estimators (combining features with estimators; combining features with features) - 程序园):
>>> from sklearn.pipeline import make_pipeline
>>> clf = make_pipeline(preprocessing.StandardScaler(), svm.SVC(C=1))
>>> cross_validation.cross_val_score(clf, iris.data, iris.target, cv=cv)
...
array([ 0.97..., 0.93..., 0.95...])
Scaling features to a given minimum and maximum (usually 0 to 1) can be done with the preprocessing.MinMaxScaler class.
The motivations for using this method include:
1. giving more stability to features with very small variance;
2. preserving zero entries in sparse data.
min_max_scaler = preprocessing.MinMaxScaler()
X_minMax = min_max_scaler.fit_transform(X)
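A brief sketch of typical MinMaxScaler usage (the data mirrors the scikit-learn user guide example): fit on the training set, then apply the same min/max to new data:
from sklearn import preprocessing
import numpy as np

X_train = np.array([[1., -1., 2.],
                    [2., 0., 0.],
                    [0., 1., -1.]])
min_max_scaler = preprocessing.MinMaxScaler()  # default feature_range=(0, 1)
X_train_minmax = min_max_scaler.fit_transform(X_train)
print(X_train_minmax)
# [[0.5        0.         1.        ]
#  [1.         0.5        0.33333333]
#  [0.         1.         0.        ]]

# the min/max learned on the training set is reused on new data
X_test = np.array([[-3., -1., 4.]])
print(min_max_scaler.transform(X_test))
# [[-1.5         0.          1.66666667]]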
sklearn.preprocessing.robust_scale(X, axis=0, with_centering=True, with_scaling=True, quantile_range=(25.0, 75.0), copy=True)
Center to the median and scale component-wise according to the interquartile range.
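A small illustrative example (the data is made up to show the effect of an outlier); the transformer-API counterpart is RobustScaler:
from sklearn import preprocessing
import numpy as np

X = np.array([[1.], [2.], [3.], [4.], [100.]])  # 100 is an outlier
print(preprocessing.robust_scale(X).ravel())
# centered on the median (3) and scaled by the IQR (4 - 2 = 2)
# [-1.  -0.5  0.   0.5 48.5]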
Other
Constructs a transformer from an arbitrary callable.
The author wrote a custom normalization function: values above a THRESHOLD get a probability of belonging to class 1 greater than 0.5 and values below it less than 0.5; the closer a value is to the maximum the closer its probability is to 1, and the closer to the minimum the closer to 0. It is essentially a small refinement of min-max normalization.
from sklearn.preprocessing import FunctionTransformer
import numpy as np

def scalerFunc(x, maxv, minv, THRESHOLD=200):
    '''
    :param x: (n_samples, n_features)!!
    '''
    label = x >= THRESHOLD
    result = 0.5 * (1 + (x - THRESHOLD) * (label / (maxv - THRESHOLD) + (label - 1) / (minv - THRESHOLD)))
    # print(result)
    return result

x = np.array([100, 150, 201, 250, 300]).reshape(-1, 1)
scaler = FunctionTransformer(func=scalerFunc, kw_args={'maxv': x.max(), 'minv': x.min()}).fit(x)
print(scaler.transform(x))
# [[ 0.   ]
#  [ 0.25 ]
#  [ 0.505]
#  [ 0.75 ]
#  [ 1.   ]]
Note: the extra arguments of the custom function are supplied through FunctionTransformer's kw_args parameter, a dict whose keys must be strings.
[preprocessing.FunctionTransformer([func, ...])]
[sklearn.preprocessing: Preprocessing and Normalization]
Normalization is the process of scaling individual samples so that each has unit norm. This is useful if you plan to use a quadratic form such as the dot product, or any other kernel, to quantify the similarity of any pair of samples.
This assumption is the basis of the Vector Space Model often used in text classification and clustering contexts.
The main idea is to compute the p-norm of each sample and then divide every element of the sample by that norm, so that each processed sample has unit p-norm (l1-norm or l2-norm).
def normalize(X, norm='l2', axis=1, copy=True)
Note that this operation is applied to samples, not to features: each sample's values are divided by that sample's norm, which is why it operates along axis=1 by default.
>>> X = [[ 1., -1., 2.],
... [ 2., 0., 0.],
... [ 0., 1., -1.]]
>>> X_normalized = preprocessing.normalize(X, norm='l2')
>>> X_normalized
array([[ 0.40..., -0.40..., 0.81...],
[ 1. ..., 0. ..., 0. ...],
[ 0. ..., 0.70..., -0.70...]])
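The transformer-API counterpart is the Normalizer class; it is stateless (fit does nothing) but convenient inside a Pipeline:
>>> normalizer = preprocessing.Normalizer().fit(X)  # fit does nothing
>>> normalizer.transform(X)
array([[ 0.40..., -0.40...,  0.81...],
       [ 1.  ...,  0.  ...,  0.  ...],
       [ 0.  ...,  0.70..., -0.70...]])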
For various reasons, many real-world datasets contain missing values, encoded as blanks, NaNs or other placeholders. Such data cannot be fed directly to scikit-learn estimators, so it must be handled first. Fortunately, sklearn's Imputer class provides basic strategies for imputing missing values, e.g. replacing them with the mean, the median or the most frequent value of the column containing the missing value.
The Imputer class also supports sparse matrices.
>>> import numpy as np
>>> from sklearn.preprocessing import Imputer
>>> imp = Imputer(missing_values='NaN', strategy='mean', axis=0)
>>> imp.fit([[1, 2], [np.nan, 3], [7, 6]])
Imputer(axis=0, copy=True, missing_values='NaN', strategy='mean', verbose=0)
>>> X = [[np.nan, 2], [6, np.nan], [7, 6]]
>>> print(imp.transform(X))
[[ 4. 2. ]
[ 6. 3.666...]
[ 7. 6. ]]
That said, the author prefers to do this kind of handling with pandas [pandas notes: data cleaning — missing, duplicate and replacement values].
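For comparison, a minimal pandas sketch of the same mean imputation (the column names are illustrative):
import pandas as pd
import numpy as np

df = pd.DataFrame([[1, 2], [np.nan, 3], [7, 6]], columns=['a', 'b'])
df_filled = df.fillna(df.mean())   # fill NaN with each column's mean
print(df_filled)
#      a  b
# 0  1.0  2
# 1  4.0  3
# 2  7.0  6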
[Imputation of missing values]
[Generating polynomial features]
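The linked section covers PolynomialFeatures; a minimal sketch of what it does:
from sklearn.preprocessing import PolynomialFeatures
import numpy as np

X = np.arange(6).reshape(3, 2)
poly = PolynomialFeatures(degree=2)
print(poly.fit_transform(X))
# columns: 1, x1, x2, x1^2, x1*x2, x2^2
# [[ 1.  0.  1.  0.  0.  1.]
#  [ 1.  2.  3.  4.  6.  9.]
#  [ 1.  4.  5. 16. 20. 25.]]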
Binarize data (set feature values to 0 or 1) according to a threshold. LabelBinarizer is a utility class to help create a label indicator matrix from a list of multi-class labels. Feature binarization turns numerical features into boolean values.
sklearn.preprocessing.Binarizer takes a threshold: values greater than the threshold become 1, the rest become 0.
Example 1
y_pred_score = [[0.52, 0.43, 0.53, 0.47], [0.52, 0.43, 0.53, 0.56]]
from sklearn import preprocessing
bin = preprocessing.Binarizer(copy=True, threshold=0.5)
y_pred = bin.fit_transform(y_pred_score)
print(y_pred)
# [[1. 0. 1. 0.]
#  [1. 0. 1. 1.]]
Example 2
>>> X = [[ 1., -1., 2.],
... [ 2., 0., 0.],
... [ 0., 1., -1.]]
>>> binarizer = preprocessing.Binarizer().fit(X) # fit does nothing
>>> binarizer
Binarizer(copy=True, threshold=0.0)
>>> binarizer.transform(X)
array([[ 1., 0., 1.],
[ 1., 0., 0.],
[ 0., 1., 0.]])
MultiLabelBinarizer: transform between an iterable of iterables and a multilabel (indicator-matrix) format.
Number of classes:
mlb.classes_.size
ndarray of all class names:
mlb.classes_
Converting class names to class ids:
transform(self, y)
y: iterable of iterables. Any iterable works, as long as its elements appear in mlb.classes_. Returns a 2-D (n_samples, n_classes) multi-hot representation.
Converting class ids back to class names:
inverse_transform(self, yt)
Parameter yt: array or sparse matrix of shape (n_samples, n_classes), containing only 1s and 0s. It must be 2-D with a shape attribute (so a single 1-D sample must first be wrapped, e.g. np.array([ndarray_data]) or tf.expand_dims(tensor_data, 0)), and its values must be 0/1 indicators, not logits.
Returns y: list of tuples. The set of labels for each sample, such that y[i] consists of classes_[j] for each yt[i, j] == 1. The result is a flat list whose elements are tuples of labels (tuples, because the problem may be multi-label).
Example 1
# For repeated use, it is recommended to fit once at init time and only call transform later
class ClassificationTrainer(object):
    def __init__(self, label_size, ...):
        self.mlb = preprocessing.MultiLabelBinarizer()
        self.mlb.fit([range(label_size)])

        self.bin = preprocessing.Binarizer(copy=True, threshold=conf.eval.threshold)

    def eval(self, ...):
        ...
        y_true = self.mlb.transform(standard_labels)
        y_pred = self.bin.fit_transform(predict_probs)
Example 2
from sklearn import preprocessing

y_true_id = [[3, 1], [1], [4], [0]]
mlb = preprocessing.MultiLabelBinarizer()

# mlb.fit(y_true_id)
# y_true = mlb.transform(y_true_id)
y_true = mlb.fit_transform(y_true_id)
print(y_true)
# [[0 1 1 0]
#  [0 1 0 0]
#  [0 0 0 1]
#  [1 0 0 0]]
Example 3: fit on all tokens in a vocabulary file
mlb = MultiLabelBinarizer()
with open(os.path.join(DATADIR, 'vocab.tags.txt'), 'r', encoding='utf-8') as f:
mlb.fit([[l.strip() for l in f.readlines()]])
Example 4
from sklearn.preprocessing import MultiLabelBinarizer
import numpy as np

mlb = MultiLabelBinarizer()
ids = mlb.fit_transform([('a', 'b'), ('大', '小'), ('大',), ('左右', '晨')])
ids = mlb.transform(['a', '小'])
labels1 = mlb.inverse_transform(ids)
labels2 = mlb.inverse_transform(np.array([[0, 0, 0, 1, 0, 0]]))

print(ids)
print(mlb.classes_.size)
print(mlb.classes_)
print(ids)
print(labels1)
print(labels2)

[[1 0 0 0 0 0]
 [0 0 0 1 0 0]]
6
['a' 'b' '大' '小' '左右' '晨']
[[1 0 0 0 0 0]
 [0 0 0 1 0 0]]
[('a',), ('小',)]
[('小',)]
[preprocessing.MultiLabelBinarizer([classes, …])]
Encoding categorical features, or encoding binary-class labels
class sklearn.preprocessing.OneHotEncoder(*, categories='auto', drop=None, sparse='deprecated', sparse_output=True, dtype=<class 'numpy.float64'>, handle_unknown='error', min_frequency=None, max_categories=None, feature_name_combiner='concat')
Parameter handle_unknown='ignore': whether to raise an error or ignore if an unknown categorical feature is present during transform (default is to raise). When this parameter is set to 'ignore' and an unknown category is encountered during transform, the resulting one-hot encoded columns for that feature will be all zeros. In the inverse transform, an unknown category will be denoted as None.
Method get_feature_names(raw_cols): pass in the original column names and it returns the new column names (the number of columns usually changes).
Bugfix: TypeError: Encoders require their input to be uniformly strings or numbers. Got ['int', 'str']. The df_sparse passed to .fit(df_sparse) apparently needs a uniform dtype; the easiest fix is to cast everything to str: df[sparse_cols] = df[sparse_cols].astype(str)
[SciKit-Learn Label Encoder resulting in error 'argument must be a string or number']
y_true_id = [[3], [1], [4], [0]]
num_labels = max([i[0] for i in y_true_id]) + 1
# Approach 1: works whether the categories are numbers or strings
from sklearn import preprocessing
enc = preprocessing.OneHotEncoder(max_categories=num_labels, sparse_output=False)
# enc.fit(y_true_id)  # for repeated use, fit once at init time and only transform later
# y_true = enc.transform(y_true_id)
y_true = enc.fit_transform(y_true_id)
print(y_true)
# [[0. 0. 1. 0.]
# [0. 1. 0. 0.]
# [0. 0. 0. 1.]
# [1. 0. 0. 0.]]
# Approach 2: when the class ids are consecutive integers
import numpy as np
y_true = (np.arange(num_labels) == y_true_id).astype(np.float32)
print(y_true)
# [[0. 0. 0. 1. 0.]
# [0. 1. 0. 0. 0.]
# [0. 0. 0. 0. 1.]
# [1. 0. 0. 0. 0.]]
Example 1: processing flow with a pandas DataFrame
''' sparse feature process'''
df_sparse = df.loc[:, args.sparse_cols]
onehoter = preprocessing.OneHotEncoder(sparse=False, handle_unknown='ignore').fit(df_sparse)
# with open(os.path.join(REAL_PATH, '../data/onehoter.pkl'), 'rb') as f:
# onehoter = pickle.load(f)
new_sparse_cols = onehoter.get_feature_names(args.sparse_cols)
with open(os.path.join(REAL_PATH, '../data/onehoter.pkl'), 'wb') as f:
pickle.dump(onehoter, f)
df_sparse = pd.DataFrame(onehoter.transform(df_sparse), columns=new_sparse_cols)
Example 2: for sparse features, specify the categories the encoder recognizes (anything not listed is encoded as all zeros)
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
df = pd.DataFrame([[1, '3', 3.4],
[1, '2', 3.4],
[1, '3', 3.4],
[1, '0', 3.4],
[1, None, 3.4]]
, columns=['c1', 'c2', 'c3']
)
df.fillna(0, inplace=True)  # fillna on the categorical columns is optional here; the encoder will encode unrecognized values as all zeros
print(df.dtypes)
print(df)
df_sparse = df.loc[:, ['c1', 'c2']]
onehoter = OneHotEncoder(categories=[[1, 2, 3], ['1', '2', '3']], sparse=False, handle_unknown='ignore')
onehoter = onehoter.fit(df_sparse)
new_sparse_cols = onehoter.get_feature_names(['c1', 'c2']).tolist()
df_sparse = pd.DataFrame(onehoter.transform(df_sparse), columns=new_sparse_cols)
print(df_sparse)
c1 int64
c2 object
c3 float64
dtype: object
c1 c2 c3
0 1 3 3.4
1 1 2 3.4
2 1 3 3.4
3 1 0 3.4
4 1 0 3.4
c1_1 c1_2 c1_3 c2_1 c2_2 c2_3
0 1.0 0.0 0.0 0.0 0.0 1.0
1 1.0 0.0 0.0 0.0 1.0 0.0
2 1.0 0.0 0.0 0.0 0.0 1.0
3 1.0 0.0 0.0 0.0 0.0 0.0
4 1.0 0.0 0.0 0.0 0.0 0.0
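To illustrate the handle_unknown='ignore' behaviour described above (an unknown category encodes to all zeros and inverse-transforms to None), a small sketch:
from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder(handle_unknown='ignore', sparse_output=False)  # sparse=False on older versions
enc.fit([['a'], ['b'], ['c']])
print(enc.transform([['d']]))                 # unknown category -> all zeros
# [[0. 0. 0.]]
print(enc.inverse_transform([[0., 0., 0.]]))  # all zeros -> None
# [[None]]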
Other encoding methods
[Encoding categorical features]
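One of the alternatives covered there is integer (ordinal) coding with OrdinalEncoder; a brief sketch:
from sklearn.preprocessing import OrdinalEncoder

enc = OrdinalEncoder()
X = [['male', 'from US'], ['female', 'from Europe'], ['female', 'from US']]
print(enc.fit_transform(X))   # each categorical column becomes one integer-coded column
# [[1. 1.]
#  [0. 0.]
#  [0. 1.]]
print(enc.categories_)
# [array(['female', 'male'], dtype=object), array(['from Europe', 'from US'], dtype=object)]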
from: Scikit-learn:数据预处理Preprocessing data_皮皮blog-CSDN博客
ref: [sklearn.preprocessing: Preprocessing and Normalization]