Logistic Regression - Credit Card Fraud Detection

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

creditcard = 'C:/Users/Amber/Documents/唐宇迪-机器学习课程资料/机器学习算法配套案例实战/逻辑回归-信用卡欺诈检测/逻辑回归-信用卡欺诈检测/creditcard.csv'
data = pd.read_csv(creditcard)
print(data.head())
   Time        V1        V2        V3  ...       V27       V28  Amount  Class
0   0.0 -1.359807 -0.072781  2.536347  ...  0.133558 -0.021053  149.62      0
1   0.0  1.191857  0.266151  0.166480  ... -0.008983  0.014724    2.69      0
2   1.0 -1.358354 -1.340163  1.773209  ... -0.055353 -0.059752  378.66      0
3   1.0 -0.966272 -0.185226  1.792993  ...  0.062723  0.061458  123.50      0
4   2.0 -1.158233  0.877737  1.548718  ...  0.219422  0.215153   69.99      0

[5 rows x 31 columns]

In the data above, Time can be ignored. V1-V28 are 28 features, and Amount is the transaction amount. The features of this dataset have already been extracted; this is not raw data.

The goal of the analysis is to separate class 0 (normal transactions) from class 1 (fraudulent ones).

In a real dataset, normal samples dominate and fraudulent samples are extremely rare.

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

creditcard = 'C:/Users/Amber/Documents/唐宇迪-机器学习课程资料/机器学习算法配套案例实战/逻辑回归-信用卡欺诈检测/逻辑回归-信用卡欺诈检测/creditcard.csv'
data = pd.read_csv(creditcard)
# print(data.head())

count_classes = data['Class'].value_counts(sort = True).sort_index()
# value_counts() tells us how many samples have Class == 0 and how many have Class == 1
# In the Class column, 0 means normal and 1 means fraud
count_classes.plot(kind = 'bar')  # kind='bar' draws a bar chart
plt.title("Fraud class histogram")
plt.xlabel("Class")
plt.ylabel("Frequency")

The plot of the rough distribution shows the classes are highly imbalanced. For such datasets, oversampling or undersampling is commonly used.

Undersampling: balance the data by making both classes equally *small*. If there are a few hundred class-1 samples, draw the same few hundred class-0 samples and combine them, so class 0 and class 1 end up with the same (small) sample count.

Oversampling: balance the data by making both classes equally *large*. If there are only a few hundred class-1 samples, use a sample-generation strategy so that the generated class-1 samples match the class-0 count.
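The two strategies can be sketched on a toy frame (a minimal illustration with made-up data; it uses plain random duplication for oversampling, whereas this post later uses SMOTE instead):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy imbalanced dataset: 990 normal (0) vs 10 fraud (1) samples
df = pd.DataFrame({
    "feature": rng.normal(size=1000),
    "Class": [0] * 990 + [1] * 10,
})

# Undersampling: shrink the majority class down to the minority size
n_minority = (df["Class"] == 1).sum()
under = pd.concat([
    df[df["Class"] == 1],
    df[df["Class"] == 0].sample(n=n_minority, random_state=0),
])

# Naive oversampling: duplicate minority rows until the classes match
n_majority = (df["Class"] == 0).sum()
over = pd.concat([
    df[df["Class"] == 0],
    df[df["Class"] == 1].sample(n=n_majority, replace=True, random_state=0),
])

print(len(under), len(over))  # 20 1980
```

Undersampling throws information away; naive oversampling repeats the same minority rows, which is why SMOTE (synthesizing new points instead) is preferred later.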

Looking at the raw data, the Amount column has a much larger numeric range than the others. A common rule in machine learning is to bring all features to a comparable scale first.

Take V28 and Amount as an example: V28 lies roughly between -1 and 1, while Amount ranges from 0 into the hundreds. Many algorithms implicitly treat features with larger magnitudes as more important.

Use sklearn's preprocessing module (StandardScaler) to standardize the Amount column to zero mean and unit variance.

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.preprocessing import StandardScaler

creditcard = 'C:/Users/Amber/Documents/唐宇迪-机器学习课程资料/机器学习算法配套案例实战/逻辑回归-信用卡欺诈检测/逻辑回归-信用卡欺诈检测/creditcard.csv'
data = pd.read_csv(creditcard)

# Use sklearn's StandardScaler to standardize the Amount column (zero mean, unit variance)
data['normAmount'] = StandardScaler().fit_transform(data['Amount'].values.reshape(-1, 1))
# Note: StandardScaler().fit_transform(data['Amount'].reshape(-1, 1)) raises
# AttributeError: 'Series' object has no attribute 'reshape'; use
# StandardScaler().fit_transform(data['Amount'].values.reshape(-1, 1)) instead
data = data.drop(['Time','Amount'],axis=1) # drop the Time and Amount columns; they are no longer needed
print(data.head())
         V1        V2        V3  ...       V28  Class  normAmount
0 -1.359807 -0.072781  2.536347  ... -0.021053      0    0.244964
1  1.191857  0.266151  0.166480  ...  0.014724      0   -0.342475
2 -1.358354 -1.340163  1.773209  ... -0.059752      0    1.160686
3 -0.966272 -0.185226  1.792993  ...  0.061458      0    0.140534
4 -1.158233  0.877737  1.548718  ...  0.215153      0   -0.073403

The feature matrix is ready; next, undersampling.

X = data.loc[:, data.columns != 'Class']   # all rows, every column except the label Class
y = data.loc[:, data.columns == 'Class']   # all rows, only the Class label column

# Number of data points in the minority class
number_records_fraud = len(data[data.Class == 1])   # number of samples with Class == 1
fraud_indices = np.array(data[data.Class == 1].index) # row indices of the Class == 1 samples

# Picking the indices of the normal classes
normal_indices = data[data.Class == 0].index   # row indices of the Class == 0 samples

# Out of the indices we picked, randomly select "x" number (number_records_fraud)
random_normal_indices = np.random.choice(normal_indices, number_records_fraud, replace = False) # draw number_records_fraud samples from normal_indices without replacement
random_normal_indices = np.array(random_normal_indices) # collect the chosen indices into an array

# Appending the 2 indices
under_sample_indices = np.concatenate([fraud_indices,random_normal_indices])
# merge the selected indices

# Under sample dataset
under_sample_data = data.iloc[under_sample_indices,:]

X_undersample = under_sample_data.loc[:, under_sample_data.columns != 'Class']  # features
y_undersample = under_sample_data.loc[:, under_sample_data.columns == 'Class']  # label

# Showing ratio
print("Percentage of normal transactions: ", len(under_sample_data[under_sample_data.Class == 0])/len(under_sample_data))
print("Percentage of fraud transactions: ", len(under_sample_data[under_sample_data.Class == 1])/len(under_sample_data))
print("Total number of transactions in resampled data: ", len(under_sample_data))
Percentage of normal transactions:  0.5
Percentage of fraud transactions:  0.5
Total number of transactions in resampled data:  984

Each class now makes up 50%. Undersampling has drawbacks of its own (more on that below).

Next, cross-validation; the hard part is parameter selection, i.e. tuning.

from sklearn.model_selection import train_test_split

# Whole dataset: split the full data first
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 0.3, random_state = 0)
# test_size = 0.3 means 30% of the data becomes the test set and 70% the training set
# random_state = 0 makes the split reproducible
# train_test_split shuffles the data before splitting, so the split is random

print("Number transactions train dataset: ", len(X_train))
print("Number transactions test dataset: ", len(X_test))
print("Total number of transactions: ", len(X_train)+len(X_test))

# Undersampled dataset
X_train_undersample, X_test_undersample, y_train_undersample, y_test_undersample = train_test_split(X_undersample
                                                                                                   ,y_undersample
                                                                                                   ,test_size = 0.3
                                                                                                   ,random_state = 0)
print("")
print("Number transactions train dataset: ", len(X_train_undersample))
print("Number transactions test dataset: ", len(X_test_undersample))
print("Total number of transactions: ", len(X_train_undersample)+len(X_test_undersample))
Number transactions train dataset:  199364
Number transactions test dataset:  85443
Total number of transactions:  284807

Number transactions train dataset:  688
Number transactions test dataset:  296
Total number of transactions:  984

We train on the split from the undersampled data and evaluate on the test split from the original data.

Models are easy to build; how do we evaluate them?

1 - Accuracy. Accuracy is often misleading: on this data, a model that labels everything as normal scores high accuracy yet never finds the rare events, i.e. the frauds (the same problem arises in cancer screening, where such a model never detects a patient).

2 - Recall is the more appropriate metric here: Recall = TP/(TP+FN)

The higher the mean recall across folds, the better the model.
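A minimal illustration of why accuracy misleads on imbalanced data (made-up labels; an all-zeros "model" versus one that actually catches a fraud):

```python
from sklearn.metrics import accuracy_score, recall_score

# 98 normal transactions and 2 frauds
y_true = [0] * 98 + [1] * 2

# A useless model that predicts "normal" for everything
y_all_zero = [0] * 100
# A model that catches one of the two frauds
y_partial = [0] * 98 + [1, 0]

print(accuracy_score(y_true, y_all_zero))  # 0.98 -- looks great, catches nothing
print(recall_score(y_true, y_all_zero))    # 0.0
print(recall_score(y_true, y_partial))     # 0.5 -- TP=1, FN=1
```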

#Recall = TP/(TP+FN)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.metrics import confusion_matrix,recall_score,classification_report 
def printing_Kfold_scores(x_train_data,y_train_data):
    fold = KFold(len(y_train_data),5,shuffle=False) 
    # Cross-validation should come first; here the training data is split into 5 folds
    # (this is the pre-0.18 KFold signature and fails on modern sklearn; see the
    # corrected version further down)

    # Different C parameters
    c_param_range = [0.01,0.1,1,10,100]
    # The regularization penalty, which should always be set when training a model.
    # A model must fit the training data but also generalize to test data, and the
    # smoother/more stable the better; overfitting is usually caused by overly large weights.
    # The penalty term punishes large theta values.
    # L2 regularization: loss + 1/2 * ||W||^2, with lambda * L2 setting the penalty strength.
    # L1 regularization: loss + |W|
    # Cross-validation decides which strength in [0.01,0.1,1,10,100] works best. Note that
    # sklearn's C is the *inverse* of lambda, so a smaller C means a stronger penalty.

    results_table = pd.DataFrame(index = range(len(c_param_range)), columns = ['C_parameter','Mean recall score'])
    results_table['C_parameter'] = c_param_range

    # the k-fold will give 2 lists: train_indices = indices[0], test_indices = indices[1]
    j = 0
    for c_param in c_param_range:  # loop over each C value to see which works best
        print('-------------------------------------------')
        print('C parameter: ', c_param)
        print('-------------------------------------------')
        print('')

        recall_accs = []
        for iteration, indices in enumerate(fold,start=1): # run the cross-validation loop

            # Call the logistic regression model with a certain C parameter
            lr = LogisticRegression(C = c_param, penalty = 'l1')  # instantiate the model first

            # Use the training data to fit the model. In this case, we use the portion of the fold to train the model
            # with indices[0]. We then predict on the portion assigned as the 'test cross validation' with indices[1]
            lr.fit(x_train_data.iloc[indices[0],:],y_train_data.iloc[indices[0],:].values.ravel()) # train the model

            # Predict values using the test indices in the training data
            y_pred_undersample = lr.predict(x_train_data.iloc[indices[1],:].values) # evaluate on the held-out fold

            # Calculate the recall score and append it to a list for recall scores representing the current c_parameter
            recall_acc = recall_score(y_train_data.iloc[indices[1],:].values,y_pred_undersample)
            recall_accs.append(recall_acc)
            print('Iteration ', iteration,': recall score = ', recall_acc)

        # The mean value of those recall scores is the metric we want to save and get hold of.
        results_table.loc[j,'Mean recall score'] = np.mean(recall_accs)
        j += 1
        print('')
        print('Mean recall score ', np.mean(recall_accs))
        print('')

    best_c = results_table.loc[results_table['Mean recall score'].astype('float64').idxmax()]['C_parameter']
    
    # Finally, we can check which C parameter is the best amongst the chosen.
    print('*********************************************************************************')
    print('Best model to choose from cross validation is with C parameter = ', best_c)
    print('*********************************************************************************')
    
    return best_c
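As a side note on the C parameter: in sklearn, C is the inverse of the regularization strength, so a smaller C penalizes the weights harder. A toy sketch on synthetic data (illustrative only; not part of the fraud pipeline, and the dataset here is made up with `make_classification`):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic binary classification problem with 20 features, 5 of them informative
X_demo, y_demo = make_classification(n_samples=500, n_features=20,
                                     n_informative=5, random_state=0)

# The l1 penalty requires the liblinear (or saga) solver in recent sklearn versions
strong = LogisticRegression(C=0.01, penalty='l1', solver='liblinear').fit(X_demo, y_demo)
weak   = LogisticRegression(C=100,  penalty='l1', solver='liblinear').fit(X_demo, y_demo)

# Stronger regularization (small C) shrinks the weights and, with l1, zeroes many out
print(np.abs(strong.coef_).sum(), np.abs(weak.coef_).sum())
print((strong.coef_ == 0).sum(), (weak.coef_ == 0).sum())
```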

Running this as-is raises an error; see https://blog.csdn.net/weixin_40283816/article/details/83242777 for the fix.

The corrected test program follows (still incomplete)...

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.metrics import confusion_matrix,recall_score,classification_report 


creditcard = 'C:/Users/Amber/Documents/唐宇迪-机器学习课程资料/机器学习算法配套案例实战/逻辑回归-信用卡欺诈检测/逻辑回归-信用卡欺诈检测/creditcard.csv'
data = pd.read_csv(creditcard)
# print(data.head())

# count_classes = data['Class'].value_counts(sort = True).sort_index()
# value_counts() tells us how many samples have Class == 0 and how many have Class == 1
# In the Class column, 0 means normal and 1 means fraud
# count_classes.plot(kind = 'bar')  # kind='bar' draws a bar chart
# plt.title("Fraud class histogram")
# plt.xlabel("Class")
# plt.ylabel("Frequency")

# Use sklearn's StandardScaler to standardize the Amount column (zero mean, unit variance)
data['normAmount'] = StandardScaler().fit_transform(data['Amount'].values.reshape(-1, 1))
data = data.drop(['Time','Amount'],axis=1)
#print(data.head())

X = data.loc[:, data.columns != 'Class']   # all rows, every column except the label Class
y = data.loc[:, data.columns == 'Class']   # all rows, only the Class label column

# Number of data points in the minority class
number_records_fraud = len(data[data.Class == 1])   # number of samples with Class == 1
fraud_indices = np.array(data[data.Class == 1].index) # row indices of the Class == 1 samples

# Picking the indices of the normal classes
normal_indices = data[data.Class == 0].index   # row indices of the Class == 0 samples

# Out of the indices we picked, randomly select "x" number (number_records_fraud)
random_normal_indices = np.random.choice(normal_indices, number_records_fraud, replace = False) # draw number_records_fraud samples from normal_indices without replacement
random_normal_indices = np.array(random_normal_indices) # collect the chosen indices into an array

# Appending the 2 indices
under_sample_indices = np.concatenate([fraud_indices,random_normal_indices])
# merge the selected indices

# Under sample dataset
under_sample_data = data.iloc[under_sample_indices,:]

X_undersample = under_sample_data.loc[:, under_sample_data.columns != 'Class']  # features
y_undersample = under_sample_data.loc[:, under_sample_data.columns == 'Class']  # label

# Showing ratio
# print("Percentage of normal transactions: ", len(under_sample_data[under_sample_data.Class == 0])/len(under_sample_data))
# print("Percentage of fraud transactions: ", len(under_sample_data[under_sample_data.Class == 1])/len(under_sample_data))
# print("Total number of transactions in resampled data: ", len(under_sample_data))

# Whole dataset
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 0.3, random_state = 0)

# print("Number transactions train dataset: ", len(X_train))
# print("Number transactions test dataset: ", len(X_test))
# print("Total number of transactions: ", len(X_train)+len(X_test))

# Undersampled dataset
X_train_undersample, X_test_undersample, y_train_undersample, y_test_undersample = train_test_split(X_undersample
                                                                                                   ,y_undersample
                                                                                                   ,test_size = 0.3
                                                                                                   ,random_state = 0)
# print("")
# print("Number transactions train dataset: ", len(X_train_undersample))
# print("Number transactions test dataset: ", len(X_test_undersample))
# print("Total number of transactions: ", len(X_train_undersample)+len(X_test_undersample))

def printing_Kfold_scores(x_train_data,y_train_data):
    fold = KFold(5,shuffle=False) 
    
    # Different C parameters
    c_param_range = [0.01,0.1,1,10,100]

    results_table = pd.DataFrame(index = range(len(c_param_range)), columns = ['C_parameter','Mean recall score'])
    results_table['C_parameter'] = c_param_range

    # the k-fold will give 2 lists: train_indices = indices[0], test_indices = indices[1]
    j = 0
    for c_param in c_param_range:
        print('-------------------------------------------')
        print('C parameter: ', c_param)
        print('-------------------------------------------')
        print('')

        recall_accs = []
        #for iteration, indices in enumerate(fold,start=1):   # old KFold API
        for iteration, indices in enumerate(fold.split(x_train_data)):
            # note: split the x_train_data argument, not a global variable,
            # so the function also works when called with other training sets

            # Call the logistic regression model with a certain C parameter
            lr = LogisticRegression(C = c_param, penalty = 'l1', solver = 'liblinear')
            # the l1 penalty requires the liblinear (or saga) solver in sklearn >= 0.22

            # Use the training data to fit the model. In this case, we use the portion of the fold to train the model
            # with indices[0]. We then predict on the portion assigned as the 'test cross validation' with indices[1]
            lr.fit(x_train_data.iloc[indices[0],:],y_train_data.iloc[indices[0],:].values.ravel())

            # Predict values using the test indices in the training data
            y_pred_undersample = lr.predict(x_train_data.iloc[indices[1],:].values)

            # Calculate the recall score and append it to a list for recall scores representing the current c_parameter
            recall_acc = recall_score(y_train_data.iloc[indices[1],:].values,y_pred_undersample)
            recall_accs.append(recall_acc)
            print('Iteration ', iteration,': recall score = ', recall_acc)

        # The mean value of those recall scores is the metric we want to save and get hold of.
        results_table.loc[j,'Mean recall score'] = np.mean(recall_accs)
        j += 1
        print('')
        print('Mean recall score ', np.mean(recall_accs))
        print('')

    best_c = results_table.loc[results_table['Mean recall score'].astype('float64').idxmax()]['C_parameter']
    
    # Finally, we can check which C parameter is the best amongst the chosen.
    print('*********************************************************************************')
    print('Best model to choose from cross validation is with C parameter = ', best_c)
    print('*********************************************************************************')
    
    return best_c

best_c = printing_Kfold_scores(X_train_undersample,y_train_undersample)
-------------------------------------------
C parameter:  0.01
-------------------------------------------

Iteration  0 : recall score =  0.9315068493150684
Iteration  1 : recall score =  0.9178082191780822
Iteration  2 : recall score =  1.0
Iteration  3 : recall score =  0.972972972972973
Iteration  4 : recall score =  0.9696969696969697

Mean recall score  0.9583970022326186

-------------------------------------------
C parameter:  0.1
-------------------------------------------

Iteration  0 : recall score =  0.8493150684931506
Iteration  1 : recall score =  0.863013698630137
Iteration  2 : recall score =  0.9322033898305084
Iteration  3 : recall score =  0.9459459459459459
Iteration  4 : recall score =  0.9090909090909091

Mean recall score  0.8999138023981302

-------------------------------------------
C parameter:  1
-------------------------------------------

Iteration  0 : recall score =  0.863013698630137
Iteration  1 : recall score =  0.9041095890410958
Iteration  2 : recall score =  0.9830508474576272
Iteration  3 : recall score =  0.9459459459459459
Iteration  4 : recall score =  0.9090909090909091

Mean recall score  0.921042198033143

-------------------------------------------
C parameter:  10
-------------------------------------------

Iteration  0 : recall score =  0.863013698630137
Iteration  1 : recall score =  0.9041095890410958
Iteration  2 : recall score =  0.9830508474576272
Iteration  3 : recall score =  0.9324324324324325
Iteration  4 : recall score =  0.9242424242424242

Mean recall score  0.9213697983607434

-------------------------------------------
C parameter:  100
-------------------------------------------

Iteration  0 : recall score =  0.863013698630137
Iteration  1 : recall score =  0.9041095890410958
Iteration  2 : recall score =  0.9830508474576272
Iteration  3 : recall score =  0.9459459459459459
Iteration  4 : recall score =  0.9242424242424242

Mean recall score  0.924072501063446


def plot_confusion_matrix(cm, classes,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    """
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=0)
    plt.yticks(tick_marks, classes)

    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j],
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
import itertools

lr = LogisticRegression(C = best_c, penalty = 'l1', solver = 'liblinear')
lr.fit(X_train_undersample,y_train_undersample.values.ravel())
y_pred_undersample = lr.predict(X_test_undersample.values)

# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test_undersample,y_pred_undersample)
np.set_printoptions(precision=2)

print("Recall metric in the testing dataset: ", cnf_matrix[1,1]/(cnf_matrix[1,0]+cnf_matrix[1,1]))

# Plot non-normalized confusion matrix
class_names = [0,1]
plt.figure()
plot_confusion_matrix(cnf_matrix
                      , classes=class_names
                      , title='Confusion matrix')
plt.show()
Recall metric in the testing dataset:  0.9319727891156463

The confusion matrix is a grid: the x-axis holds the predicted label (0/1), the y-axis the true label (0/1).

Accuracy = (129 + 137)/(129 + 137 + 10 + 20)
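Using the counts read off the matrix above (TN=129, FP=20, FN=10, TP=137 in sklearn's row=true/column=predicted layout), the usual metrics work out as:

```python
import numpy as np

# Counts from the confusion matrix above (sklearn layout: rows = true, cols = predicted)
cm = np.array([[129, 20],    # true 0: TN, FP
               [10, 137]])   # true 1: FN, TP
tn, fp, fn, tp = cm.ravel()

accuracy  = (tp + tn) / cm.sum()   # (129 + 137) / 296
recall    = tp / (tp + fn)         # 137 / 147
precision = tp / (tp + fp)         # 137 / 157

print(round(accuracy, 4), round(recall, 4), round(precision, 4))
```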


lr = LogisticRegression(C = best_c, penalty = 'l1', solver = 'liblinear')
lr.fit(X_train_undersample,y_train_undersample.values.ravel())
y_pred = lr.predict(X_test.values)

# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test,y_pred)
np.set_printoptions(precision=2)

print("Recall metric in the testing dataset: ", cnf_matrix[1,1]/(cnf_matrix[1,0]+cnf_matrix[1,1]))

# Plot non-normalized confusion matrix
class_names = [0,1]
plt.figure()
plot_confusion_matrix(cnf_matrix
                      , classes=class_names
                      , title='Confusion matrix')
plt.show()
Recall metric in the testing dataset:  0.918367346939

In the figure above there are 8581 false positives, which is a lot. They do not affect recall, but they lower precision and blow up the manual review workload.

The goal is to find the 135 correctly identified frauds.

If instead we do nothing to the samples and train directly on the raw data, recall comes out noticeably lower than with undersampling:

best_c = printing_Kfold_scores(X_train,y_train)
-------------------------------------------
C parameter:  0.01
-------------------------------------------

Iteration  1 : recall score =  0.492537313433
Iteration  2 : recall score =  0.602739726027
Iteration  3 : recall score =  0.683333333333
Iteration  4 : recall score =  0.569230769231
Iteration  5 : recall score =  0.45

Mean recall score  0.559568228405

-------------------------------------------
C parameter:  0.1
-------------------------------------------

Iteration  1 : recall score =  0.567164179104
Iteration  2 : recall score =  0.616438356164
Iteration  3 : recall score =  0.683333333333
Iteration  4 : recall score =  0.584615384615
Iteration  5 : recall score =  0.525

Mean recall score  0.595310250644

-------------------------------------------
C parameter:  1
-------------------------------------------

Iteration  1 : recall score =  0.55223880597
Iteration  2 : recall score =  0.616438356164
Iteration  3 : recall score =  0.716666666667
Iteration  4 : recall score =  0.615384615385
Iteration  5 : recall score =  0.5625

Mean recall score  0.612645688837

-------------------------------------------
C parameter:  10
-------------------------------------------

Iteration  1 : recall score =  0.55223880597
Iteration  2 : recall score =  0.616438356164
Iteration  3 : recall score =  0.733333333333
Iteration  4 : recall score =  0.615384615385
Iteration  5 : recall score =  0.575

Mean recall score  0.61847902217

-------------------------------------------
C parameter:  100
-------------------------------------------

Iteration  1 : recall score =  0.55223880597
Iteration  2 : recall score =  0.616438356164
Iteration  3 : recall score =  0.733333333333
Iteration  4 : recall score =  0.615384615385
Iteration  5 : recall score =  0.575

Mean recall score  0.61847902217

*********************************************************************************
Best model to choose from cross validation is with C parameter =  10.0
*********************************************************************************
lr = LogisticRegression(C = best_c, penalty = 'l1', solver = 'liblinear')
lr.fit(X_train,y_train.values.ravel())
y_pred_undersample = lr.predict(X_test.values)

# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test,y_pred_undersample)
np.set_printoptions(precision=2)

print("Recall metric in the testing dataset: ", cnf_matrix[1,1]/(cnf_matrix[1,0]+cnf_matrix[1,1]))

# Plot non-normalized confusion matrix
class_names = [0,1]
plt.figure()
plot_confusion_matrix(cnf_matrix
                      , classes=class_names
                      , title='Confusion matrix')
plt.show()


Recall metric in the testing dataset:  0.619047619048

The effect of different decision thresholds

Logistic regression outputs a sigmoid probability. What if a sample is declared positive only when that probability exceeds, say, 0.9 instead of the default 0.5? The bar for a positive call becomes much higher.

Lowering the threshold below 0.1 means "better to flag ten thousand innocents than let a single fraud slip through."

lr = LogisticRegression(C = 0.01, penalty = 'l1', solver = 'liblinear')
lr.fit(X_train_undersample,y_train_undersample.values.ravel())
y_pred_undersample_proba = lr.predict_proba(X_test_undersample.values) # previously we predicted class labels; now we predict probabilities

thresholds = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]

plt.figure(figsize=(10,10))

j = 1
for i in thresholds:
    y_test_predictions_high_recall = y_pred_undersample_proba[:,1] > i
    
    plt.subplot(3,3,j)
    j += 1
    
    # Compute confusion matrix
    cnf_matrix = confusion_matrix(y_test_undersample,y_test_predictions_high_recall)
    np.set_printoptions(precision=2)

    print("Recall metric in the testing dataset: ", cnf_matrix[1,1]/(cnf_matrix[1,0]+cnf_matrix[1,1]))

    # Plot non-normalized confusion matrix
    class_names = [0,1]
    plot_confusion_matrix(cnf_matrix
                          , classes=class_names
                          , title='Threshold >= %s'%i) 
Recall metric in the testing dataset:  1.0
Recall metric in the testing dataset:  1.0
Recall metric in the testing dataset:  1.0
Recall metric in the testing dataset:  0.986394557823
Recall metric in the testing dataset:  0.931972789116
Recall metric in the testing dataset:  0.884353741497
Recall metric in the testing dataset:  0.836734693878
Recall metric in the testing dataset:  0.748299319728
Recall metric in the testing dataset:  0.571428571429

With a very small threshold (flag everything remotely suspicious), recall is high.

As the threshold rises, the rule relaxes and recall drops.

Threshold = 0.1: precision is very low; essentially every sample is predicted as fraud.

Threshold = 0.2: precision is still low.

Threshold = 0.4: far fewer false positives, but some frauds start to slip through.

Threshold = 0.6: precision is higher still.

Pick the threshold by weighing recall, precision, and the other metrics against the actual engineering requirements.

Oversampling

Generating new data uses the SMOTE algorithm.

For each minority-class sample x, compute the Euclidean distance to every other minority sample and sort them. For a 5x oversampling rate, take the 5 nearest neighbors; for each neighbor x~, generate a synthetic sample Xnew = x + rand(0,1) * (x~ - x), where rand(0,1) is drawn at random.

The first neighbor gives distance d1, the second d2, and so on; each neighbor yields one new synthetic sample.
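A minimal numeric sketch of that interpolation step (made-up 2-D points; this omits the nearest-neighbor search that full SMOTE performs):

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.array([1.0, 2.0])          # a minority-class sample
neighbor = np.array([3.0, 4.0])   # one of its nearest minority-class neighbors

# SMOTE places the synthetic point on the segment between x and its neighbor:
# x_new = x + rand(0,1) * (neighbor - x)
gap = rng.random()
x_new = x + gap * (neighbor - x)

# The synthetic sample lies between the two originals, coordinate-wise
print(x_new)
```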

import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
credit_cards=pd.read_csv('creditcard.csv')

columns=credit_cards.columns
# The labels are in the last column ('Class'). Simply remove it to obtain features columns
features_columns=columns.delete(len(columns)-1)

features=credit_cards[features_columns]
labels=credit_cards['Class']
features_train, features_test, labels_train, labels_test = train_test_split(features, 
                                                                            labels, 
                                                                            test_size=0.2, 
                                                                            random_state=0)
oversampler=SMOTE(random_state=0)
os_features,os_labels=oversampler.fit_resample(features_train,labels_train)
# fit_sample was renamed fit_resample in newer imblearn versions
# generate synthetic samples from the training set only; the test set is left untouched
len(os_labels[os_labels==1])
227454
os_features = pd.DataFrame(os_features)
os_labels = pd.DataFrame(os_labels)
best_c = printing_Kfold_scores(os_features,os_labels)
-------------------------------------------
C parameter:  0.01
-------------------------------------------

Iteration  1 : recall score =  0.890322580645
Iteration  2 : recall score =  0.894736842105
Iteration  3 : recall score =  0.968861347792
Iteration  4 : recall score =  0.957595541926
Iteration  5 : recall score =  0.958430881173

Mean recall score  0.933989438728

-------------------------------------------
C parameter:  0.1
-------------------------------------------

Iteration  1 : recall score =  0.890322580645
Iteration  2 : recall score =  0.894736842105
Iteration  3 : recall score =  0.970410534469
Iteration  4 : recall score =  0.959980655302
Iteration  5 : recall score =  0.960178498807

Mean recall score  0.935125822266

-------------------------------------------
C parameter:  1
-------------------------------------------

Iteration  1 : recall score =  0.890322580645
Iteration  2 : recall score =  0.894736842105
Iteration  3 : recall score =  0.970454796946
Iteration  4 : recall score =  0.96014552489
Iteration  5 : recall score =  0.960596168431

Mean recall score  0.935251182603

-------------------------------------------
C parameter:  10
-------------------------------------------

Iteration  1 : recall score =  0.890322580645
Iteration  2 : recall score =  0.894736842105
Iteration  3 : recall score =  0.97065397809
Iteration  4 : recall score =  0.960343368396
Iteration  5 : recall score =  0.960530220596

Mean recall score  0.935317397966

-------------------------------------------
C parameter:  100
-------------------------------------------

Iteration  1 : recall score =  0.890322580645
Iteration  2 : recall score =  0.894736842105
Iteration  3 : recall score =  0.970543321899
Iteration  4 : recall score =  0.960211472725
Iteration  5 : recall score =  0.960903924995

Mean recall score  0.935343628474

*********************************************************************************
Best model to choose from cross validation is with C parameter =  100.0
*********************************************************************************
lr = LogisticRegression(C = best_c, penalty = 'l1', solver = 'liblinear')
lr.fit(os_features,os_labels.values.ravel())
y_pred = lr.predict(features_test.values)

# Compute confusion matrix
cnf_matrix = confusion_matrix(labels_test,y_pred)
np.set_printoptions(precision=2)

print("Recall metric in the testing dataset: ", cnf_matrix[1,1]/(cnf_matrix[1,0]+cnf_matrix[1,1]))

# Plot non-normalized confusion matrix
class_names = [0,1]
plt.figure()
plot_confusion_matrix(cnf_matrix
                      , classes=class_names
                      , title='Confusion matrix')
plt.show()
Recall metric in the testing dataset:  0.90099009901

With oversampling, the number of false positives drops, i.e. the model's precision improves.

If computing power permits, train on as much data as possible; more data generally yields a better model.

Summary:

1 - On receiving the data, first look at the overall class distribution. If it is heavily imbalanced, apply undersampling or oversampling.

2 - Check whether the data needs preprocessing; here the features were already extracted and can be used directly.

3 - Standardize the data first so all features live on a comparable scale.

4 - Different parameters change the results dramatically; find good ones via cross-validation.

5 - Use the confusion matrix as the basis for model evaluation.

6 - Compare predicted probabilities against a threshold and pick the threshold that fits the task.

