Resource details
linear_regression.zip
Size: 5.31KB
Price: 14 points
Downloads: 0
Rating: 5.0
Uploader: shjinkiller
Updated: 2024-07-26

A PyTorch-based linear regression model, in Python.

Resource file list (approximate)

File name                                                          Size
linear_regression/.idea/                                           -
linear_regression/.idea/.gitignore                                 50B
linear_regression/.idea/.name                                      20B
linear_regression/.idea/inspectionProfiles/                        -
linear_regression/.idea/inspectionProfiles/profiles_settings.xml   174B
linear_regression/.idea/linear_regression.iml                      322B
linear_regression/.idea/misc.xml                                   281B
linear_regression/.idea/modules.xml                                293B
linear_regression/.idea/workspace.xml                              2.22KB
linear_regression/example.csv                                      2.93KB
linear_regression/linear_regression.py                             2.49KB

Resource description

Training and testing of a PyTorch-based linear regression model, including reading sample data from a CSV file and building the dataset, printing the learned feature coefficients after training (using 3 features as an example), plotting the loss curve over the training iterations, and visualizing predicted vs. actual values on the test set.
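The listing does not show the contents of example.csv; judging from the script below, it expects a CSV whose last column is the target and whose remaining columns are the features. A minimal sketch that generates a compatible file — the column names, true coefficients, and noise level here are made up for illustration:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_samples = 100

# 3 feature columns, as in the resource description
X = rng.normal(size=(n_samples, 3))

# Hypothetical ground-truth weights and bias (not from the resource)
true_w = np.array([2.0, -3.4, 1.7])
true_b = 4.2
y = X @ true_w + true_b + rng.normal(scale=0.01, size=n_samples)

# Last column is the dependent variable, matching the script's assumption
df = pd.DataFrame(X, columns=["x1", "x2", "x3"])
df["y"] = y
df.to_csv("example.csv", index=False)
```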
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Read the CSV file containing the sample data
df = pd.read_csv('example.csv')

# Shuffle the dataset
new_index = np.random.permutation(df.index)
df = df.reindex(new_index).reset_index(drop=True)

# Fraction of the data used for training
train_dataset_rate = 0.8
num_train = int(len(df) * train_dataset_rate)

# The last column of the dataset is the dependent variable
X_train = df.iloc[:num_train, :-1]   # training features
y_train = df.iloc[:num_train, -1]    # training targets
X_test = df.iloc[num_train:, :-1]    # test features
y_test = df.iloc[num_train:, -1]     # test targets

X_train_tensor = torch.tensor(X_train.values, dtype=torch.float32)
y_train_tensor = torch.tensor(y_train.values, dtype=torch.float32).reshape((-1, 1))
X_test_tensor = torch.tensor(X_test.values, dtype=torch.float32)
y_test_tensor = torch.tensor(y_test.values, dtype=torch.float32).reshape((-1, 1))

# Build the training set with TensorDataset and DataLoader
dataset_train = TensorDataset(X_train_tensor, y_train_tensor)
loader_train = DataLoader(dataset_train, batch_size=4, shuffle=True)

# Number of training epochs
epochs = 200

# Linear regression network: a single fully connected layer
net = nn.Sequential(nn.Linear(X_train_tensor.shape[1], 1))

# Loss function
loss = nn.MSELoss()

# Optimizer
optimizer = torch.optim.SGD(net.parameters(), lr=0.3)

# List to record the loss after each epoch
loss_list = []

# Training loop
for epoch in range(epochs):
    for X, y in loader_train:
        optimizer.zero_grad()
        l = loss(net(X), y)
        l.backward()
        optimizer.step()
    epoch_loss = loss(net(X_train_tensor), y_train_tensor).item()
    loss_list.append(epoch_loss)
    print(f'Epoch {epoch + 1}: loss = {epoch_loss}')

print("Training finished")
print("Coefficient of each feature:")
print(net[0].weight.data[0].numpy())
print("Bias b:")
print(net[0].bias.data[0].numpy())

# Loss curve
plt.plot(loss_list)
plt.title('Loss Over Epochs')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.grid(True)
plt.show()

# Predictions on the test set, compared with the actual values
y_pred = net(X_test_tensor)
x = np.arange(len(y_pred))
plt.plot(x, y_test_tensor.numpy(), label='Actual Values')
plt.scatter(x, y_pred.data.numpy(), c='red', label='Predicted Values')
plt.legend()
plt.title('Comparison of Actual and Predicted Values')
plt.xlabel('Index')
plt.ylabel('Value')
plt.grid(True)
plt.show()
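Since a single-layer linear model has a closed-form least-squares solution, the SGD result can be sanity-checked against it. A self-contained sketch on tiny synthetic data (the data, learning rate, and step count here are illustrative, not taken from the resource):

```python
import numpy as np
import torch
from torch import nn

torch.manual_seed(0)

# Noiseless synthetic data: y = X @ w_true + 3.0
X = torch.randn(200, 3)
w_true = torch.tensor([1.0, -2.0, 0.5])
y = (X @ w_true + 3.0).reshape(-1, 1)

# Same model shape as in the script: one Linear layer
net = nn.Sequential(nn.Linear(3, 1))
loss_fn = nn.MSELoss()
opt = torch.optim.SGD(net.parameters(), lr=0.1)

# Full-batch gradient descent
for _ in range(500):
    opt.zero_grad()
    loss_fn(net(X), y).backward()
    opt.step()

# Closed-form solution of [X, 1] @ beta = y for comparison
Xa = np.hstack([X.numpy(), np.ones((200, 1))])
beta, *_ = np.linalg.lstsq(Xa, y.numpy(), rcond=None)
# net[0].weight should now be close to beta[:3], net[0].bias close to beta[3]
```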
