
Reading "Dive into Deep Learning (PyTorch Edition)" Together - RNN-sequence-model

Posted 2024-11-2 12:13 | Category: Deep Learning

Generate sin(0.01·t) over T = 1000 time steps and add Gaussian noise with standard deviation 0.2:

import torch
from torch import nn
import matplotlib.pyplot as plt

T = 1000  # number of time steps
time = torch.arange(1, T + 1, dtype=torch.float32)
x = torch.sin(0.01 * time) + torch.normal(0, 0.2, (T,))  # sine plus Gaussian noise (std 0.2)
plt.plot(time.tolist(), x.tolist())
plt.show()

[Figure: the noisy sine sequence x plotted over the 1000 time steps]

After training, perform one-step-ahead prediction: each value is predicted from the tau = 4 true observations that precede it. The code below builds the lagged features, trains a small MLP, and then plots the predictions against the original sequence.
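In other words, following the book's tau-step Markov assumption, the network below approximates

    x_t ≈ f(x_{t-τ}, ..., x_{t-1}),  with τ = 4,

so every training sample pairs four consecutive observations with the value that immediately follows them.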

import torch
from torch import nn
from torch.utils import data
import matplotlib.pyplot as plt

def load_array(data_arrays, batch_size, is_train=True):
    dataset = data.TensorDataset(*data_arrays)
    return data.DataLoader(dataset, batch_size, shuffle=is_train, num_workers=6)

T = 1000
time = torch.arange(1, T + 1, dtype=torch.float32)
x = torch.sin(0.01 * time) + torch.normal(0, 0.2, (T,))
# plt.plot(time.tolist(), x.tolist())
# plt.show()

tau = 4
# `features` has shape (T - tau, tau) = (996, 4)
features = torch.zeros((T - tau, tau))
for i in range(tau):
    # column i holds x[i : T - tau + i]; the window slides one step per column
    features[:, i] = x[i: T - tau + i]
    # print(features[:, i])
    # print(features[:, i].shape)
labels = x[tau:].reshape((-1, 1))
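
# Sanity check (added here, not in the original post): with tau = 4, row 0 of
# `features` is [x[0], x[1], x[2], x[3]] and its label is x[4] -- every sample
# predicts one value from the four observations that precede it.
assert torch.allclose(features[0], x[0:4])
assert torch.allclose(labels[0], x[4:5])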

batch_size, n_train = 16, 600
train_iter = load_array((features[:n_train], labels[:n_train]),
                            batch_size, is_train=True)

def init_weights(m):
    if type(m) == nn.Linear:
        nn.init.xavier_uniform_(m.weight)

def get_net():
    net = nn.Sequential(nn.Linear(4, 10),
                        nn.ReLU(),
                        nn.Linear(10, 1))
    net.apply(init_weights)
    return net

class Accumulator:
    def __init__(self, n) -> None:
        self.data = [0.0]*n
    
    def add(self, *args):
        # args is a tuple of values to be accumulated element-wise
        self.data = [a + float(b) for a, b in zip(self.data, args)]

    def reset(self):
        self.data = [0.0] * len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

def evaluate_loss(net, data_iter, loss):
    metric = Accumulator(2)
    for X, y in data_iter:
        out = net(X)
        y = y.reshape(out.shape)
        l = loss(out, y)
        metric.add(l.sum(), l.numel())
    return metric[0] / metric[1]

loss = nn.MSELoss(reduction='none')

def train(net, train_iter, loss, epochs, lr):
    trainer = torch.optim.Adam(net.parameters(), lr)
    for epoch in range(epochs):
        for X, y in train_iter:
            trainer.zero_grad()
            l = loss(net(X), y)
            l.sum().backward()
            trainer.step()
        print(f'epoch {epoch + 1}, '
              f'loss: {evaluate_loss(net, train_iter, loss):f}')

net = get_net()
train(net, train_iter, loss, 5, 0.01)

# one-step-ahead prediction over the whole series; no gradients are needed at inference time
with torch.no_grad():
    onestep_preds = net(features)
plt.plot(time.tolist(), x.tolist())
plt.plot(time[tau:].tolist(), onestep_preds.tolist())
plt.show()

[Figure: one-step predictions plotted on top of the original sequence]
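
The post stops at the plot above. As a minimal extra check (an addition, reusing the load_array and evaluate_loss helpers already defined), the one-step error can also be measured on the samples after n_train, which were never seen during training:

# Added sketch, not in the original post: one-step loss on the held-out samples.
test_iter = load_array((features[n_train:], labels[n_train:]),
                       batch_size, is_train=False)
print(f'held-out loss: {evaluate_loss(net, test_iter, loss):f}')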
