Using the squared norm as a hard constraint

Control model capacity by restricting the range of values the parameters are allowed to take:

$$\min \ell(\mathbf{w}, b) \quad \text{subject to} \quad \|\mathbf{w}\|^{2} \leq \theta$$

The bias $b$ is usually left unconstrained (constraining it makes little difference).

A smaller $\theta$ means stronger regularization.
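The hard-constraint form is rarely optimized directly, but it can be pictured as projected gradient descent: take an ordinary gradient step, then project $\mathbf{w}$ back onto the ball $\|\mathbf{w}\|^{2} \leq \theta$. A minimal sketch under that assumption (the `project` helper and the quadratic stand-in loss are illustrative, not part of the lecture code):

```python
import torch

def project(w, theta):
    """Scale w back onto the ball ||w||^2 <= theta if it lies outside."""
    norm_sq = torch.sum(w ** 2)
    return w * torch.sqrt(theta / norm_sq) if norm_sq > theta else w

theta = torch.tensor(1.0)
w = torch.randn(5, requires_grad=True)
loss = ((w - 1.0) ** 2).sum()              # stand-in for l(w, b)
loss.backward()
with torch.no_grad():
    w = project(w - 0.1 * w.grad, theta)   # gradient step, then projection
print(w.norm() ** 2 <= theta + 1e-6)       # constraint holds after the step
```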

Using the squared norm as a soft constraint

For every $\theta$ one can find a $\lambda$ such that the objective is equivalent to:

$$\min \ell(\mathbf{w}, b)+\frac{\lambda}{2}\|\mathbf{w}\|^{2}$$

This can be shown with Lagrange multipliers (sketched below).
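A sketch of the argument, not a full proof: form the Lagrangian of the constrained problem,

$$L(\mathbf{w}, b, \lambda)=\ell(\mathbf{w}, b)+\frac{\lambda}{2}\left(\|\mathbf{w}\|^{2}-\theta\right), \quad \lambda \geq 0$$

Since $\frac{\lambda}{2}\theta$ does not depend on $\mathbf{w}$, minimizing $L$ over $(\mathbf{w}, b)$ at the optimal multiplier $\lambda$ is the same as minimizing the penalized objective above.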

The hyperparameter $\lambda$ controls how important the regularization term is: $\lambda=0$ means no regularization, and as $\lambda \to \infty$, $\mathbf{w}^{*} \to 0$.

Parameter update rule

Compute the gradient:

$$\frac{\partial}{\partial \mathbf{w}}\left(\ell(\mathbf{w}, b)+\frac{\lambda}{2}\|\mathbf{w}\|^{2}\right)=\frac{\partial \ell(\mathbf{w}, b)}{\partial \mathbf{w}}+\lambda \mathbf{w}$$
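A quick numerical check of this identity with autograd; the squared-error expression below is only a stand-in for $\ell(\mathbf{w}, b)$:

```python
import torch

lambd = 0.5
w = torch.randn(4, requires_grad=True)
x, y = torch.randn(4), torch.tensor(1.0)

penalized = (w @ x - y) ** 2 / 2 + lambd / 2 * (w ** 2).sum()
penalized.backward()

grad_loss = (w.detach() @ x - y) * x          # gradient of the plain loss
print(torch.allclose(w.grad, grad_loss + lambd * w.detach()))  # True
```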

Update the parameters at time step $t$:

$$\mathbf{w}_{t+1}=\mathbf{w}_{t}-\eta \frac{\partial}{\partial \mathbf{w}_{t}}\left(\ell(\mathbf{w}_{t}, b_{t})+\frac{\lambda}{2}\|\mathbf{w}_{t}\|^{2}\right)=(1-\eta \lambda) \mathbf{w}_{t}-\eta \frac{\partial \ell\left(\mathbf{w}_{t}, b_{t}\right)}{\partial \mathbf{w}_{t}}$$

Usually $\eta \lambda<1$, so each update first shrinks the current weights by the factor $(1-\eta\lambda)$; this is why the method is called weight decay. In practice, values of $\lambda$ such as $10^{-2}$, $10^{-3}$, $10^{-4}$ are worth trying.
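The name can be checked directly: one SGD step on the penalized loss is the same as first shrinking the weights by $(1-\eta\lambda)$ and then taking a plain gradient step. A small sketch, again with an illustrative stand-in loss:

```python
import torch

eta, lambd = 0.1, 0.01
w0 = torch.randn(4)
x, y = torch.randn(4), torch.tensor(1.0)

def grad_loss(w):
    """Gradient of the stand-in loss (w @ x - y)^2 / 2 with respect to w."""
    return (w @ x - y) * x

# Step on the penalized objective l(w) + lambd / 2 * ||w||^2 ...
w_penalty = w0 - eta * (grad_loss(w0) + lambd * w0)
# ... equals decaying the weights first, then a plain gradient step.
w_decay = (1 - eta * lambd) * w0 - eta * grad_loss(w0)
print(torch.allclose(w_penalty, w_decay))  # True
```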

Summary

  • Weight decay uses an L2 regularization term to keep the model parameters from becoming too large, thereby controlling model complexity
  • The weight of the regularization term is a hyperparameter that controls model complexity

Code implementation

%matplotlib inline
import torch
from torch import nn
from d2l import torch as d2l

Generate some data:

$$y=0.05+\sum_{i=1}^{d} 0.01 x_{i}+\epsilon \text{ where } \epsilon \sim \mathcal{N}\left(0, 0.01^{2}\right)$$

# A deliberately small training set with many inputs, so overfitting is easy to provoke
n_train, n_test, num_inputs, batch_size = 20, 100, 200, 5
true_w, true_b = torch.ones((num_inputs, 1)) * 0.01, 0.05
train_data = d2l.synthetic_data(true_w, true_b, n_train)
train_iter = d2l.load_array(train_data, batch_size)
test_data = d2l.synthetic_data(true_w, true_b, n_test)
test_iter = d2l.load_array(test_data, batch_size, is_train=False)

Initialize the model parameters

def init_params():
    w = torch.normal(0, 1, size=(num_inputs, 1), requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    return [w, b]

Define the $L_2$ norm penalty

def l2_penalty(w):
    # Divide by 2 so that the gradient of the penalty is simply lambd * w
    return torch.sum(w.pow(2)) / 2

Define the training loop

def train(lambd):
    w, b = init_params()
    net, loss = lambda X: d2l.linreg(X, w, b), d2l.squared_loss
    num_epochs, lr = 100, 0.003
    animator = d2l.Animator(xlabel='epochs', ylabel='loss', yscale='log',
                            xlim=[5, num_epochs], legend=['train', 'test'])
    for epoch in range(num_epochs):
        for X, y in train_iter:
            # Add the L2 penalty term directly to the loss
            l = loss(net(X), y) + lambd * l2_penalty(w)
            l.sum().backward()
            d2l.sgd([w, b], lr, batch_size)
        if (epoch + 1) % 5 == 0:
            animator.add(epoch + 1, (d2l.evaluate_loss(net, train_iter, loss),
                                     d2l.evaluate_loss(net, test_iter, loss)))
    print('L2 norm of w:', torch.norm(w).item())

Training without regularization

train(lambd=0)

L2 norm of w: 12.717231750488281

(figure: training and test loss vs. epochs)

Using weight decay

train(lambd=3)

L2 norm of w: 0.3712291121482849

(figure: training and test loss vs. epochs)

Concise implementation

def train_concise(wd):
    net = nn.Sequential(nn.Linear(num_inputs, 1))
    for param in net.parameters():
        param.data.normal_()
    loss = nn.MSELoss()
    num_epochs, lr = 100, 0.003
    # Apply weight decay only to the weight, not to the bias
    trainer = torch.optim.SGD([{
        "params": net[0].weight,
        'weight_decay': wd}, {
        "params": net[0].bias}], lr=lr)
    animator = d2l.Animator(xlabel='epochs', ylabel='loss', yscale='log',
                            xlim=[5, num_epochs], legend=['train', 'test'])
    for epoch in range(num_epochs):
        for X, y in train_iter:
            trainer.zero_grad()
            l = loss(net(X), y)
            l.backward()
            trainer.step()
        if (epoch + 1) % 5 == 0:
            animator.add(epoch + 1, (d2l.evaluate_loss(net, train_iter, loss),
                                     d2l.evaluate_loss(net, test_iter, loss)))
    print('L2 norm of w:', net[0].weight.norm().item())

train_concise(0)

L2 norm of w: 13.439167976379395

(figure: training and test loss vs. epochs)

train_concise(3)

L2 norm of w: 0.4080515205860138

(figure: training and test loss vs. epochs)
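For reference, the `weight_decay` argument of `torch.optim.SGD` adds $\lambda\mathbf{w}$ to the gradient before the update, which matches the update rule derived earlier. A small sketch of that equivalence with illustrative tensors:

```python
import torch

wd, lr = 3.0, 0.003
w = torch.randn(5, 1, requires_grad=True)
x, y = torch.randn(5), torch.tensor([1.0])

# One step with the optimizer's built-in weight decay
opt = torch.optim.SGD([w], lr=lr, weight_decay=wd)
loss = ((x @ w - y) ** 2).sum()
loss.backward()
w_before, grad = w.detach().clone(), w.grad.detach().clone()
opt.step()

# The same step by hand: g <- g + wd * w, then w <- w - lr * g
w_manual = w_before - lr * (grad + wd * w_before)
print(torch.allclose(w.detach(), w_manual))  # True
```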