Loss does not decrease when training a hand-built CNN, looking for help
Hi_Boy022, posted 2019-10. Views: 5884, replies: 5
Last edited 2022-04

Recently I built a CNN with paddle.fluid for an image emotion-recognition classification task. I had already run the same task locally with Keras on CPU: during training, the loss and accuracy improved steadily, with the loss eventually reaching about 0.6 and the accuracy about 0.56. But when I train with paddle.fluid, using the same network structure, the loss just hovers around 1.81 and the accuracy stays at 0.25. What puzzles me most is that changing the network structure or the learning rate gives roughly the same result, and I don't know why. Could it be that the network's parameters are simply not being updated during training? I'd appreciate any pointers from the community.

The framework is paddle 1.5.1; training runs on AI Studio on a GPU V100.
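For reference, a loss pinned near 1.8-1.95 on a 7-class problem is roughly what you get from a network whose softmax output has collapsed to a constant distribution, which is consistent with the suspicion that the parameters are not effectively updating. A quick sanity check of those baseline numbers (plain NumPy, not Paddle-specific; the class priors below are hypothetical, purely for illustration):

```python
import numpy as np

# Cross-entropy of a predictor that outputs the uniform distribution
# over 7 emotion classes regardless of the input.
num_classes = 7
uniform_loss = -np.log(1.0 / num_classes)  # = ln(7)
print("uniform-prediction loss: %.4f" % uniform_loss)  # ~1.9459

# If the network instead collapses to a constant prediction matching the
# class priors of an imbalanced dataset, the plateau sits below ln(7).
# These priors are made up for illustration, not from the real dataset:
priors = np.array([0.25, 0.20, 0.15, 0.15, 0.10, 0.10, 0.05])
prior_loss = -(priors * np.log(priors)).sum()  # entropy of the priors
print("constant-prior loss: %.4f" % prior_loss)  # ~1.8479
```

With these example priors the plateau lands near 1.85, close to the observed 1.81, and a constant argmax prediction would score the majority-class frequency (0.25 here), matching the stuck accuracy; that pattern would point to the optimization collapsing rather than a data-feeding mistake.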

Here is the network-definition module:

import paddle.fluid as fluid

train_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(main_program=train_program, startup_program=startup_program):
    with fluid.unique_name.guard():
        # ===== Convolution Network =====
        # define input data
        image_in = fluid.layers.data(name='image', shape=[1, 48, 48])
        label_in = fluid.layers.data(name='label', shape=[1])
        # define network
        conv = fluid.layers.conv2d(input=image_in, num_filters=10, filter_size=(8, 8), stride=1, padding=4)
        conv = fluid.layers.conv2d(input=conv, num_filters=10, filter_size=(8, 8), stride=1, padding=4)
        pool = fluid.layers.pool2d(input=conv, pool_size=4, pool_stride=(2, 2))
        for i in range(2):
            # note: both iterations read from `pool`, so only the last conv feeds forward
            conv = fluid.layers.conv2d(input=pool, num_filters=10, filter_size=(4, 4), stride=1)
        pool = fluid.layers.pool2d(input=conv, pool_size=2, pool_stride=(2, 2))
        flat = fluid.layers.flatten(pool)
        # x = fluid.layers.dropout(flat, 0.25)
        x = flat
        for i in range(20):
            x = fluid.layers.fc(input=x, size=100, act='relu')
            # x = fluid.layers.dropout(x, 0.25)
        predict = fluid.layers.fc(input=x, size=7, act='softmax')
        loss = fluid.layers.mean(fluid.layers.cross_entropy(input=predict, label=label_in))
        acc = fluid.layers.accuracy(input=predict, label=label_in)

        # clone for evaluation before the optimizer ops are added
        test_program = train_program.clone(for_test=True)
        # define optimizer
        adam = fluid.optimizer.Adam(learning_rate=0.01)
        adam.minimize(loss)
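As a quick check on the layer definitions above, the feature-map sizes can be traced with the standard conv/pool size formula, out = floor((in + 2*pad - kernel) / stride) + 1 (fluid's pool2d defaults to floor mode). A plain-Python sketch:

```python
def conv_out(size, kernel, stride=1, pad=0):
    # standard convolution/pooling output-size formula (floor mode)
    return (size + 2 * pad - kernel) // stride + 1

s = 48                      # input images are 48x48
s = conv_out(s, 8, 1, 4)    # conv 8x8, pad 4      -> 49
s = conv_out(s, 8, 1, 4)    # conv 8x8, pad 4      -> 50
s = conv_out(s, 4, 2)       # pool 4x4, stride 2   -> 24
s = conv_out(s, 4, 1)       # conv 4x4, no padding -> 21
s = conv_out(s, 2, 2)       # pool 2x2, stride 2   -> 10
print(s, 10 * s * s)        # 10 channels -> flatten gives 1000 features
```

Since both iterations of the `for i in range(2)` loop take `pool` as input, only one 4x4 conv actually reaches `flat`; the trace above follows the ops that feed forward, ending with a 10x10x10 = 1000-feature input to the 20 fully-connected layers.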

Here is the training module:

import numpy as np
from sklearn.utils import shuffle  # assumed import (not shown in the original snippet)

history = {'loss': [], 'acc': [], 'loss_val': [], 'acc_val': []}
exe = fluid.Executor(fluid.CUDAPlace(0))
startup_program.random_seed = 1
exe.run(startup_program)
epoch = 50
batch_size = 32
for i in range(epoch):
    print("epoch", i, "================>")
    sample_train_temp, label_train_temp = shuffle(sample_train, label_train)
    for j in range(int(np.floor(len(sample_train) / batch_size))):
        out = exe.run(program=train_program,
                      feed={'image': sample_train_temp[j * batch_size:(j + 1) * batch_size].astype(np.float32),
                            'label': label_train_temp[j * batch_size:(j + 1) * batch_size].astype(np.int64)},
                      fetch_list=[loss.name, acc.name])
        if j % 200 == 0:
            print("Batch:%d,loss:%.4f,acc:%.4f" % (j, out[0], out[1]))
    loss_train, acc_train = exe.run(program=test_program,
                                    feed={'image': sample_train.astype(np.float32),
                                          'label': label_train.astype(np.int64)},
                                    fetch_list=[loss.name, acc.name])
    loss_val, acc_val = exe.run(program=test_program,
                                feed={'image': sample_val.astype(np.float32),
                                      'label': label_val.astype(np.int64)},
                                fetch_list=[loss.name, acc.name])
    history['loss'].append(loss_train)
    history['acc'].append(acc_train)
    history['loss_val'].append(loss_val)
    history['acc_val'].append(acc_val)
    print("loss: %.2f, acc: %.2f, loss_val: %.2f, acc_val: %.2f" % (loss_train, acc_train, loss_val, acc_val))
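The loop above relies on `shuffle(sample_train, label_train)` reordering both arrays with the same permutation, which is how `sklearn.utils.shuffle` behaves. A minimal NumPy equivalent (the helper name `joint_shuffle` is my own, not from the original code) that makes the pairing explicit:

```python
import numpy as np

def joint_shuffle(samples, labels, seed=None):
    # draw ONE permutation and apply it to both arrays, so each image
    # stays paired with its own label after shuffling
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(samples))
    return samples[idx], labels[idx]

# tiny demo: 4 fake "images" of shape (1, 2, 2), paired with labels 0..3,
# where image i is filled starting at value 4*i
x = np.arange(4 * 2 * 2).reshape(4, 1, 2, 2).astype(np.float32)
y = np.arange(4).reshape(4, 1).astype(np.int64)
xs, ys = joint_shuffle(x, y, seed=0)
# pairing is preserved: image i still carries label i
print([(int(lbl), float(img[0, 0, 0])) for img, lbl in zip(xs, ys)])
```

If samples and labels were instead permuted independently, the targets would be randomly reassigned each epoch and the loss would stay flat by construction, so this pairing is worth verifying when training stalls.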

5 replies in total; last reply 2022-04, by a user who has since been banned
#6 嘿嘿小小苏, replied 2021-08

How did you solve it?
#5 xix810, replied 2020-11

How did you solve it?
#4 羿羽锋, replied 2020-04

How did you solve it? I'm having the same problem.
#3 i_am_a_support, replied 2020-03

How did you solve it?
#2 Hi_Boy022, replied 2019-12

It's solved now.