Keras Overview

TensorFlow 2 Tutorial: Keras Overview

Keras is a high-level API for building and training deep learning models. It is suited to rapid prototyping, advanced research, and production.

The three main advantages of Keras:
user friendly, modular and composable, and easy to extend

1 Importing tf.keras

TensorFlow 2 recommends building networks with tf.keras. Most common neural network layers live in tf.keras.layers (note that the version bundled as tf.keras may differ from the standalone keras release).

# Check the versions
import tensorflow as tf
from tensorflow.keras import layers
print(tf.__version__)
print(tf.keras.__version__)
2.0.0-alpha0
2.2.4-tf

2 Building a Simple Model

2.1 Stacking layers

The most common model type is a stack of layers: the tf.keras.Sequential model.

model = tf.keras.Sequential()
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))

2.2 Layer configuration

The main configuration arguments shared by layers in tf.keras.layers are:

activation: sets the activation function for the layer. This argument can be the name of a built-in function (a string) or a callable. By default, no activation is applied.

kernel_initializer and bias_initializer: the initialization schemes for the layer's weights (kernel and bias). This argument can be a name string or a callable; it defaults to the "Glorot uniform" initializer.

kernel_regularizer and bias_regularizer: the regularization schemes applied to the layer's weights (kernel and bias), such as L1 or L2 regularization. By default, no regularization is applied.

layers.Dense(32, activation='sigmoid')
layers.Dense(32, activation=tf.sigmoid)
layers.Dense(32, kernel_initializer='orthogonal')
layers.Dense(32, kernel_initializer=tf.keras.initializers.glorot_normal)
layers.Dense(32, kernel_regularizer=tf.keras.regularizers.l2(0.01))
layers.Dense(32, kernel_regularizer=tf.keras.regularizers.l1(0.01))
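
The list above also mentions bias_initializer and bias_regularizer; a minimal sketch (not in the original) that configures the bias term as well:

# Sketch: initializing and regularizing the bias (not from the original tutorial)
layers.Dense(32,
             bias_initializer=tf.keras.initializers.Zeros(),   # start biases at zero (the Dense default)
             bias_regularizer=tf.keras.regularizers.l2(0.01))  # L2 penalty on the bias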



3 Training and Evaluation

3.1 Configuring the training process

After the model is constructed, configure its learning process by calling the compile method:

model = tf.keras.Sequential()
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
             loss=tf.keras.losses.categorical_crossentropy,
             metrics=[tf.keras.metrics.categorical_accuracy])

3.2 Feeding NumPy data

For small datasets, the input data can be built directly from NumPy arrays.

import numpy as np

train_x = np.random.random((1000, 72))
train_y = np.random.random((1000, 10))

val_x = np.random.random((200, 72))
val_y = np.random.random((200, 10))

model.fit(train_x, train_y, epochs=10, batch_size=100,
          validation_data=(val_x, val_y))
Train on 1000 samples, validate on 200 samples
Epoch 1/10
1000/1000 [==============================] - 0s 275us/sample - loss: 11.8882 - categorical_accuracy: 0.0990 - val_loss: 11.9292 - val_categorical_accuracy: 0.0800
Epoch 2/10
1000/1000 [==============================] - 0s 18us/sample - loss: 12.0337 - categorical_accuracy: 0.0880 - val_loss: 12.2373 - val_categorical_accuracy: 0.1150
Epoch 3/10
1000/1000 [==============================] - 0s 17us/sample - loss: 12.5521 - categorical_accuracy: 0.1000 - val_loss: 13.0598 - val_categorical_accuracy: 0.1200
Epoch 4/10
1000/1000 [==============================] - 0s 17us/sample - loss: 13.8049 - categorical_accuracy: 0.1000 - val_loss: 14.9167 - val_categorical_accuracy: 0.1300
Epoch 5/10
1000/1000 [==============================] - 0s 16us/sample - loss: 16.2108 - categorical_accuracy: 0.0960 - val_loss: 17.8260 - val_categorical_accuracy: 0.1250
Epoch 6/10
1000/1000 [==============================] - 0s 16us/sample - loss: 18.9017 - categorical_accuracy: 0.0960 - val_loss: 19.7697 - val_categorical_accuracy: 0.1250
Epoch 7/10
1000/1000 [==============================] - 0s 14us/sample - loss: 20.8168 - categorical_accuracy: 0.0950 - val_loss: 22.1196 - val_categorical_accuracy: 0.1350
Epoch 8/10
1000/1000 [==============================] - 0s 15us/sample - loss: 23.9623 - categorical_accuracy: 0.0910 - val_loss: 25.5313 - val_categorical_accuracy: 0.1400
Epoch 9/10
1000/1000 [==============================] - 0s 14us/sample - loss: 28.2102 - categorical_accuracy: 0.0830 - val_loss: 30.8286 - val_categorical_accuracy: 0.1400
Epoch 10/10
1000/1000 [==============================] - 0s 16us/sample - loss: 34.6473 - categorical_accuracy: 0.0950 - val_loss: 37.5337 - val_categorical_accuracy: 0.1450

3.3 Feeding data with tf.data

For large datasets, use tf.data to build the training input pipeline.

dataset = tf.data.Dataset.from_tensor_slices((train_x, train_y))
dataset = dataset.batch(32)
dataset = dataset.repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_x, val_y))
val_dataset = val_dataset.batch(32)
val_dataset = val_dataset.repeat()

model.fit(dataset, epochs=10, steps_per_epoch=30,
          validation_data=val_dataset, validation_steps=3)
WARNING:tensorflow:Expected a shuffled dataset but input dataset `x` is not shuffled. Please invoke `shuffle()` on input dataset.
Epoch 1/10
30/30 [==============================] - 0s 2ms/step - loss: 50.9325 - categorical_accuracy: 0.0885 - val_loss: 62.2331 - val_categorical_accuracy: 0.1354
Epoch 2/10
30/30 [==============================] - 0s 2ms/step - loss: 82.0328 - categorical_accuracy: 0.0812 - val_loss: 98.1532 - val_categorical_accuracy: 0.1354
Epoch 3/10
30/30 [==============================] - 0s 2ms/step - loss: 125.9149 - categorical_accuracy: 0.0887 - val_loss: 144.6487 - val_categorical_accuracy: 0.1458
Epoch 4/10
30/30 [==============================] - 0s 2ms/step - loss: 179.0180 - categorical_accuracy: 0.0983 - val_loss: 199.4754 - val_categorical_accuracy: 0.1354
Epoch 5/10
30/30 [==============================] - 0s 2ms/step - loss: 239.4420 - categorical_accuracy: 0.0833 - val_loss: 261.4482 - val_categorical_accuracy: 0.1250
Epoch 6/10
30/30 [==============================] - 0s 2ms/step - loss: 305.0409 - categorical_accuracy: 0.0769 - val_loss: 325.7398 - val_categorical_accuracy: 0.1354
Epoch 7/10
30/30 [==============================] - 0s 2ms/step - loss: 371.2375 - categorical_accuracy: 0.0897 - val_loss: 389.0976 - val_categorical_accuracy: 0.1458
Epoch 8/10
30/30 [==============================] - 0s 2ms/step - loss: 432.9626 - categorical_accuracy: 0.0855 - val_loss: 445.8658 - val_categorical_accuracy: 0.1042
Epoch 9/10
30/30 [==============================] - 0s 2ms/step - loss: 487.4057 - categorical_accuracy: 0.0929 - val_loss: 491.9482 - val_categorical_accuracy: 0.1562
Epoch 10/10
30/30 [==============================] - 0s 2ms/step - loss: 531.5106 - categorical_accuracy: 0.0780 - val_loss: 520.3076 - val_categorical_accuracy: 0.0625
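
The warning above notes that the training dataset is not shuffled. A minimal sketch (not in the original) that calls shuffle() before batching:

# Sketch: shuffle the training data before batching, as the warning suggests
dataset = tf.data.Dataset.from_tensor_slices((train_x, train_y))
dataset = dataset.shuffle(buffer_size=1000)  # buffer covers all 1000 training samples
dataset = dataset.batch(32).repeat()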

3.4 Evaluation and prediction

Evaluation and prediction use the tf.keras.Model.evaluate and tf.keras.Model.predict methods; both accept input data built from NumPy arrays or from a tf.data.Dataset.

# Model evaluation
test_x = np.random.random((1000, 72))
test_y = np.random.random((1000, 10))
model.evaluate(test_x, test_y, batch_size=32)
test_data = tf.data.Dataset.from_tensor_slices((test_x, test_y))
test_data = test_data.batch(32).repeat()
model.evaluate(test_data, steps=30)
1000/1000 [==============================] - 0s 24us/sample - loss: 539.8281 - categorical_accuracy: 0.1010
30/30 [==============================] - 0s 1ms/step - loss: 539.0173 - categorical_accuracy: 0.1000

[539.0173116048177, 0.1]

# Model prediction
result = model.predict(test_x, batch_size=32)
print(result)
[[8.32659006e-02 0.00000000e+00 6.45017868e-28 ... 1.63265705e-01
  0.00000000e+00 0.00000000e+00]
 [1.07721470e-01 0.00000000e+00 9.00545094e-31 ... 3.00054163e-01
  0.00000000e+00 0.00000000e+00]
 [8.90668631e-02 0.00000000e+00 8.70908121e-28 ... 2.28218928e-01
  0.00000000e+00 0.00000000e+00]
 ...
 [7.19683096e-02 0.00000000e+00 3.52643310e-28 ... 2.36090288e-01
  0.00000000e+00 0.00000000e+00]
 [9.28377360e-02 0.00000000e+00 7.30382149e-29 ... 1.51132658e-01
  0.00000000e+00 0.00000000e+00]
 [1.44966379e-01 0.00000000e+00 3.26522249e-24 ... 2.58556724e-01
  1.18584464e-35 0.00000000e+00]]
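
The rows of result are per-class probabilities; a small sketch (not in the original) that converts them to predicted class indices:

# Sketch: pick the most probable class for each sample
predicted_classes = np.argmax(result, axis=-1)
print(predicted_classes[:10])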

4 Building Complex Models

4.1 The functional API

The tf.keras.Sequential model is a plain stack of layers and cannot represent arbitrary architectures. The Keras functional API can build complex model topologies, for example:

  • multi-input models (see the sketch after the example below),
  • multi-output models,
  • models with shared layers (the same layer called more than once),
  • models with non-sequential data flows (e.g. residual connections).

Models built with the functional API have the following characteristics:

  • A layer instance is callable and returns a tensor.
  • Input tensors and output tensors are used to define a tf.keras.Model instance.
  • Such a model is trained in the same way as a Sequential model.
input_x = tf.keras.Input(shape=(72,))
hidden1 = layers.Dense(32, activation='relu')(input_x)
hidden2 = layers.Dense(16, activation='relu')(hidden1)
pred = layers.Dense(10, activation='softmax')(hidden2)
# Build a tf.keras.Model instance
model = tf.keras.Model(inputs=input_x, outputs=pred)
model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
             loss=tf.keras.losses.categorical_crossentropy,
             metrics=['accuracy'])
model.fit(train_x, train_y, batch_size=32, epochs=5)
Epoch 1/5
1000/1000 [==============================] - 0s 116us/sample - loss: 12.0565 - accuracy: 0.0950
Epoch 2/5
1000/1000 [==============================] - 0s 31us/sample - loss: 14.7174 - accuracy: 0.0950
Epoch 3/5
1000/1000 [==============================] - 0s 30us/sample - loss: 23.8354 - accuracy: 0.0960
Epoch 4/5
1000/1000 [==============================] - 0s 28us/sample - loss: 41.6427 - accuracy: 0.0970
Epoch 5/5
1000/1000 [==============================] - 0s 27us/sample - loss: 73.9011 - accuracy: 0.0990
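
The bullets above also mention multi-input models and shared layers; a minimal sketch (not in the original) built with the same functional API:

# Sketch: a two-input model with a shared Dense layer
input_a = tf.keras.Input(shape=(72,))
input_b = tf.keras.Input(shape=(72,))
shared_dense = layers.Dense(32, activation='relu')   # the same layer instance is called twice
merged = layers.concatenate([shared_dense(input_a), shared_dense(input_b)])
outputs = layers.Dense(10, activation='softmax')(merged)
multi_input_model = tf.keras.Model(inputs=[input_a, input_b], outputs=outputs)
multi_input_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])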

4.2 Model subclassing

A fully customizable model can be built by subclassing tf.keras.Model and defining your own forward pass.

  • Create the layers in the __init__ method and set them as attributes of the class instance.
  • Define the forward pass in the call method.
class MyModel(tf.keras.Model):
    def __init__(self, num_classes=10):
        super(MyModel, self).__init__(name='my_model')
        self.num_classes = num_classes
        # Define the layers
        self.layer1 = layers.Dense(32, activation='relu')
        self.layer2 = layers.Dense(num_classes, activation='softmax')
    def call(self, inputs):
        # Define the forward pass
        h1 = self.layer1(inputs)
        out = self.layer2(h1)
        return out
    
    def compute_output_shape(self, input_shape):
        # Compute the output shape
        shape = tf.TensorShape(input_shape).as_list()
        shape[-1] = self.num_classes
        return tf.TensorShape(shape)
# Instantiate the model class and train it
model = MyModel(num_classes=10)
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),
             loss=tf.keras.losses.categorical_crossentropy,
             metrics=['accuracy'])

model.fit(train_x, train_y, batch_size=16, epochs=5)
Epoch 1/5
1000/1000 [==============================] - 0s 128us/sample - loss: 14.3538 - accuracy: 0.0950
Epoch 2/5
1000/1000 [==============================] - 0s 46us/sample - loss: 19.4187 - accuracy: 0.0930
Epoch 3/5
1000/1000 [==============================] - 0s 47us/sample - loss: 22.8924 - accuracy: 0.1010
Epoch 4/5
1000/1000 [==============================] - 0s 48us/sample - loss: 25.7712 - accuracy: 0.1030
Epoch 5/5
1000/1000 [==============================] - 0s 48us/sample - loss: 28.2494 - accuracy: 0.1060

4.3 Custom layers

Create a custom layer by subclassing tf.keras.layers.Layer and implementing the following methods:

  • __init__: (optionally) define the sub-layers used by this layer.
  • build: create the layer's weights, adding them with the add_weight method.
  • call: define the forward pass.
  • compute_output_shape: specify how to compute the layer's output shape given the input shape.
  • Optionally, a layer can be serialized by implementing the get_config method and the from_config class method.
class MyLayer(layers.Layer):
    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)
    
    def build(self, input_shape):
        shape = tf.TensorShape((input_shape[1], self.output_dim))
        self.kernel = self.add_weight(name='kernel1', shape=shape,
                                   initializer='uniform', trainable=True)
        super(MyLayer, self).build(input_shape)
    
    def call(self, inputs):
        return tf.matmul(inputs, self.kernel)

    def compute_output_shape(self, input_shape):
        shape = tf.TensorShape(input_shape).as_list()
        shape[-1] = self.output_dim
        return tf.TensorShape(shape)

    def get_config(self):
        base_config = super(MyLayer, self).get_config()
        base_config['output_dim'] = self.output_dim
        return base_config

    @classmethod
    def from_config(cls, config):
        return cls(**config)

# Build a model using the custom layer
model = tf.keras.Sequential(
[
    MyLayer(10),
    layers.Activation('softmax')
])


model.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),
             loss=tf.keras.losses.categorical_crossentropy,
             metrics=['accuracy'])

model.fit(train_x, train_y, batch_size=16, epochs=5) 
Epoch 1/5
1000/1000 [==============================] - 0s 107us/sample - loss: 11.6361 - accuracy: 0.1030
Epoch 2/5
1000/1000 [==============================] - 0s 48us/sample - loss: 11.6344 - accuracy: 0.0970
Epoch 3/5
1000/1000 [==============================] - 0s 51us/sample - loss: 11.6304 - accuracy: 0.1010
Epoch 4/5
1000/1000 [==============================] - 0s 49us/sample - loss: 11.6264 - accuracy: 0.0990
Epoch 5/5
1000/1000 [==============================] - 0s 49us/sample - loss: 11.6246 - accuracy: 0.1050

4.4 Callbacks

Callbacks are objects passed to a model to customize and extend its behavior during training. You can write your own custom callbacks or use the built-in ones in tf.keras.callbacks. Commonly used built-in callbacks include (a sketch of the first two appears after the example below):

  • tf.keras.callbacks.ModelCheckpoint: save checkpoints of the model at regular intervals.
  • tf.keras.callbacks.LearningRateScheduler: change the learning rate dynamically.
  • tf.keras.callbacks.EarlyStopping: interrupt training when validation performance stops improving.
  • tf.keras.callbacks.TensorBoard: monitor the model's behavior with TensorBoard.
# This may raise an error on Windows 10
# EarlyStopping = tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss')
# callbacks = [
#     tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
#     tf.keras.callbacks.TensorBoard(log_dir='./logs')
# ]

import os
# Define the log directory
log_dir = os.path.join('keras_overview')  # works around a TensorBoard path bug on Windows 10
if not os.path.exists(log_dir):
    os.mkdir(log_dir)
tensorboard = tf.keras.callbacks.TensorBoard(log_dir = log_dir)
EarlyStopping = tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss')
model.fit(train_x, train_y, batch_size=16, epochs=5, validation_data=(val_x, val_y),
         callbacks=[tensorboard, EarlyStopping])
Train on 1000 samples, validate on 200 samples
Epoch 1/5
1000/1000 [==============================] - 0s 290us/sample - loss: 11.6231 - accuracy: 0.1000 - val_loss: 11.6666 - val_accuracy: 0.1300
Epoch 2/5
1000/1000 [==============================] - 0s 115us/sample - loss: 11.6195 - accuracy: 0.0990 - val_loss: 11.6666 - val_accuracy: 0.1250
Epoch 3/5
1000/1000 [==============================] - 0s 135us/sample - loss: 11.6184 - accuracy: 0.0990 - val_loss: 11.6647 - val_accuracy: 0.1200
Epoch 4/5
1000/1000 [==============================] - 0s 204us/sample - loss: 11.6164 - accuracy: 0.1030 - val_loss: 11.6669 - val_accuracy: 0.1200
Epoch 5/5
1000/1000 [==============================] - 0s 99us/sample - loss: 11.6153 - accuracy: 0.1020 - val_loss: 11.6632 - val_accuracy: 0.1200
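
The list above also includes ModelCheckpoint and LearningRateScheduler, which the example does not use. A minimal sketch (not in the original; the checkpoint path is hypothetical):

# Sketch: periodically save weights and decay the learning rate each epoch
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    filepath=os.path.join(log_dir, 'ckpt_{epoch}.h5'),  # hypothetical checkpoint path
    save_weights_only=True)
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
    lambda epoch: 1e-3 * 0.9 ** epoch)                  # simple exponential decay schedule
model.fit(train_x, train_y, batch_size=16, epochs=5,
          callbacks=[checkpoint, lr_schedule])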

5 Saving and Restoring Models

5.1 Saving weights

model = tf.keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=(32,)),  # input_shape is required here
    layers.Dense(10, activation='softmax')])

model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# Save and reload the weights
model.save_weights('./keras_overview/weights/model')
model.load_weights('./keras_overview/weights/model')
# Save in HDF5 format
model.save_weights('./keras_overview/model.h5', save_format='h5')
model.load_weights('./keras_overview/model.h5')

5.2 Saving the network architecture

# Serialize to JSON
import json
import pprint
json_str = model.to_json()
pprint.pprint(json.loads(json_str))
# Rebuild the model from JSON
fresh_model = tf.keras.models.model_from_json(json_str)
{'backend': 'tensorflow',
 'class_name': 'Sequential',
 'config': {'layers': [{'class_name': 'Dense',
                        'config': {'activation': 'relu',
                                   'activity_regularizer': None,
                                   'batch_input_shape': [None, 32],
                                   'bias_constraint': None,
                                   'bias_initializer': {'class_name': 'Zeros',
                                                        'config': {}},
                                   'bias_regularizer': None,
                                   'dtype': 'float32',
                                   'kernel_constraint': None,
                                   'kernel_initializer': {'class_name': 'GlorotUniform',
                                                          'config': {'seed': None}},
                                   'kernel_regularizer': None,
                                   'name': 'dense_17',
                                   'trainable': True,
                                   'units': 64,
                                   'use_bias': True}},
                       {'class_name': 'Dense',
                        'config': {'activation': 'softmax',
                                   'activity_regularizer': None,
                                   'bias_constraint': None,
                                   'bias_initializer': {'class_name': 'Zeros',
                                                        'config': {}},
                                   'bias_regularizer': None,
                                   'dtype': 'float32',
                                   'kernel_constraint': None,
                                   'kernel_initializer': {'class_name': 'GlorotUniform',
                                                          'config': {'seed': None}},
                                   'kernel_regularizer': None,
                                   'name': 'dense_18',
                                   'trainable': True,
                                   'units': 10,
                                   'use_bias': True}}],
            'name': 'sequential_3'},
 'keras_version': '2.2.4-tf'}


# Save in YAML format (requires pyyaml to be installed beforehand)
yaml_str = model.to_yaml()
print(yaml_str)
# Rebuild the model from the YAML data
fresh_model = tf.keras.models.model_from_yaml(yaml_str)
backend: tensorflow
class_name: Sequential
config:
  layers:
  - class_name: Dense
    config:
      activation: relu
      activity_regularizer: null
      batch_input_shape: !!python/tuple
      - null
      - 32
      bias_constraint: null
      bias_initializer:
        class_name: Zeros
        config: {}
      bias_regularizer: null
      dtype: float32
      kernel_constraint: null
      kernel_initializer:
        class_name: GlorotUniform
        config:
          seed: null
      kernel_regularizer: null
      name: dense_17
      trainable: true
      units: 64
      use_bias: true
  - class_name: Dense
    config:
      activation: softmax
      activity_regularizer: null
      bias_constraint: null
      bias_initializer:
        class_name: Zeros
        config: {}
      bias_regularizer: null
      dtype: float32
      kernel_constraint: null
      kernel_initializer:
        class_name: GlorotUniform
        config:
          seed: null
      kernel_regularizer: null
      name: dense_18
      trainable: true
      units: 10
      use_bias: true
  name: sequential_3
keras_version: 2.2.4-tf

g:\softwares\conda\envs\learn\lib\site-packages\tensorflow\python\keras\saving\model_config.py:76: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  config = yaml.load(yaml_string)

Note: subclassed models are not serializable this way, because their architecture is defined by the Python code in the body of the call method.
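
A common workaround, sketched below (not from the original; the path is hypothetical), is to rebuild the subclassed model in code and persist only its weights:

# Sketch: save/restore the weights of a subclassed model
sub_model = MyModel(num_classes=10)
sub_model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
sub_model.fit(train_x, train_y, batch_size=32, epochs=1)      # builds the variables
sub_model.save_weights('./keras_overview/subclassed_weights')  # hypothetical path

restored = MyModel(num_classes=10)                             # recreate the architecture in code
restored.compile(optimizer='rmsprop', loss='categorical_crossentropy')
restored.load_weights('./keras_overview/subclassed_weights')   # weights are matched once the model is built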

5.3 Saving the entire model

model = tf.keras.Sequential([
  layers.Dense(10, activation='softmax', input_shape=(72,)),
  layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_x, train_y, batch_size=32, epochs=5)
# Save the entire model
model.save('keras_overview/all_model.h5')
# Load the entire model back
model = tf.keras.models.load_model('keras_overview/all_model.h5')
Epoch 1/5
1000/1000 [==============================] - 0s 108us/sample - loss: 11.6180 - accuracy: 0.0880
Epoch 2/5
1000/1000 [==============================] - 0s 33us/sample - loss: 11.6532 - accuracy: 0.0930
Epoch 3/5
1000/1000 [==============================] - 0s 32us/sample - loss: 11.6921 - accuracy: 0.0950
Epoch 4/5
1000/1000 [==============================] - 0s 28us/sample - loss: 11.7793 - accuracy: 0.1130
Epoch 5/5
1000/1000 [==============================] - 0s 28us/sample - loss: 11.8443 - accuracy: 0.1200
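
In the stable TF 2 releases (possibly not the 2.0 alpha build used here), the whole model can also be written in the SavedModel format; a hedged sketch:

# Sketch: SavedModel format (a directory instead of a single .h5 file)
model.save('keras_overview/saved_model', save_format='tf')
model = tf.keras.models.load_model('keras_overview/saved_model')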

6 Using Keras with Estimators

The Estimator API is used to train models in distributed environments. It targets industry use cases such as distributed training on large datasets and exporting a model for production.

model = tf.keras.Sequential([layers.Dense(10,activation='softmax'),
                          layers.Dense(10,activation='softmax')])

model.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

estimator = tf.keras.estimator.model_to_estimator(model)
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: C:\Users\CHNITA~1\AppData\Local\Temp\tmplbss2z_9
INFO:tensorflow:Using the Keras model provided.
INFO:tensorflow:Using config: {'_model_dir': 'C:\\Users\\CHNITA~1\\AppData\\Local\\Temp\\tmplbss2z_9', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': , '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}

7 Eager execution

Eager execution is an imperative programming environment that evaluates operations immediately. It is not required for Keras, but it is supported by tf.keras programs and is useful for inspecting and debugging them.
All of the tf.keras model-building APIs are compatible with eager execution. While the Sequential and functional APIs can both be used, eager execution particularly benefits model subclassing and custom layers, i.e. the APIs that require you to write the forward pass as code rather than assemble a model from existing layers.
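
A small sketch (not in the original) showing that operations run immediately under eager execution:

# Sketch: eager execution is enabled by default in TF2
print(tf.executing_eagerly())              # True
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.matmul(x, x))                     # the result is computed and printed right away, no Session needed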

8 Running on Multiple GPUs

  • The local setup here is CPU-only; a GPU version will be run later.

tf.keras models can run on multiple GPUs using tf.distribute.Strategy. This API provides distributed training across multiple GPUs with almost no changes to existing code.

Currently tf.distribute.MirroredStrategy is the only supported distribution strategy. MirroredStrategy performs in-graph replication with synchronous training, using all-reduce on a single machine. To use a distribution strategy, nest the optimizer instantiation and the model construction and compilation inside the strategy's scope(), then train the model.

The following example distributes a tf.keras.Model across multiple GPUs on a single machine.
First, define the model inside the distribution strategy scope:

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential()
    model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
    model.add(layers.Dense(1, activation='sigmoid'))
    optimizer = tf.keras.optimizers.SGD(0.2)
    model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
WARNING:tensorflow:Not all devices in `tf.distribute.Strategy` are visible to TensorFlow.
Model: "sequential_7"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_25 (Dense)             (None, 16)                176       
_________________________________________________________________
dense_26 (Dense)             (None, 1)                 17        
=================================================================
Total params: 193
Trainable params: 193
Non-trainable params: 0
_________________________________________________________________

Then train the model on the data just as you would on a single GPU:

x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.shuffle(buffer_size=1024).batch(32)
model.fit(dataset, epochs=1)
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
32/32 [==============================] - 0s 4ms/step - loss: 0.7029
