I have Keras installed with the TensorFlow backend and CUDA. I'd like to sometimes force Keras, on demand, to use the CPU. Can this be done without, say, installing a separate CPU-only TensorFlow in a virtual environment? If so, how? If the backend were Theano, the flags could be set, but I have not heard of TensorFlow flags accessible via Keras.
If you want to force Keras to use the CPU:
Way 1
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = ""
before Keras/TensorFlow is imported.
Way 2
Run your script as
$ CUDA_VISIBLE_DEVICES="" ./your_keras_code.py
See also
https://github.com/keras-team/keras/issues/152
https://github.com/fchollet/keras/issues/4613
This worked for me (Win10); place it before you import Keras:
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
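To verify that it took effect, a quick sanity check (assuming TensorFlow 1.x, where device_lib is available) should list only CPU devices:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

import tensorflow as tf
from tensorflow.python.client import device_lib

# With CUDA_VISIBLE_DEVICES set to '-1', no GPU devices should appear here.
print(device_lib.list_local_devices())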
A rather separable way of doing this is to use
import tensorflow as tf
from keras import backend as K

num_cores = 4
GPU = False  # set to True to allow GPU usage
CPU = True   # set to True to force CPU-only execution

if GPU:
    num_GPU = 1
    num_CPU = 1
if CPU:
    num_GPU = 0
    num_CPU = 1

config = tf.ConfigProto(intra_op_parallelism_threads=num_cores,
                        inter_op_parallelism_threads=num_cores,
                        allow_soft_placement=True,
                        device_count={'CPU': num_CPU,
                                      'GPU': num_GPU})
session = tf.Session(config=config)
K.set_session(session)
Here, with the booleans GPU and CPU, we indicate whether we would like to run our code with the GPU or the CPU by rigidly defining the number of GPUs and CPUs the TensorFlow session is allowed to access. The variables num_GPU and num_CPU define this value. num_cores then sets the number of CPU cores available for use via intra_op_parallelism_threads and inter_op_parallelism_threads.
intra_op_parallelism_threads dictates the number of threads that a parallel operation in a single node of the computation graph is allowed to use (intra), while inter_op_parallelism_threads defines the number of threads available for parallel operations across the nodes of the computation graph (inter).
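As a rough illustration (a sketch of my own, not part of the answer above): two independent matrix multiplications can be dispatched on separate inter-op threads, while each multiplication is itself split across intra-op threads:

import tensorflow as tf

config = tf.ConfigProto(intra_op_parallelism_threads=2,
                        inter_op_parallelism_threads=2)

a = tf.random_normal((1000, 1000))
b = tf.random_normal((1000, 1000))
c = tf.matmul(a, a)  # internally parallelized across intra-op threads
d = tf.matmul(b, b)  # independent of c, so it may run on another inter-op thread

with tf.Session(config=config) as sess:
    sess.run([c, d])  # c and d have no dependency, so they can run concurrently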
allow_soft_placement allows operations to be run on the CPU if any of the following criteria are met (see the sketch after this list):

there is no GPU implementation for the operation
there are no GPU devices known or registered
there is a need to co-locate with other inputs from the CPU
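A minimal sketch of soft placement (my own example, not from the answer): with allow_soft_placement=True, an op pinned to /gpu:0 falls back to the CPU when no GPU is available, instead of raising an error:

import tensorflow as tf

config = tf.ConfigProto(allow_soft_placement=True,
                        log_device_placement=True)

# Pinned to the GPU, but soft placement lets it fall back to the CPU
# if no GPU device is registered (e.g. CUDA_VISIBLE_DEVICES="").
with tf.device('/gpu:0'):
    x = tf.constant([1.0, 2.0])
    y = x * 2

with tf.Session(config=config) as sess:
    print(sess.run(y))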
All of this is executed in the constructor of my class before any other operations, and is completely separable from any model or other code I use.
Note: this requires tensorflow-gpu and cuda/cudnn to be installed, because the option is given to use a GPU.
Refs:

What do the options in ConfigProto like allow_soft_placement and log_device_placement mean?
Meaning of inter_op_parallelism_threads and intra_op_parallelism_threads
allow_soft_placement, intra_op_parallelism_threads, inter_op_parallelism_threads
Do inter/intra_op_parallelism_threads refer to CPU or GPU operations?
Just import tensorflow and use Keras; it's that easy.
import tensorflow as tf
# your code here
with tf.device('/gpu:0'):
    model.fit(X, y, epochs=20, batch_size=128, callbacks=callbacks_list)
Even with tf.device('/cpu:0'), I could still see memory being allocated to Python later with nvidia-smi.
Did you use a with block?
As per the Keras tutorial, you can simply use the same tf.device scope as in regular TensorFlow:
with tf.device('/gpu:0'):
    x = tf.placeholder(tf.float32, shape=(None, 20, 64))
    y = LSTM(32)(x)  # all ops in the LSTM layer will live on GPU:0

with tf.device('/cpu:0'):
    x = tf.placeholder(tf.float32, shape=(None, 20, 64))
    y = LSTM(32)(x)  # all ops in the LSTM layer will live on CPU:0
The code inside the with block can be any Keras code.
I just spent some time figuring this out. Thoma's answer is not complete. Say your program is test.py and you want to use GPU 0 to run it while keeping the other GPUs free. You should write:

CUDA_VISIBLE_DEVICES=0 python test.py

Notice it's DEVICES, not DEVICE.
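The same variable also selects any subset of GPUs; a few common variants (using the same hypothetical test.py):

CUDA_VISIBLE_DEVICES=0 python test.py    # only GPU 0 is visible
CUDA_VISIBLE_DEVICES=1,2 python test.py  # only GPUs 1 and 2 are visible
CUDA_VISIBLE_DEVICES="" python test.py   # no GPUs visible; CPU only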
For people working in PyCharm and wanting to force the CPU, you can add the following line in the Run/Debug configuration, under Environment variables:
<OTHER_ENVIRONMENT_VARIABLES>;CUDA_VISIBLE_DEVICES=-1
After setting os.environ["CUDA_VISIBLE_DEVICES"] = "", how do I "undo" this? I would like Keras to use the GPU again.
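A minimal sketch of one way to undo it (my assumption, not confirmed in the thread): restore the variable before TensorFlow is imported; since TensorFlow reads CUDA_VISIBLE_DEVICES when it initializes its devices, in practice this means restarting the Python process or notebook kernel:

import os

# Make GPU 0 visible again; this must run before TensorFlow is imported,
# so restart the interpreter/kernel if TensorFlow was already loaded.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
# or drop the override entirely:
# del os.environ["CUDA_VISIBLE_DEVICES"]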