
Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2

I have recently installed tensorflow (Windows CPU version) and received the following message:

Successfully installed tensorflow-1.4.0 tensorflow-tensorboard-0.4.0rc2

Then when I tried to run

>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> sess.run(hello)
'Hello, TensorFlow!'
>>> a = tf.constant(10)
>>> b = tf.constant(32)
>>> sess.run(a + b)
42
>>> sess.close()

(which I found through https://github.com/tensorflow/tensorflow)

I received the following message:

2017-11-02 01:56:21.698935: I C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2

But when I ran

import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))

it ran as it should and output Hello, TensorFlow!, which indicates that the installation was indeed successful but that something else is wrong.

Do you know what the problem is and how to fix it?

tf works; the information it spits out just means it isn't as fast as it could be. To get rid of the message you can build it from source, see here
I am also facing the same issue with the commands that you could run successfully. >>> sess = tf.Session() 2017-11-05 18:02:44.670825: I C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
@Ben so it's just a warning, but everything will work just fine ? (at least from a beginner's perspective)
To compile Tensorflow with AVX instructions, see this answer
I got a very similar message in the same situation; the message is Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2.

Peter Cordes

What is this warning about?

Modern CPUs provide a lot of low-level instructions, besides the usual arithmetic and logic, known as extensions, e.g. SSE2, SSE4, AVX, etc. From Wikipedia:

Advanced Vector Extensions (AVX) are extensions to the x86 instruction set architecture for microprocessors from Intel and AMD proposed by Intel in March 2008 and first supported by Intel with the Sandy Bridge processor shipping in Q1 2011 and later on by AMD with the Bulldozer processor shipping in Q3 2011. AVX provides new features, new instructions and a new coding scheme.

In particular, AVX introduces fused multiply-add (FMA) operations, which speed up linear algebra computations such as dot products, matrix multiplication, and convolution. Almost every machine-learning training workload involves a great deal of these operations, and hence will be faster on a CPU that supports AVX and FMA (up to 300%). The warning states that your CPU does support AVX (hooray!).

I'd like to stress here: it's all about CPU only.

Why isn't it used then?

Because the default TensorFlow distribution is built without CPU extensions, such as SSE4.1, SSE4.2, AVX, AVX2, FMA, etc. The default builds (the ones from pip install tensorflow) are intended to be compatible with as many CPUs as possible. Another argument is that even with these extensions a CPU is a lot slower than a GPU, and medium- and large-scale machine-learning training is expected to be performed on a GPU.
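
If you want to check which of these extensions your own CPU actually reports, one quick way (an assumption here: this uses the third-party py-cpuinfo package, installed with pip install py-cpuinfo, not anything bundled with TensorFlow) is:

# Check which SIMD extensions the CPU reports (requires: pip install py-cpuinfo)
import cpuinfo

flags = cpuinfo.get_cpu_info()['flags']  # list of lowercase feature flags
for ext in ('sse4_1', 'sse4_2', 'avx', 'avx2', 'fma'):
    print(ext, 'supported' if ext in flags else 'not supported')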

What should you do?

If you have a GPU, you shouldn't care about AVX support, because most expensive ops will be dispatched on a GPU device (unless explicitly set not to). In this case, you can simply ignore this warning by

# Just disables the warning, doesn't take advantage of AVX/FMA to run faster
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
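# Note: set the variable before importing tensorflow, otherwise the warning may already have been printed
import tensorflow as tf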

... or by setting export TF_CPP_MIN_LOG_LEVEL=2 if you're on Unix. TensorFlow will work fine either way, but you won't see these annoying warnings.

If you don't have a GPU and want to utilize the CPU as much as possible, you should build TensorFlow from source, optimized for your CPU, with AVX, AVX2, and FMA enabled if your CPU supports them. It's been discussed in this question and also in this GitHub issue. TensorFlow uses a build system called Bazel, and building it is not that trivial, but it is certainly doable. After this, not only will the warning disappear, TensorFlow performance should also improve.
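
For reference, a source build on TF 1.x typically looks roughly like the following (a sketch, not a complete guide; adjust the --copt flags to the extensions your CPU actually supports, and substitute the wheel name that the build actually produces):

./configure
bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-msse4.1 --copt=-msse4.2 //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/<generated wheel>.whl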


It's worth mentioning that TensorFlow Serving has separate installs for non-optimized CPU and optimized CPU (AVX, SSE4.1, etc). The details are here: github.com/tensorflow/serving/blob/…
According to a deleted answer to this question, AVX512F on an i9-7900x (Skylake-AVX512) with GPU (GTX1080Ti) "makes a 28% gain 68s->48s on CIFAR10 1000 iterations". Are you sure it's good advice to ignore the warning when using a GPU? Unless that comment is bogus, it appears there is something to gain from CPU instruction sets in at least some cases.
@PeterCordes If that is so, I'll certainly include it into my answer. But the statement "my model speeds up by 30%" sounds the same as "my C++ program speeds up by 30%". What model exactly? Is there manual CPU placement? How is data transferred? E.g., there could be a lot of work in numpy. Of course, it is possible to make CPU a bottleneck and there're a lot of questions about it on SO. It's usually considered a bug.
@Maxim: The entire text of the deleted answer is "In my test the instruction AVX512F on I9 (7900x) GPU (GTX1080Ti) makes a 28% gain 68s->48s on CIFAR10 1000 iterations". So unfortunately there are no details (or punctuation, grammar, or formatting).
Apparently if you are on a Mac, it won't be using GPU, stackoverflow.com/questions/46364917/…
HimalayanCoder

Update the tensorflow binary for your CPU & OS using this command

pip install --ignore-installed --upgrade "Download URL"

The download url of the whl file can be found here

https://github.com/lakshayg/tensorflow-build


I tried on Windows 10 with the url stackoverflow.com/questions/47068709/…. I get an error saying zipfile.BadZipFile: File is not a zip file
It worked when I downloaded the file and installed from the local copy
Anyone getting "zipfile.BadZipFile: File is not a zip file" should use the raw link, e.g. for cuda9.2avx2 the link is github.com/fo40225/tensorflow-windows-wheel/raw/master/1.9.0/…
For Windows, I tried this. Uninstall existing tensorflow using "pip uninstall tensorflow", then reinstall it using "pip install <path to downloaded WHL file>". Download this WHL file onto your computer - github.com/fo40225/tensorflow-windows-wheel/blob/master/1.10.0/… - if you have Python 3.6 and 64-bit Windows (ignore the amd you see). Otherwise navigate a step back in GitHub and search for the correct WHL. It works
Worked for me. Ubuntu 16.04.4, Python 3.5.2, gcc 5.4.0 - downloaded the whl and installed it. Currently using a p2.xlarge AWS instance. Performance improved from 16 seconds per iteration to 9 seconds for a custom object detection exercise with 230 classes running on Faster R-CNN.
Sam

CPU optimization with GPU

There are performance gains you can get by installing TensorFlow from source even if you have a GPU and use it for training and inference. The reason is that some TF operations only have a CPU implementation and cannot run on your GPU.

Also, there are some performance enhancement tips that make good use of your CPU. TensorFlow's performance guide recommends the following:

Placing input pipeline operations on the CPU can significantly improve performance. Utilizing the CPU for the input pipeline frees the GPU to focus on training.

For best performance, you should write your code so that your CPU and GPU work in tandem, and not dump everything on your GPU if you have one. Having your TensorFlow binaries optimized for your CPU could pay off in hours of saved running time, and you only have to do it once.
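
As a concrete illustration, here is a minimal sketch of that recommendation (assuming the tf.data API; filenames and parse_fn are hypothetical placeholders for your own input files and parsing function):

import tensorflow as tf

def build_dataset(filenames, parse_fn):
    # Keep file reading, decoding and augmentation on the CPU so the GPU stays free for training
    with tf.device('/cpu:0'):
        dataset = tf.data.TFRecordDataset(filenames)
        dataset = dataset.map(parse_fn, num_parallel_calls=4)
        dataset = dataset.batch(32)
        dataset = dataset.prefetch(1)  # overlap CPU preprocessing with GPU compute
    return dataset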


flaviussn

For Windows, you can check the official Intel MKL optimization for TensorFlow wheels that are compiled with AVX2. This solution speeds up my inference ~3x.

conda install tensorflow-mkl

Still got this warning after installing tensorflow-mkl: "Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2". Any idea why?
@Pinch: According to the answers to this question, the warnings can be ignored as long as MKL is in place.
@Pinch: In particular, I'm seeing 1.5x improvements on a particular workload, just by using tensorflow-mkl, even though I also still get the errors. Perhaps interestingly, I don't see that improvement on WSL; here, both tensorflow and tensorflow-mkl offer a 2x improvement over the Windows baseline.
Z.Wei

For Windows (thanks to the owner fo40225), go here: https://github.com/fo40225/tensorflow-windows-wheel to fetch the URL for your environment based on the combination of "tf + python + cpu_instruction_extension". Then use this cmd to install:

pip install --ignore-installed --upgrade "URL"

If you encounter the "File is not a zip file" error, download the .whl to your local computer, and use this cmd to install:

pip install --ignore-installed --upgrade /path/target.whl

The GPU ones are split into parts and labeled as .7z files. How do I piece them together?
@user3496060 I used WinRAR to uncompress the split files
Hazarapet Tunanyan

If you use the pip version of TensorFlow, it means it's already compiled and you are just installing it. Basically, you install TensorFlow-GPU, but when you download it from the repository and try to build it yourself, you should build it with CPU AVX support. If you ignore that, you will get the warning every time you run on the CPU. You can take a look at these too.

Proper way to compile Tensorflow with SSE4.2 and AVX

What is AVX Cpu support in tensorflow


How can I avoid this error? What steps should I follow?
This is not an error. It's a warning that this TensorFlow build was not compiled with AVX support for your CPU. If you don't want to see it, just turn it off by setting os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
James Brett

The easiest way that I found to fix this is to uninstall everything and then install a specific version of tensorflow-gpu:

uninstall tensorflow:

    pip uninstall tensorflow

uninstall tensorflow-gpu: (make sure to run this even if you are not sure if you installed it)

    pip uninstall tensorflow-gpu

Install specific tensorflow-gpu version:

    pip install tensorflow-gpu==2.0.0
    pip install tensorflow_hub
    pip install tensorflow_datasets

You can check if this worked by adding the following code into a python file:

from __future__ import absolute_import, division, print_function, unicode_literals

import numpy as np

import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds

print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub Version: ", hub.__version__)
print("GPU is", "available" if tf.config.experimental.list_physical_devices("GPU") else "NOT AVAILABLE")

Run the file and then the output should be something like this:

Version:  2.0.0
Eager mode:  True
Hub Version:  0.7.0
GPU is available

Hope this helps


ModuleNotFoundError: No module named 'tensorflow_hub'
ModuleNotFoundError: No module named 'tensorflow_datasets'
Try to install the modules separately: pip install tensorflow_hub and pip install tensorflow_datasets
yup -> just trying to be helpful in completeness of your answer.
Oh I do not remember having to install those separately. Thanks!
shivam13juna

What worked for me, though, is this library: https://pypi.org/project/silence-tensorflow/

Install this library and do as instructed on the page; it works like a charm!
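
If I remember the package's interface correctly (an assumption; double-check the PyPI page), usage boils down to a single call made before TensorFlow is imported:

from silence_tensorflow import silence_tensorflow
silence_tensorflow()  # must run before importing tensorflow

import tensorflow as tf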


arunppsg

Try using Anaconda. I had the same error. One option was to build tensorflow from source, which took a long time. I tried using conda instead and it worked.

Create a new environment in Anaconda, then run: conda install -c conda-forge tensorflow

Then, it worked.


CommandNotFoundError: No command 'conda conda-forge'. - So, I followed this: conda-forge.org. But, anyway, I got this: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA
Martijn Pieters

The list of download packages was provided once but has since been deleted by someone; see the answer "Download packages list".

Output:

F:\temp\Python>python test_tf_logics_.py
[0, 0, 26, 12, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0]
[ 0  0  0 26 12  0  0  0  2  0  0  0  0  0  0  0  0]
[ 0  0 26 12  0  0  0  2  0  0  0  0  0  0  0  0  0]
2022-03-23 15:47:05.516025: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-03-23 15:47:06.161476: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 10 MB memory:  -> device: 0, name: NVIDIA GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1
[0 0 2 0 0 0 0 7 0 0 0 0 0 0 0 0 0]
...

https://i.stack.imgur.com/arS48.png


Matias Molinas

As the message says, your CPU supports instructions that the TensorFlow binary was not compiled to use. This is only a warning, not an error: the default CPU build of TensorFlow simply does not use AVX (Advanced Vector Extensions) instructions, even though your CPU supports them, so you can safely ignore it. If you want to take advantage of them, you can compile your own version of TensorFlow with AVX instructions enabled.


Yes, you can safely ignore it if you don't care about TensorFlow running slower than it could. The reason for the warning is that most people would rather take full advantage of the CPU hardware they're using.
