
How useful is Turing completeness? Are neural nets Turing complete?

While reading some papers about the Turing completeness of recurrent neural nets (for example: Turing computability with neural nets, Hava T. Siegelmann and Eduardo D. Sontag, 1991), I got the feeling that the proofs given there were not really that practical. For example, the referenced paper needs a neural network whose neuron activity must have infinite exactness (to reliably represent any rational number). Other proofs need a neural network of infinite size. Clearly, that is not really practical.

But now I started to wonder whether it makes sense at all to ask for Turing completeness. By the strict definition, no computer system today is Turing complete, because none of them is able to simulate the infinite tape.

Interestingly, programming language specifications most often leave it open whether they are Turing complete or not. It all boils down to the question whether they will always be able to allocate more memory and whether the function call stack size is infinite. Most specifications don't really specify this. Of course, all available implementations are limited here, so all practical implementations of programming languages are not Turing complete.

So, what you can say is that all computer systems are only as powerful as finite state machines and no more.

And that brings me to the question: How useful is the term Turing complete at all?

And back to neural nets: any practical implementation of a neural net (including our own brain) will not be able to represent an infinite number of states, i.e. by the strict definition of Turing completeness, it is not Turing complete. So does the question whether neural nets are Turing complete make sense at all?

The question whether they are as powerful as finite state machines was already answered much earlier (in 1954, by Minsky; the answer, of course: yes) and also seems easier to answer. I.e., at least in theory, that was already the proof that they are as powerful as any computer.

Some other questions which are more about what I really want to know:

Is there any theoretical term which can say something more specific about the computational power of a computer, given its limited memory space?

How can you compare the computational power of practical implementations of neural nets with that of computers? (Turing completeness is not useful here, as argued above.)

I’m voting to close this question because it's not a programming issue.

metaliving

The point of stating that a mathematical model is Turing complete is to reveal the capability of the model to perform any calculation, given a sufficient amount of resources (i.e. infinite), not to show whether a specific implementation of the model actually has those resources. Non-Turing-complete models would not be able to handle a specific set of calculations, even with enough resources, which reveals a difference in the way the two kinds of models operate, even when they have limited resources. Of course, to prove this property, you do have to assume that the models are able to use an infinite amount of resources, but this property of a model is relevant even when resources are limited.


+1 like all things in math involving infinity, it may not be physically realizable, but that does not make it a physically useless concept.
I'm still a bit curious about what "given a sufficient amount of resources" exactly means (in a mathematical sense) and also what it would mean for a neural network (adding more neurons to it is not really comparable to adding more memory to a computer).
Well, in feedforward multi-layer neural networks, you can think of adding more neurons to the network as improving on the precision of the network. In other words, the precision of the network at a given problem is (non-linearly) correlated with the number of neurons and layers in the network. This comes in contrast with a traditional algorithm, where there is no precision issue: there is a certain threshold in the amount of resources, above which the problem can be solved, under which solving the problem with the given algorithm is impossible.
I don't really have feedforward networks in mind. Those are clearly (under any relaxed interpretation) not Turing complete. You need a recurrent network to get the Turing computational power.
Ya and don't forget, your brain is not Turing complete. It doesn't make much sense to "give your brain more resources". Or rather, the "capability" and the "resources" can't be separated for the brain, since the capability is in the computational units themselves, so adding more of them changes the "computer" itself.
mikera

Regular feedforward neural networks are not Turing complete. They are, in effect, equivalent to a single complicated mathematical function that may do quite a lot of calculation but doesn't have any ability to perform looping or other control flow operations.

However, if you wire up a neural network with some way to access a stateful environment, then it can be made into a Turing complete machine.

As a most trivial example, you could recreate a classic-style Turing machine where:

the input to the neural network is the value on the tape and the previous state

the output of the neural network is the next state and the action

You could then train the neural network to emulate the actions of any desired Turing machine state table / configuration (perhaps by supervised learning on the actions of another Turing machine?).

Note: The idea of running a feedforward net repeatedly with some form of state feedback is essentially equivalent to a recurrent neural network. So you can think of a recurrent neural network plus the logic that runs it repeatedly as being Turing complete. You need the extra logic (over and above the network itself) to ensure Turing completeness because it is necessary to handle things like termination, repetition and IO.
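A minimal sketch of this idea in Python (hypothetical: the trained network is replaced by a plain lookup table, since for a fixed machine it only has to realize a finite function; the surrounding loop supplies the external tape and the repetition that the network itself does not provide):

```python
from collections import defaultdict

# Stand-in for a trained network: for a fixed machine it only has to realize a
# finite function (state, symbol) -> (next_state, symbol_to_write, head_move).
def network_step(state, symbol, table):
    return table[(state, symbol)]

def run(table, start_state, halt_state, tape_input):
    tape = defaultdict(int, enumerate(tape_input))   # external, unbounded tape
    head, state = 0, start_state
    while state != halt_state:                       # the "extra logic" around the net
        state, write, move = network_step(state, tape[head], table)
        tape[head] = write
        head += move                                 # +1 = right, -1 = left
    return [tape[i] for i in sorted(tape)]

# Toy machine: flip 1s to 0s while moving right, halt on the first 0.
table = {
    ("scan", 1): ("scan", 0, +1),
    ("scan", 0): ("halt", 0, +1),
}
print(run(table, "scan", "halt", [1, 1, 1, 0]))      # -> [0, 0, 0, 0]
```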


I fully agree about feed-forward nets but I was more interested about recurrent nets.
@Albert - that's the point I was making! A recurrent neural net is, after all, equivalent to a feed-forward net run repeatedly in a stateful environment with some form of feedback :-) I'll try to make the answer more clear though.....
If you mean an external memory by "access a stateful environment", you end up with a model like the Neural Turing Machine (NTM) or Differentiable Neural Computer (DNC), which are in fact stated to be as powerful as a Turing machine.
Kenneth Cochran

When modern computers are said to be Turing complete, there is an unspoken exception for the infinite storage device Turing described, which is obviously an impossibility on a finite physical computation device. If a computation device can do everything a Turing machine can do (infinite storage notwithstanding), it is Turing complete for all practical intents and purposes. By this less strict definition of Turing completeness, yes, it's possible that many neural networks are Turing complete.

It is of course possible to create one that is not Turing complete.


Hans-Peter Stricker

Turing-completeness of recurrent neural networks could mean: the (finite) transition tables of each and every Turing machine (with a finite-state head and an infinite tape) can be modelled by a finite recurrent neural network (finitely many neurons with finitely many states, in particular only two states). The transition tables define three functions:

next-state(current-state,current-symbol)

next-symbol(current-state, current-symbol)

direction(current-state,current-symbol)

This is how a recurrent neural network may perform this task (just a very raw sketch):

https://i.stack.imgur.com/MFTid.png

The green neurons read the symbol in the current cell (in binary representation), the gray neurons (initially mute) determine the current state, the red neurons write the new symbol to the current cell, the yellow neurons determine whether to go left or right. The blue neurons are the inner neurons (initially mute).

The claim is, that for each and every Turing machine there is such a recurrent neural network.

I wonder if there is a systematic way to construct such a network from given transition tables.
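One naive but systematic way to turn a given transition table into such a network step is to one-hot encode the (state, symbol) pairs; the whole table then fits into a single 0/1 weight matrix, and one matrix-vector product computes next-state, next-symbol and direction at once. A small sketch of that idea (the example table is made up; this is only the per-step lookup, not the full recurrent wiring around the tape):

```python
import numpy as np

# Hypothetical example: states {A, B}, symbols {0, 1}; the transition table is
# (state, symbol) -> (next_state, next_symbol, direction).
states, symbols = ["A", "B"], [0, 1]
table = {
    ("A", 0): ("B", 1, "R"),
    ("A", 1): ("A", 0, "L"),
    ("B", 0): ("A", 1, "R"),
    ("B", 1): ("B", 0, "R"),
}

n_in = len(states) * len(symbols)           # one-hot (state, symbol) pairs
n_out = len(states) + len(symbols) + 2      # next-state | next-symbol | direction (L/R)
W = np.zeros((n_out, n_in))

for (s, c), (ns, nc, d) in table.items():
    col = states.index(s) * len(symbols) + symbols.index(c)
    W[states.index(ns), col] = 1                              # next-state block
    W[len(states) + symbols.index(nc), col] = 1               # next-symbol block
    W[len(states) + len(symbols) + "LR".index(d), col] = 1    # direction block

def step(state, symbol):
    x = np.zeros(n_in)
    x[states.index(state) * len(symbols) + symbols.index(symbol)] = 1
    y = W @ x                                                 # one linear "layer"
    next_state = states[int(np.argmax(y[:len(states)]))]
    next_symbol = symbols[int(np.argmax(y[len(states):len(states) + len(symbols)]))]
    direction = "LR"[int(np.argmax(y[-2:]))]
    return next_state, next_symbol, direction

print(step("A", 0))   # -> ('B', 1, 'R')
```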


An often cited paper is On the computational power of neural nets, 1992, which does such an RNN construction. Note, however, that this assumes rational numbers, i.e. the infinite memory is encoded as rational numbers, or floating point numbers of infinite precision. See also here. This is not realistic in practice. That is why Neural Turing Machines (NTM) or Differentiable Neural Computers (DNC) are different. And the human brain as well.
In your construction, you assume that there is an external infinite tape, which is not part of your NN. So yes, a NN + external infinite tape is Turing complete. However, my question was about just a NN, without anything external.
But something must be infinite: if not the tape, then the NN. And if not the number of neurons, then the number of states.
That's the point. The NN (as-is) is finite. Thus it is in fact not Turing complete, which was my original question. If you add something to it, of course you can make it TM-complete, but that is true for anything, right? I wrote a bit about this also now in an own answer to the question.
@Albert yes, in that sense, NN is not Turing complete, and neither is anything else in this universe, including my laptop. But NN and my laptop is Turing complete in a restricted sense that given infinite storage, both are Turing complete. Note that a calculator with infinite storage is not Turing complete even in the restricted sense. When people talk about Turing completeness of physical devices, they omit the word 'restricted', hoping that it is understood under the context.
Albert

After many years, let me give an answer to this question myself.

Turing completeness

As powerful as a Turing machine (TM).

Requires infinite memory. I.e., in practice, no physical device can ever be Turing complete.

Consider a normal personal computer (PC): a specific physical instance is not Turing complete, as it has finite memory. However, consider the concept of a PC as something where you could add memory on demand. All programs will still work in the same way when you add more memory, and for any given input and any given program, there is a maximum amount of memory such that it will work. (Let's not be pedantic now about the int64 memory address limit or things like that; these are technical limits, which could be solved, e.g. by big ints.) Thus, we can say that the concept of a PC is Turing complete. This is also called an abstract machine.

So, differentiate between a specific machine or device (PC, RNN, human brain) and a programming language or abstract machine for a programming language. A programming language usually does not have a restriction on the amount of memory it can use, thus programming languages can be Turing complete. See the official C language standard definition, which uses an abstract machine to define the semantics.

The main question is: which of those specific classes (PC, RNN, human brain) allows us to formulate abstract machine variants, and are those abstract machine variants Turing complete?


Turing completeness is really mostly about the memory concept. Consider any finite state machine/automaton (FSA) with some access to external memory. Depending on the type of memory, you end up in different classes of the Chomsky hierarchy:

no memory / finite memory: regular languages

single stack: pushdown automata, context-free languages

two or more stacks: Turing complete, recursively enumerable languages (see the sketch below for why two stacks are enough)
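The "two or more stacks" case works because two stacks can simulate an unbounded tape: one stack holds everything to the left of the head, the other holds the head cell and everything to its right. A minimal sketch of that equivalence (hypothetical helper class):

```python
class TwoStackTape:
    """Unbounded tape simulated with two stacks: `left` holds the cells to the
    left of the head (top = nearest), `right` holds the head cell and everything
    to its right (top = head cell)."""

    def __init__(self, blank=0):
        self.left, self.right, self.blank = [], [], blank

    def read(self):
        return self.right[-1] if self.right else self.blank

    def write(self, symbol):
        if self.right:
            self.right[-1] = symbol
        else:
            self.right.append(symbol)

    def move_right(self):
        self.left.append(self.right.pop() if self.right else self.blank)

    def move_left(self):
        self.right.append(self.left.pop() if self.left else self.blank)

tape = TwoStackTape()
tape.write(1); tape.move_right(); tape.write(1); tape.move_left()
print(tape.read())   # -> 1 (back over the first cell we wrote)
```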

Recurrent neural networks (RNN)

On the computational power of neural nets

An often cited paper is On the computational power of neural nets, Siegelmann & Sontag, 1992, which states that RNNs are Turing complete. This paper assumes rational numbers without limits on the numerator/denominator, i.e. the infinite memory is encoded as rational numbers, or floating point numbers of infinite precision. See also here. Usually we do not model a NN in a way that operates with unbounded rational numbers. When we restrict ourselves to (R)NNs with finite precision weights and activations, the proof in the paper falls apart and does not apply anymore. Thus, this paper is not so relevant.
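To make the "memory encoded in a rational number" idea concrete, here is a rough sketch (not the exact base-4 construction from the paper) of how a binary stack can be packed into a single rational in [0, 1): push and pop become simple affine updates plus one threshold comparison, which is the kind of operation such an RNN with unbounded-precision activations could realize.

```python
from fractions import Fraction

# A binary stack packed into one rational q in [0, 1): the top bit sits in the
# most significant position. push is an affine map; pop needs one threshold
# comparison (the paper's construction handles this with saturated-linear units).
def push(q, bit):
    return q / 2 + Fraction(bit, 2)          # q' = q/2 + bit/2

def pop(q):
    bit = 1 if q >= Fraction(1, 2) else 0    # read the top bit
    return bit, 2 * q - bit                  # q' = 2q - bit

q = Fraction(0)
for b in [1, 0, 1]:                          # push 1, then 0, then 1
    q = push(q, b)
print(q)                                     # -> 5/8: the whole stack in one number
bit, q = pop(q)
print(bit, q)                                # -> 1 1/4: last pushed bit comes back first
```

With finite-precision activations, only finitely many pushes fit into the number before information is lost, which is exactly why the construction stops working in practice.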

There is a more recent paper, On the Practical Computational Power of Finite Precision RNNs for Language Recognition, Weiss et al, 2018, which exactly addresses this.

Universal approximator

It is well known that most standard NNs are universal approximators. This means that, given any function (nonconstant, bounded, and continuous) and some allowed threshold, you can construct a NN which approximates this function within the allowed threshold. This is about finite dimensional vector spaces. When we speak about computability, we speak about sequences, thus we have an infinite dimensional vector space. Thus this property is not so relevant.
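As a toy illustration of universal approximation on a finite interval (hypothetical setup: random tanh features with output weights fitted by least squares, which is not how such a net would normally be trained, but keeps the sketch short):

```python
import numpy as np

# Random tanh features + least-squares output weights, fitting sin on [-3, 3].
rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 200)[:, None]
target = np.sin(x).ravel()

hidden = np.tanh(x @ rng.normal(size=(1, 50)) + rng.normal(size=50))  # 50 hidden units
w, *_ = np.linalg.lstsq(hidden, target, rcond=None)                   # fit output layer
print(np.max(np.abs(hidden @ w - target)))   # small error, but only on this finite interval
```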

Without external memory

The question is how to define the concept of a standard RNN. The paper mentioned above assumes infinite precision in the activations. But I argue this is not a reasonable concept of an RNN, as you never have this. And with the precision being limited, there is no other way a concept of a standard RNN could have infinite memory.

Thus, to state it explicitly: without external memory, the standard RNN, and also the LSTM, is not Turing complete. There is also no straightforward way to define a concept of an RNN where you could add memory on demand. The memory of an RNN is its most recent hidden activations. Adding more memory means changing the NN, i.e. adding new neurons, and thus changing its internal workings. This is like changing the program itself.

With external memory

There is the Neural Turing machine (NTM) and a few similar models, which have some sort of external memory. Here it is straight-forward to think about a concept of a NTM where you would add memory on demand. Thus we can say that the concept of a NTM is Turing complete.

There are some details, like the type of attention used in the heads, which need some adaptation. There is a follow-up to the model, the Differentiable Neural Computer (DNC), which explicitly addresses this and also has an explicit mechanism to add memory.
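A rough sketch of the external-memory idea (only the content-based read step, far from the full NTM/DNC machinery): the controller emits a key, attention weights over the memory rows come from a softmax over cosine similarities, and the read vector is the weighted sum. Because the memory matrix lives outside the controller, more rows can in principle be added without changing the network itself.

```python
import numpy as np

def content_read(memory, key, beta=1.0):
    """NTM-style content-based read: `memory` is an (N, M) matrix of N slots,
    `key` is an (M,) query emitted by the controller, `beta` is the key strength."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    similarity = memory @ key / norms        # cosine similarity per slot
    weights = np.exp(beta * similarity)
    weights /= weights.sum()                 # softmax attention over slots
    return weights @ memory                  # read vector = weighted sum of slots

memory = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
print(content_read(memory, np.array([1.0, 0.1]), beta=5.0))
```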

Learning / training

We spoke mostly about the theoretical computation power. A very different question is whether the NN can learn such a function, i.e. whether training (gradient search) leads to a NN which has learned a computable function.

Human brain

We might interpret the human brain (or any brain) as a kind of complex neural network. We can also ask whether the human brain (model) is Turing complete. See e.g. here. This question is a tricky one. Intuition would say that we are able to perform any kind of computation, thus the human brain is Turing complete. However, the arguments outlined here show that an RNN is not Turing complete. What is important is again the memory effect: at some point, the memory capacity of a human brain is not enough to operate on some input, so we would need external memory. So, the human brain together with external memory is Turing complete, obviously. But this is maybe not the interesting question (and it also assumes that a human brain could be as large as needed to encode any Turing machine program, i.e. that there is no upper limit on the size of a human brain, which is not really true). More interesting is the question of just the human brain itself, without any external memory. Or basically: how do we define the abstract machine or the concept of a human brain? Does this concept allow for infinite memory? Is it straightforward to define?

There is one aspect of the memory in a human brain which is a bit different from a standard RNN: it can generalize to a high degree, and the addressing mechanism used to access certain activations is different. Also, it has some kind of adaptive weights (which, however, can also only store a finite amount of information). Considering this, it is actually already more similar to an NTM than to a plain RNN. There are many different kinds of memory in the human brain. Some of them are just as static as in an RNN (like the current activations), but other types allow for associative memory like an NTM. And thus the human brain is also similar to a PC, which also has finite memory, whereas the concept of a PC has infinite memory.

So maybe we can say that a concept of a human brain has infinite memory as well, due to the associative memory, and also can be as large as needed to encode any program, and thus the concept of a human brain is Turing complete. Maybe this should be called an abstract human brain (instance of an abstract machine).


Thank you for your answer. Taking off from your last section, I would also add that the finite amount of matter in the Universe prohibits Turing completeness of any machine. In order to add more memory, one would need to add more entropy, which would result in addition of more material in form of atoms or even elementary particles, which is impossible since there are approximately 10^80 atoms in the Universe.
"I.e. in practice, no physical device ever can be Turing complete." " the finite amount of matter in the Universe prohibits Turing completeness of any machine" - No, that's a conjecture without a proof. You can create infinite sequences in functional languages (Lambda Calculus is equivalent to Turing Machines). You could have a TC language that has an infinite number of variables A[1] to A[inf] that runs on your PC ... in all cases they ARE TC, you can reason about anything computable but for some algo's you might run out of resources such as memory or time.
@wentbackward A functional language is not a physical device.
@Albert the physical device doesn't actually need infinite memory to be turing complete, only some algorithm implementations might. The point I wanted to make is that lambda calculus is mathematically equivalent to a turing mahcine but does not need infinite memory.
@wentbackward No, every Turing complete machine needs infinite memory. Turing complete implies that you can simulate a Turing machine. A Turing machine itself has infinite memory.
Gabe Moothart

To partially address your second question:

Neural Networks have the property of being universal approximators - that is, they can approximate any function to an arbitrary degree of precision. It's the "degree of precision" part that keeps the Neural Network from needing to be infinite.


Universal approximators of functions over finite dimensional vector spaces. When you speak about computability, you speak about sequences, thus you have an infinite dimensional vector space. Thus this property is not so relevant.
Sanjay Manohar

I think the concept of Turing completeness is not intended to tell us whether a particular computer can perform a particular task.

Rather, its purpose is to tell whether a particular language is capable of expressing a particular task. That is, I'd say it's really about expressing an algorithm, not performing it.

As neural networks don't have a language, it's a question of expressing an algorithm in terms of a neural network, rather than the capability of that network. So I don't know the answer to the last bit of your question!


Niki

I think an important point about the Turing machine is that for any given input and program, the machine will only need a finite amount of tape, assuming it halts at some point. That's why I would say the term "Turing complete" is useful: you only need finite memory to run one specific Turing complete program on some specific input (if the program halts). But if you have a non-Turing-complete machine/language/technology, it won't be able to simulate certain algorithms, no matter how much memory you add.


+1 I was just in the process of writing almost exactly the same answer when yours came in.
Anonymous

It is almost always nice to know which class of the Chomsky hierarchy your system is in. This is especially important for the more restricted classes, like regular languages / finite automata vs. context-free languages. Having the skill to recognize which class the problem you are trying to solve is in is also important; otherwise one might try to do silly things like parsing HTML or XML with regular expressions only, which is impossible.

Knowing that your formalism or system is Turing complete makes a statement that you can build whatever you want with it. It does not say anything about practicality, just the possibility or impossibility of solving problems. This is painfully true when considering Turing tarpits, but there are also many Turing complete systems which are specifically made for niche purposes that no one should ever dream of using for general-purpose work in a production setting.

In short, good knowledge of the Chomsky hierarchy will help you out in very many situations, not only for choosing the right type of parser; regexp, pushdown, CFG or more powerful, but also for choosing the right type of machine or formalism for expressing processes in general.


user204724

Basically it means that with a programming language or architecture which is Turing complete, you can execute a wide variety of algorithms... mostly, any kind of them.

Non-Turing-complete languages are far more limited in what they can express.


3xCh1_23

The answer is very simple. If you can emulate a NOR or a NAND gate with it, then it is Turing Complete, assuming that the rest is just a matter of combining things together.
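For what it's worth, a single threshold neuron does compute NAND; this is the gate-level part of the argument, while the "combining things together" part is where looping and unbounded memory have to come from. A minimal sketch:

```python
def nand_neuron(x1, x2):
    # One threshold unit: weights -2, -2 and bias 3; it fires unless both inputs are 1.
    return 1 if (-2 * x1 - 2 * x2 + 3) > 0 else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, nand_neuron(a, b))   # NAND truth table: 1, 1, 1, 0
```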