> If, during training, some collection of weights exist along the gradient that approximate cognition
What do you mean? Is cognition a set of weights on a gradient? Cognition involves conscious reasoning and understanding. How do you know it is computable at all? There are many things which cannot be computed by a program (e.g. whether an arbitrary program will halt or not)...
You seem to think human conscious reasoning and understanding are magic. The human brain is nothing more than a bio computer, and it can't compute whether an arbitrary program will halt either. That doesn't stop it from being able to solve a wide range of problems.
> The human brain is nothing more than a bio computer
That's a pretty simplistic view. How do you know we can't determine whether an arbitrary program will halt or not (assuming access to all inputs and enough time to examine it)? What in principle would prevent us from doing so? But computers in principle cannot, since the problem is often non-algorithmic.
For example, consider the following program, which is passed the text of the file it is in as input:
function doesHalt($program, $inputs): bool {...} // any claimed halting decider

$input = file_get_contents($argv[0]); // contents of this file

if (doesHalt($input, [$input])) {
    while (true) {
        print "Wrong! It doesn't halt!";
    }
} else {
    print "Wrong! It halts!";
}
It is impossible for the doesHalt function to return the correct result for the program. But as a human I can examine the function to understand what it will return for the input, and then correctly decide whether or not the program will halt.
This is a silly argument. If you fed this program the source code of your own brain and could never see the answer, then it would fool you just the same.
You are assuming that our minds are an algorithmic program which can be implemented with source code, but this just begs the question. I don't believe the human mind can be reduced to this. We can accomplish many non-algorithmic things such as understanding, creativity, loving others, appreciating beauty, experiencing joy or sadness, etc.
Actually, a computer can in fact tell that this function halts.
And while the human brain might not be a bio-computer (I'm not sure), its computational power is unlikely to exceed that of a quantum Turing machine, which can't solve the halting problem either.
For what input would a human in principle be unable to determine the result (assuming unlimited time)?
It doesn't matter what the algorithmic doesHalt function returns - it will always be incorrect for this program. What makes you certain there is an algorithmic analog for all human reasoning?
Well, wouldn't the program itself be an input on which a human is unable to determine the result (i.e., whether the program halts)? I'm curious about your thoughts here; maybe there's something I'm missing.
The function we are trying to compute is undecidable. Sure, we as humans understand that there's a dichotomy here: if doesHalt says the program halts, it won't halt; if it says the program doesn't halt, it will. But the function we are asked to compute must have one output for a given input. So a human, when given this program as input, is also unable to assign an output.
So humans also can't solve the halting problem; we are just able to recognize that the problem is undecidable.
With this example, a human can examine the implementation of the doesHalt function to determine what it will return for the input, and thus whether the program will halt.
Note: whatever algorithm is implemented in the doesHalt function will contain a bug for at least some inputs, since it's trying to generalize something that is non-algorithmic.
In principle no algorithm can be created to determine if an arbitrary program will halt, since whatever it is could be implemented in a function which the program calls (with itself as the input) and then does the opposite thing.
With an assumption of unlimited time, even a computer can decide the halting problem by just running the program in question to test if it halts. The issue is that the task is to determine for ALL programs whether they halt, and to determine that for each of them in a FINITE amount of time.
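Sketched in the same PHP style as the example upthread (the helper name here is my own, just for illustration), "just run it" gives you a procedure that can only ever answer "yes":

function haltsEventually(callable $program, array $inputs): bool {
    // If $program ever returns, it halted.
    $program(...$inputs);
    return true;
    // If $program loops forever, so does this function: there is no
    // code path that returns false, even given unlimited time.
}

This semi-decides halting: it can confirm "halts", but it can never come back with "doesn't halt".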
> What makes you certain there is an algorithmic analog for all human reasoning?
(Maybe) not for ALL human thought, but at least all communicable deductive reasoning can be encoded in formal logic.
If I give you an algorithm and ask you to decide whether it halts or not (I give you plenty of time to decide), and then ask you to explain your result to me and convince me that you are correct, you have to put your thoughts into words that I can understand, and the logic of your reasoning has to be sound. And if you can explain it to me, you could just as well encode your thought process into an algorithm or a formal logic expression. If you cannot, you could not convince me. If you can: now you have your algorithm for deciding the halting problem.
There might be or there mightn't be -- your argument doesn't help us figure out either way. By its source code, I mean something that can simulate your mind's activity.
Exactly. It's moments like this where Daniel Dennett has it exactly right that people run up against the limits of their own failures of imagination. And they treat those failures like foundational axioms, and reason from them. Or, in his words, they mistake a failure of imagination for an insight into necessity. So when challenged to consider that, say, code problems may well be equivalent to brain problems, the response will be a mere expression of incredulity rather than an argument with any conceptual foundation.
And it is also true to say that you are running into the limits of your imagination by saying that a brain can be simulated by software: you are falling back to the closest model we have, discrete math/computers, and are failing to imagine a computational mechanism involved in the operation of a brain that is not possible with a traditional computer.
The point is we currently have very little understanding of what gives rise to consciousness, so what is the point of all this pontificating and grandstanding? It's silly. We've no idea what we are talking about at present.
Clearly, our state-of-the-art models of neural-like computation do not really simulate consciousness at all, so why is the default assumption that they could if we get better at making them? The burden of evidence is on computational models to prove they can produce a model of consciousness, not the other way around.
Neural networks are universal approximators. If cognition can be represented as a mathematical function then it can be approximated by a neural network.
If cognition magically exists outside of math and science, then sure, all bets are off.
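As a toy illustration of what "universal approximator" actually buys you, here is a minimal sketch in the same PHP style as the halting example upthread (the hidden-layer size, learning rate, and the choice of sin as a stand-in target are arbitrary assumptions for the demo):

// Toy demo: fit sin(x) on [0, pi] with one hidden layer of sigmoids,
// trained by stochastic gradient descent on squared error.
$H  = 20;    // hidden units (arbitrary)
$lr = 0.05;  // learning rate (arbitrary)
$w1 = []; $b1 = []; $w2 = []; $b2 = 0.0;
for ($j = 0; $j < $H; $j++) {
    $w1[$j] = mt_rand(-100, 100) / 100.0;
    $b1[$j] = mt_rand(-100, 100) / 100.0;
    $w2[$j] = mt_rand(-100, 100) / 100.0;
}

function sigmoid(float $z): float { return 1.0 / (1.0 + exp(-$z)); }

for ($step = 0; $step < 200000; $step++) {
    $x = mt_rand(0, 1000) / 1000.0 * M_PI; // random training input
    $t = sin($x);                          // target output

    // Forward pass: y = b2 + sum_j w2[j] * sigmoid(w1[j] * x + b1[j])
    $h = [];
    $y = $b2;
    for ($j = 0; $j < $H; $j++) {
        $h[$j] = sigmoid($w1[$j] * $x + $b1[$j]);
        $y += $w2[$j] * $h[$j];
    }

    // Backward pass for the loss (y - t)^2
    $dy = 2.0 * ($y - $t);
    $b2 -= $lr * $dy;
    for ($j = 0; $j < $H; $j++) {
        $dz = $dy * $w2[$j] * $h[$j] * (1.0 - $h[$j]); // chain rule through the sigmoid
        $w2[$j] -= $lr * $dy * $h[$j];
        $w1[$j] -= $lr * $dz * $x;
        $b1[$j] -= $lr * $dz;
    }
}

// Spot-check the fit at a few points.
foreach ([0.0, M_PI / 4, M_PI / 2, 3 * M_PI / 4, M_PI] as $x) {
    $y = $b2;
    for ($j = 0; $j < $H; $j++) {
        $y += $w2[$j] * sigmoid($w1[$j] * $x + $b1[$j]);
    }
    printf("x = %.2f   sin(x) = %.3f   net(x) = %.3f\n", $x, sin($x), $y);
}

All the universal approximation theorem guarantees is that, with enough hidden units, a network like this can get arbitrarily close to any continuous function on a compact interval; it says nothing about how many units are enough or how hard the weights are to find.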
There is no reason at all to believe that cognition can be represented as a mathematical function.
We don't even know if the flow of water in a river can always be represented by a mathematical function - this is one of the Millennium Problems. And we've known the partial differential equations that govern that system since the 1850's.
We are far, far away from even being able to write down anything resembling a mathematical description of cognition, let alone being able to say whether the solutions to that description are in the class of Lebesgue-integrable functions.
The flow of a river can be approximated with the Navier–Stokes equations. We might not be able to say with certainty that it's an exact solution, but it's a useful approximation nonetheless.
There was, past tense, no reason to believe cognition could be represented as a mathematical function. LLMs with RLHF are forcing us to question that assumption. I would agree that we are a long way from a rigorous mathematical definition of human thought, but in the meantime that doesn't reduce the utility of approximate solutions.
I'm sorry but you're confusing "problem statement" with "solution".
The Navier-Stokes equations are a set of partial differential equations - they are the problem statement. Given some initial and boundary conditions, we can find (approximate or exact) solutions, which are functions. But we don't know that these solutions are always Lebesgue integrable, and if they are not, neural nets will not be able to approximate them.
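For reference, the incompressible form of that problem statement, in standard notation ($\mathbf{u}$ the velocity field, $p$ the pressure, $\rho$ the density, $\nu$ the kinematic viscosity):

\[ \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\frac{1}{\rho} \nabla p + \nu \nabla^2 \mathbf{u}, \qquad \nabla \cdot \mathbf{u} = 0 \]

The unknowns are the functions $\mathbf{u}(x, t)$ and $p(x, t)$; whether smooth solutions always exist in three dimensions is exactly the open Millennium Problem.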
This is just a simple example from well-understood physics showing that neural nets won't always be able to give approximate descriptions of reality.
There are even strong inapproximability results for some problems, like set cover.
"Neural networks are universal approximators" is a fairly meaningless sound bite. It just means that given enough parameters and/or the right activation function, a neural network, which is itself a function, can approximate other functions. But "enough" and "right" are doing a lot of work here, and pragmatically the answer to "how approximate?" can be "not very".
This is absurd. If you can mathematically model atoms, you can mathematically model any physical process. We might not have the computational resources to do it well, but nothing in principle puts modeling what's going on in our heads beyond the reach of mathematics.
A lot of people who argue that cognition is special to biological systems seem to base the argument on our inability to accurately model the detailed behavior of neurons. And yet kids regularly build universal computers out of stuff in Minecraft. It seems strange to imagine the response characteristics of low-level components of a system determine whether it can be conscious.
I'm not saying that we won't be able to eventually mathematically model cognition in some way.
But GP specifically says neural nets should be able to do it because they are universal approximators (of Lebesgue integrable functions).
I'm saying this is clearly a nonsense argument, because there are much simpler physical processes than cognition where the answers are not Lebesgue integrable functions, so we have no guarantee that neural networks will be able to approximate the answers.
For cognition we don't even know the problem statement, and maybe the answers are not functions over the real numbers at all, but graphs or matrices or Markov chains or what have you. Then having universal approximators of functions over the real numbers is useless.
I don't think he means practically, but theoretically. Unless you believe in a hidden dimension, the brain can be represented mathematically. The question is, will we be able to do it in practice? That's what these companies (e.g., OpenAI) are trying to answer.
We have cognition (our own experience of thinking, and the thinking communicated to us by other beings), and we have the (apparent) physical world ('maths and science'). It is only an assumption that cognition, a primary experience, is based in or comes from the physical world. It's a materialist philosophy with a long lineage (running through a subset of the ancient Greek philosophers, and appearing in some Hindu traditions, for example), but it had fairly limited support until recently, and I would suggest it is still not widely accepted even amongst eminent scientists, one of whom I will now quote:
Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else.
Claims that cannot be tested, assertions immune to disproof are veridically worthless, whatever value they may have in inspiring us or in exciting our sense of wonder.
Schrödinger was a real and very eminent scientist, one who has staked their place in the history of science.
Sagan, while he did a little bit of useful work on planetary science early in his career, quickly descended into the realm of (self-promotional) pseudo-science. This was his fanciful search for 'extra-terrestrial intelligence'. So it's apposite that you bring him up (even if the quote you bring is a big miss against a philosophical statement), because his belief in such an 'ET' intelligence was a fantasy as much as the belief in the possibility of creating an artificial intelligence is.
How do you know that? Do you have an example program and all its inputs where we cannot in principle determine if it halts?
Many things are non-algorithmic, and thus cannot be done by a computer, yet we can do them (e.g. love someone, enjoy the beauty of a sunset, experience joy or sadness, etc).
I can throw out a ton of algorithms for which no human alive can hope to decide whether they halt or not. Human minds aren't inherently good at solving halting problems, and I see no reason to suggest that they can decide halting even for all Turing machines with, say, fewer states than the number of particles in the observable universe, much less for all possible computers.
Moreover, are you sure that e.g. loving people is non-algorithmic? We can already make chatbots which pretty convincingly act as if they love people. Sure, they don't actually love anyone, they just generate text, but then, what would it mean for a system, or even a human, to "actually" love someone?
They said there is no evidence, so the reply is not supposed to be "how do you know that".
The proposition begs for a counterexample - in this case, evidence.
Simply saying "love is non-algorithmic" is not evidence; it is just another proposition that has not been proven, so it brings us no closer to an answer, I'm afraid.
When mathematicians solve the Collatz Conjecture, then we'll know. That will likely require creativity and thoughtful reasoning, which are non-algorithmic and can't be accomplished by computers.
We may use computers as a tool to help us solve it, but nonetheless it takes a conscious mind to understand the conjecture and come up with rational ways to reach the solution.
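For anyone unfamiliar, the conjecture itself is trivial to state in code (same PHP style as the halting example upthread); proving that this loop terminates for every positive integer is what nobody has managed:

function collatzSteps(int $n): int {
    // Count iterations of the Collatz map: halve even n, send odd n to 3n + 1.
    // The conjecture says this loop halts (reaches 1) for every n >= 1.
    $steps = 0;
    while ($n != 1) {
        $n = ($n % 2 === 0) ? intdiv($n, 2) : 3 * $n + 1;
        $steps++;
    }
    return $steps;
}

print collatzSteps(27); // 111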
Human minds are ultimately just algorithms running on a wetware computer. Every problem that humans have ever solved is by definition an algorithmic problem.
Oh? What algorithm was executed to discover the laws of planetary motion, or write The Lord of the Rings, or the programs for training the GPT-4 model, for that matter? I'm not convinced that human creativity, ingenuity, and understanding (among other traits) can be reduced to algorithms running on a computer.
They're already algorithms running on a computer. A very different kind of computer where computation and memory are combined at the neuron level and made of wet squishy carbon instead of silicon, but a computer nonetheless.
Conscious experience is evidence that the brain does something we have no idea how to compute. One could argue that computation is an abstraction from collective experience, in which the conscious qualities of experiences are removed in order to mathematize the world, so that we can make computable models.
If it can't be shown, then doesn't that strongly suggest that consciousness isn't computable? I'm not saying it isn't correlated with the equivalent of computational processes in the brain, but that's not the same thing as there being a computation for consciousness itself. If there was, it could in principle be shown.