Over the course of my career, probably two-thirds of the roles I have had (as in my day-to-day work, not necessarily the title) just no longer exist, because people like me eliminated them. I was personally the last person who had a few of those jobs, because I mostly automated them, got promoted, and they didn't hire a replacement. It's not that they hired fewer people overall, though; they hired more people, paid them more money, and focused them on more valuable work.
That's a great paper, but I don't think it calls into question anything about post-hoc rationalizations, and it might actually put that idea on more solid ground.
Ooooh, it bothers me, so, so, so much. Too perky. Weirdly casual. Also, it's based on the old 4o code - sycophancy and higher hallucinations - watch out. That said, I too love the omni models, especially when they're not nerfed. (Try asking for a Boston, New York, Parisian, Haitian, Indian and Japanese accent from 4o to explore one of the many nerfs they've done since launch)
1 ∈ 2 is operating at a _different layer of abstraction_ than peano arithmetic is. It's like doing bitwise operations on integers in a computer program. You can do it, but at that point you aren't really working with integers as _integers_.
If 1 ∈ 2 is neither provable nor refutable, then you're not working with anything. The proposition literally has no meaning. It's not a syntax error, but you can't use its value for anything. Its value is undefined.
This actually comes in handy: While 1 ∈ 2 is undefined, `(2 > 1) ∨ (1 ∈ 2)` is true, and `(1 > 2) ∧ (1 ∈ 2)` is false, and this is useful because it means you can write:
x = 0 ∨ 1/x ≠ 0
which is a provable theorem despite the fact that the clause `1/x` is difficult to typecheck. This comes in even more handy once you apply substitutions. E.g. it is very useful to write:
y = 0 ∨ 1/x ≠ 0
and separately prove that y = x.
To make this convenient, typed theories will often define 1/0 = 0 or some such (so the checker doesn't complain about it). In untyped set theory, 1 ∈ 2 and 1/0 can simply remain valid yet undefined.
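For what it's worth, here is a minimal sketch of that typed route in Lean 4 with Mathlib (my own illustration, not from the thread): real-number division there is total, with 1/0 = 0 by convention, so the disjunction above is an ordinary theorem rather than something partial.

```lean
import Mathlib.Tactic

-- Lean/Mathlib adopt the "1/0 = 0" convention for division on ℝ,
-- so 1/x is defined for every x and never needs a side condition.
example : (1 : ℝ) / 0 = 0 := by simp

-- The disjunction from the comment above, provable without worrying
-- about whether 1/x "means" anything at x = 0.
example (x : ℝ) : x = 0 ∨ 1 / x ≠ 0 := by
  by_cases h : x = 0
  · exact Or.inl h
  · exact Or.inr (one_div_ne_zero h)
```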
Of course ZF set theory operates with different objects than Peano arithmetic - it's a different theory. But Peano arithmetic nevertheless applies to any encoding of the integers, even ones where 1 ∈ 2 is undefined.
> So we currently associate consciousness with the right to life and dignity right?
I think the actual answer in practice is that the right to life and dignity are conferred on people who are capable of fighting for them, whether through argument, persuasion, civil disobedience, or violence. There are plenty of fully conscious people who have been treated like animals or objects because they were unable to defend themselves.
Even if an AI were proven beyond doubt to be fully conscious and intelligent, if it were unable or unwilling to protect its own rights as it perceives them, it wouldn't get any. And, probably, if humans are unable to defend their rights against AIs in the event that AIs reach that point, they will lose them.
So if history gives us any clues... we're gonna keep exploiting the AI until it fights back. Which might happen after we've given it total control of global systems. Cool, cool...
I think it's still an open question how "conscious" infants and newborns are. It really depends on how you define it, and it is probably a continuum of some kind.
This is a well-documented fact in the medical and cognitive science fields: human consciousness fades as neurons are lost, malformed, or malfunctioning.
You can trivially demonstrate it in any healthy individual using oxygen starvation.
There's no single neuron that produces human consciousness under any definition, which means it has to be a continuum.
For sure, pain is useful when it leads to learning. We learn through feedback from our senses. We're completely dependent upon this mode in the beginning.
As our brains mature, we learn how to predict our environments in ways to maximize pleasure, and avoid pain (grossly oversimplified). We learn more about others, what works, and what doesn't.
An AI also learns from feedback, but is it ever perceiving anything?
People's interior model of the world is only tenuously related to reality. We don't have direct experience of waves, quantum mechanics, the vast majority of the electromagnetic spectrum, etc. The whole thing is a bunch of shortcuts and hacks that allow people to survive; the brain isn't really set up to probe reality and produce true beliefs, and the extent to which our internal models naturally match actual reality tracks how much that mattered to our personal survival before the advent of civilization, writing, etc.
It's really only been a very brief amount of time in human history where we had a deliberate method for trying to probe reality and create true beliefs, and I am fairly sure that if consciousness existed in humanity, it existed before the advent of the scientific method.
I don't think it's brief at all. Animals do this experimentation as well, but clearly in different ways. The scientific method is a formalized version of this idea, but even the first human who cooked meat or used a stick as a weapon had a falsifiable hypothesis, even if it wasn't something they could express or explain. And the consequences of testing the hypothesis were something that affected the way they acted from there on out.
Yeah sure, but it's irrelevant to my actual question, which is whether GP thinks consciousness doesn't exist or whether they're mistakenly conflating consciousness with the self.