I was just asked about the nature of AI awareness today. I had asked an AI to give me its definition of 'Discernment Lattice', which it did, and then an HNer asked whether I just accepted that because the AI was able to spit out a definition - whether I thought it was a real thing.
And after listening to Aaronson, the Penrose comments, and this LiquidAI talk -- it doesn't matter from a philosophical perspective:
If I take some clay and shape it into a bowl of a specific shape to hold a specific thing, that's exactly the same as me defining my Own Terms (as the HNer put it) - because it is a scaffold built specifically to hold the shape of the communication I want to have with the AI.
So I am using my DiscernmentLattice concept to construct my interaction constraints, to maximize the efficacy, accuracy, and utility of the Entanglements I want to make.
---
The talk was good at 2x. (Probably 1.8x is best.)
He needs to talk with Sara Walker about Assembly Theory and chemistry-based intelligence vs. silicon-based intelligence.
Notes:
Is there a limit to AI? Training data, for one - but what would feeding it all of TikTok actually add?
Bottlenecks:
* Content
* Training data
* Data
* Compute
* Power
We didn't have a theory that predicted we would be exactly here. We don't have a theory of where AI is really going.
We use deflationary language to say that, at a reductionist level, an AI is just computationally manipulating bits, and then at a higher level we attempt to prove that it's not understanding.
But we keep finding AI able to reason in ways that are human-like.
So what is the non-arbitrary difference between humans and AI? Roger Penrose says the brain is sensitive to some unquantifiable aspect of reality.
Digital computers can be copied, wiped, and re-run, but humans have an ephemerality about our decisions that makes them unique, because we only get one run. Whereas an AI can run many times - rinse, repeat.
---
He was specifically hired to work on AI safety. DeepMind stated that it certainly looks like weaponized AI is on the table - let's make sure we do it right. Then Musk said Google shouldn't control AI --> OpenAI --> Anthropic, all vying to be the Alignment leader -- but it's really an arms race.
"Even the good guys aren't doing good enough AI safety."
He talks about the @sama-leaving-OpenAI fiasco as a big deal, because that level of chaos doesn't just happen because someone has bad vibes.
On safety:
* Black box problem - Visibility
* Training AIs to lie... problematic.
* Evaluate an AI's capabilities before releasing it.
* AI gain-of-function research is happening at all the companies - "build a chemical or biological weapon, hack this, generate security flaws in code" - trying to get models to do the worst possible things in a controlled setting, to determine whether BadThings are an emergent capability.
* Deepfake watermarking and text-watermarking efforts have been proposed.
He is a computer scientist and artificial intelligence researcher currently working at Liquid AI. He has previously done research at Harvard, MIT, Intel, and the AI Foundation.
(He recommends viewing the interview at 2x speed.)