Could you please elaborate on the distinction that you see between "artificial" intelligence and whatever it is that we as humans possess? Furthermore, what specific aspects of this intelligence are unachievable by an AI? Is it a "human intelligence is non-computational" line of thinking?
Machines are not alive; they are constructed. For them to develop intelligence, that capacity would either need to be constructed as well (how?) or it would need to appear as an 'emergent quality'. I think the latter is the line that believers in the concept of 'AI' mostly take, but I see it as magical thinking: we have had no indication of such emergent behaviour in our experience with the machines we have constructed, nor, as far as I can see, are there any good reasons to hope or expect it to appear. I see it only as part of the long history of humans and human cultures projecting their own intelligence and agency onto inanimate objects. Again, 'magical thinking'.
I acknowledge, and am mostly fine with, the idea that machines can 'learn'. But they learn (the game of Go, navigating a car in the real world, etc.) under our direction and training, even if they may go on to surpass our abilities at these tasks. They have no agency; they have no curiosity; they have no 'spirit of consciousness'; they are not intelligent. They have simply been trained to perform a task. It is a great mistake to confuse this with intelligence. And the field itself is acknowledging this mistake as it matures, with the ongoing shift in nomenclature from 'artificial intelligence' to 'machine learning'.