Outside of exceptionally rare circumstances, no practical computing problem comes down to lack of storage. Speed is the killer. And Turing universality does nothing to help with speed.
In other words, the current situation is basically equivalent to having perfect infinite-taped Turing machines, in all their ponderous glory.
As far as storage goes, I understand your point, but I disagree with it in practical terms. Every computer we use is 99.9% identical to a universal Turing machine, because the amount of storage available is functionally infinite.
In cases like the article, hard limits exist and can require effort to work around.
It's not possible for a computer with 512 KB of memory to load a program whose working state needs more than that in memory at once, because the memory bound is finite.
Other limits exist as well, such as the number of reads or writes that can take place before hardware decays.
Something like an Arduino Uno couldn't read 2 billion decimals of Pi into memory and back out again without damage occurring.
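To put rough numbers on that (a back-of-envelope sketch; the 2 KB SRAM / 1 KB EEPROM / ~100,000-cycle figures are the commonly quoted ATmega328P numbers, so treat the exact values as assumptions):

    # Rough Arduino Uno (ATmega328P) figures -- assumed, not measured.
    SRAM_BYTES = 2 * 1024            # 2 KB of SRAM
    EEPROM_BYTES = 1024              # 1 KB of EEPROM
    EEPROM_ENDURANCE = 100_000       # rated write/erase cycles per cell
    DIGITS = 2_000_000_000           # 2 billion decimals of Pi, one byte each

    # The data is roughly a million times bigger than SRAM:
    print(DIGITS / SRAM_BYTES)                      # ~976,562x

    # Streaming it all through the EEPROM would rewrite each cell about
    # 2 million times, far past the endurance rating:
    print(DIGITS / EEPROM_BYTES, "vs", EEPROM_ENDURANCE)

So the digits never fit in RAM, and any on-board non-volatile buffer gets worn out long before the job finishes.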
You can glue a microSD card to almost anything. And if you're executing anything in a Turing-machine-like manner, one microSD will probably last you the lifetime of the machine. Even if you're not, for just about any problem it's trivial to divert 1% of the budget into buying more microSD cards.
> Something like an Arduino Uno couldn't read 2 billion decimals of Pi into memory and back out again without damage occurring.
Why would damage occur? I'm confused.
> It's not possible for a computer with 512 KB of memory to load a program whose working state needs more than that in memory at once, because the memory bound is finite.
Since we're comparing to a Turing machine, I think it's fair to demand that all input be in the form of rewritable storage. Let's say no more than half-full to start with. It's pretty tricky to think of a problem like this where using a low-memory algorithm will run out. You can always recompute temporary data instead of storing it.
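To illustrate the recompute-instead-of-store trade with a toy (the function here is made up, nothing from the article): the low-memory version below recomputes every value on a second pass rather than keeping them all around.

    def value(i):
        # Stand-in for some deterministic, cheap-to-recompute "temporary data".
        return (i * i) % 1013

    def count_above_mean_low_memory(n):
        # Pass 1: compute the mean without storing any values.
        mean = sum(value(i) for i in range(n)) / n
        # Pass 2: recompute each value instead of having stored it.
        return sum(1 for i in range(n) if value(i) > mean)

    def count_above_mean_high_memory(n):
        # Same answer, but holds all n values in memory at once.
        values = [value(i) for i in range(n)]
        mean = sum(values) / n
        return sum(1 for v in values if v > mean)

    assert count_above_mean_low_memory(100_000) == count_above_mean_high_memory(100_000)

Twice the arithmetic, constant storage instead of linear. That's the trade I mean.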
> Your answer to the claim that resources are finite... Is that added resources negates that?
Not merely that you can add resources. That you can add enough resources to emulate a Turing machine for nearly zero cost.
In other words: There is no real-world project where you want to emulate a Turing machine and can't. You have to be artificially limiting yourself. Lack of Turing-completeness causes zero problems in practice.
Machines that don't have enough storage to do a job? Happens all the time. But that's because the requirements for the job go far beyond mere Turing-completeness. They require actual performance metrics.
> If the memory was good enough to be treated as infinite, why do you select a low-memory algorithm? Why not a high-memory program?
> Remember: My claim was that not every Turing machine can emulate every other.
A cheap blob of memory can let you emulate a Turing machine practically forever, but you do have to manage it with care. It's infinite to the machine being emulated inside the sandbox. It's not infinite outside of the sandbox.
If the Turing machine to be emulated is high-memory, go for it. You could run that emulation for the lifetime of the machine.
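To make the sandbox point concrete, here's a minimal sketch (a toy of mine, not anything from the article): the tape looks unbounded to the machine being emulated, and the host decides how much real memory it's willing to spend on it.

    from collections import defaultdict

    def run_turing_machine(rules, tape_input, start_state, halt_state, max_cells=10**6):
        # The tape is a dict, so the emulated machine sees it as unbounded in
        # both directions; the host only pays for cells that actually get
        # touched, and chooses how many of those it will pay for.
        tape = defaultdict(lambda: '_', enumerate(tape_input))
        state, head = start_state, 0
        while state != halt_state:
            if len(tape) > max_cells:
                raise MemoryError("infinite inside the sandbox, not outside it")
            write, move, state = rules[(state, tape[head])]
            tape[head] = write
            head += 1 if move == 'R' else -1
        return ''.join(tape[i] for i in sorted(tape))

    # A toy machine: flip every bit, halt at the first blank.
    rules = {
        ('flip', '0'): ('1', 'R', 'flip'),
        ('flip', '1'): ('0', 'R', 'flip'),
        ('flip', '_'): ('_', 'R', 'done'),
    }
    print(run_turing_machine(rules, '1011', 'flip', 'done'))  # prints 0100_

Swap the dict for a file or a microSD-backed store and the same shape of thing runs for as long as the hardware holds out.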
> That you can add enough resources to emulate a Turing machine for nearly zero cost.
> In other words: There is no real-world project where you want to emulate a Turing machine and can't. You have to be artificially limiting yourself. Lack of Turing-completeness causes zero problems in practice.
That is, frankly, ridiculous.
Mostly because of the next quote.
> Machines that don't have enough storage to do a job? Happens all the time.
That's a lack of universality. It causes such a problem that we run data centers to work around it. You can't always keep adding memory.
If we had universality, memory would be no issue. If memory is no issue, cache misses wouldn't exist, and memory optimisation wouldn't exist either.
The halting problem would be solved overnight, and we'd be able to abolish all safety issues in programs because we'd be able to formally verify the entire program.
Or we wouldn't end up with stories such as Amazon running out of provisioning space [0].
> If the Turing machine to be emulated is high-memory, go for it. You could run that emulation for the lifetime of the machine.
The lifetime of the machine... which is less than the lifetime of the machine it's emulating.
---
Allow me to try and summarise.
I pointed out merely that, technically, we don't have universal Turing machines.
You argue that resources can always be added, thus we may as well have them. That no "real world" problem suffers from the fact that memory is limited.
Yet companies like Amazon, Nvidia, AMD & Intel spend millions in R&D attempting to work around the very limitations that the lack of universality creates.
A Turing machine emulating a different Turing machine is an abomination, efficiency-wise. It's so slow it will never get anything done. It's so slow that the theoretical limitations of non-infinite tape are meaningless. You can design it to use a meager amount of storage, and it won't run out before you die.
When it comes to machines that were designed to be performant, sometimes you need a lot of storage. But when it comes to Turing equivalency, you don't. Speed is the only meaningful limitation.
>That's a lack of universality. It causes such a problem that we run data centers to work around it. You can't always keep adding memory.
>If we had universality, memory would be no issue. If memory is no issue, cache misses wouldn't exist, and memory optimisation wouldn't exist either.
>The halting problem would be solved overnight, and we'd be able to abolish all safety issues in programs because we'd be able to formally verify the entire program.
This would be true, based on what I was saying, if universality implied fast (or even exponential) performance. It doesn't.
In other words, the current situation is basically equivalent to having perfect infinite-taped Turing machines, in all their ponderous glory.