Linux on an 8-bit microcontroller (dmitry.gr)
172 points by Ivoah on Dec 4, 2017 | hide | past | favorite | 38 comments


This always manages to put a smile on my face. This is a pointless but awesome technical achievement, and in the spirit of the old definition of "hacker".


So next, let's run Windows on a DEC PDP-10?

Why the DEC PDP-10? Gates and Allen had neither an interpreter nor even an Altair system when they wrote Altair BASIC, Microsoft's first product. However, Allen had written an Intel 8008 emulator for their previous venture, Traf-O-Data, which ran on a PDP-10 time-sharing computer. [1]

Another fun fact: Apple used Cray X-MP supercomputers to design Apple hardware [2], while Seymour Cray used an Apple desktop some of the time when designing the Cray-2 [3].

[1] https://en.wikipedia.org/wiki/Altair_BASIC

[2] http://wiki.c2.com/?AppleCrayComputer

[3] "Cray and Apple, seemly at opposite ends of the computer spectrum, do have some subtle links. It was known that Seymour Cray used an Apple desktop some of the time when designing the Cray-2. It is also known that Apple had a sequence of Cray machines starting in March 1986 with an XMP/28 followed by another XMP in Feb 1991. A YMP-2E arrived later in 1991 and finally an EL from Dec 1993 to Jun 98. It is said that Apples first XMP was bought by Steve Jobs after he just walked into the Cray facility in Mendota Heights." http://wiki.c2.com/?AppleCrayComputer


Any Turing machine can emulate any other Turing machine. Speed? Well, that's a different matter. Amount of RAM? That too needs tweaking.


> Any Turing-machine can emulate any other Turing-machine.

True, but technically a Turing machine requires an "infinite" tape. Most of the computers we use can approximate something like that, but limitations like finite RAM mean that what we actually have are not universal Turing machines. Without universality, some things simply aren't possible without greater resources.


An implication of your observation is that all the classical impossibility results, like the halting problem, become theoretically (and maybe even practically) solvable when concretized to the machine at hand. The Turing machine abstraction is too idealized, I think.


Theoretically yes, you're absolutely correct. See Minsky's work.

Practically? No.

Also from Minsky's work: a million bits of state results in 2 ^ 1,000,000 possible states that need to be examined.

So a 512 kB program, which holds about 4,000,000 bits, needs an examination of about 2 ^ 4,000,000 states. And that would be for a bare-metal program. Any examination would also need to include hosting code, such as the operating system, the drivers being interacted with, and so on.
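To give a sense of that magnitude, here's a back-of-envelope calculation (my own illustration, not from Minsky):

```python
import math

BITS = 512 * 1024 * 8                    # 512 kB of state = 4,194,304 bits
DIGITS = int(BITS * math.log10(2)) + 1   # decimal digits in 2**BITS

print(f"{BITS} bits of state -> 2**{BITS} possible states")
print(f"written out in decimal, that's a number about {DIGITS:,} digits long")
```

So the state count itself is a number over a million digits long.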

It quickly blows out of proportion and becomes highly impractical, much like brute-forcing cryptographic algorithms.

More than that, my examples understate the scale - they assume program states and assembled programs are equivalent, but I don't think they are.


You are talking about the worst case. The worst case is not very insightful. How probable is the worst case? What about the average or median case? How hard are the formerly unsolvable problems in practice?


From what I understand of Minsky's work, that isn't worst-case, it's the work required. You need to examine every possible state to get an answer.

> even if such a machine were to operate at the frequencies of cosmic rays, the aeons of galactic evolution would be as nothing compared to the time of a journey through such a cycle ...the magnitudes involved should lead one to suspect that theorems and arguments based chiefly on the mere finiteness of the state diagram may not carry a great deal of significance. [Minsky, 1967]

Minsky's impression seems to be that this is the bare minimum amount of work required to solve the halting problem for a specific case.


If a finite state machine is in a deadlock, i.e. a state that directly leads to itself no matter what, you trivially find the cycle. That is only a trivial example. Another example is programs that never jump backwards. There are many other functions where the halting problem is practically solvable. The question is: what are the problems that are impossible on a Turing machine but practical on a real computer?
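The deadlock case generalises: a deterministic machine with finitely many states must either halt or revisit a state, so recording visited states decides halting. A toy sketch (my own, with an assumed `step` function that returns the next state, or None on halt):

```python
def halts(step, start):
    """Decide halting for a deterministic finite-state machine.
    `step` maps a state to the next state, or to None when the
    machine halts.  Revisiting any state means we're in a cycle
    (a self-loop deadlock is just the length-1 case)."""
    seen = set()
    state = start
    while state is not None:
        if state in seen:
            return False         # cycle: the machine never halts
        seen.add(state)
        state = step(state)
    return True                  # reached the halting transition
```

For example, `halts(lambda s: s, 0)` is the deadlock case and returns False immediately. The catch, of course, is that `seen` can need as much storage as the state space itself.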


Have you seen how quickly busy beaver functions grow with the size of the program? It'd have to be a pretty restricted system to be tractable...
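For concreteness, here are the known values of the busy beaver "shift" function S(n), the maximum number of steps an n-state, 2-symbol Turing machine can take before halting (S(5) was only proven in 2024; S(6) is unknown and astronomically larger):

```python
# Known busy beaver step counts for n-state, 2-symbol Turing machines.
S = {1: 1, 2: 6, 3: 21, 4: 107, 5: 47_176_870}

for n in range(2, 6):
    print(f"S({n}) / S({n-1}) = {S[n] / S[n-1]:,.0f}x growth")
```

The jump from S(4) to S(5) alone is a factor of over 400,000.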


Those are still predicated on having an infinite tape.

With finite storage, every program becomes a finite-state machine; and to "solve" the halting problem, one only needs to use the usual cycle-finding algorithms.
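One of the usual cycle-finding algorithms is Floyd's tortoise-and-hare, sketched here with an assumed deterministic `step` function (toy code, not from the thread). Notably, the decider itself needs only O(1) extra storage:

```python
def halts_constant_space(step, start):
    """Floyd's tortoise-and-hare on a deterministic transition
    function `step` (returns the next state, or None once the
    machine halts).  Uses two state copies instead of remembering
    every visited state."""
    slow = fast = start
    while True:
        slow = step(slow)                 # tortoise: one step
        fast = step(fast)                 # hare: two steps
        if fast is not None:
            fast = step(fast)
        if slow is None or fast is None:
            return True                   # a halting transition was reached
        if slow == fast:
            return False                  # pointers met: cycle, never halts
```

The cost is moved from space to time: it may take a number of steps proportional to the cycle length, which for large state spaces is the astronomical part.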


With more than a handful of bits of state, this restricted form of the halting problem is not solvable in general in practice. It's like saying "reading AES-encrypted data is easy, you just have to try all the keys".


What is the relation between busy beavers on Turing machines and cycle finding in finite state machines?


With infinite resources our computers would be universal Turing machines.

Personally, I think that is sufficient to call something a Turing machine even if it has limited resources in reality. After all, you can't have infinite tape.


But if you do, and uncritically take computer science's results about computability for granted, you may reach wrong conclusions.


I don't simply take results from CS without some reflection on how they work on a physically limited Turing machine, that would be silly, so my conclusions are usually right.


Computers are still universal Turing machines because an infinite-tape computation[1] can be subdivided into an infinite sequence of finite-tape computations.

[1] Meaning a computation using an arbitrarily-long tape, since by definition a computation is finite or its result is undefined.
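That subdivision is visible in how emulators typically implement the tape. A toy sketch (my own illustration): allocate cells lazily, so each finite prefix of an "infinite-tape" computation only ever touches finite storage.

```python
class Tape:
    """Lazily-allocated tape: a cell exists only once written, so any
    finite prefix of an unbounded computation uses finite memory."""
    def __init__(self, blank=0):
        self._cells = {}
        self._blank = blank

    def read(self, pos):
        return self._cells.get(pos, self._blank)

    def write(self, pos, symbol):
        if symbol == self._blank:
            self._cells.pop(pos, None)   # writing blank frees the cell
        else:
            self._cells[pos] = symbol

    def cells_in_use(self):
        return len(self._cells)
```

The tape is "arbitrarily long" to the machine reading it, while the host only pays for cells actually used.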


Outside of exceptionally rare circumstances, no practical computing problem comes down to lack of storage. Speed is the killer. And Turing universality does nothing to help with speed.

In other words, the current situation is basically equivalent to having perfect infinite-taped Turing machines, in all their ponderous glory.


That's not my point. Not every Turing machine can emulate every other Turing machine.


As far as storage goes, I understand but disagree with your point in practical terms. Every computer we use is 99.9% identical to a universal Turing machine, because the amount of storage available is functionally infinite.


I can't agree with that. Limits do indeed exist.

In cases like the article, hard limits exist and can require effort to work around.

It's not possible for a computer with 512 kB of memory to load a program whose context requires more than that to be held in memory, because the memory bounds are finite.

Other limits exist as well, such as the number of reads or writes that can take place before hardware decays.

Something like an Arduino Uno couldn't read into memory, and out again, 2 billion decimals of Pi without damage occurring.

But a universal Turing machine could.

Not all machines are equal.


You can glue a micro SD card to almost anything. And if you're executing anything in a Turing-machine-like manner, one microSD will probably last you the lifetime of the machine. Even if you're not, for just about any problem it's trivial to divert 1% of the budget into buying more microSD cards.

> Something like an Arduino Uno couldn't read into memory, and out again, 2 billion decimals of Pi without damage occurring.

Why would damage occur? I'm confused.

> It's not possible for a computer with 512kb of memory to load a program whose context requires more than that to exist in-memory, because the memory bounds are finite.

Since we're comparing to a Turing machine, I think it's fair to demand that all input be in the form of rewritable storage. Let's say no more than half-full to start with. It's pretty tricky to think of a problem like this where using a low-memory algorithm will run out. You can always recompute temporary data instead of storing it.
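A toy illustration of "recompute temporary data instead of storing it" (my own example): computing Fibonacci numbers by keeping only the last two values, rather than storing the whole sequence.

```python
def fib(n):
    """n-th Fibonacci number in O(1) extra space: instead of storing
    the whole sequence as temporary data, keep only the last two
    values and recompute forward from the start."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

The trade is the usual one: less memory in exchange for redundant recomputation, which is exactly why "low-memory" doesn't mean "runs out".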


> You can glue a micro SD card to almost anything.

Your answer to the claim that resources are finite... is that adding resources negates it?

> Why would damage occur? I'm confused.

EEPROM is limited to about 100,000 writes.

SD cards to about 150,000 writes.

After that you have physical failure.
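For what it's worth, the standard mitigation for write limits is wear levelling: spreading writes across many cells so no single cell hits its endurance limit. A toy sketch (invented class and numbers, purely to show the idea):

```python
class WearLevelledStore:
    """Toy wear levelling: rotate writes of one logical slot across
    n physical cells, so total endurance is n times a single cell's."""
    def __init__(self, n_cells, cell_endurance=100_000):
        self.wear = [0] * n_cells
        self.cell_endurance = cell_endurance
        self._next = 0

    def write(self, value):
        i = self._next
        if self.wear[i] >= self.cell_endurance:
            raise IOError("all cells worn out")
        self.wear[i] += 1                      # one erase/write cycle used
        self.value, self.location = value, i
        self._next = (i + 1) % len(self.wear)  # round-robin to spread wear

    def total_endurance(self):
        return len(self.wear) * self.cell_endurance
```

Real flash controllers do something far more sophisticated, but the principle is the same: endurance scales with the number of cells you can rotate over.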

> It's pretty tricky to think of a problem like this where using a low-memory algorithm will run out.

This just sounds intentionally obtuse.

If the memory were good enough to be treated as infinite, why do you select a low-memory algorithm? Why not a high-memory one?

Remember: my claim was that not every Turing machine can emulate every other.


> Your answer to the claim that resources are finite... Is that added resources negates that?

Not merely that you can add resources. That you can add enough resources to emulate a Turing machine for nearly zero cost.

In other words: There is no real-world project where you want to emulate a Turing machine and can't. You have to be artificially limiting yourself. Lack of Turing-completeness causes zero problems in practice.

Machines that don't have enough storage to do a job? Happens all the time. But that's because the requirements for the job go far beyond mere Turing-completeness. They require actual performance metrics.

> If the memory was good enough to be treated infintely, why do you select a low-memory algorithm? Why not a high-memory program?

> Remember: My claim was not every Turing machine can emulate every other.

A cheap blob of memory can let you emulate a Turing machine practically forever, but you do have to manage it with care. It's infinite to the machine being emulated inside the sandbox. It's not infinite outside of the sandbox.

If the Turing machine to be emulated is high-memory, go for it. You could run that emulation for the lifetime of the machine.


> That you can add enough resources to emulate a Turing machine for nearly zero cost.

> In other words: There is no real-world project where you want to emulate a Turing machine and can't. You have to be artificially limiting yourself. Lack of Turing-completeness causes zero problems in practice.

That is, frankly, ridiculous.

Mostly because of the next quote.

> Machines that don't have enough storage to do a job? Happens all the time.

That's a lack of universality. It causes such issues that we run data centers to work around them. You can't always keep adding memory.

If we had universality, memory would be no issue. If memory is no issue, cache misses wouldn't exist, and memory optimisation wouldn't exist either.

The halting problem would be solved overnight, and we'd be able to abolish all safety issues in programs because we'd be able to formally verify entire programs.

Or we wouldn't end up with stories such as Amazon running out of provisioning space [0].

> If the Turing machine to be emulated is high-memory, go for it. You could run that emulation for the lifetime of the machine.

The lifetime of the machine... which is less than that of the machine being emulated.

---

Allow me to try and summarise.

I pointed out merely that, technically, we don't have universal Turing machines.

You argue that resources can always be added, thus we may as well have them. That no "real world" problem suffers from the fact that memory is limited.

Yet companies like Amazon, Nvidia, AMD & Intel spend millions in R&D attempting to work around the very limitations that the lack of universality creates.

[0] https://www.channelweb.co.uk/crn-uk/news/3007237/amazon-runn...


Let me try to put it a different way.

A Turing machine emulating a different Turing machine is an abomination of efficiency. It's so slow it will never get anything done. It's so slow that the theoretical limitations of non-infinite tape are meaningless. You can design it to use a meager amount of storage, and it won't run out before you die.

When it comes to machines that were designed to be performant, sometimes you need a lot of storage. But when it comes to Turing equivalency, you don't. Speed is the only meaningful limitation.

>That's a lack of universality. It causes such an issue we run data centers to work around the problems. You can't always keep adding memory.

>If we had universality, memory would be no issue. If memory is no issue, cache misses wouldn't exist, and memory optimisation wouldn't exist either.

>The halting problem would be solved overnight, and we'd be able to abolish all safety issues in programs because we'd be able to formally solve the entire program.

This would be true, based on what I was saying, if universality implied fast (or even exponential) performance. It doesn't.


This thread nicely illustrates the difference between a computer engineering and computer science perspective.


Great example of a Turing tarpit.


> It takes about 2 hours to boot to bash prompt

While Bellard's JSLinux can boot to a bash prompt in seconds. Still, both are mind-blowing projects!


jslinux has a multi-GHz 64-bit CPU to play with


(2012) but still awesome :)

see also previous discussion https://news.ycombinator.com/item?id=5581851



One might say it's running on a microcoded ARM processor. If you're not too picky about exactly what microcode is.


Right. Microcoding used to be (and still is, as an idea) radically different from emulation; today, it probably means "emulation in hardware" more than what it meant originally.


I expected something along the lines of http://www.dougbraun.com/uzi.html (link to code broken) or the more ambitious (a Unix-like OS on Z80, 6809, 6502, 68000, and others) https://github.com/EtchedPixels/FUZIX (with ‘Lots more bugs right now’ than UZI)

This is a nice hack, but not as cool as those projects.


A great book for stuff like this.

"The Embedded Linux Primer" by Christopher Hallinan.

https://read.amazon.com/kp/embed?asin=B004AE3IA6&preview=new...


This is so stupid and so inspiring. Thank you.


Nice to see Dmitry on the front page :)



