
I've hit JVM bugs in my professional career, sure. I just don't see the scenario where you'd need to be switching back and forth more than occasionally.

If you're running x.0.3 in production, you run x.0.3 locally. If there's a JVM bug that your application hits on x.0.3, either it's a showstopper, in which case you'll find a way to upgrade or fix production quickly, or it's something you can tolerate, in which case you can tolerate it in local dev too. If you decide it's time to upgrade to x.0.4, you upgrade to x.0.4 locally, test a bit, then upgrade production to x.0.4.

So what's the scenario where you need to keep switching between x.0.3 and x.0.4 locally? You upgraded application Y to x.0.4, then discovered that application Z hits a showstopper bug and needs to stay on x.0.3? That's not a situation you let fester for months: either you fix or work around the bug pretty quickly, or you decide x.0.4 is too buggy and put everything back on x.0.3. For that short period, sure, in theory you'd want to develop application Y under x.0.4, but the risk of just developing it under x.0.3 is pretty damn small.
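
And the "run the same version locally as production" part is easy to sanity-check. A minimal sketch, assuming Java 10+ (for Runtime.Version's feature()/interim()/update() accessors); the expected version string is just an illustrative placeholder, in practice it would come from config or CI:

    public final class JvmVersionCheck {

        // Placeholder for whatever version was validated in production.
        private static final Runtime.Version EXPECTED = Runtime.Version.parse("21.0.3");

        public static void main(String[] args) {
            Runtime.Version running = Runtime.version();
            // Compare only feature.interim.update (e.g. 21.0.3), ignoring
            // pre-release and build components of the local JDK.
            boolean matches = running.feature() == EXPECTED.feature()
                    && running.interim() == EXPECTED.interim()
                    && running.update() == EXPECTED.update();
            if (!matches) {
                System.err.printf("Warning: running JVM %s, production validated on %s%n",
                        running, EXPECTED);
            }
        }
    }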

I get the argument that the cost is pretty low. It's just that this is addressing something that doesn't really feel like a problem to me, and something I don't think should be acceptable as a problem in the first place. The JVM always used to be something you could upgrade almost fearlessly, and I think there was a lot of value in that.


