Jean-Papoulos's comments

Gaddafi was trying to establish a gold-backed "Arab" currency system and wanted to sell his oil in it. This was a threat to the US dollar, so Obama was very happy to see Sarkozy knock at his door asking to go get the oil themselves, lol.

Microsoft developers are secretly the people doing the most work to develop alternatives to Windows, by making it hell on earth to use.

>Your job isn’t to complete tickets that fulfill a list of asks from your product manager. Your job is to build software that solves users’ problems.

While I agree in spirit, once a certain number of people are working on a project it's impossible. The product manager's job is to understand real user problems and communicate them efficiently to the engineering team, so the engineering team can focus on engineering.


No. The product manager has to understand the big picture, but when you're working on a team that big, it follows that you're going to be working on a product big enough that no one person is going to be able to keep every single small detail in their mind at once either.

You wouldn't expect the engineering manager to micromanage every single code decision -- their job is to delegate effectively so that the right people are working on the right problems, and to set up the right feedback loops so that engineers can feel the consequences of their decisions, good or bad.

In the same way, you can't expect the product manager to micromanage every single aspect of the product experience -- their job is to delegate effectively so that the right people are working on the most important problems. But there are going to be a million and one small product decisions that engineers will need the right tools to make autonomously.

Plus, you're never going to arrive at a good engineering design unless you understand the constraints intuitively for yourself -- product development requires a collaborative back and forth with engineering, and if you silo product knowledge into a single role, you lose the ability to push back constructively and make features simpler in places where it would be a win/win for both engineering and product. This is what OP means when they say that "The engineer who truly understands the problem often finds that the elegant solution is simpler than anyone expected".


If it’s impossible to understand users’ problems then something has gone horribly wrong.

Absolutely not, the company laptop will be locked down and you won't be able to install your own OS.

Yeah, I was a little confused by the suggestion. If a client hands you a laptop to use for a project, there are corporate-policy reasons why you have to use it as a contractor. (Some companies have serious teeth in these policies.)

It would be interesting to be a fly on the wall when infosec calls to ask why your laptop disappeared from their monitoring tools, and you tell them you installed NixOS (assuming that would even be possible) because that's what you prefer.


Exactly. They don’t want you to just use their hardware. They want you to use all of it.

I switched from a 1h30 bus commute to 15 minutes by car a while back; then I switched jobs and now have a 1h total commute. It's kind of annoying that I can't do anything of substance during this time because I'm driving. I used to read on the bus, and while I don't miss the bus, I certainly miss the reading.


>The company uses pure, purpose-made CO2 instead of sourcing it from emissions or the air, because those sources come with impurities and moisture that degrade the steel in the machinery.

So no environmental advantage. It's supposedly 30% cheaper than lithium-ion, but BYD has cars with sodium-based batteries on the road right now, which CATL says will end up at $10–20/kWh (10x cheaper than current batteries).

So what's the actual advantage of this? I think it's just lucky to be landing at the right time, while batteries aren't cheap enough yet.


To cite and expand on lambdaone below [1]:

> Clearly power capacity cost (scaling compressors/expanders and related kit) and energy storage cost (scaling gasbags and storage vessels) are decoupled from one another in this design

Lambdaone is differentiating between the cost of storing energy (measured in kWh or joules) and the cost of moving that energy in and out per unit time (which is power, measured in watts). If you want to absorb all the excess energy that solar panels and wind turbines generate on a sunny, windy day, you need a lot of power capability (gigawatts during peak generation). This can be profitable even if you only have a low energy storage capacity, e.g. if you can only store a day's worth of excess solar/wind energy, because you can sell that energy in the short term -- for example the next night, when the data centers are still running but the solar panels aren't producing. This is what batteries give you: high power capability but low energy storage capacity.

Of course, you can always buy more batteries to increase the energy storage capacity, but they are very expensive per kWh stored. In contrast, these CO2 "batteries" are very cheap per kWh stored -- "just" build more high-pressure tanks -- but expensive per watt of power capacity, because handling more power means building more expensive compressors, coolers, etc. This ability to scale the energy capacity independently of the power capability is what lambdaone was referring to with the decoupling.

What is this useful for? Shifting energy across larger spans of time. Because the energy storage cost of batteries is so high, they are a bad fit for storing excess energy in the summer (lots of solar) and releasing it in the winter (lots of heating). I'm not sure these CO2 "batteries" are good for such long time frames (maybe pressure loss is too high), but the claim most certainly is that they can shift energy over a longer time frame than batteries can in an economically profitable fashion.
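To make the decoupling concrete, here is a back-of-envelope cost model -- a minimal sketch in Python, where every $/kW and $/kWh figure is an illustrative assumption, not a vendor number:

    # Capex when power capacity and energy capacity are priced separately.
    # All cost figures below are illustrative assumptions.
    def storage_capex(power_mw, energy_mwh, usd_per_kw, usd_per_kwh):
        return power_mw * 1000 * usd_per_kw + energy_mwh * 1000 * usd_per_kwh

    for hours in (8, 48):  # storage duration at a fixed 100 MW
        battery = storage_capex(100, 100 * hours, usd_per_kw=100, usd_per_kwh=200)
        co2 = storage_capex(100, 100 * hours, usd_per_kw=600, usd_per_kwh=30)
        print(f"{hours}h: battery ${battery / 1e6:.0f}M, CO2 plant ${co2 / 1e6:.0f}M")

Going from 8h to 48h of duration multiplies the battery bill nearly sixfold (~$170M to ~$970M in this toy model), while the CO2 plant mostly just pays for more tanks (~$84M to ~$204M) -- that's the decoupling in one picture.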

[1] https://news.ycombinator.com/item?id=46347251


What an excellent explanation, thanks


Even if sodium-ion really gets to $10–20/kWh, you still have degradation, cycle limits, fire risk, and a practical lifetime of maybe 10–15 years.
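Degradation and cycle limits show up once you compute a levelized cost per kWh actually discharged, rather than quoting raw pack capex. A rough sketch, with cycle count, depth of discharge and efficiency all as illustrative assumptions:

    # Levelized cost of storage: capex spread over every kWh the pack
    # will ever discharge. All inputs are illustrative assumptions.
    def lcos(capex_per_kwh, cycles, depth_of_discharge=0.9, efficiency=0.9):
        lifetime_kwh_out = cycles * depth_of_discharge * efficiency
        return capex_per_kwh / lifetime_kwh_out

    for cycles in (2000, 6000):
        print(f"$20/kWh pack, {cycles} cycles: {lcos(20, cycles) * 100:.2f} cents/kWh")

A 3x difference in cycle life is a 3x difference in the levelized cost (~1.23 vs ~0.41 cents/kWh here), so the headline $/kWh is only half the story.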


If it is barely cheaper than lithium, it's much more expensive than traditional pumped storage.

Yeah, it's expensive to build, but then cheap to run for decades.

It's nice that we explore alternatives, but this just seems like investor bait.


Pumped hydro is just not a valid comparison. I wish people would understand that already… it’s only good for long term storage in certain key geographical regions. Its use case is very limited.

You don’t want to use pumped hydro for short-term storage because the rapid cycling will drive up the maintenance costs. You actually hear about hydro power plants talking about installing batteries to reduce wear.

In these discussions, please keep in mind that frequency regulation, short-term storage and long-term storage are different applications with different needs. The costs for pumped hydro are generally reported with their target application in mind. It’s not as applicable to dedicated short-term storage and certainly not applicable to frequency regulation.


It's cute you think short cycles are somehow better for gas turbines and compressors, and that you will restart the whole thing constantly to fill short-term demand.

> In these discussions, please keep in mind that frequency regulation, short-term storage and long-term storage are different applications with different needs.

The comparison is valid; if you want to fill hour-to-hour demand or add some frequency regulation, an inverter with a bunch of batteries is far, far better than this.

> You don’t want to use pumped hydro for short-term storage because the rapid cycling will drive up the maintenance costs. You actually hear about hydro power plants talking about installing batteries to reduce wear.

They are still cycled daily; that's the entire point of them, and it worked even pre-renewables: load up on cheap night energy and unload it when demand rises. Renewables just flipped that to loading up during the solar peak.

And putting in a few hours' worth of batteries to reduce cycling is beneficial in both of those cases.


> It's cute you think […]

Don't do that here.


Ignorant and patronizing answers get snark back.

But it's sad that you get triggered by snark and not... ignorance and/or lying


Ireland is lucky enough to have several suitable sites, but just one is operational: Turlough Hill, which has been running for over 50 years and is in use daily. It's at least as useful for grid stability and (relatively) rapid dispatch as for raw capacity. Its output is ~0.7% of total daily demand (~120 GWh) and its power ~5% of daily peak (~6 GW), wintertime figures. For comparison, electricity usage has increased about 8-fold since it was deployed in 1974.
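Turning those percentages into absolute numbers -- a back-of-envelope check in Python, using only the figures quoted above:

    # Implied absolute figures from ~120 GWh/day demand and ~6 GW peak.
    daily_demand_gwh, peak_gw = 120, 6
    print(f"daily output ~{0.007 * daily_demand_gwh:.2f} GWh")  # ~0.84 GWh
    print(f"power        ~{0.05 * peak_gw * 1000:.0f} MW")      # ~300 MW

That ~300 MW lines up with Turlough Hill's nameplate rating of roughly 292 MW.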


AFAIU, pumped storage can only be built in very few locations around the globe.


This is mentioned in the article: you need very specific topography for pumped water storage. Additionally, it can require a lot of space and be quite expensive and time-consuming to build.


Pumped hydro is not viable in most areas of the world. This is.


> So what's the actual advantage of this?

I would posit that they hope Wright's Law will take hold; the components can be optimised and the deployment standardised. Also it looks as if most of the stuff can be made within the US or EU, dodging tariffs.
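Wright's Law says unit cost falls by a fixed fraction with every doubling of cumulative production. A quick sketch of what that implies, where the 20% learning rate is an illustrative assumption:

    import math

    # Wright's Law: cost(n) = cost(1) * n**(-b), where each doubling of
    # cumulative production cuts cost by the learning rate LR = 1 - 2**(-b).
    def wrights_law(first_unit_cost, cumulative_units, learning_rate=0.20):
        b = -math.log2(1 - learning_rate)
        return first_unit_cost * cumulative_units ** (-b)

    for n in (1, 10, 100, 1000):
        print(f"unit #{n}: cost index {wrights_law(1.0, n):.2f}")

At a 20% learning rate the thousandth unit costs about a ninth of the first, which is why standardising the deployment matters so much.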


welcome to tumblr!!


>Bergmann agreed with declaring the experiment over, worrying only that Rust still "doesn't work on architectures that nobody uses".

I love you Arnd. More seriously, this will become an issue when someone starts the process of integrating Rust code into a core subsystem. I wonder whether this will lead to the kernel dropping support for some architectures, or to Rust doing the necessary work. Probably a bit of both.


There are two separate ongoing projects to make GCC able to compile Rust: one on the GCC side, a new Rust frontend written in C++ (gccrs); and one on the rustc side, a codegen backend that hands rustc's output to GCC for code generation (rustc_codegen_gcc).

The long-term solution is for either of those to mature to the point where there is Rust support on every architecture that GCC supports.


I wonder how good an LLVM backend for these rare architectures would have to be to count as “good enough” for the kernel team. Obviously correctness should be non-negotiable, but how important is it that the generated code (e.g. for Alpha) is performant, when it only serves somebody’s hobby?


I suspect more the latter than anything. It could be that by the time Rust gets used in the kernel core, one or both of the GCC implementations would be functional enough to compile the kernel.

I'm curious though, if someone has an ancient/niche architecture, what's the benefit of wanting newer kernels to the point where it'd be a concern for development?

I presume that outside of devices and drivers, there's little to no new development on those architectures. In which case, why don't the users/maintainers of those archs use a pre-6.1 kernel (IIRC, when Rust was introduced) and backport what they need?


No one is doing any kind of serious computing on 30-year-old CPUs. But the point of the hobby isn’t turning on the computer and doing nothing with it. The hobby is putting together all the pieces you need to turn it on, turning it on, and then doing nothing with it.

There’s an asymmetry between what the retro-computing enthusiasts are asking for and the amount of effort they’re willing to put in. This niche hobby benefits from the free labour of open-source maintainers keeping support for their old architectures alive. If the maintainers propose dropping support because of the cost of maintenance, the hobbyists rarely step up. Instead they make it seem like the maintainers are the bad guys doing something reprehensible.

You propose they get their hands dirty and cherry pick changes from newer kernels. But they don’t want to put in effort like that. And they might just feel happier that they’re using the “real” latest kernel.


> I'm curious though, if someone has an ancient/niche architecture, what's the benefit of wanting newer kernels to the point where it'd be a concern for development?

Wanting bug fixes (including security fixes, because old machines can still be networked) and feature improvements, just like anyone else?

> I presume that outside of devices and drivers, there's little to no new developments in those architectures.

There are also core/shared features. I could very easily imagine somebody wanting e.g. eBPF features to get more performance out of ancient hardware.

> In which case, why don't the users/maintainers of those archs use a pre-6.1 kernel (IIRC when Rust was introduced) and backport what they need?

Because backporting bits and pieces is both hard and especially hard to do reliably without creating more problems.


The kernel must adapt to Rust, not the other way around. Rust is the way!


This is nice, but please don't clickbait headlines with straight-up lies. This is not JSON-compatible.


Yeah, JSON-compatible is very different from convertible.


Any benchmarking? Because this fundamentally sounds like replacing Node (V8) with another JavaScript engine, which I'm not sure is going to be much of a gain; at which point, why use an entirely different toolchain than the rest of the world?
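For what it's worth, the comparison is cheap to run. A minimal timing-harness sketch -- the runtime names are placeholders for whatever is being compared, and both are assumed to be on PATH:

    import statistics
    import subprocess
    import time

    # Median wall-clock time for running the same script under each runtime.
    def bench(cmd, runs=5):
        times = []
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run(cmd, check=True, capture_output=True)
            times.append(time.perf_counter() - start)
        return statistics.median(times)

    # "other-runtime" is a hypothetical placeholder, not a real binary.
    for cmd in (["node", "bench.js"], ["other-runtime", "bench.js"]):
        print(cmd[0], f"{bench(cmd):.3f} s")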

