
The video has some inaccuracies. For example, it says "to detect a heartbeat from 100m, 10^-20 T", but then talks about "from 100km".

So, looking carefully: a SQUID can detect down to about 10^-15 T, so we need just 5 additional orders of magnitude. And as I know from practical amateur astronomy, with a regular (periodic) signal, 5 additional orders of magnitude can be achieved with numerical methods.
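
A minimal sketch of that numerical trick (all numbers hypothetical): coherently averaging N aligned repetitions of a periodic signal shrinks the noise by sqrt(N), so 5 orders of magnitude in amplitude would need on the order of 10^10 repetitions.

    import numpy as np

    # Toy demo: a periodic signal buried in noise, recovered by stacking.
    # Averaging N aligned periods reduces the noise RMS by a factor of sqrt(N).
    rng = np.random.default_rng(0)
    period, n_reps = 100, 10_000                    # hypothetical sizes
    signal = 0.05 * np.sin(2 * np.pi * np.arange(period) / period)

    noisy = signal + rng.normal(0.0, 1.0, (n_reps, period))  # raw SNR = 0.05
    stacked = noisy.mean(axis=0)      # noise RMS drops to 1/sqrt(n_reps) = 0.01

    print("raw SNR     ~", 0.05)
    print("stacked SNR ~", 0.05 * np.sqrt(n_reps))  # = 5: signal now visible
    print("recovered amplitude ~", stacked.max())   # close to the true 0.05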

So two questions remain: how the CIA managed to make a SQUID small enough to fit on something like a helicopter (or in the weapons bay of a fighter), and how they managed to fly closer than 100m to the pilot. Each could be answered with "possible, if you have highly motivated people with big money".


Also, internal CPU caches have grown over time. The 286 and earlier had no cache at all; the 386 was the first to include a cache for the MMU (the TLB), which stores the most-used page table entries; in later generations, growth of this cache was sometimes advertised.

So yes, even when your CPU can address a similar amount of RAM, it possibly doesn't have a large enough TLB for your application.
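
A rough way to see TLB pressure from userspace (just a sketch; exact timings are machine-dependent): touch one byte per 4 KiB page across a large buffer, versus the same number of touches packed into a few pages.

    import time
    import numpy as np

    PAGE = 4096
    buf = np.zeros(256 * 1024 * 1024, dtype=np.uint8)  # 256 MiB = 65536 pages

    strided = np.arange(0, buf.size, PAGE)  # one touch per page: many TLB misses
    sequential = np.arange(strided.size)    # same touch count, few pages in total

    for name, idx in (("page-strided", strided), ("sequential", sequential)):
        t0 = time.perf_counter()
        _ = int(buf[idx].sum())
        print(name, f"{time.perf_counter() - t0:.4f} s")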


Because a very large share of the market now is datacenters. The difference from desktop is dramatic: for desktop, quite simple chips with bad energy efficiency are really acceptable, but DCs already deal with extremely high power consumption, as they typically "compress" so much consumption into one rack that it constantly works near physical constraints.


You can't make a desktop computer 4 times larger, but there's very little preventing you from putting 4 racks where you had 1 before. If floor space is the expensive part of a data center, then probably some incentives are misaligned.


About the price of land and connectivity: in a large city, land prices start at a few million dollars per square kilometer, and use of cable ducts can cost from $50 per meter (it can easily be $200/m).

Plus, arranging the space can take years.

Heat dissipation in the megawatt range may simply be prohibited by local regulations.

So, space in large cities is a very serious problem, and for business it is usually easier to "compress" as much computing power as possible into one rack.


> in a large city, land prices start at a few million dollars per square kilometer,

There's little need to put large datacenters in downtown Chicago and Manhattan.


Bigger chips = more distance to cover for your electrons = more power required = more generated heat = slower throughput for your data.

Surely you don't believe that the entire chip industry had not thought of "wait what if we just make the chips bigger".


AMD hiding Threadripper behind their back: Uh yeah what a terrible idea, we definitely didn't actually do that. Making a CPU that's twice the size, how ridiculous would that be right?!


You found an exception to my two-line generalization. Congrats!


For quite some time now, the main reason to reduce feature size has been to make more money per wafer, and faster.

Same reason that so much work was put into increasing wafer diameter over the decades.

More chips per wafer means a lot.

Much more than for performance's sake.
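
A common first-order approximation makes this concrete (die sizes here are invented): gross dies per wafer is roughly the wafer area divided by the die area, minus an edge-loss term, so both larger wafers and smaller dies multiply the chips per wafer.

    import math

    def dies_per_wafer(wafer_mm: float, die_mm2: float) -> int:
        """Gross dies per wafer: area ratio minus a first-order edge-loss term."""
        r = wafer_mm / 2
        return int(math.pi * r * r / die_mm2
                   - math.pi * wafer_mm / math.sqrt(2 * die_mm2))

    for wafer in (150, 200, 300):                   # wafer diameters, mm
        print(wafer, "mm:", dies_per_wafer(wafer, 100.0), "dies of 100 mm^2")
    print("300 mm wafer, die shrunk to 50 mm^2:", dies_per_wafer(300, 50.0))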


You cannot place a DC just anywhere; in large cities space is extremely constrained, and land is extremely expensive.

Another big problem is connectivity: you cannot place a DC where it cannot be connected to the power grid and to a very powerful network.

So yes, DC floor space is severely limited.

And the third issue: in recent decades, rack servers dissipate extremely large amounts of heat; I hear numbers up to tens of kilowatts per rack, which is just hard to remove with air cooling (for example, all IBM Power servers have a liquid cooling option, but that is a totally different price range).


That's the AI hype narrative, but aren't server CPUs only like 25% of the total market? That's tiny compared to consumer volume, though revenue is likely on par given the higher cost per unit.


> aren't server CPUs only like 25% of the total market?

Yes and no. If you just calculate formally, yes, servers are a small market by volume. But they are much less constrained financially than a private person, so from the same fab one can earn much more money selling to the server market than selling to the consumer market.


I don't think that's correct, server chips aren't really "more expensive" than consumer chips when you correctly account for performance. Older-gen server chips have comparable performance to new top-of-the-line consumer chips and sell for a similar price. Newer-gen server chips in turn are priced at a premium over the current value of the older-gen, to account for their higher performance. The lower financial constraints don't enter into it all that much.


For many years until about a decade ago (more precisely until the launch of the Intel Skylake Server processors) the server CPUs had a performance per dollar comparable to desktop CPUs so the expensive server CPUs were expensive because of their higher performance.

But since then the prices of server CPUs have ballooned and now their performance per dollar is many times worse than for desktop CPUs. Server CPUs have very good performance per watt, but the same performance per watt is achieved with desktop CPUs by underclocking them.

The only advantage of server CPUs is that they aggregate in a single socket the equivalent of many desktop CPUs, including not only the aggregate number of cores, but also the aggregate number of memory channels and the aggregate number of PCIe lanes. Thus a server computer becomes equivalent to a cluster of desktop computers interconnected by network links much faster than the typically available Ethernet.

While for embarrassingly parallel tasks a server computer will cost many times more than a cluster of desktop computers with the same performance, it will be at much less of a disadvantage, or may even have a better performance/cost ratio, for tasks with a lot of interprocess/interthread communication, where the tight coupling between the many cores hosted by the same socket ensures lower latency and higher throughput for such communication.

The owners of datacenters are willing to pay the much higher prices of modern server CPUs because the consolidation of multiple old servers into a single server brings savings in other components, due to fewer coolers, fewer power supplies, fewer racks, simpler maintenance and administration, etc.

While the retail prices of server CPUs are huge, the biggest customers, like cloud owners, can get very large discounts, so for them the difference in comparison with desktop CPUs is not as great as for SMEs and individuals. The large discounts that Intel was forced to accept during the last few years, to avoid losing too much of the market to AMD, are the reason why Intel's server CPU division has lost many billions of dollars.


Sure, we need better math; that much is obvious.

Unfortunately, nobody at the big companies knows which math exactly will win, so the competition doesn't end.

So researchers will try one solution, then another, and so on, until they find something perfect, or until semiconductor production (Moore's law) delivers enough silicon to run current models fast enough.

I believe somebody already has the silver bullet of an ideal AI algorithm, which will lead us all to AGI when scaled up at some big company, but this knowledge is not obvious at the moment.


Amateurs in the USSR 50 years ago made wireless, battery-less headphones, which used a wire laid along the perimeter of a room to transfer both sound and power.

Inside the headphones there is a tiny coil.

It really works, and very reliably, but the resulting coil (the size of the room) has a very large reactance, so it is nearly impossible to transfer the high frequencies, only lows (bass) and midrange; it is workable for speech, but music is heavily distorted.
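
A quick illustration of why the highs die (the inductance is an invented ballpark for a single-turn room-sized loop; real values depend on geometry): the inductive reactance X_L = 2*pi*f*L grows linearly with frequency, so the loop current, and with it the transmitted field, falls off at treble frequencies.

    import math

    L_loop = 20e-6   # henries; hypothetical value for a ~5 m single-turn loop

    for f in (100, 1_000, 10_000):          # bass, midrange, treble (Hz)
        x_l = 2 * math.pi * f * L_loop      # inductive reactance, ohms
        print(f"{f:>6} Hz: X_L = {x_l:.4f} ohm")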


they also made microphones work like this. place em behind the panels of sockets, on the wires. some accounts of such devices being found are in the book Spycatcher (by an ex-MI5 science officer during the cold war). pretty interesting tech!


Could you use multiple coils for different bands?


You could transfer different bands via different coils on different frequencies, but unfortunately, the capacity of an information channel is limited by its bandwidth. That is why radio uses high-frequency waves as a carrier (radio or light, or even some sort of invisible rays), not a coil, and takes on the hassle of modulating those waves.

This means, roughly, that you cannot transfer more information per second than half of the maximum frequency (the Nyquist limit: you need at least two samples per cycle).

https://en.wikipedia.org/wiki/Shannon%E2%80%93Hartley_theore...

Channels with a very high signal-to-noise ratio can carry several bits per symbol through multi-level modulation, going beyond the naive two-level limit while still respecting the Shannon bound (in engineers' slang, "the channel is ringing"); fiber channels are an example.
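
A worked example of the Shannon-Hartley bound (the bandwidth and SNR values are arbitrary): capacity grows only logarithmically with SNR, which is exactly why a very clean channel can afford many bits per symbol.

    import math

    def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
        """Shannon-Hartley capacity in bits per second: B * log2(1 + S/N)."""
        return bandwidth_hz * math.log2(1 + snr_linear)

    for snr_db in (10, 30, 60):                      # arbitrary example SNRs
        snr = 10 ** (snr_db / 10)
        kbps = shannon_capacity(3_000, snr) / 1000   # 3 kHz voice-grade channel
        print(f"SNR {snr_db:>2} dB -> {kbps:5.1f} kbit/s")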


Most modern research also considers digital techniques for compressing sound (information), like using an LLM as a (de)compressor (google "LLM compression algorithms").
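
A toy sketch of that idea (the probabilities are invented): any predictive model doubles as a compressor, because an entropy coder spends only -log2(p) bits on a symbol the model predicted with probability p, so better prediction means fewer bits.

    import math

    # Probability each model assigned to the symbol that actually came next.
    weak_model   = [0.50, 0.25, 0.50, 0.125]
    strong_model = [0.90, 0.80, 0.95, 0.70]

    for name, probs in (("weak", weak_model), ("strong", strong_model)):
        bits = sum(-math.log2(p) for p in probs)   # ideal entropy-coded length
        print(f"{name:>6} model: {bits:.2f} bits for 4 symbols")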


They treat you as belonging to the community, and use your appearance as a hidden ad: "just another consumer choosing A.. products".

Even if you intentionally hide all the logos on the A.. products you use, their design is so distinctive and widely known that most people will take even a Xiaomi for an A..

Plus, A.. products are usually deeply integrated into their own infrastructure; I mean an A.. Wi-Fi router, an A.. printer, A.. speakers, A.. interfaces (Lightning), etc.


Well, for some people knowledge becomes belief. The position of the Moon creates the tides, which affect some places very strongly, because the shoreline constantly moves by more than a couple of meters, so some operations have to be planned according to the tides; this is not belief, this is fact. The position of the Sun amplifies the tides by up to 50%. BTW, people who know astronomy can easily conclude that the most extreme solar amplification of the tides happens during a solar eclipse.

I also saw a few other wrong classifications.

Sorry, it's a good idea, but such mistakes make it unplayable.


I heard about one specific computer of the System/360 era on which it was dangerous to touch two keyboards simultaneously, because there was high voltage between them.

When one guy asked "why not?", the consultant answered, IBM-style: "we don't think anybody will ever need two keyboards to work".

In reality, two keyboards were convenient for debugging, because you could control two terminals; now that is standard debugging practice.


If it is possible for this exact plane, the software update could be made just a routine procedure.

But as I hear, air carriers can buy planes in different configurations: for example, Emirates or Lufthansa always buy planes with all features included, but a small Asian airline might buy a limited configuration (even without some safety indicators).

So for Emirates or Lufthansa it would take one empty flight to the home airport, but a small airline would need to fly to some large maintenance base (or to the factory) and wait in a queue there (you can find images on the internet of Boeing's factory field with lots of grounded 737 MAXes from a few years ago).

So for Emirates or Lufthansa the impact on flights would be minimal (just like swapping out a bus), but for small airlines things could be much worse.


> how do you avoid the voting circuit becoming a single point of failure

They don't. They just make the voting circuit much more reliable than the computing blocks.

For example, the computing blocks could be CMOS, but the voting circuit made from discrete components, which are simply too large to be sensitive to particle strikes.
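
The voting logic itself is tiny; here is a sketch (in Python for illustration) of the bitwise 2-of-3 majority such a circuit computes, where any single corrupted copy is outvoted bit by bit.

    def tmr_vote(a: int, b: int, c: int) -> int:
        """Bitwise 2-of-3 majority: each output bit agrees with >= 2 inputs."""
        return (a & b) | (b & c) | (a & c)

    # A single flipped bit in copy b does not reach the output.
    assert tmr_vote(0b1010, 0b1110, 0b1010) == 0b1010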

Unfortunately, discrete components are more sensitive to total accumulated dose than nm-scale transistors, because their larger area gathers more events and suffers from diffusion.

Another example, from the aviation world: many planes still have a mechanical connection from the control wheel to the control surfaces, because a mechanical connection is considered ideally reliable. Unfortunately, at least one catastrophe happened because one pilot's wheel became blocked and the other pilot could not overcome the block.

BTW, a weird fact: modern planes don't have a rod physically connected to the engine, because the engine has its own computer that emulates the behavior of an old carbureted piston engine; on a Boeing the emulated thrust lever has an electronic actuator, so it automatically moves to the position corresponding to the actual engine mode, but an Airbus doesn't have such an actuator.

I want to say: big planes especially (and planes in general) are a weird mix of very conservative inherited mechanisms and new technologies.


Electronics in high-radiation environments benefit from a large feature size with regard to SEU reduction, but you're correct that the larger parts degrade faster in such environments, so they've created "rad-hard" components to mitigate that issue.

https://en.wikipedia.org/wiki/Radiation_hardening

It's interesting to me that triple-voting wasn't as necessary on the older (rad-hard) processors. Every foundry in the world is steering toward CPUs with smaller and smaller feature sizes, because they are faster and consume less power, but the (very small) market for space-based processors wants large feature sizes. Because those aren't available anymore, TMR is the work-around.

https://en.wikipedia.org/wiki/IBM_RAD6000

https://en.wikipedia.org/wiki/RAD750

Most modern space processing systems use a combination of rad-hard CPUs and TMR.

