So they don’t effectively block communication to and from the device? Or they don’t block all RF? Because the former seems to qualify as working, while the latter seems irrelevant. Or do they only sometimes block communication to/from the device?
It is only irrelevant because devices made in this century put a lot of work into not transmitting outside of the intended frequencies.
My old Nokia would smash the whole range of AM/FM and UHF bands.
I don't know about higher frequencies that could escape one of these cages intended to block WiFi/5G/GPS, but it is possible in theory, and in that case it would likely be a backdoor that only becomes active when no signal can be detected.
> So they don’t effectively block communication to and from the device?
That is impossible to know without knowing the characteristics of the signal, noise, attenuation performance, sensitivity of the receiver, and other environmental conditions.
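To make that concrete, here is a back-of-the-envelope link-budget sketch. All the numbers (transmit power, shield attenuation, receiver sensitivity, distance) are made-up illustrative values, not measurements of any real product; the point is only that whether a link closes depends on every term, not on the shield alone.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB (far-field approximation)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

def link_margin_db(tx_dbm, shield_db, distance_m, freq_hz, rx_sensitivity_dbm):
    """Received power minus receiver sensitivity.

    Positive margin means the receiver can still hear the transmitter
    despite the shielding; negative means the link fails.
    """
    rx_dbm = tx_dbm - fspl_db(distance_m, freq_hz) - shield_db
    return rx_dbm - rx_sensitivity_dbm

# Hypothetical scenario: a +20 dBm 2.4 GHz radio, a listener 10 m away with
# -95 dBm sensitivity, and an enclosure assumed to provide 60 dB attenuation.
with_shield = link_margin_db(20, 60, 10, 2.4e9, -95)    # negative: blocked
without_shield = link_margin_db(20, 0, 10, 2.4e9, -95)  # positive: link closes
```

Change any one input (a more sensitive receiver, a closer antenna, a lower frequency) and the same enclosure flips from "blocks" to "doesn't block", which is why a blanket "blocks RF" claim means nothing without specifications.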
> Or they don’t block all RF?
They definitely don’t.
If you want to attenuate an RF signal, you need to do RF engineering. There are products to help people do this (e.g. RF test enclosures), but they aren’t marketed as “blocking RF” because that is nonsensical. The products that advertise as “blocking RF” without any real specifications are unsuitable for serious RF engineering; they are primarily sold to conspiracy theorists, hypochondriacs, etc.
Yes, I have spent thousands of dollars and months testing them.
You can cut off GPS and high-frequency cell spectrum pretty easily. Most cell spectrum is effectively attenuated by good-quality professional RF enclosures designed for those frequencies. 2.4 GHz signals like WiFi (with good-quality radios) are hard to attenuate to the point where they can’t connect to other radios nearby, even with very expensive RF test enclosures.
If you’re trying to block against an unknown threat, you are fucked. If someone wanted to backdoor a baseband, they’d probably make it transmit at low speeds and low frequencies to be resistant to attenuation.
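One physical reason low frequencies are harder to shield: the skin depth of a conductor grows as frequency drops, so the same sheet of metal that is many skin depths thick at WiFi frequencies may be only a fraction of a skin depth at hundreds of kHz. A quick sketch of the standard skin-depth formula (copper values from standard tables; this ignores seams and apertures, which dominate real enclosures):

```python
import math

RHO_CU = 1.68e-8          # resistivity of copper, ohm-metres
MU0 = 4 * math.pi * 1e-7  # permeability of free space

def skin_depth_m(freq_hz, resistivity=RHO_CU, mu_r=1.0):
    """Depth at which the field amplitude decays to 1/e inside a conductor.

    delta = sqrt(2 * rho / (omega * mu0 * mu_r))
    """
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * resistivity / (omega * MU0 * mu_r))

# Copper skin depth: roughly 0.2 mm at 100 kHz vs roughly 1.3 microns at 2.4 GHz,
# so a low-frequency transmitter sees a far "thinner" shield, electrically.
low = skin_depth_m(100e3)
high = skin_depth_m(2.4e9)
```

Since attenuation through the metal scales with thickness in skin depths, a backdoored radio deliberately transmitting at low frequency (and low data rate, to claw back signal-to-noise) is exactly the worst case for a consumer Faraday pouch.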
I read somewhere that LLM chat apps are optimized to return something useful, not correct or comprehensive (where useful is defined as the user accepts it). I found this explanation to be a useful (ha!) way to explain to friends and family why they need to be skeptical of LLM outputs.
> attempted to compensate using prefabulated amulite in the magneto-reluctance housing, but this only exacerbated the side-fumbling in the hyperboloid waveform generators
Wrote my PhD dissertation on this. It would've been in the literature for Apple's engineers to find, but unfortunately I lost institutional support to get this into a journal after my college (Mailorderdegrees.com, an FTX University^TM) folded mid-process.
This belief is how UC San Diego ends up with 900 freshmen below high-school math proficiency. And thus college becomes a remedial education institution.
I live 45 minutes from A&M. I’ll repeat myself. No one in Texas calls it tu. ;)
There may be some disenfranchised barbarians out in the wilderness that do. But likely only due to their extremely tenuous grasp of the fundamentals of English. Primarily they communicate through grunts and hand signs.
Yes but are those the marginal buyers and sellers that drive price movements? Most people in index funds are probably not flitting in and out of index positions at anything approaching even medium frequency.
This is not the 1950s. Most Americans have internet, where they can see average people living lives around the world. Their houses may be a bit worse, but their cars are normally newer, they have internet, they have FAMILY. They have vibrant COMMUNITY. They have free time.
That is a much less sensational, less "on trend" story than "Nefarious AI company convinces user to commit murder-suicide". But I agree. Each of these cases that I have dug further into seems to be idiosyncratic and not mainly driven by OpenAI's failings.
This article reads like opinions and vibes without a solid grounding in data.
For example:
"Look how much faster you can find Save or Share in the right variant..."
But each variant took me the same amount of time, or so I think. And that demonstrates the issue: is any of this being measured and analyzed?
My opinion is that much of design is just "convincing opinion wins" (where convincing-ness is often not at all based on measurement of some kind), leading to crappy stuff like ultra flat design and Corporate Memphis.