
> And second - it’s really hard to participate in society if you can’t speak the language. I think this creates resentment for both Japanese citizens and foreign residents alike.

I basically agree, but there are two problems with this:

1) the JLPT is a test of fairly academic reading and listening (for those unfamiliar, it’s basically the equivalent of the US SAT reading/vocab section in terms of difficulty). There’s no speaking or communication requirement. I probably cannot pass N2, despite being conversant and functional in everyday life at a high B1 level.

2) The populations who are most likely to abuse the current system are fairly notorious for being able to pass the exam without real communication ability. I know a fair number of people who were able to pass without being able to have even a basic conversation at the time.

Language schools here are essentially factories designed to shove kanji readers through the JLPT in minimum time, with little attention paid to conversation. Overall, this feels like a sledgehammer approach to a screwdriver problem.


Not sure why your comment is downvoted, because it hits the issue dead center: it's entirely possible to pass either of the required tests (N2/J2) and not be able to speak a single word of Japanese in a live conversation.

That's why at least one category of applicants abusing the visa (Chinese) will continue to do so without any issues.


There are a lot of folks who do not want to hear anything truthful if it violates their sense of national justice.

I don't have a dog in the fight. I just think the current "solution" is blunt, and will end up damaging both the Japanese economy, and a bunch of people who were never "the problem", per se.


> The density of the NE is nothing like what you see elsewhere in the world, especially Asia, and Japan and China specifically.

Yeah, I defy anyone who claims the US can't build trains "because of density" to fly to Tokyo, and actually take the Seibu Shinjuku line west from Shinjuku station. Look at those buildings built right next to the tracks, for many, many kilometers. People live in those -- if the windows opened, you could reach out and touch the laundry on the balconies that overlook the tracks [1].

Compared to that (and let's be clear: that's one average line in west Tokyo), even the Acela line on the east coast is a bad joke, density-wise. The US doesn't build decent trains because the US is corrupt and sclerotic and run by incompetent people, not because of some mythical structural advantage in Magical Asia.

[1] I have no idea how people manage to live like that -- these trains are loud, and run basically from 4AM until 1AM every day -- but it's not lost on me that the fact that people can build houses right up next to the tracks might be the true advantage of Magical Japan.


> these trains are loud, and run basically from 4AM until 1AM every day

Not that bad, actually. You get used to it, and even if trains are frequent, they don't need 10 minutes to pass by your home.


I live in a unique community sandwiched between a public-transit light rail line and a freight line.

The light rail runs at a frequency of 12-20 minutes in each direction. The freight's schedule: who really knows?

But the freight train is generally inhibited from sounding its horn or bells near residential neighborhoods. So, unless I am really paying attention while awake, I cannot detect it passing by, no matter the size.

The light rail is audible from where I sit, usually, but only just. It toots the horn mostly as it passes, but it's not disruptive or annoying to me, anyway. I sort of enjoy the white noise it all makes. There are cars that do a lot worse.

I think that the architecture here is helpful, too. The buildings are clustered around a central courtyard, and really insulated from the road noise. At any given time, there may be folks splashing in the pool, or running the jets on the hot tub, anyway.

The light rail stations are a major convenience to living here, and the train noise is absolutely the least of our worries!


I've heard people say that, but I find it hard to believe. I think I'd go nuts. And sure, they don't take 10 minutes to pass, but the busy lines (like the Seibu line I mentioned) are running at least 2-3 trains every 10 minutes, so they might as well be continuous.

The houses built next to the crossing points, in particular, have always boggled my mind. BING BING BING BING BING....


I noticed when I visited Japan that the crossing chimes quieten once the barriers have fully lowered.

Just another example of Japanese attention to detail and human oriented design.


Not where I am standing right now!

(I mean, maybe you’re right in some places, but it’s certainly not everywhere. Ironically, I happened to be standing next to a completely empty crossing, gates down, bonging away, while reading your comment.)


The nearest crossings where I live indeed stop the chimes when the barriers have been lowered. This doesn't actually make much of a difference really, because the train arrives only a few seconds after, and, because it's a local line, there are never more than three cars in the train so it passes very quickly.

Not that I'm bothered by the chimes at all. And my grandson loves them.


It can be a factor of many things, can it not? Seriously, if Japan was a map option in Transport Tycoon, it would be labeled "easy".

Of course it's many things, but people who claim that the US is especially dense, or especially sparse, or especially geographically difficult (LOL!) compared to Japan (and therefore cannot build rail) are deeply unserious commenters.

More generally, any argument of the form "the US is special for reason ____ and therefore rail is especially difficult here" is highly likely to be utter nonsense.


The U.S. can build trains and has a good rail system—for freight not passengers. It’s not obvious how Japan moves freight, but the U.S.’s rail system evolved to move freight efficiently. That is a huge difference and not necessarily the result of corruption or incompetence.

Maybe. Japan has plenty of freight by rail, but you can’t look at (say) the California high-speed rail debacle and blame that on cargo.

My understanding is that the rail share of freight is relatively low in Japan compared to many other developed countries. Most freight moves by truck or coastal shipping. Looking at a map of Japan, most of the cities are on the coast, so I guess coastal shipping makes a lot of sense.

I think a big part of it is also that (partly because of the necessity of building for earthquake resistance), Japanese construction is a lot more robust than American housing, and also tends to have extremely good soundproofing on windows and doors. Actually, that's true of most of the rest of the world, except the US.

> Japanese construction is a lot more robust than American housing, and also tends to have extremely good soundproofing on windows and doors.

This must be a different Japan than the one I'm familiar with, where exterior walls are often uninsulated and only a few inches thick and single-pane windows are still the norm in a lot of housing. I wouldn't be surprised if soundproofing were better for railroad-adjacent buildings, but compared to American homes the soundproofing here is surprisingly poor.


> Japanese construction is a lot more robust than American housing, and also tends to have extremely good soundproofing on windows and doors.

Oh, you’re definitely engaging in Magical Japan, here.

While building standards have certainly improved in the past 20 years, the average Japanese house is built just strong enough not to fall over when someone farts. In particular, windows tend to be single pane, and you’re lucky if they block a strong wind, let alone noise.

I’m exaggerating a little, but not by much.


As the sister comment said - the houses are just strong enough not to fall over in a "normal", all-the-time earthquake. Our house sways a lot under typhoons and far-away earthquakes (far away = long wavelengths). Building codes have only relatively recently been updated so that houses can handle real earthquakes without falling over like a house of cards. Remember the Noto earthquake of January 1, 2024? Large areas didn't have a single house still standing.

(Which is why we're now tearing down our old house and building a new, stronger one. Post-war Japan was more concerned with a) building a lot of houses, and b) keeping lots of jobs, which meant, as far as houses were concerned, building use-and-throw-away houses. Then build another. And another. And don't talk to me about soundproofing... it's non-existent. What with no insulation in the walls.)


When I lived in Japan it was in a relatively recent (built in the last 10 years) but not brand-new apartment block. Maybe if you're talking about a rural area or an old postwar Showa-era house, sure. But either way, the soundproofing was worlds better than any new construction in the US.

I'm in a 20 year old two-storey apartment right now (while we're building a new house), and the soundproofing isn't non-existent, and not as bad as in some other apartments I'm aware of (where you can't make a sound without the neighbors starting to knock on the walls/floors, and you're privy to things you don't actually want to hear..) - but we can hear every footstep when the neighbors climb the stairs to their upper floor. The more distant rooms are fine; we don't actually hear them talking. Most of the time.

> People cook with teflon-coated pans for the tiny convenience over a nitrided, ceramic, or seasoned cast iron pan.

...which has absolutely nothing to do with the PFOA that you might reasonably be concerned about. Teflon is chemically inert. It's literally used for human body implants. Teflon-coated pans are not your enemy. Fire-fighting foam, on the other hand -- you probably shouldn't bathe in it.

Any test that "detects" teflon in the generic category of "PFAS" is a hopelessly flawed test [1]. Unfortunately, a great many of these papers don't make the distinction, whether intentionally or due to incompetence, or simply because it's far easier to do that, and it gets better headlines.

[1] Important aside: historically, several of the major manufacturers of teflon had problems with PFOA contamination around the factories due to manufacturing processes. This is unrelated to your personal use of a Teflon pan, and also, the process has been changed. If you want to argue that the new process is also polluting, fine, make that argument -- but don't assert that the use of the final product is itself unsafe.


Plenty of people will use those pans and:

Overheat them, which means the stuff gets into the air. Many many pet birds have died of this only because they're more susceptible

Use the wrong material in them, meaning they start to scratch the Teflon layer.

I'm not saying you cannot use them right, but too many people don't and the product isn't safe when improperly used. This is true for many products but in this case plenty of people aren't aware they're holding it wrong.


> Overheat them, which means the stuff gets into the air. Many many pet birds have died of this only because they're more susceptible

And again, this has nothing to do with PFAS or PFOA. The principal cause is a complete breakdown of teflon into fluorinated small-molecule gases, such as hydrogen fluoride and tetrafluoroethylene. You're literally burning the coating off. It has as much relationship to PFOA as wood smoke has to wood.


> ...which has absolutely nothing to do with the PFOA that you might reasonably be concerned about. Teflon is chemically inert. It's literally used for human body implants. Teflon-coated pans are not your enemy. Fire-fighting foam, on the other hand -- you probably shouldn't bathe in it.

Unfortunately, that is not the case. Yes, Teflon is inert but only when it's not exposed to high temperatures (>350F). When heated, such as in a non-stick pan, Teflon gives off fumes which contain byproducts including breakdowns back into PFAS compounds. So /YES/ the use of the final product (as cookware) /is/ unsafe. NOBODY SHOULD BE USING TEFLON NONSTICK COOKWARE.


> Teflon gives off fumes which contain byproducts including breakdowns back into PFAS compounds.

Completely incorrect. Overheating (aka "burning") completely destroys the molecule, and releases small molecule gases, like hydrogen fluoride. These have no relation to PFAS, they can't turn back into PFAS, and they look nothing like PFAS.

It's like saying that the smoke from burning wood is, in fact, wood.


Teflon does not burn at 350F; it melts between 620F and 662F. At 350F and above, however, it starts off-gassing carbonyl fluoride, carbonyl difluoride, hydrogen fluoride, and various fluorinated alkanes and alkenes. PFAS is a broad term for Per- and polyfluoroalkyl substances which includes several of the compounds that off-gas from overheating Teflon. Off-gassing accelerates into partial decomposition as you cross 500F, until melting begins between 620F and 662F; past 662F you can begin burning Teflon.

As a general rule, if something gives off toxic fumes that kill birds, probably don't use it to cook your food, regardless of what specific compounds it's emitting, "canaries in coal mines" and all that.


> At 350F and above, however, it starts off-gassing carbonyl fluoride, carbonyl difluoride, hydrogen fluoride, and various fluorinated alkanes and alkenes

Wrong units. Starts happening at around ~250 C (~480F), not 350F. Completely depolymerizes at around 500C.

https://www.tandfonline.com/doi/abs/10.1080/0002889738506828
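Since the disagreement above is partly a unit mix-up, here's a trivial sanity-check conversion (a hypothetical helper, just to make the numbers concrete):

```python
def c_to_f(celsius: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# ~250 C, where significant off-gassing starts, is ~482 F -- well above 350 F.
print(c_to_f(250))  # 482.0
# ~500 C, where the polymer completely depolymerizes, is 932 F.
print(c_to_f(500))  # 932.0
```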

> PFAS is a broad term for Per- and polyfluoroalkyl substances which includes several of the compounds that off-gas from overheating Teflon.

Yes, I'm telling you that "PFAS" is a meaningless term that is so broad as to include everything from harmless chemicals (e.g. Teflon) to things that are genuinely toxic (trifluoroacetic acid). So using this term as "evidence" of toxicity is just circular logic.

> As a general rule, if something gives off toxic fumes that kill birds, probably don't use it to cook your food, regardless of what specific compounds it's emitting, "canaries in coal mines" and all that.

a) It doesn't, unless you specifically overheat it. Don't do that.

b) if that's your standard, you'll definitely want to look at that paper I just linked, because overheating butter in a cast iron pan also kills birds.

I look forward to your campaign against butter. It's certainly more harmful to public health than Teflon!

(To be clear, I am pro-butter and I vote.)


> Wrong units. Starts happening at around ~250 C (~480F), not 350F. Completely depolymerizes at around 500C.

Many places claim 500F is the temperature limit for normal usage of Teflon in a pan, however that's based on the temperature at which it starts degrading, off-gassing begins at lower temperatures. Also, every oil except refined avocado oil will surpass its smoke point at 500F and begin degrading as well, so really you should just be careful with temperature when cooking, regardless of material, but should definitely NOT be using Teflon coated pans.

> Yes, I'm telling you that "PFAS" is a meaningless term that is so broad as to include everything from harmless chemicals (e.g. Teflon) to things that are genuinely toxic (trifluoroacetic acid). So using this term as "evidence" of toxicity is just circular logic.

There are no PFAS that are non-toxic. Are you a paid industry shill?

> a) It doesn't, unless you specifically overheat it. Don't do that.

Overheating Teflon pans happens under normal usage simply by exposing it to heat without having food in it, preheating pans is normal behavior when cooking, and is /required/ to reach the Leidenfrost point in other materials (e.g. stainless steel). A material that you have to baby to avoid accidentally releasing toxic fumes /should NOT/ be used for cooking.

> b) if that's your standard, you'll definitely want to look at that paper I just linked, because overheating butter in a cast iron pan also kills birds.

They heated the butter to 500F to produce toxic fumes, which makes sense as you're basically straight up burning it at that point. Butter begins smoking between 310F and 350F depending on milk-fat content, and you should not burn butter. Besides all the other reasons, it tastes and smells horrible. Intentionally burning butter and incidental toxic off-gassing from normal pan preheating are not the same thing.


Teflon is not inert at very high temperatures. Nobody ever overheats a pan?

This has nothing to do with PFAS. When you heat teflon to 500C+, the molecules break down into small molecule fluorinated gases. These molecules are not PFAS, in any way.

The concern is not about the immediate effects of using the products, but the fact that they are now everywhere in the environment, including water supplies and our own bloodstreams.

It's a Japanese word for "weird". I'm guessing that OP is a bit of an otaku (aka "obsessed with Japan") -- which is either ironic or completely appropriate.

I thought it would be weird to replace (b) with something, so I decided to search for characters from other languages.

> He hasn't kept ahead of the destruction of the dollar very well.

The dollar is trading pretty much at 30-year historic highs relative to all other currencies. You have to go back to ~2000 to find a stronger era, and then the 1980s before that.

https://www.marketwatch.com/investing/index/dxy


They're talking about the decline in the purchasing power of a dollar over the past decade, not its value relative to other currencies at the moment.

I don't know how you know that, but even that argument is a straw man, unless you're asserting that all of the other currencies declined in value equally against whatever theoretical good(s) you're holding out as the objective standard for value.

I don't think you know what a straw man means, or purchasing power.


Well, in this case, it tells you that you may have contaminated the sample with your lab setup.


You can argue that current market multiples are higher than 1929 [1] - and they're certainly high - but this also ignores the mechanism that drove that crash, focusing only on the symptoms. We simply aren't doing the kind of consumer margin buying that drove the '29 crash. It isn't even close. Average schlubs were leveraged to the stratosphere to buy shares of boring industrial stocks.

[1] https://www.multpl.com/shiller-pe


> The US stock market has nearly tripled since then. Literally the best period of stock growth in history.

The only thing I meant to point out was that a very high stock price by itself is no guarantee that there isn't a crisis around the corner. We plugged a lot of holes after 2008 and then reversed a lot of those fixes, and I hear retail investors talking about their stocks at birthday parties again. Deja vu... of course this time it will be different. Or not. Let's just say that with the proverbial bull loose in the earthenware goods store, if we only end up with another financial crisis, that might actually not be so bad.


I actually calculated wrong. It went up 7.5x, not 3x.

In the Roaring Twenties, stockbrokers allowed clients 10:1 margin. Investors were not as well-informed as they are today. There was no deposit insurance.

The SEC wasn't nearly as powerful then as it is in 2024, and there was way more shady shit going on. In that respect, and with the repeal of Glass-Steagall, we're reverting to the pre-Depression era.


Because it’s an inverted claim of falsification it works for literally anything (I cannot prove that X will absolutely not hurt you), but you get pilloried if you put something in the blank that the herd happens to support.

We’ve reached the absurd point where all sides of the political spectrum have sacred cows, and an exceedingly poor understanding of scientific reasoning, and all sides also try to dunk on the others by claiming scientific authority.


You found a paper saying that contamination is possible. That doesn’t mean that most of these plastic studies are doing the necessary controls, let alone the (almost impossible) task of preventing the contamination in a laboratory setting where nanomolar detection levels are used to make broad claims.


Are more “controls” what is necessary here? The problem wasn’t plastic contamination, it was the presence of stearates. Distinguishing between stearates and microplastics sounds like a classification problem, not a control problem.

There is practically universal recognition among microplastics researchers that contamination is possible and that strong quality controls are needed, and to be transparent and reproducible, they have a habit of documenting their methodology. Many papers and discussions suggest avoiding all plastics as part of the methodology, e.g. “Do’s and don’ts of microplastic research: a comprehensive guide” https://www.oaepublish.com/articles/wecn.2023.61

Another thing to consider is that papers generally compare against baseline/control samples, and overestimating microplastics in baseline samples may lead to a lower ratio of reported microplastics in the test samples, not higher.


Many papers in this field are missing obvious controls, but you’re correct that controls alone are insufficient to solve this problem.

When you are taking measurements at the detection limit of any molecule that is widespread in the environment, you are going to have a difficult time of distinguishing signal from background. This requires sampling and replication and rigorous application of statistical inference.

> Another thing to consider is that papers generally compare against baseline/control samples,

Right, that’s what a control is.

> and overestimating microplastics in baseline samples may lead to a lower ratio of reported microplastics in the test samples, not higher.

There’s no such thing as “overestimating in baseline samples”, unless you’re just doing a different measurement entirely.

What you’re trying to say is that if there’s a chemical everywhere, the prevalence makes it harder to claim that small measurement differences in the “treatment” arm are significant. This is a feature, not a bug.


You’re still bringing up different issues than this article we are commenting on.

> There’s no such thing as “overestimating in baseline samples”

What do you mean? Contamination and mis-measurement of control samples is a thing that actually happens all the time, and invalidates experiments when discovered.

> What you’re trying to say is that if there’s a chemical everywhere, the prevalence makes it harder to claim that small measurement differences in the “treatment” arm are significant.

No. What I was trying to say is that if the control is either mis-measured, for example by accidentally counting stearates as microplastics, or contaminated, then the summary outcome may underestimate or understate the prevalence of microplastics in the test sample, even though the measurement over-estimated it.


> What do you mean? Contamination and mis-measurement of control samples is a thing that actually happens all the time, and invalidates experiments when discovered.

The entire point of a control is to test for that sort of contamination (or more generally, for malfunctions in the experimental workflow). In the case of a negative control, specifically, you're looking for a "positive" where one should not exist. If an experiment is set up such that you can obtain differential contamination in the controls but not the experimental arms, as you've described, then the entire experiment is invalid.

> What I was trying to say is that if the control is either mis-measured, for example by accidentally counting stearates as microplastics, or contaminated, then the summary outcome may underestimate or understate the prevalence of microplastics in the test sample, even though the measurement over-estimated it.

The control cannot be "mis-measured", any more or less than the other arms can be "mis-measured". You treat them identically, otherwise the control is not a control. Neither example you've given is an exception: if the assay mistakes chemical B for chemical A, then it will also do so for the non-controls. If the experimental process contaminates the controls, it will also contaminate the non-controls.

What you're missing is that there's no absolute "correct" measurement -- yes, the control may itself be contaminated with something you don't even know about, thus "understating" the absolute measurement of whatever thing you're looking for, but the absolute measurement was never the goal. You're looking for between-group differences, nothing more.

Just to make it clearer, if I were going to run an extremely naïve experiment of this sort (i.e. detection of trace chemical contamination C via super-sensitive assay A) with any hope of validity, I'd want to do multiple replications of a dilution series, each with independent negative and positive controls. I'd then use something like ANOVA to look for significant deviations across the group means. This is like the "science 101" version of the experimental design. Any failure of any control means the experiment goes in the trash. Any "significant" result that doesn't follow the expected dilution series patterns, again, goes in the trash.

(This is, of course, after doing everything you can to mitigate for baseline levels of the contaminant in the lab environment, which is a process that itself probably requires multiple failed iterations of the experiment I just described.)

Most of the plastic contamination papers I have read are far, far from even that naïve baseline.
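To make that "science 101" design concrete, here's a minimal pure-Python sketch: a hypothetical dilution series on top of a shared lab background, with a hand-rolled one-way ANOVA F-statistic across the groups. All numbers are made up, and a real analysis would use a proper stats package and independent positive/negative controls per replicate.

```python
import random

random.seed(0)

def one_way_anova_F(groups):
    """One-way ANOVA F-statistic: between-group vs. within-group variance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group sum of squares, weighted by group size.
    ssb = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    # Within-group sum of squares (the noise floor).
    ssw = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ssb / (k - 1)) / (ssw / (n - k))

# Hypothetical dilution series: negative control, then increasing spike levels.
true_levels = [0.0, 1.0, 10.0, 100.0]
background = 5.0  # lab background present in every arm, controls included
groups = [
    [background + level + random.gauss(0, 1.0) for _ in range(6)]
    for level in true_levels
]

F = one_way_anova_F(groups)
print(f"F = {F:.1f}")  # large F: group means differ beyond within-group noise
```

The point of the dilution series is the expected monotone pattern in the group means; a "significant" F without that pattern is exactly the kind of result that should go in the trash.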


> The entire point of a control is to test for that sort of contamination

No, the point of a control is to give you a reference point that shares all the systemic biases and unknown unknowns, not to detect those biases. If you follow the same procedure on a known null and on your experiment and observe an effect, assuming you really did exactly the same thing except the studied intervention, you can subtract out the bias.

This is one example of technical jargon diverging from colloquial or intuitive use, and it is the type of thing people who haven't had statistics or scientific-process education often struggle with, because they keep applying their colloquial intuitions.

You talk like you understand this in the rest of the comment, so I'm confused by this framing. The person you are replying to points out (in my reading) that contamination of the control 1) does happen in practice (in the sense that there was an accidental intervention), and 2) if the gloves contaminated both the measurements and the control the same way, then the control is serving exactly its purpose.
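A minimal sketch of that subtract-out-the-bias logic, with hypothetical numbers (the bias value, effect size, and noise level are all invented for illustration):

```python
import random

random.seed(1)

SHARED_BIAS = 4.2   # hypothetical systematic bias (e.g. glove contamination)
TRUE_EFFECT = 3.0   # hypothetical effect of the studied intervention

def measure(true_value: float) -> float:
    """Same procedure applied to every sample: true value + bias + noise."""
    return true_value + SHARED_BIAS + random.gauss(0, 0.1)

control = [measure(0.0) for _ in range(100)]
treatment = [measure(TRUE_EFFECT) for _ in range(100)]

# Because both arms carry the same bias, differencing removes it.
estimated_effect = sum(treatment) / len(treatment) - sum(control) / len(control)
print(round(estimated_effect, 2))  # close to TRUE_EFFECT
```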


You’re repeating several of my points in your own words, supporting them and not arguing with them, even though your language and emphasis suggests you think you are arguing.

> then the entire experiment is invalid

Isn’t that what I said? You even quoted me saying it. But I didn’t say anything about only control being contaminated or mis-measured, I think you’re assuming something I didn’t say. Validity is, of course, compromised if the control is compromised, regardless of what happens to the test samples.

> The control cannot be “mis-measured” […] yes, the control may itself be contaminated […]

So which is it? Isn’t the article we’re commenting on talking about the possibility of mis-measuring? Are you suggesting this article cannot possibly be an issue when measuring control samples? Why not?

Controls absolutely can be mis-measured or contaminated or both. It has been known to happen. It’s bad when this happens because it means the experiment has to be re-done.

> If the experimental process contaminates the controls, it will also contaminate the non-controls

Yes! This is exactly what I was implying, and is exactly how you might end up underestimating the relative presence of whatever you’re looking for in the test, if your classification procedure overestimates it.

> You’re looking for between-group differences

Yes! and this is why if, for example, you didn’t notice your control had stearates and you counted them as microplastics accidentally, and then reported that your test sample had 2x more microplastics than your control, you might have missed the fact that your test actually had 10x more microplastics, or that your control actually had none when you thought incorrectly that it had some.

This, of course, is not the only possible outcome, nor the only way the results might be distorted. But it is one possible outcome that the Michigan paper at hand is warning against, no?
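A toy numeric version of that scenario (all counts hypothetical): the same stearate misclassification inflates both arms, so the reported ratio understates the true one.

```python
# Hypothetical particle counts, purely to illustrate the distortion described above.
stearates_miscounted = 9   # stearate particles wrongly classified as microplastic
true_mp_control = 1        # actual microplastic particles in the control sample
true_mp_test = 10          # actual microplastic particles in the test sample

# The classifier adds the same bias to both arms.
reported_control = true_mp_control + stearates_miscounted   # 10
reported_test = true_mp_test + stearates_miscounted         # 19

print(f"reported ratio: {reported_test / reported_control:.1f}x")  # 1.9x
print(f"true ratio:     {true_mp_test / true_mp_control:.1f}x")    # 10.0x
```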

> Most of the papers I have read are far, far from even that naïve baseline.

Short of it, or exceeding it? Based on earlier comments, I assume you mean they’re not meeting your standards. I don’t know what you’ve read, and my brief googling did not seem to support your claims here so far. Can you provide some references? It would be especially helpful if you showed recent/modern SOTA papers, work that is considered accurate, and is highly referenced.


Any scientific paper that does not document how things were done (methodologies) is basically worthless in the search for truth.


I agree completely. My point is that documenting methodology is standard practice, as is strict quality control, in the microplastics literature. I don’t know what controls are missing according to GP, and we don’t yet have references here to back up that claim. By and large I think researchers are aware of the difficulties measuring this stuff, and doing everything they can to ensure valid science.


Luckily HN software developers, the foremost authority on literally every subject imaginable, are here to bless the world with their insights.


I think there's an important distinction between instances of smug better-knowing:

"I have unique insight as a non-expert that all experts miss and the entire field is blind to" -> usually nonsense

"I think in this specific instance academically qualified people are missing something that's obvious to me" -> often true.


There’s also the possibility that some of us actually, you know…have subject-matter expertise.


Doubtful, in your case, no?

"Nanomolar" is a dissolved-species concentration unit. It doesn't apply to spectroscopic particle counting.


Uh, yeah. I know what the word means. See my response to the other comment where you say the same thing.


Spiritual equivalent of a life sciences forum discovering memory safety, one person who wrote code for a bit saying they wrote a memory bug in C once, then someone clutching pearls about why all programmers irresponsibly write memory unsafe code given it has a global impact.

Been here 16 years, it's always an adventure seeing whether stuff like this falls into:

A) Polite interest that doesn't turn into self-keyword-association

B) Science journalism bad

C) Can you believe no one else knows what they're doing.

(A) almost never happens, has to avoid being top 10 on front page and/or be early morning/late night for North America and Europe. (i.e. most of the audience)

(B) is reserved for physics and math.

(C) is default leftover.

Weekends are horrible because you'll get a "harshin' the vibe" penalty if you push back at all. People will pick at your link but not the main one and treat you like you're argumentative. (i.e. 'you're taking things too seriously' but a thoughtful person's version)


> Spiritual equivalent of a life sciences forum discovering memory safety, one person who wrote code for a bit saying they wrote a memory bug in C once, then someone clutching pearls about why programmers irresponsibly write memory unsafe code given it has a global impact.

I used to be a code monkey, I wrote systems software at megacorps, and still can't understand why so many programmers irresponsibly write memory unsafe code given it has a global impact.

So Poe's law applies here.


That's the analogy working as intended: the answer to "why do programmers still write memory-unsafe code" is the same shape as "why do microplastics researchers still wear gloves." The real answer is boring and full of tradeoffs. The HN thread version skips to indignation: "they never thought of contamination so ipso facto all the research is suspect"

(to go a bit further, in case it's confusing: both you and I agree on "why do people opt-in to memunsafe code in 2026? There’s no reason to" - yet, we also understand why Linux/Android/Windows/macOS/ffmpeg/ls aren't 100% $INSERT_MEM_SAFE_LANGUAGE yet, and in fact, most new code written for them is memunsafe)


Thank you for helping me understand. I get it now.


You’re ignoring the article to grind your axe.


What do you mean? (Genuinely seems you replied to wrong comment to me. What axe? What’s in the article that’s been ignored?)


They may have meant .exe


You joke, but given that SWE/AI researchers literally invented AI that does everything else for them and is often super-human at intelligence across most things, I would unironically prefer the opinion of the creator of such a system over most others for most things.


I cooked a steak yesterday therefore I am an expert in biology.

Creating a user interface for the world’s knowledge doesn’t make the developer an expert on the knowledge that the interface holds in its database. Regardless of how sophisticated that interface might be.


'I disagree, therefore I am an expert in skepticism.' The sword cuts both ways.


No it doesn’t. What you’re describing is an oxymoron.


Please. You don't get special treatment for being a skeptic. Either you have the credentials or you don't. Prove you're qualified.


You don’t need to be qualified to be unsure about something. Being unsure is a healthy position because it’s an acknowledgment that you don’t know something entirely. Which can also mean you have an open mind to learn more about that subject.

Being certain, on the other hand, requires an assumption that you are a subject expert.

But this is all moot anyway because you’re constructing an elaborate strawman here. The original point was that the GP (possibly you?) trusts SWEs more than others because they built AI. And I said building databases doesn’t make you smart at the subject loaded into the database.

Really, this whole premise of SWEs assuming expertise on subjects they’ve trained AI on says more about the Dunning-Kruger effect than anything of value in our little tangent.


You can be skeptical in wrong ways. See solipsism for example.

Typically when I get genuine responses to the question, "What would change your mind?" it's an incredibly high bar that is practically impossible to achieve. That's not necessarily a bad thing, but when skepticism is applied without deliberation, it supports biases rather than truth.

So yes, you do need to be qualified to be skeptical, SWEs doubly so.


Oh wow I've never seen such a prime specimen in the wild. I feel like you should be pinned to a piece of cardboard in a drawer somewhere.


You'd trust a programmer to be your doctor? Or design the structure of your house?


Not OP, but:

> "You found a paper"

johnbarron didn't find it. The authors cited it as foundational to their own work. It's ref. 38 in the paper under discussion. From the paper: "this finding had not been reported in the MP literature until 2020, when Witzig et al. reported that laboratory gloves submerged in water leached residues that were misidentified as polyethylene."[1]

> "most of these plastic studies are [not] doing the necessary controls"

which studies? The paper they linked surveys 26 QA/QC review articles[1]. Seems well understood.

> "a laboratory setting where nanomolar detection levels are used to make broad claims"

This is like saying "miles per gallon" when discussing weight. "nanomolar detection levels"...microplastics are individual particles identified by spectroscopy, reported as particles per mm^2. "Nanomolar" is a dissolved-species concentration unit. It has nothing to do with particle counting. (I, and other laymen, understand what you mean but you go on later in the thread to justify your unsourced and unjustified claims here via your subject-matter expertise.)

> "(almost impossible) task of preventing the contamination"

The paper provides open-access spectral libraries and conformal prediction workflows to identify and subtract stearate false positives from existing datasets[1]. Prevention isn't the strategy. Correction is. That's the entire point of the paper they linked and the follow-up in [2]

[1] https://pubs.rsc.org/en/content/articlehtml/2026/ay/d5ay0180...

[2] https://news.umich.edu/nitrile-and-latex-gloves-may-cause-ov...


> This is like saying "miles per gallon" when discussing weight. "nanomolar detection levels"...microplastics are individual particles identified by spectroscopy, reported as particles per mm^2. "Nanomolar" is a dissolved-species concentration unit. It has nothing to do with particle counting. (I, and other laymen, understand what you mean but you go on later in the thread to justify your unsourced and unjustified claims here via your subject-matter expertise.)

This paper used “light-based spectroscopy” [1]. Many others use methods that depend on gas chromatography or NMR. A relatively infamous recent example used pyrolysis GCMS to make low-concentration measurements (hence: nanomolar), which they credulously scaled up by some huge factor, and then made idiotic claims about plastic spoons in brains.

Relatively little quantitative science in this area depends on counting plastic particles in microscopic images, but it’s what gets headlines, because laypeople understand pictures.

[1] as an aside, the choice of terminology here is noteworthy. A simple visual light absorption spectrum is also “light based spectroscopy”, but it measures the aggregate response of a sample of a heterogeneous mixture, and is conventionally converted to molar equivalents via some sort of calibration curve (otherwise you can’t conclude anything). But there could be other approaches that are closer to microscopy, which they also discuss. “Particles per square millimeter” is also a unit of concentration (albeit a shitty one, unless your particles are of uniform mass).

Anyway, the point is that these kinds of quantitative analyses are all trying to do measurements that are fundamentally about concentration, which is why I chose the words that I did.


> ...

"1 nanomole of polyethylene" requires you to pick an arbitrary average molecular weight.

This changes the answer by orders of magnitude depending on what you pick.

Which is why nobody does it.
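To make the dependence concrete, here's a toy conversion (hypothetical numbers: the measured value, the assumed tissue density of ~1 g/mL, and the candidate chain weights are all illustrative, not from the paper). Converting a mass concentration to a molar one forces you to pick an average chain molecular weight, and PE chains span roughly 10 kDa to 1000 kDa:

```python
def mass_to_nanomolar(ug_per_g, avg_mw_da):
    """Convert ug of polymer per g of tissue to nmol/L, assuming
    tissue density ~1 g/mL and a chosen average chain MW in Daltons."""
    g_per_liter = ug_per_g * 1e-6 * 1000  # 1 g tissue ~ 1 mL, so x1000 to per-liter
    return g_per_liter / avg_mw_da * 1e9  # mol/L -> nmol/L

# Same hypothetical measurement, three plausible chain weights:
for mw in (10_000, 100_000, 1_000_000):
    print(f"MW {mw:>9} Da -> {mass_to_nanomolar(4.8, mw):6.1f} nM")
```

The same measured mass maps to anywhere from ~5 nM to ~500 nM depending on an arbitrary choice, which is the "orders of magnitude" problem above.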

> Relatively little quantitative science in this area depends on counting plastic particles in microscopic images...Many others use methods that depend on gas chromatography or NMR.

So we're dismissive of some subset of papers, because they get false positives using toy methods.

Real science would use gas chromatography.

But...the paper we're dismissing tested gas chromatography. And found the same false positive. [1, in abstract]

> A relatively infamous recent example used pyrolysis GCMS to make low-concentration measurements (hence: nanomolar)

The brain study I'm guessing you are referring to, [2], measured low concentrations, yes.

But it reported them in ug/g.

Because polymers don't have a defined molecular weight.

> made idiotic claims about plastic spoons in brains

The brain study I'm guessing you are referring to, [2], does not mention spoons, or, come close.

Are we sure there's a paper that did that?

[1] Witzig et al, https://pubs.acs.org/doi/10.1021/acs.est.0c03742, "Therefore, u-Raman, u-FTIR, and pyr-GC/MS were further tested for their capability to distinguish among PE, sodium dodecyl sulfate, and stearates. It became clear that stearates and sodium dodecyl sulfates can cause substantial overestimation of PE."

[2] Campen et al, https://pubmed.ncbi.nlm.nih.gov/38765967/, "Bioaccumulation of Microplastics in Decedent Human Brains"


Doesn't take an expert to see that fatty acids and hydrocarbon chains from the degradation of polyethylene look nearly the same.


Not sure what you mean or how it’s related. If the idea is microplastics aren’t actually a problem, I’m totally open to that. But “it’s possible everyone involved is overrating it due to scientists seeing fatty acids or hydrocarbons and calling it plastic” needs a little more than anon assertion :)


PE consists of very long hydrocarbon chains. It can degrade into shorter hydrocarbon chains. Fatty acids also have long hydrocarbon chains. The detection method for microplastics commonly involves pyrolysis, which breaks down polymers into smaller molecules. It's not hard to see that they'll end up looking nearly the same.
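A toy arithmetic sketch of that overlap (illustrative only, not a spectroscopy model): chain scission of PE yields short alkene/alkane fragments, and pyrolysis of a stearate's C17 hydrocarbon tail can yield fragments in the same CnH2n series, so a fragment mass alone can't tell you the source.

```python
H, C = 1.008, 12.011  # average atomic masses, Da

def alkene_mass(n):
    """Mass of a CnH2n alkene fragment, as produced by chain scission."""
    return n * C + 2 * n * H

# e.g. a hexadecene (C16H32) fragment has the same mass whether it
# came from a polyethylene chain or a stearate tail:
print(f"C16H32 fragment: {alkene_mass(16):.2f} Da")
```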


Fair enough! I'll add that to my pile of "evidence of microplastics overestimation"


>> That doesn’t mean that most of these plastic studies are doing the necessary controls

That was never my argument. Read it again.


> Which leaves as observation, you can only do truly creative work - in a high trust society, where people trust you with the resources and leave you alone, after a initial proof of ability.

I don’t know about “high trust”, but I can say with confidence that the “make more mistakes” thesis misses a critical point: evolutionary winnowing isn’t so great if you’re one of the thousands of “adjacent” organisms that didn’t survive. Which, statistically, you will be. And the people who are trusted with resources and squander them without results will be less trusted in the future [1].

Point being, mistakes always have a cost, and while it can be smart to try to minimize that cost in certain scenarios (amateur painting), it can be a terrible idea in other contexts (open-heart surgery). Pick your optimization algorithm wisely.

What you’re characterizing as “low trust” is, in most cases, a system that isn’t trying to optimize for creativity, and that’s fine. You don’t want your bank to be “creative” with accounting, for example.

[1] Sort of. Unfortunately, humans gonna monkey, and the high-status monkeys get a lot of unfair credit for past successes, to the point of completely disregarding the true quality of their current work. So you see people who have lost literally billions of dollars in comically incompetent entrepreneurial disasters, only to be able to run out a year later and raise hundreds of millions more for a random idea.

