
Interceptors are just wrappers in disguise.

    const myfetch = async (req, options = {}) => {
        options.headers = options.headers || {};
        options.headers['Authorization'] = token; // token from surrounding scope

        const res = await fetch(new Request(req, options));
        if (res.status === 401) {
            // do your thing
            throw new Error("oh no");
        }
        return res;
    };
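For example, calling it looks just like plain fetch (the endpoint here is a made-up placeholder, and `token` is assumed to come from the enclosing scope):

    const res = await myfetch('/api/user');
    const user = await res.json();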
Convenience is a thing, but it doesn't require a massive library.

That fetch requires so many users to rewrite the same code - code that was already handled well by every existing node HTTP client - says something about the standards process.

It could also be trivially written for XMLHttpRequest or any node client if needed. Would be nice if they had always been the same, but oh well - having a server and client version isn't that bad.
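
For illustration, a minimal sketch of the same wrapper for XMLHttpRequest might look like this (again, `token` is assumed to be in scope, and error handling is kept to a bare minimum):

    const myxhr = (url, { method = 'GET', body = null } = {}) =>
        new Promise((resolve, reject) => {
            const xhr = new XMLHttpRequest();
            xhr.open(method, url);
            xhr.setRequestHeader('Authorization', token);
            xhr.onload = () => {
                // same 401 handling as the fetch version
                if (xhr.status === 401) reject(new Error("oh no"));
                else resolve(xhr);
            };
            xhr.onerror = () => reject(new Error("network error"));
            xhr.send(body);
        });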

Because it is so few lines it is much more sensible to have everyone duplicate that little snippet manually than import a library and write interceptors for that...

(Not only because the integration with the library would likely be more lines of code, but also because a library is a significant liability on several levels that must be justified by significant, not minor, recurring savings.)


> Because it is so few lines it is much more sensible to have everyone duplicate that little snippet manually

Mine's about 100 LOC. There's a lot you can get wrong. Having a way to use a known working version and update that rather than adding a hundred potentially unnecessary lines of code is a good thing. https://github.com/mikemaccana/fetch-unfucked/blob/master/sr...

> import a library and write interceptors for that...

What are you suggesting people would have to intercept? Just import a library you trust and use it.


Your wrapper does do a bunch of extra things that aren't necessary, but pulling in a library here is a far greater maintenance and security liability than writing those 100 lines of trivial code for the umpteenth time.

So yes, you should just write and keep those lines. The fact that you haven't touched that file in 3 years is a great anecdotal indicator of how little maintenance such a wrapper requires, so the primary reason for using a library is non-existent. It's not like the fetch API changes in any notable way, nor do the needs of the app making API calls - and as long as the wrapper is slim, it won't get in the way of an app changing its demands of fetch.

Now, if we were dealing with constantly changing lines, several hundred or even thousand lines, etc., then it would be a different story.


But you said so yourself they are necessary… otherwise you would just use fetch. This reasoning is going around in circles.

Why the 'but'? Where is the circular reasoning? What are you suggesting we have to intercept?

- Don't waste time rewriting and maintaining code unnecessarily. Install a package and use it.

- Have a minimum release age.

I do not know what the issue is.


but it does for massive DDoS :p

Could also be looping videos - some browsers had bugs whereby looping videos would continuously redownload.

I recall some years back having corporate IT ask me why I was downloading terabytes off this weird website called "imgur" that they didn't know about. Realized I had a tab open with a stupid jackie chan mp4 a few seconds long on some background workspace, and that had just kept downloading over and over and over and over...


First thing would be that a small geofence (i.e., one drawn narrowly around a church, using whatever data is available) is entirely orthogonal to having high-precision, high-quality location data available.

I won't claim with certainty that this is the case, but it seems likely that Factual was overselling their capabilities. That, or they relied specifically on having users grant high precision location data access and had nothing otherwise.

Apps that already need location data are probably the most likely sources of collecting such data - food apps, dating apps, chat apps you have sent your location in, ...


"Apps that already need location data are probably the most likely sources of collecting such data"

Yes, and many companies have access to both feeds...


And yet, who would you trust more - a CEO that raised 100M on their "vision" or someone who got slapped in the face?


We also shouldn't call it "vegan leather" when it is in fact just plastic.

Naming departs from technical accuracy when adopted by the masses, as they retrofit their common understanding. Wouldn't be too surprised if "vaccine" ends up covering other strong defense-boosters.


> "vegan leather" when it is in fact just plastic.

https://knowingfabric.com/mushroom-leather-mycelium-sustaina...

is pretty neat


Mycelium is neat, but last time I heard of it the problem was far, far too low manufacturing throughput.

I don't think anyone would even consider marketing that as "vegan leather", as doing so would mean putting you in the same bucket as cheap-as-dirt polyurethane (which is what regular "vegan leather" is), at an astronomically higher price. You'd pick a new term to differentiate.

I vote for "shroomskin".


excellent name!


Interesting topic, offensive website. Back to the story …


I found it funny because, in the opposite direction, people accused Tesla of naming "autopilot" misleadingly, because it gave them the impression of fully unattended self-driving.

In aviation, autopilot features were until recently (and still for GA pilots) essentially just cruise control: maintain this speed and heading, maintain this climb rate and heading, maintain this bank angle, etc.


Because Tesla was claiming in 2016 that "next year" it would be able to drive across the United States without any inputs.


Well, okay, but that’s like 95% of flying.


It’s the other 5% that takes 90% of effort :)


Though it's flown by the highly qualified and extensively trained 0.1%, so the chance of misunderstanding by a pilot is like 0.00001% or less.



Yes, but in this case the name is likely to actually reduce adoption, not increase it.


Wouldn't be too surprised, either - but I still think there's merit in using words in a more precise manner than the marketing department would like.


Mushroom leather says hello


A good example for the discussion: leather being animal skin, which obviously cannot come from a mushroom.

Assuming you were countering my vegan leather claim: products marketed as "vegan leather" are polyurethane or similar, and for marketing reasons you would use a different term to differentiate if you did something fancier. My gut feeling is that a mycelium-based product would be far more expensive than simple polyurethane, and quite an upsell.


I mean the word “vaccine” literally specifically references cow pox, so it’s already broadened. No reason not to go up another level.


Their subscriptions aren't cheap, and it has nothing really to do with them controlling the system.

It's just price differentiation - they know consumers are price sensitive, and that companies wanting to use their APIs to build products (so they can slap AI on their portfolio and get access to AI-related investor money) can be milked. On the consumer-facing front, they live off branding, and if you're not using Claude Code, you might not associate the tool with Anthropic, which means losing publicity that drives API sales.


Well, not really. It means you have a renderer that is closer to being portable to web, not an editor that will run in web "with some additional work". The renderer was already modular before this PR.


With the disclaimer that I am comparing to the memory of some entry-level cameras, I would still say that it's way too noisy.

Even on old, entry-level APS-C cameras, ISO1600 is normally very usable. What is rendered here at ISO1600 feels more like the "get the picture at any cost" levels of ISO, which on those limited cameras would be something like ISO6400+.

Heck, the original pictures (there is one for each aperture setting) are taken at ISO640 (Canon EOS 5D MarkII at 67mm)!

(Granted, many are too allergic to noise and end up missing a picture instead of just taking the noisy one, which is a shame - but that's another story entirely.)


Noise depends a lot on the actual amount of light hitting the sensor per unit of time, which is not really a part of the simulation here. ISO 1600 has been quite usable in daylight for a very long time; at night it's a somewhat different story.
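
To put a rough number on the light point: photon shot noise is Poisson-distributed, so signal-to-noise grows with the square root of the photons collected. A toy sketch (the photon counts are made-up illustrative figures):

    // Shot noise is Poisson: a mean of n photons has stddev sqrt(n),
    // so SNR = n / sqrt(n) = sqrt(n).
    const snr = (photons) => Math.sqrt(photons);

    // Same ISO 1600 setting, very different light per pixel (made up):
    console.log(snr(10000)); // daylight-ish: SNR 100, looks clean
    console.log(snr(100));   // night-ish: SNR 10, visibly noisy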

The amount and appearance of noise also heavily depends on whether you're looking at a RAW image before noise processing or a cooked JPEG. Noise reduction is really good these days but you might be surprised by what files from even a modern camera look like before any processing.

That said, I do think the simulation here exaggerates the effect of noise for clarity. (It also appears to be about six years old.)


The kind of noise also makes a huge difference. Chroma noise looks like ugly splotches of colour, whereas luma noise can add positively to the character of the image. Fortunately humans are less sensitive to chroma resolution so denoising can be done more aggressively in the ab channels of Lab space.
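
A sketch of that idea, assuming the image is already split into planar L/a/b channels (the blur radii are arbitrary illustrative values, and a simple box blur stands in for a real denoiser):

    // Blur one channel with a box filter of radius r.
    const boxBlur = (ch, w, h, r) => {
        const out = new Float32Array(ch.length);
        for (let y = 0; y < h; y++) {
            for (let x = 0; x < w; x++) {
                let sum = 0, n = 0;
                for (let dy = -r; dy <= r; dy++) {
                    for (let dx = -r; dx <= r; dx++) {
                        const yy = y + dy, xx = x + dx;
                        if (yy >= 0 && yy < h && xx >= 0 && xx < w) {
                            sum += ch[yy * w + xx];
                            n++;
                        }
                    }
                }
                out[y * w + x] = sum / n;
            }
        }
        return out;
    };

    // Gentle on luma (detail lives there), aggressive on chroma,
    // since the eye barely resolves colour at high frequencies.
    const denoiseLab = ({ L, a, b }, w, h) => ({
        L: boxBlur(L, w, h, 1),
        a: boxBlur(a, w, h, 4),
        b: boxBlur(b, w, h, 4),
    });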

Yes, this simulation exaggerates a lot. Either that, or contains a tiny crop of a larger image.


Yeah, I don't think that it's easy to reproduce noise (if it was, noise reduction would be even better). Also, bokeh/depth of field. That's not so easy to reproduce (although AI may change that).


Rather than a moat of details, it's first-mover advantage. Anyone can run a credit card network, but merchants and banks need to support them. Many others exist, but the issue is that they don't have widespread adoption. Solutions that work exist, which means the lesser supported alternative is not widely used, which again reduces reason for wider adoption...

Regulation changes "why bother" to "oh crap".


Yup. Once this is built, if adoption is lacking, it's not hard to imagine the EU making it the standard payment option.


The real outcome is mostly a change in workflow and a reasonable increase in throughput. There might be a 10x or even 100x increase in creation of tiny tools or apps (yay to another 1000 budget assistant/egg timer/etc. apps on the app/play store), but hardly something one would notice.

To be honest, I think the surrounding paragraph lumps together all anti-AI sentiments.

For example, there is a big difference between "all AI output is slop" (which is objectively false) and "AI enables sloppy people to do sloppy work" (which is objectively true), and there's a whole spectrum.

What bugs me personally is not at all my own usage of these tools, but the increase in workload caused by other people using these tools to drown me in nonsensical garbage. In recent months, the extra workload has far exceeded my own productivity gains.

For the non-technical, imagine a hypochondriac using ChatGPT to generate hundreds of pages of "health analysis" that they then hand to their doctor expecting a thorough read and opinion, vs. the doctor using ChatGPT for sparring on a particular issue.


>people using these tools to drown me in nonsensical garbage

https://en.wikipedia.org/wiki/Brandolini%27s_law

>The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.

