parl_match's comments | Hacker News

Careful with absolutist statements :)

macOS can in fact be configured to use a third-party IdP, including interactive elements, on loginwindow.

So, you could build your own through the ExtensibleSingleSignOn profile payload and the Extensible Enterprise SSO macOS plugin API. You would do Touch ID, and then have it pop your own custom window/app, providing a prompt through that API, except the prompt just accepts a hardcoded value (or something, idk)

https://youtu.be/ph37Yd1vV-c

So yes, macOS can in fact do that. Just not out of the box. I strongly believe that it is a glaring omission, or at least something they should gate through lockdown mode. idk!


I've long thought the Apple platform has two glaring omissions:

- touchid and biometric configuration profiles (standard, paranoid, extra paranoid)

- versioning for icloud backup

The simple fact is that there is no one-size-fits-all use case for this.

Biometrics are great for the average user! They reduce shoulder surfing and increase security.

But for some users, you might want two factor for biometrics (such as an apple watch), or short windows before password entry is forced. You might want both biometrics AND password entry required. You might want to enable biometrics only when two factor is enabled.

Look, I'm not saying that what I've said is the ideal setup, by the way. Just that there is a lot of room for improvement versus the status quo.


and consumer/prosumer pcie encapsulation over thunderbolt over fiber was happening in 2015. maybe even earlier

still, it's incredibly cool for one guy to pull this off on his own. demonstrates mastery of the subject


> Advertisement for some company

quote please

> generously sprinkled with anti-communism

quote please


> At some point SBCs that require a custom linux image will become unacceptable, right?

The flash images contain information used by the bios to configure and bring up the device. It's more than just a filesystem. Just because it's not the standard consoomer "bios menu" you're used to doesn't mean it's wrong. It's just different.

These boards are based off of solutions not generally made available to the public. As a result, they require a small amount of technical knowledge beyond what operating a consumer PC might require.

So, packaging a standard arm linux install into a "custom" image is perfectly fine, to be honest.


If the image contains information required to bring up the device, why isn't that data shipped in firmware?

> If the image contains information required to bring up the device, why isn't that data shipped in firmware?

the firmware is usually an extremely minimal set of boot routines loaded on the SOC package itself. to save space and cost, their goal is to jump to an external program.

so, many reasons

- firmware is less modular, meaning you cant ship hardware variants without also shipping firmware updates (the boot blob contains the device tree). also raises cost (see next)

- requires flash, which adds to BOM. intended designs of these ultra low cost SOCs would simply ship a single emmc (which the SD card replaces)

- no guaranteed input device for interactive setup. they'd have to make ui variants, including for weird embedded devices (such as a transit kiosk). and who is that for? a technician who would just reimage the device anyways?

- firmware updates in the field add more complexity. these are often low service or automatic service devices
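to make the "boot blob contains the device tree" point concrete, here's a heavily trimmed, hypothetical device tree source. every node name, compatible string, and address below is made up for illustration:

```dts
/dts-v1/;

/ {
    /* hypothetical board/SoC identifiers */
    model = "Example SBC";
    compatible = "example,example-soc";

    memory@40000000 {
        device_type = "memory";
        /* 512 MiB of DRAM starting at 0x40000000 */
        reg = <0x40000000 0x20000000>;
    };

    /* the eMMC/SD controller the next boot stage is loaded from */
    mmc@10010000 {
        compatible = "example,example-mmc";
        reg = <0x10010000 0x1000>;
        bus-width = <8>;
    };
};
```

shipping this as data inside the image, instead of baking it into on-board flash, is exactly what lets one image family cover several board variants: swap the device tree, keep the firmware.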

anyways if you're shipping a highly margin sensitive, mass market device (such as a set top box, which a lot of these chipsets were designed for), the product is not only the SOC but the board reference design. when you buy a pi-style product, you're usually missing out on a huge amount of normally-included ecosystem.

that means that you can get a SBC for cheap using mass produced merchant silicon, but the consumer experience is sub-par. after all, this wasn't designed for your use case :)


In some cases the built-in firmware is very barebones, just enough to get U-boot to load up and do the rest of the job.

Yes, you are crazy for thinking that. The extra RAM is useful for small LLMs and also for running lots of docker containers. The very low power consumption makes it ideal for a low-end home server.

I use the 16GB SKU to host a bunch of containers and some light debugging tools, and the power it sips at idle, compared to my previous home server, will probably pay for the whole board within about 5 years.


You can just as well not run docker. 1GiB machine can run a lot of server software, if RAM is not wasted on having duplicate OSes on one machine.


Docker is about containerization/sandboxing, you don't need to duplicate the OS. You can run your app as the init process for the sandbox with nothing else running in the background.
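A minimal sketch of that pattern, assuming a statically linked binary named `myapp` (hypothetical):

```dockerfile
# Nothing but the binary goes into the image: no distro userland,
# so there is no second OS to duplicate in RAM or on disk.
FROM scratch
COPY myapp /myapp
# myapp becomes PID 1 of the container; no shell, no init, no daemons.
ENTRYPOINT ["/myapp"]
```

`docker top` on the running container would then show a single process.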


That makes docker entirely useless if you use it just for sandboxing. Systemd services can do all that just fine, without all the complexity of docker.
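For illustration, most of the sandboxing can be expressed in a few unit directives (a sketch; `myapp` and its path are placeholders):

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=Sandboxed service without a container runtime

[Service]
ExecStart=/usr/local/bin/myapp
# throwaway UID, allocated at start and released at stop
DynamicUser=yes
# read-only /usr and /etc, no /home, private /tmp and /dev
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
# the service and its children can never gain privileges
NoNewPrivileges=yes

[Install]
WantedBy=multi-user.target
```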


I think that on Linux docker is not nearly as resource intensive as on Mac. Not sure of the actual memory pressure from things like not sharing shared libs between processes, granted.


That's the major problem. The more shared libs the app is using, the worse this is.


Containers are not Virtual Machines. 1GB cannot run a lot of server software.

If stuff is written in .NET, Java, or JavaScript, hosting a non-trivial web app can use several hundred megabytes of memory.


Any node server app will be ~50-100 MiB, because that's roughly the size of the node binary + shared deps + some runtime state for your app. If you failed to optimize things correctly, and you're storing and working with lots of data in the node process itself instead of letting it serve as a thin intermediary between the HTTP layer and a database or other backend services, you may see memory spikes well above that, but that should be avoided in any case, for multiple reasons.

And most of this 50-100 MiB will be shared if you run multiple node services on the same machine the old way. So you can run 6 node app servers this way, and they'll consume e.g. 150 MiB of RAM total.

With docker, it's anyone's guess how much RAM running 6 node backend apps will consume, because it depends on how many things can be shared, and usually it will be nothing.
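One way to check the sharing claim on Linux is to compare RSS (which counts shared pages fully in every process) against PSS (which splits each shared page among its sharers). A rough sketch, assuming the processes are literally named `node` and are owned by the current user:

```shell
#!/bin/sh
# For each node process, print RSS vs PSS in kB.
# If PSS is much lower than RSS, the node binary and shared libs
# really are being shared between the processes.
for pid in $(pgrep -x node); do
    rss=$(( $(awk '/^Rss:/ {s += $2} END {print s+0}' "/proc/$pid/smaps") ))
    pss=$(( $(awk '/^Pss:/ {s += $2} END {print s+0}' "/proc/$pid/smaps") ))
    echo "pid=$pid rss=${rss}kB pss=${pss}kB"
done
```

Run the same loop against the processes inside a container to see how much sharing survives there.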


Only Java qualifies under your arbitrary rules, and even then I imagine it's trying to catch up to .NET (after all, blu-ray players execute Java), which can even run on embedded systems: https://nanoframework.net/


I listed some popular languages that web applications I happened to run dockerised are using. They are not arbitrary.

If you run normal web applications they often take many hundreds of megabytes if they are built with some popular languages that I happened to list off the top of my head. That is a fact.

Comparing that to cut down frameworks with many limitations meant for embedded devices isn't a valid comparison.


1GB is plenty for almost every case I've seen, 10-20x the need. Yes if you're running a repeated full OS underneath (hello VMs) then it'll waste more.

I run (regular) .NET (8) in <50mb, Javascript in <50mb, PHP in <50mb. C, Perl, Go in <20mb.

Unless you're talking about disk space.. runtimes take space.
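Numbers like these are easy to verify; a single process's RSS is one `ps` call away (a sketch that measures the shell itself; substitute your service's PID):

```shell
#!/bin/sh
# Print a process's resident set size (RSS) in MiB; here, this shell itself.
rss_kb=$(( $(ps -o rss= -p $$) ))
echo "rss: $(( rss_kb / 1024 )) MiB"
```

One caveat: RSS counts shared library pages once per process, so summing RSS across services overstates the real total; PSS from /proc/<pid>/smaps is the fairer number.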


> 1GB is plenty for almost every case I've seen, 10-20x the need

Couldn't have seen many then! Maybe you should look elsewhere.

> Yes if you're running a repeated full OS underneath (hello VMs) then it'll waste more.

Docker is not VMs. Other people have stated this.

> I run (regular) .NET (8) in <50mb, Javascript in <50mb, PHP in <50mb. C, Perl, Go in <20mb.

Good for you. I run web services that are heavier. The container has nothing to do with it.


It's not OS duplication per se, but systemd duplication.


> You can just as well not run docker.

this is naive

"just as well"? lmao sure i guess i could just manually set up the environment and have differences from what im hoping to use in productio

> 1GiB machine can run a lot of server software,

this is naive

it really depends if you're crapping out some basic web app versus doing something that's actually complicated and has a need for higher performance than synchronous web calls :)

in addition, my mq pays attention to memory pressure and tunes its flow control based on that. so i have a test harness that tests both conditions to ensure that some of my backoff logic works

> if RAM is not wasted on having duplicate OSes on one machine.

this is naive

that's not how docker works...


Yes, it's exactly how docker works if you use it for where it matters for a hobbyist - which is where you are installing random third-party apps/containers that you want to run on your SBC locally.

I don't know why people instantly forget the context of the discussion, when their favorite way of doing things gets threatened. :)

Context is hobbyists and SBC market (mostly various ARM boards). Maybe I'm weird, but I really don't care about minor differences between my arch linux workstation, and my arch linux arm SBCs, because 1) they're completely different architectures, so I can't avoid the differences anyway 2) it's a hobby, I have one instance at most of any service. 3) most hobbyist run services will not work with a shitton of data or have to handle 1000s of parallel clients


> Yes, it's exactly how docker works if you use it for where it matters for a hobbyist

What you described is exactly the opposite of how it works. There is no reasonable scenario in which that is how it works. In fact, what you're saying is the opposite of the whole point of containers versus using a VM.

> when their favorite way of doing things gets threatened

No, it's when someone (like you) thinks they have an absolute answer without knowing the context.

And by the way, in my scenario, container overhead is in the range of under a hundred MiB total. The thing I'm working on HAPPENS to require a fair amount of RAM.

But you confidently asserted that "1GiB machine can run a lot of server software". And that's true for many people (like you), but not true for a lot of other people (like me).

> most hobbyist run services will not work with a shitton of data or have to handle 1000s of parallel clients

neither of these are true for me but you need to take a step back and maybe stop making absolute statements about what people are doing or working on :)


> where it matters for a hobbyist

you dont get to define "where it matters" for a hobbyist

> which is where you are installing random third-party apps/containers that you want to run on your SBC locally

this is such a consoomer take. for those of us who actually build software, we have actual valid reasons for using it during development

> they're completely different architectures, so I can't avoid the differences anyway

ironically, this is one of the side benefits of modern containers

i think you have a fundamental misunderstanding of how containers work and why theyre useful for software development. your other posts in this thread only make me more sure of that. im not saying containers/etc are a perfect solution or always the right solution, but your misconceptions are separate from that


No I don't have a fundamental misunderstanding. In the entire thread I'm talking about docker, not "containers" in general. You seem to have a misunderstanding apparently.

I've been working with "containers" since before docker existed, and I've also written several applications that use the basic Linux technologies that so-called "docker containers" are built on. You can use these technologies (various namespaces, etc.) in a way that does not waste RAM. That will not happen for common docker use, where you don't control the apps and base OS completely. You can, if you try hard, make it efficient, but you have to have a lot of control. The moment you start pulling random dockerfiles from random sources, you'll be wasting colossal amounts of resources compared to just installing packages on your host OS, to share the maximum amount of resources.

And for all these "let's have just a big static binary and put it into a container" containers, that don't really have or need a real full OS userspace under them, there's barely any difference deployment-wise from just running them without docker. In fact, docker in this case is just a very complicated, duplicated layer on top of what systemd already does, and most people already have systemd on their OS. So that's more RAM waste and additional overhead for what is now reduced to a service manager in this use case.


> No I don't have a fundamental misunderstanding. In the entire thread I'm talking about docker, not "containers" in general. You seem to have a misunderstanding apparently.

i said modern containers. and you do have a FUNDAMENTAL MISUNDERSTANDING. you are repeating falsehoods throughout this entire thread.

> That will not happen for common docker use

again you are asserting a "common" use of software, when the people youre replying to are clearly using it for development

> where you don't control the apps and base OS completely

stop saying "you" to me. id tell you to speak for yourself but you seem incapable of doing that

> And for all these "let's have just a big static binary and put it into a container" containers, that don't really have/or need a real full OS userspace under them, there's barely any difference deployment wise from just running them without docker.

ironically enough it does have differences, glaring big differences. the deployment differences are about the only reason to use docker in this situation

another stark example of you popping off with incorrect assertions. and yes there are reasons not to use docker for this as well, but it depends on multiple factors

> In fact docker is just a very complicated additional duplicated layer in this case for what systemd does, that most people already have on their OS. So that's another RAM waste and additional overhead from what is now reduced to a service manager in this use case scenario.

there are so many misconceptions in there asserted as if theyre the entire truth. yes people can use docker containers poorly but its not everyone.

> The moment you start pulling random dockerfiles from random sources, you'll be wasting colossal amounts of resources compared to just installing packages on your host OS, to share maximum amount of resources.

its a good thing that I'm not doing that! ive already stated that im using them to build software, not just "pulling random dockerfiles from random sources"

you are digging your heels in and you are now trying to assert a set of conditions and situation in which youre correct, even though youre dead wrong for the use cases that the people youre replying to are describing

you have repeated falsehoods as fact repeatedly and seem unable to adjust to people telling you "im not doing that thing youre complaining about"

frankly, i think youre out of your depth on this subject and youre trying to do anything you can to justify your original claim that 1GiB is enough, or whatever

TLDR

feel free to have the last word, im sure youll have lots of them. maybe youll get lucky and a few will be correct. im exiting this conversation


there are no real deployment differences, eg. systemd has portable services, full containers via nspawn, etc. and there are many other ways to realize what docker does, with or without containers (eg. what yandex does internally by packaging their internal software and parts of its configuration into debian packages, and managing reproducibility that way)

and you don't provide any other technical arguments

what remains is you strongly telling me something I already acknowledged in the previous post (that you can perhaps make efficient use of docker, but it's hard to keep it from wasting resources in the general use case)


> Firewire support was removed from the Linux kernel

This is very much incorrect. Maybe the subsystem wasn't built into a custom kernel you're using?

edit: google says improvements through 2026, support through 2029


Many distros (including Raspberry Pi OS) don't enable `CONFIG_FIREWIRE_OHCI` in the kernel, so support isn't built-in, unless you build your own kernel.
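A quick way to check what a given distro kernel shipped (a sketch; paths vary by distro, and some kernels expose their config only via /proc/config.gz):

```shell
#!/bin/sh
# Report the FireWire-related options of the running kernel:
# "=y" built in, "=m" module, "is not set" disabled.
KCONF="/boot/config-$(uname -r)"
if [ -r "$KCONF" ]; then
    grep 'CONFIG_FIREWIRE' "$KCONF" || echo "no FireWire options listed in $KCONF"
elif [ -r /proc/config.gz ]; then
    zcat /proc/config.gz | grep 'CONFIG_FIREWIRE' || echo "no FireWire options listed"
else
    echo "kernel config not exposed; try: modinfo firewire-ohci"
fi
```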

But yes, it will be supported through 2029, and then after that, it could remain in the kernel longer, there's no mandate to remove it if I'm reading the maintenance status correctly: https://ieee1394.docs.kernel.org/en/latest/#maintenance-sche...

> [After 2029, it] would be possibly removed from Linux operating system any day


Right, that matches my understanding. After 2029, it'll stick around as long as it continues to compile. If it fails to compile, it would get dropped instead of updated, as there's no maintainer.


This was around 2020 or 2021. I had an old laptop with a firewire port which was already running Ubuntu. I couldn't make it work. That's when I found that the support was removed from the kernel, and that's what led me to Linux Mint. I bought a new SSD and installed Linux Mint, and I was able to import my video tapes with no further issue.

An Ubuntu support page says eth1394 has been removed from the kernel since version 2.6.22.

Edit: This was a VERY old laptop. I think it has a 32 bit processor. Maybe that confounded the issue.


> An Ubuntu support page says eth1394 has been removed from the kernel since version 2.6.22.

that doesn't really mean what you think it means, since they removed that module to replace it with a more standard module. and in addition, the presence or lack of eth1394 wouldn't affect a camera or fire interface in any meaningful way


file interface*


> The resemblance must be a complete coincidence.

I don't know why so many people are willing to descend into flippant, lazy conspiracy instead of doing a 7-second Google search before making a claim.

AG1 was started in 2010 by a police officer from New Zealand and AG stands for Athletic Greens.

There is a fair amount of controversy around the company's claims, so I suppose that is one symmetry between AG1 and AGI.


Not a conspiracy, and I know the history—just a joke. The current branding sure looks like AGI if you're not looking closely (or maybe I just read too much hn)


I laughed!


Gemini does it, but not in a clickbaity way. It basically asks, at the end, "would you like to know more about this specific topic or that one?"

Yes, there's some "growth hacking" bs, but prompting the user to ask more questions about details is a far cry from what oAI is doing. I agree it's all bad behavior, but in shades.


I found Gemini to keep asking the same follow-up questions regardless of my responses. In discussing a health topic, it repeatedly offered recipes for healthy snacks - 4 times, before I finally affirmatively said “no, I do not need snack recipes.” It dutifully stopped. Not quite clickbait, but it had very clearly decided where it wanted the conversation to go.


At least with Gemini, I found the trick is to mention a task list anywhere in the system instructions. Then the follow-up prompt will always be "do you want to add a task for that?", which is actually useful most of the time.


> many still saw the "metaverse" vision as inevitable; a clear trajectory for the future of the internet.

As a VR enthusiast, I beg to differ. Anyone who had spent a lot of time in the space knew that this was largely a hardware problem.

You need a lightweight, see-through head-mounted display. It needs to be aware of local lighting conditions and do more than just room mapping, which means it needs a lot of compute power. It needs to have eye tracking (for minor perceptual angle drawing, at least) and a high-resolution (or light field) display. It needs to stay cool and have a 6+ hour battery life (which is one working session). Oh, and people don't like any tethers. Or controllers. Which means extremely accurate hand tracking and integration with a keyboard/mouse. Price doesn't matter as much as people think: the AVP costs less today than a mid-tier PowerBook did 25 years ago. But that also needs to come down.

Apple Vision Pro is the first VR/AR headset to come close, by the way. And even that is very far off. In fact, I'd blame that more for this shutdown than any other single thing: it demonstrated that Meta's hardware labs were so fundamentally off from what they were trying to achieve that it basically rendered their entire investment useless.

