rmorey's comments

For me the Apple Watch killer feature is a bit embarrassing... it's the button that makes your phone ring so you can find it...


Oh man I forgot about this feature! It's really great except that I don't wear my watch so I just ask my wife to call me instead


"The tech is from America actually, decades ago... But they give up and china continues the work"

Many such cases...


rare earth..


It didn't help with the first incident of accidental injury (which makes sense, since almost everyone has one early on regardless), but it was still associated with reduced recurrence


fwiw (and I am not an expert), in my understanding Swift also has guaranteed memory safety without a GC (using automatic reference counting). Not sure how it compares to Rust in that respect


I'm a big fan of Rust and I also use Swift as part of my job. In terms of memory safety, Rust has a better story in that it will tell you about type problems pretty much upfront. The whole "Rust compile times take forever" complaint is only half true, because type checking happens quickly. You're never left waiting long to find out your types are wrong. The long compile happens in the codegen phase, after type checking, so once it starts you know it will finish successfully (unless there's a link error at the very end).

With Swift, that's not true. Sometimes you can wait for a while for the compiler to churn before it bails and tells you it can't fully infer types. As a Rust user, this is entirely unacceptable.

I have to say, while I don't thoroughly dislike Swift, I do thoroughly dislike Xcode and the Apple ecosystem. The fact that Swift was tied so closely to iOS development for so long means it's not a language that people generally reach for. It feels more like Objective-C++, a facet of the Apple ecosystem and a vehicle into iOS development.

People say that Rust's killer feature is the memory safety, but for me it's always been the ergonomics. Cargo and the painless dependency process are the real killer feature. Swift just doesn't have the same appeal, and the fact that it is slowly getting there is a testament to Rust having already cracked the code; Swift only went fully cross platform (Windows+Linux+Mac) in 2020, so there's a lot of reputation as an Apple language to undo, as well as a lot of ground to catch up on. It's interesting to note that the ground it has to make up is pretty much the path that Rust blazed. So for a lot of the target audience of Swift, the question isn't "why Swift?", it's "why not Rust?". Really, the only good answer for Swift right now is "my target platform is Apple."


> Why not Rust?

For me the answer is: Ergonomics.

Swift is, IMHO, much more readable and easier to grasp than Rust. You don't have to understand low-level concepts, but you can go there if you need performance.


The point about Rust is to avoid any extra runtime cost by statically enforcing a set of rules (borrow checking) on reference use that are sufficient to guarantee memory safety. It also has ARC but it's reserved only for cases where those rules are too restrictive.
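A minimal Rust sketch of that split (illustrative only, not from the original comment): plain borrows are verified statically and cost nothing at runtime, while `Rc` opts into reference counting only where shared ownership makes the borrow rules too restrictive.

```rust
use std::rc::Rc;

// Borrowing: shared access is checked at compile time, zero runtime cost.
fn sum(xs: &[i32]) -> i32 {
    xs.iter().sum()
}

fn main() {
    let data = vec![1, 2, 3];
    let total = sum(&data); // borrowed; no refcount touched, nothing copied
    assert_eq!(total, 6);

    // When the static rules are too restrictive (e.g. shared ownership
    // with no clear single owner), you opt into reference counting:
    let shared = Rc::new(data);
    let alias = Rc::clone(&shared); // runtime count bump, paid only here
    assert_eq!(Rc::strong_count(&alias), 2);
}
```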


The thing is, Swift has automatic reference counting, and Rust has an atomic reference counted type. Their “ARC”s are related but different.


I was actually talking about automatic reference counting. I think both Rc and Arc count as automatic reference counting? It's just that the term (which was apparently coined by Apple?) is not commonly used outside Apple's languages.

I'm not familiar with Swift though so my understanding could be incorrect.


You're a little off.

To put it in Rust terms, the "automatic" means that Swift will insert the equivalent of calls to ".clone()" for you, whereas this is manual in Rust.
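A small sketch of that distinction in Rust (my own example, assuming Swift's usual retain-on-assignment behavior for class instances): in Rust the count increment is written explicitly by the programmer.

```rust
use std::rc::Rc;

fn main() {
    let a = Rc::new(String::from("shared"));
    // Rust: the count bump is spelled out in the source.
    let b = Rc::clone(&a);
    assert_eq!(Rc::strong_count(&a), 2);
    drop(b);
    assert_eq!(Rc::strong_count(&a), 1);
    // Swift would insert the equivalent retain automatically on
    // `let b = a` for a class instance; nothing appears in the source.
}
```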


I see.

My understanding is that Rust doesn't have _automatic_ reference counting as in Swift only because it has an alternative (move), which requires the programmer to specify their intent. The principle is nevertheless the same: ensure every time a reference is copied the ref count is incremented, free only when ref count is zero, and we get temporal memory safety.
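To illustrate that in Rust terms (a sketch I'm adding, not part of the original comment): a move transfers the handle without touching the count, and only an explicit clone of the handle increments it.

```rust
use std::rc::Rc;

fn main() {
    let a = Rc::new(vec![1, 2, 3]);
    let b = a; // move: the handle transfers; the count stays at 1
    assert_eq!(Rc::strong_count(&b), 1);
    // `a` can no longer be used here; the compiler rejects it.
    let c = Rc::clone(&b); // copying the handle is what bumps the count
    assert_eq!(Rc::strong_count(&b), 2);
    drop(c); // count returns to 1; memory is freed when it hits 0
    assert_eq!(Rc::strong_count(&b), 1);
}
```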


It is true that the idea is memory safety in both cases, absolutely.


I think 4o image generation in ChatGPT is actually still a tool call with a prompt to an "image_gen" tool; I don't think the generator receives the full context of the conversation. If you do a ChatGPT data export and inspect the record of a conversation that used 4o image gen, you'll see it's a tool call with a distinct prompt, much like it was with DALL·E. And if you pass an image in as context, it passes that to the tool as well.

I imagine this is for anti-jailbreak moderation reasons, which is understandable


I was going to suggest chess position recognition; AFAIK it's a completely unsolved computer vision task. (Once a position is recognized, I think analysis is well solved by, say, a Stockfish tool for the LLM, though there is interesting work on language models themselves understanding chess.)


You seem to be getting downvoted, but as a former Vultr employee I can confirm this is correct


I know the author, Jon. Delightful guy


Something like Geekbench for CLI tools would be awesome


The sample output is very poor. Cool demo, but really just emphasizes how much of a hit product the NotebookLM team has managed to come up with, ostensibly with more or less the same foundation models already available.

