> Use text, not icons, for menu titles. Only menu bar extras use icons to represent menus. See Menu Bar Extras. It’s also not acceptable to use a mixture of text and icons in menu titles.
> Avoid using custom symbols in menus. People are familiar with the standard symbols. Using nonstandard symbols introduces visual clutter and could confuse the user.
The notable thing here is how recent a shift this is, and how longstanding the prior rule was. Navigating the Internet Archive is slow/tedious, but I think the rule/guideline was explicitly called out in the guidelines up until a year or two ago. So it was probably the guideline for ~20 years on macOS and has just now been changed.
The author teaches AI workshops. Nothing wrong with that, but I think it should be disclosed here. A lot of money is riding on LLMs being financially successful which explains a lot of the hype.
The language tends to affect everything, but to give a quick developer example, there's Zed. Developers use it because it's fast. Same with Sublime Text.
Your criticism makes more sense with products targeting non-technical users though. But IMO tech choices have cascading effects. I won’t buy a vehicle if the infotainment software sucks, and that’s the 2nd largest purchase I’ll ever make.
Just Zed (if AI features are a requirement) as far as I know.
But to elaborate, they’ve found a niche simply by using Rust and rendering the GUI in a performant way on the GPU. I’m not saying performance is the only thing, but for a chunk of people it is something they care about.
Good performance is a strong proxy for making other good software decisions. You generally don't get good performance if you haven't thought things through or planned for features in the long term.
I’m really sick of constantly seeing cloudflare, and their bullshit captchas. Please, look at how much grief they’re causing trying to be the gateway to the internet. Don’t give them this power
I appreciate their honesty. After a quick look I'd give another point to Pocketbase for its admin UI. The TrailBase one is pretty sloppy (on mobile at least), and looks like it's using bootstrap.
Pocketbase has a sense of quality/care around it that seems missing.
You won't get any argument here. PocketBase's is very polished and friendly. I fell back onto a popular pre-existing UI component system called shadcn (which does look a bit like bootstrap), not only because you gotta start somewhere but also because I'm not the caliber of UX designer Gani is :bow:. If you have the time, I would appreciate any feedback on how to improve.
I'm a bit surprised by the mobile comment, since last I checked PB's UI wasn't responsive, i.e. you had to scroll a lot horizontally on mobile. Despite its missing polish, I tried to make TB's at least work well on mobile. Could you elaborate? - thanks
The part that prompted my comment was clicking to edit a row on mobile in vertical orientation. Rather than fill the screen the edit shows up in a modal which doesn't really make sense, since you have so little horizontal space to work with. And the inputs are cut off slightly due to no padding.
And the whole admin UI overfills my (pro max) iPhone when held vertically, I think due to the navbar icons? Anyway, it'd be nice to not have to horizontally scroll to see those 40px or w/e everywhere. As it is, you have to horizontally scroll to see or evaluate every page.
One other little nice thing you could do if you wanted is prefill the username/password with the demo user.
---
Very cool project, and it's cool to see how Rust benefits the performance.
Every recent release has bumped the pricing significantly. If I was building a product and my margins weren’t incredible I’d be concerned. The input price almost doubled with this one.
I'm not sure how concerned people should be at the trend lines. If you're building a product that already works well, you shouldn't feel the need to upgrade to a larger parameter model. If your product doesn't work and the new architectures unlock performance that would let you have a feasible business, even a 2x on input tokens shouldn't be the dealbreaker.
If we're paying more for a more petaflop heavy model, it makes sense that costs would go up. What really would concern me is if companies start ratcheting prices up for models with the same level of performance. My hope is raw hardware costs and OSS releases keep a lid on the margin pressure.
Plenty. I assumed that the code examples had been cleaned up manually, so instead I looked at a few random "Caveats, alternatives, edge cases" sections. These contain errors typically made by LLMs, such as suggesting features that don't exist (std.mem.terminated), are non-public (argvToScriptCommandLineWindows), or have been removed (std.BoundedArray). These sections also surface irrelevant stdlib and compiler implementation details.
This looks like more data towards the "LLMs were involved" side of the argument, but as my other comment pointed out, that might not be an issue.
We're used to errata and fixing up stuff produced by humans, so if we can fix this resource, it might actually be valuable and more useful than anything that existed before it. Maybe.
One of my things with AI is that if we assume it is there to replace humans, we are always going to find it disappointing. If we use it as a tool to augment, we might find it very useful.
A colleague used to describe it (long before GenAI, when we were talking about technology automation more generally) as following: "we're not trying to build a super intelligent killer robot to replace Deidre in accounts. Deidre knows things. We just want to give her better tools".
So, it seems like this needs some editing, but it still has value if we want it to have value. I'd rather this was fixed than thrown away (I'm biased, I want to learn systems programming in zig and want a good resource to do so), and yes the author should have been more upfront about it, and asked for reviewers, but we have it now. What to do?
There's a difference between the author being more upfront about it and straight-up lying in multiple locations that zero AI is involved. It's stated on the landing page, the documentation, and GitHub - and there might be more locations I haven't seen.
Personally, I would want no involvement in a project where the maintainer is this manipulative and I would find it a tragedy if any people contributed to their project.
> and yes the author should have been more upfront about it
They should not have lied about it. That's not someone I would want to trust and support. There's probably a good reason why they decided to stay anonymous.
We really are in the trenches. How is this garbage #1 on the front page of *HN* right now?
Even if it was totally legitimate, the "landing page" (its design) and the headline ("Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software."?????) should discredit it immediately.
When was the front page of HN that impressive anyways? It has always been latest fad and the first to comment "the right thing to say" gets rewarded with fake internet points.
I made a comment about it obviously being AI generated, and my comment[1] was quickly downvoted, and there were comments explaining how I was obviously incorrect ("Clearly your perception of what is AI generated is wrong."). My comment was pushed under many comments saying the book was a fantastic resource. Extremely strange.
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
That issue has been deleted. In addition, the author has tagged other issues with labels that are not appropriate for the mission of being a core zig learning resource.
I seem to remember seeing this a week or two ago, and it was very obviously AI generated. (For those unfamiliar with Zig, AI is awful at generating Zig code: small sample dataset and the language updates faster than the models.) Reading it today I had a hard time spotting issues. So I think the author put a fair amount of work into cleaning up hallucinations and fixing inaccuracies.
The exchange on https://github.com/zigbook/zigbook/issues/4? If so, your botdar is better than mine. While the project owner doesn't understand the issue at first, they seem to get it in the end. The exchange looks sort of normal to me, but then I guess I am doomed to be fooled regularly in our new regime.
Some text in the book itself is odd, but I'll be a guinea pig and try to learn zig from this book and see how far I get.
The exchanges look totally like a robot to me. It looks to be deleted now. But the first response, which answered the bug report with how many systems it's tested on, seems weird and robot-like. Then there's the screenshot where the "person" has to be told to scroll down - that is very AI-like.
I literally just came across this resource a couple of days ago and was going to go through it this week as a way to get up to speed on Zig. Glad this popped up on HN so I can avoid the AI hallucinations steering me off track.
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
The author could of course be lying. But why would you use AI and then very explicitly call out that you’re not using AI?
There are too many things off about the origin and author to not be suspicious of it. I’m not sure what the motivation was, but it seems likely. I do think they used the Zig source code heavily, and put together a pipeline of some sort feeding relevant context into the LLM, or maybe just codex or w/e instructed to read in the source.
It seems like it had to take quite a bit of effort to make, and is interesting on its own. And I would trust it more if I knew how it was made (LLMs or not).
Because AI content is at minimum controversial nowadays. And if you are OK with lying about authorship, then it is not much further down the pole to embellish the lie a bit more.
I looked into the project issue you're referencing. There is absolutely zero mention of zig labeled blocks in that exchange. There is no misunderstanding or confusion whatsoever.
It's a formatting bug with zig labeled blocks and the response was a screenshot of code without one, saying (paraphrasing) lgtm it must be on your end.
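For readers unfamiliar with the feature being discussed: a labeled block in Zig is a block given a name (`blk:`) so that `break :blk value` can yield a value from it. The snippet below is a generic illustration of the syntax, not the actual code from the issue - the `label:` token is the kind of thing syntax highlighters and formatters sometimes mishandle.

```zig
const std = @import("std");

pub fn main() void {
    // `blk:` labels the block; `break :blk sum` makes the
    // whole block evaluate to `sum`.
    const count = blk: {
        var sum: u32 = 0;
        sum += 2;
        sum += 3;
        break :blk sum;
    };
    std.debug.print("{d}\n", .{count});
}
```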
I'd love it if we can stop the "Oh, this might be AI, so it's probably crap" thing that has taken over HN recently.
1. There is no evidence this is AI generated. The author claims it wasn't, and on the specific issue you cite, he explains why he's struggling with understanding it, even if the answer is "obvious" to most people here.
2. Even if it were AI generated, that does not automatically make it worthless. In fact, this looks pretty decent as a resource. Producing learning material is one of the few areas we can likely be confident that AI can add value, if the tools are used carefully - it's a lot better at that than producing working software, because synthesising knowledge seen elsewhere and moving it into a new relatable paradigm (which is what LLMs do, and excel at), is the job of teaching.
3. If it's maintained or not is neither here nor there - can it provide value to somebody right now, today? If yes, it's worth sharing today. It might not be in 6 months.
4. If there are hallucinations, we'll figure them out and prove the claim it is AI generated one way or another, and decide the overall value. If there is one hallucination per paragraph, it's a problem. If it's one every 5 chapters, it might be, but probably isn't. If it's one in 62 chapters, it's beating the error rate of human writers quite some way.
Yes, the GitHub history looks "off", but maybe they didn't want to develop in public and just wanted to get a clean v1.0 out there. Maybe it was all AI generated and they're hiding. I'm not sure it matters, to be honest.
But I do find it grating that every time somebody even suspects an LLM was involved, there is a rush of upvotes for "calling it out". This isn't rational thinking. It's not using data to make decisions, it's not logical to assume all LLM-assisted writing is slop (even if some of it is), and it's actually not helpful in this case to somebody who is keen to learn zig and deciding if this resource is useful or not: there are many programming tutorials written by human experts that are utterly useless, and this might be a lot better.
That didn't happen.
And if it did, it wasn't that bad.
And if it was, that's not a big deal.
And if it is, that's not my fault.
And if it was, I didn't mean it.
And if I did, you deserved it.
> 1. There is no evidence this is AI generated. The author claims it wasn't, and on the specific issue you cite, he explains why he's struggling with understanding it, even if the answer is "obvious" to most people here.
There is, actually.
You may copy the introduction to Pangram and it will say 100% AI generated.
What’s the evidence that it is human-generated? Oh I see. If it is AI generated then you still have to judge it by its merit, manually. (Or can I get an AI to do it for me?) And if they lied about it being human-authored? Well, what if the author refutes that accusation? (Maybe using AI? But why judge them if they use AI to refute the claim? After all, we must judge it on its own merit (repeats forever).)
> 2. Even if it were AI generated, that does not automatically make it worthless.
It does make it automatically worthless if the author claims it's hand made.
How am I supposed to trust this author if they just lie about things upfront? What worth does learning material have if it's written by a liar? How can I be sure the author isn't just lying with lots of information throughout the book?
I was pretty skeptical too, but it looks legit to me. I've been doing Zig off and on for several years, and have read through the things I feel like I have a good understanding of (though I'm not working on the compiler, contributing to the language, etc.) and they are explained correctly in a logical/thoughtful way. I also work with LLMs a ton at work, and you'd have to spoon-feed the model to get outputs this cohesive.