Hacker News | jonhohle's comments

I always installed Watercolor on a new computer. It's still beautiful and definitely the look they should have chosen, one that played to their strengths.

I think they were so caught off guard by how incredible Mac OS X _looked_ that they didn't realize it wasn't just veneer, but a genuine evolution and improvement of how Mac OS _worked_. This became Apple's competitive advantage for over a decade as Microsoft chased different styles while consistently botching their impact on usability.


That was the point of Tog's conclusion: edges of the screen have infinite target size in one cardinal direction, and corners have infinite target size in two. Any finite click target has, in comparison, an infinitely smaller area, which I suppose you could call infinitely worse if clickable area is your primary metric.

This wasn't just the menu bar either. The first Windows 95-style interfaces didn't extend the Start menu's click box to the lower-left corner of the screen. Not only did you have to get the mouse down there, you had to back off a few pixels in either direction to open the menu. Same with the applications in the taskbar.

The concept was similar to NeXTSTEP's dock (which Microsoft even licensed for Windows 95), but missed the infinite-area advantage that putting it on the screen edge allowed.


And in my experience, when people move from Windows to the Mac, they're so annoyed that there are differences. When I explain that these were present in the Mac long before Windows, people start to understand.

Since OS X 10.3 (2003), Control+F2 moves focus to the Apple menu. The arrow keys can then reach any menu item, which can be activated with Return or dismissed with Escape. Command+? will bring you to the search box in the Help menu. Not only that, any menu item in any app can be bound to a keyboard shortcut of the user's choosing, not just the defaults provided by the system or application.

How about, if your taxable income exceeds some multiple of the median income of your district, you are no longer eligible to represent them. It’s pretty amazing how much a representative’s income grows once they take public service positions.

This smells like funding schools based on student test results. Won't it disadvantage the most vulnerable areas? If I live in a state with some poor areas and some wealthy areas why would the most qualified people not compete to represent the wealthy areas?

If the problem is representatives using insider knowledge to enrich themselves then just hire more Inspectors General. If the problem isn't insider knowledge specifically then make whatever allows them to get rich illegal.


How about we stop screwing around and let becoming a legislator become an attractive & competitive job and just hold our noses at the little things that make politicians as a class generally unattractive people? Like not limitless, not with total impunity, but instead of trying to micromanage our way to perfection every fucking step of the way, we accept that politicians are going to politic.

You are going in the right direction.

I think it's more:

if your taxable income during or after office exceeds your prior high-watermark income (some 1-, 3-, or 5-year average) or the officeholder's salary, whichever is higher, every penny over the high watermark is taxed at a 99% rate.

That should take care of those pesky "speaking fees" and other nonsense that makes politicians rich.


I read that last phrase as if one were on their death bed and called out to the grim reaper, “Wait! I haven’t read War & Peace yet!” At which point the reaper sighs and vows to return when you’ve finished.

About 20 years ago I used Cyrillic confusables to watermark internal documentation that was being leaked by a disgruntled customer service employee. The document would render dynamically, with the employee ID encoded as bits in the text. It survived copy/paste to plain text well.

I did run into some issues in early versions when characters in Linux commands or visible web addresses were replaced. Fortunately the source docs were HTML, and it was easy to exclude code or pre nodes when rendering.

I thought this was so clever, but to the best of my knowledge the leaker was never caught with it.


We did this with variations of white space characters.

I don’t think most modern file systems have any limit on the depth of nested directories; that’s not how directory trees work. There are other limits, like the number of objects in the file system. The ability to reference an arbitrary path is bounded by PATH_MAX, which is the maximum string length. You can still access paths longer than that, just not via a single string representation.

Isn't there a max filepath length? Or does find not ever deal with that and just deal in terms of building its own stack of inodes or something like that?

That’s what PATH_MAX is. It’s the size of the buffer used for paths - commonly 4096 bytes. You can’t navigate directly to a path longer than that, but you can navigate to relative paths beyond that (4096 bytes at a time).

Yeah, this never made sense to me and I’ve suggested it to the district, especially for lower grades. They will never block all of the websites they need to unless they block all of the websites. Allow teachers to unblock specific sites for the students they’re responsible for.

It’s pedantic, but in the malloc example, I’d put the defer immediately after the assignment. This makes it very obvious that the defer/free goes along with the allocation.

It would run regardless of whether malloc succeeded or failed, but calling free on a NULL pointer is safe (defined as a no-op in the C spec).


I'd say a lot of users are going to borrow patterns from Go, where you'd typically check the error first.

    resource, err := newResource()
    if err != nil {
        return err
    }
    defer resource.Close()
IMO this pattern makes more sense, as running exit-time cleanup won't make sense in most cases unless you have acquired the resource in the first place.

free may accept a NULL pointer, but it also doesn't need to be called with one either.


This example is exactly why RAII is the solution to this problem and not defer.

I love RAII. C++ and Rust are my favourite languages for a lot of things thanks to RAII.

RAII is not the right solution for C. I wouldn't want C to grow constructors and destructors. So far, C only runs the code you ask it to; turning variable declaration into a hidden magic constructor call would, IMO, fly in the face of why people may choose C in the first place.


defer is literally just explicit RAII in this example. That is, it's just unnecessary boilerplate to wrap the newResource handle into a struct in this context.

In addition, RAII has its own complexities that need to be dealt with now, i.e. move semantics, which C obviously does not have, nor likely ever will.


> RAII has its own complexities that need to be dealt with now, i.e. move semantics, which C obviously does not have, nor likely ever will.

In the example above, the question of "do I put defer before or after the `if err != nil` check" is deferred to the programmer. RAII forces you to handle the complexity, defer lets you shoot yourself in the foot.


It seems less pedantic and more unnecessarily dangerous due to its non-uniformity: in the general case the resource won't exist on error, and breaking the pattern for malloc adds inconsistency without any actual gain.

free works with NULL, but not all cleanup functions do. Instead of deciding case by case whether to defer before or after the null check, based on whether the cleanup function handles NULL gracefully, I would always do the defer after the null check, regardless of which allocation/cleanup pair I use.

3…2…1… and somebody writes a malloc macro that includes the defer.
