gmfawcett's comments

Ostrom's results didn't disprove ToC. She showed that common resources can be communally maintained, not that tragic outcomes could never happen.

I don't think anything can disprove that ToC issues can happen in any given situation.

That seems like an unreasonable bar, and a less useful question than "does this system make ToC outcomes less frequent than that system?"


"Going Postal" was brilliant. GNU Terry Pratchett.

Delegated to the provost and deans. Who else would you expect to hold accountable for developing a graduate attribute?


I guess they would already have had that authority?

I think it would be the ongoing job of the deans, or at least someone, to set graduation requirements? Why would the trustees have to explicitly delegate it?


I think this was more of a press release than an edict. The Purdue announcement says, "Built on recently launched AI majors, minors and certificates across colleges, and following the establishment of a working group last summer, with additional careful deliberation and advice from the University Senate through its Undergraduate Curriculum Council..."


I've had the same intuition. I've had mixed results in this area, although I'm certainly no expert. Recently I wanted to formalize a model of a runbook for a tricky system migration, to help me reason through some alternatives. I ended up writing a TLA+ spec before generating some visualizations, and also some possible prototypes in MiniZinc. All three (spec, visualizations, CP models) were vibe-coded in different sessions, in about that order, though most of my personal effort went into the spec.

While the AIs in the later sessions quickly understood many aspects of the spec, they struggled with certain constraints whose intuitive meaning was concealed behind too much math. Matters that I had assumed were completely settled, because a precise constraint existed in the spec, had to be re-explained to the AI after implementation errors were found. Eventually, I added more spec comments to explain the motivation for some of the constraints, which helped somewhat. (While it's an untested idea, my next step was going to be to capture traces of the TLA+ spec being tested against some toy models, and to include those traces as inputs when producing the implementations, e.g. to construct unit tests. Reasoning about traces seemed to be a fairly strong suit for the AI helper.)
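
(For anyone unfamiliar: TLC, the TLA+ model checker, prints a trace as a numbered sequence of states, each one a conjunction of variable assignments. A made-up two-state excerpt, with invented variable names, might look like:

    State 1: /\ current = "n1"
             /\ pool = {"n2", "n3"}
    State 2: /\ current = "n2"
             /\ pool = {"n3"}

Each trace pins down one concrete, spec-approved execution, which is more or less the shape of a unit-test fixture already.)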

In hindsight, I feel I set my sights a little too high. A human reader would have had similar comprehension problems with my spec, and they probably would have taken longer to prime themselves than the AI did. Perhaps my takeaway is that TLA+ is a great way to model certain systems mathematically, because precision in meaning is a great quality; but you still have to show sympathy to your reader.


Was the TLA+ spec hard to understand just for the AI reader, or for a human peer as well?


That's a fair question, but I didn't have another person read it. (If I'm being honest, I think my colleagues would have looked at me funny. We don't exactly have a culture of using TLA+ to describe sysadmin processes here, and it felt a bit like swatting a fly with a sledgehammer, even by my own standards! They thought the visualizations were cool, though...)

Many of my rules seemed pretty basic to me -- a lot of things like, "there is no target-node candidate whose ordinal is higher than the current target's and also satisfies the Good_Match predicate." But if I had been writing it for a human reader, rather than just to document constraints, I would have put more effort into explaining why the constraints existed in the first place (what makes a match Good? Are there Poor matches? Why do we care? etc.). I didn't skip this step altogether, but I didn't invest much time in it.
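
To give a flavour, a rule like that looks something like this in TLA+ (a toy sketch; every name here is an invented stand-in, not my actual spec):

    \* No candidate both outranks the current target and satisfies Good_Match.
    NoBetterCandidate ==
        ~ \E c \in Candidates :
            /\ c.ordinal > CurrentTarget.ordinal
            /\ Good_Match(c)   \* hypothetical predicate standing in for the real matching rule

It's precise, but it says nothing about why a better match shouldn't exist, and that "why" is exactly what the AI kept needing.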

I did take care to separate "physics" rules from "strategy" rules (i.e., explicitly separating core actions and limits from objectives and approaches). That seemed to help the AI, and I'm sure it would have helped people too.
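
Roughly, that separation looked like the following (again a simplified sketch, with invented names):

    \* "Physics": the core actions the system can actually take, and their hard limits.
    PhysicsNext == MigrateNode \/ DrainNode \/ RollBack
    Spec == Init /\ [][PhysicsNext]_vars

    \* "Strategy": objectives and preferences, checked as invariants over Spec
    \* rather than baked into the actions themselves.
    StrategyInv == NoBetterCandidate /\ CapacityRespected

Keeping the two layers in separate definitions let the AI treat "is this possible?" and "is this desirable?" as different questions.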


That's pretty impressive -- thanks for sharing the link.


Very well expressed. That's great early-career guidance, but also a good refresher for many senior staff.


The simpler version is "Actually deliver on what you're asked to do".

It sounds so simple that people get offended when you say it. Yet it's at the root of a lot of employee performance problems, where the employee generates plenty of activity but doesn't deliver on expectations.

I've worked with some brilliant engineers who failed because they were always off trying to reinvent the wheel with their own completely unnecessary framework or rewriting some code that didn't need to be touched. In each case their managers were practically begging them to just focus on delivering the tasks they were assigned, but they were still surprised when the performance management actions came along.


I've often taken inspiration from RFC 2418, "IETF Working Group Guidelines and Procedures" [1], a rare RFC that defines a human protocol ("rough consensus") rather than a technical one.

[1] https://www.rfc-editor.org/rfc/rfc2418.html


(also, velociraptors)


Arguably the European churches were such an industry for centuries, and were highly effective at it.


It gives a cue about how many times I've probably seen the article before. Quite useful, IMO. I read this particular article when it came out in 2006... it's convenient to know we're not discussing a novel finding on the same topic.

