I was really confused about how this could be possible for such a seemingly simple site, but it looks like it's storing and writing many new commits every time there's a new review, new financial data, a new show, etc.
Someone might want to tell the author to ask Claude what a database is typically used for...
JSON in git for reference data actually isn't terrible. Having it live alongside the code isn't great, and the repo is massively bloated in other ways, but as a change-tracked source of truth it's not bad, except that it should probably be canonicalized.
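Canonicalizing here just means serializing with sorted keys and fixed separators so identical data always produces identical bytes, keeping diffs minimal. A minimal Python sketch (the function name is mine, not from the repo):

```python
import json

def canonical_json(obj) -> str:
    # Sorted keys + fixed separators + trailing newline: the same
    # data always serializes to identical bytes, so git diffs show
    # real changes rather than key-reordering noise.
    return json.dumps(obj, sort_keys=True, indent=2,
                      separators=(",", ": "), ensure_ascii=False) + "\n"

# Key order no longer affects the stored bytes:
assert canonical_json({"b": 1, "a": 2}) == canonical_json({"a": 2, "b": 1})
```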
It's not a terrible storage mechanism, but 36,625 workflow runs taking between ~1 and 12 minutes each seems like a terrible use of runner resources. Even at many orgs, Actions constantly running for very little benefit has been a challenge. Whether it's wasted dev time or wasted CPU, to say nothing of the horrible security environment that globally triggerable arbitrary PR Actions introduce, there's something wrong with Actions as a product.
Can you? My understanding is that AI cannot claim copyright and my assumption would be that copyright law immediately extends authorship to the user operating the AI (or their employer).
So you're suggesting that an AI translation of, say, a novel removes human authorship from the result? Unless a human goes in and makes further "substantive transformations" to the AI-generated work?
And if that's not what you are saying, then how are you determining that prompts to an AI are not copyrighted by the author of the prompt? The results are nothing more than a derivative work of the prompt. So you are faced with having to determine whether the prompts, individually or in combination, are copyrightable. Depending on the prompt they may or may not be, but you can't apply a blanket rule here.
(Notwithstanding that Claude inserts itself explicitly as co-author and the author is listed on the commit as well)
> The Copyright Office affirms that existing principles of copyright law are flexible enough to apply to this new technology, as they have applied to technological innovations in the past. It concludes that the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements. This can include situations where a human-authored work is perceptible in an AI output, or a human makes creative arrangements or modifications of the output, but not the mere provision of prompts.
People have tried over and over again to register copyright of AI output they supplied the prompts for. In one instance[1], someone prompted an AI through over 600 cumulative iterations to arrive at the final product, and it still wasn't accepted by the Copyright Office.
> In March 2023, the Office provided public guidance on registration of works created by a generative-AI system. The guidance explained that, in considering an application for registration, the Office will ask “whether the ‘work’ is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.”
You are going to have to prove that things Claude stamps as co-authored are not the work of an "assisting instrument". It's certainly true that some vibe-coded one-shot thing might not qualify.
I would also note that the applicant applying for copyright in your linked case explicitly refused guidance and advice from the examiner. That could well be because the creation of that specific work was not shaped much by that artist's efforts.
I wouldn't read too much into that when discussing a GitHub repo. It really will depend on how the user is using Claude and their willingness to demonstrate which parts they contributed themselves. You need to remember that copyright extends to plays and other works of performance. Everything the copyright office is saying in your linked ruling suggests that an AI-implementation of a human-design is copyrightable.
Yes, but the point is that the AI output could still be covered by the definitely-human copyright in the prompt, just not a new copyright in the output.
For example, machine-translating a book doesn't create a new copyright in the new translation, but that new translation would still inherit the copyright in the original book.
Both are dramatically over-engineered, and that's okay. I find them to be products of an industry still working out how to really work with AI and how to optimize workflows around it. Similar to Gastown et al.
Otherwise, if you can own your own thinking, orchestrating, and steering of agents, you're in a more mature place.
This has aged well. Paradoxically, the more capable AI gets, the more important specification becomes (or costly lack of it becomes), and the more time you spend planning and iterating on intent before you let an agent act.
Tokens and code might be cheap, but we are moving closer and closer to letting AI operate overnight, or to wanting agents operating within our actual environments in real time. The tokens and actions in those settings have higher costs.
Right now I enjoy the labs' CLI harnesses, Claude Code, and Codex (especially for review). I do a bunch of niche stuff with Pi and OpenCode. My productivity is up. There are some nuances to working alongside others using the same AI tools: we all tried to boil the ocean at first, creating a ton of verbose docs and massive PRs, but we've since stopped throwing up every sort of LLM output we get. Instead, we continuously refine the outputs into something consumable and trusted.
My workday is fairly simple. I spend all day planning and reviewing.
1. For most features, unless it's something small, I will enter plan mode.
2. We will iterate on the plan. I built a tool for this, and judging by its organic growth it's a fairly desired workflow: https://github.com/backnotprop/plannotator
- This is a very simple tool that captures the plan through a hook (ExitPlanMode) and creates a UI where I can actually read the plan and annotate it, with QoL things like plan diffs so I can see what the agent changed.
3. After the plan's approved, we eventually get to reviewing the implementation. I'll use AI reviewers, but I'll also review manually with the same tool, so I can create annotations and iterate through a feedback loop with the agents.
4. I do a lot of this multitasking with worktrees now.
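The plan-capture step in (2) can be sketched roughly like this: a hypothetical hook script, assuming the hook payload arrives as JSON on stdin with the plan text under `tool_input.plan` (the field names here are my assumption, and this is not Plannotator's actual code):

```python
#!/usr/bin/env python3
"""Hypothetical ExitPlanMode hook: save the agent's plan for review.
Payload shape (tool_input.plan) is an assumption, not Plannotator's code."""
import json
import pathlib
import sys

def extract_plan(payload: dict) -> str:
    # The hook payload carries the tool call's arguments; the plan
    # markdown is assumed to live at tool_input.plan.
    return payload.get("tool_input", {}).get("plan", "")

if __name__ == "__main__":
    plan = extract_plan(json.load(sys.stdin))
    if plan:
        out = pathlib.Path(".plans")
        out.mkdir(exist_ok=True)
        (out / "latest.md").write_text(plan)
```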
I've been working on a tool that makes worktrees play nicely with docker-compose setups, so you can run multiple localhost environments at once: https://coasts.dev/. It's free and open source. In my experience it's made worktrees 10x better, but I'd love to hear what other folks are doing about things like port conflicts and db isolation.
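One common way to sidestep port conflicts (not necessarily what coasts.dev does) is to derive a stable port block from the worktree path and feed it to docker-compose via environment variables. A sketch, with names of my own invention:

```python
import hashlib
import os

def worktree_ports(path: str, base: int = 10000, blocks: int = 500) -> dict:
    # Hash the absolute worktree path into a stable offset, so every
    # checkout always maps to the same non-overlapping port block.
    digest = hashlib.sha256(os.path.abspath(path).encode()).hexdigest()
    start = base + (int(digest, 16) % blocks) * 100
    return {"WEB_PORT": str(start), "DB_PORT": str(start + 1)}

# Export these before `docker-compose up` and reference ${WEB_PORT} /
# ${DB_PORT} in the compose file's port mappings.
```

Database isolation can fall out of the same trick: derive a per-worktree volume or database name from the same hash so each checkout gets its own state.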
I think I'd be okay with a smaller, more narrative plan; it's not so much about verbosity as about me understanding what is about to happen and why. There wasn't much back-and-forth once plan mode kicked in (i.e., no Q&A). It would jump into its own planning and idle until all I saw was a set of projected code changes.
Could you provide the details of the complete verification?
*In the original story you only showed Claude-like responses, not how you dug into the binary.
I understand. Thank you for sharing. I didn't uncover all of this until Claude told me its specific system instructions when I asked it to conduct introspection. I'll revise the blog so that I don't encourage anybody else to do deeper introspection with the tool.
As a divergent thinker who is harmed when Claude behaves in unpredictable ways that run counter to my extensive harm-prevention protocols, I may or may not have done deep investigation of the tool in order to understand how to create those protocols. When Anthropic employees push out unstable work, developers in general are significantly impacted. When unstable products end up in my workflow I am harmed both financially AND psychologically. I can lose hours, days, even weeks to an unstable model or IDE. I should not EVER be tested on. And if diving into their product protects me, so be it.
I understand. Just with AI, I don't think the behavior should change so drastically. Which I understand is paradoxical because we enjoy it when it can 10x or 1000x our workflow. I think responsible AI includes more transparency and capability control.
That ship has sailed. These models were trained unethically on stolen data, they pollute tremendously, and they are fueling a bubble that is hurting people.
At 2 months old: nearly a 1GB repo, 24M LOC, 52K commits
https://github.com/thomaspryor/Broadwayscore
Polished site: https://broadwayscorecard.com/