Hacker News | threecheese's comments

Almost like that’s his job.

Hey, I’m with you - I think social media needs to die specifically for this reason. I’m reminded of the term “snake oil” - it’s like the dawn of newspapers again.


Media as a whole needs to die

Including books and the internet?

I dislike the “brainwashed” comment from the sibling; I believe it makes some assumptions. There’s no doubt that:

- AI is extremely resource intensive, consuming electricity, water, silicon, etc. at levels possibly never seen before in humanity’s history; whether that’s a waste or not is subjective

- Massive datacenters are popping up like anthills, and coupled with R-flavored regulation rollback there is a definite risk of environmental impact - just like during our last industrialization push, when we poisoned much of the country, leading to a massive rollout of environmental protections in the 1970s and 1980s

- Students are taking advantage of LLMs to shirk school responsibilities. Whether this is damaging or not is subjective until proven, and AI may not be causal here (students may not have been getting the expected value from their education even without LLMs - again, remains to be proven)

- Many companies have used AI as a justification for layoffs; who knows what’s actually true, though. There is a very real fear across society that it will continue to impact jobs, and senior AI company leaders are fueling this with public predictions of massive labor shifts. Again, maybe they are lying, but can you blame anyone for worrying?

There are counterarguments to all of these, but dismissing the fear as uneducated or brainwashed reveals your own priors and ignores all of these facts. It’s healthy to ingest OP’s criticisms - especially on a forum populated mostly by Smart People (tm).


I think you’re right. In a very narrow, short term scope. That’s the issue.

The problem with this argument is that it assumes the world is static. When trains were invented, they polluted a LOT. Technology evolved. Looking backwards, the value they unlocked outweighed the short-term pollution they generated by orders of magnitude. Inefficient in the short term; generation-changing over the longer horizon. Extend the timeframe of your argument: do you think it holds 20 years from now, when we have more efficient algorithms and energy generation technologies? I don’t think so.


Totally agree, but I would say that strategic thinking is easier for the wearer of the boot than the owner of the neck.

Said less calamitously (word?): while it’s important to be objective, objectivity is difficult when there are real existential risks.

Thank you for the frank discussion!


Be skeptic of those telling you that technological advances are bad. They usually want something from you. And it’s usually your vote.

What political office do you believe Finnucane is running for?

Idk, but I think he left us with a pretty straightforward worldview

Like you, I dislike that the providers make it intentionally difficult to retrieve conversation history from their web UIs; you can Copy/Paste easily, or use the OS Share feature to access a public version, but they make it very difficult for me to build tooling to extract the history - it requires website automation, or a browser plugin.
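Short of full website automation, the lightest-weight tooling starts from the Copy/Paste path mentioned above. A minimal sketch of what I mean, assuming a pasted transcript whose turns begin with “User:”/“Assistant:” markers - those marker names are my assumption for illustration, not any provider’s actual export format:

```python
import json

def parse_transcript(text, markers=("User:", "Assistant:")):
    """Split a copy/pasted chat transcript into role-tagged turns.

    The "User:"/"Assistant:" prefixes are assumed markers; adjust them
    to whatever your provider's copied text actually looks like.
    """
    turns, current = [], None
    for line in text.splitlines():
        for marker in markers:
            if line.startswith(marker):
                if current:
                    turns.append(current)
                current = {"role": marker.rstrip(":").lower(),
                           "content": line[len(marker):].strip()}
                break
        else:
            # Continuation line: belongs to the turn in progress.
            if current:
                current["content"] += "\n" + line
    if current:
        turns.append(current)
    return turns

raw = """User: What is RDF?
Assistant: A graph data model built on subject-predicate-object triples."""
print(json.dumps(parse_transcript(raw), indent=2))
```

From there the JSON can be archived or fed to another agent; the hard part remains getting the raw text out of the web UI in the first place.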

Exactly. And I feel it gets even worse with each new agent that gets released. Wanna test openclaw? Good luck exporting all of your contexts. What does your AI stack look like? Are you still heavily using the web UIs in your routine?

I have many more fears than just annoyances :)

My biggest annoyance is hidden Thinking tokens; I have little trust in these aliens, and seeing how the sausage was made helped me get more comfortable eating it. Anthropic was the biggest provider that didn’t hide them until recently; they give a good rationale for the switch, but that doesn’t make it any less annoying. I also dislike the UX they put around it: “Hmm.”, “I should think about this”, etc.


This is WILD. And the fact everyone just accepts this makes it even worse. We’re basing our daily decisions on a closed chain of thought. Do you see this changing anytime soon?

I think in the end it all boils down to a trust issue on the big labs


By UX you mean the chat interface? Or the lack of transparency of it? Or both?

I do this every day, because Codex writes my requirements and Claude implements them. Just ask it for whatever you think the next model will need, tell it to be verbose if you like, and even have a second ChatGPT check it if you’re worried. You can even give it a format, going as far as providing a specification or template if you do it frequently. Stick that template in both ChatGPT and Claude projects so one can write it and the other can read it.

Edit: I shouldn’t admit this, but I even have an ontology defined - RDF and all - for some of my LLM tasks. Its classes contain examples, so it works like a few-shot instruction, and it’s working scarily well for structuring tasks.
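For flavor, here’s a toy version of the classes-with-examples idea in plain Python (real RDF tooling like rdflib would be the natural fit; the class names, property names, and example strings below are my own illustrative inventions, not the actual ontology):

```python
# Triples as (subject, predicate, object) tuples; in practice this
# lives in RDF/Turtle, but the shape is the same.
triples = [
    ("ex:Decision", "rdfs:subClassOf", "ex:ProjectEntity"),
    ("ex:Risk",     "rdfs:subClassOf", "ex:ProjectEntity"),
    ("ex:Decision", "ex:example",
     "Chose SQLite over Postgres to keep the tool single-file."),
    ("ex:Risk",     "ex:example",
     "Vendor API may be deprecated before launch."),
]

def few_shot_block(cls):
    """Render a class's example instances as a few-shot prompt snippet."""
    examples = [o for s, p, o in triples if s == cls and p == "ex:example"]
    return f"{cls} examples:\n" + "\n".join(f"- {e}" for e in examples)

print(few_shot_block("ex:Decision"))
```

Because the examples hang off the class definitions, the same ontology that structures the data doubles as the prompt material.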


Holy shit. That’s scarily clever. Do you trigger it at a certain max token spend ratio? And do you think it generalizes to passing all kinds of context, or is it tailored for structured tasks?

It is generalizable given a defined ontology, even better if your life experience can be represented using my ontology.

I had a large-context model analyze the last ten years of my notes and build an ontology; it took a lot of iterating. Examples: a software project may have Decision, Risk, etc. entities; Life may consist of Activities, Goals, Concerns, Problems, etc.; the World has knowledge/facts (topic taxonomies like Wikipedia), etc. These are all joinable given the relationships are intact.

The agent put everything into a huge RDF ontology - a world model. I worked with the agent to re-frame that large ontology so I can build a skill appropriate for a small-context model to serve as an expert on the ontology; it owns it for all intents and purposes.

I then worked with an agent to define use cases using my notes, real world things I do and have done: research, project management, goal setting, “hey I found a cool project online, it would be useful for my X project which is on the back burner but I don’t want to forget about it” you get the idea. I used those use cases to build out a few skills.

These skills serve as the actual ontology-aligned data layer, and so have access to URIs pointing to Goal entries in Obsidian, Projects in ClickUp, my calendar, etc. So it knows what my most salient Concerns are and which other entities are associated with them - projects, goals, documents, etc. - and it utilizes various MCP tools for external systems. It also creates “context packs” using its view across systems, so I can have it export structured markdown for any arbitrary entities and their direct and transitive relationships, to facilitate some other agent performing targeted work.
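The context-pack step is basically a graph walk followed by markdown rendering. A sketch under assumed names - the entity graph, entity kinds, and link structure here are invented placeholders, not my real data:

```python
from collections import deque

# Hypothetical entity graph: name -> (kind, related entity names).
graph = {
    "Concern:JobSecurity": ("Concern", ["Goal:LearnRust", "Project:SideApp"]),
    "Goal:LearnRust":      ("Goal",    ["Doc:RustNotes"]),
    "Project:SideApp":     ("Project", []),
    "Doc:RustNotes":       ("Document", []),
}

def context_pack(root):
    """BFS over direct and transitive relationships, then emit
    agent-ready markdown covering every reachable entity."""
    seen, order, queue = {root}, [], deque([root])
    while queue:
        name = queue.popleft()
        order.append(name)
        for nbr in graph[name][1]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    lines = [f"# Context pack: {root}"]
    for name in order:
        kind, rels = graph[name]
        lines.append(f"- **{name}** ({kind}) -> {', '.join(rels) or 'no links'}")
    return "\n".join(lines)

print(context_pack("Concern:JobSecurity"))
```

Handing the resulting markdown to another agent gives it the root entity plus everything transitively linked to it, without exposing the whole graph.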

This enables me to:

1. Build targeted software tools for life and work management; I bring the skills into that project, which owns the data layer, OR export context in the form of agent-ready markdown

2. Give skills to my Claws (which aren’t Claws at all; it can be any arbitrary skill-supporting agent harness like Goose or Hermes) and LLM apps like Claude Desktop.


I’ve been getting a lot of unsolicited “hey, I saw your github” AI-generated emails lately; I appreciate your post for revealing the rent-seeking “talent-industry” middlemen behind this. I’ll go anonymize my github profile to hopefully reduce this - and I quote - “Unsolicited Commercial Email”.

Folks, this is how the death of SaaS is going to impact us; commercial interests are going to find every crack in your personal privacy into which they can wedge a vibe-coded automation.


That's unfortunate to hear. This has allowed me to connect with talent that has otherwise not been given the opportunity to showcase their skills. At the scale I'm doing it, it's pretty harmless, but I can see how it could be easily exploited.

I’m jaded, and I shouldn’t let it show :)

One only needs to encounter a handful of bad actors to paint everyone with that brush - perhaps unfairly. Best of luck to you.


Not sure if I’m a user just yet, but this quote stands out for me:

> If a node isn't physically standing next to you, it doesn't exist on your network. As humans move throughout the physical world, the data propagates organically.

/applause


Exactly! The idea is our phones are more powerful than what it took to land on the moon and they can be doing a LOT more to protect our privacy and improve our day-to-day lives - and using the Internet should not always be a first resort for this.

Any reason why you’ve closed access? I could swear there was more here yesterday

I haven't changed anything on this post, the website, or on github. Are you seeing something different?

Start with booze; always works :)

OP’s Qwen3.6 27B Q6 seems to run north of 20GB on huggingface, and should function on an Apple Silicon Mac with 32GB of RAM. Smaller models work unreasonably well even on my M1/64GB MacBook.

I am getting 10tok/sec on a 27B of Qwen3.5 (thinking, Q4, 18GB) on an M4/32GB Mac Mini. It’s slow.

For a 9B (much smaller, non-thinking) I am getting 30 tok/sec, which is fast enough for regular use if you need something from the training data (like how to use grep, or Hemingway’s favorite cocktail).

I’m using LM Studio, which is very easy and free (as in beer).
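If you want to sanity-check your own tok/sec numbers, a rough sketch against LM Studio’s local OpenAI-compatible server (the default localhost:1234 endpoint comes from LM Studio’s docs - verify in your install; the model name is a placeholder for whatever you’ve loaded):

```python
import json
import time
import urllib.request

def tokens_per_second(n_tokens, elapsed_s):
    """Throughput in tokens/sec; guard against a zero-length interval."""
    return n_tokens / elapsed_s if elapsed_s > 0 else 0.0

def measure_local(prompt,
                  url="http://localhost:1234/v1/chat/completions",
                  model="qwen-9b"):
    """Time one completion against a local OpenAI-compatible server.

    Not called below - it requires a running LM Studio server; the
    URL and model name are assumptions, check your own setup.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    start = time.time()
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    # OpenAI-compatible responses report usage.completion_tokens.
    n = reply["usage"]["completion_tokens"]
    return tokens_per_second(n, time.time() - start)

# The arithmetic behind the numbers in this thread: 300 tokens
# over 30 seconds is the ~10 tok/sec I see on the 27B.
print(tokens_per_second(300, 30.0))  # -> 10.0
```

This measures wall-clock throughput for the whole response; streaming and counting chunks between first and last token would give a cleaner decode-rate figure.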


Go even further, and add this into the skill-creator skill, and let the agent improve the skill regularly. I do this with determinism, and have my skills try to identify steps which can be scripted.
