mritchie712's comments | Hacker News

I don't like the "I canceled my x subscription" hype posts, but I did cancel Figma today. We've barely used it in months and this was the nail in the coffin.

I remembered this post from (only) 3 years ago:

Show HN: I've built a C# IDE, Runtime, and AppStore inside Excel

670 points | 179 comments

One of the main use cases was analyzing Excel data with SQL. I'm the kind of nerd that loves tools like that, but they seem completely obsolete now.

[0] https://news.ycombinator.com/item?id=34516366
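The "analyze spreadsheet data with SQL" workflow that Show HN enabled can now be done in a few lines of stdlib Python: export the sheet, load it into SQLite, and query it. The table and column names below are made up for illustration:

```python
import sqlite3

# Hypothetical rows exported from a spreadsheet: (region, amount)
rows = [("north", 120.0), ("south", 80.0), ("north", 45.5)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)

# The "analyze spreadsheet data with SQL" step
result = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(result)  # [('north', 165.5), ('south', 80.0)]
```

An agent can write this kind of glue on demand, which is arguably why a dedicated in-Excel IDE feels obsolete.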


this was true a year ago, but if you give an agent a new spec (e.g. a .md file), it will follow it.

we have a custom .yaml spec for data pipelines in our product and the agent follows it as well as anything in the training data.

while I agree you don't need to build a new thing "for agents", you can get them to understand new things, that are not in the training data, very easily.
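For illustration, a toy sketch of what a custom pipeline spec like this might look like (every field name and connector here is hypothetical, not Definite's actual format):

```yaml
# Hypothetical pipeline spec of the kind an agent can be taught from a .md guide
pipeline:
  name: stripe_revenue
  source:
    connector: stripe
    objects: [charges, invoices]
  transform:
    - sql: >
        SELECT date_trunc('month', created) AS month,
               sum(amount) / 100.0 AS revenue
        FROM charges
        GROUP BY 1
  destination:
    table: analytics.monthly_revenue
  schedule: "0 6 * * *"
```

The point is that a schema this small fits easily in context, so an agent can emit valid specs without the format ever appearing in its training data.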


Just because they can doesn't mean inventing a new framework "for agents" is going to be superior to letting agents use what's in their training data. I suspect it'll be worse, but the time/resources needed to prove that are beyond what I'd be willing to invest.

What makes something like this "for agents", anyway? It's opinionated...a human's opinions, I assume, since agents don't want anything and thus can't have opinions. But, many existing tools are opinionated. Types are good for agents, because they keep them honest, but many existing things in this space have types. Python is good for agents, because there's a shitload of Python code and documentation in their training data, but many existing things are built with Python (and TypeScript, Go, and Rust are also typed languages and well-represented in the training data).

I dunno. I think a lot of folks are sitting around with an agent thinking, what can I build? And, a lot of things "for agents" are being built, as a result. I think most of them don't need to be built and don't improve software development with agents. They often just chew up context and cache with extra arbitrary rules the agent needs to follow without delivering improvements.


> we have a custom .yaml spec for data pipelines in our product and the agent follows it as well as anything in the training data.

Doesn't this end up being way more expensive? You don't pay for model parameter activations but for the tokens in/out, so anything not in the training data (and therefore, not in the model) will cost you. I could make Opus use a new language I came up with if I wanted to, and with enough information it'd do an okay job... but it'd be more expensive and wasteful than just telling it to write the same algorithms in Python, and possibly a bit more error-prone. Same with frameworks and libraries.
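The extra cost is easy to estimate: a spec that isn't in the weights has to be re-sent as input tokens on every call. The sizes and rate below are placeholders, not real pricing:

```python
# Back-of-envelope cost of shipping a custom spec in-context on every call.
# All numbers are illustrative placeholders, not actual model pricing.

def extra_cost_per_month(spec_tokens: int, calls: int, usd_per_1m_input: float) -> float:
    """Marginal cost of re-sending a spec that isn't in the training data."""
    return spec_tokens * calls * usd_per_1m_input / 1_000_000

# e.g. a 3k-token spec, 5k agent calls/month, $15 per 1M input tokens
print(round(extra_cost_per_month(3_000, 5_000, 15.0), 2))  # 225.0
```

In practice prompt caching can cut the repeated-prefix cost substantially, so the real overhead depends heavily on the provider's caching discounts.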


> The data infrastructure underneath it took two years.

yep, that's what Definite is for: https://www.definite.app/

All the data infra (datalake + ELT/ETL + dashboards) you need in 5 minutes.


If I order now, do I get a second set for free?


tldr: this caches your S3 data in EFS.

we run datalakes using DuckLake and this sounds really useful. GCP should follow suit quickly.


I was thinking of using it with DuckDB as well, but it seems it would be of limited benefit. Parquet objects are in the MBs, so they would be streamed directly from S3. With raw Parquet objects, it might help with S3 listing if you have a lot of them (shave a couple of seconds off the query). If you are already on DuckLake, DuckDB will use that for getting the list of relevant objects anyway.


Maybe the OP is thinking of reading/writing to DuckDB native format files. Those require filesystem semantics for writing. Unfortunately, even NFS or SMB are not sufficiently FS-like for DuckDB.

Parquet is static append only, so DuckDB has no problems with those living on S3.


What does DuckDB need that NFS/SMB do not provide?


I am curious about this use case

How do you see it helping with DuckLake?


Latency, predicate pushdown.

Pre-compaction, the recent data can be in small files, and the delete markers will also be in small files. This will bring down fetch times, while DuckLake may already have many of the larger blocks in memory or disk cache.

Reading block headers for filtering means lots of small range requests; this could speed it up by 10x.
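The "lots of small ranges" pattern falls out of Parquet's layout: the file ends with a thrift metadata blob, a 4-byte little-endian length, and the `PAR1` magic, so a reader issues tiny tail reads before touching any data pages. A stdlib sketch with an in-memory stand-in file (the contents are fake; only the footer framing is real Parquet):

```python
import struct

# Parquet files end with: <metadata blob><4-byte LE length><"PAR1">.
# Readers find the metadata with two tiny ranged reads; that small-request
# pattern is exactly what a low-latency cache in front of S3 accelerates.
fake_metadata = b"\x15\x00" * 10  # stand-in for the thrift footer blob
fake_file = (
    b"PAR1"                       # leading magic
    + b"\x00" * 100               # stand-in for column chunks
    + fake_metadata
    + struct.pack("<I", len(fake_metadata))
    + b"PAR1"                     # trailing magic
)

tail = fake_file[-8:]                        # ranged read #1: last 8 bytes
(meta_len,) = struct.unpack("<I", tail[:4])
assert tail[4:] == b"PAR1"

meta_start = len(fake_file) - 8 - meta_len   # ranged read #2: the metadata
metadata = fake_file[meta_start:len(fake_file) - 8]
print(meta_len, metadata == fake_metadata)   # 20 True
```

Against real S3 each of those reads is a separate ranged GET with full request latency, which is why shaving per-request latency compounds across many small files.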


For files up to 100 kB, this should be really close to the same price as plain S3 for writes (I didn't check reads as much, but writes/PUTs are always much more expensive than reads/GETs).

Would be really useful pre-compaction and for dealing with the small-files issue without latency penalties.
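The comparison comes down to flat per-request billing (S3 PUT) versus per-byte billing; below the break-even file size, per-byte wins. The rates here are placeholders chosen for illustration, not quoted AWS prices:

```python
# Rough break-even for small-file writes: S3 charges a flat fee per PUT,
# a throughput-billed cache charges per byte written. Rates are placeholders.

def break_even_bytes(put_fee_usd: float, usd_per_gib_written: float) -> float:
    """File size below which per-byte billing beats a flat per-request fee."""
    return put_fee_usd / usd_per_gib_written * 2**30

# placeholder rates: $0.005 per 1,000 PUTs, $0.03 per GiB written
print(int(break_even_bytes(0.005 / 1000, 0.03)))  # ~179 kB at these rates
```

That lines up with the parent's observation: around the 100 kB mark the two billing models land in the same ballpark.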


I think this is a move to get business users paying consumption (per token) pricing for codex instead of a flat rate.


How would that work, when it's the flat rate subscription they've reduced the price of?


Tighter limits under the guise of "lower price!"


Yeah, the lede is buried a bit: these new rate cards seem to be moving toward token-based pricing, with the prior rates now labeled "legacy".

https://help.openai.com/en/articles/20001106-codex-rate-card https://help.openai.com/en/articles/11481834-chatgpt-rate-ca...


The prior rates in question there are message based pricing, not the flat rate subscription.


Hannes (one of the creators) had a pet duck


what's the use case for cortex? is anyone here using it?

We run a lakehouse product (https://www.definite.app/) and I still don't get who the user is for cortex. Our users are either:

non-technical: wants to use the agent we have built into our web app

technical: wants to use their own agent (e.g. claude, cursor) and connect via MCP / API.

why does snowflake need its own agentic CLI?


When you say just Cortex it is ambiguous, as there is Cortex Search, Agents, Analyst, and Code.

Cortex Code is available via web and CLI. The web version is good. I've used the CLI and it is fine too, though I prefer the visuals of the web version when looking at data outputs.

For writing code it is similar to Codex or Claude Code. I gather it is more data-focused than the other options and has great hooks into your Snowflake tables. You could do similar things with Snowpark and, say, Claude Code. I find Snowflake's focus on personas is more functional than purely technical, so Cortex Code fits well with that. Though if you want to do your own thing, you can use your own IDE and code agent, and there you are back to having the Cortex Code CLI as one option alongside Codex, Cursor, or Claude Code.


Because "stock price go up"?


claude code solved this about a month ago


I think you're reading it exactly right

