I don't like the "I canceled my x subscription" hype posts, but I did cancel Figma today. We've barely used it in months and this was the nail in the coffin.
Show HN: I've built a C# IDE, Runtime, and AppStore inside Excel
670 points | 179 comments
One of the main use cases was analyzing Excel data with SQL. I'm the kind of nerd who loves stuff like that, but it seems completely obsolete now.
this was true a year ago, but if you give an agent a new spec to follow (e.g. a .md file), it will follow it.
we have a custom .yaml spec for data pipelines in our product and the agent follows it as well as anything in the training data.
while I agree you don't need to build a new thing "for agents", you can get them to understand new things that are not in the training data very easily.
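For illustration only, a custom pipeline spec of the kind described might look something like this (an entirely hypothetical schema; the comment doesn't show the real one):

```yaml
# Hypothetical data-pipeline spec -- not the actual product's format.
pipeline: orders_daily
source:
  type: s3
  path: s3://raw/orders/*.parquet
steps:
  - op: filter
    expr: "status = 'complete'"
  - op: aggregate
    group_by: [customer_id]
    metrics: {total: "sum(amount)"}
sink:
  type: table
  name: analytics.orders_daily
```

The point is that nothing here needs to be in the training data: a short spec document in the prompt is enough for the agent to emit valid instances.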
Just because they can doesn't mean inventing a new framework "for agents" is going to be superior to letting agents use what's in their training data. I suspect it'll be worse, but the time/resources needed to prove that are beyond what I'd be willing to invest.
What makes something like this "for agents", anyway? It's opinionated...a human's opinions, I assume, since agents don't want anything and thus can't have opinions. But many existing tools are opinionated. Types are good for agents, because they keep them honest, but many existing things in this space have types. Python is good for agents, because there's a shitload of Python code and documentation in their training data, but many existing things are built with Python (and TypeScript, Go, and Rust are also typed languages and well-represented in the training data).
I dunno. I think a lot of folks are sitting around with an agent thinking, what can I build? And, a lot of things "for agents" are being built, as a result. I think most of them don't need to be built and don't improve software development with agents. They often just chew up context and cache with extra arbitrary rules the agent needs to follow without delivering improvements.
> we have a custom .yaml spec for data pipelines in our product and the agent follows it as well as anything in the training data.
Doesn't this end up being way more expensive? You don't pay for model parameter activations but for the tokens in/out, meaning that anything not in the training data (and therefore not baked into the model) will cost you. I could make Opus use a new language I came up with if I wanted to, and it'd do an okay job with enough information... but it'd be more expensive and wasteful than just telling it to write the same algorithms in Python, and possibly a bit more error prone. Same with frameworks and libraries.
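A rough sketch of that cost argument, with entirely made-up per-token prices and token counts (none of these numbers come from any real provider's pricing):

```python
# Back-of-envelope cost of shipping a custom spec in every request.
# All values below are illustrative assumptions, not real pricing.
PRICE_PER_INPUT_TOKEN = 15 / 1_000_000   # $15 per million input tokens (assumed)

SPEC_TOKENS = 4_000       # custom-language spec injected into each prompt (assumed)
REQUESTS_PER_DAY = 500    # agent invocations per day (assumed)

# Extra daily spend just to teach the model something it wasn't trained on:
extra_cost_per_day = SPEC_TOKENS * REQUESTS_PER_DAY * PRICE_PER_INPUT_TOKEN
print(f"${extra_cost_per_day:.2f} per day")  # $30.00 per day under these assumptions
```

Whether that overhead matters obviously depends on volume; the point is only that the "not in the weights" knowledge is paid for again on every request.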
I was thinking of using it with DuckDB as well, but it seems it would be of limited benefit. Parquet objects are in the MBs, so they would be streamed directly from S3. With raw Parquet objects, it might help with S3 listing if you have a lot of them (shaving a couple of seconds off the query). If you are already on DuckLake, DuckDB will use that for getting the list of relevant objects anyway.
Maybe the OP is thinking of reading/writing to DuckDB native format files. Those require filesystem semantics for writing. Unfortunately, even NFS or SMB are not sufficiently FS-like for DuckDB.
Parquet is static append only, so DuckDB has no problems with those living on S3.
Pre-compaction, the recent data can be in small files, and the delete markers will also be in small files. This brings down fetch times, while DuckLake may already have many of the larger blocks in memory or disk cache.
Reading block headers for filtering means lots of small range reads; this could speed it up by 10x.
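That small-range access pattern is easy to see in how Parquet readers locate metadata: a file ends with a footer, a 4-byte little-endian footer length, and the `PAR1` magic, so a reader issues tiny ranged reads from the tail before touching any data. A stdlib-only sketch (the stand-in "file" below is a fake; real footers are Thrift-encoded):

```python
import io
import os
import struct

def read_footer(f):
    """Locate a Parquet file's footer with two small 'range reads' --
    the same access pattern a query engine issues against S3."""
    f.seek(-8, os.SEEK_END)                    # range read #1: last 8 bytes
    footer_len, magic = struct.unpack("<I4s", f.read(8))
    assert magic == b"PAR1", "not a Parquet file"
    f.seek(-(8 + footer_len), os.SEEK_END)     # range read #2: footer bytes only
    return f.read(footer_len)

# Minimal stand-in blob following the Parquet tail layout:
# header magic | row-group bytes | footer | footer_len (LE u32) | magic.
footer = b"metadata-placeholder"
blob = b"PAR1" + b"row-group-data" + footer + struct.pack("<I", len(footer)) + b"PAR1"

print(read_footer(io.BytesIO(blob)) == footer)  # True
```

Multiply that by one header read per candidate file and it's clear why shaving per-request latency on many small GETs adds up.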
For files up to 100 kB in size, this should effectively be really close to the same price as S3 when writing (I didn't check reading as much, but writes/PUTs are always much more expensive than reads/GETs).
Would be really useful pre-compaction, and for dealing with the small-files issue without latency penalties.
When you say just Cortex, it is ambiguous, as there is Cortex Search, Agents, Analyst, and Code.
Cortex Code is available via web and CLI. The web version is good. I've used the CLI and it is fine too, though I prefer the visuals of the web version when looking at data outputs. For writing code it is similar to Codex or Claude Code. It is more data focused than other options, I gather, and has great hooks into your Snowflake tables. You could do similar things with Snowpark and, say, Claude Code. I find Snowflake's focus on personas is more functional than purely technical, so Cortex Code fits well with that. Though if you want to do your own thing, you can use your own IDE and code agent, and then you are back to having the Cortex Code CLI as one option alongside Codex, Cursor, or Claude Code.