
A lot of progress is being made here on the Cursor side; I encourage you to try it again.

(Cursor dev)


As someone with a 5:38 delta, I'm very anxiously waiting for the BAA to announce the official cutoff.

In the meantime, if you're at all curious about the lengths people go to when trying to predict the cutoff, check out this blog[1]. It's by Brian Rock[2], who every year collects data from a lot of marathons all over the world and then tries to guess the official cutoff for the Boston Marathon. Very cool stuff!

[1]: https://runningwithrock.com/boston-marathon-cutoff-time-trac...

[2]: https://runningwithrock.com/about-me/


Brian Rock's tracker is great. It's a ton of work to collect and maintain that estimate throughout the year, so we hope he keeps it up!


You should look at [Modal](https://modal.com/) (not affiliated).


Lovable runs on Modal Sandboxes.


I wonder if it was geolocation? Anthropic is based in SF, the author seems to be based in Munich, and maybe they're not open to hiring people who aren't based in the US right now? Given the state of US visas right now, this wouldn't shock me.


My company, which is significantly smaller, hires people in multiple countries across the world. You don't need an office to hire (I'm sure there do exist countries where you do, but I expect they're the minority).


Having worked at such companies, I can say that switching to that mode requires very different processes.


London too.


After Brexit, that's still quite a hassle.


Phenomenal comment, thank you for writing it, made my day :)


The whole point of this interview is that the candidate is operating in a single-threaded environment.


These are multiple assumptions ("this queue is only on one machine and on one thread"). What's the real-world use case here? I'm not saying there's none, but make it clear. I wouldn't want to work for a company that asks some contrived, hyper-specific question instead of, e.g., "when would you not use MySQL?"


I guess I don’t want to hire candidates who assume the world is single-threaded.


Yeah, we have a PR in the works for this (https://github.com/appdotbuild/platform/issues/166); it should be fixed tomorrow!


Alright, sounds good. Question: which LLM does this use out of the box? Is it using the models provided by GitHub (after I give it access)?


If you run it locally, you can mix and match any Anthropic / Gemini models. As long as it satisfies this protocol (https://github.com/appdotbuild/agent/blob/4e0d4b5ac03cee0548...), you can plug in anything.

We have a similar wrapper for local LLMs on the roadmap.

If you use the CLI only, we run Claude 4 + Gemini on the backend, with Gemini serving most of the vision tasks (frontend validation) and Claude doing the core codegen.
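To make "plug in anything" concrete, here's a rough sketch of what a backend satisfying such a protocol could look like. All names below (LLMBackend, Completion, LocalBackend, run_codegen) are hypothetical illustrations, not the actual interface from the repo:

```python
# Rough sketch of a pluggable LLM protocol. All names are illustrative;
# the real interface lives in the appdotbuild/agent repo linked above.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Completion:
    text: str
    model: str


class LLMBackend(Protocol):
    """Anything implementing this shape can be plugged into the agent."""

    def complete(self, prompt: str, *, max_tokens: int = 1024) -> Completion:
        ...


class LocalBackend:
    """Example: a wrapper around a local inference server (vLLM, llama.cpp, ...)."""

    def complete(self, prompt: str, *, max_tokens: int = 1024) -> Completion:
        # Call your local server here and wrap its response.
        return Completion(text="...", model="local-model")


def run_codegen(backend: LLMBackend, task: str) -> str:
    # The agent depends only on the protocol, never on a vendor SDK.
    return backend.complete(f"Build an app that: {task}").text
```

The point of structural typing here is that a local wrapper and a hosted Claude/Gemini client are interchangeable, as long as they expose the same complete() shape.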


We use both Claude 4 and Gemini by default (for different tasks). But the idea is you can self-host this and use other models (and even BYOM - bring your own models).


OP here, everything is available on GitHub:

- https://github.com/appdotbuild/agent

- https://github.com/appdotbuild/platform

And we also blogged[1] about how the whole thing works. We're very excited to get this out, but we still have a ton of improvements we'd like to make. Please let us know if you have any questions!

[1]: https://www.app.build/blog/app-build-open-source-ai-agent


Probably not the question you want to hear, but what template or CSS lib is the landing page using? I'm really in love with it.



But you can get more availability and more durability with much easier alternatives:

- Availability: spin up more read replicas.

- Durability: spin up more read replicas and also write to S3 asynchronously.

With Postgres on Neon, you can have both of these very easily. Same with Aurora.
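To make the async-durability bullet concrete, here's a generic sketch of that pattern (not Neon's or Aurora's actual implementation): Postgres hands each finished WAL segment to archive_command, and a small script ships it to S3 off the commit path. The bucket name and script wiring below are made up:

```python
#!/usr/bin/env python3
# Generic sketch: ship finished WAL segments to S3, asynchronously
# relative to commits. Hypothetical wiring in postgresql.conf:
#   archive_mode = on
#   archive_command = 'archive_wal.py %p %f'   # %p = path, %f = file name
import sys

import boto3

BUCKET = "my-wal-archive"  # hypothetical bucket


def main() -> int:
    wal_path, wal_name = sys.argv[1], sys.argv[2]
    # Postgres recycles the segment only after this exits 0, so a failed
    # upload gets retried instead of silently dropping WAL.
    boto3.client("s3").upload_file(wal_path, BUCKET, f"wal/{wal_name}")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Commits don't wait on the upload, which is exactly the caveat the replies below pick up on.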

(Disclaimer: I work at Neon)


This doesn’t seem to provide higher write availability, and if the read replicas are consistent with the write replica, this design must surely degrade write availability as it improves read availability, since the write replica must update all of the read replicas.

This also doesn’t appear to describe a higher-durability design at all by normal definitions (in the context of databases, at least) if it’s async…?


Yeah, this is not about write availability; but as the OP/author points out, scaling writes is not the bottleneck for most apps.


I think you may have misunderstood the GP and are perhaps misusing terminology. You cannot meaningfully scale vertically to improve write availability, and if you care about availability a single machine (and often a primary/secondary setup) is insufficient.

Even if you only care about scaling reads, eventually the 1:N write:read replica ratio will become too costly to maintain, and long before you reach that point you likely sacrifice real-time isolation guarantees to maintain your write availability and throughput.


> You cannot meaningfully scale vertically to improve write availability

Disagree. Even if you limit yourself to the cloud, r7i/r8g.48xl gets you 192 vCPU / 1.5 TiB RAM. If you really want to get silly, x2iedn.32xl is 128 vCPU / 4 TiB RAM, and you get 3.8 TiB of local NVMe storage for temp tablespace. The money you’ll pay ($16.5K to $44K per month, depending on the specific instance class) would pay for a similarly spec’d server in the same amount of time, though.

Which brings me to the novel concept of owning your own hardware. A quick look at Supermicro’s site shows a 2U w/ up to 1.92 PB of Gen5 NVMe, 8 TiB of RAM, and dual sockets. That would likely cost a wee bit more than a month of renting the aforementioned AWS VM, but a more reasonably spec’d one would not. Realistically, that much storage would be used as SDS for other DBs to use. NVMoF isn’t quite as fast as local disks, but it’s a hell of a lot faster than EBS et al.

The point is that you actually can vertically scale to stupidly high levels, it’s just that most companies have no idea how to run servers anymore.

> and if you care about availability a single machine (and often a primary/secondary setup) is insufficient.

Depending on your availability SLOs, of course, I think you’d find that a two-node setup (optionally having N read replicas) with one in standby would be quite sufficient. Speaking from personal experience on RDS (MySQL fronted with ProxySQL on K8s, load balanced with NLB), I experienced a single outage in two years. When it happened, no one noticed, it was so brief. Some notice-only alerts for 500s in Slack, but no pages went out.


> If you really want to get silly, x2iedn.32xl is 128 vCPU / 4 TiB RAM, and you get 3.8 TiB of local NVMe

This doesn't affect availability - except insofar as unavailability might be caused by insufficient capacity, which is not the typical definition.

> Depending on your availability SLOs, of course

Yes, exactly. Which is the point the GP was making. You generally make the trade-off in question not for performance, but because you have SLOs demanding higher availability. If you do not have these SLOs, then of course you don't want to make that trade-off.


> This doesn't affect availability - except insofar as unavailability might be caused by insufficient capacity, which is not the typical definition.

I agree, but it seemed to me that GP was using it as such: "You cannot meaningfully scale vertically to improve write availability"


The big caveat about these configurations is the amount of time it takes to rebuild a replica due to the quantity of storage per node that has to be pushed over the network. This is one of the low-key major advantages of disaggregated storage.

I prefer to design my own hardware infrastructure but there are many operational tradeoffs to consider.


> you likely sacrifice real-time isolation guarantees to maintain your write availability and throughput

No worries there: in all likelihood, isolation has already been killed twice. Once by running the DB at READ COMMITTED, and a second time by using an ORM like EF to read data into your application, fiddle with it in RAM, and write the new (unrelated-to-what-was-read) data back to the DB.

In other words, we throw out all that performant 2010-2020 NoSQL & eventual-consistency tech and go back to good old-fashioned SQL & ACID, because everyone knows SQL and ACID is amazing. Then we use LINQ/EF instead, because it turns out no one actually wants to touch SQL, and full isolation is too slow, so that gets axed too.
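For anyone who hasn't seen it in the wild, here's a minimal sketch of that read/fiddle/write pattern losing an update under READ COMMITTED. It uses plain psycopg2 rather than EF, and the table and DSN are made up, but it's the same shape a naive ORM save() produces:

```python
# Lost-update anomaly under READ COMMITTED (psycopg2; names illustrative).
# Assumes: CREATE TABLE accounts(id int, balance int);
#          INSERT INTO accounts VALUES (1, 100);
import psycopg2

DSN = "dbname=demo"  # hypothetical connection string

a = psycopg2.connect(DSN)
b = psycopg2.connect(DSN)

with a.cursor() as ca, b.cursor() as cb:
    # Both sessions read balance = 100.
    ca.execute("SELECT balance FROM accounts WHERE id = 1")
    cb.execute("SELECT balance FROM accounts WHERE id = 1")
    bal_a, bal_b = ca.fetchone()[0], cb.fetchone()[0]

    # Each "fiddles with it in RAM" and writes back an absolute value,
    # which is what a naive ORM save() does.
    ca.execute("UPDATE accounts SET balance = %s WHERE id = 1", (bal_a + 10,))
    a.commit()
    cb.execute("UPDATE accounts SET balance = %s WHERE id = 1", (bal_b + 10,))
    b.commit()

# Final balance is 110, not 120; the first increment was silently lost.
```

A SELECT ... FOR UPDATE, a relative UPDATE (SET balance = balance + 10), or REPEATABLE READ would each prevent this; the pattern described above does none of them.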


No loss of committed transactions is acceptable to any serious business.

> I work at Neon

In my opinion, distributed DB solutions without synchronous write replication are DOA. Apparently a good number of people don't share this opinion, because there's a whole cottage industry around such solutions, but I would never touch them with a 10-foot stick.


