Hacker News | iddan's comments

They use a real browser

Syntax looks cool. I would have expected proper syntax highlighting on the website (and plugins for IDEs). The website is currently too convoluted until you see the actual syntax; I would highly suggest putting a code block up front. For a good reference, see the Ruby landing page: https://www.ruby-lang.org/en/

I see syntax highlighting on the website, FWIW

React Query is a lot leaner and safer than hand-rolled Redux, even with RTK Query

What’s preventing you from putting all of those in a single parent directory and booting into it?


The rich and complex history of spreadsheets inspired me to build React Spreadsheet. Along the way I deepened my (and others') understanding of the complexities and intricacies of spreadsheets: https://iddan.github.io/react-spreadsheet


Thanks for that ad-read and self-promotion! Maybe next time you can contribute some insight that doesn't feed your balance (spread)sheet.


The regular ones are okay. The watchlist is diabolical. Unvoted as soon as I saw the watchlist.


half the watchlist is literally vaporware


So we are reinventing the docs/*/*.md directory? /s I think this is a good idea; I just don't really get why you would need a tool around it


One of the things that I've been chewing on lately is the sync problem. Having a CI job that identifies places where the docs have drifted from the implementation seems pretty valuable.


The Python community figured this out in 2001: https://docs.python.org/3/library/doctest.html


I don't think this is related in any way.


> Having a CI job that identifies places where the docs have drifted from the implementation seems pretty valuable.

https://docs.python.org/3/library/doctest.html

> To check that a module’s docstrings are up-to-date by verifying that all interactive examples still work as documented. To perform regression testing by verifying that interactive examples from a test file or a test object work as expected. To write tutorial documentation for a package, liberally illustrated with input-output examples. Depending on whether the examples or the expository text are emphasized, this has the flavor of “literate testing” or “executable documentation”.

Seems pretty related to me.
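To illustrate the quoted doctest use case, here is a minimal sketch of "executable documentation" that a CI job could run; the `add` function is hypothetical, invented just for this example:

```python
import doctest


def add(a, b):
    """Return the sum of a and b.

    The interactive example below is both documentation and a test;
    if the implementation drifts from it, doctest reports a failure.

    >>> add(2, 3)
    5
    """
    return a + b


if __name__ == "__main__":
    # testmod() runs every interactive example found in this module's
    # docstrings and returns (failure_count, attempted_count).
    results = doctest.testmod()
    print(results.failed)
```

Running this module (e.g. as a CI step) prints `0` while the documented examples still match the implementation, and a nonzero count once they drift.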


I appreciate doctest very much, but those aren’t the kind of documentation I’m worried about drifting. I’m thinking more on the “how does this communication protocol between this server and this client work?”, which is generally terrible to try to summarize in doctests. If you want to take the idea to the extreme, imagine a CI test that answers “does this server implementation conform to this RFC?”


Not really.

> Having a CI job that identifies places where the docs have drifted from the implementation seems pretty valuable.

Testing with lat isn't about ensuring consistency of code with public API documentation. It is about:

* ensuring you can quickly analyze what tests were added / changed by looking at the English description

* ensuring you spot when an agent randomly drops or alters important functional/regression tests

The problem with coding agents is that they produce enormous diffs, and while reading test code is very important, in practice your focus and attention drift and you can't do a thorough analysis.

This isn't a new problem, though; the same thing applies to classic code reviews -- coding is rarely the bottleneck, it's getting reviews from humans to vet the change.

Lat shifts the focus from reading test code to understanding the semantics of the test. And because instead of reviewing 2000 lines of code you can focus on reviewing only a 100-line change in lat.md, you'll be able to control your tests and implementation more tightly.

For projects where code quality isn't paramount, I now just glance over the code to spot anti-patterns and models failing to keep things DRY, resorting to duplicating large swaths of code.


I am guessing that since Google is vertically integrated and "actually pays" for AI infra (compared to OpenAI and Anthropic, which receive hardware through partnerships), they have a more urgent incentive to reduce model sizes. Also, Google and Apple will be the first to gain from running models on-device


I can assure you OpenAI and Anthropic pay for hardware. They don’t receive it for free.


This seems to be an inference-time optimization and they are putting AI on every search result page. That seems like plenty of incentive to optimize.


Worked with a CTO who had the same rule of thumb. I quickly proved that strategic testing is net positive for the business


The prophecy of the hypermedia web


I feel like I haven’t read anything about this in combination with MCP, and like I am taking crazy pills: does no one remember HATEOAS?

