
Details are still emerging; the update in the last hour was that at least 5 different hacking groups were in Ubisoft's systems, and yeah, some might have got there via bribes rather than MongoDB https://x.com/vxunderground/status/2005483271065387461

I’ll give you $1000 to run Mongo.

Both Claude Pro and the Google Antigravity free tier have Opus 4.5.

If you want to add custom LSPs, they need to be wrapped in a Claude Code plugin, which is where the little bit of actual documentation can be found https://code.claude.com/docs/en/plugins-reference

There are two other sites for the time.nist.gov service, so it'll be okay.

Probably more interesting is how you get a tier 0 site back in sync - NIST rents out these cyberpunk-looking units you can use to get your local frequency standards up to scratch for ~$700/month https://www.nist.gov/programs-projects/frequency-measurement...


What happens in the event all the sites for time.nist.gov go down? Is it included in the spec?

Also thank you for that link, this is exactly the kind of esoteric knowledge that I enjoy learning about


Most high-availability networks use pool.ntp.org or vendor-specific pools (e.g., time.cloudflare.com, time.google.com, time.windows.com). These systems would automatically switch to a surviving peer in the pool.

Many data centers and telecom hubs use local GPS/GNSS-disciplined oscillators or atomic clocks and wouldn’t be affected.

Most laptops, smartphones, tablets, etc. would stay accurate enough for days before drift started to affect anything.

Kerberos typically requires clocks to be within 5 minutes of each other to prevent replay attacks (MIT krb5's default clockskew is 300 seconds), so those deployments would probably be OK.

Sysadmins would need to update hardcoded NTP configurations to point to secondary servers.
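
A minimal sketch of that fallback behavior, assuming the third-party ntplib package (the server order is illustrative):

    import ntplib  # pip install ntplib

    # Illustrative priority list: NIST first, public pools as fallback.
    SERVERS = ["time.nist.gov", "pool.ntp.org", "time.cloudflare.com"]

    def get_offset():
        client = ntplib.NTPClient()
        for host in SERVERS:
            try:
                # .offset is the estimated local-clock error vs the server, in seconds
                return host, client.request(host, version=3).offset
            except Exception:
                continue  # unreachable or down, try the next server
        raise RuntimeError("all NTP servers unreachable")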

If timestamps were REALLY off, TLS certificate validation might start failing, but that's highly unlikely.

Distributed databases could be corrupted if transaction ordering breaks down.

Financial exchanges are often legally required to use time traceable to a national standard like UTC(NIST). A total failure of the NIST distribution layer could potentially trigger a suspension of electronic trading to maintain audit trail integrity.

Modern power grids use synchrophasors that require microsecond-level precision for frequency monitoring. Losing the NIST reference would degrade the grid's ability to respond to load fluctuations, increasing the risk of cascading outages.


Great list! I just double-checked the CAT timekeeping requirements [1], and the requirement is sync to NIST specifically, i.e. a subset of all UTC sources.

You don’t need to actually sync to NIST. I think most people PTP/PPS to a GPS-connected grandmaster with high-quality crystals.

But one must report deviations from NIST time, so CAT Reporters must track it.
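
A rough sketch of the tracking side, again assuming ntplib (the threshold and log format here are made up for illustration, not the actual CAT tolerance; see [1] for that):

    import time
    import ntplib  # pip install ntplib

    THRESHOLD_S = 0.050  # illustrative only, not the real CAT tolerance

    def log_nist_deviation():
        # Record how far the local clock has drifted from NIST, for the audit trail.
        resp = ntplib.NTPClient().request("time.nist.gov", version=3)
        flag = " OVER_THRESHOLD" if abs(resp.offset) > THRESHOLD_S else ""
        print(f"{time.time():.3f} offset_from_nist={resp.offset:+.6f}s{flag}")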

I think you are right — if there is no NIST time signal then there is no properly auditable trading, and thus no trading. MiFID II has similar requirements but I'm not familiar with the details.

One of my favorite nerd possessions is my hand-signed letter from Judah Levine with my NIST Authenticated NTP key.

[1] https://www.finra.org/rules-guidance/rulebooks/finra-rules/6...


Considering how many servers are in existence, probably the exact same procedure for starting a brand new one?

I must have one of those units oh my god

Someone needs to sell replicas (forgive the pun) of these.

It's like a toaster oven, but it toasts time.

To get production-level performance, you do need RDMA-capable hardware.

However, vLLM supports multi-node clusters over normal ethernet too https://docs.vllm.ai/en/stable/serving/parallelism_scaling/#...
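
As a rough sketch of what that looks like with vLLM's Python API (the model name and parallel sizes are placeholders, and each node still needs to join a Ray cluster first, per the linked docs):

    from vllm import LLM

    # Illustrative: shard one model across 2 nodes x 4 GPUs over plain TCP/ethernet.
    # Assumes `ray start --head` / `ray start --address=...` was already run on each node.
    llm = LLM(
        model="meta-llama/Llama-3.1-70B-Instruct",  # placeholder model
        tensor_parallel_size=4,          # GPUs per node
        pipeline_parallel_size=2,        # number of nodes
        distributed_executor_backend="ray",
    )
    print(llm.generate("Hello")[0].outputs[0].text)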



Their listed number on jaxsheriff.us? What if they bought Google ads to get the first result for the Jacksonville Sheriff's Office?


https://www.askmodu.com/rankings independently aggregates traffic from a variety of agents, and Amp consistently has the highest success rate for small and large tasks.

That aligns with my anecdata :)


Thanks for the link.

But my first thought looking at this is that the numbers are probably skewed by the distribution of user skill levels and by which types of users choose which tool.

My hypothesis is that Amp is chosen by people who are VERY highly skilled in agentic development. Meaning these are the people most likely to provide solid context, good prompts, etc. That means these same people would likely get the best results from ANY coding agent. This also tracks with Amp being so expensive -- users or companies are more likely to pay a premium if they can get the most from the tool.

Claude Code on the other hand is used by (I assume) a way larger population. So the percentage of low-skill users is likely to be much higher. Those users may still get value from the tool, but their success rate will be lower by some factor with ANY coding agent. And this issue (if my hypothesis is correct) is likely 10x as true for GitHub Copilot.

Therefore I don't know how much we should read into stats like the total PR merge success percentage, because it's hard to tell the degree of noise caused by this user skill distribution imbalance.
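
To make that concrete with completely made-up numbers: even if both tools performed identically for a given skill level, their aggregate merge rates would diverge from the user mix alone:

    # Made-up numbers: identical per-skill success rates, different user mixes.
    def aggregate(p_high, p_low, frac_high):
        return p_high * frac_high + p_low * (1 - frac_high)

    amp = aggregate(0.80, 0.40, frac_high=0.90)      # mostly expert users
    copilot = aggregate(0.80, 0.40, frac_high=0.20)  # mostly casual users
    print(amp, copilot)  # 0.76 vs 0.48, same tool quality either way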

Still interesting to see the numbers though!


The co-diagnosis of autism and ADHD became possible with the DSM-5 in 2013. According to the scientific literature, 50 to 70% of individuals with autism spectrum disorder (ASD) also present with comorbid attention deficit hyperactivity disorder (ADHD).

https://pmc.ncbi.nlm.nih.gov/articles/PMC8918663/


Wan 2.2: "This generation was run on an RTX 3060 (12 GB VRAM) and took 900 seconds to complete at 840 × 420 resolution, producing 81 frames." https://www.nextdiffusion.ai/tutorials/how-to-run-wan22-imag...

