Hacker News: diggan's comments

Isn't that up to the reader/visitor/user to decide? As it stands right now, Cursor is publishing results without saying how they got them, comparing them against aggregate scores whose true values we don't know, and you're saying "it doesn't matter, the tool is better anyway".

Then why publish the obscured benchmarks in the first place?


No, I said I don't believe any of the existing benchmarks do well when it comes to using a tool chain. They built a model specifically to be used with their tool chain's calls, something that a lot of the models out there struggle with.


More like looking at a thin net preventing mosquitoes from biting your skin, as there is some intention behind it, not just physics.


Don't we have more internet submarine cables and fewer single points of failure in our internet infrastructure today than years ago? If so, shouldn't that make it easier to route around failures?

The web though I agree isn't very decentralized.


Considering that the AWS outage took out a lot of lines of communication (email, video, chat systems) for both commercial and government entities, I'd say that US-East-1 is a pretty big single point of failure. Even if it didn't result in infrastructure impact directly, if there were some kind of infrastructure issue and you had delayed or unavailable communications, how would you know? How quickly could a response be mounted? There are some parts of the infrastructure that could damage themselves irreparably in the time it would take to fix the outage or get comms routed through a backup channel - like parts of the electrical grid or water treatment plants.

An attacker (read: nation-state actor) wouldn't even need to take down US-East-1; they could just take advantage of the outage.

I assume (hope?) there's some kind of backup comms plan or infra in place for critical events, but I don't actually know.


Maybe yes in that regard. But in the past, most organizations ran their own mail and web servers. Software supporting the business ran on-prem. Now they use Google or Azure or AWS. So business and civilian usage, at least, seem more vulnerable now.


We sacrificed resilience for efficiency. Now things are much more fragile and liable to exploitation.


I'm a European with an Audi, and I've been looking at switching to an EV. I can't pretend BYD doesn't look like better value for the money than the Audi alternatives, so this doesn't really surprise me.


And that's despite the EU tariffs on Chinese manufacturers.


Well, in my country I can get up to 7K EUR back if I purchase an EV before the money runs out, so I'm not sure the tariffs end up having any impact.


Then again, Chinese car companies are price dumping on purpose to kill the competition.


Chinese cars are a lot more expensive outside of China than inside it. And the difference is not just tariffs and shipping; they earn a lot more profit per car on exported cars than on ones sold domestically.

IOW, they're not dumping, they're doing the opposite. I don't know why.


> IOW, they're not dumping, they're doing the opposite. I don't know why.

If they're doing the opposite yet end up a lot cheaper with higher quality and more features, I'm really not sure what's going on. I don't see how they could be cheaper either.


I don't know why, but I can guess.

1. Competition in China is fierce; there are over 100 car companies, and a large fraction of them are losing money. BYD et al are lowering prices to drive their competitors in China bankrupt. They've got higher prices outside of China to enable this behaviour.

2. They're quite sensitive to the dumping charges, so they bend over backwards to ensure they aren't dumping.


> on Nov. 1, would apply export controls “on any and all critical software,” pushing the technology stocks lower after hours.

What sort of software would be impacted by this? Almost anything could be critical depending on the context.


Semiconductor design pipeline software.


That is already under export control, with companies like Huawei completely shut out, yet still able to produce.

Obviously plenty of Chinese alternatives exist at this point.


If it was shot on film, isn't it possible to get 4K from it? I thought that was old news already.


Are you using it over Ethernet or WiFi? I remember trying Moonlight to a local computer two or three years ago over Ethernet, and the latency was still too bad. Any idea whether that's better today?


If you were using a TV streaming stick, many have slow Ethernet due to a slow port (Micro-USB), slow PHY hardware (100 Mbps), or a slow network stack. The popular streaming apps only need 25 Mbps max, so most stick makers put no effort into design or validation testing beyond that minimal use case. And they don't care about latency.
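As a back-of-envelope check (the bitrates below are my own illustrative assumptions, not measured figures), here's why a 100 Mbps stick port is fine for streaming apps but marginal for game streaming:

```python
# Why a 100 Mbps (Fast Ethernet) stick port is marginal for game streaming.
# All bitrates are rough illustrative assumptions.
streaming_app_mbps = 25     # typical max for popular streaming apps
moonlight_4k60_mbps = 80    # a plausible Moonlight 4K60 HDR setting
port_mbps = 100             # 100 Mbps PHY found on many sticks
overhead = 0.06             # rough protocol/encapsulation overhead fraction

usable = port_mbps * (1 - overhead)
print(f"usable bandwidth: {usable:.0f} Mbps")
print("streaming app fits:", streaming_app_mbps <= usable)
print("game-stream headroom:", round(usable - moonlight_4k60_mbps), "Mbps")
```

With these numbers there's only ~14 Mbps of headroom for a game stream, and none at all if the hub or network stack can't sustain the PHY's rated speed.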

I use Moonlight over direct 1 Gbps Ethernet from a high-end gaming PC in the same house through a Google Chromecast 4K HDMI dongle, with a powered USB-C hub for the RJ-45 input, and it works flawlessly at 60 fps 4K 10-bit HDR with around 12 ms video latency. Some USB 3 hubs and USB Ethernet dongles won't reach full speed on some streaming devices' USB ports; the second one I tried worked at the full 1 Gbps.

You have to verify that every software and hardware component in the chain is working at high speed and low latency in your environment, with a local speed test hosted on your source machine. I used self-hosted OpenSpeedTest. Moonlight works great, but none of the consumer streaming sticks or USB hub/RJ-45 dongles are tested for high speed/low latency across dozens of different device port hardware/firmware combos, so you can't trust claimed specs. Assume it's slow until you verify it's not.
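To make "verify it yourself" concrete, here's a minimal Python sketch of measuring round-trip latency over a local TCP socket. It's purely illustrative (the echo server just stands in for the source machine); in practice you'd point a real tool like self-hosted OpenSpeedTest or iperf3 at the actual source PC across the real network path:

```python
import socket
import threading
import time

# Tiny TCP echo server standing in for the streaming source machine.
def echo_server(listener):
    conn, _ = listener.accept()
    with conn:
        while (data := conn.recv(1024)):
            conn.sendall(data)

server = socket.socket()
server.bind(("127.0.0.1", 0))   # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# Measure round-trip time for a small payload, akin to an input packet.
client = socket.create_connection(("127.0.0.1", port))
samples = []
for _ in range(20):
    start = time.perf_counter()
    client.sendall(b"ping")
    client.recv(4)
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds
client.close()

avg_ms = sum(samples) / len(samples)
print(f"avg round-trip: {avg_ms:.2f} ms")
```

On localhost this should come out well under a millisecond; a surprisingly high number on your real link points at a slow hub, PHY, or network stack somewhere in the chain.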


Moonlight works flawlessly for me and I use FreeBSD as a daily driver. Of all OSes to play games.

UnRaid + KVM VM + GPU Passthrough with Moonlight has meant I no longer have to dual boot to game.

60 FPS at 1080p on a 4K screen. 4K struggles, but I think that's more my GPU than anything else. I do have 2x of them.


I'm assuming you don't play many games with anticheat, though, since they'd flag it running in a VM.


It goes gaming desktop PC -> Ethernet -> fiber -> 5G -> WiFi -> Amazon Fire stick in a flat 100 km away from the PC, and I still finished Expedition 33 on it with no problems.

I'd say definitely give it another go.


Unfortunately it seems to be Mac/iPhone only. Any cross-platform alternatives?


Any possibility of getting all of those things except the animations themselves? Mostly curious about the auto rigging, auto bone structure, and layered image output, so I could do the animation myself with Spine yet use the tool to skip the annoying rigging/setup steps :) But currently it kind of forces me to generate the rigging + animation together.

Also, being able to touch up the model between Step 1 and Step 2 would probably be a neat addition, I could see some minor faults I could quickly clean up if I could download the generated .glb and import a new version, so Step 2 doesn't rely on a model containing mistakes.

Otherwise, this looks pretty damn good. The UX is slightly confusing at first, and the colors are all over the place on the website itself, but it does seem to work better than I expected. Kudos for making the life of animators easier :)

Edit: two minor notes. The download URLs don't actually trigger a browser download, but instead show the file in the browser, and the uploaded files seem to end up being semi-public, with no authentication at all. You might want to tell people not to upload private content just yet, or put uploaded/generated data behind auth. You might also want to zip up all the files (so the directories for the images are correct too) and offer a download of that instead.

Edit 2: Looking at the generated .atlas, are all the names correct? I'm seeing "animated_Orc Idle_7328aa0f" even in the .atlas files generated from my own characters, using a walking animation. The attachments in the skins also reference "animated_Orc" in the generated .json; maybe it's just some static string that isn't supposed to change?


I kind of do this, semi-manually, when using the web chat UIs (which happens less and less). I basically never let the conversations go above two messages in total (one message from me + one reply, since the quality of responses drops off so damn quickly), and if anything is wrong, I restart the conversation and fix the initial prompt so it gets it right. And rather than manually writing my prompts in the web UIs, I manage prompts with http://github.com/victorb/prompta, which makes it trivial to edit the prompts as I find out the best way of getting the response I want, together with some simple shell integrations to automatically include logs, source code, docs and whatnot.
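The workflow above can be sketched in a few lines of Python (the file names and layout here are my own illustration, not prompta's actual format): keep the base prompt in a file, splice fresh context like a log tail into it on each run, and send the result as the first message of a brand-new conversation.

```python
import pathlib
import tempfile

# Illustrative files standing in for a stored prompt and a build log.
workdir = pathlib.Path(tempfile.mkdtemp())
(workdir / "prompt.md").write_text(
    "You are reviewing a build failure. Be terse.\n"
)
(workdir / "build.log").write_text(
    "warning: unused variable\nerror: undefined reference to `main`\n"
)

# Splice the last 20 log lines into the stored prompt template.
template = (workdir / "prompt.md").read_text()
log_tail = (workdir / "build.log").read_text().splitlines()[-20:]
final_prompt = template + "--- recent log ---\n" + "\n".join(log_tail) + "\n"

print(final_prompt)  # paste this as the single first message of a new chat
```

The point of regenerating the whole first message each time is that a bad response never accumulates in the context; you edit the template and start clean.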


I work similarly. I keep message rounds short (1-3) and clear often. If I have to steer the conversation too much, I start over.

I built a terminal tui to manage my contexts/prompts: https://github.com/pluqqy/pluqqy-terminal

