
Oh come on. That pays for more than 10 FTEs in some countries.

I made this joke with "$1,500-$2,000 per month" last night and everyone thought I was serious.

I know people who burned several hundred dollars a day and still found it worth it.

Were they actually making money though? A lot of the people at the forefront of this AI stuff seem like cult leaders and crackheads to me.

I'd pay up to $1,000 pretty easily just based on the time it saves me on a lot of grindy work, which frees me up for higher-value stuff.

It's not 10x by any means, but at most dev salaries it doesn't need to be in order to pay for itself. A 1.5x improvement alone is probably enough for most above-junior developers for a company to justify $1,000/month.

I suppose if your area of responsibility isn't very broad, the value decreases pretty quickly, so maybe there's less value for people at very large companies?


I can see $200 but $1,000 per month seems crazy to me.

Using Claude Code for one year is worth the same as a used sedan (i.e., ~$12,000) to you?

You could be investing that money!


Yes, easily. Paying for Claude would be investing that money. Assuming a 10% return, which would be great, I'd make an extra $1,200 a year by investing it instead. I'm pretty sure that over the course of a year of not having to spend time on low-value or repetitive work, I can increase my productivity enough to more than cover the ~$13k difference. Developer work scales really well, so removing a bunch of the low-end tasks and freeing up time for the more difficult problems returns a lot of value.

I would probably pay $2000 a month if I had to - it's a small fraction of my salary, and the productivity boost is worth it.

It's *worth it* when you're salaried? Compared to investing the money? Do you plan to land a very-high-paying executive role years down the line? Are you already extremely highly paid? Did Claude legitimately 10x your productivity?

edit: Fuck I'm getting trolled


I'm serious - the productivity boost I'm getting from using AI models is so significant that it's absolutely worth paying even $2k/month. It saves me a lot of time and enables me to deliver new features much faster (making me look better to my employer) - both of which would justify spending a small fraction of my own money. I don't have to, because my employer pays for it, but as I said, if I had to, I would.

I am not paying for this myself, but the place I work at is definitely paying around $2k a month for my Claude Code usage. I pay 2 x $200 for my personal projects.

I think personal subs are subsidized while corporate ones definitely are not. I have CC for my personal projects running 16h a day with multiple instances, but work CC still racks up way higher bills with less usage. If I had to guess, my work CC gets about a quarter of the usage for 5x the cost, so at least a 20x difference per unit of usage.

I am not going to say it has 10xed my productivity or whatever, but I would never have built all the things I have now in that timeframe.


I don't know why you keep insisting that no one is making any money off of this. Claude Code has made me outrageously more productive. Time = Money right?

That's true if you're working as a contractor, where being able to do more translates into higher income, but it doesn't work that way for employees.

I'm an employee, and my boss loves me because I deliver things he wants quickly and reliably - because I use AI tools. Guess who he will keep in the next round of layoffs?

My Windows machine with corporate crap is sometimes 2000x slower than without it. And it's consistently 10x slower than an M3.

Don’t worry, my new M4 doesn’t feel much faster either, due to all the corporate crapware. Since Microsoft Defender got ported to the Mac, it’s become terrible at I/O and overall responsiveness. Any file operation will consume an entire core or two in Defender processes.

My personal M1 feels just as fast as the work M4 due to this.


I was impressed with my M4 mini when I got it a year ago but sometime after the Liquid Glass update it is now: beachball… beachball… beachball… reboot… beachball… beachball… Reminds me of the bad old days of Win XP.

How much RAM do you have? That seems to be the main thing that slows down my MacBooks (original launch-day 16 GB M1 MBP and 32 GB M2 Pro). The M1 CPU is finally starting to show its age for some things, but the M2 Pro is really only RAM limited in perceived speed for me.

RAM. You must have 16 GB or more. And for serious work now, I’m looking at 32 GB or more.

I haven't had a laptop with less than 32GB of RAM in about 15 years. RAM is extremely useful for some workloads.

Mine has 48GB.

You can report a bug by typing applefeedback:// in Safari.

Those sound like very well tested numbers, founded in reality /s

I did. On equal hardware, in the same order, with the same Windows version, on clean installs. ‘npm install’ is very file-heavy, and Windows with corpware hates small files, especially ones with the .js extension.

Same here. ng install takes 2000x as long as on my similarly priced Mac. Installing a package for any language locks up the laptop for indexing.


Which might signal to the EU not to build local cloud infrastructure?


Why would it signal that? The loud and clear message would be "do not let American companies get involved in your infrastructure, government or any other system where government requirements would come into open conflict with their profits".


So you’d basically want what the iPod Touch was?


Lua is awful for anything large. It is untyped, refactoring does not exist, etc. C# is an amazing language with amazing tools and very good libraries.


Have you considered using tables?

It is funny how we keep asking for more and more and more even though we already have it so much better than before. Can we never be happy with what we have?


> It is funny how we keep asking for more and more and more even though we already have it so much better than before.

I've been developing web stuff for 15 years now and sometimes I can't believe comments like these. We didn't have it "so much better before". CSS sucked hard and getting things right for three devices was an incredible pain in the ass.

Tables have semantic meaning. They don't support fractional units. Reflowing for mobile is impossible without JS hacks like splitting tables. You can't reorder items natively.
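
For example, reordering with flexbox is a couple of lines (the selectors here are just illustrative):

    /* visual order can differ from source order with flexbox */
    .items { display: flex; }
    .items > .promoted { order: -1; } /* pulls this item to the front */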


I have been developing web stuff for 20 years now and I also can’t believe comments like these.

Flex and grid enable layouts that are far beyond anything we could do with table layouts. Anyone who claims otherwise has obviously not done any amount of serious, production FE UI design and development.

Are there bits of DX ergonomics I’d like in flex and grid? Of course. Does the syntax sometimes feel a bit arcane? Yeah. But the raw power is there, and anyone who claims the contrary is either a gormless backend developer, or some troll who is trying to design things in MS Word.


Tbf, it said "we have it so much better than before" - I think they agree with you.


I saw a similar comment on HN recently that CSS was "better" back in the day and what we have today is either unnecessary or too hard.

I reminded that person we had to use floats and positioning hacks and abuse HTML tables for page layout before flexbox and CSS Grid were created.

There was no simple method to center a div!
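
For contrast, a minimal sketch of the flexbox version today (the class name is just illustrative):

    .container {
      display: flex;            /* make it a flex container */
      justify-content: center;  /* center the child horizontally */
      align-items: center;      /* center the child vertically */
    }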


> we already have it so much better than before

They meant now. "we have it so much better than how it used to be."


How would tables solve the issue they're talking about?


Borders can be applied to table cells independent of the content inside cells.

Gap decorations allow you to add borders between flex/grid items, but without the woes of dealing with table quirks and behavior.

Common use cases would include mimicking design patterns found in print layouts, particularly newspapers and menus, to help divide groups of items or info.

Examples: https://developer.chrome.com/blog/gap-decorations
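
Roughly what that looks like, going by the linked article (the feature is still experimental, so treat the exact property names as provisional; the selector is just illustrative):

    .menu {
      display: grid;
      grid-template-columns: repeat(3, 1fr);
      gap: 1rem;
      /* rules are drawn in the gaps rather than as borders on the items */
      column-rule: 1px solid #ccc;
      row-rule: 1px dashed #ccc;
    }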


How are you currently gathering feedback on the language design? Have you worked with teenagers and parents on it?

I went through the demo on the first page and found it quite complex (but then I am stuck in existing patterns of course).


I’ve got a Discord from my previous game of about 2000 people, mostly teenagers, and my testers have mostly come from there. To name one example, just yesterday a teenager completed a chess game after 3-4 weeks on Easel. I’ve been incorporating tons of feedback from the testers over the past year and a half.

I think that it may look strange to a person who has coded before because the language is semi-declarative. Most teenagers come to Easel as players with no prior programming experience, and begin by remixing their favourite game, and that’s when the semi-declarative model really shines. Many interesting changes can be done in a single edit because the code is clumped together in a hierarchy. Whereas in another programming language there may be more indirection and you might need to edit 3 separate parts in different files to make 1 change, and people who haven’t coded before don’t know how to find all the parts. I think Easel works for players becoming makers but can feel strange for people who come from other languages.


It is very interesting though! I have been interested in this kind of language design for interactive UI for a while. If there were a quick article outlining how all the "with" and "on" and "own" constructs work, aimed at more experienced developers and using references to existing language features, I'd love to read it. Right now it reminds me of the declarative style of Qt UI and the online primitives introduced in Godot, but I haven't looked at it in more detail. Also love your take on async. Wishing you all the best of luck - this seems like a really well-thought-through language design!


This is a very kind comment, thank you! Yes, it has taken a LOT of iteration to make the language what it is. I think it would make sense to have a page for experienced developers to better understand what Easel is. Right now maybe the closest is this page: https://easel.games/docs/learn/key-concepts

Thanks again for your kind words!


This is really cool - these patterns (run once now, then again once triggered) surface all the time and usually turn into ugly code! How many iterations did it take?

So most lines like A { B{ on D{ print() } } C{} } equivalently desugar into something like a = A; b = B(); a.mount(b); d = D(); d.on(f); b.mount(d); .. ?

I got confused by a couple of things. One of them is whether object parameters act like context parameters and therefore depend on names in the caller's variable scope. I.e., if I define 'fn ship.Explode', must I have a variable ship at the call site? But I can still otherwise pass it explicitly as alien_ship.Explode(), right? How do I know if a particular call takes the current object into account? If I have two variables in my nested scope, ship and asteroid, and both have ship.Explode and asteroid.Explode, which one is picked if I just call `Explode`? The innermost? Or can't I have two functions like that, because the first thing is literally just a named variable and not a "method"?

Overall, if you could provide some examples of how things would desugar into a different language, that'd be very interesting! Maybe with some examples of why this or that pattern is useful? I think it does a good job for things like on / once, but I'm not grokking how one would structure an app using the variable-scoping use clause and object parameters.

Also, I'm not sure how to define functions that could be on'd or once'd. (Ah, I see - delve.)


Price. Data sovereignty. Legal. All are valid reasons to self-host.


I’ve been using AirDrop to send files between different models of iOS phone and tablet for ages.


The fact that you are impressed that different products from the same brand are interoperable with each other speaks volumes about Apple and Apple users.


Android didn't have a way to share files between devices for the longest time. Initially there was Android Beam, but it never worked. The first semi-reliable way to exchange files between two Android phones without a third-party utility was Nearby Share, which dates from 2020.

So yeah, it's a low bar, but apparently one that only Apple bothered to clear from the get-go.


Ah, uneducated smugness - another Apple trait. It's been possible to share files between any Android phones since 2009, via Bluetooth.

