
I'm assuming this is for tool calls and orchestration. I didn't know we needed more exploitable parallelism from the hardware; I thought the bottlenecks were in software (you're not running 10,000 agents or downstream tool calls concurrently).

Can someone explain what the Vera CPU does that a traditional CPU doesn't?


> you're not running 10,000 agents or downstream tool calls concurrently

Cursor seems to be doing exactly that, though.


Lots and lots of CPUs pooled, plus faster, more power-efficient RAM accessible to both the GPU and the CPU. IIUC.

But at what stage are we asking for that RAM? If it's the inference stage, doesn't that belong to the GPU<->memory path, which has nothing to do with the CPU?

I did see they have unified CPU/GPU memory, which may reduce the cost of host<->device transfers, especially now that we're probably moving more and more memory around with longer-context tasks.
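
For what it's worth, here's a minimal C++ sketch of that idea, using CUDA-style managed memory as a stand-in for whatever coherent CPU/GPU memory the announced chip actually exposes (nothing below is taken from the announcement itself). With the explicit-copy model the host fills a buffer and then pays a host->device transfer; with a unified/managed allocation both sides touch the same pointer. The cuBLAS reduction is just a stand-in for "the GPU reads the data."

    // Hedged sketch: CUDA managed memory as an approximation of unified CPU/GPU memory.
    // Build against the CUDA runtime and cuBLAS (e.g. link with -lcudart -lcublas).
    #include <cuda_runtime.h>
    #include <cublas_v2.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1 << 20;
        cublasHandle_t handle;
        cublasCreate(&handle);
        float sum = 0.0f;

        // (a) Explicit-copy model: fill on the host, then pay a host->device transfer.
        std::vector<float> host(n, 1.0f);
        float* dev = nullptr;
        cudaMalloc(&dev, n * sizeof(float));
        cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cublasSasum(handle, n, dev, 1, &sum);           // GPU sums the copied buffer
        printf("explicit copy:  %.0f\n", sum);
        cudaFree(dev);

        // (b) Unified/managed model: one allocation both CPU and GPU can touch directly.
        float* shared = nullptr;
        cudaMallocManaged(&shared, n * sizeof(float));
        for (int i = 0; i < n; ++i) shared[i] = 1.0f;   // CPU writes in place, no staging copy
        cublasSasum(handle, n, shared, 1, &sum);        // GPU reads the same pointer
        cudaDeviceSynchronize();
        printf("managed memory: %.0f\n", sum);
        cudaFree(shared);

        cublasDestroy(handle);
    }

On hardware where the CPU/GPU memory is coherent in hardware, the second path is presumably the native one, which I take to be the pitch here.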


I'm in a CS program right now, and I've seen wild shifts from the GPT-3 era to the current models:

1) I've seen students scoring A grades in courses they've barely attended for the entire semester

2) Using generative AI to solve assignments and take-home exams felt "too easy", and at first I had ethical qualms about it

3) At this point, a lot of students have complex side projects, to the point where everyone's resume looks the same. It's harder to create a competitive edge.


> 3) At this point, a lot of students have complex side projects, to the point where everyone's resume looks the same. It's harder to create a competitive edge.

This is one of the things that breaks my heart, personally.

I have personal projects I am so proud of, ones that took me years to build or considerable effort reading through papers and implementing things by hand.

I used to show these in interviews with such pride, but now they're at best neutral to my application, and more likely a knock against me because they're so easy to vibe-code.

I guess it would be like if you had spent the last decade writing novels you were really proud of and felt were part of the small contribution you've made to humanity, and then overnight people decided they were actually awful and of zero value.

Everything I ever wrote – all the SWE blog posts, tutorials, books, github repos. It's all useless now.


I'm curious, what was the algorithm problem?

It’s a variant of a knapsack problem. But neither Claude nor I initially realized it was a knapsack problem: it became clear only after the solution was found and proved.
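
For readers who haven't met the problem family, here's a minimal C++ sketch of the textbook 0/1 knapsack DP. It is not the variant from this thread (which isn't spelled out here), just the standard formulation: pick a subset of (weight, value) items to maximize total value under a capacity bound.

    // Textbook 0/1 knapsack DP (illustrative only; not the variant discussed above).
    // dp[c] = best total value achievable with capacity c. Needs -std=c++17.
    #include <algorithm>
    #include <iostream>
    #include <utility>
    #include <vector>

    long long knapsack(const std::vector<std::pair<int, long long>>& items, int capacity) {
        std::vector<long long> dp(capacity + 1, 0);
        for (const auto& [w, v] : items) {
            for (int c = capacity; c >= w; --c)   // iterate downward so each item is used at most once
                dp[c] = std::max(dp[c], dp[c - w] + v);
        }
        return dp[capacity];
    }

    int main() {
        std::vector<std::pair<int, long long>> items = {{3, 4}, {4, 5}, {2, 3}};  // (weight, value)
        std::cout << knapsack(items, 6) << "\n";  // prints 8: take the weight-4 and weight-2 items
    }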

So many more products are competing for finite attention now. And the solution to that problem is not to productize your commodity, imo; art created for the sake of selling is not art.

If you don't productize something you won't make money and then you'll starve and die.

Then UBI. This is a failure of our economy, which creates perverse incentives. Clean air, clean water, good food, plentiful housing, and opportunities for sport, contemplation and art are the things we need, but our economy incentivizes people to pollute, sell slop, restrict housing, and exploit ourselves and others.

We’re not going to pay you to sit at home and wank while the rest of us keep the lights on. It’s not happening.

The day people like you can do that is the day society collapses.


What will happen instead? Massive population collapse until everyone has work?

Naive question but do people really value certifications like these?

As a consumer of them, I love them. A company with an influential, widely used technology or platform spends a ton of money signaling to the industry exactly what's important to know about it, creating a training curriculum for it, and building a whole infrastructure to verify when someone knows it; I'm going to take them up on all of that, especially when the investment is about $100, a little bit of studying (the kind I'd want to do anyway if I'm learning something new, and I'm happy to have their structured, prioritized list of topics and/or guided curriculum), and a couple of hours taking an online-proctored exam. From that perspective, I don't have a good reason not to have a certification in something that's super relevant to my role.

In interview/hiring situations where they're not expected or effectively required, they make for great chat fodder and a really good opportunity to exhibit awareness about yourself, the industry, and how the person on the other side of the table might perceive certifications given the context.


God, I hate/love this type of comment. You're totally right, but it's a complete repudiation of my initial reflex, which is to make a mockery of this.

Great perspective. I'm going to do this. Haha.


> spends a ton of money

Bruh lol these courses are marketing material designed by fresh grad communications majors. You're falling for exactly the scam they want you to fall for by giving so much benefit of the doubt to entities which deserve none.

Edit: no I don't do this kind of work but my mother does so I know exactly how the sausage is made.


Unfortunately some business leads value these types of certifications and partner programs. I imagine there’s a great deal of overlap with these folks and those who use Gartner’s Magic Quadrant for purchasing decisions.

Consultancies do. Deloitte are quoted on the page. Consultancy people at my place of work have all been "AI trained".

Doesn't stop them being useless though; it's like giving an electric drill to a chimp and telling it to build a house... lots of action, a lot of screeching, not much work.

One of the mistakes with AI is that people believe it will turn lead into gold: if you give AI bad prompts, AI will produce bad work.


Consultancies sell the resume and not the person. It's easier for them to quantify "We have 300 CCAs" than "We have this person, Kim, who is really good."

Yes, because if that was their sales pitch, they would need to pay Kim more, and they would have to account for the fact that she's already allocated elsewhere. It's better to pretend all those CCAs are interchangeable.

If you give bad prompts to humans, they produce bad work too.

They do. Certifications make technical expertise legible to non-technical decisionmakers, and I've encountered people on both sides of that dynamic who affirmatively like it when companies set up programs like this. Obviously you and I would rather have someone who understands Claude make decisions about whether and how to use it, but in a lot of industries that's not realistic.

Most employees at most businesses show up, do as they are trained, and then go home, because that is what is asked of them. Even those who might have the inclination to explore new technology often will not, for fear of doing something wrong. And that creates a big market for training: a company wants its employees to use Claude, so the employees must be trained.

Startups / technology companies that expect employees to be self-starters who can be set free to frolic amongst the problems are an aberration.


My naive guess is that businesses with no tech component hire consultants, and these are part of the sales pitch.

Or governments/large organizations performing box checking exercises


Consultants go anywhere management wants to offload responsibility for choices.

It depends entirely on the managers whether it's just a blamewashing affair or actual, beneficial responsibility.


Think of these like the Google Cloud or AWS certifications. A few companies that specialize in them will want you to have them. But for the rest of the industry, your ability to ace the technical interview will matter more.

I have a couple cloud certs because I was forced to get them by a previous employer. They are useless.

Non-technicals do.

People, no; legal persons, yes.

Google's projected AI capex is $170-180 billion for this year. It's unreasonable to think AI would not be a reason for companies to consider layoffs.

There are two ways to interpret your comment:

1. Google is getting so much productivity out of their AI that they need fewer people.

2. Google is spending so much on AI they can’t afford to keep the people they need.


Or

3. Google is spending so much on AI that they can't afford to keep paying people, but they are ok with this because they are convinced the AI investment will replace the people at an eventual cost savings.


That seems to have been Dorsey's approach. The business has been stagnant, so cut the roster and bet big on some future returns from AI.

Google (and almost all other Big Tech) is spending on scaling compute (data centers, securing power generation, chip contracts). My comment was not about AI productivity and its impact on workforce reduction. I believe a company spending nearly all its free cash flow on scaling compute (or borrowing money to do so) would have a different opinion on the economics of human capital.

I subscribe to the second point of view. Several companies fall in that bucket. Oracle comes to mind.

Does that include R&D? Google is an AI _provider_, which is a considerably different profile in terms of spend from companies who are consumers. I would expect Google to be investing considerable resources to keep up with Anthropic and OpenAI.

I don't think it includes all of the R&D. From what I've read that's the amount they will spend on infrastructure for AI.

I guess some of that infrastructure will get used for AI R&D, but there are other R&D costs such as salaries that wouldn't be included in the figure.


These kinds of HN submissions test how fair discussions can be here:

> Please don't use Hacker News for political or ideological battle. It tramples curiosity.

Reference: https://news.ycombinator.com/newsguidelines.html


Elon is literally a political figure. How is one supposed to discuss his actions without invoking his politics?

discuss != battle

In the context of what Elon has done, the only real discussion should be condemnation. If that leads to Elon fans feeling embattled, well, they should get better role models to look up to.

It's not what this site is for, and destroys what it is for. Preserving the community has to take precedence.

So, it utterly fails? A good part of the community still seems to be stuck in 2017 where Elon could do no wrong.

Turns out not just a lot of wrong, but a lot of malice, could be done in 9 years. And worse yet, incompetent malice. I don't know why that has to be a political statement these days, but them's the breaks here.


Please don't use Hacker News for political or ideological battle. It tramples curiosity.

And I repeat

> I don't know why [this person did bad things] has to be a political statement these days, but them's the breaks here.

Thanks for proving my point.


They’re a troll account, only a few days old.

You can always go back to Reddit

> When you burn enough bridges, the only way to move is forward

I don't "move back", only move forward to the next community. Until that is compromised. Then the cycle repeats anew.

I wonder which community, if any, will break that cycle.


Is it politics or ideology to recognize the flawed character of someone? How cultish his following is? His erratic behavior, the damage that he's doing?

Some people will cry "politics" just to take the voice away from those who dare to question their beloved celebrities.


Yeah and it’s not our fault every Elon discussion involves politics. It’s literally all he does all day, and all he seems interested in, anymore.

They trample science; the paradox of tolerance in action.

Those who fight can lose; those who don't fight have already lost.


> Please don't use Hacker News for political or ideological battle. It tramples curiosity.

That ship sailed a long time ago, with the approval of the moderation itself.


That's excellent modbait, but of course what you say is the opposite of what we approve.

It's a complex and hard question, but the principles we apply to it have been around for a long time and are consistent with the site guidelines. If they weren't, we'd change the latter.

I've explained all of this many times. If you, or anyone, would like to know how we approach the question, you could start here:

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...


I simply disagree. You know what topics I flagged; I'm not trying to bait you or any other moderator, and I will discuss the matter no further.

Yup, since around 2016, HN and other tech spaces have been infested with people who cannot separate their political ideology from technical discussions.

When it comes to FOSS, they claim that FOSS has always been political in order to justify the politicization of everything they touch.

Things used to be much better when the people adhered to the age-old wisdom "Keep politics and religion out of the office" and carried this attitude to neutral spaces online.

In part, some of us got into tech because it was one of the places where meritocracy ruled and you could get away from those who thrive by overwhelming others with BS.

I apologize for the rant.


Being “apolitical” is a luxury of the privileged, especially in turbulent times.

True tests of courage, morals, and ethics are occurring more and more every day now, especially in the tech industry, which is so closely intertwined with regimes across the world that seek to cause great harm to those who do not look like, speak like, or believe the same things as them.

"The only thing necessary for the triumph of evil is for good men to do nothing" - there’s your quote for political apathy.


How is this related to the current discussion at hand?

I can't tell if this is a bot or a human response.

It's definitely coming across as having been written by an LLM.

How do you get local sandboxing with a permission-based model? I thought wasmtime was the answer!
