Hacker News | TGower's comments

FEX is not coming to Android: https://wiki.fex-emu.com/index.php/FAQ

> This page was last edited on 22 October 2023, at 09:05.

Since then:

> In Android 16, Google expanded the "Linux Terminal" feature, which was initially introduced in Android 15 QPR2 beta, allowing users to run Linux applications within a virtual machine on their devices. This feature utilizes the Android Virtualization Framework (AVF) to create a Debian-based environment where users can execute Linux commands and graphical applications. The guest operating system is fully isolated by the hypervisor (KVM or gunyah) and manages its own resources with its own Linux kernel. Notably, it supports running classic software such as Doom, demonstrating its ability to run full desktop applications.


FEX is already running on Android, within projects like https://github.com/utkarshdalal/GameNative

Only Antigravity and Gemini access was banned, not email or other Google account services.

It wouldn't make sense to have the LLM do the target recognition, trajectory planning, or motor control. It might make sense to have the LLM at a higher level, monitoring systems and coordinating with other instances, to provide more flexibility in reacting to novel situations than rule-based systems offer.

What's the plan for using an engraving laser in open air without blinding a neighbor? Does the bot fully roll over the target area before firing or something?

TBC, just trying to get the platform and mechanical weeding working for now

My understanding of intermittent fasting is that it can encourage "garbage collection" in the body, pruning dead or sickly cells. Weight loss/gain is still driven by calories in vs. calories out.


It's always calories in and calories out. The idea is that intermittent fasting makes you less hungry over time, and thus you take in fewer calories.

If they had their test subjects eat the same amount to see whether intermittent fasting metabolizes food differently, then it seems obvious there would be little to no difference.


My SO did IF and strict calorie counting for around two weeks to a month, and it drastically reduced their appetite to something closer to a normal level. Now they can barely finish a large McDonald's meal without leftovers.

They've cut quite a bit of weight since then and have mostly focused on keeping their appetite low and eating healthier, more fibrous meals in general.


In fact, IF is not marketed for weight loss, although it can be paired with a low-carb diet to achieve some.


With a chess engine, you could ask any practitioner in the 90's what it would take to achieve "Stage 4" and they could estimate it quite accurately as a function of FLOPs and memory bandwidth. It's worth keeping in mind just how little we understand about LLM capability scaling. Ask 10 different AI researchers when we will get to Stage 4 for something like programming and you'll get wild guesses or an honest "we don't know".


That is not what happened with chess engines. We didn’t just throw better hardware at it, we found new algorithms, improved the accuracy and performance of our position evaluation functions, discovered more efficient data structures, etc.

People have been downplaying LLMs since the first AI-generated buzzword garbage scientific paper made its way past peer review and into publication. And yet they keep getting better and better to the point where people are quite literally building projects with shockingly little human supervision.

By all means, keep betting against them.


Chess grandmasters are living proof that it's possible to reach grandmaster level in chess on 20 W of compute. We've got orders of magnitude of optimizations left to discover in LLMs and/or future architectures, both software and hardware. With the amount of progress we see basically every month, those ten people will answer "we don't know, but it won't be too long." Of course they may be wrong, but the trend line is clear; Moore's law faced similar obstacles, and they were successively overcome for half a century.

IOW respect the trend line.


And their predictions about Go were wrong, because they thought the algorithm would forever be α-β pruning with a weak value heuristic.


> With a chess engine, you could ask any practitioner in the 90's what it would take to achieve "Stage 4" and they could estimate it quite accurately as a function of FLOPs and memory bandwidth.

And the same practitioners said right after Deep Blue that Go was NEVER gonna happen. Too large. The search space is just not computable. We'll never do it. And yeeeet...


Only in the "cucumbers are fruit" sense: botanically, technically correct, but incorrect by common usage.


Sure, except this is the first time in my life I've seen the term "pulse" used for a vegetable. And, honestly, only in the last 10 years have I been hearing the term legume in common conversation. Grain is definitely the more common term.


"elected for no obvious reason" isn't quite right. As a test image for computer graphics, it has regions of very high frequency detail and regions of very low frequency detail, which make it easier to spot various compression artifacts. It also makes a good study for edge detection, with very clear edges along the outline but more subjective edges in the feathering.


It's reddish. OK, it has blur and detail in the foreground, but it could have been any image with a blurred background and a face.

"Very low frequency detail": we are talking about a 512x512 picture here; it has low and high frequency detail (FFT speaking) like most photos.
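The frequency-content claim is easy to check numerically. Here is a minimal sketch with a hypothetical stand-in image (the actual 512x512 photo isn't available here): a smooth gradient supplies low-frequency energy and fine noise supplies high-frequency energy, and a radial split of the 2D FFT spectrum shows both bands are populated.

```python
import numpy as np

# Hypothetical stand-in for a 512x512 photo: a smooth horizontal
# gradient (low frequency) plus fine noise texture (high frequency).
rng = np.random.default_rng(1)
n = 512
y, x = np.mgrid[0:n, 0:n]
img = x / n + 0.1 * rng.standard_normal((n, n))

# Power spectrum, centered so DC sits in the middle.
spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

# Split spectral energy radially at an arbitrary cutoff (n // 8).
r = np.hypot(x - n // 2, y - n // 2)
low = spec[r < n // 8].sum()
high = spec[r >= n // 8].sum()
print(f"low-frequency share of energy: {low / (low + high):.2f}")
```

Most photographs, like this stand-in, concentrate energy near DC but still carry measurable energy outside the low band, which is the sense in which "it has low and high frequency details" applies to nearly any photo.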

"Good for edge detection" doesn't mean anything. Is the image good for edge detection, or is the algorithm good at detecting edges? What does "subjective edges" even mean? Does it mean hard to spot?

Those look like technical reasons, but it's just noise. They literally grabbed a Playboy magazine and decided it was good enough (and indeed, it wasn't that bad). Still not professional. The message is: "We have Playboy magazines at work and we are proud of it."


Try running different edge detection algorithms on that image and you will see a lot of disagreement among them in the feathering region. Exploring what the differences are, and how the algorithms lead to them, helps build intuition about the range of things we might call an "edge," and which algorithm is appropriate for a particular task.
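The disagreement is easy to reproduce even without the original photo. This sketch builds a synthetic image with one clean outline and one textured, feather-like region (assumed data, not the real test image), runs two classic detectors (Sobel gradient magnitude vs. Laplacian of Gaussian), and measures how often their thresholded edge masks agree:

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-in: a sharp-edged disk (clear outline) with fine
# texture on the right half (stand-in for "feathering").
rng = np.random.default_rng(0)
y, x = np.mgrid[0:128, 0:128]
img = (np.hypot(x - 64, y - 64) < 40).astype(float)
img += 0.3 * rng.standard_normal(img.shape) * (x > 64)
img = ndimage.gaussian_filter(img, 1.0)

# Detector 1: Sobel gradient magnitude.
sobel_mag = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
# Detector 2: magnitude of the Laplacian of Gaussian response.
log_resp = np.abs(ndimage.gaussian_laplace(img, sigma=1.5))

# Threshold each at its own 90th percentile and compare the masks.
e1 = sobel_mag > np.percentile(sobel_mag, 90)
e2 = log_resp > np.percentile(log_resp, 90)
agreement = (e1 == e2).mean()
print(f"pixelwise agreement between detectors: {agreement:.2f}")
```

Both detectors fire reliably on the disk's outline; the disagreement concentrates in the textured half, where "edge" is a matter of definition rather than fact. That gap between detectors in ambiguous regions is what the comment above means by subjective edges.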


Just wanted to say thanks for sharing your work! It was a great reference while working on a Hall-effect DataHand-style project.


I wonder if the author was making similar arguments against solar power 20 years ago. The case for both isn't one of immediate economic advantage, though that may come with sufficient development, as it has for solar. If you take it as a given that compute demand keeps scaling, at some point we will need to shift power generation off Earth, and it's a lot easier to move computed data streams than terawatts of power.

