You can find a lot of discussion about what the minimum specs for Quake are. Famously, it needs a decent FPU, and the Pentium was a convenient early CPU whose built-in FPU was actually fast — significantly faster than a 486's.
…But people have managed to run Quake on the 486.
And the myth people tell about Quake is that it killed Cyrix, because Quake performance on Cyrix was subpar. But was that true? And if it was true, was that because the Cyrix was slower than a Pentium, or was it because the Quake code had assembly that was hand-optimized for the Pentium FPU pipeline?
Anyway. “Most simple computer that could run Quake” is probably going to include a decent FPU. If you are implementing something on an FPGA, you can probably get somewhere around 200 MHz clock anyway. At which point you can run Quake II.
The Cyrix story is actually well-documented. Quake's software renderer used hand-optimized x86 assembly with FPU instruction sequences specifically tuned for the Pentium's pipeline. Cyrix processors had a different FPU execution pipeline that stalled on those specific instruction orderings — the issue wasn't raw FPU performance, it was that the Pentium-optimized code ran slower on Cyrix than straightforward C code would have. It was hand-optimization that made things worse, not better, on a competitor's hardware.
The timing was brutal for Cyrix. This was right when "Intel Inside" was becoming a meaningful consumer brand signal, and game benchmarks were becoming the primary way consumers evaluated CPU purchases. Quake wasn't just a game, it was the benchmark everyone ran at CompUSA to compare machines. Being demonstrably worse at Quake, regardless of the cause, was a marketing catastrophe.
The real floor for running Quake is basically "does it have a hardware FPU." The 486 DX (with FPU) could do it at low resolution and low framerate. The 486 SX (no FPU, software float emulation) was genuinely painful. The Pentium was the first CPU where it actually felt good.
My perspective from being a teen doing lan party stuff at the time: Quake ran slow on them, but it was far from the only thing that ran slow. Cyrix was well understood to be the value brand for general office apps and such, but not up to it for more demanding computing, and for having random compatibility issues here and there.
Ultimately what killed Cyrix is they just couldn't offer enough of a discount vs intel to matter, especially with all the lock in stuff intel was doing with Dell, Gateway, etc.
Intel Inside was a successful marketing campaign as well. If you were around back then I bet you can imagine the jingle/chord immediately.
I had a Cyrix 6x86 when Quake first came out. My disappointment at how poorly Quake ran on it was significant, especially because pretty much every other game at the time ran well on the Cyrix. The FPU performance in Quake was doubly handicapped on the Cyrix: not only was its FPU slower than the Pentium's to begin with, Quake's code was indeed hand-optimized for the Pentium's FPU pipeline. Fabien Sanglard's writeup of Michael Abrash's optimizations for Quake goes into great detail: https://fabiensanglard.net/quake_asm_optimizations/
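If you want the flavor of what that hand-optimization looked like: Quake's span rasterizer did one true perspective divide every 16 pixels and interpolated linearly in between, and on the Pentium the divide for the next run was started early so it overlapped with texturing the current one. A rough C sketch of the structure (names and details are mine, not Quake's; see Sanglard's writeup for the real assembly):

```c
#include <stdint.h>

#define RUN 16  /* pixels between true perspective divides */

/* Sketch of Quake-style perspective-correct span drawing (names mine).
 * On the Pentium, the 1/z divide for the NEXT run was issued as an FDIV
 * that ground away in the background while the integer pipe textured
 * the CURRENT run.  C can't express that overlap, but the structure
 * shows why one divide per 16 pixels is all you pay for. */
void draw_span(uint8_t *dst, int count,
               const uint8_t *tex, int tex_pitch,
               float u_z, float v_z, float inv_z,     /* u/z, v/z, 1/z at start */
               float du_z, float dv_z, float dinv_z)  /* steps per RUN pixels   */
{
    float z  = 1.0f / inv_z;                /* the expensive divide      */
    float u0 = u_z * z, v0 = v_z * z;       /* perspective-correct start */

    while (count > 0) {
        int n = count < RUN ? count : RUN;
        float f = (float)n / RUN;           /* handle a short final run  */

        /* Start the divide for the next run "now".  In the Pentium
         * assembly this FDIV (tens of cycles) overlaps the integer
         * texturing below, making it effectively free. */
        float z1 = 1.0f / (inv_z + dinv_z * f);
        float u1 = (u_z + du_z * f) * z1, v1 = (v_z + dv_z * f) * z1;

        /* Texture the current run with cheap LINEAR interpolation
         * between the two perspective-correct endpoints. */
        float u = u0, v = v0;
        float su = (u1 - u0) / n, sv = (v1 - v0) / n;
        for (int i = 0; i < n; i++) {
            *dst++ = tex[(int)v * tex_pitch + (int)u];
            u += su; v += sv;
        }

        u0 = u1; v0 = v1;
        u_z += du_z * f; v_z += dv_z * f; inv_z += dinv_z * f;
        count -= n;
    }
}
```

As I understand it, that overlap is exactly what the Cyrix 6x86 couldn't do: its FPU blocked integer progress during the divide (and FXCH wasn't free the way it was on the Pentium), so the carefully interleaved code bought nothing and stalled instead.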
Yes but also no. The problem with fixed point arithmetic is a lack of dynamic range compared to floating point. Floats are great at representing both large numbers with limited precision and small numbers with high precision, but with fixed point you have to make a choice based on which kind of number you're trying to represent. Meaning you need to use a mixture of 8.24, 16.16 and 24.8 fixed point types (and appropriate conversions) depending on the context of the calculations that you're doing.
It's possible to write a game engine with that limitation, but there's no easy natural conversion from Quake's judicious use of floats to a fully fixed-point codebase. You'd have to redesign and rewrite the entire engine from scratch, basically.
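To make the tradeoff concrete, here's a minimal 16.16 sketch in C (illustrative, not from Quake). Note the 64-bit intermediate a multiply needs, and how the range/precision tradeoff is baked into the type:

```c
#include <stdint.h>

typedef int32_t fix16_16;          /* 16 integer bits, 16 fraction bits */
#define FIX_ONE (1 << 16)

static inline fix16_16 fix_from_int(int x) { return x * FIX_ONE; }

static inline fix16_16 fix_mul(fix16_16 a, fix16_16 b)
{
    /* 16.16 * 16.16 is a 32.32 result: you need a 64-bit intermediate,
     * then a shift back down -- and anything past +/-32768 has already
     * overflowed silently. */
    return (fix16_16)(((int64_t)a * b) >> 16);
}

static inline fix16_16 fix_div(fix16_16 a, fix16_16 b)
{
    return (fix16_16)(((int64_t)a << 16) / b);
}

/* Range vs precision, per format:
 *   8.24  -> max ~128,         resolution ~6e-8
 *   16.16 -> max ~32768,       resolution ~1.5e-5
 *   24.8  -> max ~8.4 million, resolution 1/256
 * A float gives ~7 significant digits at ANY magnitude, which is why
 * Quake could get away with one type almost everywhere. */
```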
The PS1 doesn't have an FPU but got a version of Quake II, so it's possible. That said, it was somewhat different from the PC version, so it could be argued that it's not the same game.
I can't speak on Quake, but I was a level designer on the failed effort to port Unreal to PSX.
My understanding from talking to the coders at the time was that Unreal's software renderer was a huge advantage as a starting point. They were able to reuse a lot of the portal rendering stuff as setup on the R3K CPU, but none of the rasterization. That had to go to the graphics core, which was a post-setup 2D engine that, in addition to the usual sprites, could do tris and quads.
We had a budget of about 3k polygons post-clipping, and having two enemies on screen would burn about half of that. The other huge limit was that the texture cache was tiny, so we couldn't do lightmaps. Our lighting was baked in at the vertex level and it just was what it was.
I imagine the situation with Quake was comparable. The BSP stuff would carry right over, but I can't imagine they got lightmapping proper working at the time. They'd also need some sort of solution for overdraw, as Quake's PVS was a lot more loose than Unreal's portal clipping.
The PS1 version uses a custom engine based on technology built for the game Shadow Master, the previous title by Hammerhead Studios. It was a technical tour de force for the original PlayStation.
I want to look at this from a different perspective… a single-precision floating-point multiply is pretty simple, no? 24x24 bit multiply, which is about half as many gates as a 32x32 bit multiply.
Maybe I would prefer to rip out the integer multiplication unit first, before ripping out the FPU.
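Pretty much. Ignoring rounding, denormals, NaNs and infinities, the core of a single-precision multiply is a 24x24 multiply plus exponent addition. A toy C model of just the happy path (hedged: real IEEE 754 hardware also does rounding and all the special cases):

```c
#include <stdint.h>

/* Toy single-precision multiply: truncating, no denormal/NaN/inf or
 * overflow handling.  Just the happy path, to show where the 24x24
 * multiplier array sits. */
static uint32_t fmul_bits(uint32_t a, uint32_t b)
{
    uint32_t sign = (a ^ b) & 0x80000000u;
    int32_t  exp  = (int32_t)((a >> 23) & 0xFF)
                  + (int32_t)((b >> 23) & 0xFF) - 127;

    /* 24-bit significands with the implicit leading 1 restored. */
    uint32_t ma = (a & 0x7FFFFFu) | 0x800000u;
    uint32_t mb = (b & 0x7FFFFFu) | 0x800000u;

    /* The expensive part: a 24x24 -> 48-bit multiply. */
    uint64_t prod = (uint64_t)ma * mb;       /* in [2^46, 2^48) */

    /* Normalize: the product of two values in [1,2) lies in [1,4). */
    if (prod & (1ull << 47)) { prod >>= 1; exp++; }

    uint32_t mant = (uint32_t)(prod >> 23) & 0x7FFFFFu; /* drop leading 1 */
    return sign | ((uint32_t)exp << 23) | mant;
}
```

And since a multiplier array grows roughly with the square of operand width, 24²/32² ≈ 0.56, which is where "about half as many gates" comes from.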
DDR3 traces need to be length matched, because at 800 MT/s (the slowest standard rate, though I think you can safely drop below spec to around 666 MT/s) the value on the pins changes every 1.25 ns, and having traces of different lengths means you probably won't see the right values on all the pins at the same moment. Length matching produces the squiggles.
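Back-of-the-envelope, with assumed numbers (typical FR-4 propagation of very roughly 150 ps/inch and an illustrative 10% skew budget, neither from the article):

```c
#include <stdio.h>

int main(void)
{
    /* Assumed numbers, not from the article. */
    double bit_time_ps    = 1250.0; /* 800 MT/s -> 1.25 ns per transfer  */
    double prop_ps_per_in = 150.0;  /* rough FR-4 microstrip propagation */
    double skew_budget    = 0.10;   /* say we allow 10% of the bit time  */

    double max_skew_ps     = bit_time_ps * skew_budget;
    double max_mismatch_in = max_skew_ps / prop_ps_per_in;

    printf("max skew: %.0f ps -> max length mismatch: ~%.2f in\n",
           max_skew_ps, max_mismatch_in);
    /* Prints roughly: max skew: 125 ps -> max length mismatch: ~0.83 in */
    return 0;
}
```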
The diagonal orientation of the DDR3 chip and corresponding diagonal traces is, I suspect, a choice the author made to ease the layout process: it's more likely that it was hand laid out to get traces of somewhat similar length with a minimum of fuss, followed by a length-matching tool. A non-standard orientation can cause issues with pick-and-place machinery, which usually will handle 90 degrees fine, and _often_ 45 degrees fine, but (AFAIK) _rarely_ anything else; that's not a problem for the author, though, because he's assembling it himself. A diagonal IC also usually results in wasted space, which you can see in the empty areas of the resulting board. A 90-degree orientation may have allowed for a few more decoupling capacitor banks, but since his board works, who am I to sit here and judge?
Yes, I had to place the DDR3 chip diagonally to simplify routing.
Otherwise the length difference on the address lines was so big that I couldn't compensate for it with serpentines.
I didn't use an autorouter: I haven't found any reasonably working KiCad plugin for it, and didn't want to buy any commercial software for a hobby project.
> A non-standard orientation can cause issues with pick-and-place machinery, which usually will handle 90 degrees fine, and _often_ 45 degrees fine
This sounds like nonsense. Pick-and-place machines don't pick up components perfectly deterministically. There is always a tilt and an offset when you pick the part up, which is why a computer vision system has to account for part orientation and the center of the part. The machine must compensate for the error by moving and rotating the part accordingly.
When you say “You don’t know that”, you expect the people reading your comment to interpret it generously. A good interpretation of your comment is something like, “You’ve provided no reasoning to back up that argument” or “I think it’s unlikely that you have evidence to support your claims”. A bad interpretation of your comment is, “I can answer with certainty whether you have this specific piece of knowledge, and the answer is no.”
I encourage you to apply the same generosity to comments you read.
pocksuppet’s advice is, I think, more of a reaction to a specific way that you could take a short position, and in 2026 I think you want to assume that people who know what “short” means also know what options are.
The advice is good in a kind of stopped clock sense.
I'm old, so I am a stopped clock. However, I have invested my whole life, through good times and bad. I believe that for a retail trader -- someone who doesn't get paid to trade other people's money -- options are bad. OK, yes, there are special cases, like when your job requires you to hold a lot of one stock, etc. I'm not going to make the case why here; I'm sure it has been argued to death.
I do remember smart friends getting interested in options at different times in the last thirty years because they make higher returns. Then they have a period where they make lower returns, or have a real problem. I don't think it's worth the attention and the trading cost for most people, even people who understand what a short is. You can't argue with a person who has been doing really well with them for five years, but it always seems like people stop.
My take on why options are bad: options are bad for most people because most people don't get any use from hedging, don't have enough information about the timeline of price movements, and all you're left with is a form of gambling. A form of gambling that's pervasive enough to worry me. You see people on Reddit trying to get rich with SPY options (how could you possibly know where SPY is moving?).
Short positions are also bad, because there’s an ongoing cost to carrying a short position, and that cost is likely to cannibalize your expected gains.
Lots of good reasons around to avoid short positions and options like they’re the plague. I don’t like the “unlimited downside” reason because it’s solvable.
To people who are making lots of money in stocks or options… my question is always, “do you have high returns, or do you just have high volatility?” Because it’s easy to look at high short-term returns and believe that you’ve somehow beaten the market, when you’re really just holding a high volatility position that got lucky.
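To put a number on the carry-cost point (purely illustrative figures, assuming a hard-to-borrow name):

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative numbers, not a real trade. */
    double position   = 10000.0; /* short $10k of stock              */
    double borrow_fee = 0.20;    /* 20%/yr, hard-to-borrow territory */
    double exp_drop   = 0.15;    /* your thesis: -15% over a year    */

    double gross = position * exp_drop;    /* $1500 if you're right */
    double carry = position * borrow_fee;  /* $2000 just to hold it */
    printf("expected gross: $%.0f, carry: $%.0f, net: $%.0f\n",
           gross, carry, gross - carry);   /* net: -$500            */
    return 0;
}
```

Even when the thesis is right, the fee can eat the whole gain.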
> I do remember smart friends getting interested in options at different times in the last thirty years because they make higher returns. Then they have a period where they make lower returns, or have a real problem.
Volatility. Never trade options if you don’t understand volatility.
“Shorting” a company does not just mean short selling stock. Instead, it means having a short position, which you can use without unlimited downside.
The easy way is to buy puts. Maybe your next question is, “who is selling puts?” And that’s a good question, but you don’t really care, because you can buy your puts on the open market and when you do that, you get protection from credit risk.
There are other reasons why this isn’t a good idea but “unlimited downside” is not one of them.
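The bounded downside is easy to see in the payoff: a long put can never lose more than the premium paid. Quick sketch, with made-up strike/premium/prices:

```c
#include <stdio.h>

/* P&L at expiry of one long put (per share), ignoring fees/assignment. */
static double long_put_pnl(double strike, double premium, double spot)
{
    double payoff = spot < strike ? strike - spot : 0.0;
    return payoff - premium;  /* worst case: -premium, no matter what */
}

int main(void)
{
    /* Made-up example: $100-strike put bought for $5. */
    double spots[] = { 0.0, 50.0, 95.0, 100.0, 150.0, 1000.0 };
    for (int i = 0; i < 6; i++)
        printf("spot %7.0f -> P&L %7.2f\n",
               spots[i], long_put_pnl(100.0, 5.0, spots[i]));
    /* spot 0 -> +95.00, spot 1000 -> -5.00: downside capped at premium. */
    return 0;
}
```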
> “Shorting” a company does not just mean short selling stock. Instead, it means having a short position, which you can use without unlimited downside.
If you are an equity index holder anyway, simply holding no exposure to one name in an otherwise "market" portfolio is a "short" relative to the benchmark.
i.e. if I "buy" the SP500 constituents according to weight but with TSLA zero'd out, my portfolio is essentially the same as long SP500 and short weight*TSLA.
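That's just linearity of portfolio returns. A toy check with made-up weights and returns (modulo renormalizing the freed-up weight):

```c
#include <stdio.h>

int main(void)
{
    /* Toy 3-stock "index"; made-up weights and one-period returns. */
    double w[] = { 0.50, 0.30, 0.20 };  /* w[2] plays the role of TSLA */
    double r[] = { 0.04, 0.02, -0.10 };

    double index = 0, ex_tsla = 0;
    for (int i = 0; i < 3; i++) {
        index += w[i] * r[i];
        if (i != 2) ex_tsla += w[i] * r[i];  /* zero out one name */
    }

    /* Excluding a name == holding the index and shorting weight*name
     * (ignoring what you do with the freed-up 20% of capital). */
    printf("index: %.4f  ex-TSLA: %.4f  index - w*r: %.4f\n",
           index, ex_tsla, index - w[2] * r[2]);
    /* Prints: index: 0.0060  ex-TSLA: 0.0260  index - w*r: 0.0260 */
    return 0;
}
```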
Normally you buy into something like SP500 via something like an ETF, something with a very low fee because it’s managed entirely automatically via simple algorithms.
How can you invest in SP500 minus TSLA without racking up exorbitant fees?
Unless such a fund already exists, you’d be managing it yourself and pretty much wiping out any gains any time you rebalanced.
> How can you invest in SP500 minus TSLA without racking up exorbitant fees?
Various options…
1. Direct indexing (requires minimum amount of assets),
2. Certain actively-managed ETFs like GGRW, which is not exactly SP500 minus TSLA but it’s not too far off
3. Buying passively-managed ETFs in sectors that don’t include TSLA,
4. TSLQ, maybe. You get fees and other problems. I wouldn’t.
Direct indexing costs more than ETFs in terms of fees, but there’s apparently some kind of tax loss harvesting that you can do with direct indexing to offset the fees, and some people say you can come out ahead. I don’t understand how tax loss harvesting works at a satisfactory level (I’ve read articles and watched videos, but I think I would need to take an accounting class and really sit down with a spreadsheet before I could say that I understand how direct indexing and tax loss harvesting work together.)
And puts are highly manipulated by MMs so you have to really study the chain and how it behaves before you have a chance to buy the contracts at a fair price. MMs will flood the market with contracts and devalue yours even when the price is moving in your direction. I highly suggest people think twice about trading options, they are best used as hedges for large positions during particularly vulnerable periods.
I’m not convinced. If you think you know what the fair price is for a put, then you can bid that price. If you don’t think you know what the fair price is, then you shouldn’t be trading options.
There are reasons for not trading options, but the main reason is “you know less about price movement than you think you do”.
Realistically, if you don't have the volume to be a market maker, there's no point bidding anything except the current market price. Either the price is higher than your bid, and your order won't fill (so why place it?) or the price is lower than your bid, and you should expect the market knows something you don't.
> Either the price is higher than your bid, and your order won't fill (so why place it?) or the price is lower than your bid, and you should expect the market knows something you don't.
There is no risk-free way to trade. You can place a market order and guarantee execution, bearing the risk that you get a bad price. You can place a limit order, and guarantee price, bearing the risk that your trade doesn’t execute.
It sounds like you’re starting with the assumption that you don’t know whether the options are undervalued or overvalued, and if you start with that assumption, yes, the correct answer is don’t buy or sell the option (barring some other reason to buy or sell). Duh. But the reason the market “knows something you don’t” is because it’s full of people doing research. Sometimes, the person doing the research is you, and you have an idea of where the price will go. That’s what an edge is. When you have an edge, you can make money, but maybe not very much and not very reliably.
Where it gets ridiculous is when people speculate with SPY options or dumb shit like that. The reason why speculating with SPY is so ridiculous is because it’s just so unlikely that you could get an edge with SPY. But in general? Yes, it’s possible to get an edge.
Trades always execute at exactly the market price. A limit order says that if the market price reaches your limit price, execute the trade. At that moment, your limit price will equal the market price.
That is technically correct but uninformative. If there’s a point you’re making, I can’t figure it out.
You earlier said that there’s no point in bidding anything but “current market price”, and that’s what I was responding to. Limit orders can execute at current market price but they can also execute at some future market price. It’s ok to place limit orders, they just have different risks from market orders.
It would be more helpful to say don't buy options once the trade is obvious. As soon as something has hit the FOMO phase and IV skyrockets and all strikes cost the same, that's a sign you're too late and might want to bet against your thesis or use a different instrument. The financial shoggoths do a reasonably good job ensuring there's no free money. However they're obligated to trade regardless of conditions and sometimes that's their weakness.
Imagine you got a loan to buy a bunch of laundry machines to run a laundromat. But your laundromat earns $8,000 a month, and the loan payment is $10,000.
You can decide to sink $2,000 of your personal money into the laundromat every month, or you can give up.
USB-C is the bane of my existence. Everything looks the same, but certain cables won't charge certain devices for seemingly no reason, and other cables won't transfer data, and there's no easy way (AFAIK) to tell the difference
Not sure how you can make a cable that doesn't connect power from end to end. I can see it not charging as fast as others if it doesn't have the bits required for higher-current support. And if a device requires >5V to charge, that's on the device, not the cable.
> other cables won't transfer data
Again, not sure you can make a cable that doesn't connect the USB 2 pair from end to end. But if the device doesn't use USB 2 and requires something else without mentioning it, then that again seems to be on the device, not the cable.
FWIW the PS5 controller is super particular about what charger you use due to Sony being dumb, but the deciding factor there is the charger, not the cable.
It's probably a problem with my devices. I've never seen these problems with more expensive devices, but my cheap bluetooth speakers will only charge with certain cables.
I also have cheap cables that don't seem able to do data transfer. Guessing they're not actually following the USB-C spec.
Are your bluetooth speakers connected over a C-to-C cable or is there any legacy USB in the mix (type-A and/or microusb)? The reason I ask is legacy USB expected 5 volts to be supplied by default, whereas in type-C you have to specifically request any current. So some C-to-A / A-to-C adapters/cables include the resistors to request the current whereas others do not, leading to legacy USB devices not getting power through some adapters/cables.
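For reference, these are the Rp pull-up values the Type-C spec defines for advertising current on CC; a legacy A-to-C cable is supposed to use the 56 kΩ one (my summary from memory of the spec, so double-check it):

```c
/* USB Type-C Rp pull-up on CC (to 5 V) and the current it advertises.
 * A legacy-A-to-C cable is required to use the 56 kΩ value; a cheap
 * cable that omits the resistor entirely leaves CC floating, and a
 * spec-compliant Type-C sink will then refuse to draw power at all. */
struct { unsigned rp_kohm; const char *advertises; } cc_rp[] = {
    { 56, "default USB power (500/900 mA)" },
    { 22, "1.5 A @ 5 V" },
    { 10, "3.0 A @ 5 V" },
};
```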