This isn't true. You can `ollama run {model}`, then `/set parameter num_ctx {ctx}`, and then `/save`. It's recommended to `/save {model}:{ctx}` so the setting persists across model updates.
As of two weeks ago, if I did this it would reset the moment Cline made an API call, but LM Studio worked correctly. I'll have to try again. I even confirmed Cline was not overriding the context setting.
Almost guaranteed this is user error. Ollama has a (tiny) default context of 2048 tokens, so that's probably the point at which you noticed the results sharply decline in quality. Try 16384.
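For a setting that survives the interactive session entirely, you can also bake the context length into a Modelfile and create a new model tag from it (a minimal sketch; the model name and tag here are just examples):

```
# Modelfile — base model is an example, use whichever model you run
FROM llama3.1
# raise the context window from Ollama's 2048 default
PARAMETER num_ctx 16384
```

Then `ollama create llama3.1-16k -f Modelfile` gives you a tag that any client (Cline included) picks up with the larger context, with no `/set`/`/save` step to get reset.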
Sure, but he explicitly stated 'GPU servers', making it likely he didn't use the CPU for inference, which validates the question about what GPU setup they used.
It's nice that you advertise your business here, but for a passionate child this is boring. Double-clicking a box and changing text teaches them nothing. Instead of paying $9/mo, I can pay $0/mo and have them use free courses and YouTube, and teach them how to read documentation like MDN, which will benefit them far more than simply teaching them how to use your website.
Not every child is the same - some enjoy our course and find it valuable. HTML Planet can spark kids' interest in going deeper into web creativity. The self-learning resources you mentioned require strong pre-existing motivation and/or hands-on parental guidance. Some kids have that naturally, while others need a more gentle, kid-friendly introduction.
The test bench is OPS-SAT; their wording makes it sound like their test bench is a satellite in the sky (probably air-gapped from the rest of the satellites).