Does anyone know if this has its perihelion in alignment with the other Sedna-type objects found?
I think there is a tendency for them to have their perihelia out to one side and their aphelia out to the other, giving a fairly obvious pattern that suggests another, larger object is shepherding them into their orbits.
Yeah I read up on this a bit as I was very impressed by the quality. It was shot on widescreen film, so they could just rescan (if that's the term?) the film at widescreen HD resolution. I think they had to retouch or adjust the crop in some scenes because there was stuff off to the side you weren't meant to see: outside the 4:3 area but inside the 16:9 frame.
Some of the special FX shots look bad because they were done digitally, so those just had to be upscaled, but overall I was really impressed too. It's nice because it lets you enjoy it all over again without being distracted by how "bad" it looks on modern TVs.
The article states that they got a demo where they were able to ask questions about the new Bing launch, so the knowledge delay is ~hours now, not a 2021 cutoff anymore (at least for this new Microsoft thing).
From memory I think this takes place in Schild's Ladder by Greg Egan, but in real time. People have personal AIs wired into their brains that can be asked to converse with others.
I'm always interested in the relationship between science fiction ideas, how an idea changes as it becomes near-term... and how it manifests in real life.
AIs wired into brains is a complex endpoint. It's a plausible nexus of technologies or culture, a fit premise for a plausible fiction. Fold space, spice monopoly and galactic aristocracy. As sf approaches near term, the nexus tends to simplicity. Once we get to imminent, the nexus can get downright subtle. At this point, the agents of technological change might be a catalyst or process... not necessarily a breakthrough. The breakthroughs may have already happened. It's just a little hard to know until after the effect.
The point we're at now, plausibly, has us on the precipice of "AI avatars having a discussion for us." The path from here to there is relatively banal. Autocomplete, autoreply. It could happen with or without moments of decision like "I hereby decide to devolve such-and-such powers to my AI avatar because Y." Consequences can be profound, but decision making rarely is. The banality of will.
FWIW, I can really see this happening quick and hard. In fact, this is the first time I've had concerns about AI that aren't abstract or distant. The proliferation of GPT-enhanced software keyboards has some unpredictable potentialities.
There's also Phoenix Exultant by John C. Wright, where there are AIs that are basically clones of you, empowered to make all sorts of decisions for you. IMO it's more approachable sci-fi, but that's not always what folks are looking for.
It's very clever in how you can tell it that it was wrong and it admits the mistake. Has it been written in such a way that the operator's instructions are given higher weight than what it already knows?
I've asked it to generate Python code and then format it with black. It reproduced the original code and then what was supposed to be the formatted code, but the two were identical. When told of this, it admitted it had made a mistake and this time correctly output the formatted code.
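For reference, the kind of change I was expecting looks something like this (a made-up snippet, not the one from my chat; by default black adds spaces after commas, normalizes to double quotes, and breaks the body onto its own line):

    # Before: a cramped one-liner
    def point(x,y):return {'x':x,'y':y}

    # After running it through black
    def point(x, y):
        return {"x": x, "y": y}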
The operator prompt becomes part of the model's input (it can only accommodate a certain context depth), whereas the data it was trained on is baked into the model's weights and biases. You can think of it as "evaluating this statement, in light of the prior context, against the training set".
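Roughly, the split looks like this; a minimal sketch with made-up names (OPERATOR_PROMPT, generate, reply are placeholders, not any real API):

    OPERATOR_PROMPT = "You are a helpful assistant; defer to corrections from the user."

    def generate(context: str) -> str:
        # Placeholder for the real model call: a forward pass through weights
        # that were fixed when training finished. Nothing at chat time edits them.
        return "<model output>"

    def reply(history: list, user_message: str, max_context_chars: int = 8000) -> str:
        # The operator prompt and prior turns are simply concatenated into one
        # input sequence, all competing for the same limited context depth.
        context = "\n".join([OPERATOR_PROMPT, *history, "User: " + user_message])
        context = context[-max_context_chars:]  # crude stand-in for the context-window limit
        return generate(context)

So when you tell it that it was wrong, that correction is just more text in the context for the next completion; it doesn't change anything the model "knows" in its weights.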