I tried something related today with Claude, who'd messed up a certain visualization of entropies using JS: I snapped a phone photo and said 'behold'. The next try was a glitchy mess, and I said hey, could you get your JS to capture the canvas as an image and then just look at the image yourself? Claude could indeed, and successfully debugged zir own code that way with no more guidance.
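For anyone who wants to try the same trick, the canvas-capture step is roughly this (a minimal sketch; the element id is made up by me, not whatever Claude's code actually used):

```js
// Sketch of the "capture the canvas and look at it" step; "entropy-plot" is a made-up id.
const canvas = document.getElementById("entropy-plot");
// toDataURL gives a base64-encoded PNG as a data URL, i.e. a text encoding of the image
// that can end up in the transcript.
const snapshot = canvas.toDataURL("image/png");
console.log(snapshot.slice(0, 80) + "...");  // log only a prefix so the console isn't flooded
```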
I should've mentioned that I didn't do any control experiment. I looked more closely at what Claude did in another case, and that time it was console-logging some rough features extracted from the canvas data. If it actually was "looking at" the image the first time, it had to have been through a text encoding -- I think I remember a data URL briefly starting to appear in the transcript.
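The console-logging variant I saw in that other case would look something like this (again just a sketch with a made-up element id, not Claude's actual code):

```js
// Sketch of the "rough features" alternative: read the pixels back and log coarse stats
// instead of the whole image. The element id is hypothetical.
const canvas = document.getElementById("entropy-plot");
const ctx = canvas.getContext("2d");
const { data, width, height } = ctx.getImageData(0, 0, canvas.width, canvas.height);
let drawn = 0;
for (let i = 0; i < data.length; i += 4) {
  // count pixels that aren't near-white, as a crude "did anything render?" check
  if (data[i] < 250 || data[i + 1] < 250 || data[i + 2] < 250) drawn++;
}
console.log(`non-background pixels: ${drawn} of ${width * height}`);
```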
GAI (if we get it) will start creating its own tools and programming languages to become more efficient. Tools as such won’t be going away. GAI will use them for the same reasons we do.
Why not? It's still going to be quicker for the AI to use automated refactoring tooling than to manually make all the changes itself.