Nearly ten years after publishing Upload, and over twenty years after I started writing it, I find myself in the interesting position of being able to use DALL-E, an artificial-intelligence tool that generates images from text prompts, to create a portrait of Raymond.

While DALL-E is by no means an artificial general intelligence, it’s pretty exciting, as an author, to use an AI to generate a portrait of a near-future sci-fi character whose life is profoundly entwined with AI and A-Life. (The book is set far enough in the future that the idea of Raymond using a VR visor that looks familiar to us in 2022 is a stretch, but he is into retro hardware.)
There’s potential here for authors to flesh out their own mental images of characters, props, and settings, using tools like DALL-E to engage in a design dialogue — one that can then inspire richer, more detailed, and even more creative written descriptions.
How do you think DALL-E and I did — is it a reasonable portrait? Does it in any way change your image of Raymond, for better or worse?