Full Size
UI: ComfyUI
Model: STOIQNewrealityFLUXSD_F1DAlpha
cute-cave-spider-workflow.json.xz.base64
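Going by the filename, the attached workflow is a ComfyUI graph that's been xz-compressed and then base64-encoded. A minimal sketch of turning it back into a JSON file you can load into ComfyUI, assuming the standard `base64` and `xz` command-line tools:

```shell
# Decode the attachment back into an importable ComfyUI workflow JSON.
# base64 -d reverses the text encoding; xz -d reverses the compression.
base64 -d cute-cave-spider-workflow.json.xz.base64 \
  | xz -d > cute-cave-spider-workflow.json
```

The resulting `cute-cave-spider-workflow.json` can then be dragged into the ComfyUI canvas or loaded via its Load button.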
hi Tal have you written a beautiful fantasy novel yet?
Not yet! One thing that AI-generated images aren't so good at right now is maintaining a consistent portrayal of a character from image to image, which is something you want when illustrating a story.
You might be able to do something like that with a 3D modeler to pose characters, generate a wireframe, and then feed that wireframe into ControlNet. Or if you have a huge corpus of existing images of a particular character portrayed in a particular way, you could maybe create new images of them in new situations. But without that, it's hard to go from a text description to many images portraying a character consistently. For one image, it works, and for some things that's fine, but you'd have a hard time doing, say, a graphic novel that way.
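The "pose in 3D, then flatten to a wireframe" step above can be sketched with plain perspective projection: place the character's joints in 3D, project each onto the image plane, and draw the connecting bones in 2D to get a conditioning image for ControlNet. The skeleton, joint positions, and focal length below are made-up illustrative values, not from any real rig or ControlNet preprocessor:

```python
# Sketch of projecting a posed 3D skeleton down to a 2D wireframe,
# the kind of image a pose-conditioned ControlNet takes as input.

def project(point3d, focal=1.0):
    """Perspective-project a 3D point (x, y, z) onto the plane z = focal."""
    x, y, z = point3d
    return (focal * x / z, focal * y / z)

# A toy "character" pose: joint name -> 3D position (camera looks down +z).
# These coordinates are invented for illustration.
skeleton = {
    "head":     (0.0, 1.6, 4.0),
    "shoulder": (0.0, 1.4, 4.0),
    "hip":      (0.0, 0.9, 4.0),
    "hand":     (0.5, 1.1, 3.5),
    "foot":     (0.2, 0.0, 4.2),
}

# Bones connect joints; drawing these segments in 2D is the wireframe.
bones = [("head", "shoulder"), ("shoulder", "hip"),
         ("shoulder", "hand"), ("hip", "foot")]

wireframe = {name: project(p) for name, p in skeleton.items()}
segments = [(wireframe[a], wireframe[b]) for a, b in bones]
```

Because the pose lives in 3D, you can re-pose the same skeleton for every panel and get a fresh conditioning wireframe each time, which is exactly the consistency lever that pure text-to-image lacks.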
I suspect that doing something like that well is going to require models that actually work with 3D internal representations of the world rather than 2D, at a bare minimum.