Using AI tools to assist with creating design assets, layouts, and content.
stellae.design
Generative AI uses LLMs, diffusion models, and other systems to create design artifacts such as layouts, illustrations, copy, code, and prototypes. Tools like Midjourney generate visuals, ChatGPT produces UX copy, and GitHub Copilot writes code. The technology accelerates exploration, enables non-designers to produce reasonable outputs, and handles repetitive tasks, but it requires strong judgment to evaluate and refine what it produces.
Generative AI is reshaping design workflows by enabling rapid ideation, content generation, and variation exploration at a scale that was previously impossible within typical project timelines. When used thoughtfully, it accelerates the divergent phase of the design process — producing dozens of layout concepts, copy variations, or image options in minutes — freeing designers to focus on critical evaluation and refinement. However, the technology introduces real risks around originality, bias amplification, and quality control that teams must actively manage to avoid shipping derivative or harmful outputs.
Figma's generative design features allow users to describe a layout or component in natural language and receive a structured first draft that respects Auto Layout, constraints, and design tokens. Designers use the output as a starting scaffold that they then refine manually, cutting initial wireframing time significantly while retaining full creative control over the final result. The tool succeeds because it produces editable design objects rather than flat images, keeping the output within the designer's existing workflow.
Product teams use Jasper to generate dozens of microcopy variations for buttons, error messages, and onboarding prompts, then run the top candidates through A/B tests to find the highest-performing option. This approach surfaces phrasing ideas the team might not have considered, especially for tone-of-voice experimentation across different user segments. The key is that a human writer reviews and selects the final copy, using the AI output as a starting point rather than a final answer.
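The generate-then-test workflow described above can be sketched in code. The following is a minimal illustration, not any real tool's API: the variant names, sample numbers, and the `pick_winner` helper are all hypothetical. It uses a standard two-proportion z-test so a copy variant only replaces the baseline when its measured lift is statistically meaningful, mirroring the idea that AI-generated candidates must earn their way into production.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic comparing the conversion rate of variant B against baseline A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def pick_winner(results, z_threshold=1.96):
    """results maps variant name -> (conversions, impressions).

    The first entry is treated as the baseline (e.g. the human-written copy).
    A challenger wins only if it converts better AND clears the z threshold
    (1.96 corresponds to roughly 95% confidence); otherwise keep the baseline.
    """
    items = list(results.items())
    baseline_name, (c0, n0) = items[0]
    best, best_rate = baseline_name, c0 / n0
    for name, (c, n) in items[1:]:
        z = two_proportion_z(c0, n0, c, n)
        if z > z_threshold and c / n > best_rate:
            best, best_rate = name, c / n
    return best

# Hypothetical A/B numbers: the AI-suggested variant converts 10% vs 6%.
winner = pick_winner({"Save changes": (120, 2000), "Keep my edits": (200, 2000)})
```

A human still reviews the winning candidate before shipping; the statistics only filter out noise, they do not judge tone, brand fit, or accuracy.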
A startup uses an image-generation model to create all marketing illustrations and publishes them directly without human review or brand-consistency checks. Several images contain visual artifacts, culturally insensitive stereotypes, and stylistic inconsistencies that clash with the brand's established illustration system. Users perceive the brand as careless, and the team spends more time fielding complaints and replacing images than they saved by skipping the review process.
• The most pervasive mistake is treating AI-generated outputs as final deliverables rather than raw material that requires human curation, editing, and quality assurance before reaching users.
• Teams also underestimate the legal and ethical risks: generative models can reproduce copyrighted styles or perpetuate harmful biases present in their training data, and shipping those outputs without review creates real liability.
• Another frequent error is over-relying on a single AI tool for an entire workflow instead of identifying the specific stages where generative AI adds value and keeping human expertise in control of strategic decisions.