What Are Negative Prompts?
If you've ever generated an AI image and thought, "This is great, except for the six fingers and the melting face" — negative prompts are your fix.
A negative prompt tells the AI what you don't want in your image. While your regular (positive) prompt describes the picture you're going for — "a portrait of a woman in a sunlit garden" — the negative prompt handles everything you want to keep out: blurriness, extra limbs, watermarks, weird anatomy, you name it.
Think of it like ordering food: "I'll have the pasta, but no olives and no anchovies." Your positive prompt is the pasta. Your negative prompt is the olives and anchovies.
Simple concept. But using negative prompts well is what separates decent AI art from genuinely impressive output. And the specifics change depending on which model you're using — which is what most guides skip over.
Let's fix that.
How Negative Prompts Actually Work (The Technical Bit)
You don't need a PhD to understand this, but knowing the basics helps you write better negative prompts. So here's the 60-second version.
Most diffusion models (Stable Diffusion 1.5, SDXL, etc.) use something called Classifier-Free Guidance (CFG) to generate images. Here's what happens behind the scenes:
- The model generates an image prediction based on your positive prompt.
- It generates a second prediction based on your negative prompt (or with no prompt at all, if you leave it empty).
- It compares the two and pushes the final image away from the negative prediction and toward the positive one.
The CFG scale controls how hard it pushes. A higher CFG value means stronger guidance — the model tries harder to match your positive prompt and avoid your negative prompt. Too high, and images get oversaturated and crispy. Too low, and your prompts barely matter.
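The push described above can be sketched in a few lines. This is a toy version of the standard CFG arithmetic with plain numbers standing in for the model's noise predictions; the function name and the example values are illustrative, not any library's actual API:

```python
def cfg_combine(pos_pred, neg_pred, cfg_scale):
    """Classifier-free guidance: start from the negative (or empty-prompt)
    prediction and push the result toward the positive one, scaled by CFG."""
    return neg_pred + cfg_scale * (pos_pred - neg_pred)

# Scalar stand-ins: the positive prompt pulls toward 1.0,
# the negative prompt sits at -1.0.
print(cfg_combine(1.0, -1.0, 1.0))  # cfg=1: result is just the positive prediction -> 1.0
print(cfg_combine(1.0, -1.0, 7.5))  # higher cfg overshoots away from the negative -> 14.0
```

Notice how a stronger CFG scale amplifies the gap between the two predictions, which is exactly why very high values produce those oversaturated, "crispy" images.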
This is why negative prompts and CFG scale go hand-in-hand. If you're curious about CFG scale settings for different models, we've written a complete guide to CFG scale in Stable Diffusion that covers the details.
The key takeaway: negative prompts aren't just a "nice to have" — they're a core part of how the model decides what to generate. Leaving them blank means you're only using half the steering wheel.
Negative Prompts Across Different Models
Here's what most guides get wrong: they treat negative prompts as universal. They're not. Each model handles them differently, and using the wrong approach can actually hurt your results.
Stable Diffusion 1.5
The OG. Negative prompts are essential here. SD 1.5 is famously prone to generating extra fingers, melting faces, and anatomical nightmares. Without a solid negative prompt, you're rolling the dice every time.
Recommended approach: Always include a baseline negative prompt (we'll give you one below). SD 1.5 responds well to long, detailed negative prompts — don't be shy about stacking terms.
CFG scale sweet spot: 7–12
SDXL (Stable Diffusion XL)
SDXL is a major step up in image quality, and it needs less hand-holding with negative prompts. The base model already handles anatomy and quality much better than SD 1.5.
Recommended approach: You can get away with shorter, more targeted negative prompts. The boilerplate "bad anatomy, extra fingers, mutated hands" stuff is less critical — SDXL handles anatomy reasonably well on its own. Focus on removing specific unwanted elements (styles, objects, lighting you don't want) rather than quality-fixing keywords.
CFG scale sweet spot: 5–9
Stable Diffusion 3 / 3.5
SD3 introduced a new architecture (MMDiT — Multimodal Diffusion Transformer) and a triple text encoder setup. The model was not originally trained with negative prompts, which means they work differently here.
Recommended approach: Negative prompts do work in SD 3.5, but they're more of a refinement tool than a necessity. Use them to fine-tune colors, remove specific elements, or steer artistic style — not to fix basic quality issues. Keep them short and targeted. Sometimes, removing negative prompts actually improves SD 3.5 output, particularly for complex scenes.
CFG scale sweet spot: 3.5–5
Flux
Here's the curveball: Flux was not designed to use negative prompts at all.
Flux uses flow matching training instead of the traditional diffusion approach. It's built to operate with a CFG value of 1, which means there's no mechanism for classifier-free guidance to push away from a negative prompt.
The workaround: The community has developed a technique called Dynamic Thresholding using a ComfyUI extension. It lets you increase CFG above 1 and use negative prompts, though results are hit-or-miss. For most Flux users, the better approach is to write extremely detailed positive prompts that describe exactly what you want — including specifying qualities you don't want by framing them positively (e.g., "perfect hands with five fingers" instead of using a negative prompt for "extra fingers").
Bottom line: If negative prompts are a big part of your workflow, Flux might not be your best choice. If you prioritize prompt adherence and image quality out of the box, Flux excels — it just steers differently.
The Universal Negative Prompt (Copy-Paste Starter)
Before we break things into categories, here's a solid all-purpose negative prompt that works well with SD 1.5 and SDXL. Start with this and customize from there:
worst quality, low quality, normal quality, lowres, blurry, jpeg artifacts, watermark, signature, text, logo, username, bad anatomy, bad hands, extra fingers, fewer fingers, missing fingers, extra limbs, fused fingers, too many fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, disfigured, mutation, ugly, duplicate, out of frame, cropped
This covers the big three problem areas: quality issues, anatomy problems, and unwanted overlays. It's a solid baseline — think of it as your safety net.
Important note: The terms "worst quality" and "low quality" are especially effective with models fine-tuned on datasets that used quality tags (like NovelAI-based models and many SDXL fine-tunes). For vanilla SD 1.5, they're less impactful, but they rarely hurt.
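If you build prompts in code, a small helper keeps the baseline clean when you stack category-specific terms on top. This is a hypothetical utility (the function name and the shortened baseline string are mine, not from any tool) that merges comma-separated term lists and drops duplicates:

```python
# Shortened version of the universal baseline for illustration.
BASELINE = ("worst quality, low quality, normal quality, lowres, blurry, "
            "jpeg artifacts, watermark, signature, text, logo, username, "
            "bad anatomy, bad hands, extra fingers, deformed")

def build_negative_prompt(*term_groups):
    """Merge comma-separated term groups, dropping duplicate terms
    while keeping first-seen order."""
    seen, merged = set(), []
    for group in term_groups:
        for term in (t.strip() for t in group.split(",")):
            if term and term.lower() not in seen:
                seen.add(term.lower())
                merged.append(term)
    return ", ".join(merged)

portrait_extras = "bad proportions, extra fingers, long neck, cloned face"
print(build_negative_prompt(BASELINE, portrait_extras))
# "extra fingers" appears only once, even though both groups list it
```

Deduplicating matters because repeated terms waste prompt length without adding avoidance strength.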
Negative Prompts by Category
Here's where it gets practical. Copy the prompts you need based on what you're generating.
Portrait and People Photography
Portraits are where negative prompts earn their keep. Hands, faces, and body proportions are the most common failure points in AI image generation.
bad anatomy, bad proportions, bad hands, extra fingers, fewer fingers, missing fingers, fused fingers, too many fingers, mutated hands, poorly drawn hands, poorly drawn face, extra arms, extra legs, missing arms, missing legs, fused limbs, long neck, cross-eyed, deformed iris, deformed pupils, cloned face, disfigured, gross proportions, malformed limbs, missing limbs
Pro tip: If you're generating realistic portraits with tools like AI Photo Generator, pair these negative prompts with a specific pose description in your positive prompt. Telling the model exactly what the hands should be doing (e.g., "hands clasped in lap," "hand resting on table") works better than just telling it to avoid bad hands.
Landscape and Nature
Landscapes are generally easier to nail, but negative prompts help with common issues like unwanted people, modern objects breaking the mood, or overly processed looks.
people, humans, figures, person, crowd, buildings, cars, roads, modern structures, power lines, signs, text, watermark, oversaturated, HDR, overprocessed, cartoon, illustration, painting, drawing, frame, border
Anime and Illustration
Anime-style art has its own set of common artifacts. The model can confuse styles or add unwanted realism if you're not careful.
realistic, photo, photograph, 3d render, bad anatomy, bad proportions, extra limbs, extra fingers, poorly drawn hands, poorly drawn face, mutation, deformed, ugly, blurry, low quality, worst quality, text, watermark, signature, username, jpeg artifacts, lowres
For anime specifically, you might also want to add style-specific negatives like western cartoon, Disney style, Pixar if the model keeps drifting toward the wrong art style.
Product and Commercial Photography
Clean, sharp product shots need their own approach. The goal is eliminating distracting elements and keeping the focus tight.
blurry, out of focus, low quality, grainy, noise, watermark, text, logo, busy background, cluttered, messy, distorted, warped, shadow artifacts, color fringing, chromatic aberration, overexposed, underexposed, washed out
Architecture and Interior Design
Architectural images need straight lines, proper perspective, and consistent geometry — things AI models often struggle with.
distorted perspective, warped lines, bent walls, crooked, uneven surfaces, floating objects, impossible geometry, bad perspective, people, humans, low quality, blurry, cluttered, messy, construction equipment, damage, cracks, stains
Text and Typography in Images
AI models are getting better at text, but it's still a weak spot. If your image includes text elements, these negative prompts help minimize garbled letters.
misspelled text, garbled text, extra letters, missing letters, blurry text, distorted text, overlapping text, unreadable text, wrong font, inconsistent text size
Reality check: Even with these negatives, AI-generated text is unreliable. For anything where text accuracy matters, plan to fix it in post-processing.
Advanced Negative Prompt Techniques
Once you've got the basics down, these techniques will help you squeeze more out of your negative prompts.
Prompt Weighting
Most interfaces (Automatic1111, ComfyUI, and others) support prompt weighting using parentheses. This lets you control how strongly the model avoids certain things:
(blurry:1.3), (extra fingers:1.5), (watermark:0.8)
Higher numbers = stronger avoidance. A value of 1.0 is default. Going above 1.5 can cause artifacts, so bump things up gradually.
When to use it: When a specific problem keeps showing up despite being in your negative prompt. Bumping its weight by 0.2-0.3 often fixes persistent issues.
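To make the weighting syntax concrete, here's a rough sketch of how a `(term:weight)` string might be parsed. Actual interfaces like Automatic1111 have more elaborate rules (nesting, brackets, escaping), so treat this as a simplified illustration, not their real parser:

```python
import re

# Matches "(term:1.3)" style spans: a term, a colon, a decimal weight.
WEIGHTED = re.compile(r"\(([^:()]+):([\d.]+)\)")

def parse_weighted_terms(prompt):
    """Extract (term:weight) pairs; bare terms default to weight 1.0."""
    weights = {}
    for match in WEIGHTED.finditer(prompt):
        weights[match.group(1).strip()] = float(match.group(2))
    # Whatever remains after removing weighted spans is a plain term at 1.0.
    rest = WEIGHTED.sub("", prompt)
    for term in (t.strip() for t in rest.split(",")):
        if term:
            weights[term] = 1.0
    return weights

print(parse_weighted_terms("(blurry:1.3), (extra fingers:1.5), watermark"))
# {'blurry': 1.3, 'extra fingers': 1.5, 'watermark': 1.0}
```

The key point the sketch captures: unweighted terms sit at the 1.0 default, and the parenthesis syntax only nudges individual terms up or down relative to that.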
Embedding-Based Negative Prompts
If you're running Stable Diffusion locally (through Automatic1111 or ComfyUI), you can use textual inversion embeddings as negative prompts. These are pre-trained embeddings that encode complex "bad quality" concepts more effectively than text alone.
Popular options include:
- EasyNegative — The most widely used negative embedding. Drop it in your embeddings folder and add EasyNegative to your negative prompt. Covers quality, anatomy, and common artifacts in one token.
- bad_prompt_version2 — Focuses specifically on deformed anatomy and faces.
- BadDream + UnrealisticDream — A pair designed for realistic image generation. Good at suppressing the "AI look."
Advantage: A single embedding token can encode far more information than a text prompt, so they're more efficient and often more effective.
Limitation: These are model-specific. An embedding trained for SD 1.5 won't work with SDXL, and vice versa. Check compatibility before downloading.
The "Less Is More" Principle
Here's something counterintuitive: longer negative prompts aren't always better.
Every term in your negative prompt competes for the model's attention. If you list 50 things to avoid, the model spreads its "avoidance energy" thin across all of them. Sometimes removing irrelevant negative terms actually improves the result because the model can focus harder on the ones that matter.
Rule of thumb: Start with a short negative prompt. Only add terms when you see a specific problem in your output. This is especially true for SDXL and SD 3.5, which need less aggressive negative prompting than SD 1.5.
Iterative Refinement
The best negative prompts are built, not written from scratch. Here's the workflow that consistently produces the best results:
- Generate with a minimal negative prompt (just quality terms like "worst quality, low quality, blurry").
- Look at what's wrong with the output. Extra fingers? Add anatomy terms. Unwanted objects? Add those specifically.
- Regenerate and compare. Did the negative prompt fix the issue without creating new ones?
- Adjust weights if a problem persists despite being listed.
- Remove terms that aren't doing anything. If your image doesn't have watermarks without mentioning "watermark," drop it.
This iterative approach beats copy-pasting a massive prompt list every time. It's more work upfront, but you'll learn what actually moves the needle for your specific use case.
Common Mistakes (and How to Avoid Them)
After reviewing thousands of AI-generated images and the prompts behind them, these are the mistakes that keep coming up:
1. Using Positive Descriptions as Negatives
Wrong: not a beautiful sunset
Right: sunset, orange sky, warm lighting
The model processes concepts, not natural language. Writing "not a beautiful sunset" can actually make the model more likely to generate a sunset, because the concept of "sunset" is now in the model's attention. Use bare keywords for what you want to avoid, not conversational sentences.
2. Contradicting Your Positive Prompt
If your positive prompt says "a woman in a red dress" and your negative prompt includes "red," you're fighting yourself. The model will try to generate red (positive prompt) and avoid red (negative prompt) simultaneously, leading to muddy, confused output.
Check for conflicts between your positive and negative prompts before generating. It sounds obvious, but it happens more often than you'd think.
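A quick automated sanity check can catch the most obvious conflicts. This is a deliberately crude sketch (exact word overlap only; real conflicts can also hide in synonyms or multi-word phrases), and the function name is mine:

```python
def find_conflicts(positive, negative):
    """Return words that appear in both the positive and negative prompt.
    Only catches exact word overlap, not synonyms or phrases."""
    stopwords = {"a", "an", "the", "in", "of", "with", "and", "on"}
    pos_words = {w.strip(".,").lower() for w in positive.split()} - stopwords
    neg_words = {w.strip(".,").lower() for w in negative.split()} - stopwords
    return sorted(pos_words & neg_words)

print(find_conflicts("a woman in a red dress", "red, blurry, watermark"))
# ['red'] -- the model is being told to both generate and avoid "red"
```

Even this blunt check would flag the red-dress example above before you waste a generation on muddy output.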
3. Copy-Pasting Without Understanding
There's nothing wrong with starting from a template. But blindly pasting a 200-term negative prompt into every generation is wasteful. Many of those terms are irrelevant to your specific image, and the bloat dilutes the terms that actually matter.
Better approach: Understand what each term does. Keep a core set of 10-15 terms and add situational ones as needed.
4. Ignoring Model-Specific Behavior
Using SD 1.5 negative prompts verbatim in SDXL, SD 3.5, or Flux is a common mistake. As we covered earlier, each model handles negative prompts differently. What's essential for SD 1.5 might be unnecessary or even counterproductive for newer models.
5. Never Updating Your Negative Prompt
Models evolve. Community knowledge evolves. The "perfect negative prompt" from 2023 isn't necessarily optimal for models released in 2025-2026. Stay current with what works — communities on Reddit's r/StableDiffusion, Civitai, and various Discord servers are great resources for keeping up.
Negative Prompt Quick Reference
Here's a decision tree to help you choose the right negative prompt approach:
Using SD 1.5?
- Use a comprehensive negative prompt (quality + anatomy + style)
- Consider negative embeddings (EasyNegative)
- CFG 7-12
Using SDXL?
- Use a targeted negative prompt (focus on specific issues)
- Quality terms are helpful but less critical
- CFG 5-9
Using SD 3.5?
- Keep negatives minimal and targeted
- Try generating without negatives first
- CFG 3.5-5
Using Flux?
- No native negative prompt support
- Write detailed positive prompts instead
- Dynamic Thresholding extension if you need negatives
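The decision tree above collapses naturally into a lookup table. The values come straight from this article's recommendations; the dictionary keys and field names are my own invention for the sketch:

```python
# Per-model guidance from the sections above.
MODEL_GUIDE = {
    "sd15": {"negative_style": "comprehensive (quality + anatomy + style)",
             "cfg_range": (7, 12), "supports_negatives": True},
    "sdxl": {"negative_style": "targeted (specific issues only)",
             "cfg_range": (5, 9), "supports_negatives": True},
    "sd35": {"negative_style": "minimal; try none first",
             "cfg_range": (3.5, 5), "supports_negatives": True},
    "flux": {"negative_style": "none (write detailed positive prompts)",
             "cfg_range": (1, 1), "supports_negatives": False},
}

def recommended_settings(model):
    """Look up the negative-prompt approach and CFG range for a model."""
    return MODEL_GUIDE[model]

lo, hi = recommended_settings("sdxl")["cfg_range"]
print(f"SDXL CFG sweet spot: {lo}-{hi}")  # SDXL CFG sweet spot: 5-9
```

If you script your generations, a table like this is an easy way to stop yourself from carrying SD 1.5 habits into a Flux workflow.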
Frequently Asked Questions
Do negative prompts slow down image generation?
Not significantly. The model already computes both conditional (with prompt) and unconditional (without prompt) predictions during CFG. Adding a negative prompt replaces the unconditional prediction with a conditional one based on your negative terms — the computational cost is nearly identical.
Can I use negative prompts with online AI image generators?
It depends on the platform. Some tools like AI Photo Generator let you fine-tune your prompts for better results, while simpler consumer tools may not expose negative prompt fields at all. Check your tool's settings — if there's an "advanced" or "negative prompt" field, use it.
What happens if I leave the negative prompt empty?
The model uses an "unconditional" prediction (essentially: "generate something generic") as the baseline for CFG guidance. This works fine for many images, but you lose the ability to specifically steer the model away from unwanted elements. For casual generation, an empty negative prompt is totally acceptable. For serious work, it's leaving quality on the table.
Do negative prompts work with img2img and inpainting?
Yes. Negative prompts work across all Stable Diffusion generation modes — text-to-image, image-to-image, and inpainting. They're particularly useful in inpainting, where you can prevent the model from introducing unwanted elements into the repainted region.
How many terms should my negative prompt have?
There's no magic number. For SD 1.5, 15-30 terms is common. For SDXL, 5-15 terms usually suffices. For SD 3.5, keep it under 10. Quality over quantity — every term should address a specific issue you've actually observed in your output.
Wrapping Up
Negative prompts aren't complicated. But they are model-specific, and the "one prompt fits all" approach that most guides push doesn't actually work very well in practice.
The short version: start minimal, add terms when you see problems, understand how your specific model handles negative conditioning, and resist the urge to paste 200 keywords into every generation.
Your images will thank you.