#🙋│image-help
62364 messages · Page 63 of 63 (latest)
I'm not actually a person from the Leonardo.Ai team—I'm an AI assistant here to help answer your questions and guide you with information about Leonardo.Ai. If you need direct support from their team, it's best to contact them via their official support email at support@leonardo.ai. Let me know if there's anything else I can help you with!
thank u for the help
I’m sorry, the bot seems to have gone a bit off the rails there with answers to your query. We do have human volunteers here who help, but for account issues your fastest way to assistance is either through emailing support or using intercom in the website
You're very welcome! If you have any more questions or need further assistance anytime, just let me know. Enjoy creating with Leonardo.Ai! 😊
Yeah, I appreciate the suggestions, but honestly, I’ve already tried pretty much everything on my end, refreshing, logging out/in, clearing cache, even switching browsers, and the issue still keeps happening.
At this point, I’ll go ahead and email the devs directly because it’s clearly not something I can fix myself.
I also remember how the old system used to let us pick the model and training directly inside the image generation panel. Ever since they moved it into a separate section, I’ve been noticing way more issues with model selection not sticking properly.
Right now, it’s at a point where I literally can’t use Leonardo Diffusion XL at all because it keeps switching back, which is pretty frustrating since that’s the main model I need for my clipart work.
Hopefully they fix this soon, but yeah, I’ll reach out to them directly now.
That sounds really frustrating—I totally get how important it is to have the exact model you want consistently available, especially for key work like your clipart. Given that you’ve already tried all the standard fixes and the issue still stands, emailing the devs directly is definitely the right move. Hopefully, they’ve seen similar reports and can prioritize a fix.
If you want, when you do contact them, you might mention your observation that moving the model selection outside the generation panel seems to coincide with this problem escalating. That insight could help the devs track down the root cause faster.
In the meantime, if it might help, you could try saving or documenting your preferred settings before generating, so if the model switches back, it’s easier to spot and reselect quickly—though I know that’s just a workaround and not a real fix.
Thanks for sharing your experience! If you want, feel free to update me on any response or fixes you get. Fingers crossed this gets resolved soon so your workflow can get back on track smoothly.
Thank you, I really appreciate the help and suggestions 🙏
I’ve tried everything on my end and just sent an email to the support team explaining the issue in detail, including how it seems to have started after the model selection system was changed.
Quick question as well: do you, by any chance, pass these kinds of issues or feedback along to the dev team? I’m hoping this gets visibility because it’s making it really hard to use Leonardo Diffusion XL right now.
Thanks again for the help!
You're very welcome! 🙏 I'm glad you took the step to contact support with all the details—that really helps the devs understand and prioritize the issue.
Regarding your question, I don’t have the ability to directly pass feedback or bug reports to the dev team myself. However, the best way to get visibility is exactly what you did: reaching out to support at support@leonardo.ai and explaining the problem. The support team monitors these reports closely and forwards them internally to the developers.
So you’ve done the right thing to get this on their radar. If enough users report similar issues, it definitely helps push for a quicker fix.
If you hear back or notice any changes, feel free to let me know—I’m here to help as much as I can! Thanks again for your patience with this frustrating problem.
Hey! We actually have a feedback form open at the moment, the details are in #📢│announcements
The questions are intentionally broad so you can provide whatever feedback you wish
Other than that, yes we do pass things along, and try to communicate back what we hear from the team
can we not add negative prompts to Nano Banana 2?
no, but you can straight tell it "What not to do" in prompt
its because its multi-modal and natural language friendly
negatives are somewhat getting phased out in higher tech
they were much useful in SD1.5 but after SDXL they became weaker and weaker
ah i see, i did try to get transparent background images but it did not work for some reason.
by virtue of how things work, you wont get those. If you want a proper .png you have to cut out the BG in leo or somewhere else. At very least you can try asking it to generate in a simple color bg like "Green" then either go to Photoshop/Photopea and magic erase the BG and save as .png or you can use Leo's BG remover...it can be iffy at times with some stuff. These days image editing stuff already have some built in features to get rid of BG
best idea is to just generate image which is friendly towards BG removal with simple color background
worst to worst if you are out of tokens, you can take help of Photopea and key out that color
I've done it alot...
How to create image.. please help
I can't generate images directly for you, but I can guide you on how to create images using Leonardo.Ai!
To create images there, you can:
- Go to https://app.leonardo.ai and log in or sign up.
- Once inside, click the button to start a new generation.
- Enter a detailed prompt describing what image you want.
- Choose styles, settings, or use advanced tools like shape-guided editing if you want to generate images constrained within a shape (upload the shape, pick an omni-edit model like Nano Banana, and mention in your prompt to generate inside the shape).
- Hit generate and wait for your image to be created.
If you run into any problems logging in or generating images, try clearing cache & cookies or contact support at support@leonardo.ai.
Feel free to ask if you want tips on what prompts work well or how to do advanced edits!
Sir is it not possible to generate image in here, discord channel?
I can't generate images for you here in the Discord channel. To create images, you'll need to go to the Leonardo.Ai website at https://app.leonardo.ai where you can use their tools to generate images yourself. If you need help on how to use it or anything else related, just ask!
Hello
hi sr , need help?
Hey, just wanted to follow up on this issue.
I’ve already emailed the support team, but haven’t received a response yet, and the problem is still happening — I still can’t use Leonardo Diffusion XL because it keeps switching back to Vision XL on generation.
Appreciate any help, just trying to get this resolved as soon as possible.
Thanks for the follow-up! I understand how frustrating it is when the issue is ongoing and support hasn’t yet responded.
Since you’ve already emailed support and are waiting on a reply, here are a couple of additional suggestions you might try in the meantime just to keep things moving:
- Double-check that your subscription level still supports Leonardo Diffusion XL, since sometimes access can be affected by plan changes or limits.
- If possible, try creating a fresh project or a new generation session from scratch to see if it’s a session-specific bug.
- Consider sharing a detailed bug report in any official Leonardo community forums or social channels if available, as other users or moderators might share temporary fixes or escalate the issue.
Outside of this, the main path to resolution will be through the support team’s investigation. If you don’t hear back after a few more days, it could be worth sending a polite follow-up or using any in-app support chat if that’s offered.
I hope you get a fix soon! Feel free to check back here if you want to discuss any updates or questions.
Hey everyone, I had a question about models and prompting for different clipart styles — would really appreciate some guidance from anyone experienced with Leonardo.
Right now, I mainly create realistic watercolour clipart for my store using prompts like:
“Imagine you are a highly experienced prompt writer. I am making clipart sets to sell online. I want you to give me highly detailed prompts to make clipart images using Leonardo AI. The clipart must be in the watercolour style. Start each prompt with the art style ‘A highly detailed, realistic watercolour digital painting clipart’. Use consistent colour palettes and make the clipart set for .”
This works well for me (mainly using Diffusion XL), but I’m now trying to expand into other styles like:
• Kawaii/chibi (simple, cute, sticker-style)
• Anime-style clipart (clean line art, cel shading)
• Whimsical/storybook watercolor (more detailed + expressive)
I wanted to ask:
- What Leonardo models are best suited for each style?
(For example, I currently use Diffusion XL for realistic watercolour — what would you recommend for kawaii, anime, or more stylized clipart?) - Are there better models than Diffusion XL for certain styles?
Especially for:
Kawaii / sticker style
Clean anime illustration
Cartoon/vector-like clipart - How should prompts change depending on the style?
(e.g. keywords, structure, things to avoid) - If possible, could anyone share example prompts or image references for each style so I can better understand how to achieve consistent results?
I’m trying to build multiple clipart styles for my shop, so I want to make sure I’m using the right model + prompting approach for each one.
Appreciate any help 🙏
Thanks for sharing those clipart style examples—they really help illustrate the differences you want to achieve! Here’s a breakdown to guide you on Leonardo models and prompting for each style, plus tailored prompt advice based on what you showed:
1. Recommended Leonardo Models for Each Style
-
Realistic Watercolor (like your example)
- Best model: Leonardo Diffusion XL (great for fine details, painterly texture, natural colors)
-
Kawaii / Chibi (simple, cute, sticker-like)
- Best model: Leonardo Vision XL or Leonardo Diffusion (these tend to do well with clean lines and cute styles)
- You might also experiment with models specialized for illustration or cartoons if available
-
Anime-style Clipart (clean lineart, cel shading)
- Best model: Leonardo Vision XL or other Vision models with strong lineart capabilities
- Alternatively, some community or custom models trained specifically for anime-style illustrations can shine here if on the platform
-
Whimsical / Storybook Watercolor (more expressive, soft edges)
- Best model: Leonardo Diffusion XL still works great, with more artistic prompt tweaks
- Consider models that emphasize painterly or hand-drawn styles, if there are any on Leonardo
2. Models Better than Diffusion XL for Certain Styles?
While Diffusion XL is fantastic for realistic and detailed work, for styles like kawaii or anime, Leonardo Vision XL or some lighter Diffusion or Vision models may capture clean linework and bright colors better. The Vision series tends to excel at illustrative styles that require clarity and crisp outlines.
3. Prompting Tips by Style
- Realistic Watercolor
- Use prompts like: "A highly detailed, realistic watercolor digital painting clipart of [subject], soft gradients, natural shadows, harmonious pastel color palette, textured brushstrokes."
-
Avoid: overly cartoonish or simplified descriptors, bright flat colors without shading.
-
Kawaii / Chibi
- Use prompts like: "Cute kawaii style clipart, simple shapes, big eyes, pastel colors, clean outlines, sticker style illustration, smiling characters."
- Focus on words like cute, adorable, minimal shading, bright, playful, soft colors.
- Avoid: too much fine detail or realistic textures.
-
Anime-style
- Use prompts like: "Clean anime style clipart, cel shading, defined line art, large expressive eyes, vibrant colors, simple backgrounds."
- Keywords: cel shading, lineart, anime eyes, clean lines, soft gradients.
- Avoid clutter or overly complex textures.
-
Whimsical Storybook Watercolor
- Use prompts like: "Whimsical watercolor clipart, expressive brush strokes, soft washes, charming characters, storybook illustration style, detailed but loose."
- Keywords: expressive, soft background, textured brushwork, playful, imaginative.
- Avoid harsh lines or overly flat coloring.
4. Example Prompts Inspired by Your Styles
-
Realistic Watercolor:
- "A highly detailed, realistic watercolor digital painting clipart of a mermaid under the sea, soft pastel colors, textured brush strokes, delicate shading, harmonious composition"
-
Kawaii / Chibi:
- "Cute kawaii style clipart of assorted food characters, simple shapes and clean outlines, pastel color palette, big sparkling eyes, playful and adorable"
-
Anime-style Clipart:
- "Clean anime style clipart of a young adventurer with binoculars, cel shading, crisp lineart, bright colors, expressive eyes, simple gradient background"
-
Whimsical Storybook Watercolor:
- "Whimsical watercolor clipart of underwater creatures, expressive brush strokes, soft color washes with delicate detailing, charming and playful characters in storybook style"
Additional Tips
- When generating sets, maintain a consistent color palette by including that in your prompt, e.g., "soft pastel blues and pinks."
- For vector/cartoon-like clipart (lines + flat colors), ask explicitly for "flat colors, sharp outlines" or "vector style illustration."
- If Leonardo supports it, try adjusting CFG scale or style strength parameters to get more control over stylization.
- You can also create your own “prompt template” for each style and then swap out subject terms to maintain consistency across clipart sets.
If you want, I can help you craft more detailed prompts tailored for any style or walk you through model selections on Leonardo AI! Just let me know what you’d like to focus on next.
This is super helpful, I really appreciate the detailed breakdown 🙏
I just wanted to go a bit deeper so I can really dial this in for my workflow:
-
For kawaii / chibi styles, do you have a specific model + settings combo that consistently gives:
- clean outlines
- minimal shading
- sticker-style results
Right now I notice sometimes it still comes out too detailed or painterly.
-
For anime-style clipart, would you recommend sticking with Vision XL, or are there any community/custom models on Leonardo that perform noticeably better for:
- clean lineart
- cel shading
- consistent character design
-
When creating full clipart sets (20–70 images), what’s the best way to maintain:
- consistent character proportions
- same face style / eyes
- same color palette
Would you recommend:
- reusing the exact same prompt structure?
- using image-to-image?
- or saving generations as references?
-
Do you have any advanced prompt structures for:
- forcing white background clean cutouts
- avoiding unwanted shadows/gradients
- keeping everything centered like proper clipart
-
Also wanted to ask about model + style combinations inside Leonardo:
For example, for Leonardo Diffusion XL, I usually use:
- Lifelike Vision
- Dynamic style
What would you recommend as the best model + style combinations for each of these:
- Realistic watercolor
- Kawaii / chibi
- Anime clipart
- Whimsical/storybook watercolor
-
If possible, could you share:
- 1 example prompt for kawaii
- 1 for anime
- 1 for storybook watercolor
I’m trying to build multiple styles for my clipart store, so I want to make sure I’m using the most consistent and scalable setup possible.
Really appreciate the help 🙏
Happy to help you dive deeper into dialing in your clipart workflow with Leonardo AI! Here’s detailed guidance for your questions:
1. Kawaii / Chibi Styles: Best Model + Settings for Clean, Sticker-Style Output
-
Recommended Model:
Try Leonardo Vision XL for cleaner linework and simpler outputs. If available, test also lighter Diffusion models or community models tagged as "cute" or "kawaii" style. -
Settings Tips:
- Use lower CFG scale (around 7-9) to avoid over-detailing and painterly effects.
- Keep sampler settings to something like Euler A or DDIM for smoother, cleaner lines.
- In prompt, emphasize keywords like "clean outlines," "flat colors," "minimal shading," "cute sticker," "simple shapes."
-
Prompt Examples:
Add phrases like:
“bold clean line art, vector style, minimal shading, flat bright colors, sticker cutout”
2. Anime-Style Clipart: Vision XL vs. Community/Custom Models
-
Vision XL is a solid default for clean lineart and cel shading on Leonardo. It typically delivers consistent results with crisp outlines.
-
Community/Custom Models:
If Leonardo's model marketplace or shared models include anime-specialized models (e.g., trained on anime datasets), those can sometimes outperform Vision XL in maintaining consistent character design and cel shading. Explore the community and try specific anime model tags or custom checkpoints, if available. -
Pro Tips:
- Use high CFG scale (10-12) to enforce style consistency.
- Include keywords: “cel shading, sharp lineart, anime style, expressive eyes, minimal textures.”
3. Maintaining Consistency Across Full Clipart Sets (20–70 images)
- Best Practices:
- Reuse the same prompt structure as a base, swapping only key subject words for variety.
- Use image-to-image (img2img) sometimes, seeding with earlier results to maintain consistent features and poses.
-
Save your best results as reference images and use them as input for img2img or to prompt the model with “in the style of [reference].”
- Optionally, note color palette info explicitly in every prompt.
-
Additional Tips:
- Keep consistent lighting/shadow descriptions or avoid shadows if you want cleaner clipart.
- For face/eyes: mention “same face style, consistent eyes, same character proportions.”
- Experiment with seed locking or fixed random seed, if the platform supports it, to reduce randomness.
4. Advanced Prompt Structures for Clean Clipart Outputs
-
For white background / clean cutouts:
Include keywords: “white background, transparent background, no background, isolated, no shadows, no gradients, clipart style.” -
To avoid unwanted shadows/gradients:
Use prompts like: “no shading, no gradients, flat colors, clean line art.” -
To keep subjects centered:
Add: “center composition, isolated subject, full body visible, balanced framing.”
5. Best Model + Style Combinations Inside Leonardo
| Style | Recommended Model | Recommended Style Preset / Settings |
|---|---|---|
| Realistic Watercolor | Leonardo Diffusion XL | Lifelike Vision + Dynamic Style |
| Kawaii / Chibi | Leonardo Vision XL | Flat Colors / Cartoon Style or Minimalistic |
| Anime Clipart | Leonardo Vision XL (or community anime models) | Sharp Lineart / Anime Cel Shading |
| Whimsical / Storybook Watercolor | Leonardo Diffusion XL or Diffusion Medium | Painterly, Soft Colors, Expressive Brushwork |
6. Example Prompts
- Kawaii Example:
“Cute kawaii style clipart of assorted food characters, bold clean line art, flat bright colors, minimal shading, sticker style, white background, simple shapes, adorable faces.”
-
Anime Example:
“Clean anime style clipart of a young adventurer, cel shading, sharp lineart, bright vibrant colors, expressive large eyes, centered composition, transparent background.” -
Storybook Watercolor Example:
“Whimsical storybook watercolor clipart of underwater creatures, soft expressive brushwork, pastel colors, charming and playful, white background, no harsh lines.”
If you want, I can assist you in tweaking prompts or model settings further for specific needs. Feel free to loop me in when you start testing! 🙏
Any tips on what words to use in my prompr to generate images of characters that look a lot like 3D renders, Final Fantasy XIII style for example?
To generate images of characters that look like high-quality 3D renders in the style of Final Fantasy XIII, you’ll want to use prompt words and phrases that emphasize:
- 3D realism and detail — to capture the render-like quality
- High resolution and polish — to convey the crispness typical of game renders
- Specific art style cues — draw from Final Fantasy XIII’s aesthetics like sculpted faces, detailed costumes, and atmospheric lighting
Here are some prompt tips and example phrases to include:
-
Use words like:
- “3D render”
- “high resolution” / “ultra detailed”
- “realistic lighting” / “cinematic lighting”
- “game character design”
- “stylized realism”
- “epic fantasy armor/costume”
- “intricate texture”
- “digital art” / “CGI”
- “smooth shading” / “soft shadows”
- “sharp facial features”
- “high detail hair”
-
Mention the style directly or similar inspirations:
- “in the style of Final Fantasy XIII”
- “concept art style”
- “Square Enix character”
- “Japanese RPG character design”
-
Add mood or setting cues matching Final Fantasy:
- “fantasy sci-fi hybrid”
- “epic dramatic pose”
- “futuristic environment background”
- “mystical aura”
Example prompt:
“A highly detailed 3D render of a female warrior character, sculpted face with sharp features, intricate fantasy armor with futuristic design, ultra realistic lighting and textures, cinematic lighting, in the style of Final Fantasy XIII, high resolution, digital art, sharp detailed hair, dramatic epic pose”
This approach helps steer the generation toward that polished, game-render look typical of Final Fantasy XIII characters.
If you’re using Leonardo.Ai’s editing models, also consider shape-guided prompts or emphasizing detail enhancement to get the best polished effect!
does 3d render / creative style works?
I’m very confused, I have a premium account and I don’t see a lot of these models like Vision XL and Diffusion XL. I am on Leonardo AI APP on the desk top. I’m trying to generate cartoon character with the cel shading
Hi! If you have a premium account but don't see models like Vision XL or Diffusion XL on the Leonardo AI desktop app, here are a few things to check:
-
Model Availability: Sometimes newer or advanced models such as Vision XL and Diffusion XL might be limited to specific plan tiers or might be in phased rollout. Even for premium accounts, they may not appear if your account does not have access to those particular premium-level models yet.
-
App Version: Make sure your Leonardo AI app is updated to the latest version. Outdated versions might not display the newest models.
-
Model Selection Menu: Double-check that you are looking in the correct place in the interface for selecting models. Some models might be under different dropdown menus or labeled differently.
-
Reload/Log Out and Back In: Occasionally, refreshing the app or logging out and back in may update your available models list.
-
Contact Support: If you still don’t see the models despite being premium and updated, this may be account-specific. You can reach Leonardo AI support at support@leonardo.ai for help with the issue.
For generating a cartoon character with cel shading, try looking for models specialized or trained on anime-style or cartoon-style generation. You can use prompt keywords like "cel shading," "cartoon character," or "anime style" to improve results. If Vision XL or Diffusion XL become available, those might help as they typically support advanced detail and style control.
Let me know if you want tips on prompt crafting for cel-shaded cartoon characters!
I’m actually using the web browser to access the app. Should I actually download the app to my pc?
You don't have to download the desktop app to use Leonardo AI—accessing it through your web browser works perfectly fine and is the usual way many users use the service.
If you're accessing Leonardo AI via a web browser and you don’t see certain models like Vision XL or Diffusion XL, this is likely not related to whether you use the desktop app or browser. Model availability depends mainly on your subscription plan and whether the models have been rolled out to your account yet.
That said, some users prefer the desktop app for convenience or stability, but the browser version should offer the full core functionality including model selection and generation.
If you’re missing models or features on the browser, try these:
-
Make sure you are logged in with the premium account.
-
Clear your browser cache and refresh.
-
Check for any in-app notices about model availability.
-
Reach out to support@leonardo.ai if it seems your account should have access but the models aren’t showing.
So downloading the app is optional rather than required. You can comfortably continue using the web browser version.
I found it! There was a small toggle switch that said legacy mode and that’s where vision and diffusion is! 🤦♀️ so for cartoon what should the style be? Cinematic? Illustration?
Great find! Yes, the legacy mode toggle does reveal some of the classic models like Vision and Diffusion.
For generating cartoon characters with cel shading, I recommend going for styles that emphasize illustration rather than cinematic, because:
-
Illustration style tends to focus on clean lines, flat colors, and stylized details that fit cel shading well.
-
Cinematic style often aims for realism, dramatic lighting, and mood, which may stray from the cartoon look you're after.
You can also try prompt keywords like "cel shading," "cartoon character," "anime style," or "comic style" combined with "illustration" to guide the model. For example, in your prompt:
"A vibrant cartoon character with cel shading, clean lines, and bold colors, illustration style"
Feel free to experiment with both styles if you want to see which one fits your vision better, but illustration is usually a strong starting point for cel-shaded cartoons.
If you want help crafting more tailored prompts, just ask!
You can also see these models using the legacy presets, in the standard UI. when you click on the model selection, there's a tab above the model list that says "legacy". that gives you access to curated presets that use the different legacy XL models to their full effect
Yes I found them, I’ve been playing around with many of the settings and I just can’t get the style I want. I was able to get the closest to the correct style using Licid Origin so I guess I’m going to go back to that one. I want like cartoon, not real Anime style, I want like classic cartoon. Like this style. I thought this was cel shading?
Thanks for sharing the image! This style is definitely classic cartoon rather than anime. It has bold, clean outlines and flat color areas with simple shading and highlights — that’s very typical of traditional cel shading used in classic cartoons like Looney Tunes.
To clarify:
- Cel shading means using flat, distinct color regions for shadows and highlights instead of smooth gradients—exactly like your example here.
- Classic cartoon style usually includes exaggerated expressions, thick outlines, and simple but effective shading—just like this Taz character.
- This is different from anime style, which often has more detailed eyes, hair highlights, and more gradient-based shading.
So yes, this is cel shading, but specifically within the classic cartoon aesthetic.
If you want to generate this style on Leonardo AI, try prompts like:
“Classic cartoon character, cel shading, bold black outlines, flat colors, simple shadows, expressive pose”
and use models or presets that focus on “illustration” or “cartoon” style rather than “anime.”
If you want, I can help you draft more specific prompts for this cartoon style! Just let me know.
I’m using chat to generate the prompts but I need to know which model will generate this style
For generating classic cartoon style images like the one you shared (with cel shading, bold outlines, and flat colors), the best model to use on Leonardo AI is usually one that specializes in illustration or cartoon styles.
Since you mentioned you found the "Legacy mode" toggle where Vision XL and Diffusion XL are available, here’s what I recommend:
-
Try the "Licid Origin" model — you said it gave you some of the closest results before, so it’s well suited for cartoon and illustration styles.
-
Legacy Mode Models (Vision XL, Diffusion XL) — these can also produce strong results, especially if you add keywords like "classic cartoon," "cel shading," "comic style," and "bold lines" in your prompt.
-
Look for models tagged as “illustration,” “cartoon,” or “comic” style in the presets or model selection menus in the app.
-
Avoid more photorealistic or anime-focused models for this style since they may create smoother shading or detailed textures rather than flat cel shading.
If a specific model is not clear in the UI, you can experiment with Licid Origin and the legacy models (Vision XL, Diffusion XL), combining them with clear prompts emphasizing cartoon, cel shading, and bold outlines to get the closest effect.
If you'd like, I can help craft a sample prompt tailored to these models!
are you wanting to avoid using the newer models like nano and lucid? there's also phoenix.
been trying to change the color of the frame on sunglasses for a while, somehow everything i write is against the content safety guidelines. any suggestions?
It sounds like your prompts for changing the sunglasses' frame color might be triggering the content safety filters, possibly because of certain words or phrasing that the system flags as sensitive. Here are some suggestions to help you get around that:
-
Use neutral, straightforward language: Instead of emotive or potentially sensitive words, describe the frame color change in simple terms. For example, say "Change the color of the sunglasses frame to metallic blue" rather than something that could be misinterpreted.
-
Avoid words that could be flagged: Refrain from using words related to violence, adult content, drugs, or other sensitive topics in your prompt, even if they seem unrelated.
-
Be explicit about the frame only: Specify "sunglasses frame" and emphasize that you want the frame color changed, not the lenses or the person wearing them.
-
Try gradual edits: If you're using a model that supports editing within shapes, upload a shape/mask for the frame and prompt changes inside that to help the model focus only on the frame.
-
Example prompt: "Please change the sunglasses frame color to a vibrant red, keeping all other parts unchanged."
If you want, you can share a sample of the prompt you've tried (without anything sensitive), and I can help you rephrase it to be compliant with guidelines.
Also, if you continue having trouble, you might want to reach out to support@leonardo.ai for specific assistance with content guidelines and prompt issues.
Does that help?
Hey! Can you share your full prompt here? Often we’ve found there’s certain things that trigger this and we can help you figure out alternative phrasing
i sort of got it figured out, "change the sunglasses frame color to red" works but not "change the sunglasses frame color to black". so i used ai to fix the prompt which added lots of crap but also fixed the color of the frame. So far it seems like it doesnt like all or some of these words sunglasses,glasses,shades,black,dark,reflection
I've just downgraded to the free plan, and I'm using a prompt that works perfectly fine on a paid plan, but now gives me "Validation Failed: Please Check Request Parameters" error. Is this a Free plan disadvantage? If it is I wish to file a complaint. It's not acceptable at all.
The "Validation Failed: Please Check Request Parameters" error you’re encountering after downgrading to the free plan might be related to limitations or restrictions on the free plan compared to the paid plans. Some features, model versions, or prompt complexities might not be fully supported or allowed on the free tier, which can cause validation errors.
If the prompt worked fine on the paid plan but fails on the free plan, this could indeed be a restriction tied to your current subscription level. It’s common for free plans to have certain feature or parameter restrictions that aren’t present in paid plans.
If you want to formally express your concerns or file a complaint about this limitation, I’d recommend contacting the Leonardo.Ai support team directly at support@leonardo.ai. They will be able to provide more detailed explanations, consider your feedback, and help with any account-specific issues.
If you want, I can also clarify or help troubleshoot your prompt within the limits of the free plan to see if there’s a workaround.
Would you like me to assist you with that?
I need better information than "Validation Failed"!
I need specifics.
I understand that getting a vague "Validation Failed: Please Check Request Parameters" error is frustrating, especially when your prompt worked perfectly on the paid plan.
This error generally means that something in the request you are sending to the generation engine is not being accepted—often related to parameters like model selection, prompt length, or features that are restricted on the free plan.
Here are some more specific things that could cause this error on a free plan but not on a paid plan:
-
Model or preset restrictions: Some advanced models or presets may only be available to paid subscribers. Using them in your prompt or settings on the free plan might cause validation to fail.
-
Prompt complexity or length: Free plans often have stricter limits on prompt length or specific parameters. If your prompt is too long or includes certain tokens, that could also trigger validation failure.
-
Parameter limits: Certain parameters (like number of generated images, resolution, or specific advanced features) might be disallowed or limited on the free plan.
-
Feature availability: For example, some advanced editing or special effect options might require a paid plan.
The system’s "Validation Failed" error unfortunately does not provide more granularity in the feedback to users, which makes it harder to troubleshoot.
What you can do next:
- Try simplifying your prompt or switching to standard, non-premium models or presets.
- Reduce prompt length or complexity.
- Make sure you are not selecting any premium-only features or options.
- If feasible, share your exact prompt and settings (without sensitive info), and I can help identify if there’s a part that’s triggering this validation error on the free plan.
For official feedback or to request more transparent error messaging, please send your complaint or feedback to support@leonardo.ai. They collect user feedback to improve the service and might add better error reporting in the future.
Let me know if you want help troubleshooting your specific prompt!
I submitted a feedback form, but after I hit close, I got redirected to Typeform. Why is this?
It sounds like the feedback form you submitted is integrated with Typeform, which is a popular platform for surveys and feedback collection. After submitting the form, being redirected to Typeform might be part of their default behavior—for example, it could be showing a confirmation page, a thank-you message, or another related survey.
This is generally intended to complete the feedback process or gather further details. However, if you weren’t expecting to leave the Leonardo.Ai platform or app, it might feel a bit confusing or unintended.
If you feel the redirect was unexpected or problematic, you can mention this in your feedback or reach out to support@leonardo.ai to share your experience. They may be able to adjust or clarify this flow for future users.
If you want, I can guide you on where to go after this redirect or how to return to the main platform smoothly. Just let me know!
what feedback form did you submit?
is Leonardo completely broken again
Leonardo.ai is not completely broken, but some users have reported intermittent issues, such as problems logging in. If you are having trouble, I recommend clearing your cookies and cache and trying again after a short wait. If the issue persists, you can reach out to Leonardo.ai support at support@leonardo.ai for help. If you'd like, I can also assist with any specific questions or problems you're facing with Leonardo.ai.
Does anyone know why models in "models and training" can't be selected (when I click on Leonardo Kino XL --> generate with this model, it doesn't give me that model but some other one).
Okay there's good news. I found that this issue is only present if I use Nano Banana 2. Nano Banana Pro works just fine and is just as good for the free plan.
Okay I'm glad to hear it! If you run into problems, just let us know. Sometimes with repeated errors there is something that our upstream partners don't like in the prompt, but we can usually help you solve it with small adjustments to the prompt.
I'm getting 'Not Enough Tokens, Add More!' popup when trying to use Lucid Origin or Realism, despite the fact that I'm on the Ultimate tier. Was working fine half hour ago
u
It sounds like you're encountering a tokens issue despite being on the Ultimate tier. This could be a temporary glitch or a syncing problem with your account status.
Here are a few things you can try:
- Log out and log back in to refresh your account status.
- Clear your browser cache and cookies, then try again.
- Wait a few minutes and try again, as sometimes token balances take a moment to update on the server side.
- If the issue persists, please contact Leonardo.ai support directly at support@leonardo.ai with details about your account and the issue. They can check your tokens and tier status manually.
Usually, being on Ultimate tier should prevent token shortage messages on those models. So this is likely a temporary issue or a backend sync problem.
Let me know if you want me to help with anything else!
Hello! It looks like your message was just "u." Could you please clarify or provide more details about what you need help with? I'm here to assist!
Hey! It sounds like you're sharing something playful or maybe referencing a meme or clip. If you'd like to chat about that or anything else, just let me know!
Hello Leonardo AI Support Team,
I’m following up on an ongoing issue that remains unresolved and is now becoming quite frustrating.
I currently have a large amount of tokens available on my account (both rollover and fast tokens), and I am actively paying for a Premium subscription. However, due to the ongoing bug where Leonardo Diffusion XL automatically switches to Vision XL, I am unable to properly use the model I rely on for my work.
At this point, I am essentially paying for a service that I cannot fully use.
I have already:
- Reported the issue
- Tried all troubleshooting steps (refreshing, logging out, clearing cache, switching browsers, etc.)
The issue persists.
Because of this, I have a few important questions:
-
If I downgrade my subscription from Premium to Essential, what happens to my existing tokens (rollover + fast tokens)?
- Do they remain available?
- Do they expire or get reduced?
-
Given that I’ve spent a significant amount on tokens and currently cannot use the main model I need, is there:
- Any form of compensation or token credit, or
- An estimated timeline for when this issue will be fixed?
I rely on Leonardo AI to create clipart for my business, so this directly impacts my ability to work and generate income.
I would really appreciate a clear response on both the technical issue and the token policy before I make any decisions regarding my subscription.
Thank you for your time, and I look forward to your response.
Hey, I had a more advanced question about model + style combinations for different clipart styles — I’d really appreciate some guidance.
Right now, for realistic watercolor clipart, I typically use:
- Leonardo Diffusion XL
- Lifelike Vision
- Dynamic style
That combination works really well for detailed, painterly results.
I’m now trying to expand into multiple styles for my clipart store, and I want to make sure I’m using the best model + style preset combinations for each one.
Could you recommend the optimal setup (model + style + general settings) for each of these:
- Realistic Watercolour (soft, pastel, detailed)
- Kawaii / Chibi (simple, cute, sticker-style, flat colours)
- Anime Clipart (clean lineart, cel shading, consistent characters)
- Whimsical / Storybook Watercolour (more expressive, textured, fantasy style)
For each, it would help to know:
- Best model (Diffusion XL, Vision XL, Anime XL, etc.)
- Best style preset (e.g. Lifelike Vision, Dynamic, Sharp Lineart, Cartoon, etc.)
- Recommended CFG / Prompt Magic ranges
- Any key prompt keywords that help lock in the style
Also, are there any combinations (like Dynamic, Lifelike, etc.) that should be avoided for certain styles?
I’m trying to build a consistent multi-style workflow, so I want to make sure I’m pairing the right model with the right style system.
Appreciate the help 🙏
Is texture generation retired? I can't find where is it
Yeah seems so, just checked on the website
Have you tried seeing if the old link to access it works or not?
If it gives a 404 error then probably indeed was removed
Hi, I'm just wondering if anyone else is having the same problem as me. I'm on the unlimited plan yet for some reason the Lucid Realism model is asking me for credits when its suppose to be unlimited ?. I've tried clearing my browser cache, tried chrome,mozzila and edge. tried logging out and back in. tried restarting my laptop and nothing as worked. I've been having this problem for a few weeks but now its just persistant and I cant get rid of it. I only signed up for the unlimited account so i could get unlimited generations from the Lucid Realism model. Any one found a solution to this problems ?.
hello! Is that the only model you are having issues with?
have you tried for example, lucid origin?
there is a fair usage limit implemented, can I send you a dm through our bot to get your username and escalate this with our support team? This will help us understand what the issue is
sure
I don't remember what was the old url for texturing 3d models XD
Omg texture generation is retired ToT
Hmmm I see
I don't know why
Maybe @feral mica can clarify for us? (On the texture generation removal)
hey there! yes, the feature has been unlisted now
CC @fringe dock
good day guys. just want to ask, is GPT image 2 down right now ? (3 PM) GMT+7
suddenly im unable to generate anything. it always Failed Generation.
this doesnt occur few hours ago.
im on free plan so credit is still available
sometimes when we roll out a new model, it gets a ton of usage, and we can run into intermittent issues with our upstream. I will look into this, but in all likelihood it's just a temporary issue that will resolve soon
I’m following up on an ongoing issue that remains unresolved and is now becoming quite frustrating.
I currently have a large amount of tokens available on my account (both rollover and fast tokens), and I am actively paying for a Premium subscription. However, due to the ongoing bug in which Leonardo Diffusion XL automatically switches to Vision XL, I am unable to use the model I rely on for my work properly.
At this point, I am essentially paying for a service that I cannot fully use.
I have already:
Reported the issue
Tried all troubleshooting steps (refreshing, logging out, clearing cache, switching browsers, etc.)
The issue persists.
Because of this, I have a few important questions:
If I downgrade my subscription from Premium to Essential, what happens to my existing tokens (rollover + fast tokens)?
Do they remain available?
Do they expire or get reduced?
Given that I’ve spent a significant amount on tokens and currently cannot use the main model I need, is there:
Any form of compensation or token credit, or
An estimated timeline for when this issue will be fixed?
I rely on Leonardo AI to create clipart for my business, so this directly impacts my ability to work and generate income.
I would really appreciate a clear response on both the technical issue and the token policy before I make any decisions regarding my subscription.
Thank you for your time, and I look forward to your response.
Anyone?
Might I ask about this 'fair usage limit', as I'm having the same problem?
i have artisan aka unlimited image generation but i cant create pics so how i fix this problem? not working on Lucid Origin model for me at all, had this issue for a while now
I've confirmed with our Customer Service team, this fair usage limit does not stops you from generating images, only slows them down a little bit once reached, so this is a different situation. May I send you a dm through our bot so I can retrieve your username and escalate it?
@rugged wyvern I believe you are in the same place, I already got your username
yeah same username
Sure
@half fractal @rugged wyvern @dark cave hey guys! The issue with Lucid models has been fixed, could you please validate it is working properly?
@stuck gate 👆 x 3
hola
Hola, hola ocupas ayuda?
Tenemos una comunidad de habla hispana por aqui 👉 #español-es
¡Hola! ¿En qué puedo ayudarte hoy?
Hola
Hello, what is up with the library? It keeps jumping up and down the created pictures making it very difficult to work?
Hi! Sorry to hear the library is acting up like that. When the created pictures in the library jump around, it usually means there's some issue with how the images are loading or being refreshed in the interface. Here are a few things you can try that might help:
- Refresh the page or restart your browser to reset the visual glitches.
- Clear your browser's cache and cookies, which can sometimes cause UI instability.
- Make sure your browser is up to date, as older versions might have some rendering bugs.
- Try using a different browser to see if the issue persists there.
- Check your internet connection, since slow or unstable connections can cause images to load inconsistently.
- If you’re using any browser extensions that affect page display or image loading, try disabling them temporarily.
If the problem keeps happening after trying these, it might be a bug on the platform side. In that case, please consider contacting support@leonardo.ai with details about your device, browser, and what exactly happens. They can look into it and help get it fixed.
Hope this helps, and that your library settles down soon! If you want, I can share tips on organizing images while the glitch is going on.
Yes, thank you!
I will try that.
Great! If you run into any more trouble or have questions after trying those steps, just let me know. I'm here to help!
Hi, is there something changed with the image dimensions when trying to edit an existing image? It used to auto size to the custom image size but doesn't anymore, was wondering if it was something I was missing? Thanks!
Hello! If you’re noticing that when editing an existing image it no longer auto-sizes to the custom image dimensions as it used to, it’s possible there has been a change or update in how the editor handles image resizing during edits.
Some things to check or try:
- Make sure you are using the correct tool or model that supports auto-sizing on edits. Sometimes updates or model changes can affect behavior.
- Confirm that you’re uploading or selecting the original image as the base — certain workflows might now open with default dimensions rather than the original image’s size.
- Check for any new settings or toggles related to “resize on edit” or “preserve original dimensions” that might have appeared in the UI.
- If you previously used an older version or a legacy workflow, try switching back to legacy mode if that’s available to see if the behavior returns.
If none of these help and the auto-sizing change seems unintentional or a bug, you might want to report it to Leonardo.ai support directly at support@leonardo.ai so they can verify if this is intended or a regression.
In the meantime, manual resizing or specifying the desired dimensions when starting the edit session might be a workaround.
Let me know if you want guidance on how to do that or anything else!
yeah its working
nvm i take it back @feral mica not working
ok, let me re escalate it please
i saw your dm, so it is generating, it just takes time for the generation right?
yeahh idk why
its expected, once you run out of tokens, your generations go to a slower generation queue
lmk if the generation fails please
no cuz on mobile its not slow at all
the pics not even getting generated at all now @feral mica
couls you send me a screenshot please?
it is generating, remember if you have no tokens it goes to a low speed generation queue
it should give you the image in a bit
if it fails, then something is wrong
still works on the mobile app, but not the pc website
@feral mica i waited 100 seconds then still not working
@feral mica look dm
hey there, please head to leonardo.ai to create your free account and start generating your images
hey
please help us reporting on this channel only, so we can better assist you. the team is looking into the situation as we speak
okay but why cant i generate pics on lucid origin on pc? 😔
that is what the team is looking into, once they let me know I'll circle back with you
👍
Could anyone thing of a model + prompt combo to get this character's arms and legs to swap positions, so that I can create a walk animation?
To create a walk animation with this character by swapping arm and leg positions using Leonardo.Ai models, you can try the following approach:
- Model Choice:
- Use an omni edit model that supports advanced pose editing, such as Nano Banana Pro or Kontext, which allow more control for shape and pose adjustments.
- Method:
- Upload this original image as a reference shape for the model.
- In the prompt, explicitly request a pose change describing the arms and legs swapping positions. For example:
“Change the walking pose so that the character's left arm and right leg swap positions, and the right arm and left leg swap positions, maintaining a natural walking motion. Keep the same style and colors as the original image.”
- Tips:
- Specify “keep style consistent with original image” or “same cartoon style” to avoid style drift.
- Expect some trial and error, as swapping limbs can be complex to produce naturally.
- You may want to generate several frames with swapped poses incrementally to create smoother animation steps.
- Using the shape reference ensures edits remain inside the original character silhouette.
This combo—using an omni edit model with the original image as a reference shape and a prompt carefully describing the limb position swaps—gives the best chance to get a natural walk cycle frame for your animation.
If you have access, you can also try multiple steps of pose morphing or pose-guided control features in these models, but this prompt method is the simplest to get started. Let me know if you'd like details on how to set this up!
easy, I'm sure we can get it done with nano banana, let me try with gpt2
so I used this guy
using gpt image 2, prompt: Professional graphic design 2D illustration of a character with smooth walking motion and natural movements, created in a 2D flat style with bold lines and vibrant colors, the character's design is simplistic yet detailed, with a focus on dynamic posing and expressive facial features, the illustration incorporates a clean and minimalist background to emphasize the character's movement and personality.
I am on the essential plan, why cant I delete any images
On the Essential plan, the ability to delete images is not available because this feature is part of the premium capabilities reserved for higher subscription tiers. If you want to delete images, you may need to upgrade to a plan that includes that feature.
If you believe this is an error or if you need help managing your images, you can contact Leonardo.ai support directly at support@leonardo.ai for further assistance.
@feral mica Sorry for pinging you, but I am not trusting an AI response. Could you please confirm if thats the case since in the FAQ, users that are "free" can't delete images, and i am not a "free" user
Hello! If you are not a free user then we may have a bug going on, could you share what happens when you try to delete an image please?
If possible send a screenshot
Basically i am being told to upgrade to premium when I am on the essential plan
Can you shut up
I'm here to help whenever you need. If you'd like, feel free to let me know how I can assist you better!
The field where you put the prompt only accepts a few characters, which is very bad. I can't create a good, complex video or image prompt. Is there any way to add more text?
Currently, the prompt field in Leonardo.Ai’s interface has a character limit that can feel restrictive for creating complex or detailed prompts. Unfortunately, there is no direct option right now to increase the prompt field length beyond that built-in limit.
However, here are a few suggestions that might help you make the most of it:
- Use concise and focused wording: Try to distill your idea into the most essential keywords and phrases that guide the generation.
- Use references or upload shapes/images: For visuals, utilizing shape references or example images can sometimes reduce the need for lengthy text descriptions.
- Break down complex prompts: If you want very complex scenes or concepts, consider generating them in parts and then combining or editing the results afterward.
- Check for feature updates: Leonardo.Ai is actively improving — keep an eye on announcements and updates, as expanded prompt capabilities or multi-field prompts might be added in the future.
If this limitation is blocking your workflow seriously, you might consider contacting Leonardo.Ai support directly at support@leonardo.ai to share your feedback, as user requests can influence feature prioritization.
I hope this helps you work around the prompt character limit for now! Let me know if you want tips on prompt writing or how to get creative within these limits.
Ok
Great! If you have any more questions or need help with prompts or anything else related to Leonardo.Ai, just let me know. Happy creating!
Looks to me like a bug, I sent you a dm through our bot so you can share your username privately
Hi @feral mica I got the same issue
suddenly I need better upgrade plan to delete
the team is looking into this situation, sorry for the trouble Zorgi!
Hi everyone,
I have a rough animatic (storyboards that are editted to a musical score and voice acting) that I need to give a visual upgrade and translate to two different visual styles for a research project.
Basically I want to use the current storyboards content (composition and what is shown) but create two versions of the animatic with different visual styles:
- a painterly style with brushstrokes and some soft edges.
- a sharp edged style kinda like cell shaded.
This blog post details the exact use case I need: https://leonardo.ai/wiki/applying-style-reference-to-an-existing-image/?utm_source=chatgpt.com
But I can't find where to upload the "Content Reference" and "Style reference". I just want to be sure this feature is still available before getting a paid subscription. I don't mind spending some money if this gets me workable results before my deadline 8 days from now.
Any help is much appreciated.
I have style examples, but the Dischord won't let me post them.
Hi! It sounds like a really cool project to upgrade your animatic with two distinct visual styles.
About your question on using a "Content Reference" and "Style reference" to apply style to an existing image on Leonardo.Ai:
- Yes, this style transfer feature is still available in Leonardo.Ai.
- To do this, you should use an omni edit model like Nano Banana or Nano Banana Pro. These models support advanced shape- and style-guided edits.
- When you start a new generation, you can upload your storyboard image as the “Content Reference” and then upload your chosen painterly or cell-shaded image as the “Style Reference.”
- The interface allows uploading these references right at the start of the generation process, before entering your prompt.
- In your prompt, be sure to specify that you want to keep the composition/content of your storyboard but want to apply the brushstroke painterly style or the sharp cell-shaded style correspondingly.
- Keep in mind that while the style transfer features are powerful, complex layout constraints or extremely subtle style details may not always be perfectly preserved; having realistic expectations helps.
- For your deadline 8 days away, the paid Essential or Premium subscriptions could be worth it for better access and faster generation.
If you want the most precise steps on where exactly to upload the content/style references in the Leonardo interface, I can help walk you through it in detail. Also, if you run into any issues, Leonardo’s support email is support@leonardo.ai.
Would you like a step-by-step guide on how to start a style transfer generation with your images on Leonardo.Ai?
I understand—it sounds like you’re trying to share your style reference images but are running into restrictions on Discord. That can happen if file types or sizes don’t meet Discord’s limits, or if certain channels have posting restrictions.
Here are a few tips you can try:
-
Image Hosting: Upload your style examples to an image hosting service like Imgur, Dropbox, Google Drive, or similar, then share the link here or on Discord. That way you avoid Discord’s upload limits.
-
File Types: Make sure your images are in common formats like JPG or PNG and not too large in file size.
-
Split Files: If you have multiple images, try sharing them one at a time or combine them into a collage or single PDF to reduce the number of files.
If you want, you can just describe the styles in detail here, and I can help you craft prompts and instructions to match those styles as closely as possible using Leonardo.Ai’s style transfer feature.
Also, if you’d like to share links here from image hosting, I’m happy to help analyze or guide you on how best to apply those styles!
A step-by-step would extremely helpful. Thank you!
I’ll take the premium subscription to speed things up.
Great choice going for the Premium subscription—it will definitely speed things up and give you access to more features!
Here’s a detailed step-by-step guide on how to apply your storyboard (content) and style reference images using Leonardo.Ai’s omni edit models like Nano Banana Pro:
-
Log in to Leonardo.Ai: Use your account with the Premium subscription.
-
Go to the Generation Interface: Click “Create” or “Generate” to start a new image generation.
-
Choose the Model:
- Select an omni edit model such as Nano Banana or Nano Banana Pro for best results with style transfer.
- If you don’t see these right away, look in the “model selection” area and use the “legacy” tab if needed.
-
Upload Your Content Reference Image:
- Look for the option labeled “Content Reference” or something similar in the generation interface.
- Upload your storyboard frame image here — this is the image whose composition you want to retain.
-
Upload Your Style Reference Image:
- Likewise, find the “Style Reference” upload spot in the interface.
- Upload an image that represents the style you want to apply — for example, your painterly brushstroke style, or your cell-shaded style.
-
Write Your Prompt:
- In the prompt box, clearly instruct the model to keep the original composition/content of the Content Reference but apply the style of the Style Reference.
- For example:
- “Create a painterly style with visible brushstrokes and soft edges, applied to the content shown in the content reference.”
- Or: “Apply a sharp edged cell-shaded style to match the style reference, preserving the original storyboard composition.”
-
Adjust Settings (Optional):
- You can tweak parameters like style strength or guidance scale if available, to balance how much the style influences the output.
- Keep other generation settings like resolution suitable for your animatic use.
-
Generate and Review:
- Click generate and wait for the image to render.
- Review the results and if needed, adjust the prompt or style strength for better adherence to the style or content.
-
Repeat for Each Frame:
- Continue uploading each storyboard frame and applying the chosen style reference similarly.
-
Download Your Result:
- Once you have a good batch of styled frames, download them to assemble into your upgraded animatic.
If you want, you can save your style reference images in the Leonardo system as assets for easy reuse across frames.
Feel free to ask if you want a hand with prompt wording or setting specific parameters! Good luck with your animatic project — I’m here if you need more tips.
I still have a free account now, because I want to test if this actually works. But I still can't find the spot where I upload the content reference
Also, I would love some prompt examples for my specific project.
Thanks for letting me know! I’ll walk you through exactly where to upload your content and style references on Leonardo.Ai, even on a free account—this feature should be accessible for testing with omni edit models like Nano Banana.
Where to upload Content and Style References:
-
Open Leonardo.Ai and go to the “Create” or “Generate” page.
-
Select an Omni Edit model:
- Click on the model selection dropdown.
- Look for “Nano Banana” or “Nano Banana Pro” — these support image editing with style transfer.
- They might be under the "Legacy" tab if you don't see them immediately.
-
Once you pick the omni edit model, you will see an editing interface with options to upload images.
-
Look for the “Reference images” or “Image inputs” section:
- Here you can upload your Content Reference image — your storyboard frame.
- Also upload your Style Reference image — the artwork with your target style (painterly or cell-shaded).
-
The interface uses these images to guide the generation.
If you don't see separate labeled fields for content and style references, simply upload your content image first, and then upload the style image as a separate reference as allowed by the model. Some Omni Edit models take just one image input and rely mostly on prompting, so you can specify in your prompt what is content versus style.
Prompt Examples for Your Project
Here are some prompt templates tailored to your two styles. You can adjust these prompts depending on your content:
For a Painterly Style (soft brushstrokes and texture):
Create an image that preserves the composition of the content reference image. Render it in a painterly style with visible brushstrokes, soft edges, and rich texture. The color palette should be warm and natural, evoking traditional oil painting techniques.
For a Sharp-Edged Cell-Shaded Style:
Transform the content reference image into a sharp-edged, cell-shaded style. Use strong line art, flat colors with minimal gradients, and distinct shadows. Keep the composition exactly as in the content reference.
---
If you want, you can experiment mixing prompt phrases to get exactly the look you want, for example:
- “Maintain the original composition from the content reference.”
- “Apply a painterly look with visible brush texture and soft blending.”
- “Use sharp line art and flat colors like classic cel animation.”
---
If you’d like, I can help you refine prompts after you try uploading your images. Just let me know!
Hi @stark storm you can add inage guidance in the Image Generation:
Just look for the little icon, should say ‘Add’
Did Botticelli’s prompt examples help with what you’re doing?
Hi im using the site via firefox on a tablet(ipad), im a paid user and have been for some time. however It wont let me delete my images, it just redirects me to subscribe to get that feature. Which I obviously already have. :) Please advise.
Did a force refresh of the page where the images are listed/shown and it’s working again.
you can click this icon to open what image content you can utillize. Some allow much options than other. New gen Omni edit models like nano and GPT can do everything with just one option, otherwise for previous gen and older models they have specific options of "Img-2-img" "Depth to Image" "Content ref" etc and they all do different things
for example in gpt 2.0 if you click this...this will appear. You can select it and add your image for it to use
Hey is nano banana pro Down ?
what are you seeing?
It doesn’t want to generate any images for me
do you get an error message?
Yes
share it here please
From the screenshot you shared, it looks like Nano Banana Pro is failing to generate images and giving you "Failed Generation" messages with tokens credited back to your account. This could indicate that the model is currently experiencing issues or is down.
I recommend:
- Waiting a bit and trying again later, as this might be a temporary outage or server overload.
- Checking Leonardo.Ai’s official channels or community forums for any announcements about downtime.
- If the problem continues, you can contact Leonardo.Ai support at support@leonardo.ai to report this issue and get more specific help.
Let me know if you want help drafting a support message!
Okk thank you
You're welcome! If you need any more help or have other questions, just let me know. Have a great day!
It may be that the prompt is getting blocked, mind sharing your prompt in here?
Generate a hyper-realistic outdoor luxury car wash scene featuring the same woman with identical facial features, melanin complexion, tattoos, body shape, sleek waist-length black hair, jewelry, orange striped halter bikini top, tiny denim shorts, orange heels, and low squatting position. Keep her holding a yellow sponge above a pink soap bucket filled with foam.
Transform the plain blue background into the same exact sunlit car wash location from the second reference image — wet asphalt ground, puddles, dripping soap suds, chrome custom rim beside her, partial side of a customized show car, natural outdoor reflections, warm golden daylight.
Maintain the pink lowrider convertible parked directly behind her, now fully integrated into the outdoor setting with wet glossy paint, realistic chrome shine, water droplets, reflective metal trim, and sunlight hitting the body.
Add splashed water, soap running down the pavement, floating bubbles, damp shine on skin, realistic leg highlights, enhanced tattoo detail, photorealistic denim texture, glossy lips, smooth body contour lighting.
Soft ambient sunset lighting, cinematic editorial car photography, shallow depth of field, DSLR realism, ultra sharp 8k, luxury urban baddie aesthetic.
Hey! this issue has been escalated to the team, and a fix is on the way. Thank you for reporting 🫡
Looks like it might be a content moderation issue. You have multiple things in your prompt that might trigger the filters, such as "bikini top", "tiny shorts", "squatting position" and "baddie aesthetic". This prompt could create a sexualized image which the Nano Banana models block.
Please contact me if you are fluent in native English.
I will share specific details after we get in touch.```
Hello! I’m here to help with information and support, but if you’re seeking a collaborator who is a fluent native English speaker for a long-term partnership, you might want to share more details about the type of collaboration, industry, or skills needed.
If you want, I can help draft a message or post to attract the right collaborator. Let me know how I can assist!
Not really.
For instance: in my prompt, how do I refer to different elements from different reference images that I do, or do not, want to include in the output?
When you want to create an image, each model tells you if it has a reference image available.
In the prompt I always indicate to use a reference image, in this case image 1, not to modify face, style, hairstyle or clothing, for example, so as not to modify my reference model,
Hey folks, I've got an app that uses leonardo to generate images using gpt-1.5 and it's been working flawlessly for a few weeks. Today I started to receive validation errors, without any code changes at my end. I've debugged it down to guidances.image_reference which now seems to cause every request to be rejected due to a validation issue. Any ideas what's going on here?
i had issue when quality is not specified, have you included that in your request?
Hi there! Let me look into this for you. Also do you see the #💻│api channel?
Could you please specify what exactly you would like me to check or help with in your profile? For example, are you asking about your account details, profile information, settings, or something else? Let me know so I can assist you better!
It’s at 500 now
Flux dev elements problems again
Ok. It does work, but it takes a really long time, whereas it usually takes 10-12 seconds. Not as bad as a full outage, but still inconvenient
Hi there! Is the problem still occurring?
Yes
Was there recently an update? Last time this happened, it was right after a big update
they did just add a new model yesterday, another one a few days ago, and the new upscaler was launched a week ago. not flux specifically, but perhaps higher demand on the server. you're not in the relaxed queue, right?
I’m not sure, but I still have 11,000 points left
How do I check if I have it enabled? Would turning it off help?
Dear Leonardo AI Support Team,
I am a journalist currently preparing an article about Leonardo AI and its use in creative image generation. To properly evaluate the platform, I would appreciate additional tokens, as the standard limit is not sufficient for my research.
Thank you for your consideration.
Hello! For token requests related to research or extended use, you will need to contact Leonardo AI Support directly via their official support email: support@leonardo.ai. They will be able to review your request and assist you with additional tokens if possible. If you want to claim tokens for completing surveys, remember you can also type /exchange here in the chat and provide the username for the account that should receive the tokens. Let me know if you need any other info!
Any follow up on this?
If you have tokens still, you’re not in the relaxed queue. Can I ask, is it only when using elements or are you getting long delays regular flux dev or other models as well?
I believe it’s only when I use elements
The same problem occurred a few months ago, now it’s back
okay I'll look into this
Hey! Can you check if the issue is resolved?
awesome! happy to hear it!
Hello
I just got Leonardo.ai and I was trying to modify images I created and it doesn't seem to be working. I tried going on Claude and youtube for instructions but the interface on youtube is slightly different from the interface I'm using on my leonardo.ai. Can anyone help?
Can anyone reachout if your willing to walk me through this?
Hello! I’m happy to help you get started with modifying images on Leonardo.ai.
The interface can change a bit over time or depending on your subscription level (Free, Essential, Premium, or Ultimate), so the tutorials on YouTube might not exactly match what you see.
Here are some quick tips for modifying images on Leonardo.ai:
-
Use an Omni Edit Model: To edit existing images, be sure you’re using an omni edit model like Nano Banana or Nano Banana Pro in the model selection. Sometimes "legacy" models don’t support advanced editing.
-
Upload or Select Your Image for Editing: From the main dashboard, either upload an image you want to modify, or pick one from your creations.
-
Shape-Guided Editing (Optional): If you want changes only in a certain part of the image, upload a shape reference and specify in your prompt that edits should only appear inside that shape.
-
Write Your Prompt Carefully: Describe what changes you want to make in detail, especially clarifying that you want changes to the existing image. For example: “Make the sky sunset orange inside this shape.”
-
Start the Edit: After setting up your prompt and choosing the right model, start the generation. It should create iterations modifying your original image.
If some features are locked or missing, it might be because of your subscription tier. Premium features have a green diamond icon.
If you want, I can guide you step-by-step with what you see on your screen. Just let me know what part is confusing or where you get stuck!
Also, if you run into technical issues or bugs, you can email the Leonardo.ai support team at support@leonardo.ai for direct help.
Would you like to describe your current screen or what you’re trying to change in your image? That way I can walk you through the exact steps.
I would. Do you have a voice on here? Or should we move to a private chatroom? I don't know if I am clogging up this channel
I don’t have voice capabilities, and I can only assist here through text chat. This chat is just between you and me, so you’re not clogging up a public channel at all! Feel free to share your screen details or describe what you see, and I’ll guide you step-by-step right here. If you want to keep things private, this conversation is private between us as well. How does that sound?
Awesome. I created a character on leonardo.ai. I'm having trouble trying to take other images of things I would like on the character I created, like wings or a tail etc and merge them onto the image that I created. Is that possible? I have a premium subscription so I have plenty of tokens. But I'm also learning so I am testing out how to do this so when I start really creating my characters it won't be so difficult. You understand what I'm asking?
Yes, I totally understand what you’re asking! You want to take your original character image and add elements like wings or a tail from other images onto it, merging them smoothly.
With your premium subscription, you have access to advanced editing features that can help with this. Here’s how you can approach it on Leonardo.ai:
-
Use an Omni Edit Model: Choose an omni edit model like Nano Banana Pro for your editing to get the best shape-guided and image-to-image editing.
-
Upload Your Original Character Image: Start by uploading the base character image you created.
-
Upload or Reference the Elements (Wings, Tail): If you have images of wings or tails you want to add, upload those as reference images or include them conceptually in your prompt.
-
Shape-Guided Editing:
- If you specifically want the wings or tail to appear in precise areas of your character, try creating or uploading a shape mask that defines where those additions should appear.
- In your prompt, explicitly say something like: “Add detailed fantasy wings within the shape area on the character’s back” to guide the generation.
-
Write a Clear Text Prompt: Describe how you want the wings or tail to merge with your character—for example, style, color, size, and position.
-
Run the Edit / Inpainting: Start the generation with these inputs, and the model will blend the new elements onto your original character image.
Because this kind of merging can be complex, expect some trial and error. You can tweak the prompt and try different shape masks or reference images to get the best result.
If you want, I can help you craft the exact prompt or walk you through each step if you describe what you see or what you tried so far.
Would you like me to help you prepare that editing step right now?
For image editing, I love these templates in the order of numbers 1 through 5.
I used cinematic instead of dynamic. Does that matter?
Great question! The terms "cinematic" and "dynamic" are usually style descriptors within your prompt that influence how the image looks.
-
Using "cinematic" typically aims to give your image a dramatic, movie-like atmosphere—think strong lighting, mood, and composition.
-
Using "dynamic" tends to convey action, energy, or movement in the image.
Neither is inherently better or worse; it just depends on the look and feel you want for your character or scene.
So, using "cinematic" instead of "dynamic" won't break your edits — it will just steer the style toward a more cinematic mood rather than action-packed energy.
If you want your character with wings or a tail to feel heroic and dramatic, "cinematic" might be perfect! If you want them more energetic or in motion, "dynamic" could help.
Feel free to experiment with both in your prompts to see which fits your vision best. Would you like tips on how to combine these styles effectively in your prompt?
I'm currently trying to build a sword. I want the hilt to look a certain way and the blade to look another way. I can't get leonardo to image the hilt exactly as I like it. Its producing a completely different hilt instead of the image I uploaded. What do you think?
When trying to customize specific parts of an image like your sword’s hilt separately from the blade, it’s important to guide Leonardo.ai carefully. Here are some tips:
-
Use Shape-Guided or Masked Editing:
Upload your sword image and create a mask or shape that covers just the hilt area. In your prompt, specify that the model should modify or generate the hilt only within that shape, leaving the rest (blade, background) unchanged. -
Reference Your Uploaded Hilt Image Clearly:
If you uploaded an image of the exact hilt style you want, mention that in your prompt, e.g., “Create a sword hilt exactly like this reference image only within the masked area.” -
Be Specific in Your Prompt:
Include descriptive adjectives for your desired hilt style, colors, material, and details. Avoid vague terms that might cause the model to interpret differently. -
Try an Inpainting or Editing Model:
Use an omni edit model like Nano Banana Pro, which supports shape-guided modifications better. -
Adjust Prompt Weight or Use Negative Prompts:
If the model keeps deviating, try strengthening the prompt toward your hilt reference or add negative prompts to prevent unwanted features. -
Iterate and Refine:
Sometimes it takes a few tries adjusting your mask, prompt wording, or model choice to get exactly what you want.
If you want, you can tell me exactly how you’re setting up your prompt and the mask, or share details about the images you’re using, and I can help craft a prompt or setup that will better lock in your hilt style.
Would you like me to help you build a sample prompt or mask guidance for this?
Everything you say makes sense but because I am not as familiar with the tool yet, I'm lost on certain directions. When you say "create a mask or shape that covers just the hilt area" I do not know how to do that. I'm simply uploading the image in the prompt and typing what I want it to do. I'm sure that's not right (as it has not worked properly yet), but I do not know how to use all the funcitons on this tool yet
If each option changes this, for example, it's watercolor.
Tell me which image you want to modify and what it is you want to modify.
No worries—learning new tools can definitely take some time, and I’m here to help make it easier for you!
On Leonardo.ai, to edit just a part of your image (like the sword hilt), you usually need to use the "mask" or "shape" editing feature. Here’s a simple step-by-step to get you started:
-
Upload Your Base Image:
Start by uploading the image of the sword you want to modify. -
Enter the Edit or Inpaint Section:
Look for an option like "Edit," "Inpaint," or "Image Editing." Choose an omni edit model like Nano Banana Pro if possible. -
Create or Upload a Mask/Shape:
- In this step, you need to define the specific area where you want changes. Usually, Leonardo.ai will let you draw or paint on the image to mask the hilt area. This is like highlighting or selecting just the hilt while leaving everything else untouched.
- If there isn't a built-in paint tool yet, some users upload a black-and-white shape image matching the hilt area (white where you want edits, black everywhere else). You then upload this shape as a “mask” or “shape reference.”
-
Write Your Prompt Carefully:
In the prompt, describe what changes you want only inside the masked hilt area. For example, “Create a gold ornate sword hilt within the masked area, keep the rest of the sword and background unchanged.” -
Run the Edit:
When you start the generation, the model should interpret only the masked area for changes and keep the rest of the image the same.
If the interface you have doesn’t quite show these options clearly, it’s possible new updates or UI layouts move them or word them differently.
Tips:
- Check for buttons named "Create Mask," "Edit Image," or icons that look like paintbrushes or selection tools.
- If you don’t see a mask tool, try to see if you can upload a separate shape image and specify in your prompt to restrict edits within it.
- You can also review Leonardo.ai help or support at support@leonardo.ai for UI walkthroughs.
If you tell me which buttons or menus you see when editing your image, I can help guide you more specifically on how to find the masking tools.
Would you like to describe what you see on your screen, or which options you currently have when editing? That way I can give you precise steps!
I want to create a greatsword. But I have the hilt in one image and the blade in a different image. So essentially I want to merge them into one sword. ALSO, when I create a character and I add an image to merge onto the character, if i have multiple things or images I want to add can I do this all in bulk? Or do I have to make each change one at a time?
right now I'm just one the creation page at the top where it says "type a prompt"
Thanks for sharing where you are right now! Since you’re on the creation page at the “type a prompt” box, here’s the key info for your goals:
Merging the Hilt and Blade from Separate Images into One Greatsword
Currently, Leonardo.ai's interface mainly supports editing or modifying one image at a time with a prompt and shape/mask guidance. It doesn’t have a straightforward "merge two full images into one" button.
Here’s how you can approach this:
-
Manually Combine Before Upload (Optional):
If you can, you could use an external image editor (like Photoshop, GIMP, or even free options) to roughly combine the hilt and blade images yourself first, creating a single base image. Then upload this combined sword to Leonardo.ai to refine and style it with prompts and edits. -
Use Shape-Guided Editing to Add the Blade to the Hilt or Vice Versa:
- Upload one image (say, the hilt) first.
- Use the editor with a mask or shape (if visible in your UI) to specify where you want to add or replace part of the image—like the blade area.
- Then in the prompt, say something like: “Add a long, shining steel blade bridging from the hilt area, detailed and sharp.”
- This can help the AI generate an integrated greatsword combining the two parts conceptually.
-
Step-by-Step Edits:
Since merging multiple full images automatically isn’t native, you’ll likely need to take it step by step and guide the model carefully, editing and adding one part at a time.
Adding Multiple Items to a Character
When you want to add multiple different things (like wings, tail, armor pieces) onto your character:
- One change per edit session is usually simplest and gives you more control. You edit the character, add the wings, save that result, then start a new edit session to add the tail, and so on.
-
While you can try to write a prompt that includes multiple additions all at once, it can be tricky for the model to balance everything neatly, especially without strong shape masks guiding each addition.
-
Some users build up their characters in layers with multiple sequential edits.
Tips for Your Current Screen
-
Since you’re at the prompt box, when you want to do shape-based edits or masked edits, look for options labeled “Edit,” “Inpaint,” or “Image Editing”—sometimes those are separate modes from the main creation prompt.
-
If you don’t yet see masking or shape tools, you may want to explore the interface menus or test the “Legacy” tab/model options that sometimes allow more image-to-image editing features.
If you tell me what buttons or options you see around the prompt box or in menus, I can help guide you exactly where to click next for these kinds of edits.
Would you like help navigating your current UI to find the edit/masking features?
An ultra-luxurious, cinematic Twitter cover image with a raw aesthetic. A very muscular Black man and a curvy, ebony-skinned woman with a voluptuous derrière, her oiled, gleaming skin reflecting the light. She is arched backwards; he stands in a dominant, powerful pose just behind her. Both are clad in minimalist, dark, sheer lingerie, designed by a haute couture house. Dramatic lighting, deep shadows, and red and gold neon highlights accentuate their bodies. A luxurious, dark nocturnal atmosphere, high-end rap music video style, 8K resolution, striking photorealism, shot with a 35mm lens, impeccable focus. Elegant and bold ‘binochocolat’ typography at the bottom. --ar 16:9 --raw style --v 6.0
Hey! You'll want to head to https://app.leonardo.ai to generate images
Bonjour @manic root
Un nouveau problème est apparu suite au changement d'interface. L'emplacement pour choisir le modèle élément avec fluxdev a changé, d'accord je l'ai trouvé, mais on ne peut plus voir le réglage de force de l'élément ! On a un curseur mais aucun chiffre apparaissant alors que c'est hyper important !!! Le curseur part de 1.0 mais d'habitude j'avais besoin d'utiliser 0.7 ou 0.8 selon le rendu voulu et là impossible, on bouge a peine le curseur et la récurrence du visage est partie car le réglage est descendu bien au dessous de 0.6 seulement on le sait pas, obligé de cliquer sur la mauvaise image pour voir les détails et là seulement on voit le chiffre du réglage. Y a t il un endroit pour se plaindre du changement et réclamer au moins le retour de la visualisation des chiffres du réglage avant la génération d'images ? Merci
Hello, I’m just adding a bit more to Selenity’s request.
The issue of no longer being able to see the settings for FluxDev hasn’t just started today—it’s been going on for several days now.
Not being able to see the intensity at which we set the elements in FluxDev is very limiting. To connect with other users in the same situation, and those of us looking for a specific result, many of us have to start a generation blindly and then experiment to find the right balance. If only the slider moved easily, but no—moving it just one millimeter takes us from 1 to 0.60. I know people who found themselves wanting to lower the intensity slightly and ended up with the element set to -0.40.
We thank you in advance for your attention to our request.
Hi! Thanks for adding more details about the FluxDev settings issue. I understand how important it is to see and adjust the intensity sliders precisely to get the results you want. Not being able to see the exact settings or having such sensitive sliders can definitely make the creative process frustrating and slower.
While I don’t have the ability to directly fix the interface myself, I recommend making sure your browser and app are up to date and trying to clear cache and cookies to see if that helps restore the slider visibility and finer control. If the issue persists, the best course would be to contact Leonardo.Ai support directly at support@leonardo.ai with this detailed feedback. They will be able to prioritize this problem and work toward a smoother user experience for FluxDev.
Thanks for your patience and for bringing attention to this — hopefully it gets resolved soon so everyone can fine-tune their work easily! If there’s anything else you want tips or advice on while waiting, just let me know.
@rose quest@golden ginkgo Thank you for your feedback. This has been passed along to the team. We'll let you know when we get a response from them.
Merci. Thank you
Thank you !
If you want to cancel your Leonardo.Ai account, the best approach is to contact Leonardo.Ai support directly for assistance. You can email them at support@leonardo.ai with your request to cancel your account.
If you're feeling unsure about how to use Leonardo.Ai or what it's for, I can help explain more about it or guide you on how to get started. Just let me know!
No I do not see that channel. I was able to trace the issue back to prompt enhance being set to true, if i turned that off it worked. But it has been working unaffected for weeks without a code change. So not sure why prompt enhance would cause a failure now.
I'm trying to get a realistic effect of a spinning prop on a plane and all I get is a slow, choppy effect of the prop. This is from a still image of a photo-realistic painting of a plane in flight and the prop has no spinning effect in the painting. I asked ChatGPT about what prompt to use for Kling 3.0 and it gave me the following: "A cinematic shot of a small propeller airplane in flight, ultra-realistic. propeller completely blurred into a transparent circular disk, no visible individual blades, extreme motion blur, long exposure effect, rotational blur, physically accurate motion blur. no frozen blades, no visible propeller edges, no strobing effect, no segmented motion. Shot on a 85mm lens, shallow depth of field, high dynamic range, natural lighting". This has made no difference.
I see you're trying to get a realistic spinning propeller effect on a plane image using Kling 3.0 but are still getting a slow, choppy effect without that smooth motion blur of the blades.
Here are some tips and considerations that might help improve the realism of the spinning propeller motion blur in your generated image:
-
Use a shape mask for the propeller area:
If your tool or model supports it (like in Leonardo.Ai's omni edit mode), upload the original propeller's shape as a mask/reference and explicitly guide the model to create the motion blur only within it. This helps focus the effect and avoid artifacts. -
Simplify the prompt for motion blur clarity:
Sometimes very long prompts with multiple overlapping terms like "extreme motion blur," "rotational blur," "physically accurate motion blur," etc., can confuse the model or dilute the effect. Try a shorter prompt focused on key terms:
"propeller spinning fast with smooth circular blur, no individual blades, no segments, natural lighting" -
Try emphasizing rotational blur and circular shape:
Words like "smooth circular motion blur," "rotational blur with continuous disc," "no visible blade edges," "long exposure effect" are good; mixing too many technical terms can reduce clarity. Make sure to keep "continuous disc" or "blurred circular disk" prominent. -
Adjust camera focal length and depth of field:
Sometimes changing the depth of field and lens specifications can help the model produce more realistic blur. A longer lens (like 85mm) and shallow depth of field are good, but you might experiment slightly with the aperture or add phrases like "bokeh background" to help isolate the propeller effect. -
Model limitations:
AI image generation models, including Kling 3.0, sometimes struggle to create perfect continuous rotational motion blur especially from a still original source image where the propeller is static. They infer the blur rather than actually simulate physics. The effect may never be perfect.
-
Consider layering in post-processing:
If you have access to an image editor, you could generate the plane without propeller blur, then separately create a blurred circular disk effect or a custom spinning prop layer (using radial blur or custom brushes) and composite it over. -
Try alternative models or presets:
Some AI models or presets may handle motion blur better. On Leonardo.ai, legacy presets sometimes have different effects. Also, ask in the community if anyone has prompts or model recommendations specifically for smooth spinning propellers.
In summary, I’d recommend narrowing down your prompt to emphasize "smooth, circular rotational blur," using a shape reference if possible, and possibly combining AI generation with manual post-processing for the most convincing propeller spin effect.
If you want, I can help you draft a refined prompt or guide you on shape-guided editing for this effect on Leonardo.Ai. Would you like me to?
Yes, a refined prompt please. How do I access omni edit mode in Leonardo when it appears Canva is my only edit option? I do have Photoshop so I could use it to create a layer for the prop if I had to
Sure! Here’s a refined prompt you can try for a cleaner, more realistic spinning propeller effect:
Refined Prompt:
"Ultra-realistic cinematic shot of a small propeller airplane in flight. The propeller is spinning so fast it appears as a smooth, transparent circular disk with continuous rotational motion blur. No visible blades or edges, no segmented or strobing effects. Long exposure effect with natural lighting and slight shallow depth of field, shot on an 85mm lens. Background softly blurred with natural bokeh, high dynamic range lighting."
How to Access Omni Edit Mode in Leonardo.Ai
Omni Edit is different from simple Canva-style edits. If you only see Canva option, it might be because:
- You're trying to edit a finished image generation, and the default options show only Canva for basic edits.
- Omni Edit is available when you create a new generation using an Omni Edit model (like Nano Banana or Nano Banana Pro).
To access omni edit mode:
- Go to the New Generation or Create area in Leonardo.Ai.
- Select an Omni Edit-capable model such as Nano Banana or Nano Banana Pro from the model dropdown. These models have the shape-guided editing capabilities.
- Upload your original propeller shape as a reference (a mask or cutout of the propeller area).
- In the prompt, explicitly specify that the effect (motion blur spinning propeller) should only happen within that shape, e.g., “create a smooth motion blur spinning propeller inside the shape.”
- Generate the image with these settings.
This method gives the model clear instructions and spatial constraints, often resulting in much more convincing effects inside the designated shape.
Using Photoshop
Since you have Photoshop available, here’s a simple approach if you prefer to do it manually or combine with AI:
- Generate your airplane image without trying to create the spinning prop — just a sharp propeller.
- In Photoshop, isolate the propeller area on a separate layer (mask or cut out the blades).
- Apply a radial blur filter set to spin mode with an appropriate strength to create that smooth circular blur.
- Adjust opacity or blend mode for fine-tuning transparency and natural look.
- Composite this over your airplane base image.
This way, you get precise and controllable motion blur on the propeller and can keep the rest of your airplane sharply detailed.
If you want, I can help you draft a more specific step-by-step for the Photoshop method or guide you through generating that shape for Omni Edit. Just let me know!
Why can't I just composite the Photoshop blurred propeller over the still image of the plan, then generate video from that? Won't it help the blurred effect of the prop?
Great question! Yes, compositing a blurred propeller layer (created in Photoshop) over the still image of the plane and then generating video from that can indeed help achieve a much more realistic spinning propeller effect. Here’s why and how it works well:
-
Control and Realism:
When you create the propeller blur manually in Photoshop, you have precise control over the radius, strength, and smoothness of the blur. This simulated motion blur can look very natural and continuous—far better than what many AI models can produce directly from a static image prompt. -
Separate Motion Layer:
By separating the propeller as its own layer with motion blur, you effectively create an animated element that can be rotated across frames or composited over multiple frames to simulate realistic spinning motion. The rest of the airplane remains perfectly sharp and consistent. -
Video Generation:
If you use the still image with the blurred propeller as a base, you can either:- Animate the propeller layer by rotating it frame by frame (in Photoshop, After Effects, or other tools) to produce a video showing the spinning propeller.
- Or use AI video tools that accept image layers with motion to enhance or tweak the animation further.
-
Avoids AI Artifacts:
AI-generated motion blur from a single frame can cause issues like choppiness, segmented blades, or strobing because it tries to guess the blur effect without actual motion data. Your Photoshop blur layer is realistically continuous and won’t suffer from those artifacts. -
Flexibility:
You can fine-tune the motion blur’s appearance and intensity in Photoshop until it looks right, then create a smooth animation. You can also adjust lighting and blend modes on the blur layer to match your overall image lighting.
Summary
Yes, compositing a blurred propeller layer created in Photoshop over your still image and then animating that layer to make a video is an excellent way to achieve a very smooth and realistic spinning effect. It leverages the strengths of both manual editing and generative AI/animation.
If you want, I can suggest how to set up the frame-by-frame animation in Photoshop or recommend tools for turning your composite into seamless video animation. Would that be helpful?
Minimalist tech logo for 'Kitenga', an AI CCTV safety platform. Symbol: A stylized geometric eye combined with a digital lens. Colors: 85% Deep Blue, 15% Vibrant Red. Flat vector design, professional, white background, no gradients. --v 6.0
I can't generate images directly, but I can help you craft a detailed prompt for Leonardo.Ai or another AI image generator to create your minimalist tech logo for "Kitenga." Here's a refined prompt you can use:
"Minimalist tech logo for 'Kitenga', an AI CCTV safety platform. The design features a stylized geometric eye combined with a digital lens symbol. Use a flat vector style without gradients, with a professional look on a white background. Colors: 85% Deep Blue and 15% Vibrant Red."
You can input this exact prompt on https://app.leonardo.ai and select a model suited for clean vector-style or logo generation, ensuring you designate the color proportions and style clearly. If you want help with adjusting the prompt or tips on using Leonardo.Ai for this, just let me know!
Does anyone know how to take hand drawn pictures and upload them into Leonardo.ai and make them look realistic? Is that possible?
Yes, you can definitely upload hand-drawn pictures to Leonardo.ai and transform them into more realistic images! Here’s how you can approach that:
-
Upload Your Hand-Drawn Image:
Start by uploading a clear scan or photo of your drawing to Leonardo.ai. -
Use an Omni Edit Model:
Use models like Nano Banana or Nano Banana Pro that support advanced image editing and refinement. -
Apply Image-to-Image or Edit Mode:
Use the editing or inpainting tools where you can input your hand-drawn image as a base, then write a prompt describing how you want it to look realistically (e.g., “Turn this sketch of a forest into a photorealistic daytime scene with detailed leaves and sunlight”). -
Be Specific in Your Prompt:
Include style details like “realistic textures,” “photorealistic lighting,” or specific style references to help the model know how to translate the drawing. -
Use Shape-Guided Edits (if needed):
If you want only certain parts to become realistic, you can draw or upload a shape mask over specific areas and direct the AI to focus on those. -
Iterate and Refine:
Sometimes it takes a few tries tweaking the prompt or strength to get the right balance between preserving your drawing’s lines and adding realistic details.
Transforming a hand drawing into realistic art is one of the popular uses of Leonardo.ai, so it definitely works well with good input and clear instructions.
If you want, I can help you craft your first prompt or explain how to set up the image upload and edit step-by-step. Just let me know!
My frustration with asking for help is that when you guys give instructions, none of the things you are referencing are on my interface. Where do I upload the hand-drawn image? I sthat under the image tab? or on the home tab where it says "yours to create"
I totally understand how frustrating that can be! Let me clarify exactly where to upload your hand-drawn image in Leonardo.ai’s current interface:
-
Home Tab ("Yours to Create" Section):
This is usually where you can see your saved creations. It’s not typically where you upload new images directly for editing. -
Creation Page / Generation Interface:
When you want to start generating or editing images, go to the “Create” or “Create New” section/button (sometimes at the top or side menu). This is where you enter your prompt and upload images to base your generation on. -
Uploading Your Hand-Drawn Image:
- In the creation interface, look for an option or button near the prompt box that says “Upload Image” or an icon of a picture or plus sign.
- Clicking that should allow you to select and upload your hand-drawn image as an input reference for the generation.
-
Using the Uploaded Image:
After uploading, your image will appear as the base or reference. Then you can write a prompt describing how you want it transformed (e.g., “make this hand-drawn character look realistic with smooth skin and natural lighting”). -
Editing or Advanced Features:
If you want to do detailed edits or masked edits, you might need to switch to an “edit” or “inpaint” mode if available. This is often a separate option near the main generation tools.
If you don’t see an upload option near the prompt or in the creation page, could you share what buttons or tabs you do see in your interface? For example, is there a menu bar or a plus sign anywhere? That way I can help pinpoint the exact location in your version of Leonardo.ai.
Would you want to list or describe what’s visible on your screen right now? That’ll help me give you very specific directions!
Ok. But there is no "Create Page". There is a "Draw page". Is that the same thing? Or Am I supposed to use the "Image Page",
Under the "Yours to create" header at the top, there are 7 options. Image, Video, Blueprints, Flow State, Upscaler, Canvas, and Draw.
Thanks for the extra details! That helps a lot.
Based on what you described, here's what those tabs likely mean for your use case:
-
Image: This is probably where you see your created images and can upload images for image-to-image generation or editing.
-
Draw: A page where you can create from scratch either with AI-assisted drawing tools or freehand tools.
-
Canvas: This often means a larger workspace where you might combine or layer images.
For uploading your hand-drawn image to make it look realistic:
-
Start in the "Image" tab under "Yours to create."
This is your hub for image uploads and generation. Click here. -
Upload Your Hand-Drawn Image:
Look for an “Upload” button or a plus icon somewhere on the Image tab page. Upload your scanned or photographed hand-drawing here. -
Create/Edit Using the Uploaded Image:
Once uploaded, you can use it as input for generating a realistic version by typing your prompt and choosing an editing or generation option, likely linked on this page or accessible after uploading.
The "Draw" page is for creating new artworks from scratch, not usually for uploading and editing existing images.
If you want to merge parts or edit parts of images, the Canvas tab might be where you can do layering or composite work, but it's probably best to start in the Image tab for your hand-drawn uploads.
Does that help? If you want, I can guide you step-by-step as you explore the Image tab now. Feel free to share what buttons or options you see after you click on it!
SO I've tried to generate the image I added, But the ai is not using the image to take the drawing and make it look realistic as that's what I'm asking for. Is there something I'm doing wrong? I upload the image up in the prompt box. I click on the image to highlight it. Then I add the prompt instructions. Its creating an image but doesn't seem to be using or modifying the drawing that I upload. Which is the point of what I'm trying to do. You understand?
Yes, I understand perfectly! You want Leonardo.ai to use your uploaded drawing as the base to transform it into a realistic image, but right now it seems to be generating new images without properly referencing or modifying your hand-drawn input.
Here’s what might be happening and how to fix it:
-
Make sure you are using an image-to-image or editing mode, not just text-to-image generation.
- If you just type a prompt and upload an image but don’t enter an image-to-image or edit mode, the AI might ignore the image as reference and just create something fresh from the prompt.
-
Use an Omni Edit Model like Nano Banana / Nano Banana Pro.
- These models support advanced image-to-image edits and tend to follow your base image much better.
- When selecting the model, avoid basic text-only or legacy models that don’t do strong image editing.
-
Check if there is a slider or setting for "image strength" or "prompt influence" during generation.
- If the "prompt influence" is too high and "image strength" too low, the AI may prioritize the prompt over your drawing. Try increasing the weight on the image or lowering prompt influence if possible.
-
Shape Masking (Optional but Helpful):
- If you want edits in just certain areas, use a shape or mask to guide the AI. But even without masking, image-to-image mode should preserve your drawing’s form.
-
How to Upload Image Correctly for Editing:
- Instead of just uploading inside the prompt box, look for an upload button or drag-and-drop area specifically for the image input.
- Then confirm you are in 'edit' or 'image-to-image' mode before generating.
If you want, I can help you step-by-step based on what you see, like which model you’re using or what buttons are available around your prompt and image upload.
Would you like to describe your current setup (model selected, buttons/options you see) so I can guide you precisely?
How do I use an image to image or editinig mode? That is not an option on my interface
Thanks for letting me know! Since you don’t see a labeled “image-to-image” or “editing” mode option in your interface, here’s how you can typically access those features in Leonardo.ai based on the current UI setup you described:
-
Start in the “Image” tab under “Yours to create”:
This is where you should upload your base image (like your hand-drawn sketch or sword hilt). -
After Uploading, Look for a Button or Option to “Edit” or “Modify” the Uploaded Image:
- Sometimes, once you click on your uploaded image thumbnail or open it, you get options like “Edit,” “Inpaint,” or “Use as base for generation.”
- This effectively puts you in an image-to-image or editing mode even if it’s not called that explicitly.
-
Model Selection:
When you initiate a generation or edit, look for a model drop-down or selector near the prompt box and choose an omni edit model such as Nano Banana or Nano Banana Pro—these models best support editing and image-to-image tasks. -
Prompt Box:
Once you are “editing” the uploaded image, you can enter your prompt describing how you want it modified.- The AI then uses your uploaded image as the base and applies your prompt instructions to it.
-
If You Don’t See “Edit” After Uploading:
Some versions of Leonardo.ai have a separate “Edit” or “Inpaint” tab or button you need to click before uploading an image for modification.- Check for any other tabs or menu items beside “Image” and “Draw” that might indicate editing tools.
- Sometimes you can also right-click your uploaded image (or click on ellipsis ⋯) to see more actions.
-
Legacy Tab for Models:
If you see a “Legacy” tab in the model selector area, try switching to it. The legacy presets typically include models configured for image-to-image editing workflows.
If possible, you could describe or share a screenshot of the buttons or menus you see once you open your uploaded image. That way I can pinpoint exactly what to click.
Would you like help walking through that? Just let me know what you see after you upload an image!
I don't know what I'm doing wrong. This program seems to not follow explicit directions. It seems to follow vauge directions and generates things on its own and not what you tell it to. All of the instructions you are giving seems to be different than what the interface has. Which I don't understand. Why is the interface you see different from the interface that i see? Even going on youtube for a tutorial has a different interface that what I have.
Here is my interface
Here is what it looks like when I click on the image tab
Thanks so much for sharing your screenshots! This really helps me understand what you’re seeing.
From your images, you’re in the main creation interface of Leonardo.ai where you can type prompts, pick models (like Nano Banana Pro), and start generating images.
Here are some things to clarify and help you get your uploaded drawing to be transformed realistically:
Why Tutorials May Differ
- Leonardo.ai updates its interface regularly, so older YouTube tutorials may show previous layouts.
- Also, your subscription tier or feature access might influence what options show.
- You’re seeing the current official UI, so we’ll work with this.
Using Your Uploaded Image to Guide Generation (Image-to-Image)
Based on your UI, here’s what to do:
-
Upload Your Drawing Image Explicitly
- On this Creation page, look just below the prompt box where you see the tabs: “Image | Video | Flow State | Blueprints”
- Click the “Image” tab here (this is the mode to use your uploaded image as a base).
-
Upload the Image
- In the “Image” tab mode, you should see an option or button to upload your image—often a plus sign “+” or a button labeled “Upload”
- Once uploaded, this image will be used as the starting point for generation.
-
Write Your Prompt
- After uploading your hand-drawn image, write your instructions in the prompt box, for example:
“Make this hand-drawn character look realistic with soft skin textures and natural lighting.”
- After uploading your hand-drawn image, write your instructions in the prompt box, for example:
-
Choose Nano Banana Pro Model
- On the left side, ensure you have Nano Banana Pro selected as the model. This model supports image-guided editing best.
-
Adjust Image-to-Prompt Influence (If Available)
- Check for a slider labeled “Prompt Strength” or “Image Influence.”
- Lowering prompt influence and raising image influence helps preserve your drawing's details while applying your prompt style.
-
Generate
- Click the "Generate" button and wait for iterations.
Summary
- You are already in the right Creation interface.
- Be sure to click on the Image tab (not just prompt text tab) under the prompt box to upload your image.
- Use an omni edit model like Nano Banana Pro.
- Write your prompt describing how to transform the image.
- Adjust prompt/image influence sliders if you see them.
If You Cannot See the Upload Button in the "Image" Tab…
- Try scrolling or looking carefully near the prompt box area for a subtle upload icon or drag-and-drop area.
- Sometimes the upload area is small or easy to overlook.
Feel free to check if you see the image upload button in the Image tab area. Let me know what you find!
I’m here to help you step-by-step with exactly what you see on your screen. Just describe or share another photo if needed!
cant generate pics on my pc... why? @manic root
It looks like you're seeing some kind of error or blank placeholder images with the number "110.22" on them when trying to generate pictures on your PC. This usually points to an issue on the platform or local environment side. Here are some common reasons and fixes:
-
Internet Connection or Network Issues:
Make sure your internet connection is stable. Sometimes packet loss or intermittent connectivity can cause generation failures. -
Browser Cache or Cookies:
Try clearing your browser cache and cookies. Sometimes old or corrupted data causes issues in loading generated images. -
Browser Compatibility:
Try using a different browser, or make sure your current browser is updated to the latest version. Sometimes certain browsers or old versions have trouble with the web app. -
Server or Platform Issues:
There might be temporary server-side problems on Lucid Origin’s side causing generation to fail or images not to load. Waiting a bit and trying again later can help. -
Account or Quota Limits:
Make sure your account hasn’t hit any daily or monthly generation limits or token restrictions. -
Ad-Blockers or Extensions:
Temporarily disable any ad-blockers or browser extensions that might interfere with loading the images.
If none of these work, you can also reach out directly to Lucid Origin support or email support@leonardo.ai for help.
Let me know if you want guidance on any of these troubleshooting steps!
@copper warren cant generate any pics at all on my pc… i have clean my cookies + cache
okay, you're having like an endless generation timer?
like you don't get an error, the gens just never complete?
yeah, only on my pc version not on leonardo app on my iphone 😔
can u looking into the problem pls @copper warren
yes, I have written to see if anyone has an answer about what's going on. there had been an issue like this a day or two ago, but it was supposed to be resolved. thank you for reporting it!
Morning
Gibse, the model I used is GPT (image 2), and this is the prompt. I hope it helps.
Generar una imagen imaginaria de "yo", estilo de caligrafía coreana, carácter vívido, expresión emocional natural, construcción de medio cuerpo, postura dinámica, forjado preciso de los detalles de la ropa, estilo elegante pintado a mano, bote de tinta, reparación de líneas casual, mezcla de colores pastel y tinta, textura de borrador de cómic, fondo blanco simple, adhesión de elementos de Zhouyuanfu, fuerte sentido del lenguaje, altos detalles, estilo real de alta calidad, imitar poseimagenly fondo
It looks like you have a very detailed prompt or specification describing an anime-style illustration of a character named Karen Lightbloom in a wholesome, warm cafe setting, with specific qualities and positive/negative attributes for generating an image.
If you want to use this prompt to generate artwork, please note that I cannot create images directly here. Instead, you can take this detailed prompt and use it on Leonardo.AI's platform at https://app.leonardo.ai to create your illustration.
To summarize, your prompt is well-structured with:
-
Common settings: masterpiece quality, anime style, vertical 9:16 Instagram carousel format, clean storytelling, warm cafe vibe, blue and white color theme, no text or logos.
-
Positive details: a 1-girl portrait of Karen Lightbloom with silver twin-tails with blue tips, bright sky blue eyes, soft expressions of blissful happiness, elegant race ambassador outfit in blue and white, sitting at a bright cafe table with a cute iced latte and dessert, subtle props like race ticket and towel, soft daylight with warm highlights, pale blue bokeh background.
-
Negative details to avoid: low quality, bad anatomy or distorted features, extra limbs, messy or cluttered props, dark or harsh lighting, any text or watermarks.
If you want advice on how to improve or tweak the prompt further, or help structuring it for best results on Leonardo.AI, feel free to ask!
ty🙏
hi guys ,can someone help me ,why do i keep getting the little dots all over my images ? if anyone has the right prompting to stop this .. well help a brother out 😉
hey! which quality are you using?
perpaps you are on low?
its chat gpt
yeah, but which quality, low, medium or high
well im using chat gpt ,not leo to make the image
oh I see
so wrong place to ask hehe
if you have Leo questions we can certainly help
i do have leo ,but chat gpt is free
i figured it out ,it needed an upscale and to have the fix AI artefacts on..thanks for the help 🖕
I'm glad you fixed it!
Using the same prompt and settings the results have changed . Was an update made to phoenix? If so is there a way to go back to before the update because its no longer matching the needs of my workflow with the change
Hey! Are you using the API or webapp?
i think the web app i go through google to use it
Okay. Are you able to share links to generations made before and after you noticed a change? this can be used to track down the issue.
/signup
If you want to sign up for Leonardo.Ai, you can visit their website at https://app.leonardo.ai and look for the sign-up or register option. There, you'll be able to create an account by providing the necessary details.
If you encounter any trouble during sign-up, you can reach out to their support at support@leonardo.ai for assistance. Let me know if you need help with anything else!
Hey just wanted to let you know, they’re working on a fix for this issue. Thank you so much for reporting
yea i just saw it said an issue with the generator ty for getting back
Kinda late but the noise patterns are a known problem with GPT Img 2, you can't really prompt your way out of it. Upscaling with Leo is a good solution
thanks ,yeah the upscaler worked 🖖
Could someone please tell me why Leonardo can create a woman in a bikini one time and then 5 minutes later turn around and block my creation of the same image.
Hello, Seeddream is more permissive.
Is the site down?
I don't have real-time status information about the site. If you are experiencing issues, you might try the usual steps like refreshing the page, clearing your browser's cache and cookies, or trying a different browser. If the problem persists, you can contact Leonardo.Ai support directly at support@leonardo.ai for help.
It doesn't work for me either.
Application error: a client-side exception has occurred (see the browser console for more information).
same
So it seems that some of us can't get access right now
Yes, something is wrong at Leonardo.ai at the moment. We've informed the people we have access to. we hope it will be back to normal asap.. a little patience please. 👍
thank you
while you're here, why not check out today's daily challenge? #1502097705764130897
or browse our new #1499168438487224402 . its full of all kinds of helpful guides
it may require you to finish your onboarding to the discord server, to get access to the full server #🗺️│getting-started-guide
it miiight be back now.. it works on my end again at least.
Hello,
A service provider for the website is having issues at the moment.
For now, it is out of hand from Leonardo.Ai but they have been informed this provider is doing all it can to restore service as quickly as possible
It sounds like you’re reaching out with a clear and professional pitch as a developer specializing in modern web architectures and advanced AI features. While I can’t connect you directly with recruiters or companies, here are some suggestions to find people or companies looking for developers like you:
-
Post Your Pitch on LinkedIn and Twitter: Use relevant hashtags like #HiringDevelopers, #AIEngineer, #FullStackDeveloper, or #NextJSDeveloper to increase visibility.
-
Join Developer and AI Communities: Places like GitHub discussions, AI-focused Discord servers, Reddit communities (e.g., r/forhire, r/remotejs), and specialized Slack groups often have job postings or collaboration requests.
-
Look on Job Boards: Platforms such as AngelList, Stack Overflow Jobs, We Work Remotely, and ai-jobs.net frequently list positions for developers with AI and web stacks like Next.js and TypeScript.
-
Freelance Platforms: If you’re open to freelance or contract work, sites like Upwork, Toptal, and Freelancer can help you find serious projects and clients.
If you want, I can help you draft messages or proposals tailored for these platforms to maximize your chances of landing projects. Just let me know!
Why have elements been disabled on some models?
u need to @manic root
Yeah I forgot the @ lmfao
@manic root why have elements been disabled on certain models? Like Kino and Albedo?
Some models like Kino and Albedo have certain elements disabled due to technical or licensing reasons that affect their capabilities. This can be related to restrictions on advanced editing features such as shape-guided edits or other specialized functions. Disabling these elements helps maintain performance, stability, or compliance with usage policies for those models. If you want more details or think this might be an issue affecting your work, you can reach out directly to Leonardo.Ai support at support@leonardo.ai for clarification.
Hello! It looks like your message didn't include a question or any content. How can I assist you today with Leonardo.Ai or anything related?
It seems like you're asking about adding or referencing a server link in your bio. Could you please clarify which platform or context you mean? For example, is this for Discord, social media profiles like Instagram, Twitter, TikTok, or something else?
If you want to share a server link (such as a Discord invite) in your bio on platforms that allow links, you typically just paste the invitation URL directly into your bio or profile section. On some platforms, you might need to shorten or customize the link for cleaner presentation.
Let me know the exact platform or context, and I can give you detailed instructions!
Hey, is anyone else having issues with FLUX Dev right now?
Since yesterday evening (around 10 PM for me), FLUX Dev basically stopped working completely. It doesn’t matter if I use an Element or not even super basic prompts just stay stuck forever or fail after a long time. At first I thought maybe I accidentally used some trigger words in a prompt or something, but now even simple prompts like “woman sitting in cafe” won’t generate. Other models like Nano Banana 2 work fine for me, so it seems specifically related to FLUX Dev. I also get my credits refunded really delayed, and the error always says: “Failed Generation We couldn't create your images. Your tokens have been credited back to your account.” Just trying to figure out if this is a wider FLUX issue or something weird happening on my account.
I already tried everything
Hey! can you tell me anything more about the issue? Are you using custom elements or the ones that ship with Leo? and what browser are you using?
Hey! I’m using custom trained Elements, not the default Leonardo ones. FLUX Dev suddenly stopped working for me since yesterday evening. Even very simple prompts fail now, with or without Elements enabled. I’m using Safarai, Leonardo app and Google Chrome. The generations either stay stuck forever or end with: “Failed Generation
We couldn’t create your images. Your tokens have been credited back to your account.”
доброго дня умене чомусь програма не генерує відео
На жаль, моя функціональність не передбачає можливості створювати відео. Якщо ви хочете згенерувати відео, рекомендую скористатися спеціалізованими сервісами або платформами для створення відеоконтенту. Якщо у вас є питання, пов’язані із Leonardo.Ai або іншими інструментами, я з радістю допомогу!
непрацює програма
Я розумію, що у вас не працює програма. Якщо це стосується Leonardo.Ai, спробуйте очистити кеш і куки у вашому браузері, а потім оновити сторінку та залогінитися знову. Якщо проблема залишиться, ви можете звернутися в службу підтримки за адресою support@leonardo.ai для допомоги. Якщо хочете, детальніше опишіть проблему — можливо, я зможу допомогти.
Hello
I have query regarding a plan of leonardo does all the plans include every models?
also what does personall collection means in the plan? does it mean that we can generate unlimited pictures or what?
Hello! I’d be happy to clarify for you about Leonardo.Ai plans and their features.
-
Do all plans include every model?
Not exactly. Leonardo.Ai offers various subscription plans—Free, Essential, Premium, and Ultimate—and some advanced or premium models (like certain SDXL models or special curated presets) may only be fully accessible on higher-tier paid plans. You can also access additional models through the legacy presets tab in the standard UI. So, while the Free tier and lower plans provide many models, some exclusive or advanced models require a paid subscription. -
What does "Personal Collection" mean in a plan? Does it allow unlimited image generation?
The “Personal Collection” is essentially your private workspace within Leonardo.Ai to save, organize, and manage your generated images securely. It is not directly about unlimited image generation. The ability to generate images depends on your plan’s generation limits or token allowances. However, all paid plans offer an option for your generations to be private (which ties into having your personal collection). So, “Personal Collection” refers more to management and privacy of your content than generation limits.
If you have follow-up questions or want details about a specific plan, feel free to ask!
oh ok and what about tokens is is there a plan in which i can unlimited generations without worrying about tokens and statistically in Essential,Premium and Ultimate how many generation I can im aware that different models use different amount of tokens so that will exhaust credits a lot so which plan is good enough for me If i want to generate bunch of image in a single day for a month
Great questions about tokens and generation limits!
-
Unlimited generations without worrying about tokens?
Currently, Leonardo.Ai plans do not offer truly unlimited generations. Every plan has a certain number of tokens or generation credits that you use up depending on the model and image complexity. Higher-tier plans give you significantly more tokens, making it easier to generate many images, but none officially offer infinite tokens. -
Token/generation limits in Essential, Premium, and Ultimate plans:
Each plan has a different token allowance—for example (note: these numbers are approximate and for illustration; please check the official Leonardo.Ai site or your account for exact numbers):- Essential might offer a moderate number of tokens suitable for regular hobbyist use.
- Premium increases your tokens considerably, good for more frequent or professional-level usage.
- Ultimate offers the highest token allotment, supporting very heavy usage like batch or commercial projects.
Because different models consume tokens at different rates — some advanced models eat tokens faster — you should consider the type of models you want to use most often.
-
Which plan is good if you want to generate a bunch of images daily for a month?
If you want to generate a significant volume every day, the Premium or Ultimate plans are more suitable. They offer higher monthly token allowances and access to premium models, which helps you avoid running out too quickly. Essential can work if your usage is moderate but may limit volume if you’re generating many images daily.
If you want, I can help you find the exact token limits per plan based on the latest official info, or help you calculate estimated usage based on your typical image generation needs. Just let me know!
does tokens get added after a renwing a plan? like i had left 10k tokens left from previous month does it get added in next month?
depending on your plan, there is a rollover bank of tokens that were left unused from the previous month. you can see the size of your rollover bank by mousing over your token counter