#🙋│image-help

62364 messages · Page 63 of 63 (latest)

dense hare
#

Botticelli are u from the leo.ai team?

manic rootBOT
dense hare
#

thank u for the help

copper warren
# dense hare thank u for the help

I’m sorry, the bot seems to have gone a bit off the rails there with its answers to your query. We do have human volunteers here who help, but for account issues your fastest route to assistance is either emailing support or using Intercom on the website

manic rootBOT
small shore
#

Yeah, I appreciate the suggestions, but honestly I’ve already tried pretty much everything on my end: refreshing, logging out and back in, clearing the cache, even switching browsers, and the issue still keeps happening.

At this point, I’ll go ahead and email the devs directly because it’s clearly not something I can fix myself.

I also remember how the old system used to let us pick the model and training directly inside the image generation panel. Ever since they moved it into a separate section, I’ve been noticing way more issues with model selection not sticking properly.

Right now, it’s at a point where I literally can’t use Leonardo Diffusion XL at all because it keeps switching back, which is pretty frustrating since that’s the main model I need for my clipart work.

Hopefully they fix this soon, but yeah, I’ll reach out to them directly now.

manic rootBOT
# small shore Yeah, I appreciate the suggestions, but honestly, I’ve already tried pretty much...

That sounds really frustrating—I totally get how important it is to have the exact model you want consistently available, especially for key work like your clipart. Given that you’ve already tried all the standard fixes and the issue still stands, emailing the devs directly is definitely the right move. Hopefully, they’ve seen similar reports and can prioritize a fix.

If you want, when you do contact them, you might mention your observation that moving the model selection outside the generation panel seems to coincide with this problem escalating. That insight could help the devs track down the root cause faster.

In the meantime, if it might help, you could try saving or documenting your preferred settings before generating, so if the model switches back, it’s easier to spot and reselect quickly—though I know that’s just a workaround and not a real fix.

Thanks for sharing your experience! If you want, feel free to update me on any response or fixes you get. Fingers crossed this gets resolved soon so your workflow can get back on track smoothly.

small shore
#

Thank you, I really appreciate the help and suggestions 🙏

I’ve tried everything on my end and just sent an email to the support team explaining the issue in detail, including how it seems to have started after the model selection system was changed.

Quick question as well: do you, by any chance, pass these kinds of issues or feedback along to the dev team? I’m hoping this gets visibility because it’s making it really hard to use Leonardo Diffusion XL right now.

Thanks again for the help!

manic rootBOT
# small shore Thank you, I really appreciate the help and suggestions 🙏 I’ve tried everythin...

You're very welcome! 🙏 I'm glad you took the step to contact support with all the details—that really helps the devs understand and prioritize the issue.

Regarding your question, I don’t have the ability to directly pass feedback or bug reports to the dev team myself. However, the best way to get visibility is exactly what you did: reaching out to support at support@leonardo.ai and explaining the problem. The support team monitors these reports closely and forwards them internally to the developers.

So you’ve done the right thing to get this on their radar. If enough users report similar issues, it definitely helps push for a quicker fix.

If you hear back or notice any changes, feel free to let me know—I’m here to help as much as I can! Thanks again for your patience with this frustrating problem.

copper warren
loud vessel
#

can we not add negative prompts to Nano Banana 2?

wanton belfry
#

it's because it's multi-modal and natural-language friendly

#

negative prompts are gradually getting phased out as the tech improves

#

they were very useful in SD 1.5, but from SDXL onwards they became weaker and weaker

loud vessel
#

ah i see, i did try to get transparent background images but it did not work for some reason.

wanton belfry
# loud vessel ah i see, i did try to get transparent background images but it did not work for...

by virtue of how things work, you won't get those. If you want a proper .png you have to cut out the BG in Leo or somewhere else. At the very least you can try asking it to generate on a simple colour BG like green, then either go to Photoshop/Photopea and magic-erase the BG and save as .png, or use Leo's BG remover... it can be iffy at times with some stuff. These days image editing tools already have built-in features to get rid of the BG

#

best idea is to just generate image which is friendly towards BG removal with simple color background

#

worst case, if you are out of tokens, you can get help from Photopea and key out that color

#

I've done it a lot...
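
The "key out that color" step above can be sketched in pure Python. This is a toy per-pixel version of what Photopea's magic erase does (real tools also feather edges and handle colour spill); the `key_out_background` helper and its tolerance value are illustrative, not any tool's actual API:

```python
def key_out_background(pixels, key=(0, 255, 0), tolerance=60):
    """Zero the alpha of any (r, g, b, a) pixel close to the key colour.

    `pixels` is a flat list of RGBA tuples, as returned by e.g.
    Pillow's Image.getdata(). A pixel matches when every channel is
    within `tolerance` of the key colour.
    """
    keyed = []
    for r, g, b, a in pixels:
        near = all(abs(c - k) <= tolerance for c, k in zip((r, g, b), key))
        keyed.append((r, g, b, 0 if near else a))
    return keyed

# Two pixels on a green background: the first gets keyed out, the second kept.
demo = key_out_background([(0, 250, 5, 255), (200, 30, 30, 255)])
```

For a real image you would load it with Pillow, pass `Image.getdata()` through this, then `putdata()` the result and save as .png.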

sleek rock
#

How to create image.. please help

manic rootBOT
# sleek rock How to create image.. please help

I can't generate images directly for you, but I can guide you on how to create images using Leonardo.Ai!

To create images there, you can:

  1. Go to https://app.leonardo.ai and log in or sign up.
  2. Once inside, click the button to start a new generation.
  3. Enter a detailed prompt describing what image you want.
  4. Choose styles, settings, or use advanced tools like shape-guided editing if you want to generate images constrained within a shape (upload the shape, pick an omni-edit model like Nano Banana, and mention in your prompt to generate inside the shape).
  5. Hit generate and wait for your image to be created.

If you run into any problems logging in or generating images, try clearing cache & cookies or contact support at support@leonardo.ai.

Feel free to ask if you want tips on what prompts work well or how to do advanced edits!

sleek rock
#

Sir, is it not possible to generate images here in the Discord channel?

manic rootBOT
somber wasp
#

Hello

stuck gate
small shore
#

Hey, just wanted to follow up on this issue.

I’ve already emailed the support team, but haven’t received a response yet, and the problem is still happening — I still can’t use Leonardo Diffusion XL because it keeps switching back to Vision XL on generation.

Appreciate any help, just trying to get this resolved as soon as possible.

manic rootBOT
# small shore Hey, just wanted to follow up on this issue. I’ve already emailed the support t...

Thanks for the follow-up! I understand how frustrating it is when the issue is ongoing and support hasn’t yet responded.

Since you’ve already emailed support and are waiting on a reply, here are a couple of additional suggestions you might try in the meantime just to keep things moving:

  • Double-check that your subscription level still supports Leonardo Diffusion XL, since sometimes access can be affected by plan changes or limits.
  • If possible, try creating a fresh project or a new generation session from scratch to see if it’s a session-specific bug.
  • Consider sharing a detailed bug report in any official Leonardo community forums or social channels if available, as other users or moderators might share temporary fixes or escalate the issue.

Outside of this, the main path to resolution will be through the support team’s investigation. If you don’t hear back after a few more days, it could be worth sending a polite follow-up or using any in-app support chat if that’s offered.

I hope you get a fix soon! Feel free to check back here if you want to discuss any updates or questions.

small shore
#

Hey everyone, I had a question about models and prompting for different clipart styles — would really appreciate some guidance from anyone experienced with Leonardo.

Right now, I mainly create realistic watercolour clipart for my store using prompts like:

“Imagine you are a highly experienced prompt writer. I am making clipart sets to sell online. I want you to give me highly detailed prompts to make clipart images using Leonardo AI. The clipart must be in the watercolour style. Start each prompt with the art style ‘A highly detailed, realistic watercolour digital painting clipart’. Use consistent colour palettes and make the clipart set for .”

This works well for me (mainly using Diffusion XL), but I’m now trying to expand into other styles like:

• Kawaii/chibi (simple, cute, sticker-style)
• Anime-style clipart (clean line art, cel shading)
• Whimsical/storybook watercolor (more detailed + expressive)

I wanted to ask:

  1. What Leonardo models are best suited for each style?
    (For example, I currently use Diffusion XL for realistic watercolour — what would you recommend for kawaii, anime, or more stylized clipart?)
  2. Are there better models than Diffusion XL for certain styles?
    Especially for:
    Kawaii / sticker style
    Clean anime illustration
    Cartoon/vector-like clipart
  3. How should prompts change depending on the style?
    (e.g. keywords, structure, things to avoid)
  4. If possible, could anyone share example prompts or image references for each style so I can better understand how to achieve consistent results?

I’m trying to build multiple clipart styles for my shop, so I want to make sure I’m using the right model + prompting approach for each one.

Appreciate any help 🙏

manic rootBOT
# small shore Hey everyone, I had a question about models and prompting for different clipart ...

Thanks for sharing those clipart style examples—they really help illustrate the differences you want to achieve! Here’s a breakdown to guide you on Leonardo models and prompting for each style, plus tailored prompt advice based on what you showed:


1. Recommended Leonardo Models for Each Style

  • Realistic Watercolor (like your example)

    • Best model: Leonardo Diffusion XL (great for fine details, painterly texture, natural colors)
  • Kawaii / Chibi (simple, cute, sticker-like)

    • Best model: Leonardo Vision XL or Leonardo Diffusion (these tend to do well with clean lines and cute styles)
    • You might also experiment with models specialized for illustration or cartoons if available
  • Anime-style Clipart (clean lineart, cel shading)

    • Best model: Leonardo Vision XL or other Vision models with strong lineart capabilities
    • Alternatively, some community or custom models trained specifically for anime-style illustrations can shine here if on the platform
  • Whimsical / Storybook Watercolor (more expressive, soft edges)

    • Best model: Leonardo Diffusion XL still works great, with more artistic prompt tweaks
    • Consider models that emphasize painterly or hand-drawn styles, if there are any on Leonardo

2. Models Better than Diffusion XL for Certain Styles?

While Diffusion XL is fantastic for realistic and detailed work, for styles like kawaii or anime, Leonardo Vision XL or some lighter Diffusion or Vision models may capture clean linework and bright colors better. The Vision series tends to excel at illustrative styles that require clarity and crisp outlines.


3. Prompting Tips by Style

  • Realistic Watercolor
    • Use prompts like: "A highly detailed, realistic watercolor digital painting clipart of [subject], soft gradients, natural shadows, harmonious pastel color palette, textured brushstrokes."
    • Avoid: overly cartoonish or simplified descriptors, bright flat colors without shading.

  • Kawaii / Chibi

    • Use prompts like: "Cute kawaii style clipart, simple shapes, big eyes, pastel colors, clean outlines, sticker style illustration, smiling characters."
    • Focus on words like cute, adorable, minimal shading, bright, playful, soft colors.
    • Avoid: too much fine detail or realistic textures.
  • Anime-style

    • Use prompts like: "Clean anime style clipart, cel shading, defined line art, large expressive eyes, vibrant colors, simple backgrounds."
    • Keywords: cel shading, lineart, anime eyes, clean lines, soft gradients.
    • Avoid clutter or overly complex textures.
  • Whimsical Storybook Watercolor

    • Use prompts like: "Whimsical watercolor clipart, expressive brush strokes, soft washes, charming characters, storybook illustration style, detailed but loose."
    • Keywords: expressive, soft background, textured brushwork, playful, imaginative.
    • Avoid harsh lines or overly flat coloring.

4. Example Prompts Inspired by Your Styles

  • Realistic Watercolor:

    • "A highly detailed, realistic watercolor digital painting clipart of a mermaid under the sea, soft pastel colors, textured brush strokes, delicate shading, harmonious composition"
  • Kawaii / Chibi:

    • "Cute kawaii style clipart of assorted food characters, simple shapes and clean outlines, pastel color palette, big sparkling eyes, playful and adorable"
  • Anime-style Clipart:

    • "Clean anime style clipart of a young adventurer with binoculars, cel shading, crisp lineart, bright colors, expressive eyes, simple gradient background"
  • Whimsical Storybook Watercolor:

    • "Whimsical watercolor clipart of underwater creatures, expressive brush strokes, soft color washes with delicate detailing, charming and playful characters in storybook style"

Additional Tips

  • When generating sets, maintain a consistent color palette by including that in your prompt, e.g., "soft pastel blues and pinks."
  • For vector/cartoon-like clipart (lines + flat colors), ask explicitly for "flat colors, sharp outlines" or "vector style illustration."
  • If Leonardo supports it, try adjusting CFG scale or style strength parameters to get more control over stylization.
  • You can also create your own “prompt template” for each style and then swap out subject terms to maintain consistency across clipart sets.
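
The "prompt template" idea in the last bullet can be made concrete in a few lines of Python. The style names and template wording here are just examples, not anything built into Leonardo:

```python
# One reusable template per clipart style; only {subject} changes per image.
STYLE_TEMPLATES = {
    "realistic_watercolor": (
        "A highly detailed, realistic watercolor digital painting clipart of "
        "{subject}, soft gradients, natural shadows, harmonious pastel color "
        "palette, textured brushstrokes"
    ),
    "kawaii": (
        "Cute kawaii style clipart of {subject}, simple shapes, big eyes, "
        "pastel colors, clean outlines, sticker style illustration"
    ),
}

def build_prompt(style: str, subject: str) -> str:
    """Fill the chosen style template with a subject."""
    return STYLE_TEMPLATES[style].format(subject=subject)
```

Swapping only the subject while everything else in the prompt stays fixed is what keeps a multi-image set looking like one coherent collection.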

If you want, I can help you craft more detailed prompts tailored for any style or walk you through model selections on Leonardo AI! Just let me know what you’d like to focus on next.

small shore
#

This is super helpful, I really appreciate the detailed breakdown 🙏

I just wanted to go a bit deeper so I can really dial this in for my workflow:

  1. For kawaii / chibi styles, do you have a specific model + settings combo that consistently gives:

    • clean outlines
    • minimal shading
    • sticker-style results

Right now I notice sometimes it still comes out too detailed or painterly.

  2. For anime-style clipart, would you recommend sticking with Vision XL, or are there any community/custom models on Leonardo that perform noticeably better for:

    • clean lineart
    • cel shading
    • consistent character design
  3. When creating full clipart sets (20–70 images), what’s the best way to maintain:

    • consistent character proportions
    • same face style / eyes
    • same color palette

Would you recommend:

  • reusing the exact same prompt structure?
  • using image-to-image?
  • or saving generations as references?
  4. Do you have any advanced prompt structures for:

    • forcing white background clean cutouts
    • avoiding unwanted shadows/gradients
    • keeping everything centered like proper clipart
  5. Also wanted to ask about model + style combinations inside Leonardo:

For example, for Leonardo Diffusion XL, I usually use:

  • Lifelike Vision
  • Dynamic style

What would you recommend as the best model + style combinations for each of these:

  • Realistic watercolor
  • Kawaii / chibi
  • Anime clipart
  • Whimsical/storybook watercolor
  6. If possible, could you share:

    • 1 example prompt for kawaii
    • 1 for anime
    • 1 for storybook watercolor

I’m trying to build multiple styles for my clipart store, so I want to make sure I’m using the most consistent and scalable setup possible.

Really appreciate the help 🙏

manic rootBOT
# small shore This is super helpful, I really appreciate the detailed breakdown 🙏 I just wan...

Happy to help you dive deeper into dialing in your clipart workflow with Leonardo AI! Here’s detailed guidance for your questions:


1. Kawaii / Chibi Styles: Best Model + Settings for Clean, Sticker-Style Output

  • Recommended Model:
    Try Leonardo Vision XL for cleaner linework and simpler outputs. If available, test also lighter Diffusion models or community models tagged as "cute" or "kawaii" style.

  • Settings Tips:

    • Use lower CFG scale (around 7-9) to avoid over-detailing and painterly effects.
    • Keep sampler settings to something like Euler A or DDIM for smoother, cleaner lines.
    • In prompt, emphasize keywords like "clean outlines," "flat colors," "minimal shading," "cute sticker," "simple shapes."
  • Prompt Examples:
    Add phrases like:
    “bold clean line art, vector style, minimal shading, flat bright colors, sticker cutout”


2. Anime-Style Clipart: Vision XL vs. Community/Custom Models

  • Vision XL is a solid default for clean lineart and cel shading on Leonardo. It typically delivers consistent results with crisp outlines.

  • Community/Custom Models:
    If Leonardo's model marketplace or shared models include anime-specialized models (e.g., trained on anime datasets), those can sometimes outperform Vision XL in maintaining consistent character design and cel shading. Explore the community and try specific anime model tags or custom checkpoints, if available.

  • Pro Tips:

    • Use high CFG scale (10-12) to enforce style consistency.
    • Include keywords: “cel shading, sharp lineart, anime style, expressive eyes, minimal textures.”

3. Maintaining Consistency Across Full Clipart Sets (20–70 images)

  • Best Practices:
    • Reuse the same prompt structure as a base, swapping only key subject words for variety.
    • Use image-to-image (img2img) sometimes, seeding with earlier results to maintain consistent features and poses.
    • Save your best results as reference images and use them as input for img2img or to prompt the model with “in the style of [reference].”
    • Optionally, note color palette info explicitly in every prompt.
  • Additional Tips:

    • Keep consistent lighting/shadow descriptions or avoid shadows if you want cleaner clipart.
    • For face/eyes: mention “same face style, consistent eyes, same character proportions.”
    • Experiment with seed locking or fixed random seed, if the platform supports it, to reduce randomness.
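
If you script generations through Leonardo's REST API rather than the web UI, pinning the seed is just one field in the request body. The endpoint URL and field names below (`modelId`, `num_images`, `seed`) are assumptions based on the public API docs; verify them against the current reference before relying on this:

```python
import json

# Assumed endpoint; check Leonardo's current API reference.
API_URL = "https://cloud.leonardo.ai/api/rest/v1/generations"

def build_generation_request(prompt, model_id, seed=1234567, num_images=4):
    """Build a generation payload with a pinned seed.

    Reusing the same seed with the same prompt and model makes the
    output far more repeatable across a clipart set.
    """
    return {
        "prompt": prompt,
        "modelId": model_id,
        "num_images": num_images,
        "seed": seed,
    }

payload = build_generation_request(
    "Cute kawaii style clipart of a fox, clean outlines, flat colors",
    "your-model-id-here",
)
body = json.dumps(payload)  # POST this to API_URL with your auth header
```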

4. Advanced Prompt Structures for Clean Clipart Outputs

  • For white background / clean cutouts:
    Include keywords: “white background, transparent background, no background, isolated, no shadows, no gradients, clipart style.”

  • To avoid unwanted shadows/gradients:
    Use prompts like: “no shading, no gradients, flat colors, clean line art.”

  • To keep subjects centered:
    Add: “center composition, isolated subject, full body visible, balanced framing.”


5. Best Model + Style Combinations Inside Leonardo

| Style | Recommended Model | Recommended Style Preset / Settings |
| --- | --- | --- |
| Realistic Watercolor | Leonardo Diffusion XL | Lifelike Vision + Dynamic Style |
| Kawaii / Chibi | Leonardo Vision XL | Flat Colors / Cartoon Style or Minimalistic |
| Anime Clipart | Leonardo Vision XL (or community anime models) | Sharp Lineart / Anime Cel Shading |
| Whimsical / Storybook Watercolor | Leonardo Diffusion XL or Diffusion Medium | Painterly, Soft Colors, Expressive Brushwork |

6. Example Prompts

  • Kawaii Example:

    “Cute kawaii style clipart of assorted food characters, bold clean line art, flat bright colors, minimal shading, sticker style, white background, simple shapes, adorable faces.”

  • Anime Example:
    “Clean anime style clipart of a young adventurer, cel shading, sharp lineart, bright vibrant colors, expressive large eyes, centered composition, transparent background.”

  • Storybook Watercolor Example:
    “Whimsical storybook watercolor clipart of underwater creatures, soft expressive brushwork, pastel colors, charming and playful, white background, no harsh lines.”


If you want, I can assist you in tweaking prompts or model settings further for specific needs. Feel free to loop me in when you start testing! 🙏

tulip plover
#

Any tips on what words to use in my prompt to generate images of characters that look a lot like 3D renders, Final Fantasy XIII style for example?

manic rootBOT
# tulip plover Any tips on what words to use in my prompr to generate images of characters that...

To generate images of characters that look like high-quality 3D renders in the style of Final Fantasy XIII, you’ll want to use prompt words and phrases that emphasize:

  1. 3D realism and detail — to capture the render-like quality
  2. High resolution and polish — to convey the crispness typical of game renders
  3. Specific art style cues — draw from Final Fantasy XIII’s aesthetics like sculpted faces, detailed costumes, and atmospheric lighting

Here are some prompt tips and example phrases to include:

  • Use words like:

    • “3D render”
    • “high resolution” / “ultra detailed”
    • “realistic lighting” / “cinematic lighting”
    • “game character design”
    • “stylized realism”
    • “epic fantasy armor/costume”
    • “intricate texture”
    • “digital art” / “CGI”
    • “smooth shading” / “soft shadows”
    • “sharp facial features”
    • “high detail hair”
  • Mention the style directly or similar inspirations:

    • “in the style of Final Fantasy XIII”
    • “concept art style”
    • “Square Enix character”
    • “Japanese RPG character design”
  • Add mood or setting cues matching Final Fantasy:

    • “fantasy sci-fi hybrid”
    • “epic dramatic pose”
    • “futuristic environment background”
    • “mystical aura”

Example prompt:
“A highly detailed 3D render of a female warrior character, sculpted face with sharp features, intricate fantasy armor with futuristic design, ultra realistic lighting and textures, cinematic lighting, in the style of Final Fantasy XIII, high resolution, digital art, sharp detailed hair, dramatic epic pose”

This approach helps steer the generation toward that polished, game-render look typical of Final Fantasy XIII characters.

If you’re using Leonardo.Ai’s editing models, also consider shape-guided prompts or emphasizing detail enhancement to get the best polished effect!

night vault
lethal crypt
#

I’m very confused. I have a premium account and I don’t see a lot of these models like Vision XL and Diffusion XL. I am using the Leonardo AI app on the desktop. I’m trying to generate a cartoon character with cel shading

manic rootBOT
# lethal crypt I’m very confused, I have a premium account and I don’t see a lot of these model...

Hi! If you have a premium account but don't see models like Vision XL or Diffusion XL on the Leonardo AI desktop app, here are a few things to check:

  1. Model Availability: Sometimes newer or advanced models such as Vision XL and Diffusion XL might be limited to specific plan tiers or might be in phased rollout. Even for premium accounts, they may not appear if your account does not have access to those particular premium-level models yet.

  2. App Version: Make sure your Leonardo AI app is updated to the latest version. Outdated versions might not display the newest models.

  3. Model Selection Menu: Double-check that you are looking in the correct place in the interface for selecting models. Some models might be under different dropdown menus or labeled differently.

  4. Reload/Log Out and Back In: Occasionally, refreshing the app or logging out and back in may update your available models list.

  5. Contact Support: If you still don’t see the models despite being premium and updated, this may be account-specific. You can reach Leonardo AI support at support@leonardo.ai for help with the issue.

For generating a cartoon character with cel shading, try looking for models specialized or trained on anime-style or cartoon-style generation. You can use prompt keywords like "cel shading," "cartoon character," or "anime style" to improve results. If Vision XL or Diffusion XL become available, those might help as they typically support advanced detail and style control.

Let me know if you want tips on prompt crafting for cel-shaded cartoon characters!

lethal crypt
#

I’m actually using the web browser to access the app. Should I actually download the app to my pc?

manic rootBOT
# lethal crypt I’m actually using the web browser to access the app. Should I actually download...

You don't have to download the desktop app to use Leonardo AI—accessing it through your web browser works perfectly fine and is the usual way many users use the service.

If you're accessing Leonardo AI via a web browser and you don’t see certain models like Vision XL or Diffusion XL, this is likely not related to whether you use the desktop app or browser. Model availability depends mainly on your subscription plan and whether the models have been rolled out to your account yet.

That said, some users prefer the desktop app for convenience or stability, but the browser version should offer the full core functionality including model selection and generation.

If you’re missing models or features on the browser, try these:

  • Make sure you are logged in with the premium account.

  • Clear your browser cache and refresh.

  • Check for any in-app notices about model availability.

  • Reach out to support@leonardo.ai if it seems your account should have access but the models aren’t showing.

So downloading the app is optional rather than required. You can comfortably continue using the web browser version.

lethal crypt
#

I found it! There was a small toggle switch that said legacy mode and that’s where vision and diffusion is! 🤦‍♀️ so for cartoon what should the style be? Cinematic? Illustration?

manic rootBOT
# lethal crypt I found it! There was a small toggle switch that said legacy mode and that’s whe...

Great find! Yes, the legacy mode toggle does reveal some of the classic models like Vision and Diffusion.

For generating cartoon characters with cel shading, I recommend going for styles that emphasize illustration rather than cinematic, because:

  • Illustration style tends to focus on clean lines, flat colors, and stylized details that fit cel shading well.

  • Cinematic style often aims for realism, dramatic lighting, and mood, which may stray from the cartoon look you're after.

You can also try prompt keywords like "cel shading," "cartoon character," "anime style," or "comic style" combined with "illustration" to guide the model. For example, in your prompt:
"A vibrant cartoon character with cel shading, clean lines, and bold colors, illustration style"

Feel free to experiment with both styles if you want to see which one fits your vision better, but illustration is usually a strong starting point for cel-shaded cartoons.

If you want help crafting more tailored prompts, just ask!

copper warren
lethal crypt
#

Yes, I found them. I’ve been playing around with many of the settings and I just can’t get the style I want. I was able to get closest to the correct style using Lucid Origin, so I guess I’m going to go back to that one. I want cartoon, not real anime style; I want classic cartoon, like this style. I thought this was cel shading?

manic rootBOT
# lethal crypt Yes I found them, I’ve been playing around with many of the settings and I just ...

Thanks for sharing the image! This style is definitely classic cartoon rather than anime. It has bold, clean outlines and flat color areas with simple shading and highlights — that’s very typical of traditional cel shading used in classic cartoons like Looney Tunes.

To clarify:

  • Cel shading means using flat, distinct color regions for shadows and highlights instead of smooth gradients—exactly like your example here.
  • Classic cartoon style usually includes exaggerated expressions, thick outlines, and simple but effective shading—just like this Taz character.
  • This is different from anime style, which often has more detailed eyes, hair highlights, and more gradient-based shading.

So yes, this is cel shading, but specifically within the classic cartoon aesthetic.

If you want to generate this style on Leonardo AI, try prompts like:
“Classic cartoon character, cel shading, bold black outlines, flat colors, simple shadows, expressive pose”
and use models or presets that focus on “illustration” or “cartoon” style rather than “anime.”

If you want, I can help you draft more specific prompts for this cartoon style! Just let me know.

lethal crypt
#

I’m using chat to generate the prompts but I need to know which model will generate this style

manic rootBOT
# lethal crypt I’m using chat to generate the prompts but I need to know which model will gener...

For generating classic cartoon style images like the one you shared (with cel shading, bold outlines, and flat colors), the best model to use on Leonardo AI is usually one that specializes in illustration or cartoon styles.

Since you mentioned you found the "Legacy mode" toggle where Vision XL and Diffusion XL are available, here’s what I recommend:

  1. Try the "Lucid Origin" model — you said it gave you some of the closest results before, so it’s well suited for cartoon and illustration styles.

  2. Legacy Mode Models (Vision XL, Diffusion XL) — these can also produce strong results, especially if you add keywords like "classic cartoon," "cel shading," "comic style," and "bold lines" in your prompt.

  3. Look for models tagged as “illustration,” “cartoon,” or “comic” style in the presets or model selection menus in the app.

  4. Avoid more photorealistic or anime-focused models for this style since they may create smoother shading or detailed textures rather than flat cel shading.

If a specific model is not clear in the UI, you can experiment with Lucid Origin and the legacy models (Vision XL, Diffusion XL), combining them with clear prompts emphasizing cartoon, cel shading, and bold outlines to get the closest effect.

If you'd like, I can help craft a sample prompt tailored to these models!

copper warren
broken verge
#

been trying to change the color of the frame on sunglasses for a while, somehow everything i write is against the content safety guidelines. any suggestions?

manic rootBOT
# broken verge been trying to change the color of the frame on sunglasses for a while, somehow ...

It sounds like your prompts for changing the sunglasses' frame color might be triggering the content safety filters, possibly because of certain words or phrasing that the system flags as sensitive. Here are some suggestions to help you get around that:

  1. Use neutral, straightforward language: Instead of emotive or potentially sensitive words, describe the frame color change in simple terms. For example, say "Change the color of the sunglasses frame to metallic blue" rather than something that could be misinterpreted.

  2. Avoid words that could be flagged: Refrain from using words related to violence, adult content, drugs, or other sensitive topics in your prompt, even if they seem unrelated.

  3. Be explicit about the frame only: Specify "sunglasses frame" and emphasize that you want the frame color changed, not the lenses or the person wearing them.

  4. Try gradual edits: If you're using a model that supports editing within shapes, upload a shape/mask for the frame and prompt changes inside that to help the model focus only on the frame.

  5. Example prompt: "Please change the sunglasses frame color to a vibrant red, keeping all other parts unchanged."

If you want, you can share a sample of the prompt you've tried (without anything sensitive), and I can help you rephrase it to be compliant with guidelines.

Also, if you continue having trouble, you might want to reach out to support@leonardo.ai for specific assistance with content guidelines and prompt issues.

Does that help?

copper warren
broken verge
ionic dust
#

I've just downgraded to the free plan, and I'm using a prompt that works perfectly fine on a paid plan, but now gives me "Validation Failed: Please Check Request Parameters" error. Is this a Free plan disadvantage? If it is I wish to file a complaint. It's not acceptable at all.

manic rootBOT
# ionic dust I've just downgraded to the free plan, and I'm using a prompt that works perfect...

The "Validation Failed: Please Check Request Parameters" error you’re encountering after downgrading to the free plan might be related to limitations or restrictions on the free plan compared to the paid plans. Some features, model versions, or prompt complexities might not be fully supported or allowed on the free tier, which can cause validation errors.

If the prompt worked fine on the paid plan but fails on the free plan, this could indeed be a restriction tied to your current subscription level. It’s common for free plans to have certain feature or parameter restrictions that aren’t present in paid plans.

If you want to formally express your concerns or file a complaint about this limitation, I’d recommend contacting the Leonardo.Ai support team directly at support@leonardo.ai. They will be able to provide more detailed explanations, consider your feedback, and help with any account-specific issues.

If you want, I can also clarify or help troubleshoot your prompt within the limits of the free plan to see if there’s a workaround.

Would you like me to assist you with that?

ionic dust
#

I need specifics.

manic rootBOT
# ionic dust I need better information than "Validation Failed"!

I understand that getting a vague "Validation Failed: Please Check Request Parameters" error is frustrating, especially when your prompt worked perfectly on the paid plan.

This error generally means that something in the request you are sending to the generation engine is not being accepted—often related to parameters like model selection, prompt length, or features that are restricted on the free plan.

Here are some more specific things that could cause this error on a free plan but not on a paid plan:

  1. Model or preset restrictions: Some advanced models or presets may only be available to paid subscribers. Using them in your prompt or settings on the free plan might cause validation to fail.

  2. Prompt complexity or length: Free plans often have stricter limits on prompt length or specific parameters. If your prompt is too long or includes certain tokens, that could also trigger validation failure.

  3. Parameter limits: Certain parameters (like number of generated images, resolution, or specific advanced features) might be disallowed or limited on the free plan.

  4. Feature availability: For example, some advanced editing or special effect options might require a paid plan.

The system’s "Validation Failed" error unfortunately does not provide more granularity in the feedback to users, which makes it harder to troubleshoot.

What you can do next:

  • Try simplifying your prompt or switching to standard, non-premium models or presets.
  • Reduce prompt length or complexity.
  • Make sure you are not selecting any premium-only features or options.
  • If feasible, share your exact prompt and settings (without sensitive info), and I can help identify if there’s a part that’s triggering this validation error on the free plan.
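If the web UI only ever says "Validation Failed", one way to surface the actual offending parameter is to replay the request against Leonardo's public REST API, whose JSON error responses are usually more specific. This is only a sketch: the endpoint path follows the public API docs at the time of writing, `YOUR_MODEL_ID` and `YOUR_API_KEY` are placeholders, and the multiple-of-8 dimension check is a common diffusion-model convention rather than a documented API rule.

```python
import json
import urllib.request

API_URL = "https://cloud.leonardo.ai/api/rest/v1/generations"

def build_payload(prompt, model_id, width=1024, height=1024, num_images=1):
    """Assemble the generation request body, with basic client-side sanity
    checks so obviously invalid parameters are caught before the round trip."""
    if not prompt:
        raise ValueError("prompt must not be empty")
    # Assumption: diffusion models typically want dimensions divisible by 8.
    if width % 8 or height % 8:
        raise ValueError("width and height should be multiples of 8")
    return {
        "prompt": prompt,
        "modelId": model_id,  # a real model UUID from the API's model list
        "width": width,
        "height": height,
        "num_images": num_images,
    }

def send(payload, api_key):
    """POST the payload; the API's JSON error body is typically more
    specific than the UI's generic "Validation Failed" toast."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_payload("a watercolor fox", model_id="YOUR_MODEL_ID")
print(payload["width"], payload["num_images"])
# send(payload, "YOUR_API_KEY")  # uncomment once you have a real key
```

Comparing which payloads the API accepts on your current plan versus which fail can narrow down whether the free tier is rejecting a specific parameter.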
manic rootBOT
ionic dust
manic rootBOT
# ionic dust I submitted a feedback form, but after I hit close, I got redirected to Typeform...

It sounds like the feedback form you submitted is integrated with Typeform, which is a popular platform for surveys and feedback collection. After submitting the form, being redirected to Typeform might be part of their default behavior—for example, it could be showing a confirmation page, a thank-you message, or another related survey.

This is generally intended to complete the feedback process or gather further details. However, if you weren’t expecting to leave the Leonardo.Ai platform or app, it might feel a bit confusing or unintended.

If you feel the redirect was unexpected or problematic, you can mention this in your feedback or reach out to support@leonardo.ai to share your experience. They may be able to adjust or clarify this flow for future users.

If you want, I can guide you on where to go after this redirect or how to return to the main platform smoothly. Just let me know!

copper warren
wheat bison
#

is Leonardo completely broken again

manic rootBOT
tight dove
#

Does anyone know why models in "models and training" can't be selected (when I click on Leonardo Kino XL --> generate with this model, it doesn't give me that model but some other one).

ionic dust
copper warren
half fractal
#

I'm getting 'Not Enough Tokens, Add More!' popup when trying to use Lucid Origin or Realism, despite the fact that I'm on the Ultimate tier. Was working fine half hour ago

open trail
#

u

manic rootBOT
# half fractal I'm getting 'Not Enough Tokens, Add More!' popup when trying to use Lucid Origin...

It sounds like you're encountering a tokens issue despite being on the Ultimate tier. This could be a temporary glitch or a syncing problem with your account status.

Here are a few things you can try:

  1. Log out and log back in to refresh your account status.
  2. Clear your browser cache and cookies, then try again.
  3. Wait a few minutes and try again, as sometimes token balances take a moment to update on the server side.
  4. If the issue persists, please contact Leonardo.ai support directly at support@leonardo.ai with details about your account and the issue. They can check your tokens and tier status manually.

Usually, being on Ultimate tier should prevent token shortage messages on those models. So this is likely a temporary issue or a backend sync problem.

Let me know if you want me to help with anything else!

manic rootBOT
# open trail u

Hello! It looks like your message was just "u." Could you please clarify or provide more details about what you need help with? I'm here to assist!

manic rootBOT
#

Hey! It sounds like you're sharing something playful or maybe referencing a meme or clip. If you'd like to chat about that or anything else, just let me know!

small shore
#

Hello Leonardo AI Support Team,

I’m following up on an ongoing issue that remains unresolved and is now becoming quite frustrating.

I currently have a large amount of tokens available on my account (both rollover and fast tokens), and I am actively paying for a Premium subscription. However, due to the ongoing bug where Leonardo Diffusion XL automatically switches to Vision XL, I am unable to properly use the model I rely on for my work.

At this point, I am essentially paying for a service that I cannot fully use.

I have already:

  • Reported the issue
  • Tried all troubleshooting steps (refreshing, logging out, clearing cache, switching browsers, etc.)

The issue persists.

Because of this, I have a few important questions:

  1. If I downgrade my subscription from Premium to Essential, what happens to my existing tokens (rollover + fast tokens)?

    • Do they remain available?
    • Do they expire or get reduced?
  2. Given that I’ve spent a significant amount on tokens and currently cannot use the main model I need, is there:

    • Any form of compensation or token credit, or
    • An estimated timeline for when this issue will be fixed?

I rely on Leonardo AI to create clipart for my business, so this directly impacts my ability to work and generate income.

I would really appreciate a clear response on both the technical issue and the token policy before I make any decisions regarding my subscription.

Thank you for your time, and I look forward to your response.

#

Hey, I had a more advanced question about model + style combinations for different clipart styles — I’d really appreciate some guidance.

Right now, for realistic watercolor clipart, I typically use:

  • Leonardo Diffusion XL
  • Lifelike Vision
  • Dynamic style

That combination works really well for detailed, painterly results.

I’m now trying to expand into multiple styles for my clipart store, and I want to make sure I’m using the best model + style preset combinations for each one.

Could you recommend the optimal setup (model + style + general settings) for each of these:

  1. Realistic Watercolour (soft, pastel, detailed)
  2. Kawaii / Chibi (simple, cute, sticker-style, flat colours)
  3. Anime Clipart (clean lineart, cel shading, consistent characters)
  4. Whimsical / Storybook Watercolour (more expressive, textured, fantasy style)

For each, it would help to know:

  • Best model (Diffusion XL, Vision XL, Anime XL, etc.)
  • Best style preset (e.g. Lifelike Vision, Dynamic, Sharp Lineart, Cartoon, etc.)
  • Recommended CFG / Prompt Magic ranges
  • Any key prompt keywords that help lock in the style

Also, are there any combinations (like Dynamic, Lifelike, etc.) that should be avoided for certain styles?

I’m trying to build a consistent multi-style workflow, so I want to make sure I’m pairing the right model with the right style system.

Appreciate the help 🙏

austere sparrow
#

Is texture generation retired? I can't find where is it

fringe dock
dark cave
#

Hi, I'm just wondering if anyone else is having the same problem as me. I'm on the unlimited plan, yet for some reason the Lucid Realism model is asking me for credits when it's supposed to be unlimited? I've tried clearing my browser cache, tried Chrome, Mozilla and Edge, tried logging out and back in, and tried restarting my laptop, and nothing has worked. I've been having this problem for a few weeks, but now it's just persistent and I can't get rid of it. I only signed up for the unlimited account so I could get unlimited generations from the Lucid Realism model. Anyone found a solution to this problem?

feral mica
#

have you tried, for example, Lucid Origin?

dark cave
#

The Lucid Origin model as well

#

Same problem with both models

feral mica
# dark cave The Lucid Origin model awsell

there is a fair usage limit implemented, can I send you a dm through our bot to get your username and escalate this with our support team? This will help us understand what the issue is

dark cave
#

sure

austere sparrow
#

Omg texture generation is retired ToT

fringe dock
feral mica
visual glade
#

good day guys. just want to ask, is GPT Image 2 down right now? (3 PM GMT+7)
Suddenly I'm unable to generate anything; it always says Failed Generation.
This didn't occur a few hours ago.

I'm on the free plan, so credit is still available

copper warren
small shore
#

I’m following up on an ongoing issue that remains unresolved and is now becoming quite frustrating.

I currently have a large amount of tokens available on my account (both rollover and fast tokens), and I am actively paying for a Premium subscription. However, due to the ongoing bug in which Leonardo Diffusion XL automatically switches to Vision XL, I am unable to use the model I rely on for my work properly.

At this point, I am essentially paying for a service that I cannot fully use.

I have already:

Reported the issue
Tried all troubleshooting steps (refreshing, logging out, clearing cache, switching browsers, etc.)

The issue persists.

Because of this, I have a few important questions:

If I downgrade my subscription from Premium to Essential, what happens to my existing tokens (rollover + fast tokens)?

Do they remain available?
Do they expire or get reduced?

Given that I’ve spent a significant amount on tokens and currently cannot use the main model I need, is there:

Any form of compensation or token credit, or
An estimated timeline for when this issue will be fixed?

I rely on Leonardo AI to create clipart for my business, so this directly impacts my ability to work and generate income.

I would really appreciate a clear response on both the technical issue and the token policy before I make any decisions regarding my subscription.

Thank you for your time, and I look forward to your response.

half fractal
rugged wyvern
#

i have Artisan, aka unlimited image generation, but i can't create pics, so how do i fix this problem? the Lucid Origin model is not working for me at all, and i've had this issue for a while now

feral mica
# half fractal Might I ask about this 'fair usage limit', as I'm having the same problem?

I've confirmed with our Customer Service team: this fair usage limit does not stop you from generating images, it only slows them down a little once reached, so this is a different situation. May I send you a dm through our bot so I can retrieve your username and escalate it?

@rugged wyvern I believe you are in the same place, I already got your username

feral mica
#

@half fractal @rugged wyvern @dark cave hey guys! The issue with Lucid models has been fixed, could you please validate it is working properly?

lilac root
#

@stuck gate 👆 x 3

native loom
#

hola

stuck gate
stuck gate
manic rootBOT
ocean wind
vagrant marten
#

Hello, what is up with the library? It keeps jumping up and down through the created pictures, making it very difficult to work.

manic rootBOT
# vagrant marten Hello, what is up with the library? It keeps jumping up and down the created pic...

Hi! Sorry to hear the library is acting up like that. When the created pictures in the library jump around, it usually means there's some issue with how the images are loading or being refreshed in the interface. Here are a few things you can try that might help:

  • Refresh the page or restart your browser to reset the visual glitches.
  • Clear your browser's cache and cookies, which can sometimes cause UI instability.
  • Make sure your browser is up to date, as older versions might have some rendering bugs.
  • Try using a different browser to see if the issue persists there.
  • Check your internet connection, since slow or unstable connections can cause images to load inconsistently.
  • If you’re using any browser extensions that affect page display or image loading, try disabling them temporarily.

If the problem keeps happening after trying these, it might be a bug on the platform side. In that case, please consider contacting support@leonardo.ai with details about your device, browser, and what exactly happens. They can look into it and help get it fixed.

Hope this helps, and that your library settles down soon! If you want, I can share tips on organizing images while the glitch is going on.

vagrant marten
#

I will try that.

manic rootBOT
quaint sedge
#

Hi, is there something changed with the image dimensions when trying to edit an existing image? It used to auto size to the custom image size but doesn't anymore, was wondering if it was something I was missing? Thanks!

manic rootBOT
# quaint sedge Hi, is there something changed with the image dimensions when trying to edit an ...

Hello! If you’re noticing that when editing an existing image it no longer auto-sizes to the custom image dimensions as it used to, it’s possible there has been a change or update in how the editor handles image resizing during edits.

Some things to check or try:

  • Make sure you are using the correct tool or model that supports auto-sizing on edits. Sometimes updates or model changes can affect behavior.
  • Confirm that you’re uploading or selecting the original image as the base — certain workflows might now open with default dimensions rather than the original image’s size.
  • Check for any new settings or toggles related to “resize on edit” or “preserve original dimensions” that might have appeared in the UI.
  • If you previously used an older version or a legacy workflow, try switching back to legacy mode if that’s available to see if the behavior returns.

If none of these help and the auto-sizing change seems unintentional or a bug, you might want to report it to Leonardo.ai support directly at support@leonardo.ai so they can verify if this is intended or a regression.

In the meantime, manual resizing or specifying the desired dimensions when starting the edit session might be a workaround.

Let me know if you want guidance on how to do that or anything else!

rugged wyvern
#

nvm i take it back @feral mica not working

feral mica
#

ok, let me re escalate it please

rugged wyvern
#

🙏

feral mica
#

i saw your dm, so it is generating, it just takes time for the generation right?

feral mica
#

lmk if the generation fails please

rugged wyvern
#

the pics not even getting generated at all now @feral mica

feral mica
rugged wyvern
#

@feral mica

#

look dm @feral mica

feral mica
#

it is generating, remember if you have no tokens it goes to a low speed generation queue

#

it should give you the image in a bit

#

if it fails, then something is wrong

rugged wyvern
#

@feral mica i waited 100 seconds then still not working

#

@feral mica look dm

feral mica
#

hey there, please head to leonardo.ai to create your free account and start generating your images

feral mica
#

please help us reporting on this channel only, so we can better assist you. the team is looking into the situation as we speak

rugged wyvern
feral mica
half fractal
#

Could anyone think of a model + prompt combo to get this character's arms and legs to swap positions, so that I can create a walk animation?

manic rootBOT
# half fractal Could anyone thing of a model + prompt combo to get this character's arms and le...

To create a walk animation with this character by swapping arm and leg positions using Leonardo.Ai models, you can try the following approach:

  1. Model Choice:
  • Use an omni edit model that supports advanced pose editing, such as Nano Banana Pro or Kontext, which allow more control for shape and pose adjustments.
  2. Method:
  • Upload this original image as a reference shape for the model.
  • In the prompt, explicitly request a pose change describing the arms and legs swapping positions. For example:
    “Change the walking pose so that the character's left arm and right leg swap positions, and the right arm and left leg swap positions, maintaining a natural walking motion. Keep the same style and colors as the original image.”
  3. Tips:
  • Specify “keep style consistent with original image” or “same cartoon style” to avoid style drift.
  • Expect some trial and error, as swapping limbs can be complex to produce naturally.
  • You may want to generate several frames with swapped poses incrementally to create smoother animation steps.
  • Using the shape reference ensures edits remain inside the original character silhouette.

This combo—using an omni edit model with the original image as a reference shape and a prompt carefully describing the limb position swaps—gives the best chance to get a natural walk cycle frame for your animation.

If you have access, you can also try multiple steps of pose morphing or pose-guided control features in these models, but this prompt method is the simplest to get started. Let me know if you'd like details on how to set this up!

feral mica
feral mica
#

using gpt image 2, prompt: Professional graphic design 2D illustration of a character with smooth walking motion and natural movements, created in a 2D flat style with bold lines and vibrant colors, the character's design is simplistic yet detailed, with a focus on dynamic posing and expressive facial features, the illustration incorporates a clean and minimalist background to emphasize the character's movement and personality.

wind marlin
#

I am on the essential plan, why cant I delete any images

manic rootBOT
wind marlin
feral mica
#

If possible send a screenshot

wind marlin
#

Basically i am being told to upgrade to premium when I am on the essential plan

#

Can you shut up

manic rootBOT
reef bobcat
#

The field where you put the prompt only accepts a few characters, which is very bad. I can't create a good, complex video or image prompt. Is there any way to add more text?

manic rootBOT
# reef bobcat The field where you put the prompt only accepts a few characters, which is very ...

Currently, the prompt field in Leonardo.Ai’s interface has a character limit that can feel restrictive for creating complex or detailed prompts. Unfortunately, there is no direct option right now to increase the prompt field length beyond that built-in limit.

However, here are a few suggestions that might help you make the most of it:

  1. Use concise and focused wording: Try to distill your idea into the most essential keywords and phrases that guide the generation.
  2. Use references or upload shapes/images: For visuals, utilizing shape references or example images can sometimes reduce the need for lengthy text descriptions.
  3. Break down complex prompts: If you want very complex scenes or concepts, consider generating them in parts and then combining or editing the results afterward.
  4. Check for feature updates: Leonardo.Ai is actively improving — keep an eye on announcements and updates, as expanded prompt capabilities or multi-field prompts might be added in the future.

If this limitation is blocking your workflow seriously, you might consider contacting Leonardo.Ai support directly at support@leonardo.ai to share your feedback, as user requests can influence feature prioritization.

I hope this helps you work around the prompt character limit for now! Let me know if you want tips on prompt writing or how to get creative within these limits.
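To make suggestion 1 concrete, here is a tiny sketch of trimming a prompt to a character cap on a word boundary, so the cut never lands mid-word. The 1500-character default below is an arbitrary placeholder, since the real limit varies by field and plan.

```python
def fit_prompt(prompt: str, limit: int = 1500) -> str:
    """Trim a prompt to at most `limit` characters, cutting at the last
    full word before the limit instead of mid-word."""
    prompt = " ".join(prompt.split())  # collapse runs of whitespace first
    if len(prompt) <= limit:
        return prompt
    cut = prompt.rfind(" ", 0, limit + 1)  # last space at or before the cap
    return prompt[:cut] if cut > 0 else prompt[:limit]

long_prompt = "watercolor fox, soft pastel palette, detailed fur, " * 100
print(len(fit_prompt(long_prompt)))  # never exceeds 1500
```

Put the keywords that matter most at the front of the prompt, since they are the part guaranteed to survive the cut.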

manic rootBOT
# reef bobcat Ok

Great! If you have any more questions or need help with prompts or anything else related to Leonardo.Ai, just let me know. Happy creating!

feral mica
# wind marlin

Looks to me like a bug, I sent you a dm through our bot so you can share your username privately

night vault
#

suddenly I need a better upgrade plan to delete

feral mica
stark storm
#

Hi everyone,

I have a rough animatic (storyboards that are edited to a musical score and voice acting) that I need to give a visual upgrade and translate into two different visual styles for a research project.

Basically I want to use the current storyboards content (composition and what is shown) but create two versions of the animatic with different visual styles:

  1. a painterly style with brushstrokes and some soft edges.
  2. a sharp-edged style, kinda like cel shading.

This blog post details the exact use case I need: https://leonardo.ai/wiki/applying-style-reference-to-an-existing-image/?utm_source=chatgpt.com

But I can't find where to upload the "Content Reference" and "Style reference". I just want to be sure this feature is still available before getting a paid subscription. I don't mind spending some money if this gets me workable results before my deadline 8 days from now.

Any help is much appreciated.

#

I have style examples, but Discord won't let me post them.

manic rootBOT
# stark storm Hi everyone, I have a rough animatic (storyboards that are editted to a musica...

Hi! It sounds like a really cool project to upgrade your animatic with two distinct visual styles.

About your question on using a "Content Reference" and "Style reference" to apply style to an existing image on Leonardo.Ai:

  • Yes, this style transfer feature is still available in Leonardo.Ai.
  • To do this, you should use an omni edit model like Nano Banana or Nano Banana Pro. These models support advanced shape- and style-guided edits.
  • When you start a new generation, you can upload your storyboard image as the “Content Reference” and then upload your chosen painterly or cell-shaded image as the “Style Reference.”
  • The interface allows uploading these references right at the start of the generation process, before entering your prompt.
  • In your prompt, be sure to specify that you want to keep the composition/content of your storyboard but want to apply the brushstroke painterly style or the sharp cell-shaded style correspondingly.
  • Keep in mind that while the style transfer features are powerful, complex layout constraints or extremely subtle style details may not always be perfectly preserved; having realistic expectations helps.
  • For your deadline 8 days away, the paid Essential or Premium subscriptions could be worth it for better access and faster generation.

If you want the most precise steps on where exactly to upload the content/style references in the Leonardo interface, I can help walk you through it in detail. Also, if you run into any issues, Leonardo’s support email is support@leonardo.ai.

Would you like a step-by-step guide on how to start a style transfer generation with your images on Leonardo.Ai?

manic rootBOT
# stark storm I have style examples, but Discord won't let me post them.

I understand—it sounds like you’re trying to share your style reference images but are running into restrictions on Discord. That can happen if file types or sizes don’t meet Discord’s limits, or if certain channels have posting restrictions.

Here are a few tips you can try:

  1. Image Hosting: Upload your style examples to an image hosting service like Imgur, Dropbox, Google Drive, or similar, then share the link here or on Discord. That way you avoid Discord’s upload limits.

  2. File Types: Make sure your images are in common formats like JPG or PNG and not too large in file size.

  3. Split Files: If you have multiple images, try sharing them one at a time or combine them into a collage or single PDF to reduce the number of files.

If you want, you can just describe the styles in detail here, and I can help you craft prompts and instructions to match those styles as closely as possible using Leonardo.Ai’s style transfer feature.

Also, if you’d like to share links here from image hosting, I’m happy to help analyze or guide you on how best to apply those styles!

stark storm
manic rootBOT
# stark storm A step-by-step would be extremely helpful. Thank you! I’ll take the premium subsc...

Great choice going for the Premium subscription—it will definitely speed things up and give you access to more features!

Here’s a detailed step-by-step guide on how to apply your storyboard (content) and style reference images using Leonardo.Ai’s omni edit models like Nano Banana Pro:

  1. Log in to Leonardo.Ai: Use your account with the Premium subscription.

  2. Go to the Generation Interface: Click “Create” or “Generate” to start a new image generation.

  3. Choose the Model:

    • Select an omni edit model such as Nano Banana or Nano Banana Pro for best results with style transfer.
    • If you don’t see these right away, look in the “model selection” area and use the “legacy” tab if needed.
  4. Upload Your Content Reference Image:

    • Look for the option labeled “Content Reference” or something similar in the generation interface.
    • Upload your storyboard frame image here — this is the image whose composition you want to retain.
  5. Upload Your Style Reference Image:

    • Likewise, find the “Style Reference” upload spot in the interface.
    • Upload an image that represents the style you want to apply — for example, your painterly brushstroke style, or your cell-shaded style.
  6. Write Your Prompt:

    • In the prompt box, clearly instruct the model to keep the original composition/content of the Content Reference but apply the style of the Style Reference.
    • For example:
      • “Create a painterly style with visible brushstrokes and soft edges, applied to the content shown in the content reference.”
      • Or: “Apply a sharp edged cell-shaded style to match the style reference, preserving the original storyboard composition.”
  7. Adjust Settings (Optional):

    • You can tweak parameters like style strength or guidance scale if available, to balance how much the style influences the output.
    • Keep other generation settings like resolution suitable for your animatic use.
  8. Generate and Review:

manic rootBOT
# stark storm A step-by-step would be extremely helpful. Thank you! I’ll take the premium subsc...
  • Click generate and wait for the image to render.
  • Review the results and, if needed, adjust the prompt or style strength for better adherence to the style or content.
  9. Repeat for Each Frame:

    • Continue uploading each storyboard frame and applying the chosen style reference similarly.
  10. Download Your Result:

    • Once you have a good batch of styled frames, download them to assemble into your upgraded animatic.

If you want, you can save your style reference images in the Leonardo system as assets for easy reuse across frames.

Feel free to ask if you want a hand with prompt wording or setting specific parameters! Good luck with your animatic project — I’m here if you need more tips.

stark storm
#

I still have a free account now, because I want to test if this actually works. But I still can't find the spot where I upload the content reference

#

Also, I would love some prompt examples for my specific project.

manic rootBOT
# stark storm Also, I would love some prompt examples for my specific project.

Thanks for letting me know! I’ll walk you through exactly where to upload your content and style references on Leonardo.Ai, even on a free account—this feature should be accessible for testing with omni edit models like Nano Banana.

Where to upload Content and Style References:

  1. Open Leonardo.Ai and go to the “Create” or “Generate” page.

  2. Select an Omni Edit model:

    • Click on the model selection dropdown.
    • Look for “Nano Banana” or “Nano Banana Pro” — these support image editing with style transfer.
    • They might be under the "Legacy" tab if you don't see them immediately.
  3. Once you pick the omni edit model, you will see an editing interface with options to upload images.

  4. Look for the “Reference images” or “Image inputs” section:

    • Here you can upload your Content Reference image — your storyboard frame.
    • Also upload your Style Reference image — the artwork with your target style (painterly or cell-shaded).
  5. The interface uses these images to guide the generation.

If you don't see separate labeled fields for content and style references, simply upload your content image first, and then upload the style image as a separate reference as allowed by the model. Some Omni Edit models take just one image input and rely mostly on prompting, so you can specify in your prompt what is content versus style.


Prompt Examples for Your Project

Here are some prompt templates tailored to your two styles. You can adjust these prompts depending on your content:

For a Painterly Style (soft brushstrokes and texture):

Create an image that preserves the composition of the content reference image. Render it in a painterly style with visible brushstrokes, soft edges, and rich texture. The color palette should be warm and natural, evoking traditional oil painting techniques.

For a Sharp-Edged Cell-Shaded Style:

manic rootBOT
# stark storm Also, I would love some prompt examples for my specific project.

Transform the content reference image into a sharp-edged, cell-shaded style. Use strong line art, flat colors with minimal gradients, and distinct shadows. Keep the composition exactly as in the content reference.


---

If you want, you can experiment mixing prompt phrases to get exactly the look you want, for example:

- “Maintain the original composition from the content reference.”
- “Apply a painterly look with visible brush texture and soft blending.”
- “Use sharp line art and flat colors like classic cel animation.”

---

If you’d like, I can help you refine prompts after you try uploading your images. Just let me know!
feral belfry
feral belfry
opaque lintel
#

Hi, I'm using the site via Firefox on a tablet (iPad). I'm a paid user and have been for some time. However, it won't let me delete my images; it just redirects me to subscribe to get that feature, which I obviously already have. :) Please advise.

#

Did a force refresh of the page where the images are listed/shown and it’s working again.

wanton belfry
#

For example, in gpt 2.0, if you click this... this will appear. You can select it and add your image for it to use.

fossil dawn
#

Hey, is Nano Banana Pro down?

feral mica
fossil dawn
feral mica
fossil dawn
feral mica
#

share it here please

fossil dawn
manic rootBOT
# fossil dawn

From the screenshot you shared, it looks like Nano Banana Pro is failing to generate images and giving you "Failed Generation" messages with tokens credited back to your account. This could indicate that the model is currently experiencing issues or is down.

I recommend:

  • Waiting a bit and trying again later, as this might be a temporary outage or server overload.
  • Checking Leonardo.Ai’s official channels or community forums for any announcements about downtime.
  • If the problem continues, you can contact Leonardo.Ai support at support@leonardo.ai to report this issue and get more specific help.

Let me know if you want help drafting a support message!

manic rootBOT
feral mica
# fossil dawn

It may be that the prompt is getting blocked, mind sharing your prompt in here?

fossil dawn
# feral mica It may be that the prompt is getting blocked, mind sharing your prompt in here?

Generate a hyper-realistic outdoor luxury car wash scene featuring the same woman with identical facial features, melanin complexion, tattoos, body shape, sleek waist-length black hair, jewelry, orange striped halter bikini top, tiny denim shorts, orange heels, and low squatting position. Keep her holding a yellow sponge above a pink soap bucket filled with foam.

Transform the plain blue background into the same exact sunlit car wash location from the second reference image — wet asphalt ground, puddles, dripping soap suds, chrome custom rim beside her, partial side of a customized show car, natural outdoor reflections, warm golden daylight.

Maintain the pink lowrider convertible parked directly behind her, now fully integrated into the outdoor setting with wet glossy paint, realistic chrome shine, water droplets, reflective metal trim, and sunlight hitting the body.

Add splashed water, soap running down the pavement, floating bubbles, damp shine on skin, realistic leg highlights, enhanced tattoo detail, photorealistic denim texture, glossy lips, smooth body contour lighting.

Soft ambient sunset lighting, cinematic editorial car photography, shallow depth of field, DSLR realism, ultra sharp 8k, luxury urban baddie aesthetic.

copper warren
feral bough
opaque shadow
#
Please contact me if you are fluent in native English.
I will share specific details after we get in touch.
manic rootBOT
stark storm
ocean wind
#

In the prompt I always indicate to use a reference image (in this case, image 1) and not to modify the face, style, hairstyle, or clothing, for example, so as not to alter my reference model.

viscid birch
#

Hey folks, I've got an app that uses Leonardo to generate images using gpt-1.5, and it's been working flawlessly for a few weeks. Today I started to receive validation errors, without any code changes at my end. I've debugged it down to guidances.image_reference, which now seems to cause every request to be rejected due to a validation issue. Any ideas what's going on here?
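
While waiting on an answer, one way to narrow down a rejection like this is to log and sanity-check the request body on the client side before it leaves the app. A minimal sketch of that idea in Python; only the guidances.image_reference key comes from the report above, and every other field name here is an illustrative assumption, not Leonardo's documented schema:

```python
import json

# Hypothetical request-body builder. Only guidances.image_reference is taken
# from the report above; the surrounding keys are illustrative assumptions.
def build_generation_payload(prompt, image_id, strength=0.5):
    """Assemble the JSON body for an image-guided generation request."""
    return {
        "prompt": prompt,
        "guidances": {
            "image_reference": {
                "imageId": image_id,   # previously uploaded reference image (assumed key)
                "strength": strength,  # 0..1 influence of the reference (assumed key)
            }
        },
    }

def validate_locally(payload):
    """Fail fast client-side so a schema change surfaces before the API call."""
    ref = payload.get("guidances", {}).get("image_reference")
    if not isinstance(ref, dict) or "imageId" not in ref:
        raise ValueError("guidances.image_reference is malformed")
    if not 0.0 <= ref.get("strength", 0.0) <= 1.0:
        raise ValueError("strength must be within [0, 1]")
    return json.dumps(payload)

body = validate_locally(build_generation_payload("a red fox, watercolor", "img-123"))
```

Logging `body` next to the API's error response makes it easier to see which field the server-side validator has started rejecting.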

night vault
copper warren
manic rootBOT
#

Could you please specify what exactly you would like me to check or help with in your profile? For example, are you asking about your account details, profile information, settings, or something else? Let me know so I can assist you better!

dark crest
#

It’s at 500 now

#

Flux dev elements problems again

#

Ok. It does work, but it takes a really long time, whereas it usually takes 10-12 seconds. Not as bad as a full outage, but still inconvenient

copper warren
dark crest
#

Yes

#

Was there recently an update? Last time this happened, it was right after a big update

copper warren
dark crest
#

I’m not sure, but I still have 11,000 points left

dark crest
#

How do I check if I have it enabled? Would turning it off help?

desert crystal
#

Dear Leonardo AI Support Team,
I am a journalist currently preparing an article about Leonardo AI and its use in creative image generation. To properly evaluate the platform, I would appreciate additional tokens, as the standard limit is not sufficient for my research.
Thank you for your consideration.

manic rootBOT
# desert crystal Dear Leonardo AI Support Team, I am a journalist currently preparing an article ...

Hello! For token requests related to research or extended use, you will need to contact Leonardo AI Support directly via their official support email: support@leonardo.ai. They will be able to review your request and assist you with additional tokens if possible. If you want to claim tokens for completing surveys, remember you can also type /exchange here in the chat and provide the username for the account that should receive the tokens. Let me know if you need any other info!

dark crest
copper warren
dark crest
#

I believe it’s only when I use elements

#

The same problem occurred a few months ago, now it’s back

copper warren
copper warren
dark crest
#

Will do

#

Resolved.

#

Thanks yall

copper warren
glossy needle
#

Hello

#

I just got Leonardo.ai and I was trying to modify images I created, and it doesn't seem to be working. I tried going on Claude and YouTube for instructions, but the interface on YouTube is slightly different from the interface I'm using on my Leonardo.ai. Can anyone help?

#

Can anyone reach out if you're willing to walk me through this?

manic rootBOT
# glossy needle Can anyone reachout if your willing to walk me through this?

Hello! I’m happy to help you get started with modifying images on Leonardo.ai.

The interface can change a bit over time or depending on your subscription level (Free, Essential, Premium, or Ultimate), so the tutorials on YouTube might not exactly match what you see.

Here are some quick tips for modifying images on Leonardo.ai:

  1. Use an Omni Edit Model: To edit existing images, be sure you’re using an omni edit model like Nano Banana or Nano Banana Pro in the model selection. Sometimes "legacy" models don’t support advanced editing.

  2. Upload or Select Your Image for Editing: From the main dashboard, either upload an image you want to modify, or pick one from your creations.

  3. Shape-Guided Editing (Optional): If you want changes only in a certain part of the image, upload a shape reference and specify in your prompt that edits should only appear inside that shape.

  4. Write Your Prompt Carefully: Describe what changes you want to make in detail, especially clarifying that you want changes to the existing image. For example: “Make the sky sunset orange inside this shape.”

  5. Start the Edit: After setting up your prompt and choosing the right model, start the generation. It should create iterations modifying your original image.

If some features are locked or missing, it might be because of your subscription tier. Premium features have a green diamond icon.

If you want, I can guide you step-by-step with what you see on your screen. Just let me know what part is confusing or where you get stuck!

Also, if you run into technical issues or bugs, you can email the Leonardo.ai support team at support@leonardo.ai for direct help.

Would you like to describe your current screen or what you’re trying to change in your image? That way I can walk you through the exact steps.

glossy needle
#

I would. Do you have a voice on here? Or should we move to a private chatroom? I don't know if I am clogging up this channel

manic rootBOT
glossy needle
#

Awesome. I created a character on leonardo.ai. I'm having trouble trying to take other images of things I would like on the character I created, like wings or a tail etc and merge them onto the image that I created. Is that possible? I have a premium subscription so I have plenty of tokens. But I'm also learning so I am testing out how to do this so when I start really creating my characters it won't be so difficult. You understand what I'm asking?

manic rootBOT
# glossy needle Awesome. I created a character on leonardo.ai. I'm having trouble trying to ta...

Yes, I totally understand what you’re asking! You want to take your original character image and add elements like wings or a tail from other images onto it, merging them smoothly.

With your premium subscription, you have access to advanced editing features that can help with this. Here’s how you can approach it on Leonardo.ai:

  1. Use an Omni Edit Model: Choose an omni edit model like Nano Banana Pro for your editing to get the best shape-guided and image-to-image editing.

  2. Upload Your Original Character Image: Start by uploading the base character image you created.

  3. Upload or Reference the Elements (Wings, Tail): If you have images of wings or tails you want to add, upload those as reference images or include them conceptually in your prompt.

  4. Shape-Guided Editing:

    • If you specifically want the wings or tail to appear in precise areas of your character, try creating or uploading a shape mask that defines where those additions should appear.
    • In your prompt, explicitly say something like: “Add detailed fantasy wings within the shape area on the character’s back” to guide the generation.
  5. Write a Clear Text Prompt: Describe how you want the wings or tail to merge with your character—for example, style, color, size, and position.

  6. Run the Edit / Inpainting: Start the generation with these inputs, and the model will blend the new elements onto your original character image.

Because this kind of merging can be complex, expect some trial and error. You can tweak the prompt and try different shape masks or reference images to get the best result.

If you want, I can help you craft the exact prompt or walk you through each step if you describe what you see or what you tried so far.

Would you like me to help you prepare that editing step right now?

ocean wind
glossy needle
#

I used cinematic instead of dynamic. Does that matter?

manic rootBOT
# glossy needle I used cinematic instead of dynamic. Does that matter?

Great question! The terms "cinematic" and "dynamic" are usually style descriptors within your prompt that influence how the image looks.

  • Using "cinematic" typically aims to give your image a dramatic, movie-like atmosphere—think strong lighting, mood, and composition.

  • Using "dynamic" tends to convey action, energy, or movement in the image.

Neither is inherently better or worse; it just depends on the look and feel you want for your character or scene.

So, using "cinematic" instead of "dynamic" won't break your edits — it will just steer the style toward a more cinematic mood rather than action-packed energy.

If you want your character with wings or a tail to feel heroic and dramatic, "cinematic" might be perfect! If you want them more energetic or in motion, "dynamic" could help.

Feel free to experiment with both in your prompts to see which fits your vision best. Would you like tips on how to combine these styles effectively in your prompt?

glossy needle
#

I'm currently trying to build a sword. I want the hilt to look a certain way and the blade to look another way. I can't get Leonardo to image the hilt exactly as I like it. It's producing a completely different hilt instead of the image I uploaded. What do you think?

manic rootBOT
# glossy needle I'm currently trying to build a sword. I want the hilt to look a certain way an...

When trying to customize specific parts of an image like your sword’s hilt separately from the blade, it’s important to guide Leonardo.ai carefully. Here are some tips:

  1. Use Shape-Guided or Masked Editing:
    Upload your sword image and create a mask or shape that covers just the hilt area. In your prompt, specify that the model should modify or generate the hilt only within that shape, leaving the rest (blade, background) unchanged.

  2. Reference Your Uploaded Hilt Image Clearly:
    If you uploaded an image of the exact hilt style you want, mention that in your prompt, e.g., “Create a sword hilt exactly like this reference image only within the masked area.”

  3. Be Specific in Your Prompt:
    Include descriptive adjectives for your desired hilt style, colors, material, and details. Avoid vague terms that might cause the model to interpret differently.

  4. Try an Inpainting or Editing Model:
    Use an omni edit model like Nano Banana Pro, which supports shape-guided modifications better.

  5. Adjust Prompt Weight or Use Negative Prompts:
    If the model keeps deviating, try strengthening the prompt toward your hilt reference or add negative prompts to prevent unwanted features.

  6. Iterate and Refine:
    Sometimes it takes a few tries adjusting your mask, prompt wording, or model choice to get exactly what you want.

If you want, you can tell me exactly how you’re setting up your prompt and the mask, or share details about the images you’re using, and I can help craft a prompt or setup that will better lock in your hilt style.

Would you like me to help you build a sample prompt or mask guidance for this?

glossy needle
#

Everything you say makes sense, but because I am not as familiar with the tool yet, I'm lost on certain directions. When you say "create a mask or shape that covers just the hilt area," I do not know how to do that. I'm simply uploading the image in the prompt and typing what I want it to do. I'm sure that's not right (as it has not worked properly yet), but I do not know how to use all the functions on this tool yet.

ocean wind
ocean wind
manic rootBOT
# glossy needle Everything you say makes sense but because I am not as familiar with the tool ye...

No worries—learning new tools can definitely take some time, and I’m here to help make it easier for you!

On Leonardo.ai, to edit just a part of your image (like the sword hilt), you usually need to use the "mask" or "shape" editing feature. Here’s a simple step-by-step to get you started:

  1. Upload Your Base Image:
    Start by uploading the image of the sword you want to modify.

  2. Enter the Edit or Inpaint Section:
    Look for an option like "Edit," "Inpaint," or "Image Editing." Choose an omni edit model like Nano Banana Pro if possible.

  3. Create or Upload a Mask/Shape:

    • In this step, you need to define the specific area where you want changes. Usually, Leonardo.ai will let you draw or paint on the image to mask the hilt area. This is like highlighting or selecting just the hilt while leaving everything else untouched.
    • If there isn't a built-in paint tool yet, some users upload a black-and-white shape image matching the hilt area (white where you want edits, black everywhere else). You then upload this shape as a “mask” or “shape reference.”
  4. Write Your Prompt Carefully:
    In the prompt, describe what changes you want only inside the masked hilt area. For example, “Create a gold ornate sword hilt within the masked area, keep the rest of the sword and background unchanged.”

  5. Run the Edit:
    When you start the generation, the model should interpret only the masked area for changes and keep the rest of the image the same.

If the interface you have doesn’t quite show these options clearly, it’s possible new updates or UI layouts move them or word them differently.

Tips:

  • Check for buttons named "Create Mask," "Edit Image," or icons that look like paintbrushes or selection tools.
  • If you don’t see a mask tool, try to see if you can upload a separate shape image and specify in your prompt to restrict edits within it.
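
The black-and-white shape image described in that last tip can also be produced with a few lines of scripting instead of a paint tool. A minimal sketch using Pillow; the image size and rectangle coordinates below are placeholders, so match them to your own sword image and where the hilt actually sits:

```python
from PIL import Image, ImageDraw  # Pillow

def make_edit_mask(size, box):
    """Grayscale mask: white (255) inside `box` where edits are allowed,
    black (0) everywhere else."""
    mask = Image.new("L", size, 0)                 # "L" = 8-bit grayscale, all black
    ImageDraw.Draw(mask).rectangle(box, fill=255)  # white rectangle over the edit area
    return mask

# Placeholder size and coordinates; use your image's dimensions and hilt position.
mask = make_edit_mask((512, 768), (120, 400, 280, 620))
mask.save("hilt_mask.png")                         # upload this as the shape/mask reference
```

A rectangle is the simplest case; for an irregular hilt outline you can draw a polygon instead, or paint the white region by hand in any editor.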
manic rootBOT
glossy needle
#

I want to create a greatsword. But I have the hilt in one image and the blade in a different image. So essentially I want to merge them into one sword. ALSO, when I create a character and I add an image to merge onto the character, if I have multiple things or images I want to add, can I do this all in bulk? Or do I have to make each change one at a time?

#

right now I'm just on the creation page at the top where it says "type a prompt"

manic rootBOT
# glossy needle right now I'm just one the creation page at the top where it says "type a prompt...

Thanks for sharing where you are right now! Since you’re on the creation page at the “type a prompt” box, here’s the key info for your goals:


Merging the Hilt and Blade from Separate Images into One Greatsword

Currently, Leonardo.ai's interface mainly supports editing or modifying one image at a time with a prompt and shape/mask guidance. It doesn’t have a straightforward "merge two full images into one" button.

Here’s how you can approach this:

  1. Manually Combine Before Upload (Optional):
    If you can, you could use an external image editor (like Photoshop, GIMP, or even free options) to roughly combine the hilt and blade images yourself first, creating a single base image. Then upload this combined sword to Leonardo.ai to refine and style it with prompts and edits.

  2. Use Shape-Guided Editing to Add the Blade to the Hilt or Vice Versa:

    • Upload one image (say, the hilt) first.
    • Use the editor with a mask or shape (if visible in your UI) to specify where you want to add or replace part of the image—like the blade area.
    • Then in the prompt, say something like: “Add a long, shining steel blade bridging from the hilt area, detailed and sharp.”
    • This can help the AI generate an integrated greatsword combining the two parts conceptually.
  3. Step-by-Step Edits:
    Since merging multiple full images automatically isn’t native, you’ll likely need to take it step by step and guide the model carefully, editing and adding one part at a time.


Adding Multiple Items to a Character

When you want to add multiple different things (like wings, tail, armor pieces) onto your character:

  • One change per edit session is usually simplest and gives you more control. You edit the character, add the wings, save that result, then start a new edit session to add the tail, and so on.
  • While you can try to write a prompt that includes multiple additions all at once, it can be tricky for the model to balance everything neatly, especially without strong shape masks guiding each addition.

  • Some users build up their characters in layers with multiple sequential edits.


Tips for Your Current Screen

  • Since you’re at the prompt box, when you want to do shape-based edits or masked edits, look for options labeled “Edit,” “Inpaint,” or “Image Editing”—sometimes those are separate modes from the main creation prompt.

  • If you don’t yet see masking or shape tools, you may want to explore the interface menus or test the “Legacy” tab/model options that sometimes allow more image-to-image editing features.


If you tell me what buttons or options you see around the prompt box or in menus, I can help guide you exactly where to click next for these kinds of edits.

Would you like help navigating your current UI to find the edit/masking features?

primal valve
#

An ultra-luxurious, cinematic Twitter cover image with a raw aesthetic. A very muscular Black man and a curvy, ebony-skinned woman with a voluptuous derrière, her oiled, gleaming skin reflecting the light. She is arched backwards; he stands in a dominant, powerful pose just behind her. Both are clad in minimalist, dark, sheer lingerie, designed by a haute couture house. Dramatic lighting, deep shadows, and red and gold neon highlights accentuate their bodies. A luxurious, dark nocturnal atmosphere, high-end rap music video style, 8K resolution, striking photorealism, shot with a 35mm lens, impeccable focus. Elegant and bold ‘binochocolat’ typography at the bottom. --ar 16:9 --raw style --v 6.0

copper warren
rose quest
#

Hello @manic root
A new problem has appeared following the interface change. The place to choose the Element model with FluxDev has moved; fine, I found it, but we can no longer see the element strength setting! There is a slider, but no number is displayed, even though it's extremely important!!! The slider starts at 1.0, but I usually need 0.7 or 0.8 depending on the desired result, and now that's impossible: you barely move the slider and the facial consistency is gone, because the setting has dropped well below 0.6, only you don't know it. You have to click on the bad image to see the details, and only then do you see the value of the setting. Is there somewhere to complain about this change and at least ask for the numeric display of the setting to be restored before generating images? Thanks

golden ginkgo
#

Hello, I'm just adding a bit more to Selenity's request.
The issue of no longer being able to see the settings for FluxDev didn't just start today; it's been going on for several days now.

Not being able to see the intensity at which we set the Elements in FluxDev is very limiting. Like other users in the same situation who are after a specific result, many of us have to start a generation blindly and then experiment to find the right balance. If only the slider moved smoothly, but no: nudging it just one millimeter takes us from 1 to 0.60. I know people who wanted to lower the intensity slightly and ended up with the element set to -0.40.

We thank you in advance for your attention to our request.

manic rootBOT
# golden ginkgo Hello, I’m just adding a bit more to Selenity’s request. The issue of no longer ...

Hi! Thanks for adding more details about the FluxDev settings issue. I understand how important it is to see and adjust the intensity sliders precisely to get the results you want. Not being able to see the exact settings or having such sensitive sliders can definitely make the creative process frustrating and slower.

While I don’t have the ability to directly fix the interface myself, I recommend making sure your browser and app are up to date and trying to clear cache and cookies to see if that helps restore the slider visibility and finer control. If the issue persists, the best course would be to contact Leonardo.Ai support directly at support@leonardo.ai with this detailed feedback. They will be able to prioritize this problem and work toward a smoother user experience for FluxDev.

Thanks for your patience and for bringing attention to this — hopefully it gets resolved soon so everyone can fine-tune their work easily! If there’s anything else you want tips or advice on while waiting, just let me know.

copper warren
#

@rose quest@golden ginkgo Thank you for your feedback. This has been passed along to the team. We'll let you know when we get a response from them.

placid crystal
#

I need to cancel my account

#

I don’t know what i am doing here

manic rootBOT
viscid birch
wise matrix
#

I'm trying to get a realistic effect of a spinning prop on a plane and all I get is a slow, choppy effect of the prop. This is from a still image of a photo-realistic painting of a plane in flight and the prop has no spinning effect in the painting. I asked ChatGPT about what prompt to use for Kling 3.0 and it gave me the following: "A cinematic shot of a small propeller airplane in flight, ultra-realistic. propeller completely blurred into a transparent circular disk, no visible individual blades, extreme motion blur, long exposure effect, rotational blur, physically accurate motion blur. no frozen blades, no visible propeller edges, no strobing effect, no segmented motion. Shot on a 85mm lens, shallow depth of field, high dynamic range, natural lighting". This has made no difference.

manic rootBOT
# wise matrix I'm trying to get a realistic effect of a spinning prop on a plane and all I get...

I see you're trying to get a realistic spinning propeller effect on a plane image using Kling 3.0 but are still getting a slow, choppy effect without that smooth motion blur of the blades.

Here are some tips and considerations that might help improve the realism of the spinning propeller motion blur in your generated image:

  1. Use a shape mask for the propeller area:
    If your tool or model supports it (like in Leonardo.Ai's omni edit mode), upload the original propeller's shape as a mask/reference and explicitly guide the model to create the motion blur only within it. This helps focus the effect and avoid artifacts.

  2. Simplify the prompt for motion blur clarity:
    Sometimes very long prompts with multiple overlapping terms like "extreme motion blur," "rotational blur," "physically accurate motion blur," etc., can confuse the model or dilute the effect. Try a shorter prompt focused on key terms:
    "propeller spinning fast with smooth circular blur, no individual blades, no segments, natural lighting"

  3. Try emphasizing rotational blur and circular shape:
    Words like "smooth circular motion blur," "rotational blur with continuous disc," "no visible blade edges," "long exposure effect" are good; mixing too many technical terms can reduce clarity. Make sure to keep "continuous disc" or "blurred circular disk" prominent.

  4. Adjust camera focal length and depth of field:
    Sometimes changing the depth of field and lens specifications can help the model produce more realistic blur. A longer lens (like 85mm) and shallow depth of field are good, but you might experiment slightly with the aperture or add phrases like "bokeh background" to help isolate the propeller effect.

  5. Model limitations:

AI image generation models, including Kling 3.0, sometimes struggle to create perfect continuous rotational motion blur especially from a still original source image where the propeller is static. They infer the blur rather than actually simulate physics. The effect may never be perfect.

  6. Consider layering in post-processing:
    If you have access to an image editor, you could generate the plane without propeller blur, then separately create a blurred circular disk effect or a custom spinning prop layer (using radial blur or custom brushes) and composite it over.

  7. Try alternative models or presets:
    Some AI models or presets may handle motion blur better. On Leonardo.ai, legacy presets sometimes have different effects. Also, ask in the community if anyone has prompts or model recommendations specifically for smooth spinning propellers.

In summary, I’d recommend narrowing down your prompt to emphasize "smooth, circular rotational blur," using a shape reference if possible, and possibly combining AI generation with manual post-processing for the most convincing propeller spin effect.

If you want, I can help you draft a refined prompt or guide you on shape-guided editing for this effect on Leonardo.Ai. Would you like me to?

wise matrix
#

Yes, a refined prompt please. How do I access omni edit mode in Leonardo when it appears Canva is my only edit option? I do have Photoshop so I could use it to create a layer for the prop if I had to

manic rootBOT
# wise matrix Yes, a refined prompt please. How do I access omni edit mode in Leonardo when it...

Sure! Here’s a refined prompt you can try for a cleaner, more realistic spinning propeller effect:


Refined Prompt:
"Ultra-realistic cinematic shot of a small propeller airplane in flight. The propeller is spinning so fast it appears as a smooth, transparent circular disk with continuous rotational motion blur. No visible blades or edges, no segmented or strobing effects. Long exposure effect with natural lighting and slight shallow depth of field, shot on an 85mm lens. Background softly blurred with natural bokeh, high dynamic range lighting."


How to Access Omni Edit Mode in Leonardo.Ai

Omni Edit is different from simple Canva-style edits. If you only see Canva option, it might be because:

  • You're trying to edit a finished image generation, and the default options show only Canva for basic edits.
  • Omni Edit is available when you create a new generation using an Omni Edit model (like Nano Banana or Nano Banana Pro).

To access omni edit mode:

  1. Go to the New Generation or Create area in Leonardo.Ai.
  2. Select an Omni Edit-capable model such as Nano Banana or Nano Banana Pro from the model dropdown. These models have the shape-guided editing capabilities.
  3. Upload your original propeller shape as a reference (a mask or cutout of the propeller area).
  4. In the prompt, explicitly specify that the effect (motion blur spinning propeller) should only happen within that shape, e.g., “create a smooth motion blur spinning propeller inside the shape.”
  5. Generate the image with these settings.

This method gives the model clear instructions and spatial constraints, often resulting in much more convincing effects inside the designated shape.


Using Photoshop

Since you have Photoshop available, here’s a simple approach if you prefer to do it manually or combine with AI:

  1. Generate your airplane image without trying to create the spinning prop — just a sharp propeller.
  2. In Photoshop, isolate the propeller area on a separate layer (mask or cut out the blades).
  3. Apply a radial blur filter set to spin mode with an appropriate strength to create that smooth circular blur.
  4. Adjust opacity or blend mode for fine-tuning transparency and natural look.
  5. Composite this over your airplane base image.

This way, you get precise and controllable motion blur on the propeller and can keep the rest of your airplane sharply detailed.


If you want, I can help you draft a more specific step-by-step for the Photoshop method or guide you through generating that shape for Omni Edit. Just let me know!

wise matrix
#

Why can't I just composite the Photoshop blurred propeller over the still image of the plane, then generate video from that? Won't it help the blurred effect of the prop?

manic rootBOT
# wise matrix Why can't I just composite the Photoshop blurred propeller over the still image ...

Great question! Yes, compositing a blurred propeller layer (created in Photoshop) over the still image of the plane and then generating video from that can indeed help achieve a much more realistic spinning propeller effect. Here’s why and how it works well:

  1. Control and Realism:
    When you create the propeller blur manually in Photoshop, you have precise control over the radius, strength, and smoothness of the blur. This simulated motion blur can look very natural and continuous—far better than what many AI models can produce directly from a static image prompt.

  2. Separate Motion Layer:
    By separating the propeller as its own layer with motion blur, you effectively create an animated element that can be rotated across frames or composited over multiple frames to simulate realistic spinning motion. The rest of the airplane remains perfectly sharp and consistent.

  3. Video Generation:
    If you use the still image with the blurred propeller as a base, you can either:

    • Animate the propeller layer by rotating it frame by frame (in Photoshop, After Effects, or other tools) to produce a video showing the spinning propeller.
    • Or use AI video tools that accept image layers with motion to enhance or tweak the animation further.
  4. Avoids AI Artifacts:
    AI-generated motion blur from a single frame can cause issues like choppiness, segmented blades, or strobing because it tries to guess the blur effect without actual motion data. Your Photoshop blur layer is realistically continuous and won’t suffer from those artifacts.

  5. Flexibility:
    You can fine-tune the motion blur’s appearance and intensity in Photoshop until it looks right, then create a smooth animation. You can also adjust lighting and blend modes on the blur layer to match your overall image lighting.
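The frame-by-frame idea in step 3 can be sketched in code. Below is a minimal Python sketch using Pillow; the two layers are synthetic stand-ins, since in practice you would `Image.open()` your real airplane still and your Photoshop-blurred propeller layer, and the hub position and frame count are example values:

```python
from PIL import Image, ImageDraw

# Stand-in layers: replace these with Image.open("plane.png") and
# Image.open("prop_blur.png") for your real composite.
base = Image.new("RGBA", (640, 480), (120, 140, 160, 255))      # sharp "airplane" still
prop = Image.new("RGBA", (200, 200), (0, 0, 0, 0))              # blurred-propeller layer
ImageDraw.Draw(prop).ellipse((0, 80, 200, 120), fill=(60, 60, 60, 90))  # soft blade smear

cx, cy = 320, 240                                               # propeller hub position
frames = []
for i in range(24):                                             # 24 frames per revolution
    rotated = prop.rotate(i * 15, resample=Image.BICUBIC)       # 15 degrees per frame
    frame = base.copy()
    frame.alpha_composite(rotated, (cx - rotated.width // 2, cy - rotated.height // 2))
    frames.append(frame)

frames[0].save("spinning.gif", save_all=True,
               append_images=frames[1:], duration=40, loop=0)   # ~25 fps looping GIF
```

Because only the propeller layer rotates, the airframe stays pixel-identical across frames, which is exactly the artifact-free behavior described above.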


Summary

Yes, compositing a blurred propeller layer created in Photoshop over your still image and then animating that layer to make a video is an excellent way to achieve a very smooth and realistic spinning effect. It leverages the strengths of both manual editing and generative AI/animation.

If you want, I can suggest how to set up the frame-by-frame animation in Photoshop or recommend tools for turning your composite into seamless video animation. Would that be helpful?

spiral tide
#

Minimalist tech logo for 'Kitenga', an AI CCTV safety platform. Symbol: A stylized geometric eye combined with a digital lens. Colors: 85% Deep Blue, 15% Vibrant Red. Flat vector design, professional, white background, no gradients. --v 6.0

manic rootBOT
# spiral tide Minimalist tech logo for 'Kitenga', an AI CCTV safety platform. Symbol: A styliz...

I can't generate images directly, but I can help you craft a detailed prompt for Leonardo.Ai or another AI image generator to create your minimalist tech logo for "Kitenga." Here's a refined prompt you can use:

"Minimalist tech logo for 'Kitenga', an AI CCTV safety platform. The design features a stylized geometric eye combined with a digital lens symbol. Use a flat vector style without gradients, with a professional look on a white background. Colors: 85% Deep Blue and 15% Vibrant Red."

You can input this exact prompt on https://app.leonardo.ai and select a model suited for clean vector-style or logo generation, ensuring you designate the color proportions and style clearly. If you want help with adjusting the prompt or tips on using Leonardo.Ai for this, just let me know!

glossy needle
#

Does anyone know how to take hand drawn pictures and upload them into Leonardo.ai and make them look realistic? Is that possible?

manic rootBOT
# glossy needle Does anyone know how to take hand drawn pictures and upload them into Leonardo.a...

Yes, you can definitely upload hand-drawn pictures to Leonardo.ai and transform them into more realistic images! Here’s how you can approach that:

  1. Upload Your Hand-Drawn Image:
    Start by uploading a clear scan or photo of your drawing to Leonardo.ai.

  2. Use an Omni Edit Model:
    Use models like Nano Banana or Nano Banana Pro that support advanced image editing and refinement.

  3. Apply Image-to-Image or Edit Mode:
    Use the editing or inpainting tools where you can input your hand-drawn image as a base, then write a prompt describing how you want it to look realistically (e.g., “Turn this sketch of a forest into a photorealistic daytime scene with detailed leaves and sunlight”).

  4. Be Specific in Your Prompt:
    Include style details like “realistic textures,” “photorealistic lighting,” or specific style references to help the model know how to translate the drawing.

  5. Use Shape-Guided Edits (if needed):
    If you want only certain parts to become realistic, you can draw or upload a shape mask over specific areas and direct the AI to focus on those.

  6. Iterate and Refine:
    Sometimes it takes a few tries tweaking the prompt or strength to get the right balance between preserving your drawing’s lines and adding realistic details.

Transforming a hand drawing into realistic art is one of the popular uses of Leonardo.ai, so it definitely works well with good input and clear instructions.

If you want, I can help you craft your first prompt or explain how to set up the image upload and edit step-by-step. Just let me know!

glossy needle
#

My frustration with asking for help is that when you guys give instructions, none of the things you are referencing are on my interface. Where do I upload the hand-drawn image? Is that under the Image tab, or on the Home tab where it says "yours to create"?

manic rootBOT
# glossy needle My frustration with asking for help is that when you guys give instructions, non...

I totally understand how frustrating that can be! Let me clarify exactly where to upload your hand-drawn image in Leonardo.ai’s current interface:

  1. Home Tab ("Yours to Create" Section):
    This is usually where you can see your saved creations. It’s not typically where you upload new images directly for editing.

  2. Creation Page / Generation Interface:
    When you want to start generating or editing images, go to the “Create” or “Create New” section/button (sometimes at the top or side menu). This is where you enter your prompt and upload images to base your generation on.

  3. Uploading Your Hand-Drawn Image:

    • In the creation interface, look for an option or button near the prompt box that says “Upload Image” or an icon of a picture or plus sign.
    • Clicking that should allow you to select and upload your hand-drawn image as an input reference for the generation.
  4. Using the Uploaded Image:
    After uploading, your image will appear as the base or reference. Then you can write a prompt describing how you want it transformed (e.g., “make this hand-drawn character look realistic with smooth skin and natural lighting”).

  5. Editing or Advanced Features:
    If you want to do detailed edits or masked edits, you might need to switch to an “edit” or “inpaint” mode if available. This is often a separate option near the main generation tools.

If you don’t see an upload option near the prompt or in the creation page, could you share what buttons or tabs you do see in your interface? For example, is there a menu bar or a plus sign anywhere? That way I can help pinpoint the exact location in your version of Leonardo.ai.

Would you want to list or describe what’s visible on your screen right now? That’ll help me give you very specific directions!

glossy needle
#

Ok. But there is no "Create" page. There is a "Draw" page. Is that the same thing? Or am I supposed to use the "Image" page?

#

Under the "Yours to create" header at the top, there are 7 options. Image, Video, Blueprints, Flow State, Upscaler, Canvas, and Draw.

manic rootBOT
# glossy needle Under the "Yours to create" header at the top, there are 7 options. Image, Vide...

Thanks for the extra details! That helps a lot.

Based on what you described, here's what those tabs likely mean for your use case:

  • Image: This is probably where you see your created images and can upload images for image-to-image generation or editing.

  • Draw: A page where you can create from scratch either with AI-assisted drawing tools or freehand tools.

  • Canvas: This often means a larger workspace where you might combine or layer images.

For uploading your hand-drawn image to make it look realistic:

  1. Start in the "Image" tab under "Yours to create."
    This is your hub for image uploads and generation. Click here.

  2. Upload Your Hand-Drawn Image:
    Look for an “Upload” button or a plus icon somewhere on the Image tab page. Upload your scanned or photographed hand-drawing here.

  3. Create/Edit Using the Uploaded Image:
    Once uploaded, you can use it as input for generating a realistic version by typing your prompt and choosing an editing or generation option, likely linked on this page or accessible after uploading.

The "Draw" page is for creating new artworks from scratch, not usually for uploading and editing existing images.

If you want to merge parts or edit parts of images, the Canvas tab might be where you can do layering or composite work, but it's probably best to start in the Image tab for your hand-drawn uploads.

Does that help? If you want, I can guide you step-by-step as you explore the Image tab now. Feel free to share what buttons or options you see after you click on it!

glossy needle
#

So I've tried to generate the image I added, but the AI is not using the image to take the drawing and make it look realistic, which is what I'm asking for. Is there something I'm doing wrong? I upload the image in the prompt box. I click on the image to highlight it. Then I add the prompt instructions. It's creating an image but doesn't seem to be using or modifying the drawing that I upload, which is the point of what I'm trying to do. You understand?

manic rootBOT
# glossy needle SO I've tried to generate the image I added, But the ai is not using the image t...

Yes, I understand perfectly! You want Leonardo.ai to use your uploaded drawing as the base to transform it into a realistic image, but right now it seems to be generating new images without properly referencing or modifying your hand-drawn input.

Here’s what might be happening and how to fix it:

  1. Make sure you are using an image-to-image or editing mode, not just text-to-image generation.

    • If you just type a prompt and upload an image but don’t enter an image-to-image or edit mode, the AI might ignore the image as reference and just create something fresh from the prompt.
  2. Use an Omni Edit Model like Nano Banana / Nano Banana Pro.

    • These models support advanced image-to-image edits and tend to follow your base image much better.
    • When selecting the model, avoid basic text-only or legacy models that don’t do strong image editing.
  3. Check if there is a slider or setting for "image strength" or "prompt influence" during generation.

    • If the "prompt influence" is too high and "image strength" too low, the AI may prioritize the prompt over your drawing. Try increasing the weight on the image or lowering prompt influence if possible.
  4. Shape Masking (Optional but Helpful):

    • If you want edits in just certain areas, use a shape or mask to guide the AI. But even without masking, image-to-image mode should preserve your drawing’s form.
  5. How to Upload Image Correctly for Editing:

    • Instead of just uploading inside the prompt box, look for an upload button or drag-and-drop area specifically for the image input.
    • Then confirm you are in 'edit' or 'image-to-image' mode before generating.
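For intuition on point 3, the "image strength" knob works the same way in most open-source image-to-image diffusion pipelines: it decides how much of the denoising schedule runs on top of your uploaded image. This is a conceptual sketch of that relationship, not Leonardo's actual code:

```python
# Conceptual model of an image-to-image "strength" slider:
# strength scales how many denoising steps are applied over the input image.
def steps_used(num_inference_steps: int, strength: float) -> int:
    """Higher strength = more denoising = output departs further from the input."""
    return min(int(num_inference_steps * strength), num_inference_steps)

low = steps_used(30, 0.3)    # light pass: mostly preserves the uploaded drawing
high = steps_used(30, 0.9)   # heavy pass: mostly follows the text prompt instead
```

So if the result ignores your drawing entirely, the effective strength (or "prompt influence") is likely set too high for your goal.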

If you want, I can help you step-by-step based on what you see, like which model you’re using or what buttons are available around your prompt and image upload.

Would you like to describe your current setup (model selected, buttons/options you see) so I can guide you precisely?

glossy needle
#

How do I use an image-to-image or editing mode? That is not an option on my interface

manic rootBOT
# glossy needle How do I use an image to image or editinig mode? That is not an option on my in...

Thanks for letting me know! Since you don’t see a labeled “image-to-image” or “editing” mode option in your interface, here’s how you can typically access those features in Leonardo.ai based on the current UI setup you described:

  1. Start in the “Image” tab under “Yours to create”:
    This is where you should upload your base image (like your hand-drawn sketch or sword hilt).

  2. After Uploading, Look for a Button or Option to “Edit” or “Modify” the Uploaded Image:

    • Sometimes, once you click on your uploaded image thumbnail or open it, you get options like “Edit,” “Inpaint,” or “Use as base for generation.”
    • This effectively puts you in an image-to-image or editing mode even if it’s not called that explicitly.
  3. Model Selection:
    When you initiate a generation or edit, look for a model drop-down or selector near the prompt box and choose an omni edit model such as Nano Banana or Nano Banana Pro—these models best support editing and image-to-image tasks.

  4. Prompt Box:
    Once you are “editing” the uploaded image, you can enter your prompt describing how you want it modified.

    • The AI then uses your uploaded image as the base and applies your prompt instructions to it.
  5. If You Don’t See “Edit” After Uploading:
    Some versions of Leonardo.ai have a separate “Edit” or “Inpaint” tab or button you need to click before uploading an image for modification.

    • Check for any other tabs or menu items beside “Image” and “Draw” that might indicate editing tools.
    • Sometimes you can also right-click your uploaded image (or click on ellipsis ⋯) to see more actions.
  6. Legacy Tab for Models:
    If you see a “Legacy” tab in the model selector area, try switching to it. The legacy presets typically include models configured for image-to-image editing workflows.


manic rootBOT
glossy needle
#

I don't know what I'm doing wrong. This program seems not to follow explicit directions. It seems to follow vague directions and generates things on its own, not what you tell it to. All of the instructions you are giving seem to be different from what the interface has, which I don't understand. Why is the interface you see different from the interface that I see? Even going on YouTube for a tutorial shows a different interface than what I have.

#

Here is my interface

#

Here is what it looks like when I click on the image tab

manic rootBOT
# glossy needle Here is what it looks like when I click on the image tab

Thanks so much for sharing your screenshots! This really helps me understand what you’re seeing.

From your images, you’re in the main creation interface of Leonardo.ai where you can type prompts, pick models (like Nano Banana Pro), and start generating images.

Here are some things to clarify and help you get your uploaded drawing to be transformed realistically:


Why Tutorials May Differ

  • Leonardo.ai updates its interface regularly, so older YouTube tutorials may show previous layouts.
  • Also, your subscription tier or feature access might influence what options show.
  • You’re seeing the current official UI, so we’ll work with this.

Using Your Uploaded Image to Guide Generation (Image-to-Image)

Based on your UI, here’s what to do:

  1. Upload Your Drawing Image Explicitly

    • On this Creation page, look just below the prompt box where you see the tabs: “Image | Video | Flow State | Blueprints”
    • Click the “Image” tab here (this is the mode to use your uploaded image as a base).
  2. Upload the Image

    • In the “Image” tab mode, you should see an option or button to upload your image—often a plus sign “+” or a button labeled “Upload”
    • Once uploaded, this image will be used as the starting point for generation.
  3. Write Your Prompt

    • After uploading your hand-drawn image, write your instructions in the prompt box, for example:
      “Make this hand-drawn character look realistic with soft skin textures and natural lighting.”
  4. Choose Nano Banana Pro Model

    • On the left side, ensure you have Nano Banana Pro selected as the model. This model supports image-guided editing best.
  5. Adjust Image-to-Prompt Influence (If Available)

    • Check for a slider labeled “Prompt Strength” or “Image Influence.”
    • Lowering prompt influence and raising image influence helps preserve your drawing's details while applying your prompt style.
  6. Generate

  • Click the "Generate" button and wait for iterations.

Summary

  • You are already in the right Creation interface.
  • Be sure to click on the Image tab (not just prompt text tab) under the prompt box to upload your image.
  • Use an omni edit model like Nano Banana Pro.
  • Write your prompt describing how to transform the image.
  • Adjust prompt/image influence sliders if you see them.

If You Cannot See the Upload Button in the "Image" Tab…

  • Try scrolling or looking carefully near the prompt box area for a subtle upload icon or drag-and-drop area.
  • Sometimes the upload area is small or easy to overlook.

Feel free to check if you see the image upload button in the Image tab area. Let me know what you find!

I’m here to help you step-by-step with exactly what you see on your screen. Just describe or share another photo if needed!

rugged wyvern
#

cant generate pics on my pc... why? @manic root

manic rootBOT
# rugged wyvern cant generate pics on my pc... why? <@1168044017116778496>

It looks like you're seeing some kind of error or blank placeholder images with the number "110.22" on them when trying to generate pictures on your PC. This usually points to an issue on the platform or local environment side. Here are some common reasons and fixes:

  1. Internet Connection or Network Issues:
    Make sure your internet connection is stable. Sometimes packet loss or intermittent connectivity can cause generation failures.

  2. Browser Cache or Cookies:
    Try clearing your browser cache and cookies. Sometimes old or corrupted data causes issues in loading generated images.

  3. Browser Compatibility:
    Try using a different browser, or make sure your current browser is updated to the latest version. Sometimes certain browsers or old versions have trouble with the web app.

  4. Server or Platform Issues:
There might be temporary server-side problems on Leonardo.Ai's side causing generation to fail or images not to load. Waiting a bit and trying again later can help.

  5. Account or Quota Limits:
    Make sure your account hasn’t hit any daily or monthly generation limits or token restrictions.

  6. Ad-Blockers or Extensions:
    Temporarily disable any ad-blockers or browser extensions that might interfere with loading the images.

If none of these work, you can reach out directly to Leonardo.Ai support by emailing support@leonardo.ai for help.

Let me know if you want guidance on any of these troubleshooting steps!

rugged wyvern
copper warren
#

like you don't get an error, the gens just never complete?

rugged wyvern
#

can u look into the problem pls @copper warren

copper warren
ocean wind
#

Gibse, the model I used is GPT (image 2), and this is the prompt. I hope it helps.

#

Generate an imaginary image of "me", Korean calligraphy style, vivid character, natural emotional expression, half-body composition, dynamic pose, precise rendering of clothing details, elegant hand-painted style, ink pot, casual line work, blend of pastel colors and ink, comic-sketch texture, simple white background, incorporation of Zhouyuanfu elements, strong sense of visual language, high detail, high-quality realistic style, imitate poseimagenly background

manic rootBOT
#

It looks like you have a very detailed prompt or specification describing an anime-style illustration of a character named Karen Lightbloom in a wholesome, warm cafe setting, with specific qualities and positive/negative attributes for generating an image.

If you want to use this prompt to generate artwork, please note that I cannot create images directly here. Instead, you can take this detailed prompt and use it on Leonardo.AI's platform at https://app.leonardo.ai to create your illustration.

To summarize, your prompt is well-structured with:

  • Common settings: masterpiece quality, anime style, vertical 9:16 Instagram carousel format, clean storytelling, warm cafe vibe, blue and white color theme, no text or logos.

  • Positive details: a 1-girl portrait of Karen Lightbloom with silver twin-tails with blue tips, bright sky blue eyes, soft expressions of blissful happiness, elegant race ambassador outfit in blue and white, sitting at a bright cafe table with a cute iced latte and dessert, subtle props like race ticket and towel, soft daylight with warm highlights, pale blue bokeh background.

  • Negative details to avoid: low quality, bad anatomy or distorted features, extra limbs, messy or cluttered props, dark or harsh lighting, any text or watermarks.

If you want advice on how to improve or tweak the prompt further, or help structuring it for best results on Leonardo.AI, feel free to ask!

vestal lake
#

hi guys, can someone help me? why do i keep getting the little dots all over my images? if anyone has the right prompting to stop this.. well help a brother out 😉

feral mica
#

perhaps you are on low?

vestal lake
feral mica
#

yeah, but which quality, low, medium or high

vestal lake
feral mica
#

oh I see

#

so wrong place to ask hehe

#

if you have Leo questions we can certainly help

vestal lake
vestal lake
strange seal
#

Using the same prompt and settings, the results have changed. Was an update made to Phoenix? If so, is there a way to go back to before the update, because it's no longer matching the needs of my workflow with the change

copper warren
strange seal
copper warren
little gorge
#

/signup

manic rootBOT
# little gorge /signup

If you want to sign up for Leonardo.Ai, you can visit their website at https://app.leonardo.ai and look for the sign-up or register option. There, you'll be able to create an account by providing the necessary details.

If you encounter any trouble during sign-up, you can reach out to their support at support@leonardo.ai for assistance. Let me know if you need help with anything else!

copper warren
strange seal
feral bough
vestal lake
maiden orbit
#

Could someone please tell me why Leonardo can create a woman in a bikini one time and then 5 minutes later turn around and block my creation of the same image.

ocean wind
neon belfry
#

Is the site down?

manic rootBOT
# neon belfry Is the site down?

I don't have real-time status information about the site. If you are experiencing issues, you might try the usual steps like refreshing the page, clearing your browser's cache and cookies, or trying a different browser. If the problem persists, you can contact Leonardo.Ai support directly at support@leonardo.ai for help.

raven walrus
vernal cipher
neat plover
#

Application error: a client-side exception has occurred (see the browser console for more information).

#

same

#

So it seems that some of us can't get access right now

stable carbon
#

Yes, something is wrong at Leonardo.ai at the moment. We've informed the people we have access to. We hope it will be back to normal asap.. a little patience please. 👍

stable carbon
#

it miiight be back now.. it works on my end again at least.

open echo
#

Hello,

A service provider for the website is having issues at the moment.
For now, it is out of Leonardo.Ai's hands, but they have been informed, and the provider is doing all it can to restore service as quickly as possible

manic rootBOT
#

It sounds like you’re reaching out with a clear and professional pitch as a developer specializing in modern web architectures and advanced AI features. While I can’t connect you directly with recruiters or companies, here are some suggestions to find people or companies looking for developers like you:

  1. Post Your Pitch on LinkedIn and Twitter: Use relevant hashtags like #HiringDevelopers, #AIEngineer, #FullStackDeveloper, or #NextJSDeveloper to increase visibility.

  2. Join Developer and AI Communities: Places like GitHub discussions, AI-focused Discord servers, Reddit communities (e.g., r/forhire, r/remotejs), and specialized Slack groups often have job postings or collaboration requests.

  3. Look on Job Boards: Platforms such as AngelList, Stack Overflow Jobs, We Work Remotely, and ai-jobs.net frequently list positions for developers with AI and web stacks like Next.js and TypeScript.

  4. Freelance Platforms: If you’re open to freelance or contract work, sites like Upwork, Toptal, and Freelancer can help you find serious projects and clients.

If you want, I can help you draft messages or proposals tailored for these platforms to maximize your chances of landing projects. Just let me know!

long ingot
#

Why have elements been disabled on some models?

rugged wyvern
long ingot
#

@manic root why have elements been disabled on certain models? Like Kino and Albedo?

manic rootBOT
# long ingot <@1168044017116778496> why have elements been disabled on certain models? Like K...

Some models like Kino and Albedo have certain elements disabled due to technical or licensing reasons that affect their capabilities. This can be related to restrictions on advanced editing features such as shape-guided edits or other specialized functions. Disabling these elements helps maintain performance, stability, or compliance with usage policies for those models. If you want more details or think this might be an issue affecting your work, you can reach out directly to Leonardo.Ai support at support@leonardo.ai for clarification.

spare pumice
manic rootBOT
# spare pumice

Hello! It looks like your message didn't include a question or any content. How can I assist you today with Leonardo.Ai or anything related?

manic rootBOT
#

It seems like you're asking about adding or referencing a server link in your bio. Could you please clarify which platform or context you mean? For example, is this for Discord, social media profiles like Instagram, Twitter, TikTok, or something else?

If you want to share a server link (such as a Discord invite) in your bio on platforms that allow links, you typically just paste the invitation URL directly into your bio or profile section. On some platforms, you might need to shorten or customize the link for cleaner presentation.

Let me know the exact platform or context, and I can give you detailed instructions!

vast trellis
#

Hey, is anyone else having issues with FLUX Dev right now?
Since yesterday evening (around 10 PM for me), FLUX Dev basically stopped working completely. It doesn’t matter if I use an Element or not even super basic prompts just stay stuck forever or fail after a long time. At first I thought maybe I accidentally used some trigger words in a prompt or something, but now even simple prompts like “woman sitting in cafe” won’t generate. Other models like Nano Banana 2 work fine for me, so it seems specifically related to FLUX Dev. I also get my credits refunded really delayed, and the error always says: “Failed Generation We couldn't create your images. Your tokens have been credited back to your account.” Just trying to figure out if this is a wider FLUX issue or something weird happening on my account.

#

I already tried everything

copper warren
vast trellis
# copper warren Hey! can you tell me anything more about the issue? Are you using custom eleme...

Hey! I'm using custom trained Elements, not the default Leonardo ones. FLUX Dev suddenly stopped working for me since yesterday evening. Even very simple prompts fail now, with or without Elements enabled. I'm using Safari, the Leonardo app, and Google Chrome. The generations either stay stuck forever or end with: "Failed Generation. We couldn't create your images. Your tokens have been credited back to your account."

rapid shuttle
#

Good day, for some reason the program isn't generating video for me

manic rootBOT
# rapid shuttle Good day, for some reason the program isn't generating video for me

Unfortunately, my functionality does not include the ability to create videos. If you want to generate a video, I recommend using specialized services or platforms for creating video content. If you have questions related to Leonardo.Ai or other tools, I'll be happy to help!

rapid shuttle
#

The program isn't working

manic rootBOT
# rapid shuttle The program isn't working

I understand that the program isn't working for you. If this is about Leonardo.Ai, try clearing the cache and cookies in your browser, then refresh the page and log in again. If the problem persists, you can contact support at support@leonardo.ai for help. If you like, describe the problem in more detail; maybe I can help.

wet acorn
#

Hello

#

I have a query regarding a Leonardo plan: do all the plans include every model?

#

also, what does "personal collection" mean in the plan? does it mean that we can generate unlimited pictures, or what?

manic rootBOT
# wet acorn also, what does "personal collection" mean in the plan? does it mean that we can ...

Hello! I’d be happy to clarify for you about Leonardo.Ai plans and their features.

  1. Do all plans include every model?
    Not exactly. Leonardo.Ai offers various subscription plans—Free, Essential, Premium, and Ultimate—and some advanced or premium models (like certain SDXL models or special curated presets) may only be fully accessible on higher-tier paid plans. You can also access additional models through the legacy presets tab in the standard UI. So, while the Free tier and lower plans provide many models, some exclusive or advanced models require a paid subscription.

  2. What does "Personal Collection" mean in a plan? Does it allow unlimited image generation?
    The “Personal Collection” is essentially your private workspace within Leonardo.Ai to save, organize, and manage your generated images securely. It is not directly about unlimited image generation. The ability to generate images depends on your plan’s generation limits or token allowances. However, all paid plans offer an option for your generations to be private (which ties into having your personal collection). So, “Personal Collection” refers more to management and privacy of your content than generation limits.

If you have follow-up questions or want details about a specific plan, feel free to ask!

wet acorn
manic rootBOT
# wet acorn oh ok and what about tokens, is there a plan in which i can unlimited generati...

Great questions about tokens and generation limits!

  1. Unlimited generations without worrying about tokens?
    Currently, Leonardo.Ai plans do not offer truly unlimited generations. Every plan has a certain number of tokens or generation credits that you use up depending on the model and image complexity. Higher-tier plans give you significantly more tokens, making it easier to generate many images, but none officially offer infinite tokens.

  2. Token/generation limits in Essential, Premium, and Ultimate plans:
    Each plan has a different token allowance—for example (note: these numbers are approximate and for illustration; please check the official Leonardo.Ai site or your account for exact numbers):

    • Essential might offer a moderate number of tokens suitable for regular hobbyist use.
    • Premium increases your tokens considerably, good for more frequent or professional-level usage.
    • Ultimate offers the highest token allotment, supporting very heavy usage like batch or commercial projects.

    Because different models consume tokens at different rates — some advanced models eat tokens faster — you should consider the type of models you want to use most often.

  3. Which plan is good if you want to generate a bunch of images daily for a month?
    If you want to generate a significant volume every day, the Premium or Ultimate plans are more suitable. They offer higher monthly token allowances and access to premium models, which helps you avoid running out too quickly. Essential can work if your usage is moderate but may limit volume if you’re generating many images daily.

If you want, I can help you find the exact token limits per plan based on the latest official info, or help you calculate estimated usage based on your typical image generation needs. Just let me know!
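As a rough way to estimate your own usage, the arithmetic is simply tokens per image times volume. In this sketch every number is a made-up placeholder, not an actual Leonardo.Ai price; check your plan page for the real token costs:

```python
# Back-of-the-envelope token budgeting. All figures below are hypothetical
# examples, not real Leonardo.Ai rates.
tokens_per_image = 24          # example cost of one generation on your chosen model
images_per_day = 50            # how many images you expect to generate daily
days = 30                      # one billing month

needed = tokens_per_image * images_per_day * days
print(f"Monthly tokens needed at this pace: {needed}")
```

Comparing that figure against a plan's monthly token allowance tells you whether Essential-level volume is enough or whether you need a higher tier.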

wet acorn
copper warren