#🙋│image-help
your prompt with the monster told to be a "COLOSSAL SLENDERMAN"
should i write colossal slenderman?
I think you've got a really repeatable prompt, just play with your wording on the monster
if you know
until you actually get what you're hoping for
A technique I use, inspired by tech support, is to start back at the “minimum functional state”
Start a brand new prompt and just put the bare minimum you want to see, then iterate from there. Often too many words make it difficult to zero in on what is causing the issue.
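To make the "minimum functional state" idea concrete, here's a tiny sketch of the workflow: start from a bare prompt and add one detail at a time, so when a generation drifts you know exactly which addition caused it. (The prompts here are just examples; plug the output into whatever generator you use.)

```python
def build_prompt_steps(base, details):
    """Return the sequence of prompts to try, shortest first.

    Generate each one in order and stop (or back up) as soon as the
    result stops looking like what you want.
    """
    prompts = [base]
    for detail in details:
        prompts.append(prompts[-1] + ", " + detail)
    return prompts

steps = build_prompt_steps(
    "gargantuan cosmic entity devouring a planet",
    ["tentacles, grotesque", "seen from space", "cinematic, cosmic horror"],
)
for p in steps:
    print(p)
```

The point is that each prompt differs from the previous one by exactly one clause, which is what makes the debugging possible.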
imagine instead of the big moon or planet, it was a big mouth the size of the planet thats captured
enter the world devourer
then I'll get something like the marvel guy
I don't know, I think I'm searching for a horror-like creature
better, but a polar bear doesnt really fit the universe type style
ill start a new prompt
I think you just need to play with the prompt, changing the monster still seems to leave the rest!
Oh? A planet eater?
more like that yes
now just imagine the camera perspective is placed on earth
the monster just would cover the whole sky
What did you write as a prompt?
I think I'll just cut the whole previous monster part and replace it with the planet eater thing
I think it still interprets a titan into the monster
Really short and to the point:
Screenshot, tv show, modern anime. Gargantuan cosmic entity, tentacles, grotesque, devouring an entire planet. Cinematic, cosmic horror, seen from space. Background of black space
thanks, Ill build a new prompt right now
i think i got something
well
it's kind of better than before
“Well…. That’s gonna ruin my weekend…”
it surprisingly perfectly set the camera angle just like in the old long prompt
Theres something missing
is there maybe any movie monster that could fit that description?
Maybe a eren like founder titan monster?
That's just really long but thin and horror-like
Sirenhead? 🤣
oh I got one like that and dismissed it as too "titany" 😄
whoops
nice! glad you have one you love
i think it just was the eyes and the mouth
the other titans just didnt do anything
Now, would it be theoretically possible to take that image and transform it to something like an anime but with realistic effects
I dont know what the anime is called
but theres like animes that are just 2d animes nothing to do with reality
and then real effects
Depth to Image in Image Guidance is your friend 😄
How much depth is recommended?
❤️ the “Duo” Facing zee titan 
I’d start it at max, and lower it only if it stays too close to the original
The fusion of media types is going to be difficult for the AI to generate.
i mean something like this
I now have it as image guidance. Do i delete the prompt now and write "Realistic effect and sfx"?
Keep the subjects in the prompt but change the style
Depth to Image can really do magic with existing pictures 🤣
woah
it didnt change anything 😂 (for my image)
it somehow
does not change a pixel
also if i use guidance scale 0.5
oh now it did something
Guidance Scale determines adherence to the words of the prompt. It should be left at 7 most of the time
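For the curious, here's roughly what a guidance scale does under the hood, assuming the standard classifier-free guidance formula used by most diffusion models (Leonardo doesn't publish its internals, so treat this as illustrative):

```python
def apply_guidance(uncond_pred, cond_pred, scale):
    """Classifier-free guidance: blend the model's unconditional
    prediction with its prompt-conditioned one, element-wise.

    scale 1.0 -> just the conditioned prediction; higher values push
    harder toward the prompt (and eventually over-saturate).
    """
    return [u + scale * (c - u) for u, c in zip(uncond_pred, cond_pred)]

# Toy numbers: at scale 7 the difference between "no prompt" and
# "with prompt" gets amplified 7x.
print(apply_guidance([0.0, 1.0], [1.0, 3.0], 7.0))  # [7.0, 15.0]
```

That amplification is why a default around 7 follows the prompt closely, while extreme values tend to produce fried-looking images.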
Ah! Use Depth to Image instead of Image to Image
depth to image really is better:
Thanks for the support ooo3r1cooo, Tuxman and azothane 
I wish we could select the motion type when generating videos. It's frustrating trying to get decent ones while getting tons that don't even fit the image, or even its general direction.
hey, I wonder if I can somehow create damaged hair? I tried lots of prompts, and also used a prompt generator. GPT also came up with restyling words, but every time I get the same thing: pure healthy hair.
Maybe someone has ideas on how to prompt for a thin damaged hair?
Imperfection is one of the hardest things to generate with AI, because it's been trained not to generate anything imperfect
ah, ok. thanks!!!
how do you make images
You have to go to https://leonardo.ai
By participating in discord. Talking sharing images and participating in the daily challenges.
Make an account. Then click on image generation on the right hand side.
oh!
Hello
Hello 👋👋 You can create images here -> https://leonardo.ai/ (available once you register)
We also have a Chinese channel: #中文-ch
Feel free to come ask if you have any questions 🤗🤗

araba
Is it possible to upload a screenshot of a game and make it into a realistic photo?
My first guess would be to use Depth to Image in Image Guidance, then to start the prompt with “photorealistic” and describe broadly what is depicted in the screenshot
Just a quick experiment 😅 with more time it can be tweaked to get more realism and then adjust objects in the Canvas Editor
Omg this is brilliant hahahaha
I'm gonna try it now
Let me guess…. Some unnamed block based creative sandbox? 😉
Hahahaha no I'm gonna look for some characters to make into cursed photorealistic versions ;p
Therapist: Hyperrealistic bowser is not real, he can't hurt you
Hyperrealistic bowser:
ikr, this is a great idea
WAIT
DAILY THEME IDEAAAAAAAAAAAAAAAAAAAAAAAAA
imma run to robin
is there a detailed guide that I can be directed to in order to better understand how to get Leonardo to render images correctly ?
thats a big question - what kind of errors are you seeing?
@tardy hazel here is a guide on https://discord.com/channels/989166677390426132/1105168997248413727
thank you so much
the most confirmed way of doing it is by uniquely naming the character and using the same descriptors every time
"Jenevra is a twenty year old caucasian lady, with loose brown hair and hazel eyes, wearing well-worn jeans and a Metallica t-shirt"
and then always describe her as Jenevra with brown hair and hazel eyes, and whatever else you want
You can use the same name, it's not really TEACHING the AI anything, we've just anecdotally seen that giving a name helps anchor a lot of the same details
you can play with the name and might get a few differences in appearance too
so don't stress too much over a unique unique name... could even be Karen
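A quick sketch of the naming trick described above: keep one canonical character description and reuse it verbatim in every scene prompt, so the model anchors on the same keywords each time. (Name and descriptors are made up for illustration.)

```python
# Fixed, reused-verbatim character description -- the "anchor".
CHARACTER = ("Jenevra is a twenty year old caucasian lady, with loose "
             "brown hair and hazel eyes")

def scene_prompt(scene):
    """Prepend the fixed character description to a scene description."""
    return f"{CHARACTER}, {scene}"

print(scene_prompt("sitting in a cafe, reading a worn paperback"))
print(scene_prompt("hiking a mountain trail at golden hour"))
```

The helper just guarantees you never retype (and accidentally vary) the descriptors, which is where consistency usually breaks down.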
Got it, I'm trying to automate character creations and then consistent images for that character in different settings, clothes etc
sure
it will still be a bit of a dice roll, as it is with all AI gens
but have found good results with this practice
have you played around with model training?
so if I feed it some arguably NSFW images, would that get me closer to what I want?
I don't know what you want, but what you put in is what it will output
read your TOS carefully
Will do, thank you for all the help
oh... you must pay for a Leo sub?
I'm already paying
OK there's another Role for that with its own channel
if you DM me your user so we can verify that it's subbed you can get the exclusive role
or post here and I'll delete immediately
sent you a DM
I got it and I'm seeing if somebody can grant the role!
you can either use prompts for Colouring Book in XL models (I like Diffusion for it) or there are Community Models specifically built for colouring books
Thank you
Thanks. I will try that now.
Anyone mind sharing their favorite negatives to prevent multiple people having the same face? Mine aren't doing so good.
In the past, I've used "diverse group" in the positives and it seemed to work. I don't know if that's something you already tried
I'll try that thank you!
Either with an image reference set to edge to image or text image input or you add it afterwards with a different program.
ok thanks
creat logo name city 068
Does anyone have good advice on getting the AI to write text?
For example for a friend he wanted a robot standing with a Neon logo behind saying TOMBOT.
I had to generate a lot of pictures before it spelled TOM correctly, then I used that picture to "guide" the generation forward (image to image), until many generations later it spelled the full word correctly and I could use that one as image guidance. Final result was this below:
I'm just asking if there is a simpler way to make it understand words or nicknames that aren't "coca cola", for example 🙂 Thanks in advance // Hilik
Words are always hard for the AI, and probably will be for some time. There are 2 premium image-to-image settings that do it well though: Edge to Image or text image reference. Or add it afterwards in a different program. But despite the difficulties you managed to get a good result.
Took a lot of coins though XD. So maybe adding an image to image of a robot, then another image with TOMBOT text to image would speed up the process maybe? 😄 @robust gulch Thank you very much for the reply 🙂
I've tried making a similar picture spelling "Hilik" but I gave up after 500 coins XD
I have seen prompts where it would say 2girls 1guy at the beginning to ensure the image is of 2 girls and 1 guy, but is there any reason to think this is truly helpful?
I have been trying to create pictures of a cheerleading squad with their mascot (a dog), but most images will have more than one dog in it, I didn't have the expected results putting "multiple dogs" in the negative prompt.
I kind of like the images I get, but for my purpose having a second (or more) dogs will not fit the story behind.
@robust gulch I'm trying to add two guiding pictures but can't seem to get it to work, can you maybe see what I'm doing wrong XD?
You can't add more than one at a time. Sorry if I confused you.
Oh ok then i get it , thanks again man ❤️
No problem!
unfortunately at this time, multiple subjects is a place where the AI is weak. I too have seen those types of prompts, but honestly it's still luck whether it works or not. The tech behind the generating doesn't have this type of prompt structure
You CAN add more than one image to an input using paid features, which I can see you are using. (if you send me your Leo profile name I can get you another role) I don't think there's a nice way of doing this how you are hoping though
the reason it's failed here is that you're still using Alchemy with Prompt Magic on while trying to use multiple inputs
you basically have to strip it back to nothing to enable multiple controlnets
secondly, the output of this type of 2 image input would likely have been just HILIK with the image fit inside the text
Canvas is your best bet for adding text to a generation you like already (if Leo can't figure out the text like it did with Tombot - very cool by the way!!)
I gonna try that 😄 thanks for advice and you taking a time to answer
And when all else fails, there's always the possibility of a little elbow grease and removing the extra characters manually through photo editing application 😉
https://discord.com/channels/989166677390426132/1119326564631449710
Im a rookie at editing and ai generation so i hope for the AI to figure it out for me XD
yeah this is a very common question, and in short - it's best if you planned to keep them from the start: https://discord.com/channels/989166677390426132/1105168997248413727
the best way we have seen in practice to keep a consistent character by prompt alone, is by giving them a unique name, and describing them the same way every time you try to use that character
but trying to apply that trick now may change what this guy above looks like since he may not have been named originally
other things you can do include using controlnet, or creating a trained model with multiple generations of this character that look the way you wanted, but those are more difficult and only useful in certain situations
you can put the name where it makes sense in the prompt - "Floppenheimer is a man in his early thirties with close cut blonde hair and a stoic expression, he sits in front of his monitors watching the successful launch of an ICBM that carries a nuclear device he created"
but the next one might be more environmental - Environmental photo of grey storm clouds, the wind batters down on Floppenheimer, a man in his early thirties with close cut blonde hair and a stoic expression"
etc
the important part is nailing the same Keywords
describe the parts of the character that you want the AI to really try and keep always
most of my prompts are more like: man sitting at computer, ICBM launch seen through viewing window, multiple monitors tracking progress, highly detailed CG render, octane render, hyperrealism
but that would make a random man every time
you're not really TEACHING the AI anything, but we've found that using a name + description helps really anchor the same look in a generation over and over
it's still technically random
yes hopefully
it's a very common question, so if it can be done more easily, I'm sure that's going to be a simpler controlnet or something in the future
@idle ocean
did you put the text in
if so its yours
Yes.
yea its not just AI
Only the character is generated based on a prompt, I changed his colors and edited skin on some.
Bet.
but you just won't be able to gain copyright
hey listen, we're also not Lawyers... 😄 but your own generic character, plus your own editing, I don't think you need be worried. Still we do like more Leo users 😄
I was against AI. But it opens up a realm of possibilities for sure.
Yeah I got the app via this dude.
yea
He mentioned leonardo but google showed davinci lmao
use leonardo
What do you guys use the app for?
Are you a graphic designer?
I can call myself that yes
Ah you do like 3d modeling of houses etc?
I do use AI but I am 100% sure my work is not just a first simple prompt of x
I love blender
but lets go to #💭│off-topic for that
Also Dutch, I see haha. Yeah, graphic design definitely is broad.
or #💬│general if its AI
I have to get back to my business but I will try and be active in the chats thanks!
If I want this style images any tips how to accomplish that?
I'm getting this, but that's more Mortal Kombat
I can do it myself via Photoshop, but it's easier the closer I can get with the app itself.
Just a shot in the dark, but to remove the photorealistic aspect, I’d start the prompt with “Digital art” or “digital painting”. And maybe “cyberpunk” in there to hype up the technology aspect?
It’s really just an intuition.. I didn’t test it 😅
Thats a great idea, also I see you can use image guidance. Will use both thanks
That worked.
Someone know what to do?
how do i add Motion to an image where i removed the background? it only applies to main image
Indeed, for now Motion only works on the main image (maybe in the future?)
Hard to say, what were you trying to post?
Hand Wash
Hello and welcome to the Leonardo.Ai Discord server! 😊 I am Verrocchio, an AI assistant, here to help you get access due to the absence of human helpers at the moment. To get started, could you share what kind of images you are interested in creating with Leonardo.Ai?
A unique and very attractive hand wash
That sounds intriguing! To create visually appealing designs like the hand wash you're interested in, Leonardo.Ai's webapp offers a variety of tools. Before we proceed, have you already created an account at https://app.leonardo.ai to start generating your images?
Yes please
Great to hear that you've taken the first step! Could you please tell me a bit more about how you plan to use Leonardo.Ai? For instance, are you a hobbyist, game developer or looking to integrate it into a business?
I am Content creator
Being a content creator is exciting, and Leonardo.Ai can certainly enhance your creative process with its various features. To best support your endeavors, our community is a place to share insights and seek inspiration. Are you interested in participating in discussions, challenges, and sharing your work with fellow creators?
Yes please
It's great to have you on board! You've been approved to access the rest of the server. 🎉 Check out the #🖼│image-share channel to share your creations and maybe join the #1108130218260041758 for daily image challenges!
Thanks a lot
Guys, I'm new here and I have a question - I've trained my model, put in like 30 photos of my model. But when I want to generate a new graphic with my model - and I of course set my finetuned model correctly and use the model name in the prompt - I don't get anything resembling my model. What am I doing wrong? Because you guys do this and get great results as well
what did you train your model on? 30 images of what? for example 30 images of yourself, or ..?
Ive put like 30 different photos of my AI model into new dataset
how do you get the character to look as close as possible to a specific character? like a game character? Does it need to be a popular one like Son Goku, or should the AI model know a lot?
It does do best with popular characters yes
With other ones you're gonna need to describe their features in great detail
I see, that makes sense. Does image guidance help with that, or rather not?
Somewhat, but getting poses different from the input image will be hard
hi
are you sure it was flagged for the word war
im trying to do true crime stories and the AI seems to accept
ok good
yes im in the exact same position
so you need to describe you character in detail first and give him a name
yes ive seen that too
keep your first prompt in full describing the officer, then add to it
Anyone a graphic designer here?
first of all, in the second scene Leo doesn't know to make it black and white
what is the prompt for this?
Is this allowed? The above one is the edited version. For my channel. The whole art is yours?
you're not specific enough with the uniform
also the background is confused with the prompt. WW2 scene or forest scene?
me too ork
Ur making a video? or a blog
im using the still shots
You can also use Motion for short clips using AI
People have made awesome stuff with that
yes some of the shots i render into video
i got one
Do you have your own discord? Can I join it to see the process
Is it about a specific person, an officer? Or just officers or Germans in general?
I might be able to give you some insight and perspective.
Yes you can.
Have you ever had a chance to watch stuff like "Scary stories" On youtube? Where a youtube artist narrates a video with only voice using images that are still and found on the internet mostly?
You can edit the image yourself to make color's match, for example in the lower one you can lower the saturation and increase the contrast and highlights.
For example
Subject Description + Type of Image + Style of the image+ Camera Shot + Render Related Information.
He uses a still image, makes it move by zooming in and adding moving effects ,and adds a raining effect
I found this online and it seems to work better for me
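That "Subject + Type of Image + Style + Camera Shot + Render Info" structure can be turned into a tiny helper, so every prompt follows the same order and empty parts are just skipped. (The example values are mine, purely illustrative.)

```python
def structured_prompt(subject, image_type="", style="", camera="", render=""):
    """Join the five prompt parts in a fixed order, skipping empty ones."""
    parts = [subject, image_type, style, camera, render]
    return ", ".join(p for p in parts if p)

print(structured_prompt(
    subject="robot eating a bowl of colorful wires",
    image_type="3D render",
    style="hyper realistic",
    camera="close-up shot",
    render="octane render, highly detailed",
))
```

Keeping the order fixed also makes A/B testing easier: change one slot at a time and you know which part moved the result.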
they are not black and white
dont give up it can be done ive been trying every day for 2 weeks
what we are trying to do is much harder than a one off image
You can make color red etc
But it's not about a specific character, right? So it shouldn't matter too much what the character looks like, as long as you nail narration, color grading, theme, stances
Add fog or other particles and even physics
right then its a bit more difficult I agree, perhaps use warp tool to work on cheekbones and coloring alongside it.
DOesnt have to be perfect IMO But I understand ur concern for sure
Very cool project.
you can have much better results
Basic thing you learn in filmschool is visuals are not as important as sound. So as long as you nail the script and narration and audio. Im SURE you can work out an amazing project my G
Watch this whole video for inspiration on perhaps to implement aspects for ur project I'd say
Hi, the AI models are not trained on images of very specific people like this. You can recreate celebrities with AI but that's only because there are so many images of them. AI does not know who Friedrich Lengfeld is so unfortunately you won't be able to accurately recreate him.
good
That is also not easy to do with current AI tech, but there may be some things in development for that. The best you can do now is: 1) Keep the seed the same, this is under Advanced options 2) Give the character a name; you're already doing that, and that's good for consistency, but its not guaranteed that your character will look like the actual person you want.
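On why "keep the seed the same" helps: a fixed seed makes the random draws that drive generation repeatable, so the same prompt plus the same seed tends to reproduce the same image. The same principle, demonstrated with Python's own RNG (the image model's RNG is internal, so this is just the concept):

```python
import random

def draw(seed, n=3):
    """Draw n pseudo-random numbers from a generator seeded with `seed`."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

print(draw(42) == draw(42))   # same seed -> identical sequence
print(draw(42) == draw(43))   # different seed -> different sequence
```

That's also why changing anything else (prompt, dimensions, model) while keeping the seed can still shift the result: the seed only pins the randomness, not the rest of the pipeline.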
Can someone please help me figure out what to type to get these noodles to be wires instead of noodles. It always comes out as ramen noodles.
Hi Vae, I can try, what do you currently have?
With a graceful and precise movement, a robot delicately eats a bowl of ramen noodles made of wires, each strand carefully rendered and topped with computer chips for a visually stunning and unique dining experience.
done with diffusion xl
Try taking out.... ramen noodles made of, let me see what it will do. Be right back
Yes, getting the same.
hmm okay. No worries. I will think on it for a while and try something els. Thankyou
Wires mixed in a bowl
Surrealism, with a graceful and precise movement, a robot delicately slurps wires from a bowl, each strand carefully rendered and topped with computer chips for a visually stunning and unique dining experience.
Yes, got that too.
Have you tried with the simplest prompt possible first?
yes I did that as well
No idea if that’s what you had in mind
It always comes out as noodles
Highly Detailed Robot Eating Coloured Wires From A Bowl, Hyper Realistic 3D Render, Vibrant Background
awesome!
My prompt:
Robot eating a bowl of colorful wires, background of bistro
Hahaha
My motto: precise, concise 😁
dang thats good
Thanks, the top one is my fave so far
yes diggin in now he needs chopsticks like any decent robot
Is there any app that creates videos for more than 4 seconds?
I know people have been chaining 4 seconds videos in movie editing softwares, so that’s one possibility. (For a more advanced way, I know someone was taking the last frame of the 4 seconds and generating another 4 seconds from that 😅)
Also the Motion feature from Leonardo is still young, so who knows what lies ahead? 😄
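The chaining trick (grab the last frame of a clip, generate the next 4 seconds from it, then join the clips) can be scripted around ffmpeg. This sketch only builds the commands; the flags are standard ffmpeg (`-sseof` seeks from the end of file, the concat demuxer joins files listed in a text file), but double-check them against your ffmpeg version:

```python
def last_frame_cmd(video, out_png):
    """ffmpeg command: seek 0.5s before end-of-file, save one frame."""
    return ["ffmpeg", "-sseof", "-0.5", "-i", video,
            "-frames:v", "1", out_png]

def concat_cmd(list_file, out_video):
    """ffmpeg command: concatenate the clips listed in a concat file
    (one `file 'clipN.mp4'` line each) without re-encoding."""
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", list_file, "-c", "copy", out_video]

print(" ".join(last_frame_cmd("clip1.mp4", "last.png")))
print(" ".join(concat_cmd("clips.txt", "full.mp4")))
```

You'd run these with `subprocess.run(cmd, check=True)` between generation rounds; `-c copy` only works cleanly if all clips share the same codec and resolution.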
Ahah yes, I also play with 4-second videos to make them longer, but it's not really sufficient
Leonardo ai has motion feature?
I was using different to create video from images
Different apps
When you generate an image, you can then apply a motion effect. You can choose the strength of the motion from 1 to 10 (5 being the middle ground of course.. higher you can expect some …. Interesting results 😅)
Thanx
You Ramen Junkies
“When my dish hits the table but i asked for NO “extras”
Why does my cat 3D model texture look like this?
when the original texture is this
Like why is the cat face on the belly, the eyes are distorted, wrinkles everywhere, etc. I have nothing on the negative prompt.
Am I doing something wrong? I may need help since I dont know how the AI here works that much with 3D
Confusing the machine in prompt? Model choice?
No, it just doesn't meet my expectations. It's the 3D model texture generator. The prompt is: Stylized grey Siamese male cat, dark blue eyes. Nothing in the negative prompt
Tried to recreate it, but got major denials from the Leo generation (wow, really)
:/
Who is in charge of what is "right" in prompt writing? Seems pretty pinpointed (no NSFW crap mentioned here)
Tough when platforms tie down 100%
so is there anyone who can help me with this?
My prompt:
A snowy winter suburban landscape, a woman in a sleigh, the sleigh is pulled by one white sleigh dog. (In the sleigh sits Jenna (Jenna Coleman).) Jenna is wearing a wine red dress and coat with white fur at the hood and sleeves. The sleigh dog is a female Samoyed dog
Negative prompt:
multiple dogs, dog in sleigh
I use Leonardo Diffusion XL, Dynamic, no image guidance, input dimensions are set to 768x512
Obviously I would like the dog to pull the sleigh, with Jenna sitting in it. But I consistently get the dog IN the sleigh and often several dogs.
It seems that image size has some influence on the result, but I had hoped that, being wider than tall, it would simply adjust the scale - or am I completely off here?
So I have been trying to get this to work but I am having no luck. The AI just doesn't seem to understand a dog pulling the sleigh. My typical move when I run into an issue like this is to use an image-to-image reference to help the AI out a bit.
how could I make a consistent character model in Leonardo AI?
with many poses - does anyone know?
Hello there!
I see the result is (well, obviously 😅) not what you expected. I’d be surprised too if that’s what popped out.
Just so we can also understand how it got to this result, can you walk back the steps you took to arrive there?
First things first, did you select “Image Generation” in the left toolbar?
Yes thats what I used
Wait no not image
I was using texture generation
Perfect
And what is the .obj file you were using ?
It was a cat
Can you try to upload it here if it’s not too heavy?
This is the original UV map and texture map
Its over 100MB since its animated
Unable
Ah ok
But is it a .obj of a 3D model of an untextured cat?
Bit like this?
Also its a payware model so I cant upload it publicly
Did you have a clear preview of a cat when you uploaded the model?
(I’m sorry I’ll just have to pause for now … it’s 3AM here 😅 ill check up on the conversation when I wake up later)
if I remember, yes
I'm looking for help with a prompt: the idea is simple but difficult to realize
A half-opened iridescent box with a tilted lid. Strong colorful music notes escape from between the lid and box and fill the picture. An orange stands on the lid
I tried many times to do that in every program I tested, but none could get it right - especially the orange. This orange has a special meaning in the photo
The problem is that the ai will probably interpret that as a color and add it randomly to the image. You may have to be more specific and mention a kind of orange or say orange-fruit or something like that.
I also had a long-time problem with this prompt
a commercial thin-plastic bottled water container is filled with olive oil. A diamond nearby the bottle is shown [conveying the meaning that the olive oil costs nearly as much as a diamond]
Really, here one liter of oil costs 5.5 EUR
Do you have a better prompt for that ? Here you see the bottle
Not going to be easy with a prompt... but there is Realtime Canvas. I did a very quick one to show you what can be done. If you spend some time there, you should be able to get a lot closer to what you want.
The ai won't recognize the meaning you are trying to show. There isn't much point prompting for it.
Canvas Editor for you I'd say
A container of olive oil next to a diamond might be enough. Generally simple prompts are better.
Every time I get a picture like your photo. See also the picture of the milk container. No diamond however!!
I'd say as long as you don't mind spending a few tokens, you should get there. I had a little try and got this on the first try, I'm sure you can do better if you take a little more time and tokens.
the second is closer, but it still does not show something similar to what is required (as with the milk bottle). Now expecting your ideas
Yes, that could be a way forward. It is frustrating, I keep having to remind myself of the random workings of AI image generation, it should be simple... but it isn't 🙂
Thanks for trying 🙂
Hi senpai. I have a logo (image) which I want Leonardo to put on every image that I have. Would it be possible? Just wanted to know the feasibility before I deep dive and explore.
Hello there 😄
Even if Leonardo itself doesn’t have a watermark feature, I know many photo editing apps do. (I’d have to do a quick search for a free one)
For example, you download all the pictures you want to share or send to clients, add them to the software and then export them back out, but with the logo applied.
In the days of Aperture (a now defunct photo app), it was just a matter of setting up the logo in the settings and how we wanted it to look, then when we exported pictures, we could check the “apply watermark” options.
So… there’s hope 😁
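If you'd rather script the watermarking than click through an app, here's a minimal sketch assuming the Pillow library is installed (`pip install Pillow`); the file names are placeholders:

```python
from PIL import Image

def watermark(base_img, logo_img, margin=16):
    """Paste a logo in the bottom-right corner of an image,
    using the logo's own alpha channel as the paste mask."""
    base = base_img.convert("RGBA")
    x = base.width - logo_img.width - margin
    y = base.height - logo_img.height - margin
    base.paste(logo_img, (x, y), logo_img)
    return base

# Usage (paths are hypothetical):
# out = watermark(Image.open("render.png"),
#                 Image.open("logo.png").convert("RGBA"))
# out.save("render_watermarked.png")
```

Wrap the call in a loop over a folder of exports and you've got the batch "apply watermark" behavior the old photo apps offered.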
Without prompting for the orange, it gets some good results. Then the orange can be added in the Canvas Editor.
Or alternatively there’s also a (slightly more difficult) way of building a quick collage of items in a free photo editing app (like Photopea) and passing it through the Depth to Image function of Image Guidance
https://discord.com/channels/989166677390426132/1105169069436575846
Hello, I'm Johnny and I have a vodka brand company. I'm trying to create images using my real bottle but it's not working. Could someone help me?
Thats my vodka bottle that i would like to create images with
like i want to create images for my instagram with this bottle. Like the bottle with a drink on the beach
every time I try something the AI changes the bottle, but I want to use the exact bottle I have
can u help me?
Oops I think I have misrepresented myself. I wanted to put a logo I made onto the tshirt of my character.
Ah! Then yes the bottle will change during the image generation 😄 Sorry I misunderstood this aspect
So to start, this will help you:
It's your picture but without a background.
(Hang on, I'll prepare an example)
tuxman, can we make a videocall ?
can send u a vodka bottle for your help hahahahaha
I really need to make it work out haha
I don't do video calls 😉
But I'll write out the instructions here.. just a moment
So I'm going to use a version of the technique outlined here: https://discord.com/channels/989166677390426132/1105169069436575846
(And a little bit of color correction at the end: https://discord.com/channels/989166677390426132/1113604362254700614)
So I start by making a very crude collage of a random beach picture from the web and I paste in the bottle pic without a background. I rotate it a bit to give some angles (I mean, if it was in the sand... it'd be tilted right?)
I get the picture into Leonardo's Image Guidance, select Depth to Image, then I make a quick prompt that describes what the picture should be. I get two versions and I choose the one I like most
I bring that picture back into my collage, paste it as a new layer and move the original bottle above it (Don't mind the edges of the fake bottle... we're gonna fix it in the next step)
Then, the same way I rotated the bottle, I scale it so it hides the fake one underneath. Then I take the eraser tool and choose a smooth one to make it seem like it's in the sand.
We could stop there... but the bottle doesn't have the same tint as if it was in a sunset, so let's fix it (And by the way.... THANK YOU for making it opaque 🤣 WAY easier!)
So I just create a new layer on top, trace some lines with black and orange, then make the layer transparent (I made it quickly because I have to get back to work.. but you get the idea 🤣)
So, there it is! Take an object and insert it into any image without having any details changed 😄
great tutorial!
Right in the middle under the prompt and model
Hi, follow-up on my question. is it possible to add my own image into specific section of the generated AI image?
Sup! You can pretty much just paste the logo over the picture in a free photo editing app and scale/rotate it to fit the scene 😄
I did exactly that to put my lightsaber shirt design on ..well.. actual characters with lightsabers 🤣
You could put the logo on a collage and use it through Depth to Image, or in the Canvas, but it will get slightly changed and modified in the generation process so you won't have exactly your logo
cool. so i do not need to use Leonardo... which software do you use?
Photopea, it's free and it's online
I use Leonardo to generate the assets, and I bring them together in Photopea
For example, this one is like 8 Leonardo images 🤣
@nova canyon So! Now that I had third coffee! 🤣
Bottom line is, I've seen people have reasonable success with texture generation (on trees, balls, etc...) but the generation process still doesn't understand the physiology of animals. Even if the texture looks right on the web page, the map will be all over the place and might not apply correctly back in your 3D software
(Like in my example, it didn't recognize that a cat only has two eyes, and there are many intersecting bodies)
Texture Generation was marked as Alpha until recently. I suggest sending a ticket to support to get their official stance on the state of the feature 🤔
Hey all! I'm trying to get an image that is at least 1125x1725px. I've generated a 1536x1536px one which is the maximum dimensions for generating an image, but when I run the Alchemy Upscaler on it the result is still only 1536x1536px. How do I actually get it to 'up'scale?
I seem to recall there used to be more than one upscale option in Leonardo, am I mixing up my AI art tools or did they get moved somewhere else?
I'm a free user btw, if that makes a difference
what if you cut it differently (the uv's) so the app has no chance to get it that much wrong
good good, I'll use it when I am too lazy to do photoshop
Oh, answered the wrong person haha
i dont wanna be that guy who tells u to go learn how 3d models work, but maybe u should get an idea of how they work in general ^^ its easier to get good results if you understand the basics. so if you model something in 3d and you want to texture it, you need UVs. thats what u have in your picture. those UVs are what you texture, so the texture ends up on the model. depending on how you cut the UVs, its easier to do or not. the cat seems to have lots of small pieces, but normally u try to keep bigger pieces of UVs. its also better for the end result, because most of the time the seams you create are visible on the model. thats the UV map of a whole human. its still weird to look at, but u can see the different parts of the body like the head, the torso and the hands. i think if you cut it like this it would be easier. also it would be easier for you to start with a more simple thing.
Haha! Indeed, I've never claimed to be an expert at 3D 😄
How is your experience with Leo's texture generation?
thats why i tried to explain it
So you had good results with it?
i just did that. seems like its kinda working on more simple objects. at least its doing something.
cant say thats a great result but its a start to work from
ig
A gritty, black and white rendering of a left side view of an Italian criminal from 1898, his piercing gaze looking off camera as he holds a gleaming knife in his hand.
so no matter what I do this image has no knife Hahaha
ive tried to make it very simple and not ask for much
Yeah... I guess models of living things might be too complex still :S
Hello there! 😄
Sorry I missed your Upscaler question haha
When you use XL models, you indeed only get the Alchemy Refiner.
When you use other models you will see the Alternate upscale, smooth and crisp.
Also note that you can upscale images further if you use a local upscaler like "Upscayl". it's a free program and you can get your pics in truly ridiculous sizes (With great results, of course :D)
@brittle raft Please reconsider what you post here. Repeated off-topic messages, bad language, and spam are not allowed.
Aha, that would explain it. I was using Leonardo Diffusion XL. Thanks for the info! 😄
I guess its still in alpha, idk why they removed it
Have you tried separating elements with commas? That's usually how I go about it
Ah! The last one, the hat would need some touchups.. but it's a cool pose 😅
What are the best input dimensions?
When I want higher quality, the prompt is being ignored in a few things, like the background scene.
I think it depends on what you want. I have come to the conclusion that when you want someone standing, it's important that the height of the image is large, and if you want a view of, say, a landscape, make sure to have a wide format. But if there was a "best" setting, I am convinced the dev team would have just put that in?
Most models tend to use depth of field and slightly blur the background. You can put depth of field in the negative or intricate background in the prompt to help.
welcome to AI image generation
😄
your best bet if you need it exactly the same is to remove background on one image and insert him into another scene using canvas after generating the other background
it will then be the exact same pose, though
otherwise you can do your best to inpaint all the minor details and throw tokens until it is solved?
you will need custom model/Lora for that
agreed, but even then it would need to be trained on input images that have the consistent uniform
for this, LoRA is used, however due to issues we can only have Elements, which are indeed LoRAs but of limited use. We can't use them for characters. Another way is to go the Japanese visual novel way....
right now I've seen many people making comics with LoRAs, however that too is inconsistent
No AI can help you with this; even if you train a perfect LoRA, it's still not going to be perfect
I am using a celebrity as my character and just giving them a beard or changing the nationality
Haha not quite what I was looking for in a masked robber
The start of 2020 did a real number on the dataset when it comes to masks 😅
Hahahaha
They can’t identify your DNA if you can’t spread germs! Genius!
he is keeping them safe while robbing
is it just me or is Canvas having issues actually using specific models? Sometimes when I change models for Canvas inpainting, etc. The new model doesn't register
Which model do you select?
Must have seen Gattaca 😂
I've been trying to use more of Leonardo Diffusion and Leonardo XL for my Canvas edits. I guess they can't be used for inpainting and outpainting yet though
I will look into this for you
Hello, mod-ish people!
I have a little mystery for you...
What do you do when your prompt is too long to fit in the online form?
You mean with Ethan Hawke? That's my jam, that and Predestination. Next level Sci-Fi
OK, I ended up cheating. DALLE had no problems creating the base image, but it won't generate with... Come to think of it, I asked it to use Clara Oswald, not just "Jenna Coleman", that might have been the issue, but we discussed copyrights...
So, base from DALLE and then the finished product from Leonardo
Prompt:
A snowy winter suburban landscape, a woman in a sleigh, the sleigh is pulled by one white sleigh dog. (in the sleigh sits Jenna (Jenna Coleman)). Jenna is wearing a wine red dress and coat with white fur at the hood and sleeves. the sleigh dog is a female Samoyed dog
how to color a drawing with leonardo ? is that only possible for paying users ?
Hello! 👋
You can use the Edge to Image feature in Image Guidance. It detects the lines of the input image, and tries to generate around it. Make sure your prompt describes briefly what’s in the picture so the systems knows what’s actually there 😜
It’s a premium feature for the XL models, but it’s free for all the rest of the models.
@karmic pulsar Hey there 😄 This will be a better channel to get help for your prompts
@white delta We can discuss over here instead.
sounds good
so im kind of the guy who likes generating dwarf images
and i have been generating so
but the generations have changed
have a look so you can see
@winter spire
Taking a look now. If you don't mind, could you indicate an image like what you expect to see vs an image that you're getting now that you're unsatisfied with?
ofcourse, that would be my pleasure.
Hm. I re-ran the prompts you used for both of these sets and they seemed to generate reasonable results. This to say, the settings seem reasonable as a sanity check:
So from here I have two thoughts: Either A) It's possible that there's some weird data hanging around in your local internet files. You could try flushing your cache and cookies, see if that does anything. Or, B) Your most recent generation was a seed the AI didn't resolve well. It happens, not every generation is an award-winner.
Having said this, it looks like the configuration you were using previously netted more accurate dwarf results. Stick with the original prompt with PM settings of 0.3 Strength and High Contrast enabled
When all else fails, there's also a "reset" button at the bottom left corner. Just to be sure there's not an obscure option that was changed by accident
so I've noticed while working on a new same-character scene that the new scene will have some elements of the old scene. do I need to flush or clear something?
It could be a combo
It could be that some of the terms to conserve the character are bleeding into the scene
ok ill restart the app as well
And we've anecdotally noticed that the cache holds bits of previous gens
Could try clearing your browser cache as well and see if you get a different result
thank you!
It depends. If it was an image generated in a group we might be able to get some info from the other ones
The prompt will not influence the seed
Or do you also have another group that was generated soon after or before?
I meant if you generated like 4 or 8 pictures at the same time 😁
Do you have the link to the generation? I can have a look when I get home
You could, it will not be exact though
As Tux said, if you generate multiple images at the same time, they will be one generation and you can can copy the seed from that. If they weren't generated at the same time, the seed will be completely different even if the prompt is the same
So if you deleted all the images from that particular generation, you won't be able to recover the seed unfortunately
Ah yeah then they won't share a seed unfortunately
Thanks a milli
Where can we learn before generate images like we want
Hello there 😄
I’ve been told this guide wasn’t so bad to learn prompt structure 😅
https://discord.com/channels/989166677390426132/1156647553387999353
Is anyone here willing to assist me with an issue I've been trying to figure out for a few days?
I'm using a custom-trained model on unique faces, but the generator has made a habit of generating up to two of the same faces in the same generation over several generations, how do I get around this?
Might need some more information to help. But if I were a gambling man, I'd guess it's related to the dimensions you're using on your generation. If they're much larger or of a different ratio to the model size, the AI will try to fill in the extra space with what it knows. In this case, adding in another face. Does this help?
The dimensions are all 1:1 and I've been using the recommended dimensions which are the same as those in the dataset (768x768).
Would you like me to provide images?
Yes please!
A link to your profile might also help so I can check some generation settings. Or even just some data on what your configuration is set to.
Prompt:
Capture a centred close-up portrait of a black woman in her twenties; she faces forward against a white backdrop.
All my generations are private, I'm not ready to share my generations until I've perfected the model and prompting.
There's also several other persistent issues such as multiple heads, faces, black and white images
Perfectly reasonable, dw
I just generated 8 more images and sure enough, two of them are practically the same character, 1 more being only a little less similar
Was this model trained for SD2.1 or SD1.5? And are you using your Instance Prompt in your prompt?
SD1.5 as recommended in the model trainer, what is instance prompt?
Oh yeah i do use instance prompt
it's only 3 words
Dang ok.
So some of the output images would say to me that the subjects in the training images aren't framed properly, by which I mean there's too much background space. Could also be a lack of variety in the source images in terms of facial features etc. Model training is tricky business, and admittedly I'm no expert.
However, whenever @real lynx happens to come online she might be able to give you a better idea of how you can solve this problem. She uses trained models extensively.
There is a fair bit of background space, the background for all the subjects is pure white.
I've suspected before it could have something to do with
I'll keep an eye out for her
Hello
Hi
I was reading your problem
Apparently it would be scale
The problem
The model was created in a data set
Are there 40 photos?
There are currently 13 images. It's a tedious process to prepare each image for the dataset; I plan on filling it out to 40.
I can show you the dataset if it would help
a ss
There are programs on the internet that simplify the process.
Because all photos have to be the same size
Yeah they're all the same size
The photos must be of good quality and clear.
I've gone as far as aligning the heads and eyes of the models so they are well centered in each image
I'm going to try your prompt on my model and then I'll take photos so you can see what I want you to try.
Alright.
Is your account free?
It should be premium
Change that scale to the minimum
Ok i'll change it to 512x768
wait, what if my model has different faces, does it have to be the same face?
: (
Both eyes should always be seen from the front in medium profile.
And if there are 40 photos the result is optimal.
My goal is to combine multiple faces to create a new one
I never tried it that way
I don't see bad results. Did you try changing the scale?
Sometimes even on my own face I notice that I take one eye from one face and the other from another.
That's why in different faces...
The result would be disastrous
It produces some really unique and beautiful faces because of this tho
I can show you the contrast between a bad and good result
I want to see what comes of this
And I want to modify your prompt to give it more details
And see what happen
Ok gimme a few seconds
Once the model is created you have to wait until next month to modify it
never knew that
If you try to modify it right now it tells you when you can modify it again
Alright
I've just realised I've deleted almost all the horrible results, so give me a few more seconds
It's about 4 now, basically better versions of each other as I learned to improve the dataset images
2 "unfavourable results"
both favourable and unfavourable results are from the same dataset/model
These women are combinations of the subject women in the dataset; a fair amount of the time you can tell who the "parent" subjects are based on certain features
I like shaved
Masterpiece, intricate, meticulous face design. Capturing a strikingly beautiful portrait of a young black woman in her twenties, the subject is centered and confidently gazes forward against a pristine white backdrop. This high-quality photograph radiates with the subject's radiant brown skin, accentuated by soft natural light that delicately highlights her exquisite features. The woman's almond-shaped eyes shimmer with an alluring depth, while her naturally coiled, lustrous black hair cascades around her face like a crown of elegance. The impeccable clarity and meticulous attention to detail in this captivating image flawlessly captures the woman's strength, grace, and timeless beauty.
Modify your prompt
These dice generate a prompt based on your instructions
With the second option
It has many indications
Write the prompt shorter
You already uploaded the one that I modified
And it's a long prompt
You modify the face of the model with the prompt. The prompt is based on the model depending on the force you give it.
Different prompt
Same model
thank you I didnt know the dice did that
Jeez this uses a lotta tokens
Wait, so the longer the prompt, the lower the dataset model's influence?
I'm liking the increase in quality and detail so far
I think so although
Because I still want it to combine faces
If it is too long, it is very likely that the last words of the prompt won't carry much force.
Oh yeah, the first words have more strength
How do I make the model look more like say subject 4 in the dataset?
()10%(())20%
?????
Parenthesis also gives more strength to the word
ah
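The `()10%(())20%` shorthand above matches the weighting convention popularized by Stable Diffusion front-ends such as the AUTOMATIC1111 web UI: each surrounding pair of parentheses multiplies a word's weight by roughly 1.1, and an explicit `(word:1.3)` form sets the multiplier directly. A minimal sketch of that convention (whether Leonardo uses these exact numbers internally is an assumption):

```python
def paren_weight(depth, explicit=None):
    # Common SD prompt-weighting convention (A1111-style):
    # (word)   -> ~1.1x  (about +10%)
    # ((word)) -> ~1.21x (about +21%, matching the "()10%(())20%" shorthand)
    # (word:1.3) -> exactly 1.3x, overriding the paren count
    if explicit is not None:
        return explicit
    return round(1.1 ** depth, 2)

assert paren_weight(1) == 1.1
assert paren_weight(2) == 1.21
assert paren_weight(0, explicit=1.3) == 1.3
```

This is why `(front facing: 1.3)` later in the chat reads as "emphasize 'front facing' by 30%".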
I don't think it will ever give you the same result.
My photos are my personal photos and it is the same face
That's why take my features
Why does the ratio have to be 2:3 or 3:2 ?
You would have to do that model randomly 40 times to give you a similar result to create the data set of that model
Or look for a modelless prompt that looks like that
Lend me the photo of the model you liked
here?
And then I'll show you how
Just use any one of these
I speak Spanish, where are you from?
That's why my English is bad, I use a translator
Maybe some things I don't understand
Output Model
Not input, input are real people, I'm assuming you were asking for the generated model (person)
Perfect as an example to teach you
And I like this model, so later I'm going to get my hair cut like that too xD
This page generates a prompt with the photo you upload
wow
a woman in a strapless top posing for a picture, portrait featured on unsplash, brown skin like soil, pixie cut, passport photo, anime visual of a young woman, sleek head, short facial hair, profile picture, unedited, face shown, upper body avatar, purity
I'm going to try the prompt on my model and also without anything
I'll wait
hoh
I never had hair this short, it doesn't look bad on me
elo?
Hair short
also for datasets should I use SD1.5 or SD2
It won't let me have shorter hair
The hairstyle must have a name
I'm going to google it
Alright, as for now I'm going to experiment with everything you've taught me.
I would suggest SD1.5
then you can apply more of the tools after the fact
but if all you care about is pixels and your output, without using other controlnet etc then 2.1 is good!
Okay, I don't know what you mean by controlnet at the other stuff but 1.5 has been good to me so far.
Thank you
Any questions happy to help
Are we allowed to PM others?
Sometimes I have them blocked
I would like to further the discussion privately if I may
But if you surprise me as soon as I see it, I'll answer you
I am trying to illustrate a scene using leonardo AI and I was wondering if there is a way to use caricature art ? like use real photos of a person to generate caricatures and use those images in this illustration. Please let me know if that is possible ?
The Art of War: Behind the Victories of Alexander, Caesar, Genghis, and Napoleon, cinematic, 8k, hyperrealistic
You can't generate images on discord. You have to go to https://leonardo.ai
Hello Isotope, yes. You can upload a real photo into "Image Guidance" (image to image setting), as a reference for the AI, then generate images using one of the anime models. Results will vary according to your prompt, and the "image to image" strength slider. The higher the strength, the more the generated image will adhere to your photo.
Anyone know WTF happened to #🖼│image-share ?
how to place a photo in image share
Just press the + button in your chat bar and upload an image :)
thats all? i know, but when you upload, do you just send the picture, or is uploading everything you must do?
you can also just copy/paste 😄
or you the share button in your feed
How do you maintain a consistent style? I've tried to use "Train a Model", I've also tried keeping the same settings, but the results vary from generation to generation
do you use a fixed seed?
No, I don't understand much about that
you can either use a fixed seed in your settings to keep that consistency (mostly to make subtle changes) or you try to use Img2Img or Img-Prompt (Prompt Magic)
if you like a specific Image Style you created... you can find the Seed for it in your feed
Ok, so, say I have a perfect style I've achieved and the image happens to be of a woman, but I want to generate one of a man or dog in the same style. The seed will keep it consistent?
I am not that much of an expert but so far mostly yes
same seed
I have not specified anything about the person itself though
Hmm ok I'll try it
you can see it has the same style like the image before 🙂 ... I just changed it from a Harp to a Guitar
I can not create an image
You have to go to https://leonardo.ai to generate images.
have you spoken to Verrocchio?
I need a quick way to make consistent characters in a comic book style. I thought that using a celebrity face was the answer. It is for consistency, but if you use a celebrity face, you cannot modify it much, like asking Leo to close their eyes or shave their head. any ideas?
fixed seed
i can close eyes idk what you on about
im trying to make an image of Sylvester Stallone with his eyes closed very hard
ok ill look into fixed seed thank you
Can this program make an AI version of my portrait if I take a picture with a camera and upload it to the program somehow? If so, how?
You can use it as an image to image reference.
Like this
Whats the prompt if i want it to be my image and not soemthign random?
Some people just write copy or you can describe the image.
Or if you put the strength low, you can roughly base a new image off the old one and use the written prompt to describe what you want the new image to look like.
Thank you!
No problem. 🙂
Anyone here good with Youtube SEO / marketing / Thumbnails?
I use google bard to come up with seo keywords and descriptions and canva pro to make thumbnails
you can use your picture in Canvas though 🙂 ...
I literally thought you ment Canva
I was like what
Im gonna try that out thankyou G
Good morning everyone, everyone!
I need some help.
How do you create a realistic portrait of someone while maintaining the appearance of his face but with different framing and shooting. For example in the forest, with different clothes??
with a fixed seed 🙂
@spiral axle Okay, can we contact each other in a private message so you can explain to me, please?
advanced settings 🙂
Okay but how do I keep my model portrait and make different framing as for example walking in the forest while keeping the same physical aspect of his face without modifying it?
You can also use canvas
That’s going to keep the same framing though (the general pose and outline will be the same)
When you use a fixed seed, it only means the beginning of the generation process stays the same. So the random color blob that will become the image will be constant. But after that, if you change A LOT of the prompt, you will lose consistency because even though it’s using the same base, you are taking it in a completely different direction
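The fixed-seed behaviour described above can be sketched in plain Python. This toy uses the stdlib RNG as a stand-in for the diffusion model's initial latent noise; it is not Leonardo's actual implementation, just an illustration of the mechanism:

```python
import random

def starting_noise(seed, n=8):
    # The "random color blob" a generation starts from: the same
    # seed always reproduces the same starting values.
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# Same seed -> identical starting point, so small prompt tweaks stay
# consistent; a different seed starts from a different blob entirely.
assert starting_noise(1234) == starting_noise(1234)
assert starting_noise(1234) != starting_noise(9999)
```

As the explanation above says, the seed only pins that starting point; rewriting most of the prompt will still push the result somewhere new.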
Probably cuz the gender isn't specified in the prompt, a consistent character should be achieved with both a prompt that clearly describes the character and the fixed seed
I'm currently still a university student in computer science, but I have been programming for fun since I was a kid haha
My favorite language is Lua, but I know a ton of different ones
I know Python as well but its one of my least favorite ones unless I'm working with custom neural networks/data science
Hahaha yeah I never liked hacking personally, but I went a similar path. Just started experimenting with code, trying what worked and what didn't. Personally its always been the creative part of it that attracted me, being able to create something and seeing it come to life.
What ?
sometimes that happens. innocent images get tagged as NSFW. Probably not your fault. you could try contacting support to get them uncensored. or just leave them be and make more 🙂
I will not, I have too many pictures
You know, a lot of time has passed and more pictures have been "collected"😅 😅
yea i cant even tell you why that happened. maybe its something in the prompt that caused it, or maybe its one of the other secret detection things on the website 🤷♂️
anyone else having issue with #🖼│image-share ?
server or sub server, whatever... nukes itself then comes back with a message that you don't have permission to view history in #🖼│image-share ?
I can still post there, though
and another user is in there now
Maybe you also have some tips on how to get this type of character design in image gens?
What is a screencap? Anything like old school cel animation?
A screencap is short for screen capture, so in the context of anime a still image taken from an animated scene
Screencap = Screenshot from TV/Computer... dont know if its working or not 🙂
I use it a lot too haha, seems to work
I just found a Screencap Style LoRa so I thought I can get Anime Episode Pictures
u got it wrong xD
Its more about emulating the style of Anime Episodes
like the shading and the lineart
the coloring
it exists
everywhere
its not adding some kind of perspective effect in there or something... just basic "flat" like Anime TV Shows
and it cant generate shows... since I mashup multiple anime shows to generate unique characters
like this for example 😄
NGE Bunny
all depends on your prompt 😄
Actually this is kind of cool. You should use it for a project. Would make a cool Mech Anime.
like this 😉
Naruto in dad jeans
Nanamin
This generated anime is too good what is in this prompt or any of them?
Baki and Mashle xD
I have to learn how to do this
Like I said it depends on all your personal preferences 😄
you can go Painting Style, you can go Anime Style
you can even turn Luffy into another Art Style
lots of examples 😄 (but I keep my prompt mostly the same)
Teach me oh Great Master of the Meta Arts🙏
I am your humble student.
its just my normal Prompt xD .... you just have to edit it to your Needs xD
My first guess would be that your prompts are too long (Just going by the comment you made some days ago about not having enough space for the prompt :D)
The more words you add to the prompt, the bigger salad you get and the less control you retain
If you want to be surprised and don't have a goal in mind, go ham with the prose and the words (It's a fun exercise :D)
If you want something specific, be precise and concise. Avoid figure of speech and metaphors, keep your word count to a minimum
Thought I had to be a Craftsman to post these? Even
Knight is fine for posting images
but I'm not a Knight🤔 ...thought I should be by now
Knight rank doesn't exist 😄
well Verro is wrong then xD
@open echo Verro still says you can post when you reach Knight Rank xD
you still have the shield
Shield icon is for Volunteer Moderator 😅
Yeah, it is now, but it wasn't before
I remember Knight, have I fallen into a Mandela? BTW... the Fruit of the Looms Mandela effect is actually real. There was a cornucopia in the logo before they changed the logo. I know what I wear. That's my story and I'm stickin' to it.😆
So, if I still had an old account I'd be an automatic lock for a mod?
Who knows?
Shield icon has always been only for Volunteer Moderators.
In fact the Volunteer Moderator position has been around since before the Knight rank was introduced
Can anyone clarify if having a white or blank background in dataset images causes unwanted elements in generated images?
blank as in transparent?
if ALL your images are white backgrounds, you will only be generating images with white backgrounds with that model
it will really try and stick to what is conserved in the training dataset
Funny thing is, it doesn't do that, it just fills it with random textures
I wish It would stick to the white backgrounds
What was the knight emoji then? I had the shield but I wasn't a mod. So, I don't remember what knight even looks like then?
I'm not sure Knight ever had an emoji
and why can't I get past this silly Novice level?😆
The icons next to the names are quite a recent thing and only the hat was added for Former Nobles
Is anyone here an expert on datasets
if you want to explain your dataset I can help troubleshoot
Hi, my problem with the Canvas editor is that it doesn't load images from my computer.
interesting, what browser are you working with?
can you try in another browser to see if this is browser specific? Do you know if you have any new extensions?
No, I don't have any new extensions, but I can change browsers.
also one of the first things we suggest is to clear your browser cache
sometimes it holds on to bad data and causes issues
Ok, then. I will do that, if I keep having problems I will come back. Thank you so much!
What do you need to know?
you're training for an artstyle if I recall correctly? How many images are you using, which version of SD, what category, etc
are all your images cropped to the same pixel dimensions?
Not an artstyle, I'm training a model to create new faces
yes
I believe I was strict with that aspect
they're all 768x768
ok so SD2.1
1.5
then they should be 512x512
so 768x768 is for 2.1?
yes, that's correct
you can apply different elements and controlnet to the model in SD1.5
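The resolution rule of thumb from this exchange (512x512 dataset images for SD1.5, 768x768 for SD2.1) can be captured in a small check. The helper and constant names here are illustrative, not part of any Leonardo API:

```python
# Native training resolution per base model, per the advice above.
NATIVE_RES = {"SD1.5": 512, "SD2.1": 768}

def dataset_size_matches(base_model, width, height):
    # A training dataset should match the base model's native
    # square resolution, or the outputs tend to misbehave.
    side = NATIVE_RES[base_model]
    return width == side and height == side

assert dataset_size_matches("SD2.1", 768, 768)
# The mismatch from the chat: a 768x768 dataset on an SD1.5 model.
assert not dataset_size_matches("SD1.5", 768, 768)
```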
brb sorry, my dog is panting - need to take him for a walk, 15 mins max
Do you recall when it was that you were a member, before?
512 or 768 are what they offer now unfortunately. The training resolutions I have seen don't exceed 1024x1024; anything above will need upscaling as an additional step.
I'm fine with both resolutions, I'm just trying to figure out why the background in the output is always some weird texture
It seems using "pose to image" has added a lot of stability and consistency to the image background.
Face 1 + Face 2 = Face 3
so half of your images are one subject and half are another?
this is a simplification
understood
yes it should output a great front facing individual
did you set it as "general" or "character" or what when training the model?
photography should have been also applicable for this purpose
and what is your prompt structure like when you use the model?
the instance prompt or the generation prompt?
an example generation prompt
Current Prompt
Black Women Dataset 2, Capture a centred close-up portrait of an african woman in her twenties with distinct facial features and clear skin, she is facing forward gazing at the camera against a soft, clean, and neutral colour photoshoot background.
I was advised to put the model name at the start
curious, all of this is consistent with what you're looking to achieve
yes, that is very correct
well... no the instance prompt
if it's the same as your model name, then yes
the instance prompt is also there: "african woman"
this is without pose to image:
It's usually much worse
Here's with pose to image:
yes pose to image would help get the model in exactly the position you want
try just calling out a white background
I tried that before, didn't do anything I'll do it again and show
including "white", without pose to image:
there's a number of disasters
white as in white background
yes I can see 😄 what settings are you generating
like 2021
and 22
try increasing the guidance scale to 9, and in Advanced options try choosing Euler Discrete as your scheduler
I don't use alchemy in this stage of my project because it introduces unwanted elements. FYI.
I don't see anything incorrect
so I'm just trying to pick settings that could increase prompt adherence
let me attempt
not sure what to say
increased saturation?
can you kill the negative prompt for a generation?
less saturation?
it doesn't want them to be lonely for sure
I've tried including the word "single" in earlier attempts, didn't work.
pose to image seems to be the only fix for now, just wish I didn't need to use it
I could hang this on the wall and people might think its from some artist
How do we get the highest resolution possible without having to go into another site to improve it?
generate at the highest resolution that gives you the intended result, then upscale
Alchemy High Resolution first if you can
and then Crisp Upscale
would you be willing to share your dataset with me? I could train a few models differently and see if it grants better results? I see nothing incorrect with what you're attempting
hmmm
if not, that's cool
sure, Its not complete anyways
but I think you're doing everything right and it wants to give you random effects
we could shorten the prompt
bout 30 more models to find and process
I'll Dm the dataset
open to ideas on that
try: "African Woman, front facing beauty photography, white background, captured in professional DSLR, narrow depth of field"
Euler and 9 guidance scale with no pose to image and negative prompts?
try that first yes, I like to start simple and then add as needed
yeah not much better
hm
try the negative prompt added?
with negatives:
like... 2 correct.. ish
yeah a lil better than the previous
try "beauty portrait photo" instead of front facing
Do you still want the dataset if it makes it easier?
no I think this shows that prompt editing could achieve it, just need to find what the model likes
😄
unless you want to crowdsource
the word beauty brought some fashion clothing into the generations 😁
Where is the crisp upscale option? I think I've been doing it right with alchemy and the refiner scale
are you generating in the XL models?
Yeah most of them have been in there
all too common
I have been playing around with some models, there are some without the XL
do you have a paid account?
because the XL models only have Alchemy upscales
Yeah I do 🙂
oh cool! (If you want a specific role for paid members in discord, send me your Leo profile name)
then Crisp is available as the DNA helix icon
and will 2x the size of the image
sorry there doesn't seem to be a quick answer!
Ooooooh brilliant, thank you so much. I've been going into a different site to do it and then it asks me to subscribe, and I thought to myself "wait, Leonardo should do it, I must be missing something" 🤣🤣🤣
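Putting the upscale advice from this thread together: Crisp Upscale is described as doubling an image's pixel dimensions, so even the 1536x1536 generation cap mentioned earlier clears a target like 1125x1725 in one pass. A small sketch (the 2x factor comes from the chat; any other factor here is purely illustrative):

```python
def crisp_upscale(width, height, factor=2):
    # Per the chat, Crisp Upscale roughly 2x's the image dimensions.
    return width * factor, height * factor

# One pass on a 1536x1536 generation comfortably exceeds 1125x1725.
assert crisp_upscale(1536, 1536) == (3072, 3072)
```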
Rtrcreations ")
It's fine, as long as someone can help me figure it out sometime
I also have an issue with alchemy completely ignoring my dataset
alchemy does a lot of extra processing
if you want alchemy power but strict adherence, use the "None" pipeline
that would allow you to use weighting in your prompt structure
but not get the added effects from the post-processing from alchemy
where is this
actually, let's try that... African Woman, (front facing: 1.3) beauty photography, (white background: 1.4), captured in professional DSLR, narrow depth of field using Alchemy, with the None pipeline
right down there at the bottom of the pipeline list
ok 1 sec
They look african indeed, but not corresponding to my expectations mainly because I don't see any influence from the model:
the pose, position and background are better but everything else is a bit extra yes
How do I stop these extra limbs and missing letters lol
It's a shame we don't get refunded points when the AI gets it wrong lol
good negative prompting for the limbs, but letters are a roll of the dice
Canvas editor is nice for removing unwanted elements
Canvas is excellent for a lot of things 😄
could even just generate the card, then take to canvas to add text after
Ooooh I haven't mastered that fully yet lol