ComfyUI, SDXL, and safetensors: notes compiled from Reddit threads

• I tested with different SDXL models, and also without the LoRA, but the result is always the same. Example source-image prompt: city, alley, poverty, ragged clothes, homeless. It isn't perfect, but as a base to start from it will work.
• The controlnet-union-sdxl-1.0 model: after download, just put it into your ControlNet models folder.
• I'm currently playing around with dynamic prompts. Making a list of wildcards, and downloading some from Civitai, brings a lot of fun results; I mainly use the wildcards to generate creatures or monsters in a location. There are other custom nodes that also use wildcards, but I haven't really tried them.
• Low-to-mid denoising strength isn't really any good when you want to completely remove or add something.
• I also added the Pony VAE, but the images are still bad compared to using the old IPAdapter.
• I find the results interesting for comparison; hopefully others will too. This probably isn't the fully recommended workflow, though: it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format. If I were you, I would look into ComfyUI first, as it is likely the easiest to work with in its current form.
• I want to transition from SD 1.5 to SDXL, but unlike 1.5 there is no inpainting model for ControlNet on SDXL. You can do this at runtime in ComfyUI; it happens for both of the ControlNet model loaders.
• In part 1 we implemented the simplest SDXL Base workflow and generated our first images; in part 2 we added the SDXL-specific conditioning and tested its impact. Here we need the "ip-adapter-plus_sdxl_vit-h.safetensors" model for SDXL checkpoints, and you can load everything directly into ComfyUI or A1111.
• SDXL-Turbo is based on a novel training method called Adversarial Diffusion Distillation (ADD, see the technical report), which allows sampling large-scale image diffusion models in very few steps.
• I have a 2060 Super (8 GB) and it works decently fast (about 15 seconds for 1024x1024) in AUTOMATIC1111 with the --medvram flag.
• This is a weird workflow I've been messing with: it creates an SD 1.5 image and then passes it to an SDXL pass.
• SDXL Turbo quick start: Step 1: download the SDXL Turbo checkpoint. Step 2: download the sample image. Step 3: update ComfyUI. Step 4: launch ComfyUI and enable Auto Queue (under Extra Options). Step 5: drag and drop the sample image into ComfyUI.
• Thanks for the link; it brings up another very important point to consider: the checkpoint.
• It seems very compatible with SDXL (I tried it with an SDXL VAE, etc.). Just install it and use lower-than-normal CFG values, like 2.
• Generating images with any SDXL-based model runs fine when I use ComfyUI.
• Merge recipe nodes (all default): ModelMergeAdd, ModelMergeSubtract.
• I used the workflow kindly provided by u/LumaBrik. In one of the workflows you use a text prompt to create an initial image with SDXL, but the text prompt only guides the input-image creation, not what should happen in the video.
• There is an official list of recommended SDXL output resolutions (quoted further down).
• I installed safetensors with pip install safetensors; a quick way to sanity-check a downloaded file is sketched below.
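For anyone who wants to check what they actually downloaded, here is a minimal sketch using the safetensors library. The file path is a placeholder, and the metadata may be empty depending on how the checkpoint was exported.

```python
# Minimal sketch: inspect a downloaded .safetensors checkpoint.
# The path below is a placeholder; point it at any checkpoint you have.
from safetensors import safe_open
from safetensors.torch import load_file

path = "models/checkpoints/sd_xl_base_1.0.safetensors"  # placeholder

# Read the header metadata without loading any tensors into memory.
with safe_open(path, framework="pt", device="cpu") as f:
    print("metadata:", f.metadata())
    print("tensor count:", len(f.keys()))

# Load the full state dict (a plain dict of tensors) only when you need the weights.
state_dict = load_file(path, device="cpu")
print("first keys:", sorted(state_dict)[:5])
```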
• ComfyUI users have had the option from the beginning to use Base then Refiner. The intended way to use SDXL is to let the Base model make a "draft" image and then use the Refiner to make it better. SDXL 1.0 comes with two models and a two-step process: the base model makes most of the image and the refiner improves it before it is actually finished. (A rough sketch of this step split follows after these notes.)
• SDXL was rough at launch, and to make results more workable they shipped two models: the main SDXL model and a refiner. The issue was that Automatic1111 didn't support this initially, so people ended up building workarounds.
• temporaldiff-v1-animatediff.safetensors is not compatible with either AnimateDiff-SDXL or HotShotXL.
• t5xxl is a large language model capable of much more sophisticated prompt understanding. Both Comfy and A1111 have it implemented.
• This comparison uses the sample images and prompts provided by Microsoft to show off DALL-E 3.
• SDXL and SD 1.5 checkpoints do not work together, from what I found.
• If we look at comfyui\comfy\sd2_clip_config.json, SDXL seems to operate at clip skip 2 by default.
• But somehow this model, with this node, is giving me memory errors that only SDXL gave before.
• Some users on A1111 and Forge might not be able to see SDXL LoRAs in the UI list because they were not properly tagged as SDXL (a fix is described further down).
• Below is my XL Turbo workflow, which includes a lot of toggles and focuses on latent upscaling. I spent some time fine-tuning it and really like it.
• Choose the VAE-fix file instead of the normal sdxl_vae; that will prevent NaN errors and black images. You can turn on VAE selection as a dropdown by going to Settings > User Interface, typing sd_vae into the quick-settings list, and reloading. VAEs are also embedded in some models; there is a VAE baked into the SDXL 1.0 checkpoint, but the early one had a problem, which is why the separately released VAE is recommended with the current SDXL files.
• You can just drop a generated image into ComfyUI's interface and it will load the workflow; the workflow is saved in the image metadata.
• With the generally good prompt adherence in SDXL, even though Fooocus is kinda simple, it spits out pretty good content pretty often if you're just making stuff like me.
• Protip: if you want to use multiple instances of these workflows, you can open them in different tabs of your browser.
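As a rough illustration of the Base-then-Refiner handoff described above, here is a small sketch that splits a sampling run into a base range and a refiner range. The 0.8 switch fraction is an assumed value for illustration, not something taken from these threads; in ComfyUI the same idea is usually expressed with the start_at_step / end_at_step inputs of two KSamplerAdvanced nodes.

```python
# Illustration only: split a sampling run between the SDXL base and refiner models.
# The 0.8 switch point is an assumed value; ComfyUI expresses the same idea through
# the start_at_step / end_at_step inputs of two KSamplerAdvanced nodes.
def split_steps(total_steps: int, switch_at: float = 0.8) -> tuple[range, range]:
    """Return the step ranges handled by the base model and by the refiner."""
    boundary = round(total_steps * switch_at)
    return range(0, boundary), range(boundary, total_steps)

base_steps, refiner_steps = split_steps(30)
print(f"base: steps {base_steps.start}-{base_steps.stop - 1}, "
      f"refiner: steps {refiner_steps.start}-{refiner_steps.stop - 1}")
```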
• I think, for me at least, with my current laptop, ComfyUI is the way to go for now. It seems fast, and the nodes make a lot of sense for flexibility. I'm new at ComfyUI and have been experimenting with it the whole Saturday; so far I find it amazing, but I'm not yet achieving the same level of quality I had with Automatic1111. Thanks for the tips on Comfy, I'm enjoying it a lot so far, though I use A1111 too much to recondition myself.
• I am transitioning to Comfy from Auto1111 and so far I really love it. However, I am having big trouble getting ControlNet to work at all, which is the last thing that keeps bringing me back to Auto1111.
• Then I placed the model in models/Stable-diffusion (actually put in a few); after that you may decide to get other models from Civitai or the like once you've figured out the basics. I did a whole new install, didn't edit the path for extra models this time (did that the first time), and placed a model in the checkpoints folder. I also tried adding a folder there and editing the yaml file to point to it.
• SDXL most definitely doesn't work with the old ControlNet models. I then combine it with a combination of Depth, Canny, and OpenPose ControlNets.
• I'm on a Colab/Kaggle Jupyter notebook. For SDXL models (specifically Pony XL v6), the HighRes-Fix Script constantly distorts the image, even with the KSampler's denoise set very low. I've searched far and wide on this issue; I hope you can help me. I'm also getting "'NoneType' object has no attribute 'copy'" errors.
• ('Motion model temporaldiff-v1-animatediff.safetensors is not compatible with AnimateDiff-SDXL or HotShotXL.', MotionCompatibilityError('Expected biggest down_block to be 2, but was 3 - temporaldiff-v1-animatediff.safetensors is not a valid AnimateDiff-SDXL motion module!')) Output will be ignored. Google the checkpoint name; it leads to Hugging Face where you can download the right one (it's a sizeable download).
• I learned about MeshGraphormer from a YouTube video by Scott Detweiler, but felt that simple inpainting doesn't do the trick for me, especially with SDXL. So I made a workflow to generate multiple hand-fix options and then choose the best one.
• I've put the textual inversions both in A1111's embeddings folder and in ComfyUI's, then tested editing the paths, but I only get the message that they don't exist (so they are ignored). What's even more interesting is that the LoRA doesn't show up in the LoRA tab's list of available models either, as if some config file isn't working properly. I've tried that with LCM-LoRA-SDXL and tried renaming the file as well; see the sketch after these notes for one way to check what a LoRA file says about itself.
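One way to debug the "LoRA does not show up / is not tagged as SDXL" situation is to look at the metadata embedded in the LoRA file itself. The keys below are kohya-style fields and are an assumption about how the file was exported; files from other trainers may carry different keys or none at all.

```python
# Sketch: read the training metadata embedded in a LoRA .safetensors file.
# The "ss_*" keys are kohya-style fields and are an assumption about how the LoRA
# was exported; other trainers may write different keys or none at all.
from safetensors import safe_open

lora_path = "models/loras/some_lora.safetensors"  # placeholder

with safe_open(lora_path, framework="pt", device="cpu") as f:
    meta = f.metadata() or {}

for key in ("ss_base_model_version", "ss_sd_model_name", "modelspec.architecture"):
    print(f"{key}: {meta.get(key, '<not present>')}")
```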
• "Failed to validate prompt for output 4: Output will be ignored." Separately, on a 4090 there is a shared-memory issue that slows generation down; using --medvram fixes it (I haven't tested whether it is still needed on this release). If you want to run the safetensors release, drop the base and refiner into the Stable Diffusion models folder, use the diffusers backend, and set the SDXL pipeline.
• I call it "The Ultimate ComfyUI Workflow": you can easily switch from txt2img to img2img, built in.
• Why are my SDXL renders coming out looking deep fried? Prompt: analog photography of a cat in a spacesuit taken inside the cockpit of a stealth fighter jet, fujifilm, kodak portra 400, vintage photography. Negative prompt: text, watermark, 3D render, illustration drawing. Steps: 20, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, Seed: 2582516941, Size: 1024x1024.
• MASSIVE SDXL ARTIST COMPARISON: I tried out 208 different artist names with the same subject prompt for SDXL (TLDR, workflow: link).
• I'm curious about mixing 1.5 and SDXL, but I still think there is more that can be done in terms of detail.
• The official list of recommended SDXL output resolutions: 640x1536, 768x1344, 832x1216, 896x1152, 1152x896, 1216x832, 1344x768, 1536x640 (plus the square 1024x1024). SDXL will almost certainly produce bad images at 512x512; 768x768 may be worth a try. I generated a series of images in a 9:16 aspect ratio, some in ComfyUI with SDXL and others in Midjourney. A small helper for picking the nearest bucket follows below.
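Here is that helper: given a desired aspect ratio, pick the nearest SDXL training bucket from the list quoted above (the square 1024x1024 bucket is included as well).

```python
# Pick the SDXL training bucket closest to a requested aspect ratio,
# using the resolution list quoted above plus the square 1024x1024 bucket.
SDXL_BUCKETS = [
    (640, 1536), (768, 1344), (832, 1216), (896, 1152), (1024, 1024),
    (1152, 896), (1216, 832), (1344, 768), (1536, 640),
]

def closest_sdxl_resolution(width: int, height: int) -> tuple[int, int]:
    """Return the bucket whose aspect ratio is nearest the requested one."""
    target = width / height
    return min(SDXL_BUCKETS, key=lambda wh: abs(wh[0] / wh[1] - target))

print(closest_sdxl_resolution(1080, 1920))  # 9:16 portrait -> (768, 1344)
```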
• I have the following launch arguments: --windows-standalone-build --disable-cuda-malloc --lowvram --fp16-vae --disable-smart-memory.
• SDXL is the newer base model for Stable Diffusion; compared to the previous models it generates at a higher resolution and produces much less body horror, and I find it follows prompts a lot better and is more consistent.
• Before SDXL came out I was generating 512x512 images on SD 1.5 in about 11 seconds each. I had always wanted to try SDXL, so when it was released I loaded it up and, surprise, 4-6 minutes per image at about 11 s/it. There is an Nvidia issue at this time relating to the way the newer drivers manage GPU memory, so all SDXL pre-release implementations are affected.
• Near the top of the console output there is system information for VRAM, RAM, which device (graphics card) was used, and version information for ComfyUI; this tells us what hardware ComfyUI sees and is using. For example:
D:\ComfyUI_windows_portable\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
Total VRAM 4096 MB, total RAM 16362 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less.
• CLIP vision models are initially all named model.safetensors, so rename them to make things clearer. I notice my clip_vision_ViT_H.safetensors is the same size as CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors, so I'm downloading that one.
• ComfyAnonymous noted renaming diffusion_pytorch_model.fp16.safetensors to a more descriptive diffusers_sdxl_inpaint name to make things clearer, and left the other name as is. Just use ComfyUI Manager! With the "ComfyUI Manager" extension you can install almost all missing nodes automatically via the "Install Missing Custom Nodes" button; if you're having trouble installing a node, click its name in the Manager and check the GitHub page for additional installation instructions.
• I already add "XL" to the beginning of SDXL checkpoint filenames right after I download them so they sort together; the mouse wheel scrolling backwards through the model list is a problem even with a shorter list. A small script sketch of that habit follows.
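The renaming habit mentioned above can be scripted. This is only a sketch: the directory is a placeholder, and deciding which checkpoints are SDXL by a simple name match is an assumption you may want to tighten.

```python
# Sketch of the "prefix SDXL checkpoints so they sort together" habit.
# The directory is a placeholder, and matching "xl" in the filename is a crude
# assumption about which checkpoints are actually SDXL.
from pathlib import Path

checkpoint_dir = Path("ComfyUI/models/checkpoints")  # placeholder

for ckpt in checkpoint_dir.glob("*.safetensors"):
    if "xl" in ckpt.stem.lower() and not ckpt.name.startswith("XL_"):
        new_name = "XL_" + ckpt.name
        ckpt.rename(ckpt.with_name(new_name))
        print(f"renamed {ckpt.name} -> {new_name}")
```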
• (You can try others; these just worked for me.) Don't use classification images; I have been having issues with them, especially in SDXL, producing artifacts even with a good set. If training for SDXL, train on top of the talmendoxlSDXL_v11Beta checkpoint; if training for 1.5, train on top of hard_er.
• Nasir Khalid reports very good FreeU results with b1 = 1.1, b2 = 1.2, s1 = 0.6, and an s2 value that is cut off in the source.
• I am using just the SUPIR-v0Q.safetensors checkpoint.
• Errors like these mean the named files are not present in your models folders: "Value not in list: instantid_file: 'instantid-ip-adapter.bin' not in ['ip-adapter.bin']" and "Value not in list: control_net_name: 'instantid-controlnet.safetensors' not in ['diffusion_pytorch_model.safetensors']".
• A LoRA loader's widgets_values in the saved workflow JSON look like: ["koreanDollLikenesss_v10.safetensors", 0.6650000000000006, 0.5200000000000002] (the LoRA name plus the model and CLIP strengths).
• For Zoe depth, download "diffusion_pytorch_model.safetensors" and rename it to "controlnet-zoe-depth-sdxl-1.0.safetensors". For OpenPose, grab "control-lora-openposeXL2-rank256.safetensors" (the XL OpenPose model released by Thibaud Zamora). A hedged download-and-rename sketch follows.
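A sketch of that Zoe-depth download-and-rename step using huggingface_hub. The repo id is an assumption about where the file is hosted; only the source and target file names come from the notes above.

```python
# Sketch of the Zoe-depth download-and-rename step. The repo id is an assumption
# about where the file is hosted; only the file names come from the notes above.
import shutil
from huggingface_hub import hf_hub_download

src = hf_hub_download(
    repo_id="diffusers/controlnet-zoe-depth-sdxl-1.0",   # assumed repo id
    filename="diffusion_pytorch_model.safetensors",
)
# Copy into ComfyUI's controlnet folder under the clearer name (folder must exist).
shutil.copy(src, "ComfyUI/models/controlnet/controlnet-zoe-depth-sdxl-1.0.safetensors")
```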
• In my understanding, their implementation of the SDXL Refiner isn't exactly as recommended by Stability AI, but if you are happy using just the Base model (or you are happy with their approach to the Refiner), you can use it today to generate SDXL images. However, I kept getting a black image, and for me it produces jumbled images as soon as the refiner comes into play.
• (Where "Loader" means the renamed Eff. Loader SDXL.) Unfortunately it does not work exactly as I need: it substitutes the model name specified in the Eff. Loader SDXL node, not the one transmitted via XY Plot.
• Where did you get realismEngineSDXL_v30VAE.safetensors from? I can't find it anywhere. Alternatively you can use epicrealism_naturalSinRC1VAE.safetensors; that one works a charm.
• Try the SD.Next fork of the A1111 WebUI, by Vladmandic.
• For a specific project I need to generate an image using an SDXL-based model and then replace the head using a LoRA trained on an SD 1.5 model, so I need to unload the SDXL model and use an SD 1.5 model to stay compatible with my LoRA.
• Some SDXL LoRAs don't show up in A1111/Forge because of missing tags. To fix it, load a 1.5 model, locate the LoRAs in the list, open the "Edit Metadata" option by clicking the icon in the corner of the LoRA image, and change their tags to SDXL.
• Unlike SD 1.5 and 2.1, base SDXL is already so well tuned for coherency that most other fine-tuned models basically only add a "style" to it. That also explains why SDXL Niji SE is so different: it is tuned for anime-like images, which is honestly kind of bland for base SDXL because it was tuned mostly for non-anime content.
• To save prompt styles, use a Styles.csv (UPDATE 01/08/2023: a total of 850+ styles, including 121 professional ones, without GPT).
• I talk a bunch about the different upscale methods and show what I think is one of the better ones; I also explain how LoRAs can be used in a ComfyUI workflow. I'll add LoRAs to the list; there are a few different options LoRA-wise, and some time after I cover upscalers I'll do material on LoRAs and probably inpainting and masking techniques too.
• The GUI basically assembles a ComfyUI workflow when you hit "Queue Prompt" and sends it to ComfyUI; the ComfyUI node that I wrote makes an HTTP request to the server serving the GUI. A sketch of that request follows.
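For the curious, this is roughly what "Queue Prompt" does under the hood: it POSTs an API-format workflow to the ComfyUI server's /prompt endpoint. The workflow file here is a placeholder; you get one from ComfyUI's "Save (API Format)" option with dev mode enabled.

```python
# Sketch of what "Queue Prompt" does: POST an API-format workflow to ComfyUI.
# workflow_api.json is a placeholder; export one via "Save (API Format)" in ComfyUI.
import json
import urllib.request

with open("workflow_api.json") as fh:
    workflow = json.load(fh)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",                      # default ComfyUI address
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())                          # contains a prompt_id on success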
• The SD3 model uses three conditionings from different text encoders: CLIP_L and CLIP_G are the same encoders used by SDXL, and t5xxl is the third. I used the CLIP and VAE from the regular SDXL checkpoint, but you can use the VAELoader with the SDXL VAE and the DualCLIPLoader node with the two text-encoder models instead. It loads "clip_g_sdxl.safetensors".
• In the SDXL paper they stated that the model uses the penultimate CLIP layer; I was never sure exactly what that meant in practice.
• SDXL's refiner and HiRes Fix are just img2img at their core, so you can get the same result by taking the output from SDXL and running it through img2img with an SD 1.5 model.
• ComfyUI already has the ability to load UNET and CLIP models separately from the diffusers format, so it should just be a case of adding this into the existing chain with some simple class definitions and modifying how that part works.
• The controlnet-union-sdxl-1.0.safetensors model is a combined model that integrates several ControlNet models, saving you from downloading each model individually. In ComfyUI you can perform all of these steps in a single click.
• SDXL-Turbo is a distilled version of SDXL 1.0, trained for real-time synthesis.
• Using this trick I have made some unCLIP checkpoints for WD 1.5 (e.g. wd-1-5-beta2-aesthetic-fp32).
• Looking through the ComfyUI nodes I noticed a new one, SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (the SD 4x upscale model). I decided to pit the two head to head. Can you explain where you got the example-lora_1.safetensors file and the 4x UltraSharp.pth file? I can't find them.
• When I see comments like this I feel like an old-timer who knows where QRCode Monster came from and what it is actually used for now.
• A 1.5 checkpoint only works with 1.5 ControlNet models, and SDXL only works with SDXL ControlNet models. Are most or all of the SDXL models compatible with SDXL ControlNets? IP-Adapter, for sure. Be aware that ControlNet mostly does not work well with SDXL-based models yet, as the ControlNet models for SDXL seem to have a number of issues. I'm in a similar situation with the ControlNet inpaint model: I get some success with it, but I generally have to use a low-to-mid denoising strength, and even then whatever is unpainted has a pink, burned tinge to it.
• I haven't actually used it for SDXL yet because I rarely go over 1024x1024, but it can do 1024x1024 for SD 1.5 models too; results may vary, but for me it almost makes them feel like SDXL models, and it works really well at getting rid of the doubled people and weird stretched-out bodies that show up.
• ComfyUI SDXL Basics tutorial series, parts 6 and 7: upscaling and LoRA usage. Both are quick and dirty tutorials without too much rambling; no workflows included because of how basic they are. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg, SDXL LoRA, and using a 1.5 LoRA with SDXL plus upscaling.
• What I do is actually very simple: I use a basic interpolation algorithm to determine the strength of ControlNet Tile and IPAdapter Plus throughout a batch of latents, based on user inputs; it then applies the ControlNet and masks the IPAdapter accordingly. A rough sketch of the idea follows after these notes.
• Finally got SDXL, Hotshot-XL, and AnimateDiff to give a nice output and create some really cool animation and movement using prompt interpolation.
• EDIT: After more time looking into this, there was no problem with ComfyUI and I never needed to uninstall it. My problem was likely an update to AnimateDiff, specifically one that broke the "AnimateDiffSampler" node.
• Prior to the torch and ComfyUI updates that added FP8 support, I was unable to use SDXL plus the refiner, as it requires roughly 20 GB of system RAM or enough VRAM to fit all the models in GPU memory.
• My command line gets stuck on "Setting up MemoryEfficientCrossAttention" when this happens. I've changed my Windows page-file size and I've tried to wait it out, and it didn't just break for me.
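Here is a rough sketch of that interpolation idea, under the assumption that "basic interpolation" means a linear ramp between user-supplied start and end strengths across the batch.

```python
# Rough sketch: linearly ramp per-frame strengths across a batch of latents,
# assuming "basic interpolation" means a straight line between start and end values.
def interpolate_strengths(start: float, end: float, batch_size: int) -> list[float]:
    """Linearly interpolate one strength value per latent in the batch."""
    if batch_size < 2:
        return [start] * batch_size
    step = (end - start) / (batch_size - 1)
    return [round(start + i * step, 4) for i in range(batch_size)]

# Example: ControlNet Tile fades out while IPAdapter fades in over 16 frames.
cn_strengths = interpolate_strengths(1.0, 0.2, 16)
ipa_strengths = interpolate_strengths(0.2, 1.0, 16)
print(cn_strengths[:4], "...", ipa_strengths[:4])
```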
• I know about ishq's webui and how to use it; the thing I am saying is that the safetensors version of the model already works in A1111 (albeit only with DDIM) and can output decent results at 8 steps.
• AP Workflow 6.0 for ComfyUI, now with support for SD 1.5 and HiRes Fix, IPAdapter, a Prompt Enricher via local LLMs (and OpenAI), a new Object Swapper and Face Swapper, FreeU v2, XY Plot, ControlNet and ControlLoRAs, SDXL Base + Refiner, Hand Detailer, Face Detailer, Upscalers, ReVision, and more.
• Searge SDXL v2.0 for ComfyUI is finally ready and released: a custom node extension and workflows for txt2img, img2img, and inpainting with the SDXL 1.0 Base and Refiner models, automatic calculation of the steps required for both, quick selection of image width and height based on the SDXL training set, XY Plot, and ControlNet with the XL OpenPose model released by Thibaud Zamora.
• Ultimate SD Upscale is one of the nicest things in Auto1111 and it works fine with SDXL, though you should tweak the settings a bit. It first upscales your image with a GAN or any other old-school upscaler, then cuts it into overlapping tiles small enough to be digestible by SD (typically 512x512) and processes them one by one. For SDXL, set the tiles to 1024x1024 (or your SDXL resolution), set the tile padding to 128, and bump the mask blur to 20 to help with seams.
• Use Euler ancestral with the Karras schedule, CFG 6.5, and 30 steps. SDXL was trained at 1024x1024 for the same output. (As a reference for img2img-style passes: 0.236 strength at 89 steps will run 21 steps total.)
• At the moment I generate my images with a detail LoRA at 512 or 768 to avoid weird generations, then latent-upscale them by 2 with "nearest" and run them at 0.5 denoise (needed for latent upscaling, though I'm not sure why).
• I've not tried A1111 SDXL yet, as ComfyUI workflows are less resource-intensive, and the generation speed is pretty much the same as I get from ComfyUI. Edit: I just made a copy of the .bat file specifically for SDXL, adding the above-mentioned flag, so I don't have to modify it every time I need to use 1.5.
• Recent questions have been asking how far open weights are behind the closed weights, so let's take a look.
• Sure, you can use any SDXL base model; it will work. In this case I'm using sd_xl_base_1.0.safetensors and attempting to refine with sd_xl_refiner_1.0.safetensors, plus juggernautXL_v8Rundiffusion.safetensors. Thank you!