ComfyUI workflow directory example (Reddit, SDXL)
This probably isn't the fully recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format. ComfyUI Manager: a plugin for ComfyUI that helps detect and install missing plugins. Comfy1111 SDXL Workflow for ComfyUI: it's based on the wonderful example from Sytan, but I un-collapsed it and removed upscaling to make it very simple to understand. The blurred latent mask does its best to prevent ugly seams. In this guide I will try to help you get started with this and give you some starting workflows to work with. Only dog, also perfect. Edit: you could try the workflow to see it for yourself. SDXL most definitely doesn't work with the old ControlNet. Starts at 1280x720 and generates 3840x2160 out the other end. List of Templates. With SDXL 0.9 I was using a ComfyUI workflow shared here where the refiner was always an improved version versus the base. Thanks for the tips on Comfy! I'm enjoying it a lot so far. A new version for ComfyUI, now with support for SD 1.5 and HiRes Fix, IPAdapter, Prompt Enricher via local LLMs (and OpenAI), and a new Object Swapper + Face Swapper, FreeU v2, XY Plot, ControlNet and Control-LoRAs, SDXL Base + Refiner, Hand Detailer, Face Detailer, Upscalers, ReVision, etc. I have no idea why the OP didn't bother to mention that this would require the same amount of storage space as 17 SDXL checkpoints, mainly for a garbage-tier SD1.5 model I don't even want. They are intended for use by people that are new to SDXL and ComfyUI. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. Share, discover, and run thousands of ComfyUI workflows. Sep 7, 2024 · SDXL Examples. I think it is just the same as 1.5 but with 1024x1024 latent noise; I just find it weird that in the official example the nodes are not the same as when you add them yourself. That's the one I'm referring to.
EDIT: For example, this workflow shows the use of the other prompt windows. Tidying up the ComfyUI workflow for SDXL to fit it on a 16:9 monitor, so you don't have to | Workflow file included | Plus cats, lots of them. I played for a few days with ComfyUI and SDXL 1.0. You can use more steps to increase the quality. Simple SDXL Template. Go to ComfyUI_windows_portable\ComfyUI\ and rename extra_model_paths.yaml.example to extra_model_paths.yaml. Feature/Version: Flux.1 Pro | Flux.1 Dev | Flux.1 Schnell. The only important thing is that for optimal performance the resolution should be set to 1024x1024 or other resolutions with the same number of pixels but a different aspect ratio. ComfyUI ControlNet aux: a plugin with preprocessors for ControlNet, so you can generate images directly from ComfyUI. I mean, the image on the right looks "nice" and all. Heya, part 5 of my series of step-by-step tutorials is out; it covers improving your advanced KSampler setup and using prediffusion with an uncooperative prompt to get more out of your workflow. One guess is that the workflow is looking for the Control-LoRA models in the cached directory (which is my directory on my computer). But let me know if you need help replicating some of the concepts in my process. As always, I'd like to remind you that this is a workflow designed for learning how to build a pipeline and how SDXL works. The proper way to use it is with the new SDTurboScheduler node, but it might also work with the regular schedulers. Flux.1 Schnell. Overview: cutting-edge performance in image generation with top-notch prompt following, visual quality, image detail, and output diversity. I'm revising the workflow below to include a non-latent option. I tried to find a good inpaint workflow and just found a bunch of wild workflows that wanted a million nodes and had a bunch of different functions. I find it amazing, but so far I'm not achieving the same level of quality I had with Automatic1111.
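The extra_model_paths rename mentioned in the comment above is how ComfyUI reuses models from an existing Automatic1111 install. A minimal sketch of what the edited file can look like; the a111 section name and sub-keys follow the shipped example file, and the base path is a placeholder you would replace with your own location:

```yaml
# Sketch of extra_model_paths.yaml; adjust base_path to your A1111 folder.
a111:
    base_path: path/to/stable-diffusion-webui/

    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: |
        models/Lora
        models/LyCORIS
    upscale_models: |
        models/ESRGAN
        models/SwinIR
    embeddings: embeddings
    controlnet: models/ControlNet
```

After saving the file, restart ComfyUI so the loader nodes pick up the extra search paths.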
Compared with SD 1.5 the resolution is a full two times larger, the training data also tripled, and so the final checkpoint file is much bigger than 1.5's. I'm not sure what's wrong here, because I don't use the portable version of ComfyUI. As someone relatively new to AI imagery, I started off with Automatic1111 but was tempted by the flexibility of ComfyUI, though I felt a bit overwhelmed. I have to second the comments here that this workflow is great. Welcome to the unofficial ComfyUI subreddit. But try both at once and they miss a bit of quality. But it is extremely light as we speak, so much so… Examples of ComfyUI workflows. I understand how outpainting is supposed to work in ComfyUI (workflow… Based on Sytan SDXL 1.0: Refiner, automatic calculation of the steps required for both the Base and the Refiner models, and quick selection of image width and height based on the SDXL training set. I have a ComfyUI workflow that produces great results. Feb 7, 2024 · Running SDXL models in ComfyUI is very straightforward, as you must've seen in this guide. I'm glad to hear the workflow is useful. You can construct an image generation workflow by chaining different blocks (called nodes) together. This was the base for my… I'll do you one better and send you a PNG you can directly load into Comfy. But it separates LoRA into another workflow (and it's not based on SDXL either). Yeah sure, I'll add that to the list; there are a few different options LoRA-wise. I'm not sure of the current state of SDXL LoRAs in the wild right now, but some time after I do upscalers I'll do some stuff on LoRAs and probably inpainting/masking techniques too. Use 0.2 denoise to fix the blur and soft details; you can just use the latent without decoding and encoding to make it much faster, but that causes problems with anything less than 1.0 denoise. In contrast, the SDXL-CLIP-driven image on the left has much greater complexity of composition.
A new version for ComfyUI (SDXL Base+Refiner, XY Plot, ControlNet XL w/ OpenPose, Control-LoRAs, Detailer, Upscaler, Prompt Builder), Tutorial | Guide: I published a new version of my workflow, which should fix the issues that arose this week after some major changes in some of the custom nodes I use. It can't do some things that SD3 can, but it's really good and leagues better than SDXL. I conducted an experiment on a single image using SDXL 1.0 and ComfyUI to explore how doubling the sample count affects performance, especially at higher sample counts, seeing where the image changes relative to the sampling steps. But now in SDXL 1.0 the refiner is almost always a downgrade for me. So, I just made this ComfyUI workflow. I think it's a fairly decent starting point for someone transitioning from Automatic1111 and looking to expand from there. SDXL Turbo is an SDXL model that can generate consistent images in a single step. It is pretty amazing, but man, the documentation could use some TLC, especially on the example front. Comfy Workflows: share, discover, and run thousands of ComfyUI workflows. If you have a previous installation of ComfyUI with models, or would like to use models stored in an external location, you can use this method to reference them instead of re-downloading them. Automatic calculation of the steps required for both the Base and the Refiner models. I have updated the workflow submitted last week, cleaning up the layout a bit and adding many functions I wanted to learn better. ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own. I like to create images like that one: AP Workflow v3.0. We don't know if ComfyUI will be the tool moving forwards, but what we guarantee is that by following the series those spaghetti workflows will become a bit more understandable, and you will gain a better understanding of SDXL. I'm currently running into certain prompts where the latent just looks awful.
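Several comments mention "automatic calculation of the steps required for both the Base and the Refiner models". The arithmetic behind that is simple; here is a hedged Python sketch (the 0.8 base fraction and the function name are my own assumptions, mirroring the start/end step inputs on ComfyUI's advanced KSampler nodes):

```python
def refiner_switch(total_steps: int, base_fraction: float = 0.8) -> int:
    """Step index where the base model hands off to the refiner.

    The base KSampler runs steps [0, switch); the refiner KSampler
    picks up at `switch` and finishes at `total_steps`.
    """
    return round(total_steps * base_fraction)

switch = refiner_switch(25, 0.8)
print(switch, 25 - switch)  # 20 base steps, 5 refiner steps
```

Raising base_fraction toward 1.0 gives the refiner fewer steps, which matches the comments above saying the refiner can be skipped entirely when it degrades the result.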
That is probably due to the VAE; maybe there is an obvious solution, but I don't know it. First, of course, download the SDXL 1.0 checkpoint model. Intermediate SDXL Template. Just a quick and simple workflow I whipped up this morning. I have updated the workflow submitted last week, cleaning up the layout a bit and adding many functions I wanted to learn better. Jan 8, 2024 · Introduction of a streamlined process for image-to-image conversion with SDXL. Just load your image and prompt, and go. Indeed SDXL is better, but it's not yet mature, as models are only just appearing for it, and the same goes for LoRAs. Yes, 8GB card: the ComfyUI workflow loads both SDXL base and refiner models, a separate XL VAE, 3 XL LoRAs, plus Face Detailer with its SAM model and bbox detector model, and Ultimate SD Upscale with its ESRGAN model and input from the same base SDXL model, and they all work together. Step 3: Update ComfyUI. Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options). Step 5: Drag and drop the sample image into ComfyUI. Step 6: The fun begins! If the queue didn't start automatically, press Queue Prompt. I stopped the process at 50GB, then deleted the custom node and the models directory. ComfyUI workflow to play with this, embedded here: This gives SD3-style prompt following and impressive multi-subject composition. You do only face, perfect. ComfyUI - SDXL basic-to-advanced workflow tutorial, part 5. /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. Part 3 - we will add an SDXL refiner for the full SDXL process. ControlNet (Zoe depth). Advanced SDXL Template. Welcome to the unofficial ComfyUI subreddit. From there, we will add LoRAs, upscalers, and other workflows.
I made a preview of each step to see how the image changes after going from SDXL to SD 1.5 with LCM at 4 steps and 0.2 denoise. My actual workflow file is a little messed up at the moment; I don't like sharing workflow files that people can't understand. My process is a bit particular to my needs, and the whole power of ComfyUI is that you create something that fits your needs. Oct 12, 2023 · These workflow templates are intended as multi-purpose templates for use on a wide variety of projects. AP Workflow 6.0. More to come. Please share your tips, tricks, and workflows for using this software to create your AI art. SDXL 1.0 Base. I tried to find either of those two examples, but I have so many damn images I couldn't find them. Open the YAML file in a code or text editor. Emphasis on the strategic use of positive and negative prompts for customization. Your efforts are much appreciated. But for a base to start from, it'll work. Run any ComfyUI workflow with zero setup (free & open source). Hello! I'm new to ComfyUI and I've been experimenting with it all Saturday. Download the SDXL 1.0 checkpoint model; since SDXL was trained on 1024 x 1024 images, its resolution is double that of SD 1.5. Is it possible to combine SDXL and SD 1.5 in a single workflow in ComfyUI? EDIT: WALKING BACK MY CLAIM THAT I DON'T NEED NON-LATENT UPSCALES. Based on the Sytan SDXL 1.0 ComfyUI workflow with a few changes; here's the sample JSON file for the workflow I was using to generate these images: sdxl_4k_workflow.json. You can encode and then decode back to a normal KSampler with a 1.0 denoise. The Ultimate SD Upscale is one of the nicest things in A1111: it first upscales your image using GAN or any other old-school upscaler, then cuts it into tiles small enough to be digestible by SD, typically 512x512; the pieces overlap each other and can be bigger. All you need is to download the SDXL models and use the right workflow. But it has the complexity of an SD1.5 model. Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc.
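The tiling idea described above (cut the upscaled image into overlapping roughly 512x512 pieces small enough for SD) can be sketched in a few lines of Python; the function names and the 64 px overlap are illustrative assumptions, not values taken from the Ultimate SD Upscale script:

```python
def tile_origins(length: int, tile: int, overlap: int) -> list[int]:
    """1-D origins for overlapping tiles covering `length` pixels."""
    if tile >= length:
        return [0]
    stride = tile - overlap
    origins = list(range(0, length - tile, stride))
    origins.append(length - tile)  # keep the last tile flush with the edge
    return origins

def tiles(width: int, height: int, tile: int = 512, overlap: int = 64):
    """(x, y, w, h) boxes, each small enough for SD to process."""
    return [(x, y, tile, tile)
            for y in tile_origins(height, tile, overlap)
            for x in tile_origins(width, tile, overlap)]

print(len(tiles(2048, 2048)))  # 25
```

Each tile would then be diffused at a low denoise and blended back; the overlap is what hides the seams between neighbouring tiles.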
Sytan SDXL ComfyUI: a very nice workflow showing how to connect the base model with the refiner and include an upscaler. Comfy1111 SDXL Workflow for ComfyUI: just a quick and simple workflow I whipped up this morning to mimic Automatic1111's layout. Part 3 (this post) - we will add an SDXL refiner for the full SDXL process. Ignore the prompts and setup; the SDXL cliptext node is used on the left, the default on the right: sdxl-clip vs default clip. A great starting point for using img2img with SDXL. Upscaling: how to upscale your images with ComfyUI. Merge 2 images together with this ComfyUI workflow. ControlNet Depth ComfyUI workflow: use ControlNet Depth to enhance your SDXL images. Animation workflow: a great starting… For some workflow examples and to see what ComfyUI can do, you can check out: SDXL Turbo, AuraFlow, HunyuanDiT. In the standalone Windows build you can find this… No, because it's not there yet. cmcjas/SDXL_ComfyUI_workflows at main (huggingface.co). So, if you are using that, I recommend you take a look at this new one. I put an example image/workflow in the most recent commit that uses a couple of the main ones, and the nodes are named clearly, so if you have the extension installed you should be able to just skim through the menu and search for the ones that aren't as straightforward. They can be used with any SDXL checkpoint model. Please keep posted images SFW. This can be useful for systems with limited resources, as the refiner takes another 6GB of RAM. Part 2 (link) - we added the SDXL-specific conditioning implementation + tested the impact of conditioning parameters on the generated images. Thanks. It provides a workflow for SDXL (base + refiner). AnimateDiff in ComfyUI is an amazing way to generate AI videos. Increasing the sample count leads to more stable and consistent results.
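Shared workflows like the .json linked above are plain JSON, so you can inspect one before loading it. A hedged sketch; the two-node fragment below is made up for illustration, in the API-style layout ComfyUI uses where each node id maps to a node dict with a class_type and inputs:

```python
import json
from collections import Counter

# Made-up fragment of an API-format workflow: node id -> node dict.
workflow_json = """
{
  "4": {"class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
  "6": {"class_type": "CLIPTextEncode",
        "inputs": {"text": "a cat with a hat", "clip": ["4", 1]}},
  "7": {"class_type": "CLIPTextEncode",
        "inputs": {"text": "blurry", "clip": ["4", 1]}}
}
"""

workflow = json.loads(workflow_json)
node_types = Counter(node["class_type"] for node in workflow.values())
print(sorted(node_types.items()))  # [('CLIPTextEncode', 2), ('CheckpointLoaderSimple', 1)]
```

Counting class_type values like this is a quick way to spot which custom nodes a downloaded workflow expects before ComfyUI complains about missing ones.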
There are strengths and weaknesses to each model, so is it possible to combine SDXL and SD 1.5? My primary goal was to fully utilise the two-stage architecture of SDXL, so I have the base and refiner models working as stages in latent space. FAQ Q: Can I use a refiner in the image-to-image transformation process with SDXL? For example, this is what the workflow produces: Other than that, there were a few mistakes in version 3.1 that are now corrected. Encouragement of fine-tuning through adjustment of the denoise parameter. SDXL 1.0 Base, SDXL 1.0 Refiner. The image generation using SDXL in ComfyUI is much faster compared to Automatic1111, which makes it the better option of the two. Below is my XL Turbo workflow, which includes a lot of toggles and focuses on latent upscaling. Part 2 (coming in 48 hours) - we will add the SDXL-specific conditioning implementation + test what impact that conditioning has on the generated images. SDXL Examples. I had to place the image into a zip, because people have told me that Reddit strips PNGs of metadata. Combined with an SDXL stage, it brings multi-subject composition with the fine-tuned look of SDXL. This workflow/mini tutorial is for anyone to use; it contains both the whole sampler setup for SDXL plus an additional digital distortion filter, which is what I'm focusing on here. It would be very useful for people making certain kinds of horror images, or for people too lazy to use Photoshop, like me :P Then in Part 3, we will implement the SDXL refiner. I used the workflow kindly provided by the user u/LumaBrik, mainly playing with parameters like CFG Guidance, Augmentation level, and motion bucket. The only important thing is that for optimal performance the resolution should be set to 1024x1024 or other resolutions with the same number of pixels but a different aspect ratio. The creator has recently opted into posting YouTube examples which have zero audio, captions, or anything to explain to the user what exactly is happening in the workflows being generated.
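The recurring resolution advice (1024x1024, or another aspect ratio with the same pixel count, such as 896x1152 or 1536x640) is easy to compute. A sketch; the multiple-of-64 snapping is a common convention I'm assuming here, not something stated in these comments:

```python
def sdxl_resolution(aspect: float, budget: int = 1024 * 1024, step: int = 64):
    """Width x height near `budget` total pixels for a given aspect ratio,
    with both sides rounded to a multiple of `step`."""
    width = (budget * aspect) ** 0.5
    height = budget / width
    snap = lambda v: max(step, round(v / step) * step)
    return snap(width), snap(height)

print(sdxl_resolution(1.0))         # (1024, 1024)
print(sdxl_resolution(896 / 1152))  # (896, 1152)
print(sdxl_resolution(21 / 9))      # (1536, 640)
```

The last call shows that the 1536x640 pair quoted in the post is just the same roughly one-megapixel budget stretched to an ultrawide aspect ratio.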
Aug 20, 2023 · In part 1 (link), we implemented the simplest SDXL Base workflow and generated our first images. It's simple and straight to the point. SDXL can indeed generate a nude body, and the model itself doesn't stop you from fine-tuning it towards whatever spicy stuff there is with a dataset, at least by the looks of it. Instead, I created a simplified 2048x2048 workflow. I think that when you put too many things inside, it gives less attention to each of them. I played with SDXL 1.0, did some experiments, and came up with a reasonably simple, yet pretty flexible and powerful workflow I use myself: MoonRide workflow v1.2. Ignore the LoRA node that makes the result look EXACTLY like my girlfriend. For example: 896x1152 or 1536x640 are good resolutions. I know it must be my workflows, because I've seen some stunning images created with ComfyUI. I want a ComfyUI workflow that's compatible with SDXL, with base model, refiner model, hi-res fix, and one LoRA all in one go. Switch to SD 1.5, and then after upscale and face fix you'll be surprised how much that changed. SDXL ControlNet Tiling Workflow: I've been doing some tests in A1111 using the Ultimate Upscaler script together with ControlNet Tile, and it works wonderfully; it doesn't matter what tile size or image resolution I throw at it, but in ComfyUI I get this error: Download SDXL 1.0. Aug 13, 2023 · In part 1 (this post), we will implement the simplest SDXL Base workflow and generate our first images. Nobody needs all that, LOL. Sure, it's not 2.… First, I generated a series of images in a 9:16 aspect ratio, some in ComfyUI with SDXL, and others in Midjourney. Jul 6, 2024 · What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. Before inpainting, the workflow will blow the masked area up to 1024x1024 to get a nice resolution, then resize before pasting back.
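The inpainting step described in that last comment (blow the masked crop up to around 1024x1024, inpaint, then resize and paste back) boils down to a single scale factor. A small illustrative sketch with assumed names, not code from any of the workflows mentioned:

```python
def inpaint_scale(mask_w: int, mask_h: int, target: int = 1024):
    """Factor used to blow a masked crop up toward target x target
    before inpainting, plus the upscaled size and the original size
    to resize back to when pasting."""
    scale = target / max(mask_w, mask_h)
    upscaled = (round(mask_w * scale), round(mask_h * scale))
    return scale, upscaled, (mask_w, mask_h)

scale, up, back = inpaint_scale(320, 256)
print(scale, up, back)  # 3.2 (1024, 819) (320, 256)
```

Inpainting at the larger size is what buys the extra detail; the blurred mask mentioned earlier then feathers the pasted result so the seam doesn't show.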
For each of the sequences, I generated about ten of them and then chose the one I…
=== How to prompt this workflow ===
Main Prompt: the subject of the image in natural language. Example: a cat with a hat in a grass field.
Secondary Prompt: a list of keywords derived from the main prompt, with references to artists at the end. Example: cat, hat, grass field, style of [artist name] and [artist name].
Style and References: …
Welcome to the unofficial ComfyUI subreddit. You should try to click on each one of those model names in the ControlNet stacker node and choose the path of where your models… Step 2: Download this sample image.