ComfyUI workflows download (Reddit)

Run any ComfyUI workflow with zero setup (free and open source). Try it now.

I connect these two strings to a "Switch String" node, so I can turn them on and off and switch between them.

I wanted a very simple but efficient and flexible workflow. Therefore, we created a simple website that allows anyone to upload a workflow, parameterize it, and publish it as an interactive demo for everyone to use.

The ComfyUI workflow uses the latent upscaler (nearest-exact) set to 512x912 multiplied by 2, and it takes around 120-140 seconds per image at 30 steps with SDXL 0.9. It looks like I need to switch my upscaling method.

I'm something of a novice, but I think the effects you're getting are more related to your upscaler model, your noise, your prompt, and your CFG.

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. A lot of people are just discovering this technology and want to show off what they created. Also, if this is new and exciting to you, feel free to post.

Edit: I realized that the workflow loads just fine, but the prompts are sometimes not as expected.

I'm already searching on YouTube and following some tutorials, but there are too many. I also use the ComfyUI Manager to take a look at the various custom nodes available and see what interests me.

Try bypassing both nodes and see how much worse the image is by comparison.

Is this the best way to install ControlNet? I ask because when I tried doing it manually, it didn't work.

I really loved this workflow, which I got from Civitai.

Generating separate background and character images.

So I'm happy to announce today: my tutorial and workflow are available. I created a platform that will enable you to share your ComfyUI workflows (for free) and run them directly on the cloud (for a tiny sum).

Step 2: Download this sample image.
The images look better than most 1.5 models like epicRealism or Jaugeraut, but I know once more models come out with the SDXL base, we'll see incredible results.

I'll make this clearer in the documentation. I will upload the workflow to OpenArt soon.

'FreeU_V2' is for better contrast and detail, and 'PatchModelAddDownscale' is so you can generate at a higher resolution. Works great unless dicks get in the way ;+} Absolutely.

It wasn't long ago that I put together some workflows around sketch-to-image in SD 1.5. They still work well.

Forgot to mention: you will have to download this inpaint model, diffusers/stable-diffusion-xl-1.0-inpainting-0.1, from Hugging Face and put it in your ComfyUI "unet" folder, which can be found in the models folder.

Img2Img ComfyUI workflow. Applying "denoise:0.5" to reduce noise in the resulting image.

Please keep posted images SFW.

I create Prompt B, usually an improved (edited, manual) version of Prompt A.

The workflow does the following: load any image of any size. It is divided into distinct blocks, which can be activated with switches: a background remover, to facilitate the generation of the images/maps referred to in point 2.

This is John, Co-Founder of OpenArt AI.

Also added a second part where I just use random noise in a Latent Blend.

In researching inpainting with SDXL 1.0 in ComfyUI, I've come across three different methods that seem to be commonly used: the base model with Latent Noise Mask, the base model using InPaint VAE Encode, and the UNET "diffusion_pytorch" inpaint-specific model from Hugging Face.
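As a rough mental model for the "denoise:0.5" setting mentioned above (an assumption for illustration, not something stated in the thread): in img2img, the denoise value controls how much of the sampling schedule is actually re-run, so a lower denoise preserves more of the input image.

```python
def effective_steps(total_steps: int, denoise: float) -> int:
    """Approximate number of sampling steps actually run in img2img.

    denoise=1.0 re-runs the full schedule (pure txt2img behaviour);
    denoise=0.5 re-runs roughly the last half, keeping more of the input.
    """
    if not 0.0 <= denoise <= 1.0:
        raise ValueError("denoise must be between 0.0 and 1.0")
    return round(total_steps * denoise)

# At 30 steps with "denoise:0.5", only about half the schedule is sampled.
half = effective_steps(30, 0.5)
```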
AP Workflow 5.0 for ComfyUI (SDXL Base+Refiner, XY Plot, ControlNet XL w/ OpenPose, Control-LoRAs, Detailer, Upscaler, Prompt Builder). The ComfyUI workflow is here: If anyone sees any flaws in my workflow, please let me know.

The reason you typically don't want a final interface for workflows is that many users will eventually want to apply LUTs and other post-processing filters.

Go into the mask editor for each of the two and paint in where you want your subjects.

One thing is becoming glaringly obvious: knowing what nodes do and HOW they interface with Stable Diffusion is MUCH more useful than having a "do it all" workflow.

Merging 2 Images together. Less-is-more approach.

As I learn more and more about ComfyUI….

SDXL Default ComfyUI workflow.

Noticed everyone was getting on the ComfyUI train lately, but sharing the workflows was kind of a hassle; most people posted them on Pastebin.

A few months ago, I suggested the possibility of creating a frictionless mechanism to turn ComfyUI workflows (no matter how complex) into simple and customizable front-ends for end users.

Looping through and changing values, I suspect, becomes an issue once you go beyond a simple workflow or use custom nodes.

Then go to the 'Install Models' submenu in ComfyUI-Manager.

I'm also looking for an upscaler suggestion.

Don't leave the prompt empty; write down the effect you want, such as "a beautiful girl, Renaissance".

Hey all, another tutorial; hopefully this can help anyone who has trouble dealing with all the noodly goodness of ComfyUI. In it I show some good layout practices for ComfyUI, and how modular systems can be built.
Downloading SDXL pics posted here on Reddit and dropping them into ComfyUI doesn't work either, so I guess I'll need a direct download link.

This workflow is entirely put together by me, using the ComfyUI interface and various open-source nodes that people have added to it. AP Workflow v3.0 is the first step in that direction.

Studio mode: users need to download and install the ComfyUI web application from comfyflow.app.

It took me hours to get one I'm more or less happy with, where I feather the mask (feather nodes usually don't work how I want them to, so I use mask2image, blur the image, then image2mask), with 'only masked area' where it also applies to the ControlNet (applying it to the ControlNet was probably the worst part).

Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows.

It just doesn't seem to take the IPAdapter into account. Nothing fancy.

I merge BLIP + WD 14 + a custom prompt into a new string.

Don't choose fixed as the seed-generation method; use random.

I had some time to burn this weekend and the domain was available for $3, lol.

Reference image analysis for extracting images/maps for use with ControlNet. A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images using a central control panel.

Upscaling ComfyUI workflow.

When rendering human creations, I still find significantly better results with 1.5-based models.

Opening the image in stable-diffusion-webui's PNG info, I can see that there are indeed two different sets of prompts in that file, and for some reason the wrong one is being chosen.

ComfyICU: imgur for sharing ComfyUI workflows.

I am working with 4 GB of VRAM, so it takes quite some time to load a checkpoint each time I load a workflow.
I uploaded the workflow to GitHub.

There are big holes in the basic functionality of ComfyUI, and as a result there are nodes that basically can't do much or are very narrow in scope.

We learned that downloading other workflows and trying to run them often doesn't work because of missing custom nodes, unknown model files, etc.

In the ComfyUI Manager, select "Install Models", then scroll down to the ControlNet models and download the second ControlNet tile model (it specifically says in the description that you need this for tile upscaling).

Introducing ComfyUI Launcher!

Thank you. Also, I forgot to ask.

Finally, the tiles are almost invisible 👏😊.

Afterwards you can use the same latent and tweak start and end to manipulate it.

Normalize simple workflows.

Scale the image down to 1024 px (after the user has masked the parts of the image which should be affected), pick up the prompt, go through ControlNet to the sampler and produce a new image (or the same as the original if no parts were masked), then upscale the result 4x.

Utilizing "KSampler" to re-generate the image, enhancing the integration between the background and the character. I would love to see if I can do a complete head swap with that same concept! I do it using ComfyUI with ReActor, plus LoadVideo and SaveVideo from the N-Suite plugin, and a standard Load Image node for the face to insert.

Allo! I am beginning to work with ComfyUI, moving from A1111. I know there are so many workflows published to Civitai and other sites; I am hoping to find a way to dive in and start working with ComfyUI without wasting much time on mediocre or redundant workflows, and am hoping someone can help me by pointing me toward a resource for finding some of the better-developed Comfy workflows.

Like the Leonardo AI upscaler. I include another text box so I can apply my custom tokens or magic prompts.

THE LAB EVOLVED is an intuitive, ALL-IN-ONE workflow.

ComfyUI doesn't have a mechanism to help you map your paths and models against my paths and models.
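The resize logic described above (scale the masked image down to 1024 px, sample, then upscale the result 4x) can be sketched as plain arithmetic. Rounding down to multiples of 8 is my assumption, since samplers generally want latent-friendly dimensions; the 1024 target and 4x factor come from the comment itself.

```python
def scale_to_long_side(width: int, height: int, target: int = 1024) -> tuple[int, int]:
    """Scale so the longer side equals `target`, keeping the aspect ratio,
    rounded down to multiples of 8 (latent-friendly dimensions)."""
    ratio = target / max(width, height)
    return (int(width * ratio) // 8 * 8, int(height * ratio) // 8 * 8)

# A 4K input is sampled at 1024x576, then the result is upscaled 4x.
w, h = scale_to_long_side(3840, 2160)
final = (w * 4, h * 4)
```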
Later, in some new tutorials I've been working on, I'm going to cover the creation of various modules.

Bring back old backgrounds! I finally found a workflow that does good 3440x1440 generations in a single go, and while getting it working with IP-Adapter I realised I could recreate some of my favourite backgrounds from the past 20 years.

Exactly this: don't try to learn ComfyUI by building a workflow from scratch.

Other programs might not have that issue, but they are trading stability for the flexibility and ease of experimentation that ComfyUI offers.

- We have amazing judges like Scott DetWeiler and Olivio Sarikas (if you have watched any YouTube ComfyUI tutorials, you probably have watched their videos).

ComfyUI Workflows. Create animations with AnimateDiff.

Hi guys, I wrote a ComfyUI extension to manage outputs and workflows. As such, I find it increasingly useless to download workflows from sites that claim to be "super…".

To push the development of the ComfyUI ecosystem, we are hosting the first contest dedicated to ComfyUI workflows! Anyone is welcome to participate.

What is the best workflow that people have used with the most capability without using custom nodes? Use custom nodes.

I've put a few labels in the flow for clarity.

Hi community, I wanted to finally share with you the results of a month of hard work.

Choose the appropriate model. Fill in your prompts.

Thanks for sharing this setup.

ComfyUI is updated frequently and things change a lot, so older workflows aren't guaranteed to still work without some updating or replacing of old nodes that may or may not still work.

I'm sure that someone knows of such a workflow.

It was hard to have a quick view of the workflow to get a sense of what was used.
SD.Next is as simple as clicking the model in the "Reference models" folder of the networks section and waiting for the download to complete.

For a dozen days, I've been working on a simple but efficient workflow for upscaling. So, when you download the AP Workflow (or any other workflow), you have to review each and every node to be sure that they point to your version of the model that you see in the picture.

Best simple but capable workflow.

I like the simple interface of Fooocus, and I am also a big fan of the flexibility of ComfyUI.

Add a preview.

My long-term goal is to use ComfyUI to create multi-modal pipelines that can reach results as good as the ones from the AI systems mentioned above, without human intervention. They depend on complex pipelines and/or Mixture of Experts (MoE) setups that enrich the prompt in many different ways.

Step 3: Update ComfyUI. Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options). Step 5: Drag and drop a sample image into ComfyUI. Step 6: The fun begins! If the queue didn't start automatically, press Queue Prompt.

This is the image in the file, converted to a JPG.

ControlNet Workflow.

So instead of having a single workflow with a spaghetti of 30 nodes, it could be a workflow with 3 sub-workflows, each with 10 nodes, for example.

As far as I know, Fooocus is based on Comfy, so I thought maybe there are Comfy workflows which adapt the power of Fooocus.

For now, I have to manually copy the right prompts. I share many results, and many people ask me to share.

It can be a little intimidating starting out with a blank canvas, but by bringing in an existing workflow, you can have a starting point that comes with a set of nodes all ready to go. I know it's simple for now.
Oftentimes I just get meh results with not much interesting motion when I play around with the prompt boxes, so I'm just trying to get an idea of your methodology for setting up and tweaking the prompt-composition part of the flow.

Copy that (clipspace) and paste it (clipspace) into the Load Image node directly above (assuming you want two subjects).

Hope this helps. Then open your destination workflow and Ctrl-V.

For example, it would be very cool if one could place the node numbers on a grid.

The ControlNet input is just 16 FPS in the portal scene, rendered in Blender, and my ComfyUI workflow is just your single ControlNet video example, modified to swap the ControlNet for QR Code Monster and to use my own input video frames and a different SD model + VAE, etc.

Download one of the dozens of finished workflows from Sytan/Searge/the official ComfyUI examples.

The API workflows are not the same format as an image workflow; you'll create the workflow in ComfyUI and use the "Save (API Format)" button under the Save button.

I came across ComfyUI purely by chance, and despite the fact that there is something of a learning curve compared to a few others I have tried, it's well worth the effort, since even on a low-end machine image generation seems to be much quicker (at least when using the default workflow).

Do you have ComfyUI Manager?

Once the final image is produced, I begin working with it in A1111: refining, photobashing in some features I wanted, and re-rendering with a second model, etc.

1.5-based models still give me greater detail than SDXL 0.9.
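A workflow saved with "Save (API Format)", as mentioned above, can be queued against a running ComfyUI instance over HTTP. A minimal sketch, assuming a default local server at 127.0.0.1:8188 and ComfyUI's standard /prompt endpoint, which expects the API-format workflow wrapped in a {"prompt": ...} payload:

```python
import json
import urllib.request

def build_prompt_request(workflow: dict, host: str = "127.0.0.1:8188") -> urllib.request.Request:
    """Wrap an API-format workflow in the JSON payload ComfyUI's
    /prompt endpoint expects, ready to be sent with urlopen()."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        f"http://{host}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# With a server running, sending the request queues the workflow:
# urllib.request.urlopen(build_prompt_request(api_workflow))
```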
The advantage that this platform has is its built-in community, its ease of use, and the ability to experiment with Stable Diffusion.

ComfyUI Inpaint Color Shenanigans (workflow attached). In a minimal inpainting workflow, I've found that both: the color of the area inside the inpaint mask does not match the rest of the 'no-touch' (not masked) rectangle (the mask edge is noticeable due to the color shift, even though the content is consistent), and the rest of the 'untouched' rectangle's…

Workflow: trying to recreate my SD 1.5 'sketch to image' workflows in SDXL.

Search for: 'resnet50'. And you will find:

And in the examples on the workflow page that I linked, you can see that the workflow was used to generate several images that do need the face restore. I even doubled it. With a higher config it seems to have decent results.

ControlNet Depth ComfyUI workflow.

I feel like if you are reeeeaaaallly serious about AI art then you need to go Comfy for sure! I'm also just transitioning from A1111, hence using a custom CLIP text encode that emulates the A1111 prompt weighting so I can reuse my A1111 prompts for the time being; for any new stuff I will try to use native ComfyUI prompt weighting.

Image generation (creation of the base image).

So in this workflow, each of them will run on your input image, and you…

Hi Reddit! In October, we launched https://comfyworkflows.com to make it easier for people to share and discover ComfyUI workflows.

Have fun playing with those numbers ;)

You upload an image -> unsample -> KSampler (Advanced) -> the same recreation of the original image.

Thanks in advance! Just open one workflow, Ctrl-A, Ctrl-C.

Eventually you'll find your favorites, which enhance how you want ComfyUI to work for you.

Basically, two nodes are doing the heavy lifting.

Hi community.
With Python, the easiest way I found was to grab a workflow JSON, manually change the values you want to a unique keyword, then use Python to replace that keyword with the new value.

It didn't work out. It doesn't return any errors.

If you copy your nodes from one workflow, they will still be in memory to paste into a new workflow.

Rename it "Prompt A".

Also, embedding the full workflow into images is so nice coming from A1111, where half the extensions either don't embed their params or don't reuse those params.

Step 1: Download the SDXL Turbo checkpoint.

Start by installing 'ComfyUI Manager'; you can Google that.

This is an interesting implementation of that idea, with a lot of potential.

We also have some images that you can drag and drop into the UI to queue the flow, and you should get a yellow image from the Image Blank.

Website: https://youml.com

Txt-to-img, img-to-img, inpainting, outpainting, image upscale, latent upscale, multiple characters at once, LoRAs, ControlNet, IP-Adapter, but also video generation, pixelization, 360 image generation, and even live painting.

A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make.

I'm perfecting the workflow I've named Pose Replicator. It will output this resolution to the bus.

Breakdown of workflow content. Everyone uses custom nodes just to get…

But I really wanted to update them with SDXL. I thought it would be as easy as replacing all of the components with SDXL equivalents of all of my…

OK guys, here's a quick workflow from a Comfy noobie.
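The keyword-replacement trick described above can be sketched as follows. The node layout and the "__PROMPT__" placeholder are illustrative assumptions; the idea is simply to save the workflow with a unique marker string in place of any value you want to vary, then substitute before parsing. (Note that values containing quotes would need escaping, since the substitution happens on the raw JSON text.)

```python
import json

def render_workflow(template_text: str, replacements: dict) -> dict:
    """Substitute unique placeholder keywords in a raw workflow-JSON string,
    then parse the result into a dict ready to queue."""
    for keyword, value in replacements.items():
        template_text = template_text.replace(keyword, value)
    return json.loads(template_text)

# A tiny illustrative fragment of an API-format workflow with a placeholder:
template = '{"6": {"class_type": "CLIPTextEncode", "inputs": {"text": "__PROMPT__"}}}'
workflow = render_workflow(template, {"__PROMPT__": "a castle at dusk"})
```

In practice you would read the template from the file saved via "Save (API Format)" rather than an inline string, as the comment below also suggests.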
📂 Saves all your workflows in a single folder on your local disk (by default under /ComfyUI/my_workflows); customize this location in Settings. Bulk-import workflows, and bulk-export workflows to a downloadable zip file. If you have any suggestions for Workspace, feel free to post them in our GitHub issues or in our Discord!

We love ComfyUI for its ease in sharing workflows, but we dislike how long it takes to try them out. Explore thousands of workflows created by the community.

My current workflow sometimes changes some details a bit; it makes the image blurry or makes the image too sharp.

Creator mode: users (also creators) can convert the ComfyUI workflow into a web application, run the application locally, or publish it to comfyflow.app, and finally run ComfyFlowApp locally.

The workflow in the example is passed to the script as an inline string, but it's better (and more flexible) to have your Python script load it from a file instead.

These would be an awesome base for some specific adjustments.

I've seen people say ComfyUI is better than A1111 and gives better results, so I wanted to give it a try, but I can't find a good guide or info on how to install it on an AMD GPU. There are also conflicting resources: the original ComfyUI GitHub page says you need to install DirectML and then somehow run it if you already have A1111, while other places say you need Miniconda/Anaconda to run it.

ComfyUI Workflows are a way to easily start generating images within ComfyUI.