ComfyUI examples and tips, collected from Reddit

ComfyUI is a powerful and modular GUI for diffusion models with a graph interface. You construct an image generation workflow by chaining different blocks (called nodes) together; some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler. A checkpoint is your main model, and LoRAs add smaller models on top of it to vary the output in specific ways. Because ComfyUI breaks a workflow down into rearrangeable elements, once you understand how the pipes fit together you can design your own unique workflows (text2image, img2img, upscaling, refining, and so on). Continuing with the car analogy, learning ComfyUI is a bit like learning to drive with a manual shift. You can explore its features, templates, and examples on GitHub.

The best workflow examples are the GitHub examples pages. The ComfyUI_examples repo contains examples of what is achievable with ComfyUI, and all the images in it contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create them: download and drop any image from the site into ComfyUI and it will load that image's entire workflow. Civitai also has a ton of examples, including many ComfyUI workflows that you can download and explore; there is a lot of stuff there and it may be a bit overwhelming, but it is worth exploring (note that the site has a lot of NSFW content). Start with simple workflows. If a loaded workflow needs custom nodes you don't have, the boxes show up in red, meaning something is missing; the ComfyUI Manager will identify what is missing and download it for you.

There is also a guide about how to set up ComfyUI on your Windows computer to run Flux.1. Flux is a family of diffusion models by Black Forest Labs; the guide covers an introduction to Flux.1, an overview of the different versions of Flux.1, Flux hardware requirements, and how to install and use Flux.1 with ComfyUI. If you don't have t5xxl_fp16.safetensors or clip_l.safetensors already in your ComfyUI/models/clip/ directory, you can find them at the linked page, and for easy-to-use single-file versions there is an FP8 checkpoint version.

For those that don't know what unCLIP is: it's a way of using images as concepts in your prompt in addition to text. CLIPVision extracts the concepts from the input images, and those concepts are what is passed to the model.

Pro-tip for anyone running both ComfyUI and stable-diffusion-webui: you don't have to duplicate your models. You can use mklink to link to your existing models, embeddings, LoRAs, and VAE, for example: F:\ComfyUI\models>mklink /D checkpoints F:\stable-diffusion-webui\models\Stable-diffusion. Alternatively, ComfyUI has a config file called extra_model_paths.yaml: all you have to do is change base_path to your stable-diffusion-webui path and remove .example from the filename.
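For reference, here is roughly what that file looks like once renamed. This is a sketch based on the extra_model_paths.yaml.example that ships with ComfyUI; the base_path below is a placeholder for your own webui install, and the subfolders assume the default A1111 layout:

```yaml
# extra_model_paths.yaml, a sketch based on the shipped .example file.
# Change base_path to wherever your stable-diffusion-webui actually lives.
a111:
    base_path: F:\stable-diffusion-webui

    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: |
        models/Lora
        models/LyCORIS
    upscale_models: |
        models/ESRGAN
        models/RealESRGAN
        models/SwinIR
    embeddings: embeddings
    hypernetworks: models/hypernetworks
    controlnet: models/ControlNet
```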
On upscaling: hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do it. Would some of you have some tips, or perhaps even a workflow, to get a decent 4x or even just 2x upscale from a 512x768 image in ComfyUI while using SD1.5 models? I'm not entirely sure what Ultimate SD Upscale does, so I'll answer generally as to how I do upscales: I do a first pass at low res (say, 512x512), then I use the IterativeUpscale custom node for the rest. One caveat: when I run images through the 4x_NMKD-Siax_200k upscaler, for example, the eyes get really glitchy, blurry, or deformed, even with negative prompts in place for eyes. From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, the other is non-latent scaling, and now there's also a `PatchModelAddDownscale` node. You can encode then decode back to a normal KSampler with a 1.0 denoise, then run SD1.5 with LCM at 4 steps and 0.2 denoise to fix the blur and soft details. You can also just pass the latent along without decoding and encoding, which is much faster, but it causes problems with anything less than 1.0 denoise, due to the VAE; maybe there is an obvious solution, but I don't know it.

You can also split sampling between two checkpoints: for example, sampling with the A model for only 10 steps, then synthesizing another latent, injecting noise, and proceeding with 20 steps using the B model. For a concrete case, see the SDXL Base + SD 1.5 Refiner workflow posted on r/StableDiffusion, but remember that SDXL does not play well with 1.5, so that mix may give you a lot of your errors. As for SDXL itself, I think it is just the same as 1.5 but with 1024x1024 latent noise; I just find it weird that in the official example the nodes are not the same as if you try to add them by yourself.

There are also examples demonstrating how to do img2img. Img2img works by loading an image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0. The denoise controls the amount of noise added to the image before sampling: the lower it is, the more of the original image survives.
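To make that concrete, here is a minimal sketch of such an img2img graph in ComfyUI's API (JSON) format, written as a Python dict. The node IDs, checkpoint name, input filename, and prompts are placeholders rather than anything from the posts above; links are ["source_node_id", output_index] pairs:

```python
# Minimal img2img graph in ComfyUI's API (prompt) format.
# CheckpointLoaderSimple outputs are (MODEL, CLIP, VAE) = indices 0, 1, 2.
img2img_prompt = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly.safetensors"}},
    "2": {"class_type": "LoadImage",
          "inputs": {"image": "input.png"}},
    "3": {"class_type": "VAEEncode",  # pixels -> latent, as described above
          "inputs": {"pixels": ["2", 0], "vae": ["1", 2]}},
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a photo of a castle", "clip": ["1", 1]}},
    "5": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["4", 0],
                     "negative": ["5", 0], "latent_image": ["3", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 0.6}},  # < 1.0 keeps structure of the input
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "SaveImage",
          "inputs": {"images": ["7", 0], "filename_prefix": "img2img"}},
}
```

At denoise 1.0 the latent is fully re-noised and the input image is essentially ignored; somewhere around 0.4 to 0.7 the composition survives while the details change.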
On tutorials: what I meant was tutorials involving custom nodes. IPAdapter with attention masks is a nice example of the kind of tutorial I'm looking for; in other words, I'd like to know more about new custom nodes, or about inventive ways of using the more popular ones. Perhaps my Google-fu is weak, but I couldn't find workflows to directly import into Comfy, whether through the manual (which needs updating, imo) or through searching Reddit. The WAS suite has some workflow stuff in its GitHub links somewhere as well. Some posted workflows also rely heavily on third-party nodes from unknown extensions. Anyway, I'm sharing this because these things are not well documented, thanks to the frankly arcane methods some creators use to provide examples, and to the fact that many images they put up as examples are badly compressed or made with older versions.

I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner, model merging, LoRAs, and so on, all in one workflow, would be awesome. For example: enable the SDXL base model, and it would auto-populate my starting positive and negative prompts and the sampler settings that work best with that model. I feel like this is possible; I am still semi-new to Comfy.

On loading workflows from example images: I can't load them using a second computer. I can reach ComfyUI through 192.168.1:8188, but when I try to load a flow through one of the example images it just does nothing; through localhost:8188 it works fine. Any ideas on this? One guess is that the workflow is looking for the Control-LoRA models in a cached directory on the original machine; you should try to click on each of the model names in the ControlNet stacker node and choose the path where your models actually live. It also seems that the order you install things in can make a difference, and I found that sometimes simply uninstalling and reinstalling will fix things (I'm not sure what's wrong with the portable version of ComfyUI, because I don't use it).

On video: I've updated the ComfyUI Stable Video Diffusion repo to resolve the installation issues people were facing earlier (sorry to everyone that had them!). That said, my video on it is outdated now, because ComfyUI has officially implemented SVD natively: update ComfyUI, copy the previously downloaded models from the ComfyUI-SVD checkpoints into your Comfy models SVD folder, and just delete the ComfyUI-SVD custom nodes. I get …86s/it on a 4070 with the 25-frame model and 2.75s/it with the 14-frame model, though results seem very hit and miss; most of what I'm getting looks like 2D camera pans. For LCM, I haven't used it, but I believe this is correct: only the LCM Sampler extension is needed, as shown in this tutorial: https://youtu.be/ppE1W0-LJas. My own tests left me still with questions, lol.

A few other things worth a look. With some nervous trepidation, I released my first node for ComfyUI, an implementation of the DemoFusion iterative mixing sampling process. ComfyUI Extra Samplers is a repository of extra samplers usable within ComfyUI for most nodes; I tried the pack and it seemed promising, but I can't find info on the samplers or how they improve on the existing ones. There's also an Image Processing group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel. One shared workflow additionally uses the ModelSamplingDiscrete node from the WAS node suite, supposedly for chained LoRAs; in my tests that node made no difference whatsoever, so it can be ignored. The TL;DR of another workflow: it makes an image from your prompt without a LoRA, runs it through ControlNet, and uses that to make a new image with the LoRA. And I improved on my previous expressions workflow for ComfyUI by replacing the attention-couple nodes with area composition ones; the new version uses two ControlNet inputs (a 9x9 grid of openpose faces and a single openpose face), is much more coherent, and relies heavily on the IPAdapter source image, as you can see in the gallery. The images above were all created with this method.

If you look at the ComfyUI examples for area composition, you can see that they're just using the nodes Conditioning (Set Mask / Set Area) -> Conditioning Combine -> the positive input on the KSampler. Even with 4 regions and a global condition, they just combine them two at a time until it all becomes a single positive condition to plug into the sampler.
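As a sketch of that pairwise pattern in the same API format: the node IDs and region sizes here are made up, and "4" and "5" stand for two CLIPTextEncode prompt nodes assumed to exist elsewhere in the graph.

```python
# Two regional prompts pinned to areas, then merged pairwise.
regional = {
    "10": {"class_type": "ConditioningSetArea",
           "inputs": {"conditioning": ["4", 0],
                      "width": 512, "height": 1024,  # left half of 1024x1024
                      "x": 0, "y": 0, "strength": 1.0}},
    "11": {"class_type": "ConditioningSetArea",
           "inputs": {"conditioning": ["5", 0],
                      "width": 512, "height": 1024,  # right half
                      "x": 512, "y": 0, "strength": 1.0}},
    "12": {"class_type": "ConditioningCombine",      # two at a time
           "inputs": {"conditioning_1": ["10", 0],
                      "conditioning_2": ["11", 0]}},
    # ["12", 0] then plugs into the KSampler "positive" input; with more
    # regions you keep chaining ConditioningCombine nodes pairwise.
}
```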
On clip skip: a higher clip skip in A1111 (lower, i.e. more negative, in ComfyUI's terms) equates to LESS detail from CLIP (not to be confused with detail in the image). In ComfyUI this is set with the CLIP Set Last Layer node, where stop_at_clip_layer = -2 corresponds to A1111's clip skip of 2. Note that you can't change clip skip and get anything useful from some models (SD2.0 and Pony, for example; Pony, I think, always needs 2) because of how their CLIP is encoded.

On inpainting: standard A1111 inpaint works mostly the same as the ComfyUI example. Replicating A1111's behaviour exactly, though, would require many specific image-manipulation nodes to cut out the image region, pass it through the model, and paste it back, and creating such a workflow with the default core nodes of ComfyUI is not possible at the moment. I am still curious how A1111 handles various processes at the latent level, which ComfyUI does explicitly with its node-based approach.

Thank you u/AIrjen, love the variant generator, super cool. If anyone else is reading this and wants the workflows, here are a few simple SDXL workflows using the new OneButtonPrompt nodes and saving the prompt to file (I don't guarantee tidiness).

On security: most of the security issues in ComfyUI come from the Manager, which isn't part of the base install precisely because these types of issues have not been solved yet. Base ComfyUI doesn't even connect to the internet for anything unless you run the update script.

Finally, on building on top of ComfyUI: my ComfyUI backend is an API that can be used by other apps if they want to do things with Stable Diffusion, so chaiNNer, for example, could add support for the ComfyUI backend and nodes if they wanted to. You can also easily upload and share your own ComfyUI workflows, so that others can build on top of them. Why I built this: I just started learning ComfyUI, and I really like how it saves the workflow info within each image it generates; this site was created to build on that. There is also ComfyUIMini, which uses the built-in ComfyUI API to send data back and forth between the ComfyUI instance and the interface. It's completely free and open-source (donations would be much appreciated); you can find the download as well as the source at https://github.com/ImDarkTom/ComfyUIMini. I provide one example JSON to demonstrate how it works; if you find it confusing, please post here for help or create an issue on GitHub.
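For anyone wanting to script against it, here is a minimal sketch of queueing a workflow over that HTTP API, assuming a default local instance listening on 127.0.0.1:8188 and a graph in the API format shown earlier:

```python
# Queue a prompt on a running ComfyUI instance via its built-in HTTP API.
import json
import urllib.request

def queue_prompt(graph: dict, host: str = "127.0.0.1:8188") -> str:
    """POST an API-format workflow to /prompt and return its prompt_id."""
    data = json.dumps({"prompt": graph}).encode("utf-8")
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]

# Usage with the img2img graph sketched earlier:
#   prompt_id = queue_prompt(img2img_prompt)
```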
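ComfyUI executes queued prompts asynchronously, so the simple approach is to poll until the id shows up in the history. A companion sketch, under the same assumptions, for pulling a finished prompt's first saved image back out through /history and /view:

```python
# Fetch the outputs of a finished prompt and download the first saved image.
import json
import urllib.parse
import urllib.request

def get_first_image(prompt_id: str, host: str = "127.0.0.1:8188") -> bytes:
    with urllib.request.urlopen(f"http://{host}/history/{prompt_id}") as resp:
        history = json.loads(resp.read())[prompt_id]
    for node_output in history["outputs"].values():
        for image in node_output.get("images", []):
            params = urllib.parse.urlencode({
                "filename": image["filename"],
                "subfolder": image["subfolder"],
                "type": image["type"],
            })
            with urllib.request.urlopen(f"http://{host}/view?{params}") as resp:
                return resp.read()
    raise RuntimeError("no images in prompt outputs")
```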