I've had quite a few people ask me what programs I use and how to install them. I usually link to a video on how to install them, but that's kinda lazy, so I figured I'd make a more detailed list of what's needed, plus system specs, to either make images or mess around with LLMs (ChatGPT kinda stuff). This is for Nvidia GPUs and Windows 10 only; I have little to no experience with AMD GPUs or Linux.
Part 1 Stable Diffusion and Comfy UI:
System requirements:
- A 1000 series or newer Nvidia GPU with at least 8GB of VRAM (can be lower, but you will need lots of system RAM to offset the difference)
- A CPU with at least 4 cores (6 cores / 12 threads recommended)
- At least 16-24GB of system RAM; 32-64GB recommended for extra headroom
How to install Stable Diffusion using ComfyUI.
If you want to install it quickly, grab it from https://www.comfy.org/download and it takes care of most everything (thank yisikopato for this one ;) ). If you want to install it manually or use the portable version, follow the steps below.
You will need a lot of free storage space beforehand. I'd recommend at least 20GB of free space to get started with just basic Stable Diffusion. If you choose to install Ollama and LLMs, add another 30GB to the mix.
Prerequisites: Python 3.12, plus CUDA Toolkit 12.9 if you are using an Nvidia 4000 series or greater, or CUDA Toolkit 12.6 if you're using a 3000 series or below (it will work with 1000 series cards). After you install those and restart your system, install PyTorch: on the PyTorch site, choose the version of CUDA you just installed, open a command prompt (Windows key+R and type cmd), then copy the command from "Run this Command:" into the prompt and it will download and install PyTorch. Lastly, Git, so you can install the ComfyUI Manager later: https://git-scm.com/downloads/win
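The PyTorch site generates the exact command for you based on what you pick, but as a rough sketch (assuming the pip install method and the CUDA 12.6 build; the site will give you a different URL for other CUDA versions):

```shell
# Example pip command for PyTorch with CUDA 12.6 - copy the real one from pytorch.org:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126

# Quick sanity check afterwards - should print True if PyTorch can see the GPU:
python -c "import torch; print(torch.cuda.is_available())"
```

If that check prints False, the CUDA Toolkit version and the PyTorch build you installed probably don't match.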
Create a folder wherever you want ComfyUI on your system (an SSD or NVMe drive is recommended) and extract the ComfyUI zip file into the folder you created. Then run the "run_nvidia_gpu.bat" file to initially install all the requirements for the web user interface. After a while it should show an address in the command prompt, the default is '127.0.0.1:8188', and it may automatically open your default browser and take you to the default workflow viewer. Once you've verified that it works, close the command prompt and head over to https://github.com/Comfy-Org/ComfyUI-Manager and scroll down to "Installation". Since we used the portable version, use the 2nd method of installing the manager: right click the link "scripts/install-manager-for-portable-version.bat", choose 'Save link as', save it inside your ComfyUI folder (where the run_nvidia_gpu.bat file is), and run the script to automatically install the manager.
There is a download button on the right next to the "Create" button. Download one or both of the main models (6GB each), go back to your ComfyUI main folder, navigate to ComfyUI\models\checkpoints\, and place them inside. While you don't need LoRAs, if you want to make something with a specific character you're going to need one. My Character Roster List has links to all the characters listed, just click on their name. But keep in mind not all LoRAs will work with the main model you are using; to check, look under "details" on the right for "Base model" and see what model it used (example: https://imgur.com/a/GWuefCF ).
Ok, I think that covers all the bases on getting it installed and running... hopefully I didn't miss something.
It takes some time and practice to get used to everything. The best way to figure things out is to open a new blank workflow and just play around with adding nodes to see what happens; it will usually give you an error if a node it needs is missing.
Part 2 LLMs (Large Language Models):
This one is as simple as following a video tutorial: https://youtu.be/Wjrdr0NU4Sk?si=1CX2mBVH0iVMWoZs is by far the best one that walks you through how to set up everything (it's also the one I used to install Ollama on my AI home server). I do recommend downloading the CUDA Toolkit and PyTorch and installing those, because it will speed up your TPS (tokens per second) by letting the GPU be better utilized. My Titan Xp got about 9 TPS pre-CUDA-and-PyTorch no matter the model; after installing, it averages around 30-60 TPS depending on model size.
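If you want to confirm the GPU is actually being used, a couple of quick checks from a command prompt (assuming a reasonably recent Ollama build, which has the `ps` command):

```shell
# Lists loaded models and whether each is running on GPU or CPU:
ollama ps

# Shows VRAM usage and GPU utilization - run it while a model is generating:
nvidia-smi
```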
As for models that will work with Ollama, https://huggingface.co/TheDrummer has a huge list of all different kinds of models, and https://ollama.com/search has a good suite of starter models. Just remember the size of the file in GB is roughly how much VRAM it will take to run the model, so if you're running an 8GB GPU, you can run 12B models but they will need to be quantized to below 8GB. You 'can' run models larger than your GPU's VRAM limit, but it will be significantly slower because it will spill over into system RAM.
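To put a number on that rule of thumb, here's a tiny sketch (helper name and bits-per-weight figures are my own, not anything from Ollama) estimating a quantized model's file size as parameters x bits-per-weight / 8:

```python
def est_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough quantized model file size in GB: params * bits / 8."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 12B model at Q4 (~4.5 bits/weight effective) is about 6.75 GB,
# which fits on an 8GB card; the same model at 8 bits (~12 GB) would not.
print(round(est_size_gb(12, 4.5), 2))
print(round(est_size_gb(12, 8.0), 2))
```

Real GGUF files land close to this, give or take some overhead, which is why the file-size-equals-VRAM rule works well enough in practice.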
You don't need any LLMs to run Stable Diffusion btw; they're just something fun to play around with, and also... *looks over shoulder* uncensored chatbot models can RP as basically any character you describe to them, and be NSFW as well *wink*
Whew I think that covered everything, hopefully I'm not forgetting anything.
Thanks for including the LLM part. I tried it once a couple years ago but wasn't satisfied. Since then I found CrushOn.ai, which is amazing, but they do have restrictions on content (even those I've learned how to trick to get what I want), not to mention I'm paying a lot for it. It would be amazing to just do it all locally, so maybe I'll check it out again with your tutorial here. Thanks again!
Oh, and I did forget one thing: to install .gguf files into Open WebUI (from the tutorial video), it's in the "experimental" tab in the admin settings for models.
You might have to do some trial and error to find a good RP chatbot, the one I use is https://huggin
Yeah, I taught someone how to generate images with ComfyUI and realized they added this. It also gives you a whole window for generating images instead of doing it all in your browser. I am still on the portable version though because... old habits.
For LoRAs, they go in the 'Models' folder inside the EasyDiffusion folder. Main models (checkpoints) go in the 'Stable-diffusion' folder inside that Models folder.
You'll need to find where you installed Easy Diffusion and go from there. Say you installed it on your C drive, it would be C:\EasyDiffusion\models\