Not sure if this goes here or if this post will be hated on, but I want to host AI locally, like LLMs and the newer ComfyUI models, and I'm not sure what type of setup or parts would work best on a fairly slim budget. I'm also not sure whether now is even the time to buy, with inflation and such.
I don't have a price in mind yet, but I'm wondering how much it would cost and what parts I might need.
If you have any questions or concerns, please leave a comment.
I’m running a couple of smaller chat models on my mid-range new-ish laptop and they’re fairly quick. Try out Jan with something like their jan-nano model on whatever you’ve already got and get a feel for what you can do.
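If you ever want to script against it, Jan can also expose a local OpenAI-compatible API, so plain Python works. A minimal sketch, assuming the API server is enabled in Jan's settings (port 1337 was the default for me, and the model name has to match whatever your install reports; both are just examples):

```python
# Minimal sketch: chat with a model served by Jan's local OpenAI-compatible API.
# Assumptions: Jan's local API server is enabled on port 1337 (check your settings)
# and "jan-nano" matches the model name shown in the Jan UI.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed")

reply = client.chat.completions.create(
    model="jan-nano",  # use the exact name your Jan install reports
    messages=[{"role": "user", "content": "Why does VRAM matter for local LLMs?"}],
)
print(reply.choices[0].message.content)
```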
Alex Ziskind on YT tests a number of on-site AI devices: https://youtu.be/QbtScohcdwI
It really comes down to what kind of speed you want. You can run some LLMs on older hardware “just fine,” and many models don't even need a dedicated GPU. The problem is that the time it takes to generate responses gets crazy.
I ran DeepSeek on an old R410 for shits and giggles a while back, and it worked. It just took multiple minutes to actually give me a complete response.
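If you want to put a number on "crazy slow," it's easy to time generation yourself. A rough sketch with llama-cpp-python, assuming you have any quantised GGUF model file lying around (the path is a placeholder):

```python
# Rough tokens-per-second check with llama-cpp-python (pip install llama-cpp-python).
# Assumption: "model.gguf" is any quantised GGUF file you already have.
import time
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_ctx=2048, verbose=False)

start = time.time()
out = llm("Explain what a homelab is in three sentences.", max_tokens=128)
elapsed = time.time() - start

generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tok/s")
```

Anything down in the low single digits of tok/s is where the "multiple minutes per answer" experience comes from.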
High RAM for MoE models, high VRAM for dense models, and the highest GPU memory bandwidth you can get.
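The bandwidth part is the big one for generation speed: each new token has to stream roughly all of the active weights out of memory, so bandwidth divided by model size gives a back-of-the-envelope ceiling. A sketch of that arithmetic (the sizes and bandwidth figures are just illustrative):

```python
# Back-of-the-envelope ceiling: tokens/s <= memory bandwidth / bytes read per token.
# Assumption: every generated token touches all active weights once; real numbers land below this.
def max_tokens_per_sec(active_params_billion: float, bits_per_weight: float, bandwidth_gb_s: float) -> float:
    bytes_per_token = active_params_billion * 1e9 * bits_per_weight / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

# 8B dense model at 4-bit quant: dual-channel DDR4 (~50 GB/s) vs an RTX 3090 (~936 GB/s)
print(f"CPU/DDR4 ceiling: ~{max_tokens_per_sec(8, 4, 50):.0f} tok/s")
print(f"RTX 3090 ceiling: ~{max_tokens_per_sec(8, 4, 936):.0f} tok/s")
```

Real throughput comes in under those ceilings because of compute and overhead, but it explains why a 3090 feels so much faster than CPU-only inference.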
For Stable Diffusion models (ComfyUI), you want high VRAM and bandwidth. Diffusion is a GPU-heavy, memory-intensive operation.
Software/driver support is very important for diffusion models and ComfyUI, so your best experience will be on Nvidia cards.
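If you want to sanity-check a card before building full ComfyUI workflows, the same CUDA dependence shows up in the plain diffusers library. A minimal sketch, assuming an Nvidia GPU with working CUDA drivers (the model ID is just a common example):

```python
# Minimal Stable Diffusion test run with diffusers (pip install diffusers transformers torch).
# Assumptions: Nvidia GPU with CUDA drivers; the model ID is only an example checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,  # fp16 roughly halves VRAM versus fp32
)
pipe.to("cuda")  # the step that assumes Nvidia/CUDA driver support

image = pipe(
    "a tidy homelab rack, photorealistic",
    num_inference_steps=20,  # fewer steps = faster prototyping, rougher output
).images[0]
image.save("test.png")
```

If even that small run spills out of VRAM or errors on drivers, bigger models with LoRAs stacked on top in ComfyUI will only be worse.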
I think realistically you need 80 GB+ of RAM for things like Qwen-Image quants (40 GB for the model, another 20-40 GB for LoRA adapters in ComfyUI to get output).
I run a 128 GB AMD Ryzen AI Max+ 395 rig; Qwen-Image takes 5-20 minutes per 720p result in ComfyUI. Batching offers an improvement, and reducing iterations during prototyping makes a huge difference. I haven't tested since the fall, though, and the newer models are more efficient.
I'm running gpt-oss-20b fine on my M3 Mac Mini.
AI said:
To run AI models locally, you'll need a computer with a capable CPU, sufficient RAM, and a powerful GPU.
While it's possible to run some AI models on a laptop, a dedicated desktop setup with a powerful GPU will generally offer better performance. The cost of building a dedicated AI PC can range from around $800 for a budget build to $2,500 for a performance-oriented system.
Hope that helps /s
I wonder if, when generating that price estimate, it took into account all the hikes in RAM pricing that it itself is causing… 🤔
Stupid fucking AI data centers…
I was using an Nvidia 3060 for a while, then had two in one box, then switched to a 3090.
The amount of VRAM is a big factor for decent performance. Getting it not to sound like a predictably repetitive bot, though, is a whole separate thing that's still kind of elusive.
Depends on how fast you want it to run. A Raspberry Pi with an AI hat runs well enough.
What’s an ai hat? Like a red hat? Or a fedora?
Hats are little modules you can stick on your pi for extra functionality!
And they probably do have a Fedora hat…
Crazy! I thought that was a joke. Thanks!
A lot of expansion boards for the Pi are called hats for some reason.
FYI diffusion models are not really LLMs
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
Fewer Letters | More Letters
Git | Popular version control system, primarily for code
NAS | Network-Attached Storage
NUC | Next Unit of Computing, brand of Intel small computers
NVR | Network Video Recorder (generally for CCTV)
PSU | Power Supply Unit
Plex | Brand of media server package
PoE | Power over Ethernet
RAID | Redundant Array of Independent Disks for mass storage
SSD | Solid State Drive mass storage
Unifi | Ubiquiti WiFi hardware brand
VPS | Virtual Private Server (opposed to shared hosting)
11 acronyms in this thread; the most compressed thread commented on today has 9 acronyms.
You forgot the acronym “EVIL.”


