Not sure if this goes here or if this post will be hated on, but I want to host AI like LLMs and ComfyUI's newer models locally. I'm not sure what type of setup or parts would work best on a possibly slim budget, and I'm also not sure if now is even the time to buy, with inflation and such.

I don't have a price in mind yet, but I'm wondering how much it would cost and what parts I might need.

If you have any questions or concerns, please leave a comment.

  • KairuByte@lemmy.dbzer0.com · 5 hours ago

    It really comes down to what kind of speed you want. You can run some LLMs on older hardware “just fine,” and many models don't even need a dedicated GPU. The problem is that the time it takes to generate responses gets to be crazy.

    I ran DeepSeek on an old R410 for shits and giggles a while back, and it worked. It just took multiple minutes to actually give me a complete response.
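    If you want to see what CPU-only inference looks like in practice, here's a minimal sketch using llama-cpp-python with a quantized GGUF model (the model path and settings below are placeholders, not a specific recommendation):

        # Minimal CPU-only inference sketch (pip install llama-cpp-python).
        # The model path is a placeholder; any quantized GGUF file you've downloaded works.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./models/some-7b-model.Q4_K_M.gguf",  # placeholder path
            n_ctx=2048,        # context window size
            n_threads=8,       # CPU threads; tune to your core count
            n_gpu_layers=0,    # 0 = run entirely on the CPU
        )

        out = llm("Explain what a GPU does in one sentence.", max_tokens=64)
        print(out["choices"][0]["text"])

    Even a smallish quantized model will be noticeably slow this way on older CPUs, which is the tradeoff described above.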