The dream · Stamets@lemmy.world to People Twitter@sh.itjust.works · 1 year ago
candle_lighter@lemmy.ml · 1 year ago: I want said AI to be open source and run locally on my computer
TalesFromTheKitchen@lemmy.ml · 1 year ago (edited): I can run a pretty alright text-generation model and the Stable Diffusion models on my 2016 laptop with two GTX 1080M cards. You can try these tools: Oobabooga’s text-generation-webui for text, and Automatic1111’s web UI for image generation. They might not be the most performant applications, but they are very easy to use.
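For readers who would rather script this than click through a web UI, here is a minimal sketch of fully local text generation with Hugging Face transformers; the gpt2 model is only an illustrative stand-in, not the commenter’s setup:

```python
# Minimal sketch: fully local text generation with Hugging Face transformers.
# "gpt2" is an illustrative, small (~500 MB) model; swap in any checkpoint you have locally.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # runs on CPU or an older GPU
result = generator("The dream of running AI locally is", max_new_tokens=40)
print(result[0]["generated_text"])
```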
lad@programming.dev · 1 year ago: You seem to have missed the point a bit
TalesFromTheKitchen@lemmy.ml · 1 year ago: Just read it again and you’re right. But maybe someone else finds it useful.
I do, so thank you :)
lad@programming.dev · 1 year ago: Funny how these comments appeared only today on my instance; I guess there are still some federation issues.
new phone who dis?
intensely_human@lemm.ee · 1 year ago:
“I wish I had X”
“Here’s X”
What point was missed here?
lad@programming.dev · 1 year ago:
The post: “I wish I had X instead of Y”
The comment: “And run it [X] locally”
The next comment: “You can run Y locally”
Also, the person I said this to literally admitted that I was right, and you’re still arguing.
tegs_terry@feddit.uk · 1 year ago: I want mine in an emotive-looking airborne bot like Flubber
Grappling7155@lemmy.ca · 1 year ago: Check out /r/LocalLLaMA, Ollama, and Mistral. This is all possible and became a lot easier to do recently.
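As a rough sketch of how little glue code the Ollama route needs (assuming the Ollama daemon is running on its default port and a model such as Mistral has already been pulled), a local script can call its HTTP API directly:

```python
# Minimal sketch: query a locally running Ollama server over its HTTP API.
# Assumes Ollama is listening on its default port (11434) and that
# the "mistral" model has been pulled beforehand (e.g. `ollama pull mistral`).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": "Explain federation in one sentence.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```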
PsychedSy@sh.itjust.works · 1 year ago: A lot of it can if you have a big enough computer.
CeeBee@lemmy.world · 1 year ago: It’s getting there. In the next few years, as hardware gets better and models get more efficient, we’ll be able to run these systems entirely locally. I’m already doing it, but I have some higher-end hardware.
Xanaus@lemmy.ml · 1 year ago: Could you please share your process for us mortals?
CeeBee@lemmy.world · 1 year ago: Stable Diffusion SDXL Turbo model running in Automatic1111 for image generation. Ollama with ollama-webui for an LLM. I like the Solar:7b model; it’s lightweight, fast, and gives really good results. I have some beefy hardware that I run it on, but it’s not necessary to have.
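For comparison, a minimal sketch of driving SDXL Turbo from a script with the diffusers library rather than the Automatic1111 UI described above; the checkpoint id is Stability AI’s published sdxl-turbo model, and a CUDA GPU is assumed:

```python
# Minimal sketch: one-step text-to-image with SDXL Turbo via diffusers.
# Not the Automatic1111 setup described above; assumes a CUDA GPU with enough VRAM.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
)
pipe.to("cuda")

# SDXL Turbo is distilled for 1-4 sampling steps with guidance disabled.
image = pipe(
    prompt="a cozy cabin in a snowy forest, digital art",
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("cabin.png")
```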
dream_weasel@sh.itjust.works · 1 year ago (edited): Ha. Lame. Edit: lol. Sign out of Google, nerds. Bring me your hypocrite neckbeard downvotes.
Ookami38@sh.itjust.works · 1 year ago: I want some of whatever you have, man.
dream_weasel@sh.itjust.works · 1 year ago: Reckless disregard for the opinions of the fanatically security and privacy conscious? Or just a good-natured appreciation for pissing people off? :)
Drugs. I want the drugs.