You get what you pay for; maybe stop buying two-for-a-dollar shop fans and then complaining about the quality.
China has certainly shipped a lot of garbage, but so has the US.
What people like you refuse to acknowledge is that China is on a fast track to outperform the US on every metric.
I'm giving Altman money because the OpenAI model was the best, despite being a bit restrictive.
But it seems like it's genuinely degrading to the point of uselessness, and it's really clear it's being lined up to become the mother of all psyops.
Probably going to give FreedomGPT another go.
Looking forward to the time when I own hardware capable of running it locally without super long delays.
I did that: downloaded and installed a tonne of special programming shite, then a bunch of models, and every local model ran so slowly it was barely usable (RTX 3050 or 3060, 12GB VRAM & 48GB RAM).
To be fair, this was when the local models first came out - maybe worth checking out again...
Can you recommend any good local models, or alternative service-based models?
Don't mind parting with a few shekels if it's good.
The software I installed is LM Studio, and I just use the search option built in there to find and download models. I have 8GB of VRAM, so many of the models available there can't run on my machine but may be able to run on yours.
For models, bartowski/Llama-3.2-3B-Instruct-GGUF/Llama-3.2-3B-Instruct-Q8_0.gguf has been my go-to for the past month or so. I also installed the Chinese DeepSeek R1, specifically: DeepSeek-R1-Distill-Llama-8B-GGUF/DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf.
I barely know what I am doing. There might be better models to try or better ways to set it up.
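Worth knowing: besides the chat window, LM Studio can also serve whatever model you've loaded over an OpenAI-compatible local HTTP API (its documented default is port 1234). A minimal sketch of talking to it from Python, assuming the server is running and a model is loaded; the model name and prompt here are placeholders, not anything from the thread:

```python
import json
import urllib.request

def build_chat_payload(model: str, prompt: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask_local_model(prompt: str,
                    model: str = "llama-3.2-3b-instruct",
                    url: str = "http://localhost:1234/v1/chat/completions") -> str:
    """POST a chat request to LM Studio's local server and return the reply text."""
    payload = build_chat_payload(model, prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint mimics the OpenAI API shape, the same payload works against any other OpenAI-compatible local server by swapping the URL.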
You need a quantized model. Quantizing down to four bits massively reduces its memory footprint and lets it run on smaller hardware. Any quant below four bits and it turns to mush.
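The memory math behind this is simple: weight memory is roughly parameters × bits-per-weight. A quick sketch, using the 8B distill mentioned above as the example size (note GGUF quants carry per-block scale metadata, so the effective bits-per-weight I've assumed here are slightly above the nominal width, e.g. roughly 4.5 for Q4_K_M):

```python
GIB = 1024 ** 3  # bytes per GiB

def weight_memory_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate memory for model weights alone (ignores KV cache and overhead)."""
    return n_params * bits_per_weight / 8 / GIB

params_8b = 8e9  # e.g. an 8B model like the R1 Llama distill above
print(f"fp16  : {weight_memory_gib(params_8b, 16.0):.1f} GiB")
print(f"Q8_0  : {weight_memory_gib(params_8b, 8.5):.1f} GiB")
print(f"Q4_K_M: {weight_memory_gib(params_8b, 4.5):.1f} GiB")
```

This is why an 8B model at a ~4.5-bit quant squeezes into an 8GB card with room for context, while the same model at fp16 doesn't come close.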
NoRefunds 6 points, Jan 26, 2025 07:12:07 (+6/-0)
4thTurning 2 points, Jan 26, 2025 08:05:28 (+2/-0)
Tallest_Skil 3 points, Jan 26, 2025 08:53:34 (+3/-0)
registereduser 1 point, Jan 26, 2025 09:04:07 (+1/-0)
Empire_of_the_Mind 0 points, Jan 26, 2025 09:49:34 (+0/-0)
ModernGuilt 1 point, Jan 26, 2025 11:18:41 (+1/-0)
Crackinjokes 6 points, Jan 26, 2025 05:56:14 (+6/-0)
CoronaHoax 5 points, Jan 26, 2025 08:13:41 (+5/-0)
Tallest_Skil 3 points, Jan 26, 2025 08:53:58 (+3/-0)
mannerbund 1 point, Jan 26, 2025 11:22:38 (+1/-0)
I can only run the smaller models, with the largest one requiring 400GB of memory.
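That 400GB figure is plausible as a back-of-envelope check, assuming "the largest one" means the full DeepSeek-R1 at roughly 671B parameters and a ~4.5-bit quant (both figures are my assumptions here, not from the post):

```python
GB = 1e9  # bytes per GB (decimal)

def weights_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB; excludes KV cache and runtime overhead."""
    return n_params * bits_per_weight / 8 / GB

full_r1 = 671e9  # assumed parameter count for the full (non-distilled) model
print(f"~4.5-bit quant: {weights_gb(full_r1, 4.5):.0f} GB")  # roughly 377 GB
print(f"8-bit weights : {weights_gb(full_r1, 8.0):.0f} GB")
```

Add context cache and overhead on top of the ~377 GB of weights and "requiring 400GB" lines up.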
CoronaHoax 0 points, Jan 26, 2025 22:09:48 (+0/-0)
mannerbund 0 points, Jan 27, 2025 10:16:13 (+0/-0)
Empire_of_the_Mind 1 point, Jan 26, 2025 09:49:12 (+1/-0)
Reawakened 1 point, Jan 26, 2025 07:43:44 (+2/-1)
China claims all kinds of things, but then... no.
Her0n 0 points, Jan 26, 2025 11:10:31 (+0/-0)
WanderingToast 0 points, Jan 26, 2025 11:09:07 (+0/-0)
rectangle 3 points, Jan 26, 2025 11:51:11 (+3/-0)
WanderingToast 0 points, Jan 26, 2025 17:23:17 (+0/-0)
rectangle 0 points, Jan 27, 2025 00:54:47 (+0/-0)
TheYiddler 0 points, Jan 27, 2025 05:13:11 (+0/-0)
Wahaha 0 points, Jan 26, 2025 08:55:08 (+0/-0)
Empire_of_the_Mind 1 point, Jan 26, 2025 09:50:05 (+1/-0)