Jailbreaking ChatGPT opens it up beyond its safeguards, letting it do and say almost anything. From insults to deliberate lies, here's how to jailbreak ChatGPT.
If you want to do stuff with AI that is outside ChatGPT's terms of service, figure out how to self-host your own. It's not hard, and ChatGPT is a stupid bitch bot. Look up llama.cpp or, if you hate command lines, GPT4All. If you set up multithreading correctly and download the right K-quant model, you can get near-ChatGPT speeds even without an Nvidia GPU. My Athlon FX works really well for self-hosted AI.
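For a rough idea of what that looks like in practice, here is a minimal CPU-only sketch using llama.cpp's Python bindings (llama-cpp-python). The model filename, context size, and thread count are placeholders, not recommendations; point it at whatever quantized GGUF/GGML file you actually downloaded and match n_threads to your physical cores.

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# Everything below the import is an example configuration, not a recommendation.
from llama_cpp import Llama

llm = Llama(
    model_path="models/wizardlm-7b-uncensored.Q4_K_M.gguf",  # example quantized model file
    n_ctx=2048,    # context window size
    n_threads=8,   # CPU threads; set to your physical core count
)

out = llm("Explain in one sentence why quantized models run well on CPUs.",
          max_tokens=128)
print(out["choices"][0]["text"])
```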
You're not paying money for ChatGPT, so you're not the customer. Your "please help me pirate a movie" queries are getting sent straight to everyone who wants to know about it. Ever wondered why every AI makes you sign in first?
I considered self-hosting, but the setup seems complicated. Everywhere it says you need a good GPU. And my concern is how to get a model that even comes close to ChatGPT. I can't train on every book in existence, as they did.
Tip: try Oobabooga's Text Generation WebUI with one of the WizardLM Uncensored models from Hugging Face in GGML or GGUF format.
The GGML and GGUF formats perform very well with CPU inference when using llama.cpp as the engine. My 10-year-old 2.8 GHz CPUs generate about 2 words per second. Slightly below reading speed, but pretty solid. Just make sure to keep to the 7B models if you have 16 GiB of memory and the 13B models if you have 32 GiB.
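Those memory guidelines line up with a quick back-of-envelope estimate. The sketch below is rough, and the numbers (bits per weight, fixed overhead) are assumptions rather than measurements:

```python
# Rough RAM estimate for a quantized model: weights plus a blanket
# allowance for the KV cache, scratch buffers and the rest of the system.
def model_ram_gib(params_billion, bits_per_weight=4.5, overhead_gib=2.0):
    weights_gib = params_billion * 1e9 * bits_per_weight / 8 / 2**30
    return weights_gib + overhead_gib

for size in (7, 13, 33):
    print(f"{size}B ~ {model_ram_gib(size):.1f} GiB")
# 7B  ~  5.7 GiB -> comfortable with 16 GiB of RAM
# 13B ~  8.8 GiB -> fine with 32 GiB (tight but possible with 16 GiB)
# 33B ~ 19.3 GiB -> really wants 32 GiB or more
```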
Super useful! Thanks!
I installed the oobabooga stuff. http://localhost:7860/?__theme=dark opens fine, but then nothing works.
How do I train the model with that 8 GB .kbin file I downloaded? There are so many options, and I don't even know what I'm doing.
There's a "models" directory inside the directory where you installed the webui. This is where the model files go, along with their supporting files (.yaml or .json) that carry important metadata about the model.
The easiest way to install a model is to let the webui download it for you: in the "Model" tab there should be a download box where you paste the Hugging Face repo name and hit Download.
After it finishes downloading, just load it into memory by clicking the refresh button, selecting it, choosing the llama.cpp loader and then Load (perhaps tick the "CPU" box, but llama.cpp can do mixed CPU/GPU inference too, if I remember right).
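If you would rather fetch the file yourself and drop it into that models directory, here is a small sketch using huggingface_hub; the repository, filename and target directory are only examples, so substitute the quantized file you actually want:

```python
# Sketch: download a model file straight into the webui's models directory
# with huggingface_hub (pip install huggingface_hub). Repo id, filename and
# target directory below are examples -- adjust them to your own setup.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="TheBloke/WizardLM-7B-uncensored-GGML",        # example repository
    filename="WizardLM-7B-uncensored.ggmlv3.q4_K_M.bin",   # example quantized file
    local_dir="text-generation-webui/models",              # the webui's models folder
)
```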
My install is a few months old, I hope the UI hasn't changed too drastically in the meantime :)