VexCatalyst@lemmy.fmhy.ml to Selfhosted@lemmy.world · English · 1 year ago
Guide: Self-hosting open source GPT chat with no GPU using GPT4All (forum.tuxdigital.com)
eu8@lemmy.world · 1 year ago
Take my answer with a grain of salt, but I'm pretty sure that if you have a GPU you can run the same models, just more efficiently. The main difference is that a GPU also lets you run some of the larger models.