maria [she/her]
- 32 Posts
- 122 Comments
maria [she/her]@lemmy.blahaj.zone OP to
Free Open-Source Artificial Intelligence@lemmy.world • [Help] how does one interact with MCP servers without mcp-libraries?
1 · 1 year ago
no… turns out MCP servers are mostly downloaded and run with npx and the like. I would LOVE it if it were just a REST API. But nope… because stuff like file access is also possible! So that data obv can’t leave ur PC…
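for anyone curious, a rough sketch of what talking to one of those npx-launched servers looks like — MCP speaks JSON-RPC 2.0 over the server’s stdin/stdout, so a minimal client is basically a subprocess plus serialized messages. the command and client name below are made-up placeholders:

```python
import json
import subprocess

def jsonrpc_request(req_id, method, params=None):
    """Build one newline-delimited JSON-RPC 2.0 message, as MCP's stdio transport expects."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg) + "\n"

def talk_to_mcp_server(command):
    """Hypothetical example: launch a server process and send it an initialize request."""
    proc = subprocess.Popen(command, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)
    proc.stdin.write(jsonrpc_request(1, "initialize", {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "tiny-client", "version": "0.1"},
    }))
    proc.stdin.flush()
    # the server answers with its own capabilities on the next line of stdout
    return proc.stdout.readline()
```

so yea — not a REST endpoint, just two processes passing JSON lines back and forth, which is also why local file access works at all.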
maria [she/her]@lemmy.blahaj.zone to
Free Open-Source Artificial Intelligence@lemmy.world • where have a group for talk about ai
2 · 1 year ago
well what did it translate to?
maria [she/her]@lemmy.blahaj.zone OP to
AI@lemmy.ml • How are companies SO bad at selling AI to us? (And how to fix it)
1 · 1 year ago
yes!! u get it!! that’s exactly the way we must handle AI, jus like we do with othr tech!!!
also— i do really enjoy the idea you have there with the AI assisted new site, but if you plan on making that a product while also open-accessing the prompts, people will copy that within a day, so it’d make most sense to just open-source the entire platform.
the pen knos what im gona write!!?!?!?! :o
maria [she/her]@lemmy.blahaj.zone to
Free Open-Source Artificial Intelligence@lemmy.world • where have a group for talk about ai
21 · 1 year ago
sadly yea… i think its discord onli mostly ;(
i onli use matrix so dis is a no go for me <3
wud be totally nice n comfy if peeps were to create a matrix space or so <3 <3 <3
maria [she/her]@lemmy.blahaj.zone OP to
Selfhosted@lemmy.world • [Help] IPv4 address reaches website, but domain doesn't… (wrong community?)
2 · 1 year ago
but i wanna have a website others can access too. I tried using VPNs for cool stuff already (like controlling my lil raspberry robot from work with my phone) but I want this website to be available to all the people…
should i just bite the bullet and rent some hosting service? Or is there still hope for me putting “set up home website server” on my resume?
:o
will u become like an ancient wizard? that’d be SO cool
:o
ur here!!! <3
😳🥹😖🥰
omygosh! hiiiiiiiii!!!
how do you do? srry for my late response - i had family things to pursue…
maria [she/her]@lemmy.blahaj.zone to
Free Open-Source Artificial Intelligence@lemmy.world • How to run LLaMA (and other LLMs) on Android.
2 · 1 year ago
Yea, I did this.
Can’t WAIT for Vulkan support. Imagine the speed! It could be so much faster. Currently it just slows the model down to like 2 tokens per second.
I think she stepped on a crack.
Also HIII MAXXX!!! <3
maria [she/her]@lemmy.blahaj.zone to
Free Open-Source Artificial Intelligence@lemmy.world • 'A virtual DPU within a GPU': Could clever hardware hack be behind DeepSeek's groundbreaking AI efficiency?
5 · 1 year ago
Hmmm this sounds like a shitpost…
“a virtual DPU on a GPU” sounds like “download some RAM on this website” to me. Like - aren’t NPUs just more tensor cores? More matrix-multiplying machines? I can’t just simulate that and expect it to be faster…
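to put a number on the “matrix multiplying machines” point — the work is fixed by the matrix shapes, so there’s nothing for a “virtual” unit to skip. the 4096×4096 shape below is a made-up example:

```python
def matmul_flops(m, n, k):
    """An m×k by k×n matrix multiply costs about 2*m*n*k floating-point ops
    (one multiply plus one add per output term)."""
    return 2 * m * n * k

# hypothetical: one 4096×4096 weight matrix applied to a single token
per_token = matmul_flops(1, 4096, 4096)  # ~33.5 million ops, every token, every layer
```

tensor cores and NPUs are just hardware that executes those fused multiply-adds in bulk; emulating them on the same shader cores adds scheduling overhead instead of removing arithmetic.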
maria [she/her]@lemmy.blahaj.zone to
AI@lemmy.ml • Cohere Drops Command-R 35B 08-2024 Update, Just About a Perfect Local LLM for 24GB GPUs.
1 · 1 year ago
i totally agree… with everything. 6GB really is smol and, cuz imma crazy person, i currently try to optimize everything for the llama3.2 3B Q4 model so people with even less VRAM can use it. i really like the idea of people just having some smol LM lying around on their pc and devs being able to use it.
i really should probably opt for APIs, you’re right. the only API I ever used was Cohere, cuz yea their CR+ model is real nice. but i still wanna use smol models for a smol price if any. imma have a look at the APIs you listed. Never heard of Kobold Horde and Samba so i’ll have a look at those… or i go the lazy route and choose deepseek cuz it’s apparently unreasonably cheap for SOTA perf. so eh…
also yes! Lemmy really does seem anti AI, and i’m fine with that. i just say
“yeah companies use it in obviously dumb ways but the tech is super interesting”, which is a reasonable argument i think.
so yes, local llm go! i wanna get that new top AMD GPU once that gets announced, so i’ll be able to run those spicy 32B models. for now i’ll just stick with 8B and 3B cuz they work quick and kinda do what i want.
maria [she/her]@lemmy.blahaj.zone to
AI@lemmy.ml • Cohere Drops Command-R 35B 08-2024 Update, Just About a Perfect Local LLM for 24GB GPUs.
1 · 1 year ago
could you define “right settings”?
I assume Q4, and some context window at Q8 as well. Anything else to tweak?
I just have a smol gtx1060 with 6GB VRAM, so i probably can’t fit it on mine and imma have to use cpu partly. but maybe other readers here can! (I’m just a silly ollama user, not knowing anything more complex than the tokenizer… so yea, maybe put a lil infodump in here to make us all smarter please <3 )
EDIT: brucethemoose probably referred to this model named “Medius”. there is no 14B in the name.
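quick back-of-envelope for why a 14B at Q4 won’t squeeze into 6GB: weight footprint ≈ parameters × bits-per-weight ÷ 8. the ~4.5 bits for a Q4 quant is a rough assumption, and the KV cache plus overhead come on top of this:

```python
def weight_gb(params_billion, bits_per_weight):
    """Rough quantized-weight footprint in GB:
    parameters (in billions) × bits per weight ÷ 8 bits per byte."""
    return params_billion * bits_per_weight / 8

q4_14b = weight_gb(14, 4.5)  # ≈ 7.9 GB — already over a 6 GB card before the KV cache
```

so partial CPU offload it is for the 1060, sadly.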
maria [she/her]@lemmy.blahaj.zone to
AI@lemmy.ml • Cohere Drops Command-R 35B 08-2024 Update, Just About a Perfect Local LLM for 24GB GPUs.
1 · 1 year ago
i luv command R+ so very much and now i wanna try that smoler model but also the newly released r7b model was really not the best so i got sad…
maria [she/her]@lemmy.blahaj.zone to
AI@lemmy.ml • Man learns he’s being dumped via “dystopian” AI summary of texts
1 · 1 year ago
i like how when ai summarizes a sad dramatic thing, people go :o like it’s something special and not exactly what it was trained to do.
maria [she/her]@lemmy.blahaj.zone OP to
Free Open-Source Artificial Intelligence@lemmy.world • Before you buy any courses: Read this free prompting guide! (no login required)
2 · 1 year ago
ooh, leaked prompts? which ones are you talking about?
maria [she/her]@lemmy.blahaj.zone OP to
Free Open-Source Artificial Intelligence@lemmy.world • Before you buy any courses: Read this free prompting guide! (no login required)
2 · 1 year ago
You are completely right, and it is mostly about trial and error. I’d assume these courses mainly teach things you can do with the big bois, those being by the obvious big evil AI companies. It’s very much an overblown topic, and companies pretend it’s actually a fancy thing to learn and be good at.
The linked guide just explains the basic concepts of few-shot prompting, CoT, RAG and such. Even these terms, I feel, make the topic more complicated than it is. It could literally be summarized to
- Use examples of what you want
- Use near-zero temperature for almost everything
- For complex tasks, tell it to provide its internal thought process before providing the answer (or just use the QwQ model)
- maybe SCREAM AT THE LLM IN ALLCAPS if something is really important
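the first bullet in code form — a tiny few-shot prompt builder (pure string glue, no particular API assumed; the example pairs are made up). pair it with temperature ≈ 0 when you actually call the model:

```python
def few_shot_prompt(examples, query):
    """Turn (input, output) example pairs into a few-shot prompt that ends
    right where the model should continue."""
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

prompt = few_shot_prompt(
    [("the food was great", "positive"), ("never again", "negative")],
    "pretty decent service",
)
```

that’s honestly most of what the fancy courses charge for.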
maria [she/her]@lemmy.blahaj.zone OP to
Asklemmy@lemmy.ml • Is Lemmy your "main social media app"? If not, which one is it?
0 · 1 year ago
That’s the best type of social (media)
maria [she/her]@lemmy.blahaj.zone OP to
Asklemmy@lemmy.ml • Is Lemmy your "main social media app"? If not, which one is it?
0 · 1 year ago
Ok that is fair. I commented on someone else’s comment here that they should try out PeerTube, as that is a decentralized option with all the benefits of the fediverse.




what volume of markdown files are we talking?
also, just so i understand this right:
you are looking for a markdown editor which has a chat window on the side that can look at other files to assist in writing.
is that correct?
which editor do you use right now for editing the files? does it need to support vim-movements? (if u dont know what that is, it doesnt matter)
what exactly would the LM be assisting in? should it just read files and respond, or edit them itself as well, or suggest edits?
suggestion for under 200 files
depending on the number of files, a simple index plus read-tool functionality might be enough. Here is how you would create such an index:
These three steps can be done using any coding agent you got lying around, using this prompt
This index can then be fed into any agent to let it find files quicker, without having to hope for good chunking settings in a RAG pipeline.
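a minimal sketch of what building such an index could look like without an agent — walk the folder and map each file to its first heading (the flat file layout and the first-heading-as-title heuristic are assumptions):

```python
import os

def build_index(root):
    """Walk a folder of markdown files and index each one by its first heading."""
    index = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith(".md"):
                continue
            path = os.path.join(dirpath, name)
            title = ""
            with open(path, encoding="utf-8") as f:
                for line in f:
                    if line.startswith("#"):
                        # strip the leading hashes and whitespace to get the heading text
                        title = line.lstrip("#").strip()
                        break
            index[os.path.relpath(path, root)] = title
    return index
```

dump the resulting dict as json or a markdown table into one file, and the agent only ever has to read that plus the files it actually needs.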
all this was written by a human, even if it might not seem like it.