LLaMA can’t. Chameleon and similar ones can.
I expected that recording would be the hard part.
I think some of the open-source ones should work if your phone is rooted?
I’ve heard that Google’s phone app can record calls (though it announces aloud when the recording starts). Of course, it won’t work if Google thinks it shouldn’t in your region.
By the way, Bluetooth headphones can have both speakers and a microphone. And Android can’t tell a peripheral device what it should or shouldn’t do with audio streams. Sounds like a fun DIY project if you’re into it, or maybe somebody sells these already.
Haven’t heard of all-in-one solutions, but once you have a recording, whisper.cpp can do the transcription:
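A minimal sketch, assuming the recording is call.mp3 (whisper.cpp expects 16 kHz mono WAV; in newer builds the binary is called whisper-cli rather than main):

    # convert the recording to the 16 kHz mono WAV whisper.cpp expects
    ffmpeg -i call.mp3 -ar 16000 -ac 1 -c:a pcm_s16le call.wav
    # transcribe; -otxt writes the transcript to call.wav.txt
    ./main -m models/ggml-base.bin -f call.wav -otxt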
The underlying Whisper models are MIT.
Then you can use any LLM inference engine, e.g. llama.cpp, and ask the model of your choice to summarise the transcript:
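Something like this - the model is just an example, use whatever fits in your memory (in newer llama.cpp builds the binary is llama-cli):

    ./llama-cli -m mistral-7b-instruct-v0.2.Q5_K_M.gguf -n 512 \
        -p "Summarise the following call transcript: $(cat call.wav.txt)"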
You can also write a small bash/python script to make the process a bit more automatic.
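A rough sketch of such a script (untested; paths and model names are placeholders):

    #!/usr/bin/env bash
    # usage: ./summarise-call.sh recording.mp3
    set -eu
    ffmpeg -i "$1" -ar 16000 -ac 1 -c:a pcm_s16le /tmp/call.wav
    # -of sets the output base name, so the transcript lands in /tmp/call.txt
    ./main -m models/ggml-base.bin -f /tmp/call.wav -otxt -of /tmp/call
    ./llama-cli -m model.gguf -n 512 \
        -p "Summarise the following call transcript: $(cat /tmp/call.txt)"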
If the config prompt is the system prompt, hijacking it works more often than not. The creators of a prompt injection game (https://tensortrust.ai/) have discovered that system/user roles don’t matter too much in determining the final behaviour: see appendix H in https://arxiv.org/abs/2311.01011.
CVEs are constantly found in complex software, that’s why security updates are important. If not these, it’d have been other ones a couple of weeks or months later. And government users can’t exactly opt out of security updates, even if they come with feature regressions.
You also shouldn’t keep using software with known vulnerabilities. You can find a maintained fork of Chromium with continued Manifest V2 support or choose another browser like Firefox.
If your CPU isn’t ancient, it’s mostly about memory speed. VRAM is very fast, DDR5 RAM is reasonably fast, swap is slow even on a modern SSD.
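Back-of-envelope, assuming generation is bandwidth-bound and every token has to read all the weights once: tokens/s ≈ memory bandwidth / model size. For a ~4 GB Q4 7B model that’s roughly 20 tokens/s on dual-channel DDR5 at ~80 GB/s, over 100 on a GPU with ~500 GB/s VRAM, and barely 1 when swapping from an SSD at ~5 GB/s.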
8x7B is Mixtral, yeah.
Mostly via terminal, yeah. It’s convenient when you’re used to it - I am.
Let’s see, my inference speed now is:
As for quality, I try to avoid quantisation below Q5, or at least Q4. I also don’t see any point in using Q8/f16/f32 - the difference from Q6 is minimal. Other than that, it really depends on the model - for instance, llama-3 8B is smarter than many older 30B+ models.
Have been using llama.cpp, whisper.cpp and Stable Diffusion for a long while (most often the first one). My “hub” is a collection of bash scripts and a running SSH server.
I typically use LLMs for translation, interactive technical troubleshooting, advice on obscure topics, sometimes coding, sometimes mathematics (though local models are mostly terrible for this), sometimes just talking. Also music generation with ChatMusician.
I use the hardware I already have - a 16GB AMD card (using ROCm) and some DDR5 RAM. ROCm might be tricky to set up for various libraries and inference engines, but then it just works. I don’t rent hardware - don’t want any data to leave my machine.
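For example, building llama.cpp against ROCm used to be a one-liner, plus an environment override if the card’s architecture isn’t officially supported (the build flag has been renamed across versions, so treat this as a sketch):

    # older Makefile-based llama.cpp builds
    make LLAMA_HIPBLAS=1
    # RDNA2 example: report the GPU as gfx1030 so the ROCm kernels load
    HSA_OVERRIDE_GFX_VERSION=10.3.0 ./main -m model.gguf -p "..."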
My use isn’t intensive enough to warrant measuring energy costs.
The article isn’t about automatic proofs, but it’d be interesting to see an LLM that can write formal proofs in Coq/Lean/whatever and call external computer algebra systems like SageMath or Mathematica.
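For reference, the kind of output I mean - a machine-checkable Lean 4 proof (a trivial hand-written example, not model output):

    -- commutativity of addition on naturals, verified by the Lean kernel
    theorem add_comm' (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b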
I see, thanks. Will check. I just thought perhaps you figured out something other than those from your experience.
Any guidance on choosing appropriately conservative settings for an i7-13700K? I may get hit by the same thing in the future (sometimes I have to run heavy multithreaded combinatorial computations for several days at 100°C, using all cores). The motherboard has options for customising pretty much everything there is, but I haven’t touched anything overclocking-related, so I’m on Asus defaults.
“it cuts out the middle man of having to find facts on your own”
Nope.
Even without corporate tuning or filtering.
A language model is useful when you know what to expect from it, but it’s just another kind of secondary information source, not an oracle. In some sense it draws random narratives from the noosphere.
And if you feed it search results as part of the input in the hope of increasing its reliability, how will you know they haven’t been manipulated by SEO? Search engines are slowly failing these days. A language model won’t recognise new kinds of bullshit as readily as you will.
Education is still important.
Disabling root login and password auth, using a non-standard port and updating regularly works for me for this exact use case.
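The relevant sshd_config lines (the port number is just an example):

    # /etc/ssh/sshd_config
    PermitRootLogin no
    PasswordAuthentication no
    Port 50022

Then restart sshd, and keep an existing session open while testing so a typo doesn’t lock you out.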
You could also help prepare XFCE for eventual Wayland compatibility: https://wiki.xfce.org/releng/wayland_roadmap .
Okular (a rather good PDF viewer) can’t save sessions (open files & positions) in non-KDE environments.
The issue has been open since 2018: https://bugs.kde.org/show_bug.cgi?id=397463 .
There’s some partial merge request but it’s been dead for a year.
Alas, their IP requirements are too much for me.
Anyway, is there much xenharmonic sheet music there? Take, for example, Easley Blackwood (but not his books).
Downloading from there is straightforward: look at the network requests, re-download the SVGs of the individual pages with wget, and reassemble those into a PDF. I did that today and the resulting quality wasn’t exactly low - though I didn’t examine it too closely. Readability was perfect.
Probably could be automated, but I’m not bothered enough to do so yet.
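If anyone wants to script it, a rough sketch (the URL pattern and page count are hypothetical - read them off the network tab; rsvg-convert can merge several SVGs into one PDF):

    # fetch each page; zero-padded numbers keep the glob below in order
    for i in $(seq -w 1 24); do
        wget -O "page_$i.svg" "https://example.org/score/page_$i.svg"
    done
    # assemble all pages into a single PDF
    rsvg-convert -f pdf -o score.pdf page_*.svg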
Alternatively:

    ffmpeg -protocol_whitelist file,crypto,data,https,tls,tcp -stats -i <URL.m3u8> -codec copy <FILE.mp4>
Also, some m3u8s are just master playlists: files containing pointers to other m3u8s in various resolutions. You might want to extract the one you need and download that.
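A master playlist looks roughly like this (made-up paths and bitrates):

    #EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=842x480
    480p/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
    1080p/index.m3u8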
My intuition:
So I don’t think this approach will help you much even for finding words and phrases. And everything I’ve said extends to semantic noise too, so your extended question also seems like a hopeless endeavour when approached specifically with LLMs or big-data analysis of text.