Yeah, it seems the sensor costs as much as a decent used camera.
I was wondering if your tool was displaying cache as usage, but I guess not. Not sure what you have running that’s consuming that much.
I mentioned this in another comment, but I’m currently running a simulation of a whole Proxmox cluster with nodes, storage servers, switches, and even an active Windows client machine. I’m running all of that on GNOME with Firefox and Discord open, and this is my usage:
$ free -h
               total        used        free      shared  buff/cache   available
Mem:            46Gi        16Gi       9.1Gi       168Mi        22Gi        30Gi
Swap:          3.8Gi          0B       3.8Gi
Of course Discord is running inside Firefox, so that helps, but still…
What does free -h say?
About 6 months ago I upgraded my desktop from 16 to 48 gigs because there were a few times I felt like I needed a bigger tmpfs.
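For context, a tmpfs can be given an explicit size at mount time; here’s a minimal sketch (the mount point and size are placeholders, not from the original comment):

$ sudo mount -t tmpfs -o size=24G tmpfs /mnt/scratch

An existing tmpfs like /tmp can also be grown in place with sudo mount -o remount,size=24G /tmp.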
Anyway, the other day I set up a simulation of this cluster I’m configuring and just kept piling up virtual machines without looking, because I knew I had all the RAM I could need for them. Eventually I got curious and checked my usage: I’d only just reached 16 gigs.
I think basically the only time I use more than the 16 gigs I had is when I fire up my GPU-passthrough Windows VM that I use for games, which isn’t your typical usage.
So much more. It’s not even in the same ballpark.
I remember people being upset by the ribbon back when Office 2007 was released. Their complaints made sense until I sat down and used it; I found it to be a great improvement. I switched my LibreOffice to the ribbon layout as soon as they added it. Because I don’t use it often, it’s great for finding stuff compared to digging through the menus.
The nice thing about the LO implementation is that they also added a couple of varieties of the design, like the compact one, which pushes things closer together so it’s not distracting.
Yeah, it’s the equivalent of finding two dollars on the ground and getting excited because at this rate you’ll be a billionaire soon enough. There’s less than 2 g of plastic in an SD card; the buttons on your shirt probably weigh more.
Games are already horrifically inefficient
That’s so far from the truth, it hurts me to read it. Games are among the most optimised programs you can run on your computer. Just think about it: it’s an application rendering an entire imaginary world every dozen milliseconds. Compare that to anything else you run, like Slack or Teams, which make your CPU sweat just to notify you about a new message.
With 30% ownership it could have been at the forefront of generative AI, which OpenAI released to the world in 2022.
Do they think OpenAI invented the concept of generative AI? Because that’s what their statement implies.
It’s not that uncharacteristic. Mono is a fully open-source project they didn’t create, didn’t really work on, and can’t extract any value from. So this is basically a gesture that doesn’t cost them anything, but at the same time it doesn’t do much except generate a headline.
KHTML was licensed under the LGPL.
Some editors can embed Neovim, for example vscode-neovim. Not sure how well that works, though, as I’ve never tried it.
Well, personally, if a package is not in the AUR, I first check whether there’s an AppImage available, or a Flatpak. If neither exists, I generally make a package for myself.
It sounds intimidating, but for most software the package description is just going to be a single file of maybe 10-15 lines (see the sketch below). It’s a useful skill to learn, and there are lots of tutorials explaining how to get into it, as well as the Arch wiki serving as documentation. Not to mention, every AUR or Arch package can be used as an example: just click the “View PKGBUILD” link on the side of the package view. You can even simply download an existing package with git clone and change a few bits.
Alternatively, you can just build it locally and use it like that, i.e. run make without make install.
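To illustrate the single-file package description mentioned above, here’s a minimal PKGBUILD sketch; the package name, URL, and dependency are hypothetical placeholders, so treat it as a template rather than a working recipe:

pkgname=exampletool          # hypothetical package name
pkgver=1.0.0
pkgrel=1
pkgdesc="Placeholder description"
arch=('x86_64')
url="https://example.com/exampletool"
license=('MIT')
depends=('glibc')
source=("$url/releases/$pkgname-$pkgver.tar.gz")
sha256sums=('SKIP')          # fill in a real checksum for an actual package

build() {
  cd "$pkgname-$pkgver"
  make
}

package() {
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}

Drop that into an empty directory and makepkg -si builds it and installs it through pacman, so the files stay tracked by the package manager.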
The AUR and pacman are 90% of why I use Arch.
Also, FYI to OP: never install software system-wide without your package manager. No sudo make install, no curl .. | sudo bash, or whatever the readme calls for. Not because it’s unsafe, but because eventually you’re likely to end up with a broken system, and then you’ll blame your distro for it, or just Linux in general.
My desktop install is about a decade old now, and never broke because I only ever use the package manager.
Of course in your home folder anything goes.
I think they meant you don’t know what the binary is called because it doesn’t match the package name. I usually list the package files to see what it put in /usr/bin in such cases.
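For reference, this is the sort of query I mean; the package name is a placeholder:

$ pacman -Ql somepackage | grep /usr/bin/

pacman -Ql lists every file a package installed, and the grep narrows it down to the binaries.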
What’s up with the abuse of the word “open” lately? I had a look at that project to see how they were doing the conversion, but I couldn’t find it. But I found this:
Short answer, yes! OpenScanCloud (OSC) is and will stay closed source…
Your data will be transferred through Dropbox and stored/processed on my local servers. I will use those image sets and resulting 3d models for further research, but none of your data will be published without your explicit consent!
I feel like I’d rather use Autodesk at that point. At least I know what I’m dealing with right out of the gate.
But check that it has all the features you need, because it lags behind Gitea in some aspects (like CI).
At least it’s symmetrical so it won’t rock, unlike every other phone out there now, including the one I’m typing on.
You say that as if solving grid storage weren’t one of the most important problems humanity faces right now.
Podman, not because of security, but because of quadlets (systemd integration). They make setting up and managing container services a breeze.
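To show what that integration looks like, here’s a minimal quadlet sketch; the unit name, image, and port are hypothetical placeholders:

# ~/.config/containers/systemd/web.container (rootless; use /etc/containers/systemd/ for root)
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target

After a systemctl --user daemon-reload, quadlet generates a web.service unit you can start like any other systemd service, and the [Install] section takes care of starting it at login.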