



No problem. It was an interesting question that made me curious too.


https://stackoverflow.com/questions/30869297/difference-between-memfree-and-memavailable
Rik van Riel’s comments when adding MemAvailable to /proc/meminfo:
/proc/meminfo: MemAvailable: provide estimated available memory
Many load balancing and workload placing programs check /proc/meminfo to estimate how much free memory is available. They generally do this by adding up “free” and “cached”, which was fine ten years ago, but is pretty much guaranteed to be wrong today.
It is wrong because Cached includes memory that is not freeable as page cache, for example shared memory segments, tmpfs, and ramfs, and it does not include reclaimable slab memory, which can take up a large fraction of system memory on mostly idle systems with lots of files.
Currently, the amount of memory that is available for a new workload, without pushing the system into swap, can be estimated from MemFree, Active(file), Inactive(file), and SReclaimable, as well as the “low” watermarks from /proc/zoneinfo.
However, this may change in the future, and user space really should not be expected to know kernel internals to come up with an estimate for the amount of free memory.
It is more convenient to provide such an estimate in /proc/meminfo. If things change in the future, we only have to change it in one place.
Looking at the htop source:
https://github.com/htop-dev/htop/blob/main/MemoryMeter.c
/* we actually want to show "used + shared + compressed" */
double used = this->values[MEMORY_METER_USED];
if (isPositive(this->values[MEMORY_METER_SHARED]))
used += this->values[MEMORY_METER_SHARED];
if (isPositive(this->values[MEMORY_METER_COMPRESSED]))
used += this->values[MEMORY_METER_COMPRESSED];
written = Meter_humanUnit(buffer, used, size);
It’s adding used, shared, and compressed memory to get the amount actually tied up, but it disregards cached memory entirely, which, per the commit message above, is problematic: some of that cache may not actually be freeable.
top, on the other hand, uses the kernel’s MemAvailable directly, as does free:
https://gitlab.com/procps-ng/procps/-/blob/master/src/free.c
printf(" %11s", scale_size(MEMINFO_GET(mem_info, MEMINFO_MEM_AVAILABLE, ul_int), args.exponent, flags & FREE_SI, flags & FREE_HUMANREADABLE));
In short: You probably want to trust /proc/meminfo’s MemAvailable (which is what top will show), and htop is probably giving a misleadingly-low number.
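For completeness, here’s a minimal sketch (my own code, not htop’s or procps’s) of doing what free/top do: read MemAvailable out of /proc/meminfo, with a crude fallback for pre-3.14 kernels that lack the field:

```python
# Sketch only: parse /proc/meminfo-style text and pull out the kernel's
# availability estimate, falling back to the old userspace-style sum.

def parse_meminfo(text):
    """Map field names to kB values from /proc/meminfo-style text."""
    fields = {}
    for line in text.splitlines():
        name, _, rest = line.partition(":")
        parts = rest.split()
        if name and parts:
            fields[name.strip()] = int(parts[0])  # values are in kB
    return fields

def available_kb(fields):
    if "MemAvailable" in fields:  # kernel >= 3.14: trust the kernel's estimate
        return fields["MemAvailable"]
    # Rough fallback along the lines described in the commit message above;
    # deliberately ignores the low-watermark correction from /proc/zoneinfo.
    return (fields.get("MemFree", 0)
            + fields.get("Active(file)", 0)
            + fields.get("Inactive(file)", 0)
            + fields.get("SReclaimable", 0))

# On a live Linux system:
#   available_kb(parse_meminfo(open("/proc/meminfo").read()))
```

On a live system you’d feed it the contents of /proc/meminfo; the fallback is only the crude estimate the commit message warns userspace shouldn’t have to maintain, which is rather the point.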


If the OOM killer starts killing processes, then you’re running out of memory.
Well, you could want to not dig into swap.


There might be some way to make use of it.
Linux apparently can use VRAM as a swap target:
https://wiki.archlinux.org/title/Swap_on_video_RAM
So you could probably take an Nvidia H200 (141 GB memory) and set it as a high-priority swap partition, say.
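Roughly, the vramfs route from that wiki page looks like this (paths, sizes, and the loop device number are illustrative; vramfs has to be built separately, and all of this needs root):

```shell
# Expose VRAM as a FUSE filesystem, then put a swap file on it.
mkdir -p /mnt/vram
vramfs /mnt/vram 100G &                  # mount 100 GiB of VRAM

dd if=/dev/zero of=/mnt/vram/swapfile bs=1M count=102400
losetup /dev/loop9 /mnt/vram/swapfile    # loop device sidesteps swap-on-FUSE limits
mkswap /dev/loop9
swapon -p 100 /dev/loop9                 # high priority: preferred over disk swap
```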
A typical desktop is liable to have problems powering an H200 (600W max TDP), but that’s with all the parallel compute hardware active; I assume that if all you’re doing is moving stuff in and out of memory, it won’t use much power, same as with a typical gaming-oriented GPU.
That being said, it sounds like the route on the Arch Wiki above uses vramfs, a FUSE filesystem. FUSE runs in userspace rather than kernelspace, so it probably carries more overhead than is really necessary.
EDIT: I think that a lot will come down to where research goes. If it turns out that someone figures out that changing the hardware (having a lot more memory, adding new operations, whatever) dramatically improves performance for AI stuff, I suspect that current hardware might get dumped sooner rather than later as datacenters shift to new hardware. Lot of unknowns there that nobody will really have the answers to yet.
EDIT2: Apparently someone made a kernel-based implementation for Nvidia cards to use the stuff directly as CPU-addressable memory, not swap.
https://github.com/magneato/pseudoscopic
In holography, a pseudoscopic image reverses depth—what was near becomes far, what was far becomes near. This driver performs the same reversal in compute architecture: GPU memory, designed to serve massively parallel workloads, now serves the CPU as directly-addressable system RAM.
Why? Because sometimes you have 16GB of HBM2 sitting idle while your neural network inference is memory-bound on the CPU side. Because sometimes constraints breed elegance. Because we can.
Pseudoscopic exposes NVIDIA Tesla/Datacenter GPU VRAM as CPU-addressable memory through Linux’s Heterogeneous Memory Management (HMM) subsystem. Not swap. Not a block device. Actual memory with struct page backing, transparent page migration, and full kernel integration.
I’d guess that that’ll probably perform substantially better.
It looks like they presently only target older cards, though.


This world is getting dumber and dumber.
Ehhh…I dunno.
Go back 20 years and we had similar articles, just about the Web, because it was new to a lot of people then.
searches
https://www.belfasttelegraph.co.uk/news/internet-killed-my-daughter/28397087.html
Internet killed my daughter
Were Simon and Natasha victims of the web?
Predators tell children how to kill themselves
And before that, I remember video games.
It happens periodically — something new shows up, and then you’ll have people concerned about any potential harm associated with it.
https://en.wikipedia.org/wiki/Moral_panic
A moral panic, also called a social panic, is a widespread feeling of fear that some evil person or thing threatens the values, interests, or well-being of a community or society.[1][2][3] It is “the process of arousing social concern over an issue”,[4] usually elicited by moral entrepreneurs and sensational mass media coverage, and exacerbated by politicians and lawmakers.[1][4] Moral panic can give rise to new laws aimed at controlling the community.[5]
Stanley Cohen, who developed the term, states that moral panic happens when “a condition, episode, person or group of persons emerges to become defined as a threat to societal values and interests”.[6] While the issues identified may be real, the claims “exaggerate the seriousness, extent, typicality and/or inevitability of harm”.[7] Moral panics are now studied in sociology and criminology, media studies, and cultural studies.[2][8] It is often academically considered irrational (see Cohen’s model of moral panic, below).
Examples of moral panic include the belief in widespread abduction of children by predatory pedophiles[9][10][11] and belief in ritual abuse of women and children by Satanic cults.[12] Some moral panics can become embedded in standard political discourse,[2] which include concepts such as the Red Scare[13] and terrorism.[14]
Media technologies
Main article: Media panic
The advent of any new medium of communication produces anxieties among those who deem themselves as protectors of childhood and culture. Their fears are often based on a lack of knowledge as to the actual capacities or usage of the medium. Moralizing organizations, such as those motivated by religion, commonly advocate censorship, while parents remain concerned.[8][40][41]
According to media studies professor Kirsten Drotner:[42]
[E]very time a new mass medium has entered the social scene, it has spurred public debates on social and cultural norms, debates that serve to reflect, negotiate and possibly revise these very norms.… In some cases, debate of a new medium brings about – indeed changes into – heated, emotional reactions … what may be defined as a media panic.
Recent manifestations of this kind of development include cyberbullying and sexting.[8]
I’m not sure that we’re doing better than people in the past did on this sort of thing, but I’m not sure that we’re doing worse, either.


Might be interesting to infer optimal dimensions. Glancing at the source, looks like that doesn’t presently happen.


!patientgamers@sh.itjust.works looked smug as hell. They’d been telling everyone for years.


Summary created by Smart Answers AI
chuckles


If databases are involved they usually offer some method of dumping all data to some kind of text file. Usually relying on their binary data is not recommended.
It’s not so much text or binary. It’s because a normal backup program that just treats a live database file as a file to back up is liable to have the DBMS software write to the database while it’s being backed up, resulting in a backed-up file that’s a mix of old and new versions, and may be corrupt.
Either:
1. Have the DBMS itself dump a consistent copy of the database to a separate file (e.g. pg_dump or mysqldump), and back up that dump rather than the live database files,
or:
2. Take an atomic, filesystem-level snapshot (e.g. with btrfs or LVM), and run the backup against the snapshot.
In general, if this is a concern, I’d tend to favor #2 as an option, because it’s an all-in-one solution that deals with all of the problems of files changing while being backed up: DBMSes are just a particularly thorny example of that.
Full disclosure: I mostly use ext4 myself, rather than btrfs. But I also don’t run live DBMSes.
EDIT: Plus, #2 also provides consistency across different files on the filesystem, though that’s usually less-critical. Like, you won’t run into a situation where software on your computer updates File A, then does a sync(), then updates File B, but your backup program grabs the new version of File B and the old version of File A. Absent help from the filesystem, your backup program won’t know where write barriers spanning different files are happening.
In practice, that’s not usually a huge issue, since fewer software packages are gonna be impacted by this than write ordering internal to a single file, but it is permissible for a program, under Unix filesystem semantics, to expect that the write order persists there and kerplode if it doesn’t…and a traditional backup won’t preserve it the way that a backup with help from the filesystem can.
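As a sketch of the snapshot approach, assuming btrfs and illustrative paths (the same idea works with LVM or ZFS snapshots):

```shell
# Snapshot atomically, back up from the snapshot, then discard it.
btrfs subvolume snapshot -r /srv /srv/.backup-snap   # atomic, read-only
rsync -a /srv/.backup-snap/ /mnt/backup/srv/         # one consistent point in time
btrfs subvolume delete /srv/.backup-snap
```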


I think that the problem will be if software comes out that doesn’t target home PCs. That’s not impossible. I mean, that happens today with Web services. Closed-weight AI models aren’t going to be released to run on your home computer. I don’t use Office 365, but I understand that at least some of that is a cloud service.
Like, say the developer of Video Game X says “I don’t want to target a ton of different pieces of hardware. I want to tune for a single one. I don’t want to target multiple OSes. I’m tired of people pirating my software. I can reduce cheating. I’m just going to release for a single cloud platform.”
Nobody is going to take your hardware away. And you can probably keep running Linux or whatever. But…not all the new software you want to use may be something that you can run locally, if it isn’t released for your platform. Maybe you’ll use some kind of thin-client software — think telnet, ssh, RDP, VNC, etc for past iterations of this — to use that software remotely on your Thinkpad. But…can’t run it yourself.
If it happens, I think that that’s what you’d see. More and more software would just be available only to run remotely. Phones and PCs would still exist, but they’d increasingly run a thin client, not run software locally. Same way a lot of software migrated to web services that we use with a Web browser, but with a protocol and software more aimed at low-latency, high-bandwidth use. Nobody would ban existing local software, but a lot of it would stagnate. A lot of new and exciting stuff would only be available as an online service. More and more people would buy computers that are only really suitable for use as a thin client — fewer resources, closer to a smartphone than what we conventionally think of as a computer.
EDIT: I’d add that this is basically the scenario that the AGPL is aimed at dealing with. The concern was that people would just run open-source software as a service. They could build on that base, make their own improvements. They’d never release binaries to end users, so they wouldn’t hit the traditional GPL’s obligation to release source to anyone who gets the binary. The AGPL requires source distribution to people who even just use the software.


I will say that, realistically, in terms purely of physical distance, a lot of the world’s population is in a city and probably isn’t too far from a datacenter.
https://calculatorshub.net/computing/fiber-latency-calculator/
It’s about five microseconds of latency per kilometer down fiber optics. Ten microseconds for a round-trip.
I think a larger issue might be bandwidth for some applications. Like, if you want to unicast uncompressed video to every computer user, say, you’re going to need an ungodly amount of bandwidth.
DisplayPort looks like it’s currently up to 80Gb/sec. Okay, not everyone is currently saturating that, but if you want comparable capability, that’s what you’re going to have to be moving from a datacenter to every user. For video alone. And that’s assuming that they don’t have multiple monitors or something.
I can believe that it is cheaper to have many computers in a datacenter. I am not sold that any gains will more than offset the cost of the staggering fiber rollout that this would require.
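Back-of-the-envelope, using the figures above (my arithmetic, not the linked calculator’s, and the subscriber count is made up):

```python
# Propagation delay down fiber vs. aggregate bandwidth for thin clients.
SPEED_IN_FIBER_KM_S = 200_000  # roughly c divided by fiber's refractive index

def round_trip_us(km):
    """Propagation-only round-trip time over fiber, in microseconds."""
    return 2 * km / SPEED_IN_FIBER_KM_S * 1e6

# 20 km to a metro datacenter: ~200 us of pure propagation delay. Tiny.
# Bandwidth is the killer:
displayport_gbps = 80    # DP 2.x max, per the comment above
users = 10_000           # hypothetical subscriber count
aggregate_tbps = users * displayport_gbps / 1000
print(round(round_trip_us(20)), "us round trip;", aggregate_tbps, "Tb/s aggregate")
```

So latency to a nearby datacenter really is negligible; it’s pushing DisplayPort-class bitrates to every user that gets absurd fast.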
EDIT: There are situations where it is completely reasonable to use (relatively) thin clients. That’s, well, what a lot of the Web is — browser thin clients accessing software running on remote computers. I’m typing this comment into Eternity before it gets sent to a Lemmy instance on a server in Oregon, much further away than the closest datacenter to me. That works fine.
But “do a lot of stuff in a browser” isn’t the same thing as “eliminate the PC entirely”.


You could also just only use Macs.
I actually don’t know what the current requirement is. Back in the day, Apple used to build some of the OS — like QuickDraw — into the ROMs, so unless you had a physical Mac, not just a purchased copy of MacOS, you couldn’t legally run MacOS, since the ROM contents were copyrighted, and doing so would require infringing on the ROM copyright. Apple obviously doesn’t care about this most of the time, but I imagine that if it becomes institutionalized at places that make real money, they might.
But I don’t know if that’s still the case today. I’m vaguely recalling that there was some period where part of Apple’s EULA for MacOS prohibited running MacOS on non-Apple hardware, which would have been a different method of trying to tie it to the hardware.
searches
This is from 2019, and it sounds like at that point, Apple was leveraging the EULAs.
https://discussions.apple.com/thread/250646417?sortBy=rank
Posted on Sep 20, 2019 5:05 AM
The widely held consensus is that it is only legal to run virtual copies of macOS on a genuine Apple made Apple Mac computer.
There are numerous packages to do this but as above they all have to be done on a genuine Apple Mac.
- VMware Fusion - this allows creating VMs that run as windows within a normal Mac environment. You can therefore have a virtual Mac running inside a Mac. This is useful to either run simultaneously different versions of macOS or to run a test environment inside your production environment. A lot of people are going to use this approach to run an older version of macOS which supports 32bit apps as macOS Catalina will not support old 32bit apps.
- VMware ESXi aka vSphere - this is a different approach known as a ‘bare metal’ approach. With this you use a special VMware environment and then inside that create and run virtual machines. So on a Mac you could create one or more virtual Mac but these would run inside ESXi and not inside a Mac environment. It is more commonly used in enterprise situations and hence less applicable to Mac users.
- Parallels Desktop - this works in the same way as VMware Fusion but is written by Parallels instead.
- VirtualBox - this works in the same way as VMware Fusion and Parallels Desktop. Unlike those it is free of charge. Ostensible it is ‘owned’ by Oracle. It works but at least with regards to running virtual copies of macOS is still vastly inferior to VMware Fusion and Parallels Desktop. (You get what you pay for.)
Last time I checked Apple’s terms you could do the following.
- Run a virtualised copy of macOS on a genuine Apple made Mac for the purposes of doing software development
- Run a virtualised copy of macOS on a genuine Apple made Mac for the purposes of testing
- Run a virtualised copy of macOS on a genuine Apple made Mac for the purposes of being a server
- Run a virtualised copy of macOS on a genuine Apple made Mac for personal non-commercial use
No. Apple spells this out very clearly in the License Agreement for macOS. Must be installed on Apple branded hardware.
They switched to ARM in 2020, so unless their legal position changed around ARM, I’d guess that they’re probably still relying on the EULA restrictions. That being said, EULAs have also been thrown out for various reasons, so…shrugs
goes looking for the actual license text.
Yeah, this is Tahoe’s EULA, the most-recent release:
https://www.apple.com/legal/sla/docs/macOSTahoe.pdf
Page 2 (of 895 pages):
They allow only on Apple-branded hardware for individual purchases unless you buy from the Mac Store. For Mac Store purchases, they allow up to two virtual instances of MacOS to be executed on Apple-branded hardware that is also running the OS, and only under certain conditions (like for software development). And for volume purchase contracts, they say that the terms are whatever the purchaser negotiated. I’m assuming that there’s no chance that Apple is going to grant some “go use it as much as you want whenever you want to do CI tests or builds for open-source projects targeting MacOS” license.
So for the general case, the EULA prohibits you from running MacOS on non-Apple hardware, period.


Now my question is, who’s making that one query that leaks my domain name? Is it Apache on startup
If you’re wanting a list of DNS queries from your system, assuming that it’s DNS and not DoH, maybe:
# tcpdump port domain
Then go start Apache or whatever.


Okay, this is unfortunately DIY, but if you’re willing to spend time:
Get a plywood box and put it in there.
If you hear vibrations, put sorbothane between the NAS and the box.
If you need more sound absorption, put acoustic foam on the inside.
If you need more cooling, drill two holes, mount a case fan, and run the air through some kind of baffle. Like, maybe attach insulated flex-duct, like this:
https://www.amazon.com/Hon-Guan-Silencer-Reducer-Ventilation/dp/B07HC8CXQG


Yes. For a single change. Like having an editor with 2 minute save lag, pushing commit using program running on cassette tapes or playing chess over snail-mail. It’s 2026 for Pete’s sake, and we won’t tolerate this behavior!
Now of course, in some Perfect World, GitHub could have a local runner with all the bells and whistles. Or maybe something that would allow me to quickly check for progress upon the push or even something like a “scratch commit”, i.e. a way that I could testbed different runs without polluting history of both Git and Action runs.
For the love of all that is holy, don’t let GitHub Actions manage your logic. Keep your scripts under your own damn control and just make the Actions call them!
I don’t use GitHub Actions and am not familiar with it, but if you’re using it for continuous integration or build stuff, I’d think that it’s probably a good idea to have that decoupled from GitHub anyway, unless you want to be unable to do development without an Internet connection and access to GitHub.
I mean, I’d wager that someone out there has already built some kind of system to do this for git projects. If you need some kind of isolated, reproducible environment, maybe Podman or similar, and just have some framework to run it?
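A minimal version of that idea, with placeholder image and commands rather than any particular project’s setup:

```shell
# Run checks in a throwaway container so the environment is reproducible
# and fully local; swap the image/commands for your toolchain.
podman run --rm -v "$PWD":/src:Z -w /src docker.io/library/rust:latest \
    sh -c 'cargo fmt --check && cargo test'
```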
like macOS builds that would be quite hard to get otherwise
Does Rust not do cross-compilation?
searches
It looks like it can.
https://rust-lang.github.io/rustup/cross-compilation.html
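The basic rustup flow is short (the target triple is real, but note that actually linking for macOS additionally needs an Apple SDK and linker, e.g. via osxcross, which rustup alone doesn’t provide):

```shell
# Install the macOS target's standard library, then build for it.
rustup target add aarch64-apple-darwin
cargo build --target aarch64-apple-darwin
```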
I guess maybe MacOS CI might be a pain to do locally on a non-MacOS machine. You can’t just freely redistribute MacOS.
goes looking
Maybe this?
Darling is a translation layer that lets you run macOS software on Linux
That sounds a lot like Wine
And it is! Wine lets you run Windows software on Linux, and Darling does the same for macOS software.
As long as that’s sufficient, I’d think that you could maybe run MacOS CI in Darling in Podman? Podman can run on Linux, MacOS, Windows, and BSD, and if you can run Darling in Podman, I’d think that you’d be able to run MacOS stuff on whatever.


I tried a fully fledged consumer NAS (QNAP with Seagate 12 TB NAS drives) but the noise of the platters was not acceptable.
If you have a NAS, then you can put it as far away as your network reaches. Just put it somewhere where you can’t hear the thing.


The publishers can do it via uploading beta branches, but there’s also a way to tell the Steam client to fetch old versions independently of that. I remember it coming up specifically with Skyrim, because updates broke a lot of modded environments, and it takes a long time for a lot of mods to be updated (during which time people couldn’t play their modded installs).
searches
https://steamcommunity.com/app/489830/discussions/0/4032473829603430509/
The download_depot Steam console command.
The above link is about Skyrim, but also links to a non-Skyrim-specific guide that talks about how to obtain manifest IDs for versions of other games.
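As I recall, the command’s shape is roughly the following, run from the Steam console (double-check the linked guide for your game’s actual IDs):

```text
download_depot <appid> <depotid> [<manifestid>]
```

It downloads that depot version into a separate content directory rather than your normal library folder, which you then copy over your install.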
But, yeah. It’s really not how Steam’s intended to be used, and I imagine that hypothetically, one day, it could stop working.
There are also IIRC some ways to block Steam from updating individual games, but again, not intended functionality.
searches
https://steamcommunity.com/discussions/forum/0/3205995441631274440/
If you specifically want control over game updates for some game, then GOG can be a major benefit for that.
One concern I have is that the rights to games can be purchased — Oxygen Not Included, for example, was purchased by Tencent, which added data-mining. Fortunately, in that case, Tencent was open about what they were doing, and allowed players to opt out — if they let Tencent log data about them, they could “earn” various in-game rewards. But I could imagine less-pleasant malware being attached to games after someone purchases IP rights to them and just pushes it out. Can’t do that with GOG, since there’s no channel intrinsically available to a game publisher to push updates out (unless the game has that built into itself).


I would guess that he’s looking for a response to someone pointing out that Steam has a larger game library than GOG.
Like, he’s gonna say “yes, but a higher proportion of the excluded games aren’t good”.


I mean, it’s true that there are lots of games sold on Steam that aren’t great games, but that doesn’t hurt me much.
There are lots of products on Amazon that aren’t that great.
There are lots of websites on the Internet that aren’t that great.
As long as I can get to the stuff I want, all good.
EDIT: I think that a better selling point for GOG than that it excludes more not-good games is that the offline installer model can survive GOG going down.
Or maybe that GOG gives you control over updates. There are ways to do this with Steam, but it’s not an intended mode of operation, and some people, like heavy Skyrim modders, where an update can cause major breakage, really want control over when they update.


I’m using Debian trixie on two systems with (newer) AMD hardware:
ROCm 7.0.1.70001-42~24.04 on an RX 7900 XTX
ROCm 7.0.2.70002-56~24.04 on an AMD AI Max 395+.