• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: June 14th, 2023

  • I see a lot of articles talking about the white elephants that might be lost from public view, which is probably the biggest tragedy, using their KI-10 as an example.

    The one I’m most worried about from that collection is that they have the last known operational CDC6000 series machine (theirs is the slightly smaller CDC6500; the flagship CDC6600 is the machine that made Seymour Cray famous, and it fucked so hard it was 3x as fast as the previous title holder when it came out in 1964 and remained the fastest machine in the world until 1969, when it was replaced by the derived, upgraded CDC7600, which held the title from 1969-1975).

    It’s a 12,000lb, 80" tall, 165" on a side monster that draws 30kW (at 208V/400Hz). I haven’t heard a plan for it, and there are very, very few possible long-term-secure homes for such a thing.

    I guess it’s just not in the current auction so it isn’t drawing as much attention yet?


  • That’s credible.

    I find the hardware architecture and licensing situation with AMD much more appealing than Nvidia’s, and I really want to like their cards for compute, but they sure make it challenging to recommend them.

    I had to do a little dead reckoning with the list of supported targets to find one that did the right thing with the 12CU RDNA2 680M.

    I’ve been meaning to put my findings on the internet since they might be useful to someone else, and this is as good a place as any.

    This was on a fresh Xubuntu 22.04.4 LTS install following the official ROCm 6.1 setup instructions, using a Minisforum UM690S (Ryzen 9 6900HX/64GB/1TB) box as the target, after setting the GPU memory to 8GB in the EFI before boot so it doesn’t OOM.

    For OpenMP projects, you’ll probably need to install libstdc++-12-dev in addition to the documented stuff, because HIP won’t see the cmath libs otherwise (bug). Then the CMakeLists.txt mods for adapting a project with accelerator directives to that target are:

    # make sure CMake can find the ROCm install before it goes looking for hip
    list(APPEND CMAKE_PREFIX_PATH /opt/rocm-6.1.0)
    find_package(hip REQUIRED)
    # compile and link with hipcc so the offload toolchain actually gets used
    set(CMAKE_CXX_COMPILER ${HIP_HIPCC_EXECUTABLE})
    set(CMAKE_CXX_LINKER   ${HIP_HIPCC_EXECUTABLE})
    # gfx1035 is the 680M; substitute your own target for yourtargetname
    target_compile_options(yourtargetname PUBLIC
        "-lm;-fopenmp;-fopenmp-targets=amdgcn-amd-amdhsa;-Xopenmp-target=amdgcn-amd-amdhsa;-march=gfx1035")
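
    For completeness, the configure/build dance on that box went roughly like this (a sketch from memory; yourtargetname and the project layout are placeholders, and the apt package is the workaround for the cmath bug mentioned above):

    # HIP won’t see the C++ standard headers without this
    sudo apt install libstdc++-12-dev
    # configure out of tree, pointing CMake at the stock ROCm 6.1 location
    cmake -B build -DCMAKE_PREFIX_PATH=/opt/rocm-6.1.0
    cmake --build build
    # run the (placeholder-named) binary and watch amdgpu_top to confirm it actually hit the GPU
    ./build/yourtargetname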

    And torch, because I was curious how that would go (after I watched the suggested Docker-based method download 30GB of trash and then fall over, I did the bare-metal install instead), seems to work with PYTORCH_TEST_WITH_ROCM=1 HSA_OVERRIDE_GFX_VERSION=10.3.0 python3 testtorch.py, which is the most confidence inspiring.
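
    If you just want a quick signs-of-life check without a script, a one-liner like this (same gfx override, which is the load-bearing part) does the job:

    HSA_OVERRIDE_GFX_VERSION=10.3.0 python3 -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'no GPU visible')"

    ROCm torch reuses the torch.cuda namespace, so True plus the device name means the GPU is visible; False means you’re about to be computing on the CPU.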

    Also amdgpu_top is your friend for figuring out if you actually have something on the GPU compute pipes or if it’s just lying and running on the CPU.


  • Neat.

    I set up some basic compute stuff with the ROCm stack on a 6900HX-based mini computer the other week (mostly to see if it was possible, as there are some image processing workloads a colleague was hoping to accelerate on a similar host) and noticed that the docs occasionally pretend you can use dynamically allocated GTT memory for compute tasks, but there was no evidence of it ever having worked for anyone.

    That machine had flexible firmware and 64GB of RAM stuffed in it, so I just shuffled the boot-time allocation in the EFI to give 8GB to the GPU to make it work, but it’s not elegant.

    It’s also pretty clumsy to actually make things run: lots of “set the magic environment variable because the toolchain will mis-detect the architecture of your unsupported card” and “inject this wall of text into your CMake list to override libraries with our cooked versions” to make things work. Then it performs like an old GTX1060, which is on one hand impressive for an integrated part in a fairly low-wattage machine, and on the other hand is competing with a low-mid range card from 2016.
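
    For the curious, the flavor of incantation I mean looks something like this (my go-to example; the 680M’s gfx1035 ISA isn’t on the official support list, so you tell the runtime it’s a plain gfx1030, and the binary name is a placeholder):

    # lie to the ROCm runtime about the iGPU’s architecture so it loads kernels for it
    export HSA_OVERRIDE_GFX_VERSION=10.3.0
    ./your_offloaded_binary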

    Pretty on brand really, they’ve been fucking up their compute stack since before any other vendor was doing the GPGPU thing (abandoning CTM for Stream in like a year).

    I think the OpenMP situation was the least jank of the ways I tried getting something to offload on an APU, but it was also one of the later attempts, so maybe I was just getting used to its shit.


  • Don’t trust that they’re 100% compatible with mainline Linux; ChromeOS carries some weird patches and proprietary stuff up-stack.

    I have a little Dell Chromebook 11 3189 that I did the Mr.Chromebox Coreboot + Linux thing on. A couple years ago I couldn’t get the (weird i2c) input devices to work right; that has since been fixed in upstream coreboot tables and/or Linux, but (as of a couple months ago) they still don’t play nice with smaller alternative OSes like NetBSD or a Haiku nightly.

    The audio situation is technically functional but still a little rough: the way the codec in Bay/Cherry Trail devices is half chipset, half external occasionally leads to the audio configuration crapping itself in ways that take some patience and/or expertise to deal with (why do I suddenly have 20 inoperable sound cards in my PulseAudio settings?).

    This particular machine also does some goofy bullshit with two IMUs, one in each half, instead of a fold-back sensor, so the rotation/folding stuff via IIO sensors is a little quirky.

    But, they absolutely are fun, cheap hacker toys that are generally easy targets.



  • Relevant place to ask: I’ve been trying to find a reference for the earliest Emacs that could host a terminal emulator or subshell in a window.

    Multics Emacs appears to have had both split windows and a character-at-a-time input and output mode as far back as 1978 for use as a SUPDUP and/or TELNET client, which is currently the earliest I’m aware of. Ancient ITS TECO EMACS had splits pretty early on, and may have grown the necessary character plumbing earlier - but I’ve never found any reference material to confirm/deny.

    It’s a fringe of a larger interest: I’ve been trying to document the history of terminal multiplexers, especially in the Window (1986), Screen (1987), Tmux (2007) tradition (as opposed to the historical meaning, which we’d now call terminal servers). I’m slowly becoming convinced they came about after the advent of floating-window GUIs hosting multiple terminal emulators. If you were super connected and could get access to one, then sometime fairly early in the window between the 1973 introduction of the Alto and the surviving 1979 manuals, the Alto program “Chat” could run multiple telnet sessions in floating windows (I’m also looking for a more precise date for when Bob Sproull made Chat able to do that trick). Several other early graphical systems, like Blit terminals (1982 inside Bell, commercial as the 5620 in 1984) and the early Sun windowing system of early SunOS (1983), could also run multiple floating terminal emulators, so they were common by the early 80s.

    Because the 36-bit DEC lineage had pretty robust pseudoterminals all the way back into the mid 1960s (ref), a lot of hackers did a lot of fun shit on PDP-10s with ITS and TENEX and WAITS, and Stanford and MIT had PDP-10s connected to fancy video terminals by the mid 70s, it’s IMO the most likely place for the first terminal multiplexers to emerge… if I could just find some documentation or dated code or accounts.



  • IIRC, the Ultra 1 and 2 are strictly SBus machines, while all the later Ultra 5/10/30/60/80 are PCI machines, plus most but not all members of the family have UPA slots with that freaky two-row card-edge connector for fancy video boards?

    For readers not exposed to lots of Sun lore, Ultras were distinguished from SparcStations because they host 64-bit SPARCv9 parts branded “UltraSPARC,” as opposed to the Sun-4m SparcStations, which were based on 32-bit SPARCv8 processors.

    I’ll also add that, if you don’t want to fuck around with large pieces of aging hardware and just want to marinate yourself in a retro Solaris environment, the qemu sparc support is really good. Folks restoring Sun stuff with disc issues often do their installs via netboot from an emulated server. Adafruit even has a beginner click-by-click tutorial for spinning your own emulated Sun4m system.


  • Selecting Suns is easy because there aren’t many bad choices in the era you’re talking about, but a little weird because the internal names and the package label names don’t always match in obvious ways. Most of the “classic era” Sparc boxes are Sun-4 variants, with SparcStations mostly being Sun-4c or Sun-4m and Ultras mostly being Sun-4u machines. The Sun-4* name is more important for knowing what you’re looking at than the case badge. For example, I have a “SparcServer 20” that some previous owner installed a TurboGX (cgsix) video board in, so it’s almost exactly a similarly-spec’d SparcStation20 with different badges.

    Pre-SparcStation Sun-3 and Sun-4 VME-based machines are quite a bit more exotic to source parts for in a modern context, and the newer stuff is basically PCs (remember they went and re-used the Ultra name for a family of x86 boxes a couple years later, so watch model numbers if you’re trying to buy a SPARC Ultra).
    SparcStations are a little more bespoke and workstation-y (SBus cards, SCSI discs) and Ultras are generally a little more PC-like (mostly PCI cards, ATA discs), but neither is particularly hard to work on these days, since the common SBus peripherals aren’t terribly expensive and SCSI disc emulators like BlueSCSIs have come down in price and up in performance. IIRC, in all cases you have to be kind of specific with RAM: some older machines use memory modules unique to the family, and Ultras mostly take 168-pin PC-style DIMMs but are picky about the exact details.

    IMO the SS10/SS20/SS5 Sun-4m machines are pretty nice to work with because they are still “workstation grade” high reliability parts but were made in HUGE quantities and are extremely modular within the family so it’s easy to work on them and get parts/upgrades/documentation/etc. They also have 10baseT Ethernet onboard (careful about degrading your whole switch), while the older SS1/SS2 need an AUI transceiver.

    Peripherals:

    Remember that older Suns use their own keyboard/mouse protocol over MiniDIN-8 connectors and 13W3 video cables. You’ll need a suitable Sun keyboard (probably a Type 5 or Type 6) and mouse, and those can be expensive on their own if not bundled, because keyboard people. They’re not as bad as some of the more exotic and/or desirable-to-keyboard-enthusiast bespoke keyboards, but still, pay attention when considering a machine to buy. Video is a little easier because 13W3-to-VGA cables are a thing (I have one of these with switches so you can configure for Sun or SGI or NeXT or IBM’s particular signaling). You still need a monitor or scan converter that works with sync-on-green to accept the signal… most modern LCDs with VGA ports actually can, but the labeling is typically not very clear about that. Sun video adapters are generally a little more willing to negotiate video modes than some of the other workstations (e.g. my SS20 has talked to almost everything I’ve plugged it into, while my HP Apollo 9000/735 and its absurd CRX-24z video board will talk to the Dell P2314H on my real work desk and has spurned every other monitor I’ve tried it with).

    NVRAM:

    Most older Suns have a chip on the motherboard - typically with a yellow barcode sticker if it’s original - which contains a small battery-backed NVRAM storing the serial number, the Ethernet MAC, and various configuration parameters, plus an RTC (real-time clock). At this point the internal batteries on all of them should be presumed dead. The M48Txx line of chips Suns use was originally made by Mostek, which was absorbed by SGS-Thomson, which became STMicro. Ref for NVRAM chips. Once it dies the machine loses its machine ID and MAC address and such. Fortunately, they can be reprogrammed from OpenFirmware, either with original values read from stickers and the like, or with suitable made-up replacements. There are a lot of surviving Suns with hand-assigned MAC addresses containing amusing strings like DEAD, BEEF, CAFE, C0FFEE, etc., as people have made up suitable numbers. Sun’s factory MAC addresses have a 08:00:20 prefix if you want networking tools that notice that sort of thing to assume it’s a Sun.

    Generally there are 3 (and a half) options for dealing with them:

    1. Modern production compatibles are still available, though you have to be a bit careful about model compatibility, and they’re rather expensive these days, something like $25 a piece (e.g. Mouser has a small stock of M48T08s for $26.50 + S&H).

    2. You can also grind down an end and attach a 3V coin-cell battery holder yourself - some folks say you should always cut the old battery all the way out, because there may be unwanted effects from having the dead battery in parallel with the good one.

    3. You can crack the whole top of the module - the part with the battery and crystal - off, and solder on a replacement crystal and a user-serviceable battery holder in its place.

    4. For rarely-used machines, you can just do the reprogramming procedure (in the first ref) at the OpenFirmware OK prompt by hand each time you start the machine; the values will hold as long as the computer stays powered.

    It’s not a huge deal, but it is a thing to expect to have to deal with.

    Software:

    Remember that the OS nomenclature is a little weird because Solaris started out being versioned on top of SunOS (e.g. SunOS 5.1 hosts Solaris 2.1), and at some point they dropped the SunOS name and then the leading “2” from Solaris versions, so you have Solaris 2.5 -> 2.6 -> 7 -> 8. The Wikipedia version history table is straightforward enough to work through, and has decent notes on supported systems. You’ll generally be between 2.1 and 9 on the era of systems you’re talking about, and those are the ones that “feel” like old commercial workstation Unix, with OpenWindows and CDE and whatnot. I’m partial to 7 as “peak Solaris”, but I’m sure that’s because I helped maintain a bunch of 7 boxes at one point; it’s a fully mature SVR4 with all the commercial Unix-isms, from before it started to converge with the modern free Unix-likes. Many of the usual suspects like Tenox and WinWorldPC have install media and/or software.

    Edited to add from downthread:

    Emulation:

    If you don’t want to fuck around with large pieces of aging hardware and just want to marinate yourself in a retro Solaris environment, the qemu sparc support is really good. Folks restoring Sun stuff with disc issues often do their installs via netboot from an emulated server. Adafruit even has a beginner click-by-click tutorial for spinning your own emulated Sun4m system.
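
    A minimal sketch of what that looks like (the machine type and sizes are sensible sun4m defaults; the media filenames are placeholders, and the Adafruit tutorial walks through the real details):

    # create a blank disc image, then boot the installer from an emulated CD-ROM
    qemu-img create -f qcow2 solaris.qcow2 9G
    qemu-system-sparc -M SS-5 -m 256 -hda solaris.qcow2 -cdrom solaris_install.iso -boot d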




    1. Neat, I never knew what that system was called. I have fond memories of my local library’s DEC terminals.

    2. Ooh, it looks like that ran on Pick, which is a neat early database/operating system/programming environment …thing named for one of its primary authors, (I shit you not) Dick Pick. Later it was typically the UniVerse-hosted variant, which is proprietary up to its eyeballs and still sold by Rocket Software as U2. I don’t think I’ve ever come across a copy of native Pick or UniVerse for a 90s Unix or NT in a vintage software archive, but it has been widely used essentially forever, and it is virtually search-proof, so it might be out there.

    3. Dynix itself is pretty search proof since it was also the name of an influential multiprocessor Unix from Sequent, which, like Pick, was at some point bought by IBM.

    4. Holy shit they’re still a thing https://www.sirsidynix.com/



  • For the most part, I’d rather have native packages. I’m not deeply philosophically opposed to secondary packaging systems, and only mildly opposed to “ship the whole dependency tree in an archive” software distribution methods (lookin’ at you, NextStep/OS X style bundles), and I see their potential, especially on platforms with no/bad native package managers, or for bringing in specific software that would pose a compatibility problem for the host system… but they never seem to work nearly as well as native packages, and the two big players on Linux have problems.

    As far as I’m concerned, they’re just taking the old last-ditch practice of “I have this piece of recalcitrant software that is incompatible with the rest of my system, so I’ll throw it in /opt with its entire dependency tree,” replacing /opt with a bunch of bind mounts, and doing so with varying degrees of additional tooling.

    The sandboxing is a nice idea, but it seems like in practice the models on both Snap and Flatpak are simultaneously restrictive in ways that make them annoying-to-unusable for many tasks, and too sloppy to provide reliable security guarantees.

    They make debugging problems harder because you can’t check functionality from another program, since they likely don’t share libs. ldd is a lot easier than spelunking around with e.g. flatpak list --app --columns=application,runtime until you find a “peer” to test.
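
    As a concrete example of the extra hoops (Inkscape is just a stand-in app ID here), compare checking a native binary’s libraries against doing the same thing inside a Flatpak sandbox:

    # native package: one command
    ldd "$(command -v inkscape)"
    # flatpak: figure out which runtime the app uses, then get a shell inside the sandbox
    flatpak list --app --columns=application,runtime
    flatpak run --command=sh org.inkscape.Inkscape
    # …and then ldd /app/bin/inkscape from inside that shell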

    If I need a one-off piece of software that is a compatibility nuisance on my host distro (but not so much of a nuisance it needs to go in a container or VM, which is a pretty narrow window), I’ll usually reach for an AppImage because unlike the other two, they’re actually fairly standalone and don’t involve a big invasive runtime/tooling system.

    The Immutable-core OSes that depend on them are kind of the same way at the moment. Fundamentally a pretty neat idea, but so far I find them super frustrating in practice. Nix is …different… to work with, but is likely a more elegant scheme to solve the same class of problems.


  • I’ve never been much of a poster (not even 2 posts/yr for the almost dozen years I’ve had a reddit habit), but I was a regular commenter in various specific-interest subs.

    I am, as a rule, no longer contributing content to Reddit, since they’ve made it clear they plan to finish their transition from “hosting communities” to “extracting value from users.” Frankly, it’s not as much of an imposition as I feared, because many of those communities seem to be broadly taking the same attitude.

    I’m actively trying to comment heavily here to try to help establish communities. If I had a little more free time I’d do some posting and/or try to help spin up some successor communities for my interests.


  • It’s not just the technical interest community on the lemmy instance, SDF as an entity has gravitas that I appreciate.

    It’s backed by a 30-year-old 501(c)(7) nonprofit established from an even older entity, instead of “someone who spun up a VM the other week.”

    They’ve handled the abuse of hosting public-facing UNIX systems for decades, so this isn’t a crew who will freak out when they get a look at the Internet’s gaping hate hole.

    This is a community with roots in the BBS era and the Usenet era, and one which maintains its connection to the culture of the ARPANET era via the historical systems (y’all have played with the TOPS-20 box?). These are people who know that, as platforms go, ‘All of this has happened before, and it will all happen again,’ and that’s the perspective I deeply desire in my platforms these days.



  • My usual suggestion: Get a generation-old business or workstation class machine from one of the major manufacturers, as a refurb. Mostly meaning keep an eye on Dell Refurbished or Lenovo Outlet - sometimes you can also get a deal on a refurb via woot - for something that appeals to you. The stock is always changing at those, and there are almost always sales/coupons for around 40% off at the first-party refurb stores, so +/- a week of patience can save you a bunch of money.

    Business- or workstation-class machines (think Dell Latitude or Precision, especially the ones with model numbers that start with a 7, or ThinkPad) are typically mechanically much better built than their consumer counterparts, and usually full of reputable components that are connected in standard ways - low-end consumer stuff sometimes uses weird, less-common components or connects things in stupid ways to save a few cents per unit, which will cause driver issues.

    Waiting a generation gives time for mainline kernel driver support to fully mature to minimize driver problems, and drastically cuts the price.

    I’ve had several machines following that advice, and I think the only driver trouble I’ve had with them has been with unsupported fingerprint/smartcard readers, which I …don’t care about anyway.

    Or, if you want a way cheap beater and don’t mind some hackin’, grab a used/refurbished AUE Chromebook that is on the Mr. Chromebox Supported List. AUE means they no longer receive ChromeOS updates, so their price craters to like $50, and you can flash a normal UEFI payload and use them as a (feeble, storage starved, low resolution) computer. Not a good main machine, but they make fun beaters for experimenting. There are often batches of them being dumped via woot.

    …also, don’t buy anything with an Nvidia GPU unless you have a specific compelling reason, it’ll be a pain in your ass for the life of the machine.


  • I dabbled with Linux/Unix (Suse, Gentoo, Debian, Slackware, Arch, NetBSD, a little Solaris, a couple of those long-dead floppy/livecd/liveusb systems… and some less-unix things like BeOS) starting in about 1998 and slowly moved fully over to Linux as the daily driver. My usual distro for personal machines has been Arch since about 2004, though I’ve typically had *buntu, and/or CentOS (starting at cAos, now migrating to Rocky) machines for some things I do professionally, and at least one personal Debian server.

    I did a lot of environment hopping early on, but settled on XFCE from about 2007-2017, then KDE from about 2017-current once Plasma5 got its resource consumption under control. I’ve been playing with Hyprland a little bit recently, just because it’s the least-broken way to fiddle with a Wayland environment I’ve found, but I like floating+snapping better than tiling so I doubt it’ll become my daily driver.

    I think my first Arch install was off 0.2 or 0.3 media in mid-2002, and there are probably only a month or two in that time that I haven’t had at least one Arch box, so that’s two decades.