• 22 Posts
  • 34 Comments
Joined 1 year ago
Cake day: June 15th, 2023


  • If you sign up to social media, it will pester you for your email contacts, location and hobbies/interests.

    Building a signup wizard to use that information to select an instance would seem to be the best approach.

    The contacts would let you know which instance most of your friends are on (e.g. by looking up email addresses).

    Topic-specific instances can be suggested via a hobby/interests selection section.

    Lastly, the location would let you choose a country-specific general instance.

    It would help push decentralisation, but instead of providing choice you're asking questions the user is used to being asked.


  • Nvidia drivers don't tend to be as performant under Linux.

    With AMD, instead of using the AMDVLK driver you would use RADV (developed largely by Valve), which performs better.

    Every AMD card under Linux supports OpenCL (driver support is based on the graphics card architecture) and you can install it very easily. Googling it with Windows found pages of errors and missing support.

    Blender supports OpenCL. I bet the 2x improvement is Blender being able to offload rendering to the AMD graphics card.

    Also, this represents the biggest headache in Linux: lots of gamers insist they can only use Nvidia cards. Nvidia treats Linux as an afterthought at best, or deliberately sabotages things at worst.

    AMD embraced open source, so Linux land is much nicer on AMD (and to a lesser extent Intel).

    The results here will probably be a DXVK quirk: lots of "Nvidia optimised" games have game engines doing weird things that the Nvidia driver compensates for. DXVK has been identifying those cases to produce "good" Vulkan calls.


  • If you read the reports…

    Normally JPL outsources its Mars mission hardware to Lockheed Martin. For some reason it decided to do Mars Sample Return in house. The reports argue JPL hasn’t built the necessary in-house experience and should have worked with LM.

    Secondly, JPL is suffering a staff shortage which is affecting other projects, and Mars Sample Return is making the problem worse.

    Lastly if an organisation stops performing an action it “forgets” how to do it. You can rebuild the capability but it takes time.

    A team arbitrarily declaring themselves experts and suddenly deciding they will do it is one that will have to relearn skills/knowledge on a big, expensive, high-profile project. The project will either fail (and be declared a success) or masses of money will be spent to compensate for the team’s learning.

    Neither situation is ideal.


  • The GAO has reviewed the Space Launch System annually since 2014, switching to reviewing the wider Artemis program in 2019.

    Each year the GAO points out NASA isn’t tracking costs, and NASA argues with the GAO about the costs it assigns. Then the GAO points out NASA has no concrete plan to reduce costs, and NASA goes nuh-uh (see the article’s cost reduction “objectives”).

    The last two reports have focused on the RS-25 engine. Last time the GAO was unhappy because an engine cost NASA $100 million, and NASA had just granted a development contract to reduce the cost of the engine.

    However, if you took the headline cost of the contract and split it over the planned engines, it was greater than the desired cost savings. NASA’s response was that development costs don’t count.

    Congress reviews GAO reports and decides to give SLS more money.


  • The other person was just wrong.

    Large-scale hydrogen generation isn’t fossil free; hydrogen can be generated in a green way, but the infrastructure isn’t there to support SLS.

    Hydrogen has high ISP (think miles per gallon) but rubbish thrust (think engine torque).

    This means SLS only works with Solid Rocket Boosters, which are highly toxic and release greenhouse-contributing material into the upper atmosphere. I suspect you would find Falcon 9/Starship are less polluting as a result.

    Lastly, the person implies SLS could be fuelled from space sources (e.g. the Moon).

    SLS is a 2.5-stage rocket: the boosters are ditched in Earth’s atmosphere and the first stage at the edge of space. The current second stage doesn’t quite make low Earth orbit.

    So someone would have to mine materials on the moon and ship them back. This would be far more expensive than producing hydrogen on Earth.

    Hydrogen on the Moon makes sense if you’re in lunar orbit, not if you’re launching from Earth.







  • When Oracle bought Sun Microsystems, it demonstrated it didn’t know how to interact with open source communities. The Hudson -> Jenkins fork is probably the most famous example: Oracle thought it could dictate where teams would collaborate. The bullying tone Oracle took made it clear it viewed the community as employees who should do as they are told.

    To me this kind of fumble shows people on the Red Hat side are suffering the same issue: they don’t understand they manage an ecosystem. Ironically, if Oracle, Alma and Rocky work together they stand a good chance of owning that community.


  • I am running an AMD Athlon™ X4 860K Quad Core Processor with 32GiB of RAM, a Radeon HD 7450, 16TiB of HDD storage and a 256GiB SSD. The only upgrade I am considering is buying 4TiB SSDs to replace the HDDs, and only because I’ve noticed SSDs have gotten really cheap.

    I would plan for Docker rather than virtual machines. VMs emulate an entire computer, then you run an entire operating system within them and then the application, so they need far more resources to host an application. Server applications have been moving to Docker because it’s a defined way to sandbox applications, run them consistently and use far fewer resources.

    Personally I run Debian Stable, since it’s a home server and the only updated applications I want are Docker images and security patches. I then installed Docker Community Edition on it.

    I then deployed Portainer Community Edition on the server, which provides a web UI to manage the Docker containers running on it. I currently have 9 containers running.
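For anyone following along, a minimal Compose file for Portainer CE looks something like this (a sketch based on Portainer's published image and default HTTPS port; double-check their docs before relying on it):

```yaml
services:
  portainer:
    image: portainer/portainer-ce:latest
    restart: always
    ports:
      - "9443:9443"                              # Portainer web UI (HTTPS)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets Portainer manage this host's containers
      - portainer_data:/data                       # persists Portainer's own settings

volumes:
  portainer_data:
```

Run `docker compose up -d` next to that file and browse to https://your-server:9443.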

    You mentioned Plex: Plex provides a Docker image for running their application that supports Nvidia GPU acceleration and seems to run fine on AMD hardware. You will find almost every server application offers an official Docker image.
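As a sketch of what that looks like, here is a Compose service for Plex's official image; the media and config paths (`./plex/config`, `/mnt/media`) and the timezone are placeholders you'd adjust for your own setup:

```yaml
services:
  plex:
    image: plexinc/pms-docker:latest
    restart: unless-stopped
    network_mode: host            # simplest option for local device discovery
    environment:
      - TZ=Europe/London          # set to your timezone
    volumes:
      - ./plex/config:/config     # Plex's own database/settings
      - /mnt/media:/data          # wherever your media library lives
```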

    With my business hat on: think how many Docker containers you want and plan for that + 1 cores in your CPU. You can probably look up the applications you want to run and add up their recommended RAM usage; as a home rule of thumb, 16GiB of RAM is the minimum and 64GiB would be overkill.





  • The biggest issue with switching is your “must have” applications.

    A lot of people spend time trying to make them work, it often doesn’t work well and so they go back.

    Take Sync: Linux has similar solutions (Insync is a popular one), but there are alternative approaches. Perhaps the server could run Syncthing, or your tooling supports FTP, etc…

    The key thing is not to ask for the equivalent of X, but think what you actually use X for.

    So if you use Sync to share video on Slack, you don’t need a Sync replacement, you need a way to share video on Slack.

    Alas, I think Photoshop is the one killer application.





  • Because that doesn’t fit.

    The object sends X-ray pulses for 30-300 seconds every 22 minutes.

    For a binary star system we would expect to see pulses while the neutron star is not behind its companion and a short period without pulses while the other star blocks it, which is the inverse of the recorded pattern.

    In a tertiary or larger star system you could have longer periods where the star is blocked, but the time between pulses would vary depending on the positions of the other stars.

    Personally I think it will end up being a pulsar that is slowing down and becoming a regular neutron star with something externally adding/removing mass from it causing it to speed up again.



  • I think this is the paper behind the article.

    They propose “dark stars”, which formed when dark matter clouds collapsed; the mass then pulls in hydrogen and helium. The resulting star is huge but relatively cool.

    They believe they have found 3 candidates which could be galaxies (containing population III stars) or super massive dark stars.

    The test is whether they absorb or emit a helium signature: if they absorb it they are dark stars, if they emit it they are galaxies.

    The nice thing is there are only 2 proposed types of dark matter which could make a dark star work, so it would help us work out what dark matter is.






  • Apart from Ubuntu/Fedora (which are Snap/Flatpak heavy), I think you would be OK with any Linux distribution. I have an Intel Atom N270 with 2GiB of RAM happily running Debian Bookworm and KDE (with an SSD); you’re talking about something with far more power.

    For me the considerations are as follows.

    RAM

    You’ve listed 4GiB of RAM. Looking at my PC now (Debian Bookworm, KDE desktop, 2 Flatpaks, Steam Store and Firefox ESR running), I am using 4.5GiB of RAM.

    • 2.9GiB of that is Firefox
    • ~800MiB is Steam, of which 550MiB is the Steam Store web browser
    • ~850MiB is the KDE desktop

    Moving to XFCE or LXDE would reduce desktop RAM usage to 400MiB-600MiB, but you’ll still keep hitting memory limits unless you install an addon to limit the number of tabs. Upgrading to 8GiB would resolve this weakness.

    I get by on the netbook by limiting it to 3 tabs or Steam.
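If you want to do the same accounting on your own machine, a couple of standard procps commands give you the numbers I quoted above:

```shell
# Overall memory picture: total, used, free, available.
free -h

# Top 5 processes by resident memory (RSS, in KiB) plus the header row.
ps -eo rss,comm --sort=-rss | head -n 6
```

RSS counts shared libraries against every process using them, so the per-process figures slightly overstate total usage.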

    Disk Storage
    You’ve listed 500GiB of HDD storage; this means you want to avoid any distribution which pushes Snaps/Flatpaks/immutable OS, because of the amount of storage they require, and loading that from an HDD would be insanely slow.

    Similarly I would go for LXDE or KDE desktops. Both are built around common shared system libraries, so your desktop loads one instance of a library into memory and applications reuse it. As a result such desktops quickly reach 1GiB of RAM but don’t increase much further.

    Also, moving from an HDD to an SSD would give noticeable performance gains; the biggest performance bottleneck as far back as Core 2 Duo/Bulldozer CPUs was disk I/O.

    GPU

    The biggest issue will be the 710M. I don’t think Nvidia’s Wayland driver covers this era, so you’ll be stuck on X11. Considering the age of the GPU and the need for the proprietary driver, I would personally aim for Debian or openSUSE; the long release cycles mean you can get it working and it will stay that way.

    From a desktop perspective, I would install KDE and if it was slow/tearing I’d switch to Mate desktop.

    • KDE has some GPU effects but is largely CPU drawn; it tends to look nice and work.
    • Gnome 3 chooses to use the GPU even when it’s less efficient, so if it doesn’t work well on KDE it won’t on Gnome.
    • Mate is Gnome 2 and works smoothly on pretty much anything.
    • Cinnamon is based on Gnome 3.
    • XFCE, like Mate, just works everywhere; personally I find Mate a more complete desktop.

  • Engineering is tradeoffs.

    A command shell is focused on file operations and starting/stopping applications, so it makes those things easy.

    You can use scripting languages (e.g. Node.js/Python) to do everything Bash does, but they are for general-purpose computing, so what and how you perform a task becomes more complicated.

    This is why it’s important to know multiple languages: each one makes specific tasks easier, and a community forms around it as a result.

    If I want to mess with the file system/configuration I will use Bash; if I want to build a website, TypeScript; if I want to train a machine learning model, Python; if I am data engineering, Java; etc.
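To make the tradeoff concrete, here is the kind of file operation that is a few lines in the shell but needs imports and an explicit directory walk in a general-purpose language (the `.txt`/`.md` extensions are just an example):

```shell
# Rename every .txt file in the current directory to .md.
for f in *.txt; do
  [ -e "$f" ] || continue        # skip the literal "*.txt" when nothing matched
  mv -- "$f" "${f%.txt}.md"      # ${f%.txt} strips the suffix, then .md is appended
done
```

The equivalent in Python or Node.js works fine, but you end up reimplementing globbing, suffix handling and the move call that the shell gives you for free.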