simply not true. they’re no angels or open source champions, but come on.
i mean, i’ve worked in neural networks for embedded systems, and it’s definitely possible. i share your skepticism about overhead, but i’ll eat my shoes if it isn’t opt-in
there are language models that are quite feasible to run locally for easier tasks like this. “local” rules out both ChatGPT and Copilot, since those models are enormous. “AI” generally means machine-learned neural networks these days, even if a pile of if-else statements used to pass for it in the past.
not sure how they’re going to handle low-resource machines, but as far as AI integrations go this one is rather tame
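to make “feasible locally” concrete, here’s a minimal sketch using Hugging Face’s transformers with a deliberately tiny model (the model choice is purely illustrative, not what any vendor actually ships):

```python
# rough sketch: CPU-only local inference with a deliberately tiny model
# (distilgpt2 is ~80M parameters -- illustrative, not a product recommendation)
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

out = generator("The meeting has been moved to", max_new_tokens=20)
print(out[0]["generated_text"])
```

even a model this small wants a few hundred MB of RAM, which is why the low-resource question above is a fair one.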
if it’s easier to pay, people spend more
IBM then. or, i don’t know, the British Royal Family?
the reality of talking about extremist economics is that no one knows how it would work out in the long term. but regardless, if it happened tomorrow, we’d already have a Microsoft to deal with.
“taxation is theft” “wage labour is exploitation”
sometimes things are subtle and complicated and can’t be practically boiled down to absolutes.
“we don’t know how” != “it’s not possible”
i think OpenAI more than anyone knows the challenges with scaling data and training. anyone working on AI knows the line: “a baby can learn to recognize elephants from a single instance”. reducing training data and time is fundamental to advancement. don’t get me wrong, it’s great to put numbers to these things. i just don’t think this paper is super groundbreaking or profound. a bit clickbaity and sensational for Computerphile
honestly, 8-space indents have always felt a bit ridiculous to me. i usually use 4 since it’s more conventional in most languages, but i could also be happy with 2.
weird hill to die on. use the default settings unless you have a good reason not to. the argument itself is a waste of time on projects that want to get things done.
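if a project wants to settle the argument once and for all, checking in an .editorconfig does it, and most editors pick it up automatically (the values below are just an example, not a recommendation):

```ini
# .editorconfig -- example values, adjust per project
root = true

[*]
indent_style = space
indent_size = 4
```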
gotem!
seriously tho, you don’t think OpenAI is tracking this? architectural improvements and training strategies are developing all the time
i really want to like Nix.
gave it a shot a few years ago, but i felt like the documentation and community support weren’t really there yet. this was long before Nix surpassed Arch in number of available packages. now people still complain about the documentation, especially of the Nix language. i see a lot of package authors using it, and that kind of tempts me to start using at least the package manager, but a lot don’t. the allure of GitOpsing my entire OS is very tempting, but then there have been these rumors (now confirmed) of new forks, while Guix splintered off much earlier. for something that’s ostensibly supposed to be the most stable OS, that makes me nervous. it also seems to have some nontrivial overhead: building packages, retaining old packages, etc.
the pitch for Nix is really appealing, but with so much uncertainty it’s hard to pull the trigger on migrating anything. heck, if i could pull off some PoCs, i think my enterprise job might consider adopting it, but it’s as hard a recommend for me today as it was 5 years ago.
i didn’t think people would really be surprised. but maybe i’m jaded by my experience in the industry.
if we’re arguing whether or not it’s objectively stupid, i think that’s up to the market to decide.
kinda seems like a toy to me anyway, and it’s kind of priced that way
what else would it be? it’s a pretty common embedded target. dev kits from Qualcomm come with Android and use the Android bootloader and debug protocols at the very least.
nobody is out here running a plain Linux kernel and maintaining a UI stack while AOSP exists. would be a foolish waste of time for companies like Rabbit to use anything else imo.
to say it’s “just an Android device” is both true and a mischaracterization. it’s likely got a lot in common with a smartphone, but they’ve made modifications and aren’t supporting app stores or sideloading. that doesn’t mean you can’t do it, just don’t be surprised when it doesn’t work 1:1
it’s an analogy that applies to me. tldr worrying about having my identity stolen via physical access to my phone isn’t part of my threat model. i live in a safe city, and i don’t have anything the police could find to incriminate me. everyone is going to have a different threat model. some people need to brick up their windows
it’s not a password; it’s closer to a username.
but realistically it’s not in my personal threat model to be ready to get tied down and forced to unlock my phone. everyone with windows on their house should know that security is mostly about how far an adversary is willing to go to try to steal from you.
personally, i like the natural daylight, and i’m not paranoid enough to brick up my windows just because it’s a potential ingress.
seems like chip designers are being a lot more conservative from a design perspective. NPUs are generally a shitton of 8-bit registers with optimized matrix multiplication. the “AI” that’s important isn’t the stuff in the news or the startups; it’s the things we’re already taking for granted: speech to text, text to speech, semantic analysis, image processing, semantic search, etc., etc. sure, there’s a drive to put larger language models or image generation models on embedded devices, but a lot of these applications are battle-tested and would be missed or hampered if that hardware weren’t there. “AI” is a buzzword and a goalpost that moves at 90 mph. machine learning, and the hardware and software ecosystem that’s developed over the past 15 or so years more or less quietly in the background (at least compared to ChatGPT), is revolutionary tech that will be with us for a while.
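to make the “8-bit registers with optimized matrix multiplication” point concrete, here’s a toy sketch of NPU-style int8 math in numpy (real NPUs do this in silicon; the shapes and the simple per-tensor scaling here are assumptions for illustration):

```python
# toy sketch of NPU-style int8 matrix math in numpy
# (real NPUs do this in hardware; per-tensor scaling is a simplification)
import numpy as np

def quantize(x: np.ndarray):
    """Map float32 values to int8 with a single per-tensor scale."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# stand-ins for a layer's activations and weights
a = np.random.randn(4, 8).astype(np.float32)
w = np.random.randn(8, 3).astype(np.float32)

qa, sa = quantize(a)
qw, sw = quantize(w)

# accumulate in int32 so the dot products don't overflow,
# then apply the float scales once at the end
acc = qa.astype(np.int32) @ qw.astype(np.int32)
approx = acc * (sa * sw)

print(np.max(np.abs(approx - a @ w)))  # small quantization error
```

the point is that the expensive multiply-accumulate work happens entirely in integer hardware; the float scales get applied once at the end.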
blockchain currencies never made sense to me from a UX or ROI perspective. they were designed to get more power-hungry as adoption took off, and power and compute optimizations were always conjecture. the way wallets are handled, and how privacy was barely a concern, was never going to fly with the masses. pile on that finance is just a trash profession that requires goggles that turn every person and thing into an evaluated commodity, and you have a recipe for a grift economy.
a lot of startups will fail, but “AI” isn’t going anywhere; it’s been around as long as computers have. i think we’re going to see a similarly cautious approach (like the chip designers’) from companies like Google and Apple as more semantic search, image editing, and conversational bot advancements start to make their way to the edge.
damn go off