I think you overestimate the amount of ‘thought’ going on here. (ref)
The way he plays with the meaning of words
She (or, if you’re not sure, they).
any kind of bureaucratic or rule-based decision-making
Human-written rules are often flawed, and for similar reasons (the sole human thought process that ‘AI’ is very good at reproducing is system justification). But human-written rules can be written down and they can be interrogated. Apple, by contrast, landed itself in front of a regulator because it had no clue how its credit algorithm worked and could not conceive how it could possibly be sexist if the machine didn’t get any gender data to analyse.
Perhaps that is the point.
That is, indeed, the point.
It’s asking why we don’t use it for that purpose, not suggesting that there is anything easy about doing so. I don’t know how you think science works, but it’s not like that.
The data cannot be understood. These models are too large for that.
Apple says it doesn’t understand why its credit card gives lower credit limits to women than men even if they have the same (or better) credit scores, because it doesn’t use sex as a datapoint. But it’s freaking obvious why, if you have a basic grasp of the social sciences and humanities. Women were not given the legal right to their own bank accounts until the 1970s. After that, banks could be forced to grant them bank accounts but not to extend the same amount of credit. Women earn and spend in ways that are different, on average, to men. So the algorithm does not need to be told that the applicant is a woman; it just identifies her as the sort of person who earns and spends like the class of people with historically lower credit limits.
Apple’s ‘sexist’ credit card investigated by US regulator
Garbage in, garbage out. Society has been garbage for marginalised groups since forever and there’s no way to take that out of the data. Especially not big data. You can try, but you just end up playing whack-a-mole with new sources of bias, many of which cannot be measured well, if at all.
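None of this is Apple’s actual model, obviously; just a toy sketch with made-up numbers showing how the proxy effect works. A model that is never shown gender can still hand women lower limits, because the features it is shown stand in for gender perfectly well:

```python
# Toy illustration (made-up numbers): a model that never sees gender can still
# reproduce a gendered outcome, because other features act as proxies for it.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hidden attribute the model is never shown.
is_woman = rng.random(n) < 0.5

# Features that correlate with gender for historical/social reasons
# (earning and spending patterns in this toy world).
income = rng.normal(50_000, 10_000, n) - 8_000 * is_woman
retail_spend = rng.normal(1_000, 200, n) + 300 * is_woman

# Historical credit limits were set lower for women, on average.
past_limit = 0.2 * income - 2 * retail_spend - 3_000 * is_woman + rng.normal(0, 1_000, n)

# Fit a plain least-squares model on the gender-free features only.
X = np.column_stack([np.ones(n), income, retail_spend])
coef, *_ = np.linalg.lstsq(X, past_limit, rcond=None)
predicted_limit = X @ coef

print("mean predicted limit, men:  ", predicted_limit[~is_woman].mean().round())
print("mean predicted limit, women:", predicted_limit[is_woman].mean().round())
# Women get lower predicted limits even though gender was never a datapoint:
# the model has simply learned to recognise people who earn and spend like the
# class of people who historically got less credit.
```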
It’s how LLMs work.
The systems didn’t do anything they weren’t told to do.
You’re thinking of the kinds of algorithms written by human beings. AI is a black box. No one knows how these models obtain their answers.
Where did you get insurance carriers from?
No idea what your post, before or after edit, is trying to say. But the subject of your quoted sentence is “proponents of AI” not “AI”, and the sentence is about what is enabled by AI systems. Your attempt at pedantry makes no sense.
If you’re suggesting that it is possible to build an AI with none of the biases embedded in the world it learns from, you might want to read that article again because the (obvious) rebuttal is right there.
Isn’t that a continuation of “why the outlier was culled”?
Not sure I follow, but I think the answer is “no”.
If you control for all the causes of a difference, the difference will disappear. That is fine if you’re looking for causal factors that aren’t already known, but no good at all if you’re trying to establish whether or not a difference exists.
It’s really quite difficult to ask a coherent question with real-world data from the messy, complicated reality of human beings.
A simple example:
Women are more likely to die from complications after a coronary artery bypass.
But if you include body surface area (a measure of body size) in your model, the difference between men and women disappears.
And if you go the whole hog and measure vein size, the importance of body size disappears too.
And, while we can never do an RCT to prove it, it makes perfect sense that smaller veins would increase the risk for a surgery which involves operating on blood vessels.
None of that means women do not, in fact, have a higher risk of dying after coronary artery bypass surgery. Collect all the data which has ever existed and women will still be more likely to die from the surgery. We have explained the phenomenon and found what is very likely to be the direct cause of higher mortality. Being a woman just makes you more likely to have that risk factor.
It is rare that the answer is as neat and simple as this. It is very easy to ask a different question from the one you thought you were asking (or pretend to be answering one question when you answered another).
You can’t just throw masses of data into a pot and expect sensible answers to come out. This is the key difference between statisticians and data scientists. And, not to throw shade on data scientists, they often end up explaining to the world that oestrogen makes people more likely to die from complications of coronary artery bypass surgery.
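To make that concrete, here’s a toy simulation of the bypass example (entirely made-up numbers, nothing to do with the real surgical data): sex affects body size, body size affects vein size, vein size affects risk, and the ‘woman’ coefficient behaves exactly as described once you add the controls.

```python
# Toy simulation of the bypass example: the 'woman' coefficient is large on its
# own, vanishes once you control for body size, and body size in turn stops
# mattering once you control for the likely direct cause, vein size.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

is_woman = (rng.random(n) < 0.5).astype(float)

# Women are, on average, smaller; smaller bodies have smaller veins;
# smaller veins make the surgery riskier. Sex has no direct effect here.
body_size = rng.normal(2.0, 0.15, n) - 0.25 * is_woman        # body surface area
vein_size = 2.0 * body_size + rng.normal(0, 0.1, n)           # vein diameter
risk      = 5.0 - 1.0 * vein_size + rng.normal(0, 0.5, n)     # complication risk score

def fit(names, cols):
    """Least-squares fit of risk on the given columns; return named coefficients."""
    X = np.column_stack([np.ones(n)] + cols)
    beta, *_ = np.linalg.lstsq(X, risk, rcond=None)
    return dict(zip(["intercept"] + names, beta.round(3)))

print(fit(["woman"], [is_woman]))                                   # woman coefficient clearly positive
print(fit(["woman", "body"], [is_woman, body_size]))                # woman coefficient ~0
print(fit(["woman", "body", "vein"], [is_woman, body_size, vein_size]))  # body coefficient ~0 too

# And yet the raw difference has not gone anywhere: women in this toy world
# really are more likely to suffer complications.
print("raw risk gap (women - men):",
      round(risk[is_woman == 1].mean() - risk[is_woman == 0].mean(), 3))
```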
That kind of analysis is done all the time. But, even if we can collect all the relevant data (big if), the methods required are difficult to interpret and easy to abuse (we can’t do an RCT of being born female vs male, or black vs white, &c). A good example is the proliferation of analyses claiming that the gender pay gap does not exist (after you’ve ‘controlled’ for all the things that cause the gender pay gap).
It’s not easy to do ‘right’ even when done in good faith.
The article isn’t claiming that it is easy, of course. It’s asking why power is so keen on one type of question and not its inverse. And that is a very good question, albeit one with a very easy answer. Power is not in the business of abolishing itself.
How is the microphone for phone calls?
If you think Mike Masnick does not spend enough time on the Fediverse, you do not spend enough time on the Fediverse.
Who do you imagine is (or should be) making these rules for the Fediverse?
If you are forced to use them:
That way, Amazon has to pay the search engine.
I must be missing some context because I have absolutely no idea what you’re on about.
Who wouldn’t tell who what and why does that matter?
Who is “they”? Who is the second “they”? Who is the we in “our”? What is the question?
That’s a fantastically efficient way to destroy their business. There’s no way to get honest reviews of employers from employees who know their identities will be exposed whether they consent or not. It doesn’t even matter if the review is posted after leaving that job; future employers can go nosing too.
Absolute techbro-brane gold.
It’s likely a browser issue. I’ve found a workaround, thanks.
Aye, it looks like my browser is doing something strange. Thanks.
Browser (Firefox).
I just tried opening the feed from a thread set at a good zoom level and it is better? I don’t understand how or why. But I may have found some kind of solution by accident.
I don’t know which jurisdiction you’re in but, while it isn’t illegal in the UK, you’re absolutely right that it’s a bad idea, and for exactly the reason you give. In the event of a crash, it could count against you (in the UK, at least).