• 0 Posts
  • 29 Comments
Joined 1 year ago
Cake day: August 14th, 2023


  • Signtist@lemm.ee to Microblog Memes@lemmy.world · You fools! · 1 upvote · 5 months ago

    Fair enough, and I also just like the mystery of it all; I understand that a large philosophical question can’t be definitively answered in a tweet. I would say that, while knowledge requires truth and justification, there’s something more to it than just the presence of those 2 factors.

    If I had never seen the sky, but believed it was blue, I’d be right, but I wouldn’t be knowledgeable; I’d just be a lucky guesser, due to the lack of justification for my belief. But would I be justified if I had read a book that said it was blue, and based my belief on that? It seems arbitrary - what if the book was wrong? What if there were another book I had access to that described the sky as green, but I simply liked the blue book better?

    I think real knowledge requires a level of certainty that a single point of justification can’t reasonably provide, and that a “true justified belief” is a step between an arbitrary belief and real knowledge. Knowledge would essentially be a belief so well-justified that it requires no “belief” at all. In the end, I’d probably say that real knowledge is totally outside of human ability, but that’s not a new concept.


  • Signtist@lemm.ee to Microblog Memes@lemmy.world · You fools! · 7 upvotes · 5 months ago

    I think the whole point is that the difference between belief and knowledge isn’t about whether or not you’re right, it’s about whether or not your belief has been verified and proven to be true, so in OP’s example, they would be right that the room looks like that, but that belief wouldn’t have been verified due to the professor never seeing his actual room. Thus, a justified true belief, but not knowledge.


  • Signtist@lemm.ee to Microblog Memes@lemmy.world · You fools! · 20 upvotes · 5 months ago

    I guess it depends on whether you’re talking about them believing that’s his room, or them just believing his room looks like that. For the former they’d be wrong, but for the latter they’d be right, and they’d be justified in that belief, but it’s ultimately not knowledge because they can’t actually see his real room.




  • It’s important to define what “equal” means in this context. Some people hear “equal” and think it means everyone must measure exactly the same in every test, but that’s not how the word is being used here. It’s more that people are so varied from one person to another that no test can truly judge them well enough to differentiate them when it comes to inherent worth.

    One person might measure above another in one test, but there are surely many others where the results would be flipped. There are so many different things you could test a person on that in the end none of them really matter; any one measurement is like trying to figure out what an extinct animal looked like from a single tiny piece of a fossil.

    That’s what the IQ test is doing - it’s taking one tiny piece of human intelligence, which is itself one tiny piece of what might be said to make up a person’s value, and trying to use that to extrapolate information about them that simply can’t be drawn from such a one-dimensional test. It’s not worthless, but it needs to be paired with a bunch of other tests before it can really say anything, and even then it wouldn’t say much.



  • Ah, I see. It’s true that these issues cast a negative light on AI, but I doubt most people will even hear about most of them, or even really understand them if they do. Even when talking about brand security, there’s little incentive for these companies to actually address the issues - the AI train is already full-steam ahead.

    I work with construction plans in my job, and just a few weeks ago I had to talk the CEO of the company I work for out of spending thousands on a program that “adds AI to blueprints.” It literally just added a ChatGPT interface to a PDF viewer; the chat wasn’t even able to interact with the PDF in any way. He was enthralled by the “demo” a rep had shown him at an expo, which I’m sure was set up to make the product look far more useful than it really was. After that whole fiasco, I lost faith that the people in charge of deciding whether AI programs are adopted will do their due diligence to ensure those programs are actually helpful.

    Having a good brand image only matters if people are willing to look.


  • I highly doubt that OpenAI or any other AI developer would see any real repercussions, even if they had a security hole that someone managed to exploit to cause harm. Companies exist to make money, and OpenAI is no exception; if it’s more profitable to release a dangerous product than a safe one, and they won’t get in trouble for it, they’ll likely have no qualms about releasing a product with security holes.

    Unfortunately, the question can’t be “should we be charging them for this?” Nobody is going to force them to pay, and they have no reason to do it on their own. Barring an entire cultural revolution, the question instead must be “should we do it anyway to prevent this from being used in harmful ways?” And the answer is yes. Our society is designed to maximize profits, usually for people who already have money, so if you’re working within the confines of that society, you need to factor that into your reasoning.

    Companies have long since decided that ethics is nothing more than a burden getting in the way of their profits, and you’ll have a hard time going against the will of the companies in a capitalist country.





  • Signtist@lemm.ee to Technology@lemmy.world · Unity bans VLC from Unity Store. · 92 upvotes, 2 downvotes · 9 months ago

    I’ve said it before, and I’ll say it again: when a company does something that shows it doesn’t have its customers’ best interests in mind, it’s imperative that it be immediately and wholly abandoned.

    Companies have long since learned that we’ll ignore major red flags for the sake of convenience, and at this point they’re not even trying to hide the flags - they’re proudly flying them and laughing as we continue to give them business.