• 1 Post
  • 14 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • Imo, the true fallacy of using AI for journalism or general text lies not so much in generative AI’s fundamental unreliability, but rather in its existence as an affordable service.

    Why would I want to parse through AI-generated text on times.com when, for free, I could speak to some of the most advanced AI on bing.com, OpenAI’s ChatGPT, Google Bard, or a Meta product? These, after all, are the back ends that most journalistic or general written-content websites are using to generate text.

    To be clear: why not cut out the middleman if they’re just serving me AI content?

    I use AI products frequently, and I think they have quite a bit of value. However, when I want new, accurate information on current developments, or really anything more reliable or deeper than a Wikipedia article, I turn exclusively to human sources.

    The only justification a service has for serving me AI-generated text is perhaps the promise that they have a custom-trained model with highly specific training data. I can imagine, for example, weather.com developing specialized AI models which tie into an in-house LLM and provide me with up-to-date and accurate weather information. The question I would have in that case is: why am I reading an article rather than just being given access to the LLM for a nominal fee? At some point, they are no longer a regular website; they are a vendor for an in-house AI.
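    To illustrate the shape of that idea, here is a rough sketch only: the endpoints, parameter names, and the whole API below are hypothetical (nothing weather.com actually exposes). The point is that the in-house model is grounded in the site’s own live data before it answers, which is the only thing that would make its text worth paying for.

    ```python
    import requests  # hypothetical sketch; all URLs and field names are made up

    def answer_weather_question(question: str, location: str) -> str:
        """Sketch of the 'in-house LLM' idea: pull fresh observations from an
        internal weather API, then let the in-house model phrase the answer."""
        # Hypothetical internal data source with up-to-date observations
        observations = requests.get(
            "https://internal.example-weather.com/v1/current",
            params={"location": location},
            timeout=10,
        ).json()

        # Hypothetical in-house model endpoint; the model answers only from
        # the site's own data rather than from generic training text
        prompt = (
            f"Using only this data: {observations}\n"
            f"Answer the reader's question: {question}"
        )
        response = requests.post(
            "https://internal.example-weather.com/v1/llm",
            json={"prompt": prompt},
            timeout=30,
        )
        return response.json()["answer"]

    if __name__ == "__main__":
        print(answer_weather_question("Will it rain this afternoon?", "Seattle"))
    ```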


  • peanuts4life@beehaw.org to Asklemmy@lemmy.ml · Whats your such opinion

    But these traits are secondary and tertiary sexual characteristics (i.e., they are tied to your biological sex). They are certainly the origin of gender identity, but they don’t justify it. My dissatisfaction is not with the concept of sex. It’s fair to say, “oh, that person has a penis, that person is a woman, that person is intersex,” and we should strive to develop better, more diverse sexual classifiers, but gender? Nah.

    Gender roles and jobs, feminine and masculine, the separation of media to cater to one gender or the other, the gendering of clothes, attitudes, and opinions, and finally the gendering of sex. It’s all just caveman talk, imo.


  • peanuts4life@beehaw.org to Asklemmy@lemmy.ml · Whats your such opinion

    Gender is the cultural outcome of primary and secondary sexual characteristics and exists in no meaningfully physical way. In other words, we traditionally have a “boy” culture and a “girl” culture, not a gender. We are artificially indoctrinated and assimilated into a given culture based on primary or secondary sexual characteristics.

    Likewise, it follows that all other gender identities are similarly a cultural phenomenon and not the outcome of some essential characteristic of the individual.

    Gender cultures are, at least historically speaking, bad. They’ve generally been used to persecute people who aren’t in the dominant (boy) gender, and the conditions dictating mobility between genders are so intensely arbitrary that they warrant abolishing the whole stupid idea. Gender dysphoria is a symptom, generally, of the tyranny of these conditions.

    (PS, I totally am open to being wrong about this.)


  • I’m not anti-AI, I use generative AI all the time, and I actually come from a family of professional artists myself (though I am not one). I agree that it’s a useful tool; however, I disagree that it is not destructive or harmful to artists simply because it is most effective in their hands.

    1. It concentrates the power of creativity into firms which can afford to produce and distribute AI tools. While AI models are getting smaller, there are frequently licensing issues involved (not copyright, but simply utilizing the tools for profit) in these small models. We have no defined roadmap for the democratization of these tools, and most signs point towards large compute requirements.

    2. It enables artists to effectively steal the intellectual labor of other artists. Just because you create cool art with it doesn’t mean it’s right for you to scrape a book or portfolio to train your AI. This is purely for practical reasons: artists today work their asses off to make the very product AI stands to consolidate and distribute for pennies on the dollar.

    You fail to recognize the possibility that I support AI but oppose its content being copyrightable, purely because firms would immediately utilize this to evade licensing work. Why pay top dollar for a career concept artist’s vision when you can pay a starting liberal arts grad pennies to use the Adobe suite to generate images trained on said concept artist’s work?

    Yes, that liberal arts grad deserves to get paid, but they also deserve some potential for career advancement.

    Now imagine instead if new laws required that generative AI license its inputs in order to sell for profit. Sure, small generative AI would still scrape the Internet to produce art, but it would create a whole new avenue for artists to create and license art. Advanced generative AI may need smaller datasets, and small teams of artists may be able to utilize and license boutique models.


  • I disagree with this reductionist argument. The article essentially states that because AI generation is the “exploration of latent space,” and photography is also fundamentally the “exploration of latent space,” the two are equivalent.

    It disregards the intention of copyright. The point isn’t to protect the sanctity or spiritual core of art. The purpose is to protect the financial viability of art as a career. It is an acknowledgment that capitalism, if unregulated, would destroy art and make it impossible to pursue.

    AI stands to replace artists in a way which digital art and photography never really did. It’s not a medium; it is inference. As such, if copyright was ever good to begin with, it should oppose AI until compromises are made.




  • In my opinion, copyright laws should only apply to the original text, and only for a limited time. If someone wants to make a sequel to the book I just wrote? Go for it; it’s not going to be canon or from the same author. If they want to publish it in Spanish? No, that’s substantially the same.

    Likewise, if I paint a picture of my OC, I should have copyright over that picture (no one else can sell or print it), but not over the characteristics which make up the OC.

    It seems at first that this would lead to a horrible Disney-stealing-intellectual-property situation, but I don’t think so. Instead, everyone would be doing the reverse. Pop culture would be reabsorbed by the masses. Films are, at the end of the day, produced by artists, except now those artists are the essential element, not the IP. A studio is only valuable if it can produce great films, not acquire the best brand. Let’s let the masses take a crack at Superman.



  • I’ve been using LLMs a lot. I use GPT-4 to help edit articles, answer nagging questions I can’t be bothered to research, and other random things, such as cooking advice.

    It’s fair to say, I believe, that all general-purpose LLMs like this are plagiarizing all of the time. Much in the way my friend Patrick doesn’t give me sources for all of his opinions, GPT-4 doesn’t tell me where it got its info on baked corn. The disadvantage is that I can’t trust it any more than I can trust Patrick. When it’s important, I ALWAYS double check. The advantage is I don’t have to take the time to compare, contrast, and discover sources. It’s a trade-off.

    From my perspective, the theoretical advantage of Bing’s or Google’s implementation is ONLY that they provide you with sources. I actually use Bing’s implementation of GPT when I want a quick, real-world reference for an answer.

    Google will be making a big mistake by sidelining its sources when open-source LLMs are already overtaking Google Bard in quality. Why get questionable advice from Google when I can get slightly less questionable advice from GPT, my phone assistant, or actual inline citations from Bing?