• 1 Post
  • 59 Comments
Joined 2 months ago
Cake day: September 13th, 2024

  • So they're moving away from general models and specializing them for tasks, as certain kinds of AI agents.

    Queries will probably be routed to agents defined within a narrow domain, and those agents will likely be much less prone to error.

    I think it's a good next step. Expecting general intelligence to arise out of LLMs with ever-larger training runs is obviously a highly criticized idea on Lemmy, and this move supports the apparent limitations of that approach.

    If you think about it, assigning special "thinking" steps to AI models makes less sense for a general model, and much more sense for well-defined scopes.

    We will probably curate these scopes very thoroughly over time, and people will start trusting the accuracy of their answers thanks to more tailored design approaches.

    When we have many effective, tailored agents for specialized tasks, we may be able to chain those agents together into compound agents that can reliably carry out complex tasks, like we expected from AI in the first place.
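    The chaining idea above can be sketched in a few lines. This is purely illustrative (the agent names and label logic are hypothetical stand-ins, not any real framework): each "agent" is a narrow, well-scoped function, and a compound agent is just their composition.

    ```python
    from typing import Callable

    # An "agent" here is any narrow text-to-text function (hypothetical stand-in
    # for a model specialized to one task).
    Agent = Callable[[str], str]

    def summarizer(text: str) -> str:
        # Stand-in for a model fine-tuned only on summarization.
        return "summary: " + text[:40]

    def classifier(text: str) -> str:
        # Stand-in for a model restricted to a fixed label set.
        return "label: finance" if "market" in text else "label: other"

    def chain(*agents: Agent) -> Agent:
        # Compose narrow agents into one compound agent: each step's
        # output becomes the next step's input.
        def compound(text: str) -> str:
            for agent in agents:
                text = agent(text)
            return text
        return compound

    pipeline = chain(summarizer, classifier)
    print(pipeline("market report for Q3 shows steady growth"))  # → label: finance
    ```

    The point of the sketch: because each stage has a narrow, well-defined scope, each is easier to validate on its own, and the compound behavior stays predictable.
    
    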







  • tee9000@lemmy.world to Technology@lemmy.world · *Permanently Deleted*
    20 days ago

    I'm seeing a lot of reasons why you, or I, would not want such a service to exist.

    What a person should or should not be doing is their business. Companies that can target vulnerable people would ideally be regulated.

    I'd much rather first go after payday advance companies with exorbitant fees, or casinos, or high-interest loans that individuals can't be expected to repay.






  • tee9000@lemmy.world to Technology@lemmy.world · *Permanently Deleted*
    26 days ago

    Aw. A cute little comment that fits nicely with the article headline.

    Thanks for saving me time by giving your summarized dissection of the article and the implications it has for Apple's product.

    Oh, and by the way, all tech products stop production at some point, because, you know, Apple isn't a dumbass that pumps out products without regard for demand while it's creating a successor model of the product.

    Stop scrolling. Start thinking.






  • tee9000@lemmy.world to Technology@lemmy.world · *Permanently Deleted*
    1 month ago

    In fairness, you can't just say it's not a zero-sum game when the article is supported by a quote from one individual saying they were glad it told them in some cases. We don't know how effective it is.

    This is normalizing very intimate (and automated) surveillance. Kids all have smartphones and can google anything they want when they aren't using school hardware. If kids have any serious premeditation to do something bad, then they will do it on their smartphones.

    The only way this would be effective is to catch students before they are aware they are being watched (poof, that's gone tomorrow), or if the student is so dirt poor that they don't have a smartphone or craptop.

    And what else will the student data be used for? Could it be sold? It would certainly have value. Good intentions are right now… data is FOREVER.


  • Counterpoint: once this isn't an obscure thing and kids are aware of it, they will purposefully use trigger words, because they are kids.

    If kids/people are having mental health issues, what's the best way to handle that? By scanning for the symptom and telling them to stop being mentally troubled? I really doubt kids are getting the care they need based on these flags. It seems like a band-aid for the cultural/systemic issues that cause the mental illness/harm.