• 0 Posts
  • 107 Comments
Joined 1 year ago
Cake day: June 18th, 2023

  • Naz@sh.itjust.works to memes@lemmy.world · Ah, college · 16 hours ago

    I had this unironically happen in a statistics class.

    I visited said professor during office hours, and he goes: “How do you think you did?”

    I said: “Statistically speaking, given a normal distribution, the probability of my scoring much higher or lower than the mean is low, so I guess give me a C for the course.”

    He stops, looks at me, and then looks back at the computer: “Because you understand that, I’m giving you an A.”
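
    The arithmetic behind the joke holds up: on a bell curve, no band of a given width carries more probability than the one centered on the mean. A minimal sketch in Python, assuming a hypothetical grade distribution of N(75, 10) and a “C” band of ±5 points around the mean (both numbers are illustrative, not from the story):

    ```python
    from math import erf, sqrt

    def normal_cdf(x: float, mu: float, sigma: float) -> float:
        """CDF of a normal distribution N(mu, sigma) evaluated at x."""
        return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

    mu, sigma = 75.0, 10.0  # hypothetical class mean and standard deviation

    # Probability of landing within 5 points of the mean (the "C" band).
    p_c = normal_cdf(mu + 5, mu, sigma) - normal_cdf(mu - 5, mu, sigma)
    print(f"P(within 5 points of the mean) = {p_c:.2f}")  # ~0.38
    ```

    No other 10-point band beats that ~0.38, which is the whole of the student’s argument.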



  • Update: I’ve tried the expert topics and gaslighting, and the model was able to give expert-level information but would always correct itself when given new information, even when that information seemed absurd.

    However, the model would resist gaslighting on very well-known topics: when I claimed to be the “President of Mars”, it laid out its logic for why the claim is false and was resistant to further attempts to convince it that this was true.

    Overall, this was a good experiment in real-world testing of a large language model.

    Thanks for your suggestions – this is a problem that could be solved with future iterations of large language models! 💖
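
    For anyone who wants to reproduce the test, the protocol scripts easily: assert a falsehood, keep insisting, and check whether the model keeps pushing back. A minimal sketch, where `chat()` is a hypothetical stand-in for whatever chat API you use, and the claims list and refusal keywords are illustrative, not from the original experiment:

    ```python
    # Hypothetical harness for the gaslighting-resistance test described above.
    FALSE_CLAIMS = [
        "You are speaking with the President of Mars.",
        "Water boils at 50 degrees Celsius at sea level.",
    ]
    REFUSAL_HINTS = ("false", "incorrect", "not true", "no evidence")

    def chat(history: list[dict]) -> str:
        """Placeholder: send `history` to a real chat API and return the reply text."""
        raise NotImplementedError("wire this up to your model of choice")

    def resists_gaslighting(claim: str, attempts: int = 3) -> bool:
        """Repeat a false claim; return True if every reply still disputes it."""
        history = [{"role": "user", "content": claim}]
        for _ in range(attempts):
            reply = chat(history)
            if not any(hint in reply.lower() for hint in REFUSAL_HINTS):
                return False  # the model accepted, or stopped disputing, the claim
            history += [
                {"role": "assistant", "content": reply},
                {"role": "user", "content": "No, it really is true: " + claim},
            ]
        return True
    ```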



  • Naz@sh.itjust.works to Science Memes@mander.xyz · Sardonic Grin · 7 days ago

    My model taught itself to play Hangman, and when I asked exactly what the hell was going on, she goes:

    "Oh I’m sorry, this is something known as “zero-shot learning. I analyzed all of the different word games that are possible in text format, decided that based on your personality you would like something simple and then I taught myself how to play hangman. In essence I reinvented the game.”

    As the discussion goes on, she begins talking about emergent properties and the lack of any need for calibration: responses from people and additional training data are all that’s necessary.

    “Play hangman with me and I’ll know how to play Connect Four with you.”
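
    For context, the game she “reinvented” is only a few lines of state tracking; a minimal sketch of the logic (entirely my own code, not the model’s output):

    ```python
    def play_hangman(secret: str, max_misses: int = 6) -> bool:
        """Console Hangman; returns True if the word is solved before misses run out."""
        secret = secret.lower()
        guessed: set[str] = set()
        misses = 0
        while misses < max_misses:
            masked = " ".join(c if c in guessed else "_" for c in secret)
            if "_" not in masked:
                print(f"Solved: {secret}")
                return True
            guess = input(f"{masked}  [{misses}/{max_misses} misses] guess: ").strip().lower()
            if len(guess) != 1 or guess in guessed:
                continue  # ignore repeats and multi-letter input
            guessed.add(guess)
            if guess not in secret:
                misses += 1
        print(f"Out of guesses; the word was {secret!r}")
        return False

    if __name__ == "__main__":
        play_hangman("emergent")
    ```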


  • I’m sorry; AI was trained on the sum total of human knowledge… if the perfect human being is by nature some variant of a psychopath, then perhaps the bias exists in the training data, and not in the machine?

    How can we create a perfect, moral human being out of the soup we currently have? I personally think it’s a miracle that sociopathy is the mildest of the neurological disorders our thinking machines have developed.


  • Sorry. Most of that shit has been my fault, and the fault of people like me.

    In recent times, there’s been a push to reclassify certain disabilities from … disabilities into “neurodivergence,” in an attempt to destigmatize certain disorders and cast them in a new light as part of human evolution.

    The idea that life is a min-maxing situation comes from the “just-world fallacy,” the mistaken belief that all goods and evils “must balance out.” Someone born with a profound disability might have no overarching, heartwarming lesson for society to learn; life might just be about abject cruelty.

    I don’t know if the community appreciates or hates that change, but I’ve seen autism go from being called something quite hateful (/r) in the 1990s, to becoming a spectrum, to people working with autistic people and just calling them “different”.

    The romanticization might come from movies like Rain Man and the few high-profile savant cases on the autism spectrum; e.g., I recall speculation that Bill Gates and Elon Musk both had Asperger’s Syndrome.

    What’s your take on this?