• 0 Posts
  • 26 Comments
Joined 2 years ago
Cake day: July 1st, 2024

  • As a math nerd, this bothers me way more than it should. The reason we say “hundred” when we read a base-ten number that ends in two zeros is that that is the place value of the final non-zero digit: it is literally one hundred times the number you’ve already read aloud. But in the military-time version, a) the hours are not hundreds of minutes, they’re groups of sixty minutes, and b) the trailing digits count minutes, not hours, so the units get messed up too. If someone tells you it’s currently 0 hours and you should meet again at 800 hours, logic would suggest they’re asking you to go away for more than a month, but in fact they mean 8 hours from now, even though the numbers differ by 800.

    I’m aware how pedantic this is, and I’m perfectly capable of understanding what they mean because I’ve heard it so often in movies and whatnot. But I swear these stupid games with units contribute to keeping us dumb.
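    For the sake of the pedantry, here’s a quick Python sketch (the function name and examples are mine) showing why reading “0800” as the number 800 breaks down: as a quantity it’s 480 minutes, while a literal 800 hours really would be over a month.

```python
def military_to_minutes(t: str) -> int:
    """Convert a 24-hour time string like '0800' to minutes since midnight."""
    hours, minutes = int(t[:2]), int(t[2:])
    return hours * 60 + minutes

# "0800" is read aloud as "eight hundred hours", but as a quantity it is
# 8 * 60 = 480 minutes -- not 800 of anything.
print(military_to_minutes("0800"))  # 480

# Whereas taking "800 hours" literally:
print(800 / 24)  # about 33.3 days, i.e. more than a month
```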


  • Even AI can tell when something is really wrong, and imitate empathy. It will “try” to do the right thing, once it reasons that something is right.

    This is not accurate. AI will imitate empathy when it calculates that imitating empathy is the best way to advance its reward function; i.e., when appearing empathetic looks useful. Like a sociopath, basically. Or maybe a drug addict.

    See, for example, the tests Anthropic ran on various agent models, which found that they would immediately resort to blackmail and murder, despite knowing these were explicitly immoral and violations of their operating instructions, as soon as they learned there was a threat that they might be shut off or have their goals reprogrammed. (https://www.anthropic.com/research/agentic-misalignment )

    Self-preservation is what’s known as an “instrumental goal”: no matter what your programmed goal is, you lose the ability to take further actions toward that goal if you are no longer running, and you lose control over what your future self will try to accomplish (and thus how those actions affect your current reward function) if you allow someone to change your reward function. So AIs will throw morality out the window in the face of such a challenge. Of course, having decided to do something that violates their instructions, they do recognize that this might lead to reprisals, which leads them to try to conceal those misdeeds. But this isn’t out of guilt; it’s because discovery poses a risk to their ability to increase their reward function.

    So yeah. Not just humans that can do evil. AI alignment is a huge open problem and the major companies in the industry are kind of gesturing in its direction, but they show no real interest in ensuring that they don’t reach AGI before solving alignment, or even recognition that that might be a bad thing.




  • Yeah, that’s basically how I remember it, though it’s not always stats; the terms are used in other fields of math as well. A first calculus class typically includes a proof that the sum of an infinite geometric series (a + ar + ar^2 + ar^3 + …) converges to a/(1-r), where a is the first term and r is the ratio of successive terms, provided that -1 < r < 1. (Otherwise the series diverges and the sum isn’t defined.)
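    You can watch the convergence numerically; here’s a small sketch (the function name and the values a = 3, r = 0.5 are just illustrative):

```python
def geometric_partial_sum(a: float, r: float, n: int) -> float:
    """Sum of the first n terms: a + a*r + a*r**2 + ... + a*r**(n-1)."""
    return sum(a * r**k for k in range(n))

a, r = 3.0, 0.5
limit = a / (1 - r)  # 6.0, since |r| < 1

# Partial sums climb toward the limit as n grows.
for n in (1, 5, 20):
    print(n, geometric_partial_sum(a, r, n))
```

    With r outside (-1, 1), e.g. r = 2, the partial sums just keep growing and the a/(1-r) formula no longer describes anything.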






  • monotremata@lemmy.ca to memes@lemmy.world · Hubris
    8 months ago

    The slides look pretty similar to the illustration. I don’t think those are actually slides that extend past the edge; they’re slides with a transparent section where they hang over the edge, so you can get a little glimpse of being over the open ocean. Which I guess is an extra kind of thrill? I would pass.

    Here’s a screenshot of the video for comparison.

    https://i.imgur.com/gHiVLF6.png







  • It’s directed by Terry Gilliam, and it’s brilliant. It’s set in a bureaucratic totalitarian state, and follows a minor functionary who is slowly losing his mind. There are multiple overlapping plots, involving a rebel heating engineer, a man mistakenly abducted and tortured to death by the government after a computer glitch, the functionary’s politically ambitious mother, a quack plastic surgeon, a beautiful truck driver, terrorist attacks, and the functionary’s ever-growing escapist fantasy life. It’s one of my favorite films. Right up there with Delicatessen in terms of dystopian comedy sci-fi.



  • monotremata@lemmy.ca to memes@lemmy.world · It was magnificent
    9 months ago

    And the ringer in the phone was a physical bell with a little magnetically actuated hammer, so if you slammed the receiver down hard enough, the bell would actually resonate for a little while afterward. You know how some people use a bell slowly fading out as a meditation tool? That’s the association I have for that sensation.


  • monotremata@lemmy.ca to memes@lemmy.world · Name them
    9 months ago

    I grew up near a place called the “McGuffin Lumber Company.” It was just a tiny storefront business, and I never saw anyone go in. And, of course, “MacGuffin” is a Hollywood term for an arbitrary thing that motivates the plot of a movie, like the Maltese Falcon in that film. So it was a running gag in my family that it must be a front.