A software developer and Linux nerd, living in Germany. I’m usually a chill dude, but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt; I usually try to be nice and give good advice, though.

I’m into Free Software, self-hosting, microcontrollers and electronics, freedom, privacy, and the usual stuff. And a few select other random things as well.

  • 0 Posts
  • 50 Comments
Joined 4 years ago
Cake day: August 21st, 2021


  • I believe my webOS version is so old that they’ve shut down the update and data-collection servers for it. I had developer mode running before, and that’s perfectly alright, just a bit annoying to refresh constantly, and somehow my attempts to automate the refresh failed. So one day I rooted it, and now I have full SSH access, a homebrew channel… I would have liked to use that to run an Ambilight, which also requires root. Sadly, that needs a newer operating system version, so I still don’t have any LEDs behind the TV.


  • Youtube-webos is the single reason I’ve resumed watching YT in the living room. I had stopped when they introduced the second pre-roll ad and a lot of things became unskippable. Now my old TV is rooted, it doesn’t really phone home, it doesn’t show ads, and it’s a breeze. Let’s hope it stays that way, because Google has already been fighting the ad blockers for some time now.


  • I used to run such things on my NAS/server at home (and I still do, though I’m currently changing some things). But in addition to the 4.50€ a month for ~20 W of electricity, the machine cost maybe 600€, so another 5€ a month over 10 years. And then my internet contract is a bit more expensive because I need an IPv4 address that can do port forwarding… On the flip side, I can just attach a 10 TB hard drive and have it available everywhere, and that’d be very expensive with a cloud service or hoster.
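
    To make the math explicit, here’s a quick sketch. The 0.31 €/kWh electricity price is my assumption (a typical German rate) picked so the result lands near the 4.50€ figure; adjust for your own contract.

        # Rough monthly cost of a ~20 W home server, plus hardware amortization.
        # The 0.31 €/kWh price is an assumed typical German rate, not a quote.
        power_kw = 0.020           # ~20 W continuous draw
        hours_per_month = 24 * 30  # ~720 h
        price_per_kwh = 0.31       # assumed electricity price in €/kWh

        electricity = power_kw * hours_per_month * price_per_kwh  # ≈ 4.46 €
        amortization = 600 / (10 * 12)  # 600€ machine over 10 years = 5 €/month

        print(f"electricity: {electricity:.2f} €/month")
        print(f"hardware:    {amortization:.2f} €/month")
        print(f"total:       {electricity + amortization:.2f} €/month")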


  • Yes. I also have my own small VPS doing this (PieFed), plus PeerTube, email, Nextcloud… for myself and family if they want it. And that’s $8 a month. I wonder why it doesn’t scale down drastically with more users. Sure, they generate a lot of requests. But you only need to cache an image or pull in the posts and replies once for 12,000 users, while my server does all that just for me (albeit for Lemmy, which is way smaller than Mastodon).
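
    Back-of-the-envelope on that amortization (the big instance’s monthly cost is purely an illustrative guess, not real hosting data):

        # Fixed work like fetching a federated post or caching an image
        # happens once per instance, not once per user, so per-user cost
        # should drop sharply. All numbers here are illustrative assumptions.
        my_vps = 8.0          # $/month, serving 1 person
        big_instance = 400.0  # assumed $/month for a 12,000-user instance

        print(f"my server:    {my_vps / 1:.2f} $/user/month")          # 8.00
        print(f"big instance: {big_instance / 12_000:.4f} $/user")     # ≈ 0.0333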


  • Yeah, sure. No offense. I mean, we have different humans as well. I’ve got friends who will talk about a subject after they’ve read some article about it, and they’ll tell me a lot of facts; I rarely see them make any mistakes or confuse things. And then I’ve got friends who just like to talk a lot, and I’d better check where they picked things up.
    I think I’m somewhere in the middle. I definitely make mistakes. But sometimes my brain manages to store where I picked something up, and whether that was speculation, opinion, or fact, along with the information itself. I’ve had professors who could quote information verbatim and tell you roughly where and in which book to find it.

    With AI, I’m currently very cautious. I’ve seen lots of confabulated summaries and made-up facts, and if designed to, it’ll present them in a professional tone. I’m not opposed to AI, but I’m not a big fan of some applications either. I just think it’s still very far away from what I’ve seen some humans do.


  • I think the difference is that humans are sometimes aware of it. A human will likely say, “I don’t know what Kanye West did in 2018,” while the AI is very likely to make something up. And in contrast to a human, the made-up answer will likely be phrased like a Wikipedia article. You can often look a human in the eyes and know whether they’re telling the truth, lying, or are uncertain. Not always, and we also say untrue things, but I think the hallucinations are different in several ways.


  • I’m not a machine learning expert at all. But I’d say we’re not set on the transformer architecture; maybe someone will invent a different architecture that isn’t subject to this, or maybe we can specifically factor it in. Isn’t the way we currently train LLM base models just to feed in all the text we can get, from Wikipedia and research papers to all the fictional books on Anna’s Archive and weird Reddit and internet talk? I wouldn’t be surprised if they make things up, since we train them on factual information, fiction, and creative writing without any distinction… Maybe we should add something to the architecture to make it aware of the factuality of text, and guide it with that. Or: I skimmed some papers a year or so ago where they had a look at the activations. Maybe do more research into which parts of an LLM are concerned with “creativity” versus “factuality” and expose that to the user. Or study how hallucinations work internally, then try to isolate them so they can be handled accordingly.
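
    For what that could look like in practice, here’s a toy sketch of the linear-probe idea. The model choice (gpt2), the layer, and the two-sentence “dataset” are all my illustrative assumptions, not something taken from those papers.

        # Toy sketch: train a linear probe on a small LLM's hidden states
        # to separate factual from fictional text. Needs torch, transformers,
        # and scikit-learn installed.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer
        from sklearn.linear_model import LogisticRegression

        tok = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
        model.eval()

        def hidden_state(text, layer=6):
            """Mean-pool one layer's hidden states for a piece of text."""
            inputs = tok(text, return_tensors="pt")
            with torch.no_grad():
                out = model(**inputs)
            return out.hidden_states[layer].mean(dim=1).squeeze(0)

        # Tiny illustrative dataset; a real probe would need far more data.
        factual = ["Berlin is the capital of Germany.",
                   "Water boils at 100 degrees Celsius at sea level."]
        fiction = ["The dragon circled the glass tower at dawn.",
                   "Her spaceship folded itself into a teacup."]

        X = torch.stack([hidden_state(t) for t in factual + fiction]).numpy()
        y = [1] * len(factual) + [0] * len(fiction)
        probe = LogisticRegression(max_iter=1000).fit(X, y)

        # Probability that a new sentence lands on the "factual" side.
        test = hidden_state("Paris is the capital of France.").numpy()
        print(probe.predict_proba([test]))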


  • Yeah, you’re right. I didn’t want to write a long essay, but I thought about recommending Grok. In my experience it tries to bullshit people a bit more than other services do, but the tone is different. Deep down, I found it has the same bias towards positivity, though; in my opinion, that’s just behind a slapped-on facade. Ultimately it’s similar to slapping a prompt onto ChatGPT, except that Musk may also have added it to the fine-tuning step.

    I think there are two sides to the coin. The AI is the same either way: it’ll give you something like 50% to 99% correct answers and lie to you the rest of the time, since it’s only an AI. If you make it more appealing to you, you’re more likely to believe both the correct things it generates and the lies. Whether that’s good or bad depends on what you’re doing. It’s arguably bad if it phrases misinformation to sound like a Wikipedia article. It might be better to make it sound personal, so that once people anthropomorphize it, they won’t switch off their brains. But this is a fundamental limitation of today’s AI: it can do both fact and fiction, and it’ll blur the lines. Yet in order to use it, you can’t simultaneously hate reading its output.

    I also like that we can change the character; I’m just a bit wary of the whole concept. So I try to use it more to spark my creativity and less to answer my questions about facts. I also have some custom prompts in place so it behaves the way I like (see the sketch below). Most of the time I’ll tell it something like: it’s a professional author and it wants to help me, an amateur, with my texts and ideas. That way it’ll give more opinions rather than try to be factual. And when I use it for coding some tech demos, I’ll use it as is.
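
    For example, a minimal sketch of such a slapped-on prompt against the OpenAI API (the model name and the persona wording are just placeholders for whatever you use):

        # Minimal sketch: steering tone with a system prompt, the "slapped-on
        # facade" mentioned above. Model name and persona text are placeholders.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "You are a professional author helping an amateur "
                            "improve their texts. Give opinions and suggestions "
                            "rather than presenting claims as fact."},
                {"role": "user", "content": "Here's my draft intro: ..."},
            ],
        )
        print(response.choices[0].message.content)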


  • I’d have to agree: don’t ask ChatGPT why it has changed its tone. It’s almost certain this is a made-up answer, and you (and everyone who reads it) will end up stupider than before.

    But ChatGPT has always had a tone of speaking. Before this change, it sounded very patronizing to me, and it’d always counterbalance everything. Since the early days it has told me things like: you have to look at this side, but also at that side. And it’d be critical of my emails and say I can’t be blunt but have to phrase them in a nicer way…

    So yeah, the answer is likely known to the scientists/engineers who do the fine-tuning or preference optimization. Companies like OpenAI tune and improve their products all the time. Maybe they found out people don’t like the sometimes patronizing tone, and now they’re going for something like “Her”. Idk.

    Ultimately, I don’t think this change accomplishes anything. Now it’ll sound more factual, yet the answers have about the same degree of factuality; they’re just phrased differently. So if you like that better, good. But either way, you’re likely to keep asking it questions, let it do the thinking, and become less of an independent thinker yourself. What it said about critical thinking is correct, but it applies to all AI, regardless of its tone. You’ll get those negative effects with your preferred tone of speaking, too.