

A software developer and Linux nerd, living in Germany. I’m usually a chill dude, but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt; I usually try to be nice and give good advice, though.
I’m into Free Software, self-hosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things as well.
Idk, does AI have anything to offer on the empathy side, except sycophancy and repeating what I just said, and then repeating itself three times before slowly steering towards a reply?
I believe my WebOS version is so old that they’ve shut down the update and data-collection servers for it. I had developer mode running before, and that’s perfectly alright, just a bit annoying to constantly refresh, and somehow my attempts to automate that failed. So one day I rooted it, and now I have full SSH access, a homebrew channel… I would have liked to use that to run an Ambilight, which requires root. Sadly it also requires a newer operating system version, so I still don’t have any LEDs in the background.
Youtube-webos is the single reason I’ve resumed watching YT in the living room. I had stopped when they introduced the second pre-roll ad and a lot of things became unskippable. Now my old TV is rooted, it doesn’t really phone home, it doesn’t show ads, and it’s a breeze. Let’s hope it stays that way, because Google has already been fighting the adblockers for some time now.
I used to run such things on my NAS/server at home (and I still do, though I’m currently changing some things). But in addition to the 4.50€ a month for ~20W of electricity, the machine itself was maybe 600€, so another 5€ a month over 10 years. And then my internet contract is a bit more expensive, because I need an IPv4 address that can do port forwarding… On the flip side, I can just attach a 10TB hard drive and have it available everywhere. And that would be very expensive with a cloud service or hoster.
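Back-of-the-envelope, the maths looks like this (the ~0.31 €/kWh electricity price is an assumption, picked so that a 20 W machine lands near 4.50 €/month):

```python
# Rough self-hosting cost estimate; the 0.31 €/kWh tariff is an assumed
# value that roughly matches 4.50 €/month for a 20 W machine.
POWER_W = 20
PRICE_PER_KWH = 0.31           # assumed household tariff in €
HARDWARE_EUR = 600
LIFETIME_MONTHS = 10 * 12      # amortize the machine over 10 years

electricity = POWER_W / 1000 * 24 * 30 * PRICE_PER_KWH  # ~14.4 kWh/month
amortisation = HARDWARE_EUR / LIFETIME_MONTHS

print(f"electricity:  {electricity:.2f} €/month")   # ≈ 4.46
print(f"amortisation: {amortisation:.2f} €/month")  # 5.00
print(f"total:        {electricity + amortisation:.2f} €/month")
```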
Yes. I also have my own small VPS doing this (Piefed), plus Peertube, email, Nextcloud… for myself and family if they want. And that’s $8 a month. I wonder why it doesn’t scale down drastically with more users. Sure, they generate a lot of requests. But you only need to cache an image or pull in the posts and replies once for 12,000 users, while my server does all of that just for me. (Albeit for Lemmy, which is way smaller than Mastodon.)
Wow. That is a lot of server power. And these things ain’t cheap.
$5000 a month sounds like a lot for forwarding text messages and images. According to the Fediverse Observer they have 12,000 active users (which boils down to $5 per user per year), but still… Is it that much storage, or is it computationally that expensive to federate posts?
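That per-user figure is just the monthly bill annualized and divided by the user count:

```python
# Sanity check of the $5-per-user-per-year figure.
monthly_cost = 5000        # $ per month
active_users = 12_000      # from the Fediverse Observer

per_user_year = monthly_cost * 12 / active_users
print(f"${per_user_year:.2f} per user per year")  # $5.00
```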
I don’t think that’s affected. It sounds more like it’s political propaganda that gets allowed now. I bet this will still be censored and lead to demonetization, just as it does today.
It misrepresents the facts quite a bit. I think microwave links might be able to do a bit more bandwidth than that. And laser can do way more than ChatGPT attributes to it; it can do 1 or 2.5 Gbps as well. The main things about optics are that it comes without electromagnetic interference, you don’t need a Fresnel zone free of obstacles, and you don’t need a license. The other things, like laser being more susceptible to weather, should be about right. (And I don’t know a lot about cost and alignment, so I can’t say whether that’s accurate and substantially more effort for lasers. They both cost some money, and you have to point both at the receiver.)
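To illustrate the Fresnel-zone point: the first Fresnel zone scales with the square root of the wavelength, so a microwave link needs metres of clearance around the beam while a near-infrared laser needs centimetres. A quick sketch (the 10 km link length and the two example carriers are made-up values, just for illustration):

```python
# First Fresnel zone radius at the midpoint of a point-to-point link.
# Link length and carrier frequencies are example values, not from the article.
from math import sqrt

C = 299_792_458  # speed of light, m/s

def fresnel_radius(freq_hz: float, d1_m: float, d2_m: float) -> float:
    """Radius of the first Fresnel zone between distances d1 and d2."""
    wavelength = C / freq_hz
    return sqrt(wavelength * d1_m * d2_m / (d1_m + d2_m))

LINK_M = 10_000  # 10 km link, evaluated at the midpoint
for name, freq in [("5.8 GHz microwave", 5.8e9), ("1550 nm laser", C / 1550e-9)]:
    r = fresnel_radius(freq, LINK_M / 2, LINK_M / 2)
    print(f"{name}: first Fresnel zone ≈ {r:.3f} m")
# 5.8 GHz microwave: ≈ 11.4 m of clearance needed
# 1550 nm laser:     ≈ 0.06 m, i.e. plain line of sight is basically enough
```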
Sure. I think we’re talking about slightly different things here. I didn’t want to copy it, just know how it’s done 😆 But yeah, you’re right. And what you said has another benefit: if they want to protect it by law, we have a process for that: patents. And those require you to publish how it’s done…
Nah, all it takes is one person buying it, disassembling it and looking at the mechanics to see whether there are things like motors and mirrors inside the transmitter to do new things like align it dynamically. And the other things, the physics, the atmosphere, lenses and near-infrared lasers along with signal processing, are well understood. I don’t think it’ll be a big secret once it turns into a real thing… As long as it’s hype only, it might be.
I wonder what they did, though. Because the article omits most of the interesting details and frames it as if optical communication in itself were something new or disruptive… If I read the Wikipedia article on Long-range optical wireless communication, it seems a bunch of companies have already invested three-digit million sums into solving this exact issue…
Yeah, sure. No offense. I mean, we have different humans as well. I’ve got friends who will talk about a subject they’ve read some article about, and they’ll tell me a lot of facts, and I rarely see them make any mistakes or confuse things. And then I’ve got friends who like to talk a lot, and I’d better check where they picked that up.
I think I’m somewhere in the middle. I definitely make mistakes. But sometimes my brain manages to store where I picked something up, and whether that was speculation, opinion or fact, along with the information itself. I’ve had professors who could quote information verbatim and tell you roughly where and in which book to find it.
With AI I’m currently very cautious. I’ve seen lots of confabulated summaries and made-up facts. And if designed to, it’ll write them in a professional tone. I’m not opposed to AI, but not a big fan of some applications either. I just think it’s still very far away from what I’ve seen some humans do.
I think the difference is that humans are sometimes aware of it. A human will likely say “I don’t know what Kanye West did in 2018”, while the AI is very likely to make something up. And in contrast to a human, it will likely phrase that like a Wikipedia article. You can often look a human in the eyes and know whether they’re telling the truth, lying, or uncertain. Not always, and we tell untrue things too, but I think the hallucinations are different in several ways.
I’m not a machine learning expert at all. But I’d say we’re not set on the transformer architecture. Maybe invent a different architecture that isn’t subject to this? Or specifically factor it in. Isn’t the way we currently train LLM base models just to feed in all the text we can get, from Wikipedia and research papers to all the fictional books from Anna’s Archive and weird Reddit and internet talk? I wouldn’t be surprised if they make things up, since we train them on factual information, fiction and creative writing without any distinction… Maybe we should add something to the architecture to make them aware of the factuality of text, and guide this.

Or: I skimmed some papers a year or so ago where they had a look at the activations. Maybe do some more research into which parts of an LLM are concerned with “creativity” or “factuality” and expose that to the user. Or study how hallucinations work internally, and then try to isolate this so it can be handled accordingly?
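In the spirit of those activation papers, here’s a hypothetical minimal sketch of what a “factuality” probe could look like; the model (gpt2), the toy labels and the whole setup are my assumptions, not anything from the papers:

```python
# Toy linear probe: does a hidden-state direction separate "encyclopedic"
# from "creative" text? Model choice and the tiny dataset are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)

def embed(text: str, layer: int = -1):
    # Mean-pool one layer's hidden states as a crude sentence vector.
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"))
    return out.hidden_states[layer].mean(dim=1).squeeze().numpy()

# 1 = factual/encyclopedic register, 0 = creative register.
samples = [
    ("Water boils at 100 degrees Celsius at sea level.", 1),
    ("Berlin is the capital of Germany.", 1),
    ("The dragon sighed and folded its moonlit wings.", 0),
    ("She whispered her secrets to the hungry sea.", 0),
]
X = [embed(text) for text, _ in samples]
y = [label for _, label in samples]

probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict([embed("Mount Everest is 8849 metres high.")]))
```

A real study would need far more data and careful controls, but that’s roughly the mechanic: if a simple probe on the activations can tell the registers apart, that signal could in principle be exposed to the user.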
I get that. We want them to be creative and make up an email for us. But I don’t think there is any fundamental barrier preventing us from guiding LLMs. Can’t we just make the model aware of whether the current task is reciting Wikipedia or creative story-writing? Or whether it’s supposed to focus on the input text or on its background knowledge? Currently we don’t. But I don’t see how that would be theoretically impossible.
I can’t find any backing for the claim in the title, “and they’re here to stay”. I think that’s just made up. The truth is, we’ve found two ways that don’t work: making the models larger and making them “think”. But that doesn’t really rule out anything else. I agree that it’s a huge issue for AI applications, and so far we haven’t been able to tackle it.
Yeah, you’re right. I didn’t want to write a long essay, but I thought about recommending Grok. In my experience it tries to bullshit people a bit more than other services do, but the tone is different. Deep down it has the same bias towards positivity, though; in my opinion that’s just hidden behind a slapped-on facade. Ultimately it’s similar to slapping a prompt onto ChatGPT, except that Musk may also have added it to the fine-tuning step.
I think there are two sides to the coin. The AI is the same either way: it’ll give you somewhere between 50% and 99% correct answers and lie to you the rest of the time, since it’s only an AI. If you make it more appealing to you, you’re more likely to believe the correct things it generates, but also the lies. Whether that’s good or bad really depends on what you’re doing. It’s arguably bad if it phrases misinformation to sound like a Wikipedia article; it might be better to make it sound personal, so that once people anthropomorphize it, they won’t switch off their brains. But this is a fundamental limitation of today’s AI: it can do both fact and fiction, and it’ll blur the lines. Yet in order to use it, you can’t simultaneously hate reading its output.

I also like that we can change the character. I’m just a bit wary of the whole concept, so I try to use it more to spark my creativity and less to answer my questions about facts. I also have some custom prompts in place so it does things the way I like. Most of the time I’ll tell it something like: it’s a professional author and it wants to help me (an amateur) with my texts and ideas. That way it’ll give more opinions rather than try to be factual. And when I use it for coding some tech demos, I’ll use it as-is.
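For what it’s worth, the kind of custom prompt I mean is just a system message along these lines; the exact wording and the model name are only examples, and the snippet assumes the openai Python client (v1+):

```python
# Illustrative "professional author" persona prompt; the wording and the
# model name are examples, not a recommendation.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You are a professional author helping an amateur writer with their "
    "texts and ideas. Offer opinions and concrete suggestions rather "
    "than encyclopedic facts."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Here's my draft opening paragraph: …"},
    ],
)
print(reply.choices[0].message.content)
```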
I’d have to agree: don’t ask ChatGPT why it has changed its tone. It’s almost certain that the answer is made up, and you (and everyone who reads this) will end up stupider than before.
But ChatGPT has always had a tone of speaking. Before this, it sounded very patronizing to me. And it would always counterbalance everything. Since the early days it has told me: you have to look at this side, but also at that side. And it would be critical of my mails and say I can’t be blunt but have to phrase them in a nicer way…
So yeah, the answer is likely known to the scientists/engineers who do the fine-tuning or preference optimization. Companies like OpenAI tune and improve their products all the time. Maybe they found out people don’t like the sometimes patronizing tone, and now they’re going for something like “Her”. Idk.
Ultimately, I don’t think this change accomplishes anything. Now it’ll sound more factual, yet the answers have about the same degree of factuality; they’re just phrased differently. So if you like that better, good. But either way, you’re likely to continue asking it questions, letting it do the thinking, and becoming less of an independent thinker yourself. What it said about critical thinking is correct, but it applies to all AI, regardless of its tone. You’ll also get those negative effects with your preferred tone of speaking.
The question is unclear to me. Do you mean phone apps to access it? Or other server projects with the same idea that are Free Software?