Cake day: July 2nd, 2023

  • Hmm, I think summarization is a bad example. I’ve read quite a few AI summaries that miss the point, simplify something to the point of being wrong, or where the AI added things or paraphrased and made things at least ambiguous. Even with state-of-the-art tech. Especially when the original texts were condensed or written by professionals, like scientific papers or good news articles…

    What I think works better are tasks like translating text. That works really well. Sometimes things like rewording text. Or the style-transfer the image generators can do. That’s impressive. Restoring old photos, coloring them or editing something in/out. I also like the creativity they provide me with. They can come up with ideas, flesh out my ideas.

    I think AI is a useful tool for tasks like that, but not so much for summarization or handling factual information. I don’t see a reason why further research couldn’t improve on that… But at the current state it’s just the wrong choice of tool.

    And sure, it doesn’t help that people hype AI and throw it at everything.


  • h3ndrik@feddit.de to Europe@feddit.de · “Germany Is Seizing Jews’ Money Again” · edited 1 year ago

    Well, the money wasn’t “seized”, and having Jewish members doesn’t necessarily have anything to do with the business decision. It could, but we don’t know. It could also be the case that this protest group filed the wrong paperwork / chose the wrong legal status for their organization and simply breached the contract. Also, a single bank (despite being publicly owned) isn’t Germany.

    So the words “Germany” and “Seizing” are wrong. “Jews” is speculation. I’m fine with the words “Is”, “Money” and “Again”.





  • It’s been an underdeveloped topic for some time. espeak-ng is available on most distros and has some integrations that somewhat tie it into the desktop. There are more modern solutions that sound way better, for example Coqui’s xtts2, or maybe Piper, which is part of Home Assistant nowadays. If your language is English, you have quite a few more options to choose from. But it’s a mixed bag whether they sound nice, are easy to install (that also depends on which Linux distro you use and whether it’s packaged) and whether they tie into the rest of the system. I’m not an expert on this, but I’d also like to have TTS and STT available on my Linux desktop without putting too much effort into it.
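    For a quick test, espeak-ng can be driven straight from a terminal. A minimal sketch (package name, default voice and audio backend vary by distro):

    ```shell
    # Speak a sentence aloud with espeak-ng; -v picks the voice,
    # -s the speed in words per minute.
    espeak-ng -v en-us -s 150 "Text to speech on the Linux desktop."

    # Or render to a WAV file instead of playing through the sound card:
    espeak-ng -v en-us -s 150 -w sample.wav "Text to speech on the Linux desktop."
    ```

    It sounds robotic compared to the neural systems, but it’s tiny, fast and works offline, which is why so many desktop integrations still build on it.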


  • I think you’ll have to learn a bit about security. There is no single article; entire books have been written about that… And it really depends on the type of service, the frameworks used and the intended deployment.

    I’d have a look at similar software. There are tons of open source projects that handle sensitive information, from file storage like Nextcloud, to contact sync, to ticketing and payment systems.

    Edit: I’d leave Docker as an afterthought, since some people recommend that. It’s deployment, not development. And it’s not a means of stopping user data from getting leaked or of stopping login brute-forcing.


  • That is the correct answer. You don’t read a book while doing the chores. I also don’t whip out a Terry Pratchett novel while commuting when I know I have to change trains in 10 minutes. A podcast will do and you can keep listening while waiting on the platform. In the car I often listen to music instead, but sometimes I get bored and I’m more in the mood for an interesting podcast. Especially if I’m stuck in the car for a bit longer.

    When deliberately doing one thing only, I like to read. I can read the paragraphs as fast or as slow as I like or just skim them.

    The “secret” is: you have to find the podcasts you like. Some are just chat and drivel, some are more condensed. You might also like audiobooks with stories instead of factual information. I don’t think you can make an absolute statement. Well, unless your brain isn’t wired for audio content. I get that nothing would appeal to you if that were the case.

    Nowadays everyone and their grandma has a podcast. Quality varies greatly and most of them are more talk than substance. It’s not easy to find the good ones in all the noise. But they exist.






  • Wow. That’s a really bad article. And even though the author managed to ramble on for quite a few pages, they somehow completely failed to address the interesting and well-discussed arguments.

    [Edit: I disagree -strongly- with the article]

    We’ve discussed this in June 2022, after the Google engineer Blake Lemoine claimed his company’s artificial intelligence chatbot LaMDA was a self-aware person. We’ve discussed both intelligence and consciousness.

    And my -personal- impression is: if you use ChatGPT for the first time, it blows you away. It was a ‘living in the future’ moment for me, and I see how you’d write an excited article about it. But once you’ve used it for a few days, you’ll see every 6th grade teacher can distinguish whether homework assignments were done by a sentient being or an LLM. And ChatGPT isn’t really useful for too many tasks. Drafting things, coming up with creative ideas or giving something the final touch, yes. But it’s definitely limited and not something ‘general’. I’d say it does some of my tasks so badly, it’s going to be years before we can talk about ‘general’ intelligence.


  • Thank you for the balanced view. I hope some day we ‘free’ people will stick together and replace that fear with trust in each other (and good moderation tools to ensure it). I’d love to hear those exact ‘fears’ and see whether they’re warranted, or artificial and just an excuse for protectionism. Because I myself am afraid that some of the free ‘alternative’ communities aren’t big and stable enough to bear additional stress and being split up into several factions. It feels the same as the latest split and defederation happening on Lemmy, with Beehaw separating from instances like lemmy.world. That caused quite some confusion for new users and trouble for everyone. I don’t think this is healthy for the platform, and splitting up any federated platform unnecessarily will inevitably harm it as a whole.

    That being said, TILvids must have made some excellent choices, too. They’ve attracted quite an amount of high-quality content and seem to be generally thriving. I have to give them that… :-)


  • I personally like PeerTube but strongly dislike TILvids. They use PeerTube but disable the federation bit. If it’s the case that you can follow folks on other instances from there, it’s even more unfair, because they block me from subscribing to content on their platform. Well, it isn’t worse than some other video platforms, but it’s certainly only taking, not sharing like we’re used to on the fediverse. I’d use another instance and I recommend everyone else do so, too. (…As long as there is an alternative. Feel free to use TILvids, it’s certainly better than Odysee or YouTube!)


  • Thank you for explaining. Yes, now that I have skimmed through the paper I’m kind of disappointed in their work. It’s no surprise that quality will degrade if you design a feedback loop with low-quality data. And does this even mean anything for a distinction between human and synthetic data? Isn’t it obvious a model will deteriorate if you feed it progressively lower-quality input, regardless of where you got it from? I’m pretty sure this is the mechanism behind that.

    A better question to ask would be: is there some point where synthetic output gets good enough to train something with it? And how far away is that point? Or can we rule that out because of some properties we can’t get around? I’m not sure if learning from one’s own output is even possible like this. I as a human certainly can’t teach myself; I would need input like books or curated assignments/examples prepared by other people. There are intrinsic barriers when teaching oneself. However, I can certainly practice stuff. But that’s a different mechanism, and difficult to compare to the AI stuff.

    I’m glad I can continue to play with the language models, have them tuned to follow instructions (with the help of GPT-4 data), etc.