I would love if my national broadcaster was on the Fediverse somehow :)
Poland is healing <3
At its heart, the DMA requires more interoperability than ever, making it harder for gatekeepers to favor their own services or block other businesses from reaching consumers on their platforms.
Wow Google/Apple/etc. will actually have to compete instead of just having a de facto monopoly? But how could they ever earn money under such conditions /s
I’d say the documentation problem is even worse in closed-source internal stuff at companies. You often only have internal users (i.e., your colleagues), so instead of properly documenting, people often just pass things along by word of mouth (tribal knowledge, basically).
Also, you rarely get any credit for writing good documentation at work, because the time you spend on it makes the coding itself slower. And some people deliberately don’t write docs because they think it provides better job security.
This is one benefit of walled gardens like Reddit: they are legally obligated to delete the information, especially in places like the EU.
In theory yes, but anyone can also scrape Reddit for all its posts and comments (and someone likely is), and nobody is making them delete that data. And then there’s stuff like the Internet Archive complicating things further.
I really like that perspective, thank you for easing my fear.
The kind of frightening thing is that anyone could start an instance on the Fediverse, collect all the incoming posts and comments (as instances usually do), and then use them to do the same thing. And I’m not sure there’s currently anything (legally or otherwise) stopping them.
But at least we have the option to defederate such an instance. If we can find out which ones do it…
Remember the whole “if you aren’t paying for the product, you are the product”?
It wasn’t enough to turn you into a product. Now they also want to turn you into a resource. Farming your comments and posts to feed to an AI model.
What an economy we’ve built.
What alternative do you suggest? Please tell me it’s not crypto.
You might also want to check if there’s anything set to run periodically in Task Scheduler; search for it in the Windows Start menu.
Fuck, inject everything Cory Doctorow says into my veins please
Choose a server that is oriented to your geographical location.
You’d be better off moving to a smaller instance than lemmy.ml. Remember, you can still interact with all other instances.
In the long, long run we will likely have to do something about it.
Do you have any solutions in mind? I’ve been trying to think of something but nothing quite feels right.
Maybe have a setting like “Allow federated users to post” that you could disable, so that users from outside the instance can only comment, not post. That would at least steer the discussion somewhat toward what the instance wants to talk about. It still feels very restrictive though.
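To make the idea concrete, here’s a minimal sketch of how such a toggle could be checked when an activity arrives. All of the names here (`InstanceSettings`, `can_create`, the instance domains) are made up for illustration; this isn’t Lemmy’s actual code or API.

```python
# Hypothetical sketch: gate post creation on a per-instance setting.
# Federated users can always comment; whether they can post is configurable.
from dataclasses import dataclass


@dataclass
class InstanceSettings:
    # The proposed "Allow federated users to post" toggle.
    allow_federated_posts: bool = True


@dataclass
class User:
    name: str
    home_instance: str  # domain of the instance the account lives on


def can_create(activity: str, user: User, local_instance: str,
               settings: InstanceSettings) -> bool:
    """Decide whether this user may create the given kind of activity here."""
    is_local = user.home_instance == local_instance
    if activity == "comment":
        return True  # comments are always allowed, local or federated
    if activity == "post":
        return is_local or settings.allow_federated_posts
    return False  # unknown activity types are rejected


settings = InstanceSettings(allow_federated_posts=False)
remote = User("alice", "other.example")
local = User("bob", "my.example")

print(can_create("post", remote, "my.example", settings))     # False
print(can_create("comment", remote, "my.example", settings))  # True
print(can_create("post", local, "my.example", settings))      # True
```

The point of structuring it this way is that the restriction lives entirely on the receiving instance, so remote instances don’t need to know or cooperate.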
Hey, sorry to bother you again but I have a follow-up question. Maybe feel free to ping someone else that might be able to answer.
How do you deal with users just signing up somewhere else and then interacting on Beehaw anyway? I mean, isn’t that effectively just circumventing the application process?
Cool. Roughly what percentage of applications do you deny? I’m starting to wonder if I should be stricter with my applications for the sake of “user quality”.
I’ve found many applications simply fail to answer the questions. I guess this should at least be a minimum requirement.
as they didn’t write enough in their registration for us to really evaluate if they were a good fit for this instance.
I’m just curious, what do you consider to be “writing enough”? How strict are you with your applications? I’m trying to learn from beehaw for my own instance.
Am I the only one who feels it’s a bit strange to have such safeguards in an AI model? I know most models are only accessible online, but some models are available to download and run locally, right? So what prevents me from just doing that if I want to get around the safeguards? I guess maybe they’re just doing it so that they can’t somehow be held legally responsible for anything the AI model might say?