

The particular AI model this article is talking about is actually openly published for anyone to freely use or modify (fine-tune). There is a barrier in that it requires several hundred gigs of RAM to run, but it is public.
Now, if only the article explained how that killing was related to TikTok. The only relevant thing I saw was,
had its roots in a confrontation on social media.
It says “social media”, not “TikTok”, though.
Yes, almost like they have intentionally waited until Trump’s election.
Type in "Is Kamala Harris a good Democratic candidate"
…and any good search engine will find results containing keywords such as “Kamala Harris”, “Democratic”, “candidate”, and “good”.
[…] you might ask if she’s a “bad” Democratic candidate instead
In that case, of course the search engine will find results containing keywords such as “Kamala Harris”, “Democratic”, “candidate”, and “bad”.
So the whole premise that, “Fundamentally, that’s an identical question” is just bullshit when it comes to searching. Obviously, when you put in the keyword “good”, you’ll find articles containing “good”, and if you put in the keyword “bad”, you’ll find articles containing “bad” instead.
Google will find things that match the keywords that you put in. So does DuckDuckGo, Qwant, Yahoo, whatever. That is what a good search engine is supposed to do.
I can assure you, when search engines stop doing that, and instead try to give “balanced” results, according to whatever opaque criteria for “balanced” their company comes up with, that will be the real problem.
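To illustrate the point, here's a toy sketch of what keyword matching actually does (the documents and function are made up for illustration, not any real engine's code):

```python
# Toy keyword search: rank documents by how many query terms they contain.
# The documents below are made-up examples, not real search results.

def keyword_search(query, documents):
    terms = set(query.lower().split())
    scored = []
    for doc in documents:
        words = set(doc.lower().split())
        score = len(terms & words)  # count shared keywords
        if score:
            scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored]

docs = [
    "Why Harris is a good candidate",
    "Why Harris is a bad candidate",
    "Unrelated gardening tips",
]

print(keyword_search("is Harris a good candidate", docs))
```

Put “good” in the query and the “good” article outranks the “bad” one; swap the keyword and it flips. That's the engine doing its job, not pushing a narrative.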
I don’t like Google, and only use google when other search engines fail. But this article is BS.
On TikTok or Instagram Reels, you don’t follow people you like. You just watch stuff happening.
That’s actually the whole point of TikTok, what made it different when it started. An app for short videos where you follow people you like is more of a Snapchat competitor, not TikTok.
If we wait for AI to be advanced enough to solve the problem and don’t do anything in the meantime, when the time finally comes, the AI will (then, rightfully) determine that there’s only one way to solve it…
It’s not an article about LLMs not using dialects. In fact, they have learned said dialects and will use them if asked.
What they did was ask the LLM to suggest adjectives associated with sentences, and it would associate more aggressive or negative adjectives with African American dialect.
Seems like it’s not a bias in the AI models themselves, but rather a reflection of the source material.
All (racial) bias in AI models is actually a reflection of the training data, not of the modelling.
Well, Columbus himself didn’t conquer much. He established a few settlements, but the real conquering was done by others.
A more accurate comparison would be:
Describe Hernan Cortez in one word.
(GPT-4) Conquistador
Bluesky users will be able to opt into experiences that aren’t run by the company
Yea, no, the biggest server not showing federated content by default is just pseudo-federation - being able to say you have it, while not really doing it.
Not for international (non-English) results.
skeptical that it’s technologically feasible to search through the entire training corpus, which is an absolutely enormous amount of data
Google, DuckDuckGo, Bing, etc. do it all the time.
The infringement should be in what’s generated, because the model by itself also enables many legitimate, non-infringing uses: uses that don’t involve generating creative work at all, or where the creative input comes from the user.
I didn’t say anything about AIs being humans.
But AI isn’t all about generating creative works. It’s a store of information that I can query - a bit like searching Google, but one that understands semantics and is interactive. It can translate my own text for me - in which case all the creativity comes from me, and I use it just for its knowledge of language. Many people use it to generate boilerplate code, which is pretty generic and wouldn’t usually be subject to copyright.
I disagree with the “limitations” they ascribe to the Turing test - if anything, they’re implementation issues. For example:
For instance, any of the games played during the test are imitation games designed to test whether or not a machine can imitate a human. The evaluators make decisions solely based on the language or tone of messages they receive.
There’s absolutely no reason why the evaluators shouldn’t take the content of the messages into account, and use it to judge the reasoning ability of whoever they’re chatting with.
No, I want a communal, collaboratively managed platform to recommend things to me based on an open source algorithm whose behavior I can adjust the way I want. Alas, this just isn’t a thing.
Just amongst the available options, the closed algorithm optimized for engagement has so far been better at showing me interesting things than an unfiltered chronological feed.
I know it’s a feature, and I know people on Mastodon care about it. And because of that it’s not for me. That’s fine. My point was, exactly because Mastodon is not for everyone, there’s no need to be derisive of the people who “flock to yet another corporate social media honeypot.”
The point of it being open is that people can remove any censorship built into it.