

the issue here is
The issue is Mozilla’s McKinsey CEO has decided to break the promise not to sell personal data.
If Firefox disappears? Mozilla isn’t Firefox; it’s the organization staffed with ad-tech and McKinsey ghouls and paid by Google to kill Firefox.
The equally hilarious thing is that currently they have the “never will” promise in the same codebase as the “definitely will” gated by a “TOU” flag, showing intent to violate the promise.
but if we look at the countries on this planet that are the most successful in terms of economics, equality, personal freedom, human rights, etc. then we find countries that made it work through regulation and strong government institutions
Yeah, that’s socialism. The best societies were all socialist to some degree; this includes western Europe and the USA at its mid-century peak. These societies all had aggressive, borderline confiscatory progressive taxation, large-scale government intervention in the economy (in the US, especially aggressive anti-trust), a generous social welfare state, and a large and professionalized civil service.
They also had large and well-organized labor unions capable of wielding power on behalf of their members and disrupting plans of the elites.
Remove those things and you quickly slide into a dystopian fascist nightmare state as the US and parts of Europe like the UK are discovering.
Every time there’s an AI hype cycle, the charlatans start accusing the naysayers of moving goalposts. Heck, that exact same thing was happening constantly during the Watson hype. Remember that? Or, before this one, the AlphaGo hype. Remember that?
I was editing my comment down to the core argument when you responded. But fundamentally you can’t make a machine think without understanding thought. While I believe it is easy to test that Watson or ChatGPT are not thinking, because you can prove it through counterexample, the reality is that charlatans can always “but actually” those counterexamples aside by saying “it’s a different kind of thought.”
What we do know, because this is at least the 6th time this has happened, is that the wow factor of the demo will wear off, most promised use cases won’t materialize, everyone will realize it’s still just an expensive stochastic parrot, and, well, see you again for the next hype cycle a decade from now.
Do you think, when these journalists keep expressing “confusion” about why the public loves Luigi, that they’re just pretending not to understand? Or are they so fucking cooked that they can’t see things from the perspective of the class they’re actually in?
just because any specific chip in your calculator is incapable of math doesn’t mean your calculator as a system is
It’s possible to point out the exact silicon in the calculator that does the calculations, and also exactly how it does it. The fact that you don’t understand it doesn’t mean that nobody does. The way a calculator calculates is something that is very well understood by the people who designed it.
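That point can be made concrete. Here is a textbook-style ripple-carry adder built from nothing but gate operations (AND, OR, XOR), the same kind of circuit a calculator’s silicon implements; this is an illustrative sketch, not any specific chip’s design:

```python
# A ripple-carry adder built from pure gate logic -- every step of
# "how the calculator adds" is visible and fully understood.

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """One-bit full adder: returns (sum_bit, carry_out)."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out


def add_bits(x: int, y: int, width: int = 8) -> int:
    """Add two integers bit by bit, as a chain of full adders."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result


print(add_bits(42, 27))  # → 69
```

No individual gate "does math," but the system demonstrably adds, and we can trace exactly how.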
By the way, this brings us to the history of AI, which is a history of 1) misunderstanding thought and 2) charlatans passing off impressive demos as something they’re not. When George Boole invented Boolean mathematics he thought he was building a mathematical model of human thought, because he assumed that thought == logic, and that if he could represent logic such that he could do math on it, he could encode and manipulate thought mathematically.
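Boole’s move can be shown in miniature: encode true/false as 1/0 so that logic literally becomes arithmetic (a standard illustration, with AND as multiplication and NOT as subtraction from one):

```python
# Boolean logic as arithmetic over {0, 1}, in the spirit of Boole.

def AND(x, y): return x * y
def NOT(x):    return 1 - x
def OR(x, y):  return x + y - x * y

# De Morgan's law, NOT(x AND y) == NOT(x) OR NOT(y), checked by arithmetic:
for x in (0, 1):
    for y in (0, 1):
        assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))
print("De Morgan holds for all inputs")
```

Which is a genuinely great model of logic. Whether it is a model of *thought* is the part Boole got wrong.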
The biggest clue that human brains are not logic machines is probably that we’re bad at logic, but setting that aside when boolean computers were invented people tried to describe them as “electronic brains” and there was an assumption that they’d be thinking for us in no time. Turns out, those “thinking machines” were, in fact, highly mechanical and nobody would look at a univac today and suggest that it was ever capable of thought.
Arithmetic was something that we did with our brains and when we had machines that could do it that led us to think that we had created mechanical brains. It wasn’t true then and it isn’t true now.
Is it possible that someday we’ll make machines that think? Perhaps. But I think we first need to really understand how the human brain works and what thought actually is.
There’s this message pushed by the charlatans that we might create an emergent brain by feeding data into the right statistical training algorithm. They give mathematical structures misleading names like “neural networks” and let media hype and people’s propensity to anthropomorphize take over from there.
The fact that Luigi has not been convicted is being treated as an irrelevant technicality by the media here. Interesting, given how scrupulous they usually are about sprinkling “alleged” everywhere.
Because everything we know about how the brain works says that it’s not a statistical word predictor.
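For contrast, here is what a “statistical word predictor” literally is in its simplest form: a toy bigram model that picks the next word purely by counted frequency. The training sentence is made up for illustration; real LLMs are vastly larger, but the underlying operation is the same kind of conditional statistics:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": predicts the next word purely from how
# often it followed the previous word in the training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1


def predict_next(word: str) -> str:
    """Most frequent follower -- pure statistics, zero understanding."""
    return counts[word].most_common(1)[0][0]


print(predict_next("the"))  # → "cat"
```

The model has no idea what a cat is; it only knows “cat” followed “the” more often than the alternatives.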
LLMs have no encoding of meaning or veracity.
There are some great philosophical exercises about this, like the Chinese room thought experiment.
There’s also the fact that, empirically, human brains are bad at statistical inference but do not need to consume the entire internet and all written communication ever to have a conversation. Nor do they need to process a billion images of a bird to identify a bird.
Now of course, because this exact argument has been had a billion times over the last few years, your obvious comeback is “maybe it’s a different kind of intelligence.” Well fuck, maybe birds shit ice cream. If you want to worship a chatbot made by a psychopath, be my guest.
Because it’s an expensive madlibs program…
Genuinely determining whether a computer is self-aware would be very hard, because we’re good at making programs that mimic self-awareness. Additionally, humans are kinda hardwired to anthropomorphize things that talk.
But we do know for absolute sure that OpenAI’s expensive madlibs program is not self-aware and is not even on the road to self-awareness, and anyone who thinks otherwise has lost the plot.
Given that its training data probably has millions of instances of people fearing death, I have no doubt that it would regurgitate some of that stuff. And LLMs constantly “say” stuff that isn’t true. They have no concept of truth, and therefore can neither reliably lie nor reliably tell the truth.
They’re not releasing it because it sucks.
Their counternarrative is they’re not releasing it because it’s like, just way too powerful dude!
The wedge already exists. The Trump people started leaking that Elon is a seagull less than 3 days after the election. The problem is that Elon paid for Trump’s victory and he’s continuing to throw money around. Trump knows this, and it’s why he keeps appointing Elon’s goons to the administration (beginning with JD Vance and continuing with David Sacks and all the other VCs).
Next time they pick a patsy for a guy with distinctive eyebrows, they should find a guy with the same eyebrows.
Tesla is the new BMW and nobody’s using those buttons.
You’d probably have to actually lie down between them, rather than just look at a photo, to assess whether it’s still possible.
Clearly they were put there with the intention of making it difficult or uncomfortable to lie down on the subway vent. If they were installed incompetently, that doesn’t make them any less hostile; it just makes them ineffective at their obviously intended purpose.
Too many villains in our society, not enough heroes.