

The experience of installing and updating GPU drivers can be very different across distros, especially if you use Secure Boot. This was such a pain point for me on Tumbleweed that I just pinned my kernel.
I don’t see any mention of whether this uses local models or cloud models. I’m not interested in sending anything I care about into the cloud.
If you think this isn’t related to human rights, then you’ve missed the point.
People have the right to use technology, and indeed we effectively need technology to exercise our right to free speech. You cannot have one without the other. Not anymore.
The right way to think about this is that they are arbitrarily banning a topic of discussion simply because it is not dead-center average. This isn’t even a legal issue, and the justification is utter nonsense (Facebook itself runs on Linux, like >90% of the internet). No government has officially asked them to do this, though the timing suggests that it is unofficially from the Trump administration.
This is about exerting control, establishing precedent, and applying a chilling effect to anything not directly aligned with their interests. This obviously extends to human rights issues. This is a test run.
This will be highly platform-dependent, and also dependent on your threat model.
On PC laptops, you should probably enable Secure Boot (if it’s not enabled by default), and password-protect your BIOS. On Macs you can disable booting from external media (I think that’s even the default now, but not totally sure). You should definitely enable full-disk encryption – that’s FileVault on Mac and BitLocker on Windows.
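If you’re on Linux and want to verify Secure Boot is actually on, you can read the UEFI variable straight out of sysfs. A quick sketch (the efivars layout is 4 attribute bytes followed by the data; the path assumes a standard efivarfs mount):

    # Check the SecureBoot UEFI variable on Linux.
    from pathlib import Path

    var = next(Path("/sys/firmware/efi/efivars").glob("SecureBoot-*"), None)
    if var is None:
        print("No SecureBoot variable found (legacy BIOS boot?)")
    else:
        # First 4 bytes are attribute flags; the 5th byte is the value (1 = enabled).
        print("Secure Boot is", "enabled" if var.read_bytes()[4] == 1 else "disabled")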
On Apple devices, you can enable USB Restricted Mode, which will protect against some attacks with USB cables or devices.
Apple devices also have lockdown mode, which restricts or disables a whole bunch of functionality in an effort to reduce your attack surface against a variety of sophisticated attacks.
If you’re worried about hardware hacks, then on a laptop you’d want to apply some tamper-evident stickers or something similar, so if an evil maid opens it up and tampers with the hardware, at least you’ll know something fishy happened, so you can go drop your laptop in an active volcano or something.
If you use any external devices, like a keyboard, mouse, hard drive, whatever…well…how paranoid are you? I’m going to be honest: there is a near 0% chance I would even notice if someone replaced my charging cables or peripheral cables with malicious ones. I wouldn’t even notice if someone plugged in a USB keylogger between my desktop PC and my keyboard, because I only look at the back of my PC once in a blue moon. Digital security begins with physical security.
On the software side, make sure you’re the only one with admin rights, and ideally you shouldn’t even log into admin accounts on a day-to-day basis.
According to BabyCenter.com user data, it’s the 13,388th most popular boy’s name. https://www.babycenter.com/baby-names/details/root-1704004
I agree that the models themselves are clearly transformative. That doesn’t mean it’s legal for Meta to pirate everything on earth to use for training. THAT’S where the infringement is. And they admitted they used pirated material: https://www.techspot.com/news/101507-meta-admits-using-pirated-books-train-ai-but.html
You want to use the same bullshit tactics and unreasonable math that the RIAA used in their court cases?
I would enjoy seeing megacorps held to at least the same standards as individuals. I would prefer for those standards to be reasonable across the board, but that’s not really on the table here.
I guess the idea is that the models themselves are not infringing copyright, but the training process DID. Some of the big players have admitted to using pirated material in training data. The rest obviously did even if they haven’t admitted it.
While language models have the capacity to produce infringing output, I don’t think the models themselves are infringing (though there are probably exceptions). I mean, gzip can reproduce infringing material too with the correct input. If producing infringing work requires both the algorithm AND specific, intentional user input, then I don’t think you should put the blame solely on the algorithm.
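To make the gzip analogy concrete, here’s a trivial sketch (the “copyrighted” text is a placeholder):

    import gzip

    # gzip faithfully reproduces whatever you compressed -- the infringing
    # content lives in the input, not in the algorithm.
    original = b"imagine a copyrighted passage here"
    blob = gzip.compress(original)
    assert gzip.decompress(blob) == original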
Either way, I don’t think existing legal frameworks are suitable to answer these questions, so I think it’s more important to think about what the law should be rather than what it currently is.
I remember stories about the RIAA suing individuals for many thousands of dollars per mp3 they downloaded. If you applied that logic to OpenAI — maximum fine for every individual work used — it’d instantly bankrupt them. Honestly, I’d love to see it. But I don’t think any copyright holder has the balls to try that against someone who can afford lawyers. They’re just bullies.
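Back-of-the-envelope, with loudly approximate numbers: US statutory damages for willful infringement top out at $150,000 per work, and the Books3 dataset alone is commonly cited at roughly 190,000 books:

    # Rough illustration only; both numbers are hedged approximations.
    per_work_max = 150_000   # dollars, statutory maximum for willful infringement
    works = 190_000          # Books3 alone; real training sets are far larger
    print(f"${per_work_max * works:,}")  # $28,500,000,000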
Why? This cannot possibly have any legal weight. Some adults look young. Some kids look old. The very idea is broken from the outset.
I can’t tell if this is incompetence or malice.
Thanks for the info. I was not aware that Bluesky had public, shareable block lists. That is indeed a great feature.
For anyone else like me who was not aware, I found this site with an index of a lot of public block lists: https://blueskydirectory.com/lists . I was not able to load some of them, but others did load successfully. Maybe some were deleted or are not public? I’m not sure.
I’ve never been heavily invested in microblogging, so my first-hand experience is limited and mostly academic. I have accounts on Mastodon and Bluesky, though. I would not have realized this feature was available in Bluesky if you hadn’t mentioned it and I didn’t find that index site in a web search. It doesn’t seem easily discoverable within Bluesky’s own UI.
Edit: I agree, of course, that there is a larger systemic problem at the society level. I recently read this excellent piece (very long but worth it!) that talks a bit about how that relates to social media: https://www.wrecka.ge/against-the-dark-forest/ . Here’s a relevant excerpt:
If this truly is the case—if the only way to improve our public internet is to convert all humans one by one to a state of greater enlightenment—then a full retreat into the bushes is the only reasonable course.
But it isn’t the case. Because yes, the existence of dipshits is indeed unfixable, but building arrays of Dipshit Accelerators that allow a small number of bad actors to build destructive empires defended by Dipshit Armies is a choice. The refusal to genuinely remodel that machinery when its harms first appear is another choice. Mega-platform executives, themselves frequently dipshits, who make these choices, lie about them to governments and ordinary people, and refuse to materially alter them.
Do you think this is a systemic problem, or just the happenstance of today? Is there something about Bluesky’s architecture or governance that makes it more resilient against that (particularly in the long term)? Or will they have all the same problems as they gain more users and enable more federation with other servers?
I’d rather have something like a “code grammar checker” that highlights potential errors for my examination than something that generates code from scratch itself.
Agreed. The other good use case I’ve found is as a faster reference for simple things. LLMs are absolutely great for one-liners and generating troublesome (but logically simple) things like complex xpath queries. But I still haven’t seen one generate a good script of even moderate complexity without hand-holding. In some cases I’ve been able to get usable output with a few shots, saving me a bit of time compared to if I’d written the whole darned thing from scratch.
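For a sense of what I mean, here’s the kind of XPath an LLM will happily produce on the first try (a sketch using the third-party lxml library; the HTML is made up):

    from lxml import html

    doc = html.fromstring("""
    <table>
      <tr><td class="name">foo</td><td class="size">10</td></tr>
      <tr><td class="name">bar</td><td class="size">20</td></tr>
    </table>
    """)

    # "Give me the name cell of every row whose size cell is greater than 15."
    print(doc.xpath('//tr[td[@class="size"] > 15]/td[@class="name"]/text()'))  # ['bar']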
I’ve found LLMs very useful for coding, but they aren’t replacing my actual coding, per se. They replace looking things up, like through man pages, language references, or StackOverflow. Something like ffmpeg, for example, has a million options and it is always a little annoying to sift through the docs manually when I just need to do one specific task.
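For instance, “how do I extract the audio track from a video without re-encoding?” is exactly the sort of one-off I’d rather ask an LLM than dig out of the docs (filenames are placeholders; wrapped in Python just to keep one language throughout):

    import subprocess

    # Copy the audio stream out of a video container without re-encoding.
    subprocess.run(
        ["ffmpeg", "-i", "input.mp4", "-vn", "-acodec", "copy", "output.m4a"],
        check=True,
    )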
I’m sure it’ll happen sooner or later. I’m not naive enough to claim that “computers will never be able to do $THING” anymore. I’ll say “not in the next year”, though.
Just marketing nonsense. There are three ways to present AI features:
1. A generational improvement on things that have been available for 20+ years. This is not sexy and does not make for good advertising. For example: grammar checking, natural-speech processing (Siri), automatic photo tagging/sorting.
2. A new type of usage that nobody cares about because they’ve lived without it just fine up to now.
3. Straight-up lying to people about what it can do, using just enough weasel words to keep yourself out of jail.
I’d be surprised if it were anything else. No way in hell OpenAI is going to develop their own browser engine from scratch. Mayyyyybe they go with Gecko? Might make sense if OpenAI is trying to eat Google’s lunch long-term.
In theory, an “AI PC” (please imagine giant eye-rolls along with the scare quotes) has the hardware to run models locally instead of shunting stuff off to OpenAI or Anthropic for processing. So in theory, it’s more private and secure than similar functionality on a “traditional PC”.
In practice…wtf knows what Windows is doing anyway? Or what it will do with the next OS update? Same for macOS. On the Mac side, Apple keeps talking about their local AI and private cloud AI, and yet they’re still partnering with OpenAI for ChatGPT integration. I don’t want to use anything that even has the capability to send my shit to OpenAI, for the same reason I don’t like to put poison in my fridge no matter how clearly labelled it might be.
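For what it’s worth, local inference doesn’t even require blessed “AI PC” hardware. A sketch using the third-party llama-cpp-python bindings (the model path is a placeholder you’d supply):

    from llama_cpp import Llama

    # Everything here runs on the local machine; nothing leaves the box.
    llm = Llama(model_path="/path/to/model.gguf")  # placeholder path
    out = llm("Q: What is the capital of France? A:", max_tokens=16)
    print(out["choices"][0]["text"])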
“It’s popular so it must be good/true” is not a compelling argument. I certainly wouldn’t take it on faith just because it has remained largely unquestioned by marketers.
The closest research I’m familiar with showed the opposite, but it was specifically related to the real estate market so I wouldn’t assume it applies broadly to, say, groceries or consumer goods. I couldn’t find anything supporting this idea from a quick search of papers. Again, if there’s supporting research on this (particularly recent research), I would really like to see it.
If there is any research from the last 50 years suggesting this actually works, I’d love to see it.
Wait, isn’t it the other way around? You should arrive in NY earlier than you left London, since NY is 5 hours behind London. So if you leave at 8:30 and arrive 1.5 hours later, it should only be 5AM when you arrive.
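Quick sanity check with Python’s zoneinfo (the date is arbitrary; in winter London is UTC+0 and New York is UTC-5):

    from datetime import datetime, timedelta
    from zoneinfo import ZoneInfo

    depart = datetime(2025, 1, 15, 8, 30, tzinfo=ZoneInfo("Europe/London"))
    arrive = depart + timedelta(hours=1, minutes=30)
    print(arrive.astimezone(ZoneInfo("America/New_York")).strftime("%H:%M"))  # 05:00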
You might need a third breakfast before your elevenses in that case.
Interesting read, thanks! I’ll finish it later, but already this bit is quite interesting:
Without access to gender, the ML algorithm over-predicts women to default compared to their true default rate, while the rate for men is accurate. Adding gender to the ML algorithm corrects for this and the gap in prediction accuracy for men and women who default diminishes.
We find that the MTEs are biased, significantly favoring White-associated names in 85.1% of cases and female-associated names in only 11.1% of cases
If you’re planning to use LLMs for anything along these lines, you should filter out irrelevant details like names before any evaluation step. Honestly, humans should do the same, but it’s impractical. This is, ironically, something LLMs are very well suited for.
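A minimal sketch of that filtering step, assuming plain-text resumes (the regexes are illustrative only; a real pipeline would use a proper NER model, since names can’t be caught reliably with patterns):

    import re

    def redact(resume_text: str) -> str:
        # Crude placeholder patterns for emails and phone numbers.
        text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", resume_text)
        text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)
        return text

    print(redact("jane.doe@example.com, +1 (555) 123-4567"))  # [EMAIL], [PHONE]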
Of course, that doesn’t mean off-the-shelf tools are actually doing that, and there are other potential issues as well, such as biases around cities, schools, or any non-personal info on a resume that might correlate with race/gender/etc.
I think there’s great potential for LLMs to reduce bias compared to humans, but half-assed implementations are currently the norm, so be careful.
Uh huh.
Yes. That’s what “sale of data” means. Everybody understood that. That’s exactly what we don’t want you to do.