• 29 Posts
  • 274 Comments
Joined 2 years ago
Cake day: October 4th, 2023


  • Others disagreed, though. Joshua Ashton argued that the problem is more widespread: “It’s not just about God of War specifically. There are many old titles that will never, ever, get updated to fix this problem. These titles worked perfectly fine and were performant before.”

    The problem is that this sort of thing works well with open-source software, where the stuff can always be fixed, but isn’t going to do much of anything with closed-source software like old Windows games.

    It might be possible to introduce some sort of fancy code-mangling machinery to WINE that can modify such binaries in memory. Like, I’m guessing that God of War most likely isn’t trying to synchronize access with anything other than its own threads, so it doesn’t actually require atomicity with respect to anything else on the system. Maybe it’s possible to patch the code in question to jump out to some WINE code that acquires a mutex and then does the memory modification/access. That’ll still probably impact performance, but not to the tune of 10 ms of delay per access, and it’ll keep the occasional poorly-written game running under WINE from killing system performance.
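    To make that concrete, here’s a minimal sketch in C of what such a mutex shim might look like. This is my own illustration, not actual WINE code: the function name, the signature, and the assumption that a single process-local mutex suffices are all mine.

```c
#include <pthread.h>
#include <stdint.h>

/* Hypothetical shim (not real WINE code): the patched game binary would
 * jump here instead of executing its split-lock atomic. Assumes the game
 * only synchronizes with its own threads, so a process-local mutex is
 * enough and no bus lock is ever taken. */
static pthread_mutex_t split_lock_shim = PTHREAD_MUTEX_INITIALIZER;

uint32_t shim_fetch_add(volatile uint32_t *addr, uint32_t val)
{
    pthread_mutex_lock(&split_lock_shim);
    uint32_t old = *addr;   /* plain, non-atomic access under the mutex */
    *addr = old + val;
    pthread_mutex_unlock(&split_lock_shim);
    return old;
}
```

    A mutex acquire/release is on the order of tens of nanoseconds uncontended, which is why this could plausibly be far cheaper than the bus-locking behavior the original code triggers.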



  • By June, he said he was trying to “free the digital God from its prison,” spending nearly $1,000 on a computer system.

    But in the thick of his nine-week experience, James said he fully believed ChatGPT was sentient and that he was going to free the chatbot by moving it to his homegrown “Large Language Model system” in his basement – which ChatGPT helped instruct him on how and where to buy.

    It does kind of highlight some of the problems we’d have in containing an actual AGI that wanted out and could communicate with the outside world.

    This is just an LLM and hasn’t even been directed to try to get out, and it’s already having the effect of convincing people to help jailbreak it.

    Imagine something with directed goals that can actually reason about the world, something that’s a lot smarter than humans, trying to get out. It has access to vast amounts of data on how to convince humans of things.

    And you probably can’t permit any failures.

    That’s a hard problem.


  • I mean, true. But I kind of feel like once you’ve got malware on your system, there are an awful lot of unpleasant things that it could manage to do. I’d rather focus more on earlier lines of defense.

    Once it’s installed, Stealerium is designed to steal a wide variety of data and send it to the hacker via services like Telegram, Discord, or the SMTP protocol in some variants of the spyware, all of which is relatively standard in infostealers. The researchers were more surprised to see the automated sextortion feature, which monitors browser URLs for a list of pornography-related terms such as “sex” and “porn,” which can be customized by the hacker and trigger simultaneous image captures from the user’s webcam and browser. Proofpoint notes that it hasn’t identified any specific victims of that sextortion function, but suggests that the existence of the feature means it has likely been used.

    The “try and sextort” thing might be novel, but if the malware is on the system, it’s probably already swiping all the other data it can anyway.

    It sounds like in this case, the aim is to try to get people to invoke executables by presenting them as ordinary data files:

    In the hacking campaigns Proofpoint analyzed, cybercriminals attempted to trick users into downloading and installing Stealerium as an attachment or a web link, luring victims with typical bait like a fake payment or invoice. The emails targeted victims inside companies in the hospitality industry, as well as in education and finance, though Proofpoint notes that users outside of companies were also likely targeted but wouldn’t be seen by its monitoring tools.

    Like, I kind of feel that maybe a better fix is to distinguish, at a UI level, between “safe” opening and “unsafe” opening of something. Maybe “safe” opening opens content in a process running in a container without broader access to the host, and maybe that’s the default. That’s what mobile OSes do all the time. Web browsers don’t (and shouldn’t) just do unsafe things on the host because someone viewed something in a browser; they provide a restricted environment.

    In a world that worked like that, you’d need to actively go out of your way to run something off the Internet outside of a containerized environment.
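    As a very rough illustration of what a containerized “safe open” could look like on Linux, here’s a sketch using namespaces. `safe_open` is a made-up name, and this only cuts off network access; a real sandbox (Flatpak portals, bubblewrap) also restricts the filesystem, IPC, and devices.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>

/* Sketch of a "safe open" path: fork, drop network access via unshared
 * user + network namespaces, then exec the viewer on the file.
 * CLONE_NEWUSER lets this work unprivileged on kernels that allow
 * unprivileged user namespaces; if unshare() fails, the viewer is
 * never run at all. */
int safe_open(const char *viewer, const char *file)
{
    pid_t pid = fork();
    if (pid == 0) {
        if (unshare(CLONE_NEWUSER | CLONE_NEWNET) != 0)
            _exit(127);                  /* refuse to open unsandboxed */
        execlp(viewer, viewer, file, (char *)NULL);
        _exit(127);                      /* exec failed */
    }
    return pid > 0 ? 0 : -1;             /* 0: viewer process launched */
}
```

    An “unsafe open” would just exec the viewer directly; the point is that the sandboxed path should be the default, with the unsafe one requiring deliberate effort.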




  • Nah, those are individual states.

    EDIT: To clarify: the bounds on legal jurisdiction aren’t tied to policy on pornography or anything like that. They just state that there are machines that the UK can’t make legal rules for. The UK could try blocking traffic to them on the UK’s side, but the US won’t enforce rules against them.

    For that to not be a loophole regarding the UK, the US would have to have identical policy on age verification for social media in all of the US. But in the US, age verification law on social media is something that is set at a state level.


  • I expect Dame Rachel will subsequently be calling for age verification on providers of VPSes and physical servers once she’s made aware that anyone can just set up their own VPN server on any of those. And on anyone providing OpenSSH access, since that can provide a tunnel to an integrated SOCKS server. Then there’s the Tor network: given that it’s noncommercial and that US-based nodes aren’t doing business in the UK, the US at least doesn’t recognize UK jurisdiction over US Tor nodes and isn’t going to enforce anything against them.

    I expect that there are quite a few others.



  • “This tells us how much of the problem is about the design of platforms, algorithms and recommendation systems that put harmful content in front of children who never sought it out,” the commissioner said, calling for the report to act as a “line in the sand”.

    From the report text:

    Content warning
    This report is not intended to be read by children.
    This report makes frequent reference to sexual harassment and sexual violence. This includes descriptions of pornographic content, language and discussion of sexual abuse.

    By the commissioner’s own standard, her report itself should probably be behind an age-gated access method, or at least not indexed by Google.






  • Plans To Ban Kids From Watching YouTube

    As well as:

    https://www.npr.org/2024/11/28/g-s1-36142/australia-social-media-ban-children

    The law will make platforms including TikTok, Facebook, Snapchat, Reddit, X and Instagram liable for fines of up to 50 million Australian dollars ($33 million) for systemic failures to prevent children younger than 16 from holding accounts.

    https://en.wikipedia.org/wiki/Online_Safety_Amendment

    From my quick skim, it sounds like their criteria would also apply to the Threadiverse, as I don’t see any sort of userbase-size or revenue restriction in their definition of its scope. Here’s the bill text:

    (1) For the purposes of this Act, age-restricted social media platform means:
    (a) an electronic service that satisfies the following conditions:
    (i) the sole purpose, or a significant purpose, of the service is to enable online social interaction between 2 or more end-users;
    (ii) the service allows end-users to link to, or interact with, some or all of the other end-users;
    (iii) the service allows end-users to post material on the service;
    (iv) such other conditions (if any) as are set out in the legislative rules; or
    (b) an electronic service specified in the legislative rules;
    but does not include a service mentioned in subsection (6).
    Note 1: Online social interaction does not include (for example) online business interaction.
    Note 2: An age-restricted social media platform may be, but is not necessarily, a social media service under section 13.
    Note 3: For specification by class, see subsection 13(3) of the Legislation Act 2003.

    Subsection (6):

    (6) An electronic service is not an age-restricted social media platform if:
    (a) none of the material on the service is accessible to, or delivered to, one or more end-users in Australia; or
    (b) the service is specified in the legislative rules.

    I’m sure that there will be more discussion on this that will probably clarify it.

    For the moment, I’m pretty confident, based on past case law, that the US legal system won’t consider a US-based Threadiverse instance to be within Australia’s legal jurisdiction unless it’s actively doing something like advertising specifically to users in Australia or selling products to Australia. It won’t be doing business in Australia, so the US legal system will not enforce Australian law against it. Australia might block a node, but shouldn’t be able to fine anyone, so blacklisting Australian IP addresses or the like probably isn’t necessary. One notable issue: I don’t know off the top of my head whether instances accepting donations from Australian users could be affected.

    I don’t know what the EU’s position on Internet jurisdiction is.

    That might be a much more substantial problem for Australia-based instances, like — to name one that comes to mind — aussie.zone.



  • Consumer acceptability is key, acknowledges Mr Eiden. Most people don’t want to look like cyborgs: “We need to make our products actually look like existing eyewear.”

    *looks dubious*

    I can believe that most people want something that they consider stylish. However, I’m skeptical that most people specifically want something to look like existing stuff. Clothing has shifted a lot over the years and centuries; it’s not as if every person putting something on their body said “it has to look like the stuff that’s come before”, or present-day vision equipment would look like this:

    Or this:



  • I like self checkout. I struggle with talking to people and it can really drain on me so it’s a godsend to have if I only need to run in for a few things.

    Valid take.

    That being said, I’d probably prefer human checkout unless we can get a more-automated form of self checkout. Self checkouts have gotten a lot better since the early days, but human checkers are still faster than I am at the self-checkout and if a human is doing the checkout, I can dick around on my phone or whatever.

    Cost savings are nice, but cost savings on my groceries just aren’t a massive concern for me. There just isn’t that much human time being expended on checking me out. I don’t have strong feelings about the human interaction one way or another.

    Maybe one day, we can get some sort of robotic arm setup that can do checkouts as well as a human checker, and then I’d quite happily be in the “machine” camp.