• 0 Posts
  • 50 Comments
Joined 2 years ago
Cake day: June 16th, 2023


  • In addition to what others say, for me the biggest sin is just how maddeningly slow it is. Trying to scroll a conversation back in time is just miserable. Coming from approaches where scrolling arbitrarily back in history has felt pretty much instant for over 20 years, it just feels horribly backwards. The reason for that sluggishness is that it’s just a terrible design vaguely wearing a passable layer of paint to make it look approachable.





  • This was roughly the state of affairs before, but things have since relented: software password managers are now allowed to serve the purpose.

    So if a hardened security guy wants to use only his dedicated hardware token, registering backups alongside it, that’s possible.

    If a layman wants to use Google password manager to just take care of it, that’s fine too.

    There’s also much in between: using a phone instead of something like a YubiKey, using an offline password manager, etc.
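For the curious, this flexibility maps onto the `authenticatorSelection` options a site sends at passkey registration. A minimal sketch, with field names taken from the W3C WebAuthn spec; the `example.com` relying party and the specific values are illustrative assumptions, not anything from the post:

```python
import os

# Illustrative WebAuthn registration options. Field names follow the
# W3C WebAuthn spec; the relying party and user values are made up.
creation_options = {
    "rp": {"id": "example.com", "name": "Example"},
    "user": {"id": os.urandom(16), "name": "alice", "displayName": "Alice"},
    "challenge": os.urandom(32),  # random server challenge
    "pubKeyCredParams": [{"type": "public-key", "alg": -7}],  # ES256
    "authenticatorSelection": {
        # Leaving "authenticatorAttachment" unset accepts both a
        # dedicated hardware token ("cross-platform") and a software
        # password manager or phone ("platform").
        "residentKey": "preferred",
        "userVerification": "preferred",
    },
}
```

Setting `authenticatorAttachment` to `"cross-platform"` would be the hardware-token-only posture; leaving it unset is what lets Google’s password manager or a phone fill the role.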




  • Eh, analogy will be imperfect due to nuance, but I’d say it is close.

    The big deals are:

    • DeepSeek isn’t one of the “presumed winners” that investors had been betting on, and they caught up very quickly
    • DeepSeek let people download the model, meaning others can host it free and clear. The investors largely assumed at least folks would all abide by the ‘keep our model private if it is competitive and only allow access as a service offering’, and this really fouls up assumptions that an AI provider would hold lock-in
    • DeepSeek is pricing way way lower than OpenAI.
    • Purportedly they didn’t need to push their luck with just tons of H100s to get where they are. You are right that you still need pretty beefy hardware to run it, but nVidia’s stock was predicated on even bigger stakes. Reportedly an attempt to train a model by OpenAI involved $500 million, and a claim to train a “good enough” model for less than $10 million dramatically reduces the value of nVidia. Note that while they are “way down” they still have almost a 3 trillion dollar market cap. That’s still over 30 Intels or 12 AMDs. There’s just some pessimism because OpenAI and Anthropic either directly or indirectly drove potentially a majority of nVidia revenue, and there’s a lot more uncertainty about those companies now.
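Rough arithmetic behind those comparisons (the market caps are approximate point-in-time assumptions, not exact quotes):

```python
# Back-of-the-envelope figures; all values are rough assumptions.
nvidia_cap = 3_000_000_000_000  # ~$3 trillion even after the drop
intel_cap = 100_000_000_000     # ~$100 billion assumed
amd_cap = 250_000_000_000       # ~$250 billion assumed

print(nvidia_cap / intel_cap)  # -> 30.0, "over 30 Intels"
print(nvidia_cap / amd_cap)    # -> 12.0, "12 AMDs"

# Reported ~$500M training run vs a claimed <$10M "good enough" model
print(500_000_000 / 10_000_000)  # -> 50.0, roughly 50x cheaper
```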

    I also think this is on the back of a fairly long relatively stagnant run. After the folks saw the leap from GPT2 to ChatGPT they assumed a future of similar dramatic leaps, but have instead gotten increasingly modest refinements. So against a backdrop of a more “meh” sentiment over where they are going you have this thing to disturb some presumed fundamentals in the popular opinion.


    • 7-zip
    • VLC
    • OBS
    • Firefox did it, only to mostly falter to Chrome, but Chrome is largely Chromium, which is open source.
    • Linux (superseded all the Unix, very severely curtailed Windows Server market)
    • Nearly all programming language tools (IDEs, Compilers, Interpreters)
    • Essentially all command line ecosystem (obviously on the *nix side, but MS was pretty much compelled to open source Powershell and their new Terminal to try to compete)

    In some contexts you aren’t going to have a lively enough community to drive a compelling product even when there’s enough revenue for a company to make a go of it, but to say ‘no open source software has achieved that’ is a bit much.






  • Usually I’ll see something mild or something niche get wildly messed up.

    I think a few times I managed to get a query from one of those posts in, but I think they are monitoring for viral bad queries and very quickly massage it one way or another to not provide the ridiculous answer. For example, a fair number of times the AI overview would just be seemingly disabled for queries I found in these sorts of posts.

    Also have to contend with the reality that people can trivially fake it and if the AI isn’t weird enough, they will inject a weirdness to get their content to be more interesting.


  • jj4211@lemmy.world to memes@lemmy.world: Can’t wait!

    Unfortunately, this time around the majority of the AI build-up is GPUs that are likely difficult to accommodate in a random build.

    If you want a GPU for graphics, well, many of them don’t even have video ports.

    If your use case doesn’t need those, well, you might not be able to reasonably power and cool the sorts of chips that are being bought up.

    The latest wrinkle is that a lot of that overbuying is likely to go toward Grace Blackwell, which is a standalone unit. Ironically, despite being a product built around a GPU, its video port is driven by a non-nVidia chip.


  • jj4211@lemmy.world to memes@lemmy.world: Ai bubble

    This was after applying various mechanisms of the traditional kind. Admittedly there was one domain-specific strategy that wasn’t applied that would have caught a few more, but not all of them.

    The point is that I had a task that was hard to code up, but trivial yet tedious for a human. AI approaches can bridge that gap sometimes.

    In terms of energy consumption, it wouldn’t be so bad if the approaches weren’t horribly over used. That’s the problem now, 99% of usage is garbage. If it settled down to like 3 or 4% of usage it would still be just as useful, but no one would bat an eye at the energy demand.

    As with a lot of other bubble things, my favorite part is probably going to be its life after the bubble pops, when the actually useful use cases remain and the stupid stuff dies out.



  • jj4211@lemmy.world to memes@lemmy.world: Ai bubble

    I had some files that I knew had duplicates, but they didn’t exactly match: while the filenames were not identical, you could tell by looking whether they were the same.

    It would have been very tedious to go through all of them, but the LLM was able to identify a “good enough” number of duplicates and only made a few mistakes. It greatly sped up the manual work required to clean up the collection.
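For contrast, a “traditional” fuzzy-matching pass over the filenames might look like the sketch below, using Python’s stdlib difflib. The example names and the 0.7 threshold are assumptions for illustration; this kind of pass catches near-identical names, but not the cases where only a human or an LLM can tell that two differently worded names describe the same file.

```python
from difflib import SequenceMatcher
from itertools import combinations

def likely_duplicates(filenames, threshold=0.7):
    """Return pairs of filenames whose similarity ratio exceeds the threshold."""
    pairs = []
    for a, b in combinations(filenames, 2):
        # Case-insensitive character-level similarity in [0, 1]
        if SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold:
            pairs.append((a, b))
    return pairs

print(likely_duplicates([
    "holiday_photo_001.jpg",
    "holiday photo 001 (copy).jpg",
    "tax_return_2022.pdf",
]))
```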

    But that’s so far from most advertised scenarios and not compelling from a “make lots of money” perspective.


  • Well that’s not quite true.

    I have some z-wave thermostats, which I know do not talk to the Internet, just a local system with a zwave dongle.

    For a relative, I recently set up a similar system, but with a HomeKit thermostat. Similar deal, though it really, really wanted to connect to a cloud server and you kind of had to trick it into a non-Apple HomeKit setup. The follow-on model from that brand did drop HomeKit support, presumably because they wanted to force their cloud servers, which became required for any advanced functionality.

    There are ways to get automation-friendly devices without a cloud-connected requirement, though admittedly you have to be paying pretty close attention. Generally offerings for business are more likely to be locally workable, but that’s hardly a given either.