Programmer and sysadmin (DevOps?), wannabe polymath in tech, science and the mind. Neurodivergent, disabled, burned out, and close to throwing in the towel, but still liking ponies 🦄 and sometimes willing to discuss stuff.

  • 5 Posts
  • 1.19K Comments
Joined 2 years ago
Cake day: June 26th, 2023

  • Therapists are not supposed to bond with their patients. If you find one you can stand for half an hour, take what you can and leave the rest; they’re not meant to be your friend or lover. The fact that chatbots let people fall in love with them is a huge failure from a therapy point of view.

    Bouncing ideas back and forth is a good use though. A good prompt I’ve seen recently:

    I’m having a persistent problem with [x] despite having taken all the necessary countermeasures I could think of. Ask me enough questions about the problem to find a new approach.

    If you worry about privacy, you can run an LLM locally, but it won’t be fast, and you’d need extra steps to enable search.


  • You can use local AI as a sort of “private companion”. I have a few smaller models on my smartphone; they aren’t as great as the online versions, and run slower… but you decide the system prompt (not the company behind it), and they work just fine for bouncing ideas around.
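    A minimal sketch of what that looks like against a local Ollama server on its default port (the model name and prompts here are just examples; you need Ollama running with that model pulled):

```python
import json
import urllib.request

def build_payload(system_prompt, user_prompt, model="llama3.2"):
    """Build a chat request where *we* control the system prompt."""
    return {
        "model": model,
        "stream": False,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

def ask_local(system_prompt, user_prompt):
    # Ollama's default local endpoint; nothing leaves the machine.
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(build_payload(system_prompt, user_prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

if __name__ == "__main__":
    print(ask_local("Be blunt, never flatter me.", "Poke holes in this idea: ..."))
```

    Swap the system prompt for whatever persona or anti-sycophancy instructions you want; there’s no company prompt layered underneath.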

    NotebookLM is a great tool to interact with large amounts of data. You can bet Google is using every interaction to train their LLMs, everything you say is going to be analyzed, classified, and fed as some form of training, hopefully anonymized (…but have you read their privacy policy? I haven’t, “accept”…).

    All chatbots are prompted by the company to be somewhat sycophantic so you come back; the cases where they were “too sycophantic” were just a mistake of dialing it up too far. Again, you can avoid that with your own system prompt… or at least add an initial prompt in the config, if you have the option, to somewhat counteract the company’s prompt.

    If you want serendipity, you can ask a chatbot to be more spontaneous and suggest more random things. They’re generally happy to oblige… but the company ones are cut short on anything that could even remotely be considered as “harmful”. That includes NSFW, medical, some chemistry and physics, random hypotheticals, and so on.


  • Aren’t copyright laws awesome?

    • Buy digital copy… no you can’t, you can only license one
    • Buy physical book, now you have a copy
    • Want a digital copy? No you can’t, copyright forbids it…
    • …unless you destroy the physical copy in the process, then it’s only a format migration
    • Donating the books after digitizing would be “stealing”!

    And still, they are suing them for migrating formats without authorization 🤦

    All hail Disney’s lobbying and the 150-year copyright term!


  • TLDR: It’s a mess.

    Back in the day, I started migrating notepad stuff to Markdown on a Wiki. Then on a MediaWiki. Then DokuWiki. Then ZimWiki. Then Joplin. Then GitHub Pages and a self-hosted Jekyll.

    Each, single, one, of, them, uses a slightly different flavor of Markdown. At this point, I have stuff spread over ALL OF THEM, much of it rotting away in “backups to migrate later”. 😮‍💨
    I’ve been considering “vibe coding” some converters…

    As for syncing… the Markdown part is easy: git.
    Working with a Markdown editor to update GH Pages was a good experience.
    Having ZimWiki auto-sync to git was good, but I didn’t find a decent compatible editor for Android.
    I switched to Joplin, lured by the built-in auto-sync options, but I kind of regret it now that it keeps a folder with thousands of files in it.

    Obsidian is not OSS itself, but has an OSS plugin to sync to git.
    I’ve read that using Logseq alongside Obsidian should be possible… and I was planning to test that setup, keeping Obsidian in charge of sync. Possibly with GitHub/Jekyll, and git-lfs for images and attachments.


    PS: assuming one had working back-and-forth converters for the different Markdown flavors, and everything stored in git, then one could theoretically use git hooks to convert to/from whatever flavor a particular editor uses locally.
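    A sketch of what one of those converters could look like, assuming DokuWiki as the source flavor — it’s lossy and only covers headings, links, and italics, but it’s the kind of thing a git hook could run:

```python
import re

# DokuWiki "====== Title ======" is H1, "=====" is H2, down to "==" for H5.
def _heading(match):
    level = max(1, 7 - len(match.group(1)))
    return "#" * level + " " + match.group(2).strip()

def dokuwiki_to_markdown(text):
    out = []
    for line in text.splitlines():
        line = re.sub(r"^(={2,6})\s*(.+?)\s*=+\s*$", _heading, line)
        # [[target|label]] -> [label](target); [[target]] -> [target](target)
        line = re.sub(r"\[\[([^\]|]+)\|([^\]]+)\]\]", r"[\2](\1)", line)
        line = re.sub(r"\[\[([^\]|]+)\]\]", r"[\1](\1)", line)
        # //italic// -> *italic* (naive: would also mangle bare URLs)
        line = re.sub(r"//(.+?)//", r"*\1*", line)
        out.append(line)
    return "\n".join(out)
```

    Bold (`**`) already matches, and code blocks would need their own pass; a pre-commit hook could apply this per file, with a reverse converter on checkout.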



  • Hm, makes sense, but I feel like we’re still missing something.

    I saw comments about Durov, similar to this investigation, maybe around a month ago.


    With the xAI partnership news, I looked into it and found this nice thing:

    In Telegram, you can clear messages one by one, or by date range, or use disappearing messages, but this tool still found some I had missed.

    (Disclaimer: I got pulled into Telegram by some friends leaving WhatsApp with the policy changes of 2021, my threat model is less one of FSB, and more one of indiscriminate AI siphoning for ad targeting)




  • The future of all computing is AI. Get on or get left behind.

    Satire?.. hm… for quite some time now, people have been proposing we get rid of software entirely, and instead use real-time generative AI to render what that software would have done.

    AI cosplaying as software… imagine “web development”, where the “browser” were an AI simulating to be a browser, connecting to an AI simulating to be a server… what would “web development” even mean anymore?



  • Do GIMP, Krita, Kdenlive or Inkscape use AI?

    There are AI plugins for all of them… but they’re optional for now (2025). Kdenlive is working on integrating generative AI for correction and background removal.

    The main offender is Adobe, which is the “standard” workflow for most media processing and is forcing AI everywhere, including something as simple as color curves… then slapping a “made using AI” tag on the output file. Inkscape is foremost an SVG editor, but Adobe Illustrator already has generative AI for things like rotating vector graphics “in 3D”, so it’s only a matter of time before Inkscape follows suit. Even Windows Notepad got AI features recently 🤦

    AI-assisted compression and correction

    JPEG compression itself is a sort of “AI light”: it analyzes 8×8 blocks of an image against a perceptual model to drop “irrelevant” detail. Adobe has added a feature that does that analysis with AI, tweaking/generating blocks so there are more similarities between them. It’s likely others will follow suit: “it’s lossy compression after all, right? …right?”
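    The underlying trick can be sketched in a few lines: quantize blocks coarsely enough and more of them become identical, so a generic entropy coder has repetition to exploit. Toy code, not real JPEG (no DCT, just per-block rounding):

```python
import random
import zlib

def quantize_blocks(pixels, block=8, step=16):
    """Round each pixel to a multiple of `step`, one block at a time.
    A coarser step makes more blocks identical (and the result lossier)."""
    out = []
    for i in range(0, len(pixels), block):
        out.extend(step * round(p / step) for p in pixels[i:i + block])
    return out

def compressed_size(pixels):
    # Clamp to byte range, then let a generic entropy coder find the repeats.
    return len(zlib.compress(bytes(min(255, max(0, p)) for p in pixels)))

# A noisy gradient: quantizing strips the noise, so repetition emerges
# and the quantized list compresses noticeably smaller than the raw one.
random.seed(0)
raw = [min(255, i // 4 + random.randint(-5, 5)) for i in range(1024)]
lossy = quantize_blocks(raw)
```

    The AI-assisted variants do the same thing, but decide *which* detail is perceptually “irrelevant” per block instead of using one fixed rule.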

    Lossy audio encoding (MP3, etc.) also uses a perceptual model to increase block similarities; they’re adding AI there the same way as with images.

    Videos… well, they’re a mix of images and audio, with temporal compression already splitting frames into key frames and predicted/interpolated intermediates. Generatively tweaking some of those to make them more similar, within perceptual limits, also improves compression.
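    The temporal side can be sketched the same way: store one key frame plus per-frame deltas, and zero out deltas below a perceptual threshold so they compress away. Toy code with frames as flat pixel lists and a made-up threshold:

```python
def encode_deltas(frames, threshold=3):
    """Key frame + per-frame deltas; tiny changes count as 'the same pixel'."""
    key = frames[0]
    deltas = []
    prev = list(key)
    for frame in frames[1:]:
        d = []
        for p_new, p_old in zip(frame, prev):
            diff = p_new - p_old
            d.append(0 if abs(diff) <= threshold else diff)
        # Track what the decoder will reconstruct, so error doesn't accumulate.
        prev = [p + dd for p, dd in zip(prev, d)]
        deltas.append(d)
    return key, deltas

def decode(key, deltas):
    frames = [list(key)]
    for d in deltas:
        frames.append([p + dd for p, dd in zip(frames[-1], d)])
    return frames
```

    Real codecs do this per block with motion compensation; the AI twist is deciding generatively which differences viewers won’t notice.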

    Does this only apply to digital media used in mainstream sources or does it mean everyone who uses editing software is using AI?

    The main issue lies at the source: cameras

    Unless you’re using a large-sensor professional camera, all the “prosumer” and smartphone sensors are… let’s put it mildly… UTTER CRAP. They’re too small, with lenses too poor to avoid circle-of-confusion blur, diffraction, or chromatic aberration.

    Before it even spits out a “RAW” image, it’s already been processed to hell and back. Modern consumer “better” cameras… use more AI to do a “better” processing job. What you see is way past whatever the camera actually saw.

    …and then, it goes into the software pipeline. ☠️