• 0 Posts
  • 3 Comments
Joined 9 days ago
Cake day: March 10th, 2025

  • SaraTonin@lemm.ee to linuxmemes@lemmy.world · It's that simple
    13 hours ago

    I’m in the middle of moving, but once I’m set up I’m going to look into dual booting. I’m not sure I’ll 100% be able to get rid of Windows, though. For a start, I’ve heard NVIDIA is a nightmare on Linux, and I’ve only recently got a new computer, so I don’t really want to buy more hardware.

    Hopefully dual booting will allow me to experiment and try alternatives for software that doesn’t have a Linux version, and I hear that one of the things chatbots are actually good at is diagnosing and fixing Linux issues. So I’m hopeful, but I’m not assuming it’ll be entirely painless.


  • If you follow AI news you should know that it’s basically out of training data, that extra training has sharply diminishing returns (so more training data would only have limited impact anyway), that companies are starting to train AI on AI-generated data, both intentionally and unintentionally, and that hallucinations and unreliability are baked into the technology.

    You also shouldn’t take improvements at face value. The latest ChatGPT is better than the previous version, for sure. But its achievements are exaggerated (for example, it already knew the answers ahead of time for the specific maths questions it was shown answering, and it isn’t better than earlier versions or other LLMs at solving maths problems whose answers aren’t already hardcoded), and the way it operates is to have a second LLM check its outputs. Which means it takes, IIRC, 4-5 times the energy (and therefore cost) for each answer, for a marginal improvement in functionality.

    The idea that “they’ve come on in leaps and bounds over the last 3 years, therefore they will continue to improve at that rate” isn’t really supported by the evidence.