It seems most people are on board with the idea that AI will change the world. While I agree it will have some impact, I also think the hype is overinflated by marketing. Operating an AI takes huge computing power, which costs heaps of money and energy. So how are people suggesting that exponential improvement is feasible? I do not get it.

Further, aren’t we supposed to reduce energy usage? Why are we trying to overspend what little is left? I hate how this is taking priority over the environment.

Creating this post mainly to rant: I thought OpenAI firing Sam Altman was a signal for a reality check. It seems they are walking that back and trying to rehire him, though… What a drama.

  • sbv@sh.itjust.works
    1 year ago

    Some of it is overinflated marketing, but for organizations trying to cut costs it could have a significant effect on a lot of their employees.

    AI doesn’t need to be good. It just needs to be cheaper and good enough.

    • partial_accumen@lemmy.world
      1 year ago

      So most people are assuming AI will do all the work of a job. Maybe it will someday, but in my experience today it is able to do 80% of the work with only 20% human effort put in. So no, it’s not doing 100% of the work, it’s doing 80% - but it does that 80% in seconds for what used to take me hours or days.

      That is a huge improvement over no AI use at all.
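      As a rough sketch of that 80/20 arithmetic (the numbers are illustrative, not measurements), the overall speedup follows Amdahl’s-law-style math:

```python
# Amdahl's-law-style back-of-envelope: if AI accelerates 80% of a task
# enormously while the remaining 20% still runs at human speed, the
# overall speedup tops out around 5x. Numbers are illustrative only.

def effective_speedup(ai_fraction: float, ai_speedup: float) -> float:
    """Overall speedup when `ai_fraction` of the work runs `ai_speedup`
    times faster and the rest is unchanged."""
    return 1.0 / ((1.0 - ai_fraction) + ai_fraction / ai_speedup)

# 80% of the work done ~1000x faster:
print(round(effective_speedup(0.8, 1000.0), 2))  # ~4.98
```

      The point being: even near-instant AI help is bounded by the human-driven 20%, which is why hours turn into "much less time" rather than into seconds.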

      • sbv@sh.itjust.works
        1 year ago

        it is able to do 80% of the work with only 20% human effort put in. So no, it’s not doing 100% of the work, it’s doing 80%

        I think that’s the calculation most organizations will make. If AI can do 80% of a job, they can fire 80-90% of their employees in that task, and use the remainder as AI wranglers.

        That’s a pretty significant workforce reduction, and it means the folks who remain employed spend less of their time doing what they trained for, and more time in an IT/management role.

    • someacnt@sopuli.xyzOP
      1 year ago

      Yeah, I mostly mean the AGI nonsense. There are jobs where AI is helpful - though imo it is worth pointing out that not all of the gains are purely thanks to the AI itself.

  • squirmy_wormy@lemmy.world
    1 year ago

    I’d argue this isn’t unpopular to anyone who knows that “AI” is just pattern matching, and marketing to people who don’t understand tech.

    People should actively be skeptical.

  • cm0002@lemmy.world
    1 year ago

    It’s unsustainable right now because hardware and software are not aligned (yet). Software is currently outpacing hardware, but there are loads of companies working on specialist chips that will tackle both the computing power problem and the energy consumption problem through the sheer scale of the optimization benefits.

    Plus, software optimizations are also well under way, and models are constantly being fine-tuned to run better and train better with less.
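    One concrete example of “run better with less” (a sketch with a hypothetical model size, not a benchmark): quantization stores weights as 8-bit integers instead of 32-bit floats, cutting memory and bandwidth roughly 4x:

```python
# Illustrative arithmetic only: memory footprint of a hypothetical
# 7-billion-parameter model at different weight precisions.

params = 7_000_000_000

fp32_gb = params * 4 / 1e9   # 32-bit float weights: 4 bytes each
int8_gb = params * 1 / 1e9   # 8-bit integer weights: 1 byte each

print(fp32_gb, int8_gb)  # 28.0 7.0
```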

    • someacnt@sopuli.xyzOP
      1 year ago

      I doubt how much that could achieve. I agree that a 10~100 times improvement is feasible by optimizing the hardware. But hardware in general still needs to improve, and the speed of light stands as an impenetrable barrier in the way.

      And more complete AI systems would require hundreds of thousands of times the computing power. Really, this has the same issue as Bitcoin.
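      To put the light-speed barrier in concrete terms (simple physics, no chip-specific assumptions):

```python
# In one cycle of a 3 GHz clock, light in a vacuum travels only about
# 10 cm -- and signals in real silicon propagate slower still. That is
# one reason simply cranking clock speed stopped working years ago.

c = 299_792_458        # speed of light in vacuum, m/s
clock_hz = 3e9         # a typical 3 GHz clock

cm_per_cycle = c / clock_hz * 100
print(round(cm_per_cycle, 1))  # ~10.0
```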

      • M500@lemmy.ml
        1 year ago

        I think the specialized hardware for this task will be better than you expect. It’s like using a sledgehammer to carve something. Pretty soon the computer will be handed a chisel, and it will be able to do its job much more easily.

        • someacnt@sopuli.xyzOP
          1 year ago

          I doubt it, since GPUs were already not a bad tool for this job. The generality of GPGPU helped a lot here.

    • Schal330@lemmy.world
      1 year ago

      I suspect this is all part of the long-term plan: provide the service at a reduced fee so people gain reliance on the tech, then increase the cost over time. We see this happen everywhere.

  • Karlos_Cantana@sopuli.xyz
    1 year ago

    The “current gen AI” is the key here. How sustainable it is depends on how quickly it can grow and improve. Technology is growing much faster than in the past. I remember getting a dictation program in 1998. I had to spend 2 hours talking to it so it could learn my voice. Even after all that, it still only had about a 25% success rate in properly transcribing my text. In 2015 I bought my first smart watch. The first voice transcription I made from it was 100% correct with absolutely no learning of my voice at all.

    I believe that LLMs will quickly give way to a different type of AI. There may be several different approaches to AI before something really takes hold and changes the game.

  • ttmrichter@lemmy.world
    1 year ago

    Current AI isn’t in any meaningful sense “intelligent”. It’s all smoke, mirrors, horses, and ponies put on in a fancy performance designed to transfer money from the public purse (directly or indirectly) into the pockets of sociopathic billionaires.

  • deafboy@lemmy.world
    1 year ago

    Operating an AI takes huge computing power.

    For now. There are already plans to accelerate some specific machine learning workloads on next generations of low powered mobile chips. Think ChatGPT on a smartphone.

    For other use-cases, you don’t even need to wait. Google Coral can do object recognition on your security camera feed using a minuscule amount of power compared to a GPU.
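    A ballpark of that power gap (rough spec-sheet figures, not measurements - the Coral Edge TPU is rated at around 2 W, while a desktop GPU draws hundreds of watts):

```python
# Order-of-magnitude energy comparison for an always-on camera feed.
# Figures are rough spec-sheet ballparks, not measurements.

edge_tpu_watts = 2     # Coral Edge TPU, approximate draw under load
gpu_watts = 250        # typical desktop GPU, approximate draw

hours = 24
print(edge_tpu_watts * hours / 1000, "kWh/day")  # 0.048 kWh/day
print(gpu_watts * hours / 1000, "kWh/day")       # 6.0 kWh/day
```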

    • nodsocket@lemmy.world
      1 year ago

      This is definitely true, but keep in mind that there is a limit to how far you can optimize a chip. Eventually we could have everything running on ASICs, but electronics do have a maximum speed that we may not be far from reaching.