• 0 Posts
  • 28 Comments
Joined 10 months ago
Cake day: December 14th, 2023





  • Frankly, AI might have its uses, and I’ve found it useful here and there, but perhaps the cons outweigh the pros…

    If I used a knife to write, I shouldn’t be surprised that I don’t get good results, and the same goes the other way around.

    Present AIs/LLMs, like any other tool, have their places to shine and places where you shouldn’t even think about using them, but as a new tool we are still figuring everything out while new versions keep appearing at the same time. So we should be careful about how we use it, but for the things it works great at it should be used like any other tool, and hopefully it gets used more and more for the people rather than for capital.

    edit to better reply to you: The biggest con of AIs right now is capitalism.

    How does a country like Vietnam or China tackle AI?

    I don’t know how they are using it, but I’ll give a few ideas about how it could be used to help improve countries, like:

    A simple use is to eliminate as many human hours of work as possible and distribute the freed time to the whole population to enjoy life. In such a case it would be much easier to ask everyone to work hard to make new data to train the AIs that will replace them, basically by writing everything down as if explaining it to a new worker, or giving the AI your emails, or even recording your talks with coworkers, to make more good-quality data that will reduce everyone’s workload.

    Also, if a country were to ask each person to write a text about their lives, their strengths, their problems, and an AI were trained on it, the government, and the people, could ask it about what should be fixed in the country, how things could be fixed, and so on. Even if LLMs were simple “next token predictors”, asking one to explain what people see as problems and how people think they can be solved could help governance in a Communist country massively.

    I’m sure there are even better things to do though.









  • However, history is poised to repeat itself with a similar outcome of chaos and disillusionment. The misguided belief that language models can replace the human workforce will yield hilarious yet unfortunate results.

    Even if AI can’t get much better than what has already been demonstrated, which I don’t think is the case but let’s consider it, there are already quite a few jobs that can be at least partially automated, and that alone could change the world by a lot, even if only by pushing permanent unemployment above 10-20% in every country, or by the bourgeoisie accepting to reduce worked hours to only a few so the system doesn’t collapse.


  • New robots are also using LLMs both for understanding their environment with cameras, rather than with complicated sensors that might not understand the world as we do, and for controlling movement, basically by taking in the data from the robot and what other LLMs understand from the environment and predicting what inputs are needed to move correctly or do any task.
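
    To make that concrete, here is a minimal sketch of such a loop, assuming a made-up robot and model API (capture_frame, read_joint_state, describe, plan and send_joint_commands are hypothetical placeholders, not any real library): one LLM turns the camera image into a scene description, another turns that description plus the robot’s state into movement commands.

    ```python
    # Hypothetical sketch of an LLM-driven perception/control loop.
    # None of these helpers are a real robot or model API; they only
    # show the flow: camera -> scene description -> motion commands.

    def control_step(robot, vision_llm, control_llm, task):
        frame = robot.capture_frame()        # camera image (assumed helper)
        state = robot.read_joint_state()     # current joint angles (assumed helper)

        # One LLM interprets the environment from the camera image.
        scene = vision_llm.describe(image=frame,
                                    prompt="Describe what is in front of the robot.")

        # Another LLM predicts the inputs needed to act on that scene.
        commands = control_llm.plan(scene=scene,
                                    joint_state=state,
                                    task=task)  # e.g. "pick up the red cup"

        robot.send_joint_commands(commands)  # apply the predicted movement
    ```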

    As the LLMs get better they can also come up with better strategies, something that is already being used to some extent to have them create, test and fix code based on outputs and error messages, and this should soon allow fully autonomous robots that can think by themselves and interact with the world, leading to many advancements, like full automation of work and scientific discoveries.
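
    The create/test/fix part is roughly a loop like the one below. This is only a sketch: llm_complete stands in for whatever model API is being used, and the prompts are made up.

    ```python
    import subprocess
    import tempfile

    # Rough sketch of the "write code, run it, fix it from the error" loop.

    def write_and_fix(llm_complete, task, max_attempts=5):
        prompt = f"Write a Python script that does the following:\n{task}"
        for _ in range(max_attempts):
            code = llm_complete(prompt)

            # Save the generated script and run it, capturing output and errors.
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(code)
                path = f.name
            result = subprocess.run(["python", path], capture_output=True,
                                    text=True, timeout=60)

            if result.returncode == 0:
                return code, result.stdout   # ran cleanly; good enough for the sketch

            # Feed the error message back so the model can fix its own code.
            prompt = (f"This script failed:\n{code}\n\n"
                      f"Error output:\n{result.stderr}\n\n"
                      "Fix the script and return only the corrected code.")
        raise RuntimeError("No working code within the attempt limit")
    ```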





  • LLMs would probably be best used in systems, like multiple LLMs and normal programs each with their strengths covering the others’ weaknesses. And perhaps having programs, or even other LLMs, that shut a given one off if anything goes wrong.

    Something weird happened to a robot?

    The brain, or part of it (as there can be multiple LLMs together, each trained to do only one or a few things), or a more powerful LLM overseeing many robots, identifies that and stops it, waiting for a better LLM offsite or a human to say something.

    I mean, if the thing happening is so weird that there is no data about it available, then perhaps not even a human would be able to deal with it well, meaning that an LLM doesn’t need to be perfect to be very useful.
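
    As a toy illustration of that overseer idea (every name here is a placeholder, not a real API):

    ```python
    # Toy sketch of an overseer LLM watching worker robots and halting
    # anything that looks wrong until a human or better model reviews it.

    def oversee(watchdog_llm, robots, report_to_human):
        for robot in robots:
            log = robot.recent_activity_log()  # assumed: text summary of recent actions

            verdict = watchdog_llm.complete(
                "You supervise warehouse robots. Reply OK if this activity looks "
                "normal, or ANOMALY plus a short reason if anything looks wrong:\n"
                + log
            )

            if verdict.strip().upper().startswith("ANOMALY"):
                robot.halt()                        # stop it rather than let it continue
                report_to_human(robot.id, verdict)  # wait for review before resuming
    ```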

    Even if the robots had problems and bugged out, causing a lot of damage, we could still take a lot of people away from work and let the robots do it, as long as the robots can work and produce enough to replenish their own losses by themselves. And with time any problem should be fixable anyway, so we might as well try.



  • Actually the biggest problem with (humanoid) robots is and always has been power. Batteries only last so long and take up space and add weight.

    Kinda. If the robots are good enough that they can do all sorts of tasks with a humanoid body, it wouldn’t be hard to make them swap their own batteries, which isn’t very different from humans and their need to eat and such, or they can just plug themselves in to charge when needed.

    Indeed, that might not be very convenient or the most efficient, but it could be done by robots alone without human input. And the tech needed for these robots means that industrial robots, many of which can stay plugged in all the time, could also produce much more, much more cheaply, making it possible to simply have lots of batteries at low cost. Not the best solution, but it is a solution.

    It’s not the CPU.

    It is the CPU to some extent, but lately software has been the biggest part of it, and that is the part being improved a lot right now.

    We’ve always been able to make robots that can perform certain tasks and with enough effort you can make robots that can perform many tasks.

    The robots in industry so far were mostly just complex machines doing a simple task. They couldn’t try to improve themselves or do anything beyond their programming. For example, a machine taking a part off an assembly line and putting it somewhere else can only really take something if it is where it expects it to be (to the mm) when it expects it to be there. With newer, but not so new, tech they might be able to recognize a QR-like symbol on it and reorient themselves, but they can’t do anything other than what they have been programmed to do.

    But the newer robots with newer AIs will soon be able to do anything a human can. For example, if you ask one to clean the house and the neighborhood, it won’t just go around the floor vacuuming while perhaps missing some spots; it could see that becoming a doctor and buying more robots to clean everything is a solution, and it will think about it. That’s the level of difference in the tasks they can take on.