• 2 Posts
  • 191 Comments
Joined 1 year ago
Cake day: August 22nd, 2023

  • I’ll tell you the strategy that worked for me last time (quit for ~2 years), and that I’m using this time.

    • Switch to a vape. Lung capacity increases immediately, and you get rid of the bad smell. If you haven’t vaped before, give yourself some time to get used to the different habit (no cigarette-packing ritual anymore, etc.).
    • Buy a zero-nicotine vape or two, or find a local place where you can get them easily. This is your “inside” vape.
    • Buy a refillable vape and get nicotine liquid roughly equivalent to the full-nic vape you switched to from cigarettes. This is your “outside” vape.
    • Start restricting the locations where you use the full-nic vape. I work from home, so I don’t vape full-nic at my desk; I walk outside to do it. You want to break the absent-minded vaping-while-working or vaping-while-watching-TV habit.
    • Step your nicotine intake down over as long a period as you like, but never step it back up. The first time I quit, I did it over about a year. That’s a little extreme; you could probably do it over a few months.
    • Once you’re on 0 nic all the time, either stay with that, or gradually wean yourself off the habit as well. This is much easier without the chemical addiction.

    Good luck.

  • I attended a federal contracting conference a few months ago, and they had one of these things (or a variant) walking around the lobby.

    From talking to the guy who was babysitting it, they can operate autonomously in units or be controlled in a general way (think higher level unit deployment and firing policies rather than individual remote control) given a satellite connection. In a panel at the same conference, they were discussing AI safety, and I asked:

    Given that AI seems to be developing from less complex tasks like chess (which is still complicated, obviously, but a constrained problem) to more complex and ill-defined tasks like image generation, it seems inevitable that we will develop AI capable of providing strategic or tactical plans, if we haven’t already. If two otherwise-equally-matched military units are fighting, it seems reasonable to believe that the one using an AI to make decisions within seconds would win over the one with human leadership, simply because it would react more quickly to changing battlefield conditions. This would place an enormous incentive on the US military to adopt AI-assisted strategic control, which would likely lead to units of autonomous weapons that are also controlled by an autonomous system. Do any of you have any concerns about this, and if so, do you have any ideas about how we can mitigate the problem?

    (Paraphrasing, obviously, but this is close)

    The panel members looked at each other, looked at me, smiled, shrugged, and didn’t say anything. The moderator asked them explicitly if they would like to respond, and they all declined.

    I think we’re at the point where an AI could be used to create strategies, and I would be very surprised if no one were trying to do this. We already have autonomous weapons, and it’s only a matter of time before someone puts the two together. Yes, such systems will generally act reasonably, because they’ll be trained on human tactics across a variety of scenarios, but that will be cold comfort to dead civilians who happened to get in the way of a hallucinating strategic model.

    EDIT: I know I’m not actually addressing anything you said, but you seem to have thought about this a bit, and I was curious about what you thought of this scenario.