• 2 Posts
  • 37 Comments
Joined 1 year ago
Cake day: June 16th, 2023






  • Most of the time I just copy/paste the terminal output and say ‘it didn’t work’ and it’ll come back with ‘I’m sorry, I meant [new command]’.

    It isn’t something that I’d trust to run unattended terminal commands (yet), but it is very good when you’re just like ‘Hey, I want to try to install Pi-hole today, how do I install and configure it’, or ‘Here’s my iptables entry, why can’t I connect to this service’ … ‘Ok, give me the commands to update the entry to do whatever it was you just said’.





  • If we could ensure 100% compliance with a meta-blockade then I’d be for it.

    However, that isn’t going to happen, and any instances that do federate with Meta will become the part of the Fediverse that is visible to billions of people. Those instances will become the dominant instances for people who want to get away from Meta but still access Fediverse services. Lemmy, as it stands now, is only a few million people at most. We simply do not have the weight to throw around on this issue.

    It is inevitable that commercial interests join the Fediverse and the conversation should be around how we deal with that inevitability rather than attempting to use de-federation as a tool to ‘fix’ every issue.




  • ZFS array using striping and parity. Daily snapshots get backed up to another machine on the network. 2 external hard drives with mirrors of the backup rotate between my home and office weekly-ish.

    I can lose 2 hard drives from the array at the same time without suffering data loss, and accidentally deleted files can be restored from a snapshot. If my house is hit by a meteor, I lose at most 3-4 days of snapshots.
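
    The setup above roughly maps to commands like these (a hedged sketch: the pool name “tank”, the dataset names, and “backup-host” are placeholders I’ve invented, and surviving two simultaneous drive failures implies RAID-Z2):

    ```shell
    # RAID-Z2 pool: striping with double parity; survives 2 failed drives.
    # Device and pool names below are illustrative, not the poster's setup.
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

    # Daily snapshot, e.g. from cron
    zfs snapshot tank/data@$(date +%Y-%m-%d)

    # Incremental send of the newest snapshot to another machine on the network
    zfs send -i tank/data@2023-06-15 tank/data@2023-06-16 | \
      ssh backup-host zfs recv backup/data

    # Restore an accidentally deleted file from a snapshot
    cp /tank/data/.zfs/snapshot/2023-06-16/important.txt /tank/data/
    ```

    The external drives would then be periodic mirrors of the backup machine’s pool, rotated off-site.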



  • It seems inevitable that some kind of ID system will be needed online. Maybe not a real ID linked to your person, but some sort of hard-to-obtain credential. That way, getting one banned is inconvenient, and posts without an ID signature can be filtered out easily.

    It used to be that spam was fairly easy for a human to detect: it may have been hard to detect automatically, but a person could generally tell what was a bot and what wasn’t. Large language models (like GPT-4) can make spam accounts appear to hold real conversations, just like a person.

    The large-scale use of such systems provides the ability to influence people on a mass scale. How do you know you’re talking to people and not GPT-4 instances arguing for a specific interest? The only real way to solve this is some sort of system where posting has a cost associated with it, similar to how cryptocurrencies use proof of work to ensure that the transaction network isn’t spammed.

    Having to perform computationally heavy cryptography with a key registered to your account before posting would massively increase the cost of such spam operations. Imagine if your PC had to solve a problem that took 5+ seconds before your post went through. That wouldn’t be terribly inconvenient for you, but for someone trying to post from 1000 different accounts it would be a crippling limitation that would be expensive to overcome.

    That would fight spam effectively, though it wouldn’t do much to filter content.
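
    The proof-of-work idea above can be sketched hashcash-style in a few lines of shell (illustrative only: the account-key string and the two-leading-zeros difficulty are made-up parameters, far weaker than the 5-second target described):

    ```shell
    # Hashcash-style proof of work: find a nonce so that the hash of
    # (post + nonce) starts with "00". The account key and difficulty
    # here are toy values for illustration.
    post="account-key-1234:hello world"
    nonce=0
    while true; do
      digest=$(printf '%s:%s' "$post" "$nonce" | sha256sum | cut -d' ' -f1)
      case $digest in
        00*) break ;;   # difficulty met
      esac
      nonce=$((nonce + 1))
    done
    echo "nonce=$nonce digest=$digest"
    ```

    The key asymmetry: the server verifies the stamp with a single hash, while the poster must grind through many hashes per post, and the cost scales linearly with the number of accounts being spammed from.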