That video was the first thing I thought of :)
(https://youtu.be/YUpST_cQ1hM for anyone wondering)
Well I kept using it until Infinity died, which was only at the start of this month!
If I do decide to go back, it will be by compiling the Infinity APK with my own API key, but I’m not feeling much of an urge to bother at the moment.
It probably really depends on the project, though I’d probably try and start with the tests that are easiest/nicest to write and those which will be most useful. Look for complex logic that is also quite self-contained.
That will probably help to convince others of the value of tests if they aren’t onboard already.
Yeah, they’ve put them in a couple of places; it’s pretty bad. I had to work out how to create a custom uBlock Origin rule to block them.
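For anyone curious, this is the general shape of a cosmetic filter (the domain and CSS class here are made up; substitute the actual element you want hidden):

```
example.com##.sponsored-banner
```

You can paste rules like this into the “My filters” tab of the uBlock Origin dashboard.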
I think calling it just like a database of likely responses is too much of a simplification and downplays what it is capable of.
I also don’t really see why the way it works is relevant to it being “smart” or not. It depends how you define “smart”, but I don’t see any proof of the assumptions people seem to make about the limitations of what an LLM could be capable of (with a larger model, better dataset, better training, etc).
I’m definitely not saying I can tell what LLMs could be capable of, but I think saying “people think ChatGPT is smart but it actually isn’t because <simplification of what an LLM is>” is missing a vital step to make it a valid logical argument.
The argument relies on people’s incorrect intuition. Before seeing ChatGPT, I reckon if you’d told people how an LLM worked they wouldn’t have expected it to be able to do the things it can do (for example, if you ask it to write a rhyming poem about a niche subject, it won’t have a comparable poem in its dataset).
A better argument would be to pick something that LLMs can’t currently do that it should be able to do if it’s “smart”, and explain the inherent limitation of an LLM which prevents it from doing that. This isn’t something I’ve really seen, I guess because it’s not easy to do. The closest I’ve seen is an explanation of why LLMs are bad at e.g. maths (like adding large numbers), but I’ve still not seen anything to convince me that this is an inherent limitation of LLMs.
Thanks for the info on crossposting! I thought I’d seen someone mention a crossposting feature but couldn’t see any button to do it. I’m using the Jerboa app on Android, which I guess doesn’t have that button, but I see it on the website now as you say.
It’s also good to know that linking to the original URL is generally better and the rest can be handled by the UI - that does seem nicer.
Great TIL, I hate it.
It’s excellent how the page alludes to other horrible things to imagine, like “don’t pour hot oil into your ear” and “don’t pour it in if there’s a hole in your eardrum”.
I’d be happy if we’d just accepted “referer” as the correct spelling for everything, but instead we have the “Referrer-Policy” header, so now I need to check the correct spelling for anything involving referring…
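A quick sketch of the inconsistency (the header values are just illustrative):

```python
# The request header keeps HTTP's historic misspelling ("Referer",
# one "r" in the middle), while the newer response header that
# controls it is spelled correctly ("Referrer-Policy", two "r"s).
request_headers = {"Referer": "https://example.com/some-page"}
response_headers = {"Referrer-Policy": "strict-origin-when-cross-origin"}

# The two names differ by a single letter, which is exactly why the
# spelling has to be double-checked every time:
assert "Referrer" not in request_headers
assert "Referer-Policy" not in response_headers
```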
I do sort of like the idea that because we want to keep backwards compatibility on software we just change the language instead since that’s easier.
What sort of features 🤔
Of the 1,723 adults surveyed across the UK, 73% said technology companies should, by law, have to scan private messaging for child sexual abuse and disrupt it in end-to-end encrypted environments.
Found this interesting. I found the survey results here: https://docs.cdn.yougov.com/68pn2b6b57/NSPCC_OnlineSafetyBill_230427_W.pdf
The exact question I believe is being referred to was:
And do you think technology companies should or should not be required by law to use accredited technology to identify child sexual abuse in end-to-end encrypted messaging apps?
This seems like a really bad question, since it implies the coexistence of end-to-end encryption and big tech companies being able to read people’s messages, which doesn’t really make sense (or at least requires more clarification on what that would mean). As it stands, the question is basically “do you think child sexual abuse is bad”.
Haha, got a “network error” on my first attempt so clicked send again, I guess it did go through the first time after all :D
I believe if you hosted your own instance you would have access because of how federation works, so it might end up that most apps/UIs won’t expose it because it’s a little invasive, but it’s definitely still accessible without too much work.
My understanding (from limited knowledge) is that, also due to how federation works, even if your instance isn’t under too much load, you may notice issues with posts/comments from other instances if they’re struggling.
On that note, I think a post view limit would be good too. Maybe 10 posts a day for accounts that haven’t donated and 100 for those who have?
I’m worried this will make it harder for people to transition to mastodon as it’s more of a shock. It would help if someone made a mastodon frontend to mimic twitter (shitty UI, paywalled, occasionally insert low quality AI generated posts, ads, read limits) for a smoother transition /s
Yeah, there currently seem to be a bunch of rough edges with Lemmy. Another is that iirc editing a comment increases the comment count shown on a post.
Nothing that can’t be fixed though, and it’s encouraging how good Lemmy feels already compared to reddit (for me at least).
My experience using Docker on Windows has been pretty awful: it would randomly become completely unresponsive, sometimes taking 100% CPU in the process, and I couldn’t stop it without restarting my computer. I tried reinstalling and various other things, with no luck. All I found was a GitHub issue with hundreds of comments but no working workarounds/solutions.
When it does work it still manages to feel… fragile, although maybe that’s just because of my experience with it breaking.
Assuming x and y are totally ordered 🤮
Probably no time soon.