So you can still ban the voting agent. Worst case, you have to wait for a single rule-breaking comment to ban the user. That seems like a small price to pay for a massive privacy enhancement.
I don’t think you do. Admins can just ban the voting agent for bad voting behavior and the user for bad posting behavior. All of this conflict is imagined.
This is literally already the Lemmy trust model. I can easily spin up my own instance and send out fake ActivityPub actions to brigade. The method of detecting and resolving this is no different.
It will be extremely obvious if you see 300 user agents voting but the instance only has 100 active users.
But if the only bad behavior is voting and you can ban that agent, then you’ve solved the core issue. The utility is to remove the bad behavior, no?
Is that really harassment considering Lemmy votes have no real consequences besides feels?
You don’t even need to message an admin. You can just ban the agent doing the voting.
Ok, then you can keep your votes public, and others who don’t want that have an option as well. Everyone is happy. There is no conflict here.
In addition to that, I guarantee you that Meta and the like are already running data-mining instances on here. Being publicly tied to votes is just more telemetry for the machine. I don’t quite understand why people seem to think that is no big deal.
Who cares? Generating an infinite number of tokenized identities to facilitate ban evasion will just result in an instance getting defederated. This introduces no real risk as long as the instance is generally abiding by the rules.
Most of us here are fairly anonymous anyway. I don’t think being able to add an additional layer of privacy to our activity is really a big deal.
Awesome! This is the exact stopgap implementation I was arguing for, and I’m surprised how many people kept insisting it was impossible. You should try and get this integrated into mainline Lemmy asap. Definitely joining piefed in the meantime though.
But she is already perfect
Her AMRAAM could easily be hidden by that outfit
This is very effective recruiting. Where do I sign up?
The rogue instance would still need fake users though. It would be very easy to see if you are getting votes from 300 unique tokens, but the instance only has 100 users.
Also the method I am proposing would simply be transparent in terms of user management, so if you are running core Lemmy, the only way to generate voting tokens would be to generate users.
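To make that detection argument concrete, here is a minimal sketch of the kind of check a moderation tool could run. The Vote shape, field names, and the active-user counts are invented for illustration and are not Lemmy's actual schema or API.

```python
# Hypothetical check: flag a remote instance whose distinct voting tokens
# outnumber its reported active users (e.g. 300 tokens vs. 100 users).
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Vote:
    instance: str   # originating instance, e.g. "rogue.example"
    token: str      # pseudonymous voting token, e.g. "socsa-token"
    post_id: int


def suspicious_instances(votes: list[Vote], active_users: dict[str, int]) -> list[str]:
    """Return instances whose unique voting tokens exceed their active user count."""
    tokens_per_instance: dict[str, set[str]] = defaultdict(set)
    for v in votes:
        tokens_per_instance[v.instance].add(v.token)
    return [
        inst
        for inst, tokens in tokens_per_instance.items()
        if len(tokens) > active_users.get(inst, 0)
    ]


if __name__ == "__main__":
    votes = [Vote("rogue.example", f"token-{i}", 42) for i in range(300)]
    print(suspicious_instances(votes, {"rogue.example": 100}))  # ['rogue.example']
```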
Maybe. I was kind of hoping someone else would run with this flag because I don’t have a spare public GitHub account I really want to throw into this debate. I’m more likely to just implement it and then toss a PR grenade into the discussion in a few months if there’s no other progress.
Worst case scenario, there is an entirely separate, tokenized identity for votes which is authenticated the exact same way, but which is only tied to a real identity at the home instance. It would be as if the vote activity is coming from user:socsa-token. It’s effectively a separate user with a separate key. A well-behaving instance would only ever publish votes from socsa-token and comments from Socsa. To the rest of the fediverse, socsa-token is simply a user which never comments, and Socsa is a user which never votes.
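As a rough illustration of that split, here is a sketch of how a home instance might attribute outgoing activities. The actor URLs, the publish routing, and the decision to key on Like/Dislike are all assumptions made for the example, not Lemmy's actual federation code.

```python
# Minimal sketch: votes federate under the token actor, everything else
# under the normal actor, so remote instances see two unrelated users.
BASE = "https://home.instance"


def actor_for(activity_type: str, username: str) -> str:
    """Pick which actor an outgoing activity is attributed to."""
    if activity_type in ("Like", "Dislike"):       # votes only
        return f"{BASE}/u/{username}-token"        # e.g. user:socsa-token
    return f"{BASE}/u/{username}"                  # comments, posts, etc.


def build_activity(activity_type: str, username: str, object_url: str) -> dict:
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": activity_type,
        "actor": actor_for(activity_type, username),
        "object": object_url,
    }


# Same person, two apparently unrelated actors on the receiving side.
print(build_activity("Like", "socsa", "https://other.instance/post/1"))
print(build_activity("Create", "socsa", "https://other.instance/post/1"))
```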
I am not sure key-based ID is actually core to AP anyway. The last time I read the spec, it kind of hand-waved the identity management implementation.
Yes, that is why I am arguing in favor of an additional layer of pseudonymous voting.
As far as I understand it, all activity originates from the home instance, where users are interacting with federated copies of posts. The unique user token from a well-behaving instance follows the user across the fediverse, allowing bulk moderation of voting patterns using that token. The only difference is that it is not explicitly tied to a given user string. That means moderation for vote manipulation gets tracked via a user’s vote token, and moderation for trolling/spam/rule violations happens via their display name. It may be possible that a user is banned from voting but not commenting, and vice versa. It is a fairly minor change in moderation workflow, which brings a significant enhancement to user privacy.
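To show what that split moderation workflow could look like on the home-instance side, here is a hedged sketch. The class names, fields, and ban methods are made up for the example; only the home instance holds the token-to-user mapping, so remote mods act on whichever identity misbehaved.

```python
# Sketch: the home instance maps vote tokens back to users, so vote bans and
# comment bans can land independently on the same person.
from dataclasses import dataclass


@dataclass
class LocalUser:
    name: str                    # public display name, e.g. "Socsa"
    vote_token: str              # pseudonymous voting identity, e.g. "socsa-token"
    vote_banned: bool = False
    comment_banned: bool = False


class HomeInstance:
    """Only the home instance can resolve a vote token to its user."""

    def __init__(self, users):
        self.by_token = {u.vote_token: u for u in users}
        self.by_name = {u.name: u for u in users}

    def ban_vote_token(self, token: str) -> None:
        # Triggered by reports of vote-manipulation patterns on the token.
        self.by_token[token].vote_banned = True

    def ban_display_name(self, name: str) -> None:
        # Triggered by reports of trolling/spam/rule violations on the name.
        self.by_name[name].comment_banned = True


home = HomeInstance([LocalUser("Socsa", "socsa-token")])
home.ban_vote_token("socsa-token")
user = home.by_name["Socsa"]
print(user.vote_banned, user.comment_banned)  # True False: banned from voting, not commenting
```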
To prevent them from engaging in bad behavior.