Hello world,
as many of you may already be aware, there is an ongoing spam attack by a person claiming to be Nicole.
It is very likely that these images are part of a larger-scale harassment campaign against the person depicted in them.
Although the spammer claims to be the person in the picture, we strongly believe that this is not the case and that they’re only trying to frame them.
Starting immediately, we will remove any images depicting “Nicole” and information that may lead to identifying the real person depicted in those images to prevent any possible harassment.
This includes older posts and comments once identified.
We also expect moderators to take action if such content is reported.
We do not intend to punish people who post this once without being aware of the context, but we may take additional action against those who continue to post it, as we consider that to be supporting the harassment campaign.
Discussion that does not include the images themselves, or references that may lead to identifying the real person behind them, will continue to be allowed.
If you receive spam PMs, please continue reporting them, and we’ll keep working on our spam detection to identify them early, before they reach many users.
On the anti-spam front, I think there are a variety of potential partial solutions. No complete fixes, but a few:
Rate-limiting the comment frequency on new accounts. IIRC, Reddit used this tactic. It does create some issues for legitimate use of throwaway accounts in anonymous posts, but I don’t think there’s any legitimate reason for a new account to blast hundreds of messages an hour. This might already be present, but if not, it’d be a good start. It can be defeated by generating a new account for each new message or batch of messages (a rough sketch of such a limiter follows the next item).
Rate-limiting new account creation from a given IP address, if not already present. An attacker could defeat this by using a commercial VPN, and if the limit is set too low, it could create issues for legitimate users of those same VPNs, since many of them share a single exit IP.
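Both of these boil down to a sliding-window counter keyed on the account or the IP address. A minimal sketch in Python; the class name, keys, and limits are illustrative assumptions, not Lemmy’s actual implementation:

```python
import time
from collections import defaultdict, deque

# Minimal sliding-window rate limiter (illustrative sketch).
class SlidingWindowLimiter:
    def __init__(self, max_events: int, window_seconds: float):
        self.max_events = max_events
        self.window = window_seconds
        self.events = defaultdict(deque)  # key -> timestamps of recent events

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        q = self.events[key]
        while q and now - q[0] > self.window:
            q.popleft()  # drop events that fell out of the window
        if len(q) >= self.max_events:
            return False  # over the limit: reject (or queue for review)
        q.append(now)
        return True

# E.g. at most 10 comments per hour from accounts younger than a day,
# and at most 3 signups per day from a single IP address.
new_account_comments = SlidingWindowLimiter(max_events=10, window_seconds=3600)
signups_per_ip = SlidingWindowLimiter(max_events=3, window_seconds=86400)

if not new_account_comments.allow("user:1234"):
    print("comment rejected: new-account rate limit hit")
if not signups_per_ip.allow("ip:203.0.113.7"):
    print("signup rejected: per-IP rate limit hit")
```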
Hashing of messages to red-flag identical messages being posted en masse. As best I could tell, the spammer here was posting many identical messages. This can be defeated by a spammer having software slightly modify each message.
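For exact duplicates, a per-instance counter keyed on a hash of the normalised message body is enough. A minimal sketch; the threshold and the normalisation are chosen purely for illustration:

```python
import hashlib
from collections import Counter

seen = Counter()  # digest -> how many times this exact body has been posted

def is_mass_duplicate(body: str, threshold: int = 20) -> bool:
    # Light normalisation so trivial whitespace/case changes still collide.
    normalised = " ".join(body.lower().split())
    digest = hashlib.sha256(normalised.encode()).hexdigest()
    seen[digest] += 1
    return seen[digest] >= threshold
```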
Fuzzy-hashing of messages to red-flag almost-identical messages being posted en masse. This can be defeated via text-generation methods carefully tailored to the fuzzy-hashing mechanism, modifying messages so that each copy fuzzy-hashes to a different value.
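One common fuzzy-hashing approach is SimHash, where near-identical texts produce fingerprints that differ in only a few bits. A self-contained sketch; the whitespace tokenisation and 64-bit fingerprint size are arbitrary choices for illustration:

```python
import hashlib

def simhash(text: str, bits: int = 64) -> int:
    # Weighted bit-voting over per-token hashes (standard SimHash construction).
    v = [0] * bits
    for token in text.lower().split():
        h = int.from_bytes(hashlib.sha256(token.encode()).digest()[:8], "big")
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Two near-identical spam messages should land within a few bits of each other.
a = simhash("buy cheap pills now at example dot com friend")
b = simhash("buy cheap pills today at example dot com friend")
print(hamming_distance(a, b))  # small distance -> flag as a near-duplicate
```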
A mechanism to share blacklists of IPs or message hashes across instances and trigger removal of matching messages there, preferably with each list tied to a specific identifier or account. This lets other instances leverage the antispam work done on one instance: if I want to trust a given antispam admin or bot on lemmy.world, I can. An instance admin could review and override such removals. It does create potential for malicious use and for false positives propagating across instances, but I think it’s necessary to avoid each instance fighting its own lonely antispam battle; otherwise, new and personal instances risk being buried by a deluge of direct-message spam. The same mechanism, if exposed to users and not just instance admins, would also permit subscribable content filters for people who don’t want to see content of a given sort (e.g. profanity, or pornographic content of a particular kind, not just spam), which is a separate issue.
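What such a shared entry might look like is an open question. As a purely hypothetical sketch, each entry could carry an issuer identity and a signature, and a consuming instance would only act on entries from issuers on its local trust list; the field names, the HMAC scheme, and the `antispam-bot@lemmy.world` issuer below are all made up for illustration:

```python
import hashlib
import hmac
import json

# Hypothetical trust list: issuer identifier -> key used to verify its entries.
TRUSTED_ISSUERS = {"antispam-bot@lemmy.world": b"shared-secret-for-this-issuer"}

def verify_entry(raw: bytes, signature: str, issuer: str) -> bool:
    key = TRUSTED_ISSUERS.get(issuer)
    if key is None:
        return False  # unknown or untrusted issuer: ignore the entry
    expected = hmac.new(key, raw, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def handle_entry(raw: bytes, signature: str) -> None:
    entry = json.loads(raw)
    if not verify_entry(raw, signature, entry["issuer"]):
        return
    if entry["type"] == "message_sha256":
        # Queue matching local messages for removal, pending admin review.
        print("flag local messages with body hash", entry["value"])
    elif entry["type"] == "ip":
        print("flag signups and activity from", entry["value"])
```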
Fortunately, from what I see as a user, there isn’t much spam here yet, so this isn’t a serious problem so far. Maybe it never will be, if the userbase never grows much. But if the userbase gets considerably bigger, increasingly problematic spam will inevitably follow.