I’ve heard from someone who works in their security department that they go through photos to check for pedophilia-related content.
I’ve tried to confirm this online, but I don’t see any articles about it.
Has anyone heard something similar? Perhaps this only applies to public groups.
Typically it’s an automated scan against hashes of known CSAM. If someone in the chat reports an image, it may go to a human for review. It would be unusual for a human to look through your images or messages unprompted.
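To illustrate the automated part: conceptually it’s just a fingerprint lookup against a database of known material. A minimal sketch in Python, assuming a hypothetical hash database — note that real systems (e.g. Microsoft’s PhotoDNA) use perceptual hashes that survive resizing and re-encoding, not a plain cryptographic hash like the one used here for simplicity:

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Simplified stand-in: an exact-match hash of the raw bytes.
    # Real scanners use perceptual hashing, which this is NOT.
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical database of fingerprints of known flagged images.
known_hashes = {fingerprint(b"known-flagged-image-bytes")}

def scan(image_bytes: bytes) -> bool:
    # True if the image matches an entry in the known-hash database;
    # only matches are ever escalated, not arbitrary photos.
    return fingerprint(image_bytes) in known_hashes

print(scan(b"known-flagged-image-bytes"))  # matches the database -> True
print(scan(b"ordinary-photo-bytes"))       # no match -> False
```

The key point the sketch shows: nothing in this pipeline requires a human to look at unmatched images; only database hits (or user reports) would be escalated.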