Instagram's AI now automatically hides offensive comments based on comments users have already reported. The feature, which coincides with National Bullying Prevention Month, is currently in beta testing.
- The feature is similar to Twitter’s “Hide Replies” but works automatically. Instagram's AI hides comments that resemble those previously reported, as well as comments that violate its community guidelines.
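As a rough illustration of the idea, not Instagram's actual system, report-based hiding can be sketched as comparing each new comment against a store of previously reported ones; the reported examples and similarity threshold here are hypothetical:

```python
from difflib import SequenceMatcher

# Hypothetical store of comments users previously reported
REPORTED = ["you are a loser", "nobody likes you"]

def is_similar(a: str, b: str, threshold: float = 0.8) -> bool:
    # Case-insensitive fuzzy match between two comment strings
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def should_hide(comment: str) -> bool:
    # Hide the comment if it closely resembles any previously reported one
    return any(is_similar(comment, reported) for reported in REPORTED)
```

For example, `should_hide("You are a LOSER")` matches a reported comment and returns `True`, while an innocuous comment like "Great photo!" is left visible. A production system would use a learned text classifier rather than string similarity.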
- Users can still view the hidden comments under the label “View Hidden Comments.”
- Instagram is also expanding its AI-powered nudge warnings to notify repeat offenders. The feature alerts people that their comment or post may be offensive before it's posted, giving them a chance to reflect.
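The nudge flow described above can be sketched as a toy pre-post check; the word list stands in for Instagram's actual (undisclosed) model, and all names here are hypothetical:

```python
# Stand-in for a learned offensiveness model
OFFENSIVE_TERMS = {"idiot", "loser"}

def offense_score(text: str) -> float:
    # Fraction of words that appear in the offensive-term list
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in OFFENSIVE_TERMS for w in words) / len(words)

def submit_comment(text: str, confirm) -> bool:
    """Post immediately if the text looks clean; otherwise nudge the
    author and only post if they confirm after the warning."""
    if offense_score(text) > 0:
        return confirm("Your comment may be offensive. Post anyway?")
    return True
```

Here `confirm` is a callback representing the user's response to the warning dialog, so `submit_comment("you idiot", lambda msg: False)` models an author who reconsiders after the nudge and does not post.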
- The company says it's seen a "meaningful" drop in negative interactions since introducing the nudge feature in 2019.
- Instagram's new features still rely on AI to accurately identify harassment, which can be problematic, particularly "for borderline messages with a limited context," Cornell University professor Natalie Bazarova said.