Content Moderation Workflow

1. How We Review Content

  1. Automated Scanning

  All content uploaded to our platforms is scanned automatically using AI tools and filters. We check for the following (see the sketch after this list):

  • Sexual content

  • Child safety violations

  • Violence and graphic content

  • Hate speech

  • Terrorism or extremism

  • Spam or scams

  • Impersonation or fraud

  • Politically or religiously provocative content

  • Harmful links (malware, phishing)
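
  A minimal sketch of how these scan categories and per-item results might be represented. The names (ViolationCategory, ScanResult) and the scoring scheme are illustrative assumptions, not the platform's actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class ViolationCategory(Enum):
    """Illustrative labels mirroring the scan categories listed above."""
    SEXUAL_CONTENT = auto()
    CHILD_SAFETY = auto()
    VIOLENCE_GRAPHIC = auto()
    HATE_SPEECH = auto()
    TERRORISM_EXTREMISM = auto()
    SPAM_SCAM = auto()
    IMPERSONATION_FRAUD = auto()
    PROVOCATIVE_POLITICAL_RELIGIOUS = auto()
    HARMFUL_LINK = auto()

@dataclass
class ScanResult:
    """Output of the automated scan for one piece of content (hypothetical shape)."""
    content_id: str
    # Per-category confidence scores in [0.0, 1.0] produced by the AI tools and filters.
    scores: dict[ViolationCategory, float] = field(default_factory=dict)

    def flagged(self, threshold: float = 0.5) -> list[ViolationCategory]:
        """Return the categories whose score meets or exceeds the threshold."""
        return [c for c, s in self.scores.items() if s >= threshold]
```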

  2. Initial Classification

  Scanned content is placed into one of three tiers (see the sketch after this list):

  • Safe Content: Published immediately

  • Borderline Content: Visibility may be limited; the content is sent for human review

  • High-Risk Content: Temporarily hidden and sent for urgent review
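
  A minimal sketch of this three-tier routing, assuming the scan produces per-category scores as in the previous sketch. The cutoff values and the function name (classify) are hypothetical.

```python
from enum import Enum

class Tier(Enum):
    SAFE = "safe"              # published immediately
    BORDERLINE = "borderline"  # visibility may be limited; queued for human review
    HIGH_RISK = "high_risk"    # temporarily hidden; queued for urgent review

def classify(max_score: float,
             high_risk_cutoff: float = 0.8,
             borderline_cutoff: float = 0.5) -> Tier:
    """Map the highest per-category score from a scan to a review tier.

    The cutoff values are placeholders; real thresholds would be tuned per category.
    """
    if max_score >= high_risk_cutoff:
        return Tier.HIGH_RISK
    if max_score >= borderline_cutoff:
        return Tier.BORDERLINE
    return Tier.SAFE
```

  For example, classify(max(result.scores.values(), default=0.0)) would route a ScanResult from the earlier sketch.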

  3. Human Moderation Review

  Moderators evaluate the following (see the sketch after this list):

  • Context and intent

  • Severity of the content

  • Local laws (Tanzania: TCRA, Cybercrimes Act, EPOCA)

  • Community Guidelines
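
  A minimal sketch of the information a moderator's review might capture. The ReviewCase structure and its field names are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ReviewCase:
    """One item in the human-review queue (illustrative fields only)."""
    content_id: str
    tier: str                       # "borderline" or "high_risk" from the previous step
    context_notes: str              # moderator's assessment of context and intent
    severity: int                   # e.g. 1 (minor) to 5 (severe)
    applicable_laws: list[str]      # e.g. ["TCRA", "Cybercrimes Act", "EPOCA"]
    guideline_sections: list[str]   # Community Guidelines sections engaged
```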

  4. Decision Outcomes

  After review, one of the following is applied (see the sketch after this list):

  • Approve: Content is published

  • Restrict: Apply age restrictions, limited distribution, or warning labels

  • Remove: Content violating laws or guidelines is removed; user is notified

  • Escalate: Serious cases (child safety, terrorism, crimes) are reported to law enforcement or internal legal team
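
  A minimal sketch mapping the four outcomes above to follow-up actions. The action names are placeholders, not references to real internal systems.

```python
from enum import Enum

class Outcome(Enum):
    APPROVE = "approve"      # content is published
    RESTRICT = "restrict"    # age restriction, limited distribution, or warning label
    REMOVE = "remove"        # content taken down; user is notified
    ESCALATE = "escalate"    # reported to law enforcement or the internal legal team

# Illustrative mapping from each outcome to its follow-up actions.
ACTIONS = {
    Outcome.APPROVE: ["publish"],
    Outcome.RESTRICT: ["apply_age_restriction", "limit_distribution", "apply_warning_label"],
    Outcome.REMOVE: ["take_down", "notify_user"],
    Outcome.ESCALATE: ["take_down", "notify_legal_team", "report_to_law_enforcement"],
}
```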

  5. User Notification & Appeal

  Users receive explanations for moderation actions and can submit appeals. Senior moderators review appeals and make the final decision.

2. Rules for Users

  1. Safety First: Do not post illegal, harmful, or threatening content

  2. Respect Others: No harassment, bullying, or hate speech

  3. Child Protection: Avoid content involving minors in any harmful or sexualized way

  4. Intellectual Property: Do not upload copyrighted or pirated content without permission

  5. Responsible Use: Avoid spam, scams, or misuse of platform features

Paroter Technologies Company reserves the right to remove content, restrict features, suspend accounts, or report to authorities when necessary.

Paroter Technologies Company