Trust and Safety Transparency Policy — InternBoard.com

Issued by: Master Trading Class Private Limited

Designation: Owner and Operator of InternBoard.com

Global Office: InternBoard.com, 12 Woodlands Square, #13-79 Woods Square, Tower One, Singapore – 737715

Corporate and Correspondence Address: Master Trading Class Private Limited, #G-3, South West Avenue, Street No. 2, Lalamma Gardens, Puppalaguda, Hyderabad, Telangana, India — 500089

Support and Enquiries: support@internboard.com

Last Reviewed and Updated: April 23, 2026

Effective Date: April 23, 2026


This Trust and Safety Transparency Policy (“Policy”) is a legally binding document issued by Master Trading Class Private Limited, the owner and operator of InternBoard.com. It constitutes the operational counterpart to the Company’s Community Guidelines and describes, with transparency and accountability, how the Company detects, investigates, moderates, and enforces violations of its platform standards. This Policy is intended to be read in conjunction with the Company’s Community Guidelines, Terms and Conditions, Privacy Policy, Cookie Policy, and Fraud and Scam Prevention Policy, all of which are published on the Platform and collectively constitute the Company’s governance framework. By accessing or using the Platform in any manner, all users — including Career Starters, Employers, and Visitors — acknowledge and accept the practices described in this Policy.


1. Introduction and Purpose

1.1 Why This Policy Exists

InternBoard.com (“InternBoard,” “the Platform,” “we,” “us,” or “our”) is a global digital opportunity marketplace connecting Career Starters — students, fresh graduates, and early-career professionals — with Employers offering internships, apprenticeships, full-time positions, part-time roles, remote opportunities, freelance gigs, and career development resources. The terms “Career Starter” and “Candidate” are used interchangeably throughout this Policy and carry identical legal meaning.

Trust is the foundation upon which InternBoard operates. Career Starters trust the Platform to connect them with genuine Opportunities from legitimate Employers. Employers trust the Platform to give them access to authentic, professionally presented Career Starters. Both groups trust that the Company will act with integrity, consistency, and accountability in enforcing the standards set out in the Community Guidelines.

This Policy exists to honour that trust by being transparent—not merely about what users must do, but about how the Company ensures those standards are maintained. It discloses the Company’s enforcement methodology, moderation systems, accountability mechanisms, and governance framework in a manner that is consistent with the expectations of global regulators; the Digital Trust and Safety Partnership (DTSP) Best Practices Framework (now formalized as ISO/IEC 25389); the Trust and Safety Professional Association (TSPA) standards; and the requirements of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (India), as amended in 2022, 2023, and 2025.

1.2 The Relationship Between This Policy and the Community Guidelines

The Company’s Community Guidelines define what all users must and must not do on the Platform. This Trust and Safety Transparency Policy defines how the Company ensures those standards are upheld. The two documents are complementary and should be read together. Where any user wishes to understand the rules of conduct, they should consult the Community Guidelines. Where any user wishes to understand how those rules are detected, investigated, and enforced—and how they may seek redress—they should consult this Policy.

1.3 Platform Role

InternBoard operates exclusively as a technology platform and opportunity discovery service. The Company does not act as an employer, staffing agency, recruitment consultancy, or hiring intermediary; does not participate in any hiring, shortlisting, or rejection decision; and does not enter into any employment or engagement contract on behalf of any party. All interactions and agreements between Career Starters and Employers occur directly between those parties. Notwithstanding its intermediary role, the Company maintains robust Trust and Safety operations in fulfillment of its legal obligations and its commitment to platform integrity.


2. Definitions

2.1 Career Starter

“Career Starter” refers to any individual who registers on the Platform to discover and apply for Opportunities, including but not limited to students, fresh graduates, and early-career professionals. The terms “Career Starter” and “Candidate” are used interchangeably throughout this Policy and carry identical legal meaning.

2.2 Community Guidelines

“Community Guidelines” means the Company’s Community Guidelines document, Version 2.0, published on the Platform at internboard.com/community-guidelines, which sets out the standards of conduct, ethical responsibilities, and obligations applicable to all users.

2.3 Company

“Company” means Master Trading Class Private Limited, the owner and operator of InternBoard.com.

2.4 Content

“Content” means any data, text, images, videos, resumes, curricula vitae, Opportunity listings, profiles, messages, communications, reviews, feedback, or other materials submitted, uploaded, posted, transmitted, or otherwise made available by any user through the Platform.

2.5 Employer

“Employer” means any individual, organisation, company, startup, institution, or entity that registers on the Platform for the purpose of posting Opportunities or accessing Career Starter profiles.

2.6 Enforcement Action

“Enforcement Action” means any action taken by the Company in response to an actual or suspected violation of the Community Guidelines or applicable law, including formal warnings, content removal, feature restrictions, account suspensions, permanent account terminations, and referrals to law enforcement authorities.

2.7 Grievance Appellate Committee

“Grievance Appellate Committee” or “GAC” means the statutory committee established by the Government of India under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (as amended), to hear appeals from users dissatisfied with decisions of an intermediary’s Grievance Officer.

2.8 Grievance Officer

“Grievance Officer” means the designated officer of the Company responsible for receiving, acknowledging, and resolving user complaints regarding Content, conduct, and data protection in accordance with the IT Rules 2021 (India) and applicable law.

2.9 Opportunity

“Opportunity” means any internship, apprenticeship, part-time role, full-time position, freelance gig, project-based engagement, remote or work-from-home role, or career development resource listed on the Platform by an Employer.

2.10 Platform

“Platform” refers collectively to the InternBoard.com website, associated mobile applications, APIs, tools, and all other digital services operated by the Company under the InternBoard brand.

2.11 Trust and Safety Team

“Trust and Safety Team” means the team of trained human moderators, analysts, and safety professionals within or engaged by the Company who are responsible for reviewing reports, investigating violations, and implementing Enforcement Actions.

2.12 User

“User” means any individual or entity—whether a Career Starter, Employer, or Visitor—who accesses or uses the Platform in any manner.

2.13 Visitor

“Visitor” means any individual who accesses or browses the Platform without creating a registered account.


3. Governance Framework and Policy Architecture

3.1 The InternBoard Policy Suite

The Company’s Trust and Safety governance is implemented through an interconnected suite of policies, each addressing a distinct but complementary dimension of platform safety. These documents collectively constitute the InternBoard Platform Governance Framework:

  • Community Guidelines — the user-facing rulebook defining all prohibited and required conduct, applicable to every Career Starter, Employer, and Visitor;
  • Trust and Safety Transparency Policy (this document) — the operational accountability document explaining how Community Guidelines violations are detected, investigated, and enforced, and how the Company maintains transparency with its users and regulators;
  • Terms and Conditions — the legally binding agreement governing users’ access to and use of the Platform, including Membership terms, intellectual property rights, disclaimers, and limitation of liability;
  • Privacy Policy — governing the collection, use, storage, processing, and protection of personal data of all users;
  • Cookie Policy — governing the use of cookies and similar tracking technologies on the Platform;
  • Fraud and Scam Prevention Policy — providing detailed guidance on identifying, reporting, and avoiding fraudulent activity specific to career and employment platforms;
  • Data Retention, Deletion and Data Lifecycle Policy — governing the retention and secure disposal of user data;
  • Employer Verification and Trust Program Policy — governing the Company’s voluntary Employer verification programme; and
  • DMCA / Copyright and Intellectual Property Policy — governing intellectual property infringement claims and the Company’s response procedures.

3.2 Hierarchy and Interpretation

In the event of any inconsistency between any two policies within the Platform Governance Framework, the Terms and Conditions shall prevail, unless the inconsistency relates to a matter specifically and exclusively addressed in a more specific policy document. This Policy, as the Trust and Safety operational framework, takes precedence over general governance documents where the subject matter relates specifically to moderation methodology, enforcement processes, or transparency reporting.

3.3 Alignment with External Frameworks

The Company’s Trust and Safety practices are designed to align with the following external frameworks and legal instruments:

  • India: Information Technology Act, 2000; Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (as amended in 2022, 2023, and by the November 2025 amendment); Digital Personal Data Protection Act, 2023; and DPDP Rules, 2025;
  • Singapore: Personal Data Protection Act, 2012 (as amended);
  • European Union and United Kingdom: General Data Protection Regulation (EU) 2016/679; EU Digital Services Act (DSA) 2022/2065, applicable to the extent of the Company’s EU user base; UK GDPR; and
  • International Standards: ISO/IEC 25389 (the Safe Framework), which codifies the Digital Trust and Safety Partnership Best Practices Framework; and the Santa Clara Principles on Transparency and Accountability in Content Moderation (2018, revised 2021).

4. Safety-by-Design Principles

4.1 The Safety-by-Design Commitment

The Company integrates safety considerations into the design and architecture of the Platform from inception, rather than applying safety measures reactively after harm has occurred. This approach—known as “Safety by Design”—reflects the Company’s commitment to proactive harm prevention as a foundational product principle, consistent with the DTSP Best Practices Framework.

4.2 Structured Opportunity Listings

The Platform’s Opportunity listing interface is designed to reduce ambiguity and improve the detection of suspicious or non-compliant listings through the following structural features:

  • required fields for role title, description, compensation status, work type, location, and application process — ensuring that critical information cannot be omitted;
  • standardised listing templates that reduce opportunities for vague, misleading, or incomplete descriptions that might facilitate fraudulent conduct; and
  • field-level validation that flags listings that deviate materially from expected patterns, such as compensation claims that are statistically anomalous for the stated role type or industry.

4.3 Controlled User Profiles

Career Starter and Employer profile interfaces are designed to:

  • collect only the information necessary for legitimate opportunity discovery and recruitment purposes, consistent with the principle of data minimisation under the Company’s Privacy Policy;
  • limit the display of personally sensitive information — such as full addresses, financial account details, and government identification numbers — by design, reducing the risk of data harvesting through the Platform; and
  • structure credential and experience fields in a manner that facilitates verification and reduces the scope for fabricated or materially exaggerated representations.

4.4 Built-In Reporting Architecture

The Platform’s reporting tools are designed to be accessible, low-friction, and available at the point of encounter with potentially violating Content or conduct:

  • each Opportunity listing, Employer profile, and Career Starter profile includes a clearly accessible reporting mechanism;
  • reporting prompts are categorised to guide users towards accurate characterisation of the reported conduct, improving the quality of incoming reports; and
  • reporting channels are available via email at support@internboard.com for users who prefer to submit detailed reports or evidence outside the in-Platform reporting tool.

5. Detection and Monitoring Systems

5.1 Multi-Layer Detection Architecture

The Company employs a multi-layered detection architecture combining automated systems with human review, consistent with the three-layer approach adopted by major employment platforms, including LinkedIn. This architecture is designed to ensure both efficiency at scale and accuracy in complex cases.

5.2 Layer One — Automated Proactive Detection

The first layer of the Company’s detection system operates proactively and automatically. When Content is submitted to the Platform — including Opportunity listings, Career Starter profiles, and Employer profiles — automated systems apply defined rules and signals to identify potentially violating Content before it achieves broad visibility. This layer includes:

  • Rule-Based Filters: Automated rules that flag Content containing defined prohibited keywords, phrases, patterns, or structural indicators consistent with known fraud, scam, or policy violation typologies — including but not limited to requests for payment from Career Starters, promises of implausible compensation, and impersonation patterns;
  • Pattern Recognition: Systems that identify statistical anomalies in Content or account behaviour — including unusual login patterns, high-volume messaging activity, rapid account creation, and atypical listing behaviours — that are associated with inauthentic, automated, or fraudulent activity;
  • Natural Language Processing: Algorithmic tools that analyse the linguistic content of Opportunity listings, profile descriptions, and communications to identify hate speech, harassment indicators, discriminatory language, and deceptive descriptions; and
  • Document and Credential Scanning: Where applicable, tools that assist in identifying uploaded documents that display characteristics consistent with alteration, fabrication, or formatting anomalies that warrant further human review.

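Purely by way of illustration, a rule-based filter of the kind described above can be sketched in a few lines. The patterns, names, and wording below are hypothetical examples invented for this sketch; they are not the Platform’s actual detection rules.

```python
import re

# Hypothetical prohibited-pattern list illustrating rule-based flagging.
# Real rule sets would be larger, maintained, and confidential.
PAYMENT_REQUEST_PATTERNS = [
    re.compile(r"\b(registration|processing|training)\s+fee\b", re.IGNORECASE),
    re.compile(r"\bpay\s+(?:us|a\s+deposit)\b", re.IGNORECASE),
]

def flag_listing(text: str) -> list[str]:
    """Return the patterns matched in a listing, if any (empty = no flag)."""
    return [p.pattern for p in PAYMENT_REQUEST_PATTERNS if p.search(text)]
```

A listing reading “Shortlisted candidates must pay a registration fee” would be flagged by the first pattern, while an ordinary listing would pass through to the remaining detection layers untouched.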
5.3 Layer Two — Combined Automated and Human-Led Detection

The second layer of detection addresses Content and conduct that automated systems have identified as potentially violating but in respect of which the system’s confidence level does not warrant automatic removal. At this layer:

  • flagged Content is queued for review by a member of the Trust and Safety Team, who applies human judgment and contextual analysis to determine whether a violation has occurred;
  • risk scoring models assign a composite risk indicator to accounts and Content based on multiple signals, including complaint history, verification status, behavioural patterns, profile completeness, and the nature of reported interactions — with higher-risk accounts and Content prioritised for earlier human review; and
  • behavioural analysis monitors patterns of conduct across sessions and over time, including the frequency and nature of applications submitted, the volume and recipients of messages sent, and anomalous changes in account activity.

The Company acknowledges that automated risk scoring and flagging systems are not infallible. Accounts or Content that are incorrectly assessed may be subject to an appeal as described in Section 9 of this Policy.
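
For illustration only, a composite risk score of the kind described above might combine a handful of signals into a single review-priority value. The signal names, weights, and thresholds below are invented for this sketch and do not describe the Company’s actual models.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    complaints: int       # prior complaints against the account
    verified: bool        # whether the account passed voluntary verification
    anomaly_score: float  # 0.0 (normal behaviour) .. 1.0 (highly anomalous)

def risk_score(s: AccountSignals) -> float:
    """Weighted composite of the example signals; weights are illustrative."""
    score = 0.2 * min(s.complaints, 5) / 5 + 0.5 * s.anomaly_score
    if not s.verified:
        score += 0.3
    return round(score, 3)

def review_priority(s: AccountSignals) -> str:
    """Map the composite score onto example human-review queues."""
    r = risk_score(s)
    return "urgent" if r >= 0.6 else "standard" if r >= 0.3 else "routine"
```

Under this sketch, an unverified account with several complaints and anomalous activity would be queued for urgent human review, while a verified account with no adverse signals would remain in the routine queue.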

5.4 Layer Three — User-Driven Reporting

The third layer of detection is driven by users who encounter Content or conduct that they believe violates the Community Guidelines. User reports are a critical and valued part of the Company’s safety ecosystem. Each report submitted through the Platform’s in-application reporting tool or by email to support@internboard.com is received by the Trust and Safety Team and assessed in accordance with the procedures described in Section 8 of this Policy. The Company maintains the confidentiality of reporting users’ identities to the fullest extent permitted by applicable law.

5.5 AI Governance in Detection

The Company is committed to responsible AI governance in its detection operations:

  • automated systems do not make final Enforcement Action decisions — including permanent account terminations — without human review and oversight by a member of the Trust and Safety Team;
  • the Company uses AI confidence levels to guide enforcement proportionality, applying more conservative responses (such as content downranking or temporary flagging) in cases of moderate confidence, and reserving automatic removal for cases of high confidence in well-defined violation categories;
  • the Company’s automated detection systems are periodically reviewed and refined to reduce false positives and improve accuracy, and quality assurance checks are conducted on a routine basis on prior moderation decisions; and
  • the Company does not use detection systems in a manner that discriminates on the basis of any protected characteristic and applies human review to any case where the automated system’s output may be influenced by such a characteristic.

6. Content Moderation Practices

6.1 Scope of Moderation

The following categories of Content are subject to review and moderation under this Policy:

  • Opportunity listings posted by Employers, including job titles, descriptions, compensation disclosures, application instructions, and any associated images or attachments;
  • Employer profiles, including company name, description, industry, contact information, and verification documentation;
  • Career Starter profiles, including personal information, educational and professional credentials, resume or curriculum vitae documents, skills, portfolio links, and profile photographs;
  • communications transmitted through the Platform’s messaging tools, subject to applicable privacy law; and
  • any other user-generated Content submitted to or published through the Platform.

6.2 Moderation Methodology

The Company applies the following moderation methodology, calibrated to the nature of the Content and the severity of the potential violation:

  • Automated Moderation: For Content that automated systems identify with high confidence as clearly violating a defined policy rule — such as a listing explicitly requesting payment from a Career Starter — the automated system may remove or restrict the Content immediately without prior human review, subject to the appeal rights described in Section 9;
  • Human-Led Moderation: For Content that automated systems flag with moderate confidence or for Content reported by users, the Trust and Safety Team conducts a human-led review. Human moderators are trained on the Community Guidelines, applicable law, and the Company’s internal moderation standards and apply contextual judgment to reach a proportionate outcome;
  • Hybrid Moderation: For complex cases — including those involving potential harassment, hate speech, AI-generated deception, or nuanced misrepresentation — a combination of automated analysis and human review is applied, with the human reviewer’s determination being final; and
  • Proactive Review: The Company reserves the right to proactively review any Opportunity listing, user profile, or Content prior to or after publication, without waiting for a user report, where the Company has reasonable grounds to suspect a violation of the Community Guidelines. The Company may place a listing or profile under temporary review hold pending assessment, during which it may not be visible to other users.
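
The confidence-calibrated routing described above can be pictured, as a simplified sketch only, as a function from a model confidence value to a handling queue. The thresholds and queue names below are invented for the example; actual routing involves more signals and human oversight as described in Section 5.5.

```python
def route(confidence: float, well_defined_category: bool) -> str:
    """Illustrative routing of a flagged item by detection confidence (0..1).

    Automatic removal is reserved for very high confidence in well-defined
    violation categories; everything else falls back to human review or
    passive monitoring. Thresholds here are hypothetical.
    """
    if confidence >= 0.95 and well_defined_category:
        return "automated_removal"   # still appealable under Section 9
    if confidence >= 0.5:
        return "human_review_queue"
    return "monitor_only"
```

In this sketch a high-confidence hit outside a well-defined category still goes to a human reviewer rather than being removed automatically.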

6.3 Moderator Standards and Training

All members of the Trust and Safety Team who conduct human content review:

  • receive training in the Community Guidelines, the standards described in this Policy, applicable law, and the Company’s internal moderation protocols before conducting independent reviews;
  • receive ongoing training as the Community Guidelines and this Policy are updated, and as new violation typologies emerge on the Platform;
  • are subject to quality assurance review of their moderation decisions on a periodic basis, conducted by senior members of the Trust and Safety Team to maintain consistency and accuracy; and
  • are supported with appropriate resources to manage the psychological demands of content moderation work, consistent with best practices in the Trust and Safety profession.

7. Enforcement Framework

7.1 Graduated Enforcement Model

The Company applies a graduated enforcement model that calibrates the severity and nature of the Enforcement Action to the gravity, nature, and context of the violation. The following Enforcement Actions are available and may be applied independently or in combination:

7.1.1 Formal Warning

For first-time or minor violations that do not cause material harm to any user, the Company may issue a formal written warning to the relevant user, setting out the specific nature of the violation, the provision of the Community Guidelines breached, and the conduct required to bring the account into compliance. A formal warning is recorded on the user’s account and will be taken into account in any subsequent enforcement assessment involving that account.

7.1.2 Content Removal

The Company may remove, edit, restrict access to, or de-index any Content that violates the Community Guidelines or applicable law without prior notice to the relevant user. A Content removal does not automatically result in further Enforcement Action against the account, but all Content removals are recorded internally and may inform subsequent enforcement decisions. Where practicable, the Company will notify the user of the removal and the reason for it.

7.1.3 Feature Restriction

The Company may temporarily restrict a user’s access to specific Platform features — including the ability to post Opportunity listings, submit applications, use the Platform’s messaging tools, or access certain search functions — either as a standalone Enforcement Action for moderate violations or as a precautionary measure pending the completion of an investigation.

7.1.4 Temporary Account Suspension

For violations of moderate to serious severity, or where an investigation is ongoing and there is a risk of continued harm, the Company may suspend a user’s account for a defined period. During a suspension, the user will not be able to access or use any feature of the Platform. The duration of the suspension will be proportionate to the severity and context of the violation. The Company will notify the suspended user of the suspension and the reason for it, unless such notification would compromise an ongoing investigation or a law enforcement process.

7.1.5 Permanent Account Termination

For serious, repeated, or egregious violations of the Community Guidelines — including but not limited to financial exploitation of Career Starters, posting of fraudulent Opportunity listings, identity theft, harassment involving threats of violence, distribution of malware through the Platform, impersonation of Company personnel, and systematic misrepresentation — the Company may permanently and irrevocably terminate the user’s account. Permanently terminated users are not permitted to create new accounts on the Platform, and any attempt to do so will itself constitute a violation subject to further enforcement action.

7.1.6 Platform Blacklisting

In cases of permanent account termination arising from fraud, financial exploitation, or other serious violations, the Company may blacklist the terminated user, permanently prohibiting them from registering any new account on the Platform under any identity. The Company may employ technical measures — including device fingerprinting, IP address monitoring, and email pattern analysis — to identify and block blacklisted users who attempt to re-register. Where an Employer entity’s account is permanently terminated, affiliated individuals who are identified as having participated in the relevant conduct may also be blacklisted.

7.1.7 Referral to Law Enforcement and Regulatory Authorities

Where a violation constitutes or may constitute a criminal offence or regulatory breach — including but not limited to fraud, cybercrime, harassment, identity theft, financial fraud, and data protection violations — the Company will report the relevant conduct to applicable law enforcement agencies, consumer protection authorities, or data protection supervisory authorities in the relevant jurisdiction. The Company will cooperate fully with any resulting investigation and will provide all evidence and information required by applicable legal process.
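
The graduated ladder set out in Section 7.1 can be pictured, purely as an illustration, as a mapping from violation severity and history to an Enforcement Action. The severity scale, thresholds, and action names below are invented for this sketch; actual enforcement decisions rest on human judgment, context, and the factors described in Sections 6 and 7.

```python
from enum import Enum, auto

class Action(Enum):
    WARNING = auto()
    CONTENT_REMOVAL = auto()
    FEATURE_RESTRICTION = auto()
    SUSPENSION = auto()
    TERMINATION = auto()

def graduated_action(severity: int, prior_violations: int) -> Action:
    """Hypothetical graduated-enforcement mapping (severity on a 1-5 scale)."""
    if severity >= 4:                                # serious/egregious harm
        return Action.TERMINATION
    if severity == 3 or prior_violations >= 2:       # moderate-to-serious or repeat
        return Action.SUSPENSION
    if severity == 2:                                # moderate violation
        return Action.FEATURE_RESTRICTION
    # minor violation: first offence draws a warning, a repeat draws removal
    return Action.WARNING if prior_violations == 0 else Action.CONTENT_REMOVAL
```

The sketch mirrors the model’s intent: minor first-time violations draw a warning, repetition and severity escalate the response, and the most serious categories bypass the ladder entirely, as Section 7.2 provides.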

7.2 Immediate Action for Serious Violations

Notwithstanding the graduated model described in Section 7.1, the Company reserves the right to take immediate Enforcement Action — including immediate permanent account termination without prior warning — in cases involving:

  • financial exploitation or payment fraud targeting Career Starters;
  • threats of violence or conduct posing an immediate safety risk to any person;
  • posting or transmission of Content involving the sexual exploitation of minors;
  • distribution of malware or the conduct of cyberattacks through the Platform;
  • identity theft or impersonation of the Company’s personnel or directors; or
  • any conduct that, in the Company’s reasonable judgment, poses an immediate and serious risk to the safety, integrity, or reputation of the Platform or its users.

7.3 Organisational Enforcement

Where an Employer’s account is subject to an Enforcement Action for fraud, exploitation, or a serious violation, all Opportunity listings associated with that Employer will be immediately removed from the Platform. Any affiliated individuals identified as having participated in the relevant conduct may be subject to independent Enforcement Actions, up to and including permanent account termination and blacklisting.

7.4 No Obligation to Pre-Screen

The Company is not obligated to pre-screen, review, or approve Content before it is published on the Platform. The Company’s decision not to review or remove any particular Content does not constitute an endorsement, approval, or verification of that Content, and does not create any liability on the part of the Company.


8. Reporting and Investigation Procedure

8.1 How to Submit a Report

Users who encounter or suspect any violation of the Community Guidelines or applicable law are strongly encouraged to report it promptly through one of the following channels:

  • the Platform’s designated in-application reporting tool, accessible directly on each Opportunity listing, Employer profile, Career Starter profile, or communication; or
  • by email to the Trust and Safety Team at support@internboard.com, including a clear description of the suspected violation and any supporting evidence.

For urgent safety concerns involving threats of violence or immediate risk of harm to any person, users should contact local law enforcement directly in addition to reporting to the Company.

8.2 Information to Include

To assist the Trust and Safety Team in investigating a reported violation efficiently, users are requested to provide the following:

  • the nature of the suspected violation and a description of the relevant conduct or Content;
  • the username, account name, or listing identifier of the user or Opportunity being reported, where available;
  • the date and approximate time of the relevant conduct or Content publication;
  • any screenshots, recorded communications, URLs, or other evidence that may support the report; and
  • the reporter’s contact details, so that the Trust and Safety Team may follow up if required.

8.3 Report Acknowledgement and Investigation Timeline

  • The Company will acknowledge receipt of all reports submitted by email within forty-eight (48) hours of receipt, in accordance with the Company’s service level commitment.
  • In respect of reports received through the Platform’s in-application reporting tool, the Trust and Safety Team will review each report within seven (7) calendar days of receipt for standard cases.
  • For reports raising urgent safety concerns — including threats of violence, financial exploitation, or content involving minors — the Trust and Safety Team prioritises review and will endeavour to complete an initial assessment within twenty-four (24) hours.
  • Complex investigations — including those involving coordinated inauthentic behaviour, multi-account abuse, or potential criminal conduct — may require additional time. The Company will update the reporting user on the status of the investigation at reasonable intervals, to the extent consistent with its confidentiality obligations.

8.4 Investigation Process

Upon receipt of a report, the Trust and Safety Team will:

  • conduct an initial triage to assess the nature, severity, and urgency of the reported conduct;
  • gather and review relevant Content, account data, communication records, and behavioural signals associated with the reported account or Content;
  • cross-reference the reported conduct against the Company’s internal records of prior complaints, warnings, or Enforcement Actions involving the same account;
  • where appropriate, request additional information or evidence from the reporting user;
  • reach a determination as to whether a violation of the Community Guidelines or applicable law has occurred; and
  • implement the appropriate Enforcement Action in accordance with Section 7 of this Policy.

8.5 Good Faith Reporting

Reports must be submitted in good faith, based on a genuine and reasonable belief that a violation has occurred. Users must not submit false, malicious, or vexatious reports against other users. The submission of a false or bad-faith report with the intention of causing unjustified enforcement action against another user is itself a violation of the Community Guidelines and may result in Enforcement Action against the reporting user.

8.6 Confidentiality of Reports and Reporters

The Company will maintain the confidentiality of all reports and the identity of reporting users to the fullest extent permitted by applicable law. The Company will not disclose the identity of a reporting user to the subject of the report unless required to do so by a lawful court order, regulatory directive, or other mandatory legal process.


9. Appeals and Grievance Redressal

9.1 Right to Appeal

Users who have been subject to a Content removal, account suspension, feature restriction, or permanent account termination under this Policy have the right to appeal the Enforcement Action. An appeal must be submitted in writing to support@internboard.com within fourteen (14) calendar days of the date of the Enforcement Action notice.

9.2 Contents of an Appeal

An appeal submission should include:

  • the user’s full name and the account username or registered email address associated with the affected account;
  • a clear description of the Enforcement Action being appealed;
  • the user’s explanation of why they believe the Enforcement Action was incorrect, disproportionate, or made in error; and
  • any supporting evidence, documentation, or context the user wishes the Trust and Safety Team to consider in the review.

9.3 Appeal Review Process

  • Appeals will be reviewed by a member of the Trust and Safety Team who was not involved in the original enforcement decision, ensuring an independent assessment.
  • The Company will acknowledge receipt of an appeal within forty-eight (48) hours and will complete its review within thirty (30) calendar days of receipt of a complete and valid appeal submission.
  • The Company will notify the user of the outcome of the appeal in writing, including a brief explanation of the grounds for the decision.
  • Where the appeal is upheld, the Company will take all reasonable steps to reverse the Enforcement Action and restore the user’s account or Content as promptly as practicable.
  • The Company’s decision on appeal is final and binding, subject to the user’s right to escalate to the Grievance Officer or the Grievance Appellate Committee as described in Section 9.4 and Section 9.5 below, and subject to any rights the user may have to seek independent legal redress before a competent court or consumer forum under applicable law.

9.4 Grievance Officer

In accordance with Rule 3(2)(a) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (India), the Company has designated a Grievance Officer whose contact details are permanently published on the Platform. The Grievance Officer is responsible for receiving and addressing user complaints regarding Content, conduct, and data protection matters.

Contact details of the Grievance Officer:

Email: support@internboard.com

Correspondence Address: Master Trading Class Private Limited, #G-3, South West Avenue, Street No. 2 Lalamma Gardens, Puppalaguda, Hyderabad, Telangana, India — 500089

The Grievance Officer will:

  • acknowledge all complaints within twenty-four (24) hours of receipt; and
  • resolve all complaints and communicate the outcome to the complainant within fifteen (15) days of receipt, in compliance with Rule 3(2)(a) of the IT Rules, 2021.

9.5 Grievance Appellate Committee

Users who are dissatisfied with the decision of the Grievance Officer may escalate their complaint to the Grievance Appellate Committee (GAC), established by the Government of India under the IT Rules, 2021 (as amended). Appeals to the GAC must be made within thirty (30) days of the Grievance Officer’s decision, and the GAC is required to decide such appeals within thirty (30) days of receipt. The Company will comply with all orders issued by the GAC within the timelines required by applicable law.


10. Repeat Violations and Blacklisting

10.1 Repeat Violation Policy

The Company tracks the enforcement history of all user accounts. Users who engage in repeated violations of the Community Guidelines—whether through multiple instances of the same violation category or through distinct violations across different categories—will be subject to progressively more severe Enforcement Actions, up to and including permanent account termination. The Company’s assessment of whether a pattern of repeat violations exists will be made in its reasonable discretion, taking into account the nature, frequency, severity, and timing of each violation and any relevant contextual factors.

10.2 Blacklisting

Where a user’s account is permanently terminated for fraud, financial exploitation of Career Starters, or any other serious violation, the Company may blacklist that user. Blacklisted users are permanently prohibited from registering any new account on the Platform, whether under their existing identity or under a different name, email address, or device. The Company may employ the following technical measures to identify and block blacklisting circumvention:

  • device fingerprinting — identifying the combination of device attributes associated with the blacklisted user’s Device;
  • IP address monitoring — flagging registration attempts from IP addresses associated with known blacklisted accounts; and
  • email address pattern analysis — identifying email addresses associated with previously blacklisted accounts or displaying characteristics consistent with those accounts.

10.3 Organisational Blacklisting

Where an Employer organisation is blacklisted, the Company may extend blacklisting to individual representatives or affiliates of that organisation who are identified as having participated in, directed, or facilitated the conduct that led to the blacklisting.


11. Fraud Prevention and Platform Integrity

11.1 The Company’s Fraud Prevention Commitment

The Company actively monitors the Platform to prevent and respond to fraudulent and deceptive conduct that exploits Career Starters or Employers. The Company’s approach to fraud prevention is further detailed in its separate Fraud and Scam Prevention Policy, which is published on the Platform and incorporated into this Policy by reference. This Section provides a summary of the Trust and Safety Team’s fraud-specific operations.

11.2 Key Fraud Risk Areas Monitored

The Trust and Safety Team specifically monitors for the following categories of fraud risk, which represent the most common and materially harmful forms of misconduct on career and opportunity platforms globally:

  • Payment Fraud: Employers or actors impersonating Employers who request fees, deposits, or financial consideration from Career Starters as a purported condition of any Opportunity;
  • Fake Opportunity Listings: Opportunity listings that are fabricated, substantially misrepresented, or designed to harvest personal or financial data from Career Starters;
  • Identity Impersonation: Accounts impersonating legitimate companies, organisations, recruitment agencies, or individuals—including impersonation of the Company’s own personnel;
  • Phishing and Social Engineering: Communications designed to deceive Career Starters or Employers into disclosing sensitive personal information, financial credentials, or login details;
  • AI-Generated and Synthetic Fraud: Use of AI-generated profile photographs, fabricated credentials, or synthetic Opportunity descriptions to misrepresent identity or Opportunity legitimacy; and
  • Money Transfer Scams: Arrangements in which Career Starters are instructed to receive and transfer money on behalf of a purported Employer as part of a fraudulent engagement.

11.3 User Awareness and Fraud Prevention Guidance

Career Starters and Employers are the first line of defence against fraud. The Company strongly advises all users to:

  • never pay any fee, deposit, or financial consideration to any Employer or recruiter in connection with any Opportunity listed on the Platform, regardless of the stated justification;
  • independently verify the identity and legitimacy of any Employer before attending an interview, providing sensitive personal information, or accepting an Opportunity;
  • be alert to Opportunity listings offering implausibly high compensation for roles requiring minimal experience or qualifications;
  • report any communication from an Employer that requests payment, financial account details, government identification numbers, or any other sensitive information under circumstances that appear suspicious; and
  • consult the Company’s Fraud and Scam Prevention Policy for detailed guidance on identifying and avoiding common scam typologies on career platforms.

12. Law Enforcement Cooperation and Government Requests

12.1 Law Enforcement Cooperation

The Company cooperates with law enforcement agencies, regulatory authorities, and government bodies in accordance with applicable law. Where the Company receives a valid legal process—including a court order, statutory demand, subpoena, or government direction—requiring the disclosure of user information or the removal of Content, the Company will assess the request for legal validity and proportionality and will respond in accordance with its legal obligations.

12.2 Process Requirements

The Company requires that all law enforcement and government requests:

  • be issued through a formal, lawful legal process by a competent authority of the jurisdiction in which the request is made;
  • specify the legal authority under which the request is made, the specific information or action requested, and the purpose of the request; and
  • be submitted to the Company’s designated point of contact at support@internboard.com, or to the Company’s registered address, as directed by the applicable legal process.

The Company will not voluntarily disclose user data to law enforcement or government authorities outside of a valid legal process, except where disclosure is necessary to prevent an immediate and serious risk to the life or safety of any person.

12.3 Notification to Users

Where the Company is required to respond to a law enforcement or government request that relates to a specific user’s account or Content, the Company will endeavour to notify the affected user of the request before complying, unless such notification is prohibited by applicable law, a court order, or a direction from a competent authority, or unless notification would compromise an ongoing investigation.

12.4 Compliance with the IT Rules, 2021 (India) — November 2025 Amendment

In accordance with the amendment to Rule 3(1)(d) of the IT Rules, 2021, effective 15 November 2025, the Company will remove unlawful Content upon receiving actual knowledge through either a court order or a notification from the Appropriate Government or its authorised officer, issued by an officer not below the rank of Joint Secretary (or equivalent). The Company will act on such orders promptly and within the timelines prescribed by applicable law. Orders received under Section 69A of the Information Technology Act, 2000, for blocking public access to specific Content will be complied with in accordance with the procedure established under that provision.


13. Platform Limitations and Transparency

13.1 Scope of Monitoring

The Company employs substantial human and technological resources in its Trust and Safety operations. However, the Company acknowledges the following inherent limitations of its monitoring capacity:

  • the Platform hosts a high volume of user-generated Content and interactions, and no moderation system—automated or human—can guarantee detection of every violation in real time;
  • automated detection systems are probabilistic and may produce false positives (flagging compliant Content as violating) or false negatives (failing to detect all instances of violating Content). The Company continuously refines its systems to reduce error rates; and
  • the Company’s ability to monitor off-platform conduct that originates from or is connected to Platform interactions is limited by technical and legal constraints.

These limitations do not reduce users’ obligations under the Community Guidelines. Users must exercise their own independent judgment and due diligence in all interactions on the Platform, and are strongly encouraged to use the Platform’s reporting tools when they encounter conduct or Content that may violate the Community Guidelines.

13.2 No Endorsement of Content

The Company does not endorse, verify, or assume responsibility for the accuracy, authenticity, or legality of any Opportunity listing, Employer profile, or Career Starter profile published on the Platform, except to the extent that a specific listing or profile has been reviewed and verified through the Company’s Employer Verification and Trust Program. The presence of any listing or profile on the Platform is not a representation or warranty by the Company regarding its authenticity, accuracy, or compliance with applicable law.

13.3 Commitment to Continuous Improvement

The Company is committed to the continuous improvement of its Trust and Safety systems and practices. The Trust and Safety Team undertakes the following improvement activities on a periodic basis:

  • analysis of patterns and trends in reported violations, Enforcement Actions, and appeal outcomes to identify systemic risks and areas for policy or system enhancement;
  • review and refinement of automated detection systems based on performance data, quality assurance findings, and emerging violation typologies;
  • review of enforcement consistency to ensure that similar violations are treated consistently across different user accounts and Content types; and
  • assessment of new and emerging threat categories — including AI-generated fraud, synthetic identity attacks, and new forms of financial exploitation — and development of appropriate detection and enforcement responses.

14. Global Compliance and Cross-Border Operations

14.1 Multi-Jurisdictional Compliance

The Platform operates globally and serves users in multiple jurisdictions. The Company is committed to compliance with the applicable Trust and Safety and data protection laws of every jurisdiction in which it operates. This includes:

  • India: Compliance with the IT Act, 2000; the IT Rules, 2021 (as amended); the DPDP Act, 2023; and the DPDP Rules, 2025, including all grievance redressal, content removal, and intermediary due diligence obligations;
  • Singapore: Compliance with the Personal Data Protection Act, 2012, and PDPC guidelines, including data protection obligations relevant to user safety operations;
  • European Union and United Kingdom: Compliance with the GDPR, UK GDPR, and — to the extent applicable — the EU Digital Services Act, including transparency reporting and due diligence requirements; and
  • Other Jurisdictions: The Company will comply with applicable local law in all other jurisdictions in which users are located, including consumer protection and online safety legislation.

14.2 Sanctions Compliance

The Company will not knowingly facilitate the use of the Platform in connection with any activity that violates applicable international trade sanctions or embargoes, including those administered by the United Nations, the Government of India, the United States Office of Foreign Assets Control (OFAC), the European Union, or any other competent authority. Accounts identified as violating sanctions compliance obligations will be immediately suspended and referred to relevant authorities.


15. Limitation of Liability

15.1 Scope of Exclusion

To the fullest extent permitted by applicable law, the Company, Master Trading Class Private Limited, and its directors, officers, shareholders, employees, and agents shall not be liable for:

  • fraud, deception, or financial harm caused to any user by another user of the Platform, where such harm results from conduct that the Company could not reasonably have been expected to detect or prevent;
  • any failure to detect or address a specific violation of the Community Guidelines in a timely manner, provided that the Company has exercised reasonable diligence in its Trust and Safety operations;
  • any loss or damage arising from the conduct of Career Starters or Employers in interactions that take place outside the Platform;
  • any indirect, incidental, special, consequential, or punitive damages of any kind arising from or in connection with the Company’s Trust and Safety operations or this Policy; or
  • any moderation decision that is subsequently found to have been made in error, provided that the Company has acted in good faith and in accordance with the procedures described in this Policy.

15.2 Cap on Liability

Where the Company’s liability cannot be excluded by applicable law, the Company’s maximum aggregate liability to any user in connection with this Policy shall not exceed the total Membership fees actually paid by that user to the Company in the twelve (12) calendar months immediately preceding the event giving rise to the claim.


16. Indemnification

Each user of the Platform agrees to defend, indemnify, and hold harmless the Company, Master Trading Class Private Limited, and its directors, officers, shareholders, employees, agents, and assigns from and against any and all third-party claims, liabilities, damages, losses, costs, and expenses—including reasonable legal fees—arising out of or in connection with:

  • the user’s violation of the Community Guidelines, this Policy, or any applicable law;
  • any Content submitted by the user that is false, fraudulent, defamatory, infringing, or unlawful;
  • any dispute between the user and any other user arising from interactions on the Platform; or
  • any claim by a Career Starter arising from an Employer’s financial exploitation in violation of the Community Guidelines.

17. Governing Law

This Policy, and all matters arising out of or relating to it, shall be governed by and construed in accordance with the laws of the Republic of India, without reference to its conflict of laws principles. The Company acknowledges that users in other jurisdictions may also be entitled to protections under applicable local law — including the GDPR (EU/UK), PDPA (Singapore), and equivalent consumer protection and online safety instruments — which shall apply to the extent required by mandatory applicable law.


18. Jurisdiction and Dispute Resolution

18.1 Good-Faith Negotiation

Before initiating any formal legal proceedings in relation to this Policy, users are requested to notify the Company in writing at support@internboard.com, setting out the nature of the dispute and the remedy sought. The parties agree to engage in good-faith negotiations for thirty (30) calendar days from the date of written notification before either party commences formal proceedings.

18.2 Exclusive Jurisdiction

Subject to Section 18.1 and to any mandatory consumer forum jurisdiction available under applicable law, all disputes arising from or in connection with this Policy — including questions regarding its existence, validity, breach, or termination — shall be subject to the exclusive jurisdiction of the courts located in Hyderabad, Telangana, India.

18.3 Regulatory Redressal

Nothing in this Section limits the right of any user to seek redressal before the following:

  • the Grievance Appellate Committee (GAC) established under the IT Rules, 2021 (India);
  • the Data Protection Board of India under the DPDP Act, 2023;
  • a competent data protection supervisory authority in the user’s jurisdiction, including the Information Commissioner’s Office (UK) or the relevant EU supervisory authority; or
  • a competent consumer dispute redressal commission or forum under the Consumer Protection Act, 2019 (India) or equivalent legislation in the user’s jurisdiction.

19. Amendments to this Policy

19.1 Right to Amend

The Company reserves the right to amend, update, or replace this Policy at any time to reflect changes in applicable law, regulatory guidance, platform developments, enforcement methodology, or evolving Trust and Safety best practices. The “Last Reviewed and Updated” date at the head of this Policy will reflect all revisions.

19.2 Notice of Material Changes

Where any amendment to this Policy is material—including amendments that significantly alter the enforcement framework, introduce new enforcement mechanisms, or change the scope of moderation—the Company will:

  • notify registered users by email to the address held on their account at least thirty (30) calendar days prior to the amended Policy taking effect; and
  • display a prominent notice on the Platform for the same period.

19.3 Continued Use as Acceptance

Continued use of the Platform following the publication or notification of an amended Policy shall constitute the user’s acceptance of the amended terms.


20. Contact and Safety Support

For all trust and safety enquiries, violation reports, Enforcement Action appeals, Grievance Officer complaints, law enforcement liaison, or other communications relating to this Policy, users and authorities may contact the Company through the following channels:

Company Name: Master Trading Class Private Limited

Platform: InternBoard.com

Email: support@internboard.com

Global Office: InternBoard.com, 12 Woodlands Square, #13-79 Woods Square, Tower One, Singapore – 737715

Corporate and Correspondence Address: Master Trading Class Private Limited, #G-3, South West Avenue, Street No. 2 Lalamma Gardens, Puppalaguda, Hyderabad, Telangana, India — 500089

The Company is committed to acknowledging all trust and safety communications within forty-eight (48) hours and to providing a substantive response within thirty (30) calendar days of receipt, or within any shorter period required by applicable law.


This Trust and Safety Transparency Policy was reviewed and updated on April 23, 2026. This Policy supersedes all prior versions. This Policy should be read in conjunction with the Community Guidelines, the Terms and Conditions, and the Privacy Policy, all of which are available on the Platform. Users are encouraged to review this Policy periodically. The current version is always available at internboard.com/trust-and-safety-transparency-policy.