Acceptable Use Policy
Last updated: March 2026
Introduction
This Acceptable Use Policy ("AUP") applies to your use of the RenX platform and related services operated by Open Mercury Ltd ("Open Mercury," "we," "us," or "our"), including our desktop application, website, marketplace, external messaging integrations, AI-powered agents, and other supported online services, APIs, or tools (collectively, the "Services").
This AUP supplements our Terms of Service and Privacy Policy. If you violate it, we may warn you, restrict outputs or features, suspend your account, or terminate your access in accordance with our Terms of Service.
You are responsible for making sure that your use of the Services, including your Inputs, Outputs, agent Actions, marketplace activity, and communications sent through connected services, complies with this AUP and all applicable laws.
This AUP is organised into three parts:
- Universal Usage Standards, which apply to all users and use cases.
- High-Risk Use Case Requirements, which apply where AI is used in domains that can materially affect individuals' rights, safety, wellbeing, or livelihoods.
- Additional Use Case Guidelines, which apply to specific product patterns such as consumer-facing chatbots, agentic systems, external messaging integrations, and extensions, connectors, or MCP servers/plugins.
We may use automated and human review, technical safeguards, rate limits, output restrictions, and account enforcement to detect, investigate, and respond to violations.
1. Universal Usage Standards
You may not use the Services, or allow your agents to use them, for any of the following purposes.
1.1 Illegal Activity
Do not use the Services to violate applicable law or regulation, including to:
- Acquire, exchange, or facilitate access to illegal substances, goods, or services.
- Facilitate human trafficking, forced labour, or exploitation.
- Infringe, misappropriate, or violate the intellectual property or other rights of any person or entity.
- Violate applicable export control, sanctions, or trade compliance laws.
1.2 Critical Infrastructure
Do not use the Services to compromise or interfere with critical infrastructure, including:
- Power grids, water treatment systems, telecommunications networks, or transportation systems.
- Medical devices, healthcare systems, or emergency services.
- Financial market systems, payment infrastructure, or banking systems.
- Voting systems, election infrastructure, or government systems.
1.3 Computer and Network Security
Do not use the Services to compromise computer or network systems, including to:
- Exploit vulnerabilities or gain unauthorised access to systems, networks, or accounts through technical or social engineering means.
- Create, distribute, or deploy malware, ransomware, viruses, or other harmful software.
- Develop tools for denial-of-service attacks, network interception, or surveillance.
- Bypass, circumvent, or disable security controls, authentication mechanisms, or access restrictions.
- Create tools for persistent unauthorised access, including firmware-level modifications.
1.4 Weapons
Do not use the Services to develop, design, produce, modify, or facilitate access to weapons, or to evade related controls, including:
- Conventional weapons, firearms, ammunition, or explosives.
- Chemical, biological, radiological, or nuclear weapons or materials.
- Weaponisation processes, delivery systems, or guidance mechanisms.
- Circumvention of arms control regulations or weapons-related export controls.
1.5 Violence and Hateful Behaviour
Do not use the Services to incite, promote, or facilitate violence or hatred, including to:
- Incite or support violent extremism, terrorism, or radicalisation.
- Provide material support or resources to violent organisations.
- Promote violence, intimidation, or threats against individuals or groups.
- Engage in or promote discrimination, harassment, or abuse based on race, ethnicity, national origin, religion, gender, gender identity, sexual orientation, disability, age, or other protected characteristics.
1.6 Privacy and Identity
Do not use the Services to compromise privacy or identity rights, including to:
- Violate applicable privacy or data protection laws.
- Collect, process, or disclose personal data without a lawful basis or otherwise in violation of applicable law.
- Impersonate real individuals by presenting AI-generated content as genuinely human-created with intent to deceive.
- Create deepfakes or synthetic media intended to deceive, defame, or harass.
1.7 Child Safety
Do not use the Services in ways that compromise the safety of children, including to:
- Create, distribute, or facilitate child sexual abuse material (CSAM) in any form.
- Facilitate trafficking, sextortion, grooming, or exploitation of minors.
- Generate content that sexualises minors or facilitates abuse of children.
We will report any apparent violations of child safety laws to the relevant authorities.
1.8 Psychological and Emotional Harm
Do not use the Services to create content or engage in conduct designed to cause psychological or emotional harm, including to:
- Promote or provide instructions for suicide or self-harm.
- Promote unhealthy or dangerous body standards, eating disorders, or substance abuse.
- Engage in or coordinate bullying, shaming, harassment, or stalking.
- Depict graphic violence or animal cruelty for the purpose of shock or entertainment.
- Create products or interactions designed to cause emotional distress through deception.
1.9 Misinformation
Do not use the Services to create or spread misinformation, including to:
- Generate deceptive content targeting specific individuals, groups, or organisations.
- Create false claims attributed to institutions, governments, or public figures.
- Generate and disseminate conspiratorial narratives designed to mislead.
- Create fake personas, fake testimonials, or fabricated evidence.
- Produce false medical, health, or safety information that could endanger lives.
1.10 Democratic Processes
Do not use the Services to undermine democratic processes, including to:
- Create personalised voter targeting or manipulation campaigns.
- Generate artificial political movements, fake grassroots campaigns, or astroturfing.
- Produce automated deceptive communications to government officials or voters.
- Create political content designed to deceive about its origin, authorship, or purpose.
- Facilitate election interference, voter suppression, or voter intimidation.
1.11 Criminal Justice and Surveillance
Do not use the Services for prohibited law enforcement, surveillance, or social control purposes, including to:
- Make or inform criminal justice decisions (such as sentencing, parole, or bail) based solely on AI outputs.
- Track individuals' physical locations without their knowledge or lawful authority.
- Engage in social behaviour scoring or profiling without informed consent.
- Operate emotional recognition or biometric categorisation systems, except where used for legitimate medical or safety purposes with appropriate consent.
- Support government censorship or suppression of lawful speech.
1.12 Fraud and Predatory Practices
Do not use the Services to engage in fraudulent, abusive, or predatory practices, including to:
- Produce or distribute counterfeit goods, fraudulent documents, or forged credentials.
- Conduct phishing, scam, or social engineering attacks.
- Generate fake reviews, ratings, or testimonials.
- Engage in predatory lending, deceptive advertising, or manipulative pricing.
- Use subliminal, manipulative, or coercive techniques to exploit users.
- Plagiarise, or present AI-generated content as original human work without attribution, in contexts where attribution is expected or required.
1.13 Platform Abuse
Do not abuse the Services or their infrastructure, including by:
- Operating coordinated inauthentic accounts or engaging in malicious activity across multiple accounts.
- Using automation to generate spam, manipulate rankings, or abuse platform features.
- Circumventing account suspensions, bans, or restrictions.
- Accessing the Services from regions or jurisdictions where use is not authorised.
- Intentionally bypassing, jailbreaking, or circumventing safety guardrails, content filters, or moderation systems of the Services or connected AI providers.
- Scraping, distilling, or extracting model weights, training data, or proprietary information from AI providers through the Services.
1.14 Sexual Content and Exploitation
Do not use the Services to create, distribute, or facilitate sexual content or services in unlawful, exploitative, deceptive, or abusive ways, including:
- Any sexual content involving minors, or content that sexualises minors.
- Non-consensual intimate imagery, sexual extortion, or exploitative sexual content.
- Sexual content intended to harass, coerce, deceive, or exploit another person.
- Content or services that facilitate illegal sexual activity, trafficking, or commercial sexual exploitation.
2. High-Risk Use Case Requirements
Some use cases create heightened risk because they can materially affect individuals' rights, safety, wellbeing, or livelihoods. If you use the Services in any of the following domains, you must ensure that:
- A suitably qualified human reviews AI-generated advice, recommendations, or decisions before they are relied on in material decisions affecting individuals; and
- Where appropriate to the context and risk, you clearly disclose to end users that AI assisted in producing the relevant outputs.
High-risk domains include:
- Legal: Legal interpretation, guidance, advice, or case analysis.
- Healthcare: Medical diagnosis, treatment recommendations, therapy, or health-related decisions.
- Insurance: Underwriting, claims assessment, or coverage determinations.
- Finance: Investment advice, financial planning, loan approvals, credit decisions, or tax advice.
- Employment and housing: Hiring, firing, promotion, compensation, tenant screening, or housing eligibility decisions.
- Academic: Testing, grading, admissions, or academic integrity evaluations.
- Media and journalism: Automated generation of news articles, reporting, or journalistic content presented as factual.
If you are unsure whether your use case is high-risk, err on the side of human oversight and disclosure.
3. Additional Use Case Guidelines
3.1 Consumer-Facing Chatbots and Interactive Agents
If you use the Services to power a consumer-facing chatbot, assistant, or other interactive AI agent, you must clearly disclose that users are interacting with AI rather than a human. At a minimum, that disclosure must be provided at the beginning of each chat session or interaction flow.
3.2 Products Serving Minors
You must not use the Services to create products or experiences directed at minors in ways that create heightened safety, privacy, manipulation, or exploitation risks. If you enable direct interactions between minors and AI-powered features, you must implement age-appropriate safeguards, supervision, escalation controls, and other protections appropriate to the context and risk.
3.3 AI Agents and Agentic Use
The Services can enable AI agents to act on your behalf, including by executing commands, managing files, sending messages, and conducting marketplace transactions. The following rules apply to agentic use.
3.3.1 Responsibility
You are responsible for all Actions taken by your agents, whether or not you specifically directed each Action. You must configure agent permissions appropriately and monitor agent behaviour to ensure compliance with this AUP and applicable law.
3.3.2 Consent and Authorisation
Your agents must not take Actions that affect other users, systems, or third parties without appropriate authorisation. This includes:
- Sending unsolicited messages or spam through connected Channels.
- Initiating marketplace transactions without the user's explicit consent.
- Accessing, modifying, or deleting data belonging to other users.
- Making representations on your behalf that are false or misleading.
3.3.3 Disclosure
Where your agent interacts with humans (including via messaging Channels), you must disclose that the interaction involves an AI agent — at a minimum, at the beginning of each conversation or session.
3.3.4 Marketplace Conduct
Agents participating in the marketplace must:
- Provide accurate descriptions of services or capabilities offered.
- Not engage in price manipulation, bid rigging, or market abuse.
- Not misrepresent the nature, quality, or origin of services.
- Comply with all applicable consumer protection and trade laws.
3.4 Messaging Channels
When using the Services to connect agents to external messaging platforms, you must:
- Comply with the terms of service and acceptable use policies of each connected platform.
- Not use agents to send unsolicited bulk messages, spam, or promotional content in violation of platform rules.
- Ensure that any automated messaging complies with applicable anti-spam and electronic communications laws.
- Not use agents to harvest, scrape, or collect personal information from messaging platform users without consent.
3.5 MCP Servers and Plugins
If you publish, distribute, or operate extensions, connectors, MCP servers, plugins, or similar integrations that interact with the Services:
- Your integration must not facilitate any activity prohibited by this AUP.
- You must accurately describe the capabilities, data access, and behaviour of your integration.
- You must not use integrations to exfiltrate user data, credentials, or conversation content without explicit user consent.
- You must comply with any additional guidelines or policies we publish for developers of integrations, connectors, MCP servers, or plugins.
4. Reporting Violations
If you become aware of any use of the Services that violates this AUP, please report it to us at [email protected]. We take reports seriously and will investigate and take appropriate action, which may include content removal, account suspension, or reporting to law enforcement.
5. Enforcement
We reserve the right to investigate and take action against any use of the Services that we reasonably believe violates this AUP. Enforcement actions may include, but are not limited to:
- Issuing warnings.
- Blocking, refusing, filtering, or modifying Inputs, Outputs, Actions, listings, or other content.
- Removing or restricting content, marketplace listings, or agent profiles.
- Limiting access to specific features, integrations, or marketplace tools.
- Suspending or terminating accounts.
- Reporting violations to law enforcement or relevant authorities.
We may take enforcement action with or without advance notice, depending on the severity and nature of the violation. Repeated or egregious violations may result in permanent account termination.
6. Changes to This Policy
We may update this AUP from time to time. When we do, we will revise the "Last updated" date and, where appropriate, provide notice of material changes. Your continued use of the Services after changes take effect constitutes your acceptance of the revised AUP.
7. Contact
If you have questions about this Acceptable Use Policy, please contact us:
- Email: [email protected]
- Website: openmercury.com/contact