Responsible AI Policy
Owner: Pantera AI Inc. (“Pantera”)
This Responsible Artificial Intelligence (AI) Policy defines the principles, governance, and operational controls
Pantera applies when designing, developing, deploying, and operating AI-enabled features in a safe, ethical, and compliant manner.
This policy is intended to align with widely accepted best practices, including the risk-based approach of the EU AI Act
and AI governance concepts aligned with ISO/IEC 42001.
1. Purpose
The purpose of this policy is to ensure Pantera’s AI systems are developed and used responsibly, with appropriate
safeguards for security, privacy, transparency, fairness, reliability, and human oversight.
2. Scope
This policy applies to:
- All Pantera employees, contractors, and third parties acting on behalf of Pantera
- All AI-enabled features and workflows offered by Pantera (including integrations and automation)
- All environments used to develop, test, and operate AI features (including cloud and third-party services)
3. Definitions
- AI system: Software that uses model-driven inference (including machine learning and generative AI) to produce outputs such as content, recommendations, or automated actions.
- High-risk use: Any use that could materially impact users’ rights, safety, finances, legal outcomes, or access to essential services.
- Human-in-the-loop: A control where a qualified person reviews or approves sensitive actions before execution.
4. Core Principles
4.1 Lawfulness and Accountability
- Pantera assigns clear ownership for AI-enabled features and maintains governance over design and operation.
- We document key decisions, risks, and controls for AI systems (including changes and incident response).
4.2 Security and Privacy by Design
- We apply least-privilege access controls to AI data, prompts, logs, and outputs.
- We minimize data collection and limit data retention to what is necessary for product delivery and security.
- We implement safeguards to prevent leakage of confidential or personal data through prompts or outputs.
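One common safeguard for the last control is redacting obvious personal data from prompts and outputs before they are logged or transmitted. The following is a minimal illustrative sketch only; the patterns, placeholder format, and function name are assumptions for this example, not Pantera's actual implementation, and production systems typically need broader coverage (names, addresses, identifiers) or a dedicated PII-detection service.

```python
import re

# Illustrative patterns only; real deployments need far broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognized personal data with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Contact jane.doe@example.com or call +1 (555) 123-4567."
print(redact(prompt))
# → Contact [EMAIL REDACTED] or call [PHONE REDACTED].
```

A sketch like this would sit in front of both the logging pipeline and any outbound model call, so that sensitive values never leave the trust boundary in clear text.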
4.3 Transparency and User Notice
- We disclose when an interaction is AI-assisted where reasonable and appropriate.
- We communicate meaningful limitations of AI outputs (e.g., potential errors, hallucinations, or incomplete context).
4.4 Fairness and Non-Discrimination
- We assess AI features for potential bias and discriminatory impact, especially in sensitive use cases.
- We avoid using protected or sensitive attributes unless explicitly required, lawful, and safeguarded.
4.5 Reliability, Safety, and Quality
- We test AI features against defined acceptance criteria before release.
- We monitor performance, failure modes, and drift where applicable.
- We maintain fallback paths for critical workflows (manual review, safe defaults, or human escalation).
4.6 Human Oversight
- High-impact actions should require human approval or review before execution.
- Users should have a path to contest, correct, or request review of AI-driven outcomes where applicable.
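In practice, human-in-the-loop controls like those above are often enforced as an approval gate: sensitive actions are queued for a qualified reviewer rather than executed directly. The sketch below is a hypothetical illustration (the class names, in-memory queue, and `execute` callback are assumptions, not a description of Pantera's systems).

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PendingAction:
    description: str
    execute: Callable[[], None]  # deferred action; runs only after approval
    approved: bool = False

class ApprovalGate:
    """Queue high-impact AI actions for human review before execution."""

    def __init__(self) -> None:
        self.queue: list[PendingAction] = []

    def submit(self, action: PendingAction) -> None:
        # AI systems submit here instead of acting directly.
        self.queue.append(action)

    def approve_and_run(self, index: int, reviewer: str) -> None:
        action = self.queue[index]
        action.approved = True
        print(f"{reviewer} approved: {action.description}")
        action.execute()

gate = ApprovalGate()
gate.submit(PendingAction("Refund $500 to customer #42",
                          execute=lambda: print("refund issued")))
gate.approve_and_run(0, reviewer="ops-oncall")
```

The key design property is that the AI system never holds the privilege to execute the action itself; only the approval step does.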
5. Risk Management (EU AI Act-aligned Approach)
Pantera uses a risk-based approach to evaluate AI features. Before deploying material AI changes, we assess:
- Use case and potential user impact (including legal, financial, safety, or reputational harm)
- Data types processed (personal data, sensitive data, confidential business data)
- Security and privacy controls (access, retention, encryption, logging)
- Human oversight requirements and escalation paths
- Testing and monitoring requirements
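A lightweight way to operationalize the checklist above is a pre-deployment risk screen that maps assessment answers to a tier and a set of required controls. The sketch below is illustrative only; the factor names and tiering rules are assumptions for this example and are not a formal EU AI Act classification.

```python
from dataclasses import dataclass

@dataclass
class AIChangeAssessment:
    processes_personal_data: bool
    affects_rights_or_finances: bool
    fully_automated: bool  # no human review before actions take effect

    def risk_tier(self) -> str:
        # Hypothetical tiering: automation plus material user impact is "high".
        if self.affects_rights_or_finances and self.fully_automated:
            return "high"
        if self.affects_rights_or_finances or self.processes_personal_data:
            return "elevated"
        return "standard"

    def required_controls(self) -> list[str]:
        controls = ["acceptance testing", "monitoring"]
        if self.processes_personal_data:
            controls.append("privacy review")
        if self.risk_tier() == "high":
            controls.append("human-in-the-loop approval")
        return controls

change = AIChangeAssessment(processes_personal_data=True,
                            affects_rights_or_finances=True,
                            fully_automated=True)
print(change.risk_tier(), change.required_controls())
```

A screen like this makes the "assess before deploying" requirement auditable: the assessment record documents which factors were considered and which controls the tier triggered.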
6. Data Handling and Retention
- Customer data is processed only for legitimate business purposes and in accordance with contractual and legal requirements.
- We avoid storing prompts or outputs containing sensitive data unless necessary for security, troubleshooting, or audit purposes.
- We define retention periods for logs and operational records based on security and compliance needs.
7. Third-Party AI Providers
When using third-party AI services (e.g., model APIs), Pantera evaluates vendors for security, privacy, and reliability.
We document vendor risk, usage constraints, and applicable contractual terms, and we configure vendor settings to reduce data exposure where possible.
8. Incident Management
AI-related incidents (e.g., sensitive data exposure, unsafe outputs, unauthorized actions, or systemic errors) are handled through Pantera’s incident response process.
We perform root cause analysis, implement corrective actions, and document outcomes.
9. Training and Awareness
- Personnel working on AI features receive guidance on secure prompting, privacy, and safe deployment practices.
- Access to AI tooling and production settings is restricted to authorized roles.
10. Governance and Review
Policy owner: COO
Approver: CEO
Review cadence: At least annually, and upon material changes to AI systems or applicable regulations.
11. Contact
For questions about this policy, contact: compliance@getpantera.com