AI POLICY STATEMENT
AsterMindAI Inc
Last Updated: February 23, 2026
This AI Policy Statement describes how AsterMindAI Inc ("AsterMind", "we", "us", or "our") develops, deploys, and governs artificial intelligence technologies within our software, products, and services, including all software as a service (SaaS) offerings, software development kits (SDKs), application programming interfaces (APIs), and related technologies (collectively, the "Services"). This policy reflects our commitment to responsible, transparent, and ethical AI practices. For information about how we collect and use personal data, please see our Privacy Policy. For the terms governing your use of our Services, please review our Terms of Service.
1. INTRODUCTION
AsterMind leverages artificial intelligence to help organizations streamline operations, enhance decision-making, and deploy AI capabilities effectively across their enterprises. Our product portfolio spans a range of AI technologies designed to address diverse organizational needs: from deterministic machine learning toolkits that provide reproducible, mathematically grounded analytics, to AI orchestration platforms that enable organizations to manage and deploy AI workflows at scale, to conversational AI solutions that deliver intelligent, context-aware interactions.
We recognize that the adoption of AI technologies carries significant responsibilities. Organizations deploying AI must be able to trust that their technology partners operate with integrity, transparency, and accountability. This policy outlines our principles, commitments, and practices for responsible AI across all product lines. It is intended to provide our customers, partners, regulators, and the public with a clear understanding of how we approach the development and deployment of AI within our Services.
2. NO USE OF CUSTOMER DATA FOR AI MODEL TRAINING
AsterMind does not use customer data to train, fine-tune, or improve AI models. Any AI-driven features within our Services operate on pre-trained models, deterministic algorithms, or rule-based automation, none of which retains or learns from individual user interactions. Customer data processed through our Services remains your property at all times. This commitment applies to all AsterMind products and services, including when our Services integrate with third-party AI providers at your direction and configuration.
Specifically:
- We do not use customer data to train, fine-tune, or otherwise improve AI models or machine learning algorithms.
- We do not collect or store user-generated content for AI training purposes.
- We do not share customer data with third parties for AI training purposes.
- Customer data processed through our Services remains your property and is handled in accordance with our contractual obligations to you.
- This commitment extends to all metadata, usage patterns, and derivative data generated through your use of our Services.
For complete details on how we handle your data, please refer to our Privacy Policy and our Data Processing Agreement.
3. THIRD-PARTY AI MODELS AND INTEGRATIONS
Certain AsterMind Services may integrate with or route requests to third-party AI model providers (such as OpenAI, Anthropic, Google, or other providers) at your direction and configuration. Transparency about these integrations is critical to maintaining trust with our customers, and we are committed to clearly communicating how third-party AI models interact with your data.
When our Services integrate with third-party AI models:
- Your data is transmitted to those providers only when you explicitly configure the integration and initiate the request.
- Third-party providers are subject to their own terms of service and privacy policies, which govern their handling of data they receive.
- We do not send your data to third-party AI providers without your explicit configuration and consent.
- We implement technical safeguards to minimize the data transmitted to third-party providers to only what is necessary for the requested operation.
For our locally deployed SDK products: these are self-contained machine learning toolkits that run entirely on your systems. They do not connect to third-party AI providers and do not transmit data externally. All model training and inference occurs locally within your infrastructure.
We maintain a list of integrated third-party AI providers and their relevant policies, available upon request at legal@astermind.ai.
4. AI TRANSPARENCY AND EXPLAINABILITY
We design our AI features to be transparent and explainable wherever technically feasible. Transparency in AI means that users and stakeholders can understand what an AI system is doing, why it produces certain outputs, and what data it relies upon. Explainability means that the reasoning behind AI-generated outputs can be articulated in terms that are meaningful to the people affected by those outputs.
Our transparency commitments include:
- Users retain control over key decision-making processes and can configure AI behavior to align with their organizational requirements.
- Where our features do not rely on large language models, AI-generated recommendations are based on predefined business rules or customer-configured parameters rather than opaque algorithmic processes.
- For our deterministic ML products: models use deterministic, closed-form mathematical operations that produce consistent and reproducible results. Given the same input data, these models will always produce the same output.
- For AI features that integrate with large language models (LLMs): we provide clear disclosure that outputs are AI-generated and may vary between requests due to the probabilistic nature of these models.
- We publish documentation explaining how our AI features work, what data they use, and their known limitations.
- We provide audit trails and logging capabilities so that customers can review AI-driven actions within their environments.
5. AI GOVERNANCE STRUCTURE
AsterMind maintains an internal AI governance framework responsible for overseeing the development, deployment, and ongoing operation of AI within our products. Effective governance ensures that AI systems are developed and operated in a manner consistent with our values, applicable laws, and the expectations of our customers and stakeholders. We believe that robust governance is a prerequisite for trustworthy AI.
Our governance practices include:
- Regular review of AI systems for safety, fairness, accuracy, and compliance with applicable laws and regulations.
- A documented approval process for deploying new AI features, including risk assessment and impact evaluation.
- Ongoing monitoring of AI system performance, outputs, and user feedback to detect issues early.
- Periodic assessment of AI-related risks, including risks to privacy, security, fairness, and reliability.
- Clear escalation procedures for AI-related incidents or concerns.
We designate responsibility for AI ethics and safety at the leadership level, ensuring that AI governance receives appropriate executive attention and resources. Our governance framework is reviewed and updated at least annually to reflect evolving best practices, regulatory requirements, and lessons learned from our operations.
6. AI MODEL DEVELOPMENT AND LIFECYCLE
AsterMind employs rigorous processes throughout the AI model development lifecycle to ensure that our products meet high standards for quality, safety, and reliability. Our approach varies by product line, reflecting the distinct technical characteristics of each.
Deterministic ML Products
Built on Extreme Learning Machine (ELM) algorithms, these products use deterministic, closed-form mathematical operations. They are not deep learning models and do not exhibit the unpredictability associated with large neural networks. Models are developed, tested, and validated against established benchmarks, then released through our standard software development lifecycle with full version control and documentation.
Cloud-Based AI Services
When integrating with third-party LLMs, we implement guardrails, content filtering, and output validation to ensure that AI-generated content meets quality and safety standards. We continuously monitor model performance and update integration configurations as provider models change or new versions are released.
All AI features, regardless of product line, undergo the following before release:
- Functional testing to verify correct operation across expected use cases and edge cases.
- Bias and fairness evaluation to identify and mitigate potential discriminatory outcomes.
- Security review to assess vulnerabilities, including prompt injection, data leakage, and adversarial attacks.
- Performance benchmarking to ensure AI features meet latency, accuracy, and reliability requirements.
We maintain version control and detailed change logs for all AI-related components, enabling traceability and accountability throughout the model lifecycle.
7. DATA PRIVACY AND SECURITY
AsterMind adheres to industry-leading security and privacy standards to protect customer data across all of our Services. We recognize that the use of AI technologies introduces unique privacy and security considerations, and we address these through comprehensive technical and organizational measures.
- We comply with GDPR, CCPA/CPRA, VCDPA, and other applicable data protection laws in the jurisdictions where we and our customers operate.
- AI-driven features process data securely without storing sensitive customer information beyond what is necessary for operational use.
- Strict access controls, including role-based access and the principle of least privilege, ensure only authorized personnel can access customer data.
- We implement encryption in transit (TLS 1.2 or higher) and at rest for all AI-related data processing.
- We conduct regular security assessments of our AI systems, including penetration testing and vulnerability scanning.
- We maintain incident response procedures specific to AI-related security events.
All data processed by AI features is subject to our Privacy Policy and Data Processing Agreement. For enterprise and government customers, we offer additional security documentation and certifications upon request.
Model and Algorithm Protection
AsterMind's proprietary models, algorithms, trained parameters, weights, and machine learning artifacts constitute valuable trade secrets and intellectual property of AsterMindAI Inc. The following protections apply across all deployment models, including SaaS, on-premises, and edge deployments:
- Customers may not extract, copy, or attempt to derive proprietary models, algorithms, trained parameters, weights, embeddings, or other machine learning artifacts from the Services, whether through technical means, observation, or systematic querying of outputs.
- For on-premises and edge deployments, all proprietary AI components are delivered as compiled object code or encrypted artifacts only. No source code, model weights in readable format, or unprotected algorithmic implementations are provided.
- Any attempt to isolate, extract, or reverse-engineer AI models or algorithms from the Services' runtime environment constitutes a material breach of the Terms of Service and EULA.
- Customers may not use the outputs of the Services to train, distill, or create competing models or algorithms that replicate or materially imitate the functionality of AsterMind's proprietary technology.
8. FAIRNESS AND BIAS MITIGATION
AsterMind is committed to ensuring that our AI-driven products operate fairly and without unlawful discrimination. We take a proactive approach to identifying and mitigating bias across our product portfolio, recognizing that different AI technologies present different bias risks.
Deterministic ML Products
Our deterministic ML products apply the same closed-form mathematical operations to every input, and because these calculations are fully predictable, their behavior can be audited end to end. Any bias in their outputs is a function of training data composition, not emergent model behavior, making it detectable and correctable through data quality practices.
LLM-Integrated Features
For AI features that integrate with large language models, we implement content filtering and output monitoring to detect and mitigate biased or discriminatory outputs. We continuously evaluate the performance of our guardrails and update them as needed.
Additional fairness commitments:
- Customers can configure automation rules to fit their specific business and compliance needs, including fairness-related requirements.
- We regularly review AI logic and outputs to identify and address unintended biases in automated workflows.
- We evaluate training data for representativeness and potential sources of bias before use.
- We welcome reports of biased or unfair AI outputs at ai-ethics@astermind.ai.
9. HUMAN OVERSIGHT
AsterMind believes that AI should augment human decision-making, not replace it. We design our products to keep humans informed and in control, particularly in contexts where AI-assisted decisions may have significant consequences for individuals or organizations.
- AsterMind recommends human oversight for all AI-assisted decisions, particularly those affecting individuals' rights, opportunities, or access to services.
- For high-stakes decisions in healthcare, legal, financial, safety-critical, or employment contexts, AsterMind requires customers to maintain human-in-the-loop processes that ensure a qualified person reviews AI outputs before action is taken.
- Our products are designed to augment human decision-making by providing information, analysis, and recommendations that support, but do not replace, human judgment.
- We provide configuration options for customers to require human review and approval before AI recommendations are acted upon within automated workflows.
- The ultimate responsibility for decisions made using AI-generated outputs rests with the customer and their designated decision-makers.
10. AI-GENERATED OUTPUTS AND OWNERSHIP
Clarity about the ownership and status of AI-generated outputs is essential for customers who rely on our Services in their operations. AsterMind is committed to providing straightforward terms regarding output ownership.
- You own the outputs generated from your inputs using AsterMind Services, subject to the terms of your applicable service agreement.
- AsterMind does not claim ownership of customer-generated outputs produced through our Services.
- Important note: AI-generated outputs may not be eligible for copyright protection under current United States law. The U.S. Copyright Office has indicated that works generated by AI without sufficient human authorship may not qualify for copyright registration. Customers should consult their own legal counsel regarding intellectual property rights in AI-generated content.
- Outputs generated by AI features are provided for informational purposes and should be verified by qualified professionals before being relied upon for critical decisions.
11. LIMITATIONS AND DISCLAIMERS
While we strive to deliver high-quality, reliable AI features, it is important that customers understand the inherent limitations of AI technologies. AI systems, including those integrated into AsterMind Services, may produce inaccurate, incomplete, biased, or misleading results under certain conditions.
- AI-generated outputs are not a substitute for professional judgment in legal, medical, financial, engineering, or other specialized fields.
- Customers are responsible for validating AI outputs before relying on them for operational or business-critical decisions.
- Deterministic ML Products: Although these models are deterministic and reproducible, their results depend on the quality, completeness, and representativeness of the training data provided by the customer. Poor or biased training data will produce poor or biased results.
- Cloud-Based AI Services: Features integrating with third-party LLMs are subject to the inherent limitations of those models, including the potential for hallucination (generating plausible but incorrect information), factual errors, inconsistent responses, and sensitivity to input phrasing.
- AsterMind does not guarantee that AI features will be error-free, uninterrupted, or suitable for any particular purpose.
For complete warranty and liability limitations, please refer to our Terms of Service.
12. ALIGNMENT WITH AI FRAMEWORKS AND STANDARDS
AsterMind is committed to aligning our AI practices with recognized national and international AI governance frameworks. We actively monitor the evolving regulatory and standards landscape to ensure our practices remain current and compliant. The following describes our alignment with key frameworks.
NIST AI Risk Management Framework (AI RMF)
Our AI governance aligns with the four core functions of the NIST AI RMF:
- Govern: We maintain an AI governance structure with designated leadership responsibility, documented policies, and clear accountability for AI-related decisions.
- Map: We identify and document AI-related risks for each product and feature, including risks to individuals, organizations, and society.
- Measure: We assess AI risks through systematic testing, ongoing monitoring, and periodic review using defined metrics and evaluation criteria.
- Manage: We implement controls, safeguards, and remediation procedures for identified risks, and we prioritize risk responses based on severity and likelihood.
EU AI Act
We monitor developments under the EU AI Act and are committed to compliance with applicable requirements as they come into effect. Our products are provided as general-purpose tools that can be configured by customers for a wide range of use cases. Customers deploying our Services within the EU are responsible for compliance with AI Act obligations specific to their use cases and risk classifications. We commit to transparency requirements applicable to AI system providers and will provide customers with the technical documentation and information they need to fulfill their own compliance obligations.
DoD Responsible AI Principles
Our practices align with the U.S. Department of Defense Responsible AI Principles:
- Responsible: We maintain governance and accountability structures that ensure AI decisions can be traced to responsible individuals and teams.
- Equitable: We design AI features to minimize bias and unfair discrimination, and we provide tools for customers to evaluate fairness in their deployments.
- Traceable: We maintain documentation, audit trails, and logging for AI processes so that outputs can be understood and traced.
- Reliable: We test and validate AI systems before deployment and monitor them continuously in production to ensure consistent, dependable performance.
- Governable: We build in human oversight mechanisms and the ability to deactivate, override, or roll back AI features when necessary.
White House Blueprint for an AI Bill of Rights
We support and design our Services in alignment with the five principles outlined in the White House Blueprint for an AI Bill of Rights:
- Safe and Effective Systems: We test and monitor AI systems to ensure they perform as intended without causing harm.
- Algorithmic Discrimination Protections: We evaluate our AI features for bias and take steps to prevent unlawful discrimination.
- Data Privacy: We protect customer data and provide transparency about data collection and use practices.
- Notice and Explanation: We disclose when AI is being used and provide understandable explanations of how it affects outcomes.
- Human Alternatives, Consideration, and Fallback: We support human oversight and provide mechanisms for users to opt out of AI-driven processes where appropriate.
13. AI SAFETY AND INCIDENT RESPONSE
AsterMind takes AI safety seriously and maintains procedures for identifying, investigating, and resolving AI-related incidents. If an AI feature produces harmful, inaccurate, or biased outputs that are reported to us, we will investigate promptly and take appropriate action to address the issue.
To report AI-related concerns, contact us at ai-ethics@astermind.ai.
Our AI incident response process includes:
- Acknowledgment: We acknowledge receipt of AI-related incident reports within two (2) business days.
- Investigation: We conduct a thorough investigation and root cause analysis to understand the nature and scope of the issue.
- Remediation: We implement remediation or mitigation measures to address the identified issue and prevent recurrence.
- Notification: We notify affected customers where appropriate, including details about the issue and the steps taken to resolve it.
- Documentation: We document lessons learned and incorporate findings into our AI governance and development processes.
For critical safety issues affecting multiple customers or involving potential harm, we will issue a service advisory through our standard customer communication channels. We also maintain internal escalation procedures to ensure that serious AI safety issues receive immediate attention from senior leadership.
14. MODEL AND POLICY UPDATES
The AI landscape is evolving rapidly, and AsterMind is committed to keeping our policies, practices, and products current with technological developments, regulatory changes, and emerging best practices. AsterMind reserves the right to update this AI Policy Statement as our products and the regulatory landscape evolve.
- Material changes to this policy will be communicated via email notification to account holders and posted on our website at least thirty (30) days in advance of taking effect.
- Significant changes to AI model capabilities, third-party AI provider integrations, or data handling practices will be communicated to affected customers through our standard notification channels.
- We publish release notes documenting AI-related changes with each product update, available through our customer portal and documentation site.
- Continued use of the Services after the effective date of any policy update constitutes acceptance of the updated policy.
- We encourage customers to review this policy periodically and to contact us with any questions about changes.
15. COMPLIANCE AND ACCOUNTABILITY
AsterMind takes full responsibility for the ethical development and deployment of AI within our products. We are committed to accountability at every level of our organization, from engineering teams to executive leadership. We believe that accountability requires not only internal discipline but also openness to external scrutiny and feedback.
- We conduct annual reviews of our AI practices and policies to ensure they remain aligned with our commitments and applicable requirements.
- We cooperate with regulatory authorities and respond to lawful inquiries about our AI practices in a timely manner.
- We engage with industry groups, standards bodies, and civil society organizations to contribute to the development of responsible AI norms.
- We provide mechanisms for customers, users, and affected individuals to raise concerns about our AI practices and receive meaningful responses.
For questions, concerns, or to request additional information about our AI practices, you may contact us at:
AsterMindAI Inc
706 Scottingham Terrace
North Chesterfield, VA 23236
United States
- AI Ethics: ai-ethics@astermind.ai
- Privacy: privacy@astermind.ai
- Legal: legal@astermind.ai
- Website: https://astermind.ai
Effective Date: February 23, 2026