AI & Uploaded Files Policy

V1.1 - Last edited on Feb 24, 2026

1. Policy Statement

At PART3 TECHNOLOGIES CORP. (“Part3”, “we”, “us”, “our”, the “Company”), we recognize our role in shaping the use of artificial intelligence (“AI”) in the provision of our services and products. We have been using AI technology for our internal business purposes with appropriate safeguards, at a scale and pace that enables us to experiment, learn, and hone best practices for the responsible use of AI technologies. As always, we are committed to sharing our learnings and practices as quickly as possible. This policy outlines the principles, guidelines, and responsibilities regarding the use of third-party AI applications (the “AI Applications”) at Part3.

This policy applies in addition to our Terms of Use, Privacy Policy, Cookie Policy, Terms of Service Agreement, and any other product-specific terms and conditions.

2. Purpose

In this policy, Part3 shares how we use AI Applications responsibly, how we make decisions about releasing and using any data or results arising from the use of the AI Applications, how we support our customers as they use any data or results arising from the use of the AI Applications, and how we learn and evolve our use of AI Applications. These initiatives continue to move us toward our goal — developing and deploying safe, secure, and trustworthy services that empower people and organizations. The purpose of this policy is to ensure that AI is implemented into our products and services ethically, responsibly, and in alignment with our mission, vision and values as an organization.

3. Scope

To deliver our products and services, we leverage a range of AI Applications across development, operations, and customer-facing features. The following outlines the key tools and platforms we use, and the purposes they serve.

AI Providers & Tools

We use AI Applications from OpenAI, Inc. (including ChatGPT and other language and code models) and Google DeepMind Limited (Google Gemini) for core capabilities including content generation, data analysis, decision support, and research and development. We also use Claude by Anthropic, Inc. across various productivity and development workflows.

Core Development

Agentic coding tools support feature development, automated code review, and code quality analysis.

Productivity & Operations

We use OpenAI, Anthropic, and Google AI tools to support general productivity tasks, internal operations, support triage, and CRM workflows.

Integrations & Infrastructure

We connect AI capabilities to our internal tools — including Linear, Slack, Notion, and Figma — through various Model Context Protocol (MCP) servers. Our product's AI features are powered directly through the OpenAI and Gemini APIs.

Reasons for AI Use

We use AI Applications to accelerate software development, automate code review and quality assurance, power customer-facing features such as submittal review and spec analysis, streamline internal operations, and support research, writing, and decision-making across the organization.

4. Policy

Ethical and Responsible AI Use

We are committed to upholding ethical practices across all aspects of our operations, including our use of AI Applications. The following principles guide our approach.

a. Transparency

Trust is built through transparency. We aim to be clear and open about which AI Applications we use, how we use them, and the role they play in our products, services, and internal decision-making. We disclose the use of AI to our customers and users where it is relevant, and we strive to ensure accountability in any context where AI Applications contribute to decisions or generate outputs that may affect others.

b. Privacy and Data Security

We strive to ensure that use of AI Applications within our systems is securely implemented, assessed for risk, and monitored regularly. Our team works hard to ensure that AI is used responsibly, is operating correctly, and is compliant with applicable laws, regulations, policies, procedures, standards, guidelines, and best practices.

Password Requirements

For internal systems, passwords must be at least 14 characters long, contain at least 3 of 4 character classes (uppercase, lowercase, number, special character), expire every 365 days where MFA is not in use, lock the account after 5 failed attempts, and must not match any of the last 6 passwords used. For customer-facing systems, passwords must be at least 10 characters long and are subject to the same complexity and lockout requirements; no expiration is required.

Additional rules include: passwords may never be written down or stored in cleartext; individual passwords must never be shared; compromised passwords must be changed immediately and reported to the Company's security lead; first-time system-generated passwords must be changed on initial login; and passwords must only be distributed via the company password manager, never via Slack or email. If a password must be shared, the username and password must be sent in separate communications.

SSO is required wherever possible. Where SSO isn't available, MFA is required. Attempting to disable MFA is a terminable offense.
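As an illustration only, the core password checks above can be sketched as a short validation routine. The function and constant names here are hypothetical and do not describe any actual Part3 system:

```python
import re

MIN_LENGTH_INTERNAL = 14   # internal systems
MIN_LENGTH_CUSTOMER = 10   # customer-facing systems
PASSWORD_HISTORY = 6       # previous passwords that may not be reused

def complexity_classes(password: str) -> int:
    """Count how many of the four character classes are present."""
    patterns = [
        r"[A-Z]",         # uppercase
        r"[a-z]",         # lowercase
        r"[0-9]",         # number
        r"[^A-Za-z0-9]",  # special character
    ]
    return sum(bool(re.search(p, password)) for p in patterns)

def is_compliant(password: str, previous: list[str], internal: bool = True) -> bool:
    """Check a candidate password against the policy: minimum length,
    3 of 4 complexity classes, and no reuse of the last 6 passwords."""
    min_len = MIN_LENGTH_INTERNAL if internal else MIN_LENGTH_CUSTOMER
    return (
        len(password) >= min_len
        and complexity_classes(password) >= 3
        and password not in previous[-PASSWORD_HISTORY:]
    )
```

Lockout after 5 failed attempts and expiry tracking would live in the authentication system itself rather than in a stateless check like this one.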

Encryption Requirements

Data at rest must use AES-256 at minimum. Data in transit must use TLS 1.2 or higher — FTP and deprecated protocols like TLS 1.1 or lower are prohibited. All staff laptops and hardware must have full disk encryption enabled; disabling it is a policy violation subject to termination.

Encryption keys must be protected with appropriate access controls, rotated periodically per industry best practices, stored separately from the data they encrypt, and accessed only via MFA-protected systems. Any compromised key must be reported to leadership immediately and rotated as soon as reasonably possible.
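As a minimal sketch of the data-in-transit rule, a client enforcing it might construct its TLS context so that anything below TLS 1.2 is rejected. This uses Python's standard `ssl` module; the function name is illustrative, not part of any Part3 codebase:

```python
import ssl

def make_tls_context() -> ssl.SSLContext:
    """Build a client-side TLS context that refuses TLS 1.1 and lower,
    per the data-in-transit requirement above."""
    ctx = ssl.create_default_context()  # secure defaults: cert + hostname checks
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject deprecated protocols
    return ctx
```

Any connection negotiated through this context will fail the handshake if the peer only offers a deprecated protocol version.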

Any personal information (as defined in our Privacy Policy) collected and/or processed by any AI Application will be handled in accordance with applicable privacy laws and regulations to safeguard individual privacy and to ensure data security. Where required by applicable law, we will seek additional consent from users if their personal information will be input into an AI Application. 

Further, when personal information is processed by any AI Application, we:

  • ensure that the collection and use of personal information is limited to what is necessary for the purpose;

  • use anonymized or de-identified data where possible;

  • avoid function creep, and only use personal information for purposes identified in our Privacy Policy;

  • establish and abide by appropriate retention schedules for personal information;

  • ensure that any inferences created about individuals are for purposes specified and disclosed in our Privacy Policy, and that their accuracy can be reasonably assessed and validated;

  • treat any inferences generated about an identifiable individual as personal information;

  • ensure that personal information is accurate, complete and up to date;

  • evaluate the impacts of any accuracy issues or limitations disclosed by the provider or developer of the AI Application;

  • take steps to ensure that any outputs from an AI Application are accurate, especially if those outputs are used to make or assist in decisions about an individual or individuals;

  • ensure that training or other input data used by the AI Applications is lawfully collected, used, and disclosed, taking account of applicable privacy and intellectual property rights;

  • safeguard any personal information collected or used; and

  • maintain ongoing awareness of, and mitigations against, threats.
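As a simplified illustration of the anonymization and minimization principles above, a pre-processing step might strip recognizable identifiers from text before it reaches an AI Application. The patterns and names below are hypothetical; a production system would rely on a vetted PII-detection tool rather than two regular expressions:

```python
import re

# Hypothetical patterns for two common identifier types (illustrative only).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable personal information with placeholder tokens
    before the text is submitted to an AI Application."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```
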

c. Human Oversight

Use of AI Applications at Part3 is done under human supervision to ensure that AI-generated results are accurate, appropriate, and aligned with our goals and objectives. We only use AI Applications as a tool to support human decision-making and not to replace it entirely. Our employees confirm the validity and reliability of output produced by the AI Applications and we add additional oversight and review of outputs to prevent discriminatory outcomes based on race, gender, sexual orientation, disability, or other protected characteristics.

d. Compliance and Accountability

Part3 is committed to complying with all relevant laws, regulations, and industry standards related to use of AI Applications. We will at all times hold ourselves accountable for the proper implementation of this policy and will continuously strive to improve our practices. 

5. Risk Analysis and Review

On an ongoing basis, we take steps to identify, prevent and manage risks relating to use of the AI Applications. Part3 works to identify potential harms, measure their likelihood to occur, and develop mitigation systems to address them. We regularly conduct privacy impact and security risk assessments to ensure that security, safety, confidentiality, and privacy of our customers and users are protected while continuing to promote and empower the use of AI to benefit our business.

Our Development team meets at least once per quarter to:

  • align internal roles and responsibilities; 

  • establish requirements for safe, secure, and trustworthy AI use and deployment; 

  • select appropriate and useful AI Applications;

  • identify and prioritize AI risks;

  • systematically measure prioritized risks to assess prevalence and the effectiveness of our mitigation practices; and

  • manage and/or mitigate identified risks.

Align Internal Roles and Responsibilities and Establish Requirements for Safe, Secure, and Trustworthy AI Use and Deployment

We’ve implemented policies and practices to encourage a culture of risk management. The AI Applications used by Part3 are carefully selected to ensure that they adhere to company policies, including our security, privacy, and data protection policies. We update these policies as needed, informed by regulatory developments and feedback from internal and external stakeholders.

Prior to using and implementing any AI Application, Part3’s Development team reviews available information about the AI Application, its capabilities, and its limitations and then our team maps, measures, and manages any identified risks. If significant risks are identified with respect to an AI Application, we request additional review by experts within and outside of the Company. Further, our policies, programs, and best practices include input from a diverse group of internal and external stakeholders. 

To ensure transparency of our AI Application use, we make available materials to customers, users, employees and other stakeholders that explain our use of AI Applications and any significant risks associated therewith. We also maintain an internal database of all AI Applications used for the production of, and integrated into, our products and services. 

Select Appropriate and Useful AI Applications

All AI Applications used with respect to our products and services are properly vetted by our Development team, and we maintain a list of all Part3-approved AI Applications. Our employees may use only approved AI Applications in connection with our products and services. When selecting AI Applications for our products and services, we do our best to ensure that they:

  • use anonymized, synthetic, or de-identified data rather than personal information, when possible;

  • respect privacy laws and best practices;

  • do not re-identify any previously de-identified data; 

  • do not collect, use, or disclose personal information in a manner that is otherwise unlawful;

  • do not profile or categorize individuals in a manner that may lead to unfair, unethical, or discriminatory treatment;

  • do not collect, use, or disclose personal information for purposes that are known or likely to cause significant harm to individuals or groups;

  • do not create content for malicious purposes;

  • do not deliberately nudge individuals into divulging personal information; and

  • do not generate or publish false or defamatory information.

When selecting AI Applications, we first establish that the tool is both necessary and likely to be effective in achieving the specified purpose. We also ensure that procedures exist for individuals to access and correct any personal information collected about them, especially where that information may be included in outputs generated in response to a prompt. Where an AI Application is used as part of a decision-making process, we maintain adequate records to allow requests for access to information about that decision to be meaningfully fulfilled.

Identify and Prioritize AI Risks

Identifying risks is a vital first step toward measuring and managing risks associated with use of AI because it informs our decisions about planning, mitigations, and the appropriateness of an AI Application for a given context. 

Prior to using and implementing any AI Application, our Development team begins with an impact assessment to identify potential risks associated with the use of that AI Application and their associated harms. The impact assessment concludes with strategies for mitigating any identified risks.

Our team also develops processes for identifying and analyzing privacy and security risks, like security threat modeling, to help us inform our understanding of risks and mitigations for use and implementation of AI Applications.

Systematically Measure Prioritized Risks to Assess Prevalence and the Effectiveness of Our Mitigation Practices

We’ve implemented procedures to measure AI risks and related impacts to inform how we manage these considerations when using AI Applications. For instance, we have established metrics to measure identified risks associated with the use of AI Applications, and we perform regular performance testing of our mitigation strategies to measure their effectiveness.

Manage and/or Mitigate Identified Risks

We manage and mitigate identified risks at the platform and application levels. We also work to safeguard against previously unknown risks by building ongoing performance monitoring, feedback channels, and processes for incident response. Only after completing the above steps do we implement the use of an AI Application into our products and services. We continually monitor, track, and evaluate the cybersecurity of our systems and we will use metrics to measure and understand systemic issues. This monitoring will include detection of unauthorized data access or exfiltration and modifications to security configuration from either insider or external actors.

We consistently disclose the role of AI Applications in interactions with users and AI-generated content. 

Our Development team also implements processes to monitor performance and collect user feedback to respond when our products or services don’t perform as expected. 

Lastly, employees across the Company must report AI uses to our senior management team for in-depth review and oversight. We ensure that our employees use AI Applications only when necessary and proportionate, and that they do not input content containing personal information into public AI technology services; we likewise encourage our employees and users to avoid inputting personal information into any AI Application. We offer our employees education and training on the responsible use of AI Applications, including role-based training for specific and unique AI Applications, and we are committed to ensuring their understanding of the ethical considerations, privacy concerns, and potential biases associated with the use of AI Applications.

Ongoing Processes

As we expand and evolve our use of AI Applications, we continue to build on the above practices. We implement strategies to not only identify and mitigate potential risks but also to protect and safeguard Part3’s services and products and the data of our users and customers. For example:

  • we employ strong identity and access control, and we use holistic security monitoring (for both external and internal threats) with rapid incident response and continuous security validation;

  • we consistently review the literature and publications made available by the providers of the AI Applications as well as governmental authorities to better understand the AI Applications, the risks associated with the use thereof as well as emerging safety and security issues;

  • we use threat intelligence to inform our cybersecurity programs;

  • we invest in preventing, detecting, and disrupting abusive and malicious use of our infrastructure, technologies, and products;

  • we perform periodic security validation checks to validate our detection efficacy and make improvements where appropriate;

  • we ensure that impacted individuals are provided with an effective challenge mechanism for any administrative or otherwise significant decision made about them.

We continually evaluate the validity and reliability of each AI Application for its intended purpose throughout its intended lifecycle. We are consistently evaluating whether a specific AI Application is necessary and whether there are other more privacy-protective technologies that can be used to achieve the same purpose. 

As we learn more about how AI is used, we continue to iterate on our requirements, review processes, and best practices.

6. Periodic Review

This policy will be subject to periodic review to ensure its relevance, effectiveness, and alignment with evolving industry standards, laws and regulations and societal values.

7. Conclusion

This policy reflects Part3's commitment to leveraging AI responsibly and ethically. While AI may be employed to support our administrative tasks, we remain dedicated to maintaining the human touch in our products and services and upholding the values and principles that define our organization.