1 Purpose
The purpose of this policy is to ensure that Artificial Intelligence (AI) technology is used in a responsible and safe manner.
Artificial Intelligence poses many risks, some of which are not fully understood, and the technology is constantly evolving. For this reason, this policy provides both general principles and practical rules. The general principles guide our response not only to current but also to future AI risks and technologies. The practical rules describe the processes we use at Engage Hub to ensure the general principles are embedded in our use of AI.
2 Audience
This policy applies to anyone working for Engage Hub.
3 Policy
3.1 General Principles
Engage Hub is committed to using AI in a way that follows the OECD AI Principles (2019), which “promote use of AI that is innovative and trustworthy and that respects human rights and democratic values.”
In particular:
- AI should be used for beneficial outcomes for people and the planet.
- AI should be used in a way that respects the rule of law, human rights, and democratic values.
- AI should respect freedom, dignity, and autonomy.
- AI should respect privacy and data protection.
- AI should be non-discriminatory and support equality, diversity, and fairness.
- AI should be used in a transparent way. When AI is in use, meaningful and transparent information about how it is being used must be made available. Those affected by an AI system must be given the information they need to understand its outcomes and, if desired, to challenge them.
- AI systems should be robust, secure, and safe. Risks posed by AI systems must be systematically assessed and considered.
3.2 Practical Rules
There are two main ways in which AI can be used in our business:
- It can be used by people working for us to perform their job duties.
- It can be embedded by our developers in our products and services.
This policy provides general rules that apply to both, followed by specific rules for each of the two usages.
3.2.0 General Rules
- Engage Hub does not use or develop AI systems that pose significant or unacceptable threats to health, safety, or the fundamental rights of persons. These are the systems classified as “high risk” and “unacceptable risk” by the EU AI Act.
- All AI tools and technologies must be reviewed and approved by the Chief Security Officer before they are used.
- When possible, the output of an AI tool should be reviewed by a human before it is used (the “human in the loop” principle).
- The way AI is used in the company should be monitored for potential legal or ethical issues.
3.2.1 Usage of AI by People Working for Engage Hub
- AI tools must be reviewed and approved by the Chief Security Officer before they are used. The review will assess the information security risk posed by the tool but will also consider whether a tool is fair, reliable, inclusive, and transparent.
- Unless you have obtained approval to do otherwise, you must only feed public information into an AI tool. This is because AI may learn from any information given to it, and the information might later be extracted or discovered by another user of the tool (see the illustrative sketch after this list). In particular, never feed into an AI tool:
• Any personal information;
• Any company confidential information;
• Any company intellectual property (such as software code snippets or internal documents).
- You must get approval from the Chief Security Officer before you feed non-public information into an AI tool.
- You must review any output of an AI tool, before using it, to make sure it is correct. This is because AI tools can make significant, unexpected, and sometimes inexplicable mistakes.
- You must review any output of an AI tool, before using it, to make sure it is fair and not based on biased or discriminatory assumptions. This is because AI tools can be biased and discriminate against particular groups of people.
- You must always disclose the fact that you used an AI tool to complete a task.
- Engage Hub does not use AI tools to make decisions that might affect a person’s career or livelihood. For example, AI tools must never be used to:
• Manage or monitor workers;
• Select or assess candidates for a job;
• Assess someone’s work performance;
• Score open-ended exams or tests;
• Sort CVs during a job application process;
• Perform interviews during a job application process.
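As an illustration of the public-information rule above, the following sketch shows the kind of pre-submission check that could catch obvious personal data in a prompt before it reaches an external AI tool. The patterns and function name are hypothetical, not an approved Engage Hub control; a real deployment would rely on a vetted data-loss-prevention (DLP) tool.

```python
import re

# Illustrative patterns only: a real deployment would rely on a vetted
# data-loss-prevention (DLP) tool rather than ad-hoc regular expressions.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone number": re.compile(r"(?:\+44|\b0)\d{9,10}\b"),
}

def find_sensitive_data(prompt: str) -> list[str]:
    """Return the names of the sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarise this email from jane.doe@example.com"
if (findings := find_sensitive_data(prompt)):
    # Block the submission and point the user at the approval process.
    print(f"Do not submit: prompt appears to contain {', '.join(findings)}.")
```

A check like this is only a safety net; the primary control remains the rule above that non-public information must never be fed into an AI tool without approval.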
3.2.1.1 Generative AI Tools
Generative AI is AI used to generate text, images, music, or other content. Examples of Generative AI tools include ChatGPT, Google Bard, Bing Chat, LLaMA, Stable Diffusion, Midjourney, and DALL-E.
- The main use of generative AI in Engage Hub is for R&D purposes.
- You must always carefully review the output of a generative AI tool before it is used, to make sure that it:
• Is not misleading or factually incorrect;
• Is not offensive or discriminatory and is in line with the company’s values and ethical standards;
• Does not infringe the intellectual property rights of others.
- You must disclose any usage of generative AI.
- Clearly flag any content partially or entirely generated using generative AI (a minimal flagging sketch is given at the end of this subsection).
- Never use generative AI tools to generate software code or other key intellectual property.
- Do not use generative AI to generate anything that will be published, viewed, or presented to an external audience. Only use generative AI, if at all, for internal documents.
- Any exception to these rules must be approved by the Chief Security Officer.
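As a minimal illustration of the flagging rule above, the sketch below attaches a disclosure notice to AI-generated content before it is circulated internally. The notice wording and helper name are assumptions for illustration; any real notice should follow the company-approved template.

```python
# The notice wording below is an illustrative assumption, not approved text.
AI_DISCLOSURE = (
    "Notice: parts of this content were produced with a generative AI tool "
    "and were reviewed by a human before use."
)

def flag_ai_content(text: str) -> str:
    """Attach the AI-disclosure notice to generated content."""
    return f"{text}\n\n{AI_DISCLOSURE}"

draft = "Internal R&D summary ..."  # output of an approved generative AI tool
print(flag_ai_content(draft))
```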
3.2.2 Usage of AI in our Products and Services
- AI tools and technologies must be carefully evaluated from a security point of view and for compliance with the OECD AI Principles (2019) before they are used in our products and services.
- In particular, AI technology should be reviewed before it is used to make sure it does not have built-in bias.
- Any new usage of an AI tool or technology in our products, software, or services must be reviewed and approved by the Chief Security Officer.
- Extreme care must be exercised with data fed into cloud AI products, as the data might be used to train the product and might implicitly enter the public domain.
- Open-source AI tools that can be run inside on-premise sandboxes are preferred over cloud AI products, because it is easier to control what happens to the data fed into them (see the sketch at the end of this section).
- Customers must be clearly informed when the services or products that we supply to them use AI, and of the risks involved.
- Engage Hub must cooperate with customers to ensure any usage of AI is in line with the OECD AI Principles (2019).
- Any AI-based service that we provide must be compliant with all the applicable laws and regulations.
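To illustrate the preference for on-premise sandboxes expressed above, the sketch below sends a request to a hypothetical self-hosted model endpoint instead of a public cloud API. The URL, model name, and payload shape are assumptions, modelled on the OpenAI-compatible API that many open-source model servers expose.

```python
import requests

# Hypothetical endpoint for an open-source model running in an on-premise
# sandbox; the URL, model name, and payload shape are illustrative assumptions.
SANDBOX_URL = "http://ai-sandbox.internal:8080/v1/chat/completions"

response = requests.post(
    SANDBOX_URL,
    json={
        "model": "local-model",
        "messages": [
            {"role": "user", "content": "Summarise this internal note: ..."},
        ],
    },
    timeout=60,
)
response.raise_for_status()

# Data sent to the sandbox stays on the company network, so it cannot be
# used to train a third-party product.
print(response.json()["choices"][0]["message"]["content"])
```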