Training: AI Literacy Basics
Summary
As of February 2, 2025, everyone involved with artificial intelligence (AI) must possess an adequate level of AI literacy. This includes having the necessary knowledge, skills, and understanding to use AI tools effectively and responsibly. Being AI literate means being aware of the opportunities, risks, and potential harms associated with AI. Providers and deployers of AI systems must ensure that their staff, as well as anyone else who operates or uses AI systems on their behalf, have adequate knowledge and understanding of AI. When assessing this, it is important to consider each individual’s role, technical expertise, experience, education, and training. Additionally, the context in which AI systems are employed and the relevant legal framework must also be taken into account. Finally, the needs and characteristics of the individuals or groups affected by the AI systems should be considered.
Introduction
On August 1, 2024, the European Artificial Intelligence Act (AI Act) entered into force. This act aims to ensure that AI designed, developed, and/or used within the EU is trustworthy, lawful, ethically responsible, reliable, and robust. It mandates measures to protect the health, safety, and fundamental rights of individuals. One of the initial requirements set forth by the AI Act is AI literacy, as outlined in Article 4. It states:
“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”
Therefore, starting from February 2, 2025, the AI Act mandates that organizations ensure all employees and contractors working with AI systems possess an adequate level of AI literacy.
The AI Act
The AI Act is the first comprehensive legal framework for artificial intelligence, addressing the associated risks and positioning Europe as a leader in the global AI landscape. It outlines clear requirements and obligations for AI policymakers, researchers, designers, developers, providers, purchasers, deployers, auditors, and users concerning specific AI applications. At the same time, the regulation aims to minimize the administrative and financial burden on businesses, especially small and medium-sized enterprises, while enhancing their capacity for innovation.
The AI Act is a component of a larger set of policy measures aimed at promoting the development of trustworthy AI. These measures work together to ensure the safety and fundamental rights of individuals and businesses in relation to AI. Additionally, they enhance the utilization, investment, and innovation in AI throughout the European Union.
The Act establishes harmonized rules for the design, development, marketing, and use of AI systems in the EU, following a risk-based approach. These rules will be implemented gradually over the coming years, with the first provisions set to take effect in February 2025. Among these initial provisions is a legal requirement for AI literacy.
AI literacy
AI literacy encompasses the knowledge, understanding, and skills necessary to consciously and responsibly design, develop, sell, purchase, and use AI systems. This means that employees and contractors need to have a broad understanding of how AI works, the possibilities it offers, and the organizational, technical, legal, and ethical risks associated with its use.
The level of required AI literacy varies based on the specific tasks, roles, and functions within the organization, as well as the applications of AI for various stakeholders, including suppliers, customers, employees, patients, students, and government entities. For example, an employee who uses AI for simple tasks like creating summaries will need less in-depth knowledge than someone who employs AI to develop complex algorithms, conduct analyses, or make decisions.
It is important to note that the obligation to possess AI literacy applies to all employees using AI systems on behalf of the organization, including contracted workers.
What steps should I take?
While the AI Act does not explicitly outline how to achieve AI literacy, organizations can consider the following steps:
- Assessment of AI Landscape: Investigate the developments, strengths, weaknesses, opportunities, and threats related to AI that may impact the organization. Apply the concept of double materiality: assess both how external AI developments affect the organization and how the organization’s own use of AI affects people, society, and the environment.
- Develop an AI Strategy: Create an AI vision, ambition, strategy, and policy. Ensure AI literacy is integrated into the organization’s overall policies for AI, human resources, and procurement. Establish guidelines on whether to use certain tools and procedures for updates.
- Categorization of AI Use: Identify which risk category each AI application in your organization falls into: minimal or no risk (green), limited risk (yellow), high risk (orange), or unacceptable risk (red).
- Assess Current Knowledge Levels: Evaluate the current level of knowledge and experience of employees and contractors concerning the design, development, procurement, and use of AI.
- Training and Development: Offer webinars, courses, and training programs tailored to the various roles and responsibilities within and outside the organization.
- Targeted Training: Provide specific training for the AI applications utilized within the organization. By implementing these measures, organizations can fulfill the obligations of the AI Act and support the responsible and effective use of AI.
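The inventory-and-training steps above can be sketched in code. The following is a minimal illustration, not an official AI Act tool: the use-case names, roles, and the `literacy_gaps` helper are hypothetical, and only the four risk tiers (green/yellow/orange/red) come from the text above.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """AI Act risk tiers as named in the categorization step above."""
    MINIMAL = "green"
    LIMITED = "yellow"
    HIGH = "orange"
    UNACCEPTABLE = "red"


@dataclass
class AIUseCase:
    """One entry in an organization's AI inventory (hypothetical fields)."""
    name: str
    owner_role: str
    risk_tier: RiskTier
    training_completed: bool = False


def literacy_gaps(inventory: list[AIUseCase]) -> list[AIUseCase]:
    """Return use cases whose owners still need training,
    ordered from highest to lowest risk."""
    order = [RiskTier.UNACCEPTABLE, RiskTier.HIGH,
             RiskTier.LIMITED, RiskTier.MINIMAL]
    open_items = [u for u in inventory if not u.training_completed]
    return sorted(open_items, key=lambda u: order.index(u.risk_tier))


# Hypothetical inventory entries, for illustration only.
inventory = [
    AIUseCase("meeting summaries", "office staff",
              RiskTier.MINIMAL, training_completed=True),
    AIUseCase("CV pre-screening", "HR", RiskTier.HIGH),
    AIUseCase("customer chatbot", "support", RiskTier.LIMITED),
]

for gap in literacy_gaps(inventory):
    print(f"{gap.name}: {gap.risk_tier.value} risk, "
          f"owner '{gap.owner_role}' needs training")
```

Ordering gaps by risk tier mirrors the idea that the required depth of AI literacy scales with the risk of the application: high-risk uses such as the hypothetical CV pre-screening surface first.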
Key issues to investigate and discuss
Before designing, developing, selling, buying, or using (generative) AI, consider investigating and discussing the following crucial issues:
- Privacy, Ethics, and Governance: It is essential to explore privacy concerns, ethical frameworks, and methods for monitoring AI-generated content. Ensure transparency, obtain informed consent, and comply with various legal requirements.
- Cultural Sensitivities and Bias Mitigation: Develop AI systems that respect cultural differences and actively work to mitigate bias. This includes avoiding the reinforcement of stereotypes by ensuring diverse representation in both training data and output evaluation.
- Governance, Security, and Cyber-security: Implement strong governance frameworks to oversee AI development, manage ethical concerns, and protect AI systems from cyber threats, geopolitical risks, and disinformation campaigns.
- Collaboration, Regulation, and Development Models: Balance private, public, and open-source development approaches while navigating different international jurisdictions. Promote responsible innovation and work to prevent monopolistic behavior and concentration of power.
- Global Economic and Strategic Implications: Analyze the impact of AI on productivity, labor markets, global trade, and shifts in economic power. Ensure resilient AI supply chains and develop policies that prevent brain drain while securing strategic partnerships.
By the end of the course, participants will have a comprehensive understanding of the relevant questions to ask and answer in order to create a responsible AI literacy development program.
AI literacy is a journey, not a destination.
In the coming years, AI literacy will be an ongoing process of discovery, learning, innovation, and evaluation. This means that an ‘AI literacy plan’ will evolve into a program consisting of various steps: short-term actions focused on compliance and long-term strategies aimed at future-proofing.
It is beneficial to designate a function responsible for monitoring developments and fostering continuous learning. The concept of ‘algoprudence’ and the use of an algoprudence database are crucial in this context. Algoprudence emphasizes that, as organizations and responsible professionals, we must exercise specific, case-based, and decentralized judgment regarding the responsible use of algorithms. A one-size-fits-all approach is not feasible; instead, we should aim for broader guidelines while tailoring our strategies to individual circumstances.
Objective
The goal of this course is to understand what AI literacy means, why it is important, and what you can and cannot do with it. Participants will have the opportunity to discuss and share their own AI tools or use cases with one another.
It is important to note that this course is not focused on learning how to use AI tools. We will not practice with any AI tools; however, we will create a risk profile for a specific tool during the workshop. Additionally, this is not a technical training program, so we will not teach programming. Nevertheless, some technical terms and concepts will be explained. Lastly, we will not cover the AI Act and other related legislation in detail, as this is not a legal affairs course.
Target audience
This course is designed for anyone working with AI tools such as ChatGPT (i.e., users).
Furthermore, this course can serve as a basis for anyone who researches, designs, develops, sells, purchases, manages, deploys, assures, and/or makes policy for AI tools. After completing this introductory course, participants can also enroll in an advanced course tailored for this audience.
Results
By completing this course, you will:
- understand what AI literacy is, its importance, and its implications;
- be able to create an AI risk profile;
- respond to questions regarding the usefulness and necessity of AI, as well as its associated risks and opportunities for the organization, customers, and network;
- distinguish between tools and applications, and understand personal versus third-party usage;
- recognize when generative AI is reliable and when it is not;
- be aware of key issues, including data quality and accessibility within the value chain, energy consumption, technological advancements, biased algorithms, legal restrictions, ethical concerns, privacy issues, informed consent related to content use, vendor lock-in, and the influence of major tech companies.
For those interested in attending the advanced course afterward, you will:
- begin to establish an internal program for fostering AI literacy;
- develop a future radar to make timely and informed decisions regarding AI;
- gain better insights into the developments and challenges within AI, along with their impacts and uncertainties on vision, strategy, and planning;
- collaborate with customers, suppliers, and your network to innovate reliable and safe AI through initiatives like a Community of Practice.
Program proposal (4 hours)
- Introduction: Legislation and Context
- Social and Organizational Context
- Why? The Drivers for AI Literacy
- Letting Go of Conditioning and Dominant Organizational Logic
- Understanding AI and AI Literacy
- Key Issues to Consider
- What Needs to Be Arranged and What Can Be Arranged?
- Practical Examples and Use Cases
- Implications for Your Operating and Business Model, Role, and Organization
- How to Ensure AI Literacy
- Questions, Dialogue, and Statements
Form
The course focuses on awareness and knowledge transfer through practical examples, providing opportunities for participants to ask questions and share personal insights. It is interactive and uses statements to encourage dialogue. Along with concepts, theories, and practical examples, participants will also engage in discussions about their own real-world experiences in the field of AI.
In-Company
The program is customized based on an initial consultation with the client and participants. If necessary, additional current themes, topics, or interactive components can be incorporated.
Information
If you would like more information, please feel free to contact us.