What is the EU AI Act?
Have you already dealt with artificial intelligence in your company? Then you certainly know that the use of AI offers great opportunities, but also carries certain risks. The EU AI Act was created to promote those opportunities and to regulate the spread of AI technologies.
The hope: that artificial intelligence will be developed and used responsibly in the future. The EU AI Regulation therefore applies not only to companies within the EU, but also to international companies that want to bring products and services to the European market or are already represented there with their products.
Objective of the EU AI regulation
In short: The EU AI Act is a comprehensive body of legislation that sets out, in several chapters, the rules and requirements for the use of AI technologies. It gives companies an overview of what is and is not allowed with AI.
Simply put, it comprises:
- General Provisions
- Detailed market surveillance and enforcement rules
- Priorities such as security and transparency
- Classification and regulation of high-risk AI systems
Who is affected by the regulation?
Companies, developers, authorities and public institutions that use or offer AI systems. Companies therefore need to be mindful when employees use AI tools at work, or when customers interact directly with AI, for example via a chatbot on the website.
As a first step, check in which areas you use artificial intelligence; in the next step, determine what requirements this entails. If necessary, seek external help.
Classification of AI systems in the EU AI Act
How risky are the AI systems that are used in companies? That is the central question of the EU AI Act. AI systems are divided into classes based on their risk, which entail different AI requirements. The aim is to make companies aware of their technological responsibility.
The EU AI Act primarily distinguishes between two categories:
- Prohibited practices in the area of AI (Chapter II)
These include systems that pose a significant risk to society or individuals, such as systems for manipulation or social scoring.
They also include AI tools capable of creating fake content to give their disseminators an advantage, be it social or commercial. Just think of fake news about politicians.
- High-risk AI systems (Chapter III)
Systems in areas such as biometrics, education, critical infrastructure, law enforcement, and border control fall under this point. There are strict regulatory requirements.
Specific examples include autonomous vehicles and medical devices, but also facial recognition that identifies prevailing emotions. Just think of the effects if, for example, an employer could detect which emotions employees express in various interactions at work.
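To make the tiered logic concrete, here is a minimal Python sketch that expresses the Act's risk tiers as a simple lookup table. The tier names follow the categories discussed in this article; the example mapping and the `classify` function are purely illustrative and are not taken from the legal text.

```python
# Illustrative sketch only: the EU AI Act's risk tiers as a lookup.
# The mapping below is a hypothetical example, not legal advice.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice (Chapter II)"
    HIGH_RISK = "high-risk system (Chapter III)"
    LIMITED_RISK = "limited risk (transparency obligations)"
    MINIMAL_RISK = "minimal risk (no specific obligations)"

# Hypothetical assignment of example use cases to tiers, based on the
# examples given in the text.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.PROHIBITED,
    "manipulative deepfakes": RiskTier.PROHIBITED,
    "biometric identification": RiskTier.HIGH_RISK,
    "medical device": RiskTier.HIGH_RISK,
    "autonomous vehicle": RiskTier.HIGH_RISK,
    "image generator": RiskTier.LIMITED_RISK,
    "chatbot": RiskTier.LIMITED_RISK,
}

def classify(use_case: str) -> RiskTier:
    """Return the tier for a known example; default to minimal risk."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL_RISK)

print(classify("medical device").value)  # high-risk system (Chapter III)
```

In practice, classification depends on the system's concrete purpose and context of use, so a real assessment cannot be reduced to a lookup table; the sketch only mirrors the structure of the tiers.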
More transparency through the EU AI Act
There are also regulations for AI systems with manageable risks, but the requirements are limited: their risk can be reduced simply by increasing transparency. This applies in particular to AI tools that produce images, videos or voices. Although they pose a potential threat, because they could be used to create deepfakes, they must be distinguished from AI practices that negatively manipulate human behavior, which are prohibited outright.
Deepfakes are media content created with artificial intelligence. They can be found more and more frequently on social media and on the web in general, and they have the power to spread fake news. In the future, AI-generated content must be clearly labeled in order to curb fake news.
What about the transparency of frequently used AI tools?
The most frequently used AI tools, such as ChatGPT, MidJourney and DALL-E, are AI systems with limited risk. Transparency obligations that apply to these AI applications include:
- Disclosure that the content is AI-generated, and
- that people interact with artificial intelligence.
Images, audio and videos in particular represent the biggest challenge. If these models generate media content depicting real people, it could be considered a deepfake, provided it has the potential to negatively manipulate people. If, however, a purely fictional image is created, this rule does not apply.
Important: Depending on the model, requirements may also change. GPT-4, for example, is being scrutinized much more closely.
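As an illustration of the disclosure obligation, here is a minimal Python sketch of how a provider might attach an "AI-generated" label to a piece of generated media. The field names and label wording are assumptions for illustration only; the Act does not prescribe this specific format.

```python
# Illustrative sketch: attaching an AI-generation disclosure to a
# media record. Field names and label text are assumptions, not
# wording mandated by the EU AI Act.
def label_generated_media(media: dict, model_name: str) -> dict:
    """Return a copy of the media record with a disclosure attached."""
    labeled = dict(media)
    labeled["ai_generated"] = True
    labeled["generator"] = model_name
    labeled["disclosure"] = "This content was generated by AI."
    return labeled

image = {"kind": "image", "prompt": "a fantasy landscape"}
print(label_generated_media(image, "DALL-E")["disclosure"])
```

The point of the sketch is simply that the disclosure travels with the content itself, so that downstream viewers can always tell they are looking at AI-generated material.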
The most important provisions of the EU AI Act
The EU AI Regulation is new and is likely to grow to include further provisions. But there are already some regulations you should know in your company.
Prohibited practices and systems
In order to protect society, some AI practices are expressly prohibited in the European AI Regulation. Why? Because they are considered unethical or potentially dangerous. They include:
- Technologies that could negatively manipulate people
- AI systems that perform social assessments
- Specific applications that analyze faces and behaviors
Companies that have already developed comparable systems must bite the bullet: for them, it means adapting those systems or stopping their development. This will have a significant impact on certain industries; it remains to be seen how these companies will deal with the European AI law.
Requirements for high-risk AI systems under the European AI Regulation
The EU AI Act places high demands on high-risk AI systems. Providers must ensure that their systems meet these requirements before they are placed on the market. Transparency is also paramount for these systems, because regulators want to be able to assess the benefits and consequences of the applications.
Important duties include:
- Transparency of how things work
- Safety and quality management
- Documentation and logging of development and updates
- Conformity assessment
- CE mark
- Registration in an online database
To better understand, here are a few examples of the requirements for high-risk AI systems:
Transparency
A company that uses AI to evaluate credit must disclose how the artificial intelligence assesses creditworthiness, for example whether it takes payment history or previous credit defaults into account. This ensures that those affected understand how decisions are made. (Goodwin Law, LawNow)
Safety and quality management
A hospital that uses AI for medical diagnoses must take safety measures, such as regular testing, to ensure that the AI makes accurate diagnoses. (Goodwin Law)
Documentation and logging
A manufacturer of self-driving cars must document all AI updates and decisions in order to be able to understand how the vehicle behaves under various conditions. This ensures traceability. (LawNow)
Conformity assessment
Before a high-risk AI system, such as facial recognition, is brought to market, it must be verified through a conformity assessment. This can be done by third parties or through internal audits. (LawNow)
CE mark
After the AI system has passed the conformity assessment, a company that manufactures an AI-controlled robot for operations, for example, can apply the CE mark. This shows that the product complies with EU regulations. (LawNow)
Registration
A manufacturer of AI-powered medical devices must register the product in the EU database. This allows authorities to monitor whether it complies with the guidelines and is accessible. (LawNow)
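The six duties above can be read as a pre-market checklist: a high-risk system should not be placed on the market until every item is fulfilled. The following Python sketch is purely illustrative; the duty names are shorthand for the obligations described in the text, not terms from the Act.

```python
# Illustrative sketch: the duties for high-risk AI systems as a
# pre-market checklist. Duty names are shorthand, not legal terms.
HIGH_RISK_DUTIES = [
    "transparency",
    "safety_and_quality_management",
    "documentation_and_logging",
    "conformity_assessment",
    "ce_mark",
    "eu_database_registration",
]

def open_duties(completed: set) -> list:
    """Return the duties still missing before market placement."""
    return [d for d in HIGH_RISK_DUTIES if d not in completed]

missing = open_duties({"transparency", "ce_mark"})
print(len(missing))  # 4 duties still open
```

A real conformity process is of course far more involved, but the order matters in the same way: the CE mark, for example, can only be applied after the conformity assessment has been passed.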
Article 4 of the EU AI Act — train AI skills
Article 4 of the European AI Regulation states that providers and operators of AI systems must ensure that their employees know how to use artificial intelligence correctly. To do this, knowledge and skills in using AI must be extensively trained.
Knowledge and skills include:
- Technical knowledge
- Practical experience
- Training tailored to the specific application of AI
- Understanding the ethical and safety-related risks and opportunities of AI
Why is this so important? The legislator thereby ensures that AI systems cannot be misinterpreted or misused by unqualified users.
Where there is ignorance, mistakes are easily made. Therefore, all providers and operators who use artificial intelligence in their daily work must provide mandatory education and training, so that AI is used safely and efficiently in the future.
Who is affected by Article 4 of the EU AI Act?
It is still unclear which company sizes are affected by the EU AI Act. The text of the law speaks of the obligation to “take measures to ensure to the best extent possible”, which leaves open what exactly is meant by these measures and how this “best possible extent” is defined.
It is expected that large organizations will need to provide extensive internal training programs to ensure that their personnel and everyone working with AI systems have a high level of AI expertise. The bigger the company, the greater the likelihood that someone will use AI.
For smaller companies, these trainings will be more difficult to implement due to limited resources, yet they too are expected to provide at least a basic understanding of AI if they use AI solutions. To remain competitive, forgoing artificial intelligence will not be an option in the long term.
Regardless of its size, every company is therefore advised to train its employees on AI, in particular with regard to safety and ethical risks.
Are you looking for continuing education in the area of AI in the company?
Mytalents.ai is the right place to go. More than 100 courses and over 4,000 pieces of learning content offer practical, compact training for employees on AI topics. The courses are specifically tailored to individual areas, such as finance, marketing, purchasing, management, IT and sales.
Practice-oriented examples and introductions to AI applications such as ChatGPT and Microsoft Copilot provide a clear understanding of technical principles, data protection, IT security and ethical issues. Regular updates and new courses ensure that learning content is always up to date with the latest AI development.
When must the new EU AI law be implemented?
With its publication in the Official Journal of the European Union on July 12, 2024, the AI Act entered into force on August 1, 2024. The official deadline for staff to be trained is February 2, 2025. From that point on, companies can be penalized if employees cause damage through improper use of AI.
AI training is also relevant for providers and users of non-high-risk AI systems. Every company should therefore take timely measures to train employees in how to use artificial intelligence.
Areas of AI training
In order to meet the requirements of Article 4, companies should cover the following training areas:
1. Technical knowledge
- AI basics: how machine learning and neural networks work
- Data processing: the importance of training data and its influence on AI models
- AI limitations: understanding concepts such as hallucinations (false AI-generated information) and bias in AI systems
2. Practical experience
Application of AI systems in specific fields and departments.
3. Ethics and safety
- Deepfake detection: identifying AI-generated images, videos and audio
- Data protection: handling sensitive data in AI systems
- Ethical decision-making: assessing the impact of AI decisions
Regular training and tailored programs are crucial to ensure the safe use of AI.
mytalents.ai provides up-to-date content on exactly the points listed above, such as our entry-level course on large language models or courses on data protection and AI. mytalents.ai also offers courses for specific subject areas, such as marketing, sales, finance, purchasing, and more, so that every department in the company knows what to pay attention to when dealing with AI.
How must these AI trainings be implemented?
As artificial intelligence is rapidly evolving, it is crucial for companies to regularly train their employees. This applies to both technical staff and end users.
Specific measures can include:
- Online learning platforms
- Application-oriented e-learning courses
- Training by internal or external experts
- Workshops
- Special continuing education programs
Challenges of the new EU AI law for SMEs
The question is: How should small and medium-sized enterprises (SMEs) train their employees? After all, this requires plenty of resources and suitable specialists. Depending on the industry and application context, there may also be specific requirements that require tailor-made training.
Mytalents.ai offers industry-specific courses on AI topics for small and medium-sized companies, such as finance, IT, and marketing. This includes a practical introduction to AI applications, technical principles, data protection and ethical aspects. The courses are regularly updated to cover the latest developments.
Long-term effects of the EU AI Act
With the advent of AI, there was much discussion about risks and opportunities for employees. Who will lose their job, and which areas will become more relevant? It is clear that the use of AI will change the labor market. But this does not necessarily have to be accompanied by anxiety. Certain positions will become more important, such as developers or AI operations experts who drive the implementation of AI in everyday working life.
At the same time, some jobs may lose relevance, but this does not mean that employees are no longer important. Anyone who develops their AI skills, works efficiently with AI and can act in a supervisory role will remain relevant in the future. In the end, you always need a trained eye to take a final look at the work of artificial intelligence.
Companies that train and support their employees will remain competitive in the long term and work even more efficiently. Continuing education in the field of artificial intelligence is therefore important for everyone involved.
However, the EU AI Act also leaves a bitter aftertaste. As important as the thoughtful and safe use of artificial intelligence is, the EU AI Regulation also keeps innovative AI systems out of Europe. Many AI providers are staying away from the European market. We do not yet have access to top video models such as Sora from OpenAI or Veo 2 from Google, and Apple Intelligence and AI systems from Facebook and Instagram are also withheld from us.
In the long term, this can have a significant impact on the economic viability of European companies. Europe risks falling behind, because the productivity gains that AI systems deliver in other parts of the world cannot be matched proportionally in Europe.
Internal training programs to promote AI skills
Establishing internal training programs for AI skills is essential to prepare employees for the responsible use of AI. In focus: not only technical skills, but also ethical issues.
Two training courses are recommended to take into account the different needs of the workforce:
Basic AI Skills
It all starts with the basics. Anyone who understands AI can make conscious decisions about how to use it. Employees get an understanding of: What is AI? Which ethical principles must be considered? How will artificial intelligence be used in organizations in the future? Critical thinking, fact-checking and thoughtful use are trained.
The entire organization receives a basic understanding of how artificial intelligence is used and what technological change means for the organization. In this way, everyone acts from the same level of knowledge.
Role-specific training
Every company department is confronted with different requirements regarding AI. Through targeted training paths, the individual departments of the company are made fit for AI. In this way, employees learn how to use the appropriate AI applications efficiently and in a time-saving manner. This approach is ideally tailored to the individual goals of the respective department or role and helps employees to actively design and make optimal use of AI-based processes.
By combining the two training courses, companies can ensure that their employees have both solid basic knowledge and the specific skills required to use artificial intelligence in their work area.
mytalents.ai AI training and continuing education offer
mytalents.ai offers an opportunity for companies to specifically train their employees in the area of AI:
- Over 100 courses and more than 4,000 pieces of learning content on various AI topics
- Practical use cases for individual areas of the company, such as finance, marketing, purchasing, management, IT, sales, etc.
- Introductions to common AI applications, such as ChatGPT, Microsoft Copilot and/or proprietary GPT solutions
- Comprehensive training on technical principles, data protection, IT security, ethical issues of AI use and recognition of AI-generated content
- Regular updates to stay up to date with AI development
Many customers are already successfully using the knowledge they have learned from our practical courses in everyday working life. One example of this is TCG UNITECH — a leader in the light metal and plastics industry. By working with mytalents.ai, TCG UNITECH was able to specifically strengthen employees' AI competence, optimize work processes and ensure increased efficiency in departments — in particular through the use of generative AI tools, such as ChatGPT and Microsoft Copilot.
Link to the case study: mytalents.ai