Challenges of the EU AI Act for companies

Since 2 February 2025, companies have been obliged under the EU AI Act to ensure that their employees have sufficient AI skills. This training obligation applies to both providers and deployers of AI systems and aims to ensure the safe and responsible use of AI technologies.

The following article briefly presents the content of the AI Act, the challenges for companies and the training offered by I-D Social.

Risk levels of AI applications

The EU AI Act is the first comprehensive law to regulate artificial intelligence (AI) in the European Union. It aims to make AI systems safe, transparent and ethical. The law categorises AI applications into four risk levels:

Unacceptable risk (prohibited)
Manipulative AI (e.g. social scoring, as used in China)
Emotion recognition in schools or the workplace
Real-time biometric identification (e.g. facial recognition in public spaces)

High risk (strict requirements)
AI in sensitive areas such as health, justice or human resources
Companies must comply with safety, transparency and monitoring requirements.

Limited risk (transparency obligation)
Chatbots or deepfakes must be labelled as AI

Minimal risk (hardly regulated)
AI in applications such as video games or spam filters

Companies are obliged to test their AI systems, minimise risks and provide extensive documentation. Violations can be penalised with high fines.

Challenges for companies and organisations

We see the following challenges in particular:

Compliance
Companies must ensure that their AI systems comply with EU requirements.
Technical checks and comprehensive documentation are required.

Costs
The implementation of security and transparency measures is associated with considerable costs.
This can be a considerable financial burden, particularly for start-ups.

Liability
Companies are responsible for damage caused by their AI systems.
Stricter controls can lead to legal risks.

Innovation vs. regulation
Strict regulations could slow down the speed of innovation.
EU companies are in competition with less regulated markets (e.g. USA, China).

Data protection and ethics
AI systems must comply with the GDPR.
Algorithms must be fair, comprehensible and explainable.

I-D Social’s training programme

The EU AI Act poses a significant challenge for companies wishing to use AI technologies. The regulation aims to minimise risks and ensure ethical standards, which is to be welcomed in principle. However, it also brings with it a number of bureaucratic hurdles that can be a burden for smaller companies in particular.

However, the need to train employees in dealing with AI is not just a bureaucratic obligation, but also an opportunity. Well-trained employees can better utilise the potential of AI, which can lead to greater efficiency and innovation in the long term. Training should therefore not be seen purely as a compulsory task, but as an investment in the future of the company.

Nevertheless, it is important that regulation remains practical and does not become overly burdensome. Politicians and industry are called upon to work together to find solutions that guarantee both the protection of citizens and the competitiveness of companies. A balanced implementation of the EU AI Act, which provides clear guidelines without stifling innovation, is crucial to the success of this regulation. We are happy to support you.
