Geißler Legal - Specialized Lawyer for Compliance & International Commercial Law

Need2Know - Compliance with AI in Operations

Compliance-focused legal advice for your business activities - personal, cross-border, cost-efficient.

Drafting and introducing AI guidelines in your company under employment law

The latest innovations in the field of generative artificial intelligence (“generative AI”) are prompting many companies to develop and implement their own AI strategies. As a work tool, AI has numerous implications for employment law. For reasons of preventive compliance - with a focus on protecting personal and business-related data - every company should formulate basic guidelines for the use of AI.

The lack of an AI strategy in the company poses employment law risks

In view of the many free AI applications available, such as Perplexity, ChatGPT, Midjourney, DeepL or Copilot, even skeptical employers should not close their eyes to the fact that employees - especially for simple or creative tasks - often already use AI solutions. In principle, they are also entitled to do so, provided there are no company regulations or instructions to the contrary. A duty of disclosure under Section 241 Paragraph 2 of the German Civil Code (BGB) applies to employees only if the company could suffer damage from the specific use - and this is also recognizable to the employee. This usually presupposes that the workforce has been sensitized to the technical and legal limits of using artificial intelligence. Employers should therefore act proactively.

Possible risks of AI use by employees

Security of company data

One of the greatest, but also least known, risks of using AI applications is the disclosure of valuable or confidential company data by entering it into an AI system. Especially with free AI versions, users effectively "pay" with their data: the performance of the current large language models (LLMs) behind generative AI applications depends on being trained with data created by human intelligence, and the stock of public information available on the Internet has already been largely exhausted. Entered data is therefore attractive training material for providers. This is particularly relevant for companies with trade secrets and other sensitive data sets.

Protection of personal data according to GDPR

The handling of personal data in connection with artificial intelligence can also be problematic. If personal data is entered into an AI system, there must be a legal basis for this under data protection law, for example consent.

Data processing

The data subject must also be informed about the data processing. For non-European data processors, a secure transfer to third countries must additionally be ensured in accordance with Art. 44 ff. GDPR. These requirements are usually not met if an employee uses his or her own private AI account for work. Nor can it be assumed that employees are aware of the complex data protection issues, develop a corresponding sense of unease, and therefore refrain from such behavior or disclose it to the employer.

Protection against automated decisions (Art. 22 GDPR) and AI Regulation

According to Article 22(1) GDPR, individuals may not, in principle, be subjected to decisions based solely on automated processing if these have legal consequences for them or significantly affect them in a similar way.

This would make it inadmissible, for example, to use AI in the HR department to automatically sort out applications based on certain characteristics (possibly determined by the AI itself). Decisions that affect the interests of customers are also covered by Art. 22 GDPR. In view of Art. 22 GDPR, however, it is unproblematic to use AI to collect and process data if a human ultimately makes the final decision on that basis. It should be noted that HR applications (hiring, terminations, promotions) are classified as high-risk systems under the risk-based approach of the AI Regulation. Providers of such artificial intelligence systems are therefore subject to special risk management and transparency obligations.

Intellectual Property and Copyright

There are also risks associated with the use of AI applications in connection with copyright law. Firstly, it must be taken into account that purely AI-generated work results are generally not protected by copyright because they lack a “personal intellectual creation” within the meaning of Section 2 Paragraph 2 of the German Copyright Act (UrhG). Secondly, there are liability risks if the work results produced by the AI - even unnoticed - infringe the copyrights of third parties.

Employer's options for regulation

Absolute prohibition of use

Within the framework of their general right of direction under Section 106 of the German Trade Regulation Act (GewO), employers are entitled to prohibit their employees from using AI applications in the context of their work.

However, the existing risks cannot be legally excluded by an absolute ban. Employees can ignore the ban and thereby cause damage to the company. The violation of third-party rights is also not excluded.

In addition, it may be advisable not to completely close oneself off to the new technology, because actually engaging with AI applications builds skills among employees that will most likely pay off in the future.

Company policies or guidelines for the use of employee-owned AI accounts

Companies often have not yet introduced their own AI infrastructure. In that case, they can allow employees to use their personal AI accounts and at the same time establish usage guidelines that address the risks outlined above.

In this way, employees can be made aware of risks and enabled to gain experience with AI systems. When introducing such AI policies or guidelines, it is important first to know your own company-specific risk situation and then to take it into account when drawing up the rules. It is also advisable to inform employees about how the systems work and, if necessary, to introduce them to the technique of prompting (i.e. formulating instructions to the AI).

In some companies, it may be appropriate to include department-specific risk situations in the awareness-raising and instruction of employees. For the marketing department, for example, it will be particularly important to ensure that results are eligible for copyright protection and to raise awareness that AI applications sometimes tend to produce discriminatory results.

Implementation according to Section 106 of the German Trade Regulation Act (GewO)

Employers can implement AI guidelines by issuing instructions in accordance with Section 106 GewO. To do this, it is necessary to make the rules accessible to employees and to inform them that the rules are binding. Proof of access should be documented.

Co-determination of the works council

The works council has no co-determination rights as long as the company does not introduce its own AI systems. As long as employees use their own personal AI accounts, no works agreement is required. In particular, the AI application does not trigger a right of co-determination under Section 87 Paragraph 1 No. 6 of the Works Constitution Act (BetrVG) if the employer does not receive information suitable for monitoring. This follows from a recent decision of the Hamburg Labor Court.

However, according to Section 90 Paragraph 1 No. 3 of the Works Constitution Act (BetrVG), the works council must be informed in good time, with the necessary documents provided, because the use of AI affects work procedures and workflows.

According to Section 80 Paragraph 3 Sentences 2 and 3 of the Works Constitution Act (BetrVG), the works council can always call in an expert (at the employer's expense) if it has to assess the introduction or application of artificial intelligence in order to carry out its tasks.


Introduction of company-specific AI solutions

Effective risk management with regard to data protection and the protection of company secrets is only possible if an internal AI solution is introduced. Microsoft 365, for example, offers Copilot as a "daily AI companion" and a relatively accessible option. For companies with large data sets, it can be worthwhile to have their own databases prepared and processed using a secure AI solution.

However, risks relating to copyright and other intellectual property rights can be reduced only to a limited extent even by in-house AI solutions. To the extent that usage data is collected and this gives the employer the (abstract) possibility of monitoring the performance and/or behavior of employees, the right of co-determination under Section 87 Paragraph 1 No. 6 of the Works Constitution Act (BetrVG) is triggered. Given the AI solutions currently available on the market, this will usually be unavoidable. In addition, a right of co-determination under Section 87 Paragraph 1 No. 1 BetrVG comes into consideration if orderly conduct in the company - in the sense of "company cooperation" - is regulated.

The Hamburg Labor Court correctly stated that rules on the use of ChatGPT and similar tools do not concern orderly conduct subject to co-determination. Ultimately, such tools are just new work equipment - so what is affected is work behavior, which is not subject to co-determination.

If there are specific risks to the health of employees, a right of co-determination according to Section 87 Paragraph 1 No. 7 of the Works Constitution Act (BetrVG) - combined with a risk assessment according to Section 3 of the Industrial Safety Ordinance (BetrSichV) and Section 5 Paragraph 3 No. 6 of the Occupational Health and Safety Act (ArbSchG) - may also be considered.

All internal company AI solutions will in future also be governed by the AI Regulation. Regardless of their risk classification, these AI systems must then meet the transparency requirements of Article 50 of the AI Regulation: employees must always be able to recognize that they are interacting with an AI, unless this is obvious. High-risk systems - especially applications in the human resources area relating to hiring, termination, promotion or other individual monitoring and assessment of employees - are additionally subject to special provider and operator obligations. As operator of such a system, the employer must above all ensure that its use is properly organized and supervised by humans.

Conclusion and Outlook

If employees use external AI providers to perform their work, this entails considerable practical and legal risks. However, an absolute ban on AI helps only to a limited extent and is mainly advisable for high-risk areas. All other employers should raise employees' awareness of AI risks and provide them with appropriate, binding rules. Once introduced, AI guidelines should be reviewed at regular intervals for conformity with technical developments and any changes to the legal framework.

The introduction of AI guidelines is therefore recommended as an immediate measure for all employers who do not fully embrace the use of AI. Even in companies with a works council, the introduction is usually possible by means of a "simple" instruction in accordance with Section 106 of the German Trade Regulation Act (GewO), without co-determination. However, if in-house AI technology is introduced at the same time, Section 87 Paragraph 1 No. 6 of the Works Constitution Act (BetrVG) must usually be observed.

In the future, tailor-made, application-specific AI systems will increasingly be used in companies. As soon as the main provisions of the AI Regulation become applicable, strict European Union rules will apply to them. If these are high-risk systems (for example in the HR area), a high level of regulation applies, especially for the providers of these AI systems.

However, the companies in which these AI solutions are used are also subject to considerable obligations as operators. In view of the AI Regulation, which came into force on August 1, 2024, operational implementation should begin now so that it is completed within the two-year transition period before the regulatory provisions become applicable. We are happy to answer any questions you may have about the legally compliant use of AI applications from an employment law perspective.

Make an inquiry now
We will be happy to advise you comprehensively and personally on your concerns.

International Lawyer & certified Compliance Officer


Happy to help you

Contact

Your law firm Geißler Legal.

Address

Gertrudenstr. 30-36 (Willy-Millowitsch-Platz)
D-50667 Cologne
Phone:
0221-42482831
Phone:
0171-2211612

Business hours

Mon-Sat: 10:00 am – 1:00 pm
Mon-Fri: 2:00 pm – 8:00 pm 
and by telephone appointment

