Turkish Data Protection Authority Issues Guidance on the Use of Generative AI Tools in the Workplace
- Apr 20
The Turkish Data Protection Authority issued a guideline on 5 March 2026, titled “Use of Generative AI Tools in Workplaces”, intended to raise awareness of potential risks and to set out a framework of compliance principles for the use of publicly available third-party generative AI tools.
As the everyday use of generative artificial intelligence tools becomes widespread across various industries, the guideline acknowledges that these tools are commonly used by employees individually in day-to-day workflows, for purposes ranging from content generation to research. Rather than adopting a blanket-ban approach, companies are therefore encouraged to adopt a controlled-use framework to mitigate risks relating primarily to confidentiality, data privacy, cybersecurity and intellectual property, alongside broader concerns of corporate reputation.
The real concern: “shadow AI”
The guideline identifies “shadow AI” as a central concern, referring to employees’ use of generative AI tools on their own initiative and without the employer’s knowledge, approval or oversight. While the Authority acknowledges the widespread use of these tools by employees in the workplace in pursuit of productivity and efficiency, it does not advocate an outright ban, noting that such an approach may instead drive such use further outside institutional visibility and control. Instead, the guidance emphasizes direction and awareness under clear internal rules rather than prohibition.
Key risks identified by the Authority
The Authority highlights that, in the absence of clear visibility over which AI tools are being used and what information is being shared through them, companies may face significant challenges in ensuring compliance and taking timely remedial action in the event of a breach. Some of the key risk factors are:
Auditability and accountability. The lack of registration, monitoring or audit mechanisms covering AI tools and the data they generate may impede a company’s ability to respond to incidents and ensure compliance.
Risks relating to accuracy and decision quality. AI outputs may be inaccurate or misleading if used without internal review, which may ultimately compromise the organization’s quality standards and ethical principles.
Intellectual property and trade secrets. Trade secrets, source code, product designs or other commercially sensitive information shared with external AI tools may weaken the company’s control over that information and expose it to unauthorized access.
Reputation risks. Insufficient control over AI outputs may lead to inaccurate or poor-quality content, which may in turn adversely affect the company’s reputation.
Information and cybersecurity risks. AI tools used outside institutional control may expand the organization’s attack surface through insecure APIs, personal devices or unmanaged integrations, thereby increasing exposure to malware, unauthorized access, data loss and other cybersecurity risks.
Data privacy. Uncontrolled exchange of personal data through AI tools carries the risk of data breaches, unlawful processing and unauthorized access. The guideline emphasizes that any personal data or commercially sensitive information shared through prompts may become accessible to third parties, and that Data Protection Law No. 6698 covers any personal data processing activity carried out through AI tools.
What the Authority expects from companies
The guidance ultimately points companies toward a controlled and accountable use model rather than a prohibitive one. In the Authority’s view, the appropriate response is not to exclude generative AI tools from workplace processes altogether, but to establish a clear internal framework defining which tools may be used, for which purposes, under what conditions, what types of information may be entered into such tools, and how outputs may be reviewed and used.
In this context, human review remains central: the overall logic of the guidance is that organizational responsibility cannot be delegated to AI tools, and that meaningful human judgment must be preserved over both the information shared with such tools and the outputs generated through them. Although the guidance is not a binding regulation, it provides a clear indication of the Authority’s expectations that companies adopt visible, policy-based and risk-sensitive governance structures supported by internal rules, data protection safeguards, confidentiality measures, and employee awareness.
Author

Ömer Faruk Çıkın