Key corporate cybersecurity risks in the implementation of AI tools
During his speech at this year’s SAP Now, Szymon Włochowicz introduced a list of key corporate risks associated with the use of AI models. This list was developed by a team of Hicron experts specializing in artificial intelligence research. Out of the eight identified risk areas, three were discussed in detail by our representative during the presentation.
Unauthorized AI tools (Shadow AI)
Lack of auditability
Data retention by providers
Copyright infringements
Model hallucinations in decision-making processes
Data leakage through prompts
Every query (prompt) submitted by a user to an AI system carries the risk of disclosing sensitive corporate data. Detailed prompts may contain trade secrets that, similar to the case of Shadow AI, can be used to train models or disclose confidential information to other AI model users. Data submitted in prompts may be stored by AI providers for an indefinite period, increasing the risk of breaches of information confidentiality and security.
AI use case that may generate this type of risk:
TOOL USE DESCRIPTION: A user sends a command to AI that contains a trade secret. For example: “Prepare a presentation for a meeting with the management board in which you present the following chemical process…”
RISK: The data included by the user in the prompt may be used by the AI tool to train and further develop the Large Language Model (LLM). It may potentially use this information when generating responses for other companies, thereby disclosing data that constitutes a trade secret.
RISK PREVENTION: To prevent such a risk, it is necessary to increase user awareness of the threat associated with using AI tools, introduce AI Governance principles within the organization, and rely only on trusted AI solution providers.
Cross-border data transfers
When companies use AI tools, data may sometimes leak outside the organization’s jurisdiction. This poses the risk of non-compliance with the GDPR and loss of control over customer data. AI models are often hosted in countries other than those where an enterprise’s core systems are located and are therefore subject to different legal regulations. This issue requires particular attention when selecting and integrating AI tools.
AI use case that may generate this type of risk:
TOOL USE DESCRIPTION: To prepare recommendations for sales representatives on how to handle customer cases, a company uses external AI tools that have access to corporate data. In this process, the data is transferred to LLMs located in other countries, which operate under different legal systems.
RISK: Transferring data to LLMs in other jurisdictions may lead to violations of data protection regulations, such as GDPR, and to a loss of control over customer data. Another risk is the possibility of improper data model configuration, which may increase the likelihood of data leakage or unauthorized access.
RISK PREVENTION: To minimize this risk, the tools in use should be secured from a legal standpoint, and the organization should rely exclusively on trusted AI technology providers.
Privilege escalation in AI systems
If enterprises are not adequately secured, situations may also arise in which AI systems unintentionally make data available to unauthorized individuals. This can occur both through data aggregation and summarization mechanisms, as well as a result of deliberate cyberattacks using prompt abuse techniques.
AI use case that may generate this type of risk:
TOOL USE DESCRIPTION: A mid-level employee asks an internal AI system to prepare a presentation containing data on managerial salary ranges or historical payroll information to which the employee should not have access.
RISK: Incorrectly configured AI integrated with the HR system may disclose confidential information to unauthorized individuals by failing to respect access levels. Another threat involves attempts to bypass security mechanisms through carefully crafted prompts (prompt hacking), for example by impersonating an authorized user. In this case, the situation may be further complicated by user requests for aggregated data (reports, analyses, summaries). The problem may also be amplified when attempts are made to use attack techniques on LLMs.
RISK PREVENTION: It is necessary to implement precise classification of documents and access levels, as well as permission-control mechanisms at the AI level. The system should be resistant to techniques that attempt to extract information through prompts, and AI tools should be used only from reputable providers.

Ten key technical threats to AI systems
When implementing a new AI-powered tool into an organization’s structure, it is necessary to analyze the potential risks that may arise from its use. At Hicron, we have established a dedicated team of specialists responsible for developing AI solutions and integrating them with SAP. However, its members operate within a broader framework, which has resulted in the identification of potential risks related to the implementation of artificial intelligence tools. During the presentation, our expert shared a checklist developed by the aforementioned team.
It includes ten key technical risks that should be taken into account when designing and implementing AI solutions within organizations:
- Unlimited resource consumption
- Prompt injection
- Disclosure of confidential information
- Improper output management
- Weaknesses in vectors and embeddings
- Misleading information
- System prompt leakage
- Excessive agency
- Data model poisoning
- Supply chain control
AI Governance Framework as the foundation of security
With so many serious threats posed by the irresponsible use of AI tools, businesses are increasingly considering ways to protect themselves. That is why more and more of them are choosing to implement the AI Governance Framework. It serves as the foundation for the effective security management of artificial intelligence systems. They consist of formal documents that clearly define the procedures for implementing AI tools.
This framework covers the following key areas:
Data Governance
It involves precise data classification, the implementation of access control mechanisms, and the monitoring of data flows within AI systems. It also emphasizes the need to implement policies that clearly define which data can be used by AI tools.
Risk Management
This approach involves conducting regular security audits and penetration tests, as well as developing procedures for responding to AI-related incidents. In this regard, it is also recommended to implement mechanisms that automatically shut down systems when anomalies are detected.
Legal Aspects
This element of the strategy emphasizes the legal aspects of AI implementation, such as GDPR requirements, algorithm transparency, and particular attention to intellectual property issues in the context of generative models.
Integration & Adoption
This aspect involves the secure integration of AI into existing systems, while systematic user training is presented as a key component of an effective security strategy.