How to achieve the highest level of cybersecurity in the age of AI – practical tips from an expert

See how Hicron builds secure and transparent AI environments for advanced organizations. Explore practical tips from our SAP experts on how to effectively protect your artificial intelligence implementations.

This year’s SAP NOW left participants not only inspired but also equipped with concrete insights valuable to the entire SAP community. One of the most important topics of the event was artificial intelligence – also in the speech of our expert, Szymon Włochowicz, COO at Hicron. His presentation on the secure implementation and use of AI tools attracted considerable interest, which is why we have prepared this article based on its key takeaways.

Do you want to know how to protect your business in the era of AI and make the most of the potential of new technologies? Read on and discover our expert’s practical recommendations!

Professional preparation as the first step to effective protection

Artificial intelligence has become a key component of the business strategies of modern organizations. However, in the pursuit of innovation and efficiency, fundamental cybersecurity aspects are often overlooked. Implementing and using AI tools without a well-defined security strategy can lead to serious data breaches, loss of trade secrets, and exposure to advanced cyberattacks.

AI goes beyond traditional protection mechanisms and requires a holistic approach that considers the specific risks associated with machine learning, data processing by language models, and the integration of AI with existing IT infrastructure. Each AI implementation introduces new attack vectors and vulnerabilities that must be secured through dedicated measures and a strategic approach.

Effectively securing AI implementations requires a comprehensive strategy encompassing risk management, data protection policies, access control mechanisms, and continuous monitoring. It is important to emphasize that ensuring cybersecurity in the context of AI is a dynamic process that requires continuous adaptation to new technologies and evolving threats.

The evolution of the concept of AI

  • Machine Learning

It is a technology that allows computer systems to learn from data without being explicitly programmed. Algorithms analyze patterns in the data in order to make decisions or predict outcomes. It forms the foundation of modern artificial intelligence and is used in areas such as data analysis, image recognition, and forecasting.

  • Generative AI based on LLM (Large Language Models)

Generative AI refers to systems capable of creating new content – such as text, images, or sound – based on existing data. LLMs, such as ChatGPT, specialize in natural language processing, generating responses that resemble human communication. Their applications include chatbots, language translation, and the creation of marketing content.

  • Agentic AI

Agentic AI is a more advanced form of artificial intelligence that operates autonomously, making decisions and performing tasks in a way that resembles human initiative. Such tools can independently plan, learn, and adapt to changing conditions. It represents a step toward more independent systems that offer users the ability to support complex business processes.

  • AGI (Artificial General Intelligence)

AGI is a vision of artificial intelligence that matches human intellectual capabilities, able to solve a wide range of problems across different domains. Unlike specialized systems, AGI has the ability to adapt and learn in a universal way. Although it is a long-term goal, AGI remains largely a theoretical concept that still requires significant technological breakthroughs.

Generative AI represents the next step in the development of artificial intelligence. It is a completely different class of systems than previous solutions based on Machine Learning technology. On the one hand, Generative AI tools are, to a large extent, online, cloud-based systems. On the other – they operate on very large data volumes and do not truly resemble either a Machine Learning system or a standard system. This is precisely why their use requires a completely different approach to corporate security and a different method of managing these systems within an enterprise’s architecture, in order to ensure adequate protection. Since Generative AI tools combine elements of human intelligence imitation, cloud-based solutions, and large volumes of data, they are an entirely new construct that requires an innovative view of security. This shift in enterprise security standards will continue to evolve along with the development of AI tools. Both Agentic AI and Artificial General Intelligence will require different approaches, as each of them will introduce a completely new concept.
Szymon Wlochowicz
Szymon Wlochowicz
COO Hicron

Key corporate cybersecurity risks in the implementation of AI tools

During his speech at this year’s SAP Now, Szymon Włochowicz introduced a list of key corporate risks associated with the use of AI models. This list was developed by a team of Hicron experts specializing in artificial intelligence research. Out of the eight identified risk areas, three were discussed in detail by our representative during the presentation.

Unauthorized AI tools (Shadow AI)

Lack of auditability

Data retention by providers

Copyright infringements

Model hallucinations in decision-making processes

Data leakage through prompts

Every query (prompt) submitted by a user to an AI system carries the risk of disclosing sensitive corporate data. Detailed prompts may contain trade secrets that, similar to the case of Shadow AI, can be used to train models or disclose confidential information to other AI model users. Data submitted in prompts may be stored by AI providers for an indefinite period, increasing the risk of breaches of information confidentiality and security.

AI use case that may generate this type of risk:

TOOL USE DESCRIPTION: A user sends a command to AI that contains a trade secret. For example: “Prepare a presentation for a meeting with the management board in which you present the following chemical process…”

RISK: The data included by the user in the prompt may be used by the AI tool to train and further develop the Large Language Model (LLM). It may potentially use this information when generating responses for other companies, thereby disclosing data that constitutes a trade secret.

RISK PREVENTION: To prevent such a risk, it is necessary to increase user awareness of the threat associated with using AI tools, introduce AI Governance principles within the organization, and rely only on trusted AI solution providers.

Cross-border data transfers

When companies use AI tools, data may sometimes leak outside the organization’s jurisdiction. This poses the risk of non-compliance with the GDPR and loss of control over customer data. AI models are often hosted in countries other than those where an enterprise’s core systems are located and are therefore subject to different legal regulations. This issue requires particular attention when selecting and integrating AI tools.

AI use case that may generate this type of risk:

TOOL USE DESCRIPTION: To prepare recommendations for sales representatives on how to handle customer cases, a company uses external AI tools that have access to corporate data. In this process, the data is transferred to LLMs located in other countries, which operate under different legal systems.

RISK: Transferring data to LLMs in other jurisdictions may lead to violations of data protection regulations, such as GDPR, and to a loss of control over customer data. Another risk is the possibility of improper data model configuration, which may increase the likelihood of data leakage or unauthorized access.

RISK PREVENTION: To minimize this risk, the tools in use should be secured from a legal standpoint, and the organization should rely exclusively on trusted AI technology providers.

Privilege escalation in AI systems

If enterprises are not adequately secured, situations may also arise in which AI systems unintentionally make data available to unauthorized individuals. This can occur both through data aggregation and summarization mechanisms, as well as a result of deliberate cyberattacks using prompt abuse techniques.

AI use case that may generate this type of risk:

TOOL USE DESCRIPTION: A mid-level employee asks an internal AI system to prepare a presentation containing data on managerial salary ranges or historical payroll information to which the employee should not have access.

RISK: Incorrectly configured AI integrated with the HR system may disclose confidential information to unauthorized individuals by failing to respect access levels. Another threat involves attempts to bypass security mechanisms through carefully crafted prompts (prompt hacking), for example by impersonating an authorized user. In this case, the situation may be further complicated by user requests for aggregated data (reports, analyses, summaries). The problem may also be amplified when attempts are made to use attack techniques on LLMs.

RISK PREVENTION: It is necessary to implement precise classification of documents and access levels, as well as permission-control mechanisms at the AI level. The system should be resistant to techniques that attempt to extract information through prompts, and AI tools should be used only from reputable providers.

Ten key technical threats to AI systems

When implementing a new AI-powered tool into an organization’s structure, it is necessary to analyze the potential risks that may arise from its use. At Hicron, we have established a dedicated team of specialists responsible for developing AI solutions and integrating them with SAP. However, its members operate within a broader framework, which has resulted in the identification of potential risks related to the implementation of artificial intelligence tools. During the presentation, our expert shared a checklist developed by the aforementioned team.

It includes ten key technical risks that should be taken into account when designing and implementing AI solutions within organizations:

  1. Unlimited resource consumption
  2. Prompt injection
  3. Disclosure of confidential information
  4. Improper output management
  5. Weaknesses in vectors and embeddings
  6. Misleading information
  7. System prompt leakage
  8. Excessive agency
  9. Data model poisoning
  10. Supply chain control

AI Governance Framework as the foundation of security

With so many serious threats posed by the irresponsible use of AI tools, businesses are increasingly considering ways to protect themselves. That is why more and more of them are choosing to implement the AI Governance Framework. It serves as the foundation for the effective security management of artificial intelligence systems. They consist of formal documents that clearly define the procedures for implementing AI tools.

This framework covers the following key areas:

Data Governance

It involves precise data classification, the implementation of access control mechanisms, and the monitoring of data flows within AI systems. It also emphasizes the need to implement policies that clearly define which data can be used by AI tools.

Risk Management

This approach involves conducting regular security audits and penetration tests, as well as developing procedures for responding to AI-related incidents. In this regard, it is also recommended to implement mechanisms that automatically shut down systems when anomalies are detected.

Legal Aspects

This element of the strategy emphasizes the legal aspects of AI implementation, such as GDPR requirements, algorithm transparency, and particular attention to intellectual property issues in the context of generative models.

Integration & Adoption

This aspect involves the secure integration of AI into existing systems, while systematic user training is presented as a key component of an effective security strategy.

Creating AI Governance policy documents is not an easy process, but it is worth remembering that technology providers often offer solid assistance in this area. SAP also offers such support. An AI Governance Hub has been made available on the vendor’s website dedicated to SAP LeanIX. It is a space that gathers information and recommendations on creating AI Governance safeguards within an enterprise, along with a set of links to external materials in which this concept is thoroughly described and presented.
Szymon Wlochowicz
Szymon Wlochowicz
COO Hicron

The role of Enterprise Architecture tools in AI security

Szymon Włochowicz’s presentation provided a detailed overview of how Enterprise Architecture solutions, such as SAP LeanIX, can serve as the foundation for an effective security strategy for AI implementations. This tool not only provides the ability to centrally inventory all components of the IT environment but also enables dynamic management of the system architecture across the entire organization.

Comprehensive inventory of AI tools

SAP LeanIX enables the precise cataloging of AI tools and models, both proprietary and external. As a result, the organization gains a complete, up-to-date map of AI solutions operating within the company, which in practice reduces the risk of Shadow AI and unauthorized implementations.

Advanced mapping of dependencies between AI and organizational systems

The system makes it possible to visualize the relationships between AI tools and the existing infrastructure and processes. This enables the rapid identification of potential vulnerability points – for example, critical data exchange interfaces or insufficiently secured integrations. SAP LeanIX also allows organizations to assign specific owners to AI assets, thereby strengthening accountability for their security.

Monitoring and control of AI access to organizational data

SAP LeanIX supports continuous monitoring of data flows by recording and analyzing the access rights and authorizations of AI tools. This approach helps reduce the risk of unauthorized access to critical data.

Planning AI transformation and evolution

With SAP LeanIX, it is possible not only to document the current architecture but also to plan the development of AI in the context of security. The platform makes it possible to create implementation schedules, verify the completeness of protection mechanisms, and simulate the impact of architectural changes across the entire organization.

Support for compliance and audit processes

SAP LeanIX provides essential reports for audit purposes and ensures compliance with regulations such as the GDPR. The tool allows you to track data processing paths and document control activities, significantly enhancing transparency and facilitating preparation for mandatory security audits.

SAP LeanIX provides invaluable support in building a secure, transparent, and compliant AI environment within organizations that have a high level of technological maturity.

Implementing secure AI with Hicron

The safe implementation of artificial intelligence requires not only technological expertise but, above all, a strategic approach and in-depth expert knowledge. At Hicron, we have the aforementioned dedicated team of AI specialists who apply all the practices outlined in this article on a daily basis, ensuring the highest level of security for our Clients. Our team is already actively working with SAP AI tools for Clients, delivering advanced implementation projects. One example of our expertise is the comprehensive testing of the SAP Joule tool, during which we thoroughly verified its security across all use cases within the SAP SuccessFactors modules. Thanks to this approach, Hicron is a trusted partner that not only delivers innovative solutions but also ensures their secure, controlled implementation in accordance with the highest corporate standards.

If you are planning to develop artificial intelligence technologies in your SAP environment and are looking for a strategy that ensures complete infrastructure security, take advantage of the knowledge and experience of Hicron experts. Contact our team to plan and implement AI development together in a secure and compliant way, tailored to the individual needs of your organization!

This site use cookies. By continuing to use this website, you agree to our Privacy Policy.

OK, I agree