Janusch Skubatz, cybersecurity expert and Chief Information Security Officer of the EOS Group.
  • Generative AI offers great opportunities.
  • EOS parent Otto Group has developed a secure tool tailored to the company’s needs.
  • Its use also offers advantages in receivables management, a sector with particularly high standards.
It was unpleasant news for the management of electronics group Samsung. In spring 2023, notes from internal meetings and data on the performance of production facilities suddenly appeared on the internet. The culprits were not hackers but the company’s own employees, who had used generative AI. One employee, for example, used ChatGPT to turn his meeting notes into a finished document, not realizing that the free version of the AI assistant was saving all the information from its users’ prompts and using it to expand its knowledge.

The example shows that generative AI also poses risks: insufficient employee awareness of security gaps, a lack of company guidelines, and new opportunities for hackers to attack. In a global survey conducted by the consulting firm McKinsey, 53% of all participants who had already worked with generative AI considered cybersecurity to be the greatest risk of the new technology.

This is how many companies see cybersecurity as the biggest problem:

Chart showing 53% – the share of companies that consider cybersecurity their biggest problem.
Source: McKinsey Global Survey on AI, 2023, “The state of AI in 2023: Generative AI’s breakout year”

A safe solution approach for generative AI

At the same time, generative AI offers companies enormous potential to increase their efficiency and innovative strength. The Otto Group was therefore looking for a way to exploit the opportunities offered by the technology – while at the same time limiting its risks. Under the name ogGPT, the parent company of EOS has developed its own generative AI for the Group’s 26,000 employees. “We wanted to create a solution that is tailored to our Group’s own needs, with a focus on security and data protection,” says Anja Körber, Head of Artificial Intelligence & Automation at Otto Group IT. The fee-based enterprise version of ChatGPT was out of the question – even though this AI assistant does not store the information from the prompts. “The servers are located in the USA,” says Anja. “This means that in case of doubt, local employees, for example from support, would have access to the data. And European standards such as the GDPR do not apply in the USA.”

Only this many companies have guidelines on the use of AI: 

Chart showing 21% – the share of companies that have guidelines on the use of AI.
Source: McKinsey Global Survey on AI, 2023, “The state of AI in 2023: Generative AI’s breakout year”
The survey by consulting firm McKinsey also clearly shows that there is a gap in terms of guidelines. The Otto Group has already addressed this issue, also on behalf of its affiliates. Over the course of two months, Otto Group employees not only developed an AI guideline, but also the individual building blocks for ogGPT in various projects and hackathons. This resulted in a basic framework that updates its knowledge – including with data from the internet – but is still secure, explains Anja: “ogGPT forgets the prompts. We have full control over the data.”

However, ogGPT’s unique selling point is not just security. “We wanted functions that match our requirements,” says Anja. For example, the AI assistant can summarize the content of a long email thread. ogGPT is also already being used to write a newsletter: while readers of the newsletter were previously only provided with headlines that linked to further content, they can now find compact summaries of the content. A significant improvement that would have been far too time-consuming to do manually.
Portrait of Anja Körber, Head of Artificial Intelligence & Automation at the Otto Group.

We wanted to create a solution that was tailored to our Group’s own needs with a focus on security and data protection.

Anja Körber
Head of Artificial Intelligence & Automation, Otto Group
An in-house solution also reduces the fear of employees who have to get used to integrating generative AI into their day-to-day work, adds Anja: “There are internal experts and a dedicated community. That gives a lot of people a good feeling: if I can’t find a solution, I can ask a colleague.”

An in-house generative AI is also useful for Otto affiliate EOS, says Janusch Skubatz, Chief Information Security Officer of the EOS Group. Initial internal projects are already underway. “I think that we will also use ogGPT in the future,” he says, “but the data protection hurdles in receivables management are particularly high. When we acquire non-performing loans, for example, we gain access to data that is particularly worthy of protection.” This is why EOS is subject to more far-reaching rules than other companies and often also to contractual requirements from partners, says Janusch: “We therefore have to carefully check whether the technology needs to meet further requirements.”
Portrait of Janusch Skubatz, Chief Information Security Officer of the EOS Group.

Prompts entered into AI tools on the internet should not contain business information that is not intended for the outside world.

Janusch Skubatz
Chief Information Security Officer of the EOS Group
As long as companies cannot use a secure in-house version of generative AI, Janusch advises users in all industries to adhere to basic rules. EOS and the Otto Group as a whole already have an AI policy that applies to all employees.

Excerpt from the EOS Group's AI guideline

  • Act responsibly! Only enter data into public AI systems if it would also be appropriate to publish that data on the company website.
  • Be careful! AI-generated responses can be biased, inaccurate or inappropriate. Always check the results generated by AI tools.
  • Watch out! Never give your login details (username, password) to AI tools and always watch out for phishing methods.
  • Ask for support: If you are not sure whether you are allowed to use certain data or which AI tool is safe, ask your local information security officer. 
However, Janusch believes that the most decisive factor is regular interaction with generative AI: “Experience is a prerequisite for developing a feel for the technology and the quality of the results.” This way, risk situations like those at Samsung can be avoided and the immense potential exploited.

Interested in discussing individualized generative AI? Get in touch now.  

Photo credits: EOS