
AI Needs Clear Ethical Frameworks

'Find a balance between innovation and ethics'

by Marco van der Hoeven

KPN uses AI for a wide range of applications. In addition to technical aspects, ethical considerations play an important role in determining where and how AI can be used. This was the theme of the presentation Lienke Vet gave during Hyperautomation 2024. As media partner of the event, Rocking Robots spoke with her about the challenges of deploying AI responsibly.

“There is a lot of discussion about the possibilities and opportunities of AI, but it is also important to understand the limitations of the technology,” says Lienke Vet, AI Governance Lead at KPN, “especially when you scale it up, such as in automated decision-making.”

Within her organization, she is responsible for establishing frameworks for the use of AI. AI is widely deployed within KPN and closely linked to the company's strategy. One focus area is the rollout of fiber optics, where models calculate the optimal deployment. These models can be self-learning or more narrowly focused on optimization.

AI also helps keep the network operating optimally and improves customer service by offering personalized services or support based on previous disruptions. Like any company, KPN also uses AI for internal business processes.

Innovation

Vet: “It is important to find a balance between innovation and ethics. This requires a strategy that first determines the value of AI and answers the question of how to organize it. Our approach starts at the highest level: the Board of Directors has approved the frameworks because they consider them important, and that approval serves as our mandate.”

“Because I am in close contact with the AI teams, I can clearly explain why certain ethical considerations are important, combined with technical feasibility, such as the difference between a model that works and one that continues to work.”

“If a model stops working, the process it supports could be endangered. It is therefore essential to have robust models that are compliant and built in such a way that they keep functioning. This requires close collaboration and continuous dialogue with the teams that develop the models. We stay closely involved and develop methods together to identify and manage risks and boundaries.”
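To illustrate that distinction: a model that “continues to work” is one whose live performance is monitored against the level at which it was validated. The following is a minimal, hypothetical Python sketch of such a check; the function name, thresholds, and values are illustrative assumptions, not KPN's actual tooling.

    def check_model_health(recent_predictions, recent_outcomes,
                           baseline_accuracy=0.92, tolerance=0.05):
        # Compare live accuracy against the accuracy the model was
        # validated at; the baseline and tolerance here are made up.
        correct = sum(p == o for p, o in zip(recent_predictions, recent_outcomes))
        live_accuracy = correct / len(recent_predictions)
        if live_accuracy < baseline_accuracy - tolerance:
            # In practice this would raise an alert or trigger retraining.
            return False, live_accuracy
        return True, live_accuracy

    # Example: a model validated at 92% accuracy has drifted to 75% live.
    healthy, acc = check_model_health([1, 0, 1, 1], [1, 0, 0, 1])
    print(f"healthy={healthy}, live accuracy={acc:.0%}")
    # healthy=False, live accuracy=75%

A check like this is what turns “a model that works” at launch into one that demonstrably continues to work in production.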

Legislation

“Instead of completely stopping a process when a risk is identified, we investigate what is possible and make clear agreements about the limits. But when there are significant risks that can directly affect our employees or customers, we must intervene firmly. Legislation exists for a reason; AI has a huge impact and must be taken seriously.”

“The discussions we have are therefore aimed at ensuring a balance between innovation and compliance, minimizing risks while making progress, and aligning the most critical aspects of our work with the broader business strategy. All this is ultimately aimed at ensuring that we act responsibly in the interests of our customers and employees.”

Her internal audience consists not only of technical specialists who already understand the basics of AI but also of a broader group within the organization. “This signifies a new phase of maturity: we have moved from simply implementing models to applying them at scale, which requires a different approach. This is a positive development but also brings challenges, such as asking suppliers the right questions and weighing the added value of AI against the potential risks.”

Generative AI

The rise of generative technologies also impacts her work. “Previously, more technical knowledge was needed to create and implement a model, but nowadays anyone with an internet connection can experiment with AI. This democratizes the technology, which is good because AI will ultimately be part of all processes around us.”

“However, this also brings new risks, such as data leaks and the tendency of models to ‘hallucinate’ or exhibit biases. This can lead to stereotypes, clearly illustrating the risk of bias in AI applications. It is important that we recognize and address these risks in our AI strategies and frameworks.”

Competence

According to her, the human factor should not be underestimated. “AI is a powerful technology in this rapidly evolving digital world. It’s not just about people being technically competent, but primarily about understanding what AI is and how it works, so they can use the technology responsibly. This means everyone in an organization must understand how AI makes decisions or generates advice and be able to think critically about its outcomes.”

“On my wish list is that everyone within the organization is trained and educated in ‘AI literacy.’ This doesn’t mean everyone needs to become a technical expert, but they should be aware of how AI works and its implications. For example, when someone interacts with an AI-driven chatbot, they should consider: is this information reliable? What data is this based on? What does this mean for my decision? This helps not only in making informed decisions but also in thinking critically about and providing feedback on AI systems.”

Security

Another important aspect is the safe use of AI. “I advocate for the use of secure, alternative datasets that do not contain sensitive information, which is also a learning process for users. They learn how important it is to ask the right questions and how to optimize input for AI to generate better and safer outcomes.”
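As a hypothetical illustration of that principle (a sketch under assumed patterns, not KPN's actual process), sensitive values can be masked before input reaches an AI system. The pattern set and example below are illustrative assumptions:

    import re

    # Illustrative redaction patterns; real redaction would cover far
    # more categories (names, addresses, customer IDs, and so on).
    REDACTION_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "phone": re.compile(r"\b0\d[\d ]{7,}\d\b"),  # Dutch-style numbers
    }

    def redact(text):
        # Replace sensitive matches with labeled placeholders before
        # the text is passed on to an AI system.
        for label, pattern in REDACTION_PATTERNS.items():
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
        return text

    prompt = "Customer jan@example.com called from 0612345678 about an outage."
    print(redact(prompt))
    # Customer [EMAIL REDACTED] called from [PHONE REDACTED] about an outage.

The point of such an exercise is less the tooling itself than the habit it builds: users learn to look at their input critically before handing it to a model.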

“With this approach, we promote a culture where AI is not only seen as a tool for efficiency but also as an area where awareness and understanding are essential for safe and effective use. This aligns with the idea of AI literacy as a new form of basic knowledge, just as important as reading and writing.”
