This document outlines a structured approach to managing AI security posture within an organization. It provides a detailed walkthrough of how to gain visibility, govern access, and assess the risks associated with AI models and agents. The steps below demonstrate how Delinea's solution can help maintain a secure AI environment.
Hello. I am Eris Kroelich. Today, I will introduce you to Delinea AI Security Posture Management.

We recognize that many organizations are at different stages of their AI journey and are concerned about how AI is being utilized. Our approach focuses on helping you manage your AI security by providing continuous visibility and discovery.

We help you gain visibility into the AI models and agents within your cloud service provider, including the configurations, tools, activities, and ownership of AI agents and services. We also help govern access by surfacing entitlements and aligning them with the principle of least privilege.
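To make that discovery step concrete, here is a minimal sketch of how such an inventory could be pulled directly from a cloud provider, using Azure's Python SDK as one example. This is an illustration, not Delinea's implementation; it assumes the azure-identity and azure-mgmt-cognitiveservices packages and a subscription ID in the AZURE_SUBSCRIPTION_ID environment variable.

```python
# Illustrative sketch only: enumerate Azure AI (Cognitive Services) accounts,
# including Azure OpenAI, to build a basic AI inventory.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

client = CognitiveServicesManagementClient(
    DefaultAzureCredential(), os.environ["AZURE_SUBSCRIPTION_ID"]
)

# Each account carries its kind, location, and tags; tags often hold
# ownership metadata useful for posture management.
for account in client.accounts.list():
    print(account.name, account.kind, account.location, account.tags)
```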

It's crucial to assess risk by identifying stale, over-privileged, or overly shared agents and LLMs that might pose security and compliance problems. This includes checking for LLMs or AI services that are publicly accessible or that are being accessed by external accounts.
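As a rough illustration of the public-exposure check (an assumption-laden sketch, not Delinea's actual rule), the same Azure SDK exposes each account's network settings:

```python
# Illustrative check: flag AI service accounts that allow public network
# access. Assumes the same packages and environment variable as above.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

client = CognitiveServicesManagementClient(
    DefaultAzureCredential(), os.environ["AZURE_SUBSCRIPTION_ID"]
)

for account in client.accounts.list():
    props = account.properties
    # public_network_access is "Enabled" or "Disabled" on the account.
    if props is not None and props.public_network_access == "Enabled":
        print(f"Publicly reachable: {account.name} ({account.kind})")
```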

Our solution enables all these capabilities. I will now demonstrate how Delinea can assist with these aspects, beginning with identity posture.

Within the AI Security category, we can review all checks, identifying those that pass and those that fail. I will now select the checks that have failed.

With this, you have full visibility into ongoing activities and results. We have pre-configured checks for various concerns, such as AI agents with potential access to sensitive assets, agents created by external accounts, and agents exposed to the public internet. Additionally, we cover issues like high model temperatures and privacy risks.
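To show the shape such checks can take, here is a purely hypothetical sketch; the record fields and check names below are invented for the example and are not Delinea's schema:

```python
# Hypothetical posture checks over a simplified agent record.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentRecord:
    name: str
    temperature: float      # sampling temperature configured on the agent
    public: bool            # reachable from the public internet?
    external_creator: bool  # created by an account outside the tenant?

@dataclass
class PostureCheck:
    title: str
    failed: Callable[[AgentRecord], bool]

CHECKS = [
    PostureCheck("High model temperature", lambda a: a.temperature > 1.0),
    PostureCheck("Exposed to public internet", lambda a: a.public),
    PostureCheck("Created by external account", lambda a: a.external_creator),
]

agent = AgentRecord("support-bot", temperature=1.4, public=True, external_creator=False)
for check in CHECKS:
    print("FAIL" if check.failed(agent) else "PASS", "-", check.title)
```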

By delving deeper into these checks, you can quickly identify each detected issue. Our continuous discovery ensures new issues are flagged promptly, enabling effective mitigation. It also opens the door to a deeper understanding of individual AI agents.

This section provides a high-level overview of the agent. By selecting 'Show entity', you can access more detailed information, such as last usage, origin type, origin ID, and the raw data of the AI agent.

Details such as creation date, location, ownership, and the model in use are also accessible. By selecting 'Actions > Investigate', you can view the entitlements assigned to the AI agent and all relevant Azure connections. Here, the agent appears as part of AI task collections and Azure AI users.
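For context, this entitlement data corresponds to what Azure's own authorization API returns. The sketch below is an assumed approximation using the azure-identity and azure-mgmt-authorization packages; AGENT_PRINCIPAL_ID is a placeholder object ID for the agent's identity.

```python
# Illustrative sketch: list role assignments held by an agent's identity.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

auth_client = AuthorizationManagementClient(
    DefaultAzureCredential(), os.environ["AZURE_SUBSCRIPTION_ID"]
)

AGENT_PRINCIPAL_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

# Filter server-side to assignments held by this principal.
for ra in auth_client.role_assignments.list_for_subscription(
    filter=f"principalId eq '{AGENT_PRINCIPAL_ID}'"
):
    print(ra.scope, ra.role_definition_id)
```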

Here you can view even more information in what we call a 'spiderweb' view, which provides enhanced visibility. Additionally, the inventory and assets sections offer insight into everything related to AI and LLMs.

While some items may not pose risks, they offer visibility into what is being created, such as LLMs, AI services, AI agents, and virtual machines possibly running AI services. You can filter these by type and origin.
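As a trivial illustration of that filtering (the records below are invented for the example):

```python
# Invented inventory records, filtered by type and origin.
inventory = [
    {"name": "gpt-4o-deploy", "type": "LLM", "origin": "Azure"},
    {"name": "support-bot", "type": "AI agent", "origin": "Azure"},
    {"name": "ml-worker-01", "type": "Virtual machine", "origin": "AWS"},
]

azure_agents = [
    item for item in inventory
    if item["type"] == "AI agent" and item["origin"] == "Azure"
]
print(azure_agents)
```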

Further information can be accessed by selecting specific items, such as virtual machines potentially running AI. Our solution aids in your AI journey, enhancing security posture, ensuring compliance, and providing complete visibility.

Thank you for watching this demonstration. If you have any questions, please feel free to reach out.
