Windows news you can use: April 2025
May 1, 2025GH-100T00: GitHub fundamentals – Administration basics and product features, releasing in May.
May 1, 2025To fully adopt Artificial Intelligence (AI) and realize the benefits, agencies and enterprises need to address how they will secure AI models, agents, LLMs, data sources and more. Due to the nascent nature of AI security, enterprise security teams and AI teams need to develop new relationships, procedures, testing methods and skills to address the increasing threats to AI. While there has been much discussion and progress around “responsible AI”, this series of articles will lay out the problem space common to many enterprises and agencies that are delaying the deployment and adoption of AI and give direct guidance on how, at every level, enterprises can unify around Securing AI. These two topics, Secure AI and Responsible AI, are related, but come with very different requirements. This coming together will need to be led between the CIO and CISO to meet mission and business objectives with AI.
Road to Securing AI
It has been nearly a year since my initial article about balancing AI, Copilot and Security to help leaders understand the intersection of this multidiscipline challenge, so I thought this would be the perfect time to bring together the Federal Microsoft Team of experts to address the challenges of deploying secure AI in a four part blog.
- Part 1 – Executive Business Imperative: Overview of business need to proactively assemble teams to address enterprise mission and explanation of common challenges and solutions with respect to the intersection of AI and Security
- Part 2 – Building Joint AI and Security Teams: Building a Common Language between AI and Security; Best Practices for AI Security
- Part 3 – Testing AI for Security Vulnerabilities: Understanding Security Red Teaming Forfor AI Teams and understanding Zero Trust Landscape that AI will need to embrace
- Part 4 – Security for AI Vulnerabilities Detection: Understanding Tools and techniques for Security Teams engaged in validating AI security
Business Imperative
Federal Agencies and every commercial organization are looking for benefits in productivity, customer service, mission agility, supply chain analysis, battlefield insights, security and efficiency. However, security concerns due to the newness of AI have often delayed or stalled the quick adoption of AI throughout these same enterprises. As a business leader, you may hear about the benefits, uses and terminology of AI from AI vendors and experts. As a Security Leader, CISO, you will generally focus on traditional security like endpoints, servers, networks, data and identity. The morphing behavior of AI also presents a type of ever moving target versus a static approach that many Cyber Ops folk are most familiar with.
What neither leader will generally understand is the “seam” between AI and Security and how to address it. AI is all about focusing the organization on the connective tissue and lines of responsibility between the disciplines (Security and AI) to avoid vagueness of where responsibility lies. The seams and connective tissue like many new technologies will be the biggest weakness and vulnerability, so needs the siloed approach will need to be overcome for success.
To understand the threat landscape to AI, we need to look at common threats to AI from all vectors. Additionally, we want to understand where Microsoft is building in security to aid in threat mitigation and alerting.
A quick review of AI threats bubbles up a litany of new and re-purposed traditional terms like prompt injection, crescendo attacks, jailbreaking AI, DAN and more. Generally, this leaves most outside the AI field at a lack of understanding due to the technology specific terminology.
A quick review of Security threats dives deep into terms like red teaming, kill chains, horizontal movement of threat actors, NIST 800-53 and more. Again, this leaves many outside of Security and those specialists in the AI field without an understanding of enterprise security requirements to allow pilots to mature to secure enterprise deployments.
The NIST AI RMF and the NIST AI RMF Generative AI profile are government created, well respected and thought-out frameworks for assessing AI risk, security and more that help organizations on this journey and overcome some of the challenges mentioned above. These frameworks are a great place for any agency to start and align to a best practices approach to identify, measure, mitigate and operationalize AI security; AI Risk Management Framework | NIST and Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile | NIST
Lastly, in this fast, paced innovative space of AI, Microsoft just announced on April 4th the public preview of the AI Red Teaming Agent as an extension to the PyRIT (Python Risk Identification Tool) toolkit that is being used throughout the industry for risk identification and mitigation. This AI Red Teaming Agent is specifically targeted at both content safety and security related risks.
For leaders, it is a business and mission imperative that we lead, support and bring AI personnel and Security personnel together to realize our collective desire to benefit from AI. Like many new technologies that are democratized, a recent survey by Microsoft called the Work Trend Index shows that 70% of employees are using AI that is not provided by IT or the enterprise for work productivity. Users “will find a way” if the enterprise doesn’t provide user productivity tools like CoPilot and other AI services. Thus, it is this imperative and inherent risk that this article will address by establishing a baseline understanding and architecture for AI and Security to foster conversations, actions, deployment, user enablement, and collaboration in the follow-on articles.
Source: 2024 Annual Work Trend Index from Microsoft and LinkedIn
Not all AI Security Needs are the Same
While much of the hype and excitement has been around generative AI, there are many types of AI. Below is a chart of AI models that span from custom built AI to Software-as-a-Service (SaaS) AI. This is an important distinction that the AI and Security Teams will need to understand and develop standards that are appropriate to the type of AI deployment.
If your team is unfamiliar with AI, I would encourage you to start on the right (SaaS) and move toward IaaS “Custom AI,” since the baseline techniques can be leveraged across the spectrum. For example, securing data with Purview, DRM and Information barriers to enable M365 Copilot, would develop the team muscle of data policy that will become critical as you move to develop your own customer AI models and services.
Source: AI shared responsibility model – Microsoft Azure | Microsoft Learn
New, not New
Many of you will remember the early days of software development in the 2000s where organizations struggled to bring together software development and security. Security was often an afterthought and not built-in or was mainly focused on the “castle walls.” Lessons from that early time can be applicable here. In the same way, Secure Development Lifecycle and the newer ISO 27034 created a common approach for development and security, so too can the new initiatives across the industry bring together AI and Security.
To that end, the below diagram will serve as a roadmap for the rest of the article and helps start the understanding of common terminology, threats, language and Zero Trust solutions for cross-discipline coordination and teaming.
Looking at the above diagram through the eyes of a security person, it appears very much like any other application. It has inputs, data, storage, agents/add-ins and in-memory actions based on logic rules.
The AI team can look at the above and get a sense of how the solution can be secured, some of the tools to consider and threats at each exposure point in the AI chain.
ATLAS – A map to Securing AI
The industry seems to be moving in the direction of dual mapping to bring together AI and Security. A great example of this bringing together is a new iteration of the popular security approach of MITRE ATT&CK™ framework into Adversarial ML Threat Matrix was created under the name ATLAS.
The MITRE ATT&CK™ framework is a core methodology for traditional cyber operations to define and organize tactics, techniques and procedures (TTPs). The image below shows how bad actors will attack AI using AI vulnerabilities, Security vulnerabilities or a combination of both AI and Security (IT) Vulnerabilities to exploit AI.
In fact, Microsoft at the onset of our inclusion of GPT-4 inclusion into Bing in 2018 Microsoft stood up a Red Team across AI and Security to address concerns and create a hybrid Red Team that is more than just traditional Security Red Teams.
This internal work contributed to the realization that better AI tools were needed and spurred the creation Azure AI Content Safety which “Prompt Shield” and six other capabilities and threat mitigation approaches.
Zero Trust
Another new, not new approach that can be applied to AI is Zero Trust (ZT). This is working at scale within the US Department of the Navy through an integrated approach that met 151 of the 152 activities (60 of which are ahead of the 2027 deadline).
What is new is that Microsoft has recently updated its Zero Trust defense areas to include AI Cybersecurity and Secure and Govern AI.
Securing AI leveraging Microsoft Zero Trust and Security capabilities
There are many built-in protection rings for AI that address the full Zero Trust defense needs and allow your organization to extend to further AI protections. Many of these protections are out of box or possibly part of your existing M365 E5 or Azure Security investment, but simply need activation or enterprise specific policy configuration.
Below you see the concentric rings of defense that customers may use to protect their enterprises using Zero Trust principles for IT systems like collaboration and devices. This same infrastructure can be leveraged to protect and extend AI’s security.
There are many specific features that can immediately be engaged to protect data against unintentional and intentional (Insider Risk) threat against your AI models using back end data (RAG) to tune the model to the agency or enterprise specific vision for AI:
- Advanced Threat Protection
- Data Loss Prevention (DLP)
- Identity and Access Management (IAM)
- Endpoint Security
- Cloud Security
Advanced Threat Protection
Microsoft’s threat protection that leverages artificial intelligence and machine learning to identify and mitigate potential threats. By continuously monitoring IT and AI systems, these tools can detect unusual activities or patterns that may indicate an attempt to exploit vulnerabilities. Advanced threat protection helps in preemptively addressing security issues, ensuring that AI models are safeguarded against malicious attacks and data breaches.
Data Loss Prevention (DLP)
Data loss prevention (DLP) tools are crucial for safeguarding sensitive information within AI systems. Microsoft Security products provide robust DLP solutions that monitor and control the flow of data, preventing unauthorized access or data leaks. These tools can identify and block sensitive data transfers, ensuring that AI-driven processes remain secure and compliant with data protection regulations.
Identity and Access Management (IAM)
Effective identity and access management (IAM) is essential for protecting AI systems from unauthorized access. Microsoft Security’s IAM solutions offer multi-factor authentication (MFA), role-based access control (RBAC), and other mechanisms to ensure that only authorized users can access AI resources. By implementing strong IAM practices, organizations can prevent bad actors from exploiting vulnerabilities through unauthorized access to sensitive AI data and systems.
Endpoint Security
Endpoint security solutions provided by Microsoft Security help protect devices and endpoints that interact with AI systems. These solutions include anti-malware, firewalls, and intrusion detection systems that safeguard against threats originating from compromised endpoints. By securing the devices that access or contribute to AI processes, organizations can mitigate the risk of security vulnerabilities and data leaks caused by endpoint-related attacks.
Cloud Security
As AI systems often rely on cloud infrastructure, securing cloud environments is vital. Microsoft Security products offer comprehensive cloud security solutions that protect AI workloads and data stored in the cloud. These solutions include encryption, access control, and continuous monitoring to detect and respond to security incidents. By securing cloud environments, organizations can ensure that their AI systems are resilient against threats and data breaches.
The above quick overview of the AI landscape, solutions, challenges and products that can assist with securing your enterprise AI solution was meant for you to have a primer on this space. In the follow-on articles we will be going deeper into the what technology, testing and settings that should be configured to support secure AI.
Get Those Teams Going
If your organization is struggling with getting going on AI and addressing Security concerns, we hope the above shows there is a myriad of frameworks, technologies and products that are already available to jump start your teams. If we convinced you at least in principle that Securing AI is maturing to support mission critical usage, we encourage you to engage with your AI Leads and Security Teams to foster a working virtual team. AI is an opportunity to force “bridge building” across the traditional silos of enterprise security and mission/business. Our goal is to help you understand key communication channels, where the lines of responsibility fall, and who own the AI/ML remediation efforts “WHEN”, “NOT IF” the ecosystem is threatened.
The next three articles in this series will address how these teams can quickly come together in a common terminology and effort to secure your AI and realize your organization’s AI goals.
Let’s get going together. Build those teams, share your vision, share this blog with your team to facilitate conversation and give us feedback, so we can help you on this journey in the follow-on technical blogs.