
Navigating the security landscape of generative AI

First published April 2, 2025 · Last updated April 8, 2025

Introduction

Generative artificial intelligence, specifically large language models (LLMs), is reshaping how organizations handle data, automate processes, and drive innovation. However, as these capabilities expand, they amplify existing security risks and introduce new ones. Security frameworks and teams need to account for the new challenges that generative AI brings, such as context window overflow, agent mismanagement, and indirect prompt injection. As generative AI becomes a core technology within organizations, it must also be held to the same security standards and compliance requirements as other technologies. Organizations that learn to take an agile approach to security will be well positioned in the marketplace as AI adoption grows.

This white paper provides an approach for CISOs to navigate these risks, offering detailed mitigation strategies, including enhanced input validation, real-time monitoring, and modular system architecture. We focus on eight initial threat vectors and suggest mitigation strategies for each. We view a strong security foundation as an accelerant to adopting generative AI, one that enables organizations to add it safely and confidently to their mix of technologies. While many current technologies can also help tighten security, generative AI brings additional nuances that are novel to security and must be addressed. Many of the recommendations in this paper are easier said than done, but augmenting technologies, both from AWS and our partners, are evolving to help close those gaps and should be considered.

Finally, this paper is intended to complement, and potentially reinforce, newly emerging generative AI security guidance such as the OWASP Top 10 for LLM Applications and MITRE ATLAS. AWS continues to participate in global standards bodies, such as the Coalition for Secure AI (CoSAI) and the Frontier Model Forum, to provide insights.
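To make two of the threats above concrete, the following is a minimal sketch of an input-validation guardrail that screens requests before they reach an LLM. The function names, injection patterns, and token budget are illustrative assumptions for this paper, not an AWS-prescribed implementation; production systems would use a real tokenizer and a far richer detection layer.

```python
import re

# Illustrative sketch only: the limits and patterns below are assumptions,
# not a prescribed or complete defense.

MAX_INPUT_TOKENS = 4000  # budget guarding against context window overflow

# Naive signatures of indirect prompt injection hidden in retrieved content
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
    re.compile(r"disregard the system prompt", re.IGNORECASE),
]

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token; swap in a real tokenizer in practice.
    return len(text) // 4

def validate_input(user_text: str, retrieved_docs: list[str]) -> list[str]:
    """Return a list of findings; an empty list means the input passed screening."""
    findings = []
    total = estimate_tokens(user_text) + sum(estimate_tokens(d) for d in retrieved_docs)
    if total > MAX_INPUT_TOKENS:
        findings.append(f"context budget exceeded: ~{total} tokens > {MAX_INPUT_TOKENS}")
    for doc in retrieved_docs:
        for pattern in INJECTION_PATTERNS:
            if pattern.search(doc):
                findings.append(f"possible indirect prompt injection: {pattern.pattern!r}")
    return findings
```

A guardrail like this would typically sit in front of the model invocation, with any findings routed to the real-time monitoring pipeline rather than silently dropped.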
The following challenges represent a prescriptive point of view from the AWS proactive security team.

Regulatory and standards evolution

Global interest among regulators has increased given the potential ramifications of improper uses of generative AI. The EU AI Act is one of the better-known regulations, and it predominantly takes a risk-based approach. High-risk applications, such as law enforcement, healthcare, and workloads impacting human rights, are held to a higher regulatory bar. This can include requirements such as human-in-the-loop oversight or even an outright prohibition of a workload.

A risk-based approach strikes an effective balance between industry conditions and regulatory needs. On one hand, there are risks to trusting the outputs of an LLM for a life-critical workload; on the other, a joke-telling chatbot should not be held to the same standard. Legal precedent is expected to shape regulatory actions in concert with outputs from standards agencies such as NIST. In the long term, a patchwork of regulations will likely emerge in the US, while countries that have previously aligned with the GDPR will likely align with the EU AI Act.

Certain compliance standards, such as ISO 42001 and IRAP, have started to cover AI security, and HITRUST is also building AI controls. The EU may eventually accept ISO 42001 as an effective risk management practice; however, EU regulatory frameworks continue to evolve, as demonstrated by the Schrems II decision regarding the GDPR. Organizations are encouraged to use NIST guidance ahead of regulatory action and to take an agile approach to their security posture. Organizations that stay ahead of compliance and regulatory frameworks by taking a security-first approach will have a competitive advantage in the marketplace once regulations begin to take hold.

Generative AI's impact on organizational structures

The impact of generative AI depends on an organization's current structure. Traditionally, there have been tensions between data science teams and security teams: data science often needs broad access to data, while security strives for a least-privilege approach. Organizations that scale security across teams, rather than consolidating it into a single organizational structure, will be better positioned for success. A scaled approach creates a culture of security and helps security leaders focus on core issues.
One example of a scaled approach is the AWS Security Guardians program. This program trains Amazon staff to conduct security reviews, collaborate with teams on taking a security-first approach, and identify when to escalate to security engineering.
