- by Thamaraiselvan, Hexaware; Gowdhaman, Lumina Datamatics
Executive Summary:
Industry Statistics
- Blocked Generative AIs: The most frequently blocked generative AI domains include OpenAI and ChatGPT.
- Domains: Business verticals such as manufacturing, finance, technology, and services are adopting generative AI models.
- Trends: Highlighted trends in generative AI adoption across different industries.
Threats and Risks
- General Awareness: The discussion emphasized that integrating generative AI into business operations is inevitable, much like the ubiquity of Google.
- Blocking Approach: The suggested approach is to initially block all open generative AI domains and then selectively open specific services based on business needs.
- Understanding Business Models: It is important to understand why an organization requires access to generative AI in order to determine what to allow and what to block.
Security Best Practices
- Guideline Document: Essential for creating awareness and managing access levels. Ensures users understand how to use generative AI without leaking sensitive information.
- Isolated Environments: Develop generative AI applications in isolated environments so that security scans can be run and behavior patterns analyzed.
- No Sensitive Information: Avoid including sensitive customer information in generative AI prompts. Implement network and proxy DLP services along with emerging technologies such as prompt-based firewalls.
- Customized Generative AI: Create custom interfaces through which users interact with generative AI via API calls, providing better control over file uploads and prompt responses (a minimal sketch of such a gateway follows this list).
- SSO Integration: Adopt Single Sign-On (SSO) for generative AI platforms so that user authentication is maintained and access remains appropriate.
- Monitoring Access: Use emerging technologies like LLM-based firewalls to monitor generative AI access and scrutinize outputs for appropriateness and malicious content.
- Vulnerability Assessments: Conduct proper vulnerability assessments and penetration testing for applications developed using generative AI.
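
Several of the points above, avoiding sensitive information in prompts, building a customized interface over API calls, and monitoring outputs, can be combined in a single internal gateway. The following is a minimal sketch of that idea, assuming a Python service sitting in front of an approved model endpoint; the function names, the regex patterns, and the call_llm_api stub are illustrative placeholders, not a specific vendor's API or a production-grade DLP implementation.

```python
"""Illustrative sketch of a custom generative AI gateway (placeholder names throughout)."""
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-gateway")

# Illustrative DLP patterns; a real deployment would rely on the organization's
# network/proxy DLP or a prompt-based firewall rather than ad hoc regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Simple output-screening rules, standing in for an LLM-based firewall.
BLOCKED_OUTPUT_PATTERNS = [
    re.compile(r"(?i)internal use only"),
    re.compile(r"(?i)confidential"),
]


def redact_sensitive(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt leaves the network."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt


def screen_output(text: str) -> bool:
    """Return True if the model response looks safe to release to the user."""
    return not any(p.search(text) for p in BLOCKED_OUTPUT_PATTERNS)


def call_llm_api(prompt: str) -> str:
    """Stub standing in for the organization's approved generative AI endpoint."""
    return f"(model response to: {prompt})"


def handle_prompt(user_id: str, prompt: str) -> str:
    """Gateway entry point: redact the prompt, call the model, screen the response."""
    safe_prompt = redact_sensitive(prompt)
    log.info("user=%s prompt_redacted=%s", user_id, safe_prompt != prompt)
    response = call_llm_api(safe_prompt)
    if not screen_output(response):
        log.warning("user=%s response blocked by output screening", user_id)
        return "Response withheld pending review."
    return response


if __name__ == "__main__":
    print(handle_prompt("alice", "Summarize the contract for jane.doe@example.com"))
```

In practice the regex layer would be replaced by the organization's DLP or prompt-based firewall service, and authentication would be enforced upstream by the SSO provider before handle_prompt is ever reached.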
Emerging Technologies and Approaches
- Indirect Use of Generative AI: Tools such as co-pilots built on LLMs should have security measures in place. Ensure proper scrutiny of the generative AI interfaces embedded in products.
- Supplier Security: Probe suppliers on their security practices when they use generative AI capabilities within their products.
- Information Rights Management (IRM): Utilize IRM systems, especially when uploading files or presentations for fine-tuning, to add an additional security layer.
Challenges and Legal Considerations
- Assuring Data Segregation: Highlighted the challenge of ensuring that generative AI models trained with an organization's data do not inadvertently train other models.
- Legal and Regulatory Measures: Organizations currently rely on legal and regulatory contracts to assure data segregation.
- Emerging Security Models: Need for LLM-based firewalls and other emerging security models to enhance data protection.
The task force discussion provided a comprehensive overview of security best practices for generative AI adoption, emphasizing the importance of creating awareness, isolating environments, monitoring access, and leveraging emerging technologies to ensure data security. The discussion also highlighted the challenges of assuring data segregation and the evolving landscape of legal and regulatory measures.