Secure practices and solutions for AI-ready data
According to Gartner, 2025 will bring a shift toward AI regulation and governance, agentic AI and disinformation security. Organizations can start preparing for that plausible future today.
These best practices and tools can help you reach those new frontiers securely:
Evaluate your needs and solutions. Don’t implement AI just for the sake of having it. AI isn’t a magic bullet, so ask your team: Where do you really need process improvements? What tools help solve those problems, and what are their information security policies?
Scrutinize your external AI options. Do some research to determine which platforms are trustworthy and which might pose problems. Take advantage of web intelligence to find a platform’s reputation score (the closer to 100, the better) and to determine whether, and to what extent, public, open-source AI platforms may pull data from your organization into their large language models (LLMs).
Measure your business risk. The types of data you share with AI solutions, and how you intend to use those solutions, can drastically change your risk. Deep learning models also require a substantial amount of data for training. Do you have the policies and tools in place to decide which data is off-limits? This applies to public, enterprise and private AI systems.
Establish your data policies for in-house GenAI. Half of companies use open-source AI. But even if you develop your own GenAI application or use an enterprise AI tool like Copilot, it’s critical to set guardrails for data use and sharing. This is especially true if your GenAI platform is modifying files or creating new content using data from within your organization. Data loss prevention solutions can either alert users to risk before they mistakenly share sensitive or proprietary data (more on that later) or apply protection controls automatically.
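To make the "alert before the mistake happens" idea concrete, here is a minimal sketch of a pre-prompt check. The patterns and rule names are illustrative placeholders, not a real DLP ruleset or any vendor's API.

```python
import re

# Hypothetical, simplified patterns standing in for a real DLP policy set.
DLP_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def check_before_sharing(text: str) -> list[str]:
    """Return the names of DLP rules the text violates, if any."""
    return [name for name, pattern in DLP_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize this CONFIDENTIAL roadmap for the board."
violations = check_before_sharing(prompt)
if violations:
    # In a real deployment, this is where the DLP tool would warn the user
    # or apply protection controls automatically.
    print(f"Blocked: prompt matches DLP rules {violations}")
else:
    print("Prompt cleared for the GenAI tool.")
```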
Engage in risk modeling. Sandbox an AI tool to evaluate its data handling, visualize data loss scenarios, assess risk and uncover security and visibility gaps.
Enforce robust access controls. Unrestricted access to models puts your data at risk of tampering, theft or misuse. Apply granular role-based access control (RBAC), continuous authentication and a zero trust approach to every stage of your AI workflows.
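As a rough illustration of granular RBAC across AI workflow stages, the sketch below maps roles to the stages they may touch. The roles, stages and assignments are invented for the example, not a recommended scheme.

```python
# Minimal sketch of role-based access control for AI workflow stages.
ROLE_PERMISSIONS = {
    "data_engineer": {"ingest", "preprocess"},
    "ml_engineer": {"preprocess", "train", "evaluate"},
    "analyst": {"query"},
}

def is_allowed(role: str, stage: str) -> bool:
    """Check whether a role may act on a given stage of the AI workflow."""
    return stage in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("ml_engineer", "train")
assert not is_allowed("analyst", "train")  # analysts may only query the model
```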
Deploy data loss prevention (DLP). To secure your employees’ AI use, look for a DLP solution that delivers comprehensive visibility and verifies that data meets security and compliance policies before it enters AI systems. Be sure your DLP system can alert users when their GenAI results contain protected data. Ideally, the solution either gives users an opportunity to act on those alerts or automatically protects the data while informing the user of the potential data loss event. Pairing DLP with user risk analytics can help you focus security education and action on the people who matter most.
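For a picture of the "alert or protect automatically" behavior on the output side, this sketch redacts protected values from a GenAI result and notifies the user. The single pattern and the print-based alert stand in for a real DLP engine's ruleset and notification channel.

```python
import re

# Illustrative pattern for protected data; a real DLP engine would apply
# the organization's full classification ruleset.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def review_genai_output(text: str) -> str:
    """Redact protected data from GenAI output and notify the user."""
    redacted, hits = SSN.subn("[REDACTED]", text)
    if hits:
        # Placeholder for the user-facing alert a DLP solution would raise.
        print(f"Notice: {hits} protected value(s) were redacted from this result.")
    return redacted

print(review_genai_output("Employee record: 123-45-6789, start date 2021-04-01."))
```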
Unify data management policies. Your DLP solution should consistently apply your existing data governance policies to the applications, websites and tools that use AI: the same policies that exist on endpoints, networks and elsewhere across your environment. Understand what data is being sent into, and coming out of, AI-based tools.
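One way to visualize "one policy, every channel" is a single rule evaluated identically for endpoint, network and AI traffic. The blocked terms and channel names below are invented for illustration.

```python
# Minimal sketch: one governance rule, reused for every channel that can
# carry data into or out of an AI tool.
BLOCKED_TERMS = {"account_number", "customer_ssn"}

def policy_allows(text: str) -> bool:
    """Apply the same rule no matter where the data is moving."""
    return not any(term in text.lower() for term in BLOCKED_TERMS)

traffic = {
    "endpoint_upload": "quarterly forecast deck",
    "genai_prompt": "summarize the customer_ssn export",
    "genai_output": "draft email to the sales team",
}
for channel, payload in traffic.items():
    print(channel, "allowed" if policy_allows(payload) else "blocked")
```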
Use AI website categories. With new AI-enabled SaaS applications appearing daily, no team can realistically keep up with a manual allow/block list. Here, AI website categories, which your web security or cloud access security broker (CASB) solution should support, can keep you on the straight and narrow. They allow you to control which AI websites and applications users can access while monitoring content against all your existing DLP rules to ensure nothing leaks.
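To show how category-based control sidesteps per-site lists, this sketch resolves a hostname to a category and allows or blocks it accordingly. The lookup table, category names and hostnames are placeholders; a real deployment would rely on the web security or CASB vendor's category feed.

```python
# Sketch of category-based control for AI sites.
SITE_CATEGORIES = {
    "chat.example-genai.com": "generative_ai",
    "translate.example-ai.net": "ai_translation",
    "files.example-share.com": "file_sharing",
}
ALLOWED_AI_CATEGORIES = {"generative_ai"}  # an org-specific allow decision

def ai_site_allowed(hostname: str) -> bool:
    """Allow AI sites by category instead of maintaining a per-site list."""
    category = SITE_CATEGORIES.get(hostname, "uncategorized")
    return category in ALLOWED_AI_CATEGORIES

print(ai_site_allowed("chat.example-genai.com"))    # True: category is allowed
print(ai_site_allowed("translate.example-ai.net"))  # False: category not allowed
```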