GenAI Integration: Security Concerns
This page addresses concerns that users of GenAI tasks may have regarding the safety of the data sent to an AI model by the task, and the security of the database while such tasks run.
Security measures
Our approach to data safety when using RavenDB AI tasks is to take care of security on our end, rather than expect the AI model to protect our data.
You can take these security measures:
- **Use a local model when possible**
  Use a local AI model like Ollama whenever you don't have to send your data to an external service, keeping the data, as much as possible, within the safe boundaries of your own network.
- **Pick the right model**
  RavenDB does not dictate which model to use, giving you full freedom to pick the services you want to connect to.
  Choose the AI model you connect to wisely; your data may be in better hands with some providers than with others.
- **Send only the data you want to send**
  You are in full control of the data that is sent from your server to the AI model.
  The choices you make while defining the task, including the collection you associate the task with and the context generation script you define, determine exactly what data is exposed to the AI model.
  Take your time when preparing this script to make sure you send only the data you actually want to send (see the context script sketch at the end of this page).
- **Use the playgrounds**
  While defining your AI task, take the time to use Studio's playgrounds to double-check what is actually sent.
  There are separate playgrounds for the different stages of the task, and you can use them to test your configuration on various documents and see exactly what you send and what you receive.
- **Use a secure server**
  The AI model is not given access to your database; the only data it gets is the data you voluntarily send it. Still, if you care about privacy and safety, you'll want to use a secure server.
  This ensures that you have full control over who can access your database and with what permissions.
- **Use your update script wisely**
  When considering threats to our data we often focus on external risks, but many times we are the ones who endanger it the most.
  The update script is the JavaScript code that the GenAI task runs after receiving a reply from the AI model. Here too, take your time and check this powerful script using the built-in Studio playground (see the update script sketch below).
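For illustration, here is a minimal sketch of a context generation script that exposes only a couple of fields to the model. The field names (`ShipTo.City`, `Lines`, `ProductName`) are hypothetical examples, and the document is assumed to be available as `this`, as in other RavenDB JavaScript scripts; the `ai.genContext(...)` call follows the pattern used in RavenDB's GenAI task examples, so verify the exact helper against your server version and the playground.

```javascript
// Context generation script (runs per document in the task's collection).
// Only the object passed to ai.genContext() is sent to the AI model -
// nothing else from the document or the database is exposed.
ai.genContext({
    // Hypothetical fields, for illustration only:
    City: this.ShipTo.City,                            // a single nested field
    ProductNames: this.Lines.map(l => l.ProductName)   // a projection, not the full order lines
    // Note: no customer identifiers, prices, or free-text comments are included.
});
```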
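And here is a sketch of a cautious update script. The exact way the model's reply is exposed to the script depends on your task definition and RavenDB version; in this sketch it is assumed to arrive as an object named `$output` whose shape matches the sample object you defined, so treat that name and the `Sentiment`/`Analysis` fields as placeholders and verify them in the Studio playground.

```javascript
// Update script (runs after the AI model replies).
// '$output' is a placeholder name for the model's reply object - verify
// the actual variable exposed by your RavenDB version in the playground.
if ($output && typeof $output.Sentiment === "string") {
    const allowed = ["Positive", "Neutral", "Negative"];
    // Only write values you expect; ignore anything else the model returns.
    if (allowed.includes($output.Sentiment)) {
        this.Analysis = {
            Sentiment: $output.Sentiment,
            AnalyzedAt: new Date().toISOString()
        };
    }
}
// Deliberately do NOT copy arbitrary properties from the reply onto the document.
```

Keeping the update script to an explicit allow-list of fields, as above, limits the damage a malformed or unexpected model reply can do to your documents.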