What tools and models are safest or least safe for your privacy? In this matrix, we look at privacy from two perspectives: model training (your data is used to make better models) and human review (someone looks at your data).
The only completely private form of generative AI is a local model: a model you host on your own infrastructure or in an enterprise-grade model environment (such as IBM WatsonX).
Models rated mostly private do not train on your data, but a human may review your inputs if the model's safety mechanisms flag a possible violation of the Terms of Service. Users cannot opt out of human review.
Models rated conditionally private train on your data and include human review; users can opt out of training, but not of human review.
Models rated never private train on your data and include human review, and users cannot opt out of either.
Got questions about how to integrate AI into your work? Ask us! Visit www.TrustInsights.ai/aiservices for more help.