Occurs when adversaries cause an AI model to be trained on inaccurate or mislabeled data. This poisoning of the training data can lead the model to make mistakes and misclassifications later on, even if the adversary has no ability to directly manipulate the inputs it receives at inference time.
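As a minimal sketch of this idea, the Python snippet below simulates a label-flipping poisoning attack: an adversary corrupts a fraction of the training labels before the model is fit, and the resulting model misclassifies clean, untouched test inputs more often than a model trained on uncorrupted data. The dataset, model choice, and 30% poisoning rate are illustrative assumptions, not details from the source.

```python
# Illustrative label-flipping poisoning sketch using scikit-learn on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic binary classification data standing in for a victim's training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# The adversary flips the labels of a fraction of the training examples
# (30% here, an arbitrary choice) before the model is trained.
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

# Train one model on clean labels and one on poisoned labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# The poisoned model tends to misclassify more often on the same test inputs,
# even though the adversary never touched those inputs directly.
print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

In practice the corrupted labels would enter through a compromised data pipeline or crowdsourced labeling process rather than being flipped in memory; the sketch only shows the downstream effect on classification accuracy.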