Machine learning: what are the security risks?
6th March 2025
Machine learning models learn from data – so what can cyber criminals learn from those models?
Machine learning (ML) is a core component of artificial intelligence (AI). ML is the process of training algorithms to learn from, and make predictions or decisions based on, the data they are fed. Broadly speaking, the more high-quality data a model is trained on, the more accurate its predictions tend to be.
However, are machine learning models secure, or can cyber criminals exploit this element of AI to access personal information? There are a variety of security risks associated with ML, including:
Data poisoning attacks: by injecting malicious data into a training dataset, an ML model can be corrupted, leading it to make inaccurate predictions or biased classifications – harmful if those outputs feed into real decisions (see the first sketch after this list).
Model inversion attacks: using the outputs of an ML model, bad actors can infer sensitive information that was used to train it.
For example, if the model was fed personal details – let’s say medical information – an attacker may be able to reconstruct that data from the model’s responses and gain access to those details (second sketch below).
Adversarial attacks: crafting inputs designed to deceive an ML model – for instance, subtly altering an image so that a facial recognition system fails to recognise a person (third sketch below).
AI supply chain attacks: attackers could compromise components of the ML supply chain, such as pre-trained models or third-party libraries, to introduce backdoors (a defensive check is sketched below).
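To make the first risk concrete, here is a minimal, self-contained sketch of label-flipping data poisoning in Python. The synthetic dataset, the logistic regression model and the 30% flip rate are illustrative assumptions, not taken from any real incident; the point is simply that corrupting a slice of the training labels visibly degrades the model.

```python
# Minimal data-poisoning sketch: flipping a fraction of training labels
# degrades a simple classifier. All data and parameters are stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Poison" the training set: an attacker flips 30% of the labels.
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.30
y_poisoned = np.where(flip, 1 - y_train, y_train)

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print(f"clean model accuracy:    {clean.score(X_test, y_test):.2f}")
print(f"poisoned model accuracy: {poisoned.score(X_test, y_test):.2f}")
```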
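The second sketch illustrates the idea behind model inversion. It is a deliberately simplified assumption-laden version: the attacker here reads the model's weights directly, standing in for the query-based gradient estimates a real attack would use, and climbs towards an input the model strongly associates with a target class – a rough prototype that leaks information about the training data.

```python
# Simplified model-inversion sketch: gradient ascent on the input to
# recover a prototype of a target class. The digits model is a stand-in.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# "Victim" model trained on data the attacker never sees directly.
digits = load_digits()
model = LogisticRegression(max_iter=2000).fit(digits.data, digits.target)

target_class = 0      # the class whose training data we try to reconstruct
x = np.zeros(64)      # start from a blank 8x8 image
step = 0.5

for _ in range(100):
    # Ascend the target class score; for logistic regression, the input
    # gradient of that score is simply the class's weight vector.
    x = np.clip(x + step * model.coef_[target_class], 0, 16)

# x is now an input the model scores strongly as the target class -
# a crude prototype of what that class's training examples look like.
print(model.predict_proba([x])[0][target_class])
```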
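For adversarial attacks, the third sketch applies the well-known fast gradient sign method (FGSM) to a simple two-class model. The digits dataset and the perturbation size are illustrative choices; the takeaway is that a perturbation small relative to the pixel range can flip a previously correct prediction.

```python
# FGSM-style adversarial example on a binary classifier (sketch only).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# Binary problem (digits 0 vs 1) so the loss gradient is easy to write out.
digits = load_digits()
mask = digits.target < 2
X, y = digits.data[mask], digits.target[mask]
model = LogisticRegression(max_iter=1000).fit(X, y)

x, label = X[0], y[0]
p = model.predict_proba([x])[0][1]        # P(class 1 | x)

# Gradient of the log-loss w.r.t. the input for logistic regression:
grad = (p - label) * model.coef_[0]

eps = 2.0                                  # small next to the 0-16 pixel range
x_adv = x + eps * np.sign(grad)            # FGSM step: follow the loss uphill

print("original prediction:   ", model.predict([x])[0])
print("adversarial prediction:", model.predict([x_adv])[0])
```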
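Finally, one common mitigation for the supply chain risk: verify a downloaded artifact, such as a pre-trained model file, against a hash published by its producer before loading it. This is a generic sketch – the file name and the expected digest below are placeholders, not real values.

```python
# Defensive sketch: refuse to load a model artifact whose hash does not
# match a known-good value. "model.bin" and the digest are placeholders.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0000...placeholder...0000"  # published by the vendor

def verify_artifact(path: str, expected: str) -> None:
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != expected:
        raise RuntimeError(f"{path}: hash mismatch - refusing to load")

verify_artifact("model.bin", EXPECTED_SHA256)
# Only deserialise the model (ideally with a safe loader) after this check.
```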
As these examples show, ML is vulnerable to corruption and attack, just as the wider AI systems it powers are. A cyber security breach is still more likely to stem from a more straightforward attack, such as phishing, than from an ML model, but ML is well worth considering as a potential risk.
Keep your systems up to date and use every layer of protection available to make it as hard as possible for bad actors to breach them.
For more information on machine learning, contact us.