dr inż. Michał Malinowski

bazy grafowe, sztuczna inteligencja, cyberbezpieczeństwo

How to trick a smart algorithm hacking artificial intelligence systems


Book chapter


Malinowski Michał
In: Justyna Żylińska, Katarzyna Huczek (eds.), Multidimensionality of cybersecurity: hate speech in the cyberworld, Uczelnia Techniczno Handlowa im. Heleny Chodkowskiej, Sep 2025, pp. 195–215

DOI: 10.6084/m9.figshare.30041800.v1

Cite

APA
Malinowski, M. (2025). How to trick a smart algorithm hacking artificial intelligence systems. In J. Żylińska & K. Huczek (Eds.), Multidimensionality of cybersecurity: hate speech in the cyberworld (pp. 195–215). Uczelnia Techniczno Handlowa im. Heleny Chodkowskiej. https://doi.org/10.6084/m9.figshare.30041800.v1


Chicago/Turabian
Malinowski, Michał. “How to Trick a Smart Algorithm Hacking Artificial Intelligence Systems.” In Multidimensionality of Cybersecurity: Hate Speech in the Cyberworld, edited by Justyna Żylińska and Katarzyna Huczek, 195–215. Uczelnia Techniczno Handlowa im. Heleny Chodkowskiej, 2025.


MLA
Malinowski, Michał. “How to Trick a Smart Algorithm Hacking Artificial Intelligence Systems.” Multidimensionality of Cybersecurity: Hate Speech in the Cyberworld, edited by Justyna Żylińska and Katarzyna Huczek, Uczelnia Techniczno Handlowa im. Heleny Chodkowskiej, 2025, pp. 195–215, doi:10.6084/m9.figshare.30041800.v1.


BibTeX

@inbook{malinowski2025a,
  title = {How to trick a smart algorithm hacking artificial intelligence systems},
  year = {2025},
  month = sep,
  pages = {195--215},
  publisher = {Uczelnia Techniczno Handlowa im. Heleny Chodkowskiej},
  doi = {10.6084/m9.figshare.30041800.v1},
  author = {Malinowski, Michał},
  editor = {Żylińska, Justyna and Huczek, Katarzyna},
  booktitle = {Multidimensionality of cybersecurity: hate speech in the cyberworld}
}

This chapter presents a systematic analysis of methods for deceiving artificial intelligence (AI) algorithms by exploiting their internal vulnerabilities at various stages of the model lifecycle. The focus is placed on the architecture of machine learning-based systems, identifying critical points susceptible to attack: training data, learning algorithms, model structure, and both input and output data. Typical attack techniques are discussed, including data poisoning, adversarial attacks, and model inversion. The study illustrates how the perceived “intelligence” of such systems can be manipulated through architectural and algorithmic vulnerabilities. A set of countermeasures and best practices is proposed to minimise the effectiveness of such attacks from the system design phase onwards.
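To give a flavour of the adversarial attacks the abstract mentions, below is a minimal sketch of the Fast Gradient Sign Method (FGSM) applied to a toy linear classifier. The model, weights, and perturbation budget `eps` are illustrative assumptions for this page, not taken from the chapter itself: each input feature is nudged by `eps` in the direction that increases the model's loss, flipping a confident prediction.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability of class 1 under a toy linear classifier."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(w, b, x, y_true, eps):
    """Fast Gradient Sign Method: shift every feature by eps in the
    direction that increases the loss for the true label y_true."""
    p = predict(w, b, x)
    # For binary cross-entropy on a linear model, d(loss)/dx_i = (p - y) * w_i
    def sign(v):
        return (v > 0) - (v < 0)
    return [xi + eps * sign((p - y_true) * wi) for wi, xi in zip(w, x)]

w, b = [2.0, -3.0, 1.0], 0.0
x = [0.5, -0.5, 0.2]                       # confidently class 1 (~0.94)
x_adv = fgsm_perturb(w, b, x, y_true=1.0, eps=0.6)
print(predict(w, b, x))                    # high probability before the attack
print(predict(w, b, x_adv))                # probability collapses; the decision flips
```

The same idea scales to deep networks, where the input gradient is obtained by backpropagation rather than the closed form used here; the chapter's broader point is that small, structured input changes can defeat a model that appears robust on clean data.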