apns-218.mp4

This filename refers to a supplementary video accompanying a research paper on adversarial patch attacks against image segmentation models. The number usually denotes a specific test case, scene, or figure referenced within the study.

Context of the paper: The paper explores the vulnerability of deep learning-based image segmentation models (such as those used in autonomous driving) to adversarial patches, which are small, intentionally designed images that can cause a model to misclassify specific objects or entire regions of a scene. The authors demonstrate that a small patch placed in a scene can cause a segmentation model to fail globally or to ignore critical objects like pedestrians or traffic signs.

What the video shows: Files like "apns-218.mp4" typically present a side-by-side comparison of the original input video, the adversarial patch being applied to the scene, and the resulting segmentation map produced by the neural network.

Where to find it: You can often find these supplementary videos on platforms like arXiv (under the "Ancillary files" section) or in the researchers' project GitHub repositories.
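To make the attack's application step concrete, here is a minimal sketch of how an adversarial patch is pasted into a scene image before it is fed to a segmentation model. This is an illustrative example, not the paper's actual code: the function name `apply_patch` and the toy arrays are assumptions, and in the real attack the patch pixels would be optimized (for example, by gradient descent against the model's loss) rather than hand-set.

```python
import numpy as np

def apply_patch(image: np.ndarray, patch: np.ndarray, top: int, left: int) -> np.ndarray:
    """Paste a patch into an (H, W, C) scene image at the given offset.

    Illustrative sketch only: real adversarial patches are optimized so
    that this small overwritten region degrades the model's segmentation
    output far beyond the patch area itself.
    """
    out = image.copy()
    ph, pw = patch.shape[:2]
    out[top:top + ph, left:left + pw] = patch
    return out

# Toy example: a 64x64 RGB "scene" with an 8x8 patch placed at row 10, col 20.
scene = np.zeros((64, 64, 3), dtype=np.float32)
patch = np.ones((8, 8, 3), dtype=np.float32)  # stands in for optimized pixels
attacked = apply_patch(scene, patch, top=10, left=20)
print(attacked.sum())  # only the 8x8x3 patch region is nonzero
```

In the papers' video demonstrations, the frame shown in the middle panel corresponds to the output of a step like this, while the right panel shows how the segmentation network's predictions change as a result.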