• Author(s): Chunjiang Ge, Sijie Cheng, Ziming Wang, Jiale Yuan, Yuan Gao, Jun Song, Shiji Song, Gao Huang, Bo Zheng

This paper presents an approach to image classification based on sparse neural networks, aimed at improving both efficiency and robustness. Sparse neural networks reduce the number of active parameters, thereby decreasing computational cost and memory usage without significantly compromising performance. The proposed method introduces a structured sparsity technique that selectively activates neurons according to their contribution to the classification task, effectively pruning less important connections.
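The paper's exact contribution-scoring criterion is not spelled out in this summary; a minimal sketch of structured, neuron-level pruning, using L2 weight magnitude as a stand-in importance score, might look like this:

```python
import numpy as np

def prune_neurons(weight, keep_ratio=0.5):
    """Structurally prune a dense layer's weight matrix (out_features x in_features)
    by zeroing entire rows (neurons) with the smallest L2 norms.

    This is a generic magnitude-based sketch, not the paper's specific
    contribution-based criterion; `keep_ratio` is an illustrative parameter.
    """
    norms = np.linalg.norm(weight, axis=1)            # one importance score per neuron
    k = max(1, int(round(keep_ratio * weight.shape[0])))
    keep = np.argsort(norms)[-k:]                     # indices of the k strongest neurons
    mask = np.zeros(weight.shape[0], dtype=bool)
    mask[keep] = True
    return weight * mask[:, None], mask               # whole rows are zeroed (structured)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
W_sparse, mask = prune_neurons(W, keep_ratio=0.25)    # keep 2 of 8 neurons
```

Because entire rows are removed rather than scattered individual weights, the surviving layer can be stored and executed as a genuinely smaller dense matrix, which is what makes structured sparsity attractive on resource-constrained hardware.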

The framework is evaluated on several benchmark datasets, including CIFAR-10, CIFAR-100, and ImageNet. Results indicate that the sparse networks achieve accuracy comparable to that of dense networks while significantly reducing parameter count and computational overhead. This reduction in complexity not only speeds up training and inference but also makes the models more suitable for deployment on resource-constrained devices.

Additionally, the paper explores the robustness of sparse neural networks against adversarial attacks. By incorporating sparsity into the network architecture, the models exhibit improved resistance to perturbations, enhancing their reliability in real-world applications. The study includes a comprehensive analysis of the trade-offs between sparsity, accuracy, and robustness, providing valuable insights into the design of efficient and resilient neural networks.
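The summary does not specify which attacks are used in the robustness evaluation; as an illustrative probe, a one-step gradient-sign perturbation (FGSM-style) on a toy linear logistic classifier can be sketched as follows. The model, inputs, and `eps` here are all hypothetical:

```python
import numpy as np

def fgsm_attack(w, x, y, eps):
    """One-step gradient-sign perturbation of input x against a linear
    logistic classifier with weights w and label y in {-1, +1}.
    Illustrative only; not the paper's evaluation protocol."""
    margin = y * np.dot(w, x)
    grad_x = -y * w / (1.0 + np.exp(margin))   # gradient of logistic loss w.r.t. x
    return x + eps * np.sign(grad_x)           # step in the loss-increasing direction

w = np.array([1.0, -2.0, 0.5])                 # toy classifier weights
x = np.array([0.2, -0.1, 0.3])                 # clean input
y = 1.0
x_adv = fgsm_attack(w, x, y, eps=0.05)
```

Comparing accuracy on clean versus perturbed inputs at several values of `eps` is the standard way to quantify the kind of robustness gain the paper attributes to sparsity.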

In conclusion, the proposed sparse neural network framework offers a promising solution for efficient and robust image classification. By balancing the trade-offs between model complexity, performance, and robustness, this approach paves the way for the development of more practical and deployable machine learning models. The findings underscore the potential of structured sparsity in advancing the field of neural network research, particularly in applications where computational resources are limited.