METHODS FOR OPTIMIZING NEURAL NETWORK MODELS
Annotation
Methods for building optimized deep learning accelerators are discussed. Traditional approaches to fault-tolerant deep learning accelerators are shown to rely on redundant computation, which incurs significant overheads in training time, power consumption, and integrated circuit area. A method is proposed that takes into account differences in the vulnerability of individual neurons and of the individual bits of each neuron, which partially mitigates the problem of computational redundancy. The method enables selective protection of model components at the architectural and circuit levels, reducing overhead without compromising the reliability of the model. It is shown that quantization of the deep learning accelerator model allows data to be represented with fewer bits, which reduces hardware resource requirements.
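As a minimal sketch of the two ideas named in the annotation, the following Python fragment shows symmetric 8-bit quantization of a weight tensor and a toy vulnerability ranking that selects only the most critical weights for hardware protection. All function names (quantize_int8, estimate_vulnerability, protect_top_k) and the magnitude-based vulnerability score are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: int8 quantization plus selective protection of the
# most vulnerable weights. Names and the scoring heuristic are assumptions.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization to 8-bit integers."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def estimate_vulnerability(weights: np.ndarray) -> np.ndarray:
    """Toy vulnerability score: larger-magnitude weights are assumed to
    perturb the output more strongly when one of their bits flips."""
    return np.abs(weights)

def protect_top_k(scores: np.ndarray, k: int):
    """Indices of the k most vulnerable weights; only these would receive
    hardware protection (e.g., duplication or ECC), instead of protecting
    the whole model redundantly."""
    flat = scores.ravel()
    top = np.argpartition(flat, -k)[-k:]
    return np.unravel_index(top, scores.shape)

# Usage example on random weights
w = np.random.randn(256, 128).astype(np.float32)
q, scale = quantize_int8(w)
protected_idx = protect_top_k(estimate_vulnerability(w), k=1024)
print(f"scale={scale:.4f}, protected fraction={1024 / w.size:.3%}")
```

Under these assumptions, only a small fraction of weights is shielded, which is the source of the overhead reduction compared with uniformly redundant computation.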