Shrinking massive neural networks used to model language

Deep learning neural networks can be massive, demanding major computing power. In a test of the “lottery ticket hypothesis,” MIT researchers have found leaner, more efficient subnetworks hidden within BERT models. The discovery could make natural language processing more accessible.
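The "lottery ticket hypothesis" holds that a large trained network contains a much smaller subnetwork that, trained on its own, can match the full model's accuracy. The sketch below illustrates the core operation used to expose such subnetworks, magnitude pruning, in PyTorch; the toy model, the 40 percent sparsity level, and the helper names are illustrative assumptions rather than the setup of the MIT study.

# Hypothetical sketch of one round of magnitude pruning, the basic step
# behind lottery-ticket-style subnetwork searches. The model, sparsity
# level, and function names here are illustrative assumptions.
import torch
import torch.nn as nn

def magnitude_prune_masks(model: nn.Module, sparsity: float = 0.4) -> dict:
    """Return a 0/1 mask per weight matrix that zeroes the smallest-magnitude weights."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:  # skip biases and layer-norm vectors
            continue
        k = int(sparsity * param.numel())
        if k == 0:
            masks[name] = torch.ones_like(param)
            continue
        # k-th smallest absolute value serves as the pruning threshold
        threshold = param.abs().flatten().kthvalue(k).values
        masks[name] = (param.abs() > threshold).float()
    return masks

def apply_masks(model: nn.Module, masks: dict) -> None:
    """Zero out the pruned weights in place."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])

# Example on a tiny stand-in network; a real run would load a BERT checkpoint.
toy = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 2))
masks = magnitude_prune_masks(toy, sparsity=0.4)
apply_masks(toy, masks)
kept = sum(int(m.sum()) for m in masks.values())
total = sum(m.numel() for m in masks.values())
print(f"kept {kept}/{total} weights ({kept/total:.0%})")

In practice the search is iterative: prune a fraction of the smallest weights, retrain (or rewind) the surviving weights, and repeat until the target sparsity is reached, keeping only the subnetworks that still match the original accuracy.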