Bender.jl: A utility package for customizable deep learning

07/28/2022, 4:40 PM — 4:50 PM UTC
Blue

Abstract:

A wide range of research on feedforward neural networks requires "bending" the chain rule during backpropagation. The package Bender.jl provides neural network layers (compatible with Flux.jl) that give users more freedom to choose every aspect of the forward mapping. This makes it easy to leverage ChainRules.jl to compose a wide range of experiments, such as training binary neural networks or training with Feedback Alignment and Direct Feedback Alignment, in just a few lines of code.

Description:

In this lightning talk we will explore two use cases of Bender.jl: training binary neural networks, and training neural networks with the biologically motivated Feedback Alignment algorithm. Binary neural networks and Feedback Alignment might seem like very different areas of research, but from an implementation point of view they are quite similar, as both amount to modifying the chain rule during backpropagation. Training a binary neural network requires modifying backpropagation so that non-zero error signals can propagate through binary activation functions (for example, via a straight-through estimator). Feedback Alignment requires modifying backpropagation to transport errors backwards through a set of auxiliary weights, which avoids the biologically implausible weight-symmetry requirement inherent to standard backpropagation. By letting the user specify the exact forward mapping when initializing a layer, Bender.jl makes it possible to leverage ChainRules.jl to implement these and similar experiments in just a few lines, as sketched below.
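To illustrate the kind of chain-rule bending involved, here is a minimal sketch of a binary activation with a straight-through estimator, written directly against ChainRulesCore.jl. The function name (binarize) and the gradient-clipping rule are illustrative assumptions for this sketch, not Bender.jl's API.

    using ChainRulesCore

    # Hypothetical binary activation: forward pass takes the elementwise sign.
    binarize(x::AbstractArray) = sign.(x)

    function ChainRulesCore.rrule(::typeof(binarize), x::AbstractArray)
        y = sign.(x)
        function binarize_pullback(ȳ)
            # Straight-through estimator (assumed variant): the derivative of
            # sign is zero almost everywhere, so instead pass the incoming
            # gradient through, clipped to the region |x| <= 1.
            x̄ = unthunk(ȳ) .* (abs.(x) .<= 1)
            return NoTangent(), x̄
        end
        return y, binarize_pullback
    end

Feedback Alignment can be sketched in the same style: a dense forward mapping whose pullback transports the error with a fixed random matrix B instead of the transpose of the trained weights W. Again, fa_dense and its signature are assumptions made for illustration, not the package's actual layer types.

    using ChainRulesCore

    # Hypothetical dense mapping with a feedback-alignment backward pass.
    fa_dense(W, B, x) = W * x

    function ChainRulesCore.rrule(::typeof(fa_dense), W, B, x)
        function fa_dense_pullback(ȳ)
            Δ = unthunk(ȳ)
            W̄ = Δ * x'    # weight gradient computed as usual
            x̄ = B' * Δ    # error transported via fixed B, not W'
            return NoTangent(), W̄, ZeroTangent(), x̄
        end
        return W * x, fa_dense_pullback
    end

In Bender.jl itself the user selects the forward mapping when initializing a layer rather than writing rrules by hand, which is what keeps such experiments down to a few lines of code.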
