All events are in Central time unless specified.

M.S. Defense: Archit Srivastava

Date:
Time:
1:00 pm – 2:00 pm
Avery Hall Room: 347
1144 T St
Lincoln NE 68508
Additional Info: AVH
“Feed-Forward Neural Networks with Asymmetric Training”

Our work presents a new perspective on training feed-forward neural networks (FFNNs). We introduce and formally define the notions of symmetry and asymmetry in the context of FFNN training. We also provide a mathematical definition that generalizes the idea of sparsification and demonstrate how sparsification can induce asymmetric training in FFNNs. Training an FFNN consists of two phases: the forward pass and the backward pass. We define symmetric training as follows: if a neural network uses the same parameters for both the forward pass and the backward pass, the training is said to be symmetric. The definition of asymmetric training follows naturally from the negation of this definition: training is asymmetric if the network uses different parameters for the forward and backward passes. We conducted experiments that induce asymmetry during training by having the network use all of its parameters in the forward pass while only a subset of parameters is used in the backward pass to compute the gradient of the loss function via sparsified backpropagation. We explore three strategies for inducing asymmetry. The first is somewhat analogous to dropout, because the sparsified backpropagation algorithm drops specific neurons, along with their associated parameters, while computing the gradient. The second is excessive sparsification, which induces asymmetry by dropping both neurons and connections, so the network behaves as if it were only partially connected while computing the gradient in the backward pass. The third is a refinement of the second; it likewise induces asymmetry by dropping both neurons and connections in the backward pass. In our experiments, FFNNs with asymmetric training exhibited less overfitting, better accuracy, and shorter backpropagation time than FFNNs trained symmetrically with dropout.
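To make the idea of asymmetric training concrete, the sketch below is a minimal, hypothetical illustration in plain NumPy (not the thesis implementation): a two-layer network whose forward pass uses all parameters, while the backward pass propagates the error through a magnitude-sparsified copy of the output weights. The `sparsify` helper, the top-k magnitude criterion, the layer sizes, and the learning rate are all assumptions made for illustration only.

```python
# Hypothetical sketch of asymmetric training (forward pass dense,
# backward pass sparsified). Not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sparsify(w, keep=0.5):
    """Keep only the largest-magnitude entries of w; zero out the rest.
    (Assumed criterion; the thesis may use a different sparsification rule.)"""
    k = int(np.ceil(keep * w.size))
    thresh = np.sort(np.abs(w), axis=None)[-k]
    return np.where(np.abs(w) >= thresh, w, 0.0)

# Toy data and parameters (sizes are illustrative only).
x = rng.normal(size=(32, 10))            # batch of inputs
y = rng.normal(size=(32, 1))             # regression targets
W1 = rng.normal(scale=0.1, size=(10, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))
lr = 1e-2

for step in range(100):
    # Forward pass: every parameter participates.
    h_pre = x @ W1
    h = relu(h_pre)
    pred = h @ W2
    loss = np.mean((pred - y) ** 2)

    # Backward pass: the error signal flows through a sparsified copy of W2,
    # so only a subset of parameters shapes the gradient (the asymmetry).
    W2_sparse = sparsify(W2, keep=0.5)
    grad_pred = 2.0 * (pred - y) / y.shape[0]
    grad_W2 = h.T @ grad_pred
    grad_h = grad_pred @ W2_sparse.T      # asymmetric step: uses W2_sparse
    grad_h_pre = grad_h * (h_pre > 0)
    grad_W1 = x.T @ grad_h_pre

    # Plain gradient-descent update, kept simple for the sketch.
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2
```

The same idea could instead be realized by masking the gradients themselves (dropping neurons and their connections in the backward pass), which corresponds more closely to the dropout-like and excessive-sparsification strategies described in the abstract.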

Committee:
Dr. Vinod Variyam, Advisor
Dr. Stephen Scott
Dr. Ashok Samal

Zoom: https://unl.zoom.us/j/91905181819
