from Machine Learning
Implementation details of Backpropagation in Siamese networks. [D]
Hey folks,
Could someone please share a correct implementation of backpropagation in Siamese networks? The explanation in the original paper is not very detailed.
I found a random implementation on GitHub, ref. The inputs are passed through the network one after the other, the loss is computed on the two resulting outputs, and the weights are updated afterward. Is this the correct implementation?
Another implementation I could think of is to keep two copies of the same network, like a bi-encoder: the two inputs are passed through simultaneously, the loss is backpropagated and the weights are updated in both networks, and then both networks' weights are replaced with the aggregate (mean) of the two before the next forward pass.
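For comparison, here is a minimal sketch of the shared-weight view, in which there is only one set of parameters: both branches use the same weights, so the gradient contributions flowing back through the two branches simply sum into one update, and no copying or averaging of two networks is needed. Everything here is an illustrative assumption of mine (a hypothetical one-layer linear embedding with a squared-distance loss), not the paper's architecture:

```python
# Siamese sketch: ONE shared weight vector w, used by both branches.
# Hypothetical setup: linear embedding f(x) = w . x, loss = (f(x1) - f(x2))^2.

def embed(w, x):
    # shared linear embedding: dot product of weights and input
    return sum(wi * xi for wi, xi in zip(w, x))

def loss_and_grad(w, x1, x2):
    # both branches use the SAME w
    d = embed(w, x1) - embed(w, x2)
    loss = d * d
    # dL/dw_i = 2*d*x1_i (branch 1) + (-2*d*x2_i) (branch 2):
    # the two branch gradients accumulate into one gradient for w
    grad = [2 * d * (a - b) for a, b in zip(x1, x2)]
    return loss, grad

w = [0.5, -0.3]                    # toy initial weights
loss, grad = loss_and_grad(w, [1.0, 2.0], [0.0, 1.0])
lr = 0.1
# single update to the single, shared parameter set
w = [wi - lr * gi for wi, gi in zip(w, grad)]
```

In this view there is nothing to aggregate: the "two networks" are one network applied twice, and autograd frameworks produce the same effect automatically when the same module processes both inputs.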
Which one is correct?
Please clarify.
Tagged with
#Backpropagation
#Siamese networks
#implementation
#weights update
#loss function
#network architecture
#Bi-encoder
#network weights
#inputs
#forward pass
#original paper
#simultaneous processing
#mean aggregation
#two inputs