1,253 questions
0 votes · 1 answer · 74 views
The output changes in seemingly deterministic code
I have this class that helps me calculate gradients during backpropagation.
import time
visited = set()
class Value:
    def __init__(self, data, _children=None, _op=''):
        self.data = data
...
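One common source of run-to-run variation in a graph like this is iterating over a Python set of node objects, whose iteration order depends on object hashes and so changes between runs. A minimal micrograd-style sketch (hypothetical, not the asker's full class) that keeps the backward pass deterministic by storing children in an ordered container and walking an explicit topological order:

# Sketch: ordered children + topological sort make backward() order-independent,
# unlike iterating over a set of node objects.
class Value:
    def __init__(self, data, _children=(), _op=''):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = tuple(_children)   # ordered container, not a set
        self._op = _op

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other), '+')
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topological order guarantees each node's grad is complete
        # before its own _backward runs.
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()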
2 votes · 1 answer · 139 views
How to Implement Backpropagation Without Auto-Differentiation for a Feedforward Neural Network?
I am working on a deep learning assignment that requires implementing a feedforward neural network (FNN) from scratch using only NumPy (without TensorFlow, PyTorch, or other auto-differentiation tools)...
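For reference, a minimal sketch of manual backprop for a one-hidden-layer network in NumPy; the shapes, sigmoid activation, and squared-error loss are illustrative assumptions, not the assignment's spec:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4));  y = rng.normal(size=(32, 1))
W1 = rng.normal(size=(4, 8));  b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1));  b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(100):
    # forward pass
    z1 = X @ W1 + b1
    a1 = sigmoid(z1)
    yhat = a1 @ W2 + b2
    loss = ((yhat - y) ** 2).mean()

    # backward pass: chain rule applied layer by layer
    dyhat = 2 * (yhat - y) / len(X)
    dW2 = a1.T @ dyhat;        db2 = dyhat.sum(0)
    da1 = dyhat @ W2.T
    dz1 = da1 * a1 * (1 - a1)  # sigmoid'(z) = a * (1 - a)
    dW1 = X.T @ dz1;           db1 = dz1.sum(0)

    # plain gradient descent step (in-place updates)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= 0.1 * g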
0 votes · 1 answer · 101 views
Self-made backpropagation doesn't work in Python (2-neuron network)
I tried to build a neural network with two neurons as described in the book Why Machines Learn on page 330. This is my code and I don't know why it didn't work. I tried something like this before with ...
1 vote · 0 answers · 71 views
Computing Gradient of Loss w.r.t. Learning Rate in PyTorch
I am building a custom optimizer that samples learning rates from a Dirichlet distribution, whose parameters (alpha) need to be updated in each backpropagation. I've already figured out how to get the ...
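A sketch of one way to obtain such a gradient (a hypergradient): keep the learning rate as a tensor with requires_grad=True, build the parameter update with create_graph=True, and differentiate the post-update loss with respect to it. The Dirichlet/alpha machinery from the question is omitted; lr below is a plain hypothetical scalar:

import torch

w = torch.randn(3, requires_grad=True)
lr = torch.tensor(0.1, requires_grad=True)
x, y = torch.randn(8, 3), torch.randn(8)

loss = ((x @ w - y) ** 2).mean()
(g,) = torch.autograd.grad(loss, w, create_graph=True)
w_new = w - lr * g                           # update step stays in the graph
new_loss = ((x @ w_new - y) ** 2).mean()
(dlr,) = torch.autograd.grad(new_loss, lr)   # d(new_loss)/d(lr)
print(dlr)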
1 vote · 1 answer · 111 views
RuntimeError: Trying to backward through the graph a second time – How to resolve without using retain_graph=True?
I’m working on a Neural Turing Machine (NTM) model in PyTorch that uses a controller with 2D attention fusion. During training, I encounter the following error when calling .backward() on my loss:
...
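The usual cause of this error in recurrent models is state carried across training iterations, which keeps the previous step's graph alive. A minimal sketch of the standard fix, detaching the state each iteration (a generic RNN stand-in, not the asker's NTM):

import torch

rnn = torch.nn.RNN(4, 8, batch_first=True)
opt = torch.optim.SGD(rnn.parameters(), lr=0.01)
h = torch.zeros(1, 2, 8)

for step in range(5):
    x = torch.randn(2, 3, 4)
    out, h = rnn(x, h)
    loss = out.pow(2).mean()
    opt.zero_grad()
    loss.backward()          # would fail on step 2 without the detach below
    opt.step()
    h = h.detach()           # cut the graph between iterations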
1 vote · 1 answer · 146 views
Why do custom backward functions and torch.autograd yield different results after network training?
I'd like to customize nn.Linear()'s backward function:
class Linear(torch.autograd.Function):
    @staticmethod
    def forward(ctx, inputs, weight, bias):
        e = F.linear(inputs, weight, bias)
...
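For comparison, a complete minimal version of this pattern with a hand-written backward checked against autograd; a gradcheck mismatch is the usual explanation for diverging training results. The name LinearFn is illustrative:

import torch
import torch.nn.functional as F

class LinearFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, inputs, weight, bias):
        ctx.save_for_backward(inputs, weight)
        return F.linear(inputs, weight, bias)

    @staticmethod
    def backward(ctx, grad_out):
        # one gradient per forward input, computed from the saved tensors
        inputs, weight = ctx.saved_tensors
        grad_in = grad_out @ weight          # dL/dx
        grad_w = grad_out.t() @ inputs       # dL/dW
        grad_b = grad_out.sum(0)             # dL/db
        return grad_in, grad_w, grad_b

# gradcheck compares the custom backward against numerical gradients
x = torch.randn(5, 3, dtype=torch.double, requires_grad=True)
w = torch.randn(4, 3, dtype=torch.double, requires_grad=True)
b = torch.randn(4, dtype=torch.double, requires_grad=True)
assert torch.autograd.gradcheck(LinearFn.apply, (x, w, b))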
1 vote · 1 answer · 37 views
How does PyTorch autograd backpropagate successfully through non-tensor elements in the computational graph for the example REINFORCE code?
I am trying to understand the example REINFORCE PyTorch implementation on PyTorch GitHub: https://github.com/pytorch/examples/blob/main/reinforcement_learning/reinforce.py
One particular point is a ...
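A stripped-down sketch of the pattern in question: the Python list is only a container, and each saved log_prob tensor keeps its grad_fn, so the graph is intact once the tensors are recombined. The shapes and rewards below are made up:

import torch

logits = torch.nn.Parameter(torch.zeros(3))
saved_log_probs, rewards = [], [1.0, 0.5, 2.0]

for _ in range(3):
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()                         # sampling itself is not differentiated
    saved_log_probs.append(dist.log_prob(action))  # tensor with grad_fn, stored in a plain list

returns = torch.tensor(rewards)
policy_loss = torch.cat([(-lp * r).unsqueeze(0)
                         for lp, r in zip(saved_log_probs, returns)]).sum()
policy_loss.backward()                             # gradients reach `logits`
print(logits.grad)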
1 vote · 1 answer · 123 views
Not reaching optimal parameter values for my neural network
I am trying to create a classification neural network using only the NumPy library. I have completely built the network and worked through the logic of it, and it seems perfectly fine to me. I ...
0 votes · 1 answer · 78 views
How to do backpropagation in PyTorch when training AlphaZero?
I'm trying to implement my version of AlphaZero for Connect Four. I have implemented a convolutional network using PyTorch and can get (random) value and policy outputs from the model for given ...
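A minimal sketch of the AlphaZero-style training step: one backward pass through a joint value + policy objective. The stand-in network and target shapes below are assumptions, not the asker's model:

import torch
import torch.nn.functional as F

net = torch.nn.Linear(42, 8)                     # stand-in for the conv network
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

board = torch.randn(16, 42)                      # batch of encoded positions
z = torch.randn(16, 1)                           # game outcomes in [-1, 1]
pi = torch.softmax(torch.randn(16, 7), dim=1)    # MCTS visit distributions

out = net(board)
value, policy_logits = torch.tanh(out[:, :1]), out[:, 1:]

# AlphaZero objective: (z - v)^2 - pi^T log p  (the L2 term usually via weight decay)
loss = F.mse_loss(value, z) - (pi * F.log_softmax(policy_logits, dim=1)).sum(1).mean()
opt.zero_grad()
loss.backward()
opt.step()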
0 votes · 0 answers · 102 views
PyTorch loss.backward() Error: TypeError: 'NoneType' object is not callable
I encountered an error while training a Siamese network using PyTorch. When I call loss.backward() after calculating the loss, I receive a type error. Below are the error messages and the relevant ...
1 vote · 0 answers · 92 views
Full convolution method for backpropagation
How to perform full convolution in TensorFlow? This method is explained in this article.
The function tf.nn.conv3d has a parameter padding which accepts two types, "SAME" and "VALID" ...
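One way to emulate a "full" convolution given only "SAME" and "VALID": zero-pad each spatial dimension with kernel_size - 1 on both sides, then run a "VALID" convolution. A sketch with illustrative shapes:

import tensorflow as tf

x = tf.random.normal([1, 5, 5, 5, 2])            # [batch, D, H, W, in_ch]
k = tf.random.normal([3, 3, 3, 2, 4])            # [kD, kH, kW, in_ch, out_ch]

# pad the three spatial dims by (kernel_size - 1) on each side
pad = [[0, 0]] + [[s - 1, s - 1] for s in k.shape[:3]] + [[0, 0]]
x_full = tf.pad(x, pad)
y = tf.nn.conv3d(x_full, k, strides=[1] * 5, padding="VALID")
print(y.shape)                                   # (1, 7, 7, 7, 4): 5 + 3 - 1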
1 vote · 1 answer · 206 views
PyTorch: can't find the in-place operation that's preventing my network from calculating gradients
I'm trying to implement Proximal Policy Optimisation with multiple actor heads: there are multiple actions the agent can perform, so I need one head to choose which action(s) to perform, and then ...
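Two standard tools for hunting such a bug, sketched below on a toy tensor (not the asker's PPO model): torch.autograd.set_detect_anomaly(True) makes the error name the offending operation, and rewriting in-place updates out-of-place removes it:

import torch

torch.autograd.set_detect_anomaly(True)   # error traces now name the offending op

x = torch.randn(4, requires_grad=True)
y = torch.relu(x)

# y += 1            # in-place: can overwrite tensors saved for backward
y = y + 1           # out-of-place equivalent keeps the graph valid

loss = y.sum()
loss.backward()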
0 votes · 1 answer · 120 views
Can the following PyTorch operations be backpropagated through?
I have a tensor A, which is from original point cloud data. Its size is (N,3). Besides, I have a tensor B. It is an output score tensor produced by a neural network. Its size is (N,1). I first use torch.cat to ...
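A sketch of this setup with toy shapes: torch.cat and indexing are both differentiable, so gradients flow back into the network output B even when it is concatenated with raw data A that carries no gradient:

import torch

N = 10
A = torch.randn(N, 3)                       # point cloud, requires_grad=False
net = torch.nn.Linear(3, 1)
B = net(A)                                  # scores, (N, 1), carries grad_fn

AB = torch.cat([A, B], dim=1)               # (N, 4)
loss = AB[:, 3].mean()                      # select the score column
loss.backward()
print(net.weight.grad)                      # non-None: graph survived cat/indexing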
1 vote · 1 answer · 75 views
How to calculate loss over a sliding window of samples and then backpropagate the weighted average loss
I am trying to implement a learning technique from a paper. The relevant portion is: The SNN baseline used a sliding window of 50 consecutive data points, representing 200 ms of data (50-point window, ...
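A sketch of one reading of that setup: keep the last 50 raw samples in a window, recompute the per-sample losses inside each forward pass, and backpropagate their weighted average, so no stale graphs are retained. The window size, model, and weighting below are assumptions:

import torch
from collections import deque

model = torch.nn.Linear(8, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
window = deque(maxlen=50)                   # 50-point sliding window

stream = torch.randn(200, 8)
targets = torch.randn(200, 1)

for x, y in zip(stream, targets):
    window.append((x, y))                   # store raw tensors, not graphs
    if len(window) < window.maxlen:
        continue
    xs = torch.stack([x for x, _ in window])
    ys = torch.stack([y for _, y in window])
    per_sample = (model(xs) - ys).pow(2).squeeze(1)   # (50,)
    weights = torch.linspace(0.1, 1.0, len(window))   # favour recent samples
    loss = (weights * per_sample).sum() / weights.sum()
    opt.zero_grad()
    loss.backward()     # one fresh graph per step; no retain_graph needed
    opt.step()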
0 votes · 0 answers · 21 views
Using PyTorch, how to establish a loss function that keeps all kernels exactly the same
For example, you have a CNN of 3 layers, every layer has only 1 channel (1 output feature), so the weights of the CNN are 3 kernels, 1 for each layer. Assume these 3 kernels are all 2×2×1, and how can ...
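A sketch of one way to encourage this: add a penalty measuring how far each layer's kernel is from the mean kernel. The 3-layer, 1-channel CNN below is a stand-in matching the description:

import torch

layers = torch.nn.ModuleList(
    torch.nn.Conv2d(1, 1, kernel_size=2) for _ in range(3)
)

def kernel_consistency_penalty(convs):
    kernels = torch.stack([c.weight for c in convs])   # (3, 1, 1, 2, 2)
    mean_k = kernels.mean(dim=0, keepdim=True)
    return ((kernels - mean_k) ** 2).sum()             # zero iff all kernels equal

x = torch.randn(4, 1, 8, 8)
y = x
for c in layers:
    y = c(y)
task_loss = y.pow(2).mean()                            # stand-in objective
loss = task_loss + 10.0 * kernel_consistency_penalty(layers)
loss.backward()

If the kernels must be exactly identical rather than merely encouraged toward equality, sharing a single Parameter across the layers is the stricter alternative.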