Stack Overflow
0 votes
0 answers
156 views

I'm implementing a more efficient version of lokr.Linear from the LoKr module in PEFT. The current implementation uses torch.kron to construct the delta_weight before applying rank dropout, but this ...
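For context, a minimal sketch of the step the excerpt refers to, with made-up factor names and shapes (w1, w2, and the dimensions are assumptions, not PEFT internals):

    import torch

    # Hypothetical Kronecker factors; names and shapes are illustrative only.
    w1 = torch.randn(4, 3, requires_grad=True)    # left factor
    w2 = torch.randn(16, 32, requires_grad=True)  # right factor

    # torch.kron materializes the full (4*16) x (3*32) delta weight in memory,
    # which is the construction the excerpt describes as potentially inefficient.
    delta_weight = torch.kron(w1, w2)
    print(delta_weight.shape)  # torch.Size([64, 96])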
2 votes
1 answer
97 views

I’m currently using PyTorch’s torch.autograd.functional.jacobian to compute per-sample, elementwise gradients of a scalar-valued model output w.r.t. its inputs. I need to keep create_graph=True ...
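As a hedged illustration of that setup (the toy model and shapes below are assumptions), torch.autograd.functional.jacobian with create_graph=True keeps the per-sample gradients differentiable:

    import torch
    from torch.autograd.functional import jacobian

    # Toy scalar-per-sample model standing in for the poster's network.
    model = torch.nn.Sequential(
        torch.nn.Linear(3, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1)
    )
    x = torch.randn(5, 3)  # batch of 5 samples

    # Summing the scalar outputs makes row i of the Jacobian the gradient of
    # sample i's output w.r.t. its own input (samples don't interact).
    def f(inp):
        return model(inp).sum()

    J = jacobian(f, x, create_graph=True)  # shape (5, 3)
    print(J.requires_grad)                 # True: J can itself be differentiated again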
3 votes
1 answer
1k views

I’m working on a PyTorch model where I compute a "global representation" through a forward pipeline. This pipeline is subsequently used in an extra sampling procedure later on in the network. When I ...
2 votes
1 answer
105 views

I am trying to apply a very simple parameter estimation of an SIR model using a gradient descent algorithm. I am using the package autograd since the audience (this is for a sort of workshop for ...
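For reference, a self-contained sketch of that kind of fit with the HIPS autograd package (the Euler-stepped SIR in population fractions, the synthetic data, and the step size are all assumptions):

    import autograd.numpy as np
    from autograd import grad

    def sir_loss(params, observed):
        # Squared error between an Euler-integrated SIR infection curve and data
        # (R is implicit, since R = 1 - S - I).
        beta, gamma = params[0], params[1]
        S, I = 0.99, 0.01
        total = 0.0
        for obs in observed:
            total = total + (I - obs) ** 2
            dS = -beta * S * I
            dI = beta * S * I - gamma * I
            S, I = S + dS, I + dI
        return total / len(observed)

    # Synthetic "observations" generated from hypothetical true parameters (0.3, 0.1).
    observed = []
    S, I = 0.99, 0.01
    for _ in range(60):
        observed.append(I)
        dS, dI = -0.3 * S * I, 0.3 * S * I - 0.1 * I
        S, I = S + dS, I + dI

    loss_grad = grad(sir_loss)             # derivative w.r.t. the first argument
    params = np.array([0.5, 0.05])
    for _ in range(300):                   # plain gradient descent; step size is a guess
        params = params - 0.01 * loss_grad(params, observed)
    print(params)                          # should drift toward [0.3, 0.1]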
1 vote
1 answer
111 views

I’m working on a neural Turing Machine (NTM) model in PyTorch that uses a controller with 2D attention fusion. During training, I encounter the following error when calling .backward() on my loss: ...
0 votes
1 answer
67 views

I am having a weird issue with PyTorch's autograd functionality when implementing a custom loss calculation on a second-order differential equation. In the code below, predictions of the neural ...
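A hedged sketch of the usual pattern for that kind of loss (the network, collocation points, and right-hand side below are assumptions): differentiate the network output twice with torch.autograd.grad and create_graph=True, then build the residual loss from the second derivative.

    import torch

    net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
    x = torch.linspace(0.0, 1.0, 64).reshape(-1, 1).requires_grad_(True)

    u = net(x)
    du = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, grad_outputs=torch.ones_like(du), create_graph=True)[0]

    rhs = torch.zeros_like(x)            # placeholder right-hand side of u'' = f(x)
    loss = ((d2u - rhs) ** 2).mean()
    loss.backward()                      # gradients reach the network's parameters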
1 vote
1 answer
37 views

I am trying to understand the example REINFORCE implementation in the PyTorch examples repository on GitHub (https://github.com/pytorch/examples/blob/main/reinforcement_learning/reinforce.py). One particular point is a ...
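For context, the core update in that example follows the pattern below (the log-probabilities and rewards here are stand-ins, not the repository's code): log-probs collected during the episode are weighted by discounted returns and backpropagated once.

    import torch

    # Stand-ins for the log-probabilities the policy recorded during an episode.
    log_probs = [torch.log(torch.tensor(0.8, requires_grad=True)),
                 torch.log(torch.tensor(0.6, requires_grad=True))]
    rewards = [1.0, 0.0]
    gamma = 0.99

    # Discounted returns, computed backwards through the episode.
    returns, R = [], 0.0
    for r in reversed(rewards):
        R = r + gamma * R
        returns.insert(0, R)
    returns = torch.tensor(returns)

    # REINFORCE loss: maximizing expected return = minimizing -log_prob * return.
    policy_loss = torch.stack([-lp * G for lp, G in zip(log_probs, returns)]).sum()
    policy_loss.backward()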
2 votes
1 answer
57 views

I am trying to compute some derivatives of neural network outputs. To be precise, I need the Jacobian matrix of the function represented by the neural network and the second derivative of the ...
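As a hedged example of both quantities (the tiny network and input are assumptions), torch.autograd.functional provides jacobian and hessian directly:

    import torch
    from torch.autograd.functional import jacobian, hessian

    net = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(), torch.nn.Linear(16, 2))
    x = torch.randn(2)

    J = jacobian(lambda inp: net(inp), x)      # shape (2, 2): d net_i / d x_j
    H = hessian(lambda inp: net(inp)[0], x)    # shape (2, 2): second derivatives of output 0
    print(J.shape, H.shape)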
0 votes
1 answer
130 views

Say I have obtained some alphas and betas from a neural network, which will serve as the parameters of a Beta distribution. Now, I sample from the Beta distribution and then calculate some loss ...
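A minimal sketch of the differentiable path (the layer, softplus link, and loss are assumptions): the key point is rsample(), which keeps gradients flowing to the alpha/beta parameters, whereas sample() would cut the graph at the sampling step.

    import torch
    from torch.distributions import Beta

    layer = torch.nn.Linear(4, 2)
    raw = layer(torch.randn(3, 4))
    alphas, betas = torch.nn.functional.softplus(raw).unbind(dim=-1)  # keep parameters positive

    dist = Beta(alphas, betas)
    samples = dist.rsample()               # reparameterized, differentiable sample
    loss = (samples ** 2).mean()           # placeholder loss
    loss.backward()
    print(layer.weight.grad is not None)   # True: gradients reached the network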
2 votes
2 answers
71 views

I have the following training code. I am quite sure I call loss.backward() just once, and yet I am getting the error from the title. What am I doing wrong? Note that X_train_tensor is output from ...
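Without the full code this is only a guess, but one common way to hit that error with backward() called once per step is when X_train_tensor itself carries autograd history, so every epoch's loss shares that stale graph. A sketch of the pattern and the usual remedy:

    import torch

    source = torch.nn.Linear(3, 3)
    X_train_tensor = source(torch.randn(10, 3))   # output of another module: carries a graph
    X_train_tensor = X_train_tensor.detach()      # detach so each epoch builds a fresh graph

    model = torch.nn.Linear(3, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(3):
        opt.zero_grad()
        loss = model(X_train_tensor).pow(2).mean()
        loss.backward()                           # without the detach above, the second
        opt.step()                                # iteration raises the "backward twice" error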
0 votes
0 answers
177 views

For some reason, when changing my loss function in torch, I have to use NumPy functions for the computation. But I'm worried about whether using NumPy functions would make autograd fail....
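A quick demonstration of the concern: once values leave torch for NumPy, the result has no grad_fn, so autograd cannot reach the original tensor. Staying in torch (or wrapping the NumPy step in a custom torch.autograd.Function) keeps the graph intact.

    import torch
    import numpy as np

    x = torch.randn(4, requires_grad=True)

    # NumPy route: requires detach(), and the result is disconnected from the graph.
    bad = torch.from_numpy(np.square(x.detach().numpy())).sum()
    print(bad.requires_grad)   # False: backward() from here cannot reach x

    # Torch route: the graph is preserved.
    good = torch.square(x).sum()
    good.backward()
    print(x.grad)              # 2 * x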
1 vote
1 answer
76 views

I am using a residual neural network for a classification task. Somehow adding or omitting a ReLU activation causes autograd to fail. I would be grateful for any insight into the reason for this. ...
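Without the full model it is impossible to say, but one concrete way a ReLU can make autograd fail is the in-place variant (relu_ or nn.ReLU(inplace=True)) overwriting a tensor that an earlier op saved for its backward pass:

    import torch

    x = torch.randn(3, requires_grad=True)
    y = torch.sigmoid(x)   # sigmoid saves its output for the backward pass
    y.relu_()              # in-place ReLU bumps that saved tensor's version counter
    y.sum().backward()     # RuntimeError: ... has been modified by an inplace operation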
0 votes
1 answer
43 views

I am trying to implement this function but have had no luck. There is a VAE model that I am using, and along with it, an encoder and a decoder. I'm freezing the weights of the VAE decoder, and ...
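A minimal sketch of the freezing part (the module shapes and the loss are assumptions): setting requires_grad_(False) on the decoder's parameters stops them from being updated while gradients still flow through the decoder to whatever feeds it.

    import torch

    decoder = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 64))
    for p in decoder.parameters():
        p.requires_grad_(False)                   # decoder weights are frozen

    z = torch.randn(1, 8, requires_grad=True)     # e.g. a latent code being optimized
    loss = decoder(z).pow(2).mean()               # placeholder reconstruction-style loss
    loss.backward()

    print(z.grad is not None)                     # True: gradients flow through the decoder
    print(decoder[0].weight.grad)                 # None: frozen parameters receive no grad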
5 votes
2 answers
940 views

My code was running fine with CUDA, but now that I run it with device="cpu" and the flag torch.autograd.set_detect_anomaly(True), the following runtime error is raised: RuntimeError: Function '...
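For reference, a small example of what anomaly mode does (the NaN source here, the gradient of an L2 norm at zero, is just a stand-in for the poster's op): it checks each backward function's outputs and names the one that produced NaNs.

    import torch

    torch.autograd.set_detect_anomaly(True)

    x = torch.zeros(3, requires_grad=True)
    y = x.norm()                  # forward is fine; the gradient at zero is 0/0 = NaN
    try:
        y.backward()
    except RuntimeError as err:
        print(err)                # "Function '...' returned nan values in its 0th output."

    torch.autograd.set_detect_anomaly(False)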
3 votes
1 answer
131 views

I get an error if I don't supply retain_graph=True in y1.backward():

    import torch
    x = torch.tensor([2.0], requires_grad=True)
    y = torch.tensor([3.0], requires_grad=True)
    f = x + y
    z = 2 * f
    ...
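A hedged reconstruction of the rest of the snippet (the y1 definition and the second backward are guesses, since the excerpt is truncated): once backward() runs with the default retain_graph=False, the graph's saved tensors are freed, so any later backward pass over the same graph raises the familiar error unless the first call keeps the graph alive.

    import torch

    x = torch.tensor([2.0], requires_grad=True)
    y = torch.tensor([3.0], requires_grad=True)
    f = x + y
    z = 2 * f
    y1 = z ** 2                       # hypothetical continuation of the truncated code

    y1.backward(retain_graph=True)    # keep saved tensors for the second pass
    y1.backward()                     # works; without retain_graph above it would raise
    print(x.grad)                     # tensor([80.]): the two passes accumulate (2 * 40)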
