Stack Overflow
0 votes
1 answer
107 views

I'm trying to recreate the cv2.warpAffine() function, taking a tensor input and output rather than a NumPy array. However, gradients calculated from the output tensor produce a non-None gradient ...
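
A differentiable affine warp can also be built from PyTorch's affine_grid and grid_sample instead of porting warpAffine by hand; gradients then flow back to the input. A minimal sketch (the matrix theta and the input shape are assumptions, and affine_grid's normalized, output-to-input coordinate convention differs from cv2.warpAffine's):

    import torch
    import torch.nn.functional as F

    img = torch.rand(1, 1, 64, 64, requires_grad=True)  # N, C, H, W (example shape)
    theta = torch.tensor([[[1.0, 0.0, 0.1],
                           [0.0, 1.0, 0.2]]])           # batch of one 2x3 affine matrix

    # Build a sampling grid from the affine matrix and resample the image;
    # both ops are differentiable, so the autograd graph stays connected.
    grid = F.affine_grid(theta, img.size(), align_corners=False)
    warped = F.grid_sample(img, grid, align_corners=False)

    warped.sum().backward()
    print(img.grad.shape)  # non-None: gradients reach the input tensor
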
1 vote
1 answer
90 views

I need to write an SGD perceptron for digit recognition according to these guidelines: the dataset is the MNIST database with 60,000 training samples. Get a random vector and calculate the net (weighted sum) and ...
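
For reference, a single-sample perceptron update following those guidelines looks like the sketch below; the synthetic arrays stand in for MNIST, and the learning rate and epoch count are assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((1000, 784))            # stand-in for flattened MNIST images
    y = rng.integers(0, 2, 1000) * 2 - 1   # labels in {-1, +1}: one digit vs. the rest

    w = np.zeros(784)
    b = 0.0
    lr = 0.01

    for _ in range(3):                     # epochs
        for i in rng.permutation(len(X)):  # draw samples in random order
            net = X[i] @ w + b             # net = weighted sum
            if y[i] * net <= 0:            # misclassified: apply the perceptron update
                w += lr * y[i] * X[i]
                b += lr * y[i]
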
0 votes
1 answer
45 views

I am unable to achieve good results unless I choose a batch size of 1. By good, I mean the error decreases significantly through the epochs. When I use a full batch of 30, the results are poor: the error ...
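
One frequent cause, assuming the gradients are summed over the batch rather than averaged: the effective step grows roughly linearly with batch size, so a learning rate tuned for batch size 1 overshoots at 30. A small sketch of the scale difference:

    import numpy as np

    grads = np.random.randn(30, 5)      # hypothetical per-sample gradients, batch of 30

    step_summed = grads.sum(axis=0)     # roughly 30x the magnitude of one sample's step
    step_averaged = grads.mean(axis=0)  # same scale as a batch-size-1 step

    # If the loss is summed, either average it or divide the learning
    # rate by the batch size to keep update magnitudes comparable.
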
1 vote
1 answer
172 views

I had unexpected output while implementing the SGD algorithm for my ML homework. This is part of my training data, which normally has 320 rows: my dataset: https://github.com/Jangrae/csv/blob/master/...
0 votes
0 answers
141 views

    import numpy as np
    from matplotlib import pyplot as plt

    xk = np.linspace(-1, 1, 100)
    yk = 2 * xk + 3 + np.random.rand(len(xk))
    x1, x2 = np.meshgrid(xk, yk)
    F = (x1 - 2) ** 2 + 2 * (x2 - 3) ** 2
    fig = plt....
1 vote
0 answers
59 views

I'm working on a multivariate optimization problem using the gradient descent algorithm. The algorithm does an okay job, but I noticed the cost function does not follow a monotonically decreasing trend ...
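
Without the code it is hard to diagnose, but a fixed step size that is too large is the usual cause of non-monotone cost; a simple backtracking rule (halve the step whenever it would increase the cost) restores monotone descent. A sketch on an assumed quadratic cost:

    import numpy as np

    def cost(x):
        return (x[0] - 2) ** 2 + 2 * (x[1] - 3) ** 2

    def grad(x):
        return np.array([2 * (x[0] - 2), 4 * (x[1] - 3)])

    x = np.zeros(2)
    lr = 1.0
    for _ in range(100):
        g = grad(x)
        while cost(x - lr * g) > cost(x):  # backtrack until the step decreases the cost
            lr *= 0.5
        x = x - lr * g

    print(x, cost(x))  # approaches (2, 3); the cost now decreases monotonically
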
0 votes
0 answers
539 views

I'm currently working in Python with TensorFlow and would like to train my model with (full-batch) gradient descent rather than stochastic gradient descent. The reason is that I want to train my model ...
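
In Keras, full-batch gradient descent falls out of setting batch_size to the dataset size, so every epoch performs exactly one deterministic update; a minimal sketch (the model and data are placeholders):

    import numpy as np
    import tensorflow as tf

    x = np.random.rand(320, 10).astype("float32")   # placeholder data
    y = np.random.rand(320, 1).astype("float32")

    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")

    # batch_size == len(x): one full-batch gradient step per epoch,
    # i.e. classic (non-stochastic) gradient descent.
    model.fit(x, y, batch_size=len(x), epochs=50, shuffle=False, verbose=0)
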
0 votes
1 answer
595 views

I have a model in PyTorch. The model can take any shape, but let's assume this is the model:

    torch_model = Sequential(
        Flatten(),
        Linear(28 * 28, 256),
        Dropout(.4),
        ReLU(),
        ...
0 votes
1 answer
475 views

I am using SGDRegressor with a constant learning rate and the default loss function. I am curious to know how changing the alpha parameter in the function from 0.0001 to 100 will change the regressor's behavior....
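
For context: in SGDRegressor, alpha is the L2 regularization strength, not a step size, so raising it from 0.0001 to 100 mainly shrinks the learned coefficients toward zero. A quick comparison sketch (the synthetic data and eta0 are assumptions):

    import numpy as np
    from sklearn.linear_model import SGDRegressor

    rng = np.random.default_rng(0)
    X = rng.random((200, 3))
    y = X @ np.array([3.0, -2.0, 1.0]) + rng.normal(0, 0.1, 200)

    for alpha in (0.0001, 100):
        reg = SGDRegressor(alpha=alpha, learning_rate="constant", eta0=0.01,
                           max_iter=1000, random_state=0)
        reg.fit(X, y)
        print(alpha, reg.coef_)  # the large alpha drives coefficients toward zero
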
1 vote
0 answers
385 views

I tried to implement the stochastic gradient descent method and apply it to a dataset I built. The dataset follows a linear regression (wx + b = y). The process has also somehow converged towards the ...
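
For comparison, a minimal per-sample SGD loop for the wx + b = y model is sketched below; the synthetic data, learning rate, and epoch count are assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 320)
    y = 2.0 * x + 3.0 + rng.normal(0, 0.1, 320)  # ground truth: w = 2, b = 3

    w, b = 0.0, 0.0
    lr = 0.05
    for _ in range(50):                          # epochs
        for i in rng.permutation(len(x)):        # one random sample per update
            err = (w * x[i] + b) - y[i]          # residual for this sample
            w -= lr * err * x[i]                 # gradient of 0.5 * err**2 w.r.t. w
            b -= lr * err                        # ... and w.r.t. b

    print(w, b)                                  # should approach (2, 3)
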
1 vote
1 answer
97 views

    # We first define the observations as a list and then also as a table
    # for the experienced worker's performance.
    Observation1 = [2.0, 6.0, 2.0]
    Observation2 = [1.0, 5.0, 7.0]
    Observation3 = [5.0, 2.0, ...
2 votes
1 answer
901 views

I ran into this weird behavior when trying to "manually" optimize a network's parameters via SGD. When I attempt to update the model's parameters in the following way, it works just fine:...
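
The pattern that usually "just works" is to apply the update inside torch.no_grad(), modify each parameter in place, and clear the gradients afterwards; a sketch with an assumed toy model:

    import torch
    from torch import nn

    model = nn.Linear(4, 1)                 # toy stand-in for the network
    x, y = torch.rand(8, 4), torch.rand(8, 1)
    lr = 0.01

    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()

    with torch.no_grad():                   # keep the update out of the autograd graph
        for p in model.parameters():
            p -= lr * p.grad                # in-place update keeps p a leaf tensor
            p.grad = None                   # reset before the next backward pass
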
1 vote
0 answers
1k views

I'm trying to implement a slightly different version of SGD with PyTorch and test it on some datasets. I need to write a custom optimizer with which to train my model; however, I cannot find any guide ...
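
The usual starting point is to subclass torch.optim.Optimizer and override step(); a minimal plain-SGD skeleton to modify (the hyperparameters are assumptions):

    import torch

    class MySGD(torch.optim.Optimizer):
        def __init__(self, params, lr=0.01):
            super().__init__(params, dict(lr=lr))           # per-group defaults

        @torch.no_grad()
        def step(self):
            for group in self.param_groups:
                for p in group["params"]:
                    if p.grad is not None:
                        p.add_(p.grad, alpha=-group["lr"])  # p -= lr * grad

    # Usage: opt = MySGD(model.parameters(), lr=0.1)
    #        loss.backward(); opt.step(); opt.zero_grad()
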
1 vote
1 answer
214 views

This is my code:

    from sklearn.linear_model import SGDClassifier, LogisticRegression
    from sklearn.metrics import classification_report, accuracy_score
    from sklearn.feature_extraction.text import ...
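
The truncated import is presumably a vectorizer; a typical shape for such a script, assuming TfidfVectorizer and toy data, is:

    from sklearn.feature_extraction.text import TfidfVectorizer  # assumed: the import above is cut off
    from sklearn.linear_model import SGDClassifier
    from sklearn.metrics import classification_report
    from sklearn.pipeline import make_pipeline

    texts = ["good movie", "terrible film", "great plot", "awful acting"]  # toy corpus
    labels = [1, 0, 1, 0]

    clf = make_pipeline(TfidfVectorizer(), SGDClassifier(random_state=0))
    clf.fit(texts, labels)
    print(classification_report(labels, clf.predict(texts)))
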
0 votes
1 answer
386 views

I assumed the "stochastic" in Stochastic Gradient Descent came from the random selection of samples within each batch. But the articles I have read on the topic seem to indicate that SGD ...
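
In the strict textbook sense, the "stochastic" step uses a single randomly drawn sample (mini-batch SGD is the batched generalization); the two gradient estimates contrast like this, on an assumed least-squares problem:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((100, 2))
    y = X @ np.array([1.0, -1.0])
    w = np.zeros(2)
    lr = 0.1

    # Full-batch gradient: deterministic, uses every sample.
    grad_full = X.T @ (X @ w - y) / len(X)

    # Stochastic gradient: one randomly drawn sample, a noisy
    # but unbiased estimate of the full gradient.
    i = rng.integers(len(X))
    grad_stochastic = (X[i] @ w - y[i]) * X[i]

    w -= lr * grad_stochastic
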
