
Job Deletion #95


Hello,

Thanks for the great package.

I am having a really strange issue: a job is processed and works, but during the cleanup in Laravel, CallQueuedHandler tries to delete the job, and at that point it has already been deleted on Google Cloud. Do you know if Google Cloud is deleting it?

if (! $job->isDeletedOrReleased()) {
    $job->delete();
}

The above is the offending code: the job is not marked as deleted in Laravel, and when the delete is called for the first time it returns the following error:

{
    "message": "Requested entity was not found.",
    "code": 5,
    "status": "NOT_FOUND",
    "details": []
}

It then kicks into the retry flow, which in turn produces the same error because the job no longer exists on Google Cloud.

Everything works otherwise; it just means every job appears as failed when in fact Google processed it successfully. I wanted to put this in a discussion first rather than an issue, in case it was something on Google's side.


Giving this a further test, Google is deleting the tasks itself on successful completion.

Updating

public function delete(): void
{
    parent::delete();
    $this->cloudTasksQueue->delete($this);
}

to

public function delete(): void
{
    parent::delete();

    if ($this->job['internal']['attempts'] === ($this->maxTries - 1)) {
        $this->cloudTasksQueue->delete($this);
    }
}

seems to resolve the issue, with the delete flow kept in for the retries, as without it the retries are attempted indefinitely.

I'm unsure if this is a Google-side configuration issue, but I can't see anything, so I'm wondering if this is an actual issue that I can put a PR in for?


> Thanks for the great package.

Thank you!

> Giving this a further test, Google is deleting the tasks itself on successful completion.

Indeed! And the Laravel CallQueuedHandler also deletes the job after it's processed successfully. But that should happen before Google can delete it, because at that point the handle-task route hasn't returned a 200 OK status yet and so Google should not have deleted the job already? That's what's so confusing... 😵

[Screenshot, 2023-02-11 at 11:38]

Anyway, thanks for the code sample! I expanded on it a little bit. It (I assume) checks whether the job has reached its maximum number of tries and, if it has, allows it to be deleted. There are two more things that should be checked too:

  • Job max exceptions (max tries does not take exceptions into account)
  • Job timeouts (until when can a job be attempted)

If any of the three is true, the job is marked as failed, so I've just used the hasFailed method. It also checks hasError because in that case the job is released and should also be deleted.

[Screenshot, 2023-02-11 at 12:20]
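For reference, a rough sketch of that idea (not the exact code from the screenshot; hasError() stands in here for whatever package-level flag marks a job that threw, while hasFailed() is Laravel's own job method):

public function delete(): void
{
    parent::delete();

    // Only remove the task from Cloud Tasks once it will no longer be retried:
    // either Laravel marked the job as failed (max tries, max exceptions, or the
    // retry deadline exceeded), or it errored and was released, in which case the
    // original task is obsolete.
    if ($this->hasFailed() || $this->hasError()) {
        $this->cloudTasksQueue->delete($this);
    }
}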

Love to hear your thoughts on this.
If this looks ok, I can release a beta version with this bug fix which you can use to check if it solves the issue.

By the way, what kind of set up are you using to reproduce this issue? Because I haven't been able to reproduce it so far.


Hey thanks for the prompt reply 😃 That makes a lot of sense and certainly something I can try.

Regarding the setup, it is just Google Cloud Run containers calling the queue. I am not too familiar with their configuration, but I don't believe it's anything out of the ordinary.

The jobs are being processed on a separate queue rather than the default queue, something that also caught me out because I originally had issues from not having created the default queue even though I wasn't using it 😄

The only other thing worth highlighting is that I have only tested using Mailables so far, in case anything different is happening inside that flow, but I doubt it.


Hello @AlexJump24 and @i386,

I've released the version v3.4.1-rc1 with two fixes.

The first fix ensures that Cloud Tasks, rather than Laravel, deletes the task (if it was processed successfully). This will help to avoid the duplicate deletion of tasks.

The second fix addresses a bug I found while fixing the other one: the package could theoretically execute a task even if it no longer existed in Cloud Tasks. This issue should now also be resolved.
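To illustrate the second fix (a sketch of the idea only, not the package's actual code; the function name is hypothetical), such a guard could look up the task via the google/cloud-tasks client before handling the job and skip execution when Cloud Tasks no longer knows about it:

use Google\ApiCore\ApiException;
use Google\Cloud\Tasks\V2\CloudTasksClient;

// Sketch only: returns false when the incoming task has already been
// deleted in Cloud Tasks, so the handler can skip executing it.
function taskStillExists(CloudTasksClient $client, string $taskName): bool
{
    try {
        // $taskName is the full resource name:
        // projects/{project}/locations/{location}/queues/{queue}/tasks/{task}
        $client->getTask($taskName);

        return true;
    } catch (ApiException $e) {
        if ($e->getStatus() === 'NOT_FOUND') {
            return false;
        }

        throw $e;
    }
}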

I wasn't able to reproduce the "NOT_FOUND" issue myself, so I can't guarantee that these fixes will work. Please let me know if this version works (or not) :-)

One more thing to note is that to download this version, you will need to set "minimum-stability" to at least "RC" in your composer.json file. Some more info: https://getcomposer.org/doc/04-schema.md#minimum-stability
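For example, the relevant fragment of composer.json could look something like this (the package name and version constraint are shown for illustration):

{
    "minimum-stability": "RC",
    "require": {
        "stackkit/laravel-google-cloud-tasks-queue": "^3.4.1-rc1"
    }
}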


Awesome, thank you. I am unable to test this myself currently, but I will do so at the first opportunity, although I see @i386 has tested this in production and all seems to be working 😃

Answer selected by AlexJump24

Putting this into production now @marickvantuil :)


I've been getting this same error in production; I can replicate it more easily when running long-running chained jobs. So I'll try the RC version and see if it fixes the issue 👍
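For anyone else trying to reproduce it, a minimal sketch of that kind of setup (the job class and connection name are placeholders):

use Illuminate\Support\Facades\Bus;

// Dispatch a chain of long-running jobs to the Cloud Tasks connection.
// LongRunningJob is any queued job whose handle() takes a while, e.g. sleep(60).
Bus::chain([
    new LongRunningJob(),
    new LongRunningJob(),
    new LongRunningJob(),
])->onConnection('cloudtasks')->dispatch();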


I've been using it today on Prod and no more errors.


Yeah this is ready to go. None of those annoying NOT_FOUND errors either :)


Ah that's great to hear! I just tagged v3.4.1. Thanks for testing @AlexJump24 @i386 @Kyon147
