Stack Overflow
2 votes · 1 answer · 55 views

I am using Dask for some processing. The client starts successfully, but I am seeing zero workers. This is how I am creating the client: client = Client("tls://localhost:xxxx") This is the ...
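When a client connects but reports zero workers, it usually means no workers ever registered with the scheduler at that address. A minimal sketch of how to verify worker registration, using a throwaway LocalCluster rather than the question's tls:// address (the cluster here is an assumption for illustration):

```python
from dask.distributed import Client, LocalCluster

# Stand up an in-process cluster so we can see what worker
# registration looks like; in the question's setup the workers
# would instead connect to the tls:// scheduler address.
cluster = LocalCluster(n_workers=2, processes=False)
client = Client(cluster)

# Block until the expected number of workers has registered.
client.wait_for_workers(2)

# scheduler_info() lists the workers the scheduler knows about;
# zero entries here means workers never reached the scheduler.
n_workers = len(client.scheduler_info()["workers"])
print(n_workers)
```

If `wait_for_workers` never returns against the real cluster, the workers are failing to connect (often a TLS certificate or address mismatch) rather than the client.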
2 votes · 0 answers · 63 views

I have the following code that passes an array to a task and submits it to a Dask cluster. The Dask cluster is running in Docker with several Dask workers. Docker starts with: scheduler: docker run -d \ -...
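The docker run lines in the excerpt are cut off; a typical pattern looks like the sketch below. The image (`daskdev/dask`) and the network/container names are assumptions, not taken from the question:

```shell
# Shared network so workers can resolve the scheduler by name.
docker network create dask-net

# Scheduler: expose the scheduler port (8786) and dashboard (8787).
docker run -d --name scheduler --network dask-net \
  -p 8786:8786 -p 8787:8787 \
  daskdev/dask dask-scheduler

# Worker: point it at the scheduler's address on the shared network.
docker run -d --name worker-1 --network dask-net \
  daskdev/dask dask-worker tcp://scheduler:8786
```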
2 votes · 0 answers · 69 views

I am trying to analyze the 30 day standardized precipitation index for a multi-state range of the southeastern US for the year 2016. I'm using xclim to process a direct pull of gridded daily ...
0 votes · 0 answers · 44 views

I am analysing some data using dask distributed on a SLURM cluster. I am also using jupyter notebook. I am changing my codebase frequently and running jobs. Recently, a lot of my jobs started to crash....
0 votes · 0 answers · 66 views

I maintain a production Dask cluster. Every few weeks or so I need to restart the scheduler because it becomes progressively slower over time. The dashboard can take well over a minute to display the &...
0 votes · 0 answers · 28 views

Using Python streamz and dask, I want to distribute data from generated text files to threads, which will then process every new line written inside those files. from streamz import Stream ...
1 vote · 1 answer · 50 views

I already have code using a thread pool, tkinter, and matplotlib to process signals which are getting written to a file from another process. The synchronization between the two processes is by reading ...
0 votes · 0 answers · 42 views

import os from dask_cloudprovider.gcp import GCPCluster os.environ["GOOGLE_APPLICATION_CREDENTIALS"]=r'C:\Users\Me\Documents\credentials\compute_engine_default_key\test-project123-...
0 votes · 1 answer · 85 views

I am trying to deploy a Dask cluster with 0 workers and 1 scheduler, and to scale the workers up as the workload requires. I found that adaptive deployment is the correct way to do this; I am using ...
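Adaptive scaling is enabled by calling `adapt()` on the cluster object. A minimal local sketch, assuming a LocalCluster stand-in for the question's actual deployment:

```python
from dask.distributed import Client, LocalCluster

# Start with zero workers, as in the question.
cluster = LocalCluster(n_workers=0, processes=False)

# Let the scheduler scale between 0 and 2 workers based on load.
cluster.adapt(minimum=0, maximum=2)

client = Client(cluster)

# Submitting work triggers the adaptive loop to spawn a worker;
# result() blocks until the task has run on it.
future = client.submit(sum, [1, 2, 3])
print(future.result())
```

The same `adapt(minimum=..., maximum=...)` call works on other cluster classes (e.g. from dask-jobqueue or dask-kubernetes), which is what makes it the usual route for scale-from-zero deployments.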
1 vote · 0 answers · 99 views

I am new to Dask. While attempting to run concat on a list of DataFrames, I noticed it is consuming more time, resources, and tasks than expected. Here are the details of my run: Scheduler (same as ...
0 votes · 1 answer · 252 views

I am trying to run a Dask Scheduler and Workers on a remote cluster using SLURMRunner from dask-jobqueue. I want to bind the Dask dashboard to 0.0.0.0 (so it’s accessible via port forwarding) and ...
0 votes · 0 answers · 115 views

I'm trying out some things with Dask for the first time, and while I had it running a few weeks ago, I now find that I can't get the LocalCluster initiated. I've cut it off after running 30 minutes at ...
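A LocalCluster that hangs at startup is often the classic multiprocessing pitfall: on platforms that spawn subprocesses, cluster creation must sit under an `if __name__ == "__main__":` guard, or use thread-based workers instead. A sketch of both workarounds:

```python
from dask.distributed import Client, LocalCluster

def main():
    # processes=False uses in-process threaded workers, sidestepping
    # the subprocess-spawn hang entirely; with the default
    # processes=True, the __main__ guard below is what prevents it.
    cluster = LocalCluster(n_workers=1, processes=False)
    client = Client(cluster)
    return client.submit(lambda: 42).result()

if __name__ == "__main__":
    print(main())
```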
0 votes · 0 answers · 125 views

I am trying to get this code to work and then use it to train various models on two GPUs: from dask_cuda import LocalCUDACluster from dask.distributed import Client if __name__ == "__main__"...
1 vote · 1 answer · 64 views

I am trying to learn dask, and have created the following toy example of a delayed pipeline:

+-----+  +-----+  +-----+
| baz +--+ bar +--+ foo |
+-----+  +-----+  +-----+

So baz has a dependency on ...
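The foo → bar → baz chain in the diagram can be expressed with `dask.delayed`; a minimal sketch, where the function bodies are made up for illustration:

```python
from dask import delayed

@delayed
def foo():
    return 1

@delayed
def bar(x):
    # bar depends on foo's output
    return x + 1

@delayed
def baz(x):
    # baz depends on bar's output
    return x * 2

# Nothing runs yet: calling the decorated functions only builds
# the task graph. Execution happens at .compute().
result = baz(bar(foo()))
print(result.compute())
```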
0 votes · 1 answer · 83 views

I am running tasks using client.submit thus: from dask.distributed import Client, get_client, wait, as_completed # other imports zip_and_upload_futures = [ client.submit(zip_and_upload, id, path, ...
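A self-contained sketch of the submit / as_completed pattern the excerpt uses, with a toy function standing in for the question's zip_and_upload:

```python
from dask.distributed import Client, LocalCluster, as_completed

def process(i):
    # Stand-in for the question's zip_and_upload task.
    return i * i

cluster = LocalCluster(n_workers=2, processes=False)
client = Client(cluster)

futures = [client.submit(process, i) for i in range(4)]

# as_completed yields futures in completion order, not submit
# order, so sort if a deterministic ordering is needed.
results = sorted(f.result() for f in as_completed(futures))
print(results)
```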
