I am trying to learn Go concurrency patterns and best practices on my own, and I have invented a simple task for myself: process a slice of integers in parallel by dividing it into sub-slices and processing each one with a goroutine. Below is my current solution; I am looking for a constructive code review. If you have any questions, or if clarifications or updates are needed, please leave a comment.
// TASK:
// given the slice of integers, multiply each element by 10
// apply concurrency to solve this task by splitting slice into subslices and feeding them to goroutines
// EXAMPLE:
// input -> slice := []int{ 1, 2, 3, 4, 5, 6, 7, 8 }
// output -> slice := []int{ 10, 20, 30, 40, 50, 60, 70, 80 }
package main

import (
    "fmt"
    "sync"
    "time"
)

func simulate_close_signal(quit chan bool, wg *sync.WaitGroup) {
    defer close(quit)
    defer wg.Done() // maybe I can remove this?
    for {
        select {
        case <-time.After(1 * time.Second):
            fmt.Println("Close signal sent")
            return
        }
    }
}

func process_subslice(quit chan bool,
    wg *sync.WaitGroup,
    original_slice []int,
    subslice_starting_index int,
    subslice_ending_index int,
    timeout func()) {
    defer wg.Done()
    for {
        select {
        case <-quit:
            fmt.Println("Goroutine: received request to close, quitting...")
            return
        default:
            // do I need a mutex here? I think I don't, since each goroutine processes an independent subslice...
            original_slice[subslice_starting_index] = original_slice[subslice_starting_index] * 10
            subslice_starting_index++
            if subslice_starting_index >= subslice_ending_index {
                return
            }
            timeout() // added so I can test the close signal
        }
    }
}

func main() {
    slice := []int{1, 2, 3, 4, 5, 6, 7, 8}
    fmt.Println("Starting slice: ", slice)
    quit := make(chan bool)
    wg := sync.WaitGroup{}
    for i := 0; i < 2; i++ {
        wg.Add(1)
        go process_subslice(quit,
            &wg,
            slice,
            i*4,     // this way the slice is divided
            (1+i)*4, // into two subslices with indexes [0, 3] and [4, 7]
            func() { time.Sleep(1 * time.Second) }) // sleep so the close signal fires mid-run; pass an empty func instead to test normal execution
    }
    wg.Add(1)
    go simulate_close_signal(quit, &wg)
    wg.Wait()
    fmt.Println("Processed slice: ", slice)
}
1 Answer
Looks fine. As you wrote, you don't need simulate_close_signal to be in the wait group: the wg.Wait call already synchronises main's later access to slice with the other two goroutines that write to it.
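A minimal sketch of that shape (your program condensed, with the timeout hook dropped for brevity; the close-signal goroutine never touches the wait group):

package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    slice := []int{1, 2, 3, 4, 5, 6, 7, 8}
    quit := make(chan bool)
    var wg sync.WaitGroup

    for i := 0; i < 2; i++ {
        wg.Add(1)
        go func(start, end int) {
            defer wg.Done()
            for j := start; j < end; j++ {
                select {
                case <-quit:
                    return
                default:
                    slice[j] *= 10
                }
            }
        }(i*4, (1+i)*4)
    }

    // Deliberately NOT in the wait group: main only needs to wait for the
    // two goroutines that write to slice, and wg.Wait below does exactly that.
    go func() {
        time.Sleep(1 * time.Second)
        fmt.Println("Close signal sent")
        close(quit)
    }()

    wg.Wait()
    fmt.Println("Processed slice: ", slice)
}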
Other than that, you might as well make the quit channel a chan struct{}: no values are ever sent on it, its only purpose is to be closed, so you can make that explicit.
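For example, a tiny standalone sketch of close-only signalling with chan struct{}:

package main

import (
    "fmt"
    "time"
)

func main() {
    quit := make(chan struct{}) // no values are ever sent; closing is the signal
    go func() {
        time.Sleep(1 * time.Second)
        close(quit)
    }()
    <-quit // a receive on a closed channel returns immediately
    fmt.Println("received close signal")
}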
Dividing up the slice like this is of course doable, though I'd really suggest you simply create an actual subslice instead of passing around the indexes; that's what slices are for (they'll all share the underlying array, just point to different parts of it). Alternatively, you could create an actual array with fixed dimensions if you don't need the slice capabilities (e.g. [8]int{1, 2, 3, 4, 5, 6, 7, 8}).
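A minimal sketch of your task done with real subslices (using a hypothetical process helper, and dropping the quit machinery to keep it short):

package main

import (
    "fmt"
    "sync"
)

// process multiplies every element of its subslice by 10. The subslice
// shares its underlying array with the original slice, so the writes
// are visible in main after wg.Wait.
func process(sub []int, wg *sync.WaitGroup) {
    defer wg.Done()
    for i := range sub {
        sub[i] *= 10
    }
}

func main() {
    slice := []int{1, 2, 3, 4, 5, 6, 7, 8}
    var wg sync.WaitGroup
    for i := 0; i < 2; i++ {
        wg.Add(1)
        go process(slice[i*4:(1+i)*4], &wg) // subslices slice[0:4] and slice[4:8]
    }
    wg.Wait()
    fmt.Println(slice) // [10 20 30 40 50 60 70 80]
}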
Edit: Mind you, now I actually ran it in the playground. The version you posted consistently gives [10 20 3 4 50 60 7 8] as the result? Have you tried it before posting?
The non-blocking read from quit doesn't work quite as you'd like it to. If you print something in the multiplication part, you'll notice it enters that block twice; that can happen, as nothing is being read from the quit channel yet. Normally I'd say you can simply drop the select, since it serves no purpose here. Alternatively, you'd have to deal with the default case being executed once, twice, more, or even never!
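To see that, here's a small standalone sketch (the timings are arbitrary) where the default case runs a scheduling-dependent number of times before quit is observed:

package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    slice := []int{1, 2, 3, 4, 5, 6, 7, 8}
    quit := make(chan struct{})
    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        defer wg.Done()
        for i := range slice {
            select {
            case <-quit:
                fmt.Println("quit observed after", i, "elements")
                return
            default:
                slice[i] *= 10
                time.Sleep(100 * time.Millisecond) // slow the loop down so quit can win
            }
        }
    }()
    time.Sleep(250 * time.Millisecond)
    close(quit)
    wg.Wait()
    fmt.Println(slice) // the length of the multiplied prefix can vary from run to run
}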
- AlwaysLearningNewStuff (Aug 9, 2021 at 9:00): Thank you for answering! Regarding your edit: it seemed OK to me, since there is a 1-second period before the goroutines receive the termination signal. I thought that in that period both routines had time to start and process part of their subslice. The default case being executed an arbitrary number of times (or never) is fine in this scenario.
- AlwaysLearningNewStuff (Aug 12, 2021 at 13:29): Upvoted, added bounty and officially accepted. Thanks.