I have multiple E2E tests (written in Java) that share login details. At runtime, each test queries a locker API, running on its own dedicated server, for login details.
Below is my implementation of the locker API in Go. It works for me, but I want to make sure there are no bugs.
```go
type service struct {
	lockstatus []string        // stores test_id denoting which test took the lock
	data       []*LoginDetails // stores login details
	mu         []sync.Mutex    // stores locks for "data" slice
}

func (s *service) AcquireLoginDetails(req *Request, resp *Response) error {
	foundLoginDetails := false
	for i := range s.mu {
		// for i, lock := range s.mu <- lock is copied here so will use s.mu[i] directly
		if s.mu[i].TryLock() {
			// skip to next lock if current lock is already taken
			if s.lockstatus[i] != "" {
				s.mu[i].Unlock()
				continue
			} else {
				// use this index to acquire lock for the requested test
				foundLoginDetails = true
				s.lockstatus[i] = req.GetTestID()
				resp.SetUserID(s.data[i].UserID)
				resp.SetPassword(s.data[i].Password)
			}
			s.mu[i].Unlock()
			if foundLoginDetails {
				break // stop searching for further login details
			}
		}
	}
	if !foundLoginDetails {
		return error_explaining_we_failed_to_acquire_lock
	}
	return nil
}
func (s *service) ReleaseLoginDetails(req *Request, resp *Response) error {
	foundLoginDetails := false
	for i, testID := range s.lockstatus {
		if testID == req.GetTestID() {
			s.mu[i].Lock()
			foundLoginDetails = true
			s.lockstatus[i] = ""
			s.mu[i].Unlock()
			break
		}
	}
	if !foundLoginDetails {
		return error_explaining_we_failed_to_release_lock
	}
	return nil
}
```
1 Answer
One problem I see here is that your mutexes protect only the `data` slice, but they should also protect the `lockstatus` slice. Since they don't, you have a race condition where a `lockstatus` item can be written in `AcquireLoginDetails` (with the corresponding mutex acquired) and simultaneously read in `ReleaseLoginDetails` (on the `for` line), without the mutex acquired.
I think it would be slightly cleaner if, instead of three slices of equal length, your `service` object contained one slice of 3-tuples, each item containing its own `lockstatus`, `data`, and mutex:
```go
type service struct {
	items []*serviceItem
}

type serviceItem struct {
	mu         sync.Mutex // protects data and lockstatus
	data       *LoginDetails
	lockstatus string
}
```
Making `items` a `[]*serviceItem` (instead of a `[]serviceItem`) lets you safely write:

```go
for _, item := range s.items { ... }
```

without a lock-copying warning.
The documentation for `Mutex.TryLock` says:

> Note that while correct uses of TryLock do exist, they are rare, and use of TryLock is often a sign of a deeper problem in a particular use of mutexes.

Indeed, you don't need `TryLock` here, as the code holds the lock briefly enough that it's no big deal to block waiting to acquire it.
Finally, as a matter of style, I would return early from these functions on success instead of having "found" vars; and I'd split out a helper function that lets you rely on `defer` for unlocking the mutex instead of having to write `Unlock` in two different code paths.
```go
func (s *service) AcquireLoginDetails(req *Request, resp *Response) error {
	for _, item := range s.items {
		if tryAcquireItem(req, resp, item) {
			return nil
		}
	}
	return errCannotAcquire
}

func tryAcquireItem(req *Request, resp *Response, item *serviceItem) bool {
	item.mu.Lock()
	defer item.mu.Unlock()
	if item.lockstatus != "" {
		return false
	}
	item.lockstatus = req.GetTestID()
	resp.SetUserID(item.data.UserID)
	resp.SetPassword(item.data.Password)
	return true
}
```
(Similarly for `ReleaseLoginDetails`.)