Racket provides concurrency in the form of threads, and it provides a general sync function that can be used to synchronize both threads and other implicit forms of concurrency, such as ports.
Threads run concurrently in the sense that one thread can preempt another without its cooperation, but threads do not run in parallel in the sense of using multiple hardware processors. See Parallelism for information on parallelism in Racket.
To execute a procedure concurrently, use thread. The following example creates two new threads from the main thread:
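A minimal sketch of this (assuming #lang racket; the printed strings are illustrative):

(displayln "This is the main thread")
(thread (lambda () (displayln "This is a new thread.")))
(thread (lambda () (displayln "This is another new thread.")))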
The next example creates a new thread that would otherwise loop forever, but the main thread uses sleep to pause itself for 2.5 seconds, then uses kill-thread to terminate the worker thread:
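A sketch of such a worker loop; the "Working..." message and the 0.2-second pause inside the loop are illustrative:

(define worker
  (thread (lambda ()
            (let loop ()
              (displayln "Working...")
              (sleep 0.2)
              (loop)))))
(sleep 2.5)           ; let the worker run for a while
(kill-thread worker)  ; then terminate it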
If the main thread finishes or is killed, the application exits, even if other threads are still running. A thread can use thread-wait to wait for another thread to finish. Here, the main thread uses thread-wait to make sure the worker thread finishes before the main thread exits. (In DrRacket, the main thread keeps going until the Stop button is clicked, so in DrRacket the thread-wait is not necessary.)
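For example (the loop bound of 100 and the messages are illustrative):

(define worker
  (thread (lambda ()
            (for ([i 100])
              (printf "Working hard... ~a~n" i)))))
(thread-wait worker)
(displayln "Worker finished")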
Each thread has a mailbox for receiving messages. The thread-send function asynchronously sends a message to another thread’s mailbox, while thread-receive returns the oldest message from the current thread’s mailbox, blocking to wait for a message if necessary. In the following example, the main thread sends data to the worker thread to be processed, then sends a 'done message when there is no more data and waits for the worker thread to finish.
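A sketch of this exchange, assuming the worker matches on each received value and treats 'done as the stop signal (match is provided by #lang racket):

(define worker-thread
  (thread (lambda ()
            (let loop ()
              (match (thread-receive)
                [(? number? num)
                 (printf "Processing ~a...~n" num)
                 (loop)]
                ['done (displayln "Done")])))))
(for ([i 10])
  (thread-send worker-thread i))   ; send the data ...
(thread-send worker-thread 'done)  ; ... then the stop message
(thread-wait worker-thread)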
In the next example, the main thread delegates work to multiple arithmetic threads, then waits to receive the results. The arithmetic threads process work items and then send the results to the main thread.
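One possible shape for the arithmetic threads, assuming each work item is a list of two operands plus the thread to send the result to; the sample operations and operands are illustrative:

(define (make-arithmetic-thread operation)
  (thread (lambda ()
            (let loop ()
              (match (thread-receive)
                [(list oper1 oper2 result-thread)
                 (thread-send result-thread (operation oper1 oper2))
                 (loop)])))))

(define addition-thread (make-arithmetic-thread +))
(define subtraction-thread (make-arithmetic-thread -))

(define main-thread (current-thread))
; delegate two work items, then collect the results from the mailbox
(thread-send addition-thread (list 1 2 main-thread))
(thread-send subtraction-thread (list 10 4 main-thread))
(displayln (thread-receive))
(displayln (thread-receive))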
Semaphores facilitate synchronized access to an arbitrary shared resource. Use semaphores when multiple threads must perform non-atomic operations on a single resource.
In the following example, multiple threads print to standard output concurrently. Without synchronization, a line printed by one thread might appear in the middle of a line printed by another thread. By using a semaphore initialized with a count of 1, only one thread will print at a time. The semaphore-wait function blocks until the semaphore’s internal counter is non-zero, then decrements the counter and returns. The semaphore-post function increments the counter so that another thread can unblock and then print.
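A sketch with three printing threads sharing one semaphore (the thread names A, B, C and the ten lines per thread are illustrative):

(define output-semaphore (make-semaphore 1))
(define (make-printing-thread name)
  (thread (lambda ()
            (for ([i 10])
              (semaphore-wait output-semaphore)     ; take exclusive access
              (printf "thread ~a: ~a~n" name i)
              (semaphore-post output-semaphore))))) ; release it
(define threads (map make-printing-thread '(A B C)))
(for-each thread-wait threads)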
The pattern of waiting on a semaphore, working, and posting to the semaphore can also be expressed using call-with-semaphore, which has the advantage of posting to the semaphore if control escapes (e.g., due to an exception):
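Reusing output-semaphore from the sketch above, the printing loop might instead look like this:

(define (make-printing-thread/safe name)
  (thread (lambda ()
            (for ([i 10])
              (call-with-semaphore
               output-semaphore
               (lambda ()
                 (printf "thread ~a: ~a~n" name i)))))))
(for-each thread-wait (map make-printing-thread/safe '(A B C)))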
Semaphores are a low-level technique. Often, a better solution is to restrict resource access to a single thread. For example, synchronizing access to standard output might be better accomplished by having a dedicated thread for printing output.
Channels synchronize two threads while a value is passed from one thread to the other. Unlike a thread mailbox, multiple threads can get items from a single channel, so channels should be used when multiple threads need to consume items from a single work queue.
In the following example, the main thread adds items to a channel using channel-put , while multiple worker threads consume those items using channel-get . Each call to either procedure blocks until another thread calls the other procedure with the same channel. The workers process the items and then pass their results to the result thread via the result-channel.
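A sketch of this arrangement, assuming a 'DONE sentinel tells each worker to stop; the items and thread ids are illustrative:

(define work-channel (make-channel))
(define result-channel (make-channel))

; a dedicated thread prints everything sent on result-channel
(define result-thread
  (thread (lambda ()
            (let loop ()
              (displayln (channel-get result-channel))
              (loop)))))

(define (make-worker thread-id)
  (thread (lambda ()
            (let loop ()
              (define item (channel-get work-channel))
              (case item
                [(DONE)
                 (channel-put result-channel
                              (format "Thread ~a done" thread-id))]
                [else
                 (channel-put result-channel
                              (format "Thread ~a processed ~a" thread-id item))
                 (loop)])))))

(define work-threads (map make-worker '(1 2)))
(for ([item '(A B C D E F G H)])
  (channel-put work-channel item))
(for ([worker work-threads])       ; one DONE per worker
  (channel-put work-channel 'DONE))
(for-each thread-wait work-threads)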
Buffered asynchronous channels are similar to the channels described above, but the “put” operation of asynchronous channels does not block, unless the given channel was created with a buffer limit and the limit has been reached. The asynchronous put operation (async-channel-put) is therefore somewhat similar to thread-send, but unlike thread mailboxes, asynchronous channels allow multiple threads to consume items from a single channel.
In the following example, the main thread adds items to the work channel, which holds a maximum of three items at a time. The worker threads process items from this channel and then send results to the print thread.
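A sketch using racket/async-channel; safer-printf is a small helper (assumed here) that forwards a formatted string to the dedicated printing thread's mailbox:

(require racket/async-channel)

; one thread owns standard output
(define print-thread
  (thread (lambda ()
            (let loop ()
              (display (thread-receive))
              (loop)))))
(define (safer-printf fmt . args)
  (thread-send print-thread (apply format fmt args)))

(define work-channel (make-async-channel 3)) ; holds at most 3 items
(define (make-worker-thread thread-id)
  (thread (lambda ()
            (let loop ()
              (define item (async-channel-get work-channel))
              (safer-printf "Thread ~a processing item: ~a~n" thread-id item)
              (loop)))))

(make-worker-thread 1)
(make-worker-thread 2)
(make-worker-thread 3)

(for ([item '(a b c d e f g h i j k l m)])
  (async-channel-put work-channel item))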
Note that the above example lacks any synchronization to verify that all items were processed. If the main thread were to exit without such synchronization, it is possible that the worker threads would not finish processing some items or that the print thread would not print all of them.
There are other ways to synchronize threads. The sync function allows threads to coordinate via synchronizable events. Many values double as events, allowing a uniform way to synchronize threads using different types. Examples of events include channels, ports, threads, and alarms. This section builds up a number of examples that show how the combination of events, threads, and sync (along with recursive functions) allows you to implement arbitrarily sophisticated communication protocols to coordinate concurrent parts of a program.
In the next example, a channel and an alarm are used as synchronizable events. The workers sync on both so that they can process channel items until the alarm is activated. The channel items are processed, and then results are sent back to the main thread.
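A sketch of this, assuming a three-second alarm and workers that report results (or the alarm) back to the main thread's mailbox; the work items are illustrative:

(define main-thread (current-thread))
(define alarm (alarm-evt (+ (current-inexact-milliseconds) 3000)))
(define work-channel (make-channel))

(define (make-worker-thread thread-id)
  (thread (lambda ()
            (let loop ()
              (define evt (sync work-channel alarm))
              (cond
                [(equal? evt alarm)
                 (thread-send main-thread 'alarm)]
                [else
                 (thread-send main-thread
                              (format "Thread ~a processed ~a" thread-id evt))
                 (loop)])))))

(make-worker-thread 1)
(make-worker-thread 2)
(make-worker-thread 3)

(for ([item '(a b c d e f g h i j k l)])
  (channel-put work-channel item))

; read results until a worker reports the alarm
(let loop ()
  (match (thread-receive)
    ['alarm (displayln "Alarm!")]
    [result (displayln result) (loop)]))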
The next example shows a function for use in a simple TCP echo server. The function uses sync/timeout to synchronize on input from the given port or a message in the thread’s mailbox. The first argument to sync/timeout specifies the maximum number of seconds it should wait on the given events. The read-line-evt function returns an event that is ready when a line of input is available in the given input port. The result of thread-receive-evt is ready when thread-receive would not block. In a real application, the messages received in the thread mailbox could be used for control messages, etc.
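A sketch of such a serve function, using the two-second timeout described below; tcp-abandon-port comes from racket/tcp, and the end-of-file clause is an extra safeguard added here:

(require racket/tcp)

(define (serve in-port out-port)
  (let loop ()
    (define evt (sync/timeout 2
                              (read-line-evt in-port 'any)
                              (thread-receive-evt)))
    (cond
      [(not evt)                     ; timed out
       (displayln "Timed out, exiting")
       (tcp-abandon-port in-port)
       (tcp-abandon-port out-port)]
      [(eof-object? evt)             ; the other side closed the connection
       (tcp-abandon-port in-port)
       (tcp-abandon-port out-port)]
      [(string? evt)                 ; a line of input: echo it back
       (fprintf out-port "~a~n" evt)
       (flush-output out-port)
       (loop)]
      [else                          ; thread-receive-evt was ready
       (printf "Received a message in mailbox: ~a~n" (thread-receive))
       (loop)])))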
The serve function is used in the following example, which starts a server thread and a client thread that communicate over TCP. The client prints three lines to the server, which echoes them back. The client’s copy-port call blocks until EOF is received. The server times out after two seconds, closing the ports, which allows copy-port to finish and the client to exit. The main thread uses thread-wait to wait for the client thread to exit (since, without thread-wait, the main thread might exit before the other threads are finished).
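A sketch of the surrounding server and client, building on the serve function above; the port number 4321 and the three lines sent by the client are illustrative:

(define port-num 4321)

(define (start-server)
  (define listener (tcp-listen port-num))
  (thread (lambda ()
            (define-values (in-port out-port) (tcp-accept listener))
            (serve in-port out-port))))

(start-server)

(define client-thread
  (thread (lambda ()
            (define-values (in-port out-port) (tcp-connect "localhost" port-num))
            (display "first line\nsecond line\nthird line\n" out-port)
            (flush-output out-port)
            ; copy-port will block until EOF is read from in-port
            (copy-port in-port (current-output-port)))))

(thread-wait client-thread)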
Sometimes, you want to attach result behavior directly to the event passed to sync. In the following example, the worker thread synchronizes on three channels, but each channel must be handled differently. Using handle-evt associates a callback with the given event. When sync selects the given event, it calls the callback to generate the synchronization result, rather than using the event’s normal synchronization result. Since the event is handled in the callback, there is no need to dispatch on the return value of sync.
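A sketch with three channels, assuming they carry lists of numbers to add, lists of numbers to multiply, and lists of strings to append; the items put on the channels are illustrative:

(define add-channel (make-channel))
(define multiply-channel (make-channel))
(define append-channel (make-channel))

(define (work)
  (let loop ()
    (sync (handle-evt add-channel
                      (lambda (list-of-numbers)
                        (printf "Sum of ~a is ~a~n"
                                list-of-numbers (apply + list-of-numbers))))
          (handle-evt multiply-channel
                      (lambda (list-of-numbers)
                        (printf "Product of ~a is ~a~n"
                                list-of-numbers (apply * list-of-numbers))))
          (handle-evt append-channel
                      (lambda (list-of-strings)
                        (printf "Concatenation of ~s is ~s~n"
                                list-of-strings
                                (apply string-append list-of-strings)))))
    (loop)))

(define worker (thread work))
(channel-put add-channel '(1 2))
(channel-put multiply-channel '(3 4))
(channel-put append-channel '("a" "b"))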
Synchronizing on the result of handle-evt invokes its callback in tail position with respect to sync, so it is safe to recur inside the callback, as in the following example.
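For instance, a stateful worker can recur inside each handler; the increment/decrement channels and the 'stop message are illustrative:

(define increment-channel (make-channel))
(define decrement-channel (make-channel))
(define stop-channel (make-channel))

(define (work state)
  (printf "Current state: ~a~n" state)
  (sync (handle-evt increment-channel
                    (lambda (n) (work (+ state n))))  ; recur with the new state
        (handle-evt decrement-channel
                    (lambda (n) (work (- state n))))
        (handle-evt stop-channel
                    (lambda (msg) (printf "Stopping: ~a~n" msg)))))

(define worker (thread (lambda () (work 0))))
(channel-put increment-channel 2)
(channel-put decrement-channel 3)
(channel-put increment-channel 4)
(channel-put stop-channel 'stop)
(thread-wait worker)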
The wrap-evt function is like handle-evt, except that its handler is not called in tail position with respect to sync. At the same time, wrap-evt disables break exceptions during its handler’s invocation.
Events also allow you to encode many different communication patterns between multiple concurrent parts of a program. One common such pattern is producer-consumer. Here is a way to implement a variation on it using the above ideas. Generally speaking, these communication patterns are implemented via a server loop that uses sync to wait for any number of different possibilities to occur and then reacts to them, updating some local state.
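A sketch of such a server loop, where produce and consume are the public interface and a hidden thread keeps the list of items produced but not yet consumed; the interaction at the end is illustrative and non-deterministic:

(define producer-chan (make-channel))
(define consumer-chan (make-channel))

; public interface
(define (produce item) (channel-put producer-chan item))
(define (consume) (channel-get consumer-chan))

; private state and server loop
(void
 (thread
  (lambda ()
    ; items holds the items that have been
    ; produced but not yet consumed
    (let loop ([items '()])
      (sync
       ; wait for production
       (handle-evt producer-chan
                   (lambda (i)
                     ; add the new item and go back around the loop
                     (loop (cons i items))))
       ; wait for consumption, but only if we
       ; have something to hand over
       (if (null? items)
           never-evt
           (handle-evt (channel-put-evt consumer-chan (car items))
                       (lambda (_)
                         ; the first item was consumed; drop it
                         ; and go back around the loop
                         (loop (cdr items))))))))))

; an example (non-deterministic) interaction
(void (thread (lambda () (produce 1)))
      (thread (lambda () (produce 2))))
(list (consume) (consume))   ; might evaluate to '(2 1) or '(1 2)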
It is possible to build up more complex synchronization patterns. Here is a silly example where we extend the producer-consumer example with an operation to wait until at least a certain number of items have been produced.
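A sketch of this extension, written as a standalone variant of the previous sketch: the server loop additionally tracks the total number of items ever produced and a list of waiters, each a threshold paired with a channel to reply on. The example interaction with ten producers and consumers is illustrative, and its output is non-deterministic.

(define producer-chan (make-channel))
(define consumer-chan (make-channel))
(define wait-at-least-chan (make-channel))

(define (produce item) (channel-put producer-chan item))
(define (consume) (channel-get consumer-chan))
(define (wait-at-least n)
  ; we send a new channel over to the server loop
  ; so that we can wait here for the reply
  (define reply-chan (make-channel))
  (channel-put wait-at-least-chan (cons n reply-chan))
  (channel-get reply-chan))

(void
 (thread
  (lambda ()
    (let loop ([items '()]
               [total-items-seen 0]   ; the number of items ever produced
               [waiters '()])         ; pairs of threshold and reply channel
      ; for each thread that wants to wait, check whether
      ; enough items have been produced; if so, reply and
      ; drop that waiter from the list
      (define-values (ready pending)
        (partition (lambda (w) (<= (car w) total-items-seen)) waiters))
      (for ([w (in-list ready)])
        (channel-put (cdr w) total-items-seen))
      (sync
       ; wait for production
       (handle-evt producer-chan
                   (lambda (i)
                     (loop (cons i items) (+ total-items-seen 1) pending)))
       ; wait for consumption, but only if we have an item
       (if (null? items)
           never-evt
           (handle-evt (channel-put-evt consumer-chan (car items))
                       (lambda (_)
                         (loop (cdr items) total-items-seen pending))))
       ; wait for threads that want to learn when a certain
       ; number of items has been produced
       (handle-evt wait-at-least-chan
                   (lambda (waiter)
                     (loop items total-items-seen (cons waiter pending)))))))))

; an example (non-deterministic) interaction: ten consumers each
; print a line like "1 -> 3" (consumer id and consumed item), ten
; items are produced, and wait-at-least returns once all ten exist
(define consumers
  (for/list ([id (in-range 10)])
    (thread (lambda () (printf "~a -> ~a~n" id (consume))))))
(for ([i (in-range 10)])
  (produce i))
(wait-at-least 10)   ; blocks until ten items have been produced
(for-each thread-wait consumers)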