| Time Limit | Memory Limit | Submissions | Accepted | Solvers | Acceptance Rate |
|---|---|---|---|---|---|
| 5 seconds | 512 MB | 0 | 0 | 0 | 0.000% |
Your job is to simulate the operation of a machine called ACM (Asynchronous Calculating Machine) to determine how long each program takes to finish its job. This helps detect unexpected thread behaviors, such as races and deadlocks.
ACM has many processing units and can run many threads simultaneously. It runs a global scheduler, which maintains all the threads, assigns a thread to each processing unit, and manages all semaphores. Each thread has its own thread id, and every thread is in one of three states: RUNNING, READY, and WAITING. The global scheduler has one thread queue, which contains all the threads in the READY state.
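The statement does not fix any particular representation, but a minimal sketch of this scheduler state might look like the following C++. The names (`ThreadState`, `Thread`, `Scheduler`, `readyQueue`, and so on) are illustrative assumptions, not part of the problem.

```cpp
#include <deque>
#include <map>
#include <string>

enum class ThreadState { RUNNING, READY, WAITING };

struct Thread {
    int id = 0;                             // thread id, assigned at creation
    ThreadState state = ThreadState::READY;
    int computeTime = 0;                    // remaining computing time from 'compute' operations
    // the current code block and a program counter would also live here
};

struct Scheduler {
    std::deque<int> readyQueue;             // ids of threads in the READY state
    std::map<std::string, int> semaphores;  // semaphore name -> current value
    std::map<int, Thread> threads;          // thread id -> thread (all living threads)
};
```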
A program consists of one or more code blocks. Each code block has a list of ACM operations, including actual computations and thread creation. Each thread runs the operations of only one of the given code blocks, and it runs them from the top of the block to the bottom, unless it receives a killThread signal from another thread.
The ACM simulation program simulates 9 operations. Here I describe them all. Please note that a word enclosed in [] should be replaced with an actual name or value.
Each parameter is either a name or a number written in digits. A name is an alphabetic string (case-sensitive) and its length is at most 200. Here are some explanations.
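As a small illustration of this rule only, a simulator might classify parameter tokens roughly like this; the helper names `isName` and `isNumber` are hypothetical.

```cpp
#include <cctype>
#include <string>

// True if tok is a valid name: non-empty, alphabetic only, at most 200 characters.
bool isName(const std::string& tok) {
    if (tok.empty() || tok.size() > 200) return false;
    for (unsigned char c : tok)
        if (!std::isalpha(c)) return false;
    return true;
}

// True if tok is written entirely in digits.
bool isNumber(const std::string& tok) {
    if (tok.empty()) return false;
    for (unsigned char c : tok)
        if (!std::isdigit(c)) return false;
    return true;
}
```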
Additionally, you may assume the following constraints.
Threads are assigned to CPUs when
In each time step, the simulator performs round-robin preemption (when the time step is a multiple of the time slice), executes the operations of each running thread, and increments the time step. When a compute operation is executed, the thread gains computing time. A thread whose computing time is greater than 0 cannot do any operations. After each step, the simulator decrements the positive computing time of each running thread.
Operations executed in the same time step are executed in ascending order of CPU id. Within a time step, the CPU id of an operation is greater than or equal to the CPU ids of operations that have already been executed.
The time step starts from 0.
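Putting these rules together, one time step of a simulator could be organized roughly as follows. This builds on the hypothetical `Scheduler`/`Thread` sketch above; the exact rotation order of the round-robin preemption, the stubbed `runOneOperation`, and executing one operation per running thread per step are all assumptions, since the statement does not spell them out here.

```cpp
#include <vector>

// Placeholder for the real operation interpreter (the nine ACM operations are
// not reproduced in this sketch).
void runOneOperation(Scheduler& sched, Thread& t) { (void)sched; (void)t; }

void simulateStep(Scheduler& sched, std::vector<int>& cpuToThread,
                  int step, int timeSlice) {
    // Round-robin preemption, only when the time step is a multiple of the slice.
    if (step % timeSlice == 0) {
        for (int& tid : cpuToThread) {
            if (tid >= 0) {                          // running thread goes back to READY
                sched.threads[tid].state = ThreadState::READY;
                sched.readyQueue.push_back(tid);
            }
            tid = -1;
        }
        for (int& tid : cpuToThread) {
            if (sched.readyQueue.empty()) break;
            tid = sched.readyQueue.front();          // assign the next READY thread
            sched.readyQueue.pop_front();
            sched.threads[tid].state = ThreadState::RUNNING;
        }
    }
    // Operations in the same step run in ascending order of CPU id.
    for (std::size_t cpu = 0; cpu < cpuToThread.size(); ++cpu) {
        int tid = cpuToThread[cpu];
        if (tid < 0) continue;                       // idle CPU
        Thread& t = sched.threads[tid];
        if (t.computeTime > 0) continue;             // still computing: no operation
        runOneOperation(sched, t);
    }
    // After the step, decrement the positive computing time of each running thread.
    for (int tid : cpuToThread)
        if (tid >= 0 && sched.threads[tid].computeTime > 0)
            --sched.threads[tid].computeTime;
}
```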
The input consists of multiple test cases.
Each test case begins with a line that contains two integers separated by a space: the number of time steps of the simulation and the maximum number of threads that ACM is capable of running, respectively. The next line contains a single integer, the number of CPUs available. The following line has an integer that gives the time slice of the scheduler.
A description of the semaphores follows. It begins with the number of semaphores, followed by the information of each semaphore, one per line. The information of a semaphore consists of an alphabetic string (case-sensitive) and an integer, which specify the name of the semaphore and its initial value, respectively.
Finally, the code blocks are given. The number of code blocks comes first, and then each code block is described. The first line of the description of each code block is the name of the block (a case-sensitive alphabetic string), followed by a colon (:). The following lines describe the content of the code block. A code block is an array of the operations described above, and one operation is written per line. You may assume that every code block ends with an 'end' operation.
Input terminates with two zeroes separated by a space character.
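A hedged skeleton for reading this input could look like the sketch below. It only collects the raw data and leaves the interpretation of the operations out; the variable names are illustrative, and reading each operation as one whole line relies on the "one operation per line" rule above.

```cpp
#include <iostream>
#include <limits>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

int main() {
    int steps, maxThreads;
    while (std::cin >> steps >> maxThreads && (steps || maxThreads)) {
        int cpus, timeSlice, numSemaphores;
        std::cin >> cpus >> timeSlice >> numSemaphores;

        std::vector<std::pair<std::string, int>> semaphores(numSemaphores);
        for (auto& s : semaphores)
            std::cin >> s.first >> s.second;         // name and initial value

        int numBlocks;
        std::cin >> numBlocks;
        std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');

        std::vector<std::pair<std::string, std::vector<std::string>>> blocks;
        for (int b = 0; b < numBlocks; ++b) {
            std::string header;
            std::getline(std::cin, header);          // e.g. "codeBlockA:"
            std::string name = header.substr(0, header.find(':'));

            std::vector<std::string> ops;
            while (true) {
                std::string line, op;
                std::getline(std::cin, line);
                ops.push_back(line);                 // one operation per line
                std::istringstream head(line);
                head >> op;
                if (op == "end") break;              // every block ends with 'end'
            }
            blocks.push_back({name, ops});
        }
        // ... simulate (steps, maxThreads, cpus, timeSlice, semaphores, blocks) ...
    }
    return 0;
}
```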
Constraints
For each test case, print its case number on the first line.
If the number of living threads exceeds ACM's capacity, print the information of all threads that terminated before the capacity was exceeded, and then print "<<oops>>". Otherwise, if one or more threads have not finished at the end of the simulation, print the information of all terminated threads, and then print "<<loop>>". Otherwise (i.e., when all threads terminate within the time), just print the information of all threads. All quotes are for clarity.
The information of the threads must be written in increasing order of thread ID, one thread per line. The information of a single thread consists of two integers, the thread ID and the time at which it terminated, separated by a single space character.
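Assuming the simulation has already decided which threads to list and which ending applies, the output stage could be sketched as follows. The function `printCase` and its parameters are hypothetical, and the `Case N:` line follows the sample output below.

```cpp
#include <algorithm>
#include <iostream>
#include <utility>
#include <vector>

// finished: (thread id, termination time) pairs of the threads to be listed
// for this case; the two flags select the <<oops>> / <<loop>> endings.
void printCase(int caseNo, std::vector<std::pair<int, int>> finished,
               bool exceededCapacity, bool unfinishedThreadsRemain) {
    std::cout << "Case " << caseNo << ":\n";
    std::sort(finished.begin(), finished.end());     // increasing order of thread ID
    for (const auto& [tid, time] : finished)
        std::cout << tid << ' ' << time << '\n';
    if (exceededCapacity)
        std::cout << "<<oops>>\n";
    else if (unfinishedThreadsRemain)
        std::cout << "<<loop>>\n";
}
```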
Sample Input

    50 50
    1
    10
    1
    semaphore 1
    1
    codeBlockA:
    loop 2
    compute 10
    next
    end
    50 50
    1
    10
    1
    semaphore 1
    2
    codeBlockA:
    hoge <- forkR codeBlockB
    yield
    compute 1
    killThread hoge
    lock semaphore 1
    compute 1
    end
    codeBlockB:
    compute 1
    lock semaphore 2
    end
    5 5
    1
    3
    1
    semaphore 1
    1
    codeBlockA:
    hoge <- forkI codeBlockA
    compute 1
    end
    5 5
    1
    2
    1
    semaphore 1
    1
    codeBlockA:
    compute 1
    hoge <- forkI codeBlockA
    hoge <- forkI codeBlockA
    end
    0 0
Sample Output

    Case 1:
    1 20
    Case 2:
    1 3
    2 3
    Case 3:
    1 1
    2 2
    3 4
    4 4
    5 5
    <<loop>>
    Case 4:
    1 1
    2 3
    3 3
    <<oops>>