# Parallel-Concurrent-and-Distributed-Programming-in-Java
Parallel, Concurrent, and Distributed Programming in Java | Coursera
## Parallel Programming in Java

<b><u>Week 1 : Task Parallelism</u></b>

- Demonstrate task parallelism using Async/Finish constructs
- Create task-parallel programs using Java's Fork/Join Framework
- Interpret the Computation Graph abstraction for task-parallel programs
- Evaluate the Multiprocessor Scheduling problem using Computation Graphs
- Assess sequential bottlenecks using Amdahl's Law

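
The fork/join style listed above can be sketched with a `RecursiveTask` that sums an array: the left half is forked as an asynchronous subtask while the right half is computed directly, and `join` waits for the forked result. The threshold of 1,000 elements is an arbitrary illustrative cutoff.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Fork/Join array sum: fork the left half asynchronously, compute the
// right half in the current task, then join the forked subtask.
public class ForkJoinSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000; // illustrative cutoff
    private final long[] a;
    private final int lo, hi;

    public ForkJoinSum(long[] a, int lo, int hi) {
        this.a = a; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {           // small enough: sum sequentially
            long s = 0;
            for (int i = lo; i < hi; i++) s += a[i];
            return s;
        }
        int mid = (lo + hi) >>> 1;
        ForkJoinSum left = new ForkJoinSum(a, lo, mid);
        left.fork();                          // async: left half runs in parallel
        long right = new ForkJoinSum(a, mid, hi).compute();
        return left.join() + right;           // finish: wait for the left half
    }

    public static long sum(long[] a) {
        return ForkJoinPool.commonPool().invoke(new ForkJoinSum(a, 0, a.length));
    }

    public static void main(String[] args) {
        long[] a = new long[10_000];
        for (int i = 0; i < a.length; i++) a[i] = i + 1;
        System.out.println(ForkJoinSum.sum(a)); // 50005000
    }
}
```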
<b><u>Week 2 : Functional Parallelism</u></b>

- Demonstrate functional parallelism using the Future construct
- Create functional-parallel programs using Java's Fork/Join Framework
- Apply the principle of memoization to optimize functional parallelism
- Create functional-parallel programs using Java Streams
- Explain the concepts of data races and functional/structural determinism

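
Functional parallelism with the Future construct might look like the following sketch: two independent, side-effect-free partial sums are submitted to an executor, and `get()` blocks until each result is ready.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Futures over side-effect-free computations: each half of the sum is an
// independent task whose single result is read once via get().
public class FutureSum {
    public static long sum(long[] a) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            int mid = a.length / 2;
            Future<Long> left  = pool.submit(() -> partial(a, 0, mid));
            Future<Long> right = pool.submit(() -> partial(a, mid, a.length));
            return left.get() + right.get(); // blocks until each half is done
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    private static long partial(long[] a, int lo, int hi) {
        long s = 0;
        for (int i = lo; i < hi; i++) s += a[i];
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sum(new long[]{1, 2, 3, 4})); // 10
    }
}
```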
<b><u>Week 3 : Loop Parallelism</u></b>

- Create programs with loop-level parallelism using the Forall and Java Stream constructs
- Evaluate loop-level parallelism in a matrix-multiplication example
- Examine the barrier construct for parallel loops
- Evaluate parallel loops with barriers in an iterative-averaging example
- Apply the concept of iteration grouping/chunking to improve the performance of parallel loops

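
Loop-level parallelism in the style of the matrix-multiplication example can be sketched with a parallel `IntStream` acting as a forall over output rows; each iteration writes a distinct index, so there is no data race.

```java
import java.util.stream.IntStream;

// Parallel loop (forall) over matrix rows: each row of the output is
// independent, so the outer loop runs as a parallel IntStream.
public class ParallelMatVec {
    public static double[] multiply(double[][] m, double[] v) {
        double[] out = new double[m.length];
        IntStream.range(0, m.length).parallel().forEach(i -> {
            double s = 0;
            for (int j = 0; j < v.length; j++) s += m[i][j] * v[j];
            out[i] = s; // each iteration writes a distinct index: no race
        });
        return out;
    }

    public static void main(String[] args) {
        double[] r = multiply(new double[][]{{1, 2}, {3, 4}}, new double[]{1, 1});
        System.out.println(r[0] + " " + r[1]); // 3.0 7.0
    }
}
```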

<b><u>Week 4 : Data Flow Synchronization and Pipelining</u></b>

- Create split-phase barriers using Java's Phaser construct
- Create point-to-point synchronization patterns using Java's Phaser construct
- Evaluate parallel loops with point-to-point synchronization in an iterative-averaging example
- Analyze pipeline parallelism using the principles of point-to-point synchronization
- Interpret data flow parallelism using the data-driven-task construct

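
A split-phase barrier with Java's `Phaser` can be sketched as follows: each worker signals arrival with `arrive()`, may do further local work, and only then waits with `awaitAdvance()`. The worker and iteration counts are arbitrary illustrative values.

```java
import java.util.concurrent.Phaser;

// Split-phase barrier between iterations: arrive() signals completion
// without blocking; awaitAdvance() later waits for all other workers.
public class PhaserDemo {
    public static int[] run(int workers, int iterations) {
        Phaser phaser = new Phaser(workers); // all workers pre-registered
        int[] counts = new int[workers];
        Thread[] threads = new Thread[workers];
        for (int w = 0; w < workers; w++) {
            final int id = w;
            threads[w] = new Thread(() -> {
                for (int it = 0; it < iterations; it++) {
                    counts[id]++;                 // work before the barrier
                    int phase = phaser.arrive();  // split phase: signal, no wait
                    // ...work not depending on other workers could go here...
                    phaser.awaitAdvance(phase);   // wait before next iteration
                }
            });
            threads[w].start();
        }
        try { for (Thread t : threads) t.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(run(4, 10)[0]); // 10
    }
}
```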
## Concurrent Programming in Java
<b><u>Week 1 : Threads and Locks</u></b>

- Understand the role of Java threads in building concurrent programs
- Create concurrent programs using Java threads and the synchronized statement (structured locks)
- Create concurrent programs using Java threads and lock primitives in the java.util.concurrent library (unstructured locks)
- Analyze programs with threads and locks to identify liveness and related concurrency bugs
- Evaluate different approaches to solving the classical Dining Philosophers Problem

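
Structured locking with the `synchronized` statement can be sketched as a shared counter incremented by several threads; the monitor makes each read-modify-write atomic, so no increments are lost.

```java
// Structured locks: synchronized methods acquire and release the object's
// monitor automatically, making each increment atomic.
public class SyncCounter {
    private long count = 0;

    public synchronized void increment() { count++; }
    public synchronized long get() { return count; }

    public static long runThreads(int threads, int perThread) {
        SyncCounter c = new SyncCounter();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int k = 0; k < perThread; k++) c.increment();
            });
            ts[i].start();
        }
        try { for (Thread t : ts) t.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return c.get();
    }

    public static void main(String[] args) {
        System.out.println(runThreads(4, 100_000)); // 400000
    }
}
```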
<b><u>Week 2 : Critical Sections and Isolation</u></b>

- Create concurrent programs with critical sections to coordinate accesses to shared resources
- Create concurrent programs with object-based isolation to coordinate accesses to shared resources with more overlap than critical sections
- Evaluate different approaches to implementing the Concurrent Spanning Tree algorithm
- Create concurrent programs using Java's atomic variables
- Evaluate the impact of read vs. write operations on concurrent accesses to shared resources

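
Java's atomic variables give the same lost-update protection as a critical section without an explicit lock; a minimal sketch using `AtomicLong`:

```java
import java.util.concurrent.atomic.AtomicLong;

// Atomic variables: incrementAndGet is a single atomic read-modify-write,
// so concurrent threads never lose an update, with no lock acquired.
public class AtomicDemo {
    public static long runThreads(int threads, int perThread) {
        AtomicLong counter = new AtomicLong();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int k = 0; k < perThread; k++)
                    counter.incrementAndGet(); // atomic, lock-free update
            });
            ts[i].start();
        }
        try { for (Thread t : ts) t.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return counter.get();
    }

    public static void main(String[] args) {
        System.out.println(runThreads(4, 100_000)); // 400000
    }
}
```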
<b><u>Week 3 : Actors</u></b>

- Understand the Actor model for building concurrent programs
- Create simple concurrent programs using the Actor model
- Analyze an Actor-based implementation of the Sieve of Eratosthenes program
- Create Actor-based implementations of the Producer-Consumer pattern
- Create Actor-based implementations of concurrent accesses on a bounded resource

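
A producer-consumer pattern in the spirit of the actor model can be sketched with a bounded `BlockingQueue` standing in for an actor's mailbox (dedicated actor libraries work differently); the consumer owns its state and only reacts to messages, and the `-1` poison pill is an illustrative stop message.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Actor-flavored producer-consumer: the consumer thread owns its running
// total and processes messages one at a time from a bounded mailbox.
public class ProducerConsumer {
    public static long run(int items) {
        BlockingQueue<Integer> mailbox = new ArrayBlockingQueue<>(16);
        long[] total = new long[1]; // only the consumer thread writes this
        Thread consumer = new Thread(() -> {
            try {
                int msg;
                while ((msg = mailbox.take()) != -1) total[0] += msg;
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        consumer.start();
        try {
            for (int i = 1; i <= items; i++) mailbox.put(i); // blocks when full
            mailbox.put(-1);                                 // poison pill: stop
            consumer.join();
        } catch (InterruptedException e) { throw new RuntimeException(e); }
        return total[0];
    }

    public static void main(String[] args) {
        System.out.println(run(100)); // 5050
    }
}
```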
<b><u>Week 4 : Concurrent Data Structures</u></b>

- Understand the principle of optimistic concurrency in concurrent algorithms
- Understand implementation of concurrent queues based on optimistic concurrency
- Understand linearizability as a correctness condition for concurrent data structures
- Create concurrent Java programs that use the java.util.concurrent.ConcurrentHashMap library
- Analyze a concurrent algorithm for computing a Minimum Spanning Tree of an undirected graph

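
A minimal sketch of `java.util.concurrent.ConcurrentHashMap`: several threads build one shared word-frequency map, relying on `merge` for atomic per-key updates so no external locking is needed.

```java
import java.util.concurrent.ConcurrentHashMap;

// ConcurrentHashMap.merge performs an atomic per-key read-modify-write,
// so many threads can update one shared map safely.
public class ConcurrentCount {
    public static ConcurrentHashMap<String, Integer> count(String[] words, int threads) {
        ConcurrentHashMap<String, Integer> freq = new ConcurrentHashMap<>();
        Thread[] ts = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int id = t;
            ts[t] = new Thread(() -> {
                // each thread takes an interleaved slice of the input
                for (int i = id; i < words.length; i += threads)
                    freq.merge(words[i], 1, Integer::sum); // atomic update
            });
            ts[t].start();
        }
        try { for (Thread th : ts) th.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return freq;
    }

    public static void main(String[] args) {
        System.out.println(count(new String[]{"a", "b", "a", "a"}, 2));
    }
}
```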
## Distributed Programming in Java
<b><u>Week 1 : Distributed Map Reduce</u></b>

- Explain the MapReduce paradigm for analyzing data represented as key-value pairs
- Apply the MapReduce paradigm to programs written using the Apache Hadoop framework
- Create MapReduce programs using the Apache Spark framework
- Explain the TF-IDF statistic used in data mining and how it can be computed using the MapReduce paradigm
- Create an implementation of the PageRank algorithm using the Apache Spark framework

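
The MapReduce word-count pattern can be sketched without any Hadoop or Spark dependency using plain Java streams: map each line to words, group by key (the shuffle), and reduce by counting.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

// MapReduce word count in miniature: flatMap is the map phase emitting
// words, groupingBy is the shuffle, and counting() is the reduce phase.
public class WordCount {
    public static Map<String, Long> count(String[] lines) {
        return Arrays.stream(lines).parallel()
                .flatMap(line -> Arrays.stream(line.split("\\s+"))) // map
                .collect(Collectors.groupingBy(                     // shuffle
                        w -> w, Collectors.counting()));            // reduce
    }

    public static void main(String[] args) {
        System.out.println(count(new String[]{"to be or", "not to be"}));
    }
}
```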
<b><u>Week 2 : Client-Server Programming</u></b>

- Create distributed client-server applications using sockets
- Demonstrate different approaches to serialization and deserialization of data structures for distributed programming
- Recall the use of remote method invocations as a higher-level primitive for distributed programming (compared to sockets)
- Evaluate the use of multicast sockets as a generalization of sockets
- Create distributed publish-subscribe applications using the Apache Kafka framework

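
A minimal client-server round trip over loopback sockets: the server echoes one line back in upper case, and port 0 asks the OS for any free port. The upper-casing is just an illustrative "service".

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.io.UncheckedIOException;
import java.net.ServerSocket;
import java.net.Socket;

// One-shot echo server and client on the loopback interface.
public class EchoDemo {
    public static String roundTrip(String message) {
        try (ServerSocket server = new ServerSocket(0)) { // 0 = any free port
            Thread serverThread = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println(in.readLine().toUpperCase()); // serve one request
                } catch (IOException ignored) { }
            });
            serverThread.start();
            try (Socket client = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println(message);      // request
                return in.readLine();      // reply
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello")); // HELLO
    }
}
```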
<b><u>Week 3 : Message Passing</u></b>

- Create distributed applications using the Single Program Multiple Data (SPMD) model
- Create message-passing programs using point-to-point communication primitives in MPI
- Identify message ordering and deadlock properties of MPI programs
- Evaluate the advantages of non-blocking communication relative to standard blocking communication primitives
- Explain collective communication as a generalization of point-to-point communication

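
Real MPI programs in Java go through a binding such as MPJ Express; as a dependency-free sketch, point-to-point communication can be imitated with threads standing in for ranks and one `BlockingQueue` per rank as its receive buffer. Here a token travels around a ring, each rank adding its id before sending it on.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// SPMD-style ring: inbox[r] plays the role of rank r's receive buffer;
// put/take stand in for blocking MPI send/recv.
public class RingPass {
    public static int ring(int ranks) {
        @SuppressWarnings("unchecked")
        BlockingQueue<Integer>[] inbox = new BlockingQueue[ranks];
        for (int i = 0; i < ranks; i++) inbox[i] = new ArrayBlockingQueue<>(1);
        Thread[] ts = new Thread[ranks];
        for (int r = 1; r < ranks; r++) {
            final int rank = r;
            ts[r] = new Thread(() -> {
                try {
                    int token = inbox[rank].take();              // recv from rank-1
                    inbox[(rank + 1) % ranks].put(token + rank); // send to rank+1
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
            ts[r].start();
        }
        try {
            inbox[1].put(0);             // rank 0 injects the token
            int total = inbox[0].take(); // ...and receives the final sum
            for (int r = 1; r < ranks; r++) ts[r].join();
            return total;                // 1 + 2 + ... + (ranks - 1)
        } catch (InterruptedException e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        System.out.println(ring(4)); // 6
    }
}
```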

<b><u>Week 4 : Combining Distribution and Multithreading</u></b>

- Distinguish processes and threads as basic building blocks of parallel, concurrent, and distributed Java programs
- Create multithreaded servers in Java using threads and processes
- Demonstrate how multithreading can be combined with message-passing programming models like MPI
- Analyze how the actor model can be used for distributed programming
- Assess how the reactive programming model can be used for distributed programming

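
Reactive-style composition for distributed calls can be sketched with `CompletableFuture`: two simulated remote calls run concurrently and their results are combined without explicit threads or locks. The squaring supplier is a stand-in for a real network request.

```java
import java.util.concurrent.CompletableFuture;

// Reactive composition: supplyAsync launches each "remote" call on the
// common pool; thenCombine merines the two results when both complete.
public class ReactiveDemo {
    static CompletableFuture<Integer> remoteCall(int value) {
        // stand-in for a network call that eventually yields value^2
        return CompletableFuture.supplyAsync(() -> value * value);
    }

    public static int combined() {
        return remoteCall(3)
                .thenCombine(remoteCall(4), Integer::sum) // 9 + 16
                .join();
    }

    public static void main(String[] args) {
        System.out.println(combined()); // 25
    }
}
```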
