Modern C++ Concurrency in Depth (C++17/20)
What you’ll learn
Learn concurrent programming in C++, including features from the C++17/20 standards.
Learn how to implement useful concurrent data structures and algorithms using the latest C++ features.
Requirements
Basic C++ programming; some knowledge of data structures and algorithms would be helpful.
Description
The C++ programming language can be categorized in many ways. Some call it a general-purpose programming language, some an object-oriented version of C, but I like to place it among the systems programming languages. One characteristic of any systems language, C++ included, is that it should execute faster than languages such as Java. The C++ paradigm took a sharp turn with the C++11 standard, and the most notable difference from previous versions is the new memory model. The memory model is a key part of any language, and the performance of every language feature depends on it. With the new C++ memory model we can exploit the tremendous power of modern multi-core processors. Writing C++ code with a proper memory-reclamation mechanism is a tough task, and writing thread-safe code that harvests the underlying processor's true power is more difficult still. In this course we will discuss C++ concurrency features in depth, including the memory model, and we will implement thread-safe data structures and algorithms in both lock-based and lock-free styles. A proper lock-free implementation of a data structure or algorithm can deliver substantially better performance under contention. The key topics we cover in this course are:
1. Basics of C++ concurrency (threads, mutex, packaged_task, future, async, promise)
2. Lock-based thread-safe implementations of data structures and algorithms
3. The C++ memory model
4. Lock-free implementations of data structures and algorithms
5. C++20 concurrency features
6. Proper memory-reclamation mechanisms for lock-free data structures
7. Design aspects of concurrent code
8. In-depth discussion of thread pools
9. Bonus section on CUDA programming with C and C++
Overview
Section 1: Thread management guide
Lecture 1 Setting up the environment for the course
Lecture 2 Introduction to parallel computing
Lecture 3 Quiz : Parallel programming in general
Lecture 4 How to launch a thread
Lecture 5 Programming exercise 1 : Launching the threads
Lecture 6 Joinability of threads
Lecture 7 Join and detach functions
Lecture 8 How to handle join in exception scenarios
Lecture 9 Programming exercise 2 : Trivially sail a ship model
Lecture 10 How to pass parameters to a thread
Lecture 11 Problematic situations that may arise when passing parameters to a thread
Lecture 12 Transferring ownership of a thread
Lecture 13 Some useful operations on thread
Lecture 14 Programming exercise 3 : Sail a ship with work queues
Lecture 15 Parallel accumulate – algorithm explanation
Lecture 16 Parallel accumulate algorithm implementation
Lecture 17 Thread local storage
Lecture 18 Debugging an application in Visual Studio
Section 2: Thread safe access to shared data and locking mechanisms
Lecture 19 Introduction to locking mechanisms
Lecture 20 Concept of invariant
Lecture 21 Mutexes
Lecture 22 Things to remember when using mutexes
Lecture 23 Thread safe stack implementation : introduction to stack
Lecture 24 Thread safe stack implementation : implementation
Lecture 25 Thread safe stack implementation : race conditions inherent in the interface
Lecture 26 Deadlocks
Lecture 27 Unique locks
Section 3: Communication between threads using condition variables and futures
Lecture 28 Introduction to condition variables
Lecture 29 Details about condition variables
Lecture 30 Thread safe queue implementation : introduction to queue data structure
Lecture 31 Thread safe queue implementation : implementation
Lecture 32 Introduction to futures and async tasks
Lecture 33 Async tasks : detailed discussion
Lecture 34 Parallel accumulate algorithm implementation with async task
Lecture 35 Introduction to packaged_task
Lecture 36 Communication between threads using std::promise
Lecture 37 Retrieving exceptions using std::future
Lecture 38 std::shared_futures
Section 4: Lock based thread safe data structures and algorithm implementation
Lecture 39 Introduction to lock based thread safe data structures and algorithms
Lecture 40 Queue implementation using a linked list
Lecture 41 Thread safe queue implementation
Lecture 42 Parallel STL introduction
Lecture 43 Parallel quick sort algorithm implementation
Lecture 44 Parallel for_each implementation
Lecture 45 Parallel find algorithm implementation with packaged_task
Lecture 46 Parallel find algorithm implementation with async
Lecture 47 Partial sum algorithm introduction
Lecture 48 Partial sum algorithm parallel implementation
Lecture 49 Introduction to matrices
Lecture 50 Parallel Matrix multiplication
Lecture 51 Parallel matrix transpose
Lecture 52 Factors affecting the performance of concurrent code
Section 5: C++20 Concurrency features
Lecture 53 Jthread : Introduction
Lecture 54 Jthread : Our own version implementation
Lecture 55 C++ coroutines : Introduction
Lecture 56 C++ coroutines : resume functions
Lecture 57 C++ coroutines : Generators
Lecture 58 C++ Barriers
Section 6: C++ memory model and atomic operations
Lecture 59 Introduction to atomic operations
Lecture 60 Functionality of std::atomic_flag
Lecture 61 Functionality of std::atomic_bool
Lecture 62 Explanation of compare_exchange functions
Lecture 63 atomic pointers
Lecture 64 General discussion on atomic types
Lecture 65 Important relationships related to atomic operations between threads
Lecture 66 Introduction to memory ordering options
Lecture 67 Discussion on memory_order_seq_cst
Lecture 68 Introduction to instruction reordering
Lecture 69 Discussion on memory_order_relaxed
Lecture 70 Discussion on memory_order_acquire and memory_order_release
Lecture 71 Important aspects of memory_order_acquire and memory_order_release
Lecture 72 Concept of transitive synchronization
Lecture 73 Discussion on memory_order_consume
Lecture 74 Concept of release sequence
Lecture 75 Implementation of spin lock mutex
Section 7: Lock free data structures and algorithms
Lecture 76 Introduction and some terminology
Lecture 77 Stack recap
Lecture 78 Simple lock free thread safe stack
Lecture 79 Stack memory reclaim mechanism using thread counting
Lecture 80 Stack memory reclaim mechanism using hazard pointers
Lecture 81 Stack memory reclaim mechanism using reference counting
Section 8: Thread pools
Lecture 82 Simple thread pool
Lecture 83 Thread pool that allows waiting on submitted tasks
Lecture 84 Thread pool with waiting tasks
Lecture 85 Minimizing contention on work queue
Lecture 86 Thread pool with work stealing
Section 9: Bonus section : Parallel programming in massively parallel devices with CUDA
Lecture 87 Setting up the environment for CUDA
Lecture 88 Elements of CUDA program
Lecture 89 Organization of threads in CUDA program 1
Lecture 90 Organization of threads in CUDA program 2
Lecture 91 Unique index calculation for threads in a grid
Lecture 92 Unique index calculation for threads in a 2D grid
Lecture 93 Unique index calculation for threads in a 2D grid 2
Lecture 94 Timing a CUDA program
Lecture 95 CUDA memory transfer
Lecture 96 Sum array example
Lecture 97 Error handling in a CUDA program
Lecture 98 CUDA device properties
Who this course is for:
Anyone who wants to widen their skills in C++ programming.
Course Information:
Udemy | English | 10h 3m | 3.25 GB
Created by: Kasun Liyanage