Concurrency vs. Parallelism
Let's learn the definition first.
Concurrency means making progress on more than one task at a time.
Parallelism means doing lots of work by dividing it up among multiple threads (or cores) that run simultaneously.
Let's talk about concurrency to some extent:
A CPU core can handle one task at a time: a single-core processor executes the instructions of one thread at any given moment. So what happens if we throw multiple tasks at it at the same time? By that theory, it should crash 😝. But something else is cooking inside. When multiple tasks arrive simultaneously, each is assigned to a thread, and the CPU switches between those threads to complete them. This is called context switching. Modern CPUs are so fast that we, the consumers, don't even notice the delay; to the naked eye, the computer is multitasking.

We humans think we multitask too (doing different tasks at the same time), right? Or do we? Think about eating while talking: we believe we are doing two tasks at once, but we switch context so fast that we don't even notice. That is multitasking, or in software-engineering terms, doing things concurrently. So how does the CPU decide which thread to pick and which to release? It accomplishes this with a scheduling algorithm.
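To make the idea concrete, here is a minimal sketch in Java (the demo mentioned later is in C#, but the concept is identical). Two tasks run on two threads; even on a single core, the OS scheduler finishes both by context-switching between them. The class and method names (`ContextSwitchDemo`, `runTwoTasks`) are made up for this illustration.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ContextSwitchDemo {
    // Runs two independent tasks on separate threads. On a single core,
    // the OS scheduler interleaves them via context switching; we never
    // control (or need to know) the exact interleaving.
    public static int runTwoTasks() throws InterruptedException {
        AtomicInteger stepsDone = new AtomicInteger();
        Runnable task = () -> {
            for (int i = 0; i < 3; i++) {
                stepsDone.incrementAndGet(); // one "unit of work"
                Thread.yield();              // invite a context switch
            }
        };
        Thread a = new Thread(task, "task-A");
        Thread b = new Thread(task, "task-B");
        a.start();
        b.start();
        a.join(); // wait for both tasks to complete
        b.join();
        return stepsDone.get(); // 3 steps from each task = 6
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("total steps completed: " + runTwoTasks());
    }
}
```

Both tasks finish regardless of how the scheduler interleaved their steps; that interleaving is exactly the context switching described above.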
If you are a web developer, you are already dealing with concurrency daily; if you haven't realized it until now, that's okay. Web servers serve their requests concurrently: many requests arrive at once, and each is fulfilled without waiting for the others to finish.
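A rough sketch of that idea, again in Java: each incoming "request" is handed to a worker thread from a pool instead of being processed strictly one after another. `handleRequest` and `serve` are hypothetical stand-ins, not a real server API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TinyServerDemo {
    // Stand-in for real request processing (parsing, DB calls, etc.).
    static String handleRequest(int requestId) {
        return "response-" + requestId;
    }

    // Hands every request to a worker thread from a fixed-size pool,
    // the way a real web server avoids serving requests one by one.
    public static List<String> serve(List<Integer> requestIds) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Callable<String>> tasks = new ArrayList<>();
            for (int id : requestIds) {
                tasks.add(() -> handleRequest(id));
            }
            // invokeAll runs the tasks concurrently and returns futures
            // in the same order the tasks were submitted.
            List<String> responses = new ArrayList<>();
            for (Future<String> f : pool.invokeAll(tasks)) {
                responses.add(f.get());
            }
            return responses;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(serve(List.of(1, 2, 3)));
    }
}
```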
Everything has its own pros and cons, though. Yes, concurrency is very helpful for web servers, but it has downsides too, the biggest being multiple threads sharing the same resources. There are many ways to mitigate this, but we have to keep those solutions in mind when designing an application. (Optionally, you can check this out if you are not sure about thread-safety.)
It's time for parallelism now:
Parallel processing is used to engage multiple cores to finish a task as fast as possible. With a single-core processor, concurrency is enough, because context switching lets it execute multiple tasks (instruction streams). But in a multi-core environment, parallelism comes in handy for performance: it can use as many cores as are available to execute instructions as fast as possible.
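A small Java sketch of engaging multiple cores: a parallel stream splits a range of numbers across the available cores and sums the pieces simultaneously. On a single-core machine the same code still works, just without the speedup.

```java
import java.util.stream.LongStream;

public class ParallelSumDemo {
    // Splits the range 1..n across the cores of the common fork/join
    // pool and sums the pieces in parallel.
    public static long parallelSum(long n) {
        return LongStream.rangeClosed(1, n)
                .parallel() // opt in to multi-core execution
                .sum();
    }

    public static void main(String[] args) {
        System.out.println("cores available: "
                + Runtime.getRuntime().availableProcessors());
        System.out.println("sum 1..1,000,000 = " + parallelSum(1_000_000));
    }
}
```

The answer is the same whether one core or eight did the work; parallelism here changes only how fast we get it.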
But nowadays you can do parallel processing in a single-core machine. 😲😳 Feeling surprised?
Let's see how we can achieve such a thing.
- A superscalar processor can execute multiple instructions at the same time.
- We can exploit thread-level parallelism on one core using hyper-threading.
- Instruction-level parallelism can be done as well.
There is no point in pitting these two concepts against each other. Each has its own pros and cons, so in the end there is no overall winner. Depending on your needs, you design your model around these concepts. Many programming languages nowadays support both. And if you want the best of both worlds, why not mix them up and use a hybrid model?
I have implemented a C# console demo of this; consider checking it out on GitHub.