Concurrency and Parallelism: Concepts and Implementation

In computer science, concurrency and parallelism are fundamental ideas, particularly when it comes to optimizing application performance. Although the two terms are often used interchangeably, they refer to distinct approaches to multitasking. In this tutorial we will use Java to explore the meanings, differences, and real-world applications of concurrency and parallelism.

What is Concurrency?

Concurrency refers to the ability of a system to manage several tasks at once. The tasks are managed so that they appear to run simultaneously, even though they are not necessarily executing at the same instant; the CPU switches between them through a mechanism called context switching. In Java we usually achieve this with multiple threads.

Concurrency in Java

public class ConcurrencyExample {
    public static void main(String[] args) {
        // Two independent counting tasks; Thread.sleep stands in for real work on each iteration.
        Runnable task1 = () -> {
            for (int i = 0; i < 5; i++) {
                System.out.println("Task 1 - Count: " + i);
                try {
                    Thread.sleep(200); 
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        };

        Runnable task2 = () -> {
            for (int i = 0; i < 5; i++) {
                System.out.println("Task 2 - Count: " + i);
                try {
                    Thread.sleep(200);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        };

        Thread thread1 = new Thread(task1);
        Thread thread2 = new Thread(task2);

        // Start both threads; their output interleaves as the scheduler switches between them.
        thread1.start();
        thread2.start();
    }
}

In the above code we create two tasks and run them concurrently. Thread.sleep is used to simulate real-world work being carried out. Running this program produces interleaved output, giving the appearance that both tasks are progressing simultaneously.
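The same interleaving can also be expressed with the higher-level ExecutorService API instead of creating threads by hand. The following is a minimal sketch along the lines of the example above; the class name and the pool size of 2 are arbitrary choices for illustration.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecutorConcurrencyExample {
    public static void main(String[] args) throws InterruptedException {
        // A small fixed pool; both submitted tasks are interleaved by the scheduler.
        ExecutorService executor = Executors.newFixedThreadPool(2);

        Runnable task = () -> {
            for (int i = 0; i < 5; i++) {
                System.out.println(Thread.currentThread().getName() + " - Count: " + i);
                try {
                    Thread.sleep(200); // simulate some work, as in the example above
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        };

        executor.submit(task);
        executor.submit(task);

        executor.shutdown();                            // stop accepting new tasks
        executor.awaitTermination(5, TimeUnit.SECONDS); // wait for the running tasks to finish
    }
}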

What is Parallelism?

Parallelism means carrying out several tasks at literally the same time. Usually this involves breaking a piece of work into smaller subtasks that can be executed simultaneously on several cores or processors, which is why parallelism is frequently associated with high-performance computing.

Parallelism in Java

import java.util.concurrent.RecursiveTask;
import java.util.concurrent.ForkJoinPool;

public class ParallelismExample {

    // Recursively splits the array and sums the two halves, in parallel where possible.
    private static class SumTask extends RecursiveTask<Long> {
        private final long[] array;
        private final int start, end;
        // Chunks at or below this size are summed directly instead of being split further.
        private static final int THRESHOLD = 1000;

        SumTask(long[] array, int start, int end) {
            this.array = array;
            this.start = start;
            this.end = end;
        }

        @Override
        protected Long compute() {
            if (end - start <= THRESHOLD) {
                long sum = 0;
                for (int i = start; i < end; i++) {
                    sum += array[i];
                }
                return sum;
            } else {
                int mid = (start + end) / 2;
                SumTask leftTask = new SumTask(array, start, mid);
                SumTask rightTask = new SumTask(array, mid, end);
                leftTask.fork();                        // run the left half asynchronously
                long rightResult = rightTask.compute(); // compute the right half in this thread
                long leftResult = leftTask.join();      // wait for the left half's result
                return leftResult + rightResult;
            }
        }
    }

    public static void main(String[] args) {
        long[] array = new long[10000];
        for (int i = 0; i < array.length; i++) {
            array[i] = i;
        }

        ForkJoinPool pool = new ForkJoinPool();
        SumTask task = new SumTask(array, 0, array.length);
        long sum = pool.invoke(task);

        System.out.println("Sum: " + sum);
    }
}

In this example program, the SumTask class divides the array into smaller chunks, and the ForkJoinPool computes the sum of those chunks in parallel. Once the work has been split recursively down to a manageable size, each chunk is summed sequentially and the partial results are combined into the final result.
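For a simple reduction like summing an array, a parallel stream offers a more compact way to get the same kind of parallelism; it uses the common ForkJoinPool under the hood. A minimal sketch, assuming the same array of 10,000 longs as above:

import java.util.Arrays;

public class ParallelStreamExample {
    public static void main(String[] args) {
        long[] array = new long[10000];
        for (int i = 0; i < array.length; i++) {
            array[i] = i;
        }

        // Arrays.stream(long[]) returns a LongStream; parallel() splits the work
        // across the common ForkJoinPool, and sum() combines the partial results.
        long sum = Arrays.stream(array).parallel().sum();

        System.out.println("Sum: " + sum);
    }
}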

Concurrency vs. Parallelism: Key Differences

Concurrency is attained through context switching, that is, the interleaving of tasks on the central processing unit (CPU). A single core can therefore make progress on many tasks without ever running two of them at the same instant.

Parallelism is carrying out several operations at the same time in order to speed up execution. It comes down to using more cores or processors to speed up computations.

Which One To Use?

Whether to use concurrency or parallelism depends entirely on the problem we are trying to solve. Generally, we can use concurrency when:

  1. The application has multiple tasks that can be interleaved.
  2. The tasks involve I/O operations or other activities that benefit from overlapping execution (see the sketch after this list).
  3. The application needs to stay responsive while background tasks run.
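Point 2 is a typical fit for CompletableFuture, which lets two blocking calls overlap instead of running back to back. The following is a minimal sketch, not taken from the tutorial: fetchUser and fetchOrders are hypothetical calls whose latency is simulated with Thread.sleep.

import java.util.concurrent.CompletableFuture;

public class OverlappingIoExample {
    // Hypothetical blocking calls; the sleeps stand in for network or disk latency.
    static String fetchUser() {
        sleep(300);
        return "user-42";
    }

    static String fetchOrders() {
        sleep(300);
        return "3 orders";
    }

    static void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        // Both calls run concurrently instead of back to back, so the total wait
        // is roughly the slower of the two rather than their sum.
        CompletableFuture<String> user = CompletableFuture.supplyAsync(OverlappingIoExample::fetchUser);
        CompletableFuture<String> orders = CompletableFuture.supplyAsync(OverlappingIoExample::fetchOrders);

        System.out.println(user.join() + " has " + orders.join());
    }
}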

And we can use parallelism when:

  1. We are performing large-scale computations that can be divided into smaller, independent subtasks.
  2. The application runs on a multi-core architecture and can take advantage of the available cores (see the sketch after this list).
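For point 2, the useful degree of parallelism is bounded by the number of cores the JVM can see. The following is a minimal sketch of checking that count and sizing a ForkJoinPool to match; the class name is made up for illustration, and the no-arg ForkJoinPool constructor already defaults to this level of parallelism.

import java.util.concurrent.ForkJoinPool;

public class CoreAwarePoolExample {
    public static void main(String[] args) {
        // Number of logical cores the JVM sees; parallel speedups are bounded by this.
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Available cores: " + cores);

        // Passing the core count explicitly just makes the choice visible.
        ForkJoinPool pool = new ForkJoinPool(cores);
        System.out.println("Pool parallelism: " + pool.getParallelism());

        pool.shutdown();
    }
}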
