Part 1. MPI - Introduction and First Program

Introduction. Why all this?

This series of articles will focus on parallel programming.

  • Part 1. MPI - Introduction and the first program.

  • Part 2. MPI - Learning to monitor processes.

To battle. Introduction

In real tasks, the most complex algorithms often require a huge amount of computational resources. When a programmer writes code in the usual procedural or object-oriented (OOP) style, especially demanding algorithms that work with large amounts of data and must minimize execution time call for additional optimization.

Basically, two types of optimization are used, or a mixture of them: vectorization and parallelization of computations. How do they differ?





Calculations are performed on the processor, and the processor uses special data "stores" called registers. Registers are wired directly to the logic units and need far less time to operate on data than RAM, let alone a hard disk, where data transfer takes up a large share of the total time. Processors also contain a memory area called the cache, which holds the values currently involved in calculations or about to be involved in them, that is, the most important data.

The task of optimizing an algorithm therefore comes down to correctly building the sequence of operations and optimally placing data in the cache, minimizing the number of transfers from main memory.





What is vectorization? Modern processors provide SIMD instruction sets; in C++ you can rely on, for example, AVX, whose registers are 256 bits wide, while a float32 value occupies 32 bits. One such register therefore holds 256 / 32 = 8 float32 values, and a single instruction processes all 8 of them at once instead of one. This is vectorization: one instruction applied to many data elements simultaneously.





Parallelization, in turn, splits the computation between several independent execution units, such as processor cores or even separate machines, each of which performs its share of the work at the same time as the others.





In this series we will deal with the second approach, namely the MPI technology.

MPI stands for Message Passing Interface. An MPI program runs as a set of processes, each with its own address space, that cooperate by explicitly sending messages to one another. This makes it possible to spread a computation not only across the cores of one machine, but across several machines connected by a network.





Two models of parallel programming are usually distinguished: MIMD (Multiple Instruction Multiple Data, where different processes execute different programs on different data) and SPMD (Single Program Multiple Data, where all processes execute the same program on different data).

At first glance the MIMD model looks more general and flexible, but in practice it is rarely needed, and most systems implement SPMD. MPI programs (at least the ones we will write in this series) also follow the SPMD model: every process runs the same executable, and its behavior is differentiated at run time.





MPI itself is a specification, not a single product, and it has several implementations. In this series we will use MPICH; all examples were tested on Ubuntu Budgie 20.04 LTS.





To install the compiler and the MPI implementation, run:





[user-name]$ sudo apt-get update
[user-name]$ sudo apt-get install gcc
[user-name]$ sudo apt-get install mpich



These commands update the package index and install GCC and MPICH, everything needed to build and run C++ & MPI programs.





Basic concepts.

An MPI program is a set of processes that run simultaneously and independently. Launching the program on N processes creates N copies of the same executable, each in its own address space. All MPI functions follow the naming pattern MPI_[Name] and return an integer status code indicating success or failure.





Message passing in MPI rests on three basic concepts:



- Communicator - an object describing a group of processes within which messages can be exchanged. In C/C++ it has the type MPI_Comm. Every message is sent inside some communicator, from one of its processes to another. The predefined communicator MPI_COMM_WORLD contains all processes of the application; there are also MPI_COMM_SELF, which contains only the calling process, and MPI_COMM_NULL, an empty (invalid) communicator.



- Process rank - a unique non-negative integer identifying a process within a communicator. Ranks run from 0 to the communicator size minus one, and they are what you use to say which process a message goes to and which process it comes from.



- Message tag - an integer from 0 to at least 32767 (the implementation-specific upper bound is available through the MPI_TAG_UB attribute), which lets the receiver distinguish between different kinds of messages.





Now let's move on to writing programs with MPI itself. Every MPI program begins and ends with the following two calls:





int MPI_Init(int *argc, char ***argv);
int MPI_Finalize(void);



The first function, MPI_Init, initializes the parallel part of the program and must be called before any other MPI function; it takes pointers to the command-line arguments, because the MPI launcher may pass additional parameters to the processes through them. The second, MPI_Finalize, shuts the parallel part down, and no MPI function may be called after it.





Let's write the simplest C++ program using MPI.





#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
  printf("Before MPI_INIT\n");
  MPI_Init(&argc, &argv);
  printf("Parallel sect\n");
  MPI_Finalize();
  printf("After MPI_FINALIZE\n");
  return 0;
}



Save the code to a *.cpp file (here, main.cpp), then compile and run it:





[user-name]$ mpic++ main.cpp -o main
[user-name]$ mpiexec -n 2 ./main 



mpic++ is a wrapper over the C++ compiler that adds the MPI headers and libraries, and mpiexec launches the compiled MPI program. What does the -n 2 option mean? It tells mpiexec to start the program as 2 processes.





. "Before ..." "After ..." , MPI Init-Finalize.






In this short article, using the simplest possible program as an example, we learned how to build and run C++ files with MPI code and figured out what kind of animal MPI is and what it is used for. In further tutorials, we will look at more useful programs and finally move on to communication between processes.







