Part 2. MPI - Learning to Monitor Processes

This series of articles is about parallel programming using MPI.





  • Part 1. MPI - Introduction and the first program.





  • Part 2. MPI - Learning to monitor processes.





In the previous article we discussed what MPI is, how to run a program with it, and why parallel programming is needed at all when you can write without it. In this article we assume the reader is familiar with that material and move on to the next step in studying MPI technology, namely process control. To avoid the indignation of experienced programmers, by "threads", "processes", etc. I will further mean a part of the computing system on which a specific instance of the program is running (this part can be a particular thread or any compute node of the system).






Process numbers and total number of processes

To do useful work in a parallel program, roles have to be distributed between compute nodes and threads. For this it is simply vital to know which thread is executing a particular instance of the program, but first it would be good to know how many of them are running at all.





To find out how many threads are executing the program, there is the procedure MPI_Comm_size. It takes as input a communicator (we will talk about it shortly) and the address of memory where an integer will be written: the number of threads processing the program.





int MPI_Comm_size(MPI_Comm comm, int* size)
      
      



What is a communicator? It is an object that describes a group of processes able to exchange messages with each other. When the program starts, MPI creates a standard communicator, MPI_COMM_WORLD, which includes all the processes of the program. It is this communicator that we will pass to the procedures in the examples below.





Besides the total number of processes, we also need the number of the particular process, its rank within the communicator. The procedure for this looks just like MPI_Comm_size: it takes a communicator and the address of an integer into which the rank of the calling process will be written. Its prototype is:





int MPI_Comm_rank(MPI_Comm comm, int* rank)
      
      



Process ranks within a communicator are numbered starting from zero.





Let's write a program that uses these 2 procedures and prints, for every process, its rank and the total number of processes.





#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
	int rank, size;

	MPI_Init(&argc, &argv);

	MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank of the current process */

	MPI_Finalize();

	printf("Process: %d, size: %d\n", rank, size);

	return 0;
}
      
      



Running the program on 5 processes gives the following output:





Process: 0, size: 5
Process: 1, size: 5
Process: 2, size: 5
Process: 3, size: 5
Process: 4, size: 5
      
      



Each process has learned its own rank and the total number of processes.





What does this give us? Knowing its rank and the total number of processes, each process can decide for itself which part of the work to take on: which piece of the data to handle, which loop iterations to execute, and so on. This is exactly how work is distributed in parallel programs.





An example of using Comm_size and Comm_rank

Let's look at a more meaningful example. We will compute the squares of the numbers from 1 to MAX, dividing the range between the processes.





#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
	const int MAX = 20;
	int rank, size;
	int n, ibeg, iend;

	MPI_Init(&argc, &argv);
	MPI_Comm_size(MPI_COMM_WORLD, &size);
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);

	n = (MAX - 1) / size + 1;   /* how many numbers each process gets (rounded up) */
	ibeg = rank * n + 1;        /* first number of this process's range */
	iend = (rank + 1) * n;      /* last number of this process's range */
	for(int i = ibeg; i <= ((iend > MAX) ? MAX : iend); i++)
	{
		printf("Process: %d, %d^2=%d\n", rank, i, i*i);
	}

	MPI_Finalize();

	return 0;
}
      
      



Running on 5 processes gives:





Process: 0, 1^2=1
Process: 0, 2^2=4
Process: 0, 3^2=9
Process: 0, 4^2=16
Process: 1, 5^2=25
Process: 1, 6^2=36
Process: 1, 7^2=49
Process: 1, 8^2=64
Process: 2, 9^2=81
Process: 2, 10^2=100
Process: 2, 11^2=121
Process: 2, 12^2=144
Process: 3, 13^2=169
Process: 3, 14^2=196
Process: 3, 15^2=225
Process: 3, 16^2=256
Process: 4, 17^2=289
Process: 4, 18^2=324
Process: 4, 19^2=361
Process: 4, 20^2=400
      
      



Here MAX=20 and there are 5 processes, so each process received exactly 4 numbers. If MAX is not evenly divisible by the number of processes, nothing breaks: the upper bound of the loop is clamped to MAX, so the last process simply gets fewer numbers. For example, with MAX=22 and 5 processes each range holds 5 numbers, and the last process computes only the squares of 21 and 22.





When writing a parallel program, sooner or later a natural question arises: did parallelization actually speed things up? To answer it we need to measure the running time, and MPI has its own portable means for doing this, the same on every platform.





Almost everything useful an MPI program does happens between MPI_Init and MPI_Finalize, so it is this part of the program that we usually want to time. Measuring it with ordinary system timers is inconvenient, because the processes may run on different nodes, each with its own clock.





Fortunately, there is no need to pull in <time.h>: MPI provides its own procedures for working with time. Their prototypes are:





double MPI_Wtime(void);
double MPI_Wtick(void);
      
      



MPI_Wtime returns the number of seconds, as a floating-point value, elapsed since some moment in the past. That moment is guaranteed not to change during the lifetime of the process, so the difference between two calls gives the elapsed time. MPI_Wtick returns the resolution of the timer, that is, the number of seconds between two consecutive ticks. So, to time a section of code, call Wtime before and after it and take the difference.





There is also the attribute MPI_WTIME_IS_GLOBAL, which takes the value 0 or 1 and tells whether the timer is synchronized across all processes of the program, that is, whether Wtime counts from the same moment everywhere.
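Below is a minimal sketch of how these procedures might be used. The loop summing a series is just an arbitrary stand-in for real work, and the iteration count is picked at random:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
	int rank;
	double start, end, sum = 0.0;

	MPI_Init(&argc, &argv);
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);

	start = MPI_Wtime();            /* moment before the work */
	for(int i = 1; i <= 10000000; i++)
		sum += 1.0 / i;             /* arbitrary computation being timed */
	end = MPI_Wtime();              /* moment after the work */

	printf("Process: %d, elapsed: %f s, timer resolution: %e s\n",
		rank, end - start, MPI_Wtick());

	MPI_Finalize();

	return 0;
}

Each process here times its own piece of work with its own timer; if MPI_WTIME_IS_GLOBAL is 1, the values of Wtime on different processes can also be compared with each other directly.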





One more useful procedure remains to be looked at.





It is sometimes useful to know on which physical node a particular process is running. For this there is the procedure MPI_Get_processor_name. Its prototype is:





int MPI_Get_processor_name(char* name, int* len);
      
      



The procedure writes the name of the node into name and the length of that name into len; the buffer must be at least MPI_MAX_PROCESSOR_NAME characters long.
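A small sketch of how it can be called; the buffer size uses the MPI_MAX_PROCESSOR_NAME constant, and the output format is my own choice:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
	int rank, len;
	char name[MPI_MAX_PROCESSOR_NAME];   /* buffer large enough for any node name */

	MPI_Init(&argc, &argv);
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);

	MPI_Get_processor_name(name, &len);  /* node name and its actual length */
	printf("Process: %d, node: %s (name length %d)\n", rank, name, len);

	MPI_Finalize();

	return 0;
}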





These are, in essence, the basic MPI procedures for keeping track of processes. Even with them alone, as the examples show, you can already distribute work between processes.





To consolidate this knowledge, I suggest writing a simple program that determines which of the numbers in a given range from 1 to N are prime. This will clearly show how easily computations can be parallelized with this technology and will help fix all the acquired skills in your head. One possible sketch of a solution is given below.
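For reference, here is one possible sketch of such a program (by no means the only way to do it): the range splitting repeats the scheme from the squares example, and the bound N = 50 is chosen arbitrarily.

#include <stdio.h>
#include "mpi.h"

/* Returns 1 if x is prime, 0 otherwise (simple trial division). */
static int is_prime(int x)
{
	if (x < 2) return 0;
	for(int d = 2; d * d <= x; d++)
		if (x % d == 0) return 0;
	return 1;
}

int main(int argc, char **argv)
{
	const int N = 50;                 /* upper bound of the range, chosen arbitrarily */
	int rank, size;

	MPI_Init(&argc, &argv);
	MPI_Comm_size(MPI_COMM_WORLD, &size);
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);

	/* Split 1..N between processes the same way as in the squares example. */
	int n = (N - 1) / size + 1;
	int ibeg = rank * n + 1;
	int iend = (rank + 1) * n;

	for(int i = ibeg; i <= ((iend > N) ? N : iend); i++)
		if (is_prime(i))
			printf("Process: %d, %d is prime\n", rank, i);

	MPI_Finalize();

	return 0;
}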





Have a pleasant time of day, Habr readers and those who came across this article from outside.







