Rocstar  1.0
Rocstar multiphysics simulation application
Pane_communicator Class Reference

Encapsulates information about the distribution of panes on all processes and handles across-pane communication for shared nodes, ghost nodes, and ghost cells. More...

#include <Pane_communicator.h>


Classes

struct  Pane_comm_buffers
 Buffers for outgoing and incoming messages of a specific pane to be communicated with another pane (either local or remote). More...
 

Public Types

enum  Buff_type { RNS, RCS, SHARED_NODE, GNR, GCR }
 

Public Member Functions

 Pane_communicator (COM::Window *w, MPI_Comm c=MPI_COMM_WORLD)
 Constructor from a communicator. More...
 
void init (void **ptrs, COM_Type type, int ncomp, const int *sizes=NULL, const int *strds=NULL)
 Initialize the communication buffers. More...
 
void init (COM::Attribute *att, const COM::Attribute *my_pconn=NULL)
 Initialize the communication buffers. More...
 
MPI_Comm mpi_comm () const
 Obtain the MPI communicator for the object. More...
 
std::vector< COM::Pane * > & panes ()
 Obtains all the local panes. More...
 
int owner_rank (const int pane_id) const
 Obtain the process rank of a given pane. More...
 
int total_npanes () const
 Obtain the total number of panes on all processes. More...
 
int lpaneid (const int pane_id) const
 For a given pane, obtain an internal pane ID which is unique and contiguous across processes. More...
 
void begin_update_shared_nodes (std::vector< std::vector< bool > > *involved=NULL)
 Initiates updating shared nodes by calling MPI_Isend and MPI_Irecv. More...
 
void begin_update_ghost_nodes ()
 Initiates updating ghost nodes by calling MPI_Isend and MPI_Irecv. More...
 
void begin_update_ghost_cells ()
 Initiates updating ghost cells by calling MPI_Isend and MPI_Irecv. More...
 
void end_update_shared_nodes ()
 Finalizes shared node updating by calling MPI_Waitall on all send requests. More...
 
void end_update_ghost_nodes ()
 Finalizes ghost node updating by calling MPI_Waitall on all send requests. More...
 
void end_update_ghost_cells ()
 Finalizes ghost cell updating by calling MPI_Waitall on all send requests. More...
 
void reduce_on_shared_nodes (MPI_Op)
 Perform a reduction operation using locally cached values of the shared nodes, assuming begin_update_shared_nodes() has been called. More...
 
void reduce_maxabs_on_shared_nodes ()
 Reduce to the value with the maximum absolute value using locally cached values of shared nodes. More...
 
void reduce_minabs_on_shared_nodes ()
 Reduce to the value with the minimum absolute value using locally cached values of shared nodes. More...
 
void reduce_diff_on_shared_nodes ()
 Compute difference of non-zero values of each shared node, assuming there are at most two non-zero values. More...
 
void update_ghost_values ()
 Copies the received values into ghost nodes or ghost cells after a ghost update has been initiated. More...
 
void reduce_average_on_shared_nodes ()
 Reduce to the average of values using locally cached values of the shared nodes, assuming begin_update_shared_nodes() has been called. More...
 

Protected Member Functions

void init_pane_comm_buffers (std::vector< Pane_comm_buffers > &pcb, const int *ptr, int &index, const int n_items, const int lpid)
 Initialize a Pane_comm_buffers for ghost information. More...
 
void end_update ()
 Finalizes updating by calling MPI_Waitall on all send requests. More...
 
void begin_update (const Buff_type btype, std::vector< std::vector< bool > > *involved=NULL)
 Initiates updating by calling MPI_Isend and MPI_Irecv. More...
 

Protected Attributes

int _my_pconn_id
 The id of the pconn being used. More...
 
COM::Window * _appl_window
 Reference to the application window. More...
 
const MPI_Comm _comm
 MPI Communicator. More...
 
std::vector< COM::Pane * > _panes
 Vector of all local panes. More...
 
int _total_npanes
 The total number of panes on all processes. More...
 
std::map< int, int > _lpaneid_map
 Mapping from user-defined pane IDs to internal IDs, which are unique and contiguous across all processes and are used to define unique tags for MPI messages. More...
 
int _type
 The base data type, number of components, and the number of bytes of all components for the data to be communicated. More...
 
int _ncomp
 
int _ncomp_bytes
 
std::vector< void * > _ptrs
 An array of pointers to the data for all local panes. More...
 
std::vector< int > _sizes
 The sizes of the arrays for all local panes. More...
 
std::vector< int > _strds
 The strides of the arrays for all local panes. More...
 
std::vector< std::vector
< Pane_comm_buffers > > 
_shr_buffs
 Shared node pane communication buffers. More...
 
std::vector< std::vector
< Pane_comm_buffers > > 
_rns_buffs
 Buffer for real nodes to send. More...
 
std::vector< std::vector
< Pane_comm_buffers > > 
_gnr_buffs
 Buffer for ghost nodes to receive. More...
 
std::vector< std::vector
< Pane_comm_buffers > > 
_rcs_buffs
 Buffer for real cells to send. More...
 
std::vector< std::vector
< Pane_comm_buffers > > 
_gcr_buffs
 Buffer for ghost cells to receive. More...
 
std::vector< MPI_Request > _reqs_send
 Arrays of pending nonblocking MPI requests. Same format as _shr_buffs. More...
 
std::vector< MPI_Request > _reqs_recv
 
std::vector< std::pair< int,
int > > 
_reqs_indices
 The indices in buffs for each pending nonblocking receive request. More...
 

Private Member Functions

 Pane_communicator (const Pane_communicator &)
 
Pane_communicator & operator= (const Pane_communicator &)
 

Detailed Description

Encapsulates information about the distribution of panes on all processes and handles across-pane communication for shared nodes, ghost nodes, and ghost cells.

However, there can be no overlap among the three types of communication, since they share the same set of tags and request buffers.

Definition at line 46 of file Pane_communicator.h.
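
The typical calling sequence is construct, init, then a begin_update_*/end_update_* pair. A minimal usage sketch follows; it assumes the class lives in the MAP namespace (as suggested by MAP_BEGIN_NAMESPACE below) and that win already holds a nodal attribute att together with its pane connectivity. It is an illustration, not code taken from the library.

#include <Pane_communicator.h>
#include <mpi.h>

// Sketch: fill the ghost-node entries of a nodal attribute from their owners.
void update_ghosts_of( COM::Window *win, COM::Attribute *att, MPI_Comm comm)
{
  MAP::Pane_communicator pc( win, comm);  // learn the pane distribution
  pc.init( att);                          // set up communication buffers for att
  pc.begin_update_ghost_nodes();          // post MPI_Isend/MPI_Irecv
  pc.end_update_ghost_nodes();            // copy received values, wait on sends
}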

Member Enumeration Documentation

enum Buff_type
Enumerator
RNS   Real nodes to send.
RCS   Real cells to send.
SHARED_NODE   Shared nodes to send and receive.
GNR   Ghost nodes to receive.
GCR   Ghost cells to receive.

Definition at line 69 of file Pane_communicator.h.

69  {
70  RNS, // Real nodes to send
71  RCS, // Real cells to send.
72  SHARED_NODE, // Shared nodes to send and receive.
73  GNR, // Ghost nodes to receive.
74  GCR // Ghost cells to receive.
75  };

Constructor & Destructor Documentation

Pane_communicator ( COM::Window *  w,
MPI_Comm  c = MPI_COMM_WORLD 
)
explicit

Constructor from a communicator.

Also initializes the internal data structures of the communicator, in particular the internal pane IDs.

Definition at line 38 of file Pane_communicator.C.

References _appl_window, _lpaneid_map, _my_pconn_id, _panes, _total_npanes, COM_PCONN, and i.

39  : _appl_window( w), _comm(COMMPI_Initialized()?c:MPI_COMM_NULL),
40  _total_npanes(-1)
 41 {
 42  _my_pconn_id = COM_PCONN;
 43  _appl_window->panes( _panes);
44  const COM::Window::Proc_map &proc_map = _appl_window->proc_map();
45  _total_npanes = proc_map.size();
46 
47  // Determine a mapping from user-defined pane ids to a set of
48  // internal pane IDs, which are unique and contiguous across all processes
49  // to be used for defining unique tags for MPI messages.
50  std::map<int,int>::const_iterator it=proc_map.begin();
51  for ( int i=0; i<_total_npanes; ++i, ++it)
52  _lpaneid_map[ it->first] = i;
53 }
Pane_communicator ( const Pane_communicator & )
private

Member Function Documentation

void begin_update ( const Buff_type  btype,
std::vector< std::vector< bool > > *  involved = NULL 
)
protected

Initiates updating by calling MPI_Isend and MPI_Irecv.

See begin_update_shared_nodes() for a description of involved.

Definition at line 205 of file Pane_communicator.C.

References _comm, _gcr_buffs, _gnr_buffs, _my_pconn_id, _ncomp_bytes, _panes, _ptrs, _rcs_buffs, _reqs_indices, _reqs_recv, _reqs_send, _rns_buffs, _shr_buffs, _sizes, _strds, _total_npanes, _type, COM_assertion, COM_assertion_msg, COM_get_sizeof(), COMMPI_Comm_rank(), COMMPI_Initialized(), COMMPI_Irecv(), COMMPI_Isend(), GCR, GNR, i, Pane_communicator::Pane_comm_buffers::inbuf, Pane_communicator::Pane_comm_buffers::index, j, k, MPI_BYTE, MPI_COMM_SELF, n, nj, Pane_communicator::Pane_comm_buffers::outbuf, Pane_communicator::Pane_comm_buffers::rank, rank, RCS, RNS, SHARED_NODE, and Pane_communicator::Pane_comm_buffers::tag.

Referenced by begin_update_ghost_cells(), begin_update_ghost_nodes(), and begin_update_shared_nodes().

206  {
207  COM_assertion_msg( _reqs_send.empty() && _reqs_recv.empty(),
208  "Cannot begin a new update until all prior updates are finished.");
209  // Define and initialize variables
210  int rank = COMMPI_Comm_rank( _comm);
211 
212  MPI_Request req;
213  _reqs_send.clear(); _reqs_recv.clear(); _reqs_indices.clear();
214  int local_npanes = _panes.size();
215  int tag_max = _total_npanes*_total_npanes;
216  if ( involved) { involved->clear(); involved->resize( local_npanes); }
217 
218  // Now copy data to outbuf. First loop through local panes
219  for ( int i=0; i<local_npanes; ++i) {
220  if ( involved) (*involved)[i].resize( _sizes[i], false);
221 
222  // Obtain the pane connectivity for the current pane
223  const COM::Attribute *pconn = _panes[i]->attribute(_my_pconn_id);
224  const int *vs = (const int*)pconn->pointer();
225 
226  int strd_bytes = COM_get_sizeof( _type, _strds[i]);
227  // Shift the pointer by -1 because node IDs in pconn start from 1
228  char *ptr = ((char*)_ptrs[i])-strd_bytes;
229 
230  // Obtain the current buffer
231  std::vector< Pane_comm_buffers> *buffs =NULL;
232 
233  if (btype == RNS) buffs = &_rns_buffs[i];
234 
235  else if (btype == RCS) buffs = &_rcs_buffs[i];
236 
237  else if (btype == SHARED_NODE) buffs = &_shr_buffs[i];
238 
239  else if (btype == GNR) {
240  buffs = &_gnr_buffs[i];
241  }
242  else if (btype == GCR)
243  {
244  buffs = &_gcr_buffs[i];
245  }
246  else
247  COM_assertion_msg(false,"Invalid buffer type");
248 
249  // Loop through the communicating panes of the current pane
250  for ( int j=0, nj=buffs->size(); j<nj; ++j) {
251  Pane_comm_buffers* pcb = &(*buffs)[j];// The current buffer.
252 
253  // Fill in outgoing buffers if sending.
254  int bufsize=_ncomp_bytes * vs[ pcb->index+1];
255 
256  if ( btype != SHARED_NODE || _panes[i]->id() != vs[pcb->index]) {
257  // If not sending shared nodes to itself
258  if(btype <= SHARED_NODE){
259  pcb->outbuf.resize( bufsize);
260 
261  if ( involved) {
262  for ( int k=0, from=pcb->index+2,n=vs[pcb->index+1];
263  k<n; ++k, ++from)
264  (*involved)[i][vs[ from]-1] = true;
265  }
266 
267  for ( int k=0, from=pcb->index+2,n=vs[pcb->index+1];
268  k<n; ++k, ++from) {
269  std::memcpy( &pcb->outbuf[ _ncomp_bytes*k],
270  &ptr[ strd_bytes*vs[ from]], _ncomp_bytes);
271  }
272 
273  // Initiates send operations either locally or remotely
274  // if on same communicating process
275  if ( rank == pcb->rank) {
276  int tag = pcb->tag;
277 
278  // If send locally, shift the tag in one of the two directions
279  if ( _panes[i]->id() > vs[pcb->index]) tag += tag_max;
280  tag = tag%32768;
281  // COMMPI uses DUMMY_MPI if MPI not initialized
282  int ierr=COMMPI_Isend( &pcb->outbuf[0], pcb->outbuf.size(),
283  MPI_BYTE, 0, tag, MPI_COMM_SELF, &req);
284  COM_assertion( ierr==0);
285  _reqs_send.push_back( req);
286  }
287  else {
288  int ierr=MPI_Isend( &pcb->outbuf[0], pcb->outbuf.size(), MPI_BYTE,
289  pcb->rank, pcb->tag, _comm, &req);
290  COM_assertion( ierr==0);
291 
292  _reqs_send.push_back( req);
293  }
294  }
295 
296  // Initiates receive operations either locally or remotely
297  if(btype >=SHARED_NODE){
298  pcb->inbuf.resize( bufsize);
299 
300  if ( rank == pcb->rank) {
301  int tag = pcb->tag;
302 
303  // If recv locally, shift the tag in one of the two directions
304  // have to have unique tags for send/receive when we are sending
305  // between panes on the same process.
306  if ( _panes[i]->id() < vs[pcb->index]) tag += tag_max;
307  tag = tag%32768;
308  int ierr=COMMPI_Irecv( &pcb->inbuf[0], pcb->inbuf.size(),
309  MPI_BYTE, 0, tag, MPI_COMM_SELF, &req);
310  COM_assertion( ierr==0);
311 
312  // Push the receive request into _reqs_recv and _reqs_indices
313  _reqs_recv.push_back( req);
314  _reqs_indices.push_back( std::make_pair(i,(j<<4)+btype));
315  }
316  else {
317  int ierr=MPI_Irecv( &pcb->inbuf[0], pcb->inbuf.size(), MPI_BYTE,
318  pcb->rank,pcb->tag, _comm, &req);
319  COM_assertion( ierr==0);
320 
321  // Push the receive request into _reqs_recv and _reqs_indices
322  _reqs_recv.push_back( req);
323  _reqs_indices.push_back( std::make_pair(i,(j<<4)+btype));
324  }
325  }
326  }
327 
328  else { // btype == SHARED_NODE &&_panes[i]->id()==vs[pcb->index]
329  pcb->inbuf.resize( bufsize);
330 
331  // A pane is sending to itself.
332  // In a list of nodes which a pane shares with itself, the nodes are
333  // listed in pairs. I.e., a node list { 1,16,2,17,...} indicates that nodes
334  // 1 and 16 are the same and nodes 2 and 17 are the same.
335  // So, we copy the data value of a node into the outbuf of its shared node.
336  // Then we can transfer the data directly.
337  for (int k=0,from=pcb->index+2,n=vs[pcb->index+1]; k<n; k+=2, from+=2){
338  std::memcpy( &pcb->inbuf[ _ncomp_bytes*(k+1)],
339  &ptr[strd_bytes*vs[ from]], _ncomp_bytes);
340  std::memcpy( &pcb->inbuf[ _ncomp_bytes*k],
341  &ptr[strd_bytes*vs[ from+1]], _ncomp_bytes);
342  }
343  }
344  }
345  }
346  if(COMMPI_Initialized())
347  MPI_Barrier(_comm);
348 }
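
The request bookkeeping used above can be illustrated in isolation. The standalone sketch below (with made-up values) mirrors the receive-request encoding (j<<4)+btype stored in _reqs_indices and the decoding applied later in the reduce_*_on_shared_nodes() and update_ghost_values() routines.

#include <cstdio>
#include <utility>

int main()
{
  // A pending receive is remembered as (local pane index, (buffer index<<4) + type).
  int i = 2, j = 5;
  int btype = 2;                                  // 2 == SHARED_NODE in Buff_type
  std::pair<int,int> entry( i, (j << 4) + btype);

  // Decoding, as done when the receive completes.
  int pane_idx = entry.first;
  int buf_idx  = entry.second >> 4;
  int type     = entry.second & 15;
  std::printf( "pane %d, buffer %d, type %d\n", pane_idx, buf_idx, type);
  return 0;
}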

void begin_update_ghost_cells ( )
inline

Initiates updating ghost cells by calling MPI_Isend and MPI_Irecv.

Definition at line 135 of file Pane_communicator.h.

References begin_update(), GCR, and RCS.

135  {
136  begin_update(RCS);
137  begin_update(GCR);
138  }

void begin_update_ghost_nodes ( )
inline

Initiates updating ghost nodes by calling MPI_Isend and MPI_Irecv.

Definition at line 129 of file Pane_communicator.h.

References begin_update(), GNR, and RNS.

129  {
130  begin_update(RNS);
131  begin_update(GNR);
132  }

void begin_update_shared_nodes ( std::vector< std::vector< bool > > *  involved = NULL)
inline

Initiates updating shared nodes by calling MPI_Isend and MPI_Irecv.

If involved is not NULL, then upon exit it contains a vector of bitmaps, one per local pane, whose size is the number of real nodes of that pane. The bits corresponding to shared nodes are set to true. This information is needed to overlap communication and computation during reduction operations.

Definition at line 124 of file Pane_communicator.h.

References begin_update(), and SHARED_NODE.

124  {
125  begin_update(SHARED_NODE,involved);
126  }
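
A sketch of how the involved bitmaps can be used to overlap local work with the pending shared-node messages; pc is an already-initialized Pane_communicator, and process_interior_node is a hypothetical per-node computation that is not part of the API.

#include <Pane_communicator.h>
#include <mpi.h>
#include <vector>

void process_interior_node( COM::Pane *pane, int node_id);  // hypothetical

void overlapped_sum( MAP::Pane_communicator &pc)
{
  std::vector< std::vector<bool> > involved;
  pc.begin_update_shared_nodes( &involved);     // post messages, fill the bitmaps

  // While messages are in flight, work on real nodes that are NOT shared.
  std::vector<COM::Pane*> &panes = pc.panes();
  for ( int p = 0, np = (int)panes.size(); p < np; ++p)
    for ( int n = 0, nn = (int)involved[p].size(); n < nn; ++n)
      if ( !involved[p][n])
        process_interior_node( panes[p], n + 1); // node IDs are 1-based

  pc.reduce_on_shared_nodes( MPI_SUM);           // fold received shared-node values in
  pc.end_update_shared_nodes();                  // wait for the outstanding sends
}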

void end_update ( )
protected

Finalizes updating by calling MPI_Waitall on all send requests.

Definition at line 351 of file Pane_communicator.C.

References _comm, _reqs_send, and COM_assertion.

Referenced by end_update_ghost_cells(), end_update_ghost_nodes(), and end_update_shared_nodes().

351  {
352  if ( _comm!=MPI_COMM_NULL) {
353 
354  std::vector< MPI_Status> status( _reqs_send.size());
355  int ierr=0;
356  if (_reqs_send.size())
357  ierr=MPI_Waitall( _reqs_send.size(), &_reqs_send[0], &status[0]);
358  COM_assertion( ierr==0);
359  }
360  _reqs_send.resize(0);
361 }

void end_update_ghost_cells ( )
inline

Finalizes ghost cell updating by calling MPI_Waitall on all send requests.

Definition at line 152 of file Pane_communicator.h.

References end_update(), and update_ghost_values().

152  {
153  update_ghost_values();
154  end_update();

void end_update_ghost_nodes ( )
inline

Finalizes ghost node updating by calling MPI_Waitall on all send requests.

Definition at line 146 of file Pane_communicator.h.

References end_update(), and update_ghost_values().

146  {
147  update_ghost_values();
148  end_update();

void end_update_shared_nodes ( )
inline

Finalizes shared node updating by calling MPI_Waitall on all send requests.

Definition at line 141 of file Pane_communicator.h.

References end_update().

141  {
142  end_update();
143  }

void init ( void **  ptrs,
COM_Type  type,
int  ncomp,
const int *  sizes = NULL,
const int *  strds = NULL 
)

Initialize the communication buffers.

ptrs is an array of pointers to the data for all local panes.
type is the base data type (such as MPI_INT).
ncomp is the number of components per node.
sizes is the number of items per pane; if NULL, the number of real nodes is used.
strds gives the stride of each array; if NULL, ncomp is used for all panes.
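
For illustration, a hedged sketch of this raw-pointer form, assuming an existing Pane_communicator pc and three local panes whose nodal data live in the hypothetical double arrays disp0, disp1, and disp2 with 3 components per node:

double *ptrs[3] = { disp0, disp1, disp2 };   // one data pointer per local pane
pc.init( (void**)ptrs,   // data pointers
         COM_DOUBLE,     // base data type
         3,              // components per node
         NULL,           // sizes: default to the number of real nodes per pane
         NULL);          // strds: default to ncomp for every pane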

Definition at line 91 of file Pane_communicator.C.

References _gcr_buffs, _gnr_buffs, _my_pconn_id, _ncomp, _ncomp_bytes, _panes, _ptrs, _rcs_buffs, _rns_buffs, _shr_buffs, _sizes, _strds, _type, COM_get_sizeof(), i, init_pane_comm_buffers(), and lpaneid().

Referenced by init(), Rocmap::reduce_average_on_shared_nodes(), Rocmap::reduce_maxabs_on_shared_nodes(), Rocmap::reduce_minabs_on_shared_nodes(), and Rocmap::update_ghosts().

92  {
93 
94  //=== Initialize local objects from the arguments
95  _type = type;
 96  _ncomp = ncomp;
 97  _ncomp_bytes = COM_get_sizeof( _type, _ncomp);
 98 
99  int local_npanes = _panes.size();
100  _shr_buffs.resize( local_npanes);
101  _rns_buffs.resize( local_npanes);
102  _gnr_buffs.resize( local_npanes);
103  _rcs_buffs.resize( local_npanes);
104  _gcr_buffs.resize( local_npanes);
105  _ptrs.clear(); _ptrs.insert( _ptrs.end(), ptrs, ptrs+local_npanes);
106 
107  _sizes.clear();
108  if ( sizes)
109  _sizes.insert( _sizes.end(), sizes, sizes+local_npanes);
110  else {
111  // Default values of sizes are the numbers of nodes
112  _sizes.resize( local_npanes);
113  for ( int i=0; i<local_npanes; ++i)
114  _sizes[i] = _panes[i]->size_of_real_nodes();
115  }
116 
117  _strds.clear();
118  if ( strds) _strds.insert( _strds.end(), strds, strds+local_npanes);
119  else _strds.resize( local_npanes, _ncomp);
120 
121  // Allocate buffer space for outbuf and ghosts
122  for ( int i=0; i<local_npanes; ++i) {
123  int pid = _panes[i]->id(), lpid=lpaneid(pid);
124 
125  // Obtain the pane connectivity of the local pane.
126  const COM::Attribute *pconn = _panes[i]->attribute(_my_pconn_id);
127  const int *vs = (const int*)pconn->pointer();
128  int vs_size=pconn->size_of_real_items();
129 
130  // Note that the pane connectivity may be inherited from a parent
131  // window, and the current window may contain only a subset of the
132  // panes of the parent window. Here we should consider only the panes
133  // in the current window
134 
135  // Initialize buffers for shared nodes.
136  int index = 0;
137  init_pane_comm_buffers( _shr_buffs[i], vs, index, vs_size, lpid);
138 
139  const int vs_gsize=pconn->size_of_items();
140  // Initialize the ghost information buffers.
141  if(vs_size<vs_gsize)
142  init_pane_comm_buffers(_rns_buffs[i], vs, vs_size, vs_gsize, lpid);
143  else
144  _rns_buffs[i].clear();
145 
146  if(vs_size<vs_gsize)
147  init_pane_comm_buffers(_gnr_buffs[i], vs, vs_size, vs_gsize, lpid);
148  else
149  _gnr_buffs[i].clear();
150 
151  if(vs_size<vs_gsize)
152  init_pane_comm_buffers(_rcs_buffs[i], vs, vs_size, vs_gsize, lpid);
153  else
154  _rcs_buffs[i].clear();
155 
156  if(vs_size<vs_gsize)
157  init_pane_comm_buffers(_gcr_buffs[i], vs, vs_size, vs_gsize, lpid);
158  else
159  _gcr_buffs[i].clear();
160  }
161 }

void init ( COM::Attribute *  att,
const COM::Attribute *  my_pconn = NULL 
)

Initialize the communication buffers.

att is a pointer to the attribute. my_pconn stores the pane connectivity.

Definition at line 56 of file Pane_communicator.C.

References _appl_window, _my_pconn_id, _panes, COM_assertion, COM_assertion_msg, COM_PCONN, i, and init().

57  {
58  // Note that the attribute must be on the attribute window.
59  COM_assertion( att->window() == _appl_window &&
60  ( my_pconn ==NULL || my_pconn->window() == _appl_window));
61 
62  if(my_pconn){
63  _my_pconn_id = my_pconn->id();
64  }
 65  else{
 66  _my_pconn_id = COM_PCONN;
 67  COM_assertion_msg( att && (att->is_nodal() || att->is_elemental()),
68  "Pane_communicator must be initialized with a nodal or elemental attribute");
69  }
70 
71  int att_id = att->id();
72  int local_npanes = _panes.size();
73 
74  void** pointers = new void*[local_npanes];
75  int *sizes = new int[local_npanes];
76  int *strides = new int[local_npanes];
77 
78  for ( int i=0; i < local_npanes; ++i) {
79  COM::Attribute *attribute = _panes[i]->attribute(att_id);
80  pointers[i] = attribute->pointer();
81  sizes[i] = attribute->size_of_real_items();
82  strides[i] = attribute->stride();
83  }
84  init(pointers, att->data_type(), att->size_of_components(), sizes, strides);
85  delete[] pointers; pointers = NULL;
86  delete[] sizes; sizes=NULL;
87  delete[] strides; strides = NULL;
88 }

void init_pane_comm_buffers ( std::vector< Pane_comm_buffers > &  pcb,
const int *  ptr,
int &  index,
const int  n_items,
const int  lpid 
)
protected

Initialize a Pane_comm_buffers for ghost information.

Definition at line 168 of file Pane_communicator.C.

References _total_npanes, COM_assertion_msg, Pane_communicator::Pane_comm_buffers::index, j, lpaneid(), owner_rank(), Pane_communicator::Pane_comm_buffers::rank, and Pane_communicator::Pane_comm_buffers::tag.

Referenced by init().

170  {
171  if ( n_items == 0) return;
172 
173  // Determine the number of communicating panes
174  int count=0, np= ptr[index];
175  int block_size = 1;
176 
177  for (int j=0; j<np; block_size+=ptr[index+block_size+1]+2, ++j) {
178  if (owner_rank( ptr[index+block_size]) >=0) ++count;
179  }
180  COM_assertion_msg(block_size+index <= n_items, "Out of pconn bounds.");
181  ++index;
182 
183  pcbv.resize( count);
184 
185  int qid=0, lqid=0;
186  // Loop through communicating panes for shared nodes.
187  for (int p=0, j=0; p<np; ++p, index+=ptr[index+1]+2) {
188  if (owner_rank( qid=ptr[index]) <0) continue;
189 
190  // Define a unique tag to be used by MPI_Isend and MPI_Irecv.
191  lqid=lpaneid(qid);
192  Pane_comm_buffers &pcb = pcbv[j++];
193  pcb.rank = owner_rank( qid);
194  // Numbers have to be positive, try to avoid conflicting tags with other apps.
195  // assuming other apps are low, 100 is a decent guess.
196  pcb.tag = 100 + ((lpid>lqid) ?
197  lpid*_total_npanes+lqid : lqid*_total_npanes+lpid);
198  pcb.tag = pcb.tag%32768;
199  pcb.index = index;
200  }
201  COM_assertion_msg( index <= n_items, "Out of bound of pconn");
202 }
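
The pconn layout walked by this loop can be shown with a small standalone example: the first entry of a block is the number of communicating panes, and each pane record holds the pane ID, the number of shared items, and then the item IDs. The values below are made up.

#include <cstdio>

int main()
{
  // One pconn block: 2 communicating panes.
  // Pane 12 shares 3 nodes {1,4,7}; pane 30 shares 2 nodes {2,9}.
  const int pconn[] = { 2,  12, 3, 1, 4, 7,  30, 2, 2, 9 };

  int index = 0;
  int np = pconn[index++];                  // number of communicating panes
  for ( int p = 0; p < np; ++p) {
    int qid = pconn[index];                 // ID of the communicating pane
    int cnt = pconn[index+1];               // number of shared items
    std::printf( "pane %d shares %d items:", qid, cnt);
    for ( int k = 0; k < cnt; ++k)
      std::printf( " %d", pconn[index+2+k]);
    std::printf( "\n");
    index += cnt + 2;                       // advance to the next pane record
  }
  return 0;
}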

int lpaneid ( const int  pane_id) const
inline

For a given pane, obtain an internal pane ID which is unique and contiguous across processes.

Definition at line 112 of file Pane_communicator.h.

References _lpaneid_map, _total_npanes, and COM_assertion.

Referenced by init(), and init_pane_comm_buffers().

112  {
114  return _lpaneid_map.find(pane_id)->second;
115  }

MPI_Comm mpi_comm ( ) const
inline

Obtain the MPI communicator for the object.

Definition at line 97 of file Pane_communicator.h.

References _comm.

97 { return _comm; }
Pane_communicator& operator= ( const Pane_communicator & )
private
int owner_rank ( const int  pane_id) const
inline

Obtain the process rank of a given pane.

If the pane does not exist, it will return -1.

Definition at line 104 of file Pane_communicator.h.

References _appl_window.

Referenced by init_pane_comm_buffers(), and reduce_average_on_shared_nodes().

105  { return _appl_window->owner_rank( pane_id); }

std::vector<COM::Pane*>& panes ( )
inline

Obtains all the local panes.

Definition at line 100 of file Pane_communicator.h.

References _panes.

100 { return _panes; }
void reduce_average_on_shared_nodes ( )

Reduce to the average of values using locally cached values of the shared nodes, assuming begin_update_shared_nodes() has been called.

Definition at line 758 of file Pane_communicator.C.

References _my_pconn_id, _ncomp, _panes, _ptrs, _strds, _type, COM_assertion_msg, COM_CHAR, COM_CHARACTER, COM_DOUBLE, COM_DOUBLE_PRECISION, COM_FLOAT, COM_get_sizeof(), COM_INT, COM_INTEGER, COM_REAL, i, j, k, MPI_SUM, owner_rank(), and reduce_on_shared_nodes().

758  {
 759  // Sum the values from all shared nodes
 760  reduce_on_shared_nodes( MPI_SUM);
 761 
762  // A map from shared nodes' local ids to multiplicity
763  std::map<int,int> nodes_to_mult;
764  std::map<int,int>::iterator nodes_to_mult_pos;
765  int num_panes = (int)_panes.size();
766  //loop through all local panes, constructing multiplicity of nodes
767  //then iterate through nodes, dividing by multiplicity
768  for(int i = 0; i < num_panes; ++i){
769  const COM::Attribute *pconn = _panes[i]->attribute(_my_pconn_id);
770  const int *vs = (const int*)pconn->pointer();
771 
772  // make sure we only consider shared nodes, not ghost nodes
773  int vs_size=pconn->size_of_real_items();
774  int strd_bytes = COM_get_sizeof( _type, _strds[i]);
775  // Shift the pointer by -1 because node IDs in pconn start from 1
776  char *ptr = ((char*)_ptrs[i])-strd_bytes;
777 
778  // Determine the multiplicity of shared nodes. Multiplicity is the
779  // number of panes which own the node, so at least 2 for shared nodes.
780  nodes_to_mult.clear();
781  for(int j=1; j<vs_size; j+= vs[j+1]+2){
782  // If this pane hasn't been inherited, skip it
783  if (owner_rank( vs[j])>= 0){
784  // Loop through nodes, ++mult if already seen. Else mult=2
785  for(int k=0; k< vs[j+1]; ++k){
786  nodes_to_mult_pos = nodes_to_mult.find(vs[j+k+2]);
787  if (nodes_to_mult_pos != nodes_to_mult.end())
788  nodes_to_mult_pos->second++;
789  else
790  nodes_to_mult.insert(std::make_pair(vs[j+k+2],2));
791  }
792  }
793  }
794 
795  // Loop through the list of shared nodes, dividing data by their multiplicity
796  // Remember the assumption that stride == ncomp
797  int shared_id,mult;
798  switch ( _type) {
799  case COM_CHAR:
800  case COM_CHARACTER:
801  for(nodes_to_mult_pos=nodes_to_mult.begin();
802  nodes_to_mult_pos!=nodes_to_mult.end();
803  ++nodes_to_mult_pos){
804  shared_id = nodes_to_mult_pos->first;
805 
806  mult = nodes_to_mult_pos->second;
807  for(int k = 0; k < _ncomp; ++k){
808  ((char*)&ptr[ strd_bytes*shared_id])[k] /= mult;
809  }
810  }
811  break;
812  case COM_INT:
813  case COM_INTEGER:
814  for(nodes_to_mult_pos=nodes_to_mult.begin();
815  nodes_to_mult_pos!=nodes_to_mult.end();
816  ++nodes_to_mult_pos){
817  shared_id = nodes_to_mult_pos->first;
818 
819  mult = nodes_to_mult_pos->second;
820  for(int k = 0; k < _ncomp; ++k){
821  ((int*)&ptr[ strd_bytes*shared_id])[k] /= mult;
822  }
823  }
824  break;
825  case COM_FLOAT:
826  case COM_REAL:
827  for(nodes_to_mult_pos=nodes_to_mult.begin();
828  nodes_to_mult_pos!=nodes_to_mult.end();
829  ++nodes_to_mult_pos){
830  shared_id = nodes_to_mult_pos->first;
831  mult = nodes_to_mult_pos->second;
832  for(int k = 0; k < _ncomp; ++k){
833  ((float*)&ptr[ strd_bytes*shared_id])[k] /= (float)mult;
834  }
835  }
836  break;
 837  case COM_DOUBLE:
 838  case COM_DOUBLE_PRECISION:
 839  for(nodes_to_mult_pos=nodes_to_mult.begin();
840  nodes_to_mult_pos!=nodes_to_mult.end();
841  ++nodes_to_mult_pos){
842  shared_id = nodes_to_mult_pos->first;
843  mult = nodes_to_mult_pos->second;
844  for(int k = 0; k < _ncomp; ++k){
845  ((double*)&ptr[ strd_bytes*shared_id])[k] /= (double)mult;
846  }
847  }
848  break;
849  default:
850  COM_assertion_msg(false, "Unknown data type"); // Not supported
851  }
852  }
853 }
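
The multiplicity bookkeeping above can be illustrated with a self-contained sketch using made-up values: a shared node is given multiplicity 2 when it is first encountered, every further communicating pane adds 1, and the summed value is then divided by that count.

#include <cstdio>
#include <map>
#include <vector>

int main()
{
  // Summed nodal values after a shared-node sum (1-based node IDs, made up).
  std::vector<double> val( 6, 0.0);
  val[1] = 6.0; val[3] = 9.0;

  // Shared-node lists of this pane from two pconn pane records (made up):
  // it shares nodes {1,3} with one pane and node {3} with another.
  const int listA[] = { 1, 3 };
  const int listB[] = { 3 };

  // Node -> multiplicity: 2 when first seen, +1 for every further pane.
  std::map<int,int> mult;
  for ( int k = 0; k < 2; ++k) {
    if ( mult.find( listA[k]) != mult.end()) ++mult[listA[k]];
    else mult[listA[k]] = 2;
  }
  for ( int k = 0; k < 1; ++k) {
    if ( mult.find( listB[k]) != mult.end()) ++mult[listB[k]];
    else mult[listB[k]] = 2;
  }

  // Divide each shared node's summed value by its multiplicity.
  for ( std::map<int,int>::iterator it = mult.begin(); it != mult.end(); ++it)
    val[it->first] /= it->second;

  std::printf( "node 1 -> %g, node 3 -> %g\n", val[1], val[3]);  // 3 and 3
  return 0;
}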

void reduce_diff_on_shared_nodes ( )

Compute difference of non-zero values of each shared node, assuming there are at most two non-zero values.

Definition at line 623 of file Pane_communicator.C.

References _comm, _my_pconn_id, _ncomp, _ncomp_bytes, _panes, _ptrs, _reqs_indices, _reqs_recv, _shr_buffs, _strds, _type, COM_assertion, COM_assertion_msg, COM_CHAR, COM_CHARACTER, COM_DOUBLE, COM_DOUBLE_PRECISION, COM_FLOAT, COM_get_sizeof(), COM_INT, COM_INTEGER, COM_REAL, i, Pane_communicator::Pane_comm_buffers::inbuf, Pane_communicator::Pane_comm_buffers::index, j, k, nk, and reduce_diff().

623  {
624  while ( !_reqs_recv.empty()) {
625  int index;
626 
627  // Wait for any receive request to finish and then process the request
628  if ( _comm!=MPI_COMM_NULL) {
629  MPI_Status status;
630  int ierr = MPI_Waitany( _reqs_recv.size(), &_reqs_recv[0],
631  &index, &status);
632  COM_assertion( ierr == 0);
633  }
634  else
635  index = _reqs_recv.size()-1;
636 
637  // Obtain the indices in _shr_buffs for the receive request
638  int i=_reqs_indices[index].first, j=(_reqs_indices[index].second>>4);
639 
640  int strd_bytes = COM_get_sizeof( _type, _strds[i]);
641  // Shift the pointer by -1 because node IDs in pconn start from 1
642  char *ptr = ((char*)_ptrs[i])-strd_bytes;
643 
644  const COM::Attribute *pconn = _panes[i]->attribute(_my_pconn_id);
645  const int *vs = (const int*)pconn->pointer();
646 
647  Pane_comm_buffers &pcb = _shr_buffs[i][j];
648  COM_assertion( int(pcb.inbuf.size()) == _ncomp_bytes*vs[ pcb.index+1]);
649 
650  switch ( _type) {
651  case COM_CHAR:
652  case COM_CHARACTER:
653  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
654  reduce_diff( (char*)&pcb.inbuf[ _ncomp_bytes*k],
655  (char*)&ptr[ strd_bytes*vs[ to]], _ncomp);
656  break;
657  case COM_INT:
658  case COM_INTEGER:
659  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
660  reduce_diff( (int*)&pcb.inbuf[ _ncomp_bytes*k],
661  (int*)&ptr[ strd_bytes*vs[ to]], _ncomp);
662  break;
663  case COM_FLOAT:
664  case COM_REAL:
665  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
666  reduce_diff( (float*)&pcb.inbuf[ _ncomp_bytes*k],
667  (float*)&ptr[ strd_bytes*vs[ to]], _ncomp);
668  break;
 669  case COM_DOUBLE:
 670  case COM_DOUBLE_PRECISION:
 671  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
672  reduce_diff( (double*)&pcb.inbuf[ _ncomp_bytes*k],
673  (double*)&ptr[ strd_bytes*vs[ to]], _ncomp);
674  break;
675  default:
676  COM_assertion_msg(false, "Unknown data type"); // Not supported
677  }
678 
679  // Remove the received message from the list
680  _reqs_recv.erase(_reqs_recv.begin()+index);
681  _reqs_indices.erase(_reqs_indices.begin()+index);
682  }
683 }

void reduce_maxabs_on_shared_nodes ( )

Reduce to the value with the maximum absolute value using locally cached values of shared nodes.

Definition at line 497 of file Pane_communicator.C.

References _comm, _my_pconn_id, _ncomp, _ncomp_bytes, _panes, _ptrs, _reqs_indices, _reqs_recv, _shr_buffs, _strds, _type, COM_assertion, COM_assertion_msg, COM_CHAR, COM_CHARACTER, COM_DOUBLE, COM_DOUBLE_PRECISION, COM_FLOAT, COM_get_sizeof(), COM_INT, COM_INTEGER, COM_REAL, i, Pane_communicator::Pane_comm_buffers::inbuf, Pane_communicator::Pane_comm_buffers::index, j, k, nk, and reduce_maxabs().

497  {
498  while ( !_reqs_recv.empty()) {
499  int index;
500 
501  // Wait for any receive request to finish and then process the request
502  if ( _comm!=MPI_COMM_NULL) {
503  MPI_Status status;
504  int ierr = MPI_Waitany( _reqs_recv.size(), &_reqs_recv[0],
505  &index, &status);
506  COM_assertion( ierr == 0);
507  }
508  else
509  index = _reqs_recv.size()-1;
510 
511  // Obtain the indices in _shr_buffs for the receive request
512  int i=_reqs_indices[index].first, j=(_reqs_indices[index].second>>4);
513 
514  int strd_bytes = COM_get_sizeof( _type, _strds[i]);
515  // Shift the pointer by -1 because node IDs in pconn start from 1
516  char *ptr = ((char*)_ptrs[i])-strd_bytes;
517 
518  const COM::Attribute *pconn = _panes[i]->attribute(_my_pconn_id);
519  const int *vs = (const int*)pconn->pointer();
520 
521  Pane_comm_buffers &pcb = _shr_buffs[i][j];
522  COM_assertion( int(pcb.inbuf.size()) == _ncomp_bytes*vs[ pcb.index+1]);
523 
524  switch ( _type) {
525  case COM_CHAR:
526  case COM_CHARACTER:
527  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
528  reduce_maxabs( (char*)&pcb.inbuf[ _ncomp_bytes*k],
529  (char*)&ptr[ strd_bytes*vs[ to]], _ncomp);
530  break;
531  case COM_INT:
532  case COM_INTEGER:
533  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
534  reduce_maxabs( (int*)&pcb.inbuf[ _ncomp_bytes*k],
535  (int*)&ptr[ strd_bytes*vs[ to]], _ncomp);
536  break;
537  case COM_FLOAT:
538  case COM_REAL:
539  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
540  reduce_maxabs( (float*)&pcb.inbuf[ _ncomp_bytes*k],
541  (float*)&ptr[ strd_bytes*vs[ to]], _ncomp);
542  break;
 543  case COM_DOUBLE:
 544  case COM_DOUBLE_PRECISION:
 545  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
546  reduce_maxabs( (double*)&pcb.inbuf[ _ncomp_bytes*k],
547  (double*)&ptr[ strd_bytes*vs[ to]], _ncomp);
548  break;
549  default:
550  COM_assertion_msg(false, "Unknown data type"); // Not supported
551  }
552 
553  // Remove the received message from the list
554  _reqs_recv.erase(_reqs_recv.begin()+index);
555  _reqs_indices.erase(_reqs_indices.begin()+index);
556  }
557 }

void reduce_minabs_on_shared_nodes ( )

Reduce to the value with the minimum absolute value using locally cached values of shared nodes.

Definition at line 560 of file Pane_communicator.C.

References _comm, _my_pconn_id, _ncomp, _ncomp_bytes, _panes, _ptrs, _reqs_indices, _reqs_recv, _shr_buffs, _strds, _type, COM_assertion, COM_assertion_msg, COM_CHAR, COM_CHARACTER, COM_DOUBLE, COM_DOUBLE_PRECISION, COM_FLOAT, COM_get_sizeof(), COM_INT, COM_INTEGER, COM_REAL, i, Pane_communicator::Pane_comm_buffers::inbuf, Pane_communicator::Pane_comm_buffers::index, j, k, nk, and reduce_minabs().

560  {
561  while ( !_reqs_recv.empty()) {
562  int index;
563 
564  // Wait for any receive request to finish and then process the request
565  if ( _comm!=MPI_COMM_NULL) {
566  MPI_Status status;
567  int ierr = MPI_Waitany( _reqs_recv.size(), &_reqs_recv[0],
568  &index, &status);
569  COM_assertion( ierr == 0);
570  }
571  else
572  index = _reqs_recv.size()-1;
573 
574  // Obtain the indices in _shr_buffs for the receive request
575  int i=_reqs_indices[index].first, j=(_reqs_indices[index].second>>4);
576 
577  int strd_bytes = COM_get_sizeof( _type, _strds[i]);
578  // Shift the pointer by -1 because node IDs in pconn start from 1
579  char *ptr = ((char*)_ptrs[i])-strd_bytes;
580 
581  const COM::Attribute *pconn = _panes[i]->attribute(_my_pconn_id);
582  const int *vs = (const int*)pconn->pointer();
583 
584  Pane_comm_buffers &pcb = _shr_buffs[i][j];
585  COM_assertion( int(pcb.inbuf.size()) == _ncomp_bytes*vs[ pcb.index+1]);
586 
587  switch ( _type) {
588  case COM_CHAR:
589  case COM_CHARACTER:
590  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
591  reduce_minabs( (char*)&pcb.inbuf[ _ncomp_bytes*k],
592  (char*)&ptr[ strd_bytes*vs[ to]], _ncomp);
593  break;
594  case COM_INT:
595  case COM_INTEGER:
596  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
597  reduce_minabs( (int*)&pcb.inbuf[ _ncomp_bytes*k],
598  (int*)&ptr[ strd_bytes*vs[ to]], _ncomp);
599  break;
600  case COM_FLOAT:
601  case COM_REAL:
602  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
603  reduce_minabs( (float*)&pcb.inbuf[ _ncomp_bytes*k],
604  (float*)&ptr[ strd_bytes*vs[ to]], _ncomp);
605  break;
 606  case COM_DOUBLE:
 607  case COM_DOUBLE_PRECISION:
 608  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
609  reduce_minabs( (double*)&pcb.inbuf[ _ncomp_bytes*k],
610  (double*)&ptr[ strd_bytes*vs[ to]], _ncomp);
611  break;
612  default:
613  COM_assertion_msg(false, "Unknown data type"); // Not supported
614  }
615 
616  // Remove the received message from the list
617  _reqs_recv.erase(_reqs_recv.begin()+index);
618  _reqs_indices.erase(_reqs_indices.begin()+index);
619  }
620 }

void reduce_on_shared_nodes ( MPI_Op  op)

Perform a reduction operation using locally cached values of the shared nodes, assuming begin_update_shared_nodes() has been called.

Definition at line 401 of file Pane_communicator.C.

References _comm, _my_pconn_id, _ncomp, _ncomp_bytes, _panes, _ptrs, _reqs_indices, _reqs_recv, _shr_buffs, _strds, _type, COM_assertion, COM_assertion_msg, COM_CHAR, COM_CHARACTER, COM_DOUBLE, COM_DOUBLE_PRECISION, COM_FLOAT, COM_get_sizeof(), COM_INT, COM_INTEGER, COM_REAL, i, Pane_communicator::Pane_comm_buffers::inbuf, Pane_communicator::Pane_comm_buffers::index, j, k, nk, reduce_int(), and reduce_real().

Referenced by reduce_average_on_shared_nodes().

401  {
402  while ( !_reqs_recv.empty()) {
403  int index;
404 
405  // Wait for any receive request to finish and then process the request
406  if ( _comm!=MPI_COMM_NULL) {
407  MPI_Status status;
408  int ierr = MPI_Waitany( _reqs_recv.size(), &_reqs_recv[0],
409  &index, &status);
410  COM_assertion( ierr == 0);
411  }
412  else
413  index = _reqs_recv.size()-1;
414 
415  // Obtain the indices in _shr_buffs for the receive request
416  int i=_reqs_indices[index].first, j=(_reqs_indices[index].second>>4);
417 
418  int strd_bytes = COM_get_sizeof( _type, _strds[i]);
419  // Shift the pointer by -1 because node IDs in pconn start from 1
420  char *ptr = ((char*)_ptrs[i])-strd_bytes;
421 
422  const COM::Attribute *pconn = _panes[i]->attribute(_my_pconn_id);
423  const int *vs = (const int*)pconn->pointer();
424 
425  Pane_comm_buffers &pcb = _shr_buffs[i][j];
426  COM_assertion( int(pcb.inbuf.size()) == _ncomp_bytes*vs[ pcb.index+1]);
427 
428  switch ( _type) {
429  case COM_CHAR:
430  case COM_CHARACTER:
431  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
432  reduce_int(op, (char*)&pcb.inbuf[ _ncomp_bytes*k],
433  ((char*)&ptr[strd_bytes*vs[ to]]), _ncomp);
434  break;
435  case COM_INT:
436  case COM_INTEGER:
437  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
438  reduce_int(op, (int*)&pcb.inbuf[ _ncomp_bytes*k],
439  (int*)&ptr[strd_bytes*vs[ to]], _ncomp);
440  break;
441  case COM_FLOAT:
442  case COM_REAL:
443  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
444  reduce_real( op, (float*)&pcb.inbuf[ _ncomp_bytes*k],
445  (float*)&ptr[ strd_bytes*vs[ to]], _ncomp);
446  break;
 447  case COM_DOUBLE:
 448  case COM_DOUBLE_PRECISION:
 449  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
450  reduce_real( op, (double*)&pcb.inbuf[ _ncomp_bytes*k],
451  (double*)&ptr[strd_bytes*vs[ to]], _ncomp);
452  break;
453  default:
454  COM_assertion_msg(false, "Unknown data type"); // Not supported
455  }
456 
457  // Remove the received message from the list
458  _reqs_recv.erase(_reqs_recv.begin()+index);
459  _reqs_indices.erase(_reqs_indices.begin()+index);
460  }
461 }
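
A minimal calling sequence, shown only as a sketch; pc is a Pane_communicator that has already been initialized on the attribute to be reduced. It complements the overlap example given under begin_update_shared_nodes().

pc.begin_update_shared_nodes();        // exchange the locally cached shared-node values
pc.reduce_on_shared_nodes( MPI_SUM);   // fold the received values into the local copies
pc.end_update_shared_nodes();          // wait for the outstanding sends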

int total_npanes ( ) const
inline

Obtain the total number of panes on all processes.

Definition at line 108 of file Pane_communicator.h.

References _total_npanes.

108 { return _total_npanes; }
void update_ghost_values ( )

Copies the received values into ghost nodes or ghost cells, assuming begin_update_ghost_nodes() or begin_update_ghost_cells() has been called.

Definition at line 693 of file Pane_communicator.C.

References _comm, _gcr_buffs, _gnr_buffs, _my_pconn_id, _ncomp, _ncomp_bytes, _panes, _ptrs, _reqs_indices, _reqs_recv, _strds, _type, COM_assertion, COM_assertion_msg, COM_CHAR, COM_CHARACTER, COM_DOUBLE, COM_DOUBLE_PRECISION, COM_FLOAT, COM_get_sizeof(), COM_INT, COM_INTEGER, COM_REAL, GNR, i, Pane_communicator::Pane_comm_buffers::inbuf, Pane_communicator::Pane_comm_buffers::index, j, k, nk, and update_value().

Referenced by end_update_ghost_cells(), and end_update_ghost_nodes().

693  {
694  while ( !_reqs_recv.empty()) {
695  int index;
696 
697  // Wait for any receive request to finish and then process the request
698  if ( _comm!=MPI_COMM_NULL) {
699  MPI_Status status;
700  int ierr = MPI_Waitany( _reqs_recv.size(), &_reqs_recv[0],
701  &index, &status);
702  COM_assertion( ierr == 0);
703  }
704  else
705  index = _reqs_recv.size()-1;
706 
707  // Obtain the indices in _shr_buffs for the receive request
708  int i=_reqs_indices[index].first, j=(_reqs_indices[index].second>>4);
709 
710  int btype = (_reqs_indices[index].second)&15;
711 
712  int strd_bytes = COM_get_sizeof( _type, _strds[i]);
713  // Shift the pointer by -1 because node IDs in pconn start from 1
714  char *ptr = ((char*)_ptrs[i])-strd_bytes;
715 
716  const COM::Attribute *pconn = _panes[i]->attribute(_my_pconn_id);
717  const int *vs = (const int*)pconn->pointer();
718 
719  Pane_comm_buffers &pcb = (btype==GNR) ? _gnr_buffs[i][j] : _gcr_buffs[i][j] ;
720  COM_assertion( int(pcb.inbuf.size()) == _ncomp_bytes*vs[ pcb.index+1]);
721 
722  switch ( _type) {
723  case COM_CHAR:
724  case COM_CHARACTER:
725  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
726  update_value( (char*)&pcb.inbuf[ _ncomp_bytes*k],
727  (char*)&ptr[ strd_bytes*vs[ to]], _ncomp);
728  break;
729  case COM_INT:
730  case COM_INTEGER:
731  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
732  update_value( (int*)&pcb.inbuf[ _ncomp_bytes*k],
733  (int*)&ptr[ strd_bytes*vs[ to]], _ncomp);
734  break;
735  case COM_FLOAT:
736  case COM_REAL:
737  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
738  update_value( (float*)&pcb.inbuf[ _ncomp_bytes*k],
739  (float*)&ptr[ strd_bytes*vs[ to]], _ncomp);
740  break;
 741  case COM_DOUBLE:
 742  case COM_DOUBLE_PRECISION:
 743  for ( int k=0, nk=vs[pcb.index+1], to=pcb.index+2; k<nk; ++k, ++to)
744  update_value( (double*)&pcb.inbuf[ _ncomp_bytes*k],
745  (double*)&ptr[ strd_bytes*vs[ to]], _ncomp);
746  break;
747  default:
748  COM_assertion_msg(false, "Unknown data type"); // Not supported
749  }
750 
751  // Remove the received message from the list
752  _reqs_recv.erase(_reqs_recv.begin()+index);
753  _reqs_indices.erase(_reqs_indices.begin()+index);
754  }
755 }

Member Data Documentation

COM::Window* _appl_window
protected

Reference to the application window.

Definition at line 196 of file Pane_communicator.h.

Referenced by init(), owner_rank(), and Pane_communicator().

std::vector< std::vector< Pane_comm_buffers> > _gcr_buffs
protected

Buffer for ghost cells to receive.

Definition at line 227 of file Pane_communicator.h.

Referenced by begin_update(), init(), and update_ghost_values().

std::vector< std::vector< Pane_comm_buffers> > _gnr_buffs
protected

Buffer for ghost nodes to receive.

Definition at line 223 of file Pane_communicator.h.

Referenced by begin_update(), init(), and update_ghost_values().

std::map<int,int> _lpaneid_map
protected

Mapping from user-defined pane IDs to internal IDs, which are unique and contiguous across all processes and are used to define unique tags for MPI messages.

Definition at line 206 of file Pane_communicator.h.

Referenced by lpaneid(), and Pane_communicator().

std::vector< void*> _ptrs
protected
std::vector< std::vector< Pane_comm_buffers> > _rcs_buffs
protected

Buffer for real cells to send.

Definition at line 225 of file Pane_communicator.h.

Referenced by begin_update(), and init().

std::vector<std::pair<int,int> > _reqs_indices
protected

The indices in buffs for each pending nonblocking receive request.

Definition at line 231 of file Pane_communicator.h.

Referenced by begin_update(), reduce_diff_on_shared_nodes(), reduce_maxabs_on_shared_nodes(), reduce_minabs_on_shared_nodes(), reduce_on_shared_nodes(), and update_ghost_values().

std::vector<MPI_Request> _reqs_send
protected

Arrays of pending nonblocking MPI requests. Same format as _shr_buffs.

Definition at line 229 of file Pane_communicator.h.

Referenced by begin_update(), and end_update().

std::vector< std::vector< Pane_comm_buffers> > _rns_buffs
protected

Buffer for real nodes to send.

Definition at line 221 of file Pane_communicator.h.

Referenced by begin_update(), and init().

std::vector< std::vector< Pane_comm_buffers> > _shr_buffs
protected

Shared node pane communication buffers.

The outer vector corresponds to the local panes; each inner vector corresponds to the panes communicating with that local pane.

Definition at line 219 of file Pane_communicator.h.

Referenced by begin_update(), init(), reduce_diff_on_shared_nodes(), reduce_maxabs_on_shared_nodes(), reduce_minabs_on_shared_nodes(), and reduce_on_shared_nodes().

std::vector< int> _sizes
protected

The sizes of the arrays for all local panes.

Definition at line 213 of file Pane_communicator.h.

Referenced by begin_update(), and init().

std::vector< int> _strds
protected
int _total_npanes
protected

The total number of panes on all processes.

Definition at line 202 of file Pane_communicator.h.

Referenced by begin_update(), init_pane_comm_buffers(), lpaneid(), Pane_communicator(), and total_npanes().

int _type
protected

The base data type, number of components, and the number of bytes of all components for the data to be communicated.

Definition at line 209 of file Pane_communicator.h.

Referenced by begin_update(), init(), reduce_average_on_shared_nodes(), reduce_diff_on_shared_nodes(), reduce_maxabs_on_shared_nodes(), reduce_minabs_on_shared_nodes(), reduce_on_shared_nodes(), and update_ghost_values().


The documentation for this class was generated from the following files:

Pane_communicator.h
Pane_communicator.C