
Segmentation fault using H5Dset_extent in parallel

fffred
Hello,

I receive a segmentation fault when I try to use H5Dset_extent with more than one MPI process.

Here is a test file in C++:

----------------------------------
#include <mpi.h>
#include <iostream>
#include <csignal>
#include "hdf5.h"

int main (int argc, char* argv[])
{
    int mpi_provided;
    MPI_Init_thread( &argc, &argv, MPI_THREAD_MULTIPLE, &mpi_provided );
   
    // Create HDF5 file
    hid_t file_access = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(file_access, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file_id = H5Fcreate( "test.h5", H5F_ACC_TRUNC, H5P_DEFAULT, file_access);
    H5Pclose(file_access);
    std::cout << "file created" <<std::endl;
   
    // Define initial and maximum size
    hsize_t maxDims[2] = {H5S_UNLIMITED, (hsize_t) 10};
    hsize_t dims[2] = {0, (hsize_t) 10};
    hid_t file_space = H5Screate_simple(2, dims, maxDims);
   
    // Define chunks
    hid_t dataset_create = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_layout(dataset_create, H5D_CHUNKED);
    H5Pset_alloc_time(dataset_create, H5D_ALLOC_TIME_EARLY); // necessary for collective dump
    hsize_t chunk_dims[2] = {1, (hsize_t) 10};
    H5Pset_chunk(dataset_create, 2, chunk_dims);
   
    // Create the dataset
    hid_t dataset = H5Dcreate(file_id, "A", H5T_NATIVE_DOUBLE, file_space, H5P_DEFAULT, dataset_create, H5P_DEFAULT);
    H5Pclose(dataset_create);
    H5Sclose(file_space);
    std::cout << "dataset created" <<std::endl;
   
    // Extend the dataset
    dims[0]++;
    H5Dset_extent(dataset, &dims[0]);
    std::cout << "dataset extended" <<std::endl;
   
    H5Dclose(dataset);
    H5Fclose(file_id);
    std::cout << "closed" <<std::endl;

    MPI_Finalize();
}
----------------------------


MPI is OpenMPI 1.10.3 (built with MPI_THREAD_MULTIPLE support), compiled with gcc 5.4.
HDF5 is version 1.10.0, compiled with the same compiler.
I run OS X 10.11.6.
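
In case it helps with debugging, a small check along these lines confirms which HDF5 and MPI libraries a test actually picks up at run time (H5get_libversion and MPI_Get_library_version are standard calls; the exact output format will vary between installations):

----------------------------------
#include <mpi.h>
#include <iostream>
#include "hdf5.h"

int main (int argc, char* argv[])
{
    MPI_Init( &argc, &argv );

    int rank;
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    // HDF5 version actually linked at run time
    unsigned maj, min, rel;
    H5get_libversion( &maj, &min, &rel );

    // MPI library version string (MPI-3 call, supported by OpenMPI 1.10)
    char mpi_version[MPI_MAX_LIBRARY_VERSION_STRING];
    int len;
    MPI_Get_library_version( mpi_version, &len );

    if (rank == 0)
        std::cout << "HDF5 " << maj << "." << min << "." << rel
                  << " / " << mpi_version << std::endl;

    MPI_Finalize();
}
----------------------------------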

Thank you for your help!

Fred

Re: Segmentation fault using H5Dset_extent in parallel

fffred
Let me add my latest tests:

- Still does not work with gcc 4.8
- Still does not work with OpenMPI 1.10.2
- DOES WORK with HDF5 1.8.16!

This seems to point to a bug in HDF5 1.10.0.
However, it might also be an intentional change in behavior that I am not aware of.
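
To rule out a mix-up between the two installs (headers from one release, library from the other), a small sanity check like the one below can be built in each environment. H5check() and H5_VERS_INFO come from H5public.h; H5check() aborts with an error if the headers used at compile time and the linked library do not match:

----------------------------------
#include <iostream>
#include "hdf5.h"

int main ()
{
    // Aborts with a version-mismatch message if hdf5.h and the linked
    // libhdf5 come from different releases.
    H5check();

    // Version string of the headers this check was compiled against,
    // e.g. "HDF5 library version: 1.10.0".
    std::cout << H5_VERS_INFO << std::endl;
    return 0;
}
----------------------------------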

Please advise.
Thank you
Fred


2016-07-27 15:49 GMT+02:00 Frederic Perez <[hidden email]>:

> Hello,
>
> I receive a segmentation fault when I try to use H5Dset_extent with more than one MPI process, in collective mode.
>
> Here is a test file in C++:
>
> ----------------------------------
> #include <mpi.h>
> #include <iostream>
> #include <csignal>
> #include "hdf5.h"
>
> int main (int argc, char* argv[])
> {
>     // Check MPI with THREAD_MULTIPLE
>     int mpi_provided;
>     MPI_Init_thread( &argc, &argv, MPI_THREAD_MULTIPLE, &mpi_provided );
>     if (mpi_provided != MPI_THREAD_MULTIPLE) {
>         std::cout << "No MPI_THREAD_MULTIPLE"<<std::endl;
>         MPI_Finalize();
>         raise(SIGSEGV);
>     }
>
>     // Get MPI params
>     int sz, rk;
>     MPI_Comm_size( MPI_COMM_WORLD, &sz );
>     MPI_Comm_rank( MPI_COMM_WORLD, &rk );
>     std::cout << "MPI with size "<<sz<<std::endl;
>
>     // Define the collective transfer
>     hid_t transfer = H5Pcreate(H5P_DATASET_XFER);
>     H5Pset_dxpl_mpio( transfer, H5FD_MPIO_COLLECTIVE);
>
>     // Create HDF5 file
>     hid_t file_access = H5Pcreate(H5P_FILE_ACCESS);
>     H5Pset_fapl_mpio(file_access, MPI_COMM_WORLD, MPI_INFO_NULL);
>     hid_t file_id = H5Fcreate( "test.h5", H5F_ACC_TRUNC, H5P_DEFAULT, file_access);
>     H5Pclose(file_access);
>     std::cout << "file created" <<std::endl;
>
>     // Define initial and maximum size
>     hsize_t maxDims[2] = {H5S_UNLIMITED, (hsize_t) 10};
>     hsize_t dims[2] = {0, (hsize_t) 10};
>     hid_t file_space = H5Screate_simple(2, dims, maxDims);
>
>     // Define chunks
>     hid_t dataset_create = H5Pcreate(H5P_DATASET_CREATE);
>     H5Pset_layout(dataset_create, H5D_CHUNKED);
>     H5Pset_alloc_time(dataset_create, H5D_ALLOC_TIME_EARLY); // necessary for collective dump
>     hsize_t chunk_dims[2] = {1, (hsize_t) 10};
>     H5Pset_chunk(dataset_create, 2, chunk_dims);
>
>     // Create the dataset
>     hid_t dataset = H5Dcreate(file_id, "A", H5T_NATIVE_DOUBLE, file_space, H5P_DEFAULT, dataset_create, H5P_DEFAULT);
>     H5Pclose(dataset_create);
>     H5Sclose(file_space);
>     std::cout << "dataset created" <<std::endl;
>
>     // Extend the dataset
>     dims[0]++;
>     H5Dset_extent(dataset, &dims[0]);
>     std::cout << "dataset extended" <<std::endl;
>
>     H5Dclose(dataset);
>     H5Fclose(file_id);
>     std::cout << "closed" <<std::endl;
>
>     MPI_Finalize();
> }
> ----------------------------
>
>
> MPI is openmpi 1.10.3 (with option MPI_THREAD_MULTIPLE), compiled with gcc 5.4.
>
> HDF5 version 1.10.0 compiled with the same.
>
> Thank you for your help!
>
> Fred
