HDF-related BOFs at SC09


HDF-related BOFs at SC09

Ruth Aydt
There will be two Birds-of-a-Feather (BOF) sessions for HDF5 users at  
the upcoming SC09 conference, to be held in Portland, Oregon from  
November 14th - 20th.

The first BOF, Developing Bioinformatics Applications with BioHDF,
will be led by Geospiza and will discuss the use of HDF5 in the
BioHDF project, a collaborative effort to develop portable, scalable
bioinformatics data storage technologies in HDF5. Future directions of
BioHDF will also be discussed. (Wednesday, 12:15-1:15 pm)

The second BOF, HDF5: State of the Union, will be led by members of  
The HDF Group, and will discuss features currently under development  
in HDF5, answer questions, and gather input for future directions.  
(Thursday, 12:15-1:15 pm)

*****  Will you be attending the HDF5: State of the Union BOF?

If so, and you have questions you'd like addressed, please send them  
to me by Nov 12th.   If you would like us to include a slide on your  
use of HDF5 in the BOF presentation, please send the PPT to me by Nov  
12th.  Time allowing, we'll invite you to stand and introduce yourself  
when your project slide is shown.

Hope to see you at SC09!

-Ruth

------------------------------------------------------------
Ruth Aydt
The HDF Group

aydt at hdfgroup.org      (217)265-7837
------------------------------------------------------------






File Family in HDF Java

Aaron Kagawa
Greetings,

 

Has anyone created an HDF Java program that uses the file family driver in
either the JNI or the object package?

 

Thanks, Aaron Kagawa



File Family in HDF Java

Aaron Kagawa
I was able to figure out how to write the file family with the JNI code. I'm
running stress tests with it now to see if it works for us.  However, I
wasn't able to figure out how to use file families with the object layer. Am
I missing something or is it simply not possible?
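For reference, here is a minimal, self-contained sketch of what the family driver
setup looks like through the JNI layer. The H5Pset_fapl_family wrapper and its
(plist, memberSize, memberPlist) signature are assumptions about the
ncsa.hdf.hdf5lib.H5 class in this hdf-java generation, so treat this as a sketch
rather than tested code:

import ncsa.hdf.hdf5lib.H5;
import ncsa.hdf.hdf5lib.HDF5Constants;

public class FamilyDriverSketch {
  public static void main(String[] args) throws Exception {
    long memberSize = 1024L * 1024L * 1024L; // 1 GB per family member
    int fapl = H5.H5Pcreate(HDF5Constants.H5P_FILE_ACCESS);
    // switch the file access property list to the family driver (assumed wrapper)
    H5.H5Pset_fapl_family(fapl, memberSize, HDF5Constants.H5P_DEFAULT);
    // the file name must contain a printf-style %d that numbers the member files
    int fileId = H5.H5Fcreate("TestFamily-%d.h5", HDF5Constants.H5F_ACC_TRUNC,
        HDF5Constants.H5P_DEFAULT, fapl);
    // ... create groups/datasets and write as usual ...
    H5.H5Fclose(fileId);
    H5.H5Pclose(fapl);
  }
}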

 

Thanks, Aaron

 

 




Exception when writing to large files on windows

Aaron Kagawa
Greetings,

In earlier emails I asked about a memory leak in our code. That has been
resolved; thanks for your responses. My next email asked about file
families. The reason for that question is that we are seeing an exception when
running a test on Windows that writes to a single file (no file family) that
eventually becomes very large.

Here is the error:

...
********************************
time: 37610000
        9949259.0 values a second
        currentDims: 376100010000, ensuring dataset size: 376100010000, startDims: 376100000000
        usedMemory: 2.6641311645507812, totalMemory: 32.75, freeMemory: 30.08586883544922, maxMemory: 493.0625
        37610000, CommittedVirtualMemorySize: 99.8828125, getTotalPhysicalMemorySize: 2047.9999990463257, getFreePhysicalMemorySize1477.40625
        2 objects still open:
                id: 16777216, null, H5I_FILE
                id: 87941649, null, H5I_DATASET
ncsa.hdf.hdf5lib.exceptions.HDF5InternalErrorException: System error message
HDF5-DIAG: Error detected in HDF5 (1.8.2) thread 0:
  #000: ..\..\..\src\H5Dio.c line 267 in H5Dwrite(): can't write data
    major: Dataset
    minor: Write failed
  #001: ..\..\..\src\H5Dio.c line 582 in H5D_write(): can't write data
    major: Dataset
    minor: Write failed
  #002: ..\..\..\src\H5Dchunk.c line 1641 in H5D_chunk_write(): unable to read raw data chunk
    major: Low-level I/O
    minor: Read failed
  #003: ..\..\..\src\H5Dchunk.c line 2508 in H5D_chunk_lock(): unable to preempt chunk(s) from cache
    major: Low-level I/O
    minor: Unable to initialize object
  #004: ..\..\..\src\H5Dchunk.c line 2317 in H5D_chunk_cache_prune(): unable to preempt one or more raw data cache entry
    major: Low-level I/O
    minor: Unable to flush data from cache
  #005: ..\..\..\src\H5Dchunk.c line 2183 in H5D_chunk_cache_evict(): cannot flush indexed storage buffer
    major: Low-level I/O
    minor: Write failed
  #006: ..\..\..\src\H5Dchunk.c line 2111 in H5D_chunk_flush_entry(): unable to write raw data to file
    major: Low-level I/O
    minor: Write failed
  #007: ..\..\..\src\H5Fio.c line 159 in H5F_block_write(): file write failed
    major: Low-level I/O
    minor: Write failed
  #008: ..\..\..\src\H5FDint.c line 185 in H5FD_write(): driver write request failed
    major: Virtual File Layer
    minor: Write failed
  #009: ..\..\..\src\H5FDwindows.c line 921 in H5FD_windows_write(): file write failed
    major: Low-level I/O
    minor: Write failed
  #010: ..\..\..\src\H5FDwindows.c line 921 in H5FD_windows_write(): Invalid argument
    major: Internal error (too specific to document in detail)
    minor: System error message
ncsa.hdf.hdf5lib.exceptions.HDF5InternalErrorException: System error message
        at ncsa.hdf.hdf5lib.H5.H5Dwrite_long(Native Method)
        at ncsa.hdf.hdf5lib.H5.H5Dwrite(H5.java:1031)
        at h5.TestHDF5WriteLowLevel.main(TestHDF5WriteLowLevel.java:123)

The error occurs under these conditions:

* run the TestHDF5WriteLowLevel test (complete code below; see the invocation
sketch after the listing) with NUMBER_OF_LOOPS set to 100000000000
* run the test on Windows XP 32-bit, Windows XP 64-bit, or Windows Vista
64-bit (in all cases we use 32-bit Java, since the Java HDF5 release was only
built for 32-bit)
* the failure appears when the HDF5 file grows to over 22 GB. This number
varies across about 6 runs, but it is always over 20 GB and under about 26 GB.
(Reaching a 22+ GB file takes a while; the test usually runs for over 5 hours.)
* the failure appears when the "startDims" printout is around 376100000000
(376 billion). This number varies but is usually between roughly 350 billion
and 400 billion values.

Some additional notes:

* we have never seen this fail when running on Linux. On Linux we've reached
sizes of 80+ GB and over a trillion values.
* we've created another TestHDF5WriteLowLevel variant that uses file
families. With the file family driver (member size set to 1 GB) running on
Windows, we are currently past 1 trillion values and 64 GB, so file families
seem to work on Windows. However, it appears that the object layer does not
support file families, so at this point we cannot integrate this into our
application, because we rely too heavily on the object layer. Is there a plan
to support file families in the upcoming release?

We are targeting Windows as our primary OS, so this is a major problem for
us. A couple of questions that I'm hoping the community can help us with:

* Is this a known problem on Windows, or are we doing something wrong?
* Do others see this problem occurring?
* Can others duplicate our problem?

thanks, Aaron Kagawa

 

 

 

package h5;

import java.io.File;
import java.util.Arrays;

import java.lang.management.ManagementFactory;

import com.sun.management.OperatingSystemMXBean;

import ncsa.hdf.hdf5lib.H5;
import ncsa.hdf.hdf5lib.HDF5Constants;

/**
 * Implements a simple test that writes to a dataset in a loop. This test is meant to observe the
 * memory used by a Windows process: the Windows process appears to keep growing while the Java
 * heap stays constant.
 */
public class TestHDF5WriteLowLevel {

  private static final int INSERT_SIZE = 10000;
  private static final long NUMBER_OF_LOOPS = 200;
  private static final int PRINTLN_INTERVAL = 10000;
  private static final double MB = 1024.0 * 1024.0;

  public static void main(String[] args) {
    // optional arguments: <numberOfLoops> [<printlnInterval>]
    long numberOfLoops = NUMBER_OF_LOOPS;
    int printlnInterval = PRINTLN_INTERVAL;
    if (args.length >= 1) {
      numberOfLoops = Long.parseLong(args[0]);
    }
    if (args.length >= 2) {
      printlnInterval = Integer.parseInt(args[1]);
    }

    System.out.println("INSERT_SIZE: " + INSERT_SIZE);
    System.out.println("TIMES: " + numberOfLoops);
    try {
      // create a new file
      File javaFile = new File("TestHDF5Write-" + System.currentTimeMillis() + ".h5");
      int fapl = H5.H5Pcreate(HDF5Constants.H5P_FILE_ACCESS);
      H5.H5Pset_fclose_degree(fapl, HDF5Constants.H5F_CLOSE_STRONG);
      // note: the fapl above is created, but H5P_DEFAULT is what is actually passed to H5Fcreate
      int fileId = H5.H5Fcreate(javaFile.getAbsolutePath(), HDF5Constants.H5F_ACC_TRUNC,
          HDF5Constants.H5P_DEFAULT, HDF5Constants.H5P_DEFAULT);

      // create group (there is no good reason for us to have a group here)
      int groupId = H5.H5Gcreate(fileId, "/group", 0);

      // create a chunked, gzip-compressed, extendible data set
      long[] chunkSize = new long[] { 3000 };
      int gzipCompressionLevel = 2;

      int dataspaceId = H5.H5Screate_simple(1, new long[] { INSERT_SIZE },
          new long[] { HDF5Constants.H5S_UNLIMITED });
      int pid = H5.H5Pcreate(HDF5Constants.H5P_DATASET_CREATE);
      H5.H5Pset_layout(pid, HDF5Constants.H5D_CHUNKED);
      H5.H5Pset_chunk(pid, 1, chunkSize);
      H5.H5Pset_deflate(pid, gzipCompressionLevel);

      int dataSetId = H5.H5Dcreate(groupId, "/group/Dataset1",
          HDF5Constants.H5T_NATIVE_LLONG, dataspaceId, pid);

      long[] newDataArray = new long[INSERT_SIZE];
      Arrays.fill(newDataArray, System.currentTimeMillis());

      H5.H5Dwrite(dataSetId, HDF5Constants.H5T_NATIVE_LLONG, HDF5Constants.H5S_ALL,
          HDF5Constants.H5S_ALL, HDF5Constants.H5P_DEFAULT, newDataArray);

      H5.H5Dclose(dataSetId);
      H5.H5Gclose(groupId);

      long startTime = 0;
      long endTime;
      long duration;
      OperatingSystemMXBean osm;
      for (long loopIndex = 0; loopIndex < numberOfLoops; loopIndex++) {
        if (startTime == 0) {
          startTime = System.currentTimeMillis();
        }
        // figure out how big the current dims are
        dataSetId = H5.H5Dopen(fileId, "/group/Dataset1");
        int datasetDataspace = H5.H5Dget_space(dataSetId); // aka file_space_id
        long[] currentDims = new long[1];
        H5.H5Sget_simple_extent_dims(datasetDataspace, currentDims, null);
        H5.H5Sclose(datasetDataspace);

        // extend the data set by INSERT_SIZE values
        H5.H5Dextend(dataSetId, new long[] { currentDims[0] + INSERT_SIZE });
        // select the newly added region of the file space
        int filespace = H5.H5Dget_space(dataSetId); // aka file_space_id
        H5.H5Sselect_hyperslab(filespace, HDF5Constants.H5S_SELECT_SET,
            new long[] { currentDims[0] }, new long[] { 1 }, new long[] { INSERT_SIZE }, null);

        // make the data to add
        newDataArray = new long[INSERT_SIZE];
        Arrays.fill(newDataArray, System.currentTimeMillis());

        if (loopIndex % printlnInterval == 0) {
          System.out.println("********************************");
          System.out.println("time: " + loopIndex);
          endTime = System.currentTimeMillis();
          duration = endTime - startTime;
          if (duration == 0) {
            duration = 1;
          }
          System.out.println("\t" + (printlnInterval * INSERT_SIZE / ((float) duration / 1000))
              + " values a second");
          startTime = endTime;
          System.out.println("\tcurrentDims: " + currentDims[0]
              + ", ensuring dataset size: " + ((loopIndex + 1) * INSERT_SIZE)
              + ", startDims: " + (loopIndex * INSERT_SIZE));
          System.out.println("\t"
              + "usedMemory: " + ((Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory()) / MB)
              + ", totalMemory: " + (Runtime.getRuntime().totalMemory() / MB)
              + ", freeMemory: " + (Runtime.getRuntime().freeMemory() / MB)
              + ", maxMemory: " + (Runtime.getRuntime().maxMemory() / MB));
          osm = (OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
          System.out.println("\t" + loopIndex
              + ", CommittedVirtualMemorySize: " + (osm.getCommittedVirtualMemorySize()) / MB
              + ", getTotalPhysicalMemorySize: " + osm.getTotalPhysicalMemorySize() / MB
              + ", getFreePhysicalMemorySize" + osm.getFreePhysicalMemorySize() / MB);
          printOpenHDF5Objects(fileId);
        }

        // write the data into the selected hyperslab
        int memoryDataspace = H5.H5Screate_simple(1, new long[] { INSERT_SIZE }, null); // aka mem_space_id
        H5.H5Dwrite(dataSetId, HDF5Constants.H5T_NATIVE_LLONG, memoryDataspace, filespace,
            HDF5Constants.H5P_DEFAULT, newDataArray);
        H5.H5Sclose(memoryDataspace);
        H5.H5Sclose(filespace);
        H5.H5Fflush(dataSetId, HDF5Constants.H5F_SCOPE_LOCAL);
        H5.H5Dclose(dataSetId);
      }
      H5.H5Fflush(fileId, HDF5Constants.H5F_SCOPE_GLOBAL);
      H5.H5Fclose(fileId);
    }
    catch (Exception e) {
      e.printStackTrace();
    }

    System.exit(0);
  }


  /** print the open hdf5 objects associated with the hdf5 file */
  public static void printOpenHDF5Objects(int fid) {
    try {
      int count = H5.H5Fget_obj_count(fid, HDF5Constants.H5F_OBJ_ALL);
      int[] objs = new int[count];
      H5.H5Fget_obj_ids(fid, HDF5Constants.H5F_OBJ_ALL, count, objs);
      String[] name = new String[1];
      System.out.println("\t" + count + " objects still open:");
      for (int i = 0; i < count; i++) {
        int type = H5.H5Iget_type(objs[i]);
        System.out.print("\t\tid: " + objs[i] + ", " + name[0]);
        if (HDF5Constants.H5I_DATASET == type) {
          System.out.println(", H5I_DATASET");
        }
        else if (HDF5Constants.H5I_FILE == type) {
          System.out.println(", H5I_FILE");
        }
        else if (HDF5Constants.H5I_GROUP == type) {
          System.out.println(", H5I_GROUP");
        }
        else if (HDF5Constants.H5I_DATATYPE == type) {
          System.out.println(", H5I_DATATYPE");
        }
        else if (HDF5Constants.H5I_ATTR == type) {
          System.out.println(", H5I_ATTR");
        }
        else {
          System.out.println(", UNKNOWN " + type);
        }
      }
    }
    catch (Exception e) {
      e.printStackTrace();
    }
  }
}
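As a usage note, the long-running case described in the conditions above
corresponds to passing the loop count as the first argument (the optional
second argument overrides PRINTLN_INTERVAL). The class path entries and
native-library directory below are placeholders that depend on your hdf-java
installation; on Windows the class path separator is ';':

java -Djava.library.path=<hdf-java native libs> -cp <hdf-java jars>;. h5.TestHDF5WriteLowLevel 100000000000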

 



Exception when writing to large files on windows

Peter Cao
Aaron,

I don't know the exact cause of the problem. We fixed some other potential
memory leaks. If you are building hdf-java from source, you can try our
latest source code to see if it fixes your problem (run svn co
http://svn.hdfgroup.uiuc.edu/hdf-java/branches/hdf-java-2.6/).

There is still a small memory leak that we are trying to track down; you have
to run overnight to see a noticeable memory build-up.


Thanks
--pc






Exception when writing to large files on windows

Aaron Kagawa
Hey Peter,

Thanks for the response. I'm pretty sure it's not a memory leak (at least not
one that's noticeable in the process) that is causing the problem. The Windows
process seems to be fine. I'm not sure, but I think it has something to do
with the gzip compression. If I turn compression off I can write a very large
file (166 GB), though of course it holds a lot less data than a compressed
one. However, when I set the compression level to anything between 1 and 9,
there is a problem.

I'll try building the latest source and run the test again.

Thanks, Aaron







Exception when writing to large files on windows

Elena Pourmal
Hi Aaron,

We have a bug report (not yet confirmed with pure HDF5) that NetCDF-4 (which
is built on HDF5) uses more and more memory when writing gzip-compressed
datasets. The HDF5 library releases the memory when the application quits
(i.e., there is no memory leak per se). Could you please check what happens to
memory consumption when gzip is used versus not used with your application?
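For that comparison, the relevant knob in the posted TestHDF5WriteLowLevel
listing is the dataset creation property list. A minimal sketch of the two
variants, reusing the identifiers (pid, chunkSize, gzipCompressionLevel) from
that listing:

// gzip-compressed variant, as in the posted test
int pid = H5.H5Pcreate(HDF5Constants.H5P_DATASET_CREATE);
H5.H5Pset_layout(pid, HDF5Constants.H5D_CHUNKED);
H5.H5Pset_chunk(pid, 1, chunkSize);
H5.H5Pset_deflate(pid, gzipCompressionLevel); // omit this call for the uncompressed run

// uncompressed variant: identical, except the H5Pset_deflate call is left out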

Thank you!

Elena
On Nov 7, 2009, at 6:33 AM, Aaron Kagawa wrote:

> Hey Peter,
>
> Thanks for the response. I'm pretty sure its not a memory leak (one  
> that's
> noticeable in the process) that is causing the problem.  The windows  
> process
> seems to be fine.  I'm not sure, but I think it has something to do  
> with the
> gzip compression.  If I turn compression off I can write a very very  
> large
> file 166GB but of course it has a lot less data than with compression.
> However, it seems when I set compression to anything between 1 and  
> 9, there
> is a problem.
>
> I'll try compiling the latest code and try again.
>
> Thanks, Aaron
>
> -----Original Message-----
> From: hdf-forum-bounces at hdfgroup.org [mailto:hdf-forum-bounces at hdfgroup.org
> ]
> On Behalf Of Peter Cao
> Sent: Friday, November 06, 2009 12:38 PM
> To: hdf-forum at hdfgroup.org
> Subject: Re: [Hdf-forum] Exception when writing to large files on  
> windows
>
> Aaron,
>
> I don't know exact the cause of the problem. We  fixed some other
> potential memory leaks.
> If you are building hdf-java from the source, you can try our latest
> source code to see if it fix
> your problem (run svn co
> http://svn.hdfgroup.uiuc.edu/hdf-java/branches/hdf-java-2.6/).
>
> There is a  small memory leak, which we are still trying to figure  
> out.
> You have to run over night
> to see the noticeable memory  builds-up.
>
>
> Thanks
> --pc
>
>
> Aaron Kagawa wrote:
>>
>> Greetings,
>>
>> In earlier emails I asked about a memory leak with our code. That has
>> been resolved; thanks for your responses. My next email asked about
>> File Families. The reason behind that was because we are seeing an
>> exception when running a test in windows that writes to a file that
>> eventually becomes very large without file families.
>>
>> Here is the error:
>>
>> ...
>> ********************************
>> time: 37610000
>>        9949259.0 values a second
>>        currentDims: 376100010000, ensuring dataset size:  
>> 376100010000,
> startDims: 376100000000
>>        usedMemory: 2.6641311645507812, totalMemory: 32.75,  
>> freeMemory:
> 30.08586883544922, maxMemory: 493.0625
>>        37610000, CommittedVirtualMemorySize: 99.8828125,
> getTotalPhysicalMemorySize: 2047.9999990463257,
> getFreePhysicalMemorySize1477.40625
>>        2 objects still open:
>>                id: 16777216, null, H5I_FILE
>>                id: 87941649, null, H5I_DATASET
>> ncsa.hdf.hdf5lib.exceptions.HDF5InternalErrorException: System error
> message
>> HDF5-DIAG: Error detected in HDF5 (1.8.2) thread 0:
>>  #000: ..\..\..\src\H5Dio.c line 267 in H5Dwrite(): can't write data
>>    major: Dataset
>>    minor: Write failed
>>  #001: ..\..\..\src\H5Dio.c line 582 in H5D_write(): can't write data
>>    major: Dataset
>>    minor: Write failed
>>  #002: ..\..\..\src\H5Dchunk.c line 1641 in H5D_chunk_write():  
>> unable to
> read raw data chunk
>>    major: Low-level I/O
>>    minor: Read failed
>>  #003: ..\..\..\src\H5Dchunk.c line 2508 in H5D_chunk_lock():  
>> unable to
> preempt chunk(s) from cache
>>    major: Low-level I/O
>>    minor: Unable to initialize object
>>  #004: ..\..\..\src\H5Dchunk.c line 2317 in H5D_chunk_cache_prune():
> unable to preempt one or more raw data cache entry
>>    major: Low-level I/O
>>    minor: Unable to flush data from cache
>>  #005: ..\..\..\src\H5Dchunk.c line 2183 in H5D_chunk_cache_evict():
> cannot flush indexed storage buffer
>>    major: Low-level I/O
>>    minor: Write failed
>>  #006: ..\..\..\src\H5Dchunk.c line 2111 in H5D_chunk_flush_entry():
> unable to write raw data to file
>>    major: Low-level I/O
>>    minor: Write failed
>>  #007: ..\..\..\src\H5Fio.c line 159 in H5F_block_write(): file write
> failed
>>    major: Low-level I/O
>>    minor: Write failed
>>  #008: ..\..\..\src\H5FDint.c line 185 in H5FD_write(): driver write
> request failed
>>    major: Virtual File Layer
>>    minor: Write failed
>>  #009: ..\..\..\src\H5FDwindows.c line 921 in H5FD_windows_write():  
>> file
> write failed
>>    major: Low-level I/O
>>    minor: Write failed
>>  #010: ..\..\..\src\H5FDwindows.c line 921 in H5FD_windows_write():
> Invalid argument
>>    major: Internal error (too specific to document in detail)
>>    minor: System error message
>> ncsa.hdf.hdf5lib.exceptions.HDF5InternalErrorException: System error
> message
>>        at ncsa.hdf.hdf5lib.H5.H5Dwrite_long(Native Method)
>>        at ncsa.hdf.hdf5lib.H5.H5Dwrite(H5.java:1031)
>>        at h5.TestHDF5WriteLowLevel.main(TestHDF5WriteLowLevel.java:
>> 123)
>>
>> The error occurs under these conditions:
>>
>>    * run TestHDF5WriteLowLevel (see the complete code below) test
>>      with an NUMBER_OF_LOOPS set to 100000000000
>>    * run the test on Windows XP 32 bit, Windows XP 64 bit, or Windows
>>      Vista 64 bit. (in all cases we use 32 bit java since the java
>>      hdf5 release was only for 32 bit)
>>    * when the file size of the hdf5 file gets to be over 22 GB. This
>>      number varies over about 6 runs, but its always over 20GB and
>>      under about 26GB. (getting to a file size of 22+ GB takes a
>>      while. the test usually runs for over 5 hours).
>>    * when the "startDims" print out is around 376100000000 (376
>>      billion). This number varies but is usually around 350billion to
>>      under 400 billion values.
>>
>> Some additional notes:
>>
>>    * we have never seen this fail when running on linux. On linux
>>      we've reached numbers like 80+ gigs and over a trillion values.
>>    * we've created another TestHDF5WriteLowLevel test to use file
>>      families. With File Family (setting the limit to 1GB) running on
>>      windows we are currently on over 1 trillion values and 64GB. So
>>      the file families seems to work for windows. It appears that the
>>      object layer does not support file families. Therefore, at this
>>      point in time we cannot integrate this into our application,
>>      because we rely too heavily on the Object layer. Is there a plan
>>      to support file family in the upcoming release?
>>
>> We are targeting Windows as our primarily OS, so this problem is a
>> major problem for us. A couple of questions that I'm hoping that the
>> community can help us with:
>>
>>    * Is this a know problem for windows? Or are we doing something  
>> wrong?
>>    * Do others see this problem occurring?
>>    * Can others duplicate our problem?
>>
>> thanks, Aaron Kagawa
>>
>>
>>
>>
>>
>>
>>
>> package h5;
>>
>> import java.io.File;
>> import java.util.Arrays;
>>
>> import sun.management.ManagementFactory;
>>
>> import com.sun.management.OperatingSystemMXBean;
>>
>> import ncsa.hdf.hdf5lib.H5;
>> import ncsa.hdf.hdf5lib.HDF5Constants;
>>
>> /**
>> * Implements a simple test writes to a dataset in a loop. This test  
>> is
> meant to test the memory
>> * used in a windows process. it seems that the windows process  
>> continues
> to grow, while the
>> * java heap space stays constant.
>> */
>> public class TestHDF5WriteLowLevel {
>>
>>  private static final int INSERT_SIZE = 10000;
>>  private static final long NUMBER_OF_LOOPS = 200;
>>  private static final int PRINTLN_INTERVAL = 10000;
>>  private static final double MB = 1024.0 * 1024.0;
>>
>>  public static void main(String[] args) {
>>    long numberOfLoops = NUMBER_OF_LOOPS;
>>    int printlnInterval = PRINTLN_INTERVAL;
>>    if (args.length == 1) {
>>      numberOfLoops = Long.parseLong(args[0]);
>>    }
>>    if (args.length == 2) {
>>      printlnInterval = Integer.parseInt(args[0]);
>>    }
>>
>>    System.out.println("INSERT_SIZE: " + INSERT_SIZE);
>>    System.out.println("TIMES: " + numberOfLoops);
>>    try {
>>      // create a new file
>>      File javaFile = new File("TestHDF5Write-" +
> System.currentTimeMillis() + ".h5");
>>      int fapl = H5.H5Pcreate(HDF5Constants.H5P_FILE_ACCESS);
>>      H5.H5Pset_fclose_degree(fapl, HDF5Constants.H5F_CLOSE_STRONG);
>>      int fileId = H5.H5Fcreate (javaFile.getAbsolutePath(),
> HDF5Constants.H5F_ACC_TRUNC,
>>          HDF5Constants.H5P_DEFAULT, HDF5Constants.H5P_DEFAULT);
>>
>>
>>      // create group (there is no good reason for us to have a group
> here)
>>      int groupId = H5.H5Gcreate (fileId, "/group", 0);
>>
>>      // create data set
>>      long[] chunkSize = new long[] { 3000 };
>>      int gzipCompressionLevel = 2;
>>
>>      int dataspaceId = H5.H5Screate_simple(1, new long[]  
>> { INSERT_SIZE },
> new long[] { HDF5Constants.H5S_UNLIMITED });
>>      int pid = H5.H5Pcreate(HDF5Constants.H5P_DATASET_CREATE);
>>      H5.H5Pset_layout(pid, HDF5Constants.H5D_CHUNKED);
>>      H5.H5Pset_chunk(pid, 1, chunkSize);
>>      H5.H5Pset_deflate(pid, gzipCompressionLevel);
>>
>>      int dataSetId = H5.H5Dcreate(groupId, "/group/Dataset1",
> HDF5Constants.H5T_NATIVE_LLONG, dataspaceId, pid);
>>
>>      long[] newDataArray = new long[INSERT_SIZE];
>>      Arrays.fill(newDataArray, System.currentTimeMillis());
>>
>>      H5.H5Dwrite(dataSetId, HDF5Constants.H5T_NATIVE_LLONG,
> HDF5Constants.H5S_ALL, HDF5Constants.H5S_ALL,  
> HDF5Constants.H5P_DEFAULT,
> newDataArray);
>>
>>      H5.H5Dclose(dataSetId);
>>      H5.H5Gclose(groupId);
>>
>>      long startTime = 0;
>>      long endTime;
>>      long duration;
>>      OperatingSystemMXBean osm;
>>      for (long loopIndex = 0; loopIndex < numberOfLoops; loopIndex+
>> +) {
>>        if (startTime == 0) {
>>          startTime = System.currentTimeMillis();
>>        }
>>        // figure out how big the current dims are
>>        dataSetId = H5.H5Dopen(fileId, "/group/Dataset1");
>>        int datasetDataspace = H5.H5Dget_space(dataSetId); //aka
> file_space_id
>>        long[] currentDims = new long[1];
>>        H5.H5Sget_simple_extent_dims(datasetDataspace, currentDims,  
>> null);
>>        H5.H5Sclose(datasetDataspace);
>>
>>        // extend the data set
>>        H5.H5Dextend(dataSetId, new long[] { currentDims[0] +
> INSERT_SIZE});
>>        // select the file space
>>        int filespace = H5.H5Dget_space(dataSetId); //aka  
>> file_space_id
>>        H5.H5Sselect_hyperslab(filespace,  
>> HDF5Constants.H5S_SELECT_SET,
> new long[] { currentDims[0] },
>>            new long[] {1}, new long[] { INSERT_SIZE }, null);
>>
>>        // make the data to add
>>        newDataArray = new long[INSERT_SIZE];
>>        Arrays.fill(newDataArray, System.currentTimeMillis());
>>
>>        if (loopIndex % printlnInterval == 0) {
>>          System.out.println("********************************");
>>          System.out.println("time: " + loopIndex);
>>          endTime = System.currentTimeMillis();
>>          duration = endTime - startTime;
>>          if (duration == 0) {
>>            duration = 1;
>>          }
>>          System.out.println("\t" + (printlnInterval * INSERT_SIZE /
> ((float) duration / 1000)) + " values a second");
>>          startTime = endTime;
>>          System.out.println("\tcurrentDims: " + currentDims[0]
>>              + ", ensuring dataset size: " + ((loopIndex +1) *
> INSERT_SIZE)
>>              + ", startDims: " + (loopIndex * INSERT_SIZE));
>>          System.out.println("\t"
>>              + "usedMemory: " +  
>> ((Runtime.getRuntime().totalMemory() -
> Runtime.getRuntime().freeMemory()) / MB)
>>              + ", totalMemory: " +  
>> (Runtime.getRuntime().totalMemory() /
> MB)
>>              + ", freeMemory: " +  
>> (Runtime.getRuntime().freeMemory() /
> MB)
>>              + ", maxMemory: " + (Runtime.getRuntime().maxMemory() /
> MB));
>>          osm = (OperatingSystemMXBean)
> ManagementFactory.getOperatingSystemMXBean();
>>          System.out.println("\t" + loopIndex
>>              + ", CommittedVirtualMemorySize: " +
> (osm.getCommittedVirtualMemorySize()) / MB
>>              + ", getTotalPhysicalMemorySize: " +
> osm.getTotalPhysicalMemorySize() / MB
>>              + ", getFreePhysicalMemorySize" +
> osm.getFreePhysicalMemorySize() / MB);
>>          printOpenHDF5Objects(fileId);
>>        }
>>
>>        // write the data
>>        int memoryDataspace = H5.H5Screate_simple(1, new long[] {
> INSERT_SIZE }, null); //aka mem_space_id
>>        H5.H5Dwrite(dataSetId, HDF5Constants.H5T_NATIVE_LLONG,
> memoryDataspace, filespace, HDF5Constants.H5P_DEFAULT, newDataArray);
>>        H5.H5Sclose(memoryDataspace);
>>        H5.H5Sclose(filespace);
>>        H5.H5Fflush(dataSetId, HDF5Constants.H5F_SCOPE_LOCAL);
>>        H5.H5Dclose(dataSetId);
>>      }
>>      H5.H5Fflush(fileId, HDF5Constants.H5F_SCOPE_GLOBAL);
>>      H5.H5Fclose(fileId);
>>    }
>>    catch (Exception e) {
>>      e.printStackTrace();
>>    }
>>
>>    System.exit(0);
>>  }
>>
>>
>>  /** print the open hdf5 objects associated with the hdf5 file */
>>  public static void printOpenHDF5Objects(int fid) {
>>    try {
>>      int count;
>>      count = H5.H5Fget_obj_count(fid, HDF5Constants.H5F_OBJ_ALL);
>>      int[] objs = new int[count];
>>      H5.H5Fget_obj_ids(fid, HDF5Constants.H5F_OBJ_ALL, count, objs);
>>      String[] name = new String[1];
>>      System.out.println("\t" + count + " objects still open:");
>>      for (int i = 0; i < count; i++) {
>>        int type = H5.H5Iget_type(objs[i]);
>>        System.out.print("\t\tid: " + objs[i] + ", " + name[0]);
>>        if (HDF5Constants.H5I_DATASET == type) {
>>          System.out.println(", H5I_DATASET");
>>        }
>>        else if (HDF5Constants.H5I_FILE == type) {
>>          System.out.println(", H5I_FILE");
>>        }
>>        else if (HDF5Constants.H5I_GROUP == type) {
>>          System.out.println(", H5I_GROUP");
>>        }
>>        else if (HDF5Constants.H5I_DATATYPE == type) {
>>          System.out.println(", H5I_DATATYPE");
>>        }
>>        else if (HDF5Constants.H5I_ATTR == type) {
>>          System.out.println(", H5I_ATTR");
>>        }
>>        else {
>>          System.out.println(", UNKNOWN " + type);
>>        }
>>      }
>>
>>    }
>>    catch (Exception e) {
>>      e.printStackTrace();
>>    }
>>
>>  }
>> }
>>
>>
>>



Reply | Threaded
Open this post in threaded view
|

Exception when writing to large files on windows

Aaron Kagawa
Elena,

The Windows process seems fine during the test using gzip. There is no
growth.

I also tried using the szip filter instead of gzip. We got much further
along with that test. The file grew to 72 GB and 700 billion values, but
then it crashed in a similar way to the gzip run. I really do think it has
something to do with the C code that is specific to Windows.
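
For reference, the only change for the szip run is in the dataset creation
property list of the test. Roughly the following, assuming an szip-enabled
HDF5 build and that hdf-java exposes H5Pset_szip and the
H5_SZIP_NN_OPTION_MASK constant (a sketch, not the exact code we ran):

     // chunked dataset creation property list, as in TestHDF5WriteLowLevel
     int pid = H5.H5Pcreate(HDF5Constants.H5P_DATASET_CREATE);
     H5.H5Pset_layout(pid, HDF5Constants.H5D_CHUNKED);
     H5.H5Pset_chunk(pid, 1, new long[] { 3000 });
     // gzip variant used by the original test:
     //   H5.H5Pset_deflate(pid, 2);
     // szip variant (assumed binding): nearest-neighbor option mask, 32 pixels per block
     H5.H5Pset_szip(pid, HDF5Constants.H5_SZIP_NN_OPTION_MASK, 32);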

Has anyone run into issues with Windows and HDF5?

Thanks, Aaron

-----Original Message-----
From: hdf-forum-bounces at hdfgroup.org [mailto:hdf-forum-bounces at hdfgroup.org]
On Behalf Of Elena Pourmal
Sent: Sunday, November 08, 2009 9:21 AM
To: hdf-forum at hdfgroup.org
Subject: Re: [Hdf-forum] Exception when writing to large files on windows

Hi Aaron,

We have a bug report (not yet confirmed with pure HDF5) that NetCDF-4
(which is based on HDF5) uses more and more memory when writing
gzip-compressed datasets. The HDF5 library releases the memory when the
application quits (i.e., there is no memory leak per se). Could you please
check what happens to memory consumption when gzip is used and when it is
not used with your application?

Thank you!

Elena
On Nov 7, 2009, at 6:33 AM, Aaron Kagawa wrote:

> Hey Peter,
>
> Thanks for the response. I'm pretty sure it's not a memory leak (one
> that's noticeable in the process) that is causing the problem. The
> Windows process seems to be fine. I'm not sure, but I think it has
> something to do with the gzip compression. If I turn compression off I
> can write a very large file (166 GB), but of course it holds a lot less
> data than with compression. However, it seems that when I set the
> compression level to anything between 1 and 9, there is a problem.
>
> I'll try compiling the latest code and try again.
>
> Thanks, Aaron
>
> -----Original Message-----
> From: hdf-forum-bounces at hdfgroup.org [mailto:hdf-forum-bounces at hdfgroup.org]
> On Behalf Of Peter Cao
> Sent: Friday, November 06, 2009 12:38 PM
> To: hdf-forum at hdfgroup.org
> Subject: Re: [Hdf-forum] Exception when writing to large files on windows
>
> Aaron,
>
> I don't know the exact cause of the problem. We fixed some other
> potential memory leaks. If you are building hdf-java from the source,
> you can try our latest source code to see if it fixes your problem
> (run svn co http://svn.hdfgroup.uiuc.edu/hdf-java/branches/hdf-java-2.6/).
>
> There is a small memory leak which we are still trying to figure out.
> You have to run overnight to see a noticeable memory build-up.
>
>
> Thanks
> --pc
>
>
> Aaron Kagawa wrote:
>>
>> Greetings,
>>
>> In earlier emails I asked about a memory leak with our code. That has
>> been resolved; thanks for your responses. My next email asked about
>> File Families. The reason behind that was because we are seeing an
>> exception when running a test in windows that writes to a file that
>> eventually becomes very large without file families.
>>
>> Here is the error:
>>
>> ...
>> ********************************
>> time: 37610000
>>        9949259.0 values a second
>>        currentDims: 376100010000, ensuring dataset size: 376100010000, startDims: 376100000000
>>        usedMemory: 2.6641311645507812, totalMemory: 32.75, freeMemory: 30.08586883544922, maxMemory: 493.0625
>>        37610000, CommittedVirtualMemorySize: 99.8828125, getTotalPhysicalMemorySize: 2047.9999990463257, getFreePhysicalMemorySize1477.40625
>>        2 objects still open:
>>                id: 16777216, null, H5I_FILE
>>                id: 87941649, null, H5I_DATASET
>> ncsa.hdf.hdf5lib.exceptions.HDF5InternalErrorException: System error message
>> HDF5-DIAG: Error detected in HDF5 (1.8.2) thread 0:
>>  #000: ..\..\..\src\H5Dio.c line 267 in H5Dwrite(): can't write data
>>    major: Dataset
>>    minor: Write failed
>>  #001: ..\..\..\src\H5Dio.c line 582 in H5D_write(): can't write data
>>    major: Dataset
>>    minor: Write failed
>>  #002: ..\..\..\src\H5Dchunk.c line 1641 in H5D_chunk_write(): unable to read raw data chunk
>>    major: Low-level I/O
>>    minor: Read failed
>>  #003: ..\..\..\src\H5Dchunk.c line 2508 in H5D_chunk_lock(): unable to preempt chunk(s) from cache
>>    major: Low-level I/O
>>    minor: Unable to initialize object
>>  #004: ..\..\..\src\H5Dchunk.c line 2317 in H5D_chunk_cache_prune(): unable to preempt one or more raw data cache entry
>>    major: Low-level I/O
>>    minor: Unable to flush data from cache
>>  #005: ..\..\..\src\H5Dchunk.c line 2183 in H5D_chunk_cache_evict(): cannot flush indexed storage buffer
>>    major: Low-level I/O
>>    minor: Write failed
>>  #006: ..\..\..\src\H5Dchunk.c line 2111 in H5D_chunk_flush_entry(): unable to write raw data to file
>>    major: Low-level I/O
>>    minor: Write failed
>>  #007: ..\..\..\src\H5Fio.c line 159 in H5F_block_write(): file write failed
>>    major: Low-level I/O
>>    minor: Write failed
>>  #008: ..\..\..\src\H5FDint.c line 185 in H5FD_write(): driver write request failed
>>    major: Virtual File Layer
>>    minor: Write failed
>>  #009: ..\..\..\src\H5FDwindows.c line 921 in H5FD_windows_write(): file write failed
>>    major: Low-level I/O
>>    minor: Write failed
>>  #010: ..\..\..\src\H5FDwindows.c line 921 in H5FD_windows_write(): Invalid argument
>>    major: Internal error (too specific to document in detail)
>>    minor: System error message
>> ncsa.hdf.hdf5lib.exceptions.HDF5InternalErrorException: System error message
>>        at ncsa.hdf.hdf5lib.H5.H5Dwrite_long(Native Method)
>>        at ncsa.hdf.hdf5lib.H5.H5Dwrite(H5.java:1031)
>>        at h5.TestHDF5WriteLowLevel.main(TestHDF5WriteLowLevel.java:123)
>>
>> The error occurs under these conditions:
>>
>>    * run the TestHDF5WriteLowLevel test (see the complete code below)
>>      with NUMBER_OF_LOOPS set to 100000000000 (an example invocation
>>      follows this list)
>>    * run the test on Windows XP 32-bit, Windows XP 64-bit, or Windows
>>      Vista 64-bit (in all cases we use 32-bit Java, since the Java
>>      HDF5 release is only available for 32-bit)
>>    * the file size of the HDF5 file gets to be over 22 GB. This number
>>      varies over about 6 runs, but it is always over 20 GB and under
>>      about 26 GB. (Getting to a file size of 22+ GB takes a while; the
>>      test usually runs for over 5 hours.)
>>    * the "startDims" printout is around 376100000000 (376 billion).
>>      This number varies but is usually around 350 billion to under
>>      400 billion values.
>>
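>> For reference, a typical invocation of the test looks roughly like the
>> following (a sketch only; the class path entries and the path to the
>> HDF5 JNI libraries are placeholders for your installation):
>>
>>      java -Djava.library.path=C:\hdf-java\lib -cp .;C:\hdf-java\lib\jhdf5.jar h5.TestHDF5WriteLowLevel 100000000000
>>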
>> Some additional notes:
>>
>>    * we have never seen this fail when running on Linux. On Linux
>>      we've reached numbers like 80+ GB and over a trillion values.
>>    * we've created another TestHDF5WriteLowLevel test that uses file
>>      families. With the family driver (member size set to 1 GB) running
>>      on Windows, we are currently past 1 trillion values and 64 GB, so
>>      file families seem to work on Windows (a rough sketch of the family
>>      setup follows this list). However, it appears that the object layer
>>      does not support file families, and at this point we cannot
>>      integrate this into our application because we rely too heavily on
>>      the object layer. Is there a plan to support file families in the
>>      upcoming release?
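>>
>> The family-driver variant only changes the file access property list and
>> the file name pattern. A rough sketch, assuming the hdf-java JNI binding
>> for H5Pset_fapl_family and an illustrative file name (not the exact code
>> from our test):
>>
>>      // file access property list using the family driver, 1 GB per member file
>>      int fapl = H5.H5Pcreate(HDF5Constants.H5P_FILE_ACCESS);
>>      H5.H5Pset_fapl_family(fapl, 1024L * 1024L * 1024L, HDF5Constants.H5P_DEFAULT);
>>      // the file name must contain a printf-style %d, replaced by the member index
>>      int fileId = H5.H5Fcreate("TestHDF5Write-%d.h5", HDF5Constants.H5F_ACC_TRUNC,
>>          HDF5Constants.H5P_DEFAULT, fapl);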
>>
>> We are targeting Windows as our primary OS, so this is a major problem
>> for us. A couple of questions that I'm hoping the community can help us
>> with:
>>
>>    * Is this a known problem on Windows? Or are we doing something wrong?
>>    * Do others see this problem occurring?
>>    * Can others duplicate our problem?
>>
>> thanks, Aaron Kagawa
>>
>> [The complete TestHDF5WriteLowLevel source was included here; it is
>> identical to the listing quoted earlier in the thread and has been
>> snipped, along with the quoted mailing-list footers.]


_______________________________________________
Hdf-forum is for HDF software users discussion.
Hdf-forum at hdfgroup.org
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org





Reply | Threaded
Open this post in threaded view
|

Exception when writing to large files on windows

Elena Pourmal
Aaron,

Thank you for the information. I entered a bug report, and we will take a
look as soon as we can.

Elena
On Nov 10, 2009, at 2:23 PM, Aaron Kagawa wrote:

> [Aaron's message of Nov 10 and the earlier exchange it quotes, including
> the full TestHDF5WriteLowLevel listing, appear in full earlier in the
> thread and have been snipped here.]



Reply | Threaded
Open this post in threaded view
|

Presentations from HDF5 BOFs at SC09

Ruth Aydt
Administrator
In reply to this post by Ruth Aydt
The presentations from the SC09 BOFs are now available for download.

See http://www.hdfgroup.org/pubs/presentations/

In addition to the main presentation during the "HDF5: State of the
Union" BOF, John Shalf talked about "Tuning HDF5/Lustre at LBNL/NERSC".
Those slides are also available. Thanks, John!


On Nov 4, 2009, at 12:25 PM, Ruth Aydt wrote:

> There will be two Birds-of-a-Feather (BOF) sessions for HDF5 users  
> at the upcoming SC09 conference, to be held in Portland, Oregon from  
> November 14th - 20th.
>
> The first BOF, Developing Bioinformatics Applications with BioHDF,  
> will be led by Geospiza, and will discuss the use of HDF5 on the  
> BioHDF project, a collaborative effort to develop portable, scalable  
> bioinformatics data storage technologies in HDF5. Future directions  
> of BioHDF will also be discussed. (Wednesday, 12:15-1:15 pm)
>
> The second BOF, HDF5: State of the Union, will be led by members of  
> The HDF Group, and will discuss features currently under development  
> in HDF5, answer questions, and gather input for future directions.  
> (Thursday, 12:15-1:15 pm)
>

------------------------------------------------------------
Ruth Aydt
The HDF Group

aydt at hdfgroup.org      (217)265-7837
------------------------------------------------------------



