JRIO frequently asked questions

(last updated April, 2013)


IBM Java Record I/O (JRIO) is deprecated as of SDK 6.0.1. No new function will be added to any release of IBM Java Record I/O (JRIO).

IBM 31-bit SDK for z/OS, Java Technology Edition, V7.1 and IBM 64-bit SDK for z/OS, Java Technology Edition, V7.1 are planned to be the last releases to support the JRIO component.

We strongly recommend migrating to IBM JZOS Batch Toolkit, which currently has equivalent functionality and will be enhanced with new functions.

For more comprehensive information, refer to the IBM JZOS Batch Toolkit for z/OS SDKs webpage.

For API specifics, refer to IBM JZOS Batch Toolkit API.

What is JRIO? Why should I use it?

JRIO is a class library, similar to java.io. While java.io provides byte-oriented or field-oriented access to files, JRIO provides record-oriented access. You can use JRIO to access VSAM data sets, non-VSAM data sets (PDS or sequential), and HFS files. For more detailed information, see the JRIO Overview.

JRIO lets your application read, append, and update records by providing sequential, random, and keyed access.
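In outline, sequential record access looks like the following sketch. This is not runnable off z/OS: it assumes the com.ibm.recordio classes from recordio.jar are on the CLASSPATH, the dataset name is only an example, and the getRecordLength and read calls are assumptions about the API shown to convey the flavor, not verified signatures.

```java
import com.ibm.recordio.*;

// Read a sequential dataset record by record (sketch only; requires z/OS
// and recordio.jar -- the dataset name and read loop are illustrative).
public class ReadRecords {
    public static void main(String[] args) throws java.io.IOException {
        IRecordFile rf = new RecordFile("//HLQ.TEST.FILE");
        FileInputRecordStream in = new FileInputRecordStream(rf);
        try {
            // assumed: getRecordLength() returns the dataset LRECL
            byte[] record = new byte[rf.getRecordLength()];
            int len;
            // assumed: read() fills one record and returns its length,
            // or -1 at end of file, analogous to java.io
            while ((len = in.read(record)) != -1) {
                System.out.println("read " + len + " bytes");
            }
        } finally {
            in.close();
        }
    }
}
```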

What are the requirements for using JRIO?

JRIO is an integral part of the IBM Developer Kit for OS/390, Java 2 Technology Edition and the IBM SDK for z/OS, Java 2 Technology Edition. Refer to the Java™ on z/OS website for information on the requirements for downloading and installing either of these products.

How do I obtain the JRIO code?

JRIO is an integral part of the IBM Developer Kit for OS/390, Java™ 2 Technology Edition and the IBM SDK for z/OS, Java 2 Technology Edition. Refer to the Java™ on z/OS website for information on downloading and installing either of these products.

How do I use JRIO? What Application Programming Interface (API) is supported?

Refer to the JRIO Javadoc for the supported APIs.

How do I run a JRIO application?

The java and javac commands implicitly set the CLASSPATH for the JRIO classes. To run a JRIO application, update your CLASSPATH to include the application classes by using the following shell command:

export CLASSPATH=.:/u/joe/java/myclasses:$CLASSPATH

In this example, the class loader first scans the current directory for the application classes. If that fails, the class loader then scans the /u/joe/java/myclasses directory.

What implicit functions do the java and javac commands perform for the JRIO class files?

On the IBM Developer Kit for OS/390, Java 2 Technology Edition and the IBM SDK for z/OS, Java 2 Technology Edition, the java and javac commands implicitly set the CLASSPATH for the JRIO classes. More specifically, they add the JRIO class files found in recordio.jar to the CLASSPATH. If the JRIO class files are needed outside of the java and javac commands (as they are in the WebSphere Application Server), you must explicitly add them to the CLASSPATH as described in the following:

To add the JRIO class files on the IBM Developer Kit for OS/390, Java™ 2 Technology Edition, update CLASSPATH to include the JRIO class files by using the following Shell command:

export CLASSPATH=$CLASSPATH:
	/usr/lpp/java/IBM/J1.3/lib/ext/recordio.jar

To add the JRIO class files on the IBM SDK for z/OS, Java 2 Technology Edition, update CLASSPATH to include the JRIO class files by using the following Shell command:

export CLASSPATH=$CLASSPATH:
	/usr/lpp/java/J1.4/lib/ext/recordio.jar

The export command references the default path (/usr/lpp/java/IBM/J1.3) into which the IBM Developer Kit for OS/390, Java 2 Technology Edition was installed or the default path (/usr/lpp/java/J1.4) into which the IBM SDK for z/OS, Java 2 Technology Edition was installed. If the default path has been changed, change the export command accordingly.

Why does a variable-length KeyedAccessRecordFile read or write only 80-byte records?

Opening an existing KSDS does not automatically set the correct record length in the KeyedAccessRecordFile. You must manually set it to some value greater than or equal to the maximum record length; otherwise the default (IConstants.JRIO_DEFAULT_RECORD_LENGTH = 80) is used.

For example, if you specified the following:

InputFile = new KeyedAccessRecordFile(CLUSTER_NAME,
                                   IConstants.JRIO_READ_MODE);

The default record length would be set to 80, and only 80 bytes of each record would be read.

To read variable records greater than 80 bytes, change the preceding line of code to the following two lines of code:

IRecordFile rf = new RecordFile(CLUSTER_NAME,
                              buffer.length,
                                // Where buffer.length is the
                                // maximum record length
                              IConstants.JRIO_VARIABLE_MODE);
InputFile = new KeyedAccessRecordFile(rf,
                              IConstants.JRIO_READ_MODE);

Note: Instead of using 'buffer.length' in the first line of code, you can use IConstants.JRIO_MAX_VB_RECORD_LENGTH.

Why does a record-oriented file I copied from a non-HFS file to an HFS file using OPUT with the BINARY option appear as one long line when edited in HFS?

Record-oriented files do not use characters to end lines of text. For fixed-length record-format files, records end on a logical record length (LRECL) multiple; for variable-length record-format files, the record descriptor word (RDW) prefixed to each record determines the number of bytes that follow for that record.
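The variable-format convention can be illustrated with a small, runnable sketch in plain Java. The RDW layout used here, a 2-byte big-endian length that includes the 4-byte RDW itself followed by 2 reserved bytes, is standard z/OS behavior; the RdwSplit helper class itself is only an illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class RdwSplit {
    // Split a buffer of RDW-prefixed variable-length records into
    // individual records. Each RDW is 4 bytes: a 2-byte big-endian
    // length (which includes the RDW itself) plus 2 reserved bytes.
    public static List<byte[]> split(byte[] data) {
        List<byte[]> records = new ArrayList<>();
        int off = 0;
        while (off + 4 <= data.length) {
            int len = ((data[off] & 0xFF) << 8) | (data[off + 1] & 0xFF);
            if (len < 4 || off + len > data.length) {
                break;  // malformed RDW; stop rather than loop forever
            }
            byte[] rec = new byte[len - 4];
            System.arraycopy(data, off + 4, rec, 0, len - 4);
            records.add(rec);
            off += len;
        }
        return records;
    }
}
```

Copying such a file byte-for-byte without interpreting the RDWs is exactly what loses the record boundaries described above.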

The TSO/ISPF editor shows each record on a separate line. A record's length is the number of bytes an I/O read operation returns; for a file whose record format is fixed, this length matches the LRECL. For a file with an LRECL of 80, for example, the editor automatically breaks each record after 80 bytes.

If you use the BINARY option with OPUT (as you should to copy fixed-length record-format files), under HFS the file should normally appear as one long line when edited or viewed because the file does not contain any newline (NL=0x15) characters.

If you copy a non-HFS variable-length record-format file to an HFS file using OPUT with the BINARY option, this also appears as one long line when edited. However, the record boundary locations are lost because the record descriptor word is not recognized.

You can use JRIO to copy a record-oriented non-HFS file to an HFS file while still maintaining record boundaries. You can use the CopyFile JRIO sample program as an example of how to code a Java program to copy a non-HFS fixed-length record-format file to an HFS file while still maintaining the record boundaries. You can also change this sample program to make it copy a non-HFS variable-length record-format file to an HFS file.
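In outline, such a copy can be coded as follows. This is a hedged sketch modeled on the stream classes shown elsewhere in this FAQ, not the actual CopyFile sample; it requires z/OS with recordio.jar, the dataset and file names are illustrative, and the read/write signatures are assumed to mirror java.io.

```java
import com.ibm.recordio.*;
import java.io.IOException;

// Copy a fixed-length record-format non-HFS file to an HFS file one
// record at a time, so record boundaries are preserved (sketch only).
public class CopyRecords {
    public static void main(String[] args) throws IOException {
        IRecordFile inFile = new RecordFile("//HLQ.TEST.FILE");
        IRecordFile outFile = new RecordFile("/tmp/file1.txt");
        FileInputRecordStream in = new FileInputRecordStream(inFile);
        FileOutputRecordStream out = new FileOutputRecordStream(outFile);
        try {
            byte[] record = new byte[80];   // LRECL of the input dataset
            // assumed: read() returns -1 at end of file, as in java.io
            while (in.read(record) != -1) {
                out.write(record);          // one record per write call
            }
        } finally {
            in.close();
            out.close();
        }
    }
}
```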

Why does a record-oriented file copied from a non-HFS file to an HFS file using OPUT with the TEXT option contain more records?

When you use the TEXT option on the OPUT command to copy a record-oriented non-HFS file to an HFS file, an EBCDIC newline (NL=0x15) character is appended to the end of each record in the file. The addition of the newline character at the end of every record increases the size of the file and the number of records in the file when processed using JRIO.

If you use the BINARY option on the OPUT command, the newline characters are not appended to the end of each record, and the size of the file and the number of records in the file remains the same.

For example, if you have a non-VSAM data set with a logical record length of 80 that contains five records, and you use the BINARY option on OPUT, the HFS file is 400 bytes long, and it contains five records when you use JRIO.

If you use the TEXT option, the HFS file is 405 bytes long. Read as fixed 80-byte records, it contains five 80-byte records plus a short 5-byte record that is padded to 80 bytes, producing a six-record file in HFS when processed using JRIO. The contents of these records are skewed by one byte per record because of the inserted newline characters.

You should use the java.io reader and writer classes to access TEXT files in HFS.

You can use JRIO to copy a record-oriented non-HFS file to an HFS file while still maintaining record boundaries. You can use the CopyFile JRIO sample program as an example of how to code a Java program to copy a non-HFS fixed-length record-format file to an HFS file while still maintaining the record boundaries. You can also change this sample program to make it copy a non-HFS variable-length record-format file to an HFS file.

What documentation is available?

JRIO documentation is available on this website in HTML format:

JRIO overview
JRIO Javadoc for SDK 6.0.0 and earlier
JRIO Javadoc for SDK 6.0.1 and SDK 7.0.0

What examples are provided?

The samples ZIP file is available from the Java Record I/O (JRIO) Overview page on the IBM website:

  1. Click JRIO documentation on the overview page.
  2. Go to JRIO sample code and user's guide in the What documentation is available section and click the ZIP package to download it to your workstation.
  3. Extract the zip contents to a local directory.
  4. Deploy recordio-samples.jar to your local z/OS HFS file system. To extract all the files, execute:
    jar -xvf recordio-samples.jar
    
    The com directory under the deployed jar contains the sample source code.
  5. Update the CLASSPATH to include the JRIO samples classes by using the following shell command:
    export CLASSPATH=$CLASSPATH:<location of sample jar>/recordio-samples.jar
    
    Javadoc is available at Overview: IBM Java Record I/O (JRIO) API Specification.
  6. Both sample source code and javadoc are also available in the local directory where the recordio-samples.zip is unpacked.

Why does my application sometimes receive a NativeSeqFile error when using JRIO and the Remote Method Invocation (RMI) - Java Remote Method Protocol (JRMP) to access MVS datasets?

RMI-JRMP uses sockets to make the connection between the client and server Java programs. Sockets in RMI-JRMP are given an expiration time (default 15000 milliseconds, or 15 seconds); if a socket is not used within this timeframe, it is closed. A socket cleanup thread runs every 15000 milliseconds to clean up closed sockets and their associated resources.

With Java on the OS/390 and z/OS platforms, the socket connection runs in a thread. Additionally, the JRIO code uses the Java Native Interface (JNI) to call native OS/390 'C' runtime library routines to access MVS datasets. Because JRIO uses these 'C' runtime library routines, JRIO is dependent on the behavior of these routines and any restrictions associated with them. According to Chapter 23 of the C/C++ Programming Guide:

All MVS files opened in a given thread and still open when 
the thread is terminated are closed automatically by the  
library during thread termination

This means that the thread in which the file was opened may be cleaned up before other threads that are reading or writing the file have completed. If this happens, the JRIO file descriptor (obtained when the file was opened and used for subsequent reads and writes on the file) is no longer valid, which causes the NativeSeqFile error when the read or write is attempted.

A JRIO/RMI-JRMP application can open the file on one socket connection using RMI-JRMP and then try to read or write from the file using another socket connection. With the default socket connection expiration time in RMI-JRMP, if you open a file and then try to read or write from it and the socket has timed out, the read or write may fail because the socket was closed and eventually cleaned up and all resources released.

To prevent this from happening you can increase the default socket expiration time by setting the sun.rmi.transport.connectionTimeout property on the client. Refer to the RMI properties documentation for more information on this property.
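For example, the timeout can be raised to two minutes when starting the client. The property name is documented RMI implementation behavior; the MyRmiClient class name is only a placeholder.

```shell
# Raise the RMI-JRMP socket expiration time from the 15000 ms default
# to 120000 ms (2 minutes); MyRmiClient is a placeholder class name.
java -Dsun.rmi.transport.connectionTimeout=120000 MyRmiClient
```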

Another alternative is to use RMI-IIOP which does not use the default socket timeout value.

Does JRIO support DDNAMES?

Yes. DDNAMES can be used to access data through JRIO for both SDK 1.3 and SDK 1.4. Generation Data Groups (GDGs) are supported when a GDG dataset already exists and the dataset name is specified in the DDNAME statement with DISP=OLD. This support requires PTF level UQ82210 or higher for SDK 1.3.1 (available November 2003) or PTF level UQ81134 (available October 2003) for SDK 1.4.1. (Note: For SDK 1.4.1, please also see APAR PQ80226, defect 65350, which is not included in PTF UQ81134. A temporary fix is available if required through normal service channels.)

The following pieces of code make up the solution for accessing a DDname from JAVA:

  1. JCL to invoke BPXBATSL (This is BPXBATCH with a local spawn option)
  2. A java program that uses JRIO to access a data set or DDname

The test program below calls the JRIO method to open a data set. The data set name passed can be either a regular data set name or a name in the format DD:ddname.

Example program: TestRead.class

syntax from shell command line: java TestRead G254033.GETBASE.EXEC

syntax from JCL job:

//G254033A JOB (ACCOUNT),'G254033',MSGCLASS=X,MSGLEVEL=(1,1),
//         NOTIFY=G254033,CLASS=B,REGION=100M
//*  This job executes java from JCL passing a DDname as a 
//*  parameter BPXBATSL allows you to execute a UNIX program 
//*  running on a task in the same address space.  This makes 
//*  any DDnames on the step invoking BPXBATSL available to  
//*  the program.
//*  The full path must be specified when calling java, 
//*  subsequent environment information can be included via 
//*  the STDENV statement.
//*  Since JCL only supports 100 characters on the PARM=  
//*  keyword, if the program name gets too long, consider 
//*  defining a symlink for it.
//*  The continuing parameter field enclosed in apostrophes
//*  must end at column 71, then must start at col. 16
//*  on the next line with double slash in col. 1&2
//STEP1    EXEC PGM=BPXBATSL,
//          PARM='PGM /tmp/JavaS390/IBM/J1.3/bin/java TestRead 
//            DD:ODDD
//            D'
//*STDIN defaults to /dev/null when left out.
//STDOUT   DD PATH='/home/g254033/temp/java.out'
//STDERR   DD PATH='/home/g254033/temp/java.err'
//ODDDD    DD DSNAME=G254033.GETBASE.EXEC,DISP=OLD

Output goes to java.out and errors go to java.err.

Note that your java program can hardcode the DDnames, as opposed to passing them in as parameters. Your JCL can also define many DDnames which your program can access. When passing the input string to the JRIO function, remember to code it as DD:ddname. This will cause the fopen function in the C run time library to access the DD statement for the data set to be processed.

To access Generation Data Groups (GDGs), you can code the appropriate GDG syntax in the DD statement.
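For example, a relative GDG generation can be referenced on a DD statement like the following sketch; the DD name, dataset name (reused from the policy examples later in this FAQ), and disposition are illustrative.

```
//GDGDD    DD DSNAME=G254033.PRIVATE.GDG(-1),DISP=OLD
```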

Note: JRIO does not support passing HFS file names via the PATH parameter.

Does JRIO support Generation Data Groups?

Yes. GDG absolute and relative names can be passed directly to JRIO, as well as being passed from batch in a DDNAME statement. When a relative name is passed, its absolute name can be obtained from the getAbsolutePath() method. This support requires PTF level UK18621 for the 31-bit SDK5 SR3 (available October 2006) or PTF level UK18623 for the 64-bit SDK5 SR3 (available October 2006).
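A sketch of resolving a relative generation to its absolute name follows. It runs only on z/OS with recordio.jar on the CLASSPATH; getAbsolutePath() is the method named above, while the GDG name and the printed result are illustrative.

```java
import com.ibm.recordio.*;

// Resolve a relative GDG reference to its absolute generation name
// (sketch only; requires z/OS and recordio.jar on the CLASSPATH).
public class GdgName {
    public static void main(String[] args) {
        IRecordFile rf = new RecordFile("//G254033.PRIVATE.GDG(-1)");
        // prints something like //G254033.PRIVATE.GDG.G0001V00
        System.out.println(rf.getAbsolutePath());
    }
}
```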

Why do the file characteristics of a file created using the createFileLike method differ from the characteristics of the model file supplied?

As of PTF level UK18621 for the 31-bit SDK5 SR3 (available October 2006) or PTF level UK18623 for the 64-bit SDK5 SR3 (available October 2006), this limitation no longer exists. The created file has identical characteristics to the model file.

See demo example CopyFile2.java:

PS to PS: com.ibm.recordio.examples.portable.CopyFile2 
   //HLQ.TEST.FILE
   //HLQ.TEST.FILE1

PDS member: com.ibm.recordio.examples.portable.CopyFile2 
   "//HLQ.TEST.FILE(M1)"
   "//HLQ.TEST.FILE(M2)"

PS to HFS: com.ibm.recordio.examples.portable.CopyFile2 
   //HLQ.TEST.FILE /tmp/file1.txt

HFS to HFS: com.ibm.recordio.examples.portable.CopyFile2 
   /tmp/file1.txt /tmp/file2.txt

When creating an MVS dataset, can space parameters other than record length and format be specified?

Yes. Space parameters such as tracks, blocks, and cylinders can now be defined in both SDK 1.3 and SDK 1.4. Support was added to SDK 1.3 at PTF level UQ94379 (available November 2004) and to SDK 1.4 at PTF level UK00802 (available February 2005). Get and set methods for the space attributes of the RecordFile class were added for creation of MVS physical sequential and PDS datasets. VSAM datasets cannot be created by JRIO.

See the demo code SpaceParms.java:

HFS file creation:


   java com.ibm.recordio.examples.portable.SpaceParms 
     /tmp/hfs.default.file

   java com.ibm.recordio.examples.portable.SpaceParms 
     /tmp/hfs.vb.file VB

   java com.ibm.recordio.examples.portable.SpaceParms 
     /tmp/hfs.fb120.file FB 120

PS Dataset creation:

   java com.ibm.recordio.examples.portable.SpaceParms 
     //G254033.PRIVATE.DEFAULTS.PS

   ...SpaceParms //G254033.PRIVATE.FBPS.REC120 FB 120

   ...SpaceParms //G254033.PRIVATE.FBPS.BLK1200 FB 120 1200

   ...SpaceParms //G254033.PRIVATE.VBPS.BLKALLO VB 120 1244 
        BLOCKS 1000 100 0 NORLSE

   ...SpaceParms //G254033.PRIVATE.VBPS.CYLALLO VB 120 1244 
        CYLINDERS 3 1 0

   ...SpaceParms //G254033.PRIVATE.VBPS.TRKALLO VB 120 1244 
        TRACKS 100 50 0 RLSE

PDS creation:

   ...SpaceParms //G254033.PRIVATE.FBPDS.CYLALLO FB 80 1600 
        CYLINDERS 3 1 1 NORLSE

   ...SpaceParms //G254033.PRIVATE.VBPDS.BLKALLO VB 120 1244 
        BLOCKS 1000 100 10 NORLSE

   ...SpaceParms //G254033.PRIVATE.VBPDS.TRKALLO VB 120 1244 
        TRACKS 100 50 15 RLSE

Does the Java security manager allow policy to be set for MVS datasets (including GDGs and DDNAMES)?

The original JRIO code implemented policy checks for read, write, and delete permissions, but only for HFS files; no provision was made for security checks on MVS datasets. Because MVS qualifiers are not recognizable to the security code, an MVS dataset path format needed to be created. Here is the format, along with some examples of how to code MVS datasets into a security policy file.

Format:

/DATASET/HLQ/MLQ/LLQ  (all datasets start with "/DATASET" 
                       followed by the MVS qualifiers)

/DATASET/HLQ/MLQ/LLQ/GDG_RELATIVE_NAME

/DATASET/HLQ/MLQ/LLQ/GDG_ABSOLUTE_NAME

/DATASET/HLQ/MLQ/LLQ/DD_RELATIVE_NAME

/DATASET/HLQ/MLQ/LLQ/DD_ABSOLUTE_NAME

For example an MVS dataset G254033.GB131.EXEC needs to be written into the policy file as:

/DATASET/G254033/GB131/EXEC

For example, a GDG name of G254033.PRIVATE.GDG(-001) can be written into the policy file as either:

/DATASET/G254033/PRIVATE/GDG/-001
/DATASET/G254033/PRIVATE/GDG/-1
/DATASET/G254033/PRIVATE/GDG/-01
/DATASET/G254033/PRIVATE/GDG/G0001V00

For example, a DDNAME of ODDDD can be written into the policy file as either:

/DATASET/DD:ODDDD
/DATASET/G254033/PRIVATE/JRIO001

Policy examples:

/DATASET/MYHLQ/MYMLQ/- (all under middle level qualifier MYMLQ)
/DATASET/MYHLQ/* (all under high level qualifier MYHLQ)
/DATASET/MYHLQ/MYMLQ/MYLLQ/MEMBER (for a PDS member 
    MYHLQ.MYMLQ.MYLLQ(MEMBER) only)
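Put together in a policy file, a grant might look like the following sketch. The assumption that JRIO routes these paths through java.io.FilePermission checks is mine, not stated above, and the codeBase URL and dataset names are illustrative.

```
grant codeBase "file:/u/joe/java/myclasses/" {
    // all datasets under middle level qualifier MYMLQ
    permission java.io.FilePermission "/DATASET/MYHLQ/MYMLQ/-", "read,write";
    // a single dataset, read-only
    permission java.io.FilePermission "/DATASET/G254033/GB131/EXEC", "read";
};
```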

MVS dataset support was added to SDK 1.3 PTF level UQ94379 (available November 2004) and to SDK 1.4 PTF level UK00802 (available February 2005).

Why, after recently updating the IBM SDK for z/OS, does my JRIO program fail with the message: java.lang.IllegalArgumentException: recordLength=32760?

JRIO corrected its handling of variable block (VB) data, and this correction causes application code to fail that previously worked only because of the incorrect handling. The issue is related to the fact that the getInstanceOf method has multiple signatures. Because JRIO now handles VB data properly, you need to use the more general signature intended for opening existing datasets.

Here's an example:

The changes are in the application's Java file: comment out
   the record-length-specific signature

       /*! Too specific; causes the failure:
       m_file = RandomAccessRecordFile.getInstanceOf(
           aName,
           "r",
           IConstants.JRIO_MAX_RECORD_LENGTH,
           IConstants.JRIO_VARIABLE_MODE);
       !*/

   and replace it with the non-specific signature:

       m_file = RandomAccessRecordFile.getInstanceOf(
           aName,
           "r");

The specific signature fails because JRIO_MAX_RECORD_LENGTH (32760) is used, while the maximum LRECL for a VB dataset is JRIO_MAX_VB_RECORD_LENGTH (32756). The getInstanceOf method tries to open the dataset with 32760 and, because the argument does not match the existing LRECL, throws an IllegalArgumentException. Using the non-specific signature lets JRIO use the dataset's current settings; an alternative solution is to pass JRIO_MAX_VB_RECORD_LENGTH. Be sure to be at the following levels for the latest corrections: SDK 1.3 PTF level UK03478 (available May 2005) and SDK 1.4 PTF level UK00802 (available February 2005).

Does JRIO support code pages other than CP1047 (EBCDIC)?

Yes. When WebSphere Application Server 5.0 was released, it ran with an ISO8859-1 (ASCII) code page, and the JRIO code was changed to support all file encodings. JRIO handles all file and directory name conversions passed to it. However, when reading data from files (including VSAM keys), it is up to the user to do the appropriate conversions, since stored file data can be in any form, including binary. Be sure to be at the following levels for the latest corrections: SDK 1.3 PTF level UK03478 (available May 2005) and SDK 1.4 PTF level UK00802 (available February 2005).

Be sure to code your system dependent strings carefully! For example, VSAM keys are binary data:

Change this:

String keyString = "7000000000002551000";
IKey key = new Key(keyString.getBytes());

To this to run correctly in any codepage:

String keyString = "7000000000002551000";
IKey key = new Key(keyString.getBytes("Cp1047"));
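As a runnable illustration of why the explicit charset matters, the JDK's Cp1047 converter (available in standard full-JDK builds) shows that EBCDIC bytes differ from ASCII/ISO8859-1 bytes; the KeyBytes class name is only illustrative.

```java
import java.nio.charset.Charset;

public class KeyBytes {
    // Convert a key string to EBCDIC bytes explicitly, so the result is
    // the same no matter what the JVM's default code page is.
    public static byte[] ebcdic(String s) {
        return s.getBytes(Charset.forName("Cp1047"));
    }

    public static void main(String[] args) {
        byte[] b = ebcdic("70");
        // EBCDIC digits are 0xF0 through 0xF9, unlike ASCII 0x30-0x39
        System.out.println(Integer.toHexString(b[0] & 0xFF) + " "
                + Integer.toHexString(b[1] & 0xFF));  // f7 f0
    }
}
```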

Does JRIO support opening datasets or files with shared dispositions?

Yes, this is new support for JRIO. Previously a disposition of OLD was the default value; now you can share resources between application programs. Support was added in SDK 1.4 PTF level UK07860 (available October 2005). Get and set methods for the disposition type were added to the RecordFile class. VSAM datasets cannot be opened with a shared disposition. (Note: the queue names used when sharing are SPFDSN for reads and SPFEDIT for writes.)

Here is some example code for using the new methods:

import com.ibm.recordio.*;
/*
 * syntax:   java TstDisp datasetName or DDName
 * examples: java TstDisp '//G254033.JCL.CNTL(JOB1)'
 *           java TstDisp //DD:ODDDD
 *           java TstDisp //G254033.GETBASE.EXEC
 *           java -DRIOJADEBUG TstDisp //G254033.GETBASE.EXEC > tdsp.trc 2>&1
 */

public class TstDisp
extends java.lang.Thread
implements IConstants
{
   public TstDisp() {
      super();
   }

   public static void main(String[] args) {
      String fileName = args[0];   // passed-in PS or PDS name
      System.out.println("fileName = " + fileName);
      FileOutputRecordStream fors = null;
      FileInputRecordStream firs = null;
      RandomAccessRecordFile rarf = null;
      try {
        // create the record file object
        IRecordFile rf = new RecordFile(fileName);
        System.out.println("got rf");
        // test the get/set methods and establish a disposition type;
        // other values: JRIO_DSP_TYPE_OLD, JRIO_DEFAULT_DSP_TYPE
        rf.setDspType(JRIO_DSP_TYPE_SHR);
        System.out.println("DspType = " + rf.getDspType());
        // exists() opens the PS or PDS to check members; bypass the
        // check if a DD: name was passed in
        //if (rf.exists()) {
           System.out.println("File: " + fileName + " already exists!");
           // open for read, write, or random access:
           //fors = new FileOutputRecordStream(rf);
           //firs = new FileInputRecordStream(rf);
           rarf = new RandomAccessRecordFile(rf, JRIO_READ_WRITE_MODE);
           //                                    JRIO_READ_MODE);
           // put a delay loop here to check that the correct ENQ was
           // established, or do your reads and writes
           System.out.println("RandomAccessRecordFile opened OK");
           sleep(30000);
           System.out.println("back from sleep");
        //} else {
        //   System.out.println("File: " + fileName + " doesn't exist!");
        //}
      }
      catch (Exception unexpected) {
         System.out.println("IOERROR_ON_CREATE");
         System.err.println("unexpected=" + unexpected);
         unexpected.printStackTrace(System.err);
      } finally {
         try {
            if (rarf != null) {
               rarf.close();
               System.out.println("RandomAccessRecordFile closed OK");
            }
         } catch (java.io.IOException ignored) {
            System.out.println("IOERROR_ON_CLOSE");
         }
      }
   }
}

Does JRIO support dataset type=large?

Yes. Beginning with Java 1.4.2 Service Refresh 10, Java 1.5.0 Service Refresh 7, and Java 6.0 Service Refresh 1, JRIO supports dataset type=large. Note that z/OS system and application updates are also required to support dataset type=large. In preparation for using this function, review the information in the publication DFSMS Using the New Functions, specifically the chapter "Using Large Format Datasets".

