BLKSIZE, the Misunderstood and Abused JCL Parameter

BLKSIZE, block size — What, you ask, is a block (in this sense)? Glad you asked. A block is a bunch of records, like a handful or a scoop. A record, as we ordinarily think of it, is, in mainframe-speak, a Logical Record. A bunch of such records, taken together, is a Physical Record, or block.

BLKSIZE (Block Size) is the reason for the B in FB. And in VB, FBA, VBA, FBM, FBSA, and any other letter combinations for RECFM (RECFM = Record Format) that include the letter B. The B means Blocked. The records are blocked together into groups, and each such group of records constitutes a block.  So when you read a record, the system actually grabs a big handful of records at once, then feeds them to your program one at a time. The system does one big READ instead of a bunch of little ones. Much more efficient. Runs a lot faster. It works the same way when writing records: the system sets aside an area (called a buffer) equal in length to your block size, and gradually fills it up with the records your program writes. When the buffer is full, the system writes it out and starts refilling the area again. You never see any of this happening.

Actually, the system sets aside at least two buffers for each file, so it can start using the second one as soon as the first one is full, without waiting for the WRITE (or READ) to complete.  But we were talking about the size of the buffer(s) — BLKSIZE — not the number of buffers, so back to that . . .

It used to be that you had to let the system know how many records to put into a block. You had to specify the size of the scoop. Yes, the size of the scoop, or block, not the number of records it would hold. (For example, 100 records of 80 characters each would amount to a block 8,000 characters (8000 bytes) in length, BLKSIZE=8000). YOU DO NOT HAVE TO DO THIS ANYMORE except in a few special cases. Almost always you can leave off the parameter BLKSIZE entirely, or, at worst, put BLKSIZE=0 to indicate that you didn’t just forget. The system will then figure it out for you. That’s the best approach. Don’t ever specify a BLKSIZE other than zero unless you have a good reason.
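In JCL terms, letting the system pick might look something like the sketch below (the dataset name and space figures are made up for illustration; BLKSIZE=0 could equally be omitted entirely):

```jcl
//NEWFILE  DD DSN=YOURID.SOME.DATASET,DISP=(NEW,CATLG,DELETE),
//            SPACE=(TRK,(500,50)),UNIT=SYSDA,
//            RECFM=FB,LRECL=80,BLKSIZE=0
```

The system sees the zero (or the absence of BLKSIZE) and calculates the best block size itself.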

If you are writing a program and you find that the file definition within your program seems to require you to specify how many records are in a block, you can generally (and generally should) say zero: BLOCK CONTAINS ZERO RECORDS, for example, in COBOL, or possibly BLOCK CONTAINS 0 RECORDS, with a numeric digit zero rather than the word ZERO. It is important that you NOT specify some particular number of records other than zero, because when your program is compiled that file definition will be used to build a DCB for the file, and whatever is specified in that DCB will override anything specified in the JCL. So don't do it. Let the system figure out the BLKSIZE; let there be zero in the BLKSIZE in your compiler-constructed DCB.

The other important thing to know about BLKSIZE is that I/O operations are slow. (I/O = Input and Output, that is, READ and WRITE). For most types of Data Set, using the common access methods, one physical operation is required each time you read or write a block.

Hence, if you read 100 records from a Data Set with BLKSIZE=80, that causes a hundred physical READ operations compared to only one if BLKSIZE=8000, and performing 100 separate physical operations takes roughly 100 times as much elapsed time as doing it just once. (System-determined BLKSIZE for an FB 80 data set will usually be 27920, enough for 349 eighty-byte records per block. I just use 100 in the example because it's easier to do the math in your head.)
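The arithmetic is simple enough to sketch in a few lines of Python (a toy model, ignoring partial blocks at end-of-data):

```python
import math

def physical_reads(num_records, lrecl, blksize):
    """How many physical READs it takes to process num_records
    logical records of length lrecl with a given BLKSIZE."""
    records_per_block = blksize // lrecl   # full logical records per block
    return math.ceil(num_records / records_per_block)

print(physical_reads(100, 80, 80))       # 100 physical reads
print(physical_reads(100, 80, 8000))     # 1 physical read
print(physical_reads(34900, 80, 27920))  # 100 full blocks
```

The last line uses the numbers from the experiment below: 34,900 eighty-byte records at 349 records per block is exactly 100 blocks.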


Because people tend to have a bit of intuitive difficulty comprehending how very slow I/O is compared to almost anything else on a computer, I generally suggest to any doubters that they do the following experiment.

Go into ISPF 3.2 and allocate two Data Sets, both with Data Set type BASIC, which means ordinary flat files. Specify the SPACE in terms of tracks, allowing about 500 tracks for each Data Set. Both should have RECFM (Record Format) FB and LRECL=80. One should have BLKSIZE=80. The other should have BLKSIZE=0, which will give you a system-determined BLKSIZE, usually 27920, enough for 349 eighty-byte records per block.
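If you prefer batch to ISPF panels, a roughly equivalent allocation could be sketched in JCL like this (the job card, dataset names, and UNIT are placeholders you would replace with your own):

```jcl
//ALLOC    EXEC PGM=IEFBR14
//SLOW     DD DSN=YOURID.TEST.BLK80,DISP=(NEW,CATLG),
//            SPACE=(TRK,(500,50)),UNIT=SYSDA,
//            RECFM=FB,LRECL=80,BLKSIZE=80
//FAST     DD DSN=YOURID.TEST.BLK0,DISP=(NEW,CATLG),
//            SPACE=(TRK,(500,50)),UNIT=SYSDA,
//            RECFM=FB,LRECL=80,BLKSIZE=0
```

IEFBR14 does nothing at all; the allocations happen as a side effect of the DD statements.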

That done, go into ISPF Edit on the second empty Data Set, the one with BLKSIZE=27920. Overtype the left-hand line number field of the first line with R34899, indicating you want the line to be replicated 34,899 times. Put any data at all in the data portion of the line, and press Enter. That should give you 34,900 identical records (100 full blocks). Save and end.

Next, Edit your other empty Data Set, the one with BLKSIZE=80. On the command line, put in COPY 'XXX', substituting for XXX the name of the Data Set where you just put the 34,900 identical records. Press Enter. The same data should now appear here also. Save and end.

You should now have two Data Sets that are identical except for the BLKSIZE.

You might have already noticed that "Save and end" took a good bit longer on the BLKSIZE=80 Data Set. That was not a random quirk. Go into Edit on each Data Set again, alternating between them a few times. It will take decidedly longer to get into Edit on the BLKSIZE=80 file than on the other. If you SAVE the Data Sets again, that too will exhibit the same difference in response time. Amazing, right?

If you doubt this at all, don't take my word for it, try the experiment. Really. In that way you will get a feeling for the true meaning of BLKSIZE.

As a bonus, go to ISPF 3.4, put in a Data Set name pattern that will match your two new Data Sets, select "Initial View" as 2 (space), and press enter. When the two Data Set names show up, type F in the "Command" column to the left of each and press enter, hence freeing the unused portion of each Data Set's allocated space. Look at the space each one is using, under the "Tracks" column. Yes, the BLKSIZE=80 Data Set uses much more space than the other.

End of Experiment

The system does not choose 27920 with a random number generator. It picks that size for efficient use of disk space. Two such blocks will fit on one track of disk (for most disk space).
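The choice of 27920 can be reconstructed with a little arithmetic. Assuming the commonly cited figure of 27998 bytes as the largest block size that still lets two blocks fit on one 3390 track, the system rounds that capacity down to a multiple of the record length:

```python
HALF_TRACK = 27998  # largest block allowing two blocks per 3390 track (commonly cited)

def half_track_blksize(lrecl):
    """Largest multiple of LRECL that still fits two blocks per track --
    the same style of calculation system-determined BLKSIZE performs."""
    return (HALF_TRACK // lrecl) * lrecl

print(half_track_blksize(80))       # 27920
print(half_track_blksize(80) // 80) # 349 records per block
```

That is where both the 27920 and the 349 records per block come from.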

Space on disk, as you know, is generally measured in units of tracks. On all modern disks, fifteen tracks are equivalent to one cylinder.


The newest disks, called EAV (Extended Address Volumes), are a model of 3390 disk that consists mostly of an "extended area", or "extended addressing area", in which space is allocated only in units of cylinders, and in most cases the allocation is rounded upward to the nearest 21 cylinders. (21 cylinders is the current value of the "multi-cylinder unit" that IBM uses.) The tracks are still there, but they're considered to be like old halfpenny coins, not worth mentioning.

Before long most places will probably have the EAV disks.

If your Data Set is allocated in the extended area, and you use system-determined BLKSIZE, the system will subtract 32 bytes per block when calculating the optimal BLKSIZE for the data set. The extra 32 bytes on disk are used by the system for control information (in the form of an invisible-to-you suffix following each block). The calculation will usually subtract more than 32 in any specific case, though, because BLKSIZE must ordinarily be an exact multiple of the logical record length (LRECL). Hence your Data Set will have a slightly smaller optimal BLKSIZE if it resides in the extended addressing area.

The extended address area is also called EAS (Extended Addressing Space). It can also be called cylinder-managed space, because single tracks are not assigned to data sets there; the smallest amount of space allocated to a data set within EAS is generally one multi-cylinder unit, currently 21 cylinders. If you ask for one track, the system will round the request upward to 21 cylinders. Whenever the data set grows and gets more space allocated, that secondary space is also assigned in multiples of 21 cylinders.
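The rounding rule is easy to model. A small sketch, assuming the 21-cylinder multi-cylinder unit and 15 tracks per cylinder described above:

```python
MCU_CYLS = 21        # multi-cylinder unit in the extended addressing space
TRACKS_PER_CYL = 15  # tracks per cylinder on 3390 disks

def eas_cylinders(requested_tracks):
    """Cylinders actually allocated in EAS for a given track request:
    convert to cylinders, then round up to the next multi-cylinder unit."""
    cyls = -(-requested_tracks // TRACKS_PER_CYL)  # ceiling division
    return -(-cyls // MCU_CYLS) * MCU_CYLS

print(eas_cylinders(1))    # ask for 1 track, get 21 cylinders
print(eas_cylinders(400))  # 400 tracks -> 27 cylinders -> rounded to 42
```

Even a one-track request costs a full 21 cylinders in cylinder-managed space.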

What happens if your Data Set is moved from someplace else and it goes into the extended area? If you are using system-determined BLKSIZE, that is, BLKSIZE=0 or unspecified, then the system will automatically recalculate the BLKSIZE when the Data Set is moved.  It will also reblock the data. No problem.

Where Does BLKSIZE come from?

Alas, you cannot always use system-determined block size.

The system can get the BLKSIZE from your JCL, true. It can also get the BLKSIZE by looking at the existing BLKSIZE of a Data Set.

Your JCL, incidentally, takes precedence over what is already specified on an existing Data Set. So if you have an existing Data Set with BLKSIZE=800, you can change that whenever your program writes to the Data Set. Whatever you specify in the JCL will override what is already there.

That last point can lead to a problem and also to its solution. Occasionally it happens that someone has some old JCL that specifies some nonsense like BLKSIZE=3120, and they use this JCL to write into a member of a library (or PDS). A common example is old Compile-and-Link JCL. This causes the existing BLKSIZE (saved as a number in the label of the Data Set) to be changed to 3120. If it used to be bigger than that, you have a problem. Many existing members of the Data Set actually have blocks bigger than 3120, and when you try to read them thereafter you get an I/O error message. Oops.

Fortunately the solution is equally simple. You get similar JCL and change the 3120 to the correct number, that is, whatever it used to be. Or if it’s a PDS you can just run COMPRESS JCL with the BLKSIZE specified as the correct number. When the system writes into the Data Set, it will change the BLKSIZE back to whatever you specified. If you don’t want to write into the existing Data Set, you can make a copy of it by specifying the old, larger BLKSIZE on the input DD statement. The system will use the BLKSIZE you specify, and will not even look at the 3120 specified in the label.
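A compress along those lines might be sketched like this with IEBCOPY (the dataset name, and the "correct" BLKSIZE of 27920, are stand-ins for your own values):

```jcl
//COMPRESS EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//MYPDS    DD DSN=YOURID.BROKEN.PDS,DISP=OLD,
//            BLKSIZE=27920
//SYSIN    DD *
  COPY OUTDD=MYPDS,INDD=MYPDS
/*
```

Naming the same DD as both INDD and OUTDD is what makes it a compress-in-place, and the BLKSIZE on the DD statement overrides the bad value in the label.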

But that was a digression.

There is a third place where the system can obtain the BLKSIZE. Sadly, this third choice takes precedence over what you specify in the JCL. The program you are running can have BLKSIZE specified – hard-coded – within the program.

If a program has BLKSIZE specified internally, there’s not much you can do about it.

Why does anyone specify BLKSIZE inside the program? Mostly because they are copying old program code that had BLKSIZE specified, written back in the day when BLKSIZE was required, and/or they thought they could simplify either the JCL or their program logic by hard-coding a number in the program. Oh well.

If you leave BLKSIZE off your JCL, or you specify BLKSIZE=0, then if the Data Set that is created has some weird BLKSIZE like 800 or 3120, it is because the program you were running specified BLKSIZE within the program.

BLKSIZE specified within a program overrides everything else. Sorry.

Other BLKSIZE considerations

The only time I ever specify a small BLKSIZE is when some job step uses a lot of memory and has a large number of DD statements for a lot of files that will be open at the same time, AND that job is getting 80A ABENDs, 0C4s, and similar problems. Smaller BLKSIZE then is a trade-off. Each open Data Set will have at least one buffer allocated in memory, and the size of that buffer is about the same as the size of one physical block. Smaller BLKSIZE means smaller buffers.

On rare occasions it is not possible to use system-determined BLKSIZE, because your system is not using SMS (System Managed Storage), or because the particular disk you are using is defined to the system as excluded from the control of SMS. The first case is super rare. The second case occurs sometimes for disks shared between two separate z/OS systems, when the administrators (aka System Programmers) want to be sure that one system does not move any of the Data Sets off the shared disks onto some other disk accessible to only one of the systems.

Oh, yeah, by the way, SMS not only determines block sizes for you, it does lots of other useful things too, such as migrating unused Data Sets and bringing them back again as needed. As must be with such things, occasionally SMS does something you didn’t want it to do. On balance, though, it is – well, not the best thing since sliced bread, but probably the best thing since HASP.

HASP? you might ask. Decades ago some IBM customers in Texas sped up processing by creating a spooling add-on for the mainframe operating system. IBM bought out HASP and made it part of the mainframe operating system(s), where its current incarnations are called JES, as in JES2 or JES3. The S in HASP stood for Spooling: Houston Automatic Spooling Priority subsystem. Prior to spooling, each record in a printed output file was written directly to a printer; that is, the printer was treated the same as a disk or any other attached device. Early printers were generally very slow. You can only imagine how much that slowed down a job's total elapsed run time, right? Good for you. JES stands for Job Entry Subsystem. A job that enters the z/OS system, as from a "card reader" or an "internal reader", or via the SUBMIT command in TSO, is spooled while it awaits execution, just as output print is spooled — so the spooling system is sort of "two mints in one", spooling both input and output. Spooling means the spooled material is put into a big holding space on disk, and that holding place is called the Spool. But enough about spooling.

Also note that the system-determined BLKSIZE will give you the best use of disk space. A badly chosen BLKSIZE can cause your data set to take up several times as much disk space as a properly allocated equivalent Data Set.

That’s it for today. Best thing you can do with BLKSIZE is to specify BLKSIZE=0 or omit BLKSIZE entirely whenever you can.  Let the system figure out what BLKSIZE to use.  It's one of the things the system does best.   


(addendum 10 May 2016)

What is the difference between BLKSIZE and LRECL?  you may ask. (Really, some people have asked that.  Or used it for a search term and gotten to this article thereby.)  So. Here goes:

A record, as you normally think of it, is called a Logical Record.   When you look at a dataset (file) in Edit or Browse, you normally see one record per line – one Logical Record.  When your computer program refers to reading or writing a data record, again that ordinarily means a logical record.

The size, or Length, of a Logical Record is LRECL  (Logical RECord  Length).

A Physical Record (a block) contains one or more Logical Records. Hence BLKSIZE (the size, or maximum size, for a block) is generally larger than the LRECL (the length, or maximum length, of a Logical Record).

BLKSIZE (Block Size) is related to the B in FB  — and in VB, FBA, VBA, FBM, FBSA, and all other RECFM combinations. The B means Blocked. The (logical) records are blocked into groups to form physical records.

What did that mean, you ask: that LRECL is the "… maximum length, of a Logical Record"?

For Fixed-length records, the logical record length is the data length – it's the length of what you see when you look at one line of the dataset in Browse or Edit. The length of the line is the record length, it is the same for every record in the dataset, and it is the value you use for LRECL for Fixed-length records. Fixed-length records are any records where the RECFM is F, FB, FBA, FBM, FBS, FBSA – any RECFM with an F in it – the F stands for Fixed.

Different is the situation for Varying-length records. When you specify LRECL (in JCL or in ISPF 3.2, etc.), the value you need to specify is the maximum length of any logical record that can be in the dataset, plus 4. Yes, for varying-length records you have to add 4 to the length of the longest line you can have; this is the actual length of the logical record as it is stored on disk. Every such varying-length data record is prefixed with a four-byte "Record Descriptor Word" (RDW) that contains the length of that particular data record on disk (inclusive of said RDW). Each block in these datasets also carries an additional 4 bytes similar to the RDW, but here it is called a BDW (Block Descriptor Word). Varying-length records are any records where the RECFM is V, VA, VB, VBA, VBS, VBSA – anything with a V in it – the V stands for Varying.
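The RDW bookkeeping above amounts to a couple of additions, sketched here in Python:

```python
RDW = 4  # Record Descriptor Word, prefixed to every varying-length record
BDW = 4  # Block Descriptor Word, prefixed to every block in a V-format dataset

def vb_lrecl(longest_line):
    """LRECL to specify for a VB dataset whose longest line is longest_line bytes."""
    return longest_line + RDW

def stored_record_length(data_length):
    """Bytes one varying-length record occupies inside its block (RDW included)."""
    return data_length + RDW

print(vb_lrecl(80))              # an 80-byte line needs LRECL=84
print(stored_record_length(10))  # a 10-byte line occupies 14 bytes in the block
```

So an editor line of 80 characters in a VB dataset implies LRECL=84, which matches the "plus 4" rule.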

. . . and why did I say a physical block is "generally larger" than a Logical Record? What "generally"? Like, not always? Right. There are two exceptions: short blocks and spanned records.

The S in VBS is for Spanned records. Spanned in this sense means that an individual record can overlap blocks. In the simplest case a spanned record resides partly at the end of one block, with the rest of the data deposited at the beginning of the next block. So if a record is, say, 4,000 bytes long, but it is being written at the end of a buffer that has only 3,999 unused bytes left, then, rather than waste space, the system will write as much as it can into the last part of that block, and then put the rest of the data record at the beginning of the next buffer. (When you open a dataset for output, the system sets aside at least one buffer for it in memory; the size of the buffer is equal to the BLKSIZE; when a buffer is full the buffer is written to disk as a physical block.) In the not-as-simple case, a logical record can overlap several physical blocks. In z/OS there is an actual limit on BLKSIZE of 32760, imposed by the design of the system, but for spanned records the logical record length can exceed the BLKSIZE. Hence nowadays it is possible to have extremely long logical records if you use RECFM=VBS together with LRECL=X, which was invented precisely to allow such longer records. How does the system stitch these spanned records back together when you read them back? For a dataset with spanned records, that is, with a RECFM containing both the letters V and S, the RDW contains more complex information beyond just the record length.

The second case is simpler: the actual size of a block can be as small as the LRECL. Or similarly, a block might contain a few records, but not enough to fill the block. Such a block is referred to as a "short block", if you like to pick up jargon.

For Fixed-Length records (anything with an F in the RECFM value) or Unformatted (anything with U in the RECFM value), if a block contains only one record – maybe that’s all the data you have in the dataset – then for that one record the actual physical block on disk will be the size of that one record, regardless of what you have for BLKSIZE. (Or if the block contains two records, the size of the block will be equal to the length of those two records together, and so on for anything less than the maximum that would fit inside a full block.)  This situation usually occurs only for the last block written to the dataset.  

The LRECL also equals the BLKSIZE for RECFM=F, that is, Fixed unblocked records, one logical record per physical block.  Unless your record length is quite large, please don’t do that without a good reason.  It tends to make reading and writing the data quite slow.

This brings us to the peculiar RECFM called FBS. This seems to be the only case where a letter in the RECFM value can have more than one meaning. That letter of course is S. For varying-length records, the S means Spanned, as just discussed. For Fixed-length records, the S means Standard. FBS is Fixed Block Standard. What does standard mean in this sense? All the blocks in the dataset have to be the same length (a standard length) except for the last block, which can be a "short block". The last block is exempted from the requirement for an obvious reason: depending on the number of logical records written into the dataset, there may not be enough records left to fill a final block of the specified size.

Hence the RECFM=FBS format lends itself to a peculiar problem:  You cannot append data onto the end of it by writing additional blocks.  Well, you can, for example if you trick the system by specifying RECFM=FB in your JCL, or inside your program.  However, if another program then tries to read back the added data using RECFM=FBS mode, haha, they will crash when they come to the short block that used to be at the end but is now someplace in the middle.

So, hopefully you now understand the difference between LRECL and BLKSIZE, if that was puzzling you earlier.

Again, that's it for the article on BLKSIZE for today.

Reminder, bottom line: use BLKSIZE=0 or leave BLKSIZE unspecified and let the system assign it – as a general rule, that is, unless you have a specific (albeit rare) reason to do otherwise.

(end of May 10th addendum)

