Put the Calendar on your TSO/ISPF home page

You want to put the calendar on your TSO/ISPF home page, that is, your ISPF Primary Option Panel?

Yes, we're back to ISPF settings. There are so many!  The calendar is a popular one.

There are a couple of ways to do this.  First we'll do it the quick and easy way.

Go to the ISPF primary option menu, the default initial screen display in ISPF, the home page.

The very top line of the TSO/ISPF display, above the dotted line, is called the "Action Bar".
This is the line that starts where it says "Menu" at the upper left of the ISPF screen.
The fifth choice from the left says "Status".  Place your cursor over the word "Status" on that top line, and with the cursor positioned on "Status", press enter.

  Menu  Utilities  Compilers  Options  Status  Help 
                            ISPF Primary Option Menu  
 Option ===>  
  0  Settings      Terminal and user parameters            User ID . : Your-ID  
  1  View          Display source data or listings         Time. . . : 17:33 
  2  Edit          Create or change source data            Terminal. : 3278T  

The "Status display" drop-down menu should appear.  Yes, that decorative display area on the right-hand side of your primary menu is called the Status display.

Looking at the drop-down list, you notice that option 3 in the list says "Calendar".

To select 3 for calendar, you type the number 3 in the itty-bitty entry area on the top line of the box, and press enter.

 Menu  Utilities  Compilers  Options  Status  Help 
 ------------------------------------ *----------------------------*
                            ISPF Prim | 3  *. Session              |
 Option ===>                          |    2. Function keys        |
                                      |    3. Calendar             |
 0  Settings      Terminal and user p |    4. User status          |
 1  View          Display source data |    5. User point and shoot |
 2  Edit          Create or change so |    6. None                 |
 3  Utilities     Perform utility fun *----------------------------*
 4  Foreground    Interactive language processing
 5  Batch         Language Processors
 6  Command       Enter TSO or Workstation commands

Voila, the calendar appears on the screen:

 Menu  Utilities  Compilers  Options  Status  Help 
                            ISPF Primary Option Menu  
 Option ===>  
 0  Settings      Terminal and user parameters            < Calendar >   
 1  View          Display source data or listings            May       2017 
 2  Edit          Create or change source data            Su Mo Tu We Th Fr Sa 
 3  Utilities     Perform utility functions                   1  2  3  4  5  6   
 4  Foreground    Interactive language processing          7  8  9 10 11 12 13 
 5  Batch         Language Processors                     14 15 16 17 18 19 20 
 6  Command       Enter TSO or Workstation commands       21 22 23 24 25 26 27 
 7  Dialog Test   Perform dialog testing                  28 29 30 31  
                                                          Time . . . . : 05:38 
                                                          Day of year. :   142 
Enter X to Terminate using log/list defaults  

Okay, you might say, you're done here now.
But if you read a little further you'll see how you can set some fun options, like colors.

This brings us to method 2.

Yes, method 1 was the easy way, and method 2 is the way where you can set options.

Again starting from your ISPF Primary Option Menu, this time you select the "Menu" option in the "Action bar", that is, move your cursor up to where it says "Menu" in the top left corner of the ISPF screen, and then, with the cursor on "Menu", press enter.

When you press enter with the cursor on "Menu", you will see a drop-down selection box appear directly under the word "Menu".

In that drop-down box, you select 8 for Status Area by typing the number 8 in the little bitty entry field at the left of the top line in the drop-down box, and again you press enter.

 Menu  Utilities  Compilers  Op 
 | 8  1. Settings              |  
 |    2. View                  |  
 |    3. Edit                  |  
 |    4. ISPF Command Shell    |  
 |    5. Dialog Test...        |  
 |    6. Other IBM Products... |  
 |    7. SCLM                  |  
 |    8. Status Area...        |  
 |    9. Exit                  |  

After you select 8, you will get still another drop-down box (or pop-up box, if you prefer to think of it that way).  The box will show a copy of your current "Status" display.

Focus on the top line inside that box where it says "Status Options" — that is where you want to place the cursor over "Status" and press enter.

 Menu  Utilities  Compilers  
   |   Status  Options       |   
   | ----------------------  |    
   |  ISPF Status            | 
   | Command ===>            |  
   |                         |  
   |                         |  


You will now get yet another box, overlaying most of the previous box. This newest box has numbered options.

Option 3 is Calendar, unless your choice has already been set to Calendar, in which case the 3 will be covered by an asterisk, and you skip this step.

Assuming you do not currently have the Calendar option turned on, you now select 3 for "Calendar" by typing the number 3 in the itty bitty little entry field and then pressing enter. Doing this will turn on the Calendar display.

Now get back to that box that says "Status Options", put the cursor over the word "Options", and press enter.
You now get — Can you guess? — another drop-down box.
This one contains, unsurprisingly, options you can set. The first two choices are "1. Calendar Start day" and "2. Calendar Colors".

Select 2 for colors and you will get another little mini-display that lists the names of the Calendar fields you can change in the left-hand column. Each is followed by its own little tiny entry field, and there is another column showing the available color choices numbered 1 through 8.

 Menu  Utilities  Compilers  Options  Status  Help 
  *-------------------------* ----------------------------------------------  
  |   Status  Options       | F Primary Option Menu   
  | - *------------------------- Calendar Colors --------------------------*  
  | S |                                                          Defaults  | 
  | C |                                                                    | 
  |   |   Change one or more of the Calendar colors and press enter to     | 
  |   |   immediately see the effect. Clearing a field restores defaults.  | 
  |   |                                                                    |   
  |   | Field:            Color:    Valid Colors:    Sample:               | 
  |   | Scroll Button . . . 8       1. White         < Calendar >  | 
  |   | Heading Date  . . . 8       2. Red              July      1995     |  
  |   | Heading Text  . . . 8       3. Blue          Su Mo Tu We Th Fr Sa  | 
  |   | Weekday . . . . . . 8       4. Green                            1  | 
  |   | Saturday/Sunday . . 8       5. Pink           2  3  4  5  6  7  8  | 
  |   | Current Day . . . . 8       6. Yellow         9 10 11 12 13 14 15  | 
  |   |                             7. Turq          16 17 18 19 20 21 22  | 
  |   |                             8. CUA default   23 24 25 26 27 28 29  |  
  |   |                                              30 31                 |   
  |   |                                              Time . . . . : 10:10  | 
  |   |                                              Day of year. :   202  | 
  |   |                                                                    | 
  *-- |                                                                    | 
      |                                                                    | 

Turq is short for turquoise, also known as aqua, aquamarine, peacock blue, sometimes called teal — that green-blue color with the power to inspire otherwise sane people to argue over whether it is "really" a shade of green or of blue. The particular shade of turquoise used for 3270 screens is noteworthy for being a bright pastel, so it stands out from a black background, like a softened shade of white.

Assuming you use the IBM 3270 emulator, you can actually manipulate the 3270 emulator settings to revise the color values, so you can adjust the value of fields currently designated as "Turq" to be closer to green or closer to a pale sky blue. Or maybe you don't like pink — you can adjust normally pink fields to be more of a plum purple or peach orange or any other color you like. If you're red-green color blind, and you don't use the newish corrective lenses for that, you can go into the 3270 emulator settings and pick out any shades of colors you can easily distinguish and assign your own personal color choices to be used for whatever fields are by default green, red, or anything else. Or if you see colors the same way most people do, but the only ones you like are red and grey, you can pick a set of variations on red and grey, though personally I wouldn't advise it in an office setting. But that's all a digression. The point was that the "Turq" choice on the screen stands for turquoise — whatever turquoise might be. Surely you don't think it was just a sneaky way to get you interested in exploring what you can do with your 3270 emulator settings. Anyway, back to the Calendar settings.

If you've gotten to the screen shown just before the digression, you're in the ISPF Colors for Calendars place, and you can reassign colors to field types on the calendar display.

Let's say you want Weekdays to be Blue, which is color number 3 in the "Valid Colors" column, and you want Saturdays and Sundays, that is, weekends, to be Red, which is color number 2. You find the word "Weekday" listed in the left-hand column, the column titled "Field:". You type the digit 3 in the one-byte entry field just to the right of the word "Weekday". You then enter the digit 2 in the entry field that comes right after "Saturday/Sunday". You want the current day to be highlighted in white? You put the digit 1 in the entry field after "Current Day". Set any other color choices you want in a similar way. Press enter. At the far right of the little mini-screen, under the title "Sample:", you will see a copy of a calendar month display. It should change colors to reflect your chosen colors right after you press the enter key. You can hang around on this mini-screen revising the colors until you hit a pattern you like. Bingo, you're done.

Press F3 as needed to get back to the ISPF home page, aka the Primary Option Menu, and carry on with your day.

You can, of course, repeat the exercise, and this time choose to revise "1. Calendar Start day… " rather than "2. Calendar Colors… ", but I'll leave that to you to explore on your own.

IBM z/OS MVS Spooling: a Brief Introduction

This is a brief introduction to IBM z/OS MVS mainframe spooling.

Spooling means a holding area on disk is used for input jobs waiting to run and output waiting to print.

The holding area is called spool space.  The imagery of spooling was probably taken from processes that wind some material such as thread, string, fabric or gift wrapping paper onto a spindle or spool.  In effect, spooling is a near-synonym for queueing.  Those paper towels on that roll in the kitchen are queued up waiting for you to use each one in turn, just like a report waiting to be printed or a set of JCL statements waiting for its turn to run.

On very early mainframe computers, the system read input jobs from a punched card reader, one line at a time, one line from each punched card.  It wrote printed output to a line printer, one line at a time.  Compared to disks – even the slower disks used decades ago – the card readers and line printers were super slow.  Bottlenecks, it might be said.  The system paused its other processing and waited while the next card was read, or while the next line was printed.  So that methodology was pretty well doomed.  It was okay as a first pass at getting a system to run — way better than an abacus — but that mega bottleneck had to go.  Hence came spooling.

HASP, the Houston Automatic Spooling Priority program (system, subsystem), was early spooling software used with OS/360 (the ancestral precursor of z/OS MVS).  (See the HASP origin story, if interested.)  HASP was the basis of development for JES2, which today is the most widely used spooling subsystem for z/OS MVS systems.  Another fairly widely used current spooling subsystem is JES3, based on an alternate early system called ASP.  We will focus on JES2 in this article because it is more widely used.  JES stands for Job Entry Subsystem.  In fact JES subsystems oversee both job entry (input) and processing of sysout (SYStem OUTput).

Besides simply queueing the input and output, the spooling subsystem schedules it.  The details of the scheduling form the main point of interest for most of us. Preliminary to that, we might want to know a little about the basic pieces involved.

The Basic pieces

There are input classes, also called job classes, that control scheduling and resource limits

There are output classes, also called sysout classes, that control output print

There are real physical devices (few card readers, but many variations of printers and vaguely printer-like devices)

There are virtual devices. One virtual device is the “internal reader” used for software-submitted jobs, such as those sent in using the TSO submit command or FTP.  Virtual output devices include “external writers”.  An external writer is a program that reads and processes sysout files, and such a program can route the output to any available destination.  Many sysout files are never really printed, but are viewed (and further processed) directly from the spool space under TSO using a software product like SDSF.

There is spool sharing.  A JES2 spool space on disk (shared disk, called shared DASD) can be shared between two or more z/OS MVS systems with JES2 (with a current limit of 32 systems connected this way).  Each such system has a copy of JES2 running. Together they form a multi-access spool configuration (MAS).  Each JES2 subsystem sharing the same spool space can start jobs  from the waiting input queues on the shared spool, and can also select and process output from the shared spool.

There is checkpointing. This is obviously especially necessary when spool sharing is in use.

There is routing.  Again, useful with spool sharing, to enable you to route your job to run on a particular system, but also useful just to route your job’s output print files to print on a particular printer.

There are separate JES2 operator commands that the system operator can use to control the spooling subsystem, for example to change what classes of sysout can be sent to a specific printer, or what job classes are to be processed.  (These are the operator commands that start with a dollar sign $, or some alternative currency symbol depending on where your system is located.)

There is a set of very JCL-like control statements you can use to specify your requirements to the spooling subsystem.  (Sometimes called JECL, for Job Entry Control Language, as distinct from plain JCL, Job Control Language.)  For JES3, these statements begin with //* just like an ordinary JCL comment, so a job that has been running on a JES3 system can be copied to a system without JES3 and the JES3-specific JECL statements will simply be ignored as comments.  For JES2, on which we will focus here, the statements generally begin with /* in columns 1 and 2.  Common examples you may have seen are /*ROUTE and /*OUTPUT but notice that the newer OUTPUT statement in JCL is an upgrade from /*OUTPUT and the new OUTPUT statement offers more (and, well, newer) options.  Though the OUTPUT statement is newish, it is over a decade old, so you probably do have it on your system.

There are actual JCL parameters and statements that interact with JES2, such as the OUTPUT parameter on the DD statement, and the just-mentioned OUTPUT statement itself, which is pointed to by the parameter on the DD. 

Another example is the CLASS parameter on the JOB statement, which is used to designate the job class for job scheduling and execution.  The meanings of the individual job classes are totally made up at each site.  Some small development company might have just one job class for everything.  Big companies typically create complicated sets of job classes, each class defined with its own limits for resources such as execution time, region size, even the time of day when the jobs in each class are allowed to run.  Your site can define how many jobs of the same class are allowed to run concurrently, and the scheduling selection priority of each class relative to each other class.

Sometimes sites will set up informal rules which are enforced not by the software but by local working rules, so that everyone there is presumed to know that they are only allowed to specify, for example, CLASS=E for emergency jobs.  (That’s one I happened to see someplace.)  If you want to know what job CLASS to specify for your various work, your best bet is to ask your co-workers, the people who are responsible for setting up the job classes, or some other knowledgeable source at your company.  Remember you can be held accountable for following rules you know nothing about that are not enforced by any software configuration, so don’t try to figure it out on your own; ask colleagues and other appropriate individuals what is permissible and expected.  Not joking.

The JES2 Init & Tuning books (Guide and Reference) define how JES2 job classes can be configured, if you’re just curious to get a general idea of what the parameters are.  The JES2 proc in proclib usually contains a HASPPARM DD statement pointing to where the JES2 configuration parameters live on any particular system.

In some cases similar considerations can apply for the use of SYSOUT print classes and the routing of such output to go to particular printers or to be printed at particular times.  The SYSOUT classes, like JOB classes, are entirely arbitrary and chosen by the responsible personnel at each site.  

MSGCLASS on the JOB statement controls where the job log goes — the JCL and messages portion of your listing.  The values you can specify for MSGCLASS are exactly the same as those for SYSOUT (whatever way that may be set up at your site).  If you want all your SYSOUT to go to the same place, along with your JCL and messages, specify that class as the value for MSGCLASS= on your job statement, and then specify SYSOUT=* on all of the DD statements for printed output files.  (That is, specify an asterisk as the value for SYSOUT= on the DD statements.)  
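To make that concrete, here is a small sketch of a job using these parameters.  The class letters (and everything else here) are invented for illustration; your site's classes will differ, so ask before borrowing them:

```jcl
//MYJOB    JOB (ACCT),'COPY DEMO',CLASS=A,MSGCLASS=X
//*  CLASS=A    job class for scheduling (site-defined)
//*  MSGCLASS=X where the JCL and system messages go (site-defined)
//COPYSTEP EXEC PGM=IEBGENER
//SYSPRINT DD  SYSOUT=*         ASTERISK = SAME CLASS AS MSGCLASS
//SYSUT1   DD  DSN=MY.INPUT.DATA,DISP=SHR
//SYSUT2   DD  SYSOUT=*
//SYSIN    DD  DUMMY
```

With MSGCLASS=X and SYSOUT=* on every print DD, the whole listing — JCL, messages, and the SYSPRINT output — ends up together in class X.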

In many places, SYSOUT class A indicates real physical printed output on any printer, class X indicates routing to a held queue where it can be viewed from SDSF, and class Z specifies that the output immediately vanishes (Yup, that's an option).  However, there is no way to know for sure the details of how classes are set up at your particular site unless you ask about it.

Sometimes places maintain “secret” classes for specifying higher priority print jobs, or jobs that go to particular special reserved printers, and the secrets don’t stay secret of course.  Just because you see someone else using some print class, don’t assume it means it’s okay for you to use it for any particular job.  Ask around about the local rules and expectations.

So, for MSGCLASS (aka SYSOUT classes), as for JOB classes, the best thing is to ask whoever sets up the classes at your site; or, if that isn't practical, ask people working in the same area as you are, or just whoever you think is probably knowledgeable about the local setup.  Classes are set up by your site, for your site. 

An example of a JES2-related JCL statement that you have probably not yet seen was introduced with z/OS 2.2 – the JOBGROUP statement, plus an entire set of about ten associated statements (ENDGROUP, SCHEDULE, BEFORE, AFTER, CONCURRENT, and the rest) – but that would be a topic for a follow-on post.  You probably don’t have z/OS 2.2 yet anyway, but it can be fun to know what’s coming.  JOBGROUP is coming.

That’s probably enough for a basic introductory overview.

The idea for this post came from a suggestion by Ian Watson.


References and Further Reading

z/OS concepts: JES2 compared to JES3

z/OS JES2 Initialization and Tuning Guide, SA32-0991-00
How to initialize JES2 in a multi-access SPOOL configuration
z/OS MVS JCL Reference (z/OS 2.2)
JES2 Execution Control Statements (This is where you can see the new JOBGROUP)
z/OS MVS JCL Reference, SA23-1385-00  (z/OS 2.1)
JES2 control statements
OUTPUT JCL statement

z/OS JES2 Initialization and Tuning Reference, SA32-0992-00
Parameter description for JOBCLASS(class…|STC|TSU)

z/OS JES2 Initialization and Tuning Guide, SA32-0991-00
Defining the data set for JES2 initialization parameters


Why You Want DB2 12 Now

Just the highlights – Version 12 of DB2 for z/OS

Okay, would somebody remind you again why you would want to upgrade to DB2 12? you might ask. Probably mainly because:

1) It lets you have bigger DB2 tables and bigger partition sizes,

2) Some tasks are easier; you can use ALTER to increase partition size, and

3) Version 12 has significant performance enhancements that can make some online transactions run a lot faster; for example, as long as the object isn’t too big, you can keep an object in an in-memory buffer pool, which is potentially a major turbo boost for some transactions.  Also, the new fast index traversal feature can speed up random index access when reading data, and the new fast insert algorithm can speed up inserts (but only for unclustered data on universal table spaces).

Yes, those things that look like links are in fact links to IBM doc on the topic, click them if you want to find more information.

So DB2 12 lets you have bigger data tables and partitions, some tasks are easier to accomplish, and some online transactions will probably run faster.
More data, Easier, Faster . . . Sounds great.

But wait, there’s more:

Obfuscated data definition statements: If you’re into code secrecy (for example, you sell a software product that uses DB2 tables and you don’t want competitors disguised as customers to copy your code, or you just don’t want customers to copy the code, modify it, and then open a problem report when their mods don’t work), then you might like this feature: DB2 12 lets you disguise the source for SQL procedures, SQL functions, and triggers, using obfuscated data definition statements.

Triggers can be written in SQL PL (SQL Programming Language).

In Spanish, you can have fun pronouncing the name DB2 12 “Day-bay-dos-Dosay”, which sounds a bit like dosey-doe (a square dance move). 
Okay, skip that as a reason.

The utilities REORG, LOAD, RUNSTATS, Backup and Recovery, and REPAIR CATALOG have enhancements.

Availability: Advisory-REORG pending (AREOR) status during alteration of index compression for universal table space indexes means those indexes can continue to be used during the alteration.  Also you can increase the size of a partition using ALTER to change the DSSIZE rather than doing a REORG.
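As a sketch of that ALTER route (all names and sizes here are invented for illustration), growing one partition's DSSIZE might look something like this; for a PBR RPN table space the change can avoid the REORG that older table space types would require:

```sql
-- Hypothetical database/table space names; size chosen for illustration.
-- Partition-level DSSIZE changes like this apply to universal table
-- spaces with relative page numbering (PBR RPN).
ALTER TABLESPACE MYDB.MYTS
  ALTER PARTITION 3
  DSSIZE 512 G;
```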

Extended support for the array data type and for global variables.  A global variable can now be defined as having the array data type.  As a global variable, an array can be used both within and outside of SQL PL code.

Extended support for Unicode columns defined within ordinary non-Unicode (EBCDIC) DB2 tables.

Performance improvements for XML as well as for index list prefetch-based plans, outer joins, UNION ALL, and some other areas, including a number of performance improvements for online transaction processing – That is, some online transactions ought to run faster.  Click here  to get a list of online transaction performance enhancements.


Okay, how big can a DB2 table be now?  256 trillion rows if you use the new table space structure (PBR RPN: partition-by-range with relative page numbering).

How did they do it?  They use RIDs (Record IDentifiers) 7 bytes long (2-byte partition number plus 5-byte page number) to locate data based on RPNs (Relative Page Numbers).

How big can a partition be?  1024 GB, which is 1 Terabyte (TB), or about a trillion bytes.
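Those headline numbers hang together arithmetically.  Here is a back-of-envelope check; the 4 KB page size, the 4096-partition limit, and 255 rows per page are my assumptions for illustration, not IBM's published derivation:

```python
# Back-of-envelope capacity check for DB2 12 PBR RPN table spaces.
# Assumptions (illustrative, not IBM's exact derivation):
# 4 KB pages, 4096 partitions maximum, at most 255 rows per page.

PAGE_SIZE = 4 * 1024            # bytes per page
PART_SIZE = 1024 * 1024**3      # 1024 GB = 1 TB per partition
MAX_PARTS = 4096
ROWS_PER_PAGE = 255

pages_per_part = PART_SIZE // PAGE_SIZE          # pages in one partition
max_rows = MAX_PARTS * pages_per_part * ROWS_PER_PAGE

print(f"{pages_per_part:,} pages per partition")
print(f"{max_rows:,} rows as a rough upper bound")

# A 5-byte relative page number can address 2**40 pages, far more
# than a 1 TB partition actually holds, so there is headroom.
print(f"{2**40:,} pages addressable by 5 bytes")
```

Under those assumptions the bound works out to roughly 280 trillion rows, the same ballpark as IBM's quoted 256 trillion.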

How fast are those improved inserts?  Potentially over 11 million inserts per second for unclustered data, according to IBM.  Sounds pretty good.  Match that on your cell phone app if you can.  Note that IBM also tells us "The fast insert algorithm is enabled for only universal table spaces using MEMBER CLUSTER. The algorithm can be enabled system-wide or for individual table spaces."

Vocabulary (These vocabulary definitions are straight from IBM references)

LOB : Large Object (Large Object table space)

PBG:  partition-by-growth

PBR: partition-by-range

RPN: Relative Page Number(s)

PBR RPN: Partition-By-Range Relative-Page-Number

UTS: Universal Table Space

RID: Record Identifier(s) (note: Expanded to 7 bytes in DB2 12, to allow bigger sizes)

SQL PL : SQL Programming Language.

OLTP: Online Transaction Processing

AREOR: advisory-REORG pending status

RBDP: Restrictive REBUILD-Pending status

DSSIZE: Maximum size for each partition (or, in a LOB table space, maximum size for each data set).

MEMBER CLUSTER specifies that inserted data is not clustered by either the implicit clustering index (the first index) or the explicit clustering index.

FTB: Fast Traversal Blocks for indexes are in-memory optimized index structures that allow DB2 to traverse the index much faster than the traditional page-oriented traversal for indexes cached in buffer pools.


References, Bibliography, Further Reading

Okay, you don’t need all of these, but it can save time when you want to find something if you have a list for reference.  Look at whichever ones happen to interest you.  You can find a lot more good stuff in Version 12 besides the little bit that was mentioned in this post.  Also you should be able to find more details on exactly how to use the features (syntax, examples, caveats and restrictions).

IBM DB2 12 for z/OS Technical Overview

What's new in the DB2 12 base release?

IBM Systems Magazine – DB2 12 for z/OS Tools: What's Improved

DB2Tutor blog

Administrator function in DB2 12

Altering DB2 tables

Altering partitions

Changing the boundary between partitions

Increasing the partition size of a partitioned table space

Increased partition sizes and simplified partition management
for range-partitioned table spaces with relative page numbering


Improved availability when altering index compression

New and changed catalog tables

DB2 12 for z/OS Product Documentation

Administrator information roadmap

New-user information roadmap

Application Developer information roadmap

DB2 12 – What's new – Improved EDM pool management

DB2 12 – What's new – OLTP performance in DB2 12

DB2 12 – Performance – Designing EDM storage space for performance

DB2 12 – Installation and migration – Calculating EDM pool size

DB2 12 – Performance – Calculating the EDM statement cache hit ratio

Creating and modifying DB2 objects from application programs



Addresses on disk

An address on disk is an odd sort of thing.

Not a number like a memory pointer.

More like a set of co-ordinates in three dimensional space.

Ordinary computer memory is mapped out in the simplest way, starting at zero, with each additional memory location having an address that is the plus-one of the location just before it, until the allowable maximum address is reached.  That maximum is limited by the number of digits available to express the address.

Disks — external storage devices — are addressed differently.

Since a computer system can have multiple disks accessible, each disk unit has its own unit address relative to the system.  Each unit address is required to be unique.  This is sort of like disks attached to a PC being assigned unique letters like C, D, E, F, and so on; except the mainframe can have a lot more disks attached, and it uses multi-character addresses expressed as hex numbers rather than using letters of the alphabet.  That hex number is called the unit address of the disk.

Addresses on the disk volume itself are mapped in three-dimensional space.  The position of each record on any disk is identified by Cylinder, Head, and Record number, similar to X, Y, and Z co-ordinates, except that they're called CC, HH, and R instead of X, Y, and Z. A track on disk is a circle.  A cylinder is a set of 15 tracks that are positioned as if stacked on top of each other.  You can see how 15 circles stacked up would form a cylinder, right?  Hence the name cylinder. 

Head, in this context, equates to Track.  The physical mechanism that reads and writes data is called a read/write head, and there are 15 read/write heads for each disk, one head for each possible track within a cylinder.  All fifteen heads move together, rather like the tines of a 15-pronged fork being moved back and forth. To access tracks in a different cylinder, the heads move in or out to position to that other cylinder.  So just 15 read/write heads can read and write data on all the cylinders just by moving back and forth.  
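In the model, finding which cylinder and head hold a given track is simple integer arithmetic.  A small sketch, assuming the classic 15 tracks per cylinder:

```python
# Convert a volume-relative track number to (cylinder, head) and back,
# using the classic model of 15 tracks per cylinder.

TRACKS_PER_CYL = 15

def track_to_cchh(rel_track: int) -> tuple[int, int]:
    """Relative track 0, 1, 2, ... -> (cylinder, head)."""
    return divmod(rel_track, TRACKS_PER_CYL)

def cchh_to_track(cyl: int, head: int) -> int:
    """(cylinder, head) -> relative track number."""
    return cyl * TRACKS_PER_CYL + head

print(track_to_cchh(0))    # (0, 0)  first track of cylinder 0
print(track_to_cchh(14))   # (0, 14) last track of cylinder 0
print(track_to_cchh(15))   # (1, 0)  the heads seek to cylinder 1
```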

That's the model, anyway.  And that's how the original disks were actually constructed.  Now the hardware implementation varies, and any given disk might not look at all like the model.  A disk today could be a bunch of PC flash drives rigged up to emulate the model of a traditional disk.  But regardless of what any actual disk might look like physically now, the original disk model was the basis of the design for the method of addressing data records on disk.  In the model, a disk is composed of a large number of concentric cylinders, with each cylinder being composed of 15 individual tracks, and each track containing some number of records.

Record here means physical record, what we normally call a block of data (as in block size).  A physical record — a block — is usually composed of multiple logical records (logical records are what we normally think of as records conceptually and in everyday speech).  But a logical record is not a real physical thing, it is just an imaginary construct implemented in software.  If you have a physical record — a block — of 800 bytes of data, your program can treat that as if it consists of ten 80-byte records, but you can just as easily treat it as five 160-byte records if you prefer, or one 800-byte record; the logical record has no real physical existence.  All reading and writing is done with blocks of data, aka physical records.  The position of any given block of data is identified by its CCHHR, that is, its cylinder, head, and record number (where head means track, and record means physical record).  
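Here is a tiny sketch of that deblocking idea (pure illustration, not an actual access-method interface):

```python
# Deblocking: one physical record (a block) read from disk can be
# treated as any convenient number of logical records, because the
# logical record is just a software convention.

def deblock(block: bytes, lrecl: int) -> list[bytes]:
    """Split one block into logical records of length lrecl."""
    assert len(block) % lrecl == 0, "block must hold whole records"
    return [block[i:i + lrecl] for i in range(0, len(block), lrecl)]

block = bytes(800)              # one 800-byte block, as read from disk
print(len(deblock(block, 80)))  # ten 80-byte logical records
print(len(deblock(block, 160))) # or five 160-byte logical records
print(len(deblock(block, 800))) # or one 800-byte logical record
```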

The smallest size a data set can be is one track.  A track is never shared between multiple data sets.

The CCHHR represents 5 bytes, not 5 hex digits.  You have two bytes (a halfword) for the cylinder number, two bytes for the head (track) number, and one byte for the record number.

A "word", on the IBM mainframe, is 4 bytes, or 4 character positions.  Each byte has 8 bits, in terms of zeroes and ones, but it is usually viewed in terms of hexadecimal; in hexadecimal a byte is expressed as two hex digits.  A halfword is obviously 2 bytes, which is the size of a "small integer".  (4 bytes being a long integer, the kind of number most often used; but halfword arithmetic is also very commonly used, and runs a little faster.)  A two-byte small integer can express a number up to 32767 if signed or 65535 if unsigned.  CC and HH are both halfwords.
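Those halfword limits are easy to demonstrate (a quick sketch; the `struct` codes are standard Python, not mainframe-specific):

```python
import struct

# A halfword is 2 bytes: signed maximum 32767, unsigned maximum 65535.
print(2**15 - 1)   # signed halfword maximum
print(2**16 - 1)   # unsigned halfword maximum

# struct 'h' is a signed 2-byte integer, 'H' unsigned; '>' means
# big-endian byte order, the order the mainframe uses.
print(struct.pack(">H", 65535).hex())        # ffff
print(struct.unpack(">H", b"\xff\xff")[0])   # 65535
```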

Interestingly, a halfword is also used for BLKSIZE (this is a digression), but the largest block size for an IBM data set traditionally is 32760, not 32767, simply because the MVS operating system, like MVT before it, was written using 32760 as the maximum BLKSIZE.  Lately there are cases where values up to 65535 are allowed, using LBI (large block interface) and what-not, but mostly the limit is still 32760.  But watch this space; 65535 is on its way in; obviously the number need not allow for negative values, that is, it need not be signed.  End of digression on BLKSIZE.

There can be any number of concentric cylinders, but using the traditional CCHHR method you can only address a number that can be represented in a two-byte hex unsigned integer.  That would be 65,535, but in fact the highest cylinder address (now) on ordinary IBM disks is 65,520.  That is the CC-coordinate, the basis of the CC in CCHHR.

But wait, you say, you've got an entire 2 bytes — a halfword integer — to express the track number within the cylinder, yet there are always 15 tracks in a cylinder; one byte would be enough.  In fact, even  half a byte could be used to count to fifteen, which is hex F.  Right.  You got it.  What do we guess must eventually happen here? 

People want bigger disks so they can have bigger data sets, and more of them.  Big data.  You know how many customers the Bank of China has these days?  No, I don't either, but it's a lot, and that means they need big data sets.  And they aren't the only ones who want that.  I don't even want to guess how much data the FBI must store.  What we do know is that there is a big – and growing – demand for gigantic data sets.

So inevitably the unused extra byte in HH must be poached and turned into an adjunct C.  Thus is born the addressing scheme for the extended area on EAV disks (EAV = Extended Address Volumes).  So, three bytes for C, one byte for H?  Well, no, IBM decided to leave only HALF of a byte — four bits — for H.  (As you noticed earlier, one hex digit — half of a byte — is enough to count to fifteen, which is hex F.)  So IBM took 12 bits away from HH for extending the cylinder number.  Big data.  Big.

And you yourself would not care overly about EAV, frankly, except that (a) you (probably) need to change your JCL to use it, and (b) there are restrictions on it, plus (c) those restrictions keep changing, and besides that (d) people are saying your company intends to convert entirely to EAV disks eventually.

Okay, so what is this EAV thing, and what do you do about it?

EAV means Extended Address Volume, which means bigger disks than were previously possible, with more cylinders.  The first part of an EAV disk is laid out just like any ordinary disk, using the traditional CCHHR addressing.  So that can be used with no change to your programs or JCL.

In the extended area (cylinders 65,520 and above), the CCHH is no longer really CCHH. 

The first two bytes (sixteen bits) contain the lower part of the cylinder number, which can go as high as 65535.  The next twelve bits — one and a half bytes taken from what was previously part of HH — contain the rest of the cylinder number, so to read the whole thing as a number you would have to take those twelve bits and put them to the left of the first two bytes.  The remaining four bits — the remaining half of a byte out of what was once HH — contain the track number within the cylinder, which can go as high as fifteen.

Says IBM (in z/OS DFSMS Using Data Sets):

     A track address is a 32-bit number that identifies each track
     within a volume. The address is in the format hexadecimal CCCCcccH.

        CCCC is the low order 16-bits of the cylinder number.

        ccc is the high order 12-bits of the cylinder number.

        H is the four-bit track number.

End of quote from IBM manual.
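A minimal sketch of that decoding, in Python purely for illustration (the field layout follows the IBM quote above):

```python
def decode_track_address(addr):
    """Split a 32-bit CCCCcccH track address into (cylinder, track)."""
    cccc = (addr >> 16) & 0xFFFF   # low-order 16 bits of the cylinder number
    ccc = (addr >> 4) & 0xFFF      # high-order 12 bits of the cylinder number
    h = addr & 0xF                 # 4-bit track number (0-14)
    return (ccc << 16) | cccc, h

# First cylinder of cylinder-managed space, track 3:
print(decode_track_address(0xFFF00003))   # (65520, 3)
```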

The portion of the disk that requires the new format of CCHH is called extended addressing space (EAS), and also called cylinder-managed space.  Cylinder-managed space starts at cylinder 65520.

Of course, for any cylinder number small enough to fit in the original two bytes, those extra 12 bits are always zero, so you can view the layout of the CCHH the old way or the new way there; it makes no difference.

Within the extended addressing area, the EAS, the cylinder-managed space, you cannot allocate individual tracks.  Space in that area is always assigned in cylinders — or rather, in chunks of 21 cylinders at a time.  The smallest data set in that area is 21 cylinders.  The 21-cylinder chunk is called the "multicylinder unit".

If you code a SPACE request that is not a multiple of 21 cylinders (for a data set that is to reside in the extended area), the system will automatically round the number up to the next multiple of 21 cylinders.
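The rounding is ordinary ceiling arithmetic; a quick sketch (the function name is invented for illustration):

```python
MCU = 21   # multicylinder unit, in cylinders

def eas_cylinders(requested):
    # Round a cylinder request up to the next multiple of 21
    return -(-requested // MCU) * MCU

print(eas_cylinders(100))   # 105 : 100 cylinders rounds up to 5 multicylinder units
print(eas_cylinders(1))     # 21  : the smallest data set in the EAS
```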

As of this writing, most types of data sets are allowed within cylinder-managed space, including PDS and PDSE libraries, most VSAM, sequential data sets including large format data sets (DSNTYPE=LARGE), BDAM, and zFS.  This also depends on the level of your z/OS system, with more data set types being supported in newer releases.

However the VTOC cannot be in the extended area, and neither can system page data sets, HFS files, or VSAM files that have IMBED or KEYRANGE specified.  Also, VSAM files must have a Control Area (CA) size, or Minimum Allocation Unit (MAU), compatible with the restriction that space is allocated in chunks of 21 cylinders at a time.  Minor limitations.

Specify EATTR=OPT in your JCL when creating a new data set that can reside in the extended area.   EATTR stands for Extended ATTRibutes.  OPT means optional.  The only other valid value for EATTR is NO, and NO is the default if you don't specify EATTR at all.

The other EAV-related JCL you can specify on a DD statement is either EXTPREF or EXTREQ as part of the DSNTYPE.  When you specify EXTPREF it means you prefer that the data set go into the extended area; EXTREQ means you require it to go there.


Allocate a new data set in the extended addressing area (the data set name here is just a placeholder; your site may also require UNIT or a storage class):

//DD1      DD DSN=YOUR.NEW.DATASET,DISP=(,CATLG),
//            SPACE=(CYL,(2100,2100)),EATTR=OPT,
//            DSNTYPE=EXTREQ


Addendum 1 Feb 2017: BLKSIZE in Cylinder-managed Space

This was mentioned in a previous post on BLKSIZE, but it is relevant to EAV and bears repeating here.  If you are going to take advantage of the extended address area, the EAS, on an EAV disk, you should use system-determined BLKSIZE, that is, either specify no BLKSIZE at all for the data set or specify BLKSIZE=0, signifying that you want the system to figure out the best value of BLKSIZE for the data set.

Why?  Because in the cylinder-managed area of the disk the system needs an extra 32 bytes for each block, which it uses for control information.  Hence the optimal BLKSIZE for your data set will be slightly smaller when the data set resides in the extended area.  The 32-byte chunk of control information does not appear within your data.  You do not see it.  But it takes up space on disk, as a 32-byte suffix after each block.

You could end up using twice as much disk space if you choose a poor BLKSIZE, with about half the disk space being wasted.  That is true because a track must contain an integral number of blocks, for example one or two blocks.  If you think you can fit exactly two blocks on each track, but the system grabs 32 bytes for control information for each block, then there will be not quite enough room on the track for a second block.  Hence the rest of the track will be wasted, and this will be repeated for every track, approximately doubling the size of your data set.  

On the other hand, if you just let the system decide what BLKSIZE to use, it generally calculates a number that allows two blocks per track. 
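The arithmetic is easy to sketch (Python, purely illustrative; 56,664 bytes is the nominal 3390 track capacity, and this toy model ignores the real device capacity formulas, which are more involved):

```python
TRACK_CAPACITY = 56664   # nominal usable bytes per 3390 track (simplified assumption)
EAS_SUFFIX = 32          # control-information suffix added per block in cylinder-managed space

def blocks_per_track(blksize, suffix=0):
    # Toy model: how many whole blocks fit on one track
    return TRACK_CAPACITY // (blksize + suffix)

blk = TRACK_CAPACITY // 2                    # a naive half-track BLKSIZE: 28332
print(blocks_per_track(blk))                 # 2 blocks per track in track-managed space
print(blocks_per_track(blk, EAS_SUFFIX))     # 1 block per track in the EAS: half the track wasted
```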

And when you use system-determined BLKSIZE — when you just leave it to the system to decide the BLKSIZE — you get a bonus: if the system migrates your data set, and the data set happens to land on the lower part of a disk, outside the extended area, the system will automatically recalculate the best BLKSIZE when the data set is moved.  If the data set is later moved back into the cylinder-managed EAS area, the BLKSIZE will again be automatically recalculated and the data reblocked.

If in the future IBM releases some new sort of disk with a different track length, and your company acquires a lot of the new disks and adds them to the same disk storage pool you're using now, the same consideration applies: If system-determined BLKSIZE is in effect, the best BLKSIZE will be calculated automatically and the data will be reblocked automatically when the system moves the data set to the different device type.

Yes, it is possible for a data set to reside partly in track-managed space (the lower part of the disk) and partly in cylinder-managed space (the EAS, extended address, high part of the disk), per the IBM document.  

You should generally use system-determined BLKSIZE anyway.  But if you’re using EAV disks, it becomes more important to do so because of the invisible 32-byte suffix the system adds when your data set resides in the extended area.

[End of Addendum on BLKSIZE]

References, further reading


z/OS DFSMS Using Data Sets


Disk types and sizes

a SHARE presentation on EAV

EAV reference – IBM manual
z/OS 2.1.0 =>
z/OS DFSMS Using Data Sets =>
All Data Sets => Allocating Space on Direct Access Volumes => Extended Address Volumes

IBM manual on storage administration (for systems programmers)
z/OS DFSMSdfp Storage Administration

The 32-byte per block overhead in the extended area of EAV disk (IBM manual):

z/OS DFSMS Using Data Sets ==>
Non-VSAM Access to Data Sets and UNIX Files ==>
Specifying and Initializing Data Control Blocks ==>
Selecting Data Set Options ==>
Block Size (BLKSIZE)
“Extended-format data sets: In an extended-format data set, the system adds a 32-byte suffix to each block, which your program does not see. This suffix does not appear in your buffers. Do not include the length of this suffix in the BLKSIZE or BUFL values.”

Packed, Zoned, Binary Math

Mainframe Math: Packed, Zoned, Binary Numbers —

Why are there different kinds of numbers?  And how are they different, exactly? Numeric formats on the mainframe (simplified) . . .

The mainframe can do two basic kinds of math: Decimal and Binary.  Hence the machine recognizes two basic numeric formats: Decimal and Binary.  It has separate machine instructions for each.  Adding two binary integers is a different machine operation from adding two decimal integers.  If you tell your computer program to add a binary number to a packed number, the compiler generates machine code that first converts at least one of the numbers, and then when it has two numbers of the same type it adds them for you.

There is also a displayable, printable type, called Zoned Decimal, which in its unsigned integer form is identical to character format.  You cannot do math with zoned decimal numbers.  If you code a statement in a high level language telling the computer to add two zoned decimal numbers, either it generates an error, or else it works only because the compiler generates machine instructions that first convert the two numbers into another type and then do the math with the converted copies. 

Within Decimal and Binary there are sub-types, such as double-precision and floating point.  These exist mainly to enable representation of, and do math with, very large numbers.  Within displayable numbers there are many possible variations of formatting.  Of course.  

For this article, we are going to skip all the variations except the three most common and most basic: Binary (also called Hexadecimal, or Hex), Decimal (called Packed Decimal, or just Packed), and Zoned Decimal (the displayable, printable representation, sometimes also called “Picture”).  To start with we’ll focus on integers. 

Generally, packed decimal integers used for mathematical operations can have up to 31 significant digits, but there are limitations: a multiplier or divisor is limited to 15 digits, and, when doing division, the sum of the lengths of the quotient and remainder cannot exceed 31 digits. For practical business purposes, these limits are generally adequate in most countries.

Binary (hex) numbers come in two basic sizes, 4 bytes (a full word, sometimes called a long integer), and 2 bytes (a half word, sometimes called a short integer). 

A signed binary integer in a 4 byte field can hold a value up to 2,147,483,647.  

The leftmost bit of the leftmost byte is the sign bit.  If the sign bit is zero that means the number is positive.  If the sign bit is one the number is negative.  This is why a full word signed binary integer is sometimes called a 31-bit integer.  Four bytes with 8 bits each should be 32 bits, right?  But no, the number part is only 31 bits, because one of the bits is used for the sign.

A 2-byte (half word) integer can hold a value up to 32,767 if it is defined as a signed integer, or 65,535 if unsigned.  

The sign bit in a 2-byte binary integer is still the leftmost bit of the field, leaving 15 bits for the number itself.  

Consider the case where you are not using integers, but rather you have an implied decimal point; For example, you’re dealing with dollars and cents. Now you have two positions to the right of an implied (imaginary) decimal point. With two digits used for cents, the maximum number of dollars you can represent is divided by a hundred: $32,767 is no longer possible in a half word; the new limit becomes $327.67, so half word binary math won’t be much use if you’re doing accounting.  $21,474,836.47, that is, twenty-one million and some, would be the limit for full word binary.  Such a choice might be considered to demonstrate pessimism or lack of foresight, or both.  You probably want to choose decimal representations for accounting programs, because decimal representation lets you use larger fields, and hence bigger numbers.
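Those dollars-and-cents limits are just the integer limits divided by a hundred; a quick Python check (the helper name is invented):

```python
def max_dollars(max_int):
    # Implied decimal: the integer holds cents; shift two places to display dollars
    return f"${max_int // 100:,}.{max_int % 100:02d}"

print(max_dollars(32767))        # $327.67 : signed halfword limit
print(max_dollars(2147483647))   # $21,474,836.47 : signed fullword limit
```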

Half word integers are often used for things like loop counters and other smallish arithmetic, because the machine instructions that do half word arithmetic run pretty fast compared to other math.  For the most part binary arithmetic is easier and runs faster than decimal arithmetic.  Also, machine addresses (pointers) are full word binary (hex) integers, so any operation that calculates an offset from an address is quicker if the offset is also a binary integer.  Plus you can fit a bigger number into a smaller field using binary.  However, if you need to do calculations that use reasonably large numbers, for example hundreds of billions in accounting calculations, then you want to use decimal variables and do decimal math.

How are these different types of numbers represented internally – what do they look like?

An unsigned packed decimal number is composed entirely of the hex digits zero through nine.  Two such digits fit in one byte.  So a byte containing hex’12’ would represent unsigned packed decimal twelve.  Two bytes containing hex’9999’ would represent unsigned packed decimal nine thousand nine hundred and ninety-nine. 

How is binary different?  You don’t stop counting at nine.  You get to use A for ten, B for eleven, C for twelve, D for thirteen, E for fourteen, and F for fifteen.  It’s base sixteen math (rather than the base ten math that we grew up with).  So in base ten math, when you run out of digits at 9, and you expand leftward into a two-digit number, you write 10 and call it ten.  With base sixteen math, you don’t run out of available digits until you hit F for fifteen; so then when you expand leftward into a two-digit number, you write 10 and call it sixteen.  By the time you get to x’FF’ you’ve got the equivalent of 255 in your two-digit byte, rather than 99. 
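Python accepts base-16 directly, so the counting above is easy to check:

```python
print(int('9', 16))    # 9   : same as decimal so far
print(int('F', 16))    # 15  : the highest single hex digit
print(int('10', 16))   # 16  : "10" means sixteen in base sixteen
print(int('FF', 16))   # 255 : two hex digits count to 255, not 99
```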

Why, you may ask, did they do this?  In fact, why, having done it, did they stop at F?  Actually it’s pretty simple.  Remember that word binary – it really means you’re dealing in bits, and bits can only be zero (off) or one (on).  On IBM mainframe type machines, there happen to be 8 bits in a byte.  Each half byte, then – each of our digits – has 4 bits in it.  That’s just how the hardware is made. 

Yes, a half byte is also named a nibble, but I've never heard even one person actually call it that.  People I've known in reality either say "half byte", or they say "one hex digit".  

So we can all see that 0000 should represent zero, and 0001 should be one.  Then what?  Well, 0010 means two; the highest digit you have is 1, and then you have to expand toward the left.  You have to "carry", just like in regular math, except you hit the maximum at 1 rather than 9.  This is base 2 math. 

To get three you add one+two, giving you 0011 for three.  Hey, all we have at this point is bit switches, kind of like doing math by turning light switches off and on. (Base 2 math.)  10 isn't ten here, and 10 isn't sixteen; 10 here is two.  So, if 0011 is three, and you have to move leftward to count higher, that means 0100 is obviously four.  Eventually you add one (0001) and two (0010) to four (0100) and you get up to the giddy height of seven (0111).  Lesser machines might stop there, call the bit string 111 seven, and be satisfied with having achieved base 8 math.  Base 8 is called Octal, lesser machines did in fact use it, and personally I found it to be no fun at all.  The word excruciating comes to mind.  Anyway, with the IBM machine people were blessed with a fourth bit to expand into.  1000 became eight, 1001 was nine, 1010 (eight plus two) became A, and so on until all the bits were used up, culminating in 1111 being called F and meaning fifteen.  Base sixteen math, and we call it hexadecimal, affectionately known as hex.  It was pretty easy to see that hex was better than octal, and it was also pretty easy to see that we didn’t need to go any higher — base 16 is quite adequate for ordinary human minds.  So there it is.  It also explains why the word hex is so often used almost interchangeably with the word binary.

And Zoned Decimal?  A byte containing the digit 1 in zoned decimal is represented by hex ‘F1’, which is exactly the same as what it would be as part of a text string (“Our number 1 choice.”)  Think of a printable byte as one character position.  The digit 9, when represented as a printable (zoned) decimal number, is hex’F9’.  A number like 123456, if it is unsigned, is hex’F1F2F3F4F5F6’.  (If it has a sign, the sign might be separate, but if the sign is part of the string then the F in the last byte might be some other code to represent the sign.  Conveniently, F is one of the signs for plus, and it also means unsigned.)  And as noted earlier, you cannot do math directly with zoned decimal numbers; the compiler has to convert them to another type first. 

You may be thinking that the left half of each byte is wasted in zoned decimal format.  Well, not wasted exactly: Any printable character will use up one byte; a half byte containing F is no bigger than the same half byte containing zero.  Still, if you are not actually printing the digit at the moment, could you save half the memory by eliminating the F on the left?  Pretty much. 

You scrunch the number together, squeezing out the F half-bytes, and you have unsigned packed decimal.  You just need to add a representation of the plus or minus sign to get standard (signed) packed decimal format. The standard is to use the last half byte at the end for the sign, the farthest right position.  This is why decimal numbers are usually set up to allow for an odd number of digits – because memory is allocated in units of bytes, there are two digits in a byte, and the last byte has to contain the sign as the last digit position.

How is the packed decimal sign represented in hex?  The last position is usually a C for plus or a D for minus.  F also means plus, but usually carries the nuance of meaning that the number is defined as unsigned.  Naturally there are some offbeat representations where plus can be F, A, C, or E, like the open spaces on a treble clef in music, and minus can be either B or D (the two remaining hex letters after F,A,C,E are taken) – hence giving meaning to all the non-decimal digits.  Mostly, when produced by ordinary processes, it’s C for plus, F for unsigned and hence also plus, or D for minus. 

So if you have the digit 1 in zoned decimal, as hex’F1’, then after it is fully converted to signed packed decimal the packed decimal byte will be hex’1C’.  Zoned decimal Nine (hex ‘F9’) would convert to packed decimal hex ‘9C’, and zero (hex ‘F0’) becomes hex’0C’.  Minus nine becomes hex ‘9D’, and yes, you can have minus zero as hex ‘0D’. 

The mathematical meaning of minus zero is arguable, but some compilers allow it, and in fact the IEEE standard for floating point requires it.  Some machine instructions can also produce it as a result of math involving a negative number and/or overflow.  You care about negative zero mainly because in some operations (which you might easily never encounter), hex’0D’, the minus zero, might give different results from ordinary zero.  A minus zero normally compares equal to ordinary plus zero when doing ordinary decimal comparisons.  Okay, moving on … Zoned decimal 123, hex ‘F1F2F3’, when converted to signed packed decimal will become hex’123C’, and Zoned decimal 4567, hex ‘F4F5F6F7’, when converted to signed packed decimal will become hex’04567C’, with a leading zero added because you need an even number of digits; half bytes have to be paired up so they fill out entire bytes.
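The scrunching and sign handling just described can be sketched in a few lines of Python, purely as an illustration (the real machine's PACK instruction keeps the last byte's zone as the sign, so an F zone stays F; this sketch normalizes the sign to C or D to match the "fully converted" examples above):

```python
def zoned_to_packed(zoned):
    """Convert EBCDIC zoned decimal bytes (x'F0'-x'F9' digits) to signed packed decimal."""
    digits = [b & 0x0F for b in zoned]             # keep the right nibble (the digit) of each byte
    zone = zoned[-1] >> 4                          # the last byte's zone may carry the sign
    sign = 0xD if zone in (0xB, 0xD) else 0xC      # B or D mean minus; anything else treated as plus
    nibbles = digits + [sign]                      # the sign goes in the rightmost half byte
    if len(nibbles) % 2:
        nibbles.insert(0, 0)                       # pad a leading zero to fill out whole bytes
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

print(zoned_to_packed(b'\xF1\xF2\xF3').hex().upper())       # 123C
print(zoned_to_packed(b'\xF4\xF5\xF6\xF7').hex().upper())   # 04567C
```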

Wait, you say, how did the plus or minus look in zoned decimal? 

The answer is that there are various formats.

It is possible for the rightmost Zoned Decimal digit to contain the sign in place of that digit’s lefthand “F” (its “zone”), and that is the format generated when a Zoned Decimal number is produced by the UNPACK machine instruction.  

The most popular format, for users of high level languages, seems to be when the sign is kept separate and placed at the beginning of the number (the farthest left position). COBOL calls this “SIGN IS LEADING SEPARATE”.  

However, many print formats are possible, and you can delve into this topic further by looking at IBM’s Language Reference Manual for whatever language you’re using.  Zoned decimal is essentially a print (or display) format.  High Level Computer Languages facilitate many elaborate editing niceties such as leading blanks or zeroes, insertion of commas and decimal points, currency symbols, and stuff that may never even have occurred to you (or me).

In COBOL, a packed decimal variable is traditionally defined with USAGE IS COMPUTATIONAL-3, or COMP-3, but it can also be called PACKED-DECIMAL.  A zoned decimal variable is defined with a PICTURE format having USAGE IS DISPLAY.  A binary variable is just called COMP, but it can also be called COMP-4 or BINARY.

In C, a variable that will contain a packed decimal number is just called decimal.  If you are going to use decimal numbers in your C program, the header <decimal.h> should be #included.  A variable that will hold a four byte binary number is called int, or long.  A two byte binary integer is called short.  Typically a number is converted into printable zoned decimal by using some function like sprintf with the %d formatting code.  Input zoned decimal can be treated as character.

PL/I refers to packed decimal numbers as FIXED DECIMAL.  Zoned decimal numbers are defined as having PICTURE values containing nines, e.g. PIC’99999’ in a simple case.  Binary numbers are called FIXED BINARY, or FIXED BIN, with a four byte binary number being FIXED BINARY(31) and a two byte binary number being called FIXED BINARY(15).

What if you want to use non-integers, that is, you want decimal positions to the right of the decimal?  Dollars and cents, for example?

In most high level languages, you define the number of decimal positions you want when you declare the variable, and for binary numbers and packed decimal numbers, that number of decimal positions is considered to be implied; it just remembers where to put the decimal for you, but the decimal position is not visible when you look at the memory location in a dump or similar display.  For zoned decimal numbers, you can declare the variable (or the print format) in a way that both the implied decimal and a visible decimal occur in the same position.  For example, if (in your chosen language) 999V99 creates an implied decimal position wherever the V is, then you would define an equivalent displayable decimal point as 999V.99, in effect telling the compiler that you want a decimal point to be printed at the same location as the implied decimal.  As previously noted, the limits on the numbers of digits that can be represented or manipulated apply to all the digits in use on both sides of the implied decimal point.

You may have noticed that abends are a bit more common when using packed decimal arithmetic, as compared with binary math.  There are two common ways that decimal arithmetic abends where binary would not.  One occurs when fields are not initialized.  If an uninitialized field contains hex zeroes, and it is defined as binary, that’s valid and some might say lucky.  If the same field of hex zeroes is defined as signed packed decimal, mathematical operations will fail because of the missing sign in the last half byte.  This is a common cause of an 0C7 abend failure in formatted dumps (such as a PL/I program containing an “ON ERROR” unit with a “PUT DATA;” statement).  When the uninitialized fields contain hex zeroes, it might seem that the person using binary variables is lucky, but sometimes uninitialized fields contain leftover data from something else, essentially random trash that happens to be in memory.  In that case decimal instructions usually still abend, and binary mathematical operations do not – they just come up with wrong results, because absolutely any hex value is a valid binary number.  The abend doesn’t look like such bad luck in that situation.

The other common cause for the same problem, besides uninitialized fields, is similar insofar as it means picking up unintended data.  When something goes wrong in a program – maybe a memory overlay, maybe a bad address pointer – an instruction may try to execute using the wrong data.  Again, there is a good chance that decimal arithmetic will fail in such a situation, because of the absence of the sign perhaps, or perhaps because the data contains values other than the digits zero through nine plus the sign.  Binary arithmetic may carry on happily producing wrong answers based on the bogus data values.  Even if you recognize that the output is wrong, it can be difficult to track back to the cause of the problem.  With an immediate 0C7 or other decimal arithmetic abend, you have a better chance of finding the underlying problem with less difficulty. 
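The rule the decimal hardware enforces (and binary arithmetic does not) can be expressed as a validity check; a Python sketch, purely illustrative:

```python
def is_valid_packed(data):
    """True if the bytes form valid signed packed decimal.

    Invalid data here is what triggers an 0C7 (data exception) on the real machine.
    """
    if not data:
        return False
    for i, b in enumerate(data):
        hi, lo = b >> 4, b & 0x0F
        if hi > 9:
            return False                  # every digit nibble must be 0-9
        if lo > 9 and i < len(data) - 1:
            return False
    return (data[-1] & 0x0F) > 9          # last nibble must be a sign code (A-F)

print(is_valid_packed(b'\x12\x3C'))   # True  : packed +123
print(is_valid_packed(b'\x00\x00'))   # False : uninitialized hex zeros, no sign nibble
```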

So there you have it.  Basic mainframe computer math, simplified.  Sort of.



z/Architecture Principles of Operation (PDF) SA22-7832

at this url:

In the SA22-7832-10 version, on the first page of Chapter 8. Decimal Instructions, there is a section called “Decimal-Number Formats”, containing subsections for zoned and packed-decimal. 

On the fourth page of Chapter 7. General Instructions there is a section called “Binary-Integer Representation”, followed by sections about binary arithmetic.

Principles of Operation is the definitive source material, the final authority.

Further reading

F1 for Mainframe has a good very short article called “SORT – CONVERT PD to ZD and BI to ZD”, in which the author shows you SORT control cards you can use to convert data from one numeric format to another (without writing a program to do it).   At this url:   https://mainframesf1.com/2012/03/27/sort-convert-pd-to-zd-and-bi-to-zd/




Program Search Order

z/OS MVS Program Search Order

When you want to run a program or a TSO command, where does the system find it?  If there are multiple copies, how does it decide which one to use?  That is today’s topic.

The system searches for the item you require; it has a list of places to search; it gives you the first copy it finds.  The order in which it searches the list is called, unsurprisingly, the search order.

The search order is different depending on where you are when the request is made (where your task is running within the computer system, not where you're sitting geographically).   It also depends on what you request (program, JCL PROC, CLIST, REXX, etc). 

Types of searches: Normally we think of the basic search for an executable program. The search for TSO commands is almost identical to the search for programs executed in batch, with some extra bells and whistles.  There is different handling for a PROC request in JCL (since the search needs to cover JCL libraries, not program libraries).  The search is different when you request an exec in TSO by prefixing the name of the exec with a percent sign (%) — the percent sign signals the system to bypass searching for programs and go directly to searching for CLIST or REXX execs. Transaction systems such as CICS and IMS are again different: They use look-up tables set up for your CICS or IMS configuration.

Right now we’re only going to cover batch programs and TSO commands.

Overview of basic search order for executable programs:

  1. Command tables are consulted for special directions
  2. Programs already in storage within your area
  3. STEPLIB if it exists
  4. JOBLIB only if there is no STEPLIB
  5. LPA (Link Pack Area)
  6. LINKLIST (System Libraries)
  7. CLIST and REXX only if running TSO

Let’s pretend you don’t believe that bit about STEPLIB and JOBLIB, that the system searches one or the other but not both.  Look here for verification:  Numbered item 3 in Section “Search order the system uses for programs” in the IBM publication “z/OS MVS Initialization and Tuning Guide” at this link: https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.ieae100/ieae10018.htm
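The order above, including that STEPLIB-or-JOBLIB exclusivity, can be modeled in a few lines (Python sketch; the "libraries" here are just dicts mapping program names to modules, and the function name is invented):

```python
def find_program(name, in_storage, steplib, joblib, lpa, linklist):
    """Return the first copy of `name` found, per the simplified search order above."""
    private = steplib if steplib is not None else joblib   # STEPLIB or JOBLIB, never both
    order = [in_storage]
    if private is not None:
        order.append(private)
    order += [lpa, linklist]
    for library in order:
        if name in library:
            return library[name]
    raise LookupError(name)

# With a STEPLIB present, the JOBLIB copy is never seen:
print(find_program('PGM1', {}, {'PGM1': 'steplib copy'}, {'PGM1': 'joblib copy'}, {}, {}))
```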

Batch Programs vs TSO commands

As noted, these two things are almost exactly the same.

You can type the name of a program when in READY mode in TSO, or in ISPF option 6, which mimics READY mode.  (Using ISPF 6 is similar to opening a DOS window on the PC.)  Under ISPF you can also enter the name of a program directly on any ISPF command line if you preface the program name with the word TSO followed by a space. 

So you can type the word IEBGENER when in READY mode, or you can put in “TSO IEBGENER” on the ISPF command line, and the system will fetch the same IEBGENER utility program just the same as it does when you say
“//  EXEC  PGM=IEBGENER” in batch JCL. 

There is a catch to this: the PARM you use in JCL is not mirrored when you invoke a program under TSO this way.  If you enter any parameters on the command line in TSO, they are passed to the program in a different format than a PARM.   When you type the name of a program to invoke it under TSO, what usually happens is that the program is started, but instead of getting a pointer it can use to find a PARM string, the program receives a pointer that can be used to find the TSO line command that was entered.  The two formats are similar enough so that a program expecting to get a PARM will be confused by the CPPL pointer (CPPL = Command Processor Parameter List).  Typically such a program will issue a message saying that the PARM parameters are invalid.

So, Let's look at how the system searches for the program.

Optional ISPF command tables (TSO/ISPF only) 

We mention command tables first because the command tables are the first thing the system checks.  Fortunately the tables are optional.  Unfortunately they can cause problems.  Fortunately that is rare.  So it’s fine for you to skip ahead to the next section, you won’t miss much; just be aware that the command tables exist, so you can remember it if you ever need to diagnose a quirky problem in this area.

Under TSO/ISPF, ISPF-specific command tables can be used.  These can alter where the system searches for commands named in such a table.  This is an area where IBM makes changes from time to time, so if you need to set up these tables, consult the IBM documentation for your exact release level. 

There is a basic general ISPF TSO command table called ISPTCM that does not vary depending on the ISPF application.

Also, search order can vary within ISPF depending on the ISPF application. 

For example, when you go into a product like File-AID, the search order might be changed to use the command tables specific to that application. 

So there are also command tables specific to the ISPF application that is running.  In addition to a general Application Command Table, there can be up to 3 site command tables and up to 3 user command tables, plus a system command table.  Within these, the search order can vary depending on what is specified at your site during setup. 

If you are having a problem related to command tables, or if you need to set one up, consult the IBM references.  For z/OS V2R2, see "Customizing the ISPF TSO command table (ISPTCM)" and "Customizing command tables" in the IBM manual "z/OS ISPF Planning and Customizing".

ISPF command table(s) can influence other things besides search order, including:

(a) A table can specify that a particular program is to run in APF-authorized mode, allowing the program to do privileged processes (APF is an entirely separate topic and not covered herein).

(b) A table can specify that a particular program must be obtained from LPA only (bypassing the normal search order).  (We’ll introduce LPA further down.)

(c) A table can change characteristics of how a command is executed.  For example the table can specify that any given program must always be invoked as a PARM-processing ordinary program, or the table can specify instead that the program must always be invoked as a TSO command that accepts parameters entered on the command line along with the command name, e.g. LISTDS ‘some.dataset.name’

Programs are not required to be listed in any of these command tables.  If a program is not listed in any special table, then default characteristics are applied to the program, and the module itself is located via the normal search order, which is basically the same as for executable programs in batch. 

Most notably, a command table can cause problems if an old version exists but the current local Systems people don't know about it (z/OS System Admins are not called Admins as they might be on lesser computer systems; on the mainframe they're generally called "System Programmers", or some more specialized title).  Such a forgotten table might prevent or warp the execution of a program because the characteristics of a named program have changed from one release to another.

If a command is defined in a command table, but the program that the system finds does not conform to the demands imposed by the settings in the table, that situation can be the cause of quirky problems, including the “COMMAND NOT FOUND” message.  Yes, that might perhaps be IBM’s idea of a practical joke, but it happens like this: If the table indicates that a program has to be found in some special place, but the system finds the program in the wrong place, then in some cases you get the COMMAND NOT FOUND message.  I know, you’re cracking up laughing.  That’s the effect it seems to have on most people when they run into it.  Not everyone, of course.

Other subsystems such as IMS and CICS also use command tables, and these are not covered by the current discussion.  Consult IMS and CICS documentation for specifics of those subsystems. 

Programs already loaded into your memory

Before STEPLIB or anything like that, the system searches amongst the programs that have already been loaded into your job’s memory.  Your TSO session counts as a job.  

There can be multiple tasks running within your job – think split screens under TSO/ISPF.  In that case the system first searches the program modules that belong to the task where the request was issued, and after that it searches those that belong to the job itself.  Relevant jargon: a task has a Load List, made up of Load List Elements (LLE); the job itself has a Job Pack Area (JPA).

If a copy of the module is found in your job’s memory, in either the task's LLE or the job's JPA, that copy of the module will be used if it is available.  

If the module is marked as reentrant – if the program has a flag set to promise that it does not modify itself at all – then any number of tasks can use the same copy simultaneously, so the copy already in memory is always usable.  If the module is not marked as reentrant, but it is marked as serially reusable, that means the program can modify itself while it's running as long as, at the end, it puts everything back the way it was originally; in that case a second task can use an existing copy of the module after the previous task finishes.  If neither of those flags is set, then the system has to load a fresh copy of the load module every time it is to be run.

If the module is not reentrant and it is already in use, as often happens in multi-user transaction processing subsystems like IMS and CICS, the system might make you wait until the previous caller finishes, or else it might load an additional copy of the module.  In CICS and IMS this depends on CICS/IMS configuration parameters.  In ordinary jobs and TSO sessions, the system normally just loads a new copy.

Note that if a usable copy of a module is found already loaded in memory, either in the task's LLE or in the job's JPA, that copy will be used EVEN if your program has specified a DCB on the LOAD macro to tell the system to load the module from some other library (unless the copy in memory is "not reusable", and then a new copy will be obtained).  Hunh?  Yeah, suppose your program, while it is running, asks to load another program, specifying that the other program is to be loaded from some specific Library.  Quelle surprise, if there is already a copy of the module sitting in memory in your area, then the LOAD does not go look in that other library.

For further nuances on this, if it is a topic of concern for you, see the IBM write-up under the topic "The Search for the Load Module" in the z/OS MVS Assembler Services Guide.


TASKLIB

Most jobs do not use TASKLIB.  TSO does.  

Essentially TASKLIB works like a stand-in for STEPLIB.  A job can’t just reallocate STEPLIB once the job is already running.  For batch jobs the situation doesn’t come up much.  Under TSO, people often find cases where they’d like to be able to reallocate STEPLIB.  Enter TASKLIB.  STEPLIB stays where it is, but TASKLIB can sneak in ahead of it.

Under TSO, ISPF uses the ddname ISPLLIB as its main TASKLIB.

The ddname ISPLUSR exists so that individual users – you yourself for example – can use their own private load libraries, and tell ISPF to adopt those libraries as part of the TASKLIB, in fact at the top of the list.  When ISPF starts, it checks to see if the ddname ISPLUSR is allocated.  If it is, then ISPF assigns TASKLIB with ISPLUSR first, followed by ISPLLIB.  As long as you allocate ISPLUSR before you start ISPF, then ISPLUSR will be searched before ISPLLIB.  
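For example, you could allocate a private load library to ISPLUSR from READY mode (or from a logon CLIST) and then start ISPF.  A sketch; the dataset name is a placeholder, not anything from this article:

```
/* Allocate a private load library to ISPLUSR, then start ISPF.  */
/* ISPF will put this library at the top of its TASKLIB.         */
ALLOCATE FILE(ISPLUSR) DATASET('YOUR.PRIVATE.LOADLIB') SHR REUSE
ISPF
```

The SHR keyword allocates the dataset for shared read, and REUSE frees any prior allocation of the ddname.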

In fact that USR suffix was invented just for such situations.  It’s a tiny digression, but ISPF allows you to assign ISPPUSR to pre-empt ISPPLIB for ISPF screens (Panels), and so on for other ISP*LIB ddnames; they can be pre-empted by assigning your own libraries to their ISP*USR counterparts. 

Up to 15 datasets can be concatenated on ISPLUSR.  If you allocate more, only the first 15 will be searched.

If you allocate ISPLUSR, you have to do it before you start ISPF, and it applies for the duration of the ISPF session.

Not so with LIBDEF.  The LIBDEF command can be used while ISPF is active.  Datasets you LIBDEF onto ISPLLIB are searched before other ISPLLIB datasets, but after ISPLUSR datasets.
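LIBDEF is an ISPF service, so you would normally issue it from a dialog, for instance a REXX exec running under ISPF.  A hedged sketch (the dataset name is invented):

```
/* REXX - put a private load library ahead of ISPLLIB via LIBDEF */
ADDRESS ISPEXEC
"LIBDEF ISPLLIB DATASET ID('YOUR.PRIVATE.LOADLIB') STACK"
/* ... run whatever needs modules from that library ... */
"LIBDEF ISPLLIB"   /* LIBDEF with no operands removes the definition */
```

The STACK option preserves any existing LIBDEF for ISPLLIB so it can be restored when this one is removed.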

The TSOLIB command can also be used to add datasets onto TASKLIB.

The TSOLIB command, if used, must be invoked from READY mode prior to going into ISPF.   

Why yet another way of putting datasets onto TASKLIB?  The other three methods just discussed are specific to ISPF. The TSOLIB command is applicable to READY mode also. It can be used even when ISPF is not active.  For programs that run under TSO but not under ISPF, the only TASKLIB datasets used are those activated by TSOLIB.  Within ISPF, any dataset you add to TASKLIB by using "TSOLIB ACTIVATE" will come last within TASKLIB, after ISPLLIB.  
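A sketch of a READY-mode sequence (the dataset name is invented; the exact operands can vary by TSO/E level, so check HELP TSOLIB on your own system):

```
/* From READY mode, before starting ISPF:                     */
TSOLIB ACTIVATE DATASET('YOUR.PRIVATE.LOADLIB')
TSOLIB DISPLAY          /* shows what is currently activated  */
ISPF
```

TSOLIB DEACTIVATE removes the activation; like the activation itself, it has to be done from READY mode, not from inside ISPF.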

Also, TASKLIB load libraries activated by the TSOLIB command are available to non-ISPF modules called by ISPF-enabled programs.  For example, if a program running under ISPF calls something like IEBCOPY, and then IEBCOPY itself issues a LOAD to get some other module it wants to call, do not expect the system to look in ISPLLIB for the module that IEBCOPY is trying to load.  It should check TSOLIB, though.  However, some programs bypass TASKLIB search altogether.

Under ISPF, this is the search order within TASKLIB:

1. ISPLUSR datasets (if allocated before ISPF started)
2. Datasets added onto ISPLLIB via LIBDEF
3. ISPLLIB datasets
4. Datasets activated with the TSOLIB command

For non-ISPF TSO, only TSOLIB is used for TASKLIB.

To find out your LIBDEF allocations, enter ISPLIBD on the ISPF command line. (Not TSO ISPLIBD, just plain ISPLIBD)

TASKLIB with the CALL command in TSO

When you use CALL to run a program under TSO, the library name on the CALL command becomes a TASKLIB during the time the called program is running.  CALL is often used within CLIST and REXX execs, even though you may not use CALL much yourself directly.

So if you say CALL 'SYS1.LINKLIB(IEBGENER)' from ISPF option 6 or from a CLIST or REXX, then 'SYS1.LINKLIB' will be used to satisfy other LOAD requests that might be issued by IEBGENER or its subtasks while the called IEBGENER is running.  Ah, yes, the system is full of nuances and special cases like this one; I'm just giving you the highlights.  This entire article is somewhat of a simplification.  What joys, you might be thinking, must await you in the world of mainframes.


STEPLIB and JOBLIB

If STEPLIB and JOBLIB are both in the JCL, the STEPLIB is used and the JOBLIB is ignored.

The system does NOT search both.
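A hedged JCL sketch of the rule; the job, program, and dataset names are all invented:

```jcl
//* JOBLIB applies to every step that has no STEPLIB of its own.
//* Where a step has a STEPLIB, the STEPLIB replaces the JOBLIB
//* for that step -- the system does not search both.
//MYJOB    JOB (ACCT),'JOBLIB DEMO',CLASS=A
//JOBLIB   DD DSN=YOUR.JOBLIB.LOAD,DISP=SHR
//STEP1    EXEC PGM=PROGA            PROGA found via JOBLIB
//STEP2    EXEC PGM=PROGB            PROGB found via STEPLIB only
//STEPLIB  DD DSN=YOUR.STEPLIB.LOAD,DISP=SHR
```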

LPA (Link Pack Area)

You know that there are default system load libraries that are used when you don’t have STEPLIB or JOBLIB, or when your STEPLIB or JOBLIB does not contain the program to be executed.  

The original list of the names of those system load libraries is called LINKLIST.  The first original system load library was called SYS1.LINKLIB, and when they wanted to have more than one system library they came up with the name LINKLIST to designate the list of library names that were to be treated as logical extensions of LINKLIB.

The drawback with LINKLIST, as with STEPLIB and JOBLIB, is that when you request a program from them, the system actually has to go and read the program into memory from disk.  That’s overhead.  So they invented LPA (which stands for Link Pack Area — go figure.).

Some programs are used so often by so many jobs that it makes sense to keep them loaded into memory permanently.  Well, virtual memory.  As long as such a program is not self-modifying, multiple jobs can use the same copy at the same time:  Additional savings.  So an area of memory is reserved for LPA.  Modules from SYS1.LPALIB (and its concatenation partners) are loaded into LPA.  It is pageable, so more heavily used modules replace less used modules dynamically.

Sounds good, but more tweaks came.  Some places consider some programs so important and so response-time-sensitive that they want those programs to be kept in memory all the time, even if they haven’t been used for a few minutes.  And so on, until we now have several subsets of LPA.

Within LPA, the following search order applies:

Dynamic LPA, from the list in ‘SYS1.PARMLIB(PROGxx)’

Fixed LPA (FLPA), from the list in ‘SYS1.PARMLIB(IEAFIXxx)’

Modified LPA (MLPA), from the list in ‘SYS1.PARMLIB(IEALPAxx)’

Pageable LPA (PLPA), from the list in (LPALSTxx) and/or (PROGxx)


LINKLIST

LINKLIST libraries are specified using SYS1.PARMLIB(PROGxx) and/or (LNKLSTxx).

SYS1.LINKLIB is included in the list of System Libraries even if it is not named explicitly. 

An overview of LINKLIB was just given in the introduction to LPA, so you know this already, or at least you can refer back to it above if you skipped ahead.

Note that LINKLIST libraries are controlled by LLA; whenever any LINKLIST module is updated, the system's cached BLDL directory information needs to be rebuilt by an LLA refresh.  If LLA is not refreshed, the old version of the module will continue to be given to anyone requesting that module.

What’s LLA, you might ask.  For any library under LLA control, the system reads the directory of each library and keeps the directory permanently in memory.  A library directory contains the disk address of every library member, so keeping the directory in memory considerably speeds up the process of finding any member.

It speeds that up more than one might think, because PDS directory blocks have a very small block size, 256 bytes, and these days the directories of production load libraries can contain a lot of members.  Taken together, those two facts mean that reading an entire PDS directory from disk can require many READ operations and hence be time-consuming.  If you repeat that delay for almost every program search in every job, you have a drag on the system.  So LLA gives a meaningful performance improvement for reading members from PDS-type libraries.  For PDSE libraries too, but for different reasons; PDSE libraries do not have directory blocks like ordinary PDS libraries.

The price you pay for the improvement is that the in-memory copies of the directories have to be rebuilt whenever a directory is updated, that is, whenever a library member is replaced.  What does LLA stand for?  Originally it stood for LINKLIST LookAside, but when the concept was extended to cover other libraries besides LINKLIST, the name was changed to Library LookAside.

Under TSO, CLIST and REXX execs

Under TSO, if a module is not found in any of the above places, there is a final search for a CLIST or REXX exec matching the requested name. 

CLIST and REXX execs are obtained by searching ddnames SYSUEXEC, SYSUPROC, SYSEXEC, and SYSPROC (in that order, if the order has not been deliberately changed).  Note that SYSUEXEC and SYSEXEC are for REXX members only, whereas SYSPROC and SYSUPROC can contain both REXX and CLIST members, with the proviso that a REXX member residing in the SYSPROC family of libraries must contain the word REXX on the first line, typically as a comment /* REXX */
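For instance, a minimal exec stored in a library on the SYSPROC/SYSUPROC concatenation must announce itself as REXX on its first line, typically in a comment:

```
/* REXX - the word REXX must appear on line one when this exec
   lives in a SYSPROC or SYSUPROC (CLIST-style) library */
say 'Hello from a REXX exec'
```

In a SYSEXEC or SYSUEXEC library the member is assumed to be REXX, so the marker is not required there (though it is still good practice).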

These four ddnames can be changed, and other ddnames can be added, by use of the ALTLIB command.  Also, the order of search within these ddnames can be altered with the ALTLIB command (among other ways).
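A sketch of ALTLIB in action (the dataset name is a placeholder).  ALTLIB ACTIVATE puts user-level libraries ahead in the search for the current session:

```
/* Put a private REXX library ahead of the normal exec search */
ALTLIB ACTIVATE APPLICATION(EXEC) DATASET('YOUR.REXX.LIB')
ALTLIB DISPLAY                       /* show the search order  */
ALTLIB DEACTIVATE APPLICATION(EXEC)  /* undo it when finished  */
```

APPLICATION(CLIST) works the same way for CLIST libraries.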

SYSPROC was the original CLIST ddname.  A CLIST library can also include REXX execs.  The SYSEXEC ddname was added to be used just for REXX.  In the spirit of ISPLUSR et al, a USER version of each was added, called SYSUPROC and SYSUEXEC.

The default REXX library ddname SYSEXEC can be changed to something other than SYSEXEC by MVS system installation parameters.

Prefixing the TSO command name with the percent sign (%) causes the system to skip directly to the CLIST and REXX part of the search, rather than looking every other place first. 

To find the actual search order in effect within the CLIST and EXEC ddnames for your TSO session at any given time, use the command TSO ALTLIB DISPLAY. 

That's it for program search order.  It's a simplification of course. ;-)


References, Further reading

z/OS Basic Skills, Mainframe concepts, z/OS system installation and maintenance
Search order for programs

z/OS TSO/E REXX Reference, SA32-0972-00 
Using SYSPROC and SYSEXEC for REXX execs

z/OS V2R2 ISPF Services Guide 
Application data element search order

z/OS ISPF Services Guide, SC19-3626-00 
LIBDEF—allocate application libraries

Numbered item 3 in Section “Search order the system uses for programs” in the IBM publication “z/OS MVS Initialization and Tuning Guide” at this link: https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.ieae100/ieae10018.htm

"The Search for the Load Module" in the z/OS MVS Assembler Services Guide

"Customizing the ISPF TSO command table (ISPTCM)" and "Customizing command tables" in the IBM manual "z/OS ISPF Planning and Customizing"


Guest Post(s) — the David Staudacher Corner

This post is a placeholder for attachments by David Staudacher.

David's material is, for the most part, not beginner stuff, so I don't really recommend these if you're just getting started on the mainframe.  On the other hand, if you read them and want to see more from Dave, check out LinkedIn, where he has a group called
Mainframe (COBOL,JCL,DB2,CICS,VSAM,MVS,Adabas/Natural ) Experts (https://www.linkedin.com/groups/910927)


Size Matters: What TSO Region Size REALLY Means

What does specifying an 8 Meg (8192K) TSO Logon Region Size mean?

It does Not mean you get an 8 Meg region size.  Nice guess, and a funny idea, but no.  For that matter, REGION=8M in JCL doesn't get you 8 Meg there, either.  Hasn't done so for decades, despite what you may have heard or read elsewhere.  (If you have trouble believing this, feel free to skip down a few paragraphs, where you can find a link to some IBM doc on the matter.)

No, your region size defaults to at least 32 Meg, regardless of what you specify.

The people who set up your own particular z/OS system might have changed the IBM defaults, or (who knows) might even have vindictively limited your own personal userid.  For this discussion, though, we're assuming you're using a more or less intact z/OS system with the IBM defaults in effect.

So, you ask, What happens when you specify Size 8192 at Logon?

Size    ===> 8192  

When specifying Size=8192, you will be allowed to use up to 32 Meg of 31-bit addressable memory. This is the ordinary kind of memory that most programs use. You will get this for any value you enter for Size until you get up to asking for something greater than 32 Meg. Above 32 Meg, the Size will be interpreted differently.

If you enter a number greater than 32 Meg (say, Size ===> 65536), that will be interpreted in the way you would expect – as the region size for ordinary 31-bit memory.

You have to specify a  number bigger than 32768 to increase your actual region size.

Just wanted to make sure you saw that.  It's not a subtitle.

Notice by the way that region size is not a chunk of memory that is automatically allocated for you, it's just a limit.  It means your programs can keep asking for memory until they get to the limit available, and above that they'll be refused.

So, what does the 8192 mean, then?  When specifying Size=8192, you will be allowed to use up to 8 Meg of 24-bit addressable memory.   (In this context, 8192 means 8192K, and 8192K = 8M.)

This is why trying to specify a number slightly bigger than 8 Meg,  say 9 or 10 Meg, is likely to fail your logon with a "not available" message.   The request for 9 or 10 Meg is interpreted as a request for that amount of 24-bit memory, and most systems don't have that amount of 24-bit memory available, even though there is loads of 31-bit memory.  So asking for 9 Meg might fail with a "not available" message, but if you skip over all those small numbers and specify a number >32Meg then the system can probably give you that, and your logon would then work. 

How did this strange situation arise, and what is the difference anyway?

Here starts the explanation of the different types of addresses

Feel free to skip ahead if you don't care about the math, you just want the practical points.

24-bit addresses are smaller than 31-bit addresses.  Each address — Let's say each saved pointer to memory within the 24-bit-addressable range — requires only 3 bytes (instead of the usual 4 bytes).

24-bit memory addresses are any addresses lower than 16 Meg.

There is an imaginary line at 16 Meg.  24-bit addresses are called "Below the Line" and 31-bit addresses are called "Above the Line".

More-technical-details-than-usual digression here.  Addresses start at address zero.  The 3-byte range goes up to hex 'FFFFFF' (each byte is represented as two hex digits.  Yes, F is a digit in the hex way of looking at things.  The digits in hex are counted 0123456789ABCDEF).  There are 8 bits in a byte, so 3 bytes is 24 bits.  Hence, 3-byte addresses, 24-bit addressing.  Before you notice that 4 times 8 is actually 32, not 31, you may as well know that the leftmost bit in the leftmost byte of a 4-byte address is reserved, and not considered part of the address.  Hence, 31-bit addressing.
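Spelled out, the arithmetic behind those numbers looks like this:

```
3 bytes = 24 bits      2**24 = 16,777,216 = 16 Meg    (top address X'FFFFFF')
4 bytes = 32 bits, minus the reserved leftmost bit = 31 usable bits
                       2**31 = 2,147,483,648 = 2 Gig  (the "Bar", see below)
```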

Decades ago the 24-bit scheme was the standard type of memory, and some old programs, including some parts of the operating system, still need to use the smaller addresses.   Why? Because there were a lot of structures that people set up where they only allowed a 3-character field for each pointer. When the operating system was changed to use 4-byte addresses, some of the existing tables were not easy to change — mainly because the tables were used by so many programs that also would have needed to be changed, and, crucially, not all of those programs belonged to IBM.  Customers had programs with the same dependency.  Lots of customers. So even today a program can run in “24-bit addressing mode” and still use the old style when it needs to do that.

Most programs run in “31-bit addressing mode”, so they are dependent on the amount of 31-bit memory available.  These days another upgrade is in progress: 64-bit addressing.  The current standard is still 31-bit addressing, and it will be that way for a good while yet.  However, 64-bit addressing is used extensively by programs that need to have a lot of data in memory at the same time, such as editor programs that allow you to edit ginormous data sets.

When specifying Size=8192, you will also be allowed to use up to 2 Gig of 64-bit memory, as long as your system is at least at level z/OS 1.10.  (2 Gig is the IBM default limit for 64-bit memory, not a 64-bit maximum; 64-bit addresses can go far beyond 2 Gig.  2 Gig is actually the ceiling of the 31-bit range.)  Prior to z/OS 1.10, the default limit for 64-bit memory was zero.  In JCL you can change this with the MEMLIMIT parameter, but there is no way for you to specify an amount for 64-bit memory on the TSO Logon screen.

There is an imaginary bar at 2 Gig, since the word "line" had already been used for the imaginary line at 16 Meg.  Addresses above 2 Gig, that is, 64-bit addresses, are called "Above the Bar".  Addresses lower than that are called "Below the Bar".

Here ends the explanation of the different types of addresses.

Maybe  you wonder what happens for various values you might specify other than 8192 aka 8 Meg.  So now we'll discuss the three possibilities:

– You specify less than 16 Meg
– you specify more than 32 Meg
– you specify between 16 Meg and 32 Meg (boring)

Specifying less than 16 Meg

Any user-specified logon SIZE value less than 16 Meg just controls the amount of 24-bit memory you can use.

The limit on how much you can get for 24-bit memory will vary depending on how much your own particular system has reserved for its own use (such as the z/OS "nucleus" and what not), and how much it reserves to use on your behalf, for example for building tables in memory from the DD statements in your JCL.  (Yes, you have JCL when you are logged onto TSO, you just don't see it unless you look for it.  The Logon screen has a field for a logon proc, remember that?  It's a JCL proc.)  Any 24-bit memory the system doesn't reserve for itself, you can get.  This is called private area storage (subpools 229 and 230).

Typical mistake:  A user who thinks he has an 8 Meg region size may try to increase it to 9 Meg by typing in 9216 for size. The LOGON attempt may fail. It may fail because there is not nine Meg of leftover 24-bit storage that the system isn’t using.  Such a user might easily but mistakenly conclude that it is not possible for him to increase the region size above what he had before.  Ha ha wrong, you say to yourself (possibly with a little smile).  Because you now know that they have to specify a number bigger than 32768 — that is, more than 32 Meg.

Specifying more than 32 Meg

To increase the actual Region size, of course, as you now know, the user needs to specify a number bigger than 32 Meg (bigger than SIZE ===> 32768).  When you specify a value above 32 Meg, it governs how much 31-bit storage you get.  The maximum that can be specified is 2096128 (2 Gigabytes).

Specifying any value above 32 Meg ALSO causes the user to get all available 24-bit memory (below the 16 Meg line).  This has the potential to cause problems related to use of 24-bit memory (subpools 229/230).  This could happen if the program you’re running uses a lot of 24-bit memory and then requests the system to give it some resource that makes the system need to allocate more 24-bit memory, but it can’t, because you already took it all.  The request fails.  The program abends, flops, falls over or hangs.  This happens extremely rarely, but it can happen, so it’s something for you to know as a possibility.

Specifying between 16 Meg and 32 Meg

What happens if you specify a number bigger than 16 Meg but smaller than 32 Meg?  You still get the 32 Meg region size, of course.  You also get all the 24-bit storage available – the 24-bit memory is allocated in the same way as it would have been if you had specified a number above 32 Meg.  So asking for 17 Meg or 31 Meg has EXACTLY the same effect: It increases the request for 24-bit storage to the maximum available, but it leaves the overall region size at the default of 32 Meg.  Having this ability must be of use to someone in some real-world situation, I suppose, or why would IBM have bothered to provide it?  But imagining such a situation evades the grasp of my own personal imagination.


If you want to see the IBM documentation on this — and who would blame you, it’s a bizarre setup — check out page 365 of "z/OS MVS JCL Reference" (SA23-1385-01) at http://publibz.boulder.ibm.com/epubs/pdf/iea3b601.pdf

Caveats and addendums

The IBM-supplied defaults can be changed in a site-specific way. Mostly the systems people at your site can do this by using an IEALIMIT or IEFUSI exit, which they write and maintain themselves. Also IBM is free to change the defaults in future, as they did in z/OS 1.10 for the default for 64-bit memory.

If you want to know whether there is a way to find out what your real limits are in case the systems programmers at your site might have changed the defaults, yes there is a way, but it involves looking at addresses in memory (not as hard as it sounds) and is too long to describe in this same article.

Yes, this is a break from our usual recent thread on TSO Profiles and ISPF settings.  We'll probably go straight back to that next.  The widespread misunderstanding of REGION and Logon SIZE comes up over and over again, and it happened to come up again recently here.

There is a tie-in with TSO, though, which you may as well know, since we're here.  A lot of problems in TSO are caused by people logging on with region sizes that aren't big enough for the work they want to do under TSO.  The programs that fail don't usually give you a neat error message asking you to logon with a larger region size – mostly they just fall over in the middle of whatever they happen to be doing at the time, leaving you to guess as to the reason.

Free advice:  If you have a problem in TSO and it doesn't make much sense, it's worth a try just to logon with SIZE===>2096128 and see what happens.  Oftentimes just logging off and logging on again clears up the problem for much the same reason:  Some program (who knows which one) has obtained a lot of storage and then failed to release it, so there isn't much storage left for use by other programs you try to run.  Logging off frees the storage, and you start over when you logon again.

Batch JCL corollary:  If you get inexplicable abends in a batch job, especially 0C4 abends, try increasing the REGION size on the EXEC statement.  Go ahead, laugh, but try it anyway.  It's been an unfailing source of amusement over the years to see problems "fixed" by increasing either the JCL REGION size or the TSO Logon Region size.  All the bizarre rules shown above work the same for batch JCL REGION as for TSO LOGON Region Size, except for 64-bit memory, which can be changed using the MEMLIMIT parameter in JCL but cannot be changed on the TSO Logon screen.  Remember, you have to go higher than 32M to increase your actual region size!


You need to specify a value bigger than 32 Meg to increase your actual (31-bit) TSO Region size (or JCL Region size).