IBM z/OS MVS Spooling : a Brief Introduction

Spooling means a holding area on disk is used for input jobs waiting to run and output waiting to print.

This is a brief introduction to IBM z/OS MVS mainframe spooling.

The holding area is called spool space.  The imagery of spooling was probably taken from processes that wind some material such as thread, string, fabric or gift wrapping paper onto a spindle or spool.  In effect, spooling is a near-synonym for queueing.  Those paper towels on that roll in the kitchen are queued up waiting for you to use each one in turn, just like a report waiting to be printed or a set of JCL statements waiting for its turn to run.

On very early mainframe computers, the system read input jobs from a punched card reader, one line at a time, one line from each punched card.  It wrote printed output to a line printer, one line at a time.  Compared to disks – even the slower disks used decades ago – the card readers and line printers were super slow.  Bottlenecks, it might be said.  The system paused its other processing and waited while the next card was read, or while the next line was printed.  So that methodology was pretty well doomed.  It was okay as a first pass at getting a system to run — way better than an abacus — but that mega bottleneck had to go.  Hence came spooling.

HASP, the Houston Automatic Spooling Priority program (variously called a program, a system, or a subsystem), was early spooling software used with OS/360 (the ancestral precursor of z/OS MVS).  (See the HASP origin story, if interested.)  HASP was the basis for the development of JES2, which today is the most widely used spooling subsystem for z/OS MVS systems.  Another fairly widely used current spooling subsystem is JES3, based on an alternate early system called ASP.  We will focus on JES2 in this article because it is more widely used.  JES stands for Job Entry Subsystem.  In fact JES subsystems oversee both job entry (input) and the processing of sysout (SYStem OUTput).

Besides simply queueing the input and output, the spooling subsystem schedules it.  The details of the scheduling form the main point of interest for most of us. Preliminary to that, we might want to know a little about the basic pieces involved.

The Basic Pieces

There are input classes, also called job classes, that control job scheduling and resource limits.

There are output classes, also called sysout classes, that control how output is printed.

There are real physical devices (few card readers these days, but many variations of printers and vaguely printer-like devices).

There are virtual devices. One virtual device is the “internal reader” used for software-submitted jobs, such as those sent in using the TSO submit command or FTP.  Virtual output devices include “external writers”.  An external writer is a program that reads and processes sysout files, and such a program can route the output to any available destination.  Many sysout files are never really printed, but are viewed (and further processed) directly from the spool space under TSO using a software product like SDSF.

There is spool sharing.  A JES2 spool space on disk (shared disk, called shared DASD) can be shared between two or more z/OS MVS systems with JES2 (with a current limit of 32 systems connected this way).  Each such system has a copy of JES2 running. Together they form a multi-access spool configuration (MAS).  Each JES2 subsystem sharing the same spool space can start jobs  from the waiting input queues on the shared spool, and can also select and process output from the shared spool.

There is checkpointing. This is obviously especially necessary when spool sharing is in use.

There is routing.  Again, useful with spool sharing, to enable you to route your job to run on a particular system, but also useful just to route your job’s output print files to print on a particular printer.

There are separate JES2 operator commands that the system operator can use to control the spooling subsystem, for example to change what classes of sysout can be sent to a specific printer, or what job classes are to be processed.  (These are the operator commands that start with a dollar sign $, or some alternative currency symbol depending on where your system is located.)

There is a set of very JCL-like control statements you can use to specify your requirements to the spooling subsystem.  (Sometimes called JECL, for Job Entry Control Language, as distinct from plain JCL, Job Control Language.)  For JES3, these statements begin with //* just like an ordinary JCL comment, so a job that has been running on a JES3 system can be copied to a system without JES3 and the JES3-specific JECL statements will simply be ignored as comments.  For JES2, on which we will focus here, the statements generally begin with /* in columns 1 and 2.  Common examples you may have seen are /*ROUTE and /*OUTPUT but notice that the newer OUTPUT statement in JCL is an upgrade from /*OUTPUT and the new OUTPUT statement offers more (and, well, newer) options.  Though the OUTPUT statement is newish, it is over a decade old, so you probably do have it on your system.
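
For illustration, here is roughly what a couple of JES2 control statements look like sitting behind a JOB statement.  The system name SYSA, the remote print destination RMT6, and the program name are made-up examples; which JECL statements are available, and what values make sense, depends on your release and on how your site is set up.

//MYJOB    JOB  1,CLASS=A,MSGCLASS=X
/*JOBPARM  SYSAFF=SYSA
/*ROUTE    PRINT RMT6
//STEP1    EXEC PGM=MYPROG

Here /*JOBPARM SYSAFF= asks JES2 to run the job on a particular member of the shared-spool configuration, and /*ROUTE PRINT sends the job's print output to a particular destination.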

There are actual JCL parameters and statements that interact with JES2, such as the OUTPUT parameter on the DD statement, and the just-mentioned OUTPUT statement itself, which is pointed to by the parameter on the DD. 
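
As a sketch of how those two fit together (the class, destination, and program name are placeholders):

//MYJOB    JOB  1,CLASS=A,MSGCLASS=X
//RPT      OUTPUT CLASS=A,DEST=LOCAL,COPIES=2
//STEP1    EXEC PGM=MYPROG
//PRINT1   DD  SYSOUT=(,),OUTPUT=(*.RPT)

The DD statement picks up its printing options from the OUTPUT statement named RPT; coding SYSOUT=(,) with no class letter says to take the output class from the OUTPUT statement as well.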

Another example is the CLASS parameter on the JOB statement, which designates the job class used for job scheduling and execution.  The meanings of the individual job classes are totally made up for each site.  Some small development company might have just one job class for everything.  Big companies typically create complicated sets of job classes, each class defined with its own limits for resources such as execution time, region size, even the time of day when the jobs in each class are allowed to run.  Your site can define how many jobs of the same class are allowed to run concurrently, and the scheduling selection priority of each class relative to each other class.

Sometimes sites will set up informal rules which are not enforced by the software, but by local working practice, so that everyone there is presumed to know, for example, that CLASS=E may only be specified for emergency jobs.  (That's one I happened to see someplace.)  If you want to know what job CLASS to specify for your various work, your best bet is to ask your co-workers, the people who are responsible for setting up the job classes, or some other knowledgeable source at your company.  Remember you can be held responsible for rules you know nothing about that are not enforced by any software configuration, so don't try to figure it out on your own; ask colleagues and other appropriate individuals what is permissible and expected.  Not joking.

If you're just curious to get a general idea of what the configurable parameters are, the JES2 Initialization and Tuning books (Guide and Reference) are the ones that define how JES2 job classes can be configured.  The JES2 proc in proclib usually contains a HASPPARM DD statement pointing to where the JES2 configuration parameters live on any particular system.
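
Just to show where the parameter goes, here is a JOB statement using the hypothetical emergency class from the story above (the job name, accounting field, and class letter are all made up):

//PAYFIX   JOB  (ACCT123),'EMERGENCY FIX',CLASS=E,MSGCLASS=X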

In some cases similar considerations can apply for the use of SYSOUT print classes and the routing of such output to go to particular printers or to be printed at particular times.  The SYSOUT classes, like JOB classes, are entirely arbitrary and chosen by the responsible personnel at each site.  

MSGCLASS on the JOB statement controls where the job log goes — the JCL and messages portion of your listing.  The values you can specify for MSGCLASS are exactly the same as those for SYSOUT (whatever way that may be set up at your site).  If you want all your SYSOUT to go to the same place, along with your JCL and messages, specify that class as the value for MSGCLASS= on your job statement, and then specify SYSOUT=* on all of the DD statements for printed output files.  (That is, specify an asterisk as the value for SYSOUT= on the DD statements.)  
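
A minimal sketch of that arrangement (the class letter and the program name are placeholders):

//MYJOB    JOB  1,CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=MYPROG
//REPORT   DD  SYSOUT=*
//SUMMARY  DD  SYSOUT=*

Both REPORT and SUMMARY land in class X along with the JCL and system messages, because SYSOUT=* means "use whatever MSGCLASS says".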

In many places, SYSOUT class A indicates real physical printed output on any printer, class X indicates routing to a held queue where it can be viewed from SDSF, and class Z specifies that the output immediately vanishes (Yup, that's an option).  However, there is no way to know for sure the details of how classes are set up at your particular site unless you ask about it.

Sometimes places maintain “secret” classes for specifying higher priority print jobs, or jobs that go to particular special reserved printers, and the secrets don’t stay secret of course.  Just because you see someone else using some print class, don’t assume it means it’s okay for you to use it for any particular job.  Ask around about the local rules and expectations.

So, for MSGCLASS (which is just a SYSOUT class), as for JOB classes, the best thing is to ask whoever sets up the classes at your site; or, if that isn't practical, ask people working in the same area as you are, or just whoever you think is probably knowledgeable about the local setup.  Classes are set up by your site, for your site.

An example of a JES2-related JCL statement that you have probably not yet seen was introduced with z/OS 2.2: the JOBGROUP statement, along with an entire set of associated statements (ENDGROUP, SCHEDULE, BEFORE, AFTER, CONCURRENT; there are about ten of them in all).  That would be a topic for a follow-on post.  You probably don't have z/OS 2.2 yet anyway, but it can be fun to know what's coming.  JOBGROUP is coming.

That’s probably enough for an overview basic introduction.

The idea for this post came from a suggestion by Ian Watson.

 

References and Further Reading

z/OS concepts: JES2 compared to JES3
https://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zconcepts/zconc_jes2vsjes3.htm

 
z/OS JES2 Initialization and Tuning Guide, SA32-0991-00
How to initialize JES2 in a multi-access SPOOL configuration
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.hasa300/himas.htm
 
z/OS MVS JCL Reference (z/OS 2.2)
JES2 Execution Control Statements (This is where you can see the new JOBGROUP)
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.2.0/com.ibm.zos.v2r2.ieab600/jes2zone.htm
 
z/OS MVS JCL Reference, SA23-1385-00  (z/OS 2.1)
JES2 control statements
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.ieab600/j2st.htm
 
OUTPUT JCL statement
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.ieab600/outst.htm

 
z/OS JES2 Initialization and Tuning Reference, SA32-0992-00
Parameter description for JOBCLASS(class…|STC|TSU)
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.hasa400/has2u600106.htm

 
z/OS JES2 Initialization and Tuning Guide, SA32-0991-00
Defining the data set for JES2 initialization parameters
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.hasa300/defiset.htm


CCHHR and EAV

Addresses on disk.

An address on disk is an odd sort of thing.

Not a number like a memory pointer.

More like a set of co-ordinates in three dimensional space.

Ordinary computer memory is mapped out in the simplest way, starting at zero, with each additional memory location having an address that is the plus-one of the location just before it, until the allowable maximum address is reached; the allowable maximum pointer address is limited by the number of digits available to express the address.

Disks — external storage devices — are addressed differently.

Since a computer system can have multiple disks accessible, each disk unit has its own unit address relative to the system.  Each unit address is required to be unique.  This is sort of like disks attached to a PC being assigned unique letters like C, D, E, F, and so on; except the mainframe can have a lot more disks attached, and it uses multi-character addresses expressed as hex numbers rather than using letters of the alphabet.  That hex number is called the unit address of the disk.

Addresses on the disk volume itself are mapped in three-dimensional space.  The position of each record on any disk is identified by Cylinder, Head, and Record number, similar to X, Y, and Z co-ordinates, except that they're called CC, HH, and R instead of X, Y, and Z. A track on disk is a circle.  A cylinder is a set of 15 tracks that are positioned as if stacked on top of each other.  You can see how 15 circles stacked up would form a cylinder, right?  Hence the name cylinder. 

Head, in this context, equates to Track.  The physical mechanism that reads and writes data is called a read/write head, and there are 15 read/write heads for each disk, one head for each possible track within a cylinder.  All fifteen heads move together, rather like the tines of a 15-pronged fork being moved back and forth. To access tracks in a different cylinder, the heads move in or out to position to that other cylinder.  So just 15 read/write heads can read and write data on all the cylinders just by moving back and forth.  

That's the model, anyway.  And that's how the original disks were actually constructed.  Now the hardware implementation varies, and any given disk might not look at all like the model.  A disk today could be a bunch of PC flash drives rigged up to emulate the model of a traditional disk.  But regardless of what any actual disk might look like physically now, the original disk model was the basis of the design for the method of addressing data records on disk.  In the model, a disk is composed of a large number of concentric cylinders, with each cylinder being composed of 15 individual tracks, and each track containing some number of records.

Record here means physical record, what we normally call a block of data (as in block size).  A physical record — a block — is usually composed of multiple logical records (logical records are what we normally think of as records conceptually and in everyday speech).  But a logical record is not a real physical thing, it is just an imaginary construct implemented in software.  If you have a physical record — a block — of 800 bytes of data, your program can treat that as if it consists of ten 80-byte records, but you can just as easily treat it as five 160-byte records if you prefer, or one 800-byte record; the logical record has no real physical existence.  All reading and writing is done with blocks of data, aka physical records.  The position of any given block of data is identified by its CCHHR, that is, its cylinder, head, and record number (where head means track, and record means physical record).  

The smallest size a data set can be is one track.  A track is never shared between multiple data sets.

The CCHHR represents 5 bytes, not 5 hex digits.  You have two bytes (a halfword) for the cylinder number, two bytes for the head (track) number, and one byte for the record number.

A "word", on the IBM mainframe, is 4 bytes, or 4 character positions.  Each byte has 8 bits, in terms of zeroes and ones, but it is usually viewed in terms of hexadecimal; In hexadecimal a byte is expressed as two hex digits.  A halfword is obviously 2 bytes, which is the size of a "small integer".  (4 bytes being a long integer, the kind of number most often used; but halfword arithmetic is also very commonly used, and runs a little faster.)  A two-byte small integer can express a number up to 32767 if signed or 65535 if unsigned.  CC and HH are both halfwords.  

Interestingly, a halfword is also used for BLKSIZE (this is a digression), but the largest block size for an IBM data set traditionally is 32760, not 32767, simply because the MVS operating system, like MVT before it, was written using 32760 as the maximum BLKSIZE.  Lately there are cases where values up to 65535 are allowed, using LBI (large block interface) and what-not, but mostly the limit is still 32760.  But watch this space; 65535 is on its way in; obviously the number need not allow for negative values, that is, it need not be signed.  End of digression on BLKSIZE.

There can be any number of concentric cylinders in the model, but using the traditional CCHHR method you can only address a cylinder number that can be represented in a two-byte unsigned integer.  That would be 65,535; in practice, ordinary (non-EAV) IBM disks top out at 65,520 cylinders, numbered 0 through 65,519.  That is the CC-coordinate, the basis of the CC in CCHHR.

But wait, you say, you've got an entire 2 bytes — a halfword integer — to express the track number within the cylinder, yet there are always 15 tracks in a cylinder; one byte would be enough.  In fact, even  half a byte could be used to count to fifteen, which is hex F.  Right.  You got it.  What do we guess must eventually happen here? 

People want bigger disks so they can have bigger data sets, and more of them.  Big data.  You know how many customers the Bank of China has these days?  No, I don't either, but it's a lot, and that means they need big data sets.  And they aren't the only ones who want that.  I really don't want to think about guessing how much data the FBI must store.  What we do know is that there is a big – and growing – demand for gigantic data sets.

So inevitably the unused extra byte in HH  must be poached and turned into an adjunct C.  Thus is born the addressing scheme for the extended area on EAV disks (EAV = Extended Address Volumes).  So, three bytes for C, one byte for H ?  Well, no, IBM decided to leave only HALF of a byte — four bits — for H.  (As you noticed earlier, one hex digit — half of a byte — is enough to count to fifteen, which is hex F.) So IBM took 12 bits away from HH for extending the cylinder number.   Big data.  Big.

And you yourself would not care overly about EAV, frankly, except that (a) you (probably) need to change your JCL to use it, and (b) there are restrictions on it, plus (c) those restrictions keep changing, and besides that (d) people are saying your company intends to convert entirely to EAV disks eventually.

Okay, so what is this EAV thing, and what do you do about it ?

EAV means Extended Address Volume, which means bigger disks than were previously possible, with more cylinders.  The first part of an EAV disk is laid out just like any ordinary disk, using the traditional CCHHR addressing.  So that can be used with no change to your programs or JCL.

In the extended area, that is, for cylinders 65,520 and above, the CCHH is no longer a plain CCHH.

The first two bytes (sixteen bits) contain the lower part of the cylinder number, which can go as high as 65,535.  The next twelve bits — one and a half bytes taken from what was previously part of HH — contain the rest of the cylinder number, so to read the whole thing as a number you have to take those twelve bits and put them to the left of the first two bytes.  The remaining four bits — the remaining half of a byte out of what was once HH — contain the track number within the cylinder, which can go as high as fifteen.

Says IBM (in z/OS DFSMS Using Data Sets):

     A track address is a 32-bit number that identifies each track
     within a volume. The address is in the format hexadecimal CCCCcccH.

        CCCC is the low order 16-bits of the cylinder number.

        ccc is the high order 12-bits of the cylinder number.

        H is the four-bit track number.

End of quote from IBM manual.
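
To see how that plays out, take a made-up track address of X'86A00013'.  The first four hex digits, X'86A0', are the low-order 16 bits of the cylinder number, worth 34,464.  The next three hex digits, X'001', are the high-order 12 bits, worth 1 x 65,536 = 65,536.  Put together, the cylinder number is 65,536 + 34,464 = 100,000, which is above 65,520 and therefore in the extended area; the final hex digit, X'3', says this is track 3 within that cylinder.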

The portion of the disk that requires the new format of CCHH is called extended addressing space (EAS), and also called cylinder-managed space.  Cylinder-managed space starts at cylinder 65520.

Of course, for any space at a cylinder address below 65,536 (that is, any cylinder number that fits in the original two bytes), those extra 12 bits are always zero, so you can view the layout of the CCHH the old way or the new way there; it makes no difference.

Within the extended addressing area, the EAS, the cylinder-managed space, you cannot allocate individual tracks.  Space in that area is always assigned in Cylinders, or rather in chunks of 21 cylinders at a time.  The smallest data set in that area is 21 cylinders.  The 21-cylinder chunk is called the "multicylinder unit".

If you code a SPACE request that is not a multiple of 21 cylinders (for a data set that is to reside in the extended area), the system will automatically round the number up to the next multiple of 21 cylinders.
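
For example, applying that rule, a request of SPACE=(CYL,(50,50)) for a data set that lands in the extended area is treated as 63 cylinders of primary space, and each 50-cylinder secondary request is likewise rounded up to 63.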

As of this writing, most types of data sets are allowed within cylinder-managed space, including PDS and PDSE libraries, most VSAM, sequential data sets including large format (DSNTYPE=LARGE), BDAM, and zFS.  This also depends on the level of your z/OS system, with more data set types being supported in newer releases.

However the VTOC cannot be in the extended area, and neither can system page data sets, HFS files, or VSAM files that have IMBED or KEYRANGE specified.  Also, VSAM files must have a Control Area (CA) size, or Minimum Allocation Unit (MAU), compatible with the restriction that space is allocated in chunks of 21 cylinders at a time.  Minor limitations.

Specify EATTR=OPT in your JCL when creating a new data set that can reside in the extended area.   EATTR stands for Extended ATTRibutes.  OPT means optional.  The only other valid value for EATTR is NO, and NO is the default if you don't specify EATTR at all.

The other EAV-related JCL you can specify on a DD statement is either EXTPREF or EXTREQ as part of the DSNTYPE.  When you specify  EXTPREF it means you prefer that the data set go into the extended area; EXTREQ means you require it to go there.

Example

Allocate a new data set in the extended addressing area

//MYJOB  JOB  1,CLASS=A,MSGCLASS=X
//BR14 EXEC PGM=IEFBR14
//DD1 DD DISP=(,CATLG),SPACE=(CYL,(2100,2100)),
//   EATTR=OPT,
//   DSNTYPE=EXTREQ,
//   UNIT=3390,VOL=SER=EAVVOL,
//   DSN=&SYSUID..BIG.DATASET,
//   DCB=(LRECL=X,DSORG=PS,RECFM=VBS)

 

Addendum 1 Feb 2017: BLKSIZE in Cylinder-managed Space

This was mentioned in a previous post on BLKSIZE, but it is relevant to EAV and bears repeating here.  If you are going to take advantage of the extended address area, the EAS, on an EAV disk, you should use system-determined BLKSIZE, that is, either specify no BLKSIZE at all for the data set or specify BLKSIZE=0, signifying that you want the system to figure out the best value of BLKSIZE for the data set.

Why? Because in the cylinder managed area of the disk the system needs an extra 32 bytes for each block, which it uses for control information. Hence the optimal BLKSIZE for your Data Set will be slightly smaller when the data set resides in the extended area.  The 32 byte chunk of control information does not appear within your data.  You do not see it.  But it takes up space on disk, as a 32-byte suffix after each block.

You could end up using twice as much disk space if you choose a poor BLKSIZE, with about half the disk space being wasted.  That is true because a track must contain an integral number of blocks, for example one or two blocks.  If you think you can fit exactly two blocks on each track, but the system grabs 32 bytes for control information for each block, then there will be not quite enough room on the track for a second block.  Hence the rest of the track will be wasted, and this will be repeated for every track, approximately doubling the size of your data set.  
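
To make the arithmetic concrete with made-up round numbers: suppose a track could hold two blocks of up to 28,000 bytes each, and you hard-code BLKSIZE=28000.  In track-managed space you get two blocks per track.  In cylinder-managed space each of those blocks really occupies 28,000 + 32 bytes of track capacity, so only one block fits per track and nearly half of every track is wasted.  A system-determined BLKSIZE would instead come out no bigger than 27,968, so that two blocks plus their 32-byte suffixes still fit on each track.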

On the other hand, if you just let the system decide what BLKSIZE to use, it generally calculates a number that allows two blocks per track. 

And when you use system-determined BLKSIZE — when you just leave it to the system to decide the BLKSIZE — you get a bonus: if the system migrates your data set, and the data set happens to land on the lower part of a disk, outside the extended area, the system will automatically recalculate the best BLKSIZE when the data set is moved.  If the data set is later moved back into the cylinder-managed EAS area, the BLKSIZE will again be automatically recalculated and the data reblocked.

If in the future IBM releases some new sort of disk with a different track length, and your company acquires a lot of the new disks and adds them to the same disk storage pool you're using now, the same consideration applies: If system-determined BLKSIZE is in effect, the best BLKSIZE will be calculated automatically and the data will be reblocked automatically when the system moves the data set to the different device type.

Yes, it is possible for a data set to reside partly in track-managed space (the lower part of the disk) and partly in cylinder-managed space (the EAS, extended address, high part of the disk), per the IBM document.  

You should generally use system-determined BLKSIZE anyway.  But if you’re using EAV disks, it becomes more important to do so because of the invisible 32-byte suffix the system adds when your data set resides in the extended area.

[End of Addendum on BLKSIZE]

References, further reading

IBM on EAV

z/OS DFSMS Using Data Sets
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.idad400/eav.htm

JCL for EAV
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.2.0/com.ibm.zos.v2r2.ieab600/xddeattr.htm
http://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.ieab600/iea3b6_Subparameter_definition18.htm

Disk types and sizes
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.idad500/tcd.htm

a SHARE presentation on EAV
https://share.confex.com/share/124/webprogram/Handout/Session17109/SHARE_Seattle_Session%2017109_How%20to%20on%20EAV%20Planning%20and%20Best%20Practices.pdf

EAV reference – IBM manual
z/OS 2.1.0 =>
z/OS DFSMS =>
z/OS DFSMS Using Data Sets =>
All Data Sets => Allocating Space on Direct Access Volumes => Extended Address Volumes
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.idad400/eav.htm

IBM manual on storage administration (for systems programmers)
z/OS DFSMSdfp Storage Administration
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.idas200/toc.htm

The 32-byte per block overhead in the extended area of EAV disk (IBM manual):
https://www.ibm.com/support/knowledgecenter/SSLTBW_1.13.0/com.ibm.zos.r13.idad400/blksz.htm

z/OS DFSMS Using Data Sets ==>
Non-VSAM Access to Data Sets and UNIX Files ==>
Specifying and Initializing Data Control Blocks ==>
Selecting Data Set Options ==>
Block Size (BLKSIZE)
Extended-format data sets: “In an extended-format data set, the system adds a 32-byte suffix to each block, which your program does not see. This suffix does not appear in your buffers. Do not include the length of this suffix in the BLKSIZE or BUFL values.”

IEFBR14

I E F B R 1 4   Mysteries of IEFBR14 Revealed —

People ask what IEFBR14 does.  If you laughed at that, move along to some other reading material.

There, now we’re alone to investigate the mysteries of IEFBR14.  You’re new to the mainframe perhaps. 

If you have read the previous introductory JCL post, you know that the only thing you can ask the mainframe to do for you is run a program (as specified by EXEC in JCL), and when that happens the system does various setup before running the program, plus various cleanup after the program.  Most of that setup and cleanup is orchestrated by what you specify on the DD statements (DD means Data Definition).   So when you run a program, any program, you have quite a bit of control over the actions the system will take on your behalf,  processing your DD statements.

When the program IEFBR14 runs, IEFBR14 itself does nothing, but when you tell the system to run a program (any program), that acts as a trigger to get the system to process any DD statements you include following the EXEC statement.  So you can use that fact to create and delete datasets just with JCL.  

For example, you might specify DISP=(NEW,CATLG) on a DD statement for a dataset if you want the system to create the dataset for your program just before the program runs (hence NEW), and you want the system to save the new dataset when the program ends, and you also want the system to create a catalog entry so you can access the dataset again just by its name, triggering the system to look the name up in the catalog (hence CATLG).

So all you need to do to create a dataset is put a DD statement after the EXEC for IEFBR14.  On the DD statement you specify DSN= whatever name you want the new dataset to have, and you specify DISP=(NEW,CATLG) as just mentioned.  Rather than spelling out a lot of other information, you can pick an existing dataset you like and model the parameters for this new one on it, saying LIKE=selected.model.dataset.  The DDname on the DD statement can be anything syntactically valid, that is, not more than 8 characters long, starting with a letter or acceptable symbol, and containing only letters, numbers, and the aforementioned acceptable symbols, which are usually #, @, and a currency symbol such as $.

Example:
 
//MYJOB  JOB  1,SAMPLE,MSGCLASS=X,CLASS=A
//BR14  EXEC  PGM=IEFBR14
//NEWDATA  DD  DSN=MY.NEW.DATA,DISP=(NEW,CATLG),
//    LIKE=MY.MODEL.DATASET

So the system creates the dataset.  Your program does not create it.  Your program might put data into it (or not), and the system doesn't care about the data.  The system manages the dataset itself based on what you specify in the DD statement in the JCL – or if a program really needs to create a dataset then the program builds the equivalent of a DD statement internally and invokes “dynamic allocation” to get the system to process the specifications the same as if there had been a DD statement in JCL.

In such a case the system processes that “dynamic allocation” information exactly the same way it would have processed it if you had supplied the information on a DD statement present in the JCL.

To delete an existing dataset you no longer want, you can specify DISP=(OLD,DELETE) on the DD statement, and the system will delete the dataset.  This is similar to the way it would delete a dataset if you issued the DELETE command under TSO, or using  IDCAMS, but there are a couple of important nuances you need to know about  deleting datasets.  
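
Before getting to the nuances, here is what the simple case looks like (the dataset name is made up):

//MYJOB   JOB  1,SAMPLE,MSGCLASS=X,CLASS=A
//BR14    EXEC PGM=IEFBR14
//GONE    DD   DSN=MY.OLD.DATA,DISP=(OLD,DELETE)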

One is that it is a big mistake if you try to delete a member of a dataset using JCL.  The DISP you specify applies to the entire dataset, even if you put a member name in parentheses.    Never say DELETE for a Member in JCL; you will lose the entire library.

The second thing you need to know about deleting a dataset is that, for ordinary cataloged data sets, saying DELETE causes the deletion of both the dataset and the catalog entry that points to it.  That's fine and works for most cases, but sometimes you might have just a dataset with no catalog entry, and other times you might have a catalog entry pointing to a dataset that isn't really there anymore.

If you have a dataset that is not cataloged, then you need to tell the system where it is.  You do that by specifying both the UNIT and VOL parameters.   UNIT identifies the type of device that holds the dataset, something like disk or tape, which you might be able to specify just by saying UNIT=3390 (for disk) or UNIT=TAPE.   VOL is short for VOLUME, and identifies which specific disk or tape contains your dataset.  So UNIT is a class of things, and VOL, or VOLUME, is a specific item within that class.  

It turns out that coding UNIT isn't usually as simple as saying UNIT=DISK.  The people responsible for setting up your system can name the various units anything they want.  UNIT=TEST and UNIT=PROD are common choices.  The system, as it comes from IBM, has UNIT=SYSDA and UNIT=SYSALLDA as default disk unit names, but some places change those defaults or restrict their use.  If you have access to any JCL that created or used the dataset, it would likely contain the correct UNIT name — because if there is no catalog entry for an existing dataset, then every reference to the dataset has to specify UNIT.

When you first create a dataset, you are required to supply a UNIT type, but you are not required to specify a VOLUME — the system will select an available VOLUME from within the UNIT class you specified.  

If you are dealing with a dataset that was created with just the UNIT specified, and DISP=(NEW,KEEP), then you need to find the output from the job that created the dataset.  The JCL part of the listing will show what volume the system selected for the dataset.

To code the VOL parameter in your JCL, typically you say VOL=SER=xxxxxx, where xxxxxx is the name of the volume.  There are various alternatives to this way of coding it.  SER is short for Serial Number.  The names of tapes used to be 6-digit numbers in most places, for whatever reason — possibly to make it easy to avoid duplication.  Besides Serial Number, the volume parameter has other possible subparameters too, but you don't care right now.

If you don't know what UNIT name to use, but you do know the volume where the dataset is, then go into  something like TSO/ISPF 3.4 and do a listing of the volume.  Select any other dataset on the volume and request the equivalent of ISPF 3.2 dataset information. Whatever UNIT it says, that should work for every dataset on the volume.

Note that it is possible for the same volume to be a member of more than one UNIT class.  It might belong to 3390, SYSDA, and TEST for example.  In that case it doesn't matter which UNIT name you specify for the purpose of finding (and deleting) the dataset.  The only point of putting UNIT into your JCL for an existing dataset is to help the system find it.

Note that it is possible to have multiple datasets with exactly the same name on different volumes and in different UNIT classes.  An easy example to visualize is having a disk dataset that you copy to a tape,  giving it the same DSN, and then later you copy it again to a different tape.  

Another, more perverse, example occurs when someone creates a new dataset they intend to keep, but mistakenly specifies DISP=(NEW,KEEP) rather than DISP=(NEW,CATLG).  Later they can't find the dataset, because no catalog entry was created. Rather than figure out what happened, they run the same job again.  If the system puts the second copy of the dataset onto a different disk volume, they now have two uncataloged copies of it.  If they keep doing that, at some point the system will select a volume that already has a copy of the dataset, and then the job will fail with an error message saying a duplicate name exists on the volume.  To clean up something like that, you need to find every uncataloged copy of the dataset  and delete it, specifying VOL and UNIT along with DISP=(OLD,DELETE) — you can use IEFBR14 for that.
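
A sketch of that cleanup, with a made-up dataset name and volume serials, and assuming the disks belong to the 3390 unit class (one step per volume to keep things simple):

//CLEANUP  JOB  1,SAMPLE,MSGCLASS=X,CLASS=A
//STEP1    EXEC PGM=IEFBR14
//ZAP1     DD   DSN=MY.UNCAT.DATA,DISP=(OLD,DELETE),
//    UNIT=3390,VOL=SER=WORK01
//STEP2    EXEC PGM=IEFBR14
//ZAP2     DD   DSN=MY.UNCAT.DATA,DISP=(OLD,DELETE),
//    UNIT=3390,VOL=SER=WORK02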

On many systems, they have it set up so that a disk housekeeping program runs every night (or on some other schedule), and deletes all uncataloged disk datasets.  So if you find yourself in possession of a set of identically named uncataloged disk datasets, and you don't want to look for all the volume names, you might get lucky if you wait for a possible overnight housekeeping utility to run automatically and delete them for you.

One other point on those uncataloged datasets.  You also have the option of creating a catalog entry for a dataset, rather than deleting it, if you want to keep it around.  To do that, you specify UNIT and VOL, as just discussed, and DISP=(OLD,CATLG) — but you can only have one cataloged copy of any dataset name.
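
A step like the following would do that, again with made-up names:

//BR14     EXEC PGM=IEFBR14
//FOUND    DD   DSN=MY.UNCAT.DATA,DISP=(OLD,CATLG),
//    UNIT=3390,VOL=SER=WORK01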

So, back to the point we touched on earlier, about how a program can create the equivalent of a DD statement internally.

It is also possible for part of the information to be specified on a DD statement in JCL, and part of the information to be specified in the program.  In that case the program needs to specify its DD modifications before the OPEN for the file.  The system then merges what the program specified with what the JCL specified, and if there’s a difference then the program takes precedence.  So the program can change what the JCL said, and the program wins any disagreements – but the program has to have its say BEFORE the OPEN for the file.

Note that IEFBR14 does not OPEN any files.  The system does the setup and cleanup involved in allocation processing, and only that. Various JCL specifications can be used to indicate processes that occur only at OPEN or CLOSE of a file.  Releasing unused disk space is an example of that.  If you code the RLSE subparameter in your SPACE parameter on a DD statement for IEFBR14,  that subparameter is ignored; no space is released.

This general discussion of DD parameters that can be specified within a program is pretty much irrelevant to IEFBR14, except to note that IEFBR14 does no file handling of any kind, so it will never override anything in your JCL (as some other program might).  So if you use IEFBR14 to set up a dataset, and the dataset does not come out the way you wanted, that is not due to anything IEFBR14 did. Because IEFBR14 does nothing.

If some item of information about a dataset is not specified in JCL, and the program does not specify it either, then if it is a dataset that already exists, the system looks to see if the missing item of information might be specified in some existing saved information relevant to the dataset, such as the Catalog entry or the Dataset Label.  Things like record length, record format, and allowable space allocation are generally kept in the Dataset Label for ordinary datasets.  For VSAM datasets most of the information is kept in the Catalog.  The system looks for the information, and if the information is found, it merges it together with the information obtained from the program and the JCL to form a composite picture of the dataset.

What if the system needs to know something about a dataset – record length, for example (LRECL), or record format (RECFM) or one of those other parameters – and after looking in all three places just named, the system has not found the information anyplace – what happens?  Default values apply.  Ha ha, because you don’t usually like the defaults, with the single glowing exception of BLKSIZE, where the default is always the best value possible.  The system can calculate BLKSIZE really well.  Other defaults – you don’t want to know.  So specify everything except BLKSIZE.

You don’t need to specify everything explicitly, though; you can use the LIKE parameter to specify a model.  Then the system will go look at the model dataset you’ve specified and copy all the attributes it can from there, rather than using the very lame system defaults.  So you specify whatever parameters you want to specify, and you also say LIKE=My.Favorite.Existing.Dataset to tell the system to copy everything it can from that dataset’s settings before applying the <<shudder>> system defaults.  Note: The system will not copy BLKSIZE from the model you specify in LIKE.  No, the system knows where its strengths and weaknesses lie.  It recalculates the BLKSIZE unless you explicitly specify some definite numeric value you want for that.

Also note that any particular system, such as the one you're using, can be set up with Data Classes and other system-specific definitions of things that affect defaults for new datasets.  Something could be set up, for example, stating that by default a new dataset with a certain pattern of DSN would have certain attributes.  If so, that would generally take precedence over any general IBM-supplied system defaults, but usually stuff you specify explicitly will override any site-specific definitions of attributes — unless, of course, the ones on your system were purposely set up designating they couldn't be overridden.

Okay, so then, does IEFBR14 do some magical internal specifications?  Nope.  IEFBR14 does absolutely nothing.  You do the work yourself by specifying everything you want on the DD statements.  IEFBR14 never opens the files, modifies nothing, does nothing.  You can code your own program equivalent to IEFBR14 by writing no executable program statements except RETURN. Or, for that matter, no statements at all, since most compilers will supply a missing RETURN at the end for you.  Yes, that is IEFBR14.  Laziest program there is: Lets the system do all the work.

You can use any ddnames you want when you run IEFBR14.  The files are never opened.  The system does the setup prior to running the program, including creating any new files you’ve requested.  The program runs, ignoring everything, doing nothing.  When it ends, the system does the cleanup, including deleting any datasets where you specified DELETE as the second subparameter of DISP.

So that’s how it works.  Often people think IEFBR14 does some magic, but it doesn’t.  It relies on the system to go through the normal setup and cleanup. 

You can add extra DD statements into any other program you might happen to be running, and the system will do the same setup and cleanup.  Of course you’d need to be sure the program you pick doesn’t happen to use the ddnames you pick – you wouldn’t want to use ddnames like SYSUT1 or SYSUT2  with most IBM-supplied programs, for example. 

Ddnames SALLY, JOE, and TOMMY should work just fine though.  The IEFBR14 program doesn’t look at them.  The system doesn’t know and doesn’t care whether the program uses the datasets. 

People use IEFBR14 for convenience because they know for sure that the program will not tamper with any of the file specifications. 

How did IEFBR14 get its name?  Well, the IEF prefix is a common prefix IBM uses for system-level programs it supplies.  BR14 is based on the Assembler Language instruction : BR 14
which means Branch (BR for branch) to the address contained in register 14 — the pointer that holds the return address.  So, Branch to the Return Address, that is, Return to Caller.

Correction 23 November:  Sorry, it's parsed BR 14 rather than B R14.  The notation BR in Assembler Language means Branch to the address contained in the specified Register, as distinct from B, which branches to an address specified some other way, for example as a label within the program.

That’s it, secrets of IEFBR14 revealed.

Or,  JCL Basic Concepts part II 

Packed, Zoned, Binary Math

Mainframe Math: Packed, Zoned, Binary Numbers —

Why are there different kinds of numbers?  And how are they different, exactly? Numeric formats on the mainframe (simplified) . . .

The mainframe can do two basic kinds of math: Decimal and Binary.  Hence the machine recognizes two basic numeric formats: Decimal and Binary.  It has separate machine instructions for each.  Adding two binary integers is a different machine operation from adding two decimal integers.  If you tell your computer program to add a binary number to a packed number, the compiler generates machine code that first converts at least one of the numbers, and then when it has two numbers of the same type it adds them for you.

There is also a displayable, printable type, called Zoned Decimal, which in its unsigned integer form is identical to character format.  You cannot do math with zoned decimal numbers.  If you code a statement in a high level language telling the computer to add two zoned decimal numbers, it might generate an error, but otherwise it works because the compiler generates machine instructions that will first convert the two numbers into another type, and then do the math with the converted copies. 

Within Decimal and Binary there are sub-types, such as double-precision and floating point.  These exist mainly to enable representation of, and do math with, very large numbers.  Within displayable numbers there are many possible variations of formatting.  Of course.  

For this article, we are going to skip all the variations except the three most common and most basic: Binary (also called Hexadecimal, or Hex), Decimal (called Packed Decimal, or just Packed), and Zoned Decimal (the displayable, printable representation, sometimes also called “Picture”).  To start with we’ll focus on integers. 

Generally, packed decimal integers used for mathematical operations can have up to 31 significant digits, but there are limitations: a multiplier or divisor is limited to 15 digits, and, when doing division, the sum of the lengths of the quotient and remainder cannot exceed 31 digits. For practical business purposes, these limits are generally adequate in most countries.

Binary (hex) numbers come in two basic sizes, 4 bytes (a full word, sometimes called a long integer), and 2 bytes (a half word, sometimes called a short integer). 

A signed binary integer in a 4 byte field can hold a value up to 2,147,483,647.  

The leftmost bit of the leftmost byte is the sign bit.  If the sign bit is zero that means the number is positive.  If the sign bit is one the number is negative.  This is why a full word signed binary integer is sometimes called a 31-bit integer.  Four bytes with 8 bits each should be 32 bits, right?  But no, the number part is only 31 bits, because one of the bits is used for the sign.

A 2-byte (half word) integer can hold a value up to 32,767 if it is defined as a signed integer, or 65,535 if unsigned.  
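
Those limits fall straight out of the bit counts: a signed halfword has 15 bits left for the magnitude, and two to the fifteenth power minus one is 32,767; unsigned, all 16 bits count, and two to the sixteenth minus one is 65,535; a signed fullword has 31 bits for the magnitude, and two to the thirty-first minus one is 2,147,483,647.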

The sign bit in a 2-byte binary integer is still the leftmost bit, but since there are only two bytes, the sign bit is the leftmost bit of the left-hand byte.  

Consider the case where you are not using integers, but rather you have an implied decimal point; for example, you’re dealing with dollars and cents.  Now you have two positions to the right of an implied (imaginary) decimal point.  With two digits used for cents, the maximum number of dollars you can represent is divided by a hundred: $32,767 is no longer possible in a half word; the new limit becomes $327.67, so half word binary math won’t be much use if you’re doing accounting.  $21,474,836.47, that is, twenty-one million and some, would be the limit for full word binary.  Such a choice might be considered to demonstrate pessimism or lack of foresight, or both.  You probably want to choose decimal representations for accounting programs, because decimal representation lets you use larger fields, and hence bigger numbers.

Half word integers are often used for things like loop counters and other smallish arithmetic, because the machine instructions that do half word arithmetic run pretty fast compared to other math.  For the most part binary arithmetic is easier and runs faster than decimal arithmetic.  Also, machine addresses (pointers) are full word binary (hex) integers, so any operation that calculates an offset from an address is quicker if the offset is also a binary integer.  Plus you can fit a bigger number into a smaller field using binary.  However, if you need to do calculations that use reasonably large numbers, for example hundreds of billions in accounting calculations, then you want to use decimal variables and do decimal math.

How are these different types of numbers represented internally – what do they look like?

An unsigned packed decimal number is composed entirely of the hex digits zero through nine.  Two such digits fit in one byte.  So a byte containing hex’12’ would represent unsigned packed decimal twelve.  Two bytes containing hex’9999’ would represent unsigned packed decimal nine thousand nine hundred and ninety-nine. 

How is binary different?  You don’t stop counting at nine.  You get to use A for ten, B for eleven, C for twelve, D for thirteen, E for fourteen, and F for fifteen.  It’s base sixteen math (rather than the base ten math that we grew up with).  So in base ten math, when you run out of digits at 9, and you expand leftward into a two-digit number, you write 10 and call it ten.  With base sixteen math, you don’t run out of available digits until you hit F for fifteen; so then when you expand leftward into a two-digit number, you write 10 and call it sixteen.  By the time you get to x’FF’ you’ve got the equivalent of 255 in your two-digit byte, rather than 99. 

Why, you may ask, did they do this?  In fact, why, having done it, did they stop at F?  Actually it’s pretty simple.  Remember that word binary – it really means you’re dealing in bits, and bits can only be zero (off) or one (on).  On IBM mainframe type machines, there happen to be 8 bits in a byte.  Each half byte, then – each of our digits – has 4 bits in it.  That’s just how the hardware is made.

Yes, a half byte is also named a nibble, but I've never heard even one person actually call it that.  People I've known in reality either say "half byte", or they say "one hex digit".  

So we can all see that 0000 should represent zero, and 0001 should be one.  Then what?  Well, 0010 means two; the highest digit you have is 1, and then you have to expand toward the left.  You have to "carry", just like in regular math, except you hit the maximum at 1 rather than 9.  This is base 2 math.

To get three you add one+two, giving you 0011 for three.  Hey, all we have at this point is bit switches, kind of like doing math by turning light switches off and on. (Base 2 math.)  10 isn't ten here, and 10 isn't fifteen; 10 here is two.  So, if 0011 is three, and you have to move leftward to count higher, that means 0100 is obviously four.  Eventually you add one (0001) and two (0010) to four (0100) and you get up to the giddy height of seven (0111).  Lesser machines might stop there, call the bit string 111 seven, and be satisfied with having achieved base 8 math.  Base 8 is called Octal, lesser machines did in fact use it, and personally I found it to be no fun at all.  The word excruciating comes to mind.  Anyway, with the IBM machine people were blessed with a fourth bit to expand into.  1000 became eight, 1001 was nine, 1010 (eight plus two) became A, and so on until all the bits were used up, culminating in 1111 being called F and meaning fifteen.  Base sixteen math, and we call it hexadecimal, affectionately known as hex.  It was pretty easy to see that hex was better than octal, and it was also pretty easy to see that we didn’t need to go any higher — base 16 is quite adequate for ordinary human minds.  So there it is.  It also explains why the word hex is so often used almost interchangeably with the word binary.

And Zoned Decimal?  A byte containing the digit 1 in zoned decimal is represented by hex ‘F1’, which is exactly the same as what it would be as part of a text string (“Our number 1 choice.”)  Think of a printable byte as one character position.  The digit 9, when represented as a printable (zoned) decimal number, is hex’F9’.  A number like 123456, if it is unsigned, is hex’F1F2F3F4F5F6’.  (If it has a sign, the sign might be separate, but if the sign is part of the string then the F in the last byte might be some other code to represent the sign.  Conveniently, F is one of the signs for plus, and it also means unsigned.)  And remember, as noted earlier, you cannot do math directly with zoned decimal numbers; the compiler has to convert them into another type first.

You may be thinking that the left half of each byte is wasted in zoned decimal format.  Well, not wasted exactly: Any printable character will use up one byte; a half byte containing F is no bigger than the same half byte containing zero.  Still, if you are not actually printing the digit at the moment, could you save half the memory by eliminating the F on the left?  Pretty much. 

You scrunch the number together, squeezing out the F half-bytes, and you have unsigned packed decimal.  You just need to add a representation of the plus or minus sign to get standard (signed) packed decimal format. The standard is to use the last half byte at the end for the sign, the farthest right position.  This is why decimal numbers are usually set up to allow for an odd number of digits – because memory is allocated in units of bytes, there are two digits in a byte, and the last byte has to contain the sign as the last digit position.

How is the packed decimal sign represented in hex?  The last position is usually a C for plus or a D for minus.  F also means plus, but usually carries the nuance of meaning that the number is defined as unsigned.  Naturally there are some offbeat representations where plus can be F, A, C, or E, like the open spaces on a Treble clef in music, and minus can be either B or D (the two remaining hex letters after F,A,C,E are taken) – hence giving meaning to all the non-decimal digits.  Mostly, when produced by ordinary processes, it’s C for plus, F for unsigned and hence also plus, or D for minus. 

So if you have the digit 1 in zoned decimal, as hex’F1’, then after it is fully converted to signed packed decimal the packed decimal byte will be hex’1C’.  Zoned decimal Nine (hex ‘F9’) would convert to packed decimal hex ‘9C’, and zero (hex ‘F0’) becomes hex’0C’.  Minus nine becomes hex ‘9D’, and yes, you can have minus zero as hex ‘0D’. 

The mathematical meaning of minus zero is arguable, but some compilers allow it, and in fact the IEEE standard for floating point requires it.  Some machine instructions can also produce it as a result of math involving a negative number and/or overflow.  You care about negative zero mainly because in some operations (which you might easily never encounter), hex’0D’, the minus zero, might give different results from ordinary zero.  A minus zero normally compares equal to ordinary plus zero when doing ordinary decimal comparisons.  Okay, moving on … Zoned decimal 123, hex ‘F1F2F3’, when converted to signed packed decimal will become hex’123C’, and Zoned decimal 4567, hex ‘F4F5F6F7’, when converted to signed packed decimal will become hex’04567C’, with a leading zero added because you need an even number of digits; half bytes have to be paired up so they fill out entire bytes.

Wait, you say, how did the plus or minus look in zoned decimal? 

The answer is that there are various formats.

It is possible for the rightmost Zoned Decimal digit to contain the sign in place of that digit’s lefthand “F” (its “zone”), and that is the format generated when a Zoned Decimal number is produced by the UNPACK machine instruction.  

The most popular format, for users of high level languages, seems to be when the sign is kept separate and placed at the beginning of the number (the farthest left position). COBOL calls this “SIGN IS LEADING SEPARATE”.  

However, many print formats are possible, and you can delve into this topic further by looking at IBM’s Language Reference Manual for whatever language you’re using.  Zoned decimal is essentially a print (or display) format.  High Level Computer Languages facilitate many elaborate editing niceties such as leading blanks or zeroes, insertion of commas and decimal points, currency symbols, and stuff that may never even have occurred to you (or me).

In COBOL, a packed decimal variable is traditionally defined with USAGE IS COMPUTATIONAL-3, or COMP-3, but it can also be called PACKED-DECIMAL.  A zoned decimal variable is defined with a PICTURE format having USAGE IS DISPLAY.  A binary variable is just called COMP, but it can also be called COMP-4 or BINARY.

In C, a variable that will contain a packed decimal number is just called decimal.  If you are going to use decimal numbers in your C program, the header <decimal.h> should be #included.  A variable that will hold a four byte binary number is called int, or long.  A two byte binary integer is called short.  Typically a number is converted into printable zoned decimal by using some function like sprintf with the %d formatting code.  Input zoned decimal can be treated as character.

PL/I refers to packed decimal numbers as FIXED DECIMAL.  Zoned decimal numbers are defined as having PICTURE values containing nines, e.g. P’99999’ in a simple case.  Binary numbers are called FIXED BINARY, or FIXED BIN, with a four byte binary number being FIXED BINARY(31) and a two byte binary number being called FIXED BINARY(15).

What if you want to use non-integers, that is, you want decimal positions to the right of the decimal?  Dollars and cents, for example?

In most high level languages, you define the number of decimal positions you want when you declare the variable, and for binary numbers and packed decimal numbers, that number of decimal positions is considered to be implied; it just remembers where to put the decimal for you, but the decimal position is not visible when you look at the memory location in a dump or similar display.  For zoned decimal numbers, you can declare the variable (or the print format) in a way that both the implied decimal and a visible decimal occur in the same position.  For example, if (in your chosen language) 999V99 creates an implied decimal position wherever the V is, then you would define an equivalent displayable decimal point as 999V.99, in effect telling the compiler that you want a decimal point to be printed at the same location as the implied decimal.  As previously noted, the limits on the numbers of digits that can be represented or manipulated apply to all the digits in use on both sides of the implied decimal point.

You may have noticed that abends are a bit more common when using packed decimal arithmetic, as compared with binary math.  There are two common ways that decimal arithmetic abends where binary would not.  One occurs when fields are not initialized.  If an uninitialized field contains hex zeroes, and it is defined as binary, that’s valid and some might say lucky.  If the same field of hex zeroes is defined as signed packed decimal, mathematical operations will fail because of the missing sign in the last half byte.  This is a common cause of 0C7 (data exception) abends, and it often shows up when a formatted dump is being produced (for example, a PL/I program containing an “ON ERROR” unit with a “PUT DATA;” statement, where PUT DATA tries to format the uninitialized decimal fields).  When the uninitialized fields contain hex zeroes, it might seem that the person using binary variables is lucky, but sometimes uninitialized fields contain leftover data from something else, essentially random trash that happens to be in memory.  In that case decimal instructions usually still abend, and binary mathematical operations do not – they just come up with wrong results, because absolutely any hex value is a valid binary number.  The abend doesn’t look like such bad luck in that situation.

The other common cause for the same problem, besides uninitialized fields, is similar insofar as it means picking up unintended data.  When something goes wrong in a program – maybe a memory overlay, maybe a bad address pointer – an instruction may try to execute using the wrong data.  Again, there is a good chance that decimal arithmetic will fail in such a situation, perhaps because of the absence of the sign, or perhaps because the data contains values other than the digits zero through nine plus the sign.  Binary arithmetic may carry on happily producing wrong answers based on the bogus data values.  Even if you recognize that the output is wrong, it can be difficult to track back to the cause of the problem.  With an immediate 0C7 or other decimal arithmetic abend, you have a better chance of finding the underlying problem with less difficulty.
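Here is a small REXX illustration of the "lucky" hex zeroes point.  REXX itself never takes an 0C7, so this only shows the bit patterns involved; the comments about machine instructions restate the same facts described above.

    /* REXX - why hex zeroes are valid binary but invalid packed decimal   */
    field = '00000000'x                     /* four bytes of binary zeroes */
    say 'As a binary fullword this is' c2d(field)   /* 0 - perfectly valid */
    say 'As packed decimal, the last half byte should be a sign:'
    say 'x''0000000C'' would be a valid +0, but x''00000000'' has a digit'
    say 'where the sign belongs, so decimal instructions such as AP, ZAP'
    say 'or CP raise a data exception, which surfaces as an 0C7 abend.'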

So there you have it.  Basic mainframe computer math, simplified.  Sort of.

________________________________________________________________________

References

z/Architecture Principles of Operation (PDF) SA22-7832

at this url:
http://publibfi.boulder.ibm.com/epubs/pdf/dz9zr010.pdf

In the SA22-7832-10 version, on the first page of Chapter 8. Decimal Instructions, there is a section called “Decimal-Number Formats”, containing subsections for zoned and packed-decimal. 

On the fourth page of Chapter 7. General Instructions there is a section called “Binary-Integer Representation”, followed by sections about binary arithmetic.

Principles of Operation is the definitive source material, the final authority.

Further reading

F1 for Mainframe has a good, very short article called “SORT – CONVERT PD to ZD and BI to ZD”, in which the author shows SORT control cards you can use to convert data from one numeric format to another (without writing a program to do it).   At this url:   https://mainframesf1.com/2012/03/27/sort-convert-pd-to-zd-and-bi-to-zd/

 

 

 

Program Search Order

z/OS MVS Program Search Order

When you want to run a program or a TSO command, where does the system find it?  If there are multiple copies, how does it decide which one to use?  That is today’s topic.

The system searches for the item you require; it has a list of places to search, and it gives you the first copy it finds.  The order in which it searches the list is called, unsurprisingly, the search order.

The search order is different depending on where you are when the request is made (where your task is running within the computer system, not where you're sitting geographically).   It also depends on what you request (program, JCL PROC, CLIST, REXX, etc). 

Types of searches: Normally we think of the basic search for an executable program. The search for TSO commands is almost identical to the search for programs executed in batch, with some extra bells and whistles.  There is different handling for a PROC request in JCL (since the search needs to cover JCL libraries, not program libraries).  The search is different when you request an exec in TSO by prefixing the name of the exec with a percent sign (%) — the percent sign signals the system to bypass searching for programs and go directly to searching for CLIST or REXX execs. Transaction systems such as CICS and IMS are again different: They use look-up tables set up for your CICS or IMS configuration.

Right now we’re only going to cover batch programs and TSO commands.

Overview of basic search order for executable programs:

  1. Command tables are consulted for special directions
  2. Programs already in storage within your area
  3. TASKLIB   (ISPLUSR,  LIBDEF,  ISPLLIB, TSOLIB) 
  4. STEPLIB if it exists
  5. JOBLIB only if there is no STEPLIB
  6. LPA (Link Pack Area)
  7. LINKLIST (System Libraries)
  8. CLIST and REXX only if running TSO

Let’s pretend you don’t believe that bit about STEPLIB and JOBLIB, that the system searches one or the other but not both.  Look here for verification:  Numbered item 3 in Section “Search order the system uses for programs” in the IBM publication “z/OS MVS Initialization and Tuning Guide” at this link: https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.ieae100/ieae10018.htm

Batch Programs vs TSO commands

As noted, these two things are almost exactly the same.

You can type the name of a program when in READY mode in TSO, or in ISPF option 6, which mimics READY mode.  (Using ISPF 6 is similar to opening a DOS window on the PC.)  Under ISPF you can also enter the name of a program directly on any ISPF command line if you preface the program name with the word TSO followed by a space. 

So you can type the word IEBGENER when in READY mode, or you can put in “TSO  IEBGENER” on the ISPF command line, and the system will fetch the same IEBGENER utility program as it does when you say
“//  EXEC  PGM=IEBGENER” in batch JCL. 

There is a catch to this: the PARM you use in JCL is not mirrored when you invoke a program under TSO this way.  If you enter any parameters on the command line in TSO, they are passed to the program in a different format than a PARM.   When you type the name of a program to invoke it under TSO, what usually happens is that the program is started, but instead of getting a pointer it can use to find a PARM string, the program receives a pointer that can be used to find the TSO line command that was entered.  The two formats are similar enough so that a program expecting to get a PARM will be confused by the CPPL pointer (CPPL = Command Processor Parameter List).  Typically such a program will issue a message saying that the PARM parameters are invalid.
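To make the difference concrete, here is a hedged REXX sketch of the two invocation styles.  IEFBR14 is used only because it safely ignores whatever it is handed; the parameter string is made up, and ADDRESS LINKMVS is just one convenient way to pass an MVS-style parameter from REXX:

    /* REXX - sketch: two ways to invoke a program from TSO                */
    parms = 'ANY,OLD,PARM,STRING'
    address LINKMVS 'IEFBR14 parms'  /* passes an MVS-style PARM, much     */
                                     /* like PARM= on a batch EXEC card    */
    say 'LINKMVS gave return code' rc
    address TSO 'IEFBR14' parms      /* runs it as a TSO line command;     */
                                     /* the program sees a CPPL instead    */
    say 'The TSO command form gave return code' rc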

So, let's look at how the system searches for the program.

Optional ISPF command tables (TSO/ISPF only) 

We mention command tables first because the command tables are the first thing the system checks.  Fortunately the tables are optional.  Unfortunately they can cause problems.  Fortunately that is rare.  So it’s fine for you to skip ahead to the next section; you won’t miss much.  Just be aware that the command tables exist, so you can remember it if you ever need to diagnose a quirky problem in this area.

Under TSO/ISPF, ISPF-specific command tables can be used.  These can alter where the system searches for commands named in such a table.  This is an area where IBM makes changes from time to time, so if you need to set up these tables, consult the IBM documentation for your exact release level. 

There is a basic general ISPF TSO command table called ISPTCM that does not vary depending on the ISPF application.

Also, search order can vary within ISPF depending on the ISPF application. 

For example, when you go into a product like File-AID, the search order might be changed to use the command tables specific to that application. 

So there are also command tables specific to the ISPF application that is running.  In addition to a general Application Command Table, there can be up to 3 site command tables and up to 3 user command tables, plus a system command table.  Within these, the search order can vary depending on what is specified at your site during setup. 

If you are having a problem related to command tables, or if you need to set one up, consult the IBM references.   For z/OS v2r2, see "Customizing the ISPF TSO command table (ISPTCM)" and "Customizing command tables" in the IBM manual "ISPF Planning and Customizing".

ISPF command table(s) can influence other things besides search order, including:

(a) A table can specify that a particular program is to run in APF-authorized mode, allowing the program to do privileged processes (APF is an entirely separate topic and not covered herein).

(b) A table can specify that a particular program must be obtained from LPA only (bypassing the normal search order).  (We’ll introduce LPA further down.)

(c) A table can change characteristics of how a command is executed.  For example the table can specify that any given program must always be invoked as a PARM-processing ordinary program, or the table can specify instead that the program must always be invoked as a TSO command that accepts parameters entered on the command line along with the command name, e.g. LISTDS ‘some.dataset.name’

Programs are not required to be listed in any of these command tables.  If a program is not listed in any special table, then default characteristics are applied to the program, and the module itself is located via the normal search order, which is basically the same as for executable programs in batch. 

Most notably, a command table can cause problems if an old version exists but that fact is not known to the current local Systems people (z/OS System Admins are not called Admins as they might be on lesser computer systems; on the mainframe they're called "System Programmers" generally, or some more specialized title).  Such a table might prevent or warp the execution of a program because the characteristics of a named program have changed from one release to another.

If a command is defined in a command table, but the program that the system finds does not conform to the demands imposed by the settings in the table, that situation can be the cause of quirky problems, including the “COMMAND NOT FOUND” message.  Yes, that might perhaps be IBM’s idea of a practical joke, but it happens like this: If the table indicates that a program has to be found in some special place, but the system finds the program in the wrong place, then in some cases you get the COMMAND NOT FOUND message.  I know, you’re cracking up laughing.  That’s the effect it seems to have on most people when they run into it.  Not everyone, of course.

Other subsystems such as IMS and CICS also use command tables, and these are not covered by the current discussion.  Consult IMS and CICS documentation for specifics of those subsystems. 

Programs already loaded into your memory

Before STEPLIB or anything like that, the system searches amongst the programs that have already been loaded into your job’s memory.  Your TSO session counts as a job.  

There can be multiple tasks running within your job – think split screens under TSO/ISPF.  In that case the system first searches the program modules that belong to the task where the request was issued, and after that it searches those that belong to the job itself.  Relevant jargon: a task has a Load List, which has Load List Elements (LLE); the job itself has a Job Pack Area (JPA).

If a copy of the module is found in your job’s memory, in either the task's LLE or the job's JPA, that copy of the module will be used if it is available.  

If the module is marked as reentrant – if the program has a flag set to promise that it does not modify itself at all – then any number of tasks can use the same copy simultaneously, so you always get it.  If the module is not marked as reentrant, but it is marked as serially reusable, that means the program can modify itself while it's running as long as, at the end, it puts everything back the way it was originally; in that case a second task can use an existing copy of the module after the previous task finishes.  If neither of those flags is set, then the system has to load a fresh copy of the load module every time it is to be run.

If the module is not reentrant and it is already in use, as often happens in multi-user transaction processing subsystems like IMS and CICS, the system might make you wait until the previous caller finishes, or else it might load an additional copy of the module.  In CICS and IMS this depends on CICS/IMS configuration parameters.  In ordinary jobs and TSO sessions, the system normally just loads a new copy.

Note that if a usable copy of a module is found already loaded in memory, either in the task's LLE or in the job's JPA, that copy will be used EVEN if your program has specified a DCB on the LOAD macro to tell the system to load the module from some other library (unless the copy in memory is "not reusable", and then a new copy will be obtained).  Hunh?  Yeah, suppose your program, while it is running, asks to load another program, specifying that the other program is to be loaded from some specific Library.  Quelle surprise, if there is already a copy of the module sitting in memory in your area, then the LOAD does not go look in that other library.

For further nuances on this, if it is a topic of concern for you, see the IBM write-up under the topic "The Search for the Load Module" in the z/OS MVS Assembler Services Guide.

TASKLIB 

Most jobs do not use TASKLIB.  TSO does.  

Essentially TASKLIB works like a stand-in for STEPLIB.  A job can’t just reallocate STEPLIB once the job is already running.  For batch jobs the situation doesn’t come up much.  Under TSO, people often find cases where they’d like to be able to reallocate STEPLIB.  Enter TASKLIB.  STEPLIB stays where it is, but TASKLIB can sneak in ahead of it.

Under TSO, ISPF uses the ddname ISPLLIB as its main TASKLIB.

The ddname ISPLUSR exists so that individual users – you yourself for example – can use their own private load libraries, and tell ISPF to adopt those libraries as part of the TASKLIB, in fact at the top of the list.  When ISPF starts, it checks to see if the ddname ISPLUSR is allocated.  If it is, then ISPF assigns TASKLIB with ISPLUSR first, followed by ISPLLIB.  As long as you allocate ISPLUSR before you start ISPF, then ISPLUSR will be searched before ISPLLIB.  

In fact that USR suffix was invented just for such situations.  It’s a tiny digression, but ISPF allows you to assign ISPPUSR to pre-empt ISPPLIB for ISPF screens (Panels), and so on for other ISP*LIB ddnames; they can be pre-empted by assigning your own libraries to their ISP*USR counterparts. 

Up to 15 datasets can be concatenated on ISPLUSR.  If you allocate more, only the first 15 will be searched.

If you allocate ISPLUSR, you have to do it before you start ISPF, and it applies for the duration of the ISPF session.

Not so with LIBDEF.  The LIBDEF command can be used while ISPF is active.  Datasets you LIBDEF onto ISPLLIB are searched before other ISPLLIB datasets, but after ISPLUSR datasets.
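To make that concrete, here is a hedged sketch (both dataset names are invented).  Allocating ISPLUSR is an ordinary TSO ALLOCATE done before ISPF comes up; LIBDEF is an ISPF service you can drive from a REXX exec that is already running under ISPF:

    /* REXX - sketch only; both dataset names are made up                  */
    /* Before starting ISPF, e.g. from READY or a logon exec:              */
    address TSO "ALLOC FI(ISPLUSR) DA('MY.PRIVATE.LOAD') SHR REUSE"
    /* Later, from an exec already running under ISPF:                     */
    address ISPEXEC "LIBDEF ISPLLIB DATASET ID('MY.OTHER.LOAD') STACK"
    /* ... run whatever needed that library ... then remove the LIBDEF:    */
    address ISPEXEC "LIBDEF ISPLLIB"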

The TSOLIB command can also be used to add datasets onto TASKLIB.

The TSOLIB command, if used, must be invoked from READY mode prior to going into ISPF.   
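As a sketch (the dataset name is invented, and check the TSO/E Command Reference for the exact operands), the sequence looks something like this, whether you type the commands at READY or run them from a small exec before starting ISPF.  If your system objects to TSOLIB being issued from inside an exec, just type the same commands directly at the READY prompt.

    /* REXX - sketch only; run from READY, before ISPF is started          */
    address TSO
    "TSOLIB ACTIVATE DSNAME('MY.TSO.LOAD')"
    "TSOLIB DISPLAY"                 /* shows what TSOLIB currently holds  */
    /* ... later, when you no longer want it:                              */
    "TSOLIB DEACTIVATE"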

Why yet another way of putting datasets onto TASKLIB?  The other three methods just discussed are specific to ISPF. The TSOLIB command is applicable to READY mode also. It can be used even when ISPF is not active.  For programs that run under TSO but not under ISPF, the only TASKLIB datasets used are those activated by TSOLIB.  Within ISPF, any dataset you add to TASKLIB by using "TSOLIB ACTIVATE" will come last within TASKLIB, after ISPLLIB.  

Also, TASKLIB load libraries activated by the TSOLIB command are available to non-ISPF modules called by ISPF-enabled programs.  For example, if a program running under ISPF calls something like IEBCOPY, and then IEBCOPY itself issues a LOAD to get some other module it wants to call, do not expect the system to look in ISPLLIB for the module that IEBCOPY is trying to load.  It should check TSOLIB, though.  However, some programs bypass TASKLIB search altogether.

Under ISPF, this is the search order within TASKLIB:

    ISPLUSR
    LIBDEF
    ISPLLIB
    TSOLIB

For non-ISPF TSO, only TSOLIB is used for TASKLIB.

To find out your LIBDEF allocations, enter ISPLIBD on the ISPF command line. (Not TSO ISPLIBD, just plain ISPLIBD)

TASKLIB with the CALL command in TSO

When you use CALL to run a program under TSO, the library name on the CALL command becomes a TASKLIB during the time the called program is running.  CALL is often used within CLIST and REXX execs, even though you may not use CALL much yourself directly.

So if you say CALL 'SYS1.LINKLIB(IEBGENER)' from ISPF option 6 or from a CLIST or REXX, then 'SYS1.LINKLIB' will be used to satisfy other LOAD requests that might be issued by IEBGENER or its subtasks while the called IEBGENER is running.  Ah, yes, the system is full of nuances and special cases like this one; I'm just giving you the highlights.  This entire article is somewhat of a simplification.  What joys, you might be thinking, must await you in the world of mainframes.
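If you want to try the same thing from a REXX exec, a hedged sketch looks like this; the input and output dataset names are invented, and the allocations are only there because IEBGENER insists on its usual DD names:

    /* REXX - sketch: CALL with a library name acting as a TASKLIB          */
    address TSO
    "ALLOC FI(SYSPRINT) DA(*) REUSE"            /* messages to the terminal */
    "ALLOC FI(SYSIN) DUMMY REUSE"               /* no control statements    */
    "ALLOC FI(SYSUT1) DA('MY.INPUT.DATA') SHR REUSE"    /* made-up DSN      */
    "ALLOC FI(SYSUT2) DA('MY.OUTPUT.DATA') OLD REUSE"   /* made-up DSN      */
    "CALL 'SYS1.LINKLIB(IEBGENER)'"  /* SYS1.LINKLIB serves as a TASKLIB    */
    say 'IEBGENER ended with return code' rc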

STEPLIB or JOBLIB

If STEPLIB and JOBLIB are both in the JCL, the STEPLIB is used and the JOBLIB is ignored.

The system does NOT search both.

LPA (Link Pack Area)

You know that there are default system load libraries that are used when you don’t have STEPLIB or JOBLIB, or when your STEPLIB or JOBLIB does not contain the program to be executed.  

The original list of the names of those system load libraries is called LINKLIST.  The first original system load library was called SYS1.LINKLIB, and when they wanted to have more than one system library they came up with the name LINKLIST to designate the list of library names that were to be treated as logical extensions of LINKLIB.

The drawback with LINKLIST, as with STEPLIB and JOBLIB, is that when you request a program from them, the system actually has to go and read the program into memory from disk.  That’s overhead.  So they invented LPA (which stands for Link Pack Area — go figure).

Some programs are used so often by so many jobs that it makes sense to keep them loaded into memory permanently.  Well, virtual memory.  As long as such a program is not self-modifying, multiple jobs can use the same copy at the same time:  Additional savings.  So an area of memory is reserved for LPA.  Modules from SYS1.LPALIB (and its concatenation partners) are loaded into LPA.  It is pageable, so the pages of heavily used modules tend to stay in real memory while the pages of less used modules get paged out.

Sounds good, but more tweaks came.  Some places consider some programs so important and so response-time-sensitive that they want those programs to be kept in memory all the time, even if they haven’t been used for a few minutes.  And so on, until we now have several subsets of LPA.

Within LPA, the following search order applies:

Dynamic LPA, from the list in ‘SYS1.PARMLIB(PROGxx)’

Fixed LPA (FLPA), from the list in ‘SYS1.PARMLIB(IEAFIXxx)’

Modified LPA (MLPA), from the list in ‘SYS1.PARMLIB(IEALPAxx)’

Pageable LPA (PLPA), from the list in (LPALSTxx) and/or (PROGxx)

LINKLIST

LINKLIST libraries are specified using SYS1.PARMLIB(PROGxx) and/or (LNKLSTxx).

SYS1.LINKLIB is included in the list of System Libraries even if it is not named explicitly. 

An overview of LINKLIB was just given in the introduction to LPA, so you know this already, or at least you can refer back to it above if you skipped ahead.

Note that LINKLIST libraries are controlled by LLA; whenever any LINKLIST module is updated, the directory information that LLA keeps in memory (its BLDL list) needs to be rebuilt by an LLA refresh (typically the operator command F LLA,REFRESH).  If LLA is not refreshed, the old version of the module will continue to be given to anyone requesting that module.

What’s LLA, you might ask.  For any library under LLA control, the system reads the directory of each library, and keeps the directory permanently in memory.  A library directory contains the disk address of every library member.  Hence keeping the directory in memory considerably speeds up the process of finding any member.  It speeds that up more than one might think, because PDS directory blocks have a very small block size, 256 bytes, and these days the directories of production load libraries can contain a lot of members, two facts which taken together mean that reading an entire PDS directory from disk can require many READ operations and hence be time-consuming.  If you repeat that delay for almost every program search in every job, you have a drag on the system.  So LLA gives a meaningful performance improvement for reading members from PDS-type libraries.  For PDSE libraries too, but for different reasons; PDSE libraries do not have directory blocks like ordinary PDS libraries.  Anyway, the price you pay for the improvement is that the in-memory copies of the directories have to be rebuilt whenever a directory is updated, that is, whenever a library member is replaced.

What does LLA stand for?  Originally it stood for LINKLIST LookAside, but when the concept was extended to cover other libraries besides LINKLIST the name was changed to Library LookAside.

Under TSO, CLIST and REXX execs

Under TSO, if a module is not found in any of the above places, there is a final search for a CLIST or REXX exec matching the requested name. 

CLIST and REXX execs are obtained by searching ddnames SYSUEXEC, SYSUPROC, SYSEXEC, and SYSPROC (in that order, if the order has not been deliberately changed).  Note that SYSUEXEC and SYSEXEC are for REXX members only, whereas SYSPROC and SYSUPROC can contain both REXX and CLIST members, with the proviso that a REXX member residing in the SYSPROC family of libraries must contain the word REXX on the first line, typically as a comment /* REXX */
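For instance, a trivial REXX member (this one is invented, and named HELLO here) only needs that comment on line one to be recognized in a SYSPROC-style library:

    /* REXX */
    /* HELLO - a trivial exec.  The comment containing the word REXX on    */
    /* the first line is what lets TSO treat a member found in SYSPROC or  */
    /* SYSUPROC as a REXX exec rather than a CLIST.                        */
    parse arg name .
    if name = '' then name = 'world'
    say 'Hello,' name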

These four ddnames can be changed, and other ddnames can be added, by use of the ALTLIB command.  Also the order of search within these ddnames can be altered with the ALTLIB command (among other ways).
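As a hedged example (the dataset name is invented), activating an extra user-level exec library and checking the result looks something like this.  IBM documents some scoping considerations for ALTLIB issued from inside an exec, so consult the TSO/E Command Reference if an activation seems to vanish:

    /* REXX - sketch of ALTLIB; 'MY.USER.EXEC' is a made-up dataset name   */
    address TSO
    "ALTLIB ACTIVATE APPLICATION(EXEC) DATASET('MY.USER.EXEC')"
    "ALTLIB DISPLAY"                  /* shows the search order in effect  */
    /* ... use the execs ... then put things back:                         */
    "ALTLIB DEACTIVATE APPLICATION(EXEC)"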

SYSPROC was the original CLIST ddname.  A CLIST library can also include REXX execs.  The SYSEXEC ddname was added to be used just for REXX.  In the spirit of ISPLUSR et al, a USER version of each was added, called SYSUPROC and SYSUEXEC.

The default REXX library ddname SYSEXEC can be changed to something other than SYSEXEC by MVS system installation parameters.

Prefixing the TSO command name with the percent sign (%) causes the system to skip directly to the CLIST and REXX part of the search, rather than looking every other place first. 

To find the actual search order in effect within the CLIST and EXEC ddnames for your TSO session at any given time, use the command TSO ALTLIB DISPLAY. 

That's it for program search order.  It's a simplification of course. ;-)

 

References, Further reading

z/OS Basic Skills, Mainframe concepts, z/OS system installation and maintenance
Search order for programs
http://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zsysprog/zsysprogc_searchorder.htm

z/OS TSO/E REXX Reference, SA32-0972-00 
Using SYSPROC and SYSEXEC for REXX execs
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.ikja300/dup0003.htm

z/OS V2R2 ISPF Services Guide 
Application data element search order
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.2.0/com.ibm.zos.v2r2.f54sg00/aso.htm

z/OS ISPF Services Guide, SC19-3626-00 
LIBDEF—allocate application libraries
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.f54sg00/libdef.htm

Numbered item 3 in Section “Search order the system uses for programs” in the IBM publication “z/OS MVS Initialization and Tuning Guide” at this link: https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.ieae100/ieae10018.htm

"The Search for the Load Module" in the z/OS MVS Assembler Services Guide

"Customizing the ISPF TSO command table (ISPTCM)" and “Customizing command tables In the IBM manual "ISPF Planning and Customizing"

 

Size Matters: What TSO Region Size REALLY Means

What does specifying an 8 Meg (8192K) TSO Logon Region Size mean?

It does Not mean you get an 8 Meg region size.   Nice guess, and a funny idea, but no.  For that matter, REGION=8M in JCL doesn't get you 8 Meg there, either.  Hasn't done so for decades (despite what you may have heard or read elsewhere.   If you have trouble believing this, feel free to skip down a few paragraphs, where you can find a link to some IBM doc on the matter.)

No, your region size defaults to at least 32 Meg, regardless of what you specify.

There is always the threat that the people who set up your own particular z/OS system have changed the IBM defaults, or (who knows) have even vindictively limited your own personal userid.  For this discussion, though, we're assuming you're using a more or less intact z/OS system with the IBM defaults in effect.

So, you ask, What happens when you specify Size 8192 at Logon?

Size    ===> 8192  

When specifying Size=8192, you will be allowed to use up to 32 Meg of 31-bit addressable memory. This is the ordinary kind of memory that most programs use. You will get this for any value you enter for Size until you get up to asking for something greater than 32 Meg. Above 32 Meg, the Size will be interpreted differently.

If you enter a number greater than 32 Meg (that is, greater than Size ===> 32768), it will be interpreted in the way you would expect – as the region size for ordinary 31-bit memory.

You have to specify a  number bigger than 32768 to increase your actual region size.

Just wanted to make sure you saw that.  It's not a subtitle.

Notice by the way that region size is not a chunk of memory that is automatically allocated for you, it's just a limit.  It means your programs can keep asking for memory until they get to the limit available, and above that they'll be refused.

So, what does the 8192 mean, then?  When specifying Size=8192, you will be allowed to use up to 8 Meg of 24-bit addressable memory.   (In this context, 8192 means 8192K, and 8192K = 8M.)

This is why trying to specify a number slightly bigger than 8 Meg,  say 9 or 10 Meg, is likely to fail your logon with a "not available" message.   The request for 9 or 10 Meg is interpreted as a request for that amount of 24-bit memory, and most systems don't have that amount of 24-bit memory available, even though there is loads of 31-bit memory.  So asking for 9 Meg might fail with a "not available" message, but if you skip over all those small numbers and specify a number >32Meg then the system can probably give you that, and your logon would then work. 

How did this strange situation arise, and what is the difference anyway?

Here starts the explanation of the different types of addresses

Feel free to skip ahead if you don't care about the math, you just want the practical points.

24-bit addresses are smaller than 31-bit addresses.  Each address — Let's say each saved pointer to memory within the 24-bit-addressable range — requires only 3 bytes (instead of the usual 4 bytes).

24-bit memory addresses are any addresses lower than 16 Meg.

There is an imaginary line at 16 Meg.  24-bit addresses are called "Below the Line" and 31-bit addresses are called "Above the Line".

More-technical-details-than-usual digression here.  Addresses start at address zero.  The 3-byte range goes up to hex’FFFFFF’ (each byte is represented as two hex digits.  Yes, F is a digit in the hex way of looking at things.  The digits in hex are counted 0123456789ABCDEF).  There are 8 bits in a byte, so 3 bytes is 24 bits.  Hence, 3-byte addresses, 24-bit addressing.   Before you notice that 4 times 8 is actually 32, not 31, you may as well know that the leftmost bit in the leftmost byte of a 4-byte address is reserved, and not considered part of the address.  Hence, 31-bit addressing.
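If you like seeing the arithmetic, a couple of lines of REXX confirm where the line and the bar come from (numeric digits is raised because the REXX default of 9 significant digits is not enough to print 2**31 exactly):

    /* REXX - just arithmetic: why 24 bits means 16 Meg and 31 bits 2 Gig  */
    numeric digits 12
    say 2**24             /* 16777216   = 16 Meg, the "line"               */
    say 2**31             /* 2147483648 = 2 Gig,  the "bar"                */
    say x2d('FFFFFF')     /* 16777215, the highest 24-bit address          */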

Decades ago the 24-bit scheme was the standard type of memory, and some old programs, including some parts of the operating system, still need to use the smaller addresses.   Why? Because there were a lot of structures that people set up where they only allowed a 3-byte field for each pointer.  When the operating system was changed to use 4-byte addresses, some of the existing tables were not easy to change — mainly because the tables were used by so many programs that also would have needed to be changed, and, crucially, not all of those programs belonged to IBM.  Customers had programs with the same dependency.  Lots of customers.  So even today a program can run in “24-bit addressing mode” and still use the old style when it needs to do that.

Most programs run in “31-bit addressing mode”, so they are dependent on the amount of 31-bit memory available.  These days, another upgrade is in progress: it allows the use of 64-bit addresses.  The current standard is still 31-bit addressing, and it will be that way for a good while yet.  However, 64-bit addressing is used extensively by programs that need to have a lot of data in memory at the same time, such as editor programs that allow you to edit ginormous data sets.

When specifying Size=8192, you will also be allowed to use up to 2 Gig of 64-bit memory, as long as your system is at z/OS 1.10 or any higher level.  (That 2 Gig is just the system default limit, not an addressing limit; 64-bit addresses can reach far beyond 2 Gig.)  Prior to z/OS 1.10, the default limit for 64-bit memory was zero.  In JCL you can change this limit with the MEMLIMIT parameter, but there is no way for you to specify an amount for 64-bit memory on the TSO Logon screen.

There is an imaginary bar at 2 Gig, since the word "line" had already been used for the imaginary line at 16 Meg.  Addresses above 2 Gig, that is, 64-bit addresses, are called "Above the Bar".  Addresses lower than that are called "Below the Bar".

Here ends the explanation of the different types of addresses.

Maybe you wonder what happens for various values you might specify other than 8192 aka 8 Meg.  So now we'll discuss the three possibilities:

– You specify less than 16 Meg
– You specify more than 32 Meg
– You specify between 16 Meg and 32 Meg (boring)

Specifying less than 16 Meg

Any user-specified logon SIZE value less than 16 Meg just controls the amount of 24-bit memory you can use.

The limit on how much you can get for 24-bit memory will vary depending on how much your own particular system has reserved for its own use (such as the z/OS "nucleus" and what not), and for the system to use on your behalf, for example for building tables in memory from the DD statements in your JCL.   (Yes, you have JCL when you are logged onto TSO, you just don't see it unless you look for it.  The Logon screen has a field for a logon proc, remember that?  It's a JCL proc.)  Any 24-bit memory the system doesn’t reserve for itself to use, you can get.  This is called private area storage (subpools 229 and 230).

Typical mistake:  A user who thinks he has an 8 Meg region size may try to increase it to 9 Meg by typing in 9216 for size. The LOGON attempt may fail. It may fail because there is not nine Meg of leftover 24-bit storage that the system isn’t using.  Such a user might easily but mistakenly conclude that it is not possible for him to increase the region size above what he had before.  Ha ha wrong, you say to yourself (possibly with a little smile).  Because you now know that they have to specify a number bigger than 32768 — that is, more than 32 Meg.

Specifying more than 32 Meg

To increase the actual Region size, of course, as you now know, the user needs to specify a number bigger than 32 Meg (bigger than SIZE ===> 32768).  When you specify a value above 32 Meg, it governs how much 31-bit storage you get.  The maximum that can be specified is 2096128 (just under 2 Gigabytes).

Specifying any value above 32 Meg ALSO causes the user to get all available 24-bit memory (below the 16 Meg line).  This has the potential to cause problems related to use of 24-bit memory (subpools 229/230). This could happen if the program you’re running uses a lot of 24-bit memory and then requests the system to give it some resource that makes the system need to allocate more 24-bit memory, but it can’t, because you already took it all. The request fails. The program abends, flops, falls over or hangs. This happens extremely rarely, but it can happen, so it’s something for you to know as a possibility.

Specifying between 16 Meg and 32 Meg

What happens if you specify a number bigger than 16 Meg but smaller than 32 Meg? You still get the 32 Meg region size, of course. You also get all the 24-bit storage available — the 24-bit memory is allocated in the same way as it would have been if you had specified a number above 32 Meg. So asking for 17 Meg or 31 Meg has EXACTLY the same effect: it increases the request for 24-bit storage to the maximum available, but it leaves the overall region size at the default of 32 Meg. Having this ability must be of use to someone in some real world situation, I suppose, or why would IBM have bothered to provide it?  But such a situation evades the grasp of my own personal imagination.
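To pull the three cases together, here is a REXX sketch that simply encodes the rules as described above for an unmodified system with IBM defaults.  IEALIMIT or IEFUSI exits at your site can change all of this, so treat it as a mnemonic, not a reference:

    /* REXX - mnemonic only: TSO logon SIZE (in K) under IBM defaults      */
    parse arg size .
    if size = '' then size = 8192
    select
      when size < 16384 then do          /* less than 16 Meg               */
        say '24-bit (below the line) limit: about' size'K, if available'
        say '31-bit (above the line) region: the 32 Meg default'
      end
      when size <= 32768 then do         /* 16 Meg up to 32 Meg            */
        say '24-bit: all available below-the-line storage'
        say '31-bit region: still the 32 Meg default'
      end
      otherwise do                       /* more than 32 Meg               */
        say '24-bit: all available below-the-line storage'
        say '31-bit region:' size'K'
      end
    end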

THE IBM DOC

If you want to see the IBM documentation on this — and who would blame you, it’s a bizarre setup — check out page 365 of "z/OS MVS JCL Reference" (SA23-1385-01) at http://publibz.boulder.ibm.com/epubs/pdf/iea3b601.pdf

Caveats and addendums

The IBM-supplied defaults can be changed in a site-specific way. Mostly the systems people at your site can do this by using an IEALIMIT or IEFUSI exit, which they write and maintain themselves. Also IBM is free to change the defaults in future, as they did in z/OS 1.10 for the default for 64-bit memory.

Is there a way to find out what your real limits are, in case the systems programmers at your site have changed the defaults?  Yes, there is, but it involves looking at addresses in memory (not as hard as it sounds) and it is too long to describe in this same article.

Yes, this is a break from our usual recent thread on TSO Profiles and ISPF settings.   We'll probably go straight back to that next.  The widespread misunderstanding of REGION and Logon SIZE comes up over and over again, and it happened to come up again recently here.  There is a tie-in with TSO, though, which you may as well know, since we're here.  A lot of problems in TSO are caused by people logging on with region sizes that aren't big enough for the work they want to do under TSO.  The programs that fail don't usually give you a neat error message asking you to logon with a larger region size — mostly they just fall over in the middle of whatever they happen to be doing at the time, leaving you to guess as to the reason.

Free advice:  If you have a problem in TSO and it doesn't make much sense, it's worth a try just to logon with SIZE===>2096128 and see what happens.  Oftentimes just logging off and logging on again clears up the problem for much the same reason:  Some program (who knows which one) has obtained a lot of storage and then failed to release it, so there isn't much storage left for use by other programs you try to run.   Logging off frees the storage, and you start over when you logon again.

Batch JCL corollary:  If you get inexplicable abends in a batch job, especially 0C4 abends, try increasing the REGION size on the EXEC statement.  Go ahead, laugh, but try it anyway.  It's an unfailing source of amusement over the years to see problems "fixed" by increasing either the JCL REGION size or the TSO Logon Region size.  All the bizarre rules shown above work the same for batch JCL REGION as for TSO LOGON Region Size, except for 64-bit memory, which can be changed using the MEMLIMIT parameter in JCL but cannot be changed on the TSO Logon screen.  Remember, you have to go higher than 32M to increase your actual region size!

 

You need to specify a value bigger than 32 Meg to increase your actual (31-bit) TSO Region size (or JCL Region size).

 

Changing Default ISPF Settings

This is a simple discussion of some basic ISPF settings, for people who don't know much about ISPF.  (It can be a surprising disadvantage to find oneself in that situation.)  So if you already know everything, move along, there's nothing for you to see here . . .

There are a lot of ISPF settings available to you in Option 0, "ISPF Settings".  Many of the defaults can seem inconvenient to the point of being annoying.  That makes it hard to decide what to reset first — so let's start with the most disagreeable, and feel comforted by the fact that at least none of the defaults cause anything to be blinking in reverse video with a bright fuchsia background.

What's the single most troublesome default ISPF setting?
It's a tight race:
. . .  The input command line being positioned at the bottom of the screen rather than at the top ?
. . . The fact that the HOME key sends the cursor up to that "Action bar" line at the top of the screen, which is almost never where you want it to go ?
. . . The fact that F12 defaults to mean CANCEL ?
. . . The decorative underscores (the underlining) adorning almost all the input fields on all the screens ?

That's a 4-way tie already.

Since these are conveniently easy to change, let's do these first.  Yes, this article is bound to lead to future continuations of the world of  ISPF settings — but for today, let's start by learning how to set F-keys and change the other annoying things just mentioned.

This becomes a step-by-step guide now.  I'll be assuming you will actually be changing your settings while you're reading.

Change the Position of the Command Line 

From the ISPF primary option panel, select option 0, "Settings".

The "ISPF Settings" screen will appear.

In the lefthand column of that screen, you can toggle the on/off setting of any option by the presence or absence of the slash character on the far left.  When the slash is there, the feature is turned on. When you remove the slash, you turn the feature off.

The best thing for you to do here now (in my opinion) is this:
Immediately blank out the slash (/) at the left of "Command line at bottom", and press enter.

As soon as you press enter, the command entry line should immediately move from the bottom of the screen to a place near the top.  (That's the line that appears on almost every screen and says something like
"Command ===>"
or
"COMMAND INPUT ===>")

Why is it good for you to move this line ?
The goal is for you to be able to press the "Home" key at any time to get the cursor to return to the "Command ===>" line.  This brings you halfway to that goal.  It also means you won't have to tab through all the other input fields on the screen to get down to the line where you can enter a command.

Now direct your attention a few lines further down the "ISPF Settings" screen, to the line that says "Tab to action bar choices".
Remove the slash (/) to the left of that line also, and press enter again.

NOW you should be able to press the "Home" key, and have it take you to the "Command ===>" line.  As a practical matter, using ISPF to do practical stuff, you want to have an easy way to get the cursor to go straight back to the line where you enter your commands. Now you can do that.

This would be a good time to save your settings, assuming you actually just changed them.

Usually, when you change ISPF settings, the software doesn't really save the new settings permanently until you exit ISPF.

So, suppose you just changed your settings, and you walk away from your TSO session without exiting ISPF or logging off?  In most places, your idle TSO session times out and dies after some amount of time (whatever amount of "TSO idle timeout" happens to be set, at your place, for your ID).  From the point of view of the ISPF session you left abandoned, this counts as an abend, a crash. The changes you made are lost.  So, what happens after that?  Next time you LOGON again you will need to reset your ISPF settings all over again.

So, be sure to exit ISPF in an orderly way anytime you change your settings and you want to preserve the changes.

If you just changed your ISPF settings, go ahead and save them (just logoff and logon again), and we'll go on to the next item on our list of annoyances.

What's up next?  Resetting that annoying "F12=Cancel" is a good idea.

Reset What F12 Does (and what the other F-keys do)

On the "Command ===>" line, type the word KEYS and press enter.  Yes, it's that simple (almost).  A popup box will show you the current meaning for each of the 12 primary function keys.  Go down to F12, and in the second column (under the "Definition" heading) change the "definition" of F12 from CANCEL to RETRIEVE.  Or CRETRIEV is okay too.  Just overtype the word CANCEL with the new setting, RETRIEVE (or CRETRIEV).  Voila, it's reset !  But, yes, there are caveats.  Keep reading, we're getting there . . .

What does Retrieve do for you?, perhaps you wonder.  Well, it brings back a copy of the last command you entered, placing it in the command entry field for you, so you don't have to retype it if you want to do the same thing (or almost the same thing) repeatedly.  The system holds several of your most recent commands, in a wrap-around list.

CRETRIEV (which stands for Command Retrieve) is similar to RETRIEVE, with minor nuances.  In one way, RETRIEVE is nicer, because it doesn't matter where the cursor is when you press the F-key that means RETRIEVE.  With CRETRIEV, the cursor is supposed to be on the command line already before you press the F-key; if it isn't, the key just puts the cursor there rather than retrieving anything.  On the other hand, CRETRIEV works better in SDSF, and maybe in some other places I haven't found.

So, you overtype the word CANCEL (or any other F-key "definition") with RETRIEVE or CRETRIEV or any other command you like.  That changes it, instantly, at least for this set of definitions.  Hunh?  Yes, there are multiple sets of definitions of the F-keys.  We'll get to that.

Now, What about that other column, where it says "Label", on the far right?, you ask.

Tab over and blank it out, and it'll reset itself to the appropriate new label.

Why is there a column for "Label"?  Anytime you say PFSHOW on the command line, ISPF will display the current settings of the keys, near the bottom of the screen — when you type PFSHOW repeatedly, it will take turns between showing the settings of all the keys, showing the settings of some subset of the keys, and turning off the PFSHOW display entirely.

The column designated "Label" determines what PFSHOW will display as the explanation for each key.

So if you have a bizarre sense of humor, or an antisocial disposition, you can set the labels to say something contrary to the actual definitions of the keys.  Or, you can set something cutesy or quirky, or use acronyms that only you will recognize.  (Settle yourself down and such urges usually pass.)  Most often it's as well just to blank out the label and let ISPF set it to match the definition.  Usually.

CRETRIEV? you ask.  Okay, there are some ISPF commands whose names don't do much to suggest their actual function, such as CRETRIEV and, uh, "NRETRIEV".  What will NRETRIEV do for you, you may wonder, if you set one of your keys to mean "NRETRIEV"?

Imagine you're on the ISPF EDIT Entry Panel, with the cursor positioned to the "Other data set name" entry field.   If you press a function key that you've set to NRETRIEV, it will bring back the name of the last dataset you edited.  You press the same key again and again to go back through a log of previously edited datasets.  It's similar to Retrieve, except it's for dataset names instead of ISPF commands.  That can be handy, right?  Sure.  See, you're liking TSO better already — admit it.  (Disliking it less?)

Note that it matters where you have the cursor positioned when you invoke NRETRIEV.  If you position the cursor to the "Other Data Set name" entry field, the retrieved DSNs appear in that field.  The DSN choices presented to you from that list will include ALL the data sets you've edited most recently, even very long dataset names that you may have accessed from the 3.4 Data Set List screen.

If you do NOT have the cursor positioned to that one field, then the names that are retrieved overwrite the "ISPF Library" Project+Group+Type+Member area — you know, the three-part dataset name, with the DSN that ISPF remembers for you.  These dataset names come from a different list — This other list only includes names that you previously entered in the "Ispf Library" area.

Personally I usually have F6 set to NRETRIEV, and then (just for this special case) I set the "Label" for F6 to "Prev DSN".

So, the "Label" column can be useful, if used sparingly.

Wait, you're thinking, Hold on a minute. That's all very nice, but isn't F6 used for "Repeat Change" in Edit?  Oh, and what did I mean when I said "(almost)" back near the start of talking about resetting F12?

Here's the punch line:  You don't have just ONE set of function keys.  You have at least a dozen different sets of F-keys, depending on WHERE you are in ISPF.  Vocabulary item: These sets of function keys are called "Keylists".

The set of keys you get on the ISPF "Edit Entry Panel" happens to be the same as the set you get when you say "KEYS" in "Option 0, Ispf Settings". The keylist you get while you're actually editing — when you're past the initial Edit Entry Panel — Well, that's a different keylist.

Take a look again at that popup box you had up on the screen before, the one that comes up when you say KEYS — the popup box with the keys and their definitions listed.    If you enter KEYS from ISPF option zero, you can see that it says, fairly near the top — sort of as a subheading — "ISR Keylist ISRSAB Change".  Yes, this keylist is named ISRSAB.

If you go someplace else in ISPF and type "KEYS" on the command line, you might still get ISRSAB (which will happen for the "Edit Entry Panel"), or then again you might get some other Keylist (like when you're actually in edit mode, editing something, at which time you get ISRSPEC).  In the ISPF Edit member selection list, you get yet another different Keylist, ISRSPBC.

If you go into some other product that fits nicely into ISPF but is not really part of ISPF — something like SDSF, or the File-AID Editor, or Endevor — then when you type KEYS there it will (probably) show you the function keys and let you revise them; but usually it doesn't use the same KEYS popup screen — it uses a different screen that doesn't show a Keylist name.  So, those sets of keys you can't really change from "Option 0, Ispf Settings".  To change those, you pretty much have to select each product separately and type KEYS within each product to change the keys there.  But I digress; let's go back to the ISPF Keylists that you can change in "Option 0, Ispf Settings".

In option 0, as also in the Edit Entry Panel, you get Keylist ISRSAB.  In the ISPF Edit member selection list, you get a different Keylist, ISRSPBC.  When you're actually editing something, you get yet another Keylist, ISRSPEC.  There are a bunch of different Keylists.  If you want to set F12 to mean Retrieve everyplace, then you need to make sure the keys are set the way you want them in all the Keylists you use.

Yes, you do have the alternative option of suppressing the use of Keylists, so that you have only one Keylist and it is used everyplace — well, almost everyplace — but if you think about it, you can see that having F6 set to "Repeat Change" is quite useful within Edit but not terribly useful in most other places; and NRETRIEV — remember NRETRIEV?  Retrieves names of recently edited datasets, most recent first — Well, NRETRIEV could be useful when you're entering the name of the dataset you want to edit, especially if you don't quite remember the name exactly.  So why not have F6 set to mean "Repeat Change" while you're actually editing something, but have it set to mean NRETRIEV when you're just entering the DSName?  Similar arguments can be made for other ISPF commands and other keys.

Yeah? Name ONE, you say.  Okay, AUTOTYPE. (Ha.)  Autotype is similar to NRETRIEV, but you just start typing part of a dataset name, hit the key you've got set to "autotype", and it suggests the rest of the name.  (Yes, much like Google, and your email, etc.)  Keep pressing the key to go through its set of guesses (which it presents in alphabetical order, starting with the part you've typed).   Yes, you can accept part of some DSN it suggests, and change the ending, or just lop off the ending, and then press your "autotype" key again to get more suggestions starting with what you've got entered so far.  (I use F5.)  Now is that useful?  Told ya.  It also works in ISPF 3.4, Data Set List, which conveniently uses the same Keylist as the ISPF Entry Panel — our friend ISRSAB — and no, SAB doesn't stand for "Same As Before", at least I don't think it does.

If you want to continue tailoring your function keys, go back to that same "ISPF Settings" option 0 screen, and use the arrow keys on the keyboard to move your cursor up to the top line — the "Action Bar" — to where it says "Function keys".  With the cursor on "Function keys", press enter.  You get a drop-down list that lets you select 1, 2, 3, and so on.

Option 1 will let you set the "non-Keylist keys", the basic keys that ISPF falls back on when no Keylist is in effect.

Option 2 will show you another drop-down list (of Keylist names), and you can go through and change the settings on at least a dozen different Keylists.  Type E to the left of one of the Keylist names in the drop-down list, and you can edit the settings of the F-keys in that list.

You might find that handier than resetting the definitions by going around to all the different screens you use and typing "KEYS" for every one.

Not all the possible sets of function keys are represented in that drop-down list, though.  It doesn't cover the SDSF list of function keys, or the lists for various other add-on products like File-AID and Endevor.  Notice that all of the Keylist names in the drop-down list start with ISR?  Yeah.  ISR and ISP are prefixes used by ISPF.

Short digression: Other products — components — software apps? — each get their own special 3-character prefix assigned, or sometimes they get more than one prefix.  IKJ means TSO itself, DSN means DB2 (go figure), DFH is CICS, DFS is IMS, and so on.  The non-IBM products don't always use the prefixes assigned by IBM, but recently IBM has been "encouraging" vendors to start using the assigned prefixes.  I wonder if IBM is letting Script Waterloo keep the SCR prefix.  That used to be my personal favorite, because the error or warning messages I got were always prefixed SCRW, and it seemed so appropriate.  But let's get back to the present…

Option 3 can be used to specify that you want 24 function keys, not just 12.  The second group of 12 function keys are usually accessed by holding down the "Shift" key while simultaneously pressing a function key.  Shift+F1 would be F13, Shift+F12 would be F24, etc.

Simply choosing "24" here doubles the number of keys you can set to do different things — As an example, consider that a lot of people set one of the keys in the ISPF Edit Keylist (ISRSPEC) to mean "SUBMIT", and then they just press that key when they're editing some JCL and want to Submit it to run as a batch job.  Okay, maybe that doesn't sound like such a big convenience.

So consider CVIEW and CEDIT.  Imagine you're editing that same JCL — What if one of the lines refers to another dataset, DSN=some.dataset.with.stuff.you.want.to.see,DISP=SHR ?  Well, if you've specified CEDIT (or CVIEW) for one of your F-keys (Say maybe F14, the alternate of F2), then you can position your cursor to the start of that dataset name, press your F14, and be transported as if by magic into a new ISPF Edit (or View) session, editing or viewing that other dataset.  When you press F3 from there you come back to your JCL, like waking up from a dream of the other data.

 Option 9 can be used to turn off the Keylist feature, so the same function keys will be used everyplace — well, almost everyplace.

So, okay.  Enough about Function Key settings. You get that now.  You can finish setting those up later.  What about those decorative underscores?, you may be asking.  Can we get rid of those now?  Well, You read my mind.

GETTING RID OF THE DECORATIVE UNDERSCORES

This is not intuitively obvious.  Back on that "Option 0, ISPF Settings" screen, up in the "action bar" at the top, just to the right of "Function keys", it says "Colors".

Right, Colors.  Use the arrow keys to move your cursor up to that selection, "Colors", and press Enter.  (Or if you read the previous blogs, and you followed the directions to set up your 3270 emulator preferences so you can just double-click the mouse on various things, then you just need to double-click on "Colors".)

Either way, a drop-down box appears.  You select "2. CUA attributes…"  (Selecting "3. Point-and-Shoot…" seems to do the same thing.)

Like I said, not intuitively obvious. What you see next will be, though.  You get another pop-up screen containing a list of field types ("Panel Element" types).  If you page down (F8) through the list, looking at the column that says "Highlight", on the far right, you will see that some things are set to USCORE.  You guessed it, that means Underscores appear in that type of field.  Overtype the word USCORE with the word NONE for every case where it appears.  Keep paging down (F8) until you're sure you got them all.  I think there are about 40 "Panel Element" types listed.

Other options for "Highlight" include REVERSE and BLINK, but in the end I think most of us will prefer NONE as a setting in almost all cases.

You can, by the way, also change default colors on this same pop-up screen.

Like so many things, it's pretty easy once you know how.

This is probably enough information for one article, right?  We can reset some more ISPF settings next time.  I'm thinking that ISPF "edit" profiles are pretty important for you to be able to reset, but it might be more fun to look at resetting colors and things within the PC 3270 emulation. We'll probably look at one of those things next.

 

 

Take Control of Your TSO/ISPF Profile(s)

 

It's time for you to take control of your TSO sessions.  Own your own TSO/ISPF Profile(s).   We start by just jumping in at the deep end:  Take control of your saved ISPF Profile variables (the easy way) (even the ones you didn't think you could change).

You know how you logon in the morning, and ISPF seems to remember things like the name of the last dataset you edited yesterday, how your JOB statement(s) should look, and all sorts of other things that you've typed into the entry fields on different ISPF panels?  Most of that stuff is saved in your ISPF Profile dataset.  The ISPF Profile dataset usually has a name like 'YourID.ISPF.ISPPROF', and it has potentially hundreds of members.  Those members store most of the stuff that ISPF seems to remember.

To answer your unspoken question, yes, actually, you can edit that dataset, BUT you have to be careful because it contains hex stuff that can get messed up, and you also don't want to mess up any alignment of the data.  And yes, in most cases you can just go back to wherever you typed a thing in the first place, and retype it; but that can be tedious, and it doesn't always work for everything.  Another idea you think of: what happens when you want to make the same simple change everyplace, and there are a lot of places?  Maybe you want to copy an existing profile dataset and then change all occurrences of one 7-letter userid to a new 7-letter userid.  You wonder if you can do that using a utility program.  Yes, you can use File-AID (specifically, File-AID option 3.6); in fact that's something I do myself if I want to change the userid like that (as long as the two userids are the same length).  Still, there are times when none of those ideas is quite what you want.

An example: Let's say You use ISPF option 6 to enter and save TSO commands, especially long ones with awkward syntax that you don't want  to remember and you don't want to retype.

ISPF option 6 keeps a short list of your most recent commands, letting you retrieve and re-execute the command of your choice, with or without changing it a little.  (You really should cut-and-paste that list into a scratch pad file someplace, but back to the main track.)  You want to be able to change the commands right there in the list, without executing them; and the list is protected against you overtyping the saved commands.

Suppose you mistyped something.  Worse than that, imagine you typed a mistake that reveals a deep and profound misconception of your work environment, or contains an embarrassing Freudian slip, and you would rather avoid having any of your co-workers get a chance to notice it.  But ISPF pops it right in at the top of the list, to save until you've entered ten other commands.

Dang.  Looks like another case of a machine trying to get the better of you.  At least, it looks that way at first glance . . . but in a few minutes you'll be able to turn the tables (if you read on. . .)

Even without knowing about ISPF Profile variables, you think of a couple of other options immediately.  One, you can type in several other commands, causing the evidence of your mistake to roll off the bottom of the list; but then you'll lose all the other commands in your list too.  Two, you can either cancel your TSO session or you can just hit the attention (attn) key to get thrown out of ISPF into READY mode before ISPF saves the changed list into your profile.  That should work if you think of it right away, and you don't mind losing whatever you've got going on in your other split screens.

Putting aside those ideas, let's see how to do it without losing anything.

You exit ISPF option 6 and go to option 7.3, and voila, a screen full of apparent gibberish appears — Variables and their values.  Importantly, much of the gibberish can be overtyped.  A few lines down from the top you can see a line that says "Variable  P  A  Value" (the column headings).  The data you want to overtype is going to be found under the "Value" column.  The first column is just the name of the variable, which you don't really need to know.  There are hundreds of variables, and a name can't be longer than 8 characters, so that means a lot of them will look like, well, gibberish.  The second column, titled "P", tells what kind of variable it is (what "pool" it swims in).  You're looking for Profile variables, which are identified by having a "P" (for Profile) in the "P" (for Pool) column.  That other column, titled "A" (for attributes), will usually be blank for the Profile variables.  If it says "N" it means No, you can't overtype the value, but that applies to things like the time and date, not to the profile variables.

Just use F8 to page down through the list until you get to the ones with "P" in the "P" column.  Keep paging through the "P" variables, looking at the data under the "Value" column until you see the data that you want to change.  For the case in our example, the Variable name (column 1) will say PTCRET01, so it isn't too terribly far down.
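
Just to give you a mental picture, the line you're hunting for will look something like this (the Value shown here is invented; yours will contain whatever you last typed into option 6):

Variable  P  A  Value
PTCRET01  P     DELETE 'MYID.EMBARRASSING.TYPO.DSN'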

Having found it, you overtype the string that you don't want with something else that you do want.  (If you don't actually know of any text you want to put there, try just putting TIME or PROFILE or some other innocuous short command padded out with blanks.)  After overtyping the value, you stare at it carefully for a minute to make sure you didn't make some other mistake while fixing the first one.  When you're satisfied that it looks good, you press F3  (or whatever key you have set to mean END).  Now you're laughing (metaphorically, probably not literally).  Go back to option 6 just to check.  There you are: the list now looks the way YOU wanted it to look.  Mission accomplished.

But wait a minute.  Didn't you notice other stuff while you were paging through that list?  Go back to 7.3 and have a look around.  You recognize your job card(s), some dataset names you've used on edit or utility screens, lots of stuff.  Most of it can be overtyped right there, without cruising around to all the individual screens where the text was originally set.  Hey, you realize, you can do this.  You own this.  You've got the power.

Remind me sometime to tell you about inserting a variable named ZSTART into the list as a new profile variable, to specify stuff you want to have happen automatically when you start ISPF.  Yes, you can do that [provided your z/OS system is at least at level 2.1].  There are two (2) line commands available: i (for insert) and d (for delete).  Type the letter i on the far left of any line, to the left of one of the Variable names, and press enter.  A blank line appears.  Put the word ZSTART for the Variable name, put P in the P column (I know, your dog would love this), leave the A column blank, and under Value you get to type in a string representing what you want to have happen automatically at ISPF startup.   The string you put in should follow this pattern:

ISPF;2;Start S.h;Start 3.4;Swap 1

The interpretation of the above string is as follows.  It always starts with ISPF.  After that you indicate a virtual new line by putting in a semi-colon (;).  (Semicolon is the default value for the line separation character; if you've changed yours to something else, use your own line separation character instead.  The line separator functions like an imaginary press of the Enter key, allowing you to string multiple commands together on the same line.)  In this example, you want to get three split screens started automatically whenever you start ISPF.  The first one will be Edit (option 2), the second one will be the SDSF hold queue (S.h), and the third one will be DSLIST (option 3.4).  You say Swap 1 at the end to tell ISPF to put you back onto the first of your split screens, the Edit screen.
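
You can make the string as modest or as busy as you like.  If all you wanted was to land straight in Edit with no extra screens, the whole value would simply be:

ISPF;2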

If you're wondering why you can't just say ISPF 2, instead of having to say ISPF first and then say 2 on a separate logical line, well, I don't know why IBM requires that.  I just go along, because that's what you do to get it to work.  The first thing seemingly has to be just ISPF by itself.

When you're actually on the READY mode screen, you can type ISPF 2 if you like, and it will take you directly to ISPF edit.  It won't execute your multiple ZSTART commands then, though.  You can also say ISPF BASIC from READY mode if you want to skip your ZSTART.  What if you're already in that extravagant multi-session environment you've brought on yourself, and you want to get out of it all at once?  You use =XALL as the command.  (It can sometimes abend, by the way, but it does get you out, back to READY mode.)  But we digress from our original digression.

When you go into ISPF after setting up your ZSTART, you'll automatically be on the Edit screen (if you used the example commands), and you'll have two other sessions started on alternate screens.  Before that, though, while you're still back in ISPF 7.3, having just put in your ZSTART profile variable, your option 7.3 screen (well, part of it) now looks something like this:

Variable  P  A  Value
                ----+----1----+----2----+----3----+----4----+----5----+--
ZSSSMODE  P     B
ZSTART    P     ISPF;2;Start S.h;Start 3.4;Swap 1
ZSUPSPA   P
ZSUPSPB   P

To save what you've done, you press the END key (generally F3); and that also takes you back out of the 7.3 area, to the more ordinary part of ISPF.  There'll be an extra screen, more of a popup really, but just F3 past that.

Anyway, so much for  ISPF Profile Variables, and dealing with most of the character strings that ISPF saves for you.

What about flags and switches, though?  Obviously a lot of things are controlled by on/off or Y/N  or multiple choice settings, and if you don't know the names of the variables then you can't very well use option 7.3 to change the values in them.   Doesn't matter.  A lot of them can be changed on the ISPF Settings screen (option 0), and that's probably the easiest way to set them anyway.  If you do happen to know and remember the name of a variable and you want to go straight to it in option 7.3 next time, the command for that is not FIND, it's LOCATE (or LOC).
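
So, for example, next time you want to fix up that option 6 command stack, you could jump straight to the variable from our earlier story by typing this on the 7.3 command line:

===> LOCATE PTCRET01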

On our next outing, let's visit ISPF option zero (Settings) and see what can be done there to improve your virtual environment.

What can you change in option zero, ISPF Settings, you ask?

A lot.  Here are some highlights:

Command Line position – Most people prefer Top but the default is Bottom
Changing what Function keys do (F12=CANCEL?  Whose idea was that?)
Changing Function keys (12 or 24), KEYLISTS (yes or no)
Get rid of that annoying underlining of all the fields where you type things
Log/List – Suppress that useless extra screen displayed when you exit ISPF
Long message in a box (or not)
Message ID numbers on messages (makes them easier to Google search)
Terminal type 3278T and format Max
Putting the calendar on your primary options menu
Seeing the screen name in the upper lefthand corner of screens

We'll talk about the first few of those in the next article.

So, the title of this piece you're reading — didn't it mention "Profile(s)", with an S?  So far we've only talked about your ISPF Profile.

Maybe you're wondering: How many profiles do you have in TSO, anyway?

Listing just the main ones, you have one each for native TSO, RACF, SDSF, File-AID or any other similar tool, ISPF general Settings, and ISPF dataset edit (one edit profile for each dataset type, up to your site-specific limit).  "Dataset type" means the last part of the name after the last dot, what would be called a file extension on the PC.  The EDIT profiles, like most of your profiles, live in your ISPF Profile dataset.  Some other profiles, like your native mode TSO and your RACF profiles, are saved elsewhere.

Next time we'll talk about some of the ISPF option zero settings, because resetting those can save you a lot of annoyance.  Meanwhile, if you don't want to wait for me, you can always go to ISPF option zero and type "HELP", which will allow you to read through the "ISPF Settings" tutorial.  If you want to read the tutorials, though, you should probably start in Edit, because the Edit tutorial is the one you'll likely find most useful (in my opinion).

So, yeah, we're just getting started.  Good start, though, yes?

TSO/ISPF Multiple Split Screens – the Easy Way

You think it might be nice to use multiple split screens with TSO/ISPF, but it seems hard to keep track of them?  Then you probably just haven't seen it done the easy way.  This short explanation will walk you through it.

After you read this, you should know how to add an easy selection bar at the bottom of the screen, and then double-click the mouse on the session you want.  (It might be easier to remember later if you do the steps in an ISPF session while you're reading. You'd get a feel for it — literally.)  Some of the features discussed here only came out recently, at the z/OS 2.1 release level, so if you're using an older release you'll have to wait until your place installs 2.1 before you can use all the bells and whistles.  That said, let's get started.

First, enter these commands in TSO/ISPF:
===> SPLIT  NEW
===> SWAPBAR

The "SPLIT NEW" command causes an extra split screen to start.

The "SWAPBAR" command causes a new "action bar" to appear on the bottom line of the 3270 screen, to use with your collection of screens.

You "SPLIT NEW" again to create each additional split screen.

You also have a choice of saying START instead of SPLIT NEW.

Your newly added action bar contains a name for each of your active screens, so you can recognize which is which.  Usually the name on the action bar will describe what is on that screen — EDIT, or SDSF, or something obvious.

With no further setup than that, you can move the cursor to your choice in the action bar and press ENTER to go to that screen.

Or you can enter "SWAP 1", or "SWAP 3", etc, on the command line, considering the screens to be numbered as 1, 2, 3, … according to their positions on the action bar.

Or just type the number for the session you want and press [F9] instead of [enter]  — assuming your F9 is set to mean  SWAP.

If you quit reading now,  you already know enough to use multiple split screens.  With just a little bit more perseverance, though, you can have a better setup.  ("Better" meaning nicer looking and easier to use.)

For a nicer looking display, you can change the SWAPBAR settings so that a line is shown above the added "action bar". This sets the bottom bar apart from the rest of the screen, making it look more like the "File|Edit|View|etc…" bar at the top of the screen.

You can also change the color of the text in the SWAPBAR action bar, further setting it off from the rest of the screen, but the default color (white) is fine.  (If you have your PC3270 session set up with a light background, "white" will actually display as black.)

To access the panel that allows you to change these things, enter the SWAPBAR command followed by the operand slash (/):

===> SWAPBAR   /

On the popup that then appears, select "Show SWAPBAR divider line", and pick a color (say W for White, and so on).  To get ISPF to save your settings so you can actually use them, enter S on the line where it says "S to update SWAPBAR", and then press F3 while your "S" is still there.

Okay.  Now,  wouldn't it be nice to be able to double-click the mouse key to select your choices in the action bar?

If you are using IBM's PC 3270 emulation, here's how you do that.

Move the cursor up to the PC 3270 "Menu Bar" (above the similar ISPF bar). If the "Menu Bar" is not there — if it isn't showing — you can get it by pressing alt-E (That is, holding the ALT key while you press the letter E), and then selecting "Show Menu-Bar" from the selection list.

Having found the "Menu Bar",  Click "Edit" in the "Menu Bar" and you'll get a drop-down menu.  From there:

select:            Preferences >
select:                   Hotspots >
Find "Point-and-Select commands" about halfway down,
Under that select (click the box next to) "ENTER at cursor position"

click the [OK] box

To save your settings, again move the cursor to the "Menu Bar", and select File, then Save.

With the Hotspots setup as shown, you can now double-click on the choices in the action bar.  (Guess it pays to persevere.)  When you exit ISPF to logoff, your setup will be saved for future use.

If you like selecting things by double-clicking the mouse, you now have a bonus.  On the ISPF main menu (the primary options menu), notice where it says "2 Edit", and double click on the word "Edit" there.   Mmmm-hmmm, it takes you straight to the edit screen.

Go back, and this time double-click where it says "Utilities" in "3 Utilities".   Doing that puts you on the "Utility Selection Panel".   Notice how the Utility Selection Panel is set up in a similar way to the primary menu?  It has "1 Library",  "4 Dslist",  and so on.   You see where this is going, right?   Double-click on the word "Dslist".

Yes, it works for any panel set up that way, provided of course the panel was set up correctly in the first place.   Generally, double-clickable selection fields will be turquoise, and the screens will have a general look and feel like the two we just discussed.  So, if you like double-clicking the mouse to select things, now you're laughing.

If you don't think "SPLIT NEW" is an easy phrase to remember, assign "SPLIT NEW" to a function key you don't use (for me, that was F4).  It makes the whole experience flow.  Want a new screen?  Hit F4. Want another one?  F4 again; it's there at the touch of a key.  Some people recommend using F2.  You just replace the old default SPLIT key with SPLIT NEW.  So that's a good idea too.  I don't use F2 that way because I like to keep the old "SPLIT" as a way of positioning the split line in the middle of the screen in order to edit two datasets at once and compare them visually.  So it's a matter of personal preference which key you decide to use.

If you do use F2 to split the screen in the middle so you can see parts of two screens at the same time, then (you ask) How do you know which of your other sessions is going to be on the other part of the screen?

Look at the names of the sessions as they are listed at the bottom of the screen.  You can see that there is an asterisk to the left of the screen where you are currently working.  Notice that one of the other session names is marked with a hyphen (minus sign) on its left.  That will be the companion to your current screen when you press F2.  It's also where you go if you press F9 without specifying a destination session.  If you want to change it so your current screen is matched with a different companion session, here is how to do that:  Press F9 to go to the companion it has now.  Once there, double-click on the companion you want to replace it with.  That's all there is to it.
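
To make that concrete, the bottom line of your screen might look roughly like this (session names borrowed from the screen shot described near the end of this article; the exact spacing will differ):

SDSF     SDSF     -DSLIST     *EDIT     ISR@PRI

The asterisk marks the screen you're on, and the hyphen marks its current companion; press F2 here and EDIT shares the screen with DSLIST.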

While you're deciding which function key you want to use for SPLIT NEW, remember that you also have the choice of saying START instead of SPLIT NEW.

If you use START instead of SPLIT NEW, you can specify an immediate target destination for the new screen you're adding.  So, for example, if you want to open a new screen and use it to view your SDSF held output (option S.H), you say:

===> START S.H

You get the new screen, and you're already right there in SDSF.

Let's say you decide to set your chosen key to START rather than SPLIT NEW.  Then you can just type S.H and press F4, and there you are instantly in SDSF, at the hold queue.  Sort of like using the transporter beam in a sci-fi show.  Assuming you've chosen F4, and assuming S.H is where you wanted to go.  (Yes, it still works no matter what key you choose, or what destination; only my example ceases to be applicable when you change those.)  (Cue the laugh track.)

So now we'll describe how to reset your function keys, in case you don't already know.  It's pretty easy.

===> KEYS

With today's ISPF, all you have to do is type the word "KEYS" on the command line and a screen will appear that allows you to reset what the function keys do.    You reset what a key does by typing the new command in the lefthand column.

That "Label" column on the far right of the screen (I know you're curious) lets you set the text that later appears as the description of the key's function when the keys are displayed at the bottom of your screens (which is controlled by ===> PFSHOW).   Putting "Short" in the "Format" column means that key will be included on the short list when you PFSHOW only the short list of keys.

Today's ISPF also provides you with multiple sets of Function keys, depending on where you are (unless you've turned off that feature).  So the keys you get within EDIT are not the same as the keys you get on the edit member selection list or the main menu.  This means you need to do the "KEYS" thing described above several times.

The main menu uses the same set of KEYS as the main screen you get after typing 2 or 3 or 3.4, and that set (that key list) is named ISRSAB.  When you go to the member selection list for EDIT, or the dataset list in ISPF 3.4, you get another key list, named ISRSPBC.  So if you set the keys in ISPF 3.4, the same settings will be used when you go to the EDIT member selection list.  There is quite a bit of sharing, but there are still quite a number of key lists.  SDSF has its own set.  Most program products (like File-AID) have their own sets.  What you might want to do is just type KEYS whenever you first go into a different function for the next little while, and reset as necessary until you've hit them all.  (While you're there, you may as well change F12 from the default of CANCEL to the much more useful setting RETRIEVE.)

Other than that, you're done with the setup.  BUT, the ISPF part of your setup won't be saved until you exit ISPF, and you can lose it if your session is cancelled, you're thrown out of ISPF by an abend, or you just let it time out when you go home.  So to be on the safe side you probably want to exit ISPF and Logoff TSO, and then just Logon again with a fresh start — and you probably want to do that in general whenever you make substantial changes to your ISPF settings (changes you don't want to do over).

Let's say you've started eight split-screen sessions and you don't want to say =X eight times.  (As you know, you say =X and press enter to get rid of any one split screen.)  If you're exiting because you want to save your settings, go ahead and type =X as many times as you need to.  That's the least risky option.  If you have nothing to lose, though, there's a new command (which is still new and sometimes abends):

===>   =XALL

Note that =XALL works for the basic IBM applications, but if one or more of your split screen sessions is running a non-IBM product — something like File-AID, or the MAX editor, or Endevor, or some CA product, for example  — then that product might or might not honor the =XALL directive.  It is up to the product to honor the request when  =XALL arrives at that product's turn to exit.  If you're lucky, ISPF just stops its exiting spree when it comes to such a screen.  At that point you just enter =X,  or whatever is required to exit the particular product, and after you escape the sticking point you can enter =XALL again to continue your mass exit.    If you're not so lucky, ISPF might abend instead of halting, depending on how that particular non-IBM product reacts when it sees its turn come up under the XALL process.

If at some point you decide you want to turn off the SWAPBAR action bar, enter:

===> SWAPBAR OFF

Note: If you have the ISPF setting "Always show split line" turned on, using SWAPBAR turns off that split line feature.  Your split line is replaced by the action bar.  Using "SWAPBAR OFF" does not turn the split line setting back on.  You need to reset it yourself: use option 0 (zero) from the ISPF primary panel, find and re-select "Always show split line", and then exit ISPF and go back into it again.  Some ISPF settings are confusing because they don't take effect until you exit and go back into ISPF, and this is one of them.

About that loss of the always-on split line:  You still get the split line displayed if the split is anyplace in the middle of the screen.  All you lose is the "Always" in "Always show split line".

Another little caveat:  If you've set up your 3270 emulator preferences to allow double-clicking on the sessions listed in the action bar, it can change the way the text selection part of cut and paste works.  Really it just slows it down. There seems to be a big delay added between the time you click on the section you're going to select, and the time the highlighting starts to appear.   If you find that annoying, you might switch to using the Ctrl key plus an arrow key rather than selecting sections of text with the mouse: Just click the mouse once on the start of the text you want to select (to position the cursor there), then press the Ctrl key and hold it down while you use the arrow keys to highlight the text you want to select.  It can actually feel better than using the mouse for selection, once you get used to it, because you never again pick up extra text accidentally near the border of the selection area.   On the other hand, once you get used to that little added delay using the mouse for marking your selection, it might not bother you.

By the way, be careful about using a mouse click to position the cursor when you intend to cut-and-paste text containing a url.  If you click on the url itself in the text, you might be magically transported to the destination.  Okay, not magically.  It happens if you have the "Execute URL" box checkmarked back in the "Hotspots Setup" part of your PC3270 Configuration settings, mentioned in  our discussion above.

That's it.  Extra split screens the easy way.

The picture below shows the action bar at the bottom of the 3270 session screen.  Yes, I use a light background rather than traditional TSO black.  Yes, I use an extra-big screen.  Just focus on the action bar now.  It shows SDSF, another SDSF, DSLIST, EDIT, and ISR@PRI (the first seven characters of the name of the ISPF primary option menu, ISR@PRIM).  Double-click on whichever session you want to go to, and you're instantly there.  With the added divider line, the action bar really does look okay, not too distracting or intrusive.  Now that you've put in the effort to set it up, have fun with it.  You get used to it really fast, and you wonder why you didn't use it before.

One last warning, just so you know.  Every split screen session you add uses up more memory in your TSO session on the mainframe, just as having a lot of windows open at once uses up more memory on your PC.  So if you start having memory shortage problems in TSO, cut back on the number of split sessions you use simultaneously, and try logging on with a bigger SIZE specified if you can.  The largest logon SIZE you can specify is 2096128, and you only get that if your company allows it; but don't assume you can't get 2096128 just because you couldn't get, say, ten meg (10000).  Yes, it can happen that 10000 doesn't work but 2096128 does.  But that's a topic for another episode . . .
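
For what it's worth, the place to ask for more is the Size field on the TSO logon panel.  Asking for the maximum looks something like this (whether you actually get it depends on what your company allows):

Size    ===> 2096128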

 

[Screen shot: a 3270 session with the SWAPBAR action bar across the bottom (3270.with.SWAPBAR)]

Updated slightly 2016 June 11, primarily to extend the discussion of =X and =XALL