IBM z/OS MVS Spooling: A Brief Introduction


Spooling means a holding area on disk is used for input jobs waiting to run and output waiting to print.

This is a brief introduction to IBM z/OS MVS mainframe spooling.

The holding area is called spool space.  The imagery of spooling was probably taken from processes that wind some material such as thread, string, fabric or gift wrapping paper onto a spindle or spool.  In effect, spooling is a near-synonym for queueing.  Those paper towels on that roll in the kitchen are queued up waiting for you to use each one in turn, just like a report waiting to be printed or a set of JCL statements waiting for its turn to run.

On very early mainframe computers, the system read input jobs from a punched card reader, one line at a time, one line from each punched card.  It wrote printed output to a line printer, one line at a time.  Compared to disks – even the slower disks used decades ago – the card readers and line printers were super slow.  Bottlenecks, it might be said.  The system paused its other processing and waited while the next card was read, or while the next line was printed.  So that methodology was pretty well doomed.  It was okay as a first pass at getting a system to run — way better than an abacus — but that mega bottleneck had to go.  Hence came spooling.

HASP, the Houston Automatic Spooling Priority program (variously described as a program, system, or subsystem), was early spooling software used with OS/360 (the ancestral precursor of z/OS MVS).  (See the HASP origin story, if interested.)  HASP was the basis of development for JES2, which today is the most widely used spooling subsystem for z/OS MVS systems.  Another fairly widely used current spooling subsystem is JES3, based on an alternate early system called ASP.  We will focus on JES2 in this article because it is more widely used.  JES stands for Job Entry Subsystem.  In fact JES subsystems oversee both job entry (input) and processing of sysout (SYStem OUTput).

Besides simply queueing the input and output, the spooling subsystem schedules it.  The details of the scheduling form the main point of interest for most of us. Preliminary to that, we might want to know a little about the basic pieces involved.

The Basic Pieces

There are input classes, also called job classes, that control job scheduling and resource limits.

There are output classes, also called sysout classes, that control how printed output is handled.

There are real physical devices (few card readers these days, but many variations of printers and vaguely printer-like devices).

There are virtual devices. One virtual device is the “internal reader” used for software-submitted jobs, such as those sent in using the TSO submit command or FTP.  Virtual output devices include “external writers”.  An external writer is a program that reads and processes sysout files, and such a program can route the output to any available destination.  Many sysout files are never really printed, but are viewed (and further processed) directly from the spool space under TSO using a software product like SDSF.

There is spool sharing.  A JES2 spool space on disk (shared disk, called shared DASD) can be shared between two or more z/OS MVS systems with JES2 (with a current limit of 32 systems connected this way).  Each such system has a copy of JES2 running. Together they form a multi-access spool configuration (MAS).  Each JES2 subsystem sharing the same spool space can start jobs  from the waiting input queues on the shared spool, and can also select and process output from the shared spool.

There is checkpointing.  JES2 periodically records the state of its job and output queues in a checkpoint data set.  This is obviously especially necessary when spool sharing is in use, since all the JES2 subsystems sharing the spool also share the checkpoint.

There is routing.  Again, useful with spool sharing, to enable you to route your job to run on a particular system, but also useful just to route your job’s output print files to print on a particular printer.

There are separate JES2 operator commands that the system operator can use to control the spooling subsystem, for example to change what classes of sysout can be sent to a specific printer, or what job classes are to be processed.  (These are the operator commands that start with a dollar sign $, or some alternative currency symbol depending on where your system is located.)

There is a set of very JCL-like control statements you can use to specify your requirements to the spooling subsystem.  (Sometimes called JECL, for Job Entry Control Language, as distinct from plain JCL, Job Control Language.)  For JES3, these statements begin with //* just like an ordinary JCL comment, so a job that has been running on a JES3 system can be copied to a system without JES3 and the JES3-specific JECL statements will simply be ignored as comments.  For JES2, on which we will focus here, the statements generally begin with /* in columns 1 and 2.  Common examples you may have seen are /*ROUTE and /*OUTPUT.  Notice, though, that the newer OUTPUT statement in JCL is an upgrade from /*OUTPUT, and the OUTPUT statement offers more (and, well, newer) options.  Though the OUTPUT statement is newish, it is over a decade old, so you probably do have it on your system.
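
For illustration, here is a tiny sketch of what JES2 JECL and the newer OUTPUT JCL statement can look like together in a job.  The job name, the destination RMT6, and the OUTPUT statement name OUT1 are all made up for the example; SYSOUT=(,) on the DD lets the output class come from the referenced OUTPUT statement.

//MYJOB    JOB  1,CLASS=A,MSGCLASS=X
/*ROUTE PRINT RMT6
//OUT1     OUTPUT CLASS=A,COPIES=2
//STEP1    EXEC PGM=IEFBR14
//REPORT   DD   SYSOUT=(,),OUTPUT=*.OUT1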

There are actual JCL parameters and statements that interact with JES2, such as the OUTPUT parameter on the DD statement and the just-mentioned OUTPUT JCL statement itself, which is what that OUTPUT parameter on the DD points to.

Another example is the CLASS parameter on the JOB statement, which is used to designate the job class for job scheduling and execution.  The meanings of the individual job classes are totally made up for each site.  Some small development company might have just one job class for everything.  Big companies typically create complicated sets of job classes, each class defined with its own limits for resources such as execution time, region size, even the time of day when the jobs in each class are allowed to run.  Your site can define how many jobs of the same class are allowed to run concurrently, and the scheduling selection priority of each class relative to each other class.

Sometimes sites will set up informal rules which are not enforced by the software, but by local working rules, so that everyone there is presumed to know that they are only allowed to specify, for example, CLASS=E for emergency jobs.  (That’s one I happened to see someplace.)  If you want to know what job CLASS to specify for your various work, your best bet is to ask your co-workers, the people who are responsible for setting up the job classes, or some other knowledgeable source at your company.  Remember you can be held accountable for breaking rules you knew nothing about, even rules that are not enforced by any software configuration; so don’t try to figure it out on your own, ask colleagues and other appropriate individuals what is permissible and expected.  Not joking.

The JES2 Initialization and Tuning Guide and Reference are the books that describe the parameters used to configure JES2 job classes, if you’re just curious to get a general idea of what the parameters are.  The JES2 proc in proclib usually contains a HASPPARM DD statement pointing to where to find the JES2 configuration parameters on any particular system.

In some cases similar considerations can apply for the use of SYSOUT print classes and the routing of such output to go to particular printers or to be printed at particular times.  The SYSOUT classes, like JOB classes, are entirely arbitrary and chosen by the responsible personnel at each site.  

MSGCLASS on the JOB statement controls where the job log goes — the JCL and messages portion of your listing.  The values you can specify for MSGCLASS are exactly the same as those for SYSOUT (however that may be set up at your site).  If you want all your SYSOUT to go to the same place, along with your JCL and messages, specify that class as the value for MSGCLASS= on your JOB statement, and then specify SYSOUT=* on all of the DD statements for printed output files, as in the example below.  (That is, specify an asterisk as the value for SYSOUT= on the DD statements.)
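
A minimal sketch: the job class E, message class X, and program name below are made-up examples (your site's classes will differ).  The job log goes to class X, and both SYSOUT=* DD statements send their print to that same class.

//PAYJOB   JOB  1,CLASS=E,MSGCLASS=X
//STEP1    EXEC PGM=MYPROG
//REPORT1  DD   SYSOUT=*
//REPORT2  DD   SYSOUT=*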

In many places, SYSOUT class A indicates real physical printed output on any printer, class X indicates routing to a held queue where it can be viewed from SDSF, and class Z specifies that the output immediately vanishes (Yup, that's an option).  However, there is no way to know for sure the details of how classes are set up at your particular site unless you ask about it.

Sometimes places maintain “secret” classes for specifying higher priority print jobs, or jobs that go to particular special reserved printers, and the secrets don’t stay secret of course.  Just because you see someone else using some print class, don’t assume it means it’s okay for you to use it for any particular job.  Ask around about the local rules and expectations.

So, for MSGCLASS (aka SYSOUT classes), as for JOB classes, the best thing is to ask whoever sets up the classes at your site; or, if that isn't practical, ask people working in the same area as you are, or just whoever you think is probably knowledgeable about the local setup.  Classes are set up by your site, for your site. 

An example of a JES2-related JCL statement that you have probably not yet seen is introduced with z/OS 2.2: the JOBGROUP statement, along with an entire set of associated statements (ENDGROUP, SCHEDULE, BEFORE, AFTER, CONCURRENT; there are about ten of them).  But that would be a topic for a follow-on post.  You probably don’t have z/OS 2.2 yet anyway, but it can be fun to know what’s coming.  JOBGROUP is coming.

That’s probably enough for an overview basic introduction.

The idea for this post came from a suggestion by Ian Watson.

 

References and Further Reading

z/OS concepts: JES2 compared to JES3
https://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zconcepts/zconc_jes2vsjes3.htm

 
z/OS JES2 Initialization and Tuning Guide, SA32-0991-00
How to initialize JES2 in a multi-access SPOOL configuration
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.hasa300/himas.htm
 
z/OS MVS JCL Reference (z/OS 2.2)
JES2 Execution Control Statements (This is where you can see the new JOBGROUP)
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.2.0/com.ibm.zos.v2r2.ieab600/jes2zone.htm
 
z/OS MVS JCL Reference, SA23-1385-00  (z/OS 2.1)
JES2 control statements
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.ieab600/j2st.htm
 
OUTPUT JCL statement
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.ieab600/outst.htm

 
z/OS JES2 Initialization and Tuning Reference, SA32-0992-00
Parameter description for JOBCLASS(class…|STC|TSU)
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.hasa400/has2u600106.htm

 
z/OS JES2 Initialization and Tuning Guide, SA32-0991-00
Defining the data set for JES2 initialization parameters
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.hasa300/defiset.htm


Concatenating ISPF libraries


This post has hints and tips for using ISPF-specific libraries, plus how to avoid a few common problems when concatenating ISPF libraries.

Most libraries used under TSO are concatenated, including TSO/ISPF-specific ones.  So this could as well be called “Using ISPF Libraries”, but the fact is that a lot of the difficulties people have with the environment are related to concatenations.  So the emphasis is on that.  Plus somebody* asked me about it that way.

Let’s assume you’ve already read the post about Program Search Order and you understand that.  Now you want to know about the search order for libraries specific to TSO and TSO/ISPF. 

(There are quite a few links in this post, like Program Search Order just above, which refers you back to the earlier post if you missed it.  There are also a bunch of links to IBM doc in here this time.)

Some libraries are used by both TSO/ISPF and native TSO.  This includes some basic load libraries (loadlibs) containing programs in executable format (e.g., load modules), such as might be found on a STEPLIB DD statement in the TSO Logon JCL procedure. 

It also includes the CLIST and REXX libraries allocated to ddnames SYSPROC and SYSUPROC as well as the REXX-only libraries allocated to SYSEXEC and SYSUEXEC (or other ddnames designated either by you or for you at your site).

If you are not yet familiar with using the CLIST and REXX facilities under TSO, there is a previous post introducing that also, called How to Write a CLIST.

In addition to the above, ISPF has its own set of ddnames for libraries containing ISPF-specific types of data. 

As you may also already know, perhaps from the post Take Control of Your TSO/ISPF Profile(s), you have an ISPF profile library that contains your personalized ISPF settings.  The default ddname for that is ISPPROF.   

Most of the other ISPF-specific dataset types have default ddnames of the form ISP?LIB, where the question mark (?) position is replaced by a character that indicates the library type: M for messages, T for tables, S for skeletons, P for panels.  There are sometimes others, such as the loadlib ddname ISPLLIB which is checked prior to STEPLIB for fetching ISPF modules in executable format.

Beyond the ISP?LIB ddnames, you have the option of using LIBDEF to assign your own private test libraries to ddnames of the form ISP?USR, and the ISP?USR libraries will then be searched prior to their counterpart distribution versions on ISP?LIB.

ISPF Panels (screens)

After CLIST and REXX execs, the member type you are most likely to want to change is presumably panels, that is, screens.  Screens in ISPF are called Panels (go figure), and in the simplest case they reside in ISPPLIB.  If you plan to test your own new panels, or your own modified versions of distribution panels, you want to create your own panel library for use on ddname ISPPUSR.  

You want to allocate your private test panel library with mostly the same attributes as the distribution ISPPLIB, but maybe a few tweaks. 

For one thing, regardless of whether the distribution library is installed as a PDS or a PDSE, you’re probably better off if you make your own library a PDSE, that is, DSNTYPE=LIBRARY, so that you can replace members repeatedly without needing to do a compress. 

Also, you probably want to use system-determined BLKSIZE, that is, do not copy the BLKSIZE from the distribution library, just leave it blank or specify zero and let the system decide what is best.  I doubt you need to maintain backward compatibility with type 3330 disks – let your system decide what will work best for you.  

Third tweak: You’ll want to modify the disk SPACE allocation to allow an amount of disk space appropriate to your anticipated testing, and to your system; but do yourself the favor of allocating in units of cylinders (CYL, CYLS) rather than tracks (TRK, TRKS), because if it is allocated in units of cylinders then anything that comes along and releases unused space from your data set will only take away your unused space down to the nearest cylinder boundary, rather than paring it down to a track boundary.  As you know, one cylinder is equal to fifteen tracks; since a typical panel library member is relatively small, having those few extra tracks might allow you several more opportunities to type SAVE before you hit the wall.  A sample allocation job is sketched below.
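
Here is one way that allocation might look as an IEFBR14 job.  This is only a sketch, with made-up dataset and unit names; RECFM=FB and LRECL=80 match the usual distribution panel library attributes, BLKSIZE=0 requests system-determined block size, and DSNTYPE=LIBRARY makes the new library a PDSE.

//ALLOCPAN JOB  1,CLASS=A,MSGCLASS=X
//MAKE     EXEC PGM=IEFBR14
//NEWPAN   DD   DSN=MYUSER.TEST.PANELS,DISP=(NEW,CATLG),
//    DSNTYPE=LIBRARY,
//    RECFM=FB,LRECL=80,BLKSIZE=0,
//    SPACE=(CYL,(2,2)),
//    UNIT=SYSDA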

The ISPF screens (panels) are, of course, in an ISPF-specific ISPF panel format.  You can figure out a lot of the syntax just by looking at the screens in the distribution panel library(ies) and comparing the screens that are displayed with the members in the library.  As you may remember, you use the ISPF command PANELID to get ISPF to start showing the name of the currently displayed panel in the upper left corner of every screen.  Some of the syntax is pretty much beyond what you can figure out with that kind of guesswork method, though, so you’ll want to have access to the manual(s).  Check out the  Panel definition statement guide section in the ISPF Dialog Developer’s Guide.  There’s a “References and Further Reading” list at (or near) the end of this post where you can find additional links.

ISPF messages

You may have a need to issue ISPF-style messages.  You may be tempted to create your own message members.  In the short run, that is not strictly necessary.  IBM has kindly provided catch-all messages where you can supply your own message text and have it be displayed using one of IBM's own message numbers. 

You have available for your immediate use the message ISRZ001 (with variants ISRZ000 and ISRZ002) to allow you to supply the message text by assigning your message text string to a variable.  Assuming you choose to use ISRZ001, you assign your short message text to variable name ZEDSMSG – this is the text that appears in the upper right hand corner of the screen whenever an error occurs.  You assign your long message text to variable name ZEDLMSG – in general this text appears if someone presses the “HELP” key (F1 or F13) to try to get more information after seeing the short message.  After setting those variables, you call SETMSG to save them for use on the next ISPF panel to be displayed and to set the next message number to ISRZ001.  Of course some other process can still sneak in along the way back and call SETMSG again, and replace your messages, but in practice I have not yet seen that happen.

Example

To use the generic message ISRZ001 with message text you supply, you put lines similar to the following into your CLIST:

SET ZEDSMSG = &STR(Oops, you goofed)
SET ZEDLMSG = &STR(Longer explanatory message text)
ISPEXEC SETMSG MSG(ISRZ001)

If you’re using REXX rather than CLIST, the approximate equivalent of the above in REXX is:

ADDRESS ISPEXEC
zedsmsg = 'Oops, you goofed'
zedlmsg = 'Longer explanatory message text'
'SETMSG MSG(ISRZ001)'

Using SETMSG as shown in the above code snippet causes the message to appear on whatever ISPF screen is displayed next, so you don’t need to know exactly where you are, or are going to be. 

Common Problems

Compatibility of record format

With CLIST concatenations and with ISPF library concatenations in general, one problem people occasionally bring on themselves is to combine FB and VB in the same concatenation, which causes problems when the system tries to interpret all members as either FB or VB.  When libraries are concatenated, all of the libraries in the concatenation must have the same record format (RECFM).  For CLIST, REXX, and ISPF-specific libraries this can be either FB or VB, that is, either fixed-length records or variable-length records. If the libraries are FB, then they must all have the same record length as well.

Apart from load libraries, IBM distributes the ISPF libraries as FB 80 by default, that is, every line in every member is fixed length 80 bytes.  Many people prefer VB libraries because they can have longer lines, and so they reallocate copies of the distribution libraries as VB.  Whatever distribution libraries you’re using, make sure any other libraries you concatenate with them have the same format.

If you do not have all the same format libraries in a concatenation, you may not see a problem initially.  This is because all of the members being fetched happen to be coming from libraries whose format matches what the system expects.  Then one day you may (or some program you’re running behind the scenes may) try to use a member that resides in the library with the mismatched format.  The member will probably be read successfully if the BLKSIZE is compatible, but then the contents of the member will be parsed incorrectly, according to the rules for the expected record format.  Then whatever process is trying to use the member will complain about the syntax being wrong, if it gets that far.  Possibly some other error will come first.  It depends on what is actually read, and what was expected.  This happens more often than one might think, and it can be harder to figure out than you might imagine too, because (a) the error message you end up getting bears no obvious relationship to the underlying problem, and (b) it may have appeared to be running successfully for a while prior to the failure simply because it did not happen to need to fetch anything from the incompatible library.

So: Be sure all the datasets in your concatenation have the same record format.  If the format is FB (fixed length), make sure they all have the same record length as well.
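
For example, an ISPPLIB concatenation in a TSO logon procedure might look something like the sketch below (the dataset names are made up); the point is that every library in the concatenation is FB with the same record length.

//ISPPLIB  DD  DISP=SHR,DSN=MYUSER.TEST.PANELS      MY PANELS, FB 80
//         DD  DISP=SHR,DSN=ISP.SISPPENU            DISTRIBUTION PANELS, FB 80
//         DD  DISP=SHR,DSN=SYS2.LOCAL.PANELS       SITE PANELS, FB 80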

Fifteen-Library Concatenation Limit 

Warning: You can use a maximum of 15 libraries concatenated together onto any one of the ISP?USR ddnames (and perhaps some other ISPF ddnames you might come across depending on your luck).  If you concatenate more than 15 libraries, you will not, in general, get an error message about it; but only the first 15 will be used, and any others will be ignored.  I’ve seen this 15-library concatenation limit for other ddnames in TSO/ISPF previously, but as of z/OS 2.1 the only specific warning about it I’ve found mentions only the ISP?USR concatenations; and I have not done exhaustive experiments to be able to say otherwise.  It is a limit I’ve run into before and we can expect to see again, so remember to be careful when you concatenate numerous libraries on one ddname under TSO/ISPF.  For some cases, 15 is a magic limit.  Okay, not magic, but it may as well be.

Replacing members in active libraries

There are also some problems that can happen when you replace members in ISPF libraries while ISPF is active, for example if you want to modify a message or a panel and then test the modification.

If you replace a member in an ISPF panel, message, or skeleton library while ISPF is running, do not expect ISPF to look for the replacement.  If ISPF finds that it has an existing copy of the panel, message, or skeleton member already in storage, ISPF will keep reusing that old copy, and it will not automatically fetch your updated version.  For panels and messages, however, you can tell ISPF not to act that way: You do this by specifying the TEST parameter when starting ISPF.  Rather than just saying ISPF to get into ISPF, instead say: ISPF TEST

If the ISPF startup is done by a CLIST or REXX EXEC member, then the TEST parameter has to be specified within the CLIST or REXX member, on the line where the ISPF command itself is started. (You cannot just add the word TEST when invoking the CLIST – unless of course the CLIST has been written to allow that as an optional parameter, which you might think would be commonplace but it’s not.) 

Caveat / Warning: Of course there is some overhead added when you use TEST.

This trick of using the TEST parameter may not work for changes to ISPF skeleton members.  The skeleton members are the ones that contain bits and pieces of model JCL (in, of course, ISPF’s own unique skeleton syntax).  Typically skeletons are used by programs when generating those jobs that seem to be created spontaneously on request under ISPF.  IBM suggests you test skeleton changes by allocating the skeleton test library via LIBDEF; then, after updating any skeleton member(s), they suggest you reissue LIBDEFs as needed to force an OPEN for the ISPSLIB ddname (“Allocating required ISPF libraries” in the ISPF User’s Guide).  Yes, you would really be using ddname ISPSUSR one would hope, but the logic is the same.

Active Libraries acquiring new disk extents

Another vulnerability particular to online systems such as TSO/ISPF is similar but worse.  If any library within a concatenation acquires a new disk SPACE extent while the concatenation is open, that new area of space is invisible to the processes that fetch members for use by ISPF functions.

That problem arises because, for efficiency, the actual disk locations of data sets are checked only at OPEN time, and all of those disk addresses are remembered for later reference whenever members are to be fetched.  Fetching runs much faster that way.  But it means that any updated member placed into a subsequently obtained disk space extent cannot be found when an ISPF function wants to fetch it.  Hence the member you've updated becomes invisible to ISPF for the duration of that execution.  And this is true even if you’ve specified the TEST parameter to start ISPF.  To be able to access the new extent you have to cause OPEN for the ddname to happen again.  With ISPF libraries this means you need to exit ISPF and then start ISPF over again.  You don’t actually need to logoff, just go back to READY mode and then start ISPF again. 

Those are the main things you can trip over when concatenating ISPF libraries. 

End of section on common problems.

End of article on Concatenating ISPF libraries.

* Thanks to Bob Banik for the idea for this post.   . . .   Not sure this is what he had in mind exactly of course . . .   :-)

References, further reading

 z/OS ISPF User's Guide Vol I, SC19-3627-00 
Allocating required ISPF libraries
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.f54ug00/alreqli.htm


z/OS TSO/E REXX Reference, SA32-0972-00 
Using SYSPROC and SYSEXEC for REXX execs
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.ikja300/dup0003.htm


z/OS ISPF Dialog Developer's Guide and Reference, SC19-3619-00

ISPF test and trace modes 
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.f54dg00/ispdg39.htm
Parameters
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.f54dg00/ispdg28.htm

Allocating CLIST, REXX, and program libraries  (discusses ISPLLIB  ddname)
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.f54ug00/alrexli.htm

Panel definition statement guide
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.f54dg00/pnlgide.htm

 

ISPF Dialog Developer’s Guide
z/OS ISPF Dialog Developer's Guide and Reference
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.f54dg00/toc.htm

 

z/OS V2R2 ISPF Services Guide 
Application data element search order
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.2.0/com.ibm.zos.v2r2.f54sg00/aso.htm

z/OS ISPF Services Guide, SC19-3626-00
LIBDEF—allocate application libraries
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.f54sg00/libdef.htm

 

z/OS ISPF Edit and Edit Macros, SC19-3621-00
Edit macro messages
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.f54em00/titmthe.htm

 

z/OS ISPF User's Guide Vol I, SC19-3627-00
PANELID
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.f54ug00/ispug97.htm

 

IBM Data Set Commander for z/OS User's Guide Ver 8.1
ISPF dialog development enhancements
https://www.ibm.com/support/knowledgecenter/en/SS2N8G_8.1.0/com.ibm.ipt.doc_8.1/usersguide/iqiugdiagdevenh.htm

 

CCHHR and EAV


Addresses on disk.

An address on disk is an odd sort of thing.

Not a number like a memory pointer.

More like a set of co-ordinates in three dimensional space.

Ordinary computer memory is mapped out in the simplest way, starting at zero, with each additional memory location having an address that is the plus-one of the location just before it, until the allowable maximum address is reached; the allowable maximum pointer address is limited by the number of digits available to express the address.

Disks — external storage devices — are addressed differently.

Since a computer system can have multiple disks accessible, each disk unit has its own unit address relative to the system.  Each unit address is required to be unique.  This is sort of like disks attached to a PC being assigned unique letters like C, D, E, F, and so on; except the mainframe can have a lot more disks attached, and it uses multi-character addresses expressed as hex numbers rather than using letters of the alphabet.  That hex number is called the unit address of the disk.

Addresses on the disk volume itself are mapped in three-dimensional space.  The position of each record on any disk is identified by Cylinder, Head, and Record number, similar to X, Y, and Z co-ordinates, except that they're called CC, HH, and R instead of X, Y, and Z. A track on disk is a circle.  A cylinder is a set of 15 tracks that are positioned as if stacked on top of each other.  You can see how 15 circles stacked up would form a cylinder, right?  Hence the name cylinder. 

Head, in this context, equates to Track.  The physical mechanism that reads and writes data is called a read/write head, and there are 15 read/write heads for each disk, one head for each possible track within a cylinder.  All fifteen heads move together, rather like the tines of a 15-pronged fork being moved back and forth. To access tracks in a different cylinder, the heads move in or out to position to that other cylinder.  So just 15 read/write heads can read and write data on all the cylinders just by moving back and forth.  

That's the model, anyway.  And that's how the original disks were actually constructed.  Now the hardware implementation varies, and any given disk might not look at all like the model.  A disk today could be a bunch of PC flash drives rigged up to emulate the model of a traditional disk.  But regardless of what any actual disk might look like physically now, the original disk model was the basis of the design for the method of addressing data records on disk.  In the model, a disk is composed of a large number of concentric cylinders, with each cylinder being composed of 15 individual tracks, and each track containing some number of records.

Record here means physical record, what we normally call a block of data (as in block size).  A physical record — a block — is usually composed of multiple logical records (logical records are what we normally think of as records conceptually and in everyday speech).  But a logical record is not a real physical thing, it is just an imaginary construct implemented in software.  If you have a physical record — a block — of 800 bytes of data, your program can treat that as if it consists of ten 80-byte records, but you can just as easily treat it as five 160-byte records if you prefer, or one 800-byte record; the logical record has no real physical existence.  All reading and writing is done with blocks of data, aka physical records.  The position of any given block of data is identified by its CCHHR, that is, its cylinder, head, and record number (where head means track, and record means physical record).  

The smallest size a data set can be is one track.  A track is never shared between multiple data sets.

The CCHHR represents 5 bytes, not 5 hex digits.  You have two bytes (a halfword) for the cylinder number, two bytes for the head (track) number, and one byte for the record (block) number within the track.

A "word", on the IBM mainframe, is 4 bytes, or 4 character positions.  Each byte has 8 bits, in terms of zeroes and ones, but it is usually viewed in terms of hexadecimal; In hexadecimal a byte is expressed as two hex digits.  A halfword is obviously 2 bytes, which is the size of a "small integer".  (4 bytes being a long integer, the kind of number most often used; but halfword arithmetic is also very commonly used, and runs a little faster.)  A two-byte small integer can express a number up to 32767 if signed or 65535 if unsigned.  CC and HH are both halfwords.  

Interestingly, a halfword is also used for BLKSIZE (this is a digression), but the largest block size for an IBM data set traditionally is 32760, not 32767, simply because the MVS operating system, like MVT before it, was written using 32760 as the maximum BLKSIZE.  Lately there are cases where values up to 65535 are allowed, using LBI (large block interface) and what-not, but mostly the limit is still 32760.  But watch this space; 65535 is on its way in; obviously the number need not allow for negative values, that is, it need not be signed.  End of digression on BLKSIZE.

There can be any number of concentric cylinders, but using the traditional CCHHR method you can only address a cylinder number that can be represented in a two-byte unsigned integer.  That would allow cylinder numbers up to 65,535, but in fact the largest ordinary (non-EAV) IBM disks now have 65,520 cylinders, numbered 0 through 65,519.  That is the CC co-ordinate, the basis of the CC in CCHHR.

But wait, you say, you've got an entire 2 bytes — a halfword integer — to express the track number within the cylinder, yet there are always 15 tracks in a cylinder; one byte would be enough.  In fact, even  half a byte could be used to count to fifteen, which is hex F.  Right.  You got it.  What do we guess must eventually happen here? 

People want bigger disks so they can have bigger data sets, and more of them.  Big data.  You know how many customers the Bank of China has these days?  No, I don't either, but it's a lot, and that means they need big data sets.  And they aren't the only ones who want that.  I really don't want to think about guessing how much data the FBI must store.  What we do know is that there is a big – and growing – demand for gigantic data sets.

So inevitably the unused extra byte in HH  must be poached and turned into an adjunct C.  Thus is born the addressing scheme for the extended area on EAV disks (EAV = Extended Address Volumes).  So, three bytes for C, one byte for H ?  Well, no, IBM decided to leave only HALF of a byte — four bits — for H.  (As you noticed earlier, one hex digit — half of a byte — is enough to count to fifteen, which is hex F.) So IBM took 12 bits away from HH for extending the cylinder number.   Big data.  Big.

And you yourself would not care overly about EAV, frankly, except that (a) you (probably) need to change your JCL to use it, and (b) there are restrictions on it, plus (c) those restrictions keep changing, and besides that (d) people are saying your company intends converting entirely to EAV disks eventually.

Okay, so what is this EAV thing, and what do you do about it ?

EAV means Extended Address Volume, which means bigger disks than were previously possible, with more cylinders.  The first part of an EAV disk is laid out just like any ordinary disk, using the traditional CCHHR addressing.  So that can be used with no change to your programs or JCL.

In the extended area, cylinders 65,520 and above, the CCHH is no longer a simple CCHH.

The first two bytes (sixteen bits) contain the lower part of the cylinder number, which can go as high as 65535.  The next twelve bits — one and a half bytes taken from what was previously part of HH — contain the rest of the cylinder number, so to read the whole thing as a number you would have to take those twelve bits and put them to the left of the first two bytes. The remaining four bits — the remaining half of a byte out of what was once HH — contains the track number within the cylinder, which can go as high as fifteen.

Says IBM (in z/OS DFSMS Using Data Sets):

     A track address is a 32-bit number that identifies each track
     within a volume. The address is in the format hexadecimal CCCCcccH.

        CCCC is the low order 16-bits of the cylinder number.

        ccc is the high order 12-bits of the cylinder number.

        H is the four-bit track number.

End of quote from IBM manual.

The portion of the disk that requires the new format of CCHH is called extended addressing space (EAS), and also called cylinder-managed space.  Cylinder-managed space starts at cylinder 65520.

Of course, for any cylinder number below 65,536 (that is, any cylinder number that fits in the original 16 bits), those extra 12 bits are always zero, so you can view the layout of the CCHH the old way or the new way there, it makes no difference.
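
Here is a quick worked example of the new layout, using a made-up address: cylinder 70,000, track 3.

     70,000 in hex is X'11170'
     CCCC (low-order 16 bits of the cylinder number) = X'1170'
     ccc (high-order 12 bits of the cylinder number) = X'001'
     H (the four-bit track number) = X'3'
     So the 32-bit track address is X'11700013'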

Within the extended addressing area, the EAS, the cylinder-managed space, you cannot allocate individual tracks.  Space in that area is always assigned in Cylinders, or rather in chunks of 21 cylinders at a time.  The smallest data set in that area is 21 cylinders.  The 21-cylinder chunk is called the "multicylinder unit".

If you code a SPACE request that is not a multiple of 21 cylinders (for a data set that is to reside in the extended area), the system will automatically round the number up to the next multiple of 21 cylinders.  For example, a request for 100 cylinders is treated as 105.

As of this writing, most types of data sets are allowed within cylinder-managed space, including PDS and PDSE libraries, most VSAM, sequential data sets including large format (DSNTYPE=LARGE), BDAM, and zFS.  This also depends on the level of your z/OS system, with more data set types being supported in newer releases.

However the VTOC cannot be in the extended area, and neither can system page data sets, HFS files, or VSAM files that have IMBED or KEYRANGE specified.  Also VSAM files must have a Control Area (CA) size or Minimum Allocation Unit (MAU) compatible with the restriction that space is going to be allocated in chunks of 21 cylinders at a time.  Minor limitations.

Specify EATTR=OPT in your JCL when creating a new data set that can reside in the extended area.   EATTR stands for Extended ATTRibutes.  OPT means optional.  The only other valid value for EATTR is NO, and NO is the default if you don't specify EATTR at all.

The other EAV-related JCL you can specify on a DD statement is either EXTPREF or EXTREQ as part of the DSNTYPE.  When you specify  EXTPREF it means you prefer that the data set go into the extended area; EXTREQ means you require it to go there.

Example

Allocate a new data set in the extended addressing area

//MYJOB  JOB  1,CLASS=A,MSGCLASS=X
//BR14 EXEC PGM=IEFBR14
//DD1 DD DISP=(,CATLG),SPACE=(CYL,(2100,2100)),
//   EATTR=OPT,
//   DSNTYPE=EXTREQ,
//   UNIT=3390,VOL=SER=EAVVOL,
//   DSN=&SYSUID..BIG.DATASET,
//   DCB=(LRECL=X,DSORG=PS,RECFM=VBS)

 

Addendum 1 Feb 2017: BLKSIZE in Cylinder-managed Space

This was mentioned in a previous post on BLKSIZE, but it is relevant to EAV and bears repeating here.  If you are going to take advantage of the extended address area, the EAS, on an EAV disk, you should use system-determined BLKSIZE, that is, either specify no BLKSIZE at all for the data set or specify BLKSIZE=0, signifying that you want the system to figure out the best value of BLKSIZE for the data set.

Why? Because in the cylinder managed area of the disk the system needs an extra 32 bytes for each block, which it uses for control information. Hence the optimal BLKSIZE for your Data Set will be slightly smaller when the data set resides in the extended area.  The 32 byte chunk of control information does not appear within your data.  You do not see it.  But it takes up space on disk, as a 32-byte suffix after each block.

You could end up using twice as much disk space if you choose a poor BLKSIZE, with about half the disk space being wasted.  That is true because a track must contain an integral number of blocks, for example one or two blocks.  If you think you can fit exactly two blocks on each track, but the system grabs 32 bytes for control information for each block, then there will be not quite enough room on the track for a second block.  Hence the rest of the track will be wasted, and this will be repeated for every track, approximately doubling the size of your data set.  

On the other hand, if you just let the system decide what BLKSIZE to use, it generally calculates a number that allows two blocks per track. 
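
A quick worked example with standard 3390 track geometry: the largest block that can fit twice on a 3390 track is 27,998 bytes, so system-determined BLKSIZE in the ordinary track-managed area typically comes out at 27,998 or a bit less (rounded down to a multiple of the record length).  In the cylinder-managed area the system appends its 32-byte suffix, so a 27,998-byte block effectively occupies 28,030 bytes, only one such block fits per track, and roughly half of each track would be wasted.  Let the system choose and it allows for the suffix, picking 27,966 (27,998 minus 32) as the upper limit instead, so two blocks per track fit again.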

And when you use system-determined BLKSIZE  — when you just leave it to the system to decide the BLKSIZE — you get a bonus; if the system migrates your data set, and the data set happens to land on the lower part of a disk, outside the extended area, then if you have used system-determined BLKSIZE, that is, BLKSIZE=0 or unspecified, the system will automatically recalculate the best BLKSIZE when the Data Set is moved.  If the data set is later moved back into the cylinder-managed EAS area, the BLKSIZE will again be automatically recalculated and the data reblocked.

If in the future IBM releases some new sort of disk with a different track length, and your company acquires a lot of the new disks and adds them to the same disk storage pool you're using now, the same consideration applies: If system-determined BLKSIZE is in effect, the best BLKSIZE will be calculated automatically and the data will be reblocked automatically when the system moves the data set to the different device type.

Yes, it is possible for a data set to reside partly in track-managed space (the lower part of the disk) and partly in cylinder-managed space (the EAS, extended address, high part of the disk), per the IBM document.  

You should generally use system-determined BLKSIZE anyway.  But if you’re using EAV disks, it becomes more important to do so because of the invisible 32-byte suffix the system adds when your data set resides in the extended area.

[End of Addendum on BLKSIZE]

References, further reading

IBM on EAV

z/OS DFSMS Using Data Sets
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.idad400/eav.htm

JCL for EAV
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.2.0/com.ibm.zos.v2r2.ieab600/xddeattr.htm
http://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.ieab600/iea3b6_Subparameter_definition18.htm

Disk types and sizes
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.idad500/tcd.htm

a SHARE presentation on EAV
https://share.confex.com/share/124/webprogram/Handout/Session17109/SHARE_Seattle_Session%2017109_How%20to%20on%20EAV%20Planning%20and%20Best%20Practices.pdf

EAV reference – IBM manual
z/OS 2.1.0 =>
z/OS DFSMS =>
z/OS DFSMS Using Data Sets =>
All Data Sets => Allocating Space on Direct Access Volumes => Extended Address Volumes
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.idad400/eav.htm

IBM manual on storage administration (for systems programmers)
z/OS DFSMSdfp Storage Administration
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.idas200/toc.htm

The 32-byte per block overhead in the extended area of EAV disk (IBM manual):
https://www.ibm.com/support/knowledgecenter/SSLTBW_1.13.0/com.ibm.zos.r13.idad400/blksz.htm

z/OS DFSMS Using Data Sets ==>
Non-VSAM Access to Data Sets and UNIX Files ==>
Specifying and Initializing Data Control Blocks ==>
Selecting Data Set Options ==>
Block Size (BLKSIZE)
“Extended-format data sets: In an extended-format data set, the system adds a 32-byte suffix to each block, which your program does not see. This suffix does not appear in your buffers. Do not include the length of this suffix in the BLKSIZE or BUFL values.”

IEFBR14

I E F B R 1 4   Mysteries of IEFBR14 Revealed —

People ask what IEFBR14 does.  If you laughed at that, move along to some other reading material.

There, now we’re alone to investigate the mysteries of IEFBR14.  You’re new to the mainframe perhaps. 

If you have read the previous introductory JCL post, you know that the only thing you can ask the mainframe to do for you is run a program (as specified by EXEC in JCL), and when that happens the system does various setup before running the program, plus various cleanup after the program.  Most of that setup and cleanup is orchestrated by what you specify on the DD statements (DD means Data Definition).   So when you run a program, any program, you have quite a bit of control over the actions the system will take on your behalf,  processing your DD statements.

When the program IEFBR14 runs, IEFBR14 itself does nothing, but when you tell the system to run a program (any program), that acts as a trigger to get the system to process any DD statements you include following the EXEC statement.  So you can use that fact to create and delete datasets just with JCL.  

For example, you might specify DISP=(NEW,CATLG) on a DD statement for a dataset if you want the system to create the dataset for your program just before the program runs (hence NEW), and you want the system to save the new dataset when the program ends, and you also want the system to create a catalog entry so you can access the dataset again just by its name, triggering the system to look the name up in the catalog (hence CATLG).

So all you need to do to create a dataset is to put in a DD statement after the EXEC for IEFBR14.  On the DD statement you specify DSN= whatever name you want the new dataset to have, and you specify DISP=(NEW,CATLG) as just mentioned.  Rather than specify a lot of other information, you can pick an existing dataset you like and just model the parameters for this new one based on that, saying LIKE=selected.model.dataset.  The DDname on the DD statement can be anything syntactically valid, that is, not more than 8 characters long, starting with a letter or acceptable symbol, and containing only letters, numbers, and the aforementioned acceptable symbols, which are usually #, @, and a currency symbol such as $.

Example:
 
//MYJOB  JOB  1,SAMPLE,MSGCLASS=X,CLASS=A
//BR14  EXEC  PGM=IEFBR14
//NEWDATA  DD  DSN=MY.NEW.DATA,DISP=(NEW,CATLG),
//    LIKE=MY.MODEL.DATASET

So the system creates the dataset.  Your program does not create it.  Your program might put data into it (or not), and the system doesn't care about the data.  The system manages the dataset itself based on what you specify in the DD statement in the JCL – or if a program really needs to create a dataset then the program builds the equivalent of a DD statement internally and invokes “dynamic allocation” to get the system to process the specifications the same as if there had been a DD statement in JCL.

In such a case the system processes that “dynamic allocation” information exactly the same way it would have processed it if you had supplied the information on a DD statement present in the JCL.

To delete an existing dataset you no longer want, you can specify DISP=(OLD,DELETE) on the DD statement, and the system will delete the dataset.  This is similar to the way it would delete a dataset if you issued the DELETE command under TSO, or using  IDCAMS, but there are a couple of important nuances you need to know about  deleting datasets.  

One is that it is a big mistake if you try to delete a member of a dataset using JCL.  The DISP you specify applies to the entire dataset, even if you put a member name in parentheses.    Never say DELETE for a Member in JCL; you will lose the entire library.

The second thing you need to know about deleting a dataset is that, for ordinary cataloged data sets, saying DELETE causes the deletion of both the dataset and the catalog entry that points to it.  That's fine and works for most cases, but sometimes you might have just a dataset with no catalog entry, and other times you might have a catalog entry pointing to a dataset that isn't really there anymore.
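
For example, a one-step IEFBR14 job to delete a cataloged dataset might look like this (the dataset name is made up):

//DELJOB   JOB  1,CLASS=A,MSGCLASS=X
//DELSTEP  EXEC PGM=IEFBR14
//DOOMED   DD   DSN=MY.OLD.UNWANTED.DATA,DISP=(OLD,DELETE)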

If you have a dataset that is not cataloged, then you need to tell the system where it is.  You do that by specifying both the UNIT and VOL parameters.   UNIT identifies the type of device that holds the dataset, something like disk or tape, which you might be able to specify just by saying UNIT=3390 (for disk) or UNIT=TAPE.   VOL is short for VOLUME, and identifies which specific disk or tape contains your dataset.  So UNIT is a class of things, and VOL, or VOLUME, is a specific item within that class.  

It turns out that coding UNIT isn't usually as simple as saying UNIT=DISK.  The people responsible for setting up your system can name the various units anything they want.  UNIT=TEST and UNIT=PROD are common choices.  The system, as it comes from IBM, has UNIT=SYSDA and UNIT=SYSALLDA as default disk unit names, but some places change those defaults or restrict their use.  If you have access to any JCL that created or used the dataset, it would likely contain the correct UNIT name — because if there is no catalog entry for an existing dataset, then every reference to the dataset has to specify UNIT.

When you first create a dataset, you are required to supply a UNIT type, but you are not required to specify a VOLUME — the system will select an available VOLUME from within the UNIT class you specified.  

If you are dealing with a dataset that was created with just the UNIT specified, and DISP=(NEW,KEEP), then you need to find the output from the job that created the dataset.  The JCL part of the listing will show what volume the system selected for the dataset.

To code the VOL parameter in your JCL, typically you say VOL=SER=xxxxxx, where xxxxxx is the name of the volume.  There are various alternatives to this way of coding it.  SER is short for Serial Number.  The names of tapes used to be 6-digit numbers in most places, for whatever reason — possibly to make it easy to avoid duplication.  Besides Serial Number, the volume parameter has other possible subparameters too, but you don't care right now.

If you don't know what UNIT name to use, but you do know the volume where the dataset is, then go into  something like TSO/ISPF 3.4 and do a listing of the volume.  Select any other dataset on the volume and request the equivalent of ISPF 3.2 dataset information. Whatever UNIT it says, that should work for every dataset on the volume.

Note that it is possible for the same volume to be a member of more than one UNIT class.  It might belong to 3390, SYSDA, and TEST for example.  In that case it doesn't matter which UNIT name you specify for the purpose of finding (and deleting) the dataset.  The only point of putting UNIT into your JCL for an existing dataset is to help the system find it.

Note that it is possible to have multiple datasets with exactly the same name on different volumes and in different UNIT classes.  An easy example to visualize is having a disk dataset that you copy to a tape,  giving it the same DSN, and then later you copy it again to a different tape.  

Another, more perverse, example occurs when someone creates a new dataset they intend to keep, but mistakenly specifies DISP=(NEW,KEEP) rather than DISP=(NEW,CATLG).  Later they can't find the dataset, because no catalog entry was created. Rather than figure out what happened, they run the same job again.  If the system puts the second copy of the dataset onto a different disk volume, they now have two uncataloged copies of it.  If they keep doing that, at some point the system will select a volume that already has a copy of the dataset, and then the job will fail with an error message saying a duplicate name exists on the volume.  To clean up something like that, you need to find every uncataloged copy of the dataset  and delete it, specifying VOL and UNIT along with DISP=(OLD,DELETE) — you can use IEFBR14 for that.
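
A sketch of that kind of cleanup, with made-up dataset, unit, and volume names: one DD statement per uncataloged copy, each pointing at the volume where that copy lives.

//CLEANUP  EXEC PGM=IEFBR14
//UNCAT1   DD  DSN=MY.LOST.DATA,DISP=(OLD,DELETE),
//    UNIT=SYSDA,VOL=SER=TST001
//UNCAT2   DD  DSN=MY.LOST.DATA,DISP=(OLD,DELETE),
//    UNIT=SYSDA,VOL=SER=TST002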

On many systems, they have it set up so that a disk housekeeping program runs every night (or on some other schedule), and deletes all uncataloged disk datasets.  So if you find yourself in possession of a set of identically named uncataloged disk datasets, and you don't want to look for all the volume names, you might get lucky if you wait for a possible overnight housekeeping utility to run automatically and delete them for you.

One other point on those uncataloged datasets.  You also have the option of creating a catalog entry for a dataset, rather than deleting it, if you want to keep it around.  To do that, you specify UNIT and VOL, as just discussed, and DISP=(OLD,CATLG) — but you can only have one cataloged copy of any dataset name.

So, back to the point we touched on earlier, about how a program can create the equivalent of a DD statement internally.

It is also possible for part of the information to be specified on a DD statement in JCL, and part of the information to be specified in the program.  In that case the program needs to specify its DD modifications before the OPEN for the file.  The system then merges what the program specified with what the JCL specified, and if there’s a difference then the program takes precedence.  So the program can change what the JCL said, and the program wins any disagreements – but the program has to have its say BEFORE the OPEN for the file.

Note that IEFBR14 does not OPEN any files.  The system does the setup and cleanup involved in allocation processing, and only that. Various JCL specifications can be used to indicate processes that occur only at OPEN or CLOSE of a file.  Releasing unused disk space is an example of that.  If you code the RLSE subparameter in your SPACE parameter on a DD statement for IEFBR14,  that subparameter is ignored; no space is released.

This general discussion of DD parameters that can be specified within a program is pretty much irrelevant to IEFBR14, except to note that IEFBR14 does no file handling of any kind, so it will never override anything in your JCL (as some other program might).  So if you use IEFBR14 to set up a dataset, and the dataset does not come out the way you wanted, that is not due to anything IEFBR14 did. Because IEFBR14 does nothing.

If some item of information about a dataset is not specified in JCL, and the program does not specify it either, then if it is a dataset that already exists, the system looks to see if the missing item of information might be specified in some existing saved information relevant to the dataset, such as the Catalog entry or the Dataset Label.  Things like record length, record format, and allowable space allocation are generally kept in the Dataset Label for ordinary datasets.  For VSAM datasets most of the information is kept in the Catalog.  The system looks for the information, and if the information is found, it merges it together with the information obtained from the program and the JCL to form a composite picture of the dataset.

What if the system needs to know something about a dataset – record length, for example (LRECL), or record format (RECFM) or one of those other parameters – and after looking in all three places just named, the system has not found the information anyplace – what happens?  Default values apply.  Ha ha, because you don’t usually like the defaults, with the single glowing exception of BLKSIZE where the default is always the best value possible.  The system can calculate BLKSIZE really well.  Other defaults – you don’t want to know.  So specify everything except BLKSIZE.  You don’t need to specify everything explicitly though, you can use the LIKE parameter to specify a model.  Then the system will go look at the model dataset you’ve specified and copy all the attributes it can from there, rather than using the very lame system defaults.  So you specify whatever parameters you want to specify, and you also say LIKE=My.Favorite.Existing.Dataset to tell the system to copy everything it can from that dataset’s settings before applying the <<shudder>> system defaults.  Note: The system will not copy BLKSIZE from the model you specify in LIKE.  No, the system knows where its strengths and weaknesses lie.  It recalculates the BLKSIZE unless you explicitly specify some definite numeric you want for that.

Also note that any particular system, such as the one you're using, can be set up with Data Classes and other system-specific definitions of things that affect defaults for new datasets.  Something could be set up, for example, stating that by default a new dataset with a certain pattern of DSN would have certain attributes.  If so, that would generally take precedence over any general IBM-supplied system defaults, but usually stuff you specify explicitly will override any site-specific definitions of attributes — unless, of course, the ones on your system were purposely set up designating they couldn't be overridden.

Okay, so then, does IEFBR14 do some magical internal specifications?  Nope.  IEFBR14 does absolutely nothing.  You do the work yourself by specifying everything you want on the DD statements.  IEFBR14 never opens the files, modifies nothing, does nothing.  You can code your own program equivalent to IEFBR14 by writing no executable program statements except RETURN. Or, for that matter, no statements at all, since most compilers will supply a missing RETURN at the end for you.  Yes, that is IEFBR14.  Laziest program there is: Lets the system do all the work.

You can use any ddnames you want when you run IEFBR14.  The files are never opened.  The system does the setup prior to running the program, including creating any new files you’ve requested.  The program runs, ignoring everything, doing nothing.  When it ends, the system does the cleanup, including deleting any datasets where you specified DELETE as the second subparameter of DISP.

So that’s how it works.  Often people think IEFBR14 does some magic, but it doesn’t.  It relies on the system to go through the normal setup and cleanup. 

You can add extra DD statements into any other program you might happen to be running, and the system will do the same setup and cleanup.  Of course you’d need to be sure the program you pick doesn’t happen to use the ddnames you choose – you wouldn’t want to use ddnames like SYSUT1 or SYSUT2 with most IBM-supplied programs, for example. 

Ddnames SALLY, JOE, and TOMMY should work just fine though.  The IEFBR14 program doesn’t look at them.  The system doesn’t know and doesn’t care whether the program uses the datasets. 

People use IEFBR14 for convenience because they know for sure that the program will not tamper with any of the file specifications. 

How did IEFBR14 get its name?  Well, the IEF prefix is a common prefix IBM uses for system-level programs it supplies.  BR14 is based on the Assembler Language instruction : BR 14
which means Branch (BR for branch) to the address contained in register 14 — the pointer that holds the return address.  So, Branch to the Return Address, that is, Return to Caller.

Correction 23 November:  Sorry, it's parsed BR 14 rather than B R14.    The notation BR in Assembler Language means Branch to the address contained in the specified Register, as distinct from B, which branches to an address specified some other way, for example as a label within the program.

That’s it, secrets of IEFBR14 revealed.

Or,  JCL Basic Concepts part II 

Packed, Zoned, Binary Math

Mainframe Math: Packed, Zoned, Binary Numbers —

Why are there different kinds of numbers?  And how are they different, exactly? Numeric formats on the mainframe (simplified) . . .

The mainframe can do two basic kinds of math: Decimal and Binary.  Hence the machine recognizes two basic numeric formats: Decimal and Binary.  It has separate machine instructions for each.  Adding two binary integers is a different machine operation from adding two decimal integers.  If you tell your computer program to add a binary number to a packed number, the compiler generates machine code that first converts at least one of the numbers, and then when it has two numbers of the same type it adds them for you.

There is also a displayable, printable type, called Zoned Decimal, which in its unsigned integer form is identical to character format.  You cannot do math with zoned decimal numbers.  If you code a statement in a high level language telling the computer to add two zoned decimal numbers, it might generate an error, but otherwise it works because the compiler generates machine instructions that will first convert the two numbers into another type, and then do the math with the converted copies. 

Within Decimal and Binary there are sub-types, such as double-precision and floating point.  These exist mainly to enable representing, and doing math with, very large numbers.  Within displayable numbers there are many possible variations of formatting.  Of course.  

For this article, we are going to skip all the variations except the three most common and most basic: Binary (also called Hexadecimal, or Hex), Decimal (called Packed Decimal, or just Packed), and Zoned Decimal (the displayable, printable representation, sometimes also called “Picture”).  To start with we’ll focus on integers. 

Generally, packed decimal integers used for mathematical operations can have up to 31 significant digits, but there are limitations: a multiplier or divisor is limited to 15 digits, and, when doing division, the sum of the lengths of the quotient and remainder cannot exceed 31 digits. For practical business purposes, these limits are generally adequate in most countries.

Binary (hex) numbers come in two basic sizes, 4 bytes (a full word, sometimes called a long integer), and 2 bytes (a half word, sometimes called a short integer). 

A signed binary integer in a 4 byte field can hold a value up to 2,147,483,647.  

The leftmost bit of the leftmost byte is the sign bit.  If the sign bit is zero that means the number is positive.  If the sign bit is one the number is negative.  This is why a full word signed binary integer is sometimes called a 31-bit integer.  Four bytes with 8 bits each should be 32 bits, right?  But no, the number part is only 31 bits, because one of the bits is used for the sign.

A 2-byte (half word) integer can hold a value up to 32,767 if it is defined as a signed integer, or 65,535 if unsigned.  

The sign bit in a 2-byte binary integer is still the leftmost bit, but since there are only two bytes, the sign bit is the leftmost bit of the left-hand byte.  

Consider the case where you are not using integers, but rather you have an implied decimal point; For example, you’re dealing with dollars and cents. Now you have two positions to the right of an implied (imaginary) decimal point. With two digits used for cents, the maximum number of dollars you can represent is divided by a hundred: $32,767 is no longer possible in a half word; the new limit becomes $327.67, so half word binary math won’t be much use if you’re doing accounting.  $21,474,836.47, that is, twenty-one million and some, would be the limit for full word binary.  Such a choice might be considered to demonstrate pessimism or lack of foresight, or both.  You probably want to choose decimal representations for accounting programs, because decimal representation lets you use larger fields, and hence bigger numbers.

Half word integers are often used for things like loop counters and other smallish arithmetic, because the machine instructions that do half word arithmetic run pretty fast compared to other math.  For the most part binary arithmetic is easier and runs faster than decimal arithmetic.  Also, machine addresses (pointers) are full word binary (hex) integers, so any operation that calculates an offset from an address is quicker if the offset is also a binary integer.  Plus you can fit a bigger number into a smaller field using binary.  However, if you need to do calculations that use reasonably large numbers, for example hundreds of billions in accounting calculations, then you want to use decimal variables and do decimal math.

How are these different types of numbers represented internally – what do they look like?

An unsigned packed decimal number is composed entirely of the hex digits zero through nine.  Two such digits fit in one byte.  So a byte containing hex’12’ would represent unsigned packed decimal twelve.  Two bytes containing hex’9999’ would represent unsigned packed decimal nine thousand nine hundred and ninety-nine. 

How is binary different?  You don’t stop counting at nine.  You get to use A for ten, B for eleven, C for twelve, D for thirteen, E for fourteen, and F for fifteen.  It’s base sixteen math (rather than the base ten math that we grew up with).  So in base ten math, when you run out of digits at 9, and you expand leftward into a two-digit number, you write 10 and call it ten.  With base sixteen math, you don’t run out of available digits until you hit F for fifteen; so then when you expand leftward into a two-digit number, you write 10 and call it sixteen.  By the time you get to x’FF’ you’ve got the equivalent of 255 in your two-digit byte, rather than 99. 

Why, you may ask, did they do this?  And in fact, why, having done it, did they stop at F?  Actually it’s pretty simple.  Remember that word binary – it really means you’re dealing in bits, and bits can only be zero (off) or one (on).  On IBM mainframe type machines, there happen to be 8 bits in a byte.  Each half byte, then – each of our digits – has 4 bits in it.  That’s just how the hardware is made. 

Yes, a half byte is also named a nibble, but I've never heard even one person actually call it that.  People I've known in reality either say "half byte", or they say "one hex digit".  

So we can all see that 0000 should represent zero, and 0001 should be one.  Then what?  Well, 0010 means two; The highest digit you have is 1, and then you have to expand toward the left.  You have to "carry", just like in regular math, except you hit the maximum at 1 rather than 9.  This is base 2 math. 

To get three you add one+two, giving you 0011 for three.  Hey, all we have at this point is bit switches, kind of like doing math by turning light switches off and on. (Base 2 math.)  10 isn't ten here, and 10 isn't fifteen; 10 here is two.  So, if 0011 is three, and you have to move leftward to count higher, that means 0100 is obviously four.  Eventually you add one (0001) and two (0010) to four (0100) and you get up to the giddy height of seven (0111).  Lesser machines might stop there, call the bit string 111 seven, and be satisfied with having achieved base 8 math.  Base 8 is called Octal, lesser machines did in fact use it, and personally I found it to be no fun at all.  The word excruciating comes to mind.  Anyway, with the IBM machine people were blessed with a fourth bit to expand into. 1000 became eight, 1001 was nine, 1010 (eight plus two) became A, and so on until all the bits were used up, culminating in 1111 being called F and meaning fifteen.  Base sixteen math, and we call it hexadecimal, affectionately known as hex.  It was pretty easy to see that hex was better than octal, and it was also pretty easy to see that we didn’t need to go any higher — base 16 is quite adequate for ordinary human minds.  So there it is.  It also explains why the word hex is so often used almost interchangeably with the word binary.

And Zoned Decimal?  A byte containing the digit 1 in zoned decimal is represented by hex ‘F1’, which is exactly the same as what it would be as part of a text string (“Our number 1 choice.”)  Think of a printable byte as one character position. The digit 9, when represented as a printable (zoned) decimal number, is hex’F9’.  A number like 123456, if it is unsigned, is hex’F1F2F3F4F5F6’.  (If it has a sign, the sign might be separate, but if the sign is part of the string then the F in the last byte might be some other code to represent the sign.  Conveniently, F is one of the signs for plus, and it also means unsigned.)  As noted earlier, you cannot do math directly with zoned decimal numbers; the compiler has to convert them to another type first. 

You may be thinking that the left half of each byte is wasted in zoned decimal format.  Well, not wasted exactly: Any printable character will use up one byte; a half byte containing F is no bigger than the same half byte containing zero.  Still, if you are not actually printing the digit at the moment, could you save half the memory by eliminating the F on the left?  Pretty much. 

You scrunch the number together, squeezing out the F half-bytes, and you have unsigned packed decimal.  You just need to add a representation of the plus or minus sign to get standard (signed) packed decimal format. The standard is to use the last half byte at the end for the sign, the farthest right position.  This is why decimal numbers are usually set up to allow for an odd number of digits – because memory is allocated in units of bytes, there are two digits in a byte, and the last byte has to contain the sign as the last digit position.

How is the packed decimal sign represented in hex?  The last position is usually a C for plus or a D for minus.  F also means plus, but usually carries the nuance of meaning that the number is defined as unsigned.  Naturally there are some offbeat representations where plus can be F, A, C, or E, like the open spaces on a Treble clef in music, and minus can be either B or D (the two remaining hex letters after F,A,C,E are taken) – hence giving meaning to all the non-decimal digits.  Mostly, when produced by ordinary processes, it’s C for plus, F for unsigned and hence also plus, or D for minus. 

So if you have the digit 1 in zoned decimal, as hex’F1’, then after it is fully converted to signed packed decimal the packed decimal byte will be hex’1C’.  Zoned decimal Nine (hex ‘F9’) would convert to packed decimal hex ‘9C’, and zero (hex ‘F0’) becomes hex’0C’.  Minus nine becomes hex ‘9D’, and yes, you can have minus zero as hex ‘0D’. 

The mathematical meaning of minus zero is arguable, but some compilers allow it, and in fact the IEEE standard for floating point requires it.  Some machine instructions can also produce it as a result of math involving a negative number and/or overflow.  You care about negative zero mainly because in some operations (which you might easily never encounter), hex’0D’, the minus zero, might give different results from ordinary zero.  A minus zero normally compares equal to ordinary plus zero when doing ordinary decimal comparisons.  Okay, moving on … Zoned decimal 123, hex ‘F1F2F3’, when converted to signed packed decimal will become hex’123C’, and Zoned decimal 4567, hex ‘F4F5F6F7’, when converted to signed packed decimal will become hex’04567C’, with a leading zero added because you need an even number of digits; half bytes have to be paired up so they fill out entire bytes.

Wait, you say, how did the plus or minus look in zoned decimal? 

The answer is that there are various formats.

It is possible for the rightmost Zoned Decimal digit to contain the sign in place of that digit’s lefthand “F” (its “zone”), and that is the format generated when a Zoned Decimal number is produced by the UNPACK machine instruction.  

The most popular format, for users of high level languages, seems to be when the sign is kept separate and placed at the beginning of the number (the farthest left position). COBOL calls this “SIGN IS LEADING SEPARATE”.  

However, many print formats are possible, and you can delve into this topic further by looking at IBM’s Language Reference Manual for whatever language you’re using.  Zoned decimal is essentially a print (or display) format.  High Level Computer Languages facilitate many elaborate editing niceties such as leading blanks or zeroes, insertion of commas and decimal points, currency symbols, and stuff that may never even have occurred to you (or me).

In COBOL, a packed decimal variable is traditionally defined with USAGE IS COMPUTATIONAL-3, Or COMP-3, but it can also be called PACKED-DECIMAL.  A zoned decimal variable is defined with a PICTURE format having USAGE IS DISPLAY.  A binary variable is just called COMP, but it can also be called COMP-4 or BINARY.

In C, a variable that will contain a packed decimal number is just called decimal.  If you are going to use decimal numbers in your C program, the header <decimal.h> should be #included.  A variable that will hold a four byte binary number is called int, or long.  A two byte binary integer is called short.  Typically a number is converted into printable zoned decimal by using some function like sprintf with the %d formatting code.  Input zoned decimal can be treated as character.

PL/I refers to packed decimal numbers as FIXED DECIMAL.  Zoned decimal numbers are defined as having PICTURE values containing nines, e.g. P’99999’ in a simple case.  Binary numbers are called FIXED BINARY, or FIXED BIN, with a four byte binary number being FIXED BINARY(31) and a two byte binary number being called FIXED BINARY(15).
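
As a quick side-by-side illustration, here is roughly what equivalent declarations might look like in the three languages just mentioned.  The field names are made up, and exact syntax details vary by compiler and release:

COBOL:
   05  WS-PACKED   PIC S9(7)V99  COMP-3.
   05  WS-ZONED    PIC S9(7)V99.
   05  WS-BINARY   PIC S9(8)     COMP.

C (z/OS XL C, with <decimal.h> included and decimal support enabled):
   decimal(9,2) packedAmount;   /* packed decimal, 9 digits, 2 after the implied point */
   long         fullWord;       /* 4-byte (full word) binary integer */
   short        halfWord;       /* 2-byte (half word) binary integer */

PL/I:
   DCL PACKED_AMT FIXED DECIMAL(9,2);
   DCL ZONED_AMT  PICTURE '9999999V99';
   DCL FULL_WORD  FIXED BINARY(31);
   DCL HALF_WORD  FIXED BINARY(15);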

What if you want to use non-integers, that is, you want decimal positions to the right of the decimal?  Dollars and cents, for example?

In most high level languages, you define the number of decimal positions you want when you declare the variable, and for binary numbers and packed decimal numbers, that number of decimal positions is considered to be implied; it just remembers where to put the decimal for you, but the decimal position is not visible when you look at the memory location in a dump or similar display.  For zoned decimal numbers, you can declare the variable (or the print format) in a way that both the implied decimal and a visible decimal occur in the same position.  For example, if (in your chosen language) 999V99 creates an implied decimal position wherever the V is, then you would define an equivalent displayable decimal point as 999V.99, in effect telling the compiler that you want a decimal point to be printed at the same location as the implied decimal.  As previously noted, the limits on the numbers of digits that can be represented or manipulated apply to all the digits in use on both sides of the implied decimal point.

You may have noticed that abends are a bit more common when using packed decimal arithmetic, as compared with binary math.  There are two common ways that decimal arithmetic abends where binary would not.

One occurs when fields are not initialized.  If an uninitialized field contains hex zeroes, and it is defined as binary, that’s valid and some might say lucky.  If the same field of hex zeroes is defined as signed packed decimal, mathematical operations will fail because of the missing sign in the last half byte.  This is a common cause of 0C7 (data exception) abends, including abends that occur while producing formatted dumps (such as a PL/I program containing an “ON ERROR” unit with a “PUT DATA;” statement).  When the uninitialized fields contain hex zeroes, it might seem that the person using binary variables is lucky, but sometimes uninitialized fields contain leftover data from something else, essentially random trash that happens to be in memory.  In that case decimal instructions usually still abend, and binary mathematical operations do not – they just come up with wrong results, because absolutely any hex value is a valid binary number.  The abend doesn’t look like such bad luck in that situation.

The other common cause for the same problem, besides uninitialized fields, is similar insofar as it means picking up unintended data.  When something goes wrong in a program – maybe a memory overlay, maybe a bad address pointer – an instruction may try to execute using the wrong data.  Again, there is a good chance that decimal arithmetic will fail in such a situation, because of the absence of the sign perhaps, or perhaps because the data contains values other than the digits zero through nine plus the sign.  Binary arithmetic may carry on happily producing wrong answers based on the bogus data values.  Even if you recognize that the output is wrong, it can be difficult to track back to the cause of the problem.  With an immediate 0C7 or other decimal arithmetic abend, you have a better chance of finding the underlying problem with less difficulty. 

So there you have it.  Basic mainframe computer math, simplified.  Sort of.

________________________________________________________________________

References

z/Architecture Principles of Operation (PDF) SA22-7832

at this url:
http://publibfi.boulder.ibm.com/epubs/pdf/dz9zr010.pdf

In the SA22-7832-10 version, on the first page of Chapter 8. Decimal Instructions, there is a section called “Decimal-Number Formats”, containing subsections for zoned and packed-decimal. 

On the fourth page of Chapter 7. General Instructions there is a section called “Binary-Integer Representation”, followed by sections about binary arithmetic.

Principles of Operation is the definitive source material, the final authority.

Further reading

F1 for Mainframe has a good very short article called “SORT – CONVERT PD to ZD and BI to ZD”, in which the author shows you SORT control cards you can use to convert data from one numeric format to another (without writing a program to do it).   At this url:   https://mainframesf1.com/2012/03/27/sort-convert-pd-to-zd-and-bi-to-zd/

 

 

 

Program Search Order

z/OS MVS Program Search Order

When you want to run a program or a TSO command, where does the system find it?  If there are multiple copies, how does it decide which one to use?  That is today’s topic.

The system searches for the item you require; it has a list of places to search; it gives you the first copy it finds.  The order in which it searches the list is called, unsurprisingly, the search order.

The search order is different depending on where you are when the request is made (where your task is running within the computer system, not where you're sitting geographically).   It also depends on what you request (program, JCL PROC, CLIST, REXX, etc). 

Types of searches: Normally we think of the basic search for an executable program. The search for TSO commands is almost identical to the search for programs executed in batch, with some extra bells and whistles.  There is different handling for a PROC request in JCL (since the search needs to cover JCL libraries, not program libraries).  The search is different when you request an exec in TSO by prefixing the name of the exec with a percent sign (%) — the percent sign signals the system to bypass searching for programs and go directly to searching for CLIST or REXX execs. Transaction systems such as CICS and IMS are again different: They use look-up tables set up for your CICS or IMS configuration.

Right now we’re only going to cover batch programs and TSO commands.

Overview of basic search order for executable programs:

  1. Command tables are consulted for special directions
  2. Programs already in storage within your area
  3. TASKLIB   (ISPLUSR,  LIBDEF,  ISPLLIB, TSOLIB) 
  4. STEPLIB if it exists
  5. JOBLIB only if there is no STEPLIB
  6. LPA (Link Pack Area)
  7. LINKLIST (System Libraries)
  8. CLIST and REXX only if running TSO

Let’s pretend you don’t believe that bit about STEPLIB and JOBLIB, that the system searches one or the other but not both.  Look here for verification:  Numbered item 3 in Section “Search order the system uses for programs” in the IBM publication “z/OS MVS Initialization and Tuning Guide” at this link: https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.ieae100/ieae10018.htm

Batch Programs vs TSO commands

As noted, these two things are almost exactly the same.

You can type the name of a program when in READY mode in TSO, or in ISPF option 6, which mimics READY mode.  (Using ISPF 6 is similar to opening a DOS window on the PC.)  Under ISPF you can also enter the name of a program directly on any ISPF command line if you preface the program name with the word TSO followed by a space. 

So you can type the word IEBGENER when in READY mode, or you can put in “TSO IEBGENER” on the ISPF command line, and the system will fetch the same IEBGENER utility program just the same as it does when you say
“//  EXEC  PGM=IEBGENER” in batch JCL. 

There is a catch to this: the PARM you use in JCL is not mirrored when you invoke a program under TSO this way.  If you enter any parameters on the command line in TSO, they are passed to the program in a different format than a PARM.   When you type the name of a program to invoke it under TSO, what usually happens is that the program is started, but instead of getting a pointer it can use to find a PARM string, the program receives a pointer that can be used to find the TSO line command that was entered.  The two formats are just similar enough that a program expecting a PARM will usually run, but it will misinterpret the CPPL pointer it actually receives (CPPL = Command Processor Parameter List).  Typically such a program will issue a message saying that the PARM parameters are invalid.

So, let's look at how the system searches for the program.

Optional ISPF command tables (TSO/ISPF only) 

We mention command tables first because the command tables are the first thing the system checks.  Fortunately the tables are optional.  Unfortunately they can cause problems.  Fortunately that is rare.  So it’s fine for you to skip ahead to the next section, you won’t miss much; just be aware that the command tables exist, so you can remember it if you ever need to diagnose a quirky problem in this area.

Under TSO/ISPF, ISPF-specific command tables can be used.  These can alter where the system searches for commands named in such a table.  This is an area where IBM makes changes from time to time, so if you need to set up these tables, consult the IBM documentation for your exact release level. 

There is a basic general ISPF TSO command table called ISPTCM that does not vary depending on the ISPF application.

Also, search order can vary within ISPF depending on the ISPF application. 

For example, when you go into a product like File-AID, the search order might be changed to use the command tables specific to that application. 

So there are also command tables specific to the ISPF application that is running.  In addition to a general Application Command Table, there can be up to 3 site command tables and up to 3 user command tables, plus a system command table.  Within these, the search order can vary depending on what is specified at your site during setup. 

If you are having a problem related to command tables, or if you need to set one up, consult the IBM references.   For z/OS v2r2, see "Customizing the ISPF TSO command table (ISPTCM)" and “Customizing command tables” in the IBM manual "ISPF Planning and Customizing".

ISPF command table(s) can influence other things besides search order, including:

(a) A table can specify that a particular program is to run in APF-authorized mode, allowing the program to do privileged processes (APF is an entirely separate topic and not covered herein).

(b) A table can specify that a particular program must be obtained from LPA only (bypassing the normal search order).  (We’ll introduce LPA further down.)

(c) A table can change characteristics of how a command is executed.  For example the table can specify that any given program must always be invoked as a PARM-processing ordinary program, or the table can specify instead that the program must always be invoked as a TSO command that accepts parameters entered on the command line along with the command name, e.g. LISTDS ‘some.dataset.name’

Programs are not required to be listed in any of these command tables.  If a program is not listed in any special table, then default characteristics are applied to the program, and the module itself is located via the normal search order, which is basically the same as for executable programs in batch. 

Most notably, a command table can cause problems if an old version exists but that fact is not known to the current local Systems people (z/OS System Admins are not called Admins as they might be on lesser computer systems; on the mainframe they're called "System Programmers" generally, or some more specialized title).  Such a table might prevent or warp the execution of a program because the characteristics of a named program have changed from one release to another.

If a command is defined in a command table, but the program that the system finds does not conform to the demands imposed by the settings in the table, that situation can be the cause of quirky problems, including the “COMMAND NOT FOUND” message.  Yes, that might perhaps be IBM’s idea of a practical joke, but it happens like this: If the table indicates that a program has to be found in some special place, but the system finds the program in the wrong place, then in some cases you get the COMMAND NOT FOUND message.  I know, you’re cracking up laughing.  That’s the effect it seems to have on most people when they run into it.  Not everyone, of course.

Other subsystems such as IMS and CICS also use command tables, and these are not covered by the current discussion.  Consult IMS and CICS documentation for specifics of those subsystems. 

Programs already loaded into your memory

Before STEPLIB or anything like that, the system searches amongst the programs that have already been loaded into your job’s memory.  Your TSO session counts as a job.  

There can be multiple tasks running within your job – Think split screens under TSO/ISPF.  In that case the system first searches the program modules that belong to the task where the request was issued, and after that it searches those that belong to the job itself.  Relevant jargon : a task has a Load List, which has Load List Elements (LLE); the job itself has Job pack area (JPA).

If a copy of the module is found in your job’s memory, in either the task's LLE or the job's JPA, that copy of the module will be used if it is available.  

If the module is marked as reentrant – if the program has a flag set to promise that it does not modify itself at all – then any number of tasks can use the same copy simultaneously, so you always get it. If the module is not marked as reentrant, but it is marked as serially reusable, that means the program can modify itself while it's running as long as, at the end, it puts everything back the way it was originally; in that case a second task can use an existing copy of the module after the previous task finishes.  If neither of those flags is set, then the system has to load a fresh copy of the load module every time it is to be run.

If the module is not reentrant and it is already in use, as often happens in multi-user transaction processing subsystems like IMS and CICS, the system might make you wait until the previous caller finishes, or else it might load an additional copy of the module.  In CICS and IMS this depends on CICS/IMS configuration parameters.  In ordinary jobs and TSO sessions, the system normally just loads a new copy.

Note that if a usable copy of a module is found already loaded in memory, either in the task's LLE or in the job's JPA, that copy will be used EVEN if your program has specified a DCB on the LOAD macro to tell the system to load the module from some other library (unless the copy in memory is "not reusable", and then a new copy will be obtained).  Hunh?  Yeah, suppose your program, while it is running, asks to load another program, specifying that the other program is to be loaded from some specific Library.  Quelle surprise, if there is already a copy of the module sitting in memory in your area, then the LOAD does not go look in that other library.

For further nuances on this, if it is a topic of concern for you, see the IBM write-up under the topic "The Search for the Load Module" in the z/OS MVS Assembler Services Guide

TASKLIB 

Most jobs do not use TASKLIB.  TSO does.  

Essentially TASKLIB works like a stand-in for STEPLIB.  A job can’t just reallocate STEPLIB once the job is already running.  For batch jobs the situation doesn’t come up much.  Under TSO, people often find cases where they’d like to be able to reallocate STEPLIB.  Enter TASKLIB.  STEPLIB stays where it is, but TASKLIB can sneak in ahead of it.

Under TSO, ISPF uses the ddname ISPLLIB as its main TASKLIB.

The ddname ISPLUSR exists so that individual users – you yourself for example – can use their own private load libraries, and tell ISPF to adopt those libraries as part of the TASKLIB, in fact at the top of the list.  When ISPF starts, it checks to see if the ddname ISPLUSR is allocated.  If it is, then ISPF assigns TASKLIB with ISPLUSR first, followed by ISPLLIB.  As long as you allocate ISPLUSR before you start ISPF, then ISPLUSR will be searched before ISPLLIB.  

In fact that USR suffix was invented just for such situations.  It’s a tiny digression, but ISPF allows you to assign ISPPUSR to pre-empt ISPPLIB for ISPF screens (Panels), and so on for other ISP*LIB ddnames; they can be pre-empted by assigning your own libraries to their ISP*USR counterparts. 

Up to 15 datasets can be concatenated on ISPLUSR.  If you allocate more, only the first 15 will be searched.

If you allocate ISPLUSR, you have to do it before you start ISPF, and it applies for the duration of the ISPF session.
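
For example, before starting ISPF (from READY mode, or in your logon CLIST), you might allocate ISPLUSR with something like the following line – the dataset name is made up:

ALLOCATE FILE(ISPLUSR) DATASET('USERID.MY.LOADLIB') SHR REUSE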

Not so with LIBDEF.  The LIBDEF command can be used while ISPF is active.  Datasets you LIBDEF onto ISPLLIB are searched before other ISPLLIB datasets, but after ISPLUSR datasets.
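
A LIBDEF is issued through ISPF dialog services, for example from a CLIST or REXX exec running under ISPF.  Here is a minimal sketch, with a made-up dataset name; issuing LIBDEF for the same library with no operands removes the definition again:

ISPEXEC LIBDEF ISPLLIB DATASET ID('USERID.TEST.LOAD')
/* ... run whatever needs the library, then remove the LIBDEF: */
ISPEXEC LIBDEF ISPLLIB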

The TSOLIB command can also be used to add datasets onto TASKLIB

The TSOLIB command, if used, must be invoked from READY mode prior to going into ISPF.   

Why yet another way of putting datasets onto TASKLIB?  The other three methods just discussed are specific to ISPF. The TSOLIB command is applicable to READY mode also. It can be used even when ISPF is not active.  For programs that run under TSO but not under ISPF, the only TASKLIB datasets used are those activated by TSOLIB.  Within ISPF, any dataset you add to TASKLIB by using "TSOLIB ACTIVATE" will come last within TASKLIB, after ISPLLIB.  
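
As a sketch, from READY mode before entering ISPF (the dataset name is made up; check the TSO/E Command Reference for the full syntax on your release):

TSOLIB ACTIVATE DATASET('USERID.MY.LOADLIB')
TSOLIB DISPLAY
TSOLIB DEACTIVATE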

Also, TASKLIB load libraries activated by the TSOLIB command are available to non-ISPF modules called by ISPF-enabled programs.  For example, if a program running under ISPF calls something like IEBCOPY, and then IEBCOPY itself issues a LOAD to get some other module it wants to call, do not expect the system to look in ISPLLIB for the module that IEBCOPY is trying to load.  It should check TSOLIB, though.  However, some programs bypass TASKLIB search altogether.

Under ISPF, this is the search order within TASKLIB:

    ISPLUSR
    LIBDEF
    ISPLLIB
    TSOLIB

For non-ISPF TSO, only TSOLIB is used for TASKLIB.

To find out your LIBDEF allocations, enter ISPLIBD on the ISPF command line. (Not TSO ISPLIBD, just plain ISPLIBD)

TASKLIB with the CALL command in TSO

When you use CALL to run a program under TSO, the library name on the CALL command becomes a TASKLIB during the time the called program is running.  CALL is often used within CLIST and REXX execs, even though you may not use CALL much yourself directly.

So if you say CALL 'SYS1.LINKLIB(IEBGENER)' from ISPF option 6 or from a CLIST or REXX, then 'SYS1.LINKLIB' will be used to satisfy other LOAD requests that might be issued by IEBGENER or its subtasks while the called IEBGENER is running.  Ah, yes, the system is full of nuances and special cases like this one; I'm just giving you the highlights.  This entire article is somewhat of a simplification.  What joys, you might be thinking, must await you in the world of mainframes.

STEPLIB or JOBLIB

If STEPLIB and JOBLIB are both in the JCL, the STEPLIB is used and the JOBLIB is ignored.

The system does NOT search both.
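
Here is a minimal sketch showing the effect – the library names, program names, and job card details are all made up:

//MYJOB    JOB  (ACCT),'SEARCH DEMO',CLASS=A,MSGCLASS=X
//JOBLIB   DD   DSN=MY.JOB.LOADLIB,DISP=SHR
//STEP1    EXEC PGM=PROGONE
//STEPLIB  DD   DSN=MY.STEP.LOADLIB,DISP=SHR
//STEP2    EXEC PGM=PROGTWO

For STEP1, the STEPLIB (MY.STEP.LOADLIB) is searched for PROGONE and the JOBLIB is ignored for that step.  For STEP2, which has no STEPLIB, the JOBLIB is searched for PROGTWO.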

LPA (Link Pack Area)

You know that there are default system load libraries that are used when you don’t have STEPLIB or JOBLIB, or when your STEPLIB or JOBLIB does not contain the program to be executed.  

The original list of the names of those system load libraries is called LINKLIST.  The first original system load library was called SYS1.LINKLIB, and when they wanted to have more than one system library they came up with the name LINKLIST to designate the list of library names that were to be treated as logical extensions of LINKLIB.

The drawback with LINKLIST, as with STEPLIB and JOBLIB, is that when you request a program from them, the system actually has to go and read the program into memory from disk.  That’s overhead.  So they invented LPA (which stands for Link Pack Area — go figure.).

Some programs are used so often by so many jobs that it makes sense to keep them loaded into memory permanently.  Well, virtual memory.  As long as such a program is not self-modifying, multiple jobs can use the same copy at the same time:  Additional savings.  So an area of memory is reserved for LPA.  Modules from SYS1.LPALIB (and its concatenation partners) are loaded into LPA.  It is pageable, so more heavily used modules replace less used modules dynamically.

Sounds good, but more tweaks came.  Some places consider some programs so important and so response-time-sensitive that they want those programs to be kept in memory all the time, even if they haven’t been used for a few minutes.  And so on, until we now have several subsets of LPA.

Within LPA, the following search order applies:

Dynamic LPA, from the list in ‘SYS1.PARMLIB(PROG**)’

Fixed LPA (FLPA), from the list in ‘SYS1.PARMLIB(IEAFIX**)’

Modified LPA (MLPA), from the list in ‘SYS1.PARMLIB(IEALPA**)’

Pageable LPA (PLPA), from the list in (LPALST**) and/or (PROG**)

LINKLIST

 LINKLIST Libraries are specified using SYS1.PARMLIB(PROG**) and/or (LNKLST**).

SYS1.LINKLIB is included in the list of System Libraries even if it is not named explicitly. 

An overview of LINKLIB was just given in the introduction to LPA, so you know this already, or at least you can refer back to it above if you skipped ahead.

Note that LINKLIST libraries are controlled by LLA; whenever any LINKLIST module is updated, the directory information LLA keeps in memory (its BLDL entries) needs to be rebuilt by an LLA refresh (typically the operator command F LLA,REFRESH).  If LLA is not refreshed, the old version of the module will continue to be given to anyone requesting that module.

What’s LLA, you might ask.  For any library under LLA control, the system reads the directory of each library, and keeps the directory permanently in memory.  A library directory contains the disk address of every library member.  Hence keeping the directory in memory considerably speeds up the process of finding any member.  It speeds that up more than one might think, because PDS directory blocks have a very small block size, 256 bytes, and these days the directories of production load libraries can contain a lot of members, two facts which taken together mean that reading an entire PDS directory from disk can require many READ operations and hence be time-consuming.  If you repeat that delay for almost every program search in every job, you have a drag on the system.  So LLA gives a meaningful performance improvement for reading members from PDS-type libraries.  For PDSE libraries too, but for different reasons; PDSE libraries do not have directory blocks like ordinary PDS libraries.  Anyway, the price you pay for the improvement is that the in-memory copies of the directories have to be rebuilt whenever a directory is updated, that is, whenever a library member is replaced.  What does LLA stand for?  Originally it stood for LINKLIST LookAside, but when the concept was extended to cover other libraries besides LINKLIST the name was changed to Library LookAside.

Under TSO, CLIST and REXX execs

Under TSO, if a module is not found in any of the above places, there is a final search for a CLIST or REXX exec matching the requested name. 

CLIST and REXX execs are obtained by searching ddnames SYSUEXEC, SYSUPROC, SYSEXEC, and SYSPROC (in that order, if the order has not been deliberately changed).  Note that SYSUEXEC and SYSEXEC are for REXX members only, whereas SYSPROC and SYSUPROC can contain both REXX and CLIST members, with the proviso that a REXX member residing in the SYSPROC family of libraries must contain the word REXX on the first line, typically as a comment /* REXX */

These four ddnames can be changed and other ddnames can be added by use of the ALTLIB command.  Also the order of search within these ddnames can be altered with the ALTLIB command (among other ways).
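
For example, to put your own REXX library at the front of the search and later remove it again (the dataset name is made up; under ISPF, prefix each command with TSO, and remember it applies only to the current screen session):

ALTLIB ACTIVATE APPLICATION(EXEC) DATASET('USERID.MY.EXEC')
ALTLIB DISPLAY
ALTLIB DEACTIVATE APPLICATION(EXEC)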

SYSPROC was the original CLIST ddname.  A CLIST library can also include REXX execs. The SYSEXEC ddname was added to be used just for REXX.  In the spirit of ISPLUSR et al, a USER version of each was added, called SYSUPROC and SYSUEXEC.

The default REXX library ddname SYSEXEC can be changed to something other than SYSEXEC by MVS system installation parameters.

Prefixing the TSO command name with the percent sign (%) causes the system to skip directly to the CLIST and REXX part of the search, rather than looking every other place first. 

To find the actual search order in effect within the CLIST and EXEC ddnames for your TSO session at any given time, use the command TSO ALTLIB DISPLAY. 

That's it for program search order.  It's a simplification of course. ;-)

 

References, Further reading

z/OS Basic Skills, Mainframe concepts, z/OS system installation and maintenance
Search order for programs
http://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zsysprog/zsysprogc_searchorder.htm

z/OS TSO/E REXX Reference, SA32-0972-00 
Using SYSPROC and SYSEXEC for REXX execs
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.ikja300/dup0003.htm

z/OS V2R2 ISPF Services Guide 
Application data element search order
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.2.0/com.ibm.zos.v2r2.f54sg00/aso.htm

z/OS ISPF Services Guide, SC19-3626-00 
LIBDEF—allocate application libraries
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.f54sg00/libdef.htm

Numbered item 3 in Section “Search order the system uses for programs” in the IBM publication “z/OS MVS Initialization and Tuning Guide” at this link: https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.ieae100/ieae10018.htm

"The Search for the Load Module" in the z/OS MVS Assembler Services Guide

"Customizing the ISPF TSO command table (ISPTCM)" and “Customizing command tables In the IBM manual "ISPF Planning and Customizing"

 

ISPF Edit Macros

ISPF Edit Macros, an Introduction . . .

You want to put a series of Edit commands together into an EXEC so you can make the same edit changes to several members?  Write an ISPF Edit macro.  It's easier than it sounds.

You need to have a dataset where you can put your EXEC members (CLISTs and REXX EXECs). You only need one; you can put your edit macros into the same dataset with your other CLISTs and REXX EXECs.  As discussed in the previous post here, your EXEC library should be allocated to the ddname SYSUPROC (*see footnote for other options).  Allocate SYSUPROC (with a U in the middle) at the time when you log on, before you start ISPF; do this by using a CLIST that runs automatically every time you log on to TSO – a CLIST that contains the ALLOCATE for SYSUPROC.  The first part of the previous article (How to Write a Clist) explains how you set up a CLIST to execute at LOGON to TSO.
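
The relevant line in the logon CLIST might look something like this minimal sketch – the dataset name is made up; substitute your own EXEC library:

ALLOC FILE(SYSUPROC) DATASET('USERID.MY.EXEC') SHR REUSE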

So let’s put a really simple edit macro into your EXEC library, just to get the ball rolling.  Create a member called UCXX.  This member will change lowercase text to uppercase text, but only in eXcluded lines.  UCXX will have the following few lines:

ISREDIT MACRO NOPROCESS
ISREDIT CAPS OFF
ISREDIT CHANGE P'<'  P'>' ALL X
ISREDIT LEFT MAX

Every line starts with ISREDIT.  That tells the command handler to pass the line over to ISPF edit for processing.

The first line contains ISREDIT MACRO to indicate that this is a macro. The MACRO line is a requirement.  Specifying NOPROCESS enables you to limit the range of lines that are affected by line commands within your macro (this will be more apparent in the second example).  In this first example we use “ALL X” (all excluded lines) as a quick and easy substitute for specifying the line range.

The second line contains an actual edit command.  If your edit profile for this type of dataset has CAPS mode turned ON, this CAPS OFF will turn it OFF.   This does the same thing as entering CAPS OFF on the command line, but here you are putting the command into your macro instead, and hence you preface it with the mandatory ISREDIT lead-in.

The next line also contains an actual edit command, Change (which can be abbreviated C — I just write it out as Change to make the meaning obvious at a glance).

Change is an edit command you already know.  P means “picture”, and there are various picture possibilities: P with the Less-than sign enclosed in apostrophes ( P‘<’ ) means lowercase text, and P with the Greater-than sign enclosed in apostrophes ( P‘>’ ) means uppercase. So this change command will change lowercase letters to uppercase letters, but since it says “all X”, it will make the change only on eXcluded lines.

The final line of the macro just straightens up the display a bit, and is not strictly necessary.  If the last character that gets changed by the macro happens to be at the far right of a long line, then without a corrective adjustment like this, the cursor may end up at the far right of a line, so the screen display may look funny.  This last line just shifts the display back to the left.  You could also add an ISREDIT RESET line, but then you wouldn't see highlighting of the changed lines.

Assume you have now entered the sample lines into a new EXEC member called UCXX, and saved it.  If your EXEC library was automatically allocated to SYSUPROC at Logon, you should just go into split screen mode now and use your second “screen session” to edit some other dataset.  Pick a C program or some text – anything containing some lowercase.

To test this macro, you will exclude a block of lines by typing XX over the lefthand line numbers at the start and end of a block of lines that contain lowercase, thus excluding them from the display.  Then type UCXX on the command line, press enter, and bingo, your macro should do the change command as specified, immediately changing all lowercase letters to uppercase letters within all excluded lines.

Okay, that worked.  Just put CAN (or cancel) on the command line to get out of EDIT without saving the changes (assuming you’re doing this just to test for your macro).

This is actually easy, right?

As an exercise, you can make a modified version of the macro and call it UCNX, and set it up so it makes the specified changes to all the lines that are NOT eXcluded.  (The opposite of “all X” is “all NX”, if you don’t use that feature often enough to recall the syntax offhand.)

As another exercise, you can make yet another copy of the macro and call it DENUM, setting it up to change all numbers in columns 73 through 80 to blanks.  You would of course only use such a macro on 80-byte fixed-length records. The Picture code for numeric data is P’#’ and blank is just a blank enclosed in apostrophes.  Some people don't like to use the blank enclosed in apostrophes because it isn't totally easy to read; You can use X'40' instead if you prefer, since hexadecimal 40 equates to the character blank.  For example you might use a line like this:
ISREDIT CHANGE P'#'  ' '  73 80 ALL NX
or, equivalently:
ISREDIT CHANGE P'#'  X'40' 73 80 ALL NX

If you want to know the rest of the picture codes, consult Picture string examples and Picture strings (string, string1) in the Links to IBM Documentation at the end of this post.

Now let’s do something that specifies a range of lines without relying on eXclude.

Here’s one that I prepared earlier — a modified version of the UCXX macro.  Let’s call it UC## (or some other unused name of your own choosing):

ISREDIT MACRO NOPROCESS
ISREDIT CAPS OFF
ISREDIT PROCESS RANGE Q  /
ISREDIT CHANGE P'<' P'>'  .ZFRANGE .ZLRANGE ALL
ISREDIT LEFT MAX

Enter the above lines into a member named (for example) UC## in your same EXEC library.  Save the member, and swap back to your other split screen session.  Edit the same thing you just cancelled out of after using UCXX.  This time, instead of eXcluding lines, put // over the lefthand line numbers at the start and at the end of the block of lines to be processed by the macro.  Instead of XX, you use //, basically.  Then you enter UC## on the command line, press enter, and again voila, the text should be changed just like it was before with the UCXX macro.

So, we ask, how is the second macro, UC##, different from the first macro, UCXX?  Right, it has that PROCESS RANGE statement, and then, on the CHANGE command, instead of “ALL X”, it has .ZFRANGE and .ZLRANGE.  What does it mean?  Well, F means First (in .ZFRANGE) and L means Last.  RANGE obviously means, uh, range: the range of lines to be affected by the change.  So, the line number of the first line where you put // becomes .ZFRANGE, and the last line where you put // becomes .ZLRANGE.  What does .Z mean?  IBM reserves .Z for its own ISPF editor label names.  In general, ISPF names that start with Z are IBM names, so if you are making up a name for anything, it’s safer to pick a name that doesn't start with Z.

So in the second macro we have the same Change command, but a different way of specifying the range of lines to be processed.  We told it to “PROCESS RANGE Q /”, and when we executed the macro, we used the slash (/) when we designated a block of lines delineated by // ; using QQ instead of // should work the same way – the “PROCESS RANGE Q /” means that either Q or / can be used (but you can’t mix them).

You can choose the two characters (you don’t have to use Q and /), but there are rules.  Per IBM, you can choose any alphabetic or special character except blank, hyphen (-), apostrophe ('), or period (.) and also no numbers.  That quoted part was a direct quote from the “Edit and Edit Macros” manual, but personally I wouldn’t use the letter A or X or other letters that might lead to ambiguity, and wouldn’t use a plus sign or an ampersand or anything that looks like it might possibly ever lend itself to an alternate interpretation.  But that’s just me.  If you do try some character that looks like it ought to work, but it doesn’t, you can open a problem with IBM with full confidence that IBM is very good about responding to that sort of thing with a documentation change when appropriate.

I always pick the slash because the slash is used a lot for selection in ISPF, so it’s easy to remember.

In fact more often I revert to the “XX” style because it lets me set up several excluded sections and then issue the command once.  You can, for example, put an X on the line numbers to exclude lines 5, 7, and 9, but you cannot do that with the .ZFRANGE, .ZLRANGE method of doing things.  However, you might have a situation where you need to eXclude various lines for other reasons, and in that case the // style might suit your purpose.

Anyway, having seen the PROCESS statement in action, you see how it relates to the NOPROCESS on the MACRO statement at the beginning.

Moving ahead, the following sample macro is a simple template you can use to create other macros.  It inserts 3 lines at the point where the cursor is when you press enter.  So you can copy this into your EXEC library and give it member name #TRYTHIS. Then you edit something.  Type #TRYTHIS on the command line, but don’t press enter yet.  Place your cursor on any line in the source you're editing, and then press enter.  The three lines will be inserted after the line where your cursor was.  Of course, the three lines will say Line 1, Line 2, and Line 3, if you enter the macro exactly as shown here, but you can change the macro to insert some more useful text.  This is an actual situation that comes up a lot: you have to insert the same block of lines into a number of source members.  You can modify the macro to insert four or five lines, as an exercise, and supply some better lines – insert a new JCL step into a JOB, for example, or just insert a block of comments.  Here's the basic example:

ISREDIT MACRO  NOPROCESS
CONTROL NOCAPS
ISREDIT (ROW,COL) = CURSOR
ISREDIT (LINE) = LINENUM .ZCSR
SET &LINE  = &LINE + 0
ISREDIT LINE_AFTER &LINE = +
'Line 3 '
ISREDIT LINE_AFTER &LINE = +
'Line 2 '
ISREDIT LINE_AFTER &LINE = +
'Line 1 '
EXIT CODE(0)

For a more elaborate version of this same idea, with error handling, see the IBM sample edit macro ISRBOX, which draws a little comment box at the place where it finds the cursor when you press enter, with attention to the column as well as the row; it even places the cursor inside the added comment box for you.  (ISRBOX in Edit-related sample macros in the Links to IBM Documentation at the end of this article).

Want yet more sample edit macros?  If you remember TSO ISRDDN, enter TSO ISRDDN on any ISPF command line and look for ddname SYSPROC.   In the command entry 1-byte field, next to the first dataset on SYSPROC, put a B for Browse (or an E for Edit).  This should put you into Browse (or Edit) for the entire SYSPROC concatenation, which is usually several libraries.  On the command line in Browse (or Edit), put in the command SRCHFOR ISREDIT and press enter.  That should identify every "live" edit macro you have available.  It's kind of like reading an encyclopedia, but you can copy pieces of macros to modify for slightly different uses.

Since this is just an introductory article, we’ll leave it at that for now (except for the following discussion of Initial Macros, which you don’t need if you aren’t planning to use them).  Check out some of those IBM-supplied sample macros when you’ve finished playing around with modified versions of the quite simple macro examples included above.

Initial Macros

We can’t leave the topic of ISPF edit macros without discussing Initial Macros.

Well, I can’t, because people often ask about initial macros; on the other hand, you can if you wish, because the succinct truth is that in most cases they’re probably more trouble than they’re worth — complicated and unnecessary, plus they slow you down getting into edit.  Still, people seem to want to try them, until they’ve actually done so.  So the remainder of this article is about initial macros, followed by any Footnotes, followed at the very end by some Links to the IBM Documentation on edit macros (which is worth looking at, even if all you want is the sample macros).

An initial macro is a macro that executes automatically whenever you go into ISPF edit AND certain conditions are met.

First, note that an initial macro runs after the data to be edited is read, but before it is displayed on the screen.  This means that your initial macro cannot contain any commands that reference “display values” such as UP, Down, DISPLAY_anything, and so on, because, there being no display yet, these “display values” have no meaning yet.  The IBM manual on ISPF Edit and Edit macros gives the example of a good initial macro as one containing the command CAPS ON, which would be okay because it would not depend on the display; it’s a profile setting actually.

In the simplest case, the main condition you need to keep in mind is that when you edit a dataset you are using an ISPF Edit Profile that is based on the dataset type.  That particular ISPF Edit Profile needs to specify your chosen initial edit macro.  If you remember the post “Profile on: ISPF Edit Profiles” (why, of course you do) then you remember that each dataset you edit has an edit profile determined by the last part of the dataset name – what would be called a “file extension” on the PC, and on the mainframe is called a “low level qualifier”, or suffix, or dataset type.  So if you have one dataset named ‘userid.allround.CLIST’ and another dataset named ‘userid.batch.JCL’, they each use a different default profile (named CLIST and JCL respectively), and hence each could have a different initial macro specified (or more commonly, no initial macro).

If you want to specify an initial macro to be associated with a particular ISPF Edit Profile, first edit some dataset that uses that Edit Profile (based on the last part of the name).  Enter PROFILE on the command line (and press enter) just so you can see your profile settings displayed.  Next enter IMACRO whatever on the command line, replacing “whatever” with the name of the macro you want to have assigned as the initial macro.  That should work, and the lines displaying your edit profile settings should show IMACRO as whatever; That is, as long as the Profile settings do not show that this edit profile is locked.  In that case you want to unlock the edit profile before setting IMACRO, and then lock it again.
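
For example, the sequence entered on the edit command line might look like the following, where MYIMAC is a made-up macro name and the UNLOCK/LOCK commands are only needed if the profile is locked:

PROFILE UNLOCK
IMACRO MYIMAC
PROFILE LOCK

Entering IMACRO NONE the same way removes the initial macro from that profile.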

There are also Application-wide macros as a possibility, which will execute the same initial macro for all dataset types.  You specify an application-wide macro with ZUSERMAC in your ISPF variables.  The easy way to do that is to use ISPF option 7.3 and insert a line that specifies ZUSERMAC. 

To insert a new variable in ISPF 7.3, type the letter i (for insert) on the far left of any line, to the left of one of the Variable names, and press enter.  A blank line appears.  Put the word ZUSERMAC for the Variable name, put P in the P column, leave the A column blank, and under Value you put in the name of your preferred initial macro. Save and exit.  (If you want to delete it later using 7.3 again, you can use D for delete in much the same way you used I for insert.) The Application-wide macro you specified will then be executed after any site-wide initial macro (if any exists) but before any initial macro associated with your ISPF Edit Profile for the specific dataset type.

Not complex enough yet?  Wait, there’s more.  When something like a CLIST (or any EXEC or program) uses ISPF dialog services to invoke ISPF edit, it can specify an initial macro name there too:

ISPEXEC EDIT DATASET('dsn(member)') MACRO(macname)

That last bit, incidentally, can be more useful than it looks.  As you may (or may not) know (or have guessed), you can run the ISPF editor from a batch job, and in that case you can specify an initial macro which is not only the initial macro, it is the only macro or anything else to happen in that edit session.  But that can wait until we have a post on Running TSO in Batch jobs.

You can also specify the name of the initial macro in the INITIAL MACRO field on the edit panel, the same panel where you specify the name of the dataset you want to edit.  More importantly, you can type NONE in that field to suppress the initial macro that would normally be executed for that edit profile.

That’s your introduction to ISPF edit macros.  There are some Links to IBM Documentation at the very end, if you page down.

__________

Footnotes

* footnote about alternatives to SYSUPROC:

SYSUPROC is not a typo for SYSPROC.  Usually SYSPROC has a bunch of datasets concatenated together, and you need for those datasets to be there, or various TSO things will stop working, such as most of ISPF. The ddname SYSUPROC, with that extra U in the middle, was invented precisely so people could leave SYSPROC for the system.  The U in SYSUPROC stands for User, and SYSUPROC is processed AHEAD OF anything from SYSPROC.

If you don’t want to use ddname SYSUPROC, you can concatenate your EXEC library together with the system libraries on ddname SYSPROC, but doing that yourself via an ALLOCATE in a CLIST is not as reliable, mainly because SYSPROC usually points to a whole list of library datasets, and whoever maintains your z/OS system can change the dataset names in that list anytime without telling you; If that happens then you would be left using your modified copy of the old list, and other TSO things might not work right (or at all). You can get around the outdated-list problem by using the IBM-supplied CLIST SPROC (rather than ALLOCATE) to concatenate your own CLIST library at the start of the SYSPROC concatenation (see IBM doc on the SPROC concatenation).  Similarly, you might have a command called CONCAT that does much the same thing as SPROC.

For further discussion of some of the options, see the IBM Doc.

That doc also mentions a third option, ALTLIB.  So, why not use ALTLIB, you might ask? After all, you can use it once you’re already in ISPF.  Yeah, but it’s more complicated than it sounds; For one thing, if you issue ALTLIB while you’re in split screen mode, it will only apply to the one “screen session” where you issue it; ALTLIB allocation does not carry over to your other split screen(s).  Also the allocation goes away when you exit the session where you issued ALTLIB. To see more, read “Using ALTLIB in ISPF”.

On balance, allocating SYSUPROC in a Logon CLIST is the simplest thing to do, if that works for you.  If that doesn’t work for you for whatever reason, try running SPROC in your Logon CLIST.
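If the SYSUPROC route does work for you, the allocation in the Logon CLIST might look something like this minimal sketch (the dataset name is a made-up example):

ALLOC FILE(SYSUPROC) DATASET('USERID.MY.CLIST') SHR REUSE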

_______________

Links to IBM Documentation

z/OS V2R2 ISPF Edit and Edit Macros
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.2.0/com.ibm.zos.v2r2.f54em00/toc.htm

 ISPF Edit and Edit Macros, downloadable PDF version
http://publibz.boulder.ibm.com/epubs/pdf/isp2em10.pdf

Edit-related sample macros
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.2.0/com.ibm.zos.v2r2.f54em00/lob.htm

Picture string examples
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.2.0/com.ibm.zos.v2r2.f54em00/ispem58.htm

Picture strings (string, string1)
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.2.0/com.ibm.zos.v2r2.f54em00/useofp1.htm

Specifying the search string
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.2.0/com.ibm.zos.v2r2.f54em00/ispem56.htm

Initial macros
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.2.0/com.ibm.zos.v2r2.f54em00/initial.htm

z/OS (2.1) TSO/E Command Reference
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.ikjc500/toc.htm

 z/OS (2.1) TSO/E CLISTs
https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.ikjb800/toc.htm

SYSUPROC Alternatives:

SPROC

Using ALTLIB in ISPF

TSO Background Color

How to Change your TSO Background Color

This short post describes a quick and easy way to get your TSO 3270 screen to display a light background rather than a black background when using IBM PC 3270 emulation software (aka PC3270).  Some people prefer a light background because it resembles most other things they do on Windows and hence feels more natural.  Running with a light background (rather than the default black) also has the practical advantage of allowing you to do screen prints without using up a lot of black toner on your printer.

If you have read the previous post on changing 3270 screen size, or even just the first part of it, you know how to find and edit your .ws file, where your 3270 configuration settings are stored.  Consider that knowledge a prerequisite to using this method.  Yes, we are just going to edit that settings file and swap in a new block of color settings in place of the existing ones.

The other method you can use is to go into the PC3270 drop-down configuration choices and change the colors one by one, modifying every color choice to specify your preferred background color instead of black, at the same time being sure that the typeface color (which you can also change) will show up against your chosen background.  That method is tedious.  So the easy thing for you to do is to copy the [Colors] section below (using cut-and-paste on your PC editor), using these lines to replace your existing [Colors] section in your .ws file.

As always, I caution you sincerely to make a backup copy of your .ws file before you edit it.

Then just delete the existing [Colors] section in your .ws file and replace it with a copy-and-paste of the [Colors] section shown in the sample below.

Using the sample color settings given here will give you a pale background color that is within the sky blue range (rather than white) only because white seems too bright (to me) and the pale sky color range happens to seem comfortable on the eyes (again, for me).  Presumably we're not much different from each other . . .

Shown below are the lines to copy-and-paste into your .ws file.  (If the lines appear to you double-spaced, remove the blank lines; double-spacing is something this WordPress blog editor seems to like to insert at times.)

[Colors]
BaseColorNormalUnprotected=005B00 D5DEE6
BaseColorIntensifiedUnprotected=D10514 D5DEE6
BaseColorNormalProtected=18464B D5DEE6
BaseColorIntensifiedProtected=000000 D5DEE6
ExtendedColorBlue=00154F D5DEE6
ExtendedColorGreen=0B5918 D5DEE6
ExtendedColorPink=743F85 D5DEE6
ExtendedColorRed=880F12 D5DEE6
ExtendedColorTurquoise=034569 D5DEE6
ExtendedColorWhite=1A1A1A D5DEE6
ExtendedColorYellow=9D811C D5DEE6
ExtendedColorDefaultHightlight=000000 D5DEE6
ExtendedColorDefaultNoHightlight=2A2A2A D5DEE6
OIAColorBackground=B6C6D3
OtherScreenColor=C9CBE4
OtherRuleLine=A0A000

_____________________________________________________________________
For reference, here's a link to the IBM manual on this:
3270 Emulator User’s Reference
Personal Communications for Windows, Version 5.7
ftp://ftp.software.ibm.com/software/network/pcomm/publications/pcomm_57/pc3270ref.pdf

 

A Few Good TSO Commands

Introduction to lesser-known useful commands like CMDE, ZEXPAND, ISRDDN, REFLIST, TSO PROFILE, and SEND.  Explains the difference between TSO READY mode commands and ISPF commands.

Learning more TSO Commands is like expanding your vocabulary.  You can manage to use TSO/ISPF knowing very little about it, but it’s like speaking a language – You feel more comfortable using the language when you know more words.

As with previous posts: If you already know everything, move along, this is not the article you’re looking for; Just keep on Googling.

Also note that some of the features discussed in these pages came in with release level 2.1 of z/OS (V2R1), and if you have an older version – Version  1.13 (V1R13) for example – You won’t have the newest stuff until your site converts to Version 2.  Additional new stuff comes in at 2.2 and so on, but 2.1 was a pivotal release with more and better enhancements than most new releases.  Most of the stuff discussed in this article  has been around for a long time (like SEND and ISRDDN), but some is new, like ZEXPAND.

You pick up more vocabulary as you go along; Words change, usage changes, usage varies from one place to another, you can make up new words.  There are a couple of EXEC languages you can use to make up your own TSO commands easily: REXX, which you may know from other platforms, and an older CLIST (Command List) language which consists basically of putting any set of ordinary TSO commands together into an executable file, though some additional niceties such as IF and DO logic are available.  You can also put sets of ISPF edit commands together into an executable file, called an Edit macro.

Adding your own commands would be a digression from today’s topic, though, which is to show you a few handy TSO commands you can use right now to expand your comfort zone.

We will also discuss usage quirks such as TSO READY mode (line mode) vs full screen mode, the meaning of the three-asterisk line, and the use of PA2 to redisplay a garbled screen.

CMDE might be the best TSO/ISPF command you probably never heard of.  You’ve noticed that the ISPF command line varies in length from one screen to another, and quite often the line is too short for you to enter some long command you want to type?  Just type CMDE on the command line, press Enter, and voila! You get an extended command line, in the form of a popup screen with a command line so long your typing can wrap to multiple lines.  The popup goes away by itself after you use it.

ZEXPAND is a variation on CMDE.   When in ISPF Edit, you might want to enter an extra-long Edit subcommand — Maybe you want to do a Find or a Change using a long text string.  For Edit subcommands you need to use ZEXPAND rather than CMDE.  Also the method for using ZEXPAND is a bit more awkward: You type ZEXPAND to get the popup with the long command line, then you enter your lengthy edit subcommand, then you press F3; when the popup goes away and you see the regular Edit screen again, you press Enter from there to get the long command to execute.  Cumbersome but effective I guess.

If you have never felt you needed a longer command line, keep reading, you might change your mind.

You want to use Instant Messaging from one TSO session to another? Then you want the SEND command.  Yes, before cell phones were invented, TSO users could text each other using SEND.  The syntax is:

TSO  SEND  'Your message enclosed in quotes ',U(Anybody)

Be sure you include that last part, the comma followed by the U (which stands for USER, meaning TSO Userid), followed by the TSO userid of your intended recipient enclosed in parentheses.  With the above example, you send a message to somebody who has the Userid “ANYBODY”.  If you forget that part, or make some syntax error that causes it to be discarded, then the message goes to the default destination: the main operations console; Generally it will then also appear in the System Log (where it will be viewable to everyone who looks at SDSF LOG).

What syntax error could one possibly make? you might ask.  A common error is to try to enclose an apostrophe within the quoted string.  That apostrophe then becomes the termination of the string.  The string, thus terminated, and not being followed immediately by ,USER(Whoever) . . . Well, it goes to the default destination (the main operations console, and thence to the aforementioned System Log).  Can you include an apostrophe in your message?  Yes, if you code it as two consecutive apostrophes:

TSO   SEND   'Isn''t that nice?',USER(Someone)

When using this type of TSO command, in general the comma can be replaced with a space – they’re interchangeable (in TSO) in most such cases.

Also note, if you didn’t already know it, that two consecutive apostrophes can be used in many places within z/OS to represent one single apostrophe within a quoted string; It can be done to include an apostrophe within a PARM string in JCL, for example.

When you use SEND, the message recipient does not actually see your message until they press Enter (or some F-key, or the screen clear key, and so on).  So if they are just sitting there staring at something on the screen, they won’t get it until they do something to wake up the screen.

Want to send somebody an entire dataset?  Sort of like appending a photo to a cell phone text message, right?  The TRANSMIT command, aka XMIT, does that.  To pick up the dataset you sent, the recipient uses the RECEIVE command.   Yes it works for Library datasets too.  Yes it can send the dataset to a TSO user on another z/OS system as long as the two systems are connected to each other.  In fact you can XMIT your Library to a flat file, download the flat file to your PC in binary format, and email the file if you want.  For that matter, you can also send emails directly from the mainframe – but we’re getting ahead of ourselves.  That would all be beyond the scope of the present very basic article.  Now that you know it is possible, you can look it up elsewhere using Google, of course, if you're super-interested in it; and it might make a good topic for a future article here; but for now let’s get back to today’s simpler theme – a few good TSO commands.
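Just so you have the flavor of it, the simplest case might look something like this (the node name, userids, and dataset names are made up; see TSO HELP TRANSMIT and TSO HELP RECEIVE for the real syntax):

TSO XMIT NODENAME.FRIENDID DATASET('YOUROWN.SOURCE.PDS')     (you send it)
TSO RECEIVE                                                  (your friend picks it up and answers the prompt)

To capture the result in a flat file for downloading instead, you would add an OUTDATASET operand on the XMIT.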

 Why do you say “TSO” before the “SEND”?

 In the Microsoft PC world, before there was Windows, there was DOS.  You can still open a DOS window on your PC and enter DOS line commands.  Analogously, under TSO, before there was ISPF, there was TSO READY mode.  So ISPF is kind of like Windows:  It’s a full-screen interface that was overlaid onto a previously existing line-mode system.  Yup.

There were a lot of TSO commands in the pre-ISPF world, and, like the PC DOS commands, they are still there.  When you enter the word “TSO” on the ISPF command line, the rest of the line is passed across to the underlying TSO READY-mode line-oriented handler.  If you use ISPF Option 6, it’s like opening a DOS window on the PC, and everything you enter within ISPF Option 6 is passed directly to the TSO READY-mode handler; but if you just want to enter one or two line commands, then you don’t need to go to Option 6 – you can just preface what you say with the word TSO on the command line on any ISPF screen.

Another little quirk you should know about TSO READY mode, that is, line mode:  When TSO writes to your screen in line mode, it puts three asterisks at the bottom of whatever it writes.  You are supposed to press enter at that point, after you finish reading the screen.  When you press enter – or an F-key, or clear, etc. – TSO interprets that as a signal for it to move ahead to the next bunch of data it wants to display, if any, or to return to full-screen mode if it has no further lines to display.  If you type anything on the line with the three asterisks, whatever you say will be thrown away, discarded, ignored.

Any input from you on the three-asterisk line is just taken as an indication that you have finished reading the lines and are ready for the next display.  The actual content of what you enter vanishes, uninterpreted and unseen.

What if you realize you have unwittingly become trapped in some process that is going to produce way more lines of data than you want to sit through, and you just want to cancel out of it somehow?  Usually the PA1 key, the ATTN key, or a Break key will do it, provided your emulator offers you such a key.

If the screen now looks wonky, press PA2 to restore the full screen display as it was before you got it messed up.  This works whether the screen was messed up by the line mode display or some other method.  For example, when in ordinary full screen mode, if you press the CLEAR key accidentally, how do you get your original screen back?  Just press PA2.  Or suppose, in Edit mode, you overtype some stuff you didn’t mean to change?  As long as you haven’t pressed Enter yet, press PA2 and the screen goes back to the way it was before.

What other TSO READY mode commands are there for you to use?  You might be amazed.  All of the IDCAMS commands, for starters, as well as the DFHSM data set migration, recall, and recovery commands.  Yes, you can enter things like:

TSO DEFINE CLUSTER(etc.etc.)

on the ISPF command line, provided you get the syntax right.
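For instance, a freehand KSDS definition might look something like this (the name and the numbers are made up, and your site may also require VOLUMES or SMS classes; the DEFCL HELP member mentioned below has the full story):

TSO DEFINE CLUSTER(NAME('YOUROWN.TEST.KSDS') INDEXED TRACKS(15 45) KEYS(8 0) RECORDSIZE(80 80))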

You also have available the DELETE and RENAME commands.

Besides deleting and renaming datasets, these can be used to delete and rename individual members of libraries if you get the syntax right and have enough confidence to try it.

Getting the DELETE syntax wrong can, unfortunately, result in your deleting an entire dataset accidentally, so DELETE is not really something you usually want to do as a freehand command entry.

HDELETE can be used to delete a migrated dataset without first recalling it.  That can be useful, and usually runs faster than other methods.

RENAME can provide a handy way to assign an alias name to a library member – You just add the operand “ALIAS” on the TSO RENAME (at the end of the line).
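A few made-up examples of those member-level forms, as entered from an ISPF command line:

TSO DELETE 'YOUROWN.MY.EXEC(OLDSTUFF)'                                (delete one member)
TSO RENAME 'YOUROWN.MY.EXEC(LONGNAME)' 'YOUROWN.MY.EXEC(LN)'          (rename one member)
TSO RENAME 'YOUROWN.MY.EXEC(REALNAME)' 'YOUROWN.MY.EXEC(RN)' ALIAS    (create alias RN for member REALNAME)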

Where are you supposed to get the syntax for this kind of stuff? From the TSO HELP of course.

Yes, there is a  HELP for the line-oriented commands, and it is, of course, accessed by entering TSO HELP followed by the name of the command you want help with, for example:

TSO HELP RENAME

However, I don’t usually do that.  All of the basic TSO READY mode HELP text from IBM is contained in a library named ‘SYS1.HELP’, and personally I just go and use ISPF browse to look at ‘SYS1.HELP’ – It’s a lot easier to read that way, because you can page up and down in full screen mode; also because looking at the member list will give you hints about what might be the name of the command you’re looking for, in case you don’t remember exactly.  (Was it HRECALL that I wanted, or HRECOVER??? For example.)  Moreover, there are a bunch of useful HELP members that don’t correspond exactly to the name of the command: For example, the DEFCL member tells you about DEFINE CLUSTER, and the DEFGDG member tells you about DEFINE GDG (Generation Data Group).

The TSO HELP members have, alas, a peculiar syntax you need to know.  Think of it like reading stock market quotes or the Racing Form or a Recipe book: First you have to know how to read it, but that isn’t hard to learn.  For the HELP members, you need to know that they show the syntax using this convention:

Anything that is shown unquoted in UPPER CASE LETTERS, you type it exactly as they show it.

Anything they show 'enclosed in single quotes', you replace with your own information.  As a variation, occasionally they indicate this by using Lower Case Letters rather than (or in addition to) single quotes.  (In general, single quotes mean paired apostrophes.)

Example: If they say

TRACKS('PRIMARY' 'SECONDARY')

Or

TRACKS(primary secondary)

In both cases you type the word TRACKS and the parentheses as shown, but you replace the words primary and secondary with data of your own choosing, such as:

TRACKS(15  45)

When using TSO READY mode commands, you can usually replace a space with a comma, at your discretion.  The following is equivalent to the above:

TRACKS(15,45)

But wait, you may say: SOMETIMES you actually need to enter your own data, AND you need to enclose it in single quotes.  MOSTLY the HELP text will represent this situation by the use of double apostrophes, like so:

DATA(''STRING'')

The above means you would be expected to supply something like this:

DATA('My text string goes here')

For another example of that, the SEND command – with which you are now already familiar – shows this in the HELP member:

SEND  ''TEXT''  USER('USERID LIST')

Which, as you now know, means you would actually enter something like this (from an ISPF command line, where you need to prepend the word TSO):

TSO SEND 'Are you still using that data set? ',USER(someguy)

Another Syntax item you want to know is that a vertical bar is used to mean “OR”.  Hence, if the HELP tells you that you have this option available (from the text in the DEFCL help member):

INDEXED | NONINDEXED | NUMBERED

That means that you can choose any one of the three choices shown, but only one.  (Many options you can leave off and allow them to assume their default values – most of the defaults are reasonable for obscure parameters you never heard of – but Indexing is not an example of something you want to leave to chance.)

Typically each HELP member is organized so that first it shows the complete syntax of the command, with all the options, and after that it tells you which options are actually required and what all the defaults are, and then the final section gives you a short explanation of each separate operand.

If you browse the members of ‘SYS1.HELP’ you will find descriptions of some commands you are probably not authorized to use. You probably don’t want to experiment willy-nilly with commands you can guess are probably going to be blocked (e.g., OPERATOR and ACCOUNT and some of the RACxxxx stuff).  Authorization violations, however innocent, may be logged someplace, and you don’t really want to be asked to explain things for which the only explanation you can offer is, uh, just playing around I guess.  The mood could vary depending on where you are, of course, but these days I’d lean toward the side of caution. Worse, some commands might NOT be blocked when they ought to be, and you could accidentally cause trouble that you don’t yet know how to fix.  So by all means read through the HELP text for anything that sounds like it could be interesting and useful, but don’t actually try to use anything you aren’t sure is safe.

Oh, and ‘SYS1.HELP’ does not contain HELP members for all possible TSO commands, just the basic ones provided by IBM – which is still a lot of commands.

Among these, then, what other TSO READY mode commands might be useful, you wonder?  I’m thinking you should know about PROFILE.

TSO PROFILE

Yes, there is a PROFILE command for the underlying TSO, apart from ISPF profiles and the myriad other profiles.  Most of the TSO PROFILE operands are well and truly useless these days, but there are a few useful ones.  First, PREFIX.  You can say things like:

TSO PROFILE PREFIX(AltName)

TSO PROFILE NOPREFIX

TSO PROFILE PREFIX(MyOwnID)

The PREFIX default is your own TSO Userid.  What does that mean?  It means that anyplace you enter a dataset name in TSO without enclosing it in quotes, TSO prefixes the DSN (Data Set Name) you enter with your TSO userid.  That’s what usually happens, right?  You go to ISPF option 2 and enter something like TEST.DATA and you end up editing DSN=YourOwn.TEST.DATA, and that suits most people most of the time.

If you say TSO PROFILE NOPREFIX on the command line and press enter, then TSO will stop prepending your userid for you.  Thereafter you would have to type YOUROWN.TEST.DATA rather than just TEST.DATA – and that wouldn’t be great if you really use your own datasets most of the time.  On some projects, however, you might use a lot of shared datasets, and that means you have to put quotes around the dataset names all the time, and maybe you get tired of doing that.  In that situation, people sometimes use TSO PROFILE NOPREFIX, or, more rarely, people set the PREFIX to whatever name qualifier is used by the project.

Also handy is TSO PROFILE MSGID which will get TSO to prefix an identifying message number, such as IKJ1234E, when TSO issues messages.  (This is different from the message identifier that you can set within ISPF option zero, which tells ISPF to do basically the same thing with ISPF-generated messages.)  Why do you want that message number?  So that when something goes wrong you can look the error message up with Google (or the search engine of your choice).

PAUSE is a TSO PROFILE operand that lets you get a little more information sometimes.  If you turn on TSO PROFILE PAUSE, then sometimes when a process fails and displays an error message it will just wait after the message, giving you a chance to enter a question mark (?) to request additional error information (if any happens to be available).  It doesn’t tell you to enter the question mark, it just sits there waiting for you to do something.  If you just press enter, then the failing process will just continue with the normal path of its failure.  . . . Except sometimes pressing enter will be interpreted as requesting an abend dump to be produced.  If you press the enter key just after, or during, a failure, and your TSO session seems to hang for a minute, it probably took an abend dump during that minute.

Commence Short digression about dumps produced under TSO:

Let’s follow this digression real quickly.  Dump creation depends not just on your settings and responses, but also on various other settings, some specific to you or to a product, some system-wide. In general if your TSO session seems to hang for a long minute right after a failure, it probably used that time to take a dump.  Sometimes you might want the dump, if only to give it to someone else to look at.  The surprise is that you might also want to find the dumps you don’t want, so you can delete them.

To find the dump, go into SDSF, assuming you use SDSF, which most places do, especially most JES2 places.  However, if your z/OS system uses JES3 instead of JES2, then you may have some other SDSF-like product instead of SDSF, and in that case you can ask somebody at your site what they use to look at sysout. This discussion is going to assume JES2 for simplicity, or at least for reduced complexity.

So, to find a dump created by your TSO session, go to SDSF (or a reasonable facsimile thereof).  On the command line first enter something like “OWNER myTsoID”, substituting, of course, your own TSO userid for “myTsoID”.  After that, enter “DA”, for “Display Active”.  It should show a job list consisting mainly of your TSO userid.  Yes, your TSO session runs like a job, with JCL and everything.

Put a question mark (?) to the left of your userid.  This lists the various sysout files associated with your TSO session.  The first three at the top contain your TSO session JCL and a bunch of messages similar to what you see at the top of any batch job.  After that, there may be other files in the list.

Possibly there is a CEEDUMP or SYSABEND or SYSUDUMP or something else that looks like a dump.  If so, that is probably where your dump is.  If you want to keep it to give to somebody, you can copy it into a dataset the same way you would any sysout file, that is, you type XD in the command entry column to the left of the sysout you want to save.  When you press enter, you will be presented with a screen that lets you provide a dataset name where you want it to save the dump.  So, do that if you want.

A dump in your sysout will probably go away when you logoff, or else it will stay in your output queue for a few days and then eventually go away if you don’t purge it first.  It is possible to fill up the spool by producing a large number of these dumps – I joke not – So you might want to purge it from your sysout queue if you find that you are creating a lot of them and they’re big and they aren’t going away quickly by themselves.

If you did not find a dump in your sysout, then, if a dump was produced, it probably went into a dataset.  That might be a very large disk dataset filling up space you could use for something else.  So, look through the system-generated messages that are near the top of your sysout file for your TSO session.  Do a FIND for “dump” (yes, just like in Edit).  Maybe the messages will say the dump was suppressed.  Maybe they will say it went into a dataset with a name like SYS1.DUMP01, in which case, unless you want to send somebody a copy of it, you don’t care, because (a) that isn’t your own disk space it’s using, and (b) those SYS1 dump datasets get reused in a loop by new dumps, so your dump didn’t cause any additional use of disk space.

However, there is another possibility, which is especially likely if you are running with IBM Language Environment, and that possibility is that a very large dump dataset has been created using your TSO userid for the first part of its dataset name, and hence using up disk space that you could perhaps use more beneficially for something else.  Moreover, it will create a new such dump dataset the next time it encounters a similar abend, and another one the time after that.  Each will have the time and date of the occurrence woven into the dataset name.  These can use enormous amounts of disk space during the testing phase of a project.  Again, I joke not.

As a humorous aside, note that these datasets, if left lying around, probably get backed up to tape by automatic backup procedures, hence cramming up lots of backup tapes and possibly annoying the people in charge of that.  Okay, it’s only a little bit funny.

So, find every occurrence of “dump” in your TSO session messages, looking for any of them that say something like “a dump has been taken to YourUserId.Dnnnnnn.Tnnnnnn.YourUserId” – the format of this can vary, but you should be able to spot them.  If you actually want to find the dump to give to somebody who has requested that you get a dump for them, well, there it is.  Send them an email.  Otherwise go to ISPF 3.4 and delete the dump dataset and all its similarly unwanted cousins.

For extra credit and additional disk space reclamation, do an ISPF 3.4 on the TSO userid of everybody else involved in the testing phase of your project, find their presumably unwanted and unnoticed dump datasets, and enquire politely as to whether they really are keeping them on purpose, or maybe they want to delete them to free up disk space for the project?  Again, it’s only a little bit funny, but you really can claw back loads of disk space this way under some conditions.

If you just want more disk space for your own immediate use and you don’t much feel like talking to your co-workers about it, then type HMIG to the left of their dump datasets while you are in ISPF 3.4 on their dataset lists.  This might (or might not) result in said datasets being migrated off disk to tape (Migration can fail for various reasons, for example the system might not be willing to migrate a dataset if that dataset hasn’t been backed up yet).  Most often this works, though; usually you can HMIG the large datasets of other users without ticking off the security system.

End of digression about dumps produced under TSO.

Another operand a bit similar to PAUSE is PROMPT, which allows failing EXECs to stop and prompt you to enter information that might allow the failing process to correct itself and continue. The opposite setting of this operand, TSO PROFILE NOPROMPT, can be useful if you have a habit of becoming trapped in long verbose EXEC failures that keep prompting you to answer dumb questions when all you really want is to exit.

VARSTORAGE(HIGH) is an offbeat TSO PROFILE operand that I like personally.  It influences the allocation of some of the invisible variables created by EXEC files, so they go into storage (memory) with 31-bit addresses (where there is more room) rather than 24-bit memory (which is a much smaller area).  EXEC files are used a lot behind the scenes in ISPF, and a lot of ISPF-based applications create a lot of big variables and use a lot of memory, so this has the potential to be more useful than it might sound.

You can string multiple operands together on one TSO PROFILE command, for example:

TSO PROFILE MSGID PAUSE PROMPT VARSTORAGE(HIGH)

Finally, TSO PROFILE NOINTERCOM will block receipt of messages that other TSO users might try to send you using TSO SEND.  (It does not prevent operations, or the system itself, from sending you messages.)  You would say TSO PROFILE INTERCOM to turn it back on.

TSO PROFILE with no operands will cause all of your current TSO PROFILE settings to be displayed.

Although the execution of TSO READY mode commands is handled by the line-oriented handler, that does not mean that the commands, once invoked, need to stay in line mode.

In fact a lot of them will go immediately into full-screen mode.  A good example is ISRDDN.  You can invoke ISRDDN via the TSO-prefix route by saying, unsurprisingly, TSO ISRDDN on any ISPF command line.  That brings up a full-screen interface program.  You can also invoke the same command directly from ISPF by using the command name DDLIST unprefixed by the word “TSO”.

If you have never used this command (TSO ISRDDN aka DDLIST), you definitely want to try using it.  It gives you a full screen display listing the datasets that are allocated to your TSO session, and lets you do a lot of stuff with them.

There is a column that allows you to enter a single-letter command against any dataset in the list.  Among these commands is E for edit, which invokes the ISPF editor.  If you have several library datasets concatenated onto one ddname, and you put the E against the top of that concatenated list, then you are placed into edit with a composite member list which is a merge of all the libraries allocated to that ddname.

So if you are trying to find out where a particular ISPF screen or a particular EXEC is coming from, this greatly facilitates the search.  You can Edit or Browse an entire concatenation as you would a single dataset.  The main ddname for EXEC members is SYSPROC, and the main ddname for ISPF screens is ISPPLIB.  There are also other ddnames used under various conditions – For example, any particular product might use some ddname of its own for its own ISPF screens and another for its EXEC members.

Incidentally, the main ddname for the TSO HELP text members is SYSHELP, so if you use the ISRDDN tool to browse that ddname you will find even more members than you would by just browsing ‘SYS1.HELP’ as previously suggested.  Doing this has the same appeal as reading a dictionary or an encyclopedia.  It’s a good way to learn stuff, if you like that kind of thing.  Note that not all HELP members correspond to actual commands – Somebody might have deleted a TSO command but forgotten to delete its HELP member.  Similarly, not all TSO commands have HELP members to describe them – somebody might add a command but not write a HELP for it.  The stuff from IBM is kept up to date pretty well, though.

Besides these single-letter commands aimed at the datasets, you can enter primary commands on the main command entry line, including but not limited to some good search commands.  Saying MEMBER XXX tells it to search the directory lists of all of the libraries in the entire dataset list to find a member named XXX, for example, and it highlights when multiple datasets contain the same member name.

Invoke TSO ISRDDN (or DDLIST) and, once in, type HELP on the main command entry line.  There is an extensive (and fairly readable) HELP text covering a lot of useful subcommands, including an ENQ enquiry to find out who is using a particular dataset, an APF enquiry to find out what libraries are in the currently active APF-authorized library list, and much more. This is one of the most useful commands out there.  Try it, you’ll like it.
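For instance, once you're in the ISRDDN/DDLIST display, a couple of the primary commands mentioned above might be entered like this (MYEXEC is a made-up member name):

MEMBER MYEXEC     (search every library in the list for a member named MYEXEC)
APF               (list the currently active APF-authorized libraries)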

Okay, you’re happy enough to get a list of the datasets you have allocated right now; but suppose you want to know what datasets you had allocated yesterday? Or, recently, anyway.  No problem, you want REFLIST:

REFLIST

The display you get from REFLIST will look a lot like the ISPF 3.4 DSLIST, but it will contain the names of the datasets you referenced most recently.  On the command line within the REFLIST display, you can enter SORT REFERRED to get them arranged in order of use, SORT NAME to get them arranged in alphabetical order, and so on, the same as in the ISPF 3.4 display.  You can use F11 and F10 to scroll right and left to get additional information about the datasets (space used, attributes, and so on).

Well, Enough for now.

Until next time, then.

[This post revised March 10th to add ZEXPAND and PA2, and the note about z/OS V2R1.  Revised again 25 April with minor corrections.]

 

Size Matters: What TSO Region Size REALLY Means

What does specifying an 8 Meg (8192K) TSO Logon Region Size mean?

It does Not mean you get an 8 Meg region size.   Nice guess, and a funny idea, but no.  For that matter, REGION=8M in JCL doesn't get you 8 Meg there, either.  Hasn't done so for decades (despite what you may have heard or read elsewhere.   If you have trouble believing this, feel free to skip down a few paragraphs, where you can find a link to some IBM doc on the matter.)

No, your region size defaults to at least 32 Meg, regardless of what you specify.

The people who set up your own particular z/OS system might have changed the IBM defaults, or (who knows) might even have vindictively limited your own personal userid.  For this discussion, though, we're assuming you're using a more or less intact z/OS system with the IBM defaults in effect.

So, you ask, What happens when you specify Size 8192 at Logon?

Size    ===> 8192  

When specifying Size=8192, you will be allowed to use up to 32 Meg of 31-bit addressable memory. This is the ordinary kind of memory that most programs use. You will get this for any value you enter for Size until you get up to asking for something greater than 32 Meg. Above 32 Meg, the Size will be interpreted differently.

If you enter a number greater than 32 Meg (that is, a Size bigger than 32768), it will be interpreted in the way you would expect – as the region size for ordinary 31-bit memory.

You have to specify a  number bigger than 32768 to increase your actual region size.

Just wanted to make sure you saw that.  It's not a subtitle.

Notice by the way that region size is not a chunk of memory that is automatically allocated for you, it's just a limit.  It means your programs can keep asking for memory until they get to the limit available, and above that they'll be refused.

So, what does the 8192 mean, then?  When specifying Size=8192, you will be allowed to use up to 8 Meg of 24-bit addressable memory.   (In this context, 8192 means 8192K, and 8192K = 8M.)

This is why trying to specify a number slightly bigger than 8 Meg,  say 9 or 10 Meg, is likely to fail your logon with a "not available" message.   The request for 9 or 10 Meg is interpreted as a request for that amount of 24-bit memory, and most systems don't have that amount of 24-bit memory available, even though there is loads of 31-bit memory.  So asking for 9 Meg might fail with a "not available" message, but if you skip over all those small numbers and specify a number >32Meg then the system can probably give you that, and your logon would then work. 

How did this strange situation arise, and what is the difference anyway?

Here starts the explanation of the different types of addresses

Feel free to skip ahead if you don't care about the math, you just want the practical points.

24-bit addresses are smaller than 31-bit addresses.  Each address — Let's say each saved pointer to memory within the 24-bit-addressable range — requires only 3 bytes (instead of the usual 4 bytes).

24-bit memory addresses are any addresses lower than 16 Meg.

There is an imaginary line at 16 Meg.  24-bit addresses are called "Below the Line" and 31-bit addresses are called "Above the Line".

More-Technical-details-than-usual digression here.  Addresses start at address zero.  The 3-byte range goes up to hex 'FFFFFF' (each byte is represented as two hex digits.  Yes, F is a digit in the hex way of looking at things.  The digits in hex are counted 0123456789ABCDEF).  There are 8 bits in a byte, so 3 bytes is 24 bits.  Hence, 3-byte addresses, 24-bit addressing.  Before you notice that 4 times 8 is actually 32, not 31, you may as well know that the leftmost bit in the leftmost byte of a 4-byte address is reserved, and not considered part of the address.  Hence, 31-bit addressing.

Decades ago the 24-bit scheme was the standard type of memory, and some old programs, including some parts of the operating system, still need to use the smaller addresses.   Why? Because there were a lot of structures that people set up where they only allowed a 3-character field for each pointer. When the operating system was changed to use 4-byte addresses, some of the existing tables were not easy to change — mainly because the tables were used by so many programs that also would have needed to be changed, and, crucially, not all of those programs belonged to IBM.  Customers had programs with the same dependency.  Lots of customers. So even today a program can run in “24-bit addressing mode” and still use the old style when it needs to do that.

Most programs run in “31-bit addressing mode”. So they are dependent on the amount of 31-bit memory available.  By this day and age, another upgrade is in progress.   It allows the use of 64-bit addresses. The current standard is still 31-bit addressing, and it will be that way for a good while yet. However, 64-bit addressing is used extensively by programs that need to have a lot of data in memory at the same time, such as editor programs that allow you to edit ginormous data sets.

When specifying Size=8192, you will be allowed to use up to 2 Gig of 64-bit memory, as long as your system is at level z/OS 1.10 or any higher level.  (That 2 Gig is just the IBM default limit from z/OS 1.10 onward, if you wonder – it is not a limit of 64-bit addressing itself, which can go far higher.)  Prior to z/OS 1.10, the default limit for 64-bit memory was zero. In JCL you can change this with the MEMLIMIT parameter, but there is no way for you to specify an amount for 64-bit memory on the TSO Logon screen.
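For the batch case, MEMLIMIT goes on the EXEC (or JOB) statement; a made-up example:

//STEP1   EXEC PGM=MYPGM,REGION=64M,MEMLIMIT=4G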

There is an imaginary bar at 2 Gig, since the word "line" had already been used for the imaginary line at 16 Meg.  Addresses above 2 Gig, that is, 64-bit addresses, are called "Above the Bar".  Addresses lower than that are called "Below the Bar".
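If you like seeing the arithmetic behind the line and the bar:

2**24 = 16,777,216 bytes = 16 Meg   (the “line”)
2**31 = 2,147,483,648 bytes = 2 Gig   (the “bar”)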

Here ends the explanation of the different types of addresses.

Maybe  you wonder what happens for various values you might specify other than 8192 aka 8 Meg.  So now we'll discuss the three possibilities:

– You specify less than 16 Meg
– You specify more than 32 Meg
– You specify between 16 Meg and 32 Meg (boring)

Specifying less than 16 Meg

Any user-specified logon SIZE value less than 16 Meg just controls the amount of 24-bit memory you can use.

The limit on how much you can get for 24-bit memory will vary depending on how much your own particular system has reserved for its own use (such as the z/OS "nucleus" and what not), and for what the system uses on your behalf, for example for building tables in memory from the DD statements in your JCL.  (Yes, you have JCL when you are logged onto TSO, you just don't see it unless you look for it.  The Logon screen has a field for a logon proc, remember that?  It's a JCL proc.)  Any 24-bit memory the system doesn’t reserve for itself to use, you can get.  This is called private area storage (subpools 229 and 230).

Typical mistake:  A user who thinks he has an 8 Meg region size may try to increase it to 9 Meg by typing in 9216 for size. The LOGON attempt may fail. It may fail because there is not nine Meg of leftover 24-bit storage that the system isn’t using.  Such a user might easily but mistakenly conclude that it is not possible for him to increase the region size above what he had before.  Ha ha wrong, you say to yourself (possibly with a little smile).  Because you now know that they have to specify a number bigger than 32768 — that is, more than 32 Meg.

Specifying more than 32 Meg

To increase the actual Region size, of course, as you now know, the user needs to specify a number bigger than 32 Meg (bigger than SIZE==>32768).  When you specify a value above 32Meg, it governs how much 31-bit storage you get.  The maximum that can be specified is 2096128 (2 Gigabytes).
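So, a couple of made-up examples of what you could put in the Size field to get a real increase:

Size ===> 524288      (512 Meg region)
Size ===> 2096128     (the 2 Gig maximum)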

Specifying any value above 32 Meg ALSO causes the user to get all available 24-bit memory (below the 16mb line).  This has the potential to cause problems related to use of 24-bit memory (subpools 229/230). This could happen if the program you’re running uses a lot of 24-bit memory and then requests the system to give it some resource that makes the system need to allocate more 24-bit memory, but it can’t, because you already took it all. The request fails. The program abends, flops, falls over or hangs. This happens extremely rarely, but it can happen, so it’s something for you to know as a possibility.

Specifying between 16 Meg and 32 Meg

What happens if you specify a number bigger than 16 Meg but smaller than 32 Meg? You still get the 32 Meg region size, of course. You also get all the 24-bit storage available — The 24-bit memory is allocated in the same way as it would have been if you had specified a number above 32 Meg. So asking for 17 Meg or 31 Meg has EXACTLY the same identical effect: It increases the request for 24-bit storage to the maximum available, but it leaves the overall region size at the default of 32 Meg. Having this ability must be of use to someone in some real world situation I suppose, or why would IBM have bothered to provide it? but imagining such a situation evades the grasp of my own personal imagination.

THE IBM DOC

If you want to see the IBM documentation on this — and who would blame you, it’s a bizarre setup — check out page 365 of "z/OS MVS JCL Reference" (SA23-1385-01) at http://publibz.boulder.ibm.com/epubs/pdf/iea3b601.pdf

Caveats and addendums

The IBM-supplied defaults can be changed in a site-specific way. Mostly the systems people at your site can do this by using an IEALIMIT or IEFUSI exit, which they write and maintain themselves. Also IBM is free to change the defaults in future, as they did in z/OS 1.10 for the default for 64-bit memory.

If you want to know whether there is a way to find out what your real limits are in case the systems programmers at your site might have changed the defaults, yes there is a way, but it involves looking at addresses in memory (not as hard as it sounds) and is too long to describe in this same article.

Yes, this is a break from our usual recent thread on TSO Profiles and ISPF settings.  We'll probably go straight back to that next.  The widespread misunderstanding of REGION and Logon SIZE comes up over and over again, and it happened to come up again recently here.  There is a tie-in with TSO, though, which you may as well know, since we're here.

A lot of problems in TSO are caused by people logging on with region sizes that aren't big enough for the work they want to do under TSO.  The programs that fail don't usually give you a neat error message asking you to logon with a larger region size — mostly they just fall over in the middle of whatever they happen to be doing at the time, leaving you to guess as to the reason.  Free advice:  If you have a problem in TSO and it doesn't make much sense, it's worth a try just to logon with SIZE===>2096128 and see what happens.  Oftentimes just logging off and logging on again clears up the problem for much the same reason:  Some program (who knows which one) has obtained a lot of storage and then failed to release it, so there isn't much storage left for use by other programs you try to run.  Logging off frees the storage, and you start over when you logon again.

Batch JCL corollary:  If you get inexplicable abends in a batch job, especially 0C4 abends, try increasing the REGION size on the EXEC statement.  Go ahead, laugh, but try it anyway.  It's an unfailing source of amusement over the years to see problems "fixed" by increasing either the JCL REGION size or the TSO Logon Region size.  All the bizarre rules shown above work the same for batch JCL REGION as for TSO LOGON Region Size, except for 64-bit memory, which can be changed using the MEMLIMIT parameter in JCL but cannot be changed on the TSO Logon screen.  Remember, you have to go higher than 32M to increase your actual region size!

 

You need to specify a value bigger than 32 Meg to increase your actual (31-bit) TSO Region size (or JCL Region size).

 

Profile on: ISPF Edit Profiles

 

What are TSO ISPF edit profiles anyway, What do they do for you, and How do you get them to do that?

This is another simple discussion of basic ISPF information, for people who don’t know much about ISPF.   All you experts, just move along.  These aren't the blog posts you're looking for.

You get a separate ISPF edit profile for each data set type you edit.

Data set type?

The data set type (in this context) is the file extension part of the data set name.

In z/OS, it isn’t actually called a file extension.  They call it the "low level qualifier", but it's the same idea as the file extension on PC file names. It's the last part of the data set name, after the last dot. Let's just call it type, since that's short, Englishlike, and easy to remember. Coincidentally it's also the name used for it on the ISPF "Edit Entry Panel", the screen that you see after you select ISPF Option 2, Edit.

So let's do that, select ISPF Option 2, Edit.

Next, type PANELID on the command line. Presto, the word ISREDM01 should appear near the upper lefthand corner of the screen. That's just the name of the first Edit screen, the Edit Entry Panel. I like to refer to panels by names — It's unambiguous — When you say "the Edit Entry Panel", somebody might not get exactly which screen you mean, unless they happen to be looking at it and notice that it has a title saying "Edit Entry Panel".  But when you say ISREDM01, that can only mean ISREDM01.

You've probably picked up that "Panel" is another word for screen.

So, on that ISREDM01 screen, put in the name of one of your data sets, assuming you have data sets.

If not, any data set you can access under TSO will do, as long as it's available, and you don't do "save".

(People will warn you that you shouldn't do that, because you might "save" accidentally. Okay, you've been warned. Let's continue.)   (To exit edit without saving the data, the command is CANCEL, in the unlikely event you don't already know that.)  (If you're REALLY nervous about accidentally saving, go up to the command line and enter AUTOSAVE OFF – – That will prevent it from saving automatically if you unthinkingly press F3.  It will only save if it sees the explicit SAVE command.)

For the example, let's use a data set that has "C" as the type (That is, any data set with a name that ends in .C — It would be even better for illustrative purposes now if you find one that also contains C program source. If no C source is available, find some other kind of source in a data set that ends in .COBOL or .PLI . . . or .ASM or .PASCAL … but if you don’t have any of that, just pick any data set you do have available.)

When you're into your edit session and you see your source displayed (on panel ISREDDE2), type the word PROFILE on the command line and press enter.

That will cause the ISPF editor to display some information lines just above the top line of the source/data, somewhat like this:

=PROF> ….C (VARIABLE – 251)….RECOVERY ON….NUMBER OFF………
=PROF> ….CAPS OFF….HEX OFF….NULLS OFF….TABS OFF…………
=PROF> ….AUTOSAVE ON….AUTONUM OFF….AUTOLIST OFF….STATS ON..
=PROF> ….PROFILE LOCK….IMACRO NONE….PACK OFF….NOTE ON……
=PROF> ….HILITE C LOGIC PAREN CURSOR FIND MARGINS(1,251)………

A lot of the things shown on those five =PROF> lines have values like ON and OFF. Unsurprisingly, those can be toggled between ON and OFF. Some also have extra options you can specify. So we'll talk about a simple one, then about one with options you don't care about, and then we'll see one with options you do care about.

CAPS is a straight ON/OFF toggle.

With CAPS ON in effect, any data you enter into your data set will be automatically changed to uppercase. That’s good for editing JCL. Accidentally putting lowercase letters into JCL statements is a common cause of JCL errors. On the other hand, for editing ordinary text, or a C program, you want CAPS OFF, which allows you to enter data in whatever uppercase and lowercase way you want, and it stays that way.

CAPS is one of the settings that ISPF will automatically change for you dynamically, depending on whether or not it finds any lowercase text in the source. So you only need to set it yourself — or reset it — if you don’t like the editor’s choice on the matter. For example, you might have some source that currently contains all uppercase data, but you want to add some lowercase. Every time you try, the editor converts it to uppercase as soon as you press enter. It’s downright annoying. That’s when you want to say CAPS OFF, and press enter, before putting in your lowercase text.

Some of the settings — HEX springs to mind — have options, but almost nobody would want the options.

On the command line, enter HEX ON, or just HEX, and the editor will display a two-line hexadecimal interpretation beneath each source/data line.

You:

Command ==> HEX ON
****** ****************************** Top of Data ***
000001 #pragma title ("TESTPGM: C Sample program")
000002 // This is a Comment

The computer screen’s response (approximately):

Command ==>
****** **************************************** Top of Data ***
000001 #pragma title ("TESTPGM: C Sample program")
– – –  79988984A8A98447ECEEDCD74C4E899984999898975444444444444444
– – –  B7917410393350DF3523774A03021473507967914FD000000000000000
————————————————————————
000002 // This is a Comment
– – –  664E88A48A484C899989A4444444444444444444444444444444444444
– – –  1103892092010364455300000000000000000000000000000000000000
————————————————————————

This is the Vertical Hexadecimal display mode. The capital letter C has the hex value C3, so directly beneath each capital C in your data, the hex digit C will be displayed, and beneath that, the hex digit 3.  The 2-character hex value C3 is thus displayed directly below the single text character C and it all lines up in a way that is easy to read. Well, as hex displays go, it’s easy to read.

That’s vertical mode, and that accounts for the "VERT" where it says HEX ON VERT in the related =PROF> display (which I omitted from the illustration above for simplicity). The other option — DATA instead of VERT — does not line up beyond the first character in each line.   If your capital C happened to be in column 1, the hex digit C would go right beneath the capital C in the text, but then the associated 3 would come right after it on the same line, meaning that the hex equivalent for the letter after your capital C would start in column 3 on the hex line.  So, okay, if you edit data records that are only one or two bytes long, maybe you would want that, and the hex would only take up one extra line per record, rather than two. Most of us actually don’t have any data like that, though. Anyway . . .

When you enter HEX OFF the hexadecimal display lines disappear.

This setting is remembered in the edit profile for the type you’re editing. So suppose you enter HEX ON to see the real hex values of some gibberish-looking data, and then you're interrupted.   When you come back, it's time to go home, and you've forgotten about the hex and you don't look at the screen very closely.  You press F3 a few times, logoff and go home. You come back another time and select option 2 to edit that data set again, or any other data set with the same ending on its name. The data will automatically be displayed with the Vertical Hexadecimal display mode.  Yes, and this is only part of what TSO/ISPF Edit profiles will do for you.  And no, it isn't a funny prank to do that to somebody when they walk away from their desk without locking their display.  Find some other way to express your affection.  Or disaffection, as the case may be.

Unlike the HEX setting, HILITE has meaningful additional operands you can (and should) choose.

HILITE means COLORS — Setting HILITE ON causes program source to take on meaningful colors, like what you get with pretty much any good PC-based program source editor.

You can enter HILITE ON to activate the basic feature.  However, if instead  you enter the word HILITE by itself, a useful popup screen will appear (named ISREP1).

You are shown (on ISREP1) a list of the choices you have for the type of highlighting you want, such as COBOL, REXX, JCL, XML, C, HTML, PASCAL … You can page down (F8) to see more choices.

When you just type HILITE ON, with no specifics beyond that, the software generally tries to figure out what type of data/source you're editing, based largely on its quick evaluation of the first few lines. The evaluation is usually good, but you can do better. Since this is a C data set type we're editing — remember we entered a data set name with .C on the end — we're going to assume the data set we're editing contains C source code. (Yes, really I mean C or COBOL or whatever you found that you could use for this.)

Yes, that's why we chose .C in the first place, just to illustrate what you can do with highlighting. You caught me at it again  :)

So we're sitting here with the ISREP1 popup screen displayed in front of us. The title of the panel says "Edit Color Settings".

You Tab forward to "Language:" and you put in the number 4 to indicate C (unless you see some other choice you like better for your own situation).

You hit the Tab key again to go to "Coloring:", where you put 3, for "Both IF and DO". I've failed to imagine a scenario where anybody would want only one or the other of those, but such a thing must be possible, or the choices wouldn’t be offered. Maybe you can think of some hypothetical situation I've overlooked.

You tab forward again, and you make sure to put a slash (/) to the left of "Parentheses matching" to select that feature. Seriously, isn't that one of the main reasons you want highlighting? So you can get matching sets of parentheses in different colors, so they, uh, match ? Again, my imagination must be deficient, not seeing when anybody would say no to this. Anyway, you can keep tabbing forward and use slashes to select both of the next two options as well, which seem to embellish the results of the FIND command. (If you want to unembellish them after you’ve seen them, without resetting anything else, the magic phrase for you to use there is RESET FIND.)

You press Enter, though you don’t need to, and you stare at the screen for a second to make sure it suits you. You press F3. You're back on panel ISREDDE2 and, if you are in fact editing program source, it should be a lot easier to read now (unless of course you already had the highlighting turned on before we did this . . .)
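(A side note, and I'm hedging here because the exact operands can vary a bit by ISPF level: I believe you can skip the popup and name the language right on the command line, something like HILITE COBOL, or HILITE AUTO to let the editor guess. If it balks, just use the popup, or check HELP from within edit.)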

One other very important thing for you to have in your ISPF edit profiles is RECOVERY.  In all but very rare and unfortunate circumstances, you want to set RECOVERY ON. This has the happy effect of enabling the UNDO command to work.

You want the UNDO command to work.

If you ever see the message “UNDO not available”, your first response should be to set “RECOVERY ON”. However, there is another way to get the UNDO command to work: enter “SETUNDO STORAGE”.

Recovery is a feature designed to protect you against losing your editing changes if some disaster strikes while you’re in ISPF edit — a power failure, a system crash, or your TSO session freezing up and eventually getting cancelled. When RECOVERY mode is ON, a temporary work file is created, and ISPF keeps track of your changes using the work file.  When you come back later after the disaster, you get a chance to save what you had been editing.  One warning: the EDIT session you have while in recovery processing is not exactly the regular edit mode.  If you want to continue editing the same source, it's a good precaution to go ahead and save what you get from recovery processing, exit from edit, and then go back into a new edit session.  Just sayin.

RECOVERY ON does slow you down just a little, but on a decent system the difference isn’t even noticeable in most cases. Occasionally, though, if you are editing a very large data set and making a lot of scattered changes, and your response time seems to be excruciatingly slow, you might pick up a little bit of speed if you choose to turn recovery off and take your chances with disaster; but issue the “SETUNDO STORAGE” command anyway, which, as far as I've ever seen, doesn’t slow you down in any way detectable by ordinary mortal people.  It's a good idea to issue SAVE fairly often in that case (if you're making changes you want to save).

“SETUNDO STORAGE” causes the software to keep track of your changes in a table in memory, and then the UNDO command can use that table to back out your changes.

If you are using RECOVERY ON, and you ever see the message “Recovery suspended”, that means UNDO is not going to be able to undo your changes by using the work file. That can happen if the work file fills up and can’t be extended. Besides UNDO being gone then, the disaster recovery feature is gone too. Sometimes you can just save your changes and exit from edit, then go back in, and the Recovery feature will become available again, or can at least be turned back on by saying “RECOVERY ON”; but sometimes not. In the latter case, you might try talking to someone responsible for z/OS system maintenance at your site, and let them try to fix it. Meanwhile of course you’ll be careful enough to save your edited data set at judiciously chosen intervals, and to use “SETUNDO STORAGE”.  Note that SETUNDO is all one word, no spaces. An important point to remember if you expect to need it.

On balance, all else being equal, you want RECOVERY ON.

If that doesn’t work for you, then you want to say SETUNDO STORAGE.
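To recap, in command-line terms: the first of these is what you want in your profile, and the second is the fallback for when recovery can't be used:

===> RECOVERY ON
===> SETUNDO STORAGE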

There are some other things you eventually want to know about in your edit profile settings, but let’s leave that and skip ahead a bit now, to some overview stuff.

To make the =PROF> information lines go away again, you do not say PROFILE OFF. That has the humorous effect of changing the name of the profile in use from “C” to “OFF”.

What happens there is this. You can use a different PROFILE for editing your file; it doesn’t have to be the name from the file extension — that’s just the default.   On the mainframe, the concept of using the file extension as a data set type has been around since before z/OS, since before its predecessor MVS, for as long as TSO has been a product, before the most ancient PC was ever built, when the very idea of a personal computer was, like cell phones, still Science Fiction. You’d think people would have caught on by now, yes?  But no, not really. People don’t stick to the plan. Most people don’t even seem to be aware of the plan. And it isn’t enforced. IBM likes to give people choices (which is both a blessing and a curse). So people name their data sets with any whimsical extension that seems like a good idea at the time. There are boatloads of data sets containing C programs but having file extensions like .C2, .C3, .CSOURCE, .Cx, .PROGRAMS, and so on. By default that would mean the creation of a separate profile for every such “type”, leading to an awful lot of edit profiles being created for you.

The z/OS system, left to itself, as distributed, allows a maximum of 25 edit profiles, corresponding to a possibility of 25 data set types. That might be enough if people named data sets according to a plan that treated the ending as the file type, but in the real world the default should be something more like 255, which happens to be the maximum value that the z/OS system maintenance people at your site can specify if they want to change it (the setting is called MAXIMUM_EDIT_PROFILES).

My recommendation is that you try to ask them (as nicely as you can) to do that:

Increase MAXIMUM_EDIT_PROFILES to 255, as documented by IBM in the “ISPF Planning and Customizing” IBM manual.

What happens if they leave it at the default of 25, and you have just gone into ISPF edit on your 26th ever data set type? Your least-recently-used edit profile is deleted, unless you’ve “locked” it to prevent that. That’s why you want the maximum to be increased. Not because you want a lot of edit profiles with names like CX2, DATAX and BKUP, but because you don’t want your real edit profiles to roll off the stack.

What you can do (without getting anybody to change the way ISPF is set up) is to set up a few edit profiles the way you want them for a few data set types, and then LOCK those profiles. After that, when you edit a different data set that is really the same kind of data set but isn’t named with the same extension at the end, you can enter something like PROFILE C to put into effect your saved edit profile named C, or PROFILE JCL to get the one called JCL, and so on. To lock a profile, you enter PROFILE LOCK on the command line.

Right, you’d think that would just bring in a profile named LOCK, wouldn't you?  But no, this is how they implemented the feature. You enter the command PROFILE LOCK. Later if you want to change something in the profile, you say PROFILE UNLOCK before you change it. If you forget to do that, then any changes you make to the profile will be temporary, meaning the changes will disappear when your edit session ends, and the profile will go back to the way it was. When you edit a data set with a name like SOURCE.C.BKUP and you want to use your C program profile settings, you enter PROFILE C on the command line, and you’ll get the highlighting and such that you put into your edit profile named C.
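So the one-time setup looks something like this, using C as the example profile name:

===> PROFILE C          (switch to, or create, the profile named C)
(set HILITE, RECOVERY, NUMBER, NULLS and so on the way you like them)
===> PROFILE LOCK       (freeze it)

and later, if you ever want to change it, you say PROFILE UNLOCK first.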

So you don’t say PROFILE OFF, which would bring about the use of an edit profile named OFF.   No, what you do to get rid of the PROFILE display is you say "RESET", or "RESET SPECIAL". Yes, those =PROF> lines are SPECIAL lines, as differentiated from, say, LABEL, ERROR, CHANGE, COMMAND, or EXCLUDED lines, any of which can also be RESET separately.

COMMAND? you ask.   Really? When would somebody want to say "RESET COMMAND", in reality, in the real world? It's a digression, but since you ask . . .

Suppose you're editing a big program source file, or a data file, something big, and you've excluded some lines and done other things to get the display to show you exactly the part you want to look at, but somewhere along the way you inadvertently typed, oh, say the letter "M", in a line number field, while you were paging up and down through the data. Now the upper righthand corner of the screen — the "SMSG" (short message) area — says "MOVE/COPY is pending", and the editor won't let you do the stuff you want to do until you deal with that.

You don't just want to say "RESET" and lose your carefully crafted display of the lines of interest. You certainly don't want to page up and down endlessly through the data looking for the errant keystroke.

Voila, you have found the situation where you want to say "RESET COMMAND".

Similarly, to get rid of the PROFILE display lines without affecting anything else, you can say "RESET SPECIAL".   The other thing you can do is just delete them with the “D” or “DD” line commands, just as if they were source lines. Yeah. It also works for other non-data lines you might get from time to time.

So, that’s enough meta-level information.

We can go back and look a little at more of the edit profile settings if you still want to know more, but you’ve got all the basics you need now — at least as far as ISPF Edit Profiles.  Although, having a bit of knowledge about the workings of line numbering, Nulls, and ISPF stats can occasionally be quite useful  . . .

The other settings . . . Well . . .

STATS controls the “ISPF Statistics” that show up in the member list of a library (a PDS, Partitioned Data Set, or PDSE — it has several names, but we’ll call it “library” for now). The name “ISPF Statistics” refers to data like the date, time, and userid associated with the last change to the data.   So if you edit a member, and save it, the statistics will have your userid together with the date and time it happened, if you have STATS ON (which is the default). If you don’t want the statistics to appear, you edit the member again, and type STATS OFF on the command line. Press enter, then type SAVE. Press F3 to go back to the member list, and the statistics are gone. When would you do this? If you have a library that is usually updated by some batch job, and the job does not update the ISPF Statistics, then any existing ISPF Statistics, such as the record of your edit, will remain as a long-standing monument to your edit session. That would be misleading if the batch job ever updates the data subsequent to your edit update — it would look as though the batch job's changes had been done in your edit session.  So you'd want to get rid of the ISPF Stats in that case, to avoid confusing anyone.  There may also be other types of cases where you would do this; I don’t really know about that, though.
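In command-line terms, wiping the stats from a member you just edited comes down to this, followed by F3:

===> STATS OFF
===> SAVE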

The main things for you to know about the ISPF statistics are: (1) They are controlled by you, setting STATS OFF or STATS ON, but if you don’t do anything, the default is ON, and that means your userid plus the date and time of the update will appear in the member list after you SAVE the member. (2) You can also change the ISPF Statistics outside of edit, using ISPF option 3.5. (3) The data inside a library member can be changed by a program, usually running in a batch job, without causing the ISPF Statistics to be updated or destroyed (although some batch jobs do update the stats). (4) If you copy a member from one library to another, for example using ISPF option 3.3 copy, or using IEBCOPY in a batch job, the ISPF Statistics are copied along with the member (even though the stats are not contained within the member itself; they reside in the “directory” of the library). So if someone copies a member from one of your libraries to some other library, and some third person just looks at the member list, they might easily think that you’ve been editing that other library. In some cases that can lead to social incidents. It’s just something for you to know, so if somebody ever asks you, perhaps in a hostile manner, why your stats are on some forbidden library, and you honestly have no idea, remember the words “3.3 copy”. Somebody (not you) must have used an ISPF 3.3 copy. That explains everything. You’re innocent. Innocent, I say. Really.

The Line Numbering settings (NUMBER and AUTONUM) are interesting to the extent that they sometimes cause trouble by virtue of their relative invisibility.

Here is a quick introduction to ISPF line numbers: ISPF line numbering differs depending on whether the data you are editing contains fixed length records or varying length records.

For fixed length records, based on an ancient tradition originating in the time of punched cards, the line numbers are put into the last eight columns of each line, on the far right. ISPF has put a tweak on that, though. If you are editing a member of a library, and if you have STATS on (which is the default for STATS), then ISPF borrows the last two digits of the line number and uses those for the “modification level number”.  They start out as 00. If you change any line in an edit session, the 00 at the end of that line becomes 01 (no matter how many times you change it within the same edit session). Go out of edit and come back, change the line again, and it goes from 01 to 02. And so on.
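As an illustration (a made-up line, not drawn to exact column scale), a line in a fixed-length member with STATS ON might look like this, with the number field occupying the far-right columns 73 through 80:

         MOVE WS-TOTAL TO RPT-TOTAL                              00031004

Here 000310 is the line number proper, and the trailing 04 is the modification level just described.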

For varying length records, the punched card scheme doesn’t work, since there are no specific column numbers that can lay claim to the title of being the last eight columns for all the records.  So the line numbers go into the first 8 columns.

There is a “third way”, called COBOL line numbering. If you say NUMBER ON COBOL, then the line numbers go into the first six columns of each record, which is a COBOL standard.

The "fourth way" of course is NUMBER OFF, my personal favorite.  When you use NUMBER OFF, you still see line numbers displayed by the editor on the lefthand side of your data.  Those line numbers that are displayed are not actually in your data records, though.

What kind of trouble can the line numbers cause, and in what way are they invisible, you might ask. Take a quick example.

Your data is composed of fixed-length 80-byte records, and the last 8 columns are blank. (You have NUMBER OFF in effect.)   Your screen happens to be set to size 24 x 80 (24 lines by 80 columns, the usual default, and you haven't changed it).

The editor displays line numbers on the left of the screen regardless of whether or not there are actual line numbers within your data.  Since that line number display takes up the lefthand portion of the screen, you don't see all 80 columns of your data, and so you do not see the last 8 columns (unless you scroll over to the right using F11, which usually you don’t).

With those circumstances in effect, you decide to excerpt a few lines from some other data set and insert them into the source you're editing — maybe a few lines of a C program that do something you want. Quite unknown to you, the data you're borrowing was previously edited with NUMBER ON in effect. So now you have eight invisible digits at the far right of a few of the lines in your own file. You can see them if you scroll right (by pressing F11), but you don’t think about it.   Soon, if it’s a C program, you compile it, or if it’s an EXEC, you execute it, or if it’s data, you feed it into the program that reads it as data.  One way or another, you try to use it and you start getting bizarre error messages that make no sense to you. That is, they make no sense until you idly happen to press F11  and the display scrolls right, making columns 73 through 80 visible, and you observe the stranded line numbers hanging there.

You can mystify yourself for half an hour finding that; even longer, if you’re tired or something.

What else can go wrong? Another example.

You have data without line numbers (you have NUMBER OFF in effect), and you decide you want to have line numbers. You enter NUMBER ON, or just NUMBER with no operands, and the editor generates line numbers for you immediately, inserting them into columns 73 through 80 of your records, as if by magic. Unfortunately in this case, you happened to have data in some of those columns already, in a few records you hadn't noticed. That data is overwritten by the line numbers. So a record that, a few short moments ago, ended with “/*This line is fine*/”, now ends with “/*This line is01230000”.  You don't notice immediately.  Next time you compile the program, a few lines of your C or PL/I program are ignored by the compiler because they have become part of a comment that starts with "/*This" and will end when the next “*/” is found, perhaps a few lines down.  In languages that allow multi-line quoted strings, the same trick can also happen with a quote that loses its closing quotation mark  :)

If you're lucky, the compiler will generate an error message — though it's likely to be a bizarre and baffling message, telling you that you have a missing "end" statement, or an "else" without a preceding "IF", or mismatched parentheses.  That might not happen, though.  Some compilers will give you a  warning message when they notice a semi-colon embedded inside a comment, and this is one of the reasons such warnings exist; but not all compilers do that.   It is possible that no error message will be generated at all, if the particular lines of program code you've accidentally commented out do not prevent the compiler from making sense of what remains — for example if you only commented out a line that increments a loop counter or resets a pointer, leaving you with an infinite loop, or if you inactivated some lines that subtract a fee, or add a bonus, in some accounting calculation, leaving you with trouble.  Oh well. That’s line numbers for you.

There is a command UNNUM that you can use to remove line numbers, replacing them with blanks.
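So, for the borrowed-lines scenario a few paragraphs back: once you spot the stranded digits sitting in columns 73 through 80, the quick cleanup is

===> UNNUM

which blanks out the number field on every line; harmless in that scenario, since the rest of your lines had blanks there anyway.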

When you have Line Numbering active, and you insert new lines between existing lines, the editor generates in-between line numbers for the new lines.  If you insert two lines between line 200 and line 300, they'll get line numbers such as 210 and 220 assigned.  If you insert two more lines between 210 and 220, those will become 211 and 212.  When you insert a new line between 211 and 212, the editor changes 212 to be 213, and gives the new line the number 212.  So if you go on like that for a while, inserting more lines between other inserted lines, eventually you're generating a lot of renumbering work for the editor whenever you insert a line, and having the editor do all that extra work slows it down.  That brings us to the RENUM command.  When you issue RENUM the editor renumbers all the lines, using increments of 100.

That, in turn, brings us to the AUTONUM profile setting.  If you are using NUMBER ON and you have AUTONUM ON as well, the editor will automatically do a RENUM of the data as part of SAVE processing.  Interestingly (which in this case means "confusingly"), the line numbers being displayed are not changed as part of AUTONUM processing.  It makes sense when you think about it for a second — the renumbering is part of the saving — but it still feels like an oddity.

NULLS is another setting that can be of interest in that it can both create trouble and do the opposite. Most of the time you don’t care, but occasionally you do. Setting NULLS on or off makes other commands behave differently.

Say you want to cut-and-paste some comments into columns 60-70 of your data, all lined up nicely on the far right. You find that when you press enter, the comments are pulled back to the left, so there is only one blank between whatever was on the line before, and the comment you added. That means you have NULLS ON.   Say UNDO so you can try again. Say NULLS OFF. Again paste your comments into columns 60-70 of your data, all lined up nicely on the far right. You press enter, and your added comments stay where you put them.  That’s one practical example.

The underlying idea here is that blanks, in particular trailing blanks, are viewed differently depending on the NULLS setting. With NULLS OFF, the tendency is for each trailing blank to be treated as a character occupying space, whereas with NULLS ON, each additional trailing blank after the first tends to be viewed as more like empty vacuum space.

I don't have a good example to illustrate that right now, so let’s look at something totally irrelevant to edit profiles, but relevant to blanks and nulls.   I’m guessing you don’t yet know about the data shifting line commands. They’re good, they’re interesting, and they’re very useful.

A single right parenthesis ) shifts data right, and a single left parenthesis ( shifts it left. If you put )5 on a line number and press enter, that shifts the data right 5 places, and (5 shifts the data 5 positions to the left. This is a straightforward shift. All of the data on the line is shifted, and data at either end of a line can be lopped off (truncated, as they say) if you shift too far. On balance, when you do )10 and follow it with (10, you end up where you started, except for any data you may have lost due to truncation.
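A quick picture, using a made-up line of C just to have something concrete. Suppose a line reads

if (rc == 0) {

and you type )5 over its line number field and press enter. The line becomes

     if (rc == 0) {

with everything slid five columns to the right, and a (5 on the same line slides it back (minus anything lost to truncation, if the line were long enough for that to happen).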

Of course you can say )) on one line, and then go down a few lines and put another )) to shift the data on a block of lines all at once.  Either )) can also be followed by a number, as shown for single parentheses.

The “greater than” > symbol and the “less than” < symbol also shift data right and left. They treat embedded blanks differently, though.  Essentially they squish out embedded blanks.

Typing >7 and pressing enter, followed by typing <7 and pressing enter, does not get you back where you started.

> and < eliminate blanks, compressing a string of consecutive blanks into one blank.  It isn’t totally obvious what the elimination pattern is, though.   You need to actually do this yourself to get a feel for what it does, if you have any interest in ever using these. If you do, then put some blank lines into your data to start the demonstration. Be sure you have NULLS OFF, because you're going to use copy and paste for the setup.

Let’s assume you have ctrl-C and ctrl-V set up to mean copy and paste, or at least that you have some method of doing copy and paste. Highlight the line numbers on the far left, and press ctrl-C or your equivalent thereof. With that column of numbers safely tucked into your carryall, you move the cursor out across your blank lines and paste the numbers down a few times, so in the end you have dropped down three or four columns of data, leaving several blank spaces between each column.

While we're on the subject of copy and paste, let's sneak in an introduction to other cut/copy and pasting you can do in ISPF edit.  If you already know all about it, skip this rather long paragraph, and we pick up the discussion of <<>> data shifting again just after.  So —  You also have at your disposal a  line-oriented, non-cursor-based, ordinary TSO/ISPF edit command called CUT.  You also have PASTE.   To try it, do this.  Put CC on a line number field, and then go down a few lines and put CC on another line number field, then go up to the command line and say CUT  AAA and press enter.  (Or you can say CUT  BBB, or CUT  ITOUT, or use any other short valid name.)  The lines you marked have not disappeared from your file, but they have been copied into a holding area.  (So, yes, it's a misuse of the word "CUT".)  Now go to another place in the same member, or, more fun yet, go into split screen mode, and go edit some other file on the other screen.  Put the letter A on one of the line numbers there, and, on the command line, say PASTE AAA (or paste bbb, or whatever you called it).  The text you captured will be inserted into the place you've designated.  Why did they call this CUT instead of COPY?  Of course it's because they already used COPY for something else.  Put the letter A on one of your line numbers, and then move your cursor up to the command line and say COPY.  Press Enter.  It will pop up another screen for you, where you can enter the name of some other file.  When you press enter, that other file will be copied into the place you designated with the letter A (which in this context meant "After").  If you had said COPY CAT on the command line, rather than just COPY, the editor would have looked for a member named CAT in the library you're editing, and copied that into the place you designated.  But again I've digressed.  Back to the data shifting commands.
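Oh, one more thing before we actually get back to them. Condensed, the CUT/PASTE sequence just described looks like this (AAA is just the made-up name from the example):

CC               (line command on the first line of the block)
CC               (line command on the last line of the block)
===> CUT AAA
. . . go to the other member, or the other split screen . . .
A                (line command on the line you want the text to follow)
===> PASTE AAA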

Go over to the line number field and put in >3 and press enter, and then do it again. Then do <3 on the same line and press enter, followed by another <3 and enter. It should have shifted the lefthand columns of data, while leaving the righthand columns unmoved.  If it isn't clear what's happening, try a few more iterations of >3 and <3 shifting until it seems to make sense (or you can do <2 or <5 or whatever is comfortable).

You do know that the way to get out of EDIT without SAVING your changes is to enter CANCEL, right?  Very important editing command, CANCEL.

This >> << type of shifting can be useful when you don’t have columns, but you don’t have much room on the right or the left either, and you don’t mind losing embedded blanks.

As long as we’re on this tangent, you should know about TS (text split). Put TS on a line number, then move the cursor over to where you want to split the line, and then press enter. It splits the original line into two lines.  The first of the two ends at the point where you had the cursor when you pressed enter.
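For instance, if a line reads "The quick brown fox jumps over the lazy dog", and you put TS in its line number field, move the cursor under the f of fox, and press enter, you end up with "The quick brown" on one line and "fox jumps over the lazy dog" on the next.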

TF (text flow) is the opposite of TS. It runs lines together. Quite often it runs together more lines than you intended, so you need to be a little careful. It will stop accumulating its long string when it comes to a blank line, or when it comes to a line with different indentation.

SCROLLING – You want to know how to change your settings so that if you put the cursor anyplace on the page and press F7 or F8, then the page will go up or down based on where you placed the cursor.   Oh, same thing with F10 and F11, you also want to do left-right scrolling based on cursor position.  Okay.  First press the home key to go up to the command line (assuming you have the command line at the top).  Then press the Tab key to move to the field on the right labeled Scroll.  (If you do not have your command line at the top of your screen, then just press Tab a bunch of times until you end up in the field labeled Scroll.)  In the Scroll field you type the word CSR (an abbreviation for CURSOR) and press enter.  That's all there is to it.    Perhaps you're asking,  Why isn't this the default?  One can only wonder.  It's one of those questions pondered by philosophers I guess.  Certainly it's hard to imagine any rational reason.  Anyway, you Tab over to the Scroll field and enter the value CSR, and that does it.  The change should be saved when you exit.   ISPF Scrolling is not specifically an Edit thing — The same method works on other ISPF screens that have scrolling, such as the ISPF 3.4 Dataset List display.

So, hopefully we’ve covered the main things you want to know about edit profiles.   Congratulations, you now know more about it than a lot of people who have been using the ISPF editor for years.  Oh well, maybe you can help them out sometime. :)

The IMACRO edit profile setting allows you to specify an initial edit macro — to designate an EXEC containing a bunch of edit commands — to run automatically first thing whenever you go into an edit session using that edit profile. Anytime I ever used that feature, it was very slow, and not worth doing. They may have improved the feature since then, but I don’t bother with it. Edit macros can form an entirely separate topic in themselves, anyway. If you stick with z/OS you’ll want to learn to use edit macros eventually, but that's a topic for another day.
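Just to give you a flavor of what one looks like, here is a minimal sketch of a REXX edit macro, assuming your site lets you run REXX execs from SYSEXEC or SYSPROC, and with MYPROF as a made-up member name. It's really nothing more than the same commands you'd type on the edit command line, wrapped in REXX:

/* REXX - minimal ISPF edit macro sketch; MYPROF is a made-up name */
ADDRESS ISREDIT
"MACRO"                /* identifies this exec as an edit macro    */
"RECOVERY ON"          /* the same commands you would type         */
"NUMBER OFF"           /*   on the edit command line               */
"NULLS ON"
EXIT 0

With that in place, typing MYPROF on the edit command line runs it, and (I believe the syntax is just) IMACRO MYPROF would make it run automatically at the start of each edit session, with the speed caveat just mentioned.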

———————————–

Updated 26 April 2016 to add the paragraph on SCROLLING — what an oversight to have omitted that originally!  Sheesh.

 

Changing Default ISPF Settings

This is a simple discussion of some basic ISPF settings, for people who don't know much about ISPF.  (It can be a surprising disadvantage to find oneself in that situation.)  So if you already know everything, move along, there's nothing for you to see here . . .

There are a lot of ISPF settings available to you in Option 0, "ISPF Settings".  Many of the defaults can seem inconvenient to the point of being annoying.  That makes it hard to decide what to reset first — so let's start with the most disagreeable, and feel comforted by the fact that at least none of the defaults cause anything to be blinking in reverse video with a bright fuchsia background.

What's the single most troublesome default ISPF setting?
It's a tight race:
. . .  The input command line being positioned at the bottom of the screen rather than at the top ?
. . . The fact that the HOME key sends the cursor up to that "Action bar" line at the top of the screen, which is almost never where you want it to go ?
. . . The fact that F12 defaults to mean CANCEL ?
. . . The decorative underscores (the underlining) adorning almost all the input fields on all the screens ?

That's a 4-way tie already.

Since these are conveniently easy to change, let's do these first.  Yes, this article is bound to lead to future continuations of the world of  ISPF settings — but for today, let's start by learning how to set F-keys and change the other annoying things just mentioned.

This becomes a step-by-step guide now.  I'll be assuming you will actually be changing your settings while you're reading.

Change the Position of the Command Line 

From the ISPF primary option panel, select option 0, "Settings".

The "ISPF Settings" screen will appear.

In the lefthand column of that screen, you can toggle the on/off setting of any option by the presence or absence of the slash character on the far left.  When the slash is there, the feature is turned on. When you remove the slash, you turn the feature off.

The best thing for you to do here now (in my opinion) is this:
Immediately blank out the slash (/) at the left of "Command line at bottom", and press enter.

As soon as you press enter, the command entry line should immediately move from the bottom of the screen to a place near the top.  (That's the line that appears on almost every screen and says something like
"Command ===>"
or
"COMMAND INPUT ===>")

Why is it good for you to move this line ?
The goal is for you to be able to press the "Home" key at any time to get the cursor to return to the "Command ===>" line.  This brings you halfway to that goal.  It also means you won't have to tab through all the other input fields on the screen to get down to the line where you can enter a command.

Now direct your attention a few lines further down the "ISPF Settings" screen, to the line that says "Tab to action bar choices".
Remove the slash (/) to the left of that line also, and press enter again.

NOW you should be able to press the "Home" key, and have it take you to the "Command ===>" line.  As a practical matter, using ISPF to do practical stuff, you want to have an easy way to get the cursor to go straight back to the line where you enter your commands. Now you can do that.

This would be a good time to save your settings, assuming you actually just changed them.

Usually, when you change ISPF settings, the software doesn't really save the new settings permanently until you exit ISPF.

So, suppose you just changed your settings, and you walk away from your TSO session without exiting ISPF or logging off?  In most places, your idle TSO session times out and dies after some amount of time (whatever amount of "TSO idle timeout" happens to be set, at your place, for your ID).  From the point of view of the ISPF session you left abandoned, this counts as an abend, a crash. The changes you made are lost.  So, what happens after that?  Next time you LOGON again you will need to reset your ISPF settings all over again.

So, be sure to exit ISPF in an orderly way anytime you change your settings and you want to preserve the changes.

If you just changed your ISPF settings, go ahead and save them: just logoff and logon again, and we'll go on to the next item on our list of annoyances.

What's up next?  Resetting that annoying "F12=Cancel" is a good idea.

Reset What F12 Does (and what the other F-keys do)

On the "Command ===>" line, type the word KEYS and press enter.  Yes, it's that simple (almost).  A popup box will show you the current meaning for each of the 12 primary function keys.  Go down to F12, and in the second column (under the "Definition" heading) change the "definition" of F12 from CANCEL to RETRIEVE.  Or CRETRIEV is okay too.  Just overtype the word CANCEL with the new setting, RETRIEVE (or CRETRIEV).  Voila, it's reset !  But, yes, there are caveats.  Keep reading, we're getting there . . .

What does Retrieve do for you?, perhaps you wonder.  Well, it brings back a copy of the last command you entered, placing it in the command entry field for you, so you don't have to retype it if you want to do the same thing (or almost the same thing) repeatedly.  The system holds several of your most recent commands, in a wrap-around list.

CRETRIEV (which stands for Command Retrieve) is similar to RETRIEVE, with minor nuances. In one way, RETRIEVE is nicer, because it doesn't matter where the cursor is when you press the F-key that means RETRIEVE.  When you choose CRETRIEV, the cursor is supposed to be on the command line already before you press the F-key.  On the other hand, CRETRIEV works better in SDSF, and maybe in some other places I haven't found.

So, you overtype the word CANCEL (or any other F-key "definition") with RETRIEVE or CRETRIEV or any other command you like.  That changes it, instantly, at least for this set of definitions.  Hunh?  Yes, there are multiple sets of definitions of the F-keys.  We'll get to that.

Now, What about that other column, where it says "Label", on the far right?, you ask.

Tab over and blank it out, and it'll reset itself to the appropriate new label.

Why is there a column for "Label"?  Anytime you say PFSHOW on the command line, ISPF will display the current settings of the keys, near the bottom of the screen — when you type PFSHOW repeatedly, it will take turns between showing the settings of all the keys, showing the settings of some subset of the keys, and turning off the PFSHOW display entirely.

The column designated "Label" determines what PFSHOW will display as the explanation for each key.

So if you have a bizarre sense of humor, or an antisocial disposition, you can set the labels to say something contrary to the actual definitions of the keys.  Or, you can set something cutesy or quirky, or use acronyms that only you will recognize.  (Settle yourself down and such urges usually pass.)  Most often it's as well just to blank out the label and let ISPF set it to match the definition.  Usually.

CRETRIEV? you ask.  Okay, there are some ISPF commands whose names don't do much to suggest their actual function, such as CRETRIEV and, uh, "NRETRIEV".  What will NRETRIEV do for you, you may wonder, if you set one of your keys to mean "NRETRIEV"?

Imagine you're on the ISPF EDIT Entry Panel, with the cursor positioned to the "Other data set name" entry field.   If you press a function key that you've set to NRETRIEV, it will bring back the name of the last dataset you edited.  You press the same key again and again to go back through a log of previously edited datasets.  It's similar to Retrieve, except it's for dataset names instead of ISPF commands.  That can be handy, right?  Sure.  See, you're liking TSO better already — admit it.  (Disliking it less?)

Note that it matters where you have the cursor positioned when you invoke NRETRIEV.  If you position the cursor to the "Other Data Set name" entry field, the retrieved DSNs appear in that field.  The DSN choices presented to you from that list will include ALL the data sets you've edited most recently, even very long dataset names that you may have accessed from the 3.4 Data Set List screen.

If you do NOT have the cursor positioned to that one field, then the names that are retrieved overwrite the "ISPF Library" Project+Group+Type+Member area — you know, the three-part dataset name, with the DSN that ISPF remembers for you.  These dataset names come from a different list — This other list only includes names that you previously entered in the "Ispf Library" area.

Personally I usually have F6 set to NRETRIEV, and then (just for this special case) I set the "Label" for F6 to "Prev DSN".

So, the "Label" column can be useful, if used sparingly.

Wait, you're thinking, Hold on a minute. That's all very nice, but isn't F6 used for "Repeat Change" in Edit?  Oh, and what did I mean when I said "(almost)" back near the start of talking about resetting F12?

Here's the punch line:  You don't have just ONE set of function keys.  You have at least a dozen different sets of F-keys, depending on WHERE you are in ISPF.  Vocabulary item: These sets of function keys are called "Keylists".

The set of keys you get on the ISPF "Edit Entry Panel" happens to be the same as the set you get when you say "KEYS" in "Option 0, Ispf Settings". The keylist you get while you're actually editing — when you're past the initial Edit Entry Panel — Well, that's a different keylist.

Take a look again at that popup box you had up on the screen before, the one that comes up when you say KEYS — the popup box with the keys and their definitions listed.    If you enter KEYS from ISPF option zero, you can see that it says, fairly near the top — sort of as a subheading — "ISR Keylist ISRSAB Change".  Yes, this keylist is named ISRSAB.

If you go someplace else in ISPF and type "KEYS" on the command line, you might still get ISRSAB (which will happen for the "Edit Entry Panel"), or then again you might get some other Keylist (like when you're actually in edit mode, editing something, at which time you get ISRSPEC).  In the ISPF Edit member selection list, you get yet another different Keylist, ISRSPBC.

If you go into some other product that fits nicely into ISPF but is not really part of ISPF — something like SDSF, or the File-AID Editor, or Endevor — then when you type KEYS there it will (probably) show you the function keys and let you revise them; but usually it doesn't use the same KEYS popup screen — it uses a different screen that doesn't show a Keylist name.  So, those sets of keys you can't really change from "Option 0, Ispf Settings".  To change those, you pretty much have to select each product separately and type KEYS within each product to change the keys there.  But I digress; let's go back to the ISPF Keylists that you can change in "Option 0, Ispf Settings".

In option 0, as also in the Edit Entry Panel, you get Keylist ISRSAB.  In the ISPF Edit member selection list, you get a different Keylist, ISRSPBC.  When you're actually editing something, you get yet another Keylist, ISRSPEC.  There are a bunch of different Keylists.  If you want to set F12 to mean Retrieve everyplace, then you need to make sure the keys are set the way you want them in all the Keylists you use.

Yes, you do have the alternative option of suppressing the use of Keylists, so that you have only one Keylist and it is used everyplace — well, almost everyplace — but if you think about it, you can see that having F6 set to "Repeat Change" is quite useful within Edit but not terribly useful in most other places; and NRETRIEV — remember NRETRIEV?  Retrieves names of recently edited datasets, most recent first — Well, NRETRIEV could be useful when you're entering the name of the dataset you want to edit, especially if you don't quite remember the name exactly.  So why not have F6 set to mean "Repeat Change" while you're actually editing something, but have it set to mean NRETRIEV when you're just entering the DSName?  Similar arguments can be made for other ISPF commands and other keys.

Yeah? Name ONE, you say.  Okay, AUTOTYPE. (Ha.)  Autotype is similar to NRETRIEV, but you just start typing part of a dataset name, hit the key you've got set to "autotype", and it suggests the rest of the name.  (Yes, much like Google, and your email, etc.)  Keep pressing the key to go through its set of guesses (which it presents in alphabetical order, starting with the part you've typed).   Yes, you can accept part of some DSN it suggests, and change the ending, or just lop off the ending, and then press your "autotype" key again to get more suggestions starting with what you've got entered so far.  (I use F5.)  Now is that useful?  Told ya.  It also works in ISPF 3.4, Data Set List, which conveniently uses the same Keylist as the ISPF Entry Panel — our friend ISRSAB — and no, SAB doesn't stand for "Same As Before", at least I don't think it does.

If you want to continue tailoring your function keys, go back to that same "ISPF Settings" option 0 screen, and use the arrow keys on the keyboard to move your cursor up to the top line — the "Action Bar" — to where it says "Function keys".  With the cursor on "Function keys", press enter.  You get a drop-down list that lets you select 1, 2, 3, and so on.

Option 1 will let you set the "non-Keylist keys", the basic keys that ISPF falls back on when no Keylist is in effect.

Option 2 will show you another drop-down list (of Keylist names), and you can go through and change the settings on at least a dozen different Keylists.  Type E to the left of one of the Keylist names in the drop-down list, and you can edit the settings of the F-keys in that list.

You might find that handier than resetting the definitions by going around to all the different screens you use and typing "KEYS" for every one.

Not all the possible sets of function keys are represented in that drop-down list, though.  It doesn't cover the SDSF list of function keys, or the lists for various other add-on products like File-AID and Endevor.  Notice that all of the Keylist names in the drop-down list start with ISR?  Yeah.  ISR and ISP are prefixes used by ISPF.

Short digression: Other products — components — software apps? — each get their own special 3-character prefix assigned, or sometimes they get more than one prefix.  IKJ means TSO itself, DSN means DB2 (go figure), DFH is CICS, DFS is IMS, and so on.  The non-IBM products don't always use the prefixes assigned by IBM, but recently IBM has been "encouraging" vendors to start using the assigned prefixes.  I wonder if IBM is letting Script Waterloo keep the SCR prefix.  That used to be my personal favorite, because the error or warning messages I got were always prefixed SCRW, and it seemed so appropriate.  But let's get back to the present…

Option 3 can be used to specify that you want 24 function keys, not just 12.  The second group of 12 function keys are usually accessed by holding down the "Shift" key while simultaneously pressing a function key.  Shift+F1 would be F13, Shift+F12 would be F24, etc.

Simply choosing "24" here doubles the number of keys you can set to do different things — As an example, consider that a lot of people set one of the keys in the ISPF Edit Keylist (ISRSPEC) to mean "SUBMIT", and then they just press that key when they're editing some JCL and want to Submit it to run as a batch job.  Okay, maybe that doesn't sound like such a big convenience.

So consider CVIEW and CEDIT.  Imagine you're editing that same JCL — What if one of the lines refers to another dataset, DSN=some.dataset.with.stuff.you.want.to.see,DISP=SHR ?  Well, if you've specified CEDIT (or CVIEW) for one of your F-keys (Say maybe F14, the alternate of F2), then you can position your cursor to the start of that dataset name, press your F14, and be transported as if by magic into a new ISPF Edit (or View) session, editing or viewing that other dataset.  When you press F3 from there you come back to your JCL, like waking up from a dream of the other data.

 Option 9 can be used to turn off the Keylist feature, so the same function keys will be used everyplace — well, almost everyplace.

So, okay.  Enough about Function Key settings. You get that now.  You can finish setting those up later.  What about those decorative underscores?, you may be asking.  Can we get rid of those now?  Well, you read my mind.

GETTING RID OF THE DECORATIVE UNDERSCORES

This is not intuitively obvious.  Back on that "Option 0, ISPF Settings" screen, up in the "action bar" at the top, just to the right of "Function keys", it says "Colors".

Right, Colors.  Use the arrow keys to move your cursor up to that selection, "Colors", and press Enter.  (Or if you read the previous blogs, and you followed the directions to set up your 3270 emulator preferences so you can just double-click the mouse on various things, then you just need to double-click on "Colors".)

Either way, a drop-down box appears.  You select "2. CUA attributes…"  (Selecting "3. Point-and-Shoot…" seems to do the same thing.)

Like I said, not intuitively obvious. What you see next will be, though.  You get another pop-up screen containing a list of field types ("Panel Element" types).  If you page down (F8) through the list, looking at the column that says "Highlight", on the far right, you will see that some things are set to USCORE.  You guessed it, that means Underscores appear in that type of field.  Overtype the word USCORE with the word NONE for every case where it appears.  Keep paging down (F8) until you're sure you got them all.  I think there are about 40 "Panel Element" types listed.

Other options for "Highlight" include REVERSE and BLINK, but in the end I think most of us will prefer NONE as a setting in almost all cases.

You can, by the way, also change default colors on this same pop-up screen.

Like so many things, it's pretty easy once you know how.

This is probably enough information for one article, right?  We can reset some more ISPF settings next time.  I'm thinking that ISPF "edit" profiles are pretty important for you to be able to reset, but it might be more fun to look at resetting colors and things within the PC 3270 emulation. We'll probably look at one of those things next.

 

 

Take Control of Your TSO/ISPF Profile(s)

 

It's time for you to take control of your TSO sessions.  Own your own TSO/ISPF Profile(s).   We start by just jumping in the deep end:  Take control of your saved ISPF Profile variables (the easy way) (even the ones you didn't think you could change).

You know how you logon in the morning, and ISPF seems to remember things like the name of the last dataset you edited yesterday, how your JOB statement(s) should look, and all sorts of other things that you've typed into the entry fields on different ISPF panels?  Most of that stuff is saved in your ISPF Profile dataset.  The ISPF Profile dataset usually has a name like 'YourID.ISPF.ISPPROF', and it has potentially hundreds of members.  Those members store most of the stuff that ISPF seems to remember.

To answer your unspoken question, Yes, actually, you can edit that dataset, BUT you have to be careful because it contains hex stuff that can get messed up, and you also don't want to mess up any alignment of the data.  And yes, in most cases you can just go back to wherever you typed a thing in the first place, and retype it; but that can be tedious, and it doesn't always work for everything.  Another idea you think of: What happens when you want to make the same simple change everyplace and there are a lot of places: Maybe you want to copy an existing profile dataset and then change all occurrences of one 7-letter userid to a new 7-letter userid.  You wonder if you can do that using a utility program.  Yes, you can use File-AID (Specifically, you use File-AID option 3.6); in fact that's something I do myself if I want to change the userid like that (as long as the two userids are the same length).  Still, there are times when none of those ideas is quite what you want.

An example: Let's say You use ISPF option 6 to enter and save TSO commands, especially long ones with awkward syntax that you don't want  to remember and you don't want to retype.

ISPF option 6 keeps a short list of your most recent commands, letting you retrieve and re-execute the command of your choice, with or without changing it a little.  You should cut-and-paste that list into a scratch pad file someplace, but back on the main track: You want to be able to change the commands right there in the list, without executing them; and the list is protected against you overtyping the saved commands.

Suppose you mistyped something.  Worse than that, imagine you typed a mistake that reveals a deep and profound misconception of your work environment, or contains an embarrassing Freudian slip, and you would rather avoid having any of your co-workers get a chance to notice it.  But ISPF pops it right in at the top of the list, to save until you've entered ten other commands.

Dang.  Looks like another case of a machine trying to get the better of you.  At least, it looks that way at first glance . . . but in a few minutes you'll be able to turn the tables (if you read on. . .)

Even without knowing about ISPF Profile variables, you think of a couple of other options immediately.  One, you can type in several other commands, causing the evidence of your mistake to roll off the bottom of the list; but then you'll lose all the other commands in your list too.  Two, you can either cancel your TSO session or you can just hit the attention (attn) key to get thrown out of ISPF into READY mode before ISPF saves the changed list into your profile.  That should work if you think of it right away, and you don't mind losing whatever you've got going on in your other split screens.

Putting aside those ideas, let's see how to do it without losing anything.

You exit ISPF option 6 and go to option 7.3, and voila, a screen full of apparent gibberish appears — Variables and their values.  Importantly, much of the gibberish can be overtyped.  A few lines down from the top you can see a line that says "Variable  P  A  Value"   (The column headings).   The data you want to overtype is going to be found under the "Value" column.  The first column is just the name of the variable, which you don't really need to know.  There are hundreds of variables, and a name can't be longer than 8 characters, so that means a lot of them will look like, well, gibberish.  The second column, titled "P", tells what kind of variable it is (what "pool" it swims in).  You're looking for Profile variables, which are identified by having a "P" (for Profile) in the "P" (for Pool) column.  That other column, titled "A" (for attributes), will usually be blank for the Profile variables.  If it says "N" it means No, you can't overtype the value, but that applies to things like the time and date, not to the profile variables.  Just use F8 to page down through the list until you get to the ones with "P" in the "P" column.  Keep paging through the "P" variables, looking at the data under the "Value" column until you see the data that you want to change.  For the case in our example, the Variable name (column 1) will say PTCRET01, so it isn't too terribly far down.

Having found it, you overtype the string that you don't want with something else that you do want.  (If you don't actually know of any text you want to put there, try just putting TIME or PROFILE or some other innocuous short command padded out with blanks.)  After overtyping the value, you stare at it carefully for a minute to make sure you didn't make some other mistake while fixing the first one.  When you're satisfied that it looks good, you press F3  (or whatever key you have set to mean END).  Now you're laughing (metaphorically, probably not literally).  Go back to option 6 just to check.  There you are: the list now looks the way YOU wanted it to look.  Mission accomplished.

But wait a minute.  Didn't you notice other stuff while you were paging through that list?  Go back to 7.3 and have a look around.  You recognize your job card(s), some dataset names you've used on edit or utility screens, lots of stuff.  Most of it can be overtyped right there, without cruising around to all the individual screens where the text was originally set.  Hey, you realize, you can do this.  You own this.  You've got the power.

Remind me sometime to tell you about inserting a variable named ZSTART into the list as a new profile variable, to specify stuff you want to have happen automatically when you start ISPF.  Yes, you can do that [provided your z/OS system is at least at level 2.1].  There are two (2) line commands available: i (for insert) and d (for delete).  Type the letter i on the far left of any line, to the left of one of the Variable names, and press enter.  A blank line appears.  Put the word ZSTART for the Variable name, put P in the P column (I know, your dog would love this), leave the A column blank, and under Value you get to type in a string representing what you want to have happen automatically at ISPF startup.   The string you put in should follow this pattern:

ISPF;2;Start S.h;Start 3.4;Swap 1

The interpretation of the above string is as follows.  It always starts with ISPF.  After that you indicate a virtual new line by putting in a semi-colon (;).  (Semicolon is the default value for the line separation character.  If you've changed yours to something else, use your own line separation character instead of semi-colon when doing this.  The line separator functions like an imaginary press of the Enter key, allowing you to string multiple commands together on the same line.)  In this example, you want to get three split screens started automatically whenever you start ISPF.  The first one will be Edit (option 2), the second one will be the SDSF hold queue (S.h), and the third one will be DSLIST (option 3.4).  You say Swap 1 at the end to tell ISPF to put you back onto the first of your split screens, the Edit screen.

If you're wondering why you can't just say ISPF 2, instead of having to say ISPF first and then say 2 on a separate logical line, well, I don't know why IBM requires that.  I just go along, because that's what you do to get it to work.  The first thing seemingly has to be just ISPF by itself.

When you're actually on the READY mode screen, you can type ISPF 2 if you like, and it will take you directly to ISPF edit.  It won't execute your multiple ZSTART commands then, though.  You can also say ISPF BASIC from READY mode if you want to skip your ZSTART.   What if you're already in your extravagant multi-session environment you've brought on yourself, and you want to get out of it all at once?  You use =XALL as the command.   (It can sometimes abend, by the way, but it does get you out, back to READY mode).  But we digress from our original digression.

When you go into ISPF after setting up your ZSTART, you'll automatically be on the Edit screen (if you used the example commands), and you'll have two other sessions started on alternate screens.  Before that, though, while you're still back in ISPF 7.3, having just put in your ZSTART profile variable, your option 7.3 screen (well, part of it) now looks something like this:

Variable    P  A   Value
                   ----+----1----+----2----+----3----+----4----+----5----+--
ZSSSMODE    P      B
ZSTART      P      ISPF;2;Start S.h;Start 3.4;Swap 1
ZSUPSPA     P
ZSUPSPB     P

To save what you've done, you press the END key (generally F3); and that also takes you back out of the 7.3 area, to the more ordinary part of ISPF.  There'll be an extra screen, more of a popup really, but just F3 past that.

Anyway, so much for  ISPF Profile Variables, and dealing with most of the character strings that ISPF saves for you.

What about flags and switches, though?  Obviously a lot of things are controlled by on/off or Y/N  or multiple choice settings, and if you don't know the names of the variables then you can't very well use option 7.3 to change the values in them.   Doesn't matter.  A lot of them can be changed on the ISPF Settings screen (option 0), and that's probably the easiest way to set them anyway.  If you do happen to know and remember the name of a variable and you want to go straight to it in option 7.3 next time, the command for that is not FIND, it's LOCATE (or LOC).
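For example, to jump straight to the ZSTART variable mentioned earlier, you'd type this on the 7.3 command line:

===> LOCATE ZSTART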

On our next outing, let's visit ISPF option zero (Settings) and see what can be done there to improve your virtual environment.

What can you change in option zero, ISPF Settings, you ask?

A lot.  Here are some highlights:

Command Line position – Most people prefer Top but the default is Bottom
Changing what Function keys do (F12=CANCEL?  Whose idea was that?)
Changing Function keys (12 or 24), KEYLISTS (yes or no)
Get rid of that annoying underlining of all the fields where you type things
Log/List – Suppress that useless extra screen displayed when you exit ISPF
Long message in a box (or not)
Message ID numbers on messages (makes them easier to Google search)
Terminal type 3278T and format Max
Putting the calendar on your primary options menu
Seeing the screen name in the upper lefthand corner of screens

We'll talk about the first few of those in the next article.

So, the title of this piece you're reading — didn't it mention "Profile(s)", with an S?  So far we've only talked about your ISPF Profile.

Maybe you're wondering: How many profiles do you have in TSO, anyway?

Listing just the main ones: you have one each for native TSO, RACF, SDSF, FileAID or any other similar tool, ISPF general settings, and ISPF dataset edit (one edit profile for each dataset type, up to your site-specific limit).  "Dataset type" means the last part of the name after the last dot, what would be called a file extension on the PC.  The EDIT profiles, like most of your profiles, live in your ISPF Profile dataset.  Some other profiles, like your native-mode TSO and RACF profiles, are saved elsewhere.

Next time we'll talk about some of the ISPF option zero settings, because resetting those can save you a lot of annoyance.  Meanwhile, if you don't want to wait for me, you can always go to ISPF option zero and type "HELP", which will allow you to read through the "ISPF Settings" tutorial.  If you want to read the tutorials, though, you probably should start in Edit, because the Edit tutorial will probably be what you'll find the most useful (in my opinion).

So, yeah, we're just getting started.  Good start, though, yes?

TSO/ISPF Multiple Split Screens – the Easy Way

You think it might be nice to use multiple split screens with TSO/ISPF, but it seems hard to keep track of them?  Then you probably just haven't seen it done the easy way.  This short explanation will walk you through it.

After you read this, you should know how to add an easy selection bar at the bottom of the screen, and then double-click the mouse on the session you want.  (It might be easier to remember later if you do the steps in an ISPF session while you're reading. You'd get a feel for it — literally.)  Some of the features discussed here only came out recently, at the z/OS 2.1 release level, so if you're using an older release you'll have to wait until your place installs 2.1 before you can use all the bells and whistles.  That said, let's get started.

First, enter these commands in TSO/ISPF:
===> SPLIT  NEW
===> SWAPBAR

The "SPLIT NEW" command causes an extra split screen to start.

The "SWAPBAR" command causes a new "action bar" to appear on the bottom line of the 3270 screen, to use with your collection of screens.

You "SPLIT NEW" again to create each additional split screen.

You also have a choice of saying START instead of SPLIT NEW.

Your newly added action bar contains a name for each of your active screens, so you can recognize which is which.  Usually the name on the action bar will describe what is on that screen — EDIT, or SDSF, or something obvious.

With no further setup than that, you can move the cursor to your choice in the action bar and press ENTER to go to that screen.

Or you can enter "SWAP 1", or "SWAP 3", etc, on the command line, considering the screens to be numbered as 1, 2, 3, … according to their positions on the action bar.

Or just type the number for the session you want and press [F9] instead of [Enter], assuming your F9 is set to mean SWAP.

If you quit reading now,  you already know enough to use multiple split screens.  With just a little bit more perseverance, though, you can have a better setup.  ("Better" meaning nicer looking and easier to use.)

For a nicer looking display, you can change the SWAPBAR settings so that a line is shown above the added "action bar".  This sets the bottom bar apart from the rest of the screen, making it look more like the "File|Edit|View|etc…" bar at the top of the screen.

You can also change the color of the text in the SWAPBAR action bar, further setting it off from the rest of the screen, but the default color (white) is fine.  If you have your PC3270 session set up with a light background, what ISPF calls white will actually display as black.

To access the panel that allows you to change these things, enter the SWAPBAR command followed by the operand slash (/):

===> SWAPBAR   /

On the popup that then appears, select "Show SWAPBAR divider line", and pick a color (say, W for White, and so on).  To get ISPF to save your settings so you can actually use them, enter S on the line where it says "S to update SWAPBAR", and then press F3 while your "S" is still there.

Okay.  Now, wouldn't it be nice to be able to double-click the mouse to select your choices in the action bar?

If you are using IBM's PC 3270 emulation, here's how you do that.

Move the cursor up to the PC 3270 "Menu Bar" (above the similar ISPF bar).  If the "Menu Bar" isn't showing, you can get it by pressing Alt-E (that is, holding the Alt key while you press the letter E) and then selecting "Show Menu-Bar" from the selection list.

Having found the "Menu Bar",  Click "Edit" in the "Menu Bar" and you'll get a drop-down menu.  From there:

select:            Preferences >
select:                   Hotspots >
Find "Point-and-Select commands" about halfway down,
Under that select (click the box next to) "ENTER at cursor position"

click the [OK] box

To save your settings, again move the cursor to the "Menu Bar", and select File, then Save.

With the Hotspots set up as shown, you can now double-click on the choices in the action bar.  (Guess it pays to persevere.)  When you exit ISPF to logoff, your setup will be saved for future use.

If you like selecting things by double-clicking the mouse, you now have a bonus.  On the ISPF main menu (the primary options menu), notice where it says "2 Edit", and double click on the word "Edit" there.   Mmmm-hmmm, it takes you straight to the edit screen.

Go back, and this time double-click where it says "Utilities" in "3 Utilities".   Doing that puts you on the "Utility Selection Panel".   Notice how the Utility Selection Panel is set up in a similar way to the primary menu?  It has "1 Library",  "4 Dslist",  and so on.   You see where this is going, right?   Double-click on the word "Dslist".

Yes, it works for any panel set up that way, provided of course the panel was set up correctly in the first place.   Generally, double-clickable selection fields will be turquoise, and the screens will have a general look and feel like the two we just discussed.  So, if you like double-clicking the mouse to select things, now you're laughing.

If you don't think "SPLIT NEW" is an easy phrase to remember, assign "SPLIT NEW" to a function key you don't use (for me, that was F4).  It makes the whole experience flow.  Want a new screen?  Hit F4. Want another one?  F4 again; it's there at the touch of a key.  Some people recommend using F2.  You just replace the old default SPLIT key with SPLIT NEW.  So that's a good idea too.  I don't use F2 that way because I like to keep the old "SPLIT" as a way of positioning the split line in the middle of the screen in order to edit two datasets at once and compare them visually.  So it's a matter of personal preference which key you decide to use.

If you do use F2 to split the screen in the middle so you can see parts of two screens at the same time, then (you ask) how do you know which of your other sessions is going to be on the other part of the screen?

Look at the names of the sessions as they are listed at the bottom of the screen.  You can see that there is an asterisk to the left of the screen where you are currently working.  Notice that one of the other session names is marked with a hyphen (minus sign) on its left.  That will be the companion to your current screen when you press F2.  It's also where you go if you press F9 without specifying a destination session.  If you want to change it so your current screen is matched with a different companion session, here is how to do that:  Press F9 to go to the companion it has now.  Once there, double-click on the companion you want to replace it with.  That's all there is to it.

While you're deciding which function key you want to use for SPLIT NEW, remember that you also have the choice of saying START instead of SPLIT NEW.

If you use START instead of SPLIT NEW, you can specify an immediate target destination for the new screen you're adding.  So, for example, if you want to open a new screen and use it to view your SDSF held output (option S.H), you say:

===> START S.H

You get the new screen, and you're already right there in SDSF.

Let's say you decide to set your chosen key to START rather than SPLIT NEW.  Then you can just say S.H and press the F4 key, and there you are instantly in SDSF, at the hold queue.  Sort of like using the transporter beam in a sci-fi show.  Assuming you've chosen F4, and assuming S.H is where you wanted to go.  (Yes, it still works no matter what key you choose, or what destination.  Only my example ceases to be applicable when you change those.)  (Cue the laugh track.)
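
In other words, the whole keystroke sequence, under those assumptions (F4 assigned to START, and S.H as your destination), is just:

===> S.H        (then press F4 instead of Enter)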

So now we'll describe how to reset your function keys, in case you don't already know.  It's pretty easy.

===> KEYS

With today's ISPF, all you have to do is type the word "KEYS" on the command line and a screen will appear that allows you to reset what the function keys do.    You reset what a key does by typing the new command in the lefthand column.

That "Label" column on the far right of the screen (I know you're curious) lets you set the text that later appears as the description of the key's function when the keys are displayed at the bottom of your screens (which is controlled by ===> PFSHOW).   Putting "Short" in the "Format" column means that key will be included on the short list when you PFSHOW only the short list of keys.

Today's ISPF also provides you with multiple sets of function keys, depending on where you are (unless you've turned off that feature).  So the keys you get within EDIT are not the same as the keys you get on the edit member selection list or the main menu.  This means you need to do the "KEYS" thing described above several times.  The main menu uses the same set of KEYS as the main screen you get after typing 2 or 3 or 3.4, and that set (that key list) is named ISRSAB.  When you go to the member selection list for EDIT, or the dataset list in ISPF 3.4, you get another key list, named ISRSPBC.  So if you set the keys in ISPF 3.4, the same settings will be used when you go to the EDIT member selection list.  There is quite a bit of sharing, but there are still quite a number of key lists.  SDSF has its own set.  Most program products (like FileAID) have their own sets.  What you might want to do is just type KEYS whenever you first go into a different function for the next little while, and reset as necessary until you've hit them all.  (While you're there, you may as well change F12 from the default of CANCEL to the much more useful setting RETRIEVE.)

Other than that, you're done with the setup.  BUT, the ISPF part of your setup won't be saved until you exit ISPF, and you can lose it if your session is cancelled, you're thrown out of ISPF by an abend, or you just let it time out when you go home.  So to be on the safe side you probably want to exit ISPF and logoff TSO, and then just logon again with a fresh start; and you probably want to do that in general whenever you make substantial changes to your ISPF settings (changes you don't want to do over).

Let's say you've started eight split-screen sessions and you don't want to say =X eight times.  (As you know, you say =X and press enter to get rid of any one split screen.)  If you're exiting because you want to save your settings, go ahead and type =X as many times as you need to.  That's the least risky option.  If you have nothing to lose, though, there's a new command (which is still new and sometimes abends):

===>   =XALL

Note that =XALL works for the basic IBM applications, but if one or more of your split screen sessions is running a non-IBM product (something like File-AID, or the MAX editor, or Endevor, or some CA product, for example), then that product might or might not honor the =XALL directive.  It is up to the product to honor the request when =XALL arrives at that product's turn to exit.  If you're lucky, ISPF just stops its exiting spree when it comes to such a screen.  At that point you just enter =X, or whatever is required to exit the particular product, and after you escape the sticking point you can enter =XALL again to continue your mass exit.  If you're not so lucky, ISPF might abend instead of halting, depending on how that particular non-IBM product reacts when it sees its turn come up under the =XALL process.

If at some point you decide you want to turn off the SWAPBAR action bar, enter:

===> SWAPBAR OFF

Note: If you have the ISPF setting "Always show split line" turned on, using SWAPBAR turns off that split line feature.  Your split line is replaced by the action bar.  Using "SWAPBAR OFF" does not turn the split line setting back on.  You need to reset it yourself: use option 0 (zero) from the ISPF primary panel, find and re-select "Always show split line", and then exit ISPF and go back into it again.  Some ISPF settings are confusing because they don't take effect until you exit and re-enter ISPF, and this is one of them.

About that loss of the always-on split line:  You still get the split line displayed if the split is anyplace in the middle of the screen.  All you lose is the "Always" in "Always show split line".

Another little caveat:  If you've set up your 3270 emulator preferences to allow double-clicking on the sessions listed in the action bar, it can change the way the text selection part of cut and paste works.  Really it just slows it down. There seems to be a big delay added between the time you click on the section you're going to select, and the time the highlighting starts to appear.   If you find that annoying, you might switch to using the Ctrl key plus an arrow key rather than selecting sections of text with the mouse: Just click the mouse once on the start of the text you want to select (to position the cursor there), then press the Ctrl key and hold it down while you use the arrow keys to highlight the text you want to select.  It can actually feel better than using the mouse for selection, once you get used to it, because you never again pick up extra text accidentally near the border of the selection area.   On the other hand, once you get used to that little added delay using the mouse for marking your selection, it might not bother you.

By the way, be careful about using a mouse click to position the cursor when you intend to cut-and-paste text containing a url.  If you click on the url itself in the text, you might be magically transported to the destination.  Okay, not magically.  It happens if you have the "Execute URL" box checkmarked back in the "Hotspots Setup" part of your PC3270 Configuration settings, mentioned in  our discussion above.

That's it.  Extra split screens the easy way.

The picture below shows the action bar at the bottom of the 3270 session screen.  Yes, I use a light background rather than traditional TSO black.  Yes, I use an extra-big screen.  Just focus on the action bar for now.  It shows SDSF, another SDSF, DSLIST, EDIT, and ISR@PRI (the first seven characters of the name of the ISPF primary option menu, ISR@PRIM).  Double-click on whichever session you want to go to, and you're instantly there.  With the added divider line, the action bar really does look okay, not too distracting or intrusive.  Now that you've put in the effort to set it up, have fun with it.  You get used to it really fast, and you wonder why you didn't use it before.

Another last warning, just so you know.  Every split screen session you add uses up more memory in your TSO session on the mainframe, just as having a lot of windows open at once uses up more memory on your PC.  So if you start having memory shortage problems in TSO, cut back on the number of split sessions you use simultaneously, and try logging on with a bigger SIZE specified if you can.  The largest logon SIZE you can specify is 2096128, and you can only get that if your company allows it; but don't just assume you can't get 2096128 because you couldn't get, say, ten meg (10000).  Yes, it can happen that 10000 doesn't work but 2096128 does.  But that's a topic for another episode . . .
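
If your site does let you pass SIZE on the TSO logon command (rather than, or in addition to, changing the Size field on the logon panel), the request looks roughly like this; MYUSER is just a placeholder userid, and the number is a count of kilobytes:

LOGON MYUSER SIZE(2096128)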

 

[Screenshot: a 3270 session with the SWAPBAR action bar across the bottom of the screen]

Updated slightly 2016 June 11, primarily to extend the discussion of =X and =XALL