JCL: You’ll Like the LIKE parameter …
LIKE enables you to create a new dataset in JCL modeled on an existing dataset. You use it by coding LIKE=model.dataset.name on the DD statement that creates the new dataset.
Perhaps from time to time you find yourself wanting to set up JCL for a job that will create a new dataset every time the job runs. You don’t want to be a complainer, but in this day and age you don’t see why you should have to specify all that complicated JCL stuff – Record Length, Record Type?, SPACE??, UNIT??? Sheesh! You just want the dataset to have all the same parameters as some existing dataset. You want your new dataset to be just like some other dataset.
We aren’t talking about some fancy special database or anything, just a flat file or a library or something ordinary like that. The type of thing you know you could do with ISPF 3.2 allocation.
Yes, in ISPF you know how to do it. (But we’ll review it here quickly anyway for some other reader who might not know; you yourself can skip ahead to the next paragraph.) You go to ISPF option 3.2 and enter the DataSet Name of the dataset that you want to use as a model. We’ll call it ‘some.existing.dataset.name’. You press Enter to get ISPF to display the current allocation attributes of that dataset, because you know that ISPF will remember those details when, immediately thereafter, you press F3 to exit the display. So that’s what you do next, after getting ISPF to display the attributes of your model dataset: you just press F3 to go back to the basic ISPF 3.2 screen. This time you enter the new DataSet Name you want to use for the new dataset. You choose the option to create (Allocate) a new dataset and press Enter; voila, ISPF presents you with the same screen full of attributes it just showed you a few seconds ago. You press Enter again. ISPF creates the dataset, and you’re done. Fine if you’re in ISPF, you may say, but you want to do this in JCL as part of some job that will run on an automated schedule.
Honestly, you say to yourself, haven’t mainframes been around long enough by now that IBM could have figured this out, provided you with some easy option? Well, they did (eventually). These days, when creating a new dataset in JCL, you can specify LIKE=some.existing.dataset.name and the system will copy most parameters from the information it has about that other dataset.
Altogether you only need to specify three parameters on your DD statement (in any order): DSN, DISP, and LIKE. These work as follows.
You specify the DSN parameter to assign your chosen DataSet Name to your new dataset.
You specify the DISP parameter, not so much to inform z/OS that this is going to be a NEW dataset, but to inform z/OS that you want this new dataset to continue to remain in existence after the job processes it, and in fact you also want the system to create a catalog entry for this new dataset name so the dataset can be found again easily. (DISP is short for DISPosition – you're specifying the disposition of the dataset.) If you do not specify DISP at all, if you just leave it off, then the default for DISP is (NEW,DELETE), meaning that the system will create your new dataset for you all right, but then it will delete the dataset again as soon as that job step ends. So you probably want to say DISP=(NEW,CATLG), which is the JCL equivalent of what you do when you use ISPF 3.2 allocate.
You specify the LIKE parameter to tell the system where to find all the rest of the information.
So, on balance, your DD statement will look something like this:

//ddname   DD  DSN=my.new.dataset,DISP=(NEW,CATLG),
//             LIKE=some.existing.dataset.name
You are free to add more JCL parameters if you want your new dataset to be just LIKE your chosen model dataset EXCEPT for some minor change(s). That would be equivalent to pulling up the ISPF 3.2 model dataset attributes in the usual way, but then overtyping a few of them on the Allocation.
In that case, go ahead and specify more JCL parameters for only the things you want to change. The system will use the information you supply and merge that with the rest of the information it gets from the existing model. Just like ISPF 3.2 allocation would do. Unsurprisingly, since both invoke most of the same behind-the-scenes system processing.
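For example, here is a sketch of that kind of override (the dataset names and the LRECL value are placeholders, not recommendations): the new dataset takes all its attributes from the model except for the record length, which comes from the DD statement.

```jcl
//* Create a new dataset modeled on an existing one, but
//* override the record length; everything else is copied
//* from the model dataset.
//NEWDD    DD  DSN=my.new.dataset,DISP=(NEW,CATLG),
//             LIKE=some.existing.dataset.name,
//             LRECL=133
```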
If you just wanted a quick overview, that’s it; you’ve got the basics; you’re done here. Three parameters are all you need: DSN, DISP, and LIKE. Off you go to try it.
If you’re still reading, you probably want to know if there are any catches to this. In fact there are a few restrictions, limitations, caveats and quirks. Naturally. Some of these follow, in decreasing order of probability that you might care.
Most importantly, LIKE= is not valid for all datasets. (Neither is ISPF 3.2 allocation, so no surprise.)
The existing model dataset has to be a cataloged disk dataset. So, LIKE= cannot designate a dataset on tape, and it also cannot designate one of those temporary work files that aren’t cataloged and just disappear automatically at the end of processing.
Moreover, use of LIKE= is supported only for fairly ordinary dataset types – notably, NOT for VSAM files. It works just fine for PDS (or PDSE) libraries; it also works for ordinary flat files. Besides type BASIC, this includes both extended-format (striped) datasets and large format (DSNTYPE=LARGE) datasets. Surprisingly enough, HFS files are also supported.
(For VSAM files, IDCAMS supports a parameter called MODEL which is quite similar to LIKE, but it goes into the IDCAMS DEFINE statement rather than being in the JCL.)
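As a sketch of that IDCAMS alternative (the cluster names here are hypothetical), MODEL sits inside the DEFINE statement in the SYSIN stream:

```jcl
//* Define a new VSAM cluster using an existing cluster as
//* the model; MODEL is an IDCAMS parameter, not a JCL one.
//DEFVSAM  EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DEFINE CLUSTER (NAME(MY.NEW.KSDS) -
         MODEL(SOME.EXISTING.KSDS))
/*
```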
Apart from the above restrictions on what type of dataset can be used as the model, note that BLKSIZE is not generally copied. The system determines the optimal BLKSIZE for the new dataset. That is generally a good thing; the system is very good at figuring that out. Look for the earlier post/article on BLKSIZE if you want to know more about that.
Those are the main restrictions. (Meaning this is another good candidate for a break point.)
A few even lower probability caveats are described in the next few paragraphs, in the unlikely event that you might happen to run into any of them. Either that, or maybe you like to collect computer trivia just to arm yourself in case you ever want to one-up some annoying wisenheimer. (Philosophically, that road leads nowhere, but you already know that.) Or maybe you’re just curious what more there could possibly be to reveal on such a seemingly simple topic. Maybe you want to become an expert. Well, for whatever reason, you’re still reading, so here we go with more.
SPACE is part of the information that is involved in the copy, but it is not directly copied.
The new space allocation is calculated, based loosely on the existing allocation: The system checks to find out how many tracks of disk space are currently allocated for the first three extents of the existing model dataset. If space was originally allocated in cylinders or blocks, that doesn’t matter; the calculation will convert to the equivalent number of tracks.
What does that mean, the first three extents? It means the first 3 chunks of disk space assigned to the dataset — an ordinary simple dataset can have as many as 16 separate chunks of space on one disk volume. (ISPF 3.2 and 3.4 will display “number of extents” for you if you want to check on that information.)
How does a dataset end up with a lot of extents? Mostly it happens when data keeps being added to the dataset, causing it to fill up repeatedly. The system gives the dataset another chunk of disk space – another extent – each time that happens (until it hits 16).
If the existing dataset has more than three extents of disk space, then the new dataset created using the LIKE parameter will be smaller than the existing model dataset.
Of course, the new dataset can probably have additional extents added later if it fills up, in the same way it happened for the old dataset; but that depends on the space being available. It can sometimes occur that the system cannot find sufficient eligible empty space to expand a dataset.
If you are unhappy with accepting this disk space situation, then you need to specify SPACE on the DD statement in addition to LIKE, coded to reflect whatever way you want the space to be allocated. One warning on this: the RLSE subparameter only works if you code it on the DD statement in the JCL step that actually writes to the dataset. If you allocate a new dataset in a job step where the program does not open the dataset for output, the unused space is not released by RLSE. You’ve been warned. Note, however, that you can copy the same SPACE parameter you used at allocation onto the DD statement in the job step that actually writes to the dataset; RLSE will then be honored there. In fact, if a dataset was originally created without RLSE, and/or without secondary extents, you can get either or both of those things added by coding them in the SPACE parameter on the step that writes to the dataset. That can be a handy piece of trivia to know, on rare occasions.
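Putting that together, here is a sketch of a two-step job (the dataset names, program name MYPROG, and the SPACE values are all placeholders): the first step allocates with LIKE plus an explicit SPACE override, and the second step repeats the SPACE parameter with RLSE on the step that actually opens the dataset for output, so the release is honored there.

```jcl
//* Step 1: allocate the new dataset. SPACE here overrides the
//* allocation that LIKE would otherwise calculate from the model.
//ALLOC    EXEC PGM=IEFBR14
//NEWDD    DD  DSN=my.new.dataset,DISP=(NEW,CATLG),
//             LIKE=some.existing.dataset.name,
//             SPACE=(CYL,(10,5),RLSE)
//* Step 2: the program that writes the data. RLSE takes effect
//* here because this step opens the dataset for output.
//WRITE    EXEC PGM=MYPROG
//OUTDD    DD  DSN=my.new.dataset,DISP=OLD,
//             SPACE=(CYL,(10,5),RLSE)
```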
In practice, there is another case where you might also want to specify SPACE: when you are creating a test file as a small version of some much bigger file, and you don’t want your little test dataset to take on a very large SPACE allocation based on the large production dataset. So you override it by specifying a smaller SPACE allocation on your DD statement when you create the dataset.
Next there are considerations related to SMS (System Managed Storage).
You might not be able to override the SPACE parameter in the above-described way if the SMS data class (DATACLAS) for the dataset has been set up to block this capability. I’ve never seen that done in real life, but the IBM documentation tells us that the option exists, so presumably there is some use for it in some special case someplace. More generally, a lot can be controlled by SMS parameters like data class and storage class; but those SMS classes are set up in ways that are very specific to your particular company, group, system, or site, by the people responsible for your particular system. So all we can really do here is point out that SMS parameters exist, they are defined at your site, and they can exert invisible influence on dataset allocation, for better or for worse.
For the LIKE parameter to work correctly and reliably, SMS (System Managed Storage) has to be active on the system. In some cases the LIKE parameter will be ignored if SMS is not active, and in some cases it will partially work, that is, it will copy only part of the information it should copy. These days virtually all z/OS systems can be expected to have SMS active, and virtually all cataloged disk datasets on z/OS mainframes are controlled by SMS unless there is some special reason for some datasets to be excluded. So, you don’t need to worry about this requirement except to be aware of it in case you might one day happen to stumble into it.
In most cases you will never care about the special conditions and restrictions and what-ifs that filled the preceding several paragraphs (or any other even more obscure considerations not mentioned). For the most part, if you can use ISPF 3.2 to create a new dataset based on the attributes of an existing dataset, then you can do the same thing equally well by using the LIKE parameter on a DD statement in your JCL.
The LIKE parameter is also available on the ALLOCATE command under TSO. The TSO ALLOCATE command, abbreviated ALLOC, is the TSO equivalent of a DD statement in JCL. It is used mostly in CLISTs and REXX EXECs.
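In a REXX EXEC, that might look something like this sketch (the dataset names are placeholders): the NEW and CATALOG operands play the role of DISP=(NEW,CATLG), and LIKE names the model dataset.

```rexx
/* REXX - allocate a new dataset modeled on an existing one,  */
/* the TSO equivalent of DSN/DISP/LIKE on a JCL DD statement. */
ADDRESS TSO
"ALLOC DATASET('MY.NEW.DATASET') LIKE('SOME.EXISTING.DATASET')",
  "NEW CATALOG"
```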
Be aware that restrictions such as those described in the above discussion are subject to being changed at any new release level of the operating system – IBM tends to remove restrictions that people complain about, if and when such restrictions can be easily removed. It also seems that in recent years they are not always quick to update their reference manuals when a restriction changes (Possibly because they’ve been adding a lot of major enhancements recently). So, depending on what release level of z/OS system you’re using, any particular restriction might or might not actually apply, and sometimes you just have to try it and see what happens. Happily enough IBM is more into the practice of removing restrictions, rather than adding them, so once you set up JCL that works, it will probably continue to work even longer than an old Volkswagen.
To sum up: if you can use ISPF 3.2 to create a new dataset based on the attributes of an existing dataset, then you can do the same thing equally well by using the LIKE parameter on a DD statement in your JCL. The basic format is:

//ddname   DD  DSN=my.new.dataset,DISP=(NEW,CATLG),
//             LIKE=some.existing.dataset.name