Monday, September 26, 2011

How big can a DB2 for z/OS index be?

That looks like a pretty simple question, doesn't it? I mean, if I were asking about maximum table space size, you'd be able to answer, really quickly, with "64 GB for a non-partitioned table space, 128 TB for a non-LOB partitioned table space," right? Asking about maximum size for an index doesn't make the question that much harder, does it? Or does it? Do you know the answer? Read on, and you will.

Just last week, I got a question from a DBA at one of the DB2-using organizations that I support. She had a non-partitioned index (NPI) defined on a table in a big partitioned table space. The index had grown to 63.9 GB, and that gave the DBA a nagging sense of unease. She was concerned that something bad might happen when the index size hit 64 GB. For lots of mainframe DB2 people, 64 GB does ring a bell as a size limit for a non-partitioned object. Does that limit apply to a non-partitioned index?

Looking for the answer to that question, you could check the "Limits in DB2 for z/OS" section of the appendix in the SQL Reference (this manual and others in the DB2 for z/OS library are available online at https://www-304.ibm.com/support/docview.wss?uid=swg27019288). There, in the "DB2 system limits" table, you'd see how large a table space can be, but you wouldn't find information on index size limits. Hmmmm....

Time to end the suspense. The aforementioned DBA can relax (and I told her so), because the non-partitioned index in question had been defined with a piece size of 64 GB, and a non-partitioned index can have, of course, more than one piece (i.e., more than one data set). In fact, any index (partitioned or non-partitioned) can have more than one data set in its page set (the data sets that comprise a DB2 table space or index space are collectively known as a page set). So, the maximum size of an index would depend on two factors: 1) the maximum number of data sets that can be in the index's page set, and 2) the maximum size of a data set in the index's page set.

Both of those index-max-size factors have some dependency on the nature of the table space holding the table on which an index is defined. If that table space is non-partitioned (i.e., a simple table space or a segmented table space that is not a universal table space), the index can have up to 32 data sets in its page set, and each of those data sets can be up to 2 GB in size -- so, in that case the maximum size of the index would be 64 GB (32 data sets times 2 GB per data set), UNLESS the index was defined with a PIECESIZE value smaller than 2G (if PIECESIZE 1G had been specified for the index, its maximum size would be 32 GB). Note that the default value for PIECESIZE is 2G if the index is defined on a table in a non-partitioned table space.
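To put that in DDL terms, here's a minimal sketch (the object names are made up for illustration; the PIECESIZE and BUFFERPOOL values simply reflect the defaults and limits described above):

   -- Index on a table in a segmented (non-partitioned) table space:
   -- up to 32 pieces, each limited here to 1 GB, for a 32 GB maximum
   CREATE INDEX MYSCHEMA.ORDER_IX1
     ON MYSCHEMA.ORDER_TBL (ORDER_NO)
     PIECESIZE 1G
     BUFFERPOOL BP2;

   -- Omit PIECESIZE and the 2G default applies, for a 64 GB maximum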

When the table on which an index is defined is in a partitioned table space, the index's maximum size depends on 1) whether or not that table space was defined with a DSSIZE specification (or with the LARGE attribute), and 2) whether the index is partitioned or non-partitioned (syntactically speaking, DSSIZE is preferred over LARGE for specifying partition size). We'll consider non-partitioned indexes first. If neither LARGE nor DSSIZE was specified for the associated table space, a non-partitioned index can have up to 32 data sets in its page set, and each of those data sets can be up to 2 GB in size. That makes the index's maximum size 64 GB (as noted above for an index associated with a non-partitioned table space, an explicitly specified PIECESIZE value smaller than 2G would reduce the index's maximum size accordingly).

If the partitioned table space was defined with the LARGE or DSSIZE options, an associated non-partitioned index can have as many data sets as the table space can have partitions. How many is that? Well, it depends on the DSSIZE value (and a LARGE specification equates to a DSSIZE value of 4G) and the page size of the table space. Details on the relationship between DSSIZE, page size, and maximum number of partitions can be found in the CREATE TABLESPACE section of the SQL Reference, but suffice it to say that the maximum size of a non-partitioned index is 128 TB (same as the maximum size of a non-LOB table space). This maximum size could be reached in one of two ways: with a PIECESIZE of 32G and 4096 data sets, or a PIECESIZE of 64G and 2048 data sets (a 64G PIECESIZE specification is possible only if the DSSIZE of the associated table space is 64G or larger, and with a DSSIZE specification of 64G and a page size of 32 KB, a table space can have a maximum of 2048 partitions). If the non-partitioned index was defined without a PIECESIZE specification, it will have a default PIECESIZE as determined by the following formula:

Default PIECESIZE = MIN(x, 2^32 /(MIN(4096, 2^32 / (x / y))) * z)

Where:

x is the DSSIZE of the associated table space
y is the page size of the table space
z is the page size of the index

For example, if you created a non-partitioned index without a PIECESIZE specification, and the associated table space had a DSSIZE of 32G (the "x" value in the above formula) and a page size of 32 KB (the "y" value in the formula), and the page size of your index is 4 KB (the "z" value), the default PIECESIZE value would be 4G.
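To show where that 4G value comes from, here's the arithmetic, step by step:

   x = 32G (table space DSSIZE), y = 32 KB (table space page size), z = 4 KB (index page size)

   MIN(4096, 2^32 / (x / y)) = MIN(4096, 2^32 / (32 GB / 32 KB)) = MIN(4096, 2^32 / 2^20) = 4096
   Default PIECESIZE = MIN(32G, (2^32 / 4096) * 4 KB) = MIN(32G, 2^20 * 4 KB) = MIN(32G, 4G) = 4G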

How about the max size of a partitioned index? Because that index will have one data set for each of the associated table space's partitions, the answer to the index size limit question will (as for non-partitioned indexes) depend on the maximum number of partitions for the table space and the size of each index partition. Here's how the maximum number of partitions for a table space breaks down:
  • If the table space was defined without the LARGE or DSSIZE options, and with a NUMPARTS value of 64 or less, the maximum number of partitions is 64.
  • If the table space was defined with the LARGE option, it can have up to 4096 partitions.
  • If the table space was defined with a DSSIZE specification, the maximum number of partitions can be up to 4096, depending on the DSSIZE value and the table space's page size (information on maximum number of partitions for various combinations of DSSIZE and table space page size can be found in the CREATE TABLESPACE section of the SQL Reference).

So, the above maximum-number-of-partitions values indicate how many partitions a partitioned index can have (again, because you'll have one index partition per table space partition), but how large can each index partition be? You can't specify PIECESIZE for a partitioned index; instead, DB2 determines the index partition size via the formula shown earlier in this blog entry:

Index partition size = MIN(x, 2^32 /(MIN(4096, 2^32 / (x / y))) * z)

Where:

x is the DSSIZE of the associated table space
y is the page size of the table space
z is the page size of the index

To illustrate: if the table space's DSSIZE is 64G ("x" in the formula) and the table space's page size is 32 KB ("y" value) and the index's page size is 32 KB ("z" value), each index partition can be up to 64 GB in size, and the maximum size of the partitioned index is 128 TB (64 GB times 2048 partitions, which is the maximum number of partitions that a table space with a 64G DSSIZE and a 32 KB page size can have). In fact, 128 TB is as big as a partitioned index can get (an index could also reach this size if the underlying table space had a DSSIZE of 32G and a page size of 32 KB).
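Worked through the formula, the 64G DSSIZE example looks like this:

   x = 64G, y = 32 KB, z = 32 KB

   MIN(4096, 2^32 / (x / y)) = MIN(4096, 2^32 / (64 GB / 32 KB)) = MIN(4096, 2^32 / 2^21) = 2048
   Index partition size = MIN(64G, (2^32 / 2048) * 32 KB) = MIN(64G, 2^21 * 32 KB) = 64 GB
   Maximum index size = 2048 partitions * 64 GB per partition = 128 TB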

Can you use the above index partition-sizing formula when the partitioned table space associated with the index was defined without a DSSIZE specification? I believe that you can. For that purpose, note that a LARGE table space effectively has a DSSIZE of 4G. For a partitioned table space defined without the LARGE option and without a DSSIZE specification, the maximum size of a table space partition is:
  • 4G, when NUMPARTS for the table space is between 1 and 16
  • 2G, when NUMPARTS for the table space is between 17 and 32
  • 1G, when NUMPARTS for the table space is between 33 and 64
  • 4G, when NUMPARTS for the table space is between 65 and 254

You should now be able to respond to "How large can this index get to be?" questions. I hope that this information will be useful to you.



Thursday, September 15, 2011

DB2 for z/OS Version 10: Moving Objects from One Buffer Pool to Another

Over the past 15 years or so, I have written magazine articles, presented, and blogged on the topic of DB2 for z/OS buffer pools and their impact on application performance (two of my more recent blog entries in this category were posted in September and October of last year). Often, the message boils down to this: bigger buffer pool configuration = good.

That said, there are times when enlarging a DB2 buffer pool configuration is not feasible. There may already be some pressure on the memory resource of your z/OS LPAR (as indicated, in my book, by a demand paging rate in excess of 10 per second during busy processing times -- check your z/OS monitor). There could be an organizational impediment to making your buffer pool configuration larger, in that the approval process for such a system change takes so long as to make you reluctant to put in the request. Whatever the reason, when you need to (or want to) make do with the DB2 buffers you have, you can often improve application response time and throughput by moving table spaces and/or indexes from one buffer pool (a pool with a high rate of disk read I/Os per second) to another (the target pool presumably being one with a relatively low rate of read I/Os per second).

There are other reasons, as well, for moving a database object to a different buffer pool: you might want to use a buffer pool to manage parallelism for dynamic queries (as mentioned in the blog entry I posted last week), or you might isolate an object in its own pool for monitoring purposes, or you might be changing an object's page size (more on this momentarily). Whatever your situation might be, I want to let you know via this blog entry that some of the rules pertaining to changing the buffer pool assignment for a table space or an index changed with DB2 10 for z/OS. I'll use the rest of this entry to describe these changes.

I'll start by informing you of one thing that hasn't changed. Suppose that your DB2 subsystem is running in standalone mode (i.e., it's not a member of a DB2 data sharing group), that the object being moved from one buffer pool to another is a table space, that the buffer pool to which the table space will be assigned has the same page size as the pool to which the table space is currently assigned, and that there are no already-existing "pending" definition changes for the table space or any of its associated indexes (pending object definition changes were introduced with DB2 10 in new-function mode -- more on this later). In that case, 1) the ALTER TABLESPACE statement with the BUFFERPOOL specification can be issued at any time, and 2) the change will take place when the table space's data sets are next opened after the ALTER TABLESPACE statement has been issued (a common means of closing and reopening a table space's data sets is to issue a -STOP DATABASE command for the table space, followed by a -START DATABASE command). This behavior is the same as that which you'd see in a DB2 9 environment (or in a DB2 V8 system).
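In statement and command form, that sequence might look like the following (the database, table space, and buffer pool names are made up, and the target pool is assumed to have the same page size as the current one):

   -- Reassign the table space to another pool with the same page size
   ALTER TABLESPACE MYDB.MYTS BUFFERPOOL BP2;

   -- Close and reopen the table space's data sets so the change takes effect
   -STOP DATABASE(MYDB) SPACENAM(MYTS)
   -START DATABASE(MYDB) SPACENAM(MYTS)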

OK, now for the DB2 10-introduced changes.

FOR TABLE SPACES AND INDEXES

More flexibility for DB2 data sharing sites. I like this change a lot. Prior to DB2 10, if one wanted to reassign a table space or an index to a different buffer pool in a data sharing environment, the target object had to be in the stopped state at the time of the issuance of the ALTER TABLESPACE or ALTER INDEX statement with the BUFFERPOOL specification. If you're a DBA, that's not so good from a convenience standpoint, as stopping a heavily-accessed table space or index may not be doable in prime time, even if you issue the -STOP DATABASE command with the AT(COMMIT) option to utilize drain locking. You might have to do the -STOP at night and/or on a weekend. Yuck. Enter DB2 10 (in conversion mode, no less!), and that restriction is gone: you can issue the buffer pool reassigning ALTER TABLESPACE or ALTER INDEX statement at any time, even when DB2 is running in data sharing mode -- the target object does NOT have to be in the stopped state. The change will take effect, as before, when the object's data sets are next opened following the issuance of the ALTER statement.
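In other words, something like this can now be issued whenever it's convenient, data sharing or not (made-up names again):

   -- No -STOP DATABASE required beforehand in a DB2 10 environment
   ALTER INDEX MYSCHEMA.ORDER_IX1 BUFFERPOOL BP3;
   -- The change takes effect when the index's data sets are next opened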

FOR TABLE SPACES

Reassigning a table space to a buffer pool with a different page size. Before DB2 10, the only way to move a table space from a 4K buffer pool to (for example) an 8K pool (that is, to change the table space's page size) was to unload data from the table space, drop it, re-create it with the larger-page buffer pool specified, and re-load the previously unloaded data. Cumbersome. With DB2 10 (in new-function mode), you can effect this change for a universal table space (one that is not an XML table space) or a LOB table space via an ALTER TABLESPACE statement (specifying the buffer pool with the larger or smaller page size). The change will take effect the next time the table space (in its entirety) is REORGed with SHRLEVEL CHANGE or REFERENCE. This kind of table space definition modification is known as a "pending" change -- something new introduced with DB2 10, and a continuation of the online schema change capabilities introduced with DB2 V8 and expanded with subsequent DB2 releases. By the way, DB2 10 (in new-function mode) also enables conversion of a simple, segmented, or traditional table-controlled range-partitioned table space to a universal table space via ALTER and REORG.
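Here's a sketch of the DB2 10 approach, assuming a universal table space and made-up names (BP8K0 is one of the 8K buffer pools):

   -- Pending change: move the table space from a 4K pool to an 8K pool
   ALTER TABLESPACE MYDB.MYUTS BUFFERPOOL BP8K0;

   -- The pending change is materialized by a table space-level REORG
   REORG TABLESPACE MYDB.MYUTS SHRLEVEL REFERENCE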

FOR INDEXES

Pending versus immediate definitional changes. For indexes, the DB2 10 change pertaining to buffer pool reassignment has to do with timing as it relates to the change actually taking effect. If the change is of the pending variety (new with DB2 10 in new-function mode), it will take effect when the index (in its entirety) is next reorganized via REORG INDEX with SHRLEVEL CHANGE (or via REORG TABLESPACE with SHRLEVEL CHANGE or REFERENCE -- and the REORG TABLESPACE route is required if there are also pending definition changes in existence for the table space associated with the index). Here is what would make an ALTER INDEX with a BUFFERPOOL specification a pending change (and ALL of the following conditions would have to be true):
  • The index's data sets have already been created.
  • The index is defined on a table in a universal table space, or on an XML table or an auxiliary table associated with a base table that is in a universal table space.
  • There are pending definition changes in existence for the index or the associated table space, or the buffer pool to which the index will be reassigned has a page size that is different from that of the index's current buffer pool (index page sizes other than 4K were introduced with DB2 9).
If any of the above conditions is not true, the index definition change is immediate.

For a pending change, buffer pool reassignment takes place the next time the entire index is reorganized via REORG INDEX with SHRLEVEL CHANGE or REORG TABLESPACE with SHRLEVEL CHANGE or REFERENCE.

If the change is of the immediate variety, and the page size of the "to" buffer pool is the same as that of the "from" buffer pool, reassignment of the index to the pool specified in the ALTER INDEX statement will take place when the index's data sets are next opened after the issuance of ALTER INDEX.

If it's an immediate change and the "to" buffer pool has a page size that is different from that of the "from" buffer pool, the index is placed in REBUILD-pending status and the buffer pool reassignment takes place when the index is rebuilt.
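Sketched out with made-up names, the two "different page size" paths look like this:

   -- Pending change (e.g., index on a table in a universal table space):
   -- materialized by the next full-index REORG
   ALTER INDEX MYSCHEMA.ORDER_IX1 BUFFERPOOL BP16K0;
   REORG INDEX MYSCHEMA.ORDER_IX1 SHRLEVEL CHANGE

   -- Immediate change (index not eligible for a pending change): the index
   -- goes into REBUILD-pending, and the change takes effect at rebuild time
   ALTER INDEX MYSCHEMA.LEGACY_IX1 BUFFERPOOL BP8K0;
   REBUILD INDEX (MYSCHEMA.LEGACY_IX1)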

There you have it. I hope that this information will be useful for your buffer pool reassignment changes in a DB2 10 environment (or for DB2 10 migration planning, if you're currently using DB2 9 or DB2 V8).

Wednesday, September 7, 2011

Managing DB2 for z/OS CPU Parallelism for Dynamic SQL

DB2 for z/OS query CPU parallelism (hereinafter referred to simply as "query parallelism"), introduced with DB2 Version 4 in the mid-1990s, enables DB2 to split a query into multiple tasks that can execute concurrently on a mainframe server. The primary benefit of query parallelism is improved performance from an elapsed time perspective (query parallelism tends to increase statement execution CPU time somewhat -- this because DB2 needs to evaluate a candidate statement to determine whether or not parallelism would improve the query's run time, and because DB2 has to manage a parallelized query's "child" tasks). That said, for locally-originating queries (i.e., those issued by programs running on the mainframe server on which the target DB2 subsystem is running), a sometimes equally-appealing benefit is redirection of work to the server's zIIP engines, if the server is configured with zIIPs (queries that come from remote DRDA requesters, processed through DB2's distributed data facility, already benefit, cost-wise, from substantial zIIP offload).

One matter that sometimes comes up when an organization is considering the use of query parallelism concerns management of the DB2 feature's scope-of-use. In other words, people may want some, but not all, queries executed in a given DB2 environment to be candidates for parallelism. This some-but-not-all approach makes sense in certain situations, particularly when there is a desire to utilize query parallelism in a system on which high-volume OLTP and/or batch applications are already running. Mainframe servers that handle such operational workloads often run at high levels of utilization during peak processing hours (sometimes 90% busy or more). On that type of system, the CPU cost of evaluation by DB2 of all queries for estimated parallelism performance benefit, and of management by DB2 of child tasks for queries that are parallelized, could push the server's utilization rate to a level that would negatively impact the performance of the existing operational applications. [Note: for a DB2 subsystem that is dedicated to business intelligence usage (an example being a data warehousing system), it might be appropriate to make all dynamic queries -- and dynamic SQL tends to dominate in BI environments -- candidates for parallelism by changing the value of the DB2 ZPARM parameter CDSSRDEF to ANY from its default value of 1.]

So, if you want some, but not all, of your queries to be candidates for parallelization, how do you go about putting that aim into action? For static SQL, it's easy: identify the program that issues the SELECTs that you'd like DB2 to parallelize, and rebind that package with a specification of DEGREE(ANY). For dynamic SQL, selective implementation of query parallelism might be a little less straightforward, depending on the particulars of the situation. If dynamic queries are issued by a program that an organization's developers have coded, making that program's queries candidates for parallelization would simply require that the program issue the SQL statement SET CURRENT DEGREE = 'ANY' prior to issuing the queries. In other cases, the SET CURRENT DEGREE mechanism may not be applicable. You might have a situation in which dynamic queries are generated and issued by an end-user data query tool, and that tool might not provide a means of setting the value of the CURRENT DEGREE special register to 'ANY'. Or, the dynamic queries you want DB2 to parallelize might be issued by a purchased application for which the vendor does not supply source code, thereby making it impossible to add a SET CURRENT DEGREE statement to a program (and even if you have the source code for a purchased application, modifying that source code could put you in violation of the vendor's support agreement).
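For a program you control, that's one extra statement ahead of the dynamic SQL (a sketch, with a made-up statement-text host variable):

   EXEC SQL SET CURRENT DEGREE = 'ANY';

   -- Dynamic queries prepared after this point are candidates for parallelism
   EXEC SQL PREPARE STMT1 FROM :STMTTXT;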

What can you do if the SET CURRENT DEGREE = 'ANY' option is not available to you? Here, you have some options with respect to implementing limited-scope parallelism for dynamic queries (i.e., implementing parallelism for queries in a "some-but-not-all" fashion), as explained below. Note that for all three of the parallelism management approaches described below, the value of the CDSSRDEF parameter in ZPARM is assumed to be ANY (in other words, it's assumed that you've opened the door for all-dynamic-query parallelism on that subsystem, and you'll then take one or more steps to close that door for all but a portion of the dynamic query workload).  

Limit query parallelism by table -- Suppose that you want to make dynamic queries targeting table ABC candidates for parallelism, while preventing parallelism for dynamic queries accessing other tables. This could be done pretty easily by assigning the table space associated with table ABC to its own buffer pool, and leaving the value of that pool's VPPSEQT parameter at the default of 50 (or specifying a value greater than 50, if you want -- more on this momentarily). Set the value of VPPSEQT to zero for your other buffer pools, and parallelism for queries targeting tables in table spaces assigned to those pools will be disabled as a result.

Now, a bit more on VPPSEQT (short for Virtual Pool Parallel SEQuential Threshold): this is one of several DB2 buffer pool parameter specifications that can be altered for a particular pool via execution of an ALTER BUFFERPOOL command. The default value for VPPSEQT, as noted, is 50. This is actually a percentage of the VPSEQT (Virtual Pool SEQuential Threshold) value, which is 80 by default. That 80 is a percentage of the associated pool's buffers, so by default 50% X 80% = 40% of a pool's buffers are available to hold table and/or index pages that are read from disk as a result of parallel-mode query execution. If you change the VPPSEQT threshold for a pool to 60 and leave the VPSEQT threshold at 80 then, for that pool, 60% X 80% = 48% of the buffers will be available to hold pages read into memory in support of parallel-mode query processing. If VPPSEQT is set to zero, no buffers will be available to support parallel-mode query processing, so queries targeting tables in table spaces assigned to that pool will not be candidates for parallelization.
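In command form, the set-up described above might look like this (the buffer pool names are just examples):

   -- Pool for the table space holding table ABC: leave VPPSEQT at 50 (or raise it),
   -- so parallel-mode work can use up to 50% x 80% = 40% of the pool's buffers
   -ALTER BUFFERPOOL(BP4) VPPSEQT(50)

   -- Other pools: VPPSEQT(0) disables parallelism for objects assigned to them
   -ALTER BUFFERPOOL(BP1) VPPSEQT(0)
   -ALTER BUFFERPOOL(BP2) VPPSEQT(0)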

Limit query parallelism by package, collection, etc. -- A lot of people think of DB2's Resource Limit Facility (RLF) only as a means of limiting the amount of CPU time that a dynamic SQL statement can accumulate in execution before it is terminated. That is indeed the most common usage of RLF, and it is implemented through the use of what's called an RLMT table (the name of this table is actually of the form DSNRLMTxx, with xx being a user-specified two-character alphanumeric value). You could alternatively activate an RLST table (DSNRLSTxx), which would allow you to (among other things) deactivate query parallelism for dynamic queries associated with workload components qualified by certain identifiers (options include authorization ID, collection name, and package name -- these correspond to columns of the RLST table). Remember, your specification of ANY for CDSSRDEF in ZPARM opened the door to parallelism candidacy for all dynamic queries -- through the RLST table (via the value '4' in the RLFFUNC column) you are closing that door for workloads OTHER than the one for which you want dynamic queries to be candidates for parallelism.

Here's an interesting and easy-to-specify set-up: with one row in the RLST table, having '4' in column RLFFUNC and no values in any other columns, parallelism will be disabled for dynamic queries that are local to the DB2 subsystem (e.g., issued via SPUFI or QMF or batch jobs), while parallelism remains available for queries issued from remote DRDA requesters (this thanks to a blank value in the LUNAME column of the RLST table, which represents the local location). You can find more information about the RLST table in the DB2 Version 8 Administration Guide, the DB2 9 Performance Monitoring and Tuning Guide, or the DB2 10 Managing Performance manual.
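As a sketch of that set-up (the SYSADM qualifier and the "xx" value of 01 are made up, and the RLST table is assumed to have been created with the standard column definitions, under which unspecified columns default to blanks):

   -- One row, function '4', everything else blank: parallelism is disabled for
   -- dynamic queries from the local location; remote DRDA work is unaffected
   INSERT INTO SYSADM.DSNRLST01 (RLFFUNC) VALUES ('4');

   -- Activate resource limit table set 01
   -START RLIMIT ID=01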

Limit query parallelism by DB2 data sharing group member -- I really like this option, but it does, of course, require that you have DB2 running in data sharing mode on a Parallel Sysplex. The concept here is pretty simple: you take one or more of the members of the data sharing group, and you set the value of the ZPARM parameter CDSSRDEF to ANY on those subsystems (leaving the value of CDSSRDEF at 1 on the other members of the group -- and it's assumed that you'll run your high-volume OLTP and batch processes on the CDSSRDEF = 1 subsystems). If you have "local" programs that issue dynamic queries that you want DB2 to parallelize, run those on the subsystem(s) that has CDSSRDEF = ANY. If you have users and/or applications that issue dynamic queries that access host data via DRDA and DB2's distributed data facility, have these users and apps connect to the CDSSRDEF = ANY subsystem(s). For maximum availability as it pertains to programs and users issuing dynamic queries that are to be candidates for parallelism, it's best to have at least two member DB2 subsystems set up for parallelism-by-default (i.e., with CDSSRDEF = ANY). That way, if one is down due to a planned (e.g., for maintenance) or unplanned outage, the other member(s) can continue to parallelize dynamic queries as desired. This kind of configuration for high availability was made MUCH easier for DRDA requesters when DB2 for z/OS Version 8 introduced a member-subsetting capability. Setting this up would involve creating an alias location name for the DB2 members for which you have CDSSRDEF = ANY, assigning a TCP/IP port to that alias, and updating the DDF record in the respective members' bootstrap data sets with this information (done via DSNJU003, the change log inventory utility). Combine a location alias with the use of dynamic virtual IP addresses (DVIPAs) for DB2 members, and you'll have a high-availability configuration that is simple from the perspective of users and applications connecting to the CDSSRDEF = ANY DB2 subsystems (from a CONNECT TO point of view, there is no need to know the names of individual DB2 members associated with the location alias). All of this is described quite clearly in an excellent IBM "red book" titled DB2 9 for z/OS: Distributed Functions.

And there you have it. You can implement parallelism for dynamic queries in a limited fashion, and you have options when it comes to setting this up. Assess your situation, and go the route that makes sense for your organization.