10. Monitor Switches
The monitor switches must be turned on before performance information is collected. The command:
db2 "update monitor switches using lock ON sort ON bufferpool ON uow ON table ON statement ON"
9. Agents
Make sure there are enough agents to handle the system workload.
Command: db2 "get snapshot for database manager"
Look at "Agents waiting for a token" and "Agents stolen from another application". If either value is non-zero, increase the database manager's agent settings, i.e. raise MAXAGENTS and/or MAX_COORDAGENTS.
High water mark for agents registered = 7
High water mark for agents waiting for a token = 0
Agents registered= 7
Agents waiting for a token= 0
Idle agents= 5
Agents assigned from pool= 158
Agents created from empty Pool = 7
Agents stolen from another application= 0
High water mark for coordinating agents= 7
Max agents overflow= 0
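The check described above can be sketched as a small script that scans saved snapshot output for the two agent counters. The snapshot text here is hypothetical captured output, not live data:

```python
import re

# Hypothetical saved output of: db2 "get snapshot for database manager"
snapshot = """\
Agents waiting for a token               = 3
Agents stolen from another application   = 1
"""

def counter(name, text):
    # Snapshot lines look like "Label = value"; pull the integer value.
    m = re.search(re.escape(name) + r"\s*=\s*(\d+)", text)
    return int(m.group(1)) if m else 0

waiting = counter("Agents waiting for a token", snapshot)
stolen = counter("Agents stolen from another application", snapshot)

# Non-zero values suggest raising MAXAGENTS and/or MAX_COORDAGENTS.
needs_more_agents = waiting > 0 or stolen > 0
print(needs_more_agents)
```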
8. Maximum Open Files
DB2 limits the number of files it keeps open at once; the database configuration parameter MAXFILOP sets the cap on concurrently open files. Once the cap is reached, DB2 starts closing and reopening tablespace files (including raw devices), which lengthens SQL response times and wastes CPU.
Use this command to check whether files are being closed:
db2 "get snapshot for database on DBNAME"
Look at the line "Database files closed = 0".
If the value is not zero, increase MAXFILOP:
db2 "update db cfg for DBNAME using MAXFILOP N"
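A minimal sketch of this check, using a hypothetical snapshot line as input:

```python
import re

# Hypothetical line from: db2 "get snapshot for database on DBNAME"
snapshot_line = "Database files closed                      = 184"

files_closed = int(re.search(r"=\s*(\d+)", snapshot_line).group(1))

# Any non-zero value means DB2 is cycling tablespace file handles;
# raise MAXFILOP via: db2 "update db cfg for DBNAME using MAXFILOP N"
raise_maxfilop = files_closed > 0
print(raise_maxfilop)
```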
7. Locks
The default LOCKTIMEOUT = -1 means lock waits never time out, which can be a disaster in an OLTP system, yet many databases are left at this setting. Set it to a fairly small value, e.g. LOCKTIMEOUT = 10 or 15 seconds.
To check the current setting:
db2 "get db cfg for DBNAME"
and look at the line:
Lock timeout (sec) (LOCKTIMEOUT) = -1
Confirm with the application developers that their code already handles a timeout, then set:
db2 "update db cfg for DBNAME using LOCKTIMEOUT 15"
You can also check the number of lock waits, the lock wait time, and the amount of lock list memory in use:
db2 "get snapshot for database on DBNAME"
Look at:
Locks held currently= 0
Lock waits= 0
Time database waited on locks (ms)= 0
Lock list memory in use (Bytes)= 576
Deadlocks detected= 0
Lock escalations= 0
Exclusive lock escalations= 0
Agents currently waiting on locks= 0
Lock Timeouts= 0
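From these counters you can derive an average wait per lock wait, a useful signal of contention. A sketch, with the two input values taken from hypothetical non-zero snapshot lines rather than the sample above:

```python
# Hypothetical values of "Lock waits" and
# "Time database waited on locks (ms)" from the database snapshot.
lock_waits = 40
lock_wait_time_ms = 12000

# Average wait per lock wait; long averages point to contention
# or to a LOCKTIMEOUT that is too generous.
avg_wait_ms = lock_wait_time_ms / lock_waits if lock_waits else 0.0
print(avg_wait_ms)  # 300.0
```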
6. TEMPORARY TABLESPACES
db2 "list tablespaces show detail" lets you inspect the temporary tablespace's containers:
Tablespace ID= 1
Name= TEMPSPACE1
Type= System managed space
Contents= Temporary data
State= 0x0000
Detailed explanation: Normal
Total pages= 1
Useable pages= 1
Used pages= 1
Free pages= Not applicable
High water mark (pages)= Not applicable
Page size (bytes)= 4096
Extent size (pages)= 32
Prefetch size (pages)= 96
Number of containers= 3
This shows 3 containers, and the prefetch size is 3 times the extent size. For the best parallel I/O performance, the prefetch size should be a multiple of the extent size; the multiple is usually the number of containers.
db2 "list tablespace containers for 1 show detail"
shows how the containers are defined.
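The prefetch-size rule above can be stated as a one-line calculation; the values mirror the TEMPSPACE1 sample output:

```python
# Rule of thumb: prefetch size = extent size * number of containers,
# so each container can service one extent of a prefetch in parallel.
extent_size_pages = 32   # "Extent size (pages)" from the sample
num_containers = 3       # "Number of containers" from the sample

recommended_prefetch = extent_size_pages * num_containers
print(recommended_prefetch)  # 96, matching "Prefetch size (pages)= 96"
```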
5. SORT MEMORY
An OLTP workload should not be doing large sorts; they consume significant CPU, I/O, and elapsed time.
The default SORTHEAP = 256 pages * 4 KB = 1 MB, which is usually enough. You should know the number of sort overflows and the number of sorts per transaction.
db2 "get snapshot for database on DBNAME"
Look at the following items:
Total sort heap allocated= 0
Total sorts = 1
Total sort time (ms)= 8
Sort overflows = 0
Active sorts = 0
Commit statements attempted = 3
Rollback statements attempted = 0
Let transactions = Commit statements attempted + Rollback statements attempted
Let SortsPerTX= Total sorts / transactions
Let PercentSortOverflows = Sort overflows * 100 / Total sorts
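The three "Let" formulas above, computed from the sample snapshot values:

```python
# Inputs from the sample snapshot output.
total_sorts = 1
sort_overflows = 0
commits_attempted = 3
rollbacks_attempted = 0

# transactions = commits + rollbacks
transactions = commits_attempted + rollbacks_attempted
# sorts per transaction
sorts_per_tx = total_sorts / transactions
# percentage of sorts that overflowed the sort heap to disk
percent_sort_overflows = sort_overflows * 100 / total_sorts

print(transactions, sorts_per_tx, percent_sort_overflows)
```

A rising PercentSortOverflows is the signal that SORTHEAP is too small for the workload.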
4. TABLE ACCESS
To find out how many rows are read per transaction:
1) db2 "get snapshot for database on DBNAME"
to see how many transactions have occurred: the sum of Commit statements attempted + Rollback statements attempted.
2) db2 "get snapshot for tables on DBNAME"
to work out the rows read per transaction: divide the number of rows read by the number of transactions (RowsPerTX). An OLTP transaction typically reads about 20 rows or fewer from a given table; if one transaction reads hundreds or thousands of rows, table scans are probably occurring and an index may be needed. In the simplest case, run runstats to refresh the statistics.
Sample output from "get snapshot for tables on DBNAME" follows:
Snapshot timestamp = 09-25-2000 4:47:09.970811
Database name= DGIDB
Database path= /fs/inst1/inst1/NODE0000/SQL00001/
Input database alias= DGIDB
Number of accessed tables= 8
Table List
Table Schema= INST1
Table Name= DGI_SALES_LOGS_TB
Table Type= User
Rows Written= 0
Rows Read= 98857
Overflows= 0
Page Reorgs= 0
A high overflow count indicates the table needs a reorg: when a row's width changes, DB2 may have to move the row to a different page.
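The RowsPerTX calculation from steps 1) and 2) can be sketched as follows; the rows-read figure mirrors the sample output, while the transaction count is a hypothetical value for illustration:

```python
# RowsPerTX = rows read for a table / total transactions.
rows_read = 98857        # "Rows Read" from the sample table snapshot
transactions = 1500      # commits + rollbacks attempted (assumed value)

rows_per_tx = rows_read / transactions

# An OLTP transaction normally reads ~20 rows or fewer per table;
# a value far above that hints at table scans and a missing index.
suspicious = rows_per_tx > 20
print(round(rows_per_tx, 1), suspicious)
```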
3. TABLESPACE ANALYSIS
The tablespace snapshot is extremely valuable for understanding what data is accessed and how:
db2 "get snapshot for tablespaces on DBNAME"
For each tablespace, ask:
What is the average read time (ms)?
What is the average write time (ms)?
What percentage of the physical I/O is asynchronous (prefetched) vs. synchronous (random)?
What are the buffer pool hit ratios for each tablespace?
How many physical pages are being read each minute?
How many physical and logical pages are being read for each transaction?
Across all tablespaces, ask:
Which tablespaces have the slowest read and write times? Why?
Containers on slow disks? Are container sizes unequal?
Are the access attributes, asynchronous versus synchronous access, consistent with expectations?
Randomly read tables should have randomly read tablespaces, meaning high synchronous read percentages, usually higher buffer pool hit ratios, and lower physical I/O rates.
For each tablespace, make sure the prefetch size is a multiple of the extent size. If necessary, alter the tablespace to change its prefetch size.
To show tablespace information: db2 "list tablespaces show detail"
To show container information: db2 "list tablespace containers for N show detail"
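One of the questions above, the buffer pool hit ratio per tablespace, comes straight from the logical/physical read counters in the tablespace snapshot. A sketch with hypothetical counter values:

```python
# Hypothetical counters from the tablespace snapshot.
logical_reads = 100000   # buffer pool data logical reads
physical_reads = 2500    # buffer pool data physical reads

# Fraction of page requests satisfied from the buffer pool, as a percent.
hit_ratio_pct = (1 - physical_reads / logical_reads) * 100
print(round(hit_ratio_pct, 2))
```

Randomly read tablespaces should show a higher hit ratio than sequentially scanned ones, consistent with the expectations listed above.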
2. BUFFER POOL OPTIMIZATION
At last, the buffer pools.
Systems nowadays commonly have 2 GB, 4 GB, or even 8 GB of memory, yet DB2's default IBMDEFAULTBP is only 16 MB. So, as a rule, create one buffer pool for the SYSCATSPACE catalog tablespace, one for the TEMPSPACE tablespace, and at least two more, BP_RAND and BP_SEQ. Randomly accessed tablespaces should use a buffer pool dedicated to random objectives, BP_RAND; sequentially accessed tablespaces (with asynchronous prefetch I/O) should use a buffer pool dedicated to sequential objectives, BP_SEQ. Further buffer pools can be created depending on the application: for example, a buffer pool large enough to pin hot, frequently accessed data, or a dedicated buffer pool for a single large table.
A buffer pool that is too small causes large amounts of unnecessary physical I/O; one that is too large can push the system into paging and adds needless CPU overhead for managing the memory.
Buffer pool size is relative: a system's buffer pools should be sized "just right", and the right size is the point of diminishing returns. If you are not using an automatic tool, test buffer pool performance methodically (hit ratios, I/O counts, physical-read percentages) until that point is reached. And because applications change, the optimum is not fixed either; re-evaluate it periodically.
A DBA's typical workflow is:
1). Create an SQL Event Monitor, write to file:
$> db2 "create event monitor SQLCOST for statements write to ..."
2). Activate the event monitor (be sure ample free disk space is available):
$> db2 "set event monitor SQLCOST state = 1"
3). Let the application run.
4). Deactivate the event monitor:
$> db2 "set event monitor SQLCOST state = 0"
5). Use the DB2-supplied db2evmon tool to format the raw SQL Event Monitor data (hundreds of megabytes of free disk space may be required depending on SQL throughput rates):
$> db2evmon -db DBNAME -evm SQLCOST
> sqltrace.txt
6). Browse through the formatted file scanning for unusually large cost numbers, a time-consuming process:
$> more sqltrace.txt
7). Undertake a more complete analysis of the formatted file that attempts to identify unique statements (independent of literal values), each unique statement's frequency (how many times it occurred), and the aggregate of its total CPU, sort, and other resource costs. Such a thorough analysis could take a week or more on just a 30-minute sample of application SQL activity.
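The core of step 7), identifying unique statements independent of literal values, can be sketched by collapsing literals to a placeholder and counting. The statements below are hypothetical examples, not real event-monitor output:

```python
import re
from collections import Counter

def normalize(sql):
    # Collapse literals so statements differing only in constants match.
    sql = re.sub(r"'[^']*'", "?", sql)   # string literals -> ?
    sql = re.sub(r"\b\d+\b", "?", sql)   # numeric literals -> ?
    return re.sub(r"\s+", " ", sql).strip().upper()

# Hypothetical statements pulled from a formatted event-monitor file.
statements = [
    "select * from orders where id = 101",
    "select * from orders where id = 202",
    "select name from customers where city = 'Boston'",
]

freq = Counter(normalize(s) for s in statements)
print(freq.most_common(1)[0])
```

In a real analysis the same grouping key would also carry each statement's aggregated CPU and sort costs, which is what makes the week-long manual effort worthwhile to automate.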