Generating XLS Files from Imported Text Files with a Java Program

Recently at work I needed to import data from a text file and generate an xls file. It did not seem hard: a while ago I wrote a Java program that runs on a server, pulling data from an Oracle database and generating xls files. So I took that program, adapted it, and was done quickly. Since this tool is operated by users on the client side, I wrote a very simple GUI with Swing.
But real-world testing exposed a big problem: memory. The original program first wrote all the data to be imported into a Vector. On the server, 300,000+ rows were handled easily, but on Windows XP it failed (my machine has 512 MB of RAM). My test file had 100,000 rows; after loading them into the Vector, writing the data out to an xls file through the POI library overflowed the JVM heap. I manually raised the JVM's maximum heap size with -Xmx, but that still was not enough; only after cutting the test data down to 25,000 rows did the program generate the xls file successfully. So the final design was changed: the imported text data is first written out to temp files, each with the column names on the first line and at most 25,000 data rows from the second line on, and each temp file is turned into one xls file. Testing showed the performance is quite good. Java is actually quite efficient these days, unless you are in an environment with very strict timing requirements, where there is no way around it.

For the final release, since the tool is for users without professional IT knowledge, I packaged it into a single exe with exe4j to make it easy to use. Users only need to have JRE 5.0 or later installed. (I have not tested on 6.0, but I assume it will be fine.)

Usage: click the Select button and choose the file to import. The generated xls files are placed in the same directory as the imported file; if any data is skipped, a log file is created in the directory where the program resides.

Data file format:
The first line of the data must be the header, i.e. the table's column names; the remaining lines are the data itself, with fields separated by |. You can download the test sample below to see the format.

Software download:
TextConverter.zip

Test data download:
test2.txt

[Screenshot: program interface]

Script: Generate A DDL Script For A Table

The following script generates the DDL statements that create a table; it relies on DBMS_METADATA.GET_DDL:

-- How to use ddl.sql
-- Run ddl.sql in SQL*Plus.
-- Log in to SQL*Plus as the APPS user or a DBA user.
-- Start ddl.sql; it will prompt for the table_name and table_owner you are looking for.
-- It will generate tablename_ddl.txt

set long 1000000
SET PAGESIZE 3000
set lines 200
SET HEADING OFF
SET VERIFY OFF
SET FEEDBACK OFF
set echo on
set timing off
set wrap On

ACCEPT TABLE_NAME CHAR PROMPT 'Enter Table Name : '
ACCEPT TABLE_OWNER CHAR PROMPT 'Enter Table Owner : '

select DBMS_METADATA.GET_DDL('TABLE',OBJECT_NAME,OWNER)
FROM Dba_objects
where owner = UPPER('&TABLE_OWNER') and object_name = UPPER('&TABLE_NAME')
and object_type = 'TABLE'
union all
select dbms_metadata.GET_DEPENDENT_DDL ('COMMENT', TABLE_NAME, OWNER )
FROM (select table_name,owner
from Dba_col_comments
where owner = UPPER('&TABLE_OWNER')
and table_name = UPPER('&TABLE_NAME')
and comments is not null
union
select table_name,owner
from sys.Dba_TAB_comments
where owner = UPPER('&TABLE_OWNER')
and table_name = UPPER('&TABLE_NAME')
and comments is not null)
union all
select DBMS_METADATA.GET_DEPENDENT_DDL('INDEX',TABLE_NAME, TABLE_OWNER)
FROM (select table_name,table_owner
FROM Dba_indexes
where table_owner = UPPER('&TABLE_OWNER')
and table_name = UPPER('&TABLE_NAME')
and index_name not in (select constraint_name
from sys.Dba_constraints
where table_name = table_name
and constraint_type = 'P' )
and rownum = 1)
union all
select dbms_metadata.GET_DDL ('TRIGGER', trigger_name ,owner )
from Dba_triggers
where table_owner = UPPER('&TABLE_OWNER')
and table_name = UPPER('&TABLE_NAME')
.
SET CONCAT +
spool &TABLE_NAME+_ddl.txt
/
spool off

Sample output:

SQL> @ddl
SQL> set timing off
SQL> set wrap On
SQL>
SQL> ACCEPT TABLE_NAME CHAR PROMPT 'Enter Table Name : '
Enter Table Name : TAB$
SQL> ACCEPT TABLE_OWNER CHAR PROMPT 'Enter Table Owner : '
Enter Table Owner : SYS
SQL>
SQL> select DBMS_METADATA.GET_DDL('TABLE',OBJECT_NAME,OWNER)
2  FROM Dba_objects
3  where owner = UPPER('&TABLE_OWNER') and object_name = UPPER('&TABLE_NAME')
4  and object_type = 'TABLE'
5  union all
6  select dbms_metadata.GET_DEPENDENT_DDL ('COMMENT', TABLE_NAME, OWNER )
7  FROM (select table_name,owner
8  from Dba_col_comments
9  where owner = UPPER('&TABLE_OWNER')
10  and table_name = UPPER('&TABLE_NAME')
11  and comments is not null
12  union
13  select table_name,owner
14  from sys.Dba_TAB_comments
15  where owner = UPPER('&TABLE_OWNER')
16  and table_name = UPPER('&TABLE_NAME')
17  and comments is not null)
18  union all
19  select DBMS_METADATA.GET_DEPENDENT_DDL('INDEX',TABLE_NAME, TABLE_OWNER)
20  FROM (select table_name,table_owner
21  FROM Dba_indexes
22  where table_owner = UPPER('&TABLE_OWNER')
23  and table_name = UPPER('&TABLE_NAME')
24  and index_name not in (select constraint_name
25  from sys.Dba_constraints
26  where table_name = table_name
27  and constraint_type = 'P' )
28  and rownum = 1)
29  union all
30  select dbms_metadata.GET_DDL ('TRIGGER', trigger_name ,owner )
31  from Dba_triggers
32  where table_owner = UPPER('&TABLE_OWNER')
33  and table_name = UPPER('&TABLE_NAME')
34  .
SQL> SET CONCAT +
SQL> spool &TABLE_NAME+_ddl.txt
SP2-0332: Cannot create spool file.
SQL> /
CREATE TABLE "SYS"."TAB$"
(    "OBJ#" NUMBER NOT NULL ENABLE,
"DATAOBJ#" NUMBER,
"TS#" NUMBER NOT NULL ENABLE,
"FILE#" NUMBER NOT NULL ENABLE,
"BLOCK#" NUMBER NOT NULL ENABLE,
"BOBJ#" NUMBER,
"TAB#" NUMBER,
"COLS" NUMBER NOT NULL ENABLE,
"CLUCOLS" NUMBER,
"PCTFREE$" NUMBER NOT NULL ENABLE,
"PCTUSED$" NUMBER NOT NULL ENABLE,
"INITRANS" NUMBER NOT NULL ENABLE,
"MAXTRANS" NUMBER NOT NULL ENABLE,
"FLAGS" NUMBER NOT NULL ENABLE,
"AUDIT$" VARCHAR2(38) NOT NULL ENABLE,
"ROWCNT" NUMBER,
"BLKCNT" NUMBER,
"EMPCNT" NUMBER,
"AVGSPC" NUMBER,
"CHNCNT" NUMBER,
"AVGRLN" NUMBER,
"AVGSPC_FLB" NUMBER,
"FLBCNT" NUMBER,
"ANALYZETIME" DATE,
"SAMPLESIZE" NUMBER,
"DEGREE" NUMBER,
"INSTANCES" NUMBER,
"INTCOLS" NUMBER NOT NULL ENABLE,
"KERNELCOLS" NUMBER NOT NULL ENABLE,
"PROPERTY" NUMBER NOT NULL ENABLE,
"TRIGFLAG" NUMBER,
"SPARE1" NUMBER,
"SPARE2" NUMBER,
"SPARE3" NUMBER,
"SPARE4" VARCHAR2(1000),
"SPARE5" VARCHAR2(1000),
"SPARE6" DATE
) CLUSTER "SYS"."C_OBJ#" ("OBJ#")
CREATE INDEX "SYS"."I_TAB1" ON "SYS"."TAB$" ("BOBJ#")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "SYSTEM"
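
If you want the spooled DDL to be easier to replay on another database, the DBMS_METADATA session transforms can optionally be set before running the script above; a minimal sketch (these two transform choices are illustrative additions, not part of the original script):

-- Optional: end each generated statement with ";" and drop storage/segment attributes
BEGIN
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SEGMENT_ATTRIBUTES', FALSE);
END;
/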

Differences Between Optimizer Modes (all_rows & first_rows_n)

The FIRST_ROWS optimizer mode has one guiding goal: return the first row of the result set as fast as possible. It is most effective when a user of an OLTP system is retrieving a single record, but it is not the ideal choice for batch-intensive environments, where a query typically retrieves many rows. FIRST_ROWS generally forces the use of certain indexes that might not be chosen in the default (ALL_ROWS) mode. FIRST_ROWS is ignored for UPDATE and DELETE statements, because all rows identified by those DML operations will be updated or deleted anyway. Likewise, when grouping constructs such as GROUP BY, DISTINCT, INTERSECT, MINUS and UNION are present, FIRST_ROWS is replaced by ALL_ROWS, since grouping requires all rows to be retrieved. When the statement has an ORDER BY clause and an index scan can do the actual sorting, the optimizer will avoid the extra sort. When an index scan is available and the index is on the inner table, the optimizer prefers a NESTED LOOPS join over a SORT MERGE join.

In addition, 10g provides FIRST_ROWS_N, a variant of the FIRST_ROWS mode, to specify how many rows should be returned as quickly as possible; the value lies between 10 and 1000. This newer FIRST_ROWS_N approach is fully cost-based and is sensitive to the value of N: if N is small, the optimizer tends to produce plans with nested loops and index lookups; if N is large, it may produce plans built from hash joins and full table scans (similar to ALL_ROWS). FIRST_ROWS and FIRST_ROWS_N also differ in that FIRST_ROWS keeps some rule-based code, while FIRST_ROWS_N computes costs entirely from statistics, as the Oracle documentation describes:

The ALL_ROWS optimizer mode directs the query to retrieve all rows as fast as possible (best throughput). It is ideal when users work in an environment dominated by large batch reports.

During the actual hard parse, FIRST_ROWS_N first computes the cost of each candidate plan the way ALL_ROWS would, and then substitutes the N rows we want for the full candidate cardinality (CARD) in the cost calculation to arrive at the FIRST_ROWS_N plan cost.
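
To see the practical effect without reading the 10053 trace, you can simply compare the plans the two modes produce; a quick sketch using DBMS_XPLAN against the TEST/TESTA tables created in the test case below:

-- Compare the plan chosen under each optimizer mode
alter session set optimizer_mode = all_rows;
explain plan for
  select test.owner from test, testa where test.object_id = testa.object_id;
select * from table(dbms_xplan.display);

alter session set optimizer_mode = first_rows_10;
explain plan for
  select test.owner from test, testa where test.object_id = testa.object_id;
select * from table(dbms_xplan.display);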

create table test as select  * from dba_objects;

create table testa as select * from test;

alter session set events '10053 trace name context forever, level 1';    -- use event 10053 to trace the cost calculation

alter session set optimizer_mode=all_rows;

select test.owner from test,testa where test.object_id=testa.object_id;

alter session set events '10053 trace name context off';

Below is how the best join method is chosen in ALL_ROWS mode:

NL Join

Outer table: Card: 9622.00  Cost: 35.37  Resp: 35.37  Degree: 1  Bytes: 7

Inner table: TESTA  Alias: TESTA

Access Path: TableScan

NL Join:  Cost: 318924.52  Resp: 318924.52  Degree: 0

Cost_io: 315358.00  Cost_cpu: 27736509932

Resp_io: 315358.00  Resp_cpu: 27736509932

Access Path: index (index (FFS))

Index: INDA_ID

resc_io: 5.69  resc_cpu: 1304190

ix_sel: 0.0000e+00  ix_sel_with_filters: 1

Inner table: TESTA  Alias: TESTA

Access Path: index (FFS)

NL Join:  Cost: 56375.98  Resp: 56375.98  Degree: 0

Cost_io: 54762.00  Cost_cpu: 12551800804

Resp_io: 54762.00  Resp_cpu: 12551800804

Access Path: index (AllEqJoinGuess)

Index: INDA_ID

resc_io: 1.00  resc_cpu: 8171

ix_sel: 1.0393e-04  ix_sel_with_filters: 1.0393e-04

NL Join: Cost: 9667.48  Resp: 9667.48  Degree: 1

Cost_io: 9657.00  Cost_cpu: 81507910

Resp_io: 9657.00  Resp_cpu: 81507910

Best NL cost: 9667.48

resc: 9667.48 resc_io: 9657.00 resc_cpu: 81507910

resp: 9667.48 resp_io: 9657.00 resp_cpu: 81507910

Join Card:  9622.00 = outer (9622.00) * inner (9622.00) * sel (1.0393e-04)

Join Card - Rounded: 9622 Computed: 9622.00

SM Join

Outer table:

resc: 35.37  card 9622.00  bytes: 7  deg: 1  resp: 35.37

Inner table: TESTA  Alias: TESTA

resc: 7.17  card: 9622.00  bytes: 3  deg: 1  resp: 7.17

using dmeth: 2  #groups: 1

SORT resource      Sort statistics

Sort width:          70 Area size:      131072 Max Area size:    12582912

Degree:               1

Blocks to Sort:      17 Row size:           14 Total Rows:           9622

Initial runs:         2 Merge passes:        1 IO Cost / pass:         10

Total IO sort cost: 27      Total CPU sort cost: 13931876

Total Temp space used: 254000

SM join: Resc: 203.62  Resp: 203.62  [multiMatchCost=0.00]

HA Join

Outer table:

resc: 35.37  card 9622.00  bytes: 7  deg: 1  resp: 35.37

Inner table: TESTA  Alias: TESTA

resc: 7.17  card: 9622.00  bytes: 3  deg: 1  resp: 7.17

using dmeth: 2  #groups: 1

Cost per ptn: 0.81  #ptns: 1

hash_area: 124 (max=3072)   Hash join: Resc: 43.35  Resp: 43.35  [multiMatchCost=0.00]

HA Join (swap)

Outer table:

resc: 7.17  card 9622.00  bytes: 3  deg: 1  resp: 7.17

Inner table: TEST  Alias: TEST

resc: 35.37  card: 9622.00  bytes: 7  deg: 1  resp: 35.37

using dmeth: 2  #groups: 1

Cost per ptn: 0.81  #ptns: 1

hash_area: 124 (max=3072)   Hash join: Resc: 43.35  Resp: 43.35  [multiMatchCost=0.00]

HA cost: 43.35

resc: 43.35 resc_io: 42.00 resc_cpu: 10480460

resp: 43.35 resp_io: 42.00 resp_cpu: 10480460

Best:: JoinMethod: Hash

Cost: 43.35  Degree: 1  Resp: 43.35  Card: 9622.00  Bytes: 10

***********************

Best so far: Table#: 0  cost: 35.3706  card: 9622.0000  bytes: 67354

Table#: 1  cost: 43.3476  card: 9622.0000  bytes: 96220

You can see that the candidate cardinality of both tables in the join is 9622 rows, and the actual result set is also 9622 rows.

Now let's look at the trace under FIRST_ROWS_10:

alter session set events '10053 trace name context forever, level 1';

alter session set optimizer_mode=first_rows_10;

select test.owner from test,testa where test.object_id=testa.object_id;

alter session set events '10053 trace name context off';

Now joining: TEST[TEST]#0

***************

NL Join

Outer table: Card: 11.00  Cost: 2.00  Resp: 2.00  Degree: 1  Bytes: 3

Inner table: TEST  Alias: TEST

Access Path: TableScan

NL Join:  Cost: 368.08  Resp: 368.08  Degree: 0

Cost_io: 364.00  Cost_cpu: 31713898

Resp_io: 364.00  Resp_cpu: 31713898

Access Path: index (AllEqJoinGuess)

Index: IND_ID

resc_io: 2.00  resc_cpu: 15503

ix_sel: 1.0393e-04  ix_sel_with_filters: 1.0393e-04

NL Join (ordered): Cost: 24.02  Resp: 24.02  Degree: 1

Cost_io: 24.00  Cost_cpu: 178973

Resp_io: 24.00  Resp_cpu: 178973

Best NL cost: 24.02

resc: 24.02 resc_io: 24.00 resc_cpu: 178973

resp: 24.02 resp_io: 24.00 resp_cpu: 178973

Join Card:  11.00 = outer (11.00) * inner (9622.00) * sel (1.0393e-04)

Join Card - Rounded: 11 Computed: 11.00

SM Join

Outer table:

resc: 7.17  card 9622.00  bytes: 3  deg: 1  resp: 7.17

Inner table: TEST  Alias: TEST

resc: 35.37  card: 9622.00  bytes: 7  deg: 1  resp: 35.37

using dmeth: 2  #groups: 1

SORT resource      Sort statistics

Sort width:          70 Area size:      131072 Max Area size:    12582912

Degree:               1

Blocks to Sort:      22 Row size:           18 Total Rows:           9622

Initial runs:         2 Merge passes:        1 IO Cost / pass:         14

Total IO sort cost: 36      Total CPU sort cost: 14055006

Total Temp space used: 320000

SORT resource      Sort statistics

Sort width:          70 Area size:      131072 Max Area size:    12582912

Degree:               1

Blocks to Sort:      17 Row size:           14 Total Rows:           9622

Initial runs:         2 Merge passes:        1 IO Cost / pass:         10

Total IO sort cost: 27      Total CPU sort cost: 13931876

Total Temp space used: 254000

SM join: Resc: 109.14  Resp: 109.14  [multiMatchCost=0.00]

SM cost: 109.14

resc: 109.14 resc_io: 105.00 resc_cpu: 32173386

resp: 109.14 resp_io: 105.00 resp_cpu: 32173386

SM Join (with index on outer)

Access Path: index (FullScan)

Index: IND_ID

resc_io: 167.00  resc_cpu: 5134300

ix_sel: 1  ix_sel_with_filters: 1

Cost: 167.66  Resp: 167.66  Degree: 1

Outer table:

resc: 167.66  card 11.00  bytes: 7  deg: 1  resp: 167.66

Inner table: TESTA  Alias: TESTA

resc: 7.17  card: 9622.00  bytes: 3  deg: 1  resp: 7.17

using dmeth: 2  #groups: 1

SORT resource      Sort statistics

Sort width:          70 Area size:      131072 Max Area size:    12582912

Degree:               1

Blocks to Sort:      17 Row size:           14 Total Rows:           9622

Initial runs:         2 Merge passes:        1 IO Cost / pass:         10

Total IO sort cost: 27      Total CPU sort cost: 13931876

Total Temp space used: 254000

SM join: Resc: 203.62  Resp: 203.62  [multiMatchCost=0.00]

HA Join

Outer table:

resc: 35.37  card 9622.00  bytes: 7  deg: 1  resp: 35.37

Inner table: TESTA  Alias: TESTA

resc: 7.17  card: 9622.00  bytes: 3  deg: 1  resp: 7.17

using dmeth: 2  #groups: 1

Cost per ptn: 0.81  #ptns: 1

hash_area: 124 (max=3072)   Hash join: Resc: 43.35  Resp: 43.35  [multiMatchCost=0.00]

HA Join (swap)

Outer table:

resc: 7.17  card 9622.00  bytes: 3  deg: 1  resp: 7.17

Inner table: TEST  Alias: TEST

resc: 2.00  card: 11.00  bytes: 7  deg: 1  resp: 2.00

using dmeth: 2  #groups: 1

Cost per ptn: 0.69  #ptns: 1

hash_area: 124 (max=3072)   Hash join: Resc: 9.85  Resp: 9.85  [multiMatchCost=0.00]

HA cost: 9.85

resc: 9.85 resc_io: 9.00 resc_cpu: 6646477

resp: 9.85 resp_io: 9.00 resp_cpu: 6646477

First K Rows: copy A one plan, tab=TESTA

Best:: JoinMethod: Hash

Cost: 9.85  Degree: 1  Resp: 9.85  Card: 9622.00  Bytes: 17

***********************

Best so far: Table#: 0  cost: 2.0012  card: 11.0000  bytes: 77

Table#: 1  cost: 9.8546  card: 9622.0000  bytes: 163574

You can see that this time the calculation plugs in 11 (10 + 1) as the number of rows the user wants returned first; by adjusting the candidate result set (Card) of the join objects the optimizer achieves its goal, and the corresponding costs all drop.

Below is the FIRST_ROWS_1000 case:

alter session set events '10053 trace name context forever, level 1';

alter session set optimizer_mode=first_rows_1000;

select test.owner from test,testa where test.object_id=testa.object_id;

alter session set events '10053 trace name context off';

NL Join

Outer table: Card: 1000.00  Cost: 5.04  Resp: 5.04  Degree: 1  Bytes: 7

Inner table: TESTA  Alias: TESTA

Access Path: TableScan

NL Join:  Cost: 33147.66  Resp: 33147.66  Degree: 0

Cost_io: 32777.00  Cost_cpu: 2882616819

Resp_io: 32777.00  Resp_cpu: 2882616819

Access Path: index (index (FFS))

Index: INDA_ID

resc_io: 5.69  resc_cpu: 1304190

ix_sel: 0.0000e+00  ix_sel_with_filters: 1

Inner table: TESTA  Alias: TESTA

Access Path: index (FFS)

NL Join:  Cost: 5861.74  Resp: 5861.74  Degree: 0

Cost_io: 5694.00  Cost_cpu: 1304492819

Resp_io: 5694.00  Resp_cpu: 1304492819

Access Path: index (AllEqJoinGuess)

Index: INDA_ID

resc_io: 1.00  resc_cpu: 8171

ix_sel: 1.0393e-04  ix_sel_with_filters: 1.0393e-04

NL Join: Cost: 1006.09  Resp: 1006.09  Degree: 1

Cost_io: 1005.00  Cost_cpu: 8474019

Resp_io: 1005.00  Resp_cpu: 8474019

Best NL cost: 1006.09

resc: 1006.09 resc_io: 1005.00 resc_cpu: 8474019

resp: 1006.09 resp_io: 1005.00 resp_cpu: 8474019

Join Card:  1000.00 = outer (1000.00) * inner (9622.00) * sel (1.0393e-04)

Join Card - Rounded: 1000 Computed: 1000.00

SM Join

Outer table:

resc: 35.37  card 9622.00  bytes: 7  deg: 1  resp: 35.37

Inner table: TESTA  Alias: TESTA

resc: 7.17  card: 9622.00  bytes: 3  deg: 1  resp: 7.17

using dmeth: 2  #groups: 1

SORT resource      Sort statistics

Sort width:          70 Area size:      131072 Max Area size:    12582912

Degree:               1

Blocks to Sort:      22 Row size:           18 Total Rows:           9622

Initial runs:         2 Merge passes:        1 IO Cost / pass:         14

Total IO sort cost: 36      Total CPU sort cost: 14055006

Total Temp space used: 320000

SORT resource      Sort statistics

Sort width:          70 Area size:      131072 Max Area size:    12582912

Degree:               1

Blocks to Sort:      17 Row size:           14 Total Rows:           9622

Initial runs:         2 Merge passes:        1 IO Cost / pass:         10

Total IO sort cost: 27      Total CPU sort cost: 13931876

Total Temp space used: 254000

SM join: Resc: 109.14  Resp: 109.14  [multiMatchCost=0.00]

SM cost: 109.14

resc: 109.14 resc_io: 105.00 resc_cpu: 32173386

resp: 109.14 resp_io: 105.00 resp_cpu: 32173386

SM Join (with index on outer)

Access Path: index (FullScan)

Index: IND_ID

resc_io: 167.00  resc_cpu: 5134300

ix_sel: 1  ix_sel_with_filters: 1

Cost: 167.66  Resp: 167.66  Degree: 1

Outer table:

resc: 167.66  card 1000.00  bytes: 7  deg: 1  resp: 167.66

Inner table: TESTA  Alias: TESTA

resc: 7.17  card: 9622.00  bytes: 3  deg: 1  resp: 7.17

using dmeth: 2  #groups: 1

SORT resource      Sort statistics

Sort width:          70 Area size:      131072 Max Area size:    12582912

Degree:               1

Blocks to Sort:      17 Row size:           14 Total Rows:           9622

Initial runs:         2 Merge passes:        1 IO Cost / pass:         10

Total IO sort cost: 27      Total CPU sort cost: 13931876

Total Temp space used: 254000

SM join: Resc: 203.62  Resp: 203.62  [multiMatchCost=0.00]

HA Join

Outer table:

resc: 35.37  card 9622.00  bytes: 7  deg: 1  resp: 35.37

Inner table: TESTA  Alias: TESTA

resc: 7.17  card: 9622.00  bytes: 3  deg: 1  resp: 7.17

using dmeth: 2  #groups: 1

Cost per ptn: 0.81  #ptns: 1

hash_area: 124 (max=3072)   Hash join: Resc: 43.35  Resp: 43.35  [multiMatchCost=0.00]

HA Join (swap)

Outer table:

resc: 7.17  card 9622.00  bytes: 3  deg: 1  resp: 7.17

Inner table: TEST  Alias: TEST

resc: 5.04  card: 1000.00  bytes: 7  deg: 1  resp: 5.04

using dmeth: 2  #groups: 1

Cost per ptn: 0.70  #ptns: 1

hash_area: 124 (max=3072)   Hash join: Resc: 12.91  Resp: 12.91  [multiMatchCost=0.00]

HA cost: 12.91

resc: 12.91 resc_io: 12.00 resc_cpu: 7038524

resp: 12.91 resp_io: 12.00 resp_cpu: 7038524

First K Rows: copy A one plan, tab=TESTA

Best:: JoinMethod: Hash

Cost: 12.91  Degree: 1  Resp: 12.91  Card: 9622.00  Bytes: 17

***********************

Best so far: Table#: 0  cost: 5.0389  card: 1000.0000  bytes: 7000

Table#: 1  cost: 12.9051  card: 9622.0000  bytes: 163574

You can see that here 1000 is plugged in as the candidate row count for one of the join objects.

There is a well-known paper on MOS, "MIGRATING TO THE COST-BASED OPTIMIZER", which describes the differences between the RBO and the CBO in detail:
http://askmaclean.com/wp-content/uploads/resource/40178_rbo_rip.doc

Know More about Oracle PGA Memory

Question :

 

1. How can we know the available PGA and temporary tablespace before we issue a huge operation?
2. Can we estimate the PGA and temporary tablespace needed for a huge operation?
3. As we know, there is a limit for a user process. Can we set it to unlimited, and how?

 

We say:

 

PROPOSED SOLUTION(S)
====================
ANSWER
======
Note 223730.1 - Automatic PGA Memory Management in 9i and 10g

This document briefly describes how Oracle 9i manages PGA work areas, how to
tune them, and some common issues and common misunderstandings.

Automatic PGA Memory Management
-------------------------------

The Process Global Area, often known as the Program Global Area (PGA), resides in the
private memory of the server process. It contains global variables, data
structures and control information for a server process. An example of such information
is the runtime area of a cursor: each time a cursor is executed, a new runtime
area is created for that cursor in the PGA memory region of the server process
executing that cursor.

The performance of complex, long-running queries, typical in a DSS environment,
depends to a large extent on the memory available in the Program Global Area (PGA),
which is also called the work area.

The size of a work area can be controlled and tuned. Generally, bigger work areas
can significantly improve the performance of a particular operator at the cost of
higher memory consumption. Ideally, the size of a work area is big enough that it
can accommodate the input data and auxiliary memory structures allocated by its
associated SQL operator. This is known as the optimal size of a work area (e.g.
a memory sort). When the size of the work area is smaller than optimal
(e.g. a disk sort), the response time increases, because an extra pass is performed
over part of the input data. This is known as the one-pass size of the work area.
Under the one-pass threshold, when the size of a work area is far too small compared
to the input data size, multiple passes over the input data are needed. This could
dramatically increase the response time of the operator. This is known as the multi-pass
size of the work area.

In Oracle8i administrators sized the PGA by carefully adjusting a number of
initialization parameters, such as, SORT_AREA_SIZE, HASH_AREA_SIZE,
BITMAP_MERGE_AREA_SIZE, and CREATE_BITMAP_AREA_SIZE, etc.

Starting with Oracle9i, an option is provided to completely automate the
management of PGA memory.  Administrators merely need to specify the
maximum amount of PGA memory available to an instance using a newly
introduced initialization parameter PGA_AGGREGATE_TARGET.

The database server automatically distributes this memory among various
active queries in an intelligent manner so as to ensure maximum performance
benefits and the most efficient utilization of memory. Furthermore, Oracle9i
and newer releases can adapt themselves to changing workloads, thus utilizing
resources efficiently regardless of the load on the system. The amount of
the PGA memory available to an instance can be changed dynamically by
altering the value of the PGA_AGGREGATE_TARGET parameter making it possible
to add to and remove PGA memory from an active instance online. Since the
database engine itself is better equipped to determine SQL execution memory
requirements, database administrators should use this feature and not try
to tune the PGA manually. This should translate to better throughput for
a large number of users on the system as well as improved response time for
queries.

The automatic SQL execution memory management feature is enabled by setting the
parameter WORKAREA_SIZE_POLICY to AUTO and by specifying a size of
PGA_AGGREGATE_TARGET in the initialization file. These two parameters can also be
set dynamically using the ALTER SYSTEM command. In the absence of either of these
parameters, the database will revert to manual PGA management mode. In Oracle9i
Release 2, an advisory for PGA_AGGREGATE_TARGET was introduced. Just like in Buffer
Cache Advisory, the PGA Advisory will suggest the appropriate size for PGA memory
and thus make PGA tuning an even simpler task.
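
As a reference, a minimal sketch of enabling automatic PGA management with the two parameters just mentioned (the 500M value is purely illustrative):

ALTER SYSTEM SET pga_aggregate_target = 500M;
ALTER SYSTEM SET workarea_size_policy = AUTO;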

Version specific notes:

Until 9iR2, PGA_AGGREGATE_TARGET parameter controls the sizing of workareas for
all dedicated server connections, but it has no effect on shared servers (aka
MTS) connections and the *_AREA_SIZE parameters will take precedence in this
case.

In 10g, PGA_AGGREGATE_TARGET controls workareas allocated by both dedicated and
shared connections.

As of 11g, Automatic Memory Management (AMM) expands to managing both SGA and
PGA memory.   Under memory pressure for PGA memory, SGA memory will be
re-allocated for use by a process to accommodate workarea needs.  On the
flip-side, if PGA memory is under allocated, memory can be added to the
auto-tuned components in the SGA beyond the original SGA configuration.
NOTE:   With AMM, setting an explicit value for PGA_AGGREGATE_TARGET will
act as a minimum setting that AMM will not shrink below.  See note:443746.1
for more information.

How To Tune PGA_AGGREGATE_TARGET
———————————-

The first question we will have when we set this parameter is what is the best
value for it?

To determine the appropriate setting for the PGA_AGGREGATE_TARGET parameter, we
recommend following these steps:

1- Make a first estimate for PGA_AGGREGATE_TARGET based on the following rule:

– For OLTP systems

PGA_AGGREGATE_TARGET = (total physical memory * 80%) * 20%

– For DSS systems

PGA_AGGREGATE_TARGET = (total physical memory * 80%) * 50%

So for example, if we have an Oracle instance configured on a system with 16 GB of
physical memory, then the suggested PGA_AGGREGATE_TARGET value to start with is
(16 GB * 80%) * 20% ~= 2.5 GB for an OLTP system, and (16 GB * 80%) * 50% ~= 6.5 GB
for a DSS system.

In the above equations, we assume that 20% of the memory will be used by the OS,
and that in an OLTP system 20% of the remaining memory is used for
PGA_AGGREGATE_TARGET while the rest goes to the Oracle SGA and to non-Oracle
process memory. So make sure that you have enough memory for your SGA and also
for non-Oracle processes.
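
The first estimate can also be computed directly in SQL; a sketch assuming the 16 GB of physical memory used in the example above:

-- Initial PGA_AGGREGATE_TARGET estimate in MB (16 GB physical memory assumed)
SELECT ROUND(16 * 1024 * 0.8 * 0.2) AS oltp_estimate_mb,
       ROUND(16 * 1024 * 0.8 * 0.5) AS dss_estimate_mb
  FROM dual;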

2- A second step in tuning the PGA_AGGREGATE_TARGET is to monitor performance
using available PGA statistics and see if PGA_AGGREGATE_TARGET is under sized
or over sized. Several dynamic performance views are available for this
purpose:

– V$PGASTAT

This view provides instance-level statistics on the PGA memory usage and
the automatic PGA memory manager. For example:

SELECT * FROM V$PGASTAT;

NAME                                               VALUE
--------------------------------------------------------
aggregate PGA target parameter                     524288000 bytes
aggregate PGA auto target                          463435776 bytes
global memory bound                                25600 bytes
total PGA inuse                                    9353216 bytes
total PGA allocated                                73516032 bytes
maximum PGA allocated                              698371072 bytes
total PGA used for auto workareas                  0 bytes
maximum PGA used for auto workareas                560744448 bytes
total PGA used for manual workareas                0 bytes
maximum PGA used for manual workareas              0 bytes
over allocation count                              0 bytes
total bytes processed                              4.0072E+10 bytes
total extra bytes read/written                     3.1517E+10 bytes
cache hit percentage                               55.97 percent

Main statistics to look at

(a) aggregate PGA auto target: This gives the amount of PGA memory Oracle can
use for work areas running in automatic mode. This part of memory represents the
tunable part of PGA memory, i.e. memory allocated for memory-intensive SQL operators
like sorts, hash join, group-by, bitmap merge and bitmap index create. This memory
part can be shrunk or expanded depending on the system load. Other parts of
PGA memory are known as untunable, i.e. they require a size that cannot be negotiated
(e.g. context information for each session, for each open/active cursor,
PL/SQL or Java memory).

So, the aggregate PGA auto target should not be small compared to the value of
PGA_AGGREGATE_TARGET. You must ensure that enough PGA memory is left for work areas
running in automatic mode.

(b) total PGA used for auto workareas: This gives the actual tunable PGA memory used by
the system. The 'maximum PGA used for auto workareas' statistic gives the maximum reached
by the previous statistic since instance startup.

(c) total PGA inuse: This gives the total PGA memory currently in use. The detail of this
value can be found in the PGA_USED_MEM column of the v$process view.

Oracle92, 10g, 11g:
(d) over allocation count: Over-allocating PGA memory can happen if the value of
PGA_AGGREGATE_TARGET is too small to accommodate the untunable PGA memory part plus
the minimum memory required to execute the work area workload. When this happens,
Oracle cannot honor the initialization parameter PGA_AGGREGATE_TARGET, and extra
PGA memory needs to be allocated. The over allocation count is the number of times the
system was detected in this state since database startup. This count should ideally be
equal to zero.

Oracle92,10g, 11g:
(e) cache hit percentage: This metric is computed by Oracle to reflect the
performance of the PGA memory component. It is cumulative from instance
start-up. A value of 100% means that all work areas executed by the system
since instance start-up have used an optimal amount of PGA memory. This is,
of course, ideal but rarely happens except maybe for pure OLTP systems. In
reality, some work areas run one-pass or even multi-pass, depending on the
overall size of the PGA memory. When a work area cannot run optimally, one or
more extra passes is performed over the input data. This reduces the cache
hit percentage in proportion to the size of the input data and the number of
extra passes performed. This value is computed from the "total bytes processed"
and "total extra bytes read/written" statistics available in the same view, using
the following formula:

PGA Cache Hit Ratio = (total bytes processed * 100) / (total bytes processed + total extra bytes read/written)
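
The same percentage can be computed directly from V$PGASTAT; a sketch:

-- PGA cache hit percentage from the two underlying statistics
SELECT ROUND(100 * p.value / (p.value + e.value), 2) AS pga_cache_hit_pct
  FROM v$pgastat p, v$pgastat e
 WHERE p.name = 'total bytes processed'
   AND e.name = 'total extra bytes read/written';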

– V$SQL_WORKAREA_HISTOGRAM (Oracle92, 10g, 11g)

This view shows the number of work areas executed with optimal memory size, one-
pass memory size, and multi-pass memory size since instance start-up. Statistics
in this view are subdivided into buckets that are defined by the optimal memory
requirement of the work area. Each bucket is identified by a range of optimal
memory requirements specified by the values of the columns LOW_OPTIMAL_SIZE and
HIGH_OPTIMAL_SIZE.

Example :

The following query shows statistics for all nonempty buckets.

SELECT LOW_OPTIMAL_SIZE/1024 low_kb,(HIGH_OPTIMAL_SIZE+1)/1024 high_kb,
optimal_executions, onepass_executions, multipasses_executions
FROM   v$sql_workarea_histogram
WHERE  total_executions != 0;

The result of the query might look like the following:

LOW_KB HIGH_KB OPTIMAL_EXECUTIONS ONEPASS_EXECUTIONS MULTIPASSES_EXECUTIONS
------ ------- ------------------ ------------------ ----------------------
8      16      156255             0                   0
16     32      150                0                   0
32     64      89                 0                   0
64     128     13                 0                   0
128    256     60                 0                   0
256    512     8                  0                   0
512    1024    657                0                   0
1024   2048    551                16                  0
2048   4096    538                26                  0
4096   8192    243                28                  0
8192   16384   137                35                  0
16384  32768   45                 107                 0
32768  65536   0                  153                 0
65536  131072  0                  73                  0
131072 262144  0                  44                  0
262144 524288  0                  22                  0

The query result shows that, in the 1024 KB to 2048 KB bucket, 551 work areas used
an optimal amount of memory, while 16 ran in one-pass mode and none ran in
multi-pass mode. It also shows that all work areas under 1 MB were able to run in
optimal mode.

You can also use V$SQL_WORKAREA_HISTOGRAM to find the percentage of times work
areas were executed in optimal, one-pass, or multi-pass mode since start-up.

Example :

SELECT optimal_count, round(optimal_count*100/total, 2) optimal_perc,
onepass_count, round(onepass_count*100/total, 2) onepass_perc,
multipass_count, round(multipass_count*100/total, 2) multipass_perc
FROM
(SELECT decode(sum(total_executions), 0, 1, sum(total_executions)) total,
sum(OPTIMAL_EXECUTIONS) optimal_count,
sum(ONEPASS_EXECUTIONS) onepass_count,
sum(MULTIPASSES_EXECUTIONS) multipass_count
FROM   v$sql_workarea_histogram
WHERE  low_optimal_size > 64*1024);   -- for 64 K optimal size

– V$SQL_WORKAREA_ACTIVE

This view can be used to display the work areas that are active (or executing)
in the instance. Small active sorts (under 64 KB) are excluded from the view.
Use this view to precisely monitor the size of all active work areas and to
determine if these active work areas spill to a temporary segment.

Example :

SELECT to_number(decode(SID, 65535, NULL, SID)) sid,
operation_type OPERATION,trunc(EXPECTED_SIZE/1024) ESIZE,
trunc(ACTUAL_MEM_USED/1024) MEM, trunc(MAX_MEM_USED/1024) “MAX MEM”,
NUMBER_PASSES PASS, trunc(TEMPSEG_SIZE/1024) TSIZE
FROM V$SQL_WORKAREA_ACTIVE
ORDER BY 1,2;

SID OPERATION         ESIZE     MEM       MAX MEM    PASS TSIZE
--- ----------------- --------- --------- --------- ----- -------
8   GROUP BY (SORT)   315       280       904         0
8   HASH-JOIN         2995      2377      2430        1   20000
9   GROUP BY (SORT)   34300     22688     22688       0
11  HASH-JOIN         18044     54482     54482       0
12  HASH-JOIN         18044     11406     21406       1   120000

This output shows that session 12 (column SID) is running a hash-join having its
work area running in one-pass mode (PASS column). This work area is currently
using 11406 KB of memory (MEM column) and has used, in the past, up to 21406 KB
of PGA memory (MAX MEM column). It has also spilled to a temporary segment of
size 120000 KB. Finally, the column ESIZE indicates the maximum amount of memory
that the PGA memory manager expects this hash-join to use. This maximum is dynamically
computed by the PGA memory manager according to workload.

When a work area is deallocated—that is, when the execution of its associated SQL
operator is complete—the work area is automatically removed from the
V$SQL_WORKAREA_ACTIVE view.

– note: have some other queries we use to monitor SQL execution memory

3- The third and last step is tuning PGA_AGGREGATE_TARGET. In Oracle 9i
Release 2 we have two new views that help us with this task:

– V$PGA_TARGET_ADVICE
– V$PGA_TARGET_ADVICE_HISTOGRAM

By examining these two views, you will be able to determine how key PGA statistics
will be impacted if you change the value of PGA_AGGREGATE_TARGET.

To enable automatic generation of PGA advice performance views, make sure the
following parameters are set:

– PGA_AGGREGATE_TARGET
– STATISTICS_LEVEL. Set this to TYPICAL (the default) or ALL; setting this
parameter to BASIC turns off generation of PGA performance advice views.

The content of these PGA advice performance views is reset at instance start-up
or when PGA_AGGREGATE_TARGET is altered.   NOTE:  PGA_AGGREGATE_TARGET can change
automatically over time starting with 11g as part of the Automatic Memory
Management enhancements available at 11g.   See note:443746.1 for more
details.

V$PGA_TARGET_ADVICE view predicts how the statistics cache hit percentage and
over allocation count in V$PGASTAT will be impacted if you change the value of
the initialization parameter PGA_AGGREGATE_TARGET.

The following select statement can be used to find this information

SELECT round(PGA_TARGET_FOR_ESTIMATE/1024/1024) target_mb,
ESTD_PGA_CACHE_HIT_PERCENTAGE cache_hit_perc,
ESTD_OVERALLOC_COUNT
FROM   v$pga_target_advice;

The output of this query might look like the following:

TARGET_MB  CACHE_HIT_PERC ESTD_OVERALLOC_COUNT
---------- -------------- --------------------
63         23             367
125        24             30
250        30             3
375        39             0
500        58             0
600        59             0
700        59             0
800        60             0
900        60             0
1000       61             0
1500       67             0
2000       76             0
3000       83             0
4000       85             0

From the above results we should set the PGA_AGGREGATE_TARGET parameter to a
value where we avoid any over-allocation, so the lowest PGA_AGGREGATE_TARGET value
we can set is 375 MB (where ESTD_OVERALLOC_COUNT is 0).

After eliminating over-allocations, the goal is to maximize the PGA cache hit
percentage, based on your response-time requirement and memory constraints.

V$PGA_TARGET_ADVICE_HISTOGRAM view predicts how the statistics displayed
by the performance view V$SQL_WORKAREA_HISTOGRAM will be impacted if you
change the value of the initialization parameter PGA_AGGREGATE_TARGET. You can
use the dynamic view V$PGA_TARGET_ADVICE_HISTOGRAM to view detailed
information on the predicted number of optimal, one-pass and multi-pass work
area executions for the set of PGA_AGGREGATE_TARGET values you use for the
prediction.
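
For example, a sketch that aggregates the predicted execution counts per candidate target size (using the documented columns of V$PGA_TARGET_ADVICE_HISTOGRAM):

SELECT ROUND(pga_target_for_estimate/1024/1024) AS target_mb,
       SUM(estd_optimal_executions)             AS optimal,
       SUM(estd_onepass_executions)             AS onepass,
       SUM(estd_multipasses_executions)         AS multipass
  FROM v$pga_target_advice_histogram
 GROUP BY pga_target_for_estimate
 ORDER BY target_mb;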

Common issues
—————

1- When we set PGA_AGGREGATE_TARGET and WORKAREA_SIZE_POLICY to AUTO,
the *_AREA_SIZE parameters are automatically ignored and Oracle will
automatically use the computed values for these parameters.

2- In Oracle 8i and earlier, PGA memory was static: once a process started
and began allocating memory for its PGA, it would not release it
back to the OS unless the process exited or died. The OS, under heavy
memory pressure, could still decide to page out unused memory pages belonging
to a process's PGA to swap space.

In Oracle 9i, under automatic PGA memory management, Oracle can
deallocate memory from a process PGA that is no longer using it so that
another process can use it; it can also adjust the sizes of the different work areas
to accommodate the current workload and the amount of memory that can be used.

3- Using the automatic PGA memory management feature helps limit the resources
used by Oracle processes, and uses them more efficiently.

4- Using automatic PGA memory management also helps reduce the possibility
of ORA-4030 errors, unless we hit an OS limit, because work area sizes
are controlled and adjusted automatically based first on the PGA_AGGREGATE_TARGET
parameter and then on the current workload.

5- If the ESTD_OVERALLOC_COUNT column in the V$PGA_TARGET_ADVICE view is nonzero,
it indicates that PGA_AGGREGATE_TARGET is too small to even meet the minimum
PGA memory needs. If PGA_AGGREGATE_TARGET is set within the over-allocation
zone, the memory manager will over-allocate memory and the actual PGA memory
consumed will be more than the limit you set. It is therefore meaningless to
set a value of PGA_AGGREGATE_TARGET in that zone.

6- Some customers reported that SQL*Loader in Oracle 9i is slower than SQL*Loader
in Oracle 8i; an example of this is bug: , which was closed as not a
bug. Using PGA_AGGREGATE_TARGET alleviated the problem.

7- PGA_AGGREGATE_TARGET  is not supported on VMS, for more information please refer
to note: “Oracle9i Release Notes Release 1 (9.0.1) for Alpha OpenVMS”.
ORA-3113 is returned on instance startup when init.ora PGA_AGGREGATE_TARGET is set.

8- Setting PGA_AGGREGATE_TARGET in 9.0.1 on HP-UX 11.0 may panic the OS. for
more information please refer to note: “ALERT HP-UX  Patch Levels
Advised” and Bug:2122307.

Known Bugs
———–

– bug:
Details: Attempting to set pga_aggregate_target over 4000Gb should error with
ORA-4032 but no error is signalled.
Fixed-Releases: 9.2.0.1.0

– Bug:2122307 HP System crash when setting PGA_AGGREGATE_TARGET to 10M or more
in Oracle 9.0.1.

This is basically an OS problem that causes the crash. The system call pattern
that automatic PGA management uses causes HP-UX to try to extend the fixed region,
which leads to memory allocation failures.

To resolve the bug both this patch and PHKL_25188 (or later) must be installed.

 

As the DBA you need to get confirmation from your operating system administrator that the amount of memory reported as being in use by a process includes or does not include shared memory. If shared memory is included in the value displayed by the operating system utility, then the shared pool size must be deducted from that value to know how much private memory the process is actually using.

See note 174555.1 “UNIX  Determining the Size of an Oracle Process”.
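
From inside the database, the private (PGA) memory of each server process can be read from v$process; a sketch joining it to v$session:

-- Per-session PGA memory, largest consumers first
SELECT s.sid, s.username,
       ROUND(p.pga_used_mem /1024/1024, 1) AS pga_used_mb,
       ROUND(p.pga_alloc_mem/1024/1024, 1) AS pga_alloc_mb,
       ROUND(p.pga_max_mem  /1024/1024, 1) AS pga_max_mb
  FROM v$session s, v$process p
 WHERE s.paddr = p.addr
 ORDER BY p.pga_alloc_mem DESC;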

If an RDBMS user process is using more private memory than expected, then the DBA has three options:

– Do nothing
– Monitor the RDBMS user session to find out what SQL statements are being performed or were being performed by that RDBMS session. Using the SQL*Trace functionality of the database would normally be done if information cannot be obtained directly from the end user about what they were doing when memory usage was higher than expected, or about what they are doing right now.
– Kill that RDBMS user session.
PGA_AGGREGATE_TARGET does not set a hard limit on PGA usage. It is only a target value used to dynamically size the process work areas. It also does not affect other areas of the PGA that are allowed to grow beyond this limit.

There are certain areas of the PGA that cannot be controlled by initialization parameters. Such areas include PL/SQL memory collections such as PL/SQL tables and varrays.

Depending on the programming code and the amount of data being handled, these areas can grow very large (up to a 20 GB internal limit on 10g) and can consume large amounts of memory. This memory growth can be controlled by good programming practices. As an example, use the LIMIT clause with BULK COLLECT, as sketched below.
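
A minimal PL/SQL sketch of that LIMIT clause (the cursor and the 500-row batch size are only an illustration):

DECLARE
  CURSOR c IS SELECT object_name FROM dba_objects;  -- stand-in for a large result set
  TYPE t_names IS TABLE OF dba_objects.object_name%TYPE;
  l_names t_names;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_names LIMIT 500;    -- bounds the collection, and so the PGA it uses
    EXIT WHEN l_names.COUNT = 0;
    -- process l_names here
  END LOOP;
  CLOSE c;
END;
/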

Additionally, programming mistakes can also lead to excessive memory usage.

You can take steps to control the size of a process. However, from within the database framework you cannot place a hard limit on the size of a process by setting any initialization parameters or database configuration.

You can limit the size of a process from the OS side by setting kernel limits or user shell limits, but this leads to ORA-4030 errors and will cause transaction rollback.

As noted in bug 7279150, “… this is not a hard limit and that we will exceed it when it is undersized and the workload increases rapidly, such as when they start the workload for their testing or when they spawn a new set of sessions from their application servers.”

 

During the execution of SQL statements, server processes consume memory for various operations. Part of this memory is assigned to work areas for operations such as sorts and hash joins. Ideally, a work area should be large enough to support the SQL operation being performed. This size is known as the optimal size of a work area. When a work area is smaller than its optimal size, an extra pass is performed over part of the input data. This is known as the one-pass size of the work area. When the size of a work area is far too small compared to the input data size, multiple passes over the input data are needed. This is known as the multi-pass size of the work area. Operations performed in one-pass and multi-pass work area sizes increase response times, sometimes dramatically in the case of the latter.

You can set the size of the various work areas with individual initialization parameters but the same amount of memory is allocated to each process. So if your parameters are set to avoid any multi-pass operations, processes will be assigned this much memory even for operations that could run in a much smaller optimal size.

With Oracle9i, you can use the PGA_AGGREGATE_TARGET initialization parameter to assign memory that is shared by the server processes active in the instance and is automatically reallocated to the processes that currently need it. In the initial release of Oracle9i, additional information was added to various dynamic performance tables to help you monitor whether you had set a sufficiently high value for the PGA target. However, these statistics do not provide a lot of detail nor give you any guidance for setting a more appropriate value when it is under- or over-sized.

Additional statistics are available in Oracle9i Database Release 2 to help you monitor and tune the PGA_AGGREGATE_TARGET parameter. Some of these statistics are provided through new views and others through modified views. In this section of the lesson, you can find more details about managing your PGA memory with the views listed above.

 

The new statistics to help you monitor the performance of the PGA memory component for a particular value of PGA_AGGREGATE_TARGET are based on two concepts: work areas groups and a computed PGA cache hit percentage value.

By grouping work areas based on their optimal memory requirement, statistics can be collected for each group based on the number of optimal, one-pass, and multi-pass executions for each of these work area groups. With this finer granularity of work area statistics than previously available, you can more accurately predict how much memory is required across the instance to maximize the number of optimal executions.

The PGA cache hit percentage summarizes statistics on work area executions into a single measure of PGA performance for a given value of PGA_AGGREGATE_TARGET. The PGA cache hit percentage is derived from the number of work areas that run optimal, the number of passes for the non-optimal work areas, and the sizes of the work areas.

The new view, v$sql_workarea_histogram, enables you to study the nature of the work area workload over a chosen time period. The work areas are split into 33 groups based on their optimal memory requirements. Each group is defined by the lower bound on its optimal memory requirement, displayed in the low_optimal_size column, and its upper bound, displayed in the high_optimal_size column. For each group, the view accumulates the number of executions using optimal, one-pass, and multi-pass memory since instance startup. These are displayed in the optimal_executions, onepass_executions, and multipasses_executions columns respectively. The total_executions column contains the sum of the other three execution counts.

To determine if you have set a good value for the PGA target size, query the v$sql_workarea_histogram view using a statement such as the following, which converts the low_optimal_size and high_optimal_size column values to kilobytes:

SQL> SELECT low_optimal_size/1024                          AS low_kb,
2     (high_optimal_size+1)/1024                         AS high_kb,
3     ROUND(100*optimal_executions/total_executions)     AS optimal,
4     ROUND(100*onepass_executions/total_executions)     AS onepass,
5     ROUND(100*multipasses_executions/total_executions) AS multipass
6  FROM v$sql_workarea_histogram
7  WHERE total_executions != 0
8  ORDER by low_kb;

Of course, as with any monitoring effort using dynamic views, you should issue the query at the beginning and at the end of a typical work period and use the differences between the two results to determine what activity occurred during that period.

Although it would be ideal for all work areas to execute in the optimal size, this goal is typically achieved by over-allocating memory to the PGA. If you graph the results of your query as a histogram, you can quickly identify the location in the graph where work groups begin to use one-pass, or even multi-pass, sizes. In the example shown the former occurs with a work area size of about 200KB and the latter about 100MB. Based on your knowledge of the type of work being done and the current level of performance, you can decide if this is acceptable or if the PGA_AGGREGATE_TARGET parameter value should be changed.

 

If a work area runs in one-pass or multi-pass mode, extra bytes will be processed since one or more extra pass over the input data will be performed. A new statistic, PGA cache hit percentage, condenses in one global numerical metric the relative performance of the PGA memory component. The PGA cache hit percentage is the percentage formed from the ratio of the number of bytes that need to be processed when all work areas run in optimal mode to the total bytes really processed. The higher the PGA cache hit percentage, the better the performance of PGA memory and hence of the system.

The PGA cache hit percentage statistic is stored in the v$pgastat view and can be retrieved from the row with the value cache hit percentage in the name column. A sample query is shown above.

The v$sql_workarea and v$sql_workarea_active views have been modified in Oracle9i Database Release 2 to reflect information on the temporary segment an operator (work area) uses.

The v$sort_usage view is renamed to v$tempseg_usage to reflect that information about all temporary segments, not only sort segments, is recorded in this view. In the current release, the old view name is being maintained for backward compatibility.

 

If you monitor the PGA space consumption with the various views provided for that purpose (v$sql_workarea_histogram, v$pgastat, v$sql_workarea, and v$sql_workarea_active), you may decide that you need to reset your PGA_AGGREGATE_TARGET initialization parameter value to make better use of your available memory. In some cases, you may want to reduce the memory allocated, in other cases; you may realize that you should increase the value. To help you determine by what factor you should change the parameter value, you can use two new views, provided in Oracle9i Database Release 2, that provide PGA sizing advice. These views, v$pga_target_advice and v$pga_target_advice_histogram, accumulate advice statistics to help you tune the PGA_AGGREGATE_TARGET value. The views are only populated if PGA_AGGREGATE_TARGET is set to a non-zero value that enables auto management of the PGA. Also the initialization parameter STATISTICS_LEVEL value must be set to Typical or All. Further, the view contents are refreshed when you shut down the instance or when you change the value of PGA_AGGREGATE_TARGET dynamically.

Rows in these two views correspond to setting the PGA_AGGREGATE_TARGET value to 0.125, 0.25, 0.5, 0.75, 1, 1.2, 1.4, 1.6, 1.8, 2, 3, 4, 6, and 8 times its current value. However, if these values are either less than 10MB or more than 256GB, they will not be included in the output.

To begin monitoring and tuning the PGA target size, you should issue a query against the v$pga_target_advice view similar to the following:

SQL> SELECT
2    ROUND(pga_target_for_estimate/1024/1024) AS target_mb,
3    estd_pga_cache_hit_percentage    AS cache_hit_percent,
4    estd_overalloc_count
5  FROM v$pga_target_advice
6  ORDER BY target_mb;

For output that is easier to read and use, this query converts the value of the pga_target_for_estimate column from bytes to megabytes. As mentioned earlier, you should issue this query at the beginning and at the end of a typical work period and use the differences between the two results to obtain advice related to the work performed during that period.

The output from the query on v$pga_target_advice contains up to 14 rows that contain estimated statistics based on the multiples of the current PGA_AGGREGATE_TARGET value.

Assuming that your query produced the output shown above with the PGA_AGGREGATE_TARGET value set to 500MB, follow these steps to evaluate the results:

Step 1: Identify the first row with a value of zero in the estd_overallocation_count column. The rows above this one are for PGA_AGGREGATE_TARGET values (shown in the target_mb column) that are too small for the minimum PGA memory needs. In this case, this is the fifth row, which contains data for the current PGA_AGGREGATE_TARGET value, 500MB. Had the target_mb column value in the first row with a zero estd_overallocation_count been larger than the current setting, you should increase the PGA_AGGREGATE_TARGET parameter to at least this size.

Step 2: Examine the rows with PGA values larger than the minimum required to find the first pair of adjacent rows with values in the cache_hit_percent column that are within a few percentage points of each other. These rows indicate where, were you to graph the values, you would see an inflection point (sometimes referred to as a “knee”) in the curve. The optimal value for the PGA_AGGREGATE_TARGET parameter is at this inflection point, represented by the target_mb column value in the first of these two rows. Based on the above results, you should set the PGA_AGGREGATE_TARGET to 3000MB if you have sufficient memory. If you have even more memory available, you should assign it to some other use, such as one of the SGA components, rather than increasing the PGA target size.

 

How to Install Oracle OMS (Grid Control) on Windows Vista/2008/7

Because of an upcoming exam I wanted to install Oracle OMS on my local Windows machine for practice. The Grid Control release used in the OCM exam is 10.2.0.1, which only supports Windows with version number 5; on version 6 or 6.1 (Vista/2008/7) the installation runs into problems that cannot be worked around, the most obvious being that the database bundled with 10.2.0.1 is 10.1.0.4, which cannot create a database on any Windows release with version number 6 or higher. The Oracle OCM exam asks you to use DBCA to build a clean custom database containing only the necessary views and procedures (catalog.sql and catproc.sql run) plus the Label Security component (others such as Spatial, OLAP and the Oracle JVM are not needed), as shown below:

[Screenshot: labelsecurity]

Oracle Grid Control 10.2.0.5 has now been released, and 10.2.0.4 already supports the latest Windows operating systems, but only as a patch install; there is no full install that can be installed directly, and installing 10.2.0.1 directly fails during the OMS configuration step. We therefore need a different installation method, as described in MOS document ID 763072.1:

Installing On Windows 2008/Vista

The following sections discuss installing Enterprise Manager 10g Grid Control on Windows 2008/Vista.

Installing Enterprise Manager 10g Grid Control Using A New Database On Windows 2008/Vista

Enterprise Manager 10g Grid Control will not support the New Database install option on Windows 2008/Vista as the Database 10.1.0.4 (which is the default Database for Grid Control) is not supported for Windows 2008/Vista.

Installing Enterprise Manager 10g Grid Control Using an Existing Database On Windows 2008/Vista

To install Enterprise Manager 10g Grid Control using an existing database on Windows 2008/Vista, follow these steps:

  1. Download the fixed OUI 10204 from ARU Microsoft Windows (32-bit) (Bug 6640752) and extract it to a designated location, for example, C:OUI_Shiphome
  2. Copy Enterprise Manager 10g Grid Control 10.2.0.2.0 base shiphome (Disk1) into a location, for example, C:EMGC_102020shiphome
  3. Install the database that supports Vista/2008, such as 10.2.0.3.0/10.2.0.4. Make the following changes in the response file:

    C:EMGC_Shiphome102020shiphomeDisk1installresponseemnoseed.rsp

    1. Give the absolute path of products.xml for the FROM_LOCATION parameter.

      FROM_LOCATION="C:EMGC_102020shiphomeDisk1/oms/Disk1/stage/products.xml"

    2. Give the OMS Oracle Home location. For example:

      ORACLE_HOME=c:emnoseedoms10g

    3. Give the Oracle Home name. For example:

      ORACLE_HOME_NAME=oms10g12

    4. Give the Agent Oracle Home location. For example:

      s_agentHome=c:emnoseedagent10g

    5. Give the Agent Home Name. For example:

      s_agentHomeName=agent10g12

    Note:

    Ensure that the Oracle Home location and name are not identical to any previous Homes.

  4. Invoke the setup.exe for Install in the following way:

    C:OUI_ShiphomecdDisk1installsetup.exe -ignoreDiskLabel -responseFile C:EMGC_102020shiphomeDisk1installresponseemnoseed.rsp

  5. If the Agent is not coming up, and if the value of agentTZRegion in emd.properties is GMT then do the following:
    1. Reset TimeZone on Agent box to GMT.
    2. Execute the following command in <Oracle home of Agent>BIN>emctl resetTZ agent
  6. Patch OMS and Agent to version 10.2.0.5.

Installing An Additional Management Service On Windows 2008/Vista

Follow the below instructions to install an additional Management Service on Windows 2008/Vista.

  1. Download the fixed OUI 10204 from ARU Microsoft Windows (32-bit) (Bug 6640752) and extract it to a designated location, for example, C:OUI_Shiphome
  2. Copy Enterprise Manager 10g Grid Control 10.2.0.2.0 base shiphome (Disk1) into a location, for example, C:EMGC_102020shiphome
  3. Make the following changes in the response file:

    C:EMGC_102020shiphomeDisk1installresponseoms.rsp

    1. Supply the absolute path of products.xml for FROM_LOCATION parameter.

      FROM_LOCATION="C:EMGC_102020shiphome/Disk1/oms/Disk1/stage/products.xml"

    2. Supply the OMS Oracle Home location. For example: ORACLE_HOME=C:OH102020oms10g
    3. Specify the Oracle Home name. For example, ORACLE_HOME_NAME=oms10g20
    4. Supply the Agent Oracle Home location. For example, s_agentHome=C:OH102020agent10g
    5. Supply the Agent Home Name. For example, s_agentHomeName=agent10g20

    Note:

    Ensure that the Oracle Home location and name not be the same as any previous homes.

  4. Invoke the setup.exe for Install in the following way:

    C:OUI_ShiphomecdDisk1installsetup.exe -ignoreDiskLabel -responseFile

    C:EMGC_102020shiphomeDisk1installresponseoms.rsp

  5. If the Agent does not come up, and if the value of agentTZRegion in emd.properties is GMT, then do the following:
    1. Reset TimeZone on the Agent installation box to GMT
    2. Execute the following command in <Oracle home of Agent>BIN>emctl resetTZ agent
  6. Patch OMS and Agent to version 10.2.0.5.

Installing An Additional Management Agent On Windows 2008/Vista

Follow the below instructions to install an additional Management Agent on Windows 2008/Vista.

  1. Download the fixed OUI 10204 from ARU Microsoft Windows (32-bit) (Bug 6640752) and extract it to a designated location, for example, C:OUI_Shiphome
  2. Copy Enterprise Manager 10g Grid Control 10.2.0.2.0 base shiphome (Disk1) into a location, for example, C:EMGC_102020shiphome
  3. Make the following changes in the response file:

    C:EMGC_102020shiphomeDisk1installresponseagent.rsp

    1. Supply the absolute path of products.xml for FROM_LOCATION parameter.

      FROM_LOCATION="C:EMGC_102020shiphomeDisk1/oms/Disk1/stage/products.xml"

    2. Supply the OMS Oracle Home location. For example, ORACLE_HOME=c:agtagent10g
    3. Supply the Oracle Home name. For example, ORACLE_HOME_NAME=agent10g13

    Note:

    Ensure that Oracle Home location and name are not the same as any previous homes.

  4. Invoke the setup.exe for Install in the following way:

    C:OUI_ShiphomecdDisk1installsetup.exe -ignoreDiskLabel -responseFile

    C:EMGC_102020shiphomeDisk1installresponseagent.rsp

  5. If the Agent does not come up, and the value of agentTZRegion in emd.properties is GMT, do the following:
    1. Reset the TimeZone on the Agent installation box to GMT.
    2. Execute the following command: <Oracle home of Agent>\BIN>emctl resetTZ agent

  6. Patch the Agent to version 10.2.0.5.

KGL Simulator, Shared Pool Simulator And Buffer Cache Simulator Explained

Anyone who started learning Oracle from 9i onwards will be used to the series of "sim" entries in the v$sgastat view, chiefly the various kglsim items plus the sim cache, segment and trace entries. For example:

SQL> select * from v$sgastat where lower(name) like '%sim%' order by name;
shared pool  kglsim count of pinned he        1920
shared pool  kglsim free heap list              96
shared pool  kglsim free obj list               96
shared pool  kglsim hash table                8208
shared pool  kglsim hash table bkts        4194304
shared pool  kglsim heap                    782208
shared pool  kglsim main lru count           76800
shared pool  kglsim main lru size           151040
shared pool  kglsim object batch           1163520
shared pool  kglsim pin list arr               288
shared pool  kglsim recovery area             1920
shared pool  kglsim sga                      30552
shared pool  kglsim size of pinned mem        3776
shared pool  ksim client list                  168
shared pool  log_simultaneous_copies           800
shared pool  sim cache nbufs                   640
shared pool  sim cache sizes                   640
shared pool  sim kghx free lists                 8
shared pool  sim lru segments                 1280
shared pool  sim segment hits                 1280
shared pool  sim segment num bufs              640
shared pool  sim state object                   40
shared pool  sim trace buf                    5144
shared pool  sim trace buf context             200
shared pool  sim_knlasg                       1280
shared pool  simulator hash buckets         131328
shared pool  simulator hash latch             5120
shared pool  simulator latch/bucket st        2816
28 rows selected.
You will sometimes also find sim-related latches:
SQL> select name from v$latch where name like '%sim%';
sim partition latch
simulator hash latch
simulator lru latch

Clearly the Oracle instance contains quite a few of these simulators. What are they for?

These simulators let the instance work out how much time and work it could save if it had more memory (for the buffer cache, shared_pool_size and so on). The information is rolled up into several familiar views such as V$SHARED_POOL_ADVICE, V$MTTR_TARGET_ADVICE and V$DB_CACHE_ADVICE. The simulator feature was introduced in 9i, which is also when these views first appeared.
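To see what the buffer cache simulator has accumulated, you can query the advisory view directly. A minimal sketch against V$DB_CACHE_ADVICE (standard columns; the advice is only gathered while db_cache_advice is not set to OFF):

-- estimated physical reads at different buffer cache sizes,
-- as predicted by the buffer cache simulator
select name, block_size,
       size_for_estimate,
       size_factor,
       estd_physical_read_factor,
       estd_physical_reads
  from v$db_cache_advice
 where name = 'DEFAULT'
 order by size_for_estimate;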

The MMAN process also uses this information as the basis for dynamic SGA resizing (when sga_target or memory_target is enabled).
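The resize operations MMAN actually carries out are recorded and can be reviewed afterwards; a small sketch against the 10g view V$SGA_RESIZE_OPS (on 11g with memory_target, V$MEMORY_RESIZE_OPS covers the SGA and PGA together):

-- history of grow/shrink operations performed on SGA components
select component, oper_type, oper_mode,
       initial_size, target_size, final_size,
       status, start_time
  from v$sga_resize_ops
 order by start_time;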

The KGL simulator tracks shared pool memory allocations and acts as the simulation engine.

KGL is the Kernel Generic Library cache manager; it manages the objects in the library cache, including cursors, PL/SQL stored procedures, table definitions and so on.

The KGL simulator estimates the benefit of a cache larger than the current one. It does this by saving the hash codes of objects flushed out of the library cache into its own hash list, creating the illusion that those objects are still cached, which allows it to gather the relevant statistics.

When an object is loaded into the library cache, Oracle checks the KGL simulator's hash list; a hit means that an object on the historical list is still useful to the instance, in other words a larger memory allocation would pay off.
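The result of this bookkeeping is what ends up in V$SHARED_POOL_ADVICE; a minimal sketch (the *_FACTOR columns are relative to the current shared pool size):

-- estimated library cache benefit at different shared pool sizes
select shared_pool_size_for_estimate,
       shared_pool_size_factor,
       estd_lc_size,
       estd_lc_time_saved,
       estd_lc_time_saved_factor
  from v$shared_pool_advice
 order by shared_pool_size_for_estimate;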

Internals Of The New 11g Memory Management Feature

Automatic Memory Management (AMM) in 11g makes memory configuration much simpler for the DBA. AMM manages the SGA and the PGA together: you only need to set the memory_target parameter to cap the total memory Oracle will use, and Oracle distributes it automatically.
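For example, the target can be adjusted online and the combined advisory queried afterwards; a minimal sketch (the value shown is only an example and must not exceed memory_max_target):

-- example value only; must be <= memory_max_target
alter system set memory_target = 1600M scope = both;

-- combined SGA+PGA advisory populated under AMM
select memory_size, memory_size_factor,
       estd_db_time, estd_db_time_factor
  from v$memory_target_advice
 order by memory_size;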

You may well wonder how Oracle shifts memory between the shared SGA and the private PGA on a Unix platform. It means Oracle has to release parts of the SGA regularly so that the PGA can use them, and the traditional System V shared memory (shm) interface does not offer that kind of flexibility. Let's see how Oracle does it.

First, get the shared memory id of the 11g instance we are interested in:

[oracle@rh2 ~]$ sysresv                # sysresv requires a correctly set LD_LIBRARY_PATH
IPC Resources for ORACLE_SID "T11" :
Shared Memory:
ID              KEY
65537           0x95c84bb8
Semaphores:
ID              KEY
327681          0xdf521034
Oracle Instance alive for sid "T11"

Now try to find the corresponding System V shared memory segment:

[oracle@rh2 ~]$ ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x95c84bb8 65537      oracle    660        4096       0

A matching shared memory segment does exist, but it is only 4096 bytes. If Oracle no longer keeps the SGA in the shared segment, where has it gone?

Next, check the memory mappings of one of the instance's processes.

[oracle@rh2 ~]$  pmap `pgrep -f lgwr`|less
14889:   ora_lgwr_T11
0000000000400000 155016K r-x–  /usr/oracle/product/11g/db_1/bin/oracle
0000000009c62000  12404K rw—  /usr/oracle/product/11g/db_1/bin/oracle
000000000a87f000    732K rwx–    [ anon ]
0000000060000000      4K r–s-  /dev/shm/ora_T11_65537_0
0000000060001000  16380K rw-s-  /dev/shm/ora_T11_65537_0
0000000061000000  16384K rw-s-  /dev/shm/ora_T11_65537_1
0000000062000000  16384K rw-s-  /dev/shm/ora_T11_65537_2
0000000063000000  16384K rw-s-  /dev/shm/ora_T11_65537_3
0000000064000000  16384K rw-s-  /dev/shm/ora_T11_65537_4
0000000065000000  16384K rw-s-  /dev/shm/ora_T11_65537_5
0000000066000000  16384K rw-s-  /dev/shm/ora_T11_65537_6
0000000067000000  16384K rw-s-  /dev/shm/ora_T11_65537_7
0000000068000000  16384K rw-s-  /dev/shm/ora_T11_65537_8
0000000069000000  16384K rw-s-  /dev/shm/ora_T11_65537_9
000000006a000000  16384K rw-s-  /dev/shm/ora_T11_65537_10
000000006b000000  16384K rw-s-  /dev/shm/ora_T11_65537_11
000000006c000000  16384K rw-s-  /dev/shm/ora_T11_65537_12
000000006d000000  16384K rw-s-  /dev/shm/ora_T11_65537_13
000000006e000000  16384K rw-s-  /dev/shm/ora_T11_65537_14
000000006f000000  16384K rw-s-  /dev/shm/ora_T11_65537_15
0000000070000000  16384K rw-s-  /dev/shm/ora_T11_65537_16
0000000071000000  16384K rw-s-  /dev/shm/ora_T11_65537_17
0000000072000000  16384K rw-s-  /dev/shm/ora_T11_65537_18
0000000073000000  16384K rw-s-  /dev/shm/ora_T11_65537_19
0000000074000000  16384K rw-s-  /dev/shm/ora_T11_65537_20
0000000075000000  16384K rw-s-  /dev/shm/ora_T11_65537_21
0000000076000000  16384K rw-s-  /dev/shm/ora_T11_65537_22
0000000077000000  16384K rw-s-  /dev/shm/ora_T11_65537_23
0000000078000000  16384K rw-s-  /dev/shm/ora_T11_65537_24
0000000079000000  16384K rw-s-  /dev/shm/ora_T11_65537_25
000000007a000000  16384K rw-s-  /dev/shm/ora_T11_65537_26
000000007b000000  16384K rw-s-  /dev/shm/ora_T11_65537_27
000000007c000000  16384K rw-s-  /dev/shm/ora_T11_65537_28
000000007d000000  16384K rw-s-  /dev/shm/ora_T11_65537_29
000000007e000000  16384K rw-s-  /dev/shm/ora_T11_65537_30
000000007f000000  16384K rw-s-  /dev/shm/ora_T11_65537_31
0000000080000000  16384K rw-s-  /dev/shm/ora_T11_65537_32
0000000081000000  16384K rw-s-  /dev/shm/ora_T11_65537_33
0000000082000000  16384K rw-s-  /dev/shm/ora_T11_65537_34
0000000083000000  16384K rw-s-  /dev/shm/ora_T11_65537_35
0000000084000000  16384K rw-s-  /dev/shm/ora_T11_65537_36
0000000085000000  16384K rw-s-  /dev/shm/ora_T11_65537_37
0000000086000000  16384K rw-s-  /dev/shm/ora_T11_65537_38
0000000087000000  16384K rw-s-  /dev/shm/ora_T11_65537_39
0000000088000000  16384K rw-s-  /dev/shm/ora_T11_65537_40
0000000089000000  16384K rw-s-  /dev/shm/ora_T11_65537_41
000000008a000000  16384K rw-s-  /dev/shm/ora_T11_65537_42

............

0000003e79109000      4K rw—  /lib64/tls/librt-2.3.4.so
0000003e7910a000     64K rw—    [ anon ]
0000003e79600000     84K r-x–  /lib64/libnsl-2.3.4.so
0000003e79615000   1020K —–  /lib64/libnsl-2.3.4.so
0000003e79714000      4K r—-  /lib64/libnsl-2.3.4.so
0000003e79715000      4K rw—  /lib64/libnsl-2.3.4.so
0000003e79716000      8K rw—    [ anon ]
0000007fbfff3000     52K rwx–    [ stack ]
ffffffffff600000      4K r-x–    [ anon ]
total          2497724K

The pmap output shows the process's shared memory mappings: many 16MB "files" are mapped into the server process's address space. This is the POSIX-style shared memory model on Linux, where each shared memory segment is backed by a "file" under /dev/shm.

By splitting the SGA into many such granules, Oracle can easily hand parts of the SGA back to the OS so that server processes can use the memory. (When memory_max_target is greater than 1024MB the granule size is 16MB, otherwise 4MB.)
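The granule size in use can be confirmed from inside the instance; a minimal sketch:

-- current SGA granule size, in bytes and MB
select name, bytes, bytes/1024/1024 as mb
  from v$sgainfo
 where name = 'Granule Size';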

Next, let's test how Oracle releases part of the SGA.

Compare /dev/shm before and after instance startup.

Before startup:

[oracle@rh2 ~]$ ls -l /dev/shm
total 0

After startup:
[oracle@rh2 ~]$ ls -l /dev/shm
total 1373704
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_0
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_1
-rw-r-----  1 oracle oinstall        0 Sep 27 18:59 ora_T11_327680_10
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_100
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_101
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_102
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_103
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_104
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_105
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_106
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_107
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_108
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_109
-rw-r-----  1 oracle oinstall        0 Sep 27 18:59 ora_T11_327680_11
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_110
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_111
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_112
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_113
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_114
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_115
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_116
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_117
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_118
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_119
-rw-r-----  1 oracle oinstall        0 Sep 27 18:59 ora_T11_327680_12

Some of the 16MB shared-memory files that appear after startup have a size of 0; these are the granules picked to be "destroyed" when memory needs to be exchanged. pmap still shows the mappings, but Oracle has in fact already released that memory.

Now let's enlarge the PGA and watch the exchange happen.

SQL> alter system set pga_aggregate_target=1900M ;

System altered.

[oracle@rh2 ~]$ ls -l /dev/shm
total 289984
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_0
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_1
-rw-r-----  1 oracle oinstall        0 Sep 27 18:59 ora_T11_327680_10
-rw-r-----  1 oracle oinstall        0 Sep 27 19:09 ora_T11_327680_100
-rw-r-----  1 oracle oinstall        0 Sep 27 19:09 ora_T11_327680_101
-rw-r-----  1 oracle oinstall        0 Sep 27 19:09 ora_T11_327680_102
-rw-r-----  1 oracle oinstall        0 Sep 27 19:09 ora_T11_327680_103
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_104
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_105
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_106
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_107
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_108
-rw-r-----  1 oracle oinstall 16777216 Sep 27 18:59 ora_T11_327680_109
-rw-r-----  1 oracle oinstall        0 Sep 27 18:59 ora_T11_327680_11
-rw-r-----  1 oracle oinstall        0 Sep 27 19:09 ora_T11_327680_110
-rw-r-----  1 oracle oinstall        0 Sep 27 19:09 ora_T11_327680_111
-rw-r-----  1 oracle oinstall        0 Sep 27 19:09 ora_T11_327680_112
-rw-r-----  1 oracle oinstall        0 Sep 27 19:09 ora_T11_327680_113
-rw-r-----  1 oracle oinstall        0 Sep 27 19:09 ora_T11_327680_114
-rw-r-----  1 oracle oinstall        0 Sep 27 19:09 ora_T11_327680_115
-rw-r-----  1 oracle oinstall        0 Sep 27 19:09 ora_T11_327680_116
-rw-r-----  1 oracle oinstall        0 Sep 27 19:09 ora_T11_327680_117
-rw-r-----  1 oracle oinstall        0 Sep 27 19:09 ora_T11_327680_118
-rw-r-----  1 oracle oinstall        0 Sep 27 19:09 ora_T11_327680_119
-rw-r-----  1 oracle oinstall        0 Sep 27 18:59 ora_T11_327680_12
-rw-r-----  1 oracle oinstall        0 Sep 27 19:09 ora_T11_327680_120
-rw-r-----  1 oracle oinstall        0 Sep 27 19:09 ora_T11_327680_121

A large number of zero-sized "files" now appear: the exchange we expected has taken place.
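The same exchange is visible from inside the instance through the 11g AMM views; a small sketch (the component names follow the documented values, but what is listed varies with your configuration):

-- current split between SGA and PGA under memory_target
select component, current_size/1024/1024 as current_mb
  from v$memory_dynamic_components
 where component in ('SGA Target', 'PGA Target');

-- grow/shrink operations performed to honour the new pga_aggregate_target
select component, oper_type, initial_size, final_size, status, start_time
  from v$memory_resize_ops
 order by start_time;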

So in 11g Oracle uses a new shared memory implementation which, unlike the old single monolithic shared segment, is far more flexible.

Garbled Subject Lines When Sending Mail With PHP's mail() Function

When a PHP program sends mail with the mail() function, a Chinese subject line comes out garbled.
Solution:
First encode the subject with base64_encode(), which encodes data using MIME base64.
Prefix the subject string with the encoding declaration, e.g. =?UTF-8?B?
Append ?= to the end of the subject string.
Declare Content-type in the mail headers so that the message body is not garbled either.

Example:
$to       = 'admin@askmaclean.com';
$subject  = "=?UTF-8?B?" . base64_encode('邮件标题') . "?=";   // a Chinese subject string
$message  = '<p>邮件正文</p>';                                  // example body; any HTML string works
$headers  = 'MIME-Version: 1.0' . "\r\n";
$headers .= 'Content-type: text/html; charset=utf-8' . "\r\n";
// Additional headers
$headers .= 'To: Xinple <admin@askmaclean.com>' . "\r\n";
$headers .= 'From: Admin <admin@askmaclean.com>' . "\r\n";
$headers .= 'Reply-To: Xinple <webmaster@163gl.com>' . "\r\n";
mail($to, $subject, $message, $headers);

A Case Of Oracle Internal Error ORA-07445 [kpopfr()+339] [SIGFPE]

This dump can occur when the total length of all columns exceeds 1048576 bytes; the session is then terminated automatically. It generally only appears when a table has many columns and the individual columns are quite "wide".

The problem has been reproduced in testing on both 10.2.0.1 and 10.2.0.3. Test case:

create table test
( c000 char(2000),
  c001 char(2000),
  -- columns c002 through c522, each char(2000), are omitted here
  c523 char(2000),
  c524 char(576));
-- sum of all column sizes is 1048576 (0x100000).
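Typing out the full 525-column definition is tedious; a minimal PL/SQL sketch (assuming the same table name and column layout as the test case above) that generates and executes the complete DDL:

declare
  -- builds: create table test (c000 char(2000), ..., c523 char(2000), c524 char(576))
  l_sql varchar2(32767) := 'create table test (';
begin
  for i in 0 .. 523 loop
    l_sql := l_sql || 'c' || to_char(i, 'FM000') || ' char(2000), ';
  end loop;
  l_sql := l_sql || 'c524 char(576))';
  execute immediate l_sql;
end;
/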

Run the following shell script:

while [ 1 ]
do
echo "set feedback off"
echo "select * from test where c001 = 'A';"
done | sqlplus -s scott/tiger


After the loop has been running for a while the session is terminated and the alert log records:

ORA-07445: exception encountered: core dump [kpopfr()+339] [SIGFPE] [Integer divide by zero] [0x002327FF5] [] []

I have not tested 9i, so I cannot say whether it is affected.

The bug is fixed in the 10.2.0.4 patch set. It can also be fixed with a one-off patch, but the one-off patch Oracle released is only for 10.2.0.3; it cannot be applied to 10.2.0.1.

The original bug description is attached below:
Subject:     Bug 5753629 - Query may dump [in kpopfr / kposdi]

Doc ID:     5753629.8     Type:     PATCH

Modified Date :     03-APR-2009     Status:     PUBLISHED

Bug 5753629  Query may dump [in kpopfr / kposdi]

This note gives a brief overview of bug 5753629.
The content was last updated on: 03-APR-2009
Affects:

Product (Component)     Oracle Server (Rdbms)
Range of versions believed to be affected     Versions < 11
Versions confirmed as being affected

* 10.2.0.3

Platforms affected     Generic (all / most platforms affected)

Fixed:

This issue is fixed in

* 10.2.0.3 Patch 9 on Windows Platforms
* 10.2.0.4 (Server Patch Set)
* 11.1.0.6 (Base Release)

Symptoms:

* Process May Dump (ORA-7445) / Abend / Abort
* Dump in or under kpopfr / kposdi

Related To:

* (None Specified)

Description

Repeatedly executing a query can lead to a dump in kpopfr.

eg:

create table test
( c000 char(2000),
  c001 char(2000),
  c523 char(2000),
  c524 char(576));
-- sum of all column size is 1048576(0x100000).

Run next shell script.

while [ 1 ]
do
echo "set feedback off"
echo "select * from test where c001 = 'A';"
done | sqlplus -s scott/tiger

Dump occurs

Please note: The above is a summary description only. Actual symptoms can vary. Matching to any symptoms here does not confirm that you are encountering this problem. Always consult with Oracle Support for advice.

References

Bug 5753629 (This link will only work for PUBLISHED bugs)

Restoring a user's original password

WARNING
=======
The method described in this article is not officially supported by development,
therefore when you get errors using this procedure support cannot file a bug for it.
Please make no mistakes when setting the HEX value with the VALUES keyword as bad
values may cause internal errors. This undocumented feature exists to support
export / import and retain the original passwords.
Problem Description:
====================
How can you restore a user's original password?
Solution Description:
=====================
With the output of dba_users you can restore a user's old password by
issuing the following:
alter user <name> identified by values '<original_encrypted_password>';
For example:
============
SQL> create user test identified by test
2  default tablespace users temporary tablespace temp;
User created.
SQL> select username, password from dba_users where username= 'TEST';
USERNAME   PASSWORD
---------- ------------------------------
TEST       CDC423BEB32B1812
SQL> alter user test identified by test1;
User altered.
SQL> select username, password from dba_users where username= 'TEST';
USERNAME   PASSWORD
---------- ------------------------------
TEST       22F2E341BF4B8764
SQL> alter user test identified by values 'CDC423BEB32B1812';
User altered.
SQL> connect test/test
Connected.
Solution Explanation:
=====================
To do this you would need a spool of dba_users taken before the password was
changed.
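A minimal SQL*Plus sketch for producing such a spool ahead of time (the spool file name is only an example; note that from 11g onward DBA_USERS.PASSWORD is no longer populated, so the hash has to be taken from SYS.USER$ instead):

set pagesize 0 linesize 200 feedback off verify off
spool restore_passwords.sql
select 'alter user ' || username ||
       ' identified by values ''' || password || ''';'
  from dba_users
 where password is not null;
spool off

Running the spooled file later re-applies the original hashed passwords without ever knowing the clear-text values.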
