Channel: ANBOB

Upgrading to Oracle 19c: an ORA-39083 and ORA-00942 case during TTS


Last week, while using transportable tablespaces (TTS) to migrate from 11.2.0.4 to 19c, the final impdp metadata step reported ORA-942 and a large number of indexes failed to create, yet the tables named in the errors actually existed. This turned out to be caused by revoked privileges. The scenario: a user is initially granted a broad system privilege such as CREATE ANY TABLE/INDEX, the DBA role, or privileges on specific objects; cross-schema indexes (index and table with different owners) or FK constraints are then created; later, a security-hardening exercise revokes the broad privileges, which triggers this problem. Let's verify it below.

Create the tablespace, users, tables, and indexes

[oracle@oel7db1 tpt-oracle-master]$ ora

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Jun 14 08:14:04 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0


USERNAME             INST_NAME            HOST_NAME                  I# SID   SERIAL#  VERSION    STARTED  SPID       OPID  CPID            SADDR            PADDR
-------------------- -------------------- ------------------------- --- ----- -------- ---------- -------- ---------- ----- --------------- ---------------- ----------------
SYS                  CDB$ROOT-anbob19c    oel7db1                     1 36    17455    19.0.0.0.0 20200614 2962       33    2961            000000006A88B4E0 000000006B9069A8

SQL> @cc pdb1
ALTER SESSION SET container = pdb1;

Session altered.


USERNAME             INST_NAME            HOST_NAME                  I# SID   SERIAL#  VERSION    STARTED  SPID       OPID
-------------------- -------------------- ------------------------- --- ----- -------- ---------- -------- ---------- -----
CPID            SADDR            PADDR
--------------- ---------------- ----------------
SYS                  PDB1-anbob19c        oel7db1                     1 36    17455    19.0.0.0.0 20200614 2962       33
2961            000000006A88B4E0 000000006B9069A8

SQL> @df

           Container                                                            Free            Alloc
    CON_ID Name            TABLESPACE_NAME                Num Files       Space Meg.       Space Meg.        PCT
---------- --------------- ------------------------------ --------- ---------------- ---------------- ----------
         3 PDB1            SYSAUX                                 1               19              390        .95
                           SYSTEM                                 1                3              280        .99
                           TEMP                                   1                                36
                           UNDOTBS1                               1               61              100        .39
                           USERS                                  1              104            1,548        .93
********** *************** ****************************** --------- ---------------- ----------------
sum                                                               5              186            2,354
                                                          --------- ---------------- ----------------
sum                                                               5              186            2,354

SQL> @ls users

TABLESPACE_NAME                   FILE_ID EXT         MB      MAXSZ
------------------------------ ---------- --- ---------- ----------
FILE_NAME
--------------------------------------------------------------------------------------------------------------
USERS                                  12 YES     1547.5   32767.98
/u01/app/oracle/oradata/ANBOB19C/pdb1/users01.dbf


SQL> create tablespace TBS1 datafile '/u01/app/oracle/oradata/ANBOB19C/pdb1/tbs101.dbf' size 10m;
Tablespace created.

SQL> create user u1 identified by "anbob.com";
SQL> create user u2 identified by "anbob.com";


SQL> grant unlimited tablespace,create session,create table,create any index  to u1,u2;

Grant succeeded.

SQL> create table u1.test as select column_value id ,'anbob'||column_value name from xmltable('1 to 10');
create table u1.test as select column_value id ,'anbob'||column_value name from xmltable('1 to 10')
*
ERROR at line 1:
ORA-64464: XML event error
ORA-19202: Error occurred in XML processing
In line 1 of orastream:
LPX-00210: expected '<' instead of '1'

SQL> create table u1.test(id int,name varchar2(10)) ;
Table created.

SQL> insert into u1.test select to_number(column_value) id ,'anbob'||column_value name from xmltable('1 to 10');
10 rows created.

SQL> commit;
Commit complete.

SQL> create table u2.test as select * from u1.test;
Table created.

SQL> alter table u1.test move tablespace tbs1;
Table altered.

SQL> alter table u2.test move tablespace tbs1;
Table altered.


create index u1.idx_test_id  on u2.test(id) tablespace tbs1;
create index u2.idx_test_name on u2.test(name) tablespace tbs1;
create index u2.idx_test_id on u1.test(id) tablespace tbs1;


SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('TBS1', TRUE);
PL/SQL procedure successfully completed.

SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
no rows selected

SQL> select owner,table_owner,index_name,table_name from dba_indexes where owner in('U1','U2');

OWNER                          TABLE_OWNE INDEX_NAME           TABLE_NAME
------------------------------ ---------- -------------------- --------------------
U2                             U1         IDX_TEST_ID          TEST
U1                             U2         IDX_TEST_ID          TEST
U2                             U2         IDX_TEST_NAME        TEST

SQL> @dirs

DIRECTORY_NAME                           DIRECTORY_PATH
---------------------------------------- ------------------------------------------------------------------------------------------
SDO_DIR_WORK
...
JAVA$JOX$CUJS$DIRECTORY$                 /u01/app/oracle/product/19.2.0/db_1/javavm/admin/
DATAPUMP                                 /home/oracle

14 rows selected.

Note:
We created a table under each of the two users, then created cross-schema indexes on each other's tables as well as a same-schema index, to rule out both schema import order and the possibility that all indexes are affected.

Revoke the privilege

This script queries three views: granted roles, system privileges, and object privileges.

SQL>  revoke create any index from u1,u2;
SQL> @privs u1
no rows selected

GRANTEE                   PRIVILEGE                                ADM
------------------------- ---------------------------------------- ---
U1                        UNLIMITED TABLESPACE                     NO
U1                        CREATE SESSION                           NO
U1                        CREATE TABLE                             NO


no rows selected

SQL> @privs u2
no rows selected


GRANTEE                   PRIVILEGE                                ADM
------------------------- ---------------------------------------- ---
U2                        CREATE SESSION                           NO
U2                        CREATE TABLE                             NO
U2                        UNLIMITED TABLESPACE                     NO


no rows selected

Export and import the metadata with TTS

For simplicity, the same database is used as both source and target.

SQL> alter tablespace TBS1 read only;
Tablespace altered.

SQL> host expdp anbob/anbob@cdb1pdb1 directory=DATAPUMP dumpfile=tts_tbs1.dmp logfile=tts_tbs1.log  transport_tablespaces=TBS1 exclude=TABLE_STATISTICS,INDEX_STATISTICS REUSE_DUMPFILES=yes

Export: Release 19.0.0.0.0 - Production on Tue Jun 16 05:22:25 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Starting "ANBOB"."SYS_EXPORT_TRANSPORTABLE_01":  anbob/********@cdb1pdb1 directory=DATAPUMP dumpfile=tts_tbs1.dmp logfile=tts_tbs1.log transport_tablespaces=TBS1 exclude=TABLE_STATISTICS,INDEX_STATISTICS REUSE_DUMPFILES=yes
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Master table "ANBOB"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ANBOB.SYS_EXPORT_TRANSPORTABLE_01 is:
  /home/oracle/tts_tbs1.dmp
******************************************************************************
Datafiles required for transportable tablespace TBS1:
  /u01/app/oracle/oradata/ANBOB19C/pdb1/tbs101.dbf
Job "ANBOB"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Tue Jun 16 05:23:25 2020 elapsed 0 00:00:55


SQL> drop tablespace tbs1 including contents ;
Tablespace dropped.

SQL> select owner,table_owner,index_name,table_name from dba_indexes where owner in('U1','U2');
no rows selected

SQL> host impdp anbob/anbob@cdb1pdb1 directory=DATAPUMP dumpfile=tts_tbs1.dmp logfile=imp_tts_tbs1.log transport_datafiles=/u01/app/oracle/oradata/ANBOB19C/pdb1/tbs101.dbf

Import: Release 19.0.0.0.0 - Production on Tue Jun 16 05:25:40 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Master table "ANBOB"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "ANBOB"."SYS_IMPORT_TRANSPORTABLE_01":  anbob/********@cdb1pdb1 directory=DATAPUMP dumpfile=tts_tbs1.dmp logfile=imp_tts_tbs1.log transport_datafiles=/u01/app/oracle/oradata/ANBOB19C/pdb1/tbs101.dbf
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
ORA-39083: Object type INDEX:"U2"."IDX_TEST_ID" failed to create with error:
ORA-00942: table or view does not exist

Failing sql is:
CREATE INDEX "U2"."IDX_TEST_ID" ON "U1"."TEST" ("ID") PCTFREE 10 INITRANS 2 MAXTRANS 255  STORAGE(SEG_FILE 181 SEG_BLOCK 162 OBJNO_REUSE 73789 INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "TBS1" PARALLEL 1

ORA-39083: Object type INDEX:"U1"."IDX_TEST_ID" failed to create with error:
ORA-00942: table or view does not exist

Failing sql is:
CREATE INDEX "U1"."IDX_TEST_ID" ON "U2"."TEST" ("ID") PCTFREE 10 INITRANS 2 MAXTRANS 255  STORAGE(SEG_FILE 181 SEG_BLOCK 146 OBJNO_REUSE 73787 INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "TBS1" PARALLEL 1

Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "ANBOB"."SYS_IMPORT_TRANSPORTABLE_01" completed with 2 error(s) at Tue Jun 16 05:25:52 2020 elapsed 0 00:00:09


SQL> select owner,table_owner,index_name,table_name from dba_indexes where owner in('U1','U2');

OWNER                     TABLE_OWNER          INDEX_NAME           TABLE_NAME
------------------------- -------------------- -------------------- ------------------------------
U2                        U2                   IDX_TEST_NAME        TEST

SQL> @privs anbob

GRANTEE                   GRANTED_ROLE                                                                                                                     ADM DEF
------------------------- --------------------------
ANBOB                     DBA                                                                                                                              NO  YES

GRANTEE                   PRIVILEGE                                ADM
------------------------- ---------------------------------------- ---
ANBOB                     UNLIMITED TABLESPACE                     NO
ANBOB                     CREATE SESSION                           NO


GRANTEE                   OWNER                     TABLE_NAME                     PRIVILEGE
------------------------- ------------------------- ------------------------------ ----------------------------------------
ANBOB                     SYS                       DATAPUMP                       READ
ANBOB                     SYS                       DATAPUMP                       WRITE

Note:
Notice that the imported data is missing the two indexes whose owner differs from the table owner, while the same-schema index was created successfully. The expdp/impdp user anbob has the DBA role, so insufficient privilege of the importing user is not the cause.
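The failing DDL in the impdp log already names both owners, so the affected indexes can be pulled straight out of the log. A minimal sketch (plain Python, not an Oracle utility; the regex assumes the `CREATE INDEX "OWNER"."NAME" ON "OWNER"."TABLE"` form shown above, and in an impdp log only failing SQL is printed this way):

```python
import re

# Matches the quoted four-part identifier form impdp prints in "Failing sql is:".
FAIL_RE = re.compile(
    r'CREATE\s+INDEX\s+"(?P<idx_owner>\w+)"\."(?P<idx>\w+)"\s+'
    r'ON\s+"(?P<tab_owner>\w+)"\."(?P<tab>\w+)"'
)

def cross_schema_failures(log_text: str):
    """Return (index_owner, index_name, table_owner, table_name) tuples
    for CREATE INDEX statements in the log that span two schemas."""
    hits = []
    for m in FAIL_RE.finditer(log_text):
        if m.group("idx_owner") != m.group("tab_owner"):
            hits.append((m.group("idx_owner"), m.group("idx"),
                         m.group("tab_owner"), m.group("tab")))
    return hits
```

Running it over the import log above would report `U2.IDX_TEST_ID` on `U1.TEST` and `U1.IDX_TEST_ID` on `U2.TEST` as the cross-schema failures.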

Repeating the same test without revoking the CREATE ANY INDEX system privilege from u1/u2

The identical data setup is omitted here; the only difference is that CREATE ANY INDEX is not revoked.

SQL> alter tablespace TBS1 read only;

Tablespace altered.

SQL> host expdp anbob/anbob@cdb1pdb1 directory=DATAPUMP dumpfile=tts_tbs1.dmp logfile=tts_tbs1.log  transport_tablespaces=TBS1 exclude=TABLE_STATISTICS,INDEX_STATISTICS REUSE_DUMPFILES=yes

Export: Release 19.0.0.0.0 - Production on Sun Jun 14 09:33:04 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Starting "ANBOB"."SYS_EXPORT_TRANSPORTABLE_01":  anbob/********@cdb1pdb1 directory=DATAPUMP dumpfile=tts_tbs1.dmp logfile=tts_tbs1.log transport_tablespaces=TBS1 exclude=TABLE_STATISTICS,INDEX_STATISTICS REUSE_DUMPFILES=yes
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Master table "ANBOB"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ANBOB.SYS_EXPORT_TRANSPORTABLE_01 is:
  /home/oracle/tts_tbs1.dmp
******************************************************************************
Datafiles required for transportable tablespace TBS1:
  /u01/app/oracle/oradata/ANBOB19C/pdb1/tbs101.dbf
Job "ANBOB"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Sun Jun 14 09:34:06 2020 elapsed 0 00:00:57


SQL> drop tablespace tbs1 including contents ;
Tablespace dropped.

SQL> host impdp anbob/anbob@cdb1pdb1 directory=DATAPUMP dumpfile=tts_tbs1.dmp logfile=imp_tts_tbs1.log transport_datafiles=/u01/app/oracle/oradata/ANBOB19C/pdb1/tbs101.dbf

Import: Release 19.0.0.0.0 - Production on Sun Jun 14 09:35:05 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Master table "ANBOB"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "ANBOB"."SYS_IMPORT_TRANSPORTABLE_01":  anbob/********@cdb1pdb1 directory=DATAPUMP dumpfile=tts_tbs1.dmp logfile=imp_tts_tbs1.log transport_datafiles=/u01/app/oracle/oradata/ANBOB19C/pdb1/tbs101.dbf
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "ANBOB"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Sun Jun 14 09:35:24 2020 elapsed 0 00:00:12

SQL> select owner,table_owner,index_name,table_name from dba_indexes where owner in('U1','U2');

OWNER                          TABLE_OWNE INDEX_NAME           TABLE_NAME
------------------------------ ---------- -------------------- --------------------
U2                             U1         IDX_TEST_ID          TEST
U2                             U2         IDX_TEST_NAME        TEST
U1                             U2         IDX_TEST_ID          TEST

Note:
The import succeeds.

Warning:
Before a TTS migration, check for objects whose index owner and table owner differ, and verify that those users have sufficient privileges. You can temporarily grant DBA or CREATE ANY INDEX, and revoke the privilege once the import has completed.
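The pre-check itself is just a query against dba_indexes for rows where OWNER <> TABLE_OWNER in the transported tablespaces. As an illustration, here is a small sketch of the same logic in Python, assuming the rows have already been fetched (the column order owner, table_owner, index_name, table_name and the helper name `tts_index_precheck` are assumptions for this example):

```python
def tts_index_precheck(dba_indexes_rows):
    """Given (owner, table_owner, index_name, table_name) rows for the
    tablespaces being transported, return the index owners that need a
    temporary CREATE ANY INDEX grant, plus the GRANT/REVOKE statements
    to run before and after the import."""
    owners = sorted({owner for owner, table_owner, _idx, _tab in dba_indexes_rows
                     if owner != table_owner})
    grants  = [f"GRANT CREATE ANY INDEX TO {o};" for o in owners]
    revokes = [f"REVOKE CREATE ANY INDEX FROM {o};" for o in owners]
    return owners, grants, revokes
```

With the three index rows from the test above, both U1 and U2 would be flagged, since each owns an index on the other's table.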


Troubleshooting Oracle 19c RAC db crash with ORA-00600 [kcbbxsv_nwp]


db alert log

2020-06-12T05:01:44.048197+08:00
PDB(3):minact-scn: useg scan erroring out with error e:12751
2020-06-12T05:01:58.302414+08:00
Errors in file /u02/app/oracle/diag/rdbms/anbob/wgdb11/trace/wgdb11_dbwb_59726.trc  (incident=1280745) (PDBNAME=CDB$ROOT):
ORA-00600: internal error code, arguments: [kcbbxsv_nwp], [], [], [], [], [], [], [], [], [], [], []
Incident details in: /u02/app/oracle/diag/rdbms/wgdb1/wgdb11/incident/incdir_1280745/wgdb11_dbwb_59726_i1280745.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2020-06-12T05:02:00.124461+08:00
Errors in file /u02/app/oracle/diag/rdbms/anbob/wgdb11/trace/wgdb11_dbwb_59726.trc:
ORA-00600: internal error code, arguments: [kcbbxsv_nwp], [], [], [], [], [], [], [], [], [], [], []
Errors in file /u02/app/oracle/diag/rdbms/wgdb1/wgdb11/trace/wgdb11_dbwb_59726.trc  (incident=1280746) (PDBNAME=CDB$ROOT):
ORA-471 [] [] [] [] [] [] [] [] [] [] [] []  <<<<<<<<
Incident details in: /u02/app/oracle/diag/rdbms/wgdb1/wgdb11/incident/incdir_1280746/wgdb11_dbwb_59726_i1280746.trc
2020-06-12T05:02:00.269879+08:00
Dumping diagnostic data in directory=[cdmp_20200612050200], requested by (instance=1, osid=59726 (DBWB)), summary=[incident=1280745].
2020-06-12T05:02:01.998825+08:00
USER (ospid: 59726): terminating the instance due to ORA error 471  <<<<<<<<
2020-06-12T05:02:09.324604+08:00
Instance terminated by USER(prelim), pid = 201552

trace file

*** 2020-06-12T05:01:58.303334+08:00
2020-06-12T05:01:58.303319+08:00
Incident 1280745 created, dump file: /u02/app/oracle/diag/rdbms/anbob/anbob1/incident/incdir_1280745/anbob1_dbwb_59726_i1280745.trc
ORA-00600: internal error code, arguments: [kcbbxsv_nwp], [], [], [], [], [], [], [], [], [], [], []
kge_experr : Found error ORA-600 not in expected list.
kge_experr: Dumping error frames [top = 1 : barrier top = 0]
kge_experr : [0] : Error = ORA-600 : 
Call stack = ksedsts()+426<-kge_snap_callstack()+77<-kgeadse()+557<-kgerinv_internal()+44
<-kgerinv()+40<-kserin()+180<-kcbbxsv()+17478<-kcbb_coalesce_int()+326<-kcbb_coalesce()+438<
-kcbbwthc()+817<-kcbbdrv()+8765<-ksb_act_run_int()+117<-ksb_act_run()+130<-ksbcti()+18
kge_experr: Dumping error frames - done
error 471 detected in background process
ORA-00600: internal error code, arguments: [kcbbxsv_nwp], [], [], [], [], [], [], [], [], [], [], []
2020-06-12T05:02:00.137498+08:00
Incident 1280746 created, dump file: /u02/app/oracle/diag/rdbms/anbob/anbob1/incident/incdir_1280746/anbob1_dbwb_59726_i1280746.trc
ORA-471 [] [] [] [] [] [] [] [] [] [] [] []
2020-06-12 05:02:01.983 :kjzduptcctx(): Notifying DIAG for crash event
 PROCESS STATE
-------------
Process global information:
     process: 0xee1ac44f0, call: 0xab52da458, xact: (nil), curses: 0xf6257d6e0, usrses: 0xf6257d6e0 <<<<<<<<<<<
     in_exception_handler: no
  ----------------------------------------
  SO: 0xf7fef76f8, type: process (2), map: 0xee1ac44f0
      state: LIVE (0x4532), flags: 0x1
      owner: (nil), proc: 0xf7fef76f8
      link: 0xf7fef7718[0xf7fef7718, 0xf7fef7718]
      child list count: 15, link: 0xf7fef7768[0xdbff094a8, 0xdbff098a8]
      conid: 1, conuid: 1, SGA version=(1,0), pg: 0
  SOC: 0xee1ac44f0, type: process (2), map: 0xf7fef76f8
       state: LIVE (0x99fc), flags: INIT (0x1)
  (process) Oracle pid:93, ser:1, calls cur/top: 0xab52da458/0xab52da458
            flags : (0x6) SYSTEM  icon_uid:0 logon_pdbid=0
            flags2: (0x800),  flags3: (0x10) 
            call error: 0, sess error: 0, txn error 0
            intr queue: empty
    (post info) last post received: 0 0 33
                last post received-location: ksa2.h LINE:298 ID:ksasnd
                last process to post me: 0xf21a5c338 1 6
                last post sent: 0 0 193
                last post sent-location: kjc.h LINE:2511 ID:KJCS Post snd proxy to flush msg
                last process posted by me: 0xf01a5e388 1 6
                waiter on post event: 0
    (latch info) hold_bits=0x0 ud_influx=0x19a7
    (osp latch info) hold_bits=0x0 ud_influx=0x0
    Process Group: DEFAULT, pseudo proc: 0xee1e6cf58
    O/S info: user: oracle, term: UNKNOWN, ospid: 59726 
    OSD pid info: 
    PDB SWITCH DEPTH : 0

   ----------------------------------------
    SO: 0xf7fe1ef90, type: session (4), map: 0xf6257d6e0
        state: LIVE (0x4532), flags: 0x1
        owner: 0xf7fef76f8, proc: 0xf7fef76f8
        link: 0xf7fe1efb0[0xf7f21de90, 0xf7f21de10]
        child list count: 2, link: 0xf7fe1f000[0xdbff87fa8, 0xe7fd12340]
        conid: 1, conuid: 1, SGA version=(1,0), pg: 0
    SOC: 0xf6257d6e0, type: session (4), map: 0xf7fe1ef90  <<<<<<<<<<<
         state: LIVE (0x99fc), flags: INIT (0x1)
    (session) sid: 4465 ser: 57003 trans: (nil), creator: 0xee1ac44f0
              flags: (0x51) USR/- flags2: (0x409) -/-/INC
              flags_idl: (0x1) status: BSY/-/-/- kill: -/-/-/-
              DID: 0001-005D-000000020000-0000-00000000, short-term DID: 
              txn branch: (nil)
              con_id/con_uid/con_name: 1/1/CDB$ROOT
              con_logonuid: 1 con_logonid: 1
              con_scuid: 1 con_scid: 1
              edition#: 0              user#/name: 0/SYS
              oct: 0, prv: 0, sql: (nil), psql: (nil)
              stats: 0xb1ffe1be0, PX stats: 0x12a0df44
    service name: SYS$BACKGROUND
    Current Wait Stack:
      Not in wait; last wait ended 2.446949 sec ago     <<<<<<<<<<<
    Wait State:
      fixed_waits=0 flags=0x21 boundary=(nil)/-1

kgerinv()+40 kernel generic error record internal named error with va_list
kserin()+180 kernel service error [partial hit for: kse ]
kcbbxsv()+17478 kernel cache buffers databasewriter take single buffer and check if we have to return to LRU
kcbb_coalesce_int()+326 kernel cache buffers databasewriter [partial hit for: kcbb ]
kcbb_coalesce()+438 kernel cache buffers databasewriter [partial hit for: kcbb ]
kcbbwthc()+817 kernel cache buffers databasewriter [partial hit for: kcbb ]
kcbbdrv()+8765 kernel cache buffers databasewriter central write driver
ksb_act_run_int()+117 kernel service background processes [partial hit for: ksb ]
ksb_act_run()+130 kernel service background processes [partial hit for: ksb ]
ksbcti() kernel service background processes call timeout/interrupts

Searching MOS for this call stack matched Bug 30486436.

INTERNAL PROBLEM DESCRIPTION:
A weak lock may be blocked due to a refuse bast was dropped for incorrect DRM check when refuse bast was issued right after DRM lock replay.

INTERNAL FIX DESCRIPTION:
Add check in refuse bast to handle DRM just replayed the lock. The refuse bast would be valid and should be honored.

Redis Study 01: Installing Redis 6 on Linux 7


Redis is an extremely fast non-relational in-memory database, originally created by Salvatore Sanfilippo. It stores mappings from keys to five different types of values, can persist in-memory key-value data to disk, and can scale read performance through replication. It is essentially a remote in-memory store: high-performance, open source, and focused on solving real problems. Through replication, persistence, and sharding it can scale into a system holding hundreds of GB of data and serving millions of requests per second, which is why it is widely adopted in today's high-concurrency applications.

The five supported structures are STRING, LIST, SET, HASH, and ZSET (sorted set).

Below are notes on installing Redis and on string-type operations (with Python).

Download and build:

$ wget http://download.redis.io/releases/redis-6.0.5.tar.gz
$ tar xzf redis-6.0.5.tar.gz
$ cd redis-6.0.5
$ make

On CentOS/OEL 7 the build may fail with the following errors:

In file included from server.c:30:0:
server.h:1045:5: error: expected specifier-qualifier-list before ‘_Atomic’
     _Atomic unsigned int lruclock; /* Clock for LRU eviction */
     ^
server.c: In function ‘serverLogRaw’:
server.c:1028:31: error: ‘struct redisServer’ has no member named ‘logfile’
     int log_to_stdout = server.logfile[0] == '\0';
                               ^
server.c:1031:23: error: ‘struct redisServer’ has no member named ‘verbosity’
     if (level < server.verbosity) return;
                       ^
server.c:1033:47: error: ‘struct redisServer’ has no member named ‘logfile’
     fp = log_to_stdout ? stdout : fopen(server.logfile,"a");
                                               ^
server.c:1046:47: error: ‘struct redisServer’ has no member named ‘timezone’
         nolocks_localtime(&tm,tv.tv_sec,server.timezone,server.daylight_active);
                                               ^
server.c:1046:63: error: ‘struct redisServer’ has no member named ‘daylight_active’
         nolocks_localtime(&tm,tv.tv_sec,server.timezone,server.daylight_active);
                                                               ^
server.c:1049:19: error: ‘struct redisServer’ has no member named ‘sentinel_mode’
         if (server.sentinel_mode) {

The reason:
Since Redis 6.0.0, building Redis from source requires C11 support.

The gcc version shipped with CentOS 7 is 4.8.5, but C11 support was introduced in gcc 4.9.

[root@MiWiFi-R2100-srv redis-6.0.5]# rpm -qa|grep gcc
libgcc-4.8.5-4.el7.x86_64
gcc-gfortran-4.8.5-4.el7.x86_64
gcc-c++-4.8.5-4.el7.x86_64
gcc-4.8.5-4.el7.x86_64
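If you script this check across many hosts, the version gate is simple. A tiny sketch based on the note above (gcc >= 4.9 for C11/`_Atomic`; `gcc_supports_c11` is a name made up for this example):

```python
def gcc_supports_c11(version: str) -> bool:
    """Redis 6 needs C11 (_Atomic); per the note above, gcc gained it in 4.9."""
    parts = version.split(".")
    major = int(parts[0])
    minor = int(parts[1]) if len(parts) > 1 else 0
    return (major, minor) >= (4, 9)
```

So the stock CentOS 7 compiler (4.8.5) fails the check, while the devtoolset-7 compiler (gcc 7.x) passes it.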

Solution:
Install Developer Toolset 7 to compile with gcc 7.

# yum install centos-release-scl
# yum install devtoolset-7

This does not upgrade gcc from 4.8.5 to 7 in place; it provides a separate environment with the gcc 7 toolchain.

Use the following commands to enter the environment and build:

[root@MiWiFi-R2100-srv redis-6.0.5]# scl enable devtoolset-7 bash
[root@MiWiFi-R2100-srv redis-6.0.5]#  echo "source /opt/rh/devtoolset-7/enable" >>/etc/profile
[root@MiWiFi-R2100-srv redis-6.0.5]# make
$ mkdir -p /usr/soft/redis
$ make PREFIX=/usr/soft/redis install

[root@MiWiFi-R2100-srv bin]# pwd
/usr/soft/redis/bin
[root@MiWiFi-R2100-srv bin]# ll
total 35632
-rwxr-xr-x 1 root root 4717032 Jun 22 10:36 redis-benchmark   # benchmarking tool
-rwxr-xr-x 1 root root 8932872 Jun 22 10:36 redis-check-aof   # AOF file checker
-rwxr-xr-x 1 root root 8932872 Jun 22 10:36 redis-check-rdb   # RDB file checker
-rwxr-xr-x 1 root root 4966208 Jun 22 10:36 redis-cli         # command-line client
lrwxrwxrwx 1 root root      12 Jun 22 10:36 redis-sentinel -> redis-server
-rwxr-xr-x 1 root root 8932872 Jun 22 10:36 redis-server      # the server

# vi ~/.bash_profile

export REDIS_HOME=/usr/soft/redis
export PATH=$PATH:$REDIS_HOME/bin
# source ~/.bash_profile

[root@MiWiFi-R2100-srv redis]# redis-server
8133:C 22 Jun 2020 10:54:31.795 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
8133:C 22 Jun 2020 10:54:31.795 # Redis version=6.0.5, bits=64, commit=00000000, modified=0, pid=8133, just started
8133:C 22 Jun 2020 10:54:31.795 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
8133:M 22 Jun 2020 10:54:31.796 * Increased maximum number of open files to 10032 (it was originally set to 1024).
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 6.0.5 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 8133
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'


-- run the server as a daemon (background process)
[root@MiWiFi-R2100-srv redis]# pwd
/usr/soft/redis
[root@MiWiFi-R2100-srv redis]# vi redis.conf
[root@MiWiFi-R2100-srv redis]# cat redis.conf
daemonize yes

[root@MiWiFi-R2100-srv redis]# redis-server /usr/soft/redis/redis.conf
8177:C 22 Jun 2020 10:59:58.664 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
8177:C 22 Jun 2020 10:59:58.664 # Redis version=6.0.5, bits=64, commit=00000000, modified=0, pid=8177, just started
8177:C 22 Jun 2020 10:59:58.664 # Configuration loaded

[root@MiWiFi-R2100-srv bin]# ps -ef|grep redis|grep -v grep
root 8178 1 0 10:59 ? 00:00:01 redis-server *:6379
[root@MiWiFi-R2100-srv redis]# redis-
redis-benchmark  redis-check-aof  redis-check-rdb  redis-cli        redis-sentinel   redis-server
[root@MiWiFi-R2100-srv redis]# redis-cli
127.0.0.1:6379> set key1 "hello world"
OK
127.0.0.1:6379> get key1
"hello world"
127.0.0.1:6379>
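Under the hood, redis-cli sends each command to the server using the RESP protocol, as an array of bulk strings. A minimal sketch of that encoding (pure Python, no server required; `encode_resp` is a name made up for this example):

```python
def encode_resp(*args):
    """Encode a Redis command as a RESP array of bulk strings --
    the wire form a client sends for each command."""
    parts = [f"*{len(args)}\r\n".encode()]          # array header: element count
    for arg in args:
        data = arg if isinstance(arg, bytes) else str(arg).encode()
        # each element: $<byte-length>\r\n<bytes>\r\n
        parts.append(b"$" + str(len(data)).encode() + b"\r\n" + data + b"\r\n")
    return b"".join(parts)
```

For example, `set key1 "hello world"` goes on the wire as `*3\r\n$3\r\nSET\r\n$4\r\nkey1\r\n$11\r\nhello world\r\n`.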
 

Oracle GoldenGate add-column problems (Part 1): Extract OGG-01028 Number of digits N+ exceeds max N on column


"Wine is good, but don't overindulge!" OGG has always done an excellent job on synchronization efficiency, safety, and compatibility; it is widely used for data synchronization and supports heterogeneous environments. But it must be used strictly by the book: there is a strict procedure for performing DDL on tables replicated by OGG, and careless use will bring plenty of maintenance trouble later. Here is a brief record of an Extract process abending because a column was added while OGG was synchronizing a transaction on the table.

ggserr.log

2020-06-23 00:26:37 INFO OGG-06507 Oracle GoldenGate Capture for Oracle, eaccta1.prm: MAP (TABLE) resolved (entry ANBOB.TAB_TEST1): table "ANBOB"."TAB_TEST1".
2020-06-23 00:26:37 INFO OGG-06509 Oracle GoldenGate Capture for Oracle, eaccta1.prm: Using the following key columns for source table ANBOB.TAB_TEST1: TARIFFPLAN_ID, PLANITEM_ID.
2020-06-23 00:26:37 ERROR OGG-01028 Oracle GoldenGate Capture for Oracle, eaccta1.prm: Formatting error on: table name ANBOB.TAB_TEST1, rowid AAInD5AGdAACB6IAAC, XID 3618.67.20809918, position (Seqno 210054, RBA 1845468456). Number of digits 24 exceeds max 19 on column START_CYCLE_OFFSET, value: -470000000000000000000000.
2020-06-23 00:26:37 ERROR OGG-01668 Oracle GoldenGate Capture for Oracle, eaccta1.prm: PROCESS ABENDING.
2020-06-23 00:29:48 INFO OGG-01738 Oracle GoldenGate Capture for Oracle, ext_f.prm: BOUNDED RECOVERY: CHECKPOINT: for object pool 1: p19204010_Redo_Thread_1: start=SeqNo: 210054, RBA: 1742906384, SCN: 3897.3551655082 (16741039207594), Timestamp: 2020-06-23 00:14:03.000000, end=SeqNo: 210054, RBA: 2093275136, SCN: 3897.3553983944 (16741041536456), Timestamp: 2020-06-23 00:29:46.000000, Thread: 1.

Note: From the message above it appears the column mapping is shifted. The error is on column START_CYCLE_OFFSET: the value has 24 digits while the column precision allows at most 19, and the value itself is meaningless. The message also carries the rowid.
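When triaging a batch of these, the interesting fields (rowid, XID, digit counts, column name) can be lifted out of the OGG-01028 text mechanically, then checked against the table definition. A small sketch (plain Python; the regex assumes the message form shown above, and `parse_ogg01028` is a name made up for this example):

```python
import re

OGG01028_RE = re.compile(
    r"rowid (?P<rowid>\S+), XID (?P<xid>[\d.]+),.*?"
    r"Number of digits (?P<ndigits>\d+) exceeds max (?P<maxd>\d+) "
    r"on column (?P<column>\w+)",
    re.S,  # the message may be wrapped across lines in ggserr.log
)

def parse_ogg01028(msg: str):
    """Extract rowid, XID, digit counts, and column name from an
    OGG-01028 formatting-error message; None if it doesn't match."""
    m = OGG01028_RE.search(msg)
    return m.groupdict() if m else None
```

The rowid can then be fed to a `select ... where rowid='...'` as below, and the digit counts compared with the column's declared precision.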

Check the table definition

SQL> @ind ANBOB.TAB_TEST1
Display indexes where table or index name matches %ANBOB.TAB_TEST1%...

TABLE_OWNER          TABLE_NAME                     INDEX_NAME                     POS# COLUMN_NAME                    DSC
-------------------- ------------------------------ ------------------------------ ---- ------------------------------ ----
ANBOB                TAB_TEST1                   INX_TAB_TEST1                        1 TARIFFPLAN_ID
                                                                                      2 PLANITEM_ID


INDEX_OWNER          TABLE_NAME                     INDEX_NAME                     IDXTYPE    UNIQ STATUS   PART TEMP 
-------------------- ------------------------------ ------------------------------ ---------- ---- -------- ---- ---- 
ANBOB                TAB_TEST1                      INX_TAB_TEST1                  NORMAL     YES  VALID    NO   N    


SQL> select START_CYCLE_OFFSET from ANBOB.TAB_TEST1 where rowid='AAInD5AGdAACB6IAAC';

START_CYCLE_OFFSET
------------------
                 0

SQL> @desc ANBOB.TAB_TEST1
           Name                            Null?    Type
           ------------------------------- -------- ----------------------------
    1      TARIFFPLAN_ID                   NOT NULL NUMBER(8)
    2      PLANITEM_ID                     NOT NULL NUMBER(8)
    3      DISC_ENTITY                     NOT NULL NUMBER(1)
...
   10      REFITEM_CYCLE_OFFSET            NOT NULL NUMBER(4)
   11      REFVALUE_CALC_TYPE              NOT NULL NUMBER(1)
   12      REFVALUE_UNIT                   NOT NULL NUMBER(2)
   13      DISC_OBJECT_TYPE                NOT NULL NUMBER(1)
   14      DISC_ITEM_CODE                  NOT NULL VARCHAR2(256)
   15      START_CYCLE_OFFSET              NOT NULL NUMBER(4)
   16      VALID_CYCLE_TYPE                NOT NULL NUMBER(1)
   17      VALID_CYCLES                    NOT NULL NUMBER(4)
   18      INUSE                           NOT NULL NUMBER(1)
   19      TARIFFPLAN_TYPE                          NUMBER(4)
   20      DISC_ITEM_TYPE                  NOT NULL NUMBER(1)
   21      FREEREFITEM_MIN_USAGE           NOT NULL NUMBER(14)
   22      NOTE                                     VARCHAR2(255)
   23      ISDISCTSPEC                              VARCHAR2(1)
   24      SPECDISCT_EXPR                           VARCHAR2(512)
...
   32      FORCE_OUTPUT_EVENT              NOT NULL NUMBER(1)
   33      DISC_SOURCE_TYPE                NOT NULL NUMBER(1)
   34      REFITEM_CYCLES                           VARCHAR2(512)
   35      G_REFITEM_CYCLES                         VARCHAR2(512)
   36      REF_ITEMADD_EXPR                         VARCHAR2(512)
   37      G_REF_ITEMADD_EXPR                       VARCHAR2(512)

SQL> @o ANBOB.TAB_TEST1

owner                     object_name                    object_type        status           OID      D_OID CREATED           LAST_DDL_TIME
------------------------- ------------------------------ ------------------ --------- ---------- ---------- ----------------- -----------------
ANBOB                      TAB_TEST1                     TABLE              VALID        1930539    2257145 20140322 22:53:59 20200623 00:30:25

DB1:/interface/ogg> ggsci -V

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.160419 23261684
AIX 6, ppc, 64bit (optimized), Oracle 11g on Jul 10 2016 22:21:03

Copyright (C) 1995, 2016, Oracle and/or its affiliates. All rights reserved.

Note:
OGG here is version 12.2, and DDL had been run on this table around the time of the problem, so the process would not restart normally. Before 12.1, OGG had a bug: Extract falsely abends with "OGG-01028 Formatting error … Number of digits xx exceeds max xx on column" (Doc ID 1939795.1). This table is small; since the process had already abended, SKIPTRANS could not be used, nor could integrated-capture ignore_transaction, so replication of this table was skipped temporarily and the process restarted.

Later, when adding the table back and restarting the Extract process, a LONG TRANSACTION turned up, so here is also a record of how to analyze long-running transactions.

GGSCI (DB1) 64> stop eaccta1

Sending STOP request to EXTRACT EACCTA1 ...

STOP request pending. There are open, long-running transactions.
Before you stop Extract, make the archives containing data for those transactions available for when Extract restarts.
To force Extract to stop, use the SEND EXTRACT EACCTA1, FORCESTOP command.Oldest redo log files necessary to restart Extract are:

Redo Thread 1, Redo Log Sequence Number 210061, SCN 3897.3601449869 (16741089002381), RBA 2777865232
Redo Thread 2, Redo Log Sequence Number 241889, SCN 3897.3601322118 (16741088874630), RBA 110276624.

2020-06-23 18:23:22  WARNING OGG-01742  Command sent to EXTRACT EACCTA1 returned with an invalid response.


GGSCI (DB1) 65> send extract EACCTA1, showtrans

Sending SHOWTRANS request to EXTRACT EACCTA1 ...

Oldest redo log files necessary to restart Extract are:

Redo Thread 1, Redo Log Sequence Number 210061, SCN 3897.3601449869 (16741089002381), RBA 2777865232
Redo Thread 2, Redo Log Sequence Number 241889, SCN 3897.3601322118 (16741088874630), RBA 110276624

------------------------------------------------------------
XID:                  3325.61.6323268       
Items:                1        
Extract:              EACCTA1   
Redo Thread:          1      
Start Time:           2020-06-23:04:57:18  
SCN:                  3897.3601449869 (16741089002381)  
Redo Seq:             210061
Redo RBA:             2777865232          
Status:               Running             

------------------------------------------------------------
XID:                  4639.52.12009229      
Items:                1        
Extract:              EACCTA1   
Redo Thread:          1      
Start Time:           2020-06-23:04:57:20  
SCN:                  3897.3601459776 (16741089012288)  
Redo Seq:             210061
Redo RBA:             2779562000          
Status:               Running             
...

Check whether the required archived logs have already been deleted

SQL> select thread#,max(sequence#) from v$archived_log where deleted='YES' group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1         210084
         2         241919

Can the transaction be skipped as an empty one with SKIPEMPTYTRANS?

XID:                  3325.61.6323268       
Items:                1        ### not empty transaction
Extract:              EACCTA1   
Redo Thread:          1      
Start Time:           2020-06-23:04:57:18  
SCN:                  3897.3601449869 (16741089002381)  
Redo Seq:             210061
Redo RBA:             2777865232 

Use the XID to find the long-running transaction inside the database

SQL> select t.xidusn||'.'||t.xidslot||'.'||xidsqn XID ,s.sid,s.status ses_state,machine,s.sql_id,start_time, username, r.name,  ubafil, ubablk, t.status tx_state, (used_ublk*p.value)/1024 blk, used_urec,decode(bitand(t.flag,power(2,7)),0, 'Normal','TX rolling') tx_state
   from v$transaction t, v$rollname r, v$session s, v$parameter p
   where xidusn=usn
   and s.saddr=t.ses_addr
   and p.name='db_block_size'
   and xidusn=3325 and xidslot=61 and xidsqn=6323268;

XID                                   SID SES_STAT MACHINE    SQL_ID          START_TIME           USERNAME    NAME                               UBAFIL     UBABLK TX_STATE                BLK  USED_UREC TX_STATE
------------------------------ ---------- -------- ---------- --------------- -------------------- ----------- ------------------------------ ---------- ---------- ---------------- ---------- ---------- ----------
3325.61.6323268                      3552 INACTIVE kinjk6                     06/23/20 04:57:17    ACCOUNT     _SYSSMU3325_2049892258$                 0          0 ACTIVE                   16          1 Normal


-- file: long_transactions.sql
-- author: weejar zhang(www.anbob.com)
-- purpose: check current long transactions
col xid for a25
col uba for a20
col machine for a15
select t.xidusn||'.'||t.xidslot||'.'||xidsqn XID ,s.sid,s.status ses_state,machine,s.sql_id,start_time, username, r.name,  ubafil||'.'||ubablk||'.'||UBAREC UBA, t.status tx_state, (used_ublk*p.value)/1024 blk, used_urec,decode(bitand(t.flag,power(2,7)),0, 'Normal','TX rolling') tx_state
   from v$transaction t, v$rollname r, v$session s, v$parameter p
   where xidusn=usn
   and s.saddr=t.ses_addr
   and p.name='db_block_size'
   order by start_time desc;

Tip:
Once again a long transaction caused by a database link: one undo block, with the undo block DBA equal to 0. See my earlier post:

<<Lots of Long transaction caused by database link, and undo hdr show DBA for that slot is 0x00000000>>

Adding columns with Oracle GoldenGate (part 2): Replicat OGG-00918 Key column xx is missing from map


Continuing from the previous post, Adding columns with Oracle GoldenGate (part 1): Extract OGG-01028 Number of digits N+ exceeds max N on column: if you add a column to a table replicated by OGG without DDL replication enabled, a careless procedure brings plenty of trouble. Here I record a Replicat process abend.


queness. KEYCOLS may be used to define the key.
2020-06-23 23:31:31 INFO OGG-02756 Oracle GoldenGate Delivery for Oracle, rep_zwa.prm: The definition for table ANBOB_COM.TEST_TABLE1 is obtained from the trail file.
2020-06-23 23:31:31 INFO OGG-06511 Oracle GoldenGate Delivery for Oracle, rep_zwa.prm: Using following columns in default map by name: ITEMCODE, ITEMNAME, ITEMLEVEL, PARENTITEMCODE, WRTOFF_ORDER, SERV_ID, SUBSERV_ID, IN_USE, ITEMCODEN, FINANCEITEM_CODE, IS_NEW_SERVICE, IS_CAL_SCORE, MONTH_FEE_PRECHARGE_COUNT, ODD_ROUND_TYPE, UNIT.
2020-06-23 23:31:31 ERROR OGG-00918 Oracle GoldenGate Delivery for Oracle, rep_zwa.prm: Key column NOTE is missing from map.
2020-06-23 23:31:31 ERROR OGG-01668 Oracle GoldenGate Delivery for Oracle, rep_zwa.prm: PROCESS ABENDING.
2020-06-23 23:31:31 INFO OGG-00975 Oracle GoldenGate Manager for Oracle, mgr.prm: Cannot create process '/openv/ogg/ogg12/replicat'. Child process is no longer alive.
2020-06-23 23:31:31 INFO OGG-00975 Oracle GoldenGate Manager for Oracle, mgr.prm: startER failed.
2020-06-23 23:31:31 WARNING OGG-01742 Oracle GoldenGate Command Interpreter for Oracle: Command sent to MGR MGR returned with an ERROR response.

NOTE:
"Key column NOTE is missing from map" means the trail records carry no mapping for the target table's NOTE column; the preceding OGG-06511 message lists all the columns present in the trail.

Check the column definitions

SQL> @desc ANBOB_COM.TEST_TABLE1
           Name                            Null?    Type
           ------------------------------- -------- ----------------------------
    1      ITEMCODE                                 VARCHAR2(30)
    2      ITEMNAME                                 VARCHAR2(64)
    3      ITEMLEVEL                                NUMBER(2)
    4      PARENTITEMCODE                           VARCHAR2(32)
    5      WRTOFF_ORDER                             NUMBER(2)
    6      SERV_ID                                  VARCHAR2(12)
    7      SUBSERV_ID                               VARCHAR2(12)
    8      IN_USE                                   NUMBER(1)
    9      ITEMCODEN                       NOT NULL NUMBER(8)
   10      FINANCEITEM_CODE                         VARCHAR2(32)
   11      IS_NEW_SERVICE                           NUMBER(1)
   12      IS_CAL_SCORE                             NUMBER(1)
   13      MONTH_FEE_PRECHARGE_COUNT       NOT NULL NUMBER(4)
   14      ODD_ROUND_TYPE                           CHAR(1)
   15      UNIT                                     NUMBER(2)
   16      NOTE                                     VARCHAR2(512)

Note:
NOTE is the last column. Oracle is not as flexible as MySQL, whose ADD COLUMN can place a new column at any position; Oracle can only append at the end. That suggested NOTE was a newly added column, and the guess was confirmed.

--the column currently holds no values
SQL> select distinct note from ANBOB_COM.TEST_TABLE1;

NOTE
--------------------------------------------------------------------------------

Temporary workaround
Drop the column from the target table first and start replication; once the backlog has been applied, add the column back.

SQL> alter table ANBOB_COM.TEST_TABLE1 drop column note;

Table altered.

GGSCI> start rep xxx;
-- wait for the Replicat to catch up

SQL> alter table ANBOB_COM.TEST_TABLE1 add  note varchar2(512);

Possible causes
1. The trail still held unapplied records for the table from before the column was added on the target.
2. Once trandata is added on the source, the trail carries only the key columns and the changed columns. For a table without a PK or UK, all columns are used as the key on both source and target; after the key columns change, trandata must be re-added. The KEYCOLS parameter can also be used to name the key columns explicitly.

The table definition above shows this is the second case: trandata was not re-created. The same problem also appears when the source has a PK but the target does not, so always keep source and target definitions consistent, or configure a COLMAP mapping.
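A hedged sketch of the two fixes named above, as GGSCI commands and a parameter-file fragment. The table name comes from this example, while the login (ogg_user) and the KEYCOLS column choice (ITEMCODEN, the NOT NULL column in the listing above) are only illustrative assumptions, not the author's actual configuration:

```
-- Option 1: re-create supplemental logging after the key-column change (GGSCI):
GGSCI> DBLOGIN USERID ogg_user, PASSWORD ********
GGSCI> DELETE TRANDATA ANBOB_COM.TEST_TABLE1
GGSCI> ADD TRANDATA ANBOB_COM.TEST_TABLE1

-- Option 2: pin the key columns explicitly in the Replicat MAP:
MAP ANBOB_COM.TEST_TABLE1, TARGET ANBOB_COM.TEST_TABLE1, KEYCOLS (ITEMCODEN);
```

Either way, source and target must agree on which columns form the key.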

After upgrading Oracle 11g to 12c/19c: creating a DATABASE trigger fails with ORA-01031


Security, new features, performance, and the support lifecycle all argue for upgrading the database, but an upgrade can also change behavior in ways that affect applications and administration, so post-upgrade experience is especially valuable: Oracle cannot test every application scenario in every industry, particularly for the looming 11g-to-19c major upgrade. Knowing the minefields in advance helps; wm_concat, for example, is no longer supported in newer releases.

In <<oracle 12c new feature: RESOURCE role without unlimited tablespace>> I showed that after an upgrade to 12c, newly created users granted RESOURCE still cannot create objects in a tablespace because the role no longer carries UNLIMITED TABLESPACE. This post covers another change: creating a DATABASE-level trigger now fails with ORA-01031 even when the user holds the DBA role.

SQL>   CREATE OR REPLACE TRIGGER "ANBOB"."DTR_DDLEVENTS"
  2  AFTER DDL ON DATABASE
  3  DECLARE
  4  --
  5  -- author: weejar
  6  -- date : 2016-5-13
  ...
  48  /
  CREATE OR REPLACE TRIGGER "ANBOB"."DTR_DDLEVENTS"
                            *
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> grant create any trigger to ANBOB;
Grant succeeded.

SQL> @roles ANBOB

GRANTEE                   GRANTED_ROLE                           ADM DEF
------------------------- -------------------------------------- --- ---
ANBOB                      APP_OPERATOR                           YES YES
ANBOB                      PDB_DBA                                NO  YES
ANBOB                      DBA                                    NO  YES
ANBOB                      APP_SELECTOR                           YES YES
ANBOB                      AUDROLE                                YES YES


SQL> CREATE OR REPLACE TRIGGER "ANBOB"."DTR_DDLEVENTS"
                            *
ERROR at line 1:
ORA-01031: insufficient privileges

NOTE:
The anbob user holds the DBA and PDB_DBA roles plus CREATE ANY TRIGGER, yet creating a trigger ON DATABASE still fails with insufficient privileges.

If you have read the preupgrade output carefully you already know why: starting with 12.2, the ADMINISTER DATABASE TRIGGER privilege must be granted directly to the trigger owner.

VERSION <=12.1
To create a trigger in your own schema on a table in your own schema or on your own schema (SCHEMA), you must have the CREATE TRIGGER system privilege.
To create a trigger in any schema on a table in any schema, or on another user’s schema (schema.SCHEMA), you must have the CREATE ANY TRIGGER system privilege.

In addition to the preceding privileges, to create a trigger on DATABASE, you must have the ADMINISTER DATABASE TRIGGER system privilege. By default the DBA and IMP_FULL_DATABASE roles have this privilege, as does the SYS user.

VERSION  >= 12.2
In 12.2, a direct grant of “administer database trigger” is needed for the trigger owner.

$ORACLE_BASE/product/19.0.0/dbhome_1/jdk/bin/java -jar $ORACLE_BASE/product/19.0.0/dbhome_1/rdbms/admin/preupgrade.jar TERMINAL TEXT

(AUTOFIXUP) Directly grant ADMINISTER DATABASE TRIGGER privilege to the
owner of the trigger or drop and re-create the trigger with a user that was granted directly with such. You can list those triggers using:

SELECT OWNER, TRIGGER_NAME FROM DBA_TRIGGERS WHERE
TRIM(BASE_OBJECT_TYPE)='DATABASE' AND 
OWNER NOT IN (SELECT GRANTEE FROM DBA_SYS_PRIVS WHERE PRIVILEGE='ADMINISTER DATABASE TRIGGER').

There is one or more database triggers whose owner does not have the right privilege on the database.
The creation of database triggers must be done by users granted with ADMINISTER DATABASE TRIGGER privilege. Privilege must have been granted directly.

SQL> grant administer database trigger to ANBOB;
Grant succeeded.
SQL>   CREATE OR REPLACE TRIGGER "ANBOB"."DTR_DDLEVENTS"
  2  AFTER DDL ON DATABASE
  3  DECLARE
...

Trigger created.

High wait event ‘row cache mutex’ in 12cR2 / 19c


In Oracle 12.2.0.1.0 (12cR2), “row cache mutex” replaced 12.1.0.2.0 (12cR1) and 11g  “latch: row cache objects”, similar to “latch: library cache” substitution by “library cache: mutex X” in the previous release.

P1TEXT = cache id; the "cache id" value directly pinpoints the exact row cache under contention: v$rowcache.cache# (x$kqrst.kqrstcln).

SQL> select cache#,type,PARAMETER,gets,getmisses,flushes from V$ROWCACHE where cache#=10;

    CACHE# TYPE        PARAMETER                              GETS  GETMISSES    FLUSHES
---------- ----------- -------------------------------- ---------- ---------- ----------
        10 PARENT      dc_users                              38441        113          0

In 12cR2, v$rowcache contains 71 rows; 19c (19.3) contains 75 rows; 20c (20.2) contains 76 rows.

High waits on "row cache mutex" when looking up user or role information in the user row cache (dc_users) point to Bug 30623138.

If the contended row cache is dc_props or dc_cdbprops and the sessions use database links, you may be hitting Bug 30712670.

More on the older "latch: row cache objects" wait, a troubleshooting case, and event 10089 will follow in a later post.

Redis Learning 02: Strings and Databases


Redis supports five data types; this post covers the first of them, the String. Strings in Redis work much like those in other programming languages and key-value stores, with a rich and convenient set of functions. A string value can hold three kinds of data: byte strings, integers, and floating-point numbers. A key may consist of digits, letters, underscores, or Chinese characters; a value containing spaces must be quoted. If a value is an integer or a float it can be incremented and decremented. Strings support the usual create, delete, update, and read operations. Both keys and values may be Chinese, but a Chinese key is displayed as Unicode escapes when listed with KEYS.

String SET/GET

127.0.0.1:6379> ping
PONG
127.0.0.1:6379> set 1 a
OK
127.0.0.1:6379> get 1
"a"
127.0.0.1:6379> set keyname valuename
OK
127.0.0.1:6379> get keyname
"valuename"
127.0.0.1:6379> set name "weejar zhang"
OK
127.0.0.1:6379> get name
"weejar zhang"

With NX, an existing key is not updated

127.0.0.1:6379> get name
"weejar zhang blog .com"
127.0.0.1:6379> set name "anbob.com" NX
(nil)
127.0.0.1:6379> get name
"weejar zhang blog .com"

Fetching multiple keys with MGET

127.0.0.1:6379> keys *
1) "blog"
2) "name"

127.0.0.1:6379> mget name blog
1) "weejar"
2) "anbob.com"

127.0.0.1:6379> getrange blog 4 -1
"b.com"

127.0.0.1:6379> getrange blog 4 -2
"b.co"
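GETRANGE's negative end offset counts from the end of the string, much like a Python slice but with an inclusive endpoint. A small pure-Python sketch of the same arithmetic (no Redis server needed; the helper name is mine):

```python
def getrange(value, start, end):
    """Emulate Redis GETRANGE: inclusive end offset, negatives count from the end."""
    n = len(value)
    if start < 0:
        start = max(n + start, 0)
    if end < 0:
        end = n + end
    return value[start:end + 1]

print(getrange("anbob.com", 4, -1))  # b.com
print(getrange("anbob.com", 4, -2))  # b.co
```

So `getrange blog 4 -1` above is the inclusive slice from offset 4 to the last character.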

Integer and float increment/decrement

127.0.0.1:6379> set age 30
OK
127.0.0.1:6379> get age
"30"
127.0.0.1:6379> incr age
(integer) 31
127.0.0.1:6379> incrby age 3
(integer) 34
127.0.0.1:6379> decr age
(integer) 33

Key pattern matching

127.0.0.1:6379> set 名字 张维照
OK
127.0.0.1:6379> get 名字
"\xe5\xbc\xa0\xe7\xbb\xb4\xe7\x85\xa7"
127.0.0.1:6379> keys *
1) "1"
2) "keyname"
3) "name"
4) "age"
5) "\xe5\x90\x8d\xe5\xad\x97"

127.0.0.1:6379> keys name
1) "name"
127.0.0.1:6379> keys na
(empty array)
127.0.0.1:6379> keys na*
1) "name"
127.0.0.1:6379> keys na?e
1) "name"
127.0.0.1:6379> keys nam[ed]
1) "name"
127.0.0.1:6379> type name
string

127.0.0.1:6379> type age
string
127.0.0.1:6379> exists name
(integer) 1
127.0.0.1:6379> keys 1
1) "1"

Renaming and deleting

127.0.0.1:6379> rename 1 key1
OK
127.0.0.1:6379> del key1
(integer) 1
127.0.0.1:6379> keys *
1) "keyname"
2) "name"
3) "age"
4) "\xe5\x90\x8d\xe5\xad\x97"
127.0.0.1:6379> flushdb
OK
127.0.0.1:6379> keys *
(empty array)
Clear all databases
127.0.0.1:6379> flushall

Appending and substring operations

127.0.0.1:6379> set name "weejar zhang"
OK
127.0.0.1:6379> get name
"weejar zhang"
127.0.0.1:6379> append name " anbob.com"
(integer) 22
127.0.0.1:6379> get name
"weejar zhang anbob.com"
127.0.0.1:6379> getrange name 13 17
"anbob"
127.0.0.1:6379> setrange name 13 "blog "
(integer) 22
127.0.0.1:6379> get name
"weejar zhang blog .com"


GETRANGE is a rename of the original SUBSTR; the Python redis client still provides both.

Toggling letter case with SETBIT

127.0.0.1:6379> set chr h
OK
127.0.0.1:6379> get chr
"h"
127.0.0.1:6379> setbit chr 2 0
(integer) 1
127.0.0.1:6379> get chr
"H"
127.0.0.1:6379> setbit chr 2 1
(integer) 0
127.0.0.1:6379> get chr
"h"

Key lifetime

127.0.0.1:6379> get name
"weejar zhang blog .com"
127.0.0.1:6379> ttl name
(integer) -1
127.0.0.1:6379> expire name 10
(integer) 1
127.0.0.1:6379> ttl name
(integer) 8
127.0.0.1:6379> ttl name
(integer) 7
127.0.0.1:6379> ttl name
(integer) 6
127.0.0.1:6379> ttl name
(integer) 5
127.0.0.1:6379> ttl name
(integer) -2
127.0.0.1:6379> get name
(nil)
127.0.0.1:6379> set name "anbob.com" EX 10
OK
127.0.0.1:6379> ttl name
(integer) 9
127.0.0.1:6379> ttl name
(integer) 7
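The TTL return codes above (-1 for a key without an expiry, -2 for a missing or already expired key) can be modeled with a tiny in-memory store. This is an illustrative sketch of the observable semantics, not how Redis implements expiry internally:

```python
import time

class ExpiringStore:
    """Toy model of Redis key expiry: TTL -1 = no expiry, -2 = missing/expired key."""

    def __init__(self):
        self.data = {}      # key -> value
        self.expires = {}   # key -> absolute expiry deadline (epoch seconds)

    def set(self, key, value, ex=None):
        self.data[key] = value
        if ex is not None:
            self.expires[key] = time.time() + ex
        else:
            self.expires.pop(key, None)   # a plain SET clears any previous TTL

    def ttl(self, key):
        self._purge(key)
        if key not in self.data:
            return -2
        if key not in self.expires:
            return -1
        return int(round(self.expires[key] - time.time()))

    def _purge(self, key):
        # lazily drop the key once its deadline has passed
        if key in self.expires and self.expires[key] <= time.time():
            self.data.pop(key, None)
            self.expires.pop(key, None)

s = ExpiringStore()
s.set('name', 'weejar zhang blog .com')
print(s.ttl('name'))   # -1: key exists, no expiry set
s.set('name', 'anbob.com', ex=10)
print(s.ttl('name'))   # 10
```

Note that `SET key value EX 10` above also resets the countdown on every call, just as `set` does here.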

Making a key permanent again

127.0.0.1:6379> persist name
(integer) 1
127.0.0.1:6379> ttl name
(integer) -1

Random keys

127.0.0.1:6379> set age 30
OK
127.0.0.1:6379> keys *
1) "age"
2) "name"
127.0.0.1:6379> randomkey
"age"
127.0.0.1:6379> randomkey
"age"
127.0.0.1:6379> randomkey
"age"
127.0.0.1:6379> randomkey
"name"
127.0.0.1:6379> randomkey
"name"

Get and update in one step

127.0.0.1:6379> set next 30
OK
127.0.0.1:6379> get next
"30"
127.0.0.1:6379> getset next 31
"30"
127.0.0.1:6379> get next
"31"

Databases

127.0.0.1:6379> config get databases
1) "databases"
2) "16"
127.0.0.1:6379> info keyspace
# Keyspace
db0:keys=2,expires=0,avg_ttl=0

Moving a key to another database

127.0.0.1:6379> move age 2
(integer) 1
127.0.0.1:6379> keys *
1) "name"
127.0.0.1:6379> select 2
OK
127.0.0.1:6379[2]> keys *
1) "age"
127.0.0.1:6379[2]> info keyspace
# Keyspace
db0:keys=1,expires=0,avg_ttl=0
db2:keys=1,expires=0,avg_ttl=0

Tip:
Redis uses DB numbers to provide something like the schema concept in a relational database. Databases with different DB numbers are isolated from each other, but for now a database can only be identified by a number.

redis-cli -n dbnumber
redis://127.0.0.1:6379/dbnumber

Calling from Python

[root@MiWiFi-R2100-srv ~]# python -m pip install redis
Collecting redis
Downloading https://files.pythonhosted.org/packages/a7/7c/24fb0511df653cf1a5d938d8f5d19802a88cef255706fdda242ff97e91b7/redis-3.5.3-py2.py3-none-any.whl (72kB)
100% |████████████████████████████████| 81kB 13kB/s
Installing collected packages: redis

[root@MiWiFi-R2100-srv ~]# python
Python 2.7.5 (default, Nov 20 2015, 02:00:19)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> import redis
>>> client=redis.Redis()
>>> client.get('name')
'weejar zhang'
>>> client.get('age')
'33'
>>> client.get('名字')
'\xe5\xbc\xa0\xe7\xbb\xb4\xe7\x85\xa7'
>>> client.get('名字').decode

>>> print(client.keys())
['keyname', 'name', 'age', '\xe5\x90\x8d\xe5\xad\x97']
>>> for key in client.keys():
... print(key.decode())
...
keyname
name
age
Traceback (most recent call last):
File "", line 2, in
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 0: ordinal not in range(128)

>>> import sys
>>> reload(sys)
<module 'sys' (built-in)>
>>> sys.setdefaultencoding('utf8')
>>> for key in client.keys():
... print(key.decode())
...
keyname
name
age
名字

>>> for key in client.keys():
... print(client.get(key))
...
valuename
weejar zhang
33
张维照


Redis Learning 03: Hashes


The previous post covered the String type. Redis keeps its data in memory, so lookups are very fast; strings suit small-scale records such as page views, votes, or article clicks. Once the volume passes the million level, a flat string-per-record mapping wastes a great deal of memory, and Redis recommends another structure: the Hash. For the same volume of data a hash consumes roughly a quarter of the memory of strings, thanks to a compressed encoding, while lookups remain fast. Also note that when Redis holds a large number of keys, KEYS * can hang the server, so avoid that command unless you know the keyspace is small. Below is a simple tour of the hash.

A Hash implements a key-to-value mapping similar to a string, except the value is itself a collection of field-value pairs; a field leads quickly to its value, and lookup time is the same no matter how many pairs the hash holds. Python's dict is an implementation of a hash table. A Redis hash can store 2^32 - 1 (about 4.3 billion) field-value pairs.

A hash maps string fields to values: one key holds multiple fields, and each field holds one value. Storing an object as a hash saves memory compared with storing every attribute as a separate string key. A newly created hash object initially uses a zipmap (also called a small hash). A zipmap is not a real hash table, but it avoids much of the per-entry metadata overhead of a full hash implementation, even though its add, delete, and lookup operations are all O(n).

This is because Redis hash objects have two encodings:
1. ziplist (zipmap before 2.6)
2. hashtable

A hash object uses the ziplist encoding when both of the following hold:
every key and value string in the hash is shorter than 64 bytes;
the hash holds fewer than 512 field-value pairs;

A hash that cannot satisfy both conditions uses the hashtable encoding. Hashes are strongly recommended when optimizing Redis memory usage.
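The two thresholds correspond to the config parameters hash-max-ziplist-entries (512) and hash-max-ziplist-value (64). A sketch of the decision, following the rule as stated in the text (whether Redis compares with < or <= differs by version):

```python
def hash_encoding(pairs, max_entries=512, max_value=64):
    """Pick the encoding Redis would choose for a hash: ziplist only when the
    hash is small and every field and value string is short."""
    small = len(pairs) < max_entries and all(
        len(str(f)) < max_value and len(str(v)) < max_value
        for f, v in pairs.items()
    )
    return 'ziplist' if small else 'hashtable'

print(hash_encoding({'name': 'Jack', 'age': '30'}))  # ziplist
print(hash_encoding({'note': 'x' * 100}))            # hashtable
```

On a live server, `OBJECT ENCODING keyname` reports which encoding a hash actually uses.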

Hash commands and usage
Setting values
HSET, HMSET, HSETNX
HSET: set a field's value
HGET: get the value of a given field

Getting values
HGET, HMGET, HGETALL
HGET: get the value of a given field
HMGET: return the values of multiple fields
HGETALL: get all fields and their values

Increment
HINCRBY: add an integer to a field's value
HINCRBYFLOAT: add a float to a field's value

Others
HDEL: delete fields
HEXISTS: test whether a field exists
HKEYS: return all fields
HLEN: return the number of fields
HVALS: return all field values

127.0.0.1:6379> hset stu1 name Jack age 30  sex F
(integer) 3
127.0.0.1:6379> hget stu1 name
"Jack"
127.0.0.1:6379> hget stu1 age
"30"
127.0.0.1:6379> hset stu1 id 1
(integer) 1

127.0.0.1:6379> hmget stu1 name sex
1) "Jack"
2) "F"

127.0.0.1:6379> hgetall stu1
1) "name"
2) "Jack"
3) "age"
4) "30"
5) "sex"
6) "F"
7) "id"
8) "1"

127.0.0.1:6379> hdel stu1 sex
(integer) 1
127.0.0.1:6379> hmset stu2 name Tome id 2
OK

127.0.0.1:6379> hgetall stu1
1) "name"
2) "Jack"
3) "age"
4) "30"
5) "id"
6) "1"
127.0.0.1:6379> hgetall stu2
1) "name"
2) "Tome"
3) "id"
4) "2"

127.0.0.1:6379> hkeys stu1
1) "name"
2) "age"
3) "id"
127.0.0.1:6379> hkeys stu2
1) "name"
2) "id"

127.0.0.1:6379> hlen stu1
(integer) 3


127.0.0.1:6379> hvals stu1
1) "Jack"
2) "30"
3) "1"

127.0.0.1:6379> hincrby stu1 age 2
(integer) 32
127.0.0.1:6379> hget stu1 age
"32"


Example

Simulate the automatic short links generated for URLs posted on Weibo or Twitter. A few Python files:

[root@MiWiFi-R2100-srv hash]# find . -name "*.py" -print -exec cat {} \;
./base36.py
def base10_to_base36(number):
    alphabets = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    result = ""

    while number != 0 :
        number, i = divmod(number, 36)
        result = (alphabets[i] + result)

    return result or alphabets[0]
./shorty_url.py
from base36 import base10_to_base36

ID_COUNTER = "ShortyUrl::id_counter"
URL_HASH = "ShortyUrl::url_hash"

class ShortyUrl:

    def __init__(self, client):
        self.client = client

    def shorten(self, target_url):
        new_id = self.client.incr(ID_COUNTER)
        short_id = base10_to_base36(new_id)
        self.client.hset(URL_HASH, short_id, target_url)
        return short_id

    def restore(self, short_id):
        return self.client.hget(URL_HASH, short_id)
./gethkeys.py
import redis

client = redis.Redis()

fields = client.hkeys('ShortyUrl::url_hash')

for name in fields:
   print(name.decode())

./gethvals.py
import redis

client = redis.Redis()

fields = client.hvals('ShortyUrl::url_hash')

for name in fields:
   print(name.decode())
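The base10_to_base36 helper from base36.py can be exercised on its own, without a Redis server; a few sanity checks on the carry behavior:

```python
def base10_to_base36(number):
    alphabets = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    result = ""
    while number != 0:
        number, i = divmod(number, 36)
        result = alphabets[i] + result
    return result or alphabets[0]

print(base10_to_base36(0))      # 0
print(base10_to_base36(35))     # Z
print(base10_to_base36(36))     # 10
print(base10_to_base36(46655))  # ZZZ
```

This is why the interactive session below jumps from 'Z' to '10' once the INCR counter passes 35.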

Test from the Python interactive shell, calling the modules above

[root@MiWiFi-R2100-srv hash]# python
Python 2.7.5 (default, Nov 20 2015, 02:00:19)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from shorty_url import ShortyUrl
>>> from redis import Redis
>>> client = Redis(decode_responses=True)
>>> shorty_url = ShortyUrl(client)
>>> shorty_url.shorten('anbob.com')
'1'
>>> shorty_url.shorten('www.google.com/bolabolabola.html')
'2'
>>> shorty_url.restore("1")
u'anbob.com'
>>> shorty_url.restore("2")
u'www.google.com/bolabolabola.html'

>>> for i in range(30):
...   shorty_url.shorten('www.anbob.com/arch/'+str(i)+'.html')
...
'Y'
'Z'
'10'
'11'
'12'
'13'
'14'
'15'
...
...
'1M'
'1N'
'1O'
'1P'
'1Q'
'1R'


127.0.0.1:6379> hgetall ShortyUrl::url_hash

 70) "www.anbob.com/arch/1.html"
 71) "10"
 72) "www.anbob.com/arch/2.html"
 73) "11"
 74) "www.anbob.com/arch/3.html"
 75) "12"
 76) "www.anbob.com/arch/4.html"
 77) "13"
 78) "www.anbob.com/arch/5.html"
 79) "14"
 80) "www.anbob.com/arch/6.html"
...

114) "www.anbob.com/arch/23.html"
115) "1M"
116) "www.anbob.com/arch/24.html"
117) "1N"
118) "www.anbob.com/arch/25.html"

[root@MiWiFi-R2100-srv hash]# python gethkeys.py
1
2
3
4
5
6
7
...

[root@MiWiFi-R2100-srv hash]# python gethvals.py
anbob.com
www.google.com/bolabolabola.html
...
www.anbob.com/arch/0.html
www.anbob.com/arch/1.html
www.anbob.com/arch/2.html
www.anbob.com/arch/3.html
www.anbob.com/arch/4.html
www.anbob.com/arch/5.html
www.anbob.com/arch/6.html
www.anbob.com/arch/7.html
www.anbob.com/arch/8.html
...

Hashes can also store JSON

>>> import json
>>> import redis
>>> client = redis.Redis()
>>> client.hset('users','weejar', json.dumps({'id':1,'CNname':'张维照', 'blog':'www.anbob.com'}))
1L
>>> client.hset('users','admin', json.dumps({'id':2,'CNname':'ADMIN', 'blog':'www.anbob.com/login'}))
1L
>>> names = client.hkeys('users')

>>> for p in names:
...   print(p)
...
admin
weejar

>>> for p in names:
...   print(client.hget('users',p).decode())
...
{"blog": "www.anbob.com/login", "CNname": "ADMIN", "id": 2}
{"blog": "www.anbob.com", "CNname": "\u5f20\u7ef4\u7167", "id": 1}

>>> user_cnt=client.hlen('users')

>>> print('there are {%s} users ' % user_cnt )
there are {2} users

>>> if client.hexists('users','weejar'):
...    client.hdel('users','weejar')
...
1
>>> client.hkeys('users')
['admin']

— over —

Redis Learning 04: Lists


The last two posts covered Strings and Hashes. A hash groups related fields under one key; a large number of keys costs extra memory and CPU, and in that respect hashes beat strings, although strings are more flexible for operations such as appending, partial value updates, and key expiry. Each structure is built for specific scenarios. This post looks at another data structure, the List: as the name suggests, a queue extending to the left and right, a structure that stores elements in order.

Common operations:

Push
LPUSH: push elements onto the left end of the list
RPUSH: push elements onto the right end of the list
LPUSHX, RPUSHX: push only onto a list that already exists

Pop
LPOP: pop the leftmost element
RPOP: pop the rightmost element
RPOPLPUSH: pop from the right end of one list and push onto the left end of another

Trying the commands

[root@MiWiFi-R2100-srv ~]# redis-cli
127.0.0.1:6379> lpush Order u1
(integer) 1
127.0.0.1:6379> lpush Order u2
(integer) 2
127.0.0.1:6379> lpush Order u3
(integer) 3

127.0.0.1:6379> lrange Order 0 -1
1) "u3"
2) "u2"
3) "u1"
127.0.0.1:6379> rpush Order u5
(integer) 4
127.0.0.1:6379> lrange Order 0 -1
1) "u3"
2) "u2"
3) "u1"
4) "u5"
127.0.0.1:6379> lindex Order 0
"u3"
127.0.0.1:6379> lindex Order 1
"u2"
127.0.0.1:6379> rpush Order 9
(integer) 5
127.0.0.1:6379> rpush Order u1
(integer) 6
127.0.0.1:6379> rpush Order u9
(integer) 7
127.0.0.1:6379> lrange Order 0 -1
1) "u3"
2) "u2"
3) "u1"
4) "u5"
5) "9"
6) "u1"
7) "u9"
127.0.0.1:6379> rpushx Order u9
(integer) 8
127.0.0.1:6379> rpushx Order u10
(integer) 9
127.0.0.1:6379> lrange Order 0 -1
1) "u3"
2) "u2"
3) "u1"
4) "u5"
5) "9"
6) "u1"
7) "u9"
8) "u9"
9) "u10"
127.0.0.1:6379> rpushx Order u10
(integer) 10
127.0.0.1:6379> lrange Order 0 -1
 1) "u3"
 2) "u2"
 3) "u1"
 4) "u5"
 5) "9"
 6) "u1"
 7) "u9"
 8) "u9"
 9) "u10"
10) "u10"
127.0.0.1:6379> rpushx Order u11
(integer) 11
127.0.0.1:6379> lrange Order 0 -1
 1) "u3"
 2) "u2"
 3) "u1"
 4) "u5"
 5) "9"
 6) "u1"
 7) "u9"
 8) "u9"
 9) "u10"
10) "u10"
11) "u11"
127.0.0.1:6379> rpushx Order1 u11
(integer) 0
127.0.0.1:6379> lrange Order1 0 -1
(empty array)
127.0.0.1:6379> lpush Order u100 u200 u300
(integer) 14
127.0.0.1:6379> lrange Order1 0 -1
(empty array)
127.0.0.1:6379> lrange Order 0 -1
 1) "u300"
 2) "u200"
 3) "u100"
 4) "u3"
 5) "u2"
 6) "u1"
 7) "u5"
 8) "9"
 9) "u1"
10) "u9"
11) "u9"
12) "u10"
13) "u10"
14) "u11"
127.0.0.1:6379> lpop u300
(nil)
127.0.0.1:6379> lpop Order
"u300"
127.0.0.1:6379> lrange Order 0 -1
 1) "u200"
 2) "u100"
 3) "u3"
 4) "u2"
 5) "u1"
 6) "u5"
 7) "9"
 8) "u1"
 9) "u9"
10) "u9"
11) "u10"
12) "u10"
13) "u11"
127.0.0.1:6379> lpop Order
"u200"
127.0.0.1:6379> lpop Order
"u100"
127.0.0.1:6379> lrange Order 0 -1
 1) "u3"
 2) "u2"
 3) "u1"
 4) "u5"
 5) "9"
 6) "u1"
 7) "u9"
 8) "u9"
 9) "u10"
10) "u10"
11) "u11"
127.0.0.1:6379> rpop Order
"u11"
127.0.0.1:6379> rpop Order
"u10"
127.0.0.1:6379> rpoplpush Order Last
"u10"
127.0.0.1:6379> rpoplpush Order Last
"u9"
127.0.0.1:6379> rpoplpush Order Last
"u9"
127.0.0.1:6379> lrange Order 0 -1
1) "u3"
2) "u2"
3) "u1"
4) "u5"
5) "9"
6) "u1"
127.0.0.1:6379> lrange Last 0 -1
1) "u9"
2) "u9"
3) "u10"
127.0.0.1:6379> llen Order
(integer) 6
127.0.0.1:6379> llen last
(integer) 0
127.0.0.1:6379> llen LAST
(integer) 0
127.0.0.1:6379> llen Last
(integer) 3

Use cases

A FIFO queue, for example queuing SMS messages to send, or flash-sale activities

import redis
import json
client = redis.Redis()
for i in range(100):
    print('telnum {0:011d} add completed!'.format(i))
    client.rpush('phone_queue', json.dumps({'telnum': str(i).zfill(11)}))

telnum 00000000000 add completed!
2L
telnum 00000000001 add completed!
3L
telnum 00000000002 add completed!

Sending the queued messages

import redis
import json

client = redis.Redis()

def send_sms(tel):
    print('----{}----'.format(tel))
    return True  # pretend the SMS gateway call succeeded

while True:
    phone_bytes = client.lpop('phone_queue')
    if not phone_bytes:
        print('All message send completed!')
        break
    phone_info = json.loads(phone_bytes)
    retry = phone_info.get('retry', 0)
    telnum = phone_info['telnum']
    rest = send_sms(telnum)
    if rest:
        print('The telnum {} send completed...'.format(telnum))
        continue
    if retry >= 3:
        print('The telnum {} send failed. try 3 times'.format(telnum))
        continue
    # send failed: requeue with an incremented retry counter
    next_phone_info = {'telnum': telnum, 'retry': retry + 1}
    client.rpush('phone_queue', json.dumps(next_phone_info))
 

Lists can also implement paged queries

[root@MiWiFi-R2100-srv list]# vi paging.py
class Paging:

    def __init__(self, client, key):
        self.client = client
        self.key = key

    def add(self, item):
        self.client.lpush(self.key, item)

    def get_page(self, page_number, item_per_page):
        """
        """
        start_index = (page_number - 1) * item_per_page
        end_index = page_number * item_per_page - 1
        return self.client.lrange(self.key, start_index, end_index)

    def size(self):
        """
        """
        return self.client.llen(self.key)


[root@MiWiFi-R2100-srv list]# python paging.py
[root@MiWiFi-R2100-srv list]# python
Python 2.7.5 (default, Nov 20 2015, 02:00:19)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from redis import Redis
>>> from paging import Paging
>>> client = Redis(decode_responses=True)
>>> topics = Paging(client, "user-topics")
>>> for i in range(20):
...   topics.add(i)
...
>>> topics.get_page(2, 5)
[u'14', u'13', u'12', u'11', u'10']
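The index arithmetic in get_page maps a 1-based page number to the inclusive bounds LRANGE expects; since LPUSH prepends, newer items come first. A pure-Python check of the math behind the output above:

```python
def page_bounds(page_number, items_per_page):
    """Translate a 1-based page number into the inclusive bounds LRANGE expects."""
    start = (page_number - 1) * items_per_page
    end = page_number * items_per_page - 1
    return start, end

# Simulate 20 LPUSHes of 0..19: LPUSH prepends, so element i lands at index 19 - i.
items = [str(i) for i in reversed(range(20))]
start, end = page_bounds(2, 5)
print(items[start:end + 1])  # ['14', '13', '12', '11', '10']
```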

–over —

Redis Learning 05: Sets


Sets and sorted sets support much the same operations; the difference is that a sorted set keeps its members ordered while a plain set does not. The set is one of Redis's basic data structures. Like a list, a set can hold many elements, but a list is ordered left to right and allows duplicates, whereas a set is unordered and stores no duplicates. A sorted set keeps the set's no-duplicates property, but unlike a list, which orders by index, it orders each element by an attached score.

Structure    Duplicates  Ordered  Ordering basis  Use cases
List         yes         yes      index           queues, flash sales, chronological order
Set          no          no       -               tags, deduplication, social circles
Sorted set   no          yes      score           leaderboards, points

Sets have about ten commands and sorted sets twenty-odd; below is just basic usage.
Modification
SADD: add one or more members
SREM: remove members

Retrieval
SPOP: pop (get and remove) a random member
SMEMBERS: get all members
SISMEMBER: test whether a given member exists

Set operations
SINTER: intersection
SUNION: union
SDIFF: difference (members of the first set absent from the later ones)
Counting
SCARD: count the members
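Redis set commands map directly onto Python's built-in set operators, which helps when reasoning about the example below; using the same members the example adds with sadd:

```python
english = {'tom', 'jack', 'zhang'}
match = {'tom', 'jack', 'mike', 'lee'}

print(sorted(english & match))  # SINTER -> ['jack', 'tom']
print(sorted(english | match))  # SUNION -> ['jack', 'lee', 'mike', 'tom', 'zhang']
print(sorted(english - match))  # SDIFF  -> ['zhang']
print(len(english | match))     # SCARD of the union -> 5
```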

Set example

import redis

client = redis.Redis(host='127.0.0.1')
classes = ['English','Chinese','Match','Computer']

def get_students():
    students = client.sunion(*classes)
    return len(students)

def get_stu_intersection(class_a, class_b):
    students = client.sinter(class_a, class_b)
    return len(students)

def get_stu_diff(class_a, class_b):
    students = client.sdiff(class_a, class_b)
    return len(students)

def get_stu_union(class_a, class_b):
    students = client.sunion(class_a, class_b)
    return len(students)


client.sadd('English', 'tom', 'jack','zhang')
client.sadd('Match', 'tom', 'jack','mike','lee')

>>> get_students()
5
>>> get_stu_intersection('English','Match')
2
>>> get_stu_union('English','Match')
5

Tagging

>>> def make_tag_key(item):
...     return 'Tagging::' + item   # helper assumed by the class below; the key prefix is my choice
...
>>> class Tagging:
...     def __init__(self, client, item):
...         self.client = client
...         self.key = make_tag_key(item)
...     def add(self, *tags):
...         self.client.sadd(self.key, *tags)
...     def remove(self, *tags):
...         self.client.srem(self.key, *tags)
...     def is_included(self, tag):
...         return self.client.sismember(self.key, tag)
...     def get_all_tags(self):
...         return self.client.smembers(self.key)
...     def count(self):
...         return self.client.scard(self.key)
...

>>> book_tags = Tagging(client, "The C Programming Language")
>>> book_tags.add('c')
>>> book_tags.add('programming language')
>>> book_tags.add('computer book')
>>> book_tags.get_all_tags()
set(['c', 'programming language', 'computer book'])
>>> for i in book_tags.get_all_tags():
...   print i
...
c
programming language
computer book

Friend-recommendation features can also use SDIFF on sets, then client.srandmember to pick a few candidates at random.
— over —

Redis Learning 06: Sorted Sets


Today, the last of Redis's basic data structures: the sorted set. As summarized in the previous post, a sorted set is a SET that can be ordered by score; its members are unique, scores may repeat, and a score is a floating-point value. Sorted sets are therefore a natural fit for points systems and real-time leaderboards, more evidence that Redis structures were born to solve concrete problems. Straight to it.

Common commands
ZADD: add or update members
ZREM: remove the given members
ZSCORE: get a member's score
ZINCRBY: increment or decrement a member's score
ZCARD: get the size of the sorted set
ZRANK, ZREVRANK: get a member's rank within the sorted set
ZRANGE, ZREVRANGE: get the members within an index range
ZRANGEBYSCORE, ZREVRANGEBYSCORE: get the members within a score range
ZCOUNT: count the members within a score range
ZUNIONSTORE, ZINTERSTORE: union and intersection of sorted sets
ZRANGEBYLEX, ZREVRANGEBYLEX: return the members within a lexicographic range
ZLEXCOUNT: count the members within a lexicographic range
ZREMRANGEBYLEX: remove the members within a lexicographic range
ZPOPMAX, ZPOPMIN: pop the highest- and lowest-scored members
BZPOPMAX, BZPOPMIN: blocking pop of the max/min member

[root@anbob.com ~]# redis-cli
127.0.0.1:6379> zadd book_ranking  10  book1 4 book2 11 book3
(integer) 3
127.0.0.1:6379> zadd book_ranking  18 book1
(integer) 0
127.0.0.1:6379> zcard book_ranking
(integer) 3
127.0.0.1:6379> zrange book_ranking
(error) ERR wrong number of arguments for 'zrange' command
127.0.0.1:6379> zrange book_ranking 0 -1
1) "book2"
2) "book3"
3) "book1"
127.0.0.1:6379> zrange book_ranking 0 -1 withscores
1) "book2"
2) "4"
3) "book3"
4) "11"
5) "book1"
6) "18"
127.0.0.1:6379> zrange book_ranking 1 2 withscores
1) "book3"
2) "11"
3) "book1"
4) "18"
127.0.0.1:6379> zrangebyscore book_ranking 1 10
1) "book2"
127.0.0.1:6379> zrangebyscore book_ranking 10 20
1) "book3"
2) "book1"
127.0.0.1:6379> zrangebyscore book_ranking 10 20 withscores
1) "book3"
2) "11"
3) "book1"
4) "18"
127.0.0.1:6379> zrem book_ranking book1
(integer) 1
127.0.0.1:6379> zrange book_ranking 0 -1
1) "book2"
2) "book3"
127.0.0.1:6379> zadd book_ranking  1000 book9
(integer) 1
127.0.0.1:6379> zadd book_ranking  1000 book8
(integer) 1
127.0.0.1:6379> zadd book_ranking  100 book7
(integer) 1
127.0.0.1:6379> zrange book_ranking 0 -1 withscores
 1) "book2"
 2) "4"
 3) "book3"
 4) "11"
 5) "book7"
 6) "100"
 7) "book8"
 8) "1000"
 9) "book9"
10) "1000"
127.0.0.1:6379> zpopmax book_ranking
1) "book9"
2) "1000"
127.0.0.1:6379> zpopmin book_ranking 2
1) "book2"
2) "4"
3) "book3"
4) "11"
127.0.0.1:6379> zrange book_ranking 0 -1 withscores
1) "book7"
2) "100"
3) "book8"
4) "1000"
127.0.0.1:6379> zincrby book_ranking 1.1 book7
"101.09999999999999"
127.0.0.1:6379> zrange book_ranking 0 -1 withscores
1) "book7"
2) "101.09999999999999"
3) "book8"
4) "1000"
127.0.0.1:6379> zincrby book_ranking -2 book7
"99.099999999999994"

Examples

Temporarily set a password so we can connect remotely:
127.0.0.1:6379> config get requirepass
1) "requirepass"
2) ""
127.0.0.1:6379> config set requirepass redis_pwd
OK
127.0.0.1:6379> config get requirepass
1) "requirepass"
2) "redis_pwd"

-- file stu_rank.py --
import redis
client = redis.Redis(host='192.168.56.110', password='redis_pwd')
top5 = client.zrevrange('rank', 0, 4, withscores=True)   # top 5, highest score first
for index, stu in enumerate(top5):
    print(f'Student: {stu[0].decode()}, score: {stu[1]}, rank: {index + 1}')

-- add some data
127.0.0.1:6379> zadd rank 10 u1
(integer) 1
127.0.0.1:6379> zadd rank 20 u2
(integer) 1
127.0.0.1:6379> zadd rank 30 u3
(integer) 1
127.0.0.1:6379> zadd rank 40 u4
(integer) 1
127.0.0.1:6379> zadd rank 50 u5
(integer) 1
127.0.0.1:6379> zadd rank 60 u6 70 u7 80 u8 90 u9 99 u99
(integer) 5

-- get the top 5

D:\code>stu_rank.py
Student: u99, score: 99.0, rank: 1
Student: u9, score: 90.0, rank: 2
Student: u8, score: 80.0, rank: 3
Student: u7, score: 70.0, rank: 4
Student: u6, score: 60.0, rank: 5

The same structure fits point-based rankings, article likes, and schemes that use a timestamp as the score to control expiry; fetching the top N is exactly what it is built for.
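The "timestamp as score" expiry pattern can be sketched with a plain dict standing in for the sorted set; the ZADD/ZREMRANGEBYSCORE commands named in the comments are the real Redis calls this mimics, and everything else is illustrative:

```python
# "Timestamp as score" expiry pattern:
#   ZADD likes <now> <member>            when an article is liked
#   ZREMRANGEBYSCORE likes -inf <cutoff> to drop entries older than the TTL
zset = {}                        # member -> score (Unix timestamp)
now = 1_000_000.0

zset['post:1'] = now - 7200      # liked two hours ago
zset['post:2'] = now - 60        # liked one minute ago

ttl = 3600                       # keep one hour of activity
cutoff = now - ttl
for m in [m for m, s in zset.items() if s <= cutoff]:
    del zset[m]                  # what ZREMRANGEBYSCORE would remove

print(sorted(zset))              # ['post:2']
```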

Oracle 19c New Feature: the EXPDP parameter TTS_CLOSURE_CHECK for estimating Transportable Tablespace export time


TTS (Transportable Tablespace) is a common approach for large database migrations: export the table metadata from the source (expdp) → transfer the tablespace datafiles to the target → import the metadata. Later came XTTS (Cross Platform Transportable Tablespaces) and the Full Transportable Export/Import variant. By shipping the datafiles while the source keeps running normally, and repeatedly taking and applying incremental backups, the migration window becomes predictable and downtime is minimized. Copying the datafiles is online, but the metadata export phase requires the tablespaces to be set read only, so writes must stop. That raises a question: how long will the metadata export take, and are there unforeseen problems lurking? The duration depends on the number of objects in the database, among other things, and before 19c even a trial metadata export required read-only tablespaces. 19c Data Pump introduces TTS_CLOSURE_CHECK precisely for this; Oracle keeps improving its online-operations story.

With 19c, DBAs can more easily determine how long the export will take and uncover unforeseen problems that the closure check does not report. The Data Pump Export parameter TTS_CLOSURE_CHECK accepts TEST_MODE for Transportable Tablespaces / Full Transportable Export/Import, which performs a metadata-only trial export. It does not require the source tablespaces to be set read only. The resulting dump file is marked "unusable."

TTS_CLOSURE_CHECK

Purpose: specifies the level of closure check performed on a transportable export.

Syntax and description
TTS_CLOSURE_CHECK = [ ON | OFF | FULL | TEST_MODE ]
ON – perform the closure check, ensuring that the tablespace set being exported contains no references to objects outside the set
OFF – skip the closure check; the user must ensure that the tablespace set contains no references to objects outside the set
FULL – perform a full bidirectional closure check, ensuring both that the tablespace set references nothing outside the set and that nothing outside the set references the objects being exported
TEST_MODE – estimate the duration of a transportable tablespace export without setting the tablespaces read only; the dump file produced cannot be imported

Notes:
1/ ON, OFF, and FULL are mutually exclusive. TEST_MODE applies to export only.
2/ With TTS_CLOSURE_CHECK TEST_MODE the tablespaces need not be set read only, and you still get the export elapsed time; the dump file produced cannot be imported.
3/ The closure check Data Pump performs can take a long time and is sometimes unnecessary, especially when you already know the tablespace set contains no references to objects outside the set.
4/ Skipping the closure check shortens the transportable export and thus improves availability; so does being able to measure the export time while the tablespaces remain read-write.
5/ TTS_CLOSURE_CHECK can also be set through the procedure DBMS_DATAPUMP.SET_PARAMETER. The following example turns the closure check off and enables test mode:
SYS.DBMS_DATAPUMP.SET_PARAMETER(jobhdl, 'TTS_CLOSURE_CHECK', DBMS_DATAPUMP.KU$_TTS_CLOSURE_CHECK_OFF + DBMS_DATAPUMP.KU$_TTS_CLOSURE_CHECK_TEST);

[oracle@oel7db1 ~]$ ora

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Jul 7 22:57:28 2020
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0


USERNAME             INST_NAME            HOST_NAME                  I# SID   SERIAL#  VERSION    STARTED  SPID       OPID  CPID            SADDR            PADDR
-------------------- -------------------- ------------------------- --- ----- -------- ---------- -------- ---------- ----- --------------- ---------------- ----------------
SYS                  PDB1-anbob19c        oel7db1                     1 390   16816    19.0.0.0.0 20200707 4022       33    4021            0000000073434028 0000000074923E68

SQL> select STATUS,tablespace_name from dba_tablespaces;

STATUS    TABLESPACE_NAME
--------- ------------------------------
ONLINE    SYSTEM
ONLINE    SYSAUX
ONLINE    UNDOTBS1
ONLINE    TEMP
ONLINE    USERS
READ ONLY TBS1


A trial (TEST_MODE) export of the USERS tablespace metadata:

[oracle@oel7db1 ~]$ expdp userid=system/xxxxxx directory=datapump dumpfile=user_metadat.dump\
 transport_tablespaces=users TTS_CLOSURE_CHECK=test_mode

Export: Release 19.0.0.0.0 - Production on Tue Jul 7 23:01:26 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  userid=system/******** directory=datapump dumpfile=user_metadat.dump transport_tablespaces=users TTS_CLOSURE_CHECK=test_mode
Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
  /home/oracle/user_metadat.dump
Dump file set is unusable. TEST_MODE requested.
******************************************************************************
Datafiles required for transportable tablespace USERS:
  /u01/app/oracle/oradata/ANBOB19C/pdb1/users01.dbf
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Tue Jul 7 23:03:09 2020 elapsed 0 00:01:36

[oracle@oel7db1 ~]$ impdp userid=system/oracle directory=datapump dumpfile=user_metadat.dump transport_tablespaces=users TTS_CLOSURE_CHECK=test_mode
LRM-00121: 'test_mode' is not an allowable value for 'tts_closure_check'

[oracle@oel7db1 ~]$ oerr lrm 121
121, 0, "'%.*s' is not an allowable value for '%.*s'"
// *Cause: The value is not a legal value for this parameter.
// *Action: Refer to the manual for allowable values.


Oracle 12c: waits on 'library cache lock' after a password change, even with event 28401 set


In "library cache lock / row cache lock, Failed Logon Delay caused by wrong password attempts" and "Oracle 12c wait event: Failed Logon Delay" I documented databases flooded with library cache lock and row cache lock waits caused by wrong-password attempts. The root cause is the delayed failed-logon authentication feature introduced in 11g, which sleeps 3-10 seconds and, during the delay, holds the row cache lock in X mode to block concurrent failed attempts for the same user. The usual remedy is to set event 28401 to disable the delay, but that is no panacea, as this case shows. Besides delayed authentication, PASSWORD_LIFE_TIME and FAILED_LOGIN_ATTEMPTS also deserve vigilance.

Symptoms of repeated wrong-password attempts:
1. Very high library cache lock (ACCOUNT) and row cache lock (dc_users) waits
2. v$session.username is NULL, because authentication has not yet completed
3. ALTER USER commands hang on 'library cache lock' or 'row cache lock' and only succeed after several minutes
4. Once the account is locked, the waits disappear

This is a 12c R2 database. Setting event 28401 usually makes the waits disappear, but not in this case, which is recorded below.

After the password was changed:

SQL> alter user xx identified by xxx;

SQL>@ase.sql

USERNAME           SID EVENT                MACHINE    MODULE               STATUS   LAST_CALL_ET SQL_ID          WAI_SECINW ROW_WAIT_OBJ# SQLTEXT                        BS          CH# OSUSER     HEX
----------- ---------- -------------------- ---------- -------------------- -------- ------------ --------------- ---------- ------------- ------------------------------ ---------- ---- ---------- ---------
                  1212 row cache lock       xxxxxxxxxx sqlplus              ACTIVE              2                 0:2                   -1                                :             0 redis
                  2874 row cache lock       xxxxxxxxxx sqlplus              ACTIVE              2                 0:2                   -1                                :             0 redis
                   127 row cache lock       xxxxxxxxxx sqlplus              ACTIVE              3                 0:3                   -1                                :             0 cbec
                  2615 row cache lock       xxxxxxxxxx sqlplus              ACTIVE              4                 0:5                   -1                                :             0 cbec
                  1136 row cache lock       xxxxxxxxxx sqlplus              ACTIVE              6                 0:6                   -1                                :             0 weblogic
                  1099 row cache lock       xxxxxxxxxx sqlplus              ACTIVE              6                 0:6                   -1                                :             0 vsearch
                   944 row cache lock       xxxxxxxxxx sqlplus              ACTIVE              6                 0:7                   -1                                :             0 vsearch
                   974 row cache lock       xxxxxxxxxx sqlplus              ACTIVE              6                 0:7                   -1                                :             0 weblogic
                   845 row cache lock       xxxxxxxxxx sqlplus              ACTIVE              6                 0:7                   -1                                :             0 weblogic
                  1037 row cache lock       xxxxxxxxxx sqlplus              ACTIVE              6                 0:7                   -1                                :             0 vsearch
                   904 row cache lock       xxxxxxxxxx sqlplus              ACTIVE              6                 0:7                   -1                                :             0 weblogic
                  2636 row cache lock       xxxxxxxxxx python               ACTIVE              6                 0:7                   -1                                :             0 crmmon
                  1024 row cache lock       xxxxxxxxxx sqlplus              ACTIVE              6                 0:7                   -1                                :             0 vsearch
                   836 row cache lock       xxxxxxxxxx sqlplus              ACTIVE              7                 0:7                   -1                                :             0 vsearch
                  2294 row cache lock       xxxxxxxxxx sqlplus              ACTIVE              7                 0:7                   -1                                :             0 cbec
                  1660 row cache lock       xxxxxxxxxx sqlplus              ACTIVE              9                 0:9                   -1                                :             0 cbec
                   798 row cache lock       xxxxxxxxxx sqlplus              ACTIVE              9                 0:9                   -1                                :             0 vsearch
                   760 row cache lock       xxxxxxxxxx sqlplus              ACTIVE              9                 0:9                   -1                                :             0 vsearch
                   721 row cache lock       xxxxxxxxxx sqlplus              ACTIVE              9                 0:10                  -1                                :             0 vsearch
                  2809 row cache lock       xxxxxxxxxx sqlplus              ACTIVE             10                 0:10                  -1                                2:1660        0 vsearch
                   603 row cache lock       xxxxxxxxxx sqlplus              ACTIVE             10                 0:10                  -1                                2:1660        0 vsearch
                   641 row cache lock       xxxxxxxxxx sqlplus              ACTIVE             10                 0:10                  -1                                2:1660        0 vsearch
                   683 row cache lock       xxxxxxxxxx sqlplus              ACTIVE             10                 0:10                  -1                                2:1660        0 vsearch
                   555 row cache lock       xxxxxxxxxx sqlplus              ACTIVE             10                 0:11                  -1                                2:1660        0 vsearch
                   858 row cache lock       xxxxxxxxxx sqlplus              ACTIVE             11                 0:11                  -1                                2:1660        0 vsearch
                   520 row cache lock       xxxxxxxxxx sqlplus              ACTIVE             11                 0:12                  -1                                2:1660        0 vsearch
                   494 row cache lock       xxxxxxxxxx sqlplus              ACTIVE             12                 0:12                  -1                                2:1660        0 taskmon
SYS                 55 row cache lock       xxxxxxxxxx sqlplus              ACTIVE             12 5rzsj7vnvwwrq   0:12             1469878  alter user USER1 account lock 2:1660        0 oracle       2000002
                  2727 library cache lock   xxxxxxxxxx sqlplus              ACTIVE             12                 0:13                  -1                                2:1660        0 cbea
                  2094 library cache lock   xxxxxxxxxx sqlplus              ACTIVE             13                 0:13                  -1                                2:1660        0 cbea
                  2262 library cache lock   xxxxxxxxxx sqlplus              ACTIVE             15                 0:15                  -1                                2:1660        0 oracle
                  1155 library cache lock   xxxxxxxxxx sqlplus              ACTIVE             16                 0:16                  -1                                2:1660        0 oracle
                  1181 library cache lock   xxxxxxxxxx sqlplus              ACTIVE             16                 0:16                  -1                                2:1660        0 oracle
                   608 library cache lock   xxxxxxxxxx sqlplus              ACTIVE             16                 0:16                  -1                                2:1660        0 oracle
                   638 library cache lock   xxxxxxxxxx sqlplus              ACTIVE             16                 0:16                  -1                                2:1660        0 oracle
                   665 library cache lock   xxxxxxxxxx sqlplus              ACTIVE             16                 0:16                  -1                                2:1660        0 tpcint
                   707 library cache lock   xxxxxxxxxx sqlplus              ACTIVE             16                 0:16                  -1                                2:1660        0 oracle
                   747 library cache lock   xxxxxxxxxx sqlplus              ACTIVE             16                 0:16                  -1                                2:1660        0 oracle
                   784 library cache lock   xxxxxxxxxx sqlplus              ACTIVE             16                 0:16                  -1                                2:1660        0 weblogic
                   824 library cache lock   xxxxxxxxxx sqlplus              ACTIVE             16                 0:16                  -1                                2:1660        0 oracle
                   844 library cache lock   xxxxxxxxxx sqlplus              ACTIVE             16                 0:16                  -1                                2:1660        0 oracle
                   294 library cache lock   xxxxxxxxxx sqlplus              ACTIVE             16                 0:16                  -1                                2:1660        0 oracle
                   303 library cache lock   xxxxxxxxxx sqlplus              ACTIVE             16                 0:16                  -1                                2:1660        0 oracle
                   316 library cache lock   xxxxxxxxxx sqlplus              ACTIVE             16                 0:16                  -1                                2:1660        0 oracle
                   338 library cache lock   xxxxxxxxxx sqlplus              ACTIVE             16                 0:16                  -1                                2:1660        0 oracle
				   
...

SQL> alter user xxx account lock;

Left alone, the connection count would soon have been exhausted. We locked the account first, but some applications still needed it: more than 300 hosts had not yet been updated with the new password, so even the applications that had already been fixed could not use the account, and the wrong-password attempts from those hosts were rendering the whole database unusable. Was there any way out?

Trying to set event 28401

SQL> oradebug eventdump system
10949 trace name context forever, level 1

SQL> alter system set event='10949 trace name context forever, level 1:28401 trace name context forever, level 1' scope=spfile;

System altered.

SQL> alter system set events '28401 trace name context forever, level 1';

System altered.

SQL> oradebug eventdump system
28401 trace name context forever, level 1
10949 trace name context forever, level 1

We tried unlocking the account again. The session count spiked instantly, and the library cache lock and row cache lock waits came back exactly as before. While the lock-user statement was hanging on node 2, we tried locking the user from node 1, and instance 1 unexpectedly crashed and restarted. From the alert log:

2020-07-09 09:21:06.197000 +08:00
Errors in file /oracle/app/oracle/diag/rdbms/anbob/anbob1/trace/anbob1_lmd1_81560.trc  (incident=737030):
ORA-00600: internal error code, arguments: [kjxscvr:lstat], [[0x534b481e][0x4b773e70],[LB][ext 0x0,0x0][domid 0x0]], [2], [3590001], [35a0000], [], [], [], [], [], [], []
Incident details in: /oracle/app/oracle/diag/rdbms/anbob/anbob1/incident/incdir_737030/anbob1_lmd1_81560_i737030.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Errors in file /oracle/app/oracle/diag/rdbms/anbob/anbob1/trace/anbob1_lmd1_81560.trc:
ORA-00600: internal error code, arguments: [kjxscvr:lstat], [[0x534b481e][0x4b773e70],[LB][ext 0x0,0x0][domid 0x0]], [2], [3590001], [35a0000], [], [], [], [], [], [], []
Dumping diagnostic data in directory=[cdmp_20200709092107], requested by (instance=1, osid=81560 (LMD1)), summary=[incident=737030].
2020-07-09 09:21:07.950000 +08:00
Errors in file /oracle/app/oracle/diag/rdbms/anbob/anbob1/trace/anbob1_lmd1_81560.trc:
ORA-00600: internal error code, arguments: [kjxscvr:lstat], [[0x534b481e][0x4b773e70],[LB][ext 0x0,0x0][domid 0x0]], [2], [3590001], [35a0000], [], [], [], [], [], [], []
Errors in file /oracle/app/oracle/diag/rdbms/anbob/anbob1/trace/anbob1_lmd1_81560.trc  (incident=737031):
ORA-482 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /oracle/app/oracle/diag/rdbms/anbob/anbob1/incident/incdir_737031/anbob1_lmd1_81560_i737031.trc
USER (ospid: 81560): terminating the instance due to error 482
opiodr aborting process unknown ospid (4511) as a result of ORA-1092
System state dump requested by (instance=1, osid=81560 (LMD1)), summary=[abnormal instance termination].
System State dumped to trace file /oracle/app/oracle/diag/rdbms/anbob/anbob1/trace/anbob1_diag_81525_20200709092108.trc
2020-07-09 09:21:10.388000 +08:00
License high water mark = 743
2020-07-09 09:21:14.935000 +08:00
Instance terminated by USER, pid = 81560
Warning: 2 processes are still attach to shmid 6782991:
 (size: 40960 bytes, creator pid: 79260, last attach/detach pid: 81521)

----- Call Stack Trace -----
calling              call     entry                
location             type     point                
-------------------- -------- -------------------- 
              
dbgeEndDDEInvocatio  call     dbgexExplicitEndInc  
nImpl()+658                   ()                   
                                                   
kjxscvr()+28666      call     dbgeEndDDEInvocatio  
                              nImpl()                       
kjmxmpm()+16387      call     kjxscvr()            
kjmpbmsg()+6420      call     kjmxmpm()            
kjmdmain_helper()+8  call     kjmpbmsg()           
kjmdm()+79           call     kjmdmain_helper()    
 ksbrdp()+1079        call     kjmdm()              
opirip()+609         call     ksbrdp()             
opidrv()+602         call     opirip()             
                                              

This ORA-600 [kjxscvr:lstat] is Bug 29392554, which affects quite a few releases and was only fixed in the 20c base release. On node 2 the LOCK USER finally succeeded after a few minutes, and the library cache lock waits disappeared. Besides delayed password authentication, there is another trigger that behaves exactly like our manual lock: the user profile's FAILED_LOGIN_ATTEMPTS automatically locks the account, and a change to a user's ACCOUNT_STATUS also requires the library cache lock in X mode. So let's check the user's profile.

SQL> @us user1
Show database usernames from dba_users matching %USER1%

USERNAME                  DEFAULT_TABLESPACE        TEMPORARY_TABLESPACE              USER_ID CREATED           PROFILE
------------------------- ------------------------- ------------------------------ ---------- ----------------- --------------------
USER1                     NETM_DAT                  TEMP                                  352 20181205 14:31:09 PRO_APP

 
SQL> select * from dba_profiles where profile='PRO_APP';

PROFILE              RESOURCE_NAME                    RESOURCE LIMIT                                                                                                                            COM INH IMP
-------------------- -------------------------------- -------- -------------------------------------------------------------------------------------------------------------------------------- --- --- ---
PRO_APP             COMPOSITE_LIMIT                  KERNEL   DEFAULT                                                                                                                          NO  NO  NO
PRO_APP             SESSIONS_PER_USER                KERNEL   DEFAULT                                                                                                                          NO  NO  NO
PRO_APP             CPU_PER_SESSION                  KERNEL   DEFAULT                                                                                                                          NO  NO  NO
PRO_APP             CPU_PER_CALL                     KERNEL   DEFAULT                                                                                                                          NO  NO  NO
PRO_APP             LOGICAL_READS_PER_SESSION        KERNEL   DEFAULT                                                                                                                          NO  NO  NO
PRO_APP             LOGICAL_READS_PER_CALL           KERNEL   DEFAULT                                                                                                                          NO  NO  NO
PRO_APP             IDLE_TIME                        KERNEL   DEFAULT                                                                                                                          NO  NO  NO
PRO_APP             CONNECT_TIME                     KERNEL   DEFAULT                                                                                                                          NO  NO  NO
PRO_APP             PRIVATE_SGA                      KERNEL   DEFAULT                                                                                                                          NO  NO  NO
PRO_APP             FAILED_LOGIN_ATTEMPTS            PASSWORD DEFAULT                                                                                                                          NO  NO  NO
PRO_APP             PASSWORD_LIFE_TIME               PASSWORD UNLIMITED                                                                                                                        NO  NO  NO
PRO_APP             PASSWORD_REUSE_TIME              PASSWORD UNLIMITED                                                                                                                        NO  NO  NO
PRO_APP             PASSWORD_REUSE_MAX               PASSWORD 5                                                                                                                                NO  NO  NO
PRO_APP             PASSWORD_VERIFY_FUNCTION         PASSWORD VERIFY_FUNCTION                                                                                                                  NO  NO  NO
PRO_APP             PASSWORD_LOCK_TIME               PASSWORD .0004                                                                                                                            NO  NO  NO
PRO_APP             PASSWORD_GRACE_TIME              PASSWORD UNLIMITED                                                                                                                        NO  NO  NO
PRO_APP             INACTIVE_ACCOUNT_TIME            PASSWORD DEFAULT                                                                                                                          NO  NO  NO

17 rows selected.

SQL> select * from dba_profiles where profile='DEFAULT';

PROFILE              RESOURCE_NAME                    RESOURCE LIMIT                                                                                                                            COM INH IMP
-------------------- -------------------------------- -------- -------------------------------------------------------------------------------------------------------------------------------- --- --- ---
DEFAULT              COMPOSITE_LIMIT                  KERNEL   UNLIMITED                                                                                                                        NO  NO  NO
DEFAULT              SESSIONS_PER_USER                KERNEL   UNLIMITED                                                                                                                        NO  NO  NO
DEFAULT              CPU_PER_SESSION                  KERNEL   UNLIMITED                                                                                                                        NO  NO  NO
DEFAULT              CPU_PER_CALL                     KERNEL   UNLIMITED                                                                                                                        NO  NO  NO
DEFAULT              LOGICAL_READS_PER_SESSION        KERNEL   UNLIMITED                                                                                                                        NO  NO  NO
DEFAULT              LOGICAL_READS_PER_CALL           KERNEL   UNLIMITED                                                                                                                        NO  NO  NO
DEFAULT              IDLE_TIME                        KERNEL   UNLIMITED                                                                                                                        NO  NO  NO
DEFAULT              CONNECT_TIME                     KERNEL   UNLIMITED                                                                                                                        NO  NO  NO
DEFAULT              PRIVATE_SGA                      KERNEL   UNLIMITED                                                                                                                        NO  NO  NO
DEFAULT              FAILED_LOGIN_ATTEMPTS            PASSWORD 10                                                                                                                               NO  NO  NO
DEFAULT              PASSWORD_LIFE_TIME               PASSWORD UNLIMITED                                                                                                                        NO  NO  NO
DEFAULT              PASSWORD_REUSE_TIME              PASSWORD UNLIMITED                                                                                                                        NO  NO  NO
DEFAULT              PASSWORD_REUSE_MAX               PASSWORD UNLIMITED                                                                                                                        NO  NO  NO
DEFAULT              PASSWORD_VERIFY_FUNCTION         PASSWORD VERIFY_FUNCTION_11G                                                                                                              NO  NO  NO
DEFAULT              PASSWORD_LOCK_TIME               PASSWORD 1                                                                                                                                NO  NO  NO
DEFAULT              PASSWORD_GRACE_TIME              PASSWORD 7                                                                                                                                NO  NO  NO
DEFAULT              INACTIVE_ACCOUNT_TIME            PASSWORD UNLIMITED                                                                                                                        NO  NO  NO

17 rows selected.

Note:
The profile shows that after 10 consecutive failed password attempts the account is locked for 0.0004 days (about 35 seconds) and then unlocked automatically. The problem lies in exactly this automatic lock/unlock cycle. Since the wrong passwords could not be corrected on every host in a short time, we simply disabled failed-login counting for this user.

SQL> create profile pfile_for_USER1 limit FAILED_LOGIN_ATTEMPTS unlimited;
Profile created.

SQL> alter user USER1 profile pfile_for_USER1;
User altered.

SQL> alter user USER1 account unlock;
User altered.

Note:
The flood of library cache lock waits disappeared and the database load returned to normal. Because plenty of wrong-password attempts were still arriving, a small number of Failed Logon Delay waits remained.

If InfiniBand devices are present, "ifconfig hardware address can be incorrect" can be ignored


InfiniBand (IB) is a network communication standard born of scientific-computing requirements, aimed at high-performance server-side interconnects. It offers very high throughput and very low latency for computer-to-computer, server-to-storage, and storage-to-storage connections, whether direct or switched. Oracle RAC supports IB, making it a good fit for RAC cache fusion and for Oracle engineered systems such as Exadata; domestic database appliances use it heavily as well. In recent years RDMA (Remote Direct Memory Access) and fully meshed distributed storage systems built on persistent memory have also widely adopted IB.

RDMA was first implemented on InfiniBand networks: technically advanced, but expensive (only Mellanox and Intel supply complete network solutions). The industry later ported RDMA to conventional Ethernet, lowering the cost and popularizing the technology. On Ethernet, depending on how deeply the protocol stack is merged, there are two variants, iWARP and RoCE.

Beyond the price of the InfiniBand HCAs, cables, and switches, the leading vendors Mellanox and Intel are both headquartered in the US, and many users run a full Mellanox stack. With the US technology restrictions on Chinese companies in recent years, domestic self-reliance efforts mean some vendors avoid IB; for example X为's TaiShan storage line does not use it.

IB is also increasingly common in Oracle environments, for both the interconnect and storage. 11g had bugs affecting the private network over IB, and before 19.5 a Linux kernel regression in the UEK5 kernel could hang the RAC LMS process, affecting Exadata machines (fixed in Linux kernel version V4.14.35-1902.10.6).

Deploying OSWatcher to monitor host performance has become a de facto standard in Oracle environments. It supplies OS-level data when the database misbehaves by invoking ordinary OS commands, among them ifconfig; but on a server with IB interfaces, ifconfig prints the warning "Infiniband hardware address can be incorrect":

# ifconfig bondIB2
bondIB2: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 192.168.43.91  netmask 255.255.255.0  broadcast 192.168.43.255
        inet6 fe80::526b:4b03:82:8bd2  prefixlen 64  scopeid 0x20
Infiniband hardware address can be incorrect! Please read BUGS section in ifconfig(8).
        infiniband A0:00:03:00:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00  txqueuelen 1000  (InfiniBand)
        RX packets 407680500135  bytes 236690359655375 (215.2 TiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 295004994806  bytes 212627133622966 (193.3 TiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

# ip addr show bondIB2
18: bondIB2: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/infiniband a0:00:03:00:fe:80:00:00:00:00:00:00:50:6b:4b:03:00:82:8b:d2 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
    inet 192.168.43.91/24 brd 192.168.43.255 scope global bondIB2
       valid_lft forever preferred_lft forever
    inet 169.254.167.210/16 brd 169.254.255.255 scope global bondIB2:1
       valid_lft forever preferred_lft forever
    inet6 fe80::526b:4b03:82:8bd2/64 scope link 
       valid_lft forever preferred_lft forever

Cause:

ifconfig uses the ioctl access method to get address information, which limits hardware addresses to 8 bytes. Because an Infiniband address is 20 bytes, only the first 8 bytes are displayed correctly. ifconfig is obsolete; for a replacement, check ip.

In short, ifconfig fetches the address via ioctl and only the first 8 bytes survive, while an IB address is 20 bytes, so the displayed address is wrong. Use ip instead of ifconfig: it handles static IP assignment, routing, default gateways and so on just as ifconfig does. ifconfig has gone unmaintained for years and is deprecated, even though it still ships with most Linux distributions.

The ip command has largely superseded ifconfig and can perform several network-management tasks in a single command. It ships in the iproute2 package, which is preinstalled on all major Linux distributions; if it is missing, install iproute2 through your package manager. That said, ip is not yet a complete drop-in replacement for ifconfig; the command structure differs.
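To make the truncation concrete, here is a small sketch in Python using the 20-byte IB address from the `ip addr show` output above; it is purely illustrative (no ioctl calls), showing which part of the address ifconfig can actually see:

```python
# The full 20-byte IB link-layer address as reported by `ip` in the post.
full = "a0:00:03:00:fe:80:00:00:00:00:00:00:50:6b:4b:03:00:82:8b:d2"
octets = full.split(":")
assert len(octets) == 20  # InfiniBand hardware addresses are 20 bytes

# ifconfig's ioctl interface only carries the first 8 bytes; the rest of
# the 20-byte field it prints is padding, hence the warning.
truncated = octets[:8]
print("ifconfig sees :", ":".join(truncated).upper())
print("kernel address:", full)

# The per-port GUID lives in the tail that ifconfig never retrieves:
assert "50:6b:4b:03:00:82:8b:d2" not in ":".join(truncated)
```

The first 8 bytes (`A0:00:03:00:FE:80:00:00`) match the ifconfig output above, while everything that uniquely identifies the port is lost.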


Oracle 12c R2 – 19C Instance_mode read-only (not charcoal delivered in the snow, just scenery asking for a verse)


Forty years on, Oracle Database really does try to anticipate every need, steadily packing every solution into one piece of software, to the point that some complain it has grown too "fat". Have you ever considered a read/write-splitting scenario inside an Oracle database? Active Data Guard comes to mind first, but what if you skip DG and do it across nodes of a single RAC database, say one node writing while the others are read-only?

A few days ago, while doing a health check on an Oracle 19c RAC on LinuxONE, I noticed instance_mode set to read-only in the spfile, and this database is not a standby. (This is not specific to LinuxONE; it applies to RAC from Oracle 12.2 onward. A single-instance spfile does not carry this parameter by default.)

What is instance_mode?

SQL> show parameter instance_mode

PARAMETER_NAME               TYPE        VALUE
-----------------------    -----------  ----------------------------------
instance_mode               string      READ-WRITE

SQL> select inst_id,name,value from gv$spparameter where name like 'instance_mode%';

   INST_ID NAME                 VALUE
---------- -------------------- ------------------------------
         2 instance_mode        read-only
         1 instance_mode        read-only

SQL> show spparameter instance_mode

SID      NAME                          TYPE        VALUE
-------- ----------------------------- ----------- ----------------------------
*        instance_mode                 string      read-only

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

Note:
The spfile says instance_mode read-only, while the in-memory value is read-write and the database is open read-write. If the spfile setting took effect, the instance would come up read-only after every restart, a latent risk, so I tested it on a single instance (the parameter actually has restrictions, as shown below). Note that read-only here describes the instance, not the database.

SQL> alter system set instance_mode="read-only" scope=spfile;
System altered.

SQL> startup force
ORACLE instance started.

Total System Global Area 1073738888 bytes
Fixed Size                  9143432 bytes
Variable Size             792723456 bytes
Database Buffers          268435456 bytes
Redo Buffers                3436544 bytes
Database mounted.
ORA-16005: database requires recovery


SQL> recover database;
ORA-12969: invalid alter database option for read-only instance

Note:
If the only instance runs in read-only mode, the database cannot be opened. By default, without an explicit change like the one above, this does not actually happen.

Official documentation
instance_mode was introduced in 12cR2: INSTANCE_MODE = { READ-WRITE | READ-ONLY | READ-MOSTLY }, default READ-WRITE. Some instances can be switched to READ-ONLY or READ-MOSTLY, with restrictions:
1. At least one instance in the RAC must be READ-WRITE, so the situation above (database cannot open at all) cannot occur. If the only READ-WRITE instance crashes, all other instances crash too.
2. A READ-ONLY instance cannot be the first instance opened in the RAC; it must wait until a READ-WRITE instance is ready.
3. READ-ONLY and READ-MOSTLY instances cannot coexist in the same RAC.
4. A read-only instance can disable some unneeded background processes such as ARCn and CKPT.
5. A READ-ONLY instance can disable its redo thread. read more

What is family:dw_helper.instance_mode?

Back in the original RAC environment, inspect the spfile:

SQL> create pfile='/tmp/pfile.ora' from spfile;
File created.

# vi /tmp/pfile.ora
...
*.inmemory_query='DISABLE'
family:dw_helper.instance_mode='read-only'
anbob2.instance_number=2
anbob1.instance_number=1

Note:
The instance_mode entry here is not a regular instance parameter: v$spparameter.FAMILY is likewise dw_helper, a reserved column used internally by Oracle. The entry "family:dw_helper.instance_mode=read-only" is normally harmless; my guess is that it was intended for leaf nodes in the Flex Cluster architecture, since read-only instances were used by the leaf nodes of 12.2 Flex Cluster.

So this parameter can be removed, and it only exists in the spfile of RAC environments.

SQL> alter system reset instance_mode scope=spfile;
alter system reset instance_mode scope=spfile
*
ERROR at line 1:
ORA-32010: cannot find entry to delete in SPFILE


SQL> alter system reset "instance_mode" scope=spfile;
alter system reset "instance_mode" scope=spfile
*
ERROR at line 1:
ORA-32010: cannot find entry to delete in SPFILE


SQL> alter system reset instance_mode family='dw_helper' scope=spfile;

System altered.

SQL> create pfile='/tmp/pfile.ora' from spfile;
File created.
Tip:
The entry family:dw_helper.instance_mode=read-only no longer appears in the pfile.

SQL> select inst_id,name,value from gv$spparameter where name like 'instance_mode%';

   INST_ID NAME                 VALUE
---------- -------------------- ------------------------------
         2 instance_mode
         1 instance_mode

What is Flex Cluster?

Flex Cluster is a 12c new feature, along with Flex ASM. Flex Cluster introduced leaf nodes and hub nodes, breaking the old requirement that every node heartbeats with every other node over the interconnect: multiple leaf nodes connect only to their hub node, enabling application-tier scale-out. Hub nodes run the real database, VIPs and ASM instances, and must connect to Flex ASM. Flex ASM is covered in an earlier post of mine.

If you paid attention in 12cR1, creating a cluster let you choose between the pre-12c standard cluster and the flex cluster introduced in 12c. Switching between flex and standard mode went like this:

crsctl set cluster mode {standard|flex}

But from 12cR2 onward this command is gone, flex cluster is the default, and standard is no longer available. After confirming with an Oracle product manager: 12.2 defaults to flex cluster, but that does not mean leaf nodes must be deployed; nodes default to hub nodes and are used just like the old standard cluster, except that Flex ASM is mandatory. This is also the standard for later releases, and leaf nodes have been phased out: the leaf-node experiment failed, though the hub-node name survived.

Troubleshooting ORA-00600: internal error code [kdt_bseg_srch_cbk PITL1]


The following error appeared in the database alert log; the environment is Oracle 19.3.

 
ORA-00600: internal error code, arguments: [kdt_bseg_srch_cbk PITL1], [2], [], [], [], [], [], [], [], [], [], []

dbkedDefDump(): Starting incident default dumps (flags=0x2, level=3, mask=0x0)
[TOC00003]
----- Current SQL Statement for this session (sql_id=2avh059ktb0s4) -----
UPDATE /*+ index(a twhxxx$idx1) */ erpdb.twxxx a SET t$acip=:1 WHERE a.t$cwar=:2 AND a.t$item=:3 AND a.t$year=:4 AND a.t$peri=:5
[TOC00003-END]

[TOC00004]
----- Call Stack Trace -----

----- Abridged Call Stack Trace -----
kgeadse()+447<-kgerinv_internal()+44
<-kgerinv()+40<-kgeasnmierr()+146<-kdt_bseg_srch_cbk()+8107<-ktspfpblk()+618<-ktspfsrch()+788<-ktspscan_bmb()+498<-ktspgsp_main()+1271 
----- End of Abridged Call Stack Trace -----

Object id on Block? Y
 seg/obj: 0x13fdf3  csc:  0x00000005cf7345ac  itc: 21  flg: E  typ: 1 - DATA
     brn: 1  bdba: 0x5064d20 ver: 0x01 opc: 0
     inc: 0  exflg: 0
Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x0014.01e.0024f127 0x0144fdb4.fb70.11 --U- 1 fsc 0x0000.cf7346d5
0x02 0x001a.006.0007c114 0x010a62c9.dfa7.10 C--- 0 scn 0x00000005cf58f02b
0x03 0x000d.014.000401f2 0x0162c698.da3c.0d C--- 0 scn 0x00000005cf5fff24
0x04 0x0024.002.000e1a7e 0x014027c6.51c4.0d --U- 1 fsc 0x0000.cf7348b4
0x05 0x0023.017.000c9e62 0x01405311.1b2b.2b C--- 0 scn 0x00000005cf5d072b
0x06 0x0024.016.000e164e 0x01471792.51b0.10 C--- 0 scn 0x00000005cf65f6eb
0x07 0x0024.020.000e14a2 0x01465164.51a6.10 C--- 0 scn 0x00000005cf5d070c
0x08 0x0014.00f.0024efc4 0x01670707.fb6c.04 C--- 0 scn 0x00000005cf6004e0
0x09 0x0019.013.0008afc1 0x010a65d5.ef56.1c C--- 0 scn 0x00000005cf5912b4
0x0a 0x0011.003.00043488 0x01401236.d735.27 --U- 1 fsc 0x0000.cf734873
0x0b 0x0013.005.000a3814 0x0162fa7e.5077.1d C--- 0 scn 0x00000005cf5d0a40
0x0c 0x0024.01e.000e166d 0x01471739.51b0.06 C--- 0 scn 0x00000005cf65f647
0x0d 0x0014.007.0024efbc 0x01670dd2.fb6c.21 C--- 0 scn 0x00000005cf65f72c
0x0e 0x0024.021.000e1512 0x0146917b.51aa.18 C--- 0 scn 0x00000005cf5ffded
0x0f 0x0014.00e.0024f0b1 0x0144fe23.fb70.12 --U- 1 fsc 0x0000.cf7349c6
0x10 0x0024.020.000e1630 0x014717a9.51b0.0d C--- 0 scn 0x00000005cf65f6f0
0x11 0x000b.005.00043d4c 0x0165f224.e4b4.0a --U- 1 fsc 0x0000.cf736db4
0x12 0x0024.021.000e1478 0x0146512c.51a6.20 C--- 0 scn 0x00000005cf5d0613
0x13 0x0024.019.000e15b2 0x014691b4.51aa.10 C--- 0 scn 0x00000005cf5ffe8d
0x14 0x0014.003.0024f063 0x01670d0e.fb6c.14 C--- 0 scn 0x00000005cf654ebb
0x15 0x0014.01e.0024f12b 0x0144ff85.fb70.05 --U- 1 fsc 0x0000.cf736b47
bdba: 0x0391194d
data_block_dump,data header at 0x971f15822c
===============
tsiz: 0x7dd0
hsiz: 0x588
pbl: 0x971f15822c
76543210
flag=-0------
ntab=2
nrow=686
frre=-1
fsbo=0x588
fseo=0x82b
avsp=0x854
tosp=0x854
r0_9ir2=0x0
mec_kdbh9ir2=0x23
76543210
shcf_kdbh9ir2=----------
76543210
flag_9ir2=0-R-LNOC Archive compression: N
fcls_9ir2[0]={ }
perm_9ir2[13]={ 10 12 7 9 11 0 1 2 8 3 4 5 6 }
0x24:pti[0] nrow=108 offs=0
0x28:pti[1] nrow=578 offs=108
0x2c:pri[0] offs=0x7aa5
0x2e:pri[1] offs=0x7a0c

tab 0, row 0, @0x7aa5
tl: 5 fb: --H-FL-- lb: 0x0  cc: 10
col  0: [ 1]  80
col  1: [ 1]  80
col  2: [ 1]  80
col  3: [ 1]  80
col  4: [ 1]  80
col  5: [ 1]  80
col  6: [ 1]  80
col  7: [ 3]  c2 15 15
col  8: [ 1]  80
col  9: [ 2]  c1 05
bindmp: 00 54 0a 17 42

Note:
The block has 21 ITLs, yet it still has some free space (avsp = tosp = 0x854: available space = total space = 2132 bytes); the ITL limit has not been reached and the block is not full.
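The space figures come straight from the hex fields in the block dump above; a quick arithmetic check (plain Python, no Oracle involved):

```python
# Decode the space fields from the block dump (hex strings -> bytes).
tsiz = int("7dd0", 16)   # tsiz: total data area size
avsp = int("854", 16)    # avsp: available space
tosp = int("854", 16)    # tosp: total space once pending txns commit
itl_count = 21           # ITL slots listed in the dump

print(f"total data area: {tsiz} bytes")   # 32208
print(f"free space     : {avsp} bytes")   # 2132
assert tsiz == 32208 and avsp == 2132 == tosp

# 21 ITLs with free space still available: the ITL search should have
# succeeded, which is what makes the ORA-600 consistent with a bug.
```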

ORA-00600 [PITL1]
ORA-00600 [kdt_bseg_srch_cbk PITL1]
ORA-00700: soft internal error, arguments: [kgegpa:parameter corruption]

Sometimes ORA-700 accompanies it. ORA-700 is a so-called "soft" assert: it fires when the caller wants to record that something unexpected happened, but the failure is not fatal to the process or instance and execution can continue. It was introduced in 12c and absorbed some informational ORA-600 messages, leaving ORA-600 for more critical problems.

The closest match on MOS is Bug 29782211: if a table that is OLTP compressed, or has been OLTP compressed at some point in its lifetime, generates any of the errors listed above, then this bug may be the cause.

kdt_bseg_srch_cbk
==> kernel data table insert check for uncommitted space

To resolve this: apply the fix for bug 29782211 to the RDBMS, or disable compression and reorganize the data in the table (note that DDL disabling compression only affects future data changes). Oracle also recommends increasing PCTFREE during the table reorganization; since the failing statement is an UPDATE, recreating the table with a larger PCTFREE leaves more room for ITLs.

If the current table definition has compression disabled, use dbms_compression.get_compression_type to check whether the existing data is still compressed.

The dump information can be used to construct a ROWID. Also, keywords such as "Compression level: 01 (Query Low)" do not appear in the dump trace, so it is unclear whether the block has compression enabled; this case was resolved by installing a one-off patch.

Jonathan Lewis, researching row migration, noted that every row migrated into a block adds one ITL, and a migrated row has no row header: its flag byte (fb) is "--FL--", without the 'H'; that hit another bug, 2420831.1.

Troubleshooting ORA-600 [KKZGPKORID] impdp from 11G to 19C


Importing a locally generated dump file from Oracle 11.2.0.4 into Oracle 19.6 with impdp raised ORA-600 [KKZGPKORID]:

...
Processing object type SCHEMA_EXPORT/MATERIALIZED_VIEW
ORA-39014: One or more workers have prematurely exited.
ORA-39029: worker 1 with process name "DW00" prematurely terminated
ORA-00600: internal error code, arguments: [KKZGPKORID], [0], [], [], [], [], [], [], [], [], [], []

KKZG ==> support for snapshots, i.e. Materialized View validation and operation. The error is materialized-view related, and it was indeed raised while processing SCHEMA_EXPORT/MATERIALIZED_VIEW.

Check whether any materialized views in the source database are INVALID, fix them and re-export, or exclude materialized views during import with EXCLUDE=MATERIALIZED_VIEW.

Troubleshooting dbms_sqltune ORA-04068 ORA-04065 ORA-06508 ORA-06512 after a forced crash recovery


A few days ago a database lost its SYSAUX and some application tablespace datafiles. After forcing the database open with unconventional recovery, dbms_sqltune could no longer create SQL profiles. The problem stems from the creation order of the objects involved, or from some of them being rebuilt; the errors are shown below. Here I reproduce the problem and share the troubleshooting approach.

ORA-04068: existing state of packages has been discarded
ORA-04065: not executed, altered or dropped stored procedure "SYS.DBMS_SQLTUNE_INTERNAL"
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_SQLTUNE_INTERNAL"
ORA-06512: at "SYS.DBMS_SQLTUNE", line 6759
ORA-06512: at "SYS.DBMS_SQLTUNE", line 6729
ORA-06512: at line 6

1. Reproducing the problem

SQL> @o dbms_sqltune

owner                     object_name                    object_type          status           OID      D_OID CREATED             LAST_DDL_TIME
------------------------- ------------------------------ -------------------- --------- ---------- ---------- ------------------- -------------------
SYS                       DBMS_SQLTUNE                   PACKAGE              VALID          13804            2019-04-17 01:03:55 2020-03-20 05:50:42
SYS                       DBMS_SQLTUNE                   PACKAGE BODY         VALID          19191            2019-04-17 01:11:27 2019-04-17 01:11:27
SYS                       DBMS_SQLTUNE_INTERNAL          PACKAGE              VALID          17064            2019-04-17 01:07:16 2019-04-17 01:07:16
SYS                       DBMS_SQLTUNE_INTERNAL          PACKAGE BODY         VALID          19188            2019-04-17 01:11:25 2019-04-17 01:11:25

Relevant dictionary columns:
obj$
  ctime         date not null,                       /* object creation time */
  mtime         date not null,                      /* DDL modification time */
  stime         date not null,          /* specification timestamp (version) */
  status        number not null,            /* status of object (see KQD.H): */

dependency$                                 /* dependency table */
d_obj#        number not null,                  /* dependent object number */
  d_timestamp   date not null,   /* dependent object specification timestamp */
  order#        number not null,                             /* order number */
  p_obj#        number not null,                     /* parent object number */
  p_timestamp   date not null,      /* parent object specification timestamp */
  d_owner#      number,                           /*  dependent owner number */
  property      number not null,                   /* 0x01 = HARD dependency */
                                                   /* 0x02 = REF  dependency */
                                          /* 0x04 = FINER GRAINED dependency */
  d_attrs       raw("M_CSIZ"), /* Finer grain attr. numbers if finer grained */
  d_reason      raw("M_CSIZ"))  /* Reason mask of attrs causing invalidation */

SQL>          SELECT
              do.obj# d_obj,
              do.name d_name,
              do.type# d_type,
              po.obj# p_obj,
              po.name p_name,
              to_char(p_timestamp,'DD-MON-YYYY HH24:MI:SS') "P_Timestamp",
              to_char(po.stime ,'DD-MON-YYYY HH24:MI:SS') "STIME", po.ctime,po.mtime,
              decode(sign(po.stime-p_timestamp),0,'SAME','*DIFFER*') X
         FROM sys.obj$ do, sys.dependency$ d, sys.obj$ po
         WHERE P_OBJ#=po.obj#(+)
         AND D_OBJ#=do.obj#
                 and p_obj# in(select obj#  from obj$ where name like 'DBMS_SQLTUNE_INTERNAL%'  and type# in(9,11) )
         AND do.status=1 /*dependent is valid*/
         AND po.status=1 /*parent is valid*/
         --AND po.stime!=p_timestamp /*parent timestamp not match*/
         ORDER BY 2,1;
  2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17
     D_OBJ D_NAME                        D_TYPE      P_OBJ P_NAME                    P_Timestamp             STIME                   CTIME             MTIME             X
---------- ------------------------- ---------- ---------- ------------------------- ----------------------- ----------------------- ----------------- ----------------- --------
     12431 DBMS_AUTO_SQLTUNE                 11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12457 DBMS_SMB                          11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12456 DBMS_SMB_INTERNAL                 11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12455 DBMS_SPM                          11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12454 DBMS_SPM_INTERNAL                 11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12733 DBMS_SQLDIAG                      11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12730 DBMS_SQLDIAG_INTERNAL             11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12452 DBMS_SQLPA                        11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12732 DBMS_SQLTCB_INTERNAL              11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12429 DBMS_SQLTUNE                      11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12426 DBMS_SQLTUNE_INTERNAL             11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12448 DBMS_SQLTUNE_UTIL1                11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12449 DBMS_SQLTUNE_UTIL2                11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12727 DBMS_STATS                        11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12728 DBMS_STATS_INTERNAL               11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12723 DBMS_XPLAN                        11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12422 PRVT_SQLADV_INFRA                 11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12450 PRVT_SQLPA                        11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12424 PRVT_SQLPROF_INFRA                11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12423 PRVT_SQLSET_INFRA                 11      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12438 WRI$_ADV_HDM_T                    14      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12427 WRI$_ADV_SQLTUNE                  14      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME
     12428 WRI$_REPT_SQLT                    14      11256 DBMS_SQLTUNE_INTERNAL     25-JUN-2020 07:59:54    25-JUN-2020 07:59:54    20130822 04:08:04 20200625 07:59:54 SAME

23 rows selected.
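The SAME/*DIFFER* column in the query above is just a timestamp equality check between dependency$.p_timestamp (what the dependent recorded at compile time) and the parent's obj$.stime (the parent's current specification timestamp). A sketch of that logic in plain Python, using the timestamps from this case (illustrative only, not an Oracle API):

```python
from datetime import datetime

def dependency_state(p_timestamp: datetime, parent_stime: datetime) -> str:
    """Mimics decode(sign(po.stime - p_timestamp), 0, 'SAME', '*DIFFER*')."""
    return "SAME" if parent_stime == p_timestamp else "*DIFFER*"

# p_timestamp recorded in dependency$ when the dependents were compiled:
recorded = datetime(2020, 6, 25, 7, 59, 54)
# stime in obj$ after the simulated rebuild of DBMS_SQLTUNE_INTERNAL:
current = datetime(2020, 7, 30, 8, 42, 22)

assert dependency_state(recorded, recorded) == "SAME"
assert dependency_state(recorded, current) == "*DIFFER*"
# A *DIFFER* dependent is the mismatch that surfaces as ORA-04065 /
# ORA-06508 at call time, even while dba_objects still shows VALID.
```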

Break it: update obj$.stime to simulate a possible rebuild of DBMS_SQLTUNE_INTERNAL.

SQL> update obj$ set STIME=sysdate where obj#=11256;
SQL> commit;

2. Verification

SQL> create table anbob.t10000 as select object_id,object_name from dba_objects where rownum<=10000;

Table created.

SQL> select object_name from anbob.t10000 where object_id=10;

OBJECT_NAME
--------------------------------------------------------------------------------------------------------------------------------
C_USER#

SQL> @sqlt anbob.t1000

HASH_VALUE SQL_ID             CHLD# OPT_MODE   SQL_TEXT
---------- ------------- ---------- ---------- ----------------------------------------------------------------------------------------------------
 413536917 0jar8zncac4np          0 ALL_ROWS   select object_name from anbob.t10000 where object_id=10
2422626187 dh2vr3a86cpwb          0 ALL_ROWS   select  hash_value,     sql_id, -- old_hash_value,  child_number chld#, -- plan_hash_value
                                               plan_hash,  optimizer_mode opt_mode,  sql_text sqlt_sql_text from  v$sql where  lower(sql_text) like
                                               lower('%anbob.t1000%') --and hash_value != (select sql_hash_value from v$session where sid = (select
                                               sid from v$mystat where rownum = 1))

SQL> SELECT operation,options,object_name,object_alias
     FROM v$sql_plan
     WHERE sql_id='&sqlid'
     AND child_number='&cn'  2    3    4
  5  ;
Enter value for sqlid: 0jar8zncac4np
Enter value for cn: 0

OPERATION                                                    OPTIONS                                                      OBJECT_NAME                    OBJECT_ALIAS
------------------------------------------------------------ ------------------------------------------------------------ ------------------------------ -----------------------------------------------------------------
SELECT STATEMENT
TABLE ACCESS                                                 FULL                                                         T10000                         T10000@SEL$1

SQL> DECLARE
   SQL_FTEXT CLOB;
  BEGIN
  SELECT SQL_FULLTEXT INTO SQL_FTEXT FROM V$SQLAREA WHERE SQL_ID = '0jar8zncac4np';

  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    SQL_TEXT => SQL_FTEXT,
    PROFILE => SQLPROF_ATTR('INDEX(@"SEL$1" "T10000"@"SEL$1" "IDX_NOT_EXISTS")'),
    NAME => 'PROFILE_0jar8zncac4np',
    REPLACE => TRUE,
    FORCE_MATCH => TRUE
  );
  END;
  /  2    3    4    5    6    7    8    9   10   11   12   13   14
DECLARE
*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-04065: not executed, altered or dropped stored procedure "SYS.DBMS_SQLTUNE_INTERNAL"
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_SQLTUNE_INTERNAL"
ORA-06512: at "SYS.DBMS_SQLTUNE", line 6759
ORA-06512: at "SYS.DBMS_SQLTUNE", line 6729
ORA-06512: at line 6

Note that at this point all the objects are still VALID; they just cannot be executed.

3. Attempt 1
Set the _disable_fast_validate parameter dynamically at session or system level.

SQL> @pd fast_val
Show all parameters and session values from x$ksppi/x$ksppcv...

      INDX I_HEX NAME                                               VALUE                          DESCRIPTION
---------- ----- -------------------------------------------------- ------------------------------ ----------------------------------------------------------------------
      1757   6DD _disable_fast_validate                             FALSE                          disable PL/SQL fast validation

SQL> alter session set "_disable_fast_validate"=true;
Session altered.

SQL> DECLARE
   SQL_FTEXT CLOB;
  BEGIN
  SELECT SQL_FULLTEXT INTO SQL_FTEXT FROM V$SQLAREA WHERE SQL_ID = '0jar8zncac4np';

  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    SQL_TEXT => SQL_FTEXT,
    PROFILE => SQLPROF_ATTR('INDEX(@"SEL$1" "T10000"@"SEL$1" "IDX_NOT_EXISTS")'),
    NAME => 'PROFILE_0jar8zncac4np',
    REPLACE => TRUE,
    FORCE_MATCH => TRUE
  );
  END;
  /  2    3    4    5    6    7    8    9   10   11   12   13   14
DECLARE
*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-04065: not executed, altered or dropped stored procedure "SYS.DBMS_SQLTUNE_INTERNAL"
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_SQLTUNE_INTERNAL"
ORA-06512: at "SYS.DBMS_SQLTUNE", line 6759
ORA-06512: at "SYS.DBMS_SQLTUNE", line 6729
ORA-06512: at line 6

This method sometimes works; you can also try setting the parameter and restarting.

4. Attempt 2
Recompile all the affected objects.

1) Run this query to find the objects with timestamp issue

set pagesize 10000
         column d_name format a20
         column p_name format a20
         SELECT
              do.obj# d_obj,
              do.name d_name,
              do.type# d_type,
              po.obj# p_obj,
              po.name p_name,
              to_char(p_timestamp,'DD-MON-YYYY HH24:MI:SS') "P_Timestamp",
              to_char(po.stime ,'DD-MON-YYYY HH24:MI:SS') "STIME",
              decode(sign(po.stime-p_timestamp),0,'SAME','*DIFFER*') X
         FROM sys.obj$ do, sys.dependency$ d, sys.obj$ po
         WHERE P_OBJ#=po.obj#(+)
         AND D_OBJ#=do.obj#
         AND do.status=1 /*dependent is valid*/
         AND po.status=1 /*parent is valid*/
         AND po.stime!=p_timestamp /*parent timestamp not match*/
         ORDER BY 2,1;
		 
2)     d_type = 1 INDEX         alter index  rebuild;
       d_type = 2 TABLE         alter table  upgrade;
       d_type = 4 VIEW          alter view  compile;
       d_type = 5 SYNONYM       alter synonym  compile;
       d_type = 7 PROCEDUR      alter procedure  compile; 
       d_type = 8 FUNCTION      alter function  compile;
       d_type = 9 PACKAGE       alter package  compile;
       d_type = 11 PACKAGE BODY alter package  compile body;
       d_type = 12 TRIGGER      alter trigger  compile;
       d_type = 13 TYPE         alter session set events '10826 trace name context forever, level 1'; 
	                            alter type name compile
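The d_type-to-DDL mapping above can be scripted to generate the recompile statements; a minimal sketch (the object names are just examples, and type 13 may additionally need event 10826 set first, as noted above):

```python
# Map dependency$ d_type codes to the recompile DDL listed above.
RECOMPILE_DDL = {
    1:  "alter index {name} rebuild",
    2:  "alter table {name} upgrade",
    4:  "alter view {name} compile",
    5:  "alter synonym {name} compile",
    7:  "alter procedure {name} compile",
    8:  "alter function {name} compile",
    9:  "alter package {name} compile",
    11: "alter package {name} compile body",
    12: "alter trigger {name} compile",
    13: "alter type {name} compile",  # may require event 10826 first
}

def recompile_stmt(d_type: int, owner: str, name: str) -> str:
    """Build the recompile DDL for one dependent object."""
    return RECOMPILE_DDL[d_type].format(name=f"{owner}.{name}") + ";"

stmt = recompile_stmt(11, "SYS", "DBMS_SQLTUNE_INTERNAL")
assert stmt == "alter package SYS.DBMS_SQLTUNE_INTERNAL compile body;"
print(stmt)
```

Feeding it the d_obj rows from the timestamp query yields exactly the statements run in the example below.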
For example:
SQL> alter package SYS.DBMS_SQLTUNE_INTERNAL compile body;
Package body altered.

SQL> alter package SYS.DBMS_SQLTUNE_INTERNAL compile body;
Package body altered.
 
SQL> r
   SELECT
          do.obj# d_obj,
          do.name d_name,
          do.type# d_type,
          po.obj# p_obj,
          po.name p_name,
          to_char(p_timestamp,'DD-MON-YYYY HH24:MI:SS') "P_Timestamp",
          to_char(po.stime ,'DD-MON-YYYY HH24:MI:SS') "STIME", po.ctime,po.mtime,
          decode(sign(po.stime-p_timestamp),0,'SAME','*DIFFER*') X
     FROM sys.obj$ do, sys.dependency$ d, sys.obj$ po
     WHERE P_OBJ#=po.obj#(+)
     AND D_OBJ#=do.obj#
             and p_obj# in(select obj#  from obj$ where name like 'DBMS_SQLTUNE_INTERNAL%'  and type# in(9,11) )
     AND do.status=1 /*dependent is valid*/
     AND po.status=1 /*parent is valid*/
     --AND po.stime!=p_timestamp /*parent timestamp not match*/
 

     D_OBJ D_NAME                        D_TYPE      P_OBJ P_NAME                         P_Timestamp             STIME                   CTIME             MTIME             X
---------- ------------------------- ---------- ---------- ------------------------------ ----------------------- ----------------------- ----------------- ----------------- --------
     12733 DBMS_SQLDIAG                      11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12429 DBMS_SQLTUNE                      11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12730 DBMS_SQLDIAG_INTERNAL             11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12456 DBMS_SMB_INTERNAL                 11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12422 PRVT_SQLADV_INFRA                 11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12423 PRVT_SQLSET_INFRA                 11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12424 PRVT_SQLPROF_INFRA                11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12427 WRI$_ADV_SQLTUNE                  14      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12428 WRI$_REPT_SQLT                    14      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12431 DBMS_AUTO_SQLTUNE                 11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12448 DBMS_SQLTUNE_UTIL1                11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12449 DBMS_SQLTUNE_UTIL2                11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12450 PRVT_SQLPA                        11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12452 DBMS_SQLPA                        11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12454 DBMS_SPM_INTERNAL                 11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12455 DBMS_SPM                          11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12457 DBMS_SMB                          11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12426 DBMS_SQLTUNE_INTERNAL             11      11256 DBMS_SQLTUNE_INTERNAL          30-JUL-2020 08:42:22    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 SAME
     12438 WRI$_ADV_HDM_T                    14      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12723 DBMS_XPLAN                        11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12728 DBMS_STATS_INTERNAL               11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12727 DBMS_STATS                        11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*
     12732 DBMS_SQLTCB_INTERNAL              11      11256 DBMS_SQLTUNE_INTERNAL          25-JUN-2020 07:59:54    30-JUL-2020 08:42:22    20130822 04:08:04 20200625 07:59:54 *DIFFER*

23 rows selected.

SQL> DECLARE
   SQL_FTEXT CLOB;
  BEGIN
  SELECT SQL_FULLTEXT INTO SQL_FTEXT FROM V$SQLAREA WHERE SQL_ID = '0jar8zncac4np';

  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    SQL_TEXT => SQL_FTEXT,
    PROFILE => SQLPROF_ATTR('INDEX(@"SEL$1" "T10000"@"SEL$1" "IDX_NOT_EXISTS")'),
    NAME => 'PROFILE_0jar8zncac4np',
    REPLACE => TRUE,
    FORCE_MATCH => TRUE
  );
  END;
  /  2    3    4    5    6    7    8    9   10   11   12   13   14
DECLARE
*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-04065: not executed, altered or dropped stored procedure "SYS.DBMS_SQLTUNE_INTERNAL"
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_SQLTUNE_INTERNAL"
ORA-06512: at "SYS.DBMS_SQLTUNE", line 6759
ORA-06512: at "SYS.DBMS_SQLTUNE", line 6729
ORA-06512: at line 6

If it still fails, you can likewise try a restart after recompiling.

5. Attempt 3
Update the base-table data directly; proceed with great caution.

update sys.dependency$ set P_Timestamp=(select P_Timestamp from sys.dependency$ where D_OBJ#=12429 and p_obj#=11256) where   D_OBJ#=12426 and p_obj#=11256;
update sys.obj$ set stime=(select P_Timestamp from sys.dependency$ where D_OBJ#=12429 and p_obj#=11256) where  obj#=11256;
SQL> commit;

Commit complete.

SQL>  DECLARE
   SQL_FTEXT CLOB;
  BEGIN
  SELECT SQL_FULLTEXT INTO SQL_FTEXT FROM V$SQLAREA WHERE SQL_ID = '0jar8zncac4np';

  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    SQL_TEXT => SQL_FTEXT,
    PROFILE => SQLPROF_ATTR('INDEX(@"SEL$1" "T10000"@"SEL$1" "IDX_NOT_EXISTS")'),
    NAME => 'PROFILE_0jar8zncac4np',
    REPLACE => TRUE,
    FORCE_MATCH => TRUE
  );
  END;
  /   2    3    4    5    6    7    8    9   10   11   12   13   14
 DECLARE
*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-04065: not executed, altered or dropped stored procedure "SYS.DBMS_SQLTUNE_INTERNAL"
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_SQLTUNE_INTERNAL"
ORA-06512: at "SYS.DBMS_SQLTUNE", line 6759
ORA-06512: at "SYS.DBMS_SQLTUNE", line 6729
ORA-06512: at line 6


SQL> shut abort
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  626327552 bytes
Fixed Size                  2253456 bytes
Variable Size             218107248 bytes
Database Buffers          402653184 bytes
Redo Buffers                3313664 bytes
Database mounted.
Database opened.

SQL> select object_name from anbob.t10000 where object_id=10;

OBJECT_NAME
--------------------------------------------------------------------------------------------------------------------------------
C_USER#

SQL> DECLARE
   SQL_FTEXT CLOB;
  BEGIN
  SELECT SQL_FULLTEXT INTO SQL_FTEXT FROM V$SQLAREA WHERE SQL_ID = '0jar8zncac4np';

  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    SQL_TEXT => SQL_FTEXT,
    PROFILE => SQLPROF_ATTR('INDEX(@"SEL$1" "T10000"@"SEL$1" "IDX_NOT_EXISTS")'),
    NAME => 'PROFILE_0jar8zncac4np',
    REPLACE => TRUE,
    FORCE_MATCH => TRUE
  );
  END;
  /

PL/SQL procedure successfully completed.

How did we know it was the objects above? When you hit this problem, first check the database objects with hcheck.sql — the answer is in its output.
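Among other things, hcheck.sql flags dependency timestamp mismatches. The same check can be sketched manually as SYS (an illustration of the idea, not a replacement for hcheck.sql):

```sql
-- Rows returned here are dependent objects whose recorded parent
-- timestamp in dependency$ no longer matches the parent's actual
-- stime in obj$ -- the condition behind ORA-04065/ORA-06508.
SELECT do.name dependent, po.name parent,
       d.p_timestamp recorded_ts, po.stime actual_ts
FROM   sys.dependency$ d, sys.obj$ do, sys.obj$ po
WHERE  d.d_obj# = do.obj#
AND    d.p_obj# = po.obj#
AND    d.p_timestamp <> po.stime;
```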

Downgrade Grid Infrastructure 12.1.0.2 to 11.2.0.4降级后crs无法启动 No voting files found


Last month, after a set of Exadata test environments in a customer environment was downgraded from Grid Infrastructure 12.1.0.2 to 11.2.0.4, the downgrade operation itself was successful, but starting CRS reported that the voting disks could not be found, even though a check showed they existed. I later found the same case described on another site; it is reprinted and recorded here.

# /opt/app/12.1.0/grid2/crs/install/rootcrs.sh -downgrade
...
CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

Successfully downgraded Oracle Clusterware stack on this node

Since this is a two-node RAC, the downgrade command is run on the last node (the OCR node) with the -lastnode option. This removes the GI management repository and downgrades the OCR.

# /opt/app/12.1.0/grid2/crs/install/rootcrs.sh -downgrade -lastnode
CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

Successfully downgraded Oracle Clusterware stack on this node
Run '/opt/app/11.2.0/grid4/bin/crsctl start crs' on all nodes to complete downgrade

Before starting the cluster with 11.2, update the inventory by setting CRS=false for the 12.1 GI home and CRS=true for the 11.2 GI home (at this point the 12.1 GI home still has CRS=true).

$ cd /opt/app/12.1.0/grid2/oui/bin/
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/opt/app/12.1.0/grid2
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 4095 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/opt/app/11.2.0/grid4
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 4095 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

Also make sure the contents of /etc/init.d/ohasd and /etc/init.d/init.ohasd refer to the 11.2 home as ORA_CRS_HOME and contain no references to 12.1. On a few occasions the last-downgraded node (rhel6m1) still contained references to 12.1 even though the downgrade command had completed successfully, while the same files on the other node had the correct references to 11.2.

# cat /etc/init.d/ohasd | grep ORA_CRS_HOME
ORA_CRS_HOME=/opt/app/11.2.0/grid4
export ORA_CRS_HOME

# cat /etc/init.d/init.ohasd | grep ORA_CRS_HOME
ORA_CRS_HOME=/opt/app/11.2.0/grid4
export ORA_CRS_HOME
PERL="/opt/app/11.2.0/grid4/perl/bin/perl -I${ORA_CRS_HOME}/perl/lib"
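That manual inspection can be scripted. A minimal sketch (the file paths are from this environment, and the helper function name is mine):

```shell
#!/bin/sh
# Scan init scripts (or any files passed as arguments) for stale 12.1
# GI home references; prints the matching lines, or CLEAN if none.
check_crs_home_refs() {
    found=0
    for f in "$@"; do
        [ -f "$f" ] || continue        # skip files that do not exist
        grep -Hn "12\.1" "$f" && found=1
    done
    [ "$found" -eq 0 ] && echo CLEAN
}
check_crs_home_refs /etc/init.d/ohasd /etc/init.d/init.ohasd
```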

crsctl start crs

crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [11.2.0.4.0]. The cluster upgrade state is [NORMAL].

crsctl query crs softwareversion
Oracle Clusterware version on node [rhel6m1] is [11.2.0.4.0]

# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3356
Available space (kbytes) : 258764
ID : 2072206343
Device/File Name : +CLUSTER_DG

This concludes the steps for successfully downgrading GI from 12.1.0.2 to 11.2.0.4.

There could be occasions where the downgrade command completes successfully but starting CRS fails. Symptoms in this case include being unable to discover any voting disks: crsctl query css votedisk returns no voting disk information, and ocssd.log will have entries similar to:

2015-03-31 14:00:09.618: [ CSSD][898090752]clssnmvDiskVerify: Successful discovery of 0 disks
2015-03-31 14:00:09.618: [ CSSD][898090752]clssnmCompleteInitVFDiscovery: Completing initial voting file discovery
2015-03-31 14:00:09.618: [ CSSD][898090752]clssnmvFindInitialConfigs: No voting files found

At other times the symptom is a corrupted OCR, with ocssd.log entries similar to:

2015-03-26 11:33:05.633: [ CRSMAIN][3817551648] Initialing cluclu context...
[ OCRMAS][3776734976]th_calc_av:8': Failed in vsnupr. Incorrect SV stored in OCR. Key [SYSTEM.version.hostnames.] Value []
2015-03-26 11:33:06.618: [ OCRSRV][3776734976]th_upgrade:9 Shutdown CacheMaster. prev AV [186647552] new calc av [186647552] my sv [186647552]

No root cause was found for these cases; one can only assume it may be due to some of the earlier-mentioned reasons, such as having OCR backups from previous upgrades, i.e. ocr11.2.0.3.0 (though it must be said a successful downgrade was achieved while ocr11.2.0.3.0 was in the cdata directory), or the wrong binaries being referenced during the downgrade because of environment variable settings. The only option to recover from such a situation is to restore an OCR backup taken while the cluster was on 11.2.

crsctl stop crs -f # run on all nodes
crsctl start crs -excl -nocrs # run only on one node
ocrconfig -restore /opt/app/11.2.0/grid4/cdata/rhel6m-cluster/backup_20150331_170634.ocr
crsctl replace votedisk +cluster_dg
