TEMP recreate

Corrupt Block Found
TSN = 2, TSNAME = TEMP
RFN = 5, BLK = 2168258, RDBA = 23139778
OBJN = -1, OBJD = 23139776, OBJECT = , SUBOBJECT =
SEGMENT OWNER = , SEGMENT TYPE =

Dumping diagnostic data in directory=[cdmp_], requested by (instance=1, osid=), summary=[incident=*].
Corrupt Block Found
TSN = 2, TSNAME = TEMP
RFN = 5, BLK = 2168258, RDBA = 23139778
OBJN = -1, OBJD = 23139776, OBJECT = , SUBOBJECT =
SEGMENT OWNER = , SEGMENT TYPE =

we create a new temporary tablespace:

CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '/data/temp201.dbf' SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE 65500M TABLESPACE GROUP '' EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

we set it as the default:

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP2;
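
To confirm the switch, the new default can be checked with, for example:

SELECT property_value
FROM   database_properties
WHERE  property_name = 'DEFAULT_TEMP_TABLESPACE';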

we kill the sessions that still hold data in the old temp (or restart the DB); only then can the old temp tablespace be dropped

SELECT b.tablespace, b.segfile#, b.segblk#, b.blocks, a.sid, a.serial#,
       a.username, a.osuser, a.status,
       'ALTER SYSTEM KILL SESSION ''' || a.sid || ',' || a.serial# || ''' IMMEDIATE;'
FROM v$session a, v$sort_usage b WHERE a.saddr = b.session_addr;
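
The query also builds the kill statements; once they have been run and no sessions hold the old TEMP any more, it can be dropped, along these lines (sid and serial# come from the query output):

ALTER SYSTEM KILL SESSION 'sid,serial#' IMMEDIATE;
DROP TABLESPACE TEMP INCLUDING CONTENTS AND DATAFILES;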


ORA-27125, SGA: Realm creation failed (hugepages)

— the full error:

2022-05-02T11:50:30.341692+02:00
Adjusting the default value of parameter parallel_max_servers
from 880 to 348 due to the value of parameter processes (500)
Starting ORACLE instance (normal) (OS id: 231844)
2022-05-02T11:50:30.362069+02:00
Sys-V shared memory will be used for creating SGA
2022-05-02T11:50:30.365273+02:00
Dump of system resources acquired for SHARED GLOBAL AREA (SGA)

2022-05-02T11:50:30.365469+02:00
Domain name: system.slice/oracle-ohasd.service
2022-05-02T11:50:30.365587+02:00
Per process system memlock (soft) limit = UNLIMITED
2022-05-02T11:50:30.365658+02:00
Expected per process system memlock (soft) limit to lock
instance MAX SHARED GLOBAL AREA (SGA) into memory: 3192M
2022-05-02T11:50:30.365765+02:00
Available system pagesizes:
4K, 2048K
2022-05-02T11:50:30.365868+02:00
Supported system pagesize(s):
2022-05-02T11:50:30.365920+02:00
PAGESIZE AVAILABLE_PAGES EXPECTED_PAGES ALLOCATED_PAGES ERROR(s)
2022-05-02T11:50:30.366069+02:00
2048K 16792 1596 1109 ORA-27125
2022-05-02T11:50:30.366123+02:00
Reason for not supporting certain system pagesizes:
2022-05-02T11:50:30.366193+02:00
4K – Large pagesizes only
2022-05-02T11:50:30.366275+02:00
SGA: Realm creation failed

– we run the script from Doc ID 401749.1 (preferably with all instances up and running):

https://support.oracle.com/epmos/faces/DocumentDisplay?parent=DOCUMENT&sourceId=361323.1&id=401749.1

[oracle@dm02 ~]$ vi hugepages_settings.sh
[oracle@dm02 ~]$ chmod +x hugepages_settings.sh
[oracle@dm02 ~]$ ./hugepages_settings.sh
Recommended setting: vm.nr_hugepages =
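
The recommended value then goes into the kernel parameters and the instance is restarted; a minimal sketch, assuming the script printed 1600 (the real number depends on your SGA sizes):

# as root: persist and apply the new huge page count (1600 is only an example value)
echo "vm.nr_hugepages = 1600" >> /etc/sysctl.conf
sysctl -p
grep Huge /proc/meminfo    # verify HugePages_Total / HugePages_Free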


RMAN-06094: datafile 1 must be restored

When restoring the DB to a different location, the restore itself succeeds, but trying to roll it forward by a few SCNs on 12.1.0.2 can throw:

Starting recover at 2022-04-28 10:25:30
released channel: ch1
released channel: ch2
RMAN-00571: ====================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS
RMAN-00571: ====================================
RMAN-03002: failure of recover command at 04/28/2022 10:25:31
RMAN-06094: datafile 1 must be restored
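
A sketch of the kind of forward roll described above (the SCN is only a placeholder):

run {
  set until scn 123456789;
  recover database;
}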


DB 19 – recovery catalog

Preparing the home, unpacking the installation files, patches, etc. (user: oracle)

mkdir -p /u01/app/oracle/product/19/dbhome_1
unzip /u01/install/V982063-01-19.3_DB.zip -d /u01/app/oracle/product/19/dbhome_1/
mkdir -p /u01/app/oracle/product/19/dbhome_1/patch/RU
unzip /u01/install/p33192793_190000_Linux-x86-64_patch_RU_DB_19.13.zip -d /u01/app/oracle/product/19/dbhome_1/patch/RU
rm -rf /u01/app/oracle/product/19/dbhome_1/OPatch/*
unzip /u01/install/p6880880_210000_Linux-x86-64 -d /u01/app/oracle/product/19/dbhome_1/

Installation (applying the RU) and creating the DB:

cd /u01/app/oracle/product/19/dbhome_1
./runInstaller -applyRU patch/RU/33192793
./dbca
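
After the home is patched and the instance created, the catalog itself can be set up; a minimal sketch, assuming a catalog owner rco, a tablespace rcat_ts and a TNS alias rcat (all names, paths and the password are examples):

-- in the catalog database, as SYSDBA
CREATE TABLESPACE rcat_ts DATAFILE '/u02/oradata/RCAT/rcat_ts01.dbf' SIZE 1G;
CREATE USER rco IDENTIFIED BY rco_pass DEFAULT TABLESPACE rcat_ts QUOTA UNLIMITED ON rcat_ts;
GRANT RECOVERY_CATALOG_OWNER TO rco;

-- create the catalog and register a target database
rman CATALOG rco/rco_pass@rcat
CREATE CATALOG;
rman TARGET / CATALOG rco/rco_pass@rcat
REGISTER DATABASE;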


ORA-00600: internal error code, arguments: [krccacp_badfile] (CTWR), summary=[abnormal instance termination]

the error:

Fri Jan 28 14:13:11 2022
ERROR: Unable to normalize symbol name for the following short stack (at offset 204):

ORA-00600: internal error code, arguments: [krccacp_badfile], [6964816530811], [6956618400402], [], [], [], [], [], [], [], [], []
Incident details in: /u01/diag/rdbms/baza/baza/incident/incdir_32483/baza_ctwr_14549066_i32483.trc

Fri Jan 28 14:13:13 2022
Dumping diagnostic data in directory=[cdmp_20220128141313], requested by (instance=1, osid=14549066 (CTWR)), summary=[incident=32483].

Fri Jan 28 14:13:25 2022
Errors in file /u01/diag/rdbms/baza/baza/trace/baza_ctwr_14549066.trc:
ORA-00600: internal error code, arguments: [krccacp_badfile], [6964816530811], [6956618400402], [], [], [], [], [], [], [], [], []
CTWR (ospid: 14549066): terminating the instance due to error 487
Fri Jan 28 14:13:25 2022
System state dump requested by (instance=1, osid=14549066 (CTWR)), summary=[abnormal instance termination].
System State dumped to trace file /u01/diag/rdbms/baza/baza/trace/baza_diag_22282266.trc
Dumping diagnostic data in directory=[cdmp_20220128141325], requested by (instance=1, osid=14549066 (CTWR)), summary=[abnormal instance termination].
Instance terminated by CTWR, pid = 14549066

This happens when we restore the database onto the same host as an additional copy: even though the DBID, name, etc. have been changed, only one of the databases can be open at a time, because block change tracking is enabled and both refer to the same BCT file. We check on both databases:
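
For example, the BCT status and file can be checked with:

SELECT status, filename FROM v$block_change_tracking;

and one way to avoid the clash is to point the restored copy at its own file (the path is an example):

ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/oradata/baza2/bct_baza2.chg';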


SRL log 1 needs clearing because log has not been created

After applying the incremental (DG move forward: https://ora-600.com/2021/03/err-replikacji-danych-data-guard-move-forward-czyli-aplikowanie-intrementala/) we may run into:

SRL log 1 needs clearing because log has not been created
SRL log 2 needs clearing because log has not been created
SRL log 3 needs clearing because log has not been created
SRL log 4 needs clearing because log has not been created

We stop the archive apply and clear the log groups by running:

alter database recover managed standby database cancel;
alter database clear logfile group 1;
alter database clear logfile group 2;
alter database clear logfile group 3;
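
The messages above list four SRLs, so group 4 presumably needs clearing as well, and once the groups are cleared the apply can be restarted, e.g.:

alter database clear logfile group 4;
alter database recover managed standby database disconnect from session;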


ORA-29701: unable to connect to Cluster Synchronization Service, Error 29701

Patient: Grid 19.13, DB 12.2.0.1, OL 7.9 – NODE2 freezes after startup and has to be brought up manually (following interconnect-related errors in the CRS log)

— in the NODE1 DB alert log

2021-11-04T14:26:11.977078+01:00
JIT: pid 44720 requesting full stop
2021-11-04T14:26:18.243731+01:00
JIT: pid 44720 requesting full stop
2021-11-04T14:33:53.984937+01:00
IPC Send timeout detected. Sender: ospid 20158 [oracle@NODE1 (LCK0)]
Receiver: inst 2 binc 16 ospid 18523
2021-11-04T14:33:53.994876+01:00
Communications reconfiguration: instance_number 2 by ospid 20158
2021-11-04T14:34:42.795807+01:00
Detected an inconsistent instance membership by instance 1
Evicting instance 2 from cluster
Waiting for instances to leave: 2
2021-11-04T14:34:42.950298+01:00
IPC Send timeout to 2.1 inc 20 for msg type 65521 from opid 24
2021-11-04T14:34:42.950358+01:00
IPC Send timeout to 2.1 inc 20 for msg type 65521 from opid 24

— and in the NODE2 CRS log (NODE1 only shows information about problems with NODE2, without the countdown):

2021-11-04 14:37:21.717 [OCSSD(10029)]CRS-7503: The Oracle Grid Infrastructure process ocssd observed communication issues between node NODE2 and node NODE1, interface list of local node NODE2 is 172.30.1.2:20313, interface list of remote node NODE1 is 172.30.1.1:64128.

2021-11-04 14:37:27.240 [OCSSD(10029)]CRS-1612: Network communication with node NODE1 (1) has been missing for 50% of the timeout interval. If this persists, removal of this no