Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "kafka-producer-network-thread | prod-schemahistory"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "kafka-producer-network-thread | 1--configs"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "kafka-producer-network-thread | connector-producer-bo-prod-0"
2024-10-21 20:08:34,250 ERROR Oracle|prod|streaming Mining session stopped due to error. [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
java.lang.OutOfMemoryError: Java heap space
2024-10-21 20:08:34,250 ERROR Oracle|prod|streaming Producer failure [io.debezium.pipeline.ErrorHandler]
java.lang.OutOfMemoryError: Java heap space
2024-10-21 20:08:34,250 INFO Oracle|prod|streaming startScn=711236385833, endScn=711893090683 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2024-10-21 20:08:34,251 WARN || [org.eclipse.jetty.util.thread.QueuedThreadPool]
java.lang.OutOfMemoryError: Java heap space
2024-10-21 20:08:34,251 ERROR || Uncaught exception in thread 'kafka-producer-network-thread | 1--offsets': [org.apache.kafka.common.utils.KafkaThread]
java.lang.OutOfMemoryError: Java heap space
2024-10-21 20:08:34,251 ERROR || Unexpected exception in Thread[KafkaBasedLog Work Thread - my_connect_statuses,5,main] [org.apache.kafka.connect.util.KafkaBasedLog]
java.lang.OutOfMemoryError: Java heap space
Exception in thread "mysql-cj-abandoned-connection-cleanup" java.lang.OutOfMemoryError: Java heap space
2024-10-21 20:08:34,251 INFO Oracle|prod|streaming Streaming metrics dump: OracleStreamingChangeEventSourceMetrics{currentScn=711893090683, oldestScn=711257157058, committedScn=711261223591, offsetScn=711236385833, logMinerQueryCount=82, totalProcessedRows=133695269, totalCapturedDmlCount=251919, totalDurationOfFetchingQuery=PT5M39.365117S, lastCapturedDmlCount=4387, lastDurationOfFetchingQuery=PT3.826597S, maxCapturedDmlCount=4387, maxDurationOfFetchingQuery=PT5.542703S, totalBatchProcessingDuration=PT1H20.934427S, lastBatchProcessingDuration=PT1M3.336182S, maxBatchProcessingThroughput=104, currentLogFileName=[+CCBDG/ccb/onlinelog/group_3.263.1172292485], minLogFilesMined=1, maxLogFilesMined=1, redoLogStatus=[+CCBDG/ccb/onlinelog/group_5.279.1172292591 | ACTIVE, +CCBDG/ccb/onlinelog/group_6.297.1172292639 | ACTIVE, +CCBDG/ccb/onlinelog/group_4.294.1172292541 | ACTIVE, +CCBDG/ccb/onlinelog/group_3.263.1172292485 | CURRENT, +CCBDG/ccb/onlinelog/group_2.299.1172292765 | INACTIVE, +CCBDG/ccb/onlinelog/group_1.298.1172292721 | INACTIVE], switchCounter=40, batchSize=100000, millisecondToSleepBetweenMiningQuery=0, keepTransactionsDuration=PT0S, networkConnectionProblemsCounter=0, batchSizeDefault=20000, batchSizeMin=1000, batchSizeMax=100000, sleepTimeDefault=1000, sleepTimeMin=0, sleepTimeMax=3000, sleepTimeIncrement=200, totalParseTime=PT9.300416S, totalStartLogMiningSessionDuration=PT1M1.086081S, lastStartLogMiningSessionDuration=PT0.002224S, maxStartLogMiningSessionDuration=PT32.951629S, totalProcessTime=PT1H20.934427S, minBatchProcessTime=PT16.230647S, maxBatchProcessTime=PT1M41.450337S, totalResultSetNextTime=PT4H25M24.063849S, lagFromTheSourceDuration=PT39H3M16.735773S, maxLagFromTheSourceDuration=PT42H16M27.284395S, minLagFromTheSourceDuration=PT39H1M26.066299S, lastCommitDuration=PT0.000002S, maxCommitDuration=PT2.71567S, activeTransactions=10, rolledBackTransactions=9948736, oversizedTransactions=0, committedTransactions=214169, abandonedTransactionIds={}, rolledbackTransactionIds={5c0419004c575f00=5c0419004c575f00, 4e040600a2c45800=4e040600a2c45800, f3012300f4663900=f3012300f4663900, 6f033800c3c17b00=6f033800c3c17b00, 14020b00578b3800=14020b00578b3800, 750432009f735200=750432009f735200, 1c043b00a4346600=1c043b00a4346600, 1e021900c1d44000=1e021900c1d44000, 71033a005dd06e00=71033a005dd06e00, 4f040600e4cb5a00=4f040600e4cb5a00}, registeredDmlCount=689815, committedDmlCount=296, errorCount=1, warningCount=0, scnFreezeCount=0, unparsableDdlCount=0, miningSessionUserGlobalAreaMemory=9314072, miningSessionUserGlobalAreaMaxMemory=12301976, miningSessionProcessGlobalAreaMemory=72180504, miningSessionProcessGlobalAreaMaxMemory=85418776} [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2024-10-21 20:08:34,251 INFO Oracle|prod|streaming Offsets: OracleOffsetContext [scn=711236385833, commit_scn=["711261223591:1:c40003007cd54b00"]] [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2024-10-21 20:08:34,253 INFO Oracle|prod|streaming Finished streaming [io.debezium.pipeline.ChangeEventSourceCoordinator]
2024-10-21 20:08:34,251 ERROR || Unexpected exception in Thread[KafkaBasedLog Work Thread - my_connect_offsets,5,main] [org.apache.kafka.connect.util.KafkaBasedLog]
java.lang.OutOfMemoryError: Java heap space
2024-10-21 20:08:34,253 INFO Oracle|prod|streaming Connected metrics set to 'false' [io.debezium.pipeline.ChangeEventSourceCoordinator]
Exception in thread "KafkaBasedLog Work Thread - my_connect_configs" java.lang.OutOfMemoryError: Java heap space
2024-10-21 20:08:34,253 INFO || [Worker clientId=connect-1, groupId=1] Group coordinator kafka:29092 (id: 2147483646 rack: null) is unavailable or invalid due to cause: session timed out without receiving a heartbeat response. isDisconnected: false. Rediscovery will be attempted. [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2024-10-21 20:08:34,254 INFO || [Worker clientId=connect-1, groupId=1] Requesting disconnect from last known coordinator kafka:29092 (id: 2147483646 rack: null) [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2024-10-21 20:08:34,254 INFO || [Worker clientId=connect-1, groupId=1] Client requested disconnect from node 2147483646 [org.apache.kafka.clients.NetworkClient]
2024-10-21 20:08:34,455 INFO || [Worker clientId=connect-1, groupId=1] Discovered group coordinator kafka:29092 (id: 2147483646 rack: null) [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2024-10-21 20:08:34,456 WARN || [Worker clientId=connect-1, groupId=1] consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records. [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2024-10-21 20:08:34,456 INFO || [Worker clientId=connect-1, groupId=1] Member connect-1-7b7e1ab1-a6ba-4db3-842f-f78343e7b5a7 sending LeaveGroup request to coordinator kafka:29092 (id: 2147483646 rack: null) due to consumer poll timeout has expired. [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2024-10-21 20:08:34,456 INFO || [Worker clientId=connect-1, groupId=1] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
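For this kind of failure (LogMiner streaming dying with "Java heap space" while the metrics dump above shows batchSize=100000 and batchSizeMax=100000), the usual knobs are the Connect worker's JVM heap and the Debezium Oracle connector's batch/queue sizing. Below is a minimal sketch, assuming the worker is launched through the stock Kafka scripts (which read KAFKA_HEAP_OPTS) and the connector is configured through its normal properties; every size shown is an illustrative placeholder, not a value taken from this log.

  # Worker JVM heap; a placeholder size, to be tuned for the actual host
  export KAFKA_HEAP_OPTS="-Xms4g -Xmx8g"

  # Debezium Oracle connector properties (illustrative values; the dump above shows the
  # defaults batchSizeDefault=20000, batchSizeMin=1000, batchSizeMax=100000)
  log.mining.batch.size.max=50000
  max.batch.size=1024
  max.queue.size=4096
  max.queue.size.in.bytes=268435456

The later WARN about the consumer poll timeout points at max.poll.interval.ms and max.poll.records, but in this trace it most likely follows from the worker stalling on the OutOfMemoryError rather than being an independent problem, so memory sizing is the first thing to address.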