2022-10-04 10:54:22.462 DEBUG [,a6e1092dd56af14e,a6e1092dd56af14e,false] 1 --- [nio-6020-exec-1] u.c.e.o.p.s.l.c.CheckLiveRemoteLog : ?????? ????????? ???????? {"status":"UP"}
2022-10-04 10:54:22.463 INFO [,a6e1092dd56af14e,a6e1092dd56af14e,false] 1 --- [nio-6020-exec-1] u.c.e.o.p.t.ThreadEndpointDto : ******************* REQUEST ***********************
2022-10-04 10:54:22.464 INFO [,a6e1092dd56af14e,a6e1092dd56af14e,false] 1 --- [nio-6020-exec-1] u.c.e.o.p.t.ThreadEndpointDto : userId 0 user null
2022-10-04 10:54:22.465 INFO [,a6e1092dd56af14e,a6e1092dd56af14e,false] 1 --- [nio-6020-exec-1] u.c.e.o.p.t.ThreadEndpointDto : IP 10.10.10.33
2022-10-04 10:54:22.465 INFO [,a6e1092dd56af14e,a6e1092dd56af14e,false] 1 --- [nio-6020-exec-1] u.c.e.o.p.t.ThreadEndpointDto : request: GET http://products-service:6020/actuator/prometheus
2022-10-04 10:54:22.465 INFO [,a6e1092dd56af14e,a6e1092dd56af14e,false] 1 --- [nio-6020-exec-1] u.c.e.o.p.t.ThreadEndpointDto : ****************************************************
2022-10-04 10:54:24.124 DEBUG [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] u.c.e.o.p.s.l.c.CheckLiveRemoteLog : ?????? ????????? ???????? {"status":"UP"}
2022-10-04 10:54:24.125 INFO [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] u.c.e.o.p.t.ThreadEndpointDto : ******************* REQUEST ***********************
2022-10-04 10:54:24.125 INFO [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] u.c.e.o.p.t.ThreadEndpointDto : userId 44 user user50@gmail.com
2022-10-04 10:54:24.125 INFO [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] u.c.e.o.p.t.ThreadEndpointDto : IP 185.237.216.13
2022-10-04 10:54:24.125 INFO [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] u.c.e.o.p.t.ThreadEndpointDto : request: POST http://10.10.10.52:6020/goods/analytic --body: {"imp_exp":["imp"],"period":{"base_year":"2021","cumulative":false,"period_type":"YEAR","selected":["2019"]},"indicators":[{"indicator_capacity":1,"indicator_measure":["kol1"],"indicator_type":"current","indicator_unit":"base_units"}],"uktz":{"selected":["25"],"select_by":0,"overall":false,"overall_selected":false},"incoterms":[],"countries_imp_exp":{"selected":[],"select_by":"none","overall":false,"overall_selected":false,"chapter":"id_2"},"countries_sending":{"selected":[],"select_by":"none","overall":false,"overall_selected":false,"chapter":"id_2"},"countries_trading":{"selected":[],"select_by":"none","overall":false,"overall_selected":false,"chapter":"id_2"},"transport":{"selected":[],"select_by":"none","overall":false,"overall_selected":false},"pagination":{"limit":25,"offset":0},"sorting":{"field":"","direction":"ASC"},"filter":{},"measures":[166]}
2022-10-04 10:54:24.125 INFO [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] u.c.e.o.p.t.ThreadEndpointDto : ****************************************************
2022-10-04 10:54:24.127 WARN [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] u.c.e.o.p.v.CheckDateIntervalValidator : ==================================================env.getProperty("demo")false
2022-10-04 10:54:24.127 WARN [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] u.c.e.o.p.v.CheckDateIntervalValidator : ==================================================env.getProperty("demo")false
2022-10-04 10:54:24.127 WARN [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] .c.e.o.p.c.p.ImportExportGoodsController : ------------------------------ GOODS_ANALYTIC
------------------------------
2022-10-04 10:54:24.127 WARN [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] u.c.e.o.p.services.product.Dynamic : __________decompositor [PeriodItem{periodString='2019', periodType=YEAR, year=2019, yearPart=0, cumulative=false, agroCumulativeMonth=false} ] ua.com.ehub.om.products.repository.spark_data.PtieDataImpl@5d04d5f5
2022-10-04 10:54:24.149 INFO [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] o.a.s.s.e.d.FileSourceStrategy : Pruning directories with: isnotnull(period#690),(period#690 = 2019)
2022-10-04 10:54:24.149 INFO [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] o.a.s.s.e.d.FileSourceStrategy : Post-Scan Filters: isnotnull(tovar#672),tovar#672 RLIKE ^(25).*$
2022-10-04 10:54:24.150 INFO [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] o.a.s.s.e.d.FileSourceStrategy : Output Data Schema: struct<tovar: string, kol1: double, kol2: double, stdol: double, stgrn: double ... 9 more fields>
2022-10-04 10:54:24.150 INFO [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] o.a.s.sql.execution.FileSourceScanExec : Pushed Filters: IsNotNull(tovar)
2022-10-04 10:54:24.155 INFO [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] o.a.s.s.e.datasources.InMemoryFileIndex : Selected 1 partitions out of 20, pruned 95.0% partitions.
2022-10-04 10:54:24.224 INFO [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] o.a.spark.storage.memory.MemoryStore : Block broadcast_2597 stored as values in memory (estimated size 229.1 KB, free 10.5 GB)
2022-10-04 10:54:24.234 INFO [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] o.a.spark.storage.memory.MemoryStore : Block broadcast_2597_piece0 stored as bytes in memory (estimated size 21.5 KB, free 10.5 GB)
2022-10-04 10:54:24.234 INFO [,,,] 1 --- [er-event-loop-4] o.apache.spark.storage.BlockManagerInfo : Added broadcast_2597_piece0 in memory on 2ad8b2e35e60:37651 (size: 21.5 KB, free: 10.5 GB)
2022-10-04 10:54:24.235 INFO [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] org.apache.spark.SparkContext : Created broadcast 2597 from collectAsList at AggregationDynamic.java:207
2022-10-04 10:54:24.235 INFO [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] o.a.s.sql.execution.FileSourceScanExec : Planning scan with bin packing, max size: 4194304 bytes, open cost is considered as scanning 4194304 bytes.
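[Note] The planner entries above describe the whole scan: the period = 2019 predicate becomes directory pruning over the Hive-style period=YYYY partitions (1 of 20 kept), only IsNotNull(tovar) is pushed into the parquet reader while the RLIKE ^(25).*$ prefix match stays a post-scan filter, and the scan is bin-packed into 4194304-byte (4 MB) splits. Below is a minimal, hypothetical Java sketch of a read that would produce this plan; the SparkSession setup and class name are assumptions, not the service's actual code (which lives in AggregationDynamic.java / PtieDataImpl).

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class PrunedScanSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("pruned-scan-sketch")   // hypothetical app name
                .master("local[*]")              // the log shows "executor driver", i.e. local mode
                // 4 MB split size and open cost, matching "Planning scan with bin packing,
                // max size: 4194304 bytes"; a ~20 MB part file then becomes the five
                // ~4 MB task ranges seen in the FileScanRDD entries below.
                .config("spark.sql.files.maxPartitionBytes", 4194304)
                .config("spark.sql.files.openCostInBytes", 4194304)
                .getOrCreate();

        Dataset<Row> trades = spark.read()
                .parquet("/opt/data_ram/trades_imp_year_2.parquet")
                // period is the partition column: this filter is applied as directory
                // pruning ("Selected 1 partitions out of 20, pruned 95.0% partitions.").
                .filter("period = 2019")
                // tovar is a data column: only IsNotNull(tovar) is pushed to the scan;
                // the prefix match runs as a post-scan filter.
                .filter("tovar rlike '^(25).*$'");

        trades.collectAsList();  // the action that starts the job logged below
    }
}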
2022-10-04 10:54:24.261 INFO [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] org.apache.spark.SparkContext : Starting job: collectAsList at AggregationDynamic.java:207
2022-10-04 10:54:24.262 INFO [,,,] 1 --- [uler-event-loop] org.apache.spark.scheduler.DAGScheduler : Registering RDD 5268 (collectAsList at AggregationDynamic.java:207)
2022-10-04 10:54:24.262 INFO [,,,] 1 --- [uler-event-loop] org.apache.spark.scheduler.DAGScheduler : Got job 975 (collectAsList at AggregationDynamic.java:207) with 200 output partitions
2022-10-04 10:54:24.262 INFO [,,,] 1 --- [uler-event-loop] org.apache.spark.scheduler.DAGScheduler : Final stage: ResultStage 1797 (collectAsList at AggregationDynamic.java:207)
2022-10-04 10:54:24.262 INFO [,,,] 1 --- [uler-event-loop] org.apache.spark.scheduler.DAGScheduler : Parents of final stage: List(ShuffleMapStage 1796)
2022-10-04 10:54:24.262 INFO [,,,] 1 --- [uler-event-loop] org.apache.spark.scheduler.DAGScheduler : Missing parents: List(ShuffleMapStage 1796)
2022-10-04 10:54:24.262 INFO [,,,] 1 --- [uler-event-loop] org.apache.spark.scheduler.DAGScheduler : Submitting ShuffleMapStage 1796 (MapPartitionsRDD[5268] at collectAsList at AggregationDynamic.java:207), which has no missing parents
2022-10-04 10:54:24.263 INFO [,,,] 1 --- [uler-event-loop] o.a.spark.storage.memory.MemoryStore : Block broadcast_2598 stored as values in memory (estimated size 66.4 KB, free 10.5 GB)
2022-10-04 10:54:24.269 INFO [,,,] 1 --- [uler-event-loop] o.a.spark.storage.memory.MemoryStore : Block broadcast_2598_piece0 stored as bytes in memory (estimated size 22.4 KB, free 10.5 GB)
2022-10-04 10:54:24.269 INFO [,,,] 1 --- [er-event-loop-0] o.apache.spark.storage.BlockManagerInfo : Added broadcast_2598_piece0 in memory on 2ad8b2e35e60:37651 (size: 22.4 KB, free: 10.5 GB)
2022-10-04 10:54:24.270 INFO [,,,] 1 --- [uler-event-loop] org.apache.spark.SparkContext : Created broadcast 2598 from broadcast at DAGScheduler.scala:1161
2022-10-04 10:54:24.270 INFO [,,,] 1 --- [uler-event-loop] org.apache.spark.scheduler.DAGScheduler : Submitting 5 missing tasks from ShuffleMapStage 1796 (MapPartitionsRDD[5268] at collectAsList at AggregationDynamic.java:207) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4))
2022-10-04 10:54:24.270 INFO [,,,] 1 --- [uler-event-loop] o.a.spark.scheduler.TaskSchedulerImpl : Adding task set 1796.0 with 5 tasks
2022-10-04 10:54:24.270 INFO [,,,] 1 --- [r-event-loop-12] o.apache.spark.scheduler.TaskSetManager : Starting task 0.0 in stage 1796.0 (TID 169069, localhost, executor driver, partition 0, PROCESS_LOCAL, 8341 bytes)
2022-10-04 10:54:24.271 INFO [,,,] 1 --- [r-event-loop-12] o.apache.spark.scheduler.TaskSetManager : Starting task 1.0 in stage 1796.0 (TID 169070, localhost, executor driver, partition 1, PROCESS_LOCAL, 8341 bytes)
2022-10-04 10:54:24.271 INFO [,,,] 1 --- [r-event-loop-12] o.apache.spark.scheduler.TaskSetManager : Starting task 2.0 in stage 1796.0 (TID 169071, localhost, executor driver, partition 2, PROCESS_LOCAL, 8341 bytes)
2022-10-04 10:54:24.271 INFO [,,,] 1 --- [r-event-loop-12] o.apache.spark.scheduler.TaskSetManager : Starting task 3.0 in stage 1796.0 (TID 169072, localhost, executor driver, partition 3, PROCESS_LOCAL, 8341 bytes)
2022-10-04 10:54:24.271 INFO [,,,] 1 --- [r-event-loop-12] o.apache.spark.scheduler.TaskSetManager : Starting task 4.0 in stage 1796.0 (TID 169073, localhost, executor driver, partition 4, PROCESS_LOCAL, 8341 bytes)
2022-10-04 10:54:24.272 INFO [,,,] 1 --- [for task 169073] org.apache.spark.executor.Executor : Running task 4.0 in stage 1796.0 (TID 169073)
2022-10-04 10:54:24.272 INFO [,,,] 1 --- [for task 169070] org.apache.spark.executor.Executor : Running task 1.0 in stage 1796.0 (TID 169070)
2022-10-04 10:54:24.272 INFO [,,,] 1 --- [for task 169069] org.apache.spark.executor.Executor : Running task 0.0 in stage 1796.0 (TID 169069)
2022-10-04 10:54:24.272 INFO [,,,] 1 --- [for task 169072] org.apache.spark.executor.Executor : Running task 3.0 in stage 1796.0 (TID 169072)
2022-10-04 10:54:24.272 INFO [,,,] 1 --- [for task 169071] org.apache.spark.executor.Executor : Running task 2.0 in stage 1796.0 (TID 169071)
2022-10-04 10:54:24.280 INFO [,,,] 1 --- [for task 169073] o.a.s.s.e.datasources.FileScanRDD : Reading File path: file:///opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet, range: 16777216-20370434, partition values: [2019]
2022-10-04 10:54:24.280 INFO [,,,] 1 --- [for task 169069] o.a.s.s.e.datasources.FileScanRDD : Reading File path: file:///opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet, range: 0-4194304, partition values: [2019]
2022-10-04 10:54:24.280 INFO [,,,] 1 --- [for task 169070] o.a.s.s.e.datasources.FileScanRDD : Reading File path: file:///opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet, range: 4194304-8388608, partition values: [2019]
2022-10-04 10:54:24.280 INFO [,,,] 1 --- [for task 169072] o.a.s.s.e.datasources.FileScanRDD : Reading File path: file:///opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet, range: 12582912-16777216, partition values: [2019]
2022-10-04 10:54:24.280 ERROR [,,,] 1 --- [for task 169070] org.apache.spark.executor.Executor : Exception in task 1.0 in stage 1796.0 (TID 169070)
java.io.FileNotFoundException: File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.scan_nextBatch_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2022-10-04 10:54:24.280 ERROR [,,,] 1 --- [for task 169069] org.apache.spark.executor.Executor : Exception in task 0.0 in stage 1796.0 (TID 169069)
java.io.FileNotFoundException: File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.scan_nextBatch_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2022-10-04 10:54:24.280 ERROR [,,,] 1 --- [for task 169073] org.apache.spark.executor.Executor : Exception in task 4.0 in stage 1796.0 (TID 169073)
java.io.FileNotFoundException: File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.scan_nextBatch_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2022-10-04 10:54:24.280 ERROR [,,,] 1 --- [for task 169072] org.apache.spark.executor.Executor : Exception in task 3.0 in stage 1796.0 (TID 169072)
java.io.FileNotFoundException: File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.scan_nextBatch_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2022-10-04 10:54:24.281 WARN [,,,] 1 --- [result-getter-3] o.apache.spark.scheduler.TaskSetManager : Lost task 1.0 in stage 1796.0 (TID 169070, localhost, executor driver): java.io.FileNotFoundException: File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.scan_nextBatch_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2022-10-04 10:54:24.282 ERROR [,,,] 1 --- [result-getter-3] o.apache.spark.scheduler.TaskSetManager : Task 1 in stage 1796.0 failed 1 times; aborting job
2022-10-04 10:54:24.282 INFO [,,,] 1 --- [result-getter-0] o.apache.spark.scheduler.TaskSetManager : Lost task 0.0 in stage 1796.0 (TID 169069) on localhost, executor driver: java.io.FileNotFoundException (File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.) [duplicate 1]
2022-10-04 10:54:24.282 INFO [,,,] 1 --- [result-getter-3] o.apache.spark.scheduler.TaskSetManager : Lost task 3.0 in stage 1796.0 (TID 169072) on localhost, executor driver: java.io.FileNotFoundException (File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.) [duplicate 2]
2022-10-04 10:54:24.282 INFO [,,,] 1 --- [for task 169071] o.a.s.s.e.datasources.FileScanRDD : Reading File path: file:///opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet, range: 8388608-12582912, partition values: [2019]
2022-10-04 10:54:24.282 INFO [,,,] 1 --- [uler-event-loop] o.a.spark.scheduler.TaskSchedulerImpl : Cancelling stage 1796
2022-10-04 10:54:24.283 INFO [,,,] 1 --- [uler-event-loop] o.a.spark.scheduler.TaskSchedulerImpl : Killing all running tasks in stage 1796: Stage cancelled
2022-10-04 10:54:24.283 INFO [,,,] 1 --- [uler-event-loop] o.a.spark.scheduler.TaskSchedulerImpl : Stage 1796 was cancelled
2022-10-04 10:54:24.283 INFO [,,,] 1 --- [er-event-loop-3] org.apache.spark.executor.Executor : Executor is trying to kill task 2.0 in stage 1796.0 (TID 169071), reason: Stage cancelled
2022-10-04 10:54:24.283 INFO [,,,] 1 --- [uler-event-loop] org.apache.spark.scheduler.DAGScheduler : ShuffleMapStage 1796 (collectAsList at AggregationDynamic.java:207) failed in 0.021 s due to Job aborted due to stage failure: Task 1 in stage 1796.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1796.0 (TID 169070, localhost, executor driver): java.io.FileNotFoundException: File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.scan_nextBatch_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
2022-10-04 10:54:24.283 ERROR [,,,] 1 --- [for task 169071] org.apache.spark.executor.Executor : Exception in task 2.0 in stage 1796.0 (TID 169071)
java.io.FileNotFoundException: File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.scan_nextBatch_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2022-10-04 10:54:24.283 INFO [,,,] 1 --- [result-getter-1] o.apache.spark.scheduler.TaskSetManager : Lost task 4.0 in stage 1796.0 (TID 169073) on localhost, executor driver: java.io.FileNotFoundException (File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.) [duplicate 3]
2022-10-04 10:54:24.283 INFO [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] org.apache.spark.scheduler.DAGScheduler : Job 975 failed: collectAsList at AggregationDynamic.java:207, took 0.022388 s
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 1796.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1796.0 (TID 169070, localhost, executor driver): java.io.FileNotFoundException: File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.scan_nextBatch_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1889)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1876)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1876)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2110)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2059)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2048)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
    at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:299)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3389)
    at org.apache.spark.sql.Dataset$$anonfun$collectAsList$1.apply(Dataset.scala:2800)
    at org.apache.spark.sql.Dataset$$anonfun$collectAsList$1.apply(Dataset.scala:2799)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3370)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3369)
    at org.apache.spark.sql.Dataset.collectAsList(Dataset.scala:2799)
    at ua.com.ehub.om.products.services.preparedata.product.AggregationDynamic.agg(AggregationDynamic.java:207)
    at ua.com.ehub.om.products.services.preparedata.product.AggregationDynamic.lambda$null$0(AggregationDynamic.java:118)
    at java.util.ArrayList.forEach(ArrayList.java:1259)
    at ua.com.ehub.om.products.services.preparedata.product.AggregationDynamic.lambda$runItem$1(AggregationDynamic.java:117)
    at java.util.HashMap.forEach(HashMap.java:1289)
    at ua.com.ehub.om.products.services.preparedata.product.AggregationDynamic.runItem(AggregationDynamic.java:115)
    at ua.com.ehub.om.products.services.preparedata.product.AggregationDynamic.lambda$null$3(AggregationDynamic.java:143)
    at java.lang.Iterable.forEach(Iterable.java:75)
    at ua.com.ehub.om.products.services.preparedata.product.AggregationDynamic.lambda$run$4(AggregationDynamic.java:142)
    at java.util.HashMap$KeySet.forEach(HashMap.java:933)
    at ua.com.ehub.om.products.services.preparedata.product.AggregationDynamic.run(AggregationDynamic.java:141)
    at ua.com.ehub.om.products.services.product.Dynamic.run(Dynamic.java:36)
    at ua.com.ehub.om.products.controller.products.ImportExportGoodsController.dynamicAndStructure(ImportExportGoodsController.java:49)
    at sun.reflect.GeneratedMethodAccessor443.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:190)
    at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138)
    at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:105)
    at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:879)
    at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:793)
    at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
    at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1040)
    at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:943)
    at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006)
    at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:909)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:660)
2022-10-04 10:54:24.286 INFO [,,,] 1 --- [result-getter-0] o.apache.spark.scheduler.TaskSetManager : Lost task 2.0 in stage 1796.0 (TID 169071) on localhost, executor driver: java.io.FileNotFoundException (File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.) [duplicate 4]
2022-10-04 10:54:24.286 INFO [,,,] 1 --- [result-getter-0] o.a.spark.scheduler.TaskSchedulerImpl : Removed TaskSet 1796.0, whose tasks have all completed, from pool
    at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:741)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at brave.servlet.TracingFilter.doFilter(TracingFilter.java:67)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.springframework.web.filter.AbstractRequestLoggingFilter.doFilterInternal(AbstractRequestLoggingFilter.java:289)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at ua.com.ehub.om.products.filter.LoggingFilter.doFilter(LoggingFilter.java:52)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.springframework.cloud.sleuth.instrument.web.ExceptionLoggingFilter.doFilter(ExceptionLoggingFilter.java:50)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at brave.servlet.TracingFilter.doFilter(TracingFilter.java:84)
    at org.springframework.cloud.sleuth.instrument.web.LazyTracingFilter.doFilter(TraceWebServletAutoConfiguration.java:138)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.doFilterInternal(WebMvcMetricsFilter.java:109)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
    at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:373)
    at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
    at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1590)
    at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.FileNotFoundException: File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.scan_nextBatch_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ... 1 more
2022-10-04 10:54:24.287 ERROR [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] u.c.e.o.p.e.CustomGlobalExceptionHandler : -------------- Handle exception: ----------------
2022-10-04 10:54:24.287 ERROR [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] u.c.e.o.p.e.CustomGlobalExceptionHandler : --- Request
2022-10-04 10:54:24.287 ERROR [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] u.c.e.o.p.e.CustomGlobalExceptionHandler : userId: 44 Login: user50@gmail.com request: POST http://10.10.10.52:6020/goods/analytic --body: {"imp_exp":["imp"],"period":{"base_year":"2021","cumulative":false,"period_type":"YEAR","selected":["2019"]},"indicators":[{"indicator_capacity":1,"indicator_measure":["kol1"],"indicator_type":"current","indicator_unit":"base_units"}],"uktz":{"selected":["25"],"select_by":0,"overall":false,"overall_selected":false},"incoterms":[],"countries_imp_exp":{"selected":[],"select_by":"none","overall":false,"overall_selected":false,"chapter":"id_2"},"countries_sending":{"selected":[],"select_by":"none","overall":false,"overall_selected":false,"chapter":"id_2"},"countries_trading":{"selected":[],"select_by":"none","overall":false,"overall_selected":false,"chapter":"id_2"},"transport":{"selected":[],"select_by":"none","overall":false,"overall_selected":false},"pagination":{"limit":25,"offset":0},"sorting":{"field":"","direction":"ASC"},"filter":{},"measures":[166]} token: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6InVzZXI1MEBnbWFpbC5jb20iLCJyb2xlIjoiT1dORVIiLCJ1c2VySWQiOjQ0LCJpYXQiOjE2NjQ4ODA2ODksImV4cCI6MTY2NDg4MTEzOX0.Ro8fTG15C_FJHlSEJryLlhwfrrQxv3qTkV-wHtZz8hM
2022-10-04 10:54:24.287 ERROR [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] u.c.e.o.p.e.CustomGlobalExceptionHandler : --- codeErrorLog 38
2022-10-04 10:54:24.287 ERROR [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] u.c.e.o.p.e.CustomGlobalExceptionHandler : --- errorMessageLog ?????????? ??????
2022-10-04 10:54:24.287 ERROR [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] u.c.e.o.p.e.CustomGlobalExceptionHandler : --- e.getMessage() Job aborted due to stage failure: Task 1 in stage 1796.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1796.0 (TID 169070, localhost, executor driver): java.io.FileNotFoundException: File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.scan_nextBatch_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:, File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
2022-10-04 10:54:24.288 INFO [,dc5b5fb9f085d47b,e1118d8da5f90980,false] 1 --- [io-6020-exec-10] u.c.e.o.p.services.log.SaveLogRemote : ServiceLogRequest ServiceLog(moduleId=8, moduleName=products-service, description=?????????? ??????
Job aborted due to stage failure: Task 1 in stage 1796.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1796.0 (TID 169070, localhost, executor driver): java.io.FileNotFoundException: File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.scan_nextBatch_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:, File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist It is possible the underlying files have been updated.
You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved., isSuccess=false, callerType=MODULE, callerDescription= userId: 44 Login: user50@gmail.com request: POST http://10.10.10.52:6020/goods/analytic --body: {"imp_exp":["imp"],"period":{"base_year":"2021","cumulative":false,"period_type":"YEAR","selected":["2019"]},"indicators":[{"indicator_capacity":1,"indicator_measure":["kol1"],"indicator_type":"current","indicator_unit":"base_units"}],"uktz":{"selected":["25"],"select_by":0,"overall":false,"overall_selected":false},"incoterms":[],"countries_imp_exp":{"selected":[],"select_by":"none","overall":false,"overall_selected":false,"chapter":"id_2"},"countries_sending":{"selected":[],"select_by":"none","overall":false,"overall_selected":false,"chapter":"id_2"},"countries_trading":{"selected":[],"select_by":"none","overall":false,"overall_selected":false,"chapter":"id_2"},"transport":{"selected":[],"select_by":"none","overall":false,"overall_selected":false},"pagination":{"limit":25,"offset":0},"sorting":{"field":"","direction":"ASC"},"filter":{},"measures":[166]} token: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6InVzZXI1MEBnbWFpbC5jb20iLCJyb2xlIjoiT1dORVIiLCJ1c2VySWQiOjQ0LCJpYXQiOjE2NjQ4ODA2ODksImV4cCI6MTY2NDg4MTEzOX0.Ro8fTG15C_FJHlSEJryLlhwfrrQxv3qTkV-wHtZz8hM , ip=185.237.216.13, created=2022-10-04T10:54:24.288, code=8038, token=Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6InVzZXI1MEBnbWFpbC5jb20iLCJyb2xlIjoiT1dORVIiLCJ1c2VySWQiOjQ0LCJpYXQiOjE2NjQ4ODA2ODksImV4cCI6MTY2NDg4MTEzOX0.Ro8fTG15C_FJHlSEJryLlhwfrrQxv3qTkV-wHtZz8hM)
******************** sendLogs [ServiceLog(moduleId=8, moduleName=products-service, description=?????????? ??????
Job aborted due to stage failure: Task 1 in stage 1796.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1796.0 (TID 169070, localhost, executor driver): java.io.FileNotFoundException: File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.scan_nextBatch_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:, File file:/opt/data_ram/trades_imp_year_2.parquet/period=2019/part-00000-82d7d167-8634-4bae-94e8-cd22f5493ba6.c000.snappy.parquet does not exist It is possible the underlying files have been updated.
You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved., isSuccess=false, callerType=MODULE, callerDescription= userId: 44 Login: user50@gmail.com request: POST http://10.10.10.52:6020/goods/analytic --body: {"imp_exp":["imp"],"period":{"base_year":"2021","cumulative":false,"period_type":"YEAR","selected":["2019"]},"indicators":[{"indicator_capacity":1,"indicator_measure":["kol1"],"indicator_type":"current","indicator_unit":"base_units"}],"uktz":{"selected":["25"],"select_by":0,"overall":false,"overall_selected":false},"incoterms":[],"countries_imp_exp":{"selected":[],"select_by":"none","overall":false,"overall_selected":false,"chapter":"id_2"},"countries_sending":{"selected":[],"select_by":"none","overall":false,"overall_selected":false,"chapter":"id_2"},"countries_trading":{"selected":[],"select_by":"none","overall":false,"overall_selected":false,"chapter":"id_2"},"transport":{"selected":[],"select_by":"none","overall":false,"overall_selected":false},"pagination":{"limit":25,"offset":0},"sorting":{"field":"","direction":"ASC"},"filter":{},"measures":[166]} token: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6InVzZXI1MEBnbWFpbC5jb20iLCJyb2xlIjoiT1dORVIiLCJ1c2VySWQiOjQ0LCJpYXQiOjE2NjQ4ODA2ODksImV4cCI6MTY2NDg4MTEzOX0.Ro8fTG15C_FJHlSEJryLlhwfrrQxv3qTkV-wHtZz8hM , ip=185.237.216.13, created=2022-10-04T10:54:24.288, code=8038, token=Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6InVzZXI1MEBnbWFpbC5jb20iLCJyb2xlIjoiT1dORVIiLCJ1c2VySWQiOjQ0LCJpYXQiOjE2NjQ4ODA2ODksImV4cCI6MTY2NDg4MTEzOX0.Ro8fTG15C_FJHlSEJryLlhwfrrQxv3qTkV-wHtZz8hM)]
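[Note] All five tasks die on the same java.io.FileNotFoundException: the part-00000-...snappy.parquet file was replaced (the /opt/data_ram snapshot rewritten) after Spark cached the directory listing behind the DataFrame, so the plan still points at a file that no longer exists. The exception text itself names both remedies: REFRESH TABLE for a metastore table, or recreating the Dataset. Below is a hypothetical recovery sketch in Java, assuming only a SparkSession named spark; refreshByPath is the path-based analogue of REFRESH TABLE. It is not the service's actual code.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class StaleListingRecovery {

    // Path taken from the log; in the real service it would come from configuration.
    static final String TRADES_PATH = "/opt/data_ram/trades_imp_year_2.parquet";

    // Invalidate the cached file listing for the path and build a fresh Dataset.
    static Dataset<Row> reload(SparkSession spark) {
        // Drops cached metadata (file listing, cached plans) for this path, so the
        // next read lists the directory again instead of reusing the stale file list.
        spark.catalog().refreshByPath(TRADES_PATH);

        // Recreating the Dataset, rather than holding a long-lived reference across
        // data reloads, is the second remedy the exception message suggests.
        return spark.read().parquet(TRADES_PATH);
    }
}

Since the failure window only exists because files are replaced in place, a writer-side fix is also worth considering: write each new snapshot to a fresh directory and switch readers over atomically, so an in-flight query never sees a half-rewritten layout.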
2022-10-04 10:54:32.455 DEBUG [,e0f286a2cfaedcbd,e0f286a2cfaedcbd,false] 1 --- [nio-6020-exec-9] u.c.e.o.p.s.l.c.CheckLiveRemoteLog : ?????? ????????? ???????? {"status":"UP"}
2022-10-04 10:54:32.455 INFO [,e0f286a2cfaedcbd,e0f286a2cfaedcbd,false] 1 --- [nio-6020-exec-9] u.c.e.o.p.t.ThreadEndpointDto : ******************* REQUEST ***********************
2022-10-04 10:54:32.455 INFO [,e0f286a2cfaedcbd,e0f286a2cfaedcbd,false] 1 --- [nio-6020-exec-9] u.c.e.o.p.t.ThreadEndpointDto : userId 0 user null
2022-10-04 10:54:32.455 INFO [,e0f286a2cfaedcbd,e0f286a2cfaedcbd,false] 1 --- [nio-6020-exec-9] u.c.e.o.p.t.ThreadEndpointDto : IP 10.10.10.33
2022-10-04 10:54:32.455 INFO [,e0f286a2cfaedcbd,e0f286a2cfaedcbd,false] 1 --- [nio-6020-exec-9] u.c.e.o.p.t.ThreadEndpointDto : request: GET http://products-service:6020/actuator/prometheus
2022-10-04 10:54:32.456 INFO [,e0f286a2cfaedcbd,e0f286a2cfaedcbd,false] 1 --- [nio-6020-exec-9] u.c.e.o.p.t.ThreadEndpointDto : ****************************************************
2022-10-04 10:54:42.462 DEBUG [,20e457bae54f778a,20e457bae54f778a,false] 1 --- [nio-6020-exec-3] u.c.e.o.p.s.l.c.CheckLiveRemoteLog : ?????? ????????? ???????? {"status":"UP"}
2022-10-04 10:54:42.463 INFO [,20e457bae54f778a,20e457bae54f778a,false] 1 --- [nio-6020-exec-3] u.c.e.o.p.t.ThreadEndpointDto : ******************* REQUEST ***********************
2022-10-04 10:54:42.463 INFO [,20e457bae54f778a,20e457bae54f778a,false] 1 --- [nio-6020-exec-3] u.c.e.o.p.t.ThreadEndpointDto : userId 0 user null
2022-10-04 10:54:42.463 INFO [,20e457bae54f778a,20e457bae54f778a,false] 1 --- [nio-6020-exec-3] u.c.e.o.p.t.ThreadEndpointDto : IP 10.10.10.33
2022-10-04 10:54:42.463 INFO [,20e457bae54f778a,20e457bae54f778a,false] 1 --- [nio-6020-exec-3] u.c.e.o.p.t.ThreadEndpointDto : request: GET http://products-service:6020/actuator/prometheus
2022-10-04 10:54:42.463 INFO [,20e457bae54f778a,20e457bae54f778a,false] 1 --- [nio-6020-exec-3] u.c.e.o.p.t.ThreadEndpointDto : ****************************************************