ROOT to Parquet conversion error
[judancu@ithdp-client01 spark_tnp]$ ./tnp_fitter.py convert muon JPsi Run2016 AOD Run2016B
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/cvmfs/sft.cern.ch/lcg/releases/spark/2.4.5-cern1-f7679/x86_64-centos7-gcc8-opt/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/cvmfs/sft.cern.ch/lcg/releases/hadoop/2.7.5.1-79196/x86_64-centos7-gcc8-opt/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
21/11/12 13:20:29 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
21/11/12 13:20:48 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
------------------ DEBUG ----------------
spark.app.id=application_1636012038056_14005
spark.app.name=TnP
spark.authenticate=true
spark.blockManager.port=5101
spark.driver.appUIAddress=http://ithdp-client01.cern.ch:5201
spark.driver.extraClassPath=./laurelin-1.0.0.jar,./log4j-api-2.13.0.jar,./log4j-core-2.13.0.jar
spark.driver.extraJavaOptions=-XX:+UseG1GC
spark.driver.extraLibraryPath=/usr/hdp/hadoop/lib/native
spark.driver.host=ithdp-client01.cern.ch
spark.driver.memory=6g
spark.driver.port=5001
spark.dynamicAllocation.enabled=true
spark.dynamicAllocation.maxExecutors=100
spark.dynamicAllocation.minExecutors=1
spark.eventLog.dir=hdfs://analytix//var/log/spark-history
spark.eventLog.enabled=true
spark.executor.cores=2
spark.executor.extraClassPath=./laurelin-1.0.0.jar,./log4j-api-2.13.0.jar,./log4j-core-2.13.0.jar
spark.executor.extraJavaOptions=-XX:+UseG1GC
spark.executor.extraLibraryPath=/usr/hdp/hadoop/lib/native
spark.executor.id=driver
spark.executor.instances=1
spark.executor.memory=4g
spark.executorEnv.PYTHONPATH=/cvmfs/sft.cern.ch/lcg/views/LCG_97apython3/x86_64-centos7-gcc8-opt/python:/cvmfs/sft.cern.ch/lcg/views/LCG_97apython3/x86_64-centos7-gcc8-opt/lib:/cvmfs/sft.cern.ch/lcg/views/LCG_97apython3/x86_64-centos7-gcc8-opt/lib/python3.7/site-packages<CPS>{{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.10.7-src.zip
spark.jars=./laurelin-1.0.0.jar,./log4j-api-2.13.0.jar,./log4j-core-2.13.0.jar
spark.master=yarn
spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.PROXY_HOSTS=ithdp2401.cern.ch,ithdp2101.cern.ch
spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.PROXY_URI_BASES=http://ithdp2401.cern.ch:8088/proxy/application_1636012038056_14005,http://ithdp2101.cern.ch:8088/proxy/application_1636012038056_14005
spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.RM_HA_URLS=ithdp2401.cern.ch:8088,ithdp2101.cern.ch:8088
spark.port.maxRetries=99
spark.rdd.compress=True
spark.repl.local.jars=file:///afs/cern.ch/user/j/judancu/private/muon_tnp/spark_tnp/laurelin-1.0.0.jar,file:///afs/cern.ch/user/j/judancu/private/muon_tnp/spark_tnp/log4j-api-2.13.0.jar,file:///afs/cern.ch/user/j/judancu/private/muon_tnp/spark_tnp/log4j-core-2.13.0.jar
spark.serializer.objectStreamReset=100
spark.shuffle.service.enabled=true
spark.submit.deployMode=client
spark.ui.filters=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
spark.ui.port=5201
spark.ui.proxyBase=/proxy/application_1636012038056_14005
spark.ui.showConsoleProgress=true
spark.yarn.access.hadoopFileSystems=analytix
spark.yarn.dist.jars=file:///afs/cern.ch/user/j/judancu/private/muon_tnp/spark_tnp/laurelin-1.0.0.jar,file:///afs/cern.ch/user/j/judancu/private/muon_tnp/spark_tnp/log4j-api-2.13.0.jar,file:///afs/cern.ch/user/j/judancu/private/muon_tnp/spark_tnp/log4j-core-2.13.0.jar
spark.yarn.historyServer.address=ithdp2401.cern.ch:18080
spark.yarn.isPython=true
spark.yarn.secondary.jars=laurelin-1.0.0.jar,log4j-api-2.13.0.jar,log4j-core-2.13.0.jar
---------------- END DEBUG ----------------
>>>>>>>>> Converting: muon JPsi Run2016 Run2016B
>>>>>>>>> Path to input root files: hdfs://analytix/user/judancu/root/muon/JPsi/Run2016/AOD/Run2016B
>>>>>>>>> Path to output parquet files: hdfs://analytix/user/judancu/parquet/muon/JPsi/Run2016/AOD/Run2016B/tnp.parquet
>>>>>>>>> Number of files to process: 5
>>>>>>>>> First file: hdfs://analytix/user/judancu/root/muon/JPsi/Run2016/AOD/Run2016B/haddOut_0_7fa04b04.root
21/11/12 13:21:03 WARN Utils: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
[Stage 1:> (0 + 0) / 95]
21/11/12 13:21:38 WARN TaskSetManager: Stage 1 contains a task of very large size (266 KB). The maximum recommended task size is 100 KB.
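A note on the plan-truncation warning above: it is cosmetic. Spark shortens the printed representation of a query plan once it exceeds spark.debug.maxToStringFields fields (default 25), a limit a flat TnP ntuple with many branches can easily exceed. If the full plan is wanted for debugging, the limit can be raised when the session is built. A minimal sketch, assuming the session is created by hand rather than through tnp_fitter.py (the value 200 is an arbitrary example, not a recommendation from the tool):

    from pyspark.sql import SparkSession

    # Raise the field limit so long query plans print without truncation.
    # 200 is an arbitrary example; pick a value above the ntuple's branch count.
    spark = (SparkSession.builder
             .appName("TnP")
             .config("spark.debug.maxToStringFields", "200")
             .getOrCreate())

The same key can also be set on the command line via spark-submit --conf spark.debug.maxToStringFields=200.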
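The final TaskSetManager warning usually means each serialized task carries a large payload, typically a big driver-side object captured in the job's closure, rather than anything being wrong with the data itself. A common mitigation is to broadcast such an object once instead of shipping it inside every task. A minimal sketch of the pattern, with a hypothetical lookup table standing in for whatever the convert step actually captures (branch_map and its contents are illustrative, not taken from spark_tnp):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("TnP").getOrCreate()
    sc = spark.sparkContext

    # Hypothetical large driver-side lookup; if referenced directly in a
    # lambda, it is serialized into every task and inflates the task size.
    branch_map = {f"branch_{i}": i for i in range(100000)}

    # Broadcast it once; executors fetch a single shared, read-only copy
    # instead of receiving it inside each serialized task.
    branch_map_bc = sc.broadcast(branch_map)

    rdd = sc.parallelize(range(1000), numSlices=95)
    result = rdd.map(lambda x: branch_map_bc.value.get(f"branch_{x}", -1)).sum()
    print(result)

At 266 KB the task size is well below any hard limit, so the conversion should still run; the warning only flags avoidable per-task overhead.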