Synopsys Detect (Black Duck): How to scan a Go application - "dial tcp: lookup proxy.golang.org: no such host" error

I am trying to run a Black Duck scan on a Go application from the command line. The command I am using is below:
sudo java -jar detect.jar \
--blackduck.trust.cert=true \
--detect.java.path="/Library/Java/JavaVirtualMachines/adoptopenjdk-11.jdk/Contents/Home" \
--detect.project.name="My App" \
--detect.project.version.name="My App - test" \
--detect.code.location.name="My App - test" \
--blackduck.url="https://<private-blackduck-portal>/" \
--blackduck.api.token="<API_TOKEN>" \
--detect.source.path="/Users/<user>/go/src/<private-github-repo>/projects/my-app"
The connection to Black Duck is successful, but when Detect tries to resolve the module graph it cannot retrieve the Go packages; we get the following error for each one: 'dial tcp: lookup proxy.golang.org: no such host'. You can see the full output below.
I have GOPROXY set to our private mirror of proxy.golang.org (I am behind a corporate proxy). This works fine when I run go get etc. on the repository; it is only during the Black Duck scan that I have this issue.
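For reference, this is roughly how the GOPROXY visible to my own user can be compared with what root sees when the same go binary is invoked via sudo; by default sudo resets the environment, and a value written with go env -w lives in a per-user go/env file that root does not read. The output is just whatever is configured (our mirror URL is private):
# GOPROXY as my own user
go env GOPROXY
# GOPROXY as root sees it when sudo invokes the go binary Detect uses
sudo /usr/local/go/bin/go env GOPROXY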
Also worth noting: I run Black Duck scans on a Java application with no issues. It is only Go applications that give me trouble.
Detect Version: 6.9.1
2021-08-19 15:15:37 AEST INFO [main] ---
2021-08-19 15:15:37 AEST INFO [main] --- Current property values:
2021-08-19 15:15:37 AEST INFO [main] --- --property = value [notes]
2021-08-19 15:15:37 AEST INFO [main] --- ------------------------------------------------------------
2021-08-19 15:15:37 AEST INFO [main] --- blackduck.api.token = **************************************************************************************************** [cmd]
2021-08-19 15:15:37 AEST INFO [main] --- blackduck.trust.cert = true [cmd]
2021-08-19 15:15:37 AEST INFO [main] --- blackduck.url = <private-blackduck-portal>/ [cmd]
2021-08-19 15:15:37 AEST INFO [main] --- detect.code.location.name = My App - test [cmd]
2021-08-19 15:15:37 AEST INFO [main] --- detect.java.path = /Library/Java/JavaVirtualMachines/adoptopenjdk-11.jdk/Contents/Home [cmd]
2021-08-19 15:15:37 AEST INFO [main] --- detect.project.name = My App [cmd]
2021-08-19 15:15:37 AEST INFO [main] --- detect.project.version.name = My App - test [cmd]
2021-08-19 15:15:37 AEST INFO [main] --- detect.source.path = /Users/<user>/go/src/<private-github-repo>/projects/my-app [cmd]
2021-08-19 15:15:37 AEST INFO [main] --- ------------------------------------------------------------
2021-08-19 15:15:37 AEST INFO [main] ---
2021-08-19 15:15:37 AEST INFO [main] --- Tildes will be automatically resolved to USER HOME.
2021-08-19 15:15:37 AEST INFO [main] --- Source directory: /Users/<user>/go/src/<private-github-repo>/projects/my-app
2021-08-19 15:15:37 AEST INFO [main] --- Output directory: /var/root/blackduck
2021-08-19 15:15:37 AEST INFO [main] --- Run directory: /var/root/blackduck/runs/2021-08-19-05-15-37-341
2021-08-19 15:15:37 AEST INFO [main] ---
2021-08-19 15:15:38 AEST ERROR [main] --- Automatically trusting server certificates - not recommended for production use.
2021-08-19 15:15:38 AEST INFO [main] --- A successful connection was made.
2021-08-19 15:15:38 AEST INFO [main] --- Connection to the Black Duck server was successful.
2021-08-19 15:15:38 AEST ERROR [main] --- Automatically trusting server certificates - not recommended for production use.
2021-08-19 15:15:39 AEST INFO [main] --- Successfully connected to Black Duck (version 2020.10.0)!
2021-08-19 15:15:39 AEST INFO [main] --- ----------------------------------
2021-08-19 15:15:39 AEST INFO [main] --- Polaris tools will not be run.
2021-08-19 15:15:39 AEST INFO [main] --- ----------------------------------
2021-08-19 15:15:39 AEST INFO [main] --- Will include the Docker tool.
2021-08-19 15:15:39 AEST INFO [main] --- Docker actions finished.
2021-08-19 15:15:39 AEST INFO [main] --- ----------------------------------
2021-08-19 15:15:39 AEST INFO [main] --- Will include the Bazel tool.
2021-08-19 15:15:39 AEST INFO [main] --- Bazel actions finished.
2021-08-19 15:15:39 AEST INFO [main] --- ----------------------------------
2021-08-19 15:15:39 AEST INFO [main] --- Will include the detector tool.
2021-08-19 15:15:39 AEST INFO [main] --- Searching for detectors. This may take a while.
2021-08-19 15:15:40 AEST INFO [main] ---
2021-08-19 15:15:40 AEST INFO [main] --- Running executable >/usr/local/go/bin/go list -m
2021-08-19 15:15:40 AEST INFO [main] --- Process finished: 0
2021-08-19 15:15:40 AEST INFO [main] --- Running executable >/usr/local/go/bin/go version
2021-08-19 15:15:40 AEST INFO [main] --- Process finished: 0
2021-08-19 15:15:40 AEST INFO [main] --- Running executable >/usr/local/go/bin/go list -mod=readonly -m -u -json all
2021-08-19 15:15:40 AEST INFO [main] --- Process finished: 1
2021-08-19 15:15:40 AEST INFO [main] --- Error Output:
2021-08-19 15:15:40 AEST INFO [main] --- go list -m: loading module retractions for cloud.google.com/go@v0.44.3: Get "https://proxy.golang.org/cloud.google.com/go/@v/list": dial tcp: lookup proxy.golang.org: no such host
[... several more lines with the same error for other packages ...]
2021-08-19 15:15:40 AEST INFO [main] --- ======================================================================================================
2021-08-19 15:15:40 AEST INFO [main] --- Detector Report
2021-08-19 15:15:40 AEST INFO [main] --- ======================================================================================================
2021-08-19 15:15:40 AEST INFO [main] --- /Users/<user>/go/src/<private-github-repo>/projects/my-app (depth 0)
2021-08-19 15:15:40 AEST INFO [main] --- GO_MOD - Go Mod Cli
2021-08-19 15:15:40 AEST INFO [main] --- Found file: /Users/<user>/go/src/<private-github-repo>/projects/my-app/go.mod
2021-08-19 15:15:40 AEST INFO [main] --- Found executable: /usr/local/go/bin/go
2021-08-19 15:15:40 AEST INFO [main] --- GIT - Git Cli
2021-08-19 15:15:40 AEST INFO [main] --- Found file: /Users/<user>/go/src/<private-github-repo>/projects/my-app/.git
2021-08-19 15:15:40 AEST INFO [main] --- Found executable: /usr/bin/git
2021-08-19 15:15:40 AEST INFO [main] --- GIT - Git Parse
2021-08-19 15:15:40 AEST INFO [main] --- Found file: /Users/<user>/go/src/<private-github-repo>/projects/my-app/.git
2021-08-19 15:15:40 AEST INFO [main] --- Found file: /Users/<user>/go/src/<private-github-repo>/projects/my-app/.git/config
2021-08-19 15:15:40 AEST INFO [main] --- Found file: /Users/<user>/go/src/<private-github-repo>/projects/my-app/.git/HEAD
2021-08-19 15:15:40 AEST INFO [main] --- Not needed as fallback: GIT - Git Cli successful
2021-08-19 15:15:40 AEST INFO [main] --- ----------------------------------
2021-08-19 15:15:41 AEST INFO [main] --- Detector actions finished.
2021-08-19 15:15:41 AEST INFO [main] --- ----------------------------------
2021-08-19 15:15:41 AEST INFO [main] --- Project name: My App
2021-08-19 15:15:41 AEST INFO [main] --- Project version: My App - test
2021-08-19 15:15:42 AEST INFO [main] --- ----------------------------------
2021-08-19 15:15:42 AEST INFO [main] --- Will include the signature scanner tool.
2021-08-19 15:15:42 AEST ERROR [main] --- Automatically trusting server certificates - not recommended for production use.
2021-08-19 15:15:42 AEST ERROR [main] --- Automatically trusting server certificates - not recommended for production use.
2021-08-19 15:15:46 AEST INFO [main] --- No scan targets provided - registering the source path /Users/<user>/go/src/<private-github-repo>/projects/my-app to scan
2021-08-19 15:15:46 AEST INFO [main] --- The Black Duck Signature Scanner downloaded/found successfully: /var/root/blackduck/tools
2021-08-19 15:15:46 AEST INFO [main] --- Starting the Black Duck Signature Scan commands.
2021-08-19 15:15:46 AEST INFO [pool-3-thread-1] --- Black Duck CLI command: /var/root/blackduck/tools/Black_Duck_Scan_Installation/scan.cli-2020.10.0/jre/Contents/Home/bin/java -Done-jar.silent=true -Done-jar.jar.path=/var/root/blackduck/tools/Black_Duck_Scan_Installation/scan.cli-2020.10.0/lib/cache/scan.cli.impl-standalone.jar -Xmx4096m -jar /var/root/blackduck/tools/Black_Duck_Scan_Installation/scan.cli-2020.10.0/lib/scan.cli-2020.10.0-standalone.jar --no-prompt --scheme https --host <private-blackduck-portal> --port 443 --insecure -v --logDir /var/root/blackduck/runs/2021-08-19-05-15-37-341/scan/BlackDuckScanOutput/2021-08-19_05-15-46-774_1 --statusWriteDir /var/root/blackduck/runs/2021-08-19-05-15-37-341/scan/BlackDuckScanOutput/2021-08-19_05-15-46-774_1 --project My App --release My App - test --name My App - test scan /Users/<user>/go/src/<private-github-repo>/projects/my-app
2021-08-19 15:16:09 AEST INFO [pool-3-thread-1] ---
2021-08-19 15:16:09 AEST INFO [pool-3-thread-1] --- Black Duck Signature Scanner return code: 0
2021-08-19 15:16:09 AEST INFO [pool-3-thread-1] --- You can view the logs at: '/private/var/root/blackduck/runs/2021-08-19-05-15-37-341/scan/BlackDuckScanOutput/2021-08-19_05-15-46-774_1'
2021-08-19 15:16:09 AEST INFO [main] --- Completed the Black Duck Signature Scan commands.
2021-08-19 15:16:09 AEST INFO [main] --- Signature scanner actions finished.
2021-08-19 15:16:09 AEST INFO [main] --- ----------------------------------
2021-08-19 15:16:09 AEST INFO [main] --- Will include the binary scanner tool.
2021-08-19 15:16:09 AEST INFO [main] --- Binary scanner actions finished.
2021-08-19 15:16:09 AEST INFO [main] --- ----------------------------------
2021-08-19 15:16:09 AEST INFO [main] --- Vulnerability Impact Analysis tool will not be run.
2021-08-19 15:16:09 AEST INFO [main] --- ----------------------------------
2021-08-19 15:16:09 AEST INFO [main] --- Will perform Black Duck post actions.
2021-08-19 15:16:09 AEST INFO [main] --- Black Duck actions have finished.
2021-08-19 15:16:09 AEST INFO [main] --- All tools have finished.
2021-08-19 15:16:09 AEST INFO [main] --- ----------------------------------
2021-08-19 15:16:09 AEST INFO [main] ---
2021-08-19 15:16:09 AEST INFO [main] ---
2021-08-19 15:16:09 AEST INFO [main] --- Creating status file: /var/root/blackduck/runs/2021-08-19-05-15-37-341/status/status.json
2021-08-19 15:16:09 AEST INFO [main] --- Status file has been deleted. To preserve status file, turn off cleanup actions.
2021-08-19 15:16:09 AEST INFO [main] --- Cleaning up directory: /var/root/blackduck/runs/2021-08-19-05-15-37-341
2021-08-19 15:16:09 AEST INFO [main] ---
2021-08-19 15:16:09 AEST INFO [main] ---
2021-08-19 15:16:09 AEST INFO [main] --- ======== Detect Issues ========
2021-08-19 15:16:09 AEST INFO [main] ---
2021-08-19 15:16:09 AEST INFO [main] --- DETECTORS:
2021-08-19 15:16:09 AEST INFO [main] --- /Users/<user>/go/src/<private-github-repo>/projects/my-app
2021-08-19 15:16:09 AEST INFO [main] --- Exception: GO_MOD - Go Mod Cli
2021-08-19 15:16:09 AEST INFO [main] --- DetectableException: Querying for the go mod graph failed:1
2021-08-19 15:16:09 AEST INFO [main] ---
2021-08-19 15:16:09 AEST INFO [main] --- ======== Detect Result ========
2021-08-19 15:16:09 AEST INFO [main] ---
2021-08-19 15:16:09 AEST INFO [main] --- Black Duck Project BOM: https://<private-blackduck-portal>/api/projects/e51f552a-e328-4224-9941-c5b3e74782ba/versions/7f99cf3f-e638-4dac-b924-7ca6f19c32e7/components
2021-08-19 15:16:09 AEST INFO [main] ---
2021-08-19 15:16:09 AEST INFO [main] --- ======== Detect Status ========
2021-08-19 15:16:09 AEST INFO [main] ---
2021-08-19 15:16:09 AEST INFO [main] --- GIT: SUCCESS
2021-08-19 15:16:09 AEST INFO [main] --- GO_MOD: FAILURE
2021-08-19 15:16:09 AEST INFO [main] ---
2021-08-19 15:16:09 AEST INFO [main] --- Signature scan / Snippet scan on /Users/<user>/go/src/<private-github-repo>/projects/my-app: SUCCESS
2021-08-19 15:16:09 AEST INFO [main] --- Overall Status: FAILURE_DETECTOR - Detect had one or more detector failures while extracting dependencies. Check that all projects build and your environment is configured correctly.
2021-08-19 15:16:09 AEST INFO [main] ---
2021-08-19 15:16:09 AEST INFO [main] --- ===============================
2021-08-19 15:16:09 AEST INFO [main] ---
2021-08-19 15:16:09 AEST INFO [main] --- Detect duration: 00h 00m 32s 246ms
2021-08-19 15:16:09 AEST ERROR [main] --- Exiting with code 5 - FAILURE_DETECTOR
I don't know why this is only an issue during Black Duck scans, and I'm not sure what else to try - any help would be appreciated.
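One thing I have not tried yet is passing GOPROXY through sudo explicitly when invoking Detect, in case the stripped root environment is the problem. A sketch of that, with a placeholder for our private mirror URL:
# env(1) sets GOPROXY only for this invocation; the rest of the options stay the same as above
sudo env GOPROXY="https://<private-go-proxy-mirror>" java -jar detect.jar \
--blackduck.trust.cert=true \
... (remaining options unchanged)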

Related

The sqoop import action job in oozie is still running

The Sqoop import action job in Oozie keeps running and never finishes.
What should I check?
Oozie version: 5.2.1
Hadoop version: 3.3.4
Sqoop version: 1.4.7
>>> Invoking Sqoop command line now >>>
2022-10-26 08:26:56,510 [main] WARN org.apache.sqoop.tool.SqoopTool - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
2022-10-26 08:26:56,543 [main] INFO org.apache.sqoop.Sqoop - Running Sqoop version: 1.4.7
2022-10-26 08:26:56,556 [main] WARN org.apache.sqoop.tool.BaseSqoopTool - Setting your password on the command-line is insecure. Consider using -P instead.
2022-10-26 08:26:56,566 [main] WARN org.apache.sqoop.ConnFactory - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
2022-10-26 08:26:56,607 [main] INFO org.apache.sqoop.manager.MySQLManager - Preparing to use a MySQL streaming resultset.
2022-10-26 08:26:56,607 [main] INFO org.apache.sqoop.tool.CodeGenTool - Beginning code generation
2022-10-26 08:26:56,890 [main] INFO org.apache.sqoop.manager.SqlManager - Executing SQL statement: SELECT t.* FROM `calendar` AS t LIMIT 1
2022-10-26 08:26:56,912 [main] INFO org.apache.sqoop.manager.SqlManager - Executing SQL statement: SELECT t.* FROM `calendar` AS t LIMIT 1
2022-10-26 08:26:56,923 [main] INFO org.apache.sqoop.orm.CompilationManager - $HADOOP_MAPRED_HOME is not set
log4j: Finalizing appender named [EventCounter].
2022-10-26 08:26:58,284 [main] INFO org.apache.sqoop.orm.CompilationManager - Writing jar file: /tmp/sqoop-hadoop/compile/92647049c21a99ec3fe668f737f0bf1a/calendar.jar
2022-10-26 08:26:58,296 [main] WARN org.apache.sqoop.manager.MySQLManager - It looks like you are importing from mysql.
2022-10-26 08:26:58,296 [main] WARN org.apache.sqoop.manager.MySQLManager - This transfer can be faster! Use the --direct
2022-10-26 08:26:58,296 [main] WARN org.apache.sqoop.manager.MySQLManager - option to exercise a MySQL-specific fast path.
2022-10-26 08:26:58,296 [main] INFO org.apache.sqoop.manager.MySQLManager - Setting zero DATETIME behavior to convertToNull (mysql)
2022-10-26 08:26:58,305 [main] INFO org.apache.sqoop.mapreduce.ImportJobBase - Beginning import of calendar
2022-10-26 08:26:58,306 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2022-10-26 08:26:58,311 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.jar is deprecated. Instead, use mapreduce.job.jar
2022-10-26 08:26:58,329 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
2022-10-26 08:26:58,331 [main] WARN org.apache.sqoop.mapreduce.JobBase - SQOOP_HOME is unset. May not be able to find all job dependencies.
2022-10-26 08:26:58,393 [main] INFO org.apache.hadoop.yarn.client.DefaultNoHARMFailoverProxyProvider - Connecting to ResourceManager at bigdata/172.3.031.123:8032
2022-10-26 08:26:58,496 [main] INFO org.apache.hadoop.mapreduce.JobResourceUploader - Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/oozie/.staging/job_1666660764861_0196
2022-10-26 08:26:58,659 [main] INFO org.apache.sqoop.mapreduce.db.DBInputFormat - Using read commited transaction isolation
2022-10-26 08:26:58,694 [main] INFO org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
2022-10-26 08:26:58,800 [main] INFO org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_1666660764861_0196
2022-10-26 08:26:58,801 [main] INFO org.apache.hadoop.mapreduce.JobSubmitter - Executing with tokens: [Kind: YARN_AM_RM_TOKEN, Service: , Ident: (appAttemptId { application_id { id: 195 cluster_timestamp: 1666660764861 } attemptId: 1 } keyId: 1397232171)]
2022-10-26 08:26:58,980 [main] INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted application application_1666660764861_0196
2022-10-26 08:26:59,016 [main] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://bigdata:8088/proxy/application_1666660764861_0196/
2022-10-26 08:26:59,016 [main] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://bigdata:8088/proxy/application_1666660764861_0196/
2022-10-26 08:26:59,017 [main] INFO org.apache.hadoop.mapreduce.Job - Running job: job_1666660764861_0196
2022-10-26 08:26:59,017 [main] INFO org.apache.hadoop.mapreduce.Job - Running job: job_1666660764861_0196
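For reference, a few generic YARN checks that can show whether the Sqoop MapReduce job is simply waiting for resources next to the Oozie launcher; the application id is the one from the log above, and the commands assume the default ResourceManager is reachable:
# An ACCEPTED MapReduce job next to a RUNNING Oozie launcher usually means no free containers
yarn application -list -appStates RUNNING,ACCEPTED
# Status of the job submitted in the log above
yarn application -status application_1666660764861_0196
# Per-node resource usage
yarn node -list -all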

Limit jdbc connection pool fixed amount

Hi, I use Micronaut Data together with various JDBC connection pools.
I first used HikariCP and also tried the Tomcat pool.
I assumed that setting the datasource to maximum-pool-size: 10 would result in at most 10 open connections.
But there seems to be a lot of opening and closing going on, and together with many requests arriving at the same time it uses far more than 10 connections. The problem is that the Azure PostgreSQL server only allows 100 connections in total.
Currently I have 7 apps accessing that database, which I expected to result in at most 70 connections in total, but in reality it is much more.
I also tried the Tomcat JDBC pool; it behaves a little differently, but it also uses more than 10 connections. I checked with a Java profiler as well and found that at times there are up to 100 connection open/close events per second.
Any suggestions on how to handle this, other than using a second database?
I was hoping the pool would buffer the calls, especially because they come from a Kafka topic.
But apparently that is not the case.
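For reference, a quick way to see how many connections each app really holds on the PostgreSQL side, independent of what the pools report; the host and user below are placeholders and the query only needs read access to pg_stat_activity:
# Count open connections grouped by client address and application name
psql "host=<azure-postgres-host> dbname=postgres user=<admin-user>" \
  -c "SELECT client_addr, application_name, count(*) FROM pg_stat_activity GROUP BY 1, 2 ORDER BY 3 DESC;"
If the per-app counts really stay at or below 10 here, the extra open/close events seen in the profiler would be pool churn (connections being retired and replaced) rather than extra concurrent connections.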
--- Edit: adding the HikariCP log
Here is the log output from HikariCP:
2020-12-11 11:59:40,983 [main] DEBUG com.zaxxer.hikari.HikariConfig - Driver class org.postgresql.Driver found in Thread context class loader jdk.internal.loader.ClassLoaders$AppClassLoader@2c13da15
2020-12-11 11:59:40,993 [main] DEBUG com.zaxxer.hikari.HikariConfig - HikariPool-1 - configuration:
2020-12-11 11:59:40,999 [main] DEBUG com.zaxxer.hikari.HikariConfig - allowPoolSuspension.............false
2020-12-11 11:59:41,000 [main] DEBUG com.zaxxer.hikari.HikariConfig - autoCommit......................true
2020-12-11 11:59:41,000 [main] DEBUG com.zaxxer.hikari.HikariConfig - catalog.........................none
2020-12-11 11:59:41,001 [main] DEBUG com.zaxxer.hikari.HikariConfig - connectionInitSql...............none
2020-12-11 11:59:41,001 [main] DEBUG com.zaxxer.hikari.HikariConfig - connectionTestQuery............."SELECT 1;"
2020-12-11 11:59:41,001 [main] DEBUG com.zaxxer.hikari.HikariConfig - connectionTimeout...............30000
2020-12-11 11:59:41,002 [main] DEBUG com.zaxxer.hikari.HikariConfig - dataSource......................none
2020-12-11 11:59:41,002 [main] DEBUG com.zaxxer.hikari.HikariConfig - dataSourceClassName.............none
2020-12-11 11:59:41,002 [main] DEBUG com.zaxxer.hikari.HikariConfig - dataSourceJNDI..................none
2020-12-11 11:59:41,003 [main] DEBUG com.zaxxer.hikari.HikariConfig - dataSourceProperties............{password=<masked>}
2020-12-11 11:59:41,004 [main] DEBUG com.zaxxer.hikari.HikariConfig - driverClassName................."org.postgresql.Driver"
2020-12-11 11:59:41,004 [main] DEBUG com.zaxxer.hikari.HikariConfig - exceptionOverrideClassName......none
2020-12-11 11:59:41,004 [main] DEBUG com.zaxxer.hikari.HikariConfig - healthCheckProperties...........{}
2020-12-11 11:59:41,005 [main] DEBUG com.zaxxer.hikari.HikariConfig - healthCheckRegistry.............none
2020-12-11 11:59:41,005 [main] DEBUG com.zaxxer.hikari.HikariConfig - idleTimeout.....................600000
2020-12-11 11:59:41,005 [main] DEBUG com.zaxxer.hikari.HikariConfig - initializationFailTimeout.......1
2020-12-11 11:59:41,006 [main] DEBUG com.zaxxer.hikari.HikariConfig - isolateInternalQueries..........false
2020-12-11 11:59:41,007 [main] DEBUG com.zaxxer.hikari.HikariConfig - jdbcUrl.........................jdbc:postgresql://URL:5432/postgres
2020-12-11 11:59:41,007 [main] DEBUG com.zaxxer.hikari.HikariConfig - leakDetectionThreshold..........0
2020-12-11 11:59:41,007 [main] DEBUG com.zaxxer.hikari.HikariConfig - maxLifetime.....................1800000
2020-12-11 11:59:41,008 [main] DEBUG com.zaxxer.hikari.HikariConfig - maximumPoolSize.................10
2020-12-11 11:59:41,008 [main] DEBUG com.zaxxer.hikari.HikariConfig - metricRegistry..................none
2020-12-11 11:59:41,008 [main] DEBUG com.zaxxer.hikari.HikariConfig - metricsTrackerFactory...........none
2020-12-11 11:59:41,009 [main] DEBUG com.zaxxer.hikari.HikariConfig - minimumIdle.....................10
2020-12-11 11:59:41,009 [main] DEBUG com.zaxxer.hikari.HikariConfig - password........................<masked>
2020-12-11 11:59:41,009 [main] DEBUG com.zaxxer.hikari.HikariConfig - poolName........................"HikariPool-1"
2020-12-11 11:59:41,010 [main] DEBUG com.zaxxer.hikari.HikariConfig - readOnly........................false
2020-12-11 11:59:41,010 [main] DEBUG com.zaxxer.hikari.HikariConfig - registerMbeans..................false
2020-12-11 11:59:41,011 [main] DEBUG com.zaxxer.hikari.HikariConfig - scheduledExecutor...............none
2020-12-11 11:59:41,011 [main] DEBUG com.zaxxer.hikari.HikariConfig - schema.........................."SCHEMA"
2020-12-11 11:59:41,011 [main] DEBUG com.zaxxer.hikari.HikariConfig - threadFactory...................internal
2020-12-11 11:59:41,011 [main] DEBUG com.zaxxer.hikari.HikariConfig - transactionIsolation............default
2020-12-11 11:59:41,012 [main] DEBUG com.zaxxer.hikari.HikariConfig - username........................"USERNAME"
2020-12-11 11:59:41,012 [main] DEBUG com.zaxxer.hikari.HikariConfig - validationTimeout...............5000
I found the mistake - or at least something that solves the issue.
While saving data to the database I also try to update the cache.
But due to the change to Caffeine's LoadingCache, each save also resulted in a get on exactly the same data object instance.
My guess is that this, combined with the transaction, is what caused the trouble.
After replacing the cache.get with cache.replace, everything works fine.

Vaadin 14.1.17 startup very slow

I'm testing the empty Vaadin project generated from the "Get Started" page (here).
I'm facing a very slow startup time (up to 6 minutes) and I don't really understand what's going on. I tried setting vaadin.servlet.productionMode=true, but it doesn't solve the issue.
Is there a way to produce a more verbose startup log? Below you can find the current log.
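For reference, a sketch of how more startup detail could be requested, assuming the standard Spring Boot logging properties apply and using a placeholder jar name; the same logging.level setting could also go into application.properties:
# --debug enables Spring Boot's debug report; logging.level.com.vaadin raises Vaadin's log level
java -Dlogging.level.com.vaadin=DEBUG -jar target/my-project.jar --debug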
2020-02-13 09:18:14.399 INFO 12084 --- [ restartedMain] it.my-project.Application : Starting Application on XCR10248 with PID 12084 (C:\Dev\workspace-eclipse-2019\my-project\target\classes started by cr10248 in C:\Dev\workspace-eclipse-2019\my-project)
2020-02-13 09:18:14.399 INFO 12084 --- [ restartedMain] it.my-project.Application : No active profile set, falling back to default profiles: default
2020-02-13 09:18:14.446 INFO 12084 --- [ restartedMain] .e.DevToolsPropertyDefaultsPostProcessor : Devtools property defaults active! Set 'spring.devtools.add-properties' to 'false' to disable
2020-02-13 09:18:14.446 INFO 12084 --- [ restartedMain] .e.DevToolsPropertyDefaultsPostProcessor : For additional web related logging consider setting the 'logging.level.web' property to 'DEBUG'
2020-02-13 09:18:15.868 INFO 12084 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2020-02-13 09:18:15.872 INFO 12084 --- [ restartedMain] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2020-02-13 09:18:15.872 INFO 12084 --- [ restartedMain] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.27]
2020-02-13 09:18:16.326 INFO 12084 --- [ restartedMain] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2020-02-13 09:18:16.326 INFO 12084 --- [ restartedMain] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 1880 ms
2020-02-13 09:18:19.373 INFO 12084 --- [ restartedMain] c.v.f.s.VaadinServletContextInitializer : Search for subclasses and classes with annotations took 2 seconds
2020-02-13 09:18:19.381 INFO 12084 --- [ restartedMain] c.v.f.server.startup.DevModeInitializer : Starting dev-mode updaters in C:\Dev\workspace-eclipse-2019\my-project folder.
2020-02-13 09:18:19.412 INFO 12084 --- [ restartedMain] dev-updater : Visited 94 classes. Took 31 ms.
2020-02-13 09:18:19.444 INFO 12084 --- [ restartedMain] dev-updater : Skipping `npm install`.
2020-02-13 09:18:19.444 INFO 12084 --- [ restartedMain] dev-updater : Copying frontend resources from jar files ...
2020-02-13 09:18:19.819 INFO 12084 --- [ restartedMain] dev-updater : Visited 12 resources. Took 372 ms.
2020-02-13 09:18:19.850 INFO 12084 --- [ restartedMain] dev-updater : No js modules to update 'C:\Dev\workspace-eclipse-2019\my-project\target\frontend\generated-flow-imports.js' file
2020-02-13 09:24:48.821 INFO 12084 --- [ restartedMain] dev-webpack : Starting webpack-dev-server, port: 63432 dir: C:\Dev\workspace-eclipse-2019\my-project
2020-02-13 09:24:48.951 INFO 12084 --- [ restartedMain] dev-webpack : Running webpack to compile frontend resources. This may take a moment, please stand by...
2020-02-13 09:25:18.986 INFO 12084 --- [ restartedMain] dev-webpack : Webpack startup and compilation completed in 30165ms
2020-02-13 09:25:19.211 INFO 12084 --- [ restartedMain] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2020-02-13 09:25:19.649 INFO 12084 --- [ restartedMain] o.s.b.d.a.OptionalLiveReloadServer : LiveReload server is running on port 35729
2020-02-13 09:25:19.836 INFO 12084 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2020-02-13 09:25:19.836 INFO 12084 --- [ restartedMain] it.my-project.Application : Started Application in 425.889 seconds (JVM running for 427.06)
Thanks
Davide
I'm answering my own question since I found (at least I hope :D) the "solution".
As almost always, Eclipse didn't keep track of the project's generated static resources (JS files in this case), and for some reason a full rebuild seemed to be performed every time, resulting in long waits.
Simply pressing F5 (refresh) on the project root did the trick, and now the startup takes 5 seconds.
I hope this helps other developers.

GPU resource for hadoop 3.0 / yarn

I am trying to use the Hadoop 3.0 GA release with GPUs, but when I execute the shell command below, I get an error and the GPU is not used. Please check the log below; the shell command I ran follows it. I suspect I have misconfigured something.
2018-01-09 15:04:49,256 INFO [main] distributedshell.ApplicationMaster (ApplicationMaster.java:main(355)) - Initializing ApplicationMaster
2018-01-09 15:04:49,391 INFO [main] distributedshell.ApplicationMaster (ApplicationMaster.java:init(514)) - Application master for app, appId=1, clustertimestamp=1515477741976, attemptId=1
2018-01-09 15:04:49,418 WARN [main] distributedshell.ApplicationMaster (ApplicationMaster.java:init(626)) - Timeline service is not enabled
2018-01-09 15:04:49,418 INFO [main] distributedshell.ApplicationMaster (ApplicationMaster.java:run(649)) - Starting ApplicationMaster
2018-01-09 15:04:49,542 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(60)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-01-09 15:04:49,623 INFO [main] distributedshell.ApplicationMaster (ApplicationMaster.java:run(659)) - Executing with tokens:
2018-01-09 15:04:49,744 INFO [main] distributedshell.ApplicationMaster (ApplicationMaster.java:run(662)) - Kind: YARN_AM_RM_TOKEN, Service: , Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1515477741976 } attemptId: 1 } keyId: 1619387150)
2018-01-09 15:04:49,801 INFO [main] client.RMProxy (RMProxy.java:newProxyInstance(133)) - Connecting to ResourceManager at /0.0.0.0:8030
2018-01-09 15:04:49,886 INFO [main] impl.NMClientAsyncImpl (NMClientAsyncImpl.java:serviceInit(138)) - Upper bound of the thread pool size is 500
2018-01-09 15:04:49,889 WARN [main] distributedshell.ApplicationMaster (ApplicationMaster.java:run(786)) - Timeline service is not enabled
2018-01-09 15:04:50,170 INFO [main] conf.Configuration (Configuration.java:getConfResourceAsInputStream(2656)) - resource-types.xml not found
2018-01-09 15:04:50,170 INFO [main] resource.ResourceUtils (ResourceUtils.java:addResourcesFileToConf(395)) - Unable to find 'resource-types.xml'.
2018-01-09 15:04:50,183 WARN [main] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:50,185 WARN [main] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:50,185 WARN [main] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:50,185 WARN [main] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:50,187 WARN [main] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:50,187 WARN [main] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:50,188 WARN [main] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:50,188 WARN [main] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:50,188 WARN [main] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:50,188 INFO [main] distributedshell.ApplicationMaster (ApplicationMaster.java:run(717)) - Max mem capability of resources in this cluster 8192
2018-01-09 15:04:50,188 INFO [main] distributedshell.ApplicationMaster (ApplicationMaster.java:run(720)) - Max vcores capability of resources in this cluster 4
2018-01-09 15:04:50,189 INFO [main] distributedshell.ApplicationMaster (ApplicationMaster.java:run(739)) - appattempt_1515477741976_0001_000001 received 0 previous attempts' running containers on AM registration.
2018-01-09 15:04:50,202 INFO [main] distributedshell.ApplicationMaster (ApplicationMaster.java:setupContainerAskForRM(1311)) - Requested container ask: Capability[<memory:-1, vCores:-1>]Priority[0]AllocationRequestId[0]ExecutionTypeRequest[{Execution Type: GUARANTEED, Enforce Execution Type: false}]Resource Profile[gpu-1]
2018-01-09 15:04:50,246 WARN [AMRM Heartbeater thread] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:51,255 WARN [AMRM Heartbeater thread] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:52,273 WARN [AMRM Heartbeater thread] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
2018-01-09 15:04:52,278 INFO [AMRM Callback Handler Thread] distributedshell.ApplicationMaster (ApplicationMaster.java:onContainersAllocated(957)) - Got response from RM for container ask, allocatedCnt=1
2018-01-09 15:04:52,278 WARN [AMRM Callback Handler Thread] pb.ResourcePBImpl (ResourcePBImpl.java:initResources(142)) - Got unknown resource type: yarn.io/gpu; skipping
The shell command I executed, following the YARN-7223 ticket, is:
yarn jar <path/to/hadoop-yarn-applications-distributedshell.jar> \
  -jar <path/to/hadoop-yarn-applications-distributedshell.jar> \
  -shell_command /usr/local/nvidia/bin/nvidia-smi \
  -container_resource_profile gpu-1
Thanks in advance.
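For reference, the "Unable to find 'resource-types.xml'" and "Got unknown resource type: yarn.io/gpu; skipping" warnings in the log suggest that the client/ApplicationMaster side has no resource-types.xml declaring the GPU resource type. A minimal sketch of that file, as described in the Hadoop 3.x GPU documentation; the configuration directory path is an assumption:
# On the node(s) that submit the job / run the AM (adjust the conf dir to your installation)
cat > /etc/hadoop/conf/resource-types.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.resource-types</name>
    <value>yarn.io/gpu</value>
  </property>
</configuration>
EOF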

MapReduce job is failing with an error failed to write data

I'm trying to export data from Teradata to Hadoop, but my export query is failing with the error "Failed to write data". Please look at the MapReduce and ApplicationMaster logs below:
Log Type: syslog
Log Upload Time: Tue Mar 08 22:59:27 -0800 2016
Log Length: 4931
2016-03-08 22:47:07,414 WARN [main] org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-maptask.properties,hadoop-metrics2.properties
2016-03-08 22:47:07,499 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-03-08 22:47:07,499 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started
2016-03-08 22:47:07,509 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens:
2016-03-08 22:47:07,510 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: mapreduce.job, Service: job_1457504560070_0004, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@175b9425)
2016-03-08 22:47:07,556 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: RM_DELEGATION_TOKEN, Service: 39.7.48.2:8032,39.7.48.3:8032, Ident: (owner=hive, renewer=oozie mr token, realUser=oozie, issueDate=1457506410968, maxDate=1458111210968, sequenceNumber=908, masterKeyId=280)
2016-03-08 22:47:07,599 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.
2016-03-08 22:47:07,848 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /data1/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data2/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data3/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data4/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data5/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data6/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data7/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data8/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data9/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data10/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004,/data12/hadoop/yarn/local/usercache/hive/appcache/application_1457504560070_0004
2016-03-08 22:47:08,132 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2016-03-08 22:47:08,623 INFO [main] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2016-03-08 22:47:08,840 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: com.teradata.dynaload.hcatalog.mapper.TDInputFormat$TeradataInputSplit@2ece4966
2016-03-08 22:47:08,844 INFO [main] com.teradata.dynaload.hcatalog.mapper.TDInputFormat$TeradataRecordReader: recordreader class com.teradata.dynaload.hcatalog.mapper.TDInputFormat$TeradataRecordReaderinitialize time is: 1457506028844
2016-03-08 22:47:09,512 INFO [main] org.apache.hadoop.mapred.MapTask: (EQUATOR) 0 kvi 300417020(1201668080)
2016-03-08 22:47:09,512 INFO [main] org.apache.hadoop.mapred.MapTask: mapreduce.task.io.sort.mb: 1146
2016-03-08 22:47:09,512 INFO [main] org.apache.hadoop.mapred.MapTask: soft limit at 841167680
2016-03-08 22:47:09,512 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufvoid = 1201668096
2016-03-08 22:47:09,512 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 300417020; length = 75104256
2016-03-08 22:47:09,515 INFO [main] org.apache.hadoop.mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2016-03-08 22:47:09,518 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
2016-03-08 22:47:09,518 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
2016-03-08 22:47:09,848 WARN [main] org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.metastore.local does not exist
2016-03-08 22:47:09,914 INFO [main] hive.metastore: Trying to connect to metastore with URI thrift://apus2.labs.teradata.com:9083
2016-03-08 22:47:09,951 INFO [main] hive.metastore: Connected to metastore.
2016-03-08 22:47:10,407 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
2016-03-08 22:47:10,452 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 1
2016-03-08 22:47:10,453 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.work.output.dir is deprecated. Instead, use mapreduce.task.output.dir
2016-03-08 22:47:10,453 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
2016-03-08 22:47:10,457 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
ApplicationMaster logs:
Log Type: stderr
Log Upload Time: Tue Mar 08 22:59:27 -0800 2016
Log Length: 240
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Log Type: stdout
Log Upload Time: Tue Mar 08 22:59:27 -0800 2016
Log Length: 0
Log Type: syslog
Log Upload Time: Tue Mar 08 22:59:27 -0800 2016
Log Length: 66959
Showing 4096 bytes of 66959 total.
ILED
2016-03-08 22:59:19,325 INFO [Thread-89] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In stop, writing event JOB_FAILED
2016-03-08 22:59:19,456 INFO [Thread-89] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://C423A:8020/user/hive/.staging/job_1457504560070_0004/job_1457504560070_0004_1.jhist to hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004-1457506422934-hive-oozie%3Aaction%3AT%3Djava%3AW%3DTDExportMR%3AA%3Dexport%3AID%3D00001-1457506759193-0-0-FAILED-default-1457506429243.jhist_tmp
2016-03-08 22:59:19,550 INFO [Thread-89] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004-1457506422934-hive-oozie%3Aaction%3AT%3Djava%3AW%3DTDExportMR%3AA%3Dexport%3AID%3D00001-1457506759193-0-0-FAILED-default-1457506429243.jhist_tmp
2016-03-08 22:59:19,562 INFO [Thread-89] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://C423A:8020/user/hive/.staging/job_1457504560070_0004/job_1457504560070_0004_1_conf.xml to hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004_conf.xml_tmp
2016-03-08 22:59:19,614 INFO [Thread-89] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004_conf.xml_tmp
2016-03-08 22:59:19,645 INFO [Thread-89] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004.summary_tmp to hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004.summary
2016-03-08 22:59:19,654 INFO [Thread-89] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004_conf.xml_tmp to hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004_conf.xml
2016-03-08 22:59:19,666 INFO [Thread-89] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004-1457506422934-hive-oozie%3Aaction%3AT%3Djava%3AW%3DTDExportMR%3AA%3Dexport%3AID%3D00001-1457506759193-0-0-FAILED-default-1457506429243.jhist_tmp to hdfs://C423A:8020/mr-history/tmp/hive/job_1457504560070_0004-1457506422934-hive-oozie%3Aaction%3AT%3Djava%3AW%3DTDExportMR%3AA%3Dexport%3AID%3D00001-1457506759193-0-0-FAILED-default-1457506429243.jhist
2016-03-08 22:59:19,666 INFO [Thread-89] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. super.stop()
2016-03-08 22:59:19,671 INFO [Thread-89] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: Setting job diagnostics to Task failed task_1457504560070_0004_m_000004
Job failed as tasks failed. failedMaps:1 failedReduces:0
2016-03-08 22:59:19,672 INFO [Thread-89] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: History url is http://apus2.labs.teradata.com:19888/jobhistory/job/job_1457504560070_0004
2016-03-08 22:59:19,680 INFO [Thread-89] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: Waiting for application to be successfully unregistered.
2016-03-08 22:59:20,682 INFO [Thread-89] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:7 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:7 ContRel:0 HostLocal:6 RackLocal:1
2016-03-08 22:59:20,684 INFO [Thread-89] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory hdfs://C423A /user/hive/.staging/job_1457504560070_0004
2016-03-08 22:59:20,711 INFO [Thread-89] org.apache.hadoop.ipc.Server: Stopping server on 46067
2016-03-08 22:59:20,712 INFO [IPC Server listener on 46067] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 46067
2016-03-08 22:59:20,712 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2016-03-08 22:59:20,714 INFO [TaskHeartbeatHandler PingChecker] org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler: TaskHeartbeatHandler thread interrupted.
Please help me resolve this issue.
You must be using Sqoop to bring data into Hadoop. Please paste the command you are running. For "Failed to write data" there can be multiple causes: the destination parent directory is not available, there is no space left on the cluster, and so on. Only the command can tell us more.
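For example, a few quick checks along those lines; the destination path is a placeholder and dfsadmin may require HDFS superuser rights:
# Does the parent of the destination directory exist and is it writable by the job user?
hdfs dfs -ls /path/to/destination-parent
# Overall HDFS capacity and remaining space
hdfs dfs -df -h /
hdfs dfsadmin -report | head -n 20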
