I'm trying to deploy Liferay CE 7.4 in Kubernetes and I can't connect to Elasticsearch 7.14.0. I get the following error:
2022-03-19 20:06:29.375 ERROR [main][ElasticsearchEngineConfigurator:93] bundle com.liferay.portal.search.elasticsearch7.impl:6.0.30 (1134)[com.liferay.portal.search.elasticsearch7.internal.ElasticsearchEngineConfigurator(3789)] : The activate method has thrown an exception
java.lang.RuntimeException: org.elasticsearch.ElasticsearchException: ElasticsearchException[java.util.concurrent.ExecutionException: java.net.ConnectException: Timeout connecting to [search/10.110.10.150:9200]]; nested: ExecutionException[java.net.ConnectException: Timeout connecting to [search/10.110.10.150:9200]]; nested: ConnectException[Timeout connecting to [search/10.110.10.150:9200]];
at org.elasticsearch.client.RestHighLevelClient.performClientRequest(RestHighLevelClient.java:2078) ~[?:?]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1732) ~[?:?]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1702) ~[?:?]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1672) ~[?:?]
at org.elasticsearch.client.ClusterClient.health(ClusterClient.java:119) ~[?:?]
at com.liferay.portal.search.elasticsearch7.internal.search.engine.adapter.cluster.HealthClusterRequestExecutorImpl._getClusterHealthResponse(HealthClusterRequestExecutorImpl.java:112) ~[?:?]
I have verified that Elasticsearch is correctly deployed by running:
kubectl port-forward search-59fcc9c4f6-brhcv 9200
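Once the port-forward is active, a health check from another shell should return the cluster status as JSON (localhost:9200 being the forwarded port):
curl -s http://localhost:9200/_cluster/health?pretty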
My file com.liferay.portal.search.elasticsearch7.configuration.ElasticsearchConfiguration.config:
additionalConfigurations=""
additionalIndexConfigurations=""
additionalTypeMappings=""
authenticationEnabled="false"
bootstrapMlockAll="false"
clusterName="LiferayElasticsearchCluster"
discoveryZenPingUnicastHostsPort="9300-9400"
embeddedHttpPort="9200"
httpCORSAllowOrigin="/https?:\\/\\/localhost(:[0-9]+)?/"
httpCORSConfigurations=""
httpCORSEnabled="true"
httpSSLEnabled="false"
indexNamePrefix="liferay-"
indexNumberOfReplicas=""
indexNumberOfShards=""
logExceptionsOnly="true"
networkBindHost=""
networkHost=""
networkHostAddresses=[ \
"", \
]
networkPublishHost=""
nodeName=""
operationMode="REMOTE"
overrideTypeMappings=""
productionModeEnabled="true"
proxyHost=""
proxyPort="0"
proxyUserName=""
remoteClusterConnectionId="RemoteElasticSearchCluster"
restClientLoggerLevel="ERROR"
sidecarDebug="false"
sidecarDebugSettings="-agentlib:jdwp\=transport\=dt_socket,address\=8001,server\=y,suspend\=y,quiet\=y"
sidecarHeartbeatInterval="10000"
sidecarHome="elasticsearch7"
sidecarHttpPort=""
sidecarJVMOptions=[ \
"-Xms1g", \
"-Xmx1g", \
"-XX:+AlwaysPreTouch", \
]
sidecarShutdownTimeout="10000"
trackTotalHits="true"
transportTcpPort=""
truststorePath="/path/to/localhost.p12"
truststoreType="pkcs12"
username="elastic"
And my file com.liferay.portal.search.elasticsearch7.configuration.ElasticsearchConnectionConfiguration.config:
active="true"
authenticationEnabled="false"
connectionId="RemoteElasticSearchCluster"
httpSSLEnabled="false"
networkHostAddresses=[ \
"search:9200" \
]
proxyHost=""
proxyPort="0"
proxyUserName=""
truststorePath="/path/to/localhost.p12"
truststoreType="pkcs12"
username="elastic"
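For reference, since networkHostAddresses points at the search service by name, the same connectivity can also be checked from inside the cluster, e.g. from the Liferay pod (the pod name liferay-0 is illustrative, and this assumes curl is available in the image):
kubectl exec -it liferay-0 -- curl -s http://search:9200/_cluster/health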
To configure the Elasticsearch connector I followed these pages: http://www.liferaysavvy.com/2021/07/configure-remote-elasticsearch-cluster.html and https://liferay.dev/blogs/-/blogs/deploying-liferay-7-3-in-kubernetes
Could someone help me?
Thanks in advance.
I am able to do an API scan as well as generate a report when I run the below command from Windows:
docker run -v "$(pwd):/zap/wrk/:rw" -t owasp/zap2docker-weekly zap-api-scan.py -t http://10.170.170.170:1700 /account?field4=448808888888"&"field7=GENERIC01"&"field10=ABC076 -f openapi -r ZAP_Report.htm
Once I switch to running the same command:
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-weekly zap-api-scan.py -t http://10.170.170.170:1700/account?field4=448808888888"&"field7=GENERIC01"&"field10=DCF43 -f openapi -r ~/serverkeys/ZAP_REPORT.htm
from Debian, I get an error; I'm not quite sure what I'm missing:
.....
[ZAP-ActiveScanner-1] WARN org.zaproxy.zap.extension.ascanrules.CommandInjectionScanRule - Command Injection vulnerability check failed for parameter [field10] and payload [';cat /etc/passwd;'] due to an I/O error
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method) ~[?:?]
at java.net.SocketInputStream.socketRead(SocketInputStream.java:115) ~[?:?]
at java.net.SocketInputStream.read(SocketInputStream.java:168) ~[?:?]
at java.net.SocketInputStream.read(SocketInputStream.java:140) ~[?:?]
at java.io.BufferedInputStream.fill(BufferedInputStream.java:252) ~[?:?]
at java.io.BufferedInputStream.read(BufferedInputStream.java:271) ~[?:?]
at org.apache.commons.httpclient.HttpParser.readRawLine(HttpParser.java:78) ~[commons-httpclient-3.1.jar:D-2021-10-25]
at org.apache.commons.httpclient.HttpParser.readLine(HttpParser.java:106) ~[commons-httpclient-3.1.jar:D-2021-10-25]
at org.apache.commons.httpclient.HttpConnection.readLine(HttpConnection.java:1153) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.readLine(MultiThreadedHttpConnectionManager.java:1413) ~[commons-httpclient-3.1.jar:D-2021-10-25]
at org.apache.commons.httpclient.HttpMethodBase.readStatusLine(HttpMethodBase.java:2138) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.zaproxy.zap.ZapGetMethod.readResponse(ZapGetMethod.java:112) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1162) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:470) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:207) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397) ~[commons-httpclient-3.1.jar:D-2021-10-25]
at org.parosproxy.paros.network.HttpSender.executeMethod(HttpSender.java:430) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.network.HttpSender.runMethod(HttpSender.java:672) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.network.HttpSender.send(HttpSender.java:627) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.network.HttpSender.sendAuthenticated(HttpSender.java:602) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.network.HttpSender.sendAuthenticated(HttpSender.java:585) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.network.HttpSender.sendAndReceive(HttpSender.java:490) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.core.scanner.AbstractPlugin.sendAndReceive(AbstractPlugin.java:315) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.core.scanner.AbstractPlugin.sendAndReceive(AbstractPlugin.java:246) ~[zap-D-2021-10-25.jar:D-2021-10-25]
at org.zaproxy.zap.extension.ascanrules.CommandInjectionScanRule.testCommandInjection(CommandInjectionScanRule.java:524) [ascanrules-release-42.zap:?]
at org.zaproxy.zap.extension.ascanrules.CommandInjectionScanRule.scan(CommandInjectionScanRule.java:431) [ascanrules-release-42.zap:?]
at org.parosproxy.paros.core.scanner.AbstractAppParamPlugin.scan(AbstractAppParamPlugin.java:201) [zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.core.scanner.AbstractAppParamPlugin.scan(AbstractAppParamPlugin.java:126) [zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.core.scanner.AbstractAppParamPlugin.scan(AbstractAppParamPlugin.java:87) [zap-D-2021-10-25.jar:D-2021-10-25]
at org.parosproxy.paros.core.scanner.AbstractPlugin.run(AbstractPlugin.java:333) [zap-D-2021-10-25.jar:D-2021-10-25]
at java.lang.Thread.run(Thread.java:829) [?:?]
493852 [Thread-6] INFO org.parosproxy.paros.core.scanner.HostProcess - completed host/plugin http://10.170.4.117:8002 | CommandInjectionScanRule in 421.201s with 84 message(s) sent and 0 alert(s) raised.
493853 [Thread-6] INFO org.parosproxy.paros.core.scanner.HostProcess - start host http://10.170.170.170:1700 | DirectoryBrowsingScanRule strength MEDIUM threshold MEDIUM
493988 [Thread-6] INFO org.parosproxy.paros.core.scanner.HostProcess - completed host/plugin http://10.170.170.170:1700 | DirectoryBrowsingScanRule in 0.136s with 2 message(s) sent and 0 alert(s) raised.
493988 [Thread-6] INFO org.parosproxy.paros.core.scanner.HostProcess - start host http://10.170.170.170:1700 | BufferOverflowScanRule strength MEDIUM threshold MEDIUM
494126 [Thread-6] INFO org.parosproxy.paros.core.scanner.HostProcess - completed host/plugin http://10.170.170.170:1700 | BufferOverflowScanRule in 0.137s with 3 message(s) sent and 0 alert(s) raised.
494126 [Thread-6] INFO org.parosproxy.paros.core.scanner.HostProcess - start host http://10.170.170.170:1700 | FormatStringScanRule strength MEDIUM threshold MEDIUM
494287 [Thread-6] INFO org.parosproxy.paros.core.scanner.HostProcess - completed host/plugin http://10.170.170.170:1700 | FormatStringScanRule in 0.161s with 9 message(s) sent and 0 alert(s) raised.
494287 [Thread-6] INFO org.parosproxy.paros.core.scanner.HostProcess - start host http://10.170.170.170:1700 | CrlfInjectionScanRule strength MEDIUM threshold MEDIUM
494560 [Thread-6] INFO org.parosproxy.paros.core.scanner.HostProcess - completed host/plugin http://10.170.170.170:1700 | CrlfInjectionScanRule in 0.273s with 21 message(s) sent and 0 alert(s) raised.
........
........
Is there any additional tracing I can do on the scan to find out why it's timing out?
It appears the scan is terminating before completing, and it's also pointing to /etc/passwd?
You are not necessarily missing anything.
ZAP typically makes loads of requests to the target. Some of those may time out - that's all this warning is telling you. If you keep getting these, it might be an indication that your site has become unresponsive.
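If you want to rule out the timeouts themselves, you can also try raising ZAP's connection timeout via the scan script's -z pass-through option. A minimal sketch, assuming the connection.timeoutInSecs key used by ZAP releases of that era (the 120s value is illustrative):
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-weekly zap-api-scan.py \
  -t http://10.170.170.170:1700/account -f openapi -r ZAP_Report.htm \
  -z "-config connection.timeoutInSecs=120"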
Below is how I'm creating my Dataproc cluster. While formulating the properties I take care of the network timeout by assigning 3600s, but despite that the executor's heartbeat timed out after 125009 ms. Why is this happening, and what can be done to avoid it?
default_parallelism=512
PROPERTIES="\
spark:spark.executor.cores=2,\
spark:spark.executor.memory=8g,\
spark:spark.executor.memoryOverhead=2g,\
spark:spark.driver.memory=6g,\
spark:spark.driver.maxResultSize=6g,\
spark:spark.kryoserializer.buffer=128m,\
spark:spark.kryoserializer.buffer.max=1024m,\
spark:spark.serializer=org.apache.spark.serializer.KryoSerializer,\
spark:spark.default.parallelism=${default_parallelism},\
spark:spark.rdd.compress=true,\
spark:spark.network.timeout=3600s,\
spark:spark.rpc.message.maxSize=256,\
spark:spark.io.compression.codec=snappy,\
spark:spark.shuffle.service.enabled=true,\
spark:spark.sql.shuffle.partitions=256,\
spark:spark.sql.files.ignoreCorruptFiles=true,\
yarn:yarn.nodemanager.resource.cpu-vcores=8,\
yarn:yarn.scheduler.minimum-allocation-vcores=2,\
yarn:yarn.scheduler.maximum-allocation-vcores=4,\
yarn:yarn.nodemanager.vmem-check-enabled=false,\
capacity-scheduler:yarn.scheduler.capacity.resource-calculator=org.apache.hadoop.yarn.util.resource.DominantResourceCalculator
"
gcloud beta dataproc clusters create $CLUSTER_NAME \
--zone $ZONE \
--region $REGION \
--master-machine-type n1-standard-4 \
--master-boot-disk-size 500 \
--worker-machine-type n1-standard-4 \
--worker-boot-disk-size 500 \
--num-workers 3 \
--bucket $GCS_BUCKET \
--image-version 1.4-ubuntu18 \
--optional-components=ANACONDA,JUPYTER \
--subnet=default \
--enable-component-gateway \
--scopes 'https://www.googleapis.com/auth/cloud-platform'
Below is the error I'm getting:
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5.0 failed 4 times, most recent failure: Lost task 0.3 in stage 5.0 (TID 11, cluster-abc-z-2.c.project_name.internal, executor 5): ExecutorLostFailure (executor 5 exited caused by one of the running tasks) Reason: Executor heartbeat timed out after 125009 ms
You should be setting spark.executor.heartbeatInterval. Its default value is 10s, and the Spark docs note that it should stay significantly lower than spark.network.timeout.
https://spark.apache.org/docs/latest/configuration.html
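A minimal sketch of wiring that in (the 60s value is illustrative; note the create command in the question never actually shows a --properties flag, so one is assumed here, and the rest of the property list from the question would go in the same string):
PROPERTIES="spark:spark.network.timeout=3600s,spark:spark.executor.heartbeatInterval=60s"
gcloud beta dataproc clusters create $CLUSTER_NAME \
  --region $REGION \
  --properties "${PROPERTIES}"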
Run this command:
/bin/nifi.sh stateless RunFromRegistry Once --file ./test/stateless_test1.json
Log:
Note: Use of this command is considered experimental. The commands and approach used may change from time to time.
Java home (JAVA_HOME): /home/deltaman/software/jdk1.8.0_211
Java options (STATELESS_JAVA_OPTS): -Xms1024m -Xmx1024m
13:48:39.835 [main] INFO org.apache.nifi.StatelessNiFi - Unpacking 100 NARs
13:50:51.513 [main] INFO org.apache.nifi.StatelessNiFi - Finished unpacking 100 NARs in 131671 millis
Exception in thread "main" java.lang.reflect.InvocationTargetException
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.StatelessNiFi.main(StatelessNiFi.java:103)
... 5 more
Caused by: java.nio.file.NoSuchFileException: ./test/stateless_test1.json
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at java.nio.file.Files.readAllBytes(Files.java:3152)
at org.apache.nifi.stateless.runtimes.Program.runLocal(Program.java:119)
at org.apache.nifi.stateless.runtimes.Program.launch(Program.java:67)
... 10 more
It seems the file does not exist, but I can find the file as follows:
$ cat ./test/stateless_test1.json
{
"registryUrl": "http://10.148.123.12:9991",
"bucketId": "ec1b291e-c3f1-437c-a4e4-c069bd2f6ed1",
"flowId": "b1f73fe8-2874-47a5-970c-6b25eea19497",
"parameters": {
"text" : "xixixixi"
}
}
I don't know what the problem is.
Any suggestion is appreciated!
/bin/nifi.sh stateless RunFromRegistry Once --file ./test/stateless_test1.json
That is a relative path; you must use a full path, such as:
/home/NiFi/nifi-1.10.0/bin/nifi.sh stateless RunFromRegistry Once --file /home/NiFi/nifi-1.10.0/test/stateless_test1.json
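Equivalently, since the NoSuchFileException suggests the relative path is resolved against the process working directory, running from the NiFi home directory should also work (a sketch, assuming the layout above):
cd /home/NiFi/nifi-1.10.0
./bin/nifi.sh stateless RunFromRegistry Once --file ./test/stateless_test1.json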
I get the following error when starting up Elasticsearch 7.4.2 on the Windows 10 platform. I downloaded the zip file, unzipped it, and tried starting it using elasticsearch.bat. I also tried updating elasticsearch.yml to change the port and host name, but nothing works. Could anyone help with this one? Thanks.
[2019-11-30T00:26:06,319][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [ABHATIA-P51] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: failed to create a child event loop
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:163) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:125) ~[elasticsearch-cli-7.4.2.jar:7.4.2]
at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.4.2.jar:7.4.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.4.2.jar:7.4.2]
Caused by: java.lang.IllegalStateException: failed to create a child event loop
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:88) ~[?:?]
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:58) ~[?:?]
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:47) ~[?:?]
at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:59) ~[?:?]
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:78) ~[?:?]
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:73) ~[?:?]
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:60) ~[?:?]
at org.elasticsearch.transport.netty4.Netty4Transport.doStart(Netty4Transport.java:134) ~[?:?]
at org.elasticsearch.xpack.core.security.transport.netty4.SecurityNetty4Transport.doStart(SecurityNetty4Transport.java:81) ~[?:?]
at org.elasticsearch.xpack.security.transport.netty4.SecurityNetty4ServerTransport.doStart(SecurityNetty4ServerTransport.java:43) ~[?:?]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:59) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.transport.TransportService.doStart(TransportService.java:230) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:59) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.node.Node.start(Node.java:695) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:273) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:358) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.4.2.jar:7.4.2]
... 6 more
Caused by: io.netty.channel.ChannelException: failed to open a new selector
at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:180) ~[?:?]
at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:146) ~[?:?]
at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:138) ~[?:?]
at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:37) ~[?:?]
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84) ~[?:?]
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:58) ~[?:?]
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:47) ~[?:?]
at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:59) ~[?:?]
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:78) ~[?:?]
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:73) ~[?:?]
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:60) ~[?:?]
at org.elasticsearch.transport.netty4.Netty4Transport.doStart(Netty4Transport.java:134) ~[?:?]
at org.elasticsearch.xpack.core.security.transport.netty4.SecurityNetty4Transport.doStart(SecurityNetty4Transport.java:81) ~[?:?]
at org.elasticsearch.xpack.security.transport.netty4.SecurityNetty4ServerTransport.doStart(SecurityNetty4ServerTransport.java:43) ~[?:?]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:59) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.transport.TransportService.doStart(TransportService.java:230) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:59) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.node.Node.start(Node.java:695) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:273) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:358) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.4.2.jar:7.4.2]
... 6 more
Caused by: java.io.IOException: Unable to establish loopback connection
at sun.nio.ch.PipeImpl$Initializer.run(PipeImpl.java:94) ~[?:?]
at sun.nio.ch.PipeImpl$Initializer.run(PipeImpl.java:61) ~[?:?]
at java.security.AccessController.doPrivileged(AccessController.java:554) ~[?:?]
at sun.nio.ch.PipeImpl.<init>(PipeImpl.java:171) ~[?:?]
at sun.nio.ch.SelectorProviderImpl.openPipe(SelectorProviderImpl.java:50) ~[?:?]
at java.nio.channels.Pipe.open(Pipe.java:155) ~[?:?]
at sun.nio.ch.WindowsSelectorImpl.<init>(WindowsSelectorImpl.java:127) ~[?:?]
at sun.nio.ch.WindowsSelectorProvider.openSelector(WindowsSelectorProvider.java:44) ~[?:?]
at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:178) ~[?:?]
at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:146) ~[?:?]
at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:138) ~[?:?]
at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:37) ~[?:?]
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84) ~[?:?]
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:58) ~[?:?]
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:47) ~[?:?]
at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:59) ~[?:?]
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:78) ~[?:?]
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:73) ~[?:?]
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:60) ~[?:?]
at org.elasticsearch.transport.netty4.Netty4Transport.doStart(Netty4Transport.java:134) ~[?:?]
at org.elasticsearch.xpack.core.security.transport.netty4.SecurityNetty4Transport.doStart(SecurityNetty4Transport.java:81) ~[?:?]
at org.elasticsearch.xpack.security.transport.netty4.SecurityNetty4ServerTransport.doStart(SecurityNetty4ServerTransport.java:43) ~[?:?]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:59) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.transport.TransportService.doStart(TransportService.java:230) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:59) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.node.Node.start(Node.java:695) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:273) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:358) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.4.2.jar:7.4.2]
... 6 more
Caused by: java.net.BindException: Address already in use: connect
at sun.nio.ch.Net.connect0(Native Method) ~[?:?]
at sun.nio.ch.Net.connect(Net.java:493) ~[?:?]
at sun.nio.ch.Net.connect(Net.java:482) ~[?:?]
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:732) ~[?:?]
at java.nio.channels.SocketChannel.open(SocketChannel.java:194) ~[?:?]
at sun.nio.ch.PipeImpl$Initializer$LoopbackConnector.run(PipeImpl.java:127) ~[?:?]
at sun.nio.ch.PipeImpl$Initializer.run(PipeImpl.java:76) ~[?:?]
at sun.nio.ch.PipeImpl$Initializer.run(PipeImpl.java:61) ~[?:?]
at java.security.AccessController.doPrivileged(AccessController.java:554) ~[?:?]
at sun.nio.ch.PipeImpl.<init>(PipeImpl.java:171) ~[?:?]
at sun.nio.ch.SelectorProviderImpl.openPipe(SelectorProviderImpl.java:50) ~[?:?]
at java.nio.channels.Pipe.open(Pipe.java:155) ~[?:?]
at sun.nio.ch.WindowsSelectorImpl.<init>(WindowsSelectorImpl.java:127) ~[?:?]
at sun.nio.ch.WindowsSelectorProvider.openSelector(WindowsSelectorProvider.java:44) ~[?:?]
at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:178) ~[?:?]
at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:146) ~[?:?]
at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:138) ~[?:?]
at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:37) ~[?:?]
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84) ~[?:?]
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:58) ~[?:?]
at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:47) ~[?:?]
at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:59) ~[?:?]
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:78) ~[?:?]
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:73) ~[?:?]
at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:60) ~[?:?]
at org.elasticsearch.transport.netty4.Netty4Transport.doStart(Netty4Transport.java:134) ~[?:?]
at org.elasticsearch.xpack.core.security.transport.netty4.SecurityNetty4Transport.doStart(SecurityNetty4Transport.java:81) ~[?:?]
at org.elasticsearch.xpack.security.transport.netty4.SecurityNetty4ServerTransport.doStart(SecurityNetty4ServerTransport.java:43) ~[?:?]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:59) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.transport.TransportService.doStart(TransportService.java:230) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:59) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.node.Node.start(Node.java:695) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:273) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:358) ~[elasticsearch-7.4.2.jar:7.4.2]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.4.2.jar:7.4.2]
... 6 more
Below is the content of the elasticsearch.yml file.
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
#
# Set a custom port for HTTP:
#
http.port: 19300
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
The error message says that the port you configured is already in use.
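On Windows you can check which process holds a given port with netstat; for example, for the 19300 port configured in the elasticsearch.yml above (the PID in the last column can then be looked up in Task Manager):
netstat -ano | findstr :19300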
Try starting up Elasticsearch using its default host and port configuration for both the transport and HTTP protocols by commenting out:
# network.host: localhost
# http.port: 19300
Doing so should configure Elasticsearch to use localhost for both the transport and HTTP protocols; the first available port in the range 9200-9299 will be used for HTTP, while the first available port in the range 9300-9399 will be used for transport.
BTW: rather than specifying localhost, you could/should use the special value "_local_" (see Elasticsearch Reference: Network Settings).
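Once Elasticsearch starts with the defaults, a quick sanity check from a terminal (assuming the node took the first free HTTP port, 9200):
curl http://localhost:9200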
I cannot solve a GCS bucket permission issue when submitting a job to Dataproc.
Here is what I'm doing:
Created a project
Created a bucket xmitya-test
Created a cluster:
gcloud dataproc clusters create cascade --bucket=xmitya-test \
--master-boot-disk-size=80G --master-boot-disk-type=pd-standard \
--num-master-local-ssds=0 --num-masters=1 \
--num-workers=2 --num-worker-local-ssds=0 \
--worker-boot-disk-size=80G --worker-boot-disk-type=pd-standard \
--master-machine-type=n1-standard-2 \
--worker-machine-type=n1-standard-2 \
--zone=us-west1-a --image-version=1.3 \
--properties 'hadoop-env:HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:/etc/tez/conf:/usr/lib/tez/*:/usr/lib/tez/lib/*'
Uploaded the job jar /apps/wordcount.jar and the library /apps/lib/commons-collections-3.2.2.jar
Then submitted the job with the library jar on the classpath:
gcloud dataproc jobs submit hadoop --cluster=cascade \
--jar=gs:/apps/wordcount.jar \
--jars=gs://apps/lib/commons-collections-3.2.2.jar --bucket=xmitya-test \
-- gs:/input/url+page.200.txt gs:/output/wc.out local
Then I get a forbidden error accessing the library file:
java.io.IOException: Error accessing: bucket: apps, object: lib/commons-collections-3.2.2.jar
at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.wrapException(GoogleCloudStorageImpl.java:1957)
at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.getObject(GoogleCloudStorageImpl.java:1983)
at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.getItemInfo(GoogleCloudStorageImpl.java:1870)
at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfo(GoogleCloudStorageFileSystem.java:1156)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.getFileStatus(GoogleHadoopFileSystemBase.java:1058)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:363)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:314)
at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2375)
at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2344)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.copyToLocalFile(GoogleHadoopFileSystemBase.java:1793)
at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2320)
at com.google.cloud.hadoop.services.agent.util.HadoopUtil.download(HadoopUtil.java:70)
at com.google.cloud.hadoop.services.agent.job.AbstractJobHandler.downloadResources(AbstractJobHandler.java:448)
at com.google.cloud.hadoop.services.agent.job.AbstractJobHandler$StartDriver.call(AbstractJobHandler.java:579)
at com.google.cloud.hadoop.services.agent.job.AbstractJobHandler$StartDriver.call(AbstractJobHandler.java:568)
at com.google.cloud.hadoop.services.repackaged.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.cloud.hadoop.services.repackaged.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.cloud.hadoop.services.repackaged.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.google.cloud.hadoop.repackaged.gcs.com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
"code" : 403,
"errors" : [ {
"domain" : "global",
"message" : "714526773712-compute#developer.gserviceaccount.com does not have storage.objects.get access to apps/lib/commons-collections-3.2.2.jar.",
"reason" : "forbidden"
} ],
"message" : "714526773712-compute#developer.gserviceaccount.com does not have storage.objects.get access to apps/lib/commons-collections-3.2.2.jar."
}
at com.google.cloud.hadoop.repackaged.gcs.com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:150)
at com.google.cloud.hadoop.repackaged.gcs.com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113)
at com.google.cloud.hadoop.repackaged.gcs.com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40)
at com.google.cloud.hadoop.repackaged.gcs.com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse(AbstractGoogleClientRequest.java:401)
at com.google.cloud.hadoop.repackaged.gcs.com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1097)
at com.google.cloud.hadoop.repackaged.gcs.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:499)
at com.google.cloud.hadoop.repackaged.gcs.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:432)
at com.google.cloud.hadoop.repackaged.gcs.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:549)
at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.getObject(GoogleCloudStorageImpl.java:1978)
... 23 more
I tried setting read permission from the browser for user 714526773712-compute#developer.gserviceaccount.com, and setting public permissions on all files with gsutil defacl ch -u AllUsers:R gs://xmitya-test and gsutil acl ch -d allUsers:R gs://xmitya-test/** - no effect.
What could be the reason?
Thanks!
It's complaining about access to the apps, input and output buckets that you specified in the parameters of the job submission command:
gcloud dataproc jobs submit hadoop --cluster=cascade --jar=gs:/apps/wordcount.jar --jars=gs://apps/lib/commons-collections-3.2.2.jar --bucket=xmitya-test gs:/input/url+page.200.txt gs:/output/wc.out local
To fix this issue you need to grant access to these buckets, or, if these are folders inside the xmitya-test bucket, you need to specify that explicitly in the path: gs://xmitya-test/apps/wordcount.jar.
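Assuming apps, input and output are folders inside the xmitya-test bucket, the corrected submission would look something like this (paths are a sketch based on the layout above):
gcloud dataproc jobs submit hadoop --cluster=cascade \
  --jar=gs://xmitya-test/apps/wordcount.jar \
  --jars=gs://xmitya-test/apps/lib/commons-collections-3.2.2.jar \
  --bucket=xmitya-test \
  -- gs://xmitya-test/input/url+page.200.txt gs://xmitya-test/output/wc.out local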