Apache Drill failed to connect to HDFS

This is my HDFS version:
NameNode '10.207.78.21:38234'
Started: Mon Feb 02 19:16:43 CST 2015
Version: 1.0.4, r1393290
This is the config of the Drill file system plugin:
{
  "type": "file",
  "enabled": true,
  "connection": "hdfs://10.207.78.21:38234",
  "config": null,
  "workspaces": {
    "root": {
      "location": "/",
      "writable": false,
      "defaultInputFormat": null
    },
    "tmp": {
      "location": "/tmp",
      "writable": true,
      "defaultInputFormat": null
    }
  },
  "formats": ...
This is my test data:
bash-4.3$ ./hadoop fs -cat /test
{"key": "value"}
And Drill fails when executing a query in embedded mode:
0: jdbc:drill:zk=local> SELECT * FROM rpmp.`/test` LIMIT 20;
Error: SYSTEM ERROR: EOFException
[Error Id: fd784c1a-8353-430a-9ae3-08a5154755fe on xxx.com:31010]
(org.apache.drill.exec.work.foreman.ForemanException) Unexpected exception during fragment initialization: Failed to create schema tree: End of File Exception between local host is: "xxx.com/10.95.112.80"; destination host is: "yyy.com":38234; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
org.apache.drill.exec.work.foreman.Foreman.run():262
java.util.concurrent.ThreadPoolExecutor.runWorker():1145
java.util.concurrent.ThreadPoolExecutor$Worker.run():615
java.lang.Thread.run():745
Caused By (org.apache.drill.common.exceptions.DrillRuntimeException) Failed to create schema tree: End of File Exception between local host is: "xxx.com/10.95.112.80"; destination host is: "yyy.com":38234; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
org.apache.drill.exec.ops.QueryContext.getRootSchema():169
org.apache.drill.exec.ops.QueryContext.getRootSchema():151
...

Related

Unable to control swarm ingress network with Ansible

I'm deploying a Docker swarm with Ansible and I would like to ensure the ingress network has been created. To that end, I configured the following task:
- name: Ensure ingress network exists
  docker_network:
    state: present
    name: ingress
    driver: overlay
    driver_options:
      ingress: true
And I'm getting the following error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: docker.errors.NotFound: 404 Client Error for http+docker://localhost/v1.41/networks/ingress/disconnect: Not Found ("No such container: ingress-endpoint")
fatal: [swarm-srv-1]: FAILED! => {"changed": false, "msg": "An unexpected docker error occurred: 404 Client Error for http+docker://localhost/v1.41/networks/ingress/disconnect: Not Found (\"No such container: ingress-endpoint\")"}
I've tried to add some arguments like:
scope: swarm
force: yes
But no changes... I've also tried to delete the ingress network with Ansible (state: absent), but I always get the same error.
Note that I don't face any issue when deleting and recreating the ingress network manually on the swarm: docker network rm ingress
I don't know how to resolve this issue... Any help would be appreciated. Thanks!
Here is some information that may help...
# docker version
Version: 20.10.6
API version: 1.41
Go version: go1.13.15
Git commit: 370c289
Built: Fri Apr 9 22:47:35 2021
OS/Arch: linux/amd64
# docker inspect ingress
[
  {
    "Name": "ingress",
    "Id": "yb2tkhep8vtaj9q7w3mssc9lx",
    "Created": "2021-05-19T05:53:27.524446929-04:00",
    "Scope": "swarm",
    "Driver": "overlay",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": null,
      "Config": [
        {
          "Subnet": "10.0.0.0/24",
          "Gateway": "10.0.0.1"
        }
      ]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": true,
    "ConfigFrom": {
      "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {
      "ingress-sbox": {
        "Name": "ingress-endpoint",
        "EndpointID": "dfdc0f123d21a196c7a815c7e0a886924d0799ae5f3be2d38b64d527ed4620b1",
        "MacAddress": "02:42:0a:00:00:02",
        "IPv4Address": "10.0.0.2/24",
        "IPv6Address": ""
      }
    },
    "Options": {
      "com.docker.network.driver.overlay.vxlanid_list": "4096"
    },
    "Labels": {},
    "Peers": [
      {
        "Name": "8f8932d6f99f",
        "IP": "(ip address here)"
      },
      {
        "Name": "28b9ca95dcf0",
        "IP": "(ip address here)"
      },
      {
        "Name": "f7c48c8af2f5",
        "IP": "(ip address here)"
      }
    ]
  }
]
I had the exact same issue when trying to customize the IP range of the ingress network. It looks like the docker_network module does not support modifying swarm-specific networks: there is an open GitHub issue for this.
I went for the ugly workaround of removing the network through a shell command (docker network rm ingress) and adding it again. When adding it back with the docker_network module, I found that creation doesn't work either (it fails to set the ingress property of the network), so I ended up doing both the remove and the create operation through shell commands.
Since the removal will trigger a confirmation dialogue:
WARNING! Before removing the routing-mesh network, make sure all the nodes in your swarm run the same docker engine version. Otherwise, removal may not be effective and functionality of newly create ingress networks will be impaired.
Are you sure you want to continue? [y/N]
I used the expect module to confirm the dialogue:
- name: remove default ingress network
  ansible.builtin.expect:
    command: docker network rm ingress
    responses:
      "[y/N]": "y"

- name: create customized ingress network
  shell: "docker network create --ingress --subnet {{ docker_ingress_network }} --driver overlay ingress"
It is not perfect but it works.
There was one last problem I experienced: when running this on an existing swarm, I ended up with network issues on the node where I ran it (somehow the docker_gwbridge network on that node could not handle the change). The fix was to fully remove the node and re-join the swarm, which regenerates docker_gwbridge, as sketched below.
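A minimal sketch of that recovery; the node name, join token, and manager IP are placeholders:
# On the affected node: leave the swarm (--force is needed on managers)
docker swarm leave --force
# On a remaining manager: remove the stale node entry
docker node rm <node>
# On a manager: print the join command, including the current token
docker swarm join-token worker
# Back on the affected node: re-join, which recreates docker_gwbridge
docker swarm join --token <token> <manager-ip>:2377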

Error importing Kibana dashboards: fail to create the Kibana loader: Error creating Kibana client

I'm having a problem when I try to run the command sudo metricbeat -e -setup
it returns Error importing Kibana dashboards: fail to create the Kibana loader: Error creating Kibana client
but if I run sudo metricbeat test config
Config OK
or
sudo metricbeat test modules
nginx...
stubstatus...OK
result:
{
  "@timestamp": "2018-10-05T12:30:19.077Z",
  "metricset": {
    "host": "127.0.0.1:8085",
    "module": "nginx",
    "name": "stubstatus",
    "rtt": 438
  },
  "nginx": {
    "stubstatus": {
      "accepts": 2871,
      "active": 2,
      "current": 3559,
      "dropped": 0,
      "handled": 2871,
      "hostname": "127.0.0.1:8085",
      "reading": 0,
      "requests": 3559,
      "waiting": 1,
      "writing": 1
    }
  }
}
Is Kibana up and running?
Are the Kibana IP and port configured correctly in Metricbeat?
Starting from v6.x, Metricbeat imports its dashboards into Kibana, so an unreachable Kibana endpoint results in errors like this.
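For reference, a minimal sketch of the relevant metricbeat.yml section, assuming Kibana listens on its default port on the same host (adjust host and port to your environment):
# metricbeat.yml -- where Metricbeat should reach Kibana for dashboard import
setup.kibana:
  host: "localhost:5601"
With Metricbeat 6.x the dashboards can then be loaded explicitly with sudo metricbeat setup --dashboards.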

Apache Drill: Please retry: error (unable to create/update storage)

Hi, I am unable to connect MySQL to Apache Drill on a VM.
I have used JDBC connector 5.1.45 and the MySQL version is 5.7.20.
Also, in the VM it's giving this warning:
Download the mysql-connector-java-5.1.37-bin.jar connector and place it in the jars/3rdparty/ folder.
Modify conf/drill-override.conf to add the line below before the closing }:
drill.exec.sys.store.provider.local.path = "mysql-connector-java-5.1.37-bin.jar"
Restart Drill and try again.
The storage plugin should look something like this:
{
  "type": "jdbc",
  "driver": "com.mysql.jdbc.Driver",
  "url": "jdbc:mysql://localhost:3306",
  "username": "username",
  "password": "password",
  "enabled": true
}

Querying remote registry service on machine <IP Address> resulted in exception: Unable to change open service manager

My cluster config file is as follows:
{
  "name": "SampleCluster",
  "clusterConfigurationVersion": "1.0.0",
  "apiVersion": "01-2017",
  "nodes": [
    {
      "nodeName": "vm0",
      "iPAddress": "here is my VPS ip",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r0",
      "upgradeDomain": "UD0"
    },
    {
      "nodeName": "vm1",
      "iPAddress": "here is my another VPS ip",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r1",
      "upgradeDomain": "UD1"
    },
    {
      "nodeName": "vm2",
      "iPAddress": "here is my another VPS ip",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r2",
      "upgradeDomain": "UD2"
    }
  ],
  "properties": {
    "reliabilityLevel": "Bronze",
    "diagnosticsStore": {
      "metadata": "Please replace the diagnostics file share with an actual file share accessible from all cluster machines.",
      "dataDeletionAgeInDays": "7",
      "storeType": "FileShare",
      "IsEncrypted": "false",
      "connectionstring": "c:\\ProgramData\\SF\\DiagnosticsStore"
    },
    "nodeTypes": [
      {
        "name": "NodeType0",
        "clientConnectionEndpointPort": "19000",
        "clusterConnectionEndpointPort": "19001",
        "leaseDriverEndpointPort": "19002",
        "serviceConnectionEndpointPort": "19003",
        "httpGatewayEndpointPort": "19080",
        "reverseProxyEndpointPort": "19081",
        "applicationPorts": {
          "startPort": "20001",
          "endPort": "20031"
        },
        "isPrimary": true
      }
    ],
    "fabricSettings": [
      {
        "name": "Setup",
        "parameters": [
          {
            "name": "FabricDataRoot",
            "value": "C:\\ProgramData\\SF"
          },
          {
            "name": "FabricLogRoot",
            "value": "C:\\ProgramData\\SF\\Log"
          }
        ]
      }
    ]
  }
}
It is almost identical to the demo file for an untrusted cluster that ships with the standalone Service Fabric download, except for my VPS IPs. I enabled the Remote Registry service. I ran
.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json
but I got the following error.
Unable to change open service manager handle because 5
Unable to query service configuration because System.InvalidOperationException: Unable to change open service manager handle because 5
   at System.Fabric.FabricDeployer.FabricDeployerServiceController.GetServiceStartupType(String machineName, String serviceName)
Querying remote registry service on machine <IP Address> resulted in exception: Unable to change open service manager handle because 5.
Unable to change open service manager handle because 5
Unable to query service configuration because System.InvalidOperationException: Unable to change open service manager handle because 5
   at System.Fabric.FabricDeployer.FabricDeployerServiceController.GetServiceStartupType(String machineName, String serviceName)
Querying remote registry service on machine <Another IP Address> resulted in exception: Unable to change open service manager handle because 5.
Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces
LocalAdminPrivilege : True
IsJsonValid : True
IsCabValid :
RequiredPortsOpen : True
RemoteRegistryAvailable : False
FirewallAvailable :
RpcCheckPassed :
NoConflictingInstallations :
FabricInstallable :
DataDrivesAvailable :
Passed : False
Test Config failed with exception: System.InvalidOperationException: Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces folder.
   at System.Management.Automation.MshCommandRuntime.ThrowTerminatingError(ErrorRecord errorRecord)
I don't understand the problem. The VPSs are not locally connected; all have public IPs, and I don't know whether that may be an issue. How do I make a virtual LAN among these VPSs? Can anyone give me some direction on this error? Any help is greatly appreciated.
Edit: I used the term VM instead of VPS.
Finally I got this working. Actually, all the nodes were in a network; I had thought they weren't. I enabled file sharing, then tried to access the shared files on all the other nodes from the node where I ran the configuration test; I had to supply the login credentials. After that it worked like a charm.
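For reference, a minimal sketch of equivalent checks in Windows PowerShell, assuming the node names from the config above and placeholder credentials (error 5 is the Win32 ERROR_ACCESS_DENIED code, so the goal is to confirm the Remote Registry service is reachable with valid credentials on every node):
# Hypothetical pre-flight checks from the machine running TestConfiguration.ps1
$nodes = "vm0", "vm1", "vm2"
foreach ($node in $nodes) {
    # Cache credentials for the admin share so remote SCM/registry calls can authenticate
    net use "\\$node\C$" /user:"$node\Administrator"
    # Confirm the Remote Registry service is running on the node
    Get-Service -ComputerName $node -Name RemoteRegistry
}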

Error in shadowing bucket in sync gateway

I am trying to connect Sync Gateway to Couchbase Server with the following config.json file:
{
  "interface": ":4984",
  "adminInterface": ":4985",
  "log": ["REST"],
  "databases": {
    "sync_gateway": {
      "server": "http://localhost:8091",
      "bucket": "sync_gateway",
      "sync": `function(doc) {channel(doc.channels);}`,
      "users": {
        "GUEST": {
          "disabled": false, "admin_channels": ["*"]
        }
      },
      "shadow": {
        "server": "http://localhost:8091",
        "bucket": "copy"
      }
    }
  }
}
but I am not able to do shadowing... it shows the following error:
2016-06-30T17:54:57.013+05:30 WARNING: Database "sync_gateway": unable to connect to external bucket for shadowing: 502 Unable to connect to shadow bucket: No bucket named copy -- rest.(*ServerContext)._getOrAddDatabaseFromConfig() at server_context.go:793
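The 502 says the shadow bucket named copy does not exist on the Couchbase Server. A minimal sketch of creating it with couchbase-cli, assuming an Administrator/password admin login and a 100 MB RAM quota (all placeholders; adjust for your cluster):
# Hypothetical bucket creation for the missing shadow bucket
couchbase-cli bucket-create -c localhost:8091 -u Administrator -p password \
  --bucket copy --bucket-type couchbase --bucket-ramsize 100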
