MapR installation failing for single node cluster - hadoop

I was following the quick installation guide for a single-node cluster. For this I used a 20 GB storage file for MapR-FS, but during installation it reports 'Unable to find disks: /maprfs/storagefile'.
Here is my configuration file.
# Each Node section can specify nodes in the following format
# Hostname: disk1, disk2, disk3
# Specifying disks is optional. If not provided, the installer will use the values of 'disks' from the Defaults section
[Control_Nodes]
maprlocal.td.td.com: /maprfs/storagefile
#control-node2.mydomain: /dev/disk3, /dev/disk9
#control-node3.mydomain: /dev/sdb, /dev/sdc, /dev/sdd
[Data_Nodes]
#data-node1.mydomain
#data-node2.mydomain: /dev/sdb, /dev/sdc, /dev/sdd
#data-node3.mydomain: /dev/sdd
#data-node4.mydomain: /dev/sdb, /dev/sdd
[Client_Nodes]
#client1.mydomain
#client2.mydomain
#client3.mydomain
[Options]
MapReduce1 = true
YARN = true
HBase = true
MapR-DB = true
ControlNodesAsDataNodes = true
WirelevelSecurity = false
LocalRepo = false
[Defaults]
ClusterName = my.cluster.com
User = mapr
Group = mapr
Password = mapr
UID = 2000
GID = 2000
Disks = /maprfs/storagefile
StripeWidth = 3
ForceFormat = false
CoreRepoURL = http://package.mapr.com/releases
EcoRepoURL = http://package.mapr.com/releases/ecosystem-4.x
Version = 4.0.2
MetricsDBHost =
MetricsDBUser =
MetricsDBPassword =
MetricsDBSchema =
Below is the error that I am getting.
2015-04-16 08:18:03,659 callbacks 42 [INFO]: Running task: [Verify Pre-Requisites]
2015-04-16 08:18:03,661 callbacks 87 [ERROR]: maprlocal.td.td.com: Unable to find disks: /maprfs/storagefile from /maprfs/storagefile remove disks: /dev/sda,/dev/sda1,/dev/sda2,/dev/sda3 and retry
2015-04-16 08:18:03,662 callbacks 91 [ERROR]: failed: [maprlocal.td.td.com] => {"failed": true}
2015-04-16 08:18:03,667 installrunner 199 [ERROR]: Host: maprlocal.td.td.com has 1 failures
2015-04-16 08:18:03,668 common 203 [ERROR]: Control Nodes have failures. Please fix the failures and re-run the installation. For more information refer to the installer log at /opt/mapr-installer/var/mapr-installer.log
Please help me here.
Thanks
Shashi

The error was resolved by adding the --skip-checks option to the install command:
/opt/mapr-installer/bin/install --skip-checks new
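For reference, the 20 GB flat file itself can be created before running the installer; the pre-requisite check apparently only recognizes raw block devices, which is why --skip-checks is needed when pointing the installer at a flat file. A minimal sketch (the path and size come from the question; the dd invocation is an assumption about how the file was created):

# create a 20 GB zero-filled file for MapR-FS to use as a "disk"
mkdir -p /maprfs
dd if=/dev/zero of=/maprfs/storagefile bs=1M count=20480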

Related

Nomad Job - Failed to place all allocations

I'm trying to deploy an AWS EBS volume via Nomad, but I'm getting the error below. How do I resolve it?
$ nomad job plan -var-file bambootest.vars bamboo2.nomad
+/- Job: "bamboo2"
+/- Stop: "true" => "false"
+/- Task Group: "main" (1 create)
    Volume {
      AccessMode: "single-node-writer"
      AttachmentMode: "file-system"
      Name: "bambootest"
      PerAlloc: "false"
      ReadOnly: "false"
      Source: "bambootest"
      Type: "csi"
    }
    Task: "web"

Scheduler dry-run:
WARNING: Failed to place all allocations.
  Task Group "main" (failed to place 1 allocation):
    Class "system": 3 nodes excluded by filter
    Class "svt": 2 nodes excluded by filter
    Class "devtools": 2 nodes excluded by filter
    Class "bambootest": 2 nodes excluded by filter
    Class "ambt": 2 nodes excluded by filter
    Constraint "${meta.namespace} = bambootest": 9 nodes excluded by filter
    Constraint "missing CSI Volume bambootest": 2 nodes excluded by filter
Below is an excerpt of the volume block that seems to be the problem.
group main {
  count = 1

  volume "bambootest" {
    type            = "csi"
    source          = "bambootest"
    read_only       = false
    access_mode     = "single-node-writer"
    attachment_mode = "file-system"
  }

  task web {
    driver = "docker"
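The constraint "missing CSI Volume bambootest" indicates that, on the two nodes that satisfy the meta.namespace constraint, the scheduler cannot find a CSI volume named bambootest, typically because the volume has not been registered or the CSI node plugin is not healthy on those nodes. For comparison, a volume registration sketch is shown below; the plugin ID and EBS volume ID are placeholders, not values from the original post:

# volume.hcl (hypothetical)
id          = "bambootest"
name        = "bambootest"
type        = "csi"
plugin_id   = "aws-ebs0"                # must match the ID of the running CSI plugin
external_id = "vol-0123456789abcdef0"   # the AWS EBS volume to attach

capability {
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"
}

It would then be registered with nomad volume register volume.hcl, and nomad volume status should list it afterwards.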

Unknown bitrate error in g(mixed) operation mode in the Ieee80211DimensionalTransmitter physical layer at 2.4 GHz

In the network, there are 3 AdhocHost nodes, named hostA, hostB and hostC.
The following error occurs when the simulation is run:
Unknown bitrate: (1.095e+07 - 1.105e+07) in operation mode: 'g(mixed)' -- in module (inet::physicallayer::Ieee80211DimensionalTransmitter) WirelessA.hostA.wlan[0].radio.transmitter (id=61), during network initialization
The relevant configuration is:
*.host*.wlan[*].radio.centerFrequency = 2.412GHz
*.host*.wlan[*].radio.bandwidth = 2MHz
*.host*.wlan[*].bitrate = 11Mbps
Thanks; a screenshot of the error was also attached.
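For what it's worth, 11 Mbps is an 802.11b (DSSS/CCK) rate, and 2 MHz does not correspond to any 802.11 channel width (802.11b uses 22 MHz, 802.11g OFDM uses 20 MHz), which may be what prevents the transmitter from finding a matching mode for 11 Mbps in 'g(mixed)'. A configuration that stays within standard values might look like the sketch below; this is an assumption, not taken from the original post:

# option 1: plain 802.11b, keeping the 11 Mbps bitrate
*.host*.wlan[*].opMode = "b"
*.host*.wlan[*].radio.bandwidth = 22MHz
*.host*.wlan[*].bitrate = 11Mbps

# option 2: stay in "g(mixed)" with an ERP-OFDM rate
#*.host*.wlan[*].radio.bandwidth = 20MHz
#*.host*.wlan[*].bitrate = 12Mbps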

Error: Code="VMExtensionProvisioningError" JsonADDomainExtension

I have a Terraform script that performs a domain join with the code below:
resource "azurerm_virtual_machine_extension" "join-domain" {
  name               = azurerm_virtual_machine.client.name
  virtual_machine_id = azurerm_virtual_machine.client.id
  // resource_group_name  = var.resource_group_name
  // virtual_machine_name = azurerm_virtual_machine.client.name
  publisher            = "Microsoft.Compute"
  type                 = "JsonADDomainExtension"
  type_handler_version = "1.3"

  # NOTE: the `OUPath` field is intentionally blank, to put it in the Computers OU
  settings = <<SETTINGS
    {
      "Name": "${var.active_directory_domain}",
      "OUPath": "",
      "User": "${var.active_directory_domain}\\${var.active_directory_username}",
      "Restart": "true",
      "Options": "3"
    }
SETTINGS

  protected_settings = <<SETTINGS
    {
      "Password": "${var.active_directory_password}"
    }
SETTINGS

  depends_on = ["null_resource.wait-for-domain-to-provision"]
}
After the code runs, it produces the error below in Terraform:
Error: Code="VMExtensionProvisioningError" Message="VM has reported a failure when processing extension 'pocvde-client'. Error message: \"Exception(s) occured while joining Domain 'pocvde.local'\"\r\n\r\nMore information on troubleshooting is available at https://aka.ms/vmextensionwindowstroubleshoot "
on modules/windows-client/4-join-domain.tf line 1, in resource "azurerm_virtual_machine_extension" "join-domain":
1: resource "azurerm_virtual_machine_extension" "join-domain" {
I have checked the Windows client logs in C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.JsonADDomainExtension and got the trace below:
2020-06-23T06:30:54.1176834Z [Info]: Get Domain/Workgroup Information
2020-06-23T06:30:54.1645880Z [Info]: Current domain: (), current workgroup: WORKGROUP, IsDomainJoin: True, Target Domain/Workgroup: pocvde.local.
2020-06-23T06:30:54.1802137Z [Info]: Domain Join Path.
2020-06-23T06:30:54.1802137Z [Info]: Current Domain name is empty/null. Try to get Local domain name.
2020-06-23T06:30:54.1958114Z [Info]: In AD Domain extension process, the local domain is: ''.
2020-06-23T06:30:54.1958114Z [Info]: Domain Join will be performed.
2020-06-23T06:30:54.3460994Z [Error]: Try join: domain='pocvde.local', ou='', user='pocvde.local\AdminAls', option='NetSetupJoinDomain, NetSetupAcctCreate' (#3:User Specified), errCode='1355'.
2020-06-23T06:30:54.3621879Z [Error]: Setting error code to 53 while joining domain
2020-06-23T06:30:54.4085771Z [Error]: Try join: domain='pocvde.local', ou='', user='pocvde.local\AdminAls', option='NetSetupJoinDomain' (#1:User Specified without NetSetupAcctCreate), errCode='1355'.
2020-06-23T06:30:54.4085771Z [Error]: Setting error code to 53 while joining domain
2020-06-23T06:30:54.4241709Z [Error]: Computer failed to join domain 'pocvde.local' from workgroup 'WORKGROUP'.
I have changed the client VM OS from DataCenter-16 to Windows 10 and still got the same error. I also increased the waiting time before the domain join operation from 12 minutes to 24 minutes; nothing changed.
Do you have any idea?
I found that the Domain Controller was not set correctly; after fixing that, the domain join completed successfully and my problem was solved.
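For context, errCode 1355 in the extension log is ERROR_NO_SUCH_DOMAIN ("The specified domain either does not exist or could not be contacted"), which usually means the VM's DNS cannot reach the domain controller. A minimal sketch of wiring the virtual network's DNS to the domain controller; the resource names and IP address are hypothetical, not taken from the original configuration:

resource "azurerm_virtual_network" "clients" {
  name                = "pocvde-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  # hand out the domain controller's private IP as the DNS server
  dns_servers = ["10.0.1.4"]
}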

Curator is not deleting indices

I am new to Curator. I want to see how Curator works, so I installed it on my Mac and created one action file and one configuration file to delete all the indices from Elasticsearch.
But whenever I run the command
curator --config ./config.yml --dry-run ./action.yml
I get the following output:
2019-06-27 12:06:49,848 INFO Preparing Action ID: 1, "delete_indices"
2019-06-27 12:06:49,854 INFO Trying Action ID: 1, "delete_indices": Delete selected indices
2019-06-27 12:06:49,866 INFO DRY-RUN MODE. No changes will be made.
2019-06-27 12:06:49,866 INFO (CLOSED) indices may be shown that may not be acted on by action "delete_indices".
2019-06-27 12:06:49,866 INFO Action ID: 1, "delete_indices" completed.
2019-06-27 12:06:49,867 INFO Job completed.
I thought the indices would be deleted, but I can still see all the indices in Elasticsearch. I cannot see any error, which makes this really difficult to debug.
I am sharing both files:
config.yml
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
client:
  hosts:
    - 127.0.0.1
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False
logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist:
action.yml:
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: delete_indices
    description: "Delete selected indices"
    options:
      timeout_override: 300
      continue_if_exception: False
    filters:
    - filtertype: age
      source: creation_date
      direction: older
      timestring: '%Y.%W'
      unit: days
      unit_count: 30
I created the index this week.
Thanks in advance for your help ^^
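Two things stand out here. First, --dry-run guarantees that nothing is deleted; the log line "DRY-RUN MODE. No changes will be made." confirms this, so indices will only disappear when the command is run without --dry-run. Second, the age filter above selects only indices whose creation_date is older than 30 days, so an index created this week is filtered out anyway (and timestring is only used with source: name, not with creation_date). To match every index while experimenting, a pattern filter is one option; a sketch, not taken from the original post:

    filters:
    - filtertype: pattern
      kind: regex
      value: '.*'    # matches every index; use with extreme care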

CouchDB 3-node cluster (Windows) - multiple erlang errors

I'm receiving multiple Erlang errors in my CouchDB 2.1.1 cluster (3 nodes, Windows); see the errors and node configuration below.
There are 3 nodes (10.0.7.4 - 10.0.7.6), and an Azure application gateway is used as the load balancer.
Why do these errors appear? The system resources of the nodes are far from overloaded.
I would be thankful for any help - thanks in advance.
Errors:
rexi_server: from: couchdb#10.0.7.4(<0.14976.568>) mfa: fabric_rpc:changes/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,256}]},{rexi,stream_last,2,[{file,"src/rexi.erl"},{line,224}]},{fabric_rpc,changes,4,[{file,"src/fabric_rpc.erl"},{line,86}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,139}]}]
rexi_server: from: couchdb#10.0.7.6(<13540.24597.655>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,256}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,204}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,308}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,642}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,139}]}]
rexi_server: from: couchdb#10.0.7.6(<13540.5991.623>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,256}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,204}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,308}]},{couch_mrview,map_fold,3,[{file,"src/couch_mrview.erl"},{line,511}]},{couch_btree,stream_kv_node2,8,[{file,"src/couch_btree.erl"},{line,848}]},{couch_btree,fold,4,[{file,"src/couch_btree.erl"},{line,222}]},{couch_db,enum_docs,5,[{file,"src/couch_db.erl"},{line,1450}]},{couch_mrview,all_docs_fold,4,[{file,"src/couch_mrview.erl"},{line,425}]}]
req_err(3206982071) unknown_error : normal [<<"mochiweb_request:recv/3 L180">>,<<"mochiweb_request:stream_unchunked_body/4 L540">>,<<"mochiweb_request:recv_body/2 L214">>,<<"chttpd:body/1 L636">>,<<"chttpd:json_body/1 L649">>,<<"chttpd:json_body_obj/1 L657">>,<<"chttpd_db:db_req/2 L386">>,<<"chttpd:process_request/1 L295">>]
** System running to use fully qualified hostnames **
** Hostname localhost is illegal **
COMPACTION-ERRORS
Supervisor couch_secondary_services had child compaction_daemon started with couch_compaction_daemon:start_link() at <0.18509.478> exit with reason {compaction_loop_died,{timeout,{gen_server,call,[couch_server,get_server]}}} in context child_terminated
CRASH REPORT Process couch_compaction_daemon (<0.18509.478>) with 0 neighbors exited with reason: {compaction_loop_died,{timeout,{gen_server,call,[couch_server,get_server]}}} at gen_server:terminate/7(line:826) <= proc_lib:init_p_do_apply/3(line:240); initial_call: {couch_compaction_daemon,init,['Argument__1']}, ancestors: [couch_secondary_services,couch_sup,<0.200.0>], messages: [], links: [<0.12665.492>], dictionary: [], trap_exit: true, status: running, heap_size: 987, stack_size: 27, reductions: 3173
gen_server couch_compaction_daemon terminated with reason: {compaction_loop_died,{timeout,{gen_server,call,[couch_server,get_server]}}} last msg: {'EXIT',<0.23195.476>,{timeout,{gen_server,call,[couch_server,get_server]}}} state: {state,<0.23195.476>,[]}
Error in process <0.16890.22> on node 'couchdb#10.0.7.4' with exit value: {{rexi_DOWN,{'couchdb#10.0.7.5',noproc}},[{mem3_rpc,rexi_call,2,[{file,"src/mem3_rpc.erl"},{line,269}]},{mem3_rep,calculate_start_seq,1,[{file,"src/mem3_rep.erl"},{line,194}]},{mem3_rep,repl,2,[{file,"src/mem3_rep.erl"},{line,175}]},{mem3_rep,go,1,[{file,"src/mem3_rep.erl"},{line,81}]},{mem3_sync,'-start_push_replication/1-fun-0-',2,[{file,"src/mem3_sync.erl"},{line,208}]}]}
#vm.args
-name couchdb#10.0.7.4
-setcookie monster
-kernel error_logger silent
-sasl sasl_error_logger false
+K true
+A 16
+Bd -noinput
+Q 134217727
local.ini
[fabric]
request_timeout = infinity
[couchdb]
max_dbs_open = 10000
os_process_timeout = 20000
uuid =
[chttpd]
port = 5984
bind_address = 0.0.0.0
[httpd]
socket_options = [{recbuf, 262144}, {sndbuf, 262144}, {nodelay, true}]
enable_cors = true
[couch_httpd_auth]
secret =
[daemons]
compaction_daemon={couch_compaction_daemon, start_link, []}
[compactions]
_default = [{db_fragmentation, "50%"}, {view_fragmentation, "50%"}, {from, "23:00"}, {to, "04:00"}]
[compaction_daemon]
check_interval = 300
min_file_size = 100000
[vendor]
name = COUCHCLUSTERNODE0X
[admins]
adminuser =
[cors]
methods = GET, PUT, POST, HEAD, DELETE
headers = accept, authorization, content-type, origin, referer
origins = *
credentials = true
[query_server_config]
os_process_limit = 2000
os_process_soft_limit = 1000
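As a side note, the "Hostname localhost is illegal" message comes from Erlang long-name distribution (-name): every node must be identified by a fully qualified hostname or an IP address, so something in the cluster is apparently being referenced as localhost (for example a node named couchdb@localhost). One quick way to see which node names each server actually reports; the admin credentials below are placeholders:

# list the node names this server believes are in the cluster
curl -s http://adminuser:password@10.0.7.4:5984/_membership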
