Hortonworks HDP 2.6: DRPC Server Install issue via Ambari - hortonworks-data-platform

I am installing HDP 2.6 via Ambari 2.5.0.3 on SUSE 11 SP3. At the last step, one of my nodes fails; it seems to happen in the "DRPC Server Install" task. Here is the log:
stderr: /var/lib/ambari-agent/data/errors-603.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/STORM/0.9.1/package/scripts/drpc_server.py", line 139, in <module>
DrpcServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 314, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/STORM/0.9.1/package/scripts/drpc_server.py", line 44, in install
self.configure(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 117, in locking_configure
original_configure(obj, *args, **kw)
File "/var/lib/ambari-agent/cache/common-services/STORM/0.9.1/package/scripts/drpc_server.py", line 50, in configure
storm()
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/STORM/0.9.1/package/scripts/storm.py", line 86, in storm
content=Template("storm.conf.j2")
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 120, in action_create
raise Fail("Applying %s failed, parent directory %s doesn't exist" % (self.resource, dirname))
resource_management.core.exceptions.Fail: Applying File['/etc/security/limits.d/storm.conf'] failed, parent directory /etc/security/limits.d doesn't exist
stdout: /var/lib/ambari-agent/data/output-603.txt
2017-05-16 02:07:02,497 - Stack Feature Version Info: stack_version=2.6, version=None, current_cluster_version=None -> 2.6
2017-05-16 02:07:02,499 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-05-16 02:07:02,500 - Group['livy'] {}
2017-05-16 02:07:02,503 - Group['spark'] {}
2017-05-16 02:07:02,504 - Group['zeppelin'] {}
2017-05-16 02:07:02,505 - Group['hadoop'] {}
2017-05-16 02:07:02,506 - Group['users'] {}
2017-05-16 02:07:02,507 - Group['knox'] {}
2017-05-16 02:07:02,509 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:02,512 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:02,514 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:02,516 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:02,517 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:02,519 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-05-16 02:07:02,521 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:02,523 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-05-16 02:07:02,525 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-05-16 02:07:02,527 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop']}
2017-05-16 02:07:02,529 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:02,531 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:02,533 - User['mahout'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:02,535 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:02,537 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-05-16 02:07:02,538 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:02,540 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:02,542 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:02,543 - Modifying user hdfs
2017-05-16 02:07:03,568 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:03,571 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:03,573 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:03,574 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:03,576 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:03,578 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-05-16 02:07:03,581 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-05-16 02:07:03,584 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-05-16 02:07:03,595 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-05-16 02:07:03,595 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-05-16 02:07:03,597 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-05-16 02:07:03,599 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-05-16 02:07:03,609 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-05-16 02:07:03,610 - Group['hdfs'] {}
2017-05-16 02:07:03,611 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs', 'hdfs']}
2017-05-16 02:07:03,613 - Modifying user hdfs
2017-05-16 02:07:03,664 - FS Type:
2017-05-16 02:07:03,664 - Directory['/etc/hadoop'] {'mode': 0755}
2017-05-16 02:07:03,689 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-05-16 02:07:03,690 - Writing File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] because contents don't match
2017-05-16 02:07:03,690 - Changing owner for /usr/hdp/current/hadoop-client/conf/hadoop-env.sh from 0 to hdfs
2017-05-16 02:07:03,691 - Changing group for /usr/hdp/current/hadoop-client/conf/hadoop-env.sh from 0 to hadoop
2017-05-16 02:07:03,692 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-05-16 02:07:03,718 - Initializing 2 repositories
2017-05-16 02:07:03,719 - Repository['HDP-2.6'] {'base_url': 'http://192.168.156.25/hdp/HDP/suse11sp3/2.x/updates/2.6.0.3', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-05-16 02:07:03,734 - Flushing package manager cache since repo file content is about to change
2017-05-16 02:07:03,734 - checked_call[['zypper', 'clean', '--all']] {'sudo': True}
2017-05-16 02:07:03,837 - checked_call returned (0, 'All repositories have been cleaned up.')
2017-05-16 02:07:03,837 - File['/etc/zypp/repos.d/HDP.repo'] {'content': '[HDP-2.6]\nname=HDP-2.6\nbaseurl=http://192.168.156.25/hdp/HDP/suse11sp3/2.x/updates/2.6.0.3\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-05-16 02:07:03,839 - Writing File['/etc/zypp/repos.d/HDP.repo'] because contents don't match
2017-05-16 02:07:03,840 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://192.168.156.25/hdp/HDP-UTILS-1.1.0.21/repos/suse11sp3', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-05-16 02:07:03,846 - Flushing package manager cache since repo file content is about to change
2017-05-16 02:07:03,846 - checked_call[['zypper', 'clean', '--all']] {'sudo': True}
2017-05-16 02:07:04,410 - checked_call returned (0, 'All repositories have been cleaned up.')
2017-05-16 02:07:04,411 - File['/etc/zypp/repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://192.168.156.25/hdp/HDP-UTILS-1.1.0.21/repos/suse11sp3\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-05-16 02:07:04,411 - Writing File['/etc/zypp/repos.d/HDP-UTILS.repo'] because contents don't match
2017-05-16 02:07:04,412 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-16 02:07:04,884 - Skipping installation of existing package unzip
2017-05-16 02:07:04,884 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-16 02:07:05,170 - Skipping installation of existing package curl
2017-05-16 02:07:05,171 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-16 02:07:05,391 - Skipping installation of existing package hdp-select
2017-05-16 02:07:05,594 - Package['storm_2_6_0_3_8'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-16 02:07:06,118 - Installing package storm_2_6_0_3_8 ('/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm storm_2_6_0_3_8')
2017-05-16 02:07:36,817 - Stack Feature Version Info: stack_version=2.6, version=None, current_cluster_version=None -> 2.6
2017-05-16 02:07:36,820 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-05-16 02:07:36,822 - Directory['/var/log/storm'] {'owner': 'storm', 'group': 'hadoop', 'create_parents': True, 'mode': 0777, 'cd_access': 'a'}
2017-05-16 02:07:36,823 - Changing group for /var/log/storm from 117 to hadoop
2017-05-16 02:07:36,824 - Changing permission for /var/log/storm from 755 to 777
2017-05-16 02:07:36,825 - Directory['/var/run/storm'] {'owner': 'storm', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2017-05-16 02:07:36,826 - Changing group for /var/run/storm from 117 to hadoop
2017-05-16 02:07:36,827 - Directory['/hadoop/storm'] {'owner': 'storm', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-05-16 02:07:36,827 - Creating directory Directory['/hadoop/storm'] since it doesn't exist.
2017-05-16 02:07:36,827 - Changing owner for /hadoop/storm from 0 to storm
2017-05-16 02:07:36,828 - Changing group for /hadoop/storm from 0 to hadoop
2017-05-16 02:07:36,828 - Directory['/usr/hdp/current/storm-client/conf'] {'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-05-16 02:07:36,828 - Changing group for /usr/hdp/current/storm-client/conf from 0 to hadoop
2017-05-16 02:07:36,834 - File['/etc/security/limits.d/storm.conf'] {'content': Template('storm.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
Command failed after 1 tries
I found a similar issue in https://issues.apache.org/jira/browse/AMBARI-7792, but that was supposedly solved in Ambari 1.7.

I hit this same error with Storm Nimbus on SUSE 12 while deploying HDF 3.0 with Ambari 2.5.1.
I've raised it as a new Apache Jira, https://issues.apache.org/jira/browse/AMBARI-21489, since AMBARI-7792 was supposed to be fixed back in Ambari 1.7.
The obvious quick workaround is simply to mkdir /etc/security/limits.d, as sketched below.
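For example, a minimal fix to run as root on the affected node before retrying the task (0755 root:root matches the usual defaults for this directory, so treat the exact mode as an assumption for your system):

mkdir -p /etc/security/limits.d     # create the missing parent directory
chmod 755 /etc/security/limits.d    # typical permissions for limits.d
# then hit Retry on the failed "DRPC Server Install" task in the Ambari wizard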

Related

Nested templating with folders and files

There is a variable which contains a name with the value foo. Only for this name must 'special' templating be done: the folder name, the file name, and the content must be templated twice with a numeric suffix, and once without any additions. The remaining files must also be templated, but without the special treatment. Let me describe my issue:
This is my variable:
bar:
  - name: something
  - name: foo
This is my template directory in Ansible:
templates/foo/another_folder/foo.spec
templates/foo/folder_folder/folder2/some_file/foo.service
templates/foo/some_file.txt
templates/something/ignore.txt
The contents of foo.spec:
name: foo
The goal to have the directory structure on the target machine:
foo/another_folder/foo.spec
foo/folder_folder/folder2/some_file/foo.service
foo/some_file.txt
foo-1/another_folder/foo-1.spec
foo-1/folder_folder/folder2/some_file/foo-1.service
foo-1/some_file.txt
foo-2/another_folder/foo-2.spec
foo-2/folder_folder/folder2/some_file/foo-2.service
foo-2/some_file.txt
something/ignore.txt
The content for each foo.spec should be templated to:
# for foo:
name: foo
--
# for foo-1
name: foo-1
--
# for foo-2
name: foo-2
To solve my problem, I took a look at a similar question about with_filetree and loops. However, I simply can't figure out where to start. Is this possible to configure in Ansible?
This is quite an awkward requirement, but it was kind of fun crafting a solution, so here we go.
The solution below takes for granted that your templates have the exact same file names as the target files where you deploy them. If you ever want to add the j2 extension to them, see the examples in the filetree documentation for removing it while templating to the target; a variant is sketched right after this paragraph.
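As a hedged illustration (the filter chain here is my assumption, not something from the question), the final "Deploy templates" task shown further down could strip a .j2 suffix with the regex_replace filter:

- name: Deploy templates (stripping an assumed .j2 suffix)
  template:
    src: "{{ item.src }}"
    dest: "{{ target_base_dir }}/{{ target_dir_name }}/{{ item.path | regex_replace('\\.j2$', '') }}"
  loop: "{{ template_tree }}"
  when: item.state == 'file'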
It is not possible in Ansible to nest loops in the same task. The workaround is to loop over an include_tasks and add further loops inside the included file.
The basic operation is to template an entire file tree to a target dir, so this will be our final included file. In between, we just have to detect whether we are in the foo situation, where we want to loop several times over a range of integers, or in the default one, where we only process the directory once.
I used the exact same file tree you introduced in your question for my tests. The only change is the content of foo/another_folder/foo.spec, which is now:
name: {{ spec_name }}
Let's start with the base playbook deploy.yml
---
- name: my bizare templating pattern
  hosts: localhost
  gather_facts: false

  vars:
    bar:
      - name: something
      - name: foo
    target_base_dir: /tmp/example/

  tasks:
    - name: make sure target base dir exists
      file:
        path: "{{ target_base_dir }}"
        state: directory

    - name: load template pattern chooser file
      include_tasks: "template_pattern_chooser.yml"
      loop: "{{ bar }}"
      loop_control:
        loop_var: template_pattern
This is the included template_pattern_chooser.yml
---
- name: Select the corresponding templating pattern
  vars:
    find_me:
      - "pattern_{{ template_pattern.name }}.yml"
      - "pattern_default.yml"
  include_tasks: "{{ lookup('first_found', find_me) }}"
As you can see, this will look for either a specific file named after the template pattern name or fallback to a default one.
This is the specific pattern_foo.yml. Note that this is where we set the var spec_name which is used in the template above.
---
- name: "loop over our {{ template_pattern.name }} pattern"
  vars:
    target_dir_name: "{{ template_pattern.name }}-{{ pattern_iteration }}"
    spec_name: "{{ target_dir_name }}"
  include_tasks: "template_tree.yml"
  loop: "{{ range(1,4) | list }}"
  loop_control:
    loop_var: pattern_iteration
pattern_default.yml:
---
- name: default templating pattern
  vars:
    target_dir_name: "{{ template_pattern.name }}"
  include_tasks: template_tree.yml
Note that both files include the same template_tree.yml file. The only change is that we loop over it when we are dealing with the foo pattern. This is where the real job takes place:
---
- name: Get list of templates only once
  set_fact:
    template_tree: "{{ query('filetree', 'templates/' ~ template_pattern.name ~ '/') }}"

- name: Create needed target dir
  file:
    path: "{{ target_base_dir }}/{{ target_dir_name }}"
    state: "directory"

- name: Create needed directories inside target
  file:
    path: "{{ target_base_dir }}/{{ target_dir_name }}/{{ item.path }}"
    state: "{{ item.state }}"
  loop: "{{ template_tree }}"
  when: item.state == 'directory'

- name: Deploy templates
  template:
    src: "{{ item.src }}"
    dest: "{{ target_base_dir }}/{{ target_dir_name }}/{{ item.path }}"
  loop: "{{ template_tree }}"
  when: item.state == 'file'
Running this as a test on my machine gives:
$ ansible-playbook deploy.yml
PLAY [my bizare templating pattern] *****************************************************************************
TASK [make sure target base dir exists] *****************************************************************************
changed: [localhost]
TASK [load template pattern chooser file] *****************************************************************************
included: /home/user/test/template_pattern_chooser.yml for localhost => (item={'name': 'something'})
included: /home/user/test/template_pattern_chooser.yml for localhost => (item={'name': 'foo'})
TASK [Select the corresponding templating pattern] *****************************************************************************
included: /home/user/test/pattern_default.yml for localhost
TASK [default templating pattern] *****************************************************************************
included: /home/user/test/template_tree.yml for localhost
TASK [Get list of templates only once] *****************************************************************************
ok: [localhost]
TASK [Create needed target dir] *****************************************************************************
changed: [localhost]
TASK [Create needed directories inside target] *****************************************************************************
skipping: [localhost] => (item={'root': '/home/user/test/templates/something/', 'path': 'ignore.txt', 'state': 'file', 'src': '/home/user/test/templates/something/ignore.txt', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 6, 'mtime': 1640280116.3296282, 'ctime': 1640280116.3296282})
TASK [Deploy templates] *****************************************************************************
changed: [localhost] => (item={'root': '/home/user/test/templates/something/', 'path': 'ignore.txt', 'state': 'file', 'src': '/home/user/test/templates/something/ignore.txt', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 6, 'mtime': 1640280116.3296282, 'ctime': 1640280116.3296282})
TASK [Select the corresponding templating pattern] *****************************************************************************
included: /home/user/test/pattern_foo.yml for localhost
TASK [loop over our foo pattern] *****************************************************************************
included: /home/user/test/template_tree.yml for localhost => (item=1)
included: /home/user/test/template_tree.yml for localhost => (item=2)
included: /home/user/test/template_tree.yml for localhost => (item=3)
TASK [Get list of templates only once] *****************************************************************************
ok: [localhost]
TASK [Create needed target dir] *****************************************************************************
changed: [localhost]
TASK [Create needed directories inside target] *****************************************************************************
changed: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'folder_folder', 'state': 'directory', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0755', 'size': 3, 'mtime': 1640280024.0969715, 'ctime': 1640280024.0969715})
changed: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'another_folder', 'state': 'directory', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0755', 'size': 3, 'mtime': 1640279254.875486, 'ctime': 1640279254.875486})
skipping: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'some_file.txt', 'state': 'file', 'src': '/home/user/test/templates/foo/some_file.txt', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 4, 'mtime': 1640280116.3376281, 'ctime': 1640280116.3376281})
changed: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'folder_folder/some_file', 'state': 'directory', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0755', 'size': 3, 'mtime': 1640280051.5691671, 'ctime': 1640280051.5691671})
skipping: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'folder_folder/some_file/foo.service', 'state': 'file', 'src': '/home/user/test/templates/foo/folder_folder/some_file/foo.service', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 4, 'mtime': 1640280051.561167, 'ctime': 1640280051.561167})
skipping: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'another_folder/foo.spec', 'state': 'file', 'src': '/home/user/test/templates/foo/another_folder/foo.spec', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 21, 'mtime': 1640279254.871486, 'ctime': 1640279254.871486})
TASK [Deploy templates] *****************************************************************************
skipping: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'folder_folder', 'state': 'directory', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0755', 'size': 3, 'mtime': 1640280024.0969715, 'ctime': 1640280024.0969715})
skipping: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'another_folder', 'state': 'directory', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0755', 'size': 3, 'mtime': 1640279254.875486, 'ctime': 1640279254.875486})
changed: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'some_file.txt', 'state': 'file', 'src': '/home/user/test/templates/foo/some_file.txt', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 4, 'mtime': 1640280116.3376281, 'ctime': 1640280116.3376281})
skipping: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'folder_folder/some_file', 'state': 'directory', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0755', 'size': 3, 'mtime': 1640280051.5691671, 'ctime': 1640280051.5691671})
changed: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'folder_folder/some_file/foo.service', 'state': 'file', 'src': '/home/user/test/templates/foo/folder_folder/some_file/foo.service', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 4, 'mtime': 1640280051.561167, 'ctime': 1640280051.561167})
changed: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'another_folder/foo.spec', 'state': 'file', 'src': '/home/user/test/templates/foo/another_folder/foo.spec', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 21, 'mtime': 1640279254.871486, 'ctime': 1640279254.871486})
TASK [Get list of templates only once] *****************************************************************************
ok: [localhost]
TASK [Create needed target dir] *****************************************************************************
changed: [localhost]
TASK [Create needed directories inside target] *****************************************************************************
changed: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'folder_folder', 'state': 'directory', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0755', 'size': 3, 'mtime': 1640280024.0969715, 'ctime': 1640280024.0969715})
changed: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'another_folder', 'state': 'directory', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0755', 'size': 3, 'mtime': 1640279254.875486, 'ctime': 1640279254.875486})
skipping: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'some_file.txt', 'state': 'file', 'src': '/home/user/test/templates/foo/some_file.txt', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 4, 'mtime': 1640280116.3376281, 'ctime': 1640280116.3376281})
changed: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'folder_folder/some_file', 'state': 'directory', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0755', 'size': 3, 'mtime': 1640280051.5691671, 'ctime': 1640280051.5691671})
skipping: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'folder_folder/some_file/foo.service', 'state': 'file', 'src': '/home/user/test/templates/foo/folder_folder/some_file/foo.service', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 4, 'mtime': 1640280051.561167, 'ctime': 1640280051.561167})
skipping: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'another_folder/foo.spec', 'state': 'file', 'src': '/home/user/test/templates/foo/another_folder/foo.spec', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 21, 'mtime': 1640279254.871486, 'ctime': 1640279254.871486})
TASK [Deploy templates] *****************************************************************************
skipping: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'folder_folder', 'state': 'directory', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0755', 'size': 3, 'mtime': 1640280024.0969715, 'ctime': 1640280024.0969715})
skipping: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'another_folder', 'state': 'directory', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0755', 'size': 3, 'mtime': 1640279254.875486, 'ctime': 1640279254.875486})
changed: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'some_file.txt', 'state': 'file', 'src': '/home/user/test/templates/foo/some_file.txt', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 4, 'mtime': 1640280116.3376281, 'ctime': 1640280116.3376281})
skipping: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'folder_folder/some_file', 'state': 'directory', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0755', 'size': 3, 'mtime': 1640280051.5691671, 'ctime': 1640280051.5691671})
changed: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'folder_folder/some_file/foo.service', 'state': 'file', 'src': '/home/user/test/templates/foo/folder_folder/some_file/foo.service', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 4, 'mtime': 1640280051.561167, 'ctime': 1640280051.561167})
changed: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'another_folder/foo.spec', 'state': 'file', 'src': '/home/user/test/templates/foo/another_folder/foo.spec', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 21, 'mtime': 1640279254.871486, 'ctime': 1640279254.871486})
TASK [Get list of templates only once] *****************************************************************************
ok: [localhost]
TASK [Create needed target dir] *****************************************************************************
changed: [localhost]
TASK [Create needed directories inside target] *****************************************************************************
changed: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'folder_folder', 'state': 'directory', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0755', 'size': 3, 'mtime': 1640280024.0969715, 'ctime': 1640280024.0969715})
changed: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'another_folder', 'state': 'directory', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0755', 'size': 3, 'mtime': 1640279254.875486, 'ctime': 1640279254.875486})
skipping: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'some_file.txt', 'state': 'file', 'src': '/home/user/test/templates/foo/some_file.txt', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 4, 'mtime': 1640280116.3376281, 'ctime': 1640280116.3376281})
changed: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'folder_folder/some_file', 'state': 'directory', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0755', 'size': 3, 'mtime': 1640280051.5691671, 'ctime': 1640280051.5691671})
skipping: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'folder_folder/some_file/foo.service', 'state': 'file', 'src': '/home/user/test/templates/foo/folder_folder/some_file/foo.service', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 4, 'mtime': 1640280051.561167, 'ctime': 1640280051.561167})
skipping: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'another_folder/foo.spec', 'state': 'file', 'src': '/home/user/test/templates/foo/another_folder/foo.spec', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 21, 'mtime': 1640279254.871486, 'ctime': 1640279254.871486})
TASK [Deploy templates] *****************************************************************************
skipping: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'folder_folder', 'state': 'directory', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0755', 'size': 3, 'mtime': 1640280024.0969715, 'ctime': 1640280024.0969715})
skipping: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'another_folder', 'state': 'directory', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0755', 'size': 3, 'mtime': 1640279254.875486, 'ctime': 1640279254.875486})
changed: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'some_file.txt', 'state': 'file', 'src': '/home/user/test/templates/foo/some_file.txt', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 4, 'mtime': 1640280116.3376281, 'ctime': 1640280116.3376281})
skipping: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'folder_folder/some_file', 'state': 'directory', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0755', 'size': 3, 'mtime': 1640280051.5691671, 'ctime': 1640280051.5691671})
changed: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'folder_folder/some_file/foo.service', 'state': 'file', 'src': '/home/user/test/templates/foo/folder_folder/some_file/foo.service', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 4, 'mtime': 1640280051.561167, 'ctime': 1640280051.561167})
changed: [localhost] => (item={'root': '/home/user/test/templates/foo/', 'path': 'another_folder/foo.spec', 'state': 'file', 'src': '/home/user/test/templates/foo/another_folder/foo.spec', 'uid': 1000, 'gid': 100, 'owner': 'user', 'group': 'users', 'mode': '0644', 'size': 21, 'mtime': 1640279254.871486, 'ctime': 1640279254.871486})
PLAY RECAP *****************************************************************************
localhost : ok=24 changed=12 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
This is the result in the target dir:
$ tree /tmp/example/
/tmp/example/
├── foo-1
│   ├── another_folder
│   │   └── foo.spec
│   ├── folder_folder
│   │   └── some_file
│   │       └── foo.service
│   └── some_file.txt
├── foo-2
│   ├── another_folder
│   │   └── foo.spec
│   ├── folder_folder
│   │   └── some_file
│   │       └── foo.service
│   └── some_file.txt
├── foo-3
│   ├── another_folder
│   │   └── foo.spec
│   ├── folder_folder
│   │   └── some_file
│   │       └── foo.service
│   └── some_file.txt
└── something
    └── ignore.txt

13 directories, 10 files
And as an example, the content of one of the foo.spec files:
$ cat /tmp/example/foo-2/another_folder/foo.spec
name: foo-2

HDP 3.1.0.0 installation using Ambari 2.7.4.0 on CentOS 7

While installing HDP 3.1.0.0 using Ambari 2.7.4, I am getting the error below. Could you please help me figure out the issue?
I have configured a proxy in yum.conf to connect to the internet, and ambari-server is running as the root user.
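For reference, the proxy is configured via the standard yum options in /etc/yum.conf; the host, port, and credentials below are placeholders, not my actual values:

[main]
proxy=http://proxy.example.com:3128
# only needed if the proxy requires authentication:
proxy_username=proxyuser
proxy_password=proxypass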
=============
stderr:
There was an error communicating with RHN.
Red Hat Satellite or RHN Classic support will be disabled.
rhn-plugin: Error communicating with server. The message was:
Connection refused
There was an error communicating with RHN.
Red Hat Satellite or RHN Classic support will be disabled.
rhn-plugin: Error communicating with server. The message was:
Connection refused
There was an error communicating with RHN.
Red Hat Satellite or RHN Classic support will be disabled.
rhn-plugin: Error communicating with server. The message was:
Connection refused
2019-11-22 16:25:32,084 - Reporting component version failed
Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 363, in execute
self.save_component_version_to_structured_out(self.command_name)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 223, in save_component_version_to_structured_out
stack_select_package_name = stack_select.get_package_name()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 109, in get_package_name
package = get_packages(PACKAGE_SCOPE_STACK_SELECT, service_name, component_name)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 223, in get_packages
supported_packages = get_supported_packages()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 147, in get_supported_packages
raise Fail("Unable to query for supported packages using {0}".format(stack_selector_path))
Fail: Unable to query for supported packages using /usr/bin/hdp-select
There was an error communicating with RHN.
Red Hat Satellite or RHN Classic support will be disabled.
rhn-plugin: Error communicating with server. The message was:
Connection refused
2019-11-22 16:28:13,557 - Reporting component version failed
Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 363, in execute
self.save_component_version_to_structured_out(self.command_name)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 223, in save_component_version_to_structured_out
stack_select_package_name = stack_select.get_package_name()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 109, in get_package_name
package = get_packages(PACKAGE_SCOPE_STACK_SELECT, service_name, component_name)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 223, in get_packages
supported_packages = get_supported_packages()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 147, in get_supported_packages
raise Fail("Unable to query for supported packages using {0}".format(stack_selector_path))
Fail: Unable to query for supported packages using /usr/bin/hdp-select
2019-11-22 16:28:14,080 - Reporting component version failed
Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 363, in execute
self.save_component_version_to_structured_out(self.command_name)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 223, in save_component_version_to_structured_out
stack_select_package_name = stack_select.get_package_name()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 109, in get_package_name
package = get_packages(PACKAGE_SCOPE_STACK_SELECT, service_name, component_name)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 223, in get_packages
supported_packages = get_supported_packages()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 147, in get_supported_packages
raise Fail("Unable to query for supported packages using {0}".format(stack_selector_path))
Fail: Unable to query for supported packages using /usr/bin/hdp-select
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stack-hooks/after-INSTALL/scripts/hook.py", line 39, in
AfterInstallHook().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/var/lib/ambari-agent/cache/stack-hooks/after-INSTALL/scripts/hook.py", line 32, in hook
setup_stack_symlinks(self.stroutfile)
File "/var/lib/ambari-agent/cache/stack-hooks/after-INSTALL/scripts/shared_initialization.py", line 53, in setup_stack_symlinks
stack_packages = stack_select.get_packages(stack_select.PACKAGE_SCOPE_INSTALL)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 223, in get_packages
supported_packages = get_supported_packages()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 147, in get_supported_packages
raise Fail("Unable to query for supported packages using {0}".format(stack_selector_path))
resource_management.core.exceptions.Fail: Unable to query for supported packages using /usr/bin/hdp-select
stdout:
2019-11-22 16:24:59,287 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=None -> 3.1
2019-11-22 16:24:59,297 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2019-11-22 16:24:59,300 - Group['kms'] {}
2019-11-22 16:24:59,302 - Group['livy'] {}
2019-11-22 16:24:59,302 - Group['spark'] {}
2019-11-22 16:24:59,303 - Group['ranger'] {}
2019-11-22 16:24:59,303 - Group['hdfs'] {}
2019-11-22 16:24:59,304 - Group['zeppelin'] {}
2019-11-22 16:24:59,304 - Group['hadoop'] {}
2019-11-22 16:24:59,304 - Group['users'] {}
2019-11-22 16:24:59,305 - Group['knox'] {}
2019-11-22 16:24:59,306 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-11-22 16:24:59,314 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-11-22 16:24:59,319 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-11-22 16:24:59,325 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-11-22 16:24:59,331 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-11-22 16:24:59,336 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-11-22 16:24:59,342 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-11-22 16:24:59,347 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-11-22 16:24:59,353 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None}
2019-11-22 16:24:59,358 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-11-22 16:24:59,363 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop'], 'uid': None}
2019-11-22 16:24:59,369 - User['kms'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['kms', 'hadoop'], 'uid': None}
2019-11-22 16:24:59,375 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-11-22 16:24:59,380 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None}
2019-11-22 16:24:59,386 - User['druid'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-11-22 16:24:59,392 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None}
2019-11-22 16:24:59,398 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-11-22 16:24:59,404 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-11-22 16:24:59,409 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2019-11-22 16:24:59,415 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-11-22 16:24:59,421 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-11-22 16:24:59,426 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-11-22 16:24:59,433 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-11-22 16:24:59,444 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'knox'], 'uid': None}
2019-11-22 16:24:59,455 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-11-22 16:24:59,458 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-11-22 16:24:59,473 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-11-22 16:24:59,474 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2019-11-22 16:24:59,478 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-11-22 16:24:59,481 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-11-22 16:24:59,483 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2019-11-22 16:24:59,505 - call returned (0, '1033')
2019-11-22 16:24:59,506 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1033'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2019-11-22 16:24:59,518 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1033'] due to not_if
2019-11-22 16:24:59,519 - Group['hdfs'] {}
2019-11-22 16:24:59,520 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2019-11-22 16:24:59,523 - FS Type: HDFS
2019-11-22 16:24:59,524 - Directory['/etc/hadoop'] {'mode': 0755}
2019-11-22 16:24:59,525 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-11-22 16:24:59,554 - Repository['HDP-3.1-repo-153'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.1.0.0', 'action': ['prepare'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-153', 'mirror_list': None}
2019-11-22 16:24:59,567 - Repository['HDP-3.1-GPL-repo-153'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-GPL/centos7/3.x/updates/3.1.0.0', 'action': ['prepare'], 'components': [u'HDP-GPL', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-153', 'mirror_list': None}
2019-11-22 16:24:59,571 - Repository['HDP-UTILS-1.1.0.22-repo-153'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7', 'action': ['prepare'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-153', 'mirror_list': None}
2019-11-22 16:24:59,574 - Repository[None] {'action': ['create']}
2019-11-22 16:24:59,576 - File['/tmp/tmp0eMg1W'] {'content': '[HDP-3.1-repo-153]\nname=HDP-3.1-repo-153\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.1.0.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-3.1-GPL-repo-153]\nname=HDP-3.1-GPL-repo-153\nbaseurl=http://public-repo-1.hortonworks.com/HDP-GPL/centos7/3.x/updates/3.1.0.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-153]\nname=HDP-UTILS-1.1.0.22-repo-153\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2019-11-22 16:24:59,577 - Writing File['/tmp/tmp0eMg1W'] because contents don't match
2019-11-22 16:24:59,578 - Rewriting /etc/yum.repos.d/ambari-hdp-153.repo since it has changed.
2019-11-22 16:24:59,578 - File['/etc/yum.repos.d/ambari-hdp-153.repo'] {'content': StaticFile('/tmp/tmp0eMg1W')}
2019-11-22 16:24:59,580 - Writing File['/etc/yum.repos.d/ambari-hdp-153.repo'] because it doesn't exist
2019-11-22 16:24:59,581 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-11-22 16:25:00,081 - Skipping installation of existing package unzip
2019-11-22 16:25:00,081 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-11-22 16:25:00,380 - Skipping installation of existing package curl
2019-11-22 16:25:00,381 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-11-22 16:25:00,697 - Installing package hdp-select ('/usr/bin/yum -y install hdp-select')
2019-11-22 16:25:32,084 - Reporting component version failed
Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 363, in execute
self.save_component_version_to_structured_out(self.command_name)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 223, in save_component_version_to_structured_out
stack_select_package_name = stack_select.get_package_name()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 109, in get_package_name
package = get_packages(PACKAGE_SCOPE_STACK_SELECT, service_name, component_name)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 223, in get_packages
supported_packages = get_supported_packages()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 147, in get_supported_packages
raise Fail("Unable to query for supported packages using {0}".format(stack_selector_path))
Fail: Unable to query for supported packages using /usr/bin/hdp-select
2019-11-22 16:25:32,505 - Command repositories: HDP-3.1-repo-153, HDP-3.1-GPL-repo-153, HDP-UTILS-1.1.0.22-repo-153
2019-11-22 16:25:32,505 - Applicable repositories: HDP-3.1-repo-153, HDP-3.1-GPL-repo-153, HDP-UTILS-1.1.0.22-repo-153
2019-11-22 16:25:32,506 - Looking for matching packages in the following repositories: HDP-3.1-repo-153, HDP-3.1-GPL-repo-153, HDP-UTILS-1.1.0.22-repo-153
2019-11-22 16:26:06,035 - Package['accumulo_3_1_0_0_78'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-11-22 16:26:06,268 - Installing package accumulo_3_1_0_0_78 ('/usr/bin/yum -y install accumulo_3_1_0_0_78')
2019-11-22 16:28:13,372 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2019-11-22 16:28:13,376 - XmlConfig['accumulo-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/accumulo-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'accumulo', 'configurations': ...}
2019-11-22 16:28:13,396 - Generating config: /usr/hdp/current/accumulo-client/conf/accumulo-site.xml
2019-11-22 16:28:13,397 - File['/usr/hdp/current/accumulo-client/conf/accumulo-site.xml'] {'owner': 'accumulo', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-11-22 16:28:13,415 - Writing File['/usr/hdp/current/accumulo-client/conf/accumulo-site.xml'] because contents don't match
2019-11-22 16:28:13,423 - File['/usr/hdp/current/accumulo-client/conf/accumulo-env.sh'] {'content': InlineTemplate(...), 'owner': 'accumulo', 'group': 'hadoop', 'mode': 0644}
2019-11-22 16:28:13,424 - Writing File['/usr/hdp/current/accumulo-client/conf/accumulo-env.sh'] because contents don't match
2019-11-22 16:28:13,428 - PropertiesFile['/usr/hdp/current/accumulo-client/conf/client.conf'] {'owner': 'accumulo', 'group': 'hadoop', 'properties': {'instance.zookeeper.host': u'server1.aaa.com:2181,server3.aaa.com:2181,server2.aaa.com:2181', 'instance.name': u'hdp-accumulo-instance', 'instance.zookeeper.timeout': u'30s'}}
2019-11-22 16:28:13,438 - Generating properties file: /usr/hdp/current/accumulo-client/conf/client.conf
2019-11-22 16:28:13,439 - File['/usr/hdp/current/accumulo-client/conf/client.conf'] {'owner': 'accumulo', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-11-22 16:28:13,442 - Writing File['/usr/hdp/current/accumulo-client/conf/client.conf'] because contents don't match
2019-11-22 16:28:13,445 - File['/usr/hdp/current/accumulo-client/conf/log4j.properties'] {'content': ..., 'owner': 'accumulo', 'group': 'hadoop', 'mode': 0644}
2019-11-22 16:28:13,447 - TemplateConfig['/usr/hdp/current/accumulo-client/conf/auditLog.xml'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2019-11-22 16:28:13,453 - File['/usr/hdp/current/accumulo-client/conf/auditLog.xml'] {'content': Template('auditLog.xml.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2019-11-22 16:28:13,456 - TemplateConfig['/usr/hdp/current/accumulo-client/conf/generic_logger.xml'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2019-11-22 16:28:13,460 - File['/usr/hdp/current/accumulo-client/conf/generic_logger.xml'] {'content': Template('generic_logger.xml.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2019-11-22 16:28:13,463 - TemplateConfig['/usr/hdp/current/accumulo-client/conf/monitor_logger.xml'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2019-11-22 16:28:13,467 - File['/usr/hdp/current/accumulo-client/conf/monitor_logger.xml'] {'content': Template('monitor_logger.xml.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2019-11-22 16:28:13,470 - File['/usr/hdp/current/accumulo-client/conf/accumulo-metrics.xml'] {'content': StaticFile('accumulo-metrics.xml'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': 0644}
2019-11-22 16:28:13,473 - TemplateConfig['/usr/hdp/current/accumulo-client/conf/tracers'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2019-11-22 16:28:13,476 - File['/usr/hdp/current/accumulo-client/conf/tracers'] {'content': Template('tracers.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2019-11-22 16:28:13,479 - TemplateConfig['/usr/hdp/current/accumulo-client/conf/gc'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2019-11-22 16:28:13,487 - File['/usr/hdp/current/accumulo-client/conf/gc'] {'content': Template('gc.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2019-11-22 16:28:13,491 - TemplateConfig['/usr/hdp/current/accumulo-client/conf/monitor'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2019-11-22 16:28:13,499 - File['/usr/hdp/current/accumulo-client/conf/monitor'] {'content': Template('monitor.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2019-11-22 16:28:13,503 - TemplateConfig['/usr/hdp/current/accumulo-client/conf/slaves'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2019-11-22 16:28:13,506 - File['/usr/hdp/current/accumulo-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2019-11-22 16:28:13,508 - TemplateConfig['/usr/hdp/current/accumulo-client/conf/masters'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2019-11-22 16:28:13,511 - File['/usr/hdp/current/accumulo-client/conf/masters'] {'content': Template('masters.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2019-11-22 16:28:13,514 - TemplateConfig['/usr/hdp/current/accumulo-client/conf/hadoop-metrics2-accumulo.properties'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2019-11-22 16:28:13,524 - File['/usr/hdp/current/accumulo-client/conf/hadoop-metrics2-accumulo.properties'] {'content': Template('hadoop-metrics2-accumulo.properties.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2019-11-22 16:28:13,557 - Reporting component version failed
Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 363, in execute
self.save_component_version_to_structured_out(self.command_name)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 223, in save_component_version_to_structured_out
stack_select_package_name = stack_select.get_package_name()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 109, in get_package_name
package = get_packages(PACKAGE_SCOPE_STACK_SELECT, service_name, component_name)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 223, in get_packages
supported_packages = get_supported_packages()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 147, in get_supported_packages
raise Fail("Unable to query for supported packages using {0}".format(stack_selector_path))
Fail: Unable to query for supported packages using /usr/bin/hdp-select
2019-11-22 16:28:14,031 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2019-11-22 16:28:14,080 - Reporting component version failed
Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 363, in execute
self.save_component_version_to_structured_out(self.command_name)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 223, in save_component_version_to_structured_out
stack_select_package_name = stack_select.get_package_name()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 109, in get_package_name
package = get_packages(PACKAGE_SCOPE_STACK_SELECT, service_name, component_name)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 223, in get_packages
supported_packages = get_supported_packages()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 147, in get_supported_packages
raise Fail("Unable to query for supported packages using {0}".format(stack_selector_path))
Fail: Unable to query for supported packages using /usr/bin/hdp-select
Command failed after 1 tries
=============
I am able to get output from the command "/usr/bin/hdp-select" when I run it manually on the server.
Remove the Ambari server and agent completely from all nodes and remove the Ambari repo files,
refresh the package metadata (on CentOS 7 that is yum clean all && yum makecache, not apt-get update),
then download and install again. A rough outline of those steps follows.
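A minimal sketch of that cleanup on CentOS 7; the package and repo file names are assumptions based on a default Ambari install, so adjust them to your environment:

ambari-server stop                         # on the Ambari server node
ambari-agent stop                          # on every cluster node
yum -y erase ambari-server ambari-agent    # remove the Ambari packages
rm -f /etc/yum.repos.d/ambari*.repo        # drop the Ambari/HDP repo files
yum clean all && yum makecache             # rebuild metadata through the proxy
# after reinstalling and retrying, confirm the selector works as root on each node:
/usr/bin/hdp-select versions               # should list the installed HDP version(s)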

Ambari Hadoop installation failed

2017-12-21 13:46:55,297 - Stack Feature Version Info: Cluster Stack=2.6, Cluster Current Version=None, Command Stack=None, Command Version=None -> 2.6
2017-12-21 13:46:55,317 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-12-21 13:46:55,319 - Group['hadoop'] {}
2017-12-21 13:46:55,320 - Group['users'] {}
2017-12-21 13:46:55,321 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-21 13:46:55,323 - call['/var/lib/ambari-agent/tmp/changeUid.sh zookeeper'] {}
2017-12-21 13:46:55,334 - call returned (0, '1002')
2017-12-21 13:46:55,335 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1002}
2017-12-21 13:46:55,337 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-21 13:46:55,338 - call['/var/lib/ambari-agent/tmp/changeUid.sh infra-solr'] {}
2017-12-21 13:46:55,349 - call returned (0, '1013')
2017-12-21 13:46:55,350 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1013}
2017-12-21 13:46:55,351 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-21 13:46:55,351 - call['/var/lib/ambari-agent/tmp/changeUid.sh oozie'] {}
2017-12-21 13:46:55,360 - call returned (0, '1003')
2017-12-21 13:46:55,360 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': 1003}
2017-12-21 13:46:55,363 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-21 13:46:55,364 - call['/var/lib/ambari-agent/tmp/changeUid.sh ams'] {}
2017-12-21 13:46:55,375 - call returned (0, '1004')
2017-12-21 13:46:55,376 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1004}
2017-12-21 13:46:55,378 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2017-12-21 13:46:55,380 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-21 13:46:55,381 - call['/var/lib/ambari-agent/tmp/changeUid.sh kafka'] {}
2017-12-21 13:46:55,390 - call returned (0, '1020')
2017-12-21 13:46:55,390 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1020}
2017-12-21 13:46:55,391 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-21 13:46:55,392 - call['/var/lib/ambari-agent/tmp/changeUid.sh tez'] {}
2017-12-21 13:46:55,401 - call returned (0, '1006')
2017-12-21 13:46:55,402 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': 1006}
2017-12-21 13:46:55,403 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-21 13:46:55,404 - call['/var/lib/ambari-agent/tmp/changeUid.sh hdfs'] {}
2017-12-21 13:46:55,415 - call returned (0, '1007')
2017-12-21 13:46:55,416 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1007}
2017-12-21 13:46:55,418 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-21 13:46:55,418 - call['/var/lib/ambari-agent/tmp/changeUid.sh sqoop'] {}
2017-12-21 13:46:55,426 - call returned (0, '1008')
2017-12-21 13:46:55,427 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1008}
2017-12-21 13:46:55,429 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-21 13:46:55,429 - call['/var/lib/ambari-agent/tmp/changeUid.sh yarn'] {}
2017-12-21 13:46:55,439 - call returned (0, '1009')
2017-12-21 13:46:55,440 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1009}
2017-12-21 13:46:55,441 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-21 13:46:55,442 - call['/var/lib/ambari-agent/tmp/changeUid.sh mapred'] {}
2017-12-21 13:46:55,452 - call returned (0, '1010')
2017-12-21 13:46:55,452 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1010}
2017-12-21 13:46:55,453 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-21 13:46:55,456 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-12-21 13:46:55,462 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2017-12-21 13:46:55,462 - Group['hdfs'] {}
2017-12-21 13:46:55,463 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-12-21 13:46:55,463 - FS Type:
2017-12-21 13:46:55,464 - Directory['/etc/hadoop'] {'mode': 0755}
2017-12-21 13:46:55,465 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-12-21 13:46:55,483 - Initializing 2 repositories
2017-12-21 13:46:55,484 - Repository['HDP-2.6'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/ubuntu16/2.x/updates/2.6.3.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '{{package_type}} {{base_url}} {{components}}', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-12-21 13:46:55,491 - File['/tmp/tmp8SS8_V'] {'content': 'deb http://public-repo-1.hortonworks.com/HDP/ubuntu16/2.x/updates/2.6.3.0 HDP main'}
2017-12-21 13:46:55,492 - Writing File['/tmp/tmp8SS8_V'] because contents don't match
2017-12-21 13:46:55,492 - File['/tmp/tmpceZ8Dh'] {'content': StaticFile('/etc/apt/sources.list.d/HDP.list')}
2017-12-21 13:46:55,493 - Writing File['/tmp/tmpceZ8Dh'] because contents don't match
2017-12-21 13:46:55,570 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ubuntu16', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '{{package_type}} {{base_url}} {{components}}', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-12-21 13:46:55,573 - File['/tmp/tmpabpRd8'] {'content': 'deb http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ubuntu16 HDP-UTILS main'}
2017-12-21 13:46:55,573 - Writing File['/tmp/tmpabpRd8'] because contents don't match
2017-12-21 13:46:55,574 - File['/tmp/tmpAKCJ9S'] {'content': StaticFile('/etc/apt/sources.list.d/HDP-UTILS.list')}
2017-12-21 13:46:55,574 - Writing File['/tmp/tmpAKCJ9S'] because contents don't match
2017-12-21 13:46:55,616 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-21 13:46:55,671 - Skipping installation of existing package unzip
2017-12-21 13:46:55,672 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-21 13:46:55,720 - Skipping installation of existing package curl
2017-12-21 13:46:55,721 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-21 13:46:55,767 - Skipping installation of existing package hdp-select
2017-12-21 13:46:56,025 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-12-21 13:46:56,028 - Stack Feature Version Info: Cluster Stack=2.6, Cluster Current Version=None, Command Stack=None, Command Version=None -> 2.6
2017-12-21 13:46:56,067 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-12-21 13:46:56,091 - checked_call['dpkg -s hdp-select | grep Version | awk '{print $2}''] {'stderr': -1}
2017-12-21 13:46:56,136 - checked_call returned (0, '2.6.1.0-129', '')
2017-12-21 13:46:56,146 - Package['hadoop-2-6-1-0-129-client'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-21 13:46:56,195 - Skipping installation of existing package hadoop-2-6-1-0-129-client
2017-12-21 13:46:56,196 - Package['hadoop-2-6-1-0-129-hdfs-datanode'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-21 13:46:56,245 - Skipping installation of existing package hadoop-2-6-1-0-129-hdfs-datanode
2017-12-21 13:46:56,246 - Package['hadoop-2-6-1-0-129-hdfs-journalnode'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-21 13:46:56,293 - Skipping installation of existing package hadoop-2-6-1-0-129-hdfs-journalnode
2017-12-21 13:46:56,294 - Package['hadoop-2-6-1-0-129-hdfs-namenode'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-21 13:46:56,348 - Skipping installation of existing package hadoop-2-6-1-0-129-hdfs-namenode
2017-12-21 13:46:56,349 - Package['hadoop-2-6-1-0-129-hdfs-secondarynamenode'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-21 13:46:56,396 - Skipping installation of existing package hadoop-2-6-1-0-129-hdfs-secondarynamenode
2017-12-21 13:46:56,397 - Package['hadoop-2-6-1-0-129-hdfs-zkfc'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-21 13:46:56,443 - Skipping installation of existing package hadoop-2-6-1-0-129-hdfs-zkfc
2017-12-21 13:46:56,444 - Package['libsnappy1'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-21 13:46:56,489 - Installing package libsnappy1 ('/usr/bin/apt-get -q -o Dpkg::Options::=--force-confdef --allow-unauthenticated --assume-yes install libsnappy1')
2017-12-21 13:47:05,876 - Package['libsnappy-dev'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-21 13:47:05,922 - Installing package libsnappy-dev ('/usr/bin/apt-get -q -o Dpkg::Options::=--force-confdef --allow-unauthenticated --assume-yes install libsnappy-dev')
2017-12-21 13:47:14,593 - Package['libhdfs0-2-6-1-0-129'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-21 13:47:14,640 - Skipping installation of existing package libhdfs0-2-6-1-0-129
2017-12-21 13:47:14,642 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2017-12-21 13:47:14,658 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2017-12-21 13:47:14,658 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-12-21 13:47:14,670 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2017-12-21 13:47:14,671 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
Command failed after 1 tries
It looks like a problem with the repository. Try to configure and use a local repo.
Check the Hortonworks guide "Setting Up a Local Repository with No Internet Access".
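As an illustration, once a local mirror is in place, the repo files from the log above would point at it instead of public-repo-1.hortonworks.com (the mirror hostname below is a placeholder):

    # /etc/apt/sources.list.d/HDP.list
    deb http://local-mirror.example.com/HDP/ubuntu16/2.x/updates/2.6.3.0 HDP main
    # /etc/apt/sources.list.d/HDP-UTILS.list
    deb http://local-mirror.example.com/HDP-UTILS-1.1.0.21/repos/ubuntu16 HDP-UTILS main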

Ansible replace module is not working with the find module

I am new to Ansible. I have to find some files and then replace a pattern in all of them, so I am using the find and replace modules as follows.
- name: My Great Playbook
  hosts: all
  gather_facts: False
  accelerate: False
  strategy: free
  vars:
    dbname: "#DBNAME#"
    repldbname: "connect to mydb"
  tasks:
    - block:
        - name: finding fl
          find:
            paths: "/home/username1/temp"
            patterns: "*.sql"
            file_type: "file"
          register: repos
        - name: some thing
          debug: msg="{{ item }}"
          with_items: "{{ repos.files }}"
        - name: replacing string
          replace:
            path: "{{ item }}"
            #path: "/home/username1/temp/1.sql"
            regexp: ({{ dbname }})
            replace: '{{ repldbname }}'
            backup: no
            unsafe_writes: yes
          with_items: "{{ repos.files }}"
I am getting the following error:
failed: [localhost] (item={u'uid': 575479814, u'woth': True, u'mtime': 1504541305.603901, u'inode': 8433422, u'isgid': False, u'size': 256, u'roth': True, u'isuid': False, u'isreg': True, u'gid': 575144449, u'ischr': False, u'wusr': True, u'xoth': True, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'path': u'/home/username1/temp/1.sql', u'xusr': True, u'atime': 1504541305.604901, u'isdir': False, u'ctime': 1504541305.6059012, u'wgrp': True, u'xgrp': True, u'dev': 64772, u'isblk': False, u'isfifo': False, u'mode': u'0777', u'islnk': False}) => {
"failed": true,
"item": {
"atime": 1504541305.604901,
"ctime": 1504541305.6059012,
"dev": 64772,
"gid": 575144449,
"inode": 8433422,
"isblk": false,
"ischr": false,
"isdir": false,
"isfifo": false,
"isgid": false,
"islnk": false,
"isreg": true,
"issock": false,
"isuid": false,
"mode": "0777",
"mtime": 1504541305.603901,
"nlink": 1,
"path": "/home/username1/temp/1.sql",
"rgrp": true,
"roth": true,
"rusr": true,
"size": 256,
"uid": 575479814,
"wgrp": true,
"woth": true,
"wusr": true,
"xgrp": true,
"xoth": true,
"xusr": true
},
"rc": 257
}
MSG:
Path {'uid': 575479814, 'woth': True, 'mtime': 1504541305.603901, 'inode': 8433422, 'isgid': False, 'size': 256, 'wgrp': True, 'isuid': False, 'isreg': True, 'gid': 575144449, 'ischr': False, 'wusr': True, 'xoth': True, 'islnk': False, 'nlink': 1, 'issock': False, 'rgrp': True, 'path': '/home/username1/temp/1.sql', 'xusr': True, 'atime': 1504541305.604901, 'isdir': False, 'ctime': 1504541305.6059012, 'isblk': False, 'xgrp': True, 'dev': 64772, 'roth': True, 'isfifo': False, 'mode': '0777', 'rusr': True} does not exist !
Please let me know what the issue is here.
Replace:
    path: "{{ item }}"
with:
    path: "{{ item.path }}"
You are trying to pass a dictionary object to an argument that requires a string value: each element of repos.files is a dict of file attributes (as visible in the failed item above), and its path key holds the filename.
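For reference, the corrected task from the playbook above would be (same names as in the question):

    - name: replacing string
      replace:
        path: "{{ item.path }}"
        regexp: "({{ dbname }})"
        replace: "{{ repldbname }}"
        backup: no
        unsafe_writes: yes
      with_items: "{{ repos.files }}"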

HortonWorks HDP 2.6: NameNode Install issue via Ambari

I tried to install HDP v2.6 with Ambari on 3 nodes (node0.local, node1.local, node2.local), but during the installation the following NameNode failure happened on node0:
"OSError: [Errno 1] Operation not permitted: '/boot/efi/hadoop/hdfs/namenode'"
Note: The choice [All] has been checked for DataNode and NodeManager during the "Assign Slaves and Clients" step.
Thanks.
Logs:
---------------------------------------------------------
- [stderr: /var/lib/ambari-agent/data/errors-1185.txt]
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 424, in <module>
NameNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 314, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 85, in install
self.configure(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 117, in locking_configure
original_configure(obj, *args, **kw)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 92, in configure
namenode(action="configure", hdfs_binary=hdfs_binary, env=env)
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 98, in namenode
create_name_dirs(params.dfs_name_dir)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 282, in create_name_dirs
cd_access="a",
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 114, in __new__
cls(names_list.pop(0), env, provider, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 199, in action_create
recursion_follow_links=self.resource.recursion_follow_links, safemode_folders=self.resource.safemode_folders)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 75, in _ensure_metadata
sudo.chown(path, user_entity, group_entity)
File "/usr/lib/python2.6/site-packages/resource_management/core/sudo.py", line 39, in chown
return os.chown(path, uid, gid)
OSError: [Errno 1] Operation not permitted: '/boot/efi/hadoop/hdfs/namenode'
---------------------------------------------------------
- [stdout: /var/lib/ambari-agent/data/output-1185.txt]
2017-05-09 00:05:01,564 - Stack Feature Version Info: stack_version=2.6, version=None, current_cluster_version=None -> 2.6
2017-05-09 00:05:01,572 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-05-09 00:05:01,573 - Group['livy'] {}
2017-05-09 00:05:01,574 - Group['spark'] {}
2017-05-09 00:05:01,574 - Group['zeppelin'] {}
2017-05-09 00:05:01,574 - Group['hadoop'] {}
2017-05-09 00:05:01,575 - Group['users'] {}
2017-05-09 00:05:01,575 - Group['knox'] {}
2017-05-09 00:05:01,575 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,576 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,576 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,577 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,577 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,578 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-09 00:05:01,578 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,579 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-09 00:05:01,579 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-09 00:05:01,580 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop']}
2017-05-09 00:05:01,580 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,581 - User['mahout'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,582 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,582 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,583 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-09 00:05:01,583 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,584 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,584 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,585 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,585 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,586 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,586 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,587 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,588 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-09 00:05:01,588 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-05-09 00:05:01,590 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-05-09 00:05:01,596 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-05-09 00:05:01,597 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-05-09 00:05:01,598 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-05-09 00:05:01,599 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-05-09 00:05:01,607 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-05-09 00:05:01,608 - Group['hdfs'] {}
2017-05-09 00:05:01,608 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-05-09 00:05:01,609 - FS Type:
2017-05-09 00:05:01,609 - Directory['/etc/hadoop'] {'mode': 0755}
2017-05-09 00:05:01,624 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-05-09 00:05:01,625 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-05-09 00:05:01,640 - Initializing 2 repositories
2017-05-09 00:05:01,640 - Repository['HDP-2.6'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.0.3', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-05-09 00:05:01,647 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.6]\nname=HDP-2.6\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.0.3\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-05-09 00:05:01,648 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-05-09 00:05:01,651 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-05-09 00:05:01,652 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-09 00:05:01,734 - Skipping installation of existing package unzip
2017-05-09 00:05:01,735 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-09 00:05:01,743 - Skipping installation of existing package curl
2017-05-09 00:05:01,743 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-09 00:05:01,752 - Skipping installation of existing package hdp-select
2017-05-09 00:05:01,992 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-05-09 00:05:01,993 - Stack Feature Version Info: stack_version=2.6, version=None, current_cluster_version=None -> 2.6
2017-05-09 00:05:02,015 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-05-09 00:05:02,028 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1}
2017-05-09 00:05:02,081 - checked_call returned (0, '2.6.0.3-8', '')
2017-05-09 00:05:02,094 - Package['hadoop_2_6_0_3_8'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-09 00:05:02,211 - Skipping installation of existing package hadoop_2_6_0_3_8
2017-05-09 00:05:02,213 - Package['hadoop_2_6_0_3_8-client'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-09 00:05:02,224 - Skipping installation of existing package hadoop_2_6_0_3_8-client
2017-05-09 00:05:02,225 - Package['snappy'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-09 00:05:02,235 - Skipping installation of existing package snappy
2017-05-09 00:05:02,236 - Package['snappy-devel'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-09 00:05:02,247 - Skipping installation of existing package snappy-devel
2017-05-09 00:05:02,248 - Package['hadoop_2_6_0_3_8-libhdfs'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-09 00:05:02,257 - Skipping installation of existing package hadoop_2_6_0_3_8-libhdfs
2017-05-09 00:05:02,259 - Package['libtirpc-devel'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-09 00:05:02,268 - Skipping installation of existing package libtirpc-devel
2017-05-09 00:05:02,270 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2017-05-09 00:05:02,275 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2017-05-09 00:05:02,275 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-05-09 00:05:02,286 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2017-05-09 00:05:02,286 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-05-09 00:05:02,295 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-05-09 00:05:02,303 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2017-05-09 00:05:02,303 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-05-09 00:05:02,309 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-05-09 00:05:02,310 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2017-05-09 00:05:02,318 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2017-05-09 00:05:02,318 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-05-09 00:05:02,324 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-05-09 00:05:02,332 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2017-05-09 00:05:02,332 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-05-09 00:05:02,339 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2017-05-09 00:05:02,346 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2017-05-09 00:05:02,347 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-05-09 00:05:02,390 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-05-09 00:05:02,398 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2017-05-09 00:05:02,398 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-05-09 00:05:02,421 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2017-05-09 00:05:02,425 - Directory['/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2017-05-09 00:05:02,425 - Directory['/boot/efi/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2017-05-09 00:05:02,425 - Creating directory Directory['/boot/efi/hadoop/hdfs/namenode'] since it doesn't exist.
2017-05-09 00:05:02,426 - Changing owner for /boot/efi/hadoop/hdfs/namenode from 0 to hdfs
2017-05-09 00:05:02,426 - Changing group for /boot/efi/hadoop/hdfs/namenode from 0 to hadoop
Command failed after 1 tries
The script tries to run a chown command to change the owner of the directory, and that is not allowed there: /boot/efi is normally the EFI system partition, a FAT filesystem that does not support Unix ownership, which is why os.chown fails with "Operation not permitted". /boot/efi/hadoop/hdfs/namenode is a very strange location for the NameNode directory; it should be placed on a regular filesystem, e.g. /hadoop/hdfs/namenode. The /boot folder is a restricted area in Linux and should never hold HDFS data.
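To recover, remove the /boot/efi entry from the "NameNode directories" setting (HDFS > Configs in Ambari) so that dfs.namenode.name.dir keeps only real data paths, for example:

    <property>
      <name>dfs.namenode.name.dir</name>
      <value>/hadoop/hdfs/namenode</value>
    </property>

Ambari pre-populates this property from the mount points it detects on the hosts, which is most likely how the EFI partition ended up in the list; review the suggested directories before deploying.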
