Cloudera CDH4 install fails using Yum - hadoop

I am trying to install the datanode and it fails with the error "Metadata file does not match checksum". I am behind a proxy.
I have tried everything: yum clean all, yum clean metadata. I also edited yum.conf and disabled caching.
In addition, I manually deleted the cache directory. Nothing works. Please help.
On another machine I was able to install the namenode successfully.
[root@bi ~]# export http_proxy= myproxy
[root@bi ~]# sudo yum install hadoop-0.20-mapreduce-tasktracker hadoop-hdfs-datanode
http://archive.cloudera.com/cdh4/redhat/5/x86_64/cdh/4/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
Trying other mirror.
http://archive.cloudera.com/cdh4/redhat/5/x86_64/cdh/4/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
Trying other mirror.
Error: failure: repodata/primary.xml.gz from cloudera-cdh4: [Errno 256] No more mirrors to try

I had the same problem; it seems to be a proxy issue. Try another proxy, or take a closer look at your proxy configuration.
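One thing worth checking: sudo typically resets the environment, so an http_proxy exported in the shell may never reach yum when it runs under sudo. A sketch of configuring the proxy in yum's own config instead (the host and port below are placeholders, not values from the question):

```ini
# fragment of /etc/yum.conf, in the [main] section --
# makes yum itself proxy-aware instead of relying on the shell environment
proxy=http://myproxy.example.com:3128
```

After changing it, run yum clean all so the metadata is refetched through the proxy.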

Cloudera Manager creates a .repo file before installing, and if it conflicts with an existing one you get that error.
To avoid such conflicts:
1) Create an /etc/yum.repos.d/cloudera-manager.repo file using any stable version of Cloudera Manager (5.2.1 was the version when I did this).
My cloudera-manager.repo file looked like this:
[cloudera-manager]
name = Cloudera Manager, Version 5.2.1
baseurl = http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/5.2.1/
gpgkey = http://archive.cloudera.com/redhat/cdh/RPM-GPG-KEY-cloudera
gpgcheck = 1
2) Now run the following command to make the installer use the local repo file:
./cloudera-manager-installer.bin --skip_repo_package=1

Related

Unable to perform initial svn checkout due to error "Assertion `svn_uri_is_canonical(child_uri, NULL)' failed. Aborted (core dumped)"

Noob alert... I am fairly new to both Linux and Apache/SVN, so the issue below is likely a stupid user error (stupid referring to both user and error).
Usually I can google my way to a solution, but I have been stumped for a few days on the "canonical" error below. There are not many google references to "svn_uri_is_canonical(child_uri, NULL)" that don't also involve "error: git-svn died of signal 6", which I did not see.
Any advice as to where I should be looking to solve the issue below would be greatly appreciated.
When I attempt to check out the svn repository that was created during the SVN install, I see the following error:
$ svn checkout http://127.0.0.1/svn/first-repo
Redirecting to URL 'http://127.0.0.1/svn/first-repo/':
svn: /build/subversion-Z4OiCa/subversion-1.13.0/subversion/libsvn_subr/dirent_uri.c:1562: uri_skip_ancestor: Assertion `svn_uri_is_canonical(child_uri, NULL)' failed.
Aborted (core dumped)
Some details about installation
From my Browser, entering "http://127.0.0.1/svn/first-repo/" displays:
first-repo - Revision 0: /
--------------------------------------------------------
Powered by Apache Subversion version 1.13.0 (r1867053).
Running svnadmin to check the repo:
$ svnadmin verify /var/www/svn/first-repo
* Verifying metadata at revision 0 ...
* Verified revision 0.
From dav_svn.conf (showing only the changes I made):
Alias /svn /var/www/svn
<Location /svn>
DAV svn
SVNParentPath /var/www/svn
AuthType Basic
AuthName "Subversion Repository"
AuthUserFile /etc/svnusers
Require valid-user
</Location>
Contents of /etc/os-release:
NAME="Linux Mint"
VERSION="20.2 (Uma)"
ID=linuxmint
ID_LIKE=ubuntu
PRETTY_NAME="Linux Mint 20.2"
VERSION_ID="20.2"
HOME_URL="https://www.linuxmint.com/"
SUPPORT_URL="https://forums.linuxmint.com/"
BUG_REPORT_URL="http://linuxmint-troubleshooting-guide.readthedocs.io/en/latest/"
PRIVACY_POLICY_URL="https://www.linuxmint.com/"
VERSION_CODENAME=uma
UBUNTU_CODENAME=focal
Apache version:
$ apachectl -v
Server version: Apache/2.4.41 (Ubuntu)
Server built: 2021-10-14T16:24:43
SVN version:
svnadmin --version
svnadmin, version 1.13.0 (r1867053)
compiled Mar 24 2020, 12:33:36 on x86_64-pc-linux-gnu
For installation, I followed this guide:
https://linuxtechlab.com/simple-guide-to-install-svn-on-linux-apache-subversion/
I ran into a problem with executing the command:
$ sudo apt-get install apache2 libapache2-mod-svn libapache2-svn
E: Unable to locate package libapache2-svn
Based on one google result, that package was referred to as a "dummy" and is not required... not sure if it is related to this issue.
I uninstalled and tried again from this guide (very similar instructions with minor differences)
https://www.ubuntupit.com/how-to-install-and-configure-apache-svn-server-on-linux-desktop/
I again ran into a problem with executing the command:
$ sudo apt-get install subversion libapache2-mod-svn libapache2-svn libsvn-dev
E: Unable to locate package libapache2-svn
The same "canonical" error still occurs when I attempt the checkout.
Thanks
Read the docs CAREFULLY!
Who told you to use both a Location and an Alias for the same SVN path, and, even worse, with the same name?!
Assuming all prerequisites are met: start with a minimal Location section and no preceding Alias. Don't mix a real, pre-existing physical path inside the web server with a virtual location (a physical /var/www/svn under a server root of /var/www/ combined with an /svn location is a very big mistake). Use something like the following (and move the repository accordingly):
<Location /svn>
DAV svn
SVNParentPath /var/repos
</Location>
Test that your (still) bare-bones setup works with an existing repo (preferably not as 127.0.0.1, but via the ServerName defined in Apache's config): access http://127.0.0.1/svn/first-repo/ not with a web browser or the git-svn bridge, but with the plain CLI svn client.
After a successful checkout of the "open" repo, extend the configuration with the auth options and re-check both basic operations (checkout and commit).
PS: It's an extremely Bad Idea (tm) to physically place repositories inside ordinary web space; while not directly prohibited, it is at the very least totally insecure.

Artifactory bundle install '/versions' file not found

We are running a local Artifactory Pro installation and have rubygems.org configured as a Remote Repository.
When running bundle install in a CI job, the local Artifactory instance returns a 404 File Not Found when querying the /versions file. When doing a manual lookup in the Remote (& Cache) repo, the file is present.
The path we pass to bundle install is provided by the Set me up Wizard and looks like this:
https://$rtf-instance.com/artifactory/api/gems/gems-remote/versions
This was mentioned in an issue here: https://www.jfrog.com/jira/browse/RTFACT-16005
and should have been fixed, but it is still not working in our installation.
Can't find any mention of RTFACT-16005 or the related RTFACT-19012 in the Release Notes.
The repo is set up with default values; no additional config was done.
Are we missing something?
Environment:
debian 10 (buster)
nginx 1.14.2
artifactory-pro 7.15.4 / 7.15.3
To enable the gems compact index support you need to add the following system property (under $JFROG_HOME/var/etc/artifactory/artifactory.system.properties):
artifactory.gems.compact.index.enabled=true
You will need to restart Artifactory afterward.
This can be found in JFrog Wiki, here: https://www.jfrog.com/confluence/display/JFROG/RubyGems+Repositories#RubyGemsRepositories-RetrievingLatestRubyGemsPackageCompatiblewithYourRubyVersions

Can I list all available index servers with pip?

I have added an index server in my ~/.pypirc file as:
[distutils]
index-servers = example
[example]
repository: https://example.com/simple/
username: someplaintextusername
password: someplaintextpw
However, I can't install a package which definitely is on the example index server. Now I want to check if pip actually notices that server in the pypirc file.
Can I make pip list all available index servers?
edit: For the problem I'm trying to solve, it seems as if ~/.config/pip/pip.conf is the file I should edit. But my question is still the same.
pip's own config list command should get you at least some of this info:
path/to/pythonX.Y -m pip config list
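Note also that ~/.pypirc is read by upload tools (distutils/twine), not by pip; pip takes its install indexes from its own config file. A sketch of a pip.conf using the question's example index (the URL is the question's placeholder, not a real server):

```ini
# ~/.config/pip/pip.conf -- this is the file pip reads, not ~/.pypirc
[global]
index-url = https://example.com/simple/
# keep PyPI as a fallback index
extra-index-url = https://pypi.org/simple/
```

With that in place, the pip config list command above should print both values.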

How to make Ambari use a local repository for the Hue installation

I have HDP installed with Ambari using public repositories.
I wanted to add Hue to the ecosystem. Since Ambari didn't have Hue as a service to install, I went on with the guide here:
https://github.com/EsharEditor/ambari-hue-service
As far as I understand, this guide adds Hue to the set of services that Ambari can install.
As I've since learned, I think this guide is meant for a local-repository installation.
My installation failed when it tried to download from the public repository: it couldn't find the Hue server package.
Error log start
2017-01-24 18:53:50,351 - Downloading Hue Service
2017-01-24 18:53:50,351 - Execute['cat /etc/yum.repos.d/HDP.repo | grep "baseurl" | awk -F '=' '{print $2"hue/hue-3.11.0.tgz"}' | xargs wget -O hue.tgz'] {}
Command failed after 1 tries
Error log end
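For reference, the failing Execute step simply appends hue/hue-3.11.0.tgz to whatever baseurl it finds in HDP.repo, so a repository (public or local) only works if it actually hosts that tarball. A minimal sketch of the URL construction, using a hypothetical baseurl:

```shell
# Simulate the installer's URL construction with a sample repo file
# (the baseurl below is hypothetical; the real one comes from
# /etc/yum.repos.d/HDP.repo on the Ambari host)
printf 'baseurl=http://repo.example.com/HDP/centos7/2.5.3.0/\n' > /tmp/HDP.repo
# Same grep/awk pipeline as in the error log, minus the wget
grep "baseurl" /tmp/HDP.repo | awk -F '=' '{print $2"hue/hue-3.11.0.tgz"}'
# prints http://repo.example.com/HDP/centos7/2.5.3.0/hue/hue-3.11.0.tgz
```

Whichever repo the baseurl points at must serve hue/hue-3.11.0.tgz at exactly that path, or the wget -O hue.tgz step fails as shown above.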
Then I wanted to try installing Hue manually
I followed the guide here:
http://gethue.com/hadoop-hue-3-on-hdp-installation-tutorial
The installation was successful, but it was not integrated with Ambari.
I wanted to try the first method again, this time changing my OS repo files to the local repository as a first step.
I changed the contents of the files under /etc/yum.repos.d/ to local repository paths to make Ambari use local repository packages, but Ambari still displayed the public repository (I had previously tried to install from the public repository). I got the same shell command error again when I moved on to the next step of Ambari's Add Service wizard.
After a short search I found the following file and updated it with local repository paths as well:
/var/lib/ambari-server/resources/stacks/HDP/2.5/repos/repoinfo.xml
However, it didn't work either. Ambari was still trying to download from public repository.
Does anyone have a comment?
If I solve the public-repository problem, the next step will be finding Hue 3.9.0 or 3.11.0 RPM packages, because my local HDP repository has version 2.6.
Any help on this will also be appreciated.
OS: Centos 7
HDP: 2.5.3
Ambari: 2.4.2
Hue: 3.9.0
I worked on this with a friend and we were able to overcome it.
I can't say it is the ideal answer, but it is a workaround for my case:
The scripts under path
/var/lib/ambari-agent/cache/stacks/HDP/2.5/services/HUE/package/scripts
`$ ls`
common.py hue_server.py params.py setup_hue.py status_params.py
common.pyc hue_server.pyc params.pyc setup_hue.pyc status_params.pyc
were managing the Hue installation over Ambari.
The error message we received was caused by a command in common.py.
Although we couldn't find out how it overrides our local repository, we searched for the pattern "public-repo" and found the following files:
/usr/lib/ambari-server/web/data/wizard/stack/HDP_versions.json
/usr/lib/ambari-server/web/data/wizard/stack/HDP_version_definitions.json
/usr/lib/ambari-server/web/data/stacks/HDP-2.1/operating_systems.json
Instead of replacing the content of these files, we updated the "download_url" variable inside the params.py file, hard-coding our local repository URL as the value.
We then manually executed the command that had failed in common.py (line 57).
On retry we received an error for the next command, so we applied that command manually as well, commented it out in common.py, and retried.
We repeated this apply-manually, comment-out, retry, receive-error cycle for the next command too (3 commands of common.py in total).
On the next retry the installation was successful and Hue was up. The rest is the normal procedure; we updated the hue.ini file.
Currently I am getting errors on the Hue page like the ones mentioned in this unanswered post :)
https://community.cloudera.com/t5/Web-UI-Hue-Beeswax/Hue-cannot-access-database-Failed-to-access-filesystem-root/td-p/40318
Good luck!

Can't reindex sunspot. "Solr Response: Bad Request"

I'm trying to use Sunspot in production with tomcat-solr, on Ubuntu 10.10.
I followed these steps:
sudo apt-get install openjdk-6-jdk
sudo apt-get install solr-tomcat
sudo service tomcat6 start
Then I updated my sunspot.yml to point the production/staging environments to port 8080.
But when I try to run rake sunspot:solr:reindex, it gives me this message: "Solr Response: Bad Request".
It's been four days and I still can't figure out what is wrong =/ I couldn't find the tomcat/solr logs to get more info on what's bad in my request.
Can someone help me?
In your case, I am willing to bet that you haven't updated your configuration files with Sunspot's default schema.xml and solrconfig.xml. Log files will likely be in /var/log/tomcat6 and may complain about an unknown field "type".
I am not exactly sure where Ubuntu's solr-tomcat package creates the Solr home, but /usr/share/solr is a good place to check. Copy Sunspot's configuration files from solr/conf into Solr's own configuration directory and restart Solr to pick up the new config.
See also my answer to sunspot solr undefined field type.
