git-p4 fails to clone repo while executing through ansible - ansible

I have an Ansible master machine that runs a shell script to clone my repo to my local Git server. Since I am cloning from Perforce, I use the git-p4 tool for this.
#!/bin/bash
p4port=myport
p4user=myuser
p4repourl=myurl
p4path=/usr/local/bin/p4
p4passwd=mypass
clone_dest=/root/mycode
gitp4=/usr/local/bin/git-p4
export P4PORT=$p4port
export P4USER=$p4user
$p4path trust -y
echo $p4passwd|$p4path login
echo "now using git-p4 to clone repo.."
python $gitp4 clone -v $p4repourl $clone_dest
My target machine runs RHEL 7, and the script works fine when I execute it on the target machine directly. However, if I run it from my Ansible master using the command module, it throws the following error:
['git', 'config', 'git-p4.client']
Opening pipe: ['p4', '-G', 'files', 'mydepot/...#head']
Traceback (most recent call last):
  File "/usr/local/bin/git-p4", line 3657, in <module>
    main()
  File "/usr/local/bin/git-p4", line 3651, in main
    if not cmd.run(args):
  File "/usr/local/bin/git-p4", line 3525, in run
    if not P4Sync.run(self, depotPaths):
  File "/usr/local/bin/git-p4", line 3330, in run
    self.importHeadRevision(revision)
  File "/usr/local/bin/git-p4", line 3079, in importHeadRevision
    for info in p4CmdList(["files"] + fileArgs):
  File "/usr/local/bin/git-p4", line 495, in p4CmdList
    stdout=subprocess.PIPE)
  File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/subprocess.py", line 1308, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

Your PATH isn't set correctly while running under Ansible, so git-p4 can't find p4. Ansible executes commands in a non-interactive shell that does not source your login profile, so directories added there (such as /usr/local/bin) may be missing from PATH.
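One fix is to make the script itself independent of the caller's environment by prepending the directory that holds p4 at the top of the script. The sketch below demonstrates the mechanism with a throwaway fake p4 binary in a temp directory (a stand-in for /usr/local/bin/p4, so it runs anywhere):

```shell
#!/bin/bash
# Sketch: a fake p4 in a temp dir stands in for /usr/local/bin/p4, to show
# that prepending its directory makes the lookup succeed regardless of the
# minimal PATH that Ansible starts the script with.
tmpdir=$(mktemp -d)
printf '#!/bin/sh\necho p4-ok\n' > "$tmpdir/p4"
chmod +x "$tmpdir/p4"

export PATH="$tmpdir:$PATH"   # in the real script: export PATH=/usr/local/bin:$PATH
p4                            # now resolves via the updated PATH
```

With that export at the top of the clone script, both the bare `p4` invocations and git-p4's internal `subprocess` calls can find the binary.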

Related

Conda Create Environment - No Compatible Shell found

I have a bash script that creates a conda virtual environment and installs packages into it.
We are currently using conda 4.5.12 with Python 3.6 in the virtual environment, and I am trying to upgrade conda to 4.9.2 while keeping Python 3.6.
conda --version
4.9.2
This is the command I use inside my script:
conda create -y --name virtual_env python=3.6
This runs but fails during the Downloading and Extracting Packages step. Below is the error report:
Traceback (most recent call last):
File "/root/project/miniconda/lib/python3.9/site-packages/conda/exceptions.py", line 1079, in __call__
return func(*args, **kwargs)
File "/root/project/miniconda/lib/python3.9/site-packages/conda/cli/main.py", line 84, in _main
exit_code = do_call(args, p)
File "/root/project/miniconda/lib/python3.9/site-packages/conda/cli/conda_argparse.py", line 83, in do_call
return getattr(module, func_name)(args, parser)
File "/root/project/miniconda/lib/python3.9/site-packages/conda/cli/main_create.py", line 41, in execute
install(args, parser, 'create')
File "/root/project/miniconda/lib/python3.9/site-packages/conda/cli/install.py", line 317, in install
handle_txn(unlink_link_transaction, prefix, args, newenv)
File "/root/project/miniconda/lib/python3.9/site-packages/conda/cli/install.py", line 346, in handle_txn
unlink_link_transaction.execute()
File "/root/project/miniconda/lib/python3.9/site-packages/conda/core/link.py", line 249, in execute
self._execute(tuple(concat(interleave(itervalues(self.prefix_action_groups)))))
File "/root/project/miniconda/lib/python3.9/site-packages/conda/core/link.py", line 712, in _execute
raise CondaMultiError(tuple(concatv(
conda.CondaMultiError: No compatible shell found!
()
Any help is appreciated. Here is the relevant part of the script:
#!/bin/bash
set -e
install_conda_for_linux(){
#
# Determine Installation Location for non Windows systems
#
#Get the path where Miniconda needs to be installed and remove anything that already exists#
downloaded_file=$base_dir/$conda_file
output_formatted Removing file: $downloaded_file
rm -f $downloaded_file
#
# Download Miniconda
#
output_formatted Downloading Miniconda from: $conda_url '\n' Saving file in: $base_dir
curl -L $conda_url > $base_dir/$conda_file
#
# Install Miniconda
#
rm -rf $install_dir
bash $base_dir/$conda_file -b -p $install_dir
#
# Modify PATH
#
conda_path=$install_dir/bin
export PATH=$conda_path:\$PATH
conda_version=`conda --version`
}
#
# Variables
#
pyversion=3
python_version="3.6.10"
conda_version="4.9.2"
skip_install=$1
base_url='https://repo.anaconda.com/miniconda'
virtual_env=venv
#conda_file is only specified for use in messages below on Windows, as it is manual install, which must be done before running this script.
declare -A conda_file_map
conda_file_map[Linux]="Miniconda${pyversion}-py39_${conda_version}-Linux-x86_64.sh"
conda_file=${conda_file_map[${os_type}]}
#
# Installation of conda and its dependencies
#
if [ ${skip_install} != 'true' ];then
conda_url=${base_url}/${conda_file}
install_conda_for_linux
#
# Create Environment
#
output_formatted Creating new virtual environment: $virtual_env for python_version $python_version
conda create -y -vv --name $virtual_env python=$python_version
Here's your bug:
conda_path=$install_dir/bin
export PATH=$conda_path:\$PATH
Let's say install_dir=/path/to/install, and the starting value of PATH is /bin:/usr/bin:/usr/local/bin (which is how which sh finds /bin/sh or /usr/bin/sh).
After you ran this command, because of the backslash, you don't have PATH=/path/to/install/bin:/bin:/usr/bin:/usr/local/bin (which is what you want), but instead you have PATH=/path/to/install/bin:$PATH; the backslash caused the literal string $PATH to be added to your variable, instead of the string that's contained therein.
Thus, /bin and /usr/bin are no longer listed in your PATH variable, so which can't find them.
To fix this, just make it:
conda_path=$install_dir/bin
PATH=$conda_path:$PATH
The big fix is changing \$PATH to just $PATH -- but beyond that, you don't need the export (changes to variables that are already in the environment are automatically re-exported), and having it adds complexity for no good reason.
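The difference is easy to demonstrate in isolation (the paths below are the hypothetical ones from the example above):

```shell
install_dir=/path/to/install
conda_path=$install_dir/bin

# Inside double quotes, a backslash before $ keeps the five literal
# characters '$PATH' instead of expanding the variable.
bad="$conda_path:\$PATH"
good="$conda_path:$PATH"

echo "$bad"    # -> /path/to/install/bin:$PATH  (literal, broken)
```

`bad` contains the literal text `$PATH`, while `good` begins with /path/to/install/bin followed by the real search path, which is what conda's installer needs.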

Error in installing OpenStack using devstack in centos7

I'm trying to install OpenStack using DevStack on CentOS 7. I'm using the following documentation as a guide, but I'm encountering the error shown below. This is the output from running ./stack.sh; I run ./clean.sh and ./unstack.sh before running ./stack.sh.
Obtaining file:///opt/stack/keystone
Complete output from command python setup.py egg_info:
ERROR:root:Error parsing
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/pbr/core.py", line 96, in pbr
attrs = util.cfg_to_args(path, dist.script_args)
File "/usr/lib/python2.7/site-packages/pbr/util.py", line 259, in cfg_to_args
pbr.hooks.setup_hook(config)
File "/usr/lib/python2.7/site-packages/pbr/hooks/__init__.py", line 25, in setup_hook
metadata_config.run()
File "/usr/lib/python2.7/site-packages/pbr/hooks/base.py", line 27, in run
self.hook()
File "/usr/lib/python2.7/site-packages/pbr/hooks/metadata.py", line 26, in hook
self.config['name'], self.config.get('version', None))
File "/usr/lib/python2.7/site-packages/pbr/packaging.py", line 839, in get_version
name=package_name))
Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name keystone was given, but was not able to be found.
error in setup command: Error parsing /opt/stack/keystone/setup.cfg: Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name keystone was given, but was not able to be found.
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /opt/stack/keystone/
You are using pip version 9.0.3, however version 18.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
+inc/python:pip_install:1 exit_trap
+./stack.sh:exit_trap:515 local r=1
++./stack.sh:exit_trap:516 jobs -p
+./stack.sh:exit_trap:516 jobs=
+./stack.sh:exit_trap:519 [[ -n '' ]]
+./stack.sh:exit_trap:525 '[' -f '' ']'
+./stack.sh:exit_trap:530 kill_spinner
+./stack.sh:kill_spinner:425 '[' '!' -z '' ']'
+./stack.sh:exit_trap:532 [[ 1 -ne 0 ]]
+./stack.sh:exit_trap:533 echo 'Error on exit'
Error on exit
+./stack.sh:exit_trap:535 type -p generate-subunit
+./stack.sh:exit_trap:536 generate-subunit 1545131409 150 fail
+./stack.sh:exit_trap:538 [[ -z /opt/stack/logs ]]
+./stack.sh:exit_trap:541 /opt/stack/devstack/tools/worlddump.py -d /opt/stack/logs
World dumping... see /opt/stack/logs/worlddump-2018-12-18-111239.txt for details
+./stack.sh:exit_trap:550 exit 1

EMR failed to bootstrap Airflow

I'm writing a bash script to provision the environment for running Airflow, but for some reason the script doesn't work.
If I provision the EMR cluster first and then execute the script, it works fine. But if I run the script as a custom activity, it doesn't work. I tried running the commands with sudo, but it still fails.
#!/bin/bash
# check for master node
IS_MASTER=true
if [ -f /mnt/var/lib/info/instance.json ]
then
IS_MASTER=`cat /mnt/var/lib/info/instance.json | tr -d '\n ' | sed -n 's|.*\"isMaster\":\([^,]*\).*|\1|p'`
fi
if [ "$IS_MASTER" = "true}" ];
then
# install mysql jdbc driver on sqoop
wget -qN -O ~/mysql-connector-java-5.1.39.tar.gz "https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.39.tar.gz"
tar -zxvf ~/mysql-connector-java-5.1.39.tar.gz && rm ~/mysql-connector-java-5.1.39.tar.gz
sudo mv ~/mysql-connector-java-5.1.39/mysql-connector-java-5.1.39-bin.jar /usr/lib/sqoop/lib
sudo chmod 744 /usr/lib/sqoop/lib/mysql-connector-java-5.1.39-bin.jar
aws s3 cp s3://monet-datapipeline/scripts/emr_boostrap_scripts/airflow_boostrap ~/ --recursive --exclude "*.sh"
#create enviroment for airflow
virtualenv airflowenv -p python3
source ~/airflowenv/bin/activate
pip install --upgrade pip
pip install airflow
pip install boto3
airflow initdb
mv ~/carriola/airflow.cfg ~/airflow
airflow webserver -p 9030
airflow scheduler
fi
This is the error output (stderr) from the master node:
mv: cannot stat ‘/home/hadoop/mysql-connector-java-5.1.39/mysql-connector-java-5.1.39-bin.jar’: No such file or directory
chmod: cannot access ‘/usr/lib/sqoop/lib/mysql-connector-java-5.1.39-bin.jar’: No such file or directory
/emr/instance-controller/lib/bootstrap-actions/1/airflow_bootstrap.sh: line 25: /home/hadoop/airflowenv/bin/activate: No such file or directory
You are using pip version 6.1.1, however version 8.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Exception:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 246, in main
status = self.run(options, args)
File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 352, in run
root=options.root_path,
File "/usr/lib/python2.7/dist-packages/pip/req/req_set.py", line 687, in install
requirement.uninstall(auto_confirm=True)
File "/usr/lib/python2.7/dist-packages/pip/req/req_install.py", line 730, in uninstall
paths_to_remove.remove(auto_confirm)
File "/usr/lib/python2.7/dist-packages/pip/req/req_uninstall.py", line 126, in remove
renames(path, new_path)
File "/usr/lib/python2.7/dist-packages/pip/utils/__init__.py", line 292, in renames
shutil.move(old, new)
File "/usr/lib64/python2.7/shutil.py", line 303, in move
os.unlink(src)
OSError: [Errno 13] Permission denied: '/usr/bin/pip'
You are using pip version 6.1.1, however version 8.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Command "/usr/bin/python2.7 -c "import setuptools, tokenize;__file__='/mnt/tmp/pip-build-rmGy3J/sqlalchemy/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-J6Ft9n-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /mnt/tmp/pip-build-rmGy3J/sqlalchemy
You are using pip version 6.1.1, however version 8.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Exception:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 246, in main
status = self.run(options, args)
File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 352, in run
root=options.root_path,
File "/usr/lib/python2.7/dist-packages/pip/req/req_set.py", line 693, in install
**kwargs
File "/usr/lib/python2.7/dist-packages/pip/req/req_install.py", line 817, in install
self.move_wheel_files(self.source_dir, root=root)
File "/usr/lib/python2.7/dist-packages/pip/req/req_install.py", line 1018, in move_wheel_files
isolated=self.isolated,
File "/usr/lib/python2.7/dist-packages/pip/wheel.py", line 237, in move_wheel_files
clobber(source, lib_dir, True)
File "/usr/lib/python2.7/dist-packages/pip/wheel.py", line 208, in clobber
os.makedirs(destdir)
File "/usr/lib64/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/site-packages/s3transfer'
/emr/instance-controller/lib/bootstrap-actions/1/airflow_bootstrap.sh: line 31: airflow: command not found
mv: cannot stat ‘/home/hadoop/carriola/airflow.cfg’: No such file or directory
/emr/instance-controller/lib/bootstrap-actions/1/airflow_bootstrap.sh: line 35: airflow: command not found
/emr/instance-controller/lib/bootstrap-actions/1/airflow_bootstrap.sh: line 37: airflow: command not found
You can use:
cat /mnt/var/lib/info/instance.json | jq .isMaster
instead to check whether isMaster is true.
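For example, with a simplified instance.json (the shape is assumed from the sed expression in the script above):

```shell
# Stand-in for /mnt/var/lib/info/instance.json (assumed, simplified shape).
echo '{"isMaster": true, "instanceGroupId": "ig-123"}' > instance.json

# jq -r prints the bare value rather than JSON, so the string
# comparison is exact: "true" or "false", with no stray braces.
IS_MASTER=$(jq -r .isMaster < instance.json)
if [ "$IS_MASTER" = "true" ]; then
  echo "running master-only setup"
fi
```

This avoids the fragile tr/sed pipeline, which is what left trailing characters like the `}` in the `"true}"` comparison.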

AmpLab Big Data Benchmark for Spark error on EC2

I am trying to run the Big Data benchmark on my EC2 cluster for my own Spark fork located here. It just modifies some files on the Spark core. My cluster contains 1 master and 2 slave nodes of type m1.large. I use the ec2 scripts bundled with Spark to launch my cluster. The cluster launched perfectly and I am able to successfully ssh into the master. However when I try to run the benchmarks from the master using the command
./runner/prepare-benchmark.sh --shark --aws-key-id=xxxxxxxx --aws-key=xxxxxxxx --shark-host=<my-spark-master> --shark-identity-file=/root/.ssh/id_rsa --scale-factor=1
I get the following error:
=== IMPORTING BENCHMARK DATA FROM S3 ===
bash: /root/ephemeral-hdfs/bin/hdfs: No such file or directory
Connection to ec2-54-201-169-165.us-west-2.compute.amazonaws.com closed.
bash: /root/mapreduce/bin/start-mapred.sh: No such file or directory
Connection to ec2-54-201-169-165.us-west-2.compute.amazonaws.com closed.
Traceback (most recent call last):
File "./prepare_benchmark.py", line 606, in <module>
main()
File "./prepare_benchmark.py", line 594, in main
prepare_shark_dataset(opts)
File "./prepare_benchmark.py", line 192, in prepare_shark_dataset
ssh_shark("/root/mapreduce/bin/start-mapred.sh")
File "./prepare_benchmark.py", line 180, in ssh_shark
ssh(opts.shark_host, "root", opts.shark_identity_file, command)
File "./prepare_benchmark.py", line 139, in ssh
(identity_file, username, host, command), shell=True)
File "/usr/lib64/python2.6/subprocess.py", line 505, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'ssh -t -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa root@ec2-54-201-169-165.us-west-2.compute.amazonaws.com 'source /root/.bash_profile;
/root/mapreduce/bin/start-mapred.sh'' returned non-zero exit status 127
I have tried terminating the cluster and launching it again multiple times, but the problem persists. What could be the issue?

Attempting to install Portia on OSX or Ubuntu

Could someone help me? I have tried over and over to install Portia. All goes well until I get to the point where I use the twistd command, and then I get this:
(portia)Matts-Mac-mini:slyd matt$ twistd -n slyd
Traceback (most recent call last):
  File "/Users/matt/portia/bin/twistd", line 14, in <module>
    run()
  File "/Users/matt/portia/lib/python2.7/site-packages/twisted/scripts/twistd.py", line 27, in run
    app.run(runApp, ServerOptions)
  File "/Users/matt/portia/lib/python2.7/site-packages/twisted/application/app.py", line 642, in run
    runApp(config)
  File "/Users/matt/portia/lib/python2.7/site-packages/twisted/scripts/twistd.py", line 23, in runApp
    _SomeApplicationRunner(config).run()
  File "/Users/matt/portia/lib/python2.7/site-packages/twisted/application/app.py", line 376, in run
    self.application = self.createOrGetApplication()
  File "/Users/matt/portia/lib/python2.7/site-packages/twisted/application/app.py", line 436, in createOrGetApplication
    ser = plg.makeService(self.config.subOptions)
  File "/Users/matt/portia/portia/slyd/slyd/tap.py", line 74, in makeService
    root = create_root(config)
  File "/Users/matt/portia/portia/slyd/slyd/tap.py", line 41, in create_root
    from .projectspec import create_project_resource
  File "/Users/matt/portia/portia/slyd/slyd/projectspec.py", line 5, in <module>
    from slybot.validation.schema import get_schema_validator
ImportError: No module named slybot.validation.schema
I also noted that when running 'pip install -r requirements.txt', even though I am in the correct directory ([virtualenv-name]/portia/slyd), the requirements.txt file is not in the slyd directory but in the portia directory.
I am going crazy here and any help is very much appreciated.
It looks like there is a mistake in the installation guide. The steps should be:
virtualenv ENV_NAME --no-site-packages
source ENV_NAME/bin/activate
cd ENV_NAME
git clone https://github.com/scrapinghub/portia.git
cd portia
pip install -r requirements.txt
pip install -e ./slybot
cd slyd
twistd -n slyd
This worked for me. Hopefully it will work for you too.
