Upload multiple files with one shortcut in PhpStorm - bash

How can I upload a selection of files with a single shortcut in PhpStorm?
Ideally it would use PhpStorm's Deployment mechanism, but all answers are welcome - such as making a bash file that scp's the files over (which is then executed from PhpStorm).
I'm looking for a way where I can simply press something like CMD + OPT + CTRL + J and have it upload all the marked files.
My project has the structure shown below. I've marked the files I would like to be able to upload with an (x):
project
|- subfolder
|- subsubfolder
|- assets
| |- css (x)
| |- js (x)
| |- admin (x)
| |- img
|
|- foo.php
|- bar.php
|- style.css (x)
|- bundle.js (x)
|- other.php
|- other-1.php
Attempt 1
I've already tried "Upload changed files automatically to the default server" == "Always|On explicit save..." - and it's quite magical. But if the setup isn't right, it can mess things up badly.

Alright. This is how I did it. FINALLY!!! I've been looking for this for years (no exaggeration). I finally got annoyed enough to dive into the suggestion @LazyOne came with ( <3 ).
I'm using PhpStorm 2020 and I'm on a Mac.
Step 1 - Install SSHPASS
This was the first hurdle. There are a couple of different taps out there. This one worked for me:
brew install hudochenkov/sshpass/sshpass
This is necessary to avoid getting prompted for the password whenever the scp-command is executed.
Step 2 - Get SCP to work from a terminal
It's annoying to debug from inside PhpStorm, so I would suggest starting in the terminal and getting an SCP command to work first. It depends on so many things that it might differ from one host to another.
Please note that SCP uses SSH to copy files, so SSH has to be enabled for this to work.
Here's a command that works for me:
sshpass -p 'mypassword' scp style.css app.js SERVER_USER@SERVER_IP:public_html/wp-content/themes/my_theme_name/
This copies two files to the given destination.
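As a side note (not from the original answer), scp also accepts a -r flag for copying whole directories recursively, which would cover folders like the css, js and admin directories marked in the question. A sketch with placeholder paths:
# -r copies directories recursively; adjust the local and remote paths to your project layout
sshpass -p 'mypassword' scp -r assets/css assets/js assets/admin SERVER_USER@SERVER_IP:public_html/wp-content/themes/my_theme_name/assets/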
Why am I not using an SSH key instead?
I'm on a server where there is a master user that I can add my SSH key to, so that I can access the server. But I can't do that for the individual instances.
Step 3 - Make a shell-script
I made a script called uploader.sh and added this content:
#!/bin/bash
sshpass -p 'mypassword' scp myfile.css anotherfile.js athirdfile.php SERVER_USER@SERVER_IP:public_html/wp-content/themes/my_theme_name/
Then I went to 'Run' >> 'Edit configurations' and added a new Shell Script configuration.
Note!! Remember to uncheck 'Execute in terminal'. The reason is that it's nice to be able to just keep working wherever you are; if you execute it in the terminal, the cursor ends up in the terminal. If it's unchecked, that doesn't happen.
Here you can see my configuration:
Step 4 - Run and test
Now go to 'Run' >> 'Run' and choose the configuration you just added. Then you should see a window like this:
And to run the latest 'Run' again, you can simply press CMD + R.
BAM!
Step 5 - Add uploader.sh to .gitignore
Now the password to your server is stored in clear text in a file on your machine, which is bad from a security standpoint. So if you're coding a backend for nuclear launches, then you probably shouldn't do this.
Remember to add the uploader.sh file to your .gitignore file to avoid pushing it to the repo.
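A minimal sketch of that step, plus an alternative that keeps the password out of the script entirely (the sshpass -e variant is an addition of mine, not part of the original setup):
# keep uploader.sh out of the repo; the second command is only needed if it was already committed
echo "uploader.sh" >> .gitignore
git rm --cached uploader.sh
# alternative: sshpass -e reads the password from the SSHPASS environment variable,
# so it never has to be hard-coded in uploader.sh
export SSHPASS='mypassword'
sshpass -e scp style.css app.js SERVER_USER@SERVER_IP:public_html/wp-content/themes/my_theme_name/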
Useful resources
How to use sshpass to pass a password to scp.

Related

Run ansible-lint through subdirectories within a gitlab role

I am trying to add a validation step to a gitlab repo holding a single ansible role (with no playbook).
The structure of the role looks like :
.gitlab-ci.yml
tasks/
templates/
files/
vars/
handlers/
With the .gitlab-ci.yml looking like:
stages:
  - lint
job-lint:
  image:
    name: cytopia/ansible-lint:latest
    entrypoint: ["/bin/sh", "-c"]
  stage: lint
  script:
    - ansible-lint --version
    - ansible-lint . -x 106 tasks/*.yml
I need to skip the naming rule, thus ignoring rule 106.
Otherwise, I would like all files in the repo to be checked. Since there is no playbook, lint has to be given the files that need to be checked... or at least, that is what I understood; I may have this point wrong. But anyway, if I give no file names, lint returns OK but actually performs no check.
My problem is that I don't know how to tell it to check all the YAML files recursively, or even within a subdirectory. The above code returns an error:
ansible-lint: error: unrecognized arguments: tasks/deploy.yml tasks/localhost.yml tasks/main.yml tasks/managedata.yml tasks/psqlconf.yml
Any idea on how to check all the files from a subdirectory or through the whole role?
PS: I am using the cytopia image for ansible-lint, but I have no problem using another one, provided it's hosted on Docker Hub.
You should certainly be able to pass multiple YAML files as arguments to ansible-lint. I have version 4.1.1a0, and I'm able to use it like this, for example:
ansible-lint -x 106 roles/*/tasks/*.yml
I notice that you seem to have placed a . before your -x 106; that looks like an error. It doesn't look like ansible-lint will accept a directory name as an argument (it doesn't cause it to fail; it just doesn't accomplish anything).
I've tried this both with a locally installed ansible-lint and using the cytopia/ansible-lint image, which appears to perform identically:
docker run --rm -v $PWD:/src -w /src cytopia/ansible-lint -x 106 roles/*/tasks/*.yml
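Applied back to the job in the question, the script step would then drop the stray . and pass the task files directly; a sketch of the two commands (the glob is an assumption based on the role layout shown above):
ansible-lint --version
ansible-lint -x 106 tasks/*.yml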
If you want to check all the YAML files, you can use find, either piping the results to xargs or using its -exec option - something like this:
find ./ -not -name ".gitlab-ci.yml" -name "*.yml" | xargs ansible-lint -x 106
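An equivalent using find's -exec option instead of the pipe to xargs (a sketch; the behaviour should be the same):
find ./ -not -name ".gitlab-ci.yml" -name "*.yml" -exec ansible-lint -x 106 {} +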
However, ansible-lint -x 106 ./ should work - are you sure that your role really has errors? I've tested it both on ansible-galaxy init generated roles (with meta and all that stuff) and on roles containing only a tasks directory, and it worked every time.
EDIT: I tried creating an error in an existing role, replacing "present" with "latest" in a package install task:
$ ansible-galaxy install geerlingguy.nfs
$ cd ~/.ansible/roles/geerlingguy.nfs
$ sed -i "s/present/latest/g" tasks/setup-RedHat.yml
$ ansible-lint ./
Examining tasks/main.yml of type tasks
Examining tasks/setup-Debian.yml of type tasks
Examining tasks/setup-RedHat.yml of type tasks
Examining handlers/main.yml of type handlers
Examining meta/main.yml of type meta
[403] Package installs should not use latest
tasks/setup-RedHat.yml:2
Task/Handler: Ensure NFS utilities are installed.
and it actually worked, so you may want to run with verbose output to see if it actually works; maybe the rules applied to individual YAML files differ from those applied to whole roles.
When I ran my find-based check I got a lot of extra [204] Lines should be no longer than 160 chars warnings.
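If you want to see which files actually get examined (as in the output above), a verbose run is one way to check; a sketch keeping the same rule exclusion:
# -v makes ansible-lint print the files it examines
ansible-lint -v -x 106 ./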

MapReduceIndexerTool output dir error "Cannot write parent of file"

I want to use Cloudera's MapReduceIndexerTool to understand how morphlines work. I created a basic morphline that just reads lines from the input file, and I tried to run the tool using this command:
hadoop jar /opt/cloudera/parcels/CDH/lib/solr/contrib/mr/search-mr-*-job.jar org.apache.solr.hadoop.MapReduceIndexerTool \
--morphline-file morphline.conf \
--output-dir hdfs:///hostname/dir/ \
--dry-run true
Hadoop is installed on the same machine where I run this command.
The error I'm getting is the following:
net.sourceforge.argparse4j.inf.ArgumentParserException: Cannot write parent of file: hdfs:/hostname/dir
at org.apache.solr.hadoop.PathArgumentType.verifyCanWriteParent(PathArgumentType.java:200)
The /dir directory has 777 permissions on it, so it is definitely allowed to write into it. I don't know what I should do to allow it to write into that output directory.
I'm new to HDFS and I don't know how I should approach this problem. Logs don't offer me any info about that.
What I have tried so far (with no result):
created a hierarchy of 2 directories (/dir/dir2) and put 777 permissions on both of them
changed the output-dir schema from hdfs:///... to hdfs://... because all the examples in the --help menu are built that way, but this leads to an invalid schema error
Thank you.
It states 'cannot write parent of file'. And the parent in your case is /. Take a look into the source:
private void verifyCanWriteParent(ArgumentParser parser, Path file) throws ArgumentParserException, IOException {
  Path parent = file.getParent();
  if (parent == null || !fs.exists(parent) || !fs.getFileStatus(parent).getPermission().getUserAction().implies(FsAction.WRITE)) {
    throw new ArgumentParserException("Cannot write parent of file: " + file, parser);
  }
}
The message prints file, which in your case is hdfs:/hostname/dir, so file.getParent() will be /.
Additionally, you can check the permissions with the hadoop fs command; for example, you can try to create a zero-length file in the path:
hadoop fs -touchz /test-file
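Listing the parent with hadoop fs also shows who owns it and which permission bits are set, which is exactly what verifyCanWriteParent checks; a quick sketch:
# shows owner, group and permissions of / and its children
hadoop fs -ls /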
I solved that problem after days of working on it.
The problem is with the line --output-dir hdfs:///hostname/dir/.
First of all, there should not be 3 slashes at the beginning, as I had put in my continuous attempts to make this work; there should only be 2 (as in any valid HDFS URI). Actually, I put 3 slashes because otherwise the tool throws an invalid schema exception! You can easily see in the code that the schema check is done before the verifyCanWriteParent check.
I had tried to get the hostname by simply running the hostname command on the CentOS machine that I was running the tool on. This was the main issue. I analyzed the /etc/hosts file and saw that there are 2 hostnames for the same local IP. I took the second one and it worked. (I also attached the port to the hostname, so the final format is: --output-dir hdfs://correct_hostname:8020/path/to/file/from/hdfs)
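For reference, one way to double-check the namenode URI before building the --output-dir value (the hdfs getconf lookup is my addition; the hostname, port and path are placeholders):
# prints the namenode URI HDFS is actually configured with, e.g. hdfs://correct_hostname:8020
hdfs getconf -confKey fs.defaultFS
# the output directory argument then becomes something like:
# --output-dir hdfs://correct_hostname:8020/path/to/dir/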
This error is very confusing because everywhere you look for the namenode hostname, you will see the same thing that the hostname command returns. Moreover, the errors are not structured in a way that you can diagnose the problem and take a logical path to solve it.
Additional information regarding this tool and debugging it
If you want to see the actual code that runs behind it, check the cloudera version that you are running and select the same branch on the official repository. The master is not up to date.
If you want to just run this tool to play with the morphline (by using the --dry-run option) without connecting to Solr, you can't. You have to specify a ZooKeeper endpoint and a Solr collection or a Solr config directory, which involves additional work to research. This is something that could be improved in this tool.
You don't need to run the tool with -u hdfs; it works with a regular user.

Store the log output into a file with cmdenv-output-file

I need to save the content of the module log shown in OMNeT++/Tkenv into a file, so I added this to the omnetpp.ini:
cmdenv-express-mode = false
cmdenv-output-file = log.txt
but I have two types of problems:
1) after the simulation, I do not find the "log.txt" file if I do not create it myself
2) and when I create it before launching the simulation (under ../omnetpp-4.6/log.txt), I find it empty
I used EV << to display the content of the variables I used. I need to resolve this problem in order to analyze the traffic, so how can I do that, please?
You have to start your simulation in Cmdenv mode. To do that, go to Run | Run Configurations | select your configuration, then select Command line as the User interface. The log file is created in the simulation's directory by default.
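Alternatively (not from the original answer), the same effect can be had from a terminal by starting the simulation with the Cmdenv user interface; a sketch where the executable name is a placeholder:
# -u Cmdenv selects the command-line UI, so cmdenv-output-file from omnetpp.ini takes effect
./mysimulation -u Cmdenv -f omnetpp.ini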

Run Ansible playbook on UNIQUE user/host combination

I've been trying to implement Ansible in our team to manage different kinds of application things such as configuration files for products and applications, the distribution of maintenance scripts, ...
We don't like to work with "hostnames" in our team because we have 300+ of them with meaningless names. Therefore, I started out creating aliases for them in the Ansible hosts file, like:
[bpm-i]
bpm-app1-i1 ansible_user=bpmadmin ansible_host=el1001.bc
bpm-app1-i2 ansible_user=bpmadmin ansible_host=el1003.bc
[bpm-u]
bpm-app1-u1 ansible_user=bpmadmin ansible_host=el2001.bc
bpm-app1-u2 ansible_user=bpmadmin ansible_host=el2003.bc
[bpm-all:children]
bpm-i
bpm-u
Meaning we have a BPM application named "app1" and it's deployed on two hosts in integration-testing and on two hosts in user-acceptance-testing. So far so good. Now I can run an Ansible playbook to (for example) set up SSH access (authorized_keys) for team members or push a maintenance script. I can run those PBs on each host separately, on all hosts in ITT or UAT, or even everywhere.
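For example, with a playbook (site.yml and the hosts inventory path below are placeholder names), the --limit option selects which of those aliases or groups to run against:
ansible-playbook -i hosts site.yml --limit bpm-app1-i1   # a single aliased host
ansible-playbook -i hosts site.yml --limit bpm-i         # all integration-testing hosts
ansible-playbook -i hosts site.yml --limit bpm-all       # everywhere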
But, typically, we'll install the same application app1 again on an existing host but with a different purpose - say a "training" environment. My reflex would be to do this:
[bpm-i]
bpm-app1-i1 ansible_user=bpmadmin ansible_host=el1001.bc
bpm-app1-i2 ansible_user=bpmadmin ansible_host=el1003.bc
[bpm-u]
bpm-app1-u1 ansible_user=bpmadmin ansible_host=el2001.bc
bpm-app1-u2 ansible_user=bpmadmin ansible_host=el2003.bc
[bpm-t]
bpm-app1-t1 ansible_user=bpmadmin ansible_host=el2001.bc
bpm-app1-t2 ansible_user=bpmadmin ansible_host=el2003.bc
[bpm-all:children]
bpm-i
bpm-u
bpm-t
But ... running PBs becomes a mess now and causes errors. Logically, I have two alias names reaching the same user/host combination: bpm-app1-u1 and bpm-app1-t1. I don't mind, that's perfectly logical, but if I were to test a new maintenance script, I would first push it to bpm-app1-i1 for testing and, when OK, I would probably run the PB against bpm-all. But because of the non-unique user/host combinations for some aliases, the PB would run multiple times on the same user/host. Depending on the actions in the PB this may work coincidentally, but it may also fail horribly.
Is there no way to tell Ansible "Run on ALL - UNIQUE user/host combinations" ?
Since most tasks change something on the remote host, you could use Conditionals to check for that change on the host before running.
For example, if your playbook has a task to run a script that creates a file on the remote host, you can add a when clause to "skip the task if file exists" and check for the existence of that file with a stat task before that one.
- name: Check whether script has run in a previous instance by looking for the file
  stat: path=/path/to/something
  register: something

- name: Run script when the file above does not exist
  command: bash myscript.sh
  when: not something.stat.exists

Creating alias (or something similar) that activates when cd into a specific folder

Is it possible to create aliases when I enter a certain folder?
What I want:
I use composer a lot (a PHP package manager), which installs binaries in ./vendor/bin. I would like to run those binaries directly from the project root.
For example:
/path/to/project
| - composer.json // dictates dependencies for the project
| - vendor // libs folder for composer, is created by composer
| | - bin // if lib has bin, composer creates this folder
| | | phpunit // binary
| | | phinx // binary
| | - somelib1 // downloaded by composer
| | - somelib2 // downloaded by composer
Is it possible to get this to work:
> cd /path/to/project
> phpunit
And get phpunit to execute?
Something like "sensing" the composer.json file and dynamically find the binaries in ./vendor/bin and then do something like alias="./vendor/bin/<binary-name> $#" automatically?
I use OS X 10.9 and the built-in Terminal app.
You can override cd, trap my_function DEBUG to run something on every command, or add a command into PS1 or PROMPT_COMMAND.
These have different behaviour and caveats, and I can't recommend doing any of them for this use case (after having used each of them at some point). They are bad solutions to X-Y problems.
An alternative which is much less likely to break things horribly is to create a custom function to do both things:
cdp() {
    cd "$@" && phpunit
}
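A variant of the same idea, closer to the "sensing composer.json" behaviour the question asks for - a sketch of my own combination of the two, not a standard tool - which only puts ./vendor/bin on the PATH when the target directory looks like a composer project:
cdp() {
    cd "$@" || return
    # only extend PATH if this looks like a composer project with installed binaries
    if [ -f composer.json ] && [ -d vendor/bin ]; then
        PATH="$PWD/vendor/bin:$PATH"
    fi
}
Note that PATH grows a little each time this runs, so treat it as an illustration of the approach rather than a finished solution.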

Resources