Directory not created with specified mode [closed] - ansible

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 4 years ago.
EDIT: I made a stupid error of thought. I am running Ansible through an Ansible Docker base image. The directory is created, but inside the container where Ansible runs, not on the host of the Ansible Docker image.
I am using ansible to create directories that are later used for docker mounts.
In the playbook I have the following:
- name: Create DB mount point
  file:
    path: /mnt/sda1/bic-mounts/oracle-database
    state: directory
    directory_mode: "777"
    mode: "777"
I had to add the directory_mode parameter, even though it is not documented, in order to make it work. Looking at this Ansible issue from a few months ago, it seems I am not the only one with that problem.
With that parameter the directory is created, but its mode is 755 instead of 777. It does not seem to be an octal-vs-decimal issue (and it should not be, since the value is a string, but who knows): 777 in decimal is 1411 in octal, which would not produce 755 either.
Does anyone know what causes the permissions to be wrong? I couldn't find anything in the documentation preventing 777, but then the need for directory_mode is also not documented :)

I confirm that specifying the leading zero works:
- name: Create DB mount point
  file:
    path: /mnt/sda1/bic-mounts/oracle-database
    state: directory
    mode: 0777
See official documentation:
Mode the file or directory should be. For those used to /usr/bin/chmod remember that modes are actually octal numbers. You must either specify the leading zero so that Ansible's YAML parser knows it is an octal number (like 0644 or 01777) or quote it (like '644' or '0644') so Ansible receives a string and can do its own conversion from string into number. Giving Ansible a number without following one of these rules will end up with a decimal number which will have unexpected results. As of version 1.8, the mode may be specified as a symbolic mode (for example, u+rwx or u=rw,g=r,o=r).
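For illustration, a minimal sketch of the equivalent spellings the documentation allows, reusing the task from the question (the symbolic form assumes Ansible 1.8 or later):
- name: Create DB mount point
  file:
    path: /mnt/sda1/bic-mounts/oracle-database
    state: directory
    mode: "0777"                # quoted string: Ansible does its own conversion
    # mode: 0777                # unquoted: the leading zero marks it as octal
    # mode: u=rwx,g=rwx,o=rwx   # symbolic form, Ansible >= 1.8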

Related

FindFirstFile/FindNextFile dwFileAttributes unexpected value (or not)? [duplicate]

This question already has an answer here:
All files has FILE_ATTRIBUTE_ARCHIVE attribute
(1 answer)
Closed 1 year ago.
When I run my code, the file attribute is 32 for all of my files.
According to this Microsoft docs page:
FILE_ATTRIBUTE_ARCHIVE, 32 (0x20), A file or directory that is an archive file or directory. Applications typically use this attribute to mark files for backup or removal.
But those are normal .jpg files. I would have expected something like this:
FILE_ATTRIBUTE_NORMAL, 128 (0x80), A file that does not have other attributes set. This attribute is valid only when used alone.
Is this just my setup, or is this the expected value for normal files?
There's nothing wrong with it. All files/folders in Windows have 4 basic attributes: Read-only, System, Hidden, Archive. The Archive attribute is pretty much useless these days: it dates from the CP/M and DOS era, where backup tools used it to recognize whether a file had already been backed up, and it has nothing to do with the file type. Any file can have it set.
It's also explained in the MSDN doc you linked above:
A file or directory that is an archive file or directory. Applications typically use this attribute to mark files for backup or removal.
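If you just want to inspect the same bit without the Win32 boilerplate, Python exposes it on Windows through os.stat; a minimal sketch (the path is a made-up example):
import os
import stat

# st_file_attributes is only available on Windows (Python 3.5+)
st = os.stat(r"C:\photos\example.jpg")
if st.st_file_attributes & stat.FILE_ATTRIBUTE_ARCHIVE:
    print("Archive bit (32 / 0x20) is set, as it is for most ordinary files")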

Copy a file to host machine in Ansible

I have to perform the below steps.
Create a Playbook test.yml.
This playbook should copy the file (somefile.j2) to the Host machine's folder1, only if somefile.j2 does not exist on host01.
By using vi editor you can add the tasks to test.yml.
[Hint: Use the stat and template module].
somefile.j2 is present at /root.
An inventory file named "myhosts" is present at /root
$ cat myhosts
host01 ansible_ssh_user=root
What should be the contents of test.yml?
Homework questions without "the work done so far to solve the problem and a description of the difficulty" are off-topic. But the conflict in the assignment, which might be considered "a practical, answerable problem that is unique to software development", deserves an answer:
Copy the file to the host only if somefile.j2 does not exist
Use the stat and template module
The assignment requires using the stat module to find out whether the file exists; if it does not, the template module creates it.
However, it is not actually necessary to use stat. The template module "will only transfer the file if the destination does not exist" when "force: no" is set (the default is yes). Such idempotent behavior of Ansible modules is essential, should be expected, and is worth searching the docs for.
Simply take a look at the examples to see "What should be the contents of test.yml?"; a minimal sketch follows.
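For reference, a minimal sketch of what test.yml could look like under the assignment's constraints (the destination path /folder1 is an assumption based on the wording; run it with ansible-playbook -i myhosts test.yml):
- hosts: host01
  tasks:
    - name: Copy somefile.j2 only if it does not already exist on the host
      template:
        src: /root/somefile.j2
        dest: /folder1/somefile.j2
        force: no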

How do I define variables for the current user in Ansible?

We are using vagrant and ansible to create standard development environments.
The ansible playbooks, vagrant files, etc. are in a git repository.
I've been using variable file separation to refer to variable files in the developer's home directory for some sensitive and/or user-specific information (e.g. email address).
We use the variables by doing a vars_file: as part of the playbook, but have to do it for every play.
I don't want to put it in the group_vars/all file because it would then be in the repository and not specific to the user.
I would rather not have a file in the repository that is ignored because people still manage to include it and it screw everybody else up.
Is there a way of doing an equivalent of groups/all which can contain tasks and/or variable definitions that will automatically run whenever a playbook is run?
We use the variables by doing a vars_file: as part of the playbook, but have to do it for every play.
Nope, you can do it at the playbook level. (But this might be a new thing; it could have been impossible back then, I did not check.)
Is there a way of doing an equivalent of groups/all which can contain tasks and/or variable definitions that will automatically run whenever a playbook is run?
Automatically run/included when?! I don't think this is possible, as there would be a lot of open questions, like:
Should this be specified on the target machine or the Ansible server?
How do you specify for which user this should happen, and on which host?
If there are tasks: do you want them executed on every playbook run under the given user? What about tasks that run as root (become)? What about tasks that specify a given user to execute as? What about tasks that run as root but create a file whose owner matches the given user?
As there are no user scopes for variables and we don't really have a "user context" outlined (see the questions above), we are currently stuck with including variable files explicitly. Hence the options below:
You can keep using vars_files and specify a first-found list:
vars_files:
  - - ~/ansible_config/vars.yml
    - <default vars file somewhere on the machine>
This way the user executing Ansible can override the default values...
You can use the --extra-vars @<filepath> syntax to include all variables from a file, and you can pass more than one of these.
A similar thing I do is to include every variable from every .yml file within my GLOBAL_INPUT_DIR (an environment variable that can be defined before running the bash script that executes ansible-playbook, or in your bash profile or similar).
EXTRA_ARGS=$(
  find "${GLOBAL_INPUT_DIR}" -iname "*.yml" \
    | while read -r line; do printf '%s ' "--extra-vars" "@${line}"; done
)
ansible-playbook "$@" ${EXTRA_ARGS}
I usually include something like this in my setups to provide an easy way of redefining variables...
BUT: be aware that this will redefine ALL occurrences of a variable name within the playbook (which was also true with vars_files).

Create VCF from .bim, .bed and .fam files [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 4 years ago.
I have a .fam, .bed and .bim file with markers for a few individuals, and I need to convert them into a VCF file.
Could someone help me create a VCF file? Are there any open-source tools which can do this?
You can perform this operation with plink2 (https://www.cog-genomics.org/plink2/) with the following command:
plink --bfile <prefix> --recode vcf bgz --out <prefix>
See here for more options: https://www.cog-genomics.org/plink2/data#recode
However, this will not generate a properly formatted VCF: plink2 does not keep track of which allele is the reference allele, while the VCF format expects the first allele to be the reference. Indels are also often coded differently, and there is no guideline for how to code them in plink format.
For more advanced conversions, a combination of "bedtools getfasta" and "bcftools norm" can help you overcome the above shortcomings, as sketched below.
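A hedged sketch of the bcftools step, assuming you have the matching reference FASTA (file names are placeholders; --check-ref ws warns about and fixes records whose REF allele does not match the reference):
bcftools norm -f reference.fa --check-ref ws input.vcf.gz -Oz -o normalized.vcf.gz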
You could try PlinkSeq, or see this post:
http://bhoom.wordpress.com/2012/04/06/convert-plink-format-to-vcf/
Briefly, the post lists user code for turning plink files into vcf format:
#!/bin/sh
##-- SCRIPT PARAMETER TO MODIFY--##
PLINKFILE=csOmni25
REF_ALLELE_FILE=csOmni25.refAllele
NEWPLINKFILE=csOmni25Ref
PLINKSEQ_PROJECT=csGWAS
## ------END SCRIPT PARAMETER------ ##
#1. convert plink/binary to have the specify reference allele
plink --noweb --bfile $PLINKFILE --reference-allele $REF_ALLELE_FILE --make-bed --out $NEWPLINKFILE
#2. create plink/seq project
pseq $PLINKSEQ_PROJECT new-project
#3. load plink file into plink/seq
pseq $PLINKSEQ_PROJECT load-plink --file $NEWPLINKFILE --id $NEWPLINKFILE
#4. write out the vcf file. As of today (4/6/2012), using vcftools version 0.1.8,
#   the documentation says you can write a compressed vcf with the --format BGZF
#   option, but vcftools doesn't recognize that option. So, I invented my own solution:
pseq $PLINKSEQ_PROJECT write-vcf | gzip > $NEWPLINKFILE.vcf.gz

How could I automate this process? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions concerning problems with code you've written must describe the specific problem — and include valid code to reproduce it — in the question itself. See SSCCE.org for guidance.
Closed 9 years ago.
I'm new to programming/scripting. I have about 40 folders (Win7) with files containing various dates. Currently, I open each folder, search for the date I need, and then copy it elsewhere. How difficult would it be to automate this process? Could I enter the dates that I need, and the tool would then copy all the files I need to a given destination?
It would be pretty simple to whip something up with Python, assuming all the dates are similar in format.
import os
import sys
import shutil

file_list = []
rootdir = sys.argv[1]

# iterate over the files under rootdir
for root, subfolders, files in os.walk(rootdir):
    for name in files:
        # if the date string appears in the directory path, remember the file
        if sys.argv[2] in root:
            file_list.append(os.path.join(root, name))

# move each collected file to the new destination
for f in file_list:
    dest = os.path.join(sys.argv[3], os.path.basename(f))
    shutil.move(f, dest)
    print("moving %s to %s" % (f, dest))
This doesn't do any error checking, and it doesn't check whether your output directory exists, but the gist is there.
python script.py directory_to_search str_to_find dest_dir
EDIT: I missed the bit about the modified date. I'm sure there are libraries to get that kind of information; the above solely looks for a string in the directory path :(
EDIT EDIT: os.path.getmtime(filepath) is how you get the modified time of a file in Python, if I'm not mistaken. A sketch follows.
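A minimal sketch of that idea, assuming the date is passed as YYYY-MM-DD (the script name and argument order are made up):
import os
import shutil
import sys
from datetime import datetime

# usage: python move_by_date.py <search_dir> <YYYY-MM-DD> <dest_dir>
search_dir, wanted_date, dest_dir = sys.argv[1], sys.argv[2], sys.argv[3]
os.makedirs(dest_dir, exist_ok=True)

for root, subfolders, files in os.walk(search_dir):
    for name in files:
        path = os.path.join(root, name)
        # compare the file's modification date with the requested date
        mtime = datetime.fromtimestamp(os.path.getmtime(path))
        if mtime.strftime("%Y-%m-%d") == wanted_date:
            shutil.move(path, os.path.join(dest_dir, name))
            print("moving %s to %s" % (path, dest_dir))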
