What are the critical files in ~/.gnupg?

Over the years, quite a lot of files have accumulated in my ~/.gnupg directory:
~/.gnupg$ tree -a
.
├── crls.d
│   └── DIR.txt
├── gpg-agent.conf
├── gpg.conf
├── .gpg-v21-migrated
├── private-keys-v1.d
│   └── XXX-XXX.key # several .key-files are here
├── pubring.gpg
├── pubring.kbx
├── random_seed
└── trustdb.gpg
Does anyone know how to find out what each file is for, so I can decide which can be safely deleted?
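Whatever turns out to be safe to delete, a prudent first step is to snapshot the whole directory so any mistake is reversible. A minimal sketch (the archive name is arbitrary):

```shell
# Archive the entire ~/.gnupg directory, hidden files included,
# before pruning anything from it.
tar czf "gnupg-backup-$(date +%F).tar.gz" -C "$HOME" .gnupg
```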

Copying entire folder structure from a role merging/overwriting with already existing files

I use Ansible to deploy my user-specific configuration (shell, text editor, etc.) on a newly installed system. That's why I keep all config files in my role's files directory, structured the same way they should be placed in my home directory.
What's the correct way to realize this? I don't want to list every single file in the role; existing files should be overwritten and existing directories should be merged.
I've tried the copy module, but the whole task is skipped; I assume because the parent directory (.config) already exists.
Edit: add the requested additional information
Ansible Version: 2.9.9
The role's copy task:
- name: Install user configurations
  copy:
    src: "home/"
    dest: "{{ ansible_env.HOME }}"
The files to copy in the role's directory:
desktop-enviroment
├── defaults
│   └── main.yml
├── files
│   └── home
│       ├── .config
│       │   ├── autostart-scripts
│       │   │   └── ssh-keys.sh
│       │   ├── MusicBrainz
│       │   │   ├── Picard
│       │   │   ├── Picard.conf
│       │   │   └── Picard.ini
│       │   ├── sublime-text-3
│       │   │   ├── Installed Packages
│       │   │   ├── Lib
│       │   │   ├── Local
│       │   │   └── Packages
│       │   └── yakuakerc
│       └── .local
│           └── share
│               ├── plasma
│               └── yakuake
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── tasks
│   ├── desktop-common.yaml
│   ├── desktop-gnome.yaml
│   ├── desktop-kde.yaml
│   └── main.yml
├── templates
└── vars
    └── main.yml
The relevant ansible output:
TASK [desktop-enviroment : Install user configurations] **
ok: [localhost]
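As an aside, here is a sketch of the same task with the copy module's overwrite behavior made explicit (`force: yes` is already the default; I have not verified this against Ansible 2.9 specifically). With a trailing slash on src, the contents of home/ are copied, existing directories are merged, and files whose content differs are overwritten; an "ok" result means nothing needed changing.

```yaml
- name: Install user configurations
  copy:
    src: home/                       # trailing slash: copy the contents of home/
    dest: "{{ ansible_env.HOME }}"
    force: yes                       # default: overwrite files that differ
```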

Kubernetes client code generator: Can the code exist only locally and not on a repository for the core-generator to work?

I am trying to generate client code using k8s.io/code-generator.
These are the instructions that I am following: https://itnext.io/how-to-generate-client-codes-for-kubernetes-custom-resource-definitions-crd-b4b9907769ba
My question is, does my go module need to be present on a repository or can I simply run the generate-groups.sh script on a go module that is ONLY present on my local system and not on any repository?
I have already tried running it and from what I understand, there needs to be a repository having ALL the contents of my local go module. Is my understanding correct?
You CAN run kubernetes/code-generator's generate-groups.sh on a go module that is only present on your local system. Neither code-generator nor your module needs to be in your GOPATH.
Verification
Cloned kubernetes/code-generator into a new directory.
$HOME/somedir
└── code-generator
Created a project called myrepo and mocked it with content to resemble sample-controller. Did this in the same directory to keep it simple.
somedir
├── code-generator
└── myorg.com
    └── myrepo # mock of sample-controller
        ├── go.mod
        ├── go.sum
        └── pkg
            └── apis
                └── myorg
                    ├── register.go
                    └── v1alpha1
                        ├── doc.go
                        ├── register.go
                        └── types.go
My go.mod looked like
module myorg.com/myrepo

go 1.14

require k8s.io/apimachinery v0.17.4
Ran generate-groups.sh. The -h flag specifies which boilerplate header file to use. The -o flag specifies the output base, which is necessary here because we're not in GOPATH.
$HOME/somedir/code-generator/generate-groups.sh all myorg.com/myrepo/pkg/client myorg.com/myrepo/pkg/apis "myorg:v1alpha1" \
-h $HOME/somedir/code-generator/hack/boilerplate.go.txt \
-o $HOME/somedir
Confirmed code generated in correct locations
myrepo
├── go.mod
├── go.sum
└── pkg
    ├── apis
    │   └── myorg
    │       ├── register.go
    │       └── v1alpha1
    │           ├── doc.go
    │           ├── register.go
    │           ├── types.go
    │           └── zz_generated.deepcopy.go
    └── client
        ├── clientset
        │   └── versioned
        │       ├── clientset.go
        │       ├── doc.go
        │       ├── fake
        │       ├── scheme
        │       └── typed
        ├── informers
        │   └── externalversions
        │       ├── factory.go
        │       ├── generic.go
        │       ├── internalinterfaces
        │       └── myorg
        └── listers
            └── myorg
                └── v1alpha1
Sources
Go modules support https://github.com/kubernetes/code-generator/issues/57
Documentation or support for Go modules https://github.com/kubernetes/sample-controller/issues/47

Use drush-patchfile in DDEV environment

In Drupal 7 I use
drush-patchfile
to automatically apply patches when installing/updating modules via drush. But in DDEV I don't know how to extend the existing drush with drush-patchfile.
As you can see in the Installation section of https://bitbucket.org/davereid/drush-patchfile, I need to clone the repository into the
~/.drush
directory, and that will add it to the existing drush.
On another project without DDEV, I've already done that by creating a new Docker image file:
FROM wodby/drupal-php:7.1
USER root
RUN mkdir -p /home/www-data/.drush && chown -R www-data:www-data /home/www-data/;
RUN cd /home/www-data/.drush && git clone https://bitbucket.org/davereid/drush-patchfile.git \
&& echo "<?php \$options['patch-file'] = '/home/www-data/patches/patches.make';" \
> /home/www-data/.drush/drushrc.php;
USER wodby
But I'm not sure how to do that in DDEV container.
Do I need to create a new service based on drud/ddev-webserver or something else?
I've read documentation but not sure in what direction to go.
Based on @rfay's comment, here is the solution that works for me (and that, with small modifications, can work for other projects).
I've cloned the repo outside of the docker container; for example, I've cloned it into
$PROJECT_ROOT/docker/drush-patchfile
Create a custom drushrc.php in the $PROJECT_ROOT/.esenca/patches folder (you can choose a different folder):
<?php
# Location of the patches.make file. This must be a path within the docker container.
$options['patch-file'] = '/var/www/html/.esenca/patches/patches.make';
Add the following hooks to $PROJECT_ROOT/.ddev/config.yaml:
hooks:
  post-start:
    # Symlink the drush-patchfile directory into /home/.drush
    - exec: "ln -s -t /home/.drush/ /var/www/html/docker/drush-patchfile"
    # Symlink the custom drushrc.php file.
    - exec: "ln -s -t /home/.drush/ /var/www/html/.esenca/patches/drushrc.php"
The final project structure should look like this:
.
├── .ddev
│   ├── config.yaml
│   ├── docker-compose.yaml
│   ├── .gitignore
│   └── import-db
├── docker
│   └── drush-patchfile
│       ├── composer.json
│       ├── patchfile.drush.inc
│       ├── README.md
│       └── src
├── .esenca
│   └── patches
│       ├── drushrc.php
│       └── patches.make
├── public_html
│   ├── authorize.php
│   ├── CHANGELOG.txt
│   ├── COPYRIGHT.txt
│   ├── cron.php
│   ├── includes
│   ├── index.html
│   ├── index.php
│   ├── INSTALL.mysql.txt
│   ├── INSTALL.pgsql.txt
│   ├── install.php
│   ├── INSTALL.sqlite.txt
│   ├── INSTALL.txt
│   ├── LICENSE.txt
│   ├── MAINTAINERS.txt
│   ├── misc
│   ├── modules
│   ├── profiles
│   ├── README.txt
│   ├── robots.txt
│   ├── scripts
│   ├── sites
│   │   ├── all
│   │   ├── default
│   │   ├── example.sites.php
│   │   └── README.txt
│   ├── themes
│   ├── Under-Construction.gif
│   ├── update.php
│   ├── UPGRADE.txt
│   ├── web.config
│   └── xmlrpc.php
└── README.md
At the end, start the ddev environment:
ddev start
Now you can use drush-patchfile commands within the web docker container.
You can ddev ssh and then sudo chown -R $(id -u) ~/.drush/ and then do whatever you want in that directory (~/.drush is /home/.drush).
When you get it going and you want to do it repetitively for every start, you can encode the instructions you need using post-start hooks: https://ddev.readthedocs.io/en/latest/users/extending-commands/
Please follow up with the exact recipe you use, as it may help others. Thanks!

How can I recursively go into every folder and execute a shell script with the same name?

I have this directory.
.
├── animation
│   ├── animation-events
│   │   ├── app.js
│   │   ├── app.mustache.json
│   │   ├── create_view.sh
│   │   └── assets
│   │       └── dummy_character_sprite.png
│   └── change-frame
│       ├── app.js
│       ├── app.mustache.json
│       ├── create_view.sh
│       └── assets
│           └── dummy_character_sprite.png
├── app.css
├── app.mustache
├── bunch_of_functions.js
├── decorators.js
├── create_all_views.sh
└── displaying-a-static-image
    ├── app.js
    ├── app.mustache.json
    ├── create_view.sh
    └── assets
        └── piplup.png
I want create_all_views.sh to execute all the create_view.sh scripts in the child directories. How can I achieve this?
As you are in Ubuntu, you have the GNU implementation of find,
which has the -execdir option,
and you can do like this:
find path/to/dir -name create_view.sh -execdir ./create_view.sh \;
That is,
for each create_view.sh file it finds in the directory tree,
it will execute ./create_view.sh in the directory of that file.
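Building on that, create_all_views.sh itself could be as small as the following sketch (using `sh ./create_view.sh` so the scripts run even without the executable bit; drop the `sh` if yours are executable):

```shell
#!/bin/sh
# create_all_views.sh -- run every create_view.sh found below the
# current directory, each one executed from its own folder.
find . -name create_view.sh -execdir sh ./create_view.sh \;
```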

depth first search in Bash script using simple data structures

I'm trying to do as stated above. I designed a breadth-first search with relative ease.
The goal of the script is to create a directory structure of a certain depth and breadth, input by the user. I'm trying to alter my breadth-first implementation to support depth-first search. This is how far I got:
depthsearch(){
    local open=("seed")
    local tmpopen=()
    local closed=()
    local x="seed"
    for ((j=0;j<$depth;j++)); do
        for x in "${open[@]}"; do
            for ((i=0;i<$breadth;i++)); do
                tmpopen=("${tmpopen[@]}" "$x/$i")
                mkdir -p "$x/$i"
            done
            open=("${tmpopen[@]}" "${open[@]:1}")
            tmpopen=()
            closed=("${closed[@]}" "$x")
        done
        tmpopen=()
    done
}
Okay, so I trimmed down my question a bit. Apparently the problem was that I wasn't iterating by index, so I couldn't update my loop while it was iterating. However, I can't figure out how to iterate by index and update my array so I can construct my directories depth first. Any example would be appreciated.
If it is not strictly necessary to use data structures, you can do it with a simple recursion:
#!/bin/bash
depth=4
breadth=3
node_id=0 # for testing; increments by 1 each time a folder is created.

# args: level ( [0,depth) ), childNo ( [0,breadth) )
generateTreeDFS(){
    declare -i level=$1
    declare -i childNo=$2
    declare -i i=0
    if (( $level < $depth )); then
        mkdir "n_$childNo-$node_id"
        cd "n_$childNo-$node_id"
        let node_id++
        let level++
        while [ $i -lt $breadth ]; do
            generateTreeDFS $level $i
            let i++
        done
        cd ..
    fi
}
If we call
generateTreeDFS 0 0
The directory structure will be like this:
$ tree .
.
├── depthsearch.sh
└── n_0-0
    ├── n_0-1
    │   ├── n_0-2
    │   │   ├── n_0-3
    │   │   ├── n_1-4
    │   │   └── n_2-5
    │   ├── n_1-6
    │   │   ├── n_0-7
    │   │   ├── n_1-8
    │   │   └── n_2-9
    │   └── n_2-10
    │       ├── n_0-11
    │       ├── n_1-12
    │       └── n_2-13
    ├── n_1-14
    │   ├── n_0-15
    │   │   ├── n_0-16
    │   │   ├── n_1-17
    │   │   └── n_2-18
    │   ├── n_1-19
    │   │   ├── n_0-20
    │   │   ├── n_1-21
    │   │   └── n_2-22
    │   └── n_2-23
    │       ├── n_0-24
    │       ├── n_1-25
    │       └── n_2-26
    └── n_2-27
        ├── n_0-28
        │   ├── n_0-29
        │   ├── n_1-30
        │   └── n_2-31
        ├── n_1-32
        │   ├── n_0-33
        │   ├── n_1-34
        │   └── n_2-35
        └── n_2-36
            ├── n_0-37
            ├── n_1-38
            └── n_2-39
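For completeness, the asker's original goal of iterating by index and updating the array mid-loop can also be met without recursion. The following is a hypothetical sketch (the names stack and parts are mine, not from the question) of an iterative DFS that keeps frontier paths in an explicit stack array and pops the highest index:

```shell
#!/usr/bin/env bash
depth=3
breadth=2

stack=("seed")
while (( ${#stack[@]} > 0 )); do
    last=$(( ${#stack[@]} - 1 ))
    x=${stack[last]}                 # peek at the top of the stack
    stack=("${stack[@]:0:last}")     # pop it
    mkdir -p "$x"
    # A node's depth is its number of '/'-separated components.
    IFS='/' read -r -a parts <<< "$x"
    if (( ${#parts[@]} < depth )); then
        # Push children in reverse so child 0 is expanded first.
        for (( i = breadth - 1; i >= 0; i-- )); do
            stack+=( "$x/$i" )
        done
    fi
done
```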
