I am using kubectl kustomize commands to deploy multiple applications (parsers and receivers) with similar configurations, and I'm having problems with the hierarchy of kustomization.yaml files (not understanding what's possible and what's not).
I run the kustomize command as follows from the custom directory:
$ kubectl kustomize overlay/pipeline/parsers/commercial/dev
This works fine and produces the expected output defined in kustomization.yaml #1. What's not working is that it does NOT automatically execute the #2 kustomization, which is in the (already traversed) directory path two levels above. The #2 kustomization.yaml contains configMap creation that is common to all of the parser environments, and I don't want to repeat it in every env. When I tried to refer to #1 from #2, I got an error about a circular reference, yet it still fails to run the configMap creation.
I have the following directory structure tree:
custom
├── base
| ├── kustomization.yaml
│ ├── logstash-config.yaml
│ └── successful-vanilla-ls7.8.yaml
├── install_notes.txt
├── overlay
│ └── pipeline
│ ├── logstash-config.yaml
│ ├── parsers
│ │ ├── commercial
│ │ │ ├── dev
│ │ │ │ ├── dev-patches.yaml
│ │ │ │ ├── kustomization.yaml <====== #1 this works
│ │ │ │ ├── logstash-config.yaml
│ │ │ │ └── parser-config.yaml
│ │ │ ├── prod
│ │ │ ├── stage
│ │ ├── kustomization.yaml <============= #2 why won't this run automatically?
│ │ ├── logstash-config.yaml
│ │ ├── parser-config.yaml
│
Here is my #1 kustomization.yaml:
bases:
- ../../../../../base
namePrefix: dev-
commonLabels:
app: "ls-7.8-logstash"
chart: "logstash"
heritage: "Helm"
release: "ls-7.8"
patchesStrategicMerge:
- dev-patches.yaml
And here is my #2 kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
# generate a ConfigMap named my-generated-configmap-<some-hash> where each file
# in the list appears as a data entry (keyed by base filename).
- name: logstashpipeline-parser
behavior: create
files:
- parser-config.yaml
- name: logstashconfig
behavior: create
files:
- logstash-config.yaml
The issue lies within your structure. Each entry in base should resolve to a directory containing one kustomization.yaml file. The same goes for overlay. I think it would be easier to explain with an example (I will use $ to show what goes where):
├── base $1
│ ├── deployment.yaml
│ ├── kustomization.yaml $1
│ └── service.yaml
└── overlays
├── dev $2
│ ├── kustomization.yaml $2
│ └── patch.yaml
├── prod $3
│ ├── kustomization.yaml $3
│ └── patch.yaml
└── staging $4
├── kustomization.yaml $4
└── patch.yaml
Every entry resolves to its corresponding kustomization.yaml file. Base $1 resolves to kustomization.yaml $1, dev $2 to kustomization.yaml $2, and so on.
However in your use case:
├── base $1
| ├── kustomization.yaml $1
│ ├── logstash-config.yaml
│ └── successful-vanilla-ls7.8.yaml
├── install_notes.txt
├── overlay
│ └── pipeline
│ ├── logstash-config.yaml
│ ├── parsers
│ │ ├── commercial
│ │ │ ├── dev $2
│ │ │ │ ├── dev-patches.yaml
│ │ │ │ ├── kustomization.yaml $2
│ │ │ │ ├── logstash-config.yaml
│ │ │ │ └── parser-config.yaml
│ │ │ ├── prod $3
│ │ │ ├── stage $4
│ │ ├── kustomization.yaml $???
│ │ ├── logstash-config.yaml
│ │ ├── parser-config.yaml
│
Nothing resolves to your second kustomization.yaml.
So to make it work you should put those files separately under each environment.
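For example, the dev kustomization.yaml could simply absorb the generators from your second file - a rough sketch (it assumes parser-config.yaml and logstash-config.yaml sit next to it in the dev directory, which your tree already shows):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../../../../base
namePrefix: dev-
commonLabels:
  app: "ls-7.8-logstash"
  chart: "logstash"
  heritage: "Helm"
  release: "ls-7.8"
configMapGenerator:
# same generators as in your #2 file, now evaluated per environment
- name: logstashpipeline-parser
  behavior: create
  files:
  - parser-config.yaml
- name: logstashconfig
  behavior: create
  files:
  - logstash-config.yaml
patchesStrategicMerge:
- dev-patches.yaml
The prod and stage overlays would then carry the same generator entries alongside their own copies of those files.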
Below you can find sources with some more examples showing how a typical directory structure should look:
Components
Directory layout
GitHub
Related
Are there any best practices for organizing your project folders so that the CI/CD pipeline remains simple?
Here, the following structure is used, which seems to be quite complex:
project
│ README.md
│ azure-pipelines.yml
│ config.json
│ .gitignore
└─── package1
│ │ __init__.py
│ │ setup.py
│ │ README.md
│ │ file.py
│ └── submodule
│ │ │ file.py
│ │ │ file_test.py
│ └── requirements
│ │ │ common.txt
│ │ │ dev.txt
│ └─ notebooks
│ │ notebook1.txt
│ │ notebook2.txt
└─── package2
| │ ...
└─── ci_cd_scripts
│ requirements.py
│ script1.py
│ script2.py
│ ...
Here, the following structure is suggested:
.
├── .dbx
│ └── project.json
├── .github
│ └── workflows
│ ├── onpush.yml
│ └── onrelease.yml
├── .gitignore
├── README.md
├── conf
│ ├── deployment.json
│ └── test
│ └── sample.json
├── pytest.ini
├── sample_project
│ ├── __init__.py
│ ├── common.py
│ └── jobs
│ ├── __init__.py
│ └── sample
│ ├── __init__.py
│ └── entrypoint.py
├── setup.py
├── tests
│ ├── integration
│ │ └── sample_test.py
│ └── unit
│ └── sample_test.py
└── unit-requirements.txt
Concretely, I want to know:
Should I use one repo for all packages and notebooks (as suggested in the first approach), or should I create one repo per library (which makes the CI/CD more effortful, as there might be dependencies between the packages)?
With both suggested folder structures, it is unclear to me where to place the notebooks that are not related to any specific package (e.g. notebooks that contain my business logic and use the package).
Is there a well-established folder structure?
Databricks used to have a repository with project templates to be used with Databricks (link), but it has now been archived and template creation is part of the dbx tool - maybe these two links will be useful for you:
dbx init command - https://dbx.readthedocs.io/en/latest/reference/cli/?h=init#dbx-init
DevOps for Workflows Guide - https://dbx.readthedocs.io/en/latest/concepts/devops/#devops-for-workflows
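If you want to try it, here is a rough sketch (assuming dbx is installed via pip; the exact prompts and template parameters may differ, so check the init reference above):
$ pip install dbx
$ dbx init    # scaffolds a new project laid out much like the second structure shown above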
I am experimenting with xpdf (pdftotext) on a macOS Terminal. I use one language package (Japanese). Everything works fine if I call the executable like this (from the lib directory):
lib kelly$ ./p2t -enc UTF-8 jp.pdf
and my directory structure:
files/lib/pdftotext
files/lib/xpdfrc
files/lib/jp.pdf #file to convert
files/options/Enc/jp/ # Here I have the language package files
and the following edited xpdfrc configuration file:
#----- begin Japanese support package (2011-sep-02)
cidToUnicode Adobe-Japan1 ../options/Enc/jp/Adobe-Japan1.cidToUnicode
unicodeMap ISO-2022-JP ../options/Enc/jp/ISO-2022-JP.unicodeMap
unicodeMap EUC-JP ../options/Enc/jp/EUC-JP.unicodeMap
unicodeMap Shift-JIS ../options/Enc/jp/Shift-JIS.unicodeMap
cMapDir Adobe-Japan1 ../options/Enc/jp/CMap
toUnicodeDir ../options/Enc/jp/CMap
#----- end Japanese support package
The problem I have is calling 'pdftotext' from a different directory, for example from 'files'. In this case, the files that the configuration file points to are not found.
files kelly$ ./lib/p2t -enc UTF-8 ./lib/jp.pdf
I get the following error:
Syntax Error: Unknown character collection 'Adobe-Japan1'
And the generated file is garbage.
Any idea on how the configuration file needs to be changed?
I was able to solve a similar problem.
I installed pdftotext with a brew cask.
The installation was done with the following command
$ brew cask install pdftotext
$ pdftotext -v
pdftotext version 3.03
Copyright 1996-2011 Glyph & Cog, LLC
and I placed the xpdfrc file and the language support package in the following directories.
ls /usr/local/etc/xpdfrc
/usr/local/etc/xpdfrc
I downloaded the Japanese Language Pack from here.
https://www.xpdfreader.com/download.html
$ tree /usr/local/share/xpdf
/usr/local/share/xpdf
└── japanese
├── Adobe-Japan1.cidToUnicode
├── CMap
│ ├── 78-EUC-H
│ ├── 78-EUC-V
│ ├── 78-H
│ ├── 78-RKSJ-H
│ ├── 78-RKSJ-V
│ ├── 78-V
│ ├── 78ms-RKSJ-H
│ ├── 78ms-RKSJ-V
│ ├── 83pv-RKSJ-H
│ ├── 90ms-RKSJ-H
│ ├── 90ms-RKSJ-UCS2
│ ├── 90ms-RKSJ-V
│ ├── 90msp-RKSJ-H
│ ├── 90msp-RKSJ-V
│ ├── 90pv-RKSJ-H
│ ├── 90pv-RKSJ-UCS2
│ ├── 90pv-RKSJ-UCS2C
│ ├── 90pv-RKSJ-V
│ ├── Add-H
│ ├── Add-RKSJ-H
│ ├── Add-RKSJ-V
│ ├── Add-V
│ ├── Adobe-Japan1-0
│ ├── Adobe-Japan1-1
│ ├── Adobe-Japan1-2
│ ├── Adobe-Japan1-3
│ ├── Adobe-Japan1-4
│ ├── Adobe-Japan1-5
│ ├── Adobe-Japan1-6
│ ├── Adobe-Japan1-UCS2
│ ├── EUC-H
│ ├── EUC-V
│ ├── Ext-H
│ ├── Ext-RKSJ-H
│ ├── Ext-RKSJ-V
│ ├── Ext-V
│ ├── H
│ ├── Hankaku
│ ├── Hiragana
│ ├── Katakana
│ ├── NWP-H
│ ├── NWP-V
│ ├── RKSJ-H
│ ├── RKSJ-V
│ ├── Roman
│ ├── UniJIS-UCS2-H
│ ├── UniJIS-UCS2-HW-H
│ ├── UniJIS-UCS2-HW-V
│ ├── UniJIS-UCS2-V
│ ├── UniJIS-UTF16-H
│ ├── UniJIS-UTF16-V
│ ├── UniJIS-UTF32-H
│ ├── UniJIS-UTF32-V
│ ├── UniJIS-UTF8-H
│ ├── UniJIS-UTF8-V
│ ├── UniJIS2004-UTF16-H
│ ├── UniJIS2004-UTF16-V
│ ├── UniJIS2004-UTF32-H
│ ├── UniJIS2004-UTF32-V
│ ├── UniJIS2004-UTF8-H
│ ├── UniJIS2004-UTF8-V
│ ├── UniJISPro-UCS2-HW-V
│ ├── UniJISPro-UCS2-V
│ ├── UniJISPro-UTF8-V
│ ├── UniJISX0213-UTF32-H
│ ├── UniJISX0213-UTF32-V
│ ├── UniJISX02132004-UTF32-H
│ ├── UniJISX02132004-UTF32-V
│ ├── V
│ └── WP-Symbol
├── EUC-JP.unicodeMap
├── ISO-2022-JP.unicodeMap
├── README
├── Shift-JIS.unicodeMap
└── add-to-xpdfrc
2 directories, 76 files
The contents of xpdfrc are as follows
$ cat /usr/local/etc/xpdfrc
cidToUnicode Adobe-Japan1 /usr/local/share/xpdf/japanese/Adobe-Japan1.cidToUnicode
unicodeMap ISO-2022-JP /usr/local/share/xpdf/japanese/ISO-2022-JP.unicodeMap
unicodeMap EUC-JP /usr/local/share/xpdf/japanese/EUC-JP.unicodeMap
unicodeMap Shift-JIS /usr/local/share/xpdf/japanese/Shift-JIS.unicodeMap
cMapDir Adobe-Japan1 /usr/local/share/xpdf/japanese/CMap
toUnicodeDir /usr/local/share/xpdf/japanese/CMap
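Because the paths in this xpdfrc are absolute, pdftotext now finds the Japanese support files no matter which directory it is called from, for example (file names here are just illustrative):
$ cd ~/Documents
$ pdftotext -enc UTF-8 jp.pdf jp.txt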
Let's say I have a project called my-project/ that lives in its own directory and has the following file structure.
my-project/
.
├── src
│ ├── index.html
│ ├── main.js
│ ├── normalize.js
│ ├── routes
│ │ ├── index.js
│ │ └── Home
│ │ ├── index.js
│ │ └── assets
│ ├── static
│ ├── store
│ │ ├── createStore.js
│ │ └── reducers.js
│ └── styles
└── project.config.js
Now let's say I have a new project called my-new-project that also lives in its own directory and has the same file structure as my-project, but it contains an additional file called my-files-to-copy.txt.
my-new-project/
.
├── src
│ ├── index.html
│ ├── main.js
│ ├── normalize.js
│ ├── routes
│ │ ├── index.js
│ │ └── Home
│ │ ├── index.js
│ │ └── assets
│ ├── static
│ ├── store
│ │ ├── createStore.js
│ │ └── reducers.js
│ └── styles
├── project.config.js
└── my-files-to-copy.txt # new file added to tree
my-new-project/ has the same file structure as my-project/, but different file contents.
Now let's say my-files-to-copy.txt contains a list of files I want to copy from my-project/ into the same paths in my-new-project/, overwriting the existing files at those locations.
my-files-to-copy.txt
src/main.js
src/routes/index.js
src/store/reducers.js
project.config.js
How can I accomplish this with a terminal/bash/shell command or script?
edit:
I think I might be able to do:
cp my-project/src/main.js my-new-project/src/main.js
cp my-project/src/routes/index.js my-new-project/src/routes/index.js
cp my-project/src/store/reducers.js my-new-project/src/store/reducers.js
cp my-project/project.config.js my-new-project/project.config.js
But as the number of files grows, this method becomes less efficient. I was looking for a solution that would let me leverage the file containing the list (or at least a script), without having to write a separate command for each file.
Assuming my-project and my-new-project are in the same directory:
xargs -i -a my-new-project/my-files-to-copy.txt cp my-project/{} my-new-project/{}
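Note that -a and -i are GNU xargs options; if your xargs does not support them (for example the BSD xargs that ships with macOS), a plain read loop over the list file is a portable sketch of the same idea:
while IFS= read -r f; do
  # copy each listed file into the same relative path in the new project
  cp "my-project/$f" "my-new-project/$f"
done < my-new-project/my-files-to-copy.txt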
I have a folder structure like so:
.
├── ansible.cfg
├── etc
│ ├── dev
│ │ ├── common
│ │ │ ├── graphite.yml
│ │ │ ├── mongo.yml
│ │ │ ├── mysql.yml
│ │ │ └── rs4.yml
│ │ ├── inventory
│ │ └── products
│ │ ├── a.yml
│ │ ├── b.yml
│ │ └── c.yml
│ └── prod
│ ├── common
│ │ ├── graphite.yml
│ │ ├── mongo.yml
│ │ ├── redis.yml
│ │ └── rs4.yml
│ ├── inventory
│ └── products
│ ├── a.yml
│ ├── b.yml
│ └── c.yml
├── globals.yml
├── startup.yml
├── roles
| └── [...]
└── requirements.txt
And in my ansible.cfg, I would like to do something like: hostfile=./etc/{{ env }}/inventory, but this doesn't work. Is there a way I can go about specifying environment-specific inventory files in Ansible?
I assume common and products are variable files.
As @Deepali Mittal already mentioned, your inventory should look like inventory/{{ env }}.
In inventory/prod you would define a group prod and in inventory/dev you would define a group dev:
[prod]
host1
host2
hostN
This enables you to define group vars for prod and dev. For this simply create a folder group_vars/prod and place your vars files inside.
Re-ordered, your structure would look like this:
.
├── ansible.cfg
├── inventory
│ ├── dev
│ └── prod
├── group_vars
│ ├── dev
│ │ ├── common
│ │ │ ├── graphite.yml
│ │ │ ├── mongo.yml
│ │ │ ├── mysql.yml
│ │ │ └── rs4.yml
│ │ └── products
│ │ ├── a.yml
│ │ ├── b.yml
│ │ └── c.yml
│ └── prod
│ ├── common
│ │ ├── graphite.yml
│ │ ├── mongo.yml
│ │ ├── mysql.yml
│ │ └── rs4.yml
│ └── products
│ ├── a.yml
│ ├── b.yml
│ └── c.yml
├── globals.yml
├── startup.yml
├── roles
| └── [...]
└── requirements.txt
I'm not sure what globals.yml is. If it is a playbook, it is in the correct location. If it is a variable file with global definitions, it should be saved as group_vars/all.yml and would automatically be loaded for all hosts.
Now you call ansible-playbook with the correct inventory file:
ansible-playbook -i inventory/prod startup.yml
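and for the dev environment accordingly:
ansible-playbook -i inventory/dev startup.yml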
I don't think it's possible to evaluate the environment inside the ansible.cfg like you asked.
I think that instead of {{ env }}/inventory, inventory/{{ env }} should work. Also, if you can, please share how you use it right now and the error you get when you change the configuration to the env-specific one.
I read on Wikipedia that NoScript is open source (http://en.wikipedia.org/wiki/NoScript), but on the official site (http://noscript.net/) I can't find any sources. So my question is: where can I find the sources? Or is there something I did not understand, and the source code is not available?
The Firefox XPI format does not prevent you from simply extracting the contents of the plugin to examine the source code.
While I cannot find a canonical public repository, it looks like someone has systematically downloaded and extracted all the available XPIs and created a GitHub repository out of them.
https://github.com/avian2/noscript
If you'd like to do it yourself, XPI files are just standard ZIP files, so if you want to extract one yourself you can simply point an extraction program at it.
Here's an example of doing that from the command line:
mkdir noscript_source
cd noscript_source
curl -LO https://addons.mozilla.org/firefox/downloads/file/219550/noscript_security_suite-2.6.6.8-fx+fn+sm.xpi
unzip noscript_security_suite-2.6.6.8-fx+fn+sm.xpi
That yields a directory structure that looks like this:
.
├── GPL.txt
├── META-INF
│ ├── manifest.mf
│ ├── zigbert.rsa
│ └── zigbert.sf
├── NoScript_License.txt
├── chrome
│ └── noscript.jar
├── chrome.manifest
├── components
│ └── noscriptService.js
├── defaults
│ └── preferences
│ └── noscript.js
├── install.rdf
├── mozilla.cfg
└── noscript_security_suite-2.6.6.8-fx+fn+sm.xpi
Then the main code is located inside chrome/noscript.jar. You can extract that to get at the JavaScript that makes up the bulk of the plugin:
cd chrome/
unzip noscript.jar
Which will yield the main source tree:
.
├── content
│ └── noscript
│ ├── ABE.g
│ ├── ABE.js
│ ├── ABELexer.js
│ ├── ABEParser.js
│ ├── ASPIdiocy.js
│ ├── ChannelReplacement.js
│ ├── ClearClickHandler.js
│ ├── ClearClickHandlerLegacy.js
│ ├── Cookie.js
│ ├── DNS.js
│ ├── DOM.js
│ ├── ExternalFilters.js
│ ├── FlashIdiocy.js
│ ├── HTTPS.js
│ ├── Lang.js
│ ├── NoScript_License.txt
│ ├── PlacesPrefs.js
│ ├── Plugins.js
│ ├── Policy.js
│ ├── Profiler.js
│ ├── Removal.js
│ ├── RequestWatchdog.js
│ ├── STS.js
│ ├── ScriptSurrogate.js
│ ├── Strings.js
│ ├── URIValidator.js
│ ├── about.xul
│ ├── antlr.js
│ ├── clearClick.js
│ ├── clearClick.xul
│ ├── frameOptErr.xhtml
│ ├── iaUI.js
│ ├── noscript.js
│ ├── noscript.xbl
│ ├── noscriptBM.js
│ ├── noscriptBMOverlay.xul
│ ├── noscriptOptions.js
│ ├── noscriptOptions.xul
│ ├── noscriptOverlay.js
│ ├── noscriptOverlay.xul
│ ├── options-mobile.xul
│ └── overlay-mobile.xul
├── locale
└── skin
The extension contains the source code - you just need to unzip it. See Giorgio's response here.
The whole source code is publicly available in each and every XPI.
You've got it on your hard disk right now if you're a NoScript user; otherwise you can download it here.
You can examine and/or modify it by unzipping the XPI and the JAR inside, and "building" it back by rezipping both.
It's been like that forever, since the very first version.
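For the rezipping step mentioned above, here is a rough sketch (names follow the 2.6.6.8 example; run zip from inside the extracted directories so the entry paths stay intact, and note that the rebuilt XPI will no longer carry a valid signature):
cd noscript_source/chrome
zip -r noscript.jar content locale skin
cd ..
zip -r ../noscript-modified.xpi . -x "*.xpi"
The first zip rebuilds the JAR from the extracted source tree; the second repackages the whole extension directory as a new XPI, excluding the originally downloaded archive.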