My views folder is structured like this:
views/
└── web/
    ├── Frontend
    │   ├── layout
    │   ├── admin
    │   └── profile
    └── Backend
        ├── layout
        ├── user
        └── post
I need to make it like this:
views/
└── web/
    ├── Frontend
    │   ├── layout
    │   ├── admin
    │   └── components
    └── Backend
        ├── layout
        ├── user
        └── components
Each section has its own components.
Is there any way to achieve this?
As you would do for views, you can use dot notation to specify the path. So if you have a component called item.blade.php inside views/web/Frontend/components, you can use:
@component('web.Frontend.components.item', [...])
...
@endcomponent
Then, to bind variables from the view to the component, you just need to pass them in the array.
For example, if your view has a $var1 that the component expects under the name $var2, you pass it like this:
@component('web.Frontend.components.item', ['var2' => $var1])
...
@endcomponent
Are there any best practices for organizing project folders so that the CI/CD pipeline remains simple?
Here, the following structure is used, which seems to be quite complex:
project
├── README.md
├── azure-pipelines.yml
├── config.json
├── .gitignore
├── package1
│   ├── __init__.py
│   ├── setup.py
│   ├── README.md
│   ├── file.py
│   ├── submodule
│   │   ├── file.py
│   │   └── file_test.py
│   ├── requirements
│   │   ├── common.txt
│   │   └── dev.txt
│   └── notebooks
│       ├── notebook1.txt
│       └── notebook2.txt
├── package2
│   └── ...
└── ci_cd_scripts
    ├── requirements.py
    ├── script1.py
    ├── script2.py
    └── ...
Here, the following structure is suggested:
.
├── .dbx
│   └── project.json
├── .github
│   └── workflows
│       ├── onpush.yml
│       └── onrelease.yml
├── .gitignore
├── README.md
├── conf
│   ├── deployment.json
│   └── test
│       └── sample.json
├── pytest.ini
├── sample_project
│   ├── __init__.py
│   ├── common.py
│   └── jobs
│       ├── __init__.py
│       └── sample
│           ├── __init__.py
│           └── entrypoint.py
├── setup.py
├── tests
│   ├── integration
│   │   └── sample_test.py
│   └── unit
│       └── sample_test.py
└── unit-requirements.txt
Concretely, I want to know:
Should I use one repo for all packages and notebooks (as suggested in the first approach), or should I create one repo per library (which makes CI/CD more effortful, as there might be dependencies between the packages)?
With both suggested folder structures, it is unclear to me where to place notebooks that are not related to any specific package (e.g. notebooks that contain my business logic and use the packages).
Is there a well-established folder structure?
Databricks used to have a repository with project templates for use with Databricks (link), but it has since been archived and template creation is now part of the dbx tool - maybe these two links will be useful for you:
dbx init command - https://dbx.readthedocs.io/en/latest/reference/cli/?h=init#dbx-init
DevOps for Workflows Guide - https://dbx.readthedocs.io/en/latest/concepts/devops/#devops-for-workflows
When setting up a GitHub Actions pipeline, I can't get it to find packages that are within my repository, and the test fails because of the missing packages.
What happens is that it clones the repo somewhere but doesn't add the cloned repo's directories to the package search path. That fails because my code imports packages from within that repo.
I believe my directory structure is sound because I have no trouble testing and building locally:
.
├── extractors
│   ├── fip.go
│   └── fip_test.go
├── fixtures
│   └── fip
│       ├── bad_req.json
│       └── history_response.json
├── .github
│   └── workflows
│       └── go_test.yml
├── main.go
├── Makefile
├── playlist
│   └── playlist.go
├── README.md
└── utils
    ├── logger
    │   └── logger.go
    └── mocks
        └── server.go
View the run here
How do I make Github actions look for the package within the cloned dir as well?
Make sure to run go mod init MODULE_NAME (if the project is outside GOROOT or GOPATH) or simply go mod init (if the project is inside GOROOT or GOPATH). The command should be run in the root folder of your project. This creates a go.mod file that enables Go to resolve your packages.
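For reference, a minimal go_test.yml along these lines would check out the repo and run the tests from its root, where go.mod lives after go mod init. The file name comes from the tree above; the action versions and Go version are assumptions, so adjust them to your toolchain:

```yaml
# .github/workflows/go_test.yml — minimal sketch
name: Go test
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      # go.mod at the repo root lets go resolve in-repo packages
      - run: go test ./...
```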
Let's say I have a project called my-project/ that lives in its own directory and has the following file structure.
my-project/
.
├── src
│   ├── index.html
│   ├── main.js
│   ├── normalize.js
│   ├── routes
│   │   ├── index.js
│   │   └── Home
│   │       ├── index.js
│   │       └── assets
│   ├── static
│   ├── store
│   │   ├── createStore.js
│   │   └── reducers.js
│   └── styles
└── project.config.js
Now let's say I have a new project called my-new-project that also lives in its own directory and has the same file structure as my-project, but it contains an additional file called my-files-to-copy.txt.
my-new-project/
.
├── src
│   ├── index.html
│   ├── main.js
│   ├── normalize.js
│   ├── routes
│   │   ├── index.js
│   │   └── Home
│   │       ├── index.js
│   │       └── assets
│   ├── static
│   ├── store
│   │   ├── createStore.js
│   │   └── reducers.js
│   └── styles
├── project.config.js
└── my-files-to-copy.txt # new file added to tree
my-new-project/ has the same file structure but different file contents from my-project/.
Now let's say my-files-to-copy.txt contains a list of files I want to copy from my-project/ and write to the same path in my-new-project/ to overwrite the existing files in my-new-project/ at those locations.
my-files-to-copy.txt
src/main.js
src/routes/index.js
src/store/reducers.js
project.config.js
How can I accomplish this with a terminal/bash/shell command or script?
edit:
I think I might be able to do:
cp my-project/src/main.js my-new-project/src/main.js
cp my-project/src/routes/index.js my-new-project/src/routes/index.js
cp my-project/src/store/reducers.js my-new-project/src/store/reducers.js
cp my-project/project.config.js my-new-project/project.config.js
But as the number of files scales, this method will become less efficient. I was looking for a more efficient solution that would allow me to leverage the file that contains the list of files (or at least a script) without having to write a separate command for each one.
Assuming my-project and my-new-project are in the same directory:
xargs -i -a my-new-project/my-files-to-copy.txt cp my-project/{} my-new-project/{}
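If GNU xargs isn't available (-i is a deprecated GNU spelling of -I {}, and -a is also a GNU extension), a plain while-read loop does the same job in any POSIX shell. A throwaway fixture stands in for the real projects here; with the real directories, only the loop itself is needed:

```shell
# Throwaway fixture standing in for the real projects
mkdir -p my-project/src my-new-project
echo 'console.log("hello")' > my-project/src/main.js
printf 'src/main.js\n' > my-new-project/my-files-to-copy.txt

# Copy each listed file, creating target directories as needed
while IFS= read -r path; do
  mkdir -p "my-new-project/$(dirname "$path")"
  cp "my-project/$path" "my-new-project/$path"
done < my-new-project/my-files-to-copy.txt
```

The mkdir -p also covers paths whose directories don't exist yet in the target project, which the bare cp one-liner would not.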
I have a folder structure like so:
.
├── ansible.cfg
├── etc
│   ├── dev
│   │   ├── common
│   │   │   ├── graphite.yml
│   │   │   ├── mongo.yml
│   │   │   ├── mysql.yml
│   │   │   └── rs4.yml
│   │   ├── inventory
│   │   └── products
│   │       ├── a.yml
│   │       ├── b.yml
│   │       └── c.yml
│   └── prod
│       ├── common
│       │   ├── graphite.yml
│       │   ├── mongo.yml
│       │   ├── redis.yml
│       │   └── rs4.yml
│       ├── inventory
│       └── products
│           ├── a.yml
│           ├── b.yml
│           └── c.yml
├── globals.yml
├── startup.yml
├── roles
│   └── [...]
└── requirements.txt
And in my ansible.cfg, I would like to do something like: hostfile=./etc/{{ env }}/inventory, but this doesn't work. Is there a way I can go about specifying environment specific inventory files in Ansible?
I assume common and products are variable files.
As @Deepali Mittal already mentioned, your inventory should look like inventory/{{ env }}.
In inventory/prod you would define a group prod and in inventory/dev you would define a group dev:
[prod]
host1
host2
hostN
This enables you to define group vars for prod and dev. For this simply create a folder group_vars/prod and place your vars files inside.
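A minimal sketch of bootstrapping that layout from an empty directory (the host names and the example variable are placeholders, not from the question):

```shell
# Per-environment inventory files plus matching group_vars folders
mkdir -p inventory group_vars/prod group_vars/dev

# One inventory per environment, each defining its own group
printf '[prod]\nhost1\nhost2\n' > inventory/prod
printf '[dev]\ndevhost1\n' > inventory/dev

# Files under group_vars/<group>/ are loaded automatically for that group
printf 'graphite_port: 2003\n' > group_vars/prod/common.yml
```

Ansible picks up group_vars/ relative to the inventory or playbook location, so no extra configuration is needed for the vars to apply.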
Re-ordered your structure would look like this:
.
├── ansible.cfg
├── inventory
│   ├── dev
│   └── prod
├── group_vars
│   ├── dev
│   │   ├── common
│   │   │   ├── graphite.yml
│   │   │   ├── mongo.yml
│   │   │   ├── mysql.yml
│   │   │   └── rs4.yml
│   │   └── products
│   │       ├── a.yml
│   │       ├── b.yml
│   │       └── c.yml
│   └── prod
│       ├── common
│       │   ├── graphite.yml
│       │   ├── mongo.yml
│       │   ├── mysql.yml
│       │   └── rs4.yml
│       └── products
│           ├── a.yml
│           ├── b.yml
│           └── c.yml
├── globals.yml
├── startup.yml
├── roles
│   └── [...]
└── requirements.txt
I'm not sure what globals.yml is. If it is a playbook, it is in the correct location. If it is a variable file with global definitions it should be saved as group_vars/all.yml and automatically would be loaded for all hosts.
Now you call ansible-playbook with the correct inventory file:
ansible-playbook -i inventory/prod startup.yml
I don't think it's possible to evaluate the environment inside the ansible.cfg like you asked.
I think that instead of {{ env }}/inventory, /inventory/{{ env }} should work. Also, if you can, please share how you use it right now and the error you get when you change the configuration to the env-specific one.
I read on Wikipedia that NoScript is open source (http://en.wikipedia.org/wiki/NoScript), but on the official site (http://noscript.net/) I can't find any sources. So my question is: where can I find the sources? Or is there something I didn't understand, and the source code is not available?
The Firefox XPI format does not prevent you from simply extracting the contents of the plugin to examine the source code.
While I cannot find a canonical public repository, it looks like someone has systematically downloaded and extracted all the available XPIs and created a GitHub repository out of them.
https://github.com/avian2/noscript
If you'd like to do it yourself, XPI files are just standard ZIP files, so you can simply point an extraction program at one.
Here's an example of doing that from the command line:
mkdir noscript_source
cd noscript_source
curl -LO https://addons.mozilla.org/firefox/downloads/file/219550/noscript_security_suite-2.6.6.8-fx+fn+sm.xpi
unzip noscript_security_suite-2.6.6.8-fx+fn+sm.xpi
That yields a directory structure that looks like this:
.
├── GPL.txt
├── META-INF
│   ├── manifest.mf
│   ├── zigbert.rsa
│   └── zigbert.sf
├── NoScript_License.txt
├── chrome
│   └── noscript.jar
├── chrome.manifest
├── components
│   └── noscriptService.js
├── defaults
│   └── preferences
│       └── noscript.js
├── install.rdf
├── mozilla.cfg
└── noscript_security_suite-2.6.6.8-fx+fn+sm.xpi
Then the main code is located inside chrome/noscript.jar. You can extract that to get at the JavaScript that makes up the bulk of the plugin:
cd chrome/
unzip noscript.jar
Which will yield the main source tree:
.
├── content
│   └── noscript
│       ├── ABE.g
│       ├── ABE.js
│       ├── ABELexer.js
│       ├── ABEParser.js
│       ├── ASPIdiocy.js
│       ├── ChannelReplacement.js
│       ├── ClearClickHandler.js
│       ├── ClearClickHandlerLegacy.js
│       ├── Cookie.js
│       ├── DNS.js
│       ├── DOM.js
│       ├── ExternalFilters.js
│       ├── FlashIdiocy.js
│       ├── HTTPS.js
│       ├── Lang.js
│       ├── NoScript_License.txt
│       ├── PlacesPrefs.js
│       ├── Plugins.js
│       ├── Policy.js
│       ├── Profiler.js
│       ├── Removal.js
│       ├── RequestWatchdog.js
│       ├── STS.js
│       ├── ScriptSurrogate.js
│       ├── Strings.js
│       ├── URIValidator.js
│       ├── about.xul
│       ├── antlr.js
│       ├── clearClick.js
│       ├── clearClick.xul
│       ├── frameOptErr.xhtml
│       ├── iaUI.js
│       ├── noscript.js
│       ├── noscript.xbl
│       ├── noscriptBM.js
│       ├── noscriptBMOverlay.xul
│       ├── noscriptOptions.js
│       ├── noscriptOptions.xul
│       ├── noscriptOverlay.js
│       ├── noscriptOverlay.xul
│       ├── options-mobile.xul
│       └── overlay-mobile.xul
├── locale
└── skin
The extension contains the source code - you just need to unzip it. See Giorgio's response here.
The whole source code is publicly available in each and every XPI.
You've got it on your hard disk right now if you're a NoScript user; otherwise you can download it here.
You can examine and/or modify it by unzipping the XPI and the JAR inside, and "building" it back by rezipping both.
It's been like that forever, since the very first version.