I have seen that when a node is deleted from an AVL tree, the tree may require rebalancing multiple times, in contrast to insertion, which requires rebalancing only once. Can anyone give me an example of such a deletion?
Also, I have implemented an AVL tree and I want to check whether the delete() function works properly. Can you give a sequence of insertions followed by deletions that can help me figure out whether my delete() is correct and handles all of these cases?
Assume you have an AVL tree with each node storing a name (string), and you have the functions insertAVL(element) and deleteAVL(element).
Well, both insert and delete can involve multiple rotations, since you have to work your way back up the tree rebalancing as you go. An insert needs at most one rebalancing step (possibly a double rotation) before the tree is fixed, whereas a delete can trigger rotations at more than one level.
For example, add the data set {5, 2, 4, 3, 7, 8, 10, 9}, then remove 5, add 9, and finally remove 2. You get the following.
addValue. id=5
└── (1) 5

addValue. id=2
└── (2) 5
    └── (1) 2

addValue. id=4
└── (3) 5 *unbalanced left 2 - right 0*
    └── (2) 2
        └── (1) 4

After left rotation:
└── (3) 5 *unbalanced left 2 - right 0*
    └── (2) 4
        └── (1) 2

After right rotation:
└── (2) 4
    ├── (1) 2
    └── (1) 5

addValue. id=3
└── (3) 4
    ├── (2) 2
    │   └── (1) 3
    └── (1) 5

addValue. id=7
└── (3) 4
    ├── (2) 2
    │   └── (1) 3
    └── (2) 5
        └── (1) 7

addValue. id=8
└── (3) 4
    ├── (2) 2
    │   └── (1) 3
    └── (3) 5 *unbalanced right 2 - left 0*
        └── (2) 7
            └── (1) 8

After left rotation:
└── (3) 4
    ├── (2) 2
    │   └── (1) 3
    └── (2) 7
        ├── (1) 5
        └── (1) 8

addValue. id=10
└── (4) 4
    ├── (2) 2
    │   └── (1) 3
    └── (3) 7
        ├── (1) 5
        └── (2) 8
            └── (1) 10

addValue. id=9
└── (4) 4
    ├── (2) 2
    │   └── (1) 3
    └── (3) 7
        ├── (1) 5
        └── (3) 8 *unbalanced left 0 - right 2*
            └── (2) 10
                └── (1) 9

After right rotation:
└── (5) 4
    ├── (2) 2
    │   └── (1) 3
    └── (4) 7
        ├── (1) 5
        └── (3) 8 *unbalanced right 2 - left 0*
            └── (2) 9
                └── (1) 10

After left rotation:
└── (4) 4
    ├── (2) 2
    │   └── (1) 3
    └── (3) 7
        ├── (1) 5
        └── (2) 9
            ├── (1) 8
            └── (1) 10

removeValue. value=5
└── (4) 4
    ├── (2) 2
    │   └── (1) 3
    └── (3) 7 *unbalanced right 2 - left 0*
        └── (2) 9
            ├── (1) 8
            └── (1) 10

After left rotation:
└── (4) 4
    ├── (2) 2
    │   └── (1) 3
    └── (3) 9
        ├── (2) 7
        │   └── (1) 8
        └── (1) 10

addValue. id=9
└── (5) 4
    ├── (2) 2
    │   └── (1) 3
    └── (4) 9
        ├── (3) 7 *unbalanced right 2 - left 0*
        │   └── (2) 8
        │       └── (1) 9
        └── (1) 10

After left rotation:
└── (4) 4
    ├── (2) 2
    │   └── (1) 3
    └── (3) 9
        ├── (2) 8
        │   ├── (1) 7
        │   └── (1) 9
        └── (1) 10

removeValue. value=2
└── (4) 4 *unbalanced right 3 - left 1*
    ├── (1) 3
    └── (3) 9
        ├── (2) 8
        │   ├── (1) 7
        │   └── (1) 9
        └── (1) 10

After right rotation:
└── (4) 4 *unbalanced right 3 - left 1*
    ├── (1) 3
    └── (3) 8
        ├── (1) 7
        └── (2) 9
            ├── (1) 9
            └── (1) 10

After left rotation:
└── (3) 8
    ├── (2) 4
    │   ├── (1) 3
    │   └── (1) 7
    └── (2) 9
        ├── (1) 9
        └── (1) 10
I have an AVL tree implementation here, if you want to take a closer look.
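If you want to turn this trace into a regression test for your delete(), here is a minimal sketch in Python. It only assumes the API from the question, insertAVL(element) and deleteAVL(element), plus a root node with left/right references; those attribute names are placeholders, so adapt them to your implementation.

def height(node):
    # Height of a subtree; an empty subtree counts as 0.
    if node is None:
        return 0
    return 1 + max(height(node.left), height(node.right))

def is_balanced(node):
    # AVL invariant: every node's children differ in height by at most 1.
    if node is None:
        return True
    return (abs(height(node.left) - height(node.right)) <= 1
            and is_balanced(node.left)
            and is_balanced(node.right))

def exercise(tree):
    # Replay the sequence from this answer, checking balance at every step.
    for x in [5, 2, 4, 3, 7, 8, 10, 9]:
        tree.insertAVL(x)
        assert is_balanced(tree.root), f"unbalanced after insert {x}"
    for op, x in [("delete", 5), ("insert", 9), ("delete", 2)]:
        (tree.deleteAVL if op == "delete" else tree.insertAVL)(x)
        assert is_balanced(tree.root), f"unbalanced after {op} {x}"

The removal of 2 at the end is the interesting case: as the trace shows, it forces a right-left double rotation at the root, which a delete() that only rebalances the immediate parent will miss.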
Are there any best practices for organizing your project folders so that the CI/CD pipeline remains simple?
Here, the following structure is used, which seems to be quite complex:
project
├── README.md
├── azure-pipelines.yml
├── config.json
├── .gitignore
├── package1
│   ├── __init__.py
│   ├── setup.py
│   ├── README.md
│   ├── file.py
│   ├── submodule
│   │   ├── file.py
│   │   └── file_test.py
│   ├── requirements
│   │   ├── common.txt
│   │   └── dev.txt
│   └── notebooks
│       ├── notebook1.txt
│       └── notebook2.txt
├── package2
│   └── ...
└── ci_cd_scripts
    ├── requirements.py
    ├── script1.py
    ├── script2.py
    └── ...
Here, the following structure is suggested:
.
├── .dbx
│   └── project.json
├── .github
│   └── workflows
│       ├── onpush.yml
│       └── onrelease.yml
├── .gitignore
├── README.md
├── conf
│   ├── deployment.json
│   └── test
│       └── sample.json
├── pytest.ini
├── sample_project
│   ├── __init__.py
│   ├── common.py
│   └── jobs
│       ├── __init__.py
│       └── sample
│           ├── __init__.py
│           └── entrypoint.py
├── setup.py
├── tests
│   ├── integration
│   │   └── sample_test.py
│   └── unit
│       └── sample_test.py
└── unit-requirements.txt
Concretely, I want to know:
Should I use one repo for all packages and notebooks (as suggested in the first approach), or should I create one repo per library (which makes the CI/CD more laborious, as there might be dependencies between the packages)?
With both suggested folder structures, it is unclear to me where to place notebooks that are not related to any specific package (e.g. notebooks that contain my business logic and use the packages).
Is there a well-established folder structure?
Databricks had a repository with project templates to be used with Databricks (link), but it has since been archived and template creation is now part of the dbx tool. Maybe these two links will be useful for you:
dbx init command - https://dbx.readthedocs.io/en/latest/reference/cli/?h=init#dbx-init
DevOps for Workflows Guide - https://dbx.readthedocs.io/en/latest/concepts/devops/#devops-for-workflows
I have two sets of config groups, from which I need to select one config file each time.
The file structure looks like this:
conf
├── datasets
│   ├── A
│   │   ├── a1.yaml
│   │   └── a2.yaml
│   └── B
│       ├── b1.yaml
│       └── b2.yaml
└── config.yaml
I can choose a dataset config with dataset=A/a1 or dataset=B/b1, etc.
Now suppose the config files in A or B have many items in common (that's why they are grouped into two subfolders).
The question is:
How could I specify these common items in A and B, without having to specify them in each config file under these folders?
The resulting file structure may look like this:
conf
├── datasets
│   ├── A
│   │   ├── a_common.yaml  # common config items for a1 and a2
│   │   ├── a1.yaml
│   │   └── a2.yaml
│   └── B
│       ├── b_common.yaml  # common config items for b1 and b2
│       ├── b1.yaml
│       └── b2.yaml
└── config.yaml
You can use a defaults list in the a1/a2/b1/b2 files:
# A/a1.yaml
defaults:
  - a_common

... # a1 contents
References:
The tutorial on defaults lists
The reference page for defaults lists
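In case an end-to-end view helps, here is a minimal sketch of a Hydra entry point that composes the config above. The file name app.py is made up; it assumes Hydra is installed and the conf/ layout shown in the question.

# app.py: minimal sketch, assuming the conf/ layout from the question
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
    # Prints the composed config: a_common.yaml merged with a1's own keys.
    # Add _self_ to the defaults list to control which side wins on conflicts.
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    main()

Running python app.py dataset=A/a1 should then print the merged result of a_common.yaml and a1.yaml.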
I am using kubectl kustomize commands to deploy multiple applications (parsers and receivers) with similar configurations, and I'm having problems with the hierarchy of kustomization.yaml files (not understanding what's possible and what's not).
I run the kustomize command as follows from the custom directory:
$ kubectl kustomize overlay/pipeline/parsers/commercial/dev - this works fine; it produces the expected output defined in kustomization.yaml #1, as desired. What's not working is that it does NOT automatically execute kustomization #2, which is in the (already traversed) directory path two levels above. Kustomization.yaml #2 contains configMap creation that's common to all of the parser environments, and I don't want to repeat it in every env. When I tried to refer to #1 from #2, I got an error about a circular reference, yet it still fails to run the configMap creation.
I have the following directory structure tree:
custom
├── base
| ├── kustomization.yaml
│ ├── logstash-config.yaml
│ └── successful-vanilla-ls7.8.yaml
├── install_notes.txt
├── overlay
│ └── pipeline
│ ├── logstash-config.yaml
│ ├── parsers
│ │ ├── commercial
│ │ │ ├── dev
│ │ │ │ ├── dev-patches.yaml
│ │ │ │ ├── kustomization.yaml <====== #1 this works
│ │ │ │ ├── logstash-config.yaml
│ │ │ │ └── parser-config.yaml
│ │ │ ├── prod
│ │ │ ├── stage
│ │ ├── kustomization.yaml <============= #2 why won't this run automatically?
│ │ ├── logstash-config.yaml
│ │ ├── parser-config.yaml
│
Here is my #1 kustomization.yaml:
bases:
- ../../../../../base
namePrefix: dev-
commonLabels:
  app: "ls-7.8-logstash"
  chart: "logstash"
  heritage: "Helm"
  release: "ls-7.8"
patchesStrategicMerge:
- dev-patches.yaml
And here is my #2 kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
# generate ConfigMaps named logstashpipeline-parser-<some-hash> and
# logstashconfig-<some-hash>, where each listed file appears as a data
# entry (keyed by its base filename)
- name: logstashpipeline-parser
  behavior: create
  files:
  - parser-config.yaml
- name: logstashconfig
  behavior: create
  files:
  - logstash-config.yaml
The issue lies within your structure. Each entry in bases should resolve to a directory containing one kustomization.yaml file. The same goes for overlays. Now, I think it would be easier to explain with an example (I will use $ to show what goes where):
├── base $1
│   ├── deployment.yaml
│   ├── kustomization.yaml $1
│   └── service.yaml
└── overlays
    ├── dev $2
    │   ├── kustomization.yaml $2
    │   └── patch.yaml
    ├── prod $3
    │   ├── kustomization.yaml $3
    │   └── patch.yaml
    └── staging $4
        ├── kustomization.yaml $4
        └── patch.yaml
Every entry resolves to its corresponding kustomization.yaml file: base $1 resolves to kustomization.yaml $1, dev $2 to kustomization.yaml $2, and so on.
However in your use case:
├── base $1
│   ├── kustomization.yaml $1
│   ├── logstash-config.yaml
│   └── successful-vanilla-ls7.8.yaml
├── install_notes.txt
└── overlay
    └── pipeline
        ├── logstash-config.yaml
        └── parsers
            ├── commercial
            │   ├── dev $2
            │   │   ├── dev-patches.yaml
            │   │   ├── kustomization.yaml $2
            │   │   ├── logstash-config.yaml
            │   │   └── parser-config.yaml
            │   ├── prod $3
            │   └── stage $4
            ├── kustomization.yaml $???
            ├── logstash-config.yaml
            └── parser-config.yaml
Nothing resolves to your second kustomization.yaml.
So to make it work you should put those files separately under each environment.
Below you can find sources with some more examples showing how the typical directory structure should look:
Components
Directory layout
GitHub
I haven't found detailed information on how to correctly use intersphinx.
I want to combine multiple Sphinx documentation sets which live in different projects (API backends).
My structure is:
Projects
├── main_api_name
│   ├── docs
│   │   └── source
│   │       ├── ...
│   │       ├── index.rst
│   │       └── main_api_name.rst
│   └── build
│       └── lib
│           └── <python files>
├── api_name1
│   ├── docs
│   │   └── source
│   │       ├── ...
│   │       ├── index.rst
│   │       └── api_name1.rst
│   └── build
│       └── lib
│           └── <python files>
└── ...
I want to store all Sphinx documentation (whether generated or not) for all of the projects in the Projects dir in the main_api_name project.
My index.rst in each project looks like this:
###########################################
Main API documentation
###########################################

********
Contents
********

.. toctree::
   :maxdepth: 2

   main_api_name
main_api_name.rst, api_name1.rst, and the others look like:
<main_api_name>
=============================================

Access actions
##############

.. autoclass:: lib.api_access_actions.API
   :members:

Access profiles
###############

.. autoclass:: lib.api_access_profiles.API
   :members:

...
Summarizing all of the above: I just want to combine them into one Sphinx documentation set.
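For reference, intersphinx is configured in each project's conf.py via intersphinx_mapping. A minimal sketch follows; the mapping key and target below are placeholders, and it assumes each project's built docs (including the objects.inv inventory Sphinx generates) are published somewhere reachable, or at least available on disk.

# conf.py of main_api_name: minimal sketch; the key and target are
# placeholders for wherever api_name1's built docs actually live.
extensions = [
    "sphinx.ext.autodoc",
    "sphinx.ext.intersphinx",
]

intersphinx_mapping = {
    # name: (base URL or path of the built docs, None = fetch objects.inv from there)
    "api_name1": ("https://docs.example.com/api_name1/", None),
}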
I have a folder structure like so:
.
├── ansible.cfg
├── etc
│   ├── dev
│   │   ├── common
│   │   │   ├── graphite.yml
│   │   │   ├── mongo.yml
│   │   │   ├── mysql.yml
│   │   │   └── rs4.yml
│   │   ├── inventory
│   │   └── products
│   │       ├── a.yml
│   │       ├── b.yml
│   │       └── c.yml
│   └── prod
│       ├── common
│       │   ├── graphite.yml
│       │   ├── mongo.yml
│       │   ├── redis.yml
│       │   └── rs4.yml
│       ├── inventory
│       └── products
│           ├── a.yml
│           ├── b.yml
│           └── c.yml
├── globals.yml
├── startup.yml
├── roles
│   └── [...]
└── requirements.txt
And in my ansible.cfg, I would like to do something like hostfile=./etc/{{ env }}/inventory, but this doesn't work. Is there a way I can go about specifying environment-specific inventory files in Ansible?
I assume common and products hold variable files.
As @Deepali Mittal already mentioned, your inventory should look like inventory/{{ env }}.
In inventory/prod you would define a group prod, and in inventory/dev you would define a group dev:
[prod]
host1
host2
hostN
This enables you to define group vars for prod and dev. For this simply create a folder group_vars/prod and place your vars files inside.
Re-ordered your structure would look like this:
.
├── ansible.cfg
├── inventory
│   ├── dev
│   └── prod
├── group_vars
│   ├── dev
│   │   ├── common
│   │   │   ├── graphite.yml
│   │   │   ├── mongo.yml
│   │   │   ├── mysql.yml
│   │   │   └── rs4.yml
│   │   └── products
│   │       ├── a.yml
│   │       ├── b.yml
│   │       └── c.yml
│   └── prod
│       ├── common
│       │   ├── graphite.yml
│       │   ├── mongo.yml
│       │   ├── mysql.yml
│       │   └── rs4.yml
│       └── products
│           ├── a.yml
│           ├── b.yml
│           └── c.yml
├── globals.yml
├── startup.yml
├── roles
│   └── [...]
└── requirements.txt
I'm not sure what globals.yml is. If it is a playbook, it is in the correct location. If it is a variable file with global definitions, it should be saved as group_vars/all.yml and would then automatically be loaded for all hosts.
Now you call ansible-playbook with the correct inventory file:
ansible-playbook -i inventory/prod startup.yml
I don't think it's possible to evaluate the environment inside the ansible.cfg like you asked.
I think that instead of {{ env }}/inventory, inventory/{{ env }} should work. Also, if you can, please share how you use it right now and the error you get when you change the configuration to the env-specific one.