In Robot Framework, is it possible to run test cases in a for loop?

My issue might be syntactic in nature, maybe not, but I am clueless about how to proceed. I am writing a test case in Robot Framework, and my end goal is to be able to run multiple tests back to back in a loop.
In the case below, the Log To Console call works fine and outputs the different values passed as parameters. The next call, "Query Database And Analyse Data", works as well.
*** Test Cases ***
| For-Loop-Elements
| | @{Items} = | Create List | ${120} | ${240} | ${240}
| | :FOR | ${ELEMENT} | IN | @{ITEMS}
| | | Log To Console | Running tests at Voltage: ${ELEMENT}
| | | Query Database And Analyse Data
But then, when I try to make a test case with documentation and tags out of "Query Database And Analyse Data", I get the error "Keyword name cannot be empty", which leads me to think that when the parser reaches the [Documentation] setting, it doesn't understand that it is part of a test case. This is how I usually write test cases.
Please note that the indentation tries to match the inside of the loop.
*** Test Cases ***
| For-Loop-Elements
| | @{Items} = | Create List | ${120} | ${240} | ${240}
| | :FOR | ${ELEMENT} | IN | @{ITEMS}
| | | Log To Console | Running tests at Voltage: ${ELEMENT}
| | | Query Database And Analyse Data
| | | | [Documentation] | Query DB.
| | | | [Tags] | query | voltagevariation
| | | Duplicates Test
| | | | [Documentation] | Packets should be unique.
| | | | [Tags] | packet_duplicates | system
| | | | Duplicates
| | | Chroma Output ON
| | | | [Documentation] | Setting output terminal status to ON
| | | | [Tags] | set_output_on | voltagevariation
| | | | ${chroma-status} = | Chroma Output On | ${HOST} | ${PORT}
Now, is this a syntax problem, an indentation issue, or is it just plain impossible to do what I'm trying to do? If you have written similar cases in a different manner, please let me know!
Any help or input would be highly appreciated!

You are trying to use Keywords as Test Cases. This approach is not supported by Robot Framework.
What you could do is make one Test Case with a lot of Keywords:
*** Test Cases ***
| For-Loop-Elements
| | @{Items} = | Create List | ${120} | ${240} | ${240}
| | :FOR | ${ELEMENT} | IN | @{ITEMS}
| | | Log To Console | Running tests at Voltage: ${ELEMENT}
| | | Query Database And Analyse Data
| | | Duplicates
| | | ${chroma-status} = | Chroma Output On | ${HOST} | ${PORT}
*** Keywords ***
| Query Database And Analyse Data
| | Do something
| | Do something else
...
You can't really fit [Tags] anywhere useful. You can, however, produce meaningful failure messages (substituting for the [Documentation]) if, instead of calling a keyword directly, you wrap it in Run Keyword And Return Status.
Furthermore, please have a look at data-driven tests to get rid of the :FOR loop completely.
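As a sketch of that data-driven style (untested; Test Template is standard Robot Framework, but the template keyword Run Voltage Checks is invented here, and the keywords inside it reuse the question's names):

```robotframework
*** Settings ***
| Test Template | Run Voltage Checks

*** Test Cases ***
| Checks At 120V | ${120}
| Checks At 240V | ${240}

*** Keywords ***
| Run Voltage Checks
| | [Arguments] | ${VOLTAGE}
| | Log To Console | Running tests at Voltage: ${VOLTAGE}
| | Query Database And Analyse Data
| | Duplicates
| | ${chroma-status} = | Chroma Output On | ${HOST} | ${PORT}
```

Each row becomes its own test case, so [Documentation] and [Tags] can be attached per test, and the report shows one pass/fail result per voltage instead of one result for the whole loop.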

Cannot decommission cockroachdb node

Inside Kubernetes, after scaling down a node, decommissioning fails:
{"level":"warn","ts":1665138574.1910405,"logger":"controller.CrdbCluster","msg":"scaling down stateful set","CrdbCluster":"cockroach-cluster-stage/cockroachdb","ReconcileId":"kF7Vns39vPGnqXncUhmWnX","have":5,"want":4}
{"level":"error","ts":1665138574.8271742,"logger":"controller.CrdbCluster","msg":"decommission failed","CrdbCluster":"cockroach-cluster-stage/cockroachdb","ReconcileId":"kF7Vns39vPGnqXncUhmWnX","error":"failed to stream execution results back: command terminated with exit code 1","errorVerbose":"failed to stream execution results back: command terminated with exit code 1\n(1) attached stack trace\n -- stack trace:\n | github.com/cockroachdb/cockroach-operator/pkg/scale.CockroachExecutor.Exec\n | \tpkg/scale/executor.go:57\n | github.com/cockroachdb/cockroach-operator/pkg/scale.(*CockroachNodeDrainer).findNodeID\n | \tpkg/scale/drainer.go:242\n | github.com/cockroachdb/cockroach-operator/pkg/scale.(*CockroachNodeDrainer).Decommission\n | \tpkg/scale/drainer.go:79\n | github.com/cockroachdb/cockroach-operator/pkg/scale.(*Scaler).EnsureScale\n | \tpkg/scale/scale.go:91\n | github.com/cockroachdb/cockroach-operator/pkg/actor.decommission.Act\n | \tpkg/actor/decommission.go:143\n | github.com/cockroachdb/cockroach-operator/pkg/controller.(*ClusterReconciler).Reconcile\n | \tpkg/controller/cluster_controller.go:153\n | sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n | \texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:297\n | sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n | \texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:252\n | sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n | \texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:215\n | k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\n | k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:155\n | k8s.io/apimachinery/pkg/util/wait.BackoffUntil\n | 
\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:156\n | k8s.io/apimachinery/pkg/util/wait.JitterUntil\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:133\n | k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\n | k8s.io/apimachinery/pkg/util/wait.UntilWithContext\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:99\n | runtime.goexit\n | \tsrc/runtime/asm_amd64.s:1581\nWraps: (2) failed to stream execution results back\nWraps: (3) command terminated with exit code 1\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) exec.CodeExitError","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\texternal/com_github_go_logr_zapr/zapr.go:132\ngithub.com/cockroachdb/cockroach-operator/pkg/actor.decommission.Act\n\tpkg/actor/decommission.go:145\ngithub.com/cockroachdb/cockroach-operator/pkg/controller.(*ClusterReconciler).Reconcile\n\tpkg/controller/cluster_controller.go:153\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:297\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:252\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n\texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:215\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithCo
ntext\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:99"}
{"level":"info","ts":1665138574.8283174,"logger":"controller.CrdbCluster","msg":"Error on action","CrdbCluster":"cockroach-cluster-stage/cockroachdb","ReconcileId":"kF7Vns39vPGnqXncUhmWnX","Action":"Decommission","err":"failed to stream execution results back: command terminated with exit code 1"}
{"level":"error","ts":1665138574.8283627,"logger":"controller.CrdbCluster","msg":"action failed","CrdbCluster":"cockroach-cluster-stage/cockroachdb","ReconcileId":"kF7Vns39vPGnqXncUhmWnX","error":"failed to stream execution results back: command terminated with exit code 1","errorVerbose":"failed to stream execution results back: command terminated with exit code 1\n(1) attached stack trace\n -- stack trace:\n | github.com/cockroachdb/cockroach-operator/pkg/scale.CockroachExecutor.Exec\n | \tpkg/scale/executor.go:57\n | github.com/cockroachdb/cockroach-operator/pkg/scale.(*CockroachNodeDrainer).findNodeID\n | \tpkg/scale/drainer.go:242\n | github.com/cockroachdb/cockroach-operator/pkg/scale.(*CockroachNodeDrainer).Decommission\n | \tpkg/scale/drainer.go:79\n | github.com/cockroachdb/cockroach-operator/pkg/scale.(*Scaler).EnsureScale\n | \tpkg/scale/scale.go:91\n | github.com/cockroachdb/cockroach-operator/pkg/actor.decommission.Act\n | \tpkg/actor/decommission.go:143\n | github.com/cockroachdb/cockroach-operator/pkg/controller.(*ClusterReconciler).Reconcile\n | \tpkg/controller/cluster_controller.go:153\n | sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n | \texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:297\n | sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n | \texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:252\n | sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n | \texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:215\n | k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\n | k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:155\n | k8s.io/apimachinery/pkg/util/wait.BackoffUntil\n | 
\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:156\n | k8s.io/apimachinery/pkg/util/wait.JitterUntil\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:133\n | k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\n | k8s.io/apimachinery/pkg/util/wait.UntilWithContext\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:99\n | runtime.goexit\n | \tsrc/runtime/asm_amd64.s:1581\nWraps: (2) failed to stream execution results back\nWraps: (3) command terminated with exit code 1\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) exec.CodeExitError","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\texternal/com_github_go_logr_zapr/zapr.go:132\ngithub.com/cockroachdb/cockroach-operator/pkg/controller.(*ClusterReconciler).Reconcile\n\tpkg/controller/cluster_controller.go:185\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:297\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:252\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n\texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:215\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.Unti
lWithContext\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:99"}
{"level":"error","ts":1665138574.836441,"logger":"controller-runtime.manager.controller.crdbcluster","msg":"Reconciler error","reconciler group":"crdb.cockroachlabs.com","reconciler kind":"CrdbCluster","name":"cockroachdb","namespace":"cockroach-cluster-stage","error":"failed to stream execution results back: command terminated with exit code 1","errorVerbose":"failed to stream execution results back: command terminated with exit code 1\n(1) attached stack trace\n -- stack trace:\n | github.com/cockroachdb/cockroach-operator/pkg/scale.CockroachExecutor.Exec\n | \tpkg/scale/executor.go:57\n | github.com/cockroachdb/cockroach-operator/pkg/scale.(*CockroachNodeDrainer).findNodeID\n | \tpkg/scale/drainer.go:242\n | github.com/cockroachdb/cockroach-operator/pkg/scale.(*CockroachNodeDrainer).Decommission\n | \tpkg/scale/drainer.go:79\n | github.com/cockroachdb/cockroach-operator/pkg/scale.(*Scaler).EnsureScale\n | \tpkg/scale/scale.go:91\n | github.com/cockroachdb/cockroach-operator/pkg/actor.decommission.Act\n | \tpkg/actor/decommission.go:143\n | github.com/cockroachdb/cockroach-operator/pkg/controller.(*ClusterReconciler).Reconcile\n | \tpkg/controller/cluster_controller.go:153\n | sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n | \texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:297\n | sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n | \texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:252\n | sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n | \texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:215\n | k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\n | k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:155\n | 
k8s.io/apimachinery/pkg/util/wait.BackoffUntil\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:156\n | k8s.io/apimachinery/pkg/util/wait.JitterUntil\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:133\n | k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\n | k8s.io/apimachinery/pkg/util/wait.UntilWithContext\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:99\n | runtime.goexit\n | \tsrc/runtime/asm_amd64.s:1581\nWraps: (2) failed to stream execution results back\nWraps: (3) command terminated with exit code 1\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) exec.CodeExitError","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\texternal/com_github_go_logr_zapr/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:301\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:252\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n\texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:215\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:99"}
{"level":"info","ts":1665138584.3979504,"logger":"controller.CrdbCluster","msg":"reconciling CockroachDB cluster","CrdbCluster":"cockroach-cluster-stage/cockroachdb","ReconcileId":"URAajRJhYWotEB4tQs6hRm"}
{"level":"info","ts":1665138584.3980412,"logger":"webhooks","msg":"default","name":"cockroachdb"}
{"level":"info","ts":1665138584.4027824,"logger":"controller.CrdbCluster","msg":"Running action with name: Decommission","CrdbCluster":"cockroach-cluster-stage/cockroachdb","ReconcileId":"URAajRJhYWotEB4tQs6hRm"}
{"level":"warn","ts":1665138584.4028075,"logger":"controller.CrdbCluster","msg":"check decommission opportunities","CrdbCluster":"cockroach-cluster-stage/cockroachdb","ReconcileId":"URAajRJhYWotEB4tQs6hRm"}
{"level":"info","ts":1665138584.4028518,"logger":"controller.CrdbCluster","msg":"replicas decommissioning","CrdbCluster":"cockroach-cluster-stage/cockroachdb","ReconcileId":"URAajRJhYWotEB4tQs6hRm","status.CurrentReplicas":5,"expected":4}
{"level":"warn","ts":1665138584.4028952,"logger":"controller.CrdbCluster","msg":"operator is running inside of kubernetes, connecting to service for db connection","CrdbCluster":"cockroach-cluster-stage/cockroachdb","ReconcileId":"URAajRJhYWotEB4tQs6hRm"}
This was a bug and should be fixed by this PR.

Use AWK with delimiter to print specific columns

My file looks as follows:
+------------------------------------------+---------------+----------------+------------------+------------------+-----------------+
| Message | Status | Adress | Changes | Test | Calibration |
|------------------------------------------+---------------+----------------+------------------+------------------+-----------------|
| Hello World | Active | up | 1 | up | done |
| Hello Everyone Here | Passive | up | 2 | down | none |
| Hi there. My name is Eric. How are you? | Down | up | 3 | inactive | done |
+------------------------------------------+---------------+----------------+------------------+------------------+-----------------+
+----------------------------+---------------+----------------+------------------+------------------+-----------------+
| Message | Status | Adress | Changes | Test | Calibration |
|----------------------------+---------------+----------------+------------------+------------------+-----------------|
| What's up? | Active | up | 1 | up | done |
| Hi. I'm Otilia | Passive | up | 2 | down | none |
| Hi there. This is Marcus | Up | up | 3 | inactive | done |
+----------------------------+---------------+----------------+------------------+------------------+-----------------+
I want to extract a specific column using AWK.
I can use cut to do it; however, because each table's width varies with how many characters are present in each column, I'm not getting the desired output.
cat File.txt | cut -c -44
+------------------------------------------+
| Message |
|------------------------------------------+
| Hello World |
| Hello Everyone Here |
| Hi there. My name is Eric. How are you? |
+------------------------------------------+
+----------------------------+--------------
| Message | Status
|----------------------------+--------------
| What's up? | Active
| Hi. I'm Otilia | Passive
| Hi there. This is Marcus | Up
+----------------------------+--------------
or
cat File.txt | cut -c 44-60
+---------------+
| Status |
+---------------+
| Active |
| Passive |
| Down |
+---------------+
--+--------------
| Adress
--+--------------
| up
| up
| up
--+--------------
I tried using awk, but I don't know how to specify the two different delimiters that would take care of all the lines.
cat File.txt | awk 'BEGIN {FS="|";}{print $2,$3}'
Message Status
------------------------------------------+---------------+----------------+------------------+------------------+-----------------
Hello World Active
Hello Everyone Here Passive
Hi there. My name is Eric. How are you? Down
Message Status
----------------------------+---------------+----------------+------------------+------------------+-----------------
What's up? Active
Hi. I'm Otilia Passive
Hi there. This is Marcus Up
The output I'm looking for:
+------------------------------------------+
| Message |
|------------------------------------------+
| Hello World |
| Hello Everyone Here |
| Hi there. My name is Eric. How are you? |
+------------------------------------------+
+----------------------------+
| Message |
|----------------------------+
| What's up? |
| Hi. I'm Otilia |
| Hi there. This is Marcus |
+----------------------------+
or
+------------------------------------------+---------------+
| Message | Status |
|------------------------------------------+---------------+
| Hello World | Active |
| Hello Everyone Here | Passive |
| Hi there. My name is Eric. How are you? | Down |
+------------------------------------------+---------------+
+----------------------------+---------------+
| Message | Status |
|----------------------------+---------------+
| What's up? | Active |
| Hi. I'm Otilia | Passive |
| Hi there. This is Marcus | Up |
+----------------------------+---------------+
or random other columns
+------------------------------------------+----------------+------------------+
| Message | Adress | Test |
|------------------------------------------+----------------+------------------+
| Hello World | up | up |
| Hello Everyone Here | up | down |
| Hi there. My name is Eric. How are you? | up | inactive |
+------------------------------------------+----------------+------------------+
+----------------------------+---------------+------------------+
| Message |Adress | Test |
|----------------------------+---------------+------------------+
| What's up? |up | up |
| Hi. I'm Otilia |up | down |
| Hi there. This is Marcus |up | inactive |
+----------------------------+---------------+------------------+
Thanks in advance.
One idea using GNU awk:
awk -v fldlist="2,3" '
BEGIN { fldcnt=split(fldlist,fields,",") } # split fldlist into array fields[]
{ split($0,arr,/[|+]/,seps) # split current line on dual delimiters "|" and "+"
for (i=1;i<=fldcnt;i++) # loop through our array of fields (fldlist)
printf "%s%s", seps[fields[i]-1], arr[fields[i]] # print leading separator/delimiter and field
printf "%s\n", seps[fields[fldcnt]] # print trailing separator/delimiter and terminate line
}
' File.txt
NOTES:
requires GNU awk for the 4th argument to the split() function (seps == array of separators; see gawk string functions for details)
assumes our field delimiters (|, +) do not show up as part of the data
the input variable fldlist is a comma-delimited list of columns that mimics what would be passed to cut (eg, when a line starts with a delimiter then field #1 is blank)
For fldlist="2,3" this generates:
+------------------------------------------+---------------+
| Message | Status |
|------------------------------------------+---------------+
| Hello World | Active |
| Hello Everyone Here | Passive |
| Hi there. My name is Eric. How are you? | Down |
+------------------------------------------+---------------+
+----------------------------+---------------+
| Message | Status |
|----------------------------+---------------+
| What's up? | Active |
| Hi. I'm Otilia | Passive |
| Hi there. This is Marcus | Up |
+----------------------------+---------------+
For fldlist="2,4,6" this generates:
+------------------------------------------+----------------+------------------+
| Message | Adress | Test |
|------------------------------------------+----------------+------------------+
| Hello World | up | up |
| Hello Everyone Here | up | down |
| Hi there. My name is Eric. How are you? | up | inactive |
+------------------------------------------+----------------+------------------+
+----------------------------+----------------+------------------+
| Message | Adress | Test |
|----------------------------+----------------+------------------+
| What's up? | up | up |
| Hi. I'm Otilia | up | down |
| Hi there. This is Marcus | up | inactive |
+----------------------------+----------------+------------------+
For fldlist="4,3,2" this generates:
+----------------+---------------+------------------------------------------+
| Adress | Status | Message |
+----------------+---------------|------------------------------------------+
| up | Active | Hello World |
| up | Passive | Hello Everyone Here |
| up | Down | Hi there. My name is Eric. How are you? |
+----------------+---------------+------------------------------------------+
+----------------+---------------+----------------------------+
| Adress | Status | Message |
+----------------+---------------|----------------------------+
| up | Active | What's up? |
| up | Passive | Hi. I'm Otilia |
| up | Up | Hi there. This is Marcus |
+----------------+---------------+----------------------------+
Say that again? (fldlist="3,3,3"):
+---------------+---------------+---------------+
| Status | Status | Status |
+---------------+---------------+---------------+
| Active | Active | Active |
| Passive | Passive | Passive |
| Down | Down | Down |
+---------------+---------------+---------------+
+---------------+---------------+---------------+
| Status | Status | Status |
+---------------+---------------+---------------+
| Active | Active | Active |
| Passive | Passive | Passive |
| Up | Up | Up |
+---------------+---------------+---------------+
And if you make the mistake of trying to print the '1st' column, ie, fldlist="1":
+
|
|
|
|
|
+
+
|
|
|
|
|
+
If GNU awk is available, please try markp-fuso's nice solution.
If not, here is a POSIX-compliant alternative:
#!/bin/bash
# define bash variables
cols=(2 3 6) # bash array of desired columns
col_list=$(IFS=,; echo "${cols[*]}") # create a csv string
awk -v cols="$col_list" '
NR==FNR {
if (match($0, /^[|+]/)) { # the record contains a table
if (match($0, /^[|+]-/)) # horizontally ruled line
n = split($0, a, /[|+]/) # split into columns
else # "cell" line
n = split($0, a, /\|/)
len = 0
for (i = 1; i < n; i++) {
len += length(a[i]) + 1 # accumulated column position
pos[FNR, i] = len
}
}
next
}
{
n = split(cols, a, /,/) # split the variable `cols` on comma into an array
for (i = 1; i <= n; i++) {
col = a[i]
if (pos[FNR, col] && pos[FNR, col+1]) {
printf("%s", substr($0, pos[FNR, col], pos[FNR, col + 1] - pos[FNR, col]))
}
}
print(substr($0, pos[FNR, col + 1], 1))
}
' file.txt file.txt
Result with cols=(2 3 6) as shown above:
+---------------+----------------+-----------------+
| Status | Adress | Calibration |
+---------------+----------------+-----------------|
| Active | up | done |
| Passive | up | none |
| Down | up | done |
+---------------+----------------+-----------------+
+---------------+----------------+-----------------+
| Status | Adress | Calibration |
+---------------+----------------+-----------------|
| Active | up | done |
| Passive | up | none |
| Up | up | done |
+---------------+----------------+-----------------+
It detects the column width in the 1st pass then splits the line on the column position in the 2nd pass.
You can control the columns to print with the bash array cols which is assigned at the beginning of the script. Please assign the array to the list of desired column numbers in increasing order. If you want to use the bash variable in different way, please let me know.

Fetch a particular column value from rows with a specified condition using a shell script

I have a sample output from a command
+--------------------------------------+------------------+---------------------+-------------------------------------+
| id | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+-------------------------------------+
| 04584e8a-c210-430b-8028-79dbf741797c | | 99.99.99.91 | |
| 12d2257c-c02b-4295-b910-2069f583bee5 | 20.0.0.92 | 99.99.99.92 | 37ebfa4c-c0f9-459a-a63b-fb2e84ab7f92 |
| 98c5a929-e125-411d-8a18-89877d3c932b | | 99.99.99.93 | |
| f55e54fb-e50a-4800-9a6e-1d75004a2541 | 20.0.0.94 | 99.99.99.94 | fe996e76-ffdb-4687-91a0-9b4df2631b4e |
+--------------------------------------+------------------+---------------------+-------------------------------------+
Now I want to fetch all the "floating_ip_address" values for which the "port_id" and "fixed_ip_address" fields are blank/empty (in the sample above, 99.99.99.91 and 99.99.99.93).
How can I do it with shell scripting?
You can use sed:
fl_ips=($(sed -nE 's/\|.*\|.*\|(.*)\|\s*\|/\1/p' inputfile))
Here inputfile is the table provided in the question. The array fl_ips contains the output of sed:
>echo ${#fl_ips[@]}
2 # Array has two elements
>echo ${fl_ips[0]}
99.99.99.91
>echo ${fl_ips[1]}
99.99.99.93
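For a self-contained check, the same idea can be run against a copy of the table. This sketch uses bash arrays like the answer, swaps GNU sed's \s for the portable [[:space:]], and, via [^|]* fields, additionally requires fixed_ip_address to be blank, as the question asks (the original pattern only checks the last field):

```shell
# Hypothetical reproduction: write the sample table to a temp file and extract
# the floating IPs whose fixed_ip_address AND port_id columns are blank.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 04584e8a-c210-430b-8028-79dbf741797c |                  | 99.99.99.91         |                                      |
| 12d2257c-c02b-4295-b910-2069f583bee5 | 20.0.0.92        | 99.99.99.92         | 37ebfa4c-c0f9-459a-a63b-fb2e84ab7f92 |
| 98c5a929-e125-411d-8a18-89877d3c932b |                  | 99.99.99.93         |                                      |
| f55e54fb-e50a-4800-9a6e-1d75004a2541 | 20.0.0.94        | 99.99.99.94         | fe996e76-ffdb-4687-91a0-9b4df2631b4e |
+--------------------------------------+------------------+---------------------+--------------------------------------+
EOF
# 5 literal pipes; the 2nd and 4th fields must be whitespace-only, the 3rd is captured
fl_ips=($(sed -nE 's/\|[^|]*\|[[:space:]]*\|([^|]*)\|[[:space:]]*\|/\1/p' "$tmp"))
out="${fl_ips[*]}"
echo "$out"
```

Header and border rows never match because a header cell sits where the pattern demands whitespace, so only the two qualifying data rows are printed.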

Find references to files, recursively

In a project where XML/JS/Java files can contain references to other such files, I'd like to be able to have a quick overview of what has to be carefully checked, when one file has been updated.
So, it means I need to eventually have a look at all files referencing the modified one, and all files referencing files which refer to the modified one, etc. (recursively on matched files).
For one level, it's quite simple:
grep -E -l -o --include=*.{xml,js,java} -r "$FILE" . | xargs -n 1 basename
But how can I automate that to match (grand-(grand-))parents?
And how can that be, maybe, made more readable? For example, with a tree structure?
For example, if the file that interests me is called modified.js...
show-referring-files-to modified.js
... I could wish such an output:
some-file-with-ref-to-modified.xml
|__ a-file-referring-to-some-file-with-ref-to-modified.js
another-one-with-ref-to-modified.xml
|__ a-file-referring-to-another-one-with-ref-to-modified.js
|__ a-grand-parent-file-having-ref-to-ref-file.xml
|__ another-file-referring-to-another-one-with-ref-to-modified.js
or any other output (even flat) which allows for quickly checking which files are potentially impacted by a change.
UPDATE -- Results of current proposed answer:
ahmsff.js
|__ahmsff.xml
| |__ahmsd.js
| | |__ahmsd.xml
| | | |__ahmst.xml
| | | | |__BESH.java
| |__ahru.js
| | |__ahru.xml
| | | |__ahrut.xml
| | | | |__ashrba.js
| | | | | |__ashrba.xml
| | | | | | |__STR.java
| | |__ahrufrp.xml
| | | |__ahru.js
| | | | |__ahru.xml
| | | | | |__ahrut.xml
| | | | | | |__ashrba.js
| | | | | | | |__ashrba.xml
| | | | | | | | |__STR.java
| | | | |__ahrufrp.xml
| | | | | |__ahru.js
| | | | | | |__ahru.xml
| | | | | | | |__ahrut.xml
| | | | | | | | |__ashrba.js
| | | | | | | | | |__ashrba.xml
| | | | | | | | | | |__STR.java
| | | | | | |__ahrufrp.xml
(...)
I'd use a shell function (for the recursion) inside a shell script:
Assuming the filenames are unique and have no characters that need escaping in them:
File: /usr/local/bin/show-referring-files-to
#!/bin/bash
# bash (not plain sh) is needed for the brace expansion in --include below
get_references() {
grep -F -l --include=*.{xml,js,java} -r "$1" . | grep -v "$3" | while read -r subfile; do
#read each line of the grep result into the variable subfile
subfile="$(basename "$subfile")"
echo "$2""$subfile"
get_references "$subfile" ' '"$2" "$3"'\|'"$subfile"
done
}
while test $# -gt 0; do
#loop so more than one file can be given as argument to this script
echo "$1"
get_references "$1" '|__' "$1"
shift
done
There still are lots of performance enhancements possible.
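A quick sandbox run shows the shape of the output. The file names and contents below are invented for the demo, and the script body is reproduced with a bash shebang, since the --include brace expansion needs bash:

```shell
# Toy tree: a.xml refers to modified.js, b.js refers to a.xml.
set -e
tmp=$(mktemp -d)
cd "$tmp"
printf 'console.log(1)\n' > modified.js
printf '<script src="modified.js"/>\n' > a.xml
printf 'load("a.xml")\n' > b.js
cat > show-referring-files-to <<'EOF'
#!/bin/bash
get_references() {
  grep -F -l --include=*.{xml,js,java} -r "$1" . | grep -v "$3" | while read -r subfile; do
    subfile="$(basename "$subfile")"
    echo "$2""$subfile"
    get_references "$subfile" ' '"$2" "$3"'\|'"$subfile"
  done
}
while test $# -gt 0; do
  echo "$1"
  get_references "$1" '|__' "$1"
  shift
done
EOF
out=$(bash show-referring-files-to modified.js)
printf '%s\n' "$out"
```

For this toy tree it prints `modified.js`, then `|__a.xml`, then ` |__b.js`: each level of referrers is indented one space deeper, and the `$3` blacklist keeps already-visited names from recursing forever.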
Edit: added $3 to prevent an infinite loop.

How to move a whole partition to another table in another database?

Database: Oracle 12c
I want to take a single partition, or a set of partitions, disconnect it from a table (or set of tables) on DB1, and move it to another table in another database. I would like to avoid DML for performance reasons (it needs to be fast).
Each Partition will contain between three and four hundred million records.
Each Partition will be broken up into approximately 300 Sub-Partitions.
The task will need to be automated.
Some thoughts I had:
Somehow put each partition in its own datafile upon creation, then detach it from the source and attach it to the destination?
Extract the whole partition (not record-by-record)
Any other non-DML solutions are also welcome.
Example (move Part#33 from both tables to DB#2, preferably with a single operation):
__________________ __________________
| DB#1 | | DB#2 |
|------------------| |------------------|
|Table1 | |Table1 |
| Part#1 | | Part#1 |
| ... | | ... |
| Part#33 | ----> | Part#32 |
| Subpart#1 | | |
| ... | | |
| Subpart#300 | | |
|------------------| |------------------|
|Table2 | |Table2 |
| Part#1 | | Part#1 |
| ... | | ... |
| Part#33 | ----> | Part#32 |
| Subpart#1 | | |
| ... | | |
| Subpart#300 | | |
|__________________| |__________________|
Please read the document below; it contains examples of exchanging table partitions.
https://oracle-base.com/articles/misc/partitioning-an-existing-table-using-exchange-partition
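The standard non-DML route there is partition exchange combined with transportable tablespaces. A rough sketch only: the staging table table1_stage, tablespace stage_ts, directory dp_dir, and the partition bound are all assumptions, and composite partitions like these need a staging table whose partitioning matches the subpartitions:

```sql
-- DB#1: swap Part#33's segments into a standalone staging table
-- (metadata-only operation, no row movement)
ALTER TABLE table1 EXCHANGE PARTITION part33 WITH TABLE table1_stage
  INCLUDING INDEXES WITHOUT VALIDATION;

-- Make the staging tablespace read-only, then export its metadata:
ALTER TABLESPACE stage_ts READ ONLY;
-- $ expdp ... directory=dp_dir transport_tablespaces=stage_ts dumpfile=stage_ts.dmp

-- Copy the datafiles and the dump to DB#2, import with
-- $ impdp ... transport_datafiles=... dumpfile=stage_ts.dmp
-- then on DB#2 swap the staged segments into the target partition:
ALTER TABLE table1 ADD PARTITION part33 VALUES LESS THAN (...);  -- bound as appropriate
ALTER TABLE table1 EXCHANGE PARTITION part33 WITH TABLE table1_stage
  INCLUDING INDEXES WITHOUT VALIDATION;
```

Because the exchange only rewrites dictionary metadata, its cost is independent of the three to four hundred million rows per partition; the bulk of the time is the datafile copy between hosts.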
