strings.Split acting weird - go

I am doing a simple strings.Split on a date.
The format is 2015-10-04
month := strings.Split(date, "-")
The output is [2015 10 03].
If I do month[0] it returns 2015, but when I do month[1] I get:
panic: runtime error: index out of range
Though it clearly isn't out of range. Am I using it wrong? Any idea what is going on?

Here's a complete working example:
package main

import "strings"

func main() {
    date := "2015-01-02"
    month := strings.Split(date, "-")
    println(month[0])
    println(month[1])
    println(month[2])
}
Output:
2015
01
02
Playground
Perhaps you're not using the correct "dash" character? There are lots:
+-------+--------+
| glyph | code   |
+-------+--------+
| -     | U+002D |
| ֊     | U+058A |
| ־     | U+05BE |
| ᠆     | U+1806 |
| ‐     | U+2010 |
| ‑     | U+2011 |
| ‒     | U+2012 |
| –     | U+2013 |
| —     | U+2014 |
| ―     | U+2015 |
| ⁻     | U+207B |
| ₋     | U+208B |
| −     | U+2212 |
| ﹘     | U+FE58 |
| ﹣     | U+FE63 |
| -     | U+FF0D |
+-------+--------+
Here is the same code with a different input string, which panics with the same index out of range error:
package main

import "strings"

func main() {
    date := "2015‐01‐02" // U+2010 dashes
    month := strings.Split(date, "-")
    println(month[0])
    println(month[1])
    println(month[2])
}
Playground.
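If you're not sure which dash your input actually contains, a quick check (a minimal sketch, reusing the U+2010 string from the example above) is to print the string with %+q or range over its runes:

package main

import "fmt"

func main() {
    date := "2015‐01‐02" // same U+2010 dashes as above
    fmt.Printf("%+q\n", date) // non-ASCII runes are printed as \u escapes
    for _, r := range date {
        fmt.Printf("%c %U\n", r, r) // each rune with its Unicode code point
    }
}

A U+2010 hyphen shows up immediately in the %U output, while an ordinary ASCII dash prints as U+002D.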

Related

Use AWK with delimiter to print specific columns

My file looks as follows:
+------------------------------------------+---------------+----------------+------------------+------------------+-----------------+
| Message | Status | Adress | Changes | Test | Calibration |
|------------------------------------------+---------------+----------------+------------------+------------------+-----------------|
| Hello World | Active | up | 1 | up | done |
| Hello Everyone Here | Passive | up | 2 | down | none |
| Hi there. My name is Eric. How are you? | Down | up | 3 | inactive | done |
+------------------------------------------+---------------+----------------+------------------+------------------+-----------------+
+----------------------------+---------------+----------------+------------------+------------------+-----------------+
| Message | Status | Adress | Changes | Test | Calibration |
|----------------------------+---------------+----------------+------------------+------------------+-----------------|
| What's up? | Active | up | 1 | up | done |
| Hi. I'm Otilia | Passive | up | 2 | down | none |
| Hi there. This is Marcus | Up | up | 3 | inactive | done |
+----------------------------+---------------+----------------+------------------+------------------+-----------------+
I want to extract a specific column using awk.
I can use cut to do it; however, because the width of each table varies depending on how many characters are in each column, I don't get the desired output.
cat File.txt | cut -c -44
+------------------------------------------+
| Message |
|------------------------------------------+
| Hello World |
| Hello Everyone Here |
| Hi there. My name is Eric. How are you? |
+------------------------------------------+
+----------------------------+--------------
| Message | Status
|----------------------------+--------------
| What's up? | Active
| Hi. I'm Otilia | Passive
| Hi there. This is Marcus | Up
+----------------------------+--------------
or
cat File.txt | cut -c 44-60
+---------------+
| Status |
+---------------+
| Active |
| Passive |
| Down |
+---------------+
--+--------------
| Adress
--+--------------
| up
| up
| up
--+--------------
I tried using awk, but I don't know how to specify two different delimiters so that all the lines are handled.
cat File.txt | awk 'BEGIN {FS="|";}{print $2,$3}'
Message Status
------------------------------------------+---------------+----------------+------------------+------------------+-----------------
Hello World Active
Hello Everyone Here Passive
Hi there. My name is Eric. How are you? Down
Message Status
----------------------------+---------------+----------------+------------------+------------------+-----------------
What's up? Active
Hi. I'm Otilia Passive
Hi there. This is Marcus Up
The output I'm looking for:
+------------------------------------------+
| Message |
|------------------------------------------+
| Hello World |
| Hello Everyone Here |
| Hi there. My name is Eric. How are you? |
+------------------------------------------+
+----------------------------+
| Message |
|----------------------------+
| What's up? |
| Hi. I'm Otilia |
| Hi there. This is Marcus |
+----------------------------+
or
+------------------------------------------+---------------+
| Message | Status |
|------------------------------------------+---------------+
| Hello World | Active |
| Hello Everyone Here | Passive |
| Hi there. My name is Eric. How are you? | Down |
+------------------------------------------+---------------+
+----------------------------+---------------+
| Message | Status |
|----------------------------+---------------+
| What's up? | Active |
| Hi. I'm Otilia | Passive |
| Hi there. This is Marcus | Up |
+----------------------------+---------------+
or random other columns
+------------------------------------------+----------------+------------------+
| Message | Adress | Test |
|------------------------------------------+----------------+------------------+
| Hello World | up | up |
| Hello Everyone Here | up | down |
| Hi there. My name is Eric. How are you? | up | inactive |
+------------------------------------------+----------------+------------------+
+----------------------------+---------------+------------------+
| Message |Adress | Test |
|----------------------------+---------------+------------------+
| What's up? |up | up |
| Hi. I'm Otilia |up | down |
| Hi there. This is Marcus |up | inactive |
+----------------------------+---------------+------------------+
Thanks in advance.
One idea using GNU awk:
awk -v fldlist="2,3" '
BEGIN { fldcnt=split(fldlist,fields,",") } # split fldlist into array fields[]
{ split($0,arr,/[|+]/,seps) # split current line on dual delimiters "|" and "+"
for (i=1;i<=fldcnt;i++) # loop through our array of fields (fldlist)
printf "%s%s", seps[fields[i]-1], arr[fields[i]] # print leading separator/delimiter and field
printf "%s\n", seps[fields[fldcnt]] # print trailing separator/delimiter and terminate line
}
' File.txt
NOTES:
requires GNU awk for the 4th argument to the split() function (seps == array of separators; see gawk string functions for details, and the short demo after these notes)
assumes our field delimiters (|, +) do not show up as part of the data
the input variable fldlist is a comma-delimited list of columns that mimics what would be passed to cut (e.g., when a line starts with a delimiter, field #1 is blank)
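If the seps array is new to you, here's a quick standalone check (GNU awk only; the sample string is made up just for illustration):
echo 'x|y+z' | gawk '{
    n = split($0, arr, /[|+]/, seps)    # 4th arg collects the matched separators
    for (i = 1; i <= n; i++)
        printf "arr[%d]=\"%s\"  seps[%d]=\"%s\"\n", i, arr[i], i, seps[i]
}'
This prints arr[1]="x" with seps[1]="|", arr[2]="y" with seps[2]="+", and arr[3]="z" with an empty seps[3], which is exactly the field/separator pairing the main script relies on.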
For fldlist="2,3" this generates:
+------------------------------------------+---------------+
| Message | Status |
|------------------------------------------+---------------+
| Hello World | Active |
| Hello Everyone Here | Passive |
| Hi there. My name is Eric. How are you? | Down |
+------------------------------------------+---------------+
+----------------------------+---------------+
| Message | Status |
|----------------------------+---------------+
| What's up? | Active |
| Hi. I'm Otilia | Passive |
| Hi there. This is Marcus | Up |
+----------------------------+---------------+
For fldlist="2,4,6" this generates:
+------------------------------------------+----------------+------------------+
| Message | Adress | Test |
|------------------------------------------+----------------+------------------+
| Hello World | up | up |
| Hello Everyone Here | up | down |
| Hi there. My name is Eric. How are you? | up | inactive |
+------------------------------------------+----------------+------------------+
+----------------------------+----------------+------------------+
| Message | Adress | Test |
|----------------------------+----------------+------------------+
| What's up? | up | up |
| Hi. I'm Otilia | up | down |
| Hi there. This is Marcus | up | inactive |
+----------------------------+----------------+------------------+
For fldlist="4,3,2" this generates:
+----------------+---------------+------------------------------------------+
| Adress | Status | Message |
+----------------+---------------|------------------------------------------+
| up | Active | Hello World |
| up | Passive | Hello Everyone Here |
| up | Down | Hi there. My name is Eric. How are you? |
+----------------+---------------+------------------------------------------+
+----------------+---------------+----------------------------+
| Adress | Status | Message |
+----------------+---------------|----------------------------+
| up | Active | What's up? |
| up | Passive | Hi. I'm Otilia |
| up | Up | Hi there. This is Marcus |
+----------------+---------------+----------------------------+
Say that again? (fldlist="3,3,3"):
+---------------+---------------+---------------+
| Status | Status | Status |
+---------------+---------------+---------------+
| Active | Active | Active |
| Passive | Passive | Passive |
| Down | Down | Down |
+---------------+---------------+---------------+
+---------------+---------------+---------------+
| Status | Status | Status |
+---------------+---------------+---------------+
| Active | Active | Active |
| Passive | Passive | Passive |
| Up | Up | Up |
+---------------+---------------+---------------+
And if you make the mistake of trying to print the '1st' column, i.e., fldlist="1":
+
|
|
|
|
|
+
+
|
|
|
|
|
+
If GNU awk is available, please try markp-fuso's nice solution.
If not, here is a POSIX-compliant alternative:
#!/bin/bash
# define bash variables
cols=(2 3 6)                            # bash array of desired columns
col_list=$(IFS=,; echo "${cols[*]}")    # create a csv string

awk -v cols="$col_list" '
NR==FNR {
    if (match($0, /^[|+]/)) {           # the record contains a table
        if (match($0, /^[|+]-/))        # horizontally ruled line
            n = split($0, a, /[|+]/)    # split into columns
        else                            # "cell" line
            n = split($0, a, /\|/)
        len = 0
        for (i = 1; i < n; i++) {
            len += length(a[i]) + 1     # accumulated column position
            pos[FNR, i] = len
        }
    }
    next
}
{
    n = split(cols, a, /,/)             # split the variable `cols` on comma into an array
    for (i = 1; i <= n; i++) {
        col = a[i]
        if (pos[FNR, col] && pos[FNR, col+1]) {
            printf("%s", substr($0, pos[FNR, col], pos[FNR, col + 1] - pos[FNR, col]))
        }
    }
    print(substr($0, pos[FNR, col + 1], 1))
}
' file.txt file.txt
Result with cols=(2 3 6) as shown above:
+---------------+----------------+-----------------+
| Status | Adress | Calibration |
+---------------+----------------+-----------------|
| Active | up | done |
| Passive | up | none |
| Down | up | done |
+---------------+----------------+-----------------+
+---------------+----------------+-----------------+
| Status | Adress | Calibration |
+---------------+----------------+-----------------|
| Active | up | done |
| Passive | up | none |
| Up | up | done |
+---------------+----------------+-----------------+
It detects the column widths in the first pass, then splits each line at those positions in the second pass (which is why file.txt is passed twice).
You can control which columns are printed with the bash array cols assigned at the beginning of the script. Note that with this script column 1 is the first data column (Message), unlike the GNU awk solution above, where field 1 is the empty field before the leading "|". Please assign the array to the list of desired column numbers in increasing order. If you want to use the bash variable in a different way, please let me know.
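For example (assuming the same File.txt and the numbering described above), keeping only the Message and Adress columns would mean changing the array at the top of the script to:
cols=(1 3)    # 1 = Message, 3 = Adress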

Find references to files, recursively

In a project where XML/JS/Java files can contain references to other such files, I'd like to have a quick overview of what has to be carefully checked when one file has been updated.
This means I eventually need to look at all files referencing the modified one, all files referencing those files, and so on (recursively on matched files).
For one level, it's quite simple:
grep -E -l -o --include=*.{xml,js,java} -r "$FILE" . | xargs -n 1 basename
But how can I automate that to match (grand-(grand-))parents?
And how could that be made more readable, for example with a tree structure?
For example, if the file that interests me is called modified.js...
show-referring-files-to modified.js
...I would like an output such as:
some-file-with-ref-to-modified.xml
|__ a-file-referring-to-some-file-with-ref-to-modified.js
another-one-with-ref-to-modified.xml
|__ a-file-referring-to-another-one-with-ref-to-modified.js
|__ a-grand-parent-file-having-ref-to-ref-file.xml
|__ another-file-referring-to-another-one-with-ref-to-modified.js
or any other output (even flat) which allows for quickly checking which files are potentially impacted by a change.
UPDATE -- Results of current proposed answer:
ahmsff.js
|__ahmsff.xml
| |__ahmsd.js
| | |__ahmsd.xml
| | | |__ahmst.xml
| | | | |__BESH.java
| |__ahru.js
| | |__ahru.xml
| | | |__ahrut.xml
| | | | |__ashrba.js
| | | | | |__ashrba.xml
| | | | | | |__STR.java
| | |__ahrufrp.xml
| | | |__ahru.js
| | | | |__ahru.xml
| | | | | |__ahrut.xml
| | | | | | |__ashrba.js
| | | | | | | |__ashrba.xml
| | | | | | | | |__STR.java
| | | | |__ahrufrp.xml
| | | | | |__ahru.js
| | | | | | |__ahru.xml
| | | | | | | |__ahrut.xml
| | | | | | | | |__ashrba.js
| | | | | | | | | |__ashrba.xml
| | | | | | | | | | |__STR.java
| | | | | | |__ahrufrp.xml
(...)
I'd use a shell function (for the recursion) inside a shell script.
Assuming the filenames are unique and have no characters that need escaping in them:
File: /usr/local/bin/show-referring-files-to
#!/bin/sh
get_references() {
    grep -F -l --include=*.{xml,js,java} -r "$1" . | grep -v "$3" | while read -r subfile; do
        # read each line of the grep result into the variable subfile
        subfile="$(basename "$subfile")"
        echo "$2""$subfile"
        get_references "$subfile" ' '"$2" "$3"'\|'"$subfile"
    done
}

while test $# -gt 0; do
    # loop so more than one file can be given as argument to this script
    echo "$1"
    get_references "$1" '|__' "$1"
    shift
done
There are still plenty of possible performance enhancements.
Edit: Added the $3 parameter to prevent an infinite loop.

Laravel - Pivot table - No join throws an error

I have a question regarding how Laravel handles pivot tables.
Summarizing: two models, Project and Stage.
Project
+----+----------+
| id | name |
+----+----------+
| 1 | Project1 |
| 2 | Project2 |
+----+----------+
Stage
+----+--------+
| id | name |
+----+--------+
| 1 | Stage1 |
| 2 | Stage2 |
| 3 | Stage3 |
+----+--------+
And a pivot table
+----+------------+----------+------------+-----------+
| id | project_id | stage_id | date | info |
+----+------------+----------+------------+-----------+
| 1 | 1 | 1 | 2014-12-20 | Moreinfo1 |
| 2 | 1 | 2 | 2014-12-21 | Moreinfo2 |
| 3 | 2 | 1 | 2014-12-22 | Moreinfo3 |
| 4 | 1 | 3 | 2014-12-23 | Moreinfo4 |
+----+------------+----------+------------+-----------+
I'm showing the info:
+----------+------------+------------+-----------+
| project | last_stage | date | info |
+----------+------------+------------+-----------+
| Project1 | 3 | 2014-12-23 | Moreinfo4 |
| Project2 | 1 | 2014-12-22 | Moreinfo3 |
+----------+------------+------------+-----------+
And everything works fine; however, if I add a new Project (for which there's no info in the pivot table yet), I get the annoying:
Whoops, looks like something went wrong.
Is there any way to indicate that, in case of null, the row should be left empty (without an error)? I'd like to get:
+----------+------------+------------+-----------+
| project | last_stage | date | info |
+----------+------------+------------+-----------+
| Project1 | 3 | 2014-12-23 | Moreinfo4 |
| Project2 | 1 | 2014-12-22 | Moreinfo3 |
| Project3 | | | |
+----------+------------+------------+-----------+
The problem is the call in your view:
{{ $project->stages_accomp()->orderBy('date', 'desc')->first()->pivot->date }}
When you have a project that has no stage assigned, ->first() will return null.
And PHP doesn't like it when you try to access a property of a non-object (in this case null).
You need to add a little check like this:
@if($stage = $project->stages_accomp()->orderBy('date', 'desc')->first())
    {{ $stage->pivot->date }}
@endif
It makes sure first() returned a truthy value (not null) and at the same time assigns that value to the variable $stage.
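As a side note (this was not available when the question was asked): if the project runs on PHP 8.0 or newer, the nullsafe operator expresses the same guard inline:
{{ $project->stages_accomp()->orderBy('date', 'desc')->first()?->pivot?->date }}
With ?->, a missing stage makes the whole chain evaluate to null instead of throwing, so the cell simply renders empty.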

Format text in sphinx table cells

I have a table I am generating in Sphinx for comparing constructs in different languages. I would like the cells to contain code blocks in each language and have them come out looking like code (at least in a monospaced font). What I have so far is:
+-----------------------------+------------------------+
| Haskell | Scala |
+=============================+========================+
| | do var1<- expn1 | | for {var1 <- expn1; |
| | var2 <- expn2 | | var2 <- expn2; |
| | expn3 | | result <- expn3 |
| | | } yield result |
+-----------------------------+------------------------+
| | do var1 <- expn1 | | for {var1 <- expn1; |
| | var2 <- expn2 | | var2 <- expn2; |
| | return expn3 | | } yield expn3 |
+-----------------------------+------------------------+
| | do var1 <- expn1 >> expn2 | | for {_ <- expn1; |
| | return expn3 | | var1 <- expn2 |
| | | } yield expn3 |
+-----------------------------+------------------------+
This at least preserves line breaks, but it comes out in the same font as the rest of the document, which is a little annoying.
Is there any way to convert the cells to some better format?
Did you try using the .. code-block:: directive?
This works fine on my PC using Sphinx 1.4.1:
+----------------------------------+----------------------------------+
| Tweedledee | Tweedledum |
+----------------------------------+----------------------------------+
| .. code-block:: c | .. code-block:: c |
| :caption: foo.c | :caption: bar.c |
| | |
| extern int bar(int y); | extern int foo(int x); |
| int foo(int x) | int bar(int y) |
| { | { |
| return x > 0 ? bar(x-1)+1 | return y > 0 ? foo(x-1)*2 |
| : 0; | : 0; |
| } | } |
+----------------------------------+----------------------------------+

Linux - Postgres psql retrieving undesired table

I've got the following problem:
There is a Postgres database from which I need to get data, via a Nagios Linux distribution.
My intention is to have the result of a SELECT saved to a .txt file, which would then be sent to me by email using mutt.
Until now, I've done:
#!/bin/sh
psql -d roaming -U thdroaming -o saida.txt << EOF
\d
\pset border 2
SELECT central, imsi, mapver, camel, nrrg, plmn, inoper, natms, cba, cbaz, stall, ownms, imsi_translation, forbrat FROM vw_erros_mgisp_totalizador
EOF
My problem is:
The .txt file "saida.txt" also contains info about the database, as follows:
List of relations
 Schema  |               Name               |   Type    |   Owner
---------+----------------------------------+-----------+------------
 public  | apns                             | table     | jmsilva
 public  | config_imsis_centrais            | table     | thdroaming
 public  | config_imsis_sgsn                | table     | postgres
(3 rows)
+---------+---------+----------+---------+---------+--------+------------+-------+---------+----------+-------+-------+------------------+-----------+
| central | imsi | mapver | camel | nrrg | plmn | inoper | natms | cba | cbaz | stall | ownms | imsi_translation | forbrat |
+---------+---------+----------+---------+---------+--------+------------+-------+---------+----------+-------+-------+------------------+-----------+
| MCTA02 | 20210 | | | | | INOPER-127 | | | | | | | |
| MCTA02 | 20404 | | | | | INOPER-127 | | | | | | | |
| MCTA02 | 20408 | | | | | INOPER-127 | | | | | | | |
| MCTA02 | 20412 | | | | | INOPER-127 | | | | | | | |
.
.
.
How could I keep the first table from being written to the .txt?
Remove the '\d' portion of the script, which causes the listing of the tables in the DB that you see at the top of your output. So your script will become:
#!/bin/sh
psql -d roaming -U thdroaming -o saida.txt << EOF
\pset border 2
SELECT central, imsi, mapver, camel, nrrg, plmn, inoper, natms, cba, cbaz, stall, ownms, imsi_translation, forbrat FROM vw_erros_mgisp_totalizador
EOF
To get the output to appear CSV formatted in a file named /tmp/output.csv, you can do the following:
#!/bin/sh
psql -d roaming -U thdroaming -o saida.txt << EOF
\pset border 2
COPY (SELECT central, imsi, mapver, camel, nrrg, plmn, inoper, natms, cba, cbaz, stall, ownms, imsi_translation, forbrat FROM vw_erros_mgisp_totalizador) TO '/tmp/output.csv' WITH (FORMAT CSV)
EOF
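For the emailing step mentioned in the question, a minimal sketch with mutt could look like this (the subject and recipient address are placeholders):
mutt -s "Roaming report" -a /tmp/output.csv -- someone@example.com < /dev/null
Here -a attaches the CSV, -- separates the attachment list from the recipient addresses, and the empty stdin gives an empty message body.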
