In Go, is it possible to get the root directory of a path so that
foo/bar/file.txt
returns foo? I know about path/filepath, but
package main

import (
    "fmt"
    "path/filepath"
)

func main() {
    parts := filepath.SplitList("foo/bar/file.txt")
    fmt.Println(parts[0])
    fmt.Println(len(parts))
}
prints foo/bar/file.txt and 1 whereas I would have expected foo and 3.
Simply use strings.Split():
s := "foo/bar/file.txt"
parts := strings.Split(s, "/")
fmt.Println(parts[0], len(parts))
fmt.Println(parts)
Output (try it on the Go Playground):
foo 3
[foo bar file.txt]
Note:
If you want to split by the path separator of the current OS, use os.PathSeparator as the separator:
parts := strings.Split(s, string(os.PathSeparator))
filepath.SplitList() splits multiple joined paths (such as the entries of a PATH environment variable) into separate paths. It does not split one path into folders and a file name. For example:
fmt.Println("On Unix:", filepath.SplitList("/a/b/c:/usr/bin"))
Outputs:
On Unix: [/a/b/c /usr/bin]
Note that if you just need the first part, strings.SplitN is at least 10 times faster in my testing:
package main

import "strings"

func main() {
    parts := strings.SplitN("foo/bar/file.txt", "/", 2)
    println(parts[0] == "foo")
}
https://golang.org/pkg/strings#SplitN
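The "at least 10 times" figure comes from my own quick measurement; if you want to verify it yourself, here is a minimal benchmark sketch (the package and file names are arbitrary choices of mine; save it as e.g. split_test.go and run go test -bench=.):

package split

import (
    "strings"
    "testing"
)

// BenchmarkSplit splits the whole path into all of its parts.
func BenchmarkSplit(b *testing.B) {
    for i := 0; i < b.N; i++ {
        _ = strings.Split("foo/bar/file.txt", "/")
    }
}

// BenchmarkSplitN stops after the first separator, so it does less work.
func BenchmarkSplitN(b *testing.B) {
    for i := 0; i < b.N; i++ {
        _ = strings.SplitN("foo/bar/file.txt", "/", 2)
    }
}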
Related
How to get the correct number of fields when using NewReader?
package main

import (
    "encoding/csv"
    "fmt"
    "log"
    "strings"
)

func main() {
    parser := csv.NewReader(strings.NewReader(`||""FOO""||`))
    parser.Comma = '|'
    parser.LazyQuotes = true
    record, err := parser.Read()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("record length: %v\n", len(record))
}
https://go.dev/play/p/gg-KYRciWFH
It should return 5, but instead I'm getting 3:
record length: 3
Program exited.
EDIT
I'm actually working with a big CSV file containing many double quotes.
After examining your code, I decided to modify it slightly and then print the results:
package main

import (
    "encoding/csv"
    "fmt"
    "log"
    "strings"
)

func main() {
    parser := csv.NewReader(strings.NewReader(`x||""FOO""|x|x\n`))
    parser.Comma = '|'
    parser.LazyQuotes = true
    record, err := parser.Read()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("record length: %v, Data: %v\n", len(record), strings.Join(record, ", "))
}
When you run this, the data is printed as x, , "FOO"||x|x\n". My thought is that when you end your entry with two double quotes, the parser assumes the string is still being quoted and therefore lumps the rest of the line into the third entry. This looks like a bug in how lazy quoting works in the csv package; however, when you examine the documentation for LazyQuotes, you'll see this:
If LazyQuotes is true, a quote may appear in an unquoted field and a non-doubled quote may appear in a quoted field.
This doesn't mention anything about finding doubled quotes inside doubled quotes. To fix this, you should either remove the quotes altogether or replace each pair of double quotes ("") with a single double quote (").
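As a sketch of that second option (the preprocessing step is my own suggestion, not something encoding/csv does for you), you could collapse the doubled quotes before handing the data to the reader:

package main

import (
    "encoding/csv"
    "fmt"
    "log"
    "strings"
)

func main() {
    raw := `||""FOO""||`
    // Collapse each "" pair into a single " so the third field is no longer
    // treated as an unterminated quoted field.
    cleaned := strings.ReplaceAll(raw, `""`, `"`)

    parser := csv.NewReader(strings.NewReader(cleaned))
    parser.Comma = '|'
    parser.LazyQuotes = true

    record, err := parser.Read()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("record length: %v\n", len(record)) // now prints 5
}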
One other thing you might consider would be using the gocsv package. I've worked with this package in the past and it's reasonably stable. I'm not sure how it would respond to this specific issue, but it might be worth your time checking it out.
Note:
The encoding/csv package implements the RFC 4180 standard. If you have such input, that's not an RFC 4180 compliant CSV file and encoding/csv will not parse it properly.
You're misusing the quotes. Quoting a single field FOO looks like this:
parser := csv.NewReader(strings.NewReader(`||"FOO"||`))
If you want the field to hold the value "FOO" (including the quote characters), you have to double each quote inside a quoted field, so it should be:
parser := csv.NewReader(strings.NewReader(`||"""FOO"""||`))
This will output 5. Try it on the Go Playground.
What you have is this:
parser := csv.NewReader(strings.NewReader(`||""FOO""||`))
Since the second " character is not followed by a separator character, the field is not interrupted and the rest is processed as the content of the quoted field (which will terminate at the end of the line).
If you print the record:
fmt.Println(record)
fmt.Printf("%#v", record)
Output will be (try it on the Go Playground):
[ "FOO"||]
[]string{"", "", "\"FOO\"||"}
Quotes are part of the CSV format. There is a problem with how go/csv handles the quote escaping here; you can try something like this:
package main

import (
    "encoding/csv"
    "fmt"
    "log"
    "strings"
)

func main() {
    parser := csv.NewReader(strings.NewReader(`||FOO||`))
    parser.Comma = '|'
    parser.LazyQuotes = true
    record, err := parser.Read()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("record length: %v\n", len(record))
    fmt.Println(strings.Join(record, " /SEP/ "))
}
or like this:
package main

import (
    "encoding/csv"
    "fmt"
    "log"
    "strings"
)

func main() {
    parser := csv.NewReader(strings.NewReader(`||"""FOO"""||`))
    parser.Comma = '|'
    parser.LazyQuotes = true
    record, err := parser.Read()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("record length: %v\n", len(record))
    fmt.Println(strings.Join(record, " SEP "))
}
I am trying to obtain the first directory in a URL-like string such as "/blog/:year/:daynum/:postname". I thought splitting it and then retrieving the first directory would be that simple, but the result comes back wrapped in square brackets even though it's not supposed to be a slice. How can I get that first directory? (I am guaranteed that the string starts with a "/" followed by a valid directory designation, and that it contains both a leading directory and a string using those permalink properties.)
What's the best way to parse out that first directory?
package main

import (
    "fmt"
    "strings"
)

// firstDir retrieves the first directory in the URL-like
// string passed in.
func firstDir(permalink string) string {
    split := strings.Split(permalink, "/")
    return string(fmt.Sprint((split[0:2])))
}

func main() {
    permalink := "/blog/:year/:daynum/:postname"
    dir := firstDir(permalink)
    fmt.Printf("leading dir is: %s.", dir)
    // Prints NOT "blog" but "[ blog]".
}
Since you said you are guaranteed that the string starts with a "/" followed by a valid directory designation, you can simply use split[1] to get the root directory (index 0 holds the empty string that comes before the leading "/").
package main

import (
    "fmt"
    "os"
    "strings"
)

func firstDir(permalink string) string {
    split := strings.Split(permalink, string(os.PathSeparator))
    return split[1]
}

func main() {
    permalink := "/blog/:year/:daynum/:postname"
    dir := firstDir(permalink)
    fmt.Printf("leading dir is: %s.", dir)
    // Prints "blog".
}
https://go.dev/play/p/hCHnrDIsWYE
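Note that os.PathSeparator is '/' only on Unix-like systems; on Windows it is '\' and would not split a URL-style permalink at all. Since the input here is always slash-separated, a variation of the same idea (mine, not part of the original answer) is to split on "/" directly:

package main

import (
    "fmt"
    "strings"
)

// firstDir returns the first path segment of a slash-separated permalink.
// It relies on the question's guarantee that the string starts with "/".
func firstDir(permalink string) string {
    // SplitN stops early: ["", "blog", ":year/:daynum/:postname"]
    parts := strings.SplitN(permalink, "/", 3)
    return parts[1]
}

func main() {
    fmt.Println(firstDir("/blog/:year/:daynum/:postname")) // blog
}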
Is there a function in the Go standard library that lets me do this:
a = 'www.my.com/your/stuff'
b = 'www.my.com/your/stuff/123/4'
function(b,a) // /123/4
or
function(URL(b),URL(a)) // /123/4
The reverse order probably also has a defined result in this case:
function(a,b) // error ? or ../../
I'm aware that I can use the path package for this, but it does not work in many cases where there are query params, file extensions, etc.
Basically, I'm looking for a path.resolve counterpart for URLs.
It turns out that the path/filepath package can do this for you. If you ignore the fact that these are URLs and instead treat them like paths, you can use filepath.Rel():
package main

import (
    "fmt"
    "path/filepath"
)

func main() {
    base := "www.my.com/your/stuff"
    target := "www.my.com/your/stuff/123/4"
    rel, _ := filepath.Rel(base, target)
    fmt.Println(rel) // prints "123/4"
}
Playground: https://play.golang.org/p/nnF9zfFAFfc
If you want to treat these paths as actual URLs, you should probably use the net/url package to first parse the path as a URL, then extract the path and use filepath.Rel() on that. This allows you to properly deal with things like queries in the URL string, which would trip up filepath, like so:
package main

import (
    "fmt"
    "net/url"
    "path/filepath"
)

func main() {
    url1, _ := url.Parse("http://www.my.com/your/stuff")
    url2, _ := url.Parse("http://www.my.com/your/stuff/123/4?query=test")
    base := url1.Path
    target := url2.Path
    rel, _ := filepath.Rel(base, target)
    fmt.Println(base)   // "/your/stuff"
    fmt.Println(target) // "/your/stuff/123/4"
    fmt.Println(rel)    // "123/4"
}
Playground: https://play.golang.org/p/gnZfk0t8GOZ
As a bonus, filepath.Rel() is smart enough to handle relative paths in the other direction, too:
rel, _ = filepath.Rel(target, base) // rel is now "../.."
I have a string that is comma separated, so it could be
test1, test2, test3 or test1,test2,test3 or test1, test2, test3.
I currently split this in Go with strings.Split(s, ","), but now I have a []string whose elements can contain an arbitrary number of whitespace characters.
How can I easily trim them off? What is best practice here?
This is my current code
var property = os.Getenv(env.templateDirectories)
if property != "" {
    var dirs = strings.Split(property, ",")
    for index, ele := range dirs {
        dirs[index] = strings.TrimSpace(ele)
    }
    return dirs
}
I come from Java and assumed there would be map/reduce-style functionality in Go as well, hence the question.
You can use strings.TrimSpace in a loop. To update the slice in place (which also preserves the order), loop over the indexes rather than the values:
Go Playground Example
EDIT: To see the code without the click:
package main

import (
    "fmt"
    "strings"
)

func main() {
    input := "test1, test2, test3"
    slc := strings.Split(input, ",")
    for i := range slc {
        slc[i] = strings.TrimSpace(slc[i])
    }
    fmt.Println(slc)
}
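Since you mentioned map/reduce: the standard library has no generic map helper for slices, but with Go 1.18+ generics you can write one yourself. A minimal sketch (the Map name and signature are my own, not a standard API), which in practice is no more idiomatic than the plain loop above:

package main

import (
    "fmt"
    "strings"
)

// Map applies f to every element of in and returns a new slice with the results.
func Map[T, U any](in []T, f func(T) U) []U {
    out := make([]U, len(in))
    for i, v := range in {
        out[i] = f(v)
    }
    return out
}

func main() {
    dirs := strings.Split("test1, test2, test3", ",")
    fmt.Println(Map(dirs, strings.TrimSpace)) // [test1 test2 test3]
}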
Easy way without looping
test := "2 , 123, 1"
result := strings.Split(strings.ReplaceAll(test, " ", ""), ",")
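Be aware that this removes every space, including spaces inside a value, so it is only safe when the values themselves never contain spaces. A quick sketch of the difference (the input string is my own example):

package main

import (
    "fmt"
    "strings"
)

func main() {
    test := "foo bar, baz"

    // Removing all spaces also destroys the space inside "foo bar".
    fmt.Println(strings.Split(strings.ReplaceAll(test, " ", ""), ",")) // [foobar baz]

    // Trimming each element keeps the inner space intact.
    parts := strings.Split(test, ",")
    for i := range parts {
        parts[i] = strings.TrimSpace(parts[i])
    }
    fmt.Println(parts) // [foo bar baz]
}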
The encoding/csv package can handle this:
package main

import (
    "encoding/csv"
    "fmt"
    "strings"
)

func main() {
    for _, each := range []string{
        "test1, test2, test3", "test1, test2, test3", "test1,test2,test3",
    } {
        r := csv.NewReader(strings.NewReader(each))
        r.TrimLeadingSpace = true
        s, e := r.Read()
        if e != nil {
            panic(e)
        }
        fmt.Printf("%q\n", s)
    }
}
https://golang.org/pkg/encoding/csv#Reader.TrimLeadingSpace
If you already use regexp, maybe you can split using regular expressions:
regexp.MustCompile(`\s*,\s*`).Split(test, -1)
This solution is probably slower than the standard Split + TrimSpace, but it is more flexible. For example, if you want to skip empty fields you can use:
regexp.MustCompile(`(\s*,\s*)+`).Split(test, -1)
or, to allow multiple separators:
regexp.MustCompile(`\s*[,;]\s*`).Split(test, -1)
You can test it in the go playground.
I need some help in Go to write a program that can search through a given directory and its subdirectories to look for a particular word in each of them.
This is what I have so far to list the directories and save them as an array. Now I want to check each of them to see if it has children; if yes, I should open it until I reach the last level of the tree.
package main

import (
    "fmt"
    "os"
)

func main() {
    d, err := os.Open("/Perkins")
    // fmt.Println(d.Readdirnames(-1))
    y, err := d.Readdirnames(-1)
    fmt.Println(y)
    for i := 0; i < len(y); i++ {
        if y[i] != " " {
            Folders := y[i]
            temp, err := os.Open("/"Folders) // how do I pass the array element as a path?
            fmt.Println(temp)
            fmt.Println(err)
        }
    }
}
Note: "/"Folders won't compile; string concatenation in Go is written "/" + Folders.
In your case, this should work better:
temp, err := os.Open("/Perkins/" + Folders)
(even though Folders is not a good name; subfolder would be more appropriate)
A more efficient way, as chendesheng commented (see the answer below), would be to use filepath.Walk:
package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    filepath.Walk("/Perkins", func(path string, info os.FileInfo, err error) error {
        fmt.Println(path)
        return nil
    })
}
That will list all the files, but you can combine it with a check that filters them, for example:
matched, err := filepath.Match("*.mp3", fi.Name())
You can then ignore the files you don't want and proceed only with the ones matching your pattern.
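Putting those pieces together, here is a minimal sketch of the original goal, walking the tree and reporting files that contain a given word (the root path and the search word are placeholders of mine):

package main

import (
    "bytes"
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    root := "/Perkins"        // directory to search; placeholder
    word := []byte("example") // word to look for; placeholder

    err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return err
        }
        if info.IsDir() {
            return nil // keep descending; nothing to read for a directory
        }
        data, readErr := os.ReadFile(path)
        if readErr != nil {
            return readErr
        }
        if bytes.Contains(data, word) {
            fmt.Println("found in:", path)
        }
        return nil
    })
    if err != nil {
        fmt.Println("walk error:", err)
    }
}

os.ReadFile requires Go 1.16 or newer; on older versions ioutil.ReadFile does the same job.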
I think you can use filepath.Walk
Walk walks the file tree rooted at root, calling walkFn for each file
or directory in the tree, including root. All errors that arise
visiting files and directories are filtered by walkFn. The files are
walked in lexical order, which makes the output deterministic but
means that for very large directories Walk can be inefficient. Walk
does not follow symbolic links.
Maybe you need this? http://golang.org/pkg/io/ioutil/#ReadDir
Then check each entry's type via http://golang.org/pkg/os/#FileInfo and call your function recursively if it is a folder.
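A minimal sketch of that recursive approach, using ioutil.ReadDir as linked above (the walkDir name and the printing are my own choices):

package main

import (
    "fmt"
    "io/ioutil"
    "path/filepath"
)

// walkDir prints every regular file under dir by recursing into subdirectories.
func walkDir(dir string) error {
    entries, err := ioutil.ReadDir(dir)
    if err != nil {
        return err
    }
    for _, fi := range entries {
        full := filepath.Join(dir, fi.Name())
        if fi.IsDir() {
            if err := walkDir(full); err != nil {
                return err
            }
            continue
        }
        fmt.Println(full)
    }
    return nil
}

func main() {
    if err := walkDir("/Perkins"); err != nil {
        fmt.Println(err)
    }
}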