golang dynamic progressbar - go

I'm trying to download a file with Go.
Downloading the file works fine. Then I use cheggaaa's progressbar library, but I can't make the bar dynamic; it only runs after the download has finished.
How can I make a dynamic progress bar?
My code is below:
package main
import (
"flag"
"fmt"
"io"
"net/http"
"net/url"
"os"
"strings"
"github.com/cheggaaa/pb"
"time"
)
/*
usage = usage text
version = current number
help use Sprintf
*cliUrl from cmd
*cliVersion from cmd
*cliHelp * from cmd
*/
var (
usage = "Usage: ./gofret -url=http://some/do.zip"
version = "Version: 0.1"
help = fmt.Sprintf("\n\n %s\n\n\n %s", usage, version)
cliUrl *string
cliVersion *bool
cliHelp *bool
)
func init() {
/*
if *cliUrl != "" {
fmt.Println(*cliUrl)
}
./gofret -url=http://somesite.com/somefile.zip
./gofret -url=https://github.com/aligoren/syspy/archive/master.zip
*/
cliUrl = flag.String("url", "", usage)
/*
else if *cliVersion{
fmt.Println(flag.Lookup("version").Usage)
}
./gofret -version
*/
cliVersion = flag.Bool("version", false, version)
/*
if *cliHelp {
fmt.Println(flag.Lookup("help").Usage)
}
./gofret -help
*/
cliHelp = flag.Bool("help", false, help)
}
func main() {
/*
Parse all flags
*/
flag.Parse()
if *cliUrl != "" {
fmt.Println("Downloading file")
/* parse url from *cliUrl */
fileUrl, err := url.Parse(*cliUrl)
if err != nil {
panic(err)
}
/* get path from *cliUrl */
filePath := fileUrl.Path
/*
separate the file.
http://+site.com/+(file.zip)
*/
segments := strings.Split(filePath, "/")
/*
file.zip: the filename is the last segment (length - 1)
*/
fileName := segments[len(segments)-1]
/*
Create new file.
Filename from fileName variable
*/
file, err := os.Create(fileName)
if err != nil {
fmt.Println(err)
panic(err)
}
defer file.Close()
/*
check status and CheckRedirect
*/
checkStatus := http.Client{
CheckRedirect: func(r *http.Request, via []*http.Request) error {
r.URL.Opaque = r.URL.Path
return nil
},
}
/*
Get Response: 200 OK?
*/
response, err := checkStatus.Get(*cliUrl)
if err != nil {
fmt.Println(err)
panic(err)
}
defer response.Body.Close()
fmt.Println(response.Status) // Example: 200 OK
/*
fileSize example: 12572 bytes
*/
fileSize, err := io.Copy(file, response.Body)
/*
the progress bar only runs after the download :(
*/
var countSize int = int(fileSize/1000)
count := countSize
bar := pb.StartNew(count)
for i := 0; i < count; i++ {
bar.Increment()
time.Sleep(time.Millisecond)
}
bar.FinishPrint("The End!")
if err != nil {
panic(err)
}
fmt.Printf("%s with %v bytes downloaded", fileName, count)
} else if *cliVersion {
/*
lookup version flag's usage text
*/
fmt.Println(flag.Lookup("version").Usage)
} else if *cliHelp {
/*
lookup help flag's usage text
*/
fmt.Println(flag.Lookup("help").Usage)
} else {
/*
using help's usage text for handling other status
*/
fmt.Println(flag.Lookup("help").Usage)
}
}
While my program is running, it prints:
Downloading file
200 OK
Only after the download does the progress bar run:
6612 / 6612 [=====================================================] 100.00 % 7s
The End!
master.zip with 6612 bytes downloaded
My progress bar code is below:
/*
the progress bar only runs after the download :(
*/
var countSize int = int(fileSize/1000)
count := countSize
bar := pb.StartNew(count)
for i := 0; i < count; i++ {
bar.Increment()
time.Sleep(time.Millisecond)
}
bar.FinishPrint("The End!")
How can I solve the progress bar problem?

More goroutines are not needed. Just read through the bar's proxy reader:
// start new bar
bar := pb.New(fileSize).SetUnits(pb.U_BYTES)
bar.Start()
// create proxy reader
rd := bar.NewProxyReader(response.Body)
// and copy from reader
io.Copy(file, rd)
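For completeness, here is roughly how that plugs into the download code from the question. This is only a sketch, assuming cheggaaa/pb v1 and taking the size from response.ContentLength (which can be -1 if the server does not send the header):
// take the expected size from the response header instead of io.Copy's return value
fileSize := response.ContentLength
bar := pb.New64(fileSize).SetUnits(pb.U_BYTES)
bar.Start()
// the proxy reader advances the bar as bytes are read from the body
rd := bar.NewProxyReader(response.Body)
// copy through the proxy reader so the bar updates during the download
written, err := io.Copy(file, rd)
if err != nil {
    panic(err)
}
bar.Finish()
fmt.Printf("%s with %v bytes downloaded\n", fileName, written)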

I wrote the answer below before noticing that, while it's correct in the general case and not tied to any particular progress bar library, this library is designed for exactly this problem: it has specialized support for it and gives an explicit example of downloading a file.
You need to run the download in parallel with updating the progress bar; currently you download the whole file and only then drive the progress bar.
This is a little sloppy, but should get you going in the right direction:
First, grab the expected filesize:
filesize := response.ContentLength
Then start your download in a goroutine:
go func() {
n, err := io.Copy(file, response.Body)
if n != filesize {
log.Fatal("Truncated")
}
if err != nil {
log.Fatalf("Error: %v", err)
}
}()
Then update your progressbar as it goes along:
countSize := int(filesize / 1000)
bar := pb.StartNew(countSize)
var fi os.FileInfo
for fi == nil || fi.Size() < filesize {
fi, _ = file.Stat()
bar.Set(int(fi.Size() / 1000))
time.Sleep(time.Millisecond)
}
bar.FinishPrint("The End!")
Like I say, this is a little sloppy; you probably want to scale the bar better depending on the size of the file, and the log.Fatal calls are ugly. But it handles the core of the issue.
Alternately, you can do this without a goroutine just by writing your own version of io.Copy. Read a block from response.Body, update the progress bar, and then write a block to file. That's arguably better because you can avoid the sleep call.
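A rough sketch of that manual loop is below; copyWithProgress is a made-up helper name, and bar is assumed to be a cheggaaa/pb bar created in byte units and already started:
// copyWithProgress reads a chunk from src, writes it to dst, and advances the
// bar by the number of bytes written, so progress updates while the download
// is still running instead of after it.
func copyWithProgress(dst io.Writer, src io.Reader, bar *pb.ProgressBar) (int64, error) {
    buf := make([]byte, 32*1024)
    var total int64
    for {
        n, readErr := src.Read(buf)
        if n > 0 {
            if _, writeErr := dst.Write(buf[:n]); writeErr != nil {
                return total, writeErr
            }
            total += int64(n)
            bar.Add(n)
        }
        if readErr == io.EOF {
            return total, nil
        }
        if readErr != nil {
            return total, readErr
        }
    }
}
You would call it in place of io.Copy, e.g. copyWithProgress(file, response.Body, bar).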

Actually, you can implement a progress bar yourself with the piece of code below.
func (bar *Bar) Play(cur int64) {
bar.cur = cur
last := bar.percent
bar.percent = bar.getPercent()
if bar.percent != last && bar.percent%2 == 0 {
bar.rate += bar.graph
}
fmt.Printf("\r[%-50s]%3d%% %8d/%d", bar.rate, bar.percent, bar.cur, bar.total)
}
The key here is the escape \r, which rewrites the current progress line in place with the updated value and creates the dynamic effect.
A detailed explanation can be found here.
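The snippet above is only the Play method; a minimal sketch of the surrounding type, so it actually compiles, might look like the following (the field names and the NewOption/Finish helpers are assumptions, not part of the original answer):
// Bar is a tiny hand-rolled progress bar.
type Bar struct {
    percent int64  // progress in percent
    cur     int64  // current value
    total   int64  // total value
    rate    string // the portion of the bar drawn so far
    graph   string // the fill character, e.g. "#"
}

// NewOption initializes the bar with a start and total value.
func (bar *Bar) NewOption(start, total int64) {
    bar.cur = start
    bar.total = total
    if bar.graph == "" {
        bar.graph = "#"
    }
    bar.percent = bar.getPercent()
}

func (bar *Bar) getPercent() int64 {
    return int64(float32(bar.cur) / float32(bar.total) * 100)
}

// Finish ends the line once the bar is done.
func (bar *Bar) Finish() {
    fmt.Println()
}
With that in place, a loop like this produces an in-place, animated bar:
var bar Bar
bar.NewOption(0, 100)
for i := int64(0); i <= 100; i++ {
    bar.Play(i)
    time.Sleep(50 * time.Millisecond)
}
bar.Finish()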

Related

Creating and seeding a torrent in golang

I want to use the Go library https://github.com/anacrolix/torrent to create a torrent, get a magnet link, and seed the torrent. Below you can find the code I wrote. Yet, if I use the magnet link the code generates, I cannot download anything, not even the metainfo.
Am I missing something here?
package main
import (
"log"
"time"
"github.com/anacrolix/torrent"
"github.com/anacrolix/torrent/bencode"
"github.com/anacrolix/torrent/metainfo"
)
var builtinAnnounceList = [][]string{
{"http://p4p.arenabg.com:1337/announce"},
{"udp://tracker.opentrackr.org:1337/announce"},
{"udp://tracker.openbittorrent.com:6969/announce"},
}
func main() {
tmpComment:="Cool torrent description"
tmpCreatedBy:="coolboys"
tmpInfoName:="CoolInfoName"
mi := metainfo.MetaInfo{
AnnounceList: builtinAnnounceList,
}
mi.SetDefaults()
mi.Comment = tmpComment
mi.CreatedBy = tmpCreatedBy
//}
//mi.UrlList = args.Url//???????????
info := metainfo.Info{
PieceLength: 256 * 1024,
}
err := info.BuildFromFilePath("./TorrentFiles")//args.Root)
if err != nil {
log.Fatal(err)
}
info.Name =tmpInfoName
mi.InfoBytes, err = bencode.Marshal(info)
if err != nil {
log.Fatal(err)
}
tmpMagnet:=mi.Magnet(nil,nil)
log.Println("****",tmpMagnet)
//
cfg := torrent.NewDefaultClientConfig()
cfg.Seed = true
mainclient, ncerr := torrent.NewClient(cfg)
if ncerr != nil {
log.Println("new torrent client:", ncerr)
return
}
defer mainclient.Close()
t, _ := mainclient.AddMagnet(tmpMagnet.String())
for {
log.Println("******", t.Seeding())
time.Sleep(8 * time.Second)
}
}
I think you might need to take a closer look at this example. Among other things, I don't see any invocation of t.DownloadAll() to do the actual download or mainclient.WaitAll() to tell you when the downloads are complete.
I corrected the code I wrote, and the corrected code can be found here:
enter link description here
In the original code I should not have used AddMagnet, as it assumes the info is not available locally, which is why it would fail to seed.
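Roughly, the fix is to hand the client the full metainfo instead of re-resolving it from a magnet link. A sketch of that part, using the anacrolix/torrent API with the rest of the setup as in the question (cfg.DataDir is assumed to point at the directory that contains the content being seeded):
// Add the torrent with its info already present, so the client can seed immediately.
t, err := mainclient.AddTorrent(&mi)
if err != nil {
    log.Fatal(err)
}
// GotInfo is already closed here, because the info was supplied with the metainfo.
<-t.GotInfo()
for {
    log.Println("seeding:", t.Seeding())
    time.Sleep(8 * time.Second)
}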

Parsing prometheus metrics from a file and updating counters

I have a Go application that is run periodically by a batch job. Each run, it should read some Prometheus metrics from a file, run its logic, update a success/fail counter, and write the metrics back out to a file.
From looking at How to parse Prometheus data as well as the godocs for prometheus, I'm able to read in the file, but I don't know how to update app_processed_total with the value returned by expfmt.ExtractSamples().
This is what I've done so far. Could someone please tell me how I should proceed from here? How can I convert the Vector I got into a CounterVec?
package main
import (
"fmt"
"net/http"
"strings"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
dto "github.com/prometheus/client_model/go"
"github.com/prometheus/common/expfmt"
"github.com/prometheus/common/model"
)
var (
fileOnDisk = prometheus.NewRegistry()
processedTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
Name: "app_processed_total",
Help: "Number of times ran",
}, []string{"status"})
)
func doInit() {
prometheus.MustRegister(processedTotal)
}
func recordMetrics() {
go func() {
for {
processedTotal.With(prometheus.Labels{"status": "ok"}).Inc()
time.Sleep(5 * time.Second)
}
}()
}
func readExistingMetrics() {
var parser expfmt.TextParser
text := `
# HELP app_processed_total Number of times ran
# TYPE app_processed_total counter
app_processed_total{status="ok"} 300
`
parseText := func() ([]*dto.MetricFamily, error) {
parsed, err := parser.TextToMetricFamilies(strings.NewReader(text))
if err != nil {
return nil, err
}
var result []*dto.MetricFamily
for _, mf := range parsed {
result = append(result, mf)
}
return result, nil
}
gatherers := prometheus.Gatherers{
fileOnDisk,
prometheus.GathererFunc(parseText),
}
gathering, err := gatherers.Gather()
if err != nil {
fmt.Println(err)
}
fmt.Println("gathering: ", gathering)
for _, g := range gathering {
vector, err := expfmt.ExtractSamples(&expfmt.DecodeOptions{
Timestamp: model.Now(),
}, g)
fmt.Println("vector: ", vector)
if err != nil {
fmt.Println(err)
}
// How can I update processedTotal with this new value?
}
}
func main() {
doInit()
readExistingMetrics()
recordMetrics()
http.Handle("/metrics", promhttp.Handler())
http.ListenAndServe("localhost:2112", nil)
}
I believe you would need to use processedTotal.WithLabelValues("ok").Inc() or something similar to that.
The more complete example is here
func ExampleCounterVec() {
httpReqs := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "http_requests_total",
Help: "How many HTTP requests processed, partitioned by status code and HTTP method.",
},
[]string{"code", "method"},
)
prometheus.MustRegister(httpReqs)
httpReqs.WithLabelValues("404", "POST").Add(42)
// If you have to access the same set of labels very frequently, it
// might be good to retrieve the metric only once and keep a handle to
// it. But beware of deletion of that metric, see below!
m := httpReqs.WithLabelValues("200", "GET")
for i := 0; i < 1000000; i++ {
m.Inc()
}
// Delete a metric from the vector. If you have previously kept a handle
// to that metric (as above), future updates via that handle will go
// unseen (even if you re-create a metric with the same label set
// later).
httpReqs.DeleteLabelValues("200", "GET")
// Same thing with the more verbose Labels syntax.
httpReqs.Delete(prometheus.Labels{"method": "GET", "code": "200"})
}
This is taken from the Prometheus examples on GitHub.
To use the value of vector you can do the following:
vectorFloat, err := strconv.ParseFloat(vector[0].Value.String(), 64)
if err != nil {
panic(err)
}
processedTotal.WithLabelValues("ok").Add(vectorFloat)
This assumes you will only ever get a single sample in the vector. The snippet renders the value as a string and then converts it back to a float with the strconv.ParseFloat method.
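Alternatively, instead of going through the string form, the parsed samples already carry their label sets and float values, so you can loop over the vector and restore each one. A short sketch, assuming every sample carries the "status" label used above:
// Restore each parsed sample into the counter, keyed by its "status" label.
for _, sample := range vector {
    status := string(sample.Metric["status"])
    processedTotal.WithLabelValues(status).Add(float64(sample.Value))
}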

How to get a file description (Product name, Original filename, etc.) using golang in Windows?

FileInfo in Go gives the name, modification time, size, etc. I need to get a particular file's description (e.g. Product name, Original filename, etc.) using Go on Windows.
You can use the w32 library for Win32 API calls from Go. No CGo needed.
Here is an example of how you can retrieve all file information through GetFileVersionInfo and VerQueryValue:
package main
import (
"fmt"
"github.com/gonutz/w32/v2"
)
func main() {
const path = `C:\some file`
size := w32.GetFileVersionInfoSize(path)
if size <= 0 {
panic("GetFileVersionInfoSize failed")
}
info := make([]byte, size)
ok := w32.GetFileVersionInfo(path, info)
if !ok {
panic("GetFileVersionInfo failed")
}
fixed, ok := w32.VerQueryValueRoot(info)
if !ok {
panic("VerQueryValueRoot failed")
}
version := fixed.FileVersion()
fmt.Printf(
"file version: %d.%d.%d.%d\n",
version&0xFFFF000000000000>>48,
version&0x0000FFFF00000000>>32,
version&0x00000000FFFF0000>>16,
version&0x000000000000FFFF>>0,
)
translations, ok := w32.VerQueryValueTranslations(info)
if !ok {
panic("VerQueryValueTranslations failed")
}
if len(translations) == 0 {
panic("no translation found")
}
fmt.Println("translations:", translations)
t := translations[0]
// w32.CompanyName simply translates to "CompanyName"
company, ok := w32.VerQueryValueString(info, t, w32.CompanyName)
if !ok {
panic("cannot get company name")
}
fmt.Println("company:", company)
}
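The question specifically asks for the product name and original filename. Assuming the w32 package defines key constants for those fields analogously to w32.CompanyName above (they correspond to the standard "ProductName" and "OriginalFilename" version-info strings), they can be queried the same way:
// Hypothetical continuation: query the fields the question asks about.
product, ok := w32.VerQueryValueString(info, t, w32.ProductName)
if !ok {
    panic("cannot get product name")
}
fmt.Println("product name:", product)
originalName, ok := w32.VerQueryValueString(info, t, w32.OriginalFilename)
if !ok {
    panic("cannot get original filename")
}
fmt.Println("original filename:", originalName)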

Output of GET request different to view source

I'm trying to extract match data from whoscored.com. When I view the source in Firefox, I find on line 816 a big JSON string with the data I want for that match ID. My goal is to eventually get this JSON.
To do this, I've tried to download every page of https://www.whoscored.com/Matches/ID/Live, where ID is the ID of the match. I wrote a little Go program to send a GET request for each ID up to a certain point:
package main
import (
"fmt"
"io/ioutil"
"net/http"
"os"
)
// http://www.whoscored.com/Matches/614052/Live is the match for
// Everton vs Manchester
const match_address = "http://www.whoscored.com/Matches/"
// the max id we get
const max_id = 300
const num_workers = 10
// function that get the bytes of the match id from the website
func match_fetch(matchid int) {
url := fmt.Sprintf("%s%d/Live", match_address, matchid)
resp, err := http.Get(url)
if err != nil {
fmt.Println(err)
return
}
// if we successfully got a response, store the
// body in memory
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
fmt.Println(err)
return
}
// write the body to disk
pwd, _ := os.Getwd()
filepath := fmt.Sprintf("%s/match_data/%d", pwd, matchid)
err = ioutil.WriteFile(filepath, body, 0644)
if err != nil {
fmt.Println(err)
return
}
}
// data type to send to the workers,
// last means this job is the last one
// matchid is the match id to be fetched
// a matchid of -1 means don't fetch a match
type job struct {
last bool
matchid int
}
func create_worker(jobs chan job) {
for {
next_job := <-jobs
if next_job.matchid != -1 {
match_fetch(next_job.matchid)
}
if next_job.last {
return
}
}
}
func main() {
// do the Everton match as a reference
match_fetch(614052)
var joblist [num_workers]chan job
var v int
for i := 0; i < num_workers; i++ {
job_chan := make(chan job)
joblist[i] = job_chan
go create_worker(job_chan)
}
for i := 0; i < max_id; i = i + num_workers {
for index, c := range joblist {
if i+index < max_id {
v = i + index
} else {
v = -1
}
c <- job{false, v}
}
}
for _, c := range joblist {
c <- job{true, -1}
}
}
The code seems to work in that it fills a directory called match_data with HTML. The problem is that this HTML is completely different from what I get in the browser! Here is the section which I think does this (from the body of the GET request of http://www.whoscored.com/Matches/614052/Live):
(function() {
var z="";var b="7472797B766172207868723B76617220743D6E6577204461746528292E67657454696D6528293B766172207374617475733D227374617274223B7661722074696D696E673D6E65772041727261792833293B77696E646F772E6F6E756E6C6F61643D66756E6374696F6E28297B74696D696E675B325D3D22723A222B286E6577204461746528292E67657454696D6528292D74293B646F63756D656E742E637265617465456C656D656E742822696D6722292E7372633D222F5F496E63617073756C615F5265736F757263653F4553324C555243543D363726743D373826643D222B656E636F6465555249436F6D706F6E656E74287374617475732B222028222B74696D696E672E6A6F696E28292B222922297D3B69662877696E646F772E584D4C4874747052657175657374297B7868723D6E657720584D4C48747470526571756573747D656C73657B7868723D6E657720416374697665584F626A65637428224D6963726F736F66742E584D4C4854545022297D7868722E6F6E726561647973746174656368616E67653D66756E6374696F6E28297B737769746368287868722E72656164795374617465297B6361736520303A7374617475733D6E6577204461746528292E67657454696D6528292D742B223A2072657175657374206E6F7420696E697469616C697A656420223B627265616B3B6361736520313A7374617475733D6E6577204461746528292E67657454696D6528292D742B223A2073657276657220636F6E6E656374696F6E2065737461626C6973686564223B627265616B3B6361736520323A7374617475733D6E6577204461746528292E67657454696D6528292D742B223A2072657175657374207265636569766564223B627265616B3B6361736520333A7374617475733D6E6577204461746528292E67657454696D6528292D742B223A2070726F63657373696E672072657175657374223B627265616B3B6361736520343A7374617475733D22636F6D706C657465223B74696D696E675B315D3D22633A222B286E6577204461746528292E67657454696D6528292D74293B6966287868722E7374617475733D3D323030297B706172656E742E6C6F636174696F6E2E72656C6F616428297D627265616B7D7D3B74696D696E675B305D3D22733A222B286E6577204461746528292E67657454696D6528292D74293B7868722E6F70656E2822474554222C222F5F496E63617073756C615F5265736F757263653F535748414E45444C3D313536343032333530343538313538333938362C31373139363833393832313930303534313833392C31333935303737313737393531363432383234342C3132363636222C66616C7365293B7868722E73656E64286E756C6C297D63617463682863297B7374617475732B3D6E6577204461746528292E67657454696D6528292D742B2220696E6361705F6578633A20222B633B646F63756D656E742E637265617465456C656D656E742822696D6722292E7372633D222F5F496E63617073756C615F5265736F757263653F4553324C555243543D363726743D373826643D222B656E636F6465555249436F6D706F6E656E74287374617475732B222028222B74696D696E672E6A6F696E28292B222922297D3B";for (var i=0;i<b.length;i+=2){z=z+parseInt(b.substring(i, i+2), 16)+",";}z = z.substring(0,z.length-1); eval(eval('String.fromCharCode('+z+')'));})();
The reason I think this is the case is that the JavaScript in the page fetches data and edits the DOM into what I see in view source. How can I get Go to run the JavaScript? Is there a library to do this? Better still, could I grab the JSON directly from the servers?
This can be done with https://godoc.org/github.com/sourcegraph/webloop#View.EvaluateJavaScript
Read their main example https://github.com/sourcegraph/webloop
What you need is a "headless browser" in general.
In general it is better to use a web API than to scrape. For example, whoscored themselves use Opta, which you should be able to access directly.
http://www.jokecamp.com/blog/guide-to-football-and-soccer-data-and-apis/#opta

Can I create a function that must only be used with defer?

For example:
package pkg
// Dear user, CleanUp must only be used with defer: defer CleanUp()
func CleanUp() {
// some logic to check if call was deferred
// do tear down
}
And in userland code:
func main() {
pkg.CleanUp() // PANIC, CleanUp must be deferred!
}
But all should be fine if the user runs:
func main() {
defer pkg.CleanUp() // good job, no panic
}
Things I already tried:
func DeferCleanUp() {
defer func() { /* do tear down */ }()
// But then I realized this was exactly the opposite of what I needed
// user doesn't need to call defer CleanUp anymore but...
}
// now if the API is misused it can cause problems too:
defer DeferCleanUp() // a defer inception xD, question remains.
Alright, per the OP's request and just for laughs, I'm posting this hacky approach to solving this by looking at the call stack and applying some heuristics.
DISCLAIMER: Do not use this in real code. I don't think checking deferred is even a good thing.
Also Note: this approach will only work if the executable and the source are on the same machine.
Link to gist: https://gist.github.com/dvirsky/dfdfd4066c70e8391dc5 (this doesn't work in the playground because you can't read the source file there)
package main
import(
"fmt"
"runtime"
"io/ioutil"
"bytes"
"strings"
)
func isDeferred() bool {
// Let's get the caller's name first
var caller string
if fn, _, _, ok := runtime.Caller(1); ok {
caller = function(fn)
} else {
panic("No caller")
}
// Let's peek 2 levels above this - the first level is this function,
// The second is CleanUp()
// The one we want is who called CleanUp()
if _, file, line, ok := runtime.Caller(2); ok {
// now we actually need to read the source file
// This should be cached of course to avoid terrible performance
// I copied this from runtime/debug, so it's a legitimate thing to do :)
data, err := ioutil.ReadFile(file)
if err != nil {
panic("Could not read file")
}
// now let's read the exact line of the caller
lines := bytes.Split(data, []byte{'\n'})
lineText := strings.TrimSpace(string(lines[line-1]))
fmt.Printf("Line text: '%s'\n", lineText)
// Now let's apply some ugly rules of thumb. This is the fragile part
// It can be improved with regex or actual AST parsing, but dude...
return lineText == "}" || // on simple defer this is what we get
!strings.Contains(lineText, caller) || // this handles the case of defer func() { CleanUp() }()
strings.Contains(lineText, "defer ")
} // not ok - means we were not called from at least 3 levels deep
return false
}
func CleanUp() {
if !isDeferred() {
panic("Not Deferred!")
}
}
// This should not panic
func fine() {
defer CleanUp()
fmt.Println("Fine!")
}
// this should not panic as well
func alsoFine() {
defer func() { CleanUp() }()
fmt.Println("Also Fine!")
}
// this should panic
func notFine() {
CleanUp()
fmt.Println("Not Fine!")
}
// Taken from the std lib's runtime/debug:
// function returns, if possible, the name of the function containing the PC.
func function(pc uintptr) string {
fn := runtime.FuncForPC(pc)
if fn == nil {
return ""
}
name := fn.Name()
if lastslash := strings.LastIndex(name, "/"); lastslash >= 0 {
name = name[lastslash+1:]
}
if period := strings.Index(name, "."); period >= 0 {
name = name[period+1:]
}
name = strings.Replace(name, "·", ".", -1)
return name
}
func main(){
fine()
alsoFine()
notFine()
}
