I am pretty new to Rust and I want to time the execution of my program. I searched online, but found nothing so far. After running cargo build, my code is executed like this:
$ ./target/debug/hello_world
Does Cargo have a built-in way to time the execution or do I need to use the command line?
I don't think you can pass a flag to cargo directly to time the execution. The simplest way to do it is to use the time command-line utility:
$ time ./target/debug/hello_world
./target/debug/hello_world 3.02s user 0.18s system 34% cpu 9.177 total
Cargo does have something similar: cargo bench lets you write benchmarks for your program, although it is only available on nightly Rust. It gives you very specific reports about the speed of certain parts of your program. The docs have more details.
$ cargo bench
running 1 test
test bench_xor_1000_ints ... bench: 131 ns/iter (+/- 3)
Cargo does not have a built-in way to time things. You will want to use your operating system's mechanisms. For example, on Linux or macOS, you can probably use time (I use hyperfine myself):
time ./target/debug/hello_world
Of special note is that you have a debug build. This has no optimizations and should not be used for profiling. Instead, you should build your code in release mode:
cargo build --release
and then execute the program in the target/release directory.
Also, you probably do not want to include the time for Cargo itself. That is, do not do either of these:
time cargo run
time cargo run --release
It's not baked into Cargo, but hyperfine is written in Rust and is an alternative to time:
cargo install hyperfine
hyperfine ./target/debug/hello_world
Our Jenkins build has a line like
go test -p=4 -count=1 -timeout=3m -v -coverprofile=coverage.out ./...
Usually this is fine but sometimes it fails with an error like
vet: $WORK/b1704/cache.cover.go:1:1: expected 'package', found 'EOF'
The build slave machine has about 5 GB available at the start of the build, and it didn't appear to drop below 4 GB during the build, although it's possible I missed a transient spike in usage.
This is the only command executing at the time; there is no parallelisation in the build script beyond what go test -p itself does.
Any ideas what could cause an intermittent failure like that?
go version go1.14 linux/arm64
I'm running a project using Go Modules with 1.11.4 on Ubuntu, running in WSL.
My problem is that I'm having trouble getting incremental builds to work as I expect. Perhaps this is due to me misunderstanding how it's supposed to work, but I'd be glad if someone could clarify whether that is the case.
Just as an example, if I do go build ./... then everything is built, as expected.
If I now do go build ./... again without any changes, my expectation was that due to the incremental builds, this time nothing would be built. But it builds everything again. I tried doing go build -i ./... (even though my understanding is that -i isn't needed anymore from 1.10), but the result is the same. This has been puzzling me for some time, as after reading the documentation I indeed expected the go build command to produce incremental builds.
The other day I realized that if instead I do go install ./... first, and then go install ./... again a second time, the second time around nothing is built, as I would expect. If I change a single module and run go install ./... again, then only that module is rebuilt, again what I would expect. So this gives me incremental builds.
So my questions are
1) Did I misunderstand go build ./... and how it handles incremental builds? Do I need to use go install instead?
2) Typically, we build the modules one by one, using the -o flag to specify an output path. Using go install instead, there is no -o option to specify an output path. Is there anything I can do to achieve something similar to -o using go install?
Thanks!
I wanted to debug, so I tried to build a debug version of TensorFlow using the following command:
bazel build --compilation_mode=dbg -s //tensorflow/tools/pip_package:build_pip_package
but it triggers a very long link step in protobuf that has been running for almost a day and still has not finished.
My intention is to build some other packages that TensorFlow uses in debug mode. Could I configure the Bazel build files to build just those packages with debug symbols, separately?
To understand the issue better, try running the neverending action manually:
start the debug build, wait for it to get stuck in the protobuf linking action
interrupt the build (Ctrl+C)
run the build again with the -s flag, so Bazel shows the command line it executes (you could have run step 1 with the -s flag, but then there's a lot more output and it's harder to find the right information)
interrupt the build again
cd into the directory shown by the command's output and set the environment variables it lists
try running the command that failed (you may need to change the output paths because they are sometimes not user-writable) and see if it still never finishes
What you just did was run the same command Bazel was running and getting stuck on. If the command is stuck in this manual mode too, then the error might be with the linker (I doubt this is the case though). But if it succeeds, then the problem is with Bazel.
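Concretely, the steps above might look like this (all paths here are illustrative, not taken from a real build):

```
$ bazel build --compilation_mode=dbg -s //tensorflow/tools/pip_package:build_pip_package
# ...wait for the protobuf link action to start, Ctrl+C, rerun with -s,
# and copy the printed "cd <dir> && env ... <linker command>" line...
$ cd /home/user/.cache/bazel/_bazel_user/1234abcd/execroot/org_tensorflow   # illustrative
$ <paste the environment variables and linker command here and run it by hand>
```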
How can I build packages in parallel to reduce the time taken by the make command (Golang)?
My current directory is "thor".
Inside my Makefile:
build:
go build -o thor
go build thor/package1
go build thor/scripts/package2
In the above case, all the packages are standalone programs that run independently. When I run make, the packages build one by one, so if each package takes 30 sec to build, the total is 90 sec (30 sec × 3). But if I could build these packages in parallel, it would take only 30 sec.
With shell scripts, this kind of case can be handled by running each script in the background with & and waiting for all of them to finish using wait.
Sample code:
#!/bin/bash
echo "Starts"
./s1.sh &
./s2.sh &
./s3.sh &
wait
echo "ends"
In the above case, the scripts s1.sh, s2.sh, and s3.sh run concurrently, and wait blocks until all of them have finished.
So, can we do something like this in a Makefile too? :)
Instead of
build:
go build -o thor
go build thor/package1
go build thor/scripts/package2
split this in three separate recipes, something like this
build: build1 build2 build3
build1:
go build -o thor
build2:
go build thor/package1
build3:
go build thor/scripts/package2
.PHONY: build build1 build2 build3
Then call make with the -j option and you're done.
It would be better to have targets that correspond to files that are actually created by your go build command, together with a list of prerequisites. Then you don't need .PHONY and make can decide whether a rebuild is actually needed.
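Following that advice, a file-target version might look like this sketch (the package paths come from the question, but the source-file lists and output locations are assumptions):

```make
# Each target names the file its go build command actually produces,
# so make compares timestamps and skips targets that are up to date.
thor: $(wildcard *.go)
	go build -o thor

package1/package1: $(wildcard package1/*.go)
	go build -o package1/package1 thor/package1

scripts/package2/package2: $(wildcard scripts/package2/*.go)
	go build -o scripts/package2/package2 thor/scripts/package2

build: thor package1/package1 scripts/package2/package2
.PHONY: build
```

Running make -j3 build then builds the three binaries concurrently, and a second run rebuilds only what changed.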
You can use gox, which parallelizes builds for multiple platforms.
By default, Gox will parallelize based on the number of CPUs you have and build for every platform.
Build exactly the same way as with go build:
gox
If you want to build packages and sub-packages:
gox ./...
go build and go run are very slow on a tiny program I have (cgo invocations in particular). I'd like go to cache the binary so that it only rebuilds when the source is newer. I would use a simple Makefile with a % rule, but the language designers claim that go's build support doesn't need Makefiles.
Is there another alternative I've overlooked? Does the go community prefer another build system, maybe hash-based instead, for caching and reusing build products?
go build and go install will soon (Go 1.10, Q1 2018) be much faster: see this thread and this draft document.
The go command now maintains a cache of built packages and other small metadata (CL 68116 and CL 75473). The cache defaults to the operating system-defined user cache directory but can be moved by setting $GOCACHE.
Run "go env GOCACHE" to see the current effective setting. Right now the go command never deletes anything from the cache. If the cache gets too big, run "go clean -cache" instead of deleting the directory. That command will preserve the cache's log.txt file. In a few weeks I'll ask people to post their log.txt files to a Github issue so that we can evaluate cache size management approaches.
The main effect of the build cache is that commands like "go test" and "go build" run fast and do incremental builds always, reusing past build steps as aggressively as possible.
You do not have to use "go test -i" or "go build -i" or "go install" just to get fast incremental builds. We will not have to teach new users those workarounds anymore. Everything will just be fast.
Note that go install won't install dependencies of the named packages: see "What does go build build?".
I wrote a tool that happens to solve this as a side effect. go build alone will not check if the executable it's producing is already up to date. go install does, and if you tweak it to install to a location of your choice, then you'll get the desired result, similar to go build.
You can see the behaviour you describe by doing something like this:
$ go get -d github.com/anacrolix/missinggo/cmd/nop
$ time go run "$GOPATH"/src/github.com/anacrolix/missinggo/cmd/nop/*.go
real 0m0.176s
user 0m0.142s
sys 0m0.048s
That's on a warm run. go run will link on every invocation, just as go build would. Note that github.com/anacrolix/missinggo/cmd/nop is a program that does absolutely nothing.
Here's invoking the same package, using my tool, godo:
$ time godo github.com/anacrolix/missinggo/cmd/nop
real 0m0.073s
user 0m0.029s
sys 0m0.033s
For larger programs, the difference should be more pronounced.
So in summary, your standard tooling option is to use go install, or an alternative like godo.
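The "tweak" mentioned above can be as simple as pointing GOBIN at a directory of your choice, since go install honors it for the output location (the ./bin path here is an assumption):

```shell
# go install reuses the already-installed binary when it is up to date,
# skipping the link step that go build/go run repeat every time.
GOBIN="$PWD/bin" go install github.com/anacrolix/missinggo/cmd/nop
./bin/nop
```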