FlowVisor doesn't show the slices that I create

So I was creating a slice in FlowVisor, following the steps I read in the book: I turned FlowVisor on and then ran this command
$ fvctl -f /dev/null add-slice slice1 tcp:localhost:10001 admin#lowerslice1
but when I run
$ fvctl -f /dev/null list-slices
it shows fvadmin as the only slice.
I already tried running
fvctl add-slice tenant1 tcp:localhost:10001 admin#tenant1
but the slice list still only shows fvadmin.
What step am I missing to make the slices I created show up?
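For reference, the whole sequence I'm running looks roughly like this (the init-script path and the password prompt are based on a stock FlowVisor install and may differ; the slice name and admin e-mail are just examples):
$ sudo /etc/init.d/flowvisor start
$ fvctl -f /dev/null add-slice myslice tcp:localhost:10001 admin@myslice
(fvctl then asks for a password for the new slice)
$ fvctl -f /dev/null list-slices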

Related

Bintray docker repository storage

On Bintray I found out that I have a private docker repository consuming quite a lot of space:
[Screenshot: account usage by repository]
I then proceeded to do some housekeeping and kept only the last 3 tags of all the images I have. However, that didn't help much: the storage didn't change at all after deleting all these old tags.
I used the API endpoint documented here, https://bintray.com/docs/api/#_get_package_files, to get an estimate of the package file sizes:
for img in $(cat images); do
  curl -s -XGET -u "user:pass" "https://bintray.com/api/v1/packages/my-org/internal-docker/$img/files" \
    | jq '.[] | .size' \
    | awk '{ sum += $1 } END { print sum }'
done
Summing all those up gets me 63723101568 bytes, roughly 60 GB.
Any idea where the other 310GBs are?
Note that even if the 3 tags were completely different from each other, I would get at worst 3x that figure, i.e. 180 GB. But the 375 GB is still there.
Where are you getting the array 'images'?
You are not curling for the entire list but rather single files.
It's possible your list did not include all the images in this repo.
Check to make sure you have traversed all subfolders for images as well.
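A sketch of that check (my own suggestion; the repos/.../packages endpoint and its start_pos pagination are taken from the Bintray REST docs, and my-org/internal-docker from the question, so adjust as needed):
# enumerate every package in the repo instead of relying on the 'images' file,
# then sum the file sizes per package (first page only; use ?start_pos= for more)
total=0
for img in $(curl -s -u "user:pass" "https://bintray.com/api/v1/repos/my-org/internal-docker/packages" | jq -r '.[].name'); do
  size=$(curl -s -u "user:pass" "https://bintray.com/api/v1/packages/my-org/internal-docker/$img/files" | jq '[.[].size] | add // 0')
  echo "$img $size"
  total=$((total + size))
done
echo "TOTAL $total bytes"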
After some time, something changed in the backend storage.
I asked their support (you have to click on Feedback when logged in to Bintray); they are checking whether some housekeeping was already being done, or whether something was only done after I complained to them.
I'll update if I hear more from them.

Extracting unmapped reads where both mates are unmapped using samtools?

I'm trying to determine the best way to extract unmapped reads in which both mates in a pair did not map. Currently, it seems that my code is simply extracting all unmapped reads, regardless of their mate. I'm not sure how to go about this, as I'm already using the -f option to extract unmapped reads. Would I just do another iteration of samtools view?
samtools view -@ 4 -buh -f4 sample${r}_pe.remove.sam > sample${r}_pe.unmapped.bam
To extract only the reads where read 1 is unmapped AND read 2 is unmapped (= both mates are unmapped):
samtools view -b -f12 input.sam > output.both_mates_unmapped.bam
Here, the options are:
-b - output BAM,
-f12 - filter only reads with flag: 4 (read unmapped) + 8 (mate unmapped).
SEE ALSO:
Decoding SAM flags: https://broadinstitute.github.io/picard/explain-flags.html
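If the end goal is to re-map or assemble those pairs, one possible follow-up (my own sketch, not part of the answer above; thread count and file names are placeholders) is to name-sort the both-unmapped BAM and dump it to paired FASTQ:
samtools sort -n -@ 4 output.both_mates_unmapped.bam -o both_unmapped.nsorted.bam
samtools fastq -@ 4 -1 unmapped_R1.fastq -2 unmapped_R2.fastq -s unmapped_singletons.fastq both_unmapped.nsorted.bam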

Faster way of Appending/combining thousands (42000) of netCDF files in NCO

I seem to be having trouble properly combining thousands of netCDF files (42,000+; about 3 GB in total for this particular folder/variable). The main variable that I want to combine has a structure of (6, 127, 118), i.e. (time, lat, lon).
I'm appending the files one by one, since the list of files is too long to pass in a single command.
I have tried:
for i in input_source/**/**/*.nc; do ncrcat -A -h append_output.nc $i append_output.nc ; done
but this method is really slow (on the order of kB/s, and it seems to get slower as more files are appended) and it also gives a warning:
ncrcat: WARNING Intra-file non-monotonicity. Record coordinate "forecast_period" does not monotonically increase between (input file file1.nc record indices: 17, 18) (output file file1.nc record indices 17, 18) record coordinate values 6.000000, 1.000000
which basically means the variable "forecast_period" just repeats 1-6 n times, where n = 42,000 files, i.e. [1,2,3,4,5,6,1,2,3,4,5,6,...].
Despite this warning I can still open the file and ncrcat does what it's supposed to; it is just slow, at least with this particular method.
I have also tried adding in the option:
--no_tmp_fl
but this gives an error:
ERROR: nco__open() unable to open file "append_output.nc"
full error attached below
If it helps, I'm using WSL with Ubuntu on Windows 10.
I'm new to bash and any comments would be much appreciated.
Either of these commands should work:
ncrcat --no_tmp_fl -h *.nc
or
ls input_source/**/**/*.nc | ncrcat --no_tmp_fl -h append_output.nc
Your original command is slow because it opens and closes the output file N times. These commands open it once, fill it up, then close it.
I would use CDO for this task. Given the huge number of files, it is recommended to first sort them by time (assuming you want to merge them along the time axis). After that, you can use
cdo cat *.nc outfile
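For example (a sketch, assuming the file names sort chronologically; the glob follows the question's directory layout, and with 42,000 files either form may hit the shell's argument-length limit, in which case the list has to be split into batches):
cdo cat $(ls input_source/**/**/*.nc | sort) outfile.nc
or let CDO order the records by their time coordinate itself:
cdo mergetime input_source/**/**/*.nc outfile.nc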

export github commits/names to CSV with bash & jq

For a project I need to extract data from a lot of different blockchain GitHub profiles into a CSV.
After browsing through the GitHub API, I was able to get some of the necessary data out as txt/csv files using bash commands and jq.
Doing all of this manually would probably take 7 days. I have a list of profiles I need to loop through, saved as a CSV.
The list looks like this --> https://docs.google.com/spreadsheets/d/1lFsewAYI7F8zSw7WPhI9E9WwR8f4G1clw1yjxY3wz_4/edit#gid=0
My approach so far to get all the repo names looks like this:
sample='[{"name":"0chain"},{"name":"0stateapp"},{"name":"0xcert"}]'
(The CSV data belongs in here; I don't know how to load it into that variable yet, but for testing purposes this was enough. If somebody knows how, feel free to give a hint.)
for row in $(echo "${sample}" | jq -r '.[] | @base64'); do
    _jq() {
        echo "${row}" | base64 --decode | jq -r "${1}"
    }
    GHUSER=$(_jq '.name')
    curl -s "https://api.github.com/users/$GHUSER/repos?per_page=100" | jq -r '.[] | .full_name'
done
The output looks like this:
0chain/0chain-token
0chain/client-sdk
0chain/docs
0chain/gorocksdb
0chain/hostadmin
0chain/rocksdb
0stateapp/ZSCoin
0xcert/0xcert
0xcert/conventions
0xcert/docs
0xcert/erc721-validator
0xcert/erc721-validator-api
0xcert/erc721-validator-ui
0xcert/erc721-website
0xcert/ethereum
0xcert/ethereum-crowdsale
0xcert/ethereum-dex
0xcert/ethereum-erc20
0xcert/ethereum-erc721
0xcert/ethereum-minter
0xcert/ethereum-utils
0xcert/ethereum-xcert
0xcert/ethereum-xcert-builder
0xcert/ethereum-zxc
0xcert/framework
0xcert/framework-cert-test
0xcert/nonfungiblealliance-www
0xcert/solidity-style-guide
0xcert/techpaper
0xcert/truffle
0xcert/web3.js
What I need to do is use all of the above values and generate a file that contains:
Github Profile (already stored in the attached sheet)
The Date when accessing this information
All the repositories belonging to that profile (code above, but filtered)
Now the interesting part:
The commit history
Number of the commit (ID)
Date of the commit
Description of the commit
Person who committed
Checks passed
Checks failed
Almost the same needs to be done for closed and open pull requests, although I think that once the "problem" above is solved, the pull requests follow the same strategy.
For the commits I'd do something like this:
for repo in "${repoarray[@]}"; do curl -s "https://api.github.com/repos/$repo/commits" | jq -r '.[] | .author.login'; done   # plus whatever else is needed
basically this chart here needs to be filled
https://docs.google.com/spreadsheets/d/1mFXiohiWNXNP8CVztFA1PFF41jn3J9sRUhYALZShsPY/edit?usp=sharing
What I need help with:
storing the output from the first loop in an array
looping through that array to get the number of commits
looping through that array to get the data on closed pull requests
looping through that array to get the data on open pull requests
Excuse my "noobish" question.
I'm using bash/jq and the GitHub API for the first time.
I'd appreciate any kind of help.
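To make the asks above concrete, here is roughly the shape of script I have in mind (a sketch only: it assumes profiles.csv is my one-column list without a header row, uses the unauthenticated and therefore rate-limited API, only fetches the first page of results, and leaves out the checks passed/failed part, which needs the separate Checks API):
#!/usr/bin/env bash
today=$(date +%F)

# read the profile names straight from the CSV instead of the hard-coded $sample
mapfile -t profiles < profiles.csv

# 1) store the output of the first loop in an array
repos=()
for GHUSER in "${profiles[@]}"; do
    while read -r repo; do
        repos+=("$repo")
    done < <(curl -s "https://api.github.com/users/$GHUSER/repos?per_page=100" | jq -r '.[].full_name')
done

# 2) loop through that array and collect commit data as CSV
echo "profile,date,repo,sha,commit_date,message,author" > commits.csv
for repo in "${repos[@]}"; do
    curl -s "https://api.github.com/repos/$repo/commits?per_page=100" \
      | jq -r --arg d "$today" --arg r "$repo" \
          '.[] | [($r | split("/")[0]), $d, $r, .sha, .commit.author.date,
                  (.commit.message | split("\n")[0]), (.author.login // "unknown")] | @csv' \
      >> commits.csv
done

# 3) and 4) the same pattern for pull requests; state=closed / state=open selects which set
for repo in "${repos[@]}"; do
    curl -s "https://api.github.com/repos/$repo/pulls?state=closed&per_page=100" \
      | jq -r --arg r "$repo" '.[] | [$r, .number, .created_at, .closed_at, .user.login, .title] | @csv'
done > closed_pulls.csv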

How to list the published container images in the Google Container Registry in a CLI in image size order

Using a CLI, I want to list the images in each repository in a Google Container Registry project but with the following conditions:
Lists the images with the latest tag only
Lists the human-readable size of the images
Lists the name of the images
The closest I've managed to get is through gsutil:
gsutil du -h gs://eu.artifacts.my-registry.appspot.com/containers/images
Resulting in:
33.77 MiB gs://eu.artifacts.my-registry.appspot.com/containers/images/sha256:03c1a2387ef6cb30a7428a46821f946d6a2c591a26cb2066891c55b2b6846ae2
1.27 MiB gs://eu.artifacts.my-registry.appspot.com/containers/images/sha256:03c1e7db6bf0140bd5fa34236a35453cb73cef01f6d89b98bc5995ae8ea07aaf
1.32 KiB gs://eu.artifacts.my-registry.appspot.com/containers/images/sha256:03c3c97495d60c68d37d04a7e6c9b3a48bb159ce5dde13d0d81b4e75e2a3f1d4
81.92 KiB gs://eu.artifacts.my-registry.appspot.com/containers/images/sha256:03c5483cb8ac9c9ae498507e15d68d909a11859a8e5238556b7188e0af4d9264
457.43 KiB gs://eu.artifacts.my-registry.appspot.com/containers/images/sha256:03c7f98faa1cfc05264e743e23ca2e118d24c57bfd67d5cb2e2c7a57e8124b6c
7.88 KiB gs://eu.artifacts.my-registry.appspot.com/containers/images/sha256:03c83b13d044844cd3f6b278382e408541f22029acaf55d9e7e5689b8d51eeea
But obviously this does not meet most of my criteria.
The information is available through the GUI on a per-image basis, like so:
Any ideas?
I'm open to gsutil, gcloud, docker, anything really which can be installed on a docker container.
You can use the Google Cloud UI to accomplish this. There's a column selector right next to the filter bar and it has an option for the image size.
Once the column is displayed, you'll be able to order by size.
It seems, after reading your comment on Jason's answer, that your only outstanding issue is listing the container image sizes. That is not possible to retrieve directly with a gcloud command. Here are two workarounds I tested:
You can use the gcloud container images describe command to see the size of an image. Make sure you use the "--log-http" flag with it. The command should look like this:
$ gcloud container images describe gcr.io/myproject/myimage:tag --log-http
Another way to get the size of the image is using gsutil stat command.
So here's what I did:
a. With the command below, I listed all my images from the GCS bucket and saved them to a file called images.txt:
$ gsutil ls "BUCKET URL" > images.txt
b. I ran gsutil stat as below to read the image names from images.txt and print the size of each image in turn:
$ for x in $(cat images.txt); do gsutil stat "$x" | grep Content-Length | awk '{print $2}'; done
You can customize this little script according to your needs.
I understand these are not efficient workarounds, but that seems to be all that is possible right now. However, GCR just implements the Docker registry API, so maybe you can read that documentation to see if you can find/do something of your own.
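As a rough variant of the same idea, the per-blob sizes from gsutil du can at least be ranked, although this still works on layer blobs rather than on tagged images (bucket name taken from the question):
gsutil du gs://eu.artifacts.my-registry.appspot.com/containers/images | sort -rn | head -20 | numfmt --field=1 --to=iec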
Hi, just sharing a rudimentary script which takes the first tag of each image, sums the sizes of all its layers, and writes the result to a report. It takes ages on a 3 TB repo, but at least I know which repos are big.
echo "REPO,SIZE" > repository-size-report.csv
for REPO in $(gcloud container images list --repository eu.gcr.io/comerge-comerge01-171833 --format="table[no-heading](NAME)") ; do
for TAGS in $(gcloud container images list-tags $REPO --format="table[no-heading](TAGS)"); do
TAG=$(echo $TAGS | cut -d, -f1)
SUM=0
for SIZE in $(gcloud container images describe $REPO:$TAG --log-http 2>&1 | grep size | grep -o '[0-9][0-9]*') ; do
SUM=$((SUM + SIZE))
done
HSUM=$(echo $SUM | numfmt --to iec --format "%8f")
echo "$REPO:$TAG,$HSUM"
echo "$REPO:$TAG,$HSUM" >> repository-size-report.csv
done
done
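To get the report in the size order asked for in the question, the CSV can then be sorted on its size column (GNU sort's -h understands the suffixes numfmt produces), e.g.:
tail -n +2 repository-size-report.csv | sort -t, -k2 -h -r | head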
You can use the gcloud container images list command to accomplish this task; however, you will need to set the appropriate flags for your use case. You can read more about the command and its flag options here.
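For example (a sketch; my-project and my-image are placeholders, and note that the image size itself is not exposed by these commands, which is why the answers above fall back to --log-http or gsutil):
gcloud container images list --repository=eu.gcr.io/my-project --format="value(name)"
gcloud container images list-tags eu.gcr.io/my-project/my-image --filter="tags:latest" --format="table(digest, tags, timestamp.datetime)"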
