Windows Console Output from Waifu2x-caffe unintelligible

I am using Waifu2x to upscale a series of images, but I am having a problem with the command I am running. I would try to troubleshoot it myself, but I can't make sense of the error output. It reads:
âGâëü[: âéâfâïâtâ@âCâïé¬èJé»é▄é╣é±é┼é╡é╜
I think it is in Japanese, as suggested by Waifu2x's GitHub page. I also believe it is the same every time, but I can't know for sure. I am using an English computer and I am an English speaker, so I really need to know what it says in English, or at least get something I can put into Google Translate.
I have already tried the solution here; as far as I can tell from regedit, Name=00 and Data=Consolas.
Regarding my specific problem, the command I am typing into cmd is:
waifu2x-caffe-cui -i "C:\Users\Christian\workspace\CodeLyokoUpscaleing\bin\480Frames" -e png -l png -m noise_scale -d 16 -h 1440 -n 1 -p cudnn -c 256 -b 1 --auto_start 1 --auto_exit 1 --no_overwrite 1 -y upconv_7_anime_style_art_rgb -o "C:\Users\Christian\workspace\CodeLyokoUpscaleing\bin\1440Frames"
I really think it should work, as I converted it from another batch file I created that used variables instead of absolute file paths:
waifu2x-caffe-cui -i "%~dp0480Frames" -e png -l png -m noise_scale -d 16 -h 1440 -n 1 -p cudnn -c 256 -o "%~dp01440Frames" --auto_start 1 --auto_exit 1 --no_overwrite 1 -y upconv_7_anime_style_art_rgb
But I still get the weird output.
How can I see what the error is?
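For what it's worth, that garbled text looks like Shift-JIS (Japanese) bytes rendered under the console's OEM code page 437. A small Python sketch, assuming that is what happened, reverses the mis-decoding:

```python
# The console likely rendered Shift-JIS bytes using code page 437,
# producing mojibake. Re-encoding with cp437 recovers the original
# bytes, which can then be decoded as Shift-JIS.
garbled = "âGâëü[: âéâfâïâtâ@âCâïé¬èJé»é▄é╣é±é┼é╡é╜"

recovered = garbled.encode("cp437").decode("shift_jis")
print(recovered)  # エラー: モデルファイルが開けませんでした
```

That decodes to roughly "Error: could not open the model file", which suggests waifu2x-caffe cannot find its model files, e.g. when the CUI is not run from its install directory.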

Is there any way to use wildcards to download images from unspecified URLs within a domain?

I want to download all of the display images for each of the 807 pokemon on Bulbapedia. For instance, for Bulbasaur, I'd like to obtain this image:
When I click on the image, I can see that the image addresses follow a certain pattern:
Bulbasaur: https://cdn.bulbagarden.net/upload/2/21/001Bulbasaur.png
Ivysaur: https://cdn.bulbagarden.net/upload/7/73/002Ivysaur.png
Venusaur: https://cdn.bulbagarden.net/upload/a/ae/003Venusaur.png
Charmander: https://cdn.bulbagarden.net/upload/7/73/004Charmander.png
Zeraora: https://cdn.bulbagarden.net/upload/a/a7/807Zeraora.png
...and so on. Basically, the URL that hosts each of the images is some form of https://cdn.bulbagarden.net/upload/*/*/*.png, each asterisk representing a wildcard.
My problem is that I'm unsure how I can represent these wildcards when using bash or wget. I've tried the following wget command to obtain the images:
wget -A.png -e robots=off -m -k -nv -np -p \
  --no-check-certificate \
  --user-agent="Mozilla/5.0 (compatible; Konqueror/3.0.0/10; Linux)" \
  https://cdn.bulbagarden.net/upload/
However, the run downloads 0 bytes in 0 files, which means that no files are being matched.
Is there any way I can go about doing this?
UPDATE: As some people have pointed out in the comments, I need some way to aggregate all the individual links themselves. I've found this page which has links to the articles for each of the 807 pokemon. However, this creates the dilemma of recursively retrieving links from the linked pages. In order to actually get to the images, I'd need to click two more links after landing on the article for the individual pokemon. I'll show what I mean graphically:
From the List of Pokémon by National Pokédex number page, get the page link for Bulbasaur:
From the Bulbasaur (Pokémon) page, click on the Bulbasaur image to get to the directory that links to the actual png:
Finally, from the File:001Bulbasaur.png page, get the image link to the target png: https://cdn.bulbagarden.net/upload/2/21/001Bulbasaur.png:
This process should be applied recursively to all of the links from the initial list page.
The command I've tried to get the desired result is:
wget --recursive --level=1 --no-directories --accept png https://bulbapedia.bulbagarden.net/wiki/List_of_Pokémon_by_National_Pokédex_number
But all I'm getting is this error: "Unsupported scheme".
I'm pretty much a wget noob so I'm not quite sure what I'm doing wrong here. How can I recursively get to the image links?
I want to download all of the display images for each of the 807 pokemon on Bulbapedia.
[...]
Basically, the URL that hosts each of the images is some form of https://cdn.bulbagarden.net/upload/*/*/*.png, each asterisk representing a wildcard.
Especially the first 2 asterisks are pretty random, so I'd forget about this pattern if I were you. Using an HTML-parser instead, like xidel, would be a much better idea if you're looking for specific files on a website.
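As an aside, those first two path components are not actually arbitrary: MediaWiki derives them from the MD5 hash of the filename (first hex digit, then first two hex digits). A sketch of that convention in Python (the URL prefix is taken from the question; treat this as illustrative, not a supported API):

```python
import hashlib

def upload_path(filename):
    # MediaWiki stores uploads under /<h[0]>/<h[0:2]>/<filename>,
    # where h is the MD5 hex digest of the filename.
    h = hashlib.md5(filename.encode("utf-8")).hexdigest()
    return "https://cdn.bulbagarden.net/upload/{}/{}/{}".format(h[0], h[:2], filename)

print(upload_path("001Bulbasaur.png"))
```

So in principle the URLs can be computed from the filenames alone, although an HTML parser is still the more robust route since it doesn't depend on knowing the exact filenames in advance.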
Fast-forward to 2023, the List of Pokémon by National Pokédex number page now lists 1008 Pokémon.
Extracting the 1008 individual urls:
$ xidel -s "https://bulbapedia.bulbagarden.net/wiki/List_of_Pok%C3%A9mon_by_National_Pok%C3%A9dex_number" \
-e '//tr[starts-with(td,"#")]/td[2]/a/@href'
/wiki/Bulbasaur_(Pok%C3%A9mon)
/wiki/Ivysaur_(Pok%C3%A9mon)
/wiki/Venusaur_(Pok%C3%A9mon)
[...]
Retrieving the indirect image-url from those 1008 individual urls:
$ xidel -s "https://bulbapedia.bulbagarden.net/wiki/List_of_Pok%C3%A9mon_by_National_Pok%C3%A9dex_number" \
-f '//tr[starts-with(td,"#")]/td[2]/a/@href' \
-e '//td[@colspan="4"]/a[@class="image"]/@href'
/wiki/File:0001Bulbasaur.png
/wiki/File:0002Ivysaur.png
/wiki/File:0003Venusaur.png
[...]
Retrieve the direct image-url:
$ xidel -s "https://bulbapedia.bulbagarden.net/wiki/List_of_Pok%C3%A9mon_by_National_Pok%C3%A9dex_number" \
-f '//tr[starts-with(td,"#")]/td[2]/a/@href' \
-f '//td[@colspan="4"]/a[@class="image"]/@href' \
-e '//div[@class="fullMedia"]/p/a/@href'
//archives.bulbagarden.net/media/upload/f/fb/0001Bulbasaur.png
//archives.bulbagarden.net/media/upload/8/81/0002Ivysaur.png
//archives.bulbagarden.net/media/upload/6/6b/0003Venusaur.png
[...]
And finally to download them to the current dir:
$ xidel "https://bulbapedia.bulbagarden.net/wiki/List_of_Pok%C3%A9mon_by_National_Pok%C3%A9dex_number" \
-f '//tr[starts-with(td,"#")]/td[2]/a/@href' \
-f '//td[@colspan="4"]/a[@class="image"]/@href' \
-f '//div[@class="fullMedia"]/p/a/@href' \
--download .
(notice the absence of -s/--silent to see status information)
This involves 1 + (1008 x 3) = 3025 GET requests, so this will take a while!
Alternatively there's a quicker way:
$ xidel -s "https://bulbapedia.bulbagarden.net/wiki/List_of_Pok%C3%A9mon_by_National_Pok%C3%A9dex_number" \
-e '//tr[starts-with(td,"#")]//img/@src'
//archives.bulbagarden.net/media/upload/thumb/f/fb/0001Bulbasaur.png/70px-0001Bulbasaur.png
//archives.bulbagarden.net/media/upload/thumb/8/81/0002Ivysaur.png/70px-0002Ivysaur.png
//archives.bulbagarden.net/media/upload/thumb/6/6b/0003Venusaur.png/70px-0003Venusaur.png
[...]
$ xidel -s "https://bulbapedia.bulbagarden.net/wiki/List_of_Pok%C3%A9mon_by_National_Pok%C3%A9dex_number" \
-e '//tr[starts-with(td,"#")]//img/replace(@src,"(.+)/thumb(.+?png).+","$1$2")'
//archives.bulbagarden.net/media/upload/f/fb/0001Bulbasaur.png
//archives.bulbagarden.net/media/upload/8/81/0002Ivysaur.png
//archives.bulbagarden.net/media/upload/6/6b/0003Venusaur.png
[...]
$ xidel "https://bulbapedia.bulbagarden.net/wiki/List_of_Pok%C3%A9mon_by_National_Pok%C3%A9dex_number" \
-f '//tr[starts-with(td,"#")]//img/replace(@src,"(.+)/thumb(.+?png).+","$1$2")' \
--download .
With a rather simple string manipulation on the thumbnail URLs from the list page you can get the same direct image URLs. This only involves 1 + 1008 = 1,009 GET requests, so it will get you these images a lot quicker.
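The same thumbnail-to-direct rewrite can be done in any regex engine; here is a Python equivalent of xidel's replace() call, using one of the thumbnail URLs from the output above:

```python
import re

thumb = "//archives.bulbagarden.net/media/upload/thumb/f/fb/0001Bulbasaur.png/70px-0001Bulbasaur.png"

# Drop the "/thumb" path segment and the trailing "/70px-..." component
# to obtain the direct image URL.
direct = re.sub(r"(.+)/thumb(.+?png).+", r"\1\2", thumb)
print(direct)  # //archives.bulbagarden.net/media/upload/f/fb/0001Bulbasaur.png
```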
Actually, I had the same insane idea 5 years after you, and this was my solution:
I copied all the tables from the page you linked into VSCode and extracted all the URLs of the 70px images with the regex //archives.bulbagarden.net(.*?)(70px)(.*?)(.png)
I added https: at the beginning and replaced 70px with 375px (the most common resolution, and enough for my use case)
I made a Python script to download all the images listed in a .txt file:
import os
import requests

def download_image(url, filename):
    r = requests.get(url, allow_redirects=True)
    open(filename, 'wb').write(r.content)

def main():
    with open('links.txt', 'r') as f:
        for line in f:
            url = line.strip()
            filename = url.split('/')[-1]
            download_image(url, filename)

main()
Some regional versions are missing but I'm satisfied with the result:

Can't return to command line

Total noob question - it has me stumped. I'm using SoX to create spectrograms of audio files, and it's been working fine, but now when I try to execute it, it just gives me a continuation prompt (>) and does nothing; anything I type seems to have no effect. Command-. does get me back to the normal prompt. Here's what it looks like on my screen:
Last login: Tue Oct 24 11:39:33 on ttys000
Scotts-iMac:~ scottwhittle$ cd /Users/scottwhittle/Desktop/Belize\ Test2
Scotts-iMac:Belize Test2 scottwhittle$ for file in *.mp3
> do
> outfile="${file%.*}.png"
> sox "$file" -n spectrogram r -l -m -d "$title_in_pic" -o ”$outfile" -X 5000 -y 513 norm
> done
> help!
> nothing I type seems to do anything
>
What's confusing is that it works sometimes and not others. One thing I am doing is editing the command in TextEdit to change parameters, and then copying and pasting it into Terminal. Is this the issue? Thanks for any and all help.
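One plausible culprit: the pasted sox line above contains a curly quote (”) before $outfile, which TextEdit's "smart quotes" feature substitutes for straight quotes. The shell does not treat ” as a quoting character, so the straight " that follows $outfile opens a string that is never closed, and the shell keeps printing > waiting for the closing quote. A small Python check, just to illustrate how to spot such characters in a pasted command:

```python
# The sox line as it appears in the transcript above; note the curly
# quote before $outfile.
cmd = 'sox "$file" -n spectrogram r -l -m -d "$title_in_pic" -o ”$outfile" -X 5000 -y 513 norm'

# Flag any typographic quotes that the shell won't treat as quoting chars.
for i, ch in enumerate(cmd):
    if ch in "“”‘’":
        print(f"smart quote {ch!r} at index {i}")
```

Turning off smart quotes in TextEdit (or editing in a plain-text editor) would rule this out.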

Tcpdump with -w writing gibberish to file

When trying to capture tcpdump output to a file, I get the following:
▒ò▒▒▒▒3▒X▒▒<<▒▒▒▒▒▒▒4▒4▒b
7
7▒▒3▒X▒▒<<▒▒▒▒▒▒▒4▒4▒b
7
7▒▒3▒X▒▒<<▒▒▒▒▒▒▒4▒4▒b
[...]
If I run tcpdump without the -w the output displays fine in the shell.
Here is the input:
tcpdump -i eth0 -Z root -w `date '+%m-%d-%y.%T.pcap'`
tcpdump -w writes a raw capture file, which is not meant to be read directly. You can read the file back with tcpdump's -r option, as described in the man page:
-r Read packets from file (which was created with the -w option). Standard input is used if file is ‘‘-’’.
-w Write the raw packets to file rather than parsing and printing them out. They can later be printed with the -r option. Standard output is used if file is ‘‘-’’. See pcap-savefile(5) for a description of the file format.
Another option would be to redirect the output without using the -w option:
tcpdump -i eth0 -Z root > `date '+%m-%d-%y.%T.pcap'`
But note that this stores tcpdump's decoded text output rather than a raw pcap file, so it cannot be read back with -r.
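The reason the -w file looks like gibberish is that it is a binary pcap savefile, which starts with a 4-byte magic number rather than text. A quick sketch that checks for that magic (0xa1b2c3d4 in either byte order; the nanosecond-resolution variant uses 0xa1b23c4d):

```python
import struct

# Magic numbers for pcap savefiles, in both byte orders, including the
# nanosecond-resolution variant.
PCAP_MAGICS = {0xa1b2c3d4, 0xd4c3b2a1, 0xa1b23c4d, 0x4d3cb2a1}

def looks_like_pcap(first_bytes):
    # A pcap savefile begins with a 4-byte magic number.
    if len(first_bytes) < 4:
        return False
    (magic,) = struct.unpack("<I", first_bytes[:4])
    return magic in PCAP_MAGICS

# Usage (filename is hypothetical):
# with open("capture.pcap", "rb") as f:
#     print(looks_like_pcap(f.read(4)))
```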

Different results with MACS2 when Peakcalling with .bed or .bam

I got the following problem:
I use MACS2 (2.1.0.20140616) with the following short commandline:
macs2 callpeak -t file.bam -f bam -g 650000000 -n Test -B --nomodel -q 0.01
It seems to work as I want, but when I convert the .bam file to .bed via
bedtools bamtobed -i file.bam > file.bed
and run MACS2 on that, I get a lot more peaks. As far as I understand, the .bed file should contain the same information as the .bam file, so that's kind of odd.
Any suggestions as to what the problem might be?
Thanks!

Parallel on unix with screen

Is there a way to run these in parallel? I can start screens manually, but I need to start 30.
I attempted to do it by hand (stupid, yeah) but I got confused halfway through and decided I'd better ask Stack Overflow.
#!/bin/bash --login

avida=~/avida/cbuild/bin/avida
skeleton_dir=~/cse845/no_pred
# wd=/mnt/scratch/cse845_avida/predator_sim
wd=~/cse845/no_predator_editor_sim_wd

for i in {1..30}
do
    screen

    sim_num=${i}
    sim_dir=${wd}/sim_$sim_num
    mkdir $sim_dir
    cd $sim_dir
    cp ${skeleton_dir}/*.cfg ${skeleton_dir}/*.org ./
    $avida &> avida_log.txt
    # Here I would like to do the equivalent of exiting screen manually, ^A, d
done
Here is how to start 3 at the same time in a shell script (the -d -m options start them detached, in the background):
screen -S "name1" -c ~/screen/name1.screenrc -d -m
screen -S "name2" -c ~/screen/name2.screenrc -d -m
screen -S "name3" -c ~/screen/name3.screenrc -d -m
Then you could have a variable number of tabs/windows within each screen, specified in your screenrc files (with -t).
See example screenrc file designed to work well with emacs: https://github.com/startup-class/dotfiles/blob/master/.screenrc
Here is the relevant section of such a screenrc, specifying which tabs/windows to open per socket:
# 2.3) Autoload two screen tabs for emacs/bash.
screen -t emacs 0
screen -t bash 1
So when you do screen -ls you would get
There are screens on:
4149.name1 (07/10/13 22:18:44) (Detached)
4018.name2 (07/10/13 22:18:23) (Detached)
3882.name3 (07/10/13 22:17:08) (Detached)
3 Sockets in /var/run/screen/S-yourid.
And then if you wanted to connect to name1, you'd do screen -r 4149 or screen -r name1
I see two things right away:
You are missing the necessary backslashes to escape your newlines in the screen command arguments.
You need to tell screen to run in the background. See the -d and -m options.
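Putting those two fixes together, each simulation can be handed to its own detached screen session instead of calling screen interactively inside the loop. A sketch in Python of the launch step only (paths and names are taken from the question; this assumes GNU screen and bash are on PATH, and omits the mkdir/cp setup):

```python
import subprocess

def screen_command(sim_num, wd="~/cse845/no_predator_editor_sim_wd"):
    # Build a detached screen invocation: -d -m starts the session in
    # the background, -S gives it a name findable via `screen -ls`.
    name = "sim_{}".format(sim_num)
    inner = "cd {}/{} && ~/avida/cbuild/bin/avida &> avida_log.txt".format(wd, name)
    return ["screen", "-S", name, "-d", "-m", "bash", "-lc", inner]

if __name__ == "__main__":
    for i in range(1, 31):
        cmd = screen_command(i)
        print(" ".join(cmd))
        # subprocess.run(cmd)  # uncomment to actually launch the sessions
```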
