Oftentimes I want to post something like this to a GitHub issue:
$ ping google.com
PING google.com (216.58.195.238): 56 data bytes
Request timeout for icmp_seq 0
64 bytes from 216.58.195.238: icmp_seq=0 ttl=53 time=1064.747 ms
Right now I run the command, use screen's C-a C-[ to highlight the region, press enter to copy it into screen's paste buffer, paste that into vim, write it to a file, and then cat that file into pbcopy. There has to be a better way.
Is there a command I can run that will tee both the command I type (prefixed with a $) and all of its output to pbcopy? Or anything close? I envision:
$ demo ping google.com
PING google.com (216.58.195.238): 56 data bytes
Request timeout for icmp_seq 0
64 bytes from 216.58.195.238: icmp_seq=0 ttl=53 time=1064.747 ms
^C
$
and now the transcript I wanted to post is in my Mac clipboard.
You can do
script log.txt
ping www.google.com
exit
And you'll have your command and output saved in log.txt
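If the goal is specifically the Mac clipboard, you can then copy the transcript out of that file afterwards (assuming the stock macOS pbcopy):
pbcopy < log.txt
Note that script records the raw terminal session, so the pasted text may include carriage returns or prompt noise that needs a little cleanup.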
Edit
Based on your comment, what you want is
command="whatever command you want to run"
echo \$ $command > log.txt
$command >> log.txt
I don't think you'll find a single command that does exactly this.
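That said, a small shell function gets close to the demo command in the question. This is only a sketch of mine (not from the thread), assuming bash or zsh on macOS with pbcopy available:
demo() {
  local log
  log=$(mktemp)                    # temp file that will hold the transcript
  printf '$ %s\n' "$*" > "$log"    # record the command line itself, prefixed with "$ "
  "$@" 2>&1 | tee -a "$log"        # run the command, showing output and appending it to the log
  pbcopy < "$log"                  # put the whole transcript on the clipboard
  rm -f "$log"
}
One caveat: interrupting the command with Ctrl-C may abort the function before pbcopy runs, so it works best for commands that exit on their own (e.g. ping -c 4 google.com).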
I used the ping command:
ping -c1 -W1 8.8.8.8
It works well when I'm online, but when the internet is unavailable or the LAN is off, the terminal takes a long time to come back with a result.
I want the terminal to reply within at most 1 second (or 0.5 seconds) so I can use the result in my
ExtendScript code:
var command = "ping -c1 -W1 8.8.8.8";
var result = system.callSystem(command);
I tried to set a timeout using -t or -W, but that didn't help.
Edit:
Thanks to Philippe, the solution is:
nc -z www.google.com 80 -G1
If you are in the macOS Terminal, you can check the internet connection with:
nc -z www.google.com 80
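As a usage sketch (my addition, not from the answer): with the -G flag from the edit above setting the TCP connection timeout in seconds, the exit status of nc is all a script needs to branch on:
# Exit status 0 means the TCP connection to port 80 succeeded within 1 second.
if nc -z -G 1 www.google.com 80 > /dev/null 2>&1; then
  echo "online"
else
  echo "offline"
fi
The printed online/offline string is then easy to pick up from the calling ExtendScript code.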
In a bash script I need to run a tcpdump command and save the output to a file. However, when I redirect it via > /tmp/test.txt, I still get the following output in the console:
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 1500 bytes
1 packet captured
1 packet received by filter
0 packets dropped by kernel
However, I do want the script to wait for the command to complete before continuing.
Is it possible to suppress this output?
The output you're seeing is written to stderr, not stdout, so you can redirect it to /dev/null if you don't want to see it. For example:
tcpdump -nn -v -i eth0 -s 1500 -c 1 'ether proto 0x88cc' > /tmp/test.txt 2> /dev/null
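And because tcpdump still runs in the foreground, the script naturally waits for it to finish before moving on. A minimal sketch of how that reads inside a script:
#!/bin/bash
# tcpdump blocks here until it has captured one packet (-c 1); only its
# stderr chatter is discarded, the capture itself still goes to the file.
tcpdump -nn -v -i eth0 -s 1500 -c 1 'ether proto 0x88cc' > /tmp/test.txt 2> /dev/null
echo "capture done, continuing with the rest of the script"
The echo line is just a stand-in for whatever the script does next.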
I am trying to transfer images to a server via ftp.
When I use FileZilla, it works: I can see my files on the server.
When I use these raw ftp commands:
ftp -p -v -n $server << EOF
quote USER $user
quote PASS $pass
prompt off
cd Stock
mput *.jpg
quit
EOF
it doesn't work: I can't see my images on the server, even though in my terminal it looks like it worked:
227 Entering Passive Mode (89,151,93,136,207,15).
150 Opening ASCII mode data connection.
226 Transfer complete.
1225684 bytes sent in 1.88 secs (651.70 Kbytes/sec)
Any idea what could cause this?
Add BINARY to force binary mode; your log shows an ASCII mode data connection, and ASCII mode corrupts binary files such as JPEGs:
ftp -p -v -n $server << EOF
quote USER $user
quote PASS $pass
prompt off
cd Stock
BINARY
mput *.jpg
quit
EOF
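With binary mode in effect, the server's reply should report a BINARY (image) mode data connection instead of the ASCII one above, and the uploaded JPEGs should arrive intact.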
I've been trying to write a batch script that sends an HTTP POST request using cURL. In the cURL command I tell it to write http_code, time_total, and url_effective to stdout. I use "timeout /t 20" between the cURL commands, and that is all I see when I run the script, because each cURL runs in another console window (which disappears in a flash).
Here's some source code:
@echo off
rem -----------------------------------------------------------
rem SCRIPT TO REBOOT AND CHANGE ENCODE TO H264 FOR ACTI CAMERAS
rem -----------------------------------------------------------
rem SCRIPT NAME: ACTI ROOFTOP CAMERAS (REBOOT)
rem ------------------------------------------
start "" C:\curl\curl -sL -d "?USER=user&PWD=pass&SAVE_REBOOT" -w "\n%{http_code}\t %{time_connect}\t %{url_effective}\n" "http://192.168.0.30:80/cgi-bin/system" -o stdout
timeout /t 20
start "" C:\curl\curl -sL -d "?USER=user&PWD=pass&SAVE_REBOOT" -w "\n%{http_code}\t %{time_connect}\t %{url_effective}\n" "http://192.168.0.31:80/cgi-bin/system" -o stdout
timeout /t 20
start "" C:\curl\curl -sL -d "?USER=user&PWD=pass&SAVE_REBOOT" -w "\n%{http_code}\t %{time_connect}\t %{url_effective}\n" "http://192.168.0.32:80/cgi-bin/system" -o stdout
timeout /t 20
So the output I'm trying to get is:
200 2.00s http://192.168.0.30:80/cgi-bin/system
Waiting for 15 seconds ...
200 2.00s http://192.168.0.31:80/cgi-bin/system
Waiting for 15 seconds ...
200 2.00s http://192.168.0.32:80/cgi-bin/system
Any help is much appreciated, thanks.
Using curl from the shell, what is the best way to discard (or detect) files that were not completely downloaded because a timeout occurred? What I'm trying to do is:
curl -m 2 --compress -o "dest/#1" "http://url/{$list}"
When a timeout occurs, the log shows it, and the part of the file that was downloaded is saved to disk:
[4/13]: http://URL/123.jpg --> dest/123.jpg
99 97984 99 97189 0 0 45469 0 0:00:02 0:00:02 --:--:-- 62500
curl: (28) Operation timed out after 2000 milliseconds with 97189 bytes received
I'm trying to either get rid of the files that were not 100% downloaded, or have them listed so I can attempt a resume (-C flag) later.
The best solution I have found so far is to capture the stderr of the curl call, and parse it with a combination of perl and grep to get the output file names:
curl -m 2 -o "dest/#1" "http://url/{$list}" 2>curl.out
perl -pe 's/[\t\n ]+/ /g ; s/--> /\n/g' curl.out | grep -i "Curl: (28)" | perl -pe 's/ .*//g'
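An alternative sketch (mine, not from the thread), assuming the URLs can be listed one per line in a file such as urls.txt: run curl once per file, so its exit status identifies the timeouts directly (28 is curl's "operation timed out" code):
while read -r url; do
  name=$(basename "$url")
  curl -m 2 --compressed -o "dest/$name" "$url"
  if [ $? -eq 28 ]; then           # the transfer hit the 2-second timeout
    echo "$url" >> retry.txt       # keep the URL to retry later, e.g. with -C -
    rm -f "dest/$name"             # or drop the partial file right away
  fi
done < urls.txt
This trades curl's built-in URL globbing for a plain loop, but it avoids having to parse the progress output at all.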