Error pod 'SwiftLint' - cocoapods

[!] /usr/bin/curl -f -L -o /var/folders/1x/jmv798095x1fbjc_6128mflh0000gn/T/d20170314-59599-7fjizg/file.zip https://github.com/realm/SwiftLint/releases/download/0.16.1/portable_swiftlint.zip --create-dirs --netrc
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   599    0   599    0     0    166      0 --:--:--  0:00:03 --:--:--   166
  0     0    0     0    0     0      0      0 --:--:--  0:01:20 --:--:--     0
curl: (7) Failed to connect to github-cloud.s3.amazonaws.com port 443: Operation timed out
This is the error I get when I execute pod install. My network is behind a SOCKS5 proxy. I can download https://github.com/realm/SwiftLint/releases/download/0.16.1/portable_swiftlint.zip easily. But how can I make pod install succeed?
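Since CocoaPods shells out to /usr/bin/curl directly (as the error shows), one workaround is to point curl itself at the SOCKS5 proxy. A minimal sketch, assuming the proxy listens on 127.0.0.1:1080 (substitute your own host and port):

# option 1: set ALL_PROXY for this shell; curl honors this variable
export ALL_PROXY=socks5://127.0.0.1:1080
pod install

# option 2: make it persistent for every curl invocation
echo 'socks5 = "127.0.0.1:1080"' >> ~/.curlrc

Remember to remove the ~/.curlrc line once you are off the proxy, or curl will keep trying to use it.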

Related

Getting curl: (35) getpeername() failed with errno 22: Invalid argument while installing brew on mac

While running this command
curl -O https://raw.githubusercontent.com/Homebrew/install/master/install.sh
I am getting this
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:01:15 --:--:-- 0
curl: (35) getpeername() failed with errno 22: Invalid argument
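curl error 35 means the TLS connection never got a usable socket, which in practice often points at a proxy, VPN, or firewall interfering with the connection. A minimal diagnostic sketch, making no assumptions about your network beyond that:

# show the full connection detail to see where it stalls
curl -v -O https://raw.githubusercontent.com/Homebrew/install/master/install.sh

# check whether proxy environment variables are set, then retry bypassing them
env | grep -i proxy
curl --noproxy '*' -O https://raw.githubusercontent.com/Homebrew/install/master/install.sh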

Running curl locally vs inside an erb file yields different results

I have a template file in which I'd like to execute some simple code (I have an endpoint that returns some JSON revealing other server details relevant to the template). I've added the following code (values omitted where relevant):
<% require 'open3'
url = 'https://a.valid.address.com'
path = '/nodeStatuses'
port = '18091'
username = 'admin'
password = "#{@template_password}"
# run curl and capture its stdout/stderr into instance variables for the template
Open3.popen3("curl -m 10 -X GET --noproxy '*' -vvvv --cacert /etc/pki/ca-trust/source/anchors/RootCA.crt -k -u #{username}:#{password} #{url}:#{port}#{path}") do |stdin, stdout, stderr, thread|
  pid = thread.pid
  stdin.close
  @stdout = stdout.read.chomp
  @stderr = stderr.read.chomp
end %>
stdout: <%= @stdout %>
stderr: <%= @stderr %>
Strangely all my templates are filled with timeouts:
stderr: % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* About to connect() to a.valid.address.com port 18091 (#0)
*   Trying 10.10.10.10...
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
[... the same zero-progress line repeats each second up to 0:00:09 ...]
* Connection timed out after 10001 milliseconds
  0     0    0     0    0     0      0      0 --:--:--  0:00:10 --:--:--     0
* Closing connection 0
curl: (28) Connection timed out after 10001 milliseconds
My immediate thoughts were that the URL is offline or there's something wrong with the curl command, but running the same command via the command line yields results:
curl -I -s -m 10 --cacert /etc/pki/ca-trust/source/anchors/RootCA.crt -k -u admin:secret https://a.valid.address.com:18091/nodeStatuses
HTTP/1.1 200 OK
X-XSS-Protection: 1; mode=block
X-Permitted-Cross-Domain-Policies: none
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Server: Couchbase Server
Pragma: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Date: Fri, 30 Sep 2022 09:00:46 GMT
Content-Type: application/json
Content-Length: 685
Cache-Control: no-cache,no-store,must-revalidate
Then I thought it might be a limitation of ERB; nope, I get a response from another URL (e.g. stackoverflow).
Really looking for some clues here. Any help would be very much appreciated.
It turned out this template was being rendered on the puppetmaster and not locally on the server (this is expected behaviour), so the curl was running from a host that couldn't reach the endpoint. In the end I changed the logic and made the command executed on the puppetmaster return a list of hosts to populate the template file. This isn't quite what I'd intended to do originally, but the end result is the same/similar.
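A quick way to confirm this class of problem is to run the exact same curl from the host that actually renders the template. A minimal sketch, reusing the command from the question:

# run this on the puppetmaster, not on the managed node; if it times out
# here too, the template was never going to work as written
curl -I -s -m 10 --cacert /etc/pki/ca-trust/source/anchors/RootCA.crt -k \
  -u admin:secret https://a.valid.address.com:18091/nodeStatuses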

Curl in Bash, Redirect STDERR to file except for progress stderr

My shell script uploads files to a server. I'd like stdout and stderr to write to both a file and the console, BUT I don't want the progress bar/percentage (which is stderr) to go to the file. I only want curl errors to write to the file.
Initially I had this
curl ... 2>> "$log"
This wrote one or more nice, neat lines of the download to the log file, but nothing to the console.
I then changed it to
curl ... 3>&1 1>&2 2>&3 | tee -a "$log"
This wrote to both the console and the file, yay! Except it wrote every progress update to the file, making the log file very large and tedious to read.
How can I view the progress in the console, but only write the last part of the output to file?
I want this
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
106  1166    0     0  106  1166      0   2514 --:--:-- --:--:-- --:--:--     2
106  1166    0     0  106  1166      0    795  0:00:01  0:00:01 --:--:--     0
106  1166    0     0  106  1166      0    660  0:00:01  0:00:01 --:--:--     0
This is what I get with the second curl redirect
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 9.8G 0 0 0 16384 0 24764 4d 22h --:--:-- 4d 22h 24764
0 9.8G 0 0 0 3616k 0 4098k 0:41:54 --:--:-- 0:41:54 15.9M
0 9.8G 0 0 0 24.2M 0 12.9M 0:12:59 0:00:01 0:12:58 19.8M
0 9.8G 0 0 0 50.4M 0 17.5M 0:09:34 0:00:02 0:09:32 22.7M
0 9.8G 0 0 0 79.8M 0 20.5M 0:08:09 0:00:03 0:08:06 24.7M
1 9.8G 0 0 1 101M 0 20.7M 0:08:04 0:00:04 0:08:00 24.0M
1 9.8G 0 0 1 129M 0 21.9M 0:07:37 0:00:05 0:07:32 25.1M
1 9.8G 0 0 1 150M 0 21.8M 0:07:41 0:00:06 0:07:35 25.1M
1 9.8G 0 0 1 169M 0 21.5M 0:07:47 0:00:07 0:07:40 23.8M
1 9.8G 0 0 1 195M 0 21.9M 0:07:38 0:00:08 0:07:30 23.0M
2 9.8G 0 0 2 219M 0 22.1M 0:07:33 0:00:09 0:07:24 23.5M
2 9.8G 0 0 2 243M 0 22.4M 0:07:29 0:00:10 0:07:19 22.9M
2 9.8G 0 0 2 273M 0 22.9M 0:07:17 0:00:11 0:07:06 24.6M
..
.. hundreds of lines...
..
99 9.8G 0 0 99 9982M 0 24.8M 0:06:45 0:06:41 0:00:04 24.5M
99 9.8G 0 0 99 9.7G 0 24.8M 0:06:44 0:06:42 0:00:02 24.9M
99 9.8G 0 0 99 9.8G 0 24.8M 0:06:44 0:06:43 0:00:01 26.0M
100 9.8G 0 0 100 9.8G 0 24.8M 0:06:44 0:06:44 --:--:-- 25.8M
Edit:
What I don't understand is that, according to http://www.tldp.org/LDP/abs/html/io-redirection.html,
2>filename
# Redirect stderr to file "filename."
but if I use that I don't get every single stderr progress line in the file, whereas if I try any of the other solutions every stderr progress line is redirected to the file.
Just as a video is a series of frames, an updating percentage in the console is a series of lines. What is in the file is the true output. The difference is that a carriage return in a text file puts the following text on a new line (when viewed), whereas in the console it overwrites the current line.
If you want to see the updating percentage in the console, but not the file, you could use something like this:
curl |& tee >(sed '1b;$!d' > log)
Or:
curl |& tee /dev/tty | sed '1b;$!d' > log
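An alternative sketch with the same effect: send stderr through a process substitution that shows the raw progress on the console but writes only the final stderr line to the log, by turning the carriage returns into newlines and keeping the last line. The log path is whatever you already use for "$log":

curl ... 2> >(tee /dev/tty | tr '\r' '\n' | tail -n 1 >> "$log")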
Let's say you have 10 huge .txt files, test1.txt ... test10.txt.
Here's how you would upload them with a single cURL command and log the results without logging the progress meter. The trick is to use the --write-out or -w option; from the man page, these are all the relevant fields for uploading to FTP:
curl -T "test[1-10:1].txt" -u user:password sftp://example.com:22/home/user/ -# -k -w "%{url_effective}\t%{ftp_entry_path}\t%{http_code}\t%{http_connect}\t%{local_ip}\t%{local_port}\t%{num_connects}\t%{num_redirects}\t%{redirect_url}\t%{remote_ip}\t%{remote_port}\t%{size_download}\t%{size_header}\t%{size_request}\t%{size_upload}\t%{speed_download}\t%{speed_upload}\t%{ssl_verify_result}\t%{time_appconnect}\t%{time_connect}\t%{time_namelookup}\t%{time_pretransfer}\t%{time_redirect}\t%{time_starttransfer}\t%{time_total}\n" >> log.txt
For your log.txt file you may want to prepend the column headers first:
echo -e "url_effective\tftp_entry_path\thttp_code\thttp_connect\tlocal_ip\tlocal_port\tnum_connects\tnum_redirects\tredirect_url\tremote_ip\tremote_port\tsize_download\tsize_header\tsize_request\tsize_upload\tspeed_download\tspeed_upload\tssl_verify_result\ttime_appconnect\ttime_connect\ttime_namelookup\ttime_pretransfer\ttime_redirect\ttime_starttransfer\ttime_total" > log.txt
The -# option makes the progress bar a bit neater, like:
######################################################################## 100.0%
######################################################################## 100.0%
...and the curl -T "test[1-10:1].txt" piece lets you specify a range of files to upload.

Using gulp exec to run bash script

I am trying to use gulp exec to run a bash script with gulp that updates assets in the project. The path to the script is as follows:
./assets/update_assets.sh
I have followed the documentation and my gulpfile looks as follows:
'use strict';

var gulp = require('gulp'),
    gutil = require('gulp-util'),
    exec = require('child_process').exec;

gulp.task('update-assets', function (cb) {
  exec('./assets/update_assets.sh', function (err, stdout, stderr) {
    console.log(stdout);
    console.log(stderr);
    cb(err);
  });
});

// 'update-assets' already runs as a dependency; no need to start it again
gulp.task('default', ['update-assets']);
When I try to run gulp I get these errors:
[15:13:47] Error: Command failed: /bin/sh -c ./assets/update_assets.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 168 100 168 0 0 197 0 --:--:-- --:--:-- --:--:-- 197
tar: Unrecognized archive format
tar: Error exit delayed from previous errors.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 168 100 168 0 0 288 0 --:--:-- --:--:-- --:--:-- 288
tar: could not chdir to 'README.md/'
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 168 100 168 0 0 265 0 --:--:-- --:--:-- --:--:-- 264
tar: Unrecognized archive format
tar: Error exit delayed from previous errors.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 168 100 168 0 0 276 0 --:--:-- --:--:-- --:--:-- 276
tar: could not chdir to 'gulpfile.js/'
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 168 100 168 0 0 283 0 --:--:-- --:--:-- --:--:-- 282
tar: Unrecognized archive format
tar: Error exit delayed from previous errors.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 168 100 168 0 0 268 0 --:--:-- --:--:-- --:--:-- 268
tar: could not chdir to 'package.json/'
at ChildProcess.exithandler (child_process.js:744:12)
at ChildProcess.emit (events.js:110:17)
at maybeClose (child_process.js:1008:16)
at Socket.<anonymous> (child_process.js:1176:11)
at Socket.emit (events.js:107:17)
at Pipe.close (net.js:476:12)
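Those 168-byte downloads that tar then rejects look like HTTP error pages saved to disk rather than real archives; with its default options curl happily writes a 404 body to the output file and exits 0. A minimal sketch of a more defensive update_assets.sh, with a hypothetical URL standing in for whatever the script really fetches:

#!/bin/sh
set -e
# -f makes curl exit non-zero on HTTP errors instead of saving the error body;
# set -e then stops the script before tar ever sees a bogus file
curl -fSL -o /tmp/assets.tar.gz "https://example.com/path/to/assets.tar.gz"
tar -xzf /tmp/assets.tar.gz -C ./assets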

performance test using curl - delays

Can somebody help me analyze the output below, received when testing the performance of a certain website? Especially the lines representing the progress meter.
It appears to me that curl retries downloading the content of the page a couple of times. Am I right?
What would be the possible causes? Could it be the malformed Content-Length response header?
About to connect() to xx.example.com port 80 (#0)
Trying 12.12.12.12... connected
Connected to xx.example.com (12.12.12.12) port 80 (#0)
GET /testing/page HTTP/1.1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1)
Gecko/20100101 Firefox/4.0.1
Host: mp.example.com
Accept-Encoding: deflate, gzip
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
HTTP/1.1 200 OK
Age: 26
Date: Thu, 21 Sept 2012 15:19:48 GMT
Cache-Control: max-age=60
Xontent-Length:
Connection: Close
Via: proxy
ETag: "KNANXUSNFMBN"
Content-Type: application/json;charset=UTF-8
Vary: Accept-Encoding
[data not shown]
100 32074 0 32074 0 0 54987 0 --:--:-- --:--:-- --:--:-- 55300
100 49400 0 49400 0 0 28372 0 --:--:-- 0:00:01 --:--:-- 28423
100 52121 0 52121 0 0 20174 0 --:--:-- 0:00:02 --:--:-- 20201
100 58912 0 58912 0 0 16923 0 --:--:-- 0:00:03 --:--:-- 16938
100 58912 0 58912 0 0 13142 0 --:--:-- 0:00:04 --:--:-- 13152
100 58912 0 58912 0 0 10742 0 --:--:-- 0:00:05 --:--:-- 5476
100 58912 0 58912 0 0 9083 0 --:--:-- 0:00:06 --:--:-- 2004
100 58912 0 58912 0 0 7868 0 --:--:-- 0:00:07 --:--:-- 1384
100 58912 0 58912 0 0 6940 0 --:--:-- 0:00:08 --:--:-- 0
100 58912 0 58912 0 0 6207 0 --:--:-- 0:00:09 --:--:-- 0
100 58912 0 58912 0 0 5615 0 --:--:-- 0:00:10 --:--:-- 0
100 58912 0 58912 0 0 5125 0 --:--:-- 0:00:11 --:--:-- 0
100 58912 0 58912 0 0 4715 0 --:--:-- 0:00:12 --:--:-- 0
100 58912 0 58912 0 0 4365 0 --:--:-- 0:00:13 --:--:-- 0
100 58912 0 58912 0 0 4063 0 --:--:-- 0:00:14 --:--:-- 0
100 58912 0 58912 0 0 3801 0 --:--:-- 0:00:15 --:--:-- 0
100 58912 0 58912 0 0 3570 0 --:--:-- 0:00:16 --:--:-- 0
100 58912 0 58912 0 0 3366 0 --:--:-- 0:00:17 --:--:-- 0
100 58913 0 58913 0 0 3226 0 --:--:-- 0:00:18 --:--:-- 0
100  113k    0  113k    0     0   6067      0 --:--:--  0:00:19 --:--:-- 12387
* Closing connection #0
END - total_time: 19.094
(cumul_times - dns: 0.002 connect: 0.004 pretrans: 0.004 firstbyte: 0.006)
status: 200 size: 115856 hsize: 269 date: 16.08.2012-18:20:33 1345130433
I would appreciate all input on this.
I am troubleshooting the delays to that specific web page, and I am looking for advice on how to interpret those curl progress meter lines.
In the working scenario, where there is no delay, there is one progress meter line:
Age: 28
Date: Thu, 21 Sep 2012 15:20:46 GMT
Cache-Control: max-age=60
Content-Length: 115856
Connection: Keep-Alive
Via: proxy
ETag: "KXNFGAHSKCUY"
Content-Type: application/json;charset=UTF-8
Vary: Accept-Encoding
[data not shown]
100  113k  100  113k    0     0  6402k      0 --:--:-- --:--:-- --:--:-- 8703k
* Connection #0 to host xx.example.com left intact
* Closing connection #0
END - total_time: 0.018
(cumul_times - dns: 0.002 connect: 0.004 pretrans: 0.004 firstbyte: 0.006)
status: 200 size: 115856 hsize: 269 date: 16.08.2012-18:21:14 1345130474
My question is: what do the individual lines mean? Do they mean curl got only part of the content and kept retrying?
And what could be the cause? A slow server? Drops on the WAN connection?
Can you post the curl request you executed?
You may want to use ab or apib (available on Google Code) for benchmarking, though.
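For pinning down where the time goes on a single request, curl's --write-out timing variables are more informative than the progress meter. A minimal sketch, using the URL from the question:

curl -s -o /dev/null \
  -w "dns:%{time_namelookup} connect:%{time_connect} ttfb:%{time_starttransfer} total:%{time_total}\n" \
  http://xx.example.com/testing/page

If time_starttransfer is small but time_total is large, as in the trace above (firstbyte: 0.006 vs. total_time: 19.094), the server sent headers quickly and then stalled on the body, which fits the theory of a broken Content-Length keeping the connection open.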
