I have a template file in which I'd like to execute some simple code (I have an endpoint that returns some JSON revealing other server details relevant to the template). I've added the following code (values omitted where relevant):
<% require 'open3'
url = 'https://a.valid.address.com'
path = '/nodeStatuses'
port = '18091'
username = 'admin'
password = "#{@template_password}"
Open3.popen3("curl -m 10 -X GET --noproxy '*' -vvvv --cacert /etc/pki/ca-trust/source/anchors/RootCA.crt -k -u #{username}:#{password} #{url}:#{port}#{path}") do |stdin, stdout, stderr, thread|
  pid = thread.pid
  stdin.close
  @stdout = stdout.read.chomp
  @stderr = stderr.read.chomp
end %>
stdout: <%= @stdout %>
stderr: <%= @stderr %>
Strangely all my templates are filled with timeouts:
stderr: % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
^M 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* About to connect() to a.valid.address.com port 18091 (#0)
* Trying 10.10.10.10...
^M 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0^M 0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0^M 0 0 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0^M 0 0 0 0 0 0 0 0 --:--:-- 0:00:04 --:--:-- 0^M 0 0 0 0 0 0 0 0 --:--:-- 0:00:05 --:--:-- 0^M 0 0 0 0 0 0 0 0 --:--:-- 0:00:06 --:--:-- 0^M 0 0 0 0 0 0 0 0 --:--:-- 0:00:07 --:--:-- 0^M 0 0 0 0 0 0 0 0 --:--:-- 0:00:08 --:--:-- 0^M 0 0 0 0 0 0 0 0 --:--:-- 0:00:09 --:--:-- 0* Connection timed out after 10001 milliseconds
^M 0 0 0 0 0 0 0 0 --:--:-- 0:00:10 --:--:-- 0
* Closing connection 0
curl: (28) Connection timed out after 10001 milliseconds
My immediate thought was that the URL is offline or that there's something wrong with the curl command, but running the same command from the command line yields results:
curl -I -s -m 10 --cacert /etc/pki/ca-trust/source/anchors/RootCA.crt -k -u admin:secret https://a.valid.address.com:18091/nodeStatuses
HTTP/1.1 200 OK
X-XSS-Protection: 1; mode=block
X-Permitted-Cross-Domain-Policies: none
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Server: Couchbase Server
Pragma: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Date: Fri, 30 Sep 2022 09:00:46 GMT
Content-Type: application/json
Content-Length: 685
Cache-Control: no-cache,no-store,must-revalidate
Then I thought it might be a limitation of ERB; nope, I get a response from another URL (e.g. stackoverflow.com).
Really looking for some clues here. Any help would be very much appreciated.
It turned out the template was being rendered on the puppetmaster and not locally on the server (this is expected behaviour). In the end I changed the logic and made the command executed on the puppetmaster return a list of hosts to populate the template file. This isn't quite what I'd intended originally, but the end result is the same, or similar.
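For anyone debugging the same thing: a quick way to confirm where the template is rendered is to run the same request from the puppetmaster itself. A minimal sketch, reusing the placeholder host, port and credentials from the question:

# Run this on the puppetmaster, not the agent node. If it also times out here,
# the master simply has no route to port 18091, which explains the ERB result.
curl -s -m 10 -o /dev/null -w '%{http_code}\n' \
  --cacert /etc/pki/ca-trust/source/anchors/RootCA.crt -k \
  -u admin:secret https://a.valid.address.com:18091/nodeStatuses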
Related
[!] /usr/bin/curl -f -L -o /var/folders/1x/jmv798095x1fbjc_6128mflh0000gn/T/d20170314-59599-7fjizg/file.zip https://github.com/realm/SwiftLint/releases/download/0.16.1/portable_swiftlint.zip --create-dirs --netrc
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   599    0   599    0     0    166      0 --:--:--  0:00:03 --:--:--   166
  0     0    0     0    0     0      0      0 --:--:--  0:01:20 --:--:--     0
curl: (7) Failed to connect to github-cloud.s3.amazonaws.com port 443: Operation timed out
This is the error I get when I execute pod install. My network is behind a SOCKS5 proxy. I can download https://github.com/realm/SwiftLint/releases/download/0.16.1/portable_swiftlint.zip easily on its own. But how can I make pod install succeed?
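One hedged suggestion, since the failing download above is performed by /usr/bin/curl: curl honours the ALL_PROXY environment variable and accepts a socks5:// scheme there, so exporting it before running pod install may route the download through your proxy. The address below is an assumption; substitute your actual SOCKS5 endpoint:

# Assumed local SOCKS5 proxy on 127.0.0.1:1080; adjust host and port to yours.
export ALL_PROXY=socks5://127.0.0.1:1080
pod install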
Given this file:
Variable_name Value
Aborted_clients 0
Aborted_connects 4
Binlog_cache_disk_use 0
Binlog_cache_use 0
Binlog_stmt_cache_disk_use 0
Binlog_stmt_cache_use 0
Bytes_received 141
Bytes_sent 177
Com_admin_commands 0
Com_assign_to_keycache 0
Com_alter_db 0
Com_alter_db_upgrade 0
Com_alter_event 0
Com_alter_function 0
Com_alter_procedure 0
Com_alter_server 0
Com_alter_table 0
Com_alter_tablespace 0
Com_analyze 0
Com_begin 0
Com_binlog 0
Com_call_procedure 0
Com_change_db 0
Com_change_master 0
Com_check 0
Com_checksum 0
Com_commit 0
Com_create_db 0
Com_create_event 0
Com_create_function 0
Com_create_index 0
Com_create_procedure 0
Com_create_server 0
Com_create_table 0
Com_create_trigger 0
Com_create_udf 0
Com_create_user 0
Com_create_view 0
Com_dealloc_sql 0
Com_delete 0
Com_delete_multi 0
Com_do 0
Com_drop_db 0
Com_drop_event 0
Com_drop_function 0
Com_drop_index 0
Com_drop_procedure 0
Com_drop_server 0
Com_drop_table 0
Com_drop_trigger 0
Com_drop_user 0
Com_drop_view 0
Com_empty_query 0
Com_execute_sql 0
Com_flush 0
Com_grant 0
Com_ha_close 0
Com_ha_open 0
Com_ha_read 0
Com_help 0
Com_insert 0
Com_insert_select 0
Com_install_plugin 0
Com_kill 0
Com_load 0
Com_lock_tables 0
Com_optimize 0
Com_preload_keys 0
Com_prepare_sql 0
Com_purge 0
Com_purge_before_date 0
Com_release_savepoint 0
Com_rename_table 0
Com_rename_user 0
Com_repair 0
Com_replace 0
Com_replace_select 0
Com_reset 0
Com_resignal 0
Com_revoke 0
Com_revoke_all 0
Com_rollback 0
Com_rollback_to_savepoint 0
Com_savepoint 0
Com_select 1
Com_set_option 0
Com_signal 0
Com_show_authors 0
Com_show_binlog_events 0
Com_show_binlogs 0
Com_show_charsets 0
Com_show_collations 0
Com_show_contributors 0
Com_show_create_db 0
Com_show_create_event 0
Com_show_create_func 0
Com_show_create_proc 0
Com_show_create_table 0
Com_show_create_trigger 0
Com_show_databases 0
Com_show_engine_logs 0
Com_show_engine_mutex 0
Com_show_engine_status 0
Com_show_events 0
Com_show_errors 0
Com_show_fields 0
Com_show_function_status 0
Com_show_grants 0
Com_show_keys 0
Com_show_master_status 0
Com_show_open_tables 0
Com_show_plugins 0
Com_show_privileges 0
Com_show_procedure_status 0
Com_show_processlist 0
Com_show_profile 0
Com_show_profiles 0
Com_show_relaylog_events 0
Com_show_slave_hosts 0
Com_show_slave_status 0
Com_show_status 1
Com_show_storage_engines 0
Com_show_table_status 0
Com_show_tables 0
Com_show_triggers 0
Com_show_variables 0
Com_show_warnings 0
Com_slave_start 0
Com_slave_stop 0
Com_stmt_close 0
Com_stmt_execute 0
Com_stmt_fetch 0
Com_stmt_prepare 0
Com_stmt_reprepare 0
Com_stmt_reset 0
Com_stmt_send_long_data 0
Com_truncate 0
Com_uninstall_plugin 0
Com_unlock_tables 0
Com_update 0
Com_update_multi 0
Com_xa_commit 0
Com_xa_end 0
Com_xa_prepare 0
Com_xa_recover 0
Com_xa_rollback 0
Com_xa_start 0
Compression OFF
Connections 375
Created_tmp_disk_tables 0
Created_tmp_files 6
Created_tmp_tables 0
Delayed_errors 0
Delayed_insert_threads 0
Delayed_writes 0
Flush_commands 1
Handler_commit 0
Handler_delete 0
Handler_discover 0
Handler_prepare 0
Handler_read_first 0
Handler_read_key 0
Handler_read_last 0
Handler_read_next 0
Handler_read_prev 0
Handler_read_rnd 0
Handler_read_rnd_next 0
Handler_rollback 0
Handler_savepoint 0
Handler_savepoint_rollback 0
Handler_update 0
Handler_write 0
Innodb_buffer_pool_pages_data 584
Innodb_buffer_pool_bytes_data 9568256
Innodb_buffer_pool_pages_dirty 0
Innodb_buffer_pool_bytes_dirty 0
Innodb_buffer_pool_pages_flushed 120
Innodb_buffer_pool_pages_free 7607
Innodb_buffer_pool_pages_misc 0
Innodb_buffer_pool_pages_total 8191
Innodb_buffer_pool_read_ahead_rnd 0
Innodb_buffer_pool_read_ahead 0
Innodb_buffer_pool_read_ahead_evicted 0
Innodb_buffer_pool_read_requests 14912
Innodb_buffer_pool_reads 584
Innodb_buffer_pool_wait_free 0
Innodb_buffer_pool_write_requests 203
Innodb_data_fsyncs 163
Innodb_data_pending_fsyncs 0
Innodb_data_pending_reads 0
Innodb_data_pending_writes 0
Innodb_data_read 11751424
Innodb_data_reads 594
Innodb_data_writes 243
Innodb_data_written 3988480
Innodb_dblwr_pages_written 120
Innodb_dblwr_writes 40
Innodb_have_atomic_builtins ON
Innodb_log_waits 0
Innodb_log_write_requests 28
Innodb_log_writes 41
Innodb_os_log_fsyncs 83
Innodb_os_log_pending_fsyncs 0
Innodb_os_log_pending_writes 0
Innodb_os_log_written 34816
Innodb_page_size 16384
Innodb_pages_created 1
Innodb_pages_read 583
Innodb_pages_written 120
Innodb_row_lock_current_waits 0
Innodb_row_lock_time 0
Innodb_row_lock_time_avg 0
Innodb_row_lock_time_max 0
Innodb_row_lock_waits 0
Innodb_rows_deleted 0
Innodb_rows_inserted 0
Innodb_rows_read 40
Innodb_rows_updated 39
Innodb_truncated_status_writes 0
Key_blocks_not_flushed 0
Key_blocks_unused 13396
Key_blocks_used 0
Key_read_requests 0
Key_reads 0
Key_write_requests 0
Key_writes 0
Last_query_cost 0.000000
Max_used_connections 3
Not_flushed_delayed_rows 0
Open_files 86
Open_streams 0
Open_table_definitions 109
Open_tables 109
Opened_files 439
Opened_table_definitions 0
Opened_tables 0
Performance_schema_cond_classes_lost 0
Performance_schema_cond_instances_lost 0
Performance_schema_file_classes_lost 0
Performance_schema_file_handles_lost 0
Performance_schema_file_instances_lost 0
Performance_schema_locker_lost 0
Performance_schema_mutex_classes_lost 0
Performance_schema_mutex_instances_lost 0
Performance_schema_rwlock_classes_lost 0
Performance_schema_rwlock_instances_lost 0
Performance_schema_table_handles_lost 0
Performance_schema_table_instances_lost 0
Performance_schema_thread_classes_lost 0
Performance_schema_thread_instances_lost 0
Prepared_stmt_count 0
Qcache_free_blocks 1
Qcache_free_memory 16758160
Qcache_hits 0
Qcache_inserts 1
Qcache_lowmem_prunes 0
Qcache_not_cached 419
Qcache_queries_in_cache 1
Qcache_total_blocks 4
Queries 1146
Questions 2
Rpl_status AUTH_MASTER
Select_full_join 0
Select_full_range_join 0
Select_range 0
Select_range_check 0
Select_scan 0
Slave_heartbeat_period 0.000
Slave_open_temp_tables 0
Slave_received_heartbeats 0
Slave_retried_transactions 0
Slave_running OFF
Slow_launch_threads 0
Slow_queries 0
Sort_merge_passes 0
Sort_range 0
Sort_rows 0
Sort_scan 0
Ssl_accept_renegotiates 0
Ssl_accepts 0
Ssl_callback_cache_hits 0
Ssl_cipher
Ssl_cipher_list
Ssl_client_connects 0
Ssl_connect_renegotiates 0
Ssl_ctx_verify_depth 0
Ssl_ctx_verify_mode 0
Ssl_default_timeout 0
Ssl_finished_accepts 0
Ssl_finished_connects 0
Ssl_session_cache_hits 0
Ssl_session_cache_misses 0
Ssl_session_cache_mode NONE
Ssl_session_cache_overflows 0
Ssl_session_cache_size 0
Ssl_session_cache_timeouts 0
Ssl_sessions_reused 0
Ssl_used_session_cache_entries 0
Ssl_verify_depth 0
Ssl_verify_mode 0
Ssl_version
Table_locks_immediate 123
Table_locks_waited 0
Tc_log_max_pages_used 0
Tc_log_page_size 0
Tc_log_page_waits 0
Threads_cached 1
Threads_connected 2
Threads_created 3
Threads_running 1
Uptime 2389
Uptime_since_flush_status 2389
How would one use awk to calculate queries per second (Queries/Uptime), i.e.
1146/2389
and print the result?
I'm grepping two values from a list of results and need to calculate items per second, where 302 is the total item count and 503 is the total uptime count.
At this moment I'm doing
grep -Ew "Queries|Uptime" | awk '{print $2}'
to print out:
302
503
But here I got stuck.
You can use something like:
$ awk '/Queries/ {q=$2} /Uptime/ {print q/$2}' file
0.600398
That is: when the line contains the string "Queries", store its value. When it contains "Uptime", print the stored queries value divided by the uptime value.
This assumes the string "Queries" appears before the string "Uptime".
Given your updated input, we need to check that the first field is exactly "Queries" or "Uptime", so that lines such as Uptime_since_flush_status do not also match:
$ awk '$1 == "Queries" {q=$2} $1=="Uptime" {print q/$2}' file
0.479699
I think the following awk one-liner will help you:
kent$ cat f
Queries 302
Uptime 503
LsyHP 13:42:57 /tmp/test
kent$ awk '{a[NR]=$NF}END{printf "%.2f\n",a[NR-1]/a[NR]}' f
0.60
If you want to fold the "grep" step into awk as well:
kent$ awk '/Queries/{a=$NF}/Uptime/{b=$NF}END{printf "%.2f\n",a/b}' f
0.60
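For completeness, a sketch that skips the intermediate file entirely, assuming the mysql client is available and your account may run SHOW GLOBAL STATUS (the exact-match test avoids Uptime_since_flush_status):

# -N suppresses the header row; fields arrive tab-separated, which awk splits fine.
mysql -N -e "SHOW GLOBAL STATUS" | awk '$1=="Queries"{q=$2} $1=="Uptime"{print q/$2}'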
My shell script uploads files to a server. I'd like stdout and stderr to be written to both a file and the console, BUT I don't want the progress bar/percentage, which is stderr, to go to the file. I only want curl errors written to the file.
Initially I had this
curl ... 2>> "$log"
This wrote one or more nice, neat lines of the download to the log file, but nothing to the console.
I then changed it to
curl ... 3>&1 1>&2 2>&3 | tee -a "$log"
This wrote to both console and file, yay! except it wrote the whole progress for each percentage to the file, making the log file very large and tedious to read.
How can I view the progress in the console, but only write the last part of the output to file?
I want this
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
106  1166    0     0  106  1166      0   2514 --:--:-- --:--:-- --:--:--     2
106  1166    0     0  106  1166      0    795  0:00:01  0:00:01 --:--:--     0
106  1166    0     0  106  1166      0    660  0:00:01  0:00:01 --:--:--     0
This is what I get with the second curl redirect
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 9.8G 0 0 0 16384 0 24764 4d 22h --:--:-- 4d 22h 24764
0 9.8G 0 0 0 3616k 0 4098k 0:41:54 --:--:-- 0:41:54 15.9M
0 9.8G 0 0 0 24.2M 0 12.9M 0:12:59 0:00:01 0:12:58 19.8M
0 9.8G 0 0 0 50.4M 0 17.5M 0:09:34 0:00:02 0:09:32 22.7M
0 9.8G 0 0 0 79.8M 0 20.5M 0:08:09 0:00:03 0:08:06 24.7M
1 9.8G 0 0 1 101M 0 20.7M 0:08:04 0:00:04 0:08:00 24.0M
1 9.8G 0 0 1 129M 0 21.9M 0:07:37 0:00:05 0:07:32 25.1M
1 9.8G 0 0 1 150M 0 21.8M 0:07:41 0:00:06 0:07:35 25.1M
1 9.8G 0 0 1 169M 0 21.5M 0:07:47 0:00:07 0:07:40 23.8M
1 9.8G 0 0 1 195M 0 21.9M 0:07:38 0:00:08 0:07:30 23.0M
2 9.8G 0 0 2 219M 0 22.1M 0:07:33 0:00:09 0:07:24 23.5M
2 9.8G 0 0 2 243M 0 22.4M 0:07:29 0:00:10 0:07:19 22.9M
2 9.8G 0 0 2 273M 0 22.9M 0:07:17 0:00:11 0:07:06 24.6M
..
.. hundreds of lines...
..
99 9.8G 0 0 99 9982M 0 24.8M 0:06:45 0:06:41 0:00:04 24.5M
99 9.8G 0 0 99 9.7G 0 24.8M 0:06:44 0:06:42 0:00:02 24.9M
99 9.8G 0 0 99 9.8G 0 24.8M 0:06:44 0:06:43 0:00:01 26.0M
100 9.8G 0 0 100 9.8G 0 24.8M 0:06:44 0:06:44 --:--:-- 25.8M
Edit:
What I don't understand is that, according to http://www.tldp.org/LDP/abs/html/io-redirection.html,
2>filename
# Redirect stderr to file "filename."
but if I use that, I don't get every single stderr progress line in the file, whereas with any of the other solutions every stderr progress line is redirected to the file.
Just as a video is a series of frames, an updating percentage in the console is a series of lines. What is in the file is the true output. The difference is that in the console a carriage return moves the cursor back to the start of the line, so the next update overwrites it, whereas when the file is displayed each update appears on its own line.
If you want to see the updating percentage in the console, but not the file, you could use something like this:
curl |& tee >(sed '1b;$!d' > log)
Or:
curl |& tee /dev/tty | sed '1b;$!d' > log
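A note on portability: |& is bash shorthand for 2>&1 |, so on shells without it the same idea reads as follows (the URL and log name are placeholders):

# Progress goes to the terminal via /dev/tty; sed keeps only the header and final line.
curl -O https://example.com/big.file 2>&1 | tee /dev/tty | sed '1b;$!d' > log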
Let's say you have 10 huge .txt files test1.txt ... test10.txt
Here's how you would upload them with a single cURL command and log the results without logging the progress meter. The trick is to use the --write-out or -w option; from the man page, these are all the relevant fields for uploading to FTP:
curl -T "test[1-10:1].txt" -u user:password sftp://example.com:22/home/user/ -# -k -w "%{url_effective}\t%{ftp_entry_path}\t%{http_code}\t%{http_connect}\t%{local_ip}\t%{local_port}\t%{num_connects}\t%{num_redirects}\t%{redirect_url}\t%{remote_ip}\t%{remote_port}\t%{size_download}\t%{size_header}\t%{size_request}\t%{size_upload}\t%{speed_download}\t%{speed_upload}\t%{ssl_verify_result}\t%{time_appconnect}\t%{time_connect}\t%{time_namelookup}\t%{time_pretransfer}\t%{time_redirect}\t%{time_starttransfer}\t%{time_total}\n" >> log.txt
For your log.txt file you may want to prepend the column headers first:
echo -e "url_effective\tftp_entry_path\thttp_code\thttp_connect\tlocal_ip\tlocal_port\tnum_connects\tnum_redirects\tredirect_url\tremote_ip\tremote_port\tsize_download\tsize_header\tsize_request\tsize_upload\tspeed_download\tspeed_upload\tssl_verify_result\ttime_appconnect\ttime_connect\ttime_namelookup\ttime_pretransfer\ttime_redirect\ttime_starttransfer\ttime_total" > log.txt
The -# makes the progress bar a bit neater like:
######################################################################## 100.0%
######################################################################## 100.0%
...and the curl -T "test[1-10:1].txt" piece lets you specify a range of files to upload.
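For a quick single-file test, a trimmed-down sketch of the same idea with just a few of those variables (host and credentials are placeholders):

# Appends one tab-separated summary line per transfer instead of the progress meter.
curl -T test1.txt -u user:password sftp://example.com:22/home/user/ -# -k \
  -w "%{http_code}\t%{size_upload}\t%{speed_upload}\t%{time_total}\n" >> log.txt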
I want to make a bar graph from the attached data in RStudio. I want to show which IP used which protocol, and how many times.
Protocol
Source DNS FTP HTTP IMF LLC SMTP TCP TELNET
172.16.112.100 306 0 0 0 0 0 0 0
172.16.112.50 0 0 0 0 0 0 0 24
172.16.113.168 0 0 0 0 0 0 0 15
172.16.113.204 1 0 0 0 0 0 0 0
172.16.114.50 1 0 0 0 0 0 0 0
172.16.115.20 158 0 0 0 0 0 0 2
192.168.1.20 3 0 0 0 0 0 0 0
194.7.248.153 0 0 0 0 0 0 0 2
197.218.177.69 0 0 0 0 0 0 0 0
HP_ed:9b:2d 0 0 0 0 0 0 0 0
Simple way to build one plot for each IP:
After loading data as a data.frame with colnames and rownames.
# One panel per IP; a 4x3 grid is enough for the ten rows.
par(mfrow=c(4,3))
for (i in 1:nrow(data)) {
  barplot(as.numeric(data[i,]), main=rownames(data)[i], names.arg=colnames(data))
}
Your data is very sparse, which makes the graphs have only one bar, or none at all. If you want stacked or grouped bars you should have a look at the packages ggplot2 or lattice.
Can somebody help with analysing the output below, received when testing the performance of a certain website? Especially the lines representing the progress meter.
It appears to me that curl retries downloading the content of the page a couple of times. Am I right?
What would be the possible causes? Could it be the malformed Content-Length response header?
About to connect() to xx.example.com port 80 (#0)
Trying 12.12.12.12... connected
Connected to xx.example.com (12.12.12.12) port 80 (#0)
GET /testing/page HTTP/1.1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1
Host: mp.example.com
Accept-Encoding: deflate, gzip
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
HTTP/1.1 200 OK
Age: 26
Date: Thu, 21 Sept 2012 15:19:48 GMT
Cache-Control: max-age=60
Xontent-Length:
Connection: Close
Via: proxy
ETag: "KNANXUSNFMBN"
Content-Type: application/json;charset=UTF-8
Vary: Accept-Encoding
[data not shown]
100 32074 0 32074 0 0 54987 0 --:--:-- --:--:-- --:--:-- 55300
100 49400 0 49400 0 0 28372 0 --:--:-- 0:00:01 --:--:-- 28423
100 52121 0 52121 0 0 20174 0 --:--:-- 0:00:02 --:--:-- 20201
100 58912 0 58912 0 0 16923 0 --:--:-- 0:00:03 --:--:-- 16938
100 58912 0 58912 0 0 13142 0 --:--:-- 0:00:04 --:--:-- 13152
100 58912 0 58912 0 0 10742 0 --:--:-- 0:00:05 --:--:-- 5476
100 58912 0 58912 0 0 9083 0 --:--:-- 0:00:06 --:--:-- 2004
100 58912 0 58912 0 0 7868 0 --:--:-- 0:00:07 --:--:-- 1384
100 58912 0 58912 0 0 6940 0 --:--:-- 0:00:08 --:--:-- 0
100 58912 0 58912 0 0 6207 0 --:--:-- 0:00:09 --:--:-- 0
100 58912 0 58912 0 0 5615 0 --:--:-- 0:00:10 --:--:-- 0
100 58912 0 58912 0 0 5125 0 --:--:-- 0:00:11 --:--:-- 0
100 58912 0 58912 0 0 4715 0 --:--:-- 0:00:12 --:--:-- 0
100 58912 0 58912 0 0 4365 0 --:--:-- 0:00:13 --:--:-- 0
100 58912 0 58912 0 0 4063 0 --:--:-- 0:00:14 --:--:-- 0
100 58912 0 58912 0 0 3801 0 --:--:-- 0:00:15 --:--:-- 0
100 58912 0 58912 0 0 3570 0 --:--:-- 0:00:16 --:--:-- 0
100 58912 0 58912 0 0 3366 0 --:--:-- 0:00:17 --:--:-- 0
100 58913 0 58913 0 0 3226 0 --:--:-- 0:00:18 --:--:-- 0
100 113k 0 113k 0 0 6067 0 --:--:-- 0:00:19 --:--:-- 12387*
Closing connection #0
END - total_time: 19.094
(cumul_times - dns: 0.002 connect: 0.004 pretrans: 0.004 firstbyte: 0.006)
status: 200 size: 115856 hsize: 269 date: 16.08.2012-18:20:33 1345130433
I would appreciate all input on this.
I am troubleshooting delays to that specific web page, and I am looking for advice on how to interpret those curl progress meter lines.
In the working scenario, where there is no delay, there is one progress meter line:
Age: 28
Date: Thu, 21 Sep 2012 15:20:46 GMT
Cache-Control: max-age=60
Content-Length: 115856
Connection: Keep-Alive
Via: proxy
ETag: "KXNFGAHSKCUY"
Content-Type: application/json;charset=UTF-8
Vary: Accept-Encoding
[data not shown]
100 113k 100 113k 0 0 6402k 0 --:--:-- --:--:-- --:--:-- 8703k*
Connection #0 to host xx.example.com left intact
Closing connection #0
END - **total_time: 0.018**
(cumul_times - dns: 0.002 connect: 0.004 pretrans: 0.004 firstbyte: 0.006)
status: 200 size: 115856 hsize: 269 date: 16.08.2012-18:21:14 1345130474
My question is: what do the individual lines mean? Do they mean that curl got only part of the content and kept retrying?
And what could be the cause? A slow server? Drops on the WAN connection?
Can you post the curl request you executed?
You may also want to use ab or apib (available on Google Code) for benchmarking.
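For instance, a minimal ab invocation against the page from the question (request count and concurrency are arbitrary; adjust to taste):

# 100 requests, 10 concurrent; ab reports latency percentiles and failed requests.
ab -n 100 -c 10 http://xx.example.com/testing/page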