How to query a URL (DNS lookup) using the Windows terminal?

I am trying to use Windows' command line in order to query http:BL using their API (link) but can't seem to find a command to do what I want.
I thought I should use something like:
ping secretkey.7.1.1.127.dnsbl.httpbl.org
But I'm only getting:
Pinging secretkey.7.1.1.127.dnsbl.httpbl.org [127.1.1.7] with 32 bytes of data:
Reply from 127.1.1.7: bytes=32 time<1ms TTL=128
Reply from 127.1.1.7: bytes=32 time<1ms TTL=128
Reply from 127.1.1.7: bytes=32 time<1ms TTL=128
Reply from 127.1.1.7: bytes=32 time<1ms TTL=128

Ping statistics for 127.1.1.7:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms
Or
ping -a secretkey.7.1.1.127.dnsbl.httpbl.org
But neither seems to give me the desired output.
Any suggestion on which command to use?
UPDATE.
Using:
nslookup -debug secretkey.7.1.1.127.dnsbl.httpbl.org
I got:
Server:  UnKnown
Address:  192.168.1.1

------------
Got answer:
    HEADER:
        opcode = QUERY, id = 1, rcode = NXDOMAIN
        header flags:  response, want recursion, recursion avail.
        questions = 1,  answers = 0,  authority records = 1,  additional = 0

    QUESTIONS:
        1.1.168.192.in-addr.arpa, type = PTR, class = IN
    AUTHORITY RECORDS:
    ->  168.192.in-addr.arpa
        ttl = 86399 (23 hours 59 mins 59 secs)
        primary name server = dns.netvision.net.il
        responsible mail addr = hostmaster.netvision.net.il
        serial  = 2011010100
        refresh = 1800 (30 mins)
        retry   = 900 (15 mins)
        expire  = 604800 (7 days)
        default TTL = 604800 (7 days)
------------
Got answer:
    HEADER:
        opcode = QUERY, id = 2, rcode = NOERROR
        header flags:  response, want recursion, recursion avail.
        questions = 1,  answers = 1,  authority records = 3,  additional = 3

    QUESTIONS:
        secretkey.7.1.1.127.dnsbl.httpbl.org, type = A, class = IN
    ANSWERS:
    ->  secretkey.7.1.1.127.dnsbl.httpbl.org
        internet address = 127.1.1.7
        ttl = 300 (5 mins)
    AUTHORITY RECORDS:
    ->  dnsbl.httpbl.org
        nameserver = ns2.httpbl.org
        ttl = 300 (5 mins)
    ->  dnsbl.httpbl.org
        nameserver = ns3.httpbl.org
        ttl = 300 (5 mins)
    ->  dnsbl.httpbl.org
        nameserver = ns1.httpbl.org
        ttl = 300 (5 mins)
    ADDITIONAL RECORDS:
    ->  ns1.httpbl.org
        internet address = 209.124.55.46
        ttl = 300 (5 mins)
    ->  ns2.httpbl.org
        internet address = 66.114.104.118
        ttl = 300 (5 mins)
    ->  ns3.httpbl.org
        internet address = 81.17.242.92
        ttl = 300 (5 mins)
------------
Got answer:
    HEADER:
        opcode = QUERY, id = 3, rcode = NOERROR
        header flags:  response, want recursion, recursion avail.
        questions = 1,  answers = 0,  authority records = 1,  additional = 0

    QUESTIONS:
        secretkey.7.1.1.127.dnsbl.httpbl.org, type = AAAA, class = IN
    AUTHORITY RECORDS:
    ->  dnsbl.httpbl.org
        ttl = 1162 (19 mins 22 secs)
        primary name server = dnsbl.httpbl.org
        responsible mail addr = dnsadmin.projecthoneypot.org
        serial  = 1397664327
        refresh = 7200 (2 hours)
        retry   = 7200 (2 hours)
        expire  = 604800 (7 days)
        default TTL = 3600 (1 hour)
------------
Non-authoritative answer:
Name:    secretkey.7.1.1.127.dnsbl.httpbl.org
Address:  127.1.1.7

You're looking for nslookup:
nslookup -debug secretkey.7.1.1.127.dnsbl.httpbl.org
Among other information, this part is what you're looking for:
ANSWERS:
-> secretkey.7.1.1.127.dnsbl.httpbl.org
internet address = 127.1.1.7
ttl = 21236
An answer like 127.1.1.7 from a blacklist usually means a positive (the address is listed). No answer (NXDOMAIN) would be a negative.
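The answer also carries more than a yes/no: per the Project Honey Pot http:BL API, the four octets of the returned address encode the result (the first octet is always 127, the second is days since last activity, the third is a threat score, and the fourth is a visitor-type bitmask). A minimal decoder sketch in Python:

```python
# Sketch: decode an http:BL A-record answer such as 127.1.1.7.
# Octet meanings per the Project Honey Pot http:BL API:
#   127 . <days since last activity> . <threat score> . <visitor type bitmask>
# Visitor type bits: 1 = suspicious, 2 = harvester, 4 = comment spammer;
# a type of 0 marks a known search engine.

def decode_httpbl(answer: str) -> dict:
    first, days, threat, visitor_type = (int(o) for o in answer.split("."))
    if first != 127:
        raise ValueError("not an http:BL answer: %s" % answer)
    types = []
    if visitor_type & 1:
        types.append("suspicious")
    if visitor_type & 2:
        types.append("harvester")
    if visitor_type & 4:
        types.append("comment spammer")
    return {
        "days_since_last_activity": days,
        "threat_score": threat,
        "visitor_types": types or ["search engine"],
    }

# 127.1.1.7 is the documented test response: all three bits set.
print(decode_httpbl("127.1.1.7"))
```

So the test answer 127.1.1.7 decodes to "seen 1 day ago, threat score 1, suspicious + harvester + comment spammer".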

Related

GraphQL - How to add/manipulate response columns

Is there a way to pass a static variable to a GraphQL endpoint so that it is returned with the response?
In my case I'm pulling timesheets for a specific userId. Unfortunately, the userId isn't returned in the response.
Types
type Query {
    # Returns a timesheet for user id and given date.
    #
    # Arguments
    #   userId: User's id.
    #   date: Date.
    timesheet(userId: ID, date: Date!): Timesheet
}

type Timesheet {
    # Id.
    id: ID
    # Date.
    date: Date
    # Expected time based on work schedule in seconds.
    expectedTime: Int
    # Tracked time in seconds.
    trackedTime: Int
    # Time off time in seconds.
    timeOffTime: Int
    # Holiday time in seconds.
    holidayTime: Int
    # Sum of tracked, holiday, and time off time minus break time.
    totalTime: Int
    # Break time in seconds.
    breakTime: Int
}
Request Body
{"query":"{
timesheets(userId: \"10608\", dateFrom: \"2022-01-01\", dateTo: \"2022-12-31\") {
items { id date trackedTime timeOffTime holidayTime totalTime breakTime }
}
}"}
Example Response
(columns are fields of data.timesheets.items; the breakTime column was empty)

id       date        trackedTime  timeOffTime  holidayTime  totalTime  breakTime
3646982  2022-01-01  0            0            0            0
3495676  2022-01-02  18000        0            0            18000
3500068  2022-01-03  35100        0            0            35100
Desired Response

userId  id       date        trackedTime  timeOffTime  holidayTime  totalTime  breakTime
10608   3646982  2022-01-01  0            0            0            0
10608   3495676  2022-01-02  18000        0            0            18000
10608   3500068  2022-01-03  35100        0            0            35100
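Since the schema's Timesheet type has no userId field, the server can't echo it back; but the client already knows the userId it sent, so one common workaround is to stitch it into each returned item after the call. A sketch in Python (the response shape is taken from the question; assume it has already been parsed from JSON):

```python
# Sketch: merge the userId that was sent in the request into every returned
# timesheet item, since the API's Timesheet type does not include it.

def add_user_id(response: dict, user_id: str) -> list:
    """Return the timesheet items with the requesting userId attached."""
    items = response["data"]["timesheets"]["items"]
    return [{"userId": user_id, **item} for item in items]

# Example shaped like the response in the question (fields truncated):
response = {
    "data": {
        "timesheets": {
            "items": [
                {"id": "3646982", "date": "2022-01-01", "trackedTime": 0},
                {"id": "3495676", "date": "2022-01-02", "trackedTime": 18000},
            ]
        }
    }
}
rows = add_user_id(response, "10608")
print(rows[0])  # every row now carries the userId
```

This keeps the GraphQL query unchanged; the userId column exists only in the client-side result set, which matches the desired response above.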

How to configure windows for large number of concurrent connections

I have seen many tutorials on how to tune Linux to scale a Node.js or Erlang server to 600K+ concurrent connections.
But I have not found a similar tutorial for Windows. Can someone help with the equivalent knobs that exist on Windows? For reference, here is the kind of Linux tuning I mean:
/etc/security/limits.d/custom.conf
root soft nofile 1000000
root hard nofile 1000000
* soft nofile 1000000
* hard nofile 1000000
/etc/sysctl.conf
fs.file-max = 1000000
fs.nr_open = 1000000
net.ipv4.netfilter.ip_conntrack_max = 1048576
net.nf_conntrack_max = 1048576
fs.file-max
    The maximum number of file handles the kernel will allocate.
fs.nr_open
    The maximum number of file handles that can be opened by a process.
net.ipv4.netfilter.ip_conntrack_max
    How many connections the NAT can keep track of in the "tracking" table. Default: 65536.
// Increase total number of connections allowed
[HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
TcpNumConnections = 16,777,214

// Increase MaxFreeTcbs to more than the number of concurrent connections
[HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
MaxFreeTcbs = 2000 (Default = RAM dependent, but usually Pro = 1000, Srv = 2000)

// Increase the size of the hash table used for TCP lookups
[HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
MaxHashTableSize = 512 (Default = 512, Range = 64-65536)

// Reduce the TIME_WAIT delay so closed connections are recycled sooner
[HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
TcpTimedWaitDelay = 120 (Default = 240 secs, Range = 30-300)
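For convenience, the keys above can be consolidated into a single .reg file. This is only a sketch of the same legacy Tcpip\Parameters values quoted above (several of them are reportedly ignored on Vista and later, where the TCP stack autotunes), so verify against your Windows version before importing:

```
Windows Registry Editor Version 5.00

; Sketch only: the legacy TCP/IP parameters listed in the text above.
; DWORDs are hex: 0x00fffffe = 16,777,214; 0x7d0 = 2000; 0x200 = 512; 0x78 = 120.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"TcpNumConnections"=dword:00fffffe
"MaxFreeTcbs"=dword:000007d0
"MaxHashTableSize"=dword:00000200
"TcpTimedWaitDelay"=dword:00000078
```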

Trying to understand why curl calls are slow on codeigniter app

Previous note: I'm a Windows dev, so please bear with me, since this seems to be a Linux issue.
We're having some issues with a PHP app (which was built with CodeIgniter, I believe). The app is hosted on an Ubuntu 16.04 server (Apache) and I think it's using PHP 7.4.
The issue: the controllers which return the HTML shown by the browser call a couple of web services (which are hosted on a server on the same network), and these calls are slow (each takes more than 1 second to complete).
We noticed this because we installed and enabled Xdebug on both servers. For our test scenario (which involves loading 2 or 3 pages), we ended up with the following:
The main portal log file shows that curl_exec required around 32 seconds to perform around 25 calls.
The services log files show that the services only ran for about 2 seconds (to load and return the data consumed by the curl calls).
Since it looked like there was some issue with the network stack, we fired up Wireshark, and it looks like each web service call is taking more than one second to complete (so it seems to confirm the Xdebug logs that pointed to a communication issue). For instance, here's a capture of one of those calls:
It seems like the ACK for the first application data is taking more than one second (the RTT is over 1 second). This does not happen with the following ACKs (for instance, frame 122 is an ACK for 121, and in that case the RTT is about 0.0002 seconds). By the way, here's the info shown for the application-data frame that is being ACKed after 1 second:
Frame 116: 470 bytes on wire (3760 bits), 470 bytes captured (3760 bits)
Encapsulation type: Ethernet (1)
Arrival Time: Jul 7, 2020 15:46:23.036999000 GMT Daylight Time
[Time shift for this packet: 0.000000000 seconds]
Epoch Time: 1594133183.036999000 seconds
[Time delta from previous captured frame: 0.000405000 seconds]
[Time delta from previous displayed frame: 0.000405000 seconds]
[Time since reference or first frame: 3.854565000 seconds]
Frame Number: 116
Frame Length: 470 bytes (3760 bits)
Capture Length: 470 bytes (3760 bits)
[Frame is marked: False]
[Frame is ignored: False]
[Protocols in frame: eth:ethertype:ip:tcp:tls]
[Coloring Rule Name: TCP]
[Coloring Rule String: tcp]
Ethernet II, Src: Microsof_15:5a:5e (00:15:5d:15:5a:5e), Dst: Fortinet_09:03:22 (00:09:0f:09:03:22)
Destination: Fortinet_09:03:22 (00:09:0f:09:03:22)
Address: Fortinet_09:03:22 (00:09:0f:09:03:22)
.... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
.... ...0 .... .... .... .... = IG bit: Individual address (unicast)
Source: Microsof_15:5a:5e (00:15:5d:15:5a:5e)
Address: Microsof_15:5a:5e (00:15:5d:15:5a:5e)
.... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
.... ...0 .... .... .... .... = IG bit: Individual address (unicast)
Type: IPv4 (0x0800)
Internet Protocol Version 4, Src: 10.50.100.28, Dst: 10.50.110.100
0100 .... = Version: 4
.... 0101 = Header Length: 20 bytes (5)
Differentiated Services Field: 0x00 (DSCP: CS0, ECN: Not-ECT)
0000 00.. = Differentiated Services Codepoint: Default (0)
.... ..00 = Explicit Congestion Notification: Not ECN-Capable Transport (0)
Total Length: 456
Identification: 0x459c (17820)
Flags: 0x4000, Don't fragment
0... .... .... .... = Reserved bit: Not set
.1.. .... .... .... = Don't fragment: Set
..0. .... .... .... = More fragments: Not set
...0 0000 0000 0000 = Fragment offset: 0
Time to live: 64
Protocol: TCP (6)
Header checksum: 0x0cb0 [validation disabled]
[Header checksum status: Unverified]
Source: 10.50.100.28
Destination: 10.50.110.100
Transmission Control Protocol, Src Port: 34588, Dst Port: 443, Seq: 644, Ack: 5359, Len: 404
Source Port: 34588
Destination Port: 443
[Stream index: 5]
[TCP Segment Len: 404]
Sequence number: 644 (relative sequence number)
[Next sequence number: 1048 (relative sequence number)]
Acknowledgment number: 5359 (relative ack number)
1000 .... = Header Length: 32 bytes (8)
Flags: 0x018 (PSH, ACK)
000. .... .... = Reserved: Not set
...0 .... .... = Nonce: Not set
.... 0... .... = Congestion Window Reduced (CWR): Not set
.... .0.. .... = ECN-Echo: Not set
.... ..0. .... = Urgent: Not set
.... ...1 .... = Acknowledgment: Set
.... .... 1... = Push: Set
.... .... .0.. = Reset: Not set
.... .... ..0. = Syn: Not set
.... .... ...0 = Fin: Not set
[TCP Flags: ·······AP···]
Window size value: 319
[Calculated window size: 40832]
[Window size scaling factor: 128]
Checksum: 0x8850 [unverified]
[Checksum Status: Unverified]
Urgent pointer: 0
Options: (12 bytes), No-Operation (NOP), No-Operation (NOP), Timestamps
TCP Option - No-Operation (NOP)
Kind: No-Operation (1)
TCP Option - No-Operation (NOP)
Kind: No-Operation (1)
TCP Option - Timestamps: TSval 1446266633, TSecr 1807771224
Kind: Time Stamp Option (8)
Length: 10
Timestamp value: 1446266633
Timestamp echo reply: 1807771224
[SEQ/ACK analysis]
[This is an ACK to the segment in frame: 115]
[The RTT to ACK the segment was: 0.000405000 seconds]
[iRTT: 0.000474000 seconds]
[Bytes in flight: 404]
[Bytes sent since last PSH flag: 404]
[Timestamps]
[Time since first frame in this TCP stream: 0.010560000 seconds]
[Time since previous frame in this TCP stream: 0.000405000 seconds]
TCP payload (404 bytes)
Transport Layer Security
TLSv1.2 Record Layer: Application Data Protocol: http-over-tls
Content Type: Application Data (23)
Version: TLS 1.2 (0x0303)
Length: 399
Encrypted Application Data: 6611c266b7d32e17367b99607d0a0607f61149d15bcb135d…
Any tips on what's going on?
Thanks.

Why hping3 rtt >> sockperf latency

I tried running TCP hping3 on a Linux VM in the same network and got an average RTT of ~5 ms:
sudo hping3 -S -p 22 10.1.0.8 -c 100
...
len=44 ip=10.1.0.8 ttl=64 DF id=0 sport=22 flags=SA seq=98 win=29200 rtt=0.6 ms
len=44 ip=10.1.0.8 ttl=64 DF id=0 sport=22 flags=SA seq=99 win=29200 rtt=1.4 ms
--- 10.1.0.8 hping statistic ---
100 packets transmitted, 100 packets received, 0% packet loss
round-trip min/avg/max = 0.6/5.2/9.7 ms
If I measure latency using the sockperf tool, the average latency is ~0.5 ms
(sockperf.bin ping-pong -i 10.1.0.8 -p 8302 -t 15 --pps=max)
sockperf output:
====> avg-lat=495.943 (std-dev=484.312)
sockperf: # dropped messages = 0; # duplicated messages = 0; # out-of-order messages = 0
sockperf: Summary: Latency is 495.943 usec
sockperf: Total 15119 observations; each percentile contains 151.19 observations
sockperf: ---> <MAX> observation = 6839.398
sockperf: ---> percentile 99.999 = 6839.398
sockperf: ---> percentile 99.990 = 5292.623
sockperf: ---> percentile 99.900 = 4023.327
sockperf: ---> percentile 99.000 = 2434.115
sockperf: ---> percentile 90.000 = 1005.612
sockperf: ---> percentile 75.000 = 638.746
sockperf: ---> percentile 50.000 = 360.516
sockperf: ---> percentile 25.000 = 178.134
sockperf: ---> <MIN> observation = 45.356
I'd like to know the reason for such a large difference between the latencies reported by these two tools. Do hping3's TCP RTT and sockperf's TCP latency measure the same thing internally?
Am I doing anything wrong here?
To verify, I also tried measuring TCP latency between two Windows VMs in the same network using the psping tool:
Connecting to 10.1.0.7:8888: from 10.1.0.6:62312: 1.03ms
TCP connect statistics for 10.1.0.7:8888:
Sent = 100, Received = 100, Lost = 0 (0% loss),
Minimum = 0.59ms, Maximum = 9.82ms, Average = 1.05ms
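One thing worth noting when comparing the numbers: the two tools time different events. hping3 -S times the SYN/SYN-ACK handshake (note the flags=SA replies above), which the target's kernel answers directly, while sockperf's ping-pong mode times a full userspace round trip (send, echo, receive). A minimal Python sketch of the ping-pong style of measurement over loopback, to illustrate what that kind of number includes (loopback port and sample count are arbitrary choices for the example):

```python
# Sketch: an application-level ping-pong latency measurement over loopback,
# similar in spirit to sockperf's ping-pong mode (userspace send -> echo ->
# receive), as opposed to hping3 -S, which times the kernel's SYN/SYN-ACK.
import socket
import statistics
import threading
import time

def echo_server(listener: socket.socket) -> None:
    # Accept one connection and echo everything back until it closes.
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
# Disable Nagle so small payloads aren't delayed waiting for more data.
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

samples = []
for _ in range(200):
    start = time.perf_counter()
    client.sendall(b"x")
    client.recv(64)
    samples.append((time.perf_counter() - start) * 1e6)  # microseconds
client.close()

print("median rtt: %.1f usec" % statistics.median(samples))
```

Because this path crosses userspace twice per round trip, it is sensitive to scheduling, whereas a SYN/SYN-ACK probe is not, so the two kinds of RTT are not directly comparable.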

Golang Server Timeout

I have a very simple go server:
package main

import (
	"fmt"
	"log"
	"net/http"
)

func test(w http.ResponseWriter, r *http.Request) {
	fmt.Println("No bid")
	http.Error(w, "NoBid", 204)
}

func main() {
	http.HandleFunc("/test/bid", test)
	http.ListenAndServe(":8080", nil)
	log.Println("Done serving")
}
I then run the apache benchmark tool:
ab -c 50 -n 50000 -p post.txt http://127.0.0.1:8080/test/bid
The server runs and responds to about 15,000 requests and then times out. I was wondering why this happens and whether there is something I can do about it.
If you're running on Linux, there may be too many open files, so new connections can't be created. You need to change the system configuration to support more connections.
For example, edit /etc/security/limits.conf and add:
* soft nofile 100000
* hard nofile 100000
to allow more open files.
edit /etc/sysctl.conf
# use more port
net.ipv4.ip_local_port_range = 1024 65000
# keep alive timeout
net.ipv4.tcp_keepalive_time = 300
# allow reuse
net.ipv4.tcp_tw_reuse = 1
# quick recovery
net.ipv4.tcp_tw_recycle = 1
I tried to replicate your problem on my Linux amd64 laptop with no success - it worked fine even with
ab -c 200 -n 500000 -p post.txt http://127.0.0.1:8080/test/bid
There were about 28,000 sockets open though, which may be bumping into a limit on your system.
A more real-world test might be to turn keep-alives on, which maxes out at about 400 sockets:
ab -k -c 200 -n 500000 -p post.txt http://127.0.0.1:8080/test/bid
The result for this was:
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8080
Document Path: /test/bid
Document Length: 6 bytes
Concurrency Level: 200
Time taken for tests: 33.807 seconds
Complete requests: 500000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 500000
Total transferred: 77000000 bytes
Total body sent: 221500000
HTML transferred: 3000000 bytes
Requests per second: 14790.04 [#/sec] (mean)
Time per request: 13.523 [ms] (mean)
Time per request: 0.068 [ms] (mean, across all concurrent requests)
Transfer rate: 2224.28 [Kbytes/sec] received
6398.43 kb/s sent
8622.71 kb/s total
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 11
Processing: 0 14 5.2 13 42
Waiting: 0 14 5.2 13 42
Total: 0 14 5.2 13 42
Percentage of the requests served within a certain time (ms)
50% 13
66% 16
75% 17
80% 18
90% 20
95% 21
98% 24
99% 27
100% 42 (longest request)
I suggest you try ab with -k, and take a look at tuning your system for lots of open sockets.
