Apache Ignite REST API - caching

I am using Apache Ignite 2.8.0.
I have developed a small dashboard that is used to monitor the performance of Ignite.
Now my problem is finding the number of servers.
First I find the total number of nodes (in the node variable), then:
total_servers = 0
port = 8080
for j in range(0, node + 1):
    if persistence == True:
        url_cache = "http://localhost:" + str(port) + "/ignite?cmd=top&sessionToken=" + sessionToken
    else:
        url_cache = "http://localhost:" + str(port) + "/ignite?cmd=top"
    try:
        print(j)
        try:
            res = requests.get(url=url_cache)
            print(res.status_code)
            if res.status_code == 200:
                total_servers = total_servers + 1
        except:
            pass
    except:
        pass
    port = port + 1
But this takes too much time, and I don't want that.
Is there a simpler way to find the number of servers running in Apache Ignite with a single REST API HTTP request?

From REST you can run the SQL command SELECT * FROM SYS.NODES; to determine that:
~/Downloads/apache-ignite-2.8.1-bin% wget -q -O- http://localhost:8080/ignite\?cmd=qryfldexe\&pageSize\=10\&cacheName\=default\&qry=select\ \*\ from\ sys.nodes | jq .response.items
[
[
"3304155a-bc83-402f-a884-59d39f074d3a",
"0:0:0:0:0:0:0:1%lo,127.0.0.1,172.17.0.1,192.168.1.7:47500",
"2.8.1#20200521-sha1:86422096",
false,
false,
1,
"[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 172.17.0.1, 192.168.1.7]",
"[192.168.1.7, 172.17.0.1]",
true
]
]
(this assumes you have a cache named default for API purposes)
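The same query can be narrowed down to just the server count. Below is a minimal sketch of that idea, not a tested solution: it assumes the REST module listens on localhost:8080, that a cache named default exists (as above), and that SYS.NODES exposes an IS_CLIENT column that is false for server nodes:
import requests

# Count server nodes via the qryfldexe REST command (assumed column: IS_CLIENT)
query = "SELECT COUNT(*) FROM SYS.NODES WHERE IS_CLIENT = false"
resp = requests.get(
    "http://localhost:8080/ignite",
    params={
        "cmd": "qryfldexe",
        "pageSize": 10,
        "cacheName": "default",
        "qry": query,
    },
)
resp.raise_for_status()

# The result arrives as response.items, a list of rows (see the jq output above)
total_servers = resp.json()["response"]["items"][0][0]
print("Server nodes:", total_servers)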

Finally I have found one answer; if it's wrong, please correct me:
http://localhost:8080/ignite?cmd=node&id=a427-a04631d64c98&attr=true
In the response, ["attributes"]["org.apache.ignite.cache.client"] => false, which means the node is a server.
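Building on that, here is a rough sketch (untested) that counts servers in one request instead of one request per node. It assumes the topology command also accepts attr=true and returns each node's attributes, and that no session token is required:
import requests

# Fetch the whole topology once, with node attributes and without metrics
resp = requests.get(
    "http://localhost:8080/ignite",
    params={"cmd": "top", "attr": "true", "mtr": "false"},
)
resp.raise_for_status()
nodes = resp.json()["response"]

# A node whose org.apache.ignite.cache.client attribute is false is a server
total_servers = sum(
    1 for n in nodes
    if str(n["attributes"].get("org.apache.ignite.cache.client")).lower() == "false"
)
print("Server nodes:", total_servers)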

Related

Send real-time JMeter active threads on all slaves during remote testing to InfluxDB via a JSR223 Listener

How can I send the real-time number of active JMeter threads on all slaves during remote testing to InfluxDB via a JSR223 Listener?
Any reason for not using JMeter's Backend Listener? That way you get far more data without implementing a custom solution, and the number of active threads is plotted along with the other metrics.
If you still want an example of Groovy code for the JSR223 Listener that adds your custom metric, with custom tags, to a custom database, here is a sample you can use as a basis:
// InfluxDB connection details
def influxHost = '192.168.99.100'
def influxPort = '8086'
def database = 'mydb'

// Identify this slave and read the current number of active threads
def ip = org.apache.jmeter.util.JMeterUtils.getLocalHostIP()
def hostname = org.apache.jmeter.util.JMeterUtils.getLocalHostName()
def activeThreads = ctx.getThreadGroup().numberOfActiveThreads()

// Build an InfluxDB line-protocol point and POST it to the /write endpoint
// (the timestamp must be nanoseconds since the epoch, hence currentTimeMillis * 1e6)
def client = org.apache.http.impl.client.HttpClientBuilder.create().build()
def post = new org.apache.http.client.methods.HttpPost('http://' + influxHost + ':' + influxPort + '/write?db=' + database)
def entity = new org.apache.http.entity.StringEntity('active_threads,host=' + hostname + ',ip=' + ip + ' value=' + activeThreads + ' ' + (System.currentTimeMillis() * 1000000L))
post.setEntity(entity)
client.execute(post)
More information: Write data using the InfluxDB API

HTTP request based on timestamp using Jmeter

I am trying to send HTTP requests using JMeter, for which I am using an HTTP Sampler. The HTTP requests have a TaskID parameter, and these parameters are read from a CSV file. I just want to change how the HTTP requests are sent.
The CSV file looks like this
Time TaskID
9000 42353456
9000 53463464
9000 65475787
9300 42354366
9300 23423535
9600 43545756
9600 53463467
9600 23435346
Now I want to send the requests based on the Time. For example, at Time 9000 there are 3 TaskIDs, so I want to send 3 HTTP requests with those TaskIDs at the same time, and similarly for the other Times. Any idea how to do it?
Update:
I created a minimal working example of one possible solution.
Basically, I read the CSV in a JSR223 Sampler and group it with the following Groovy code in the "read csv" sampler:
import org.apache.jmeter.services.FileServer

current_dir = FileServer.getFileServer().getBaseDir().replace("\\", "/")
csv_lines = new File(current_dir + "/test.csv").readLines()
times = []

csv_lines.each { line ->
    line = line.split(",")
    time = line[0]
    task_id = line[1]
    if (vars.getObject(time)) {
        tasks = vars.getObject(time)
        tasks.add(task_id)
        vars.putObject(time, tasks)
    }
    else {
        times.add(time)
        vars.putObject(time, [task_id])
    }
}

times.eachWithIndex { time, i ->
    vars.put("time_" + (i + 1), time)
}
Notes:
- (i+1) is used because the ForEach Controller will not consider the 0th element
- I used "," as the CSV separator and omitted the header line
The "initialize task_ids" sampler holds the following code:
time = vars.get("time")
tasks = vars.getObject(time)
tasks.eachWithIndex { task, i ->
    vars.put(time + "_" + (i + 1), task)
}
I hope this helps!

python - Is my code ever reaching the proxy method? - requests

So I am playing around with proxies in requests. Basically, if I set a proxy on a requests Session, it should then be used for the whole session. I have written the code below, but I haven't been able to check whether the traffic really goes through the proxy (and I don't know if this is the right place to post). What I have done looks like this:
with open('proxies.json') as json_data_file:
    proxies = json.load(json_data_file)

def setProxy(proxy):
    s = requests.Session()
    proxies = {'http': 'http://' + proxy,
               'https': 'http://' + proxy}
    s.proxies.update(proxies)
    return s

def info(thread):
    global prod
    prod = int(thread) + 1
    runit(proxies)

def runit(proxies):
    try:
        if proxies != []:
            s = setProxy(random.choice(proxies))
            sleepy = time.sleep(.5)
        else:
            s = requests.Session()
            sleepy = time.sleep(1)
        r = s.get(url)
    except requests.exceptions.ProxyError:
        log(Fore.RED + "Proxy DEAD - rotating" + Fore.RESET)
        sleepy
        passwd(proxies)
    PostUrl = s.post('www.hellotest.com')
    print("Does it actually use the proxy or not?")

def main():
    i = 0
    jobs = []
    for i in range(10):
        p = multiprocessing.Process(target=info, args=(str(i),))
        jobs.append(p)
        time.sleep(.5)
        p.start()
    for p in jobs:
        p.join()
    sys.exit()
Is there a way to actually see whether it uses the proxy or not? This is also my first time doing this, so please don't judge!
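One simple way to verify it is to compare the origin IP seen by an IP-echo service through the proxied session against a direct request. This is just a sketch (it assumes httpbin.org is reachable and that the proxy actually rewrites your outbound IP):
import requests

def uses_proxy(session):
    # Ask the same endpoint for our public IP, once directly and once via the session
    direct_ip = requests.get("https://httpbin.org/ip", timeout=10).json()["origin"]
    proxied_ip = session.get("https://httpbin.org/ip", timeout=10).json()["origin"]
    print("direct:", direct_ip, "proxied:", proxied_ip)
    return direct_ip != proxied_ip

# Hypothetical usage with the setProxy() helper from the question:
# s = setProxy("1.2.3.4:8080")
# print(uses_proxy(s))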

How to get the number of forks of a GitHub repo with the GitHub API?

I use the GitHub API v3 to get the forks count for a repository; I use:
GET /repos/:owner/:repo/forks
The request brings me only 30 results even if a repository contains more. I googled a little and found that, to limit memory usage, the API returns only 30 results per page, and if I want the next results I have to specify the page number.
But I don't need all that information; all I need is the number of forks.
Is there any way to get only the number of forks?
If I start to loop page by page, my script risks crashing if a repository has thousands of forks.
You can try and use a search query.
For instance, for my repo VonC/b2d, I would use:
https://api.github.com/search/repositories?q=user%3AVonC+repo%3Ab2d+b2d
The json answer gives me a "forks_count": 5
Here is one with more than 4000 forks (consider only the first result, meaning the one whose "full_name" is actually "strongloop/express")
https://api.github.com/search/repositories?q=user%3Astrongloop+repo%3Aexpress+express
"forks_count": 4114,
I had a job where I needed to get all forks of a GitHub project as git remotes.
I wrote a simple Python script: https://gist.github.com/urpylka/9a404991b28aeff006a34fb64da12de4
At the core of the program is a recursive function for getting the forks of each fork, and I hit the same problem (the GitHub API was returning only 30 items).
I solved it by incrementing a ?page= parameter and checking for an empty response from the server.
import requests

def get_fork(username, repo, forks, auth=None):
    page = 1
    while 1:
        r = None
        request = "https://api.github.com/repos/{}/{}/forks?page={}".format(username, repo, page)
        if auth is None:
            r = requests.get(request)
        else:
            r = requests.get(request, auth=(auth['login'], auth['secret']))
        j = r.json()
        r.close()
        if 'message' in j:
            print("username: {}, repo: {}".format(username, repo))
            print(j['message'] + " " + j['documentation_url'])
            if str(j['message']) == "Not Found":
                break
            else:
                exit(1)
        if len(j) == 0:
            break
        else:
            page += 1
        for item in j:
            forks.append({'user': item['owner']['login'], 'repo': item['name']})
            if auth is None:
                get_fork(item['owner']['login'], item['name'], forks)
            else:
                get_fork(item['owner']['login'], item['name'], forks, auth)
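For completeness, a hypothetical call site (the function mutates the forks list in place and recurses into every fork, so expect many API calls on large projects):
forks = []
get_fork("strongloop", "express", forks)  # owner/repo reused from the example above
print(len(forks), "forks collected")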

Using varnish to cache heroku app

I set up Varnish a long time ago and have one of my backends set to a host.herokuapp.com, and it works great. For a while I was able to change settings and reload the Varnish config with the basic service varnish reload command.
Now when I try reloading it I get:
* Reloading HTTP accelerator varnishd Command failed with error code 106
Message from VCC-compiler:
Backend host "myapp.herokuapp.com": resolves to multiple IPv4 addresses.
Only one address is allowed.
Please specify which exact address you want to use, we found these:
154.129.225.36
13.21.108.188
50.10.185.176
50.13.98.193
54.125.177.29
54.213.81.135
107.25.192.112
174.139.35.141
('/etc/varnish/backends.vcl' Line 39 Pos 27)
backend mobile { .host = "myapp.herokuapp.com"; .port = "80"; }
--------------------------#####################-----------------
In backend specification starting at:
('/etc/varnish/backends.vcl' Line 39 Pos 1)
backend mobile { .host = "myapp.herokuapp.com"; .port = "80"; }
#######---------------------------------------------------------------
Running VCC-compiler failed, exit 1
VCL compilation failed
Error: vcl.load 7ba71b44-c6b9-40e9-b0be-18f02bb5e9be /etc/varnish/default.vcl failed
As Heroku uses dynamic IPs for its dynos, the IP list changes constantly, so it makes no sense to hard-code the IPs as backends. Any clue on a way to fix this?
I had nearly the same problem with servers hosted by Acquia.
The way I solved it was to:
- put the backend IPs of the Acquia-hosted servers in a separate VCL
- build a cron'd script that regularly updates that VCL if the backends change
- restart Varnish to put the new backends into production
#!/usr/bin/python2.7

import socket
import subprocess
import re

#
# Do a nslookup and return the list of the IPs
#
def _nslookup(host):
    ips = ""
    ips = socket.getaddrinfo(host, 0, 0, 0, 0)
    ip_list = []
    for result in ips:
        ip_list.append(result[-1][0])
    ip_list = list(set(ip_list))
    return ip_list

#
# Compare current backends with the list returned by nslookup
#
def _compare_backends_vcl(host_name, group_name):
    current_ips = []
    current_ips = _nslookup(host_name)
    # Get current backends
    current_backends = []
    list = subprocess.Popen("/usr/bin/varnishadm backend.list | grep " + group_name + " | awk '{print $1}'", shell=True, stdout=subprocess.PIPE)
    backend = ""
    for backend in list.stdout:
        current_backends.append(re.sub(r'^.*\((.*),.*,.*$\n', r'\1', backend))
    # Due to a varnish bug, old backends are not removed (they are still declared in backend.list),
    # so we are forced to only add backends.
    # Therefore the nslookup result should be part of the current set of backends.
    # if set(current_ips).symmetric_difference(current_backends):
    if set(current_ips).difference(current_backends):
        # The list is present, so a difference exists
        print "_compare: We have to update " + group_name
        return True
    else:
        return False

#
# Write the corresponding file
#
def _write_backends_vcl(host_name, group_name):
    TEMPLATE_NODE = '''backend %s {
\t.host = "%s";
\t.port = "80";
\t.probe = %s;
}'''
    vcl_file = open("/etc/varnish/" + group_name + "_backends.vcl", 'w')
    host_num = 1
    hosts = _nslookup(host_name)
    for host in hosts:
        vcl_file.write(TEMPLATE_NODE % (group_name + "_" + str(host_num), host, group_name + "_probe"))
        vcl_file.write("\n\n")
        host_num += 1
    vcl_file.write("director " + group_name + "_default round-robin {\n")
    for i in range(len(hosts)):
        node = group_name + "_" + str(i + 1)
        vcl_file.write("\t{ .backend = %s; }\n" % node)
    vcl_file.write("}\n")
    vcl_file.close()

# Main
do_reload = ""
if _compare_backends_vcl("myhost.prod.acquia-sites.com", "MYHOST_CONFIG"):
    do_reload = True
    _write_backends_vcl("myhost.prod.acquia-sites.com", "MYHOST_CONFIG")
if do_reload:
    print "Reloading varnish"
    subprocess.Popen(['sudo', '/etc/init.d/varnish', 'reload'])
    exit(1)
else:
    # print "Everything is ok"
    exit(0)
Then the corresponding VCL looks like:
backend MYHOST_CONFIG_1 {
    .host = "XX.XX.XX.XX";
    .port = "80";
    .probe = MYHOST_CONFIG_probe;
}

backend MYHOST_CONFIG_2 {
    .host = "XX.XX.XX.XX";
    .port = "80";
    .probe = MYHOST_CONFIG_probe;
}

director MYHOST_CONFIG_default round-robin {
    { .backend = MYHOST_CONFIG_1; }
    { .backend = MYHOST_CONFIG_2; }
}
You have to set up the MYHOST_CONFIG_probe probe and set MYHOST_CONFIG_default as the director for your config.
Beware that Varnish keeps every backend it has seen, so you have to restart it regularly to purge the defective ones.
I had the same problem today.
So I installed an nginx server on port 3000 and set up a proxy_pass to myapp.herokuapp.com.
Then I changed host="myapp.herokuapp.com" and port="80" to host="127.0.0.1" and port="3000".
