Ignite. active(true)? [closed] - cluster-computing

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 1 year ago.
When I try to create an Apache Ignite key-value record, I get an error: "Can not perform the operation because the cluster is inactive. Note, that the cluster is considered inactive by default if Ignite Persistent Store is used to let all the nodes join the cluster. To activate the cluster call Ignite.active(true)." I am working in Go with the library github.com/amsokol/ignite-go-client/binary/v1. Since I'm just learning how to work with Apache Ignite, I don't really understand where I should call Ignite.active(true).

Use the control.sh script:
./control.sh --activate
You only need to activate the cluster once, so it's generally not a good idea to put it in code.
More in the documentation.
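A typical activation session looks like this (a sketch, assuming control.sh is run from $IGNITE_HOME/bin on a node of the cluster; --set-state is the newer form of the same command):

```shell
# Check whether the cluster is currently active or inactive
./control.sh --state

# Activate the cluster once; with native persistence enabled,
# the active state survives node restarts
./control.sh --activate

# On Ignite 2.9+ the equivalent, non-deprecated command is:
./control.sh --set-state ACTIVE
```

After activation, client connections (including the Go thin client) can perform cache operations without any activation call in application code.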

Related

Differences between JMeter and Apache Benchmark [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
I've been using JMeter for a long time, and during my research I came across Apache Benchmark (ab), which seemed to me a simpler load-testing tool.
My assumption is that Apache Benchmark is the more suitable choice for benchmarking one API at a time, and that it would be a poor choice, and maybe impossible to use, for an end-to-end load test.
I am also curious whether ab has any advantages over JMeter for performance and benchmark testing.
Could you please explain?
When it comes to "hammering" an endpoint with simple HTTP requests, ab can be a suitable alternative to JMeter as long as you're fine with the following limitations:
no control over how connections are used/re-used
no support for authentication types other than Basic (no Digest, NTLM, or Kerberos)
no control over DNS caching
no clustered mode of test execution
missing metrics like connect time, TTFB, etc.; in general the results are quite "poor" compared to JMeter's HTML Reporting Dashboard
The only advantage of ab I can think of is its lower CPU/memory footprint.
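For reference, a typical single-endpoint benchmark with ab looks like this (the URL is a placeholder; ab ships with the Apache HTTP Server tools package):

```shell
# 1000 requests total, 50 concurrent connections, HTTP keep-alive enabled
ab -n 1000 -c 50 -k http://localhost:8080/api/health
```

ab reports requests per second, mean latency, and a latency percentile table; for anything beyond a single URL under constant concurrency, JMeter's test plans are the better fit.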

Prometheus vs ElasticSearch. Which is better for container and server monitoring? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
Elasticsearch is a document store and more of a search engine; I think it is not a good choice for monitoring high-dimensional data, as it consumes a lot of resources. Prometheus, on the other hand, is a time-series database (TSDB) designed for capturing high-dimensional data.
If anyone has experience with this, please let me know which tool is best for container and server monitoring.
ELK is a general-purpose NoSQL stack that can be used for monitoring. We've successfully deployed one in production and used it for some aspects of our monitoring system. You can ship metrics into it (if you wish) and use it to monitor them, but it's not specifically designed for that. Nor does the non-commercial version (7.9) come with an alerting system; you'll need to set up another component for that (like Sensu) or pay for an Elasticsearch commercial license.
Prometheus, on the other hand, is designed for monitoring. Along with its metric-gathering clients (or third-party clients like Telegraf), its service-discovery options (like Consul), and its Alertmanager, it is just the right tool for this job.
Ultimately, both solutions can work, but in my opinion Elasticsearch will require more work and more upkeep (we found that Elasticsearch clusters are a pain to maintain, though that depends on the amount of data you'll have).
I am using OpenShift, and we run both tools for different jobs: we aggregate all the logs and ship them to Elasticsearch for ease of browsing them, among other things.
We use Prometheus mainly for node and pod metrics, and Grafana definitely makes a great interface for viewing all of Prometheus's metrics.
Agreed that it depends on what you mean by "high dimensional" and by container and server monitoring. You could also use an open-source monitoring solution; I've tried Pandora FMS, which offers several options for large environments and distributed architectures. Its server monitoring is mostly agent-based, though, but I feel it has a lot of potential.

Locking Data Structures in Distributed Systems [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am entirely new to the concepts of distributed systems, so kindly let me know if the question should be rephrased.
I am trying to build a distributed system with 10 clients and one server. There is a queue on the server side that clients can access one at a time. What kind of locking mechanism could be used to avoid corrupted data? Are semaphores feasible in this situation? If possible, kindly provide a reference for deeper reading on the subject.
Semaphores on the server are feasible, and indeed are the way to go. On a GNU/Linux system such as Debian, see man 7 sem_overview and man 1 lockfile.
The simplest method is probably to let the server serve no more than one client at a time, refusing all requests from other clients. A refused client waits a random (not fixed) length of time, then tries again.
Another method is to let the server queue requests, but this is more complicated (and may still involve refusing some requests).
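As a minimal illustration of serialized access on a single server (a sketch, assuming a Linux system with the util-linux flock tool, a close relative of the lockfile utility mentioned above), ten simulated clients append to a shared "queue" file under an exclusive lock:

```shell
# Ten concurrent "clients" append to a shared queue file; flock(1)
# grants the exclusive lock to one writer at a time, so no write is
# lost or interleaved with another.
QUEUE=$(mktemp)
LOCK=$(mktemp)

for client in $(seq 1 10); do
  (
    flock -x 9                      # block until the exclusive lock is held
    echo "client $client" >> "$QUEUE"
  ) 9>"$LOCK" &                     # fd 9 is the lock handle for this subshell
done
wait

wc -l < "$QUEUE"                    # all 10 writes arrive intact
```

Without the flock call, concurrent appends could still mostly work for short lines, but nothing would guarantee atomicity; the lock makes the critical section explicit, which is exactly the role a semaphore plays on the server.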

how to find memory leaks in java based web application using jmeter? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I am new to JMeter. We have developed a Java-based web application using the Spring framework. We want to know whether we can find memory leaks using JMeter.
You can run a JMeter script for a few hours to create load, together with an Application Performance Monitoring solution like New Relic, to look for memory leaks.
JMeter is not primarily designed to find memory leaks. In a few words, JMeter's main job is to request URLs as many times as you require and evaluate the results you get back. That explanation is a bit simplified, and JMeter is really more complex than just this case. However, if you suspect a memory leak in your application, JMeter can be useful for generating many requests. Finding memory leaks IS NOT JMeter functionality: you have to monitor your application with other tools (e.g. jmap, jvisualvm) and use JMeter only to generate the required load.
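While JMeter applies the load, the JDK's own command-line tools can watch the heap (a sketch; &lt;pid&gt; is a placeholder for the JVM process id of the application under test):

```shell
# Sample GC and heap utilization every 5 seconds during the test run;
# old-generation usage that climbs and never drops after full GCs is
# the classic signature of a leak
jstat -gcutil <pid> 5000

# Histogram of live objects (note: -histo:live forces a full GC first)
jmap -histo:live <pid> | head -n 20

# Heap dump for offline analysis in jvisualvm or Eclipse MAT
jmap -dump:live,format=b,file=heap.hprof <pid>
```

The usual workflow is: run the JMeter plan at steady load for hours, sample with jstat, and take heap dumps at the start and end to compare object counts.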

Oracle Client Upgrade from 9 to 10 [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem.
Closed 9 years ago.
Last Friday where I work, the Oracle client on our IIS server was upgraded from version 9 to version 10. Now that it's on version 10, we are seeing a lot of connections being opened to the database. It is opening so many connections that we cannot log on to the database using tools like PL/SQL Developer or Toad. We never had an issue like this when the Oracle client was at version 9. Because of the number of clients that exist on this particular box, I don't think it will be possible to revert to the Oracle 9 client.
Is anyone aware of this problem, or does anyone know of any possible workarounds?
Any help is greatly appreciated.
Which connection library are you using? OO4O, ODP, Other?
I'm working from memories of old issues here, so the details are a little fuzzy. With OO4O there are two different ways to initialize the library; one tries to re-use connections more than the other.
In ODP the default is to use connection pooling. Sometimes this leads to extra connections being held open in case they're needed again. There are some issues with pooled connections that led me to turn them off (PL/SQL procedures can hang if called on a dead connection).
If you get more information, I'll try to get clarification. Let us know what you find, and good luck.
Thanks very much for your response, it was very useful to us.
We sent off our issue to Oracle and got the following back
============
This is a known issue discussed in
Note:417092.1
Database Connections Are Left Open By Oracle Objects for OLE (OO4O)
Your question:
"Does 10g client interface allow the ASP code/class functions the same way as 9i client?"
The workaround for this issue is to implement a loop to remove all the parameters. For example:
for i = 1 to OraDatabase.Parameters.Count
OraDatabase.Parameters.Remove(0)
next
Bug 5918934 OO4O Leaves Sessions Behind If OraParameters Are Not Removed
was logged for this behavior, and has been deemed "not feasible to fix" due to the architecture changes required to resolve the memory issues.
We did have a loop implemented within our code to remove parameters, but on looking at it again, it appears it is not removing all of them.
We are currently investigating this.
I will write back to this post once we have identified a solution.
Thanks,
Damien
