I am pretty new to caching modules.
In my task, I need to design a cache cluster (of 2 servers). The data from my web app should hit the cache server in the cluster, and the other server will be there for high availability.
I went through the Ehcache documentation, but I'm still not clear on how to start this cache-server task.
Can somebody help me with this task, or point me to a relevant link?
http://www.developer.com/open/article.php/3700661/Caching-Solutions-in-Java.htm
Go through that link; it will definitely help you get started.
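The question doesn't say which Ehcache version is in play. If it's Ehcache 3.x, the usual way to get a two-server, highly available setup is to point the clients at a Terracotta server stripe (one active server, one passive mirror for HA), which has to be set up separately. Below is only a rough sketch of the client-side wiring under that assumption; the host names, the cache-manager identifier, the "main" server resource and the cache alias are placeholders, and the exact builder calls vary slightly between 3.x releases. (On Ehcache 2.x you would instead configure RMI or JGroups replication in ehcache.xml.)

```java
import java.net.URI;

import org.ehcache.Cache;
import org.ehcache.PersistentCacheManager;
import org.ehcache.clustered.client.config.builders.ClusteredResourcePoolBuilder;
import org.ehcache.clustered.client.config.builders.ClusteringServiceConfigurationBuilder;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.config.units.MemoryUnit;

public class ClusteredCacheSketch {
    public static void main(String[] args) {
        // Both Terracotta servers are listed in the URI; if the active one fails,
        // the client fails over to the passive mirror (host names are placeholders).
        URI clusterUri = URI.create("terracotta://cache1:9410,cache2:9410/web-app-cache");

        PersistentCacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
                .with(ClusteringServiceConfigurationBuilder.cluster(clusterUri).autoCreate())
                .withCache("pages",
                        CacheConfigurationBuilder.newCacheConfigurationBuilder(String.class, String.class,
                                ResourcePoolsBuilder.newResourcePoolsBuilder()
                                        // "main" must be an offheap resource defined in the server config
                                        .with(ClusteredResourcePoolBuilder.clusteredDedicated("main", 32, MemoryUnit.MB))))
                .build(true);

        Cache<String, String> pages = cacheManager.getCache("pages", String.class, String.class);
        pages.put("/home", "<html>...</html>");
        System.out.println(pages.get("/home"));

        cacheManager.close();
    }
}
```

With this kind of setup the web app only talks to the cache manager; which physical server currently holds the data is handled by the clustering layer.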
What are some good panels to have in a Kibana visualisation for developers to troubleshoot issues in applications? I am trying to create a dashboard that developers could use to pinpoint where the app is having issues, so that they can resolve them. These are a few factors I have considered:
CPU usage of the pod, memory usage of the pod, network in and out, and application logs are the ones I have in mind. Are there any other panels I could add so that developers can get an idea of where to check if something goes wrong in the app?
For example, application slowness could be due to high CPU consumption, the app going down could be due to an OOM kill, and requests taking longer could be due to latency or cache issues, etc. Is there anything else I could take into consideration? If yes, please suggest.
So here are a few things that we could add:
1. Number of pods, deployments, daemonsets, and statefulsets present in the cluster
2. CPU utilised by the pod (pod-wise breakdown)
3. Memory utilised by the pod (pod-wise breakdown)
4. Network in/out
5. Top memory/CPU-consuming pods and nodes
6. Latency
7. Persistent disk details
8. Error logs as annotations in TSVB
9. Log streams to check logs within the dashboard
I'm trying to process a document and store many documents into RavenDB, which I have running locally.
I'm getting the error:
Tried to send *ravendb.BatchCommand request via POST http://127.0.0.1:8080/databases/mydb/bulk_docs to all configured nodes in the topology, all of them seem to be down or not responding. I've tried to access the following nodes: http://127.0.0.1:8080
I was able to fetch mydb topology from http://127.0.0.1:8080.
Fetched topology: ( url: http://127.0.0.1:8080, clusterTag: A, serverRole: Member)
exit status 1
To me, it sounds like maybe my local cluster is running out of compute to process the large amount of data I'm trying to store.
RavenDB says I'm using 3 of 12 available cores, and I'd also like to make sure it's using a reasonable amount of the RAM I have available on the machine (I'd even be happy to give it swap).
But reading around online, I'm not finding much helpful information on making sure RavenDB is able to use what it needs. I found settings.json, so I can add configuration that should theoretically be picked up by the server, but I'm not making much progress.
I also found some settings and changed "reassign cores" to 12, but it still says that 3/12 cores and 6/31.1 GB of memory are being used.
If an alternative solution is recommended, I'm all ears. I just need to run things locally, and storing everything as JSON files doesn't enable fast enough retrieval for my use case.
Update
I was able to install MongoDB and set up a local database. It hasn't given me any problems yet. RavenDB looks appealing, and I'd use it if I understood it better, but I guess I'll stick with the tried and true for this project.
It is highly unlikely that you managed to run out of resources on the server with 3 cores / 6 GB unless you are pushing hundreds of millions of documents and doing very heavy work.
Do you get any error on the server? There should be more details in the error itself or in the server log.
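One thing worth ruling out on the client side: the error above comes from the RavenDB Go client, and a single huge SaveChanges/bulk_docs batch can time out even when the server is healthy. For very large imports, the bulk-insert API is the usual tool, since it streams documents instead of building one giant request. I don't have your Go code, so here is only a rough sketch of the idea using the RavenDB JVM client (Java); the Doc class, the ID prefix and the document count are made up, and the URL/database name are taken from the question.

```java
import net.ravendb.client.documents.BulkInsertOperation;
import net.ravendb.client.documents.DocumentStore;

public class BulkLoadSketch {

    // Hypothetical document type; replace with whatever you are actually storing.
    public static class Doc {
        private String payload;
        public String getPayload() { return payload; }
        public void setPayload(String payload) { this.payload = payload; }
    }

    public static void main(String[] args) {
        DocumentStore store = new DocumentStore("http://127.0.0.1:8080", "mydb");
        store.initialize();

        // Bulk insert streams documents to the server instead of sending
        // one enormous batch that can blow past the request timeout.
        BulkInsertOperation bulkInsert = store.bulkInsert();
        try {
            for (int i = 0; i < 1_000_000; i++) {
                Doc doc = new Doc();
                doc.setPayload("document body " + i);
                bulkInsert.store(doc, "docs/" + i);
            }
        } finally {
            bulkInsert.close();
            store.close();
        }
    }
}
```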
While executing a JMeter test on the staging environment, we ran an ehcache clear command, which removed the entire site cache. Since the ehcache was cleared, we expected performance and throughput to drop for some time. Instead, the number of transactions per second (throughput) increased drastically.
What can explain this?
It could be a bug or a wrong implementation of Ehcache; you can check a detailed write-up of how Ehcache was dissected:
...database connections were kept open. Which meant that the database
started to slow down. This meant that other activity started to take
longer as well...
and in summary:
for a non-distributed cache, it performs well enough as long as you
configure it okay.
Also check the guidelines, which conclude in an interesting way:
we learned that we do not need a cache. In fact, in most cases where
people introduce a cache it is not really needed. ...
Our guidelines for using a cache are as follows:
You do not need a cache.
Really, you don’t.
If you still have a performance issue, can you
solve it at the source? What is slow? Why is it slow? Can you
architect it differently to not be slow? Can you prepare data to be
read-optimized?
If it's not due to a slow Ehcache configuration, I suppose the explanation could be that:
You don't have a response assertion in your test plan, so you are relying only on the response code, which might be 200 even though the response page was not the one you requested.
The pages served after the clear are maybe lighter (error pages? default pages?) and do not require as much work to generate.
See:
http://www.ubik-ingenierie.com/blog/best-practice-using-jmeter-assertions/
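For reference, a Response Assertion is normally added in the JMeter GUI (Add > Assertions > Response Assertion on the HTTP sampler). If you build test plans through the JMeter Java API instead, the same element looks roughly like the sketch below; the assertion name and the expected text are placeholders, and you would still attach it under the sampler in your test plan tree.

```java
import org.apache.jmeter.assertions.ResponseAssertion;

public class ResponseAssertionSketch {
    public static void main(String[] args) {
        // Assert on the response body instead of trusting a 200 response code alone.
        ResponseAssertion assertion = new ResponseAssertion();
        assertion.setName("Check page body");       // placeholder name
        assertion.setTestFieldResponseData();       // test the response body
        assertion.setToContainsType();              // "Contains" match rule
        assertion.addTestString("Welcome back");    // text expected on the real page (placeholder)

        System.out.println("Configured assertion: " + assertion.getName());
    }
}
```

With an assertion like this in place, samples that return the wrong page are marked as failures, and the throughput numbers after a cache clear become meaningful again.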
When our Cascading jobs encounter an error in the data, they throw various exceptions… These end up in the logs, and if the logs fill up, the cluster stops working. Is there any config file we can edit/configure to avoid such scenarios?
We are using MapR 3.1.0, and we are looking for a way to limit the log use (syslogs/userlogs) without using centralized logging and without adjusting the logging level; we are less bothered about whether it keeps the first N bytes or the last N bytes of the logs and discards the remaining part.
We don't really care about the logs, and we only need the first (or last) few megs to figure out what went wrong. We don't want to use centralized logging, because we don't really want to keep the logs / don't care to spend the performance overhead of replicating them. Also, correct me if I'm wrong: user_log.retain-size has issues when JVM reuse is used.
Any clue/answer will be greatly appreciated!
Thanks,
Srinivas
This should probably be on a different Stack Exchange site, as it's more of a DevOps question than a programming question.
Anyway, what you need is for your DevOps team to set up logrotate and configure it according to your needs, or to edit the log4j files on the cluster to change the way logging is done.
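If you go the log4j route, the knob you are after is a size-capped rolling appender. The sketch below shows the same settings programmatically (log4j 1.x, which that Hadoop generation ships with); in practice you would put the equivalent MaxFileSize/MaxBackupIndex settings into the cluster's log4j configuration for the task logs, and the file path here is just a placeholder.

```java
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;
import org.apache.log4j.RollingFileAppender;

public class CappedLoggingSketch {
    public static void main(String[] args) throws Exception {
        // Size-capped appender: at most ~10 MB per file plus one rolled backup;
        // older output is discarded (path is a placeholder).
        RollingFileAppender appender = new RollingFileAppender(
                new PatternLayout("%d{ISO8601} %-5p %c - %m%n"),
                "/tmp/cascading-job.log");
        appender.setMaxFileSize("10MB");
        appender.setMaxBackupIndex(1);
        appender.activateOptions();

        Logger.getRootLogger().addAppender(appender);
        Logger.getLogger(CappedLoggingSketch.class).info("logging is now size-capped");
    }
}
```

This keeps only the most recent few megabytes of each log, which matches your "first/last N bytes is fine" requirement; logrotate does the same job at the OS level for logs you don't control through log4j.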
I am working on a clustered MarkLogic environment where we have 10 nodes. All nodes are combined E&D nodes (each host acts as both an evaluator and a data node).
The problem we are facing:
When a page is written to MarkLogic, it takes some time (up to 3 seconds) for all the nodes in the cluster to get updated, and if during that window I do a read operation to fetch the previously written page, it is not found.
Has anyone experienced this latency issue and looked at eliminating it? If so, please let me know.
Thanks
It's normal for a new document to only appear after the database transaction commits, but it is not normal for a commit to take 3 seconds.
Which version of MarkLogic Server?
Which OS and version?
Can you describe the hardware configuration?
How large are these documents? All other things equal, update time should be proportional to document size.
Can you reproduce this with a standalone host? That should eliminate cluster-related network latency from the transaction, which might tell you something. Possibly your cluster network has problems, or possibly one or more of the hosts has problems.
If you can reproduce the problem with a standalone host, use system monitoring to see what that host is doing at the time. On linux I favor something like iostat -Mxz 5 and top, but other tools can also help. The problem could be disk I/O - though it would have to be really slow to result in 3-sec commits. Or it might be that your servers are low on RAM, so they are paging during the commit phase.
If you can't reproduce it with a standalone host, then I think you'll have to run similar system monitoring on all the hosts in the cluster. That's harder, but for 10 hosts it is just barely manageable.
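To double-check the read-after-commit behaviour itself, a tiny write-then-read against a single host can help separate application logic from cluster behaviour. This is only a sketch using the MarkLogic Java Client API; host, port, credentials and the document URI are placeholders. The write() call does not return until its transaction has committed, so the read that follows should see the document, and the elapsed time gives you a rough measure of the commit cost on that host.

```java
import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.document.TextDocumentManager;
import com.marklogic.client.io.StringHandle;

public class ReadAfterWriteSketch {
    public static void main(String[] args) {
        // Placeholders: point this at one host of the cluster (or a standalone test host).
        DatabaseClient client = DatabaseClientFactory.newClient("localhost", 8000,
                new DatabaseClientFactory.DigestAuthContext("admin", "admin"));

        TextDocumentManager docMgr = client.newTextDocumentManager();

        long start = System.currentTimeMillis();
        docMgr.write("/test/page-1.txt", new StringHandle("page content"));   // returns after commit
        String readBack = docMgr.read("/test/page-1.txt", new StringHandle()).get();
        long elapsed = System.currentTimeMillis() - start;

        System.out.println("write+read took " + elapsed + " ms, read back: " + readBack);
        client.release();
    }
}
```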