I'm running a Symfony 3 app on Apache 2.4 with PHP 7.3.
Memcached, OPcache and APCu are enabled and configured as per the performance guide in the Symfony documentation, so I thought I was all set up...
But then I found this text:
Symfony makes heavy use of a filesystem cache. By default, the cache is located in app/cache/ENV where ENV is the environment currently accessed.
I interpret this as the cache being stored in a directory on the filesystem (which is usually a hard drive), hence my question:
Would it be of any help to the Symfony application's performance if I was to run the cache directory in a RAM disk?
Thank you!
PS: If you know of any good guide on improving Symfony's performance with respect to the database from a SysAdmin/DevOps perspective, I would be very grateful if you could share a link with me.
Would it be of any help to the Symfony application's performance if I was to run the cache directory in a RAM disk?
I don't think so. The cache directory contains compiled PHP files (the container, routes, Twig templates, and so on), and OPcache then keeps those in memory, so the disk is barely touched once the cache is warm.
The Performance section of the documentation covers most things:
https://symfony.com/doc/current/performance.html#performance-checklists
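That said, if you want to experiment anyway, here is a minimal sketch of moving the cache onto a RAM-backed directory. It assumes a Linux host where /dev/shm is a tmpfs mount; APP_DIR and the prod environment name are placeholders for your real project layout, not something from the Symfony docs:

```shell
# Sketch only: put app/cache/$ENV on tmpfs and symlink it back, so Symfony
# keeps writing to the same path. APP_DIR is a stand-in project root here.
APP_DIR="${APP_DIR:-$(mktemp -d)}"
ENV=prod
RAM_CACHE="/dev/shm/symfony-cache/$ENV"   # /dev/shm is tmpfs on most Linux systems
mkdir -p "$APP_DIR/app/cache" "$RAM_CACHE"
# Carry over any existing cache, then replace the directory with a symlink.
if [ -d "$APP_DIR/app/cache/$ENV" ]; then
  cp -a "$APP_DIR/app/cache/$ENV/." "$RAM_CACHE/"
  rm -rf "$APP_DIR/app/cache/$ENV"
fi
ln -sfn "$RAM_CACHE" "$APP_DIR/app/cache/$ENV"
```

Keep in mind that tmpfs is emptied on reboot, so you would need to re-run cache warmup (php app/console cache:warmup) at boot, and any gain is likely marginal for the reasons above.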
Related
Should I put programs on HDFS or keep them local?
I am talking about a binary file which:
Is launched by spark-submit
Is executed daily
Executes Spark map/reduce functions on RDDs/DataFrames
Is a JAR
Weighs 20 MB
Processes a lot of data; this data is located on HDFS
I would think it is a bad idea, since distributing an executable file over HDFS might slow down the execution. I think it would be even worse for a file larger than 64 MB (the Hadoop block size). However, I could not find resources about that. Also, I do not know the consequences for memory management (is the Java heap replicated for each node that holds a copy of the JAR?).
Yes, this is exactly the concept behind YARN's shared cache.
The main reason for doing this is if you have a large amount of resources tied to jobs, and submitting them as local resources wastes network bandwidth.
Refer to this Slideshare deck to understand the performance impact in more detail:
Slideshare: Hadoop Summit 2015: A Secure Public Cache For YARN Application Resources
YARN Shared Cache
YARN-1492 truly shared cache for jars (jobjar/libjar)
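For reference, enabling it comes down to a few properties. The shared cache landed in later Hadoop 2.x releases; the names below are taken from the upstream YARN shared cache documentation, so check them against your version (the root-dir path and the resource list are just example values):

```xml
<!-- yarn-site.xml: enable the shared cache -->
<property>
  <name>yarn.sharedcache.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.sharedcache.root-dir</name>
  <value>/sharedcache</value>
</property>

<!-- mapred-site.xml: which job resources may be uploaded to / reused from the cache -->
<property>
  <name>mapreduce.job.sharedcache.mode</name>
  <value>jobjar,libjars</value>
</property>
```

The shared cache manager is a separate daemon; in upstream Hadoop 2.x it is typically started with yarn-daemon.sh start sharedcachemanager.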
We are using Solr with HDFS for our indexing needs. While updating existing documents (read the existing doc, then update it) in our performance run, we observed that HDFS storage usage was growing exponentially. We are using the standard settings mentioned here: https://cwiki.apache.org/confluence/display/solr/Running+Solr+on+HDFS. Any clues on the root cause of our issue? Thanks for your help.
We have been testing different configuration values to solve this issue. So far it seems that enabling solr.hdfs.blockcache.direct.memory.allocation=true in the solrconfig.xml file solves it.
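For context, that flag lives inside the HdfsDirectoryFactory block of solrconfig.xml. A trimmed sketch based on the standard "Running Solr on HDFS" configuration (the NameNode host/port and paths are placeholders for your own cluster):

```xml
<directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
  <str name="solr.hdfs.home">hdfs://namenode:8020/solr</str>
  <str name="solr.hdfs.confdir">/etc/hadoop/conf</str>
  <bool name="solr.hdfs.blockcache.enabled">true</bool>
  <!-- allocate the block cache from direct (off-heap) memory -->
  <bool name="solr.hdfs.blockcache.direct.memory.allocation">true</bool>
</directoryFactory>
```

With direct memory allocation enabled, remember to size the JVM's direct memory accordingly (-XX:MaxDirectMemorySize), or the block cache will fail to allocate.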
Initially I had two machines to setup hadoop, spark, hbase, kafka, zookeeper, MR2. Each of those machines had 16GB of RAM. I used Apache Ambari to setup the two machines with the above mentioned services.
Now I have upgraded the RAM of each of those machines to 128GB.
How can I now tell Ambari to scale up all its services to make use of the additional memory?
Do I need to understand how the memory is configured for each of these services?
Is this part covered in Ambari documentation somewhere?
Ambari calculates recommended memory settings for each service at install time, so a change in memory post-install will not be picked up automatically; you would have to edit these settings manually for each service. To do that, yes, you would need an understanding of how memory should be configured for each service. I don't know of any Ambari documentation that recommends memory configuration values per service, so I would suggest one of the following routes:
1) Take a look at each service's documentation (YARN, Oozie, Spark, etc.) and see what it recommends for memory-related parameter configurations.
2) Take a look at the Ambari code that calculates recommended values for these memory parameters and use those equations to come up with new values that account for your increased memory.
I used this https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_installing_manually_book/content/determine-hdp-memory-config.html
Also, SmartSense is a must: http://docs.hortonworks.com/HDPDocuments/SS1/SmartSense-1.2.0/index.html
We need to define cores, memory, disks, and whether we use HBase or not; the script will then provide the memory settings for YARN and MapReduce:
[root@ttsv-lab-vmdb-01 scripts]# python yarn-utils.py -c 8 -m 128 -d 3 -k True
Using cores=8 memory=128GB disks=3 hbase=True
Profile: cores=8 memory=81920MB reserved=48GB usableMem=80GB disks=3
Num Container=6
Container Ram=13312MB
Used Ram=78GB
Unused Ram=48GB
yarn.scheduler.minimum-allocation-mb=13312
yarn.scheduler.maximum-allocation-mb=79872
yarn.nodemanager.resource.memory-mb=79872
mapreduce.map.memory.mb=13312
mapreduce.map.java.opts=-Xmx10649m
mapreduce.reduce.memory.mb=13312
mapreduce.reduce.java.opts=-Xmx10649m
yarn.app.mapreduce.am.resource.mb=13312
yarn.app.mapreduce.am.command-opts=-Xmx10649m
mapreduce.task.io.sort.mb=5324
Apart from this, there are formulas for calculating it manually. I tried these settings and they worked for me.
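The arithmetic behind those numbers can be reproduced by hand. A sketch for the run above (8 cores, 128 GB, 3 disks, HBase on); the 48 GB reservation comes from the script's OS + HBase lookup table, and the 0.8 heap factor is the convention the script uses for -Xmx:

```shell
# Reproduce the yarn-utils.py output above by hand.
TOTAL_MB=$((128 * 1024))                      # physical RAM
RESERVED_MB=$((48 * 1024))                    # OS + HBase reservation (lookup table)
USABLE_MB=$((TOTAL_MB - RESERVED_MB))         # 81920 MB available for YARN containers
CONTAINER_MB=13312                            # per-container RAM, rounded by the script
NUM_CONTAINERS=$((USABLE_MB / CONTAINER_MB))  # 6 containers
HEAP_MB=$((CONTAINER_MB * 8 / 10))            # JVM heap = 0.8 * container -> -Xmx10649m
echo "containers=$NUM_CONTAINERS container_mb=$CONTAINER_MB heap_mb=$HEAP_MB"
```

This matches the profile printed above: usableMem=80 GB, 6 containers of 13312 MB, and -Xmx10649m for the map/reduce JVMs.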
I am using Infinispan with jgroups in java.
I want to get all the cache names in an infinispan cache cluster.
I have tried using
DefaultCacheManager.getCacheNames();
but it returns only the caches that have been accessed on the JVM it is called from, not all the caches in the cluster.
Once I access a cache on that JVM, it becomes available and starts showing up in the cache list I get from
DefaultCacheManager.getCacheNames();
I am using the same config file for Infinispan and JGroups (using TCP).
Please suggest a way by which I can get all the cache names in a cluster.
Thanks,
Ankur
Hmmm, normally you'll have all caches defined cluster-wide, so getting the cache names on one node is enough to know which caches are available across the cluster.
This doesn't seem to be your case though, so the easiest thing I can think of is to do a Map/Reduce functionality in Infinispan to retrieve the cache names from individual nodes in the cluster and then collate them.
For more info, see https://docs.jboss.org/author/display/ISPN/Infinispan+Distributed+Execution+Framework and https://www.jboss.org/dms/judcon/presentations/Boston2011/JUDConBoston2011_day2track2session2.pdf
I have RED5 installed on my virtual server (I need it for my chat application), which has 1 GB of RAM. When I start RED5 it takes approx. 1 GB immediately after start, and that's a problem, because then my whole site is very slow. I am sure it does not use the whole 1 GB, so I need a solution to limit it to, let's say, 700 MB.
I have tried things like this in red5.sh:
export JAVA_OPTS="-Xms512m -Xmx1024m $LOGGING_OPTS $SECURITY_OPTS $JAVA_OPTS"
But without success.
EDIT - forgot to mention, I use Debian on my VPS.
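For what it's worth, -Xmx only caps the Java heap, so a 1 GB resident size right after start often means the flag never reached the JVM at all. A sketch of a tighter setting (the 700 MB figure is the target from the question, not a Red5 recommendation, and the LOGGING_OPTS/SECURITY_OPTS names assume the stock red5.sh):

```shell
# Cap the Red5 JVM heap below the VPS limit. Place this AFTER any line in
# red5.sh that resets JAVA_OPTS, or the assignment will be clobbered.
export JAVA_OPTS="-Xms256m -Xmx700m ${LOGGING_OPTS:-} ${SECURITY_OPTS:-} ${JAVA_OPTS:-}"
echo "$JAVA_OPTS"
```

After restarting, check the running process (e.g. ps aux | grep java) and confirm that -Xmx700m actually appears on its command line; if it does not, the startup script is overriding JAVA_OPTS later on or launching java without it.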