SCM management of AppFabric Cache Cluster

I'm working on building out a standard set of configurations for our cache clusters within AppFabric. My goal is to have a repeatable cache-settings configuration when we load up a new environment (where server names, the number of hosts, and other environmental factors differ).
My initial pass was to utilize the XML available from Export-CacheClusterConfig and simply change server names and size attributes in the <hosts> section, but I'm not sure what else is automatically registered with those values (the hostId parameter, for example).
The next approach I've considered is a PowerShell script that builds up the various caches with the correct parameters passed in, run as a post-deploy step.
Anyone else have experience with repeatable AppFabric cache cluster deployments?

After trying both, the more successful option seems to be a combination of two factors. Management of the cache cluster (host information) is primarily an operations concern and is best managed by the operations team (i.e. those guys that read Server Fault). Since this information is stored in the configuration as well (and would require an XML file obtained from Export-CacheClusterConfig for each environment), it's best left to the operations team to decide how they want to manage it. Importing the wrong file (with incorrect host information) has led to a number of issues.
So, we're left with PowerShell scripts. Here's a sample that I have. It could be cleaned up (check for cache existence first), but you get the general idea. It's also much easier to store in source control (as it's just one file).
New-Cache -CacheName CRMTickets -Eviction None -Expirable false -NotificationsEnabled true
New-Cache -CacheName ConsultantCache -Eviction Lru -Expirable true -TimeToLive 60
New-Cache -CacheName WorkitemCache -Eviction None -Expirable true -TimeToLive 60
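For example, the existence check mentioned above could look something like this (a sketch only; it assumes the DistributedCacheAdministration module has been imported and Use-CacheCluster has been run against the target cluster):

# Create each cache only if it does not already exist.
$caches = @(
    @{ CacheName = 'CRMTickets'; Eviction = 'None'; Expirable = 'false'; NotificationsEnabled = 'true' },
    @{ CacheName = 'ConsultantCache'; Eviction = 'Lru'; Expirable = 'true'; TimeToLive = 60 },
    @{ CacheName = 'WorkitemCache'; Eviction = 'None'; Expirable = 'true'; TimeToLive = 60 }
)
foreach ($cache in $caches) {
    # Get-Cache lists the caches already configured on the cluster.
    if (-not (Get-Cache | Where-Object { $_.CacheName -eq $cache.CacheName })) {
        New-Cache @cache
    }
}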

Related

ActiveMQ Artemis HA & users/roles - am I supposed to create user/role on each node separately?

I have an ActiveMQ Artemis cluster (2 nodes) with active/backup HA (shared-store mode), version 2.17.0.
The shared store is set up on NFS, mounted at $ARTEMIS_INSTANCE/data. In broker.xml I specified the following settings, which are pretty standard:
<paging-directory>data/paging</paging-directory>
<bindings-directory>data/bindings</bindings-directory>
<journal-directory>data/journal</journal-directory>
<large-messages-directory>data/large-messages</large-messages-directory>
According to this official documentation page, there is a login.conf file in the etc directory which specifies the users & roles files. I have the following contents:
activemq {
    org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule required
        debug=false
        reload=true
        org.apache.activemq.jaas.properties.user="artemis-users.properties"
        org.apache.activemq.jaas.properties.role="artemis-roles.properties";
};
Well, everything seems to work fine with it, but I noticed that every time I want to create a new user/role, I have to create it twice, once on each node. If I had replication HA mode and 6 nodes, I would need to create the same user/role 6 times (once for each node).
Am I missing something here?
Then I came up with the idea of moving artemis-users.properties and artemis-roles.properties to the $ARTEMIS_INSTANCE/data directory and modifying the login.conf file accordingly, so that I can create a user/role only once and the created entries are accessible from the other node(s):
activemq {
    org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule required
        debug=false
        reload=true
        org.apache.activemq.jaas.properties.user="../data/artemis-users.properties"
        org.apache.activemq.jaas.properties.role="../data/artemis-roles.properties";
};
Since this is a shared store, it makes sense to me to store them this way. I did quite a bit of testing and everything seems to work fine; I don't think there are going to be any race conditions this way.
Again, am I missing anything? Any suggestions or better workarounds?
The PropertiesLoginModule is provided by default because it is simple and straightforward to configure for basic use-cases. However, it's not really designed for production use across a cluster. Typically you'd use an LDAP server (or some equivalent) that acts as a central store for all your user & role data. As the documentation states:
In general, using properties files and broker-centric user management for anything other than very basic use-cases is not recommended. The broker is designed to deal with messages. It's not in the business of managing users, although that functionality is provided at a limited level for convenience. LDAP is recommended for enterprise level production use-cases.
You are, of course, free to use the PropertiesLoginModule in more complex use-cases (e.g. like yours). I think putting the properties files on shared storage is fine and not likely to lead to problems.
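For reference, switching to LDAP is just a different module in the same login.conf. An illustrative sketch only (the host, credentials, and search settings are placeholders; consult the Artemis security documentation for the full option list):

activemq {
    org.apache.activemq.artemis.spi.core.security.jaas.LDAPLoginModule required
        debug=false
        connectionURL="ldap://ldap.example.com:389"
        connectionUsername="cn=admin,dc=example,dc=com"
        connectionPassword=secret
        userBase="ou=users,dc=example,dc=com"
        userSearchMatching="(uid={0})"
        roleBase="ou=roles,dc=example,dc=com"
        roleSearchMatching="(member={0})"
        roleName=cn;
};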

Mark standalone redis as read-only

I want to mark a standalone Redis server (not a Redis Cluster, not Redis Sentinel) as read-only. I have been googling this for quite some time, but I don't seem to find a definitive answer (almost all answers point to Clustering or Sentinel). I was looking for some config modification (CONFIG SET something).
NOTE: config set replica-read-only yes does not make the current redis-server read-only, but only its replicas.
My use-case is basically a migration where, at some point, I want to make the redis-server read-only. My application code can handle failures whenever a write call happens, so that's not an issue.
Also, if this is not directly possible from the Redis server, is there something I can do in the client code that would have the same effect (I am using redis-py as the client library)? (Although this is less than ideal.)
Things that I've tried
Played around with config set replica-read-only yes and other configs. They don't seem to apply to the current redis-server.
Tried marking the redis-server as a replica of itself (this was illogical, but I just wanted to see if it worked); it turns out this deleted all the keys in my local Redis, so it's not something I can do.
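To illustrate the client-side option, this is roughly what I had in mind (a sketch using redis-py; the WRITE_COMMANDS set is illustrative and nowhere near complete):

import redis

# Illustrative, incomplete list of Redis write commands to reject.
WRITE_COMMANDS = {"SET", "SETEX", "DEL", "EXPIRE", "HSET", "LPUSH", "RPUSH",
                  "SADD", "ZADD", "INCR", "DECR", "FLUSHDB", "FLUSHALL"}

class ReadOnlyRedis(redis.Redis):
    def execute_command(self, *args, **kwargs):
        # In redis-py, args[0] is the command name, e.g. "SET" or "GET".
        if args and str(args[0]).upper() in WRITE_COMMANDS:
            raise redis.RedisError("client is read-only: %s rejected" % args[0])
        return super().execute_command(*args, **kwargs)

r = ReadOnlyRedis(host="localhost", port=6379)
r.get("some-key")         # reads pass through
# r.set("some-key", "x")  # would raise redis.RedisError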
Once the writes are done and you want to switch the node to read-only, there are a couple of ways to do that:
Modify redis.conf to have "min-replicas-to-write 3". Since you don't have 3 replicas, your node will stop accepting writes but will continue to serve reads, as shown below:
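(An illustrative redis-cli session; the NOREPLICAS error is Redis's standard response here, though the exact wording may vary by version.)

127.0.0.1:6379> SET somekey "newvalue"
(error) NOREPLICAS Not enough good replicas to write.
127.0.0.1:6379> GET somekey
"oldvalue"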
However, please note that after modifying redis.conf you will have to restart your Redis node for the change to take effect (running CONFIG SET min-replicas-to-write 3 instead should apply it at runtime without a restart).
Another way, when you want to switch to read-only mode, is to create a replica at that time, let it sync with the master, and then kill the master node. The replica will then remain as read-only.
There are several solutions you can try:
You can use the rename-command config to disable write commands. If you only want to disable a small number of commands, that's a good solution. However, since there are so many write commands, you might end up with a lot of configuration entries, and it's easy to miss some of them (see the sketch after this list).
If you're using Redis 6.0, you can use Redis ACL to disable write commands for specific users.
You can setup a read-only Redis replica for your master, and ask clients to read from the replica.
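Illustrative sketches of the first two options (the command list is deliberately short, and the ACL rule should be verified against your Redis version before use):

# redis.conf: renaming a command to the empty string disables it
rename-command SET ""
rename-command DEL ""
rename-command FLUSHALL ""

# Redis 6+ ACL: rebuild the default user with read commands only
ACL SETUSER default reset on nopass ~* +@read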

existdb: identify database server

We have a number of (developer) existDb database servers, and some staging/production servers.
Each have their own configuration, that are slightly different.
We need to select which configuration to load and use in queries.
The configuration is to be stored in an XML file within the repository.
However, when syncing the content of the servers, a single burnt-in XML file is not sufficient, since it is overwritten during copying from the other server.
For this, we need the physical name of the actual database server.
The only function I found, request:get-server-name, is not quite stable, since a single eXist server can be accessed through a number of different URLs (localhost, intranet, or external). That leads to unnecessary duplication of the configuration, one copy for each external URL...
(Accessing some local files in the file system is not secure and not fast.)
How to get the physical name of the existDb server from XQuery?
I'm sorry, but I don't fully understand your question. Are you talking about eXist's default conf.xml or your own configuration file that you need to store in a VCS repo? Should the XQuery be executed on one instance and trigger an event in all others, or just some, or...? Without some code it is difficult to see why and when something gets overwritten.
You could try console:jmx-token, which does not vary depending on the URL (at least it shouldn't).
Also, you might find it much easier to use a Docker-based approach, either with multiple instances coordinated via docker-compose, or to keep the individual configs from interfering with each other when moving from dev to staging to production: https://github.com/duncdrum/exist-docker
If I understand correctly, you basically want to be able to get the hostname or the IP address of a server from XQuery. If the functions in the XQuery Request module are not doing as you wish, then another option would be to set a Java System Property when starting eXist-db. This system property could be the internal DNS name or IP of your server, for example: -Dour-server-name=server1.mydomain.com
From XQuery you could then read that Java System property using util:system-property("our-server-name").
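For example (a sketch; the property name and the config document path are made up for illustration):

let $server := util:system-property("our-server-name")
return doc(concat("/db/config/", $server, ".xml"))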

DB job to generate/email Oracle report output

The task is to have an Oracle report generated daily, automatically, and e-mailed to a user.
So I've sort of got this working (it works if I hardcode one of the report server names below).
I created a job on the database that will generate the report. I'm able to get the report to email as a PDF to the destination with this command:
UTL_HTTP.REQUEST('http://server/reports/rwservlet?server=specific_report_server'
    ||'&report='||p_report_name
    ||'&userid='||p_connstring
    ||'&destype=mail'||p_parameters
    ||'&desname='||p_to_recipientlist
    ||'&cc='||p_cc_recipientlist
    ||'&bcc='||p_bcc_recipientlist
    ||'&subject=%22'||REPLACE(p_subject,' ','%20')||'%22'
    ||'&paramform=no&DESformat=pdf&ENVID='||p_envid);
That works great...
The problem, however, is that my organization has two report servers that are load balanced. Our server team could take down one of the servers without any real warning, so I can't just hardcode the report server name (the ?server= parameter above): it will work for a while, then stop working when that server goes down.
My server team asked me to look for a way to pull the server name from the formsweb.cfg file or from the default.env value within the job (there are parameters in there that hold the server name). The idea is that the "http://server" piece will direct the report to be run on the appropriate server, and the first part of the job could get the report server name from the config file on whichever server the report runs on. I'm not sure if this is possible from the database level, or how to do this. Any ideas?
Is there a better way that this can be done, perhaps?
If there are two load-balanced servers, that strongly implies that the network folks must have configured some sort of virtual IP (VIP) for the service. You (and everyone else) should be using that VIP rather than a specific server name.
For example, if you have two servers reportA.yourdomain.com and reportB.yourdomain.com, you would almost certainly create a VIP for reports.yourdomain.com that load balances between the two servers (and knows whether one of the servers is down or whether a new reportC server has been added). This VIP would either do the load balancing itself or would point to an actual physical load balancer that distributes the traffic. All applications would reference the reports.yourdomain.com VIP rather than any hard-coded server names.

Magento Cache

I am using memcache.
I want to understand what is stored in the Magento cache, and how.
Does Magento store cache variables with website scope or store scope?
I have googled and grepped the code but couldn't conclude anything. Could someone direct me to the correct links and paths?
If you go to the Cache Management section of the admin area you can see what it caches (configuration, layout configuration, block HTML output, translations, EAV types, etc.). I am no expert on Magento's caching mechanisms, but here are a few random tidbits that might be helpful (maybe). (Also note that I am only familiar with Magento 1.3.x, not 1.4.x, so things could have changed.)
The caching is actually stored in the var/cache directory. There are a ton of directories in there (mage--0, mage--1, mage--2) and each directory has the cache files. Do an ls var/cache/mage*/* to see all the files.
Configuration - The source for the configuration is varied. Your app/etc/local.xml and all of the config.xml files (in each module's etc dir) are combined together to make one big configuration object. Then Magento reads from the core_config_data table to update the configuration object. Then the configuration is written to a cache file so that the next time a request is made it doesn't need to open a ton of config files and hit the database. Somehow this info gets stored in a bunch of files under var/cache. For some insight, do an ls var/cache/mage*/*CONF*.
Layout - This is a lot like the configuration... there are a bunch of xml files in the app/design/frontendOrAdminhtml/yournamespace/layout/ directory and all these are merged into one layout configuration object, then cached in the cache directory.
Block HTML - The actual html generated by a block is cached. Each block is able to decide how long it is going to be cached.
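For instance, a block can opt in to caching from its constructor (a Magento 1 sketch; the lifetime value here is arbitrary):

protected function _construct()
{
    $this->addData(array(
        'cache_lifetime' => 3600, // seconds this block's HTML stays cached
        'cache_tags'     => array(Mage_Core_Model_Store::CACHE_TAG),
    ));
}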
Lastly, to (not really) answer your question about whether the cache is per website or per store, I can't really say, since I haven't had the need to set up a multi-website/multi-store shop yet. It looks like there may be some store/website-specific files, but I can't see that they are really organized in a logical way. For example, in one of my instances I see a var/cache/mage--f/mage---LAYOUT_FRONTEND_STORE0_DEFAULT_BLANK_SEO file and a var/cache/mage--f/mage---LAYOUT_FRONTEND_STORE1_DEFAULT_BLANK_SEO... but then again, I only have one store configured and those two files have the same contents. Good luck with that!
You could also use some of the very good memcached analysis and reporting tools available:
http://code.google.com/p/memcached/wiki/Tools
The best solution I have come up with is to use a two-level cache.
Consult app/etc/local.xml.additional to see how to put memcached server nodes in there. Note that within the <servers> tag you will have to have tags like <server1> and <server2> encapsulating each memcached node's settings.
<cache>
<backend>memcached</backend>
<slow_backend>database</slow_backend>
</cache>
In this way all cache is shared.
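Put together, the relevant local.xml fragment might look something like this (a sketch; the hosts and ports are placeholders, and the exact node names should be checked against app/etc/local.xml.additional):

<cache>
  <backend>memcached</backend>
  <slow_backend>database</slow_backend>
  <memcached>
    <servers>
      <server1>
        <host>10.0.0.1</host>
        <port>11211</port>
        <persistent>1</persistent>
      </server1>
      <server2>
        <host>10.0.0.2</host>
        <port>11211</port>
        <persistent>1</persistent>
      </server2>
    </servers>
  </memcached>
</cache>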
To clear it, the way I do it is:
1. Shut down Apache.
2. Connect to MySQL, connect to the Magento DB, and run truncate core_cache; truncate core_cache_tag;.
3. I then bounce the memcached nodes.
4. I restart Apache, but I keep it out of the load balancer until I have hit it at least once to generate the APC opcode cache. Otherwise the load can shoot up through the roof.
This all seems extreme, but I have found it works for me. Clearing the cache using the backend is REALLY slow. I have around 100k entries in the core_cache table and close to 1 million entries in core_cache_tag. If I don't do it this way, sometimes I get strange behavior.
Your Memcache configuration in ./app/etc/local.xml will dictate what Memcache is actually caching.
If you are only using the single-level cache (without <slow_backend>), then Magento will store its cache (in its entirety) in Memcache.
HOWEVER, without the <slow_backend> defined, it is caching content without cache tags, i.e. without the ability to differentiate cache items (e.g. configuration, blocks, layouts, translations, etc.).
So, without the <slow_backend> defined, you cannot refresh caches individually; in fact, you'll almost always have to rely on "Flush Cache Storage" to actually see updates take effect.
We wrote a nice article here which covers your very issue - http://www.sonassi.com/knowledge-base/magento-knowledge-base/what-is-memcache-actually-caching-in-magento/
Memcached is a distributed memory caching system. It speeds up websites with large dynamic databases by storing database objects in memory, reducing the pressure on the server whenever an external data source requests a read. A Memcached layer reduces the number of times database requests are made.
Configure Memcached in Magento 2
Magento 2 also supports Memcached for caching objects but it isn’t enabled by default. You need to make simple changes to the $Magento2Root/app/etc/env.php file to enable it.
In env.php, you will see a large number of PHP arrays with different settings and configurations. Open the file in your favorite code editor and locate the following code:
'session' =>
array (
'save' => 'files',
),
Modify this chunk as:
'session' =>
array (
'save' => 'memcached',
'save_path' => '<memcache ip or host>:<memcache port>'
),
Note that the default value for the memcache host is 127.0.0.1 and the default port is 11211 (i.e. 127.0.0.1:11211).
For the complete manual, please see:
https://www.cloudways.com/blog/magento-2-memcached/
https://devdocs.magento.com/guides/v2.4/config-guide/memcache/memcache_magento.html
