Sitecore Remote Publishing Cache Issue

I have an AUTHORING machine, a FAILOVER machine, and a PUBLIC machine. AUTHORING points to both FAILOVER and PUBLIC as remote publishing targets.
When I publish to all targets, the content is immediately visible on FAILOVER. However, I have to manually clear the cache on PUBLIC before the new content is viewable by visitors.
I'm hoping this is a simple configuration issue that someone can point me to an answer for.
Many thanks!

Most likely the cache-clearing configuration differs between FAILOVER and PUBLIC. It's difficult to say exactly based on the info you provided, but the links below might help or at least give a hint (see the sketch after the links for the kind of handler to compare):
Problem with publishing items and not seeing it until hours later
Clear Cache on Publish
Clear cache on publishing target - without staging module
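The thing worth diffing on the two delivery servers is the publish-related event handling in web.config. On a standard install the HTML cache is cleared by Sitecore.Publishing.HtmlCacheClearer hooked to the publish events; on remote instances the relevant event is publish:end:remote (and with the Staging module its own handlers come into play as well). A rough sketch of the section to compare, assuming a default Sitecore 6.x web.config with 'website' as a placeholder site name; the exact wiring varies by version:
<event name="publish:end:remote">
  <handler type="Sitecore.Publishing.HtmlCacheClearer, Sitecore.Kernel" method="ClearCache">
    <sites hint="list">
      <site>website</site>
    </sites>
  </handler>
</event>
If the handler is missing on PUBLIC, or its list of sites is empty there while FAILOVER has it, that would explain the behaviour you describe.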

Instead of separate FAILOVER and PUBLIC targets, why not have both servers share a single Web database? With a load balancer in front you'd get not only redundancy but also increased performance.

Check the staging module logs to make sure the cache clear is successful on both servers following a publish. They can be found at:
\sitecore modules\staging\workdir
Perhaps there is a network or security error that is preventing the cache clear from working properly on PUBLIC?

You might need to check your config to see if the history engine is set up and configured for the publishing-target database. For example:
<configuration>
  <sitecore>
    ...
    <databases>
      <database id="webtarget">
        ...
        <Engines.HistoryEngine.Storage>
          <obj type="Sitecore.Data.$(database).$(database)HistoryStorage, Sitecore.Kernel">
            <param connectionStringName="$(id)" />
            <EntryLifeTime>30.00:00:00</EntryLifeTime>
          </obj>
        </Engines.HistoryEngine.Storage>
      </database>
    </databases>
    ...
    <hooks>
      <hook type="Sitecore.Modules.Staging.InitializeEngines, Staging.Kernel" />
    </hooks>
    ...
  </sitecore>
</configuration>
Review the staging module documentation.

Related

ActiveMQ Artemis HA & users/roles - am I supposed to create user/role on each node separately?

I have an ActiveMQ Artemis cluster (2 nodes) with active/backup HA in shared-store mode, version 2.17.0.
The shared store is set up on NFS, mounted at $ARTEMIS_INSTANCE/data. In broker.xml I specified the following settings, which are pretty standard:
<paging-directory>data/paging</paging-directory>
<bindings-directory>data/bindings</bindings-directory>
<journal-directory>data/journal</journal-directory>
<large-messages-directory>data/large-messages</large-messages-directory>
According to this official documentation page, there is a login.conf file in the etc directory which specifies the users and roles files. Mine has the following contents:
activemq {
   org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule required
       debug=false
       reload=true
       org.apache.activemq.jaas.properties.user="artemis-users.properties"
       org.apache.activemq.jaas.properties.role="artemis-roles.properties";
};
Well, everything seems to work fine with it, but I noticed that every time I want to create a new user or role, I have to create it twice, once on each node separately. If I had replication HA mode and 6 nodes, I would need to create the same user/role 6 times (once per node).
Am I missing anything here?
Then I came up with the idea of simply moving artemis-users.properties and artemis-roles.properties into the $ARTEMIS_INSTANCE/data directory and modifying login.conf accordingly, so that I only create a user/role once and the created entries are accessible from the other node(s):
activemq {
   org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule required
       debug=false
       reload=true
       org.apache.activemq.jaas.properties.user="../data/artemis-users.properties"
       org.apache.activemq.jaas.properties.role="../data/artemis-roles.properties";
};
Since this is a shared store, it kind of makes sense to me to store them this way. I did quite a bit of testing and everything seems to work fine; I don't think there will be any race conditions this way.
Again, am I missing anything? Any suggestions or better workarounds?
The PropertiesLoginModule is provided by default because it is simple and straightforward to configure for basic use-cases. However, it's not really designed for production use across a cluster. Typically you'd use an LDAP server (or some equivalent) as a central store for all your user and role data. As the documentation states:
In general, using properties files and broker-centric user management for anything other than very basic use-cases is not recommended. The broker is designed to deal with messages. It's not in the business of managing users, although that functionality is provided at a limited level for convenience. LDAP is recommended for enterprise level production use-cases.
You are, of course, free to use the PropertiesLoginModule in more complex use-cases (e.g. like yours). I think putting the properties files on shared storage is fine and not likely to lead to problems.
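If you do eventually outgrow the properties files, the move to LDAP is mostly a login.conf change: replace the PropertiesLoginModule with the LDAPLoginModule that ships with the broker. A rough sketch only; the host, DNs and search filters below are placeholders, and the full property list should be checked against the security chapter of your broker version's documentation:
activemq {
   org.apache.activemq.artemis.spi.core.security.jaas.LDAPLoginModule required
       debug=false
       connectionURL="ldap://ldap.example.com:389"
       connectionUsername="cn=admin,dc=example,dc=com"
       connectionPassword=secret
       authentication=simple
       userBase="ou=users,dc=example,dc=com"
       userSearchMatching="(uid={0})"
       userSearchSubtree=true
       roleBase="ou=roles,dc=example,dc=com"
       roleName=cn
       roleSearchMatching="(member={0})"
       roleSearchSubtree=true;
};
That keeps user and role management in one central place regardless of how many live/backup nodes the cluster has.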

How to clear cache in Pentaho

I am using Pentaho 5. My dimensions change frequently and I need those changes reflected in the dashboard, but that isn't happening because Pentaho keeps caching. I created the cube using the data source wizard and the queries using MDX over a Mondrian JNDI connection. Even setting the Cache property to false, or setting a cache duration, doesn't seem to work. Is there an API I can use with Mondrian JNDI to clear the cache? Or are there any property files I should change? Please help.
In Pentaho 7 the "Clear Cache" option is in a different menu:
Tools -> Database -> Clear Cache
If you are using a Database Lookup and you are getting stale field values, clearing the cache can actually solve the problem.
It worked for me.
You can do it manually inside the Pentaho User Console: Tools -> Refresh -> Mondrian Schema Cache.
Or you can schedule the cache refresh: find clear_mondrian_schema_cache.xaction inside your installation and schedule it.
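If you'd rather trigger that same Mondrian schema cache refresh over HTTP (for example from a cron job or deployment script), recent BA Server releases also expose it as a REST call; the path below is an assumption to verify against your version's API documentation:
http://localhost:8080/pentaho/api/system/refresh/mondrianSchemaCache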
Option 1:
You can read http://javadoc.pentaho.com/bi-platform500/webservice500/ for API details.
You can also refresh the Reporting Metadata Cache via a web service, using the following call:
http://localhost:8080/pentaho/api/system/refresh/metadata
Option 2: Navigate to \biserver-ee\tomcat\webapps\pentaho\WEB-INF\classes
and edit the configuration file "ehcache.xml":
<cache name="report-dataset-cache"
maxElementsInMemory="50"
eternal="false"
overflowToDisk="false"
timeToIdleSeconds="1"
timeToLiveSeconds="2"
diskPersistent="false"
diskExpiryThreadIntervalSeconds="1"
/>
If you have done that and still see no improvement, you probably have not restarted the BA server. If the issue still exists, comment below.
There are 2 options.
One is to schedule "clear mondrian schema" on the BA server, but for that to happen you need to get clear_mondrian_schema.xml from the pentaho-solutions/systems folder and upload it to some folder you can access on the BA server. You can then use the normal file scheduling options to achieve what you want. This puts a lot of load on the BA server, though.
My second recommendation: if you are building cubes/schemas with Schema Workbench, you can turn caching off there. If your database is architecturally sound and your schemas are all well defined, users will get updated/new data as soon as they refresh.

Azure "No deployments were found" error message

I went to deploy over an existing Cloud Service (in staging) and received the following message:
"Error: No deployments were found. Http Status Code: NotFound"
Does anyone know what this means?
I am looking at the Cloud Service, and it surely exists.
UPDATE:
I've been using the same deploy method as in prior (successful) efforts: I simply right-click the cloud service in Visual Studio 2013. In the Windows Azure Publish Summary, I set the correct cloud service name, the Staging environment, and the Release configuration... and press Publish. Nothing special really, which is why I am perplexed.
You may have exceeded the maximum number of cores allowed on your Azure subscription. Either remove unneeded deployments or ask Microsoft to increase the maximum allowed cores on your Azure subscription.
Since I had this problem and none of the answers above were the cause... I had to dig a little bit more. The RoleName specified in the Role tag must of course match the one in the EndpointAcl tag.
<Role name="TheRoleName">
  <Instances count="1" />
</Role>
<NetworkConfiguration>
  <AccessControls>
    <AccessControl name="ac-name-1">
      <Rule action="deny" description="TheWorld" order="100" remoteSubnet="0.0.0.0/32" />
    </AccessControl>
  </AccessControls>
  <EndpointAcls>
    <EndpointAcl role="TheRoleName" endPoint="HTTP" accessControl="ac-name-1" />
    <EndpointAcl role="TheRoleName" endPoint="HTTPS" accessControl="ac-name-1" />
  </EndpointAcls>
</NetworkConfiguration>
UPDATE
It seems that the previous situation is not the only one causing this error.
I ran into it again now due to a related but still different mismatch.
In the file ServiceDefinition.csdef, the <WebRole name="TheRoleName" vmsize="Standard_D1"> tag must have a vmsize that exists (of course!), but according to Microsoft here (https://azure.microsoft.com/en-us/documentation/articles/cloud-services-sizes-specs/) the value Standard_D1_v2 should also be accepted.
In my case it was causing this same error... once I removed the _v2 it worked fine.
Conclusion: every time something is wrong in the Azure configs, this error message might come along... it is then necessary to find out where it came from.
Just to add some info.
The same occurred to me: my VM size was set to a size that was "wrong".
I have multiple subscriptions; I was pointing at one of them and using a "D2" machine. I don't know what happened, but the information was refreshed and that machine size disappeared as an option. I then selected "Large" (the old naming) and it worked well.
Lost 6 hours trying to upload this #$%#$% package.
I think the problem can be related to any VM-size mismatch.
I hit this problem after resizing my role from small to extra-small. I still had the Local Storage set to the default of 20GB, which an extra-small instance can't hold. I ended up reducing it to 100MB and the deployment worked (the role I'm deploying is in maintenance mode only for a couple of months, so I don't care much about getting diagnostics from it).
A quick tip: I was getting nowhere debugging this with Visual Studio's error message. On a whim, I switched to the Azure portal and manually uploaded the package there. That finally gave me a useful error: the VM size was too small for the resources I had requested.
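For reference, the local storage allocation mentioned above is declared in ServiceDefinition.csdef; a minimal sketch, with the role name, resource name and size as placeholders (the defaults your tooling generates may differ):
<WebRole name="TheRoleName" vmsize="ExtraSmall">
  <LocalResources>
    <LocalStorage name="LocalCache" sizeInMB="100" cleanOnRoleRecycle="false" />
  </LocalResources>
</WebRole>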
I encountered this error during the initial deployment of a Cloud Service that required a specific SSL Certificate... that was missing from Azure.
Corrected the certificate - deploy succeeded.
(After the first deployment Visual Studio provides a meaningful error in this case.)

Windows Azure cache error - "Cache referred to does not exist. Contact administrator or use the Cache administration tool to create a Cache."

I have an application in the Windows Azure cloud and I'm using the Windows Azure co-located cache.
Sometimes, when I publish the website/web service, this error appears when I call the DataCacheFactory.GetCache method:
Cache referred to does not exist. Contact administrator or use the Cache administration tool to create a Cache.
The problem can go away after a few moments, but sometimes it never does, and then I need to publish the projects again.
The stacktrace is:
Microsoft.ApplicationServer.Caching.DataCache.ThrowException(ErrStatus errStatus, Guid trackingId, Exception responseException, Byte[][] payload, EndpointID destination)
at Microsoft.ApplicationServer.Caching.DataCacheFactory.EstablishConnection(IEnumerable`1 servers, RequestBody request, Func`3 sendMessageDelegate, DataCacheReadyRetryPolicy retryPolicy)
at Microsoft.ApplicationServer.Caching.SocketClientProtocol.Initialize(IEnumerable`1 servers)
at Microsoft.ApplicationServer.Caching.DataCacheFactory.GetCache(String cacheName, CreateNewCacheDelegate cacheCreationDelegate, DataCacheInitializationViaCopyDelegate initializeDelegate)
See whether this link can help you:
http://www.windowsazure.com/en-us/develop/net/how-to-guides/cache/#comment-743576866
We were missing the required blob storage container on local dev storage. After creating the following container, 'cacheclusterconfigs', everything seems to be working now.
The 'cacheclusterconfigs' container will be created by the service internally; you may have accidentally deleted it.
Note: IMO, please verify the cache name. By default you will be using the cache named 'default'.
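For completeness: with the co-located cache, the client side is configured through the dataCacheClients section that the Caching NuGet package adds to web.config, and GetCache must be called with a cache name that actually exists (anything other than 'default' has to be defined as a named cache on the role). A rough sketch, with the role identifier as a placeholder:
<dataCacheClients>
  <dataCacheClient name="default">
    <autoDiscover isEnabled="true" identifier="MyCacheRole" />
  </dataCacheClient>
</dataCacheClients>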

Sitecore proxy items published, still seem to have a link to the source

On the project I am working on, there are some proxy items that were added at some point from source location A to location B. However, right now it is not possible to check the source of the proxy, and the proxy folder in B does not show anything that suggests it's a proxy, besides the visual cue that it's grayed out.
When I analysed this article, I looked into the web.config and found this:
<proxiesEnabled>false</proxiesEnabled>
<publishVirtualItems>true</publishVirtualItems>
This seems to suggest that when the proxies were published they were published as regular items, losing any connection to their source. Since I want to recreate the proxies (due to some weird issues where layout changes on the template's standard values item were not propagating correctly to the proxied items), I wanted to rename the old proxy folder and create a new one. However, when I tried to rename it I got a modal alert with this message:
"This item occurs in other locations. If you rename it, the item will be renamed in the other locations as well. Are you sure you want to rename 'MyFoo'?"
Does this mean the item is still attached to the source?
I am using Sitecore 6.2.0 (rev. 100701)
I suppose the settings you mentioned are for the master database. Now if you take a closer look at the article you reference, it lists 2 valid cases of proxy setup:
when the web database also relies on proxies
when the web database contains only regular items which came from publishing
Both cases assume that the master database has proxiesEnabled='true'. It doesn't make any sense otherwise: if proxies are disabled, the rest of the mechanism doesn't work and there are no virtual items.
And I can see proxiesEnabled='false' in the example you mentioned.
I'm not sure about the message you get. But if I needed to change the proxy definition, I would do the following:
make sure proxiesEnabled='false' for the web database (I guess this is your intention)
disable proxies for the master database and change the proxy definitions the way you want
set publishVirtualItems to true for the master database
turn proxies back on for the master database
make sure the virtual items are in place and publish the site
Try this in a test environment and experiment until you get the behavior you'd like; playing with the live site is bad karma :)
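For reference, both settings live on each <database> element in web.config, so the end state after the steps above translates roughly to the sketch below (placeholders only; check the actual defaults in your own install before changing anything):
<databases>
  <database id="master">
    ...
    <proxiesEnabled>true</proxiesEnabled>
    <publishVirtualItems>true</publishVirtualItems>
  </database>
  <database id="web">
    ...
    <proxiesEnabled>false</proxiesEnabled>
  </database>
</databases>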
