Our Joomla site is running 1.7.3 and has performance issues when a number of users are online, as well as database corruption issues. The table being corrupted is the _session table.
I would like to try turning off session handling by setting the session handler in Joomla Global Configuration from "Database" to "None".
Can this cause other issues? What are the possible consequences of doing this?
Thanks,
ken
To answer your direct question: yes, there will be lots of problems from turning session handling off, especially in areas of interactivity with users. Most things will break, such as:
Any /administrator functionality
Registering users
Forms
Polls
Front-end article editing, etc.
Anything like JomSocial or similar products
The corruption in #__session is usually caused by failed writes to the DB because the host isn't keeping up with the load. If you are getting these problems during high-load periods, you will have to consider a better hosting package/service.
More importantly, the 1.7.x series is no longer supported. You should upgrade to 2.5.3, as it fixes a very nasty pair of exploits that leave all prior versions vulnerable to hackers.
If you set the session handler to "None", Joomla will use the session handler that is built into PHP.
If PHP is installed and configured properly, setting the session handler to "None" shouldn't cause any issues.
When using a load-balancing cluster that isn't session-aware, you would want to use the database option so that all servers can access the session data from the database.
In all other cases you can use the "None" option, which should (theoretically) be faster because the sessions are stored on the local server and don't have the overhead of setting up a database connection. Additionally, I believe PHP keeps the session file cached in memory, which would mean it can access the session data virtually instantaneously.
We recently had a Windows 10 upgrade to patch version 10.0.18362. Before this, we were able to connect to any URL.
The aftermath is that we are now unable to connect to a website hosted behind a security device (a WAF, or web application firewall). The CI_session cookie value is now in a different format, like session_id%22%3Bs%3A32%3A%2
Just wondering: does an operating system change the HTTPS request format after a client sends it to the server?
I need help on this and am not sure how best to frame the question.
There seem to be two separate paths going on here:
Windows 10 patches
A WAF implemented in front of your web app (is it F5's ASM, Advanced WAF, or NGINX WAF? F5 was tagged).
That version of Windows by itself doesn't have issues connecting to URLs. If you have a VPN in the mix, that does introduce changes that could affect your browsing sessions (split-DNS versus full tunnel and such).
If the WAF is the potential culprit (using F5 as the example since it was tagged) and there are blocking events, then by default the WAF will give you a message stating the request was blocked, along with an error code.
When rolling out a WAF policy in front of an application, the standard process is to run in transparent mode while learning the application, so the WAF understands that application's default behavior (if going beyond default attack signatures). If the application changes, it's standard practice to rerun learning and update the WAF policy as needed (usually during the test/staging process).
Regardless, the WAF would generate warning or blocking events that would be visible in analytics and logging, and a blocking page would be presented to the blocked user (unless disabled, which makes for a bad experience anyway).
Moving beyond the WAF aspect: if the application is indeed behind a BIG-IP, there may be load-balancing methods involved that use cookie persistence for the session. The F5 BIG-IP will use a cookie insert or rewrite, which clients use until the cookie/session expires (expiration is based on the persistence settings within the BIG-IP; more on that in AskF5 K6917).
Depending on which system is responsible for the session, A) you should not see two separate CI_session cookies, and B) the BIG-IP would be responsible for maintaining session state to the back-end node.
Your client COULD be connecting to two back-end nodes and receiving two separate sessions that are independent of each other. If that's possibly the case, then you need to investigate how the F5 BIG-IP is determining persistence.
If another persistence method was used, you'll need to find out and resolve it with the BIG-IP admins/app owners (see the examples of persistence methods for BIG-IP v15). Either way, you'll need to find out how the application is deployed behind the BIG-IP and whether that changed. If the answer can't be found in F5's DevCentral community or on AskF5, then a support ticket should be created. Cookie persistence on BIG-IP isn't difficult to implement, but it all depends on how the application behaves.
If not, gather some more details and I can update this answer. Hope this helps you at least understand the WAF and BIG-IP load-balancing methods.
I would like to build an LDAP cache with the following goals:
Decrease connection attempts to the LDAP server
Read from the local cache if the entry exists and is still valid
Fetch from LDAP if the entry has never been requested before or the cached entry is invalid
Currently I am using the UnboundID LDAP SDK to query LDAP, and it works.
After doing some research, I found a persistent search example that may work. When an entry is updated on the LDAP server, it is passed to searchEntryReturned, so updating the cache is possible.
https://code.google.com/p/ldap-sample-code/source/browse/trunk/src/main/java/samplecode/PersistentSearchExample.java
http://www.unboundid.com/products/ldapsdk/docs/javadoc/com/unboundid/ldap/sdk/AsyncSearchResultListener.html
But I am not sure how to do this since it is asynchronous. Or is there a better way to implement the cache? Examples and ideas are greatly welcomed.
The LDAP server is Apache DS, and it supports persistent search.
The program is a JSF2 application.
I believe that Apache DS supports the content synchronization controls defined in RFC 4533. These controls can be used to implement a kind of replication or data synchronization between systems, and caching is a fairly common use of them. The UnboundID LDAP SDK supports these controls (http://www.unboundid.com/products/ldap-sdk/docs/javadoc/index.html?com/unboundid/ldap/sdk/controls/ContentSyncRequestControl.html). I'd recommend looking at those controls and the information in RFC 4533 to determine whether they might be more appropriate.
Another approach might be to see if Apache DS supports an LDAP changelog (e.g., in the format described in draft-good-ldap-changelog). This lets you retrieve information about entries that have changed so they can be updated in your local copy. By periodically polling the changelog for new changes, you can consume change information at your own pace (including changes made while your application was offline).
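For illustration, a minimal polling sketch with the UnboundID SDK, assuming a draft-good-ldap-changelog style changelog under cn=changelog (the base DN and attribute names depend on the server's changelog implementation, so treat them as assumptions):

```java
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.SearchResult;
import com.unboundid.ldap.sdk.SearchResultEntry;
import com.unboundid.ldap.sdk.SearchScope;

public class ChangelogPoller {
  // Fetch changes newer than lastChangeNumber and return the highest
  // change number seen, so the caller can persist it between polls.
  static long pollOnce(final LDAPConnection conn, final long lastChangeNumber)
      throws LDAPException {
    long highest = lastChangeNumber;
    final SearchResult result = conn.search("cn=changelog", SearchScope.ONE,
        "(changeNumber>=" + (lastChangeNumber + 1) + ")",
        "changeNumber", "targetDN", "changeType", "changes");
    for (final SearchResultEntry change : result.getSearchEntries()) {
      highest = Math.max(highest,
          change.getAttributeValueAsLong("changeNumber"));
      // Re-read and re-cache the changed entry here (cache code omitted).
      System.out.println(change.getAttributeValue("changeType") + " "
          + change.getAttributeValue("targetDN"));
    }
    return highest;
  }
}
```

Calling pollOnce on a timer (and persisting the returned change number) gives you the "consume at your own pace" behavior, including catching up after downtime.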
Although persistent search may work in your case, a few issues might make it problematic. First, you get no control over the rate at which updated entries are sent to your client; if the server can apply changes faster than the client can consume them, the client can be overwhelmed (which has been observed in a number of real-world cases). Second, a persistent search tells you which entries were updated, but not what changes were made to them. For a cache this may not matter much, because you'll just replace your copy of the entire entry, but it's less desirable in other cases. Another big problem is that a persistent search only returns information about entries updated while the search was active: if your client is shut down or the connection becomes invalid for some reason, there's no easy way to learn about changes made while the client was in that state.
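That said, since the question asks how to wire up the asynchronous listener, here is a minimal sketch of the persistent-search approach using the UnboundID SDK. The host, port, base DN, and the plain-map cache are placeholder assumptions (10389 is Apache DS's default port):

```java
import java.util.concurrent.ConcurrentHashMap;

import com.unboundid.ldap.sdk.AsyncRequestID;
import com.unboundid.ldap.sdk.AsyncSearchResultListener;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.SearchRequest;
import com.unboundid.ldap.sdk.SearchResult;
import com.unboundid.ldap.sdk.SearchResultEntry;
import com.unboundid.ldap.sdk.SearchResultReference;
import com.unboundid.ldap.sdk.SearchScope;
import com.unboundid.ldap.sdk.controls.PersistentSearchChangeType;
import com.unboundid.ldap.sdk.controls.PersistentSearchRequestControl;

// Listener that keeps a DN-keyed cache up to date as the server pushes
// changed entries back over the persistent search.
public class CacheUpdateListener implements AsyncSearchResultListener {
  private final ConcurrentHashMap<String, SearchResultEntry> cache =
      new ConcurrentHashMap<>();

  @Override
  public void searchEntryReturned(final SearchResultEntry entry) {
    // Called for the initial result set and for every later change.
    cache.put(entry.getDN(), entry);
  }

  @Override
  public void searchReferenceReturned(final SearchResultReference ref) {
    // Referrals are not cached in this sketch.
  }

  @Override
  public void searchResultReceived(final AsyncRequestID requestID,
                                   final SearchResult result) {
    // A persistent search normally only completes when it is abandoned
    // or the connection is lost; re-establish the search here.
  }

  public SearchResultEntry get(final String dn) {
    return cache.get(dn);
  }

  public static void main(final String[] args) throws LDAPException {
    final LDAPConnection conn =
        new LDAPConnection("ldap.example.com", 10389);
    final CacheUpdateListener listener = new CacheUpdateListener();

    final SearchRequest request = new SearchRequest(listener,
        "ou=people,dc=example,dc=com", SearchScope.SUB, "(objectClass=*)");
    // changesOnly=false also returns the current entries so the cache
    // starts warm; returnECs=true attaches entry change notifications.
    request.addControl(new PersistentSearchRequestControl(
        PersistentSearchChangeType.allChangeTypes(), false, true));

    final AsyncRequestID id = conn.asyncSearch(request);
    // ... application runs; eventually conn.abandon(id) and conn.close().
  }
}
```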
Client-side caching is generally a bad thing, for many reasons. It can serve stale data to applications, which has the potential to cause incorrect behavior or in some cases pose a security risk, and it's absolutely a huge security risk if you're using it for authentication. It can also pose a security risk if not all of the clients have the same level of access to the data in the cache. Further, implementing a cache in each client application isn't a scalable solution, and if you were to share a cache across multiple applications, you might as well just make it a full directory server instance. It's much better to use a server that can simply handle the desired load without any additional caching.
I'm doing some research to find the best option for session management in a multi-server environment and was wondering what people have found successful, and why. Pros and cons:
RDBMS - Slower. Better used for other data.
Memcached - You can't take down a memcached server without losing sessions
Redis - Fixes the problem of memcached, but what about ease of scalability? Fault tolerance?
Cassandra - Has good fault tolerance. Pros and cons?
MongoDB, Others?
Thanks!
Personally, I use Cassandra to persist PHP session data. It stores the data in a single column on a single row, as session_id:{session_data_as_json}, and I set a TTL on the column so that garbage cleanup happens automatically. Works a treat.
I went with Cassandra as it already holds all the other user data... For caching, I enabled APC on all front-end web servers and haven't had any issues...
Is this the best approach? Not sure. It was fit for purpose for the environment, technologies, and business rules I needed to fulfill. ...
Side note: I did start working on a native PHP -> Cassandra session handler: https://github.com/sdolgy/php-cassandra-sessions -- it shows how the TTLs are set with PHPCassa and Cassandra
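For readers not on PHP, the same column-TTL idea can be sketched with the DataStax Java driver. The sessions keyspace/table here is hypothetical, and 1440 seconds mirrors PHP's default session lifetime:

```java
import com.datastax.oss.driver.api.core.CqlSession;

public class SessionStore {
  public static void main(final String[] args) {
    // Hypothetical schema, created beforehand:
    //   CREATE TABLE sessions.php_sessions (
    //     session_id text PRIMARY KEY, data text);
    try (CqlSession session = CqlSession.builder().build()) {
      // USING TTL makes Cassandra expire the row itself, which is the
      // "automatic garbage cleanup" described above.
      session.execute(
          "INSERT INTO sessions.php_sessions (session_id, data) "
              + "VALUES (?, ?) USING TTL 1440",
          "abc123", "{\"user_id\":42}");
    }
  }
}
```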
Redis - Fixes the problem of memcached, but what about ease of scalability? Fault tolerance?
Redis supports replication, and the upcoming Redis Cluster should also support sharding of data across multiple nodes.
A bit late, but maybe someone is interested in a follow-up. We are using Cassandra as our session store and access it via spring-session (with a home-grown spring-session-cassandra add-on). Objects in the session are marshalled/unmarshalled via Kryo (https://github.com/EsotericSoftware/kryo).
This setup gives us a session get between 1 and 2 ms and a save under 1 ms, although depending on the ring load there are some outliers in the response time.
What should we take care of before moving an application from a single WebSphere Application Server to a WebSphere cluster?
This is my list from experience. It is not complete but should cover the most common problem areas:
Plan ahead for the distributed session management configuration (i.e., will you use memory-to-memory or database-based replication). Note that if you are still on a 32-bit platform, the resource overhead of clustering might cause instability if your application already uses lots of memory.
Make sure that everything you put into user sessions can be serialized with the default serializer (i.e., implements Serializable); otherwise you might run into problems with distributed sessions (see the sketch after this list).
The same goes for everything you put into DynaCache. Make sure everything serializes properly.
Make sure all resource definitions (JDBC providers, etc.) are created at the proper scope. I would usually recommend using the actual cluster scope for everything used by the applications installed to the cluster. That ensures the test-connection features work from the proper points and that you don't create conflicting definitions.
Make sure your application uses relative paths for resources in web interfaces. Once you start load balancing, you can run into serious problems if a lot of paths are bolted down.
If you have any sort of timers, make sure they work well in a cluster. With Quartz, that probably means using the JDBC store for timer tasks. With EJB timers, make sure you register each timer only once (it is possible to corrupt the WAS timer database if several nodes attempt the registration at exactly the same time) and install them at cluster scope.
Make sure you use the WAS-provided SSO mechanisms. If you have a custom implementation, make sure it handles moving a user between servers in the cluster well.
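On the serialization point above: a quick way to catch non-serializable session attributes early is to round-trip them through an ObjectOutputStream, which simulates what memory-to-memory or database replication does. CartItem is a made-up example class:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Anything placed in an HttpSession that will be replicated must
// serialize cleanly with the default serializer.
public class CartItem implements Serializable {
  private static final long serialVersionUID = 1L;

  private final String sku;
  private final int quantity;
  // A non-serializable field (e.g. a raw JDBC Connection) would have to
  // be transient, or session replication will fail at runtime.

  public CartItem(final String sku, final int quantity) {
    this.sku = sku;
    this.quantity = quantity;
  }

  public static void main(final String[] args) throws IOException {
    // Serialize a sample instance; a NotSerializableException here means
    // the object cannot survive distributed sessions.
    try (ObjectOutputStream out =
             new ObjectOutputStream(new ByteArrayOutputStream())) {
      out.writeObject(new CartItem("A-100", 2));
    }
    System.out.println("CartItem serializes cleanly");
  }
}
```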
Keep it simple: depending on your requirements, try configuring your load balancer to use sticky sessions and avoid holding state in the HTTP session. That way you don't need resource-hungry in-memory session replication.
Single sign-on isn't an issue for a single cluster, as your HTTP clients will not be moving off the same http://server.acme.com/... host name.
Most of your testing should focus on database contention. If you have a highly transactional application (i.e., many writes to the same tables), look at your database isolation levels so that locks are not held unnecessarily. The same goes for your transaction demarcation: keep transactions as brief as possible. If you don't have database skills yourself, get a database analyst to help you monitor the database while you test.
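To illustrate the isolation and demarcation advice, a minimal JDBC sketch (the table and DataSource are placeholders; under container-managed transactions you would configure isolation on the WAS data source rather than in code):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class OrderDao {
  private final DataSource dataSource; // e.g. looked up from JNDI

  public OrderDao(final DataSource dataSource) {
    this.dataSource = dataSource;
  }

  // Keep the transaction as brief as possible: open late, commit early,
  // and use READ_COMMITTED so locks are not held longer than needed.
  public void recordOrder(final long orderId, final int qty)
      throws SQLException {
    try (Connection conn = dataSource.getConnection()) {
      conn.setAutoCommit(false);
      conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
      try (PreparedStatement ps = conn.prepareStatement(
          "UPDATE orders SET quantity = ? WHERE id = ?")) {
        ps.setInt(1, qty);
        ps.setLong(2, orderId);
        ps.executeUpdate();
        conn.commit();
      } catch (SQLException e) {
        conn.rollback();
        throw e;
      }
    }
  }
}
```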
It is also good advice to raise a PMR with IBM Support ahead of any major change, such as this one or an upgrade to a new version. Raise it as a "Software Usage Question" and they can provide feedback from their knowledge database, based on other customers' input. The same applies to any product you have a support agreement for: ask support before problems occur.
We run a Drupal site, and are expecting a sudden burst of users some time soon.
What are some of the best Drupal practices to handle sudden burst of:
- User registrations
- User Authentication
These operations are heavily dependent on the database... so how do we optimize that?
Are there any techniques that minimize DB interaction during user authentication? (For example: storing objects in memory and writing them to the DB at a later point in time?)
Any tips are greatly appreciated.
User authentication and registration usually aren't processes that you can cache or delay (as in MySQL's INSERT DELAYED). However, there are things you can do to alleviate some load. For example:
Allow users to stay logged in via cookie so that you can avoid the DB access of having to re-authenticate
In general, store commonly used/small bits of data in the user's session or a memcached block
In general, cache as much as possible with memcached (see the sketch after this list)
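A generic get-or-load sketch of that memcached pattern, shown with the spymemcached Java client since the Drupal-side wiring is site-specific (the key name and the database loader are hypothetical):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class CachedLookup {
  public static void main(final String[] args) throws IOException {
    final MemcachedClient client =
        new MemcachedClient(new InetSocketAddress("localhost", 11211));

    // Get-or-load: hit memcached first, fall back to the database,
    // then populate the cache with a one-hour expiry.
    final String key = "user:42:profile";
    Object profile = client.get(key);
    if (profile == null) {
      profile = loadProfileFromDatabase(42); // hypothetical DB loader
      client.set(key, 3600, profile);
    }
    System.out.println(profile);
    client.shutdown();
  }

  private static Object loadProfileFromDatabase(final int userId) {
    return "{\"uid\":" + userId + "}"; // stand-in for a real query
  }
}
```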
Some of the commercial Drupal distros (like Acquia's or Pressflow) have support for multiple databases; this may help a little. I would say that if your hardware is halfway decent, you would need a major surge before you have to worry.
User registration and user auth are usually not a problem. Having a lot of logged-in users can be a problem, however. Drupal doesn't do much caching for logged-in users, because pages look slightly different to each user when displaying user-specific content. You can cache the parts of a page that are the same for everyone to decrease the load. I don't have experience with it myself, but I've heard of a setup that did this. Doing it won't be that easy, though.