PHP Storing sessions - performance

What is the most effective way of storing a session in PHP (for login or user-related data)?
Would a database be better, or the built-in $_SESSION mechanism?
Which one is most effective for a larger website, and for speed in general?

This question is hard to answer definitively because it doesn't admit a single answer, but I'll go ahead and try to suit your needs.
First there should be a check, so that if the username (or the email) doesn't match one in the database, you do proper error handling. Once you can look up usernames in the database and have working functions for that, you need to handle the user login. If the credentials were valid, then:
session_start();

// True once a user id has been stored in the session.
function logged_in()
{
    return isset($_SESSION['user_id']);
}

if ($login === false) {
    $errors[] = 'Wrong username/password combination';
} else {
    // On success, $login holds the user's id.
    $_SESSION['user_id'] = $login;
    header('Location: index.php');
    exit();
}
This series is highly recommended: https://www.youtube.com/watch?v=9kyQGBABA38&list=PLE134D877783367C7 - you'll be done watching it in two days, and you'll have a good foundation on this issue.
I hope I helped.

There is absolutely no reason to believe the built-in PHP mechanism is slow or bad. It writes to files, which is perfect for the single-server scenario that about 90% of websites use, and it is thoroughly tested and works. The inner workings are compiled C code, giving the best possible performance for this kind of storage.
Whatever you do yourself must either be implemented in PHP, which is slower, or you have to get into the business of writing PHP session save handlers in C.
Only if you really end up with a multi-server setup, with a load balancer in front and no stickiness configured, do you need session storage that is accessible from all the web servers. There are plenty of existing solutions:
Memcached - when the memcached extension is installed, it also provides a memcached session save handler.
You can also try the Zend Session Cluster that comes with Zend Server.
And you can try to code your own session handler, but you must make sure that you do proper locking! Otherwise concurrent requests will overwrite each other's session data. That is where most code I have seen so far fails; even mature frameworks like Symfony 2 still fail at this with non-native storage.
PHP takes care of locking by itself only if the internal session save handler is used (and coded correctly), so effectively only one script can run per session ID; all others are blocked at the call to session_start().
If you think that none of the existing save handlers fits your requirements, you have to implement something yourself (a sketch of what that involves follows below), but your question does not sound like you are already accustomed to clustered web server environments.
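For illustration, here is a minimal sketch of such a handler: a file-based save handler that takes an exclusive lock in read() and holds it until close(), mimicking what the native files handler does. The class name and file layout are my own placeholders, not an existing library, and the type hints follow the PHP 8 signatures of SessionHandlerInterface.

// A locking file-based handler; mirrors the native handler's behavior of
// blocking concurrent requests for the same session id.
class LockingFileHandler implements SessionHandlerInterface
{
    private $fp;
    private $path;

    public function open(string $path, string $name): bool
    {
        $this->path = $path;
        return true;
    }

    public function read(string $id): string|false
    {
        $this->fp = fopen("{$this->path}/sess_{$id}", 'c+');
        flock($this->fp, LOCK_EX); // concurrent requests block here, as with native session_start()
        return (string) stream_get_contents($this->fp);
    }

    public function write(string $id, string $data): bool
    {
        ftruncate($this->fp, 0);
        rewind($this->fp);
        return fwrite($this->fp, $data) !== false;
    }

    public function close(): bool
    {
        flock($this->fp, LOCK_UN); // release the lock only when the request is done
        fclose($this->fp);
        return true;
    }

    public function destroy(string $id): bool
    {
        @unlink("{$this->path}/sess_{$id}");
        return true;
    }

    public function gc(int $max_lifetime): int|false
    {
        return 0; // expiry sweep omitted in this sketch
    }
}

session_set_save_handler(new LockingFileHandler(), true);
session_start();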
Regarding "What is most effective"? Effective or efficient? Or performant? Or fastest? Nobody will know if you cannot name the numbers to measure! And how to measure.
And even if that would be known, there simply is no way of knowing beforehand. Just think about someone saying "Use a database, this is always faster", and then you end up with a bad provider in a shared hosting system with an overloaded database system that delivers absolutely no performance, and you are like "no way is this database any faster than files" - and you are right. For the same reason, saying than NOT using databases is faster is a lie also.
Measure The Performance! Change something and see if it is faster. If it is, stay there, and go back otherwise.
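If you want a rough number for the session layer itself, a micro-benchmark like the following will do; the iteration count is arbitrary, and you would run it once per candidate save handler:

// Measures 1000 open/write/close cycles against the configured save handler.
$start = microtime(true);
for ($i = 0; $i < 1000; $i++) {
    session_start();
    $_SESSION['probe'] = $i;
    session_write_close();
}
printf("1000 session write cycles: %.3f s\n", microtime(true) - $start);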

Related

Architecture/Technical Challenges in Handling Authentication/Permissions in Elixir Channels/Sockets

So I have decided to rewrite an application I had been writing in Node.js in Elixir, because Elixir gives you out of the box a lot of what takes extra complexity to build in Node.
My issue is something I never got quite right in Node, and it is becoming just as complex in Elixir; I am not entirely sure how to approach it.
I am trying to recreate a lot of how Discord does permissions. I am essentially building a CRM system, with different roles like "Sales Manager", "Sales", "Customer Service Rep" etc... But they all are able to do different things based on their "role".
Some things I need to be able to do: update a permission on the fly for a person or role. Maybe the "Sales Manager" role can't look at company financial data the way an "Accountant" can, but we need to give one specific person access for a few days. Or we have a "Customer Service Rep" and we give that entire role the ability to add things to a calendar. I would also like the ability to kill sessions.
So there are a few ways I've seen said around Elixir forums, like:
Using Guardian: I really want to like tokens, and not having to hit the database on every request sounds wonderful, but I don't think it's practical here unless there is a good solution for updating tokens on the fly, which I haven't found.
Giving each person their own process, and killing and restarting that process whenever their permissions change. This seems pretty neat, but I'd rather not kill processes unless there is an actual error, and I suspect this solution comes with big problems, such as tracing problems - although I am not familiar enough to know whether it would actually cause problems, or whether it is a bad solution for other reasons.
Using Guardian with Guardian_DB, which defeats the purpose of using tokens, but at least I'd have a trackable session. My only problem is that I plan on using a load balancer, so that if a socket connection dies I can reconnect it to the same server, and I am not sure that is possible with tokens, or whether the socket itself has a session attached to it. This is not really that big an issue, though, and is pretty close to what I had with Node.js.
Using Redis, which I'd like to stay away from: update session data in Redis keyed by user_id whenever changes occur, and hit Redis on every request to see if the user has permission. I plan to spread this across multiple servers eventually, which means ETS is not viable unless I can load-balance socket connections the way I could in Node.js.
So I guess my questions are:
Can I attach sessions to sockets? Is this a bad idea?
Should I still use a token, and just use Redis to check the token on every request?
Is a token still a better choice than a session?
Is there a much better/easier solution that I have not even mentioned?
I'm sorry this was pretty drawn out and long; I've never had to do something as permission-bound as this project professionally, and I'm pretty new to Elixir.
Phoenix channels are stateful. You can put data in the assigns field and it stays there for the duration of the connection. That is where you normally put your user_id after authenticating the user on join.
I also use the channel assigns to store client state that I need on the server.
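As a minimal sketch of that pattern (MyApp.Token.verify/1 stands in for whatever verification you use - Phoenix.Token, Guardian, etc.):

def join("crm:" <> _subtopic, %{"token" => token}, socket) do
  case MyApp.Token.verify(token) do
    {:ok, user_id} ->
      # user_id stays in the assigns for the lifetime of the connection
      {:ok, assign(socket, :user_id, user_id)}

    {:error, _reason} ->
      {:error, %{reason: "unauthorized"}}
  end
end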
Regarding the role-to-permissions question, I'm doing exactly this. I load the role permissions from the database on startup and build an ETS table with them. You can do the same with a Task or a GenServer. If the permissions change for a given role, I update the database and the ETS table.
My user model supports a list of roles for each user.
When I need to validate the permissions for a given user, I call the Permission model API, e.g. Permission.has_permission?("create-room", user, scope). I have two levels of permissions, global and per room; that is what the scope is used for.
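Here is a minimal sketch of that setup; the module and table names (PermissionStore, :permissions) and load_from_db/0 are placeholders for illustration, not my actual code:

defmodule PermissionStore do
  use GenServer

  def start_link(opts \\ []),
    do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  def init(_opts) do
    table = :ets.new(:permissions, [:set, :named_table, read_concurrency: true])
    # load_from_db/0 would read {role, permissions} rows from the database
    for {role, perms} <- load_from_db(), do: :ets.insert(table, {role, perms})
    {:ok, table}
  end

  # On change: write to the database first, then refresh the ETS copy.
  def update_role(role, perms),
    do: GenServer.call(__MODULE__, {:update, role, perms})

  def handle_call({:update, role, perms}, _from, table) do
    :ets.insert(table, {role, perms})
    {:reply, :ok, table}
  end

  # Reads hit ETS directly; no GenServer round-trip is needed.
  def has_permission?(permission, %{roles: roles}, _scope) do
    Enum.any?(roles, fn role ->
      case :ets.lookup(:permissions, role) do
        [{^role, perms}] -> permission in perms
        [] -> false
      end
    end)
  end

  defp load_from_db, do: [] # placeholder; replace with a real query
end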

Should I make my CouchDB database server public-facing?

I'm new to CouchDB and am trying to understand how to make proper use of it. I'm coming from MongoDB, where I would always write a web layer and put it in front of Mongo so that I could let users access the data inside it - in fact, this is how I've used every database for every web site I've ever written. Looking at Couch, I see that its native API is HTTP and that it has built-in things like OAuth support, and other features that hint that perhaps I should no longer have my code layer sitting in front of Couch, but instead write views and simply hand out Couch accounts to my users. I'm thinking in terms of an HTTP-based API for a site of mine, or something through which users would consume my data. Opening up Couch like this seems odd to me, though. Is OAuth, in Couch's sense, meant more for remote access by software that I'd write and run "officially" inside my own network, or is it literally meant for end users?
I know there might be things that can only be done through a code layer on top of CouchDB, for instance if you want additional, non-database-related things to happen during API requests. So thinking along those lines, I suspect I will still need a code layer anyway.
Dealer's choice.
Nodejitsu has a great writeup on this sort of topic here.
Not knowing your application specifics I'll take a broad approach...
Back-end
If you want to prevent users from ever seeing your database then make it back-end. You can pipe everything through something like node.js and present only what the user needs to see and they'll never know anything about the database.
See Resource View Presenter
Front-end
If you are not concerned about data security, you can host an entire app on CouchDB; see CouchApp. This approach has the benefit of using the replication mechanism to control publishing your site/data. The drawback here is that you will almost certainly run into some technical limitations that will require moving CouchDB closer to the backend.
Bl-end
Have the app server present the interface and the client pull the data from the database separately. This gives the most flexibility but can be a bag of hurt because even with good design this could lead to supportability and scalability issues.
My recommendation
Use CouchDB on the backend. If you need mobile clients to synchronize then use a secondary DB publicly exposed for this purpose and selectively sync this data to wherever it needs to go.
Simply put, no.
There's no way to secure Couch properly on a public-facing site. There's no way to discriminate access at a fine enough level of granularity: if someone has access to any of the data, they have access to all of it.
Not all data on a site is meant for public consumption, save for the most trivial of sites.
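To make that concrete: CouchDB's read-side access control stops at the database boundary. The per-database _security object looks like the sketch below (names and roles here are placeholders), and anyone who qualifies as a member can read every document in that database; there is no per-document read control.

{
  "admins":  { "names": ["admin"], "roles": ["boss"] },
  "members": { "names": ["alice"], "roles": ["staff"] }
}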

What is a good way to store data by key and value?

I store data with:
HttpContext.Current.Application.Add(appKey, value);
and read it back with:
HttpContext.Current.Application[appKey];
This has the advantage of letting me use a key for each value, but after a short time (about 20 minutes) it stops working and I can no longer find [appKey], because when the IIS application lifecycle ends, the data is lost.
Is there another way to store my data by key and value?
I do not want SQL Server, files, etc.; I want to store the data on the server, not on the client. I store some user-related data in it.
Thanks for your help.
Since IIS may recycle and throw away any cache/memory contents at any time, the only way to get data persisted is to store it outside IIS. Some examples follow (and yes, I included the ones you stated you didn't want, just to make the list a bit more complete; feel free to skip them):
A SQL database (there are quite a few free ones if the price is prohibitive)
A NoSQL database (same thing there, quite a few free ones and usually simpler to use for key/value)
File (which you also stated you didn't want)
Some kind of external memory cache, à la AppFabric Cache or memcached.
Cookies (somewhat limited in size and not secure in any way by default)
You could create a persistent cookie on the user's machine so that the session doesn't expire, or increase the session timeout to a value that works better for your situation/users (short sketches of both follow the links below):
How to create persistent cookies in asp.net?
Session timeout in ASP.NET
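For reference, a minimal sketch of both suggestions (the cookie name, value, and timeout are placeholders):

// Persistent cookie: setting Expires makes it survive browser restarts.
var cookie = new HttpCookie("user_prefs", "some-value")
{
    Expires = DateTime.Now.AddDays(30)
};
Response.Cookies.Add(cookie);

And raising the session timeout (in minutes) in web.config:

<system.web>
  <sessionState timeout="120" />
</system.web>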
You're talking about persisting data beyond the scope of a session. So you're going to have to use some form of persistent storage (Database, File, Caching Server).
Have you considered using AppFabric? It's actually pretty easy to implement. You could either access it directly from your code using the NuGet packages, or just configure it as a session store; (I think) doing the latter would get rid of the session timeout issue.
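Something along these lines in web.config registers AppFabric as the session store; note the provider type name varies slightly between AppFabric versions, the cache name is a placeholder, and the dataCacheClient host configuration is omitted here:

<sessionState mode="Custom" customProvider="AppFabricCacheSessionStoreProvider">
  <providers>
    <add name="AppFabricCacheSessionStoreProvider"
         type="Microsoft.ApplicationServer.Caching.DataCacheSessionStoreProvider"
         cacheName="default" />
  </providers>
</sessionState>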
Do you understand that whatever you decide to store in Application will be available to all users of your application?
Now, regarding your actual question: what kind of data do you plan on storing? If it is user-sensitive data, then it probably makes sense to store it in the session. If it is client-specific and doesn't contain any sensitive information, then cookies are probably a reasonable way forward.
If it is indeed application-wide data that must be the same for every user of your application, then you can make configuration changes to ensure it doesn't expire after 20 minutes.

Can/Should I disable the cache expiry when backing data store is unavailable?

I've just started out with Ehcache, and it seems pretty good so far. I'm using it in a simple fashion to speed up reads against a database, but I wonder whether I can also use it to keep the application up if the database is unavailable for short periods. (Update: my context is an application with high-availability modules that only read from the database.)
It seems like I could do that by disabling expiry in the event of a database read problem, and re-enabling it when a read works again.
What do you think? Is that a reasonable approach or have I missed something? If it's a fair approach, any tips for how best to implement appreciated.
Update: Ehcache supports dynamically setting and unsetting a cache as 'eternal'. This seems to do what I need.
Interesting question - usually, the answer would be "it depends".
Firstly, if you have database reliability problems, I'd invest time and energy in fixing them, rather than applying a bandaid solution.
Secondly, most applications need both reading and writing to work - it doesn't seem to make sense to keep your app up for reads only.
However, if your app has a genuine "read only" function, and there's a known and controlled reason for database downtime (e.g. backups), then yes, you can use your cache to keep the application up and running while the database is down. I would do this by extending the cache periods, rather than trying to code specific edge cases. For instance, you might have a background process which checks whether the database is available and swaps in a different configuration when there's trouble.
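A minimal sketch of that background check, using Ehcache 2.x's runtime-mutable cache configuration; isDatabaseUp() and the cache name "reads" are assumptions, not Ehcache APIs:

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;

public class FailoverCacheToggle implements Runnable {
    private final Cache cache = CacheManager.getInstance().getCache("reads");
    private volatile boolean eternal = false;

    @Override
    public void run() { // schedule periodically, e.g. with a ScheduledExecutorService
        boolean dbUp = isDatabaseUp(); // assumed health-check helper
        if (!dbUp && !eternal) {
            cache.getCacheConfiguration().setEternal(true);  // stop expiring entries
            eternal = true;
        } else if (dbUp && eternal) {
            cache.getCacheConfiguration().setEternal(false); // resume normal expiry
            eternal = false;
        }
    }

    private boolean isDatabaseUp() {
        // e.g. run "SELECT 1" with a short timeout; details omitted
        return true;
    }
}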

Would SQLite be a 'better' choice for Joomla than MySQL, if it would be available?

Since this doesn't touch a real problem of mine, I'm somewhat uncertain whether it is even worth asking here. However, maybe some of you would like to share your opinion on it.
In general I have to admit that 'better' means anything and nothing at all at the same time, so I probably should be more specific, but I tried not to overload the topic. In a regular hosted environment on one of those cheap web hosts (like Dreamhost), with around 1000 articles in Joomla, a couple of users, and a few hundred visitors a day, would an SQLite database with a persistent connection (sqlite_popen) perform noticeably faster than the MySQL equivalent (with the TCP/IP overhead etc.)?
Or in short: would it be wise to call for Joomla to support SQLite?
I have never used SQLite on a website, but I have used it extensively for other purposes and I quite like it. The truth is, you won't know until you try. If you do try, I recommend creating a DB abstraction layer first so that you can easily swap in other databases.
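A quick sketch of what that could look like with PDO, which already abstracts the driver behind the DSN; the paths and credentials are placeholders. (Note that sqlite_popen() belongs to the legacy sqlite extension; with PDO you get persistent connections via PDO::ATTR_PERSISTENT.)

// Swap backends by changing only the DSN.
function makeConnection(string $driver): PDO
{
    switch ($driver) {
        case 'sqlite':
            return new PDO('sqlite:/var/www/site/data.db', null, null,
                           [PDO::ATTR_PERSISTENT => true]);
        case 'mysql':
            return new PDO('mysql:host=localhost;dbname=joomla', 'user', 'pass',
                           [PDO::ATTR_PERSISTENT => true]);
        default:
            throw new InvalidArgumentException("Unknown driver: $driver");
    }
}

$db = makeConnection('sqlite');
echo $db->query('SELECT COUNT(*) FROM articles')->fetchColumn();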
The downside to SQLite is that it's not really meant to be a multi-user database. If you rarely write to the DB but do lots of reading, SQLite will probably be fine. If you find that you need multiple processes writing to the same DB: I believe SQLite uses file-level locking to maintain database consistency, so if all your tables are in the same file, you'll lock the whole file while it's being written to, even if another process wants to modify a completely different table.
In my opinion, it's not the big multi-user databases of the world that should be worried about competition from SQLite; it's all the regular files out there (and their custom file formats) that applications create and use that should be shaking in their boots.
Linux ISPs, for whatever reason, seem to have settled on MySQL. That is what they offer, and you will lock yourself into a limited number of service providers if you wander outside the norm.
