In Classic ASP, can I store a database connection in the Session object?

Can I store a database connection in the Session object?

It is generally not recommended. Storing the connection string in an Application variable, together with a small helper function/class that opens a connection on demand, is a much preferred approach. (A reference link originally posted here has been removed because it now leads to a phishy site.)
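For illustration, a minimal Classic ASP sketch of that pattern (the Application key name and connection string are placeholders):

' In global.asa, Application_OnStart: store the connection string once.
' Application("ConnString") = "Provider=SQLOLEDB;Data Source=...;Initial Catalog=...;"

' Helper used by each page: opens a fresh connection per request.
' Because the connection string never varies, OLE DB connection
' pooling (described below) makes this cheap.
Function GetConnection()
    Dim conn
    Set conn = Server.CreateObject("ADODB.Connection")
    conn.Open Application("ConnString")
    Set GetConnection = conn
End Function

' Typical usage on a page:
Dim conn
Set conn = GetConnection()
' ... run queries ...
conn.Close            ' returns the underlying connection to the pool
Set conn = Nothing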

I seem to recall that doing so has the effect of single-threading your application, which would be a bad thing.

In general, I wouldn't store any objects in Application variables (and certainly not in session variables).
When it comes to database connections, it's a definite no-no; besides, there is absolutely no need.
If you use ADO to communicate with the database, and you use the same connection string for all your database connections (yes, by all means store that string in an Application variable), 'connection pooling' will be implemented behind the scenes. This means that when you release a connection, it isn't actually destroyed; it is put to one side for the next caller who wants the same connection. So the next time you request the same connection, it is pulled 'off the shelf' rather than having to be explicitly created and instantiated, which is quite a nice efficiency improvement.

From this link: http://support.microsoft.com/default.aspx/kb/243543
You shouldn't store a database connection in Session.
From what I understand, if you do, then subsequent ASP requests for the same user must use the same thread.
Therefore, if you have a busy site, it's likely that 'your' thread will already be in use by someone else, so you will have to wait for it to become available.
Multiply this up by lots more users and you get everyone waiting for everyone else's thread, and a not very responsive site.

As CJM said, there is no need to store a connection in a Session object: connection pooling is much better.

Related

Best way to initialize the initial connection with a server for REST calls?

I've been building some apps that connect to a SQL backend. I use AJAX calls to hit WebMethods, a WebAPI, etc.
I notice that the first call to the SQL backend retrieves the data fairly slowly. I can only assume that this is because it must first negotiate credentials before retrieving the data. It probably caches this somewhere, and thus any calls made afterwards come back very fast.
I'm wondering if there's an ideal, or optimal way, to initialize this connection.
My thought was to make a simple GET call right when the page loads (grabbing something very small, like a single entry). I probably wouldn't be using the returned data in any useful way, other than to ensure that any calls afterwards come back faster.
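For concreteness, the warm-up call I have in mind would be something like this (the endpoint name is purely illustrative):

// Fire one tiny request as soon as the page loads, ignoring the response,
// purely to get the backend's connection/credential caches warm.
window.addEventListener('load', function () {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/warmup', true); // hypothetical endpoint returning one small row
    xhr.send();
});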
Is this an okay way to approach fixing the initial delay? I'd love to hear how others handle this.
Cheers!
There are a number of reasons that your first call could be slower than subsequent ones:
Depending on your server platform, code may be compiled when first executed
You may not have an active DB connection in your connection pool
The database may not have cached indices or data on the first call
Some VM platforms may take a while to allocate sufficient resources to your server if it has been idle for a while.
One way I deal with those types of issues on the server side is to add startup code to my web service that fetches data likely to be used by many callers when the service first initializes (e.g. lookup tables, user credential tables, etc).
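A hedged sketch of that server-side warm-up, here in ASP.NET form (the connection string name, table, and query are placeholders, not a prescription):

using System.Configuration;
using System.Data.SqlClient;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Touch the database once at startup so the first real request
        // doesn't pay for connection negotiation and cold caches.
        using (var conn = new SqlConnection(
            ConfigurationManager.ConnectionStrings["AppDb"].ConnectionString))
        using (var cmd = new SqlCommand("SELECT TOP 1 Id FROM LookupTable", conn))
        {
            conn.Open();
            cmd.ExecuteScalar(); // result discarded; this just warms the pool and caches
        }
    }
}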
If you only control the client, consider that you may well wish to monitor server health instead (I use the open-source monitoring platform Zabbix; there are also many commercial web-based monitoring solutions). Exercising the server outside of end-user code is probably better than making an extra GET call from a page that an end user has loaded.

To close or not to close an Oracle Connection?

My application has performance issues, so I started to investigate them from the root: the connection with the database.
Best practice says: "Open a connection, use it, and close it as soon as possible", but I don't know the overhead this causes, so the questions are:
1 - Is "open, use, close connections as soon as possible" the best approach when using ODP.NET?
2 - Is there a way to use connection pooling with ODP.NET, and how?
I'm thinking about creating a List to store some connection strings and writing logic to choose the "best" connection each time I need one. Is this the best way to do it?
Here is a slide deck containing Oracle's recommended best practices:
http://www.oracle.com/technetwork/topics/dotnet/ow2011-bp-performance-deploy-dotnet-518050.pdf
You automatically get a connection pool when you create an OracleConnection. For most middle tier applications you will want to take advantage of that. You will also want to tune your pool for a realistic workload by turning on Performance Counters in the registry.
Please see the ODP.NET online help for details on connection pooling. Pool settings are added to the connection string.
Another issue people run into a lot with OracleConnection is that the garbage collector does not realize how truly resource-intensive these objects are, and does not clean them up promptly. This is compounded by the fact that ODP.NET is not fully managed, so some resources are hidden from the garbage collector. Hence the best practice is to Close() AND Dispose() all Oracle ODP.NET objects (including OracleConnection) to force them to be cleaned up.
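To illustrate both points, a hedged sketch (credentials, data source, and pool sizes are placeholders): pool settings live in the connection string, and using blocks guarantee Close()/Dispose() runs promptly:

using Oracle.DataAccess.Client; // ODP.NET, unmanaged driver

// Pooling is on by default; these keywords tune the pool.
string connStr = "User Id=scott;Password=tiger;Data Source=orcl;" +
                 "Pooling=true;Min Pool Size=5;Max Pool Size=50";

using (var conn = new OracleConnection(connStr))
using (var cmd = new OracleCommand("SELECT 1 FROM dual", conn))
{
    conn.Open();          // served from the pool when one is available
    cmd.ExecuteScalar();
}                         // Dispose() returns the connection to the pool here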
This garbage-collection issue will be mitigated in Oracle's fully managed provider (a beta will be out shortly)
(EDIT: ODP.NET, Managed Driver is now available.)
Christian Shay
Oracle
ODP.NET is a data provider for ADO.NET.
The best practice for ADO.NET is: open, get the data (into memory), close, then work with the in-memory data.
For example, use an OracleDataReader to load data into a DataTable in memory, then close the connection.
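A minimal sketch of that pattern (the query and connection string are placeholders):

using System.Data;
using Oracle.DataAccess.Client;

static DataTable FetchEmployees(string connStr)
{
    var table = new DataTable();
    using (var conn = new OracleConnection(connStr))
    using (var cmd = new OracleCommand("SELECT * FROM employees", conn))
    {
        conn.Open();
        using (OracleDataReader reader = cmd.ExecuteReader())
        {
            table.Load(reader);   // copy the rows into memory
        }
    }                             // connection closed and pooled here
    return table;                 // use the DataTable with no connection held
}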
For a single transaction this is best, but for multiple transactions where you commit at the end, this might not be the best solution. You need to keep the connection open until the transaction is either committed or rolled back. How do you manage that, and how do you check that the connection still exists in that case (i.e. after a network failure)? There is a ConnectionState.Broken property, but it does not work at this point.

Closing DataMapper DB connection

My Rails application generates lots of small SQLite databases using DataMapper. After the data is saved, the .sqlite file must be uploaded to a remote server and destroyed locally.
My question is: how do I make DataMapper close the .sqlite DB connection and free the repository's memory? The application generates many databases, so it's important to conserve server resources.
The only way I've found by googling is
DataObjects::Pooling.pools.each { |pool| pool.dispose }
which I think is unacceptable, because it seems to close all DataMapper connections, and several databases can be generated in parallel threads; I also want to destroy DataMapper's repository.
Sorry for my English.
Also, DataMapper::Repository.adapters is a hash of the current adapter objects. You may be able to dig around in there to get at the connections.
I'm not aware of any nice way of doing this. This discussion is pertinent, however:
http://www.mail-archive.com/datamapper#googlegroups.com/msg02894.html
Apparently, it is possible to reopen a connection using DataMapper.setup(), but it seems that the closing of connections is handled automatically.
However, maybe these observations will help:
It's possible to store a reference to the Adapter, e.g.
a = DataMapper.setup(:default, "sqlite:db/development.sqlite3")
Viewing this object shows that the path is stored, implying that it represents that particular connection, rather than SQLite adapters in general:
p a
#<DataMapper::Adapters::SqliteAdapter:0x00000001aa9258 @name=:default, @options={"scheme"=>"sqlite", "user"=>nil, "password"=>nil, "host"=>nil, "port"=>nil, "query"=>nil, "fragment"=>nil, "adapter"=>"sqlite3", "path"=>"db/development.sqlite3"}, @resource_naming_convention=DataMapper::NamingConventions::Resource::UnderscoredAndPluralized, @field_naming_convention=DataMapper::NamingConventions::Field::Underscored, @normalized_uri=sqlite3:db/development.sqlite3?scheme=sqlite&user=&password=&host=&port=&query=&fragment=&adapter=sqlite3&path=db/development.sqlite3>
Presumably, this can somehow be marked for garbage collection or something (will simply setting it to nil work?).
There is also a close_connection() method in DataMapper::Adapters::DataObjectsAdapter, but it's protected, and I'm not sure whether or how this could be used.
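Putting those observations together, an untested sketch (it leans on DataMapper internals; the database path is illustrative):

require 'dm-core'

# Set up a throwaway SQLite database.
adapter = DataMapper.setup(:default, "sqlite:db/scratch.sqlite3")
# ... define models, save data, upload the .sqlite file ...

# Drop DataMapper's own reference to the adapter...
DataMapper::Repository.adapters.delete(:default)

# ...then drop ours and let the GC reclaim the adapter and its socket.
adapter = nil
GC.start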
Hope this provides some pointers!

Cache an FTP connection via session variables for use via AJAX?

I'm working on a Ruby web application that uses the Net::FTP library. One part of it allows users to interact with an FTP site via AJAX. When the user does something, an AJAX call is made, Ruby reconnects to the FTP server, performs an action, and outputs information.
Every time the AJAX call is made, Ruby has to reconnect to the FTP server, and that's slow. Is there a way I could cache this FTP connection? I've tried caching it in the session hash, but "We're sorry, but something went wrong" is displayed and a TCP dump is output to my logs whenever I attempt to store it there. I haven't tried memcache yet.
Any suggestions?
What you are trying to do is possible, but far from trivial, and Rails doesn't offer any built-in support for it. In fact you will need to descend to the OS level to get this done, and if you have more than one physical server then it will get even more complicated.
First, you can't store a connection in the session. In fact you don't want to store any Ruby object in the session for many reasons, including but not limited to:
Some types of objects have trouble being marshalled/unmarshalled
Deploying could break stuff if the model changes and people have outdated stuff serialized in their sessions
If you are using the cookie session store then you only have 4k
So in general, you only ever want to put primitives like strings, numbers and booleans into the session.
Now, as far as an FTP connection is concerned, it falls into the category of things that can't be serialized/unserialized reliably. The reason is that it's not just a Ruby object: it also holds an open socket, which is going to be closed as soon as the original object is garbage collected.
So, in order to keep an FTP connection persistent, it can't be stored in a controller instance variable, because the controller instance is per-request. You could try to instantiate it somewhere outside the controller instance, but that has the potential for memory leaks if you are not very careful to clean up the connections; besides, if you have more than one app server instance, you would also need a way to guarantee that the user talks to the same app server instance on each request, or it wouldn't be able to find the connection. So all in all, keeping the connection open in the Ruby process is a non-starter.
What you need to do is open the connection in a separate process that any of the Ruby processes can talk to. There's really no established, standard way to do that; you'll have to roll your own. You could look into DRb to provide some of the primitives you will need.
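A bare-bones sketch of that idea with DRb (the URI, the token scheme, and the whole API are illustrative; a real broker would need authentication, timeouts, and cleanup of abandoned connections):

# ftp_broker.rb -- run as one long-lived process, separate from the app servers.
require 'drb/drb'
require 'net/ftp'

# Holds live FTP connections, keyed by a token kept in the user's session.
class FtpBroker
  def initialize
    @connections = {}
    @mutex = Mutex.new
  end

  def connect(token, host, user, password)
    @mutex.synchronize do
      @connections[token] ||= Net::FTP.new(host, user, password)
    end
    true # don't return the FTP object itself; it can't be marshalled over DRb
  end

  def list(token, path)
    @connections.fetch(token).list(path) # an array of strings marshals fine
  end

  def close(token)
    conn = @mutex.synchronize { @connections.delete(token) }
    conn.close if conn
  end
end

DRb.start_service('druby://localhost:8787', FtpBroker.new)
DRb.thread.join

The Rails side would then keep only a string token in the session and talk to the broker through DRbObject.new_with_uri('druby://localhost:8787'), so nothing unmarshallable ever goes near the session.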
AJAX can't directly talk to FTP. It's designed for HTTP. That doesn't stop you from writing something that caches the FTP connection, though. You should probably profile it to find out what's really slow. My guess is that the FTP access is just slow. Caching it may be a mixed blessing, though: how do you know when the content of the FTP site changes?

Session timeout in web applications

The session timeout in web applications typically denotes the idle time - i.e. the period of time when the user doesn't work with the application.
Now, what if there is an automated script that posts a request every 5 minutes - wouldn't that user's session go on endlessly? This being the case, won't this approach heavily load the application, affecting its performance in the long run?
Running an automated call to the server, say via an AJAX request, will keep the session alive. Typically that's the point though. An interesting side effect of this is that if the request happens predictably and regularly, you can use it as a "ping" to determine if the user's browser is still open. If one or two pings are missed, you can close the session earlier and actually free up resources sooner than if you just let the session time out.
Yes, and Yes.
This is why, if you're going to write an application for the web, you really want to find a way to implement it without using server-side sessions. Usually you will be able to find ways to implement the same functionality using cookies; the session data is then client-side, so who cares if it stays active permanently.
I did something similar for an application that relies heavily on session data.
What I did was set the IIS timeout to a relatively low number, say 10 minutes, then have a timed AJAX call that pings a blank page every 5 minutes.
The overhead of this is actually fairly low, as all you are doing is requesting a blank page, and if a person closes their browser, the session ends in 10 minutes.
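For illustration, the client side of that ping could be as simple as (the URL is a placeholder for any blank page that touches the session):

// Ping a blank page every 5 minutes to keep the IIS session alive.
setInterval(function () {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/keepalive.aspx', true);
    xhr.send(); // response body is ignored
}, 5 * 60 * 1000);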
You want to keep the session as small as possible. That said, if everyone starts doing that, it will of course load your application, session or no session. If you think your users are compelled to do that, consider why: either your application is missing an important feature or it is forcing them into something.
Now, regardless of that, if you expect so many users to be active at the same time that a single server won't do, you will end up having the session out of process. If the session is in SQL Server, it is just saved data, so in that case we wouldn't be talking about memory usage.
Well... I guess "it depends". The first question you should ask yourself is whether you even need session state.
If you have an automated process, my guess is that you don't really need to use session.
In that case, either turn it off or don't worry about it.
I guess your session table would be a little bit larger, but on the other hand you won't be tearing down and recreating the session. I don't see how this would "heavily load" the application. I suppose it would depend on the application itself and how much memory is used to maintain session state.
It would allow the user's session to go on endlessly, as long as they have their browser open. If you need to keep a session alive for an extended period of time, you could also track sessions via the DB rather than in memory.
Also, if you are worried about indefinitely open sessions, you could implement an absolute timeout from when the session was opened, in addition to the idle timeout.
