Report that shows the IP addresses of customers/visitors - Magento

Is it possible to detect a visitor's IP address and store it in the Magento admin, so that the admin can view who visited their store?

This is already possible. Take a look at Customers->Online Customers. This log is cleared after 15 minutes by default, but you can increase the interval under System->Configuration->Customer Configuration->Online Customers Options. Don't make the value too big, because it may affect performance.
EDIT (correction)
Every access to your website is stored in the table log_visitor_info, including the IP address. This table is not cleaned up after X minutes.
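If you want a simple report out of that table, you can query it directly. A minimal sketch in Python with the PyMySQL driver (assuming the Magento 1 schema, where log_visitor_info stores the IPv4 address as an integer in remote_addr; credentials are placeholders):

    import pymysql

    conn = pymysql.connect(host="localhost", user="magento",
                           password="secret", database="magento")
    with conn.cursor() as cur:
        # INET_NTOA turns the stored integer back into dotted-quad form.
        cur.execute("SELECT visitor_id, INET_NTOA(remote_addr) AS ip "
                    "FROM log_visitor_info ORDER BY visitor_id DESC LIMIT 20")
        for visitor_id, ip in cur.fetchall():
            print(visitor_id, ip)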


Does limiting an LDAP search by baseDN provide any benefit when the attribute being searched on has an index?

We are designing an LDAP schema (specifically for OpenDJ) and we primarily need to be able to search on the mail attribute. We don't need to do a substring search as the user would provide the whole email address when they log in.
We already have an index on the mail attribute. However, we are also considering sub-dividing the user directory by the first letter of the email address (so all users whose email address starts with the letter A would be in an ou=A subdirectory under ou=users). The only value I can see in doing this is that when we search for a user by email, we can limit the baseDN of the search, reducing its scope to roughly 1/26 of the entire directory.
My primary question is, does limiting the baseDN of an LDAP search like this provide any improvement on performance if the attribute already has an index? Do indexes take into account the baseDN, or are they indexed over the whole directory?
A secondary question, if I may: is there any other use for splitting the users directory by first letter (or any other arrangement), beyond providing a more specific baseDN when searching?
What you are describing sounds like premature optimization, when you don't even know whether you have a performance issue.
Also, indexes and query processing are not standard elements of LDAP; they are implementation details of the directory server you are using.
In OpenDJ, an index is configured and maintained for a whole database backend.
The cost of a lookup in the equality index on mail that returns a single entry is the same whether you have one entry or one billion entries.
I have more than 20 years of experience with LDAP and directory services, and I have never seen a directory structured by splitting entries on the first letter of an attribute.
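For reference, this is the plain indexed search that needs no per-letter optimization; a sketch using the ldap3 Python library (host and suffix are placeholders):

    from ldap3 import Server, Connection

    server = Server("ldap://directory.example.com")
    conn = Connection(server, auto_bind=True)  # anonymous bind, for the example
    # A single equality lookup against the mail index, scoped to the whole
    # ou=users subtree; no per-letter sub-OUs required.
    conn.search("ou=users,dc=example,dc=com", "(mail=lisa@example.com)",
                attributes=["cn", "mail"])
    print(conn.entries)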
I once (and only once) encountered a problem similar to the one you're anticipating -- essentially you've got so many records that searching for a record creates an unacceptable user experience. In my case, there were over a million customers in the directory. What is now a rather old iteration of IBM's Tivoli Directory Server had several bugs that meant searching the directory took minutes to accomplish (indexes or no indexes). No one wants to wait minutes to log in and pay their bill! And we were constrained to using IBM's LDAP server.
In that case, I used the e-mail address as the naming attribute when the account was created and never searched the directory. I.e. I'm cn=lisa@example.com,ou=customers,o=example within the directory. When I log in with lisa@example.com, the site programmatically formulates the bind DN as "cn=" + userInput + ",ou=customers,o=example" and validates the supplied password instead of searching for my account.
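A sketch of that bind-instead-of-search approach, again with the ldap3 Python library (server address and suffix mirror the example above and are assumptions):

    from ldap3 import Server, Connection

    def authenticate(email, password):
        # Formulate the bind DN directly from the login name; no search.
        # (Real code should escape DN special characters in the input.)
        bind_dn = "cn=" + email + ",ou=customers,o=example"
        conn = Connection(Server("ldap://directory.example.com"),
                          user=bind_dn, password=password)
        try:
            return conn.bind()  # True only if the DN exists and the password matches
        finally:
            conn.unbind()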

DNS Resolution with 2 A records

So I'm a Windows / Network admin, have been for 2 years, but today I had a question that I didn't really know the answer to.
Say I do an nslookup, and the query retrieves 2 A records.
Which A record does, say, a browser use?
If we do an nslookup for google.com, we get many responses. Is there a preferred address that Windows uses? Are there any deciding factors?
If example.com has three A records with addresses a, b and c,
the first query returns them in the order a, b, c;
the second in the order b, c, a;
the third c, a, b; and the next wraps around to a, b, c again. Clients typically try the first address in the list.
This is known as round-robin DNS.
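You can watch the rotation from Python (a quick sketch; a caching resolver in between may mask it):

    import socket

    for _ in range(3):
        hostname, aliases, addresses = socket.gethostbyname_ex("google.com")
        print(addresses)  # the order typically rotates between queries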

Dynamics AX Preload Behaviour

Questions
Does the user option preload refer to caching on the client or on the server?
Are there any ways to make this occur asynchronously so that users don't take a large performance hit when first requesting data from a table?
More Info
In Dynamics AX 2012, under File > User Options > Preload, a user can select which tables are preloaded the first time they're accessed.
I've not found anything to say whether this behaviour relates to caching on the client or the AOS.
The fact it's a user setting implies that it's the client.
But it could be an AOS setting where users with this option take the initial hit of preloading the entire table, whilst those without would benefit from any caching caused by other users, but wouldn't trigger the load themselves.
If it's the latter, we could improve performance by removing this option from all (human) users and leaving it enabled only on our batch user account, with scheduled jobs on each AOS requesting a record from each table, thus triggering the preload without any user being negatively impacted.
Ref: http://dynamicbusinesssolutions.ru/axshared.en/html/9cd36702-2fa7-470c-a627-08
If a table is large or frequently changed it is not a candidate for entire table cache. This applies to ordinary users and batch users alike.
The EntireTable cache is located on the server, but the load is initiated by the user, the first user doing the select takes a performance hit.
To successfully disable preload for a table, you can disable it using the Admin user, which applies to all users, or you can let all users disable it themselves.
Personally I never change the user setup. If a table is large I change the table CacheLookup property as a customization.
See Set-based Caching:
When you set a table's CacheLookup property to EntireTable, all the records in the table are placed in the cache after the first select. This type of caching follows the rules of single record caching: the SELECT statement's WHERE clause must include equality tests on all fields of the unique index that is defined in the table's PrimaryIndex property.

The EntireTable cache is located on the server and is shared by all connections to the Application Object Server (AOS). If a select is made on the client tier to a table that is EntireTable cached, it first looks in its own cache and then searches the server-side EntireTable cache.

An EntireTable cache is created for each table for a given company. If you have two selects on the same table for different companies, the entire table is cached twice.

Note: Avoid using EntireTable caches for large tables, because once the cache size reaches 128 KB the cache is moved from memory to disk. A disk search is much slower than an in-memory search.

Implementing visitor statistics for many users

I'm facing a challenge and I need your opinion, let me explain:
I have a database of around 300,000 users, each of whom has a profile page, and I would like to store the number of visitors who visit each profile on a weekly (or daily?) basis for reporting purposes (a graph would be available on their admin page).
I'm thinking about doing so in a dedicated table (let's call it "stat") organised as follows:
id / integer (id of users -- unique)
current_ip / text (serialized array of ip of visitors of the current period)
statistics / text (serialized array of statistics per period)
I'm thinking about an AJAX request on the profile page that would filter out robots, check whether the IP exists in the current_ip value (with a LIKE query), and if it doesn't, unserialize current_ip, push the new visitor's IP, serialize it again and UPDATE the table.
At the end of each period (so every week or every day), a cron task would count the number of IPs in current_ip, push that number (with the date) into the statistics value (using the same method as previously explained), and then clear the current_ip value so it's empty for the next period.
Btw, I'm using PHP 5 and PostgreSQL 9.1 on an Ubuntu 12.04 LTS dedicated server (i5, 4 x 3.2 GHz, SSD, 16 GB RAM).
Is that the best, easiest or fastest way of doing it? Am I all wrong? Should I use one row per period instead of a serialized array to store historical values?
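A rough sketch of the row-per-visit / rollup alternative I have in mind, using psycopg2 (table and column names are just placeholders):

    import psycopg2

    conn = psycopg2.connect(dbname="mydb", user="app")

    def record_visit(user_id, visitor_ip):
        # One plain row per visit; nothing to unserialize or rewrite.
        with conn, conn.cursor() as cur:
            cur.execute("INSERT INTO profile_visit (user_id, visitor_ip, visited_at) "
                        "VALUES (%s, %s, now())", (user_id, visitor_ip))

    def weekly_rollup(period_start, period_end):
        # Cron task: distinct IPs per user for the period, ready to be
        # copied into the statistics table and purged from profile_visit.
        with conn, conn.cursor() as cur:
            cur.execute("SELECT user_id, count(DISTINCT visitor_ip) "
                        "FROM profile_visit "
                        "WHERE visited_at >= %s AND visited_at < %s "
                        "GROUP BY user_id", (period_start, period_end))
            return cur.fetchall()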
Any suggestion is welcome =)
Cheers
Geoffrey
Use HBase counters instead of Postgres. They're much more efficient for that purpose.
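A minimal sketch of what that looks like with the happybase client (assumes an HBase table profile_stats with a 'v' column family already exists; all names are placeholders):

    import happybase

    connection = happybase.Connection("hbase-host")
    table = connection.table("profile_stats")

    def record_visit(user_id, period):
        row = ("user:%d" % user_id).encode()
        col = ("v:" + period).encode()
        # Atomic increment: one row per user, one counter column per period.
        table.counter_inc(row, col)

    record_visit(42, "2013-W21")  # e.g. ISO week as the period key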

Caching expensive SQL query in memory or in the database?

Let me start by describing the scenario. I have an MVC 3 application with SQL Server 2008. In one of the pages we display a list of Products that is returned from the database and is UNIQUE per logged in user.
The SQL query (actually a VIEW) used to return the list of products is VERY expensive.
It is based on very complex business requirements which cannot be changed at this stage.
The database schema cannot be changed or redesigned as it is used by other applications.
There are 50k products and 5k users (each user may have access to 1 up to 50k products).
In order to display the Products page for the logged in user we use:
SELECT TOP X * FROM [VIEW] WHERE UserID = @UserId -- where 'X' is the size of the page
The query above returns a maximum of 50 rows (maximum page size). The WHERE clause restricts the number of rows to a maximum of 50k (products that the user has access to).
The page is taking about 5 to 7 seconds to load and that is exactly the time the SQL query above takes to run in SQL.
Problem:
The user goes to the Products page and very likely uses paging, re-sorts the results, goes to the details page, etc and then goes back to the list. And every time it takes 5-7s to display the results.
That is unacceptable, but at the same time the business team has accepted that the first time the Products page is loaded it can take 5-7s. Therefore, we thought about CACHING.
We now have two options to choose from; the most "obvious" one, at least to me, is using .NET caching (in memory / in proc). (Please note that distributed caching is not allowed at the moment, due to technical constraints with our provider / hosting partner.)
But I'm not very comfortable with this. We could end up with lots of products in memory (when there are 50 or 100 users logged in simultaneously) which could cause other issues on the server, like .Net constantly removing cache items to free up space while our code inserts new items.
The SECOND option:
The main problem here is that it is very EXPENSIVE to generate the User x Product x Access view, so we thought we could create a flat table (or in other words a CACHE of all products x users in the database). This table would be exactly the result of the view.
However the results can change at any time if new products are added, user permissions are changed, etc. So we would need to constantly refresh the table (which could take a few seconds) and this started to get a little bit complex.
Similarly, we thought we could implement some sort of cache provider: upon a request from a user, we would run the original SQL query against the view (5-7s, acceptable only once) and save the result in a flat table called ProductUserAccessCache in SQL. On the next request we would get the values from this cached table (as we could easily identify that the results were cached for that particular user) with a fast query, without calculations, in SQL.
Any time a product was added or a permission changed, we would truncate the cached-table and upon a new request the table would be repopulated for the requested user.
It doesn't seem too complex to me, but what we are doing here basically is creating a NEW cache "provider".
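A rough sketch of that flow, in Python/pyodbc for brevity (the real application would do this in C#; ProductUserAccessCache comes from the description above, everything else is a placeholder):

    import pyodbc

    cnxn = pyodbc.connect("DSN=products")  # placeholder connection string

    def get_products_page(user_id, page_size):
        cur = cnxn.cursor()
        cur.execute("SELECT TOP (?) * FROM ProductUserAccessCache "
                    "WHERE UserID = ?", page_size, user_id)
        rows = cur.fetchall()
        if rows:
            return rows  # cached: fast query, no view calculations

        # First request for this user: run the expensive view once (the
        # accepted 5-7s hit) and materialize the result in the cache table.
        # (Simplification: a user with zero products would re-run the view
        # every time; a per-user "populated" flag would avoid that.)
        cur.execute("INSERT INTO ProductUserAccessCache "
                    "SELECT * FROM [VIEW] WHERE UserID = ?", user_id)
        cnxn.commit()
        cur.execute("SELECT TOP (?) * FROM ProductUserAccessCache "
                    "WHERE UserID = ?", page_size, user_id)
        return cur.fetchall()

    def invalidate_cache():
        # Called whenever a product is added or a permission changes.
        cnxn.execute("TRUNCATE TABLE ProductUserAccessCache")
        cnxn.commit()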
Does any one have any experience with this kind of issue?
Would it be better to use .Net Caching (in proc)?
Any suggestions?
We were facing a similar issue some time ago and were thinking of using EF caching to avoid the delay in retrieving the information. Our problem was a 1-2 second delay. Here is some info that might help on how to cache a table by extending EF. One of the drawbacks of caching is how fresh the information needs to be, so you set your cache expiration accordingly. Depending on that expiration, users might have to wait longer than they would like for fresh info, but if your users can accept possibly seeing outdated info in order to avoid the delay, the tradeoff is worth it.
In our scenario, we decided it was better to have fresh info than quick responses, but as I said before, our waiting period wasn't that long.
Hope it helps
