So I'm a Windows / Network admin, have been for 2 years, but today I had a question that I didn't really know the answer to.
Say I do an nslookup and the query retrieves 2 A records.
Which A record does, say, a browser use?
If we do an nslookup for google.com, we get many responses. Is there a preferred address that Windows uses? Are there any deciding factors?
If you have three A records in example.com (a, b, c):
the first query will retrieve a.example.com,
the second b.example.com,
the third c.example.com, and the next will get a.example.com again.
This is known as round-robin DNS.
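For what it's worth, here is a quick Python sketch (the hostname is just an example) that prints every address the resolver hands back for a name. With round-robin DNS the order typically rotates between queries, and most clients simply start with the first address returned and fall back to the others.

import socket

# Ask the resolver for every address it knows for the name; with round-robin
# DNS the ordering usually changes from one query to the next.
infos = socket.getaddrinfo("google.com", 80, proto=socket.IPPROTO_TCP)
for family, _type, _proto, _canon, sockaddr in infos:
    if family == socket.AF_INET:   # keep just the A-record (IPv4) answers
        print(sockaddr[0])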
I'm trying to process some IP data that we store in our ClickHouse database. Some users have IPv6 addresses logged, and some have multiple IP addresses logged, so what I'm trying to achieve is to get only the IPv4 addresses and, where there are multiple IP addresses listed, to pick the first one logged.
Here is the query that I made to filter them:
SELECT IF(ip LIKE '%,%', arrayElement(splitByChar(',', assumeNotNull(ip)), 1), ip) AS ip
FROM usage_analytics.users
WHERE ip NOT LIKE '%:%'
The results are not consistent. Sometimes it works fine and gets all IPv4 addresses. However, sometimes it returns null rows, always at around 70 rows into the results. This happens around 4 out of 5 times when the query is run.
What's going on? Is this a ClickHouse issue, a logic issue, or something else I'm not considering?
I want to understand the role of 'Most recent discovery' in the cmdb_ci table. In what scenarios does this field get updated?
This field is updated from integrations, either by ones you make for yourself or ones that come from ServiceNow.
The general intent of this field is to indicate the last time the CI was known to exist on your network. This allows you to do something such as retire a record in your CMDB for a computer, printer, server, etc. after it hasn't been seen for a period of time.
This is what the Discovery Dashboard uses for Unrefreshed Devices (Beyond Last 30 Days) and so on.
An example from my side: we had been populating our CMDB from Lansweeper as a custom integration, and we populated the Most recent discovery field with tblAssets.LastSeen from Lansweeper, which was the last time Lansweeper saw the device on our network.
It's generally up to you to decide what to do with CMDB records that haven't been seen for a period of time.
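As an illustration of the "retire what hasn't been seen" idea, here is a hedged Python sketch using the ServiceNow Table API to list CIs whose Most recent discovery is older than 30 days. The instance URL and credentials are placeholders, and it assumes the field is backed by the last_discovered column.

from datetime import datetime, timedelta

import requests

INSTANCE = "https://your-instance.service-now.com"    # placeholder instance
cutoff = (datetime.utcnow() - timedelta(days=30)).strftime("%Y-%m-%d %H:%M:%S")

# Query cmdb_ci for records not discovered within the window.
resp = requests.get(
    f"{INSTANCE}/api/now/table/cmdb_ci",
    params={
        "sysparm_query": f"last_discovered<{cutoff}",
        "sysparm_fields": "sys_id,name,last_discovered",
        "sysparm_limit": "100",
    },
    auth=("username", "password"),                     # placeholder credentials
    headers={"Accept": "application/json"},
)
for ci in resp.json().get("result", []):
    print(ci["name"], ci["last_discovered"])           # candidates for retirement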
I'm nearing the end of development of a site, and am offloading images, scripts and fonts to a second server pool. Currently, the static pool is io.mydomain.com and the site itself is mydomain.com (www.* redirects to the naked domain).
It's been well documented* that using a separate DNS lookup for static assets improves performance as it doubles concurrent asset downloads, but I'm trying to find the highest-performance way of achieving this.
My question is this: from a DNS perspective, is it better to use a subdomain (TLD lookup, domain lookup, subdomain lookup), like Apple does (images.apple.com), or a separate domain (TLD lookup and domain lookup) like Yahoo and Microsoft do (yimg.com and c.s-microsoft.com)? Is there much of a difference between the two or is it negligible?
*https://developer.yahoo.com/performance/rules.html
Presuming that nothing is cached, there would be an infinitesimally small improvement brought about by keeping the static hostname under the same domain.
For a subdomain of the same domain, the queries would go something like:
query root servers for .com. name servers
query .com. name servers for .example.com. name servers.
query .example.com. name servers for www.example.com
query .example.com. (cached address) name servers for io.example.com.
For a separate domain, it would be:
query root servers for .com. name servers
query .com. name servers for .example.com. name servers.
query .example.com. name servers for www.example.com
query .com. (cached address) name servers for .xmpl.com. name servers.
query .xmpl.com. name servers for io.xmpl.com
Once that first query was made, as long as you hadn't set an incredibly short expiry, the client would never need to look them up again.
At the very best, you might shave a millisecond from the very first query. After that it changes nothing.
It isn't even remotely worth thinking about! There are so many other places where you will transiently lose that sort of time.
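If you want to see the scale of the difference for yourself, a rough Python sketch along these lines could time the initial lookups; the hostnames are placeholders and the numbers depend entirely on resolver caching, so treat it as illustrative only.

import socket
import time

def time_lookup(hostname):
    # Time a single getaddrinfo() call; the OS resolver may already have the
    # answer cached, which is exactly the point being made above.
    start = time.perf_counter()
    try:
        socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
    except socket.gaierror as exc:
        return f"failed ({exc})"
    return f"{(time.perf_counter() - start) * 1000:.1f} ms"

for host in ("www.example.com", "io.example.com", "io.xmpl.com"):
    print(host, time_lookup(host))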
I'm building an internal server which contains a database of customer events. The webpage which allows access to the events is going to use an infinite scroll/dynamic loading scheme, both for display of live events and for browsing the results of queries to the database. So, you might query the database and maybe get 200k results. The webpage would display the 'first' 50 and allow you to scroll and scroll and scroll to see more and more results (loading perhaps 50 more at a time).
I'm supposed to be using a REST API for the database access (a C# server). I'm unsure what the API should be so that it remains RESTful. I've come up with 3 options. The question is: are any of them RESTful, and which is the most RESTful (is there such a thing? If not, I'll pick one of the RESTful ones).
Option 1:
GET /events?query=asdfasdf&first=1&last=50
This simply does the query and specifies the range of results to return. The server, unable to keep state, would have to re-query the database each time the infinite scroll occurs (though perhaps using the first/last hints to stop early). Seems bad, and there isn't any feedback about how many results are forthcoming.
Option 2:
GET /events/?query=asdfasdf
GET /events/details?id1=asdf&id2=qwer&id3=zxcv&id4=tyui&...&id50=vbnm
This option first does a query which returns the list of event ids but no further details. The webpage simply has the list of all the ids (at least it knows the count). The webpage holds onto the event id list and, as infinite scroll/dynamic load is needed, makes another query for the event details of the specified ids. Each id would nominally be a GUID, so about 36 characters per id (plus &id##= for 41 characters). At 50 ids per request, the URL would be quite long, 2000+ characters. The URL limit mentioned elsewhere on SO is around 2k. Maybe if I limit it to 40 ids per query this would be fine. It'd be nice to simply have a comma-separated list instead of all the query parameters. Can you make a query parameter like ?ids=qwer,asdf,zxcv,wert,sdfg,rtyu,gfhj, ... ,vbnm ?
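(Nothing stops a query parameter from carrying a comma-separated list; the server just splits it. A tiny framework-agnostic Python sketch, with made-up ids:)

from urllib.parse import parse_qs, urlparse

# Pull the 'ids' parameter out of the query string and split it on commas.
url = "/events/details?ids=qwer,asdf,zxcv,wert"
ids_param = parse_qs(urlparse(url).query).get("ids", [""])[0]
event_ids = [i for i in ids_param.split(",") if i]
print(event_ids)   # ['qwer', 'asdf', 'zxcv', 'wert']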
Option 3:
POST /events/?query=asdfasdf
GET /events/results/{id}?first=1&last=50
This would POST the query to the server and cause it to create a results resource. The ID of the results resource would be returned and would then be used to get blocks of the query results, which in turn contain the event details needed for the webpage. The XML returned from the POST could contain the number of records and other useful information besides the ID. Either the webpage would have to delete the resource later when the query page is closed, or the server would have to clean them up once they expire (days or weeks later).
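To make Option 3 concrete, here is a minimal Python sketch of what the two endpoints could look like; it uses Flask and an in-memory store purely for illustration, and every name in it is made up rather than taken from the actual C# server.

import uuid

from flask import Flask, jsonify, request

app = Flask(__name__)
results_store = {}   # result_id -> list of event ids (in-memory, sketch only)


def run_query(query):
    # Stub standing in for the real database query.
    return [str(uuid.uuid4()) for _ in range(200)]


def load_event(event_id):
    # Stub standing in for loading one event's details.
    return {"id": event_id}


@app.route("/events/", methods=["POST"])
def create_results():
    # Run the query once, keep the ids server-side, and hand back a results id
    # plus the total count so the client knows how far it can scroll.
    event_ids = run_query(request.args.get("query", ""))
    result_id = str(uuid.uuid4())
    results_store[result_id] = event_ids
    return jsonify({"id": result_id, "count": len(event_ids)}), 201


@app.route("/events/results/<result_id>")
def get_results(result_id):
    # Serve one block of the stored results (1-based, inclusive range).
    first = int(request.args.get("first", 1))
    last = int(request.args.get("last", 50))
    ids = results_store[result_id][first - 1:last]
    return jsonify({"events": [load_event(i) for i in ids]})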
I am concerned that Option 1, while RESTful, is horrible for the server. I'm not sure requesting so many resources at once, like the second GET in Option 2, is really RESTful or practical (seems like there has to be a better way). I'm not sure Option 3 is RESTful at all, or if it is, it's sort of cheating the REST thing by creating state via a POST (or should that be a PUT?).
Option 3 worked out fine. It required the server to maintain the query results and there was a bit of debate about how many queries (from various users) should simultaneously be saved as there would be no way to know when a user was actually done with a query.
I'm facing a challenge and I need your opinion, let me explain:
I have a database of around 300,000 users, who all have a profile page, and I would like to store the number of visitors that visit their profile on a weekly (or daily?) basis for reporting purposes (a graph would be available on their admin page).
I'm thinking about doing so in a dedicated table (let's call it "stat") organised as follows:
id / integer (id of users -- unique)
current_ip / text (serialized array of ip of visitors of the current period)
statistics / text (serialized array of statistics per period)
I'm thinking about an AJAX request on the profile page that would filter out robots, check if the IP exists in the 'current_ip' value (with a LIKE query), and if it doesn't exist, unserialize 'current_ip', push the new visitor's IP, serialize the array again and UPDATE the table.
At the end of each period (so every week or every day) I'm thinking about a cron task that counts the number of IPs in 'current_ip', pushes that number (with the date) into the 'statistics' value (using the same method as previously explained), and then deletes the 'current_ip' value so it's empty for the next period.
Btw, I'm using PHP 5 and PostgreSQL 9.1 with an i5 (4 x 3.2 GHz) on an Ubuntu 12.04 LTS dedicated server with an SSD and 16 GB of RAM.
Is that the best, easiest or fastest way of doing it? Am I all wrong?! Should I use 1 line per period instead of using a serialized array to store historical values?!
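(For reference, this is roughly what the "one line per period" alternative I'm wondering about could look like; the table and column names are made up.)

import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")         # placeholder DSN
cur = conn.cursor()

# Hypothetical normalized layout: one raw row per (user, period, visitor IP)
# instead of serialized arrays.
cur.execute("""
    CREATE TABLE IF NOT EXISTS profile_visits (
        user_id     integer NOT NULL,
        period_date date    NOT NULL,
        visitor_ip  inet    NOT NULL
    )
""")

# Recording a visit is a single INSERT from the AJAX handler.
cur.execute(
    "INSERT INTO profile_visits (user_id, period_date, visitor_ip) "
    "VALUES (%s, date_trunc('week', now())::date, %s)",
    (42, "203.0.113.7"),
)
conn.commit()

# Reporting is a plain aggregate, no unserializing needed.
cur.execute(
    "SELECT user_id, period_date, count(DISTINCT visitor_ip) AS visitors "
    "FROM profile_visits GROUP BY user_id, period_date"
)
print(cur.fetchall())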
Any suggestion is welcome =)
Cheers
Geoffrey
Use HBase counters instead of Postgres. They are much more efficient for that purpose.
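For what it's worth, here is a sketch of what that could look like through the happybase Python client; the table, row key and column names are made up, and this counts profile views (deduplicating repeat IPs would still need separate handling).

import happybase

conn = happybase.Connection("hbase-host")               # placeholder host
table = conn.table("profile_stats")                      # hypothetical table

# One atomic increment per profile view: row key = user id + period,
# no read-modify-write of serialized arrays needed.
table.counter_inc(b"user42:2024-W07", b"stats:visits")

print(table.counter_get(b"user42:2024-W07", b"stats:visits"))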