ServiceNow: compare attributes of two servers

How can I get a diff between two servers across all their attributes, such as OS details, installed patches, associated load balancer, installed software, etc.?

As far as I know there is no default option to get the diff between two records in ServiceNow. Some ideas:
Use the list view of the table, show all the columns you want to compare and do it manually.
You can create a UI Page with simple diff functionality using Jelly or Angular, etc. You would need to fetch both records and compare each field in the code.
Export the records as XML files and compare them with external tools: http://text-compare.com/, notepad++, etc.
Hope it helps!

You may want to check AvailabilityGuard. It's a ServiceNow add-on that supports server comparison as well as compliance-standard checks and out-of-the-box downtime and data-loss risk assessment.
Another option is to check Symantec's CCS. It's more security-oriented, but potentially does something similar.
Mark.

Related

Where to store table version

Where's the best place to store the version of a table in Oracle? Is it possible to store the version in the table itself, e.g. similar to the comment assigned to a table?
I don't think you can store that information in Oracle, except maybe in a comment on the table, but that would be error-prone.
But personally I think you shouldn't want to keep track of versions of individual tables. After all, to get from version 1 to version 2, you may need to modify data as well, or other objects like triggers and procedures that use the new version of the table.
So in a way, it's better to version the entire database, so you can 'combine' multiple changes in one atomic version number.
There are different approaches to this, and different tools that can help you with it. I think Oracle even has a built-in feature, but with Oracle that usually means you will be charged gold bars to use it, so I won't get into that and will just describe the two approaches I have tried:
Been there, done that: saving schema structure in Git
At some point we wanted to save our database changes in GitHub, where our other source code is too.
We've been using Red Gate Source Control for Oracle (and Schema Compare, a similar tool), and have been looking into other similar tools as well. These tools use version control like Git to keep the latest structure of the database, and they can help you get your changes from your development database into a scripts folder or VCS, and generate migration scripts for you.
Personally I'm not a big fan, because those tools and scripts focus only on the structure of the database (like you would with versioning individual tables). You'd still need to know how to get from version 1 to version 2, and sometimes only adding a column isn't enough; you need to migrate your data too. This isn't covered properly by tools like this.
In addition, I thought they were quite expensive for the work that they do, they don't work as easily as promised on the box, and you'd need different tools for different databases.
Working with migrations
A better solution would be to use migration scripts. You just write a script to get your database from version 1 to version 2, and another script to get it from version 2 to version 3. These migrations can cover table structure, object modifications, or even just data; it doesn't matter. All you need to do is remember which script was executed last, and execute all versions after that.
Executing migrations can be done by hand, or you can simply script it. But there are tools for this as well. One of them is Flyway, a free tool (with paid pro support should you need it) that does exactly this. You can feed it SQL scripts from a folder, which are sorted and executed in order. Each script is a 'version'. Metadata about the process is stored in a separate table in your database. The whole process is described in more detail on Flyway's website.
The advantage of this tool is that it's really simple and flexible, because you just write the migration scripts yourself. All the tool does is execute them and keep track of it. And it can do it for all kinds of databases, so you can introduce the same flow for each database you have.
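As a minimal sketch of what such versioned scripts might look like (the file names follow Flyway's V<version>__<description>.sql convention; the table and columns are made up for illustration):

-- V1__create_customers.sql  (hypothetical first migration)
create table customers (
  id   number primary key,
  name varchar2(100) not null
);

-- V2__add_email_to_customers.sql  (hypothetical second migration: structure plus data)
alter table customers add email varchar2(255);
update customers set email = 'unknown@example.com' where email is null;

Flyway records in its metadata table that V1 and V2 have already run, so on the next deployment only newer scripts (V3, V4, ...) are executed.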
One way is to define a comment on the table:
comment on table your_table is 'some comment';
Then you can read that meta information from the all_tab_comments data dictionary view.
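For example, if you decide to (ab)use the table comment to hold a version string:

-- store a version string in the table comment, then read it back
comment on table your_table is 'version 2';

select comments
  from all_tab_comments
 where owner = user
   and table_name = 'YOUR_TABLE';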
See: How to get table comments via SQL in Oracle?
For further reading, see: https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_4009.htm

Getting into designing dashboards and need some help identifying each technical layer along the way

So I will be embarking on designing a dashboard that will display KPIs and other relevant information for my team. Since I am in the early stages of this project and am not very familiar with the technical process behind designing a dashboard, I need some questions vetted first before I go and shop for solutions, to avoid reinventing the wheel.
Here are some of my questions:
We want a dashboard that can provide real-time information from our data sources (or as close to real-time as possible). What functionality allows a dashboard to update itself from concurrent data sources? From a conceptual standpoint, I can understand creating a dashboard out of Microsoft Excel and having the dashboard depend on the values you have set within your pivot table.
How do you make a dashboard request information from multiple data sources on its own? Just like the Excel example, a user may have to go into the pivot tables to update values, but I want to know how a dashboard would request this by itself and what the exact method is from a programming standpoint. Does the code execute itself every time you refresh the webpage?
How do you create data sources organically? I know that for some solutions, such as SharePoint BI Center, there are pre-supported data sources like an Excel sheet or SharePoint, and it's as easy as uploading your document and letting the design handle the rest. However, there are going to be some data sources that I know will need to be fetched. Do I need to understand something else, like an event recorder, in order to navigate this issue?
Introduction
A dashboard (or a report, respectively) is usually the result of a long chain of steps. Very much simplified, it could look like this:
src 1 ----\
src 2 ----+--> [DWH / Big Data] --> [BR] --+--> Dashboards
src n ----/                                 \--> Reports etc.
Keep in mind, this is only a very, very simple structure of a data backend / frontend.
DWH means Data Warehouse, where data might be stored temporarily (you referred to this as fetching). This could be a database, could be a Big Data engine, could be a combination of both...
Afterwards, there are Business Rules (BR). Those might be specific rules for how different departments calculate and relate data, but also simple things like algebra.
Questions
So, the main question should not be about the technology:
What software should we choose?
How can we create a dashboard?
but should instead focus on your business processes (see it as a top-down view):
What does our core process look like? Where would I like to measure data?
How does department A calculate sales differently from department B? Should they all use the same rule?
Where does everyone store the data? Can we access it? Do we need structured data?
And, very easy to forget but sometimes one of the biggest parts: is the identifier of a business object (say, a sales ID) built and formatted in the same way everywhere?
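As a purely hypothetical illustration of that last point, a quick consistency check in SQL could look like this, assuming two source tables dept_a_sales and dept_b_sales that are supposed to share the same sales ID format:

-- dept_a_sales / dept_b_sales and the ID format are invented for illustration
-- Sales IDs present in department A's source but missing from department B's
select a.sales_id
  from dept_a_sales a
  left join dept_b_sales b on b.sales_id = a.sales_id
 where b.sales_id is null;

-- Sales IDs that do not match the agreed format, e.g. 'S' followed by 8 digits
select sales_id
  from dept_a_sales
 where not regexp_like(sales_id, '^S[0-9]{8}$');

If checks like these turn up mismatches, that is a business-process problem to resolve before any tool choice.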
Conclusion
When those questions are at least in the back of your head and you keep working in this direction, data will more or less automatically spill out at certain points of that process.
Then it won't matter whether you use Excel, a small-to-medium app like Tableau, TIBCO Spotfire, QlikView or Power BI, or you want to go full scale with a big Hadoop backend, databases, and JasperReports, Apache Drill, Pentaho or SSIS on top of it... it will come out eventually.
TL;DR
Focus on the processes first. Make sure you understand them. Draft in Excel. Then proceed to get the data and the tools you need for your use cases. It will work out much better with a "top-down" approach than trying to solve your requirements with tools only.

How flexible is Pentaho for dynamic transformations? (user-input based parameters)

Based on the following use case, how flexible are Pentaho tools for accomplishing a dynamic transformation?
The user needs to make a first choice from a catalog (using a web interface).
Based on the previously selected item, the user has to select from another catalog (this second catalog must be filtered based on the first selection).
Steps 1 and 2 may repeat in some cases (i.e. more than two dynamic, dependent parameters).
From what the user chose in steps 1 and 2, the ETL has to extract information from a database. The tables to select data from depend on what the user chose in the previous steps. Most of the tables have a similar structure but different names based on the selected item. Some tables have a different structure, and the user has to be able to select the fields in step 2, again based on the selection in step 1.
All the selections made by the user should be savable, so the user doesn't have to repeat them in the future and can simply re-run the process to get updated information based on the pre-selected filters. However, he/she must be able to make a different selection and save it for later use if he/she wants different parameters.
Is there any web-based tool that allows the user to make all these choices? I made the whole process work using Kettle, but not dynamically, since all the parameters need to be passed when running the process from the console. The thing is, the end user doesn't know all the parameter values unless you show them and let them choose, and some parameters depend on a previous selection. When testing I can use my test-case parameters, so I have no problem, but in production there is no way to know in advance what combination the user will choose.
I found a similar question, but it doesn't seem to require user input between transformation steps.
I'd appreciate any comments about the capabilities of Pentaho tools to accomplish the aforementioned use case.
I would disagree with the other answer here. If you use CDE it is possible to build a front end that will easily do those prompts you suggest. And the beauty of CDE is that a transformation can be a native data source via the CDA data access layer. In this environment kettle is barely any slower than executing the query directly.
The key thing with PDI performance is to avoid starting the JVM again and again - when running in a web app the JVM is already running, so performance will be good.
Also, the latest release of PDI 5 will have the "light JDBC" driver (for EE customers), which is basically a SQL interface on top of PDI jobs. So that again shows that PDI is much more these days than just a "batch" ETL process.
This is completely outside the realm of a Kettle use case. The response time from Kettle is far too slow for anything user-facing. Its real strength is in running batch ETL processes.
See, for example, this slideshow (especially slide 11) for examples of typical Kettle use cases.

Strategy for updating data in databases (Oracle)

We have a product using Oracle, with about 5000 objects in the database (tables and packages). The product is divided into two parts: the first is the hard part (client, packages and database schema); the second consists basically of soft data representing processes (workflows) that can be configured to run on our product.
The basic processes (workflows) are delivered as part of the product, and our customers can change these processes and adapt them to their needs. The problem arises when upgrading to a newer version of the product: when we try to update the data records, there are conflicts with records that have been deleted or modified by our customers.
Is there a strategy to handle this problem?
It is common for a software product to be composed not just of client and schema objects, but of data as well; typically this is called "static data", i.e. data that should only be modified by the software developer and is usually not modifiable by end users.
If the end users bypass your security controls and modify/delete the static data, then you need to either:
write code that detects, and compensates for, any modifications the end user may have made; e.g. wipe the tables and repopulate them with "known good" data (a sketch of this approach follows the list);
get samples of modifications from your customers so you can hand-code customised update scripts for them, without affecting their customisations; or
don't allow modifications of static data (i.e. if they customise the product by changing data they shouldn't, you say "sorry, you modified the product, we don't support you").
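A minimal sketch of the first option, as a gentler re-seed rather than a full wipe, assuming a hypothetical static table wf_step and a staging table wf_step_dist shipped with the new release (all names are made up for illustration):

-- wf_step / wf_step_dist are hypothetical. Reset vendor-supplied rows to the
-- "known good" values from the release, leaving extra customer rows untouched.
merge into wf_step tgt
using wf_step_dist src
   on (tgt.step_id = src.step_id)
 when matched then
   update set tgt.step_name  = src.step_name,
              tgt.step_order = src.step_order
 when not matched then
   insert (step_id, step_name, step_order)
   values (src.step_id, src.step_name, src.step_order);

Rows the customer deleted are re-inserted and rows they modified are overwritten, which is exactly the trade-off described above.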
From your description, however, it looks like your product is designed to allow customers to customise it by changing data in these tables; in which case, your code just needs to be able to adapt to whatever changes they may have made. That needs to be a fundamental consideration in the design of the upgrade. The strategy is to enumerate all the types of changes that users may have made (or are likely to have made), and cater for them. The only viable alternative is #1 above, which removes all customisations.

Filter by zip code, or other location based data retrieval strategies

My little site should pull a list of items from a table using the active user's location as a filter. Think Craigslist, where you search for "dvd" but the results are not from the whole DB; they are filtered by a location you select. My question has two levels:
Should I go a-la-Craigslist and ask users to pick a city-level location? My problem with this is that you need to generate what seems to me a hard-coded, hand-made list of locations.
Should I go a-la-zip-code, i.e. just ask the user to type his zip code, and then pull all items that are in the same zip code or within a certain distance of it?
I seem to prefer the zip code way, as it seems the more elegant solution, but how on earth does one go about creating a DB of all zip codes and implementing a function that, given zip code 12345, gets all zip codes within 1 mile?
This should be a fairly common task, as many sites have a need similar to mine, so I am hoping not to reinvent the wheel here.
Getting a Zip Code database is no problem. You can try this free one:
http://zips.sourceforge.net/
Although I don't know how current it is. Or you can use one of many commercial providers. We have an annual subscription to ZipCodeDownload.com, and for maybe $100 we get monthly updates with the latest Zip Code data, complete with the lat/long of each zip code's centroid.
As for querying for all zips within a certain radius, you are going to need a spatial library of some sort. If you just have a table of zips with lats/longs, you will need a database-oriented mechanism. SQL Server 2008 has the capability built in, and there are open source and commercial libraries that will add such capabilities to SQL Server 2005. The open source database PostgreSQL has a project, PostGIS, that adds this capability to that database. It is here: http://postgis.refractions.net/
Other database platforms probably have similar projects, but those are the ones I am aware of. With one of these DB-based libraries you should be able to directly query for any zip codes (or any rows of any kind that have lat/long columns) within a given radius.
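As a rough sketch of that kind of query with PostGIS, assuming a hypothetical zip_codes table with zip, lat and lon columns for the centroid of each zip code (the coordinates below are only an approximate centroid for 12345, used for illustration):

-- zip_codes(zip, lat, lon) is a hypothetical table; the point is illustrative
-- All zip codes within 1 mile (about 1609 m) of the given point
select z.zip
  from zip_codes z
 where ST_DWithin(
         ST_SetSRID(ST_MakePoint(z.lon, z.lat), 4326)::geography,
         ST_SetSRID(ST_MakePoint(-73.94, 42.81), 4326)::geography,
         1609);

Casting to geography makes ST_DWithin measure in meters on a sphere instead of in planar degrees.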
If you want to go a different route, you can use spatial tools with a mapping library. There are open source options here as well, such as SharpMap and many others (Google can help out) that can use the free TIGER maps for the United States as the data source. However, this route is somewhat more complicated and possibly less performant if all you need is a radius search.
Finally, you may want to look into a web service. This, as you say, is a common need, and I imagine there are any number of web services that you can subscribe to that will provide all zip codes within a given radius of a provided zip code. A quick Google search turned up this:
http://www.zip-codes.com/free-zip-code-tools.asp#radius
But there are many resources to be found by searching on this subject.
how on earth does one [...] implement the function that, given zip code 12345, gets all zip codes within 1 mile?
Here is a sample on how to do that:
http://www.codeproject.com/KB/cs/zipcodeutil.aspx
Just to be technical... PostGIS isn't a project of the Postgres community; it's a stand-alone project that is built on top of Postgres. If you want help or support with PostGIS, you'll want to go to its community instead of the Postgres one.
You can use PostGIS. Additionally, I've used deCarta's mapping libraries. They have technology which allows you to geokey any arbitrary data type. Then you can query these spatially.
disclaimer: I work for deCarta
Wouldn't it be more efficient to just figure out which cities are within a 1 mile radius and store that information in a table? Then you don't have to do calculations in the database all the time.
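A sketch of that precomputation idea, assuming a hypothetical zip_neighbors table that is populated once and refreshed whenever the zip data changes:

-- zip_neighbors is hypothetical: precomputed pairs of zip codes within 1 mile
create table zip_neighbors (
  zip          varchar(10) not null,
  neighbor_zip varchar(10) not null,
  primary key (zip, neighbor_zip)
);

-- At query time a radius lookup becomes a simple key lookup
select neighbor_zip
  from zip_neighbors
 where zip = '12345';

The trade-off is that the table has to be rebuilt whenever the zip data or the radius changes.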
