Difference between two SCCM reports

I can't seem to get my mind wrapped around these two reports:
List of assets by compliance state for a configuration baseline
List of assets by compliance state for a configuration item in a configuration baseline
Can someone please enlighten me and possibly give some examples?
I'm trying to find out which assets are compliant with our screen saver timeout. We have a GPO (user configuration) that sets the timeout to 900 seconds. I then created a CI with a script that queries the value of a registry key under HKCU, then created a CB and added the CI to it. But I'm getting different compliance/non-compliance counts when I run the two reports: the first report says I have 1700 compliant, but when I run the second report, I only have 7 compliant.
Please please help!!
Thank you in advance!

A baseline can have multiple configuration items in it, so the first report would show you the overall compliance for all of the CIs in that baseline. The second report is for a single CI.
It's difficult to say what's causing the difference you're seeing, but it's likely either that the baseline contains multiple CIs, or that the first report counts machines as compliant even if they haven't evaluated the CI yet, whereas the second only reports those that have actually evaluated it.

List of assets by compliance state for a configuration baseline
Lists the devices or users in a specified compliance state following the evaluation of a specified configuration baseline.
List of assets by compliance state for a configuration item in a configuration baseline
Lists the devices or users in a specified compliance state following the evaluation of a specified configuration item.
You can use Report Builder to review the query. If you are interested, you can find more information in the site database.
Here is the relevant snippet of the difference between the two reports:
[screenshot comparing the two report queries]
Based on the following query, we can find the meaning of the CIType.
select * from v_ConfigurationItems
where CIType_ID in (2,50)
2= Baseline
50=Configuration Policy
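If you want to dig a little deeper, a rough sketch along the following lines can show how many CIs a given baseline actually contains, which is usually why the baseline-level and per-CI counts differ. It uses Invoke-Sqlcmd from the SqlServer module against the site database; the server and database names are placeholders, and the v_CIRelation join should be verified in your own environment.
# Rough sketch: list the configuration items that belong to each baseline so you
# can see how many CIs the baseline-level report is aggregating.
# "SCCMSQL01" and "CM_ABC" are placeholders; verify the v_CIRelation join and
# RelationType semantics against your own site database.
$sql = @"
SELECT bl.CI_ID  AS BaselineCIID,
       ci.CI_ID  AS MemberCIID,
       ci.CIType_ID,
       loc.DisplayName
FROM   v_ConfigurationItems bl
JOIN   v_CIRelation rel            ON rel.FromCIID = bl.CI_ID
JOIN   v_ConfigurationItems ci     ON ci.CI_ID = rel.ToCIID
JOIN   v_LocalizedCIProperties loc ON loc.CI_ID = ci.CI_ID
WHERE  bl.CIType_ID = 2   -- 2 = baseline, per the query above
"@
Invoke-Sqlcmd -ServerInstance "SCCMSQL01" -Database "CM_ABC" -Query $sql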

Related

Azure Blob Storage lifecycle management - send report or log after run

I am considering using Azure Blob Storage's built-in lifecycle management feature for deleting blobs of a certain age.
However, due to a business requirement, it must be possible to generate a report or log statement after each daily execution of the defined ruleset. The report or log must state the number of blobs that were affected, e.g. deleted, during the run.
I have read through the documentation and Googled to see if others have had similar inquiries, but so far without any luck.
So my question: Does anyone know if and how I can get the built-in lifecycle management feature to do one of the following after each daily run:
Add a log statement to the storage account containing the Blob storage.
Generate and send a report to an endpoint I define.
If the above can't be done I will have to code the daily deletion job and report generation myself, which surely I can do, but I would like to use the built-in feature if possible.
To summarize the solution:
If you want to know which blobs are deleted every day, we can configure diagnostic settings on the storage account. After doing that, we will get logs for the read, write, and delete requests against the blob service. For more detail, please refer to here and here.
Regarding how to enable it, we can use the PowerShell cmdlet Set-AzStorageServiceLoggingProperty.
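As a rough sketch of that cmdlet in use (the Az.Storage module, resource group, account name, and retention period are assumptions; adjust for your environment):
# Sketch only: enable classic Storage Analytics logging for blob delete operations.
# "myResourceGroup" and "mystorageaccount" are placeholders.
$ctx = (Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount").Context
Set-AzStorageServiceLoggingProperty -ServiceType Blob -LoggingOperations Delete -RetentionDays 10 -Version 2.0 -Context $ctx -PassThru
# The resulting logs are written to the $logs container of the same storage account,
# where the delete requests issued by lifecycle management can then be counted.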

D365 Can I update systemuserid?

In our D365 online environment we have multiple sandbox and production instances. In each of these, the systemuserid for the same user is different (the user import was done before I joined!). This mismatch in SystemUserId also happens when a new user is added (my own user record, for example, which was added last week).
I know updating systemuserid on-premises was unsupported but possible; with an online environment, what are my best options to fix this issue? With different GUIDs, all references (workflows etc.) fail when moving solutions across environments.
Coming here as my last option, as I have already googled and looked into the SDK.
Thanks,
Hardcoding data into processes is a bad practice; it makes your processes really rigid. You can create a configuration entity, store the sys admin id there, and retrieve it. If you have a custom workflow activity, you will be able to retrieve the record and use it in every configuration task.
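For illustration only, a minimal sketch of that idea from PowerShell against the Web API (the new_appsettings entity, its fields, and the bearer token in $token are all hypothetical; in a custom workflow activity you would do the equivalent retrieve with the organization service):
# Hypothetical sketch: read the admin user's GUID from a configuration record
# instead of hardcoding it. $token is assumed to be an already-acquired OAuth
# bearer token; new_appsettings / new_name / new_value are made-up schema names.
$orgUrl  = "https://yourorg.crm.dynamics.com"
$headers = @{ Authorization = "Bearer $token"; Accept = "application/json" }
$query   = "$orgUrl/api/data/v9.2/new_appsettings?`$select=new_value&`$filter=new_name eq 'SysAdminUserId'"
$result  = Invoke-RestMethod -Uri $query -Headers $headers -Method Get
$sysAdminId = $result.value[0].new_value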
You can't update an ID at all. I usually copy my production database into all my dev environments to avoid this problem; D365 also makes it easy to do so. It's worth taking a moment between two sprints to do it, because it helps to have the system user IDs and entity object type codes identical everywhere.

Can I create a report that will list all previously run reports?

I'm using Dynamics CRM 2015 and want to create a report that will show all reports run in the last 12 months.
I've been using the Report Wizard and can't seem to find the entity that is created when a report is run. I can find when a report was created but not every time it has been run.
Example of expected results:
Report X
4/3/2019 Admin 1
4/2/2019 Admin 3
Report Y
4/3/2019 Admin 2
4/2/2019 Admin 1
I'm not worried about the formats, I will most likely tinker with it after. I simply want to find a way to display every instance any report has been run.
Since you're on CRM 2015, it would follow that your system is on-prem.
That means you're not able to use the relatively new Activity Logging a.k.a. Read Auditing that's available in D365 Online, which seems to have what you're looking for.
The out-of-box audit in CRM 2015 would give you some kind of "user access" auditing (i.e. when people login), but not show you specific report runs. It's really designed to capture changes to the data for audited entities.
As far as I know there is no entity record created when a user runs a report. Provided you were willing to hook into and/or replace all the report triggers throughout the system (i.e. in all ribbons), you could hypothetically build something to track report runs. But it seems like that would be cost prohibitive.
According to this article you should be able to pull this info out of the ReportServer DB. I'd quote the relevant parts here but it seems very involved - creating temp tables, etc.
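If you just want a starting point rather than the full article, report executions are recorded in the ExecutionLog views of the ReportServer database. A minimal sketch with Invoke-Sqlcmd, assuming the standard ExecutionLog3 view; the server and database names are placeholders:
# Sketch: pull the last 12 months of report executions from the SSRS ExecutionLog3 view.
# "SQLSERVER01" and "ReportServer" are placeholders for your environment.
$sql = @"
SELECT ItemPath, UserName, TimeStart, Status
FROM   ExecutionLog3
WHERE  TimeStart >= DATEADD(MONTH, -12, GETDATE())
ORDER  BY ItemPath, TimeStart DESC
"@
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Database "ReportServer" -Query $sql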

Storing SSRS reports in a file that can be called immediately

Hi Fellow SSRS Developers,
I have a scenario that I'm trying to tend to but need to know if what I want to do is even possible.
I have 4 reports that I would like to have run and then store the actual report output in a file on a server. The reason for this need is that the response time on the reports is a bit long, and I've done everything I can in SQL to speed them up.
What I want to have happen, is when a user clicks on the report name, instead of rendering the report on their screen I simply want to call the report that is already in a file so that it will load in lightning quick time.
Has anyone ever done this with SSRS and is it even possible?
Thanks,
Other than running reports on demand, there are two specific options: Running from a Cached report and running from a Snapshot.
You can see details on all of this in Setting Report Processing Properties.
Caching
From Books Online:
To enhance performance, you can specify a report (and data) to be cached temporarily when a user runs the report. The cached copy is subsequently available to other users who access the same report. With this approach, if ten users open the report, only the first request results in report processing. The report is subsequently cached, and the remaining nine users view the cached report.
So here you can see that it is a specific user action that causes a stored report to be created.
See Report Caching in Reporting Services.
Snapshots
From Books Online:
A report snapshot is a report that contains layout information and data that is retrieved at a specific point in time. You can run a report as a report snapshot to prevent the report from being run at arbitrary times (for example, during a scheduled backup). A report snapshot is usually created and subsequently refreshed on a schedule, allowing you to time exactly when report and data processing will occur. If a report is based on queries that take a long time to run, or on queries that use data from a data source that you prefer no one access during certain hours, you should run the report as a snapshot.
Here you can see that these are generally set up on a regular schedule, i.e. independent of user activity.
See Creating, Modifying, and Deleting Snapshots in Report History.
In this case it seems like Snapshots might be your best option, since you have more control over when the stored report is created. The main issue with Snapshots is that they need either stored credentials or an unattended execution account, so they might not be possible in all cases.

SCM management of AppFabric Cache Cluster

I'm working on building out a standard set of configurations for our cache clusters within AppFabric. My goal is to have a repeatable cache settings configuration when we load up a new environment (where server names, number of hosts, and other environmental factors differ).
My initial pass was to utilize the XML available from Export-CacheClusterConfig and simply change server names and size attributes in the <hosts> section, but I'm not sure what else is automatically registered with those values (the hostId parameter, for example).
My next approach that I've considered is a PowerShell script to simply build up the various caches with the correct parameters passed in that would simply run as a post-deploy step.
Anyone else have experience with repeatable AppFabric cache cluster deployments?
After trying both, the more successful option seems to be a combination of two factors. Management of the cache cluster (host information) is primarily an operations concern and is best managed by the operations team (i.e. those guys that read Server Fault). Since this information is stored in the configuration as well (and would require an XML file obtained from Export-CacheClusterConfig for each environment), it's best left to the operations team to decide how they want to manage it. Importing the wrong file (with the incorrect host information) has led to a number of issues.
So, we're left with PowerShell scripts. Here's a sample that I have. It could be cleaned up (check for Cache existence first) but you get the general idea. It's also much easier to store in source control (as it's just one file).
New-Cache -CacheName CRMTickets -Eviction None -Expirable false -NotificationsEnabled true
New-Cache -CacheName ConsultantCache -Eviction Lru -Expirable true -TimeToLive 60
New-Cache -CacheName WorkitemCache -Eviction None -Expirable true -TimeToLive 60
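Picking up the "check for Cache existence first" note above, a slightly cleaned-up sketch might look like this (the cache names and settings are copied from the script above; the module import and existence check are assumptions to adapt to your cluster):
# Sketch: create each cache only if it does not already exist in the cluster.
# Assumes the AppFabric DistributedCacheAdministration module is available.
Import-Module DistributedCacheAdministration
Use-CacheCluster
$caches = @(
    @{ CacheName = 'CRMTickets';      Eviction = 'None'; Expirable = 'false'; NotificationsEnabled = 'true' },
    @{ CacheName = 'ConsultantCache'; Eviction = 'Lru';  Expirable = 'true';  TimeToLive = 60 },
    @{ CacheName = 'WorkitemCache';   Eviction = 'None'; Expirable = 'true';  TimeToLive = 60 }
)
$existing = (Get-Cache) | ForEach-Object { $_.CacheName }
foreach ($cache in $caches) {
    if ($existing -notcontains $cache.CacheName) {
        New-Cache @cache
    }
}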
