I need to find out who is using PST files in my organization.
I thought it might be easier to do this using an SCCM 2012 query,
but I didn't find any help on Google.
The search needs to contain the path:
C:\Users\*user*\AppData\Local\Microsoft\Outlook
Can anyone help me?
Thanks!
Software inventory
Before anything else, you need to enable software inventory and create inventory rules in your environment.
Go to Administration - Client Settings - Default Client Settings properties - Software Inventory, click Set Types..., then click New to add a path for querying. Here you can add C:\Users\ as the path and *.pst as the file name, then save. For more instructions on software inventory rules, see here.
After the next software inventory cycle finishes (the default interval is 7 days; you can trigger it manually on the client side via Control Panel - Configuration Manager - Actions - Software Inventory Cycle), the data will be collected and saved in the database. Then you can use a SQL query or the built-in reports to get what you want. An example query is below:
select distinct
    Sys.Name0,
    SF.FileName AS [Filename],
    SF.ModifiedDate AS [Last Modified Date],
    SF.FilePath AS [Local Filepath],
    SF.FileSize/1024/1024 AS [Size (MB)],           -- FileSize is stored in bytes
    SF.FileSize/1024/1024/1024 AS [Size (GB)]
from v_R_System Sys
INNER JOIN v_GS_SoftwareFile SF on Sys.ResourceID = SF.ResourceID
INNER JOIN v_FullCollectionMembership FCM on FCM.ResourceID = Sys.ResourceID
where SF.FileName like '%.pst'
order by SF.FileName
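If you only want PST files under the default Outlook path mentioned in the question, you could also filter on the file path. As a rough sketch (adjust the pattern to your environment), the WHERE clause would become something like:

where SF.FileName like '%.pst'
  and SF.FilePath like '%\AppData\Local\Microsoft\Outlook%'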
Note that software inventory is a file-by-file scan, so it can take a long time to complete and may consume more client resources. During the inventory period, other inventory cycles (such as heartbeat and hardware inventory) are put on hold until software inventory completes.
For more about software inventory on TechNet, see About Software Inventory.
See here for an implementation example.
Alternatively, a script combined with hardware inventory can collect all of this information as well. I found a good solution that worked well in my lab; see Example.
Related
Ansible has logging plugins to send data to Logstash, LogDNA, etc. But is it possible to log specific details such as who ran the playbook, which servers the playbook was executed on, and their IP addresses? I am trying to understand whether there is any module specifically for this kind of logging.
I bet the thing you want is ARA from the OpenStack folks. I have been using it for a while and find it a ton easier to read than wading through a sea of log output.
That said, you may also be happier with AWX, or "Tower" (their commercial version). Using AWX would have the benefit of enforcing access to the playbooks, versus asking people to correctly configure their ansible.cfg to use ARA.
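If you do go the ARA route, setup is mostly just installing the package and pointing Ansible at ARA's callback plugin. A minimal sketch based on ARA's documented setup helpers (verify the commands against the ARA version you install):

# Install ARA into the same Python environment as Ansible
pip install ara

# ara.setup.callback_plugins prints the plugin path; export it here, or paste
# the printed path into the callback_plugins setting of your ansible.cfg
export ANSIBLE_CALLBACK_PLUGINS="$(python3 -m ara.setup.callback_plugins)"

# Run playbooks as usual; ARA records the runs, hosts, and task results
ansible-playbook -i inventory play.yml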
I can't seem to wrap my mind around these two reports:
List of assets by compliance state for a configuration baseline
List of assets by compliance state for a configuration item in a configuration baseline
Can someone please enlighten me and possibly give some examples?
I'm trying to find out which assets are compliant with our Screen Saver Timeout. We have a GPO (user configuration) that sets the timeout to 900 seconds. I then created a CI with a script that queries the value of a registry key under HKCU. I then created a CB and added the CI to it. But I'm getting different compliance/non-compliance counts when I run the two reports. The first report says that I have 1700 compliant, but when I run the second report, I only have 7 compliant.
Please please help!!
Thank you in advance!
A baseline can have multiple configuration items in it, so the first report would show you the overall compliance for all of the CIs in that baseline. The second report is for a single CI.
It's difficult to say what may be causing the difference you're seeing, but it's likely due either to having multiple CIs in the baseline, or to the first report showing Compliant for machines that haven't evaluated the CI yet, whereas the second may only be reporting those that have actually evaluated it.
List of assets by compliance state for a configuration baseline
Lists the devices or users in a specified compliance state following the evaluation of a specified configuration baseline.
List of assets by compliance state for a configuration item in a configuration baseline
Lists the devices or users in a specified compliance state following the evaluation of a specified configuration item.
You can use Report Builder to review each report's underlying query. If you are interested, you can find more information in the database.
Here is the relevant snippet of the difference between the two report queries (screenshot omitted).
Based on the following query, we can find the meaning of the CIType values:
select * from v_ConfigurationItems
where CIType_ID in (2, 50)
2 = Baseline
50 = Configuration Policy
Hey to every Alfresco pro out there!
Is there any way to create a report (graphical or textual, I don't care) showing the following information:
download count per file
how many times did user X download a specific file
which permissions do the users have
Are my goals easy to realize? Is there any plugin out there that I can use for this? (I already searched but couldn't find one.) Hope you can help me :)
mtzE
There is nothing out-of-the-box that is counting downloads. Maybe the audit service can be used to count reads, but you'll have to turn it on and configure it. Once turned on, the audit service writes records to a set of audit tables in your Alfresco database. You can then use any reporting tool to query those tables.
If you want to check the permissions a user has you can use something like OpenCMIS to connect to the repository, traverse a folder path, and then, for each object, you can inspect the ACL of that object to use as data in your report.
As Lista said, one way to create such reports is to use AAAR, but that is not required.
We are trying to design an Ansible system for our crew.
We have some open questions that made us stop and think, and we would like to hear other ideas.
The details:
4 development teams.
We run CI servers, DB servers, and a personal virtual machine for each programmer.
A new programmer receives a clean VM, and we would like to use Ansible to "prepare" it according to the team they are about to join.
We also want to use Ansible for weekly updates (when needed) on some VMs - it might be for a whole team or for all our VMs.
Team A and Team B share some of their needs (for example, they both use Django), but naturally there are applications that Team A uses and Team B does not.
What we have done:
We had old "maintenance" bash scripts that we translated into YAML tasks.
We grouped them into Ansible roles.
We have an inventory file which contains group for each team and our servers:
[ALL:children]
TeamA
TeamB
...
[TeamA]
...
[TeamB]
...
[CIservers]
...
[DBservers]
...
We have a large playbook that contains all our roles (with a tag for each):
- hosts: ALL
  roles:
    - { role: x, tags: 'x' }
    - { role: y, tags: 'y' }
    ...
We invoke Ansible like this:
ansible-playbook -i inventory -t TAG1,TAG2 -l TeamA play.yml
The Problems:
We have a feeling we are not using roles as we should. We ended up with roles like "mercurial" or "eclipse" that install and configure the tool (add aliases, edit PATH, create symbolic links, etc.), a role for apt_packages (using the apt module to install the packages we need), and a role for pip_packages (using the pip module to install the packages we need).
Some of our roles depend on other roles (we used the meta folder to declare those dependencies). Because our playbook contains all the roles we have, when we run it without tags (on a new programmer's VM, for example) the roles that other roles depend on run twice (or more), which is a waste of time. We thought about removing the roles that others depend on from our playbook, but that is not a good solution because we would lose the ability to run those roles by themselves.
We are not sure how to continue from this point. Should we give up role dependencies and instead create playbooks that implement those dependencies by specifying the roles in the right order?
Should we change our roles into something like TeamA or DBserver that would unite many of our current roles? (In that case, how do we handle the tasks common to TeamA and TeamB, and how do we handle the tasks relevant only to TeamA?)
Well, that is about everything.
Thanks in advance!
Sorry for the late answer; I guess your team has probably figured out a solution by now. I suspect you'll have the standard Ansible structure with group_vars, host_vars, a roles folder, and a site.yml, as outlined below:
site.yml
group_vars/
host_vars/
roles/
    common/
    dbserver/
    ciserver/
I suspect your team is attempting to link everything into a single site.yml file. This is fine for the common tasks, which operate based on roles and tags. For the edge cases, I suggest you create a second or third playbook at the root level, which can be specific to a team or to a weekly deployment. That way you can still keep the common tasks in the standard structure, without complicating them with all the meta dependencies.
site.yml // the standard ansible way
teamb.yml // we need to do something slightly different
Again, as you spot better ways of implementing a task, the playbooks can be refactored and tasks moved from the specific files into the standard roles.
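For illustration, a hypothetical teamb.yml kept next to site.yml might look like the sketch below (the role and group names are placeholders, not taken from your setup):

# teamb.yml - team-specific playbook alongside site.yml
- hosts: TeamB
  become: yes
  roles:
    - common          # shared baseline every team gets
    - django          # shared between TeamA and TeamB
    - teamb_tools     # packages and config only TeamB needs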
It seems you are still trying to work out the best way to use Ansible when you have multiple teams that work on the same infrastructure and don't want to affect each other's tasks. Have a look at this boilerplate; it might help.
If you look in that repo, you will see there are multiple roles, and you can design the playbooks as per your requirements.
Example:
- common.yml (this will be common to all the teams)
- Otherwise, you can create one per team or project, e.g. teamname.yml or project.yml
If you use any of the above, you just need to define the proper roles in the playbook, and it should pick up the right host and group vars.
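As a rough sketch of that idea (the file names and variables below are hypothetical), the per-team differences live in group_vars while the playbook stays generic:

# group_vars/TeamA.yml - variables that only TeamA needs
team_packages:
  - eclipse
  - mercurial

# common.yml - one playbook shared by all teams
- hosts: ALL
  become: yes
  roles:
    - common
  tasks:
    - name: Install team-specific apt packages
      apt:
        name: "{{ team_packages }}"
        state: present
      when: team_packages is defined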
Hi, can anybody tell me whether it is possible to automate the inventory with supplier data using Magmi?
I have 3 suppliers and they update their inventory regularly, and I have to do it all through CSV. Is there a way I can automate the whole process, meaning the data is updated with the suppliers' data automatically or at a scheduled time?
Yes, you can.
The Magmi CLI interface is made for this. Coupled with a cron job, if you have the right profiles already defined (one per supplier), automating it is a no-brainer.
Magmi is already able to fetch CSV files from remote locations, so configure one profile per vendor with the appropriate plugin parameters.
See: Magmi Command Line
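As a rough sketch of what that looks like (the install path and flags below are illustrative; check the Magmi Command Line wiki page for the exact options of your version), you define one Magmi profile per supplier and then schedule the CLI with cron:

# run one import profile from the Magmi CLI folder
cd /var/www/html/magmi/cli && php magmi.cli.php -profile=supplier1 -mode=update

# crontab entries - one line per supplier profile, spread over the night
0 2 * * * cd /var/www/html/magmi/cli && php magmi.cli.php -profile=supplier1 -mode=update
0 3 * * * cd /var/www/html/magmi/cli && php magmi.cli.php -profile=supplier2 -mode=update
0 4 * * * cd /var/www/html/magmi/cli && php magmi.cli.php -profile=supplier3 -mode=update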