Sample Ansible custom fact on Windows

Does anyone have a sample of a working PowerShell script that creates a custom fact on Windows for Ansible? I tried something similar on Linux and it works fine, but on Windows I cannot get it to work. It does not help that I cannot find any example on the internet either!
I have tried a number of variations of the following PowerShell script but am unable to get it to work. To be clear, I am able to display the actual custom fact but am unable to access it using, say, ansible_facts.some_var:
$myjson = #"
{
"some_var": "some_value"
}
"#
Write-Output $myjson
Variations attempted:
Tried converting to JSON: Write-Output $myjson | ConvertTo-Json
Added/removed quotes

I've wrestled with Windows facts in the past. This is what I do.
This is a test facts file you need to place on your Windows server (I use Ansible to put it in place):
$InstanceType = Invoke-RestMethod -Uri http://169.254.169.254/latest/meta-data/instance-type
$AvailZone = Invoke-RestMethod -Uri http://169.254.169.254/latest/meta-data/placement/availability-zone
$AMI_ID = Invoke-RestMethod -Uri http://169.254.169.254/latest/meta-data/ami-id
@{
    local = @{
        local_facts = @{
            cloud = 'AWS'
            instance_type = $InstanceType
            avail_zone = $AvailZone
            ami_id = $AMI_ID
            region = 'eu-west-1'
            environment = 'DIT'
            Support_Team = 'Win_Team'
            Callout = '6-8'
        }
    }
}
You don't need to nest the local & local_facts, that's just my personal preference.
At the top of the file, I've added a few variables to grab the AWS metadata but you can change these to something else.
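As an aside, a flat variant of the facts file works too; a minimal sketch (judging by the other answer below, the Windows setup module derives the fact name from the script's file name, so these keys would land under ansible_<filename>):

# Flat facts file: no nesting, the hashtable keys become the facts directly.
@{
    cloud = 'AWS'
    region = 'eu-west-1'
}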
I put my file in C:\TEMP\facts
Then run ansible:
ansible win -m setup -a fact_path=C:/TEMP/facts
and you should get output like this:
"ansible_local": {
"local": {
"local_facts": {
"Callout": "6-8",
"Support_Team": "Win_Team",
"ami_id": "ami-07d8c98607d0b1326",
"avail_zone": "eu-west-1c",
"cloud": "AWS",
"environment": "DIT",
"instance_type": "t2.micro",
"region": "eu-west-1"
}
}
},
If you put the JSON output into a file, you should be able to query it with a command like this:
cat file1 | jq '.[] .ansible_facts.ansible_local.local.local_facts'
{
    "ami_id": "ami-0c95efaa8fa6e2424",
    "avail_zone": "eu-west-1c",
    "region": "eu-west-1",
    "Callout": "6-8",
    "environment": "DIT",
    "instance_type": "t2.micro",
    "Support_Team": "Win_Team",
    "cloud": "AWS"
}

It is a late answer, but it works for me.
I use the following PowerShell script, named installed_software.ps1, to get the installed software:
# get installed software
Get-WmiObject -Class Win32_Product | Select-Object Name, Version
Put the PowerShell script in the path C:\TEMP\facts.
I call it with:
ansible win -m setup -a 'fact_path=C:/TEMP/facts gather_timeout=20 gather_subset=!hardware,!network,!ohai,!facter'
Read the documentation here: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/setup_module.html
The most important thing is: do not output JSON. The setup module on Windows expects raw hashtables, arrays, or other primitive objects.
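For example, a minimal fact script in this style, mirroring the some_var example from the question (a sketch):

# some_var.ps1 - emit a raw hashtable; the setup module handles serialization itself
@{
    some_var = 'some_value'
}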
My output within the JSON looks like:
...
"ansible_installed_software": [
{
"Name": "Office 16 Click-to-Run Extensibility Component",
"Version": "16.0.15427.20178"
},
{
"Name": "Office 16 Click-to-Run Localization Component",
"Version": "16.0.14131.20278"
},
...

Related

terraform windows server 2016 in Azure and domain join issue logging in to domain with network level authentication error message

I successfully got a Windows Server 2016 VM to come up and join the domain. However, when I go to log in via Remote Desktop, it throws an error about Network Level Authentication: something about the domain controller cannot be contacted to perform Network Level Authentication (NLA).
I saw a video on workarounds at https://www.bing.com/videos/search?q=requires+network+level+authentication+error&docid=608000415751557665&mid=8CE580438CBAEAC747AC8CE580438CBAEAC747AC&view=detail&FORM=VIRE.
Is there a way to address this up front with Terraform instead?
To join the domain I am using:
name = "domjoin"
virtual_machine_id = azurerm_windows_virtual_machine.vm_windows_vm.id
publisher = "Microsoft.Compute"
type = "JsonADDomainExtension"
type_handler_version = "1.3"
settings = <<SETTINGS
{
"Name": "mydomain.com",
"User": "mydomain.com\\myuser",
"Restart": "true",
"Options": "3"
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"Password": "${var.admin_password}"
}
PROTECTED_SETTINGS
depends_on = [ azurerm_windows_virtual_machine.vm_windows_vm ]
Is there an option I should add in this domjoin code perhaps?
I can log in with my local admin account just fine. I see the server is connected to the domain. An nslookup on the domain shows an IP address that was configured to be reachable by firewall rules, so it can reach the domain controller.
It seems like there might be some settings that could help out (see here); possibly all that is needed is:
"EnableCredSspSupport": "true" inside your domjoin settings block.
You might also need to do something with the registry on the server side, which can be done using remote-exec.
For example, something like:
resource "azurerm_windows_virtual_machine" "example" {
name = "example-vm"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
network_interface_ids = [azurerm_network_interface.example.id]
vm_size = "Standard_DS1_v2"
storage_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2019-Datacenter"
version = "latest"
}
storage_os_disk {
name = "example-os-disk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "example-vm"
admin_username = "adminuser"
admin_password = "SuperSecurePassword1234!"
}
os_profile_windows_config {
provision_vm_agent = true
}
provisioner "remote-exec" {
inline = [
"echo Updating Windows...",
"powershell.exe -Command \"& {Get-WindowsUpdate -Install}\"",
"echo Done updating Windows."
]
connection {
type = "winrm"
user = "adminuser"
password = "SuperSecurePassword1234!"
timeout = "30m"
}
}
}
In order to set the correct keys in the registry, you might need something like this inside the remote-exec block (I have not validated this code):
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name 'SecurityLayer' -Value 0
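A slightly fuller sketch of the same idea (equally unvalidated; SecurityLayer is the value mentioned above, while UserAuthentication, which directly governs NLA, is my addition):

# Sketch: relax RDP security settings on the target VM (run elevated).
$rdp = 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp'
Set-ItemProperty -Path $rdp -Name 'SecurityLayer' -Value 0       # negotiate the security layer instead of requiring TLS
Set-ItemProperty -Path $rdp -Name 'UserAuthentication' -Value 0  # disable Network Level Authentication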
To keep the Terraform config cleaner, I would recommend using templates for the PowerShell script (see here).
Hope this helps.

Azure Databricks API: import entire directory with notebooks

I need to import many notebooks (both Python and Scala) into Databricks using the Databricks REST API 2.0.
My source path (local machine) is ./db_code and my destination (Databricks workspace) is /Users/dmitriy@kagarlickij.com.
I'm trying to build a 2.0/workspace/import call, so my body is: { "content": "$SOURCES_PATH", "path": "$DESTINATION_PATH", "format": "SOURCE", "language": "SCALA", "overwrite": true }
However, I'm getting the error Could not parse request object: Illegal character, and per the documentation, content must be "The base64-encoded content".
Should I encode all notebooks to base64?
Maybe there are some examples of importing a directory to Databricks using the API?
After some research I've managed to get it to work:
Write-Output "Task: Create Databricks Directory Structure"
Get-ChildItem "$SOURCES_PATH" -Recurse -Directory |
Foreach-Object {
$DIR = $_.FullName.split("$WORKDIR_PATH/")[1]
$BODY = #{
"path" = "$DESTINATION_PATH/$DIR"
}
$BODY_JSON = $BODY | ConvertTo-Json
Invoke-RestMethod -Method POST -Uri "https://$DATABRICKS_REGION.azuredatabricks.net/api/2.0/workspace/mkdirs" -Headers $HEADERS -Body $BODY_JSON | Out-Null
}
Write-Output "Task: Deploy Scala notebooks to Databricks"
Get-ChildItem "$SOURCES_PATH" -Recurse -Include *.scala |
Foreach-Object {
$NOTEBOOK_NAME = $_.FullName.split("$WORKDIR_PATH/")[1]
$NOTEBOOK_BASE64 = [Convert]::ToBase64String([IO.File]::ReadAllBytes("$_"))
$BODY = #{
"content" = "$NOTEBOOK_BASE64"
"path" = "$DESTINATION_PATH/$NOTEBOOK_NAME"
"language" = "SCALA"
"overwrite" = "true"
"format" = "SOURCE"
}
$BODY_JSON = $BODY | ConvertTo-Json
Invoke-RestMethod -Method POST -Uri "https://$DATABRICKS_REGION.azuredatabricks.net/api/2.0/workspace/import" -Headers $HEADERS -Body $BODY_JSON | Out-Null
}
Yes, the notebooks must be encoded to base64. You can use PowerShell to achieve this; try the below.
$BinaryContents = [System.IO.File]::ReadAllBytes("$SOURCES_PATH")
$EncodedContents = [System.Convert]::ToBase64String($BinaryContents)
The body for the REST call would look like this:
{
    "format": "SOURCE",
    "content": "$EncodedContents",
    "path": "$DESTINATION_PATH",
    "overwrite": "true",
    "language": "SCALA"
}
With respect to importing whole directories, you could loop through the files and folders in your local directory using PowerShell and make a REST call for each notebook, as sketched below.
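For instance, a sketch extending the earlier loop to cover both Python and Scala notebooks (assuming $HEADERS, $SOURCES_PATH, $DESTINATION_PATH and $DATABRICKS_REGION are set as in that script):

# Map file extensions to the API's language values.
$LANG_MAP = @{ '.py' = 'PYTHON'; '.scala' = 'SCALA' }
Get-ChildItem "$SOURCES_PATH" -Recurse -Include *.py, *.scala | ForEach-Object {
    # Path of the notebook relative to the source directory.
    $REL_PATH = $_.FullName.Substring($SOURCES_PATH.Length).TrimStart('\', '/')
    $BODY = @{
        "content" = [Convert]::ToBase64String([IO.File]::ReadAllBytes($_.FullName))
        "path" = "$DESTINATION_PATH/$REL_PATH"
        "language" = $LANG_MAP[$_.Extension]
        "overwrite" = "true"
        "format" = "SOURCE"
    } | ConvertTo-Json
    Invoke-RestMethod -Method POST -Uri "https://$DATABRICKS_REGION.azuredatabricks.net/api/2.0/workspace/import" -Headers $HEADERS -Body $BODY | Out-Null
}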

Batch script to Queue the builds in tfs 2015/2017

I was trying to execute builds using a batch script. I wrote one but am getting this error:
please define the Build definition name. tfsbuild start /collection:https://tfs.prod.dcx.int.bell.ca/tfs/bellca/Consumer/builds/All Definitions/{Release)/{Project-name}/{Build definition name}
How can I fix this?
The tfsbuild command line tool is only for XAML builds. For modern builds, you'll need to use the REST API or the C# wrapper for the REST API.
The documentation has good examples, but basically POST https://{instance}/DefaultCollection/{project}/_apis/build/builds?api-version={version}
with an appropriate body:
{
    "definition": {
        "id": 25
    },
    "sourceBranch": "refs/heads/master",
    "parameters": "{\"system.debug\":\"true\",\"BuildConfiguration\":\"debug\",\"BuildPlatform\":\"x64\"}"
}
Yes, just as Daniel said, you need to use the REST API; see Queue a build.
You could simply use the below PowerShell script to queue builds (just replace the params accordingly):
Param(
    [string]$collectionurl = "http://server:8080/tfs/DefaultCollection",
    [string]$projectName = "ProjectName",
    [string]$keepForever = "true",
    [string]$BuildDefinitionId = "34",
    [string]$user = "username",
    [string]$token = "password"
)

# Base64-encodes the Personal Access Token (PAT) appropriately
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $user, $token)))

function CreateJsonBody
{
    $value = @"
{
    "definition": {
        "id": $BuildDefinitionId
    },
    "parameters": "{\"system.debug\":\"true\",\"BuildConfiguration\":\"debug\",\"BuildPlatform\":\"x64\"}"
}
"@
    return $value
}

$json = CreateJsonBody
$uri = "$($collectionurl)/$($projectName)/_apis/build/builds?api-version=2.0"
$result = Invoke-RestMethod -Uri $uri -Method Post -Body $json -ContentType "application/json" -Headers @{Authorization = ("Basic {0}" -f $base64AuthInfo)}
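The response can then be inspected, e.g. (a sketch; id and status are standard fields on the build object the API returns):

# Show the id and state of the build we just queued.
Write-Output ("Queued build {0} (status: {1})" -f $result.id, $result.status)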

AWS IAM commands working correctly in terminal but not working with the Laravel/PHP AWS SDK

I am using Ubuntu 14.04 and installed the AWS CLI package using the terminal.
When running AWS IAM commands, it works fine.
E.g., I ran this command:
aws iam list-users
and got the following results:
{
    "Users": [
        {
            "Arn": "arn:aws:iam::3**16****332:user/xyz",
            "CreateDate": "2014-09-29T14:21:25Z",
            "UserId": "AIDAJY*******MW**W",
            "Path": "/",
            "UserName": "xyz"
        },
        {
            "Arn": "arn:aws:iam::34****044**2:user/abcxyz",
            "CreateDate": "2014-02-07T21:08:53Z",
            "UserId": "AIDAJ******JML**V6Y",
            "Path": "/",
            "UserName": "abcxyz"
        }
    ]
}
While using the AWS SDK with Laravel 5.1, I configured the key, secret, region, etc. (same as configured for the AWS CLI).
When running this code in Laravel 5.1:
$Iam = \App::make('aws')->createClient('Iam');
$result = $Iam->listUsers();
echo "<pre>";
print_r($result);
die();
I am getting the following error (see attachment).
What can be the reason? The same configuration works fine in the terminal but not with the SDK. I also tried SQS, which works fine; see the following code:
$obj = \App::make('aws')->createClient('Sqs');
$queue = $obj->getQueueUrl(['QueueName'=>'sms-demo']);
$queueUrl= $queue->get('QueueUrl');
$result = $obj->receiveMessage(array('QueueUrl' => $queueUrl));
IAM is a global service, not region-specific. However, when using the API you must use the us-east-1 region for the call.
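For example, a minimal sketch of that fix in the question's Laravel code (assuming createClient accepts an args array, as in AWS SDK for PHP v3):

// Pin the IAM client to us-east-1 regardless of the configured default region
$Iam = \App::make('aws')->createClient('Iam', ['region' => 'us-east-1']);
$result = $Iam->listUsers();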

Vagrant Meta Data is Corrupt

Hope someone can help out here. I'm trying to version self-hosted Vagrant boxes, so doing this without using Vagrant Cloud.
I've created the following meta data file:
{
    "description": "How about this",
    "name": "Graphite",
    "versions": [
        {
            "version": "1.8",
            "providers": [
                {
                    "name": "virtualbox",
                    "url": "http://desktopenvironments/Graphite/Graphite_1.8.box"
                }
            ]
        }
    ]
}
This is taken directly from the (somewhat lacking) Vagrant documentation found at http://docs.vagrantup.com/v2/boxes/format.html.
When running vagrant box add (pointing at the box file that contains this file directly from disk) I get:
The metadata associated with the box 'graphite' appears corrupted.
This is most often caused by a disk issue or system crash. Please
remove the box, re-add it, and try again.
Any assistance as to why this is happening would be greatly appreciated.
I was generating my metadata file from a C# app I wrote, using UTF-8 for text encoding. This is not enough: you need to use UTF-8 without a BOM.
Once the byte order mark was removed, it all worked 100%.
var settings = new JsonSerializerSettings() { ContractResolver = new LowercaseContractResolver() };
string json = JsonConvert.SerializeObject(metadata, Formatting.None, settings);

// UTF8Encoding(false) produces UTF-8 without a byte order mark
var utf8WithoutBom = new System.Text.UTF8Encoding(false);
using (var sink = new StreamWriter(outputFilePath, false, utf8WithoutBom))
{
    sink.Write(json);
}
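A quick way to verify the output has no BOM (a PowerShell sketch; metadata.json stands in for your metadata file name, and EF BB BF at the start of a file is the UTF-8 BOM):

# Print the first three bytes in hex; 'EF BB BF' means a BOM is present.
$bytes = [System.IO.File]::ReadAllBytes('metadata.json')[0..2]
($bytes | ForEach-Object { $_.ToString('X2') }) -join ' '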
