I tried what seemed to be the straightforward approach and added a Package resource to my node configuration for the MongoDB MSI.
Here's the package configuration I tried:
package MongoDB {
    Name = "MongoDB 3.6.11 2008R2Plus SSL (64 bit)"
    Path = "https://fastdl.mongodb.org/win32/mongodb-win32-x86_64-2008plus-ssl-3.6.11-signed.msi"
    ProductId = "88F7AA23-BDD2-4EBE-9985-EBB5D2E23E83"
    Arguments = "ADDLOCAL=`"all`" SHOULD_INSTALL_COMPASS=`"0`" INSTALLLOCATION=`"C:\MongoDB\Server\3.6`""
}
(I had $ConfigurationData references in there, but substituted literals for simplicity.)
I get the following error:
Could not get the https stream for file
Possibly a TLS version issue? I found that Invoke-WebRequest needed the following to work with that same MongoDB download URL. Is there a way to do this with the Package resource?
[Net.ServicePointManager]::SecurityProtocol = "tls12, tls11, tls"
Using nmap to interrogate both nodejs.org and fastdl.mongodb.org (which is actually served from CloudFront), it was indeed true that TLS support differs. nodejs.org still supports TLS 1.0, which happens to work with PowerShell's defaults, but MongoDB's download site only supports TLS 1.1 and 1.2.
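For reference, the TLS versions each endpoint offers can be compared with nmap's ssl-enum-ciphers script:

nmap --script ssl-enum-ciphers -p 443 nodejs.org
nmap --script ssl-enum-ciphers -p 443 fastdl.mongodb.org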
As I mentioned in my question, I suspected that setting the .NET security protocol would work, and indeed it does. There's no way to add arbitrary script to the DSC Package resource, so I made a Script resource just to run this one line, and had the Package resource depend on it. This presumably works because the LCM runs both resources in the same process, so the ServicePointManager setting is still in effect when the package download happens.
This is what I got to work:
Node $AllNodes.Where{$_.Role -contains 'MongoDBServer'}.NodeName {
    Script SetTLS {
        GetScript = { @{ Result = $true } }
        SetScript = { [Net.ServicePointManager]::SecurityProtocol = "tls12, tls11, tls" }
        TestScript = { $false } # Always run
    }
    package MongoDB {
        Ensure = 'Present'
        Name = 'MongoDB 3.6.11 2008R2Plus SSL (64 bit)'
        Path = 'https://fastdl.mongodb.org/win32/mongodb-win32-x86_64-2008plus-ssl-3.6.11-signed.msi'
        ProductId = ''
        Arguments = 'ADDLOCAL="all" SHOULD_INSTALL_COMPASS="0" INSTALLLOCATION="C:\MongoDB\Server\3.6"'
        DependsOn = '[Script]SetTLS'
    }
...
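For completeness, compiling and applying the configuration would look something like this (the configuration name MyMongoConfig and the paths are hypothetical):

# Compile the MOF files, then push the configuration to the target nodes
MyMongoConfig -ConfigurationData .\ConfigurationData.psd1 -OutputPath .\Mof
Start-DscConfiguration -Path .\Mof -Wait -Verbose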
I successfully got a Windows Server 2016 VM to come up and join the domain. However, when I try to log in via Remote Desktop, it throws an error about Network Level Authentication: something about the domain controller not being contactable to perform Network Level Authentication (NLA).
I saw a video on workarounds at https://www.bing.com/videos/search?q=requires+network+level+authentication+error&docid=608000415751557665&mid=8CE580438CBAEAC747AC8CE580438CBAEAC747AC&view=detail&FORM=VIRE.
Is there a way to address this up front with Terraform instead?
To join the domain I am using:
resource "azurerm_virtual_machine_extension" "domjoin" {
  name                 = "domjoin"
  virtual_machine_id   = azurerm_windows_virtual_machine.vm_windows_vm.id
  publisher            = "Microsoft.Compute"
  type                 = "JsonADDomainExtension"
  type_handler_version = "1.3"

  settings = <<SETTINGS
    {
      "Name": "mydomain.com",
      "User": "mydomain.com\\myuser",
      "Restart": "true",
      "Options": "3"
    }
SETTINGS

  protected_settings = <<PROTECTED_SETTINGS
    {
      "Password": "${var.admin_password}"
    }
PROTECTED_SETTINGS

  depends_on = [azurerm_windows_virtual_machine.vm_windows_vm]
}
Is there perhaps an option I should add to this domjoin code?
I can log in with my local admin account just fine, and I can see the server is joined to the domain. An nslookup on the domain returns an IP address that firewall rules were configured to make reachable, so the server can reach the domain controller.
It seems like there are some settings that could help; see here. Possibly all that is needed is
"EnableCredSspSupport": "true" inside your domjoin settings block.
You might also need to do something with the registry on the server side, which can be done by using remote-exec.
For example something like:
resource "azurerm_windows_virtual_machine" "example" {
name = "example-vm"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
network_interface_ids = [azurerm_network_interface.example.id]
vm_size = "Standard_DS1_v2"
storage_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2019-Datacenter"
version = "latest"
}
storage_os_disk {
name = "example-os-disk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "example-vm"
admin_username = "adminuser"
admin_password = "SuperSecurePassword1234!"
}
os_profile_windows_config {
provision_vm_agent = true
}
provisioner "remote-exec" {
inline = [
"echo Updating Windows...",
"powershell.exe -Command \"& {Get-WindowsUpdate -Install}\"",
"echo Done updating Windows."
]
connection {
type = "winrm"
user = "adminuser"
password = "SuperSecurePassword1234!"
timeout = "30m"
}
}
}
In order to set the correct keys in the registry, you might need something like this inside the remote-exec block (I have not validated this code):
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name 'SecurityLayer' -Value 0
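Wired into the remote-exec inline list from the example above, that would look something like this (escaping not validated):

inline = [
  "powershell.exe -Command \"Set-ItemProperty -Path 'HKLM:\\SYSTEM\\CurrentControlSet\\Control\\Terminal Server\\WinStations\\RDP-Tcp' -Name 'SecurityLayer' -Value 0\""
]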
In order to keep the Terraform config cleaner, I would recommend using templates for the PowerShell script; see here.
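For example, with the registry commands moved into a hypothetical scripts/configure-rdp.ps1.tpl template, the provisioner could render it with the built-in templatefile() function (a sketch, not validated):

provisioner "remote-exec" {
  inline = [
    # Render the PowerShell template with its variables and run the result
    templatefile("${path.module}/scripts/configure-rdp.ps1.tpl", {
      security_layer = 0
    })
  ]
}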
Hope this helps
I am trying to install Prestashop on my local machine.
Using Ubuntu 18.04, PHP 7.2, MySQL 5.6, Apache 2.4.
I cloned their GitHub repo, checked out branch 1.7.6.x, installed the Composer dependencies, and made a symbolic link to the code directory from my /var/www/html (I didn't want to bother creating a vhost).
I opened Chromium at http://127.0.0.1/prestashop/install-dev/index.php
I proceeded through all the steps with correct MySQL settings and directory permissions, and left the language at English (English).
But during the Store Installation step, at "Installing addons modules", it fails with a curl error like this:
file_get_contents_curl failed to download http://i18n.prestashop.com/translations/1.7.6.7/en-US/en-US.zip : (error code 28) Operation timed out after 5001 milliseconds with 221832 out of 516048 bytes received
I also see the following in the Request tab of my inspector:
It is calling http://127.0.0.1/prestashop/install-dev/index.php?installModulesAddons=true&_=1596718771175 with status code 200, and the response body is:
{
  "success": false,
  "message": "file_get_contents_curl failed to download http:\/\/i18n.prestashop.com\/translations\/1.7.6.7\/en-US\/en-US.zip : (error code 28) Operation timed out after 5001 milliseconds with 221832 out of 516048 bytes received"
}
I checked the file install-dev/controllers/http/process.php, but the code that I think is being called just looks like this:
/**
 * PROCESS : installModulesAddons
 * Install modules from addons
 */
public function processInstallAddonsModules()
{
    $this->initializeContext();
    if (($module = Tools::getValue('module')) && $id_module = Tools::getValue('id_module')) {
        $result = $this->model_install->installModulesAddons(array('name' => $module, 'id_module' => $id_module));
    } else {
        $result = $this->model_install->installModulesAddons();
    }
    if (!$result || $this->model_install->getErrors()) {
        $this->ajaxJsonAnswer(false, $this->model_install->getErrors());
    }
    $this->session->process_validated = array_merge($this->session->process_validated, array('installModulesAddons' => true));
    $this->ajaxJsonAnswer(true);
}
I suspect my faulty internet is to blame, but is there a workaround to increase the curl timeout?
Comment out the code lines like this:
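Presumably the lines in question are the error branch shown in the question's install-dev/controllers/http/process.php snippet; a sketch of that assumption:

// Commented out so a timed-out translations download no longer fails the step:
// if (!$result || $this->model_install->getErrors()) {
//     $this->ajaxJsonAnswer(false, $this->model_install->getErrors());
// }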
I have a Groovy script that works on a Linux Jenkins:
import groovy.json.JsonSlurper

try {
    List<String> artifacts = new ArrayList<String>()
    // Jira: get the summary for issues of type Story with label demo in project 11411
    def artifactsUrl = 'https://companyname.atlassian.net/rest/api/2/search?jql=project=11411%20and%20issuetype%20in%20(Story)%20and%20labels%20in%20(demo)+&fields=summary'
    def artifactsObjectRaw = ["curl", "-u", "someusername#xxxx.com:tokenkey", "-X", "GET", "-H", "Content-Type: application/json", "-H", "accept: application/json", "-K", "--url", "${artifactsUrl}"].execute().text
    def parser = new JsonSlurper()
    def json = parser.parseText(artifactsObjectRaw)
    // insert all results into the list
    for (item in json.issues) {
        artifacts.add(item.fields.summary)
    }
    // return the list to the extended choice parameter
    return artifacts
} catch (Exception e) {
    println "There was a problem fetching the artifacts: " + e.message
}
This script returns all the summaries from Jira via the API. But when I tried to run it on a Windows Jenkins, the script did not work, because Windows does not have the curl command:
def artifactsObjectRaw = ["curl", "-u", "someusername#xxxx.com:tokenkey", "-X", "GET", "-H", "Content-Type: application/json", "-H", "accept: application/json", "-K", "--url", "${artifactsUrl}"].execute().text
How should I perform this command?
The following code:
import groovy.json.JsonSlurper

try {
    def baseUrl = 'https://companyname.atlassian.net'
    def artifactsUrl = "${baseUrl}/rest/api/2/search?jql=project=MYPROJECT&fields=summary"
    def auth = "someusername#somewhere.com:tokenkey".bytes.encodeBase64()
    def headers = ['Content-Type' : "application/json",
                   'Authorization': "Basic ${auth}"]
    def response = artifactsUrl.toURL().getText(requestProperties: headers)
    def json = new JsonSlurper().parseText(response)
    // the below will implicitly return a list of summaries; no
    // need to define an 'artifacts' list beforehand
    def artifacts = json.issues.collect { issue -> issue.fields.summary }
} catch (Exception e) {
    e.printStackTrace()
}
is pure Groovy, i.e. there is no need for curl. It gets the items from the Jira instance and returns a List<String> of summaries. Since we don't want any external dependencies like HttpBuilder (as you are doing this from Jenkins), we have to do the basic auth encoding manually.
Script tested (the connecting and getting json part, did not test the extraction of summary fields) with:
Groovy Version: 2.4.15 JVM: 1.8.0_201 Vendor: Oracle Corporation OS: Linux
against an atlassian on demand cloud instance.
I removed your jql query as it didn't work for me, but you should be able to add it back as needed.
Install curl and set its path in the Windows environment variables.
Please follow the link to download curl for Windows.
I would consider using the HTTP Request plugin when making HTTP requests.
Since you are using a plugin, it does not matter whether Windows or Linux is your Jenkins host.
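For example, in a pipeline with the plugin installed, the call might look like this (a sketch; readJSON additionally needs the Pipeline Utility Steps plugin, and the credentials value is a placeholder):

def response = httpRequest(
    url: 'https://companyname.atlassian.net/rest/api/2/search?jql=project=MYPROJECT&fields=summary',
    customHeaders: [[name: 'Authorization', value: 'Basic ' + 'someusername#somewhere.com:tokenkey'.bytes.encodeBase64().toString()]],
    acceptType: 'APPLICATION_JSON'
)
def json = readJSON text: response.content
def artifacts = json.issues.collect { it.fields.summary }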
I am trying to generate my GraphQL schema using the Apollo Gradle task generateApolloClasses. The first step, generateMainApolloIR, works fine; it generates a MainAPI.json under
/generated/source/apollo/generatedIR/main/src/main/graphql/client/backend/MainAPI.json. But generateApolloClasses fails with:
> java.io.FileNotFoundException: /Users/mctigg/Documents/Repositories/generated/source/apollo/generatedIR/main (Is a directory)
So it is looking at the wrong path! This is my Gradle config:
apollo {
    nullableValueType = "javaOptional"
    outputPackageName = "generated.client.backend"
}

task generateBackendSchemaJson(type: ApolloSchemaIntrospectionTask) {
    url = 'src/main/graphql/client/backend/schema.graphqls'
    output = 'src/main/graphql/client/backend/schema.json'
}
tasks.findByName('generateMainApolloIR').dependsOn(['generateBackendSchemaJson'])
So how can I configure generateApolloClasses to look into:
/generated/source/apollo/generatedIR/main/src/main/graphql/client/backend/
Instead of
/generated/source/apollo/generatedIR/main/
Maybe you should set the schema file path as follows:
apollo {
    schemaFilePath = "/generated/source/apollo/generatedIR/main/src/main/graphql/client/backend/schema.json"
    nullableValueType = "javaOptional"
    outputPackageName = "generated.client.backend"
}
I am trying to change the default SonarQube server value in the SonarQube Eclipse plugin (v3.2)...
Using the pluginCustomization mechanism (the -pluginCustomization myPrefs.ini argument in the eclipse.ini file), I added the same value as produced by an Eclipse preferences export:
# SonarQube default configuration server
org.sonar.ide.eclipse.core/servers/http\:\\2f\\2fsonar.mycompany.org/auth=true
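For reference, the argument and its value each go on their own line in eclipse.ini, before -vmargs (the path here is a placeholder):

-pluginCustomization
C:\path\to\myPrefs.ini
-vmargs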
But after workspace creation, the default value is always http://localhost:9000.
Is this a bug? Or is there a better common way to do this?
Thanks for the tips.
Update 24/09/2015: Fixed with SonarQube Eclipse plugin v3.5 (see SONARCLIPS-428).
It is not the answer, but a trick, if you consider:
Having your own plugin with some IStartup process in the Eclipse distribution
Using a proxy with the same credentials as SonarQube
This code could help you in an earlyStartup() method:
// Plugin dependencies: org.sonar.ide.eclipse.core, org.eclipse.core.net
// Write a process to run this code once (IEclipsePreferences use)
// ...
String userName = null;
String password = null;

// Get the first login/password defined in the Eclipse proxy configuration
BundleContext bc = Activator.getDefault().getBundle().getBundleContext();
ServiceReference<?> serviceReference = bc.getServiceReference(IProxyService.class.getName());
IProxyService proxyService = (IProxyService) bc.getService(serviceReference);
if (proxyService.getProxyData() != null) {
    for (IProxyData pd : proxyService.getProxyData()) {
        if (StringUtils.isNotBlank(pd.getUserId()) && StringUtils.isNotBlank(pd.getPassword())) {
            userName = pd.getUserId();
            password = pd.getPassword();
            break;
        }
    }
}

// Configure SonarQube with the URL and the proxy user/password if they exist
SonarCorePlugin.getServersManager().addServer("http://sonarqube.mycompany.com", userName, password);

// Store "process done" in preferences
// ...