MSI is overwriting the installation directory - Windows

I am trying to install the MySQL .msi file to the F:\ drive using the command below:
$msiexec /a mysql-commercial-8.0.27-winx64.msi INSTALLDIR="F:", /qn, /l*v, install.log
In the MSI log file, I found that the path was changed automatically:
MSI (s) (E4:18) [14:37:45:516]: PROPERTY CHANGE: Modifying INSTALLDIR property. Its current value is 'F:'. Its new value: 'C:\MySQL\MySQL Enterprise Backup 8.0'.
Note: I am facing this issue only on Azure and GCP servers, not on AWS Windows servers.
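For reference, standard msiexec syntax separates switches with spaces rather than commas, so the same command would normally be written as below (just a syntax sketch; whether this stops the MySQL MSI from redirecting INSTALLDIR is the open question):
msiexec /a mysql-commercial-8.0.27-winx64.msi INSTALLDIR="F:" /qn /l*v install.log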

Related

Azure Windows VM extensions are in a "provisioning failed" state; the VM agent is "Ready" but backup fails while the VM is running with GuestAgentSnapshotTaskStatusError

Backup is failing for Azure VM with error - GuestAgentSnapshotTaskStatusError
Azure Backup service could not communicate with the VM Agent for triggering a snapshot (to take a backup), because the VM Agent might be in an inconsistent state.
Guest Agent is in a Ready state; however, the backup extension is in a failed state. The issue is that the VM agent is ready but the following VM extensions are in a provisioning failed state:
1. AzureDiskEncryption
2. enablevmaccess
3. MicrosoftMonitoringAgent
4. WindowsAgent.AzureSecurityCenter
Solution:
You would need to re-install the Backup extension. Take a backup of the whole registry, then follow the action plan below (a PowerShell sketch of some of these steps follows the list):
1. Log in to the affected machine.
2. Open Registry Editor.
3. Remove the VMSnapshot registry keys at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure\HandlerState.
4. Remove or rename the VMSnapshot plugin folders at C:\Packages\Plugins.
5. Open a command prompt as admin and run the commands below to force the extension installation:
REG ADD "HKLM\SOFTWARE\Microsoft\BcdrAgent" /v IsProviderInstalled /t REG_SZ /d False /f
REG ADD "HKLM\SOFTWARE\Microsoft\BcdrAgentPersistentKeys" /v IsCommonProviderInstalled /t REG_SZ /d False /f
6. Restart the "WindowsAzureGuestAgent" service.
7. Trigger a manual backup. As part of the backup, the extension will be re-installed automatically.
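For convenience, here is a rough PowerShell equivalent of steps 3, 4, and 6 (the registry path and plugin folder come from the steps above; that the key and folder names contain "VMSnapshot" is my assumption), to be run in an elevated session after backing up the registry:
# Step 3: remove the VMSnapshot handler-state registry keys (names assumed to contain "VMSnapshot").
Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\Windows Azure\HandlerState' |
    Where-Object { $_.PSChildName -like '*VMSnapshot*' } |
    Remove-Item -Recurse -Force
# Step 4: rename the VMSnapshot plugin folders instead of deleting them.
Get-ChildItem 'C:\Packages\Plugins' -Directory |
    Where-Object { $_.Name -like '*VMSnapshot*' } |
    Rename-Item -NewName { $_.Name + '.bak' }
# Step 5 is the two REG ADD commands listed above.
# Step 6: restart the guest agent service.
Restart-Service -Name WindowsAzureGuestAgent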

Special characters fail installation of Oracle WebLogic Server 12.x

I'm trying to install "Oracle WebLogic Server 12.2.1.4" on Windows 10 with the
java -jar fmw_12.2.1.4.0_wls_quick.jar command, run from Git Bash. The problem is that my account folder name (let's say UserŃame) contains special characters and I cannot change that. This is the error message that I get:
The directory path "C:\Users\User▒ame\AppData\Local\Temp\OraInstall2022-01-05_04-13-52PM" contains invalid characters.
Unable to locate or create a temporary directory for the Oracle Universal Installer.
Are there any tricks to bypass this extraction to the temp folder?
I'm trying to set up a new local account with admin privileges, but after creating it I cannot log in to it... that's a different problem.
You can try to set a different temp dir with the JVM property -Djava.io.tmpdir.
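For example (C:/Temp is just an assumed existing folder whose path contains no special characters; forward slashes avoid Git Bash escaping issues):
java -Djava.io.tmpdir=C:/Temp -jar fmw_12.2.1.4.0_wls_quick.jar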

Can a Service Fabric Application be deployed from within a Windows Docker Container to a cluster?

When building a Service Fabric project in Visual Studio (*.sfproj), a Deploy-FabricApplication.ps1 script is created as part of the template to deploy this application to Azure (or Service Fabric running wherever, for that matter). I'm looking for a way to containerize that mechanism as part of a Windows Docker image since our build and deployment process is containerized. Is there a way to run this script from within a Windows Docker container, and if so, what prerequisites would the image need to have?
Update:
Service Fabric SDK 3.3.617, released as part of Service Fabric 6.4, can now be installed in containers to build and deploy Service Fabric projects. This can be done in a Dockerfile using the following:
ADD https://download.microsoft.com/download/D/D/D/DDD408E4-6802-47FB-B0A1-ECF657BEB35F/MicrosoftServiceFabric.6.4.617.9590.exe C:\TEMP\MicrosoftServiceFabricRuntime.exe
ADD https://download.microsoft.com/download/D/D/D/DDD408E4-6802-47FB-B0A1-ECF657BEB35F/MicrosoftServiceFabricSDK.3.3.617.msi C:\TEMP\MicrosoftServiceFabricSDK.msi
RUN C:\TEMP\MicrosoftServiceFabricRuntime.exe /accepteula /sdkcontainerclient /quiet
RUN msiexec.exe /i "C:\TEMP\MicrosoftServiceFabricSDK.msi" /qn
Here is an example Dockerfile
Original Answer:
Turns out, this is no small feat. This script requires the Windows Service Fabric SDK to be installed. The recommended (and only supported) way to install the Service Fabric SDK is through WebPI, which is available here. It's possible to Dockerize the WebPI, however there's a problem. The WebPI installer consists of three components: the Service Fabric SDK, the Service Fabric Runtime, and the Service Fabric Tools for Visual Studio. The WebPI installer will install all of them. Unfortunately, the Service Fabric Runtime (as of this writing) cannot run under a Docker container since it wants to install a kernel-level driver. This bug is being tracked here, but has been open for nearly a year with no real progress. This means that one could not run a Service Fabric cluster within a Docker container, but surely the SDK and tools should still be able to run, correct? Unfortunately, there is no way to tell the installer to install only the SDK and tools, but not the runtime.
So, perhaps there is an unsupported way to install just the SDK and tools. Turns out, the release notes have references to various MSIs for the individual components.
SDK Available Here
Tools for Visual Studio Available Here
It's fairly trivial to run msiexec.exe from a Dockerfile, which means we should be able to install the SDK that way. Nope. Unfortunately, msiexec will fail with a generic 1603 code. If you run msiexec in verbose mode and output a log file, you can dig into this error and see the root cause:
MSI (s) (78:34) [19:07:56:049]: Product: Microsoft Azure Service Fabric SDK -- This product requires Service Fabric Runtime to be installed.
This product requires Service Fabric Runtime to be installed. Action ended 19:07:56: LaunchConditions. Return value 3.
So, we're once again shot down. I've found no other packaged version of the Service Fabric SDK (Chocolatey has one, but it just launches the WebPI installer), which leaves one final solution: install the SDK manually without the help of an installer. This requires reverse engineering exactly what the installer does and integrating it into our Dockerfile.
The SDK installer does a few things. It copies a bunch of files into c:\program files\microsoft sdks\service fabric\ and a bunch of files into c:\program files\microsoft service fabric\. It also GACs a bunch of assemblies (such as System.Fabric.dll), adds some entries to the registry, and installs a PowerShell module. We need to do all of those things for the script to run.
What I ended up doing is mounting the key folders as Docker volumes so I can use them within my container:
docker run `
-v 'c:\program files\microsoft sdks\service fabric\tools\psmodule\servicefabricsdk:C:\ServiceFabricModules' `
-v 'c:\program files\microsoft service fabric\bin\fabric\fabric.code:C:\ServiceFabricCode' `
-v 'c:\program files\microsoft service fabric\bin\servicefabric:C:\ServiceFabricBin' `
-e ModuleFolderPath=C:\ServiceFabricModules `
-it build-agent powershell
First, I need to share out the c:\program files\microsoft sdks\service fabric\tools\psmodule\servicefabricsdk directory, which contains the PowerShell module that the Deploy-FabricApplication.ps1 script loads:
Import-Module "$ModuleFolderPath\ServiceFabricSDK.psm1"
Next, we need to share out c:\program files\microsoft service fabric\bin\fabric\fabric.code because it has a bunch of DLLs that the installer GACs.
Lastly, we share out c:\program files\microsoft service fabric\bin\servicefabric because that directory contains the PowerShell module installed by the SDK.
When the container starts, we need to do the following:
First, register the module with PowerShell:
Copy-Item C:\ServiceFabricBin C:\windows\system32\WindowsPowerShell\v1.0\modules\ServiceFabric -Recurse
After you do this, Get-Module -ListAvailable will show the ServiceFabric module. However, no exports will be loaded because it's missing a bunch of DLLs. The installer puts those DLLs in the GAC, but the GAC is dumb so let's just put those DLLs in the same directory so the module finds them:
Copy-Item C:\ServiceFabricCode\System.Fabric*.dll C:\windows\system32\WindowsPowerShell\v1.0\modules\ServiceFabric -Recurse
After this, you should be able to run Get-Module -ListAvailable and see the ServiceFabric module fully loaded.
There's one final thing to do. The Deploy-FabricApplication.ps1 script imports the ServiceFabricSDK.psm1 module (see above). But what is $ModuleFolderPath? Well, the script by default looks in the registry for this value, which of course the installer sets for you. We don't want to muck with the registry for our Docker image, so let's just change the script to look at an environment variable instead:
$ModuleFolderPath = $ENV:ModuleFolderPath
Import-Module "$ModuleFolderPath\ServiceFabricSDK.psm1"
Now we can set that environment variable when we run our Docker container (or from our Dockerfile). Obviously, if you didn't want to modify the Deploy-FabricApplication.ps1 file, you could set this at HKLM:\SOFTWARE\Microsoft\Service Fabric SDK\FabricSDKPSModulePath as well. I'm fairly anti-registry so an environment variable (or just hard code if you really don't care) makes more sense to me.
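If you do prefer the registry route, a minimal PowerShell sketch would be (C:\ServiceFabricModules matches the volume mount above; I'm assuming the key does not already exist in the image):
New-Item -Path 'HKLM:\SOFTWARE\Microsoft\Service Fabric SDK' -Force | Out-Null
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Service Fabric SDK' -Name FabricSDKPSModulePath -Value 'C:\ServiceFabricModules'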
Also note you'll need to import your certificate (Which you can download from the Key Vault in the form of a PFX file) before the script will deploy:
Import-PfxCertificate -Exportable -CertStoreLocation Cert:\CurrentUser\My\ -FilePath C:\Certs\MyCert.pfx
I believe a more production-quality version of this would be to copy the required files into the image within your Dockerfile rather than mount them as volumes, so the image would be more self-contained, but that should be fairly straightforward. Also, I believe the DLLs that were GAC'ed are also available on NuGet, so it could be possible to download all those files through NuGet during the Docker build process.
Also, here's my full Dockerfile which I've successfully deployed an app to Service Fabric using:
# escape=`
FROM microsoft/dotnet-framework:4.7.1
SHELL ["cmd", "/S", "/C"]
# Install Visual Studio Build Tools
ADD https://aka.ms/vs/15/release/vs_buildtools.exe C:\SETUP\vs_buildtools.exe
RUN C:\SETUP\vs_buildtools.exe --quiet --wait --norestart --nocache `
--add Microsoft.VisualStudio.Workload.AzureBuildTools `
|| IF "%ERRORLEVEL%"=="3010" EXIT 0
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
# Our Deploy Certs
ADD ./Certs/ C:\Certs\
# Update Path (I forget if this was needed for something)
RUN SETX /M PATH $($Env:PATH + ';C:\ServiceFabricCode')
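Building the image is then the usual docker build (the build-agent tag matches the docker run example above):
docker build -t build-agent .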
I'm hoping this helps someone, but more so I'm hoping Microsoft fixes their installer to remove the runtime requirement.
The best way to install Azure Service Fabric is by creating a PowerShell file and calling it from the Dockerfile.
PowerShell file:
Start-Process "msiexec" -ArgumentList '/i', 'C:/app/WebPlatformInstaller_amd64_en-US.msi', '/passive', '/quiet', '/norestart', '/qn' -NoNewWindow -Wait;
& "C:\Program Files\Microsoft\Web Platform Installer\WebPICMD.exe" /Install /Products:MicrosoftAzure-ServiceFabric-CoreSDK /AcceptEULA
Dockerfile:
RUN powershell "& ""./InstallServiceFabric.ps1"""

How to resolve 'INS-30131 Initial setup required for the execution of installer validation failed' in Oracle installation?

This error occurred during installation of Oracle on Windows Server 2008.
Details:
Cause - Failed to access the temporary location.
Action - Ensure that the current user has required permissions to access the temporary location.
Additional Information:
 - PRVG-1901 : failed to setup CVU remote execution framework directory C:\Users\ADMINI~1\AppData\Local\Temp\2\CVU_12.2.0.1.0_Administrator\ on nodes "rgfindbd"
 - Cause: An operation requiring remote execution could not complete because the attempt to set up the Cluster Verification Utility remote execution framework failed on the indicated nodes at the indicated directory location because the CVU remote execution framework version did not match the CVU java verification framework version. The accompanying message provides detailed failure information.
 - Action: Ensure that the directory indicated exists or can be created and the user executing the checks has sufficient permission to overwrite the contents of this directory. Also review the accompanying error messages and respond to them.
Summary of the failed nodes rgfindbd
 - Version of exectask could not be retrieved from node "rgfindbd"
 - Cause: Cause Of Problem Not Available
 - Action: User Action Not Available
 - Version of exectask could not be retrieved from node "rgfindbd"
 - Cause: Cause Of Problem Not Available
 - Action: User Action Not Available
In the folder where your setup.exe is, run:
setup -ignorePrereq -J"-Doracle.install.db.validate.supportedOSCheck=false"
In an administrator cmd, go to your setup folder, then:
For a client installation:
setup -ignorePrereq -J"-Doracle.install.client.validate.clientSupportedOSCheck=false"
For a server installation:
setup -ignorePrereq -J"-Doracle.install.db.validate.supportedOSCheck=false"
I can suggest that you check whether RemoteExecService.exe is running from your temp location, for example C:\Users\\AppData\Local\Temp\oraremservice. If it is, kill the process and delete the oraremservice folder, then rerun your installation.
This worked in my case:
# chmod 777 -R /tmp
Run cmd as administrator, locate the setup folder, and use this:
setup -ignorePrereq -J"-Doracle.install.db.validate.supportedOSCheck=false"
Delete the oraremservicev2 folder in the C:\Users\{name}\AppData\Local\Temp location and continue the installation. It works fine.

Cannot locate jruby during logstash installation

I am learning about the Elastic Stack and I am having a problem installing Logstash on Windows 10 (Windows 10 Enterprise N, OS build 15063.674).
I installed ElasticSearch and Kibana and these are up and running.
I followed the steps on this page to install Logstash:
Step 1: Download and unzip Logstash
downloaded "logstash-5.6.3.zip" file and unzipped it to: "c:\program files\elastic\"
Step 2: Prepare a logstash.conf config file
As described here, I created a "logstash-simple.conf" file in the "c:\program files\elastic\logstash-5.6.3" folder.
Step 3: Run bin/logstash -f logstash.conf
At this point I am having the issue (I tried using both cmd and PowerShell with elevated privileges); the result is:
The system cannot find the path specified.
"could not find jruby in C:\Program Files\Elastic\logstash-5.6.3\vendor\jruby"
Of course, the "vendor" folder exists, and there is a "jruby.bat" file inside. I searched the web and found something about the JRUBY_BIN environment variable, but even after creating it (and an additional reboot) the issue is still there.
Can someone point me to the problem?
I have found this solution: https://discuss.elastic.co/t/logstash-does-not-start-says-could-not-find-jruby-in/113500.
You can also try moving the Logstash folder out of Program Files directly to C:\; it might help.
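For example, from an elevated PowerShell prompt (C:\logstash-5.6.3 is just an assumed target path; any path without spaces or special characters should do):
# Move Logstash out of Program Files and run it from the new, space-free path.
Move-Item 'C:\Program Files\Elastic\logstash-5.6.3' 'C:\logstash-5.6.3'
C:\logstash-5.6.3\bin\logstash.bat -f C:\logstash-5.6.3\logstash-simple.conf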
