I have a boot disk backup that I want to use to migrate to the cloud. I need to install some MSI packages. I have figured out a way to install the MSI packages using srvany.exe, but I need to create a srvany service in the boot disk using registry entries.
Can I create a service from the registry?
Is there any other way to trigger an installer at system startup using only the backed-up boot disk?
Note - I have tried using the Run/RunOnce option and triggering it via AutoAdminLogon, but that works at user logon, and I need a solution that works at system startup.
I finally got the answers to the questions I asked above.
Can I create a service from the registry? - No, we cannot. Creating a service is not just a matter of adding the service's entry to the registry; each service also has entries in the Service Control Manager and in a few other places. Hence we cannot create a service just by changing registry values.
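For context, on a running system the supported interface for creating a service is the Service Control Manager (for example via sc.exe); that cannot be pointed at an offline boot-disk image, which is why the registry-only approach was being considered in the first place. A minimal sketch, where the service name and path are placeholders:

    rem On a live system, register srvany as a service through the SCM (name/path are illustrative).
    rem Note: sc.exe requires the space after binPath= and start=.
    sc create SrvAnyWrapper binPath= "C:\tools\srvany.exe" start= auto
    sc description SrvAnyWrapper "srvany wrapper used to launch the MSI installer"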
Is there any other way to trigger an installer at system startup using only the backed-up boot disk? - Yes, we can write startup scripts to trigger the operations at system startup. Please refer to the Stack Overflow answer.
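For example, a machine startup script registered under Computer Configuration > Windows Settings > Scripts (Startup/Shutdown) in the Group Policy editor runs as SYSTEM at boot, before any user logs on. A minimal sketch, assuming the package and a marker file live under C:\install (all paths are placeholders):

    rem startup-install.cmd - run by the Startup scripts policy as SYSTEM at boot.
    rem Skip the install if it already succeeded on a previous boot.
    if exist "C:\install\done.flag" goto :eof
    msiexec /i "C:\install\package.msi" /qn /norestart /l*v "C:\install\package.log"
    if %errorlevel% equ 0 echo installed> "C:\install\done.flag"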
I am developing a Windows service that uses a config.json file to store its configuration. When I was developing it as a regular application rather than a service, I stored the data in %APPDATA%\companyname\product.
Now I am switching to a real Windows service. When I run it as a service, the path points to C:\Windows\system32\config\systemprofile\AppData\Roaming\company\product, but I can't find my file in there. I checked with Explorer and with a command prompt (as administrator). How can I access my configuration?
Is that the best place to put my configuration file?
I ended up putting my configuration in C:\ProgramData.
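For what it's worth, the file looks "missing" because a service running as LocalSystem has its own profile, so %APPDATA% does not point at your interactive user's folder; C:\ProgramData (the %ProgramData% environment variable) is the same for every account, which is why it works here. A quick way to see the difference:

    rem Run once in your own command prompt and once from the service's context:
    echo %APPDATA%
    echo %ProgramData%
    rem Interactively, %APPDATA% is your own AppData\Roaming folder; under LocalSystem it is
    rem C:\Windows\system32\config\systemprofile\AppData\Roaming.
    rem %ProgramData% resolves to C:\ProgramData for both.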
I need to configure service accounts for connecting to some of the services, and for that we are required to configure the details in a template file.
So basically, I want to configure the service account at run time.
We are using Oracle Service Bus 11g.
Since I've never worked on service accounts before, any suggestions will be helpful.
I found that we can do this at run time with the fn-bea:lookupBasicCredentials XQuery function, but this is not what we want. We want the details generated dynamically through the template files.
I have a computer that is used for getting database information from the server in the same domain, and this computer is used by employees who don't have the server admin information.
When the computer restarts, I'd like it to automatically log in to Windows Server so that it can access the database files. Is it possible to write a script for this that runs on boot?
Thanks in advance
I solved this by adding the credentials to Credential Manager in Windows, along with disabling the Windows Server dashboard program. This makes Windows automatically log in to the server with the stored credentials on boot.
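A minimal sketch of storing the same credentials from the command line instead of through the Credential Manager UI (the server name and account are placeholders):

    rem Store credentials for the server so its shares can be reached without a prompt.
    cmdkey /add:SERVER01 /user:DOMAIN\dbaccess /pass:SecretPassword
    rem Verify the stored entries:
    cmdkey /list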
Since your question really isn't specific, I'd like to suggest two ways of accomplishing your goal.
Since you'd like to access database information, why not use some kind of database management software (like SSMS if you're using MSSQL) and set up proper permissions for the user/computer that needs to obtain information from that particular server/database?
If you need access to raw files (which doesn't make much sense for MSSQL access), why not set up proper permissions on the file or parent folder, giving the user that is logged in on the client PC the permissions needed to access the files of interest?
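For the second suggestion, a hedged sketch of granting read access on a folder of files with icacls (the path and group name are placeholders):

    rem Grant the group read access, inherited by subfolders (CI) and files (OI).
    icacls "D:\DatabaseFiles" /grant "DOMAIN\DB Readers:(OI)(CI)R"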
I have created some stored procedures on my dashDB instance on Bluemix, to manipulate data in tables in the same instance.
I can run these from Data Studio and they work as intended.
Next, I created a process in Workflow Scheduler, which I provisioned as a service in the same app where dashDB is also a service.
While creating the job step in the process, I noticed a message in the dialog window. I have attached a screen shot here:
http://i.stack.imgur.com/EI2b7.jpg
When I did try to run the process step from Workflow Scheduler, the process failed with a JDBC not found error.
I do realize that the Workflow agent I'm using is hosted on Bluemix, so I am puzzled as to how I can install the JDBC client there.
Should I be setting up an agent on a local machine outside of Bluemix, in a hybrid mode?
Currently there are two possibilities:
1. Open a ticket to ask for a dedicated cloud agent and then download the JDBC driver on the agent.
2. Download and install an agent on a VM or on-prem.
It looks like you have an incorrect value for "JDBC jar class path"; the correct value is /home/wauser/utils/.
I'm not sure why we are required to enter this, but I was able to get the connection to dashDB working with this change.
JDBC jar class path: /home/wauser/utils/
I'm trying to do some clustering testing, and I am setting up multiple RabbitMQ services on a single Windows machine. I am able to set the environment variables RABBITMQ_NODENAME, RABBITMQ_SERVICENAME, and RABBITMQ_NODE_PORT and then run rabbitmq-service install to have a new RabbitMQ service installed under a different name.
My question is regarding the configuration file. Based on what I read on the RabbitMQ site, the configuration file defaults to the %AppData%\RabbitMQ directory.
I'm just having trouble understanding how it should be set up so I can have three instances of the service running, each with its own configuration.
Do I run the installation under a different local or domain account so it gets placed under a different %AppData%\RabbitMQ directory, or can I add a directive to the service so it looks in a particular directory for that service's configuration file?
Also, how does RABBITMQ_BASE come into play? Is that only for data and log files, or does it also apply to the configuration file? I'm not sure whether, once I have the service set up with BASE defined as a specific path, I can place a new rabbitmq.config under the root of that path.
Please confirm and provide any additional assistance. Thank you in advance!
For now I'm testing on Windows, but I plan on converting to Linux once I have this all working correctly and understood. Unfortunately, I've inherited the current environment and it's already installed and running on Windows servers. They just wanted me to set up clustering for it, so I'm trying to simulate the cluster on my workstation.
Never mind, I found out what I needed: the environment variable RABBITMQ_CONFIG_FILE can be used to override the location of the default config file.
http://www.rabbitmq.com/relocate.html
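For reference, a sketch of what installing a second service instance can look like on Windows, combining the variables above (the node name, service name, port, and paths are placeholders; note that RABBITMQ_CONFIG_FILE is traditionally given without the .config extension):

    rem Each instance gets its own node name, service name, listener port, and base directory.
    set RABBITMQ_NODENAME=rabbit2@%COMPUTERNAME%
    set RABBITMQ_SERVICENAME=RabbitMQ2
    set RABBITMQ_NODE_PORT=5673
    set RABBITMQ_BASE=C:\RabbitMQData\node2
    rem Point this instance at its own config file.
    set RABBITMQ_CONFIG_FILE=C:\RabbitMQData\node2\rabbitmq
    rem Register and start the additional Windows service.
    rabbitmq-service.bat install
    rabbitmq-service.bat start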
You can run multiple RabbitMQ instances on one machine without clustering. You just need to change the ports and the node name in the rabbitmq-defaults, rabbitmq-env, and config files. If you want them as services, you can just create them from the already configured instances.
HERE is a detailed guide on how to do that. It's pretty easy and straightforward.
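As an illustration of the per-instance config file mentioned above, the AMQP listener port can also be set in the classic Erlang-term rabbitmq.config (the port number is a placeholder):

    %% rabbitmq.config for the second instance: listen on a non-default AMQP port.
    [
      {rabbit, [{tcp_listeners, [5673]}]}
    ].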