I am able to move a checkpoint from the local OR to the shared OR, but when the code is executed it fails to find the checkpoint located in the shared OR. Is there any way I can make this work? Please help.
*OR=Object Repository
Using HP UFT 11.53
NOTE: My checkpoint code is NOT in Action1 but in a separate .vbs file which I call from Action1. Also, I am NOT using the local OR at all, only the shared OR.
Have you associated this separate .vbs script with the test? If you're unaware of how to do this, I suggest you check this link:
http://www.automationrepository.com/2011/09/associate-function-library-to-qtp-script/
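If you would rather set this up programmatically, here is a rough sketch using the UFT Automation Object Model (all paths are placeholders; you can achieve the same thing through the test's Settings > Resources dialog):

    ' Rough AOM sketch -- every path below is a placeholder
    Dim qtApp
    Set qtApp = CreateObject("QuickTest.Application")
    qtApp.Launch
    qtApp.Visible = True
    qtApp.Open "C:\UFT\Tests\MyTest"
    ' Associate the shared OR with the action that uses the checkpoint
    qtApp.Test.Actions("Action1").ObjectRepositories.Add "C:\UFT\Repositories\Shared.tsr"
    ' Associate the external .vbs function library with the test
    qtApp.Test.Settings.Resources.Libraries.Add "C:\UFT\Libraries\Checkpoints.vbs"
    qtApp.Test.Save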
Is there a way to configure the names of the files exported from Logging?
Currently the exported files include colons. These are invalid characters in a path element in Hadoop, so PySpark, for instance, cannot read these files. Obviously the easy solution is to rename the files, but that interferes with syncing.
Is there a way to configure the names or change them so they don't include colons? Any other solutions are appreciated. Thanks!
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
At this time, there is no way to change the naming convention when exporting log files, as this process is automated on the backend.
If you would like to request this feature in GCP, I would suggest filing a report in the PIT (Public Issue Tracker). That page allows you to report bugs and request new features to be implemented within GCP.
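If you do end up renaming the exported files as a workaround, a small sketch along these lines (the export directory is a placeholder) swaps the colons for underscores before the files are handed to Hadoop/PySpark:

    # Replace colons (invalid in Hadoop path elements) with underscores
    # in the exported log file names; EXPORT_DIR is a placeholder.
    import os
    EXPORT_DIR = "/data/exported-logs"
    for name in os.listdir(EXPORT_DIR):
        if ":" in name:
            os.rename(os.path.join(EXPORT_DIR, name),
                      os.path.join(EXPORT_DIR, name.replace(":", "_")))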
Looking to set up a high-performance environment running Mongo 3.4 on Windows 2016 in Azure. I come from a SQL/Windows background and was wondering if there are any options with Mongo to spread out the I/O workload of mongod. It seems odd that there is only a dbPath option and that you cannot configure separate locations for the database(s), oplog and journal. Am I missing something?
Thanks for any assistance
This is indeed possible, using a couple of different techniques:
The oplog is stored in the local database, so you can keep it in a separate folder by using the storage.directoryPerDB config option.
The journal is stored in a subfolder of the data directory; you can make MongoDB save its journal files in a separate directory by preparing a symbolic link called journal in the data directory, pointing to your other folder.
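A rough sketch of what that could look like on Windows (drive letters and paths are placeholders, not a recommendation):

    # mongod.cfg -- placeholder paths
    storage:
      dbPath: D:\mongodata
      directoryPerDB: true   # each database, including "local" (which holds the oplog), gets its own subfolder

Then, before mongod first creates its journal folder (or after moving an existing one out of the way), create the link from an elevated command prompt with something like mklink /D D:\mongodata\journal E:\mongojournal (a directory junction via mklink /J works too and doesn't need elevation).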
I have two AWS EC2 LAMP servers and I want to replicate the data in one of the folders to the other. I know I could try EFS, but for some reason it is not a viable option at the moment. So, here is what I would like help with:
Server A and Server B have the same file structure, but the files inside don't match. So, I want a script on Server A to look in, for example, the /var/www/html/../file/ folder, compare it with /var/www/html/../file/ on Server B, and dump all new files from Server A to B.
Any help on how to write it?
Well, I used S3FS, which is a lot easier than breaking your head over a script. It readily copies the files from one server to the other.
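For anyone who still wants the script route, here is a minimal sketch (hostname, user, key and folder paths are placeholders, and it assumes the paramiko package is installed) that pushes files present on Server A but missing on Server B over SFTP:

    # Copy files that exist on server A but not on server B (top level only, never deletes).
    # Host, user, key and directories below are placeholders.
    import os
    import paramiko

    LOCAL_DIR = "/var/www/html/files"      # folder on server A
    REMOTE_DIR = "/var/www/html/files"     # matching folder on server B
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect("server-b.example.com", username="ec2-user", key_filename="/home/ec2-user/.ssh/id_rsa")
    sftp = ssh.open_sftp()

    existing = set(sftp.listdir(REMOTE_DIR))
    for name in os.listdir(LOCAL_DIR):
        path = os.path.join(LOCAL_DIR, name)
        if os.path.isfile(path) and name not in existing:
            sftp.put(path, REMOTE_DIR + "/" + name)

    sftp.close()
    ssh.close()

Run it from cron on Server A if you need the sync to happen continuously.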
I'm creating an init.d script that will run a couple of tasks when the instance starts up.
it will create a new volume with our code repository and mount it if it doesn't exist already.
it will tag the instance
Completing the tasks above is crucial for our site (i.e. without the code repository mounted, the site won't work). How can I make sure that the server doesn't end up publicly visible before they're done? Should I start my init.d script by de-registering the instance from the ELB (I'm not even sure it will be registered at that point) and then register it again once all the tasks have finished successfully?
What is the best practice?
Thanks!
You should have a health check on your ELB, so your server shouldn't be put into service unless it reports as healthy. And it shouldn't report healthy if the boot script errors out.
(Also, you should look into using cloud-init. That way you can change the boot script without making a new AMI.)
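One common way to make that concrete (the paths and health-check target here are illustrative, not something AWS prescribes) is to point the ELB health check at a page that the boot script only creates once everything has succeeded:

    # End of the init.d / cloud-init boot script -- illustrative paths
    set -e                                    # abort on the first failure
    /opt/bootstrap/attach_and_mount_repo.sh   # create/attach the code volume and mount it
    /opt/bootstrap/tag_instance.sh            # tag the instance
    echo OK > /var/www/html/health.html       # ELB health check target, e.g. HTTP:80/health.html

If the script dies before the last line, the page never appears, the health check keeps failing, and the ELB never routes traffic to the instance.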
I suggest you use CloudFormation instead. You can bring up a full stack of your system by representing it in a JSON format template.
For example, you can create an Auto Scaling group whose instances carry unique tags and have an additional volume attached (which presumably holds your code).
Here's a sample JSON template attaching an EBS volume to an instance:
https://s3.amazonaws.com/cloudformation-templates-us-east-1/EC2WithEBSSample.template
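In the spirit of that sample, the volume-plus-attachment part of such a template boils down to something like this (abridged; it assumes an EC2 instance resource named WebServer declared elsewhere in the same template):

    {
      "Resources": {
        "CodeVolume": {
          "Type": "AWS::EC2::Volume",
          "Properties": {
            "Size": "20",
            "AvailabilityZone": { "Fn::GetAtt": [ "WebServer", "AvailabilityZone" ] }
          }
        },
        "CodeVolumeAttachment": {
          "Type": "AWS::EC2::VolumeAttachment",
          "Properties": {
            "InstanceId": { "Ref": "WebServer" },
            "VolumeId": { "Ref": "CodeVolume" },
            "Device": "/dev/sdh"
          }
        }
      }
    }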
And here are many other JSON templates that you can use for guidance to deploy your specific stack and application:
http://aws.amazon.com/cloudformation/aws-cloudformation-templates/
Of course, you can accomplish the same thing using an init.d script or the rc.local file in your instance, but I believe CloudFormation is a cleaner solution because it works from the outside (not inside your instance).
You can also write your own script that brings up your stack from the outside, but why reinvent the wheel?
Hope this helps.
Currently I'm working on a continuous integration server solution using Hudson.
Now I'm looking for a build job that will be triggered every time a file appears in a specific directory.
I've found some plugins that allow Hudson to watch and poll files from a directory (File Found Trigger, FSTrigger and SCM File Trigger), but none of them let me get the filename and contents of the file that was found and use those values during the build (my idea is to pass these values to a shell script).
Do you know if this is possible via any other Hudson plugin? Or maybe I'm missing something.
Thanks,
Davi
Two valid solutions:
As suggested by Christopher, read the values from the file via shell/batch commands at the beginning of your build script; a sketch follows below. (The downside is that Hudson will not be aware of those values in any way.)
Use the Envfile Plugin to read the content of the file and interpret it as a set of key-value pairs.
Note that if the File Found Trigger "eats" the flag file, you may need to create two files: one to hold the key-value pairs and another to serve as a flag for the File Found Trigger.
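For the first option, the "Execute shell" build step can be as small as the sketch below (the watched directory and the downstream script are placeholders); it grabs the newest file from the watched folder and hands its name and contents to your own script:

    # Hudson "Execute shell" build step -- watched directory and script are placeholders
    WATCH_DIR=/var/hudson/incoming
    TRIGGER_FILE=$(ls -t "$WATCH_DIR" | head -n 1)   # newest file in the folder
    FILE_CONTENT=$(cat "$WATCH_DIR/$TRIGGER_FILE")
    ./build.sh "$TRIGGER_FILE" "$FILE_CONTENT"       # pass both values to your shell script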