Selenium WebDriver and Ruby: standard directory and file structure - ruby

I want to split my Selenium WebDriver Ruby test suites, test cases, and test methods into separate files so I can reuse code between them. Right now I have a separate Ruby file for every test suite, containing every test case and every method. This works, but it's not a maintainable way to handle a lot of test suites.
So I wanted to know the standard way to do this file separation: going from one complete file per suite to separate files for test cases and methods.
I found the following structure but don't understand how to use it with my requirements:
.
├── bin (not used)
├── data (not used)
├── doc (not used)
├── etc (I use it to store 3 different HOSTS files I overwrite depending on some parameters)
├── ext (not used)
├── lib (not used)
├── logs (keeps execution logs)
│  └── screenshots (keeps only failed test cases screenshots)
└── tests (test suites... with test data, test cases, and methods, in a single file per test suite)

I have found the answer I was looking for. The directory I was most troubled about was the "tests/" directory, where I keep all my tests, and the best way to share code between them is to put a module of methods in a "tests/support" or "tests/shared" directory.
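For example, a minimal sketch of what that could look like (the file, module, and element names here are hypothetical):

# tests/support/login_helpers.rb -- shared methods for all suites
module LoginHelpers
  def login(driver, user, password)
    driver.find_element(id: 'username').send_keys(user)
    driver.find_element(id: 'password').send_keys(password)
    driver.find_element(id: 'submit').click
  end
end

# tests/login_suite.rb -- one test suite using the shared module
require 'selenium-webdriver'
require 'test/unit'
require_relative 'support/login_helpers'

class LoginSuite < Test::Unit::TestCase
  include LoginHelpers

  def test_valid_login
    driver = Selenium::WebDriver.for :firefox
    driver.get 'https://example.com/login'
    login(driver, 'user', 'secret')
    assert driver.title.include?('Dashboard')
  ensure
    driver.quit
  end
end

Any suite that needs the shared behavior just requires the support file and includes the module.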

Related

Separate Test Suites & Seeders in Laravel/PHPUnit

Is it possible to run a set collection of test suites & seeders within Laravel? For example, I have one codebase for a project that is home to different clients. Each of these clients has different requirements, so the tests are going to differ, along with the seeders.
How can I package these tests so that I can run one set without running them all? I understand we can create test suites within PHPUnit, but all my seeders will run, which will create excess load on a server when I only want to test a specific set.
Thanks.
I've created separate 'main' seeder files:
ClientASeeder (Calls more seeders)
ClientBSeeder (Calls more seeders)
...
And separate test folders for each client, along with a generic test folder for shared tests.
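A minimal sketch of how the suites might then be declared in phpunit.xml so each client can run on its own (suite names and directories are hypothetical):

<testsuites>
    <testsuite name="ClientA">
        <directory>tests/ClientA</directory>
        <directory>tests/Shared</directory>
    </testsuite>
    <testsuite name="ClientB">
        <directory>tests/ClientB</directory>
        <directory>tests/Shared</directory>
    </testsuite>
</testsuites>

A single set can then be run with vendor/bin/phpunit --testsuite ClientA (or php artisan test --testsuite=ClientA), leaving the other clients' tests and seeders untouched.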

Monorepo: How to consume a package from another project?

I am trying to create my first monorepo in Go. The project structure was shown in a picture; the monoplay folder is the root.
The pb folder contains the generated gRPC code that I would like to consume in the srv_boo/main.go and srv_foo/main.go files.
The question is, how to consume the generated gRPC code from folder pb in the srv_boo/main.go and srv_foo/main.go files?
Is the folder structure correct?
Would like also to deploy the services individually.
Is maybe https://bazel.build/ the solution?
Having the entire repository as one Go module will help with this, i.e. only one go.mod file, in the monoplay root folder.
Then the services can reference the generated Go files using "github.com/*/monoplay/pb/*" imports.
This also centralizes dependency management for the entire repository, since there is only one go.mod file, if that is something you want.
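A minimal sketch, assuming the module is published under the hypothetical path github.com/you/monoplay:

// go.mod (at the monoplay root)
module github.com/you/monoplay

go 1.21

// srv_boo/main.go
package main

import (
    "fmt"

    // Blank import only to show the path resolving; real code would
    // use the generated clients/types from the pb package.
    _ "github.com/you/monoplay/pb"
)

func main() {
    fmt.Println("srv_boo up")
}

Each service can still be built and deployed individually, e.g. go build ./srv_boo and go build ./srv_foo from the repository root.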
Other alternatives:
Using "go mod edit":
https://go.dev/ref/mod#go-mod-edit
Or, as DazWilkin suggests, use "go_package" in proto files together with "go-grpc_opt" and "go_opt".
I use the single module approach and recommend it.
If the repository will contain a lot of code and building everything (including container images) is cumbersome and takes too long, then look into Bazel.

Provisioning with Ansible and Vagrant using multiple Vagrantfiles

I'm creating a monitoring environment that has monitoring_servers and monitored_boxes, and of course an Ansible controller. For testing roles etc. I've created a new "project" that worked well in terms of organizing the development. But now, when most of the stuff is (hopefully) working as it should, I would love to make the whole infrastructure easier to manage, if possible, from one file.
I've been googling this every now and then and, IIRC, I still haven't found a solution for having one master Vagrantfile that could then call other Vagrantfiles to kickstart the needed boxes.
Right now there is one Vagrantfile for creating the Ansible controller, 3 Ubuntu nodes, and 3 Windows nodes, and another to spin up three Ubuntu VMs for Grafana, Loki, and Prometheus. Then there would be a need for an Alertmanager, maybe for InfluxDB, etc., and keeping all those machines in one Vagrantfile hasn't worked very well for me. I would like to see a situation where there is:
a master Vagrantfile to create the Ansible controller, and from that file I could call files like "monitoring_stack", "monitored_boxes", "common_purpose_boxes", and so on:
Master
├── Vagrantfile.ansible.controller
└── monitoring
    ├── monitored_boxes
    │   └── Vagrantfile.monitored
    ├── monitoring_servers
    │   └── Vagrantfile.monitoring
    └── whatever_boxes
        └── Vagrantfile.whatever
Something like that would be an ideal setup to manage.
If that's not doable or easy to get to, are there other methods you normally use to tackle similar setups?
Maybe I should just forget Vagrant altogether and go all in on Pulumi or Terraform. Then again, that probably wouldn't solve this issue either, as I want to provide a playground where other team members can also test and play with new toys.
Thanks, everyone for any tips :)
Hopefully I'm not too late.
Vagrant supports multi-machine setups within the same Vagrantfile:
https://www.vagrantup.com/docs/multi-machine
I'm currently working on a dual-node setup with Ansible provisioning (WIP):
https://gitlab.com/UnderGrounder96/gitlab_jenkins
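A minimal multi-machine sketch of what the docs describe (box, hostnames, and machine names here are hypothetical):

# Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.define "controller" do |ctl|
    ctl.vm.box = "ubuntu/focal64"
    ctl.vm.hostname = "ansible-controller"
  end

  config.vm.define "grafana" do |mon|
    mon.vm.box = "ubuntu/focal64"
    mon.vm.hostname = "grafana"
  end

  # Monitored boxes, Prometheus, Loki, etc. follow the same pattern;
  # `vagrant up controller grafana` brings up only the named machines.
end

Since a Vagrantfile is plain Ruby, the machine definitions can also be split across files and pulled into one master Vagrantfile (e.g. with eval File.read(...)), though multi-machine in a single file is the documented route.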

Maintaining staging+prod environments from a single repo, 2 remotes, with the Revel buildpack on Heroku

Revel models are defined under the models package, so in order to import them one must use the full repo path relative to the $GOPATH/src folder, which in this case is PROJECTNAME/app/models, thus resulting in
import "PROJECTNAME/app/models"
So far, so good, if you're using your app name as the folder name on your local dev machine and have dev+prod environments only.
Heroku's docs recommend using multiple apps for different environments (e.g. for staging), with the same repository and distinct origins.
This is where the problem starts: since the staging environment resides under an alternative app name (let's say PROJECTNAME_STAGING), its sources are stored under PROJECTNAME_STAGING, but the actual code still imports PROJECTNAME/app/models instead of PROJECTNAME_STAGING/app/models, so compilation fails, etc.
Is there any possibility to manage multiple environments with a single local repo and multiple origins with Revel's Heroku buildpack? Or is a feature needed in the buildpack that is yet to be implemented?
In addition, there is a possible issue with the .godir file, which is required to be versioned and to contain the git path to the app: what about the multi-environment duality regarding this file?
The solution was simple enough:
The buildpack uses the string in .godir both as the argument for revel run and as the directory name under GOPATH/src. My .godir file had the git.heroku.com/<APPNAME>.git format; instead, I just used the APPNAME format.
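Concretely (APPNAME stands in for the real app name):

# .godir, before (build fails)
git.heroku.com/APPNAME.git

# .godir, after (build works)
APPNAME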

Several apps (i.e. WAR files) on the same Beanstalk instance

In order to be conservative on resources (and costs), I would like to put more than one WAR file (representing different apps) on the same EC2 Beanstalk instance.
I would then like app A to map to myapp.elasticbeanstalk.com/applA using warA, and app B to map to myapp.elasticbeanstalk.com/applB using warB.
But the console only allows you to upload one single WAR per instance.
1) So I understand that it's not possible with the current interface. Am I right?
2) Still, is it possible to achieve this via "non-standard" ways: uploading warA via the interface and copying/updating warB to /tomcat6/webapps via SSH, FTP, etc.?
3) With (2), my concern is that B will be lost each time the health checker decides to terminate the instance (after successive failed checks, for example) and start a new one. I would then have to make warB part of the customized AMI used by applA and create a new version of this AMI each time I update warB.
Please help me.
Regards,
didier
You are correct! You cannot (yet) have multiple WARs in Beanstalk.
The Amazon forum answer is here:
https://forums.aws.amazon.com/thread.jspa?messageID=219284
There is a workaround, though it uses plain EC2 rather than Beanstalk:
https://forums.aws.amazon.com/thread.jspa?messageID=229121
http://blog.jetztgrad.net/2011/02/how-to-customize-an-amazon-elastic-beanstalk-instance/
Shameless plug: while not directly related, I've made a plugin for Maven 2 to automate Beanstalk deployments, and Elastic MapReduce as well. Check out http://beanstalker.ingenieux.com.br/
This is an old question, but it took me some time to find a more up-to-date answer, so I thought I'd share my findings.
Multiple WAR deployment is now supported natively by Elastic Beanstalk (and has been for some time).
Simply create a new zip file with each of your WAR files inside it. If you want one of them to be available at the root context, name it ROOT.war, like you would if you were deploying to Tomcat manually.
Your zip file structure should look like so:
MyApplication.zip
├── .ebextensions
├── foo.war
├── bar.war
└── ROOT.war
Full details can be found in the Elastic Beanstalk documentation.
The .ebextensions folder is optional and can contain configuration files that customize the resources deployed to your environment. See Elastic Beanstalk Environment Configuration for information on using configuration files.
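For example, from a directory containing the WARs and the optional .ebextensions folder, the bundle from the example above could be built with:

zip -r MyApplication.zip .ebextensions foo.war bar.war ROOT.war

The resulting zip is then uploaded as a single application version, exactly as a lone WAR would be.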
There is another hack that allows you to boot an arbitrary JAR by installing Java and using a Node.js boot script:
http://docs.ingenieux.com.br/project/beanstalker/using-arbitrary-platforms.html
Hope it helps
