This is my first time with Sorry Cypress. I'm trying to get it up and running, but I can't connect it to DocumentDB.
I would also like to hear about others' experiences with Sorry Cypress.
I think I need to introduce a .pem file into docker-compose.
I don't have experience with Sorry Cypress, but my assumptions for connecting to Amazon DocumentDB are:
Yes, you need the .pem file: https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
Specify the MONGODB_URI, something like this: mongodb://sample-cluster.node.us-east-1.docdb.amazonaws.com:27017/?tls=true&tlsCAFile=rds-combined-ca-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false - see https://docs.sorry-cypress.dev/configuration/mongodb-configuration for the other variables.
How do you introduce the .pem file? I see you're using docker-compose, which means you have several options for getting the file into the containers; pick whichever is convenient for you - for example, mounting it as a volume, as in the sketch below.
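To make that concrete, here is a minimal sketch of the volume-mount option, assuming the stock Sorry Cypress docker-compose file where the director service is the one that talks to MongoDB (the service name, credentials, cluster host, and paths are placeholders to adjust):

# Download the CA bundle next to docker-compose.yml
curl -o rds-combined-ca-bundle.pem https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem

# Write an override file; docker-compose picks up
# docker-compose.override.yml automatically on the next up
cat > docker-compose.override.yml <<'EOF'
version: '3'
services:
  director:
    environment:
      MONGODB_URI: "mongodb://user:password@sample-cluster.node.us-east-1.docdb.amazonaws.com:27017/?tls=true&tlsCAFile=/certs/rds-combined-ca-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false"
    volumes:
      - ./rds-combined-ca-bundle.pem:/certs/rds-combined-ca-bundle.pem:ro
EOF

docker-compose up -d

Note that tlsCAFile points at the path inside the container, not on the host.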
Good luck!
Does anyone have any information on how to run AWS Lambda scripts from Rundeck? I was looking into doing this to have a central place where certain users can log into Rundeck and run the scripts that are relevant to them, as not everyone has AWS access.
I found this: https://www.slideshare.net/tetutaro/lambda-and-rundeck-58884982
But I was hoping there might be something more official somewhere and in English :)
A good way to integrate with Lambda is to install the AWS CLI on the Rundeck server and call functions using a script step or command step in your workflow. Take a look at this.
Also, and similar to this answer, another good way to interact with Lambda is to access it via its API (you have two options: the HTTP Workflow Step plugin or a script step in your workflow).
Finally, this may be a good opportunity to develop a custom plugin focused on AWS Lambda.
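As a sketch of the first approach, the body of a Rundeck script step could be as simple as this (function name, payload, region, and output file are placeholders; the --cli-binary-format flag applies to AWS CLI v2 and should be dropped on v1):

#!/bin/sh
# Invoke a Lambda function from a Rundeck script step via the AWS CLI.
# Assumes credentials are already available on the Rundeck server
# (instance profile, environment variables, or ~/.aws/credentials).
aws lambda invoke \
  --function-name my-function \
  --payload '{"key": "value"}' \
  --cli-binary-format raw-in-base64-out \
  --region us-east-1 \
  response.json

# Echo the function's response into the Rundeck job log
cat response.json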
I know that https://forge.laravel.com/auth/register is available for $12/month*, but I'd like to understand how to accomplish the same thing myself.
What I assume is possible (and what I'm looking for): I create a server that has only Ubuntu 18.04.3 installed and nothing else, and I upload a script that installs all the appropriate software and sets up MySQL with the correct passwords, etc. (without manual intervention).
I've tried Laradock and had tons of problems with Docker and don't want to do that anymore.
I see that https://cloud.digitalocean.com/droplets/new lets me create a LEMP droplet (Ubuntu, Nginx, MySQL, PHP-FPM) with one click. But it lacks Redis, and its versions are outdated (e.g. PHP 7.2).
I've heard people mention Chef (maybe this?), but that seems to be more complicated than what I'm imagining.
Unfortunately I'm not even sure how to search for what I'm trying to do (or how to tag this question); is this called "server provisioning"? I've been searching phrases like "automatic install script redis mysql server for laravel".
Thanks in advance for pointing me in the right direction.
* I also just found https://getcleaver.com/ and https://runcloud.io/server-management, which each look like Forge + Envoyer (and RunCloud offers a free plan).
It is called server provisioning, and Chef would be a good fit for this; check out Ansible too. Another thing you could do is set up the server yourself once, create an image from that server, and then base your new servers on that image; that way you'll have all your services installed from the start.
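As a rough sketch of the "upload a script" idea from the question, an unattended setup script for Ubuntu 18.04 might look like the following (package names, the PHP version, and the database password are assumptions; the ondrej/php PPA is one common way to get a PHP newer than the stock 7.2):

#!/usr/bin/env bash
# Unattended LEMP + Redis provisioning sketch for a fresh Ubuntu 18.04 box.
set -euo pipefail
export DEBIAN_FRONTEND=noninteractive

# Add a PPA that carries newer PHP builds than Ubuntu 18.04's stock 7.2
apt-get update
apt-get install -y software-properties-common
add-apt-repository -y ppa:ondrej/php
apt-get update

apt-get install -y nginx redis-server mysql-server \
  php7.4-fpm php7.4-mysql php7.4-mbstring php7.4-xml php7.4-curl php7.4-zip

# Create an application database and user without manual intervention
# (the password is a placeholder -- inject a real secret instead)
mysql -e "CREATE DATABASE IF NOT EXISTS laravel;"
mysql -e "CREATE USER IF NOT EXISTS 'laravel'@'localhost' IDENTIFIED BY 'changeme';"
mysql -e "GRANT ALL PRIVILEGES ON laravel.* TO 'laravel'@'localhost'; FLUSH PRIVILEGES;"

systemctl enable --now nginx redis-server mysql php7.4-fpm

Run it once as root on a fresh server, or bake the result into an image as suggested above.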
This sounds like a job for something like Puppet (or Chef/Ansible); however, Laravel Envoy may be another tool to look at, if you haven't already, for the second part of your problem.
I highly recommend Heroku (or similar service), as this is all done out of the box, and has a ton of other great features that make developing a pipeline a breeze.
I have several projects in gcloud; call them, e.g., "staging-project" and "production-project". I have created an image, call it "staging-image-1", in "staging-project", which I use for new instances. And I would like to use this image in "production-project" as well.
As far as I know, it is possible to do this using the gcloud command-line tool - you log in with your private Google account, which has access to both projects, and do:
gcloud config set project production-project
gcloud compute instances create production-instance-from-staging-image --image staging-image-1 --image-project staging-project
This works fine for me, but I have a few colleagues who don't like the command line so much. So is there a way to achieve this in the gcloud web console? When I list images in production-project, I simply do not see staging-image-1, and I have found no way to select it. :(
--image-project is not currently supported in the developers console.
An issue has been filed to fix that. Thanks.
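In the meantime, a possible workaround (a sketch, assuming your account can access both projects) is to copy the image into production-project so that it shows up in that project's console image list:

gcloud compute images create staging-image-1 \
  --project production-project \
  --source-image staging-image-1 \
  --source-image-project staging-project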
I would just like to ask if anybody here has run TurboGears2 on an Amazon EC2 instance. I've been looking for a way to do it, but so far searching the web hasn't given me anything I could use as an example. I did see one here:
http://codersbuffet.blogspot.com/2010/05/announcing-turbpgears-ec2-images.html
Though I think the person used an earlier version of TG in his post.
I thought it would be as simple as changing the host parameter in development.ini, but that did not work. I've also tried connecting to the instance with the -L option for ssh, but that did not work either (I used this approach for web2py way back, and it worked).
I'm wondering if I need to configure some file somewhere in the TG2 application. I've also tried searching the TG2 documentation. Either I'm not using the right keywords, or I'm just not getting the right results.
Thanks in advance for any help!
DM
By itself EC2 doesn't provide a platform; you can freely choose a deployment environment from mod_wsgi, Circus, gunicorn, or whatever you prefer. It's not strictly a TurboGears problem; a TG2 app can be deployed like any other WSGI application.
There are step-by-step tutorials for deploying on Apache+mod_wsgi and Circus+Chaussette in the TurboGears documentation; you can find them here: http://turbogears.readthedocs.org/en/latest/cookbook/deploy/index.html
Avoid deploying on gearbox+wsgiref, because it is not meant for production use; if you want to use gearbox, I suggest you give waitress a try.
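For a quick start on EC2, a gunicorn-based deployment might look like the sketch below (the ini filename, port, and worker count are placeholders). Also remember to open the chosen port in the instance's security group - a closed security group is a common reason why changing the host in development.ini appears to do nothing:

pip install gunicorn
# Serve the TG2 app from its PasteDeploy ini file, binding to all
# interfaces so the instance's public address can reach it
gunicorn --paste production.ini -b 0.0.0.0:8080 -w 4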
Well, my head is spinning a bit here. I started with what I thought would be a simple task: taking regular db dumps on Heroku and pushing them to a personal S3 account for backup.
I am not sure of the best approach to this. Accessing S3 from Java is crystal clear; getting the db dump from Heroku is clear as mud right now...
Disclaimer: I don't know Ruby, and I don't really want to learn Ruby if I don't have to. I really want to use Java (that is why I chose Play), and I want to have it hosted (that is why I chose Heroku). :-)
So, I could use the Heroku Scheduler, but I don't understand what scripts are being executed here. Is it all the scripts in /bin? What kind of scripts are these; are they Ruby scripts? How do I add them as 'tasks' when they aren't rake tasks?
Can I use pgbackups via URL somehow? It looks like the rake examples do pg_dump instead, write to a tmp file, and then move it around from there. I'm pretty unclear on how to access the Heroku database from a script; the examples I have seen so far are in rake, so any insight there would be helpful...
Or, coming at it from inside my Java app, what is the status of the Heroku Java API? Is there a way to get to the Heroku runtime from my Java code, or to somehow use heroku.jar?
It would be great to get some overall guidance and best practices in this area - thanks!!!
From the Google group I found this tidbit:
http://groups.google.com/group/heroku/browse_thread/thread/7fe984c3d2d01f21/9474f31138636332?lnk=gst&q=scheduler+#9474f31138636332
"Sorry for the delayed response. We updated the docs to mention running Procfile entries via heroku run:
http://devcenter.heroku.com/articles/oneoff-admin-ps
Anything that works via heroku run works via Heroku Scheduler. Just put the name of the process type as the 'task' in Scheduler. No special syntax required. And you can even pass it arguments."
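Putting that together, the scheduled task can just be a small shell script in the app's repo, named as the "task" in Scheduler. A sketch, assuming the legacy pgbackups add-on from this era (app name, bucket, and file names are placeholders, and any S3 client would do in place of the AWS CLI):

#!/bin/sh
# backup.sh - capture a fresh Heroku pgbackup and copy it to S3.
heroku pgbackups:capture --expire --app my-app

# pgbackups:url prints a temporary download URL for the latest dump
curl -o latest.dump "$(heroku pgbackups:url --app my-app)"

# Push the dump to a personal S3 bucket
aws s3 cp latest.dump s3://my-backup-bucket/latest.dump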
From this and James Ward's last example above, I am considering this answered.