I am using Heroku and New Relic and am trying to get more detailed information on the server's resources: CPU usage, RAM, etc.
New Relic has a section "Get started with Server Monitoring", but the instructions to set it up require working at the command line, running commands like apt-get install newrelic-sysmond.
How can I set this up with Heroku?
Thanks!
EDIT
Here is the screen I am talking about.
http://i.imgur.com/8XMZOLr.png
The New Relic Linux/Windows Server Monitor agents cannot be used on Heroku.
You can get some memory info with this:
https://devcenter.heroku.com/articles/log-runtime-metrics
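For example (your-app is a placeholder for your app name):
heroku labs:enable log-runtime-metrics -a your-app
heroku restart -a your-app
After the restart, heroku logs --tail includes dyno metrics lines roughly like:
source=web.1 sample#load_avg_1m=0.01 sample#memory_total=21.00MB sample#memory_rss=20.99MB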
Edit: Oh yeah, we also have an Instances tab:
http://blog.newrelic.com/2013/03/07/new-relics-instances-tab/
You shouldn't need to install NR monitoring via apt-get if you are using Heroku.
Are you saying that the NR monitoring isn't working at all, or are you just trying to get it to provide more information?
After installing the NR add-on, did you follow the configuration instructions for your language/environment? https://devcenter.heroku.com/articles/newrelic
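For reference, the CLI side of that setup is minimal (the free plan was called wayne at the time of writing; check the add-on's current plan names):
heroku addons:create newrelic:wayne
heroku config:get NEW_RELIC_LICENSE_KEY
The add-on sets NEW_RELIC_LICENSE_KEY automatically, and the language agent you add to the app reads it from the environment.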
I know that https://forge.laravel.com/auth/register is available for $12/month*, but I'd like to understand how to accomplish the same thing myself.
What I assume is possible (and what I'm looking for): I create a server that has only Ubuntu 18.04.3 installed and nothing else, and I upload a script that installs all the appropriate software and sets up MySQL with the correct passwords, etc (without manual intervention).
I've tried Laradock and had tons of problems with Docker and don't want to do that anymore.
I see that https://cloud.digitalocean.com/droplets/new lets me create a LEMP droplet (Ubuntu, Nginx, MySQL, PHP-FPM) with one click. But it lacks Redis, and its versions are outdated (e.g. PHP 7.2).
I've heard people mention Chef (maybe this?), but that seems to be more complicated than what I'm imagining.
Unfortunately I'm not even sure how to search for what I'm trying to do (or how to tag this question); is this called "server provisioning"? I've been searching phrases like "automatic install script redis mysql server for laravel".
Thanks in advance for pointing me in the right direction.
* I also just found https://getcleaver.com/ and https://runcloud.io/server-management, which each look like Forge + Envoyer (and RunCloud offers a free plan).
It is called server provisioning, and Chef would be a good fit for this; check out Ansible too. Another thing you could do is set up the server yourself once, create an image from that server, and then base your new servers on that image — that way you'll have all your services installed from the start.
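If you'd rather start with a plain script before reaching for Chef or Ansible, a minimal unattended sketch for a fresh Ubuntu 18.04 box could look like this (assumes it runs as root; DB_PASS is a placeholder you must set, and the PHP version via the ondrej PPA is an assumption to adapt):
#!/usr/bin/env bash
set -euo pipefail
export DEBIAN_FRONTEND=noninteractive   # suppress all apt prompts
apt-get update
apt-get install -y software-properties-common
add-apt-repository -y ppa:ondrej/php    # newer PHP than the 7.2 that Ubuntu 18.04 ships
apt-get update
apt-get install -y nginx mysql-server redis-server php7.4-fpm php7.4-mysql
# create the app database and user without manual intervention
mysql -e "CREATE DATABASE laravel; CREATE USER 'laravel'@'localhost' IDENTIFIED BY '${DB_PASS}'; GRANT ALL ON laravel.* TO 'laravel'@'localhost';"
systemctl enable --now nginx php7.4-fpm redis-server mysql
Chef and Ansible are essentially this idea made repeatable and idempotent, and the image approach bakes the result in once.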
This sounds like a job for something like Puppet (or Chef/Ansible); however, Laravel Envoy may be another tool to look at, if you haven't already, for the second part of your problem.
I highly recommend Heroku (or a similar service), as this is all done out of the box, and it has a ton of other great features that make building a pipeline a breeze.
I spent a week trying to set up Search Guard and OpenShift in a Docker container and am completely torn apart...
I am working on a project where I plan to have clients who can each be given access to only their own indices. X-Pack and Search Guard Enterprise work perfectly; unfortunately, until I have any clients I cannot pay yearly fees of several thousand dollars.
I tried to set up Search Guard, turn off enterprise mode, and then install openshift-elasticsearch-plugin.
If I install them both, then after much tuning I get an error that you cannot enable functionality in OpenShift that is already enabled by Search Guard.
When I install only openshift-elasticsearch-plugin and configure all the settings, it says "Failed authentication for null".
Here is the repository https://github.com/SvitlanaShepitsena/Lana
I have a small issue (somehow sleep does not work), so in order to start the cluster you need:
docker-compose up
docker ps
docker exec -it [container-id] /bin/bash
./sgadmin.sh
After 1 week of work I am desperate and beg for help :-).
The openshift-elasticsearch-plugin is designed to add specific features to the OpenShift logging stack. Among other things, it provides dynamic ACLs for users based on their OpenShift permissions. I would suggest containerizing an Elasticsearch image and adding the Search Guard plugins directly. Alternatively, versions of Elasticsearch later than the one the plugin is designed for (2.4.4) can use X-Pack, which provides similar security.
It's preinstalled (https://hub.docker.com/r/elastic/elasticsearch) and can be configured as described at https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
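For example, a single secured node can be started like this (the version tag is only an illustration; pick the one you need):
docker run -p 9200:9200 -e "discovery.type=single-node" -e "xpack.security.enabled=true" docker.elastic.co/elasticsearch/elasticsearch:6.4.0
With security enabled you can then create per-client users and roles that are restricted to specific indices.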
I am trying to do MSI web deployments with Chef. I have about 400 web servers with the same configuration. We will deploy in two slots of 200 servers each.
I will follow the steps below for a new release:
1) Increase the cookbook version.
2) Upload the cookbook to the Chef server.
3) Update the cookbook version in the role and run list.
The cookbook performs many steps, like installing 7 MSIs, updating IIS settings, updating the web.config file, and adding registry entries. Once deployment is done we need to notify the testing team so they can start testing. My question is: how can I ensure the deployment completed successfully on all machines? How can I find out if one MSI was not installed on one machine, or one web.config file was not updated properly?
My understanding is that chef-client runs every 30 minutes by default, so I have to wait up to 30 minutes for the deployment to complete. Is there another, push-based way (I can't use Push Jobs, since Chef removed Push Jobs support from Chef High Availability servers), like triggering chef-client via knife from the workstation?
It would be great if anyone could share their experience using Chef in large-scale Windows deployments.
Thanks in advance.
I personally use Rundeck to trigger on-demand Chef runs.
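For Windows nodes you can get the same on-demand effect straight from a workstation over WinRM via the knife-windows plugin (role name and credentials are placeholders):
knife winrm 'role:web_slot1' 'chef-client' --winrm-user Administrator --winrm-password 'xxxx'
Rundeck essentially wraps this kind of trigger with scheduling, access control, and an audit trail.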
Given your description, I would use two prod environments, one for each group, so you can bump the cookbook version constraint for each group separately.
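For example (environment and cookbook names are placeholders):
cat > prod_slot1.json <<'EOF'
{
  "name": "prod_slot1",
  "json_class": "Chef::Environment",
  "chef_type": "environment",
  "cookbook_versions": { "my_web_deploy": "= 2.0.0" }
}
EOF
knife environment from file prod_slot1.json
Bump the pin for slot 1 first, let the testing team verify, then bump slot 2.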
For reporting at this scale, consider buying a license to get Chef Manage and Chef Reporting so you'll have a complete overview. The next option is to use a report handler to report the run status and send an email if there was an error during the run.
Nothing in here is specific to Windows, so really you are asking how to use Chef in a high-churn environment. I would highly recommend checking out the new Policyfile workflow; we've had a lot of success with it, though it has some sharp limitations. I've got a guide up at https://yolover.poise.io/. Another option on the cookbook/data release side is to move a lot of your tunables (e.g. versions of things to deploy) out of the cookbook and into a little web service somewhere, then have your recipe code read from that to get its tuning data. As for the push vs. pull question, most people end up with a hybrid. As @Tensibai mentioned, Rundeck is a popular push-based option. Usually you still leave background interval runs on a longer cycle (maybe 1 or 2 hours) to catch config drift, and use the push system for more specific deploy tasks. Beyond Rundeck you can also check out Fabric, Capistrano, MCollective, and SaltStack (you can use its remote execution layer without the CM stuff). Chef also has its own Push Jobs project, but I think I can safely say you should avoid it at this point; it never got enough community momentum to really go anywhere.
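For what it's worth, the Policyfile commands from a workstation look roughly like this (policy group names are placeholders):
chef install                                # resolve Policyfile.rb into Policyfile.lock.json
chef push prod_slot1 Policyfile.lock.json   # roll the lock out to the first 200 servers
chef push prod_slot2 Policyfile.lock.json   # promote the same tested lock to the second slot
Each node runs exactly what its policy group's lock says, which replaces the version-bump-and-upload dance.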
I'm currently working on a RESTful API in Go, using Windows and Goclipse.
The testing environment consists of a few VMs managed by Vagrant. These machines run nginx, PostgreSQL, etc. The app should be deployed into Docker on a separate VM.
There is no problem deploying the app the first time using a guide like this one: https://blog.golang.org/docker. I've read a lot of information and guides but am still totally confused about how to automate the deployment process and update the Go app in Docker after code changes. At the current stage the code changes very often, so deployment should be fast.
Could you please advise me on the correct way to set up some kind of local CI for this case? Which approach would be better?
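To be concrete, the manual cycle I'd like to automate after each change is something like this (image and container names are placeholders):
docker build -t goapp:latest .
docker stop goapp && docker rm goapp
docker run -d --name goapp -p 8080:8080 goapp:latest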
Thanks a lot.
I want to build an application that runs as a stand-alone system service, always running in the background, and serves a front-end with a web interface.
Just as on Linux we do /etc/init.d/apache2 start, I want to start my application with /etc/init.d/myapp start.
My main target is to deliver on Linux, especially Ubuntu; the whole app would be in plain Ruby and the front-end in Sinatra.
I want it to install with a simple gem install my_app, with command-line features available to start the service. The application will do heavy processing and database insertion. I also want its configuration to be set in pure Linux fashion, like /etc/apache2/apache2.conf.
Can anyone guide me with this? Also, if possible, I want to protect the code; is there any way to do that?
I am using the Daemon-Kit gem for the same requirements. It works very well in production. The only thing it does not include is configuration via a .conf file, but that's easy to do yourself with Ruby code. You can deploy with Capistrano; no need to install.
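If it helps, the basic Daemon-Kit cycle looks roughly like this (my_app is a placeholder, and the exact generator/control syntax is from memory, so check the gem's README):
daemon-kit my_app        # generate a daemon project skeleton
cd my_app
./bin/my_app start       # daemonize; stop and run (foreground) work the same way
A config file under /etc can then simply be a path your Ruby code reads at startup.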