I recently learned that Zabbix dashboards cannot be exported or imported through the UI. Is there any workaround for this? I want to copy my Development environment Zabbix dashboards to my Test environment.
You can do it with the dashboard API.
Use the .get call to retrieve the dashboard objects from your Dev environment, then call .create on the Test environment.
Watch out for mismatched userids (and/or other ids) if Dev and Test aren't 100% identical!
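For example, here is a minimal sketch using the pyzabbix client; the server URLs and credentials are placeholders, and selectPages assumes Zabbix 5.4 or later (older versions expose the widgets differently):

    from pyzabbix import ZabbixAPI

    # Placeholders: point these at your actual Dev and Test frontends.
    dev = ZabbixAPI("https://zabbix-dev.example.com")
    dev.login("Admin", "dev-password")
    test = ZabbixAPI("https://zabbix-test.example.com")
    test.login("Admin", "test-password")

    # Pull full dashboard objects, including their pages and widgets.
    dashboards = dev.dashboard.get(output="extend", selectPages="extend")

    for dash in dashboards:
        # Server-assigned ids must not be sent to .create.
        dash.pop("dashboardid", None)
        for page in dash.get("pages", []):
            page.pop("dashboard_pageid", None)
            for widget in page.get("widgets", []):
                widget.pop("widgetid", None)
        # Caveat from above: the owner userid and any host/item/graph ids
        # inside the widgets may need remapping if Dev and Test differ.
        test.dashboard.create(**dash)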
I installed the google-cloud-sdk on my Matillion instance hosted on EC2. I am able to access the gcloud command in the SSH session and also by using a Bash component in Matillion.
However, I am not able to run bq commands, even though bq was installed as part of the Cloud SDK. I was able to configure my account and everything, but it doesn't work.
Can someone help me with this?
As per the documentation, it's necessary to activate the BigQuery API in order to use the bq command-line tool.
These are all the steps that you need to follow:
In the Cloud Console, on the project selector page, select or create a Cloud project.
Install and initialize the Cloud SDK.
BigQuery is automatically enabled in new projects. To activate BigQuery in a preexisting project, go to Enable the BigQuery API.
I was also getting the same error as you, and activating the API was the solution.
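If you want to script the check, here is a minimal sketch (the project id is a placeholder) that verifies whether the API is enabled, enables it if not, and then smoke-tests bq by listing datasets:

    import subprocess

    PROJECT = "my-project-id"  # placeholder

    # List enabled services and look for the BigQuery API among them.
    enabled = subprocess.run(
        ["gcloud", "services", "list", "--enabled",
         "--project", PROJECT, "--format=value(config.name)"],
        capture_output=True, text=True, check=True,
    ).stdout

    if "bigquery.googleapis.com" not in enabled:
        # Enabling requires the serviceusage.services.enable permission.
        subprocess.run(
            ["gcloud", "services", "enable", "bigquery.googleapis.com",
             "--project", PROJECT], check=True)

    # Smoke test: list the datasets in the project.
    subprocess.run(["bq", "ls", "--project_id", PROJECT], check=True)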
According to the Jelastic documentation, it is possible to export an environment's configuration and download it so it can be restored with another provider.
However, I have tried with two Jelastic providers, and both have disabled the option to export private data.
So exporting/downloading/uploading/importing an environment is not possible.
In other words, I was expecting a process similar to the cPanel backup/restore tool.
In fact, a different view of the deployment process offers a way to get rid of the model of handling data or configuration on the platform itself. Try to think a bit differently and use a CI/CD approach. Jelastic provides a platform where what you create lives wherever you develop it (in a VCS such as Git) and is based on a specific stack that is already pre-configured as a layer and can be installed (copied) onto Jelastic. There is no need to keep that data somewhere in the cloud, because you have it locally (that is, in a VCS) and make your changes there. Then you just run a 'pull' (manually or automatically) on whichever environment you are targeting (test, staging, production).
Moreover, you can express any environment type as code and create it before deploying the data.
Please find the articles describing each case:
Deployment Guide
Jelastic Packaging Standard for CI/CD Automation
In case you would like to handle database backups, check this article:
Scheduling Database Backups
An additional FTP add-on can make copying easier for each instance:
FTP/FTPS Support in Jelastic
I am planning to build an enterprise application using AWS Lambda and the Serverless Framework.
I want to separate the dev, test, and prod environments, and I am planning to use AWS Parameter Store for this.
I don't want my production environment configuration to be exposed to developers. If a developer runs the command serverless offline -s production start, the production configuration should not be retrievable.
It should be obtainable only once the serverless function has actually been deployed to AWS Lambda.
Here are a few considerations based on your question:
To have different environments with the Serverless Framework, you have to set up stages. The stage can be passed as a parameter when executing sls commands, e.g. sls deploy --stage production.
If you are keeping your code in a repo, the developers will have access to all the configurations. If this is really important, you could keep the production configuration in a different repo that only very specific people have access to, and then reference it in your serverless.yml. Ex:
custom: ${file(./config/${opt:stage, 'dev'}.json)} — then in your config folder you create the prod.json file, pointing to the real one in the new repo you created. Note: this would make your project harder to maintain.
Considering you don't want your developers to execute your production environment locally, you can use the global variable that serverless offline sets to block the execution (see the sketch below). You could also simply inform them not to do so.
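As a sketch of that guard, assuming Python handlers: serverless-offline sets the IS_OFFLINE environment variable when running locally, so a handler can refuse to load production configuration in that case. The STAGE variable here is a hypothetical wiring of your own (e.g. STAGE: ${opt:stage, 'dev'} in the provider environment of serverless.yml):

    import os

    def load_config():
        stage = os.environ.get("STAGE", "dev")  # assumed wiring, see above
        # serverless-offline sets IS_OFFLINE when running locally;
        # refuse to touch production configuration in that case.
        if os.environ.get("IS_OFFLINE") and stage == "production":
            raise RuntimeError("Refusing to run the production stage offline.")
        # ...load the stage-specific configuration here (e.g. from SSM)...
        return {"stage": stage}

    def handler(event, context):
        config = load_config()
        return {"statusCode": 200, "body": config["stage"]}

Keep in mind this is a convenience guard, not a security boundary; the IAM/VPC isolation described below is what actually keeps the production parameters out of reach.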
Here is what should be a good practice and solution, based on your problem:
Considering you have a production environment you want to isolate from a given group in your company, you should create VPCs and configure access to their resources accordingly.
Then you create users with different access levels. When a developer tries to execute code that accesses a resource (DynamoDB, for example) in a VPC they don't have access to, they will be blocked.
Use aws configure to define which user will execute the sls command.
Your development team will still have access to your configuration file.
Note: in this case, the person/group with access to the production VPC will have to do the deploy.
If this answer does not suffice, could you please clarify which type of resource(s) are sensitive in your Serverless project? I am taking for granted it is the DB, as it is the most common scenario.
I want to know what deployments have recently occurred on my QA environment. It comprises several servers and several projects, so I don't want to go trawling through each environment separately or each project separately.
I tried clicking on the QA environment header on the home page, but it just takes me to the list of all environments - no links/buttons to logs for a specific environment.
Version: Octopus Deploy 2.6.4.951
I use the filtering on the 'Tasks' page. It looks like filtering was introduced in 2.5: https://octopus.com/blog/2.5#organised-task-page-including-filtering
/Octopus/app#/tasks
There are filters for Environment (QA) and 'Task Type' (Deployment). Checking mine just showed me all the deployments to QA, and that there is one awaiting approval.
I also filter the Tasks page, but I additionally have a PowerBI project which queries the API at the URL below:
http://octopus.example.com/api/reporting/deployments/xml?apikey=API-ABCDEFGHIJKLMNOPQRSTUVWXY
From there you can set up whatever reporting you want :)
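If you would rather script that same feed than use PowerBI, here is a minimal sketch using the Python requests package; the server URL and API key are placeholders, and the XML element names (EnvironmentName, ProjectName) are assumptions, so inspect the feed for the exact schema:

    import requests
    import xml.etree.ElementTree as ET

    OCTOPUS = "http://octopus.example.com"
    API_KEY = "API-XXXXXXXXXXXXXXXXXXXXXXXXXX"  # placeholder

    # The API key can go in the query string (as above) or in a header.
    resp = requests.get(
        f"{OCTOPUS}/api/reporting/deployments/xml",
        headers={"X-Octopus-ApiKey": API_KEY},
    )
    resp.raise_for_status()

    # Print one line per deployment in the feed.
    for deployment in ET.fromstring(resp.text):
        print(deployment.findtext("EnvironmentName"),
              deployment.findtext("ProjectName"))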
I wasn't able to find solid information on this and I wanted to ask developers who use Parse Dashboard:
What are the pros/cons of Parse Dashboard local installation vs deployment?
I currently run Parse Dashboard as a local installation, but I know that deployment to Heroku is also an option (my app is deployed on Heroku). I wanted to gather some information before deciding whether or not to deploy.
Thank you!
I also have it running locally, and I think for security reasons it's best to do so. If you set up the dashboard on the same server on which Parse is running, then you will have to take security measures to protect access to the dashboard and to the config file, which includes your masterKey and all that. This burden definitely outweighs the argument against hosting it locally, which in my opinion is only that a remote dashboard is easier to access.
If you really want to set up a dashboard on a server, at least do it on a separate server.