Elasticsearch 1.3+ and secure scripts workflow

I've upgraded to Elasticsearch 1.3.x, where dynamic scripting is disabled by default for security reasons. The advice is now to place any scripts you use as files in the config/scripts directory. My question is how to integrate this into a multi-user production environment: each developer needs the scripts installed, CircleCI needs them, and I have to ensure the search cluster connected to Heroku has them. Is there any way I can integrate this into our git workflow? (Info on the new security setup: http://www.elasticsearch.org/blog/scripting-security/)
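For context, the kind of git-driven workflow being asked about might look like this sketch (the repo layout, paths, and hostnames are assumptions, not part of the question):

#!/usr/bin/env bash
# Deploy git-tracked scripts into Elasticsearch's config/scripts (paths are assumptions).
set -euo pipefail
REPO_SCRIPTS=./elasticsearch/scripts            # scripts versioned alongside the app code
ES_SCRIPTS=/usr/share/elasticsearch/config/scripts
mkdir -p "$ES_SCRIPTS"
cp -r "$REPO_SCRIPTS"/. "$ES_SCRIPTS"/
# Elasticsearch rescans config/scripts periodically (60s by default), so no restart is needed.

Run as a deploy step (locally, in CI, and on the cluster hosts), this keeps every environment's scripts pinned to the same commit.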

Related

Why do some Jelastic providers block Export Environment option

According to the Jelastic documentation, it is possible to export the environment configuration and download it so it can be restored at another provider.
However, I have tried with two Jelastic providers, and both have disabled the option for exporting private data.
So exporting/downloading/uploading/importing an environment is not possible.
I.e., I was expecting a process similar to the cPanel backup/restore tool.
In fact, a different view of the deployment process makes it possible to get rid of the model of handling data or configuration on the platform itself. Try to think a bit differently, using a CI/CD approach. Jelastic provides a platform where the thing you created lives somewhere you control (a VCS such as Git) and, based on a specific stack that is already pre-configured like a layer, can be installed (copied) over to Jelastic. You don't need to keep the data somewhere in the cloud, because you have it locally (in your VCS) and make your changes there. Then just run a 'pull' procedure (manually or automatically) on whichever environment (test, production, staging) you are targeting.
Moreover, you can describe any environment type as code and create it before deploying the data.
Please find the articles describing each case:
Deployment Guide
Jelastic Packaging Standard for CI/CD Automation
In case you would like to handle database backups, check this article:
Scheduling Database Backups
An additional FTP add-on can make copying easier for each instance:
FTP/FTPS Support in Jelastic

Setting up Kibana multitenancy with Search Guard and OpenShift?

I spent a week trying to set up Search Guard and OpenShift in a Docker container and am completely torn apart...
I am working on a project where I plan to have clients who can be given access only to their own indices. X-Pack and Search Guard Enterprise work perfectly; unfortunately, until I get any clients I cannot pay yearly fees of several thousand dollars.
I tried to set up Search Guard, turn off enterprise mode, and then install openshift-elasticsearch-plugin.
If I install them both, after much tuning I get an error that you cannot enable functionality in OpenShift that is already enabled by Search Guard.
When I install only openshift-elasticsearch-plugin and set all the settings, it says "Failed authentication for null".
Here is the repository https://github.com/SvitlanaShepitsena/Lana
I have a small issue (somehow sleep does not work) so in order to start the cluster you need:
docker-compose up
docker ps
docker exec -it [container-id] /bin/bash
./sgadmin.sh
After 1 week of work I am desperate and beg for help :-).
The openshift-elasticsearch-plugin is designed to add specific features to the OpenShift logging stack. Among other things, it provides dynamic ACLs for users based on their OpenShift permissions. I would suggest containerizing an Elasticsearch image and adding the Search Guard plugins directly. Alternatively, versions of Elasticsearch later than the one the plugin is designed for (2.4.4) can use X-Pack, which provides similar security.
It's preinstalled (https://hub.docker.com/r/elastic/elasticsearch) and can be configured as described at https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html.
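For example, a minimal single-node run along the lines of the linked guide (the image tag is an assumption; pick whatever version you need):

# Official image with X-Pack bundled; single-node discovery is for development only.
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.4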

Solaris 11 development environment on Cloud or hosting

I've been put in charge of creating a website based on Java. The Production environment specs include Solaris 11.1, OHS 12.1.3, WebLogic 12.1.3, Java 1.7.0_51, and Oracle Database 11.2.
I want to create a server on some cloud or hosting service as a Development environment with the same specs, to avoid migration problems when moving to Production. I also think this approach gives my team a single server where they can work and lets testers/the client visit the site.
Normally I would use a local Development environment, but a lot of people are involved, and differences from Production can become a problem at migration time.
I checked http://www.polarhome.com/ but I don't know if it fits all the needed specs. I looked at Windows Azure and Google Cloud with no success. AWS, maybe? I also checked https://cloud.oracle.com but I can't tell whether they already offer what I need.
Do you know any providers where I can create my Development environment, or do you have another approach/suggestion for developing this project?
Thanks!
EDIT.
To clarify, the client's Production environment already exists and is running somewhere. My project will be installed on that environment when development is finished. I personally think that developing on any VM with WebLogic 12.1.x and Oracle Database 11.x should be enough, but I've never done it, so I wanted to follow the client's advice to have a Development environment similar to Production.
Do you think I can just create a VM on any OS and install WebLogic 12 with Oracle Database 11? Any suggestions to avoid migration issues if I take that route?
I think that developing a new website from scratch around the architecture you propose makes no sense; you will do better if you use cloud services such as a PaaS.
In any case, you can find Solaris VMs on CloudSigma, Entic, and Oracle Public Cloud.

SonarQube permissions location

I'm trying to configure a SonarQube server using Puppet.
My Puppet manifests install the software, deploy my custom sonar.properties, deploy SSL certificates, download and configure a few plugins and, at last, start the service.
The default Global Permissions grant Execute Analysis and Execute Preview Analysis to Anyone.
The default Project Permissions grant Browse and See Source Code to Anyone.
I want to change this from my Puppet code without using the web interface, and not only before the first deploy: on each Puppet run I might want to change these permissions.
The goal is to configure and reconfigure SonarQube automatically.
Thanks, and sorry for my English.
To update permissions, you can use web service calls: http://docs.codehaus.org/pages/viewpage.action?pageId=231735777
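As a sketch, revoking Execute Analysis from Anyone could look like this (host, credentials, and permission key are assumptions; the endpoint below exists in SonarQube 5.2+, so check the api/permissions documentation for your version):

# Remove the 'scan' (Execute Analysis) global permission from the Anyone group.
curl -u admin:admin -X POST "http://sonar.example.com/api/permissions/remove_group" -d "groupName=Anyone" -d "permission=scan"

Wrapped in a Puppet exec resource, this can run on every Puppet pass, since removing an already-removed permission is a no-op.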

Setting up a collaborative environment for web application development

My office is growing and I've been tasked to build out the IT for our web development.
What's the best tool/setup for doing web development in a group setting? The requirements are a centralized code repository, a place to test development code, and finally a way to push tagged code out to a staging server. What I'm thinking is SVN/Redmine for the code repo; each user gets an account on a central development machine for SSH access (Eclipse over SSH) and their own virtual host on the dev server, which gives everyone a centralized development sandbox. Code is written and tested on this dev box, then checked back into SVN and later tagged and pushed out to the staging server. Yeah? Thoughts, comments, or recommendations?
*Also, in a dev environment, what is the best way to handle databases? Is it wise to pull from the production database? And should each developer have his/her own DB or work off a master DB?
**We are building a Magento application and also have some custom back-office tools that run on CakePHP.
Although this subject is off-topic on StackOverflow and flagged as such, you need to concentrate on the following areas:
VERSION-CONTROL
Git has all the glory, and you don't need your own box for this: https://bitbucket.org/ offers unlimited data and private/public repos, and you can keep your codebase there. http://github.com is also powerful and the de facto most popular version-control-oriented tool out there, although it comes at a small price.
So your master branches live in version control; your devs will check out from there and commit back to it as well.
Your deployment tools will then deploy to your live and staging environments from master.
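As an illustration (the repository URL and branch names are made up), the day-to-day loop looks something like:

# Developer side: get the code, work on a branch, push back for review/merge.
git clone git@bitbucket.org:team/shop.git
git checkout -b feature/checkout-fix
git commit -am "fix checkout totals"
git push origin feature/checkout-fix
# Deployment side: staging and live always pull the approved master branch.
git pull origin master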
ENVIRONMENTS
Usually three are used: LIVE, STAGE, DEV.
LIVE is, well, live; only approved code gets deployed there.
STAGE is the pre-live environment and should be an exact replica of LIVE, so everything can be tested there by the merchant.
DEV is nice to have as an exact replica too, but it can just as well be the developer's local environment; it is meant for loose testing and experimenting.
DATABASES AND DEPLOYMENT
MySQL databases are a pain to sync, so you'd better have a script for it that syncs from LIVE to the others and prevents syncing from other environments back to LIVE. This limitation also means that all configuration and content must be added on LIVE only, and only then synced down the line. Every change to the schema or to a permanent setting should be handled by update scripts (as we are talking Magento CE; Magento EE has migrations built in). A one-way sync sketch is below.
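A minimal sketch of such a one-way sync, with made-up hostnames, credentials, and database names, and deliberately no reverse direction:

#!/usr/bin/env bash
# One-way sync: dump LIVE and load it into a downstream environment. Never reverse this.
# LIVE_PW and TARGET_PW are assumed to be exported secrets.
set -euo pipefail
TARGET_HOST=${1:?usage: db-sync.sh <target-host>}   # e.g. stage-db or dev-db, never the LIVE host
mysqldump --single-transaction -h live-db.example.com -u sync_ro -p"$LIVE_PW" magento \
  | mysql -h "$TARGET_HOST" -u sync_rw -p"$TARGET_PW" magento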
For deployment, I also suggest building a Fabric or Capistrano script that resets the dev and staging environments, handles the database reset and pull from the LIVE DB, and imports code from the central repository.
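A plain-shell rendition of what such a reset script does (paths, remotes, and the db-sync.sh helper above are illustrative, not a specific Fabric or Capistrano API):

#!/usr/bin/env bash
# Reset STAGE: fresh code from the central repo, fresh data from LIVE.
set -euo pipefail
cd /var/www/stage
git fetch origin && git reset --hard origin/master   # import code from the central repository
./db-sync.sh stage-db                                # database reset + pull from the LIVE DB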
It's also a good idea to target the following everyday tasks:
The client needs to reset STAGE for their tests.
Project managers, developers, or testers need to test, so spawning a test clone should be a one-click action (take the current DB and code and make them live in some subfolder for that specific test only), as should deleting the test; a sketch follows this list.
Third-party devs might need access to a specific test or dev environment (this is relevant with Magento, as on average there are at least 10 external extensions installed in every Magento store).
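A rough sketch of the one-click test clone (every name and path here is invented; Magento 1 keeps its DB credentials in app/etc/local.xml, which is why that file is rewritten):

#!/usr/bin/env bash
# Spawn a throwaway test clone of STAGE in a subfolder with its own DB copy.
# MySQL credentials are omitted for brevity.
set -euo pipefail
TEST=${1:?usage: spawn-test.sh <name>}
cp -a /var/www/stage "/var/www/tests/$TEST"          # copy the current code
mysqladmin create "test_$TEST"
mysqldump stage_db | mysql "test_$TEST"              # copy the current DB
sed -i "s/stage_db/test_$TEST/" "/var/www/tests/$TEST/app/etc/local.xml"  # point the clone at its DB

Deleting the test is the mirror image: drop the database and remove the subfolder.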
