puphpet elasticsearch directory

First time tinkering with ElasticSearch. Installed ES on my VM by adding
elastic_search:
    install: '1'
    settings:
        version: '1.4.4'
        java_install: true
to config.yaml. Following these instructions, I would now like to perform some actions in the ElasticSearch directory, for example to run the following:
./bin/plugin -i elasticsearch/marvel/latest
Where is the ElasticSearch directory in my VM? Or is this command supposed to be run on my local machine (and if so, where)?

PuPHPet uses the elastic/puppet-elasticsearch module.
Looking through that code I think one of these directories is the one you want:
https://github.com/elastic/puppet-elasticsearch/blob/master/manifests/params.pp#L98
Elasticsearch is not installed on your host machine; it exists only within the VM you've created. You can access it from the outside just as you would any other service, but actually running anything through a shell requires you to be inside the VM.
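For illustration, here is a minimal sketch of the round trip, assuming the default Vagrant workflow and the package-style install location from params.pp (verify the exact path on your VM):
vagrant ssh                                          # enter the VM; run from the directory containing the Vagrantfile
cd /usr/share/elasticsearch                          # assumed install directory; check params.pp above if yours differs
sudo ./bin/plugin -i elasticsearch/marvel/latest     # install Marvel inside the VM, not on the host
exit                                                 # back to the host machine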

Related

Running Kibana locally, why does the `bin/kibana` command work, but other seemingly equivalent commands do not?

This isn't stopping me from running Kibana locally; I'd just like to better understand the mechanics of whatever script starts the service.
What I've noticed is that my local Kibana install (kibana-7.14.2-darwin-x86_64) has a bin folder with a kibana Unix executable in it. From this root directory I can run bin/kibana to execute the kibana file and start the service, but if I run cd bin and then kibana, I get command not found: kibana.
What am I missing here?
Thanks!
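The likely explanation is ordinary shell PATH lookup: a command name containing a slash is treated as a file path, while a bare name is searched for only in the directories listed in $PATH, which by default does not include the current directory. A quick sketch:
cd kibana-7.14.2-darwin-x86_64
bin/kibana      # works: the name contains a slash, so the shell runs that file directly
cd bin
kibana          # fails: a bare name is resolved via $PATH, which doesn't include "."
./kibana        # works again: the leading ./ makes it an explicit path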

Docker - The moduleSpecifier "..." couldn't be found on local disk

I cannot mount a local folder as a docker volume with docker-compose, so it is not accessible when running a docker-compose run command.
Here is a repo from GitHub: https://github.com/up1/demo-k6-docker
When I follow the README and run docker-compose run k6 run scripts/sample.js, it gives me the following error every time:
WARN[0000] The moduleSpecifier "scripts/sample.js" has no scheme but we will try to resolve it as remote module. This will be deprecated in the future and all remote modules will need to explicitly use "https" as scheme.
ERRO[0000] The moduleSpecifier "scripts/sample.js" couldn't be found on local disk. Make sure that you've specified the right path to the file. If you're running k6 using the Docker image make sure you have mounted the local directory (-v /local/path/:/inside/docker/path) containing your script and modules so that they're accessible by k6 from inside of the container, see https://k6.io/docs/using-k6/modules#using-local-modules-with-docker. Additionally it was tried to be loaded as remote module by prepending "https://" to it, which also didn't work. Remote resolution error: "Get "https://scripts/sample.js": dial tcp: lookup scripts on 127.0.0.11:53: no such host"
Tried:
specifically sharing the folder in the Docker app settings window,
different GitHub repos,
different Mac laptops,
different setups (Dockerfile COPY and the -v option on docker run),
looking through similar questions,
and the docs:
https://k6.io/docs/using-k6/modules#using-local-modules-with-docker
I would really appreciate some help; I've been banging my head against the wall with this for a couple of days.
Try this:
docker-compose run k6 run //scripts//sample.js
The doubled slashes stop the MSYS/Git Bash shell on Windows from rewriting the POSIX-style path into a Windows path. I'm running Docker Desktop version 3.1.0 on Windows 10 Pro.
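For readers on macOS or Linux, where the double-slash workaround doesn't apply, the equivalent plain docker invocation mounts the scripts directory explicitly (a sketch assuming the loadimpact/k6 image used in the k6 docs linked above):
docker run --rm -i -v "$(pwd)/scripts:/scripts" loadimpact/k6 run /scripts/sample.js    # mount scripts/ so k6 can see the file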

With Vagrant, is it possible to run a single VM (scotchbox) using multiple Vagrantfiles?

I currently have multiple dev sites that each use Vagrant and scotchbox. Each site directory has its own copy of scotchbox, but since they're all the same I'd like to have just one scotchbox VM that I can start with any Vagrantfile, where each Vagrantfile would just change the config.vm.synced_folder.
So, for example, let's say I have:
~/Sites/cheese/
~/Sites/bacon/
~/Sites/eggs/
and then
~/Sites/scotch-box
I'd like to be able to startup and shutdown Vagrant from ~/Sites/cheese/ or ~/Sites/bacon/ and so on without each also having their own copy of scotch-box.
Is that possible?
Yes, you could potentially do that. I've not tested it and don't think it's officially supported,
but you can create the first VM from the Vagrantfile within your ~/Sites/cheese/ project. Once you have created the VM, copy the Vagrantfile and the .vagrant directory from ~/Sites/cheese/ into ~/Sites/bacon/ and ~/Sites/eggs/ so that all of them point to the same VM; see the sketch below. Edit the Vagrantfile within each project to change the synced folder if needed.
You will be able to start the VM from any of these projects, but as it is a single VM, if it is already running from one project and you try to run vagrant from another project directory, it will not work.
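A sketch of that copy step, using the directory names from the question:
cp ~/Sites/cheese/Vagrantfile ~/Sites/bacon/              # reuse the same Vagrantfile (adjust the synced folder after copying)
cp -R ~/Sites/cheese/.vagrant ~/Sites/bacon/.vagrant      # .vagrant stores the machine identity, so both projects now target one VM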

Vagrant Virtualbox Openstack - or is there a better way?

Current Setup
Using a Vagrant/Virtualbox image for development
Vagrant file and php code are both checked into a git repo
When a new user joins the project they pull down the git repo and type vagrant up
When we deploy to our "dev production" server, which is a CentOS 7 machine with VirtualBox and Vagrant installed, we just run the Vagrant image
Future Setup
We are moving towards an OpenStack "cloud" and are wondering how best to integrate this current setup into the workflow.
As I understand it, OpenStack allows you to create individual VMs, which sounds appealing because we could then launch our VMs there. The problem is that we are taking advantage of Vagrant/VirtualBox's synced-folder "mapping" functionality, so that /var/www/html in the VM is mapped to an /html directory in the folder we run vagrant from. I assume this is not possible with OpenStack, and was wondering whether there is an established best practice for handling this situation.
Approach
The only approach I can think of is to:
install a VM on OpenStack that runs CentOS 7 and then, inside that VM, run Vagrant/VirtualBox (this seems bonkers).
But then we have a VM inside a VM inside a VM, and that just doesn't seem efficient.
Is there a tool, guide, or general guidance on how to work with both a local Vagrant image and the cloud? It seems there may not be as easy a mapping as I initially thought.
Thanks
It sounds like you want to keep using Vagrant, presumably with https://github.com/ggiamarchi/vagrant-openstack-provider or similar? With that assumption, the smallest iteration from your current setup is probably to use an rsync synced folder - see https://www.vagrantup.com/docs/synced-folders/rsync.html. You should be able to do something like this:
config.vm.synced_folder "html/", "/var/www/html", type: 'rsync'
Check the rest of that rsync page, though: depending on your SSH user, you might need to use the --rsync-path option.
NB: you don't mention whether your vagrant host is running Windows or Linux. If you're on Windows, I tend to use Cygwin, though I expect you can otherwise find some rsync.exe to use.
If you can free yourself from the vagrant pre-requisite then there are many solutions, but the above should be a quick win from where you are now.
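For reference, the day-to-day workflow with an rsync synced folder looks like this (vagrant rsync and vagrant rsync-auto are standard Vagrant subcommands):
vagrant rsync           # one-off sync of html/ up to /var/www/html on the instance
vagrant rsync-auto      # watch html/ for changes and re-sync automatically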

ElasticSearch installed, but how do I install Kibana on localhost?

I'd like to view my machine's syslogs more beautifully on an Ubuntu desktop. I notice that all the Kibana documentation is oriented towards remote servers (which makes sense). However, how would I securely view the same information about my local machine?
Here are some things I've read that were not helpful because they were designed for remote access:
https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-logs-on-centos-7
Kibana deployment issue on server . . . client not able to access GUI
http://www.elasticsearch.org/overview/kibana/installation/, which has the following problems:
there is no config.js to open in an editor per step 2; you can see this very plainly on their GitHub page: https://github.com/elasticsearch/kibana
running
~/kibana/src/server/bin$ bash kibana.sh
The Kibana Backend is starting up... be patient
Error: Unable to access jarfile ./../lib/kibana.jar
How do I install kibana locally?
Not sure if you're still looking for an answer, but for future searchers:
What you can do is download elasticsearch - http://www.elasticsearch.org/overview/elkdownloads/
Extract it, and create a plugins subdirectory. Then, within the plugins directory, create a kibana/_site subdirectory.
Then download Kibana using the above-mentioned link. Extract the archive, then edit config.js to point to localhost as the elasticsearch host:
elasticsearch: "http://localhost:9200",
Copy all of the contents of the folder you extracted Kibana into to the kibana/_site directory you created inside the elasticsearch folder.
Then start elasticsearch:
within the elasticsearch directory -
bin/elasticsearch
Kibana will now run off of the same 'server' as elasticsearch, on your local host.
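Condensed into commands, assuming an Elasticsearch 1.x tarball and a Kibana 3 tarball from the download page above (the file names and version numbers are illustrative):
tar -xzf elasticsearch-1.4.4.tar.gz
mkdir -p elasticsearch-1.4.4/plugins/kibana/_site        # Kibana 3 is served as a static ES site plugin
tar -xzf kibana-3.1.2.tar.gz
# edit kibana-3.1.2/config.js so the host line reads: elasticsearch: "http://localhost:9200",
cp -R kibana-3.1.2/* elasticsearch-1.4.4/plugins/kibana/_site/
elasticsearch-1.4.4/bin/elasticsearch                    # then browse to http://localhost:9200/_plugin/kibana/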
UPDATE: Kibana 4 comes bundled with a web server now: see the docs
