Is it possible to set up a Vagrant sync folder with variable host/source directory, possibly from config file? [closed] - vagrant

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 months ago.
I'm trying to set up a Vagrantfile that will mount a code base on the developer's machine. The place the developer puts the codebase on their machine could be anywhere they like based on how they like to organize their machine. If I offer up this Vagrantfile to set up a small development and test environment that closely resembles production, I'd like them to be able to set the location of their code without having to edit the Vagrantfile (leaving it unchanged in source control).
Is there a way to make the Vagrantfile look somewhere else for a value to use as the host directory path for a sync folder?
I tried asking on the HashiCorp forum (might require login) yesterday, but haven't gotten a response yet and it seems like a low traffic site. I'll keep checking there in case a solid answer comes back, but I'm hoping someone here has dealt with this before. Thank you for any help.

You can define additional synced folders with a single extra line of code in the Vagrantfile:
config.vm.synced_folder '<host_path>', '<guest_path>'
That covers defining extra synced folders. However, you also want each user to set the host path via a config file, so the Vagrantfile itself stays unchanged in source control. You can accomplish this with basic Ruby. Assume the config file is YAML, like:
# config.yaml
---
host_path: '/path/on/host'
guest_path: '/path/on/guest'
Then we can read the file in with plain Ruby and use its key-value pairs as usual. This assumes the file sits in the directory where vagrant commands are executed (next to the Vagrantfile); otherwise the path will need adjusting:
require 'yaml'

paths = YAML.load_file('config.yaml')

Vagrant.configure('2') do |config|
  config.vm.synced_folder paths['host_path'], paths['guest_path']
end
The code can also be easily modified for different config file formats.
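If config.yaml may be absent (say, on a fresh clone), the lookup can fall back to defaults so vagrant up still works. A minimal sketch in plain Ruby; the default paths here are placeholders for illustration, not anything Vagrant prescribes:

```ruby
require 'yaml'

# Fallback paths used when the per-user config file is missing.
# These defaults are placeholders for illustration.
DEFAULT_PATHS = { 'host_path' => './src', 'guest_path' => '/vagrant/src' }

def sync_paths(config_file = 'config.yaml')
  user = File.exist?(config_file) ? YAML.load_file(config_file) : {}
  DEFAULT_PATHS.merge(user) # keys from the user's file win over the defaults
end

# Inside the Vagrantfile this would be used as:
#   paths = sync_paths
#   config.vm.synced_folder paths['host_path'], paths['guest_path']
```

This keeps the Vagrantfile untouched in source control while letting each developer override only the keys they care about.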

Related

Automatic FTP deploy: is there any automatic / programmable tool available? [closed]

I need to:
Watch a directory's contents
When the contents change:
delete a specific FTP folder
upload the whole directory's contents to that same FTP folder
I'm totally unaware of the best way to accomplish this task, so I'm looking for suggestions/ideas... gulp? grunt? node? something else?
I don't know any build tools at all, so before trial-and-error with every single option, I'm asking you for suggestions.
EDIT: Please note that I'm asking for a Continuous Deployment tool able to watch for file changes and upload to FTP.
Changing away from FTP is not an option for me... short of changing jobs.
What you need is a Continuous Deployment solution; FTP is not necessarily how it has to be done. You haven't mentioned a technology stack, but if you were (for example) working in Visual Studio, you could easily set up continuous deployment to an app service through Azure integrations. That way, whenever you have the project open in VS and save code changes, they are uploaded to your app.
You can read about app service CD on Azure here.
There are, of course, many other continuous deployment solutions available. This was just a specific example I'm more versed in. Here is a list of other solutions, each with its own set of functionality. If you do your research, I'm sure you'll find what you need (rather than writing a script to do the directory monitoring and FTP for you).
The use case you have described is unusual, so you will not find a ready-built solution you can simply run; you will need to create your own.
Watch the directory.
This translates to "your code needs to be aware of file uploads". That awareness can be achieved either by a notification being pushed to your code, or by your code polling for changes on its own.
Sadly, push from the FTP server will not work, as (to my knowledge) no FTP server supports a push notification on file upload. To understand what I mean here, think of an SVN post-commit hook.
The easiest polling approach is to tail the FTP server log and look for the STOR command, match your watched directory (via a regexp), and on a match execute a bash/PHP/any other script to perform step (2). Step (2) is a handful of FTP client commands.
For PHP:
Remove a directory: http://ee1.php.net/manual/en/function.ftp-rmdir.php
Upload a file (the comments there include recursive examples to upload a directory and its contents): http://ee.php.net/manual/en/function.ftp-put.php
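As an alternative to tailing the server log, the watch step can be a simple polling loop over the local directory. A hedged sketch in Ruby (the FTP delete/upload itself is left as a comment; the 2-step flow is the one described above):

```ruby
require 'digest'

# Snapshot a directory as {relative_path => content_hash}; any difference
# between two snapshots means the directory's contents changed.
def snapshot(dir)
  Dir.glob('**/*', base: dir)
     .select { |f| File.file?(File.join(dir, f)) }
     .to_h   { |f| [f, Digest::MD5.file(File.join(dir, f)).hexdigest] }
end

def changed?(before, after)
  before != after
end

# A real watcher would loop: sleep, re-snapshot, and on a change delete the
# remote FTP folder and re-upload everything (e.g. with Ruby's net/ftp, or
# PHP's ftp_rmdir/ftp_put as in the links above).
```

Polling is cruder than log tailing but needs no access to the FTP server's internals.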

golang binary not working outside gopath src folder [closed]

I have a golang web application built with gin-gonic. I have defined the goapp under /usr/local/goapp
The structure is like this -
/usr/local/goapp
+src
+bin
+pkg
Here is my go env:
GOPATH - /usr/local
GOBIN - /usr/local/goapp/bin
GOROOT - /usr/local/go
When I run go build and go install under the main folder in the source tree, I get my binary, and when I run it, my HTML loads when I go to the home page URL.
If I run the same binary from the bin folder, the HTML does not load when I go to the home page URL; I get 404 page not found.
Am I missing something here? Has anyone come across this kind of issue?
Thanks.
GOPATH, GOROOT, and the (here missing) PATH variables only describe how to invoke the go tool itself and where it searches for packages. But you told us that you already built and installed your program.
Once a Go program has been built, go is no longer needed at all. You can take the binary, put it anywhere you want, even on another machine with the same or at least a similar system, and run it there.
When you execute your built-and-installed program (which uses gin-gonic), you run it from some path, also known as the current working directory (see getcwd(2) or pwd(1)).
My guess is that your HTML/template files live under the working directory from which you first ran the binary, and that gin-gonic loads them from there via relative paths.
It is common for such a program to return 404 Not Found when it cannot locate the document it is supposed to serve.
Though this is just a guess, it is quite likely your situation: you are running the binary from a different working directory than the one where the program expects its documents.
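The pitfall is language-independent. A short Ruby sketch (the file names are made up for illustration) shows the same behavior: a relative path like templates/index.html resolves against the current working directory, not against where the binary lives:

```ruby
require 'tmpdir'

# A relative path resolves against the current working directory,
# not against the location of the executable.
def resolve(relative_path)
  File.expand_path(relative_path, Dir.pwd)
end

Dir.mktmpdir do |project|
  Dir.mkdir(File.join(project, 'templates'))
  File.write(File.join(project, 'templates', 'index.html'), '<html/>')

  Dir.chdir(project) do
    File.exist?(resolve('templates/index.html')) # found: run from the project dir
  end
  Dir.mktmpdir do |elsewhere|
    Dir.chdir(elsewhere) do
      File.exist?(resolve('templates/index.html')) # not found: run from elsewhere
    end
  end
end
```

The usual fixes are to cd into the directory that holds the assets before launching, or to resolve asset paths from an absolute base instead of the working directory.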

Why is Windows not reading my hosts file? [closed]

I'm having a hard time getting Windows to take into account a new entry in my hosts file.
I tried adding this line:
199.229.249.151 models.db
To the hosts file found here:
c:\windows\system32\drivers\etc\hosts
When I save the file and try to reach the host with a browser, I'm getting a "host not found" error. I tried setting the "read-only" file attribute to the hosts file -- same result. I tried flushing the DNS cache, but nothing changes.
It seems Windows is not reading my modified hosts file at all, or at least, not taking in account my new entry.
What am I forgetting? What else could I try?
Are there specific requirements or rules to follow to ensure that Windows can always properly detect a change to the hosts file, read and parse its contents, and immediately take into account changes when using a browser or ping to test via the command-line?
I ran into the same issue, and after checking a lot of things, it turned out to be the line endings: I changed them to Windows format and it worked.
I ran into this problem once: Windows simply ignored the hosts file. I fixed it by:
Copying the hosts file from C:\Windows\System32\drivers\etc to somewhere else, like the Desktop
Deleting the hosts file in C:\Windows\System32\drivers\etc
Copying the copy back to C:\Windows\System32\drivers\etc
I don't know why, but that fixed it.
Automatic proxy server configuration scripts override the hosts file. To disable the automatic configuration script:
Press Windows key and type Configure proxy server
Click LAN settings
Uncheck Use automatic configuration script
Try ping localhost.
If it works, something is wrong with the IP or your entry. If it does not, the hosts file itself is bad. Pay attention to how the name resolves: it might try IPv6, which still means the hosts file is broken.
Remove everything from the file and leave only your entry (or only the localhost line). A single line, nothing else at all, not even extra line breaks. Stash the original contents aside somewhere until the problem is resolved.
If that works, then some entry in the file breaks things. Try converting the line endings to Windows format; that might help. Usually it's whitespace that messes things up, because it's hard to notice.
Open Notepad as administrator (Start > type Notepad > right-click > Run as administrator).
Copy in all the hosts file entries, and save the file somewhere as an ANSI-encoded file named hosts (not with a .txt extension: choose "All files" as the type and name it hosts).
Finally, copy that saved hosts file (say, C:\tempfolder\hosts) to the C:\Windows\System32\drivers\etc folder.
I encountered the same issue and found my hosts file was saved as Unicode; after changing it to ANSI, the issue was fixed.
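Several of the answers above come down to invisible encoding problems: a Unicode byte-order mark or non-Windows line endings. A small hedged sketch in Ruby that checks a hosts-style file's raw bytes for the usual culprits (the Windows path in the comment is just the standard location):

```ruby
# Check raw file bytes for the common invisible hosts-file problems:
# a UTF-16 or UTF-8 byte-order mark, or LF-only (Unix) line endings.
def hosts_file_problems(bytes)
  bytes = bytes.b # treat input as raw bytes
  problems = []
  problems << 'UTF-16 BOM' if bytes.start_with?("\xFF\xFE".b, "\xFE\xFF".b)
  problems << 'UTF-8 BOM'  if bytes.start_with?("\xEF\xBB\xBF".b)
  problems << 'LF-only line endings' if bytes.include?("\n") && !bytes.include?("\r\n")
  problems
end

# Usage, reading the file without any text-mode conversion:
#   hosts_file_problems(File.binread('C:/Windows/System32/drivers/etc/hosts'))
```

An empty result means none of these particular issues are present; it does not rule out other formatting problems.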

Information needed with Bash/Unix fundamentals [closed]

I'm struggling to understand how Bash works (I'm using Mac OS X Lion).
I use the terminal for things like Git Version Control and SSH'ing onto our servers and doing basic interactions like that. But I don't really understand Bash scripts and the whole unix set-up past that.
So when I need to install software and it asks me to set environment variables (and PATH variables like $PATH e.g. export PATH=/usr/local/bin) or add paths to a file like /usr/local/bin/:usr/bin/:$PATH - I just have no idea of what I'm doing or more importantly "why" - it is just really confusing to me.
For example, why is there a /usr/local/bin/ and a /usr/bin/ (one local and one not?) and why does some software get installed in one and not the other?
And what about files like .bashrc, .profile and .bash_profile? I understand that .bashrc is executed when a shell starts up, and that it sets paths and application settings. But why do I have neither .profile nor .bash_profile on my work computer, while on my home laptop I have .bash_profile, and in some articles people are told to set up a .profile if it doesn't exist? Why isn't there just one file for the shell to look in?
I've got NodeJs installed on my laptop at home and that has a path set-up under .bash_profile. I've recently tried installing rvm so I can try out some Ruby programming (I needed rvm so I could upgrade to the latest version of Ruby) but that has settings inside .bashrc such as PATH=$PATH:$HOME/.rvm/bin # Add RVM to PATH for scripting.
Sorry if I'm just repeating myself, but it seems like there just aren't any good articles about this sort of stuff. Articles are either non-existent OR they are over-kill so you never really understand the bits you're interested in (i.e. I don't want to know everything about UNIX just enough to understand these common items that seem to crop up a lot).
Again, this is a bit of a strange question because there isn't a specific thing I want to know, just the common stuff that crops us when you need to install software via the Terminal and you're asked to do things like setting paths and variables and choosing locations of where to install stuff (which bin folder to use) and stuff like that, so a general overview of all this would be amazing!
Any help I can get understanding how the above items work and why would be great!
Thanks.
Your question is rather general, so the best I can do is point you to definitive resources on the topic (which may or may not satisfy you).
1: The TLDP book Bash Guide for Beginners, especially Chapter 3 on The Bash environment which talks about PATH and the bash configuration files you mentioned.
2: The Filesystem Hierarchy Standard which basically sets out requirements for how a UNIX(like) Operating System's filesystem should be laid out. The section on /usr goes into considerable detail.
And in case those links go down in the future, here is the gist of what they say about your specific questions:
1: PATH is basically an environment variable, which contains a ':' separated list of directories. When you type a command in Bash, Bash will go through the directories (in the order they are listed) listed in PATH to search for an executable file corresponding to the command. You can see the current contents of PATH by executing:
echo $PATH
in your terminal.
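The lookup Bash performs can be sketched in a few lines of Ruby (a simplification: real Bash also consults aliases, functions, builtins, and a command hash table first):

```ruby
# Emulate the PATH search: return the full path of the first executable
# file named `cmd` found in `path_dirs`, scanning in order, or nil.
def which(cmd, path_dirs = ENV['PATH'].split(':'))
  path_dirs
    .map  { |dir| File.join(dir, cmd) }
    .find { |candidate| File.file?(candidate) && File.executable?(candidate) }
end

# Order matters: if /usr/local/bin precedes /usr/bin in PATH, a locally
# installed tool shadows the system copy of the same name.
```

This is why prepending a directory with export PATH=/some/dir:$PATH makes its programs win over same-named system ones.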
2: /usr contains files/packages installed by your distribution. In my case [I use 'Archlinux'], this means packages which get installed when I install Archlinux, or which I can choose to install via the official package manager for Archlinux. In your case, I guess this means stuff that came along with Mac OS X, officially packaged by Apple.
/usr/local is where things get installed when I locally install packages [bypassing the package management system]. e.g. if I want the latest copy of GCC, I download the sources, build it for myself, and then when I execute 'make install' it goes into /usr/local. But the 'official' copy of GCC that comes with Archlinux goes into /usr. And when that official copy gets updated, my own copy in /usr/local is untouched.
So on a freshly installed system (e.g. a spanking new MacBook), /usr/local should be empty, because the local administrator (you) has installed nothing yet.

How do I make TortoiseSVN ignore empty directories that have been removed from the repo? [closed]

I've got some directories that have been moved or renamed. The Linux command line SVN client ignores these directories. The TortoiseSVN plugin for Explorer shows them. If I delete them and update, they come back.
All of the file movement and deletion has been done using the Linux SVN CLI tools. When doing an 'svn update' or even a fresh 'svn co' on a Linux system, these empty directories are not shown.
When doing a fresh checkout using TortoiseSVN, the empty directories are created, even though they don't exist in the HEAD revision anymore.
How can I make them go away?
Sounds like a client problem. Unless I'm misreading you, your SVN manipulations were correctly done.
Several options:
your client is set to fetch the wrong revision
your client is somehow hitting a cache of some sort (are you running SVN over port 80?)
someone has something fancy set up on the server, like two repos mirroring each other, but badly.
Only the first one seems likely to me. What I'd do is figure out what URL the Linux side is using, do a fresh checkout on the Windows side from that URL into a new location, and specifically check that the revision is set to HEAD. If that doesn't work, I don't know what the issue could be. If it does, it narrows the problem down a lot.
P.S. Just had a thought: failing the above, try a revert to the HEAD revision. I don't think it will work, but...
I ended up deleting the directories in the Windows working copy and now the directories are gone. No idea why that was necessary.
Closing question...
