How secure is Vagrant/Puppet/Puphpet? [closed]

I am sorry if my question is stupid, but I think it is better to be safe than sorry. I am just a beginner when it comes to server configurations and DevOps.
I am checking server configuration management tools like Vagrant/Puppet/Puphpet. They look like extremely powerful tools, but I am worried about the security of using them in the production environment.
For example, when deploying to AWS, we need to specify the AWS access credentials (key and secret, and the key pair). If using Puphpet, you actually need to insert them into the website to create the script file. I downloaded the script as is, and replaced the credentials in the code, but still I wonder how secure it is to trust these external tools (vagrant/puppet) to manage configurations on the server.
Am I just being paranoid, or is this a possible security risk?

Creator of PuPHPet here. Your configs are not saved on the server; everything is deleted.
I suggest you leave the entry blank in the GUI and manually type it into the YAML file afterward.
PuPHPet's source code is open source, and you are more than welcome to go through all the Puppet modules included in the zip file.

Vagrant, Puppet and PuPHPet are all different from each other.
1. Vagrant helps you spawn VMs with pre-defined/custom boxes within seconds. The configuration from the "Vagrantfile" is applied to the boxes. You can bring up a server and apply your Puppet code through it.
2. PuPHPet does the same thing but has a nice GUI and a higher level of abstraction compared to Vagrant. It has various options to choose from when it comes to the kind of box you want.
3. Puppet is a configuration management tool with a descriptive language; you write a module and apply it to your server to configure it.
Now coming to security: if you have keys/passwords in your manifests, I would not suggest using an online tool. But you can install Vagrant on your local machine and use it. If your Puppet code is internal to the DevOps team you work for, it's pretty safe to have passwords in it.
NOTE: NEVER SAVE PASSWORDS IN CODE.
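One way to follow that advice with Vagrant's AWS provider is to read the credentials from environment variables, so they never live in the Vagrantfile or the generated YAML. A minimal sketch, assuming the vagrant-aws plugin is installed; the box name, key pair and variable names are illustrative:

    # Vagrantfile (sketch): credentials come from the shell environment,
    # so nothing secret is committed to version control.
    Vagrant.configure("2") do |config|
      config.vm.box = "dummy"  # placeholder box commonly used with vagrant-aws

      config.vm.provider :aws do |aws, override|
        aws.access_key_id     = ENV["AWS_ACCESS_KEY_ID"]      # read at runtime
        aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]  # never hard-coded
        aws.keypair_name      = ENV["AWS_KEYPAIR_NAME"]       # your EC2 key pair name

        override.ssh.private_key_path = ENV["AWS_PRIVATE_KEY_PATH"]
      end
    end

Export the variables in your shell before running "vagrant up", and the files you commit or upload never contain the secrets.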

Related

Best practices for collaborative web development in Mac environment? [closed]

At my current job I've been the only web developer for almost 3 years, so for that whole time I just used MAMP on my own local machine. A second developer will be starting pretty soon, and I need to figure out the best way to set everything up so we can both work on the same project.
All of the machines at our office are connected to a Mac OS X server. Is it possible to host our web projects on the server and have them be accessible via a short url such as exampleproject.dev?
The reason I want to have our web projects reside on our server is because it is always on. So if I'm not in the office I want our other developer to be able to access our web projects.
Also, I'd like to get some kind of version control software set up. Any recommendations? Thanks!!
This is a hazard I've seen two companies fall into, and then emerge out of. Your other developer should also work locally (optionally against a common dev database though).
I would recommend putting Subversion or Git on the server, just from personal preference; both work well with local environment setups, and you have to push your changes into the repository before others will see them.
You can then set up an automated build system that pushes your code from your source control to the server for common viewing (if necessary).
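If you go the Git route, a bare repository on the always-on office server can act as the shared hub, and a post-receive hook can do that automated "push for common viewing" step. A rough sketch, assuming SSH access to the server; the hostname and paths are placeholders:

    # On the Mac server: create a bare repository to act as the central hub
    ssh officeserver 'git init --bare /Users/Shared/repos/exampleproject.git'

    # On each developer's machine: clone it and work locally
    git clone officeserver:/Users/Shared/repos/exampleproject.git
    cd exampleproject
    # ...edit locally, then...
    git add -A
    git commit -m "Describe the change"
    git push origin master

    # In the bare repo's hooks/post-receive on the server, check the latest
    # code out into the web root so everyone can view it:
    #   #!/bin/sh
    #   GIT_WORK_TREE=/Library/WebServer/Documents/exampleproject git checkout -f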
From personal experience I would not recommend everyone developing against the same code on the same machine. People will break things and temporarily halt development for others, and you'll get accidental code overwrites. It can't help but get ugly occasionally.
Definitely an advocate of: Work locally, commit often, but only once you're sure it's not going to blatantly break the site.
But if you feel you still want the single dev environment, at least make sure everyone is using an editor that prompts you (or updates) when someone changes the same file. As an example, I use Sublime.
Oh, there are many ways. You could check out the thoughtbot dotfiles on GitHub (https://github.com/thoughtbot/dotfiles), which are meant for just that, or you could investigate different setups for tmux and session sharing. I'd recommend you get used to Git and Homebrew, because they're at the base of every common shared environment on the Mac.

Issue status (specified, coded, tested, ...) and percentage [closed]

After struggling a bit to install it on shared hosting, I finally discovered the power of Redmine. I must say I'm pretty impressed. However I'm a beginner, and stumbling through the manual and the forums didn't help much with what I'm trying to do, so I ask here. Please forgive me if it's already addressed somewhere.
We are doing a software development project, and we are trying to get organized. At the moment, we entered all the development tasks as features into Redmine. However we would like to enforce that, for a task to be completed, it must have been
- specified
- coded
- tested
- some other project specific stuff not relevant here
I can't use issue statuses, because that would impose a particular order (like testing after coding, but sometimes we want to be able to write the test before the code, and sometimes not).
I don't really know how to achieve that :-(
What I have tried so far:
- For every task, create subtasks for coding, testing, and so on. It works well, but it's very tedious, and it makes the number of issues a bit overwhelming.
- Use custom boolean fields. It's OK, but:
  - I can't make a search filter like "find the tasks which are NOT
  - I can't set up the completion percentage to depend on the subtasks (e.g. prevent someone from marking a task as 100% complete if it hasn't been tested)
I'd like to get some insights from experienced Redmine users, about how to achieve this. I must admit I'm a beginner in both Redmine and project management, so I'm really trying hard to find the best way to deal with that.
Any help appreciated
Best regards
From what I understand, I think you should still consider the status field. In Redmine you can define which status changes are permitted (defining allowed destinations for a specific status).
For example, you could define the following statuses:
New
Specified
Coded
Tested
Deployed
Closed
From "New" you would define that a specific role can change only to "Specified" (perhaps enforcing not to implement without specifying). From "Specified" you could allow changes to "Coded" and "Tested". From "Coded" or "Tested" to "Deployed". And only from "Deployed" to "Closed".
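Written out as a transition table, that workflow looks roughly like this (not Redmine syntax, just a sketch of the allowed moves you would tick in the workflow grid):

    New       -> Specified
    Specified -> Coded, Tested
    Coded     -> Deployed
    Tested    -> Deployed
    Deployed  -> Closed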
You could also define a specific workflow for a role and a specific kind of issue (what Redmine calls tracker), but define a different workflow for a different tracker.
It's very powerful, but you'll have to tailor it to your specific needs.
To access the workflows you need to go to the Admin page.
HTH-

Suggestions for porting a Linux application to Windows [closed]

I'd like to port an application written under linux to windows. Currently I'm using Cygwin but I'm curious if there are any other options that don't force me to release my source since I'm not in the position to do so right now. Are there any other options short of having to completely re-write it or buying a license?
MinGW doesn't have as many licensing restrictions as Cygwin, but it might require more effort to get your code to run under it.
Please give us more information about your application. Simple commandline utility? Uses KDE libraries? Uses linux kernel extensions? etc etc. For internal use only? For use on corporate desktops? For use by end-users? These all will change our suggestions.
I'll also suggest using MinGW.
The basic process of porting:
1. Install MinGW and MSYS.
2. Run your Makefile.
3. Likely you'll get an error; fix it (either by changing code, as in the sketch below, or by commenting it out).
4. Recompile via Makefile.
5. Repeat 2-4 until you compile with no errors.
Then test your application, and track down any bugs you might have introduced.
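Step 3 ("fix it by changing code") often comes down to wrapping a platform-specific call. A small illustrative example of that kind of change; the wrapper name is made up:

    #include <stdio.h>

    #ifdef _WIN32
    #include <windows.h>   /* Sleep() takes milliseconds */
    #else
    #include <unistd.h>    /* sleep() takes seconds */
    #endif

    /* One portable call, two platform-specific implementations. */
    static void sleep_seconds(unsigned int seconds)
    {
    #ifdef _WIN32
        Sleep(seconds * 1000);
    #else
        sleep(seconds);
    #endif
    }

    int main(void)
    {
        printf("waiting one second...\n");
        sleep_seconds(1);
        printf("done\n");
        return 0;
    }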
I am by no means an expert. That said, if you're not going to release your application, I believe you can use it with GPL'd stuff internally. It is only when you start distributing your binaries that you are bound by the GPL to release your source along with the binary, or on user request. E.g. a company could modify some GPL'd source code and use it internally so long as they don't distribute the code or application outside of the company.
So it depends on what you plan to do with your app.
You might be able to use cygwin for now just to get it to a working point, and then gradually replace pieces of the app with native windows code until you've completely de-cygwin'd it.
I know that's probably not what you're looking for, but I thought I'd throw that out there. They should have a couple of law/licensing classes shoved into CS degrees these days.
There is also a commercial license for Cygwin.
http://www.redhat.com/services/custom/cygwin/

Where can I find a template for documentation about server-side installation of software? [closed]

I'm looking for a good template on server-side installation of software for a project I'm working on.
The client-side is pretty straight-forward. The server-side installation is a little trickier. It is made up of several pieces (services, database connections, dependencies, ports that need to be unblocked, etc.). During a recent test, several undocumented pieces were discovered. Now I need to create installation documentation for our disaster-recovery plans and ways to test the installation without necessarily having a "full-up" system to test on.
I'd really like a suggestion of where I can get a template or a really good example of such a document. I'd like it to be something that an operator could read and comprehend in the heat of a recovery.
[EDIT]
Our current documentation comes mainly from the questions our administrators have had during off-site tests. As new code is written, I'd like to make sure the documentation is written ahead of time. I've been collecting VMWare images to start testing, but was looking for some good examples. It's a Windows Server shop (2000 & 2003). Word templates would be great, but if I could see good documentation, I could create the templates. Any suggestions about what should be tested would be great as well.
[2nd EDIT]
I've gotten several good ideas from the answers posted. After changing my Google search, I came up with some good starting points. They're not perfect, but they are a good start.
Microsoft Exchange - http://technet.microsoft.com/en-us/library/bb125074(EXCHG.65).aspx
iPhone - http://manuals.info.apple.com/en_US/Enterprise_Deployment_Guide.pdf
http://www.novell.com/documentation/gwgateways/gw7_exch/index.html?page=/documentation/gwgateways/gw7_exch/data/ab32nt1.html
http://cregan.wordpress.com/2006/06/22/exchange-2003-step-by-step-installation-instructions/
http://technet.microsoft.com/en-us/magazine/cc160942.aspx
Covers planning in the design stage well - http://www.onlamp.com/pub/a/onlamp/2004/04/08/disaster_recovery.html?page=2
[Edit 10/29/2008]
THIS is the type of sample I was looking for. It doesn't have a lot of garbage, but seems to explain enough of the why along with the how: http://wiki.alfresco.com/wiki/Installing_Labs_3_Nile
The most complete method that we've come up with for creating our DR documentation involves going through a full cycle (or two) of installation, and documenting each step along the way.
I realize this can be a bit difficult if you don't have a test (or replacement) system to use to create your documentation - but it's worth lobbying for running through this cycle at least once.
(I recommend twice, the second being done by someone not involved with the project - this is how you test the documentation for future admins, who may not be as experienced with the process.)
A side effect of the above is that your documentation grows fairly large - last I had to do it, I believe the completed installation manual for our database servers was 30+ pages.
What should be tested? Well, in the case of a web site, "can you get to the page?" Include a URL as a starting point and let the admin click through to a certain point. It is not necessary for the admin to go through the whole QA cycle, just a confirmation that what you meant to be deployed is really what got deployed.
Other ideas
Also, we (my team at my last job) had QA test the deployment. As a QA person should be, he was not intimate with the details and as he deployed to QA, we were able to get feedback on what went wrong.
Another thing that is useful is sitting down with the admin(s) before the deployment. Go over the instructions and make sure they understand them the same way you do.
Template? Just make sections that have fields for data such as the URLs to DEV, QA, and PROD. When you write out the instructions you can refer to those. Just make it clear what is being deployed.
Depending on the admins, automation is helpful. I've had windows admins that want a Word doc with step by step instructions and other admins that wanted a script.
However, some helpful things to include, probably as sections:
- Database changes
  - scripts to run
  - verification that they worked
- Configuration changes
  - what the changes are
  - where a version of the new file lives (in my case they diffed the two, which helped reduce errors concerning production-specific values)
- General verification
  - what should be different from the user perspective (feature changes)
For web farm deployments, it might be helpful to have a coordination document concerning how the servers need to be pulled in and out of pool.
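Pulling those sections together, a bare-bones skeleton for such a document might look like this (plain text; the headings, server versions and environment names are placeholders to adapt to your own Word template):

    1. Overview
       - Purpose of the system, owning team, emergency contacts
    2. Prerequisites
       - Hardware/VM specs, OS version (e.g. Windows Server 2000/2003)
       - Accounts, permissions, and ports that must be unblocked
    3. Installation steps
       - Services to install, in order, with exact installers/commands
       - Database changes: scripts to run, and how to verify they worked
    4. Configuration changes
       - What the changes are, and where the new version of each file lives
       - Environment-specific values (DEV / QA / PROD URLs, connection strings)
    5. Verification
       - Starting URL and the click-through path an operator should follow
       - What should look different from the user perspective (feature changes)
    6. Rollback / recovery notes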

How do you keep a personal wiki (TiddlyWiki) current and in sync in multiple locations? [closed]

If one were to use TiddlyWiki as a personal database for notes and code snippets, how would you go about keeping it in sync between multiple machines? Would SVN/CVS etc. work? How would you handle merges?
One option is the up-and-comer Dropbox, a free file-sharing service that gives you 2GB free and no limit on the number of computers you share with.
Define a shared folder, put your tiddlywiki files in there, and then point the local editing to the shared drive. Any changes are automatically reflected.
Note: I have no connections to DropBox other than the fact that I've been reading lots about it, and am trialing it for my personal use.
Use TiddlySpot; it's online all the time and private.
Tiddlywiki is well suited for version control (since it is a single text file).
Just put it on a personal SVN or Git repository accessible from the web, and you can keep it in sync with many places (office, home, laptop, etc.).
I use this method, and it works pretty well. You can even have several versions of your notes and resolve conflicts using diff tools. And obviously with revision control, you can work "offline" and sync later.
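As a concrete sketch of that workflow, assuming a Git repository you can reach from each machine (the URL and file name below are placeholders):

    # One-time setup on each machine
    git clone git@example.com:notes/tiddlywiki.git
    cd tiddlywiki            # contains the single wiki.html file

    # Before editing: pull the latest version
    git pull

    # After editing and saving from the browser:
    git add wiki.html
    git commit -m "Update notes"
    git push

    # If two machines changed the same tiddler, resolve the conflict in the
    # HTML file with your usual diff/merge tool, then commit the result.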
I just created a new Tiddlywiki at TiddlySpot. It allows you to keep a local copy of the Tiddlywiki and also sync it up with the server.
These options are all good, but I would just put it on a USB key.
If you have your own web server (and don't want to use TiddlySpot), try this code to enable saving to your own server.
I have a MonkeyGTD wiki that is on http://TiddlySpot.com. I have a local copy of it on my work PC and do my work during the day on it, and periodically upload to TiddlySpot during the day and at the end of the day. If I need to access it or update it after work, I will make changes to the online version and then the next morning I do an Import back into my local file.
It's true that if I forget to do an update or do them in the wrong order I will lose information, but it's "good enough".
There is probably a way to use the Sync functionality to prevent this, but I haven't researched this option yet.
If you might want to edit your wiki on several computers at the same time, you would definitely want a server-based solution that syncs at a finer level than the file. Giewiki (http://giewiki.appspot.com) is a server-based TiddlyWiki solution based on Google's App Engine, which does just that. And unlike any other hosted TiddlyWikis that I know of, you can create several pages in any hierarchy and navigate them through an auto-generated sitemap. You can try it out by creating a subdomain site at giewiki.appspot.com, or you can download the source and install it into a free appspot site of your own. And you can make it as personal or public as you like.
Try FolderShare.
I store my TiddlyWiki files on a USB flash drive that I keep with me no matter what computer I might be using. No need to bother synchronizing across other computers. It gets backed up regularly when I back up the flash drive itself on my primary workstation.
Yet another option: Use a different personal wiki called Luminotes, which you can either access online from different computers or download and run on your own computer (yes, even a USB drive). Luminotes has definitely got some similarities to TiddlyWiki, but in many ways it's simpler to learn and use.
You mentioned SVN, but if you don't mind using git, Github's Gollum is a great solution. Edit locally or from the github remote repo.
Why not just set up something like DokuWiki on a web server? You do have your own web server, right? You can get a virtual hosted solution for $19/mo these days.
