Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I'm developing a service that talks to other services. To test these interactions, I have a fake HTTP server. I'm using node.js and testing via HTTP requests. The tests run external to the process, so I cannot (and don't want to) mock the request/response.
So far, I have an environment variable that allows me to switch hosts within the service itself. However, I cannot base the fake request/response on the hostname.
I also run a development version of the service that interacts with the real external services. I could programmatically change /etc/hosts during the test run, as I probably won't be "using" the development service while running tests, but I'd rather keep the purity of the test sandbox.
Ideally, I'd like to have a version of /etc/hosts apply only to the process. This would allow the fake http server to also glean the intended host of the request.
I'm also open to different approaches to achieving the test hostname sandbox.
/etc/hosts is consulted, among other things, by gethostbyname(), the C library function that actually performs name resolution.
There's no easy way to do it the way you want it.
How about a local DNS server with fake names/addresses?
This will work for a lot of implementations but it may not be feasible for you. You have a UNIX/OSX tag, so:
You have to put your process in what amounts to a chroot jail. chroot changes the root / to some other location, e.g. /localroot.
You can then create your own version of hosts under /localroot/etc/hosts; the chrooted process sees it as /etc/hosts.
There is a lot of information on how to set one up on the web. And any user account you create is "locked" in there.
I could not find basic OS X chroot information; the link below is more advanced and is aimed primarily at sftp users:
http://www.macresearch.org/restricted-sftp-mac-os-x-leopard
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I am in the process of developing a CI/CD tool that would run on Kubernetes. The application would be responsible for creating a k8s Job object, which would be treated as a slave in order to run a pipeline.
The image the slave runs is entirely up to the user, so I do not have any control over it, except for the fact that it would be running in the same local network as the CI/CD application.
My question is: in this scenario, how can I make communication possible between the CI/CD tool and the slave?
To add more context: I want to create something similar to Jenkins. Jenkins, together with the Kubernetes plugin, runs on Kubernetes and creates pods which are treated as slaves (agents) in order to run a pipeline. The image run in the slave is entirely up to the user. The slaves have a JNLP agent as a side-car container which is used to establish the connection. How can I achieve the same architecture in golang or python?
What have I done so far?
I have tried researching this and found that Jenkins uses sockets to establish the connection. But in order to use sockets I have to have a socket on both sides: on the server side as well as on the client side. As far as I know, Jenkins uses the image that I, as the user, gave it to use in the slave, and that image does NOT run a server-side socket. So how is it able to establish the connection?
Since Kubernetes is itself written in Go, I think this could be achieved easily with a Golang solution. Here is a list of things worth some research time that I can think of off the top of my head, assuming that you are running your solution as a Kubernetes service:
Research about Kubernetes operators. An operator will help you extend Kubernetes functionality easily. https://github.com/operator-framework/getting-started
Raft. Raft is a consensus algorithm designed to be easily understandable. This can be used to achieve things like leader election in case you come across the need to implement one. https://raft.github.io/
Golang has native SSH libraries, so your idea of using SSH is very feasible. However, I think there could be alternatives, like using RPC for communication between the master and slaves, as I can imagine it could be a nightmare to manage the certificates needed to authenticate the master against the slaves.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I'm a Go newbie and I'm stuck trying to figure out how to deploy my apps on a dedicated server.
When I worked with PHP I used the standard setup:
But I'm confused as to how I'm supposed to deploy my Go apps.
I know I can run a single app on port :80 but how do I run multiple apps?
Is this the general idea:
Or should I be using something like this:
Can someone clarify the way most Go developers deploy their apps?
Thanks!
I'd highly recommend going with Caddy. You can set up your server with all the apps on different ports (especially higher ports, i.e. 1024 and up, so they don't need to run as root), and then use proxy directives to forward traffic to each app. As a bonus, you also get free Let's Encrypt certificate support!
See https://caddyserver.com/docs/proxy for more on the proxy directive.
If you need multiple apps to serve HTTP requests, you should definitely consider using Nginx as a reverse proxy. You can forward all requests on a given route, say /api to one service and /ui to a second service, provided they are bound to different ports.
You might want to look at Traefik (https://traefik.io/), a Go-based reverse proxy.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I'm going up to the mountains with no internet connection to present something. I'd like to be able to use interactive examples since I'll be presenting on a certain website.
So is there a way I can set up a proxy caching server or something to cache every call made in order to have a fully cached website experience with no internet connection?
I've looked at http://squidman.net/ but I'm not sure how it works or how to use it.
You might want to try something like this. It might be a lot more work than the steps below, but this could be a good starting point.
Create a local proxy server backed by a cache such as memcache or Redis.
Update the browser proxy settings to point at your proxy server.
Have the local server look up each requested URL in Redis.
If found, return the data stored in Redis.
Otherwise, make a web request and store the response in Redis.
You'll have to do this manually for the pages that you want while you have the internet connection. Once you've got all the data you need, you can work without the internet connection too.
If the pages are essentially static, then you could use something like HTTrack (http://www.httrack.com/) to make an offline copy.
If there's anything requiring server side interaction or dynamic generation of pages you're most likely going to need to run your own local instance of the server.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
See Server Fault: How do I configure proxy settings for LOCAL SYSTEM?
I have a Windows service that needs to start up IE with certain proxy settings (e.g. host name and port). If the service is configured to run as some normal user (e.g. me), I can ensure the required IE proxy configuration by programmatically setting the following values in the "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" registry key:
ProxyServer = myserver:9999
ProxyEnable = 0x1
ProxyOverride = (delete)
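For a normal user account, the three values above can also be captured as a .reg file, which is handy for scripted setup; a sketch using the example server and port from above (the trailing "=-" deletes the ProxyOverride value):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
"ProxyServer"="myserver:9999"
"ProxyEnable"=dword:00000001
"ProxyOverride"=-
```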
However if the service is configured to log on using the local system account, setting those registry values seems to have no effect on IE.
Is there a programmatic way to configure IE proxying for the local system account? Ideally I'd like a method that works both for that account and for normal users, to keep my program simple.
In case you're wondering why a service needs to start a browser, the program being run as a service is the Hudson continuous integration server, which in turn is configured to run some browser-based automated acceptance tests of a web application (using Sahi).
STOP PRESS: Since adding the bounty, I've discovered this is an exact duplicate of https://serverfault.com/questions/34940/how-do-i-configure-proxy-settings-for-local-system, which has an accepted answer, so a bounty is no longer applicable. Can I (or an admin) delete the bounty and get my rep back? Also, it doesn't seem possible to close a StackOverflow question as being a duplicate of a ServerFault question.
This is an exact duplicate of https://serverfault.com/questions/34940/how-do-i-configure-proxy-settings-for-local-system, which has an accepted answer.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I'm currently looking at hosting solutions for my Ruby on Rails SaaS web application, and the biggest issue I see is that if I go with something like Amazon EC2, then I still need to configure my own server and install what I need (e.g. database, programming framework, application server, etc.). Each one of these is an opportunity for something to go wrong. I also have to worry about how my data is getting backed up, how frequently, and a host of other "low-level" details. Being a startup, I don't have the resources for a sysadmin, so I would have to play one myself. I currently do some work for a startup, and my boss is always talking about how great EC2 is because it lets us "get out of the hardware business" - in reality, though, it doesn't feel that way, because we still have to set up the server instances, still have to install software, still have to configure the software properly. It feels like we're still in the hardware business, just that we don't really own the server we're using.
In contrast is a service like Heroku (which actually uses EC2 underneath, I believe) but basically takes care of all the low-level details. They do automatic backups for me, I just specify the frequency. They have a server configuration already set up. They have ways to manage it and keep it running so I don't have to monitor traffic. I can focus on my application and just deploy the code, and let them worry about administration and making sure the database is properly configured with the web server and the right folders have permissions.
The problem with Heroku is obviously that I don't have control over these things if I want to modify them. Heroku uses nginx as its web server; if I want to use Phusion Passenger on Apache to stay on the "cutting edge" of RoR development, I'm SOL. If I need to make a quick patch in production (the root of all evil, I know, but it happens sometimes), I don't have SSH access to Heroku's servers. If I need to set up a new database user to allow somebody else to remotely access data, I don't think I can do this. And worst of all, if something does happen with the server, I have no way of doing anything except wait for Heroku to fix it.
Basically at what point, if ever, can we as developers focus on our code and application and not have to play sysadmin with server configuration? As a startup with limited resources and limited knowledge of configuring servers (enough to get by), would I be better off sacrificing some configurability for the ability to let somebody else worry about the hardware/software end of things?
Make the server config part of your project and use scripts to set up and tear down your servers. Keep everything under VCS and use the scripts routinely to recreate your development setup.
https://stackoverflow.com/questions/162144/what-is-a-good-ruby-on-rails-hosting-service/265646#265646
I'm not interested in learning how to configure Apache, ModRails, Phusion, Mongrel, Thin, MySQL, and whatever. With Heroku I don't worry. nginx is the web server, and PostgreSQL is the database. They have settled on Ruby/Rack for all new apps. Frameworks that run on Rack include Rails, Merb, and Sinatra. Limited choices.