Linking domain to vagrant box - vagrant

This may be a simple question, but I've done a fair bit of googling/watching tutorial videos and I have no idea why this is failing.
Background
I have a codebase that I'm going to be working on with the rest of my dev team. Everyone is using a different setup. They have dev servers to roll out to, but those aren't particularly well maintained and have slightly different installs. The next project I start also requires a PHP upgrade, so I thought I'd use Vagrant to provide a uniform testing environment (amongst other reasons).
Using puphpet seems logical and has served me well before.
The problem
While setting up the box I configured puphpet as expected. The command vagrant up also works as expected. I can SSH in as required.
I can also access the server via its configured IP (192.168.56.101). If I do, I get the standard message:
Congratulations! You are pretty awesome.
[blah blah]
However, if you are seeing this page, it means you are using IP Address, not virtual host!
So I then opened my (Windows) hosts file and added the following:
192.168.56.101 iccell.local
When I then navigate to http://iccell.local I get caught by some search engine and end up at http://searchguide.level3.com/search/?q=http%3A//iccell.local/&r=&t=0
The hosts entry doesn't seem to be taking effect, but I have no idea why.
Can anybody suggest how to fix it, explain why this would happen, or point me in the right direction?
Thanks

As I posted this, I looked for other possible explanations.
I found a very informative post on SO's sister site, Server Fault:
https://serverfault.com/questions/452268/hosts-file-ignored-how-to-troubleshoot
In my case, for some reason, my hosts file had been saved in a non-standard encoding. I fixed the encoding, replaced stray tabs/spaces, and added a trailing blank line. I then flushed the DNS cache and things seemed to be fixed.
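For anyone hitting the same thing, both suspects (an encoding the resolver doesn't expect, and a malformed entry) can be sanity-checked from a Unix-like shell such as Git Bash on Windows. This is a minimal sketch run against a temp copy of the entry from the question; on a real machine you'd point it at C:\Windows\System32\drivers\etc\hosts instead:

```shell
# Sketch: detect a UTF-8 BOM (bytes EF BB BF) and validate a hosts entry.
# Uses a temp copy so nothing real is touched; the entry mirrors the question.
HOSTS_COPY=$(mktemp)
printf '192.168.56.101\ticcell.local\n' > "$HOSTS_COPY"

# A file saved as "UTF-8 with BOM" (or UTF-16) starts with marker bytes that
# can make the OS silently ignore the whole file.
if head -c 3 "$HOSTS_COPY" | od -An -tx1 | grep -q 'ef bb bf'; then
  echo "BOM detected: re-save the file as plain text without BOM"
else
  echo "no BOM"
fi

# A valid entry is: IPv4 address, whitespace, hostname.
grep -cE '^192\.168\.56\.101[[:space:]]+iccell\.local$' "$HOSTS_COPY"
```

After fixing the file itself, remember to flush the cache (ipconfig /flushdns on Windows), as the linked answer describes.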
I'm answering here as a tool to point to the above answer. It's in depth and very informative.

Related

GitHub/Git Issues with overwrites and dup code in multi-serve port structure

This is sort of a top-level, full-stack question for someone with some top-tier Git knowledge.
In our current project environment we have a single dev server with a typical LAMP setup, building with Laravel. To handle four devs working off a single server, I set up multi-site serving with Apache, giving each dev their own port. Each port points to a folder that each dev works from, so they all have their own codebase under one URL with a port suffix.
Folder structure
/var/www/master_dev
/var/www/dev_1
/var/www/dev_2
etc.
The scenario basically goes: each dev does their assigned work on their port; when it's complete they create a branch and push; we review it, merge it to the standard port 80, and test for bugs.
We're currently on Git version 2.35.3, but for some unknown reason, when we merge, there is sometimes duplicated code, and sometimes old versions find their way back in too.
Now, some devs auto-format their HTML. Some space things differently. Some don't format anything, and it's horrible. But does that affect merging in any way, shape or form?
Could it be Apache serving on different ports that interferes when pulling/pushing? Each dev does a fresh pull from master every morning (or should).
When Git pulls, is it intelligent enough to stay within the working directory it was invoked from?
Is it possible that this is just human error?
A lot of questions, I know, but I'm starting to lose the will to live.
Disclaimer: yes, I know of other approaches, e.g. local environments, containers, etc. I'm working on it; I'm coming to this party late.
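To the formatting question specifically: yes, whitespace-only reformatting can absolutely cause merge conflicts, because Git merges textually, line by line, and an auto-formatter's re-indent counts as a change to every line it touches. A throwaway-repo sketch (file names and branch names here are made up) demonstrates the effect and one mitigation, the -Xignore-all-space strategy option:

```shell
# Sketch: a whitespace-only change on one branch conflicting with a real
# change on another, in a disposable repo.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email dev@example.com && git config user.name dev

printf '<div>\n  <p>hello</p>\n</div>\n' > index.html
git add index.html && git commit -qm base

git checkout -qb reformat
printf '<div>\n    <p>hello</p>\n</div>\n' > index.html      # re-indent only
git commit -qam reformat

git checkout -q -                                            # back to the base branch
printf '<div>\n  <p>hello world</p>\n</div>\n' > index.html  # real change
git commit -qam content

# A plain merge conflicts on the re-indented line...
git merge reformat && echo "clean merge" || { echo "conflict"; git merge --abort; }
# ...but ignoring whitespace lets the real change through:
git merge -Xignore-all-space reformat
grep 'hello world' index.html
```

This explains conflicts, not silently duplicated or resurrected code; the latter usually points at how the conflicts were then resolved by hand (or at merging stale branches), which is worth auditing with git log --merges. Agreeing on one formatter config committed to the repo removes the problem at the source.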

Migrating Prestashop to a new server, caching and Apollo Pagebuilder

Currently I am a bit lost, or maybe I just have a mental block.
The subject of my question is a Prestashop 1.7.3.3 shop currently on shared hosting. Due to slow performance and a long TTFB I am moving it to a VPS running Plesk, hosted on DigitalOcean.
Now comes the part where I am a bit lost: I copied the files via wget, dumped the database and applied permissions correctly (to my knowledge). The shop comes up on the new Plesk host under the new domain without issues.
As soon as I enable MySQL caching I can still edit the pages with Apollo Pagebuilder, but no longer save them. At least the changes don't show up in the front office. If I switch back to the file cache, changes are propagated as intended, but the modules page in the back office no longer works (error 500, which can be fixed by removing /app/cache/prod and /app/cache/dev).
So, to summarize my issue: If I enable filecache, everything except the module-page works, if I enable MySQL-cache, everything except Apollo Pagebuilder-propagation works.
What I already tried:
I have reinstalled Apollo Pagebuilder, but this pretty much completely breaks my front office (meaning I'd have to rebuild everything from scratch, as the current state doesn't seem to be read properly).
I have also exported, reimported and "updated and fixed" Apollo, without success :(
Only thing that comes to my mind as a fix would be sacrificing something to the gods, but I'd rather not do that.
Environment:
Ubuntu 16.04 LTS; Plesk Onyx 17.8.11; Prestashop 1.7.3.3; PHP 7.1.26
If no one had this problem before, maybe someone has an idea on what to delete to just enable the modules in the backoffice. I'd be willing to take MySQL caching as non-available.
Thank you in advance for your help.
OK, I think I found the answer. Since the server was migrated cache and all, the cached database connection was carried over as well. (Fortunately it wasn't able to write to the previous DB.)
So if ever someone faces the same issue:
prestaroot/app/cache/prod/appProdProjectContainer.php stores the connection strings in two places:
once in protected function getDoctrine_Dbal_DefaultConnectionService() (around line 670),
and once around line 5000. Easiest is to just search for your previous connection credentials.
You also need to make sure that prestaroot/app/cache/prod/appParameters.php contains the same valid credentials.
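A quick way to find every stale reference at once is to grep the compiled prod cache for the old credentials. PRESTAROOT and OLD_DB_HOST below are placeholders for your install path and the previous server's database host:

```shell
# Sketch: list every line in the prod cache that still mentions the old DB
# host. Both variables are placeholders; substitute your own values.
PRESTAROOT=/var/www/prestashop
OLD_DB_HOST=old-db.example.com

grep -rn "$OLD_DB_HOST" \
  "$PRESTAROOT/app/cache/prod/appProdProjectContainer.php" \
  "$PRESTAROOT/app/cache/prod/appParameters.php"
```

As noted above, deleting app/cache/prod entirely is the blunter alternative, since the container gets regenerated from the current parameters.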
Hope it will help someone one day.

Inter-Gear Communication for Openshift?

I'm trying to create an app such that gear 2, according to this model, can be accessed by gears 3, 4, …, n when using the --scaling option.
The idea is that this structure is the head of a chain of relays. I'm trying to find where the relevant information lives so that all the following gears have the same behaviour.
I've found no documentation that describes how to reach gear 2 (the primary DNAS) via a URL (internal/external ip:port) or otherwise, so I'm a little lost as to how to let the app scale properly.
I should mention that so far I've only used Bash scripting, but I'm not worried about writing the program in other languages, as long as it follows that structure on OpenShift.
The end goal is hopefully to create a scalable instance of SHOUTcast on OpenShift.
To Be Clear:
I'm developing a cartridge, not using the DIY one; all I understand of OpenShift is in this guide, but of course I'm limited because I'm new.
I'm stuck trying to figure out how to have the cartridge handle additional gears using the first gear as a relay. I am not confused about how OpenShift routes requests externally to the gears and load-balances them. Nor am I lost on how to use port-forwarding to connect to my app; the goal is to design the cartridge so that this wouldn't be a requirement at all, and only external routes would be used.
The problem, as described above, is that additional gears need some extra configuration: they need an available source (what better than the first gear?). In fact, the solution to my issue might be to somehow set up this cartridge to bypass HAProxy with an external route that only goes to the first gear.
GitHub repo for those interested; pass it around, it'll remain public. Currently this works only standalone; scaling it (what I'd like to fix) causes issues. I've been working on this too long by myself, so have at it :)
There's a great KB article that explains how routing works on OpenShift gears: https://help.openshift.com/hc/en-us/articles/203263674-What-external-ports-are-available-on-OpenShift-.
On a scalable application, HAProxy handles all the traffic routing to your gears. The only way to access your gears is through the ports mentioned in the article above. rhc does, however, provide a port-forwarding option that allows you to access things like MySQL directly from your local machine.
Please note: We don't allow arbitrary binding of ports on the externally accessible IP address.
It is possible to bind to the internal IP within the port range 15000-35530. All other ports are reserved for specific processes to avoid conflicts. Since we're binding to the internal IP, you will need to use port forwarding to access it: https://openshift.redhat.com/community/blogs/getting-started-with-port-forwarding-on-openshift
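As a concrete illustration, a cartridge's start script would pick up the gear's internal address from its environment and bind its listener to a port inside that range. The env var name and the relay command below are illustrative placeholders, not guaranteed OpenShift names:

```shell
# Sketch: bind only to the gear's internal IP, on a port in 15000-35530.
# OPENSHIFT_DIY_IP stands in for your cartridge's own *_IP variable; the
# loopback fallback is just for local testing.
BIND_IP=${OPENSHIFT_DIY_IP:-127.0.0.1}
BIND_PORT=25000   # must be inside the allowed 15000-35530 range

# The relay would then be started bound to that address only, e.g.:
#   my-relay --bind "$BIND_IP" --port "$BIND_PORT" &
echo "binding to $BIND_IP:$BIND_PORT"
```

Binding anywhere else (the external IP, or a port outside the range) is rejected, which is why port forwarding is the only direct path in from outside.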

On a Mac, in what other file can hosts be set, other than in /etc/hosts?

I have an annoying problem where one specific URL keeps resolving locally to 127.0.0.1, even though I point it at a different IP address in my /etc/hosts file. I used to use Gas Mask, but found it buggy and removed it; hopefully it cleaned up after itself.
Are there other files that can redirect an IP before the hosts file is consulted? Or is there a way I can follow the exact path a URL request takes?
This isn't a programming question and you might get better answers from the super users and sysadmins who hang out on http://serverfault.com, but enough programmers point things at "localhost" that it might be worthwhile to answer here.
Besides editing the hosts file, you also have to flush the DNS cache, e.g. by typing dscacheutil -flushcache.
More instructions can be found here.
Also, you'll need to restart the mDNSResponder process. Details on that can be found in this related question.
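To the original question of "what other file": besides /etc/hosts, macOS supports per-domain resolver overrides in /etc/resolver/, which is the kind of thing a hosts-manager tool could plausibly leave behind. A read-only sweep might look like this, where HOSTNAME_TO_CHECK is a placeholder for the stubborn URL's host:

```shell
# Sketch (macOS): check files other than /etc/hosts that can affect name
# resolution. This only reads; HOSTNAME_TO_CHECK is a placeholder.
HOSTNAME_TO_CHECK=example.test

for f in /etc/hosts /etc/resolver/*; do
  [ -f "$f" ] && grep -Hn "$HOSTNAME_TO_CHECK" "$f"
done

# Then flush the cache and restart the responder (exact commands vary by
# macOS version):
#   sudo dscacheutil -flushcache
#   sudo killall -HUP mDNSResponder
```

scutil --dns will also print the full list of resolvers currently in effect, which helps confirm nothing unexpected is registered.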

Images not loading on Facebook

I'm usually a great debugger when it comes to helping family members with their computer problems, and I would normally post this type of question elsewhere, but I'm hoping this community can help me get to the bottom of it.
A family member is having problems with certain websites not loading all of their resources, primarily images by the look of it. I have disabled her Symantec protection in case it was scanning or blocking things from loading, and have also uninstalled and disabled startup applications she doesn't need.
One example of a file that is not loading on her system is:
http://static.ak.fbcdn.net/rsrc.php/v1/yp/r/kk8dc2UJYJ4.png
I'm assuming this loads for everyone else here.
Any thoughts would be much appreciated. She also gets a similar issue in IE, Chrome, and Firefox.
The first place I'd look is whether there's a commercial ad-blocker installed; I'd guess it can't be an add-in/extension, since the different browsers have their own settings.
And it may sound silly, but did you check the hosts file (system32/drivers/etc/hosts)? Is it possible static.ak.fbcdn.net is being redirected? You might want to open the command prompt, run ping static.ak.fbcdn.net, and confirm her computer's exact behaviour.
In my case FB redirects me to a749.g.akamai.net (or 125.56.208.11) and everything works fine.
Minor edit: I'm a bit skeptical that's the cause, as FB serves other assets (CSS, JS) from that domain, while photos and profile pictures seem to come from a different one. But I'd still be interested in whether the problem occurs when connecting to the resource or when displaying it.
That's probably because your DNS resolves the Akamai CDN server, used by Facebook to serve images, to an IP address that is not reachable from your network. You may want to capture the IP addresses of the Facebook CDNs used by the computer at the time this happens and contact your network administrator to find the reason behind the blockage (it may be a firewall). Other than that, you can try changing the DNS server in your system settings, which might give you an IP address that works for your network.
PS: I ran into this issue a few weeks ago and have found my findings to be correct.
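One way to verify this diagnosis is to compare what the system resolver and a public resolver each return for the CDN host. The dig calls themselves need network access, so they are shown commented out; the filter that keeps only IPv4 answers from dig +short output is demonstrated on a captured answer like the one mentioned above:

```shell
# Sketch: compare the system resolver's answer with a public resolver's.
#   dig +short static.ak.fbcdn.net           # system-configured DNS
#   dig +short static.ak.fbcdn.net @8.8.8.8  # Google public DNS, for contrast
# dig +short prints CNAMEs and addresses one per line; keep IPv4 answers only:
printf 'a749.g.akamai.net.\n125.56.208.11\n' | grep -E '^[0-9]+(\.[0-9]+){3}$'
```

If the two resolvers return different addresses and only the public one is reachable, switching the DNS server as suggested is the fix.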
