I have APP_URL defined in the .env file. On the Apache web server, the virtual host is configured to respond only to the domain exampledomain.com.
In the routes file, a route is defined, for example, as:
Route::get('/orders/main-table', [\App\Http\Controllers\OrdersController::class, 'main_table'])->name('orders.offers_and_requests');
When the following code is executed, sometimes instead of the expected URL starting with https://exampledomain.com/...... I get https://ipaddress-of-server/...., and there is no pattern to when this happens. I personally can't reproduce it, but occasionally it looks as if someone overrides the host. I wouldn't have noticed if I hadn't analyzed the activity log, as I log all activity in the app for debugging purposes. All the routes from which this URL generation can be triggered are protected, so an attacker can't simply send a POST to generate these links.
route('orders.offers_and_requests');
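One common cause worth ruling out (not certain from the information above): by default Laravel's route() and url() helpers build absolute URLs from the Host header of the current request, not from APP_URL, so a request that reaches the application by the server's IP address makes every URL generated during that request use the IP. A minimal sketch of pinning URL generation to APP_URL in the app's service provider (standard Laravel structure assumed):

<?php

namespace App\Providers;

use Illuminate\Support\Facades\URL;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    public function boot()
    {
        // Generate absolute URLs from APP_URL instead of the request's Host header.
        URL::forceRootUrl(config('app.url'));

        // Optional: also force the scheme if the site is served over HTTPS only.
        URL::forceScheme('https');
    }
}

Restricting the Apache default vhost (or rejecting unknown Host headers) would address the same symptom at the web server level.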
I want to bind the Akeneo 6 Community Edition Events API to a Laravel application, so that the latter can act on the creation of an Akeneo product, for example. In other words, when a user creates a product in Akeneo, Akeneo sends a message to Laravel.
So I've followed this doc: https://help.akeneo.com/pim/serenity/articles/manage-event-subscription.html#activate which says to create a Destination Flow Connection and then to activate the event subscription within it. Then I have to enter the URL of the Laravel endpoint that will receive the message sent by Akeneo into Akeneo's URL field. When I click the "TEST" button in Akeneo, it shows the error "This url is not allowed." every time.
After having created a POST Laravel route named receive_akeneo_events, I've tried the following URLs:
http://127.0.0.1:80/receive_akeneo_events
http://0.0.0.0:80/receive_akeneo_events
http://localhost:80/receive_akeneo_events
http://laravel.test:80/receive_akeneo_events (after adding laravel.test as an entry in my /etc/hosts on the same line as localhost)
The same URLs without the port, and/or the same URLs over HTTPS.
None of them works; "This url is not allowed." is still displayed for each of these URLs.
Is it a Laravel problem, i.e. should this route not be POST, or should it be configured to return a particular HTTP code, since it's the target of an event-triggering system (Akeneo's)?
Or is it an Akeneo problem, or just a bad URL format?
Finally, I modified the code of Akeneo CE by removing the exclusion of localhost from a PHP array called BLACKLIST and by removing the condition that excludes IPs in private ranges. Both modifications were made in the caller of the appropriate Symfony validator.
Moreover, I've of course also had to deal with the Laravel route's CSRF protection; see the Laravel docs for what to do: https://laravel.com/docs/9.x/csrf.
Now it works.
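For reference, a minimal sketch of the receiving side under the setup described above (the closure body and the logging are illustrative only; the real handler can be a controller):

<?php

// routes/web.php
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Route;

Route::post('/receive_akeneo_events', function (Request $request) {
    // Inspect or queue the Akeneo event payload here.
    Log::info('Akeneo event received', $request->all());

    // A 2xx response is enough to acknowledge the delivery.
    return response()->noContent();
});

As described in the linked CSRF documentation, the route URI also has to be listed in the $except array of App\Http\Middleware\VerifyCsrfToken so the POST from Akeneo is not rejected:

// app/Http/Middleware/VerifyCsrfToken.php
protected $except = [
    'receive_akeneo_events',
];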
My Laravel site uses SoapClient to access another site during a page load (which performs about 6 seconds of data processing before the SOAP call). I noticed that sometimes the SoapClient switches to non-WSDL mode and the process errors out. I discovered this was happening because the SoapClient was passed NULL for its first constructor parameter (the URI of the WSDL file). I thought this was strange, because this value came directly from the .env file, and the site was having no trouble connecting to the database, so the .env file had to be working.
I set up a function that accesses .env variables repeatedly during the page load, using env(...). During a SOAP error, I discovered that around the four-second mark the site lost access to the .env vars: before that point the information was accessible, after that point calls to env() returned NULL. This may be related to other page requests (possibly repeat calls to the same page, requesting the same process). Also, I just upgraded PHP to 7.4.13 (XAMPP with 64-bit thread support: php-7.4.13-Win32-vc15-x64). Has anyone seen this before, and is there a way to address it?
EDIT ====
The SoapClient was created in a model, and the env() function was used to access the environment variables. I have since learned that env() should not be used anywhere but config files. This may explain my problem.
I have never seen this problem, but an approach might be to load the env variable into a config value and use that instead. For example, create an extra.php file in the config directory like this:
<?php
return [
    'api_url' => env('API_URL'),
];
And use it like this:
config('extra.api_url');
// Instead of env('API_URL')
Hope it works.
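Applied to the SoapClient case above, the model would then read the WSDL location through config() instead of env(), so it keeps working even when env() stops returning values (for example after config caching). The config key reuses the extra.php example from this answer; treating API_URL as the WSDL URL here is just an assumption for illustration:

// In the model, instead of new SoapClient(env('API_URL'), ...):
$wsdl = config('extra.api_url');

$client = new \SoapClient($wsdl, [
    'exceptions' => true,
]);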
I am using an SPA (Laravel and Vue.js). In development the project worked perfectly, but after running npm run production and putting it on the live server, I got errors saying:
Access to XMLHttpRequest at http://localhost:8000/auth_check from origin https://hamariweb.com/auth_check has been blocked by CORS policy.
I have tried different solutions that fixed other people's problems but not mine. I want to know what is wrong with it and how I can fix it.
Sounds like somewhere in your code you're trying to send a request to http://localhost:8000/auth_check which isn't going to work in prod. You need to find the call to that URL and replace it with a call to the correct URL.
You can create an ENV setting in your .env file like this:
APP_URL=https://hamariweb.com
Then expose that value to your JavaScript code by adding this line to the same .env file:
MIX_APP_URL="${APP_URL}"
Any ENV settings that start with MIX_ are passed through to the JavaScript build, and this one passes along the initial APP_URL value.
To finally grab the app URL in the JS, do this:
process.env.MIX_APP_URL
You can even expose the entire set of env settings globally like this:
if (process && process.env) {
    window.env = process.env;
}
Any MIX_ variables from .env will then be available in window.env, and in your SPA you'll be able to make the URL or any other variable configurable based on the environment.
I changed the domain name for my Ruby application, but when I run it I get:
The page you were looking for doesn't exist.
In the log file it says
Routing error no route matches [GET] "/"
In my routes file config/routes.rb I changed the domain constraint to the new domain.
I must say that this configuration worked with my old domain name.
Am I missing some place where the domain name needs to be changed? Please note I am very new to Ruby.
So, the good news is that you are getting Routing error no route matches [GET] "/". This means your request reaches the Rails application; in most cases it doesn't even get past your web server, such as Nginx, and then it has nothing to do with Rails.
However, since you get the above routing error, your request does hit the Rails app.
A few things to check here:
1) Make sure you have a root url defined in your config/routes file.
2) Make sure the URL you are trying to access has a matching route.
For example, say you go to http://<your domain>/products
Then in the routes, you should have products GET /products(.:format) products#index
You can check this by running rake routes from the root of your Rails app.
Also, it would be helpful if you updated the question with your config/routes file and the URL you are trying to access.
I'm trying to set up an MVC application that will service several Facebook applications for various clients. With help from Prabir's blog post I was able to set this up with v5.2.1, and it is working well, with one exception.
At first, I had only set up two "clients", one called DemoStore and the first client, ClientA. The application determines which client content and Facebook settings to use based on the URL. Example canvas URL: http://my_domain.com/client_name/
This works for ClientA, but for some reason when I try any DemoStore routes I get a 500 error. The error page points to an issue with the web.config.
Config Error:
Cannot add duplicate collection entry of type 'add' with unique key attribute 'name' set to 'facebookredirect.axd'
I am able to add additional clients with no problem, and changing DemoStore to something like "demo" while using the same Facebook application settings also works fine.
Working calls:
http://localhost:2888/ClientA/
http://localhost:2888/ClientB/
http://localhost:2888/Demo/
Failing call:
http://localhost:2888/DemoStore/
I was thinking this might be an MVC issue, but the Config Error points to the facebookredirect handler. Why would the SDK try to add this value to the config at runtime, and only for this specific client?
Any insight would be greatly appreciated.
I managed to figure out what went wrong here. Silly mistake.
After I had set up the application routes to require the client_name, I changed the Project Url in the project properties to point to demostore by default. When I hit Ctrl+S, a dialog popped up that I promptly clicked through without reading.
When I changed the Project Url, IIS Express created a new virtual directory for the project. This was the source of my problem. Why? I'm not sure, but once I removed the second site from my applicationhost.config I was able to access the DemoStore routes.
Moral of the story: read the VS dialog messages!