I am building a self-consuming Lumen API that has a single Lumen view (which serves the HTML to which the React app is appended).
I was able to install Dusk with
composer require --dev laravel/dusk
which seemed successful.
However, when I run
php artisan dusk:install
I get
There are no commands defined in the "dusk" namespace.
I know Lumen has a stripped-down php artisan, but I'm wondering if I can add the commands to the "dusk" namespace, or if anyone has successfully used Dusk with Lumen.
Thanks to Jared's answer,
I found I had to manually register Dusk's service provider before I could run php artisan dusk:install. The current Laravel documentation doesn't mention registering it, but it seems like it might have to be done for Lumen.
So all I had to do was add
if ($app->environment('local')) {
    $app->register(Laravel\Dusk\DuskServiceProvider::class);
}
to /bootstrap/app.php below the Register Service Providers comment.
As Jared mentions, you don't want it to register in production environments so I stuck it in a conditional.
Once added, I was able to run php artisan dusk:install and got Dusk scaffolding installed successfully.
Be sure you have the .env file set up correctly.
The APP_ENV should be set to local or testing for Dusk to work.
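For example, a minimal .env for running Dusk locally might contain (values are illustrative):
APP_ENV=local
APP_URL=http://localhost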
Also check that it was correctly registered by looking at the Register Service Providers section in your bootstrap/app.php file; Dusk should be listed there.
If you are manually registering Dusk's service provider, you should never register it in your production environment, as doing so could lead to arbitrary users being able to authenticate with your application.
So I tried creating a new project using Breeze+vue in Laravel.
I was following this guide: https://www.youtube.com/watch?v=A7UlfXPhsaA
When I finally got Vite running with npm run dev without issues, I came across the APP_URL setting and tried to change it to something similar to the guide (timestamp 6:14 in the video). In my case the APP_URL value was just http://localhost, and I changed it to http://grandia.test.
And it reflects that when Vite is running:
VITE v3.1.0 ready in 1031 ms
➜ Local: http://localhost:5173/
➜ Network: use --host to expose
LARAVEL v9.28.0 plugin v0.6.0
➜ APP_URL: http://grandia.test
But nothing happens when I go to that site; only localhost:5173 works properly.
I tried googling for answers but I couldn't find anything helpful.
Could someone help me figure out what I'm doing wrong or missing? Thanks!
I had the same issue. It looks like you changed the URL to http://grandia.test instead of localhost, so you should try npm run build instead.
Changing Laravel’s APP_URL parameter will not magically allow you to choose the URL your website is served at. It will only tell Laravel what it should use when generating URLs related to your website.
The video’s author uses the Laravel Valet local development environment. It provides a park command that allows you to automatically serve the subfolders of some folder (e.g. Sites in the video) at http://<subfolder>.test.
In the video, the project is created in Sites/reddit-clone/, which makes it possible to directly access it at reddit-clone.test.
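For reference, the Valet workflow from the video boils down to roughly this (the Sites folder is just the video's choice, not a requirement):
cd ~/Sites
valet park
# every subfolder of ~/Sites is now served at http://<subfolder>.test,
# so Sites/reddit-clone becomes http://reddit-clone.test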
If I do not run php artisan optimize and I go to a new route in the browser, I get "Page not found".
Every time you push a new version of your project to production, it is recommended to run php artisan route:cache.
In a dev environment it is recommended to have no route cache; ensure that by running php artisan route:clear.
When you do
php artisan route:cache
A file like bootstrap\cache\routes-v7.php gets created, holding all your routes from routes\*.php. Those files declare routes with Route methods, which cost your server some computation on every request to work out what to do for the current route.
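For context, a typical entry in routes\web.php that ends up compiled into that cache might look like this (the controller name is just an illustrative example):
// routes/web.php
use App\Http\Controllers\PostController; // hypothetical controller

Route::get('/posts/{post}', [PostController::class, 'show']);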
Quoting from bootstrap\cache\routes-v7.php comments:
This allows us to instantaneously load the entire route map into the router.
Important:
This cache file doesn't get updated automatically and doesn't exist in a fresh project.
This cache takes precedence over routes\*.php files.
The bootstrap\cache folder is ignored by default by git.
Here is a great, extensive article with more details: https://voltagead.com/laravel-route-caching-for-improved-performance/
Note that the optimize command was removed from the framework and then added back.
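If you do rely on it, in recent Laravel versions optimize is essentially a shortcut for the individual cache commands; the exact set of caches it builds differs between versions, so treat this as a sketch:
php artisan optimize        # caches configuration and routes in recent versions
php artisan optimize:clear  # removes the generated cache files again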
Guzzle version(s) affected: 6.3 Laravel: 5.6.3 PHP: 7.2.10
Description
If I try to get a response in tinker
$client = new \GuzzleHttp\Client();
$response = $client->get($url);
json_decode($response->getBody())
I get the response as expected,
but in my controller
$object_res = $client->get($url);
I get this error:
"cURL error 3: malformed (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)",…}
Which means the URL is incorrect, but as I have described, this works perfectly in tinker.
Note
Everything works perfectly on my localhost; this occurs only on my test server.
Please let me know if I need to give additional information.
Tinker uses a different runtime than your application, and this could be causing the issue: in one scenario PHP goes directly from your box to the API server, and in the other it goes through your web server before making the request.
The first thing to do would be to clear your Laravel cache and config with
php artisan cache:clear
and
php artisan config:clear
If that fails, I would look into the cross-domain restrictions or settings on your web server.
Please check the Guzzle requirements on your server, especially:
To use the PHP stream handler, allow_url_fopen must be enabled in your system's php.ini.
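A quick way to verify that setting on the server, for instance from tinker:
// prints "1" when allow_url_fopen is enabled in php.ini, an empty string otherwise
var_dump(ini_get('allow_url_fopen'));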
I just ran into this problem on my testing server; I found it using Cockpit, but my problem was with alouy/youtube. Check SELinux if you have that on your production server. Check the file permissions of .env too. It's hard to give a solution when the variables of your server are not presented.
Also, read your Laravel logs; that is what pointed me to the solution.
Hope this helps.
I update my .env file using a function in my controller.
After I save the settings I need to update, I call Artisan::call('config:cache') to clear the cache of my site's configuration.
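Roughly, the controller method looks like this (a simplified sketch; the method name is illustrative and the code that rewrites the .env values is omitted):
// assumes: use Illuminate\Http\Request; use Illuminate\Support\Facades\Artisan;
public function saveSettings(Request $request)
{
    // ...persist the new values to the .env file here (omitted)...

    Artisan::call('config:cache'); // rebuild the cached configuration
}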
Everything works fine on localhost, but when I try to clear config cache on production, it doesn't work. (No warnings or errors.)
I even tried with --no-interaction option attached to this CLI command.
Did anyone have this problem and know what causes it?
Check the PHP security settings and make sure you can run the exec, passthru, and shell_exec functions on your server.
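A quick way to see which functions are blocked on that server:
// lists any functions blocked via php.ini's disable_functions directive
var_dump(ini_get('disable_functions'));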
The first problem I'm running into is that when installing I receive a MySQL error stating that a table cannot be found. Of course it can't be found; I haven't even finished installing the dependencies, much less run the migrations. The error was being triggered by an Eloquent query in a view composer. After commenting out the entirety of my routes file, Composer let me continue.
I proceed to uncomment my routes file, and I get the error once again when trying to run any artisan commands (I can't migrate my database because I haven't migrated my database). Repeating the solution from step one, I got my database migrated.
php artisan serve is now serving me my layout file in the terminal and exiting. I'm at a bit of a loss to troubleshoot this. I assumed that it was possibly a plugin; disabling plugins one by one results in:
Script php artisan clear-compiled handling the pre-update-cmd event returned with an error
and being served up my layout file in the terminal.
It seems that the error is directly related to this function in my routes file:
View::composer('layouts.main', function ($view) {
    $things = Thing::where('stuff', 1)->orderBy('stuff')->get();
    $view->with(compact('things'));
});
This isn't a new introduction to the application, however, so the underlying cause must be coming from somewhere else.
As I said in the comment, if you are finding database errors on the production server but not locally, then:
check the database credentials. If they are OK, then...
check the different configs in each environment.
Using a profiler (any) will let you know which environment you are in.
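For example, a throwaway check (in tinker or a temporary route) can confirm which environment and database settings the server actually uses; the "mysql" connection name here is an assumption:
dd(
    app()->environment(),                     // e.g. "production" or "local"
    config('database.default'),               // active database connection name
    config('database.connections.mysql.host') // assumes a "mysql" connection
);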