I'm making a website using NuxtJS for the front-end, and Laravel for the backend.
I need to have some data available on every page, so I fetch it using the nuxtServerInit function like so:
async nuxtServerInit({ commit }, { $axios }) {
  const items = await $axios.$get('items')
  commit('saveItems', items)
}
This works fine when pointing to my Laravel app that's already online, but not when I'm running it on my local machine. I get a "getaddrinfo ENOTFOUND 3213" error.
My .env file contains the baseURL for axios, so local would be:
API_URL=http://127.0.0.1:8000/v1/
I tried many things but to no avail. It works when using the live backend, and there's no problem with the API_URL as it works on every other function/page apart from the nuxtServerInit one. I tried:
Using nuxt/dotenv as well as Nuxt's new runtimeConfig settings to configure Axios' base URL in nuxt.config.js (see the sketch below)
Changing the port on which my backend runs
Changing the port on which my frontend runs
Adding a custom URL for my local backend (like devlocal.com pointing to localhost:8000)
But nothing works. My /etc/hosts file also correctly contains 127.0.0.1, as I saw that was the fix for a lot of ENOTFOUND errors locally.
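For reference, the runtimeConfig attempt looked roughly like this (a sketch of the usual @nuxtjs/axios + Nuxt 2.13 pattern, not my exact config):
// nuxt.config.js
export default {
  modules: ['@nuxtjs/axios'],
  publicRuntimeConfig: {
    axios: {
      browserBaseURL: process.env.API_URL
    }
  },
  privateRuntimeConfig: {
    axios: {
      baseURL: process.env.API_URL
    }
  }
}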
I also tried changing the API_URL in my .env file to
API_URL=localhost:8000/v1/
But then I get "ENOTFOUND 8000" instead, or whichever port I set my Laravel server to.
Please help!
Edit:
After enabling debug on Axios in dev, I found the hostname is wrong in the request, which in turn makes the URL incorrect:
reusedSocket: false,
host: '3213',
protocol: 'http:',
_redirectable: [Circular *1],
[Symbol(kCapture)]: false,
[Symbol(kNeedDrain)]: false,
[Symbol(corked)]: 0,
[Symbol(kOutHeaders)]: [Object: null prototype]
},
_currentUrl: 'http://3213/http://localhost:8000/v1/items',
I don't know where the "3213" comes from. I tried setting default host/hostnames/port values to Axios in nuxt.config.js, but no luck.
Solved it randomly. I had a proxy variable set up in my environment variables (Win 10) called "http_proxy". Deleted it, and the problem was gone.
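In case it helps someone else: axios running inside the Node/Nuxt server picks up the http_proxy / https_proxy environment variables, so you can either remove the variable or (assuming your request options are passed straight through to axios) opt out of the environment proxy per request:
// delete the variable for the current session (PowerShell): Remove-Item Env:\http_proxy
// or, inside nuxtServerInit, tell axios not to use any environment proxy for this call:
const items = await $axios.$get('items', { proxy: false })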
Related
I have been using Spring Boot for web development, which handles sessions, the database, HTML, and other functions.
Recently, I decided to use Vue to replace the HTML part of Spring Boot.
Now I use the following two approaches for development, but I am troubled by problems such as cross-domain requests and the session not being kept.
The first approach: use Node.js as the entry point of the website and use vue.config.js to configure the proxy. This way the URLs resolve correctly, but the login status cannot be saved. For example, if I submit a login form, the backend returns a success message, but if I refresh the page it shows 'not logged in'. The devServer configuration is as follows:
devServer: {
  open: false,
  host: 'localhost',
  port: 8080,
  proxy: {
    '/api': {
      target: 'http://192.168.9.211/',
      changeOrigin: true,
      ws: true,
      pathRewrite: {
        '^/api': ''
      },
      cookieDomainRewrite: 'localhost',
    }
  }
}
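With this proxy, the page itself only makes same-origin requests to relative /api paths, which the dev server rewrites and forwards to Spring Boot. For example (a sketch, with a hypothetical endpoint):
import axios from 'axios'

// '/api/user/login' is a hypothetical endpoint; the dev server rewrites it to
// 'http://192.168.9.211/user/login' according to the pathRewrite rule above
axios.post('/api/user/login', { username: 'demo', password: 'demo' })
  .then(res => console.log(res.data))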
The second approach: use Nginx as a gateway, forwarding HTML requests to Node.js and data requests (such as JSON) to Spring Boot. I haven't had time to experiment with this idea, because it may still have session-saving issues. If Nginx forwards HTTP requests to both Node.js and Spring Boot, is the saved session from Node.js or from Spring Boot?
So I'm asking whether anyone has done a similar project and can share their experience.
I have a problem with my Laravel application, after pushing local changes to the master branch and then pulling master from the Linux server, I get a CSRF token mismatch whenever I do an axios request.
This issue is not present in the local environment which uses PHP artisan serve, but the Linux server uses apache2.
I have tried clearing cookies, incognito windows, and another computer; I have updated axios to 0.19.2 and cleared all missing-dependency errors, and I still cannot get rid of this error.
When I inspect the network tab I see that the request header does not contain XSRF-thing-a-ma-jig and as far as I know axios includes it automatically. Also, on the local environment it is included and visible.
Here is how I make the axios request
axios.delete(`/${objectType.toLowerCase()}/${objectId}`)
  .then((returnData) => {
    this.getPosts(); // to be replaced
  })
  .catch(function (error) {
    console.log(error);
  });
and here are the two lines for axios in bootstrap.js
window.axios = require('axios');
window.axios.defaults.headers.common['X-Requested-With'] = 'XMLHttpRequest';
Obviously I have the csrf_token meta tag in the head of the page.
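For reference, the stock Laravel bootstrap.js also reads that meta tag and attaches it to every request as a header; this is the default scaffolding, and I am not certain whether it is still intact in my file:
const token = document.head.querySelector('meta[name="csrf-token"]');

if (token) {
    window.axios.defaults.headers.common['X-CSRF-TOKEN'] = token.content;
} else {
    console.error('CSRF token not found');
}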
I am at a loss. I can't test the old version from git since it throws a 500 server error; I guess I broke something somewhere along the line, but the same master branch is working in the local environment, so it should work on the Linux server...
I need help!
UPDATE: I think I found a possible reason: the cookies tab shows the same XSRF-TOKEN all the time, even after deleting the cookies in the browser multiple times, and on other computers as well. What could be the reason for this?
I am starting a new project, Nuxt.js for the frontend and Laravel for the backend.
How can I connect the two?
I have installed a new Nuxt project using create-nuxt-app, and a new laravel project.
As far as I have searched, I figured I need some kind of environment variables.
In my nuxt project, I have added the dotenv package and placed a new .env file in the root of the nuxt project.
And I added CORS to my Laravel project, as I was getting an error.
The variables inside are indeed accessible from the project, and I'm using them like this:
APP_NAME=TestProjectName
API_URL=http://127.0.0.1:8000
And accessing it like this:
process.env.APP_NAME, etc.
To make HTTP calls, I am using the official Axios module of Nuxt.js, and to test it I used it in one of the components that came by default.
The backend:
Route::get('/', function () {
    return "Hello from Laravel API";
});
and from inside the component:
console.log(process.env.API_URL) // gives 127.0.0.1:8000
// but this gives undefined:
this.$axios.$get(process.env.API_URL).then((response) => {
  console.log(response);
});
What am I doing wrong here?
I have tried to describe my setup and problem as best as I can. If I overlooked something, please tell me and I will update my question. Thanks.
Taking for granted that visiting http://127.0.0.1:8000/ in your browser gives you the expected response, let's see what might be wrong in the front end:
First you should make sure that axios module is initialized correctly. Your nuxt.config.js file should include the following
// inclusion of the module
modules: [
  '@nuxtjs/axios',
  // ...other modules
],

// configuration of the module
axios: {
  baseURL: process.env.API_URL,
},
Keep in mind that depending on the component's lifecycle, your axios request may be occurring on the client side (after server-side rendering), where the address 127.0.0.1 might be invalid. I would suggest that you avoid using 127.0.0.1 or localhost when defining API URLs, and prefer your local network IP for local testing.
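For example, the .env entry could use a (hypothetical) LAN address that is reachable both from the Nuxt server process and from the browser:
API_URL=http://192.168.1.50:8000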
After configuring the axios module as above, you can make requests in your components using just relative API URIs:
this.$axios.$get('/').then(response => {
  console.log(response)
}).catch(err => {
  console.error(err)
})
While testing if this works it is very helpful to open your browser's dev tools > network tab and check the state of the request. If you still don't get the response, the odds are that you'll have more info either from the catch section, or the request status from the dev tools.
Keep us updated!
Nuxt has a routing file structure to make it easy to set up server-side rendering, and also to help with maintainability. This can cause Laravel and Nuxt to fight over the routing; you will need to configure this to get it working correctly.
I'd suggest you use Laravel-Nuxt as a lot of these small problems are solved for you.
https://github.com/cretueusebiu/laravel-nuxt
I am working on an app using a React frontend and Express backend, with GraphQL setup through Apollo (I am following and modifying tutorial https://www.youtube.com/playlist?list=PLN3n1USn4xlkdRlq3VZ1sT6SGW0-yajjL)
I am currently attempting deployment, and am doing so with Heroku. Everything functions perfectly on my local machine before deployment, and on Heroku in Google Chrome. However, I get the aforementioned errors in Safari and Firefox, respectively. I am wondering why this is happening in these browsers and how to fix it.
I have spent about 10 hrs doing research on this. Things I tried that made no difference:
I tried adding CORS to my express backend
I tried serving the graphql endpoint as HTTPS
Moving app.use(express.static) in main app.js server file
I couldn't find many other things to try. Everywhere I looked seemed to say that CORS fixed the problem, but mine persists.
Github link: https://github.com/LucaProvencal/thedrumroom
Live Heroku App: https://powerful-shore-83650.herokuapp.com/
App.js (express backend):
const cors = require('cors')
// const fs = require('fs')
// const https = require('https')
// const http = require('http')
app.use(express.static(path.join(__dirname, 'client/build')));
app.use(cors('*')); //NEXT TRY app.use(cors('/login')) etc...
app.use(cors('/*'));
app.use(cors('/'));
app.use(cors('/register'));
app.use(cors('/login'));
app.get('/login', (req, res) => {
  res.sendFile(path.join(__dirname, "client", "build", "index.html"));
});
app.get('/register', (req, res) => {
  res.sendFile(path.join(__dirname, "client", "build", "index.html"));
});
server.applyMiddleware({ app }); // app is from the existing express app. allows apollo server to run on same listen command as app
const portVar = (process.env.PORT || 3001) // portVar cuz idk if it will screw with down low here im tired of dis
models.sequelize.sync(/*{ force: true }*/).then(() => { // syncs sequelize models to postgres, then since async call starts the server after
  app.listen({ port: portVar }, () =>
    console.log(`🚀 ApolloServer ready at http://localhost:3001${server.graphqlPath}`)
  )
  app.on('error', onError);
  app.on('listening', onListening);
});
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(function(req, res, next) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
  next();
});
Full file is on Github, I tried to post only relevant parts above.
The expected result is that it works in all browsers. It seems from my research that since Heroku serves on HTTPS, Safari and Firefox do not allow requests to HTTP (which is where the GraphQL server is located: http://localhost:3001/graphql). When I tried serving Apollo on HTTPS, Heroku just crashed, giving me H13 and 503 errors.
Thanks for any help...
This may also happen during local development when running the front end using HTTPS, but the back end using HTTP.
This is because CORS treats two URLs as having the same origin "only when the scheme, host, and port all match". Matching scheme means matching protocols e.g. both http, or both https.
One solution for local development is to proxy the back end using a tool such as ngrok.
Suppose the front end uses an environment variable which indicates the back end's URL:
BACK_END_API_URL=http://localhost:3005. Then do the following.
Install ngrok
Identify what port the back end is running on e.g. 3005
Run ngrok http 3005 at the command line, which will establish both http and https endpoints. Both will ultimately proxy the requests to the same back end endpoint: http://localhost:3005
After running ngrok it will display the http and https endpoints you can use. Put the one that matches the front end protocol you're using (e.g. https) into your front end environment variable that indicates the back end's URL e.g.
BACK_END_API_URL=https://1234asdf5678ghjk.ngrok.io
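The front end then reads that variable when building requests, for example (a sketch; how the variable is exposed to the client bundle depends on your setup):
const apiBase = process.env.BACK_END_API_URL; // e.g. 'https://1234asdf5678ghjk.ngrok.io'

fetch(`${apiBase}/graphql`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: '{ __typename }' }),
})
  .then((res) => res.json())
  .then((data) => console.log(data));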
Was going to delete this because it is such a silly problem but maybe it will help someone in the future:
I simply replaced all of my 'http://localhost:PORT' endpoints in development with '/graphql'. I assumed that localhost meant the local machine running the code, but an app running on Heroku does not point to localhost. The Express server is served at the app's URL (https://powerful-shore-83650.herokuapp.com/) in our case...
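In Apollo terms it amounts to something like this (a sketch, not my exact code, assuming an apollo-boost style client):
import ApolloClient from 'apollo-boost';

// a relative URI resolves against whatever host serves the app:
// localhost in development, the Heroku domain in production
const client = new ApolloClient({
  uri: '/graphql',
});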
At any rate I am so glad I came to a solution. I have a full stack app deployed and connected to a db. Hopefully this post can save someone lots of time.
I've been developing with the vue-cli and the Webpack template. Everything works flawlessly but I'm having some issues using a custom host. Right now Webpack listens to localhost:8080 (or similar) and I want to be able to use a custom domain such as http://project.dev. Has anybody figured this out?
This might be where the problem resides:
https://github.com/chimurai/http-proxy-middleware
I also added this to the proxyTable:
proxyTable: { 'localhost:8080' : 'http://host.dev' } and it gives me a console response [HPM] Proxy Created / -> http://host.dev
Any advice, direction or suggestion would be great!
Update
I successfully added a proxy to my Webpack project this way:
// http-proxy-middleware (the library linked above); older versions export the factory directly
var proxyMiddleware = require('http-proxy-middleware')

var mProxy = proxyMiddleware('/', {
  target: 'http://something.dev',
  changeOrigin: true,
  logLevel: 'debug'
})
app.use(mProxy)
This seems to work, but still not on port 80.
Console Log:
[HPM] Proxy created: / -> http://something.dev
I can assume the proxy is working! But my assets are not loaded when I access the URL.
It is important to note that I'm used to working with MAMP, and it's using port 80. So the only way I can run this proxy is to shut down MAMP and set the port to 80. It seems to work, but when I reload the page with the proxy URL there is a little delay while it tries to resolve, and then the console outputs this:
[HPM] GET / -> http://mmm-vue-ktest.dev
[HPM] PROXY ERROR: ECONNRESET. something.dev -> http://something.dev/
And this displays in the browser:
Error occured while trying to proxy to: mmm-vue-ktest.dev/
The proxy table is for forwarding requests to another server, like a development API server.
If you want the webpack dev server to run on this address, you have to add it to your OS's hosts file. Vue or webpack can't do this; it's the job of your OS.
Google will have simple guides for every OS.
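For illustration (a sketch, not part of the original setup): map the custom host to loopback in the hosts file, then point the dev server's host setting at it.
// 1) hosts file entry (e.g. /etc/hosts on macOS/Linux):
//      127.0.0.1   project.dev
//
// 2) dev section of config/index.js in the old vue-cli webpack template (assuming the default layout):
module.exports = {
  dev: {
    host: 'project.dev', // instead of 'localhost'
    port: 8080,          // port 80 needs elevated privileges and must be free (stop MAMP first)
    // ...the rest of the generated dev options stay as-is
  },
}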