I am a total Node.js newbie trying to figure out the best way to structure my application with proper separation of concerns.
I am using MongoDB via Mongoose and have successfully separated out my controllers using Node.js modules, and am now trying to separate out my models. What I've done appears to work, but when I check the database nothing has been saved. Also, I tried a console.log() in the save callback and nothing gets logged.
From my server.js I have:
app.post(api.urlslug + '/messages', messagesapi.insert);
I then have a /controllers/api/messages.js:
var m = require('../../models/message');

exports.index = function(req, res, next) {
  res.send('all the messages...');
};

exports.insert = function(req, res, next) {
  var message = new m.Message({
    messagebody: req.body.messagebody
  });

  message.save(function(err) {
    console.log('here we are in the save');
    if (!err) {
      return console.log('message created');
    } else {
      return console.log(err);
    }
  });

  return res.send(message);
};
and my /models/message.js looks like this:
// required modules
var mongoose = require('mongoose')
  , db = require('../models/db');

// setup database connection
mongoose.connect(db.connectionstring());

var Message = exports.Message = mongoose.model('Message', new mongoose.Schema({
  messagebody: String
}));
When I post to the API I get the proper JSON back, and it even has an _id field with what appears to be a MongoDB-provided unique id. With that, I am having trouble understanding why it is not actually going into MongoDB if it appears to be creating the object and communicating with MongoDB correctly.
Sounds like a connection is not being made. Try listening to the open/error events of the default mongoose connection to find out why.
function log(msg) { console.log('connection: ', msg); }
mongoose.connection.on('open', log);
mongoose.connection.on('error', log);
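For example, a minimal sketch of wiring this up (assuming db.connectionstring() from the question returns a valid MongoDB URI; the listeners are registered before connecting so no event is missed):

var mongoose = require('mongoose');
var db = require('./models/db');

// register listeners first so the open/error events are not missed
mongoose.connection.on('open', function () {
  console.log('connection: open');
});
mongoose.connection.on('error', function (err) {
  console.log('connection error:', err);
});

mongoose.connect(db.connectionstring());

If the error listener fires, its message should tell you why the save callback never runs. Mongoose buffers operations while disconnected, and _id is generated client-side, which would explain why the returned document has an _id but never reaches the database.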
I have some Parse cloud code I'm running on my self-hosted server, but I'm running into an issue where queries are not doing anything. I can run commands through the terminal and get data back, but when I run a query.find... nothing happens. For example:
Parse.Cloud.job("getall", function(request, response) {
var itemStatus = Parse.Object.extend('MovieStatus');
var query = new Parse.Query(itemStatus);
query.find({
success: function(results) {
console.log(results.length)
response.success(results.length);
},
error: function(err) {
response.error(err);
},
useMasterKey : true
})
})
Nothing happens. No error, no response. I have added console logs to make sure it's at least getting called, and it is, but for some reason nothing ever returns from the server when I do query.find.
I have tried all sorts of things to figure out what the issue is, but this affects all of my cloud code, so it has to be something in there.
You are using the old syntax. Since version 3.0, Parse Server supports the async/await style. Try this:
Parse.Cloud.job("getall", async request => {
const { log, message } = request;
const ItemStatus = Parse.Object.extend('MovieStatus');
const query = new Parse.Query(ItemStatus);
const results = await query.find({ useMasterKey: true });
log(response.length);
message(response.length);
})
Note this is a job and not a cloud code function. You can invoke this job using the Parse Dashboard, and you should see the message in the job status section.
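If you need this as a callable Cloud Function rather than a background job, a similar sketch would work (the function name getAllCount is a hypothetical example, not part of the original answer):

Parse.Cloud.define("getAllCount", async request => {
  // "MovieStatus" is the class name from the question
  const query = new Parse.Query("MovieStatus");
  const results = await query.find({ useMasterKey: true });
  return results.length;
});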
When using apollo-server 2.2.1 or later, how can one log, for each request, the query and the variables?
This seems like a simple requirement and common use case, but the documentation is very vague, and the query object passed to formatResponse no longer has the queryString and variables properties.
Amit's answer works (today), but IMHO it is a bit hacky, and it may not work as expected in the future, or it may not work correctly in some scenarios.
For instance, the first thing that I thought when I saw it was: "that may not work if the query is invalid". It turns out that today it does work when the query is invalid, because with the current implementation the context is evaluated before the query is validated. However, that's an implementation detail that can change in the future. For instance, what if one day the Apollo team decides that it would be a performance win to evaluate the context only after the query has been parsed and validated? That's actually what I was expecting :-)
What I'm trying to say is that if you just want to log something quick in order to debug something in your dev environment, then Amit's solution is definitely the way to go.
However, if what you want is to register logs for a production environment, then using the context function is probably not the best idea. In that case, I would install graphql-extensions and use it for logging, something like:
const { print } = require('graphql');

class BasicLogging {
  requestDidStart({ queryString, parsedQuery, variables }) {
    const query = queryString || print(parsedQuery);
    console.log(query);
    console.log(variables);
  }

  willSendResponse({ graphqlResponse }) {
    console.log(JSON.stringify(graphqlResponse, null, 2));
  }
}

const server = new ApolloServer({
  typeDefs,
  resolvers,
  extensions: [() => new BasicLogging()]
});
Edit:
As Dan pointed out, there is no need to install the graphql-extensions package, because it has been integrated into the apollo-server-core package.
With the new plugins API, you can use a very similar approach to Josep's answer, except that you structure the code a bit differently.
const BASIC_LOGGING = {
  requestDidStart(requestContext) {
    console.log("request started");
    console.log(requestContext.request.query);
    console.log(requestContext.request.variables);
    return {
      didEncounterErrors(requestContext) {
        console.log("an error happened in response to query " + requestContext.request.query);
        console.log(requestContext.errors);
      },
      // willSendResponse is a request lifecycle hook, so it has to live in
      // the object returned from requestDidStart in order to be called
      willSendResponse(requestContext) {
        console.log("response sent", requestContext.response);
      }
    };
  }
};

const server = new ApolloServer({
  schema,
  plugins: [BASIC_LOGGING]
});

server.listen(3003, '0.0.0.0').then(({ url }) => {
  console.log(`GraphQL API ready at ${url}`);
});
If I had to log the query and variables, I would probably use apollo-server-express instead of apollo-server, so that I could add a separate Express middleware before the GraphQL one that logged them for me:
const express = require('express')
const bodyParser = require('body-parser') // required for bodyParser.json() below
const { ApolloServer } = require('apollo-server-express')
const { typeDefs, resolvers } = require('./graphql')

const server = new ApolloServer({ typeDefs, resolvers })

const app = express()
app.use(bodyParser.json())
app.use('/graphql', (req, res, next) => {
  console.log(req.body.query)
  console.log(req.body.variables)
  return next()
})

server.applyMiddleware({ app })

app.listen({ port: 4000 }, () => {
  console.log(`🚀 Server ready at http://localhost:4000${server.graphqlPath}`)
})
Dan's solution mostly resolves the problem, but if you want to log it without using Express, you can capture it in the context function, as shown in the sample below.
const server = new ApolloServer({
  schema,
  context: ({ req }) => {
    console.log(req.body.query);
    console.log(req.body.variables);
  }
});
I found myself needing something like this, but in a more compact form: just the query or mutation name and the ID of the user making the request. This is for logging queries in production to trace what the user was doing.
I call logGraphQlQueries(req) at the end of my context.js code:
export const logGraphQlQueries = (req) => {
  // the operation name is the first token on the first line
  const operationName = req.body.query.split(' ')[0];

  // the query name is the first token on the 2nd line
  const queryName = req.body.query
    .split('\n')[1]
    .trim()
    .split(' ')[0]
    .split('(')[0];

  // in my case the user object is attached to the request (after decoding the jwt)
  const userString = req.user?.id
    ? `for user ${req.user.id}`
    : '(unauthenticated)';

  console.log(`${operationName} ${queryName} ${userString}`);
};
This outputs lines such as:
query foo for user e0ab63d9-2513-4140-aad9-d9f2f43f7744
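If the string splitting feels too fragile (it assumes the operation keyword is on the first line and the query name on the second), a sturdier sketch would use the parser from the graphql package instead; the function name below is a hypothetical variant:

import { parse } from 'graphql';

export const logGraphQlQueriesParsed = (req) => {
  // parse the incoming query into an AST instead of splitting strings
  const doc = parse(req.body.query);
  const op = doc.definitions.find((d) => d.kind === 'OperationDefinition');
  const operationName = op.operation; // 'query', 'mutation' or 'subscription'
  const queryName = op.name ? op.name.value : '(anonymous)';
  console.log(`${operationName} ${queryName}`);
};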
Apollo Server exposes a request lifecycle event called didResolveOperation, at which point the requestContext has populated properties called operation and operationName:
plugins: [
  {
    requestDidStart(requestContext) {
      return {
        didResolveOperation({ operation, operationName }) {
          const operationType = operation.operation;
          console.log(`${operationType} received: ${operationName}`);
        }
      };
    }
  }
]

// query received: ExampleQuery
// mutation received: ExampleMutation
I'm trying to access a doc using the GET API of Elasticsearch, but even though the documentation claims it is real time, I cannot seem to make it work.
Here's what I tried:
Indexing an event with a custom id:
POST: http://hostname.com:9200/events/purchase/<custom_id>
Immediately retrieving the doc using:
GET: http://hostname.com:9200/events/purchase/<custom_id>
The problem is that the document is not found.
UPDATE:
It seems that the problem only occurs if the index is initially empty and that's the first doc to be written. Subsequent requests are indexed and retrieved just fine.
This might not be the best code, but I think it shows what you want. If possible, switch to the elasticsearch.js client; that is what is used in this sample code.
var elasticsearch = require('elasticsearch');

var config = { host: 'localhost:9200' };
var client = new elasticsearch.Client({
  host: config.host,
  log: 'trace'
});

storeEvent({ "title": "My Event" }, 1);

function storeEvent(myEvent, id) {
  client.create({
    "index": "events",
    "type": "purchase",
    "id": id,
    "body": myEvent
  }, function(error, response) {
    if (error) {
      console.log("Error during creating event");
      console.log(error);
    } else {
      console.log("Submitted event");
    }
    // immediately fetch the document back by id
    client.get({
      "index": "events",
      "type": "purchase",
      "id": id
    }, function(error, response) {
      if (error) {
        console.log("Error during obtaining event");
        console.log(error);
      } else {
        console.log(response);
      }
    });
  });
}
Which version are you using? 1.2.0 had a bug involving routing: http://www.elasticsearch.org/blog/elasticsearch-1-2-1-released/
If you fixed things by reindexing your data, it's worth noting that it won't be compatible with future versions.
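If you are not sure which version your node is running, the root endpoint reports it:

GET: http://hostname.com:9200/

The response includes a version.number field.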
I'm building a super simple server in Node, and in my onRequest listener I'm trying to determine whether I should serve a static file (off the disk) or some JSON (probably pulled from Mongo), based on the path in request.url.
Currently I'm trying to stat the file first (because I use mtime elsewhere), and if that doesn't fail then I read the contents from disk. Something like this:
fs.stat(request.url.pathname, function(err, stat) {
  if (!err) {
    fs.readFile(request.url.pathname, function(err, contents) {
      // serve file
    });
  } else {
    // either pull data from mongo or serve 404 error
  }
});
Other than caching the result of fs.stat for the request.url.pathname, is there something that could speed this check up? For example, would it be just as fast to see if fs.readFile errors out instead of the stat? Or to use fs.createReadStream instead of fs.readFile? Or could I potentially check for the file using something in child_process.spawn? Basically I just want to make sure I'm not spending any extra time messing with file I/O when the request should be sent to mongo for data...
Thanks!
var fs = require('fs');
fs.exists(file, function(exists) {
if (exists) {
// serve file
} else {
// mongodb
}
});
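Note that fs.exists has since been deprecated; on current Node versions, a check via fs.access is the suggested replacement (a sketch along the same lines):

var fs = require('fs');

// fs.access passes an error to the callback when the file is missing
fs.access(file, fs.constants.F_OK, function(err) {
  if (!err) {
    // serve file
  } else {
    // mongodb
  }
});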
I don't think you should be worrying about that, but rather about how you can improve the caching mechanism. fs.stat is really OK for file checking; doing that in another child process would probably slow you down rather than help you here.
Connect implemented the staticCache() middleware a few months ago, as described in this blog post: http://tjholowaychuk.com/post/9682643240/connect-1-7-0-fast-static-file-memory-cache-and-more
A Least-Recently-Used (LRU) cache algorithm is implemented through the Cache object, simply rotating cache objects as they are hit. This means that increasingly popular objects maintain their positions while others get shoved out of the stack and garbage collected.
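For illustration, a minimal LRU cache along those lines might look like this (a sketch of the idea, not Connect's actual implementation):

function LRUCache(limit) {
  var map = new Map();
  return {
    get: function (key) {
      if (!map.has(key)) return undefined;
      var value = map.get(key);
      // delete and re-insert so the key becomes the most recently used
      map.delete(key);
      map.set(key, value);
      return value;
    },
    set: function (key, value) {
      if (map.has(key)) map.delete(key);
      map.set(key, value);
      // evict the least recently used entry once over capacity
      if (map.size > limit) map.delete(map.keys().next().value);
    }
  };
}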
Other resources:
http://senchalabs.github.com/connect/middleware-staticCache.html
The source code for staticCache
This snippet can help you:
var fs = require('fs');
var path = 'sth';

fs.stat(path, function(err, stat) {
  if (err) {
    if (err.code === 'ENOENT') {
      // file didn't exist, so for example send 404 to client
    } else {
      // it is a server error, so for example send 500 to client
    }
  } else {
    // everything was ok, so for example you can read it and send it to client
  }
});
In case you want to serve a file using Express, I would recommend just using the error handler of Express's sendFile:
const app = require("express")();

const options = {};
options.root = process.cwd();

var sendFiles = function(res, files) {
  res.sendFile(files.shift(), options, function(err) {
    if (err) {
      console.log(err);
      console.log(files);
      if (files.length === 0) {
        // no candidates left, so answer with the error status from sendFile
        res.status(err.status).end();
      } else {
        // try the next file in the list
        sendFiles(res, files);
      }
    } else {
      console.log("Image Sent");
    }
  });
};

app.get("/getPictures", function(req, res, next) {
  const files = [
    "file-does-not-exist.jpg",
    "file-does-not-exist-also.jpg",
    "file-exists.jpg",
    "file-does-not-exist.jpg"
  ];
  sendFiles(res, files);
});

app.listen(8080);
If a file does not exist, sendFile invokes the error callback, which then recursively tries the next file in the list.
I made a github repo here https://github.com/dmastag/ex_fs/blob/master/index.js
I have a little problem with Express and Mongoose using Node.js. I pasted the code in Pastebin for better visibility.
Here is the app.js: http://pastebin.com/FRAFzvjR
Here is the routes/index.js: http://pastebin.com/gDgBXSy6
Since db.js isn't big, I'll post it here:
var mongoose = require('mongoose'),
    Schema = mongoose.Schema;

module.exports = function () {
  mongoose.connect('mongodb://localhost/test',
    function(err) {
      if (err) { throw err; }
    }
  );
};

var User = new Schema({
  username: {type: String, index: { unique: true }},
  mdp: String
});

module.exports = mongoose.model('User', User);
As you can see, I used console.log to debug my app, and I found that, in routes/index.js, only the 'a' appeared. That's weird; it's as if the script stopped (or continued without any response) when
userModel.findOne({username: req.body.username}, function(err, data)
is tried.
Any idea?
You never connect to your database. Your connect method is within the db.js export, but it is never called as a function from your app.
Also, you are overwriting your module.exports: if you want multiple functions/classes to be exported, you must add them as different properties of the module.exports object, i.e.:

module.exports.truthy = function() { return true; }
module.exports.falsy = function() { return false; }

When you then require that module, you must call the function (trueFalse.truthy();) in order to get the value. Since you never execute the function that connects to your database, you are not receiving any data.
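Applied to the question's db.js, that could look something like the following sketch (the connect property name is an assumption):

var mongoose = require('mongoose'),
    Schema = mongoose.Schema;

var User = new Schema({
  username: {type: String, index: { unique: true }},
  mdp: String
});

// expose the connection step and the model as separate properties
// instead of overwriting module.exports twice
module.exports.connect = function () {
  mongoose.connect('mongodb://localhost/test', function(err) {
    if (err) { throw err; }
  });
};
module.exports.User = mongoose.model('User', User);

Then in app.js, call require('./db').connect(); once at startup, and use require('./db').User in the routes.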
A couple of things real quick.
Make sure you're on the latest mongoose (2.5.3). Update your package.json and run npm update.
Try doing a console.log(arguments) before your if (err). It's possible that an error is happening.
Are you sure you're really connecting to the database? Try explicitly connecting at the top of your file (just for testing): mongoose.connect('mongodb://localhost/my_database');
I'll update if I get any other ideas.