My test code is as below:
std::string strLogPath = "d:/logtest/";
google::InitGoogleLogging("test1");
FLAGS_log_dir = strLogPath;
FLAGS_stderrthreshold = google::GLOG_INFO;
FLAGS_minloglevel = google::GLOG_INFO;
//FLAGS_colorlogtostderr = true;
std::string strLogPath1 = "d:/logtest/L";
google::SetLogDestination(google::GLOG_INFO, strLogPath1.c_str());
google::SetLogDestination(google::GLOG_ERROR, strLogPath1.c_str());
google::SetLogDestination(google::GLOG_WARNING, strLogPath1.c_str());
google::SetLogDestination(google::GLOG_FATAL, strLogPath1.c_str());
LOG(INFO) << "infoinfo";
Sleep(1000);
LOG(WARNING) << "wwwww";
LOG(WARNING) << "wwwww";
LOG(ERROR) << "eeeeee";
Sleep(2000);
//LOG(FATAL) << "ffffff";
LOG(WARNING) << "wwwww";
LOG(WARNING) << "wwwww";
LOG(WARNING) << "wwwww";
google::ShutdownGoogleLogging();
I got two log files: one contains all messages (INFO, WARNING and ERROR), and the other contains the WARNING and ERROR messages but no INFO. This is quite different from what I expected. I want all messages in one file, and I don't want the WARNING and ERROR messages to appear twice in different files.
It would be highly appreciated if someone can tell me the solution.
Thanks a lot in advance.
You probably already have a solution for your problem but I will post this anyway for other users.
glog is not designed to log everything to a single file, as far as I know. Therefore you can't do what you want using glog alone.
However there are other solutions for your problem.
First: Write your own little logging library. This isn't very complicated and is a great programming exercise ;)
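For illustration only, a minimal sketch of such a single-file logger could look like this (the class name, severity names and path are made up for the example):
#include <fstream>
#include <mutex>
#include <string>
// Tiny single-file logger: every severity goes into the same file.
class SimpleLogger {
public:
    enum Severity { INFO, WARNING, ERROR };
    explicit SimpleLogger(const std::string& path) : m_file(path, std::ios::app) {}
    void Log(Severity sev, const std::string& msg) {
        static const char* kNames[] = { "INFO", "WARNING", "ERROR" };
        std::lock_guard<std::mutex> lock(m_mutex);  // keep lines intact across threads
        m_file << kNames[sev] << ": " << msg << std::endl;
    }
private:
    std::ofstream m_file;
    std::mutex m_mutex;
};
// Usage: SimpleLogger logger("d:/logtest/all.log"); logger.Log(SimpleLogger::INFO, "infoinfo");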
Second: For *nix only. Activate logging to stderr in glog using the logtostderr flag and redirect stderr to the desired log file.
FLAGS_logtostderr=1;
LOG(INFO) << "Info";
LOG(WARNING) << "Warning";
LOG(ERROR) << "Error";
and on the shell: ./MyProg 2>logFile
Last: Keep everything as it is and delete the log files you don't need, either within your program using C/C++ or with a call to system().
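For example, a small cleanup sketch could look like this (assuming C++17; the directory and the severity-based file-name check are assumptions based on how glog names its files):
#include <filesystem>
#include <string>
namespace fs = std::filesystem;
// Delete the per-severity log files you do not want, keeping only the INFO one.
void RemoveExtraLogs(const fs::path& logDir) {
    for (const auto& entry : fs::directory_iterator(logDir)) {
        const std::string name = entry.path().filename().string();
        if (name.find("WARNING") != std::string::npos ||
            name.find("ERROR") != std::string::npos ||
            name.find("FATAL") != std::string::npos) {
            fs::remove(entry.path());
        }
    }
}
// Usage: RemoveExtraLogs("d:/logtest/");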
Related
Because my server may run for a long time, the log file will become too large. Is there any way to split the logs according to size or time?
Since you are worried about a large log file, try conditional logging or occasional logging.
You can use the following macros to perform conditional logging:
LOG_IF(INFO, num_cookies > 10) << "Got lots of cookies";
The "Got lots of cookies" message is logged only when the variable num_cookies exceeds 10. If a line of code is executed many times, it may be useful to only log a message at certain intervals. This kind of logging is most useful for informational messages.
LOG_EVERY_N(INFO, 10) << "Got the " << google::COUNTER << "th cookie";
The above line outputs a log message on the 1st, 11th, 21st, ... times it is executed. Note that the special google::COUNTER value is used to identify which repetition is happening.
You can combine conditional and occasional logging with the following macro.
LOG_IF_EVERY_N(INFO, (size > 1024), 10) << "Got the " << google::COUNTER
<< "th big cookie";
Instead of outputting a message every nth time, you can also limit the output to the first n occurrences:
LOG_FIRST_N(INFO, 20) << "Got the " << google::COUNTER << "th cookie";
This outputs log messages only for the first 20 times the line is executed. Again, the google::COUNTER identifier indicates which repetition is happening.
You can check here for more info
Now I have found a way to split logs: using third-party libraries (e.g. https://github.com/natefinch/lumberjack).
I have developed a client-server application with Casablanca cpprestsdk.
Every 5 minutes each client sends information from its task manager (processes, CPU usage, etc.) to the server via the POST method.
The project should be able to manage about 100 clients.
Every time the server receives a POST request it opens an output file stream ("uploaded.txt"), extracts some initial info from the client (login, password), processes that info, saves all the info in a file named after the client (for example client1.txt, client2.txt) in append mode, and finally replies to the client with a status code.
This is basically my POST handler code on the server side:
void Server::handle_post(http_request request)
{
    auto fileBuffer =
        std::make_shared<Concurrency::streams::basic_ostream<uint8_t>>();
    try
    {
        auto stream = concurrency::streams::fstream::open_ostream(
            U("uploaded.txt"),
            std::ios_base::out | std::ios_base::binary).then([request, fileBuffer](pplx::task<Concurrency::streams::basic_ostream<unsigned char>> Previous_task)
        {
            *fileBuffer = Previous_task.get();
            try
            {
                request.body().read_to_end(fileBuffer->streambuf()).get();
            }
            catch (const exception&)
            {
                wcout << L"<exception>" << std::endl;
                //return pplx::task_from_result();
            }
            //Previous_task.get().close();
        }).then([=](pplx::task<void> Previous_task)
        {
            fileBuffer->close();
            //Previous_task.get();
        }).then([](task<void> previousTask)
        {
            // This continuation is run because it is value-based.
            try
            {
                // The call to task::get rethrows the exception.
                previousTask.get();
            }
            catch (const exception& e)
            {
                wcout << e.what() << endl;
            }
        });
        //stream.get().close();
    }
    catch (const exception& e)
    {
        wcout << e.what() << endl;
    }
    ManageClient();
    request.reply(status_codes::OK, U("Hello, World!")).then([](pplx::task<void> t) { handle_error(t); });
    return;
}
Basically it works, but if I try to send info from two clients at the same time, sometimes it works and sometimes it doesn't.
Obviously the problem is when I open the "uploaded.txt" file stream.
Questions:
1) Is the Casablanca http_listener really multitasking? How many tasks is it able to handle?
2) I didn't find an example similar to mine in the documentation; the closest one is the "Casalence120" project, but it uses the Concurrency::Reader_writer_lock class (which seems to be a mutex-like approach).
What can I do in order to manage multiple POST requests?
3) Is it possible to read some client info before opening uploaded.txt?
I could open an output file stream directly with the name of the client.
4) If I lock access to the uploaded.txt file via a mutex, the server becomes sequential, and I think this is not a good way to use cpprestsdk.
I'm still getting to grips with cpprestsdk, so any suggestions would be helpful.
Yes, the REST SDK processes every request on a different thread.
I confirm there are not many examples using the listener.
The official sample using the listener can be found here:
https://github.com/Microsoft/cpprestsdk/blob/master/Release/samples/CasaLens/casalens.cpp
I see you are working with VS. I would strongly suggest moving to VC++ 2015 or, better, VC++ 2017, because the most recent compiler supports coroutines.
Using co_await dramatically improves the readability of the code.
Essentially, every time you co_await a function, the compiler refactors the code into a "continuation", avoiding the penalty of blocking the thread that executes the function. This way, you get rid of the ".then" statements.
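As a rough sketch only (assuming VS2017 with /await enabled and a cpprestsdk build whose pplx::task supports co_await), the body-saving part of the handler could then be flattened to something like this:
#include <cpprest/http_listener.h>
#include <cpprest/filestream.h>
using namespace web::http;
// Each co_await below replaces one ".then" continuation from the original handler.
pplx::task<void> save_body_and_reply(http_request request)
{
    auto stream = co_await concurrency::streams::fstream::open_ostream(
        U("uploaded.txt"), std::ios_base::out | std::ios_base::binary);
    co_await request.body().read_to_end(stream.streambuf());
    co_await stream.close();
    co_await request.reply(status_codes::OK, U("Hello, World!"));
}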
The file problem is a different story from the REST SDK. Accessing the file system concurrently is something you should test in a separate project. You can probably cache the first read and share the content with the other threads instead of accessing the disk every time.
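One possible pattern (just a sketch; the GetClientName helper and the per-client file naming are my assumptions, not part of cpprestsdk) is to serialize only the short file write while keeping the request handling itself concurrent:
#include <fstream>
#include <mutex>
#include <string>
std::mutex g_fileMutex;  // protects only the append, not the whole request
// Hypothetical helper: in the real code this would parse the login from the body.
std::string GetClientName(const std::string& body) {
    return body.substr(0, body.find('\n'));  // placeholder: first line as the name
}
void AppendClientInfo(const std::string& body) {
    const std::string fileName = GetClientName(body) + ".txt";  // e.g. client1.txt
    std::lock_guard<std::mutex> lock(g_fileMutex);
    std::ofstream out(fileName, std::ios::app);  // append mode, one file per client
    out << body << '\n';
}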
I have to run bash heavy-job.sh <data-num> (which takes 0.5~2 days) frequently on my computer to process data located at ~/a/data/num . The script calls a few sub-processes sequentially and writes a log to ~/a/result/num.log . I have done this manually until now.
I wanted to visualize the processed tasks and their status (success or fail), etc. as an HTML table. I wrote a simple Sinatra app to render a table that shows:
the list of ~/a/data/num to be processed
whether ~/a/result/num.log exists (process not launched / processing / done)
its status (whether the log file contains the word "error")
I found it would be convenient if I could launch bash heavy-job.sh <data-num> from the Sinatra app, log the tasks (and info like time, date, etc.) and their args (heavy-job.sh takes some optional args), and show them as an HTML table.
So I need something that manages jobs and logs to files (or db).
First I wrote the code below as a test (just a test, not integrated with my system yet!), but later I found that Resque is what I wanted. I am a beginner and not sure whether my decision is reasonable.
my questions are
is it reasonable to use Resque to manage external long-running commands (and log tasks),
or should I use another tool (not necessarily a Ruby tool)?
(extra) should the task manager and the Sinatra app run separately (and communicate with each other over REST or something), or not?
The jobs are not critical since I can retry tasks manually later if failed.
I am not good at English and my question may be misleading. I appreciate any help :) .
class TaskSpawn
  def initialize()
    @pids = []
  end

  def spawn(command, options = {})
    # opt = {:pgroup => true}
    @pids << Kernel.spawn(command, options)
  end

  def pids()
    return @pids.clone
  end

  def waitany_nohang()
    delete_idx = nil
    ret = nil
    @pids.each_with_index do |p, idx|
      pid, status = Process.waitpid2(p, Process::WNOHANG)
      unless pid.nil?
        delete_idx = idx
        ret = [pid, status]
        break
      end
    end
    if delete_idx
      @pids.delete_at(delete_idx)
      return ret
    else
      # no task finished
      return nil
    end
  end

  def waitall()
    # block until every spawned pid has exited
    ret = @pids.map { |p| Process.waitpid2(p) }
    raise "internal error" if ret.size != pids.size
    @pids.clear  # all children have been reaped
    return ret
  end
end
I'm trying to unzip from within my application. I only need this to work on OS X.
For some reason I cannot get this to unzip my file:
QProcess *proc = new QProcess( this );
proc->start("unzip", QStringList("testFile.zip"));
Any ideas what I'm doing wrong?
There are two things you can try.
1. Instead of "unzip", use "/usr/bin/unzip", i.e., provide the full path to the program.
2. Use one big string, not a string list, like this:
proc->start("/usr/bin/unzip testFile.zip");
If someone is still interested, here is my solution:
QString path = "/tmp/testFile.zip";
QProcess unzip;
unzip.setWorkingDirectory("/tmp/"); // set working directory - for extraction
unzip.start("unzip", QStringList() << "-o" << path); // overwrite files
if (!unzip.waitForFinished()) return; // wait for finished here
QByteArray result = unzip.readAll(); // read the result
unzip.close(); // close the process
qDebug() << result; // debug the result
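If you also want to know whether unzip actually succeeded, you could additionally check the exit status and standard error before the unzip.close() call above (a small sketch reusing the same unzip object):
if (unzip.exitStatus() != QProcess::NormalExit || unzip.exitCode() != 0)
    qDebug() << "unzip failed:" << unzip.readAllStandardError();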
I'm currently testing software written for Windows under Mac OS X 10.6. Most things already work, but currently I'm stuck on one thing:
the native save-file-name dialog under Mac.
QString fileName = m_sSaveAsDir + "untitled." + m_sFileExtension;
qDebug() << "File Extension:" << m_sFileExtension; //"jpg"
qDebug() << "SaveDir:" << m_sSaveAsDir; //""
qDebug() << "Filename:" << fileName; //Filename: "untitled.jpg"
fileName = QFileDialog::getSaveFileName( 0, tr( "Save As" ),
fileName, tr("Images (*.dng *.tif *.jpg)"), 0, 0 );
qDebug() << "Filename:" << fileName; //Filename: "//...../Pictures/untitled.dng"
So obviously the former extension jpg is ignored under Mac OS and therefore neither displayed nor saved, which is fine for me.
Furthermore, the Qt manual says that under Mac OS the filter is ignored. That is correct if I look at the folder in the browser of the save dialog (the files are not filtered). But it seems that the first extension in the filter is used as the extension as long as no extension was entered in the file dialog, which is very annoying.
How can I get around this problem?
I tried to use the non-native save dialog by changing the last argument of the getSaveFileName() method to "DontUseNativeDialog", which pretty much works but doesn't look good.
Any suggestions?
Greetings Donny
You can construct the dialog yourself using non-static QFileDialog methods. Follow the QFileDialog docs for this, then look at QFileDialog::setDefaultSuffix(), which you can set to an empty string, like this:
dialog.setDefaultSuffix(QString());
Then nothing will be automatically appended to the end of the file name. I don't have a Mac handy to test this, but it should work.
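A minimal sketch of building the dialog with the non-static API could look like this (untested on a Mac, as noted above; fileName is the variable from the question):
QFileDialog dialog(0, QObject::tr("Save As"));
dialog.setAcceptMode(QFileDialog::AcceptSave);
dialog.setNameFilter(QObject::tr("Images (*.dng *.tif *.jpg)"));
dialog.selectFile(fileName);          // e.g. "untitled.jpg"
dialog.setDefaultSuffix(QString());   // nothing gets appended automatically
if (dialog.exec() == QDialog::Accepted)
    fileName = dialog.selectedFiles().first();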