How do I view High Sierra system log files? Specifically I want to do so from recovery, or remotely.
In recovery, log collect gives:
“log: failed to collect LiveData: No such file or directory (2)”
log show doesn't seem to work according to its documentation, even on a normally booted system, e.g.
log show --file Persist/00000000000000eb.tracev3 gives:
log: Could not open tracev3 log file: The specified URL did not refer to a valid log archive
Even though the documentation states:
--file file Display events stored in the given .tracev3 file. In order to be decoded, the file must be contained within a valid .logarchive bundle, or part of the system logs directory.
which would seem to imply that a .tracev3 file should not need to be in a log archive if it's in the system logs directory.
In the case of recovery, first starting the logd daemon allows you to collect logs, which is a start, e.g.
launchctl load /System/Library/LaunchDaemons/com.apple.logd.plist
log collect --last 3h
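log collect writes its output to a system_logs.logarchive bundle (in the current directory, unless you pass --output), and log show should accept that bundle directly as an argument, e.g.
log show system_logs.logarchive --last 3h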
I'm still unclear how one would do this remotely, i.e. I still can't work out how to read an arbitrary .tracev3 file in the system logs directory with log.
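One workaround that reportedly works for the remote case (untested here, and it assumes root access to the machine's /var/db, either directly or by copying the directories off it) is to fake up an archive bundle, since a .logarchive is essentially the diagnostics and uuidtext directories packaged together:
mkdir fake.logarchive
cp -R /var/db/diagnostics/ fake.logarchive/
cp -R /var/db/uuidtext/ fake.logarchive/
log show fake.logarchive --last 3h
Some macOS versions may still reject the bundle if its metadata doesn't match what log expects, so treat this as a sketch rather than a guaranteed recipe.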
In Informatica PowerCenter I got an error like: Writer initialization failed. Error opening output file. The system cannot find the path specified.
I checked the directories and file names, but I'm confused about what exactly is wrong.
It's exactly as it says: the Writer failed to initialize because it was not able to locate the path and file specified.
Note that PowerCenter Workflows and Mappings are executed on the Server. So while you develop on your local laptop (for example) and place a file in the C:\Temp folder, where you can see it, once you run the process it is executed on the Server. The Server will not look at your laptop; it will look for the C:\Temp location on its own local disk. And if that's a Unix box, there won't even be a C: path!
Hence the process fails with exactly the message you've seen: initialization failed, error opening output file. You need to place the file in a location accessible to the Server.
In the case of the Writer, you name the target location where the file will be created; make sure the user PowerCenter runs as has write access to it.
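A quick way to verify this is to try creating a file as the same user the PowerCenter service runs under (assuming a Unix server; pcuser and the target path below are placeholders for your actual service account and target file directory):
su - pcuser -c "touch /opt/infa/tgtfiles/write_test.tmp && rm /opt/infa/tgtfiles/write_test.tmp"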
I am running R scripts on a self-hosted Azure DevOps agent. My Windows agent is able to access the system directory where it's hosted. Below is the directory structure for my code:
Agent location: F:/agent
Source code: F:/agent/deployment/projects/project1/sourcecode
DWH dump: F:/agent/deployment/DWH_dump/2021/
Output location: F:/agent/deployment/projects/project1/output_data/2021
The agent uses CMD in the DevOps pipeline to trigger R from the system and uses the libraries from the system directory.
Problem statement: I am unable to save the output from my R script into the output location directory. It gives a permission-denied error pointing at that directory.
Output file format: file_name.rds, but the same issue happens even for a CSV file.
Command leading to failure: saveRDS(result, file = paste0(output_loc, "/file_name.rds")) (where result and output_loc stand for the object being saved and the output location path)
Workaround: I found that saving the files to the source code directory first and then copying them to the output location directory works perfectly fine, but it costs me 2 extra hours of run time, because I have to save all the intermediate files and delete them at the end. Keeping the intermediate files in memory eats up my RAM.
I have not opened that directory anywhere on the machine. The only application open is the browser where the pipeline is running. I spent hours trying to figure out the reason, with no success. I even checked the system PATH to see whether I had mentioned that directory there; it's not present.
When I run the same script directly on the machine using RStudio, I have no issues saving the file to any directory.
I've spent 2 full days on this already. Any pointers to the root cause could save me a few hours of runtime.
The solution was to set the Azure Pipelines agent service in Windows to run with admin credentials. The agent had not been configured as an admin during creation, so after reconfiguring it with my user ID, which has admin access on the VM, the pipelines were able to save files without any trouble.
Feels great, saved a few hours of run time!
I was able to achieve this by following this post.
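For reference, the service log-on account can also be changed without re-creating the agent, either in services.msc or with sc.exe (the service name and account below are placeholders; check services.msc for your actual vstsagent.* service name):
sc.exe config "vstsagent.MyOrg.MyAgent" obj= ".\AdminUser" password= "AdminPassword"
sc.exe stop "vstsagent.MyOrg.MyAgent"
sc.exe start "vstsagent.MyOrg.MyAgent"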
I am trying to isolate a problem in backend logic using a log file. I have made a custom log file for this purpose, because the default log file has too much content to filter through. The module is already live, so I have to read the log file from the server to debug the problem. I noticed while committing that the log file I created was covered by .gitignore. So I wanted to know how this works: are log files generally placed in .gitignore? And do servers make their own log files?
Yes, the server will create its own log files. Version control should not track them, since the information they contain is specific to the environment that produced them (your server, in this case). That's why, by default, the storage/logs directory contains a .gitignore file with the content:
*
!.gitignore
which causes Git to ignore every file in that directory except the .gitignore itself.
If your new log file is in this directory, it will not be tracked by Git either.
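If you want to confirm exactly which rule is ignoring your custom file, git check-ignore will point to it (the path below is just an example; substitute your actual log file):
git check-ignore -v storage/logs/custom-debug.log
The -v flag prints the .gitignore file, line number, and pattern that matched.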
Disclosure: I work with NEAR and am currently on-boarding.
When I start up a local node on a clean machine, I see that a .near folder is created in my home directory with a few configuration files (the exact files seem to depend on which start_ script I run). Another folder, data, appears inside the .near folder.
Running strings ~/.near/data/*.sst in that folder spits out a few lines starting with the string "rocksdb", which led me to this reference to RocksDB.
Is there any way to inspect the contents of a node's RocksDB instance?
I found Keylord, but it crashes when I try to configure a new connection to the database (by pointing the connection at ~/.near/data). I didn't pursue that thread.
PSA1: sometimes it's useful to back up the ~/.near folder between node restarts if you want to reset the environment or avoid reusing old data while troubleshooting:
mv ~/.near ~/.near_`date +%Y-%m-%d.%s`
PSA2: on macOS you can watch what happens to the contents of the ~/.near folder while the node boots up and runs (brew install watch):
watch -d -c -n 0.5 find ~/.near
The content of RocksDB is serialized using our own binary serialization format (http://borsh.io/), so you won't be able to examine the content with general-purpose third-party tools.
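That said, if you just want to eyeball the raw key/value pairs, RocksDB's own ldb utility can dump them in hex (a sketch, assuming you have ldb installed and the node is stopped so the database isn't locked; decoding the values would still require Borsh):
ldb --db=$HOME/.near/data scan --hex | head -20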
I am getting the following message:
Warning: insufficient space on disk where the following directory resides: Z:\TeamCity\.BuildServer\system. Disk space available: 915.53Mb. Please contact your system administrator.
I have already executed the build history clean-up, but it has not done much. Can you please advise which directory under the following path I can clear to make space on disk?
The Z:\TeamCity\.BuildServer\system path contains the artifacts, caches, changes, and messages directories. Which directory should I delete to free up space?
Many Thanks
Take a look at the Clean-up process settings: http://blogs.lessthandot.com/index.php/ITProfessionals/ITProcesses/don-t-forget-to-clean
Wayback Machine Archive Link
By default TeamCity keeps everything forever; you must configure clean-up rules for each project.