Our developer left a few weeks ago, so I have been attempting to get myself a little more acquainted with the systems and troubleshoot a few issues around the site.
I attempted to look at our log files in var/log/ before realizing that they are, erm, several GB each. I know almost nothing about how big a log file should ideally be, but the fact that opening this file very handily crashed my computer suggests it is too big. Before the crash, I could see that there are records in the file that are over five months old.
Is it safe to simply delete system.log and exception.log?
When I search for this, I get a lot of results related to Log Cleaning...
In Magento there are settings under System to turn on 'Log Cleaning', but I suspect that has nothing to do with these two log files, because a) it appears to be set to clean daily and b) cron is set up. If I am incorrect about that, please let me know so I can look into why it is not working correctly.
You guessed correctly. The log cleaning settings refer to tables of logs in the database. The files in var/log/ are safe to delete, as are most files in var/. If you do not need these files, it is simplest to turn logging off under System > Configuration > Developer > Log Settings.
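If you just need the disk space back immediately, truncating the files in place is a safe way to do it while the site is running; a minimal sketch, assuming a standard Magento root:

# Empty the two big logs without deleting them (run from the Magento root).
: > var/log/system.log
: > var/log/exception.log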
However, if you do want to keep some logs just in case, learn how to use logrotate on your server; it can compress the files and delete the oldest for you.
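For example, a minimal logrotate rule could look like the following; the install path is an assumption, so adjust it to wherever Magento actually lives:

# Install a hypothetical rotation rule for Magento's logs.
sudo tee /etc/logrotate.d/magento >/dev/null <<'EOF'
/var/www/magento/var/log/*.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
EOF

This keeps eight compressed generations of each log and drops anything older.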
I'm trying to figure out the root cause of a strange TFS error we are seeing in our current instance. It wasn't noticed until after a server move, but I'm not sure the two are directly related, because the error shows up for check-ins from about a week before the move as well as for all of those following it.
We first noticed the problem when I tried to get latest, and got several errors indicating:
"The downloaded file is corrupt. Please get the file again."
Upon looking into the error, we noticed that, starting from a single check-in, every code update has resulted in files being replaced with the contents of other files (ranging from project files to binary executables, presumably assembly DLLs) rather than the expected content, which is still present on our local development machines.
I don't have admin access to the servers myself, but am looking for ideas on possible causes and/or fixes for our team to investigate.
After weeks of searching, I finally found another mention of this sort of thing happening, along with a solution that appears to have worked.
Clear the Application Tier cache.
MSDN Archived Forums: TFS swapping contents of files
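If it helps anyone else, the clearing itself boils down to stopping IIS on the application tier, emptying the cache folder, and starting IIS again. The folder below is only a placeholder; the real location is configured in the application tier's web.config:

:: Placeholder path: look up the actual cache directory in web.config first.
iisreset /stop
rd /s /q "C:\TfsVersionControlCache"
iisreset /start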
I was working on a static website and decided to change my account in VS Code (the icon in the bottom left corner), as I had recently changed my GitHub username. In doing that, 10k files were staged and all my extensions disappeared.
The staged files include AppData files that I can't clear.
My Discord is affected. I can't open it as when I do, a message comes up and says:
'update.exe' has been moved or changed. Would you like to delete the shortcut?
It's a real issue: I essentially have to reinstall it, but the same problem may still arise, and it has, as I've already tried reinstalling it.
My concern is that I may affect other programs while trying to come up with a solution.
I'm posting this question to get help removing the 10k staged files without affecting program files or AppData. When I try to commit these files, I get an error saying:
fatal: pathspec 'C:\Users\...\getconfig' did not match any files
I would think all I need to do is add these files to a gitignore file, but I would need steps if so.
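Something like this is what I imagine the steps would be; the AppData pattern is my guess based on what shows up in the staged list:

git reset                                    # unstage everything; files on disk are untouched
echo AppData/ >> .gitignore                  # ignore the profile folder picked up by mistake
git rm -r --cached --ignore-unmatch AppData/ # if already tracked, drop it from the index only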
I hope I've explained my problem clearly to get a solution.
Please help.
[Screenshot: staged files]
I hope this is a good place to ask this, otherwise please redirect me to the correct forum.
I have a large amount of data (~400GB) that I need to distribute to all nodes in a cluster (~100 nodes). Any help with how to do this would be appreciated; what follows is what I've tried.
I was thinking of doing this using torrents but I'm running into a bunch of issues. These are the steps I tried:
I downloaded ctorrent to create the torrent, seed it, and download it, but I had a problem because I didn't have a tracker.
I found that qbittorrent-nox has an embedded tracker so I downloaded that on one of my nodes and set the tracker up.
I then created the torrent using the tracker I had set up and copied it to my nodes.
When I run the torrent with ctorrent on the node that has the actual data, to seed it, I get:
Seed for others 72 hours
- 0/0/1 [1/1/1] 0MB,0MB | 0,0K/s | 0,0K E:0,1 Connecting
When I run it on one of the nodes to download the data, I get:
- 0/0/1 [0/1/0] 0MB,0MB | 0,0K/s | 0,0K E:0,1
So it seems they aren't connecting to the tracker properly, but I don't know why. I am probably doing something very wrong, but I can't figure it out.
If anyone can help me with what I am doing wrong, or knows another way of distributing the data efficiently, even without torrents, I would be very happy to hear it.
Thanks in advance for any help available.
But the node that's supposed to be seeding thinks it has 0% of the file, and so it doesn't seed.
If you create a metadata file (.torrent) with tool A and then want to seed it with tool B, you need to point B at both the metadata and the data itself (the content files).
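With ctorrent, for example, that means passing the existing content path when you open the torrent on the seeding node, so it hash-checks what is already on disk instead of starting from 0%; the paths here are made up:

# -s tells ctorrent where the already-downloaded content lives.
ctorrent -s /srv/data/dataset dataset.torrent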
I know it is a different issue now, and might require a different topic, but I'm hoping you might have ideas.
You should create a new question which will have more room for you to provide details.
So this is embarrassing: I might have had it working for a while now, because I did change my implementation since I started. I just re-checked, and the files I was transferring were corrupted in one of my earlier tries, and I had been using them ever since.
So to sum up, this is what worked for me, if anybody else ends up needing the same setup:
I create torrents with transmission-create, since qbittorrent-nox doesn't provide a torrent-creation tool that I could find:
transmission-create /path/to/file/or/directory/to/be/torrented -o /path/to/output/directory/output_file_name.torrent
I open the torrent on the computer with the actual files so it will seed:
qbittorrent-nox ~/path/to/torrent/file/name_of_file.torrent
I copy the .torrent file to all the nodes and run the same command on each to start downloading (see the loop after the settings list):
qbittorrent-nox ~/path/to/torrent/file/name_of_file.torrent
qbittorrent settings I needed to configure:
In "Downloads" change "Save files to location" to the location of the data in the node that is going to be seeding #otherwise that node wont know it has the files specified in the torrent and wont seed them.
The following settings are to avoid issues with torrents sometimes starting as queued and requiring a "force resume" (this doesn't appear to have fixed the problem 100%, though):
In "Speed" tab uncheck "Enable bandwidth management (uTP)"
uncheck "Apply rate limit to uTP connections"
In "BitTorrent" tab uncheck "Torrent Queueing"
Thanks for all the help, and I'm sorry I hassled people for no reason past a certain point.
I have been trying to get my Magento site to take some changes, but it is not refreshing them. I have disabled caching and flushed every cache on every occasion; I have also cleared my browser cache, and it still does not take the changes. I have gone as far as deleting several files the theme relies on from the server, but the site still functions as if nothing was ever removed! What could be my issue?
You keep editing those files. I do not think those files are the files you think they are.
Your question is pretty short on details, but my first guess is that your system is running with the compiler enabled, which means it's loading its class files from
includes/src
Googling around to learn about the compiler would be a good idea.
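If the compiler does turn out to be on, Magento 1 ships a small compiler CLI; run from the Magento root, something like this should confirm it and, if you want, turn it off:

php shell/compiler.php state    # report whether compilation is enabled
php shell/compiler.php disable  # stop loading classes from includes/src
php shell/compiler.php clear    # remove the compiled class files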
I'd try adding the following to the end of your index.php file
echo '<--';
print_r(get_included_files());
echo '-->';
This will list every file PHP used during the request. Compare the full paths with the ones you're editing, and I bet you'll find a discrepancy.
Our VSS database appears to be horribly out of shape. I've been trying to archive it and run "analyze", and I keep getting "Access to file [filename] is denied. The file may be read-only, may be in use, or you may not have permission to write to the file. Correct this problem and run analyze again." No one is logged into SourceSafe (including myself), and I'm running the analyze utility from the VS command prompt as follows:
analyze -v -f -bbackuppath databasepath
I get similar errors if I try and create project archives from the ssadmin tool.
The database is on a network share, and we're running VSS 2005 v8.0.50727.42. I'd love to be able to do this, as it would be a first step in a move away from VSS.
Thanks in advance.
More Info
Every time I run analyze, the file that triggers the access-denied message changes. It's almost as if running analyze unlocks that file, so that the next time around I get through to the next one.
I had this issue with our VSS database as well the most recent time we tried to analyze and repair it.
We did a few things to get it working.
We turned off the network share; apparently we still had users accessing it that we couldn't see. This helped most of the time.
Otherwise, we copied the repository locally and ran analyze on it from there (roughly the commands shown below).
Neither solution is ideal, but we were in a critical situation and it was the only way we got it to work.
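For the copy-it-locally route, it amounted to something like this; the paths are illustrative:

:: Mirror the database to a local folder, then analyze the local copy.
robocopy \\server\vss_share\data C:\VssLocal\data /MIR
analyze -v -f -bC:\VssBackup C:\VssLocal\data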