How to avoid 'No space left' on Bazel build? - caching

Running a lengthy Bazel build on a near-full device, I encounter this error:
ERROR: I/O error while writing action log: No space left on device
However, I can't easily free up space on the device, so I need to manage the cache and/or the temporary storage somehow. I've noticed that Bazel's cache at ~/.cache/bazel/myproject/ can get pretty big, so I was wondering, can I:
delete some files in there after I get the error?
move that cache somewhere else?
disable the cache altogether?
Bazel's User Manual seems to indicate that --[no]use_action_cache would more or less cover that third option, though it would slow things down (and I don't know how effective it would be here).
As for the temporary storage, I do have a location with enough space, so I simply called export TMPDIR=/path/to/morespace/. So if I could move the cache, that would be where it's going.

You can use the startup option --output_base to point to a location where there's more available storage. This will tell Bazel where to write all its outputs.
$ bazel --output_base=/path/to/more/space build ...
To avoid specifying this for every command, add it to your project's <project>/.bazelrc or your user ~/.bazelrc:
startup --output_base=/path/to/more/space
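If you want to double-check where Bazel is actually writing its outputs after the change, bazel info can tell you (the path shown is just the example from above):
$ bazel info output_base
/path/to/more/space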

Related

yarn cache size on Mac OS too big

I just used CleanMyMac's Space Lens feature to understand what was eating my disk space, and I found a huge Yarn cache folder under ~/Library/Caches.
Even with the biggest imagination, I can't think of a reason for that folder being so big. Is it possible to safely (and periodically) delete this folder?
Thank you
Yes, you can delete that directory (or run yarn cache clean -- see How to clear cache in Yarn?).
Yarn, by default, caches the packages it downloads (including different versions of the same package). If you delete this cache, the main side effect you'll see is that yarn install may take longer, because it will need to fetch the necessary packages again.
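If you'd like to see how much space the cache is using before deciding, something like this works with Yarn 1.x (du is just a convenient way to sum up the directory size):
$ yarn cache dir
$ du -sh "$(yarn cache dir)"
$ yarn cache clean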

Kentico's Continuous Integration is very slow

My Continuous Integration restore is running very, very slowly.
After launching it with ContinuousIntegration.exe -r, it hangs after "Restoring objects…" and before "Optimizing file repository…". It can stay in that state for as long as an hour. In the end everything is imported correctly...
With a profiler I've found that most of the time is consumed by CMS.DataEngine.TranslationHelper.
Does anyone have any ideas about what is wrong?
If you have a lot of custom objects, or a lot of data in those out-of-the-box or custom objects with relationships, a -r restore can take a long time to update your local instance. Simply put, it's rebuilding the whole database from the structure in the CI files. Also, the documentation states:
To ensure that the restore process works correctly, you need to stop your Kentico application before running the restore process. Otherwise you may encounter the following problems:
Deadlocks or data inconsistencies if the system attempts to write to the CIRepository folder while data is being restored from the files
Outdated content in the application's cache if you restore without restarting (can cause inconsistencies in the Kentico administration interface or the website's content)
So be sure to stop your instance when restoring to help with the performance.
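As a rough sketch of what that can look like in practice, assuming the site runs in IIS and its application pool is named "Kentico" (adjust the name to your setup), using PowerShell's WebAdministration module:
Import-Module WebAdministration
Stop-WebAppPool -Name "Kentico"     # stop the site so nothing writes to CIRepository during the restore
.\ContinuousIntegration.exe -r      # run the restore (the exe lives in the web project's bin folder)
Start-WebAppPool -Name "Kentico"    # bring the site back up so it starts with a fresh cache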

IntelliJ IDEA 14 and 15 are sometimes irritatingly slow in the editor. How can I speed them up?

I have found that IntelliJ IDEA sometimes becomes irritatingly slow.
Sometimes it is not VERY slow, but sometimes it is as slow as a bad web page. The impression is that it thinks and waits on every keystroke or every word.
Much slower than Visual Studio.
Speed was one of the main reasons I switched from Eclipse. I would not like IntelliJ to turn into the same thing as Eclipse, but for money.
Are there any ways to speed IntelliJ up?
I have added
editor.zero.latency.typing=true
into idea.properties, but it had no effect.
UPDATE
Already set, but this didn't help.
UPDATE 2
I have found that the slowness depends on what is written in the code, i.e. it is somehow related to automatic code inspection or something similar.
I don't want to disable inspections completely; I just don't want them to run on every keystroke. Is it possible to increase the delay somewhere?
Please report your problems in JetBrains' YouTrack. Usually you need to provide a CPU usage profile; how to enable it is described in "Reporting performance problems".
If you have a 64-bit machine, you can launch IntelliJ IDEA from idea64.exe instead of idea.exe.
Second, as the comment suggests, you can edit idea64.exe.vmoptions (yes, that is spelled correctly: .vmoptions is the file extension, while .exe is part of the filename) in pathToIntelliJ/bin and increase the values on the lines starting with -Xms and -Xmx (the memory the JVM allocates at startup and the maximum amount it may use). You may not be able to edit this file in place, but you can copy it to another location where you have write permission, edit it there, and copy it back to the /bin folder.
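For example, after copying the file out, the memory-related part of idea64.exe.vmoptions might look something like this (the values are only a starting point; tune them to the amount of RAM on your machine):
-Xms512m
-Xmx2048m
-XX:ReservedCodeCacheSize=240m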

Xcode Derived Data to /dev/null

Anyone who's worked with Xcode knows how finicky it can be regarding build settings, linker errors, and other generalized nonsense. Add in any dependency manager like CocoaPods, and all of a sudden you're deleting derived data nearly every time you build.
So my question is two-fold:
What exactly is Derived Data responsible for?
and
What would happen if I just dropped its use entirely, by redirecting to /dev/null?
The DerivedData folder contains all the data, well, derived from Xcode processing. This includes any build artifacts such as header maps, intermediate build steps (.o files and such), and built products (compiled code). It is the destination for any and all build logs, run logs, and test results. Finally, it contains any indexing caches used for code coloring and searching.
Basically, it'd break everything. Doing exactly what you say with /dev/null and building causes an extremely large number of issues, mainly because it is actually trying to read and write files there and can't.
Hypothetically, if Xcode could work without DerivedData or anything resembling it (it used to rely heavily on a Build/ folder, for instance), compilation would be impossibly slow and memory hungry.
Strange behavior in Xcode related to DerivedData, and issues that get fixed by clearing it, mostly come down to the fact that cache invalidation is really hard. Like, really difficult.
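So the practical approach is to clear it when it misbehaves rather than disable it. From the terminal that's a one-liner against the default location, or you can point a one-off command-line build at a throwaway path (MyApp is a placeholder scheme name):
$ rm -rf ~/Library/Developer/Xcode/DerivedData
$ xcodebuild -scheme MyApp -derivedDataPath /tmp/DerivedData build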

How to speed up the eclipse project 'refresh'

I have a fairly large PHP codebase (10k files) that I work with using Eclipse 3.4/PDT 2 on a windows machine, while the files are hosted on a Debian fileserver. I connect via a mapped drive on windows.
Despite having a 1gbit ethernet connection, doing an eclipse project refresh is quite slow. Up to 5 mins. And I am blocked from working while this happens.
This normally wouldn't be such a problem since Eclipse theoretically shouldn't have to do a full refresh very often. However I use the subclipse plugin also which triggers a full refresh each time it completes a switch/update.
My hunch is that the slowest part of the process is eclipse checking the 10k files one by one for changes over samba.
There is a large number of files in the codebase that I would never need to access from eclipse, so I don't need it to check them at all. However I can't figure out how to prevent it from doing so. I have tried marking them 'derived'. This prevents them from being included in the build process etc. But it doesn't seem to speed up the refresh process at all. It seems that Eclipse still checks their changed status.
I've also removed the unneeded folders from PDT's 'build path'. This does speed up the 'building workspace' process but again it doesn't speed up the actual refresh that precedes building (and which is what takes the most time).
Thanks all for your suggestions. Basically, JW was on the right track. Work locally.
To that end, I discovered a plugin called FileSync:
http://andrei.gmxhome.de/filesync/
This automatically copies the changed files to the network share. Works fantastically. I can now do a complete update/switch/refresh from within Eclipse in a couple of seconds.
Do you have to store the files on a share? Maybe you can set up some sort of automatic mirroring, so you work with the files locally, and they get automatically copied to the share. I'm in a similar situation, and I'd hate to give up the speed of editing files on my own machine.
Given it's subversioned, why not have the files locally, and use a post commit hook to update to the latest version on the dev server after every commit? (or have a specific string in the commit log (eg '##DEPLOY##') when you want to update dev, and only run the update when the post commit hook sees this string).
Apart from refresh speed-ups, the advantage of this technique is that you can have broken files that you are working on in eclipse, and the dev server is still ok (albeit with an older version of the code).
The disadvantage is that you have to do a commit to push your saved files onto the dev server.
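A minimal sketch of such a post-commit hook (the dev checkout path and the marker string are placeholders; the hook runs on the repository server):
#!/bin/sh
REPOS="$1"
REV="$2"
# only deploy when the commit message contains the marker
if svnlook log -r "$REV" "$REPOS" | grep -q '##DEPLOY##'; then
    svn update /var/www/dev-checkout
fi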
I solved this problem by changing the "File Transfer Buffer Size" at:
Window -> Preferences -> Remote Systems -> Files
and increasing the "File transfer buffer size" Download (KB) and Upload (KB) values. I set them to 1000 KB; by default they are 40 KB.
Use the offline folders feature in Windows by right-clicking the mapped share and selecting "Make available offline".
It can save a lot of time and round-trip delay in the file-sharing protocol.
Using svn externals with the revision flag for the non-changing stuff might prevent Subclipse from refreshing those files on update. Then again, it might not. Since you'd have to make some changes to the structure of your Subversion repository to get it working, I would suggest you do some simple testing before doing it for real.
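For reference, pinning an external to a fixed revision looks roughly like this (the URL, folder name, and revision are made up):
$ svn propset svn:externals '-r 1234 http://svn.example.com/libs/vendor vendor' .
$ svn commit -m "Pin vendor external to r1234"
$ svn update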
