Yarn Offline Mirroring Doesn't Work As Expected - node-modules

I'm trying to utilise yarn offline mirroring, and it's not working as I expect.
How I expect it to work:
Tarballs get saved in a cache folder specified in the .yarnrc, and yarn install --offline extracts them into node_modules in up to 5 seconds, and everyone is happy.
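For reference, the offline mirror is configured in .yarnrc roughly like this (yarn-offline-mirror is the documented setting; the path is just my choice):
yarn-offline-mirror "./npm-packages-offline-cache"
yarn-offline-mirror-pruning true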
How it seems to (not) work:
After I did everything described in the doc above, I:
Delete node_modules
Try to yarn install --offline again with my wifi turned off.
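In terms of commands, that's just (assuming a plain rm for the delete):
rm -rf node_modules
yarn install --offline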
This results in a failure at the Linking dependencies... step (the third one, after Resolving and Fetching). The error is a package (chromedriver) trying to use the internet connection, which seems to be a symptom rather than the actual problem.
The Fetching step is very quick, so it does seem to find the local tarballs, but why does the linking take so long? I'm talking about ~4-5 minutes for each yarn install, which ends up taking pretty much the same amount of time as an online install, gaining me nothing overall except lots of binaries in my repo.
Is the process itself faulty? Am I doing something wrong, or not running the correct commands? The docs are not clear, to say the least.

Related

Bitbake stopped in do_fetch

I have been trying to create a Linux image for the VAR-SOM-MX8X. It's been a rough beginning, but I have managed to create the layers and so on. I started the bitbake two days ago and left the system working; I already knew it was going to be a long process.
The problem is that since yesterday at 23:26 the log inside my tmp/buildstats/ shows the build has been trying to download this Git repository:
git clone --branch imx_5.4.24_2.1.0 https://source.codeaurora.org/external/imx/linux-imx.git
I have tried to download it myself with git clone and it starts; it's huge, but it downloads properly. The bitbake download, however, doesn't look like it's working: the progress bar has stayed at 41% all day long.
-- UPDATE
Okay, I didn't know you could actually stop bitbake with Ctrl+C twice: once first, and again after the "waiting for running tasks" dialogue appears. Anyway, my download has not gone far; my system keeps downloading the same git path all the time and makes no progress. Once it managed to reach 100%, but then it stopped there forever. Any idea why this is happening?
I have finally managed to get the image; the problem may have been a connectivity issue. First I tried to do it on my work laptop in my office, but it was too slow. Finally, trying it on my home PC, I managed to get the image. Some recipes failed, but with
bitbake <image_name> -c cleanall
I managed to get everything done. Moreover, nodejs didn't compile properly, so I removed it from my local.conf file and everything worked fine.
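For anyone following along, the recovery sequence was roughly this (the second invocation is the obvious rebuild, shown here for completeness):
bitbake <image_name> -c cleanall
bitbake <image_name>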

yarn cache size on Mac OS too big

I just used Clean My Mac's space lens feature to understand what was eating my disk space, and I found a huge yarn folder under ~/Library/Caches.
Even with the biggest imagination, I can't think of a reason for that folder being so big. Is it possible to safely (and periodically) delete this folder?
Thank you
Yes, you can delete that directory (or run yarn cache clean -- see How to clear cache in Yarn?).
Yarn, by default, caches the packages it downloads (including different versions). If you delete this cache, the main side effect you'll see is that yarn install may take longer, because it will need to fetch the necessary packages again.
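For reference, the relevant Yarn 1 commands are:
yarn cache dir   # prints the cache location
yarn cache clean # empties the cache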

Is composer.phar required after installation?

This may be a stupid question, but after a fair bit of googling I am still unsure whether I should be removing the composer.phar file after installation. Is the file just part of the installation, or is it required to run the application?
The Composer executable is used to manage your dependencies, which mostly means running "update" and "install". The result is an autogenerated autoloader and a complete tree of files from the required packages of the application.
The executable itself is not part of the application and therefore is not needed to run it. For security reasons it should not be present on live servers unless you really know it has to be there; it seems like a good idea not to hand an attacker useful tools.
The proper places to have the executable are your development environment (in order to add new packages and update the old ones) and the deployment server that puts the application onto the live server (otherwise you cannot install the packages that your application runs with).
I know that people tend to create a workflow that simply pushes a branch to production, and a post-receive hook then runs composer install, but this is dangerous from a reliability standpoint: what if GitHub has an unexpected downtime just when you push to production, leaving you unable to download the new packages? In that scenario, the server doing the deployment actually is the production server and so requires a copy of the Composer executable, but as explained above, this is not an ideal setup.
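As a sketch, the deployment machine would then run something along these lines (the flags are Composer's documented options for production installs):
composer install --no-dev --optimize-autoloader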

How to Get Pig to Work with lzo Files?

So, I've seen a couple of tutorials for this online, but each seems to say to do something different. Also, none of them seems to specify whether you're trying to get things working on a remote cluster, or to interact locally with a remote cluster, etc.
That said, my goal is just to get my local computer (a Mac) to make Pig work with lzo-compressed files that exist on a Hadoop cluster that has already been set up to work with lzo files. I already have Hadoop installed locally and can get files from the cluster with hadoop fs -[command].
I also already have pig installed locally and communicating with the hadoop cluster when I run scripts or when I just run stuff through grunt. I can load and play around with non-lzo files just fine. My problem is only in terms of figuring out a way to load lzo files. Maybe I can just process them through the cluster's instance of ElephantBird? I have no idea, and have only found minimal information online.
So, any sort of short tutorial or answer for this would be awesome, and would hopefully help more people than just me.
I recently got this to work and wrote up a wiki on it for my coworkers. Here's an excerpt detailing how to get PIG to work with lzos. Hope this helps someone!
NOTE: This is written with a Mac in mind. The steps will be almost identical for other OSes, and this should definitely give you what you need to know to configure on Windows or Linux, but you will need to extrapolate a bit (obviously, change Mac-centric folders to whatever OS you're using, etc.).
Hooking PIG up to be able to work with LZOs
This was by far the most annoying and time-consuming part for me, not because it's difficult, but because there are 50 different tutorials online, none of which are all that helpful. Anyway, what I did to get this working is:
Clone hadoop-lzo from github at https://github.com/kevinweil/hadoop-lzo.
Compile it to get a hadoop-lzo*.jar and the native *.o libraries. You'll need to compile this on a 64-bit machine.
Copy the native libs to $HADOOP_HOME/lib/native/Mac_OS_X-x86_64-64/.
Copy the java jar to $HADOOP_HOME/lib and $PIG_HOME/lib.
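As a sketch, those copy steps look something like this (assuming ant's default build/ output layout in the hadoop-lzo checkout):
cp build/hadoop-lzo-*.jar $HADOOP_HOME/lib/
cp build/hadoop-lzo-*.jar $PIG_HOME/lib/
cp build/native/Mac_OS_X-x86_64-64/lib/* $HADOOP_HOME/lib/native/Mac_OS_X-x86_64-64/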
Then configure hadoop and pig to have the property java.library.path
point to the lzo native libraries. You can do this in $HADOOP_HOME/conf/mapred-site.xml with:
<property>
  <name>mapred.child.env</name>
  <value>JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native/Mac_OS_X-x86_64-64/</value>
</property>
Now try out the grunt shell by running pig again, and make sure everything still works. If it doesn't, you probably messed something up in mapred-site.xml and you should double-check it.
Great! We're almost there. All you need to do now is install elephant-bird. You can get that from https://github.com/kevinweil/elephant-bird (clone it).
Now, in order to get elephant-bird to work, you'll need quite a few prerequisites. These are listed on the page mentioned above and might change, so I won't specify them here. What I will mention is that the versions of these are very important: if you get an incorrect version and try running ant, you will get errors. So don't try grabbing the prerequisites from brew or MacPorts, as you'll likely get a newer version. Instead, just download the tarballs and build each one.
Run ant in the elephant-bird folder in order to create a jar.
For simplicity's sake, move all the relevant jars (hadoop-lzo-x.x.x.jar and elephant-bird-x.x.x.jar) that you'll need to register frequently to somewhere you can easily find them. /usr/local/lib/hadoop/... works nicely.
Try things out! Play around with loading normal files and lzos in the grunt shell. Register the relevant jars mentioned above, try loading a file, limiting the output to a manageable number, and dumping it. This should all work fine whether you're using a normal text file or an lzo.
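For example, a grunt session along these lines (the jar versions and the HDFS path are placeholders; LzoTextLoader is elephant-bird's plain-text LZO loader):
REGISTER /usr/local/lib/hadoop/hadoop-lzo-x.x.x.jar;
REGISTER /usr/local/lib/hadoop/elephant-bird-x.x.x.jar;
logs = LOAD '/path/to/file.lzo' USING com.twitter.elephantbird.pig.load.LzoTextLoader();
first_few = LIMIT logs 10;
DUMP first_few;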

create project with compass/sass

I'm having a problem getting started with compass/sass. I eventually managed to install compass, although I had to google around because the instructions on the compass website didn't work for me.
Next step was to create a project. I thought this would be simple enough by typing:
$ compass create path/to/project --using blueprint/basic --sass-dir=sass --css-dir=css
Unfortunately, this didn't work. The first thing to fail was that it told me that --using was not a recognised command (even though that is exactly what the compass installation instructions tell you to type). So I tried creating the project again with all three of the additional options removed.
This did create a project, although not in the place I specified. Rather than placing it in path/to/project, it created the files and directories straight into my home folder, i.e. /Users/me/.
I must be doing something wrong; I can't believe that a tool designed to save time and make life easier could be so difficult to get up and running. I'm not great at using the command line, but I am able to follow instructions!
Any pointers would be appreciated!
It sounds like you're running compass v0.8; please upgrade to v0.10 and that command will work.
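As a quick sketch, the upgrade via RubyGems (which is how compass is distributed) looks like:
sudo gem update compass
gem list compass
compass create path/to/project --using blueprint/basic --sass-dir=sass --css-dir=css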