Grails asset-pipeline: Reference a dependency's assets

I'm building a Grails app that uses assets from a dependency as if they were in my own project. Running the app in development mode works fine, since none of the files get uglified/minified. However, when the files are pre-processed for a production build, there are errors because the processor can't find them.
You can see it in this output from the assetCompile task:
:assetCompile
...
Unable to Locate Asset: /spring-websocket.js
Unable to Locate Asset: /spring-websocket
Uglifying File 18 of 28 - application
Compressing File 18 of 28 - application
Processing File 19 of 28 - jquery-2.1.3.js
Uglifying File 19 of 28 - jquery-2.1.3
Compressing File 19 of 28 - jquery-2.1.3
Processing File 20 of 28 - my-websocket.js
Unable to Locate Asset: /spring-websocket
Uglifying File 20 of 28 - my-websocket
Compressing File 20 of 28 - my-websocket
...
Processing File 26 of 28 - sockjs.js
Uglifying File 26 of 28 - sockjs
Compressing File 26 of 28 - sockjs
Processing File 27 of 28 - spring-websocket.js
Unable to Locate Asset: /sockjs
Unable to Locate Asset: /stomp
Uglifying File 27 of 28 - spring-websocket
Compressing File 27 of 28 - spring-websocket
Processing File 28 of 28 - stomp.js
Uglifying File 28 of 28 - stomp
Compressing File 28 of 28 - stomp
Finished Precompiling Assets
The assets needed are bundled with spring-websocket (sockjs.js and stomp.js). You can see the precompiler complaining about them but then eventually finding them at the end. Those individual files make it into the final .war, but not into the minified application.js that contains my code which depends on them. Does asset-pipeline have a way of dealing with this?
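For what it's worth, a sketch of one possible workaround, assuming the dependency ships its files somewhere asset-pipeline scans (it resolves assets from META-INF/assets, META-INF/static, and META-INF/resources inside jars on the classpath): require the files by their full path within that folder instead of by bare name. The paths and version numbers below are hypothetical, not taken from the actual jar:
// application.js -- the /webjars/... paths below are illustrative guesses
//= require /webjars/sockjs-client/0.3.4/sockjs
//= require /webjars/stomp-websocket/2.3.3/stomp
//= require spring-websocket
If the precompiler still can't see into the jar, a pragmatic fallback is to copy the two files into your own grails-app/assets/javascripts so the manifest resolves against local copies.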

Related

Waiting for missing types to be registered:

I'm trying to run node-red on a Moxa UC8112, which has no graphics whatsoever and runs entirely through an SSH command line.
I have tried to register and fix the node_modules with "npm install request" and "npm audit fix", but I still receive the "Waiting for missing types to be registered:" error.
My command prompt is as follows:
moxa@Moxa:~/.node-red$
moxa@Moxa:~/.node-red$ npm install request
+ request@2.88.0
updated 1 package and audited 387 packages in 85.818s
found 0 vulnerabilities
moxa@Moxa:~/.node-red$ npm audit fix
up to date in 48.031s
fixed 0 of 0 vulnerabilities in 387 scanned packages
moxa@Moxa:~/.node-red$ node-red
31 Jan 11:47:40 - [info]
Welcome to Node-RED
===================
31 Jan 11:47:40 - [info] Node-RED version: v0.19.5
31 Jan 11:47:40 - [info] Node.js version: v6.14.0
31 Jan 11:47:40 - [info] Linux 4.1.0-ltsi-rt-uc8100-me+ arm LE
31 Jan 11:47:45 - [info] Loading palette nodes
31 Jan 11:47:50 - [warn] rpi-gpio : Raspberry Pi specific node set inactive
31 Jan 11:47:50 - [warn] rpi-gpio : Cannot find Pi RPi.GPIO python library
31 Jan 11:48:02 - [info] Settings file : /home/moxa/.node-red/settings.js
31 Jan 11:48:02 - [info] Context store : 'default' [module=memory]
31 Jan 11:48:02 - [info] User directory : /home/moxa/.node-red
31 Jan 11:48:02 - [warn] Projects disabled : editorTheme.projects.enabled=false
31 Jan 11:48:02 - [info] Flows file : /home/moxa/.node-red/flows_Moxa.json
31 Jan 11:48:02 - [info] Server now running at http://127.0.0.1:1880/
31 Jan 11:48:02 - [warn]
---------------------------------------------------------------------
Your flow credentials file is encrypted using a system-generated key.
If the system-generated key is lost for any reason, your credentials
file will not be recoverable, you will have to delete it and re-enter
your credentials.
You should set your own key using the 'credentialSecret' option in
your settings file. Node-RED will then re-encrypt your credentials
file using your chosen key the next time you deploy a change.
---------------------------------------------------------------------
31 Jan 11:48:02 - [info] Waiting for missing types to be registered:
31 Jan 11:48:02 - [info] - twilioConfig
31 Jan 11:48:02 - [info] - modbustcp-server
31 Jan 11:48:02 - [info] - twilio-api
31 Jan 11:48:02 - [info] - modbus-client
31 Jan 11:48:02 - [info] - amazon config
31 Jan 11:48:02 - [info] - sms
31 Jan 11:48:02 - [info] - modbus-getter
I suspect the issue may be with how I installed the node_modules, even though I made sure to run "npm install" in the .node-red directory.
To move a flow from one instance of Node-RED to another, you need to ensure that all the nodes it uses are installed on the target system.
You can either install them via the Manage palette option in the menu or with npm on the command line.
The easiest way is probably to copy the package.json file from the .node-red directory on the source system to the .node-red directory on the target and then run npm install while in that directory, as sketched below.
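A minimal sketch of that copy-and-install flow, assuming SSH access between the two machines (user and host names are placeholders):
# on the target system: fetch package.json from the source system
scp user@source-host:~/.node-red/package.json ~/.node-red/
# then install everything it lists, from inside the user directory
cd ~/.node-red
npm install
After restarting node-red, the missing node types listed in the warning (twilio, modbus, etc.) should then be registered.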

Suspicious Activity in system.log OSX

A Mac user was having some clock errors and thought they had seen someone using remote/VNC control of their screen. I went through system.log, and most of this activity shows up at times when the laptop was supposedly off and unplugged (no battery) and the user was asleep.
System.log file here- https://ghostbin.com/paste/mcukf
These were the lines that interested me.
A Java connection, possibly causing the clock to be off:
Apr 24 23:54:32 Ushas-Air Java Updater[531]: Original euid:501
Apr 24 23:54:32 Ushas-Air com.apple.xpc.launchd[1] (com.apple.preference.datetime.remoteservice[366]): Service exited due to signal: Killed: 9 sent by com.apple.preference.datetime.re[366]
Apr 24 23:54:32 Ushas-Air Java Updater[531]: Host name is javadl-esd-secure.oracle.com
Apr 24 23:54:32 Ushas-Air Java Updater[531]: Feed URL: https
Apr 24 23:54:32 Ushas-Air Java Updater[531]: Hostname check passed. Valid Oracle hostname
Apr 24 23:54:33 Ushas-Air com.apple.xpc.launchd[1] (com.apple.bsd.dirhelper[523]): Endpoint has been activated through legacy launch(3) APIs. Please switch to XPC or bootstrap_check_in(): com.apple.bsd.dirhelper
Apr 24 23:54:36 Ushas-Air java[541]: objc[541]: Class JavaLaunchHelper is implemented in both /Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/bin/java (0x1023604c0) and /Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/lib/jli/./libjli.dylib (0x119327480). One of the two will be used. Which one is undefined.
Instances of IMRemoteURLConnectionAgent occurring:
Apr 25 00:14:11 Ushas-MacBook-Air com.apple.xpc.launchd[1] (com.apple.imfoundation.IMRemoteURLConnectionAgent): Unknown key for integer: _DirtyJetsamMemoryLimit
Apr 25 00:01:22 Ushas-MacBook-Air com.apple.xpc.launchd[1] (com.apple.imfoundation.IMRemoteURLConnectionAgent): Unknown key for integer: _DirtyJetsamMemoryLimit
Apr 25 00:05:57 Ushas-MacBook-Air com.apple.xpc.launchd[1] (com.apple.preferences.users.remoteservice[762]): Service exited due to signal: Killed: 9 sent by com.apple.preferences.users.remo[762]
Multiple cache deletes requested afterwards:
Apr 25 00:01:27 Ushas-MacBook-Air logd[57]: _handle_cache_delete_with_urgency(0x7fdf19412a60, 3, 0)
Apr 25 00:01:27 Ushas-MacBook-Air logd[57]: _handle_cache_delete_with_urgency(0x7fdf19412a60, 3, 0)
Apr 25 00:01:31 Ushas-MacBook-Air com.apple.preferences.icloud.remoteservice[700]: BUG in libdispatch client: kevent[EVFILT_MACHPORT] monitored resource vanished before the source cancel handler was invoked
Apr 25 00:01:33 Ushas-MacBook-Air logd[57]: _handle_cache_delete_with_urgency(0x7fdf19658620, 3, 0)
Apr 25 00:01:33 Ushas-MacBook-Air logd[57]: _volume_contains_cached_data(is /private/var/db/diagnostics/ in /) - YES
Apr 25 00:01:34 Ushas-MacBook-Air logd[57]: 239517600 bytes of purgeable space from log files
Apr 25 00:01:34 Ushas-MacBook-Air logd[57]: _purge_uuidtext only runs at urgency 0 (3)
Apr 25 00:01:34 Ushas-MacBook-Air logd[57]: 0 bytes of purgeable space from uuidtext files
And it appears to be launching the FamilyCircle framework:
Apr 24 23:56:11 Ushas-Air com.apple.xpc.launchd[1] (com.apple.imfoundation.IMRemoteURLConnectionAgent): Unknown key for integer: _DirtyJetsamMemoryLimit
Apr 24 23:56:16 --- last message repeated 1 time ---
Apr 24 23:56:16 Ushas-Air familycircled[615]: objc[615]: Class FAFamilyCloudKitProperties is implemented in both /System/Library/PrivateFrameworks/FamilyCircle.framework/Versions/A/FamilyCircle (0x7fffbe466a60) and /System/Library/PrivateFrameworks/FamilyCircle.framework/Versions/A/Resources/familycircled (0x10aa01178). One of the two will be used. Which one is undefined.
Apr 24 23:56:16 Ushas-Air familycircled[615]: objc[615]: Class FAFamilyMember is implemented in both /System/Library/PrivateFrameworks/FamilyCircle.framework/Versions/A/FamilyCircle (0x7fffbe466880) and /System/Library/PrivateFrameworks/FamilyCircle.framework/Versions/A/Resources/familycircled (0x10aa01268). One of the two will be used. Which one is undefined.
Apr 24 23:56:16 Ushas-Air familycircled[615]: objc[615]: Class FAFamilyCircle is implemented in both /System/Library/PrivateFrameworks/FamilyCircle.framework/Versions/A/FamilyCircle (0x7fffbe466a10) and /System/Library/PrivateFrameworks/FamilyCircle.framework/Versions/A/Resources/familycircled (0x10aa01358). One of the two will be used. Which one is undefined.
Activity related to Find My Friends. The Mac's owner doesn't use Find My Friends or have an iPhone.
Apr 25 00:30:00 Ushas-MacBook-Air syslogd[40]: Configuration Notice:
ASL Module "com.apple.mobileme.fmf1.internal" sharing output destination "/var/log/FindMyFriendsApp/FindMyFriendsApp.asl" with ASL Module "com.apple.mobileme.fmf1".
Output parameters from ASL Module "com.apple.mobileme.fmf1" override any specified in ASL Module "com.apple.mobileme.fmf1.internal".
Apr 25 00:30:00 Ushas-MacBook-Air syslogd[40]: Configuration Notice:
ASL Module "com.apple.mobileme.fmf1.internal" sharing output destination "/var/log/FindMyFriendsApp" with ASL Module "com.apple.mobileme.fmf1".
Output parameters from ASL Module "com.apple.mobileme.fmf1" override any specified in ASL Module "com.apple.mobileme.fmf1.internal".
Apr 25 00:30:00 Ushas-MacBook-Air syslogd[40]: Configuration Notice:
The keybagd log being shared with com.apple.mkb:
Apr 25 00:30:00 Ushas-MacBook-Air syslogd[40]: Configuration Notice:
ASL Module "com.apple.mkb.internal" sharing output destination "/private/var/log/keybagd.log" with ASL Module "com.apple.mkb".
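One way to sanity-check the remote/VNC theory is to see whether the machine's remote-access services are enabled at all. Two standard macOS built-ins for that (run locally with admin rights; output varies by OS version):
# reports whether Remote Login (SSH) is on or off
sudo systemsetup -getremotelogin
# lists the Screen Sharing launchd service if it is loaded
sudo launchctl list | grep -i screensharing
If both come back off/empty, the entries above are far more likely ordinary launchd and daemon chatter than evidence of a remote session.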

About Spark and HBase

This is the first time I've asked a question; if there's anything I should improve, please tell me. Thanks.
Here are my system versions:
jdk1.8.0_65
hadoop-2.6.1
hbase-1.0.2
scala-2.11.7
spark-1.5.1
zookeeper-3.4.6
Now for my question:
I'm going to build a system that stores data from sensors. I need to store the data and analyze it in near real-time, so I use Spark to make my analysis run faster, but I'm wondering: do I really need the HBase database?
There is also a problem when I run Spark. First I run Hadoop's start-all.sh and Spark's start-all.sh, then I run Spark's spark-shell.
This is what I got:
15/12/01 22:16:47 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Welcome to
[Spark ASCII-art banner]
Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_65)
Type in expressions to have them evaluated.
Type :help for more information.
15/12/01 22:16:56 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
Spark context available as sc.
15/12/01 22:16:59 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/12/01 22:16:59 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/12/01 22:17:07 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
15/12/01 22:17:07 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
15/12/01 22:17:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/12/01 22:17:11 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/12/01 22:17:11 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
SQL context available as sqlContext.
scala>
There are so many warnings; am I doing the right thing? For instance, where can I set spark.app.id, and do I even need it? And what does "Failed to get database default, returning NoSuchObjectException" mean?
Thanks for helping me.
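On the spark.app.id question, a minimal sketch of setting it explicitly when launching the shell; Spark accepts arbitrary property overrides via --conf, and the id value below is made up:
spark-shell --conf spark.app.id=sensor-analysis
The MetricsSystem warning is harmless either way; it only means metrics fall back to a default source name. The "Failed to get database default, returning NoSuchObjectException" line typically appears the first time the Hive metastore initializes an empty warehouse, and is also harmless.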

Not a Valid Jar When Running Hadoop Job

I want to run the WordCount example.
In Eclipse it runs correctly, and the output file is present in the output folder.
I made a jar file of WordCount and want to run it with the command
hadoop jar WordCount.jar /Projects/input /Projects/output
It gives me the error:
Not a valid JAR: /Projects/WordCount.jar
Result of hdfs dfs -ls /Projects:
Found 3 items
-rw-r--r-- 1 hduser supergroup 3418 2014-11-02 15:38 /Projects/WordCount.jar
drwxr-xr-x - hduser supergroup 0 2014-11-02 14:13 /Projects/input
drwxr-xr-x - hduser supergroup 0 2014-11-02 14:16 /Projects/output
It gives me the same error with this command as well:
hadoop jar /Projects/WordCount.jar wordPackage.WordCount /Projects/input /Projects/output
Not a valid JAR: /Projects/WordCount.jar
How can I solve this error?
I have run the jar -tvf command; it gives this output:
jar -tvf /home/hduser/Desktop/Files/WordCount.jar
60 Sun Nov 02 16:10:10 PKT 2014 META-INF/MANIFEST.MF
1895 Sun Nov 02 14:02:38 PKT 2014 wordPackage/WordCount.class
1295 Sun Nov 02 14:02:38 PKT 2014 wordPackage/WordCount.java
2388 Sun Nov 02 14:02:06 PKT 2014 wordPackage/WordReducer.class
707 Sun Nov 02 14:02:06 PKT 2014 wordPackage/WordReducer.java
2203 Sun Nov 02 14:02:08 PKT 2014 wordPackage/WordMapper.class
713 Sun Nov 02 14:02:06 PKT 2014 wordPackage/WordMapper.java
16424 Sun Nov 02 13:50:00 PKT 2014 .classpath
420 Sun Nov 02 13:50:00 PKT 2014 .project
You cannot keep the jar in HDFS when executing it with the hadoop command; the jar must be available on the local filesystem.
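For example, a quick way to pull the jar out of HDFS onto the local filesystem (paths taken from the question), after which you can run it from that local path using one of the commands below:
# copy the jar from HDFS to the local directory
hdfs dfs -get /Projects/WordCount.jar /home/hduser/Desktop/Files/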
If the jar is not runnable, try the following (the package and main class need to be specified):
hadoop jar /home/hduser/Desktop/Files/WordCount.jar wordPackage.WordCount /Projects/input /Projects/output
If the jar is runnable, the following can be used:
hadoop jar /home/hduser/Desktop/Files/WordCount.jar /Projects/input /Projects/output
If the issue still persists, rebuild the jar (WordCount.jar) in Eclipse.
If the issue persists even after rebuilding, make sure you have given the jar execute permissions (+x, i.e. chmod 755) before you run the command; in my case this was the reason for the issue.
Command help:
chmod +x jarname.jar
I faced the same issue, but in my case I had written the Java code against Hadoop 1.x libraries and tried to execute it on 2.x. Initially I got the same error in the terminal; then I tried navigating to the path where my jar file was, and it worked.
Maybe you can try the following (after navigating to the jar file's path):
hadoop jar WordCount.jar wordPackage.WordCount /Projects/input /Projects/output

Picard SamToFastq only extracts one read, then throws an error

I'm trying to extract some FastQ files from BAM files. Picard can do this with SamToFastq; as the documentation for this tool says, it accepts either a BAM or SAM file.
But when I run it, it only extracts one read and then exits. Here is the error message; any help is appreciated.
[davy@xxxx picard-tools-1.70]$ java -jar SamToFastq.jar I=/home/davy/xxx_trio_data/xxxx-1.bam F=/home/davy/xxx_trio_data/1005-1.fastq
[Wed Jun 20 14:14:21 BST 2012] net.sf.picard.sam.SamToFastq INPUT=/home/davy/xxx_trio_data/xxxx-1.bam FASTQ=/home/davy/xxxx_trio_data/xxxx-1.fastq OUTPUT_PER_RG=false RE_REVERSE=true INCLUDE_NON_PF_READS=false READ1_TRIM=0 READ2_TRIM=0 INCLUDE_NON_PRIMARY_ALIGNMENTS=false VERBOSITY=INFO QUIET=false VALIDATION_STRINGENCY=STRICT COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false
[Wed Jun 20 14:14:21 BST 2012] Executing as davy@xxxxx.xxxx.xxxx.xx.xx on Linux 2.6.34.9-69.fc13.x86_64 amd64; OpenJDK 64-Bit Server VM 1.6.0_18-b18; Picard version: 1.70(1215)
[Wed Jun 20 14:14:21 BST 2012] net.sf.picard.sam.SamToFastq done. Elapsed time: 0.00 minutes.
Runtime.totalMemory()=2029715456
FAQ: http://sourceforge.net/apps/mediawiki/picard/index.php?title=Main_Page
Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
at java.util.ArrayList.rangeCheck(ArrayList.java:571)
at java.util.ArrayList.get(ArrayList.java:349)
at net.sf.picard.sam.SamToFastq.doWork(SamToFastq.java:156)
at net.sf.picard.cmdline.CommandLineProgram.instanceMain(CommandLineProgram.java:177)
at net.sf.picard.sam.SamToFastq.main(SamToFastq.java:118)
As it turns out, the data is paired-end, not single-read as I had initially thought, and Picard requires a second output file in this case, specified with the SECOND_END_FASTQ option.
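A sketch of the corrected invocation, keeping the redacted paths from the question; the _1/_2 output names are just illustrative, and SECOND_END_FASTQ receives the second read of each pair:
java -jar SamToFastq.jar \
    I=/home/davy/xxx_trio_data/xxxx-1.bam \
    FASTQ=/home/davy/xxx_trio_data/xxxx-1_1.fastq \
    SECOND_END_FASTQ=/home/davy/xxx_trio_data/xxxx-1_2.fastq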