This is the first time I've asked a question here; if there's anything I should improve, please tell me. Thanks.
Here are my system versions:
jdk1.8.0_65
hadoop-2.6.1
hbase-1.0.2
scala-2.11.7
spark-1.5.1
zookeeper-3.4.6
Now for my question:
I'm going to build a system that stores data from sensors.
I need to store the data and analyze it in near real time, so I use Spark to make my analysis run faster, but I'm wondering: do I really need an HBase database?
There are some problems when I run Spark:
First I run Hadoop's start-all.sh and Spark's start-all.sh, then I run Spark's spark-shell.
This is what I got:
15/12/01 22:16:47 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Welcome to
      [Spark ASCII-art banner, version 1.5.1]
Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_65)
Type in expressions to have them evaluated.
Type :help for more information.
15/12/01 22:16:56 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
Spark context available as sc.
15/12/01 22:16:59 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/12/01 22:16:59 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/12/01 22:17:07 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
15/12/01 22:17:07 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
15/12/01 22:17:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/12/01 22:17:11 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/12/01 22:17:11 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
SQL context available as sqlContext.

scala>
There are so many warnings; am I doing this right? Where can I set spark.app.id, and do I even need it? And what does "Failed to get database default, returning NoSuchObjectException" mean?
Thanks for helping me.
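For reference, a minimal sketch of how you might set spark.app.id yourself; the value "sensor-analysis" here is just a placeholder. On the command line:

spark-shell --conf spark.app.id=sensor-analysis

Or when building your own context in a standalone app:

import org.apache.spark.{SparkConf, SparkContext}

// Placeholder app name/id; any stable string works.
val conf = new SparkConf()
  .setAppName("sensor-analysis")
  .set("spark.app.id", "sensor-analysis")
val sc = new SparkContext(conf)

The warning itself is harmless: when spark.app.id is not set, the metrics system simply falls back to a default source name.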
I'm unable to run dbca silently in a Docker container.
First I installed the Oracle software using runInstaller, then ran root.sh and netca. When I run dbca, I always get the following error:
DBCA_PROGRESS : 50%
[ 2017-12-21 21:49:18.914 UTC ] ORA-29283: invalid file operation
ORA-06512: at "SYS.DBMS_QOPATCH", line 1547
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 41
ORA-06512: at "SYS.UTL_FILE", line 478
ORA-06512: at "SYS.DBMS_QOPATCH", line 1532
ORA-06512: at "SYS.DBMS_QOPATCH", line 1417
ORA-06512: at line 1
The alert log says:
QPI : Found directory objects and ORACLE_HOME out of sync
QPI : Trying to patch with the current ORACLE_HOME
QPI: ------QPI Old Directories -------
QPI: OPATCH_SCRIPT_DIR:/ade/b/2717506464/oracle/QOpatch
QPI: OPATCH_LOG_DIR:/ade/b/2717506464/oracle/QOpatch
QPI: OPATCH_INST_DIR:/ade/b/2717506464/oracle/OPatch
QPI: op_scpt_path /u01/app/oracle/product/12.2.0/dbhome_1/QOpatch
QPI: Unable to find proper QPI install
QPI: [1] Please check the QPI directory objects and set them manually
QPI: OPATCH_INST_DIR not present:/ade/b/2717506464/oracle/OPatch
Unable to obtain current patch information due to error: 20013, ORA-20013: DBMS_QOPATCH ran mostly in non install area
ORA-06512: at "SYS.DBMS_QOPATCH", line 777
ORA-06512: at "SYS.DBMS_QOPATCH", line 532
ORA-06512: at "SYS.DBMS_QOPATCH", line 2247
and the trace log shows:
[Thread-66] [ 2017-12-22 17:21:42.931 UTC ] [ClonePostCreateScripts.executeImpl:508] calling dbms_qopatch.replace_logscrpt_dirs()
[Thread-75] [ 2017-12-22 17:21:43.178 UTC ] [BasicStep.handleNonIgnorableError:509] oracle.sysman.assistants.util.SilentMessageHandler#3b2b52b7:messageHandler
[Thread-75] [ 2017-12-22 17:21:43.178 UTC ] [BasicStep.handleNonIgnorableError:510] ORA-29283: invalid file operation
ORA-06512: at "SYS.DBMS_QOPATCH", line 1547
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 41
ORA-06512: at "SYS.UTL_FILE", line 478
ORA-06512: at "SYS.DBMS_QOPATCH", line 1532
ORA-06512: at "SYS.DBMS_QOPATCH", line 1417
ORA-06512: at line 1
Then I tried Oracle's official images, with no success.
The only thing I modified in Oracle's image-creation process is the createAsContainerDatabase parameter in the dbca.rsp file: the original value was true and I changed it to false, because I do not want to create a CDB. The changed line is shown below.
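For clarity, the changed line in dbca.rsp looks like this (response-file syntax from memory, so treat it as a sketch):

createAsContainerDatabase=false

The original image ships with the value set to true.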
Any idea what I am doing incorrectly?
EDIT:
The image build fails on a Docker host running Fedora 25, kernel version 4.10.10-200.fc25.x86_64.
On macOS, and on Debian Jessie with kernel version 3.16.0-4-amd64, dbca runs successfully.
Which storage driver do you use?
I had exactly the same issue with Solus 3, kernel 4.14.8-41.current
Docker version:
Server:
Version: 17.11.0-ce
API version: 1.34 (minimum version 1.12)
Go version: go1.9.2
Git commit: 7cbbc92838236e442de83d7ae6b3d74dd981b586
Built: Sun Nov 26 16:15:47 2017
OS/Arch: linux/amd64
Experimental: false
...
Storage Driver: overlay
Backing Filesystem: extfs
Supports d_type: true
The image I used works fine on Linux Mint (Docker 11, storage driver: aufs).
So I tried to change "overlay" to "overlay2" in settings, and now it works.
Server Version: 17.11.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
...
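For anyone who wants to make the same change: on most Linux hosts the storage driver is switched in /etc/docker/daemon.json, after which the daemon is restarted. A minimal sketch (note that switching drivers hides existing images and containers until you switch back):

{
  "storage-driver": "overlay2"
}

sudo systemctl restart docker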
Creating and starting Oracle instance
35% complete
40% complete
44% complete
49% complete
50% complete
53% complete
55% complete
Completing Database Creation
56% complete
57% complete
58% complete
62% complete
65% complete
66% complete
Executing Post Configuration Actions
100% complete
But I have no idea why it is not working with "overlay"...
Shadowsocks cannot connect to the network on my Mac, but works on Windows.
My system log:
Jul 3 11:45:54 yaojundeMacBook-Pro ShadowsocksX[2004]: Could not bind
Jul 3 11:46:24 --- last message repeated 29 times ---
Jul 3 11:46:24 yaojundeMacBook-Pro ShadowsocksX[2004]: Could not bind
Jul 3 11:46:54 --- last message repeated 29 times ---
Jul 3 11:46:54 yaojundeMacBook-Pro ShadowsocksX[2004]: Could not bind
Jul 3 11:47:24 --- last message repeated 28 times ---
Jul 3 11:47:24 yaojundeMacBook-Pro ShadowsocksX[2004]: Could not bind
Jul 3 11:47:54 --- last message repeated 29 times ---
Jul 3 11:47:54 yaojundeMacBook-Pro ShadowsocksX[2004]: Could not bind
Jul 3 11:48:24 --- last message repeated 29 times ---
Jul 3 11:48:24 yaojundeMacBook-Pro ShadowsocksX[2004]: Could not bind
How can I solve this problem? Thank you very much.
I have the same issue as you: Shadowsocks doesn't work on my MacBook Pro but works perfectly on my Windows machine.
You can try ShadowsocksX-NG; that one works on my Mac.
PS:
Make sure your network doesn't have any other proxy.
My Mac and my Windows PC are on the same network, and that network has a proxy; that is why Shadowsocks doesn't work properly on my Mac while my Windows PC works fine.
I also encountered the same problem. This error means that the proxy port is occupied by another program, so you need to change the Shadowsocks proxy port (default 1080). Modify the port in the file at Users ▸ User Name ▸ Library ▸ Application Support ▸ ShadowsocksX ▸ proxy.config,
changing it to:
Listen-address 127.0.0.1:1085
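Before moving the port, you can also check which process already occupies 1080; a quick sketch for macOS:

sudo lsof -nP -iTCP:1080 -sTCP:LISTEN

If another program is listening there, either stop it or change the Shadowsocks port as described above.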
My Zookeeper client is having trouble connecting to the Hadoop cluster.
This works fine from a Linux VM, but I am using a Mac.
I set the -Dsun.security.krb5.debug=true flag on the JVM and get the following output:
Found ticket for solr@DDA.MYCO.COM to go to krbtgt/DDA.MYCO.COM@DDA.MYCO.COM expiring on Sat Apr 29 03:15:04 BST 2017
Entered Krb5Context.initSecContext with state=STATE_NEW
Found ticket for solr@DDA.MYCO.COM to go to krbtgt/DDA.MYCO.COM@DDA.MYCO.COM expiring on Sat Apr 29 03:15:04 BST 2017
Service ticket not found in the subject
>>> Credentials acquireServiceCreds: same realm
Using builtin default etypes for default_tgs_enctypes
default etypes for default_tgs_enctypes: 17 16 23.
>>> CksumType: sun.security.krb5.internal.crypto.RsaMd5CksumType
>>> EType: sun.security.krb5.internal.crypto.Aes128CtsHmacSha1EType
>>> KrbKdcReq send: kdc=oc-10-252-132-139.nat-ucfc2z3b.usdv1.mycloud.com UDP:88, timeout=30000, number of retries =3, #bytes=682
>>> KDCCommunication: kdc=oc-10-252-132-139.nat-ucfc2z3b.usdv1.mycloud.com UDP:88, timeout=30000,Attempt =1, #bytes=682
>>> KrbKdcReq send: #bytes read=217
>>> KdcAccessibility: remove oc-10-252-132-139.nat-ucfc2z3b.usdv1.mycloud.com
>>> KDCRep: init() encoding tag is 126 req type is 13
>>>KRBError:
cTime is Thu Dec 24 11:18:15 GMT 2015 1450955895000
sTime is Fri Apr 28 15:15:06 BST 2017 1493388906000
suSec is 925863
error code is 7
error Message is Server not found in Kerberos database
cname is solr@DDA.MYCO.COM
sname is zookeeper/oc-10-252-132-160.nat-ucfc2z3b.usdv1.mycloud.com@DDA.MYCO.COM
msgType is 30
KrbException: Server not found in Kerberos database (7) - UNKNOWN_SERVER
at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:73)
at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:251)
at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:262)
at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:308)
at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:126)
at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:458)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:693)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
at org.apache.zookeeper.client.ZooKeeperSaslClient$2.run(ZooKeeperSaslClient.java:366)
at org.apache.zookeeper.client.ZooKeeperSaslClient$2.run(ZooKeeperSaslClient.java:363)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslToken(ZooKeeperSaslClient.java:362)
at org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslToken(ZooKeeperSaslClient.java:348)
at org.apache.zookeeper.client.ZooKeeperSaslClient.sendSaslPacket(ZooKeeperSaslClient.java:420)
at org.apache.zookeeper.client.ZooKeeperSaslClient.initialize(ZooKeeperSaslClient.java:458)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1057)
Caused by: KrbException: Identifier doesn't match expected value (906)
at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
at sun.security.krb5.internal.TGSRep.init(TGSRep.java:65)
at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:60)
at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:55)
... 18 more
ERROR 2017-04-28 15:15:07,046 5539 org.apache.zookeeper.client.ZooKeeperSaslClient [main-SendThread(oc-10-252-132-160.nat-ucfc2z3b.usdv1.mycloud.com:2181)]
An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed
[Caused by GSSException: No valid credentials provided
(Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)])
occurred when evaluating Zookeeper Quorum Member's received SASL token.
This may be caused by Java's being unable to resolve the Zookeeper Quorum Member's hostname correctly.
You may want to try to adding '-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment.
Zookeeper Client will go to AUTH_FAILED state.
I've tested Kerberos config as follows:
>kinit -kt /etc/security/keytabs/solr.headless.keytab solr
>klist
Credentials cache: API:3451691D-7D5E-49FD-A27C-135816F33E4D
Principal: solr@DDA.MYCO.COM
Issued Expires Principal
Apr 28 16:58:02 2017  Apr 29 04:58:02 2017  krbtgt/DDA.MYCO.COM@DDA.MYCO.COM
Following the instructions from Hortonworks, I managed to get the Kerberos ticket stored in a file:
>klist -c FILE:/tmp/krb5cc_501
Credentials cache: FILE:/tmp/krb5cc_501
Principal: solr@DDA.MYCO.COM
Issued Expires Principal
Apr 28 17:10:25 2017  Apr 29 05:10:25 2017  krbtgt/DDA.MYCO.COM@DDA.MYCO.COM
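For reference, the usual way to produce a file-based cache like this is roughly the following (a sketch, not necessarily the exact Hortonworks steps):

export KRB5CCNAME=FILE:/tmp/krb5cc_501
kinit -kt /etc/security/keytabs/solr.headless.keytab solr
klist -c FILE:/tmp/krb5cc_501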
I also tried the JVM option suggested in the stack trace (-Dsun.net.spi.nameservice.provider.1=dns,sun), but this led to a different error along the lines of Client session timed out, which suggests that this JVM param prevents the client from connecting correctly in the first place.
==EDIT==
Seems that the Mac version of Kerberos is not the latest:
> krb5-config --version
Kerberos 5 release 1.7-prerelease
I just tried brew install krb5 to install a newer version, then adjusted my path to point to the new version.
> krb5-config --version
Kerberos 5 release 1.15.1
This has had no effect whatsoever on the outcome.
NB: this works fine from a Linux VM on my Mac, using exactly the same jaas.conf, keytab files, and krb5.conf.
krb5.conf:
[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = DDA.MYCO.COM
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
[realms]
DDA.MYCO.COM = {
admin_server = oc-10-252-132-139.nat-ucfc2z3b.usdv1.mycloud.com
kdc = oc-10-252-132-139.nat-ucfc2z3b.usdv1.mycloud.com
}
Reverse DNS:
I checked that the FQDN hostname I'm connecting to can be found using a reverse DNS lookup:
> host 10.252.132.160
160.132.252.10.in-addr.arpa domain name pointer oc-10-252-132-160.nat-ucfc2z3b.usdv1.mycloud.com.
This is exactly the response to the same command from the Linux VM.
===WIRESHARK ANALYSIS===
Using Wireshark configured with the system keytabs allows a bit more detail in the analysis.
Here I have found that a failed call looks like this:
client -> host AS-REQ
host -> client AS-REP
client -> host AS-REQ
host -> client AS-REP
client -> host TGS-REQ <-- this call is detailed below
host -> client KRB error KRB5KDC_ERR_S_PRINCIPAL_UNKNOWN
The erroneous TGS-REQ call shows the following:
Kerberos
tgs-req
pvno: 5
msg-type: krb-tgs-req (12)
padata: 1 item
req-body
Padding: 0
kdc-options: 40000000 (forwardable)
realm: DDA.MYCO.COM
sname
name-type: kRB5-NT-UNKNOWN (0)
sname-string: 2 items
SNameString: zookeeper
SNameString: oc-10-252-134-51.nat-ucfc2z3b.usdv1.mycloud.com
till: 1970-01-01 00:00:00 (UTC)
nonce: 797021964
etype: 3 items
ENCTYPE: eTYPE-AES128-CTS-HMAC-SHA1-96 (17)
ENCTYPE: eTYPE-DES3-CBC-SHA1 (16)
ENCTYPE: eTYPE-ARCFOUR-HMAC-MD5 (23)
Here is the corresponding successful call from the linux box, which is followed by several more exchanges.
Kerberos
tgs-req
pvno: 5
msg-type: krb-tgs-req (12)
padata: 1 item
req-body
Padding: 0
kdc-options: 40000000 (forwardable)
realm: DDA.MYCO.COM
sname
name-type: kRB5-NT-UNKNOWN (0)
sname-string: 2 items
SNameString: zookeeper
SNameString: d59407.ddapoc.ucfc2z3b.usdv1.mycloud.com
till: 1970-01-01 00:00:00 (UTC)
nonce: 681936272
etype: 3 items
ENCTYPE: eTYPE-AES128-CTS-HMAC-SHA1-96 (17)
ENCTYPE: eTYPE-DES3-CBC-SHA1 (16)
ENCTYPE: eTYPE-ARCFOUR-HMAC-MD5 (23)
So it looks like the client is sending
oc-10-252-134-51.nat-ucfc2z3b.usdv1.mycloud.com
as the server host, when it should be sending:
d59407.ddapoc.ucfc2z3b.usdv1.mycloud.com
So the question is: how do I fix that? Bear in mind this is a piece of Java code.
My /etc/hosts has the following:
10.252.132.160 b3e073.ddapoc.ucfc2z3b.usdv1.mycloud.com
10.252.134.51 d59407.ddapoc.ucfc2z3b.usdv1.mycloud.com
10.252.132.139 d7cc18.ddapoc.ucfc2z3b.usdv1.mycloud.com
And my krb5.conf file has:
kdc = d7cc18.ddapoc.ucfc2z3b.usdv1.mycloud.com
kdc = b3e073.ddapoc.ucfc2z3b.usdv1.mycloud.com
kdc = d59407.ddapoc.ucfc2z3b.usdv1.mycloud.com
I tried adding -Dsun.net.spi.nameservice.provider.1=file,dns as a JVM param but got the same result.
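To narrow this down, one small diagnostic is to ask the JVM directly which canonical name it resolves, since the Kerberos client typically builds the service principal from that name. A sketch using the hostname and IP from above:

import java.net.InetAddress;

public class ResolveCheck {
    public static void main(String[] args) throws Exception {
        // Forward lookup: name -> address
        InetAddress addr = InetAddress.getByName("d59407.ddapoc.ucfc2z3b.usdv1.mycloud.com");
        System.out.println("address:   " + addr.getHostAddress());
        // Canonical (reverse) lookup: this is the name that ends up in the TGS-REQ sname
        System.out.println("canonical: " + addr.getCanonicalHostName());
    }
}

If "canonical" prints the oc-10-252-134-51... NAT name rather than d59407..., then the JVM's resolver, not Kerberos itself, is producing the wrong principal.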
I fixed this by setting up a local dnsmasq instance to supply the forward and reverse DNS lookups.
So now from the command line, host d59407.ddapoc.ucfc2z3b.usdv1.mycloud.com returns 10.252.134.51
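For reference, a minimal sketch of the dnsmasq configuration that supplies both lookups, using the hosts from my /etc/hosts above; a host-record line creates the A and PTR records together:

# /etc/dnsmasq.conf (or a file under /etc/dnsmasq.d/)
host-record=b3e073.ddapoc.ucfc2z3b.usdv1.mycloud.com,10.252.132.160
host-record=d59407.ddapoc.ucfc2z3b.usdv1.mycloud.com,10.252.134.51
host-record=d7cc18.ddapoc.ucfc2z3b.usdv1.mycloud.com,10.252.132.139

Point the Mac's resolver at 127.0.0.1 so these answers are used first.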
Looks like some DNS issue.
Could this SO question help you resolve your problem?
Also, here is a Q&A about the problem.
It could also be caused by a non-Sun JVM.
I am trying a RedshiftCopyActivity from S3 to Redshift, and I get the error below when I run it.
01 Feb 2017 04:08:38,467 [INFO] (TaskRunnerService-resource:df-0657690RH3EEUVGYXWE_#Ec2Instance_2017-02-01T03:43:47-0) df-0657690RH3EEUVGYXWE amazonaws.datapipeline.taskrunner.TaskPoller: Executing: amazonaws.datapipeline.activity.RedshiftCopyActivity#63859f83
01 Feb 2017 04:08:38,962 [ERROR] (TaskRunnerService-resource:df-0657690RH3EEUVGYXWE_#Ec2Instance_2017-02-01T03:43:47-0) df-0657690RH3EEUVGYXWE amazonaws.datapipeline.database.ConnectionFactory: Unable to establish connection to postgresql:/redshiftHost:5439/trivusdev No suitable driver found for postgresql:/redshiftHost:5439/trivusdev
01 Feb 2017 04:08:39,063 [ERROR] (TaskRunnerService-resource:df-0657690RH3EEUVGYXWE_#Ec2Instance_2017-02-01T03:43:47-0) df-0657690RH3EEUVGYXWE amazonaws.datapipeline.database.ConnectionFactory: Unable to establish connection to postgresql:/redshiftHost:5439/trivusdev No suitable driver found for postgresql:/redshiftHost:5439/trivusdev
01 Feb 2017 04:08:39,265 [ERROR] (TaskRunnerService-resource:df-0657690RH3EEUVGYXWE_#Ec2Instance_2017-02-01T03:43:47-0) df-0657690RH3EEUVGYXWE amazonaws.datapipeline.database.ConnectionFactory: Unable to establish connection to postgresql:/redshiftHost:5439/trivusdev No suitable driver found for postgresql:/redshiftHost:5439/trivusdev
01 Feb 2017 04:08:39,666 [ERROR] (TaskRunnerService-resource:df-0657690RH3EEUVGYXWE_#Ec2Instance_2017-02-01T03:43:47-0) df-0657690RH3EEUVGYXWE amazonaws.datapipeline.database.ConnectionFactory: Unable to establish connection to postgresql:/redshiftHost:5439/trivusdev No suitable driver found for postgresql:/redshiftHost:5439/trivusdev
01 Feb 2017 04:08:40,468 [ERROR] (TaskRunnerService-resource:df-0657690RH3EEUVGYXWE_#Ec2Instance_2017-02-01T03:43:47-0) df-0657690RH3EEUVGYXWE amazonaws.datapipeline.database.ConnectionFactory: Unable to establish connection to postgresql:/redshiftHost:5439/trivusdev No suitable driver found for postgresql:/redshiftHost:5439/trivusdev
01 Feb 2017 04:08:40,473 [INFO] (TaskRunnerService-resource:df-0657690RH3EEUVGYXWE_#Ec2Instance_2017-02-01T03:43:47-0) df-0657690RH3EEUVGYXWE amazonaws.datapipeline.taskrunner.HeartBeatService: Finished waiting for heartbeat thread #RedshiftLoadActivity_2017-02-01T03:43:47_Attempt=3
01 Feb 2017 04:08:40,473 [INFO] (TaskRunnerService-resource:df-0657690RH3EEUVGYXWE_#Ec2Instance_2017-02-01T03:43:47-0) df-0657690RH3EEUVGYXWE amazonaws.datapipeline.taskrunner.TaskPoller: Work RedshiftCopyActivity took 0:2 to complete
I have seen someone suggest using PostgreSQL drivers instead of Redshift drivers.
But when I try the PostgreSQL drivers, I get the error:
No suitable driver found for postgresql://.....
Please suggest where I should make the corrections.
In fact, regarding No suitable driver found for postgresql:/redshiftHost:5439/trivusdev: are you sure that this is the right URL? The URL should look like this:
jdbc:postgresql://redshiftHost:5439/trivusdev?OpenSourceSubProtocolOverride=true
I think you are missing the jdbc: prefix and a / before the host.
You can learn more here: Creating a custom Database connection
Hope this can help you.
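To see why the prefix matters: java.sql.DriverManager selects a driver by matching the URL prefix, so without jdbc:postgresql:// no registered driver accepts the URL. A minimal sketch, assuming the PostgreSQL driver jar is on the classpath and using placeholder credentials:

import java.sql.Connection;
import java.sql.DriverManager;

public class RedshiftConnect {
    public static void main(String[] args) throws Exception {
        // The jdbc:postgresql:// prefix is what DriverManager matches against.
        String url = "jdbc:postgresql://redshiftHost:5439/trivusdev?OpenSourceSubProtocolOverride=true";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}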
I'm building a Grails app that uses assets from a dependency as if they were in my own project. Running the app in development mode works fine, since none of the files get uglified/minified. However, when the files are pre-processed for a production build, there are errors because the processor can't find them.
You can see it in this output from the assetCompile task:
:assetCompile
...
Unable to Locate Asset: /spring-websocket.js
Unable to Locate Asset: /spring-websocket
Uglifying File 18 of 28 - application
Compressing File 18 of 28 - application
Processing File 19 of 28 - jquery-2.1.3.js
Uglifying File 19 of 28 - jquery-2.1.3
Compressing File 19 of 28 - jquery-2.1.3
Processing File 20 of 28 - my-websocket.js
Unable to Locate Asset: /spring-websocket
Uglifying File 20 of 28 - my-websocket
Compressing File 20 of 28 - my-websocket
...
Processing File 26 of 28 - sockjs.js
Uglifying File 26 of 28 - sockjs
Compressing File 26 of 28 - sockjs
Processing File 27 of 28 - spring-websocket.js
Unable to Locate Asset: /sockjs
Unable to Locate Asset: /stomp
Uglifying File 27 of 28 - spring-websocket
Compressing File 27 of 28 - spring-websocket
Processing File 28 of 28 - stomp.js
Uglifying File 28 of 28 - stomp
Compressing File 28 of 28 - stomp
Finished Precompiling Assets
The assets needed are bundled with spring-websocket (sockjs.js and stomp.js). You can see the precompiler complaining about them but eventually finding them at the end. Those individual files make it into the final .war, but not into the minified application.js that contains my dependent code. Does asset-pipeline have a way of dealing with this?
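For context, the require directives involved look roughly like this (a simplified sketch reconstructed from the build log above, not my exact files):

// grails-app/assets/javascripts/my-websocket.js
//= require spring-websocket

// spring-websocket.js (inside the spring-websocket jar)
//= require sockjs
//= require stomp

so the "Unable to Locate Asset" lines are the precompiler failing to resolve those jar-relative requires while it minifies.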