I am trying a RedshiftCopyActivity from S3 to Redshift with AWS Data Pipeline, and I get the error below when I run it.
01 Feb 2017 04:08:38,467 [INFO] (TaskRunnerService-resource:df-0657690RH3EEUVGYXWE_#Ec2Instance_2017-02-01T03:43:47-0) df-0657690RH3EEUVGYXWE amazonaws.datapipeline.taskrunner.TaskPoller: Executing: amazonaws.datapipeline.activity.RedshiftCopyActivity#63859f83
01 Feb 2017 04:08:38,962 [ERROR] (TaskRunnerService-resource:df-0657690RH3EEUVGYXWE_#Ec2Instance_2017-02-01T03:43:47-0) df-0657690RH3EEUVGYXWE amazonaws.datapipeline.database.ConnectionFactory: Unable to establish connection to postgresql:/redshiftHost:5439/trivusdev No suitable driver found for postgresql:/redshiftHost:5439/trivusdev
01 Feb 2017 04:08:39,063 [ERROR] (TaskRunnerService-resource:df-0657690RH3EEUVGYXWE_#Ec2Instance_2017-02-01T03:43:47-0) df-0657690RH3EEUVGYXWE amazonaws.datapipeline.database.ConnectionFactory: Unable to establish connection to postgresql:/redshiftHost:5439/trivusdev No suitable driver found for postgresql:/redshiftHost:5439/trivusdev
01 Feb 2017 04:08:39,265 [ERROR] (TaskRunnerService-resource:df-0657690RH3EEUVGYXWE_#Ec2Instance_2017-02-01T03:43:47-0) df-0657690RH3EEUVGYXWE amazonaws.datapipeline.database.ConnectionFactory: Unable to establish connection to postgresql:/redshiftHost:5439/trivusdev No suitable driver found for postgresql:/redshiftHost:5439/trivusdev
01 Feb 2017 04:08:39,666 [ERROR] (TaskRunnerService-resource:df-0657690RH3EEUVGYXWE_#Ec2Instance_2017-02-01T03:43:47-0) df-0657690RH3EEUVGYXWE amazonaws.datapipeline.database.ConnectionFactory: Unable to establish connection to postgresql:/redshiftHost:5439/trivusdev No suitable driver found for postgresql:/redshiftHost:5439/trivusdev
01 Feb 2017 04:08:40,468 [ERROR] (TaskRunnerService-resource:df-0657690RH3EEUVGYXWE_#Ec2Instance_2017-02-01T03:43:47-0) df-0657690RH3EEUVGYXWE amazonaws.datapipeline.database.ConnectionFactory: Unable to establish connection to postgresql:/redshiftHost:5439/trivusdev No suitable driver found for postgresql:/redshiftHost:5439/trivusdev
01 Feb 2017 04:08:40,473 [INFO] (TaskRunnerService-resource:df-0657690RH3EEUVGYXWE_#Ec2Instance_2017-02-01T03:43:47-0) df-0657690RH3EEUVGYXWE amazonaws.datapipeline.taskrunner.HeartBeatService: Finished waiting for heartbeat thread #RedshiftLoadActivity_2017-02-01T03:43:47_Attempt=3
01 Feb 2017 04:08:40,473 [INFO] (TaskRunnerService-resource:df-0657690RH3EEUVGYXWE_#Ec2Instance_2017-02-01T03:43:47-0) df-0657690RH3EEUVGYXWE amazonaws.datapipeline.taskrunner.TaskPoller: Work RedshiftCopyActivity took 0:2 to complete
I have seen someone suggest using the PostgreSQL drivers instead of the Redshift drivers. But when I try with the PostgreSQL drivers, I get the error:
No suitable driver found for postgresql://.....
Please suggest where I should make the corrections.
Regarding No suitable driver found for postgresql:/redshiftHost:5439/trivusdev: are you sure this is the right URL? The URL should look like this:
jdbc:postgresql://redshiftHost:5439/trivusdev?OpenSourceSubProtocolOverride=true
I think you are missing the jdbc: prefix and a / before the host.
You can learn more here: Creating a custom Database connection
Hope this helps.
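For reference, here is a minimal sketch of opening the connection with a correctly formed URL (the credentials below are placeholders, not values from your pipeline):

import java.sql.Connection;
import java.sql.DriverManager;

public class RedshiftConnect {
    public static void main(String[] args) throws Exception {
        // Note the jdbc: prefix and the double slash before the host.
        String url = "jdbc:postgresql://redshiftHost:5439/trivusdev"
                + "?OpenSourceSubProtocolOverride=true";

        // The PostgreSQL driver jar must be on the classpath; with very old
        // drivers you may also need Class.forName("org.postgresql.Driver").
        try (Connection conn = DriverManager.getConnection(url, "dbUser", "dbPassword")) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}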
I am inserting rows into a Snowflake database using plain JDBC, calling executeUpdate in a loop. I see that the rows are inserted, but this error is reported:
[error] Sep 09, 2016 9:41:28 AM com.snowflake.client.jdbc.SnowflakeResultSet processMetadata
[error] INFO: unknown parameter: TIME_OUTPUT_FORMAT
[error] Sep 09, 2016 9:41:28 AM com.snowflake.client.jdbc.SnowflakeResultSet processMetadata
[error] INFO: unknown parameter: CLIENT_DISABLE_INCIDENTS
[error] Sep 09, 2016 9:41:28 AM com.snowflake.client.jdbc.SnowflakeResultSet processMetadata
[error] INFO: unknown parameter: JS_DRIVER_DISABLE_OCSP_FOR_NON_SF_ENDPOINTS
[error] Sep 09, 2016 9:41:28 AM com.snowflake.client.jdbc.SnowflakeResultSet processMetadata
[error] INFO: unknown parameter: JS_DRIVER_ENABLE_COMPRESSION
[error] Sep 09, 2016 9:41:28 AM com.snowflake.client.jdbc.SnowflakeResultSet processMetadata
[error] INFO: unknown parameter: ODBC_ENABLE_COMPRESSION
[error] Sep 09, 2016 9:41:28 AM com.snowflake.client.jdbc.SnowflakeResultSet processMetadata
[error] INFO: unknown parameter: CLIENT_SESSION_KEEP_ALIVE
[error] Sep 09, 2016 9:41:28 AM com.snowflake.client.jdbc.SnowflakeResultSet processMetadata
[error] INFO: unknown parameter: JDBC_USE_JSON_PARSER
I am not sure what these errors are, or whether they can be ignored.
Also, I see that the Snowflake JDBC driver does not support executeBatch and executeLargeBatch. So how do I upload a large number of rows from a Java application?
Also, does the JDBC driver support transactions?
Regarding "[error] Sep 09, 2016 9:41:28 AM com.snowflake.client.jdbc.SnowflakeResultSet processMetadata
[error] INFO: unknown parameter: TIME_OUTPUT_FORMAT", these are INFO logs. It means that the driver doesn't handle those parameters. You can ignore them. We will change the behavior that these lines are not logged by default.
We do support executeBatch for PreparedStatement, so you can insert a large number of rows through batch binding. executeLargeBatch is currently not supported, but we can add support for it easily if needed.
Our JDBC driver supports transactions. By default a session starts in auto-commit mode. If you want to turn off auto-commit, call the Connection.setAutoCommit method and then use commit() or rollback() to commit or roll back a transaction. A transaction is transparently started upon the first DML statement.
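To make both points concrete, here is a minimal sketch that batches the inserts through a PreparedStatement inside an explicitly managed transaction (the account URL, credentials, and the table t(id, name) are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SnowflakeBatchInsert {
    public static void main(String[] args) throws Exception {
        // Hypothetical account URL and credentials.
        Connection conn = DriverManager.getConnection(
                "jdbc:snowflake://myaccount.snowflakecomputing.com", "user", "password");
        conn.setAutoCommit(false); // manage the transaction manually
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO t (id, name) VALUES (?, ?)")) {
            for (int i = 0; i < 100000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "row-" + i);
                ps.addBatch();            // bind the row instead of executing it immediately
                if (i % 10000 == 9999) {
                    ps.executeBatch();    // flush periodically to bound memory use
                }
            }
            ps.executeBatch();            // flush the remainder
            conn.commit();                // commit the whole load as one transaction
        } catch (Exception e) {
            conn.rollback();              // undo everything on failure
            throw e;
        } finally {
            conn.close();
        }
    }
}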
I have written a simple Expression transformation: the mapping concatenates the first and last name, and based on the salary of the customer it stores the string high, low, or medium in the target table.
The workflow is validated and there are no errors. But when the workflow is started, it reads the rows from the database, applies the transformation, and then tries to load the target table. While loading the data into the target table it doesn't move forward and gets stuck, and after a long time, when the Integration Service gets disconnected, it throws a repository error.
The interesting thing is that when I run other workflows they don't give any error. Below are the session logs:
DIRECTOR> TM_6014 Initializing session [s_mp_person_exp_person] at [Thu Feb 18 22:21:21 2016].
DIRECTOR> TM_6683 Repository Name: [infa_training]
DIRECTOR> TM_6684 Server Name: [infa_training_IS]
DIRECTOR> TM_6686 Folder: [training]
DIRECTOR> TM_6685 Workflow: [wf_person_exp_person] Run Instance Name: [] Run Id: [38]
DIRECTOR> TM_6101 Mapping name: mp_person_exp_person.
DIRECTOR> TM_6964 Date format for the Session is [MM/DD/YYYY HH24:MI:SS.US]
DIRECTOR> TM_6708 Using configuration property [EnableDataEncryption,no]
DIRECTOR> TM_6708 Using configuration property [StoreHAPersistenceInDB,no]
DIRECTOR> TM_6703 Session [s_mp_person_exp_person] is run by 64-bit Integration Service [node01_Gizmo], version [9.6.1], build [0606].
MANAGER> PETL_24058 Running Partition Group [1].
MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
MANAGER> PETL_24001 Parallel Pipeline Engine running.
MANAGER> PETL_24003 Initializing session run.
MAPPING> CMN_1569 Server Mode: [ASCII]
MAPPING> CMN_1570 Server Code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
MAPPING> TM_6151 The session sort order is [Binary].
MAPPING> TM_6156 Using low precision processing.
MAPPING> TM_6180 Deadlock retry logic will not be implemented.
MAPPING> TM_6187 Session target-based commit interval is [10000].
MAPPING> TM_6307 DTM error log disabled.
MAPPING> TE_7022 TShmWriter: Initialized
MAPPING> TM_6007 DTM initialized successfully for session [s_mp_person_exp_person]
DIRECTOR> PETL_24033 All DTM Connection Info: [<NONE>].
MANAGER> PETL_24004 Starting pre-session tasks. : (Thu Feb 18 22:21:21 2016)
MANAGER> PETL_24027 Pre-session task completed successfully. : (Thu Feb 18 22:21:21 2016)
DIRECTOR> PETL_24006 Starting data movement.
MAPPING> TM_6660 Total Buffer Pool size is 1219648 bytes and Block size is 65536 bytes.
READER_1_1_1> DBG_21438 Reader: Source is [orcl], user [kakarrot]
READER_1_1_1> BLKR_16003 Initialization completed successfully.
WRITER_1_*_1> WRT_8147 Writer: Target is database [orcl], user [target], bulk mode [OFF]
WRITER_1_*_1> WRT_8124 Target Table PERSON :SQL INSERT statement:
INSERT INTO PERSON(ID,FULLNAME,TOTALSAL,ANNUALSAL,CITY,STATE,MOB,SALSTATUS) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?)
WRITER_1_*_1> WRT_8124 Target Table PERSON :SQL UPDATE statement:
UPDATE PERSON SET FULLNAME = ?, TOTALSAL = ?, ANNUALSAL = ?, CITY = ?, STATE = ?, MOB = ?, SALSTATUS = ? WHERE ID = ?
WRITER_1_*_1> WRT_8124 Target Table PERSON :SQL DELETE statement:
DELETE FROM PERSON WHERE ID = ?
WRITER_1_*_1> WRT_8270 Target connection group #1 consists of target(s) [PERSON]
WRITER_1_*_1> WRT_8003 Writer initialization complete.
READER_1_1_1> BLKR_16007 Reader run started.
WRITER_1_*_1> WRT_8005 Writer run started.
WRITER_1_*_1> WRT_8158
*****START LOAD SESSION*****
Load Start Time: Thu Feb 18 22:21:21 2016
Target tables:
PERSON
READER_1_1_1> RR_4010 SQ instance [SQ_PERSON] SQL Query [SELECT kakarrot.Person.ID, kakarrot.Person.FNAME, kakarrot.Person.LNAME, kakarrot.Person.SALPERMONTH, kakarrot.Person.COMMISION, kakarrot.Person.CITY, kakarrot.Person.STATE, kakarrot.Person.MOB FROM kakarrot.Person]
READER_1_1_1> RR_4049 SQL Query issued to database : (Thu Feb 18 22:21:21 2016)
READER_1_1_1> RR_4050 First row returned from database to reader : (Thu Feb 18 22:21:21 2016)
READER_1_1_1> BLKR_16019 Read [6] rows, read [0] error rows for source table [Person] instance name [PERSON]
READER_1_1_1> BLKR_16008 Reader run completed.
TRANSF_1_1_1> DBG_21216 Finished transformations for Source Qualifier [SQ_PERSON]. Total errors [0]
WRITER_1_*_1> WRT_8167 Start loading table [PERSON] at: Thu Feb 18 22:21:21 2016
CMN_1761 Timestamp Event: [Fri Feb 19 08:01:22 2016]
REP_12014 An error occurred while accessing the repository
CMN_1761 Timestamp Event: [Fri Feb 19 08:01:23 2016]
REP_12400 Repository Error (Connection to the Repository Service [infa_training] is broken.)
CMN_1761 Timestamp Event: [Fri Feb 19 08:01:23 2016]
REP_12400 Repository Error ([REP_55101] Connection to the Repository Service [infa_training] is broken.)
CMN_1761 Timestamp Event: [Fri Feb 19 08:01:23 2016]
REP_12400 Repository Error ([REP_55114] Reconnecting to the Repository Service [infa_training]. The resilience time is 180 seconds.)
CMN_1761 Timestamp Event: [Fri Feb 19 08:01:50 2016]
REP_12400 Repository Error ([REP_51490] Communication with client application on host [169.254.164.2] and port [6013] has failed because of network errors. [System Error (errno = 10054): Unknown error].)
CMN_1761 Timestamp Event: [Fri Feb 19 10:00:14 2016]
REP_12400 Repository Error ([REP_55112] Unable to connect to the Repository Service [infa_training] since the resilience time is up.)
CMN_1761 Timestamp Event: [Fri Feb 19 10:00:14 2016]
REP_12400 Repository Error (Failed to connect to repository service [infa_training].)
CMN_1761 Timestamp Event: [Fri Feb 19 10:00:14 2016]
REP_12014 An error occurred while accessing the repository
CMN_1761 Timestamp Event: [Fri Feb 19 10:00:14 2016]
REP_12400 Repository Error (Failed to connect to repository service [infa_training].)
CMN_1761 Timestamp Event: [Fri Feb 19 10:00:14 2016]
REP_12400 Repository Error ([REP_55102] Failed to connect to repository service [infa_training].)
CMN_1761 Timestamp Event: [Fri Feb 19 10:00:15 2016]
PETL_24074 Failed to send updates to the master service process. Session run will be terminated.
Hope you guys can shed light on this problem.
The issue definitely appears to be with the database. According to your log, the session hung for about 10 hours and then lost its connection to the repository database as well. Try changing the target connection to a flat file and/or any other database you have access to in order to test the session itself. Also, if you don't have the ability to investigate yourself, ask a DBA to look at the session while it is running to see why the Oracle connection is waiting or hung.
I set up Selenium scripts to run during a Jenkins build. Jenkins is on a Linux machine, and selenium-server-standalone-2.39.0.jar is running on that same Jenkins machine.
The Selenium grid runs the scripts on a Windows machine, and the code looks like this:
DesiredCapabilities desiredCapabilities = DesiredCapabilities.chrome();
desiredCapabilities.setBrowserName("chrome");
desiredCapabilities.setPlatform(Platform.ANY); // let the grid pick any available node platform
// Point the RemoteWebDriver at the grid hub so tests run on a registered node.
WebDriver driver = new RemoteWebDriver(new URL("http://example.com:5555/wd/hub"), desiredCapabilities);
I also have the below setup in Jenkins: Maven is used to run the Selenium test suite, and the JUnit test reports are published.
Sometimes the Jenkins machine goes down with the error below:
Jun 24, 2015 4:15:57 AM hudson.node_monitors.ResponseTimeMonitor$1 monitor
WARNING: Making NewSlave offline because it's not responding
Jun 24, 2015 4:44:04 AM hudson.ExpressionFactory2$JexlExpression evaluate
WARNING: Caught exception evaluating: it.transientActions in /job/Selenium_GRID/4/console. Reason: java.lang.reflect.InvocationTargetException
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
.....
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.NullPointerException
at com.chikli.hudson.plugin.naginator.NaginatorActionFactory.createFor(NaginatorActionFactory.java:20)
at hudson.model.Run.getTransientActions(Run.java:362)
... 122 more
Jun 24, 2015 5:15:42 AM hudson.model.Run execute
This is the Selenium server log:
Jun 22, 2015 6:46:04 PM org.openqa.grid.selenium.proxy.DefaultRemoteProxy isAlive
WARNING: Failed to check status of node: Connection refused
Jun 22, 2015 6:47:12 PM org.openqa.grid.selenium.proxy.DefaultRemoteProxy isAlive
WARNING: Failed to check status of node: Connection timed out
Jun 22, 2015 6:47:12 PM org.openqa.grid.selenium.proxy.DefaultRemoteProxy onEvent
WARNING: Marking the node as down. Cannot reach the node for 2 tries.
Jun 22, 2015 6:48:20 PM org.openqa.grid.selenium.proxy.DefaultRemoteProxy isAlive
WARNING: Failed to check status of node: Connection refused
Jun 22, 2015 6:48:20 PM org.openqa.grid.selenium.proxy.DefaultRemoteProxy onEvent
WARNING: Unregistering the node. It's been down for 67828 milliseconds.
Jun 22, 2015 6:48:20 PM org.openqa.grid.internal.Registry removeIfPresent
WARNING: Proxy 'host :http://xxx.xxx.y.z:5555' was previously registered. Cleaning up any stale test sessions.
The problem is that Jenkins goes down when the Selenium scripts fail, or sometimes when viewing the failed test results in the Jenkins test report. Can anyone please suggest why this issue occurs and how to resolve it?
Is this issue occurring because the Selenium server runs on the Jenkins machine, or is there some other reason?
Today I configured a basic tinyproxy.
I expected it to act as a proxy for the Ubuntu repositories, but when trying to download packages from the repositories I got this in the tinyproxy log:
CONNECT Mar 27 17:30:46 [20348]: Connect (file descriptor 9): [unknown] [192.168.2.30]
CONNECT Mar 27 17:30:46 [20348]: Request (file descriptor 9): GET http://br.archive.ubuntu.com/ubuntu/pool/main/t/tdb/python-tdb_1.2.12-1_amd64.deb HTTP/1.1
INFO Mar 27 17:30:46 [20348]: No upstream proxy for br.archive.ubuntu.com
ERROR Mar 27 17:30:56 [20348]: opensock: Could not retrieve info for br.archive.ubuntu.com
INFO Mar 27 17:30:56 [20348]: no entity
I am stuck on some misconception. Doesn't tinyproxy send requests to outside servers directly? The opensock: Could not retrieve info error suggests tinyproxy could not resolve the repository hostname from the proxy host.
I supplied an external upstream proxy server to fix this:
upstream 117.79.64.29:80
I have an installation of CouchDB that has been working well for a few weeks. Today it started to throw an os_process_error with exit status 1 when attempting to look at any view. The documents in the DB are very small and the views are quite simple. Total DB size is 20 MB and the largest document is 2 MB; I have noticed that the erl process pegs the CPU at 99%.
I've looked at:
CouchDB delay building index (CouchDB 1.5.0 on Windows Server 2008 R2)
Specific couchdb views suddenly start timing out
I've increased my timeout to 50000 seconds, then lowered it to 500 to see if I could find the document that was killing everything, but nothing shows up. Stale views still work as well.
Below is the debug error:
[Mon, 10 Nov 2014 19:22:19 GMT] [debug] [<0.118.0>] Successful cookie auth as: "sking"
[Mon, 10 Nov 2014 19:22:19 GMT] [info] [<0.118.0>] 192.168.247.158 - - GET /_config/native_query_servers/ 200
[Mon, 10 Nov 2014 19:22:19 GMT] [error] [<0.231.0>] OS Process Error <0.233.0> :: {os_process_error,
{exit_status,1}}
[Mon, 10 Nov 2014 19:22:19 GMT] [error] [emulator] Error in process <0.231.0> with exit value: {{nocatch,{os_process_error,{exit_status,1}}},[{couch_os_process,prompt,2,[{file,"c:/cygwin/relax/APACHE~2.1/src/couchdb/couch_os_process.erl"},{line,57}]},{couch_query_servers,map_doc_raw,2,[{file,"c:/cygwin/relax...
[Mon, 10 Nov 2014 19:22:19 GMT] [debug] [<0.117.0>] Minor error in HTTP request: {os_process_error,
{exit_status,1}}
[Mon, 10 Nov 2014 19:22:19 GMT] [debug] [<0.117.0>] Stacktrace: [{couch_mrview_util,get_view,4,
[{file,
"c:/cygwin/relax/APACHE~2.1/src/COUCH_~3/src/couch_mrview_util.erl"},
{line,49}]},
{couch_mrview,query_view,6,
[{file,
"c:/cygwin/relax/APACHE~2.1/src/COUCH_~3/src/couch_mrview.erl"},
{line,75}]},
{couch_httpd,etag_maybe,2,
[{file,
"c:/cygwin/relax/APACHE~2.1/src/couchdb/couch_httpd.erl"},
{line,610}]},
{couch_mrview_http,design_doc_view,5,
[{file,
"c:/cygwin/relax/APACHE~2.1/src/COUCH_~3/src/couch_mrview_http.erl"},
{line,188}]},
{couch_httpd_db,do_db_req,2,
[{file,
"c:/cygwin/relax/APACHE~2.1/src/couchdb/couch_httpd_db.erl"},
{line,234}]},
{couch_httpd,handle_request_int,5,
[{file,
"c:/cygwin/relax/APACHE~2.1/src/couchdb/couch_httpd.erl"},
{line,318}]},
{mochiweb_http,headers,5,
[{file,
"c:/cygwin/relax/APACHE~2.1/src/mochiweb/mochiweb_http.erl"},
{line,94}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,239}]}]
[Mon, 10 Nov 2014 19:22:19 GMT] [info] [<0.117.0>] 192.168.247.158 - - GET /tcs/_design/company/_view/Company_Id?limit=101 500
[Mon, 10 Nov 2014 19:22:19 GMT] [error] [<0.117.0>] httpd 500 error response:
{"error":"os_process_error","reason":"{exit_status,1}"}
I figured this out, but I am not sure why it happened. There was a HUGE document (57 MB) that had been uploaded to the DB but was not visible in Futon or anywhere else.
I only found it after digging further into the debug log. I could not access the document via a curl GET. I ended up having to use curl -X DELETE with the specific revision, then a purge, to get rid of the document. As soon as the document was deleted, everything worked as expected.
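For anyone hitting the same thing, here is a minimal sketch of that delete-then-purge sequence against CouchDB's HTTP API using Java 11's HttpClient (the host, document id, and revision are hypothetical placeholders; substitute your own):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PurgeHugeDoc {
    public static void main(String[] args) throws Exception {
        // Hypothetical values; substitute your own.
        String db = "http://localhost:5984/tcs";
        String docId = "huge-doc";
        String rev = "1-abc123";

        HttpClient client = HttpClient.newHttpClient();

        // Step 1: delete the document at the specific revision.
        HttpResponse<String> del = client.send(
                HttpRequest.newBuilder()
                        .uri(URI.create(db + "/" + docId + "?rev=" + rev))
                        .DELETE()
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(del.body()); // the response carries the new (tombstone) revision

        // Step 2: purge the revision so the view indexer never sees the
        // document again. In practice you may also need to purge the
        // tombstone revision returned by the DELETE above.
        HttpRequest purge = HttpRequest.newBuilder()
                .uri(URI.create(db + "/_purge"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"" + docId + "\": [\"" + rev + "\"]}"))
                .build();
        System.out.println(client.send(purge, HttpResponse.BodyHandlers.ofString()).body());
    }
}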