I'm trying to use Flink with Oracle for a simple task: copying data from an existing table into a new one.
EnvironmentSettings settings = EnvironmentSettings.inStreamingMode();
TableEnvironment tEnv = TableEnvironment.create(settings);
tEnv.executeSql("CREATE TABLE ExistedTable(\n" +
" quoteid BIGINT,\n" +
" requestid BIGINT,\n" +
" createddt DATE,\n" +
" PRIMARY KEY (quoteid) NOT ENFORCED\n" +
") WITH (\n" +
" 'connector' = 'jdbc',\n" +
" 'url' = 'jdbc:oracle:thin:#xxx.xxx.xxx.xxx:1521:DBNAME',\n" +
" 'table-name' = 'TableName',\n" +
" 'driver' = 'oracle.jdbc.driver.OracleDriver',\n" +
" 'username' = 'UserName',\n" +
" 'password' = 'Password'\n" +
")");
tEnv.executeSql("CREATE TABLE NewTable (\n" +
" quoteid BIGINT,\n" +
" requestid BIGINT,\n" +
" createddt DATE,\n" +
" PRIMARY KEY (quoteid) NOT ENFORCED\n" +
") WITH (\n" +
" 'connector' = 'jdbc',\n" +
" 'url' = 'jdbc:oracle:thin:#xxx.xxx.xxx.xxx:1521:DBNAME',\n" +
" 'table-name' = 'NewTableName',\n" +
" 'driver' = 'oracle.jdbc.driver.OracleDriver',\n" +
" 'username' = 'UserName',\n" +
" 'password' = 'Password'\n" +
")");
Table data = tEnv.from("ExistedTable");
data.executeInsert("NewTable");
When I run it, I get this error:
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Unable to create a source for reading table 'default_catalog.default_database.xxx'.
Table options are:
'connector'='jdbc'
'driver'='oracle.jdbc.OracleDriver'
'password'='******'
'table-name'='xxx'
'url'='jdbc:oracle:thin:@xxx:1521:xxx'
'username'='xxx'
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:812)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:246)
at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1054)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)
Caused by: org.apache.flink.table.api.ValidationException: Unable to create a source for reading table 'default_catalog.default_database.xxx'.
Is there an error in my SQL connection? I couldn't find any example of working with Oracle.
Thanks.
Which version of Flink are you using? Support for Oracle in the JDBC connector is only added in Flink 1.15, which hasn't been released yet.
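If you're unsure which version your program is actually running against, you can print it as a quick sanity check. A minimal sketch, assuming only flink-runtime on the classpath (which any packaged Flink program already has):

// Minimal sketch: print the Flink version visible on the classpath.
import org.apache.flink.runtime.util.EnvironmentInformation;

public class PrintFlinkVersion {
    public static void main(String[] args) {
        System.out.println("Flink version: " + EnvironmentInformation.getVersion());
    }
}

On anything below 1.15, the JDBC connector factory has no Oracle dialect, which would explain the ValidationException above.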
I have a problem getting some properties of the JDBC driver of a datasource on a WebLogic server using WLST.
Although I can get many properties of the datasource like this:
allJDBCResources = cmo.getJDBCSystemResources()
for jdbcResource in allJDBCResources:
    dsname = jdbcResource.getName()
    dsResource = jdbcResource.getJDBCResource()
    dsJNDIname = dsResource.getJDBCDataSourceParams().getJNDINames()[0]
    dsInitialCap = dsResource.getJDBCConnectionPoolParams().getInitialCapacity()
I'm still unable to retrieve the values of the driver properties themselves. I'm able to get the driver class name, but not the properties in that field. Through WLST I need to retrieve:
user
readtimeout
connect_timeout
I've found lots of pages via Google, but they only show how to "set" the properties, not how to get their values.
Any help is appreciated.
Well, I achieved what I needed in this way:
try:
    user = ls("/JDBCSystemResources/" + dsname + "/Resource/" + dsname + "/JDBCDriverParams/" + dsname + "/Properties/" + dsname + "/Properties/user")
    readTimeOut = ls("/JDBCSystemResources/" + dsname + "/Resource/" + dsname + "/JDBCDriverParams/" + dsname + "/Properties/" + dsname + "/Properties/oracle.jdbc.ReadTimeout")
    conTimeOut = ls("/JDBCSystemResources/" + dsname + "/Resource/" + dsname + "/JDBCDriverParams/" + dsname + "/Properties/" + dsname + "/Properties/oracle.net.CONNECT_TIMEOUT")
    streamAsBlob = ls("/JDBCSystemResources/" + dsname + "/Resource/" + dsname + "/JDBCDriverParams/" + dsname + "/Properties/" + dsname + "/Properties/SendStreamAsBlob")
except WLSTException:
    pass
After that I had the information I needed, albeit as a horrible string, but I'll parse it with Python.
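A cleaner variant: instead of parsing the ls() output, you can cd() to each property bean and read its Value attribute directly. A sketch, assuming the same MBean path layout as above (the helper function name is mine):

def getDriverProperty(dsname, prop):
    # Sketch: navigate to the JDBCPropertyBean and read its Value attribute
    # instead of parsing the string that ls() returns.
    path = "/JDBCSystemResources/" + dsname + "/Resource/" + dsname + \
           "/JDBCDriverParams/" + dsname + "/Properties/" + dsname + \
           "/Properties/" + prop
    try:
        cd(path)
        return get('Value')
    except WLSTException:
        return None

user = getDriverProperty(dsname, 'user')
readTimeOut = getDriverProperty(dsname, 'oracle.jdbc.ReadTimeout')
conTimeOut = getDriverProperty(dsname, 'oracle.net.CONNECT_TIMEOUT')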
I am comparing the following two options, and the Mongo C# driver (LINQ) seems to be taking more time. I'm using Stopwatch to measure the timings.
Case 1: Native Mongo QueryDocument (takes 0.0011 ms to return data)
string querytext = #"{schemas:{$elemMatch:{name: " + n + ",code : " + c + "} }},{schemas:{$elemMatch:{code :" + c1 + "}}}";
string printQueryname = "Query: " + querytext;
BsonDocument query1 = MongoDB.Bson.Serialization.BsonSerializer.Deserialize<BsonDocument>(querytext);
QueryDocument queryDoc1 = new QueryDocument(query1);
var queryResponse = collection.FindAs<BsonDocument>(queryDoc1);
Case 2: Mongo C# Driver (takes more than 3.2 ms to return data)
Schema _result = (from doc in _coll.AsQueryable<Schema>()
                  where doc.schemas.Any(s => s.code.Equals(c) && s.name.Equals(n)) &&
                        doc.schemas.Any(s => s.code.Equals(c1))
                  select doc).FirstOrDefault();
Any thoughts? Anything wrong here?
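One caveat on the measurement: FindAs only builds a cursor, and nothing is sent to the server until the cursor is enumerated, whereas FirstOrDefault() executes the query immediately, so the two Stopwatch readings may not be measuring the same work. A middle ground worth timing is the legacy driver's typed Query builders, which produce the same $elemMatch filter without the LINQ provider's expression-tree translation. A sketch, assuming the question's Schema type and the 1.x driver API:

// Sketch: the same two $elemMatch conditions built with the typed Query builder,
// bypassing the LINQ-to-BSON translation step.
// using MongoDB.Driver.Builders;
var query = Query.And(
    Query<Schema>.ElemMatch(x => x.schemas,
        qb => qb.And(qb.EQ(e => e.name, n), qb.EQ(e => e.code, c))),
    Query<Schema>.ElemMatch(x => x.schemas,
        qb => qb.EQ(e => e.code, c1)));
var result = _coll.FindOneAs<Schema>(query);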
I'm working through Jeffrey Breen's R-Hadoop tutorial (October 2012).
At the moment I'm trying to populate HDFS and then run the commands Jeffrey published in his tutorial in RStudio. Unfortunately I'm running into trouble:
UPDATE: I have now moved the data folder to:
/home/cloudera/data/hadoop/wordcount (and the same for the airline data)
Now when I run populate.hdfs.sh I get the following output:
[cloudera@localhost ~]$ /home/cloudera/TutorialBreen/bin/populate.hdfs.sh
mkdir: cannot create directory /user/cloudera: File exists
mkdir: cannot create directory /user/cloudera/wordcount: File exists
mkdir: cannot create directory /user/cloudera/wordcount/data: File exists
mkdir: cannot create directory /user/cloudera/airline: File exists
mkdir: cannot create directory /user/cloudera/airline/data: File exists
put: Target /user/cloudera/airline/data/20040325.csv already exists
Then I tried the commands in RStudio as shown in the tutorial, but I get errors at the end. Can someone show me what I did wrong?
> if (LOCAL)
+ {
+ rmr.options.set(backend = 'local')
+ hdfs.data.root = 'data/local/airline'
+ hdfs.data = file.path(hdfs.data.root, '20040325-jfk-lax.csv')
+ hdfs.out.root = 'out/airline'
+ hdfs.out = file.path(hdfs.out.root, 'out')
+ if (!file.exists(hdfs.out))
+ dir.create(hdfs.out.root, recursive=T)
+ } else {
+ rmr.options.set(backend = 'hadoop')
+ hdfs.data.root = 'airline'
+ hdfs.data = file.path(hdfs.data.root, 'data')
+ hdfs.out.root = hdfs.data.root
+ hdfs.out = file.path(hdfs.out.root, 'out')
+ }
> asa.csvtextinputformat = make.input.format( format = function(con, nrecs) {
+ line = readLines(con, nrecs)
+ values = unlist( strsplit(line, "\\,") )
+ if (!is.null(values)) {
+ names(values) = c('Year','Month','DayofMonth','DayOfWeek','DepTime','CRSDepTime',
+ 'ArrTime','CRSArrTime','UniqueCarrier','FlightNum','TailNum',
+ 'ActualElapsedTime','CRSElapsedTime','AirTime','ArrDelay',
+ 'DepDelay','Origin','Dest','Distance','TaxiIn','TaxiOut',
+ 'Cancelled','CancellationCode','Diverted','CarrierDelay',
+ 'WeatherDelay','NASDelay','SecurityDelay','LateAircraftDelay')
+ return( keyval(NULL, values) )
+ }
+ }, mode='text' )
> mapper.year.market.enroute_time = function(key, val) {
+ if ( !identical(as.character(val['Year']), 'Year')
+ & identical(as.numeric(val['Cancelled']), 0)
+ & identical(as.numeric(val['Diverted']), 0) ) {
+ if (val['Origin'] < val['Dest'])
+ market = paste(val['Origin'], val['Dest'], sep='-')
+ else
+ market = paste(val['Dest'], val['Origin'], sep='-')
+ output.key = c(val['Year'], market)
+ output.val = c(val['CRSElapsedTime'], val['ActualElapsedTime'], val['AirTime'])
+ return( keyval(output.key, output.val) )
+ }
+ }
> reducer.year.market.enroute_time = function(key, val.list) {
+ if ( require(plyr) )
+ val.df = ldply(val.list, as.numeric)
+ else { # this is as close as my deficient *apply skills can come w/o plyr
+ val.list = lapply(val.list, as.numeric)
+ val.df = data.frame( do.call(rbind, val.list) )
+ }
+ colnames(val.df) = c('crs', 'actual','air')
+ output.key = key
+ output.val = c( nrow(val.df), mean(val.df$crs, na.rm=T),
+ mean(val.df$actual, na.rm=T),
+ mean(val.df$air, na.rm=T) )
+ return( keyval(output.key, output.val) )
+ }
> mr.year.market.enroute_time = function (input, output) {
+ mapreduce(input = input,
+ output = output,
+ input.format = asa.csvtextinputformat,
+ output.format='csv', # note to self: 'csv' for data, 'text' for bug
+ map = mapper.year.market.enroute_time,
+ reduce = reducer.year.market.enroute_time,
+ backend.parameters = list(
+ hadoop = list(D = "mapred.reduce.tasks=2")
+ ),
+ verbose=T)
+ }
> out = mr.year.market.enroute_time(hdfs.data, hdfs.out)
Error in file(f, if (format$mode == "text") "r" else "rb") :
cannot open the connection
In addition: Warning message:
In file(f, if (format$mode == "text") "r" else "rb") :
cannot open file 'data/local/airline/20040325-jfk-lax.csv': No such file or directory
> if (LOCAL)
+ {
+ results.df = as.data.frame( from.dfs(out, structured=T) )
+ colnames(results.df) = c('year', 'market', 'flights', 'scheduled', 'actual', 'in.air')
+ print(head(results.df))
+ }
Error in to.dfs.path(input) : object 'out' not found
Thank you so much!
First of all, it looks like the command:
/usr/bin/hadoop fs -mkdir /user/cloudera/wordcount/data
is being split into multiple lines. Make sure you're entering it as a single line, as-is.
Also, it is saying that the local directory data/hadoop/wordcount does not exist. Verify that you're running this command from the correct directory and that your local data is where you expect it to be.
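For the RStudio errors: with backend = 'local', hdfs.data is a plain local path resolved against R's working directory, so the "cannot open file" error means R isn't running from the directory that contains data/local/airline. Something along these lines should confirm it (a sketch; the tutorial root path is an assumption):

# Sketch: check the local-backend input path before calling mapreduce().
setwd("/home/cloudera/TutorialBreen")   # assumption: where the tutorial was unpacked
getwd()
file.exists("data/local/airline/20040325-jfk-lax.csv")
# If this prints FALSE, copy the CSV there or point hdfs.data at its actual location.
# The second error (object 'out' not found) is just a consequence: the mapreduce()
# call failed, so out was never assigned.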
I am working on a game in Construct 2 at the moment.
It is an HTML5/JavaScript engine.
I will probably integrate clay.io into it.
My question, however, is about "rooms".
Clay.io also helps with rooms, but I am not sure if I will take that offer.
https://clay.io/docs/rooms
Say I want to limit the users per game to 10, for example. Would I then need to run two servers?
The socket.io server receives and returns data. But wouldn't two games with 10 people each confuse the server's data? Could it happen that when person A on server A shoots someone, that information somehow ends up at person B on server B?
Or do the assigned IDs prevent this somehow?
Here is the example server that I want to adapt to my needs:
var entities = [], count = 0;
var io = require("socket.io").listen(8099);
var INITIAL_X = 5;
var INITIAL_Y = 5;
var INITIAL_VEL_X = 0;
var INITIAL_VEL_Y = 0;
io.set('log level', 1);

io.sockets.on("connection", function (socket) {
    // assign this client the next free player number
    var myNumber = count++;
    var mySelf = entities[myNumber] = [myNumber, INITIAL_X, INITIAL_Y, INITIAL_VEL_X, INITIAL_VEL_Y];

    // Send the initial position and ID to the connecting player
    console.log(myNumber + ' sent: ' + 'I,' + mySelf[0] + ',' + mySelf[1] + ',' + mySelf[2]);
    socket.send('I,' + mySelf[0] + ',' + mySelf[1] + ',' + mySelf[2]);

    // Send the connecting client the current state of all the other players
    for (var entity_idx = 0; entity_idx < entities.length; entity_idx++) {
        // send initial update
        if (entity_idx != myNumber) {
            var entity = entities[entity_idx];
            if (typeof (entity) != "undefined" && entity != null) {
                console.log(myNumber + ' sent: C for ' + entity_idx);
                // send the client that just connected the position of all the other clients
                socket.send('C,' + entity[0] + ',' + entity[1] + ',' + entity[2]);
            }
        }
    }

    // create the new entity in all other clients
    socket.broadcast.emit("message",
        'C,' + mySelf[0] + ',' + mySelf[1] + ',' + mySelf[2]);

    socket.on("message", function (data) {
        //if (myNumber == 0)
        //    console.log(myNumber + ' sent: ' + data);
        var new_data = data.split(',');
        if (new_data[0] == 'UM') { // update-movement message
            mySelf[1] = new_data[1];
            mySelf[2] = new_data[2];
            mySelf[3] = new_data[3];
            mySelf[4] = new_data[4];
            // Update all the other clients about my movement
            socket.broadcast.emit("message",
                'UM,' + mySelf[0] + ',' + mySelf[1] + ',' + mySelf[2] + ',' + mySelf[3] + ',' + mySelf[4]);
        }
        else if (new_data[0] == 'S') { // a shoot message
            var shoot_info = [];
            shoot_info[0] = new_data[1]; // initial x
            shoot_info[1] = new_data[2]; // initial y
            shoot_info[2] = new_data[3]; // degrees
            // Update all the other clients about my shot
            socket.broadcast.emit("message",
                'S,' + mySelf[0] + ',' + shoot_info[0] + ',' + shoot_info[1] + ',' + shoot_info[2]);
        }
    });
});
Socket.io has rooms that you can limit broadcasts to; see https://github.com/LearnBoost/socket.io/wiki/Rooms
Then, rather than using socket.broadcast.emit, you would use io.sockets.in('roomname').emit.
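Applied to the example server above, the wiring could look roughly like this (a sketch against the 0.9.x API; the joinRoom event name and roomId payload are assumptions, e.g. Clay.io's room ID sent by the client):

// Sketch: scope each game's traffic to its own Socket.io room (0.9.x API).
io.sockets.on("connection", function (socket) {
    socket.on("joinRoom", function (roomId) {
        socket.join(roomId);        // add this socket to its game's room
        socket.set('room', roomId); // remember the room for later messages
    });

    socket.on("message", function (data) {
        socket.get('room', function (err, roomId) {
            // Only players in the same room receive this update,
            // so game A's shots can never reach game B's players.
            socket.broadcast.to(roomId).emit("message", data);
        });
    });
});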
A good way to mesh this with Clay.io is to have the room name be the room.id (in the Construct 2 plugin, that's the RoomId expression). When the Clay.io room fills up (in C2 there's a condition for that), create the Socket.io room using that unique ID and put the players that "Rooms Filled" was just called for into that room.
I know it's a bit different since it's a game written in CoffeeScript instead of Construct 2, but we're using Clay.io rooms + Socket.io rooms in our Slime Volley game. Here is the code.