GeneXus: Full process for FTP connection and exchanging data - ftp

What is the whole process to connect to an FTP server and exchange data using GeneXus?

You can use the SFTP module.
Documentation:
https://wiki.genexus.com/commwiki/servlet/wiki?45274,GeneXus+FTPS+Module
Here is some code:
// Knowledge Manager / Manage Module References / GeneXusFTPS Module
// GX16U8: modules SFTP and SecurityAPICommons // &SftpOptions is a variable of the SftpOptions SDT
&SftpOptions.Host = !"172.16.4.5"
&SftpOptions.User = !"dummyuser"
&SftpOptions.Port = 22
&SftpOptions.Password = !"dummypass"
&SftpOptions.AllowHostKeyChecking = true
&SftpOptions.KeyPath = !"C:\Temp\keys\private_key.pem"
&SftpOptions.KeyPassword = !"dummykeypass"
&SftpOptions.KnownHostsPath = !"C:\Temp\known_hosts"
If &SftpClient.Connect(&SftpOptions)
    If &SftpClient.Put(!"C:\temp\testfile.txt", !"/sftptest")
    Else
        If &SftpClient.HasError()
            msg(!"Error. Code: " + &SftpClient.GetErrorCode() + !" Description: " + &SftpClient.GetErrorDescription())
        Endif
    Endif
Else
    If &SftpClient.HasError()
        msg(!"Error. Code: " + &SftpClient.GetErrorCode() + !" Description: " + &SftpClient.GetErrorDescription())
    Endif
Endif
&SftpClient.Disconnect()
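The sample above only covers the upload direction. For downloading, the module exposes a Get method on &SftpClient as well; the lines below are only an illustrative sketch (the remote path and local folder are made up, and the exact Get signature should be confirmed against the module documentation linked above):
// Hedged download sketch; assumes Get(remoteFilePath, localDirectory) returns a boolean like Put
If &SftpClient.Connect(&SftpOptions)
    If not &SftpClient.Get(!"/sftptest/testfile.txt", !"C:\temp\downloads")
        msg(!"Error. Code: " + &SftpClient.GetErrorCode() + !" Description: " + &SftpClient.GetErrorDescription())
    Endif
Endif
&SftpClient.Disconnect()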

Related

Generate Azure Storage SAS Signature In Ruby

I am trying to use the following code to generate a valid URL for accessing a blob in my Azure storage account. The Azure account name and key are stored in .env files. For some reason, the URL doesn't work; I get a Signature did not match error.
# version 2018-11-09 and later, https://learn.microsoft.com/en-us/rest/api/storageservices/create-service-sas#version-2018-11-09-and-later
signed_permissions = "r"
signed_start = "#{(start_time - 5.minutes).iso8601}"
signed_expiry = "#{(start_time + 10.minutes).iso8601}"
canonicalized_resource = "/blob/#{Config.azure_storage_account_name}/media/#{medium.tinyurl}"
signed_identifier = ""
signed_ip = ""
signed_protocol = "https"
signed_version = "2018-11-09"
signed_resource = "b"
signed_snapshottime = ""
rscc = ""
rscd = ""
rsce = ""
rscl = ""
rsct = ""
string_to_sign = signed_permissions + "\n" +
signed_start + "\n" +
signed_expiry + "\n" +
canonicalized_resource + "\n" +
signed_identifier + "\n" +
signed_ip + "\n" +
signed_protocol + "\n" +
signed_version + "\n" +
signed_resource + "\n" +
signed_snapshottime + "\n" +
rscc + "\n" +
rscd + "\n" +
rsce + "\n" +
rscl + "\n" +
rsct
sig = OpenSSL::HMAC.digest('sha256', Base64.strict_decode64(Config.azure_storage_account_key), string_to_sign.encode(Encoding::UTF_8))
sig = Base64.strict_encode64(sig)
#result = "#{medium.storageurl}?sp=#{signed_permissions}&st=#{signed_start}&se=#{signed_expiry}&spr=#{signed_protocol}&sv=#{signed_version}&sr=#{signed_resource}&sig=#{sig}"
PS: This is in Rails and medium is a record pulled from the DB that contains information about the blob in Azure.
Turns out the issue was clock skew. The signed_start and signed_expiry window I was using was too tight. When I relaxed it to -30/+20 minutes, I could reliably create SAS tokens using the snippet I posted.
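For reference, a minimal sketch of that relaxed window (only these two assignments change; the rest of string_to_sign and the HMAC signing stay exactly as in the snippet above, and start_time is assumed to be the same value used there):
# Widen the validity window to tolerate clock skew between this machine
# and the Azure storage service (ActiveSupport durations, as in Rails).
signed_start  = (start_time - 30.minutes).iso8601   # was -5.minutes
signed_expiry = (start_time + 20.minutes).iso8601   # was +10.minutes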

Strange behaviour when Xcode evaluates variables at runtime

I get the error "expression was too complex to be solved in reasonable time" for the following code.
I saw other threads about this error with much more complex expressions than mine, where it was stated that the compiler is buggy.
Is this also true for the following simple code?
Problem:
let iFrom: Int = 1;
let iTo: Int = 3;
var sql = "The Number 2 is in " + " between " + String(iFrom) + " and " + String(iTo) + ".";
but this works
let iFrom: Int = 1;
let iTo: Int = 3;
var sql = "The Number 2 is in " + " between " + String(iFrom) + " and " + String(iTo);
The only difference is the "." at the end of the concatenation.
If it's a bug:
The SourceKitService process runs at max and slows everything down. Can compilation at runtime be disabled?
One thing you could try, which I've had some success with, is explicitly typing your sql variable:
var sql: String = "The Number 2 is in " + " between " + String(iFrom) + " and " + String(iTo) + ".";

Running a mapreduce job on cloudera demo cdh3u4 (airline data example)

I'm doing Jeffrey Breen's R-Hadoop tutorial (October 2012).
At the moment I am trying to populate HDFS and then run the commands Jeffrey published in his tutorial in RStudio. Unfortunately I ran into some trouble with it:
UPDATE: I have now moved the data folder to:
/home/cloudera/data/hadoop/wordcount (and the same for the airline data)
Now when I run populate.hdfs.sh I get the following output:
[cloudera@localhost ~]$ /home/cloudera/TutorialBreen/bin/populate.hdfs.sh
mkdir: cannot create directory /user/cloudera: File exists
mkdir: cannot create directory /user/cloudera/wordcount: File exists
mkdir: cannot create directory /user/cloudera/wordcount/data: File exists
mkdir: cannot create directory /user/cloudera/airline: File exists
mkdir: cannot create directory /user/cloudera/airline/data: File exists
put: Target /user/cloudera/airline/data/20040325.csv already exists
And then I tried the commands in RStudio as shown in the tutorial, but I get errors at the end. Can someone show me what I did wrong?
> if (LOCAL)
+ {
+ rmr.options.set(backend = 'local')
+ hdfs.data.root = 'data/local/airline'
+ hdfs.data = file.path(hdfs.data.root, '20040325-jfk-lax.csv')
+ hdfs.out.root = 'out/airline'
+ hdfs.out = file.path(hdfs.out.root, 'out')
+ if (!file.exists(hdfs.out))
+ dir.create(hdfs.out.root, recursive=T)
+ } else {
+ rmr.options.set(backend = 'hadoop')
+ hdfs.data.root = 'airline'
+ hdfs.data = file.path(hdfs.data.root, 'data')
+ hdfs.out.root = hdfs.data.root
+ hdfs.out = file.path(hdfs.out.root, 'out')
+ }
> asa.csvtextinputformat = make.input.format( format = function(con, nrecs) {
+ line = readLines(con, nrecs)
+ values = unlist( strsplit(line, "\\,") )
+ if (!is.null(values)) {
+ names(values) = c('Year','Month','DayofMonth','DayOfWeek','DepTime','CRSDepTime',
+ 'ArrTime','CRSArrTime','UniqueCarrier','FlightNum','TailNum',
+ 'ActualElapsedTime','CRSElapsedTime','AirTime','ArrDelay',
+ 'DepDelay','Origin','Dest','Distance','TaxiIn','TaxiOut',
+ 'Cancelled','CancellationCode','Diverted','CarrierDelay',
+ 'WeatherDelay','NASDelay','SecurityDelay','LateAircraftDelay')
+ return( keyval(NULL, values) )
+ }
+ }, mode='text' )
> mapper.year.market.enroute_time = function(key, val) {
+ if ( !identical(as.character(val['Year']), 'Year')
+ & identical(as.numeric(val['Cancelled']), 0)
+ & identical(as.numeric(val['Diverted']), 0) ) {
+ if (val['Origin'] < val['Dest'])
+ market = paste(val['Origin'], val['Dest'], sep='-')
+ else
+ market = paste(val['Dest'], val['Origin'], sep='-')
+ output.key = c(val['Year'], market)
+ output.val = c(val['CRSElapsedTime'], val['ActualElapsedTime'], val['AirTime'])
+ return( keyval(output.key, output.val) )
+ }
+ }
> reducer.year.market.enroute_time = function(key, val.list) {
+ if ( require(plyr) )
+ val.df = ldply(val.list, as.numeric)
+ else { # this is as close as my deficient *apply skills can come w/o plyr
+ val.list = lapply(val.list, as.numeric)
+ val.df = data.frame( do.call(rbind, val.list) )
+ }
+ colnames(val.df) = c('crs', 'actual','air')
+ output.key = key
+ output.val = c( nrow(val.df), mean(val.df$crs, na.rm=T),
+ mean(val.df$actual, na.rm=T),
+ mean(val.df$air, na.rm=T) )
+ return( keyval(output.key, output.val) )
+ }
> mr.year.market.enroute_time = function (input, output) {
+ mapreduce(input = input,
+ output = output,
+ input.format = asa.csvtextinputformat,
+ output.format='csv', # note to self: 'csv' for data, 'text' for bug
+ map = mapper.year.market.enroute_time,
+ reduce = reducer.year.market.enroute_time,
+ backend.parameters = list(
+ hadoop = list(D = "mapred.reduce.tasks=2")
+ ),
+ verbose=T)
+ }
> out = mr.year.market.enroute_time(hdfs.data, hdfs.out)
Error in file(f, if (format$mode == "text") "r" else "rb") :
cannot open the connection
In addition: Warning message:
In file(f, if (format$mode == "text") "r" else "rb") :
cannot open file 'data/local/airline/20040325-jfk-lax.csv': No such file or directory
> if (LOCAL)
+ {
+ results.df = as.data.frame( from.dfs(out, structured=T) )
+ colnames(results.df) = c('year', 'market', 'flights', 'scheduled', 'actual', 'in.air')
+ print(head(results.df))
+ }
Error in to.dfs.path(input) : object 'out' not found
Thank you so much!
First of all, it looks like the command:
/usr/bin/hadoop fs -mkdir /user/cloudera/wordcount/data
is being split into multiple lines. Make sure you're entering it as-is.
Also, it is saying that the local directory data/hadoop/wordcount does not exist. Verify that you're running this command from the correct directory and that your local data is where you expect it to be.
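As a quick sanity check on the R side (a hedged sketch, not part of the original answer), you can also confirm that the path assigned to hdfs.data for the local backend actually exists before running the job; the file name is the one used earlier, and getwd() shows which directory R resolves relative paths against:
# Verify the local input exists before calling mr.year.market.enroute_time()
hdfs.data = file.path('data/local/airline', '20040325-jfk-lax.csv')
if (!file.exists(hdfs.data)) {
  stop(paste0("Input not found: ", hdfs.data, " (working directory: ", getwd(), ")"))
}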

Why is CF FTP transfer speed multiple times slower than standard FTP?

I have a ColdFusion application that I use to transfer files between our development and production servers. The code that actually sends the files is as follows:
ftp = new Ftp();
ftp.setUsername(username);
ftp.setPassword(password);
ftp.setServer(server);
ftp.setTimeout(1800);
ftp.setConnection('dev');
ftp.open();
ftp.putFile(transferMode="binary",localFile=localpath,remoteFile=remotepath);
ftp.close(connection='dev');
When sending a file using the above code, I cap out at just under 100 KB/s (monitored via FileZilla on the receiving server). If I send the exact same file using the Windows command-line FTP tool, my speeds are upwards of 1000 KB/s.
I created a brand new file with nothing but the code above, and that had no effect on the transfer speed, so I know it has nothing to do with the surrounding code in the original application.
So, what could be causing these abysmally low speeds?
Edit: All tests are being done transferring files from my production server to my development server. I also tried using the <cfftp> tag instead of cfscript, with the same results.
Edit #2: I ended up using cfexecute; the code is as follows:
From my FTP script:
public function sendFiles(required string localpath, required string remotepath) {
    this.writeFtpInstructions(localpath);
    exe = "C:\Windows\system32\ftp.exe";
    params = "-s:" & request.localapproot & "/" & "upload.txt";
    outputfile = request.localapproot & '/ftp.log';
    timeout = 120;
    Request.cliExec(exe, params, outputfile, timeout);
}
public function writeFtpInstructions(required string localpath) {
    instructions = request.localapproot & "/" & "upload.txt";
    crlf = chr(13) & chr(10);
    data = "";
    data &= "open " & this.server & crlf;
    data &= this.username & crlf;
    data &= this.password & crlf;
    data &= "cd " & request.remoteapproot & crlf;
    data &= "put " & localpath & crlf;
    data &= "quit";
    FileWrite(instructions, data);
}
The cliExec() function (necessary as a wrapper, since there is no equivalent of cfexecute in cfscript):
<cffunction name="cliExec">
    <cfargument name="name">
    <cfargument name="arguments">
    <cfargument name="outputfile">
    <cfargument name="timeout">
    <cfexecute
        name="#name#"
        arguments="#arguments#"
        outputFile="#outputfile#"
        timeout="#timeout#" />
</cffunction>
In my experience using cfftp on CF9, it was impossible to transfer larger files. If you view the active transfers on the FTP server side, you will notice that the transfers start out at top speed, but as more and more data is transmitted the speed keeps dropping. After 100 MB had been transferred, the reduction became very drastic, until the transfer eventually slowed to a single-digit crawl, timed out, and failed. I was trying to work with a 330 MB file and found it impossible to transfer using cfftp. cfexecute with the standard Windows ftp.exe was not an option for me, because it doesn't seem to support SFTP.
I ended up seeking out an external Java class to use through ColdFusion and settled on JSch (http://www.jcraft.com/jsch/). Ironically, CF9 appears to use a variation of this class for CFFTP (jsch-0.1.41m.jar), but the results are much different using the latest downloaded version (jsch-0.1.45.jar).
Here is the code that I put together for a proof of concept:
<cfscript>
stAppPrefs = {
stISOFtp = {
server = 'sftp.server.com',
port = '22',
username = 'youser',
password = 'pa$$w0rd'
}
};
/* Side-Load JSch Java Class (http://www.jcraft.com/jsch/) */
try {
// Load Class Using Mark Mandel's JavaLoader (http://www.compoundtheory.com/?action=javaloader.index)
/*
Add Mark's LoaderClass To The ColdFusion Class Path Under CF Admin:
Java and JVM : ColdFusion Class Path : C:\inetpub\wwwroot\javaloader\lib\classloader-20100119110136.jar
Then Restart The Coldfusion Application Service
*/
loader = CreateObject("component", "javaloader.JavaLoader").init([expandPath("jsch-0.1.45.jar")]);
// Initiate Instance
jsch = loader.create("com.jcraft.jsch.JSch").init();
}
catch(any excpt) {
WriteOutput("Error loading ""jsch-0.1.45.jar"" java class: " & excpt.Message & "<br>");
abort;
}
/* SFTP Session & Channel */
try {
// Create SFTP Session
session = jsch.getSession(stAppPrefs.stISOFtp.username, stAppPrefs.stISOFtp.server, stAppPrefs.stISOFtp.port);
// Turn Off & Use Username/Password
session.setConfig("StrictHostKeyChecking", "no");
session.setPassword(stAppPrefs.stISOFtp.password);
session.connect();
// Create Channel To Transfer File(s) On
channel = session.openChannel("sftp");
channel.connect();
}
catch(any excpt) {
WriteOutput("Error connecting to FTP server: " & excpt.Message & "<br>");
abort;
}
// Returns Array Of Java Objects, One For Each Zip Compressed File Listed In Root DIR
// WriteDump(channel.ls('*.zip'));
// Get First Zip File Listed For Transfer From SFTP To CF Server
serverFile = channel.ls('*.zip')[1].getFilename();
/* Debug */
startTime = Now();
WriteOutput("Transfer Started: " & TimeFormat(startTime, 'hh:mm:ss') & "<br>");
/* // Debug */
/* Transfer File From SFTP Server To CF Server */
try {
// Create File On Server To Write Byte Stream To
transferFile = CreateObject("java", "java.io.File").init(expandPath(serverFile));
channel.get(serverFile, CreateObject("java", "java.io.FileOutputStream").init(transferFile));
// Close The File Output Stream
transferFile = '';
}
catch(any excpt) {
WriteOutput("Error transfering file """ & expandPath(serverFile) & """: " & excpt.Message & "<br>");
abort;
}
/* Debug */
finishTime = Now();
WriteOutput("Transfer Finished: " & TimeFormat(finishTime, 'hh:mm:ss') & "<br>");
expiredTime = (finishTime - startTime);
WriteOutput("Duration: " & TimeFormat(expiredTime, 'HH:MM:SS') & "<br>");
WriteOutput("File Size: " & NumberFormat(Evaluate(GetFileInfo(ExpandPath(serverFile)).size / 1024), '_,___._') & " KB<br>");
WriteOutput("Transfer Rate: " & NumberFormat(Evaluate(Evaluate(GetFileInfo(ExpandPath(serverFile)).size / 1024) / Evaluate(((TimeFormat(expiredTime, 'H') * 60 * 60) + (TimeFormat(expiredTime, 'M') * 60) + TimeFormat(expiredTime, 'S')))), '_,___._') & " KB/Sec <br>");
/* // Debug */
channel.disconnect();
session.disconnect();
</cfscript>
Results:
Transfer Started: 09:37:57
Transfer Finished: 09:42:01
Duration: 00:04:04
File Size: 331,770.8 KB
Transfer Rate: 1,359.7 KB/Sec
The transfer speed that was achieved is on par with what I was getting using the FileZilla FTP Client & manually downloading. I found this method to be a viable solution for the inadequacy of cfftp.
I have been looking and I don't have an answer as to why it is slower. But, in theory, you should be able to use cfexecute to do this through the Windows command line. You could even create a batch file and do it in one call.

Jython: port definition

I am looking for a Jython script that does the following:
Servers > Application servers > server1 > Ports > WC_default > (set) port = 8080.
Environment > Virtual hosts > default_host > Host aliases > [if there is an entry with host name == *, then set port = 8080]
Thank you very much.
Use the following code as a starting point:
serverEntries = AdminConfig.list('ServerEntry', AdminConfig.getid('/Node:' + nodeName + '/')).split(java.lang.System.getProperty('line.separator'))
for serverEntry in serverEntries:
    if AdminConfig.showAttribute(serverEntry, "serverName") == 'server1':
        sepString = AdminConfig.showAttribute(serverEntry, "specialEndpoints")
        sepList = sepString[1:len(sepString)-1].split(" ")
        for specialEndPoint in sepList:
            endPointNm = AdminConfig.showAttribute(specialEndPoint, "endPointName")
            if endPointNm == "WC_defaulthost":
                ePoint = AdminConfig.showAttribute(specialEndPoint, "endPoint")
                # at this point you probably want to do a resetAttribute instead of showAttribute
                defaultHostPort = AdminConfig.showAttribute(ePoint, "port")
                break
for hostAlias in AdminConfig.getid('/Cell:' + cellName + '/VirtualHost:default_host/HostAlias:/').split(java.lang.System.getProperty('line.separator')):
    if AdminConfig.showAttribute(hostAlias, 'port') == defaultHostPort:
        print "Deleting host alias for port " + defaultHostPort
        AdminConfig.remove(hostAlias)
AdminConfig.create('HostAlias', AdminConfig.getid('/Cell:' + cellName + '/VirtualHost:default_host/'), [['hostname', '*'],['port', defaultHostPort]])
