I am trying to save a file into a library on an iSeries (IBM i) system using GxFtpPut in GeneXus 10 V3 with .NET, but when sending the file GeneXus tries to send it to a Windows directory instead of the library. The same transfer works using the ftp command from the cmd prompt.
I've already tried changing the path it uses, to no avail, and I've tried to find another way of sending the file through GeneXus.
For example, when using the cmd I just put this:
put C:\FILES\Filename.txt Library/Filename
and the file is sent into the library,
but when doing this in GeneXus:
Call("GxFtpPut", &FileDirectory , 'Library/'+&FileName,'B' )
it does not work: GeneXus tries to find a directory with that name in the Windows file system of the server.
I just want to be able to send the file to the library on the server without issue.
IBM i has two distinct name formats depending on the file system you are trying to use. NAMEFMT 0 is the library/filename format, and is likely unknown to PC FTP clients. NAMEFMT 1 is the typical hierarchical directory path used by non-IBM i computers, and also works with IBM i if you want to put a file anywhere in the IFS (Integrated File System).
Fun fact: the native library file system is also accessible from the IFS, but to address it you need a format that might be a little unfamiliar: /QSYS.lib/library.lib/filename.file/membername.mbr. You may be able to drop the member name.
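For illustration, a put using that IFS-style path in NAMEFMT 1 would presumably look like this (library and file names are placeholders):
put C:\FILES\Filename.txt /QSYS.lib/LIBRARY.lib/FILENAME.file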
To change name format, you can issue the SITE sub-command on your remote host like this:
QUOTE SITE NAMEFMT 0 -- This sets name format 0 (library/filename)
QUOTE SITE NAMEFMT 1 -- This sets name format 1 (directory path)
I did some testing with a plain Windows FTP client. The test file on the PC was a text file created in Notepad++. It turns out that we start out in NAMEFMT 0 unless it is changed. It looks like GeneXus only supports a limited set of commands, so here is the limited FTP script that works:
ascii
put test.txt mylib/testpf
I can now pull up testpf on the greenscreen utilities and read it. I can also read testpf in my GUI SQL client. The ASCII text has been converted properly to EBCDIC.
|TESTPF |
|--------------------------------------------------------------------------------|
| |
|// ------------------------------------ |
|// Sweep |
|// |
|// Performs the sweep logic |
|// ------------------------------------ |
|dcl-proc Sweep; |
| |
| |
| exec sql |
| update atty a |
| set ymglsb = (select ymglsb from glaty |
| where atty = a.atty) |
| where atty in (select atty from glaty where atty = a.atty); |
|// where ymglsb in (select ymglsb from glaty where atty = a.atty); |
| if %subst(sqlstate: 1: 2) < '00' or |
| %subst(sqlstate: 1: 2) > '02'; |
| exec sql get diagnostics condition 1 |
| :message = message_text; |
| SendSqlMsg('02: ' + message); |
| endif; |
| |
| exec sql |
| update atty a |
| set ymglsb = '000' |
| where not exists (select * from glaty where atty = a.atty); |
| if %subst(sqlstate: 1: 2) < '00' or |
| %subst(sqlstate: 1: 2) > '02'; |
| exec sql get diagnostics condition 1 |
| :message = message_text; |
| SendSqlMsg('03: ' + message); |
| endif; |
| |
|end-proc; |
However, if I try to transfer in binary mode, the resulting data in the file looks like this:
|TESTPF |
|--------------------------------------------------------------------------------|
|ëÏÁÁø&ÁÊÃ?Ê_ËÈÇÁËÏÁÁø% |
|?ÅÑÄÀÄ%øÊ?ÄëÏÁÁøÁÌÁÄËÉ% |
|ÍøÀ/ÈÁ/ÈÈ`/ËÁÈ`_Å%ËÂËÁ%ÁÄÈ`_Å%ËÂÃÊ?_Å%/È` |
|ÏÇÁÊÁ/ÈÈ`//ÈÈ`ÏÇÁÊÁ/ÈÈ`Ñ>ËÁ%ÁÄÈ/ÈÈ`ÃÊ?_Å%/È`ÏÇÁÊÁ/ÈÈ |
|`//ÈÈ`ÏÇÁÊÁ`_Å%ËÂÑ>ËÁ%ÁÄÈ`_Å%ËÂÃÊ?_Å%/È`ÏÇÁÊÁ/ÈÈ`//ÈÈ |
|`ÑöËÍÂËÈËÉ%ËÈ/ÈÁ?ʶËÍÂËÈËÉ%ËÈ/ÈÁ |
|ÁÌÁÄËÉ%ÅÁÈÀÑ/Å>?ËÈÑÄËÄ?>ÀÑÈÑ?>_ÁËË/ÅÁ_ÁËË/ÅÁ¬ÈÁÌÈ |
|ëÁ>ÀëÉ%(ËÅ_ÁËË/ÅÁÁ>ÀÑÃÁÌÁÄËÉ%ÍøÀ/ÈÁ/ÈÈ`/ |
|ËÁÈ`_Å%ËÂÏÇÁÊÁ>?ÈÁÌÑËÈËËÁ%ÁÄÈÃÊ?_Å%/È`ÏÇÁÊÁ/ÈÈ`// |
|ÈÈ`ÑöËÍÂËÈËÉ%ËÈ/ÈÁ?ʶËÍÂËÈËÉ%ËÈ/ÈÁ |
|ÁÌÁÄËÉ%ÅÁÈÀÑ/Å>?ËÈÑÄËÄ?>ÀÑÈÑ?>_ÁËË/ÅÁ_ÁËË/ÅÁ¬ÈÁÌÈ |
|ëÁ>ÀëÉ%(ËÅ_ÁËË/ÅÁÁ>ÀÑÃÁ>ÀøÊ?Ä |
This has not been converted, because binary mode tells the IBM i FTP server not to convert to EBCDIC.
So try ASCII mode and use the library/filename format. The target file does not need to pre-exist.
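Applied to the call from the question, that presumably means passing 'A' (ASCII) instead of 'B' as the transfer type, assuming the third parameter selects the mode, as it appears to:
Call("GxFtpPut", &FileDirectory, 'Library/'+&FileName, 'A')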
I have a massive number of files on an extremely fast SAN disk that I would like to run Hive queries on.
An obvious option is to copy all files into HDFS by using a command like this:
hadoop dfs -copyFromLocal /path/to/file/on/filesystem /path/to/input/on/hdfs
However, I don't want to create a second copy of my files just to be able to query them in Hive.
Is there any way to point an HDFS folder at a local folder, so that Hadoop sees it as an actual HDFS folder? Files keep being added to the SAN disk, so Hadoop needs to see the new files as they are added.
This is similar to Azure HDInsight's approach, where you copy your files into blob storage and HDInsight's Hadoop sees them through HDFS.
For playing around with small files, using the local file system might be fine, but I wouldn't do it for any other purpose.
Putting a file in HDFS means that it is split into blocks which are replicated and distributed.
This later gives you both performance and availability.
Locations of [external] tables can be directed to the local file system using file:///.
Whether it works smoothly or you'll start getting all kinds of errors, that remains to be seen.
Please note that for the demo I'm using a little trick here to point the location at a specific file, but your basic use case will probably be for directories.
Demo
create external table etc_passwd
(
Username string
,Password string
,User_ID int
,Group_ID int
,User_ID_Info string
,Home_directory string
,shell_command string
)
row format delimited
fields terminated by ':'
stored as textfile
location 'file:///etc'
;
alter table etc_passwd set location 'file:///etc/passwd'
;
select * from etc_passwd limit 10
;
+----------+----------+---------+----------+--------------+-----------------+----------------+
| username | password | user_id | group_id | user_id_info | home_directory | shell_command |
+----------+----------+---------+----------+--------------+-----------------+----------------+
| root | x | 0 | 0 | root | /root | /bin/bash |
| bin | x | 1 | 1 | bin | /bin | /sbin/nologin |
| daemon | x | 2 | 2 | daemon | /sbin | /sbin/nologin |
| adm | x | 3 | 4 | adm | /var/adm | /sbin/nologin |
| lp | x | 4 | 7 | lp | /var/spool/lpd | /sbin/nologin |
| sync | x | 5 | 0 | sync | /sbin | /bin/sync |
| shutdown | x | 6 | 0 | shutdown | /sbin | /sbin/shutdown |
| halt | x | 7 | 0 | halt | /sbin | /sbin/halt |
| mail | x | 8 | 12 | mail | /var/spool/mail | /sbin/nologin |
| uucp | x | 10 | 14 | uucp | /var/spool/uucp | /sbin/nologin |
+----------+----------+---------+----------+--------------+-----------------+----------------+
You can mount your HDFS path onto a local folder, for example with an hdfs mount.
Please follow this for more info
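For reference, a sketch of what such a mount can look like through the HDFS NFS gateway, assuming the gateway is running (hostname and mount point are placeholders):
mount -t nfs -o vers=3,proto=tcp,nolock namenode-host:/ /mnt/hdfs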
But if you want speed, it isn't an option.
I am in the process of moving and testing a development site on the actual domain name, and I just wanted to check whether I am missing anything, and also get some advice.
It is a Magento 1.8.1 install from Turnkey Linux running on an m1.medium instance.
What I have done so far is: create an image of the development instance, make a new account, and copy the image over to it. I then allocated an Elastic IP and associated it with the new instance. Next I pointed the A record of the production domain to the Elastic IP.
Now, if I go to the production domain I get redirected to the development domain. Is there a reason for this?
Ideally I would like to have two instances: a dev one that is off unless needed and, of course, the production one, which is going to be live 24/7. However, if I turn the development instance off it stops the other too.
I have a feeling it's just because I need to change references to the dev domain in the Magento database / back-end, but I wanted to get a more knowledgeable answer as I don't want to break either of the instances.
Also, I should probably mention that the development domain is a subdomain, i.e. shop.mysite.com, and the live one is just normal, i.e. mysite.com. I'm not entirely sure this is relevant, but I thought it worth a mention.
Thanks in advance for any help.
The reason the URL on your new instance gets redirected to the old URL is that, in the core_config_data table of your Magento database, the web/unsecure/base_url and web/secure/base_url paths point to your old URL.
So if you are using MySQL, you can query your database as follows:
mysql> use magento;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> select * from core_config_data;
+-----------+---------+----------+-------------------------------+-------------------------------------+
| config_id | scope | scope_id | path | value |
+-----------+---------+----------+-------------------------------+-------------------------------------+
| 1 | default | 0 | web/seo/use_rewrites | 1 |
| 2 | default | 0 | admin/dashboard/enable_charts | 0 |
| 3 | default | 0 | web/unsecure/base_url | http://magento.myolddomain.com/ |
| 4 | default | 0 | web/secure/use_in_frontend | 1 |
| 5 | default | 0 | web/secure/base_url | https://magento.myolddomain.com/ |
| 6 | default | 0 | web/secure/use_in_adminhtml | 1 |
| 7 | default | 0 | general/locale/code | en_US |
| 8 | default | 0 | general/locale/timezone | Europe/London |
| 9 | default | 0 | currency/options/base | USD |
| 10 | default | 0 | currency/options/default | USD |
| 11 | default | 0 | currency/options/allow | USD |
| 12 | default | 0 | general/region/display_all | 1 |
| 13 | default | 0 | general/region/state_required | AT,CA,CH,DE,EE,ES,FI,FR,LT,LV,RO,US |
| 14 | default | 0 | catalog/category/root_id | 2 |
+-----------+---------+----------+-------------------------------+-------------------------------------+
14 rows in set (0.00 sec)
and you can change it as follows:
mysql> update core_config_data set value='http://magento.mynewdomain.com' where path='web/unsecure/base_url';
mysql> update core_config_data set value='https://magento.mynewdomain.com' where path='web/secure/base_url';
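To confirm the change took effect (and note that Magento 1.x caches configuration, so you may also need to clear the cache, e.g. the var/cache directory, before the new URLs are picked up):
mysql> select config_id, path, value from core_config_data
    -> where path in ('web/unsecure/base_url', 'web/secure/base_url');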
I'm writing scenarios for QA engineers, and I'm now facing a problem with step encapsulation.
This is my scenario:
When I open connection
And device are ready to receive files
I send to device file with params:
| name | ololo |
| type | txt |
| size | 123 |
Each of these steps is important for the people who will use them.
I also need to automate these steps and repeat them 100 times,
so I decided to create a new step which runs them 100 times.
The first variant was to create a step with the other steps inside it, like:
Then I open connection, check device are ready and send file with params 100 times:
| name | ololo |
| type | txt |
| size | 123 |
But this version is not appropriate, because:
people who use it won't understand which steps execute inside,
and sometimes step names like this get too long.
The second variant was to create a step with the other steps in a parameter table:
I execute following steps 100 times:
| When I open connection |
| And device are ready to receive files |
| I send to device file |
That would be easy to understand for the people who will use my steps and scenarios.
But I also have some steps with parameters,
so I need to create something like a two-tier table:
I execute following steps 100 times:
| When I open connection |
| And device are ready to receive files |
| I send to device file with params: |
| | name | ololo | |
| | type | txt | |
| | size | 123 | |
This is the best variant in my situation.
But of course Cucumber can't parse it without errors (it's not valid Cucumber syntax).
How can I fix the last example of the step?
Does Cucumber have any facilities that could help me?
Can you suggest another kind of solution?
Has anyone had similar problems?
I decided to change the "|" symbols to "/" in the nested parameter tables.
It's not perfect, but it works.
These are the scenario steps:
I execute following steps 100 times:
| I open connection |
| device are ready to receive files |
| I send to device file with params: |
| / name / ololo / |
| / type / txt / |
| / size / 123 / |
This is the step definition:
And /^I execute following steps (.*) times:$/ do |number, table|
  # Each row of the outer table has a single cell: either a step name
  # or a "/ ... /" row belonging to the previous step's data table.
  data = table.raw.map { |row| row.last }
  number.to_i.times do
    params = []
    step_name = ''
    data.each_with_index do |line, index|
      # True when the next line is absent or is not a "/ ... /" parameter row.
      next_is_not_param = data[index + 1].nil? || !data[index + 1].include?('/')
      if !line.include?('/')
        step_name = line
        # Run the step immediately if it has no data table of its own.
        step step_name if next_is_not_param
      else
        # Convert the "/ ... /" row back into a regular "|"-delimited row.
        params += [line.gsub('/', '|')]
        if next_is_not_param
          # All parameter rows collected: build a table and run the step.
          step_table = Cucumber::Ast::Table.parse(params.join("\n"), nil, nil)
          step step_name, step_table
          params = []
        end
      end
    end
  end
end
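As an aside, Cucumber's steps helper can also run a multi-line Gherkin fragment, data tables included, from inside a step definition. A sketch of that alternative, trading the generic outer table for an explicit list of the three steps (step text as in the scenario above):
And /^I execute following steps (\d+) times:$/ do |number|
  number.to_i.times do
    # Run the three steps verbatim; the data table is parsed as normal Gherkin.
    steps %Q{
      When I open connection
      And device are ready to receive files
      And I send to device file with params:
        | name | ololo |
        | type | txt   |
        | size | 123   |
    }
  end
end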
Question
How can I communicate with another program (for instance, a Windows service) through environment variables (not the system or user ones)?
What do we have
Well, I have the following scheme for a data logger:
------------------------- --------------------------------
| the things to measure | | the things that do something |
------------------------- --------------------------------
| ^
| sensors | switches
V |
-------------------------------------------------------------------
| dedicated hardware |
-------------------------------------------------------------------
| ^
| | serial communication
V |
--------------- -------------
| Windows | ------------------------------------> | user |
| service | <------------------------------------ | interface |
--------------- udp communication -------------
|^ keyboard
V| and screen
--------
| user |
--------
In the current development:
the Windows service is always running while Windows is running
the user can open and close the user interface (of course :p)
the Windows service acquires data from the sensors
the user interface automatically requests data from the Windows service every 100 ms and shows it to the user, via UDP communication through an implemented protocol (we call this the GetData() command and its response)
the user can send some other commands to change the data to acquire, through the implemented protocol (we call this the SetSensors() command and its response)
Both the user interface and the Windows service are developed in Borland C++ Builder 6 and use the NMUDP component, from the FastNet tab, for UDP communication.
What we are thinking of doing
Because of some buffer issues, and to free the UDP channel for only the SetSensors() command and its response, we are considering that, instead of using GetData():
the Windows service would get data from the sensors and put it into environment variables
the user interface would read those variables and show the data to the user
Scheme after doing what we are considering
------------------------- --------------------------------
| the things to measure | | the things that do something |
------------------------- --------------------------------
| ^
| sensors | switches
V |
-------------------------------------------------------------------
| dedicated hardware |
-------------------------------------------------------------------
| ^
| | serial communication
V |
--------------- -------------
| | ------------------------------------> | |
| | environment variables | |
| | (get data from sensors) | |
| Windows | | user |
| service | | interface |
| | | |
| | ------------------------------------> | |
| | <------------------------------------ | |
--------------- udp communication -------------
(send commands to service) |^ keyboard
V| and screen
--------
| user |
--------
Is there any way to do that?
We would not use system or user environment variables, because those are written to the Windows Registry, i.e., saved to the hard drive, which makes things slower...
As @HansPassant said, I cannot do that directly. Although I saw some ways to do it via a memory-mapped file, it is just as easy to add one more UDP communication channel on another port. So:
------------------------- --------------------------------
| the things to measure | | the things that do something |
------------------------- --------------------------------
| ^
| sensors | switches
V |
-------------------------------------------------------------------
| dedicated hardware |
-------------------------------------------------------------------
| ^
| | serial communication
V |
--------------- -------------
| | ------------------------------------> | |
| | udp communication (port 3) | |
| | (get data from sensors) | |
| Windows | | user |
| service | | interface |
| | (port 1) | |
| | ------------------------------------> | |
| | <------------------------------------ | |
--------------- udp communication (port 2) -------------
(send commands to service) |^ keyboard
V| and screen
--------
| user |
--------
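For anyone curious about the memory-mapped-file route, here is a minimal sketch (not what I implemented) using the Win32 API available from Borland C++ Builder 6. The struct layout and the mapping name "Global\\SensorData" are placeholders, sharing between a service and an interactive session typically needs the Global\ namespace, and synchronization (e.g. a named mutex) is omitted:
#include <windows.h>

struct SensorData {
    double values[16];  // hypothetical sensor readings
    DWORD  timestamp;   // tick count of the last update
};

// Writer side (the Windows service): create the shared region once,
// then copy fresh readings into it on every acquisition cycle.
SensorData* OpenSharedForWrite()
{
    HANDLE hMap = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                    0, sizeof(SensorData), "Global\\SensorData");
    if (!hMap) return NULL;
    return (SensorData*)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0,
                                      sizeof(SensorData));
}

// Reader side (the user interface): open the existing region and read it
// on the 100 ms timer instead of sending GetData() over UDP.
const SensorData* OpenSharedForRead()
{
    HANDLE hMap = OpenFileMapping(FILE_MAP_READ, FALSE, "Global\\SensorData");
    if (!hMap) return NULL;
    return (const SensorData*)MapViewOfFile(hMap, FILE_MAP_READ, 0, 0,
                                            sizeof(SensorData));
}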
If someone provides a better solution, I'll mark it as the solution in the future.
I am working with JasperReports 4.5.0 and Spring 3.0.5.RELEASE. I am exporting my JR report in PDF, HTML, and CSV formats. With PDF and HTML the report is generated fine.
But when I export my report as CSV, it puts all the fields and values in one column only. My code flow is exactly like this link.
Below is an example of what I am getting now.
| A | B | C | D |
|S.No,IPAddress,TotalDuration,TotalBdrCount | | | |
|1,null,266082,null | | | |
|2,null,null,null | | | |
|3,null,null,null | | | |
|4,null,null,null | | | |
Here S.No, IPAddress, TotalDuration, and TotalBdrCount are the column headers, and 1, null, 266082, null are the values for the respective columns.
But my requirement is
| A | B | C | D |
| S.No | IPAddress | TotalDuration | TotalBdrCount |
I think you understand my problem. Do I need to set any parameters for this? I am not getting it. Can anyone help me out with this issue?
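A hedged guess, since the screenshot shows comma-separated values landing in a single spreadsheet column: the CSV itself may be fine, and the spreadsheet is simply not splitting on commas. One thing to try is exporting with a field delimiter the spreadsheet will split on (a tab, for instance) via JRCsvExporter's FIELD_DELIMITER parameter, which exists in JasperReports 4.5; the class and method names below are placeholders:
import java.io.OutputStream;
import net.sf.jasperreports.engine.JRException;
import net.sf.jasperreports.engine.JRExporterParameter;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.export.JRCsvExporter;
import net.sf.jasperreports.engine.export.JRCsvExporterParameter;

public class CsvExportSketch {
    // Export the filled report as CSV with an explicit tab delimiter so that
    // spreadsheet applications split the fields into separate columns.
    public static void exportCsv(JasperPrint print, OutputStream out) throws JRException {
        JRCsvExporter exporter = new JRCsvExporter();
        exporter.setParameter(JRExporterParameter.JASPER_PRINT, print);
        exporter.setParameter(JRExporterParameter.OUTPUT_STREAM, out);
        exporter.setParameter(JRCsvExporterParameter.FIELD_DELIMITER, "\t");
        exporter.exportReport();
    }
}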