LuaSocket FTP always times out

I've had success with LuaSocket's TCP facility, but I'm having trouble with its FTP module. I always get a timeout when trying to retrieve a (small) file. I can download the file just fine using Firefox or ftp in passive mode (on Ubuntu Dapper Linux).
I thought I might need LuaSocket to use passive FTP, but then I found that it seems to do that by default. The file I'm trying to retrieve can be fetched with passive FTP by other programs on my machine, but not in active mode. I found some old discussion about "hacking" passive-mode support into LuaSocket, which implies that later versions stopped using passive mode, yet my version seems to use passive anyway (I'm using 2.0.1; the newest is 2.0.2 and doesn't appear to have any changes relevant to my use case). I'm a little confused about how that post relates to my situation, partly because it's very old and LuaSocket's source now bears little resemblance to the code in that discussion.
I've boiled my code down to this:
local ftp = require "socket.ftp"
ftp.TIMEOUT = 10
print(ftp.get("ftp://ftp.us.dell.com/app/dpart.txt"))
This gives me a timeout. I ran it under strace on Linux (the Solaris equivalent is truss). Here's an abridged transcript:
socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 3
fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0
recv(3, "230-Welcome to the Dell FTP site."..., 8192, 0) = 971
send(3, "pasv\r\n", 6, 0) = 6
recv(3, 0x8089a58, 8192, 0) = -1 EAGAIN (Resource temporarily unavailable)
select(4, [3], NULL, NULL, {9, 999934}) = 0 (Timeout)
I also tried another site (it's password-protected, so I can't post it here), and the results were slightly different: I got a trace like the one above, but with the select() at the end succeeding, followed by this:
recv(3, "227 Entering Passive Mode (123,456,789,0,12,34)\r\n", 8192, 0) = 49
socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 4
fcntl64(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0
connect(4, {sa_family=AF_INET, sin_port=htons(12345), sin_addr=inet_addr("123.456.789.0")}, 16) = -1 EINPROGRESS (Operation now in progress)
select(5, [4], [4], NULL, {9, 999694}) = 0 (Timeout)
Compare this to the trace of my "ftp" program in passive mode (which works fine, though note that it does not set the sockets to nonblocking like LuaSocket does):
socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 6
write(5, "PASV\r\n", 6) = 6
read(3, "227 Entering Passive Mode (123,456,789,0,12,34)\r\n", 1024) = 51
connect(6, {sa_family=AF_INET, sin_port=htons(12345), sin_addr=inet_addr("123.456.789.0")}, 16) = 0
So I've tried LuaSocket against these two different FTP sites with different but similar failures. I also tried it from another machine where active FTP works, and it didn't have any better luck there (presumably because LuaSocket is always using passive mode, from what I can tell by reading the source in socket/ftp.lua).
So can anyone here make the LuaSocket snippet at the top work? Note that on my machine, active FTP to Dell's site doesn't work (I can connect, but as soon as I do ls it disconnects), so if you get LuaSocket to work, please also note whether active FTP to Dell's site from another program works on your machine.

Hm. It looks like the problem is that LuaSocket sends "pasv" in lower case. I'm going to try to figure out a workaround.
Hm. Nope, it looks quite elegantly welded shut. The easiest thing to do is probably to copy that file to the equivalent place in a directory that appears earlier in LUA_PATH. That is, make a local copy of the file, e.g. path/to/your/project/socket/ftp.lua (a small sketch of how to get require to pick it up follows the edits below).
Then edit the local file:
- self.try(self.tp:command("user", user or USER))
+ self.try(self.tp:command("USER", user or USER))
- self.try(self.tp:command("pass", password or PASSWORD))
+ self.try(self.tp:command("PASS", password or PASSWORD))
- self.try(self.tp:command("pasv"))
+ self.try(self.tp:command("PASV"))
- self.try(self.tp:command("port", arg))
+ self.try(self.tp:command("PORT", arg))
- local command = sendt.command or "stor"
+ local command = sendt.command or "STOR"
- self.try(self.tp:command("cwd", dir))
+ self.try(self.tp:command("CWD", dir))
- self.try(self.tp:command("type", type))
+ self.try(self.tp:command("TYPE", type))
- self.try(self.tp:command("quit"))
+ self.try(self.tp:command("QUIT"))
Perversely, a navelnaut expedition using getfenv, getmetatable, etc. didn't seem to be worth it. I consider this a serious problem with LuaSocket's design.
It's worth noting that RFC 959 uses all-caps commands (probably because it's from the 7-bit ASCII era).

Note that the server is failing to follow the FTP specification, which states that commands are case-insensitive. See RFC 959, Section 5.3: "The command codes are four or fewer alphabetic characters. Upper and lower case alphabetic characters are to be treated identically. Thus, any of the following may represent the retrieve command: RETR Retr retr ReTr rETr"

This problem is now fixed; the question and the first answer were a great help.
LuaSocket is correct per RFC 959 (the earlier comment about upper case is not right; see RFC 959, Section 5.2).
At least the Microsoft FTP server is not compliant; there may be others.
The solution is to change pasv to PASV; it's a workaround for a server that treats commands as case-sensitive. Details are on the Lua mailing list, where the archive will be web-accessible in a few days.
(edit line 59 of ftp.lua)

Related

--up script fails with '/etc/openvpn/update-systemd-resolved': No such file or directory (errno=2)

Since I reinstalled my Arch Linux distro, I get an error when I try to use OpenVPN. Here is the full output:
quentin@QuentinDesktop ~/Documents> openvpn --config ulille-vpn.ovpn
2022-01-04 21:52:15 WARNING: Compression for receiving enabled. Compression has been used in the past to break encryption. Sent packets are not compressed unless "allow-compression yes" is also set.
2022-01-04 21:52:15 WARNING: Compression for receiving enabled. Compression has been used in the past to break encryption. Sent packets are not compressed unless "allow-compression yes" is also set.
Options error: --up script fails with '/etc/openvpn/update-systemd-resolved': No such file or directory (errno=2)
Options error: Please correct this error.
Use --help for more information.
Here is the truncated ulille-vpn.ovpn file content (I just truncated the CA certificates):
ignore-unknown-option comp-lzo compress
dev tun
persist-tun
persist-key
cipher AES-256-CBC
tls-client
client
resolv-retry infinite
proto udp
remote vpn-etudiant.univ-lille.fr 443
verify-x509-name "vpn-etudiant.univ-lille.fr" name
auth SHA256
auth-user-pass
comp-lzo
compress lzo
#route-nopull
verb 3
pull-filter ignore "dhcp-option DOMAIN"
dhcp-option DOMAIN univ-lille.fr
dhcp-option DOMAIN univ-lille1.fr
script-security 2
setenv PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
up /etc/openvpn/update-systemd-resolved
up-restart
down /etc/openvpn/update-systemd-resolved
down-pre
Note that I didn't write this file myself; it is provided by my university to access its local network.
I already tried installing the openvpn-update-systemd-resolved AUR package and enabling it with systemd, but it changed nothing.
How can I fix it?
Okay, after a quick look at the configuration file (which I should have done before asking this question), I commented out the last four lines of the chunk I posted, and it works!
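For reference, this is just the tail of the file I posted, with those four lines commented out (everything above is unchanged):
script-security 2
setenv PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
#up /etc/openvpn/update-systemd-resolved
#up-restart
#down /etc/openvpn/update-systemd-resolved
#down-pre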
I'm sorry for asking; I thought the config file my university distributes was valid, but it looks like it's Fedora/Debian-specific, which is kind of odd because everything works perfectly fine without those four lines.
I hope this short-lived topic can help someone else in a similar situation! :^)
I had the very same problem, and it was also the config file trying to run up /etc/openvpn/update-systemd-resolved. Seems to be a distro issue, as I'm also running Arch.

How to run code in a debugging session from VS code on a remote using an interactive session?

I am using a cluster (similar to Slurm, but using Condor) and I want to run my code using VS Code (especially its debugger) and its remote sync extension.
I tried running it with the debugger in VS Code, but it didn't quite work as expected.
First I logged in to the cluster using VS Code and remote sync as usual, and that works just fine. Then I go ahead and get an interactive job with the command:
condor_submit -i request_cpus=4 request_gpus=1
That successfully gives me a node/GPU to use.
Once I have that, I try to run the debugger, but somehow it logs me out of the remote session (and, judging from the print statements, it looks like it runs on the head node). That's NOT what I want. I want to run my job in the interactive session on the node/GPU I was allocated. Why is VS Code running it in the wrong place? How can I run it in the right place?
Some of the output from the integrated terminal:
source /home/miranda9/miniconda3/envs/automl-meta-learning/bin/activate
/home/miranda9/miniconda3/envs/automl-meta-learning/bin/python /home/miranda9/.vscode-server/extensions/ms-python.python-2020.2.60897-dev/pythonFiles/lib/python/new_ptvsd/wheels/ptvsd/launcher /home/miranda9/automl-meta-learning/automl/automl/meta_optimizers/differentiable_SGD.py
conda activate base
(automl-meta-learning) miranda9~/automl-meta-learning $ source /home/miranda9/miniconda3/envs/automl-meta-learning/bin/activate
(automl-meta-learning) miranda9~/automl-meta-learning $ /home/miranda9/miniconda3/envs/automl-meta-learning/bin/python /home/miranda9/.vscode-server/extensions/ms-python.python-2020.2.60897-dev/pythonFiles/lib/python/new_ptvsd/wheels/ptvsd/launcher /home/miranda9/automl-meta-learning/automl/automl/meta_optimizers/differentiable_SGD.py
--> main in differentiable SGD
hello world torch_utils!
vision-sched.cs.illinois.edu
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
-> initialization of DiMO done!
---> i = 0, iteration/it 1 about to start
lp_norms(mdl) = 18.43514633178711
lp_norms(meta_optimized mdl) = 18.43514633178711
[e=0,it=1], train_loss: 2.304989814758301, train error: -1, test loss: -1, test error: -1
---> i = 1, iteration/it 2 about to start
lp_norms(mdl) = 18.470401763916016
lp_norms(meta_optimized mdl) = 18.470401763916016
[e=0,it=2], train_loss: 2.3068909645080566, train error: -1, test loss: -1, test error: -1
---> i = 2, iteration/it 3 about to start
lp_norms(mdl) = 18.548133850097656
lp_norms(meta_optimized mdl) = 18.548133850097656
[e=0,it=3], train_loss: 2.3019633293151855, train error: -1, test loss: -1, test error: -1
---> i = 0, iteration/it 1 about to start
lp_norms(mdl) = 18.65604019165039
lp_norms(meta_optimized mdl) = 18.65604019165039
[e=1,it=1], train_loss: 2.308889150619507, train error: -1, test loss: -1, test error: -1
---> i = 1, iteration/it 2 about to start
lp_norms(mdl) = 18.441967010498047
lp_norms(meta_optimized mdl) = 18.441967010498047
[e=1,it=2], train_loss: 2.300947666168213, train error: -1, test loss: -1, test error: -1
---> i = 2, iteration/it 3 about to start
lp_norms(mdl) = 18.545459747314453
lp_norms(meta_optimized mdl) = 18.545459747314453
[e=1,it=3], train_loss: 2.30662202835083, train error: -1, test loss: -1, test error: -1
-> DiMO done training!
--> Done with Main
(automl-meta-learning) miranda9~/automl-meta-learning $ conda activate base
(automl-meta-learning) miranda9~/automl-meta-learning $ hostname
vision-sched.cs.illinois.edu
Doesn't even run without debugging mode
The problem is more serious than I thought. Not only can I not run the debugger in the interactive session, I can't even "Run Without Debugging" without it switching to the Python Debug Console on its own. That means I have to run things manually with python main.py, but then I can't use the variables pane... which is a big loss!
What I am doing is switching my terminal to the condor_ssh_to_job session and then clicking Run Without Debugging (or ^F5 / Control + fn + F5). Although I made sure to be on the interactive session at the bottom of my integrated terminal, it switches by itself to the Python Debug Console pane, which is not connected to the interactive session I requested from my cluster...
related:
gitissue: https://github.com/microsoft/vscode-remote-release/issues/1722
quora: https://qr.ae/TqCiu8
reddit: https://www.reddit.com/r/vscode/comments/f1giwi/how_to_run_code_in_a_debugging_session_from_vs/
You can try reversing the order of operations; first submitting the job, obtaining the name of the compute node allocated to you, then instructing VSCode to connect to the compute node rather than the login node.
So first would be
condor_submit -i request_cpus=4 request_gpus=1
and note the name of the compute node. The following assumes it is node001.
Then, open VSCode on your laptop, click on the Remote Development extension icon and choose "Remote SSH: Connect to Host...". Choose "+ Add new SSH host...". In the "Enter SSH command" box, add the following:
ssh -J vision-sched.cs.illinois.edu miranda9@node001
VSCode will then ask you which SSH configuration file it should update. Make sure to review that configuration: specify the SSH keys if needed, the user name, etc. Also make sure you have vision-sched.cs.illinois.edu correctly configured in that file.
Then you can choose that host to connect to. VSCode will then execute on the compute node, and will be disconnected when the allocation finishes.
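If you prefer to add the host by hand instead of going through the dialog, the entry that ends up in ~/.ssh/config would look roughly like this (node001 and miranda9 are the placeholders used above; ProxyJump is the config-file equivalent of -J):
Host node001
    HostName node001
    User miranda9
    ProxyJump vision-sched.cs.illinois.edu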
I stumbled upon a related issue recently (I wanted to use VSCode's interactive Python capabilities on a compute node) and the above wasn't working for me, but this solved it:
ssh to the remote cluster: ssh cluster
inside the remote cluster, add my public key to the authorized keys, i.e. typically append the content of ~/.ssh/id_rsa.pub (local machine) to .ssh/authorized_keys (remote cluster); a one-liner for this is sketched after this answer
allocate some resources inside the cluster (this particular cluster uses Slurm and not Condor, so in this case I use something like srun --pty bash)
get the name of the compute node, typically visible in the command-line prompt as username@nodename. For argument's sake, let's imagine I get a generic name like node001
for simplicity on my local machine, modify the ~/.ssh/config file and edit it as:
Host cluster
    # stuff already written for the login node
Host node*
    HostName %h
    ProxyJump cluster
    User $USERNAME
Now I'm able to ssh to it from my local machine (as long as the compute node is running) with ssh node001.
In VsCode this boils down to
CTRL+P > Remote-SSH: Connect to Host...
type in the name of the node, here node001
you get connected to the node; now every interactive Python session you run (including Jupyter and jupytext) will have access to your allocated resources
I don't know how generic this solution is, but I hope it'll help at least somebody!
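For the authorized_keys step above, a one-liner run from the local machine does the append (assuming OpenSSH's ssh-copy-id is installed; otherwise the manual variant works too):
ssh-copy-id cluster
# or, equivalently:
cat ~/.ssh/id_rsa.pub | ssh cluster 'cat >> ~/.ssh/authorized_keys'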
Here is a simpler workaround:
on the remote server, create a file named bash somewhere, for example /home/myuser/pathto/bash
make it executable using chmod +x bash
write salloc [your desired options for the interactive job] in the bash file
In VSCode Settings, search for Automation Shell: Linux and click "Edit in settings.json"
change the line to "terminal.integrated.automationShell.linux": "/home/myuser/pathto/bash" and save it (use the absolute path; ~/pathto/bash didn't work for me, for example). A sketch of the script and the setting follows below.
Done :)
Now every time you run the debugger, it will first ask for the interactive job and the debugger will run inside it. But bear in mind that this also applies to tasks you run from tasks.json.
You can also use srun instead of salloc, for example srun --pty -t 2:00:00 --mem=8G bash.
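Concretely, a sketch of how the pieces fit together, using the srun example from this answer (the path and the resource options are placeholders):
#!/bin/bash
# /home/myuser/pathto/bash -- wrapper that starts an interactive allocation
srun --pty -t 2:00:00 --mem=8G bash
and in settings.json:
"terminal.integrated.automationShell.linux": "/home/myuser/pathto/bash"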

Commands not accepted by instrument over RS232 w/ ruby serialport script

MY ENVIRONMENT
I'm sending commands to a camera simulator device made by Vivid Engineering (Model CLS-211) over RS-232 from my laptop which is running CentOS 7.
ISSUE
I have installed two different serial monitors (minicom, gtkterm) and can successfully send successive commands to the device over and over. I can send a command to dump the memory contents as well. There are several configuration commands I have to send to the CLS-211 to set it up for a specific test. I want to automate this process and have written a Ruby script that writes a list of commands to the CLS-211 to make it easier. However, it appears that I am not terminating each command correctly, or the CLS-211 requires a specific terminator/signal that I am not giving it in my Ruby script. I'm confused why I can accomplish this task with a serial monitor but not with a Ruby script. I've configured the serial port settings correctly per their manual, so I know that is not the issue; you'll see those settings defined in my scripts below. Their manual points out they use HyperTerminal, which I can't use because I'm on a Linux system. The manufacturer mentioned that other serial terminals should work just fine, but HyperTerminal is the only one they have tested. I've asked for their feedback on the issue and they simply say "We don't use Linux but it shouldn't be much different, good luck".
TROUBLESHOOTING
I have verified, to the best of my knowledge, that my "send.rb" script is working by writing a "read.rb" script to read back what I sent. I essentially connected pin 2 "RX" to pin 3 "TX" on the RS-232 cable for a loopback test. Below are my two scripts and the resulting output from running this test.
## Filename send.rb
require 'serialport'
ser = SerialPort.new("/dev/ttyS0", 9600, 8, 1, SerialPort::NONE)
ser.write "LVAL_LO 5\r\n"
ser.write "LVAL_HI 6\r\n"
ser.write "FVAL_LO 7\r\n"
ser.write "FVAL_HI 8\r\n"
ser.close
## Filename read.rb
require 'serialport'
ser = SerialPort.new("/dev/ttyS0", 9600, 8, 1, SerialPort::NONE)
while true do
printf("%c", ser.getc)
end
ser.close
Just found out that I cannot post more than 2 links since my reputation is so low. Anyway, the output is just the following...
username@hostname $ ruby read.rb
LVAL_LO 5
LVAL_HI 6
FVAL_LO 7
FVAL_HI 8
I have hooked up the CLS-211 and dumped the memory contents by issuing the DUMP command from GtkTerm, and this works fine. The following shows the memory contents for the first four parameters: LVAL_LO, LVAL_HI, FVAL_LO, and FVAL_HI. I'm only showing four values from the memory dump to keep this post short rather than listing all of them. Since I can't include more than 2 links because my reputation is low, I'm typing out what the output looks like in GtkTerm instead...
CLS-211 initializing, please wait
............
ready
CLS211 Camera Link Simulator CLI
Vivid Engineering
Rev 3.0
DUMP
LVAL_LO = 0x0020 / 32
LVAL_HI = 0x0100 / 256
FVAL_LO = 0x0002 / 2
FVAL_HI = 0x0100 / 256
In the output above you can see that the system boots as expected. After I typed the DUMP command, it printed out the memory contents successfully: LVAL_LO = 32, LVAL_HI = 256, FVAL_LO = 2, and FVAL_HI = 256. As I mentioned before, I can also type a command into GtkTerm to change a specific parameter. The output below shows that after typing LVAL_LO 5 into GtkTerm and then issuing DUMP, the value 5 was read correctly and LVAL_LO was changed as expected. I can replicate this behavior with every command using GtkTerm.
Again, I can't post more than 2 links so I'm writing the output below...
LVAL_LO 5
DUMP
LVAL_LO = 0x0005 / 5
LVAL_HI = 0x0100 / 256
FVAL_LO = 0x0002 / 2
FVAL_HI = 0x0100 / 256
At this point everything seemed to be working as expected, so I tried to replicate it with my Ruby script. I ran the "send.rb" script shown above, then opened GtkTerm and issued a DUMP command to see if the values were taken. Before I ran "send.rb", the values in the CLS-211's memory were LVAL_LO = 32, LVAL_HI = 256, FVAL_LO = 2, and FVAL_HI = 256. After running "send.rb", opening GtkTerm again, and issuing DUMP, the CLS-211 replied with "invalid entry". Issuing it again dumped the memory contents and showed that LVAL_LO was changed correctly but the other three values were not.
Almost Successful
At this point I assumed the first value was being received and written to the CLS-211's memory correctly, but the other commands were not being received correctly. I assumed this was most likely due to the lack of any delay, so I placed a 1-second delay between each ser.write call and changed "send.rb" to the following.
## Filename send.rb
require 'serialport'
ser = SerialPort.new("/dev/ttyS0", 9600, 8, 1, SerialPort::NONE)
ser.write "LVAL_LO 9\r\n"
sleep(1)
ser.write "LVAL_HI 10\r\n"
sleep(1)
ser.write "FVAL_LO 11\r\n"
sleep(1)
ser.write "FVAL_HI 12\r\n"
sleep(1)
ser.close
The following is the result of running "send.rb" again with the above changes, opening up GtkTerm, and executing the DUMP command to verify memory.
Added sleep(1)
Nothing really changed. The script took longer to execute and the first value did change, but as before, the last three values I sent were not received and saved to memory on the CLS-211.
CONCLUSION
How can I continue troubleshooting this issue? What termination is applied to each command I send through GtkTerm, and is it different from the "...\r\n" I send in my Ruby script "send.rb"? I'm totally lost and out of options on what to try next.
[EDIT/UPDATE 10/09/17]
I'm so stupid. The one termination character I forgot to try by itself was the carriage return "\r". Using a carriage return alone after each command fixed the issue. I'm curious what requirements drive a manufacturer to define how a serial command should be terminated; I would have thought there was a standard for which termination character(s) to use. For completeness, I have included the corrected code below for communicating with the CLS-211. Basically, I took out the '\n' and kept the '\r'; that was it.
## Filename send.rb
require 'serialport'
ser = SerialPort.new("/dev/ttyS0", 9600, 8, 1, SerialPort::NONE)
ser.write "LVAL_LO 9\r"
sleep(1)
ser.write "LVAL_HI 10\r"
sleep(1)
ser.write "FVAL_LO 11\r"
sleep(1)
ser.write "FVAL_HI 12\r"
sleep(1)
ser.close
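Based on that finding, here is a small sketch that factors the fix into a reusable helper (the send_cmd name and the command list are illustrative, not from the CLS-211 manual):
## Filename send_cls211.rb (sketch)
require 'serialport'

# Send one command terminated with a bare carriage return, then give the
# CLS-211 a moment to process it before the next command.
def send_cmd(ser, cmd)
  ser.write(cmd + "\r")
  sleep(1)
end

ser = SerialPort.new("/dev/ttyS0", 9600, 8, 1, SerialPort::NONE)
["LVAL_LO 9", "LVAL_HI 10", "FVAL_LO 11", "FVAL_HI 12"].each { |c| send_cmd(ser, c) }
ser.close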

Strange pause when Python downloads a file from FTP

I have some code where I check some directories on an FTP server and download new files to my server. There are over 3 million files on the server (zip archives). I do many unoptimized things in this code, but all of them run fast except the downloading part. Here it is:
lf = open(local_filename, "wb")  # here I create the blank file
print("opened")
try:
    ftp.retrbinary("RETR " + name, lf.write)  # here I write the data
    print("wrote")
except ftplib.error_perm:
    pass
lf.close()  # here I close the file with the data
print("closed")
My problem is in the part between print("opened") and print("wrote"). My Python console (2.7) stays silent for 10-20 seconds in this phase, even though the files being downloaded are tiny, below 2-3 KB.
The strange thing is this: when I run the script from my own PC (Windows 7), it works great and fast, but when I run it on Windows Server 2012 R2 (a VDS), I get this sad pause. I need your help. What should I do to configure my server so downloads are fast?
I found the answer. You just need to run the following command:
netsh int tcp set global ecncapability=disabled
and everything will be excellent!
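To verify the change took effect, netsh can also display the current global TCP settings; ECN Capability should show as disabled afterwards:
netsh int tcp show global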

Sending an email from R using the sendmailR package

I am trying to send an email from R using the sendmailR package. The code below works fine when I run it on my PC, and I receive the email. However, when I run it on my MacBook Pro, it fails with the following error:
library(sendmailR)
from <- sprintf("<sendmailR@%s>", Sys.info()[4])
to <- "<myemail@gmail.com>"
subject <- "TEST"
sendmail(from, to, subject, body,
control=list(smtpServer="ASPMX.L.GOOGLE.COM"))
Error in socketConnection(host = server, port = port, blocking = TRUE) :
cannot open the connection
In addition: Warning message:
In socketConnection(host = server, port = port, blocking = TRUE) :
ASPMX.L.GOOGLE.COM:25 cannot be opened
Any ideas as to why this would work on a PC, but not a Mac? I turned the firewall off on both machines.
Are you able to send email via the command-line?
So, first of all, fire up a Terminal and then
$ echo "Test 123" | mail -s "Test" user@domain.com
Look into /var/log/mail.log, or better use
$ tail -f /var/log/mail.log
in a different window while you send your email. If you see something like
... setting up TLS connection to smtp.gmail.com[xxx.xx.xxx.xxx]:587
... Trusted TLS connection established to smtp.gmail.com[xxx.xx.xxx.xxx]:587:\
TLSv1 with cipher RC4-MD5 (128/128 bits)
then you succeeded. Otherwise, it means you have to configure your mailing system. I have used postfix with Gmail for two years now and have never had a problem with it. Basically, you need to grab the Equifax certificate, Equifax_Secure_CA.pem, from here: http://www.geotrust.com/resources/root-certificates/. (They were using Thawte certificates before, but they changed last year.) Then, assuming you use Gmail:
Create relay_password in /etc/postfix and put a single line like this (with your correct login and password):
smtp.gmail.com login@gmail.com:password
then in a Terminal,
$ sudo postmap /etc/postfix/relay_password
to update Postfix lookup table.
Add the certificates in /etc/postfix/certs, or any folder you like, then
$ sudo c_rehash /etc/postfix/certs/
(i.e., rehash the certificates with OpenSSL).
Edit /etc/postfix/main.cf so that it includes the following lines (adjust the paths if needed):
relayhost = smtp.gmail.com:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/relay_password
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = may
smtp_tls_CApath = /etc/postfix/certs
smtp_tls_session_cache_database = btree:/etc/postfix/smtp_scache
smtp_tls_session_cache_timeout = 3600s
smtp_tls_loglevel = 1
tls_random_source = dev:/dev/urandom
Finally, just reload the Postfix process, with e.g.
$ sudo postfix reload
(a combination of start/stop works too).
You can choose a different port for the SMTP, e.g. 465.
It's still possible to use SASL without TLS (the steps above are basically the same), but in both cases the main problem is that your login information sits in a plain text file... Also, should you want to use your MobileMe account, just replace the Gmail SMTP server with smtp.me.com.
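If you then want to keep using sendmailR from R, one option (my assumption; the answer above only covers the command-line test) is to hand the message to the local postfix relay instead of Google's MX:
library(sendmailR)
from <- sprintf("<sendmailR@%s>", Sys.info()[4])
to <- "<myemail@gmail.com>"
body <- "Test message"  # placeholder body
# Relay through the local postfix instance configured above (listens on port 25)
sendmail(from, to, subject = "TEST", msg = body,
         control = list(smtpServer = "localhost"))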
