I want to create an FTP connection, send a big file over it, and observe its effect on my network. To do that, I wrote the following lines in my OMNeT++ scenario file, but I am not sure whether this is the correct way to create an FTP connection. I set the "thinkTime" and "idleInterval" parameters to zero so that the FTP data is sent immediately and in a single session. Since the file size is very large for my current simulation, I expect to have only one session over the whole simulation time. However, the throughput of my FTP connection is very low and I do not know why. Can someone help me? All suggestions and FTP examples are welcome.
# FTPclusterMember FTP
**.clusterMember[0].tcpApp[1].typename = "TCPBasicClientApp"
**.clusterMember[0].tcpApp[1].connectAddress = "FTPserver"
**.clusterMember[0].tcpApp[1].connectPort = 21
# Download Scenario requestLength < replyLength
**.clusterMember[0].tcpApp[1].requestLength = 100B
**.clusterMember[0].tcpApp[1].replyLength = 1500MB
**.clusterMember[0].tcpApp[1].thinkTime = 0s
**.clusterMember[0].tcpApp[1].idleInterval = 0s
# FTPserver Settings
**.FTPserver.numTcpApps = 1
**.FTPserver.tcpApp[0].typename = "TCPSinkApp"
**.FTPserver.tcpApp[0].localAddress = "FTPserver"
**.FTPserver.tcpApp[0].localPort = 21
Since FTP uses TCP at the transport layer, we can use "TCPBasicClientApp" and "TCPGenericSrvApp" for the FTP client and server, respectively. To mimic the FTP protocol, we set the port to 21. To simulate an upload or a download scenario, you only need to swap the request length and reply length. As an example, take a look at the configuration below:
# FTPclusterMember FTP
**.clusterMember[0].numTcpApps = 1
**.clusterMember[0].tcpApp[0].typename = "TCPBasicClientApp"
**.clusterMember[0].tcpApp[0].connectAddress = "FTPserver"
**.clusterMember[0].tcpApp[0].connectPort = 21
# Download Scenario requestLength < replyLength
# Upload Scenario requestLength > replyLength
**.clusterMember[0].tcpApp[0].requestLength = 1500MB
**.clusterMember[0].tcpApp[0].replyLength = 100B
**.clusterMember[0].tcpApp[0].thinkTime = 0s
**.clusterMember[0].tcpApp[0].idleInterval = 0s
# FTPserver Settings
**.FTPserver.numTcpApps = 1
**.FTPserver.tcpApp[0].typename = "TCPGenericSrvApp"
**.FTPserver.tcpApp[0].localAddress = "FTPserver"
**.FTPserver.tcpApp[0].localPort = 21
As you can see, I set "idleInterval" and "thinkTime" to zero. With these values, you can be sure that you have a constant transmission of FTP packets during your simulation time.
I am new to OMNeT++ and INET.
I am modifying the network TsnLinearNetwork, which is part of the INET library and looks like the following:
Client <-------> Switch <-------> Server
The client sends a continuous packet stream which the switch forwards to the server.
I am trying to set up a PeriodicGate that changes its state, so that packets do not get forwarded while the gate is closed.
During the first second the gate should be closed, and during the second second it should be open.
The gate state does change, as I configured in the omnetpp.ini file.
Despite that, packets are forwarded even while the gate is in the closed state.
I played around with the omnetpp.ini parameters without any success.
I expect packets to be discarded while the periodic gate is closed.
As the documentation points out:
PeriodicGate
This module allows or forbids packets to pass through depending on whether the gate is open or closed. The gate is open and closed according to the list of change times periodically.
So here are my questions:
Why are packets forwarded even when the gate state is closed?
How can I achieve the expected behaviour?
omnetpp.ini
[General]
[simpleStart03]
network = simpleStart
sim-time-limit = 2s
*.client.numApps = 1
*.client.app[*].typename = "UdpSourceApp"
*.client.app[0].display-name = "random traffic"
*.client.app[*].io.destAddress = "server"
*.client.app[0].io.destPort = 1000
*.client.app[0].source.packetLength = 1000B
*.client.app[0].source.productionInterval = 500us
*.client.hasOutgoingStreams = true
*.client.bridging.streamIdentifier.identifier.mapping = [{stream: "random traffic"}]
*.client.bridging.streamCoder.encoder.mapping = [{stream: "random traffic", pcp: 0}]
*.server.numApps = 1
*.server.app[*].typename = "UdpSinkApp"
*.server.app[0].io.localPort = 1000
*.switch.bridging.streamCoder.decoder.mapping = [{pcp: 0, stream: "random traffic"}]
*.switch.hasIngressTrafficFiltering = true
*.switch.bridging.streamFilter.ingress.numGates = 1
*.switch.bridging.streamFilter.ingress.numMeters = 1
*.switch.bridging.streamFilter.ingress.numStreams = 1
*.switch.bridging.streamFilter.ingress.classifier.mapping = {"random traffic": 0}
*.switch.bridging.streamFilter.ingress.meter[0].display-name = "random traffic"
*.switch.bridging.streamFilter.ingress.meter[*].typename = "SingleRateTwoColorMeter"
*.switch.bridging.streamFilter.ingress.meter[0].committedInformationRate = 40Mbps
*.switch.bridging.streamFilter.ingress.meter[0].committedBurstSize = 10kB
*.switch.bridging.streamFilter.ingress.gate[*].typename = "PeriodicGate"
*.switch.bridging.streamFilter.ingress.gate[0].display-name = "random traffic"
*.switch.bridging.streamFilter.ingress.gate[0].initiallyOpen = false
*.switch.bridging.streamFilter.ingress.gate[0].durations = [1s,1s]
*.switch.bridging.streamFilter.ingress.gate[*].initiallyOpen = false
omnetpp.ned
import inet.networks.tsn.TsnLinearNetwork;
network simpleStart extends TsnLinearNetwork
{
}
OMNeT++ version: 6.0.1
INET version: 4.4 (inet4.4)
This issue was confirmed as a bug:
https://github.com/inet-framework/inet/issues/830
The bug is that the classifier doesn't check whether the packet can actually be pushed to the selected gate index. If the selected branch has back pressure, the default branch should be selected instead; if the default branch also has back pressure, an error should be raised.
One commit fixes the issue and another commit changes the order of the submodules.
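To make the corrected selection logic concrete, here is a minimal, self-contained C++ sketch of the behaviour described above. It only models the idea; it is not the actual INET source, and names such as selectBranch and hasBackPressure are made up for this example:

#include <cstdio>
#include <stdexcept>
#include <vector>

// Toy model of the fix: each branch either accepts packets or exerts back pressure.
struct Branch { bool hasBackPressure; };

int selectBranch(const std::vector<Branch>& branches, int classified, int fallback)
{
    if (!branches[classified].hasBackPressure)
        return classified; // the branch chosen by the classifier can take the packet
    if (!branches[fallback].hasBackPressure)
        return fallback; // otherwise fall back to the default branch
    throw std::runtime_error("no branch can accept the packet"); // both blocked: raise an error
}

int main()
{
    std::vector<Branch> branches = {{true}, {false}}; // branch 0 blocked (gate closed)
    std::printf("packet goes to branch %d\n", selectBranch(branches, 0, 1));
    return 0;
}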
I am running LEACH protocol simulations in Castalia (OMNeT++) with the following simulation parameters:
sim-time-limit = 100s
SN.field_x = 70
SN.field_y = 70
SN.numNodes = 10
SN.deployment = "[1..9]->uniform"
SN.node[*].Communication.RoutingProtocolName = "LeachRouting"
SN.node[*].Communication.Routing.netBufferSize = 1000
SN.node[0].Communication.Routing.isSink = true
SN.node[*].Communication.Routing.slotLength = 0.2
SN.node[*].Communication.Routing.roundLength = 20s
SN.node[*].Communication.Routing.percentage = 0.05
SN.node[*].Communication.Routing.powersConfig = xmldoc("powersConfig.xml")
SN.node[*].ApplicationName = "ThroughputTest"
SN.node[*].Application.packet_rate = 1
SN.node[*].Application.constantDataPayload = 200
After running simulations, I checked the Castalia trace file and found the following errors:
SN.node[1].Communication.Radio Failed packet (WC_SIGNAL_START) from node 6, radio not in RX state
SN.node[1].Communication.Radio Failed packet (WC_SIGNAL_END) from node 6, NO interference
Do these errors occur because of the simulation parameters, or is there some other reason?
The messages you see are not errors per se; they could be normal behaviour. They just tell you that when a packet from node 6 arrived at node 1, node 1 did not have its radio in RX mode (listening), so it could not receive the packet.
This is a problem only if you lose most of your info-carrying packets, or you have no way to recover from such losses. You do not say whether this is the case.
The MAC plays a crucial role here: it is the MAC that puts the radio in RX, TX, or Sleep mode. The MAC is absent from your list of simulation parameters. If we assume you use the default, that is BypassMAC, which never puts the radio to Sleep. In that case, the only way for this message to appear is for node 1 to be transmitting at the same time it receives the packet from node 6.
These are normal messages, not errors. You can check Radio.cc to see why they are generated and adjust your code accordingly.
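If you want to control this explicitly, select a MAC in the same parameter style as your configuration above. A minimal sketch, assuming Castalia's bundled TunableMAC and its dutyCycle parameter (verify both names against your Castalia version):

# explicitly select a MAC so the radio's RX/TX/Sleep states are managed
SN.node[*].Communication.MACProtocolName = "TunableMAC"
# the radio listens 10% of the time and sleeps otherwise (assumed parameter name)
SN.node[*].Communication.MAC.dutyCycle = 0.1

With a duty-cycling MAC you should expect even more of these "radio not in RX state" messages; whether that matters depends on whether your packets are eventually delivered.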
I actually have two questions. My first question is: how do I make HDFS close the file (for example .123456789.tmp) after the entire file has been flushed by the Flume agent?
In fact, the file is never closed until I force the Flume agent to stop.
I believe there is a way using the following four parameters:
hdfs.rollSize = 0
hdfs.rollCount =0
hdfs.rollInterval = 0
hdfs.batchsize = 1000000
My second question is that my Flume agent receives files from an SFTP server, and I need to keep each original file name in HDFS. It works fine with the spooldir source type, but not with SFTP. Any ideas?
My configuration file for the Flume agent is as follows:
agent.sources = r1
agent.channels = c1
agent.sinks = k
# configure ftp source
agent.sources.r1.type = org.keedio.flume.source.mra.source.Source
agent.sources.r1.client.source = sftp
agent.sources.r1.name.server = ip
agent.sources.r1.user = user
agent.sources.r1.password = secret
agent.sources.r1.port = 22
agent.sources.r1.knownHosts = ~/.ssh/known_hosts
agent.sources.r1.work.dir = /DATA/test/flumrFTP
agent.sources.r1.fileHeader = true
agent.sources.r1.basenameHeader = true
agent.sources.r1.inputCharset = ISO-8859-1
#agent.sources.r1.batchSize = 1000
agent.sources.r1.flushlines = true
# configure sink k
agent.sinks.k.type = hdfs
agent.sinks.k.hdfs.path = hdfs://hostname:8000/user/admin/DATA/import_flume/
agent.sinks.k.hdfs.filePrefix = %{basename}
agent.sinks.k.hdfs.rollCount = 0
agent.sinks.k.hdfs.rollInterval = 0
agent.sinks.k.hdfs.rollSize = 0
agent.sinks.k.hdfs.useLocalTimeStamp = true
agent.sinks.k.hdfs.batchsize = 1000000
agent.sinks.k.hdfs.fileType = DataStream
# use a channel which buffers events in memory
agent.channels.c1.type = memory
agent.channels.c1.capacity = 1000000
agent.channels.c1.transactionCapacity = 1000000
agent.sources.r1.channels = c1
agent.sinks.k.channel = c1
Try setting the hdfs.rollInterval parameter: it is the number of seconds to wait before rolling the current file.
This setting closes the file after the number of seconds you set. I set mine to 200 seconds, and I am loading smaller files.
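A minimal sketch for your sink (the 200-second value is just what worked for me; adjust it to your file sizes):

# close (roll) the current file 200 seconds after it was opened
agent.sinks.k.hdfs.rollInterval = 200
# keep size- and count-based rolling disabled so only the timer applies
agent.sinks.k.hdfs.rollSize = 0
agent.sinks.k.hdfs.rollCount = 0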
I am trying to store some large files on S3 using the ruby aws:s3 gem:
S3Object.store("video.mp4", open(file), 'bucket', :access => :public_read)
For files of around 100 MB everything is great, but with files over 200 MB I get a "Connection reset by peer" error in the log.
Has anyone come across this weirdness? From the web, it seems to be an issue with large files, but I have not yet come across a definitive solution.
I am using Ubuntu.
EDIT:
This seems to be a Linux issue as suggested here.
No idea where the original problem might be, but as a workaround you could try a multipart upload. The sketch below assumes the aws-sdk v1 API and that object is the AWS::S3::S3Object for your bucket and key:
filename = "video.mp4"
min_chunk_size = 5 * 1024 * 1024 # S3 minimum chunk size (5Mb)
#object.multipart_upload do |upload|
io = File.open(filename)
parts = []
bufsize = (io.size > 2 * min_chunk_size) ? min_chunk_size : io.size
while buf = io.read(bufsize)
md5 = Digest::MD5.base64digest(buf)
part = upload.add_part(buf)
parts << part
if (io.size - (io.pos + bufsize)) < bufsize
bufsize = (io.size - io.pos) if (io.size - io.pos) > 0
end
end
upload.complete(parts)
end
S3 multipart upload is a little tricky, as each part (except the last) must be at least 5 MB, but the code above takes care of that.
Maybe I've gotten my sockets programming way mixed up, but shouldn't something like this work?
srv = TCPServer.open(3333)
client = srv.accept
data = ""
while (tmp = client.recv(10))
data += tmp
end
I've tried pretty much every other method of "getting" data from the client TCPSocket, but all of them hang and never break out of the loop (getc, gets, read, etc). I feel like I'm forgetting something. What am I missing?
In order for the server to be well written, you will need to do one of the following:
Know in advance the amount of data that will be communicated: in this case you can use read(size) instead of recv(size). read blocks until the total number of bytes has been received.
Have a termination sequence: in this case you keep looping on recv until you receive a sequence of bytes indicating that the communication is over.
Have the client close the socket after finishing the communication: in this case read will return with partial data or with 0, and recv will return zero-length data (data.empty? == true).
Define a communication timeout: you can use select to get a timeout when no communication has happened for a certain period of time, in which case you close the socket and assume all data was communicated (see the sketch below).
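For example, here is a minimal sketch of the timeout approach; the 5-second timeout and the 10-byte reads are arbitrary choices:

require 'socket'

srv = TCPServer.open(3333)
client = srv.accept
data = ""
loop do
  # wait up to 5 seconds for the socket to become readable
  ready = IO.select([client], nil, nil, 5)
  break if ready.nil? # timed out: assume the client is done
  tmp = client.recv(10)
  break if tmp.empty? # an empty string means the peer closed the socket
  data += tmp
end
client.close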
Hope this helps.
Hmm, I keep doing this on Stack Overflow (answering my own questions). Maybe it will help somebody else. I found an easier way to do what I was trying to do:
srv = TCPServer.open(3333)
client = srv.accept
data = ""
recv_length = 56
while (tmp = client.recv(recv_length))
  data += tmp
  # a read shorter than recv_length marks the end of the message
  # (note: TCP may split a send, so this is not fully reliable)
  break if tmp.length < recv_length
end
There is nothing that can be written to the socket so that client.recv(10) returns nil or false.
Try:
srv = TCPServer.open(3333)
client = srv.accept
data = ""
# stop on an explicit QUIT sentinel; also stop on the empty string
# that recv returns once the peer closes the connection
while (tmp = client.recv(10)) && tmp != 'QUIT' && !tmp.empty?
  data += tmp
end