I am new to Cocoa, but I managed to get a connection to an FTP server up and running, and I've set up an event handler on the NSInputStream iStream that reports every response (which also works).
All I manage to get back is the hello message and then, after 60 seconds, a "connection timeout, closing control connection" message.
EDIT: I guess my question is: without closing and reopening the stream, what would be a non-terminating way of flushing the output stream?
After searching Stack Overflow and finding a lot of NSOutputStream write problems (e.g. How to use NSOutputStream's write message?) and a lot of confusion in my Google hits, I figured I'd try asking my own question:
I've tried reading the NSOutputStream documentation on developer.apple.com, but it seems almost impossible for me to send some data (in this case just a string) to the "connection" via the NSOutputStream oStream.
- (IBAction) send_something: sender
{
    const char *send_command_char = [@"USER foo" UTF8String];
    send_command_buffer = [NSMutableData dataWithBytes:send_command_char length:strlen(send_command_char) + 1];
    uint8_t *readBytes = (uint8_t *)[send_command_buffer mutableBytes];
    NSInteger byteIndex = 0;
    readBytes += byteIndex;
    int data_len = [send_command_buffer length];
    unsigned int len = ((data_len - byteIndex >= 1024) ?
                        1024 : (data_len - byteIndex));
    uint8_t buf[len];
    (void)memcpy(buf, readBytes, len);
    len = [oStream write:(const uint8_t *)buf maxLength:len];
    byteIndex += len;
}
The above doesn't seem to result in any usable events. Putting it under NSStreamEventHasSpaceAvailable sometimes gives a response if I spam the FTP server by continually creating new connection instances and sending a command whenever oStream has free space. In other words, nothing works "right", and I'm still unclear on how to properly send a command over the connection. Should I open -> write -> close every time I want to write to oStream (and thus to the FTP server), and can I then expect a reply (a hasBytesAvailable event on iStream)?
EDIT: It doesn't look like it, no.
For some reason I find it very difficult to find any clear tutorials on this matter. It seems like there are more than a few people in the same position as me, unclear on how to use oStream's write.
Please! Any little bit that can help clear this up is greatly appreciated!
If needed I can write the rest of the code.
Chuck
Okay, so 10 hours 28 views and no answers/comments, but that's OK, because I just solved it with some good help from a very very friendly irssi coder (no butt licking intended ;)).
He proposed that I std::endl it (newline + flush), so I tried simply appending the newline character (\n, 0x0A) to the command, and it worked perfectly! In hindsight it makes sense: FTP control commands are line-terminated, so the server was simply waiting for the end of the line before acting on the command; there was never anything that needed "flushing" on the stream side.
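In code terms the only change is to the command string itself (the write call from the question stays as it is). A small sketch of the terminated command; RFC 959 actually specifies CRLF ("\r\n") as the line ending, although the bare "\n" was enough for my server:

    // Terminate the command line before writing it; "\r\n" is the spec-correct ending.
    const char *send_command_char = "USER foo\r\n";
    // And note: no "+ 1" on the length - the trailing NUL byte is not part of the command.
    size_t send_command_len = strlen(send_command_char);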
Related
I have a motor controlled by an Arduino; the hardware is already set up, so I don't want to change the microcontroller. I need to give the motor one second to move from each point to the next: if the move finishes sooner, the code should wait out the rest of that second before continuing with the rest of the program.
The code below is part of the whole program. It freezes and stops working after about 40 hours. Please advise how I can prevent that. I know that the millis() function is part of the problem, but I don't know the best way to replace it or work around it.
unsigned long firsttime = 0;
unsigned long secondtime = 0;

void loop() {
    ...
    firsttime = millis();
    myStepper.step(RNum);
    secondtime = 1000 - millis() + firsttime;
    delay(secondtime);
    ...
}
Thanks
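One thing worth checking: secondtime is an unsigned long, so if myStepper.step(RNum) ever takes longer than 1000 ms, the expression 1000 - millis() + firsttime wraps around to a huge value and delay() then waits for what looks like forever. A minimal overflow-safe sketch of the same "wait out the rest of the second" idea, keeping myStepper and RNum from the question:

    unsigned long firsttime = 0;

    void loop() {
        firsttime = millis();
        myStepper.step(RNum);

        // Unsigned subtraction gives the elapsed time correctly even when
        // millis() itself wraps around (after ~49.7 days).
        unsigned long elapsed = millis() - firsttime;
        if (elapsed < 1000UL) {
            delay(1000UL - elapsed);   // only wait out the remainder of the second
        }
    }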
I'm trying to spoof keystrokes; to be a bit more precise: I'm replaying a number of keystrokes which should all get sent at a certain time - sometimes several at the same time (or at least as close together as reasonably possible).
Implementing this using XTestFakeKeyEvent, I've come across a problem. While what I've written so far mostly works as it is intended and sends the events at the correct time, sometimes a number of them will fail. XTestFakeKeyEvent never returns zero (which would indicate failure), but these events never seem to reach the application I'm trying to send them to. I suspect that this might be due to the frequency of calls being too high (sometimes 100+/second) as it looks like it's more prone to fail when there's a large number of keystrokes/second.
A little program to illustrate what I'm doing, incomplete and without error checks for the sake of conciseness:
// #includes ...
struct action {
int time; // Time where this should be executed.
int down; // Keydown or keyup?
int code; // The VK to simulate the event for.
};
Display *display;
int nactions; // actions array length.
struct action *actions; // Array of actions we'll want to "execute".
int main(void)
{
    display = XOpenDisplay(NULL);
    nactions = get_actions(&actions);

    int cur_time;
    int cur_i = 0;
    struct action *cur_action;

    // While there's still actions to execute.
    while (cur_i < nactions) {
        cur_time = get_time();

        // For each action that is (over)due (bounds-checked so we don't
        // read past the end of the actions array).
        while (cur_i < nactions &&
               (cur_action = actions + cur_i)->time <= cur_time) {
            cur_i++;
            XTestFakeKeyEvent(display, cur_action->code,
                              cur_action->down, CurrentTime);
            XFlush(display);
        }

        // Sleep for 1ms.
        nanosleep((struct timespec[]){{0, 1000000L}}, NULL);
    }
}
I realize that the code above is very specific to my case, but I suspect that this is a broader problem - which is also why I'm asking this here.
Is there a limit to how often you can/should flush XEvents? Could the application I'm sending this to be the issue, maybe failing to read them quickly enough?
It's been a little while but after some tinkering, it turned out that my delay between key down and key up was simply too low. After setting it to 15ms the application registered the actions as keystrokes properly and (still) with very high accuracy.
I feel a little silly in retrospect, but I do feel like this might be something others could stumble over as well.
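For anyone who ends up in the same spot, a minimal sketch of the pattern that ended up working: send the key-down, give the client a moment (about 15 ms in my case), then send the key-up. The helper and the exact delay value are placeholders, not the code from my replayer:

    #include <X11/Xlib.h>
    #include <X11/extensions/XTest.h>
    #include <time.h>

    // Press and release a single keycode with a minimum gap between the down
    // and up events; ~15 ms was enough for the target application to register it.
    static void tap_key(Display *dpy, unsigned int keycode, long gap_ms)
    {
        struct timespec gap = { 0, gap_ms * 1000000L };

        XTestFakeKeyEvent(dpy, keycode, True, CurrentTime);   // key down
        XFlush(dpy);
        nanosleep(&gap, NULL);                                // give the client time to see the press
        XTestFakeKeyEvent(dpy, keycode, False, CurrentTime);  // key up
        XFlush(dpy);
    }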
I'm trying to track down a memory leak that I think has to do with how I am using MS Bond. Specifically, the issue is likely on the subscriber side due to 'new' ArraySegment and InputBuffer objects being generated on every iteration inside a while loop.
On the publisher side, the code roughly looks as follows and I don't think there is a problem here:
open ZeroMQ
open Bond
open Bond.Protocols
open Bond.IO.Unsafe
let bond = new BondStructs.SomeStruct()
let output = new OutputBuffer()
let writer = new CompactBinaryWriter<OutputBuffer>(output)
let ctx = new ZContext()
let sock = new ZSocket(ctx, ZSocketType.PUB)
sock.Bind "tcp://localhost:12345"
while true do
    // do work, populate bond structure
    Marshal.To(writer, bond)
    use frame = new ZFrame(output.Data.Array, output.Data.Offset, output.Data.Count)
    sock.Send frame
    output.Position <- 0L
The issue, I think, is on the subscriber side: new ArraySegment and InputBuffer objects are generated on every iteration, and somehow the GC is unable to clean them up properly.
open System
open ZeroMQ
open Bond
open Bond.Protocols
open Bond.IO.Unsafe

let ctx = new ZContext()
let sock = new ZSocket(ctx, ZSocketType.SUB)
sock.SubscribeAll()
sock.SetOption(ZSocketOption.CONFLATE, 1) |> ignore
sock.Connect("tcp://localhost:12345")

while true do
    let zf = sock.ReceiveFrame()
    let segment = new ArraySegment<byte>(zf.Read())
    let input = new InputBuffer(segment)
    let msg = Unmarshal<Record>.From(input)
    zf.Close()
    // do work
Is there a way for me to push ArraySegment and InputBuffer lines above the while loop and reuse those objects within the loop?
If the resulting msg instances are stored anywhere after processing the one request, that can keep the buffer alive.
The msg instances can have references to the underlying InputBuffer or ArraySegment when bonded- or blob-type fields are used.
Failing that, I've had luck with the WinDBG extension command !gcroot to figure out what's keeping something alive longer than I expected. If !gcroot provides more insights, but not a solution, please edit those details into the question.
I'm developing an application that needs to publish a media stream to an rtmp "ingestion" url (as used in YouTube Live, or as input to Wowza Streaming Engine, etc), and I'm using the ffmpeg library (programmatically, from C/C++, not the command line tool) to handle the rtmp layer. I've got a working version ready, but am seeing some problems when streaming higher bandwidth streams to servers with worse ping. The problem exists both when using the ffmpeg "native"/builtin rtmp implementation and the librtmp implementation.
When streaming to a local target server with low ping through a good network (specifically, a local Wowza server), my code has so far handled every stream I've thrown at it and managed to upload everything in real time - which is important, since this is meant exclusively for live streams.
However, when streaming to a remote server with a worse ping (e.g. the youtube ingestion urls on a.rtmp.youtube.com, which for me have 50+ms pings), lower bandwidth streams work fine, but with higher bandwidth streams the network is underutilized - for example, for a 400kB/s stream, I'm only seeing ~140kB/s network usage, with a lot of frames getting delayed/dropped, depending on the strategy I'm using to handle network pushback.
Now, I know this is not a problem with the network connection to the target server, because I can successfully upload the stream in real time when using the ffmpeg command line tool to the same target server or using my code to stream to a local Wowza server which then forwards the stream to the youtube ingestion point.
So the network connection is not the problem and the issue seems to lie with my code.
I've timed various parts of my code and found that when the problem appears, calls to av_write_frame / av_interleaved_write_frame (I never mix & match them, I am always using one version consistently in any specific build, it's just that I've experimented with both to see if there is any difference) sometimes take a really long time - I've seen those calls sometimes take up to 500-1000ms, though the average "bad case" is in the 50-100ms range. Not all calls to them take this long, most return instantly, but the average time spent in these calls grows bigger than the average frame duration, so I'm not getting a real time upload anymore.
The main suspect, it seems to me, could be the rtmp Acknowledgement Window mechanism, where a sender of data waits for a confirmation of receipt after sending every N bytes, before sending any more data - this would explain the available network bandwidth not being fully used, since the client would simply sit there and wait for a response (which takes a longer time because of the lower ping), instead of using the available bandwidth. Though I haven't looked at ffmpeg's rtmp/librtmp code to see if it actually implements this kind of throttling, so it could be something else entirely.
The full code of the application is too much to post here, but here are some important snippets:
Format context creation:
const int nAVFormatContextCreateError = avformat_alloc_output_context2(&m_pAVFormatContext, nullptr, "flv", m_sOutputUrl.c_str());
Stream creation:
m_pVideoAVStream = avformat_new_stream(m_pAVFormatContext, nullptr);
m_pVideoAVStream->id = m_pAVFormatContext->nb_streams - 1;
m_pAudioAVStream = avformat_new_stream(m_pAVFormatContext, nullptr);
m_pAudioAVStream->id = m_pAVFormatContext->nb_streams - 1;
Video stream setup:
m_pVideoAVStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
m_pVideoAVStream->codecpar->codec_id = AV_CODEC_ID_H264;
m_pVideoAVStream->codecpar->width = nWidth;
m_pVideoAVStream->codecpar->height = nHeight;
m_pVideoAVStream->codecpar->format = AV_PIX_FMT_YUV420P;
m_pVideoAVStream->codecpar->bit_rate = 10 * 1000 * 1000;
m_pVideoAVStream->time_base = AVRational { 1, 1000 };
m_pVideoAVStream->codecpar->extradata_size = int(nTotalSizeRequired);
m_pVideoAVStream->codecpar->extradata = (uint8_t*)av_malloc(m_pVideoAVStream->codecpar->extradata_size + AV_INPUT_BUFFER_PADDING_SIZE);
// Fill in the extradata here - I'm sure I'm doing that correctly.
Audio stream setup:
m_pAudioAVStream->time_base = AVRational { 1, 1000 };
// Let's leave creation of m_pAudioCodecContext out of the scope of this question, I'm quite sure everything is done right there.
const int nAudioCodecCopyParamsError = avcodec_parameters_from_context(m_pAudioAVStream->codecpar, m_pAudioCodecContext);
Opening the connection:
const int nAVioOpenError = avio_open2(&m_pAVFormatContext->pb, m_sOutputUrl.c_str(), AVIO_FLAG_WRITE);
Starting the stream:
AVDictionary * pOptions = nullptr;
const int nWriteHeaderError = avformat_write_header(m_pAVFormatContext, &pOptions);
Sending a video frame:
AVPacket pkt = { 0 };
av_init_packet(&pkt);
pkt.dts = nTimestamp;
pkt.pts = nTimestamp;
pkt.duration = nDuration; // I know that I sometimes have the wrong duration, but I don't think that's the issue.
pkt.data = pFrameData;
pkt.size = pFrameDataSize;
pkt.flags = bKeyframe ? AV_PKT_FLAG_KEY : 0;
pkt.stream_index = m_pVideoAVStream->index;
const int nWriteFrameError = av_write_frame(m_pAVFormatContext, &pkt); // This is where too much time is spent.
Sending an audio frame:
AVPacket pkt = { 0 };
av_init_packet(&pkt);
pkt.pts = m_nTimestampMs;
pkt.dts = m_nTimestampMs;
pkt.duration = m_nDurationMs;
pkt.stream_index = m_pAudioAVStream->index;
const int nWriteFrameError = av_write_frame(m_pAVFormatContext, &pkt);
Any ideas? Am I on the right track with thinking about the Acknowledgement Window? Am I doing something else completely wrong?
I don't think this explains everything, but, just in case, for someone in a similar situation, the fix/workaround I found was:
1) Build ffmpeg with the librtmp implementation of the rtmp protocol.
2) Build ffmpeg with --enable-network; it adds a couple of features to the librtmp protocol.
3) Pass the "rtmp_buffer_size" parameter to avio_open2 and increase its value to a satisfactory one (see the sketch below).
I can't give you a full step-by-step explanation of what was going wrong, but this fixed at least the symptom that was causing me problems.
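For reference, here's a sketch of how such an option can be handed to avio_open2 through an AVDictionary. The "rtmp_buffer_size" key is just the name from point 3 above, and the value is only a placeholder to tune for your bitrate, not a recommendation:

    AVDictionary * pAVioOptions = nullptr;
    // The key comes from the workaround above; the value is a placeholder.
    av_dict_set(&pAVioOptions, "rtmp_buffer_size", "4000000", 0);

    const int nAVioOpenError = avio_open2(&m_pAVFormatContext->pb, m_sOutputUrl.c_str(),
                                          AVIO_FLAG_WRITE, nullptr, &pAVioOptions);

    // Any entries still left in the dictionary were not consumed by the protocol.
    av_dict_free(&pAVioOptions);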
I'd like to learn whether a simple CoreAudio AudioUnit (of subtype kAudioUnitSubType_HALOutput, for example) can be parametrically controlled by a MIDI keyboard, say by translating the MIDI note number into an (interpolating) oscillator frequency. Controlling such a parameter by means of a GUI element, on the other hand, works like a dream.
I haven't found a single example of such code anywhere on the web.
I don't need SinSynth, Sampler, MusicDevice, SoundFonts, Midi files, GM, ADSRs, plug-in level of functionality, etc.
I just need a plain piece of information or a hint on how data from a MIDI packet, read by means of a midiReadProc, can get passed to an audio render callback, much the way slider values can. With MIDI there seems to be a threading issue that I found no documentation about.
I'd prefer to do it in CoreAudio API, if possible, I'm sure it must be.
On the other hand, using Apple pre-built music instrument devices would lead me into a completely wrong direction.
Thanks in advance,
CA
It seems you want to control some parameters or properties of an AudioUnit using a MIDI keyboard. In that case, all you need is to take the MIDIPacket's data field.
You can look up what each byte means here.
After that, depending on the value of the byte you need, you set the property or parameter value.
Here's a minimalistic answer to the question, based on what I've learned and made work in the meantime. It's a matter of making a midiReadProc generate values which an audioRenderProc can accept as parameters. Please note that this works in stand-alone apps; for writing AU plug-ins I recommend understanding and using the CoreAudioUtilityClasses provided by Apple.
A simplest example of createMidi in C:
//these have to be declared somewhere
MIDIClientRef midiclient;
MIDIPortRef midiin;
void createMIDI (void)
{
    //create MIDI input and client - - - - - - - - - - -
    midiclient = 0;
    CheckError(MIDIClientCreate(CFSTR("MIDI_Client"),
                                NULL,
                                /*midiClientNotifyRefCon*/NULL,
                                &midiclient),
               "MIDI Client Create Error\n");
    CheckError(MIDIInputPortCreate(midiclient,
                                   CFSTR("MIDI_Input"),
                                   midiReadProc,
                                   NULL,
                                   &midiin),
               "MIDI Port Create Error\n");

    //connect MIDI - - - - - - - - - - - - - - - - - - -
    ItemCount mSrcs = MIDIGetNumberOfSources();
    printf("MIDI Sources: %ld\n", (long)mSrcs);
    ItemCount iSrc;
    for (iSrc = 0; iSrc < mSrcs; iSrc++) {
        MIDIEndpointRef src = MIDIGetSource(iSrc);
        MIDIPortConnectSource(midiin, src, NULL);
    }
}
CheckError( ) is a generic utility function modeled after "Learning Core Audio", by C.Adamson & K.Avila, ISBN 0-321-63684-8...
...and a plain-C midiReadProc template. Please note that many manufacturers of MIDI hardware don't implement the standardized note-off event, but rather a "hacked" version consisting of a zero-velocity note-on, allegedly to improve MIDI latency, and they hardly document it. So one has to check against both scenarios:
void midiReadProc(const MIDIPacketList *packetList,
                  void *readProcRefCon,
                  void *srcConnRefCon)
{
    Byte note;
    Byte velocity;
    MIDIPacket *packet = (MIDIPacket *)packetList->packet;
    int count = packetList->numPackets;

    for (int k = 0; k < count; k++) {
        Byte midiStatus = packet->data[0];
        Byte midiChannel = midiStatus & 0x0F;
        Byte midiCommand = midiStatus >> 4;

        if ((midiCommand == 0x08) || (midiCommand == 0x09)) {
            if (midiCommand == 0x09) {
                note = packet->data[1] & 0x7F;
                velocity = packet->data[2] & 0x7F;
                if (velocity == 0x0) { //"hacked" note-off
                    ; //do something
                } else { //note on
                    ; //do something
                }
            }
            if (midiCommand == 0x08) { //proper note-off
                ; //do something
            }
        } else {
            ; //do something else
        }
        packet = MIDIPacketNext(packet);
    } //end for (k = 0; ...;...)
}
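To make the hand-off to the render callback concrete, here is a minimal sketch of one way the two can communicate, assuming a monophonic toy example; all names are mine, not from any Apple header. The MIDI thread writes a single aligned 32-bit value and the render thread reads it, which sidesteps the worst of the threading issue for a demo, although a lock-free FIFO is the robust solution:

    #include <stdint.h>
    #include <math.h>

    // Shared state: written by midiReadProc (MIDI thread), read by the render callback.
    // A single aligned 32-bit store/load is atomic on the CPUs macOS runs on, which is
    // good enough for a toy example; use a lock-free ring buffer for anything serious.
    static volatile int32_t gCurrentNote = -1;    // -1 means "no note held"

    // In midiReadProc: on note-on do  gCurrentNote = (int32_t)note;
    //                  on note-off do gCurrentNote = -1;

    // In the render callback, before filling the buffer:
    static double currentFrequency(void)
    {
        int32_t note = gCurrentNote;                    // one read of the shared value
        if (note < 0) return 0.0;                       // silence
        return 440.0 * pow(2.0, (note - 69) / 12.0);    // equal-tempered, A4 = MIDI note 69
    }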
Everything else is a matter of common good programming practice.