I have a script that looks something like:
while true; do
    read -t10 -d$'\n' input_from_serial_device </dev/ttyS0
    # do some costly processing on the string
done
The problem is that it will miss the next input from the serial device because it is burning CPU cycles doing the costly string processing.
I thought I could fix this by using a pipe, on the principle that bash will buffer the input between the two processes:
( while true; do
    read -d$'\n' input_from_serial_device </dev/ttyS0
    echo $input_from_serial_device
done ) | ( while true; do
    read -t10 input_from_first_process
    # costly string processing
done )
First, I want to check that I've understood pipes correctly and that this will indeed buffer the input between the two processes as I intended. Is this idea correct?
Secondly, if I get the input I'm looking for in the second process, is there a way to immediately kill both processes, rather than exiting from the second and waiting for the next input before exiting the first?
Finally, I realise bash isn't the best way to do this and I'm currently working on a C program, but I'd quite like to get this working as an intermediate solution.
Thank you!
The problem isn't the pipe. It's the serial device.
When you write
while true; do
    read -t10 -d$'\n' input_from_serial_device </dev/ttyS0
    # use a lot of time
done
the consequence is that the serial device is opened, a line is read from it, and it is then closed. It is not opened again until the "# use a lot of time" step is done. While a serial device is not open, incoming serial input is thrown away.
If the input is truly coming in faster than it can be processed, then buffering isn't enough. You'll have to throw input away. If, on the other hand, it's dribbling in at an average speed which allows for processing, then you should be able to achieve what you want by keeping the serial device open:
while true; do
    read -t10 -r input_from_serial_device
    # process input_from_serial_device
done < /dev/ttyS0
Note: I added -r -- almost certainly necessary -- to your read call, and removed -d$'\n', because that is the default.
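If you still want the costly processing to overlap with reading, the same principle (open the device once) can be combined with your pipe idea, and your second question, stopping both sides at once, can be handled by signalling the whole process group. A rough, untested sketch; WANTED stands in for whatever input you are waiting for:
( while true; do
    IFS= read -r -t10 line || continue   # retry after a 10-second timeout
    printf '%s\n' "$line"
done < /dev/ttyS0 ) |
while IFS= read -r input_from_first_process; do
    # costly string processing here
    if [[ $input_from_first_process == *WANTED* ]]; then
        kill 0    # signal the whole process group: reader, consumer, and script
    fi
done
Because both loops run in the script's process group, kill 0 ends the reader immediately, rather than leaving it to die from SIGPIPE on its next write after the next input arrives.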
My program normally uses the controlling terminal to read input from the user.
// GetCtty gets the file descriptor of the controlling terminal.
func GetCtty() (*os.File, error) {
    return os.OpenFile("/dev/tty", os.O_RDONLY, 0)
}
I am currently constructing s := bufio.NewScanner(GetCtty()) several times during the program and reading the input with s.Scan() and s.Text(), which works nicely.
However, for testing I am simulating the following input on stdin to my CLI Go program:
echo -e "yes\nno\nyes\n" | app
This does not work correctly, because the first construction of s and its s.Scan() will already have buffered the other test inputs, which are then not available to a new scanner constructed later with bufio.NewScanner and its subsequent scan.
I am wondering how I can make sure that only one line is read from the stdin stream by s *bufio.Scanner, or how I can mock my input to the controlling terminal.
I had several guesses but I am not sure if they work:
Using only one bufio.Scanner in the whole program would be a solution, but I did not want to go this way...
Writing the buffered data back to GetCtty() with s.WriteTo(GetCtty()) (?) probably won't work, as the data would get appended to stdin instead of prepended?
Somehow read only a single line and do not consume more bytes; does that ultimately mean reading not in chunks but byte by byte (?)...
Use iotest.OneByteReader so the scanner reads one byte at a time and never consumes input past the line it returns:
f, _ := GetCtty() // handle the error in real code; iotest is "testing/iotest"
s := bufio.NewScanner(iotest.OneByteReader(f))
I have a requirement where many threads will call the same shell script to perform some work, and then will write their output (a single line of text) to a common text file.
Since many threads will try to write data to the same file, my question is whether Unix provides a default locking mechanism so that they cannot all write at the same time.
Performing a short single write to a file opened for append is mostly atomic; you can get away with it most of the time (depending on your filesystem). But if you want to be guaranteed that your writes won't interrupt each other, or to write arbitrarily long strings, or to perform multiple writes, or to perform a block of writes and be assured that their contents will be next to each other in the resulting file, then you'll want to lock.
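For the "get away with it most of the time" case, that simply means each writer performs exactly one short append per line, for example (a sketch; $result_line and the path are placeholders):
# one short append-mode write per line; mostly atomic for short lines, but not guaranteed
printf '%s\n' "$result_line" >> /path/to/shared-file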
While not part of POSIX (unlike the C library call for which it's named), the flock tool provides the ability to perform advisory locking ("advisory" -- as opposed to "mandatory" -- meaning that other potential writers need to voluntarily participate):
(
    flock -x 99 || exit # lock the file descriptor
    echo "content" >&99 # write content to that locked FD
) 99>>/path/to/shared-file
The use of file descriptor #99 is completely arbitrary -- any unused FD number can be chosen. Similarly, one can safely put the lock on a different file than the one to which content is written while the lock is held.
The advantage of this approach over several conventional mechanisms (such as using exclusive creation of a file or directory) is automatic unlock: If the subshell holding the file descriptor on which the lock is held exits for any reason, including a power failure or unexpected reboot, the lock will be automatically released.
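To illustrate the point about locking a different file, a minimal untested variant of the above keeps the lock on a dedicated lock file while the content goes to the shared file ($line and both paths are placeholders; every writer must agree on the same lock file):
(
    flock -x 200 || exit 1                          # wait for the advisory lock
    printf '%s\n' "$line" >> /path/to/shared-file   # append while holding it
) 200>/path/to/shared-file.lock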
my question is whether Unix provides a default locking mechanism so that they cannot all write at the same time.
In general, no. At least not something that's guaranteed to work. But there are other ways to solve your problem, such as lockfile, if you have it available:
Examples
Suppose you want to make sure that access to the file "important" is serialised, i.e., no more than one program or shell script should be allowed to access it. For simplicity's sake, let's suppose that it is a shell script. In this case you could solve it like this:
...
lockfile important.lock
...
access_"important"_to_your_hearts_content
...
rm -f important.lock
...
Now if all the scripts that access "important" follow this guideline, you will be assured that at most one script will be executing between the 'lockfile' and the 'rm' commands.
But there's actually a better way, if you can use C or C++: use the low-level open call to open the file in append mode, and call write() to write your data, with no locking necessary. Per the write() man page:
If the O_APPEND flag of the file status flags is set, the file offset shall be set to the end of the file prior to each write and no intervening file modification operation shall occur between changing the file offset and the write operation.
Like this:
#include <fcntl.h>    // open(), O_WRONLY, O_APPEND, O_CREAT
#include <string.h>   // strlen()
#include <unistd.h>   // write()

// process-wide global file descriptor
// (O_CREAT added so the 0600 mode applies if the file does not yet exist)
int outputFD = open( fileName, O_WRONLY | O_APPEND | O_CREAT, 0600 );
.
.
.
// write a string to the file
ssize_t writeToFile( const char *data )
{
    return( write( outputFD, data, strlen( data ) ) );
}
In practice, you can write anything to the file - it doesn't have to be a NUL-terminated character string.
That's supposed to be atomic on writes up to PIPE_BUF bytes, which is usually something like 512, 4096, or 5120. Some Linux filesystems apparently don't implement that properly, so you may in practice be limited to about 1K on those file systems.
I have a C program that has a scanf call followed by a read call. I want to feed both inputs using printf.
printf 10 | program_name doesn't work for some reason; scanf correctly picks up 10, but the read call defaults to " " and doesn't even ask for input.
I want to use printf twice, once to pass input to scanf and the second time to pass input to read. How can I do this?
As a terrible hack, you need to ensure that scanf's buffer is full. Something like:
{ printf 10; dd if=/dev/zero bs=4094 count=1;
echo This text will go to the read if bufsize is 4096; } | program_name
The technique here relies on scanf reading the first 4096 bytes to fill its buffer on its first read, leaving data in the pipe for the read to get. The main problem is that it is extremely fragile and requires intimate knowledge of the buffering used. Overall, this is a terrible idea, but not too much worse than calling read after calling scanf on the same file descriptor.
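If you do resort to this, it helps to measure the buffer size instead of guessing it. Assuming strace is available, the following (untested) shows how many bytes the first stdio read() requests; that count is what the dd padding has to fill:
printf '10\n' | strace -e trace=read program_name
# look at the first read(0, ..., N) call in the trace; N is the stdio buffer size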
I'm making an alarm clock and want to start at basically no volume and increase the volume by one 'notch'/value every 2 seconds until a certain predefined point where it stops increasing.
As of right now I'm using mplayer (it's a radio station, so I'm running mplayer http://66.225.205.192:80), but I don't care what I use (VLC, etc.).
My full code is:
while true; do
    mplayer http://66.225.205.192:80
    sleep 1
done
Googling for 'mplayer alarm clock' actually yields a lot of pages dealing with this problem, with solutions you can use right away, but let's give it a try anyway.
#!/bin/bash

{
    for ((volume = 0; volume <= 100; volume += 5)); do
        /usr/bin/aumix -v${volume} -w100 >/dev/null
        sleep 2
    done
} &

mplayer http://66.225.205.192:80
echo "good morning! :-)"
You need to install aumix, which is used here to change the volume (but you could use something else, of course). The block between { } gets run in the background. The aumix command sets the PCM volume to 100% and gradually raises the main volume in 5 percent increments every two seconds; once it hits 100%, the loop finishes and the background job exits.
I have never used aumix, and you might want to read its man page in case it does not work as expected (this is untested).
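If aumix is not available, ALSA's amixer can do the same ramp (equally untested here); the 'Master' control name is an assumption, so check amixer scontrols for the right one on your card:
for ((volume = 0; volume <= 100; volume += 5)); do
    amixer -q set Master "${volume}%"   # set the main volume to ${volume} percent
    sleep 2
done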
mplayer runs in the foreground until you quit it, upon which the script greets you with a warm welcome.
Does that get you started?
A Win32 application (the "server") is sending a continuous stream of data over a named pipe. GetNamedPipeInfo() tells me that input and output buffer sizes are automatically allocated as needed. The pipe is operating in byte mode (although it is sending data units that are bigger than 1 byte (doubles, to be precise)).
Now, my question is this: Can I somehow verify that my application (the "client") is not missing any data when reading from the pipe? I know that those read/write operations are buffered, but I suppose the buffers will not grow indefinitely if the client doesn't fetch the data quickly enough. How do I know if I missed something? Does the server (or the pipe?) silently discard data that is not read in time by the client?
BTW, can I rely on proper alignment of the data the client reads using ReadFile()? As far as I understand, ReadFile() may return with fewer bytes read than specified, i.e. NumberOfBytesRead <= NumberOfBytesToRead. Do I have to check every time that NumberOfBytesRead is a multiple of sizeof(double)?
The write operation will block if there is no more room in the pipe's buffers. This is from my (old) copy of the SDK manual:
When an application uses the WriteFile function to write to a pipe, the write operation may not finish if the pipe buffer is full. The write operation is completed when a read operation (using the ReadFile function) makes more buffer space available.
Sorry, didn't find out how to comment on your post, Neil.
The write operation will block if there is no more room in the pipe's buffers.
I just discovered that Sysinternals' FileMon can also monitor pipe operations. For testing purposes I connected the client to the named pipe and did no read operations, just waited. The server writes a few hundred kB to the pipe every 4-5 seconds, even though nobody is fetching the data from the pipe on the client side. No blocking write operation ... and so far no buffer-size limit seems to have been reached.
This is either a very big buffer ... or the server does some additional magic beyond just using WriteFile() and waiting for the client to read.