My program normally uses the controlling terminal to read input from the user.
// GetCtty opens the controlling terminal for reading and returns it as an *os.File.
func GetCtty() (*os.File, error) {
    return os.OpenFile("/dev/tty", os.O_RDONLY, 0)
}
I currently construct s := bufio.NewScanner(GetCtty()) several times during the program and read the input with s.Scan() and s.Text(), which works nicely.
However, for testing I simulate the following input on stdin to my CLI Go program:
echo -e "yes\nno\nyes\n" | app
This does not work correctly, because the first scanner's s.Scan() already buffers more of the test input than the single line it returns, so that input is no longer available to a scanner constructed later with bufio.NewScanner.
I am wondering how I can make sure that only one line is read from the stdin stream by s *bufio.Scanner or how I can mock my input to the controlling terminal.
I had several guesses but I am not sure if they work:
Using only one bufio.Scanner in the whole program would be a solution, but I did not want to go that way...
Writing the buffered data back to GetCtty() with s.WriteTo(GetCtty()) (?) probably won't work, as the data would get appended instead of prepended on stdin?
Somehow read only a single line without consuming more bytes; does that ultimately mean reading not in chunks but byte by byte (?)...
Use iotest.OneByteReader to wrap the terminal so the scanner's underlying reads never consume past the current line:
tty, err := GetCtty() // handle err
s := bufio.NewScanner(iotest.OneByteReader(tty))
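For illustration, here is a minimal sketch of that approach reading from stdin; the readLine helper and the three-line test input are assumptions based on the question, not part of the original answer:
package main

import (
    "bufio"
    "fmt"
    "io"
    "os"
    "testing/iotest"
)

// readLine reads exactly one line; because the underlying reads happen one
// byte at a time, nothing beyond the newline is consumed, so a scanner built
// later on the same stream still sees the remaining input.
func readLine(f *os.File) (string, error) {
    s := bufio.NewScanner(iotest.OneByteReader(f))
    if s.Scan() {
        return s.Text(), nil
    }
    if err := s.Err(); err != nil {
        return "", err
    }
    return "", io.EOF
}

func main() {
    // echo -e "yes\nno\nyes" | app
    for i := 0; i < 3; i++ {
        line, err := readLine(os.Stdin)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("got:", line)
    }
}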
I would like to parse, several times with gocal, data I retrieve through an HTTP call. Since I would like to avoid making the call for each parse, I would like to save this data and reuse it.
The Body I get from http.Get is of type io.ReadCloser. The gocal parser requires an io.Reader, so that works.
Since I can retrieve Body only once, I can save it with body, _ := io.ReadAll(get.Body), but then I do not know how to serve the []byte back as an io.Reader (to the gocal parser, several times, to account for different parsing conditions).
As you have figured out, http.Response.Body is exposed as an io.Reader. This reader is not reusable because it is connected straight to the underlying connection* (which might be TCP, UDP, or any other stream-like reader under the net package).
Once you have read the bytes out of the connection, new bytes are sitting there waiting for another read.
In order to save the response you do indeed need to drain it first and save that result in a variable.
body, _ := io.ReadAll(get.Body)
To reuse that slice of bytes many times, the standard library provides bytes.NewReader, which returns a *bytes.Reader reading from the slice.
That reader offers a Reset([]byte) method to reset its state.
bytes.Reader.Reset is very useful for reading the same byte slice multiple times with no allocations; in comparison, bytes.NewReader allocates every time it is called.
Finally, between two consecutive calls to c.Parse, reset the reader with the byte slice you collected previously, such as:
buf := bytes.NewReader(body)
// initialize the parser from buf (with gocal, something like c := gocal.NewParser(buf))
c.Parse()
// process the result
// rewind buf over the same bytes, then parse again
buf.Reset(body)
c.Parse()
You can try this version at https://play.golang.org/p/YaVtCTZHZEP (it uses strings.NewReader rather than bytes.NewReader, but the interface and behavior are similar).
(*) Not super obvious, but that is the general principle: the transport reads the headers and leaves the body untouched until you consume it.
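For reference, a self-contained sketch of the drain-once / re-read pattern; it uses encoding/json as a stand-in consumer since gocal's constructor is not shown above, and the URL is just a placeholder:
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
)

func main() {
    resp, err := http.Get("https://example.com/data.json") // placeholder URL
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    // Drain the body exactly once.
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }

    buf := bytes.NewReader(body)

    // First pass over the data.
    var first map[string]any
    _ = json.NewDecoder(buf).Decode(&first)

    // Rewind without allocating a new reader, then parse again.
    buf.Reset(body)
    var second map[string]any
    _ = json.NewDecoder(buf).Decode(&second)

    fmt.Println(first, second)
}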
I have a requirement where many threads will call the same shell script to perform some work and then write their output (a single line of text) to a common text file.
Since many threads will try to write data to the same file, my question is whether Unix provides a default locking mechanism so that they cannot all write at the same time.
Performing a short single write to a file opened for append is mostly atomic; you can get away with it most of the time (depending on your filesystem). But if you want to be guaranteed that your writes won't interrupt each other, or to write arbitrarily long strings, or to be able to perform multiple writes, or to perform a block of writes and be assured that their contents will be next to each other in the resulting file, then you'll want to lock.
While not part of POSIX (unlike the C library call for which it's named), the flock tool provides the ability to perform advisory locking ("advisory" -- as opposed to "mandatory" -- meaning that other potential writers need to voluntarily participate):
(
flock -x 99 || exit # lock the file descriptor
echo "content" >&99 # write content to that locked FD
) 99>>/path/to/shared-file
The use of file descriptor #99 is completely arbitrary -- any unused FD number can be chosen. Similarly, one can safely put the lock on a different file than the one to which content is written while the lock is held.
The advantage of this approach over several conventional mechanisms (such as using exclusive creation of a file or directory) is automatic unlock: If the subshell holding the file descriptor on which the lock is held exits for any reason, including a power failure or unexpected reboot, the lock will be automatically released.
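As an aside, the same flock(2)-style advisory locking can be taken from a Go writer as well; a minimal sketch, assuming a Unix system (syscall.Flock is not available on Windows) and a placeholder file path:
package main

import (
    "log"
    "os"
    "syscall"
)

// appendLineLocked appends one line to the shared file while holding an
// exclusive advisory lock; the lock is released when the descriptor is
// closed, even if the process dies unexpectedly.
func appendLineLocked(path, line string) error {
    f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0644)
    if err != nil {
        return err
    }
    defer f.Close()

    if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
        return err
    }
    defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)

    _, err = f.WriteString(line + "\n")
    return err
}

func main() {
    if err := appendLineLocked("/path/to/shared-file", "content"); err != nil {
        log.Fatal(err)
    }
}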
my question is whether unix provides a default locking mechanism so
that all can not write at the same time.
In general, no. At least not something that's guaranteed to work. But there are other ways to solve your problem, such as lockfile, if you have it available:
Examples
Suppose you want to make sure that access to the file "important" is
serialised, i.e., no more than one program or shell script should be
allowed to access it. For simplicity's sake, let's suppose that it is
a shell script. In this case you could solve it like this:
...
lockfile important.lock
...
access_"important"_to_your_hearts_content
...
rm -f important.lock
...
Now if all the scripts that access "important" follow this guideline,
you will be assured that at most one script will be executing between
the 'lockfile' and the 'rm' commands.
But there's actually a better way, if you can use C or C++: use the low-level open call to open the file in append mode and call write() to write your data, with no locking necessary. Per the write() man page:
If the O_APPEND flag of the file status flags is set, the file offset
shall be set to the end of the file prior to each write and no
intervening file modification operation shall occur between changing
the file offset and the write operation.
Like this:
// process-wide global file descriptor
int outputFD = open( fileName, O_WRONLY | O_APPEND, 0600 );
.
.
.
// write a string to the file
ssize_t writeToFile( const char *data )
{
return( write( outputFD, data, strlen( data ) ) );
}
In practice, you can write anything to the file - it doesn't have to be a NUL-terminated character string.
That's supposed to be atomic on writes up to PIPE_BUF bytes, which is usually something like 512, 4096, or 5120. Some Linux filesystems apparently don't implement that properly, so you may in practice be limited to about 1K on those file systems.
I'm trying to run a console app and read/write its standard I/O. When the app writes its output via WriteFile(GetStdHandle(...)), I can successfully read it with ReadFile on the pipe.
However, when the target app uses fprintf, ReadFile blocks until the target app exits, at which point it returns the entire output at once. And when the target app itself blocks (say, in fgets()), ReadFile blocks too.
I'm using the standard pipe redirection described here: http://msdn.microsoft.com/en-us/library/windows/desktop/ms682499(v=vs.85).aspx.
Why this strange behaviour, and how do I get around it?
It is likely due to the fact that fprintf is buffered while WriteFile is not: when stdout is redirected to a pipe rather than a console, the C runtime typically switches to full buffering, so nothing reaches the pipe until the buffer fills or the program exits. Can you call fflush after each fprintf and try the same?
I have a script that looks something like:
while true; do
read -t10 -d$'\n' input_from_serial_device </dev/ttyS0
# do some costly processing on the string
done
The problem is that it will miss the next input from the serial device because it is burning CPU cycles doing the costly string processing.
I thought I could fix this by using a pipe, on the principle that bash will buffer the input between the two processes:
( while true; do
read -d$'\n' input_from_serial_device </dev/ttyS0
echo $input_from_serial_device
done ) | ( while true; do
read -t10 input_from_first_process
# costly string processing
done )
First, I want to check that I've understood pipes correctly and that this will indeed buffer the input between the two processes as I intended. Is this idea correct?
Secondly, if I get the input I'm looking for in the second process, is there a way to immediately kill both processes, rather than exiting from the second and waiting for the next input before exiting the first?
Finally, I realise bash isn't the best way to do this and I'm currently working on a C program, but I'd quite like to get this working as an intermediate solution.
Thank you!
The problem isn't the pipe. It's the serial device.
When you write
while true; do
read -t10 -d$'\n' input_from_serial_device </dev/ttyS0
# use a lot of time
done
the consequence is that the serial device is opened, a line is read from it, and it is then closed. Then it is not opened again until # use a lot of time is done. While a serial device is not open, incoming serial input is thrown away.
If the input is truly coming in faster than it can be processed, then buffering isn't enough. You'll have to throw input away. If, on the other hand, it's dribbling in at an average speed which allows for processing, then you should be able to achieve what you want by keeping the serial device open:
while true; do
read -t10 -r input_from_serial_device
# process input_from_serial_device
done < /dev/ttyS0
Note: I added -r -- almost certainly necessary -- to your read call, and removed -d$'\n', because that is the default.
I'm attempting to compose an image in memory and send it out through http.ResponseWriter without ever touching the file system.
I use the following to create a new file:
file := os.NewFile(0, "temp_destination.png")
However, I don't seem to be able to do anything at all with this file. Here is the function I'm using (it is called from within an http.HandleFunc that just sends the returned bytes to the browser); it is intended to draw a blue rectangle on a temporary file and encode it as a PNG:
func ComposeImage() []byte {
    img := image.NewRGBA(image.Rect(0, 0, 640, 480))
    blue := color.RGBA{0, 0, 255, 255}
    draw.Draw(img, img.Bounds(), &image.Uniform{blue}, image.ZP, draw.Src)

    // in memory destination file, instead of going to the file sys
    file := os.NewFile(0, "temp_destination.png")

    // write the image to the destination io.Writer
    png.Encode(file, img)

    bytes, err := ioutil.ReadAll(file)
    if err != nil {
        log.Fatal("Couldn't read temporary file as bytes.")
    }
    return bytes
}
If I remove the png.Encode call, and just return the file bytes, the server just hangs and does nothing forever.
Leaving the png.Encode call in results in the file bytes (encoded, includes some of the PNG chunks I'd expect to see) being vomited out to stderr/stdout (I can't tell which) and server hanging indefinitely.
I assume I'm just not using os.NewFile correctly. Can anyone point me in the right direction? Alternative suggestions on how to properly perform in-memory file manipulations are welcome.
os.NewFile is a low-level function that most people will never use directly. It takes an already existing file descriptor (the system's representation of a file) and converts it to an *os.File (Go's representation).
If you never want the picture to touch your filesystem, stay out of the os package entirely. Just treat your ResponseWriter as an io.Writer and pass it to png.Encode.
png.Encode(yourResponseWriter, img)
If you insist on writing to an "in memory file", I suggest using bytes.Buffer:
buf := new(bytes.Buffer)
png.Encode(buf, img)
return buf.Bytes()
Please have a detailed read of the NewFile documentation. NewFile does not create a new file, not at all! It sets up a Go os.File which wraps around an existing file with the given file descriptor (0 in your case, which is stdin).
Serving images without files is much easier: just Encode your image to your ResponseWriter. That's what interfaces are there for. No need to write to some magic "in-memory file", no need to read it back with ReadAll; plain and simple, write to your response.
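Putting the advice from both answers together, here is a minimal sketch of a handler that composes the image in memory and encodes it straight to the ResponseWriter; the image contents come from the question, while the route and port are just assumptions:
package main

import (
    "image"
    "image/color"
    "image/draw"
    "image/png"
    "log"
    "net/http"
)

// imageHandler composes the blue rectangle in memory and streams it as PNG
// directly to the client; no file is ever created.
func imageHandler(w http.ResponseWriter, r *http.Request) {
    img := image.NewRGBA(image.Rect(0, 0, 640, 480))
    blue := color.RGBA{0, 0, 255, 255}
    draw.Draw(img, img.Bounds(), &image.Uniform{C: blue}, image.Point{}, draw.Src)

    w.Header().Set("Content-Type", "image/png")
    if err := png.Encode(w, img); err != nil {
        log.Printf("encoding png: %v", err)
    }
}

func main() {
    http.HandleFunc("/image.png", imageHandler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}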