How do I make SAPI start speaking from the position where a word was found?

I have a small problem with my code. My program finds a word inside a .txt file, and now I want to know how to make SAPI speak the next 5000 characters starting from the place where the word is found in the .txt file.

If you have the position of the word in the file, it shouldn't be a challenge to read the next 5000 characters into a buffer that can then be fed to the speech engine. A hard 5000-character cutoff may leave a word broken at the end, so you may want to scan ahead past that point to the next blank space (or other word boundary) to make the ending smoother.
Without knowing specifics such as the language, the platform, and which SAPI version you are using, it is impossible to give more detailed advice.
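Regardless of language, though, the extraction step looks roughly the same. Here is a rough sketch in Ruby (the file name and search word are placeholders, and the call into SAPI itself is left out because it depends on your platform):

text = File.read("input.txt")
start = text.index("searchword")                              # position where the word was found
if start
  stop = start + 5000                                         # hard cutoff from the question
  stop += 1 while stop < text.length && text[stop] !~ /\s/    # extend to the next whitespace
  snippet = text[start...stop]
  # feed `snippet` to the speech engine here
end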

Related

word wrap serial numbers with Zebra in ZPL

I have a question related to this topic: New line in Zebra ZPL. I want to print a serial number that is longer than the label. There shouldn't be a hyphen in this serial number.
It is printed by a PLC, which receives the format as ZPL from a PC. I only get a new ZPL file when the format changes. The data that have to be printed on the label are supplied as variables.
I can't change the communication setup (e.g. connect the printer directly to the PLC or change the program on the PC). This means I can't split the serial number into two lines (as I did in another project). Of course I can change the PLC program, but it must stay adaptable to new formats without being changed again, so from my point of view, splitting the number in the PLC program is not an option.
Until now, the only possibilities I have found are automatic word wrap with a hyphen or splitting the serial number into two lines.
Hopefully someone has a suggestion.
With kind regards,
Alexander Härtel
Use the ^FB (Field Block) command.
The first argument is the width in dots. It is always in dots: this is the one command that ignores the ^CU units-of-measurement setting, which is an undocumented fact. The second argument is the maximum number of lines the block can have. (There are other arguments too.)
^FO100,350
^FB100,2
^FD1234567890^FS
Labelary example.

FB_FileGets vs FB_FileRead in TwinCAT

There are two similar functions for reading files in the Beckhoff TwinCAT software: FB_FileGets and FB_FileRead. I would appreciate it if someone could explain the differences between these functions and make clear when to use each of them. Do they have the same prerequisites, and are they used the same way in programs? Which one is faster (for reading different file formats), and is there any other information that would make them clearer for better programming?
FB_FileGets reads the file line by line, so each call returns one line of the text file as a string. The maximum length of a line is 255 characters. That makes this function block very easy to use for reading all the lines of a file: there is no need for buffers or memory copying, as long as the 255-character line limit is acceptable.
FB_FileRead reads a given number of bytes from the file, so you can read files that have, for example, 65000 characters on a single line.
I would use FB_FileGets in all cases where you know the lines are shorter than 255 characters and you handle the data line by line; it's very simple to use. If you have no idea of the line sizes, need all the data at once, or the file is very big, I would use FB_FileRead.
I haven't tested it, but I think FB_FileRead is probably faster, as it just copies the bytes into a buffer, and you can read the whole file at once instead of line by line.
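This isn't Beckhoff code, but as a rough analogy in Ruby, the two access patterns look like this: reading one line per call versus reading a fixed number of bytes into a buffer.

File.open("data.txt", "r") do |f|
  while (line = f.gets)            # one line per call, like FB_FileGets
    # handle the line as a string
  end
end

File.open("data.txt", "rb") do |f|
  while (chunk = f.read(4096))     # a fixed number of bytes per call, like FB_FileRead
    # handle the raw chunk
  end
end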

Converting text file with spaces between CR & LF

I've never seen this line ending before and I am trying to load the file into a database.
The lines all have a fixed width. After the CSV text that contains the data (whose length varies line by line), there is a CR, followed by multiple spaces, and then an LF. The spaces provide the padding that equalizes the line width.
Line1,Data 1,Data 2,Data 3,4,50D20202020200A
Line2,Data 11,Data 21,Data 31,41,510D2020200A
Line3,Data12,Data22,Data 32,42,520D202020200A
I am about to handle this with a stream reader/writer in C#, but there are 40 files that come in each month, and if there is a way to convert them all at once instead of one line at a time, I would rather do that.
Any thoughts?
Line-by-line processing of a stream doesn't have to be a bottleneck if you implement it at the right point in your overall process.
When I've had to do this kind of preprocessing I put a folder watch on the inbound folder, then automatically pick up each file and process it upon arrival, putting the original into an archive folder and writing the processed file into another location from which data will be parsed or loaded into the database. Unless you have unusual real-time requirements, you'll never notice this kind of overhead. If you do have real-time requirements, this issue will pale in comparison to all the other issues you'll face with batched data files :)
But you may not even have to go through a preprocessing step at all. You didn't indicate what database you will be using or how you plan to load the data, but many databases do include utilities to process fixed-length records. In the past, fixed-format files came with every imaginable kind of bizarre format (and contained all kinds of stuff that had to be stripped out or converted). As a result those utilities tend to be very efficient at this kind of task. In my experience they can easily be at least an order of magnitude faster than line-by-line processing, which can make a real difference on larger bulk loads.
If your database doesn't have good bulk import tools, there are a number of open-source or freeware utilities already written that do pretty much exactly what you need. You can find them on GitHub and other places. For example, NPM replace is here and zzzprojects findandreplace is here.
For a quick and dirty approach that lets you preview all the changes while you develop a more robust solution, many text editors can find and replace across multiple files. I've used that approach successfully in the past. For example, Notepad++ has a Find in Files dialog that lets you use a regex to remove or change whatever you like in all files matching defined criteria.
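If you do end up writing your own preprocessing step, the whole conversion is a single regex replacement per file. A minimal sketch in Ruby (the folder names are placeholders, and the same pattern carries straight over to Regex.Replace in C#):

Dir.glob("inbound/*.txt").each do |path|
  raw = File.binread(path)
  fixed = raw.gsub(/\r +\n/, "\r\n")    # drop the space padding between CR and LF
  File.binwrite(File.join("processed", File.basename(path)), fixed)
end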

Incrementally reading logs

Looked around with numerous search strings but can't find anything quite like this:
I'm writing a custom log parser (à la analog or webalizer, except not for a web server) and I want to be able to skip the hard work for lines that have already been parsed. I have thought about using a history file like webalizer does, but I have no idea how it actually works internally, and my C is pretty poor.
I've considered hashing each line and writing the hashes out, then parsing the history file for their presence, but I think this will perform poorly.
The only other method I can think of is storing the line number of the last parse and skipping lines until that number is reached on the next run. I'm not sure what happens when the log is rotated.
Any other ideas would be appreciated. I will be writing the parser in ruby but tips in a similar language will help as well.
The solutions I can think of right now are bound to be brittle.
Even if you store the line number and later check that it isn't past the length of the current file, what happens if old lines have been trimmed? You would start reading (well) after the last position you actually parsed.
If, on the other hand, you are sure your log files won't be tampered with and they will only be rotated, I only see two ways of doing what you want, and I'm not sure the second is applicable to you.
Anyway, here goes.
First solution
You store the last line you parsed along with a timestamp. On the next run, you consider all the rotated log files, sorting them by their last-modified date, figure out which one you read last time, and start reading from there.
I didn't think this through, there might be funny corner cases you will need to handle.
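A rough Ruby sketch of that sorting step (the log path is a placeholder):

log_files = Dir.glob("/var/log/myapp.log*").sort_by { |f| File.mtime(f) }
log_files.each do |path|
  # skip files already covered by the saved timestamp / last-line marker,
  # then parse the remainder
end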
Second solution
You create a background script that continuously watches the log file. A quick search on Google turned up this gem, but I'm not sure whether that's even an option for you. Even then, you might want to combine this solution with the previous one, just in case your daemon gets interrupted (because that's clearly bound to happen at some point).
As you read the file and parse the lines, keep track of the byte count. Save that. On the next read, seek to that byte offset in the file. If the file is smaller than the saved byte count, it's a new file, so start at the beginning.
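A minimal Ruby sketch of that idea (the file names and the parse call are placeholders):

OFFSET_FILE = "parser.offset"
LOG_FILE    = "app.log"

offset = File.exist?(OFFSET_FILE) ? File.read(OFFSET_FILE).to_i : 0
offset = 0 if offset > File.size(LOG_FILE)    # file shrank, so it was rotated: start over

File.open(LOG_FILE, "r") do |log|
  log.seek(offset)
  log.each_line { |line| parse(line) }        # parse is your existing per-line logic
  File.write(OFFSET_FILE, log.pos.to_s)
end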

How do I create this File Input and Output assignment in Ruby

I have an assignment that I am not sure how to do, and I was wondering if anyone could help. Here it is:
Create a program that allows the user to input how many hours they exercised for today. Then the program should output the total of how many hours they have exercised for all time. To allow the program to persist beyond the first run the total exercise time will need to be written and retrieved from a file.
My code is this so far:
myFileObject2 = File.open("exercise.txt")
myFileObject2.read
puts "This is an exercise log. It keeps track of the number of hours of exercise."
hours = gets.to_f
myFileObject2.close
Write your code like:
File.open("exercise.txt", "r") do |fi|
file_content = fi.read
puts "This is an exercise log. It keeps track of the number hours of exercise."
hours = gets.chomp.to_f
end
Ruby's File.open takes a block. When that block exits, File will automatically close the file. Don't use the non-block form unless you are absolutely positive you know why you should do it another way.
chomp the value you get from gets. This is because gets won't return until it sees a trailing END-OF-LINE, which is usually a "\n" on Mac OS and *nix, or "\r\n" on Windows. Failing to remove that with chomp is the cause of much weeping and gnashing of teeth in unaware developers.
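For example, assuming the user types 2.5 and presses Enter:

line = gets           # => "2.5\n" (the trailing newline is included)
line.chomp            # => "2.5"
line.chomp.to_f       # => 2.5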
The rest of the program is left for you to figure out.
The code will fail if "exercise.txt" doesn't already exist. You need to figure out how to deal with that.
Using read is bad form unless you are absolutely positive the file will always fit in memory, because the entire file is read at once. Once it is in memory, it will be one big string of data, so you'll have to figure out how to break it into an array in order to iterate over it. There are better ways to handle reading than read, so I'd study the IO class, plus read what you can find on Stack Overflow. Hint: don't slurp your files.
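One possible way to deal with the missing-file case (just a sketch, assuming the file holds a single number, the running total):

total = 0.0
if File.exist?("exercise.txt")
  File.open("exercise.txt", "r") do |f|
    total = f.gets.to_f    # nil.to_f is 0.0, so an empty file is also safe
  end
end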

Resources