Curl progress meter and bar problem: it duplicates data - bash

I have a script whose final stage downloads files with curl, and I need to show the download progress. When I type curl -# $url -O in a terminal it works fine. I use it with:
while read line
do
    curl -# -O "$line"   # the curl command from above, with the URL read from the file
done < url
In that case it starts filling my screen with # characters. The default progress meter does the same thing: it keeps filling the screen with progress data.
Here is a picture if that is unclear: (screenshot showing the progress bar output repeated down the screen)
The curl documentation says that they fixed this.
If I can't solve this problem, how can I show only the percentage of the progress? It could be cut out with sed, but I don't know how.

Regarding your question
How can I show only the percentage of progress?
this does not seem to be possible easily, according to CURL Progress Bar: How to pipe and extract numbers only using grep?, or without a wrapper, according to curl progress - only show percentage.
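If a wrapper is acceptable, a rough starting point might look like the sketch below. This is only an illustration, not a tested solution: it assumes the default progress meter (whose first column is the overall percentage), that curl keeps writing the meter to stderr when piped, and that the URLs are read from the same url file as above.

while read -r line
do
    # progress goes to stderr; turn the carriage-return-updated meter into lines
    # and keep only the leading percentage column
    curl -O "$line" 2>&1 | tr '\r' '\n' | awk '$1 ~ /^[0-9]+$/ {print $1 "%"}'
done < url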

Related

Set label size for industrial printer via PowerShell or bash

At the moment I am struggling with our ways of working at my workplace. People are generating sample labels with barcodes on them. Due to the huge number of operators they cannot use their own accounts, since that resets the settings each time, and they keep trying to print even though I have already set the print settings on the printer, which is a Novexx 6406 6" printer.
I would like to create a PowerShell or bash script, which we would then run on each account via group policy, to do the following:
print speed: 4"/s
darkness: 80
width: 102 mm
height: 72 mm
no padding on any side, so set top, bottom, left, and right to 0 mm
and the label needs to be oriented horizontally (landscape).
I'm new to bash & PowerShell so I am looking to get started & any help would be appreciated.
Forward to explain scenario better the sample labels are being printed out from web-based app in edge browser.
Any further questions needed for resolution please reach out
Searching online however to no avail
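If the bash route were taken on a machine where the printer is served through CUPS, one conceivable starting point is storing per-queue defaults with lpoptions. This is only a sketch under that assumption: the queue name novexx-6406 is made up, and the real option names for print speed and darkness are driver-specific, so they would have to be looked up first.

lpoptions -p novexx-6406 -l                                      # list the driver's real option names and values
lpoptions -p novexx-6406 -o media=Custom.102x72mm -o landscape   # 102 mm x 72 mm label, horizontal orientation

The Windows/group-policy side would need a different mechanism, since printing from Edge goes through the Windows driver rather than CUPS.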

Is it possible to render or print bold or stylized text in the build log?

I used a bash function to print out manually echo'd messages in bold, but it renders as normal text in the build log.
Is it possible to render the text in a different font style or color in order to make it stand out when reviewing logs?
Edit-
Per the request in the comment (thanks Sathi), what I tried were the two techniques outlined here: How does one output bold text in Bash?
The tput method fails because there is no terminal to interact with, while using echo -e '\033[1mYOUR_STRING\033[0m' renders as normal text. I understand why neither of these is likely to work, since there is no terminal. I thought perhaps there would be an escape sequence or some other method specifically for the Cloud Build logs as viewed in Cloud Build, but I haven't found anything when searching.
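For reference, the ANSI-escape technique mentioned above can be wrapped in small bash helpers like the sketch below. These work in ANSI-aware terminals without needing tput or a TTY, but as described in the question, the Cloud Build log viewer appears to show the text unstyled anyway.

# plain ANSI escapes, no terminal query needed
bold() { printf '\033[1m%s\033[0m\n' "$*"; }
red()  { printf '\033[31m%s\033[0m\n' "$*"; }

bold "=== deploy step starting ==="
red  "warning: something needs attention"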

Can I bulk-remove links from a pdf from the command line?

I'm downloading some newspapers as PDFs (for posterity). One title is a pain: it includes URI links in the PDF itself, and if you accidentally click these it opens a browser tab to a page that 500s. It's not so bad on a desktop computer, but a pain in the butt if someone is reading it on a tablet. Each issue has approximately 200 of these links.
For a different title, it was as simple as using QPDF, like so:
qpdf --qdf --object-streams=disable file temp-file
This puts the temp version into postscript mode or something, and I was able to nuke the links with something like this:
s/obj\n<<\n( \/A <<\n \/S \/URI.+?)>>\nendobj/"obj\n<<\n" . " " x length($1). ">>\nendobj"/sge
This still works. However, a 15 MB original PDF is now becoming a 108 MB "fixed" PDF. I can accept some bloat, but 720% is a bit absurd (I think it was more like 10% on the other title). Whenever I google for how to do this, I get results for Acrobat Reader and how you can click around in 20 menus to do it... does no one who uses Adobe products ever want to automate this stuff? There are between 180 and 300 links in a typical issue, spread across 45-150 pages (Sunday editions).
Are there any tools that can do this? Are there any clever arguments to qpdf that will make this more reasonable?
PS Yes I know it's hacky as hell to just overwrite the URIs with spaces, but I've never managed to figure out how to remove the objects entirely since their references also have to be removed.
You can do this with the community edition of cpdf: https://community.coherentpdf.com/
To remove all links in a PDF (well, to replace them with an empty link):
cpdf -replace-dict-entry /URI cpdfmanual.pdf -replace-dict-entry-value '""' -o out.pdf
This does not remove the annotations - it just makes sure that clicking on them won't go anywhere. It leaves the annotation in place, but with an empty link. You could replace with a working URL too, of course:
cpdf -replace-dict-entry /URI cpdfmanual.pdf -replace-dict-entry-value '"https://www.google.com/"' -o out.pdf
(You can also use -replace-dict-entry-search to replace only certain URLs - see the manual.)
Or, if you just want rid of all the annotations (link and non-link):
cpdf -remove-annotations in.pdf -o out.pdf
You can use HexaPDF (you need to have Ruby installed and then use gem install hexapdf to install HexaPDF) and the following small script to remove the links:
require 'hexapdf'

HexaPDF::Document.open(ARGV[0]) do |doc|
  doc.pages.each do |page|
    # select the link annotations on this page and remove each from the page's /Annots array
    page.each_annotation.select {|annot| annot[:Subtype] == :Link}.each do |annot|
      page[:Annots].delete(annot)
    end
  end
  doc.write(ARGV[0] + '_processed.pdf', optimize: true)
end
Then batch execute the script for all the files you want the links removed.
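For example, assuming the script above is saved as remove_links.rb (the file name is just a placeholder), a simple bash loop can run it over every PDF in a directory:

for f in *.pdf; do
    ruby remove_links.rb "$f"
done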
Note that this will remove all links.
Just to round off the options: the best approach is potentially a dedicated PDF command line tool such as cpdf (see the answer by johnwhitington) or a dedicated library like iText.
There are several alternative methods touted for batch text editing using qpdf.
"temp version into postscript mode or something,"
That is the PDF converted into a plain old decompressed text/PDF hybrid (QDF), so you can run sed or a similar string editor on it. The key difference is that the edited file identifies itself as an editable QDF-1.0 version, so after editing it needs to be converted back into a conventional PDF, in which the streams are binary and recompressed.
1) qpdf
At the end of a bloating edit exercise, the idea is to reverse back to application/pdf using
fix-qdf file-temp.pdf > out.pdf
to tidy up the object numbering and cross-references, and then
qpdf --compress-streams=y out.pdf outfixed.pdf
to get back to a fixed, compressed PDF.
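Putting those steps together, the whole qpdf round trip described in the question and above might look like this sketch (file names are placeholders, and the perl substitution stands in for whatever QDF edit you actually make):

qpdf --qdf --object-streams=disable in.pdf temp.pdf    # decompress to an editable QDF file
perl -0777 -pi -e 's/.../.../sge' temp.pdf             # your link-blanking edit goes here
fix-qdf temp.pdf > out.pdf                             # repair object numbering/offsets after editing
qpdf --compress-streams=y out.pdf outfixed.pdf         # recompress back to a conventional PDF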
Other cross-platform means include:
2) pdftk
$ pdftk infile.pdf output outfile.pdf uncompress
Edit with vim or whatever sed scripting method, then:
$ pdftk outfile.pdf output fixedfile.pdf compress
3) mutool
mutool clean -d [options] input.pdf [output.pdf] [pages]
-d Decompress streams. This will make the output file larger, but provides easy access for reading and editing the contents with a text editor.
-i Toggle decompression of image streams. Use in conjunction with -d to leave images compressed.
-f Toggle decompression of font streams. Use in conjunction with -d to leave fonts compressed.
-a ASCII Hex encode binary streams. Use in conjunction with -d and -i or -f to ensure that although the images and/or fonts are compressed, the resulting file can still be viewed and edited with a text editor.
Whichever options you use will need to be reversed when recompressing.
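As a sketch of that with mutool (the -z flag used here to re-deflate streams is an assumption; check mutool clean's help on your version):

mutool clean -d -i -f in.pdf editable.pdf        # decompress for editing, keep images and fonts compressed
# ... edit editable.pdf with sed or a text editor ...
mutool clean -z editable.pdf recompressed.pdf    # re-deflate the streams afterwards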
NOTE
Using text editors can potentially corrupt binary fonts and binary images, so they need monitoring for corruption if the editor changes the encoding or line feeds. In a decompressed pdftk sample the image stream is reduced to simple text, but beware: any change to the end-of-line characters by the editor would break up that stream.
Additionally, when making text edits that are not simple byte-wise "find and replace" operations, the xref table can be corrupted too much to be re-indexed on recompression, so try to overwrite with the same number of characters when using a text-edit method.
SIDE NOTE
Even if you remove the actions and external hyperlink actions, if the URL text itself remains on the page the reader will still offer that exploitable action, much as HTML would usually highlight https://google.com in blue underline. Hence ensure the reader's security settings are on.

OSX: automated (every 1-2sec) screenshot (not full screen but (x,y,w,h)) using python

I want to take screenshots on OSX using Python. I don't want to take full-screen shots, but only certain rectangles of the screen, something like (291,305,213,31). I need the exact pixels because the image files are afterwards processed by OCR (python-tesseract) to extract the text.
By the way, this is the first time I've programmed in 6 years, and so far I only know a bit of Java. I started yesterday and gave up this morning at 4am, so basically I have no clue yet... For example, I still can't build with Sublime because of path settings, but that's a different story. Can't figure out everything in one day.
I have already tried the following:
- wxPython
But the results are black images; see also:
stackoverflow.com/questions/8644908/take-screenshot-in-python-cross-platform
Additionally, it only works in 32-bit mode, but when I do OCR using python-tesseract, OpenCV requires 64-bit...
autopy
when trying to install I got errors, see also:
stackoverflow.com/questions/12993126/errors-while-installing-python-autopy
ImageGrab
Windows only
effbot.org/imagingbook/imagegrab.htm
commandline screencapture
os.system('screencapture test.png')
When I found this I thought: nice, but it's full screen only, according to man screencapture. But then I found this: guides.macrumors.com/screencapture
-R capture screen rect
That would already be enough, but on OSX 10.7.5 I don't have this option. Any ideas?
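(For reference, on later OS X/macOS releases that do ship the -R flag, a minimal bash polling loop might look like the sketch below; the rectangle is the example from the question and the exact -R syntax may vary by version.)

# capture the rectangle (x, y, w, h) = (291, 305, 213, 31) once a second
while true; do
    screencapture -x -R291,305,213,31 "shot_$(date +%s).png"
    sleep 1
done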
import Quartz.CoreGraphics as CG   # alias needed so the CG.* names below resolve
# neverfear.org/blog/view/156/OS_X_Screen_capture_from_Python_PyObjC
# Create screenshot as CGImage
image = CG.CGWindowListCreateImage(
    region,
    CG.kCGWindowListOptionOnScreenOnly,
    CG.kCGNullWindowID,
    CG.kCGWindowImageDefault)
Unfortunately the result is not an image file but a CGImage; I have no idea how to save it as a file.
So if possible I would like to use the commandline screencapture with -R if somebody knows how. Just as a start to continue.
Are there any other command line tools available?
What about other libs that I have missed?
Cheers
M
Given that you can get a CGImageRef, you can get its pixel data using the techniques described in Technical Q&A QA1509: Getting the pixel data from a CGImage object. In particular, it shows a function to get the pixel data as a CFDataRef using this function:
CFDataRef CopyImagePixels(CGImageRef inImage)
{
    return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
}
and says:
The pixel data returned by CGDataProviderCopyData has not been color matched and is in the format that the image is in, as described by the various CGImageGet functions …
It shows an alternative for getting the pixel data in other formats if you need that.

Changing bash home position makes it run slowly. Any fix?

I'm having some trouble with bash. I'm changing my console's home position so I can reserve some rows at the top of the screen to print the current status of the script, and let all the standard output scroll in the lower part of the screen. This way, the upper lines showing the status don't get removed when the screen scrolls down.
I'm doing this with the following lines, where <RNUM> is the number of rows I need to freeze:
\033[<RNUM>;r
\033[<RNUM>;1H
It works; however, console performance drops noticeably, and it prints lines really slowly (it takes roughly four times longer to print the same number of lines it usually prints).
Does anyone have a fix for the performance issue? Am I using the correct codes? I wasn't able to find any information about this stuff on the net, and I don't recall where I got these codes years ago.
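For reference, a self-contained version of the technique described in the question might look like the sketch below, assuming an xterm-compatible terminal (ESC[top;bottom r sets the scrolling region, ESC[row;col H positions the cursor). With RNUM rows frozen at the top, the scrolling region starts at row RNUM+1; adjust by one if your convention differs.

RNUM=5                                                 # rows to freeze at the top
printf '\033[%d;%dr' "$((RNUM + 1))" "$(tput lines)"   # scrolling region: rows RNUM+1 .. bottom
printf '\033[%d;1H' "$((RNUM + 1))"                    # move the cursor into the scrolling area
# ... normal script output scrolls below the frozen rows ...
printf '\033[r'                                        # reset the scrolling region when finished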
