How to get pyside-uic to generate "./" for resources instead of ":/" - pyside

I have a GUI I created with QtDesigner that includes some images for buttons. Within QtDesigner they are stored together in a resource file. When I run pyside-uic -x ui_mainWindow.ui -o ui_mainWindow.py or pyside-uic -o ui_mainWindow.py ui_mainWindow.ui, the resulting python code includes lines like:
icon.addPixmap(QtGui.QPixmap(":/assets/CU LASP.png"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
but I want it to say
icon.addPixmap(QtGui.QPixmap("./assets/CU LASP.png"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
(just use a ./ instead of a :/). When I run the Python code with the :/, the images don't show up. When I hand-edit the generated code to use ./, the images do show up.
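For context, the :/ prefix refers to Qt's compiled resource system, which only resolves if the resource module generated by pyside-rcc is imported. If plain relative paths are acceptable instead, the hand edit can be automated, e.g. with sed (a sketch; it rewrites every ":/ prefix inside the double-quoted strings of the generated module):
pyside-uic -o ui_mainWindow.py ui_mainWindow.ui
# rewrite ":/assets/..." resource paths to "./assets/..."
# (GNU sed shown; on macOS/BSD use: sed -i '' 's|":/|"./|g' ui_mainWindow.py)
sed -i 's|":/|"./|g' ui_mainWindow.py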

Related

pandoc to make each directory a chapter

I have a lot of markdown files in various directories each with the same format (# title, then ## sub-title).
Can I make the --toc respect the folder layout, so that the folder itself becomes the chapter name and each markdown file becomes content of that chapter?
So far pandoc totally ignores my folder names; the result is the same as putting all the markdown files in one folder.
My approach is to create an index file in each folder with a first-level heading, and to downgrade the headings in all other files by one level.
I use Git, and by default the files keep their normal structure with first-level headings; when I want to generate the ebook with pandoc, I modify the files via an automated Linux shell script and afterwards revert the changes via Git.
Here's the script:
find ./docs/*/ -name "*.md" ! -name "*index.md" -exec perl -pi -e "s/^(#)+\s/#$&/g" {} \;
./docs/*/ means I'm looking only for files inside subfolders of the docs directory, like docs/foo/file1.md or docs/bar/file2.md.
I'm also interested only in *.md files, excluding *index.md files.
In the index.md files (which I usually name 00-index.md so they sort first), I put a first-level heading #; because those files are excluded by the find portion of the script, their headings aren't downgraded.
Next, perl's search-and-replace command with the regular expression s/^(#)+\s/#$&/g looks for every line starting with one or more # and prepends another # to it.
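For example, piping a file with the question's heading format through the same substitution shows the one-level shift:
printf '# title\n## sub-title\n' | perl -p -e 's/^(#)+\s/#$&/g'
# prints:
# ## title
# ### sub-title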
In the end, I'm running pandoc with --toc-depth=2 so the table of contents contains only first- and second-level headings.
pandoc ./docs/**/*.md --verbose --fail-if-warnings --toc-depth=2 --table-of-contents -o ./ebook.epub
To revert all the changes made to the files, I restore them in the Git repo:
git restore .
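Putting it together, the whole build can be one script (a sketch assembled only from the commands above; note the ./docs/**/*.md glob needs bash's globstar option):
#!/bin/bash
# 1. downgrade headings in every non-index .md inside subfolders of docs
find ./docs/*/ -name "*.md" ! -name "*index.md" -exec perl -pi -e "s/^(#)+\s/#$&/g" {} \;
# 2. build the epub (globstar lets ** recurse into subfolders)
shopt -s globstar
pandoc ./docs/**/*.md --verbose --fail-if-warnings --toc-depth=2 --table-of-contents -o ./ebook.epub
# 3. revert the heading changes, whether or not pandoc succeeded
git restore .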

Relative paths in pandoc HTML templates

I’ve got a Pandoc (v1.19.2.1) HTML5 template that I’m loading from the default --data-dir. Within the template I need to load external resources, such as stylesheets and JavaScript. I’d like to load those resources relative to the path of the template, not the working directory or the source file. For example, on macOS, in ~/.pandoc/templates/hierarchical/hierarchical.html:
…
<link rel="stylesheet" href="hierarchical.css">
…
where hierarchical.css is located at ~/.pandoc/templates/hierarchical/hierarchical.css, in the same directory as the template itself.
Then invoked from the command line:
pandoc \
--from=markdown_strict+header_attributes+yaml_metadata_block+pipe_tables \
--to=html5 \
--self-contained \
--template="hierarchical/template.html" \
--section-divs \
--output="$1.html" \
--toc \
--toc-depth=6 \
"$1.md"
I get the error:
pandoc: Could not fetch hierarchical.css
hierarchical.css: openBinaryFile: does not exist (No such file or directory)
I’ve tried various other relative paths to the CSS file. The only thing that works is the absolute path /Users/jmakeig/.pandoc/templates/hierarchical/hierarchical.css, which, of course, will only work on my laptop.
Is there any way to resolve external resources in Pandoc templates relative to the template itself, so that the templates are portable? I don’t see an obvious external variable that I could use in my template or a command line option.
I'm pasting the workaround I gave in the issue I created on GitHub a while ago.
Coming back to the topic more than two years after I created this issue, I've found a not-so-bad workaround.
Apparently, LaTeX uses the environment variable TEXINPUTS as a sort of PATH for resources. So you can configure the environment variable once in your system (Linux, Windows, wherever) and refer to resources relative to that path.
This link provides some explanation about how to use it:
https://tex.stackexchange.com/questions/93712/definition-of-the-texinputs-variable
For example I have the following files:
SOME_PATH_TO/templates/my_latex_template.tex
SOME_PATH_TO/templates/img/my_img.png
In my system I set the environment variable (example with Windows, although I actually just save it under the system config):
set TEXINPUTS=SOME_PATH_TO/templates/
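On Linux or macOS the equivalent would be the following (a sketch; the trailing colon makes TeX append its default search paths rather than replace them):
export TEXINPUTS="SOME_PATH_TO/templates/:"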
In the template my_latex_template.tex I have something like:
%...
\includegraphics{img/my_img.png}
%...
And I call the template like so:
pandoc file.txt -t pdf --template=SOME_PATH_TO/templates/my_latex_template.tex --output=output.pdf

Render .adoc file in HTML template using asciidoctor

My goal is to use asciidoctor to render an HTML file from an HTML template plus an .adoc file that can be easily edited by a non-technical worker. I can currently get the HTML template to render, but I'm not sure how to wire up an .adoc file to place text inside specific tags, i.e. inside the template's divs/paragraphs.
Currently running this command from terminal:
rm assets/templates/about/digitization.html && asciidoctor -a stylesheet! \
  -T assets/templates/asciidoc/about/templates/ \
  -o assets/templates/about/digitization.html \
  assets/templates/asciidoc/about/digitization.adoc
With this command, nothing inside digitization.adoc currently shows up in the output (nor do I understand how to get text to render within the correct places in the HTML template).
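One way to wire the .adoc text into specific places is a per-node Tilt template: with -T, asciidoctor looks in the template directory for files named after node types, so a document.html.erb there overrides the whole-document shell while other nodes fall back to the built-in converter. A minimal sketch (assumes the tilt gem is installed; the div class is hypothetical):
cat > assets/templates/asciidoc/about/templates/document.html.erb <<'EOF'
<!DOCTYPE html>
<html>
<body>
  <!-- the converted body of the .adoc lands here -->
  <div class="digitization-about"><%= content %></div>
</body>
</html>
EOF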

Join multiple Coffeescript files into one file? (Multiple subdirectories)

I've got a bunch of .coffee files that I need to join into one file.
I have folders set up like a rails app:
/src/controller/log_controller.coffee
/src/model/log.coffee
/src/views/logs/new.coffee
CoffeeScript has a command that lets you join multiple CoffeeScript files into one file, but it only seems to work within a single directory. For example, this works fine:
coffee --output app/controllers.js --join --compile src/controllers/*.coffee
But I need to be able to include a bunch of subdirectories kind of like this non-working command:
coffee --output app/all.js --join --compile src/*/*.coffee
Is there a way to do this? Is there a UNIXy way to pass in a list of all the files in the subdirectories?
I'm using terminal in OSX.
They all have to be joined in one file because otherwise each separate file gets compiled & wrapped with this:
(function() { }).call(this);
Which breaks the scope of some function calls.
From the CoffeeScript documentation:
-j, --join [FILE] : Before compiling, concatenate all scripts together in the order they were passed, and write them into the specified file. Useful for building large projects.
So, you can achieve your goal at the command line (I use bash) like this:
coffee -cj path/to/compiled/file.js file1 file2 file3 file4
where file1 - fileN are the paths to the coffeescript files you want to compile.
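To pull in every .coffee file under src automatically, the file list can come from find (a sketch assuming filenames without spaces; sort keeps the join order deterministic, which matters for scoping):
coffee -cj app/all.js $(find src -name '*.coffee' | sort)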
You could write a shell script or Rake task to combine them together first, then compile. Something like:
find . -type f -name '*.coffee' -print0 | xargs -0 cat > output.coffee
Then compile output.coffee
Adjust the paths to your needs. Also make sure that the output.coffee file is not in the same path you're searching with find or you will get into an infinite loop.
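For completeness, the compile step mentioned above would then be:
coffee --compile output.coffee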
http://man.cx/find
http://www.rubyrake.org/tutorial/index.html
Additionally you may be interested in these other posts on Stackoverflow concerning searching across directories:
How to count lines of code including sub-directories
Bash script to find a file in directory tree and append it to another file
Unix script to find all folders in the directory
I've just released an alpha version of CoffeeToaster; I think it may help you.
http://github.com/serpentem/coffee-toaster
The easiest way is to use the coffee command-line tool:
coffee --output public --join --compile app
app is my working directory holding multiple subdirectories, and public is where the output .js file will be placed. It's easy to automate this process if you're writing the app in Node.js.
This helped me (-o sets the output directory, -j joins everything into project.js, -cw compiles and watches the coffeescript directory in full depth):
coffee -o web/js -j project.js -cw coffeescript
Use cake to compile them all into one (or more) resulting .js file(s). A Cakefile serves as the configuration that controls the order in which your coffee scripts are compiled - quite handy with bigger projects.
Cake is quite easy to install and set up; invoking cake from vim while you are editing your project is then simply
:!cake build
and you can refresh your browser and see results.
As I'm also busy learning the best way of structuring the files and using coffeescript in combination with backbone and cake, I have created a small project on github to keep as a reference for myself; maybe it will help you around cake and some basic things too. All compiled files are in the www folder so that you can open them in your browser, and all source files (except for the cake configuration) are in the src folder. In this example, all .coffee files are compiled and combined into one output .js file which is then included in the html.
Alternatively, you could use the --bare flag, compile to JavaScript, and then perhaps wrap the JS if necessary. But this would likely create problems; for instance, if you have one file with the code
i = 0
foo = -> i++
...
foo()
then there's only one var i declaration in the resulting JavaScript, and i will be incremented. But if you moved the foo function declaration to another CoffeeScript file, then its i would live in the foo scope, and the outer i would be unaffected.
So concatenating the CoffeeScript is a wiser solution, but there's still potential for confusion there; the order in which you concatenate your code is almost certainly going to matter. I strongly recommend modularizing your code instead.

Exporting and importing images in MediaWiki

How do I export and import images from and into a MediaWiki?
Terminal solutions
A MediaWiki administrator can perform maintenance tasks at the server's terminal using the Maintenance scripts framework. Recent MediaWiki versions ship all the standard scripts used in the tasks described below, but old versions have some bugs or lack some of the modern scripts: check the version number with grep wgVersion includes/DefaultSettings.php.
Note: all the scripts cited below also have a --help option, for instance php maintenance/importImages.php --help.
Original image folder
Users upload files through the Special:Upload page; administrators can configure the allowed file types through an extension whitelist. Once uploaded, files are stored in a folder on the file system, and thumbnails in a dedicated thumb directory.
MediaWiki's images folder can be zipped with the zip -r ~/Mediafiles.zip images command, but this zip is not so good:
there are a lot of spurious files: "deleted files" and "old files" (not the current ones) with filenames like 20160627184943!MyFig.png, and thumbnails like MyFig.png/120px-MyFig.jpg.
for data-interchange or long-term preservation purposes it is unsuitable... the ugly images/?/??/* folder layout is not the usual "all image files in one folder" convention.
Images export/import
For "Exporting and Importing" all current images in one folder at MediaWiki server's terminal, there are a step-by-step single procedure.
Step-1: generate the image dumps using dumpUploads (with --local or --shared options when preservation need), that creates a txt list of all image filenames in use.
mkdir /tmp/workingBackupMediaFiles
php maintenance/dumpUploads.php \
| sed 's~mwstore://local-backend/local-public~./images~' \
| xargs cp -t /tmp/workingBackupMediaFiles
zip -r ~/Mediafiles.zip /tmp/workingBackupMediaFiles
rm -r /tmp/workingBackupMediaFiles
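Before relying on the archive, a quick sanity check of its contents could be:
unzip -l ~/Mediafiles.zip | head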
The command results in a standard zip file of your image backup folder, Mediafiles.zip, in your home directory (~/).
NOTE: if you are not worried about the ugly folder structure, a more direct way is
php maintenance/dumpUploads.php \
| sed 's~mwstore://local-backend/local-public~./images~' \
| zip ~/Mediafiles.zip -@
Depending on the MediaWiki version, the --base=./ option will work fine and you can remove the sed command from the pipe.
Step-2: need a backup, or to install a copy of the images? You only need Mediafiles.zip, plus a MediaWiki installation with no contents... If the wiki already has contents, check for filename conflicts (!). Another concern is the configuration of file formats and permissions, which must be the same or broader in the new wiki; see Manual:Configuring file uploads.
Step-3: restore the dump (into the new wiki) with the maintenance tools. Supposing that you used step-1 to export and preserved everything in a zip file:
unzip ~/Mediafiles.zip -d /tmp/workingBackupMediaFiles
php maintenance/importImages.php /tmp/workingBackupMediaFiles
rm -r /tmp/workingBackupMediaFiles
php maintenance/update.php
php maintenance/rebuildall.php
That is all. Check by navigating to your new wiki's Special:NewFiles.
The full export or preservation
For exporting ALL images and ALL articles of your old MediaWiki, for a full backup or content preservation, add some procedures to each step:
Step-1: ... see step-1 above... and, to generate the text-content dump from the old wiki:
php maintenance/dumpBackup.php --full | gzip > ~/dumpContent.xml.gz
Note: instead of --full you can use the --current option.
Step-2: ... you need dumpContent.xml.gz and Mediafiles.zip from the old wiki. Suppose both files are in your ~ folder.
Step-3: run in your new Wiki
unzip ~/Mediafiles.zip -d /tmp/workingBackupMediaFiles
gunzip -c ~/dumpContent.xml.gz \
| php maintenance/importDump.php --no-updates \
  --image-base-path=/tmp/workingBackupMediaFiles
rm -r /tmp/workingBackupMediaFiles
php maintenance/update.php
php maintenance/rebuildall.php
That is all. Check also Special:AllPages of the new Wiki.
There is no automatic way to export images the way you export pages; you have to right-click each one and choose "save image". To get the history of the image page, use the Special:Export page.
To import images use the Special:Upload page on your wiki. If you have lots of them, you can use the Import Images script. Note: you generally have to be in the sysop group to upload images.
- Export ALL:
You can get all pages and all images from a MediaWiki web using the [API], even if you are not the owner of the web (of course, only when the owner hasn't disabled this function):
Step 1: Use the API to get all page titles and all image URLs. You can write some code to do it automatically (see the sketch after these steps).
Step 2: Next, use [Special:Export] to export all pages with the titles you got, and use wget to fetch all the image URLs you collected (like this: wget -i img-list.txt).
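A minimal sketch of Step 1 with curl against the standard MediaWiki API (the wiki URL is hypothetical; list=allpages and list=allimages are standard query-list modules):
# all page titles (first 500; follow 'apcontinue' in the response for the rest)
curl "https://wiki.example.org/api.php?action=query&list=allpages&aplimit=500&format=json"
# all image URLs (first 500; follow 'aicontinue' for the rest)
curl "https://wiki.example.org/api.php?action=query&list=allimages&aiprop=url&ailimit=500&format=json"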
- Import ALL:
Step 1: Import pages using [Special:Import]
Step 2: Import images using [Manual:ImportImages.php].
There are a few mass upload tools available.
Commonist - www.djini.de/software/commonist/
Commonplace - commons.wikimedia.org/wiki/Commons:Tools/Commonplace (used to be available, but was deprecated as of Jan. 13, 2010)
Both run on the desktop and can be configured to upload to your local wiki (they are configured for Wikipedia and Wikimedia Commons by default). If you are afraid to edit the content of a .jar file, I suggest you start with Commonplace.
Another useful extension exists for MediaWiki itself.
MultiUpload - http://www.mediawiki.org/wiki/Extension:MultiUpload
This extension allows you to drop images in a folder and load them all at once. It supports annotations for each file if necessary and cleans up the folder once it is done. On the downside, it requires opening a shared folder on the server side.
Hope this helps a bit: http://www.mediawiki.org/wiki/Manual:ImportImages.php
As a committer of MediaWiki-Japi I'd like to point out:
For the use case of pushing pages, including images, from one wiki to another, MediaWiki-Japi now has a command-line mode; see
Issue 49 - Enable commandline interface with page transfer option
Otherwise you can use the MediaWiki API with the language of your choice and use the functions as you find them in PushPages.java, e.g. download and upload.
