Getting %0D%0D at the end of a URL while accessing from Unix - Windows

I am accessing a file kept in my SVN repository from Unix with the wget command:
#!/bin/bash
ANTBUILDFILE=http://l09089r4.tst.poles.com:1808/svn/CommonMDM/trunk/Common/BuildArtifacts/VendorCatalog_Weblogic/build_CustomUI.xml
cd /tmp/install
wget -nc ${ANTBUILDFILE}
But I am getting this output:
--2013-05-16 00:21:51-- http://l09089r4.tst.poles.com:1808/svn/CommonMDM/trunk/Common/BuildArtifacts/VendorCatalog_Weblogic/build_CustomUI.xml%0D%0D
wget: /home/tkmd999/.netrc:3: unknown token "ibm"
Resolving l09089r4.tst.poles.com... 10.8.91.58
Connecting to l09089r4.tst.poles.com|10.8.91.58|:18080... connected.
HTTP request sent, awaiting response... 404 Not Found
2013-05-16 00:21:51 ERROR 404: Not Found.
There is %0D%0D at the end of the URL, which is making it inaccessible.
After getting this error I converted the file referenced in the URL to Unix format as well, and committed my changes to the SVN repository, but I am still getting this error.
Any other ideas I can follow to get rid of this error?
Thanks,
Manish

The %0D you see are most likely remnants of Windows-style CRLF newlines in your shell script - one from the ANTBUILDFILE=... line, one from the wget ... line.
There can be a number of more or less subtle reasons for this, for example the svn:eol-style property:
TortoiseSVN sets svn:eol-style to native by default, trying to follow the newline convention of the client OS.
This can lead to confusion when using network shares accessible by several operating systems or tools that have different expectations on newlines.
If this turns out to be the situation you are experiencing, you can simply remove the svn:eol-style property from the file and commit it with the newline style you want.
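For example, a minimal sketch of checking and fixing the script (the filename build.sh stands in for your actual script):

# Windows CRLF endings show up as a trailing CR; grep for it directly
grep -n $'\r' build.sh

# Strip the carriage returns in place (GNU sed)
sed -i 's/\r$//' build.sh

# Drop the eol-style property and commit the LF-only version
svn propdel svn:eol-style build.sh
svn commit -m 'Normalize line endings to LF' build.sh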

Related

Struggling with file upload path on cURL

I'm a beginner with cURL. I'm really struggling with the path format used to upload a file through cURL.
curl -X POST https://XXXXXXXXX … -F file=@C:\Users\John\Downloads\test.csv
I keep getting the following error message: "curl: (26) Failed to open/read local data from file/application"
Most of the examples provided start with file=@/home/, which is confusing as I don't have such a directory, to my understanding. Also, the examples use "/" instead of "\". Why is that?
Can anyone provide some feedback on how to properly write the path to a file?
Thanks!
Finally solved this! After some trial and error, I figured out that the path's syntax should have used double backslashes, so:
C:\\Users\\John\\Downloads\\test.csv
and not:
C:\Users\John\Downloads\test.csv
Honestly, I have never used this syntax before and don't understand why this is required. Your comments/input would certainly be appreciated.
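A plausible explanation, hedged since the shell was never specified: in several environments (Git Bash, for instance) and in curl's own parsing of quoted -F values, a single \ acts as an escape character and is consumed before the path is opened, so doubling it preserves the separator. Windows also accepts forward slashes, which is why examples often use them. A sketch with a placeholder URL, since the original was redacted:

# Escaped backslashes
curl -X POST https://example.invalid/upload -F "file=@C:\\Users\\John\\Downloads\\test.csv"

# Forward slashes work on Windows too
curl -X POST https://example.invalid/upload -F "file=@C:/Users/John/Downloads/test.csv"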

Apache 403 on windows using AliasMatch and -D

After I packaged Apache to be portable (usable anywhere), I was surprised to get 403 errors on AliasMatch resources but not on regular Location resources.
Update: I found the fix. Check answer.
And so, here is my own response.
I checked the error.log and saw that the path of the AliasMatch was wrong (what a scoop). Note that this path is built from a variable; in fact, all of the / characters we would expect had been removed.
I looked into the origin of the variable, though not right away, because all URIs based on Location were working. Amazing! The fact is, the Location directive seems to convert the ugly Windows \ into / so things keep working, but the AliasMatch directive does not. I forgot to mention that I launched Apache from the command line with httpd and the -D option, passing an ugly Windows path with \ rather than /. So I converted the path to a pretty Unix-style path and everything was back in order.
In a DOS command (quoting the whole assignment keeps the quote characters out of the variable's value):
SET "APACHE_ROOT=%THE_WINDOWS_PATH:\=/%"
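A minimal sketch of the whole sequence as a batch file (the example path and the way Apache is launched here are assumptions, not from the original setup):

REM Original Windows path with backslashes
SET "THE_WINDOWS_PATH=C:\portable\apache"
REM Replace every \ with / before handing the path to Apache
SET "APACHE_ROOT=%THE_WINDOWS_PATH:\=/%"
REM APACHE_ROOT now holds C:/portable/apache
httpd -f "%APACHE_ROOT%/conf/httpd.conf"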

WGET saves with wrong file and extension name possibly due to BASH

I've tried suggestions from a few forum threads already.
However, I keep getting the same failure as a result.
To replicate the problem :
Here is a URL leading to a forum thread with 6 pages.
http://forex.kbpauk.ru/showflat.php/Cat/0/Number/107623/page/0/fpart/1/vc/1
What I typed into the console was:
wget "http://forex.kbpauk.ru/showflat.php/Cat/0/Number/107623/page/0/fpart/{1..6}/vc/1"
And here is what I got:
--2018-06-14 10:44:17-- http://forex.kbpauk.ru/showflat.php/Cat/0/Number/107623/page/0/fpart/%7B1..6%7D/vc/1
Resolving forex.kbpauk.ru (forex.kbpauk.ru)... 185.68.152.1
Connecting to forex.kbpauk.ru (forex.kbpauk.ru)|185.68.152.1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: '1'
1 [ <=> ] 19.50K 58.7KB/s in 0.3s
2018-06-14 10:44:17 (58.7 KB/s) - '1' saved [19970]
The file was saved simply as "1", with no extension it seems.
My expectation was that the file would be saved with an .html extension, because it's a webpage.
I'm trying to get wget to work, but if it's possible to do what I want with curl then I would also accept that as an answer.
Well, there are a couple of issues with what you're trying to do.
The double quotes around your URL actually prevent Bash brace expansion, so you're not really downloading 6 files, but a single URL with "{1..6}" in it. You probably want to drop the quotes around the URL to allow Bash to expand it into 6 different arguments.
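A quick way to see the difference is to echo the URL first (echo prints exactly the arguments wget would receive):

# Quoted: one argument, braces passed through literally
echo "http://forex.kbpauk.ru/showflat.php/Cat/0/Number/107623/page/0/fpart/{1..6}/vc/1"

# Unquoted: Bash expands the braces into six separate URLs
echo http://forex.kbpauk.ru/showflat.php/Cat/0/Number/107623/page/0/fpart/{1..6}/vc/1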
I notice that all of the pages are called "1", irrespective of their actual page numbers. This means the server is always serving a page with the same name, making it very hard for Wget or any other tool to actually make a copy of the webpage.
The real way to create a mirror of the forum would be to use this command line:
$ wget -m --no-parent -k --adjust-extension http://forex.kbpauk.ru/showflat.php/Cat/0/Number/107623/page/0/fpart/1
Let me explain what this command does:
-m --mirror activates the mirror mode (recursion)
--no-parent asks Wget to not go above the directory it starts from
-k --convert-links will edit the HTML pages you download so that the links in them will point to the other local pages you have also downloaded. This allows you to browse the forum pages locally without needing to be online
--adjust-extension This is the option you were originally looking for. It will cause Wget to save the file with a .html extension if it downloads a text/html file but the server did not provide an extension.
Simply use the -O switch to specify the output filename; otherwise wget just defaults to a name taken from the URL, which in your case is 1.
So if you wanted to call your file what-i-want-to-call-it.html then you would do:
wget "http://forex.kbpauk.ru/showflat.php/Cat/0/Number/107623/page/0/fpart/{1..6}/vc/1" -O what-i-want-to-call-it.html
If you type wget --help into the console you will get a full list of the options that wget provides.
To verify it has worked, type the following to output the downloaded file:
cat what-i-want-to-call-it.html
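If the goal is one .html file per page, here is a sketch that combines the brace expansion from the question with explicit output names (the page-$i.html naming is my own choice, not something from the thread):

# Download pages 1-6, saving each one under its own .html name
for i in {1..6}; do
  wget -O "page-$i.html" "http://forex.kbpauk.ru/showflat.php/Cat/0/Number/107623/page/0/fpart/$i/vc/1"
done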

Reading valid rsyslog configuration from /etc/rsyslog.d/project.conf throws syntax errors?

I have a few lines of configuration that I need in my rsyslog:
if $programname == 'project' then /var/log/file.log
When added to the end of the main rsyslog configuration file, /etc/rsyslog.conf, this configuration appears to be valid and functional.
However, when using the rsyslog.d directory I get a syntax error.
error during parsing file /etc/rsyslog.d/project.conf, on or before line 2: syntax error on token '==' [v8.32.0 try http://www.rsyslog.com/e/2207 ]
Is there anything in the main config that has to be parsed in advance, or is this a bug that should be reported to Fedora 27 developers?
As the rsyslog author, I would assume that there is some include right in front of it that somehow renders your (valid) construct invalid. Red Hat unfortunately tends to stick to the obsolete legacy format, and things like this can easily happen when it is used (after all, this is why we obsoleted it).
To hunt this down, I would check the config include that comes immediately in front of your own. If the includes are pulled in via wildcards, the include order is sorted by filename.
Sorry, it was my bad. My rsyslog config file was rewritten by my installer bash script, which interpreted the $ sign as a variable inside the string. I should have double-checked the correctness of my config file.
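For reference, a minimal sketch of this kind of quoting bug (the installer snippet is an assumption, not the actual script):

# Unquoted heredoc delimiter: the shell expands $programname to an empty
# string, writing the broken line "if  == 'project' then /var/log/file.log"
cat > /etc/rsyslog.d/project.conf <<EOF
if $programname == 'project' then /var/log/file.log
EOF

# Quoted delimiter: no expansion, the line is written exactly as intended
cat > /etc/rsyslog.d/project.conf <<'EOF'
if $programname == 'project' then /var/log/file.log
EOF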

How do I get quicklisp to load rfc2388 in slime?

I'm trying to load hunchentoot via quicklisp in slime, and getting the following error:
READ error during COMPILE-FILE:
:ASCII stream decoding error on
#<SB-SYS:FD-STREAM
for "file [redacted]/dists/quicklisp/software/rfc2388-20120107-http/rfc2388.asd"
{100607B723}>:
the octet sequence #(196) cannot be decoded.
(in form starting at line: 29, column: 29,
file-position: 1615)
[Condition of type ASDF:LOAD-SYSTEM-DEFINITION-ERROR]
I get this when trying to run either:
(ql:quickload "hunchentoot")
Or simply:
(ql:quickload "rfc2388")
It seems that others are getting this too. I found one hint at a possible answer, saying:
The system file is encoded as UTF-8.
I'm not sure how to configure things so that SBCL on Windows starts with
UTF-8 as its default encoding for loading sources, but that's what you
need to do.
From there, I've tried (based on e.g. [this]) adding the following to my emacs config:
(set-language-environment "UTF-8")
(setq slime-lisp-implementations
      '((sbcl ("/opt/local/bin/sbcl") :coding-system utf-8-unix)))
(setq slime-net-coding-system 'utf-8-unix)
But... I still get the same error, even after completely re-starting emacs, to make sure I had a fresh Slime that was reading the above config.
So, what am I missing, and/or otherwise how can I get this to load?
Thanks in advance! (More thanks to come for a successful answer. ;)
Have you checked your locale settings? Emacs configuration only tells it what coding systems to set for communication between SLIME and SWANK.
You can check for locale settings with /usr/bin/locale, for example:
navi ~ » locale
LANG=pl_PL.UTF-8
LC_CTYPE=pl_PL.UTF-8
LC_NUMERIC=pl_PL.UTF-8
LC_TIME=pl_PL.UTF-8
LC_COLLATE="pl_PL.UTF-8"
LC_MONETARY=pl_PL.UTF-8
LC_MESSAGES=C
LC_PAPER=pl_PL.UTF-8
LC_NAME="pl_PL.UTF-8"
LC_ADDRESS="pl_PL.UTF-8"
LC_TELEPHONE="pl_PL.UTF-8"
LC_MEASUREMENT=pl_PL.UTF-8
LC_IDENTIFICATION=pl_PL.UTF-8
LC_ALL=
navi ~ »
Mine is set up for UTF-8 everywhere, as you can see, except for messages, which use the 'C' locale.
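If your locale comes back as C or POSIX, here is a hedged sketch of the fix (the exact locale name depends on what your system has installed; list the available ones with locale -a):

# Export a UTF-8 locale, then start Emacs/SLIME from this shell so the
# SBCL it launches inherits the setting
export LC_CTYPE=en_US.UTF-8
emacs &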
Try this:
Change into the .../quicklisp/dists/quicklisp/software/rfc2388* directory and load rfc2388.asd into a text editor.
Move down to the :author parameter of the defsystem form. Replace the author's name with the ASCII name given at the top of the file.
Save the file using ASCII encoding.
Of course, when a new version of the library is published, the workaround is lost. Alternatively, store the modified project in local-projects.
With the original UTF-8 encoding still in effect, the debugger should present an INPUT-REPLACEMENT option to replace offending input characters with a replacement string. Choose that option, type "?" or "x" or any string you like at the prompt, then press ENTER. The load then completes. Of course, that is not something you would want to do every time.
So the best idea is probably to send an email to the author and ask them to provide an ASCII version for Quicklisp.
There should be a .cache directory in your HOME that contains all the fasl files. Sometimes removing those old fasl files seems to work for me when something goes wrong with compilation.
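By default ASDF caches compiled fasls under ~/.cache/common-lisp, so clearing it forces a clean recompile (this path is the usual ASDF default; yours may differ):

# Remove cached fasls; they are rebuilt on the next quickload
rm -rf ~/.cache/common-lisp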
