w3m: fake JavaScript and user-agent - shell

I use w3m to look up words in the Spanish dictionary (dle.rae.es), so I'm using a bash script with this line:
w3m "https://dle.rae.es/$1"
The script is called defes. For example, to look up the meaning of "casa" I type defes casa and view the result in my terminal.
However, I'm getting this error:
Please enable cookies.
Please wait...
We are checking your browser... dle.rae.es
Please stand by, while we are checking your browser...
Redirecting...
Please turn JavaScript on and reload the page.
Please enable Cookies and reload the page.
Why do I have to complete a CAPTCHA?
Completing the CAPTCHA proves you are a human and gives you temporary access to the web property.
What can I do to prevent this in the future?
If you are on a personal connection, like at home, you can run an anti-virus scan on your device to make sure it is not
infected with malware.
If you are at an office or shared network, you can ask the network administrator to run a scan across the network looking
for misconfigured or infected devices.
Cloudflare Ray ID: 69ffcac51e4a668f • Your IP: XX.YYY.ZZZ.NNN • Performance & security by Cloudflare
I tried something like this:
w3m -header 'User-Agent: blah' ...
and tested it with a lot of different user agents. I'm also using the -cookie flag to try to get rid of the cookies message.
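For reference, the whole script currently looks roughly like this (the user-agent string below is just one example of the many I tried):

#!/bin/bash
# defes: look up "$1" in dle.rae.es, enabling cookies and spoofing a
# browser user agent -- this still lands on the Cloudflare check page.
w3m -cookie \
    -header 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0' \
    "https://dle.rae.es/$1"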
What can I do?

Related

How do I stop iisnode on Windows Server 2019 from caching my javascript files?

I have a Node.js application running under iisnode on IIS 10 on Windows Server 2019.
At one of these layers, there seems to be some caching of JavaScript files. For example, in my app.js file, I had this:
fs.appendfilesync('log.txt', 'CORS set up.\n');
...which was giving me the error:
appendfilesync is not a function.
I realized I had a typo: it should be camel cased. So I changed it to:
fs.appendFileSync('log.txt', 'CORS set up.\n');
But it kept giving me the same error, even specifying the exact same line and column in the file.
I know the caching is occurring on the server, because the error is logged in iisnode's logs, and because if I leave it for a day and try again the next day, the error no longer occurs.
It's extremely frustrating when I'm trying to fix things on the server and I can't test my fix because the cache stubbornly won't update.
How can I force iisnode, IIS, or Windows Server 2019 (whichever one is doing the caching, if not more than one) to clear the cache or to stop caching?
Thank you.
Please make sure you have assigned the IIS_IUSRS and IUSR accounts full control permission on the log.txt file.
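From an elevated command prompt that could look something like this (the path is just a placeholder for wherever your app writes log.txt):

icacls "C:\inetpub\wwwroot\myapp\log.txt" /grant "IIS_IUSRS:(F)"
icacls "C:\inetpub\wwwroot\myapp\log.txt" /grant "IUSR:(F)"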
To disable caching in IIS you could follow the steps below (a web.config equivalent is sketched after the list):
1) Output caching:
Select your site in IIS.
Double-click the Output Caching feature in the middle pane.
Click Edit Feature Settings in the Actions pane.
Uncheck Enable cache and Enable kernel cache.
2) clientCache:
Open Internet Information Services (IIS) Manager and select your site.
In the Home pane, double-click HTTP Response Headers, then click Set Common Headers in the Actions pane.
In the Set Common HTTP Response Headers dialog box, check the box to expire Web content, select the option to expire after a specific interval or at a specific time, and then click OK.
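If you prefer to keep this in configuration rather than clicking through the UI, the equivalent web.config would look roughly like this (a sketch; the two elements correspond to the two steps above):

<configuration>
  <system.webServer>
    <!-- 1) turn off IIS output caching (user mode and kernel mode) -->
    <caching enabled="false" enableKernelCache="false" />
    <!-- 2) same effect as the "Expire Web content" common header: tell
         clients not to cache (or use UseMaxAge/UseExpires instead of
         DisableCache for an interval or a fixed time) -->
    <staticContent>
      <clientCache cacheControlMode="DisableCache" />
    </staticContent>
  </system.webServer>
</configuration>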

Google Drive API Console: Error saving Drive UI integration page

I have a web app in production that interacts with Google Drive through the Google Drive API.
I need to change some settings in the Drive UI integration, but I can't save them.
When I save the Drive UI integration page, I receive this error:
There's a problem at our end.
Please try again. If the problem persists, please let us know using
the "Send feedback" link below. Thanks!
(Looking at the Network console, there is an Internal Server Error on a POST call.)
I have been trying to send feedback for months: nobody answers and the bug is still there.
I also tried creating another project: I can save the first time, but then the bug returns.
What can I do? Does anyone else have the same problem?
Is there a way to get a reply from Google? Is there some workaround?
Thank you.
I think the problem must be the Client ID.
Before adding the Client ID, go to Credentials -> OAuth 2.0 Client IDs
and edit your Client ID. After that, add your production site URL to Authorized JavaScript origins and Authorized redirect URIs.
Then enter your Client ID in the Drive UI integration page.
While trying to get the Drive UI configured myself, I noticed a couple of errors (which don't come with any specific error messages):
When adding an Open URL, it has to be a valid domain. For instance, I tried to test it with localhost, to no avail; something like https://devbox.app.com worked, but something like https://localhost:8888 did not. Even though https://localhost is a valid JavaScript origin in the client_id configuration (at least for the app I am working on, not sure about other apps), localhost doesn't work as an Open URL.
When adding the MIME types, they need to be in the format */* and can include custom MIME types like application/custom+xml and application/custom-name+json. I'm not sure about other custom types that are not in a particular format like XML or JSON, and not sure about wildcards.
When adding file extensions, do not include the '.', just the name of the extension.
The app icon only failed to upload when the image wasn't the exact dimensions; I actually ended up editing some icons in Photoshop to change the pixel dimensions as a quick workaround during development.
That worked for me to get it to save, and I tested it with a file that had a custom MIME type (application/custom-name+xml specifically) and a custom file extension!

Curl or Lynx scripting with Chrome Cookie

Just looking for someone to point me in the right direction. I need to script an interaction with a site that uses a "trust this device" cookie and a login portal. I found the cookie in Chrome, but I'm not sure what to do next. This will be hosted on a CentOS 7 system.
After authenticating to the login portal, I need to access another page using the "trust this device" cookie and the session cookie so I can download files. Manually downloading the files every day gets tedious, and the owner of the site does not want to use SFTP.
Update 1:
There was some confusion in my request (I could have made it clearer): I am NOT looking for someone to write code for me. This is more of a sanity check as I learn how this process works. Please simply point me in the right direction as far as tools and general procedure go.
Update 2:
Using the "Copy as cURL" option found in most web browsers, I was able to get the correct header information needed for authenticating.
Instead of
curl -b "xxx=xxx"
I needed
curl -H "Cookie: XXXX="%"2Fwpsnew; xxx=xxx"
After adding the -c switch, I can now save the session cookie. Further testing is needed, but at least there is progress.
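Putting it together, the flow I'm aiming for looks roughly like this (URLs, cookie names, and form fields are placeholders, not the real site's):

#!/bin/bash
JAR=cookies.txt
TRUST='TRUSTED_DEVICE=xxxxxxxx'   # the "trust this device" cookie copied from Chrome

# 1) Authenticate to the portal, presenting the trusted-device cookie and
#    writing whatever session cookie comes back into the jar.
curl -b "$TRUST" -c "$JAR" \
     -d 'username=me' -d 'password=secret' \
     'https://portal.example.com/login'

# 2) Reuse the saved session cookie to fetch the file I currently download
#    by hand. If the trusted-device cookie is not re-issued at login, it
#    has to be passed again by hand, as in the -H form above.
curl -b "$JAR" -c "$JAR" \
     -o report.csv \
     'https://portal.example.com/files/report.csv'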
EDIT
Using the Chrome feature for copying curl commands from the history (this is found in Firefox as well), I was able to partially reproduce the results. However, in my case I was not able to log in, as the site I was working with uses additional JS that modifies the cookies.
This initial question can be closed; I will open a new post for the more specific parts of my project.

Allowing Cross domain ajax calls from firefox

I want to change the settings of Firefox to allow it to make cross-domain AJAX calls. Because of its security model, Firefox doesn't allow AJAX calls to a different domain; I know that calls within the same domain are allowed. I have the code given below, which works fine in Safari, but in Firefox it doesn't display the results: since the code is on my local machine, the call to the csce server is not allowed and returns an error. I know it would start working if I uploaded the code to the csce server, but I want to run it from my machine. Can anyone help me resolve this? I have spent the past couple of days just searching for a solution.
Kindly suggest how to achieve this, or should I go with an older version of Firefox?
I googled and set the browser parameters in the config file as specified on this site, but it still doesn't work:
http://code.google.com/p/httpfox/issues/detail?id=20
Maybe you could use Privoxy and tell it to inject something like "Access-Control-Allow-Origin: *" into the server response.
To do this, you would have to go into the file user.filter (create it if it doesn't exist) in Privoxy's configuration directory and insert something like this:
SERVER-HEADER-FILTER: allow-crossdomain
s|Server: .*|Access-Control-Allow-Origin: *|
Instead of Server, you can also use any other header that's always present and that you don't need.
And add this to user.action:
{+server-header-filter{allow-crossdomain}}
csce.unl.edu
Note: I didn't test it.
https://developer.mozilla.org/En/HTTP_access_control
http://config.privoxy.org/user-manual/
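One quick way to check whether the filter is actually being applied (assuming Privoxy is listening on its default 127.0.0.1:8118) is to send a request through it and look for the injected header:

curl -s -D - -o /dev/null -x http://127.0.0.1:8118 http://csce.unl.edu/ | grep -i 'access-control-allow-origin'

For the AJAX calls themselves, Firefox would of course also have to be pointed at the same proxy in its connection settings.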
This appears to enable XSS from file:// pages in Firefox 4, although it prompts you, so it might not be suitable for more than simple test pages:
netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");

MOSS search crawl fails with "Access is denied ..."

Recently the search crawler stopped working on my MOSS installation. The message in the crawl log is
Access is denied. Check that the Default Content Access Account has access to this content, or add a crawl rule to crawl this content. (The item was deleted because it was either not found or the crawler was denied access to it.)
The Default Content Access Account is an admin on the site collection that I am trying to crawl.
Almost every result for this error on Google tells me to add the DisableLoopbackCheck registry key with a value of 1. I have done this and rebooted, and the error continues.
The "Do not allow Basic Authentication" checkbox in my crawl rule screen is unchecked.
Is there anything else that could be causing this error? Something with file system or database permissions maybe?
Edit: All signs seem to indicate that DisableLoopbackCheck should fix this, but it doesn't seem to work. Could I be doing something wrong when I enable it?
I'm doing it in My Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa, where I create a new DWORD value called DisableLoopbackCheck and give it the hex value 1.
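For reference, the command-line equivalent of what I'm doing in regedit (run from an elevated prompt) would be something like:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v DisableLoopbackCheck /t REG_DWORD /d 1 /f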
It turned out not to be related to DisableLoopbackCheck. The problem was that the search was accessing the site through its external URL. Supposedly you are not able to access a site from within the server using the same URL that you use to reach it from the outside, at least in pre-SP1 MOSS, yet I had somehow been doing exactly that for about two years. MS Support tells me they don't quite understand how it was ever working. So it looks like I ran into an issue that should have been manifesting all along; I'm not sure what caused it to appear suddenly, maybe some routine patching of the server. The solution was to extend the web application so it was accessible internally through the machine name, and then point the crawler at that.
