I'm trying to run a collection from a text file hosted on a free web host, and I can run it easily with:
newman run %https://mysite.txt%
Now I'm trying to locally capture the sent requests from the newman run so I'm adding this command in CMD:
set HTTP_PROXY=127.0.0.1:62248
This should allow my app to record requests from Newman when I use it like this:
newman run %https://mysite.txt% --env-var HTTP_PROXY --insecure
(it works perfectly with locally hosted files)
However, since the txt file is served over HTTPS, I'm getting the following error:
error: collection could not be loaded
unable to fetch data from url "https://mysite.txt"
tunneling socket could not be established, cause=getaddrinfo ENOTFOUND 62248
Can I run a Newman collection from a secure HTTPS web address while recording it locally, or should I first download and save the file locally and then run it from there?
My current temporary solution (hopefully I'll find a better one) is to download the file from wherever it's stored and run Newman locally with the downloaded file.
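A hedged note on the error itself: "getaddrinfo ENOTFOUND 62248" suggests Node treated the port number as a hostname, which commonly happens when the proxy URL has no scheme. Since the collection file is fetched over HTTPS, HTTPS_PROXY may need to be set as well. A sketch of what might work in CMD (untested; the address and port are the ones from the question):
set HTTP_PROXY=http://127.0.0.1:62248
set HTTPS_PROXY=http://127.0.0.1:62248
newman run %https://mysite.txt% --insecure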
I have a Jenkins job that clones the repository from GitHub and then runs an Execute Shell step that invokes Newman with the collection and environment JSON files.
After that, some of the services in the collection work fine.
But the file upload-related services are failing.
The error displayed is: File load error: <"file name">, no such file.
Can you help me resolve this issue?
All of my services are automated and work fine in Postman and in Newman.
But now I'm trying to run my collection on Jenkins, cloning the repository from GitHub and invoking Newman from the shell script.
Expectation:
In the collection, I have 15 services that upload different data files, and some of the later services depend on those responses.
So I want to upload the files while the Jenkins job runs.
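One likely cause, hedged: Newman resolves form-data file paths relative to the directory it runs from, so paths that worked on your machine may not exist inside the Jenkins workspace. Assuming the data files are committed to the repository under a data/ folder (a hypothetical layout; collection.json and environment.json are placeholder names), an Execute Shell step along these lines may work; --working-dir tells recent Newman versions where to resolve relative upload paths:
cd "$WORKSPACE"
newman run collection.json -e environment.json --working-dir "$WORKSPACE/data"
If your Newman version lacks --working-dir, an alternative is to edit the collection so each upload's src path is relative to the directory Newman is started from.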
I'm trying to get ngrok's dynamically generated public URL programmatically, using bash to set globals and env variables.
Below is what I have so far.
1. Run ngrok http {server url}
2. Inside your host root directory, run:
curl http://localhost:4040/api/tunnels > ~/ngrok_tunnels.json
3. Install jq: brew install [jq](https://stedolan.github.io/jq/) (lets you query JSON from bash).
4. Afterwards you just need to access this JSON following jq's docs. Inside the project root that calls the dev URL (tunnels[0] is the http tunnel, tunnels[1] is the https tunnel):
echo "NGROK_TUNNEL=$(jq -r '.tunnels[1].public_url' ~/ngrok_tunnels.json)" >> .env
Set all of your dev URLs to process.env.NGROK_TUNNEL.
So this works, but is it the "best way" to do this?
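For comparison, a more compact variant (a sketch under the same assumptions, i.e. ngrok's local inspection API on port 4040) skips the intermediate file and selects the tunnel by protocol rather than by array index, since the order of the tunnels array isn't guaranteed:
echo "NGROK_TUNNEL=$(curl -s http://localhost:4040/api/tunnels | jq -r '.tunnels[] | select(.proto == "https") | .public_url')" >> .env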
For people who want to get a URL through ngrok using Python, there is the pyngrok library:
from pyngrok import ngrok

# Open an HTTP tunnel on the default port 80
# <NgrokTunnel: "http://<public_sub>.ngrok.io" -> "http://localhost:80">
http_tunnel = ngrok.connect()

# Open an SSH tunnel
# <NgrokTunnel: "tcp://0.tcp.ngrok.io:12345" -> "localhost:22">
ssh_tunnel = ngrok.connect(22, "tcp")
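If you only need the URL from a shell, a one-liner sketch (assuming pyngrok is installed, and a recent version in which connect() returns an NgrokTunnel object rather than a plain string):
python -c "from pyngrok import ngrok; print(ngrok.connect().public_url)"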
It is also possible to do some things directly via the ngrok API. I didn't find an option to create a tunnel, but once a tunnel has been created you can restart or update it:
https://ngrok.com/docs/api#api-tunnel-sessions-list
The short answer is yes.
You can upgrade to a paid plan and use the --subdomain argument to get the same ngrok URL every time. The next price level up includes white labeling, where you can use your own custom domain as well.
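For example (assuming a paid plan; ngrok v2 syntax, with myapp as a placeholder subdomain):
ngrok http --subdomain=myapp 80
This serves https://myapp.ngrok.io on every run.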
I need to download the following file to my remote computer using the command line:
download link
The point is that if I use wget or curl, I just get an HTML document. But if I enter this address in my browser (on my laptop), it simply starts downloading.
Since the only way to access my remote machine is through the command line, how can I download the file directly on that machine?
Thanks
Assuming that you are using a Linux terminal.
You can use a command line browser like Lynx to click on links and download files.
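For instance (the URL is a placeholder), open the page in Lynx, navigate to the link, and press 'd' to download it:
lynx https://example.com/download-page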
The link you provided isn't a normal file link: it sends the filename as a GET parameter, and the server responds to that request with another page containing a form. So wget and cURL will not work.
That website is likely tracking your session and checking that you've submitted the form and confirmed you're not a robot.
Try a different approach: download the file on your local machine, then copy it to the remote host via scp:
scp /localpath/to/file username@remotehost.com:/path/to/destination
Alternatively, you may export cookies from your local machine to the remote one and then pass them to wget with the '--load-cookies file' option, but I can't guarantee it will work 100% if the site also ties the session ID to an IP address.
Here's Firefox extension for exporting cookies:
https://addons.mozilla.org/en-US/firefox/addon/export-cookies/
Once you have the cookies.txt file, just scp it to the remote machine and run wget with the '--load-cookies file' option.
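Putting those steps together, a sketch of the whole cookie workflow (the hostname, paths, and download URL are placeholders):
scp ~/cookies.txt username@remotehost.com:~/
ssh username@remotehost.com
wget --load-cookies ~/cookies.txt "https://example.com/download?file=corpus.zip"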
One of the authors of the corpus here.
As pointed out by a friend of mine, this tool solves all the problems.
https://addons.mozilla.org/en-GB/firefox/addon/cliget/
After installation, you just click the download link and copy the generated command to the remote machine. I just tried it; it works perfectly. We should put that info on the download page.
I have Bitnami's Parse Server set up on Azure.
I'm logging some info from cloud code using console.log and console.error. When using hosted Parse, these logs were displayed in the Info & Error Logs section of the Dashboard. Any idea where the logs go now?
The issue is not specific to Bitnami's distribution; I also tested on a local machine with parse-server-example and Parse Dashboard and got the same result (no logs).
I use AWS, but you can see the logs by downloading them or by running the server on localhost: cd into your project folder, run npm start in the terminal, and switch your Parse Server URL to http://localhost:1337/parse.
You can manually download them through the Azure CLI.
Take a look here for installation: https://azure.microsoft.com/en-us/documentation/articles/xplat-cli-install/
I used npm: npm install azure-cli -g
Open up a terminal and type: azure site log download webappname
This will save the logs for the web app named 'webappname' to a file named diagnostics.zip in the current directory.
Unzip it and open the folder diagnostics -> LogFiles -> Application.
The text file with -stderr- in its name contains what you log with console.error() in your cloud code.
The text file with -stdout- in its name contains what you log with console.log() in your cloud code.
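Condensed into commands, the retrieval looks roughly like this (a sketch; webappname is the placeholder app name, and it assumes unzip is available on your machine):
azure site log download webappname
unzip diagnostics.zip -d diagnostics
ls diagnostics/LogFiles/Application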
This is a known issue on Bitnami Parse. We are working on fixing it for the next release.
You have to log in to your server via SSH and modify the line below in the /opt/bitnami/apps/parse/htdocs/server.js file:
From:
cloud: "./node_modules/parse-server/lib/cloud-code/Parse.Cloud.js",
To:
cloud: "./cloud/main.js",
You have to include the path to the ./cloud/main.js you previously created (assuming you created it in /opt/bitnami/apps/parse/htdocs/).
Remember to restart the server after applying those changes by running:
sudo /opt/bitnami/ctlscript.sh restart
I know a lot of 'recent' JS tech requires a server to work, but is there a way to run a simple Aurelia hello world without installing a server: just opening index.html and seeing my hello world app in the browser? That works for Angular 1.x and many other JavaScript libraries.
Is the System.import mechanism going to force me to use a server, or is there a workaround to read local files? I tried the usual hacks, but they didn't help; I still get Error: [Exception... "File error: Unrecognized path" nsresult: "0x80520001 (NS_ERROR_FILE_UNRECOGNIZED_PATH)"], and the path shown in the error (not pasted here) matches my local path.
The Aurelia starter pack recommends using Firefox to accomplish this goal if you are using the ES2016 starter kit. Firefox is the only browser that supports the use case you are asking about.
For any other browser, you will need to run a server. I recommend the extremely simple http-server package that runs on Node.js.
From within your project directory type the following two commands:
npm install -g http-server
http-server
Then open your browser and navigate to http://localhost:8080 (8080 is http-server's default port; it can be changed with the -p command-line argument).
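For example, to serve the same directory on a different port (3000 is an arbitrary choice):
http-server -p 3000
Then browse to http://localhost:3000 instead.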