Setting Proxy for Electron App

I am using some npm modules that make GET requests behind the scenes to pull data from websites, but there is no option or setting to configure a proxy for those requests. How do I set a proxy for the entire Electron app so that all requests go through it?

Using request:
Use environment variables:
process.env.HTTP_PROXY = 'http://192.168.0.36:3128'
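The request library honors the standard proxy environment variables (HTTP_PROXY, HTTPS_PROXY, NO_PROXY), so setting them before any call is made is enough. A minimal sketch, assuming the modules you depend on use request under the hood and reusing the proxy address above:
// Set these before the requests are made; request reads them per request.
process.env.HTTP_PROXY = 'http://192.168.0.36:3128'
process.env.HTTPS_PROXY = 'http://192.168.0.36:3128'

const request = require('request')
// This call is now routed through the proxy configured above.
request('http://example.com', (err, res, body) => {
  if (err) return console.error(err)
  console.log(res.statusCode)
})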
Using Axios:
Install this package:
npm install https-proxy-agent
Then:
const axios = require('axios');
const HttpsProxyAgent = require('https-proxy-agent');
let config = {}
config.httpsAgent = new HttpsProxyAgent('http://192.168.0.36:3128')
config.url = 'https://example.com'
config.method = 'GET'
axios(config).then(...).catch(...)
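If your own code makes the Axios calls, the agent can also be installed once as a default so every request goes through the proxy. A sketch reusing the same proxy address:
const axios = require('axios');
const HttpsProxyAgent = require('https-proxy-agent');

// Every request made through axios in this process now uses the proxy agent.
axios.defaults.httpsAgent = new HttpsProxyAgent('http://192.168.0.36:3128');
// Disable axios' built-in proxy handling so it does not conflict with the agent.
axios.defaults.proxy = false;

axios.get('https://example.com').then((res) => console.log(res.status));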
Electron app
For the whole app (including resources loaded directly by Chromium, like IMG SRC in HTML), you can use the command line switches supported by Electron:
const { app } = require('electron')
app.commandLine.appendSwitch('proxy-server', '172.17.0.2:3128')
app.on('ready', () => {
  // Your code here
})
See the Electron documentation on supported command line switches.
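Alternatively, recent Electron versions let you set the proxy on the default session once the app is ready. A sketch, assuming this runs in the main process and reusing the proxy address from the switch example:
const { app, session } = require('electron')

app.whenReady().then(async () => {
  // Route all requests made through the default session via the proxy.
  await session.defaultSession.setProxy({ proxyRules: '172.17.0.2:3128' })
  // Create your BrowserWindow here.
})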

Related

What is the easiest way to connect to Memgraph using Node.js?

I want to test the creation of an application that uses Node.js and Memgraph. What is the fastest and easiest way for me to test this type of setup?
The easiest way is to use Express.js. Here are the exact steps:
Create a new directory for your application, e.g. /MyApp, and position yourself in it.
Create a package.json file using npm init.
Install Express.js using npm install express --save and the Bolt driver using npm install neo4j-driver --save in the /MyApp directory. Both packages will be added to the dependencies list.
To make the actual program, create a program.js file with the following code:
const express = require("express");
const app = express();
const port = 3000;
const neo4j = require("neo4j-driver");

app.get("/", async (req, res) => {
  const driver = neo4j.driver("bolt://localhost:7687");
  const session = driver.session();
  try {
    const result = await session.writeTransaction((tx) =>
      tx.run(
        'CREATE (a:Greeting) SET a.message = $message RETURN "Node " + id(a) + ": " + a.message',
        {
          message: "Hello, World!",
        }
      )
    );
    const singleRecord = result.records[0];
    const greeting = singleRecord.get(0);
    console.log(greeting);
    // Send the greeting back so the browser request gets a response.
    res.send(greeting);
  } finally {
    await session.close();
  }
  // The driver is created per request here, so close it once we are done.
  await driver.close();
});

app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`);
});
Once you save the file, you can run your program using node program.js.
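To check that the node was actually stored, you can add a second route that reads the data back. A minimal sketch following the same driver/session pattern (the /greetings route name is just an illustration):
app.get("/greetings", async (req, res) => {
  const driver = neo4j.driver("bolt://localhost:7687");
  const session = driver.session();
  try {
    // Read back every Greeting node created by the handler above.
    const result = await session.readTransaction((tx) =>
      tx.run("MATCH (a:Greeting) RETURN a.message AS message")
    );
    res.json(result.records.map((r) => r.get("message")));
  } finally {
    await session.close();
  }
  await driver.close();
});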
For additional details check the official Memgraph documentation.

Websocket issue preventing bokeh app deployed in heroku from loading in website

I have a Bokeh app deployed on Heroku. I want to embed it in a website, but am failing to do so.
The app is here:
https://ckgsb-final.herokuapp.com/cn_ckgsb
And this is the code that generates the script to use in the website:
from bokeh.embed import server_document
script = server_document("https://ckgsb-final.herokuapp.com/cn_ckgsb")
print(script)
<script id="1014">
(function() {
const xhr = new XMLHttpRequest()
xhr.responseType = 'blob';
xhr.open('GET', "https://ckgsb-final.herokuapp.com/cn_ckgsb/autoload.js?bokeh-autoload-element=1014&bokeh-app-path=/cn_ckgsb&bokeh-absolute-url=https://ckgsb-final.herokuapp.com/cn_ckgsb", true);
xhr.onload = function (event) {
const script = document.createElement('script');
const src = URL.createObjectURL(event.target.response);
script.src = src;
document.body.appendChild(script);
};
xhr.send();
})();
</script>
The Procfile for the Heroku app is:
web: bokeh serve --port=$PORT --allow-websocket-origin=ckgsb-final.herokuapp.com --address=0.0.0.0 --use-xheaders cn_ckgsb.py
I know the problem is with the websocket. I've tried various combinations of the app URL in both the Procfile and the script code, but haven't managed to fix it.
Thanks.
You need to configure an allowed websocket origin for the URL of the embedding site as well. When users navigate to mysite.org and the page there tries to embed the Bokeh app, the HTTP origin received by the Bokeh server will be mysite.org. If the Bokeh server has not been configured to allow that origin, the request will be rejected.
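For example, assuming the embedding page is served from mysite.org (substitute your real domain), the Procfile would pass --allow-websocket-origin once per allowed origin:
web: bokeh serve --port=$PORT --allow-websocket-origin=ckgsb-final.herokuapp.com --allow-websocket-origin=mysite.org --address=0.0.0.0 --use-xheaders cn_ckgsb.py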

Making credentialed requests with Bokeh AjaxDataSource

I have a plot set up to use an AjaxDataSource. This is working pretty well in my local development, and was working as deployed in my Kubernetes cluster. However, after I added HTTPS and Google IAP (Identity-Aware Proxy) to my plotting app, all of the requests to the data-url for my AjaxDataSource are rejected by the Google IAP service.
I have run into this issue in the past with other AJAX requests to Google IAP-protected services, and resolved it by setting {withCredentials: true} in my axios requests. However, I do not have this option while working with Bokeh's AjaxDataSource. How do I get BokehJS to pass the cookies to my service in the AjaxDataSource?
AjaxDataSource can pass headers:
ajax_source.headers = { 'x-my-custom-header': 'some value' }
There isn't any way to set cookies (those would be set on the viewer's browser, which does not seem relevant in this context). Passing credentials with the AJAX requests would require building a custom extension.
Thanks to bigreddot for pointing me in the right direction. I was able to build a custom extension that did what I needed. Here's the source code for that extension:
from bokeh.models import AjaxDataSource
from bokeh.util.compiler import TypeScript

TS_CODE = """
import {AjaxDataSource} from "models/sources";

export class CredentialedAjaxDataSource extends AjaxDataSource {
  prepare_request(): XMLHttpRequest {
    const xhr = new XMLHttpRequest();
    xhr.open(this.method, this.data_url, true);
    xhr.withCredentials = true;
    xhr.setRequestHeader("Content-Type", this.content_type);
    const http_headers = this.http_headers;
    for (const name in http_headers) {
      const value = http_headers[name];
      xhr.setRequestHeader(name, value);
    }
    return xhr;
  }
}
"""

class CredentialedAjaxDataSource(AjaxDataSource):
    __implementation__ = TypeScript(TS_CODE)
Bokeh extensions documentation: https://docs.bokeh.org/en/latest/docs/user_guide/extensions.html

Nuxt Axios Dynamic url

I managed to learn Nuxt by using the following tutorial:
https://scotch.io/tutorials/implementing-authentication-in-nuxtjs-app
In the tutorial, it shows that
axios: {
  baseURL: 'http://127.0.0.1:3000/api'
},
It points to localhost, which is not a problem for my development, but when it comes to deployment, how do I change the URL based on the browser URL?
If the system is used on the LAN, it will be 192.168.8.1:3000/api;
if the system is used from outside, it will be example.com:3000/api.
On the other hand, I am currently using adonuxt (Adonis + Nuxt), and both listen on the same port (3000).
In the future, I might separate them into a server (3333) and a client (3000).
Therefore the API links will be:
localhost:3333/api
192.168.8.1:3333/api
example.com:3333/api
How do I achieve a dynamic API URL based on the browser, and switch the port?
You don't need baseURL in nuxt.config.js.
Create a plugins/axios.js file first (see the Nuxt plugins documentation) and write something like this:
export default function({ $axios }) {
  if (process.client) {
    const protocol = window.location.protocol
    const hostname = window.location.hostname
    const port = 8000
    const url = `${protocol}//${hostname}:${port}`
    $axios.defaults.baseURL = url
  }
}
A late contribution, but this question and its answers were helpful for getting to this more concise approach. I've tested it for localhost and when deploying to a branch URL at Netlify, though only with Chrome on Windows.
In client mode, window.location.origin contains what we need for the baseURL.
// /plugins/axios-host.js
export default function ({ $axios }) {
  if (process.client) {
    $axios.defaults.baseURL = window.location.origin
  }
}
Add the plugin to nuxt.config.js.
// /nuxt.config.js
...
plugins: [
  ...,
  "~/plugins/axios-host.js",
],
...
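With the plugin registered, any call made through $axios picks up the dynamic baseURL. A small usage sketch (the /api/users endpoint is just a placeholder):
// pages/users.vue (script section)
export default {
  async asyncData({ $axios }) {
    // On the client this resolves against window.location.origin,
    // so the same code works on the LAN and on the public domain.
    const users = await $axios.$get('/api/users')
    return { users }
  }
}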
This question is a year and a half old now, but I wanted to answer the second part for anyone who would find it helpful, which is doing it on the server side.
I stored a reference to the server URL that I wanted to call as a cookie so that the server can determine which URL to use as well. I use cookie-universal-nuxt and just do something simple like $cookies.set('api-server', 'some-server'), and then pull the cookie value with $cookies.get('api-server'). Map that cookie value to a URL, and then you can do something like this using an Axios interceptor:
// plugins/axios.js
const axiosPlugin = ({ store, app: { $axios, $cookies } }) => {
  $axios.onRequest((config) => {
    const server = $cookies.get('api-server')
    if (server && server === 'some-server') {
      config.baseURL = 'https://some-server.com'
    }
    return config
  })
}

export default axiosPlugin
Of course you could also store the URL in the cookie itself, but it's probably best to have a whitelist of allowed URLs.
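A sketch of such a whitelist (the keys and URLs are placeholders based on the hosts mentioned in the question):
// Only known keys are ever mapped to a baseURL.
const API_SERVERS = {
  lan: 'http://192.168.8.1:3333',
  public: 'https://example.com:3333'
}

// Inside onRequest:
// const key = $cookies.get('api-server')
// if (key && API_SERVERS[key]) config.baseURL = API_SERVERS[key]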
Don't forget to enable the plugin as well.
// nuxt.config.js
plugins: [
  '~/plugins/axios',
],
This covers both the client side and the server side, since the cookie is "universal".

feathersjs -> socketio https request not working

I have an application made in FeathersJS which I would like to run with HTTPS, and I have gotten that working. I did that by changing the 'index.js' file to look like this:
const fs = require('fs');
const https = require('https');
const app = require('./app');
const port = app.get('port');
const host = app.get('host');
//const server = app.listen(port);
const server = https.createServer({
  key: fs.readFileSync('./certs/aex007.key'),
  cert: fs.readFileSync('./certs/aex007.crt')
}, app).listen(port, function(){
  console.log("Mfp Backend started: https://" + host + ":" + port);
});
As soon as I now go to e.g. 'https://127.0.0.1/a_service_name' in Postman, I get a result after accepting the certificate. When I go to the address in a browser it also gives a result; the certificate indicator is 'red' because it's self-signed.
So my problem is the following. When I go to 'http://127.0.0.1' in a browser, instead of the 'index.html' file I get none of my 'socket' information, only a blank page. I get the following error in the console:
info: (404) Route: /socket.io/?EIO=3&transport=polling&t=LwydYAw -
Page not found
The 'index.html' file I'm using currently contains this:
<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.0.3/socket.io.js"></script>
<script type="text/javascript" src="//cdn.rawgit.com/feathersjs/feathers-client/v1.1.0/dist/feathers.js"></script>
<script type="text/javascript">
var socket = io('https://127.0.0.1:3001');
var client = feathers()
.configure(feathers.hooks())
.configure(feathers.socketio(socket));
var todoService = client.service('/some_service');
todoService.on('created', function(todo) {
alert('created');
console.log('Someone created a todo', todo);
});
</script>
Can someone explain to me what to do to get the alert message?
Edit 2017/09/27
I found on the internet that socket.io can be configured like this:
var https = require('https'),
    fs = require('fs');

var options = {
  key: fs.readFileSync('ssl/server.key'),
  cert: fs.readFileSync('ssl/server.crt'),
  ca: fs.readFileSync('ssl/ca.crt')
};

var app = https.createServer(options);
io = require('socket.io').listen(app); // socket.io server listens to https connections
app.listen(8895, "0.0.0.0");
However, the require of feathers-socketio is in app.js, not index.js. I wonder if I can move that?
As daffl pointed out on the Feathers Slack channel, check out the documentation, which requires feathers-socketio explicitly before calling configure on the app, in addition to the HTTPS portion of the docs. Putting those two together, I would do something like this (untested):
const feathers = require('feathers');
const socketio = require('feathers-socketio');
const fs = require('fs');
const https = require('https');
const app = feathers();
app.configure(socketio());
const opts = {
  key: fs.readFileSync('privatekey.pem'),
  cert: fs.readFileSync('certificate.pem')
};
const server = https.createServer(opts, app).listen(443);
// magic sauce! Socket w/ ssl
app.setup(server);
The structure of your app.js and index.js is totally up to you. You can do all of the above in a single file as shown, or split out the https/fs requires into index.js and configure the app in app.js. I would recommend this approach, because it will allow you to change the (usually smaller) index.js file if you ever decide to use a reverse proxy like nginx to handle SSL instead of Node.
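A sketch of that split, untested and reusing the certificate paths from the question:
// app.js -- configure Feathers, transports and services
const feathers = require('feathers');
const socketio = require('feathers-socketio');

const app = feathers();
app.configure(socketio());
// ... configure services, hooks and middleware here
module.exports = app;

// index.js -- only concerned with how the app is served (HTTP vs HTTPS)
const fs = require('fs');
const https = require('https');
const app = require('./app');

const server = https.createServer({
  key: fs.readFileSync('./certs/aex007.key'),
  cert: fs.readFileSync('./certs/aex007.crt')
}, app).listen(app.get('port'));

// Let feathers-socketio attach its handlers to the HTTPS server.
app.setup(server);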
