I have an ASP.NET MVC3 web app using XSockets. It works fine locally, but it doesn't work on my intranet.
I've configured the XSockets Windows service and it runs fine.
I've copied my "plugins" (DLLs and their dependencies) to the right directory and they load fine.
The problem is that when I try to access the application over the intranet, the connection always reports closed.
Do I need to point to a specific IP address/server name?
My JavaScript code, which runs fine on localhost:
var url = "ws://127.0.0.1:4507/";
var controller = "Chat";
var mensajes = $('#messages');
var mensaje = $('#message');
var ws = new XSockets.WebSocket(url + controller);

function send() {
    if (mensaje.val() != '') {
        ws.trigger('sendall', { message: mensaje.val() });
        mensaje.attr('value', '');
    }
}

$(function () {
    ws.bind(XSockets.Events.open, function () {
        console.log("opened");
    });
    ws.bind(XSockets.Events.close, function () {
        console.log("closed");
    });
    ws.bind(XSockets.Events.onError, function (err) {
        console.log("error", err);
    });
    ws.bind('sendall', function (mensaje) {
        console.log(mensaje);
        mensajes.prepend($('<div>').text(mensaje));
    });
    mensaje.on('keyup', function (e) {
        if (e.which == 13 || e.keyCode == 13) {
            e.preventDefault();
            send();
        }
    });
    $('#publish').click(function () {
        send();
    });
});
Thank you in advance.
First of all, you need to configure your server so that its URI points to your computer's address* (not localhost, not 127.0.0.1!). Then this:
var url = "ws://127.0.0.1:4507/";
needs to match that address exactly (it can be a domain name, but to start with it's easier to use the IP).
This:
var ws = new XSockets.WebSocket(url + controller);
needs to look like this:
var ws = new XSockets.WebSocket("ws://my.ip.add.res:myport/myController", myController, null); //null can contain an array of parameters that you want to send to a server, but if you are just starting, leave this with null
Also, when I was struggling with my own configuration, I bit my pillow many times because I kept trying different ports and had forgotten about the firewall. So don't forget about it. :)
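As a rough example, on the machine hosting the XSockets server you could open the port with something like the following (4507 is just the port from the question, adjust to whatever you configured):
netsh advfirewall firewall add rule name="XSockets" dir=in action=allow protocol=TCP localport=4507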
I will be around for another 30-40 minutes before I go to sleep; if you run into any problems and respond within that time, I will stay and try to help you, since I went through this too and it hurt like hell. :)
PS. You can also contact the developers at contact#xsockets.net; they are really cool guys and will surely help you out!
*edit: by computer address I mean the computer that is hosting the XSockets server.
For some reason my publisher initializes twice when I create a new session. The second one, however, isn't in the div where it's supposed to be. Also, if you connect to the session you'll see the same thing, so it only shows up for yourself.
I'm trying to find out why it's appearing. Here are some snippets:
var getApiAndToken, initializeSession;

getApiAndToken = function() {
  var apiKey, customer_id, sessionId, token;
  if (gon) {
    apiKey = gon.api_key;
  }
  if (gon) {
    sessionId = gon.session_id;
  }
  if (gon) {
    token = gon.token;
  }
  if (gon) {
    customer_id = gon.customer_id;
  }
  initializeSession();
};

initializeSession = function() {
  var publishStream, session;
  session = OT.initSession(apiKey, sessionId);
  session.connect(token, function(error) {
    if (!error) {
      session.publish(publishStream(true));
      layout();
    } else {
      console.log('There was an error connecting to the session', error.code, error.message);
    }
  });
  $('#audioInputDevices').change(function() {
    publishStream(false);
  });
  $('#videoInputDevices').change(function() {
    publishStream(false);
  });
  return publishStream = function(loadDevices) {
    var publisherOptions;
    publisherOptions = {
      audioSource: $('#audioInputDevices').val() || 0,
      videoSource: $('#videoInputDevices').val() || 0
    };
    OT.initPublisher('publisherContainer', publisherOptions, function(error) {
      if (error) {
        console.log(error);
      } else {
        if (loadDevices) {
          OT.getDevices(function(error, devices) {
            var audioInputDevices, videoInputDevices;
            audioInputDevices = devices.filter(function(element) {
              return element.kind === 'audioInput';
            });
            videoInputDevices = devices.filter(function(element) {
              return element.kind === 'videoInput';
            });
            $.each(audioInputDevices, function() {
              $('#audioInputDevices').append($('<option></option>').val(this['deviceId']).html(this['label']));
            });
            $.each(videoInputDevices, function() {
              $('#videoInputDevices').append($('<option></option>').val(this['deviceId']).html(this['label']));
            });
          });
        }
      }
    });
  };
};
It also asks me for device access twice.
I see two general problems in the code you provided:
The variables apiKey, sessionId, and token inside the getApiAndToken() function are scoped only to that function, and are therefore not visible inside initializeSession(), where you try to use them.
The goal of the publishStream() function is not clear and its use is not consistent. Each time you invoke it (once when the session connects and each time a dropdown value changes) it creates a new Publisher. It also does not return anything, so the expression session.publish(publishStream(true)) is effectively just session.publish(), which results in a new Publisher being appended to the end of the page because no element ID is specified. That last part is why you said it's not in the <div> where it's supposed to be.
It sounds like what you want is a Publisher with a dropdown to select which devices it's using. I created an example of this for you: https://jsbin.com/sujufog/11/edit?html,js,output.
Briefly, here is how it works. It first initializes a dummy publisher so that the browser prompts the user for permission to use the camera and microphone; this is necessary before the available devices can be read. Note that if the page is served over HTTPS, browsers such as Chrome remember the permissions granted on that domain earlier and will not prompt again, so on Chrome the dummy publisher doesn't cause any prompt to be shown for a user who has already run the application. Next, the dummy publisher is thrown away and OT.getDevices() is called to read the available devices and populate the dropdown menus. While this is happening, the session will also have connected, and on every change of the selection in either dropdown the publish() function is called. In that function, if a previous publisher exists it is first removed, then a new publisher is created with the currently selected devices, and that new publisher is passed into session.publish().
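For reference, a condensed sketch of that flow, reusing the element IDs and gon values from your snippet. It is only a starting point, not the exact code from the jsbin: the publish() function and the 'dummyContainer' id are names I made up here, and 'dummyContainer' would be a hidden <div> you add to the page.
var session = OT.initSession(gon.api_key, gon.session_id);
var publisher = null;

// Dummy publisher: its only job is to trigger the camera/microphone permission
// prompt so that OT.getDevices() can report device labels.
var dummy = OT.initPublisher('dummyContainer', {}, function (err) {
  if (err) { return console.log(err); }
  dummy.destroy();  // throw the dummy away once permission has been granted
  OT.getDevices(function (err, devices) {
    if (err) { return console.log(err); }
    devices.forEach(function (d) {
      if (d.kind === 'audioInput') {
        $('#audioInputDevices').append($('<option></option>').val(d.deviceId).html(d.label));
      } else if (d.kind === 'videoInput') {
        $('#videoInputDevices').append($('<option></option>').val(d.deviceId).html(d.label));
      }
    });
  });
});

// (Re)create the publisher with whatever devices are currently selected.
function publish() {
  if (publisher) { publisher.destroy(); }  // remove the previous publisher first
  publisher = OT.initPublisher('publisherContainer', {
    audioSource: $('#audioInputDevices').val() || undefined,  // undefined = default device
    videoSource: $('#videoInputDevices').val() || undefined
  });
  session.publish(publisher);
}

session.connect(gon.token, function (error) {
  if (error) { return console.log(error); }
  publish();
});

$('#audioInputDevices, #videoInputDevices').change(publish);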
I am building the scaffolding for my new Polymer project and am considering unit tests. I think I will be using the karma/jasmine combination. There is an interesting post at http://japhr.blogspot.co.uk/2014/03/polymer-page-objects-and-jasmine-20.html which I understand well enough to get me started, but the key question I will have to address, and haven't found any standard way to do it, is how to mock the ajax calls.
When I was using jasmine standalone on a jQuery Mobile project, I was able to use Jasmine's spyOn ability directly to mock the jQuery.ajax call. Is there something similar for Polymer?
I came across an element <polymer-mock-data>, but there is no real documentation for it, so I couldn't figure out whether it might help.
Instead of importing core-ajax/core-ajax.html, create your own core-ajax element.
<polymer-element name="core-ajax" attributes="response">
  <script>
    Polymer('core-ajax', {
      attached: function() {
        this.response = ['a', 'b', 'c'];
      }
    });
  </script>
</polymer-element>
Obviously, this is just an example; the actual implementation depends on the desired mocking behavior.
This is just one way to solve it; there are many others. I'm interested to hear what you find (in)convenient.
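Concretely, that means swapping the import in the element under test (or in a test-only copy of it); the mocks/ path below is just an example of where you might keep the fake definition:
<!-- in the element under test, swap the real import for the mock -->
<!-- <link rel="import" href="../core-ajax/core-ajax.html"> -->
<link rel="import" href="mocks/core-ajax.html">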
It turns out that Jasmine 2.0 has a jasmine-ajax plugin that mocks the global XMLHttpRequest, which core-ajax uses under the hood, so I can get at the call directly.
It works well: in a beforeEach function at the top of the suite you call jasmine.Ajax.install(), and in the afterEach function you call jasmine.Ajax.uninstall(), and it automatically replaces the XMLHttpRequest.
Timing is also crucial, in that you need to ensure you have mocked the ajax call before the element under test uses it. I achieve that by using a separate function to load the fixture containing the element under test, which is called after jasmine.Ajax.install() has been called. I use a special setup script:
(function(){
  var PolymerTests = {};

  //I am not sure if we can just do this once, or for every test. I am hoping just once
  var script = document.createElement("script");
  script.src = "/base/components/platform/platform.js";
  document.getElementsByTagName("head")[0].appendChild(script);

  var POLYMER_READY = false;
  var container;  //Used to hold fixture

  PolymerTests.loadFixture = function(fixture, done) {
    window.addEventListener('polymer-ready', function(){
      POLYMER_READY = true;
      done();
    });
    container = document.createElement("div");
    container.innerHTML = window.__html__[fixture];
    document.body.appendChild(container);
    if (POLYMER_READY) done();
  };

  //After every test, we remove the fixture
  afterEach(function(){
    document.body.removeChild(container);
  });

  window.PolymerTests = PolymerTests;
})();
The only point to note here is that the fixture files have been loaded by the karma html2js preprocessor, which stores them under window.__html__ (keyed by file path); the code above reads them from there to add the fixture to the test context.
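For reference, the relevant pieces of karma.conf.js would look roughly like this; the paths are illustrative, and karma-html2js-preprocessor needs to be installed:
// karma.conf.js (excerpt) - paths are examples only
module.exports = function (config) {
  config.set({
    files: [
      'test/polymer-setup.js',                                            // the setup script above
      { pattern: 'components/platform/platform.js', included: false },    // served so the setup script can load it
      'client/**/*-fixture.html',                                         // fixtures end up in window.__html__
      'test/**/*-spec.js'
    ],
    preprocessors: {
      'client/**/*-fixture.html': ['html2js']
    }
  });
};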
My test suite looks like this:
describe('<smf-auth>', function(){

  beforeEach(function(done){
    jasmine.Ajax.install();
    PolymerTests.loadFixture('client/smf-auth/smf-auth-fixture.html', done);
  });

  afterEach(function(){
    jasmine.Ajax.uninstall();
  });

  describe("The element authenticates", function(){
    it("Should Make an Ajax Request to the url given in the login Attribute", function(){
      var req = jasmine.Ajax.requests;
      expect(req.mostRecent().url).toBe('/football/auth_json.php');  //Url declared in our fixture
    });
  });
});
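If a test also needs the element to see a response, jasmine-ajax can answer the captured request; roughly like this, with an invented payload purely for illustration:
// inside a test, after the element has fired its request:
jasmine.Ajax.requests.mostRecent().respondWith({
  status: 200,
  contentType: 'application/json',
  responseText: '{"authenticated": true}'   // invented payload, not what smf-auth really returns
});
// ...then assert on whatever <smf-auth> is expected to do with the response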
For this answer, I took an entirely different approach. Inspiration came from Web Component Tester, which includes sinon among its capabilities. sinon includes the ability to call sinon.useFakeXMLHttpRequest to replace the standard xhr object that core-ajax uses and return responses based on that.
That said, I haven't quite got as far as running module tests with it: Web Component Tester runs sinon in the node.js context, so the build of sinon supplied with it can "require" the various sinon components. In a normal browser environment this doesn't work, and I was looking for a way to manually run the app I was developing without a PHP-capable server running.
However, downloading and installing (with Bower) the actual releases from the sinonjs.org web site does provide a completely built sinon that will run in the context of a web server.
So I can include the following scripts in my main index.html file
<!--build:remove -->
<script type="text/javascript" src="/bower_components/sinon-1.14.1/index.js"></script>
<script type="text/javascript" src="/fake/fake.js"></script>
<!--endbuild-->
which is automatically removed by the gulp build scripts, and fake.js then has the following in it:
var PAS = (function (my) {
  'use strict';
  my.Faker = my.Faker || {};

  var getLocation = function(href) {
    var a = document.createElement('a');
    a.href = href;
    return a;
  };

  sinon.FakeXMLHttpRequest.useFilters = true;
  sinon.FakeXMLHttpRequest.addFilter(function(method, url){
    if(method === 'POST' && getLocation(url).pathname.substring(0,7) === '/serve/') {
      return false;
    }
    return true;
  });

  var server = sinon.fakeServer.create();
  server.autoRespond = true;

  my.Faker.addRoute = function(route, params, notfound){
    server.respondWith('POST', '/serve/' + route + '.php', function(request){
      var postParams = JSON.parse(request.requestBody);
      var foundMatch = false;
      var allMatch;
      /*
       * First off, we will work our way through the parameter list seeing if we got a parameter
       * which matches the parameters received from our post. If all components of a parameter match,
       * then we found one.
       */
      for(var i = 0; i < params.length; i++) {
        //check to see if this parameter is in the request
        var p = params[i][0];
        allMatch = true;  //start off optimistic
        for(var cp in p) {
          //see if this parameter was in the request body
          if(typeof postParams[cp] === 'undefined') {
            allMatch = false;
            break;
          }
          if(p[cp] !== postParams[cp]) {
            allMatch = false;
            break;
          }
        }
        if (allMatch) {
          request.respond(200, {'Content-Type': 'application/json'}, JSON.stringify(params[i][1]));
          foundMatch = true;
          break;
        }
      }
      //see if we found a match. If not, then we will have to respond with the not found option
      if (!foundMatch) {
        request.respond(200, {'Content-Type': 'application/json'}, JSON.stringify(notfound));
      }
    });
  };

  return my;
})(PAS || {});
/**********************************************************************
  These are all the routes we have and their responses.
**********************************************************************/
PAS.Faker.addRoute('logon', [
  [{password:'password1',username:'alan'}, {isLoggedOn:true,userID:1,name:'Alan',token:'',keys:['A','M']}],
  [{username:'alan'}, {isLoggedIn:false,userID:1,name:'Alan'}],
  [{password:'password2',username:'babs'}, {isLoggedOn:true,userID:2,name:'Barbara',token:'',keys:['M']}],
  [{username:'babs'}, {isLoggedIn:false,userID:2,name:'Barbara'}]
], {isLoggedOn:false,userID:0,name:''});
The PAS function initialises a sinon fake server and gives tests a way to define cases via the addRoute function. For a given route, it checks the list of possible POST parameter combinations and, as soon as it finds one that matches, issues that response.
In this case it is faking /serve/logon.php for various combinations of username and password. It only checks the parameters actually present in the particular entry.
So if username is "alan" and password is "password1", the first response is made; but if username is "alan" and any other password is supplied, then since the password isn't checked, the second pattern matches and that pattern's response is made.
If none of the patterns match, the last "notfound" parameter is the response that is made.
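As a quick illustration, once index.html has pulled in sinon and fake.js, a plain XHR from the browser console should be answered by the fake server with the first matching pattern:
// quick manual check - the XHR is intercepted by sinon's FakeXMLHttpRequest
var xhr = new XMLHttpRequest();
xhr.open('POST', '/serve/logon.php');
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4) {
    console.log(xhr.responseText);  // should log the response for the first pattern above
  }
};
xhr.send(JSON.stringify({username: 'alan', password: 'password1'}));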
I believe I could use this same technique in my module test fixtures if I wanted to, but there I am more likely to do more specific sinon spying and check actual parameters.
For 0.8, the tests for PolymerElements/iron-ajax show how to do this with sinon.
Since SO doesn't like link-only answers, I've copied their code below. However, I'd highly recommend going to the source linked above, since 0.8 components are currently in a high state of flux.
var jsonResponseHeaders = {
  'Content-Type': 'application/json'
};
var ajax;
var request;
var server;

setup(function () {
  server = sinon.fakeServer.create();
  server.respondWith(
    'GET',
    '/responds_to_get_with_json',
    [
      200,
      jsonResponseHeaders,
      '{"success":true}'
    ]
  );
  server.respondWith(
    'POST',
    '/responds_to_post_with_json',
    [
      200,
      jsonResponseHeaders,
      '{"post_success":true}'
    ]
  );
  ajax = fixture('TrivialGet');
});

teardown(function () {
  server.restore();
});

suite('when making simple GET requests for JSON', function () {
  test('has sane defaults that love you', function () {
    request = ajax.generateRequest();
    server.respond();
    expect(request.response).to.be.ok;
    expect(request.response).to.be.an('object');
    expect(request.response.success).to.be.equal(true);
  });

  test('will be asynchronous by default', function () {
    expect(ajax.toRequestOptions().async).to.be.eql(true);
  });
});
I'm trying to group all my socket.io connections into groups.
I want one group for each Sails.js session.
My first goal is to authenticate all tabs at the same time.
So I tried to do this with onConnect in config/sockets.js, like this:
onConnect: function(session, socket) {
  // By default: do nothing
  // This is a good place to subscribe a new socket to a room, inform other users that
  // someone new has come online, or any other custom socket.io logic
  if (typeof session.socket == 'undefined'){
    session.socket = [];
  }
  session.socket.push(socket.id);
  session.save();
  console.log(session, socket);
},

// This custom onDisconnect function will be run each time a socket disconnects
onDisconnect: function(session, socket) {
  // By default: do nothing
  // This is a good place to broadcast a disconnect message, or any other custom socket.io logic
  if(Array.isArray(session.socket)){
    var i = session.socket.indexOf(socket.id);
    if(i != -1) {
      session.socket.splice(i, 1);
      session.save();
    }
  }
  console.log(session, socket);
},
But I realized that the session doesn't save my modifications.
I tried session.save, but Sails.js doesn't know about req:
Session.set(sessionKey, req.session, function (err) {
I want to access the Sails.js session but I don't know how to do it.
I tried to find a solution, but after 6 hours of searching I think it's time to ask for help!
Thanks, and sorry for my poor English (I'm French).
There appears to be a bug in the implementation of onConnect and onDisconnect in Sails v0.9.x. You can work around it for now by adding the following line before a call to session.save in those methods:
global.req = {}; global.req.session = session;
then changing session.save() to:
session.save(function(){delete global.req;});
That will provide the missing req var as a global, and then delete the global (for safety) after the session is saved.
Note that this issue only affects sessions in the onConnect and onDisconnect methods; inside of controller code session.save should work fine.
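Putting the workaround together with the onConnect from the question, a minimal sketch for v0.9.x would look like this (untested against other versions):
onConnect: function (session, socket) {
  // track this socket id on the session, as in the question
  if (!Array.isArray(session.socket)) {
    session.socket = [];
  }
  session.socket.push(socket.id);

  // workaround: supply the `req` that the session store expects, then clean up
  global.req = {};
  global.req.session = session;
  session.save(function () { delete global.req; });
},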
Thanks for pointing this out!
I started learning AJAX recently and am trying a very simple project which involves capturing some form data and sending it to two servers.
The first server is the one which hosts the website and the server-side PHP handling. This works fine.
The second server is a basic Python HTTP server which handles only the POST requests sent from AJAX. This functionality works, but is a bit weird.
Let me explain.
Here is my AJAX code, which is absolutely straightforward:
function xml_http_post(url, data) {
    var req = false;
    try {
        // Firefox, Opera 8.0+, Safari
        req = new XMLHttpRequest();
    }
    catch (e) {
        // Internet Explorer
        try {
            req = new ActiveXObject("Msxml2.XMLHTTP");
        }
        catch (e) {
            try {
                req = new ActiveXObject("Microsoft.XMLHTTP");
            }
            catch (e) {
                alert("Your browser does not support AJAX!");
                return false;
            }
        }
    }
    req.onreadystatechange = function() {
        if (req.readyState == 4) {
            // callback(req);
        }
    };
    req.open("POST", url, true);
    req.setRequestHeader("Content-type", "text/plain");
    req.send(data);
}
Since I do not intend to send back any response, my callback function on ready-state change is empty.
But when I execute this code (triggered by onclick on a button), the POST doesn't work and the server doesn't seem to receive anything.
The most surprising thing is that if I set a breakpoint at req.open() and then step through manually, it always works. Which makes me guess there is some timing issue that needs to be resolved.
It works fine without breakpoints if the third parameter "async" is set to false, but that is undesirable anyway, so I want to make it work with async = true.
Any help would be greatly appreciated.
Thanks
Shyam
As I figured out, the form page was getting unloaded by a PHP script which was invoked as the action of the form by the first server. This resulted in the JavaScript code being only partially executed, or not executed at all.
So I figured out that synchronous XHR was the only way for me.
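For what it's worth, another common way to keep the asynchronous request alive in this situation is to stop the form's default submit so the page is never unloaded; a rough sketch, assuming the button is the form's submit button and the ids are placeholders:
// prevent the form's default submit so the page stays loaded while the async POST completes
document.getElementById("myForm").onsubmit = function (e) {
    e.preventDefault();  // stop the navigation that was cutting the script short
    xml_http_post("http://second-server:8080/", document.getElementById("message").value);
    return false;
};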
So I'm building a multipart form uploader over ajax on node.js, and sending progress events back to the client over socket.io to show the status of their upload. Everything works just fine until I have multiple clients trying to upload at the same time.
Originally, while one upload was in progress and a second one started, the second client would begin receiving progress events from both of the forms being parsed. The original form was not affected and only received progress updates for itself. I tried creating a new formidable form object and storing it in an array along with the socket's session id to fix this, but now the first form stops receiving events while the second form is being processed.
Here is my server code:
var http = require('http'),
    formidable = require('formidable'),
    fs = require('fs'),
    io = require('socket.io'),
    mime = require('mime'),
    forms = {};

var server = http.createServer(function (req, res) {
    if (req.url.split("?")[0] == "/upload") {
        console.log("hit upload");
        if (req.method.toLowerCase() === 'post') {
            socket_id = req.url.split("sid=")[1];
            forms[socket_id] = new formidable.IncomingForm();
            form = forms[socket_id];
            form.addListener('progress', function (bytesReceived, bytesExpected) {
                progress = (bytesReceived / bytesExpected * 100).toFixed(0);
                socket.sockets.socket(socket_id).send(progress);
            });
            form.parse(req, function (err, fields, files) {
                file_name = escape(files.upload.name);
                fs.writeFile(file_name, files.upload, 'utf8', function (err) {
                    if (err) throw err;
                    console.log(file_name);
                });
            });
        }
    }
});

var socket = io.listen(server);

server.listen(8000);
If anyone could be of any help with this I would greatly appreciate it. I've been banging my head against my desk for a few days trying to figure this one out, and would really just like to get it solved so that I can move on. Thank you so much in advance!
Can you try putting console.log(socket_id);
after form = forms[socket_id]; and
after progress = (bytesReceived / bytesExpected * 100).toFixed(0);, please?
I get the feeling that you might have to wrap that socket_id in a closure, like this:
form.addListener(
    'progress',
    (function(socket_id) {
        return function (bytesReceived, bytesExpected) {
            progress = (bytesReceived / bytesExpected * 100).toFixed(0);
            socket.sockets.socket(socket_id).send(progress);
        };
    })(socket_id)
);
The problem is that you aren't declaring socket_id and form with var, so they're actually global.socket_id and global.form rather than local variables of your request handler. Consequently, separate requests step over each other, since the callbacks refer to the globals rather than being proper closures.
rdrey's solution works because it bypasses that problem (though only for socket_id; if you were to change the code in such a way that one of the callbacks referenced form, you'd get into trouble). Normally you only need his technique when the variable in question changes in the course of executing the outer function (e.g. when you're creating closures within a loop).
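To spell that out, a minimal sketch of the POST branch with the variables declared locally (the rest of the handler stays as in the question, and the shared forms map is then no longer needed):
if (req.method.toLowerCase() === 'post') {
    var socket_id = req.url.split("sid=")[1];       // local to this request
    var form = new formidable.IncomingForm();       // local to this request
    form.addListener('progress', function (bytesReceived, bytesExpected) {
        var progress = (bytesReceived / bytesExpected * 100).toFixed(0);
        socket.sockets.socket(socket_id).send(progress);
    });
    form.parse(req, function (err, fields, files) {
        var file_name = escape(files.upload.name);
        fs.writeFile(file_name, files.upload, 'utf8', function (err) {
            if (err) throw err;
            console.log(file_name);
        });
    });
}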