Google Chrome 38 introduced the new "Device Mode & Mobile Emulation" functionality in devtools. In addition to choosing a device for emulation, it is also possible to emulate different network conditions:
Optimizing your site's performance under varying network conditions is
a key aspect of developing for a mobile audience.
Device mode's network conditioning allows you to test your site on a
variety of network connections, including Edge, 3G, and even offline.
Select a connection from the preset dropdown to apply network
throttling and latency manipulation.
For example, we can set it to behave like in the good old days: GPRS at 50 Kbps.
Now we have a good use case for it: we have an internal application for network speed testing, and this new emulation functionality is very helpful for manual testing. But we'd like to automate it.
The question is:
Is it possible to start Chrome via Selenium with specified network conditions? Is this something that can be controlled through Chrome preferences or command-line arguments?
There are certainly multiple options to simulate a slow internet connection, but this question is specifically about Chrome + Selenium.
The API to control network emulation was added to ChromeDriver and should have been available for quite a while now. According to a comment in the linked issue, you should use at least version 2.26 because of a bugfix.
According to the Selenium changelog, bindings are available for these languages:
JavaScript as of version 3.4.0 (commit)
Python as of version 3.5.0 (commit)
Ruby as of version 3.11.0 (commit)
C# as of version 4 (commit)
If you need these bindings in another language, you should probably open an issue or contribute an implementation similar to one of the above.
Example usage from Python is below:
driver.set_network_conditions(
    offline=False,
    latency=5,  # additional latency (ms)
    download_throughput=500 * 1024,  # maximal throughput
    upload_throughput=500 * 1024)  # maximal throughput
No, it is not possible to control Network Connectivity Emulation through Chrome preferences or command-line arguments. Network Connectivity Emulation is part of the built-in Chrome debugger. One way of solving this is to control the debugger. This can be done via an extension or by directly controlling the debugger, see explanation. However, this will not work with WebDriver. The reason for this is that there can only be one "debug" session and WebDriver is already using it, see explanation. Since there is no public interface, there is also no way to control it via WebDriver.
For Device Mode & Mobile Emulation, which is also part of the built-in debugger, there is a public interface (details), so it can be controlled. This can be done through WebDriver capabilities. There are two options: 1) specify a device name, or 2) enter your own parameters (limited).
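For illustration, here is a minimal sketch (in Python, since a Python example already appears above) of passing the mobile emulation capability through ChromeOptions; the device name, metrics, and user agent below are just example values:
from selenium import webdriver

options = webdriver.ChromeOptions()

# Option 1: specify a device preset known to DevTools (example name)
options.add_experimental_option("mobileEmulation", {"deviceName": "Nexus 5"})

# Option 2 (alternative): enter your own parameters instead of a preset
# options.add_experimental_option("mobileEmulation", {
#     "deviceMetrics": {"width": 360, "height": 640, "pixelRatio": 3.0},
#     "userAgent": "Mozilla/5.0 (Linux; Android 8.0; Pixel 2) AppleWebKit/537.36",
# })

driver = webdriver.Chrome(options=options)
driver.get("http://www.google.com/")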
You can use this method to run your test case under the specified network conditions:
protected void networkThrottling() throws IOException {
    Map<String, Object> map = new HashMap<>();
    map.put("offline", false);
    map.put("latency", 5);
    map.put("download_throughput", 500);
    map.put("upload_throughput", 1024);

    CommandExecutor executor = ((ChromeDriver) driver).getCommandExecutor();
    Response response = executor.execute(
        new Command(((ChromeDriver) driver).getSessionId(),
                    "setNetworkConditions",
                    ImmutableMap.of("network_conditions", ImmutableMap.copyOf(map)))
    );
}
Indeed, the latest C# Selenium (3.11) has NetworkConditions added. Now you can use it like this:
var driver = new ChromeDriver(pathToDriver);
driver.NetworkConditions = new ChromeNetworkConditions()
{ DownloadThroughput = 5000, UploadThroughput = 5000, Latency = TimeSpan.FromMilliseconds(5) };
The problem is that it's not yet usable because of this bug:
https://github.com/SeleniumHQ/selenium/issues/5693
So .NET users will have to wait until the Selenium 3.12 release.
While this is a very welcome and useful bit of functionality, for serious testing I think the conventional methods of network simulation are still the way to go.
I am aware of two solutions in addition to those already linked: the Charles web proxy (a very useful tool, commercial) and implementing your own recipe using Linux Traffic Control (e.g. see chapter 6 of LAMPe2e).
By interfering with the network connections rather than the browser, you then get a proper measure of the impact independently of the browser in use.
Why do you just want to use the Chrome functionality?
Let's consider two different approaches,
one where we can throttle the entire network and one where we can specify which network requests to throttle specifically.
Approach 1: throttle the entire network
const { Builder } = require("selenium-webdriver");

async function throttleNetwork() {
  let driver = await new Builder().forBrowser("chrome").build();
  await driver.setNetworkConditions({
    offline: false,
    latency: 5000, // Additional latency (ms).
    download_throughput: 50 * 1024, // Maximal aggregated download throughput.
    upload_throughput: 50 * 1024, // Maximal aggregated upload throughput.
  });
  driver.get("http://www.google.com/");
}
Thanks to Yaroslav for pointing out the commit.
This has a downside: we can't throttle a specific network request while letting the rest go unthrottled.
Let's fix this downside in our next approach.
Approach 2: throttle a specific network request
Here we'd be using an npm package from requestly called Requestly for Selenium.
We need to create a rule first in their client application and get the link by creating a shared list.
For example, let's throttle the network request to google.com:
require("chromedriver");
const { Builder } = require("selenium-webdriver");
const chrome = require("selenium-webdriver/chrome");
const {
getRequestlyExtension,
importRequestlySharedList,
} = require("#requestly/selenium");
const sharedListUrl = "YOUR_SHARED_LIST_LINK_HERE" // For example, use "https://app.requestly.io/rules/#sharedList/1631611216670-delay"
async function throttleGoogle() {
const options = new chrome.Options().addExtensions(
getRequestlyExtension("chrome") // This installs requestly chrome extension in your testing instance
);
const driver = new Builder()
.forBrowser("chrome")
.setChromeOptions(options)
.build();
await importRequestlySharedList(driver, sharedListUrl); // Here we import the shared list we created some time back
driver.get("http://www.google.com/");
}
This was a high-level overview of how we can overcome the downsides of the selenium-only approach. I've written a blog on the same where I go into depth on how to create a rule, shared list, and so on. You can read it here.
The below issue has now been fixed in this commit
For anyone like me in the C# world wondering why the upload/download throughput does not work as expected: it seems the tooltips for these properties are mislabelled. The tooltip states the data rate is measured in kb/s, but in my own experience it is actually bytes per second, so if you want to use a more familiar measurement like Mbps you will have to multiply by 125,000:
int latencyInMilliseconds = 20;
long downloadLimitMbps = 20;
long uploadLimitMbps = 5;
_driver.NetworkConditions = new ChromeNetworkConditions()
{
Latency = new TimeSpan(0, 0, 0, 0, latencyInMilliseconds),
DownloadThroughput = downloadLimitMbps * 125000, // Mbps to bytes per second
UploadThroughput = uploadLimitMbps * 125000, // Mbps to bytes per second
IsOffline = false,
};
Using these settings and looking at network traffic while my tests are running I can see they result in exactly 20Mbps down and 5Mbps up.
It looks like it's coming soon to Selenium (C#). The commit was on 01/28/2018:
https://github.com/SeleniumHQ/selenium/blob/ef156067a583fe84b66ec338d969aeff6504595d/dotnet/src/webdriver/Chrome/ChromeNetworkConditions.cs
I know this is an old question, but I recently had to solve for this problem and this page came up at the top of my Google search. Here are the main bits from how I did it in C#. Hope this helps someone in the future.
var networkConditions = new ChromeNetworkConditions();
networkConditions.Latency = TimeSpan.FromMilliseconds(150); // note: new TimeSpan(150) would mean 150 ticks, not 150 ms
networkConditions.IsOffline = false;
networkConditions.DownloadThroughput = 120 * 1024;
networkConditions.UploadThroughput = 150 * 1024;
Driver.NetworkConditions = networkConditions;
Inspired by the answer from TridentTrue, here is an updated version for Selenium 4.0.0 in C#. If anyone knows how to use it for alpha7 and upwards without being version specific, feel free to update this. :)
public void LimitNetwork(int latencyInMilliseconds, long downloadLimitMbps, long uploadLimitMbps)
{
    IDevTools devTools = driver as IDevTools;
    var session = devTools.CreateDevToolsSession();
    session.Network.Enable(new EnableCommandSettings());

    EmulateNetworkConditionsCommandSettings command = new EmulateNetworkConditionsCommandSettings();
    command.Latency = latencyInMilliseconds;
    command.DownloadThroughput = downloadLimitMbps * 125000; // Mbps to bytes per second
    command.UploadThroughput = uploadLimitMbps * 125000;     // Mbps to bytes per second
    command.Offline = false;
    session.Network.EmulateNetworkConditions(command);
}
Update: After I had implemented this on my own, I found a really good article that gives an overview of Selenium 4.0, including emulating network conditions.
Update 2: My issue was that I forgot to add the Network.Enable command, so don't forget to call it before you do the other stuff.
I have updated the code. :)
Related
I'm trying to communicate between a C# (5.0) and a Python (3.9) application via ZeroMQ. For .NET I'm using NetMQ and for Python PyZMQ.
I have no trouble letting two applications communicate, as long as they are in the same language:
C# app to C# app;
Python -> Python;
Java -> Java;
but trouble starts when I try to connect between different languages.
Java -> C# and the reverse work fine as well. [edited]
I do not get any errors, but it does not work either.
I first tried the PUB-SUB archetype pattern, but as that didn't work, I tried REQ-REP, so some remnants of the PUB-SUB version can still be found in the code.
My Python code looks like this:
def run(monitor: bool):
    loop_counter: int = 0
    context = zmq.Context()
    # socket = context.socket(zmq.PUB)
    # socket.bind("tcp://*:5557")
    socket = context.socket(zmq.REP)
    socket.connect("tcp://localhost:5557")
    if monitor:
        print("Connecting")
    # 0 = Longest version, 1 = shorter version, 2 = shortest version
    length_version: int = 0
    print("Ready and waiting for incoming requests ...")
    while True:
        message = socket.recv()
        if monitor:
            print("Received message:", message)
        if message == "long":
            length_version = 0
        elif message == "middle":
            length_version = 1
        else:
            length_version = 2
        sys_info = get_system_info(length_version)
        """if not length_version == 2:
            length_version = 2
        loop_counter += 1
        if loop_counter == 15:
            length_version = 1
        if loop_counter > 30:
            loop_counter = 0
            length_version = 0"""
        if monitor:
            print(sys_info)
        json_string = json.dumps(sys_info)
        print(json_string)
        socket.send_string(json_string)
My C# code:
static void Main(string[] args)
{
    //using (var requestSocket = new RequestSocket(">tcp://localhost:5557"))
    using (var requestSocket = new RequestSocket("tcp://localhost:5557"))
    {
        while (true)
        {
            Console.WriteLine($"Running the server ...");
            string msg = "short";
            requestSocket.SendFrame(msg);
            var message = requestSocket.ReceiveFrameString();
            Console.WriteLine($"requestSocket : Received '{message}'");
            //Console.ReadLine();
            Thread.Sleep(1_000);
        }
    }
}
Given the timing of your problems, maybe it's a matter of versions.
I ran a program fine for a long time that communicated between Windows/C# with NetMQ 4.0.0.207 (published 7/1/2019) on one side and Ubuntu/Python with zeromq 4.3.1 and pyzmq 18.1.0 on the other.
I just tried keeping the same NetMQ version but updating to zeromq 4.3.3 and pyzmq 20.0.0, and there is a problem/bug somewhere: it doesn't run well anymore.
So your code doesn't look bad; maybe it's a software version issue. Try NetMQ 4.0.0.207 on the C# side and zeromq 4.3.1 with pyzmq 18.1.0 on the Python side.
Q : "How to set up a ZeroMQ request-reply between a c# and python application"
The problem starts with a misunderstanding of how the REQ/REP archetype works.
Your code uses the blocking form of the .recv() method, so you leave yourself hanging out of the game, forever and unsalvageably, whenever the REQ/REP two-step gets into trouble (as no due care was taken to prevent this infinite live-lock).
Rather, start using the .poll() method to test for the presence or absence of a message on the local side of the queue. This leaves you able to decide, statefully, what to do next depending on whether a message is present or not yet, and to maintain the mandatory, API-defined sequence that "zips" successful chainings of the REQ-side .send()-.recv()-.send()-.recv()-... calls with the REP-side .recv()-.send()-.recv()-.send()-... calls, as the REQ/REP archetype works as a distributed finite-state automaton (dFSA) that may easily deadlock itself if the "remote" side is not compliant with the local side's expectations.
Having code that works in a non-blocking, .poll()-based mode avoids falling into these traps, as you can handle each of these unwanted circumstances while still in control of the code-execution path (which a call to a blocking-mode method, in the blind belief that it will return at some future point in time, if ever, simply cannot provide).
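A minimal pyzmq sketch of the poll-based REP side described above (the port, the 1-second timeout, and the echo-style reply are illustrative only, not the original application's logic):
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5557")  # illustrative: one side binds, the other connects

poller = zmq.Poller()
poller.register(socket, zmq.POLLIN)

while True:
    events = dict(poller.poll(timeout=1000))  # wait at most 1 s, then regain control
    if socket in events:
        message = socket.recv_string()        # recv_string() avoids bytes-vs-str mismatches
        socket.send_string("reply: " + message)
    else:
        # No request arrived in time: we are still in control of the execution path
        # and can log, retry, or leave the loop instead of blocking forever.
        pass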
Q.E.D.
If in doubt, one may use the PUSH/PULL archetype, as the PUB/SUB archetype may run into problems with non-matching subscriptions (topic-list management being another, version-dependent detail).
There ought to be no other problem for any of the language bindings, provided they expose all the documented ZeroMQ API features without taking any "shortcuts". Some cases have been seen where a language-specific binding took "another" direction for PUB/SUB, transforming a plain message into a multi-part message with the topic in the first frame and the payload in the other. That is an example of a binding not compatible with the ZeroMQ API, where problems in a cross-language or mismatched-binding-version system are sure to follow.
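For completeness, a tiny pyzmq PUSH/PULL sketch (the port and the message are illustrative; the C# side would mirror this with NetMQ's PushSocket/PullSocket):
import zmq

ctx = zmq.Context()

push = ctx.socket(zmq.PUSH)   # producer side
push.bind("tcp://*:5558")     # illustrative port

pull = ctx.socket(zmq.PULL)   # consumer side (would normally live in the other process)
pull.connect("tcp://localhost:5558")

push.send_string("system info payload")  # no REQ/REP lock-step to violate
print(pull.recv_string())                # and no topic subscriptions to mismatch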
Your port numbers do not match: the Python code uses 55557 and the C# code uses 5557.
I might be late, but this same thing happened to me. I have a Python subscriber using pyzmq and a C# publisher using NetMQ.
After a few hours, it occurred to me that I needed to give the publisher some time to connect. So a simple System.Threading.Thread.Sleep(500); after the Connect/Bind did the trick.
I am trying to measure how long it takes for images to load in a React Native app on my users' devices in different countries.
In debug mode there is performance.now(), which creates a timestamp that I then send as a property of the event to Amplitude.
But performance.now() is a JS method and is not available in release builds for users. There is, however, an undocumented global.nativePerformanceNow method:
const loadStartAmplitudeEvent = () => {
  if (R.not(__DEV__)) {
    const timeStamp = global.nativePerformanceNow();
    amplitude.logEvent('Photo On Load Start', {
      uri, timeStamp,
    });
  }
};
For example, that's how I create an event with a timestamp to send to Amplitude, but I get an error. What am I doing wrong? Thanks a lot! Should I use some other method? Is the global.nativePerformanceNow → g.nativePerformanceNow transformation messing it up?
2019-08-06 03:10:45.134 [error][tid:com.facebook.react.JavaScript]
g.nativePerformanceNow is not a function.
(In 'g.nativePerformanceNow()', 'g.nativePerformanceNow' is undefined)
It seems the feature was removed from RN in mid-October 2018...
This crashes on a device (it works in Chrome tools, which is a bit meh):
https://github.com/facebook/react-native/issues/27274#issuecomment-557586801
But it also seems it's been brought back from the dead recently.
So maybe the next React Native release will include performance, performanceNow(), and performance.now():
https://github.com/facebook/react-native/commit/232517a5740f5b82cfe8779b3832e9a7a47a8d3d
Today, I had to restart my browser due to some issue with an extension. What I found when I restarted it was that my browser (Chromium) had automatically updated to a new version that doesn't allow synchronous AJAX requests anymore. Quote:
Synchronous XMLHttpRequest on the main thread is deprecated because of
its detrimental effects to the end user's experience. For more help,
check http://xhr.spec.whatwg.org/.
I need synchronous AJAX requests for my node.js applications to work, though, as they store and load data from disk through a server utilizing fopen. I found this to be a very simple and effective way of doing things, very handy in the creation of little hobby projects and editors... Is there a way to re-enable synchronous XMLHttpRequest in Chrome/Chromium?
This answer has been edited.
Short answer:
They don't want sync on the main thread.
The solution is simple for new browsers that support threads/web workers:
var foo = new Worker("scriptWithSyncRequests.js")
Neither the DOM nor global variables are going to be visible within a worker, but encapsulating multiple synchronous requests is going to be really easy.
An alternative solution is to switch to async but use the browser's localStorage along with JSON.stringify as a medium. You might be able to mock localStorage if you are allowed to do some IO.
http://caniuse.com/#search=localstorage
Just for fun, there are alternative hacks if we want to restrict ourselves to using only sync:
It is tempting to use setTimeout because one might think it is a good way to encapsulate synchronous requests together. Sadly, there is a gotcha: async in JavaScript doesn't mean the code gets to run in its own thread. Async likely just postpones the call, waiting for others to finish. Luckily, there is light at the end of the tunnel, because you can likely use xhttp.timeout along with xhttp.ontimeout to recover. See Timeout XMLHttpRequest.
This means we can implement a tiny scheduler that handles failed requests and allocates time to try again or report an error.
// The basic idea.
function runScheduler(s)
{
    setTimeout(function() {
        if (s.ptr < s.callQueue.length) {
            // Handles rescheduling if needed by pushing the queue.
            // Remember to set a time for xhttp.timeout.
            // Use xhttp.ontimeout to set a default return value on failure.
            // The pushed function might do something like: (in pseudo)
            //     if !d1
            //         d1 = get(http...?query);
            //     if !d2
            //         d2 = get(http...?query);
            //     if (!d1) {pushQueue tryAgainLater}
            //     if (!d2) {pushQueue tryAgainLater}
            //     if (d1 && d2) {pushQueue handleData}
            s = s.callQueue[s.ptr++](s);
        } else {
            // Clear the queue when there is nothing more to do.
            s.ptr = 0;
            s.callQueue = [];
            // You could implement an idle counter and increase this value to free
            // CPU time.
            s.t = 200;
        }
        runScheduler(s);
    }, s.t);
}
Doesn't "deprecated" mean that it's available, but won't be forever. (I read elsewhere that it won't be going away for a number of years.) If so, and this is for hobby projects, then perhaps you could use async: false for now as a quick way to get the job done?
We just upgraded our Heroku postgres database using the follower changeover method. We have over 50 dataclips attached to the old database, and now we need to move them over to the new database. However, doing them one by one will take a lot of time.
Is there a programmatic way to update the database a dataclip is attached to, perhaps with the CLI tools?
At least once the old database has been deprovisioned, you can now (as of March 2016) reattach them to another database:
Go to https://dataclips.heroku.com/clips/recoverable. It will display your old database and a set of 'orphaned' dataclips and you can choose to transfer them to another database (in my case the promoted follower from the changeover).
Note that this only affects the dataclips that you created; it does not affect the dataclips that one of your team members created and that you only had access to. So they will have to go through this process as well.
Official devcenter article: https://devcenter.heroku.com/articles/dataclips#dataclip-recovery
Thanks to Heroku CSRF measures, programmatically updating data clips is much more difficult than you might expect. You'll need to suck it up and start clicking buttons by hand, or beg their support team to do it for you, which is just as difficult.
There is no official support for programmatically moving the dataclips. That being said, you can script it out against their HTTP API.
The base URL is https://dataclips.heroku.com/api/v1/. There are three relevant endpoints:
clips: /clips
resources (databases): /heroku_resources
move clip: /clips/:slug/move
Find the slug of the clip you want to move, find the resource id of the new database, and make a post to the move clip endpoint:
POST /api/v1/clips/fjhwieufysdufnjqqueyuiewsr/move
Content-Type: application/json
{"heroku_resource_id":"resource123456789#heroku.com"}
I had over 300 dataclips to move. I used the following technique to update them all (essentially reverse engineering the dataclips API).
Open Chrome with Web Developer tools, Network tab.
Log into Heroku Dataclips
Observe the network call which returns all the dataclips, in JSON (https://dataclips.heroku.com/api/v1/clips). Take this response and extract out all dataclip slugs.
Update the database for one dataclip. Observe the network call which does this (https://dataclips.heroku.com/api/v1/clips/:slug/move). Right click, Copy as cURL. This is the easiest way to get all the correct parameters, since the API uses cookies for authentication.
Write a script that loops through each dataclip slug, and shells out to curl. In Ruby, this looks like:
slugs = <paste ids here>.split("\n")

slugs.each do |slug|
  command = %Q(curl -v 'https://dataclips.heroku.com/api/v1/clips/#{slug}/move' -H 'Cookie: ...' --data '{"heroku_resource_id":"resource1234567@heroku.com"}')
  puts command
  system(command)
end
You can contact Heroku support, and they will bulk transfer the dataclips to your new database for you.
Batch working on dataclips
I've finally found a solution to work on my dataclips as a batch, using the JavaScript console and some scraping techniques. I needed it to retrieve every dataclip, but I guess it can be adapted as follows:
// Go to the dataclip listing (https://data.heroku.com/dataclips).
// Then execute this script in your console.
// Be careful, this will focus a new window every 4 seconds, preventing
// you from working 4 seconds times the number of dataclips you have.

// Retrieve urls and titles
let dataclips = Array.
  from(document.querySelectorAll('.rt-td:first-child a')).
  map(el => ({ url: el.href, title: el.innerText }))

/**
 * Allows waiting for a given timeout before execution.
 * @param {number} ms
 */
const timeout = function(ms) {
  return new Promise(resolve => {
    setTimeout(() => {
      resolve()
    }, ms);
  })
}

/**
 * Here are all the changes you want to apply to every single
 * dataclip.
 * @param {object} window
 */
const applyChanges = function(window) {
}

// With a fast connection, 4 seconds is OK. Dial it down if you
// have errors.
const expectedLoadTime = 4000 // ms

// This is the main loop, windows are opened one by one to ensure focus and a
// correct loading time.
for (const dataclip of dataclips) {
  // This opens another window from the script, having access to its DOM.
  // See https://github.com/buonomo/kazoo for a funnier example usage!
  // And don't be shy to star and share :D
  const externWindow = window.open(dataclip.url)
  // A hack to wait for loading, this could be improved for sure.
  await timeout(expectedLoadTime)
  applyChanges(externWindow)
  externWindow.close()
}
You'd still have to implement applyChanges yourself, which I concede is a bit tedious, and I don't have time to do it now (if someone does, please share!). But at least it can be done on all of your dataclips in a single function.
For an example usage of this script, you can take a look at the gist I made to scrape every dataclip and its related errors.
I developed a WP7 application using the emulator. Everything was great. To communicate with the server I used WebClient and RestClient. But when I tested the application on a real device, I got a shock.
1)
private void LoadData()
{
    var webClient = new WebClient();
    webClient.DownloadStringCompleted += DownloadStringCompleted;
    webClient.DownloadStringAsync(new Uri(Constants.Url1));
    //Point_1
}

private void DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
{
    //Point_2
}
On the emulator, the time between Point_1 and Point_2 is 0.8-1.2 seconds.
On a real device (HTC Radar, WP7.8), the time between Point_1 and Point_2 is 15-20 seconds.
2)
var request = new RestRequest(url) { Method = Method.POST };
//Point_3
RestClient.ExecuteAsync(request, response =>
{
    //Point_4
});
On the emulator, the time between Point_3 and Point_4 is 0.3-0.5 seconds.
On a real device (HTC Radar, WP7.8), the time between Point_3 and Point_4 is 18-22 seconds.
My computer and phone are on the same Wi-Fi network.
I have three questions:
First: is this normal?
Second: why is it happening?
Third: how can I solve it?
There are many factors; however, it's worth remembering that emulator performance is usually a lot better than device performance, and that you should test on the device.
Having said that, you should consider alternate models of data display,
e.g. making a call and then populating the data as it arrives in chunks, using something like ObservableCollection.
You could also download the data using a background task so that it is already available.
In the end, it depends on what you can and cannot do.
Like Hermit says: "emulator performance is usually a lot better than device performance, and you should test on the device."
My solution is: do not use debug mode when you test network performance on a real device. Just create a XAP file and load it onto the device.