I have an e2e Protractor-Jasmine test in my Angular application. I've just run the exact same test several times in a row, and it stops at exactly the same step every time, giving the same error:
'Failed: script timeout: result was not received in 20 seconds'
What I have tried:
1. I attempted to make this an async function: ... header', async () => { ...
2. I have tried await(ing) the element: await element(by.css("[ng-click='siteDocLibCtrl.managePermissionsDialog($event)']")).click();
3. I have tried browser.sleep(3000) (a combined sketch of these attempts appears below)
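For reference, attempts 1-3 combined looked roughly like this (a sketch only; the spec name and the 5-second timeout are my guesses, and the locator is the one from the snippet further down):

const EC = protractor.ExpectedConditions;

it('should open the manage permissions dialog from the header', async () => {
  const dialogButton = element(
    by.css("[ng-click='siteDocLibCtrl.managePermissionsDialog($event)']"));
  // Wait until the button is actually clickable before interacting with it.
  await browser.wait(EC.elementToBeClickable(dialogButton), 5000);
  await dialogButton.click();
  await browser.sleep(3000); // the blunt fallback that was also tried
});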
Jasmine version: 2.8.0
npm version:
npm: '6.4.1',
ares: '1.15.0',
cldr: '33.1',
http_parser: '2.8.0',
icu: '62.1',
modules: '64',
napi: '3',
nghttp2: '1.34.0',
node: '10.15.0',
openssl: '1.1.0j',
tz: '2018e',
unicode: '11.0',
uv: '1.23.2',
v8: '6.8.275.32-node.45',
zlib: '1.2.11'
element.all(by.repeater("file in siteDocLibCtrl.files | filter:global.search | orderBy:orderByField:reverseSort")).get(0).click(); // selects the 1st element
element(by.css("[ng-click='siteDocLibCtrl.managePermissionsDialog($event)']")).click();
The error output I am getting is below:
Failed: script timeout: result was not received in 20 seconds
(Session info: chrome=73.0.3683.103)
(Driver info: chromedriver=2.46.628402 (536cd7adbad73a3783fdc2cab92ab2ba7ec361e1),platform=Windows NT 10.0.17134 x86_64)
Stack:
ScriptTimeoutError: script timeout: result was not received in 20 seconds
(Session info: chrome=73.0.3683.103)
(Driver info: chromedriver=2.46.628402 (536cd7adbad73a3783fdc2cab92ab2ba7ec361e1),platform=Windows NT 10.0.17134 x86_64)
at Object.checkLegacyResponse (C:\Users\Jagdeep\eclipse-workspace\ProtractorTutorial\node_modules\selenium-webdriver\lib\error.js:546:15)
at parseHttpResponse (C:\Users\Jagdeep\eclipse-workspace\ProtractorTutorial\node_modules\selenium-webdriver\lib\http.js:509:13)
at doSend.then.response (C:\Users\Jagdeep\eclipse-workspace\ProtractorTutorial\node_modules\selenium-webdriver\lib\http.js:441:30)
at process._tickCallback (internal/process/next_tick.js:68:7)
Try adding this line before the test:
browser.ignoreSynchronization = true;
This tells Protractor not to wait for Angular's pending tasks to settle before each action, which is often what hangs when the page (or a dialog) does work outside Angular's zone.
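A minimal sketch of where that flag would go, assuming a Jasmine beforeEach (in Protractor 5+ the supported equivalent is browser.waitForAngularEnabled(false), since ignoreSynchronization is deprecated):

describe('site document library', () => {
  beforeEach(() => {
    // Stop Protractor from waiting for Angular to settle before each action.
    browser.ignoreSynchronization = true;
    // Protractor 5+ equivalent:
    // browser.waitForAngularEnabled(false);
  });

  it('opens the manage permissions dialog', async () => {
    await element(by.css(
      "[ng-click='siteDocLibCtrl.managePermissionsDialog($event)']")).click();
  });
});

Note that with synchronization disabled you must add your own waits, since Protractor no longer blocks on Angular's pending HTTP and timeout tasks.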
I have a Cloud Run function that accepts multiple URL inputs, and this works fine via cURL from Google Cloud Shell, e.g.
curl https://myapp.a.run.app/function1/input1/input2
where input1 is an email address, input2 a string with no spaces, input3 an int, and input4 a filename with spaces, e.g.
curl https://myapp.a.run.app/function2/input1/input2/input3/input4
I have a Cloud Workflow that calls this function in a few steps, building up a URL from previous responses. The calls to the function in the earlier steps work fine; however, when I attempt to call the function via a URL whose email address/filename inputs have been text.url_encoded first, the workflow fails with a 403 error.
- step3_assign_input1:
    assign:
      - step3_URL: '${"https://myapp.a.run.app/function1/" + step2result.body.field1.field2["b:input1"]}'
- step4_get_input1:
    try:
      call: http.get
      args:
        url: ${step3_URL}
      result: step4result
    retry:
      predicate: ${custom_predicate}
      max_retries: 10
      backoff:
        initial_delay: 2
        max_delay: 60
        multiplier: 2
- step4a_sleep:
    call: sys.sleep
    args:
      seconds: 5
- step5_assign_input2:
    assign:
      - step5_URL: '${"https://myapp.a.run.app/function2/" + step4result.body.field1.field2["a:input2"]}'
- step6_get_input2:
    try:
      call: http.get
      args:
        url: ${step5_URL}
      result: step6result
    retry:
      predicate: ${custom_predicate}
      max_retries: 10
      backoff:
        initial_delay: 2
        max_delay: 60
        multiplier: 2
- step7a_encode_email:
    call: text.url_encode
    args:
      source: ${args.email}
    result: encode_email
- step7b_encode_filename:
    call: text.url_encode
    args:
      source: '${step6result.body.field1.field2["a:FileName"]}'
    result: encode_filename
- step8_assign_emailURL:
    assign:
      - emailURL: '${"https://myapp.a.run.app/email/" + encode_email + "/" + args.type + "/" + args.serial + "/" + encode_filename}'
- step9_get_emailURL:
    try:
      call: http.get
      args:
        url: ${emailURL}
      result: step9result
    retry:
      predicate: ${custom_predicate}
      max_retries: 10
      backoff:
        initial_delay: 2
        max_delay: 60
        multiplier: 2
- returnOutput:
    return: ${step9result}

custom_predicate:
  params: [e]
  steps:
    - what_to_repeat:
        switch:
          - condition: ${e.code in [429, 500, 502, 503, 504]}
            return: true
    - otherwise:
        return: false
From the workflow logs I can see the URL is being constructed as
https://myapp.a.run.app/email/someone%40email.com/String/12345678/THIS%20File%20Name%01.pdf
which, if I call it via a cURL GET in Cloud Shell, returns a 200 response code.
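For what it's worth, the encoding step in isolation behaves as expected. A minimal sketch (assuming text.url_encode can also be invoked inside an expression, as with other standard-library functions):

- encode_demo:
    assign:
      # yields "someone%40email.com", matching the URL in the logs above
      - enc: ${text.url_encode("someone@email.com")}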
In the workflow logs, however, I get HTTP server responded with error code 403, and a response body of Error: Forbidden / Access is forbidden.
{
  "insertId": "sl7uy9bd1",
  "jsonPayload": {
    "activityTime": "2023-01-06T05:44:39Z",
    "state": "FAILED",
    "@type": "type.googleapis.com/google.cloud.workflows.type.ExecutionsSystemLog",
    "failure": {
      "source": "main.step9_get_emailURL, line: 76",
      "exception": "HTTP server responded with error code 403\nin step \"step9_get_emailURL\", routine \"main\", line: 76: {\"body\":\"\\n\\u003chtml\\u003e\\u003chead\\u003e\\n\\u003cmeta http-equiv=\\\"content-type\\\" content=\\\"text/html;charset=utf-8\\\"\\u003e\\n\\u003ctitle\\u003e403 Forbidden\\u003c/title\\u003e\\n\\u003c/head\\u003e\\n\\u003cbody text=#000000 bgcolor=#ffffff\\u003e\\n\\u003ch1\\u003eError: Forbidden\\u003c/h1\\u003e\\n\\u003ch2\\u003eAccess is forbidden.\\u003c/h2\\u003e\\n\\u003ch2\\u003e\\u003c/h2\\u003e\\n\\u003c/body\\u003e\\u003c/html\\u003e\\n\",\"code\":403,\"headers\":{\"Alt-Svc\":\"h3=\\\":443\\\"; ma=2592000,h3-29=\\\":443\\\"; ma=2592000,h3-Q050=\\\":443\\\"; ma=2592000,h3-Q046=\\\":443\\\"; ma=2592000,h3-Q043=\\\":443\\\"; ma=2592000,quic=\\\":443\\\"; ma=2592000; v=\\\"46,43\\\"\",\"Content-Length\":\"235\",\"Content-Type\":\"text/html; charset=UTF-8\",\"Date\":\"Fri, 06 Jan 2023 05:44:38 GMT\",\"X-Appengine-Country\":\"ZZ\"},\"message\":\"HTTP server responded with error code 403\",\"tags\":[\"HttpError\"]}"
    }
  },
  "resource": {
    "type": "workflows.googleapis.com/Workflow",
    "labels": {
      "workflow_id": "myworkflow",
      "location": "australia-southeast1",
      "resource_container": "323152299552"
    }
  },
  "timestamp": "2023-01-06T05:44:39.101297442Z",
  "severity": "ERROR",
  "labels": {
    "workflows.googleapis.com/revision_id": "000036-15c",
    "workflows.googleapis.com/execution_id": "f2175706-7e8a-4321-916b-487231a10d6b"
  },
  "logName": "projects/myproject/logs/workflows.googleapis.com%2Fexecutions_system",
  "receiveTimestamp": "2023-01-06T05:44:40.026366626Z"
}
If I look at the logs of the Cloud Run function, I see successful log entries from the earlier steps in the workflow, but no logs at all from the step that is failing.
Hopefully someone can provide some insight into why this fails when called via the workflow steps.
I'm testing, with a minimal example, how futures work when using ProcessPoolExecutor.
First, I want to know the result of my processing functions; then I would like to add complexity to the scenario.
import time
import string
import random
import traceback
import concurrent.futures
from concurrent.futures import ProcessPoolExecutor


def process(*args):
    was_ok = True
    try:
        name = args[0][0]
        tiempo = args[0][1]
        print(f"Task {name} - Sleeping {tiempo}s")
        time.sleep(tiempo)
    except:
        was_ok = False
    return name, was_ok


def program():
    amount = 10
    workers = 2
    data = [''.join(random.choices(string.ascii_letters, k=5)) for _ in range(amount)]
    print(f"Data: {len(data)}")
    tiempo = [random.randint(5, 15) for _ in range(amount)]
    print(f"Times: {len(tiempo)}")
    with ProcessPoolExecutor(max_workers=workers) as pool:
        try:
            index = 0
            futures = [pool.submit(process, zipped) for zipped in zip(data, tiempo)]
            for future in concurrent.futures.as_completed(futures):
                name, ok = future.result()
                print(f"Task {index} with code {name} finished: {ok}")
                index += 1
        except Exception as e:
            print(f'Future failed: {e}')


if __name__ == "__main__":
    program()
If I run this program, the output is as expected, obtaining all the future results. However, just at the end I also get a failure:
Data: 10
Times: 10
Task utebu - Sleeping 14s
Task klEVG - Sleeping 10s
Task ZAHIC - Sleeping 8s
Task 0 with code klEVG finished: True
Task RBEgG - Sleeping 9s
Task 1 with code utebu finished: True
Task VYCjw - Sleeping 14s
Task 2 with code ZAHIC finished: True
Task GDZmI - Sleeping 9s
Task 3 with code RBEgG finished: True
Task TPJKM - Sleeping 10s
Task 4 with code GDZmI finished: True
Task CggXZ - Sleeping 7s
Task 5 with code VYCjw finished: True
Task TUGJm - Sleeping 12s
Task 6 with code CggXZ finished: True
Task THlhj - Sleeping 11s
Task 7 with code TPJKM finished: True
Task 8 with code TUGJm finished: True
Task 9 with code THlhj finished: True
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/concurrent/futures/process.py", line 101, in _python_exit
    thread_wakeup.wakeup()
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/concurrent/futures/process.py", line 89, in wakeup
    self._writer.send_bytes(b"")
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/multiprocessing/connection.py", line 183, in send_bytes
    self._check_closed()
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/multiprocessing/connection.py", line 136, in _check_closed
    raise OSError("handle is closed")
OSError: handle is closed
AFAIK the code doesn't have an error in itself. I've been researching really old questions like the following, without any luck finding a fix:
this one, related to the same error message but for Python 2,
this issue on GitHub, which seems to be exactly the same one I'm having (and seems not to be fixed yet...?),
this other issue, where the comment I linked to seems to point at the actual problem but doesn't find a solution to it,
of course the official docs for Python 3.7,
...
And so forth. However, I still haven't found how to solve this behaviour. I even found an old question here on SO that suggested avoiding the as_completed function and using submit instead (this very test came from that; before, I just had a map to my process function, as sketched below).
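For reference, the earlier map-based version looked roughly like this (a sketch only, reusing the process function above):

    with ProcessPoolExecutor(max_workers=workers) as pool:
        # map yields results in submission order, unlike as_completed
        for name, ok in pool.map(process, zip(data, tiempo)):
            print(f"Task {name} finished: {ok}")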
Any idea, fix, explanation or workaround is welcome. Thanks!
I am testing Google Workflows and would like to call a workflow from another workflow, but as a separate process (not a subworkflow).
I am able to start the execution, but I am currently unable to retrieve the return value. Instead, I receive an instance of the execution:
{
  "argument": "null",
  "name": "projects/xxxxxxxxxxxx/locations/us-central1/workflows/child-workflow/executions/9fb4aa01-2585-42e7-a79f-cfb4b57b22d4",
  "startTime": "2020-12-09T01:38:07.073406981Z",
  "state": "ACTIVE",
  "workflowRevisionId": "000003-cf3"
}
parent-workflow.yaml
main:
  params: [args]
  steps:
    - callChild:
        call: http.post
        args:
          url: 'https://workflowexecutions.googleapis.com/v1beta/projects/my-project/locations/us-central1/workflows/child-workflow/executions'
          auth:
            type: OAuth2
            scope: 'https://www.googleapis.com/auth/cloud-platform'
        result: callresult
    - returnValue:
        return: ${callresult.body}
child-workflow.yaml:
- getCurrentTime:
    call: http.get
    args:
      url: https://us-central1-workflowsample.cloudfunctions.net/datetime
    result: CurrentDateTime
- readWikipedia:
    call: http.get
    args:
      url: https://en.wikipedia.org/w/api.php
      query:
        action: opensearch
        search: ${CurrentDateTime.body.dayOfTheWeek}
    result: WikiResult
- returnOutput:
    return: ${WikiResult.body[1]}
Also, as an added question: how can I create a dynamic URL from a variable? ${} doesn't seem to work there.
As Executions are async API calls, you need to poll the execution to see when it has finished.
You can use the following algorithm:
main:
  steps:
    - callChild:
        call: http.post
        args:
          url: ${"https://workflowexecutions.googleapis.com/v1beta/projects/"+sys.get_env("GOOGLE_CLOUD_PROJECT_ID")+"/locations/us-central1/workflows/http_bitly_secrets/executions"}
          auth:
            type: OAuth2
            scope: 'https://www.googleapis.com/auth/cloud-platform'
        result: workflow
    - waitExecution:
        call: CloudWorkflowsWaitExecution
        args:
          execution: ${workflow.body.name}
        result: workflow
    - returnValue:
        return: ${workflow}

CloudWorkflowsWaitExecution:
  params: [execution]
  steps:
    - init:
        assign:
          - i: 0
          - valid_states: ["ACTIVE", "STATE_UNSPECIFIED"]
          - result:
              state: ACTIVE
    - check_condition:
        switch:
          - condition: ${result.state in valid_states AND i<100}
            next: iterate
        next: exit_loop
    - iterate:
        steps:
          - sleep:
              call: sys.sleep
              args:
                seconds: 10
          - process_item:
              call: http.get
              args:
                url: ${"https://workflowexecutions.googleapis.com/v1beta/"+execution}
                auth:
                  type: OAuth2
              result: result
          - assign_loop:
              assign:
                - i: ${i+1}
                - result: ${result.body}
        next: check_condition
    - exit_loop:
        return: ${result}
What you see here is a CloudWorkflowsWaitExecution subworkflow that loops at most 100 times, with a 10-second delay between polls; it stops when the child workflow has finished and returns the result. Note that the callChild step also answers the added question: its url field builds a dynamic URL inside a ${...} expression.
The output is:
argument: 'null'
endTime: '2020-12-09T13:00:11.099830035Z'
name: projects/985596417983/locations/us-central1/workflows/call_another_workflow/executions/05eeefb5-60bb-4b20-84bd-29f6338fa66b
result: '{"argument":"null","endTime":"2020-12-09T13:00:00.976951808Z","name":"projects/985596417983/locations/us-central1/workflows/http_bitly_secrets/executions/2f4b749c-4283-4c6b-b5c6-e04bbcd57230","result":"{\"archived\":false,\"created_at\":\"2020-10-17T11:12:31+0000\",\"custom_bitlinks\":[],\"deeplinks\":[],\"id\":\"j.mp/2SZaSQK\",\"link\":\"//<edited>/2SZaSQK\",\"long_url\":\"https://cloud.google.com/blog\",\"references\":{\"group\":\"https://api-ssl.bitly.com/v4/groups/Bg7eeADYBa9\"},\"tags\":[]}","startTime":"2020-12-09T13:00:00.577579042Z","state":"SUCCEEDED","workflowRevisionId":"000001-478"}'
startTime: '2020-12-09T13:00:00.353800247Z'
state: SUCCEEDED
workflowRevisionId: 000012-cb8
In the result there is a result subkey that holds the output of the external workflow execution.
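Since that subkey is a JSON string rather than a map, you would typically decode it before using it. A minimal sketch (assuming the standard json.decode function and the workflow variable from the steps above):

- decodeChildResult:
    assign:
      - child_result: ${json.decode(workflow.result)}
- useChildResult:
    return: ${child_result}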
The best method is now the workflows.executions.run helper method, which formats the request and blocks until the workflow execution has completed:
- run_execution:
    try:
      call: googleapis.workflowexecutions.v1.projects.locations.workflows.executions.run
      args:
        workflow_id: ${workflow}
        location: ${location}    # defaults to the current location
        project_id: ${project}   # defaults to the current project
        argument: ${arguments}   # arguments could be specified inline as a map instead
      result: r1
    except:
      as: e
      steps: ...  # handle a failed execution
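For example, calling the child-workflow from the question without arguments or error handling might look like this (a sketch; as with the polling approach above, the execution's result field comes back as a JSON-encoded string):

- run_child:
    call: googleapis.workflowexecutions.v1.projects.locations.workflows.executions.run
    args:
      workflow_id: "child-workflow"  # location and project default to the caller's own
    result: r1
- returnChild:
    return: ${r1.result}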
I'm running into this error in AWS Lambda. It appears that the DevTools websocket is not up. Not sure how to fix it. Any ideas? Thanks for your time.
The exception originates from get_ws_endpoint(), due to a websocket response timeout: https://github.com/pyppeteer/pyppeteer/blob/ad3a0a7da221a04425cbf0cc92e50e93883b077b/pyppeteer/launcher.py#L225
Lambda code:
import os
import json
import asyncio
import logging
import boto3
import pyppeteer
from pyppeteer import launch

logger = logging.getLogger()
logger.setLevel(logging.INFO)

pyppeteer.DEBUG = True  # print suppressed errors as error log


def lambda_handler(event, context):
    asyncio.get_event_loop().run_until_complete(main())


async def main():
    browser = await launch({
        'headless': True,
        'args': [
            '--no-sandbox'
        ]
    })
    page = await browser.newPage()
    await page.goto('http://example.com')
    await page.screenshot({'path': '/tmp/example.png'})
    await browser.close()
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
Exception:
Response:
{
  "errorMessage": "Browser closed unexpectedly:\n",
  "errorType": "BrowserError",
  "stackTrace": [
    " File \"/var/task/lambda_handler.py\", line 23, in lambda_handler\n asyncio.get_event_loop().run_until_complete(main())\n",
    " File \"/var/lang/lib/python3.8/asyncio/base_events.py\", line 616, in run_until_complete\n return future.result()\n",
    " File \"/var/task/lambda_handler.py\", line 72, in main\n browser = await launch({\n",
    " File \"/opt/python/pyppeteer/launcher.py\", line 307, in launch\n return await Launcher(options, **kwargs).launch()\n",
    " File \"/opt/python/pyppeteer/launcher.py\", line 168, in launch\n self.browserWSEndpoint = get_ws_endpoint(self.url)\n",
    " File \"/opt/python/pyppeteer/launcher.py\", line 227, in get_ws_endpoint\n raise BrowserError('Browser closed unexpectedly:\\n')\n"
  ]
}
Request ID:
"06be0620-8b5c-4600-a76e-bc785210244e"
Function Logs:
START RequestId: 06be0620-8b5c-4600-a76e-bc785210244e Version: $LATEST
---- files in /tmp ----
[W:pyppeteer.chromium_downloader] start chromium download.
Download may take a few minutes.
0%| | 0/108773488 [00:00<?, ?it/s]
11%|█▏ | 12267520/108773488 [00:00<00:00, 122665958.31it/s]
27%|██▋ | 29470720/108773488 [00:00<00:00, 134220418.14it/s]
42%|████▏ | 46172160/108773488 [00:00<00:00, 142570388.86it/s]
58%|█████▊ | 62607360/108773488 [00:00<00:00, 148471487.93it/s]
73%|███████▎ | 79626240/108773488 [00:00<00:00, 154371569.93it/s]
88%|████████▊ | 95754240/108773488 [00:00<00:00, 156353972.12it/s]
100%|██████████| 108773488/108773488 [00:00<00:00, 161750092.47it/s]
[W:pyppeteer.chromium_downloader]
chromium download done.
[W:pyppeteer.chromium_downloader] chromium extracted to: /tmp/local-chromium/588429
-----
/tmp/local-chromium/588429/chrome-linux/chrome
[ERROR] BrowserError: Browser closed unexpectedly:
Traceback (most recent call last):
  File "/var/task/lambda_handler.py", line 23, in lambda_handler
    asyncio.get_event_loop().run_until_complete(main())
  File "/var/lang/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/var/task/lambda_handler.py", line 72, in main
    browser = await launch({
  File "/opt/python/pyppeteer/launcher.py", line 307, in launch
    return await Launcher(options, **kwargs).launch()
  File "/opt/python/pyppeteer/launcher.py", line 168, in launch
    self.browserWSEndpoint = get_ws_endpoint(self.url)
  File "/opt/python/pyppeteer/launcher.py", line 227, in get_ws_endpoint
    raise BrowserError('Browser closed unexpectedly:\n')
END RequestId: 06be0620-8b5c-4600-a76e-bc785210244e
REPORT RequestId: 06be0620-8b5c-4600-a76e-bc785210244e Duration: 33370.61 ms Billed Duration: 33400 ms Memory Size: 3008 MB Max Memory Used: 481 MB Init Duration: 445.58 ms
I think BrowserError: Browser closed unexpectedly is just the error you get when Chrome crashes for whatever reason. It would be nice if pyppeteer printed out the error, but it doesn't.
To track things down, it's helpful to pull up the exact command that pyppeteer runs. You can do that this way:
>>> from pyppeteer.launcher import Launcher
>>> ' '.join(Launcher().cmd)
/root/.local/share/pyppeteer/local-chromium/588429/chrome-linux/chrome --disable-background-networking --disable-background-timer-throttling --disable-breakpad --disable-browser-side-navigation --disable-client-side-phishing-detection --disable-default-apps --disable-dev-shm-usage --disable-extensions --disable-features=site-per-process --disable-hang-monitor --disable-popup-blocking --disable-prompt-on-repost --disable-sync --disable-translate --metrics-recording-only --no-first-run --safebrowsing-disable-auto-update --enable-automation --password-store=basic --use-mock-keychain --headless --hide-scrollbars --mute-audio about:blank --no-sandbox --remote-debugging-port=33423 --user-data-dir=/root/.local/share/pyppeteer/.dev_profile/tmp5cj60q6q
When I ran that command in my Docker image, I got the following error:
$ /root/.local/share/pyppeteer/local-chromium/588429/chrome-linux/chrome # ...
/root/.local/share/pyppeteer/local-chromium/588429/chrome-linux/chrome:
error while loading shared libraries:
libnss3.so: cannot open shared object file: No such file or directory
So I installed libnss3:
apt-get install -y libnss3
Then I ran the command again and got a different error:
$ /root/.local/share/pyppeteer/local-chromium/588429/chrome-linux/chrome # ...
[0609/190651.188666:ERROR:zygote_host_impl_linux.cc(89)] Running as root without --no-sandbox is not supported. See https://crbug.com/638180.
So I needed to change my launch command to something like:
browser = await launch(headless=True, args=['--no-sandbox'])
and now it works!
Answering my own question.
Finally I was able to run Pyppeteer (v0.2.2) with Python 3.6 and 3.7 (not 3.8) after I bundled a Chromium binary in a Lambda layer.
So, in summary, it appears to work only when it is configured with a user-provided Chromium executable path, not with the automatically downloaded Chrome. Probably some race condition or something.
Got Chromium from https://github.com/adieuadieu/serverless-chrome/releases/download/v1.0.0-41/stable-headless-chromium-amazonlinux-2017-03.zip
browser = await launch(
    headless=True,
    executablePath='/opt/python/headless-chromium',
    args=[
        '--no-sandbox',
        '--single-process',
        '--disable-dev-shm-usage',
        '--disable-gpu',
        '--no-zygote'
    ])
Issue posted on repo https://github.com/pyppeteer/pyppeteer/issues/108
I have been trying to run pyppeteer in a Docker container and ran into the same issue.
I finally managed to fix it thanks to this comment: https://github.com/miyakogi/pyppeteer/issues/14#issuecomment-348825238
I installed Chrome manually through apt:
curl -sSL https://dl.google.com/linux/linux_signing_key.pub | apt-key add -
echo "deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list
apt update -y && apt install -y google-chrome-stable
and then specified the path when launching the browser. You also have to run it headless and with the "--no-sandbox" arg:
browser = await launch(executablePath='/usr/bin/google-chrome-stable', headless=True, args=['--no-sandbox'])
Hope this will help!
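Put together as a Dockerfile, those steps might look like this (a sketch only; the base image and the pip install line are assumptions on my part, while the apt commands are the ones above):

FROM python:3.8-slim
# Install Google Chrome from Google's apt repository (same steps as above).
RUN apt-get update -y && apt-get install -y curl gnupg \
 && curl -sSL https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
 && echo "deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main" \
      > /etc/apt/sources.list.d/google-chrome.list \
 && apt-get update -y && apt-get install -y google-chrome-stable
RUN pip install pyppeteer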
If anybody is running on Heroku and facing the same error:
Add the buildpack. The URL for the buildpack is below:
https://github.com/jontewks/puppeteer-heroku-buildpack
Ensure that you're using --no-sandbox mode:
launch({ args: ['--no-sandbox'] })
Make sure all the necessary dependencies are installed. You can run ldd /path/to/your/chrome | grep not on a Linux machine to check which dependencies are missing.
In my case, I got this:
libatk-bridge-2.0.so.0 => not found
libgtk-3.so.0 => not found
and then installed the dependencies:
sudo apt-get install at-spi2-atk gtk3
(package names vary by distribution; on Debian/Ubuntu the equivalents would be libatk-bridge2.0-0 and libgtk-3-0)
and now it works!
I am here to ask for some help because I can't reach a solution, and I have spent so much time on this.
The problem is weird behaviour in karma + jasmine tests. Initially I thought the problem was in the AngularJS code but, stripping it down piece by piece, I reached the point where there is nothing else to remove, and the problem is 100% not in Angular.
The actual code that I am using is this:
test.js:
'use strict';

describe('Unit tests suite', function () {
  it('test', function () {
    expect('base').toEqual('');
  });
});
karma.conf.js:
module.exports = function (config) {
  config.set({
    basePath: '',
    frameworks: ['jasmine'],
    files: ['*.js'],
    exclude: [],
    preprocessors: {},
    reporters: ['progress'],
    port: 9876,
    colors: true,
    logLevel: config.LOG_INFO,
    autoWatch: true,
    browsers: ['PhantomJS'],
    singleRun: false,
  })
}
Absolutely nothing else. The result of that test is:
13 02 2016 04:32:39.559:WARN [karma]: No captured browser, open http://localhost:9876/
13 02 2016 04:32:39.571:INFO [karma]: Karma v0.13.15 server started at http://localhost:9876/
13 02 2016 04:32:39.578:INFO [launcher]: Starting browser PhantomJS
13 02 2016 04:32:41.248:INFO [PhantomJS 2.1.1 (Mac OS X 0.0.0)]: Connected on socket HiC4WW_4235Nlf0rAAAA with id 54292207
PhantomJS 2.1.1 (Mac OS X 0.0.0) Unit tests suite test FAILED
Expected '/Users/Gianmarco/Desktop/test' to equal ''.
/Users/Gianmarco/Desktop/test/test.js:5:31
PhantomJS 2.1.1 (Mac OS X 0.0.0): Executed 1 of 1 (1 FAILED) ERROR (0.003 secs / 0.003 secs)
As you can see, it seems the word "base" is being replaced with the path of the test folder. This is making me go nuts; I can't figure out why it is doing so.
I tried both on macOS and Ubuntu 14.04, and the result is the same.
To prepare the folder I did this:
mkdir test
cd test
npm install jasmine-core karma-cli karma-jasmine karma-phantomjs-launcher phantomjs-prebuilt --save
karma init
karma start
and of course my system already had karma-cli installed globally (npm install karma-cli -g) from some time ago.
The versions are:
jasmine-core@2.4.1
karma@0.13.21
karma-cli@0.1.2
karma-jasmine@0.3.7
karma-phantomjs-launcher@1.0.0
phantomjs-prebuilt@2.1.4
The same behaviour occurs with the word absolute, which is replaced with an empty string.
I believe it's a problem with the default reporter in karma (progress): its URL_REGEXP matches the bare words base and absolute, since all the rest of the regex is optional.
var URL_REGEXP = new RegExp('(?:https?:\\/\\/[^\\/]*)?\\/?' +
  '(base|absolute)' +              // prefix
  '((?:[A-z]\\:)?[^\\?\\s\\:]*)' + // path
  '(\\?\\w*)?' +                   // sha
  '(\\:(\\d+))?' +                 // line
  '(\\:(\\d+))?' +                 // column
  '', 'g')
https://github.com/karma-runner/karma/blob/684ab1838c6ad7127df2f1785c1f56520298cd6b/lib/reporter.js#L25
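A quick way to see the false positive (a sketch you can run in Node, inlining the same pattern; karma's reporter rewrites every match into a file-system path rooted at basePath, which is how 'base' became '/Users/Gianmarco/Desktop/test'):

var URL_REGEXP = new RegExp('(?:https?:\\/\\/[^\\/]*)?\\/?' +
  '(base|absolute)((?:[A-z]\\:)?[^\\?\\s\\:]*)' +
  '(\\?\\w*)?(\\:(\\d+))?(\\:(\\d+))?', 'g');

// The bare word "base" in the failure message matches even though
// there is no URL in it at all ("absolute" behaves the same way).
// The path part also swallows the closing quote, hence the lost apostrophe.
console.log("Expected 'base' to equal ''.".replace(URL_REGEXP, '<rewritten>'));
// -> Expected '<rewritten> to equal ''.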