Cypress aliased variable is inaccessible

After I upgraded to Cypress 12 this problem started happening; the only change to my code was the Cypress upgrade. The problem is that when I attempt to access my aliases, some are accessible and others are not. In my test I have aliased two values, noRows and role_0.
When I attempt to access them, noRows is accessible but role_0 is not, and the test fails because of it: the noRows alias resolves without a problem, but the test keeps trying to access role_0 and is never able to.
This is where I alias them:
cy.get('#role_0').invoke('text').invoke('trim').as(`role_0`);
cy.wrap(index + 1).as('noRows');
And later in this test I attempt to access them:
cy.get('@noRows').then(noRows => {
  cy.get(`#row`).should('have.length', noRows);
});
cy.get(`@role_0`).should('eq', shipment_ST);
Any ideas on how to solve this problem?

This might be due to the revamp of the alias behaviour in version 12.
Prior to 12, an alias would only be re-evaluated when a "detached from DOM" error occurred.
From 12 onwards it re-evaluates every time it is used, which is a breaking change if your test relies on the value captured when the alias was created.
This doesn't exactly sound like the problem you are describing, but it's worth trying the "fix" anyway.
See Changelog 12.4.0
The .as command now accepts an options argument, allowing an alias to be stored as type "query" or "static" value. This is stored as "query" by default. Addresses #25173.
So in your test, use .as('role_0', { type: 'static' }) to make it behave more like a fixed variable than a retriable query.
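For example, applied to the aliases in your question (a sketch only; index and shipment_ST are assumed to come from elsewhere in your test):
// Alias role_0 as a static value so it keeps the text captured here,
// instead of re-running the query chain every time the alias is used
cy.get('#role_0').invoke('text').invoke('trim').as('role_0', { type: 'static' });
cy.wrap(index + 1).as('noRows');

// Later in the test
cy.get('@noRows').then(noRows => {
  cy.get('#row').should('have.length', noRows);
});
cy.get('@role_0').should('eq', shipment_ST);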

Related

How to Fix Document Not Found errors with find

I have a collection of Person documents, stored in a legacy MongoDB server (2.4) and accessed with the Mongoid gem via the Ruby MongoDB driver.
If I perform a
Person.where(email: 'some.existing.email@server.tld').first
I get a result (let's assume I store the id in a variable called "the_very_same_id_obtained_above")
If I perform a
Person.find(the_very_same_id_obtained_above)
I get a
Mongoid::Errors::DocumentNotFound
exception
If I use the javascript syntax to perform the query, the result is found
Person.where("this._id == #{the_very_same_id_obtained_above}").first # this works!
I'm currently trying to migrate the data to a newer version. At the moment I'm running mongorestore into Amazon DocumentDB (MongoDB 3.6 compatible) for testing, and the issue remains.
One thing I noticed is that those object ids are peculiar:
5ce24b1169902e72c9739ff6 this works anyway
59de48f53137ec054b000004 this requires the trick
The run of zeroes toward the end of the id seems to be highly correlated with the problem (I have no idea why).
Raising on a missing document is Mongoid's default behaviour, controlled by this option in mongoid.yml:
# Raise an error when performing a #find and the document is not found.
# (default: true)
raise_not_found_error: true
Source: https://docs.mongodb.com/mongoid/current/tutorials/mongoid-configuration/#anatomy-of-a-mongoid-config
If this doesn't answer your question, it's very likely the find method is overridden somewhere in your code!
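If you just want a missing document to come back as nil instead of raising, here is a rough sketch of the two usual options (the option name is from the configuration docs linked above; Person and the id variable are from the question):
# Option 1: in config/mongoid.yml, under the environment's options block:
#   raise_not_found_error: false
#
# Option 2: per call, use a criteria query, which simply returns nil when nothing matches
person = Person.where(_id: the_very_same_id_obtained_above).first
if person.nil?
  # handle the missing document explicitly instead of rescuing DocumentNotFound
end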

Joomla 3.0 generic database error handling

Going from Joomla 2.5 to 3.0 with my extension, I'm struggling with how to do the DB error handling (since getErrorNum() is deprecated; see also Joomla! JDatabase::getErrorNum() is deprecated, use exception handling instead).
The approach suggested in the question linked above is to add the following code for each $db->query() call:
if (!$db->query()) {
    throw new Exception($db->getErrorMsg());
}
In my opinion, that makes DB error handling more awkward than it was before. So far, I simply called a checkDBError() function after a DB call, which queried the ErrorNum and handled any possible error accordingly.
That was independent of how the DB query was actually triggered - there are different ways to do that, with different results on an error: $db->loadResult() returns null on error, $db->query() returns false. So there will now have to be different checks for different DB access types.
Isn't there any generic way to handle this, e.g. a way to tell Joomla to throw some exception on DB problems? Or do I have to write my own wrapper around the DatabaseDriver to achieve that? Or am I maybe missing something obvious?
Or should I just ignore the deprecation warning for now and continue with using getErrorNum()? I'd like to make my extension future-proof, but I also don't want to clutter it too much with awkward error handling logic.
Just found this discussion: https://groups.google.com/forum/#!msg/joomla-dev-general/O-Hp0L6UGcM/XuWLqu2vhzcJ
As I interpret it, there is that deprecation warning, but there is no proper replacement yet anyway...
Unless somebody points out proper documentation of how to do it in 3.0, I will stick with the getErrorNum() approach for now...
The getErrorNum() function will solve your problem:
$result = $db->loadResult();

// Check for a database error.
if ($db->getErrorNum())
{
    JFactory::getApplication()->enqueueMessage($db->getErrorMsg());
    return false;
}
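If you do decide to switch to the exception style later, the pattern in Joomla 3.x looks roughly like this (a sketch, assuming $query holds your query; in 3.x the database driver throws a RuntimeException when a query fails):
try
{
    $db->setQuery($query);
    $result = $db->loadResult();
}
catch (RuntimeException $e)
{
    // Surface the database error to the user instead of checking an error number
    JFactory::getApplication()->enqueueMessage($e->getMessage(), 'error');
    return false;
}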

Splunk-client (with Nokogiri) giving Undefined Namespace Prefix

I'm using splunk-client to extract results from splunk. Here's the code:
query = "sourcetype=collection #{order_id}"
search = @splunk_client.search(query)
search.wait
The search is happening fine, and it seems like I'm doing everything according to the example (https://github.com/cbrito/splunk-client), but I get this error on the 'search.wait' line:
Undefined namespace prefix: //s:key[@name='isDone']
Any ideas what could be going wrong? Running these commands in irb works fine. Is there some sort of blocking issue?
There is currently very little error checking within the gem itself. The reason for the error is that wait polls the isDone key, waiting for it to change to true.
Since your credentials were not properly set up in the first place, the gem creates a search object with an invalid session. The search does not fail immediately, because enough of a response comes back from Splunk for Nokogiri to process it into an object, just one without a Splunk search sid.
In the future I should likely raise an exception if a proper sid is not returned to avoid confusion.
Source: I wrote the gem.
I found the issue -- the Splunk client wasn't authenticating properly, so search was actually a broken SplunkJob object (with a nil username and authentication key). It's strange that no error was raised until the wait command, but on inspecting the search object, one of its fields showed that it was malformed.
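For anyone hitting the same thing, here is a minimal setup sketch (the require and constructor lines follow the gem's README as best I can tell, so treat them as illustrative; order_id is just a placeholder):
require 'splunk-client'

# Bad credentials do not raise here; they only surface later as a malformed
# SplunkJob whose wait loop never sees the isDone key become true.
splunk_client = SplunkClient.new("username", "password", "splunk-host")

order_id = "12345" # placeholder value
search = splunk_client.search("sourcetype=collection #{order_id}")
search.wait # blocks until the job reports isDone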

Language Service: ParseReason.Check never called after migrating to VS2010

I just migrated my language service from VS2008 to VS2010. Everything works fine except for one important thing: I no longer get LanguageService.ParseSource invoked for ParseReason.Check. I do get a single invocation after opening a file, but after editing code it no longer gets invoked.
Any ideas what could be causing that?
I also migrated a language service from 2008 to 2010. Can you check whether you've followed all of these steps?
http://msdn.microsoft.com/en-us/library/dd885475.aspx
I didn't have to do anything else, which I verified by diffing the important files in our depot before and after the change.
I don't know if you ever figured this out, but have you tried making sure that your Source class's LastParseTime is set to 0 when creating it? I seem to recall some issues with Check not happening unless you manually set LastParseTime to 0 when creating your Source object.
Protip: If you use .NET Reflector, you can disassemble all of the base classes for the LanguageService framework and get a pretty good understanding of how it all works under the hood. The classes you'd be interested in live in Microsoft.VisualStudio.Package.LanguageService.10.0.dll, which should be installed in the GAC. I've found this to be unimaginably helpful when trying to figure out why things weren't working in my own Language Service, and being able to step through the source code in the debugger mitigates almost all the pain of working with these frameworks!
When your Source object is initialized, it starts off with a LastParseTime of Int32.MaxValue. The code that fires off a ParseRequest with ParseReason.Check looks at the LastParseTime value to see whether the time since the last change to the text is less than the time it takes to run a parse (or the CodeSenseDelay setting, whichever is greater).
The code that handles the response from ParseSource is supposed to set the LastParseTime, but as far as I can tell, it only does that if the ParseReason is Check.
You can get around this issue by setting Source.LastParseTime = 0 when you initialize your Source. This has the side-effect of setting CompletedFirstParse to true, even if the first parse hasn't finished yet.
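A minimal sketch of that first workaround, assuming you follow the usual MPF pattern of overriding CreateSource on your LanguageService subclass (MySource stands in for your own Source-derived class):
public override Source CreateSource(IVsTextLines buffer)
{
    MySource source = new MySource(this, buffer, this.GetColorizer(buffer));
    // Start from 0 instead of Int32.MaxValue so ParseReason.Check requests get scheduled
    source.LastParseTime = 0;
    return source;
}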
Another way to fix this issue is to override Source.OnIdle to fire off the first call to BeginParse(). This is the approach I would recommend.
public override void OnIdle(bool periodic)
{
    // Once first "Check" parse completes, revert to base implementation
    if (this.CompletedFirstParse)
    {
        base.OnIdle(periodic);
    }
    // Same early-outs as the base implementation, except we don't check lastParseTime
    else if (!periodic || this.LanguageService == null || this.LanguageService.LastActiveTextView == null || this.IsCompletorActive || !this.IsDirty || this.LanguageService.IsParsing)
    {
        return;
    }
    else
    {
        this.BeginParse();
    }
}

rails session_store odd behaviour

I am using active_record_store in a Rails application, and I store this in the session: session[:email] = "email@address.com"
This works fine within the action, but when the action finishes and redirects to another page that also accesses session[:email], I get an error:
undefined method `eq' for nil:NilClass
This should probably mean I am trying to compare values somewhere I'm not allowed to, but I cannot see anything like that in the code.
This looks like an old question, but I was just having the same problem and had to figure it out on my own, and thought I would post the solution up here for anyone else that runs into this. It's not very well documented, but to get this to work you have to add:
config.action_dispatch.session_store = :active_record_store
to application.rb, and
Application.config.session_store :active_record_store
to config/initializers/session_store.rb. Then, you have to do:
rake db:sessions:create
and:
rake db:migrate
Then you have to restart your Rails server. I think it was the db:sessions:create step that tripped up the original poster. Not only does that database table have to be laid out the way Rails is expecting (that is, with an 'id' column, which I think is the actual cause of this error), but the current session also has to have a valid ID. Hence the need to create the table and restart the server, or potentially empty the table if it already exists.
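For reference, the sessions table that rake db:sessions:create builds looks roughly like this (a sketch; the column names follow the Active Record session store conventions of that Rails era):
class CreateSessions < ActiveRecord::Migration
  def change
    create_table :sessions do |t|
      t.string :session_id, :null => false
      t.text :data
      t.timestamps
    end

    # The session store looks up rows by session_id, so index it
    add_index :sessions, :session_id
    add_index :sessions, :updated_at
  end
end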
