Cypress Code Coverage - One file with 0% coverage should fail the test

Please help, Stack Overflow!
What's going on: one file is at 100% coverage.
The other file does not have a test, so it shows 0%.
The coverage check seems to ignore my 0% file and only reads the file with 100%.
Because of this, I want the check to fail, so that each file is required to have a test.
I want to note that code coverage does work... as long as I'm getting a result other than 0.
-------------------|---------|----------|---------|---------|-------------------
File               | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
-------------------|---------|----------|---------|---------|-------------------
All files          |     100 |      100 |     100 |     100 |
 molecules/boxes   |     100 |      100 |     100 |     100 |
  Box.tsx          |     100 |      100 |     100 |     100 |
 molecules/fakebox |       0 |        0 |       0 |       0 |
  fakebox.tsx      |       0 |        0 |       0 |       0 |
-------------------|---------|----------|---------|---------|-------------------
fakebox.tsx does not have a test, so the check should fail, but it doesn't.
Here is my nyc configuration:
"nyc": {
"check-coverage": true,
"all": true,
"branches": 90,
"lines": 90,
"functions": 90,
"statements": 90,
"average": false,
"skipEmpty": false,
"perFile": true,
"include": "src/lib/components/**/*.tsx"
},
I thought perFile would help in that case, but it's still ignoring my 0%.
I think I've tried most of nyc's configuration options, but please let me know if there's something I wasn't aware of.
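For reference, my understanding is that the same thresholds can also be checked from the command line (a sketch, assuming coverage data has already been written to .nyc_output):
npx nyc check-coverage --per-file --statements 90 --branches 90 --functions 90 --lines 90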
In short: my tests are passing when they shouldn't.
Thanks so much in advance!

Related

How to write multiple arrow/parquet files in chunks while reading in large quantities of data, so that all written files form one dataset?

I'm working in R with the arrow package. I have multiple tsv files:
sample1.tsv
sample2.tsv
sample3.tsv
sample4.tsv
sample5.tsv
...
sample50.tsv
each of the form
| id | start| end | value|
| --- | -----|-----|------|
| id1 | 1 | 3 | 0.2 |
| id2 | 4 | 6 | 0.5 |
| id. | ... | ... | ... |
| id2 | 98 | 100 | 0.5 |
and an index file:
| id | start| end |
| --- | -----|-----|
| id1 | 1 | 3 |
| id2 | 4 | 6 |
| id. | ... | ... |
| id2 | 98 | 100 |
I use the index file to left join with each sample on id, start and end, to get a data table like this:
| id | start| end | sample 1| sample 2| sample ...|
| --- | -----|-----|---------|---------|-----------|
| id1 | 1 | 3 | 0.2 | 0.1 | ... |
| id2 | 4 | 6 | 0.5 | 0.8 | ... |
| id. | ... | ... | ... | ... | ... |
| id2 | 98 | 100 | 0.5 | 0.6 | ... |
With multiple samples the table grows wider. I'd like to read the samples in chunks (e.g. chunk_size = 5), and once chunk_size samples have been read and joined, write that joined data table as a parquet file to disk.
Currently, I'm able to write each chunked data table to disk, and I read them back with open_dataset(datadir). In a loop with i as the sample number:
# read and join
...
if (i %% chunk_size == 0) {
  write_parquet(joined_table, paste0("datadir/", "chunk", i / chunk_size, ".parquet"))
}
...
# clear the data table of samples
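Spelled out, the loop is roughly the following (a sketch only: the file names, the fread/merge calls and the column renaming are simplified stand-ins for my real code):
library(arrow)
library(data.table)

chunk_size <- 5
sample_files <- sprintf("sample%d.tsv", 1:50)  # assumed naming
index <- fread("index.tsv")                    # columns: id, start, end
joined_table <- copy(index)

for (i in seq_along(sample_files)) {
  sample_dt <- fread(sample_files[i])
  setnames(sample_dt, "value", paste0("sample", i))
  # left join this sample's values onto the accumulated table
  joined_table <- merge(joined_table, sample_dt,
                        by = c("id", "start", "end"), all.x = TRUE)
  if (i %% chunk_size == 0) {
    write_parquet(joined_table,
                  paste0("datadir/", "chunk", i / chunk_size, ".parquet"))
    joined_table <- copy(index)  # clear the accumulated samples
  }
}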
However, even though the arrow package says it read as many files as were written, when I check the columns available, only the columns from the first chunk are found.
data <- arrow::open_dataset("datadir")
data
# FileSystemDataset with 10 Parquet files
# id: string
# start: int32
# end: int32
# sample1: double
# sample2: double
# sample3: double
# sample4: double
# sample5: double
Samples 6-50 are missing. Reading the parquet files individually shows that each one contains the samples from its chunk.
data2 <- arrow::open_dataset("datadir/chunk2.parquet")
data2
# FileSystemDataset with 1 Parquet file
# id: string
# start: int32
# end: int32
# sample6: double
# sample7: double
# sample8: double
# sample9: double
# sample10: double
Are parquet files the right format for this task? I'm not sure what I'm missing to make a set of separate files that all read back in as one dataset.
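One thing I'm wondering about (an assumption on my part from the docs, not something I've verified): open_dataset() may only inspect the first file's schema by default, and it takes a unify_schemas argument to scan every file instead:
library(arrow)

# ask the dataset to union the schemas of all chunk files
data <- open_dataset("datadir", unify_schemas = TRUE)
data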

Grafana & Elastic - How to count sub array length

So I have a document that has two nested arrays, i.e.
foo.bars[].baz[]
I am trying to figure out how I can use Grafana to group by bars and give me a count of baz entries for each bar. So it would look something like:
| bars.id | count |
| ------- | ----- |
| 1       | 10    |
| 2       | 15    |
| 3       | 20    |
What I have tried is the following:
Group by bars.id
Add a Sum metric for bars.baz.id
Override the script value to return 1
While this does give me a count, it is the count for all bars in the document and is not grouped by bars.id, i.e.:
| bars.id | count |
| ------- | ----- |
| 1       | 45    |
| 2       | 45    |
| 3       | 45    |
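For reference, my understanding is that the raw Elasticsearch aggregation I'm after would look roughly like this (an assumption on my part: it presumes bars and baz are mapped as nested fields and that id is a keyword; I haven't managed to express it through Grafana):
{
  "size": 0,
  "aggs": {
    "bars": {
      "nested": { "path": "foo.bars" },
      "aggs": {
        "by_bar_id": {
          "terms": { "field": "foo.bars.id" },
          "aggs": {
            "bazes": {
              "nested": { "path": "foo.bars.baz" },
              "aggs": {
                "baz_count": { "value_count": { "field": "foo.bars.baz.id" } }
              }
            }
          }
        }
      }
    }
  }
}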
Any help to achieve this would be much appreciated.
Now, if this can be done, I have another, more complex problem. I have another collection, let's call it bobs, that is a child of the root document. bobs isn't nested under the bars array, but each entry has a bar_id field. I would also like to count these per bar, i.e.:
{
  bobs: [
    {bar_id: 1},
    {bar_id: 2}
  ],
  bars: [
    {id: 1, bazes: []},
    {id: 2, bazes: []}
  ]
}
In this case I would also like the table to include:
| bars.id | bobs.count |
| ------- | ---------- |
| 1       | 1          |
| 2       | 1          |
| 3       | 0          |
Is this possible?

Getting cumulative risk values

Consider the following toy example:
use https://data.princeton.edu/pop509/justices2.dta, clear
stset tenure, fail(event == 1)
stcrreg age year, compete(event == 2)
stcurve, cif
I want to plot a cumulative incidence curve as done above but then I want to store those values with their 95% confidence intervals. However, it is not clear to me how to access/store them as variables.
Cross-posted at Statalist.
Use the outfile() option of the stcurve command:
stcurve, cif outfile(stdata)
use stdata
list in 1/10
+---------------------+
| ci1 _t |
|---------------------|
1. | .0465373 5.691992 |
2. | 0 1.045859 |
3. | .2600816 20.6078 |
4. | .1169629 8.876112 |
5. | .0465373 5.724846 |
|---------------------|
6. | .1249585 9.440109 |
7. | 0 .4462697 |
8. | .1574731 13.49213 |
9. | .1991083 15.36756 |
10. | .0232038 4.769336 |
+---------------------+
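If you then want to work with the stored values, a small follow-up sketch (assuming the saved file contains just the ci1 and _t variables listed above):
use stdata, clear
sort _t
line ci1 _t, ytitle("Cumulative incidence") xtitle("Analysis time")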

Simulate output with 3 cases

Is it physically possible to simulate this situation on a board, using electronic components?
I have 2 inputs A and B, each with 3 possible values (-1, 0, 1). My final aim is to achieve the following truth table:
 A |  B | result
-1 | -1 | +1
-1 | +1 |  0
 0 |  0 |  0
 0 | +1 | +1
+1 | -1 |  0
+1 |  0 | +1
+1 | +1 | -1
In pseudo code:
if (A equals B)
    result = A * -1
else
    result = A + B
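To be precise about the behaviour I'm after, here is a small C sketch of that pseudo code (software only; the hardware realisation is what I'm asking about):
#include <stdio.h>

/* result = -A when A equals B, otherwise A + B */
static int ternary_result(int a, int b) {
    return (a == b) ? -a : a + b;
}

int main(void) {
    /* print a row for every combination of A and B */
    for (int a = -1; a <= 1; a++)
        for (int b = -1; b <= 1; b++)
            printf("%2d | %2d | %2d\n", a, b, ternary_result(a, b));
    return 0;
}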
Yes, it is absolutely possible; this is what today's CPUs are built from: logic gates.
Depending on your project, you probably won't need an Intel processor to do this; much simpler components can do just that. See the link above for example components that do it.

Magento Indexes Issue - Can't reindex

I have a problem with index management in my Magento 1.6.2.0 store. Basically, I can't get the indexes to update. The status says Processing, and it has said that for over 3 weeks now.
When I try to reindex I get the message "Stock Status Index process is working now. Please try run this process later", but "later" has now been 3 weeks. It looks like the process is frozen, but I don't know how to restart it.
Any ideas?
Cheers
Whenever you start an indexing process, Magento writes out a lock file to the var/locks folder.
$ cd /path/to/magento
$ ls var/locks
index_process_1.lock index_process_4.lock index_process_7.lock
index_process_2.lock index_process_5.lock index_process_8.lock
index_process_3.lock index_process_6.lock index_process_9.lock
The lock file prevents another user from starting an indexing process. However, if the indexing request times out or fails before it can complete, the lock file is left behind in a locked state. That's probably what happened to you. I'd recommend checking the last-modified dates on the lock files to make sure someone else isn't running the re-indexer right now, and then removing the lock files. This will clear up your
Stock Status Index process is working now. Please try run this process later
error. After that, run the indexers one at a time to make sure each one completes.
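For example (a sketch; adjust the path to your own install):
cd /path/to/magento
ls -l var/locks                      # check the last-modified times first
rm var/locks/index_process_*.lock    # then remove the stale lock files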
Did you run the script manually? If not, create a file in your root folder and put this code in it:
require_once 'app/Mage.php';
umask(0);
Mage::app("default");
$process = Mage::getSingleton('index/indexer')->getProcessByCode('catalog_product_flat');
$process->reindexAll();
This code reindexes your Magento store manually. If your store contains a large number of products, reindexing can take a long time, so Index Management in the admin panel may show some indexes stuck in the Processing stage; this code may help move them from Processing back to Ready.
You can also run the indexers over SSH if you have access; that is faster too.
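For example, over SSH (a sketch assuming a stock Magento 1 install with the bundled shell scripts; paths may differ on your server):
cd /path/to/magento
php shell/indexer.php info           # list the available indexer codes
php shell/indexer.php --reindexall   # rebuild all indexes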
For newer versions of Magento, i.e. 2.1.3, I had to use this solution:
http://www.elevateweb.co.uk/magento-ecommerce/magento-error-sqlstatehy000-general-error-1205-lock-wait-timeout-exceeded
This can happen if you are running a lot of custom scripts and killing them before the database connection gets a chance to close.
If you log in to MySQL from the CLI and run the command
SHOW PROCESSLIST;
you will get output like the following:
+---------+---------+-------------------+---------+---------+------+-------+------+-----------+---------------+-----------+
| Id      | User    | Host              | db      | Command | Time | State | Info | Rows_sent | Rows_examined | Rows_read |
+---------+---------+-------------------+---------+---------+------+-------+------+-----------+---------------+-----------+
| 6794372 | db_user | 111.11.0.65:21532 | db_name | Sleep   | 3800 |       | NULL |         0 |             0 |         0 |
| 6794475 | db_user | 111.11.0.65:27488 | db_name | Sleep   | 3757 |       | NULL |         0 |             0 |         0 |
| 6794550 | db_user | 111.11.0.65:32670 | db_name | Sleep   | 3731 |       | NULL |         0 |             0 |         0 |
| 6794797 | db_user | 111.11.0.65:47424 | db_name | Sleep   | 3639 |       | NULL |         0 |             0 |         0 |
| 6794909 | db_user | 111.11.0.65:56029 | db_name | Sleep   | 3591 |       | NULL |         0 |             0 |         0 |
| 6794981 | db_user | 111.11.0.65:59201 | db_name | Sleep   | 3567 |       | NULL |         0 |             0 |         0 |
| 6795096 | db_user | 111.11.0.65:2390  | db_name | Sleep   | 3529 |       | NULL |         0 |             0 |         0 |
| 6795270 | db_user | 111.11.0.65:10125 | db_name | Sleep   | 3473 |       | NULL |         0 |             0 |         0 |
| 6795402 | db_user | 111.11.0.65:18407 | db_name | Sleep   | 3424 |       | NULL |         0 |             0 |         0 |
| 6795701 | db_user | 111.11.0.65:35679 | db_name | Sleep   | 3330 |       | NULL |         0 |             0 |         0 |
| 6800436 | db_user | 111.11.0.65:57815 | db_name | Sleep   | 1860 |       | NULL |         0 |             0 |         0 |
| 6806227 | db_user | 111.11.0.67:20650 | db_name | Sleep   |  188 |       | NULL |         1 |             0 |         0 |
+---------+---------+-------------------+---------+---------+------+-------+------+-----------+---------------+-----------+
15 rows in set (0.00 sec)
You can see, as an example, Id 6794372: the command is Sleep and the time is 3800. This is preventing other operations.
These processes should be killed one by one using the command:
KILL 6794372;
Once you have killed all the sleeping connections, things should start working as normal again.
Whenever you start an indexing process, Magento writes a lock file to the var/locks folder. So you need to do two steps:
Give 777 permissions to the var/locks folder.
Delete all files in the var/locks folder.
Now refresh the Index Management page in the admin panel.
Enjoy!
