Where to include socket.io-client and socket.io-stream client side? - socket.io

I am quite new to javascript and I am following the example here: https://www.npmjs.com/package/socket.io-stream to attempt to set up a very simple file transfer that automatically just transfers the file as soon as the client connects.
However, I am getting the error:
Error: Module name "socket.io-client" has not been loaded yet for context
When I remove that line, just to see whether the stream part works on its own, I get:
Error: Module name "socket.io-stream" has not been loaded yet for context
The missing socket.io-client sort of makes sense to me, as I have not included it anywhere in my <script> tags, although the tutorial listed above makes no mention of it. The missing socket.io-stream doesn't make sense to me, as I followed the tutorial's advice and ran
cp node_modules/socket.io-stream/socket.io-stream.js public/static/js
(Where index.html resides in public/) and included the JS file in my index.html
Kind of having a tough time finding answers on this one, any help you can offer is greatly appreciated!
Here is my server side code:
var fs = require('fs'); // needed for createWriteStream below
var path = require('path');
var io = require('socket.io').listen(80);
var ss = require('socket.io-stream');

io.of('/').on('connection', function(socket) {
    ss(socket).on('write-file', function(stream, data) {
        var filename = path.basename(data.name);
        stream.pipe(fs.createWriteStream(filename));
    });
});
and client side code:
index.html:
<script src="/socket.io/socket.io.js"></script>
<script src="/static/js/socket.io-stream.js"></script>
<script src="/static/js/require.js"></script>
<script
src="https://code.jquery.com/jquery-3.1.1.min.js"
integrity="sha256-hVVnYaiADRTO2PzUGmuLJr8BLUSjGIZsDYGmIJLv2b8="
crossorigin="anonymous"></script>
<script src="/client.js"></script>
client.js:
var io = require('socket.io-client');
var ss = require('socket.io-stream');
var socket = io.connect('http://localhost:3000');
var stream = ss.createStream();
ss(socket).emit('write-file', stream, { name: 'test.txt' });
fs.createReadStream('test.txt').pipe(stream);
Here is a tree directory of my project
├── index.js
├── node_modules
│   ├── socket.io
│   ├── socket.io-stream
│   ├── socket.io-client
│   └── ... a billion other node modules
└── public
    ├── index.html
    ├── static
    │   ├── glyphicons
    │   ├── js
    │   │   ├── client.js
    │   │   ├── require.js
    │   │   └── socket.io-stream.js
    │   └── style.css
    └── test.txt


How to add a logo to my readthedocs - logo rendering at 0px wide

This happens locally with Sphinx running the readthedocs theme, and it also happens on readthedocs.io.
I have added an svg logo (actually it's just the downloaded rtd logo.svg copied from their site for testing).
I've added the settings to conf.py and html builds fine.
html_theme = 'sphinx_rtd_theme'
html_static_path = ['_static']
html_logo = 'logo.svg'
html_theme_options = {
    'logo_only': True,
    'display_version': False,
}
If I inspect the logo class in Firefox, its width is set to "auto"; if I add a width in px, the logo appears.
I feel as if I am missing something about the configuration of the readthedocs theme in conf.py.
Surely I should not have to hack at the CSS manually: I see no indication of altered CSS in the Readthedocs.io site when looking at their source.
I'm looking for an elegant solution - I do not want updates to readthedocs theme to break my site because I have been overriding the theme's CSS.
You're doing it correctly:
html_theme = 'sphinx_rtd_theme'
html_static_path = ['_static']
html_logo = "mepro_headshot.png"
html_theme_options = {
    'logo_only': True,
    'display_version': False,
}
I just added the logo in my docs/source/, and when you run make html it copies your png or svg files into docs/html/_static/. As mentioned in the documentation: New in version 0.4.1: The image file will be copied to the _static directory of the output HTML, but only if the file does not already exist there.
├── docs
│   ├── html
│   │   └── _static
│   │       ├── mepro_headshot.png
│   │       └── mepro_headshot.svg
│   └── source
│       ├── _images
│       ├── _static
│       ├── _templates
│       ├── conf.py
│       ├── index.rst
│       ├── mepro_headshot.png
│       └── mepro_headshot.svg
and it seems both svg and png work.
I had a similar issue; I resolved it by including the _static directory in the html_logo parameter.
html_theme = 'alabaster'
html_static_path = ['_static']
html_logo = "_static/logo_rw.png"
I had the same problem with .svg and width: auto rendering at zero px. For anyone who does want to set the CSS, here is a solution:
sphinx-rtd-theme v0.5.0, sphinx v3.4.3
docs/_build/html/_static/css/custom.css:
/*
`width: auto` was rendering 0px wide for .svg files
https://stackoverflow.com/questions/59215996/how-to-add-a-logo-to-my-readthedocs-logo-rendering-at-0px-wide
*/
.wy-side-nav-search .wy-dropdown > a img.logo,
.wy-side-nav-search > a img.logo {
    width: 275px;
}
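For Sphinx to actually load that override, the stylesheet can be registered in conf.py rather than edited into the theme. A minimal sketch, assuming custom.css is kept at _static/css/custom.css in the doc sources (not in the build output, which gets regenerated) and Sphinx >= 1.8, which introduced html_css_files:

```python
# conf.py (sketch) -- register the override stylesheet so it survives
# theme updates; path below is relative to html_static_path.
html_static_path = ['_static']        # copied into the built _static/
html_css_files = ['css/custom.css']   # assumed location of the override
```

This keeps the rule in your own static files, so updates to the readthedocs theme won't clobber it.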

Altering snakemake workflow to anticipate and accommodate different data-structures

I have an existing snakemake RNAseq workflow that works fine with a directory tree as below. I need to alter the workflow so that it can accommodate another layer of directories. Currently, I use a Python script that os.walks the parent directory and creates a json file for the sample wildcards (the json file is also included below). I am not very familiar with Python, and it seems to me that adapting the code for an extra layer of directories shouldn't be too difficult; I was hoping someone would be kind enough to point me in the right direction.
RNAseqTutorial/
├── Sample_70160
│   ├── 70160_ATTACTCG-TATAGCCT_S1_L001_R1_001.fastq.gz
│   └── 70160_ATTACTCG-TATAGCCT_S1_L001_R2_001.fastq.gz
├── Sample_70161
│   ├── 70161_TCCGGAGA-ATAGAGGC_S2_L001_R1_001.fastq.gz
│   └── 70161_TCCGGAGA-ATAGAGGC_S2_L001_R2_001.fastq.gz
├── Sample_70162
│   ├── 70162_CGCTCATT-ATAGAGGC_S3_L001_R1_001.fastq.gz
│   └── 70162_CGCTCATT-ATAGAGGC_S3_L001_R2_001.fastq.gz
├── Sample_70166
│   ├── 70166_CTGAAGCT-ATAGAGGC_S7_L001_R1_001.fastq.gz
│   └── 70166_CTGAAGCT-ATAGAGGC_S7_L001_R2_001.fastq.gz
├── scripts
├── groups.txt
└── Snakefile
{
    "Sample_70162": {
        "R1": [
            "/gpfs/accounts/SlurmMiKTMC/Sample_70162/Sample_70162.R1.fq.gz"
        ],
        "R2": [
            "/gpfs/accounts/SlurmMiKTMC/Sample_70162/Sample_70162.R2.fq.gz"
        ]
    }
}
The structure I need to accommodate is below
RNAseqTutorial/
├── part1
│   ├── 030-150-G
│   │   ├── 030-150-GR1_clipped.fastq.gz
│   │   └── 030-150-GR2_clipped.fastq.gz
│   ├── 030-151-G
│   │   ├── 030-151-GR1_clipped.fastq.gz
│   │   └── 030-151-GR2_clipped.fastq.gz
│   ├── 100T
│   │   ├── 100TR1_clipped.fastq.gz
│   │   └── 100TR2_clipped.fastq.gz
├── part2
│   ├── 030-025G
│   │   ├── 030-025GR1_clipped.fastq.gz
│   │   └── 030-025GR2_clipped.fastq.gz
│   ├── 030-131G
│   │   ├── 030-131GR1_clipped.fastq.gz
│   │   └── 030-131GR2_clipped.fastq.gz
│   ├── 030-138G
│   │   ├── 030-138R1_clipped.fastq.gz
│   │   └── 030-138R2_clipped.fastq.gz
├── part3
│   ├── 030-103G
│   │   ├── 030-103GR1_clipped.fastq.gz
│   │   └── 030-103GR2_clipped.fastq.gz
│   ├── 114T
│   │   ├── 114TR1_clipped.fastq.gz
│   │   └── 114TR2_clipped.fastq.gz
├── scripts
├── groups.txt
└── Snakefile
The main script that generates the json file for the sample wildcards is below
for root, dirs, files in os.walk(args):
    for file in files:
        if file.endswith("fq.gz"):
            full_path = join(root, file)
            # R1 will be forward reads, R2 will be reverse reads
            m = re.search(r"(.+)\.(R[12])\.fq\.gz", file)
            if m:
                sample = m.group(1)
                reads = m.group(2)
                FILES[sample][reads].append(full_path)
I just can't seem to wrap my head around a way to accommodate that extra layer. Is there another module or function other than os.walk? Could I somehow force os.walk to skip a directory and merge the part and sample prefixes? Any suggestions would be helpful!
Edited to add:
I wasn't clear in describing my problem, and I noticed that the second example wasn't representative of it, so I fixed the examples accordingly (the second tree was taken from a directory processed by someone else). The data I get comes in two forms. The first is samples of only one tissue, where the directory consists of the WD, sample folders, and fastq files, and the fastq files have the same prefix as the sample folders they reside in. The second example is of samples from two tissues. These tissues must be processed separately from each other, but tissues of both types can be found in separate "Parts", and tissues of the same type from different "Parts" must be processed together. If I could get os.walk to return four tuples, or even use
root, dirs, *files = os.walk('Somedirectory')
where the * would append the rest of the directory string to the files variable. Unfortunately, this method does not go down to the file level for the third child directory 'root/part/sample/fastq'. In an ideal world, the same snakemake pipeline would handle both scenarios with minimal input from the user. I understand that this may not be possible, but I figured I'd ask and see if there is a module that can return all portions of each sample directory string.
It seems to me that your problem doesn't have much to do with how to accommodate the second layer. Instead, the question is about the specification of the directory trees and file names you expect.
In the first case, it seems you can extract the sample name from the first part of the file name. In the second case, file names are all the same and the sample name comes from the parent directory. So, either you implement some logic that tells which naming scheme you are parsing (and this depends on who/what provides the files) or you always extract the sample name from the parent directory as this should work also for the first case (but again, assuming you can rely on such naming scheme).
If you want to go for the second option, something like this should do:
FILES = {}
for root, dirs, files in os.walk('RNAseqTutorial'):
    for file in files:
        if file.endswith("fastq.gz"):
            sample = os.path.basename(root)
            full_path = os.path.join(root, file)
            if sample not in FILES:
                FILES[sample] = {}
            if 'R1' in file:
                reads = 'R1'
            elif 'R2' in file:
                reads = 'R2'
            else:
                raise Exception('Unexpected file name')
            if reads not in FILES[sample]:
                FILES[sample][reads] = []
            FILES[sample][reads].append(full_path)
Not sure if I understand correctly, but here you go:
for root, dirs, files in os.walk(args):
    for file in files:
        if file.endswith("fq.gz"):
            full_path = join(root, file)
            reads = 'R1' if 'R1' in file else 'R2'
            sample = root.split('/')[-1]
            FILES[sample][reads].append(full_path)
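A self-contained sketch of the parent-directory approach, run against a throwaway tree with hypothetical names, shows why the extra "part" layer needs no special handling: os.walk recurses into every subdirectory, and the sample name is simply the basename of the directory the fastq file sits in:

```python
import os
import tempfile
from collections import defaultdict

# Build a throwaway tree mimicking the nested part/sample layout
# (directory and sample names are made up for the demo).
root = tempfile.mkdtemp()
for part, sample in [("part1", "030-150-G"), ("part2", "030-025G")]:
    d = os.path.join(root, part, sample)
    os.makedirs(d)
    for read in ("R1", "R2"):
        open(os.path.join(d, sample + read + "_clipped.fastq.gz"), "w").close()

# Same logic as the answers above: sample = basename of the containing dir.
FILES = defaultdict(lambda: defaultdict(list))
for dirpath, dirnames, filenames in os.walk(root):
    for f in filenames:
        if f.endswith("fastq.gz"):
            sample = os.path.basename(dirpath)
            reads = "R1" if "R1" in f else "R2"
            FILES[sample][reads].append(os.path.join(dirpath, f))

print(sorted(FILES))  # -> ['030-025G', '030-150-G']
```

The same loop also handles the flat one-layer layout unchanged, since there the containing directory is the sample folder itself.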

Can't load static resources from custom taglib in Spring Boot

I'm experimenting with Spring Boot, and I'm trying to create a simple custom tag that acts as a wrapper for my JSPs. However, I have problems loading static resources from that tag.
Below is my directory structure:
.
├── pom.xml
├── src
│   ├── main
│   │   ├── java
│   │   │   └── com....
│   │   ├── resources
│   │   │   ├── application.properties
│   │   │   └── static
│   │   │       ├── js
│   │   │       ├── css
│   │   │       └── libraries
│   │   │           ├── jquery
│   │   │           │   └── jquery.min.js
│   │   │           ├── bootstrap
│   │   │           └── ...
│   │   └── webapp
│   │       └── WEB-INF
│   │           ├── tags
│   │           │   └── main.tag
│   │           ├── tlds
│   │           └── views
│   │               └── productDetails.jsp
│   └── test
│       └── java
│           └── com
Whenever a page loads, the content is fine (I can check the HTML of the page), but the static resources fail to load, giving me console errors:
jquery.min.js:1 Failed to load resource: the server responded with a status of 404 ()
localhost/:1 Refused to apply style from 'http://localhost:8080/product/libraries/bootstrap/bootstrap.min.css' because its MIME type ('application/json') is not a supported stylesheet MIME type, and strict MIME checking is enabled.
An interesting thing is that when I replace the resources with CDN links, it works fine. Somehow this is related to Spring, but I can't figure out how.
The main.tag is as follows:
<%@ tag description="Core Page Template" %>
<%@ attribute name="header" fragment="true" required="false" %>
<%@ attribute name="jsImports" fragment="true" required="false" %>
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <link rel="stylesheet" href="libraries/bootstrap/bootstrap.min.css">
    <!-- I've even tried all possible combinations of that link
    (e.g. /libraries or /static/libraries, or I moved the whole directory
    under /webapp or /webapp/WEB-INF/),
    but nothing seems to work. All the time I get 404 responses from the
    server -->
    <jsp:invoke fragment="header"/>
</head>
<body>
    <jsp:doBody/>
    <script src="libraries/jquery/jquery.min.js"></script>
</body>
</html>
Which I can use from a JSP like:
<%@ page contentType="text/html;charset=UTF-8" %>
<%@ taglib prefix="tt" tagdir="/WEB-INF/tags" %>
<tt:main>
    <jsp:attribute name="header">
        <title>Products - List</title>
    </jsp:attribute>
    <jsp:body>
        Do body....
    </jsp:body>
</tt:main>
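For what it's worth, the 404 URLs in the console (e.g. /product/libraries/...) suggest relative URL resolution: Spring Boot serves files under src/main/resources/static from the application root, so a page rendered under /product/... turns libraries/... into /product/libraries/.... A sketch of the same links written root-relative, assuming the default static-resource mapping and the paths from the tree above:

```jsp
<%-- Root-relative, so resolution no longer depends on the current page URL.
     ${pageContext.request.contextPath} keeps the links working if the app
     is ever deployed under a servlet context path. --%>
<link rel="stylesheet" href="${pageContext.request.contextPath}/libraries/bootstrap/bootstrap.min.css">
<script src="${pageContext.request.contextPath}/libraries/jquery/jquery.min.js"></script>
```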

GraphQL test schema and resolvers

WHAT I TRIED
I am using Jest for testing resolvers and schema, but I am having trouble creating the folder structure. Currently I import resolver functions, call them, and compare the result or check that fields are defined, but that does not always cover complex scenarios.
WHAT I AM LOOKING FOR
What are the best practices to test a GraphQL schema and resolver functions, and which testing tool is recommended or most widely used?
Also, you can try this npm package, which will test your schema, queries and mutations... there is an example of it using mocha & chai; here is the link.
What you need to do is import the schema and pass it to the easygraphql-tester; then you can create unit tests.
There are multiple frameworks out there for integration-testing your API using, for example, YAML files in which you specify request and response. A simpler approach can be to use Jest snapshots and simply execute test queries using the graphql function from graphql-js. It returns a promise with the result; you can then await it and expect it to match the snapshot.
import { graphql } from 'graphql';
import schema from './schema';
import createContext from './createContext';
import database from './database'; // assumed test-database helper

describe('GraphQL Schema', () => {
  let context;

  beforeAll(() => {
    context = createContext();
    database.setUp();
  });

  afterAll(() => {
    database.tearDown();
  });

  it('should resolve simple query', async () => {
    const query = '{ hello }';
    const result = await graphql(schema, query, null, context);
    expect(result).toMatchSnapshot();
  });
});
Tip: You can also create dynamic tests, for example by reading queries from files in a directory and iterating over them, creating a new test for each file. An example of that (not GraphQL, though) can be found on my GitHub.
There is no single recommended way to do it, especially for the documents and folder structure.
In my case, I am working on this repo, and this is my folder structure at the first level:
src/
├── App.js
├── configs
├── helpers
├── index.js
├── middlewares
├── models
├── resolvers
├── routes
├── schema
├── seeds
├── templates
├── tests
└── utils
At the root I have the tests folder, mainly to check the app's basic behavior and some utility functions. On the other hand, inside resolvers I have the main tests for the GraphQL queries and mutations.
src/resolvers/
├── camping
│   ├── camping.mutations.js
│   ├── camping.query.js
│   ├── camping.query.test.js
│   └── camping.resolver.js
├── clientes.resolver.js
├── guest
│   ├── guest.mutation.js
│   ├── guest.mutation.test.js
│   ├── guest.query.js
│   ├── guest.query.test.js
│   └── guest.resolver.js
├── index.js
├── turno
│   ├── turno.mutations.js
│   ├── turno.query.js
│   ├── turno.query.test.js
│   └── turno.resolver.js
└── users
    ├── user.mutations.js
    ├── user.mutations.test.js
    ├── user.queries.js
    ├── user.query.test.js
    └── user.resolver.js
Every single resolver has its own test; you can check there whether the basic endpoints are working as expected.
I am planning to add some workflow tests; they will go in the root tests folder later.

Firefox CSS sourcemaps not updating rendered page

I'm attempting to use source maps to live-edit scss in Firefox.
I followed the steps outlined in the documentation. My scss sources are visible and editable in the dev tools, but changes made to them are not reflected until I perform a manual page refresh.
My setup:
A local webserver with the following in its document root:
/
├── index.html
└── assets
    └── css
        ├── main.css
        ├── main.css.map
        └── scss
            └── main.scss
index.html:
<!DOCTYPE html>
<html>
<head>
    <link rel="stylesheet" href="/assets/css/main.css">
</head>
<body>
    42
</body>
</html>
main.scss:
body {
    background-color: lightblue;
}
And, running in the css directory to generate the main.css and main.css.map files:
sass --sourcemap=file --watch scss:.
When I go to localhost/index.html in Firefox I can see the scss file, but when I save it over /path/to/document_root/assets/css/scss/main.scss the "main.css" displayed under it is struck through:
Saving it triggers sass to rebuild as expected, but Firefox doesn't seem to pick up on the rebuilt css until I refresh.
