Error while running Apache Bench: could not open POST data file, "The filename, directory name, or volume label syntax is incorrect" - apachebench

I am trying to run a scenario using Apache Bench (ab).
My current path is D:\ecif-documents\performance.
The POST data file corp_sample.json is in the same directory.
I am running my test case on Windows.
All the syntaxes I tried give errors. Any inputs?
Method 1:
ab -n 5 -p "file:///D:/cif-documents/performance/corp_sample.json" -T "application/json" "http://localhost:3000/api/corporates/enrollcorporate?access_token=lsM7Ar3FqX4pO36ORAqI3q0Km3OPioyLYougQSD8oVtXI4mAXPU5jocZx9QJKYcz" -v 4
Error: ab: Could not open POST data file (file:///D:/cif-documents/performance/corp_sample.json): The filename, directory name, or volume label syntax is incorrect.
Method 2:
D:\ecif-documents\performance>ab -n 5 -p "corp_sample.json" -T "application/json" "http://localhost:3000/api/corporates/enrollcorporate?access_token=lsM7Ar3FqX4pO36ORAqI3q0Km3OPioyLYougQSD8oVtXI4mAXPU5jocZx9QJKYcz" -v 4
Error: Could not stat POST data file (corp_sample.json): Partial results are valid but processing is incomplete
Method 3:
D:\ecif-documents\performance>ab -n 5 -p "D:\cif-documents\performance\corp_sample.json" -T "application/json" "http://localhost:3000/api/corporates/enrollcorporate?access_token=lsM7Ar3FqX4pO36ORAqI3q0Km3OPioyLYougQSD8oVtXI4mAXPU5jocZx9QJKYcz" -v 4
Error:
ab: Could not open POST data file (D:\cif-documents\performance\corp_sample.json): The system cannot find the path specified.
Method 4:
D:\ecif-documents\performance>ab -n 5 -p "http://localhost:3000/corp_sample.json" -T "application/json" "http://localhost:3000/api/corporates/enrollcorporate?access_token=lsM7Ar3FqX4pO36ORAqI3q0Km3OPioyLYougQSD8oVtXI4mAXPU5jocZx9QJKYcz" -v 4
Error: ab: Could not open POST data file (http://localhost:3000/corp_sample.json): The filename, directory name, or volume label syntax is incorrect.
I am able to open the URL (http://localhost:3000/corp_sample.json) in a browser and see the JSON.
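No accepted answer appears above, but two details stand out: -p expects a plain filesystem path (Methods 1 and 4 pass URLs, which ab cannot open), and Methods 1 and 3 spell the directory D:\cif-documents while the stated working directory is D:\ecif-documents, so the missing "e" alone would explain "The system cannot find the path specified". A minimal shell sketch of sanity-checking the path before invoking ab (the existence check is generic; on any other machine it will of course report the file missing):

```shell
# ab reads the -p file from the filesystem: it does not understand
# file:// or http:// URLs. Verify the exact path you will pass to -p.
# The path below is the question's; forward slashes are generally
# accepted by Windows builds of ab as well.
postfile="D:/ecif-documents/performance/corp_sample.json"
if [ -f "$postfile" ]; then
    echo "POST data file found"
else
    echo "POST data file missing: re-check the path (ecif vs cif)"
fi
# then, from any directory:
# ab -n 5 -p "$postfile" -T application/json "http://localhost:3000/api/corporates/enrollcorporate?access_token=..."
```

Method 2's bare relative path is the right form once the file is actually readable from the current directory.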

Related

baseDir issue with nextflow

This might be a very basic question for you guys; however, I have just started with Nextflow and I am struggling with the simplest example.
I will first explain what I have done and then the problem.
Aim: I aim to make a workflow for my bioinformatics analyses like the one here (https://www.nextflow.io/example4.html).
Background: I have installed all the packages that were needed and they all work from the console without any error.
My run: I have used the same script as in the example, only replacing the directory names. Here is how I have arranged the directories:
location of script
~/raman/nflow/script.nf
location of Fastq files
~/raman/nflow/Data/T4_1.fq.gz
~/raman/nflow/Data/T4_2.fq.gz
Location of transcriptomic file
~/raman/nflow/Genome/trans.fa
The script
#!/usr/bin/env nextflow
/*
* The following pipeline parameters specify the reference genomes
* and read pairs and can be provided as command line options
*/
params.reads = "$baseDir/Data/T4_{1,2}.fq.gz"
params.transcriptome = "$baseDir/HumanGenome/SalmonIndex/gencode.v42.transcripts.fa"
params.outdir = "results"
workflow {
read_pairs_ch = channel.fromFilePairs( params.reads, checkIfExists: true )
INDEX(params.transcriptome)
FASTQC(read_pairs_ch)
QUANT(INDEX.out, read_pairs_ch)
}
process INDEX {
tag "$transcriptome.simpleName"
input:
path transcriptome
output:
path 'index'
script:
"""
salmon index --threads $task.cpus -t $transcriptome -i index
"""
}
process FASTQC {
tag "FASTQC on $sample_id"
publishDir params.outdir
input:
tuple val(sample_id), path(reads)
output:
path "fastqc_${sample_id}_logs"
script:
"""
fastqc "$sample_id" "$reads"
"""
}
process QUANT {
tag "$pair_id"
publishDir params.outdir
input:
path index
tuple val(pair_id), path(reads)
output:
path pair_id
script:
"""
salmon quant --threads $task.cpus --libType=U -i $index -1 ${reads[0]} -2 ${reads[1]} -o $pair_id
"""
}
Output:
(base) ntr@ser:~/raman/nflow$ nextflow script.nf
N E X T F L O W ~ version 22.10.1
Launching `script.nf` [modest_meninsky] DSL2 - revision: 032a643b56
executor > local (2)
[- ] process > INDEX (gencode) -
[28/02cde5] process > FASTQC (FASTQC on T4) [100%] 1 of 1, failed: 1 ✘
[- ] process > QUANT -
Error executing process > 'FASTQC (FASTQC on T4)'
Caused by:
Missing output file(s) `fastqc_T4_logs` expected by process `FASTQC (FASTQC on T4)`
Command executed:
fastqc "T4" "T4_1.fq.gz T4_2.fq.gz"
Command exit status:
0
Command output:
(empty)
Command error:
Skipping 'T4' which didn't exist, or couldn't be read
Skipping 'T4_1.fq.gz T4_2.fq.gz' which didn't exist, or couldn't be read
Work dir:
/home/ruby/raman/nflow/work/28/02cde5184f4accf9a05bc2ded29c50
Tip: view the complete command output by changing to the process work dir and entering the command `cat .command.out`
I believe I have an issue with my understanding of baseDir. I am assuming that baseDir is the directory containing my script.nf file. I am not sure what is going wrong and how I can fix it.
Could anyone please help or guide me?
Thank you
Caused by:
Missing output file(s) `fastqc_T4_logs` expected by process `FASTQC (FASTQC on T4)`
Nextflow complains when it can't find the declared output files. This can occur even if the command completes successfully, i.e. with exit status 0. The problem here is that fastqc simply skips files that don't exist or can't be read (e.g. permission problems), but it does produce these warnings:
Skipping 'T4' which didn't exist, or couldn't be read
Skipping 'T4_1.fq.gz T4_2.fq.gz' which didn't exist, or couldn't be read
The solution is to just make sure all files exist. Note that the fromFilePairs factory method produces a list of files in the second element. Therefore quoting a space-separated pair of filenames is also problematic. All you need is:
script:
"""
fastqc ${reads}
"""

How to curl a post request with a bash array?

So I am trying to create a script that will send data to a Slack hook app. This is my current code:
DATA="[app_code: app_code | error invalid _id: alsdjasdlkj];[app_code: app_code | error invalid _type: 1a1]"
IFS=';' ;read -ra errors <<< "$DATA"
unset IFS
cmd=$(curl -X POST -H 'Content-type: application/json' --data '{"text": "'"${errors[@]}"'" }' hook_url)
echo $cmd
The issue I am having is that I want to send a list of errors as an array, as I will never know how many there could be at any given time. When I run this bash command, this is the error I get:
curl: (3) bad range in URL position 2:
[app_code: app_code | error invalid _type: 1a1]" }
So how can I fix the code to send an array in the curl data field? And if that's not possible, is there a workaround? I need the text to show up in the Slack channel as follows:
Total errors: 2
[app_code: app_code | error invalid _id: alsdjasdlkj]
[app_code: app_code | error invalid _type: 1a1]
Any help is appreciated. Thanks!
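No answer is shown for this one, but a sketch of one approach: join the array into a single newline-separated string with the count on top, and let a JSON encoder handle the escaping instead of splicing the expansion into the payload by hand (the unquoted splice is what made curl treat the second element as a URL):

```shell
# Rebuild the asker's message: count on the first line, one error per line.
DATA="[app_code: app_code | error invalid _id: alsdjasdlkj];[app_code: app_code | error invalid _type: 1a1]"
IFS=';' read -ra errors <<< "$DATA"   # per-command IFS: no unset needed

text="Total errors: ${#errors[@]}"
for e in "${errors[@]}"; do
    text+=$'\n'"$e"
done
printf '%s\n' "$text"

# To post it, let jq do the JSON escaping of quotes and newlines
# (jq assumed installed; hook_url is the question's placeholder):
# curl -X POST -H 'Content-type: application/json' \
#      --data "$(jq -n --arg text "$text" '{text: $text}')" "$hook_url"
```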

Batch processing: file name comparison error

I have written a program (Cifti_subject_fmri) which compares whether file names match in two folders and, if so, executes a set of instructions:
#!/bin/bash -- fix_mni_paths
source activate ciftify_v1.0.0
export SUBJECTS_DIR=/scratch/m/mchakrav/dev/functional_data
export HCP_DATA=/scratch/m/mchakrav/dev/tCDS_ciftify
## make the $SUBJECTS_DIR if it does not already exist
mkdir -p ${HCP_DATA}
SUBJECTS=`cd $SUBJECTS_DIR; ls -1d *` ## list of my subjects
HCP=`cd $HCP_DATA; ls -1d *` ## List of HCP Subjects
cd $HCP_DATA
## submit the files to the queue
for i in $SUBJECTS;do
for j in $HCP ; do
if [[ $i == $j ]];then
parallel "echo ciftify_subject_fmri $i/filtered_func_data.nii.gz $j fMRI " ::: $SUBJECTS |qbatch --walltime '05:00:00' --ppj 8 -c 4 -j 4 -N ciftify_subject_fmri -
fi
done
done
When I run this code on the cluster, I am getting an error which says
./Cifti_subject_fmri: [[AS1: command not found
The command ciftify_subject_fmri is part of the ciftify toolbox; for it to execute, it requires the following usage:
ciftify_subject_fmri <func.nii.gz> <Subject> <NameOffMRI>
I have 33 subjects [AS1 -AS33], each with its own func.nii.gz file located in the SUBJECTS directory; the results need to be populated in the HCP directory; fMRI is the name passed as <NameOffMRI>.
Could someone kindly let me know why I am getting an error in the loop?
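No answer appears above, but the error message is diagnostic: ./Cifti_subject_fmri: [[AS1: command not found means the shell read [[AS1 as a single word, which happens when there is no space between [[ and $i. The listing here does show the space, so the script actually on the cluster most likely differs from the pasted copy. A minimal reproduction of the rule, with hypothetical values:

```shell
i=AS1; j=AS1

# if [[$i == $j]]; then ...   # shell would look for a command named "[[AS1"
if [[ $i == $j ]]; then       # [[ is a word of its own: spaces are required
    echo "match: $i"
fi
```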

Bash: adding input file names to output results

I'm using a cURL API call to submit files to an API service, and it returns something called a task_id for the files submitted.
#submitter.sh
creation_date=$(date +"%m_%d_%Y")
task_id_file="/home/results/$creation_date"_task_ids".csv"
for i in $(find $1 -type f);do
task_id="$(curl -s -F file=@$i http://X.X.X.X:XXXX/api/tiscale/v1/upload)"
final_task_id=$(echo $task_id | grep -o 'http\?://[^"]\+')
echo "$final_task_id" >> $task_id_file
done
#Result ( 10_13_2016_task_ids.csv )
http://192.168.122.24:8080/api/tiscale/v1/task/17
http://192.168.122.24:8080/api/tiscale/v1/task/18
http://192.168.122.24:8080/api/tiscale/v1/task/19
Run method:
$./submitter.sh /home/files/pdf/
Now, using the [find $1 -type f] logic gets the full path with the file name included.
#find /home/files/pdf -type f
/home/files/pdf/country.pdf
/home/files/pdf/summary.pdf
/home/files/pdf/age.pdf
How can I add the file names along with the cURL API response result? For example, when submitting "/home/files/country.pdf", the API might give the task_id http://192.168.122.24:8080/api/tiscale/v1/task/17.
Expecting Result :
country.pdf,http://192.168.122.24:8080/api/tiscale/v1/task/17
summary.pdf,http://192.168.122.24:8080/api/tiscale/v1/task/18
age.pdf,http://192.168.122.24:8080/api/tiscale/v1/task/19
I'm a beginner in Bash. Any suggestions on how to achieve this?
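One way (a sketch; the file path and task URL mirror the question's examples, and emit_row is a hypothetical helper): strip the submitted file down to its basename and print it in front of the URL the API returned:

```shell
# Pair a submitted file's basename with the task URL the API returned,
# producing one CSV row.
emit_row() {
    local file="$1" task_url="$2"
    printf '%s,%s\n' "$(basename "$file")" "$task_url"
}

emit_row /home/files/pdf/country.pdf "http://192.168.122.24:8080/api/tiscale/v1/task/17"
```

In the original loop this amounts to `echo "$(basename "$i"),$final_task_id" >> "$task_id_file"`. Separately, a `while IFS= read -r i` loop over the `find` output would also survive filenames containing spaces, which `for i in $(find $1 -type f)` does not.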

Extracting a URL from a list of flash variables in bash

I want to download the (.smil) file with bash.
The link of the file looks like this:
http://website.fr:1535/3PBOLaEQ2kC19nmxtIYg8a4ziKlPQ9l0Jkn2hecxIexEZYc32znTlugcyxus%3D-08Fw3XtDFE9wrMbCGOZTOw%3D%3D.mp4?audioindex=0.smil
The filename (everything after the last / and before the .mp4) changes on each reload, and is embedded in the site's code, within the flashvars parameter:
<param name="flashvars" value="netstreambasepath=http%3A%2F%2Fwebsite.fr%2Fvideo%2Fcelestial_method%2F5675-episode-4-04-fragments-d-emotions&id=yui_3_17_2_13_1414622045495_169&image=http%3A%2F%2Fi.jpg&skin=%2Fcomponents%2Fcom_vodvideo%2Fmediaplayer%2Fskin%2Fadn%2Fadn.xml&bufferlength=16000&repeat=list&title=undefined&logo=undefined&plugins=http%3A%2F%2Fwebsite.fr%2Fcomponents%2Fcom_vodvideo%2Fmediaplayer%2Fplugins%2Fcontrol%2Fcontrol.swf%2Chttp%3A%2F%2...wJgu2BiCfbqlqd6sQDZUlMO56C270iwoWT7GZ6txc%253D-ep69DPqWGsFsiQgVBAbiHQ%253D%253D.mp4%3Faudioindex%3D0.smil%22%2C%22default%22%3A%22http%3A%2F%website.fr%3A1935%2FS2v5wo0p7Fum7GI8_WlyBJU%252BXtVRjXY%252BkTuo_TXa0Wv0tpLLzR37DWx0AQkK52G9FMwJgu2BiCfbqlqd6sQDZUlMO56C270iwoWT7GZ6txc%253D-ep69DPqWGsFsiQgVBAbiHQ%253D%253D.mp4%3Faudioindex%3D0.smil%22%7D%5D&control.pluginmode=FLASH&mulutibu_v4_3.back=false&mulutibu_v4_3.cc=true&mulutibu_v4_3.pluginmode=FLASH&controlbar.position=over&dock.position=true">
How can I extract the link to launch VLC with the file directly?
The following would be a place to start:
# credit to https://gist.github.com/cdown/1163649
urldecode() {
local url_encoded="${1//+/ }"
printf '%b' "${url_encoded//%/\\x}"
}
flashvars_u=$(
curl http://your-website/ | \
xmllint --html --xmlout - | \
xmlstarlet sel -t -m '//param[@name="flashvars"]' -v @value
)
flashvars=$(urldecode "$flashvars_u")
Further extracting content from flashvars is hindered by the content being provided only in redacted/modified form, making it impossible to test whether code is correct.
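That caveat granted, once the value is decoded, pulling a .smil URL out of it is a simple pattern match. A sketch on a toy flashvars string (the string and hostname are made up, since the real value is redacted; the decode-then-grep pattern is the point):

```shell
# URL-decode by rewriting %XX as \xXX and letting printf %b interpret it.
urldecode() {
    local url_encoded="${1//+/ }"
    printf '%b' "${url_encoded//%/\\x}"
}

# Toy stand-in for the real flashvars value.
flashvars='file=http%3A%2F%2Fwebsite.fr%3A1535%2Fabc.mp4%3Faudioindex%3D0.smil&skin=foo'
decoded=$(urldecode "$flashvars")
url=$(grep -o 'http://[^&]*\.smil' <<< "$decoded")
echo "$url"
# vlc "$url"   # hand the extracted URL straight to VLC
```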
