Getting error when trying to echo string to a file - shell

I have got the below content from a JSON API. I am trying to create a Markdown file from this content using the below script.
body="**Weekly Report 28-10-2020 for ravsam-web**\r\n\r\nJekyll website of RavSam\r\n\r\n**Updations**\r\n\r\n- Added json feed blog with 110 changes\r\n- Just some minor layout changes with 43 changes\r\n- Replaced blog images path with 6 changes\r\n- Commit added support for json feed with 155 changes\r\n\r\n\r\n```\r\nsend-email ravgeetdhillon#gmail.com\r\n```\r\n\r\n<sub>Created by [Github Actions](http://github.com/ravsamhq/client-reports)</sub>\r\n"
echo $body > ravgeet.md
Whenever I execute the above script, I get this error markdown.sh: line 3: rnsend-email: command not found. Any reason why?
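For what it's worth, the error comes from the backticks inside the double-quoted string: the shell performs command substitution on the backtick-fenced part, and the escaped \r\n collapses into a command name, so it tries to run rnsend-email. A minimal sketch of one way around this (assuming the \r\n sequences are meant to become real line breaks in the Markdown file; the body text is truncated here for brevity):
# single quotes stop the shell from expanding the backticks inside the string
body='**Weekly Report 28-10-2020 for ravsam-web**\r\n\r\n...rest of the report content...'
# printf '%b' turns the \r\n escape sequences into actual line endings;
# a plain echo would either keep them literal or depend on shell-specific behaviour
printf '%b' "$body" > ravgeet.md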

Related

fatal error: An error occurred (404) when calling the HeadObject operation: Key " " does not exist

This is my setup:
I use AWS Batch that is running a custom Docker image
The startup.sh file is an entrypoint script that reads the nth line of a text file and copies the objects listed on that line from S3 into the Docker container.
For example, if the first line of the .txt file is 'Startup_000017 Startup_000018 Startup_000019', the bash script reads this line and uses a for loop to copy the files over.
This is part of my bash script:
STARTUP_FILE_S3_URL=s3://cmtestbucke/Config/
# read the LINE-th line of file.txt, which holds the space-separated object names
Startup_FileNames=$(sed -n ${LINE}p file.txt)
for i in ${Startup_FileNames}
do
  Startup_FileURL=${STARTUP_FILE_S3_URL}$i
  echo $Startup_FileURL
  # copy each object in the background
  aws s3 cp ${Startup_FileURL} /home/CM_Projects/ &
done
Here is the log output from aws:
s3://cmtestbucke/Config/Startup_000017
s3://cmtestbucke/Config/Startup_000018
s3://cmtestbucke/Config/Startup_000019
Completed 727 Bytes/727 Bytes (7.1 KiB/s) with 1 file(s) remaining download: s3://cmtestbucke/Config/Startup_000018 to Data/Config/Startup_000018
Completed 731 Bytes/731 Bytes (10.1 KiB/s) with 1 file(s) remaining download: s3://cmtestbucke/Config/Startup_000017 to Data/Config/Startup_000017
fatal error: *An error occurred (404) when calling the HeadObject operation: Key
"Config/Startup_000019 " does not exist.*
My s3 bucket certainly contains the object s3://cmtestbucke/Config/Startup_000019
I noticed this happens regardless of filenames. The last iteration always gives this error.
I tested this bash logic locally with the same aws commands. It copies all 3 files.
Can someone please help me figure out what is wrong here?
The problem was with the EOL of the text file. It was set to Windows (CR LF), while the Docker image runs Ubuntu, which caused the error. I changed the EOL to Unix (LF) and the problem was solved.
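If converting the file itself is not convenient, the same fix can be applied inside the script by stripping the carriage returns as the line is read, for example (a sketch reusing the variables from the question):
# tr -d '\r' drops the Windows CR so the last filename on the line
# no longer carries a stray character into the S3 key
Startup_FileNames=$(sed -n "${LINE}p" file.txt | tr -d '\r')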

JMeter - Run a .jmx file through the command line and get the summary report in HTML, CSV as well as XML output - all three

Currently, I am running the below on the command line. When I add CSV, it doesn't give output in CSV format. Can you please provide the complete command for all three outputs?
Sample 1:
JMeter -Jjmeter.save.saveservice.samplerData=true -Jjmeter.save.saveservice.response_data=true -Jjmeter.save.saveservice.output_format=xml -Jjmeter.save.saveservice.responseHeaders=true -Jjmeter.save.saveservice.requestHeaders=true -n -t ProgramServices.jmx -l JmeterReports\TestReport.xml -j JmeterReports\jmeter.log
Sample 2:
JMeter -n -t Creation_SLW.jmx -l JmeterReports/TestReport.csv -e -o JmeterReports/htmlReport/ -j JmeterReports/jmeter.log
As per the Generating Report Dashboard documentation:
The dashboard generator is a modular extension of JMeter. Its default behavior is to read and process samples from CSV files to generate HTML files containing graph views. It can generate the report at end of a load test or on demand.
So as of the latest JMeter version (5.4.1) you can only generate the HTML dashboard from a .jtl file which is in CSV format, and the contents of the file have to be in line with the Result File Configuration.
I would recommend reverting your changes and going for CSV + HTML Reporting Dashboard with the default configuration (or with your necessary amendments).
If you need XML with the full data as well, you can add e.g. a Simple Data Writer to your test plan and specify the desired file name and the metrics which you want to store.
More information: How to Save Response Data in JMeter
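Putting that together, a run along these lines (a sketch based on the commands from the question, using the question's own paths) keeps the results file in the default CSV format, generates the HTML dashboard at the end of the run, and leaves the XML copy to a Simple Data Writer configured inside the test plan:
# non-GUI run: CSV results file (default format) plus HTML dashboard at the end
# note: the -o folder must be empty or not yet exist
jmeter -n -t ProgramServices.jmx -l JmeterReports/TestReport.csv -e -o JmeterReports/htmlReport -j JmeterReports/jmeter.log
# the XML output then comes from a Simple Data Writer listener inside the .jmx,
# pointed at e.g. JmeterReports/TestReport.xml with "Save as XML" ticked in its Configure dialog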

Problem uploading image to twitter using bash script with twurl. Keep getting "code: 44" "media_ids parameter is invalid"

I am having an issue when trying to upload an image to Twitter using a bash script and twurl.
When I use a variable (which stores the media_id of the image I want to upload) as the value of the "media_ids" parameter that I append to my status update, no photo gets posted and I receive the error message shown below my script:
twurl -j -X POST -H upload.twitter.com "/1.1/media/upload.json" -f /root/$imgName -F media > mediaID.txt
local mediaID=$(egrep -o " [0-9]{19}" mediaID.txt)
twurl -X POST -H api.twitter.com "/1.1/statuses/update.json?status=The Astronomy Photo Of The Day (courtesy of NASA) for the day $date is \"$title\". For more details and info check out: https://apod.nasa.gov/apod/astropix.html&media_ids=$mediaID" | jq
"errors": [
  {
    "code": 44,
    "message": "media_ids parameter is invalid."
  }
]
However, when I use the actual media_id value instead of the variable ($mediaID) of the image as the parameter for the "media_ids" that gets appended to the status update, everything works as expected.
Is there some issue with using a variable as a parameter for media_id?
I am brand new to twurl and APIs and therefore might be missing some basic point.
I would really appreciate any help or suggestions on this topic.
Thank you!
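Not an authoritative answer, but one thing worth checking (this is an assumption on my part, not something confirmed above): the egrep pattern " [0-9]{19}" includes the leading space in its match, so $mediaID begins with a space when it is substituted into media_ids=, which the API can reject even though the digits themselves are correct. A sketch that captures only the digits:
# match the digits only so no leading space ends up in the media_ids parameter
local mediaID=$(grep -oE '[0-9]{19}' mediaID.txt | head -n 1)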

Getting 'Invalid Boolean' error while inserting a file into an InfluxDB measurement through shell

I'm trying to insert a file into InfluxDB through a shell script using the write API. Inconsistently, the data is not getting inserted: I get an 'invalid boolean' response whenever it fails to insert.
Kindly help me identify where I'm making the mistake.
Here is my code to write to influx
curl -s -X POST "http://influxdb:8186/write?db=mydb" --data-binary @data.txt
I'm generating the data.txt file after some calculations. Very inconsistently, I get the below JSON response with an error:
{"error":"unable to parse 'databackupstatus1,env=engg-az-dev2 tenant=dataplatform snapshot_name=engg-az-dev2-dataplatform-2019-07-08_12-43-59 state=Started backup_status=Not-Applicable': invalid boolean\nunable to parse 'databackupstatus1,env=engg-az-dev2 tenant=dataplatform snapshot_name=engg-az-dev2-dataplatform-2019-07-08_12-43-59 state=Completed backup_status=\"SUCCESS\"': invalid boolean"}
Note: the same data above has been inserted successfully multiple times.
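Since the same lines sometimes parse and sometimes fail, it is worth checking how data.txt is built. In InfluxDB line protocol the tag set is attached to the measurement with commas and separated from the field set by a single space, and string field values must be double-quoted; an unquoted value such as Started is attempted as a boolean, which produces exactly this 'invalid boolean' error. A sketch of what a valid line for this data might look like (assuming env, tenant and snapshot_name are meant to be tags and state/backup_status are string fields):
# tags are comma-separated after the measurement; fields follow after one space
# string field values are quoted so they are not parsed as booleans
cat > data.txt <<'EOF'
databackupstatus1,env=engg-az-dev2,tenant=dataplatform,snapshot_name=engg-az-dev2-dataplatform-2019-07-08_12-43-59 state="Started",backup_status="Not-Applicable"
EOF
curl -s -X POST "http://influxdb:8186/write?db=mydb" --data-binary @data.txt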

Ruby - read only the latest updated lines from a file

There is a script which writes its output to a file, and I am planning to set up monitoring of that process and send the latest output information to a central server.
For example, the monitoring script will look at the output file every minute, and it has to read only whatever data was newly added to the output file and send it to the central server. That means it should not re-read the data it has already read. Is there any way to make this happen with Ruby version >= 1.8.7?
Here is my scenario,
the script test.sh is running, and it writes its output to a file called /tmp/script_output.txt
I have another monitoring script, jobmonitor.rb, which is triggered by test.sh and is used to monitor the job status and send information to the central server. Right now it reads the entire contents of the script_output.txt file and sends the details, and it has to send the information every 2 minutes for live monitoring.
In some cases the output file script_output.txt will have more than 500 lines, so every 2 minutes the monitoring script sends the entire content, which includes the 2-minute-old content as well.
So I am looking for a way to read only the newly updated content of the output file every 2 minutes.
For example:
at present the script_output.txt file has these contents:
line1
line2
.
.
.
line10
and the monitoring script sent this information.
Now, after 2 minutes, the output file has the below contents:
line1
line2
.
.
.
line10
line11
.
.
.
line 20
Now the monitoring script captures all 20 lines and sends the information; instead, I need to send only the output contents from line11 to line20.
Update:
I tried the below:
open(filepath, 'r') do |f|
  while true
    puts "#{f.readline()}"
    sleep 60
  end
end
It reads only one line, but I want to read however many lines were appended in the last 120 seconds, i.e. from the line after the last one read up to the current end of the file.
Note: I already raised this question at https://stackoverflow.com/questions/43446881/ruby-read-lines-from-file-only-latest-updates but I initially missed adding the detailed information, so the question was closed. Sorry for that.
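For what it's worth, a minimal sketch of the kind of thing that should work on Ruby 1.8.7: keep the file handle open between passes so the read position is remembered, and on each pass read only whatever has been appended since the previous one (send_to_central_server is a hypothetical stand-in for however the details are actually sent):
# keep the handle open so f remembers where the previous read stopped
File.open('/tmp/script_output.txt', 'r') do |f|
  loop do
    new_lines = []
    while (line = f.gets)          # gets returns nil once the current end of file is reached
      new_lines << line.chomp
    end
    send_to_central_server(new_lines) unless new_lines.empty?  # hypothetical helper
    sleep 120                      # wait for the next batch of output
  end
end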
