When working on my local machine, I'm using this command for mass insertion:
cat fixtures.txt | redis-cli --pipe
but Heroku gives only limited access to redis-cli, so I don't know how I should do it.
I tried:
heroku run "cat fixtures.txt | redis-cli --pipe"
resulting in:
bash: redis-cli: command not found
I tried:
cat fixtures.txt | heroku redis:cli --pipe
resulting in:
▸ No Redis instances found.
Does anybody know how to do this right?
I really need to initialize my Redis with a lot of data.
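One possible workaround (a sketch, not official Heroku guidance): run redis-cli on your local machine against the remote instance. This assumes the Heroku Redis add-on exposes its connection string as REDIS_URL, that your local redis-cli is new enough to support -u, and that your-app is a placeholder; TLS-only plans may need extra flags.
# your-app is a placeholder; REDIS_URL is the config var set by the Heroku Redis add-on
redis-cli -u "$(heroku config:get REDIS_URL -a your-app)" --pipe < fixtures.txt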
I want to set some tags on an EC2 spot instance. Since it is impossible to do this directly in the spot request, I do it via a user data script. Everything works when I specify the region statically, but that is not a universal approach. When I try to detect the current region from the instance user data, the region variable is always empty. I do it in the following way:
#!/bin/bash
region=$(ec2-metadata -z | awk '{print $2}' | sed 's/[a-z]$//')
aws ec2 create-tags \
--region $region \
--resources `wget -q -O - http://169.254.169.254/latest/meta-data/instance-id` \
--tags Key=sometag,Value=somevalue Key=sometag,Value=somevalue
I tried adding a delay before populating the region
/bin/sleep 30
but this had no effect.
However, when I run this script manually after start, the tags are added fine. What is going on?
Also, why doesn't the AWS CLI pick up the default region from the profile? I have aws configure properly set up inside the instance, but without the --region option it throws an error that the region is not specified.
I suspect the ec2-metadata command is not available when your user data script is executed. Try getting the region from the metadata server directly (which is what ec2-metadata does anyway):
region=$(curl -fsq http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
The AWS CLI does use the region from the default profile.
You can now use this endpoint to get only the instance region (no parsing needed):
http://169.254.169.254/latest/meta-data/placement/region
So in this case:
region=$(curl -s http://169.254.169.254/latest/meta-data/placement/region)
I ended up with
region=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | python -c "import json,sys; print(json.load(sys.stdin)['region'])")
which worked fine. However, it would be nice if somebody explained the nuts and bolts.
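Putting the answers together, a minimal user data sketch (the tag key/value are the placeholders from the question; the placement/region endpoint requires a reasonably recent metadata service):
#!/bin/bash
# detect region and instance id from the metadata service (no ec2-metadata needed)
region=$(curl -s http://169.254.169.254/latest/meta-data/placement/region)
instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 create-tags \
  --region "$region" \
  --resources "$instance_id" \
  --tags Key=sometag,Value=somevalue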
I want to use wget in an sh script, but I don't want to download the URL to a file. How can I do this? The script gets the load average with uptime | awk -F'[a-z]:' '{ print $2}' and I'll pass these values to a PHP script with wget.
If you want to pipe the document instead of downloading it to a file, use the -O option:
wget -O - URL | command
Using - as the filename tells wget to write the document to standard output instead of a file.
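For the original load-average case, that might look like this (a sketch; the URL and parameter name are placeholders, and spaces are stripped so the value is query-string safe):
load=$(uptime | awk -F'[a-z]:' '{ print $2 }' | tr -d ' ')
wget -q -O - "http://example.com/report.php?load=$load"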
I am having trouble understanding your question, but I will attempt to rephrase it and suggest a solution.
My guess is that you want to show the load averages of a remote server on a webpage, via PHP. With that assumption, let me show you an easier way to do it. This approach requires that you have SSH access to the remote computer, and that your local computer can reach it with an SSH key.
Basically, you will use ssh to execute a command on the remote machine, then save the output (the load averages) locally, somewhere your web server can access it. Then you will include the load average file in your PHP script.
First, you need to get the load average of the remote computer and save it locally. To do so, run this command:
ssh [remote username]@[address of remote computer] "uptime" | awk -F'[a-z]:' '{ print $2}' > [path to where you want to save the load average]
Here is an example:
ssh jake@10.0.0.147 "uptime" | awk -F'[a-z]:' '{ print $2}' > /var/www/load_average.txt
Next, you need to set up your PHP script; it will look something like this:
<?php
include "load_average.txt";
?>
You should also set up a cron job to fetch the information regularly so that it stays up to date.
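For example, reusing the host and path from above, a crontab entry refreshing the file every five minutes could look like this (a sketch; adjust to your setup):
*/5 * * * * ssh jake@10.0.0.147 "uptime" | awk -F'[a-z]:' '{ print $2}' > /var/www/load_average.txt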
I ran a Ruby script from Heroku bash that generates a CSV file on the server that I want to download. I tried moving it to the public folder to download it, but that didn't work. I figured out that the files are deleted after every session in the Heroku bash console. Is there a command to download directly from the Heroku bash console?
If you manage to create the file from heroku run bash, you could use transfer.sh.
You can even encrypt the file before you transfer it.
cat <file_name> | gpg -ac -o- | curl -X PUT -T "-" https://transfer.sh/<file_name>.gpg
And then download and decrypt it on the target machine
curl https://transfer.sh/<hash>/<file_name>.gpg | gpg -o- > <file_name>
There is heroku ps:copy:
#$ heroku help ps:copy
Copy a file from a dyno to the local filesystem
USAGE
$ heroku ps:copy FILE
OPTIONS
-a, --app=app (required) app to run command against
-d, --dyno=dyno specify the dyno to connect to
-o, --output=output the name of the output file
-r, --remote=remote git remote of app to use
DESCRIPTION
Example:
$ heroku ps:copy FILENAME --app murmuring-headland-14719
Example run:
#$ heroku ps:copy app.json --app=app-example-prod --output=app.json.from-heroku
Copying app.json to app.json.from-heroku
Establishing credentials... done
Connecting to web.1 on ⬢ app-example-prod...
Downloading... ████████████████████████▏ 100% 00:00
Caveat
This does not seem to work with dynos that are started via heroku run.
Example
#$ heroku ps:copy tmp/some.log --app app-example-prod --dyno run.6039 --output=tmp/some.heroku.log
Copying tmp/some.log to tmp/some.heroku.log
Establishing credentials... error
▸ Could not connect to dyno!
▸ Check if the dyno is running with `heroku ps'
It is! Proof:
#$ heroku ps --app app-example-prod
=== run: one-off processes (1)
run.6039 (Standard-1X): up 2019/08/29 12:09:13 +0200 (~ 16m ago): bash
=== web (Standard-2X): elixir --sname dyno -S mix phx.server --no-compile (2)
web.1: up 2019/08/29 10:41:35 +0200 (~ 1h ago)
web.2: up 2019/08/29 10:41:39 +0200 (~ 1h ago)
I could connect to web.1 though:
#$ heroku ps:copy tmp/some.log --app app-example-prod --dyno web.1 --output=tmp/some.heroku.log
Copying tmp/some.log to tmp/some.heroku.log
Establishing credentials... done
Connecting to web.1 on ⬢ app-example-prod...
▸ ERROR: Could not transfer the file!
▸ Make sure the filename is correct.
So I fell back to using SCP: scp -P PORT tmp/some.log user@host:/path/some.heroku.log from the run.6039 dyno command line.
Now that https://transfer.sh is defunct, https://file.io is an alternative. To upload myfile.csv:
$ curl -F "file=@myfile.csv" https://file.io
The response will include a link where you can access the file:
{"success":true,"key":"2ojE41","link":"https://file.io/2ojE41","expiry":"14 days"}
I can't vouch for the security of file.io, so using encryption as described in other answers could be a good idea.
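For example, combining it with the gpg encryption shown in the transfer.sh answer (a sketch; I have not tested this against file.io):
# symmetric-encrypt with a passphrase, then upload the encrypted file
gpg -ac -o myfile.csv.gpg myfile.csv
curl -F "file=@myfile.csv.gpg" https://file.io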
Heroku dyno filesystems are ephemeral, non-persistent and not shared between dynos. So when you do heroku run bash, you actually get a new dyno with a fresh deployment of your app, without any of the changes made to the ephemeral filesystems of other dynos.
If you want to do something like this, you should probably either do it all in a heroku run bash session or all in a request to a web app running on Heroku that responds with the CSV file you want.
I did the following:
First I entered the Heroku bash with this command:
heroku run 'sh'
Then I made a directory and moved the file there
Made a git repository and committed the file
Finally I pushed this repository to GitHub
Before committing, git will ask you for your name and email. Give it something fake!
If you have files bigger than 100 MB, push to GitLab.
If there is an easier way, please let me know!
Sorry for my bad English.
Another way of doing this (that doesn't involve any third server) is to use Patrick's method, but first encode the file into a format that only uses printable ASCII characters. That should make it work for any file, regardless of whitespace characters or unusual encodings. I'd recommend base64 for this.
Here's how I've done it:
Log onto your heroku instance using heroku run bash
Use base64 to print the contents of your file: base64 <your-file>
Select the base64 text in your terminal and copy it
On your local machine, decode this text with base64 straight into a new file (on a Mac I'd do pbpaste | base64 --decode -o <your-file>)
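As a concrete round trip (a sketch; the Linux variant assumes xclip is installed):
# on the dyno, inside heroku run bash
base64 your-file.csv
# locally on a Mac, after copying the output to the clipboard
pbpaste | base64 --decode -o your-file.csv
# locally on Linux
xclip -o -selection clipboard | base64 --decode > your-file.csv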
I agree that your need most probably calls for a change in your application architecture, something like a worker dyno.
But by executing the following steps you can transfer the file, since a Heroku one-off dyno can run scp (a sketch follows the steps):
create a VM at a cloud provider, e.g. DigitalOcean;
run a Heroku one-off dyno and create your file;
scp the file from the Heroku one-off dyno to that VM;
scp the file from the VM to your local machine;
delete the cloud VM and stop the Heroku one-off dyno.
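A sketch of steps 3 and 4 (hostnames and paths are placeholders):
# on the heroku one-off dyno:
scp myfile.csv user@vm.example.com:/tmp/myfile.csv
# on your local machine:
scp user@vm.example.com:/tmp/myfile.csv .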
I see that these answers are much older, so I'm assuming this is a new feature. For all those like me who are looking for an easier solution than the excellent answers already here, Heroku now has the capability to copy files quite easily with the following command: heroku ps:copy <filename>
Note that this works with relative paths, as you'd expect. (Tested on a heroku-18 stack, downloading files at "path/to/file.ext".)
For reference: Heroku docs
Heroku dynos come with sftp pre-installed. I tried git, but it was too many steps (I had to generate a new SSH key and add it to GitHub every time), so now I am using sftp and it works great.
You'll need to have another host (like DreamHost, HostGator, GoDaddy, etc.), but if you do, you can:
sftp username@ftp.yourhostname.com
Accept the server fingerprint/hash, then enter your password.
Once on the server, navigate to the folder you want to upload to (using cd and ls commands).
Then use the command put filename.csv and it will upload it to your web host.
To retrieve your file: use an FTP client like FileZilla, or hit the URL if you uploaded to a folder under the www or website folder path.
This is great because it also works with multiple files and binaries as well as text files.
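A typical session might look like this (a sketch; the remote folder name is a placeholder):
sftp username@ftp.yourhostname.com
sftp> cd public_html
sftp> put filename.csv
sftp> bye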
For small/quick transfers that fit comfortably in the clipboard:
Open a terminal on your local device
Run heroku run bash
(Inside your remote connection, on the dyno) Run cat filename
Select the lines in your local terminal and copy them to your clipboard.
Check to ensure proper newlines when pasting them.
Now I created a shell script to upload some files to a git backup repo (for example, my app.db SQLite file is gitignored and every deploy kills it):
## upload dyno files to git via SSH session
## https://devcenter.heroku.com/changelog-items/1112
# heroku ps:exec
git config --global user.email 'dmitry.cheva@gmail.com'
git config --global user.name 'Dmitry Cheva'
rm -rf ./.gitignore
git init
## add each file separately (-f to add git ignored files)
git add app.db -f
git commit -m "backup on `date +'%Y-%m-%d %H:%M:%S'`"
git remote add origin https://bitbucket.org/cheva/appbackup.git
git push -u origin master -f
The dyno restarts after each deploy and does not keep the environment, so you need to run the first three commands again each time.
Then you need to add the files (-f for the ignored ones) and force-push to the repo (-f, because otherwise git will require a pull).
Hi,
My requirement is that if the memcached server goes down for any reason in production, I want to restart it immediately.
Typically I start the memcached server as user nobody, with replication, as shown below:
memcached -u nobody -l 192.168.1.1 -m 2076 -x 192.168.1.2 -v
So for this I added an entry in crontab (via crontab -e) this way:
*/5 * * * * /home/memcached/memcached_autostart.sh
memcached_autostart.sh
#!/bin/bash
ps -eaf | grep 11211 | grep memcached
# if not found - exit status equals 1 - start it
if [ $? -eq 1 ]
then
    memcached -u nobody -l 192.168.1.1 -m 2076 -x 192.168.1.2 -v
else
    echo "eq 0 - memcache running - do nothing"
fi
My question is: for auto-restarting the memcached server, is there any problem with the above script?
Or, if there is a better approach for achieving this (rather than using a cron job), please share your experience.
Yes, the problem is ps -eaf | grep 11211 | grep memcached. I assume 11211 is the process ID, which changes on every start, so what you should do is ps -ef | grep memcached.
Hope that helped.
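Note that a plain ps | grep pipeline can also match the grep process itself; pgrep sidesteps both problems. A minimal sketch of the check:
#!/bin/bash
# pgrep -x matches the exact process name, so it will not match this script or a grep
if ! pgrep -x memcached > /dev/null
then
    memcached -u nobody -l 192.168.1.1 -m 2076 -x 192.168.1.2 -v
fi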
Instead of running it from cron, you might want to create a proper init script; see /etc/init.d/ for examples. If you do this, most systems already provide functionality to handle most of the work: starting, restarting, stopping, checking for already-running processes, etc.
Most daemon scripts save the PID to a special file (e.g. /var/run/foo), and then you can check for the existence of that file.
For Ubuntu, see /etc/init.d/skeleton for an example script that you can copy.
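A sketch of such a PID-file check (the path is a placeholder; memcached's -P option requires -d to daemonize):
#!/bin/bash
PIDFILE=/var/run/memcached.pid
# kill -0 only tests whether the process exists
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null
then
    echo "memcached running - do nothing"
else
    memcached -d -P "$PIDFILE" -u nobody -l 192.168.1.1 -m 2076 -x 192.168.1.2
fi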
I can connect to a remote Redis using the telnet command and get the value of the "mytest" key. The following works as expected.
[root@server shantanu]# telnet 10.10.10.100 6379
Trying 10.10.10.100...
Connected to 10.10.10.100 (10.10.10.100).
Escape character is '^]'.
get mytest
$14
this is first
But how do I use it in a shell script?
I am used to connecting to MySQL using the following:
mysql -h10.10.10.100 -uroot -proot@123 -e"show databases"
Is a similar syntax available for Redis?
You can alternatively use redis-cli, included with Redis:
$ ./src/redis-cli --raw GET key
test
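For the remote server from the question, point it at the host and port; -h, -p and --raw are standard redis-cli flags:
redis-cli -h 10.10.10.100 -p 6379 --raw GET mytest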
I don't know about telnet, but with ssh you can do:
ssh user@server "command arg1 arg2 ..."
for example
ssh user@server "ls -ltr | tail"
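Applied to this case (assuming redis-cli is installed on the remote server):
ssh user@server "redis-cli --raw GET mytest"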
I would use a tool like wget, which is designed to get content from websites, and is very configurable and automatable. You might even be able to get away with
export myTestKey=`echo "get mytest" | telnet 10.10.10.100 6379`
If the conversation needs to be more complex than that, I would use telnet in combination with expect, which is designed for trigger-and-response conversations.
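A rough expect sketch for the conversation from the question (untested; assumes expect and telnet are installed):
#!/usr/bin/expect -f
spawn telnet 10.10.10.100 6379
expect "Escape character is '^]'."
send "get mytest\r"
# the reply is a bulk length line like $14, then the value on the next line
expect -re {\$[0-9]+\r\n([^\r\n]*)}
puts "value: $expect_out(1,string)"
send "quit\r"
expect eof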