RStudio AMI linkDropbox() gives no link

This is not the first time I have used the RStudio AMI from Louis Aslett on Amazon EC2, but this time the linkDropbox() function is driving me crazy and I could not find any help on Google: it just doesn't give any link!
> linkDropbox()
Launching Dropbox client, please wait ...
Dropbox launched. Please visit the following URL in your browser now to link the server to your Dropbox account:
isn't
waiting (please do this now or linking may fail) ...
I don't know what this "isn't" is doing here, but there should be a link. Has anyone had the same problem?
EDIT
It appears that the function gets stuck in the loop and returns:
"This computer isn't linked to any Dropbox account..."

The best solution I came up with was to launch a new instance from a previous working image/snapshot. That is obviously not ideal if your existing instance has many packages/versions you need, but when I created a new instance from Louis Aslett's site (after having trouble restarting an old instance and updating Dropbox), I could not replicate the behaviour I had in February 2018 except by using a snapshot of that previously working AMI. I am writing this in September 2018, so it is possible that the AMIs on Aslett's site will be updated and this won't be a problem anymore.
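For what it's worth, the snapshot route can be scripted with the AWS CLI; a rough sketch, where the instance ID, image name, instance type, key and security group are all placeholders to swap for your own:

# (IDs and names below are placeholders)
# Create an image (AMI) from the previously working instance
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "rstudio-working-backup" --no-reboot

# Later, launch a fresh instance from that image
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.large \
  --key-name my-key --security-group-ids sg-0123456789abcdef0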
Good luck, future readers...

I had this problem, and realized that the security settings for the instance didn't allow outbound traffic on my current IP. I updated my security settings, ran unlinkDropbox(), and then ran linkDropbox() again, and it worked this time.
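For anyone else debugging this, a quick sanity check before relinking is to confirm the instance can actually reach Dropbox at all; a small sketch to run in a shell on the EC2 instance, after which you can try unlinkDropbox() and linkDropbox() again from the R console:

# Should print an HTTP status line if outbound HTTPS is allowed by the security group
curl -sI https://www.dropbox.com | head -n 1

# Generic outbound connectivity check; if this hangs, outbound traffic is being blocked
curl -sI https://example.com | head -n 1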

Related

How to stop Supabase which is running on localhost:3000?

I wanted to know what Supabase is, so I installed it using the local development guide.
That was a few weeks back. I was simply checking port 3000 and Supabase is still running. I have removed all Supabase-related folders, but it is still running. Can someone help me understand why it is still running and how to stop it?
To stop running supabase, you can use the CLI stop command:
supabase stop
Deleting folders without stopping will keep the current instance running.
You can always use htop (Linux) or follow this guide (macOS) to stop the process.
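If it helps, a quick sketch for finding whatever is actually bound to port 3000 (the container ID is just whatever your machine reports; Supabase's local stack runs in Docker containers):

# Show what is listening on port 3000
lsof -iTCP:3000 -sTCP:LISTEN -n -P

# Or, on Linux
ss -ltnp | grep :3000

# If it turns out to be a stray container, list and stop it (container ID is a placeholder)
docker ps
docker stop <container-id>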
Hi guys, I found the solution. It was the browser cache that was loading. I deleted the history, cache and cookies. Now if I open my browser and go to localhost:3000 I see a "This site can't be reached" message.
Thanks @Mansueli and @ahmad

Executing gcloud commands in bash

I've spent 3 days beating my head against this before coming here in desperation.
So long story short I thought I'd fire up a simple PHP site to allow moderators of a gaming group I'm in the ability to start GCP servers on demand. I'm no developer so I'm looking at this from a Systems perspective to find the simplest solution to do the job.
I fired up an Ubuntu 18.04 machine on GCP, set it up with the Google SDK, authorised it for access to the project, and was able to run gcloud commands, which worked fine. I had some issues with the PHP file calling the shell script to run the same commands, but with some testing I can see it's now calling the shell script no worries (it broadcasts wall "test" to the console every time I click the button on the PHP page).
However, what does not happen is the execution of the gcloud command. If I manually run this shell script it starts up the instance no worries and broadcasts wall; if I click the button it broadcasts, but that's it. I've set the files to have execution rights and I've even given the user nginx runs as sudo rights; putting sudo sh in front of the command in the PHP file also made no difference. Please find the bash script below:
#!/bin/bash
/usr/lib/google-cloud-sdk/bin/gcloud compute instances start arma3s1-prod --zone=australia-southeast1-b
wall "test"
Any help would be greatly appreciated, this coupled with an automated shut down would allow our gaming group to save money by only running the servers people want to play on.
Any more detail you want about the underlying system please let me know.
So I asked a PHP dev at work about this and in two seconds flat she pointed out the issue, and now I feel stupid. In /etc/passwd the www-data user had /usr/sbin/nologin as its shell; after I fixed that and ran the script, gcloud wanted permission to write a log file to /var/www. Fixed that too and it works fine. I'm not terribly worried about the page or even the server being hacked and destroyed; I can recreate them pretty easily.
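For anyone hitting the same wall, a rough sketch of the two fixes described above (assuming, as in my case, that the web user is www-data and gcloud is trying to write under /var/www):

# Give www-data a real login shell instead of /usr/sbin/nologin
sudo usermod -s /bin/bash www-data

# Let it write the gcloud log/config files it asked for
sudo mkdir -p /var/www
sudo chown -R www-data:www-data /var/www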
Thanks for the help though! Sometimes I think I just need to take a step back and get a fresh set of eyes on the problem.
When you launch a command while logged in, you have your own account's access rights to the Google Cloud API, but the PHP account doesn't have those.
Even if you give the www-data user root rights, that won't fix the problem; it may create some security issues, but nothing more.
If you really want to do this, you should create a service account that only has rights on the Compute Engine instances inside your project, and point the GOOGLE_APPLICATION_CREDENTIALS environment variable at its JSON key; this way your PHP should have enough rights to do what you are asking of it.
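A rough sketch of that setup with the gcloud CLI; the project ID, service account name and key path are placeholders, and roles/compute.instanceAdmin.v1 is one possible role for start/stop rights that you should check against what you actually need:

# (project ID, account name and key path below are placeholders)
# Create a service account limited to this job
gcloud iam service-accounts create instance-starter --display-name "Start game servers"

# Grant it rights on Compute Engine instances in the project
gcloud projects add-iam-policy-binding my-project-id \
  --member "serviceAccount:instance-starter@my-project-id.iam.gserviceaccount.com" \
  --role "roles/compute.instanceAdmin.v1"

# Download a JSON key and point the web user's environment at it
gcloud iam service-accounts keys create /etc/gcloud/key.json \
  --iam-account instance-starter@my-project-id.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS=/etc/gcloud/key.json

# The shell script can also authenticate explicitly before calling gcloud compute
gcloud auth activate-service-account --key-file "$GOOGLE_APPLICATION_CREDENTIALS"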
Note that the issue with this method is that if you are hacked there is a chance the instance hosting your PHP could be deleted too.
You could also make a call to a prepared Cloud Function which will create the instance; this way, even if your instance is deleted, the Cloud Function would still be there.

Lost bash got jailshell on shared host reboot - how to get bash back?

A few days ago my shared hosting ISP apparently had server issues, and ever since I get jailshell rather than bash when connecting over SSH.
After three tech troubleshooting sessions, they have not been able to restore my bash capability. They say they get bash if they log in. They seem to keep trying ineffective measures, and provide no details about them.
During the server restart I had tried to log in, and saw the jailshell then. Could that attempted login during server restart have caused this issue?
In any case, advice would be appreciated on how I can get bash back or tell them what to try on their side. Are there useful questions to ask, or things to suggest to them to try to resolve this?
Multiple machines have been used over SSH with the same results. I can FTP into my account (if I go to the root I see just a few jailshell files; if I go directly to my folder and then up to the root I see all my files; web serving is not affected).
-Ken
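For anyone suggesting things to try, a few checks that can be run over SSH to see what shell the account is actually being handed; chsh is often locked down on shared hosting, so treat this as a sketch:

# What shell the current session is running
echo $0
echo $SHELL

# What shell the account is assigned in the password database
getent passwd "$USER"

# Attempt to change the login shell (frequently disabled in a jailshell)
chsh -s /bin/bash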

Trouble Uploading Large Files to RStudio using Louis Aslett's AMI on EC2

After following this simple tutorial (http://www.louisaslett.com/RStudio_AMI/) and video guide (http://www.louisaslett.com/RStudio_AMI/video_guide.html), I have set up an RStudio environment on EC2.
The only problem is, I can't upload large files (> 1GB).
I can upload small files just fine.
When I try to upload a file via RStudio, it gives me the following error:
Unexpected empty response from server
Does anyone know how I can upload these large files for use in RStudio? This is the whole reason I am using EC2 in the first place (to work with big data).
OK, so I had the same problem myself and it was incredibly frustrating, but eventually I realised what was going on. The default home directory size for AWS is less than 8-10GB regardless of the size of your instance. As the upload was going to home, there was not enough room. An experienced Linux user would not have fallen into this trap, but hopefully other Windows users new to this who come across the problem will see this.
If you upload onto a different drive on the instance, this can be solved. As the Louis Aslett RStudio AMI keeps home in that 8-10GB space, you will have to set your working directory outside it, i.e. outside the home directory; this is not intuitively apparent from the RStudio Server interface. While this is an advanced forum and this is a rookie error, I am hoping no one deletes this question, as I spent months on this and I think someone else will too. I hope this makes sense to you?
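To make that concrete, a sketch of checking the space and moving work off the small root volume. It assumes you attach an extra EBS volume that shows up as /dev/xvdf and that the RStudio login user is rstudio; both are guesses to adjust for your setup:

# Confirm the root/home volume really is the small one
df -h ~
lsblk

# Prepare the extra volume (device name is an assumption) and give the RStudio user room to work
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir /data
sudo mount /dev/xvdf /data
sudo chown rstudio:rstudio /data
# ...then set the working directory to /data in RStudio and upload there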
Don't you have shell access to your Amazon server? Don't rely on RStudio's upload (which may reasonably have a 2GB limit); use proper Unix dev tools:
rsync -avz myHugeFile.dat amazonusername@my.amazon.host.ip:
Run on your local PC's command line (install Cygwin or another Unix compatibility system); it will transfer your huge file to your Amazon server, resume from that point if interrupted, and compress the data for transfer too.
For a Windows GUI for something like this, WinSCP is what we used in the bad old days before Linux.
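One practical note: EC2 instances normally accept only key-based SSH, so the rsync call may need the key and the AMI's login user spelled out. A hedged example, where the key path, user name and target directory are all guesses to adapt:

# -P keeps partial transfers so an interrupted upload can resume
rsync -avzP -e "ssh -i ~/mykey.pem" myHugeFile.dat ubuntu@my.amazon.host.ip:/data/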
This could have something to do with your web server. Are you using nginx or Apache as your web server? If so, you can modify the upload limit in nginx. If you are running nginx on the front end of the web server, I would recommend the following fix in your nginx.conf file:
http {
...
client_max_body_size 100M;
}
https://www.tecmint.com/limit-file-upload-size-in-nginx/
I had a similar problem with a 5GB file. What worked for me was to use SQLite to create a database from the csv file that I needed. I used SQLite code to create the database, then used a function in RStudio to communicate with the local database. In that way, I was able to bring in the csv file. I can track down the R code that I used if you like.
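For reference, the SQLite side of that can be done from the shell in a couple of lines; the database, table and file names here are just examples:

# Import the csv straight into a SQLite database on the server
sqlite3 bigdata.db <<'EOF'
.mode csv
.import myHugeFile.csv mytable
EOF
# RStudio can then query the database instead of loading the whole csv into memory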

Installed Zone Alarm on Amazon EC2 Windows Instance and cannot access now. How do I fix this?

I messed this up.
I installed ZoneMinder and now I cannot connect to my VPS via Remote Desktop; it must have blocked the connections. I didn't know it would start blocking right away rather than letting me configure it first.
How can I solve this?
Note: My answer is under the assumption this is a Windows instance due to the use of 'Remote Desktop', even though ZoneMinder is primarily Linux-based.
Short answer is you probably can't and will likely be forced to terminate the instance.
But at the very least you can take a snapshot of the hard drive (EBS volume) attached to the machine, so you don't lose any data or configuration settings.
Without network connectivity your server can't be accessed at all, and unless you've installed other services on the machine that are still accessible (e.g. ssh, telnet) that could be used to reverse the firewall settings, you can't make any changes.
I would attempt the following in this order (although they're longshots):
Restart your instance using the AWS Console (maybe the firewall won't be enabled by default on reboot and you'll be able to connect).
If this doesn't work (which it probably won't), you're going to need to stop your crippled instance, detach the volume, spin up another EC2 instance running Windows, and attach the old volume to the new instance.
Here's the procedure with screenshots of the exact steps, except your specific steps to disable the new firewall will be different.
After this is done, you need to find instructions on manually uninstalling your new firewall -
Take a snapshot of the EBS volume attached to it to preserve your data (essentially the C:), this appears on the EC2 console page under the 'volumes' menu item. This way you don't lose any data at least.
Start another Windows EC2 instance, and attach the EBS volume from the old one to this one. RDP into the new instance and attempt to manually uninstall the firewall.
At a minimum at this point you should be able to recover your files and service settings very easily into the new instance, which is the approach I would expect you to have more success with.
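For completeness, the snapshot/detach/attach steps can also be done from the AWS CLI; a rough sketch with placeholder volume and instance IDs:

# (all IDs below are placeholders)
# Back up the locked-out machine's volume first
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-rescue backup"

# Stop the crippled instance, then move its volume to a rescue instance
aws ec2 stop-instances --instance-ids i-0aaaaaaaaaaaaaaaa
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0bbbbbbbbbbbbbbbb --device xvdf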
