I've been using Twilio's library just fine locally (Mac OS X with XAMPP), but when I upload it to an Amazon EC2 instance, sending SMS messages breaks.
$sms = $client->account->sms_messages->create(
    "xxx-xxx-xxxx",     // From: my Twilio number (digits redacted)
    $users['pnumber'],  // To: the user's number; note the array key should be quoted
    "Testing!"          // Message body
);
(the x's are numbers)
The above code seems to be what breaks it. I have uploaded the Twilio library to the correct directory, and I have also tried opening up all file permissions to see if it was a permissions issue.
I'm rather inexperienced at running things on my own server. Any guidance, guesses, and tips would be appreciated!
Edit: Clarification: by "breaks", I mean the rest of the page does not load. If I add echo "Hi"; after that code, it will not be printed. However, echoing before the code above works.
The problem was that I had not installed cURL on my server yet; it was not included in my PHP installation. Thanks to Kevin Burke's advice, I ran it on the command line and realized that it was calling a non-existent function. Some googling led to me installing cURL, which fixed the problem. Thanks Kevin!
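For anyone else who hits this, a quick way to confirm the cause is to check for the cURL extension before calling the library (a minimal sketch; twilio-php makes its HTTP requests through cURL, and calling a missing extension's functions is a fatal error, which is why the page stopped dead):

<?php
// If the cURL extension is missing, twilio-php's HTTP calls hit an
// undefined function and PHP dies with a fatal error, which is why
// nothing after the create() call was printed.
if (!function_exists('curl_init')) {
    die('PHP cURL extension is not installed - install php-curl and restart the web server.');
}

// Safe to load the Twilio library and send the message now.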
Hello, I was following this tutorial: https://docs.elrond.com/developers/tutorials/your-first-dapp/
With the help of: https://www.youtube.com/watch?v=IdkgvlK3rb8
But I think there are some differences between the dApp repository and the tutorial. First, src/config.devnet.tsx has disappeared; we now have a src/config.tsx already present (not a big deal).
I'm blocked when I try to do the ping; in the console I get the error Sender not allowed with value erd1qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq6gq4hu.
So my guess is that I've done something wrong deploying the contract, but when I tried to redeploy other contracts I always ended up with the same error.
I tried natively on my Ubuntu 20.04, and then in a devcontainer using an Ubuntu 22.04 image.
I'm pretty new to Elrond, Crypto (and also Node) so I might be missing something.
Thanks for your help!
I've just completed the tutorial and I think that you put the wrong SC address in src/config.tsx. The address you provided is the SC that handles other SC deployments. You have to replace it with your SC's address generated after deployment.
We're using the odoo.sh platform with Odoo 14. The installed wkhtmltopdf is wkhtmltopdf_paas_wrapper 0.12.5; we can't upgrade to 0.12.6 because access is very limited and we can't use sudo to apt-install. To temporarily solve this, we decided to use the 0.12.5 version, but it returns "Unable to call host printing service (HTTPError)" even with the right arguments. I've already tried it on the staging and production servers, but still the same result. The ticket I've sent hasn't been replied to yet. This is so frustrating, I'm going bonkers... please help.
Here's a screenshot:
PS: the unrecognized-argument error was intentional so I could display the available arguments. I've also crossed out the project domain. Thank you
Apparently, to execute the package properly, the binary name should not have been "wkhtmltopdf" but instead "wkhtmltopdf.bin". I've overridden ir_actions_report.py to change the binary name. Here's the snippet of the original source code:
They should've known better; it's a paid platform.
I've spent 3 days beating my head against this before coming here in desperation.
So, long story short, I thought I'd fire up a simple PHP site to give moderators of a gaming group I'm in the ability to start GCP servers on demand. I'm no developer, so I'm looking at this from a systems perspective to find the simplest solution that does the job.
I fired up an Ubuntu 18.04 machine on GCP, set it up with the Google SDK, and authorised it for access to the project, and I was able to simply run gcloud commands, which worked fine. I had some issues with the PHP file calling the shell script to run the same commands, but with some testing I can see it's now calling the shell script no worries (it broadcasts wall "test" to the console every time I click the button on the PHP page).
However, what does not happen is the execution of the gcloud command. If I manually run the shell script it starts up the instance no worries and broadcasts wall; if I click the button it broadcasts but that's it. I've set the files to have execution rights, and I've even given the user nginx runs as sudo rights; putting sudo sh in front of the command in the PHP file also made no difference. Please find the bash script below:
#!/bin/bash
# Start the game-server instance, then broadcast a message to confirm the script ran.
/usr/lib/google-cloud-sdk/bin/gcloud compute instances start arma3s1-prod --zone=australia-southeast1-b
wall "test"
Any help would be greatly appreciated; this, coupled with an automated shutdown, would allow our gaming group to save money by only running the servers people want to play on.
If you want any more detail about the underlying system, please let me know.
So I asked a PHP dev at work about this, and in two seconds flat she pointed out the issue, and now I feel stupid. In /etc/passwd the www-data user had /usr/sbin/nologin as its shell; after I fixed that, running the script showed gcloud wanted permission to write a log file to /var/www. Fixed that and it works fine. I'm not terribly worried about the page or even the server being hacked and destroyed; I can recreate them pretty easily.
Thanks for the help though! Sometimes I think I just need to take a step back and get a fresh set of eyes on the problem.
When you launch a command while logged in, you have your account's access rights to the Google Cloud API, but the PHP account doesn't have those.
Even if you add the www-data user to root, that won't fix the problem; it may create some security issues but nothing more.
If you really want to do this, you should create a service account that only has rights on the Compute instances inside your project, and give its JSON key to the GOOGLE_APPLICATION_CREDENTIALS environment variable; this way your PHP should have enough rights to do what you are asking of it.
Note that the issue with this method is that if you are hacked, there is a chance the instance hosting your PHP could be deleted too.
You could also try to make a call to a prepared Cloud Function which will create the instance; this way, even if your instance is deleted, the Cloud Function would still be there.
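In practice that can look something like this (a sketch using the google/apiclient Composer package; the key-file path and project ID are placeholders, while the zone and instance name are taken from your script):

<?php
// Sketch: start a Compute Engine instance from PHP with a service
// account, using the google/apiclient Composer package.
// The key-file path and project ID below are placeholders.
require __DIR__ . '/vendor/autoload.php';

putenv('GOOGLE_APPLICATION_CREDENTIALS=/etc/gcp/sa-key.json');

$client = new Google_Client();
$client->useApplicationDefaultCredentials();
$client->addScope('https://www.googleapis.com/auth/compute');

$compute = new Google_Service_Compute($client);

// Same instance and zone as the bash script above.
$compute->instances->start('my-project-id', 'australia-southeast1-b', 'arma3s1-prod');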
I'm writing a component in Joomla 3 and want to back up the database periodically (e.g. after a user updates something). I'd therefore like to run mysqldump using shell_exec (or similar), but I can't get this to work. I suspect it's a permissions issue, but I'm not sure how to resolve it...
Any ideas appreciated.
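For reference, what I'm attempting looks roughly like this (a sketch; the backup path is made up, and the credentials come from Joomla's own configuration):

<?php
// Sketch: dump the Joomla database from a component via shell_exec.
// Assumes the mysqldump binary is on the web-server user's PATH; the
// backups directory is a made-up example location.
$config = JFactory::getConfig();

$cmd = sprintf(
    'mysqldump --user=%s --password=%s --host=%s %s > %s',
    escapeshellarg($config->get('user')),
    escapeshellarg($config->get('password')),
    escapeshellarg($config->get('host')),
    escapeshellarg($config->get('db')),
    escapeshellarg(JPATH_ROOT . '/backups/site.sql')
);

// Capture stderr so permission problems are visible instead of silent.
$output = shell_exec($cmd . ' 2>&1');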
Your little question inspired us to write a post on how to run SSH commands from Joomla. You can find it here: http://www.itoctopus.com/how-we-ran-an-ssh-command-from-joomla
The post is about how we created a secure script that unblocks blocked IPs in CSF, but the nice thing about it is that it provides very clear instructions on how to run SSH commands from a Joomla extension (which is essentially what you need).
I really hope you enjoy this post and that it works for you. If it doesn't, then please provide feedback and we can help!
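If it helps, the core of running a remote command over SSH from PHP looks something like this (a rough sketch using the phpseclib library; the host, user, and key path are placeholders, and the exact code in the post differs):

<?php
// Rough sketch: run a command over SSH from a Joomla/PHP context using
// phpseclib (composer require phpseclib/phpseclib:~2.0).
// The host, user, and key path below are placeholders.
use phpseclib\Net\SSH2;
use phpseclib\Crypt\RSA;

require __DIR__ . '/vendor/autoload.php';

$key = new RSA();
$key->loadKey(file_get_contents('/path/to/deploy_key'));

$ssh = new SSH2('my.server.example.com');
if (!$ssh->login('deploy', $key)) {
    exit('SSH login failed');
}

// e.g. unblock an IP in CSF, as the post describes.
echo $ssh->exec('csf -dr 1.2.3.4');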
After following this simple tutorial http://www.louisaslett.com/RStudio_AMI/ and video guide http://www.louisaslett.com/RStudio_AMI/video_guide.html I have set up an RStudio environment on EC2.
The only problem is, I can't upload large files (> 1GB).
I can upload small files just fine.
When I try to upload a file via RStudio, it gives me the following error:
Unexpected empty response from server
Does anyone know how I can upload these large files for use in RStudio? This is the whole reason I am using EC2 in the first place (to work with big data).
Ok, so I had the same problem myself and it was incredibly frustrating, but eventually I realised what was going on. The default home directory size on AWS is under 8-10GB regardless of the size of your instance. As this was trying to upload to home, there was not enough room. An experienced Linux user would not have fallen into this trap, but hopefully any other Windows users new to this who come across the problem will see this. If you upload into a different drive on the instance, this can be solved. As the Louis Aslett RStudio AMI is based in this 8-10GB space, you will have to set your working directory outside the home directory; this is not intuitively apparent from the RStudio Server interface. Whilst this is an advanced forum and this is a rookie error, I am hoping no one deletes this question, as I spent months on this and I think someone else will too. I hope this makes sense to you.
Don't you have shell access to your Amazon server? Don't rely on RStudio's upload (which may reasonably have a 2GB limit); use proper Unix dev tools:
rsync -avz myHugeFile.dat amazonusername@my.amazon.host.ip:
on your local PC command line (install Cygwin or another unixy compatibility system) will transfer your huge file to your Amazon server; if interrupted, it will resume from that point, and it compresses the data for transfer too.
For a Windows GUI for something like this, WinSCP is what we used in the bad old days before Linux.
This could have something to do with your web server. Are you using nginx or Apache? If you are running nginx on the front end, you can raise its upload limit; I would recommend the following fix in your nginx.conf file.
http {
...
client_max_body_size 100M;  # raise the maximum allowed request body (the nginx default is 1M)
}
https://www.tecmint.com/limit-file-upload-size-in-nginx/
I had a similar problem with a 5GB file. What worked for me was to use SQLite to create a database from the CSV file that I needed: use SQLite code to create the database, then use a function in RStudio to communicate with the local database. In that way, I was able to bring in the CSV file. I can track down the R code that I used if you like.