YAML exception: unacceptable character '' (0x0) - macOS

This error appears on Elastic Beanstalk after uploading a new application version as a zip that includes a file .ebextensions/singlehttps.config, which sets up HTTPS for a single-instance server.

If you're doing the Amazon AWS workshop lab
https://github.com/awslabs/eb-node-express-signup
(i.e. uploading and deploying your Elastic Beanstalk app)
and getting this error:
*ERROR Failed to deploy application.
*ERROR The configuration file __MACOSX/.ebextensions/._setup.config in application version 1.1.0 contains invalid YAML or JSON. YAML exception: Invalid Yaml: unacceptable character '' (0x0) special characters are not allowed in "", position 0, JSON exception: Invalid JSON: Unexpected character () at position 0.. Update the configuration file.
*INFO Environment update is starting.
SOLUTION
This is because macOS includes some extra hidden folders which you need to exclude from your zip file. To do this, run this command in the terminal against your zip:
$ zip -d nameofyourzipfile.zip __MACOSX/\*
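To confirm the metadata entries are actually gone, you can list the archive contents afterwards (a quick check; the grep should print nothing):
$ unzip -l nameofyourzipfile.zip | grep __MACOSX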
Now re-upload, and you should get a success message:
INFO Environment update completed successfully.
INFO New application version was deployed to running EC2 instances.
Hope this solved your issue!

The reason for this problem in the Elastic Beanstalk system was in fact the zip created on macOS.
If you upload the new version with the eb deploy command instead of zipping the application yourself, the problem doesn't appear.
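For reference, deploying straight from the project directory with the EB CLI looks roughly like this (a sketch, assuming the EB CLI is installed and the environment already exists):
$ eb init
$ eb deploy
eb init is the one-time application and region setup, and eb deploy builds the source bundle itself, so the __MACOSX entries never make it into the upload.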
Hope this helps someone, as it has been troubling me for so long!!

When you zip folders on macOS, it adds its own hidden files alongside yours.
If you want to make a zip without those invisible Mac resource files such as __MACOSX or ._Filename and .DS_Store files, use the -X option of the zip command:
$ zip -r -X archive_name.zip folder_to_compress
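If the folder also contains .DS_Store files, they can be excluded explicitly with zip's -x option (a variant of the same command; adjust the pattern to your folder layout):
$ zip -r -X archive_name.zip folder_to_compress -x "*.DS_Store"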
If this is a pre-existing zip file, you can use the command others here have mentioned:
$ zip -d nameofyourzipfile.zip __MACOSX/\*

Workaround on Mac
When you download the sample zip on a Mac, it gets extracted automatically, and when you compress it again, Elastic Beanstalk gives the error mentioned above. When I ran the command from the previous answers to remove the __MACOSX entries, it still gave me an error about one of the files not being found.
The workaround is to rename the zip file to some other extension before downloading it, then change it back to .zip once it's on the Mac.
When you upload this file to Elastic Beanstalk, it will work fine.
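If it is Safari doing the automatic extraction, another option is to turn off its "Open 'safe' files after downloading" setting; that can also be done from the terminal (this assumes Safari is the browser in use; the defaults key below is the one that setting is stored under):
$ defaults write com.apple.Safari AutoOpenSafeDownloads -bool false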

Related

Confluent Kafka - Control Center did not get installed, doesn't start up

I'm following the Confluent Kafka Quick Start guide here: https://docs.confluent.io/platform/current/quickstart/ce-quickstart.html#ce-quickstart
The page offers this command to start everything up at once:
confluent local services start
It then shows a short list on the command line of what starts up. The list ends with the following:
Starting Control Center
Control Center is [UP]
When I run the command, I don't see Control Center come up. When I then try to view the basic page http://localhost:9021/, it shows this error: ERR_CONNECTION_REFUSED.
It looks like the install command is not installing Control Center. I've run the install command twice. I'm missing the control-center.properties file, and I'm missing the confluent-control-center folder that is supposed to be in the etc folder.
I'm using a Mac, and my installation is local. I would appreciate any help getting this running properly.
I ended up getting rid of the download and re-downloaded the Confluent Platform as a zip file. I don't remember whether the earlier one was a tar file. The new download has the Control Center folders and files.
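Before starting the services, you can check whether Control Center actually shipped with the download by looking for the files the question mentions (a quick sanity check, assuming CONFLUENT_HOME points at the unpacked platform directory):
$ ls $CONFLUENT_HOME/etc/confluent-control-center
If that directory is missing, the archive you downloaded most likely did not include Control Center; as far as I know it only ships with the full Confluent Platform package, not with the community-only one.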

AWS Beanstalk Laravel post deploy hooks no such file or directory

I'm trying to deploy a Laravel app to AWS Elastic Beanstalk; the OS is the Amazon Linux 2 AMI.
I've set up the following files:
.ebextensions/01-deploy-script-permission.config
It contains the code below:
container_commands:
  01-storage-link:
    command: 'sudo chmod +x .platform/hooks/postdeploy/post-deploy.sh'
And
.platform/hooks/postdeploy/01-post-deploy.sh
It contains the code below:
php artisan optimize:clear
Upon deploying, it fails with the following entry in the eb-engine.log file:
[ERROR] An error occurred during execution of command [app-deploy] -
[RunAppDeployPostDeployHooks]. Stop running the command. Error:
Command .platform/hooks/postdeploy/post-deploy.sh failed with error
fork/exec .platform/hooks/postdeploy/post-deploy.sh: no such file or
directory
This answer is for users who are using Windows to deploy their files to Elastic Beanstalk.
I found this information after spending 6 precious hours. It's probably not documented anywhere in the official documentation.
As per this link: https://forums.aws.amazon.com/thread.jspa?threadID=321653
P.S.: most important is that the file is saved with LF line separators.
CRLF causes the "no such file or directory" error.
So I used Visual Studio Code to convert CRLF to LF for the files in .platform/hooks/postdeploy.
At the bottom right of the screen in VS Code there is a little button
that says “LF” or “CRLF”: Click that button and change it to your
preference.
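If you prefer the command line, the same conversion can be done with dos2unix or with sed (a sketch; dos2unix may need to be installed first, and on macOS sed -i requires an empty '' argument):
$ dos2unix .platform/hooks/postdeploy/01-post-deploy.sh
$ sed -i 's/\r$//' .platform/hooks/postdeploy/01-post-deploy.sh
The wording of the error also makes sense in this light: with CRLF endings the interpreter named on the shebang line gets a carriage return appended to it, and no interpreter with that name exists, hence "no such file or directory".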
I don't know for sure, but I think you are running the command before the files are even created, hence the error.
A while ago I faced the same kind of problem: I had written migration commands in .ebextensions, and they failed because my env file wasn't created yet, so no DB connection could be made. Hope this gives you a direction.
By the way, I resolved the problem by creating the env first and then pushing these commands through the pipeline.

Facing issue regarding installation in hybris 6.6

While installing hybris, my localextensions.xml is generated with everything commented out. I am very new to hybris e-commerce development.
I followed the steps below to install hybris:
Installed the zip version of Hybris 6.6
Unzipped it
From the platform folder, I opened a terminal and ran ". ./setantenv.sh". After that I ran "ant clean all", and after the build completed successfully all folders were created in the hybris folder.
Then I ran "./hybrisserver.sh" and my server started successfully.
Then I opened "https://localhost:9002/" and ran the initialization from there, which also completed successfully.
When I try to access hMC or Backoffice, I get a 404 Page Not Found error.
I checked my localextensions.xml file and found all the extensions generated as comments.
Could anyone help me figure out where I am making a mistake?
Thanks in advance.
If you are using the original package, you need to install a recipe. Go to the install folder.
Run the command below to list the existing recipes:
./install.sh -l
Prepare b2c with acc:
./install.sh -r b2c_acc
Initialize b2c with acc (you can also use ant clean all for this step):
./install.sh -r b2c_acc initialize
Start hybris (you can also use ./hybrisserver.sh start for this step):
./install.sh -r b2c_acc start
When you do "ant all" for the first time and set-up the config folder, it generates a localextensions.xml file which contains extensions that are commented out. If you initialize and start Hybris using this setting, you get nothing, except the HAC.
To enable HMC, you need to at least have "platformhmc" extension enabled (i.e. not commented out) in localextensions. So, stop Hybris, uncomment platformhmc, and do another build (i.e. "ant all"). After that, you can do a Platform Update, or a Platform Initialize (to build from scratch again). When it's done, and you've started Hybris, HMC should be accessible.
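For reference, once uncommented, the relevant part of localextensions.xml (inside the hybrisconfig root element) would look something like this (a sketch; the exact entries depend on your version and setup):
<extensions>
    <path dir="${HYBRIS_BIN_DIR}"/>
    <extension name="platformhmc"/>
</extensions>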
Or, if you want more features enabled by default, you can do #mkysoft's suggestion and use recipes.

How to see Parse Server cloud code logs?

I have Bitnami's Parse Server set up on Azure.
I'm logging some info from cloud code using console.log and console.error. When using hosted Parse, these logs were displayed in the Info & Error Logs section of the Dashboard. Any idea where the logs go now?
The issue is not specific to Bitnami's distribution. I also tested on a local machine with parse-server-example & Parse Dashboard and got the same result (no logs).
I use AWS, but you can see the logs by downloading them or by running the server on localhost: just cd into your folder, run npm start in the terminal, and switch your Parse server URL to http://localhost:1337/parse.
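With the parse-server-example project mentioned in the question, that would be roughly (a sketch, assuming Node and npm are already installed):
$ cd parse-server-example
$ npm install
$ npm start
While it runs this way, the console.log and console.error output from cloud code appears directly in that terminal.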
You can manually download them through the Azure CLI.
Take a look here for installation: https://azure.microsoft.com/en-us/documentation/articles/xplat-cli-install/
I used npm: npm install azure-cli -g
Open up a terminal and type: azure site log download webappname
This will save the logs for the web app named 'webappname' to a file named diagnostics.zip in the current directory.
Unzip and open the folder diagnostics -> LogFiles -> Application
The text file with -stderr- in its name contains the logs you write using console.error() in your cloud code.
The text file with -stdout- in its name contains the logs you write using console.log() in your cloud code.
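For reference, a minimal cloud function that writes to both of those files might look like this (a sketch using the callback-style Parse.Cloud API of parse-server versions from that era; the "hello" function name is just an example):
// cloud/main.js
Parse.Cloud.define("hello", function(request, response) {
  console.log("hello was called");       // ends up in the -stdout- log file
  console.error("example error entry");  // ends up in the -stderr- log file
  response.success("Hi");
});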
This is a known issue on Bitnami Parse. We are working on fixing it for the next release.
You have to log in to your server via SSH and modify the line below in the /opt/bitnami/apps/parse/htdocs/server.js file:
From:
cloud: "./node_modules/parse-server/lib/cloud-code/Parse.Cloud.js",
To:
cloud: "./cloud/main.js",
You have to include the path to the ./cloud/main.js you previously created (assuming you created it in /opt/bitnami/apps/parse/htdocs/).
Remember to restart the server after applying those changes by running:
sudo /opt/bitnami/ctlscript.sh restart

error while starting Titan database using docker

I want to start using the Titan database and I have followed the instructions at http://oren.github.io/blog/titan.html. But when I try to start Titan in Docker, it gives me the following error:
/opt/titan-0.5.4-hadoop2/run.sh: 2: /opt/titan-0.5.4-hadoop2/run.sh: : not found
The run.sh file is located in C:\Users\Modeso\titan, but I can't find a way to change the folder location in Docker.
Has anyone faced this problem before, or does anyone have a solution for it?
I suspect that in this case the "not found" message may not be because the file is not found, but because the wrong line endings are used in the file. If a shell script uses Windows line endings, Linux can produce weird errors such as this one.
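A quick way to confirm this is to inspect the script before building the image; the file utility reports the line endings (run it wherever the file utility is available, e.g. on a Linux/macOS host or inside the container):
$ file run.sh
If it prints something like "with CRLF line terminators", converting run.sh to Unix line endings and rebuilding the image should fix the startup error.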
Did you try building from the GitHub repository? https://github.com/apobbati/titan-rexster
You can build an image from that repository with:
docker build -t titan-rexster github.com/apobbati/titan-rexster
And run it:
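A run command along these lines should work (a sketch; the port mapping assumes Rexster's default REST port 8182 and may need adjusting for that image):
$ docker run -p 8182:8182 -t -i titan-rexster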
