Kony - Failed to publish the application: Application or service with name 'BestBuymergeEve' already exists - temenos-quantum

I tried doing a full publish as well as an update of services in Kony; both options throw the error below. Can anyone please help?
Log
[01-29-2016 09:06:22]{
"status_long_message": "Request conflict.",
"status_short_message": "Conflict",
"message": "Application or service with name 'BestBuymergeEve' already exists",
"status": "error",
"status_code": 409
}
[01-29-2016 09:06:22]Response:HTTP/1.1 409 Conflict [Cache-Control: no-cache, no-store, must-revalidate, Content-Type: text/plain; charset=UTF-8, Date: Fri, 29 Jan 2016 21:04:19 GMT, Expires: Thu, 01 Jan 1970 00:00:00 GMT, Pragma: no-cache, Server: Apache, X-Kony-RequestId: ee652aef-8819-4005-9153-2329dac1f0b3, Content-Length: 211, Connection: keep-alive]
[01-29-2016 09:06:39]Failed to publish application for 'BestBuymergeEve'. Please see console for error details.

Renaming your application ID and then restarting the studio and the server should resolve the issue.

Related

webhdfs is redirecting to localhost:50075

I am trying to create a file on a remote HDFS from a non-Hadoop environment.
For this purpose, I am using the pywebhdfs API, and I am also running the command with curl.
https://pythonhosted.org/pywebhdfs/
I used this documentation as a reference, and I am able to execute all of the other methods except create_file().
When using create_file(), I get an error like 'Couldn't connect to host'.
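For reference, the pywebhdfs call in question looks roughly like this (a minimal sketch; the host, user name, and target path are placeholders mirroring the curl command below):
from pywebhdfs.webhdfs import PyWebHdfsClient
# Connect to the remote namenode (host and user are placeholders)
hdfs = PyWebHdfsClient(host='xxx.xxx.xxx.xxx', port='50070', user_name='hduser')
with open('sample.txt') as f:
    # Dies at the datanode redirect with "Couldn't connect to host"
    hdfs.create_file('test1/sample.txt', f.read())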
Command: curl -i -X PUT -L "http://xxx.xxx.xxx.xxx:50070/webhdfs/v1/test1/?op=CREATE" -T sample.txt
Response:HTTP/1.1 100 Continue
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Tue, 30 Oct 2018 12:04:04 GMT
Date: Tue, 30 Oct 2018 12:04:04 GMT
Pragma: no-cache
Expires: Tue, 30 Oct 2018 12:04:04 GMT
Date: Tue, 30 Oct 2018 12:04:04 GMT
Pragma: no-cache
Content-Type: application/octet-stream
Location: http://localhost:50075/webhdfs/v1/test1/?op=CREATE&namenoderpcaddress=xxx.xxx.xxx.xxx:9000&overwrite=false
Content-Length: 0
Server: Jetty(6.1.26)
curl: (7) couldn't connect to host
The Location header here shows localhost. I took the past post below as a reference:
webhdfs always redirect to localhost:50075
but I did not have any success.
I tried changing the IP in hdfs-site.xml and in the /etc/hosts file, but with no success at all; the sketch below shows the kind of change involved.
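As a minimal sketch, the change makes the datanode advertise an externally resolvable name instead of localhost (the hostname value is a placeholder; dfs.datanode.hostname and dfs.client.use.datanode.hostname are standard Hadoop properties):
<property>
  <name>dfs.datanode.hostname</name>
  <value>datanode1.example.com</value>
</property>
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>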
Can anyone tell me how to fix this?
Thanks in advance.

webhdfs rest api throwing file not found exception

I am trying to open an HDFS file that is present on a CDH4 cluster from a CDH5 machine, using webhdfs from the command line as below:
curl -i -L "http://namenodeIpofCDH4:50070/webhdfs/v1/user/quad/source/JSONML.java?user.name=quad&op=OPEN"
I am getting "File Not Found Exception" even if the file JSONML.java is present in the mentioned path in namenode as well as datanode and its trace is as follows:
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Date: Mon, 22 Feb 2016 13:25:35 GMT
Pragma: no-cache
Date: Mon, 22 Feb 2016 13:25:35 GMT
Pragma: no-cache
Set-Cookie: hadoop.auth="u=quad&p=quad&t=simple&e=1456183535737&s=KdZYcA5iwJeIU2F9ZJfLSaT4qMY=";Path=/
Location: http://n3.quadratics.com:50075/webhdfs/v1/user/quad/source/JSONML.java?op=OPEN&user.name=quad&namenoderpcaddress=n1.quadratics.com:8020&offset=0
Content-Type: application/octet-stream
Content-Length: 0
Server: Jetty(6.1.26.cloudera.4)
HTTP/1.1 404 Not Found
Cache-Control: no-cache
Expires: Mon, 22 Feb 2016 13:26:28 GMT
Date: Mon, 22 Feb 2016 13:26:28 GMT
Pragma: no-cache
Expires: Mon, 22 Feb 2016 13:26:28 GMT
Date: Mon, 22 Feb 2016 13:26:28 GMT
Pragma: no-cache
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26.cloudera.4)
{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File does not exist: /user/quad/source/JSONML.java\n\tat org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)\n\tat org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1932)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1873)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1853)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1825)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:559)\n\tat org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:87)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:363)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:415)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)\n"}}
However, I get no error, and I do get the status of the above file, when I use the command below:
curl -i -L "http://namenodeIpofCDH4:50070/webhdfs/v1/user/quad/source/JSONML.java?user.name=quad&op=GETFILESTATUS"
I get the output response as below:
HTTP/1.1 200 OK
Cache-Control: no-cache
Expires: Thu, 01-Jan-1970 00:00:00 GMT
Date: Mon, 22 Feb 2016 13:38:48 GMT
Pragma: no-cache
Date: Mon, 22 Feb 2016 13:38:48 GMT
Pragma: no-cache
Set-Cookie: hadoop.auth="u=quad&p=quad&t=simple&e=1456184328134&s=sE6esO8J39O+itl+ggNzX4/WzjQ=";Path=/
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(6.1.26.cloudera.4)
{"FileStatus":{"accessTime":1456147448567,"blockSize":134217728,"group":"quad","length":14849,"modificationTime":1456143798039,"owner":"quad","pathSuffix":"","permission":"644","replication":3,"type":"FILE"}}
Any ideas about why opening the file is failing, and how to fix it, would be greatly appreciated.
I saw a similar error when I had misconfigured my /etc/hosts.
The OPEN command above returns a redirect that provides a hostname. The local machine then tries to resolve that hostname using its local DNS setup, and looks for the file at whatever IP address the hostname resolves to, which is not necessarily the machine you issued the command to.
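As a sketch, the /etc/hosts on the machine running curl needs entries that resolve the cluster's hostnames to the correct addresses (the IPs below are placeholders; the hostnames come from the redirect above):
192.168.1.11   n1.quadratics.com
192.168.1.13   n3.quadratics.com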

NGINX + HHVM + Magento

I've been working on a development box deploying an Nginx, HHVM, and Magento stack. The site loads nicely, and I don't see any clear errors. However, a good number of my images are not loading properly; all I get are the image dimensions. The version of HHVM I'm using is:
HipHop VM 3.3.0-dev (rel)
Compiler: heads/master-0-g39d07abf36f7cad883d6be2a384a77c0c8aac040
Repo schema: 248cb6ce2d336c890fb26fd16690bf1b53b46994
Extension API: 20140829
The version of Nginx is:
nginx version: nginx/1.7.4
Some images load, and others don't. We're using timthumb for images, and I do see one error:
Warning: Is a directory in /home/user/shared/timthumb/index.php on line 90
If I run curl from the command line, I get the following result for an image, which means it is being served as far as I can tell:
HTTP/1.1 200 OK
Content-Type: image/gif
Connection: keep-alive
Vary: Accept-Encoding
X-Powered-By: HHVM/3.3.0-dev
CF-Cache-Status: HIT
CF-RAY: 16513733df33085c-IAD
Server: cloudflare-nginx
Date: Fri, 05 Sep 2014 08:56:47 GMT
Expires: Sun, 05 Oct 2014 08:56:47 GMT
Cache-Control: public, max-age=2592000
Set-Cookie: __cfduid=d6c188df6595a443200995301f0b707981409907407975; expires=Mon, 23-Dec-2019 23:50:00 GMT; path=/; domain=.placehold.it; HttpOnly
Last-Modified: Fri, 19 Nov 2010 07:00:00 GMT
Cache-Control: max-age=31536000, public, must-revalidate, proxy-revalidate
I have a separate test server with the same content and database running Nginx/PHP-FPM, and the only difference I can see in the curl output is the content type: against the dev server running PHP-FPM the result is Content-Type: image/jpeg, while against HHVM I get Content-Type: image/gif.
I would think this is related to rewrite rules, but I'm not sure. I also just discovered this error in exception.log:
2014-09-05T12:04:06+00:00 ERR (3): Warning: open(tcp://127.0.0.1:11211?persistent=1&weight=2&timeout=10&retry_interval=10/sess_dd885307c2ba602e5faa7020c9c7c038, O_RDWR) failed: No such file or directory (2) in on line 0
Any advice would be greatly appreciated.
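That session error suggests the memcache save_path is being treated as a filesystem path, i.e. sessions are falling back to the files handler. A minimal sketch of the settings to check in HHVM's php.ini (assuming your build ships the memcache session handler; the save_path mirrors the URL in the log entry above):
; use the memcache session handler instead of files
session.save_handler = memcache
session.save_path = "tcp://127.0.0.1:11211?persistent=1&weight=2&timeout=10&retry_interval=10"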

Setting Access-Control-Allow-Origin on Cloudfront

I am having problems serving static assets to Firefox using AWS Cloudfront.
Chrome works perfectly, but Firefox is returning a CORS error.
If I execute curl, I get:
HTTP/1.1 200 OK
Content-Type: application/x-font-opentype
Content-Length: 39420
Connection: keep-alive
Date: Mon, 11 Aug 2014 21:53:50 GMT
Cache-Control: public, max-age=31557600
Expires: Sun, 09 Aug 2015 01:28:02 GMT
Last-Modified: Fri, 08 Aug 2014 19:28:05 GMT
ETag: "9df744bdf9372cf4cff87bb3e2d68fc8"
Accept-Ranges: bytes
Server: AmazonS3
Age: 2743
X-Cache: Hit from cloudfront
Via: 1.1 c445b20dfbf3128d810e975e5d84e2cd.cloudfront.net (CloudFront)
X-Amz-Cf-Id: ...
I think the response needs the header:
Access-Control-Allow-Origin: *
Can anyone help me? Why is it a problem on Firefox and not Chrome? How can I solve it?
Have you configured your distribution to support CORS by setting the Origin header to be forwarded?
Reference:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/header-caching.html#header-caching-web-cors
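For reference, a minimal sketch of the matching S3-side CORS configuration (the wildcard origin is an assumption; restrict it to your site's origin in production):
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
Note that S3 only emits CORS headers when the request carries an Origin header, so a plain curl like the one above will not show them; test with something like curl -i -H "Origin: https://example.com" against the CloudFront URL once Origin forwarding is enabled.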

Drive Realtime API no longer returns realtime document on localhost

I have been calling the following gapi JavaScript function with great success for a few months:
gapi.drive.realtime.load(fileId,
    successHandler,
    initializer,
    errorHandler);
Suddenly, at 1:30 PM CDT today, that call stopped working when run in JavaScript on localhost. I can deploy the exact same code to my server and it works perfectly!
Frustratingly, none of the callbacks are called - neither successHandler nor errorHandler.
I have localhost:3000 set as an allowed JavaScript origin in my Google API Console project, and in any case I haven't changed any settings there since this was working. I am correctly authorized and can make REST calls to the Drive API without an issue.
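For example, a plain files.get against the REST API still succeeds from localhost (this uses the Drive v2 endpoint; the token and file id stand in for the same redacted values as in the update below):
curl "https://www.googleapis.com/drive/v2/files/[also-omitted]?access_token=[omitted-for-stackoverflow]"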
Has anyone else seen this behavior suddenly? Can anyone from the Google team make a suggestion?
Update: the request inspector shows a GET to
https://drive.google.com/otservice/gs?access_token=[omitted-for-stackoverflow]&id=[also-omitted]
with the response
)]}'
["17AKDsTY8kHESKfQavrHeh3YybD5k4b6ty8CQ78MHtyc","724b79b808d48070",false,1,[1,""],[0,[28,"724b79b808d48070","110581799581534438628",false,true,"REL DEV","#58B442","https://lh3.googleusercontent.com/-XdUIqdMkCWA/AAAAAAAAAAI/AAAAAAAAAAA/4252rscbv5M/s128/photo.jpg"]]]
The headers are
HTTP/1.1 200 OK
status: 200 OK
version: HTTP/1.1
access-control-allow-origin: *
access-control-expose-headers: Content-Length,Content-Type,X-Restart
alternate-protocol: 443:quic
cache-control: no-cache, no-store, max-age=0, must-revalidate
content-disposition: attachment; filename="json.txt"; filename*=UTF-8''json.txt
content-encoding: gzip
content-type: application/json; charset=utf-8
date: Fri, 04 Apr 2014 22:15:40 GMT
expires: Fri, 01 Jan 1990 00:00:00 GMT
pragma: no-cache
server: GSE
vary: Origin
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-restart:
x-xss-protection: 1; mode=block
There are no other requests after that.
