I am new to GAE and I cannot figure this out, so please help. I have the following folder tree in my application:
static
--images
---subImages
--index.html
I am getting the content of the subImages folder with Ajax like this:
$.ajax({
    url: folder,
    success: function (data) {
        $(data).find("a").attr("href", function (i, val) {
            if (val.match(/\.jpg|\.png|\.gif/)) {
                $("#gallery").append("<img src='" + folder + val + "'" + " data-image='" + folder + val + "'" + ">");
            }
        });
    }
});
I am getting a 404 error.
My YAML file is from here: https://gist.github.com/darktable/873098
The app.yaml and information that you link to is really out of date. It declares runtime: python, which is Python 2.5 and has been deprecated. It's got other issues too.
You probably should follow along with the official tutorial on Hosting a static website on App Engine. You'd likely end up with an app.yaml like the following:
runtime: python27
api_version: 1
threadsafe: true

# No application or version elements: with the Cloud SDK and
# gcloud deployments, those are now command-line parameters.

handlers:
- url: /static/(.*)
  static_files: static/\1
  upload: static/(.*)

- url: /
  static_files: index.html
  upload: index.html

# If you wanted a wildcard match for other static files that,
# in the example case, are in a `www` folder:
#- url: /(.*)
#  static_files: www/\1
#  upload: www/(.*)
I am using Angular-cli for this build and it compiles all folders under the src folder into the build.
I was storing images in the assets folder under src:
src
|-app
|-assets
|-img_library
I access them dynamically like this:
<img src="assets/img_library/{{imgId}}"
Unfortunately this folder gets compiled into the build by angular-cli, so I would have to rebuild the app every time an image is added for the client to be able to access it.
I built the server as well, so I can store the images anywhere I want, but I don't know how to access folders above src via an img tag.
Is there a way to access a public/assets folder above the src folder with an image tag in Angular?
@jonrsharpe You're right, this doesn't make any sense. The assets folder is for images/media that will be used often by most users. I don't know what I was thinking; my brain was stuck in Angular mode when I needed to approach it from the backend.
I used an express api:
var express = require('express');
var router = express.Router();

// Serve an image for the given id from a directory outside the Angular build
router.get('/some/api/:id/img.png', function (req, res, next) {
    var id = req.params.id,
        filePath = 'img.png',
        root = __dirname + '/some/location/' + id + '/';

    var options = {
        root: root,        // resolve filePath relative to this directory
        dotfiles: 'deny',
        headers: {
            'x-timestamp': Date.now(),
            'x-sent': true
        }
    };

    res.sendFile(filePath, options, (err) => {
        if (err) {
            next(err);
        } else {
            console.log('Sent:', filePath);
        }
    });
});

module.exports = router;
to respond to a GET request from <img src='some/api/{{imageId}}/img.png'>.
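For completeness, a minimal sketch of wiring that router into the Express app (the module path and port here are assumptions for illustration):
var express = require('express');
var app = express();

// Hypothetical path to a module exporting the router shown above
var imgRouter = require('./routes/images');

app.use('/', imgRouter); // exposes GET /some/api/:id/img.png
app.listen(3500);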
Hope this helps some other sleep-deprived developer.
I am trying to post an image to a backend server that is an Express server.
I am using Cordova File Transfer (installed through cordova plugin add cordova-plugin-file-transfer).
I have imported the file transfer like this:
import {Transfer} from 'ionic-native';
Here is my component method that posts the file to the server:
save() {
    let base64Image = open("/Users/user1/1.jpg");
    let ft = new Transfer();
    let filename = "example" + ".jpg";
    let options = {
        fileKey: 'file',
        fileName: filename,
        mimeType: 'image/jpeg',
        chunkedMode: false,
        headers: {
            'Content-Type': undefined
        },
        params: {
            fileName: filename
        }
    };
    ft.upload(base64Image, "http://localhost:3500/api/v1/file", options, false);
}
The error I get whenever I call the save function is:
FileTransfer is not defined
Any help will be appreciated.
Install it with ionic since you are using ionic-native:
ionic plugin add cordova-plugin-file-transfer --save
The --save option is to ensure there is an entry in config.xml.
Also call any plugin within
platform.ready().then(() => {})
because plugins are only loaded after the app is loaded.
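A rough sketch of that pattern (the page class, template and file URL below are made-up placeholders, not your actual code):
import {Component} from '@angular/core';
import {Platform} from 'ionic-angular';
import {Transfer} from 'ionic-native';

@Component({
  template: `<button (click)="save()">Upload</button>`
})
export class UploadPage {
  constructor(private platform: Platform) {}

  save() {
    // Wait for the native layer before touching any Cordova plugin
    this.platform.ready().then(() => {
      let ft = new Transfer();
      let options = {
        fileKey: 'file',
        fileName: 'example.jpg',
        mimeType: 'image/jpeg',
        chunkedMode: false
      };
      // The file URL would normally come from the camera or a file picker
      let fileURL = 'file:///path/to/example.jpg';
      ft.upload(fileURL, 'http://localhost:3500/api/v1/file', options, false)
        .then((result) => console.log('Upload finished', result))
        .catch((err) => console.error('Upload failed', err));
    });
  }
}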
UPDATE:
Cordova plugins are not supported with the ionic serve command and most will not load.
You need to run the app in an emulator or on a device.
I have the following code to create a file from a Firefox Add-on SDK extension.
panel.port.on("mbData", function(data) {
console.log("Recebi dados. Data: " + data);
OS.File.writeAtomic("mb.txt", data, {write: true, create: true}).then(function(aResult) {
console.log("Criei o ficheiro\n");
}, function(ex) {
console.log("Error!\n"+ex);
});
});
The code above works great when I run it using jpm run. But when I create the xpi file (jpm xpi) and install it in Firefox, it doesn't work. It seems that the file is not being created. In addition, I can't access any log files.
Am I doing anything wrong here?
Try adding this line:
Components.utils.import('resource://gre/modules/osfile.jsm');
before calling the writeAtomic method.
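A minimal sketch of where that import could go in the Add-on SDK code above (the require("chrome") / destructuring form and the encoding option are assumptions added to keep the snippet self-contained):
var {Cu} = require("chrome");
var {OS} = Cu.import("resource://gre/modules/osfile.jsm", {}); // makes OS.File available

panel.port.on("mbData", function(data) {
    // writeAtomic needs an explicit encoding when the payload is a string
    OS.File.writeAtomic("mb.txt", data, {encoding: "utf-8"}).then(function() {
        console.log("Created the file");
    }, function(ex) {
        console.log("Error!\n" + ex);
    });
});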
Script test.js:
var page = require('webpage').create();
var system = require('system');
var url = system.args[1];
page.open(url, function (status) {
    console.log(status);
    phantom.exit();
});
Run script:
phantomjs --proxy=1.1.1.1:22 test.js 'http://nonexistent_site.com'
1.1.1.1:22 - nonexistent server
http://nonexistent_site.com - nonexistent site
How can I determine in PhantomJS which one is not responding - a proxy or a site?
You can catch network timeouts with the page.onResourceTimeout callback:
page.onResourceTimeout = function (request) {
    console.log('Response (#' + request.id + '): ' + JSON.stringify(request));
};
You can also set your own timeout:
page.settings.resourceTimeout = 3000; // ms
To intercept network errors you can register the page.onResourceError callback:
page.onResourceError = function (resourceError) {
    console.log('Unable to load resource #' + resourceError.id + ' URL: ' + resourceError.url);
    console.log('Error code: ' + resourceError.errorCode + '. Description: ' + resourceError.errorString);
};
With this in place, a non-existent host will trigger a "Host not found" error.
But if you use a non-working proxy, you will always end up with a "Network timeout on resource" error first, even if the target host does not exist.
So if you want to check proxies :) I'd suggest page.open-ing hosts that are known to be working, for example a simple static web page set up on the very server that you are operating from.
There is also a node.js module for this: proxy-checker.
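Putting those callbacks together, a sketch of test.js that tells the two failure modes apart (the timeout value and log wording are just examples):
var page = require('webpage').create();
var system = require('system');
var url = system.args[1];

page.settings.resourceTimeout = 3000; // ms

page.onResourceTimeout = function (request) {
    // With a dead proxy this typically fires first, whatever the target host is
    console.log('Timeout (#' + request.id + '): ' + request.url);
};

page.onResourceError = function (resourceError) {
    // With a working proxy, a non-existent host shows up here as "Host not found"
    console.log('Error ' + resourceError.errorCode + ': ' + resourceError.errorString +
                ' (' + resourceError.url + ')');
};

page.open(url, function (status) {
    console.log('Status: ' + status);
    phantom.exit();
});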
We have a development site on an Azure VM and production on an Azure Website, both using DNN Platform 7.1.2.
The Ajax call pattern seen below calls the DnnApiController and works fine on the dev site, but fails once deployed to the production site on the Azure Website.
I've checked the Bin folder to ensure both are using the same version DLLs. I have checked the web.config files and found those to be similar too.
The error received is "{"Message":"Unable to locate a controller for http://mydomain.com/DesktopModules/ContentModule/API/BusinessObjects/HelloWorld. Searched in namespaces: IPW.Modules.ContentModule, ContentModule."}"
These calls work fine on the dev site. Based on the error message, the routing didn't discover a matching controller within the namespaces provided.
- This is an example of the Ajax call:
$.ajax({
    type: "POST",
    cache: false,
    url: baseServicePath + 'HelloWorld',
    dataType: "json",
    beforeSend: serviceFramework.setModuleHeaders
}).done(function (data) {
    console.log(data);
}).fail(function () {
    console.log('Sorry failed to load hours');
});
- The 'baseServicePath' obtains the URL using the DNN Platform serviceFramework.getServiceRoot('ContentModule') + 'BusinessObjects/'.
- The route mapper:
public void RegisterRoutes(IMapRoute mapRouteManager) {
    mapRouteManager.MapHttpRoute("ContentModule", "default", "{controller}/{action}", new[] { "My.Modules.ContentModule", "ContentModule" });
}
- And the API controller method:
public class BusinessObjectsController : DotNetNuke.Web.Api.DnnApiController
{
    [AllowAnonymous]
    [AcceptVerbs("GET", "POST")]
    public HttpResponseMessage HelloWorld()
    {
        string result = "Hello world! Time is: " + DateTime.Now + "";
        var response = Request.CreateResponse(HttpStatusCode.OK, result, Configuration.Formatters.JsonFormatter);
        return response;
    }
}
The call fails only on the production site, which is running on an Azure Website; the code is identical to dev. The DNN Platform is using a custom mapping interface. Any suggestions are appreciated.
I'm assuming that, if you are using custom domains on the Azure Website:
- You have created a DNN portal alias for the site corresponding with the custom DNS name
- You have also enabled that domain host name in the Azure Website
If you are using a naked domain (like "http://mydomain.com/...") ensure that you have the correct setup (check http://maartenvanstam.wordpress.com/2013/08/23/configuring-a-naked-domain-name-for-a-windows-azure-web-site/).
If this doesn't help, just check the module example "DNN Todo List" available on GitHub at http://github.com/davidjrh/dnntodolist. It's just using the same pattern.