"Cannot connect to server" while installing ipa through OTA - xcode

I have the provisioning profile installed and have successfully created the IPA. The IPA installs from Dropbox without any problem, but the same IPA gives an error when installed through OTA.
I have attached a screenshot of the error.
How can I solve this issue? Any help would be appreciated.
Thanks.

The answer may depend on what OTA server you are using.
If you are running your own OTA server, like I am, then the problem may be your URL.
I saw the exact same "Cannot connect to..." error today with my personal OTA server. The problem was with the itms-services URL.
itms-services://?action=download-manifest&url=https://3ea1be94.ngrok.com/TestApp.plist
The .plist file name and the hostname for my &url= parameter were incorrect. Once I fixed them and refreshed the page, everything was fine.
If you are using your own OTA server, check that the &url= parameter is accurate and make sure it is using HTTPS. If you are not running your own OTA server, check with whoever is running it, as they may be able to assist.
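For illustration, a working link has this general shape (the hostname and file name below are placeholders, not my real ones); percent-encoding the value of the url= parameter avoids any ambiguity:
itms-services://?action=download-manifest&url=https%3A%2F%2Fexample.com%2FTestApp.plist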

To properly distribute the IPA file over HTTP, you should put up a webpage with a link pointing to the itms-services special URL, for example:
<a href="itms-services://?action=download-manifest&url=https://example.com/manifest.plist">Download App</a>
This manifest file is very simple: you need the bundle ID and the URL to the IPA file.
You can find an example to start from here: https://gist.github.com/kEpEx/777df3cb1fd4bd851409
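For reference, here is a minimal sketch of such a manifest (the URLs, bundle identifier, version, and title are placeholders; the gist linked above has a fuller version):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>items</key>
  <array>
    <dict>
      <!-- assets: where the device downloads the signed IPA from -->
      <key>assets</key>
      <array>
        <dict>
          <key>kind</key><string>software-package</string>
          <key>url</key><string>https://example.com/TestApp.ipa</string>
        </dict>
      </array>
      <!-- metadata: must match the app being installed -->
      <key>metadata</key>
      <dict>
        <key>bundle-identifier</key><string>com.example.TestApp</string>
        <key>bundle-version</key><string>1.0</string>
        <key>kind</key><string>software</string>
        <key>title</key><string>TestApp</string>
      </dict>
    </dict>
  </array>
</dict>
</plist>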
A couple of important things to consider:
A valid certificate is required, and the manifest and IPA URLs should be HTTPS (I'm not sure whether self-signed certificates work for this).
Take care with the manifest URL: if it has query parameters, URL-encode them or use a simpler URL.
Sometimes you want to authenticate users before allowing them to download the manifest or IPA files. Take care with this: Safari's cookies are lost when you tap the itms-services link, so if you check for a session there based on a cookie, you will get the "Cannot connect to" message. You will need to come up with a better approach, like generating temporary tokens or something like that (this point took me two days of work to figure out why it was failing). See the sketch after this list.
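Here is a minimal sketch in Java of the temporary-token idea (the class name, secret, and token format are all illustrative, not from any particular framework): issue a short-lived signed token, append it to the manifest and IPA URLs as a query parameter, and validate it on each download instead of relying on a Safari session cookie.
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class DownloadToken {
    // In a real deployment the secret would come from configuration, not source code.
    private static final byte[] SECRET = "change-me".getBytes(StandardCharsets.UTF_8);

    // Issue a token valid for ttlMillis, e.g. "1716912000000.kGf3...".
    public static String issue(long ttlMillis) throws Exception {
        long expires = System.currentTimeMillis() + ttlMillis;
        return expires + "." + sign(Long.toString(expires));
    }

    // Validate a token taken from the ?token= query parameter of a download URL.
    public static boolean isValid(String token) throws Exception {
        String[] parts = token.split("\\.", 2);
        if (parts.length != 2) return false;
        long expires;
        try {
            expires = Long.parseLong(parts[0]);
        } catch (NumberFormatException e) {
            return false;
        }
        return System.currentTimeMillis() < expires && sign(parts[0]).equals(parts[1]);
    }

    private static String sign(String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
    }
}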

Related

Google Drive API Console: Error saving Drive UI integration page

I have a webapp in production that interacts with Google Drive through Google Drive API.
I need to change some settings in the Drive integration, but I can't save them.
When I save the Drive UI integration page, I receive this error:
There's a problem at our end.
Please try again. If the problem persists, please let us know using
the "Send feedback" link below. Thanks!
(Inspecting the Network console: there is an Internal Server Error on a POST call.)
I have been trying to send feedback for months: nobody answers and the bug is still there.
I also tried creating another project: I can save the first time, but then the bug returns.
What can I do? Does anyone have the same problem?
Is there a way to get a reply from Google? Is there some workaround?
Thank you.
I think the problem must be the Client ID.
Before adding the Client ID, go to Credentials -> OAuth 2.0 Client IDs,
then select and edit your Client ID. After that, add your production site URL to Authorized JavaScript origins and Authorized redirect URIs.
Then enter your Client ID on the Drive UI integration page.
While trying to get the Drive UI configured myself, I noticed a couple of errors (that don't come with any specific error messages):
When adding an Open URL, it has to be a valid domain. For instance, I tried to test it out with localhost, to no avail; something like https://devbox.app.com worked, but https://localhost:8888 did not. Even though https://localhost is a valid JavaScript origin in the client_id configuration (at least for the app I am working on, not sure about other apps), localhost doesn't work as an Open URL.
When adding the mimeTypes, they need to be in the */* (type/subtype) format and can include custom mimeTypes like application/custom+xml and application/custom-name+json. I'm not sure about custom types that are not in a particular format like xml or json, and I'm also not sure about wildcards.
When adding file extensions, do not include the '.', just the name of the extension.
For the app icon, the upload only failed when the image wasn't the exact required dimensions; I ended up editing some icons in Photoshop to change the pixel dimensions as a quick workaround during development.
That worked for me to get it to save and I tested it with a file that had a custom mimeType (application/custom-name+xml specifically) and custom file extension!

Google Sheets API Java program, works on one machine and not the other - 401 Unauthorized

I've got a working basic Java app that uploads some data to a Google Sheets file of mine.
I pushed it to a Git remote, pulled it onto my other computer, and it doesn't work there; it fails with a 401:
Exception in thread "main" com.google.api.client.auth.oauth2.TokenResponseException: 401 Unauthorized
at com.google.api.client.auth.oauth2.TokenResponseException.from(TokenResponseException.java:105)
at com.google.api.client.auth.oauth2.TokenRequest.executeUnparsed(TokenRequest.java:287)
at com.google.api.client.auth.oauth2.TokenRequest.execute(TokenRequest.java:307)
at com.google.api.client.auth.oauth2.Credential.executeRefreshToken(Credential.java:570)
at com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:489)
at com.google.api.client.auth.oauth2.Credential.intercept(Credential.java:217)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:868)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
at App.main(App.java:71)
Any idea what could be different between the two machines? My understanding was that if I'm using the same client_secret.json, it should be irrelevant which machine I'm on?
UPDATE 1:
OK, some extra info: I just tried my project on my work laptop and it worked fine! On the first run it opened a browser window and asked me which Google account I wanted to use; I chose the correct one, and that worked. On the laptop where it didn't work, I wasn't given that option (that I remember), so how can I reset the Google account that has been used to authenticate?
I saw this in my command line:
Please open the following address in your browser:
https://accounts.google.com/o/oauth2/auth?access_type=offline&client_id=blah-notputtingmyrealid.apps.googleusercontent.com&redirect_uri=http://localhost:42299/Callback&response_type=code&scope=https://www.googleapis.com/auth/spreadsheets
Attempting to open that address in the default browser now...
Since it's working on your previous computer, the issue might be the location of your client_secret.json. If you check the Java Quickstart setup, there's a step where you need to download the JSON file and place it in your working directory. Since you're on a new machine, that file is now missing.
g. Click the file_download (Download JSON) button to the right of the client ID.
h. Move this file to your working directory and rename it client_secret.json.
Or the access token has expired.
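As for resetting which Google account was used: the Quickstart also caches the OAuth grant on disk, separately from client_secret.json. Below is a sketch based on the standard Sheets Java Quickstart authorization code (the data store directory is the Quickstart default; your app may use a different path). Deleting the StoredCredential file in that directory forces the browser consent screen to appear again, so you can pick the right account.
import java.io.File;
import java.io.InputStreamReader;
import java.util.Collections;
import com.google.api.client.auth.oauth2.Credential;
import com.google.api.client.extensions.java6.auth.oauth2.AuthorizationCodeInstalledApp;
import com.google.api.client.extensions.jetty.auth.oauth2.LocalServerReceiver;
import com.google.api.client.googleapis.auth.oauth2.GoogleAuthorizationCodeFlow;
import com.google.api.client.googleapis.auth.oauth2.GoogleClientSecrets;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.client.util.store.FileDataStoreFactory;
import com.google.api.services.sheets.v4.SheetsScopes;

public class AuthorizeExample {
    // Quickstart default; the cached grant lives here as a file named "StoredCredential".
    // Delete this directory to be asked again which Google account to use.
    private static final File DATA_STORE_DIR = new File(
            System.getProperty("user.home"),
            ".credentials/sheets.googleapis.com-java-quickstart");

    public static Credential authorize() throws Exception {
        NetHttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();
        // client_secret.json must be on the classpath (or load it from the working directory).
        GoogleClientSecrets clientSecrets = GoogleClientSecrets.load(
                JacksonFactory.getDefaultInstance(),
                new InputStreamReader(
                        AuthorizeExample.class.getResourceAsStream("/client_secret.json")));
        GoogleAuthorizationCodeFlow flow = new GoogleAuthorizationCodeFlow.Builder(
                httpTransport, JacksonFactory.getDefaultInstance(), clientSecrets,
                Collections.singletonList(SheetsScopes.SPREADSHEETS))
                .setDataStoreFactory(new FileDataStoreFactory(DATA_STORE_DIR))
                .setAccessType("offline")
                .build();
        // Reuses the cached StoredCredential if present; otherwise opens the browser.
        return new AuthorizationCodeInstalledApp(flow, new LocalServerReceiver()).authorize("user");
    }
}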

Error 404 for REST web service request in offline app in GeneXus Ev3 U9

I'm developing an offline Android app with GeneXus Ev3 U9, and when I try the app on the device there is no initial synchronization; even when I try to execute a manual sync, the app shuts down. The logcat shows that requests made to URLs like http://192.168.12.17/MyAppSmartDevicesEnvironment/gxmetadata/MyApp.android.json
worked fine, but when the app tries to get the URL http://192.168.12.17/MyAppSmartDevicesEnvironment/rest/MyAppOfflineDatabase?fmt=json&event=gxchecksync it returns 404. I tried the same link on my laptop, and it's as if the requested resource was never created by GeneXus.
What could be wrong?
There are actually a couple of things you might want to check.
When you accessed http://192.168.12.17/MyAppSmartDevicesEnvironment/gxmetadata/MyApp.android.json you got data, but that just means the virtual directory was successfully created (which is good, of course).
Then you need to check whether the WCF module is installed correctly. To do that, you could try going to http://192.168.12.17/MyAppSmartDevicesEnvironment/MyAppOfflineDatabase.svc/rest or any other service in your KB; that goes straight to the service implementation. (You can check your web.config file to see the actual rewriting rules.)
If that works, it's certainly a URL Rewrite problem, as Sandro and Guscarr suggested.
You can download and install the module from here: http://www.iis.net/downloads/microsoft/url-rewrite
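For reference, an IIS URL Rewrite rule in web.config looks roughly like this. This is a hand-written illustration of the mechanism, not the exact rule GeneXus generates; compare it against the rules in your own web.config:
<system.webServer>
  <rewrite>
    <rules>
      <!-- Illustrative only: forward /rest/Foo... to the WCF service Foo.svc/rest... -->
      <rule name="RestServices" stopProcessing="true">
        <match url="^rest/([^/?]+)(.*)$" />
        <action type="Rewrite" url="{R:1}.svc/rest{R:2}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>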
Gcastano,
It seems that you're generating for .NET, right?
If so, it could be a problem with the IIS rewrite module.
In any case, you might check the GeneXus software requirements...
It seems that REST services cannot run on your IIS; as Sandro said, try installing URL Rewrite.
Further info at http://wiki.genexus.com/commwiki/servlet/wiki?14575,Android%20-%20FAQ%20and%20Common%20Issues

LinkedIn: image not showing up after sharing via REST API

I shared a post from an application, but the image doesn't show up in the update. I realise this question has been asked a couple of times already, and I checked all the info I could find here, to no avail.
I checked the content type, which is correct.
I checked the URL, which works.
I checked the SSL certificate, which seems to be fine.
The image in question is: https://soworker.com/files/images/608-3XZcFrnujD_LN.JPG
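For what it's worth, one way to re-check the content type and the certificate in a single step (assuming curl is available) is:
curl -vI https://soworker.com/files/images/608-3XZcFrnujD_LN.JPG
The -v flag prints the TLS handshake and -I makes a HEAD request, so you see the certificate chain and the Content-Type header without downloading the image.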
The share is made using the REST API, and the share itself returns a success message.
How can I debug why the image is not showing up? Am I missing something?
Thanks in advance for your response.
I am having this same issue. When I post this URL via the API - https://blog.calevans.com/2016/05/16/postcards-life-010/ - it will not show the image.
However, when I post this one - http://voicesoftheelephpant.com/2016/05/17/interview-amanda-folson/ - it works.
Since they both sit on the same server and are both running the same software, my current theory is that LinkedIn can't read images from secure servers. Alternatively, though less likely, it may be that they won't read images from sites using Let's Encrypt certificates.
UPDATE: It seems to be Let's Encrypt. The podcast is also available over HTTPS, but not using a Let's Encrypt cert, because Apple won't read them. I posted a second update using the https:// version and it worked.
So it LOOKS like LinkedIn doesn't like Let's Encrypt.
HTH,
=C=
Apparently LinkedIn didn't like Let's Encrypt certificates. The problem is completely resolved now, though, and I don't think this is a certificate issue anymore.

MOSS search crawl fails with "Access is denied ..."

Recently the search crawler stopped working on my MOSS installation. The message in the crawl log is
Access is denied. Check that the Default Content Access Account has access to this content, or add a crawl rule to crawl this content. (The item was deleted because it was either not found or the crawler was denied access to it.)
The default content account is an admin on the site collection that I am trying to crawl.
Almost every result for this error on Google tells me to add the DisableLoopbackCheck registry key with a value of 1. I have done this and rebooted, and the error continues.
The "Do not allow Basic Authentication" checkbox in my crawl rule screen is unchecked.
Is there anything else that could be causing this error? Something with file system or database permissions maybe?
Edit: All signs seem to indicate that the "DisableLoopbackCheck" should fix this, but it doesn't seem to work. Could I be doing something wrong when I enable this?
I'm doing it in My Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa, where I create a new DWORD value called DisableLoopbackCheck and give it the hex value 1.
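That is, the equivalent of running this from an elevated command prompt:
reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1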
It turned out not to be related to DisableLoopbackCheck. The problem was that the search was accessing the site through its external URL. Supposedly, at least in pre-SP1 MOSS, you are not supposed to be able to access a site from within the server using the same URL you use to reach it from outside. But I had somehow been doing exactly that for about two years. MS Support tells me they don't quite understand how it was ever working, so it looks like I ran into an issue that should have been manifesting all along. I'm not sure what caused it to appear suddenly, maybe some routine patching of the server. The solution was to extend the web application so it was accessible internally through the machine name, then point the crawler at that.
