Xcode: Realm thread issue

I am new to iOS development and recently tried Realm.
The problem is that I have to get the URLs from a JSON file and put those URLs into Realm as an object, and whenever I start my app again the URL variable should get the respective URL from Realm,
like this:
getUrls()
let realm = try! Realm()
// Query Realm for the stored URL object
let urls = realm.objects(UrlCollector.self).first
let sss = realm.objects(UrlCollector.self)
print("no of objects in did load \(sss.count)")
loginUrl = urls!.login
print("login url inside didload \(loginUrl)")
But the problem is the getUrls method: it updates the URLs using Alamofire.
getUrls method:
Alamofire.request("<<<myurl>>>", method: .post, encoding: JSONEncoding.default, headers: nil).responseJSON { (response: DataResponse<Any>) in
    switch response.result {
    case .success(_):
        if let data = response.result.value {
            print(data)
            let data = JSON(data)
            for item in data["result"].arrayValue {
                let url = UrlCollector()
                url.login = "\(self.server)\(item["login"].stringValue)"
                print(url.login)
                url.changePassword = "\(self.server)\(item["changePassword"].stringValue)"
                print(url.changePassword)
                url.phoneNumberVerify = "\(self.server)\(item["phoneNumberVerify"].stringValue)"
                print(url.phoneNumberVerify)
                url.sessionCheck = "\(self.server)\(item["sessionCheck"].stringValue)"
                print(url.sessionCheck)
                // Get the default Realm
                let realm = try! Realm()
                var urls = realm.objects(UrlCollector.self)
                // Replace the previously stored URLs with the fresh ones
                try! realm.write {
                    realm.delete(urls)
                    realm.add(url)
                }
                // Re-query Realm to confirm the object was persisted
                urls = realm.objects(UrlCollector.self)
                print(urls.count)
            }
        }
    case .failure(_):
        print("Error message: \(String(describing: response.result.error))")
    }
}
}
This code runs in viewDidLoad.
My log:
no of objects in did load 1
login url inside didload
{
result = (
{
changePassword = "/iust_app/android/passwordChange.php";
login = "/iust_app/android/login.php";
phoneNumberVerify = "/iust_app/android/onNumberVerification.php";
sessionCheck = "/iust_app/android/sessionCheck.php";
}
);
}
/iust_app/android/login.php
/iust_app/android/passwordChange.php
/iust_app/android/onNumberVerification.php
/iust_app/android/sessionCheck.php
1
print("no of objects in did load \(sss.count)")
loginUrl = urls!.login
print("login url inside didload \(loginUrl)")
As you can see, these lines run before the request completes. Please read my log lines to understand.

All network requests are executed asynchronously. If you want your code to be executed after you get a response from the server, put it into the completion handler of this request.
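For example, here is a minimal sketch of that idea using the question's own setup (Alamofire, SwiftyJSON and Realm; UrlCollector, loginUrl and server are the names from the question), where both the Realm write and the follow-up read happen inside the responseJSON completion handler:
func getUrls() {
    Alamofire.request("<<<myurl>>>", method: .post, encoding: JSONEncoding.default, headers: nil).responseJSON { response in
        switch response.result {
        case .success(let value):
            let json = JSON(value)
            // The completion handler runs only after the server has answered,
            // so it is safe to persist and then read the URLs here.
            let realm = try! Realm()
            try! realm.write {
                // Replace any previously stored URLs with the fresh ones
                realm.delete(realm.objects(UrlCollector.self))
                for item in json["result"].arrayValue {
                    let url = UrlCollector()
                    url.login = "\(self.server)\(item["login"].stringValue)"
                    url.changePassword = "\(self.server)\(item["changePassword"].stringValue)"
                    url.phoneNumberVerify = "\(self.server)\(item["phoneNumberVerify"].stringValue)"
                    url.sessionCheck = "\(self.server)\(item["sessionCheck"].stringValue)"
                    realm.add(url)
                }
            }
            // Read back only after the write above has finished
            if let urls = realm.objects(UrlCollector.self).first {
                self.loginUrl = urls.login
                print("login url after response: \(self.loginUrl)")
            }
        case .failure(let error):
            print("Error message: \(error)")
        }
    }
}
viewDidLoad then only needs to call getUrls(); anything that depends on loginUrl should be triggered from inside this handler (or via a callback/notification), not in the lines that follow the call.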

Related

Ajax.post --> dom.fetch

I'm trying to use the dom.fetch (or dom.Fetch.fetch) API instead of Ajax.post and have a few problems:
Is this a correct translation from Ajax to fetch?
Ajax.post(
  url = "http://localhost:8080/ajax/myMethod",
  data = byteBuffer2typedArray(Pickle.intoBytes(req.payload)),
  responseType = "arraybuffer",
  headers = Map("Content-Type" -> "application/octet-stream"),
)

dom.fetch(
  "http://localhost:8080/fetch/myMethod",
  new RequestInit {
    method = HttpMethod.POST
    body = byteBuffer2typedArray(Pickle.intoBytes(req.payload))
    headers = new Headers {
      js.Array(
        js.Array("Content-Type", "application/octet-stream")
      )
    }
  }
)
A "ReferenceError: fetch is not defined" is thrown on the js side though, same if replacing with dom.Fetch.fetch.
My setup:
Fresh jsdom 19.0.0 with
npm init private
npm install jsdom
project/plugins.sbt
libraryDependencies += "org.scala-js" %% "scalajs-env-jsdom-nodejs" % "1.1.0"
addSbtPlugin("org.scala-js" % "sbt-scalajs" % "1.8.0")
build.sbt (in js project)
libraryDependencies += "org.scala-js" %%% "scalajs-dom" % "2.0.0"
jsEnv := new JSDOMNodeJSEnv(JSDOMNodeJSEnv.Config()
.withArgs(List("--dns-result-order=ipv4first")))
Thought that the jsEnv workaround was not needed on Scala.js 1.8 (see https://github.com/scala-js/scala-js-js-envs/issues/12#issuecomment-958925883). But it is still needed when I run the ajax version. With the workaround, my ajax version works fine, so it seems that my node installation is fine.
The fetch API is only available by default in browser environments, and not in Node. node-fetch is also not pulled in (or at least not re-exported) by jsdom, so fetch is not available with the current package/environment setup.
Possible solutions:
Set the Scala.js side up in such a way that it calls node-fetch on Node.js and fetch in the browser
Use XMLHttpRequest, which is available on both platforms
(Please see the #scala-js channel in the Scala Discord for a related conversation).
Got help on the #scala-js channel on Discord from Aly and armanbilge, who pointed out that:
fetch is not available by default in Node.js or JSDOM, only in browsers.
scala-js-dom provides typesafe access to browser APIs, not Node.js APIs.
The distinction between browser APIs and Node APIs wasn't clear to me before, although it is well described in step 6 of the scala-js tutorial.
So, dom.fetch of the scala-js-dom API works when running a JS program in a browser, but not when running a test that uses the Node jsEnv(ironment)! To fetch in a test one would have to npm install node-fetch and use node-fetch, maybe by making a facade with scala-js.
Since I want my code to work for both browser (scala-js-dom) and test (Node.js), I ended up falling back to simply using the Ajax.post implementation with XMLHttpRequest:
case class PostException(xhr: dom.XMLHttpRequest) extends Exception {
  def isTimeout: Boolean = xhr.status == 0 && xhr.readyState == 4
}

val url = s"http://$interface:$port/ajax/" + slothReq.path.mkString("/")
val byteBuffer = Pickle.intoBytes(slothReq.payload)
val requestData = byteBuffer.typedArray().subarray(byteBuffer.position, byteBuffer.limit)
val req = new dom.XMLHttpRequest()
val promise = Promise[dom.XMLHttpRequest]()

req.onreadystatechange = { (e: dom.Event) =>
  if (req.readyState == 4) {
    if ((req.status >= 200 && req.status < 300) || req.status == 304)
      promise.success(req)
    else
      promise.failure(PostException(req))
  }
}

req.open("POST", url) // (I only need to POST)
req.responseType = "arraybuffer"
req.timeout = 0
req.withCredentials = false
req.setRequestHeader("Content-Type", "application/octet-stream")
req.send(requestData)

promise.future.recover {
  case PostException(xhr) =>
    val msg = xhr.status match {
      case 0 => "Ajax call failed: server not responding."
      case n => s"Ajax call failed: XMLHttpRequest.status = $n."
    }
    println(msg)
    xhr
}.flatMap { req =>
  val raw = req.response.asInstanceOf[ArrayBuffer]
  val dataBytes = TypedArrayBuffer.wrap(raw.slice(1))
  Future.successful(dataBytes)
}

Cypress - extract URL info

I have this URL :
https://www.acme.com/book/passengers?id=h1c7cafc-5457-4564-af9d-2599c6a37dde&hash=7EPbMqFFQu8T5R3AQr1GCw&gtmsearchtype=City+Break
and want to store these values :
id=h1c7cafc-5457-4564-af9d-2599c6a37dde
hash=7EPbMqFFQu8T5R3AQr1GCw
for use in a later test.
How do I extract these values from the URL? I am using Cypress. Thanks.
Follow these steps and that's all there is to it.
You can put this snippet into the before() hook of your spec file and access the values wherever you want.
cy.location().then(fullUrl => {
  let pathName = fullUrl.pathname
  let arr = pathName.split('?');
  let arrayValues = arr[1].split('&');
  cy.log(arrayValues[0]);
  cy.log(arrayValues[1]);
  cy.log(arrayValues[2]);
})
In case anyone needs the correct answer, use cy.location('search') to extract the search part of the location data.
Then for convenience, convert it to a javascript object with key/value pairs for each item.
Finally, store it in a Cypress alias to use later in the test.
cy.location('search')
  .then(search => {
    const searchValues = search.split('?')[1].split('&')
    // yields: [
    //   id=h1c7cafc-5457-4564-af9d-2599c6a37dde,
    //   hash=7EPbMqFFQu8T5R3AQr1GCw,
    //   gtmsearchtype=City+Break
    // ]
    const searchMap = searchValues.reduce((acc, item) => {
      const [key, value] = item.split('=')
      acc[key] = value.replace('+', ' ')
      return acc
    }, {})
    // yields: {
    //   id: "h1c7cafc-5457-4564-af9d-2599c6a37dde",
    //   hash: "7EPbMqFFQu8T5R3AQr1GCw",
    //   gtmsearchtype: "City Break"
    // }
    cy.wrap(searchMap).as('searchMap')
  })
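The alias can then be read back anywhere later in the test with cy.get('@searchMap').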
Using Srinu Kodi's answer I got it working after changing ...then(fullUrl => ... to
...then((fullUrl) => ...

Geocoding requests to HERE API randomly fails

I am trying to geocode addresses with the HERE API. I am not on a free plan. I tried the following code (Spring Boot in Kotlin):
override fun geocode(address: Address): Coordinate? {
    val uriString = UriComponentsBuilder
        .fromHttpUrl(endpoint)
        .queryParam("app_id", appId)
        .queryParam("app_code", appCode)
        .queryParam("searchtext", addressToSearchText(address))
        .toUriString()
    logger.info("Geocode requested with url {}", uriString)
    val response = restTemplate.getForEntity(uriString, String::class.java)
    return response.body?.let {
        Klaxon().parse<GeocodeResponse>(it)
    }?.let {
        it.Response.View.firstOrNull()?.Result?.firstOrNull()
    }?.let {
        Coordinate(
            latitude = it.Location.DisplayPosition.Latitude,
            longitude = it.Location.DisplayPosition.Longitude
        )
    }.also {
        if (it == null) {
            logger.warn("Geocode failed: {}", response.body)
        }
    }
}
It turned out that when I call this method many times in a row, some requests return empty responses, like this:
{
  "Response": {
    "MetaInfo": {
      "Timestamp": "2019-04-18T11:33:17.756+0000"
    },
    "View": [
    ]
  }
}
I could not figure out any rule for why some requests fail. It seems to be just random.
However, when I try to call the same URLs with curl or in my browser, everything works just fine.
I guess there is some limit on the number of requests per second, but I could not find anything in the HERE documentation.
Does anyone have an idea about the limit? Or may it be something else?
Actually, there was a problem with my code. Requests were failing for addresses containing "special" symbols like ü and ö. The problem was with building the request URL:
val uriString = UriComponentsBuilder
    .fromHttpUrl(endpoint)
    .queryParam("app_id", appId)
    .queryParam("app_code", appCode)
    .queryParam("searchtext", addressQueryParam(address))
    .build(false) // <= this was missed
    .toUriString()
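A likely explanation: toUriString() on its own already percent-encodes the query parameters, and RestTemplate encodes the URL string again when sending the request, so characters such as ü end up double-encoded; building with build(false) keeps the components unencoded so that encoding is applied only once.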

Vapor 3 Websocket with Sessions

In Vapor 2 it was possible to access a session when connecting a new websocket.
For example:
setupRoutes() {
    socket("ws") { request, websocket in
        let session = try request.assertSession()
        guard let userId = session.data["user_id"]?.string else {
            ..
        }
    }
}
In Vapor 3 configure.swift:
let wss = NIOWebSocketServer.default()
wss.get("ws") { websocket, request in
    // --get session information--
    websocket.onText { websocket, text in
        websocket.send(text)
    }
}
services.register(wss, as: WebSocketServer.self)
With Vapor 3 the SessionMiddleware will not be invoked before passing the HTTP upgrade request to the WebsocketServer.
So how can I access session information?
So, I'm super aware that this thread is old and the OP probably found an answer or gave up months ago. Just in case anyone comes across this still looking, can't you use websocket.session to access the session?
This would make the Vapor 3 code
let wss = NIOWebSocketServer.default()
wss.get("ws") { websocket, request in
    guard let userID = (try? websocket.session)?.data["user_id"]?.string else {
        ...
    }
    websocket.onText { websocket, text in
        websocket.send(text)
    }
}
services.register(wss, as: WebSocketServer.self)

Akka.Net Clustering Simple Explanation

I am trying to set up a simple cluster using Akka.NET.
The goal is to have a server receive requests and Akka.NET process them through its cluster.
For testing and learning I created a simple WCF service that receives a math equation, and I want to send this equation to the cluster to be solved.
I have one server project and another client project.
The configuration on the server side is:
<![CDATA[
akka {
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
    debug {
      receive = on
      autoreceive = on
      lifecycle = on
      event-stream = on
      unhandled = on
    }
    deployment {
      /math {
        router = consistent-hashing-group #round-robin-pool # routing strategy
        routees.paths = [ "/user/math" ]
        virtual-nodes-factor = 8
        #nr-of-instances = 10 # max number of total routees
        cluster {
          enabled = on
          max-nr-of-instances-per-node = 2
          allow-local-routees = off
          use-role = math
        }
      }
    }
  }
  remote {
    helios.tcp {
      transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
      applied-adapters = []
      transport-protocol = tcp
      port = 8081
      hostname = "127.0.0.1"
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://ClusterSystem#127.0.0.1:8081"] # address of seed node
  }
}
]]>
On the client side the configuration is like this:
<![CDATA[
akka {
  actor.provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
  remote {
    log-remote-lifecycle-events = DEBUG
    log-received-messages = on
    helios.tcp {
      transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
      applied-adapters = []
      transport-protocol = tcp
      port = 0
      hostname = 127.0.0.1
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://ClusterSystem#127.0.0.1:8081"] # address of the seed node
    roles = ["math"] # roles this member is in
  }
  actor.deployment {
    /math {
      router = round-robin-pool # routing strategy
      routees.paths = ["/user/math"]
      nr-of-instances = 10 # max number of total routees
      cluster {
        enabled = on
        allow-local-routees = on
        use-role = math
        max-nr-of-instances-per-node = 10
      }
    }
  }
}
]]>
The cluster connection seems to be made correctly. I see the status [UP] and the association with the role "math" appear on the server side.
Even following the WebCrawler example, I haven't managed to get a message delivered. I always get dead letters.
I tried like this:
actor = sys.ActorOf(Props.Empty.WithRouter(FromConfig.Instance), "math");
or
var actor = sys.ActorSelection("/user/math");
Does someone know a good tutorial, or could someone help me?
Thanks
Some remarks:
First: assuming you're sending work from the server to the client, you are effectively remote-deploying actors on your client.
Which means only the server node needs the actor.deployment config section.
The client only needs the default cluster config (and your role setting, of course).
Second: try to make it simpler first. Use a round-robin-pool instead; it's much simpler. Try to get that working and work your way up from there.
This way it's easier to eliminate configuration/network/other issues.
Your usage: actor = sys.ActorOf(Props.Empty.WithRouter(FromConfig.Instance), "math"); is correct.
A sample of how your round-robin-pool config could look:
deployment {
  /math {
    router = round-robin-pool # routing strategy
    nr-of-instances = 10 # max number of total routees
    cluster {
      enabled = on
      max-nr-of-instances-per-node = 2
      allow-local-routees = off
      use-role = math
    }
  }
}
Try this out and let me know if that helps.
Edit:
OK, after looking at your sample, some things I changed:
ActorManager->Process: you're creating a new router actor per request. Don't do that. Create the router actor once and reuse the IActorRef.
Got rid of the minimal-cluster-size settings in the MathAgentWorker project.
Since you're not using remote actor deployment, I changed the round-robin-pool to a round-robin-group.
After that it worked.
Also remember that if you're using the consistent-hashing-group router you need to specify the hashing key. There are various ways to do that; in your sample I think the easiest way would be to wrap the message you're sending to your router in a ConsistentHashableEnvelope. Check the docs for more information.
Finally the akka deployment sections looked like this:
deployment {
  /math {
    router = round-robin-group # routing strategy
    routees.paths = ["/user/math"]
    cluster {
      enabled = on
      allow-local-routees = off
      use-role = math
    }
  }
}
On the MathAgentWorker I only changed the cluster section, which now looks like this:
cluster {
  seed-nodes = ["akka.tcp://ClusterSystem#127.0.0.1:8081"] # address of the seed node
  roles = ["math"] # roles this member is in
}
And the only thing that the ActorManager.Process does is:
return await Program.Instance.RouterInstance.Ask<TResult>(msg, TimeSpan.FromSeconds(10));
