We use an application user (https://learn.microsoft.com/en-us/powerapps/developer/data-platform/use-single-tenant-server-server-authentication) to do CRUD operations against Dynamics CRM. Everything works fine except when we start to upload files to D365 CRM. We are using the cloud-based version of the CRM.
The call that we make looks like this:
curl --location --request POST 'https://{CRM-INSTANCE}.crm6.dynamics.com/api/data/v9.0/UploadDocument' \
--header 'Authorization: Bearer {TOKEN_PLACEHOLDER}' \
--header 'OData-MaxVersion: 4.0' \
--header 'OData-Version: 4.0' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--header 'Cookie: ARRAffinity=d33afaef86fb660a97ce9fe142520dd808ad8aaee8c775b216ff896553cb48c5; ReqClientId=d37cd1a9-79a8-4e7c-b0ad-280e64968576; orgId=879cd69e-8d15-49c7-9c44-f7cdeb21848a' \
--data-raw '{
  "Content": "dGVzdGZpbGU=",
  "Entity": {
    "@odata.type": "Microsoft.Dynamics.CRM.sharepointdocument",
    "locationid": "",
    "title": "testme.txt"
  },
  "OverwriteExisting": true,
  "ParentEntityReference": {
    "@odata.type": "Microsoft.Dynamics.CRM.account",
    "accountid": "{ACCOUNTID_PLACEHOLDER}"
  },
  "FolderPath": ""
}'
When we upload a file through this Dynamics 365 CRM API, the call fails with status code 500 (Internal Server Error):
{
  "error": {
    "code": "0x80060761",
    "message": "Failed to connect to SharePointSite."
  }
}
But we have already configured SharePoint document management for Dynamics CRM (we can see the documents in D365), and we can upload to D365 CRM through the web console.
We are a little stumped about what to even look at. What we have already tried (see also the sanity check after this list):
We gave the app user access to SharePoint. We can upload the file directly to SharePoint, but we want to use the UploadDocument API from D365.
We cannot see any obvious permission that we could give our D365 application user to grant it access to SharePoint. I searched for this, but there is no role that obviously does this.
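A sanity check we can run with the same application-user token (a sketch; sharepointsites and its name/absoluteurl/validationstatus attributes are the standard Dataverse names, not something confirmed from our org) is to list the SharePoint sites the organization has configured:
# Hypothetical sanity check: list the configured SharePoint sites. If nothing
# comes back, or validationstatus does not indicate a validated site, then
# UploadDocument has no site it can connect to with this token.
curl --location --request GET 'https://{CRM-INSTANCE}.crm6.dynamics.com/api/data/v9.0/sharepointsites?$select=name,absoluteurl,validationstatus' \
--header 'Authorization: Bearer {TOKEN_PLACEHOLDER}' \
--header 'Accept: application/json'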
The old server will be removed; its Elasticsearch version is v6.8, and the new server has the same version installed. Now I'll migrate all data to the new server. Is my procedure correct?
On the old server, add path.repo to elasticsearch.yml, for example
path.repo: ["/data/backup"]
Then restart the Elasticsearch service.
On the old server, register the snapshot repository:
curl -H "Content-Type: application/json" \
  -XPUT http://192.168.50.247:9200/_snapshot/my_backup \
  -d '{ "type": "fs", "settings": { "location": "/data/backup", "compress": true }}'
Create the backup:
curl -XPUT http://192.168.50.247:9200/_snapshot/my_backup/news0618
Restore on the new server (IP: 192.168.10.49):
curl -XPOST http://192.168.10.49:9200/_snapshot/my_backup/news0618/_restore
Can these operations migrate all the data?
If you are using fs as the snapshot repository type, this will not work: your new instance is hosted on a different host, so it has no access to the old host's file system. You need a shared location such as a mounted volume, S3, or Azure Blob Storage.
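For example (a sketch only; /mnt/shared/backup is a hypothetical path that would have to be visible to both hosts, e.g. an NFS mount, and listed in path.repo on each node), you would register the same shared location as the repository on both clusters:
# Register the shared repository on the old cluster ...
curl -H "Content-Type: application/json" \
  -XPUT http://192.168.50.247:9200/_snapshot/my_backup \
  -d '{ "type": "fs", "settings": { "location": "/mnt/shared/backup", "compress": true }}'
# ... and on the new cluster, so the restore call can find the snapshot files.
curl -H "Content-Type: application/json" \
  -XPUT http://192.168.10.49:9200/_snapshot/my_backup \
  -d '{ "type": "fs", "settings": { "location": "/mnt/shared/backup", "compress": true }}'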
You should use reindexing rather than snapshot and restore; it is pretty simple. See the Elasticsearch documentation on reindexing from a remote cluster.
Steps:
Whitelist the remote host in elasticsearch.yml using the reindex.remote.whitelist property in your new Elasticsearch instance:
reindex.remote.whitelist: "192.168.50.247:9200"
Restart the new Elasticsearch instance.
Reindexing:
curl -X POST "http://192.168.10.49:9200/_reindex?pretty" -H 'Content-Type: application/json' -d'
{
  "source": {
    "remote": {
      "host": "http://192.168.50.247:9200"
    },
    "index": "source-index-name",
    "query": {
      "match_all": {}
    }
  },
  "dest": {
    "index": "dest-index-name"
  }
}
'
The same documentation also covers reindexing many indices.
Warning: The destination should be configured as wanted before calling _reindex. Reindex does not copy the settings from the source or its associated template. Mappings, shard counts, replicas, and so on must be configured ahead of time.
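For example, a minimal sketch of creating the destination index first (the settings and the _doc mapping below are placeholders; copy whatever the source index actually uses):
# Create the destination index up front so _reindex does not auto-create it
# with default settings. In Elasticsearch 6.8 mappings still carry a type
# name (_doc here).
curl -X PUT "http://192.168.10.49:9200/dest-index-name" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  },
  "mappings": {
    "_doc": {
      "properties": {
        "title": { "type": "text" }
      }
    }
  }
}
'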
Hope this helps!
I have a very minimal Spring Boot app with the dependencies spring-boot-starter-web and spring-boot-starter-security. I just learned Spring Boot yesterday. I noticed that a POST request like the one below succeeds even when the wrong basic auth password is provided, because of the presence of a cookie from a previous successful request.
curl --location --request POST 'http://localhost:8080/create' \
--header 'Content-Type: application/json' \
--header 'Cookie: JSESSIONID=EA39A09B47575D192845148AFFCAD85B' \
--data-raw '{
  "surname": "Murdock",
  "givenname": "John",
  "placeofbirth": "Slovenia",
  "pin": "1234"
}'
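To double-check that it really is the session cookie doing the authenticating, the same call without the Cookie header and with a wrong password should come back 401 under Spring Security's default behavior (user:wrong-password is a placeholder for whatever credentials are configured):
# Same request, wrong password, no session cookie: expect HTTP 401.
curl -i --request POST 'http://localhost:8080/create' \
--user 'user:wrong-password' \
--header 'Content-Type: application/json' \
--data-raw '{"surname":"Murdock","givenname":"John","placeofbirth":"Slovenia","pin":"1234"}'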
Is this the expected behavior? And how do I make Spring Boot always check the provided basic auth password?
I have a service in Kong, and I have set up the proxy-cache plugin for that service.
curl -X POST http://localhost:8001/plugins \
  --data "name=proxy-cache" \
  --data "config.strategy=redis" \
  --data 'service_id=2f0a285d-7b25-48d6-adc3-bbf28ffe5f47' \
  --data "config.redis.host=127.0.0.1" \
  --data "config.redis.port=6379" \
  --data "config.redis.password=my_redis_password"
When I call an API from that service:
curl -i -X GET --url http://localhost:3002/v1/currency --header 'apikey: MY_API_KEY'
everything works correctly, but X-Cache-Status is always Bypass:
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Content-Length: 3654
Connection: keep-alive
X-RateLimit-Limit-second: 100
X-RateLimit-Remaining-second: 99
X-Cache-Key: 3e18cdfc6e02359fb0f874efdf5788d8
X-Cache-Status: Bypass
X-Powered-By: Express
...
How can I debug the reason for the Bypass?
To avoid Bypass in X-Cache-Status, you have to add this config when you create your proxy-cache plugin. The plugin only caches responses whose Content-Type matches its content_type list, and your upstream responds with application/json; charset=utf-8:
--data "config.content_type=application/json; charset=utf-8"
The proxy-cache plugin that comes bundled with the Kong Community Edition only allows in-memory caching. If you want to use Redis for caching, you will have to use the Kong Enterprise version; see the Kong documentation for more information.
As an alternative, there is an open-source plugin called kong-plugin-proxy-cache available on GitHub. You will have to first install the plugin from LuaRocks and then enable it in the Kong config:
# Install plugin dependency
sudo luarocks install lua-resty-redis-connector
# install plugin
sudo luarocks install kong-plugin-proxy-cache
# Enable plugin in kong.conf
plugins = bundled,proxy-cache
# After enabling, you can use plugin with any service, route or consumer.
# To enable it for a service
curl -X POST http://localhost:8001/services/<service-name>/plugins \
--data "name=proxy-cache" \
--data "config.cache_ttl=300" \
--data "config.cache_control=false" \
--data "config.redis.host=<redis-host>" \
--data "config.redis.port=<redis-port>"
I would like to edit my Xcode bot through the Xcode Server API by sending the blueprint through PATCH.
However, when I send my PATCH request, Xcode Server replies with an unchanged JSON of my old blueprint.
My request is:
curl -X PATCH -H "Content-Type: application/json" -d "{\"my\": \"json\"}" https://<username>:<password>@<my_domain>:20343/api/bots/<bot_id>
What am I missing?
There are two missing parameters, each causing its own problem:
Missing x-xcsclientversion header: the server returns 400 Bad Request.
Missing overwriteBlueprint=true: the server does not change the blueprint.
Your final request should look like the following:
curl -X PATCH -H "Content-Type: application/json" -H "x-xcsclientversion: 18" -d "{\"json goes\": \"here\"}" https://<username>:<password>@<domain>:20343/api/bots/<_id>?overwriteBlueprint=true
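To confirm the blueprint actually changed, you can fetch the bot again afterwards (add -k if your server uses a self-signed certificate):
curl -k -X GET https://<username>:<password>@<domain>:20343/api/bots/<_id>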
Source: radar and Developer Relations (thanks!)
We are trying to automate the creation of projects (including user/group management) in SonarQube, and I have already found the Web API documentation in our SonarQube 5.6 installation. But if I try to create a project with the following settings
JSON file create-project.json:
{"key": "test1", "name": "Testprojekt1"}
curl request:
curl --noproxy '*' -D -X POST -k -u admin:admin -H 'content-type: application/json' -d create_project.json http://localhost:9000/api/projects/create
I get the error:
{"err_code":400,"err_msg":"Missing parameter: key"}
It's a bit strange, because if I try e.g. the URL
http://localhost:9000/api/projects/index
I get the list of the projects I created manually, and if I try a request like
curl -u admin:admin -X POST 'http://localhost:9000/api/projects/create?key=myKey&name=myProject'
it works too. But I would like to use the new API, because it looks like it supports many more functions than the 4.x API of SonarQube.
Maybe someone here can help me with this problem; I would be very thankful for every useful hint.
best regards
Dan
I found this question because I got the same "parameter missing" error message.
So what we both did not understand: the SonarQube API expects the parameters as plain URL parameters, not as JSON-formatted parameters the way most REST APIs do today.
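For example, the create call from the question should work once key and name are sent as form/URL parameters instead of a JSON body (same values as above; for a POST, form parameters are equivalent to the URL parameters in the working request):
curl -u admin:admin -X POST "http://localhost:9000/api/projects/create" \
  --data-urlencode "key=test1" \
  --data-urlencode "name=Testprojekt1"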
PS: It would be nice if this could be added to the SonarQube documentation.