Parse reduce deploy version - parse-platform

I have been working on a test app for a few days, using Parse to store data. Since I am a beginner to Parse, I have deployed cloud code many times, and the version number has now gone above 200. Is there a way to reset that version number back to the beginning?
I don't want to create a new Parse app and import all the data again.

Related

Google Cloud Data flow jobs failing with error 'Failed to retrieve staged files: failed to retrieve worker in 3 attempts: bad MD5...'

SDK: Apache Beam SDK for Go 0.5.0
We are running Apache Beam Go SDK jobs in Google Cloud Dataflow. They had been working fine until recently, when they intermittently stopped working (no changes were made to code or config). The error that occurs is:
Failed to retrieve staged files: failed to retrieve worker in 3 attempts: bad MD5 for /var/opt/google/staged/worker: ..., want ; bad MD5 for /var/opt/google/staged/worker: ..., want ;
(Note: the second hash value appears to be missing from the error message.)
As best I can guess, something is wrong with the worker: it seems to be comparing MD5 hashes of the worker binary and one of the values is missing. I don't know exactly what it is comparing against, though.
Does anybody know what could be causing this issue?
The fix seems to have been to rebuild the worker_harness_container_image with the latest changes. I had tried this earlier, but I didn't have the latest release when I built it locally. After I pulled the latest from the Beam repo, rebuilt the image (per the notes at https://github.com/apache/beam/blob/master/sdks/CONTAINERS.md), and reran the job, it worked again.
I'm seeing the same thing. If I look into the Stackdriver logging I see this:
Handler for GET /v1.27/images/apache-docker-beam-snapshots-docker.bintray.io/beam/go:20180515/json returned error: No such image: apache-docker-beam-snapshots-docker.bintray.io/beam/go:20180515
However, I can pull the image just fine locally. Any ideas why Dataflow cannot pull it?

CouchDB Performance: 1.6.1 vs 2.1.1

We are looking at upgrading CouchDB on our RHEL servers from 1.6.1 to 2.1.1. Before we do, we wanted to run a performance test, so we created a JMeter test that goes directly against the database. It does not use any random values, so the test is exactly the same each time and the two results can be compared. This is a standalone server; we are not using clustering. I ran the tests the exact same way for both versions: I ran them against 1.6.1, then installed 2.1.1 on the same machine, creating the database fresh for each test run. (I also updated Erlang to R19.3.)
The results were very shocking:
Average response times:
1.6.1: 271.15 ms
2.1.1: 494.32 ms
POSTs and PUTs were really bad:
POST:
1.6.1: 38.25 ms
2.1.1: 250.18 ms
PUT:
1.6.1: 37.33 ms
2.1.1: 358.76 ms
We are just using the default values for all the config options, except that we changed 1.6.1 to have delayed_commits = false (that is now the default in 2.1.1). I'm wondering if there's some default that changed that would make 2.1.1 so bad.
When I ran the CouchDB setup from the Fauxton UI, it added the following to my local.ini:
[cluster]
n = 1
Is that causing CouchDB to try to use clustering, or is that the same as if there were no entries here at all? One other thing, I deleted the _global_changes database, since it seemed as if it would add extra processing that we didn't need.
Is that causing CouchDB to try to use clustering, or is that the same as if there were no entries here at all?
It's not obvious from your description. If you set up CouchDB 2.0 as clustered, then that's how it will work. This is something you should know based on the setup instructions you followed: http://docs.couchdb.org/en/2.1.1/install/setup.html
You can tell by locating the files on disk and seeing if they are in a shards directory or not.
I'm pretty sure you want at least two nodes, so setting n = 1 doesn't seem like something you should be doing.
If you're trying to run a single node, follow the instructions I linked above to do that.
One other thing, I deleted the _global_changes database, since it seemed as if it would add extra processing that we didn't need.
You probably shouldn't delete random parts of your database unless there are instructions saying it is OK.

How to set up Cloud Code on the open-source Parse Server using Heroku

I have looked everywhere but cannot seem to figure out how to set up Cloud Code on the open-source Parse Server using Heroku.
I see this link, which tells me what to put in the index.js and main.js files: Implementing Cloud Code on Open Source Parse Server. However, I cannot find those files, nor can I find the "cloud" folder.
How do I find the cloud folder?
I created the Parse Server on MongoDB using the "Deploy to Heroku" link on this page: https://github.com/ParsePlatform/parse-server-example. After creating my application by filling out all the information, I ran heroku git:clone -a yourAppName to clone the application files. However, when I use the command I get an empty repository and the following message in my terminal:
Cloning into 'hyv3-moja'...
warning: You appear to have cloned an empty repository.
So, how/where do I find the cloud folder with main.js? Did I miss any step in creating the Parse Server?
I also tried the Parse command line tool. However, the parse new command requires logging in to a Parse account, and since Parse is shutting down they are not accepting new accounts (I did not have one before). So this seems like a dead end.
So can someone please explain how to set up Cloud Code? I want code that decrements a column in the database every second, so it operates like a timer. Basically, I want my application to create objects in the database that last a certain amount of time chosen by the user; for this example, say 24 hours. From the moment an object is created, I want to count down those 24 hours in the database, so that when a user clicks to view the object, I can read the time remaining from the database and show how much time is left in the object's life.
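A common alternative to literally decrementing a column every second (which would require a constantly running job) is to store when the object expires and compute the remaining time whenever it is read. A minimal sketch of that calculation, with illustrative names that are not part of the Parse SDK:

```javascript
// Sketch only: record "created + lifetime" once, derive the countdown on read.
// All names here are illustrative, not Parse APIs.
function secondsRemaining(createdAtMs, lifetimeHours, nowMs) {
  const expiresAtMs = createdAtMs + lifetimeHours * 3600 * 1000;
  // Never report a negative countdown once the object has expired.
  return Math.max(0, Math.floor((expiresAtMs - nowMs) / 1000));
}
```

In a Cloud Function you would load the object's creation time, call a helper like this, and return the result to the client; the database never needs a per-second update.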

Continuously running script on Parse (server)

I am making a lottery app, built with Parse, that draws a winner every 48 hours. The problem is that the code checking whether the 48 hours have run out can't live on a phone, because there is no permanent connection to Parse. Is there a way to have a script run continuously on Parse, or on the open-source Parse Server with MongoDB?
The best solution would be a Cloud Job, but Cloud Jobs are not yet available in Parse Server.
The other option is to create a Cloud Function and use an external tool to call this cloud function every hour, for example, and check whether the 48 hours have elapsed.
Another option is to create an interval, something like this:
setInterval(function() {
    // Do what you have to do.
}, 3600000); // Will run every 1h.
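To make the interval approach concrete, the 48-hour check itself can be a small pure function; loading the last draw time and picking the winner are left as comments because they depend on your schema (all names below are illustrative, not Parse APIs):

```javascript
// Sketch: decide whether a new draw is due. In real code, lastDrawMs
// would be read from a stored object in your database.
const DRAW_INTERVAL_MS = 48 * 3600 * 1000; // 48 hours

function drawIsDue(lastDrawMs, nowMs) {
  return nowMs - lastDrawMs >= DRAW_INTERVAL_MS;
}

// Wired into the interval above:
// setInterval(function() {
//   if (drawIsDue(lastDrawMs, Date.now())) {
//     // pick a winner, then persist the new draw timestamp
//   }
// }, 3600000); // check hourly
```

Note that an in-process interval stops whenever the server restarts, which is one reason the external-scheduler option is often preferred.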

Store data in selenium webdriver

I'm looking for a way to store data in Selenium for use in future tests.
I'm using Jenkins, Maven + Selenium, and TestNG.
How can I store some data? Let's say I want to run a test, get some data from a website (a weather forecast), store it somewhere, and the next day run a test to check whether the forecast matched today's weather.
I could store it in a txt file and parse it with a regex, but I'm sure there is a better way to do it.
You have to consider what "Selenium WebDriver" is in this context: it is a Java app that "exists" only while it is running. Once the run stops, it is purged from memory, including all data it held. If you are using JUnit or TestNG (as you specified), this data is purged even more frequently: after every test class.
To accomplish what you are asking, you will need something external to your tests. You can certainly use a txt file as you suggested; a spreadsheet might also do for your purposes. Most applications use an entire database.
You also mentioned Jenkins. This makes the external storage a more interesting problem, as Jenkins often purges the current working directory before each run.