I'm using an ESP32-WROOM with PlatformIO.
My question is: how do I set the partition table to a 3 MB app with no OTA and 1 MB for SPIFFS? I understand I need to add some configuration to the platformio.ini file, but I couldn't find a good guide online. Thanks for any answers.
You have to write your own partition table, as described in the PlatformIO docs.
You can find a lot of examples and predefined partition tables here.
If you want to get a deeper insight, have a look at the docs of Espressif.
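PlatformIO lets you point board_build.partitions either at one of the predefined tables that ship with the Arduino core (huge_app.csv already gives roughly a 3 MB app with no second OTA slot and close to 1 MB of SPIFFS) or at a custom CSV in your project root. A minimal sketch, assuming a 4 MB flash module (adjust offsets and sizes if yours differs):

```ini
[env:esp32dev]
platform = espressif32
board = esp32dev
framework = arduino
; point PlatformIO at the custom table below
board_build.partitions = partitions_custom.csv
; alternatively, use a predefined table from the Arduino core:
; board_build.partitions = huge_app.csv
```

And a possible partitions_custom.csv (3 MB app, roughly 1 MB SPIFFS, no second app slot for OTA):

```csv
# Name,    Type, SubType, Offset,   Size
nvs,       data, nvs,     0x9000,   0x5000
otadata,   data, ota,     0xe000,   0x2000
app0,      app,  ota_0,   0x10000,  0x300000
spiffs,    data, spiffs,  0x310000, 0xF0000
```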
I have a (desktop) application that logs high-frequency data to SQLite. Our analysts have asked to move to Parquet (for domain-specific reasons). I have ported our application, and I'm getting terrible write performance (very similar to committing SQLite on every update, without batching transactions).
Does Parquet have similar transaction control, or an analogous mechanism?
Additional background information:
In every transaction I have ~1200 columns of data to update.
I defined an entirely "flat" Parquet message schema, where every entry is required.
Additionally, I believe I've ruled out filesystem-journaling-style bottlenecks, but in case it's relevant: I am testing on XFS and would deploy on ext4.
And finally, this is implemented with the Rust implementation of Parquet (parquet = 0.16.0).
I'm happy to fill in any gaps. Where have I gone wrong in this port?
After researching this further: parameters such as row_group_size, compression, encoding, page_size, etc. can all be set using the WriterPropertiesBuilder, and they can even be configured on a per-column basis.
This did not actually solve my problem, but it answers the gist of my question above about what can be configured on Parquet FileWriters, and where.
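For reference, a minimal sketch of setting those properties with the Rust parquet crate (the column name is made up, and method names have shifted between crate versions, so check the docs for the release you are pinned to):

```rust
use parquet::basic::{Compression, Encoding};
use parquet::file::properties::WriterProperties;
use parquet::schema::types::ColumnPath;

// Sketch only: buffer many rows per row group instead of flushing each
// "transaction", and tune compression/encoding globally or per column.
fn writer_props() -> WriterProperties {
    WriterProperties::builder()
        // Larger row groups amortise per-flush overhead -- the closest
        // analogue to batching many SQLite updates into one commit.
        .set_max_row_group_size(128 * 1024)
        // File-wide default compression.
        .set_compression(Compression::SNAPPY)
        // Per-column override; "sensor_a" is a hypothetical column name.
        .set_column_encoding(ColumnPath::from("sensor_a"),
                             Encoding::DELTA_BINARY_PACKED)
        .build()
}
```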
Where can I find resources, such as a PDF or user guide, for learning Vertica DB?
I'm a beginner with Vertica, and I'm also looking for the factors that affect performance while loading data.
All of the documentation is posted publicly on my.vertica.com. Data-load performance depends on many factors; you should probably start with Bulk-Loading Data and then review the many COPY parameters. For a general beginner introduction to Vertica, see Getting Started.
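As a rough illustration of what those COPY parameters look like (the table name and file path here are made up; see the COPY reference for your Vertica version for the full option list):

```sql
-- Bulk-load a delimited file; DIRECT writes straight to disk (ROS),
-- which is usually the better choice for large loads.
COPY sales FROM '/data/sales.csv' DELIMITER ',' DIRECT;
```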
These are three simple questions for which it was surprisingly hard to find definitive answers.
Does Elasticsearch support indexing data from RDBMS tables (Oracle/SQL Server/Informix) out of the box?
If yes, can you please point me to documentation on how to do it?
If not, what are the alternative ways (plugins like Rivers are deprecated) with a good reputation?
I'm surprised there isn't a solid answer for this yet, so here's the solution: Logstash gives us the ability to push data from an RDBMS directly into Elasticsearch.
Here's a link to a tutorial which tells you how to go about it. Briefly (all details in the first link): you need a JDBC driver for the relational database you'll be using (Postgres, MySQL, etc.) and a config file specifying the relational database as your input and Elasticsearch as your output; a minimal config sketch follows the links below. You can also specify a schedule so the index keeps updating at regular intervals.
Here's the article which mentions the configuration and gets you started (See Example 2): https://www.elastic.co/blog/logstash-jdbc-input-plugin
Here's the article which tells you how to configure the schedule: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html#_scheduling
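A minimal sketch of such a config (the connection details, table, and index names are placeholders; the option names come from the jdbc input and elasticsearch output plugins):

```conf
input {
  jdbc {
    jdbc_driver_library => "/path/to/postgresql-jdbc.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/mydb"
    jdbc_user => "loader"
    jdbc_password => "secret"
    # Only pull rows changed since the previous run.
    statement => "SELECT * FROM orders WHERE updated_at > :sql_last_value"
    # Cron-style schedule: run every 5 minutes.
    schedule => "*/5 * * * *"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "orders"
    document_id => "%{id}"
  }
}
```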
I have read a lot of blogs/articles about how different industries are using big data analytics, but most of these articles fail to mention:
What kind of data these companies used, and what the size of the data was.
What kinds of tools and technologies they used to process the data.
What problem they were facing, and how the insight they got from the data helped them resolve it.
How they selected the tools/technologies to suit their needs.
What kinds of patterns they identified in the data, and what kinds of patterns they were looking for.
I wonder if someone can answer all of these questions, or share a link that answers at least some of them.
It would be great if someone could share how the finance industry is making use of big data analytics.
Your question is very broad, but I will try to answer from my own experience.
1 - What kind of data do these companies use?
One of the strengths of Hadoop is that you can use a very wide range of sources for your data: .csv/.txt files, JSON, MySQL, photos, videos...
It can contain data about marketing, social networks, server logs...
What was the size of the data?
There is no rule about that. It can range from 50-60 GB up to 1 PB, depending on the data and the company.
2 - What kinds of tools and technologies do they use to process the data?
No rules about that; it depends on the needs. To organize and process data they use Hadoop with Hive and Pig. To query data they want short response times, so they use NoSQL / in-memory databases over a smaller dataset (refined by Hadoop). In some cases, companies use an ETL tool like Talend in order to go faster.
3 - What problem were they facing, and how did the insight they got from the data help them resolve it?
The main issue for companies is the growth of their data. At some point the data becomes too big to process with traditional tools like MySQL, so they start to use Hadoop, for example.
4 - How did they select the tools/technologies to suit their needs?
I think that's an internal question. Companies choose their tools based on licence costs, their own skills, their end needs...
5 - What kinds of patterns did they identify in the data, and what kinds were they looking for?
I don't really understand this question.
Hope this helps.
I think getting what you want is a difficult job; you have to gather data little by little from different sources. Just make sure to visit these links:
A bunch of free reports (I am studying the list right now):
http://www.oreilly.com/data/free/
And the famous McKinsey report:
http://www.mckinsey.com/~/media/McKinsey/dotcom/Insights%20and%20pubs/MGI/Research/Technology%20and%20Innovation/Big%20Data/MGI_big_data_full_report.ashx
I was just wondering what the best way is to store images in an iPhone/iPad (Xcode) application when I'm fetching them from the internet dynamically. My main concern is: if I store them in my database as binary data, will that hurt performance when querying the database?
In that case, is it better to store them in the application's folder?
Thanks for the responses.
The Apple dev forums have some good discussion on this; a good post can be found here. The general guideline from the post: a data blob under 16 KB is fine, 100 KB is OK as well, but approaching 1 MB it is better to store the data outside of Core Data or any database.
In terms of fetching performance, it will boil down to how you have normalized your data model.
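If you go the "store outside the database" route, a small Swift sketch of the usual pattern: write the downloaded image data into the app's Caches directory and persist only its path/URL in your model (the function and file names are illustrative):

```swift
import Foundation

// Save the raw image bytes to the Caches directory and return the URL;
// store that path in the database instead of the blob itself.
func cacheImageData(_ data: Data, named fileName: String) throws -> URL {
    let caches = FileManager.default.urls(for: .cachesDirectory,
                                          in: .userDomainMask)[0]
    let fileURL = caches.appendingPathComponent(fileName)
    try data.write(to: fileURL, options: .atomic)
    return fileURL
}
```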