I'm using Strapi 3.0.0-beta.15
DB: MongoDB 4.2.2
After updating one of my model's fields, which is of type 'text', the last three characters are cut off and replaced with '000'. I have only seen this when the field contains numbers.
Writing the text (before saving):
After saving:
Has anyone run into this problem before?
This issue has been fixed in the latest version of Strapi (at least beta.18).
I suggest you update your application, following a migration guide if necessary, or the application update guide in the documentation.
Here is what I did in a short video - https://www.loom.com/share/23076126176546708f9dcaa7eac1924c
I want to allow uppercase letters for frontend users in TYPO3 9.5.x.
When a new feuser self-registers via sfregister_form, it works, but when I add a feuser in the backend, the username gets converted to only lowercase letters.
I found solutions here in the forum (https://www.typo3.net/forum/thematik/zeige/thema/47903/) on how to (probably) change it, but they only work in older versions of TYPO3. Since there have been a lot of changes in TYPO3 since that 10-year-old post, and I found nothing about TYPO3 9.5.x, I'm asking the question here.
By default, the username field has the lower eval set (among others). The quick solution is to override it: if you have your own extension, you can add the following to its ext_tables.php or Configuration/TCA/Overrides/fe_users.php:
$GLOBALS['TCA']['fe_users']['columns']['username']['config']['eval'] = 'nospace,trim,uniqueInPid,required';
Note: you can do this in either of two files; nowadays the second is recommended.
Note 2: Since TYPO3 7.3, typo3conf/extTables.php is deprecated: https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/7.3/Deprecation-65344-ExtTables.html
typo3conf/ext/yourext/ext_tables.php
typo3conf/ext/yourext/Configuration/TCA/Overrides/fe_users.php
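Putting it together, a minimal sketch of the override file (the extension key "yourext" is a placeholder; the eval list is the one shown above, with lower removed):

```php
<?php
// typo3conf/ext/yourext/Configuration/TCA/Overrides/fe_users.php
defined('TYPO3_MODE') || die();

// Drop 'lower' from the username eval so uppercase letters survive
// when a backend user creates or edits a frontend user.
$GLOBALS['TCA']['fe_users']['columns']['username']['config']['eval']
    = 'nospace,trim,uniqueInPid,required';
```

After changing the TCA, clear the system caches so the override takes effect.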
This works with 9.5.x, as shown in the screenshots:
I have recently started using the ASP.NET Zero system and noticed that when I'm attempting to change an edition to a paid edition, it does not show it when I go to edit the edition again. In my database, I have the other values that I entered so the edition is saving correctly. When editing the edition, the radio button will still say "Free."
I noticed that the EditionAppService.cs file utilizes the ObjectMapper to map from SubscribableEditions to EditionEditDto. When the SubscribableEdition enters the mapper, it has the values for Monthly and Annual prices. When it exits the mapper as the EditionEditDto, both values are null. Somehow, the ObjectMapper isn't pulling over these values.
I have attached two pictures below. The first shows the SubscribableEdition that has the AnnualPrice and the MonthlyPrice.
When I take the next step in the second picture to see the results of the ObjectMapper, you will see that it no longer has a value for either of those fields in the EditionEditDto.
This results in the edition appearing to be Free when editing it. I thought that maybe the fact that it was a nullable Decimal was the problem. But once I removed that and converted it to a normal decimal, it filled in the prices with zeroes instead of null values. When I downloaded and completed the PhoneBook tutorial, I noticed that project also had the same issue of the Edition not saving.
I am trying to figure out why the mapper isn't mapping the values over to the EditionEditDto correctly.
It's due to a missing map in CustomDtoMapper.cs, which will be added in v5.1. Adding ReverseMap() also creates the inverse map (SubscribableEdition to EditionEditDto), so the price values survive the round trip:
- configuration.CreateMap<EditionEditDto, SubscribableEdition>();
+ configuration.CreateMap<EditionEditDto, SubscribableEdition>().ReverseMap();
I am using the NiFi 1.1.1 package. I applied the patch files from the link below to the source code, because of the "Destination cannot be within sources" issue when splitting a flowfile with a header line count greater than 0.
https://issues.apache.org/jira/browse/NIFI-3255
After applying the patches, the SplitText processor works fine with a header line count of 0 or of 1 and above.
However, those changes to the SplitText processor only work with a small number of rows. For example, a flowfile containing 1,000 rows splits fine.
If the input file contains more than 20,000 rows, it doesn't split the data and throws an `ArrayIndexOutOfBoundsException`.
I have attached an image of the error I faced.
Can anyone guide me on how to resolve this issue?
https://i.stack.imgur.com/UNKI0.png
After some digging, it seems that you have run into a problem in the 1.1 version of NiFi.
As discussed here, upgrading to NiFi 1.2 or above should resolve the issue.
I'm trying to filter my GitHub issues based on an OR filter of milestones. Specifically, I want to retrieve all issues that are in milestone X or milestone Y.
Things I've tried:
milestone:X,Y
milestone:"X","Y"
milestone:X milestone:Y
-no:milestone (i.e., show me issues that have any milestone, by excluding issues with no milestone)
I'm using GitHub Enterprise so don't have the option of installing additional products.
Edit: Per "Can I search github labels with logical operator OR?", searching labels by logical OR works (for issues), but the same syntax for milestones did nothing for me.
It is now possible to filter by multiple milestones; you just need to separate them with commas, e.g.:
milestone:"v1.0.0","v1.0.1","v1.0.2"
Ref: https://github.blog/changelog/2021-08-02-search-issues-by-label-using-logical-or
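For scripting, the same comma-separated OR syntax can be assembled programmatically. A minimal sketch in Python (the helper name `milestone_query` is hypothetical, not part of any GitHub API):

```python
def milestone_query(*milestones):
    """Build a GitHub issue-search query matching any of the given
    milestones (comma-separated quoted values act as a logical OR)."""
    joined = ",".join(f'"{m}"' for m in milestones)
    return f"is:issue milestone:{joined}"

print(milestone_query("v1.0.0", "v1.0.1", "v1.0.2"))
# is:issue milestone:"v1.0.0","v1.0.1","v1.0.2"
```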
I'm new to the ELK Stack and trying to set up a dashboard to analyze my Apache access logs. Setting up the environment and displaying data from my log files all worked. But it seems like Kibana is mistakenly using spaces (and, in another dashboard, colons and minuses) as separators.
The first two screenshots show that the information inside my attribute "server_node" are correct.
Sadly, this one shows that every space character is used as a separator. So instead of "Tomcat Website Prod 1" or "Tomcat Website Prod 2", as seen in server_node, there are too many entries, which falsifies my graph.
This is my widget setting. As mentioned, I'm new to ELK and hence don't have much knowledge of setting up good dashboards.
Does anyone have experience with setting up Kibana to analyze Apache access logs, and can you give me a hint on how to set up expressive dashboards, or a sample dashboard to use as a model?
Thanks for your help and time. Regards, Sebastian
The basic problem you are running into is that strings are analyzed by default -- which is what you want in a full-text search engine, but not in an analytics situation. You need to set the field to not_analyzed before loading the data.
If you are using logstash 1.3.1 or later to load your data, you should be able to change your field to server_node.raw (see http://www.elasticsearch.org/blog/logstash-1-3-1-released/):
Most folks, in this situation, sit and scratch their heads, right? I know I did the first time. I’m pretty certain “docs” and “centralized” aren’t valid paths on the logstash.net website! The problem here is that the pie chart is built from a terms facet. With the default text analyzer in elasticsearch, a path like “/docs/1.3.1/filters/” becomes 3 terms {docs, 1.3.1, filters}, so when we ask for a terms facet, we only get individual terms back!
Index templates to the rescue! The logstash index template we provide adds a “.raw” field to every field you index. These “.raw” fields are set by logstash as “not_analyzed” so that no analysis or tokenization takes place – our original value is used as-is! If we update our pie chart above to instead use the “request.raw” field, we get the following:
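For illustration, this is roughly what such a not_analyzed ".raw" sub-field looks like in an Elasticsearch 1.x-era mapping. This is a minimal sketch, not the full logstash template; the field name matches the question's server_node, and the multi-field layout is an assumption based on the standard template described above:

```json
{
  "mappings": {
    "_default_": {
      "properties": {
        "server_node": {
          "type": "string",
          "fields": {
            "raw": { "type": "string", "index": "not_analyzed" }
          }
        }
      }
    }
  }
}
```

With this in place, server_node stays searchable as full text, while server_node.raw keeps "Tomcat Website Prod 1" as a single term for aggregations and pie charts.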