I have the chance to influence the log format of a logging solution we are about to set up for an existing backend system. It is not OpenTelemetry-based and may never be, but at the moment I can still make suggestions and would like to make sure the logs are written in a compatible format. Is there some kind of overview or definition I can use as a base? Some kind of list of mandatory fields that need to be filled?
I see you found the data model (https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/logs/data-model.md) in the specification - keep in mind, logging support for OpenTelemetry is currently not stable and so this may change. Generally, I suspect that if you use something like the Elastic Common Schema (https://www.elastic.co/guide/en/ecs/master/ecs-log.html) then you should be broadly compatible going forward.
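For a concrete picture, here is a minimal ECS-shaped log record. The field names come from the ECS reference above; the service and trace values are made-up placeholders, and trace.id maps naturally onto OpenTelemetry's trace context:
{
  "@timestamp": "2021-04-06T12:34:56.789Z",
  "log": { "level": "error", "logger": "orders" },
  "message": "Order could not be processed",
  "service": { "name": "order-backend" },
  "ecs": { "version": "1.6.0" },
  "trace": { "id": "0af7651916cd43dd8448eb211c80319c" }
}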
From what I've read this should be possible due to the modular nature of Laravel, but I need assurance from people with more Laravel experience:
I have a very large (500k loc) ancient PHP app. So ancient that some parts of it date from PHP3 times (ca. 2000, PHP4 was released already but PHP3 was used for backwards compatibility reasons).
Refactoring this is a huge project, and the only way to reasonably do it is in parts. Replace this part, then that part, etc. Fortunately, the "ancient" part comes in handy as no framework was used and basically every page is its own script, with a few central libraries for shared functionality.
Is it possible to spin up a Laravel app that can route new/refactored pages to the new site and everything else (wildcard if possible) to the ancient code? All data is stored in a database, so there will be no sync issues between them except for user authentication and session info.
Is it possible to get Eloquent running on an ancient DB design, or to refactor the DB in such a way that it works for both? There was a previous attempt to move the DB interface to Doctrine which, from what I know, was aborted after partial success (i.e. many DB objects are accessed through Doctrine, but there is also a lot of straight SQL code in parallel).
It's a huge mess, but the software in question is still being used and successfully so and a previous attempt to replace it with something else has already failed.
additional details:
Thanks @maiorano84 for the good questions:
First, does your legacy application have tests?
Negative on that. PHPUnit was released in 2004. At that time, this app had already been in production for four years.
Second, are you able to get it to work on a more recent version of PHP?
Yes, the current codebase is running on PHP 5.6.33 - it has been maintained throughout the years, and a major update was made on the transition between PHP 4 and PHP 5.
If it runs on PHP 5.3+, you can use Instant Refactoring
I'm an author of Rector, a tool that can migrate huge amounts of PHP code in a few seconds, e.g. upgrade PHP 5.3 to PHP 7.4, upgrade Symfony 2.8 to 4.2, or migrate from Nette to Symfony (read the case study).
1. Install Rector
composer require rector/rector --dev
2. Create rector.php with PHP sets, since you have old PHP code
<?php
// rector.php
use Rector\Core\Configuration\Option;
use Rector\Set\ValueObject\SetList;
use Symfony\Component\DependencyInjection\Loader\Configurator\ContainerConfigurator;

return static function (ContainerConfigurator $containerConfigurator): void {
    $parameters = $containerConfigurator->parameters();

    $parameters->set(Option::SETS, [
        SetList::PHP_52,
        // use one set at a time, as sets build on each other
        // SetList::PHP_53,
        // SetList::PHP_54,
    ]);
};
3. Then run Rector on your code (e.g. /src directory)
vendor/bin/rector process src
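If you want to review the changes before applying them, Rector also has a dry-run mode that only prints the diffs without touching the files:
vendor/bin/rector process src --dry-run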
You can also write your own rules, so you can convert the code to a Laravel/MVC approach. The idea is to write one rule that converts, e.g., 100+ files to controllers.
Read more in the GitHub repository.
Is it possible? Yes.
Is it going to take a short amount of time? Absolutely not.
With any kind of legacy codebase, you're going to need to take the time to figure out all of its moving parts and determine which portions need to change before the code can even work on a modern platform.
The most recent version of Laravel requires PHP 7.1.3, so even attempting to just dump the entire codebase into a Laravel application is very likely going to result in failure.
First, does your legacy application have tests? These can be unit tests, integration tests, or functional tests. If not, and you want to be able to modernize your application without breaking things in the future, then you're going to want to write tests to ensure that nothing breaks as you begin upgrading. This alone can take a long time, especially with a codebase that makes it difficult to even test in the first place. Having a fully tested application will allow you to see which tests begin to fail as you start reworking your application, so this information will be extremely valuable.
Second, are you able to get it to work on a more recent version of PHP? If this code is already in production, then you're going to need to use some hardware virtualization through Vagrant, or better yet, containerization through Docker to get a local installation up and running without breaking your production code.
Once that's ready, you should be able to begin refactoring. Taking entire pages of code and dumping them right into a Laravel application is not going to work straight out of the gate. You're going to want to start smaller: find all of your moving parts, figure out what each one is responsible for, and encapsulate them in classes with the appropriate methods.
Use Composer's PSR-4 Autoloader to help remove all of those extra include and require statements and load your new classes throughout the application.
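As a sketch, the relevant composer.json section could look like the following; the App namespace and src/ directory are placeholder choices, not something your codebase already has:
{
    "autoload": {
        "psr-4": {
            "App\\": "src/"
        }
    }
}
After editing it, run composer dump-autoload; any class under src/ is then loadable without an explicit require.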
Use a decent Router to change all of your URLs into SEO-friendly paths and have a clearly defined entrypoint for all requests.
Move all of your business logic out of the webroot: create a /public folder in which you have just your index.php entrypoint and all public-facing assets (images, CSS, JavaScript, etc.). Since all requests are being routed over to this file by this point, you should be able to process the request and return your response.
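A minimal front-controller sketch, assuming nikic/fast-route as the router - any comparable router works, and the controller class is a placeholder:
<?php
// public/index.php - single entrypoint for all requests
require __DIR__ . '/../vendor/autoload.php';

$dispatcher = FastRoute\simpleDispatcher(function (FastRoute\RouteCollector $r) {
    // refactored pages get explicit routes
    $r->addRoute('GET', '/users/{id:\d+}', [App\Controller\UserController::class, 'show']);
});

$uri = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$routeInfo = $dispatcher->dispatch($_SERVER['REQUEST_METHOD'], $uri);

if ($routeInfo[0] === FastRoute\Dispatcher::FOUND) {
    [$class, $method] = $routeInfo[1];
    echo (new $class())->$method($routeInfo[2]);   // $routeInfo[2] holds the URL parameters
} else {
    http_response_code(404);
}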
Once you get to a point where you've actually gotten the application into a system of well-defined components and modules, then migrating over to Laravel - or any other well-established framework - should be much easier.
This is going to take you a long time if you plan on doing it right. Hopefully this helps, and best of luck to you.
Refactoring is of course possible, but I have some doubts about whether it is doable partially in this case. By partially, I mean that parts of the app would sometimes run on old code and sometimes on new code in production.
I did this once for an old project, but not one as ancient and big as yours.
In my case it was custom app (without any framework) running on php 5.3 and I was converting it to Laravel 4.2.
I must admit that there are some real challenges on the path.
This is only the tip of the iceberg, but I'll try to name a few of them, from what I remember:
PHP version compatibility, or rather incompatibility in this case. You can rewrite the existing code to run on the latest PHP 7 versions, but that might be a lot of work for code that will not be used in the end.
Routing and asset handling - you need to check whether you can modify the URLs so they fit into Laravel's routing engine. It may be really hard, especially if the old app is not using Laravel-style paths and you don't want to break Google indexing, for example. I have also seen systems with custom URL generators which were then heavily used in views; trying to do a perfect match for these routes would be a nightmare. (A catch-all routing sketch follows this list.)
Authentication. Changing auth must be done in one step, because adapting Laravel to properly work with sessions from the old system (although doable) will clutter the new code.
Database. You will be lucky if the database is well designed, but I don't think it will be even close to Laravel's Eloquent conventions. Although you can run it on Laravel without any DB schema modifications, your new code will get bloated with the workarounds. This and other things can be refactored again in the complete system, but it's another load of work.
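To make the routing point above concrete, here is a hedged sketch of a catch-all route that forwards anything Laravel doesn't know about to the old per-page scripts. The legacy/ directory and path mapping are placeholders, and on recent Laravel versions Route::fallback() serves the same purpose:
// routes/web.php - register this last, so explicitly defined routes win
Route::any('{path}', function (string $path) {
    $script = base_path('legacy/' . $path);   // naive mapping, needs path sanitising
    abort_unless(is_file($script), 404);

    ob_start();
    require $script;                          // run the old script and capture its output
    return response(ob_get_clean());
})->where('path', '.*');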
Considering the number of suboptimal workarounds required, it will probably be better to rebuild from scratch if you want a properly designed app built with best practices.
Hope it helps a bit...
I am trying to find the new best practice for clearing statistics on an Ehcache Cache. Previously, you could call clearStatistics() and reset your hit/miss statistics in real time.
Somewhere between Ehcache 2.6 and 2.10, this went away. However, instead of a release where the method was deprecated with hints as to the new philosophy or suggested implementation strategy, it is simply gone from the API documentation: it is not shown in http://www.ehcache.org/apidocs/2.10/deprecated-list.html#method nor in http://www.ehcache.org/apidocs/2.9/deprecated-list.html#method, and any previous versions are lost to refactoring on the site.
Cache.clearStatistics has been removed in Ehcache 2.7.0. This release included a large rework of the Ehcache statistics to make them low overhead and ensure you paid the price only for statistics you queried and only for a limited period of time.
You can't clear statistics anymore inside Ehcache. If you need that feature, you have to use an external object that can handle the baselining for your application.
You can find the API documentation for each <major>.<minor> on http://www.ehcache.org/documentation/. And for the most recent versions, you can navigate to different fix versions even though there is no explicit link.
For example, see http://www.ehcache.org/apidocs/2.9.1/index.html
Disclaimer: I am working on Ehcache
I have installed ISIM 6.0 with Fix Pack 6. After installing the fix pack I was able to view the request for any activity performed in ISIM, but now the View All Requests tab doesn't show the list of requests, even though the transactions are going through fine. Please advise.
That usually means that the ISIM web app cannot read the requests from the database.
I would enable the DEBUG_MAX level in the enRoleLogging.properties file in order to see the exact request made to the database, and then try to perform the same SQL query manually.
I believe the component needed is:
logger.trace.com.ibm.itim.util.level=DEBUG_MAX
But you can also try setting the level to DEBUG_MAX for all components.
Then check your trace.log file(s). The location depends on your installation; more details can be found here: https://www-01.ibm.com/support/knowledgecenter/mobile/#!/SSRMWJ_6.0.0/com.ibm.isim.doc_6.0/trouble/cpt/cpt_ic_trouble_itimop_logs.htm
Also check the log files of your ISIM DB.
Hint: the only time I have come across this exact issue was after an upgrade where the tenant got changed from dc=com to DC=COM. It's a common misconception that the tenant is case-insensitive, as we tend to associate it with LDAP, but it affects SQL queries too (and those are case-sensitive).
ilektrojohn's answer should help you, but I have faced this a couple of times in the early stages, so I'm posting this anyway. Usually it's a slight mistake: we forget to update the Time Interval or Request Type fields. Set them to a wider range, like this week or this month. Also expand the More search criteria section and remove any added filters that might be restricting the search results.
TL;DR: Basically, what I am looking for is a way to get a list of all Sonar rules that have 0 issues raised. I could then move all of those to BLOCKER and protect myself from someone introducing those issues in the future.
My company is using sonar and static analysis to help guide refactoring and development of a sizable legacy codebase (~750K LOC). We have had a lot of success by lowering the severity of most rules and then choosing a smaller set of rules to promote up to blocker or critical as we find real issues in the code. This has kept the number of issues we are trying to address at a time manageable so we can actually feel like we are making progress and not drown in the noise of legacy issues.
In particular, when we have been bitten by a field or QA issue that Sonar could have detected, we turn that rule up to a BLOCKER and fix every instance of it. These blockers break the build, and we are now assured that we won't add a new instance of the same issue again. This has worked great and has kept a number of what would be nasty bugs from slipping through.
The big problem with that methodology is that we need to encounter each of those classes of mistake at least once in the codebase before we learn that it is important and should be made a blocker. Any issues we haven't already encountered will still be at their default level; I'd like to move all of them up to BLOCKER now so we notice the day they are added.
Edit: Currently we are using 3.7.3 but we are about to upgrade to 5.X.
There are two ways to do this:
1. The difficult way is to query the SonarQube database directly. You have to understand the tables and write a SQL query suited to whichever DB backs your SonarQube. You can find some references here or here.
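As an illustration only - the table and column names below are assumptions from memory and differ across SonarQube versions, so verify them against your actual schema - a query for rules that never raised an issue might look like:
SELECT r.plugin_name, r.plugin_rule_key
FROM rules r
LEFT JOIN issues i ON i.rule_id = r.id
WHERE i.id IS NULL;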
2. I have never tried your method, but it should work. You can use the SonarQube Web Service API. There is also a Web Service Java Client. References:
link1, link2, link3
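A rough sketch of the Web Service approach in PHP. The endpoint names exist in the SonarQube 5.x web API, but the server URL is a placeholder, authentication is omitted, and on large instances the rules facet may be truncated, so treat this as a starting point:
<?php
$base = 'https://sonar.example.com';   // placeholder server URL

// 1) Fetch the per-rule issue counts facet (ps=1: we only need the facet, not the issues).
$facets = json_decode(file_get_contents($base . '/api/issues/search?ps=1&facets=rules'), true);

$rulesWithIssues = [];
foreach ($facets['facets'] as $facet) {
    if ($facet['property'] === 'rules') {
        foreach ($facet['values'] as $value) {
            $rulesWithIssues[$value['val']] = true;   // rule key, e.g. "squid:S1068"
        }
    }
}

// 2) Fetch all rules known to the server (paging omitted for brevity; ps max is 500).
$all = json_decode(file_get_contents($base . '/api/rules/search?ps=500'), true);

// 3) Every rule that never appears in the issue facet is a candidate for BLOCKER.
foreach ($all['rules'] as $rule) {
    if (!isset($rulesWithIssues[$rule['key']])) {
        echo $rule['key'] . PHP_EOL;
    }
}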