Splitting the source code before passing to quality gates - SonarQube

I have a build where the Checkmarx scan takes more than four hours to scan the full source code. Is there any way to split the source code into three or four packages and scan them separately, so that we can run the scans in parallel and finish faster? If so, how can we split the source code into separate packages to send to the scan?

Currently, Checkmarx does not support linking results between separate source scans. If your code contains stand-alone components such as microservices, you can split your source code across several Checkmarx scans.
However, if you split your code into separate scans and there is a "flow" (a value passed between the split parts of the source code) that exposes a vulnerability, Checkmarx won't recognize it.

Related

Lines of code per binary over time in Golang

I have a single GitHub repository containing the code for multiple binaries. How can we find the total lines of code per binary? We also need to see the lines of code per binary over a period of time.
For example, in the repository structure below:
cmd/service1
cmd/service2
pkg/service1
pkg/service2
We need to find the lines of code per service listed above.
Are there any tools available for this?
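One rough sketch of how this could be measured (assuming the cloc tool, which accepts directory arguments, and assuming each service's code lives only under its own cmd/ and pkg/ directories; the paths mirror the layout above):
cloc cmd/service1 pkg/service1
cloc cmd/service2 pkg/service2
For numbers over time, the same commands could be re-run against older revisions, e.g. after git checkout <tag-or-commit>.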

Are logs included when creating the dot file in GStreamer?

I am trying to understand how the dot file is created from GStreamer logs.
When I generated the GStreamer logs with GST_DEBUG=4, it produced a huge number of log lines.
At the same time, when I check the dot file generated by GStreamer, it contains specific information about the pipeline creation, not the log information from after the pipeline is created (playing, paused, seeking, and so on).
I have some questions:
What information does the dot file contain compared to the complete log file?
If not all the logs are included in the dot file, how can we debug that log information using the dot graph (with tools like Graphviz)?
The dot file is a graphical representation of your complete pipeline: the interconnection of the different elements in the pipeline, along with information about the caps negotiation. For example, when your pipeline grows too large and you need to see how the elements are connected and how data flows between them, dot files prove useful. Follow this link.
With GST_DEBUG=4, all the logs, warnings, and errors of the different elements are output. This is particularly useful when you want to understand, at a lower level, what is going on inside the elements as data flows along the pipeline. You can get information about events, pads, buffers, etc. Follow this link.
To get more information about a specific element you could also use the following:
GST_DEBUG=<element_name>:4
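A dot snapshot can also be requested directly from application code with the GST_DEBUG_BIN_TO_DOT_FILE macro. A minimal sketch (assuming a pipeline you have already built, and GST_DEBUG_DUMP_DOT_DIR exported in the environment, since the macro writes nothing without it):

#include <gst/gst.h>

/* Dumps the current topology of the pipeline (elements, pads, negotiated
   caps) to <GST_DEBUG_DUMP_DOT_DIR>/my-pipeline.dot. It is a structural
   snapshot only; the GST_DEBUG log stream is not included. */
static void
dump_pipeline_graph (GstElement *pipeline)
{
  GST_DEBUG_BIN_TO_DOT_FILE (GST_BIN (pipeline),
      GST_DEBUG_GRAPH_SHOW_ALL, "my-pipeline");
}

This makes the division of labour clear: the dot file answers "what does the pipeline look like right now", while the GST_DEBUG log answers "what happened over time".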

How to implement the equivalent of the Aggregator EIP in NiFi

I'm very experienced with Apache Camel and EIPs and am struggling to understand how to implement the equivalents in NiFi. I understand that NiFi uses a different paradigm (flow-based programming), but I don't think what I'm trying to do is unreasonable.
In a nutshell, I want the contents of each file to be sent to many REST services, and I want to aggregate the responses into a single document which will be stored in Elasticsearch. I might also do some further processing and cleanup to improve what is stored (but this isn't my immediate issue).
The screenshot is a quick mock-up of what I'm trying to achieve, but I don't understand enough about NiFi to know how to implement this pattern correctly.
If you are going to take a single piece of data and then fork to multiple parts of the flow and then converge back, there needs to be a way for MergeContent to know which pieces go together.
There are generally two ways this can be done...
The first is using MergeContent in "defragment mode". Think of this as reversing a split operation that was performed by one of the split processors like SplitText. For example, you split a file of 100 lines into 100 flow files of 1 line each, then do some stuff to each one, then want to converge back. The split processors produce a standard set of split attributes (described in the docs of the processors) and the defragment mode knows how to bin the splits accordingly and merge them back together. This probably doesn't apply to your example since you didn't start with a split processor.
The second approach is the "Correlation Attribute" in MergeContent. This tells MergeContent to merge together only the flow files that have the same value for the specified attribute. In your example, when a file gets picked up by GetFile and sent to 3 InvokeHTTP processors, 3 flow files are created, and they should all have their "filename" attribute set to the name of the file picked up from disk. So telling MergeContent to correlate on filename should do the trick, probably together with setting the minimum and maximum number of entries to the number you expect (3), and a maximum bin age in case one of them fails or hangs.
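As a rough sketch, the relevant MergeContent properties for that second approach would look something like the following (values are illustrative; choose entry counts and a bin age that match your flow):

Merge Strategy: Bin-Packing Algorithm
Correlation Attribute Name: filename
Minimum Number of Entries: 3
Maximum Number of Entries: 3
Max Bin Age: 5 min

The Max Bin Age acts as the safety valve: if one of the three InvokeHTTP calls fails or hangs, the bin is still flushed after that time rather than waiting forever.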

Unaccounted-for lines of code in Coverity Scan

My project has over 150k lines of code according to Coverity Scan, while cloc reports 30k (which is a lot more reasonable).
I am trying to figure out where those LOCs come from, but I am failing. How do I get Coverity Scan to report the actual lines of code, or at least report where they come from?
By default, the LOC count includes the system headers pulled in by your application. You may be able to configure component maps to filter these out if it matters enough to you.
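As a rough illustration only (component maps are configured in Coverity Connect and the exact screens vary by version; these path regexes are merely examples), a "System headers" component could be defined from patterns such as:

/usr/include/.*
.*/SDKs/.*/usr/include/.*

and then excluded from the reports so the LOC figure reflects only your own sources.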

Initializing large arrays efficiently in Xcode

I need to store a large number of different kinds of confidential data in my project.
The data can be represented as encoded NSStrings. I would rather initialize this in code than read it from a file, since that seems more secure.
So I would need about 100k lines like this:
[myData addObject:@"String"];
or like this:
myData[n++] = @"String";
Putting these lines in Xcode causes compile time to increase dramatically, up to several hours (by the way, in Eclipse it takes a fraction of a second to compile 100k lines like this).
What would be feasible secure alternatives?
(please do not suggest reading from a file since this makes the data much easier to crack)
Strings in your code can be readily dumped with tools like strings.
Anyway, if you want to incorporate a data file directly into the executable, you can do that with the -sectcreate linker option. Add something like -Wl,-sectcreate,MYSEG,MYSECT,path to the Other Linker Flags build setting. In your code, you can use getsectdata() to access that data section.
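A minimal sketch of the read side (this uses getsectiondata() from <mach-o/getsect.h> rather than plain getsectdata(), because it accounts for the ASLR slide of the image; the segment and section names must match what was passed to -sectcreate):

#include <mach-o/getsect.h>
#include <mach-o/ldsyms.h>
#include <stdint.h>

/* Returns a pointer to the raw bytes embedded in MYSEG,MYSECT and stores
   their length in *size; returns NULL if the section is not present. */
static const uint8_t *
embedded_bytes (unsigned long *size)
{
  return getsectiondata (&_mh_execute_header, "MYSEG", "MYSECT", size);
}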
However, you must not consider any of the data that you actually deliver to the end user, whether in code or resource files, as "confidential". It isn't and can never be.
I would put the strings in a plist file and read it into an NSArray at run time. For security, encrypt the file.
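A minimal sketch of that alternative (assuming an illustrative resource named Strings.plist bundled with the app; if the file on disk is encrypted, decrypt its bytes in memory first and hand them to NSPropertyListSerialization instead of reading the file directly):

// Loads the bundled plist into an array of NSStrings at run time.
NSString *path = [[NSBundle mainBundle] pathForResource:@"Strings" ofType:@"plist"];
NSArray *myData = [NSArray arrayWithContentsOfFile:path];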
