How to get the JIRA ticket number using Jenkins Declarative script

How do I get the JIRA ticket number (for example CICD-34) using a Jenkins Pipeline script?
The following command gives complete info about the ticket, but how do we get just the ID and store it in a variable?
def issue = jiraJqlSearch jql: 'PROJECT = CICD AND description~"New JIRA Created from Jenkins through Declarative PL script"', site: 'MyLocalJira'
echo issue.data.toString()

I had the same problem and found a solution.
I'll share my answer with you :)
def testExecutionSearch = jiraJqlSearch jql: "project=${project} and issuetype = 'Test Execution' and summary ~ '${summary}'", site: 'myjira', failOnError: true
if (testExecutionSearch != null) {
    // Get all issues
    def issues = testExecutionSearch.data.issues
    // Get the key of the first result
    def key = issues[0].key
}
Here I take the first element of my JQL search result (def key = issues[0].key), but you can choose whichever element you want :)
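Applied to the JQL from the question, a minimal sketch could look like this (the site name 'MyLocalJira' and the JQL string are taken from the original snippet; adjust failOnError and which field you keep as needed):
def search = jiraJqlSearch jql: 'PROJECT = CICD AND description~"New JIRA Created from Jenkins through Declarative PL script"', site: 'MyLocalJira', failOnError: true
def ticketId = null
if (search != null && search.data.issues) {
    // Each element of data.issues carries the issue key, e.g. CICD-34
    ticketId = search.data.issues[0].key
}
echo "JIRA ticket: ${ticketId}"
The ticketId variable can then be reused in later steps of the same script block.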

Related

Save Google Cloud Speech API operation (job) object to retrieve results later

I'm struggling to use the Google Cloud Speech API with the Ruby client (v0.22.2).
I can execute long running jobs and can get results if I use
job.wait_until_done!
but this locks up a server for what can be a long period of time.
According to the API docs, all I really need is the operation name (id).
Is there any way of creating a job object from the operation name and retrieving it that way?
I can't seem to create a functional new job object that uses the id from #grpc_op.
What I want to do is something like:
speech = Google::Cloud::Speech.new(auth_credentials)
job = speech.recognize_job file, options
saved_job = job.to_json #Or some element of that object such that I can retrieve it.
Later, I want to do something like....
job_object = Google::Cloud::Speech::Job.new(saved_job)
job.reload!
job.done?
job.results
Really hoping that makes sense to somebody.
I'm struggling quite a bit with Google's Ruby clients because everything seems to be wrapped in objects that are much more complex than what's needed to use the API.
Is there some trick that I'm missing here?
You can monkey-patch this functionality to the version you are using, but I would advise upgrading to google-cloud-speech 0.24.0 or later. With those more current versions you can use Operation#id and Project#operation to accomplish this.
require "google/cloud/speech"
speech = Google::Cloud::Speech.new
audio = speech.audio "path/to/audio.raw",
                     encoding: :linear16,
                     language: "en-US",
                     sample_rate: 16000
op = audio.process
# get the operation's id
id = op.id #=> "1234567890"
# construct a new operation object from the id
op2 = speech.operation id
# verify the jobs are the same
op.id == op2.id #=> true
op2.done? #=> false
op2.wait_until_done!
op2.done? #=> true
results = op2.results
Update: Since you can't upgrade, you can monkey-patch this functionality into an older version using the workaround described in GoogleCloudPlatform/google-cloud-ruby#1214:
require "google/cloud/speech"
# Add monkey-patches
module Google
Module Cloud
Module Speech
class Job
def id
#grpc.name
end
end
class Project
def job id
Job.from_grpc(OpenStruct.new(name: id), speech.service).refresh!
end
end
end
end
end
# Use the new monkey-patched methods
speech = Google::Cloud::Speech.new
audio = speech.audio "path/to/audio.raw",
                     encoding: :linear16,
                     language: "en-US",
                     sample_rate: 16000
job = audio.recognize_job
# get the job's id
id = job.id #=> "1234567890"
# construct a new operation object from the id
job2 = speech.job id
# verify the jobs are the same
job.id == job2.id #=> true
job2.done? #=> false
job2.wait_until_done!
job2.done? #=> true
results = job2.results
OK, I have a very ugly way of solving the issue.
Get the id of the Operation from the job object
operation_id = job.grpc.grpc_op.name
Get an access token to manually use the REST API:
json_key_io = StringIO.new(ENV["GOOGLE_CLOUD_SPEECH_JSON_KEY"])
authorisation = Google::Auth::ServiceAccountCredentials.make_creds(
  json_key_io: json_key_io,
  scope: "https://www.googleapis.com/auth/cloud-platform"
)
token = authorisation.fetch_access_token!
Make an API call to retrieve the operation details.
Once the results are in, the response will contain a "done" => true field along with the results; if "done" => true isn't there yet, you'll have to poll again later until it is (a rough polling sketch follows the snippet below).
HTTParty.get(
  "https://speech.googleapis.com/v1/operations/#{operation_id}",
  headers: { "Authorization" => "Bearer #{token['access_token']}" }
)
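For completeness, here is a rough sketch of what that polling could look like, reusing operation_id and token from the steps above; the "done" and "response" field names follow the long-running-operations REST format, and the sleep interval is arbitrary:
require "httparty"
response = nil
loop do
  response = HTTParty.get(
    "https://speech.googleapis.com/v1/operations/#{operation_id}",
    headers: { "Authorization" => "Bearer #{token['access_token']}" }
  )
  # The operation resource reports done: true once results are available
  break if response.parsed_response["done"]
  sleep 10
end
results = response.parsed_response["response"]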
There must be a better way of doing that. It seems such an obvious use case for the Speech API.
Anyone from Google in the house who can explain a much simpler/cleaner way of doing it?

Bintray VCS Tagging

So I have a Bintray repository, but I'm having difficulty uploading to it from Gradle. What I mean is that version management is not working how I want it: currently, for every single .jar I upload, I have to increment the version in my configuration and dependencies. I know this is not how it's supposed to be done. My question is: how do I automate/implement VCS tagging with Bintray? Right now my configuration for uploading looks like this (using the Bintray plugin):
bintray {
    user = "$bintrayUser"
    key = "$bintrayKey"
    publications = ['maven']
    dryRun = false
    publish = true
    pkg {
        repo = "$targetBintrayRepo"
        name = "$targetBintrayPackage"
        desc = ''
        websiteUrl = "$programWebsiteUrl"
        issueTrackerUrl = "$programIssueUrl"
        vcsUrl = "$programVcsUrl"
        licenses = ["$programLicense"]
        labels = []
        publicDownloadNumbers = true
        version {
            name = "$programVersion"
            released = new java.util.Date()
            vcsTag = "$programVcsTag"
        }
    }
}
And my variables are:
def programVersion = '0'
def programVcsTag = '0.0.0'
def programGroup = 'com.gmail.socraticphoenix'
def targetBintrayRepo = 'Main'
def targetBintrayPackage = 'java-api'
def programLicense = 'MIT'
def programWebsiteUrl = 'https://github.com/meguy26/PlasmaAPI'
def programIssueUrl = 'https://github.com/meguy26/PlasmaAPI/issues'
def programVcsUrl = 'https://github.com/meguy26/PlasmaAPI.git'
Yet no tags appear here, and running publish again (even with a different VCS tag) results in a "version already exists" error: (Could not upload to 'https://api.bintray.com/content/meguy26/Main/java-api/0/com/gmail/socraticphoenix/PlasmaAPI/0/PlasmaAPI-0.jar': HTTP/1.1 409 Conflict [message:Unable to upload files: An artifact with the path 'com/gmail/socraticphoenix/PlasmaAPI/0/PlasmaAPI-0.jar' already exists])
Sorry if I'm being noobish, but I don't understand why it's not working; I filled out all the appropriate variables (I thought).
Bintray does not support multiple tags per version. The version is a unique string. If you want to release something from the same version with different tags, compose the Bintray version string from your program version and tag, e.g. "$programVersion-$programVcsTag", as in the sketch below.
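A minimal sketch of that, reusing the version block and variable names from the question (everything else in the pkg block stays the same):
version {
    // Composed version name, e.g. "0-0.0.0", so each VCS tag maps to a unique Bintray version
    name = "$programVersion-$programVcsTag"
    released = new java.util.Date()
    vcsTag = "$programVcsTag"
}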

How to get the number of forks of a GitHub repo with the GitHub API?

I use the GitHub API v3 to get the forks count for a repository:
GET /repos/:owner/:repo/forks
The request brings me only 30 results, even if the repository contains more. I googled a little and found that, due to a memory restriction, the API returns only 30 results per page, and if I want the next results I have to specify the page number.
But I don't need all this information; all I need is the number of forks.
Is there any way to get only the number of forks?
If I start looping page by page, my script risks crashing if a repository has thousands of results.
You can try and use a search query.
For instance, for my repo VonC/b2d, I would use:
https://api.github.com/search/repositories?q=user%3AVonC+repo%3Ab2d+b2d
The JSON answer gives me "forks_count": 5.
Here is one with more than 4000 forks (consider only the first result, meaning the one whose "full_name" is actually "strongloop/express")
https://api.github.com/search/repositories?q=user%3Astrongloop+repo%3Aexpress+express
"forks_count": 4114,
I had a job where I needed to get all forks of a GitHub project as git remotes.
I wrote a simple Python script: https://gist.github.com/urpylka/9a404991b28aeff006a34fb64da12de4
At the core of the program is a recursive function for getting the forks of a fork, and I hit the same problem (the GitHub API was returning only 30 items).
I solved it by incrementing a ?page= parameter and adding a check for an empty response from the server (a usage sketch follows the function below).
import requests

def get_fork(username, repo, forks, auth=None):
    page = 1
    while 1:
        r = None
        request = "https://api.github.com/repos/{}/{}/forks?page={}".format(username, repo, page)
        if auth is None: r = requests.get(request)
        else: r = requests.get(request, auth=(auth['login'], auth['secret']))
        j = r.json()
        r.close()
        if 'message' in j:
            print("username: {}, repo: {}".format(username, repo))
            print(j['message'] + " " + j['documentation_url'])
            if str(j['message']) == "Not Found": break
            else: exit(1)
        if len(j) == 0: break
        else: page += 1
        for item in j:
            forks.append({'user': item['owner']['login'], 'repo': item['name']})
            if auth is None:
                get_fork(item['owner']['login'], item['name'], forks)
            else:
                get_fork(item['owner']['login'], item['name'], forks, auth)
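A minimal usage sketch (the repository is just the example used earlier in this thread; any owner/repo pair works, and for a popular repo the recursion can take a very long time):
# Recursively collect every fork into a flat list of {'user', 'repo'} dicts
forks = []
get_fork("strongloop", "express", forks)
print("{} forks found".format(len(forks)))
for f in forks:
    print("https://github.com/{}/{}.git".format(f['user'], f['repo']))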

Configuring the Grails Spring Security LDAP plugin

Here is a part of my Perl CGI script (which is working):
use Net::LDAP;
use Net::LDAP::Entry;
...
$edn = "DC=xyz,DC=com";
$quser ="(&(objectClass=user)(cn=$username))";
$ad = Net::LDAP->new("ip_address...");
$ldap_msg=$ad->bind("$username\#xyz.com", password=>$password);
my $result = $ad->search( base=>$edn,
scope=>"sub",
filter=>$quser);
my $entry;
my $myname;
my $emailad;
my #entries = $result->entries;
foreach $entry (#entries) {
$myname = $entry->get_value("givenName");
$emailad = $entry->get_value("mail");
}
So basically there is no admin/manager account for AD; user credentials are used for binding. I need to implement the same thing in Grails.
+ Is there a way to configure the plugin to search several ADs? I know I can add more LDAP IPs in context.server, but for each server I need a different search base...
++ I don't want to use my DB, just AD. The user logs in through LDAP, I get their email and use it for another LDAP query, but that will probably be another topic :)
Anyway the code so far is:
grails.plugin.springsecurity.ldap.context.managerDn = ''
grails.plugin.springsecurity.ldap.context.managerPassword = ''
grails.plugin.springsecurity.ldap.context.server = 'ldap://address:389'
grails.plugin.springsecurity.ldap.authorities.ignorePartialResultException = true
grails.plugin.springsecurity.ldap.search.base = 'DC=xyz,DC=com'
grails.plugin.springsecurity.ldap.authenticator.useBind=true
grails.plugin.springsecurity.ldap.authorities.retrieveDatabaseRoles = false
grails.plugin.springsecurity.ldap.search.filter="sAMAccountName={0}"
grails.plugin.springsecurity.ldap.search.searchSubtree = true
grails.plugin.springsecurity.ldap.auth.hideUserNotFoundExceptions = false
grails.plugin.springsecurity.ldap.search.attributesToReturn = ['mail', 'givenName']
grails.plugin.springsecurity.providerNames = ['ldapAuthProvider', 'anonymousAuthenticationProvider']
grails.plugin.springsecurity.ldap.useRememberMe = false
grails.plugin.springsecurity.ldap.authorities.retrieveGroupRoles = false
grails.plugin.springsecurity.ldap.authorities.groupSearchBase ='DC=xyz,DC=com'
grails.plugin.springsecurity.ldap.authorities.groupSearchFilter = 'member={0}'
And the error is: [LDAP: error code 1 - 000004DC: LdapErr: DSID-0C0906E8, comment: In order to perform this operation a successful bind must be completed on the connection., data 0, v1db1]
And it's the same error for any user/pass I try :/
Heeeeelp! :)
The most important thing with Grails and AD is to use ActiveDirectoryLdapAuthenticationProvider rather than LdapAuthenticationProvider, as it will save a world of pain. To do this, just make the following changes:
In resources.groovy:
// Domain 1
ldapAuthProvider1(ActiveDirectoryLdapAuthenticationProvider,
        "mydomain.com",
        "ldap://mydomain.com/"
)
// Domain 2
ldapAuthProvider2(ActiveDirectoryLdapAuthenticationProvider,
        "mydomain2.com",
        "ldap://mydomain2.com/"
)
In Config.groovy:
grails.plugin.springsecurity.providerNames = ['ldapAuthProvider1', 'ldapAuthProvider2']
This is all the code you need. You can pretty much remove all other grails.plugin.springsecurity.ldap.* settings in Config.groovy as they don't apply to this AD setup.
Documentation:
http://docs.spring.io/spring-security/site/docs/3.1.x/reference/springsecurity-single.html#ldap-active-directory

Reporting in Microsoft AdCenter (Sandbox) - RoR

I am using the AdCenter API in my RoR application. I searched a lot on the Internet for an example of Ruby code that fetches an account performance report through the API, but didn't find one. Now I have written the following code, but submitGenerateReport returns nil.
Here is my code.
report_request = AccountPerformanceReportRequest.new
start_date = 10.days.ago.strftime("%Y-%m-%d")
end_date = Time.zone.now.strftime("%Y-%m-%d")
scope = AccountReportScope.new
scope.accountIds = [AppConfig.adcenter['accountId']]
# Specify the format of the report.
report_request.format = 'Xml'
report_request.returnOnlyCompleteData = false
report_request.language = 'English'
report_request.reportName = "My Account Report"
report_request.aggregation = 'Daily'
report_request.time = ReportTime.new(start_date, end_date)
report_request.columns = %w[ AccountName AccountName GregorianDate CurrentMaxCpc Impressions Clicks ]
report_request.scope = scope
report_request.filter = nil
report = SubmitGenerateReportRequest.new(report_request)
# Returns nil
puts response = svc.submitGenerateReport(report)
I have campaigns, ad groups, and ads in the specified account.
Can anyone please point out where I am going wrong, or give an example of AdCenter reporting through the API using Ruby?
Thanks in advance
Got the solution...
The time was not in the right format, so the start time and end time were unrecognisable in the SOAP request. SOAP debugging helped me a lot.
