I want to use GSAClient to access our company's Google Search Appliance and retrieve search results, but I can't get the configuration working at all.
The web UI is: http://kb.juniper.net
So, in order to make GSAClientDemo work, how should I set:
HOSTNAME
SETTING_FRONTEND
Do I have to ask the GSA admin for the settings?
// A compilable version of the demo. The imports follow the gsa-japi
// distribution (package net.sf.gsaapi); adjust them to your gsa-japi version.
import java.io.IOException;

import net.sf.gsaapi.GSAClient;
import net.sf.gsaapi.GSAQuery;
import net.sf.gsaapi.GSAQuery.GSAQueryTerm;

public class GSAClientDemo {
    // target GSA's hostname
    private static final String HOSTNAME = "kb.juniper.net";
    // query string to search for
    private static final String QUERY_STRING = "juno";
    // The value for the frontend configured for the GSA
    // (if you don't know this, ask the GSA admin for the correct value for your target GSA)
    private static final String SETTING_FRONTEND = "InfoCenter";

    public static void main(String[] args) throws IOException {
        GSAClient client = new GSAClient(HOSTNAME);
        GSAQuery query = new GSAQuery();
        // typical way to generate a query term
        GSAQueryTerm term = new GSAQueryTerm(QUERY_STRING);
        query.setQueryTerm(term);
        System.out.println("Searching for: " + query.getQueryString());
    }
}
Hostname is the GSA's domain name or IP address.
Frontend is not required.
A complete example is here:
http://gsa-japi.sourceforge.net/onemin-tut.html
Do I have to ask the GSA admin for the settings?
Yes. You need to check with your GSA admin for the hostname and frontend information; only they can tell you, unless you know how to use the GSA and are permitted to look up the information yourself.
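A quick way to sanity-check both values, assuming your appliance exposes the standard GSA /search endpoint (an assumption, not something stated in the question), is to request the raw search URL in a browser; the frontend is passed as the client parameter:
http://kb.juniper.net/search?q=juno&client=InfoCenter&output=xml_no_dtd
If that returns results, the HOSTNAME and SETTING_FRONTEND values are correct (some appliances also require a site=<collection> parameter).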
I am unable to access a public container using the C# SDK, even though I have enabled "Allow Blob public access" in the storage account configuration.
var fileSystemClient = new DataLakeFileSystemClient(new Uri("https://somestorageaccount.dfs.core.windows.net/public"), new DataLakeClientOptions());
var paths = fileSystemClient.GetPaths();
foreach (var path in paths)
{
Console.WriteLine(path);
}
This code throws the following exception:
Azure.RequestFailedException: 'Server failed to authenticate the
request. Make sure the value of Authorization header is formed
correctly including the signature.
Is there anything I can configure to make this work?
I tried in my environment and got the results below:
Initially, I created an ADLS Gen2 container with the public access level set to container level. When I then tried to access a file through the browser, I got the same error.
When accessing through the file system (dfs) endpoint, files kept in the storage account are not accessible anonymously: access has to be authorized even if the public access level is set. You are getting this error because you are attempting to access the resource without authorization.
If you need to access the files, you need to authorize with a SAS token.
I tried the file URL + SAS token in the browser, and I was able to access the file.
You can generate a SAS token in the portal by selecting the file and choosing Generate SAS.
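For illustration, such a URL has roughly the following shape (the account, file name, and token values here are made up):
https://somestorageaccount.dfs.core.windows.net/public/sample.txt?sv=...&sp=r&sig=...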
If you need to access the paths of Data Lake Gen2 in C#, you can use the StorageSharedKeyCredential method from this link:
string storageAccountName = StorageAccountName;
string storageAccountKey = StorageAccountKey;
Uri serviceUri = StorageAccountUri;

// Authenticate with the account key instead of relying on anonymous access.
StorageSharedKeyCredential sharedKeyCredential = new StorageSharedKeyCredential(storageAccountName, storageAccountKey);
DataLakeServiceClient serviceClient = new DataLakeServiceClient(serviceUri, sharedKeyCredential);

// Randomize(...) comes from the docs sample; for the scenario above you would
// pass your own file system name, e.g. "public".
DataLakeFileSystemClient filesystem = serviceClient.GetFileSystemClient(Randomize("sample-filesystem-list"));

List<string> names = new List<string>();
foreach (PathItem pathItem in filesystem.GetPaths())
{
    names.Add(pathItem.Name);
}
Reference:
How to get list of child files/directories having parent DataLakeDirectoryClient class instance (Stack Overflow, Java answer by Jim Xu).
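If you are working from Java instead, here is a minimal sketch along the same lines, assuming the azure-storage-file-datalake client library; the account name, key, and file system name are placeholders:

import com.azure.storage.common.StorageSharedKeyCredential;
import com.azure.storage.file.datalake.DataLakeFileSystemClient;
import com.azure.storage.file.datalake.DataLakeServiceClient;
import com.azure.storage.file.datalake.DataLakeServiceClientBuilder;
import com.azure.storage.file.datalake.models.PathItem;

public class ListPaths {
    public static void main(String[] args) {
        // Shared key authentication, as in the C# example above.
        StorageSharedKeyCredential credential =
                new StorageSharedKeyCredential("somestorageaccount", "<account-key>");
        DataLakeServiceClient serviceClient = new DataLakeServiceClientBuilder()
                .endpoint("https://somestorageaccount.dfs.core.windows.net")
                .credential(credential)
                .buildClient();
        DataLakeFileSystemClient fileSystem = serviceClient.getFileSystemClient("public");
        // listPaths() pages through all paths in the file system.
        for (PathItem pathItem : fileSystem.listPaths()) {
            System.out.println(pathItem.getName());
        }
    }
}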
Hello guys, I am trying to write a Java utility to download documents from the FileNet Content Engine to a local PC. Can anyone help me out?
You should read about the FileNet P8 CE API; you can start here:
You have to know that the FileNet Content Engine has two types of interfaces that can be used to connect to it: RMI (EJB) and SOAP. A command-line app like the one you are planning to write can connect only via SOAP (I am not sure that this is still true for the newest versions, but what is definitely true is that it is much easier to set up a SOAP connection than EJB), so you have to read the part of the documentation that explains how to establish a connection to your Content Engine this way.
On the link above, you can see that first of all you have to collect the required JARs for the SOAP connection: please check the "Required for a Content Engine Java API CEWS transport client" section for the file names.
After you collect them, you will need the SOAP WSDL URL and a proper user and password; the user has to have read-properties and read-content rights to the documents you would like to download. You also need to know the ObjectStore name and the identifier or the location of your documents.
Now we have to continue using the Setting Up a Thick Client Development Environment link (I opened it from the page above).
Here you have to scroll down to the "CEWS transport protocol (non-application-server dependent)" section.
Here you can see that you have to create a jaas.conf file with the following content:
FileNetP8WSI {
com.filenet.api.util.WSILoginModule required;
};
This file must be passed to the JVM via the following argument when you run the class we will create:
java -cp %CREATE_PROPER_CLASSPATH% -Djava.security.auth.login.config=jaas.conf DownloadClient
Now, in the top-right corner of the page, you can see links that describe what to do in order to get a connection, like "Getting Connection", "Retrieving an EntireNetwork Object", etc. I used those snippets to create the class below for you.
// Imports for the FileNet P8 CE Java API (standard com.filenet.api packages;
// the required JARs were collected in the step above).
import java.io.InputStream;
import java.util.Iterator;

import javax.security.auth.Subject;

import com.filenet.api.collection.ContentElementList;
import com.filenet.api.core.Connection;
import com.filenet.api.core.ContentTransfer;
import com.filenet.api.core.Document;
import com.filenet.api.core.Domain;
import com.filenet.api.core.Factory;
import com.filenet.api.core.ObjectStore;
import com.filenet.api.util.Id;
import com.filenet.api.util.UserContext;

public class DownloadClient {
    public static void main(String[] args) throws Exception {
        String uri = "http://filenetcehost:9080/wsi/FNCEWS40MTOM";
        String userId = "ceadmin";
        String password = "password";
        String osName = "Test";

        UserContext uc = UserContext.get();
        try {
            // Get the connection and default domain
            Connection conn = Factory.Connection.getConnection(uri);
            Domain domain = Factory.Domain.getInstance(conn, null);
            ObjectStore os = Factory.ObjectStore.fetchInstance(domain, osName, null);
            // The last value (the JAAS stanza name) must match the name of the
            // login module in jaas.conf.
            Subject subject = UserContext.createSubject(conn, userId, password, "FileNetP8WSI");
            // Set the subject on the local thread via ThreadLocal.
            uc.pushSubject(subject);
            // From now on, we are connected to FileNet CE and the object store "Test".
            // https://www.ibm.com/support/knowledgecenter/en/SSNW2F_5.2.0/com.ibm.p8.ce.dev.ce.doc/document_procedures.htm
            // (fetchInstance retrieves the document's properties; the original getInstance
            // only creates a local reference, so get_ContentElements() would typically fail on it)
            Document doc = Factory.Document.fetchInstance(os,
                    new Id("{F4DD983C-B845-4255-AC7A-257202B557EC}"), null);
            // Because in FileNet a document can have more than one associated content
            // element (e.g. it stores single-page TIFFs and handles them as a multi-paged
            // document), we have to get the content elements and iterate the list.
            ContentElementList docContentList = doc.get_ContentElements();
            Iterator iter = docContentList.iterator();
            while (iter.hasNext()) {
                ContentTransfer ct = (ContentTransfer) iter.next();
                // Get the content of the element as a stream.
                InputStream stream = ct.accessContentStream();
                // Now you have an InputStream to the document content; you can save it to a
                // local file or do whatever you want with it, just do not forget to close
                // the stream at the end.
                stream.close();
            }
        } finally {
            uc.popSubject();
        }
    }
}
This code just shows how you can implement such a thick client; I created it now using the documentation, so it is not production code, but after handling the exceptions properly it will probably work.
You have to specify the right URL, user, password, and docId of course, and you have to implement the copy from the content InputStream to a FileOutputStream, e.g. by using commons-io or Java NIO, etc.
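For that last step, here is a minimal sketch using Java NIO; the target directory is just an example, and it assumes the loop above (ct.get_RetrievalName() returns the content element's stored file name):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Inside the while loop, instead of only closing the stream:
InputStream stream = ct.accessContentStream();
try {
    Path target = Paths.get("C:/temp", ct.get_RetrievalName());
    // Files.copy() drains the InputStream into the target file in one call.
    Files.copy(stream, target, StandardCopyOption.REPLACE_EXISTING);
} finally {
    stream.close();
}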
I want to use the routing field in Elasticsearch.
But I am not able to find any Java API to enable it.
I have gone through link 1 and link 2, but neither seems to address this.
My code:
public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
    this.collector = collector;
    Settings settings = ImmutableSettings.settingsBuilder()
            .put("cluster.name", elasticSearchCluster).build();
    this.client = new TransportClient(settings)
            .addTransportAddress(new InetSocketTransportAddress(esHost, esPort));
}

public void execute(Tuple tuple) {
    try {
        String document = tuple.toString();
        byte[] byteBuffer = document.getBytes();
        IndexResponse response = this.client.prepareIndex(indexName, type, id)
                .setSource(byteBuffer).execute().actionGet();
    } catch (Exception e) {
        e.printStackTrace();
    }
    collector.ack(tuple);
}
Note that I am using TransportClient here, as there does not seem to be a good way of using the node client with Storm, but the question is irrespective of that. If there is a way of using the node client with routing, please do suggest it; otherwise, TransportClient routing would also be of great help.
I believe you are confusing two different "routing" concepts in ES. One is document routing and the other is index allocation routing (or "filtering").
The _routing field allows you specify the value to be used when indexing each document to determine which shard the document will be indexed on. The other two links you provided refer to an index-level (as opposed to document-level) setting that determines how the shards of an index are allocated to the various nodes in your cluster.
It sounds like you are trying to do document routing. This can be accomplished in the Java API using the IndexRequestBuilder class and the setRouting(String) method. Have a look at the source code on GitHub.
There are also some good code examples here which specify the routing field during indexing.
Almost!
You can just replace one line of code,
from
IndexResponse response = this.client.prepareIndex(indexName, type, id)
        .setSource(byteBuffer).execute().actionGet();
to
String routingValue = "ANY_ROUTING_VALUE_YOU_WANT";
IndexResponse response = this.client.prepareIndex(indexName, type, id)
        .setSource(byteBuffer).setRouting(routingValue).execute().actionGet();
Then your documents will be stored in a specific shard corresponding to the routing value you provide. At search time, you can provide the same routing value so that your search request hits only one specific shard.
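For the search side, a minimal sketch with the same TransportClient (the field name and query value are placeholders; SearchResponse and QueryBuilders come from the same Elasticsearch client API, and SearchRequestBuilder exposes setRouting just like IndexRequestBuilder):

SearchResponse searchResponse = this.client.prepareSearch(indexName)
        .setTypes(type)
        .setQuery(QueryBuilders.matchQuery("field", "value"))
        // use the same routing value as at index time, so only that shard is queried
        .setRouting(routingValue)
        .execute().actionGet();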
Note: I'm using an experimental pre-release of Microsoft's latest ADAL.
I'm trying to get my identity providers to work in the mobile applications. So far, I've been able to load my identity providers and managed to get the login page to show (except for Facebook).
The problem is that whenever I actually try to log in, I get an error in the form of "invalid redirect URI".
Google, for instance, will say: "The redirect URI in the request: https://login.microsoftonline.com/... did not match a registered redirect URI."
Facebook will show: "Given URL is not allowed by the application configuration: One or more of the given URLs is not allowed by the App's settings. It must match the website URL or Canvas URL, or the domain must be a subdomain of one of the App's domains."
As far as I understand, you don't actually need to register the mobile application with the different identity providers anymore, because Azure sits in between you and them. Azure handles the connection, gets your token, and uses it to identify you. It should then return a set of "Azure tokens" to you.
To my knowledge, the redirect URI used is registered on the portal, since I'm able to load the identity providers in the first place?
Not to mention it seems to be a default URL that's used by many applications: urn:ietf:wg:oauth:2.0:oob, which simply tells it to return to some non-browser-based application?
This is the code I'm using to actually do the login/signup:
private static String AUTHORITY_URL = "https://login.microsoftonline.com/<directory>/oauth2/authorize/";
private static String CLIENT_ID = "my_client_id";
private static String[] SCOPES = { "my_client_id" };
private static String[] ADDITIONAL_SCOPES = { "" };
private static String REDIRECT_URL = "urn:ietf:wg:oauth:2.0:oob";
private static String CORRELATION_ID = "";
private static String USER_HINT = "";
private static String EXTRA_QP = "nux=1";
private static String FB_POLICY = "B2C_1_<your policy>";
private static String EMAIL_SIGNIN_POLICY = "B2C_1_SignIn";
private static String EMAIL_SIGNUP_POLICY = "B2C_1_SignUp";
public async Task<AuthenticationResult> Login(IPlatformParameters parameters, bool isSignIn)
{
    var authContext = new AuthenticationContext(AUTHORITY_URL, new TokenCache());

    if (CORRELATION_ID != null &&
        CORRELATION_ID.Trim().Length != 0)
    {
        authContext.CorrelationId = Guid.Parse(CORRELATION_ID);
    }

    String policy = "";
    if (isSignIn)
        policy = EMAIL_SIGNIN_POLICY;
    else
        policy = EMAIL_SIGNUP_POLICY;

    return await authContext.AcquireTokenAsync(SCOPES, ADDITIONAL_SCOPES, CLIENT_ID, new Uri(REDIRECT_URL), parameters, UserIdentifier.AnyUser, EXTRA_QP, policy);
}
Microsoft's documentation isn't really helping, because most of it is either empty (literally not yet written) or a help topic from over a year ago. This stuff is pretty new, so documentation seems to be hard to come by.
So, dear people of Stack Overflow, what am I missing? Why is it saying that the redirect URI is invalid when it's been registered on the Azure web portal? And if the redirect URI is invalid, why can I retrieve the identity providers in the first place?
Why is it that I can't seem to find solutions after hours of searching, yet when I post a question here I somehow find the answer within minutes...
It was quite a stupid mistake at that: one of my colleagues had sent me the wrong authority URL.
The funny thing is that it was correct "enough" to load the identity providers we had configured on the portal, but not correct enough to handle actually signing in or up.
I initially used:
https://login.microsoftonline.com/<tenant_id>/oauth2/authorize/
where it should have been:
https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/authorize
You see that little "v2.0"? Yeah, that little bastard is what caused all the pain...
I want to be able to serve URLs to clients that are "signed" and so are only valid for 24 hours (for example).
However, I don't want to call S3 for every URL generated:
AWS::S3::S3Object.new(bucket, name).url_for(:read, :secure => true, :expires => expires_in).to_s
Instead, I want to generate the URL by myself (I have the file name and the bucket link, so I can build it myself).
However, I want to sign the URL at the bucket level (say, once a day for all the files in a given bucket). Is this possible?
When you create a pre-signed URL, that is done completely locally. You could do it "by yourself", but it is much easier to use the SDK, and there would be no practical differences. Note that there is no "sign" action in the S3 API.
However, you cannot sign at the "bucket level", as the signature is checked per object. I believe signing a whole bucket would not be feasible.
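To make the "completely locally" point concrete, here is a minimal sketch with the AWS SDK for Java v1 (imports omitted; the bucket name and key are placeholders, and the 24-hour expiry mirrors the question):

// Presigning is pure local computation with your credentials; no request is sent to S3.
AmazonS3Client s3 = new AmazonS3Client(new BasicAWSCredentials(accessKey, secretKey));
Date expiration = Date.from(Instant.now().plus(Duration.ofHours(24)));
URL url = s3.generatePresignedUrl("my-bucket", "path/to/file.txt", expiration);
System.out.println(url);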
Sorry, I do not have Ruby code for this, only Java...
But you will not be able to get a pre-signed URL for the whole bucket, only for each file.
Here is the function I created; it will print everything for you. Does the process make sense?
// Build a pre-signed GET URL for a single object; generatePresignedUrl()
// is computed locally, no call is made to S3.
private static URL getUrl(AmazonS3Client amazonS3Client, S3ObjectSummary s3ObjectSummary) {
    return amazonS3Client.generatePresignedUrl(
            new GeneratePresignedUrlRequest(s3ObjectSummary.getBucketName(), s3ObjectSummary.getKey())
                    .withMethod(HttpMethod.GET)
                    .withExpiration(getExpiration()));
}

// The original snippet left this helper undefined; this version assumes the
// 24-hour expiry from the question.
private static Date getExpiration() {
    return Date.from(Instant.now().plus(Duration.ofHours(24)));
}

public static void run(String accessKey, String secretKey, String bucketName) {
    AmazonS3Client amazonS3Client = new AmazonS3Client(new BasicAWSCredentials(accessKey, secretKey));
    amazonS3Client.listObjects(bucketName)
            .getObjectSummaries()
            .forEach(s3ObjectSummary ->
                    System.out.println(getUrl(amazonS3Client, s3ObjectSummary)));
}