Java 8 - error when compiling a lambda function

public class GrammarValidityTest {
    private String[] dataPaths = new String[] {"data/", "freebase/", "tables/", "regex/"};

    @Test(groups = {"grammar"})
    public void readGrammars() {
        try {
            List<String> successes = new ArrayList<>(), failures = new ArrayList<>();
            for (String dataPath : dataPaths) {
                // Files.walk(Paths.get(dataPath)).forEach(filePath -> {
                    try {
                        if (filePath.toString().toLowerCase().endsWith(".grammar")) {
                            Grammar test = new Grammar();
                            LogInfo.logs("Reading grammar file: %s", filePath.toString());
                            test.read(filePath.toString());
                            LogInfo.logs("Finished reading", filePath.toString());
                            successes.add(filePath.toString());
                        }
                    }
                    catch (Exception ex) {
                        failures.add(filePath.toString());
                    }
                });
            }
            LogInfo.begin_track("Following grammar tests passed:");
            for (String path : successes)
                LogInfo.logs("%s", path);
            LogInfo.end_track();
            LogInfo.begin_track("Following grammar tests failed:");
            for (String path : failures)
                LogInfo.logs("%s", path);
            LogInfo.end_track();
            assertEquals(0, failures.size());
        }
        catch (Exception ex) {
            LogInfo.logs(ex.toString());
        }
    }
}
The line beginning with // is the one that brings up the error - "illegal start of expression" - starting at the '>' sign.
I do not program much in Java. I just downloaded this code from somewhere; it is quite popular and supposed to run, but I got this error. Any help/fixes/explanation would be appreciated.

Run javac -version and verify that you are actually using the compiler from JDK 8; it's possible that even if your java points to the 1.8 release, your javac has a different version.
If you are using Eclipse, remember to set the source type for your project to 1.8.
Edit:
Since you are using ant, verify that your JAVA_HOME environment variable points to your jdk1.8 directory.
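For a quick sanity check of the toolchain, you can try to compile a minimal lambda on its own. This is a hypothetical standalone file, not part of the project above; if your javac is older than 1.8, it fails with the same "illegal start of expression" error at the arrow:

import java.util.Arrays;
import java.util.List;

public class LambdaCheck {
    public static void main(String[] args) {
        List<String> paths = Arrays.asList("data/", "freebase/");
        // This lambda only compiles with -source 1.8 or newer.
        paths.forEach(p -> System.out.println(p));
    }
}

If this file compiles with your javac, the problem is in the build setup (source level, JAVA_HOME) rather than in the code.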

Related

Looking to parse the following code (Apache Commons CLI 1.4), but it doesn't get into the if block

I have the following code in JDeveloper and I am trying to parse the output but can't seem to figure it out.
package project1;

import org.apache.commons.cli.*;

public class cmdParser
{
    public static void main(String[] args)
    {
        try
        {
            Options options = new Options();
            options.addOption("t", false, "display current time");
            CommandLineParser parser = new DefaultParser();
            CommandLine cmd = parser.parse(options, args);
            if (cmd.hasOption("t"))
            {
                String optionT = cmd.getOptionValue("t");
                System.out.println("Option t" + optionT);
            }
            else
            {
                System.out.println("Can't get the option");
            }
        }
        catch (ParseException exp)
        {
            System.out.println("Error:" + exp.getMessage());
        }
    }
}
Output: (screenshot of the console output, not reproduced here)
How do you expect to get the option value if you don't pass such an option?
Not sure how it is done in JDeveloper, but from the command line:
java cmdParser -t "my test option"
Furthermore, you should use options.addOption("t", true, "display current time"); if you want to pass a value to the option. If the second parameter is false, the option is just a flag.
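As a sketch of the corrected class (same names as in the question, with the option declared as value-taking):

package project1;

import org.apache.commons.cli.*;

public class cmdParser
{
    public static void main(String[] args)
    {
        try
        {
            Options options = new Options();
            // true: the option takes an argument, so getOptionValue("t") can return it
            options.addOption("t", true, "display current time");
            CommandLineParser parser = new DefaultParser();
            CommandLine cmd = parser.parse(options, args);
            if (cmd.hasOption("t"))
            {
                System.out.println("Option t: " + cmd.getOptionValue("t"));
            }
            else
            {
                System.out.println("Can't get the option");
            }
        }
        catch (ParseException exp)
        {
            System.out.println("Error: " + exp.getMessage());
        }
    }
}

Run as java project1.cmdParser -t "my test option" and it prints Option t: my test option; without -t it takes the else branch.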

Spring Boot - controlling startup of the application

How can I programmatically control the startup of the app depending on the config server? I have an sh script for controlling this on docker-compose up, but I'm wondering whether I can control it programmatically in the Spring app.
Regards
If you are on Ubuntu then the following solution will work flawlessly. If you are on a different distro and ps -A doesn't work, kindly find the equivalent command. Now, about the strategy used below: I wanted my application to be started by a script, as the script contains some JVM arguments. So while my application is running, the script is also in the active process list. After startup is completed, the application checks whether my script file is present in the active process list; if it is not there, it shuts the application down. The following example should give you an idea of how to implement the same with the config server.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.ExitCodeGenerator;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.ApplicationContext;
import org.springframework.context.event.EventListener;

@Autowired
private ApplicationContext context;

@EventListener(ApplicationReadyEvent.class) // use in production
public void initiateStartup() {
    try {
        String shProcessName = "root-IDEA.sh";
        String line;
        boolean undesiredStart = true;

        Process p = Runtime.getRuntime().exec("ps -A");
        InputStream inputStream = p.getInputStream();
        InputStreamReader inputStreamReader = new InputStreamReader(inputStream);
        BufferedReader bufferedReader = new BufferedReader(inputStreamReader);
        while ((line = bufferedReader.readLine()) != null) {
            if (line.contains(shProcessName)) {
                undesiredStart = false;
                break;
            }
        }
        bufferedReader.close();
        inputStreamReader.close();
        inputStream.close();

        if (undesiredStart) {
            System.out.println("--------------------------------------------------------------------");
            System.out.println("APPLICATION STARTED USING A DIFFERENT CONFIG. PLEASE START USING THE '.sh' FILE.");
            System.out.println("CLOSING APPLICATION");
            System.out.println("--------------------------------------------------------------------");
            // close the application
            int exitCode = SpringApplication.exit(context, (ExitCodeGenerator) () -> 0);
            System.exit(exitCode);
        }
        System.out.println("APPLICATION STARTED CORRECTLY");
    } catch (IOException e) {
        e.printStackTrace();
    }
}

How to solve SchemaValidationFailedException: Child is not present in schema

I'm trying to use the DataBroker of MD-SAL to save a list of data. After modifying the yang file and the InstanceIdentifier many times, I am still facing a similar validation issue, for example:
java.util.concurrent.ExecutionException: TransactionCommitFailedException{message=canCommit encountered an unexpected failure, errorList=[RpcError [message=canCommit encountered an unexpected failure, severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.yangtools.yang.data.impl.schema.tree.SchemaValidationFailedException: Child /(urn:opendaylight:params:xml:ns:yang:testDataBroker?revision=2015-01-05)service-datas is not present in schema tree.]]}
at org.opendaylight.yangtools.util.concurrent.MappingCheckedFuture.wrapInExecutionExc
My goal is to use the rpc save-device-info to get data from REST, then use the DataBroker API to save the data in memory, and finally test whether the data can be successfully replicated to the other cluster nodes.
Yang file:
module testDataBroker {
    yang-version 1;
    namespace "urn:opendaylight:params:xml:ns:yang:testDataBroker";
    prefix "testDataBroker";

    revision "2015-01-05" {
        description "Initial revision of testDataBroker model";
    }

    container service-datas {
        list service-data {
            key "service-id";
            uses service-id;
            uses device-info;
        }
    }

    grouping device-info {
        container device-info {
            leaf device-name {
                type string;
                config false;
            }
            leaf device-description {
                type string;
                config false;
            }
        }
    }

    grouping service-id {
        leaf service-id {
            type string;
            mandatory true;
        }
    }

    rpc save-device-info {
        input {
            uses service-id;
            uses device-info;
        }
        output {
            uses device-info;
        }
    }

    rpc get-device-info {
        output {
            uses device-info;
        }
    }
}
Java Code
@Override
public Future<RpcResult<SaveDeviceInfoOutput>> saveDeviceInfo(SaveDeviceInfoInput input) {
    String name = input.getDeviceInfo().getDeviceName();
    String description = input.getDeviceInfo().getDeviceDescription();
    String serviceId = input.getServiceId();

    WriteTransaction writeTransaction = dataBroker.newWriteOnlyTransaction();
    DeviceInfo deviceInfo = new DeviceInfoBuilder().setDeviceDescription(description).setDeviceName(name).build();
    ServiceData serviceData = new ServiceDataBuilder().setServiceId(serviceId).setDeviceInfo(deviceInfo).build();
    InstanceIdentifier<ServiceData> instanceIdentifier =
            InstanceIdentifier.builder(ServiceDatas.class).child(ServiceData.class, serviceData.getKey()).build();
    writeTransaction.put(LogicalDatastoreType.CONFIGURATION, instanceIdentifier, serviceData, true);

    boolean isFailed = false;
    try {
        writeTransaction.submit().get();
        log.info("Create containers succeeded!");
    } catch (InterruptedException | ExecutionException e) {
        log.error("Create containers failed: ", e);
        isFailed = true;
    }
    return isFailed ?
            RpcResultBuilder.success(new SaveDeviceInfoOutputBuilder())
                    .withError(RpcError.ErrorType.RPC, "Create container failed").buildFuture() :
            RpcResultBuilder.success(new SaveDeviceInfoOutputBuilder().setDeviceInfo(input.getDeviceInfo()))
                    .buildFuture();
}
Really need your help. Thanks.
Update:
With the same version of the md-sal bundles, I installed the feature odl-toaster on only one ODL node instead of on all cluster nodes. The rpc from odl-toaster worked properly on the single node.
I didn't realize that rpcs are also clustered: sometimes the rpc request hits other nodes which didn't have the same bundles deployed. The problem was solved once the bundle was distributed to each node.

Spark on Windows - What exactly is winutils and why do we need it?

I'm curious! To my knowledge, HDFS needs datanode processes to run, which is why it only works on servers. Spark can run locally, though, but needs winutils.exe, which is a component of Hadoop. But what exactly does it do? How is it that I cannot run Hadoop on Windows, but I can run Spark, which is built on Hadoop?
I know of at least one usage: it is for running shell commands on Windows. You can find it in org.apache.hadoop.util.Shell; other modules depend on this class and use its methods, for example getGetPermissionCommand():
static final String WINUTILS_EXE = "winutils.exe";
...
static {
    IOException ioe = null;
    String path = null;
    File file = null;
    // invariant: either there's a valid file and path,
    // or there is a cached IO exception.
    if (WINDOWS) {
        try {
            file = getQualifiedBin(WINUTILS_EXE);
            path = file.getCanonicalPath();
            ioe = null;
        } catch (IOException e) {
            LOG.warn("Did not find {}: {}", WINUTILS_EXE, e);
            // stack trace comes at debug level
            LOG.debug("Failed to find " + WINUTILS_EXE, e);
            file = null;
            path = null;
            ioe = e;
        }
    } else {
        // on a non-windows system, the invariant is kept
        // by adding an explicit exception.
        ioe = new FileNotFoundException(E_NOT_A_WINDOWS_SYSTEM);
    }
    WINUTILS_PATH = path;
    WINUTILS_FILE = file;
    WINUTILS = path;
    WINUTILS_FAILURE = ioe;
}
...
public static String getWinUtilsPath() {
    if (WINUTILS_FAILURE == null) {
        return WINUTILS_PATH;
    } else {
        throw new RuntimeException(WINUTILS_FAILURE.toString(),
                WINUTILS_FAILURE);
    }
}
...
public static String[] getGetPermissionCommand() {
    return (WINDOWS) ? new String[] { getWinUtilsPath(), "ls", "-F" }
            : new String[] { "/bin/ls", "-ld" };
}
Though Max's answer covers the actual place where it is referenced, let me give a brief background on why Hadoop needs it on Windows.
From Hadoop's Confluence page itself:
Hadoop requires native libraries on Windows to work properly - that includes accessing the file:// filesystem, where Hadoop uses some Windows APIs to implement posix-like file access permissions. This is implemented in HADOOP.DLL and WINUTILS.EXE. In particular, %HADOOP_HOME%\BIN\WINUTILS.EXE must be locatable.
And I think you should be able to run both Spark and Hadoop on Windows.
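In practice, that requirement is usually satisfied by pointing Hadoop at the directory that contains bin\winutils.exe before any Hadoop class is initialized. A minimal sketch, assuming winutils.exe was placed under a hypothetical C:\hadoop\bin:

public class WinutilsSetup {
    public static void main(String[] args) {
        // Equivalent to setting the HADOOP_HOME environment variable;
        // Shell checks this system property first when locating winutils.exe.
        System.setProperty("hadoop.home.dir", "C:\\hadoop");
        // Any Spark/Hadoop code initialized after this point can resolve
        // %HADOOP_HOME%\bin\winutils.exe via Shell.getWinUtilsPath().
    }
}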

Debugger always breaks when invoking GetFileAsync in Windows 8 Store apps (Metro style)

This problem has confused me for several days. When the code runs to
await ApplicationData.Current.LocalFolder.GetFileAsync(udFileName);
the app always jumps to
UnhandledException += (sender, e) =>
{
    if (global::System.Diagnostics.Debugger.IsAttached)
        global::System.Diagnostics.Debugger.Break();
};
The Exception e is:
{Windows.UI.Xaml.UnhandledExceptionEventArgs}
Exception {"Object reference not set to an instance of an object."}
System.Exception {System.NullReferenceException}
Message "System.NullReferenceException.......
Following is the function I invoked:
public async void RestoreUserDefaults()
{
    string udFileName = "userdefaults.udef";
    bool bExist = true;
    {
        try
        {
            await ApplicationData.Current.LocalFolder.GetFileAsync(udFileName);
        }
        catch (FileNotFoundException)
        {
            bExist = false;
        }
    }
}
I have added the file type to the package.appxmanifest.
Can anyone help me? Many thanks.
