I have a Windows service which seems to be unable to use isolated storage (most likely because I'm doing something wrong). The service is running under Local System. Every time I run the code there is no change to the isolated storage folder. Can someone please tell me why this is so? I have tried on Windows 7 and Windows 8.1.
IsolatedStorageFileStream configFile = new IsolatedStorageFileStream("UIsolate.cfg", FileMode.Create);
// create a writer to write to the stream
StreamWriter writer = new StreamWriter(configFile);
// write some data to the config. file
writer.WriteLine("test");
// flush the buffer and clean up
writer.Flush();
writer.Close();
configFile.Close();
I am using fs (a Node module) to manage files. I am getting the file's creation time (birthtime). It works absolutely fine when I run the app on my local machine, but when I run it against EFS from a Node.js Lambda function it gives 1970-01-01T00:00:00.000Z, which is not the actual creation time of the file I created.
const fs = require("fs");
const path = require("path");

var efsDirectory = "/mnt/data/";
var filePath = path.join(efsDirectory, file);
console.log("This file is going to be executed :", file);
var response = fs.statSync(filePath);
let fileBirthTime = response.birthtime;
console.log("File path is : ", filePath);
After joining the path, my file path looks like this: /mnt/data/172.807056.json, which is the actual path of the file.
In the CloudWatch logs I am getting this:
On my local machine it works fine and gives the actual file birthtime. Can anyone tell me why I am getting this?
I posted the same question on AWS re:Post, and an engineer responded with the following answer. I am pasting the same answer here in case someone else faces this problem.
You are getting this result because birthtime is not supported on most NFS file systems, including EFS. Even on Linux it depends on the kernel and the type of file system whether this field is supported. The default file system on Amazon Linux 2 on EBS doesn't return a value for birthtime, while the latest Ubuntu image does support it. This is why you see a difference between running locally and against EFS.
I am using Kafka Streams with Spring Cloud Stream. Our application is stateful, as it does some aggregation. When I run the app, I see the ERROR message below on the console. I am running this app on a Remote Desktop Windows machine.
Failed to change permissions for the directory C:\Users\andy\project\tmp
Failed to change permissions for the directory C:\Users\andy\project\tmp\my-local-local
But when the same code is deployed on a Linux box, I don't see the error, so I assume it is a permissions issue. As per our company policy, we are not allowed to change a folder's permissions, so chmod 777 did not work either.
My question is: is there a way to disable creating the state store locally and instead use the Kafka changelog topic to maintain the state? I understand this is not ideal, but it is only for my local development. TIA.
You could try to use in-memory state stores instead of the default persistent state stores.
You can do that by providing a state store supplier for in-memory state stores to your stateful operations:
KeyValueBytesStoreSupplier storeSupplier = Stores.inMemoryKeyValueStore("in-mem");
StreamsBuilder builder = new StreamsBuilder();
builder.stream("input-topic")
       .groupByKey()
       .count(Materialized.as(storeSupplier));
From Apache Kafka 3.2 onwards, you can set the store type in the stateful operation without the need for a state store supplier:
StreamsBuilder builder = new StreamsBuilder();
builder.stream("input-topic")
       .groupByKey()
       .count(Materialized.as(StoreType.IN_MEMORY));
Or you can set the state store type globally with:
props.put(StreamsConfig.DEFAULT_DSL_STORE_CONFIG, StreamsConfig.IN_MEMORY);
I have a Scala codebase where I am accessing Azure blob files using the Hadoop FileSystem APIs (and not the Azure blob web client). My usage is of the format:
val hadoopConfig = new Configuration()
hadoopConfig.set(s"fs.azure.sas.${blobContainerName}.${accountName}.blob.core.windows.net",
  sasKey)
hadoopConfig.set("fs.defaultFS",
  s"wasbs://${blobContainerName}@${accountName}.blob.core.windows.net")
hadoopConfig.set("fs.wasb.impl",
  "org.apache.hadoop.fs.azure.NativeAzureFileSystem")
hadoopConfig.set("fs.wasbs.impl",
  "org.apache.hadoop.fs.azure.NativeAzureFileSystem$Secure")
val fs = FileSystem.get(
  new java.net.URI(s"wasbs://${blobContainerName}@${accountName}.blob.core.windows.net"),
  hadoopConfig)
I am now writing unit tests for this code using the Azure Storage Emulator as the storage account. I went through this page, but it only explains how to access the emulator through the web APIs of AzureBlobClient. I need to figure out how to test the above code by accessing the Azure Storage Emulator through the Hadoop FileSystem APIs. I have tried the following, but it does not work:
val hadoopConfig = new Configuration()
hadoopConfig.set(s"fs.azure.sas.${containerName}.devstoreaccount1.blob.core.windows.net",
  "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==")
hadoopConfig.set("fs.defaultFS",
  s"wasbs://${containerName}@devstoreaccount1.blob.core.windows.net")
hadoopConfig.set("fs.wasb.impl",
  "org.apache.hadoop.fs.azure.NativeAzureFileSystem")
hadoopConfig.set("fs.wasbs.impl",
  "org.apache.hadoop.fs.azure.NativeAzureFileSystem$Secure")
val fs = FileSystem.get(
  new java.net.URI(s"wasbs://${containerName}@devstoreaccount1.blob.core.windows.net"),
  hadoopConfig)
I was able to solve this problem and connect to the storage emulator by adding the following two configurations:
hadoopConfig.set("fs.azure.test.emulator", "true")
hadoopConfig.set("fs.azure.storage.emulator.account.name",
  "devstoreaccount1.blob.core.windows.net")
We have a Spring Boot application running in Azure App Service to do some ETL operations on CSV files. A file is put into the instance's local directory, from where the application picks it up and processes it. We are facing an issue when the uploaded file is bigger than 10 MB: the reader is not able to read the file and returns null. We are using Super CSV to process the CSV file.
FileReader fr = new FileReader(filePath);
BufferedReader bufferedReader = new BufferedReader(fr);
CsvListReader reader = new CsvListReader(bufferedReader, CsvPreference.EXCEL_NORTH_EUROPE_PREFERENCE);
List<String> read = reader.read();
The reader.read() method returns null. The issue happens only in Azure App Service (Linux); it works perfectly locally with the same file. Can anyone help me find out what the issue is here?
It seems Application Settings on Windows Phone are not secure or encrypted. I used an isolated storage tool to pull all app files and folders from the device, and the AppSettings file appears to be plain XML.
What about app LINQ databases? The .sdf file seems to be encrypted.
I need to store very sensitive data that must be accessible both from the app and from a background agent running in a separate process. They both seem to be able to access application settings, but since that storage is not secure, I really cannot use app settings.
You can use the ProtectedData class to encrypt your sensitive data, then store it in the application settings or directly in isolated storage.
For instance:
// Encrypting
var encryptedData = ProtectedData.Protect(Encoding.UTF8.GetBytes("Hello world!"), null);
// Decrypting
var sensitiveData = Encoding.UTF8.GetString(ProtectedData.Unprotect(encryptedData, null));