Unable to write file on HDFS - hadoop

I have installed hadoop-2.6.0 and verified that all the Hadoop daemons are running. I can create and copy directories in HDFS, but I am not able to copy a file to HDFS.
Command:
bin/hadoop fs -copyFromLocal /home/130853/Hadoop_Data/abc /trial/abc
It gives the following exception:
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(II[BI[BIILjava/lang/String;JZ)V
at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(Native Method)
at org.apache.hadoop.util.NativeCrc32.calculateChunkedSumsByteArray(NativeCrc32.java:86)
at org.apache.hadoop.util.DataChecksum.calculateChunkedSums(DataChecksum.java:430)
at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:202)
at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:163)
at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:144)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2217)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:54)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:112)
at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:306)
at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
at org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
Can someone please help with this?

Is your Linux system 64-bit or 32-bit?
If it is 64-bit, I suggest you recompile the Hadoop source code so that it supports the native libraries.
If it is 32-bit, I suggest you switch to a 64-bit system.
Try executing the command:
hadoop checknative -a
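If checknative reports that the native libraries cannot be loaded, pointing Hadoop at the directory that contains the compiled native libraries sometimes helps. This is only a sketch, assuming the libraries live under $HADOOP_HOME/lib/native; adjust the paths to your installation before retrying the copyFromLocal:
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export LD_LIBRARY_PATH=$HADOOP_COMMON_LIB_NATIVE_DIR:$LD_LIBRARY_PATH
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"
hadoop checknative -a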

Related

mapr windows client not working

I am trying to install the MapR Windows client. I have followed all the steps outlined in the MapR Windows client installation. I copied the ssl_truststore file from our cluster into the C:\opt\mapr\conf folder and ran the configure.bat file. It ran without any errors, and I verified that C:\opt\mapr\conf\mapr-clusters.conf has the updated cluster name and CLDB nodes.
However, when I run the following command after changing to the folder c:\opt\mapr\hadoop\hadoop-2.7.0\bin
hadoop fs -ls /
I get the following error
18/01/19 14:05:07 ERROR cldbutils.CLDBRpcCommonUtils: Exception during init
java.lang.UnsatisfiedLinkError: com.mapr.security.JNISecurity.SetClusterOption(Ljava/lang/String;Ljava/lang/String;Ljava/lang/String;)I
at com.mapr.security.JNISecurity.SetClusterOption(Native Method)
at com.mapr.baseutils.cldbutils.CLDBRpcCommonUtils.init(CLDBRpcCommonUtils.java:163)
at com.mapr.baseutils.cldbutils.CLDBRpcCommonUtils.<init>(CLDBRpcCommonUtils.java:73)
at com.mapr.baseutils.cldbutils.CLDBRpcCommonUtils.<clinit>(CLDBRpcCommonUtils.java:63)
at org.apache.hadoop.conf.CoreDefaultProperties.<clinit>(CoreDefaultProperties.java:69)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2147)
at org.apache.hadoop.conf.Configuration.getProperties(Configuration.java:2362)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2579)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2531)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2444)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1156)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1128)
at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1464)
at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:321)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:487)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
Exception in thread "main" java.lang.UnsatisfiedLinkError: com.mapr.security.JNISecurity.SetParsingDone()V
at com.mapr.security.JNISecurity.SetParsingDone(Native Method)
at com.mapr.baseutils.cldbutils.CLDBRpcCommonUtils.init(CLDBRpcCommonUtils.java:231)
at com.mapr.baseutils.cldbutils.CLDBRpcCommonUtils.<init>(CLDBRpcCommonUtils.java:73)
at com.mapr.baseutils.cldbutils.CLDBRpcCommonUtils.<clinit>(CLDBRpcCommonUtils.java:63)
at org.apache.hadoop.conf.CoreDefaultProperties.<clinit>(CoreDefaultProperties.java:69)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2147)
at org.apache.hadoop.conf.Configuration.getProperties(Configuration.java:2362)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2579)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2531)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2444)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1156)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1128)
at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1464)
at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:321)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:487)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
We use Java 8 and Windows 7.
I have been stuck on this issue for a while. I tried all the possible options but was not successful. Any help is greatly appreciated.
These are the steps:
1. Install the MapR applications.
2. Configure the MapR client:
/opt/mapr/server/configure.sh -N poc2.cibdatahub.com -c -C (your cluster name)
3. Upload the missing jar from a MapR cluster node to the edge node under:
/opt/mapr/hadoop/hadoop-2.7.0/share/hadoop/yarn/
4. Create a local mapr account if you don't have one, for example:
useradd mapr
passwd mapr
5. Set the environment for the mapr user:
su mapr
vi ~/.bashrc
# append the lines below
export HADOOP_HOME=/opt/mapr/hadoop/hadoop-2.7.0
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export LD_LIBRARY_PATH=$HADOOP_COMMON_LIB_NATIVE_DIR
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
# HIVE home directory configuration
export HIVE_HOME=/opt/mapr/hive/hive-2.1
export PATH="$PATH:$HIVE_HOME/bin"
export HADOOP_USER_NAME="USERname"
# load the environment
source ~/.bashrc
6. Edit /opt/mapr/spark/spark-2.1.0/conf/spark-defaults.conf by adding:
spark.yarn.archive maprfs:///apps/spark/jars/spark-jars.zip
7. Verify the cluster setting: /opt/mapr/conf/mapr-clusters.conf should have the option secure=false.
8. Test:
hadoop fs -Dfs.mapr.trace -ls
Please try these options.
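On the Windows client itself (as in the question), a quick sanity check along the same lines is possible; this is only a sketch, reusing the paths from the question above:
type C:\opt\mapr\conf\mapr-clusters.conf
hadoop fs -Dfs.mapr.trace -ls /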

java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO Fails to start DFS

I have installed and configured Hadoop 2.6.0 on Windows.
I couldn't successfully run the "sbin\start-dfs" command.
I am getting the error below:
16/12/20 13:03:56 FATAL namenode.NameNode: Failed to start namenode.
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557)
at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:490)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:308)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
There was a similar question about running YARN, and the answer said that including hadoop-2.6.0/sbin and hadoop-2.6.0/bin in the PATH would resolve the problem, but I am still facing the error.
Can anyone help me fix this?
Please make sure you have the proper access permissions on the namenode directory. Also, format the namenode and start the HDFS services.
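A minimal sketch of those two steps on a Windows install (assuming HADOOP_HOME\bin and HADOOP_HOME\sbin are on the PATH, and that the namenode data directory configured in hdfs-site.xml is writable by your user):
hdfs namenode -format
start-dfs.cmd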

SQOOP Import Fails, File Not Found Exception

I am new to the Hadoop ecosystem and installed the components by following web searches. I installed Hadoop, Sqoop, and Hive on my local Ubuntu machine (not on any VM), each in its own directory:
/usr/local/hadoop
/usr/local/sqoop
/usr/local/hive
Looking at the error, I tried to resolve it by copying the sqoop folder (local machine /usr/local/sqoop) to the HDFS directory (hdfs://localhost:54310/usr/local/sqoop). This resolved my problem. I want to know a couple of things:
Before copying sqoop to HDFS, was my installation right?
Is it necessary to copy the sqoop directory from the local file system to HDFS?
16/07/02 13:22:15 ERROR tool.ImportTool: Encountered IOException running import job: java.io.FileNotFoundException: File does not exist: hdfs://localhost:54310/usr/local/sqoop/lib/avro-mapred-1.7.5-hadoop2.jar
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:269)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:390)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:483)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:196)
at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:169)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:266)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:673)
at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:118)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
There is no problem with the installation, and there is no need to copy all the files from the sqoop directory; just copy the Sqoop library files into HDFS.
Create a directory structure in HDFS that matches $SQOOP_HOME/lib.
Example: hdfs dfs -mkdir -p /usr/lib/sqoop/lib
Copy all the Sqoop library files from $SQOOP_HOME/lib to the HDFS lib directory.
Example: hdfs dfs -put /usr/lib/sqoop/lib/* /usr/lib/sqoop/lib
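To confirm the jars landed where the job expects them, a quick listing of the HDFS path used above helps before rerunning the import:
hdfs dfs -ls /usr/lib/sqoop/lib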

Hadoop Mapreduce Error Input path does not exist: hdfs://localhost:54310/user/hduser/input

I have installed Hadoop 2.6 on Ubuntu Linux 15.04 and it is running fine. But when I run a sample MapReduce test program, it gives the following error:
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:54310/user/hduser/input.
Kindly help me. Below are the complete details of the error.
hduser#krishadoop:/usr/local/hadoop/sbin$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount input output
Picked up JAVA_TOOL_OPTIONS: -javaagent:/usr/share/java/jayatanaag.jar
15/08/24 15:22:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/08/24 15:22:38 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/08/24 15:22:38 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
15/08/24 15:22:39 INFO mapreduce.JobSubmitter: Cleaning up the staging area file:/app/hadoop/tmp/mapred/staging/hduser1122930879/.staging/job_local1122930879_0001
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:54310/user/hduser/input
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:321)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:385)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:597)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
It seems you mentioned a wrong input path. Hadoop is searching for the input path at /user/hduser/input. Hadoop also follows a Unix-like tree structure: if you simply mention a directory input, it will be taken as /user/{username}/input. So create the directory and put your data into it:
hadoop fs -mkdir -p /user/hduser/input
hadoop fs -put <datafile> /user/hduser/input
If you can see this path (file) physically and still get the error, you may have confused the local file system with the Hadoop Distributed File System (HDFS). To run this MapReduce job, the file must be located in HDFS; having it only in the local file system will not do.
You can import local file system files into HDFS with this command:
hadoop fs -put <local_file_path> <HDFS_directory>
You can confirm that the imported file exists in HDFS with this command:
hadoop fs -ls <HDFS_path>
You must create the directory and upload your input before executing your Hadoop job. For example, if you need to upload an input.txt file, you should do the following:
$HADOOP_HOME/bin/hdfs dfs -mkdir /user/hduser/input
$HADOOP_HOME/bin/hdfs dfs -copyFromLocal $HADOOP_HOME/input.txt /user/hduser/input/input.txt
The first line creates the directory, and the second uploads your input file into HDFS (the Hadoop file system).
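After the upload, the example job can be launched against that directory (the jar path matches the one used in the question; the output directory must not already exist):
$HADOOP_HOME/bin/hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /user/hduser/input /user/hduser/output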
When you run any jar file with input and output files/directories, make sure that the input file has already been created (at the specified path) and that the output path does not exist.
If you want to give a text file as the input, first copy a text file from the local file system to HDFS and then run the job using the following commands:
hadoop fs -copyFromLocal /input.txt /user/hduser/input.txt
/usr/local/hadoop/sbin$ yarn jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /user/hduser/input.txt /output
/input.txt may be replaced with the path of any text file.
You need to start Pig in local mode, not cluster mode:
pig -x local
The program is not able to find the Hadoop path for the inputs; it is searching the local file system rather than Hadoop's DFS.
This problem will go away once your program is able to locate the HDFS location. We need to let the program read the HDFS location given in the configuration files. To do that, add these lines to your program code:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
// Load the cluster configuration so fs.defaultFS points at HDFS instead of the local file system
Configuration conf = new Configuration();
conf.addResource(new Path("/usr/local/hadoop/hadoop-2.7.3/etc/hadoop/core-site.xml"));
conf.addResource(new Path("/usr/local/hadoop/hadoop-2.7.3/etc/hadoop/hdfs-site.xml"));
You should make the directory in HDFS, for instance:
hadoop fs -mkdir /input_dir
Then, when you run your MapReduce program, give the absolute path of the input directory, so the format should be:
hadoop jar jarFileName.jar className /input_dir /outputdir   (right)
The following is wrong because it is a relative path:
hadoop jar jarFileName.jar className input_dir outputdir   (wrong)
If you find /bin/bash: /bin/java: No such file or directory in the logs, try setting JAVA_HOME in /etc/hadoop/hadoop-env.sh.
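For example, a typical line in hadoop-env.sh (the JDK path below is only an illustration; point it at wherever your Java installation actually lives):
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64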

error while running nutch on hadoop multi cluster environment

I am running Nutch in a Hadoop multi-cluster environment.
Hadoop throws an error when Nutch is executed using the following command:
$ bin/hadoop jar /home/nutch/nutch/runtime/deploy/nutch-1.5.1.job org.apache.nutch.crawl.Crawl urls -dir urls -depth 1 -topN 5
Error:
Exception in thread "main" java.io.IOException: Not a file:
hdfs://master:54310/user/nutch/urls/crawldb
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:170)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:515)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:753)
at com.bdc.dod.dashboard.BDCQueryStatsViewer.run(BDCQueryStatsViewer.java:829)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at com.bdc.dod.dashboard.BDCQueryStatsViewer.main(BDCQueryStatsViewer.java:796)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
I tried the possible ways of solving this and fixed all the usual issues, like setting http.agent.name under the /local/conf path, etc. I had installed it earlier and it went smoothly.
Can anybody suggest a solution?
By the way, I followed a link for installing and running.
I was able to solve this issue. When copying files from the local file system to the HDFS destination file system, I used to do it like this: bin/hadoop dfs -put ~/nutch/urls urls.
However, it should be "bin/hadoop dfs -put ~/nutch/urls/* urls"; here urls/* includes the subdirectories.
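To double-check what actually landed in HDFS before launching the crawl, a listing of the same relative path helps:
bin/hadoop dfs -ls urls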
