I'm new to Elasticsearch (ES) and I'm currently setting up a customized podman container installation of ES 8.5.0 (rootless install), built from the ES base RPM repository.
In this installation I'm using a dedicated Linux user, 'elasticadm', which owns the files both inside the container and on the local Red Hat Linux 8.5 host.
Basically I use the following ownership for the installation on the local host:
/app/elasticsearch/data - /var/log/elasticsearch/elasticsearch.log - /etc/elasticsearch/elasticsearch.yml:
elasticadm:elasticsearch - then, after the error below occurred, I also tried elasticadm:root (but with no more success).
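For reference, the ownership on the host was applied roughly like this (a sketch, not necessarily the exact commands used):
chown -R elasticadm:elasticsearch /app/elasticsearch/data
chown elasticadm:elasticsearch /var/log/elasticsearch/elasticsearch.log /etc/elasticsearch/elasticsearch.yml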
Whenever I run the Elasticsearch podman container with any bind-mount volumes, the startup fails with the following error message:
"
Fatal exception while booting Elasticsearch org.elasticsearch.ElasticsearchSecurityException: invalid configuration for xpack.security.transport.ssl - [xpack.security.transport.ssl.enabled] is not set, but the following settings have been configured in elasticsearch.yml
"
The ES podman installation without bind-mount volumes is fine, but of course that's not what I need. I'm able to deploy the container without any bind-mount volumes:
podman run --detach --name es850 --publish 9200:9200 --user=elasticadm localhost/elasticsearch_cust:1.4
podman logs es850
warning: ignoring JAVA_HOME=/usr/lib/jvm/java-openjdk; using bundled JDK
[2022-11-09T20:37:41,777][INFO ][o.e.n.Node ] [Prod] version[8.5.0], pid[72], build[rpm/c94b4700cda13820dad5aa74fae6db185ca5c304/2022-10-24T16:54:16.433628434Z], OS[Linux/4.18.0-348.7.1.el8_5.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/19/19+36-2238]
[2022-11-09T20:37:41,782][INFO ][o.e.n.Node ] [Prod] JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
[2022-11-09T20:37:41,783][INFO ][o.e.n.Node ] [Prod] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -Djava.security.manager=allow, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-5358173424819503746, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Xms1868m, -Xmx1868m, -XX:MaxDirectMemorySize=979369984, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.distribution.type=rpm, --module-path=/usr/share/elasticsearch/lib, --add-modules=jdk.net, -Djdk.module.main=org.elasticsearch.server]
[2022-11-09T20:37:43,721][INFO ][c.a.c.i.j.JacksonVersion ] [Prod] Package versions: jackson-annotations=2.13.2, jackson-core=2.13.2, jackson-databind=2.13.2.2, jackson-dataformat-xml=2.13.2, jackson-datatype-jsr310=2.13.2, azure-core=1.27.0, Troubleshooting version conflicts: https://aka.ms/azsdk/java/dependency/troubleshoot
[2022-11-09T20:37:45,175][INFO ][o.e.p.PluginsService ] [Prod] loaded module [aggs-matrix-stats]
[2022-11-09T20:37:45,175][INFO ][o.e.p.PluginsService ] [Prod] loaded module [analysis-common]
[2022-11-09T20:37:45,176][INFO ][o.e.p.PluginsService ] [Prod] loaded module [apm]
......
[2022-11-09T20:37:45,190][INFO ][o.e.p.PluginsService ] [Prod] loaded module [x-pack-watcher]
[2022-11-09T20:37:45,191][INFO ][o.e.p.PluginsService ] [Prod] no plugins loaded
[2022-11-09T20:37:48,027][WARN ][stderr ] [Prod] Nov 09, 2022 8:37:48 PM org.apache.lucene.store.MMapDirectory lookupProvider
[2022-11-09T20:37:48,028][WARN ][stderr ] [Prod] WARNING: You are running with Java 19. To make full use of MMapDirectory, please pass '--enable-preview' to the Java command line.
[2022-11-09T20:37:48,039][INFO ][o.e.e.NodeEnvironment ] [Prod] using [1] data paths, mounts [[/ (overlay)]], net usable_space [24gb], net total_space [27.8gb], types [overlay]
[2022-11-09T20:37:48,039][INFO ][o.e.e.NodeEnvironment ] [Prod] heap size [1.8gb], compressed ordinary object pointers [true]
[2022-11-09T20:37:48,048][INFO ][o.e.n.Node ] [Prod] node name [Prod], node ID [CvroQFRsTxKqyWfwcOJGag], cluster name [elasticsearch], roles [data_frozen, ml, data_hot, transform, data_content, data_warm, master, remote_cluster_client, data, data_cold, ingest]
[2022-11-09T20:37:51,831][INFO ][o.e.x.s.Security ] [Prod] Security is enabled
[2022-11-09T20:37:52,214][INFO ][o.e.x.s.a.s.FileRolesStore] [Prod] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2022-11-09T20:37:52,628][INFO ][o.e.x.s.InitialNodeSecurityAutoConfiguration] [Prod] Auto-configuration will not generate a password for the elastic built-in superuser, as we cannot determine if there is a terminal attached to the elasticsearch process. You can use the `bin/elasticsearch-reset-password` tool to set the password for the elastic user.
[2022-11-09T20:37:52,724][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [Prod] [controller/96] [Main.cc#123] controller (64 bit): Version 8.5.0 (Build 3922fab346e761) Copyright (c) 2022 Elasticsearch BV
[2022-11-09T20:37:53,354][INFO ][o.e.t.n.NettyAllocator ] [Prod] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
[2022-11-09T20:37:53,381][INFO ][o.e.i.r.RecoverySettings ] [Prod] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
[2022-11-09T20:37:53,425][INFO ][o.e.d.DiscoveryModule ] [Prod] using discovery type [single-node] and seed hosts providers [settings]
[2022-11-09T20:37:54,888][INFO ][o.e.n.Node ] [Prod] initialized
[2022-11-09T20:37:54,889][INFO ][o.e.n.Node ] [Prod] starting ...
[2022-11-09T20:37:54,901][INFO ][o.e.x.s.c.f.PersistentCache] [Prod] persistent cache index loaded
[2022-11-09T20:37:54,903][INFO ][o.e.x.d.l.DeprecationIndexingComponent] [Prod] deprecation component started
[2022-11-09T20:37:55,011][INFO ][o.e.t.TransportService ] [Prod] publish_address {10.0.2.100:9300}, bound_addresses {[::]:9300}
[2022-11-09T20:37:55,122][WARN ][o.e.b.BootstrapChecks ] [Prod] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2022-11-09T20:37:55,124][INFO ][o.e.c.c.ClusterBootstrapService] [Prod] this node has not joined a bootstrapped cluster yet; [cluster.initial_master_nodes] is set to [Prod]
[2022-11-09T20:37:55,133][INFO ][o.e.c.c.Coordinator ] [Prod] setting initial configuration to VotingConfiguration{CvroQFRsTxKqyWfwcOJGag}
[2022-11-09T20:37:55,327][INFO ][o.e.c.s.MasterService ] [Prod] elected-as-master ([1] nodes joined)[_FINISH_ELECTION_, {Prod}{CvroQFRsTxKqyWfwcOJGag}{oYVn8g0ZS2CFxHKYosdd_Q}{Prod}{10.0.2.100}{10.0.2.100:9300}{cdfhilmrstw} completing election], term: 1, version: 1, delta: master node changed {previous [], current [{Prod}{CvroQFRsTxKqyWfwcOJGag}{oYVn8g0ZS2CFxHKYosdd_Q}{Prod}{10.0.2.100}{10.0.2.100:9300}{cdfhilmrstw}]}
[2022-11-09T20:37:55,352][INFO ][o.e.c.c.CoordinationState] [Prod] cluster UUID set to [_wcBh4-JRtuLqIBXyNhZ5A]
[2022-11-09T20:37:55,370][INFO ][o.e.c.s.ClusterApplierService] [Prod] master node changed {previous [], current [{Prod}{CvroQFRsTxKqyWfwcOJGag}{oYVn8g0ZS2CFxHKYosdd_Q}{Prod}{10.0.2.100}{10.0.2.100:9300}{cdfhilmrstw}]}, term: 1, version: 1, reason: Publication{term=1, version=1}
[2022-11-09T20:37:55,439][INFO ][o.e.r.s.FileSettingsService] [Prod] starting file settings watcher ...
[2022-11-09T20:37:55,447][INFO ][o.e.r.s.FileSettingsService] [Prod] file settings service up and running [tid=51]
[2022-11-09T20:37:55,456][INFO ][o.e.h.AbstractHttpServerTransport] [Prod] publish_address {10.0.2.100:9200}, bound_addresses {[::]:9200}
[2022-11-09T20:37:55,457][INFO ][o.e.n.Node ] [Prod] started {Prod}{CvroQFRsTxKqyWfwcOJGag}{oYVn8g0ZS2CFxHKYosdd_Q}{Prod}{10.0.2.100}{10.0.2.100:9300}{cdfhilmrstw}{ml.max_jvm_size=1958739968, ml.allocated_processors_double=4.0, xpack.installed=true, ml.machine_memory=3917570048, ml.allocated_processors=4}
[2022-11-09T20:37:55,510][INFO ][o.e.g.GatewayService ] [Prod] recovered [0] indices into cluster_state
[2022-11-09T20:37:55,691][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding index template [.watch-history-16] for index patterns [.watcher-history-16*]
[2022-11-09T20:37:55,700][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding index template [ilm-history] for index patterns [ilm-history-5*]
[2022-11-09T20:37:55,707][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding index template [.slm-history] for index patterns [.slm-history-5*]
[2022-11-09T20:37:55,718][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding component template [.deprecation-indexing-mappings]
[2022-11-09T20:37:55,723][INFO ][o.e.c.m.MetadataIndexTemplateService] [Prod] adding component template [synthetics-mappings]
...
[2022-11-09T20:37:56,392][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [Prod] adding index lifecycle policy [.fleet-actions-results-ilm-policy]
[2022-11-09T20:37:56,510][INFO ][o.e.l.LicenseService ] [Prod] license [4b5d6876-1402-470e-96fd-f9ff8211cca7] mode [basic] - valid
[2022-11-09T20:37:56,511][INFO ][o.e.x.s.a.Realms ] [Prod] license mode is [basic], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
[2022-11-09T20:37:56,538][INFO ][o.e.h.n.s.HealthNodeTaskExecutor] [Prod] Node [{Prod}{CvroQFRsTxKqyWfwcOJGag}] is selected as the current health node.
# and connection test is fine:
curl --cacert http_ca.crt -u elastic https://127.0.0.1:9200
Enter host password for user 'elastic':
{
"name" : "Prod",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "........",
"version" : {
"number" : "8.5.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "c94b4700cda13820dad5aa74fae6db185ca5c304",
"build_date" : "2022-10-24T16:54:16.433628434Z",
"build_snapshot" : false,
"lucene_version" : "9.4.1",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
Elasticsearch podman installation with bind-mount volumes (fails):
podman run --detach --name es850 --publish 9200:9200 \
  --volume=/etc/elasticsearch/elasticsearch.yml:/etc/elasticsearch/elasticsearch.yml:Z \
  --volume=/var/log/elasticsearch/elasticsearch.log:/var/log/elasticsearch/elasticsearch.log:Z \
  --volume=/app/elasticsearch/data:/app/elasticsearch/data:Z \
  --user=elasticadm localhost/elasticsearch_cust:1.4
podman logs es850
warning: ignoring JAVA_HOME=/usr/lib/jvm/java-openjdk; using bundled JDK
Aborting auto configuration because the node keystore contains password settings already
[2022-11-09T15:56:27,292][INFO ][o.e.n.Node ] [0d8414e9b51b] version[8.5.0], pid[76], build[rpm/c94b4700cda13820dad5aa74fae6db185ca5c304/2022-10-24T16:54:16.433628434Z], OS[Linux/4.18.0-348.7.1.el8_5.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/19/19+36-2238]
[2022-11-09T15:56:27,299][INFO ][o.e.n.Node ] [0d8414e9b51b] JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
[2022-11-09T15:56:27,300][INFO ][o.e.n.Node ] [0d8414e9b51b] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -Djava.security.manager=allow, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-10492222574682252504, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Xms1868m, -Xmx1868m, -XX:MaxDirectMemorySize=979369984, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.distribution.type=rpm, --module-path=/usr/share/elasticsearch/lib, --add-modules=jdk.net, -Djdk.module.main=org.elasticsearch.server]
[2022-11-09T15:56:29,369][INFO ][c.a.c.i.j.JacksonVersion ] [0d8414e9b51b] Package versions: jackson-annotations=2.13.2, jackson-core=2.13.2, jackson-databind=2.13.2.2, jackson-dataformat-xml=2.13.2, jackson-datatype-jsr310=2.13.2, azure-core=1.27.0, Troubleshooting version conflicts: https://aka.ms/azsdk/java/dependency/troubleshoot
[2022-11-09T15:56:30,863][INFO ][o.e.p.PluginsService ] [0d8414e9b51b] loaded module [aggs-matrix-stats]
.............
[2022-11-09T15:56:30,880][INFO ][o.e.p.PluginsService ] [0d8414e9b51b] loaded module [x-pack-watcher]
[2022-11-09T15:56:30,881][INFO ][o.e.p.PluginsService ] [0d8414e9b51b] no plugins loaded
[2022-11-09T15:56:33,720][WARN ][stderr ] [0d8414e9b51b] Nov 09, 2022 3:56:33 PM org.apache.lucene.store.MMapDirectory lookupProvider
[2022-11-09T15:56:33,721][WARN ][stderr ] [0d8414e9b51b] WARNING: You are running with Java 19. To make full use of MMapDirectory, please pass '--enable-preview' to the Java command line.
[2022-11-09T15:56:33,732][INFO ][o.e.e.NodeEnvironment ] [0d8414e9b51b] using [1] data paths, mounts [[/ (overlay)]], net usable_space [24gb], net total_space [27.8gb], types [overlay]
[2022-11-09T15:56:33,732][INFO ][o.e.e.NodeEnvironment ] [0d8414e9b51b] heap size [1.8gb], compressed ordinary object pointers [true]
[2022-11-09T15:56:33,740][INFO ][o.e.n.Node ] [0d8414e9b51b] node name [0d8414e9b51b], node ID [rMFgxntETo63opwgU7P9sg], cluster name [elasticsearch], roles [ml, data_hot, transform, data_content, data_warm, master, remote_cluster_client, data, data_cold, ingest, data_frozen]
**[2022-11-09T15:56:36,194][ERROR][o.e.b.Elasticsearch ] [0d8414e9b51b] fatal exception while booting Elasticsearch org.elasticsearch.ElasticsearchSecurityException: invalid configuration for xpack.security.transport.ssl - [xpack.security.transport.ssl.enabled] is not set, but the following settings have been configured in elasticsearch.yml : [xpack.security.transport.ssl.keystore.secure_password,xpack.security.transport.ssl.truststore.secure_password]**
at org.elasticsearch.xcore#8.5.0/org.elasticsearch.xpack.core.ssl.SSLService.validateServerConfiguration(SSLService.java:648)
at org.elasticsearch.xcore#8.5.0/org.elasticsearch.xpack.core.ssl.SSLService.loadSslConfigurations(SSLService.java:612)
at org.elasticsearch.xcore#8.5.0/org.elasticsearch.xpack.core.ssl.SSLService.<init>(SSLService.java:156)
at org.elasticsearch.xcore#8.5.0/org.elasticsearch.xpack.core.XPackPlugin.createSSLService(XPackPlugin.java:465)
at org.elasticsearch.xcore#8.5.0/org.elasticsearch.xpack.core.XPackPlugin.createComponents(XPackPlugin.java:314)
at org.elasticsearch.server#8.5.0/org.elasticsearch.node.Node.lambda$new$15(Node.java:704)
at org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.lambda$flatMap$0(PluginsService.java:252)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:273)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:722)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:575)
at java.base/java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:616)
at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:622)
at java.base/java.util.stream.ReferencePipeline.toList(ReferencePipeline.java:627)
at org.elasticsearch.server#8.5.0/org.elasticsearch.node.Node.<init>(Node.java:719)
at org.elasticsearch.server#8.5.0/org.elasticsearch.node.Node.<init>(Node.java:316)
at org.elasticsearch.server#8.5.0/org.elasticsearch.bootstrap.Elasticsearch$2.<init>(Elasticsearch.java:214)
at org.elasticsearch.server#8.5.0/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:214)
at org.elasticsearch.server#8.5.0/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/elasticsearch.log
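Since the failure only appears with the bind-mounts, a basic sanity check on the host is to look at the ownership and SELinux labels of the mounted paths (paths as above):
ls -lZ /etc/elasticsearch/elasticsearch.yml /var/log/elasticsearch/elasticsearch.log
ls -ldZ /app/elasticsearch/data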
# Configuration is the following (elasticsearch.yml):
node.name: Prod # Name is 'Prod' but it's not a true production server
path.data: /app/elasticsearch/data
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node
ingest.geoip.downloader.enabled: false
# Security:
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
enabled: true
keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
enabled: true
verification_mode: certificate
keystore.path: certs/transport.p12
truststore.path: certs/transport.p12
http.host: 0.0.0.0
#transport.host: 0.0.0.0
$ podman exec -it es850 bash
[elasticadm@8a9ceb50b3b4 /]$ /usr/share/elasticsearch/bin/elasticsearch-keystore list
warning: ignoring JAVA_HOME=/usr/lib/jvm/java-openjdk; using bundled JDK
autoconfiguration.password_hash
keystore.seed
xpack.security.http.ssl.keystore.secure_password
xpack.security.transport.ssl.keystore.secure_password
xpack.security.transport.ssl.truststore.secure_password
Any ideas or advice would be really appreciated, because I don't understand what's suddenly wrong with the xpack.security parameters and what the relationship with the podman bind-mount volumes is.
These base xpack.security settings seem correctly configured (it's the initial base configuration, with no modifications so far).
I'm building my first Yocto release.
All packages build fine and I can build my device trees. In fact I have a lot of .dtb files in my images folder.
This is my build configuration.
Build Configuration:
BB_VERSION = "1.36.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "ubuntu-18.04"
TARGET_SYS = "arm-poky-linux-gnueabi"
MACHINE = "pico-imx6ul-itl"
DISTRO = "fsl-imx-fb"
DISTRO_VERSION = "4.9.88-2.0.0"
TUNE_FEATURES = "arm armv7ve vfp neon callconvention-hard cortexa7"
TARGET_FPU = "hard"
meta
meta-poky = "HEAD:0ec241873367e18f5371a3ad9aca1e2801dcd4ee"
meta-oe
meta-multimedia = "HEAD:dacfa2b1920e285531bec55cd2f08743390aaf57"
meta-freescale = "HEAD:49ac225a38f6d84519798e3264f2e4d19b84f70a"
meta-freescale-3rdparty = "HEAD:1d6d5961dbf82624b28bb318b4950a64abc31d12"
meta-freescale-distro = "HEAD:0ec6d7e206705702b5b534611754de0787f92b72"
meta-bsp
meta-sdk = "HEAD:d65692ecb3a4136fc1cc137152634e8633ddb3c6"
meta-browser = "HEAD:d6f9aed41c73b75a97d71bff060b03a66ee087b1"
meta-gnome
meta-networking
meta-python
meta-filesystems = "HEAD:dacfa2b1920e285531bec55cd2f08743390aaf57"
meta-qt5 = "HEAD:32bb7d18a08d1c48873d7ab6332d4cc3815a4dff"
meta-edm-bsp-release = "added-wifi-drivers:10f5373fedd09c19ffb1a393272e3f3ed83b643a"
This is my machine configuration
#@TYPE: Machine
#@NAME: pico-imx6ul-itl
#@SOC: i.MX6UL
#@DESCRIPTION: Machine configuration for PICO-IMX6UL/ULL with QCA(Qualcomm)/BRCM(Broadcom) WLAN module
include conf/machine/include/imx-base.inc
include conf/machine/include/tune-cortexa7.inc
include conf/machine/include/imx6ul-common.inc
MACHINEOVERRIDES = "mx6:mx6ul:"
SOC_FAMILY = "mx6ul"
PREFERRED_PROVIDER_u-boot = "u-boot-edm"
PREFERRED_PROVIDER_u-boot_mx6ul = "u-boot-edm"
PREFERRED_PROVIDER_virtual/bootloader = "u-boot-edm"
PREFERRED_PROVIDER_virtual/bootloader_mx6ul = "u-boot-edm"
UBOOT_MAKE_TARGET = ""
UBOOT_SUFFIX = "img"
SPL_BINARY = "SPL"
UBOOT_MACHINE = "pico-imx6ul_spl_defconfig"
#UBOOT_MACHINE = "./pico-imx6ul_defconfig"
# Ensure uEnv.txt will be available at rootfs time
do_rootfs[depends] += "u-boot-uenv:do_deploy"
UENV_FILENAME = "uEnv.txt"
BOOT_SCRIPTS = "${UENV_FILENAME}:uEnv.txt"
PREFERRED_PROVIDER_virtual/kernel ?= "linux-tn-imx"
PREFERRED_PROVIDER_virtual/kernel_mx6ul = "linux-tn-imx"
# Add kernel modules
MACHINE_EXTRA_RRECOMMENDS += "\
kernel-module-qcacld-tn \
"
KERNEL_DEVICETREE = "imx6ul-pico-qca_dwarf.dtb imx6ul-pico-qca_hobbit.dtb \
imx6ul-pico-qca_nymph.dtb imx6ul-pico-qca_pi.dtb \
imx6ul-pico_dwarf.dtb imx6ul-pico_hobbit.dtb \
imx6ul-pico_nymph.dtb imx6ul-pico_pi.dtb \
imx6ull-pico-qca_dwarf.dtb imx6ull-pico-qca_hobbit.dtb \
imx6ull-pico-qca_nymph.dtb imx6ull-pico-qca_pi.dtb \
imx6ull-pico_dwarf.dtb imx6ull-pico_hobbit.dtb \
imx6ull-pico_nymph.dtb imx6ull-pico_pi.dtb"
KERNEL_IMAGETYPE = "zImage"
MACHINE_FEATURES += "bluetooth pci wifi touchscreen"
MACHINE_EXTRA_RRECOMMENDS += " \
broadcom-bluetooth \
openssh-sftp-server \
libsocketcan \
bash hostapd dnsmasq haveged create-ap iptables \
"
MACHINE_FIRMWARE_remove = "firmware-imx-brcm"
SERIAL_CONSOLE = "115200 ttymxc5"
MACHINE_FEATURES += " usbgadget usbhost "
At the moment I use dd to flash the content of this archive:
core-image-base-pico-imx6ul-itl.sdcard.bz2
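For reference, the dd flashing step looks roughly like this (the SD card device name is just a placeholder here):
bzcat core-image-base-pico-imx6ul-itl.sdcard.bz2 | sudo dd of=/dev/sdX bs=1M status=progress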
u-boot is using this device tree:
imx6ul-pico-qca_pi.dts
But I want this one:
imx6ul-pico_pi.dtb
Can you help me fix the device tree selection? I can't find a tutorial or documentation for this scenario.
UPDATE
Output of printenv from the U-Boot shell on the board:
U-Boot SPL 2017.03-tn-imx_v2017.03_4.9.88_2.0.0_ga-test+g2fb0ee6322 (Apr 09 2019 - 20:13:49)
Boot Device: MMC
Trying to boot from MMC1
Boot Device: MMC
reading u-boot.img
reading u-boot.img
U-Boot 2017.03-tn-imx_v2017.03_4.9.88_2.0.0_ga-test+g2fb0ee6322 (Apr 09 2019 - 20:13:49 +0200)
CPU: Freescale i.MX6UL rev1.0 528 MHz (running at 396 MHz)
CPU: Industrial temperature grade (-40C to 105C) at 24C
Reset cause: POR
Board: PICO-IMX6UL
Compatible baseboard: dwarf, hobbit, nymph, pi
I2C: ready
DRAM: 512 MiB
PMIC: PFUZE3000 DEV_ID=0x30 REV_ID=0x11
MMC: FSL_SDHC: 0
*** Warning - bad CRC, using default environment
No panel detected: default to AT070TN94
Display: AT070TN94 (800x480)
Video: 800x480x24
In: serial
Out: serial
Err: serial
switch to partitions #0, OK
mmc0(part 0) is current device
Net: , FEC1
Normal Boot
Hit any key to stop autoboot: 0
=> printenv
baseboard=pi
baudrate=115200
boot_fdt=try
bootcmd=mmc dev ${mmcdev}; if mmc rescan; then if run loadbootenv; then echo Loaded environment from ${bootenv};run importbootenv;fi;if test -n $uenvcmd; then echo Running uenvcmd ...;run uenvcmd;fi;if run loadbootscript; then run bootscript; fi;if run loadfit; then run fitboot; fi; if run loadimage; then run mmcboot; else echo WARN: Cannot load kernel from boot media; fi; else run netboot; fi
bootdelay=1
bootenv=uEnv.txt
bootscript=echo Running bootscript from mmc ...; source
console=ttymxc5
default_baseboard=pi
detectmem=if test ${memdet} = 512MB; then setenv memsize cma=128M; else setenv memsize cma=96M; fi
eth1addr=00:1f:7b:11:07:27
ethact=FEC1
fdt_addr=0x83000000
fdt_high=0xffffffff
fdtfile=undefined
fit_args=setenv bootargs console=${console},${baudrate} root=/dev/ram0 rootwait rw
fitboot=run fit_args; echo ${bootargs}; bootm 87880000#config#${som}-${form}_${baseboard};
form=pico
image=zImage
importbootenv=echo Importing environment from mmc ...; env import -t -r $loadaddr $filesize
initrd_high=0xffffffff
ip_dyn=no
loadaddr=0x80800000
loadbootenv=fatload mmc ${mmcdev} ${loadaddr} ${bootenv}
loadbootscript=fatload mmc ${mmcdev}:${mmcpart} ${loadaddr} ${script};
loadfdt=fatload mmc ${mmcdev}:${mmcpart} ${fdt_addr} ${fdtfile}
loadfit=fatload mmc ${mmcdev}:${mmcpart} 0x87880000 tnrescue.itb
loadimage=fatload mmc ${mmcdev}:${mmcpart} ${loadaddr} ${image}
mmcargs=setenv bootargs console=${console},${baudrate} ${memsize} root=${mmcroot}
mmcautodetect=yes
mmcboot=echo Booting from mmc ...; run detectmem; run mmcargs; echo baseboard is ${baseboard}; run setfdt; if test ${boot_fdt} = yes || test ${boot_fdt} = try; then if run loadfdt; then bootz ${loadaddr} - ${fdt_addr}; else if test ${boot_fdt} = try; then echo WARN: Cannot load the DT; echo fall back to load the default DT; setenv baseboard ${default_baseboard}; run setfdt; run loadfdt; bootz ${loadaddr} - ${fdt_addr}; else echo WARN: Cannot load the DT; fi; fi; else bootz; fi;
mmcdev=0
mmcpart=1
mmcroot=/dev/mmcblk0p2 rootwait rw
netargs=setenv bootargs console=${console},${baudrate} root=/dev/nfs ip=dhcp nfsroot=${serverip}:${nfsroot},v3,tcp
netboot=echo Booting from net ...; if test ${ip_dyn} = yes; then setenv get_cmd dhcp; else setenv get_cmd tftp; fi; run loadbootenv; run importbootenv; run setfdt; run netargs; ${get_cmd} ${loadaddr} ${image}; if test ${boot_fdt} = yes || test ${boot_fdt} = try; then if ${get_cmd} ${fdt_addr} ${fdtfile}; then bootz ${loadaddr} - ${fdt_addr}; else if test ${boot_fdt} = try; then bootz; else echo WARN: Cannot load the DT; fi; fi; else bootz; fi;
script=boot.scr
setfdt=if test ${wifi_module} = qca; then setenv fdtfile ${som}-${form}-${wifi_module}_${baseboard}.dtb; else setenv fdtfile ${som}-${form}_${baseboard}.dtb;fi
som=imx6ul
splashpos=m,m
wifi_module=qca
Environment size: 2920/8188 bytes
You need to modify the wifi_module variable in your U-Boot environment.
At the moment, when you boot, the setfdt command sets the device tree to ${som}-${form}-${wifi_module}_${baseboard}.dtb because ${wifi_module} = qca.
To unset this variable, type:
setenv wifi_module
After that, type:
run bootcmd
This lets you test the modification. After a reboot the environment will come back with wifi_module = qca, so to make the change persistent, save the environment:
setenv wifi_module
saveenv
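If you want to double-check before booting, re-run setfdt and inspect fdtfile; with wifi_module unset it should now resolve to ${som}-${form}_${baseboard}.dtb (imx6ul-pico_pi.dtb with the environment shown above):
run setfdt
printenv fdtfile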
See Section 4.12.4 of the Yocto Reference Manual.
To reiterate here, "Functionality is automatically enabled for any recipe that inherits the kernel class and sets the KERNEL_DEVICETREE variable".
For example, in my machine config for ZynqMP, which is ARM64 based, I set:
KERNEL_DEVICETREE = "xilinx/zynqmp-zcu102-ged.dtb"
For your example, I believe specifying the below should suffice in your machine configuration.
KERNEL_DEVICETREE = "imx6ul-pico_pi.dtb"
I'm using Vagrant on CentOS Linux to build Windows Server 2012 virtual machines; right now I have this Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
  config.vm.define "MACHINE" do |db|
    db.vm.box = "win2k12r2en"
    db.vm.network "public_network", bridge: "p1p1", ip: "130.103.97.40", netmask: "255.255.252.0"
    db.vm.provider "virtualbox" do |vb|
      vb.memory = 4096
      vb.cpus = 2
      vb.name = "MACHINE"
    end
    db.vm.provision :file, source: '/home/vagrant/ambientes/machine/shell/Install.ps1', destination: "/tmp/"
    db.vm.provision :file, source: '/home/vagrant/ambientes/machine/shell/Lib-General.ps1', destination: "/tmp"
    db.vm.provision :file, source: '/home/vagrant/ambientes/machine/shell/continue.bat', destination: "/tmp"
    db.vm.provision :file, source: '/home/vagrant/ambientes/machine/shell/PHP_ZEND.zip', destination: "/tmp"
  end
end
After running the script I got this error:
The following WinRM command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir / -force
Stdout from the command:
Stderr from the command:
#< CLIXML
<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><S S="Error">mkdir : The path is not of a legal form._x000D__x000A_</S><S S="Error">At line:1 char:40_x000D__x000A_</S><S S="Error">+ $ProgressPreference='SilentlyContinue';mkdir / -force_x000D__x000A_</S><S S="Error">+ ~~~~~~~~~~~~~~_x000D__x000A_</S><S S="Error"> + CategoryInfo : InvalidArgument: (C:\:String) [New-Item], Argume _x000D__x000A_</S><S S="Error"> ntException_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : CreateDirectoryArgumentError,Microsoft.PowerShel _x000D__x000A_</S><S S="Error"> l.Commands.NewItemCommand_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S></Objs>
What am I doing wrong? I'm new to Vagrant; I followed this guide. Thanks for your time.
I followed vahdet's guidance: I read github.com/hashicorp/vagrant/issues/7435 and applied some workarounds:
I replaced the file provisioner destination "/tmp" with "C:\tmp".
I uploaded the "PHP_ZEND.zip" file to a Nexus server and then downloaded it onto the VM with a PowerShell script provisioned by the Vagrant script.
I did the second one because the VM was forced to shut down and auto-destroy while the provisioner for the zip file was running.
That solved the problem.
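To re-run the provisioners after changing the destinations, something like the following should be enough (machine name as defined in the Vagrantfile above):
vagrant reload MACHINE --provision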
I recently installed Horizon on a Laravel project which is running on a Homestead Vagrant box.
My issue is that no jobs are being picked up by the queue workers. I have no supervisors:
vagrant@homestead:~/Code/project$ artisan horizon:list
+----------------+------+-------------+---------+
| Name | PID | Supervisors | Status |
+----------------+------+-------------+---------+
| homestead-D2dV | 7094 | None | running |
+----------------+------+-------------+---------+
vagrant@homestead:~/Code/project$ artisan horizon:supervisors
No supervisors are running.
Here is my supervisor (horizon.conf) configuration:
[program:horizon]
process_name=%(program_name)s
command=/usr/bin/php /home/vagrant/Code/project/artisan horizon
autostart=true
autorestart=true
user=vagrant
redirect_stderr=true
stdout_logfile=/home/vagrant/Code/project/storage/logs/horizon.log
When I bring this machine up, my logs and the web interface indicate "Horizon started successfully."
And my horizon (horizon.php) configuration:
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default', 'queue-1', 'queue-2', 'queue-3'],
            'balance' => 'auto',
            'processes' => env('HORIZON_PROCESSES', 10),
            'tries' => 3,
        ],
    ],
    'local' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default', 'queue-1', 'queue-2', 'queue-3'],
            'balance' => 'auto',
            'processes' => env('HORIZON_PROCESSES', 3),
            'tries' => 3,
        ],
    ],
],
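Worth noting: Horizon only starts the supervisors defined for the environment the application is currently running in; the active environment can be checked from the project root with:
php artisan env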
My supervisor appears to be active as well:
vagrant@homestead:~/Code/project$ sudo service supervisor status
● supervisor.service - Supervisor process control system for UNIX
Loaded: loaded (/lib/systemd/system/supervisor.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2018-03-28 13:03:08 UTC; 6h ago
Docs: http://supervisord.org
Process: 1591 ExecStop=/usr/bin/supervisorctl $OPTIONS shutdown (code=exited, status=0/SUCCESS)
Main PID: 2547 (supervisord)
Tasks: 2
Memory: 34.4M
CPU: 21.038s
CGroup: /system.slice/supervisor.service
├─2547 /usr/bin/python /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf
└─7094 /usr/bin/php /home/vagrant/Code/project/artisan horizon
Mar 28 18:32:13 homestead supervisord[2547]: 2018-03-28 18:32:13,225 INFO spawned: 'horizon' with pid 7057
Mar 28 18:32:15 homestead supervisord[2547]: 2018-03-28 18:32:15,055 INFO success: horizon entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Mar 28 18:32:16 homestead php[7057]: DIGEST-MD5 common mech free
Mar 28 18:32:16 homestead supervisord[2547]: 2018-03-28 18:32:16,693 INFO exited: horizon (exit status 0; expected)
Mar 28 18:32:17 homestead supervisord[2547]: 2018-03-28 18:32:17,706 INFO spawned: 'horizon' with pid 7072
Mar 28 18:32:19 homestead supervisord[2547]: 2018-03-28 18:32:19,584 INFO success: horizon entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Mar 28 18:32:26 homestead php[7072]: DIGEST-MD5 common mech free
Mar 28 18:32:26 homestead supervisord[2547]: 2018-03-28 18:32:26,206 INFO exited: horizon (exit status 0; expected)
Mar 28 18:32:27 homestead supervisord[2547]: 2018-03-28 18:32:27,210 INFO spawned: 'horizon' with pid 7094
Mar 28 18:32:29 homestead supervisord[2547]: 2018-03-28 18:32:29,052 INFO success: horizon entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Any thoughts or ideas as to why my supervisor(s) are not working?
Turns out my horizon.php configuration was wrong.
I changed this line:
'local' => [ ...
To match my APP_ENV environment variable, which I had set to development.
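After changing the environment key, the running Horizon master also needs to be restarted so it picks up the new configuration; with the supervisor setup above that is roughly:
php artisan horizon:terminate
sudo supervisorctl restart horizon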
I am new to Whirr and I'm trying to set up a Hadoop cluster on EC2 with Whirr. I have followed the tutorial on Cloudera: https://ccp.cloudera.com/display/CDHDOC/Whirr+Installation
Before installing Whirr, I installed Hadoop (0.20.2-cdh3u3), then installed Whirr (0.5.0-cdh3u3).
Here's my cluster config file:
whirr.cluster-name=large-cluster
whirr.instance-templates=1 hadoop-jobtracker+hadoop-namenode,1 hadoop-datanode+hadoop-tasktracker
whirr.provider=aws-ec2
whirr.identity=XXXXXXXXXXXXXXX
whirr.credential=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
whirr.private-key-file=${sys:user.home}/.ssh/id_rsa
whirr.public-key-file=${sys:user.home}/.ssh/id_rsa.pub
whirr.hadoop-install-function=install_cdh_hadoop
whirr.hadoop-configure-function=configure_cdh_hadoop
whirr.hardware-id=m1.large
whirr.image-id=us-east-1/ami-da0cf8b3
whirr.location-id=us-east-1
The cluster launch looks normal:
khiem@master ~ $ whirr launch-cluster --config large-hadoop.properties
Bootstrapping cluster
Configuring template
Starting 1 node(s) with roles [hadoop-datanode, hadoop-tasktracker]
Configuring template
Starting 1 node(s) with roles [hadoop-jobtracker, hadoop-namenode]
Nodes started: [[id=us-east-1/i-9aa01dfd, providerId=i-9aa01dfd, group=large-cluster, name=null, location=[id=us-east-1a, scope=ZONE, description=us-east-1a, parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true, description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml], state=RUNNING, loginPort=22, privateAddresses=[10.196.142.64], publicAddresses=[107.20.64.97], hardware=[id=m1.large, providerId=m1.large, name=null, processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null, type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true], [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false, isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc, durable=false, isBootDevice=false]], supportsImage=is64Bit()], loginUser=ubuntu, userMetadata={}]]
Nodes started: [[id=us-east-1/i-0aa31e6d, providerId=i-0aa31e6d, group=large-cluster, name=null, location=[id=us-east-1a, scope=ZONE, description=us-east-1a, parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true, description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml], state=RUNNING, loginPort=22, privateAddresses=[10.85.130.43], publicAddresses=[50.17.128.123], hardware=[id=m1.large, providerId=m1.large, name=null, processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null, type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true], [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false, isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc, durable=false, isBootDevice=false]], supportsImage=is64Bit()], loginUser=ubuntu, userMetadata={}]]
Authorizing firewall ingress to [Instance{roles=[hadoop-jobtracker, hadoop-namenode], publicIp=50.17.128.123, privateIp=10.85.130.43, id=us-east-1/i-0aa31e6d, nodeMetadata=[id=us-east-1/i-0aa31e6d, providerId=i-0aa31e6d, group=large-cluster, name=null, location=[id=us-east-1a, scope=ZONE, description=us-east-1a, parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true, description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml], state=RUNNING, loginPort=22, privateAddresses=[10.85.130.43], publicAddresses=[50.17.128.123], hardware=[id=m1.large, providerId=m1.large, name=null, processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null, type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true], [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false, isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc, durable=false, isBootDevice=false]], supportsImage=is64Bit()], loginUser=ubuntu, userMetadata={}]}] on ports [50070, 50030] for [116.96.138.41/32]
Authorizing firewall ingress to [Instance{roles=[hadoop-jobtracker, hadoop-namenode], publicIp=50.17.128.123, privateIp=10.85.130.43, id=us-east-1/i-0aa31e6d, nodeMetadata=[id=us-east-1/i-0aa31e6d, providerId=i-0aa31e6d, group=large-cluster, name=null, location=[id=us-east-1a, scope=ZONE, description=us-east-1a, parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true, description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml], state=RUNNING, loginPort=22, privateAddresses=[10.85.130.43], publicAddresses=[50.17.128.123], hardware=[id=m1.large, providerId=m1.large, name=null, processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null, type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true], [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false, isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc, durable=false, isBootDevice=false]], supportsImage=is64Bit()], loginUser=ubuntu, userMetadata={}]}] on ports [8020, 8021] for [50.17.128.123/32]
Running configuration script
Configuration script run completed
Running configuration script
Configuration script run completed
Completed configuration of large-cluster
Namenode web UI available at http://ec2-50-17-128-123.compute-1.amazonaws.com:50070
Jobtracker web UI available at http://ec2-50-17-128-123.compute-1.amazonaws.com:50030
Wrote Hadoop site file /home/khiem/.whirr/large-cluster/hadoop-site.xml
Wrote Hadoop proxy script /home/khiem/.whirr/large-cluster/hadoop-proxy.sh
Wrote instances file /home/khiem/.whirr/large-cluster/instances
Started cluster of 2 instances
Cluster{instances=[Instance{roles=[hadoop-datanode, hadoop-tasktracker], publicIp=107.20.64.97, privateIp=10.196.142.64, id=us-east-1/i-9aa01dfd, nodeMetadata=[id=us-east-1/i-9aa01dfd, providerId=i-9aa01dfd, group=large-cluster, name=null, location=[id=us-east-1a, scope=ZONE, description=us-east-1a, parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true, description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml], state=RUNNING, loginPort=22, privateAddresses=[10.196.142.64], publicAddresses=[107.20.64.97], hardware=[id=m1.large, providerId=m1.large, name=null, processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null, type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true], [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false, isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc, durable=false, isBootDevice=false]], supportsImage=is64Bit()], loginUser=ubuntu, userMetadata={}]}, Instance{roles=[hadoop-jobtracker, hadoop-namenode], publicIp=50.17.128.123, privateIp=10.85.130.43, id=us-east-1/i-0aa31e6d, nodeMetadata=[id=us-east-1/i-0aa31e6d, providerId=i-0aa31e6d, group=large-cluster, name=null, location=[id=us-east-1a, scope=ZONE, description=us-east-1a, parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true, description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml], state=RUNNING, loginPort=22, privateAddresses=[10.85.130.43], publicAddresses=[50.17.128.123], hardware=[id=m1.large, providerId=m1.large, name=null, processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null, type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true], [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false, isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc, durable=false, isBootDevice=false]], supportsImage=is64Bit()], loginUser=ubuntu, userMetadata={}]}], configuration={hadoop.job.ugi=root,root, mapred.job.tracker=ec2-50-17-128-123.compute-1.amazonaws.com:8021, hadoop.socks.server=localhost:6666, fs.s3n.awsAccessKeyId=AKIAIGXAURLAB7CQE77A, fs.s3.awsSecretAccessKey=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, fs.s3.awsAccessKeyId=AZIAIGXIOPLAB7CQE77A, hadoop.rpc.socket.factory.class.default=org.apache.hadoop.net.SocksSocketFactory, fs.default.name=hdfs://ec2-50-17-128-123.compute-1.amazonaws.com:8020/, fs.s3n.awsSecretAccessKey=dWDRq2z0EQegdPrbbL8Dab3eCu98O32r3gOrIbOK}}
I've also started the proxy and updated the local Hadoop configuration following the Cloudera tutorial, but when I try to test HDFS with hadoop fs -ls /
the terminal prints connection errors:
12/04/12 11:54:43 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
12/04/12 11:54:43 INFO security.UserGroupInformation: JAAS Configuration already set up for Hadoop, not re-installing.
12/04/12 11:54:45 INFO ipc.Client: Retrying connect to server: ec2-50-17-128-123.compute-1.amazonaws.com/50.17.128.123:8020. Already tried 0 time(s).
12/04/12 11:54:46 INFO ipc.Client: Retrying connect to server: ec2-50-17-128-123.compute-1.amazonaws.com/50.17.128.123:8020. Already tried 1 time(s).
12/04/12 11:54:48 INFO ipc.Client: Retrying connect to server: ec2-50-17-128-123.compute-1.amazonaws.com/50.17.128.123:8020. Already tried 2 time(s).
12/04/12 11:54:49 INFO ipc.Client: Retrying connect to server: ec2-50-17-128-123.compute-1.amazonaws.com/50.17.128.123:8020. Already tried 3 time(s).
In the proxy terminal
Running proxy to Hadoop cluster at
ec2-50-17-128-123.compute-1.amazonaws.com. Use Ctrl-c to quit.
Warning: Permanently added 'ec2-50-17-128-123.compute-1.amazonaws.com,50.17.128.123' (RSA) to the list of known hosts.
channel 2: open failed: connect failed: Connection refused
channel 2: open failed: connect failed: Connection refused
channel 2: open failed: connect failed: Connection refused
channel 2: open failed: connect failed: Connection refused
channel 2: open failed: connect failed: Connection refused
The namenode web UI (port 50070) is also unavailable. I can ssh to the namenode, but inside the namenode it looks like there is no Hadoop or Java installation at all. Isn't that strange?
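For reference, this is roughly what I checked on the namenode after ssh-ing in (CDH normally installs under /usr/lib/hadoop*, so these are just sanity checks):
which java
which hadoop
ls /usr/lib/hadoop* /etc/hadoop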