We are running Cassandra 2.2.3.
We have a column family using DateTieredCompactionStrategy, defined as follows:
CREATE TABLE test (
num_id text,
position_time timestamp,
acc int,
coordinate text,
device_no text,
PRIMARY KEY (num_id, position_time, coordinate)
) WITH CLUSTERING ORDER BY (position_time DESC, coordinate ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = 'table for gps points from car gps source.'
AND compaction = {'timestamp_resolution': 'MILLISECONDS', 'max_sstable_age_days': '8', 'base_time_seconds': '3600', 'class': 'org.apache.cassandra.db.compaction.DateTieredCompactionStrategy'}
AND compression = {'chunk_length_kb': '64', 'crc_check_chance': '1.0', 'sstable_compression': 'org.apache.cassandra.io.compress.SnappyCompressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 86400
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
We have steady write traffic that keeps inserting into this table.
Cassandra generates many SSTables, around 2,000 in total.
For example,
-rw-r--r-- 1 cassandra cassandra 86M Jan 20 02:59 la-11110-big-Data.db
-rw-r--r-- 1 cassandra cassandra 111M Jan 20 03:11 la-11124-big-Data.db
-rw-r--r-- 1 cassandra cassandra 176M Jan 20 03:12 la-11125-big-Data.db
-rw-r--r-- 1 cassandra cassandra 104M Jan 20 03:14 la-11130-big-Data.db
-rw-r--r-- 1 cassandra cassandra 102M Jan 20 03:26 la-11144-big-Data.db
-rw-r--r-- 1 cassandra cassandra 172M Jan 20 03:26 la-11145-big-Data.db
-rw-r--r-- 1 cassandra cassandra 107M Jan 20 03:30 la-11149-big-Data.db
-rw-r--r-- 1 cassandra cassandra 96M Jan 20 03:41 la-11163-big-Data.db
-rw-r--r-- 1 cassandra cassandra 176M Jan 20 03:41 la-11164-big-Data.db
-rw-r--r-- 1 cassandra cassandra 97M Jan 20 03:45 la-11169-big-Data.db
-rw-r--r-- 1 cassandra cassandra 82M Jan 20 03:57 la-11183-big-Data.db
-rw-r--r-- 1 cassandra cassandra 194M Jan 20 03:58 la-11184-big-Data.db
-rw-r--r-- 1 cassandra cassandra 28M Jan 20 03:59 la-11187-big-Data.db
-rw-r--r-- 1 cassandra cassandra 90M Jan 20 04:00 la-11188-big-Data.db
My first question: is it normal to have so many SSTables (around 2,000)?
The other thing is that we are experiencing a ReadTimeoutException on selection queries.
The selection query filters on the partition key num_id and the clustering column position_time (a timestamp).
The read timeout is set to 10 seconds.
So, the other question: is the ReadTimeoutException caused by the many SSTables or by wide rows? How can we avoid this exception?
The problem is 'timestamp_resolution': 'MILLISECONDS'. Unless you explicitly supply client-side write timestamps in milliseconds, Cassandra records write timestamps in microseconds, so telling DTCS to interpret them as milliseconds makes it misjudge the age of your data, and the strategy never tiers and compacts the SSTables as intended. I've filed https://issues.apache.org/jira/browse/CASSANDRA-11041 to improve the documentation about this parameter.
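Assuming you are not supplying client-side write timestamps in milliseconds yourself, a minimal sketch of the fix is to put the resolution back to its default:
ALTER TABLE test WITH compaction = {
    'class': 'org.apache.cassandra.db.compaction.DateTieredCompactionStrategy',
    'timestamp_resolution': 'MICROSECONDS',
    'max_sstable_age_days': '8',
    'base_time_seconds': '3600'
};
After that, compaction should be able to group the existing SSTables by age correctly.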
Is it normal to have so many SSTables (around 2,000)?
No, it's not normal. I think that in your case, compaction cannot keep up with the ingestion rate. What kind of storage do you have for the Cassandra server? Spinning disk? SSD? Shared storage?
Is the ReadTimeoutException caused by the many SSTables or by wide rows?
It can be both, but in your case I'm pretty sure it's related to the huge number of SSTables.
How can we avoid this exception?
Check that your disk I/O can keep up. Use the dstat and iostat Linux tools to monitor I/O.
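For example (device names vary; watch the utilization and await columns for the disk that holds your Cassandra data directory):
iostat -x -d 5     # extended per-device statistics every 5 seconds
dstat --disk-util  # per-disk utilization percentage
If the disk sits near 100% utilization while compactions are pending, the storage simply cannot keep up with the write rate.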
When using Godot 3.2.2 stable (all templates updated) to export for OSX (from High Sierra) and Windows (VMware with Windows 10), many resources are not found at runtime.
While testing in the IDE, everything runs perfectly.
I have already changed file names (avoiding spaces and any characters other than alphanumerics and '_'), deleted everything from the '.import' folder and re-imported all files, and even changed my code to avoid loading assets 'on the fly', so that all resources are properly referenced in the resulting code.
The files are in their original folders, their '.import' files are there too, and they map to existing files in the '.import' folder.
I was also able to check the '.pck' file, and the '.wav', '.ogg' and '.png' files are there.
The game will prompt messages like:
ERROR: _load: No loader found for resource: res://sounds//Starting_Lights.ogg.
At: core/io/resource_loader.cpp:285.
ERROR: _load: No loader found for resource: res://sounds//Testing.wav.
At: core/io/resource_loader.cpp:285.
ERROR: _load: No loader found for resource: res://sprites//Backlash_Pic.png.
At: core/io/resource_loader.cpp:285.
ERROR: _load: No loader found for resource: res://sprites//Backlash_Grand_Prix.png.
At: core/io/resource_loader.cpp:285.
The '.import' file for one of the 'not found' resources contains:
[remap]
importer="texture"
type="StreamTexture"
path="res://.import/Backlash_Grand_Prix.png-ad663db21f8bfbe75b0464e994ebbe2f.stex"
metadata={
"vram_texture": false
}
[deps]
source_file="res://sprites/Backlash_Grand_Prix.png"
dest_files=[ "res://.import/Backlash_Grand_Prix.png-ad663db21f8bfbe75b0464e994ebbe2f.stex" ]
and all the indicated files are there:
AnJo888i7:sprites AnJo888$ pwd
/Users/AnJo888/Desktop/Godot/project_mr/sprites
AnJo888i7:sprites AnJo888$ ls -l Back*
-rw-r--r--@ 1 AnJo888 staff 55120 9 Jun 21:31 Backlash_Grand_Prix.png
-rw-r--r--@ 1 AnJo888 staff 693 7 Jul 18:36 Backlash_Grand_Prix.png.import
-rw-r--r--@ 1 AnJo888 staff 255514 29 Jun 16:40 Backlash_Pic.png
-rw-r--r-- 1 AnJo888 staff 672 7 Jul 18:36 Backlash_Pic.png.import
AnJo888i7:sprites AnJo888$ ls -l ../.import/Back*
-rw-r--r-- 1 AnJo888 staff 91 7 Jul 18:35 ../.import/Backlash.obj-1faf80b2c76bbdff34635db74f883c59.md5
-rw-r--r-- 1 AnJo888 staff 879958 7 Jul 18:35 ../.import/Backlash.obj-1faf80b2c76bbdff34635db74f883c59.mesh
-rw-r--r-- 1 AnJo888 staff 91 7 Jul 18:35 ../.import/BacklashFF.obj-1f7907e7c14594be339288bdbcc49d13.md5
-rw-r--r-- 1 AnJo888 staff 1134886 7 Jul 18:35 ../.import/BacklashFF.obj-1f7907e7c14594be339288bdbcc49d13.mesh
-rw-r--r-- 1 AnJo888 staff 91 7 Jul 18:36 ../.import/Backlash_Grand_Prix.png-ad663db21f8bfbe75b0464e994ebbe2f.md5
-rw-r--r-- 1 AnJo888 staff 55358 7 Jul 18:36 ../.import/Backlash_Grand_Prix.png-ad663db21f8bfbe75b0464e994ebbe2f.stex
-rw-r--r-- 1 AnJo888 staff 91 7 Jul 18:36 ../.import/Backlash_Pic.png-802dae49352de96e7456539e639a1c34.md5
-rw-r--r-- 1 AnJo888 staff 268132 7 Jul 18:36 ../.import/Backlash_Pic.png-802dae49352de96e7456539e639a1c34.stex
AnJo888i7:sprites AnJo888$
So... although everything seems to be in place, the game will not play music/sounds (some sounds are played, and I changed the loading code for the others to make everything as equal as possible, without success; all sounds are loaded by a couple of singletons) and will not show some textures (mainly assets loaded during game execution).
These sounds load and play:
extends AudioStreamPlayer

var audioTeamsFiles = ["res://sounds/Team_Braillewalk.ogg",
                       "res://sounds/Team_Candy_Cane.ogg",
                       ...
                       "res://sounds/Team_Cash_is_King.ogg",
                       "res://sounds/Team_Watermelon.ogg"
                      ]
var audioTeamName
var names = Array()
var volSpeech

func _ready() -> void:
    volSpeech = get_node("/root/Globals").volSpeech
    for i in range(audioTeamsFiles.size()):
        audioTeamName = AudioStreamPlayer2D.new()
        audioTeamName.stream = load(audioTeamsFiles[i])
        audioTeamName.volume_db = volSpeech
        names.append(audioTeamName)
        add_child(names[i])

func say_team_name(team):
    names[team].play()

func shut_team_name(team):
    names[team].stop()

func set_volume():
    volSpeech = get_node("/root/Globals").volSpeech
    for i in range(audioTeamsFiles.size()):
        names[i].volume_db = volSpeech
These will not load:
extends AudioStreamPlayer

var audioSoundFiles = ["res://sounds/Live_the_Life.ogg",
                       "res://sounds//Love_the_Sound.ogg",
                       "res://sounds//Love_this_Song.ogg",
                       ...
                       "res://sounds//Vuvuzelas.ogg"
                      ]
var audioSound
var sounds = Array()
var volEffects
var volMusic
var volSpeech
onready var globals

func _ready() -> void:
    globals = get_node("/root/Globals")
    for i in range(audioSoundFiles.size()):
        audioSound = AudioStreamPlayer2D.new()
        audioSound.stream = load(audioSoundFiles[i])
        sounds.append(audioSound)
        add_child(sounds[i])
    set_volume()
    play_sound(0)

func play_sound(sound):
    sounds[sound].play()

func quiet_sound(sound):
    sounds[sound].stop()

func set_volume():
    volEffects = globals.volEffects
    volMusic = globals.volMusic
    volSpeech = globals.volSpeech
    for i in range(audioSoundFiles.size()):
        if i == 0:
            sounds[i].volume_db = volMusic
        elif i < 6:
            sounds[i].volume_db = volSpeech
        else:
            sounds[i].volume_db = volEffects
I even included all the extensions available in the export feature and pointed to the sprites and sounds folders to be included (I used the triple slash I saw in another reference to Godot's exporting 'issues').
[preset.0]
name="Mac OSX"
platform="Mac OSX"
runnable=true
custom_features=""
export_filter="all_resources"
include_filter="res:///sounds/*, res:///sprites/*"
exclude_filter=""
export_path="./AGC.dmg"
patch_list=PoolStringArray( )
script_export_mode=1
script_encryption_key=""
[preset.0.options]
custom_template/debug=""
custom_template/release=""
application/name="Absolutely Goode Championship"
application/info="Made with Godot Engine"
application/icon="res://AGC_Icon_256.png"
application/identifier="com.AGC.game"
application/signature=""
application/short_version="1.0"
application/version="1.0"
application/copyright=""
display/high_res=false
privacy/camera_usage_description=""
privacy/microphone_usage_description=""
codesign/enable=false
codesign/identity=""
codesign/timestamp=true
codesign/hardened_runtime=true
codesign/entitlements=""
codesign/custom_options=PoolStringArray( )
texture_format/s3tc=true
texture_format/etc=true
texture_format/etc2=true
It would be great if somebody could help me figure out what I'm missing here...
Btw, if I copy the sprites and sounds folders next to the '.exe' on Windows, everything works fine, and I was willing to use the same fix for the OSX version (regardless of the duplicated files), but not even copying those folders into the app package worked.
Thanks in advance for all answers.
So... after realizing that some of my paths were wrong, I revised all of them and, after correcting the wrong ones, the exported game works OK.
The issue is that Godot's IDE is quite forgiving when it comes to finding files, even if we make mistakes like using double slashes (in places other than 'res://') when pointing to resources.
Not trying to lessen the programmer's responsibility to get things done right, but it would've been better, IMHO, if the IDE had punched me in the face earlier, saying: "Your F-ing files are not available, in the F-ing folders you F-ing said they were."... or something like that.
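For what it's worth, you can throw that punch yourself with a startup self-check. A minimal sketch (assuming, as in the scripts above, that the resource paths live in arrays; _validate_paths is a hypothetical helper you would call from _ready() in each singleton):

func _validate_paths(paths: Array) -> void:
    # Fail loudly instead of skipping assets silently.
    for p in paths:
        if not ResourceLoader.exists(p):
            push_error("Missing resource: " + p)

I haven't verified whether ResourceLoader.exists() tolerates double slashes in the editor the way load() does, but in an exported build this surfaces every bad path in one place at startup.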
Anyway... as the Lego Movie stated... Everything is Awesome...
I'm trying to join a CockroachDB node to a cluster.
I created the first cluster, then tried to join a second node to the first node, but the second node created a new cluster of its own, as shown below.
Does anyone know which of my steps below is wrong? Any suggestions are welcome.
I started the first node as follows:
cockroach start --insecure --advertise-host=163.172.156.111
* Check out how to secure your cluster: https://www.cockroachlabs.com/docs/v19.1/secure-a-cluster.html
*
CockroachDB node starting at 2019-05-11 01:11:15.45522036 +0000 UTC (took 2.5s)
build: CCL v19.1.0 @ 2019/04/29 18:36:40 (go1.11.6)
webui: http://163.172.156.111:8080
sql: postgresql://root@163.172.156.111:26257?sslmode=disable
client flags: cockroach <client cmd> --host=163.172.156.111:26257 --insecure
logs: /home/ueda/cockroach-data/logs
temp dir: /home/ueda/cockroach-data/cockroach-temp449555924
external I/O path: /home/ueda/cockroach-data/extern
store[0]: path=/home/ueda/cockroach-data
status: initialized new cluster
clusterID: 3e797faa-59a1-4b0d-83b5-36143ddbdd69
nodeID: 1
Then I started the second node to join 163.172.156.111, but it couldn't join:
cockroach start --insecure --advertise-addr=128.199.127.164 --join=163.172.156.111:26257
CockroachDB node starting at 2019-05-11 01:21:14.533097432 +0000 UTC (took 0.8s)
build: CCL v19.1.0 @ 2019/04/29 18:36:40 (go1.11.6)
webui: http://128.199.127.164:8080
sql: postgresql://root@128.199.127.164:26257?sslmode=disable
client flags: cockroach <client cmd> --host=128.199.127.164:26257 --insecure
logs: /home/ueda/cockroach-data/logs
temp dir: /home/ueda/cockroach-data/cockroach-temp067740997
external I/O path: /home/ueda/cockroach-data/extern
store[0]: path=/home/ueda/cockroach-data
status: restarted pre-existing node
clusterID: a14e89a7-792d-44d3-89af-7037442eacbc
nodeID: 1
The cockroach.log of the joining node shows a gossip error:
cat cockroach-data/logs/cockroach.log
I190511 01:21:13.762309 1 util/log/clog.go:1199 [config] file created at: 2019/05/11 01:21:13
I190511 01:21:13.762309 1 util/log/clog.go:1199 [config] running on machine: amfortas
I190511 01:21:13.762309 1 util/log/clog.go:1199 [config] binary: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6)
I190511 01:21:13.762309 1 util/log/clog.go:1199 [config] arguments: [cockroach start --insecure --advertise-addr=128.199.127.164 --join=163.172.156.111:26257]
I190511 01:21:13.762309 1 util/log/clog.go:1199 line format: [IWEF]yymmdd hh:mm:ss.uuuuuu goid file:line msg utf8=✓
I190511 01:21:13.762307 1 cli/start.go:1033 logging to directory /home/ueda/cockroach-data/logs
W190511 01:21:13.763373 1 cli/start.go:1068 RUNNING IN INSECURE MODE!
- Your cluster is open for any client that can access <all your IP addresses>.
- Any user, even root, can log in without providing a password.
- Any user, connecting as root, can read or write any data in your cluster.
- There is no network encryption nor authentication, and thus no confidentiality.
Check out how to secure your cluster: https://www.cockroachlabs.com/docs/v19.1/secure-a-cluster.html
I190511 01:21:13.763675 1 server/status/recorder.go:610 available memory from cgroups (8.0 EiB) exceeds system memory 992 MiB, using system memory
W190511 01:21:13.763752 1 cli/start.go:944 Using the default setting for --cache (128 MiB).
A significantly larger value is usually needed for good performance.
If you have a dedicated server a reasonable setting is --cache=.25 (248 MiB).
I190511 01:21:13.764011 1 server/status/recorder.go:610 available memory from cgroups (8.0 EiB) exceeds system memory 992 MiB, using system memory
W190511 01:21:13.764047 1 cli/start.go:957 Using the default setting for --max-sql-memory (128 MiB).
A significantly larger value is usually needed in production.
If you have a dedicated server a reasonable setting is --max-sql-memory=.25 (248 MiB).
I190511 01:21:13.764239 1 server/status/recorder.go:610 available memory from cgroups (8.0 EiB) exceeds system memory 992 MiB, using system memory
I190511 01:21:13.764272 1 cli/start.go:1082 CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6)
I190511 01:21:13.866977 1 server/status/recorder.go:610 available memory from cgroups (8.0 EiB) exceeds system memory 992 MiB, using system memory
I190511 01:21:13.867002 1 server/config.go:386 system total memory: 992 MiB
I190511 01:21:13.867063 1 server/config.go:388 server configuration:
max offset 500000000
cache size 128 MiB
SQL memory pool size 128 MiB
scan interval 10m0s
scan min idle time 10ms
scan max idle time 1s
event log enabled true
I190511 01:21:13.867098 1 cli/start.go:929 process identity: uid 1000 euid 1000 gid 1000 egid 1000
I190511 01:21:13.867115 1 cli/start.go:554 starting cockroach node
I190511 01:21:13.868242 21 storage/engine/rocksdb.go:613 opening rocksdb instance at "/home/ueda/cockroach-data/cockroach-temp067740997"
I190511 01:21:13.894320 21 server/server.go:876 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I190511 01:21:13.894813 21 storage/engine/rocksdb.go:613 opening rocksdb instance at "/home/ueda/cockroach-data"
W190511 01:21:13.896301 21 storage/engine/rocksdb.go:127 [rocksdb] [/go/src/github.com/cockroachdb/cockroach/c-deps/rocksdb/db/version_set.cc:2566] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
W190511 01:21:13.905666 21 storage/engine/rocksdb.go:127 [rocksdb] [/go/src/github.com/cockroachdb/cockroach/c-deps/rocksdb/db/version_set.cc:2566] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
I190511 01:21:13.911380 21 server/config.go:494 [n?] 1 storage engine initialized
I190511 01:21:13.911417 21 server/config.go:497 [n?] RocksDB cache size: 128 MiB
I190511 01:21:13.911427 21 server/config.go:497 [n?] store 0: RocksDB, max size 0 B, max open file limit 10000
W190511 01:21:13.912459 21 gossip/gossip.go:1496 [n?] no incoming or outgoing connections
I190511 01:21:13.913206 21 server/server.go:926 [n?] Sleeping till wall time 1557537673913178595 to catches up to 1557537674394265598 to ensure monotonicity. Delta: 481.087003ms
I190511 01:21:14.251655 65 vendor/github.com/cockroachdb/circuitbreaker/circuitbreaker.go:322 [n?] circuitbreaker: gossip [::]:26257->163.172.156.111:26257 tripped: initial connection heartbeat failed: rpc error: code = Unknown desc = client cluster ID "a14e89a7-792d-44d3-89af-7037442eacbc" doesn't match server cluster ID "3e797faa-59a1-4b0d-83b5-36143ddbdd69"
I190511 01:21:14.251695 65 vendor/github.com/cockroachdb/circuitbreaker/circuitbreaker.go:447 [n?] circuitbreaker: gossip [::]:26257->163.172.156.111:26257 event: BreakerTripped
W190511 01:21:14.251763 65 gossip/client.go:122 [n?] failed to start gossip client to 163.172.156.111:26257: initial connection heartbeat failed: rpc error: code = Unknown desc = client cluster ID "a14e89a7-792d-44d3-89af-7037442eacbc" doesn't match server cluster ID "3e797faa-59a1-4b0d-83b5-36143ddbdd69"
I190511 01:21:14.395848 21 gossip/gossip.go:392 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"128.199.127.164:26257" > attrs:<> locality:<> ServerVersion:<major_val:19 minor_val:1 patch:0 unstable:0 > build_tag:"v19.1.0" started_at:1557537674395557548
W190511 01:21:14.458176 21 storage/replica_range_lease.go:506 can't determine lease status due to node liveness error: node not in the liveness table
I190511 01:21:14.458465 21 server/node.go:461 [n1] initialized store [n1,s1]: disk (capacity=24 GiB, available=18 GiB, used=2.2 MiB, logicalBytes=41 MiB), ranges=20, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=6467.00 p90=26940.00 pMax=43017435.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I190511 01:21:14.458775 21 storage/stores.go:244 [n1] read 0 node addresses from persistent storage
I190511 01:21:14.459095 21 server/node.go:699 [n1] connecting to gossip network to verify cluster ID...
W190511 01:21:14.469842 96 storage/store.go:1525 [n1,s1,r6/1:/Table/{SystemCon…-11}] could not gossip system config: [NotLeaseHolderError] r6: replica (n1,s1):1 not lease holder; lease holder unknown
I190511 01:21:14.474785 21 server/node.go:719 [n1] node connected via gossip and verified as part of cluster "a14e89a7-792d-44d3-89af-7037442eacbc"
I190511 01:21:14.475033 21 server/node.go:542 [n1] node=1: started with [<no-attributes>=/home/ueda/cockroach-data] engine(s) and attributes []
I190511 01:21:14.475393 21 server/status/recorder.go:610 [n1] available memory from cgroups (8.0 EiB) exceeds system memory 992 MiB, using system memory
I190511 01:21:14.475514 21 server/server.go:1582 [n1] starting http server at [::]:8080 (use: 128.199.127.164:8080)
I190511 01:21:14.475572 21 server/server.go:1584 [n1] starting grpc/postgres server at [::]:26257
I190511 01:21:14.475605 21 server/server.go:1585 [n1] advertising CockroachDB node at 128.199.127.164:26257
W190511 01:21:14.475655 21 jobs/registry.go:341 [n1] unable to get node liveness: node not in the liveness table
I190511 01:21:14.532949 21 server/server.go:1650 [n1] done ensuring all necessary migrations have run
I190511 01:21:14.533020 21 server/server.go:1653 [n1] serving sql connections
I190511 01:21:14.533209 21 cli/start.go:689 [config] clusterID: a14e89a7-792d-44d3-89af-7037442eacbc
I190511 01:21:14.533257 21 cli/start.go:697 node startup completed:
CockroachDB node starting at 2019-05-11 01:21:14.533097432 +0000 UTC (took 0.8s)
build: CCL v19.1.0 @ 2019/04/29 18:36:40 (go1.11.6)
webui: http://128.199.127.164:8080
sql: postgresql://root@128.199.127.164:26257?sslmode=disable
client flags: cockroach <client cmd> --host=128.199.127.164:26257 --insecure
logs: /home/ueda/cockroach-data/logs
temp dir: /home/ueda/cockroach-data/cockroach-temp067740997
external I/O path: /home/ueda/cockroach-data/extern
store[0]: path=/home/ueda/cockroach-data
status: restarted pre-existing node
clusterID: a14e89a7-792d-44d3-89af-7037442eacbc
nodeID: 1
I190511 01:21:14.541205 146 server/server_update.go:67 [n1] no need to upgrade, cluster already at the newest version
I190511 01:21:14.555557 149 sql/event_log.go:135 [n1] Event: "node_restart", target: 1, info: {Descriptor:{NodeID:1 Address:128.199.127.164:26257 Attrs: Locality: ServerVersion:19.1 BuildTag:v19.1.0 StartedAt:1557537674395557548 LocalityAddress:[] XXX_NoUnkeyedLiteral:{} XXX_sizecache:0} ClusterID:a14e89a7-792d-44d3-89af-7037442eacbc StartedAt:1557537674395557548 LastUp:1557537671113461486}
I190511 01:21:14.916458 59 gossip/gossip.go:1510 [n1] node has connected to cluster via gossip
I190511 01:21:14.916660 59 storage/stores.go:263 [n1] wrote 0 node addresses to persistent storage
I190511 01:21:24.480247 116 storage/store.go:4220 [n1,s1] sstables (read amplification = 2):
0 [ 51K 1 ]: 51K
6 [ 1M 1 ]: 1M
I190511 01:21:24.480380 116 storage/store.go:4221 [n1,s1]
** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------
L0 1/0 50.73 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 8.0 0 1 0.006 0 0
L6 1/0 1.26 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
Sum 2/0 1.31 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 8.0 0 1 0.006 0 0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 8.0 0 1 0.006 0 0
Uptime(secs): 10.6 total, 10.6 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
estimated_pending_compaction_bytes: 0 B
I190511 01:21:24.481565 121 server/status/runtime.go:500 [n1] runtime stats: 170 MiB RSS, 114 goroutines, 0 B/0 B/0 B GO alloc/idle/total, 14 MiB/16 MiB CGO alloc/total, 0.0 CGO/sec, 0.0/0.0 %(u/s)time, 0.0 %gc (7x), 50 KiB/1.5 MiB (r/w)net
What is the possible cause blocking the join? Thank you for your suggestions!
It seems you had previously started the second node (the one running on 128.199.127.164) by itself, creating its own cluster.
This can be seen in the error message:
W190511 01:21:14.251763 65 gossip/client.go:122 [n?] failed to start gossip client to 163.172.156.111:26257: initial connection heartbeat failed: rpc error: code = Unknown desc = client cluster ID "a14e89a7-792d-44d3-89af-7037442eacbc" doesn't match server cluster ID "3e797faa-59a1-4b0d-83b5-36143ddbdd69"
To be able to join the cluster, the data directory of the joining node must be empty. You can either delete cockroach-data or specify an alternate directory with --store=/path/to/data-dir.
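A quick sketch of both options, reusing the addresses and paths from the question:

# Option 1: wipe the second node's old data directory, then join again
rm -rf /home/ueda/cockroach-data
cockroach start --insecure --advertise-addr=128.199.127.164 --join=163.172.156.111:26257

# Option 2: leave the old directory alone and start from a fresh store
cockroach start --insecure --advertise-addr=128.199.127.164 --join=163.172.156.111:26257 --store=/path/to/data-dir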
My data looks like this:
JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC
22.60 24.60 30.60 34.60 36.20 35.70 32.10 30.20 31.40 31.60 28.00 24.80
25.40 27.60 32.40 34.60 36.50 38.10 31.70 31.40 30.30 30.20 27.00 23.90
and there are hundreds of rows! I want to find the maximum value in each row and write it, along with its month, in a separate column next to the data,
so my output will be:
36.20 MAY
38.10 JUN
.
.
I want to use the maxloc function, but I have no idea how to use it!
Try

index = maxloc(myTable(3,:), dim=1)
print *, myTable(3, index), months(index)

With dim=1, maxloc returns a scalar index instead of a rank-1 array of size 1, so it can be used directly as a subscript. This selects the highest value in the third row and prints it together with its month; the month names have to live in a separate character array (months here), since they cannot share a numeric array with the values.
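For completeness, a minimal self-contained sketch of the whole per-row loop (the array names, the fixed row count and the separate months array are assumptions; adapt them to how you actually read your file):

program row_max
  implicit none
  integer, parameter :: nrows = 2, ncols = 12
  character(len=3), parameter :: months(ncols) = &
      ['JAN','FEB','MAR','APR','MAY','JUN','JUL','AUG','SEP','OCT','NOV','DEC']
  real :: myTable(nrows, ncols)
  integer :: i, idx

  ! the two example rows from the question
  myTable(1,:) = [22.60,24.60,30.60,34.60,36.20,35.70,32.10,30.20,31.40,31.60,28.00,24.80]
  myTable(2,:) = [25.40,27.60,32.40,34.60,36.50,38.10,31.70,31.40,30.30,30.20,27.00,23.90]

  do i = 1, nrows
    idx = maxloc(myTable(i,:), dim=1)   ! dim=1 yields a scalar index
    print '(f6.2,1x,a)', myTable(i,idx), months(idx)
  end do
end program row_max

This prints 36.20 MAY and 38.10 JUN for the two example rows.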
I have a Spark DataFrame that looks like this:
id dates value
1 11 2013-11-15 10
2 11 2013-11-16 15
3 22 2013-11-15 20
4 22 2013-11-16 21
5 22 2013-11-17 3
I wish to retain the value from the previous date per id.
The final result should look like this:
id dates value prev_value
1 11 2013-11-15 10 NA
2 11 2013-11-16 15 10
3 22 2013-11-15 20 NA
4 22 2013-11-16 21 20
5 22 2013-11-17 3 21
The solution from this question would not work for various reasons.
I would appreciate the help!
So after playing with it for a while, here's the workaround that I found:
First of all, here's the example DF:
id <- c(11, 11, 22, 22, 22)
dates <- as.Date(c('2013-11-15', '2013-11-16', '2013-11-15', '2013-11-16', '2013-11-17'), "%Y-%m-%d")
value <- c(10, 15, 20, 21, 3)
example <- as.DataFrame(data.frame(id = id, dates = dates, value = value))
I copy the example DF and add 1 day to the original dates, then rename the columns:
example_p <- example
example_p$dates <- date_add(example_p$dates, 1)
colnames(example_p) <- c("id", "dates", "prev_value")
Finally, I merge the new DF with the original one:
result <- select(
    merge(example, example_p, by = intersect(names(example), names(example_p)), all.x = TRUE),
    c("id_x", "dates_x", "value", "prev_value"))
showDF(result)
+----+----------+-----+----------+
|id_x| dates_x|value|prev_value|
+----+----------+-----+----------+
|22.0|2013-11-15| 20.0| null|
|11.0|2013-11-15| 10.0| null|
|11.0|2013-11-16| 15.0| 10.0|
|22.0|2013-11-16| 21.0| 20.0|
|22.0|2013-11-17| 3.0| 21.0|
+----+----------+-----+----------+
Obviously, this is somewhat clumsy, and I will be happy to give the points to anyone who can suggest a solution that works faster than this.
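For the record, if you are on Spark >= 2.0, SparkR's window functions should let you skip the self-join entirely. A minimal sketch (I haven't benchmarked it, so the speed gain is an assumption):

ws <- orderBy(windowPartitionBy("id"), "dates")
example$prev_value <- over(lag(example$value, 1), ws)
showDF(example)

lag over a window partitioned by id and ordered by dates yields exactly the 'previous value per id' semantics, with NULL for each id's first row.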
I recently started learning Hadoop.
I found this data set: http://stat-computing.org/dataexpo/2009/the-data.html (2009 data).
I want some suggestions on what types of patterns or analyses I can do in Hadoop MapReduce; I just need something to get started with (one starter idea is sketched after the attribute list below). If anyone has a better data set link that I can use for learning, please share it here.
The attributes are as follows:
1 Year 1987-2008
2 Month 1-12
3 DayofMonth 1-31
4 DayOfWeek 1 (Monday) - 7 (Sunday)
5 DepTime actual departure time (local, hhmm)
6 CRSDepTime scheduled departure time (local, hhmm)
7 ArrTime actual arrival time (local, hhmm)
8 CRSArrTime scheduled arrival time (local, hhmm)
9 UniqueCarrier unique carrier code
10 FlightNum flight number
11 TailNum plane tail number
12 ActualElapsedTime in minutes
13 CRSElapsedTime in minutes
14 AirTime in minutes
15 ArrDelay arrival delay, in minutes
16 DepDelay departure delay, in minutes
17 Origin origin IATA airport code
18 Dest destination IATA airport code
19 Distance in miles
20 TaxiIn taxi in time, in minutes
21 TaxiOut taxi out time in minutes
22 Cancelled was the flight cancelled?
23 CancellationCode reason for cancellation (A = carrier, B = weather, C = NAS, D = security)
24 Diverted 1 = yes, 0 = no
25 CarrierDelay in minutes
26 WeatherDelay in minutes
27 NASDelay in minutes
28 SecurityDelay in minutes
29 LateAircraftDelay in minutes
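For instance, one classic starter analysis on this schema is the average departure delay per carrier. A minimal Hadoop Streaming sketch in Python (the column positions follow the list above; treating 'NA' as the only missing-value marker is an assumption to verify against the files):

mapper.py:
#!/usr/bin/env python
# Emit <carrier TAB depdelay> for every data row with a usable delay.
import sys
for line in sys.stdin:
    fields = line.strip().split(',')
    if not fields or fields[0] == 'Year':        # skip the header row
        continue
    carrier, depdelay = fields[8], fields[15]    # UniqueCarrier, DepDelay
    if depdelay != 'NA':
        print('%s\t%s' % (carrier, depdelay))

reducer.py:
#!/usr/bin/env python
# Input arrives sorted by key, so average delays per carrier in one pass.
import sys
cur, total, count = None, 0.0, 0
for line in sys.stdin:
    key, val = line.rstrip('\n').split('\t')
    if key != cur:
        if cur is not None:
            print('%s\t%.2f' % (cur, total / count))
        cur, total, count = key, 0.0, 0
    total += float(val)
    count += 1
if cur is not None:
    print('%s\t%.2f' % (cur, total / count))

Run it with the hadoop-streaming jar, e.g. hadoop jar hadoop-streaming.jar -input 2008.csv -output delays -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py (the jar path depends on your installation).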
Thanks