Mirror segments down in Greenplum

I am learning Greenplum and have set up a small cluster for testing (one master, three segment hosts).
I initialized the cluster without mirroring and later enabled it using gpaddmirrors. However, all of the mirrors appear as Down in gpstate, and running the gp_primarymirror command on the segment hosts simply hangs (that is the command that gprecoverseg runs). Here is the process list for the primary on port 40002 on one of the segment hosts:
gpadmin 2695 1 0 06:21 ? 00:00:00 /usr/local/greenplum-db-4.3.8.1/bin/postgres -D /data/primary/gpseg2 -p 40002 -b 4 -z 9 --silent-mode=true -i -M quiescent -C 2
gpadmin 2698 2695 0 06:21 ? 00:00:00 postgres: port 40002, logger process
gpadmin 2706 2695 0 06:21 ? 00:00:00 postgres: port 40002, primary process
gpadmin 2709 2706 0 06:21 ? 00:00:00 postgres: port 40002, primary recovery process
gpadmin 2719 2695 0 06:21 ? 00:00:00 postgres: port 40002, stats collector process
gpadmin 2720 2695 0 06:21 ? 00:00:00 postgres: port 40002, writer process
gpadmin 2721 2695 0 06:21 ? 00:00:00 postgres: port 40002, checkpoint process
gpadmin 2722 2695 0 06:21 ? 00:00:00 postgres: port 40002, sweeper process
gpadmin 2901 2207 0 06:29 pts/0 00:00:00 grep --color=auto 40002
gpadmin@gpseg02:~> gp_primarymirror -h gpseg02 -p 40002
The last command just hangs and never finishes.
Any idea what I am doing wrong?
Update #1:
gprecoverseg -v output (without -v it just prints "Unable to connect to database"):
20160502:06:24:28:023062 gprecoverseg:gpmaster02:gpadmin-[DEBUG]:-[worker8] finished cmd: Get segment status cmdStr='ssh -o 'StrictHostKeyChecking no' gpseg03 ". /usr/local/greenplum-db/./greenplum_path.sh; $GPHOME/bin/gp_primarymirror -h gpseg03 -p 40002"' had result: cmd had rc=1 completed=True halted=False
stdout=''
stderr='Welcome to SUSE Linux Enterprise Server 12.
mode: PrimarySegment
segmentState: ChangeTrackingDisabled
dataState: InChangeTracking
faultType: NotInitialized
mode: PrimarySegment
segmentState: ChangeTrackingDisabled
dataState: InChangeTracking
faultType: NotInitialized
gp_segment_configuration output:
testdb=# select * from gp_segment_configuration;
dbid | content | role | preferred_role | mode | status | port | hostname | address | replication_port | san_mounts
------+---------+------+----------------+------+--------+-------+----------------------------+----------------------------+------------------+------------
1 | -1 | p | p | s | u | 5432 | gpmaster02 | gpmaster02 | |
2 | 0 | p | p | c | u | 40000 | gpseg02 | gpseg02 | 43000 |
11 | 0 | m | m | r | d | 41000 | gpseg03 | gpseg03 | 42000 |
3 | 1 | p | p | c | u | 40001 | gpseg02 | gpseg02 | 43001 |
12 | 1 | m | m | r | d | 41001 | gpseg03 | gpseg03 | 42001 |
4 | 2 | p | p | c | u | 40002 | gpseg02 | gpseg02 | 43002 |
13 | 2 | m | m | r | d | 41002 | gpseg03 | gpseg03 | 42002 |
5 | 3 | p | p | c | u | 40000 | gpseg04 | gpseg04 | 43000 |
17 | 3 | m | m | r | d | 41000 | gpseg02 | gpseg02 | 42000 |
6 | 4 | p | p | c | u | 40001 | gpseg04 | gpseg04 | 43001 |
18 | 4 | m | m | r | d | 41001 | gpseg02 | gpseg02 | 42001 |
7 | 5 | p | p | c | u | 40002 | gpseg04 | gpseg04 | 43002 |
19 | 5 | m | m | r | d | 41002 | gpseg02 | gpseg02 | 42002 |
8 | 6 | p | p | c | u | 40000 | gpseg03 | gpseg03 | 43000 |
14 | 6 | m | m | r | d | 41000 | gpseg04 | gpseg04 | 42000 |
9 | 7 | p | p | c | u | 40001 | gpseg03 | gpseg03 | 43001 |
15 | 7 | m | m | r | d | 41001 | gpseg04 | gpseg04 | 42001 |
10 | 8 | p | p | c | u | 40002 | gpseg03 | gpseg03 | 43002 |
16 | 8 | m | m | r | d | 41002 | gpseg04 | gpseg04 | 42002 |

This usually means there is a problem with the changetracking logs:
"segmentState: ChangeTrackingDisabled"
Try the following (a shell sketch of these steps follows below):
Stop the database.
For the segment "-h gpseg03 -p 40002", go into its data directory and delete the contents of the "pg_changetracking" directory.
Start the database.
Run "gprecoverseg -F".
There could be other segments with corrupted changetracking logs. If the above steps don't work, stop the database and delete pg_changetracking for ALL segments.
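A rough shell sketch of those steps, assuming the data-directory layout visible in the ps output above (/data/primary/gpsegN, where N is the content id); verify the exact path of the affected segment before deleting anything:

gpstop -a                                                      # stop the database
ssh gpseg03 'rm -rf /data/primary/gpseg8/pg_changetracking/*'  # clear the changetracking logs (path assumed)
gpstart -a                                                     # start the database
gprecoverseg -F                                                # full recovery of the down mirrors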

I faced the same issue in the past. I ran a full gprecoverseg after bringing the database up in admin-only mode, and it worked.
Please try this step.
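For reference, a minimal sketch of that sequence (assuming gpstart's -R flag for restricted, admin-only mode):

gpstop -a            # stop the cluster
gpstart -R           # start in restricted (admin-only) mode
gprecoverseg -F      # run a full recovery
gpstop -ar           # restart normally once the mirrors are back up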

Related

Which model should I use: xtlogit or xtprobit?

I have the following panel data set with very large N (500,000) and small T (15 years). My dependent variable is project_1 or project_2. I want to estimate the likelihood of a project as a function of treated, with year and village fixed effects. For a continuous dependent variable I was using reghdfe.
The dependent variable is simply a dummy that becomes 1 when a village gets the project and remains 1 in the subsequent years.
I am aware that I cannot use the "probit" command in Stata because I have a panel. Can you suggest which model I should use?
| village | population | year | project_1 | project_2 | treated |
|---------|------------|------|-----------|-----------|-----------|
| A | 100 | 2001 | 0 | 0 | 0 |
| A | 100 | 2002 | 1 | 0 | 0 |
| A | 100 | 2003 | 1 | 0 | 1 |
| A | 100 | 2004 | 1 | 0 | 1 |
| A | 100 | 2005 | 1 | 0 | 1 |
| B | 200 | 2001 | 0 | 0 | 0 |
| B | 200 | 2002 | 0 | 0 | 1 |
| B | 200 | 2003 | 0 | 1 | 1 |
| B | 200 | 2004 | 0 | 1 | 1 |
| B | 200 | 2005 | 0 | 1 | 1 |
| C | 150 | 2001 | 0 | 0 | 0 |
| C | 150 | 2002 | 0 | 0 | 0 |
| C | 150 | 2003 | 0 | 0 | 0 |
| C | 150 | 2004 | 1 | 0 | 0 |
| C | 150 | 2005 | 1 | 0 | 1 |
| D | 175 | 2001 | 0 | 0 | 0 |
| D | 175 | 2002 | 0 | 0 | 0 |
| D | 175 | 2003 | 0 | 0 | 0 |
| D | 175 | 2004 | 0 | 0 | 1 |
| D | 175 | 2005 | 0 | 0 | 1 |
Your question has two parts: which of logit and probit is more appropriate for you, and how to implement the appropriate model in Stata. As @NickCox mentioned, the former is most appropriate for Cross Validated, where it has received robust discussion: Difference between logit and probit models.
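On the implementation side, a minimal sketch using the variable names from the example table above (assumptions: village is a string id and needs encoding; adjust to your data):

* Sketch only: panel logit/probit with the variables from the table above.
encode village, gen(vid)                  // xtset needs a numeric panel identifier
xtset vid year
xtlogit project_1 treated i.year, fe      // conditional (fixed-effects) logit
xtprobit project_1 treated i.year, re     // random-effects probit (there is no fixed-effects probit)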

How to compare two circuits based on their utilization

I have some hardware IPs that I need to synthesize, and each IP has several generic parameters I can play with. Each combination of parameters gives me a different utilization report after synthesis and implementation.
For example, for two different configurations Design_1 and Design_2, I get the following in Vivado 2018.1. The last column is the ratio of Design_2's values divided by Design_1's.
As you can see in this simple example, Design_2 has fewer Slice LUTs but slightly more F7 Muxes.
My question is: how do I draw a conclusion about the cost of each one? Should I give priority to Slice LUTs, or to Registers, etc.?
| Resource | Design_1 | Design_2 | (2)/(1) |
|-----------------------------|-------|-------|-------------------|
| Slice LUTs | 34124 | 34097 | 0.999208768022506 |
| Slice Registers | 16913 | 16913 | 1 |
| F7 Muxes | 1453 | 1550 | 1.06675843083276 |
| F8 Muxes | 91 | 91 | 1 |
| Slice | 10272 | 10189 | 0.991919781931464 |
| LUT as Logic | 31538 | 31511 | 0.999143889910584 |
| LUT as Memory | 2586 | 2586 | 1 |
| LUT Flip Flop Pairs | 9020 | 9021 | 1.00011086474501 |
| Block RAM Tile | 37 | 37 | 1 |
| DSPs | 11 | 11 | 1 |
| Bonded IOB | 125 | 125 | 1 |
| Bonded IPADs | 0 | 0 | #DIV/0! |
| PHY_CONTROL | 1 | 1 | 1 |
| PHASER_REF | 1 | 1 | 1 |
| OUT_FIFO | 4 | 4 | 1 |
| IN_FIFO | 2 | 2 | 1 |
| IDELAYCTRL | 1 | 1 | 1 |
| IBUFDS | 0 | 0 | #DIV/0! |
| PHASER_OUT/PHASER_OUT_PHY | 4 | 4 | 1 |
| PHASER_IN/PHASER_IN_PHY | 2 | 2 | 1 |
| IDELAYE2/IDELAYE2_FINEDELAY | 16 | 16 | 1 |
| ILOGIC | 16 | 16 | 1 |
| OLOGIC | 46 | 46 | 1 |
| BUFGCTRL | 10 | 10 | 1 |
| BUFIO | 0 | 0 | #DIV/0! |
| MMCME2_ADV | 2 | 2 | 1 |
| PLLE2_ADV | 2 | 2 | 1 |
| BUFMRCE | 0 | 0 | #DIV/0! |
| BUFHCE | 2 | 2 | 1 |
| BUFR | 0 | 0 | #DIV/0! |
| BSCANE2 | 4 | 4 | 1 |
| CAPTUREE2 | 0 | 0 | #DIV/0! |
| DNA_PORT | 0 | 0 | #DIV/0! |
| EFUSE_USR | 0 | 0 | #DIV/0! |
| FRAME_ECCE2 | 0 | 0 | #DIV/0! |
| ICAPE2 | 0 | 0 | #DIV/0! |
| PCIE_2_1 | 0 | 0 | #DIV/0! |
| STARTUPE2 | 0 | 0 | #DIV/0! |
| XADC | 0 | 0 | #DIV/0! |
It depends on your needs. LUTs and F7 muxes are different physical cells in your FPGA, so even if you don't use them, they are still there.
If one resource is more critical than the others, you should try to minimize the utilization of that critical resource to simplify place and route.
If nothing is critical, I think it is better to use F7 muxes first, because Slice LUTs are more flexible for the rest of your design.

Pivot Table in Hive and Create Multiple Columns for Unique Combinations

I want to pivot the following table
| ID | Code | date | qty |
|----|------|--------|-----|
| 1 | A | 1/1/19 | 11 |
| 1 | A | 2/1/19 | 12 |
| 2 | B | 1/1/19 | 13 |
| 2 | B | 2/1/19 | 14 |
| 3 | C | 1/1/19 | 15 |
| 3 | C | 3/1/19 | 16 |
into
| ID | Code | mth_1 (1/1/19) | mth_2 (2/1/19) | mth_3 (3/1/19) |
|----|------|----------------|----------------|----------------|
| 1 | A | 11 | 12 | 0 |
| 2 | B | 13 | 14 | 0 |
| 3 | C | 15 | 0 | 16 |
I am new to Hive and I am not sure how to implement this.
NOTE: I don't want to hard-code a month mapping, because my month values change over time.
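A minimal sketch of the usual conditional-aggregation approach (table and column names assumed from the example above). Hive has no native PIVOT operator, so the month/CASE list below is hard-coded and would have to be generated, e.g. from SELECT DISTINCT `date`, whenever the months change:

SELECT
  id,
  code,
  SUM(CASE WHEN `date` = '1/1/19' THEN qty ELSE 0 END) AS mth_1,
  SUM(CASE WHEN `date` = '2/1/19' THEN qty ELSE 0 END) AS mth_2,
  SUM(CASE WHEN `date` = '3/1/19' THEN qty ELSE 0 END) AS mth_3
FROM my_table          -- hypothetical table name
GROUP BY id, code;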

Ruby Net-SSH get on-login text and send data

When I run this command from my Linux terminal:
ssh -p 12643 sudoku#ringzer0team.com
and enter the password, dg43zz6R0E, I get this message:
Linux ld64webdmz 3.2.0-4-amd64 #1 SMP Debian 3.2.82-1 x86_64
Last login: Fri Sep 22 09:19:02 2017 from 39.188.121.75
RingZer0 Team Online CTF
The sudoku challenge
+---+---+---+---+---+---+---+---+---+
| | 4 | 8 | | | 1 | 3 | 5 | 2 |
+---+---+---+---+---+---+---+---+---+
| 6 | 7 | | | 5 | | | 4 | |
+---+---+---+---+---+---+---+---+---+
| | | | | 4 | 8 | 6 | 7 | |
+---+---+---+---+---+---+---+---+---+
| 4 | | | | 1 | 3 | 5 | 2 | 9 |
+---+---+---+---+---+---+---+---+---+
| | | 3 | 5 | 2 | | | | 6 |
+---+---+---+---+---+---+---+---+---+
| | | 9 | | | 6 | | 1 | |
+---+---+---+---+---+---+---+---+---+
| | | | | | | | | 4 |
+---+---+---+---+---+---+---+---+---+
| | | | 2 | | | | | |
+---+---+---+---+---+---+---+---+---+
| 2 | | | | | | 1 | | |
+---+---+---+---+---+---+---+---+---+
Solve this sudoku in less than 10 seconds and you'll get the flag.
Submit all the sudoku table using this format from left to right 1,2,3,4,5,6,7,8,9,2,3,4,5,6,7,8,9,1...
Solution:
Using Net-SSH from Ruby, how can I get that on-login message and send a response?
This is what I have:
#!/usr/bin/ruby
require 'net/ssh'

Net::SSH.start('ringzer0team.com', 'sudoku', :password => 'dg43zz6R0E', :port => 12643) do |ssh|
  # read that on-login text, solve and send output
  p ssh.exec!(((1..9).to_a * 9).join(',') + "\n") # trying to send data
end
It does not terminate (does not get past the call to exec!).
I'm just asking about how to interact with the session (get and send data), not on how to solve the sudoku.
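One way to interact with the session is Net::SSH's channel API: request a PTY, start a shell, collect the banner in on_data, and reply with send_data. A hedged sketch (solve_sudoku is a hypothetical placeholder for your solver):

#!/usr/bin/ruby
require 'net/ssh'

# Hypothetical placeholder: turn the banner text into the comma-separated answer.
def solve_sudoku(banner)
  ((1..9).to_a * 9).join(',')
end

Net::SSH.start('ringzer0team.com', 'sudoku', :password => 'dg43zz6R0E', :port => 12643) do |ssh|
  banner = ''
  channel = ssh.open_channel do |ch|
    ch.request_pty do |ch, pty_ok|
      raise 'could not get a pty' unless pty_ok
      ch.send_channel_request('shell') do |ch, shell_ok|
        raise 'could not start a shell' unless shell_ok
        ch.on_data do |c, data|
          banner << data                      # the on-login text arrives here
          if banner.include?('Solution:')     # the prompt shown in the banner above
            c.send_data(solve_sudoku(banner) + "\n")
            c.eof!                            # nothing more to send
          end
        end
      end
    end
  end
  channel.wait                                # run the event loop until the channel closes
  puts banner                                 # includes whatever the server replied
end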

mysql table unaligned in console output when using UTF8

I like to use the mysql command-line client, but when using UTF-8 the tables in the console output are misaligned:
> set names utf8;
> [some query]
+--------+---------+---------------------------------+-----------------------------+----------+---------+-----------+-------+---------+-----------+
| RuleId | TaxonId | Note | NoteSci | MinCount | DayFrom | MonthFrom | DayTo | MonthTo | ExtraNote |
+--------+---------+---------------------------------+-----------------------------+----------+---------+-----------+-------+---------+-----------+
| 722 | 10090 | sedmihlásek malý | Hippolais caligata | 1 | 1 | 1 | 31 | 12 | NULL |
| 727 | 10059 | Anseranas semipalmata | husovec strakatý | 1 | 1 | 1 | 31 | 12 | NULL |
| 728 | 10062 | Cygnus atratus | labuť černá | 1 | 1 | 1 | 31 | 12 | NULL |
| 729 | 10094 | Anser cygnoides | husa labutí | 1 | 1 | 1 | 31 | 12 | NULL |
| 730 | 10063 | Tadorna cana | husice šedohlavá | 1 | 1 | 1 | 31 | 12 | NULL |
| 731 | 10031 | Cairina moschata f. domestica | pižmovka domácí | 20 | 1 | 1 | 31 | 12 | NULL |
| 732 | 10088 | Cairina scutulata | pižmovka bělokřídlá | 1 | 1 | 1 | 31 | 12 | NULL |
| 733 | 10087 | Anas sibilatrix | hvízdák chilský | 1 | 1 | 1 | 31 | 12 | NULL |
| 734 | 10077 | Anas platyrhynchos f. domestica | kachna domácí | 1000 | 1 | 1 | 31 | 12 | NULL |
| 735 | 10086 | Anas hottentota | čírka hottentotská | 1 | 1 | 1 | 31 | 12 | NULL |
|
This is apparently because the mysql client computes column widths from the byte length of the strings, which does not account for multi-byte UTF-8 characters, so exactly one space is missing for each accented character (those characters take two bytes).
Do you know a possible workaround for this problem?
Run your mysql client with the charset option:
mysql -uUSER -p DATABASE --default-character-set=utf8
(replace USER and DATABASE with your actual credentials)
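If that fixes it, the setting can also be made permanent in the client configuration instead of being passed on every invocation; a sketch, assuming a per-user ~/.my.cnf:

[mysql]
default-character-set = utf8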
