Error when running a panel regression with the plm package in RStudio

I have panel data of global suicide rates and temperature covering 27 years, but I get an error when I run a regression.
I use the following code:
library(plm)
install.packages("dummies")
library(dummies)
data2 <- cbind(mydata, dummy(mydata$year, sep = "_"))
suicide_fe <- plm(suiciderate ~ dmt, data = data2,
                  index = c("country", "year"), model = "within")
summary(suicide_fe)
But I got this error:
Error in pdim.default(index[[1]], index[[2]]) :
  duplicate couples (id-time)
In addition: Warning messages:
1: In pdata.frame(data, index) :
  duplicate couples (id-time) in resulting pdata.frame
  to find out which, use, e.g., table(index(your_pdataframe), useNA = "ifany")
2: In is.pbalanced.default(index[[1]], index[[2]]) :
  duplicate couples (id-time)
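The error means the (country, year) pair does not uniquely identify rows, so plm cannot build the panel index. The same diagnostic the warning suggests can be sketched in Python with pandas; the data below is hypothetical, with the column names taken from the index argument above:

```python
import pandas as pd

# Hypothetical panel with one duplicated (country, year) pair
df = pd.DataFrame({
    "country": ["A", "A", "B", "B", "B"],
    "year": [2000, 2001, 2000, 2001, 2001],
    "suiciderate": [1.0, 1.1, 2.0, 2.1, 2.2],
})

# keep=False marks every row whose (country, year) combination occurs
# more than once -- these are the "duplicate couples (id-time)"
dupes = df[df.duplicated(subset=["country", "year"], keep=False)]
print(dupes)
```

Once the offending rows are found, they need to be aggregated or deduplicated before the index is valid.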

Related

PyTorch is not working with DistributedDataParallel for multi-GPU training

I am trying to train my model on multiple GPUs. I imported the libraries and added the following code.
import os

from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed import init_process_group, destroy_process_group
Initialization:
def ddp_setup(rank: int, world_size: int):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "12355"
    os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"
    init_process_group(backend="gloo", rank=0, world_size=1)
My model:
model = CMGCNnet(config,
                 que_vocabulary=glovevocabulary,
                 glove=glove,
                 device=device)
model = model.to(0)
if -1 not in args.gpu_ids and len(args.gpu_ids) > 1:
    model = DDP(model, device_ids=[0, 1])
It throws the following error:
config_yml : model/config_fvqa_gruc.yml
cpu_workers : 0
save_dirpath : exp_test_gruc
overfit : False
validate : True
gpu_ids : [0, 1]
dataset : fvqa
Loading FVQATrainDataset…
True
done splitting
Loading FVQATestDataset…
Loading glove…
Building Model…
Traceback (most recent call last):
File "trainfvqa_gruc.py", line 512, in <module>
    train()
File "trainfvqa_gruc.py", line 145, in train
    ddp_setup(0,1)
File "trainfvqa_gruc.py", line 42, in ddp_setup
    init_process_group(backend="gloo", rank=0, world_size=1)
File "/home/seecs/miniconda/envs/mucko-edit/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 360, in init_process_group
    timeout=timeout)
RuntimeError: [enforce fail at /opt/conda/conda-bld/pytorch_1544202130060/work/third_party/gloo/gloo/transport/tcp/device.cc:128] rp != nullptr. Unable to find address for: 127.0.0.1localhost.
localdomainlocalhost
I tried debugging the issue with os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"; it outputs:
Loading FVQATrainDataset...
True
done splitting
Loading FVQATestDataset...
Loading glove...
Building Model...
Segmentation fault
With the NCCL backend it starts the training, but it gets stuck and doesn't go further than this:
Training for epoch 0:
0%| | 0/2039 [00:00<?, ?it/s]
I found this suggested solution, but where do I add these lines?
export GLOO_SOCKET_IFNAME=eth0
It is mentioned in
https://discuss.pytorch.org/t/runtime-error-using-distributed-with-gloo/16579/3
Can someone help me with this issue? I am hoping to get an answer.
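Since the error is an address-resolution failure in Gloo, the environment variables have to be set inside the process before init_process_group runs (or exported in the shell before launching). A minimal sketch of where they would go, assuming the same ddp_setup structure as above; eth0 is an assumed interface name, so check `ip link` for the right one:

```python
import os

def ddp_setup(rank: int, world_size: int):
    # These must be set *before* torch.distributed.init_process_group runs.
    os.environ["MASTER_ADDR"] = "127.0.0.1"    # numeric address sidesteps the
                                               # failing hostname lookup
    os.environ["MASTER_PORT"] = "12355"
    os.environ["GLOO_SOCKET_IFNAME"] = "eth0"  # assumption: adjust to your NIC
    # from torch.distributed import init_process_group
    # init_process_group(backend="gloo", rank=rank, world_size=world_size)

ddp_setup(0, 1)
```

Equivalently, running `export GLOO_SOCKET_IFNAME=eth0` in the shell before launching the training script sets the same variable for the whole process.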

Changing the CRS in GeoPandas

I'm trying to change the CRS of a geopandas dataframe. The current CRS is:
Name: unknown
Axis Info [ellipsoidal]:
- lon[east]: Longitude (degree)
- lat[north]: Latitude (degree)
Area of Use:
- undefined
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
When I try dfTrans.to_crs('epsg:4326') I get the following error:
pyproj.exceptions.CRSError: Invalid projection: epsg:4326: (Internal Proj Error: proj_create: cannot build geodeticCRS 4326: SQLite error on SELECT name, ellipsoid_auth_name, ellipsoid_code, prime_meridian_auth_name, prime_meridian_code, area_of_use_auth_name, area_of_use_code, publication_date, deprecated FROM geodetic_datum WHERE auth_name = ? AND code = ?: no such column: publication_date)
For a simple command in pyproj, pyproj.CRS.from_epsg(4326), I get the same error:
File "pyproj/_crs.pyx", line 1738, in pyproj._crs._CRS.__init__
pyproj.exceptions.CRSError: Invalid projection: epsg:4326: (Internal Proj Error: proj_create: cannot build geodeticCRS 4326: SQLite error on SELECT name, ellipsoid_auth_name, ellipsoid_code, prime_meridian_auth_name, prime_meridian_code, area_of_use_auth_name, area_of_use_code, publication_date, deprecated FROM geodetic_datum WHERE auth_name = ? AND code = ?: no such column: publication_date)
I don't know enough to tell what's going on, but it seems like an underlying function is querying a column that doesn't exist. Any ideas how to fix this or work around it?
I got that same error when using PROJ 5.x. It seems that the publication_date column is a PROJ 6 or 7 item (both of which require SQLite); in other words, a newer pyproj is querying an older proj.db, so the PROJ library and database versions need to match.
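The underlying failure is a schema mismatch: newer code issuing a SELECT against a database file created by an older version. That failure mode can be reproduced generically with Python's built-in sqlite3 module (the table here is a toy stand-in for proj.db):

```python
import sqlite3

# Simulate an "old" database created by an earlier library version:
# it lacks the publication_date column the newer code expects.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE geodetic_datum (name TEXT, deprecated INTEGER)")
con.execute("INSERT INTO geodetic_datum VALUES ('WGS 84', 0)")

try:
    con.execute("SELECT name, publication_date FROM geodetic_datum")
except sqlite3.OperationalError as e:
    print(e)  # no such column: publication_date
```

The fix is accordingly not in the query but in aligning versions: upgrade PROJ (and its proj.db) to match the pyproj build, or reinstall pyproj against the PROJ version actually on the system.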

How to correct a knapsack compile error in MiniZinc?

What can I do to correct the errors in the following program?
item= record( int: id, profit, weight);
set of item: All_Items ;
int :Max_Capacity;
var set of item: Selected_Items;
I have the following code for solving the knapsack problem in MiniZinc, but it has many errors.
constraint sum([holds(X in Selected_Items)*X.weight | X in All_Items])=< Max_Capacity;
constraint Selected_Items >= All_Items;
maximize
sum([holds(S in Selected_Items)*S.profit |S in All_Items]);
Error list:
Compiling knapsack1.mzn
C:/Program Files/MiniZinc IDE (bundled)/examples/knapsack1.mzn:3.7-12:
item= record( int: id, profit, weight);
^^^^^^
Error: syntax error, unexpected record
C:/Program Files/MiniZinc IDE (bundled)/examples/knapsack1.mzn:11.45:
constraint sum([holds(X in Selected_Items)*X.weight | X in All_Items])=< Max_Capacity;
^
Error: syntax error, unexpected $undefined, expecting ]
C:/Program Files/MiniZinc IDE (bundled)/examples/knapsack1.mzn:15.1-8:
maximize
^^^^^^^^
Error: syntax error, unexpected maximize, expecting end of file
Process finished with non-zero exit code 1
Finished in 89msec
Although MiniZinc currently does not contain any record types (read: struct-like types), they are a possibility for the future. To avoid breaking models later, the word record is therefore already a reserved keyword and cannot be used as an identifier in your model; renaming record to something else will fix the first error. The remaining syntax errors are separate issues: MiniZinc writes less-than-or-equal as <=, not =<, and an objective must be stated as solve maximize ...; rather than a bare maximize.

ServiceNow: transform map stopped due to error: java.lang.NumberFormatException

The transform is getting aborted, but only if I check the "Copy empty fields" checkbox; the rest of the import set entries are then stuck at Pending. I have also verified the transform script, but no luck.
Below is the error:
Import set: ISETxxxxxxx transform stopped due to error: java.lang.NumberFormatException
java.lang.NumberFormatException
at java.math.BigDecimal.<init>(BigDecimal.java:596)
at java.math.BigDecimal.<init>(BigDecimal.java:383)
at java.math.BigDecimal.<init>(BigDecimal.java:806)
at com.glide.script.glide_elements.GlideNumber.getSafeBigDecimal(GlideNumber.java:42)
at com.glide.currency.GlideElementCurrency.coerceAmount(GlideElementCurrency.java:406)
at com.glide.currency.GlideElementCurrency.cleanAmount(GlideElementCurrency.java:389)
at com.glide.currency.GlideElementCurrency.setDisplayValue(GlideElementCurrency.java:136)
at com.glide.currency.GlideElementCurrency.setValue(GlideElementCurrency.java:89)
at com.glide.db.impex.transformer.TransformerField.copyEmptyFields(TransformerField.java:202)
at com.glide.db.impex.transformer.TransformerField.setValue(TransformerField.java:130)
at com.glide.db.impex.transformer.TransformerField.transformField(TransformerField.java:84)
at com.glide.db.impex.transformer.TransformRow.transformCurrent(TransformRow.java:100)
at com.glide.db.impex.transformer.TransformRow.transform(TransformRow.java:69)
at com.glide.db.impex.transformer.Transformer.transformBatch(Transformer.java:150)
at com.glide.db.impex.transformer.Transformer.transform(Transformer.java:76)
at com.glide.system_import_set.ImportSetTransformerImpl.transformEach(ImportSetTransformerImpl.java:239)
at com.glide.system_import_set.ImportSetTransformerImpl.transformAllMaps(ImportSetTransformerImpl.java:91)
at com.glide.system_import_set.ImportSetTransformer.transformAllMaps(ImportSetTransformer.java:64)
at com.glide.system_import_set.ImportSetTransformer.transformAllMaps(ImportSetTransformer.java:50)
at com.snc.automation.ScheduledImportSetJob.runImport(ScheduledImportSetJob.java:55)
at com.snc.automation.ScheduledImportJob.execute(ScheduledImportJob.java:45)
at com.glide.schedule.JobExecutor.execute(JobExecutor.java:83)
at com.glide.schedule.GlideScheduleWorker.executeJob(GlideScheduleWorker.java:207)
at com.glide.schedule.GlideScheduleWorker.process(GlideScheduleWorker.java:145)
at com.glide.schedule.GlideScheduleWorker.run(GlideScheduleWorker.java:62)
I'm guessing you have a required field that is a decimal or currency (the stack trace runs through GlideElementCurrency).
The error java.lang.NumberFormatException indicates it is failing to convert an empty string to a number.
Use a source script line to convert this, something along the lines of:
answer = (function transformEntry(source) {
    if (source.u_number_field.nil())
        return 0.0;
    return source.u_number_field;
})(source);
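For intuition, the same failure mode can be reproduced with Python's decimal module: like Java's BigDecimal constructor in the stack trace above, Decimal rejects an empty string, so the coercion has to supply a fallback. A sketch (the field name and default are illustrative):

```python
from decimal import Decimal, InvalidOperation

def safe_decimal(raw: str, default: Decimal = Decimal("0")) -> Decimal:
    """Coerce a raw import value, falling back to a default for blanks."""
    try:
        return Decimal(raw)
    except InvalidOperation:  # raised for "" just as BigDecimal("") throws
        return default

print(safe_decimal("12.50"))  # 12.50
print(safe_decimal(""))       # 0 -- the case that aborts the transform
```

The transform-map source script above plays exactly this role: it intercepts the empty value before the platform hands it to BigDecimal.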

Error running the auto.arima function

I am downloading a stock's daily closing prices using the quantmod package:
library(quantmod)
library(dygraphs)
library(forecast)
date <- as.Date("2014-11-01")
getSymbols("SBIN.BO",from = date )
close <- SBIN.BO[, 4]
dygraph(close)
dat <- data.frame(date = index(SBIN.BO),SBIN.BO)
acf1 <- acf(close)
When I try to execute the auto.arima function from the forecast package:
fit <- auto.arima(close, seasonal=FALSE, xreg=fourier(close, K=4))
I encountered the following error:
Error in ...fourier(x, K, 1:length(x)) :
K must be not be greater than period/2
So I want to know why this error occurs. Did I make a mistake in the code, which is based on the tutorials available on Rob Hyndman's website/blog?
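No answer is recorded here, but the check that fails is easy to state: fourier() builds K sine/cosine pairs for a seasonal period m and requires K <= m/2, because higher harmonics alias onto lower ones. A daily close series pulled this way carries no seasonal period larger than 1, so K = 4 trips the check. A sketch of that logic in Python (the function name and structure are illustrative, not forecast's internals):

```python
import math

def fourier_terms(n: int, period: float, K: int):
    """Return K sine/cosine harmonic pairs for a series of length n.

    Mirrors the constraint in forecast::fourier: K may not exceed
    period/2, since higher harmonics are indistinguishable.
    """
    if K > period / 2:
        raise ValueError("K must not be greater than period/2")
    terms = []
    for t in range(1, n + 1):
        row = []
        for k in range(1, K + 1):
            row.append(math.sin(2 * math.pi * k * t / period))
            row.append(math.cos(2 * math.pi * k * t / period))
        terms.append(row)
    return terms

# A series with no seasonal cycle has period 1, so K = 4 is rejected:
# fourier_terms(100, period=1, K=4)  -> ValueError
# A weekly cycle (period = 7) supports up to K = 3:
xreg = fourier_terms(100, period=7, K=3)
```

So the fix is either to give the series an explicit seasonal frequency (e.g. convert it to a ts with a frequency of at least 2*K) or to lower K to fit the period actually present in the data.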