I'm trying to make an interactive map with GeoPandas using the default dataset.
countries.to_crs(epsg=3395)
countries.explore(column='pop_est',cmap='magma')
Now I get the following error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-27-92f1397b09bf> in <module>
1 #Popultion mapping- Interactive
2 countries.to_crs(epsg=3395)
----> 3 countries.explore(column='pop_est',cmap='magma')
~\anaconda3\envs\myenv\lib\site-packages\geopandas\geodataframe.py in explore(self, *args, **kwargs)
1856 def explore(self, *args, **kwargs):
1857 """Interactive map based on folium/leaflet.js"""
-> 1858 return _explore(self, *args, **kwargs)
1859
1860 def sjoin(self, df, *args, **kwargs):
~\anaconda3\envs\myenv\lib\site-packages\geopandas\explore.py in _explore(df, column, cmap, color, m, tiles, attr, tooltip, popup, highlight, categorical, legend, scheme, k, vmin, vmax, width, height, categories, classification_kwds, control_scale, marker_type, marker_kwds, style_kwds, highlight_kwds, missing_kwds, tooltip_kwds, popup_kwds, legend_kwds, **kwargs)
283 kwargs["crs"] = "Simple"
284 tiles = None
--> 285 elif not gdf.crs.equals(4326):
286 gdf = gdf.to_crs(4326)
287
AttributeError: 'CRS' object has no attribute 'equals'
How can I fix this?
You have an outdated version of pyproj installed in your environment; you need at least pyproj 2.5.0. GeoPandas 0.10.x contains a packaging bug that allows older versions to be installed, but they don't work. Update your pyproj:
conda update pyproj
or
pip install -U pyproj
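After updating, you can verify that the environment picks up a new enough version:
import pyproj

# GeoPandas 0.10.x needs pyproj >= 2.5.0 (where CRS.equals is available)
print(pyproj.__version__)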
Also, note that the line countries.to_crs(epsg=3395) in your snippet above doesn't do anything: to_crs does not work in place, so you need to assign the reprojected GeoDataFrame or use the inplace keyword. Keep in mind, though, that this has no effect on explore, which automatically re-projects geometries to Web Mercator for display.
countries.to_crs(epsg=3395, inplace=True)
# or
countries = countries.to_crs(epsg=3395)
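For completeness, a minimal end-to-end sketch, assuming countries comes from the built-in naturalearth_lowres sample dataset:
import geopandas as gpd

countries = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
countries = countries.to_crs(epsg=3395)  # assign the result; to_crs is not in-place by default
countries.explore(column='pop_est', cmap='magma')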
I'm trying to reproduce the result in a paper, and its GitHub is this.
What is weird is that after I download the code and run the sample "small simulation.ipynb" in VS Code with a conda environment (Python 3.9.16), it shows
Output exceeds the size limit. Open the full output data in a text editor
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/Users/yanghongguo/anaconda3/envs/py3.7/lib/python3.8/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/Users/yanghongguo/anaconda3/envs/py3.7/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar
return list(map(*args))
File "/Users/yanghongguo/Downloads/SINC-master 2/small_example/../SINC_functions.py", line 249, in VI_VS_parallel
sig = np.ones((Q,1)) / 2500
NameError: name 'Q' is not defined
"""
The above exception was the direct cause of the following exception:
NameError Traceback (most recent call last)
Cell In[5], line 15
13 tol_elbo = 10.0
14 cpus = 1
---> 15 omega, EZ, phi,B,iters_total, elbo, elbo_score = SINC_update_tau(x, m, v0, v1, lamb, vB,a_gamma,b_gamma,a_pi,b_pi,a_tau,b_tau, max_iters, tol_prec, tol_elbo, cpus)
File ~/Downloads/SINC-master 2/small_example/../SINC_functions.py:495, in SINC_update_tau(x, m, v0, v1, lamb, vB, a_gamma, b_gamma, a_pi, b_pi, a_tau, b_tau, max_iters, tol_prec, tol_elbo, cpus)
493 pool = Pool(cpus)
494 args = [(Z[:,i] - B0[i],m,vB,sigs_j[i],B[i,],phi[i,],theta[i],a_gamma,b_gamma) for i in range(P)]
...
769 return self._value
770 else:
--> 771 raise self._value
NameError: name 'Q' is not defined
Q is defined as a global variable in the function. I asked my friend to run it on his computer and it worked.
I tried different versions of Python but still got this error. It seems the global variable is not recognized in my environment.
My laptop has a Mac M1 chip. What could be causing the error?
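A plausible lead: since Python 3.8, the default multiprocessing start method on macOS is "spawn", which re-imports the module in each worker rather than inheriting the parent's memory, so a global assigned at runtime never exists in the children (whereas on a machine defaulting to "fork" it would). A minimal sketch of the difference, with hypothetical names:
import multiprocessing as mp

def worker(_):
    return Q  # relies on a module-level global

if __name__ == "__main__":
    Q = 10  # assigned at runtime, not at import time
    # Under "fork" the children inherit Q; under "spawn" (the macOS
    # default since Python 3.8) the module is re-imported, this block
    # never runs, and the worker raises NameError: name 'Q' is not defined.
    with mp.get_context("fork").Pool(2) as pool:
        print(pool.map(worker, range(2)))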
When saving a version in Kaggle, I get StdinNotImplementedError: getpass was called, but this frontend does not support input requests whenever I use the Transformers.Trainer class. The general code I use:
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(params)
trainer = Trainer(params)
trainer.train()
And the specific cell I am running now:
from transformers import Trainer, TrainingArguments,EarlyStoppingCallback
early_stopping = EarlyStoppingCallback()
training_args = TrainingArguments(
output_dir=OUT_FINETUNED_MODEL_PATH,
num_train_epochs=20,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
warmup_steps=0,
weight_decay=0.01,
logging_dir='./logs',
logging_steps=100,
evaluation_strategy="steps",
eval_steps=100,
load_best_model_at_end=True,
metric_for_best_model="eval_loss",
greater_is_better=False
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
callbacks=[early_stopping]
)
trainer.train()
When trainer.train() is called, I get the error below, which I do not get when training with native PyTorch. I understand that the error arises because something is asking for a password, but no password is requested when using native PyTorch code, nor when running the same trainer.train() code on Google Colab.
Any solution would be fine, such as:
Avoiding being asked for the password.
Enabling input requests when saving a notebook on Kaggle. If I understood correctly, I would then need to go to https://wandb.ai/authorize (after creating an account) and copy the generated key into the console. However, I do not understand why wandb should be necessary, since I have never explicitly used it so far.
wandb: You can find your API key in your browser here: https://wandb.ai/authorize
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/wandb/sdk/wandb_init.py", line 741, in init
wi.setup(kwargs)
File "/opt/conda/lib/python3.7/site-packages/wandb/sdk/wandb_init.py", line 155, in setup
wandb_login._login(anonymous=anonymous, force=force, _disable_warning=True)
File "/opt/conda/lib/python3.7/site-packages/wandb/sdk/wandb_login.py", line 210, in _login
wlogin.prompt_api_key()
File "/opt/conda/lib/python3.7/site-packages/wandb/sdk/wandb_login.py", line 144, in prompt_api_key
no_create=self._settings.force,
File "/opt/conda/lib/python3.7/site-packages/wandb/sdk/lib/apikey.py", line 135, in prompt_api_key
key = input_callback(api_ask).strip()
File "/opt/conda/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 825, in getpass
"getpass was called, but this frontend does not support input requests."
IPython.core.error.StdinNotImplementedError: getpass was called, but this frontend does not support input requests.
wandb: ERROR Abnormal program exit
---------------------------------------------------------------------------
StdinNotImplementedError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/wandb/sdk/wandb_init.py in init(job_type, dir, config, project, entity, reinit, tags, group, name, notes, magic, config_exclude_keys, config_include_keys, anonymous, mode, allow_val_change, resume, force, tensorboard, sync_tensorboard, monitor_gym, save_code, id, settings)
740 wi = _WandbInit()
--> 741 wi.setup(kwargs)
742 except_exit = wi.settings._except_exit
/opt/conda/lib/python3.7/site-packages/wandb/sdk/wandb_init.py in setup(self, kwargs)
154 if not settings._offline and not settings._noop:
--> 155 wandb_login._login(anonymous=anonymous, force=force, _disable_warning=True)
156
/opt/conda/lib/python3.7/site-packages/wandb/sdk/wandb_login.py in _login(anonymous, key, relogin, host, force, _backend, _silent, _disable_warning)
209 if not key:
--> 210 wlogin.prompt_api_key()
211
/opt/conda/lib/python3.7/site-packages/wandb/sdk/wandb_login.py in prompt_api_key(self)
143 no_offline=self._settings.force,
--> 144 no_create=self._settings.force,
145 )
/opt/conda/lib/python3.7/site-packages/wandb/sdk/lib/apikey.py in prompt_api_key(settings, api, input_callback, browser_callback, no_offline, no_create, local)
134 )
--> 135 key = input_callback(api_ask).strip()
136 write_key(settings, key, api=api)
/opt/conda/lib/python3.7/site-packages/ipykernel/kernelbase.py in getpass(self, prompt, stream)
824 raise StdinNotImplementedError(
--> 825 "getpass was called, but this frontend does not support input requests."
826 )
StdinNotImplementedError: getpass was called, but this frontend does not support input requests.
The above exception was the direct cause of the following exception:
Exception Traceback (most recent call last)
<ipython-input-82-4d1046ab80b8> in <module>
42 )
43
---> 44 trainer.train()
/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs)
1067 model.zero_grad()
1068
-> 1069 self.control = self.callback_handler.on_train_begin(self.args, self.state, self.control)
1070
1071 # Skip the first epochs_trained epochs to get the random state of the dataloader at the right point.
/opt/conda/lib/python3.7/site-packages/transformers/trainer_callback.py in on_train_begin(self, args, state, control)
338 def on_train_begin(self, args: TrainingArguments, state: TrainerState, control: TrainerControl):
339 control.should_training_stop = False
--> 340 return self.call_event("on_train_begin", args, state, control)
341
342 def on_train_end(self, args: TrainingArguments, state: TrainerState, control: TrainerControl):
/opt/conda/lib/python3.7/site-packages/transformers/trainer_callback.py in call_event(self, event, args, state, control, **kwargs)
386 train_dataloader=self.train_dataloader,
387 eval_dataloader=self.eval_dataloader,
--> 388 **kwargs,
389 )
390 # A Callback can skip the return of `control` if it doesn't change it.
/opt/conda/lib/python3.7/site-packages/transformers/integrations.py in on_train_begin(self, args, state, control, model, **kwargs)
627 self._wandb.finish()
628 if not self._initialized:
--> 629 self.setup(args, state, model, **kwargs)
630
631 def on_train_end(self, args, state, control, model=None, tokenizer=None, **kwargs):
/opt/conda/lib/python3.7/site-packages/transformers/integrations.py in setup(self, args, state, model, **kwargs)
604 project=os.getenv("WANDB_PROJECT", "huggingface"),
605 name=run_name,
--> 606 **init_args,
607 )
608 # add config parameters (run may have been created manually)
/opt/conda/lib/python3.7/site-packages/wandb/sdk/wandb_init.py in init(job_type, dir, config, project, entity, reinit, tags, group, name, notes, magic, config_exclude_keys, config_include_keys, anonymous, mode, allow_val_change, resume, force, tensorboard, sync_tensorboard, monitor_gym, save_code, id, settings)
779 if except_exit:
780 os._exit(-1)
--> 781 six.raise_from(Exception("problem"), error_seen)
782 return run
/opt/conda/lib/python3.7/site-packages/six.py in raise_from(value, from_value)
Exception: problem
You may want to try adding report_to="tensorboard" (or any other supported value) to your TrainingArguments:
https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments
If you have multiple loggers that you want to use, set report_to="all" (the default value).
Alternatively, try os.environ["WANDB_DISABLED"] = "true" so that wandb is always disabled.
See: https://huggingface.co/transformers/main_classes/trainer.html#transformers.TFTrainer.setup_wandb
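A minimal sketch combining both suggestions (OUT_FINETUNED_MODEL_PATH is the variable from the question; the remaining TrainingArguments are unchanged):
import os
from transformers import TrainingArguments

# Option 1: disable the wandb integration entirely; set this before trainer.train().
os.environ["WANDB_DISABLED"] = "true"

# Option 2: report only to TensorBoard so the wandb callback is never set up.
training_args = TrainingArguments(
    output_dir=OUT_FINETUNED_MODEL_PATH,
    report_to="tensorboard",
    # ... other arguments as in the question ...
)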
Why does this work in Google Colab but not in Docker?
So this is my Dockerfile.
FROM python:3.7
RUN pip install -q transformers tensorflow
RUN pip install ipython
ENTRYPOINT ["/bin/bash"]
And I'm executing this.
from transformers import *
nlp = pipeline(
'question-answering',
model='mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es',
tokenizer=(
'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es',
{"use_fast": False}
)
)
But I get this error
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 465/465 [00:00<00:00, 325kB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 242k/242k [00:00<00:00, 796kB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 112/112 [00:00<00:00, 70.1kB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 135/135 [00:00<00:00, 99.6kB/s]
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
/usr/local/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
461 if resolved_archive_file is None:
--> 462 raise EnvironmentError
463 except EnvironmentError:
OSError:
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-1-1f9fed95967a> in <module>
5 tokenizer=(
6 'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es',
----> 7 {"use_fast": False}
8 )
9 )
/usr/local/lib/python3.7/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, framework, **kwargs)
1882 "Trying to load the model with Tensorflow."
1883 )
-> 1884 model = model_class.from_pretrained(model, config=config, **model_kwargs)
1885
1886 return task_class(model=model, tokenizer=tokenizer, modelcard=modelcard, framework=framework, task=task, **kwargs)
/usr/local/lib/python3.7/site-packages/transformers/modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1207 for config_class, model_class in TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING.items():
1208 if isinstance(config, config_class):
-> 1209 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
1210 raise ValueError(
1211 "Unrecognized configuration class {} for this kind of TFAutoModel: {}.\n"
/usr/local/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
467 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a file named one of {TF2_WEIGHTS_NAME}, {WEIGHTS_NAME}.\n\n"
468 )
--> 469 raise EnvironmentError(msg)
470 if resolved_archive_file == archive_file:
471 logger.info("loading weights file {}".format(archive_file))
OSError: Can't load weights for 'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es'. Make sure that:
- 'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
However, this works perfectly in Google Colab. The Colab notebook doesn't require a GPU to run, so why wouldn't it work in Docker? What dependencies could I be missing? The error message doesn't suggest missing dependencies so much as that the model isn't there, but look:
And yes, the model "mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es" does exist on huggingface.co.
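A guess based on the traceback: the log line "Trying to load the model with Tensorflow." fires when PyTorch is not available, and the final error looks for tf_model.h5. If this model repository only ships PyTorch weights (pytorch_model.bin), adding torch to the image would let the pipeline load them; Colab ships with torch preinstalled, which would explain the difference. A sketch of the amended Dockerfile under that assumption:
FROM python:3.7
# torch added so the pipeline can load pytorch_model.bin instead of
# falling back to the (missing) TensorFlow weights tf_model.h5
RUN pip install -q transformers tensorflow torch
RUN pip install ipython
ENTRYPOINT ["/bin/bash"]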
I'm trying to run the language model finetuning script (run_language_modeling.py) from the huggingface examples with my own tokenizer (I just added several tokens; see the comments). I have a problem loading the tokenizer. I think the problem is with AutoTokenizer.from_pretrained('local/path/to/directory').
Code:
from transformers import *
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
# special_tokens = ['<HASHTAG>', '<URL>', '<AT_USER>', '<EMOTICON-HAPPY>', '<EMOTICON-SAD>']
# tokenizer.add_tokens(special_tokens)
tokenizer.save_pretrained('../twitter/twittertokenizer/')
tmp = AutoTokenizer.from_pretrained('../twitter/twittertokenizer/')
Error Message:
OSError Traceback (most recent call last)
/z/huggingface_venv/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, pretrained_config_archive_map, **kwargs)
248 resume_download=resume_download,
--> 249 local_files_only=local_files_only,
250 )
/z/huggingface_venv/lib/python3.7/site-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only)
265 # File, but it doesn't exist.
--> 266 raise EnvironmentError("file {} not found".format(url_or_filename))
267 else:
OSError: file ../twitter/twittertokenizer/config.json not found
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-32-662067cb1297> in <module>
----> 1 tmp = AutoTokenizer.from_pretrained('../twitter/twittertokenizer/')
/z/huggingface_venv/lib/python3.7/site-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
190 config = kwargs.pop("config", None)
191 if not isinstance(config, PretrainedConfig):
--> 192 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
193
194 if "bert-base-japanese" in pretrained_model_name_or_path:
/z/huggingface_venv/lib/python3.7/site-packages/transformers/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
192 """
193 config_dict, _ = PretrainedConfig.get_config_dict(
--> 194 pretrained_model_name_or_path, pretrained_config_archive_map=ALL_PRETRAINED_CONFIG_ARCHIVE_MAP, **kwargs
195 )
196
/z/huggingface_venv/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, pretrained_config_archive_map, **kwargs)
270 )
271 )
--> 272 raise EnvironmentError(msg)
273
274 except json.JSONDecodeError:
OSError: Can't load '../twitter/twittertokenizer/'. Make sure that:
- '../twitter/twittertokenizer/' is a correct model identifier listed on 'https://huggingface.co/models'
- or '../twitter/twittertokenizer/' is the correct path to a directory containing a 'config.json' file
If I change AutoTokenizer to BertTokenizer, the code above works. I can also run the script without any problem if I load by shortcut name instead of path. But the script run_language_modeling.py uses AutoTokenizer, and I'm looking for a way to get it running.
Any ideas? Thanks!
The problem is that the saved tokenizer directory contains nothing that would indicate which tokenizer class to instantiate.
For reference, see the rules defined in the Huggingface docs. Specifically, since you are using BERT:
contains bert: BertTokenizer (Bert model)
Otherwise, you have to specify the exact type yourself, as you mentioned.
AutoTokenizer.from_pretrained fails if the specified path does not contain the model configuration file (config.json), which it needs solely to determine the tokenizer class to instantiate.
In the context of run_language_modeling.py, this usage of AutoTokenizer is buggy (or at least leaky): there is no point in specifying the (optional) tokenizer_name parameter if it's identical to the model name or path, so to my understanding it is supposed to support exactly the case of a modified tokenizer. I also found this issue very confusing.
The best workaround that I have found is to add a config.json to the tokenizer directory containing only the "missing" configuration:
{
"model_type": "bert"
}
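Putting the workaround together with the snippet from the question, a minimal sketch:
import json
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
tokenizer.save_pretrained('../twitter/twittertokenizer/')

# Add a minimal config.json so AutoTokenizer can infer the tokenizer class.
with open('../twitter/twittertokenizer/config.json', 'w') as f:
    json.dump({"model_type": "bert"}, f)

tmp = AutoTokenizer.from_pretrained('../twitter/twittertokenizer/')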
When loading a modified or pretrained tokenizer, you can alternatively pass the model config explicitly (the placeholder names below stand for your tokenizer directory and the folder containing the model's config file):
tokenizer = AutoTokenizer.from_pretrained(path_to_tokenizer_dir, config=AutoConfig.from_pretrained(path_to_model_config_dir))
I'm trying to build a convolutional neural network for image classification in Python.
I run my code on CoLab and have loaded my data on Google Drive.
I can see all the files and folders in my google drive from python, but when I try to actually load an image it gives me the error in the title.
I'm using the skimage.io package. I'm actually just running a notebook I found on Kaggle, so the code should run fine; the only difference I noticed is that the Kaggle user was probably not working on Colab with their data in Google Drive, so maybe that's the problem. Anyway, here's my code:
from skimage.io import imread
img=imread('/content/drive/My Drive/CoLab/Data/chest_xray/train/PNEUMONIA/person53_bacteria_255.jpeg')
Which gives me the following error:
AttributeError: 'NoneType' object has no attribute 'ReadAsArray'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-12-4a64aebb8504> in <module>()
----> 1 img=imread('/content/drive/My Drive/CoLab/Data/chest_xray/train/PNEUMONIA/person53_bacteria_255.jpeg')
4 frames
/usr/local/lib/python3.6/dist-packages/skimage/io/_io.py in imread(fname, as_gray, plugin, flatten, **plugin_args)
59
60 with file_or_url_context(fname) as fname:
---> 61 img = call_plugin('imread', fname, plugin=plugin, **plugin_args)
62
63 if not hasattr(img, 'ndim'):
/usr/local/lib/python3.6/dist-packages/skimage/io/manage_plugins.py in call_plugin(kind, *args, **kwargs)
208 (plugin, kind))
209
--> 210 return func(*args, **kwargs)
211
212
/usr/local/lib/python3.6/dist-packages/imageio/core/functions.py in imread(uri, format, **kwargs)
221 reader = read(uri, format, "i", **kwargs)
222 with reader:
--> 223 return reader.get_data(0)
224
225
/usr/local/lib/python3.6/dist-packages/imageio/core/format.py in get_data(self, index, **kwargs)
345 self._checkClosed()
346 self._BaseReaderWriter_last_index = index
--> 347 im, meta = self._get_data(index, **kwargs)
348 return Array(im, meta) # Array tests im and meta
349
/usr/local/lib/python3.6/dist-packages/imageio/plugins/gdal.py in _get_data(self, index)
64 if index != 0:
65 raise IndexError("Gdal file contains only one dataset")
---> 66 return self._ds.ReadAsArray(), self._get_meta_data(index)
67
68 def _get_meta_data(self, index):
AttributeError: 'NoneType' object has no attribute 'ReadAsArray'
First, instead of My Drive it should be MyDrive (no space).
If it still doesn't work, you can try the following:
%cd /content/drive/MyDrive/CoLab/Data/chest_xray/train/PNEUMONIA
img = imread('person53_bacteria_255.jpeg')
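Equivalently, keeping the absolute path (a sketch, assuming Drive is already mounted at /content/drive):
from skimage.io import imread

img = imread('/content/drive/MyDrive/CoLab/Data/chest_xray/train/PNEUMONIA/person53_bacteria_255.jpeg')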