I was having issues with my current data and wanted to start the whole account over. It turns out there is a link you can hit to start over.
Here is the link to delete everything.
https://app.pipedrive.com/settings/start_over
Following this link I was able to delete everything on the account.
I use a Ledger Nano S for https://wallet.betanet.near.org/, and the Ledger Nano S was configured successfully. After that I deleted all keys from my account and kept just one Full Access Key (created by the Ledger Nano S).
Now I can't add another full access key, no matter which method I use. When I run near login --useLedgerKey, the web verification page shows:
An error occurred while approving this action. Please try again!
When I try to enable seed phrase recovery, on the [Enter the following word from your recovery phrase to complete the setup process.] page, confirming on the Ledger shows:
An error occurred while setting up seed phrase recovery. Please try again!
My Ledger does still show the confirmation prompt:
DANGER: This gives full access to a device other than Ledger
but when I push the √ button to approve the transaction, it still shows the error:
An error occurred while approving this action. Please try again!
I think it's a bug that happens when you keep just one access key from the Ledger Nano S.
My account: catcatcat.test
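For reference, the access keys that remain on the account can be listed with near-cli. A minimal check, assuming near-cli is installed and that the NEAR_ENV variable selects the betanet network:

# List all access keys currently registered on the account
NEAR_ENV=betanet near keys catcatcat.test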
This is probably an error on the NEAR Wallet side. If you don't mind me asking, what is your account ID? I can better understand what is happening by viewing the chain activity associated with this.
Unfortunately, it takes time to get a new app version published in Ledger Live (even in Developer Mode). The first version that got published (1.0.0) has the bug with adding full access keys.
As far as I know, this bug is fixed in the latest version, available here:
https://github.com/LedgerHQ/app-near/pull/2
Your best shot for now is to compile it from source, following the instructions in the README:
https://github.com/LedgerHQ/app-near/blob/6a8d3263896fdf65ac06961c6e7ed6a9912d5480/README.md
We have a TFS user who appears to have the same permissions as a colleague on their team. However, this user can't be #mentioned, despite being able to open, close, and otherwise edit the bug they can't be mentioned in.
The following error is emailed to me after attempting to #mention this user.
#Joe Bloggs cannot be mentioned in the Bug 41729. The user does not have sufficient permissions.
We can't see any differences between a user that can be tagged and one that can't.
We were getting this same error.
We went into Project Settings -> Security and clicked "Remove" on that user, at which point the "Remove" changed into an "Undo". We then clicked the "Undo".
Now we are no longer seeing this error. Seems like something internally just needed to be recombobulated :-)
If you are sure that the user exists and that "Joe Bloggs" is how they are mentioned, then they do not have read permission on the bug you are referring to. Go back and check which groups they belong to in the project.
I decided to try out Firebase Hosting and wanted to start fresh, so I deleted my one and only app. But when I tried to create a new app with the same name, I couldn't, due to the error:
"This Firebase URL is not available"
I can only guess this is because of caching of app names/URLs? Hopefully it will become available (unless someone else beats me to it) after some timeout? Any info from others who have experience with this issue or otherwise know the answer is appreciated!
I'm not sure whether this is the right place to ask, although Firebase suggests coming to SO, since according to their website they monitor Firebase-related questions closely.
Thanks!
Once you delete a Firebase URL, it is permanently unavailable. It cannot be recovered.
During confirmation you should see a message to that effect.
This stems from a number of abuse vectors that become possible by misappropriating a project ID the prior owner believes is deleted, but which could still have apps/releases in the wild attached to the defunct backend. Since compliance requires that we purge all data related to the project, including information about ownership, there's not even a way to restore one you personally deleted.
I moved a website to a new server. The domain stays the same and the file structure stays the same, but the path to public_html has changed. The database has also been moved.
I tried to clear the cache, but I don't think I succeeded. This is the error I get:
Could not find action file at: /home/account_name/domains/domain.co.uk/public_html/manager/controllers/default/welcome.php
account_name is different now.
I don't have access to the old server, so I can't log in and clear the cache there. I tried to do it using a PHP script I found, but it didn't help.
The "moving to a new server" documentation describes this welcome.php error and how to fix it, but since I don't have access to the website on the old server, I can't follow those steps.
I also can't log in and clear the cache from the admin panel, because this message appears when I try to access it.
In the database I also changed modx_workspaces->path from {core_path} to home/account_name/domains/domain.co.uk/publis_html/core, but that didn't help.
How can I clear the cache, or, if that's not the cause, what should I do to make it work?
Update
I have changed the paths in these files:
config.core.php
connectors/config.core.php
core/config/config.inc.php
manager/config.core.php
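To double-check that all four files agree on the new path, a quick grep from the site root works. A sketch only; the directory is the public_html path from the error above:

# Print every core path setting found in the four config files
grep -Hi "core_path" config.core.php connectors/config.core.php manager/config.core.php core/config/config.inc.php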
In .htaccess I couldn't find any path to the website, so I didn't change anything there.
I removed all content from core/cache/ except one file (.gitignore). Now if I go to domain.co.uk/manager/ it's a blank page, no content at all, and I still can't log in.
Clear the cache on the new server manually via FTP or from a shell.
Change that modx_workspaces value back.
Did you change all your settings in core/config/config.inc.php? If not, do so; that is where most of your paths and database credentials are set.
You have a backup? Good!
Now upgrade to the same version of MODX; that should fix all your path issues. (Make sure you are not logged into the manager while trying to upgrade.)
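For the cache step, something like this over SSH should do it. A sketch using the paths from the question; adjust account_name and the domain to your own:

# Empty the MODX cache folder without deleting the folder itself
rm -rf /home/account_name/domains/domain.co.uk/public_html/core/cache/*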
When moving the site to a new server, watch two things:
the right paths in these files:
/config.core.php
/core/config/config.inc.php
/connectors/config.core.php
/manager/config.core.php
that the folder /core/cache/ is empty. It can be cleared simply by removing its contents via FTP.
And correct the value in the database (modx_workspaces -> path) back to {core_path}.
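If you would rather fix that value from a shell than through phpMyAdmin, a one-liner along these lines works. A sketch only: it assumes the default modx_ table prefix, a single workspace row with id 1, and placeholder database name and user that you should replace with your own:

# Reset the workspace path placeholder so MODX resolves it from the config
mysql -u db_user -p modx_db -e "UPDATE modx_workspaces SET path = '{core_path}' WHERE id = 1;"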
Recently the search crawler stopped working on my MOSS installation. The message in the crawl log is
Access is denied. Check that the Default Content Access Account has access to this content, or add a crawl rule to crawl this content. (The item was deleted because it was either not found or the crawler was denied access to it.)
The default content account is an admin on the site collection that I am trying to crawl.
Almost every result for this error on Google tells me to add the DisableLoopbackCheck registry key with a value of 1. I have done this and rebooted, and the error continues.
The "Do not allow Basic Authentication" checkbox in my crawl rule screen is unchecked.
Is there anything else that could be causing this error? Something with file system or database permissions maybe?
Edit: All signs seem to indicate that the "DisableLoopbackCheck" should fix this, but it doesn't seem to work. Could I be doing something wrong when I enable this?
I'm doing it in My Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa, where I create a new DWORD value called DisableLoopbackCheck and give it the hex value 1.
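For what it's worth, the same change can be applied from an elevated command prompt instead of regedit, which rules out typos. This is just the standard DisableLoopbackCheck fix expressed as a command; a reboot is still needed afterwards:

REM Create the DisableLoopbackCheck DWORD value under Lsa and set it to 1
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v DisableLoopbackCheck /t REG_DWORD /d 1 /f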
It turned out not to be related to DisableLoopbackCheck. The problem was that the search was accessing the site through its external URL. Apparently you are not supposed to be able to access a site from within the server using the same URL you use to reach it from outside, at least in pre-SP1 MOSS. But I had been doing exactly that for about two years. MS Support tells me they don't quite understand how it ever worked, so it looks like I ran into an issue that should have been manifesting all along. I'm not sure what caused it to appear suddenly; maybe some routine patching of the server. The solution was to extend the web application so it was accessible internally through the machine name, then point the crawler at that.