I just created a trial account to check out some features in GCP; I have some experience from work, but I never had this problem on my work account.
I have an issue with the user interface: it's not friendly at all, and the web page content doesn't load properly (not the menu nor the search bar, those are fine). Also, several times it doesn't load at all, and I get the "Failed to load" error message.
For example: I go to Cloud Storage and everything's crooked, or piled into one column on the left side of the screen, including the form when trying to create any element.
Has anyone encountered this issue?
This is my first try at Fauna, following this tutorial: https://css-tricks.com/how-to-create-a-client-serverless-jamstack-app-using-netlify-gatsby-and-fauna/. All goes well; after uploading the example shopnotes.gql file from the Fauna dashboard and inspecting the created documents, I quit for the night. Today I go back to the Fauna Playground, but get an error message: "Issues processing last GraphQL query". I'm told to clear out my local storage, but clicking the "Clear Local Storage" button does nothing; the page refreshes with the same error. So I'm dead in the water. I thought I'd just delete the database I created to start over, but I can't find a way to delete it! What do I do now?
Since it appears that the Dashboard problem you encountered has been resolved, I'll answer your question from the comments.
To delete an unwanted database using the Dashboard, follow these steps:
Visit the Dashboard home page
Click on the database you want to delete
Click the "Settings" link beside the database name (near the top)
Click "Delete". A confirmation popup appears.
Click "Delete"
Repeat these steps for each database that you want to remove.
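If the Dashboard is still misbehaving, a database can also be deleted programmatically. Here is a minimal sketch using the faunadb JavaScript driver; the database name ('shopnotes') and the admin secret are placeholders you'd replace with your own:

    const faunadb = require('faunadb');
    const q = faunadb.query;

    // The secret must be an *admin* key for the account, not a server key
    // scoped to the database you are deleting.
    const client = new faunadb.Client({ secret: 'YOUR_ADMIN_SECRET' });

    // 'shopnotes' is a placeholder; use the name of the database to remove.
    client
      .query(q.Delete(q.Database('shopnotes')))
      .then((result) => console.log('Deleted:', result))
      .catch((err) => console.error(err));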
Note that deleting a database removes all of its content. There is no user-available way to undelete a database; however, our support team might be able to help if you ask right away. Complete removal of document data in storage is handled by a background task, so it may be possible to recover most (if not all) of your content. If you wait 24 hours or more, the data is likely gone.
Is it possible to set some flag in my browser so that I always get the reCAPTCHA image challenges? Sometimes when you click on the "I am not a robot" button, it gives you a pop-up challenge with something like "Click all the images which contain a car", but sometimes it just checks off the box and takes your word for it that you're not a robot.
I would like to test the UI of my tool both on a desktop and on mobile, and make sure that the challenge pop up shows up and interacts well with other elements of the page.
In other words, as a developer, I want Google to think that I'm a robot so that it always gives me the visual challenge.
Is there any way to force this behavior?
Note: I've done some research and was unable to find any relevant questions or blog posts that might yield an answer.
Force Google recaptcha to use simple checkbox click challenge asks for a way to force Google to NOT use the visual challenge, only the checkbox
How to force recheck user with reCAPTCHA? talks about forcing a recheck of some kind, but has no answers
https://groups.google.com/forum/#!topic/recaptcha/2ed-s3KK3Do actually asks the same question as mine, but users did not seem keen on providing answers, with one user just suggesting not to use reCAPTCHA at all!
https://developers.google.com/recaptcha/docs/faq#id-like-to-run-automated-tests-with-recaptcha-v2-what-should-i-do is straight from Google, but it does exactly the opposite of what I want: it sets your site up so that the captcha appears on the page but is actually a test captcha that always lets you pass, and NEVER gives you the challenge. I want the exact inverse of this.
The methods described here should generally work, but there is no guarantee. There is a very easy way to guarantee that the Google reCAPTCHA challenge always shows up: add a custom BOT device in the developer tools and then use it when testing.
In Chrome DevTools, open Settings, then open Devices.
Add a custom device with any name and set User Agent String to Googlebot/2.1
Finally, in Device Mode, at the left of the top bar, choose the custom device that you created (the default is Responsive).
Thanks to the SO users who put this up in the answer and follow-up comment here.
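If you'd rather reproduce this in an automated test than in DevTools, the same user-agent trick can be scripted. A minimal sketch with Puppeteer, assuming the Googlebot user agent still triggers the challenge (the URL is a placeholder):

    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch({ headless: false });
      const page = await browser.newPage();

      // Spoof the same user agent as the custom DevTools device.
      await page.setUserAgent('Googlebot/2.1 (+http://www.google.com/bot.html)');

      // Placeholder: the page that hosts your reCAPTCHA form.
      await page.goto('https://example.com/my-form');

      // From here, click the checkbox (manually or via the script) and
      // the image challenge should appear.
    })();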
I too have been looking for similar functionality. While I have not found a code-based solution to force the challenge, I have found a fairly reliable hack.
Grab a VPN tool (I happen to use IP Vanish), then connect to a remote server (I've had success connecting to China). Then, open up a private/incognito window and fill out your form.
From my testing, the combination of the remote IP and the blank user session triggers the challenge.
Here are a few things you can try. In my experience, all of them will increase your chances of getting a challenge.
Log in at https://www.google.com/recaptcha/admin and edit your reCAPTCHA settings. Under Security Preference, choose Most Secure.
Use a VPN + incognito mode (as suggested here).
If you're using the invisible reCAPTCHA, I found that using explicit rendering + immediately calling grecaptcha.execute() after grecaptcha.render() will usually trigger the challenge; see the sketch below. I suspect this is because Google's AI expects a user interaction of some kind to trigger grecaptcha.execute(), not the onloadCallback itself.
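For that third point, here is a minimal sketch, assuming api.js is loaded with the onload=onloadCallback and render=explicit query parameters; YOUR_SITE_KEY and the recaptcha-container element ID are placeholders:

    // <script src="https://www.google.com/recaptcha/api.js?onload=onloadCallback&render=explicit" async defer></script>
    var onloadCallback = function () {
      var widgetId = grecaptcha.render('recaptcha-container', {
        sitekey: 'YOUR_SITE_KEY', // placeholder
        size: 'invisible',
        callback: function (token) {
          console.log('verified, token:', token);
        }
      });

      // Executing immediately, with no user gesture in between, is what
      // seems to make the challenge more likely to appear.
      grecaptcha.execute(widgetId);
    };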
I use the reCAPTCHA SDK on Android, and I also ran into the need to force validation when testing. After many tries, what worked was toggling airplane mode off and on, after which the challenge could be triggered again on retest. My guess is that Google had put my IP on a whitelist in the background, which is why I was passing verification without any challenge.
That should be possible, because when LinkedIn forcibly logged out a user for excessive usage, it showed a captcha on the next login, and the challenge always appeared.
Unfortunately, LinkedIn switched from reCAPTCHA to another provider just a few days ago, so I cannot simply look through their JavaScript code.
This is what makes me believe that reCAPTCHA does have an undocumented option to force the challenge.
2022 and later
It seems increasingly hard to trigger the challenge of the invisible reCAPTCHA. Using the user agent of a bot and going into incognito mode is not enough anymore. A VPN might work, but I do not trust free VPN services.
I am, however, still able to trigger the reCAPTCHA challenge when I only use the keyboard while filling in the form fields and press the submit button with the Enter key. It seems like Google reCAPTCHA now also follows your mouse movements to determine whether you are a real user. Make sure never to hover your mouse cursor over the webpage and only use the keyboard.
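If you want to script this keyboard-only flow, here is a sketch with Puppeteer; the URL, field values, and tab order are placeholders for your own form:

    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch({ headless: false });
      const page = await browser.newPage();
      await page.goto('https://example.com/my-form'); // placeholder URL

      // Fill in the form using only the keyboard: no mouse movement at all.
      await page.keyboard.press('Tab');           // focus the first field
      await page.keyboard.type('test@example.com');
      await page.keyboard.press('Tab');           // next field
      await page.keyboard.type('some value');
      await page.keyboard.press('Enter');         // submit with Enter
    })();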
I was looking for something like this, and after some research plus trial & error, what worked for me was to use the invisible reCAPTCHA and invoke the challenge with JS.
After you have loaded the reCAPTCHA script on your page, call
grecaptcha.execute()
and the challenge might be invoked.
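For reference, a minimal sketch of the automatic-render variant I mean; YOUR_SITE_KEY, my-form, and my-submit are placeholders:

    // Markup, rendered automatically by api.js:
    // <div class="g-recaptcha" data-sitekey="YOUR_SITE_KEY"
    //      data-size="invisible" data-callback="onToken"></div>
    // <script src="https://www.google.com/recaptcha/api.js" async defer></script>

    function onToken(token) {
      // reCAPTCHA verified (with or without a challenge); submit for real.
      document.getElementById('my-form').submit();
    }

    document.getElementById('my-submit').addEventListener('click', function (e) {
      e.preventDefault();
      grecaptcha.execute(); // may pop the image challenge instead of passing silently
    });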
I have a dev site running on Heroku. I put in google analytics. It was working fine. I decided to create a new tracking code so I could wipe out any previous data and start fresh.
I put the new tracking code in my code, then pushed to Heroku. I noticed I was getting hits, but from some weird URL that looks like spam. What is this /www1.free-share-buttons.top, and why in the world is it getting logged?
This is most likely the result of so-called referral spam. You are getting hits from bots that target random UA identifiers, not from actual visits or the Analytics code on your website. To double-check this, view your Top Pages report in Analytics and add Hostname as a secondary dimension. You'll see (not set) next to these unknown pages, which means they were pushed into Analytics from somewhere other than your site. There are several ways to prevent this from happening, e.g. adding an Analytics View filter that excludes hostnames other than your own.
I am facing a very strange situation and don't know how to solve it. Please help.
I am working on a web site with a research page that measures the performance of tasks done on the site. It is a kind of report page that checks various conditions in the database tables, retrieves the information, and sends an email to the administrator. The page runs every hour, i.e. 24 times per day.
Now, the issue: the web site works correctly, but while the research page is running, the other pages of the site do not. Say, for example, I am on Page1 and at the same time the research page starts running. If I click the link to Page2 while the research page is running, Page2 will not be displayed until the research page finishes its work. Can anyone tell me what could cause this behavior?
Here is some more information regarding the issue:
The web site is built in Visual Studio 2008 (C#) and SQL Server 2008 is used.
The SQL query for the research page is quite complex; however, I have made all the optimizations possible.
There are two connection strings (with different users for the same database) used in the web site: one for the research page and a second for all the other pages in the site.
Please help me find the issue. Thanks in advance.
This may be due to mishandling of threads within your website. Have you tried working with threads and building asynchronous handlers in your server-side web code?
Check out these links; they might help:
http://msdn.microsoft.com/en-us/library/ms741870.aspx
http://www.albahari.com/threading/part3.aspx
Also take care to release any resources that might be locking up tables, even after the thread's work has finished.
I'm having a strange problem and no luck debugging.
I was tasked with writing a JSR168 compliant portlet to search a database. When you open the portlet, you're given 6 search boxes for different criteria to search several thousand records. Once you press search, it brings up another page with the search results (it keeps the first page and uses <jsp:include> for the second page so users can see/change their search terms). From the search results page, the user can click on one of the results (which redirects to a new page) and get more detailed information about it.
All of that works. The problem is when the user wants to search again.
When I developed this, I used Liferay installed on my local machine. Everything works perfectly in IE, Firefox, and Chrome. However, when I deploy it to our development portal (IBM WebSphere), it doesn't quite work in IE. In Firefox/Chrome, when a user is on the detailed information page, they can hit Back in their browser and it loads a cached version of the search results. Perfect, because this content rarely changes.
However, in IE, when they click the Back button on the detailed view, we get a "Webpage has expired" message. I've tried every caching setting in the portal settings for the portlet as well as the page, but haven't had any luck.
Anyone have any ideas?
There are settings at the portal level too.
Check out the following link:
http://publib.boulder.ibm.com/infocenter/wpdoc/v6r0/index.jsp?topic=/com.ibm.wp.ent.doc/wps/adbakbut.html
You could try tweaking some of these parameters as required by your portlets.
The "Webpage has expired message" in IE indicates that you did a POST. You could try using a GET, which should not have this problem on "back" command.
You should install WebSphere Portal on your developer machine and test locally before going to another environment.