Cypress baseUrl configuration - cypress

How can I work around having a single baseUrl configured for Cypress when I want to run tests against different URLs in different folders? Setting baseUrl limits me to testing only that one URL.
Regards

You can add multiple URLs in your cypress.json file like:
{
  "appUrl1": "https://www.website1.com",
  "appUrl2": "https://www.website2.com"
}
and then access them in your tests like:
Cypress.config('appUrl1')
Cypress.config('appUrl2')
Check out the cypress docs on Cypress.config.
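The lookup itself is just a key read from the resolved configuration. A stand-alone sketch of the pattern (the URLs are placeholders; the stub mimics what `Cypress.config()` does inside a real spec, where Cypress provides it for you):

```javascript
// Stand-alone illustration of the per-key lookup pattern.
// In a real spec, Cypress provides Cypress.config(); here we stub it
// with the same keys you would put in cypress.json.
const Cypress = {
  _config: {
    appUrl1: 'https://www.website1.com',
    appUrl2: 'https://www.website2.com',
  },
  config(key) {
    return this._config[key];
  },
};

// A spec in the first folder would then do: cy.visit(Cypress.config('appUrl1'))
console.log(Cypress.config('appUrl1')); // → https://www.website1.com
```

Each folder's specs can then read whichever key they need, while baseUrl stays free for relative `cy.visit()` calls.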

Related

How to add cache headers to Strapi API endpoint

I'd like to understand how to enable caching in Strapi for a specific (or any) API endpoint. At the moment, when the browser hits my endpoint, I don't see any caching-related headers in the response. Is there a way to use ETags and a long cache time so the JSON response can be cached?
There is one mention of ETags in the docs, but I'm not sure how to implement it. If anyone can provide more detailed information, it would be appreciated.
At least for static files this can be done within Strapi itself. The middlewares documentation suggests that there is already a middleware called public, which sets a Cache-Control header with its maxAge when it serves files from the public/ directory.
But if you want your uploaded files cached (i.e. files within the public/uploads/ directory), that's not enough, because the middleware strapi-provider-upload-local (not yet documented) runs first.
A recently published package solves this issue:
npm i strapi-middleware-upload-plugin-cache
Simply activate it by adding or modifying the config/middleware.js file with the following content:
module.exports = ({ env }) => ({
  settings: {
    'upload-plugin-cache': {
      enabled: true,
      maxAge: 86400000
    }
  }
});
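Note that maxAge above is expressed in milliseconds (the usual Koa convention, which Strapi builds on). A quick sanity check in plain Node, using the value from the config above:

```javascript
// maxAge is in milliseconds: 86400000 ms is exactly one day
const maxAgeMs = 86400000;
const days = maxAgeMs / (1000 * 60 * 60 * 24);
console.log(days); // → 1
```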
I suggest you manage this kind of thing outside of Strapi, with another service. For example, if you host your app on AWS, you can use CloudFront.

How to properly connect Nuxt.js with a laravel backend?

I am starting a new project, Nuxt.js for the frontend and Laravel for the backend.
How can I connect the two?
I have installed a new Nuxt project using create-nuxt-app, and a new Laravel project.
As far as I have searched, I figured I need some kind of environment variables.
In my nuxt project, I have added the dotenv package and placed a new .env file in the root of the nuxt project.
And added CORS to my laravel project, as I have been getting an error.
The variables inside are indeed accessible from the project, and I'm using them
like this:
APP_NAME=TestProjectName
API_URL=http://127.0.0.1:8000
And accessing them like this:
process.env.APP_NAME, etc.
To make HTTP calls, I am using the official Axios module for Nuxt.js, and to test it I used it in one of the components that came by default.
The backend:
Route::get('/', function () {
return "Hello from Laravel API";
});
and from inside the component:
console.log(process.env.API_URL) // gives 127.0.0.1:8000
// But this gives undefined:
this.$axios.$get(process.env.API_URL).then((response) => {
  console.log(response);
});
What am I doing wrong here?
I have tried to describe my setup and problem as best as I can. If I overlooked something, please tell me and I will update my question. Thanks.
Assuming that when you visit http://127.0.0.1:8000/ in your browser you get the expected response, let's see what might be wrong on the front end:
First you should make sure that the axios module is initialized correctly. Your nuxt.config.js file should include the following:
// inclusion of module
modules: [
  '@nuxtjs/axios',
  // ...other modules
],
// configuration of module
axios: {
  baseURL: process.env.API_URL,
},
Keep in mind that, depending on the component's lifecycle, your axios request may run on the client side (after server-side rendering), where the address 127.0.0.1 might be invalid. I would suggest avoiding 127.0.0.1 or localhost when defining API URLs, and preferring your local network IP for local testing.
After configuring the axios module as above, you can make requests in your components using just relative api uris:
this.$axios.$get('/').then(response => {
  console.log(response)
}).catch(err => {
  console.error(err)
})
While testing if this works it is very helpful to open your browser's dev tools > network tab and check the state of the request. If you still don't get the response, the odds are that you'll have more info either from the catch section, or the request status from the dev tools.
Keep us updated!
Nuxt has a routing file structure that makes it easy to set up server-side rendering and also helps with maintainability. This can cause Laravel and Nuxt to fight over routing; you will need to configure this to get it working correctly.
I'd suggest you use Laravel-Nuxt as a lot of these small problems are solved for you.
https://github.com/cretueusebiu/laravel-nuxt

Swagger page being redirected from https to http

An AWS Elastic Load Balancer listens over HTTPS (443) using SSL and forwards requests to EC2 instances over HTTP (80), with IIS hosting a .NET Web API application that uses Swashbuckle to describe the API methods.
The home page of the API (https://example.com) has a link to the Swagger documentation, which reads https://example.com/swagger/ui/index.html when you hover over the link.
If I click the link, the browser is redirected to http://example.com/swagger/ui/index.html, which displays a Page Not Found error.
But if I type https://example.com/swagger/ui/index.html directly into the browser's URL bar, the Swagger page loads; then, when expanding a method and clicking "Try it out", the Request URL starts with "http" again.
This configuration is only for Stage and Production environments. Lower environments don't use the load balancer and just use http.
Any ideas on how to stop https being redirected to http? And how make swagger to display Request URLs using https?
Thank you
EDIT:
I'm using a custom index.html file
This seems to be a known issue for Swashbuckle. Quote:
"By default, the service root url is inferred from the request used to access the docs. However, there may be situations (e.g. proxy and load-balanced environments) where this does not resolve correctly. You can workaround this by providing your own code to determine the root URL."
What I did was provide the root url and/or scheme to use based on the environment
GlobalConfiguration.Configuration
    .EnableSwagger(c =>
    {
        ...
        c.RootUrl(req => GetRootUrlFromAppConfig(req));
        ...
        c.Schemes(GetEnvironmentScheme());
        ...
    })
    .EnableSwaggerUi(c =>
    {
        ...
    });
where
public static string[] GetEnvironmentScheme()
{
    ...
}

public static string GetRootUrlFromAppConfig(HttpRequestMessage request)
{
    ...
}
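Whatever the stack, the usual way to decide the scheme behind a TLS-terminating load balancer is the X-Forwarded-Proto header that the ELB adds to each forwarded request. A language-neutral sketch in plain JavaScript (the function name and host are illustrative, not part of Swashbuckle):

```javascript
// Behind an ELB that terminates TLS, the client's original scheme
// arrives in the X-Forwarded-Proto header; derive the root URL from it.
function rootUrlFor(headers, host) {
  const scheme = headers['x-forwarded-proto'] || 'http';
  return scheme + '://' + host;
}

console.log(rootUrlFor({ 'x-forwarded-proto': 'https' }, 'example.com'));
// → https://example.com
```

A GetRootUrlFromAppConfig-style helper would apply the same logic (or read a fixed value from configuration) on the server side.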
The way I would probably do it is to have one main file and, during your application's build, generate a different swagger file based on environment parameters for schemes and hosts.
That way, you have to manage only one swagger file across your environments, and you only have to manage a few extra environment properties, host and schemes (if you don't already have them).
Since I don't know Swashbuckle, I can't answer your first question (the redirect) for sure.

General URL for REST call on both server and development machine

My REST call on my development machine is this:
return $resource('http://127.0.0.1/projectname/index.php/api/pipedata/pipes/format/json', {}, {});
Since I have several projects on my dev machine, I can't have it at the root.
But on my server I have it on root so the correct url is:
http://127.0.0.1/index.php/api/pipedata/pipes/format/json
"projectname" is removed.
What's the best practice to solve this? On the server or client side?
I have php with codeigniter on server and angular js on client.
You can create a config file for each environment with a variable called ApiDomain, like this:
In dev configuration config_dev.js:
config = {};
config.ApiDomain = 'http://127.0.0.1/projectname';
In prod configuration config_prod.js:
config = {};
config.ApiDomain = 'http://127.0.0.1';
Then in code you can refer to config.ApiDomain
return $resource(config.ApiDomain +'/index.php/api/pipedata/pipes/format/json', {}, {});
When deploying your code, rename config_dev.js or config_prod.js to config.js for dev or prod in CI, and then you only need to refer to config.js in your code.
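Put together, the pattern is just string concatenation against whichever config file was deployed. A runnable sketch (the dev value is taken from the snippet above; `var` is added so the file runs stand-alone in Node):

```javascript
// config.js after deployment (dev version shown)
var config = {};
config.ApiDomain = 'http://127.0.0.1/projectname';

// Build the full endpoint from the environment-specific domain
function pipesUrl() {
  return config.ApiDomain + '/index.php/api/pipedata/pipes/format/json';
}

console.log(pipesUrl());
// → http://127.0.0.1/projectname/index.php/api/pipedata/pipes/format/json
```

Swapping in the prod config.js changes only ApiDomain; the calling code is untouched.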

Using a local proxy auto configuration (.pac) file for proxy settings from a chrome extension

I want to modify the proxy settings of chrome using an extension.
I want it to use a local .pac file which is present in my extension's root folder.
I tried following ways to refer this pac file:
settings.pacScript.url = "proxyFile.pac";
settings.pacScript.url = "chrome-extension://adcccdddeeefffggghhhiiijjjkkklll/proxyFile.pac";
These two methods do not work.
I tried using chrome://net-internals to inspect what was happening and found the following (there was no file-not-found error or PAC JavaScript error):
PROXY_CONFIG_CHANGED
--> old_config =
Use DIRECT connections.
--> new_config =
Use DIRECT connections.
Whereas the following two approaches work:
settings.pacScript.url = "C:\\Users\\username\\Desktop\\myChromeExtension\\proxyFile.pac";
settings.pacScript.url = "http://www.example.com/proxyFile.pac";
Now since I want to refer to the local file in my extension, I cannot use an http URL. To use a file URL, how do I know the URL of my extension's root folder?
Looking for help on this.
Thanks
In a Chrome extension, you can get a URL for a file inside your extension with chrome.extension.getURL (chrome.runtime.getURL in newer Chrome versions). This will return a chrome-extension:// URL.
In your case, you want:
settings.pacScript.url = chrome.extension.getURL("proxyFile.pac");
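For completeness, a PAC file is itself plain JavaScript exposing a FindProxyForURL function that Chrome calls for every request. A minimal proxyFile.pac sketch (the internal host and proxy address are placeholders):

```javascript
// proxyFile.pac -- Chrome calls FindProxyForURL(url, host) per request
function FindProxyForURL(url, host) {
  // route one internal host through a proxy, everything else direct
  if (host === 'internal.example.com') {
    return 'PROXY 10.0.0.1:8080';
  }
  return 'DIRECT';
}

console.log(FindProxyForURL('http://internal.example.com/', 'internal.example.com'));
// → PROXY 10.0.0.1:8080
```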
