Why does WireMock say the request was not matched? Spring Cloud Contract - spring-boot

WireMock logs that the following request was not matched:
WireMock : Request was not matched:
{
  "url" : "/api/accounts?username=defaultuser",
  "absoluteUrl" : "http://localhost:11651/api/accounts?username=defaultuser",
  "method" : "GET",
  "clientIp" : "127.0.0.1",
  "headers" : {
    "authorization" : "bearer test123",
    "accept" : "application/json, application/*+json",
    "user-agent" : "Java/1.8.0_121",
    "host" : "localhost:11651",
    "connection" : "keep-alive"
  },
  "cookies" : { },
  "browserProxyRequest" : false,
  "loggedDate" : 1500711718016,
  "bodyAsBase64" : "",
  "body" : "",
  "loggedDateString" : "2017-07-22T08:21:58Z"
}
Closest match:
{
  "urlPath" : "/api/accounts",
  "method" : "GET",
  "headers" : {
    "authorization" : {
      "matches" : "^bearer"
    },
    "accept" : {
      "equalTo" : "application/json, application/*+json"
    },
    "user-agent" : {
      "equalTo" : "Java/1.8.0_121"
    },
    "host" : {
      "matches" : "^localhost:[0-9]{5}"
    },
    "connection" : {
      "equalTo" : "keep-alive"
    }
  },
  "queryParameters" : {
    "username" : {
      "matches" : "^[a-zA-Z0-9]*$"
    }
  }
}
Is the problem caused by the difference between url and urlPath?
I also tried specifying absoluteUrl in the contract, but it is ignored; I guess that is because it is not defined in the Contract DSL.
The request side of the contract looks like this:
request {
    method 'GET'
    url('/api/accounts') {
        queryParameters {
            parameter('username', $(consumer(regex('^[a-zA-Z0-9]*$')), producer('defaultuser')))
        }
    }
    headers {
        header('authorization', $(consumer(regex('^bearer')), producer(execute('authClientBearer()'))))
        header('accept', $(consumer('application/json, application/*+json')))
        header('user-agent', $(consumer('Java/1.8.0_121')))
        header('host', $(consumer(regex('^localhost:[0-9]{5}'))))
        header('connection', $(consumer('keep-alive')))
    }
}
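Before suspecting the URL itself, it can help to confirm that each matcher in the stub actually accepts the corresponding value from the logged request. A quick standalone check (plain Python, reusing the same regexes as the contract above, not Spring Cloud Contract code):

```python
import re

# Values taken from the unmatched request, tested against the
# contract's consumer-side patterns:
assert re.search(r'^bearer', 'bearer test123')                # authorization
assert re.search(r'^[a-zA-Z0-9]*$', 'defaultuser')            # username
assert re.search(r'^localhost:[0-9]{5}', 'localhost:11651')   # host
print("all matchers accept the logged values")
```

Since every matcher accepts its value, the mismatch has to come from the URL path itself rather than from the headers or query parameters.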

It turned out to be a missing / at the end of the URL in the contract/stub.

Not directly related to the question but for all who came here from Google:
In my case I was in the wrong scenario state.
More about scenario states here: http://wiremock.org/docs/stateful-behaviour/

If you have the same problem, this might be the cause:
JSON configuration for a matching mvcMock example:
"request": {
  "urlPath": "/hello?name=pavel",
  "method": "GET",
  ..
}
And in the log you can see:
"/hello?name=pavel" | "/hello?name=pavel" - URL does not match
This behaviour is correct: urlPath is matched against the path component only, so it must not contain the query string.
You have to change it to:
"request": {
  "urlPath": "/hello",
  "method": "GET",
  "queryParameters": {
    "name": {
      "equalTo": "pavel"
    }
  },
  ..
}
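The split that WireMock performs can be illustrated with a few lines of plain Python (not WireMock code): urlPath is compared against the path component only, while queryParameters are matched against the parsed query string.

```python
from urllib.parse import urlparse, parse_qs

parts = urlparse("/hello?name=pavel")
print(parts.path)             # /hello  -> what "urlPath" is compared against
print(parse_qs(parts.query))  # {'name': ['pavel']} -> what "queryParameters" sees
```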

Related

ingest pipeline not preserving the date type field

Here is the JSON data that I am trying to send from Filebeat to the ingest pipeline "logpipeline.json" in OpenSearch.
json data
{
  "#timestamp":"2022-11-08T10:07:05+00:00",
  "client":"10.x.x.x",
  "server_name":"example.stack.com",
  "server_port":"80",
  "server_protocol":"HTTP/1.1",
  "method":"POST",
  "request":"/example/api/v1/",
  "request_length":"200",
  "status":"500",
  "bytes_sent":"598",
  "body_bytes_sent":"138",
  "referer":"",
  "user_agent":"Java/1.8.0_191",
  "upstream_addr":"10.x.x.x:10376",
  "upstream_status":"500",
  "gzip_ratio":"",
  "content_type":"application/json",
  "request_time":"6.826",
  "upstream_response_time":"6.826",
  "upstream_connect_time":"0.000",
  "upstream_header_time":"6.826",
  "remote_addr":"10.x.x.x",
  "x_forwarded_for":"10.x.x.x",
  "upstream_cache_status":"",
  "ssl_protocol":"TLSv",
  "ssl_cipher":"xxxx",
  "ssl_session_reused":"r",
  "request_body":"{\"date\":null,\"sourceType\":\"BPM\",\"processId\":\"xxxxx\",\"comment\":\"Process status: xxxxx: \",\"user\":\"xxxx\"}",
  "response_body":"{\"statusCode\":500,\"reasonPhrase\":\"Internal Server Error\",\"errorMessage\":\"xxxx\"}",
  "limit_req_status":"",
  "log_body":"1",
  "connection_upgrade":"close",
  "http_upgrade":"",
  "request_uri":"/example/api/v1/",
  "args":""
}
Filebeat to OpenSearch log shipping
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.29.117:9200"]
  pipeline: logpipeline
  #index: "filebeatelastic-%{[agent.version]}-%{+yyyy.MM.dd}"
  index: "nginx_dev-%{+yyyy.MM.dd}"
  # Protocol - either `http` (default) or `https`.
  protocol: "https"
  ssl.enabled: true
  ssl.verification_mode: none
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "filebeat"
  password: "filebeat"
I am transforming the "data" fields in the ingest pipeline, doing type conversion for some of them, and that works perfectly. The only problem I am facing is with "#timestamp".
"#timestamp" is of type "date". Once the JSON data goes through the pipeline, I map the JSON message to a root-level object called "data", and in that transformed data "data.#timestamp" shows up as type "string", even though I haven't applied any transformation to it.
Opensearch ingestpipeline - logpipeline.json
{
  "description" : "Logging Pipeline",
  "processors" : [
    {
      "json" : {
        "field" : "message",
        "target_field" : "data"
      }
    },
    {
      "date" : {
        "field" : "data.#timestamp",
        "formats" : ["ISO8601"]
      }
    },
    {
      "convert" : {
        "field" : "data.body_bytes_sent",
        "type": "integer",
        "ignore_missing": true,
        "ignore_failure": true
      }
    },
    {
      "convert" : {
        "field" : "data.bytes_sent",
        "type": "integer",
        "ignore_missing": true,
        "ignore_failure": true
      }
    },
    {
      "convert" : {
        "field" : "data.request_length",
        "type": "integer",
        "ignore_missing": true,
        "ignore_failure": true
      }
    },
    {
      "convert" : {
        "field" : "data.request_time",
        "type": "float",
        "ignore_missing": true,
        "ignore_failure": true
      }
    },
    {
      "convert" : {
        "field" : "data.upstream_connect_time",
        "type": "float",
        "ignore_missing": true,
        "ignore_failure": true
      }
    },
    {
      "convert" : {
        "field" : "data.upstream_header_time",
        "type": "float",
        "ignore_missing": true,
        "ignore_failure": true
      }
    },
    {
      "convert" : {
        "field" : "data.upstream_response_time",
        "type": "float",
        "ignore_missing": true,
        "ignore_failure": true
      }
    }
  ]
}
Is there any way I can preserve the "date" type of "#timestamp" even after the transformation carried out in the ingest pipeline?
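Part of the answer is that JSON itself has no date type: after the json processor runs, data.#timestamp is necessarily a string, and whether it is indexed as a date is decided by the index mapping, not by the pipeline. The date processor only parses the string representation, roughly like this Python sketch (illustrative, not OpenSearch code):

```python
from datetime import datetime

# The ingest `date` processor parses the ISO8601 string; the value
# stored in _source is still a string representation, and its mapped
# type depends on the index mapping/template.
ts = "2022-11-08T10:07:05+00:00"
parsed = datetime.fromisoformat(ts)
print(parsed.isoformat())  # 2022-11-08T10:07:05+00:00
```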
(indexed document screenshot omitted)
Edit1: Update ingest pipeline simulate result
{
  "docs" : [
    {
      "doc" : {
        "_index" : "_index",
        "_id" : "_id",
        "_source" : {
          "index_date" : "2022.11.08",
          "#timestamp" : "2022-11-08T12:07:05.000+02:00",
          "message" : """
{ "#timestamp": "2022-11-08T10:07:05+00:00", "client": "10.x.x.x", "server_name": "example.stack.com", "server_port": "80", "server_protocol": "HTTP/1.1", "method": "POST", "request": "/example/api/v1/", "request_length": "200", "status": "500", "bytes_sent": "598", "body_bytes_sent": "138", "referer": "", "user_agent": "Java/1.8.0_191", "upstream_addr": "10.x.x.x:10376", "upstream_status": "500", "gzip_ratio": "", "content_type": "application/json", "request_time": "6.826", "upstream_response_time": "6.826", "upstream_connect_time": "0.000", "upstream_header_time": "6.826", "remote_addr": "10.x.x.x", "x_forwarded_for": "10.x.x.x", "upstream_cache_status": "", "ssl_protocol": "TLSv", "ssl_cipher": "xxxx", "ssl_session_reused": "r", "request_body": "{\"date\":null,\"sourceType\":\"BPM\",\"processId\":\"xxxxx\",\"comment\":\"Process status: xxxxx: \",\"user\":\"xxxx\"}", "response_body": "{\"statusCode\":500,\"reasonPhrase\":\"Internal Server Error\",\"errorMessage\":\"xxxx\"}", "limit_req_status": "", "log_body": "1", "connection_upgrade": "close", "http_upgrade": "", "request_uri": "/example/api/v1/", "args": ""}
          """,
          "data" : {
            "server_name" : "example.stack.com",
            "request" : "/example/api/v1/",
            "referer" : "",
            "log_body" : "1",
            "upstream_addr" : "10.x.x.x:10376",
            "body_bytes_sent" : 138,
            "upstream_header_time" : 6.826,
            "ssl_cipher" : "xxxx",
            "response_body" : """{"statusCode":500,"reasonPhrase":"Internal Server Error","errorMessage":"xxxx"}""",
            "upstream_status" : "500",
            "request_time" : 6.826,
            "upstream_cache_status" : "",
            "content_type" : "application/json",
            "client" : "10.x.x.x",
            "user_agent" : "Java/1.8.0_191",
            "ssl_protocol" : "TLSv",
            "limit_req_status" : "",
            "remote_addr" : "10.x.x.x",
            "method" : "POST",
            "gzip_ratio" : "",
            "http_upgrade" : "",
            "bytes_sent" : 598,
            "request_uri" : "/example/api/v1/",
            "x_forwarded_for" : "10.x.x.x",
            "args" : "",
            "#timestamp" : "2022-11-08T10:07:05+00:00",
            "upstream_connect_time" : 0.0,
            "request_body" : """{"date":null,"sourceType":"BPM","processId":"xxxxx","comment":"Process status: xxxxx: ","user":"xxxx"}""",
            "request_length" : 200,
            "ssl_session_reused" : "r",
            "server_port" : "80",
            "upstream_response_time" : 6.826,
            "connection_upgrade" : "close",
            "server_protocol" : "HTTP/1.1",
            "status" : "500"
          }
        },
        "_ingest" : {
          "timestamp" : "2023-01-18T08:06:35.335066236Z"
        }
      }
    }
  ]
}
I was finally able to resolve my issue by updating filebeat.yml with the following. Previously the template name and pattern were different, but the default template name "filebeat" and pattern "filebeat" seem to do the job for me:
setup.template.name: "filebeat"
setup.template.pattern: "filebeat"
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
But I still need to figure out how templates work.
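The reason the template matters is that the field's date type comes from the index mapping that the template applies, not from the pipeline. If the default filebeat template ever stops matching your index, the same effect can be achieved by declaring the type explicitly in an index template, e.g. via PUT _index_template/nginx_dev with a body like this (a sketch; the template name, index pattern, and field path are taken from this question, not required values):

```json
{
  "index_patterns": ["nginx_dev-*"],
  "template": {
    "mappings": {
      "properties": {
        "data": {
          "properties": {
            "#timestamp": { "type": "date" }
          }
        }
      }
    }
  }
}
```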

Wiremock: xpath not working if xmlns is present

I am creating a stub in WireMock. If the XML contains an xmlns, the request doesn't match; without it, it works.
Request
curl -d '<?xml version="1.0" encoding="UTF-8" standalone="yes"?><a xmlns="http://www.example.com/namespaces/ad"><b>1</b><c>2</c><d>9407339517</d></a>' -i -H "Content-Type: text/xml" -X POST "http://localhost:8080/test"
Stub Json
{
  "request": {
    "method": "POST",
    "url": "/test",
    "headers" : {
      "Content-Type" : {
        "equalTo" : "text/xml"
      }
    },
    "bodyPatterns" : [ {
      "matchesXPath" : "/stuff:a[b='1'][c='2']",
      "xPathNamespaces" : {
        "stuff" : "http://www.example.com/namespaces/ad"
      }
    } ]
  },
  "response": {
    "body": "Hello world!",
    "status": 200
  }
}
Along with the way shown above, I have also tried local-name().
When a namespace is present on a (grand)parent, the (grand)children inherit the same namespace. So your b and c should be prefixed as stuff:b and stuff:c:
{
  "request": {
    "method": "POST",
    "url": "/test",
    "headers" : {
      "Content-Type" : {
        "equalTo" : "text/xml"
      }
    },
    "bodyPatterns" : [ {
      "matchesXPath" : "/stuff:a[./stuff:b='1'][./stuff:c='2']",
      "xPathNamespaces" : {
        "stuff" : "http://www.example.com/namespaces/ad"
      }
    } ]
  },
  "response": {
    "body": "Hello world!",
    "status": 200
  }
}
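The inheritance of the default namespace can be seen outside WireMock too; here is a small illustration with Python's standard XML parser (not WireMock code), using the request body from the question:

```python
import xml.etree.ElementTree as ET

xml = ('<a xmlns="http://www.example.com/namespaces/ad">'
       '<b>1</b><c>2</c></a>')
root = ET.fromstring(xml)

# Every child carries the parent's default namespace:
print(root.tag)  # {http://www.example.com/namespaces/ad}a

ns = {'stuff': 'http://www.example.com/namespaces/ad'}
print(root.find('b'))                 # None - unprefixed lookup fails
print(root.find('stuff:b', ns).text)  # 1 - prefixed lookup matches
```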

Unable to run nightwatch JSON

I have been trying to run a simple test with Nightwatch.js and I keep running into issues.
I believe I have set up my JSON file correctly:
{
  "src_folder" : ["./smoketests"],
  "output_folder" : "./reports",
  "selenium" : {
    "start_process" : true,
    "start_session" : true,
    "server_path" : "M:/nightwatch/lib/selenium-server-standalone-2.48.2.jar",
    "log_path" : false,
    "host" : "127.0.0.1",
    "port" : 4444,
    "cli_args" : {
      "webdriver.chrome.driver" : "./lib/chromedriver.exe"
    }
  },
  "test_settings" : {
    "default" : {
      "launch_url" : "http://www.google.com/",
      "selenium_port" : 4444,
      "selenium_host" : "localhost",
      "silent" : true,
      "screenshots" : {
        "enabled" : false,
        "path" : "./screenshots/smoketests"
      }
    },
    "desiredCapabilities" : {
      "browserName" : "firefox",
      "javascriptEnabled" : true,
      "acceptSslCerts" : true
    },
    "chrome" : {
      "desiredCapabilities" : {
        "browserName" : "chrome",
        "javascriptEnabled" : true,
        "acceptSslCerts" : true
      }
    }
  }
}
and my test is pretty simple:
module.exports = {
    beforeEach : function(browser) {
        browser.maximizeWindow();
    },
    'Test title' : function(browser) {
        browser
            .url('http://www.google.com/')
            .waitForElementVisible('body', 1000)
            .assert.title("Google");
        browser.end();
    }
};
Yet, when I run the test:
nightwatch -c smoketests/homepage.json
I receive the following error:
M:\nightwatch>nightwatch -c projects/smoketests/homepage.json
Starting selenium server... started - PID: 6448
C:\Users\jomartinez\AppData\Roaming\npm\node_modules\nightwatch\lib\runner\run.js:116
    var fullPaths = testSource.map(function (p) {
                               ^
TypeError: Cannot read property 'map' of undefined
    at module.exports.readPaths (C:\Users\jomartinez\AppData\Roaming\npm\node_modules\nightwatch\lib\runner\run.js:116:31)
    at runner [as run] (C:\Users\jomartinez\AppData\Roaming\npm\node_modules\nightwatch\lib\runner\run.js:182:10)
    at Object.CliRunner.runner (C:\Users\jomartinez\AppData\Roaming\npm\node_modules\nightwatch\lib\runner\cli\clirunner.js:345:16)
    at C:\Users\jomartinez\AppData\Roaming\npm\node_modules\nightwatch\lib\runner\cli\clirunner.js:321:12
    at SeleniumServer.onStarted (C:\Users\jomartinez\AppData\Roaming\npm\node_modules\nightwatch\lib\runner\cli\clirunner.js:281:9)
    at SeleniumServer.checkProcessStarted (C:\Users\jomartinez\AppData\Roaming\npm\node_modules\nightwatch\lib\runner\selenium.js:140:10)
    at SeleniumServer.onStderrData (C:\Users\jomartinez\AppData\Roaming\npm\node_modules\nightwatch\lib\runner\selenium.js:120:8)
    at emitOne (events.js:77:13)
    at Socket.emit (events.js:169:7)
    at readableAddChunk (_stream_readable.js:146:16)
Has anybody else encountered this issue as well?
I think I figured out my initial issue: I had a typo in the "src_folders" key in my JSON file. After fixing it, my test seems to run fine.
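A misspelled top-level key like this is easy to catch before launching the runner. A hypothetical pre-flight check (plain Python, not part of Nightwatch) that would have flagged the "src_folder"/"src_folders" typo:

```python
import json

REQUIRED_KEYS = {"src_folders"}  # the runner reads this exact key

# The misspelled key from the config above; the runner then finds no
# test source and fails with "Cannot read property 'map' of undefined".
config = json.loads('{"src_folder": ["./smoketests"], "output_folder": "./reports"}')
missing = REQUIRED_KEYS - config.keys()
print(missing)  # {'src_folders'}
```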

Can IAM role temporary credentials be used in cloudformation templates?

I'm building a stack that needs access to a private S3 bucket to download the most current version of my application. I'm using IAM roles, a relatively new AWS feature that allows EC2 instances to be assigned specific roles, which are then coupled with IAM policies. Unfortunately, these roles come with temporary API credentials generated at instantiation. It's not crippling, but it's forced me to do things like this cloud-init script (simplified to just the relevant bit):
#!/bin/sh
# Grab our credentials from the meta-data and parse the response
CREDENTIALS=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access)
S3_ACCESS_KEY=$(echo $CREDENTIALS | ruby -e "require 'rubygems'; require 'json'; puts JSON[STDIN.read]['AccessKeyId'];")
S3_SECRET_KEY=$(echo $CREDENTIALS | ruby -e "require 'rubygems'; require 'json'; puts JSON[STDIN.read]['SecretAccessKey'];")
S3_TOKEN=$(echo $CREDENTIALS | ruby -e "require 'rubygems'; require 'json'; puts JSON[STDIN.read]['Token'];")

# Create an executable script to pull the file
cat << EOF > /tmp/pullS3.rb
require 'rubygems'
require 'aws-sdk'
AWS.config(
  :access_key_id => "$S3_ACCESS_KEY",
  :secret_access_key => "$S3_SECRET_KEY",
  :session_token => "$S3_TOKEN")
s3 = AWS::S3.new()
myfile = s3.buckets['mybucket'].objects["path/to/my/file"]
File.open("/path/to/save/myfile", "w") do |f|
  f.write(myfile.read)
end
EOF

# Downloading the file
ruby /tmp/pullS3.rb
First and foremost: This works, and works pretty well. All the same, I'd love to use CloudFormation's existing support for source access. Specifically, cfn-init supports the use of authentication resources to get at protected data, including S3 buckets. Is there any way to get at these keys from within cfn-init, or perhaps tie the IAM role to an authentication resource?
I suppose one alternative would be putting my source behind some other authenticated service, but that's not a viable option at this time.
Another promising lead is the AWS::IAM::AccessKey resource, but the docs don't suggest it can be used with roles. I'm going to try it anyway.
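As an aside, the three Ruby one-liners in the cloud-init script above parse the same JSON response three times. A single-pass equivalent (a hypothetical sketch in Python, with made-up credential values) would be:

```python
import json

def export_lines(metadata_json: str) -> list[str]:
    """Parse the role-credentials JSON once and build shell export lines."""
    creds = json.loads(metadata_json)
    return [f'export S3_ACCESS_KEY="{creds["AccessKeyId"]}"',
            f'export S3_SECRET_KEY="{creds["SecretAccessKey"]}"',
            f'export S3_TOKEN="{creds["Token"]}"']

# Made-up values standing in for the instance metadata response:
sample = '{"AccessKeyId": "AKID", "SecretAccessKey": "SK", "Token": "TK"}'
print(export_lines(sample)[0])  # export S3_ACCESS_KEY="AKID"
```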
I'm not sure when support was added, but you can now use an IAM role for authenticating S3 downloads for the files and sources sections in AWS::CloudFormation::Init.
Just use roleName instead of accessKeyId & secretKey (see AWS::CloudFormation::Authentication for details), e.g.:
"Metadata": {
  "AWS::CloudFormation::Init": {
    "download": {
      "files": {
        "/tmp/test.txt": {
          "source": "http://myBucket.s3.amazonaws.com/test.txt"
        }
      }
    }
  },
  "AWS::CloudFormation::Authentication": {
    "default" : {
      "type": "s3",
      "buckets": [ "myBucket" ],
      "roleName": { "Ref": "myRole" }
    }
  }
}
Tested with aws-cfn-bootstrap-1.3-11
I managed to get this working using code from this exchange:
https://forums.aws.amazon.com/message.jspa?messageID=319465
The code doesn't use IAM policies; it uses an AWS::S3::BucketPolicy instead.
CloudFormation code snippet:
"Resources" : {
  "CfnUser" : {
    "Type" : "AWS::IAM::User",
    "Properties" : {
      "Path": "/",
      "Policies": [{
        "PolicyName": "root",
        "PolicyDocument": { "Statement":[{
          "Effect" : "Allow",
          "Action" : [
            "cloudformation:DescribeStackResource",
            "s3:GetObject"
          ],
          "Resource" : "*"
        }]}
      }]
    }
  },
  "CfnKeys" : {
    "Type" : "AWS::IAM::AccessKey",
    "Properties" : {
      "UserName" : {"Ref": "CfnUser"}
    }
  },
  "BucketPolicy" : {
    "Type" : "AWS::S3::BucketPolicy",
    "Properties" : {
      "PolicyDocument": {
        "Version" : "2008-10-17",
        "Id" : "CfAccessPolicy",
        "Statement" : [{
          "Sid" : "ReadAccess",
          "Action" : ["s3:GetObject"],
          "Effect" : "Allow",
          "Resource" : { "Fn::Join" : ["", ["arn:aws:s3:::<MY_BUCKET>/*"]]},
          "Principal" : { "AWS": {"Fn::GetAtt" : ["CfnUser", "Arn"]} }
        }]
      },
      "Bucket" : "<MY_BUCKET>"
    }
  },
  "WebServer": {
    "Type": "AWS::EC2::Instance",
    "DependsOn" : "BucketPolicy",
    "Metadata" : {
      "AWS::CloudFormation::Init" : {
        "config" : {
          "sources" : {
            "/etc/<MY_PATH>" : "https://s3.amazonaws.com/<MY_BUCKET>/<MY_FILE>"
          }
        }
      },
      "AWS::CloudFormation::Authentication" : {
        "S3AccessCreds" : {
          "type" : "S3",
          "accessKeyId" : { "Ref" : "CfnKeys" },
          "secretKey" : {"Fn::GetAtt": ["CfnKeys", "SecretAccessKey"]},
          "buckets" : [ "<MY_BUCKET>" ]
        }
      }
    },
    "Properties": {
      "ImageId" : "<MY_INSTANCE_ID>",
      "InstanceType" : { "Ref" : "WebServerInstanceType" },
      "KeyName" : {"Ref": "KeyName"},
      "SecurityGroups" : [ "<MY_SECURITY_GROUP>" ],
      "UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
        "#!/bin/bash\n",
        "# Helper function\n",
        "function error_exit\n",
        "{\n",
        " cfn-signal -e 1 -r \"$1\" '", { "Ref" : "WaitHandle" }, "'\n",
        " exit 1\n",
        "}\n",
        "# Install Webserver Packages etc \n",
        "cfn-init -v --region ", { "Ref" : "AWS::Region" },
        " -s ", { "Ref" : "AWS::StackName" }, " -r WebServer ",
        " --access-key ", { "Ref" : "CfnKeys" },
        " --secret-key ", {"Fn::GetAtt": ["CfnKeys", "SecretAccessKey"]}, " || error_exit 'Failed to run cfn-init'\n",
        "# All is well so signal success\n",
        "cfn-signal -e 0 -r \"Setup complete\" '", { "Ref" : "WaitHandle" }, "'\n"
      ]]}}
    }
  }
Obviously, replace MY_BUCKET, MY_FILE, MY_INSTANCE_ID, MY_PATH and MY_SECURITY_GROUP with your own values.

decoding Ajax Response

I am new to Ext JS.
I have an Ajax call. I can see the response text in the alert, but the next line, which is supposed to decode the responseText, does not produce any result in the alert box.
My function goes like this:
function openToRecipients()
{
    Ext.Ajax.request({
        url: "Redirector?id=ClinicalInitiateForm&wfid=CLINICALONGOINGWFINITIATE",
        method: 'POST',
        success: function(response, opts)
        {
            alert(response.responseText);
            var dataCurrent = Ext.util.JSON.decode(response.responseText);
            alert(dataCurrent);
            var jsonStr = dataCurrent.cData;
            recipientJsonResponse = dataCurrent.dataGrid;
            var myObject = eval('(' + jsonStr + ')');
            gridStore = new Ext.data.JsonStore({
                id : 'gridStore',
                autoLoad : true,
                data : myObject,
                root : 'data',
                fields : ['NAME',
                          'CLIENT',
                          'DESCRIPTION'
                ],
                listeners : {
                    load : gridDisplay
                }
            });
        },
        failure: function(response, opts) {
            alert("fail");
        }
    });
}
This is my JSON after converting it to a string:
"formFields" : [ {
  "id" : "NAME",
  "set" : "",
  "label" : "Name",
  "dataType" : "string",
  "editType" : "static",
  "clientConfig" : "",
  "hide" : "False",
  "required" : "",
  "mask" : "",
  "maxValue" : "",
  "maxLength" : "",
  "minValue" : "",
  "value" : "",
  "showIf" : "",
  "options" : "",
  "prePopulate" : "",
  "shortForm" : "",
  "comments" : "",
  "optionsValue" : "",
  "currentValue" : "",
  "disabled" : "",
  "qTip" : "",
  "hover" : ""
}, {
  "id" : "CLIENT",
  "set" : "",
  "label" : "Client",
  "dataType" : "string",
  "editType" : "static",
  "clientConfig" : "",
  "hide" : "False",
  "required" : "",
  "mask" : "",
  "maxValue" : "",
  "maxLength" : "",
  "minValue" : "",
  "value" : "",
  "showIf" : "",
  "options" : "",
  "prePopulate" : "",
  "shortForm" : "",
  "comments" : "",
  "optionsValue" : "",
  "currentValue" : "",
  "disabled" : "",
  "qTip" : "",
  "hover" : ""
}, {
  "id" : "DESCRIPTION",
  "set" : "",
  "label" : "Description",
  "dataType" : "string",
  "editType" : "static",
  "clientConfig" : "",
  "hide" : "False",
  "required" : "",
  "mask" : "",
  "maxValue" : "",
  "maxLength" : "",
  "minValue" : "",
  "value" : "",
  "showIf" : "",
  "options" : "",
  "prePopulate" : "",
  "shortForm" : "",
  "comments" : "",
  "optionsValue" : "",
  "currentValue" : "",
  "disabled" : "",
  "qTip" : "",
  "hover" : ""
} ],
And this is my data
{'data':[{"NAME":"Shan","CLIENT":"CSC","DESCRIPTION":"Computer science"}]}
How can I get this data into my grid?
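One thing worth checking first: the data sample above uses single quotes around 'data', which is not valid JSON, and a strict decoder will reject the whole string. This is illustrated below with Python's strict parser rather than Ext's decoder, but the principle is the same:

```python
import json

raw = "{'data':[{\"NAME\":\"Shan\"}]}"   # single-quoted key, as in the sample
ok = True
try:
    json.loads(raw)
except json.JSONDecodeError as e:
    ok = False
    print("rejected:", e.msg)

fixed = '{"data":[{"NAME":"Shan"}]}'     # double quotes throughout
print(json.loads(fixed)["data"][0]["NAME"])  # Shan
```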
Here is the code that you can use:
var myStore = Ext.create( "Ext.data.JsonStore", {
    fields: [ "firstname", "lastname" ], // the fields of each item (table line)
    proxy: {
        type: "ajax", // the proxy uses ajax
        actionMethods: { // this config is not necessary for you. I needed it to work with the echo service of jsFiddle. If you want to use POST (as in your post), you can skip this.
            create: "POST",
            read: "POST",
            update: "POST",
            destroy: "POST"
        },
        url: "/echo/json/", // here goes the URL that returns your JSON (in your case "Redirector?id...")
        reader: {
            type: "json", // this store reads data in json format
            root: "items" // the items to be read are inside an "items" array, in your case "formFields"
        }
    }
});

// In jsFiddle, we need to send the JSON that we want to read. In your case, you will
// just call .load() or set the store's autoLoad config to true. If you want to send
// additional parameters, you can use the syntax below.
myStore.load({
    params: {
        // everything inside the encode method will be encoded as json (the format that you must send to the store)
        json: Ext.encode({
            items: [{
                "firstname": "foo",
                "lastname": "bar"
            }, {
                "firstname": "david",
                "lastname": "buzatto"
            }, {
                "firstname": "douglas",
                "lastname": "adams"
            }]
        })
    }
});

// creating the grid, setting its columns and the store
Ext.create( "Ext.grid.Panel", {
    title: "My Grid",
    columns: [{
        header: "First Name",
        dataIndex: "firstname" // the dataIndex config binds the column to the json data of each item
    }, {
        header: "Last Name",
        dataIndex: "lastname"
    }],
    store: myStore, // the store created above
    renderTo: Ext.getBody() // render the grid to the body
});
You can access a fiddle here: http://jsfiddle.net/cYwhK/1/
The documentation:
JsonStore: http://dev.sencha.com/deploy/ext-4.1.0-gpl/docs/index.html#!/api/Ext.data.JsonStore
Ajax proxy: http://dev.sencha.com/deploy/ext-4.1.0-gpl/docs/index.html#!/api/Ext.data.proxy.Ajax
Grid: http://dev.sencha.com/deploy/ext-4.1.0-gpl/docs/index.html#!/api/Ext.grid.Panel
Another thing that I forgot to mention is that you can use Models in your store instead of an array of fields. Models are like classes in an OO language. Take a look: http://dev.sencha.com/deploy/ext-4.1.0-gpl/docs/index.html#!/api/Ext.data.Model