This is nearly the same question as this one, but in the opposite direction.
I know there's no FloatStream in Java 8, and there aren't many use cases for float[], but I have one:
Dealing with TriangleMesh in JavaFX 3D, you have to provide an ObservableFloatArray for the 3D coordinates of the vertices of the whole mesh.
As a result of some calculations I'll have all these coordinates in a List, and to add all of them at once to the mesh, I'll call triangleMesh.getPoints().addAll(), using one of the following methods:
addAll(ObservableFloatArray src)
addAll(float... elements)
addAll(ObservableFloatArray src, int srcIndex, int length)
addAll(float[] src, int srcIndex, int length)
where ObservableFloatArray can be created using FXCollections.observableFloatArray(), FXCollections.observableFloatArray(ObservableFloatArray array) or FXCollections.observableFloatArray(float... values).
Let's say I have this pojo for each vertex:
private class Vertex {
private final float x;
private final float y;
private final float z;
public Vertex(float x, float y, float z){
this.x=x; this.y=y; this.z=z;
}
public float[] getCoordenates(){
return new float[]{x,y,z};
}
}
and after performing some calculations I have List<Vertex> listVertices. I'll need to generate float[] arrayVertices to finally call triangleMesh.getPoints().addAll(arrayVertices);.
For now this is what I'm doing:
listVertices.forEach(vertex->triangleMesh.getPoints().addAll(vertex.getCoordenates()));
But this triggers the associated listener on every new vertex added to the observable array, and for a high number of vertices this hurts performance.
Should FloatStream and flatMapToFloat() exist, I'd do something like this:
float[] arrayVertices = listVertices.stream()
.map(vertex->FloatStream.of(vertex.getCoordenates()))
.flatMapToFloat(f->f).toArray();
triangleMesh.getPoints().addAll(arrayVertices);
like I actually do with a list of int[] for face indices:
int[] arrayFaces = listFaces.stream()
.map(face->IntStream.of(face.getFaceIndices()))
.flatMapToInt(i->i).toArray();
triangleMesh.getFaces().addAll(arrayFaces);
But as far as I know, there's no way to do this using streams.
Thanks in advance for any possible solution involving streams.
Keep in mind that a Stream defines an operation rather than storage. So for most operations, using float provides little benefit over double when the values are held in CPU registers. There might be a theoretical improvement for operations that could be accelerated using SSE or the GPU, but that's not relevant here.
So you can use a DoubleStream for that operation; the only thing you need is a collector capable of collecting a DoubleStream into a float[] array:
float[] arrayVertices = listVertices.stream()
.flatMapToDouble(vertex->DoubleStream.of(vertex.x, vertex.y, vertex.z))
.collect(FaCollector::new, FaCollector::add, FaCollector::join)
.toArray();
static class FaCollector {
float[] curr=new float[64];
int size;
void add(double d) {
if(curr.length==size) curr=Arrays.copyOf(curr, size*2);
curr[size++]=(float)d;
}
void join(FaCollector other) {
if(size+other.size > curr.length)
curr=Arrays.copyOf(curr, size+other.size);
System.arraycopy(other.curr, 0, curr, size, other.size);
size+=other.size;
}
float[] toArray() {
if(size!=curr.length) curr=Arrays.copyOf(curr, size);
return curr;
}
}
This supports parallel processing; however, for an operation that merely consists of copying data, there is no benefit from parallel processing.
I don't think there's any way around the fact that you have to create one data structure (e.g. a double[] or a List<Float>) and then map it into a float[]. (But maybe I am missing something.)
If you want to do this using a Stream-like API, you can use a Collector to do the mapping at the end:
import java.util.List;
import java.util.ListIterator;
import java.util.Random;
import java.util.stream.Collector;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.Stream;
import javafx.scene.shape.TriangleMesh;
public class StreamFloatsTest {
public static void main(String[] args) {
// The following declaration works in Eclipse 4.4
// however it won't compile from the command line:
// Collector<Float, List<Float>, float[]> toFloatArray =
// This declaration works:
Collector<Float, ?, float[]> toFloatArray =
Collectors.collectingAndThen(Collectors.toList(), floatList -> {
float[] array = new float[floatList.size()];
for (ListIterator<Float> iterator = floatList.listIterator(); iterator.hasNext();) {
array[iterator.nextIndex()] = iterator.next();
}
return array ;
});
// Random vertex list for demo purposes:
Random rng = new Random();
List<Vertex> vertices = IntStream.range(0, 100)
.mapToObj(i -> new Vertex(rng.nextFloat(), rng.nextFloat(), rng.nextFloat()))
.collect(Collectors.toList());
float[] vertexArray = vertices.stream()
.flatMap(v -> Stream.of(v.getX(), v.getY(), v.getZ()))
.collect(toFloatArray);
TriangleMesh mesh = new TriangleMesh();
mesh.getPoints().addListener(obs -> System.out.println("mesh invalidated"));
mesh.getPoints().addListener((array, sizeChanged, from, to) -> System.out.println("mesh changed"));
mesh.getPoints().addAll(vertexArray);
}
public static class Vertex {
private final float x ;
private final float y ;
private final float z ;
public Vertex(float x, float y, float z) {
this.x = x ;
this.y = y ;
this.z = z ;
}
public float getX() {
return x;
}
public float getY() {
return y;
}
public float getZ() {
return z;
}
public float[] getCoordinates() {
return new float[] {x, y, z};
}
}
}
Here's my requirement: Using DirectX11 (via SlimDX) I have to download a series of vertices and use them to create a Texture2D of a map of county borders. Then I need to do the same thing with state borders and draw them over the county borders. Then I need to take that texture and create 2 different textures from it, each containing unique radar data. Then I want to take those textures and display them so that the user can look at, for example, base reflectivity and base velocity side by side. The user should be able to zoom in and out of particular areas of the map.
Here's what I've got working: I'm creating my Texture2D without multisampling or depth on a billboard which is displaying in 2 separate views. But it looks blocky, and if you zoom too far out, some of the borders start to disappear.
Here are my issues:
1) I can't for the life of me get any multisampling quality. I'm using an ATI Radeon HD 5750, so I know it must be able to do it, but no formats I've tried support a quality greater than 0.
2) I'm uncertain whether I need to use a depth stencil since I'm drawing all these textures on top of each other. I hope not because when I try, the ShaderResourceView says, "Puny Human! You cannot use a depth stencil format in a ShaderResourceView! Bwa ha ha!" (I'm embellishing)
I'm willing to bet that a lot of these issues would be solved if I just drew the primitives directly into world space, but when I do that, rendering takes way too long because there are so many lines to render. Is there perhaps a way I can cut down on the time it takes?
And here's the code of my last working version:
using SlimDX;
using SlimDX.D3DCompiler;
using SlimDX.Direct3D11;
using SlimDX.DXGI;
using SlimDX.Windows;
using System;
using System.Windows.Forms;
using System.Collections.Generic;
using Device = SlimDX.Direct3D11.Device;
using Buffer = SlimDX.Direct3D11.Buffer;
using Resource = SlimDX.Direct3D11.Resource;
using Format = SlimDX.DXGI.Format;
using MapFlags = SlimDX.Direct3D11.MapFlags;
namespace Radar
{
abstract public class Renderer
{
protected static Device mDevice = null;
protected SwapChain mSwapChain = null;
protected RenderTargetView RenderTarget { get; set; }
public static Device Device { get { return mDevice; } protected set { mDevice = value; } }
public static DeviceContext Context { get { return Device.ImmediateContext; } }
protected SwapChain SwapChain { get { return mSwapChain; } set { mSwapChain = value; } }
public Texture2D Texture { get; protected set; }
protected int RenderTargetIndex { get; set; }
protected VertexShader VertexShader { get; set; }
protected PixelShader PixelShader { get; set; }
protected Buffer VertexBuffer { get; set; }
protected Buffer MatrixBuffer { get; set; }
protected InputLayout Layout { get; set; }
protected ShaderSignature InputSignature { get; set; }
protected SamplerState SamplerState { get; set; }
protected Color4 mClearColor = new Color4(0.117f, 0.117f, 0.117f);
protected Color4 ClearColor { get { return mClearColor; } }
protected void CreateDevice(IntPtr inHandle)
{
if (Device == null)
Device = new Device(DriverType.Hardware, DeviceCreationFlags.Debug);
SwapChainDescription chainDescription = new SwapChainDescription()
{
BufferCount = 2,
Usage = Usage.RenderTargetOutput,
OutputHandle = inHandle,
IsWindowed = true,
ModeDescription = new ModeDescription(0, 0, new Rational(60, 1), Format.R8G8B8A8_UNorm),
SampleDescription = new SampleDescription(8, 0),
Flags = SwapChainFlags.AllowModeSwitch,
SwapEffect = SwapEffect.Discard
};
SwapChain = new SwapChain(Device.Factory, Device, chainDescription);
}
protected void SetupViewport(int inWidth, int inHeight)
{
Viewport viewport = new Viewport(0.0f, 0.0f, inWidth, inHeight);
Context.OutputMerger.SetTargets(RenderTarget);
Context.Rasterizer.SetViewports(viewport);
}
public void Clear()
{
Context.ClearRenderTargetView(RenderTarget, ClearColor);
}
public void Present()
{
SwapChain.Present(0, PresentFlags.None);
}
// I do this to ensure the texture is correct
public void Save()
{
Texture2D.ToFile(Context, Texture, ImageFileFormat.Png, "test.png");
}
public virtual void Dispose()
{
Texture.Dispose();
SamplerState.Dispose();
VertexBuffer.Dispose();
Layout.Dispose();
InputSignature.Dispose();
VertexShader.Dispose();
PixelShader.Dispose();
RenderTarget.Dispose();
SwapChain.Dispose();
Device.Dispose();
}
public class RenderTargetParameters
{
public int Width { get; set; }
public int Height { get; set; }
public IntPtr Handle { get; set; }
public RenderTargetParameters()
{
Width = 0;
Height = 0;
Handle = new IntPtr(0);
}
}
public abstract void Render(int inWidth, int inHeight, int inCount = -1);
public abstract void Prepare(string inShaderName = null);
}
public class TextureRenderer : Renderer
{
public TextureRenderer(RenderTargetParameters inParms)
{
CreateDevice(inParms.Handle);
Texture2DDescription description = new Texture2DDescription()
{
Width = inParms.Width,
Height = inParms.Height,
MipLevels = 1,
ArraySize = 1,
Format = Format.R8G8B8A8_UNorm,
SampleDescription = new SampleDescription(8, 0),
Usage = ResourceUsage.Default,
BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
CpuAccessFlags = CpuAccessFlags.None,
OptionFlags = ResourceOptionFlags.None
};
Texture = new Texture2D(Device, description);
RenderTarget = new RenderTargetView(Device, Texture);
SetupViewport(inParms.Width, inParms.Height);
using (ShaderBytecode bytecode = ShaderBytecode.CompileFromFile("ShaderFX.fx", "VShader", "vs_5_0", ShaderFlags.Debug, EffectFlags.None))
{
InputSignature = ShaderSignature.GetInputSignature(bytecode);
VertexShader = new VertexShader(Device, bytecode);
}
// load and compile the pixel shader
InputElement[] elements = new[] { new InputElement("POSITION", 0, Format.R32G32B32_Float, 0) };
Layout = new InputLayout(Device, InputSignature, elements);
Context.InputAssembler.InputLayout = Layout;
Context.InputAssembler.PrimitiveTopology = PrimitiveTopology.LineStrip;
Context.VertexShader.Set(VertexShader);
}
public override void Prepare(string inShaderName)
{
using (ShaderBytecode bytecode = ShaderBytecode.CompileFromFile("ShaderFX.fx", inShaderName, "ps_4_0", ShaderFlags.Debug, EffectFlags.None))
PixelShader = new PixelShader(Device, bytecode);
Context.PixelShader.Set(PixelShader);
}
public void SetVertices(DataStream inShape)
{
VertexBuffer = new Buffer(Device, inShape, (int)inShape.Length, ResourceUsage.Default, BindFlags.VertexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
Context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(VertexBuffer, 12, 0));
}
public override void Render(int inWidth, int inHeight, int inCount = -1)
{
Context.Draw(inCount, 0);
}
}
public class RuntimeRenderer : Renderer
{
private ShaderResourceView ResourceView { get; set; }
public RuntimeRenderer(RenderTargetParameters inParms, ref TextureRenderer inTextureRenderer)
{
CreateDevice(inParms.Handle);
Texture = inTextureRenderer.Texture;
using (Resource resource = Resource.FromSwapChain<Texture2D>(SwapChain, 0))
RenderTarget = new RenderTargetView(Device, resource);
//using (var factory = SwapChain.GetParent<Factory>())
//factory.SetWindowAssociation(inParms.Handle, WindowAssociationFlags.IgnoreAltEnter);
}
public void Resize()
{
RenderTarget.Dispose();
SwapChain.ResizeBuffers(2, 0, 0, Format.R8G8B8A8_UNorm, SwapChainFlags.AllowModeSwitch);
using (SlimDX.Direct3D11.Resource resource = Resource.FromSwapChain<Texture2D>(SwapChain, 0))
RenderTarget = new RenderTargetView(Device, resource);
}
public override void Prepare(string inShaderName)
{
using (ShaderBytecode bytecode = ShaderBytecode.CompileFromFile("ShaderFX.fx", "TextureVertexShader", "vs_4_0", ShaderFlags.EnableStrictness, EffectFlags.None))
{
InputSignature = ShaderSignature.GetInputSignature(bytecode);
VertexShader = new VertexShader(Device, bytecode);
}
using (ShaderBytecode bytecode = ShaderBytecode.CompileFromFile("ShaderFX.fx", "TexturePixelShader", "ps_4_0", ShaderFlags.EnableStrictness, EffectFlags.None))
PixelShader = new PixelShader(Device, bytecode);
InputElement[] elements = new InputElement[2];
elements[0].SemanticName = "POSITION";
elements[0].SemanticIndex = 0;
elements[0].Format = Format.R32G32B32_Float;
elements[0].Slot = 0;
elements[0].AlignedByteOffset = 0;
elements[0].Classification = InputClassification.PerVertexData;
elements[0].InstanceDataStepRate = 0;
elements[1].SemanticName = "TEXCOORD";
elements[1].SemanticIndex = 0;
elements[1].Format = Format.R32G32_Float;
elements[1].Slot = 0;
elements[1].AlignedByteOffset = InputElement.AppendAligned;
elements[1].Classification = InputClassification.PerVertexData;
elements[1].InstanceDataStepRate = 0;
Layout = new InputLayout(Device, InputSignature, elements);
BufferDescription matrixDescription = new BufferDescription()
{
Usage = ResourceUsage.Dynamic,
SizeInBytes = sizeof(float) * 16 * 4,
BindFlags = BindFlags.ConstantBuffer,
CpuAccessFlags = CpuAccessFlags.Write,
OptionFlags = ResourceOptionFlags.None,
StructureByteStride = 0
};
MatrixBuffer = new Buffer(Device, matrixDescription);
ShaderResourceViewDescription resourceViewDescription = new ShaderResourceViewDescription()
{
Format = Texture.Description.Format,
Dimension = ShaderResourceViewDimension.Texture2DMultisampled,
MipLevels = Texture.Description.MipLevels,
MostDetailedMip = 0,
};
//Texture2D.ToFile(Context, Texture, ImageFileFormat.Png, "test.png");
ResourceView = new ShaderResourceView(Device, Texture, resourceViewDescription);
SamplerDescription samplerDescription = new SamplerDescription()
{
Filter = Filter.MinMagMipLinear,
AddressU = TextureAddressMode.Wrap,
AddressV = TextureAddressMode.Wrap,
AddressW = TextureAddressMode.Wrap,
MipLodBias = 0.0f,
MaximumAnisotropy = 1,
ComparisonFunction = Comparison.Always,
BorderColor = ClearColor,
MinimumLod = 0,
MaximumLod = 99999
};
SamplerState = SamplerState.FromDescription(Device, samplerDescription);
}
public override void Render(int inWidth, int inHeight, int inCount = -1)
{
Clear();
Billboard.SetVerteces(Device, Texture.Description.Width, Texture.Description.Height, inWidth, inHeight);
SetupViewport(inWidth, inHeight);
Context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(Billboard.Verteces, 20, 0));
Context.InputAssembler.SetIndexBuffer(Billboard.Indeces, Format.R32_UInt, 0);
Context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleStrip;
Context.InputAssembler.InputLayout = Layout;
Context.VertexShader.Set(VertexShader);
Context.PixelShader.Set(PixelShader);
Context.PixelShader.SetSampler(SamplerState, 0);
Context.VertexShader.SetConstantBuffer(MatrixBuffer, 0);
Context.PixelShader.SetConstantBuffer(MatrixBuffer, 0);
Context.PixelShader.SetShaderResource(ResourceView, 0);
Context.DrawIndexed(4, 0, 0);
Present();
}
}
}
Image 1 is what it looks like if I save the texture to a file (I scaled this down a LOT so it would fit in my post).
Image 2 is what it looks like in runtime when viewed at about a medium distance (not ideal, but not so bad)
Image 3 is what it looks like zoomed in to a county (Eww! Blocky and fuzzy!)
Image 4 is what it looks like zoomed out (where did all the borders go?)
About multisampling: you can generally keep quality at 0. The quality settings correspond to different sub-pixel (sample) patterns, and 0 generally does fine.
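If you want to confirm what your card actually supports, here is a minimal sketch (assuming device is your SlimDX.Direct3D11.Device):

// Query how many quality levels the device supports for this format/sample count.
// Valid Quality values for a SampleDescription are 0 .. (levels - 1);
// 0 levels means the format/sample-count combination is not supported at all.
int samples = 8;
int levels = device.CheckMultisampleQualityLevels(Format.R8G8B8A8_UNorm, samples);
if (levels > 0)
{
    // Quality 0 is always valid when the combination is supported.
    SampleDescription sampleDescription = new SampleDescription(samples, 0);
}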
If you render to a texture with multisampling, you also need to resolve your resource, since multisampled textures are bound as Texture2DMS (instead of Texture2D) in shaders.
To do so, create a second texture with the same format and size, but with only one sample.
Then, once you're done rendering your multisampled texture, make the following call:
deviceContext.ResolveSubresource(multisampledtexture, 0, nonmultisampledtexture,
0, format);
You can then use the ShaderResourceView of the non-multisampled texture in subsequent passes.
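As a rough sketch (names like msaaTexture and resolvedTexture are placeholders), the resolve target is just a single-sampled copy of the render texture:

// Single-sampled texture with the same size/format, used as the resolve target.
Texture2DDescription resolvedDescription = new Texture2DDescription()
{
    Width = msaaTexture.Description.Width,
    Height = msaaTexture.Description.Height,
    MipLevels = 1,
    ArraySize = 1,
    Format = msaaTexture.Description.Format,
    SampleDescription = new SampleDescription(1, 0), // one sample, quality 0
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None
};
Texture2D resolvedTexture = new Texture2D(device, resolvedDescription);

// After rendering into the multisampled texture:
context.ResolveSubresource(msaaTexture, 0, resolvedTexture, 0, msaaTexture.Description.Format);

// Bind a view of the resolved texture in later passes.
ShaderResourceView resolvedView = new ShaderResourceView(device, resolvedTexture);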
From what I see, you should not need a depth stencil; just make sure you draw your elements in the correct order.
About formats, this is normal since depth is a bit "special": you need to pass different formats for the resource and for its views. If you want to use D24_UNorm_S8_UInt (the most common format, I'd say), you need to set up the following:
In the texture description, the format needs to be Format.R24G8_Typeless
In the depth stencil view description, Format.D24_UNorm_S8_UInt
In the shader resource view description, Format.R24_UNorm_X8_Typeless
That will allow you to build a depth stencil that you can read (if you don't need to read your depth buffer, just ignore the shader resource view and use the depth format directly).
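A sketch of that setup in SlimDX (width, height and the variable names are placeholders):

// Typeless resource so the depth view and the shader view can reinterpret the same memory.
Texture2DDescription depthDescription = new Texture2DDescription()
{
    Width = width,
    Height = height,
    MipLevels = 1,
    ArraySize = 1,
    Format = Format.R24G8_Typeless,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.DepthStencil | BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None
};
Texture2D depthTexture = new Texture2D(device, depthDescription);

// The depth stencil view uses the depth format...
DepthStencilView depthView = new DepthStencilView(device, depthTexture, new DepthStencilViewDescription()
{
    Format = Format.D24_UNorm_S8_UInt,
    Dimension = DepthStencilViewDimension.Texture2D,
    MipSlice = 0
});

// ...while the shader resource view reads the same data as a color-like format.
ShaderResourceView depthShaderView = new ShaderResourceView(device, depthTexture, new ShaderResourceViewDescription()
{
    Format = Format.R24_UNorm_X8_Typeless,
    Dimension = ShaderResourceViewDimension.Texture2D,
    MipLevels = 1,
    MostDetailedMip = 0
});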
You can also increase quality by using mipmaps (which would help a lot, especially when zooming out).
To do so, in your texture description, set the following options (make sure this texture is not multisampled):
texBufferDesc.OptionFlags |= ResourceOptionFlags.GenerateMipMaps;
texBufferDesc.MipLevels = 0; //0 means "all"
Once you're done with your rendering, call context.GenerateMips, passing the shader resource view of the texture that was just rendered.
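In SlimDX that is a single call (a sketch, assuming shaderResourceView is the view created over that texture's full mip chain):

// Regenerate all mip levels from the top-level image that was just rendered.
context.GenerateMips(shaderResourceView);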
About drawing the lines directly instead: that's definitely possible, and it will certainly give you the best quality.
Not sure how many lines you render, but it doesn't look like something a reasonably modern card would struggle with. And a bit of culling can easily help discard lines that are off screen so they don't get drawn.
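Here is a minimal sketch of that kind of coarse culling (all names are hypothetical; a and b are a segment's world-space endpoints): only put segments that pass this test into the vertex buffer.

static bool IsVisible(Vector3 a, Vector3 b,
                      float viewMinX, float viewMaxX, float viewMinY, float viewMaxY)
{
    // Reject the segment if its axis-aligned bounding box lies entirely outside the view.
    float minX = Math.Min(a.X, b.X), maxX = Math.Max(a.X, b.X);
    float minY = Math.Min(a.Y, b.Y), maxY = Math.Max(a.Y, b.Y);
    return maxX >= viewMinX && minX <= viewMaxX &&
           maxY >= viewMinY && minY <= viewMaxY;
}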
You could also do a "hybrid" approach (use the texture when zoomed out, render a subset of the lines when zoomed in); that's not too hard to set up either.
I'm currently working on a project to find the X,Y position of the user who is standing in front of the Kinect.
Even though the user is standing still, the X,Y values keep changing because the Kinect skeleton is flickering. Could someone help me figure out a way to stop the skeleton flickering?
Have you set the smoothing parameters? The following settings result in very smooth movements, but with a lot of latency.
TransformSmoothParameters smoothingParam = new TransformSmoothParameters
{
Smoothing = 0.7f,
Correction = 0.3f,
Prediction = 1.0f,
JitterRadius = 1.0f,
MaxDeviationRadius = 1.0f
};
sensor.SkeletonStream.Enable(smoothingParam);
If that does not work well enough for you, I developed a helper class for one of my Kinect projects to smooth joint coordinates. Create an instance of the following helper class for each smoothed joint and call GetSmoothedXPosition/GetSmoothedYPosition with the screen coordinates of the joint:
using System;
using System.Collections.Generic;
using System.Linq;
internal class JointSmoothing
{
#region Constants and Enumerations
private const int QueueLength = 7;
private const double BaseValue = 0.9;
#endregion
#region Fields
private readonly Queue<double> myXValues = new Queue<double>(QueueLength);
private readonly Queue<double> myYValues = new Queue<double>(QueueLength);
#endregion
#region Public Methods
public double GetSmoothedXPosition(double x)
{
if (myXValues.Count >= QueueLength)
{
myXValues.Dequeue();
}
myXValues.Enqueue(x);
return ExponentialMovingAverage(myXValues);
}
public double GetSmoothedYPosition(double y)
{
if (myYValues.Count >= QueueLength)
{
myYValues.Dequeue();
}
myYValues.Enqueue(y);
return ExponentialMovingAverage(myYValues);
}
public void Clear()
{
myXValues.Clear();
myYValues.Clear();
}
#endregion
#region Private Implementation
private static double ExponentialMovingAverage(Queue<double> values)
{
double numerator = 0;
double denominator = 0;
int valueCount = values.Count;
int weight = valueCount - 1;
foreach (double value in values)
{
numerator += value * Math.Pow(BaseValue, weight);
denominator += Math.Pow(BaseValue, weight);
weight--;
}
double average = values.Sum();
average /= values.Count;
numerator += average * Math.Pow(BaseValue, valueCount);
denominator += Math.Pow(BaseValue, valueCount);
return numerator / denominator;
}
#endregion
}
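For example, here is a sketch of how it might be wired up each frame (MapToScreenX/MapToScreenY are hypothetical stand-ins for however you already map a joint to screen coordinates):

// One smoother per joint, created once and reused every frame.
private readonly JointSmoothing rightHandSmoothing = new JointSmoothing();

private void ProcessSkeleton(Skeleton skeleton)
{
    Joint rightHand = skeleton.Joints[JointType.HandRight];

    // Map the joint to screen space however you already do it, then smooth the result.
    double screenX = MapToScreenX(rightHand.Position); // hypothetical mapping helpers
    double screenY = MapToScreenY(rightHand.Position);

    double smoothedX = rightHandSmoothing.GetSmoothedXPosition(screenX);
    double smoothedY = rightHandSmoothing.GetSmoothedYPosition(screenY);

    // Use smoothedX/smoothedY as the stable X,Y position of the user.
}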
I am working on a WP7 game. I am using Game State Management (http://create.msdn.com/en-US/education/catalog/sample/game_state_management, but I think it's not important). I have a problem with saving data into
Microsoft.Phone.Shell.PhoneApplicationService.Current.State
If I put a sprite there in this method:
public override void Deactivate()
{
#if WINDOWS_PHONE
Microsoft.Phone.Shell.PhoneApplicationService.Current.State["Score"] = Score;
Microsoft.Phone.Shell.PhoneApplicationService.Current.State["cloudSprite"] = cloudSprite;
#endif
base.Deactivate();
}
then there is nothing in
Microsoft.Phone.Shell.PhoneApplicationService.Current.State
in the Activate method. However, if I remove cloudSprite and put only Score there, which is an int, it works fine. I don't know what's wrong; maybe it can't handle more complex objects. I also tried float and double, and they all work. But if I put something more complex there, it doesn't work. What do you think?
EDIT
This is my Sprite class. I don't know how to make it serializable. I have added [DataContractAttribute()] and [DataMember], but it's not working.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Content;
using System.Runtime.Serialization;
using System.IO;
namespace GameStateManagementSample.GameObjects
{
[DataContractAttribute()]
public class Sprite
{
[DataMember]
public Vector2 Position;
[DataMember]
public Vector2 Size;
[DataMember]
public Texture2D Texture;
[DataMember]
public Rectangle Rect
{
get
{
return new Rectangle((int)Position.X, (int)Position.Y, (int)Size.X, (int)Size.Y);
}
}
public Sprite(Vector2 position)
{
Position = position;
}
public Sprite(Vector2 position, Vector2 size)
{
Position = position;
Size = size;
}
public Sprite(Vector2 position, Texture2D texture)
{
Position = position;
Texture = texture;
Size = new Vector2(Texture.Width, Texture.Height);
}
public void LoadContent(string assetName, ContentManager content)
{
Texture = content.Load<Texture2D>(assetName);
if (Size == Vector2.Zero)
Size = new Vector2(Texture.Width, Texture.Height);
}
public virtual void Draw(SpriteBatch spriteBatch)
{
//spriteBatch.Draw(Texture, Rect, Color.White);
spriteBatch.Draw(Texture, Position, Color.White);
}
public virtual void Draw(SpriteBatch spriteBatch, Rectangle TexturePositionInSpriteSheet, Color color)
{
spriteBatch.Draw(Texture, Position, TexturePositionInSpriteSheet, color);
}
public bool Intersects(Vector2 point)
{
if (point.X >= Position.X && point.Y >= Position.Y && point.X <= Position.X + Size.X && point.Y <= Position.Y + Size.Y)
return true;
else return false;
}
public bool Intersects(Rectangle rect)
{
return Rect.Intersects(rect);
}
public static void Serialize(Stream streamObject, object objForSerialization)
{
if (objForSerialization == null || streamObject == null)
return;
DataContractSerializer ser = new DataContractSerializer(objForSerialization.GetType());
ser.WriteObject(streamObject, objForSerialization);
}
public static object Deserialize(Stream streamObject, Type serializedObjectType)
{
if (serializedObjectType == null || streamObject == null)
return null;
DataContractSerializer ser = new DataContractSerializer(serializedObjectType);
return ser.ReadObject(streamObject);
}
}
}
Objects added to the State collection are serialized with the DataContractSerializer. Ensure that anything you're saving there can be serialized that way.
Any serialization errors are silently ignored.
Update
Here's a simplified version of your Sprite object:
[DataContract]
public class Sprite
{
[DataMember]
public Vector2 Position;
[DataMember]
public Vector2 Size;
[DataMember]
public Texture2D Texture;
public Sprite()
{
}
public Sprite(Vector2 position)
{
Position = position;
}
public Sprite(Vector2 position, Vector2 size)
{
Position = position;
Size = size;
}
public Sprite(Vector2 position, Texture2D texture)
{
Position = position;
Texture = texture;
Size = new Vector2(Texture.Width, Texture.Height);
}
}
And here's an example of it being serialized and deserialized:
// Sprite serialization test
var sprite1 = new Sprite(new Vector2(12.34f, 56.78f));
Sprite sprite2;
using (var memStr = new MemoryStream())
{
var serializer = new DataContractSerializer(typeof(Sprite));
serializer.WriteObject(memStr, sprite1);
memStr.Position = 0;
var sr = new StreamReader(memStr);
var serialized = sr.ReadToEnd();
// serialized now looks like
// <Sprite xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.datacontract.org/2004/07/MiscExperiments"><Position xmlns:d2p1="http://schemas.datacontract.org/2004/07/Microsoft.Xna.Framework"><d2p1:X>12.34</d2p1:X><d2p1:Y>56.78</d2p1:Y></Position><Size xmlns:d2p1="http://schemas.datacontract.org/2004/07/Microsoft.Xna.Framework"><d2p1:X>0</d2p1:X><d2p1:Y>0</d2p1:Y></Size><Texture xmlns:d2p1="http://schemas.datacontract.org/2004/07/Microsoft.Xna.Framework.Graphics" i:nil="true" /></Sprite>
memStr.Position = 0;
sprite2 = (Sprite)serializer.ReadObject(memStr);
// sprite2 now contains the same as
// sprite2.Position = { X:12.34, Y:56.78 }
}
I believe you get an ArgumentOutOfRangeException, which is silently ignored if you aren't debugging (when debugging, the exception is thrown). The items stored in the State dictionary need to be serializable. Try storing the sprite itself somewhere else and saving only a link (its source) to the sprite in the State dictionary.
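A sketch of what that could look like (assuming the Game State Management sample's Activate(bool instancePreserved) override, a ContentManager field called content, and a hypothetical "cloud" asset name): store only the serializable pieces and rebuild the sprite when the screen is activated.

public override void Deactivate()
{
#if WINDOWS_PHONE
    var state = Microsoft.Phone.Shell.PhoneApplicationService.Current.State;
    state["Score"] = Score;
    state["CloudPosition"] = cloudSprite.Position; // Vector2 serializes fine
    state["CloudAsset"] = "cloud";                 // hypothetical asset name
#endif
    base.Deactivate();
}

public override void Activate(bool instancePreserved)
{
#if WINDOWS_PHONE
    var state = Microsoft.Phone.Shell.PhoneApplicationService.Current.State;
    if (state.ContainsKey("CloudPosition"))
    {
        cloudSprite = new Sprite((Vector2)state["CloudPosition"]);
        cloudSprite.LoadContent((string)state["CloudAsset"], content); // reloads the Texture2D
    }
#endif
    base.Activate(instancePreserved);
}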
I am creating a ListField. In each row I am adding an image and 3 LabelFields.
Can anyone tell me how to create a KeywordFilterField for this...
Thanks in advance.
I am new to BlackBerry.
A little code would help me a lot.
This is my code for creating a custom list
class CustomListField extends ListField implements ListFieldCallback
{
String type;
int DISPLAY_WIDTH = Display.getWidth();
int DISPLAY_HEIGHT = Display.getHeight();
Vector mItems = new Vector();
Vector mine = new Vector();
Vector three= new Vector();
// SizedVFM mListManager = new SizedVFM(DISPLAY_WIDTH, DISPLAY_HEIGHT - 40);
Bitmap searchresult = Bitmap.getBitmapResource("res/searchresult.png");
HorizontalFieldManager hfManager;
Bitmap image ,image1;
int z = this.getRowHeight();
CustomListField(String text1,String text2,String type)
{
for (int i = 1; i < 31; i++)
{
mItems.addElement(text1 +String.valueOf(i));
mine.addElement(" "+text2);
three.addElement("31");
}
this.type=type;
this.setRowHeight((2*z));
this.setCallback(this);
this.setSize(20);
//mListManager.add(mListField);
//add(mListManager);
}
public void drawListRow(ListField field, Graphics g, int i, int y, int w)
{
// Draw the text.
image = Bitmap.getBitmapResource("res/searchresult.png");
String text = (String) get(field, i);
String mytext = (String)mine.elementAt(i);
String urtext=(String)three.elementAt(i);
g.drawBitmap(0, y, image.getWidth(),image.getHeight(), image, 0, 0);
g.drawText(text, image.getWidth(), y, 0, w);
g.setColor(Color.GRAY);
g.drawText(mytext, image.getWidth(), y+getFont().getHeight(), 0, w);
g.drawText(urtext,Graphics.getScreenWidth()*7/8,y,0,w);
if (i != 0)
{
g.drawLine(0, y, w, y);
}
}
public Object get(ListField listField, int index)
{
return mItems.elementAt(index);
}
public int getPreferredWidth(ListField listField)
{
return DISPLAY_WIDTH;
}
public int indexOfList(ListField listField, String prefix, int start)
{
return 0;
}
protected boolean touchEvent(TouchEvent message)
{
// If click, process Field changed
if ( message.getEvent() == TouchEvent.CLICK )
{
if(type.equals("Stops"))
UiApplication.getUiApplication().pushScreen(new SearchScreen("Services"));
else if(type.equals("Services"))
UiApplication.getUiApplication().pushScreen(new SearchScreen("Stops"));
return true;
}
return super.touchEvent(message);
}
}
The problem with KeywordFilterField is that it internally uses its own ListField, so I think it is going to be difficult to customize. If you want to use it as provided, you'll have to use it as follows:
//KeywordFilterField contains a ListField to display and a search edit field to type in the words
KeywordFilterField keywordFilterField = new KeywordFilterField();
//Instantiate the sorted collection:
CustomList cl = new CustomList(mItems);
//Pass the custom collection
keywordFilterField.setSourceList(cl, cl);
//Now you have to add two fields: first the list itself
myManager.add(keywordFilterField);
//And the search field, probably you'd want it at top:
myScreen.setTitle(keywordFilterField.getKeywordField());
You'll have to implement a custom sortable collection to hold the items you want to display:
class CustomList extends SortedReadableList implements KeywordProvider {
//In constructor, call super constructor with a comparator of <yourClass>
public CustomList(Vector elements)
{
super(new <yourClass>Comparator()); //pass comparator to sort
loadFrom(elements.elements());
}
//Interface implementation
public String[] getKeywords( Object element )
{
if(element instanceof <yourClass> )
{
return StringUtilities.stringToWords(element.toString());
}
return null;
}
void addElement(Object element)
{
doAdd(element);
}
//...
}
You have a full demo available inside the JDE samples folder. It is called keywordfilterdemo.
To use a custom list like the one you posted, you'll probably have to code a lot of stuff: a custom EditField to type the keywords into (receiving events on every typed character), linked to a search on a sorted collection (maybe you could use a SortedReadableList for this) that selects in your ListField the first result returned by that collection.