I need help serving a GeoTIFF image with GeoServer.
I only get a blank layer preview in OpenLayers and an invalid image in PNG format. I followed this guide on how to add the GeoTIFF image. The preview loads the bounding-box extent but shows no image, like the picture below.
This is the log output from when I requested the layer preview using OpenLayers:
Request: getMap
Angle = 0.0
BaseUrl = http://172.105.126.51:8081/geoserver/
Bbox = SRSEnvelope[680086.6381284555 : 680897.2906084555, 9124213.379031269 : 9124973.017911268]
BgColor = java.awt.Color[r=255,g=255,b=255]
Buffer = 0
CQLFilter = null
Crs = PROJCS["WGS 84 / UTM zone 49S",
GEOGCS["WGS 84",
DATUM["World Geodetic System 1984",
SPHEROID["WGS 84", 6378137.0, 298.257223563, AUTHORITY["EPSG","7030"]],
AUTHORITY["EPSG","6326"]],
PRIMEM["Greenwich", 0.0, AUTHORITY["EPSG","8901"]],
UNIT["degree", 0.017453292519943295],
AXIS["Geodetic longitude", EAST],
AXIS["Geodetic latitude", NORTH],
AUTHORITY["EPSG","4326"]],
PROJECTION["Transverse_Mercator", AUTHORITY["EPSG","9807"]],
PARAMETER["central_meridian", 111.0],
PARAMETER["latitude_of_origin", 0.0],
PARAMETER["scale_factor", 0.9996],
PARAMETER["false_easting", 500000.0],
PARAMETER["false_northing", 10000000.0],
UNIT["m", 1.0],
AXIS["Easting", EAST],
AXIS["Northing", NORTH],
AUTHORITY["EPSG","32749"]]
Elevation = []
Env = {}
Exceptions = SE_XML
FeatureId = null
FeatureVersion = null
Filter = null
Filters = null
Format = application/openlayers
FormatOptions = {}
Get = true
Height = 719
Interpolations = []
Layers = [org.geoserver.wms.MapLayerInfo#e35cd445]
MaxFeatures = null
Palette = null
RawKvp = {REQUEST=GetMap, SRS=EPSG:32749, FORMAT=application/openlayers, BBOX=680086.6381284555,9124213.379031269,680897.2906084555,9124973.017911268, VERSION=1.1.0, SERVICE=WMS, WIDTH=768, HEIGHT=719, LAYERS=raster_matra:orto_matra_udara}
RemoteOwsType = null
RemoteOwsURL = null
Request = GetMap
RequestCharset = UTF-8
ScaleMethod = null
Sld = null
SldBody = null
SldVersion = null
SortBy = null
SortByArrays = null
SRS = EPSG:32749
StartIndex = null
StyleBody = null
StyleFormat = sld
Styles = [StyleImpl[ name=raster]]
StyleUrl = null
StyleVersion = null
Tiled = false
TilesOrigin = null
Time = []
Transparent = false
ValidateSchema = false
Version = 1.1.0
ViewParams = null
Width = 768
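For reference, the same GetMap can be replayed outside the preview as a plain PNG. A minimal sketch using Python's requests library (all parameter values are copied from the RawKvp line above; /geoserver/wms is GeoServer's standard WMS endpoint, and requests is assumed to be installed):

import requests

# Replay the GetMap request from the log, but ask for a real PNG
# instead of the application/openlayers preview format.
params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.0",
    "REQUEST": "GetMap",
    "LAYERS": "raster_matra:orto_matra_udara",
    "SRS": "EPSG:32749",
    "BBOX": "680086.6381284555,9124213.379031269,680897.2906084555,9124973.017911268",
    "WIDTH": "768",
    "HEIGHT": "719",
    "FORMAT": "image/png",
}
resp = requests.get("http://172.105.126.51:8081/geoserver/wms", params=params)
with open("preview.png", "wb") as f:
    f.write(resp.content)

If this file is also blank, the problem is in the store/layer configuration rather than in the OpenLayers preview itself.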
Here is how I added the image:
I created a new workspace
Then created a new store
Published the layer (I mostly left everything at the defaults)
This is the GeoTIFF image that I want to serve using GeoServer.
I ran gdalinfo on the image:
What do you think?
I appreciate any help, thanks.
Regards,
Yogi
I made a program using PySimpleGUI. It basically shows some images from a database based on a scanned number. It works, but sometimes it would be useful to be able to increase the size of an image, and it would make my user interface a bit more interactive.
I want to have the possibility to either:
when I hover over an image with the mouse, have the image increase in size, or
click on an image and have a pop-up of the image show up (in a bigger size).
I am not sure how to interact with a sg.Image().
Below you will find a truncated part of my code, showing the way I get the images to show up.
import io
import PySimpleGUI as sg
from PIL import Image

layout = [
    [
        sg.Text("Numéro de boîte"),
        sg.Input(size=(25, 1), key="-FILE-"),
        sg.Button("Load Image"),
        sg.Button("Update DATA"),
        sg.Text("<- useless text ")
    ],
    [sg.Text("Indicateur au max", size=(120, 1), font=("Arial", 18), justification="center")],
    [sg.Image(key="-ALV1-"), sg.Image(key="-ALV2-"), sg.Image(key="-ALV3-"), sg.Image(key="-ALV4-"), sg.Image(key="-ALV5-")],
    [sg.Image(key="-ALV6-"), sg.Image(key="-ALV7-"), sg.Image(key="-ALV8-"), sg.Image(key="-ALV9-"), sg.Image(key="-ALV10-")],
    [sg.Text("_" * 350, size=(120, 1), justification="center")],
    [sg.Text("Indicateur au milieu", size=(120, 1), font=("Arial", 18), justification="center")],
    [sg.Image(key="-ALV11-"), sg.Image(key="-ALV12-"), sg.Image(key="-ALV13-"), sg.Image(key="-ALV14-"), sg.Image(key="-ALV15-")],
    [sg.Image(key="-ALV16-"), sg.Image(key="-ALV17-"), sg.Image(key="-ALV18-"), sg.Image(key="-ALV19-"), sg.Image(key="-ALV20-")],
    [sg.Text("↓↓↓ ↓↓↓", size=(120, 1), font=("Arial", 18), justification="center")],
]

ImageAlv1 = Image.open(PathAlv1)  # PathAlv1 is defined elsewhere in the truncated code
ImageAlv1.thumbnail((250, 250))
bio1 = io.BytesIO()
ImageAlv1.save(bio1, format="PNG")
window["-ALV1-"].update(data=bio1.getvalue())
Use the bind method to get tkinter events, like
"<Enter>": the user moved the mouse pointer into a visible part of an element.
"<Double-1>": two click events happening close together in time.
Use PIL.Image to resize the image and io.BytesIO as the buffer.
import base64
from io import BytesIO
from PIL import Image
import PySimpleGUI as sg

def resize(image, size=(256, 256)):
    imgdata = base64.b64decode(image)
    im = Image.open(BytesIO(imgdata))
    width, height = size
    w, h = im.size
    scale = min(width/w, height/h)
    new_size = (int(w*scale+0.5), int(h*scale+0.5))
    new_im = im.resize(new_size, resample=Image.LANCZOS)
    buffer = BytesIO()
    new_im.save(buffer, format="PNG")
    return buffer.getvalue()

sg.theme('DarkBlue3')

number = 4
column_layout, line = [], []
limit = len(sg.EMOJI_BASE64_HAPPY_LIST) - 1
for i, image in enumerate(sg.EMOJI_BASE64_HAPPY_LIST):
    line.append(sg.Image(data=image, size=(64, 64), pad=(1, 1), background_color='#10C000', expand_y=True, key=f'IMAGE {i}'))
    if i % number == number-1 or i == limit:
        column_layout.append(line)
        line = []

layout = [
    [sg.Image(size=(256, 256), pad=(0, 0), expand_x=True, background_color='green', key='-IMAGE-'),
     sg.Column(column_layout, expand_y=True, pad=(0, 0))],
]
window = sg.Window("Title", layout, margins=(0, 0), finalize=True)
for i in range(limit+1):
    window[f'IMAGE {i}'].bind("<Enter>", "")        # Binding for Mouse enter sg.Image
    # window[f'IMAGE {i}'].bind("<Double-1>", "")   # Binding for Mouse double click on sg.Image
element = window['-IMAGE-']
now = None

while True:
    event, values = window.read()
    if event == sg.WINDOW_CLOSED:
        break
    elif event.startswith("IMAGE"):
        index = int(event.split()[-1])
        if index != now:
            element.update(data=resize(sg.EMOJI_BASE64_HAPPY_LIST[index]))
            now = index

window.close()
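If you also want the click-for-popup behaviour, here is a minimal self-contained sketch using the same approach (it assumes a reasonably recent PySimpleGUI, 4.25+ for the modal= option, and uses the built-in emoji images to stand in for your database images):

import PySimpleGUI as sg

images = sg.EMOJI_BASE64_HAPPY_LIST[:4]
layout = [[sg.Image(data=img, key=f'IMAGE {i}') for i, img in enumerate(images)]]
window = sg.Window("Click a thumbnail", layout, finalize=True)
for i in range(len(images)):
    window[f'IMAGE {i}'].bind("<Double-1>", "+DOUBLE")   # event becomes "IMAGE i+DOUBLE"

while True:
    event, values = window.read()
    if event == sg.WINDOW_CLOSED:
        break
    if isinstance(event, str) and event.endswith("+DOUBLE"):
        index = int(event.split("+")[0].split()[-1])
        # Modal pop-up with the clicked image; swap in resize(...) for a larger version.
        sg.Window("Preview", [[sg.Image(data=images[index])]], modal=True).read(close=True)

window.close()

Note that if you combine this with the "<Enter>" binding above, check for the "+DOUBLE" suffix before the plain startswith("IMAGE") test, since the double-click event string also starts with "IMAGE".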
I am new to TensorFlow, and I got my code running successfully by modifying tutorials from the official website.
I checked some other answers on Stack Overflow, which say my problem is likely due to something being added to the graph every time. However, I have no idea where to look for the code that might be causing this.
Also, I used tf.py_function to map the dataset because I really need eager execution inside the mapping function.
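For context, here is a toy sketch (not my actual pipeline) of what tf.py_function does inside a map call; the wrapped function runs eagerly, so .numpy() is available on its arguments:

import tensorflow as tf

def double_eagerly(x):
    # Inside tf.py_function the argument is an eager tensor, so .numpy() works.
    return x.numpy() * 2

ds = tf.data.Dataset.range(4)
ds = ds.map(lambda x: tf.py_function(double_eagerly, [x], tf.int64))
print(list(ds.as_numpy_iterator()))  # [0, 2, 4, 6]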
import os

import numpy as np
import tensorflow as tf

def get_dataset(data_index):
    # data_index is a Pandas DataFrame that contains image/label pair info; each row is one pair
    data_index = prepare_data_index(data_index)
    # Shuffle the dataframe here because dataset.shuffle takes a very long time.
    data_index = data_index.sample(data_index.shape[0])
    path = path_to_img_dir
    # List of dataframe indices indicating rows that will be included in the dataset for training.
    indices_ls = ['{}_L'.format(x) for x in list(data_index.index)] + ['{}_R'.format(x) for x in list(data_index.index)]
    # around 310k images
    image_count = len(indices_ls)
    list_ds = tf.data.Dataset.from_tensor_slices(indices_ls)
    # dataset.shuffle is commented out because it takes too much time
    # list_ds = list_ds.shuffle(image_count, reshuffle_each_iteration=False)
    val_size = int(image_count * 0.2)
    train_ds = list_ds.skip(val_size)
    val_ds = list_ds.take(val_size)

    def get_label(index):
        index = str(np.array(index).astype(str))
        delim = index.split('_')
        state = delim[1]
        index = int(delim[0])
        if state == 'R':
            label = data_index.loc[index][right_labels].to_numpy().flatten()
        elif state == 'L':
            label = data_index.loc[index][left_labels].to_numpy().flatten()
        return tf.convert_to_tensor(label, dtype=tf.float16)

    def get_img(index):
        index = str(np.array(index).astype(str))
        delim = index.split('_')
        state = delim[1]
        index = int(delim[0])
        file_path = '{}_{}.jpg'.format(data_index.loc[index, 'sub_folder'],
                                       str(int(data_index.loc[index, 'img_index'])).zfill(4))
        img = tf.io.read_file(os.path.join(path, file_path))
        img = tf.image.decode_jpeg(img, channels=3)
        full_width = 320
        img = tf.image.resize(img, [height, full_width])
        # Crop half of the image depending on the state
        if state == 'R':
            img = tf.image.crop_to_bounding_box(img, offset_height=0, offset_width=0,
                                                target_height=height, target_width=int(full_width / 2))
            img = tf.image.flip_left_right(img)
        elif state == 'L':
            img = tf.image.crop_to_bounding_box(img, offset_height=0, offset_width=int(full_width / 2),
                                                target_height=height, target_width=int(full_width / 2))
        img = tf.image.resize(img, [height, width])
        img = tf.keras.preprocessing.image.array_to_img(
            img.numpy(), data_format=None, scale=True, dtype=None
        )
        # Apply auto white balancing; outputs an np array
        img = AWB(img)
        img = tf.convert_to_tensor(img, dtype=tf.float16)
        return img

    def process_path(index):
        label = get_label(index)
        img = get_img(index)
        return img, label

    AUTOTUNE = tf.data.experimental.AUTOTUNE
    train_ds = train_ds.map(lambda x: tf.py_function(
        process_path,
        [x], (tf.float16, tf.float16)), num_parallel_calls=AUTOTUNE)
    val_ds = val_ds.map(lambda x: tf.py_function(
        process_path,
        [x], (tf.float16, tf.float16)), num_parallel_calls=AUTOTUNE)

    def configure_for_performance(ds):
        ds = ds.cache()
        # ds = ds.shuffle(buffer_size=image_count)
        ds = ds.batch(batch_size)
        ds = ds.prefetch(buffer_size=AUTOTUNE)
        return ds

    train_ds = configure_for_performance(train_ds)
    val_ds = configure_for_performance(val_ds)
    return train_ds, val_ds
Can anyone please help me? Thanks!
Here is the rest of my code.
def initialize_model():
    IMG_SIZE = (height, width)
    preprocess_input = tf.keras.applications.vgg19.preprocess_input
    IMG_SHAPE = IMG_SIZE + (3,)
    base_model = tf.keras.applications.VGG19(input_shape=IMG_SHAPE,
                                             include_top=False,
                                             weights='imagenet')
    global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
    prediction_layer = tf.keras.layers.Dense(class_num, activation=tf.nn.sigmoid, use_bias=True)
    inputs = tf.keras.Input(shape=(height, width, 3))
    x = preprocess_input(inputs)
    x = base_model(x, training=True)
    x = global_average_layer(x)
    outputs = prediction_layer(x)
    model = tf.keras.Model(inputs, outputs)

    def custom_loss(y_gt, y_pred):
        L1_loss_out = tf.math.abs(tf.math.subtract(y_gt, y_pred))
        scaler = tf.pow(50.0, y_gt)
        scaled_loss = tf.math.multiply(L1_loss_out, scaler)
        scaled_loss = tf.math.reduce_mean(
            scaled_loss, axis=None, keepdims=False, name=None
        )
        return scaled_loss

    base_learning_rate = 0.001
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=base_learning_rate, momentum=0.9),
                  loss=custom_loss,
                  metrics=['mean_absolute_error'])
    return model

def train(data_index, epoch_num, save_path):
    train_dataset, validation_dataset = get_dataset(data_index)
    model = initialize_model()
    model.summary()
    history = model.fit(train_dataset,
                        epochs=epoch_num,
                        validation_data=validation_dataset)
    model.save_weights(save_path)
    return model, history
I encoded some images to TFRecords as an example and then tried to decode them. However, there is a bug in the decoding process that I really cannot fix.
InvalidArgumentError: Expected image (JPEG, PNG, or GIF), got unknown format starting with '\257\222\244\257\222\244\260\223\245\260\223\245\262\225\247\263'
[[{{node DecodeJpeg}}]] [Op:IteratorGetNextSync]
encode:
import os

import numpy as np
import tensorflow as tf
from PIL import Image

def _bytes_feature(value):
    """Returns a bytes_list from a string / byte."""
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _float_feature(value):
    """Returns a float_list from a float / double."""
    return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))

def _int64_feature(value):
    """Returns an int64_list from a bool / enum / int / uint."""
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

src_path = r"E:\data\example"
record_path = r"E:\data\data"
sum_per_file = 4
num = 0
key = 3
for img_name in os.listdir(src_path):
    recordFileName = "trainPrecipitate.tfrecords"
    writer = tf.io.TFRecordWriter(record_path + recordFileName)
    img_path = os.path.join(src_path, img_name)
    img = Image.open(img_path, "r")
    height = np.array(img).shape[0]
    width = np.array(img).shape[1]
    img_raw = img.tobytes()  # raw pixel bytes, not JPEG-encoded bytes
    example = tf.train.Example(features=tf.train.Features(feature={
        'image/encoded': _bytes_feature(img_raw),
        'image/class/label': _int64_feature(key),
        'image/height': _int64_feature(height),
        'image/width': _int64_feature(width)
    }))
    writer.write(example.SerializeToString())
writer.close()
decode:
import IPython.display as display

train_files = tf.data.Dataset.list_files(r"E:\data\datatrainPrecipitate.tfrecords")
train_files = train_files.interleave(tf.data.TFRecordDataset)

def decode_example(example_proto):
    image_feature_description = {
        'image/height': tf.io.FixedLenFeature([], tf.int64),
        'image/width': tf.io.FixedLenFeature([], tf.int64),
        'image/class/label': tf.io.FixedLenFeature([], tf.int64, default_value=3),
        'image/encoded': tf.io.FixedLenFeature([], tf.string)
    }
    parsed_features = tf.io.parse_single_example(example_proto, image_feature_description)
    height = tf.cast(parsed_features['image/height'], tf.int32)
    width = tf.cast(parsed_features['image/width'], tf.int32)
    label = tf.cast(parsed_features['image/class/label'], tf.int32)
    image_buffer = parsed_features['image/encoded']
    image = tf.io.decode_jpeg(image_buffer, channels=3)
    image = tf.cast(image, tf.float32)
    return image, label

def processed_dataset(dataset):
    dataset = dataset.repeat()
    dataset = dataset.batch(1)
    dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
    # print(dataset)
    return dataset

train_dataset = train_files.map(decode_example)
# train_dataset = processed_dataset(train_dataset)
print(train_dataset)
for (image, label) in train_dataset:
    print(repr(image))
Running this loop raises the same InvalidArgumentError shown at the top.
I found that I can use tf.io.decode_raw() to decode these TFRecords and then tf.reshape() to get the original image back, but I still don't know when to use tf.io.decode_raw() and when to use tf.io.decode_jpeg().
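The rule of thumb, as far as I can tell: tf.io.decode_raw() is for buffers of raw pixel bytes, which is exactly what PIL's img.tobytes() wrote above, while tf.io.decode_jpeg() expects the bytes of a complete JPEG file (e.g. written with open(img_path, 'rb').read()). Here is a sketch of both cases (a hypothetical helper, using the same feature names as decode_example above):

import tensorflow as tf

def decode_image_feature(parsed_features, height, width, is_raw=True):
    """Pick the decoder that matches how the bytes were written."""
    if is_raw:
        # Raw pixel bytes (img.tobytes()): decode_raw only reinterprets the
        # buffer, so the shape must be restored manually from the stored
        # height/width features (assuming 3 channels here).
        image = tf.io.decode_raw(parsed_features['image/encoded'], tf.uint8)
        image = tf.reshape(image, [height, width, 3])
    else:
        # Bytes of an actual JPEG file: decode_jpeg parses the JPEG header
        # itself, so no reshape is needed.
        image = tf.io.decode_jpeg(parsed_features['image/encoded'], channels=3)
    return tf.cast(image, tf.float32)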
I'm trying to use
NSBitmapImageRep.bitmapImageRepByConvertingToColorSpace(NSColorSpace.genericRGBColorSpace(), renderingIntent: NSColorRenderingIntent.Perceptual);
to convert an NSImage to a format that can be handled by OpenGL, and it works (unlike bitmapImageRepByRetaggingWithColorSpace()), but I get an error:
<Error>: The function ‘CGContextClear’ is obsolete and will be removed in an upcoming update. Unfortunately, this application, or a library it uses, is using this obsolete function, and is thereby contributing to an overall degradation of system performance.
which is sort of useless, because it's Apple's own code, and I can't seem to find an alternative to bitmapImageRepByConvertingToColorSpace(), or any note that it too is deprecated.
EDIT:
var image=NSImage(size: frame);
image.lockFocus();
//println("NSImage: \(image.representations)");
string.drawAtPoint(NSMakePoint(0.0,0.0), withAttributes: attribs);
var bitmap2=NSBitmapImageRep(focusedViewRect: NSMakeRect(0.0,0.0,frame.width,frame.height))?;
image.unlockFocus();
bitmap=bitmap2!.bitmapImageRepByConvertingToColorSpace(NSColorSpace.genericRGBColorSpace(), renderingIntent: NSColorRenderingIntent.Perceptual);
For some reason this question interested me. It looks like the solution is to lock focus on your NSImage, then use NSBitmapImageRep(focusedViewRect:) to create the NSBitmapImageRep.
I tried to recreate your situation (as I understand it) and then create an NSBitmapImageRep with the following code:
var img:NSImage = NSImage(size: NSMakeSize(200,200))
img.lockFocus()
let testString:NSString = "test String"
testString.drawAtPoint(NSMakePoint(10,10), withAttributes: nil)
img.unlockFocus()
println("img Description = \(img.description)")
I got the following description:
img Description = NSImage 0x608000067ac0 Size={200, 200} Reps=(
"NSCGImageSnapshotRep:0x61000007cf40 cgImage=CGImage 0x6100001a2bc0")
I then extended this code to:
var img:NSImage = NSImage(size: NSMakeSize(200,200))
img.lockFocus()
let testString:NSString = "test String"
testString.drawAtPoint(NSMakePoint(10,10), withAttributes: nil)
img.unlockFocus()
println("img Description = \(img.description)")
img.lockFocus()
var bitmapRep:NSBitmapImageRep = NSBitmapImageRep(focusedViewRect: NSMakeRect(0.0, 0.0, img.size.width, img.size.height))!
img.unlockFocus()
println("bitmap data planes = \(bitmapRep.bitmapData)")
println("bitmap pixels wide = \(bitmapRep.size.width)")
println("bitmap pixels high = \(bitmapRep.size.height)")
println("bits per sample = \(bitmapRep.bitsPerSample)")
println("samples per pixel = \(bitmapRep.samplesPerPixel)")
println("has alpha = \(bitmapRep.alpha)")
println("is planar = \(bitmapRep.planar)")
println("color space name = \(bitmapRep.colorSpace)")
println("bitmap format = \(bitmapRep.bitmapFormat)")
println("bytes per row = \(bitmapRep.bytesPerRow)")
println("bits per pixel = \(bitmapRep.bitsPerPixel)")
The output for the NSBitmapImageRep parameters was:
bitmap data planes = 0x000000010b6ff100
bitmap pixels wide = 200.0
bitmap pixels high = 200.0
bits per sample = 8
samples per pixel = 4
has alpha = true
is planar = false
color space name = Color LCD colorspace
bitmap format = C.NSBitmapFormat
bytes per row = 800
bits per pixel = 32
I've posted some new code below. I created two NSBitmapImageReps just like your code (except that I used the method bitmapImageRepByRetaggingWithColorSpace), and I exported both of them to a PNG file. I got the same image in both PNG files.
But I'm wondering a couple of things. First, the difference between your two tests was not only the OS X version; the hardware also produced different color spaces. You should check to be sure bitmap2 is not nil.
Second, I'm wondering if OpenGL cares. If bitmap2 is not nil and you pass OpenGL bitmap2.bitmapData, does it work?
My new code (deleting most of the println):
var img:NSImage = NSImage(size: NSMakeSize(200,200))
img.lockFocus()
let testString:NSString = "test String"
let font:NSFont = NSFont(name: "AppleCasual", size: 18.0)!
let textStyle = NSMutableParagraphStyle.defaultParagraphStyle().mutableCopy() as NSMutableParagraphStyle
textStyle.alignment = NSTextAlignment.LeftTextAlignment
let textColor:NSColor = NSColor(calibratedRed: 1.0, green: 0.0, blue: 1.0, alpha: 1.0)
let attribs:NSDictionary = [NSFontAttributeName: font,
NSForegroundColorAttributeName: textColor,
NSParagraphStyleAttributeName: textStyle]
testString.drawAtPoint(NSMakePoint(0.0, 0.0), withAttributes: attribs)
img.unlockFocus()
println("img Description = \(img.description)")
img.lockFocus()
var bitmapRep:NSBitmapImageRep = NSBitmapImageRep(focusedViewRect: NSMakeRect(0.0, 0.0, img.size.width, img.size.height))!
img.unlockFocus()
let pngPath:String = "TestStringBeforeChange.png"
let imageProps = [NSImageCompressionFactor: NSNumber(float: 1.0)]
let outputImageData = bitmapRep.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: imageProps)
let fileSavePanel = NSSavePanel()
fileSavePanel.nameFieldStringValue = pngPath
var savePanelReturn = fileSavePanel.runModal()
if savePanelReturn == NSFileHandlingPanelOKButton
{
    var theFileURL = fileSavePanel.URL
    var saveStatus = outputImageData?.writeToURL(theFileURL!, atomically: true)
}
let bitmapRep2 = bitmapRep.bitmapImageRepByRetaggingWithColorSpace(NSColorSpace.genericRGBColorSpace())
let pngPath2:String = "TestStringAfterChange.png"
let outputImageData2 = bitmapRep2!.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: imageProps)
let fileSavePanel2 = NSSavePanel()
fileSavePanel2.nameFieldStringValue = pngPath2
var savePanel2Return = fileSavePanel2.runModal()
if savePanel2Return == NSFileHandlingPanelOKButton
{
    var theFileURL2 = fileSavePanel2.URL
    var saveStatus2 = outputImageData2?.writeToURL(theFileURL2!, atomically: true)
}
Sorry for all the posts, but I don't want to let this go, and it's interesting.
I created 3 PNG files: the first using the original NSBitmapImageRep, a second using an NSBitmapImageRep created with bitmapImageRepByRetaggingWithColorSpace and a third using an NSBitmapImageRep created with bitmapImageRepByConvertingToColorSpace (I, of course, got the system warning when I used this function).
Looking at these three PNG files in Preview's Inspector, they all had RGB color models; only the ColorSync profile of the first file differed from that of the second and third files. So I thought maybe there was no difference in the bitmapData among these files.
I then wrote some code to compare the bitmapData:
var firstRepPointer:UnsafeMutablePointer<UInt8> = bitmapRep.bitmapData
var secondRepPointer:UnsafeMutablePointer<UInt8> = bitmapRep2!.bitmapData
var firstByte:UInt8 = 0
var secondByte:UInt8 = 0
for var i:Int = 0; i < 200; i++
{
    for var j:Int = 0; j < 200; j++
    {
        for var k:Int = 0; k < 4; k++
        {
            memcpy(&firstByte, firstRepPointer, 1)
            firstRepPointer += 1
            memcpy(&secondByte, secondRepPointer, 1)
            secondRepPointer += 1
            if firstByte != secondByte {println("firstByte = \(firstByte) secondByte = \(secondByte)")}
        }
    }
}
As I expected, there were no differences when I compared the bitmapData from the original bitmap rep to the bitmapData from the bitmap rep created with bitmapImageRepByRetaggingWithColorSpace.
It got more interesting, however, when I compared the bitmapData from the original bitmap rep to the data from the bitmap rep created with bitmapImageRepByConvertingToColorSpace.
There were lots of differences in the data. Small differences, but lots of them.
Even though there were no differences I could see in the PNG files, the underlying data had been changed by bitmapImageRepByConvertingToColorSpace.
All that said, I think you should be able to pass .bitmapData from any of these bitmap reps to an OpenGL texture. The data from the bitmap rep created with bitmapImageRepByConvertingToColorSpace might create a texture that looks different, but it should be valid data.
I'm having issues making a set of background images save and load correctly. Please note that this has worked correctly in the past with one or two images in the caseSwapper.
Structure:
On my stage I have a set of draggable objects that you can save and load as you please (these work).
The background, a MovieClip called "caseSwapper", contains a set of frames with a different image inside each frame. E.g. frame 1 is labelled "frameone" and contains a pretty picture, frame 2 is labelled "Frametwo" and contains an alternative image, and so on.
Load and save buttons on the stage allow you to store the data to the SharedObject "mySO".
Issues and Behaviour
On the face of it, the save appears to be working. The trace statement declares that the current frame is being stored to mySO, although I'm not entirely convinced it is. Basically, when the player has a certain background selected and clicks 'save', I need the current image to be saved/written to the SharedObject.
Notes: Frame 1 appears to work when I click 'load' from the stage. When I launch the application (not load), even after saving frame 1, 2, 3 or 4, only frame 4 launches/displays. I then have to click load to retrieve my SharedObject, which only shows the first frame. Any pointers? The script to edit is at the bottom. Please note that I'm a designer first and foremost!
save_btn.addEventListener(MouseEvent.CLICK, clickersave);

function clickersave(e:MouseEvent):void {
    saved.play();
    mySO.data.myblcskull_mc_x = blcskull_mc.x;
    mySO.data.myblcskull_mc_y = blcskull_mc.y;
    mySO.data.myblackhandbag_mc_y = blackhandbag_mc.y;
    mySO.data.myblackhandbag_mc_x = blackhandbag_mc.x;
    mySO.data.myhotlips_mc_x = hotlips_mc.x;
    mySO.data.myhotlips_mc_y = hotlips_mc.y;
    mySO.data.my_x = bones_mc.x;
    mySO.data.my_y = bones_mc.y;
    mySO.data.mybut_x = btrfly_mc.x;
    mySO.data.mybut_y = btrfly_mc.y;
    mySO.data.mytig_x = tiger_mc.x;
    mySO.data.mytig_y = tiger_mc.y;
    mySO.data.myskullface_mc_y = skullface_mc.y;
    mySO.data.myskullface_mc_x = skullface_mc.x;
    mySO.data.myblack_tile_mc_zero_y = black_tile_mc_zero.y;
    mySO.data.myblack_tile_mc_zero_x = black_tile_mc_zero.x;
    mySO.data.myblack_tile_mc_one_x = black_tile_mc_one.x;
    mySO.data.myblack_tile_mc_one_y = black_tile_mc_one.y;
    mySO.data.mycrown_mc_y = crown_mc.y;
    mySO.data.mycrown_mc_x = crown_mc.x;
    mySO.data.myperfume_mc_y = perfume_mc.y;
    mySO.data.myperfume_mc_x = perfume_mc.x;
    mySO.data.myheart_mc_x = heart_mc.x;
    mySO.data.myheart_mc_y = heart_mc.y;
    mySO.data.myrose_mc_y = rose_mc.y;
    mySO.data.myrose_mc_x = rose_mc.x;
    // tears saved - - - - - - -
    mySO.data.mytear_drop_mc_one_x = tear_drop_mc_one.x;
    mySO.data.mytear_drop_mc_one_y = tear_drop_mc_one.y;
    mySO.data.mytearup_drop_mc_three_x = tearup_drop_mc_three.x;
    mySO.data.mytearup_drop_mc_three_y = tearup_drop_mc_three.y;
    mySO.data.mytearup_drop_mc_four_x = tearup_drop_mc_four.x;
    mySO.data.mytearup_drop_mc_four_y = tearup_drop_mc_four.y;
    mySO.data.mytear_drop_mc_two_x = tear_drop_mc.x;
    mySO.data.mytear_drop_mc_two_y = tear_drop_mc.y;
    mySO.data.mytear_side_mc_one_x = tear_side_mc_one.x;
    mySO.data.mytear_side_mc_one_y = tear_side_mc_one.y;
    mySO.data.mytear_side_mc_two_x = tear_side_mc_two.x;
    mySO.data.mytear_side_mc_two_y = tear_side_mc_two.y;
    mySO.data.mytear_op_mc_one_y = tear_op_mc_one.y;
    mySO.data.mytear_op_mc_one_x = tear_op_mc_one.x;
    mySO.data.mytear_op_mc_two_y = tear_op_mc_two.y;
    mySO.data.mytear_op_mc_two_x = tear_op_mc_two.x;
    //tear_op_mc_one
    // pink gems
    mySO.data.mypink_jewel_mc_one_x = pink_jewel_mc_one.x;
    mySO.data.mypink_jewel_mc_one_y = pink_jewel_mc_one.y;
    mySO.data.mypink_jewel_mc_two_x = pink_jewel_mc_two.x;
    mySO.data.mypink_jewel_mc_two_y = pink_jewel_mc_two.y;
    mySO.data.mypink_jewel_mc_three_x = pink_jewel_mc_three.x;
    mySO.data.mypink_jewel_mc_three_y = pink_jewel_mc_three.y;
    mySO.data.mypink_jewel_mc_four_x = pink_jewel_mc_four.x;
    mySO.data.mypink_jewel_mc_four_y = pink_jewel_mc_four.y;
    mySO.data.mypink_jewel_mc_five_x = pink_jewel_mc_five.x;
    mySO.data.mypink_jewel_mc_five_y = pink_jewel_mc_five.y;
    mySO.data.mypink_jewel_mc_six_x = pink_jewel_mc_six.x;
    mySO.data.mypink_jewel_mc_six_y = pink_jewel_mc_six.y;
    mySO.data.mypink_jewel_mc_seven_x = pink_jewel_mc_seven.x;
    mySO.data.mypink_jewel_mc_seven_y = pink_jewel_mc_seven.y;
    mySO.data.mypink_jewel_mc_eight_x = pink_jewel_mc_eight.x;
    mySO.data.mypink_jewel_mc_eight_y = pink_jewel_mc_eight.y;
    mySO.data.mypink_jewel_mc_nine_x = pink_jewel_mc_nine.x;
    mySO.data.mypink_jewel_mc_nine_y = pink_jewel_mc_nine.y;
    // bg saves
    mySO.data.myBgFrame = 1;
    mySO.data.myBgFrameone = 2;
    mySO.data.myBgFrametwo = 3;
    mySO.data.myBgFramethree = 4;
    trace("bgbackgrounds");
    // silver gems - - - - - - - - -
    mySO.data.mycircle_gem_mc_x = circle_gem_mc.x;
    mySO.data.mycircle_gem_mc_y = circle_gem_mc.y;
    mySO.data.mycircle_gem_mc_two_x = circle_gem_mc_two.x;
    mySO.data.mycircle_gem_mc_two_y = circle_gem_mc_two.y;
    mySO.data.mycircle_gem_mc_thirteen_x = circle_gem_mc_thirteen.x;
    mySO.data.mycircle_gem_mc_thirteen_y = circle_gem_mc_thirteen.y;
    //circle_gem_mc_six
    mySO.flush();
}
if (mySO.data.myBgFrame){
    caseSwapper.gotoAndStop(mySO.data.myBgFrame);
}
if (mySO.data.myBgFrameone){
    caseSwapper.gotoAndStop(mySO.data.myBgFrameone);
}
if (mySO.data.myBgFrametwo){
    caseSwapper.gotoAndStop(mySO.data.myBgFrametwo);
}
if (mySO.data.myBgFramethree){
    caseSwapper.gotoAndStop(mySO.data.myBgFramethree);
}
//caseSwapper.currentFrame = mySO.data.myBgFrame;
/////// ---------------------- loader
// ---------------------- LOADER -------------------------
//--------------------------------------------------------
//--------------------------------------------------------
// When the load button is clicked, it loads the x and y positions of the dragged objects from the
// SharedObject; it remembers the last saved values!
load_btn.addEventListener(MouseEvent.CLICK, loadlast);

function loadlast(e:MouseEvent):void {
    //saved.play();
    caseSwapper.gotoAndStop(mySO.data.myBgFrame);
    //caseSwapper.currentFrame = mySO.data.myBgFrame;
    //caseSwapper.gotoAndStop(mySO.data.myBgFrameone);
    //caseSwapper.gotoAndStop(mySO.data.myBgFrametwo);
    //caseSwapper.gotoAndStop(mySO.data.myBgFramethree);
    //caseSwapper.gotoAndStop(mySO.data.myBgFramefour);
    blcskull_mc.x = mySO.data.myblcskull_mc_x;
    blcskull_mc.y = mySO.data.myblcskull_mc_y;
    blackhandbag_mc.y = mySO.data.myblackhandbag_mc_y;
    blackhandbag_mc.x = mySO.data.myblackhandbag_mc_x;
    bones_mc.x = mySO.data.my_x;
    bones_mc.y = mySO.data.my_y;
    tiger_mc.x = mySO.data.mytig_x;
    tiger_mc.y = mySO.data.mytig_y;
    btrfly_mc.x = mySO.data.mybut_x;
    btrfly_mc.y = mySO.data.mybut_y;
    crown_mc.x = mySO.data.mycrown_mc_x;
    crown_mc.y = mySO.data.mycrown_mc_y;
    perfume_mc.x = mySO.data.myperfume_mc_x;
    perfume_mc.y = mySO.data.myperfume_mc_y;
    heart_mc.x = mySO.data.myheart_mc_x;
    heart_mc.y = mySO.data.myheart_mc_y;
    rose_mc.y = mySO.data.myrose_mc_y;
    rose_mc.x = mySO.data.myrose_mc_x;
    pink_jewel_mc_one.x = mySO.data.mypink_jewel_mc_one_x;
    pink_jewel_mc_one.y = mySO.data.mypink_jewel_mc_one_y;
    pink_jewel_mc_two.x = mySO.data.mypink_jewel_mc_two_x;
    pink_jewel_mc_two.y = mySO.data.mypink_jewel_mc_two_y;
    pink_jewel_mc_three.x = mySO.data.mypink_jewel_mc_three_x;
    pink_jewel_mc_three.y = mySO.data.mypink_jewel_mc_three_y;
    pink_jewel_mc_four.x = mySO.data.mypink_jewel_mc_four_x;
    pink_jewel_mc_four.y = mySO.data.mypink_jewel_mc_four_y;
    pink_jewel_mc_five.x = mySO.data.mypink_jewel_mc_five_x;
    pink_jewel_mc_five.y = mySO.data.mypink_jewel_mc_five_y;
    pink_jewel_mc_six.x = mySO.data.mypink_jewel_mc_six_x;
    pink_jewel_mc_six.y = mySO.data.mypink_jewel_mc_six_y;
    pink_jewel_mc_seven.x = mySO.data.mypink_jewel_mc_seven_x;
    pink_jewel_mc_seven.y = mySO.data.mypink_jewel_mc_seven_y;
    pink_jewel_mc_eight.x = mySO.data.mypink_jewel_mc_eight_x;
    pink_jewel_mc_eight.y = mySO.data.mypink_jewel_mc_eight_y;
    pink_jewel_mc_nine.x = mySO.data.mypink_jewel_mc_nine_x;
    pink_jewel_mc_nine.y = mySO.data.mypink_jewel_mc_nine_y;
    hotlips_mc.x = mySO.data.myhotlips_mc_x;
    hotlips_mc.y = mySO.data.myhotlips_mc_y;
    tearup_drop_mc_three.y = mySO.data.mytearup_drop_mc_three_y;
    tearup_drop_mc_three.x = mySO.data.mytearup_drop_mc_three_x;
    tearup_drop_mc_four.x = mySO.data.mytearup_drop_mc_four_x;
    tearup_drop_mc_four.y = mySO.data.mytearup_drop_mc_four_y;
    tear_side_mc_one.x = mySO.data.mytear_side_mc_one_x;
    tear_side_mc_one.y = mySO.data.mytear_side_mc_one_y;
    //tear_side_mc_two.x = mySO.data.mytear_side_mc_two_x;
    //tear_side_mc_two.y = mySO.data.mytear_side_mc_two_y;
    tear_op_mc_one.y = mySO.data.mytear_op_mc_one_y;
    tear_op_mc_one.x = mySO.data.mytear_op_mc_one_x;
    tear_op_mc_two.y = mySO.data.mytear_op_mc_two_y;
    tear_op_mc_two.x = mySO.data.mytear_op_mc_two_x;
    //--- silver little gems -----------------
    circle_gem_mc_thirteen.x = mySO.data.mycircle_gem_mc_thirteen_x;
    circle_gem_mc_thirteen.y = mySO.data.mycircle_gem_mc_thirteen_y;
    // These keys must match the ones written in clickersave; the original read
    // mySO.data.mycircle_circle_gem_mc_two_x/_y, which are never saved.
    circle_gem_mc_two.x = mySO.data.mycircle_gem_mc_two_x;
    circle_gem_mc_two.y = mySO.data.mycircle_gem_mc_two_y;
    mySO.flush();
}
You're not actually storing the current frame anywhere; this section just saves the same four constants every time:
mySO.data.myBgFrame = 1;
mySO.data.myBgFrameone = 2;
mySO.data.myBgFrametwo = 3;
mySO.data.myBgFramethree = 4;
This is also why you only see frame 4 when you launch: every one of your if statements sees a positive number (anything other than zero or NaN counts as true), so they all execute one after the other and the last gotoAndStop wins.
Instead of the above, all you need is this in your save function:
mySO.data.myBgFrame = caseSwapper.currentFrame;
Then if you want to jump to that frame on launch, you only need your first if statement:
if (mySO.data.myBgFrame){
    caseSwapper.gotoAndStop(mySO.data.myBgFrame);
}