I want the function to return the "root" and the paths to all the subsystems below it. I have no clue yet how to do it; this is my attempt. I haven't run it because I still need to find the functions for lists, appending, getting subsystem paths, and so on. Before I keep searching for those functions, I wanted to ask whether this is the right approach, or whether there are existing functions that already do this (or parts of it)?
function [dest, paths] = SaveRootAndBelow(path)
    entitytosave = gcs;
    if strcmp(bdroot(entitytosave), entitytosave)
        % I'm the main model
        dest = save_system(entitytosave, path);
        paths = getPaths(entitytosave);
    else
        % I'm a subsystem
        newbd = new_system;
        open_system(newbd);
        % copy the subsystem
        Simulink.SubSystem.copyContentsToBlockDiagram(entitytosave, newbd);
        dest = save_system(newbd, path);
        paths = getPaths(getfullname(newbd));
        % close the new model
        close_system(newbd, 0);
    end
end
function paths = getPaths(root)
    paths = {};
    % find_system searches all levels by default; restrict it to direct
    % children here because we recurse explicitly below.
    children = find_system(root, 'SearchDepth', 1, 'BlockType', 'SubSystem');
    children = setdiff(children, {root});   % find_system can return root itself
    for i = 1:numel(children)
        paths{end+1} = children{i};              % path of this subsystem
        paths = [paths, getPaths(children{i})];  % paths of everything below it
    end
end
I've used the Bazel rule below to build static libraries with Bazel:
def _cc_static_library_impl(ctx):
    cc_deps = [dep[CcInfo] for dep in ctx.attr.deps]
    libraries = []
    for cc_dep in cc_deps:
        for link_input in cc_dep.linking_context.linker_inputs.to_list():
            for library in link_input.libraries:
                libraries += library.pic_objects
    args = ["r", ctx.outputs.out.path] + [f.path for f in libraries]
    ctx.actions.run(
        inputs = libraries,
        outputs = [ctx.outputs.out],
        executable = "/usr/bin/ar",
        arguments = args,
    )
    return [DefaultInfo()]

cc_static_library = rule(
    implementation = _cc_static_library_impl,
    attrs = {
        "deps": attr.label_list(providers = [CcInfo]),
    },
    outputs = {"out": "lib%{name}.a"},
)
How can I extract the archiver command from the current toolchain instead of using the hardcoded /usr/bin/ar? I've based the rule on what I found on the internet, and I have very limited knowledge in this area. This example seems to do something related:
https://github.com/bazelbuild/rules_cc/blob/main/examples/my_c_archive/my_c_archive.bzl
This is the relevant part of my_c_archive:
archiver_path = cc_common.get_tool_for_action(
    feature_configuration = feature_configuration,
    action_name = CPP_LINK_STATIC_LIBRARY_ACTION_NAME,
)
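For that call to work you need a cc_toolchain and a feature_configuration in scope. A minimal Starlark sketch of how they are typically obtained (the load labels and the _cc_toolchain attribute mirror the rules_cc example linked above; treat them as assumptions for your Bazel version):

load("@bazel_tools//tools/cpp:toolchain_utils.bzl", "find_cpp_toolchain")
load("@rules_cc//cc:action_names.bzl", "CPP_LINK_STATIC_LIBRARY_ACTION_NAME")

def _cc_static_library_impl(ctx):
    # Resolve the C++ toolchain selected for the current configuration.
    cc_toolchain = find_cpp_toolchain(ctx)
    # Work out which features/flags are enabled for this toolchain and rule.
    feature_configuration = cc_common.configure_features(
        ctx = ctx,
        cc_toolchain = cc_toolchain,
        requested_features = ctx.features,
        unsupported_features = ctx.disabled_features,
    )
    ...

# The rule itself also needs fragments = ["cpp"] and a way to locate the toolchain, e.g.
#     "_cc_toolchain": attr.label(default = Label("@bazel_tools//tools/cpp:current_cc_toolchain")),
# in attrs, and/or toolchains = ["@bazel_tools//tools/cpp:toolchain_type"].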
get_tool_for_action gets you the archiver path; you then also need to add cc_toolchain.all_files to your ctx.actions.run inputs, so the call ends up looking like this:
ctx.actions.run(
    inputs = depset(
        direct = libraries,
        transitive = [
            cc_toolchain.all_files,
        ],
    ),
    outputs = [ctx.outputs.out],
    executable = archiver_path,
    arguments = args,
)
However, you'll also notice that my_c_archive builds up a command line and environment variables to call the archiver with. A simple toolchain won't have anything to pass in either of those, but a more complex one may not work correctly without adding them (for example, passing -m32 or setting PATH).
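Paraphrasing the relevant part of my_c_archive (the keyword arguments are the ones cc_common exposes, but treat the exact shape as an assumption and check the linked file):

archiver_variables = cc_common.create_link_variables(
    feature_configuration = feature_configuration,
    cc_toolchain = cc_toolchain,
    output_file = ctx.outputs.out.path,
    is_using_linker = False,
)
command_line = cc_common.get_memory_inefficient_command_line(
    feature_configuration = feature_configuration,
    action_name = CPP_LINK_STATIC_LIBRARY_ACTION_NAME,
    variables = archiver_variables,
)
env = cc_common.get_environment_variables(
    feature_configuration = feature_configuration,
    action_name = CPP_LINK_STATIC_LIBRARY_ACTION_NAME,
    variables = archiver_variables,
)
# command_line then replaces the hand-built ["r", ...] arguments, and env is
# passed as ctx.actions.run(..., env = env).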
Part of the starlark implementation of cc_import in _create_archive_action in cc_import.bzl is a good place to start for handling all the intricacies. It creates an action to produce a static library from a set of object files, with flexibility to work with many toolchains.
I have a training set of 614 images which have already been shuffled. I want to read the images in order in batches of 5. Because my labels are arranged in the same order, any shuffling of the images when being read into the batch will result in incorrect labelling.
These are my functions to read and add the images to the batch:
# To add files from queue to a batch:
def add_to_batch(image):
    print('Adding to batch')
    image_batch = tf.train.batch([image], batch_size=5, num_threads=1, capacity=614)
    # Add to summary
    tf.image_summary('images', image_batch, max_images=30)
    return image_batch
# To read files in queue and process:
def get_batch():
    # Create filename queue of images to read
    filenames = [('/media/jessica/Jessica/TensorFlow/StreetView/training/original/train_%d.png' % i) for i in range(1,614)]
    filename_queue = tf.train.string_input_producer(filenames, shuffle=False, capacity=614)
    reader = tf.WholeFileReader()
    key, value = reader.read(filename_queue)
    # Read and process image
    # Image is 500 x 275:
    my_image = tf.image.decode_png(value)
    my_image_float = tf.cast(my_image, tf.float32)
    my_image_float = tf.reshape(my_image_float, [275, 500, 4])
    return add_to_batch(my_image_float)
This is my function to perform the prediction:
def inference(x):
    <Perform convolution, pooling etc.>
    return y_conv
This is my function to calculate loss and perform optimisation:
def train_step(y_label, y_conv):
    """ Calculate loss """
    # Cross-entropy
    loss = -tf.reduce_sum(y_label * tf.log(y_conv + 1e-9))
    # Add to summary
    tf.scalar_summary('loss', loss)
    """ Optimisation """
    opt = tf.train.AdamOptimizer().minimize(loss)
    return loss
This is my main function:
def main():
    # Training
    images = get_batch()
    y_conv = inference(images)
    loss = train_step(y_label, y_conv)
    # To write and merge summaries
    writer = tf.train.SummaryWriter('/media/jessica/Jessica/TensorFlow/StreetView/SummaryLogs/log_5', graph_def=sess.graph_def)
    merged = tf.merge_all_summaries()
    """ Run session """
    sess.run(tf.initialize_all_variables())
    tf.train.start_queue_runners(sess=sess)
    print "Running..."
    for step in range(5):
        # y_1 = <get the correct labels here>
        # Train
        loss_value = sess.run(train_step, feed_dict={y_label: y_1})
        print "Step %d, Loss %g" % (step, loss_value)
        # Save summary
        summary_str = sess.run(merged, feed_dict={y_label: y_1})
        writer.add_summary(summary_str, step)
    print('Finished')

if __name__ == '__main__':
    main()
When I check my image_summary the images do not seem to be in sequence. Or rather, what is happening is:
Images 1-5: discarded, Images 6-10: read, Images 11-15: discarded, Images 16-20: read etc.
So it looks like I am getting my batches twice, throwing away the first one and using the second one? I have tried a few remedies but nothing seems to work. I feel like I am understanding something fundamentally wrong about calling images = get_batch() and sess.run().
Your batch operation is a FIFOQueue, so every time you use its output, it advances the queue's state.
Your first session.run call uses images 1-5 in the computation of train_step; your second session.run asks for the computation of image_summary, which pulls images 6-10 and uses them in the visualization.
If you want to visualize things without affecting the state of the input, it helps to cache queue values in variables and define your summaries with those variables as inputs, rather than depending on the live queue.
(image_batch_live,) = tf.train.batch([image], batch_size=5, num_threads=1, capacity=614)
image_batch = tf.Variable(
    tf.zeros((batch_size, image_size, image_size, color_channels)),
    trainable=False,
    name="input_values_cached")
advance_batch = tf.assign(image_batch, image_batch_live)
So now your image_batch is a static value which you can use both for computing loss and visualization. Between steps you would call sess.run(advance_batch) to advance the queue.
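In the question's main loop that would look roughly like this (a sketch only; it assumes train_step is changed to return both ops, e.g. opt, loss = train_step(y_label, y_conv), since only ops you pass to sess.run are actually executed):

for step in range(5):
    sess.run(advance_batch)  # pull the next 5 images from the queue into image_batch
    _, loss_value = sess.run([opt, loss], feed_dict={y_label: y_1})
    summary_str = sess.run(merged, feed_dict={y_label: y_1})  # reuses the same cached batch
    writer.add_summary(summary_str, step)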
A minor wrinkle with this approach: the default saver will save your image_batch variable to the checkpoint. If you ever change your batch size, the checkpoint restore will fail with a dimension mismatch. To work around this, you would need to specify the list of variables to restore manually and run initializers for the rest.
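A sketch of that workaround, assuming the variable name defined above (tf.train.Saver accepts an explicit var_list):

# Save/restore everything except the cached input batch.
vars_to_save = [v for v in tf.all_variables() if v.name != "input_values_cached:0"]
saver = tf.train.Saver(var_list=vars_to_save)
# After restoring, initialize the skipped variable separately:
# sess.run(tf.initialize_variables([image_batch]))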
I would like to use a specific version of g++ installed at /opt/blabla/bin/g++.
How do I force premake to add initialization of the CXX variable in the generated makefile, so that it points to that specific location?
I do realize that once the makefile is generated, I can do 'make CXX=...', but I would like to have CXX set inside the auto-generated makefile.
Using premake5, targeting gmake.
Thanks in advance
=============
Update:
By poking around the examples and browsing the code, I figured out I can do it by adding this code to premake5.lua:
local GCC_BIN_PATH = "/opt/blala/bin"

-- start: setting gcc version
-- todo: consider moving this instrumentation into a side lua script
local gcc = premake.tools.gcc
gcc.tools = {
    cc = GCC_BIN_PATH.."/gcc",
    cxx = GCC_BIN_PATH.."/g++",
    ar = GCC_BIN_PATH.."/ar"
}

function gcc.gettoolname(cfg, tool)
    return gcc.tools[tool]
end
-- finish: setting gcc version
Is there a better way to achieve the same? In particular, does it make sense to redefine the gettoolname function?
Same answer as above but more generic. This file can be named "add_new_gcc_toolset.lua":
local function add_new_gcc_toolset (name, prefix)
    local gcc = premake.tools.gcc
    local new_toolset = {}
    new_toolset.getcflags = gcc.getcflags
    new_toolset.getcxxflags = gcc.getcxxflags
    new_toolset.getforceincludes = gcc.getforceincludes
    new_toolset.getldflags = gcc.getldflags
    new_toolset.getcppflags = gcc.getcppflags
    new_toolset.getdefines = gcc.getdefines
    new_toolset.getincludedirs = gcc.getincludedirs
    new_toolset.getLibraryDirectories = gcc.getLibraryDirectories
    new_toolset.getlinks = gcc.getlinks
    new_toolset.getmakesettings = gcc.getmakesettings
    new_toolset.toolset_prefix = prefix

    function new_toolset.gettoolname (cfg, tool)
        if tool == "cc" then
            name = new_toolset.toolset_prefix .. "gcc"
        elseif tool == "cxx" then
            name = new_toolset.toolset_prefix .. "g++"
        elseif tool == "ar" then
            name = new_toolset.toolset_prefix .. "ar"
        end
        return name
    end

    premake.tools[name] = new_toolset
end

return add_new_gcc_toolset
I think the official way is to create your own toolset.
The example below creates a toolset with the "arm_gcc" name:
premake.tools.arm_gcc = {}
local arm_gcc = premake.tools.arm_gcc
local gcc = premake.tools.gcc

arm_gcc.getcflags = gcc.getcflags
arm_gcc.getcxxflags = gcc.getcxxflags
arm_gcc.getforceincludes = gcc.getforceincludes
arm_gcc.getldflags = gcc.getldflags
arm_gcc.getcppflags = gcc.getcppflags
arm_gcc.getdefines = gcc.getdefines
arm_gcc.getincludedirs = gcc.getincludedirs
arm_gcc.getLibraryDirectories = gcc.getLibraryDirectories
arm_gcc.getlinks = gcc.getlinks
arm_gcc.getmakesettings = gcc.getmakesettings

function arm_gcc.gettoolname (cfg, tool)
    local prefix = path.getabsolute ("../../arm_env")
    prefix = prefix .. "/arm_toolchain/bin/arm-linux-gnueabihf-"
    if tool == "cc" then
        name = prefix .. "gcc"
    elseif tool == "cxx" then
        name = prefix .. "g++"
    elseif tool == "ar" then
        name = prefix .. "ar"
    else
        name = nil
    end
    return name
end
Then you can use:
toolset "arm_gcc"
inside your projects, filters, etc.
This method has the advantage that you aren't overwriting the regular gcc toolset. Both can coexist if necessary.
In my case I actually find it cleaner to add each compiler in its own Lua file and then include it from the main (premake5.lua) script.
As a workaround, I found this method:
makesettings [[
CC = gcc
]]
https://github.com/premake/premake-core/wiki/makesettings
I need to import multiple images (10,000) into MATLAB (R2013b) from a subdirectory of MATLAB's predefined directory.
I don't know the exact names of the images.
I tried this:
file = dir('C:\Users\user\Documents\MATLAB\train');
NF = length(file);
for k = 1 : NF
    img = imread(fullfile('C:\Users\user\Documents\MATLAB\train', file(k).name));
end
But it throws this error even though I ran it with admin privileges:
Error using imread (line 347)
Can't open file "C:\Users\user\Documents\MATLAB\train\." for reading;
you may not have read permission.
The "dir" command returns the virtual directory elements "." (self directory) and ".." parent, as your error message shows.
A simple fix is to use a more specific dir call, based on your image types, perhaps:
file = dir('C:\Users\user\Documents\MATLAB\train\*.jpg');
Check the output of dir. The first two "files" are . and .., which is similar to the behaviour of the Windows dir command.
file = dir('C:\Users\user\Documents\MATLAB\train');
NF = length(file);
for k = 3 : NF
    img = imread(fullfile('C:\Users\user\Documents\MATLAB\train', file(k).name));
end
In R2013b you would have to do
file = dir('C:\Users\user\Documents\MATLAB\train\*.jpg');
If you have R2014b with the Computer Vision System Toolbox then you can use imageSet:
images = imageSet('C:\Users\user\Documents\MATLAB\train\');
This will create an object containing paths to all image files in the train directory, regardless of format. Then you can read the i-th image like this:
im = read(images, i);
I want to write a program which can read core files in Linux. However, I cannot find any documentation to guide me in this respect. Can someone please point me to some resources?
You can also take a look at the GDB source code, gdb/core*.
For instance, in gdb/corelow.c, you can read at the end:
static struct target_ops core_ops;
core_ops.to_shortname = "core";
core_ops.to_longname = "Local core dump file";
core_ops.to_doc = "Use a core file as a target. Specify the filename of the core file.";
core_ops.to_open = core_open;
core_ops.to_close = core_close;
core_ops.to_attach = find_default_attach;
core_ops.to_detach = core_detach;
core_ops.to_fetch_registers = get_core_registers;
core_ops.to_xfer_partial = core_xfer_partial;
core_ops.to_files_info = core_files_info;
core_ops.to_insert_breakpoint = ignore;
core_ops.to_remove_breakpoint = ignore;
core_ops.to_create_inferior = find_default_create_inferior;
core_ops.to_thread_alive = core_thread_alive;
core_ops.to_read_description = core_read_description;
core_ops.to_pid_to_str = core_pid_to_str;
core_ops.to_stratum = process_stratum;
core_ops.to_has_memory = core_has_memory;
core_ops.to_has_stack = core_has_stack;
core_ops.to_has_registers = core_has_registers;
The struct target_ops defines a generic interface that the upper part of GDB will use to communicate with a target. This target can be a local unix process, a remote process, a core file, ...
So if you only investigate what's behind these functions, you won't be overwhelmed by the generic part of the debugger implementation.
(Depending on what your final goal is, you may also want to reuse this interface and its implementation in your app; it shouldn't rely on too many other things.)
Having a look at the source of gcore (http://people.redhat.com/anderson/extensions/gcore.c) might also be helpful.
Core files can be examined using dbx(1), mdb(1), or one of the proc(1) tools.
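For orientation while digging through those sources: on Linux a core dump is just an ELF file of type ET_CORE (registers and process info live in PT_NOTE segments, the memory image in PT_LOAD segments), so a first experiment can be as small as checking the ELF header. A minimal Python sketch using only the standard library (a hypothetical helper, not taken from any of the tools above):

import struct
import sys

ET_CORE = 4  # ELF e_type value for core files

def is_core_file(path):
    # Read e_ident (16 bytes) plus the 2-byte e_type field.
    with open(path, 'rb') as f:
        header = f.read(18)
    if len(header) < 18 or header[:4] != b'\x7fELF':
        return False
    ei_data = struct.unpack('B', header[5:6])[0]  # 1 = little-endian, 2 = big-endian
    endian = '<' if ei_data == 1 else '>'
    (e_type,) = struct.unpack(endian + 'H', header[16:18])
    return e_type == ET_CORE

if __name__ == '__main__':
    print(is_core_file(sys.argv[1]))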