I have a problem with TensorFlow.
I need to create several models (e.g. neural networks), but after computing the parameters of those models I create new models and no longer need the previous ones.
It seems that TensorFlow cannot tell which models I am still using and which ones no longer have any reference, and I don't know how I should delete the previous models. As a result, memory usage keeps growing until the system kills my process, which is obviously something I would like to avoid.
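To make it concrete, here is a simplified sketch of the pattern I mean (not my real code, and assuming the TF 1.x graph API):

```python
import tensorflow as tf

# Each iteration builds a fresh model, but the ops seem to pile up in the
# default graph, so memory keeps growing even though afterwards I only
# need the learned parameters.
for i in range(100):  # 100 is just a placeholder for "many models"
    x = tf.placeholder(tf.float32, shape=[None, 10])
    w = tf.Variable(tf.random_normal([10, 1]))
    y = tf.matmul(x, w)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # ... train, then read the parameters out ...
        params = sess.run(w)
    # from here on I no longer need this model, only `params`
```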
How do you think I should deal with this problem? What's the correct way to 'delete' the previous models?
Thanks in advance,
Samuele
I'm very new to TM1 and have to implement a new function in my code.
Is there any way to undo your last action?
For better understanding, I'll write an example:
I have 8 different cubes and I upload them one after another.
If one cube cannot be uploaded, none of the others should be uploaded either.
Every cube that has already been uploaded should be reset to its previous state.
Is there a way to implement it?
You have to nest your 8 loading processes inside a master process (the others are slaves). If a condition makes one of your cubes unloadable, call the ProcessError function. In the master process, you fetch the result of executing each slave process; if one is in error, call ProcessError in the master as well. That way, none of the chain will be committed.
The answer above from Wuzardor provides an approach using TI processes, but your question seems to suggest you're using TM1py/Python to do the upload to TM1, either directly or by triggering TI processes through the REST API.
In general, there's no easy way to roll back changes to cube data. However, it should be simple enough to structure your Python code such that the existence and validity of all the load files is established before you push anything to any of your cubes. It's difficult to suggest the best approach without more details about what you're trying to achieve and how.
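For example, a rough sketch with TM1py; the file names, cube/process names and what counts as "valid" are placeholders, since we don't know the details of your setup:

```python
import os
from TM1py import TM1Service

# one load file per cube; names are made up for illustration
load_files = {
    "Cube1": "cube1.csv",
    "Cube2": "cube2.csv",
    # ... 8 entries in total ...
}

def is_valid(path):
    # whatever "valid" means for your data; existing and non-empty is the bare minimum
    return os.path.isfile(path) and os.path.getsize(path) > 0

# check everything *before* touching TM1 - if any file fails, nothing is pushed
if all(is_valid(path) for path in load_files.values()):
    with TM1Service(address="localhost", port=8001, user="admin", password="", ssl=True) as tm1:
        for cube, path in load_files.items():
            # e.g. trigger the TI process that loads this cube (the parameter name is hypothetical)
            tm1.processes.execute_with_return(process_name="load." + cube, pFile=path)
else:
    print("At least one load file is missing or empty - aborting before any upload.")
```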
Updated in response to OP comment:
OK, it's not clear in what way IT isn't cooperating, but if you are unable to verify the source prior to extracting it, you can always load it into a staging cube first, where the data can be checked before anything is copied to your main cubes. Depending on what issues you tend to face with the data, you might be able to automate this check, or you might need to rely on a human looking at it. Either way, just don't overwrite your historic data until you've checked the new data.
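As a sketch of that idea (again with TM1py, and with made-up cube, process and element names):

```python
from TM1py import TM1Service

with TM1Service(address="localhost", port=8001, user="admin", password="", ssl=True) as tm1:
    # 1. load the raw extract into a staging cube via its own TI process
    tm1.processes.execute_with_return(process_name="load.staging")

    # 2. sanity-check the staged data, e.g. a grand total that should never be zero
    total = tm1.cells.get_value("Staging Sales", "Total Year,All Products,Amount")

    # 3. only if the check passes, copy staging into the real cube
    if total and total > 0:
        tm1.processes.execute_with_return(process_name="copy.staging_to_sales")
    else:
        print("Staged data looks wrong - main cubes left untouched.")
```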
Furthermore, you might want to think about your overall design. Might it make sense to retain a copy of the previous data in the cubes anyway? Why not build your cubes such that you can keep the history, rather than overwriting it each time? Finding a sensible design really depends on the details of your application, but you might benefit from looking at it with fresh eyes.
Cheers
Alex
Let us imagine we have an object D, containing some data. This is modified differently across two different locations, giving rise to data objects D1 and D2. Depending upon the contents, D1 and D2 may be in conflict with each other when being merged back as part of a synchronization process.
Version control systems, for example, simply point out that the two data objects are in conflict with each other and leave it to the user to resolve the conflict manually.
However, let us now imagine a consumer-facing application, such as a note-taking application that synchronizes contents online. In this case, no user will want to manually resolve conflicts that may have arisen due to the user typing out two versions of the same note with different contents. Discarding the older object for the newer object isn't possible either, since there may be valuable content in the older object that the user wants.
How should I go about resolving such conflicts in a consumer-facing application?
Well, if you don't want manual conflict resolution, then you will have to automatically merge changes from both updates.
There is no way that works well for all applications. When you have a requirement like this, you have to carefully design the application so that automatic merging makes sense.
There are a few common approaches, and you can do one or all of them in various combinations:
1) Merge updates really fast. Think Google Docs -- updates are merged in real time as people edit. Operational Transformation (https://en.wikipedia.org/wiki/Operational_transformation) is a good way to understand exactly how to do that kind of merging, but it doesn't have to be as complicated as that article. The reason this works well is that updates are small and you can tell if someone is messing with your stuff before you put a lot of work into it. Politeness fixes conflicts -- one of you will wait until the other is done with that stuff.
2) Locking. If you click the edit button on a note, lock it so that nobody else can edit it until you're done, etc. This is old-school, and not nearly as slick as (1), but it can work in situations where you can't merge fast enough to do (1). (A minimal sketch of this appears after the list.)
3) Design your data model and interface to make merged versions as nice as possible. If anyone can add notes, but a note can only be edited by its owner, then no problem, for example. Or maybe you can only edit my stuff if you ask permission first and I give it to you. As things get more complicated than that, this becomes increasingly difficult. It's not usually possible to do this well if you're not willing to make sacrifices in application functionality. You've got one thing on your side, though: It's rude to mess with someone else's work, so a lot of the things you can do look like you did them just to enforce good behavior, and users will thank you for them if you did it with finesse.
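To illustrate (2), here's a minimal, in-memory sketch of an edit lock per note. It's purely illustrative and single-process; a real app would hold the lock on the server and expire it so an abandoned editing session doesn't block a note forever.

```python
import threading
import time

class NoteLocks:
    """Per-note edit locks with a timeout so stale locks can be stolen."""

    def __init__(self, timeout_seconds=300):
        self._locks = {}                # note_id -> (owner, acquired_at)
        self._timeout = timeout_seconds
        self._mutex = threading.Lock()

    def try_acquire(self, note_id, user):
        with self._mutex:
            holder = self._locks.get(note_id)
            if holder is not None:
                owner, acquired_at = holder
                # someone else holds a non-expired lock: refuse
                if owner != user and time.time() - acquired_at < self._timeout:
                    return False
            self._locks[note_id] = (user, time.time())
            return True

    def release(self, note_id, user):
        with self._mutex:
            if self._locks.get(note_id, (None, 0))[0] == user:
                del self._locks[note_id]

# usage: ask for the lock when the user taps "edit"; if it fails,
# show "someone else is editing this note" instead of opening the editor
locks = NoteLocks()
if locks.try_acquire("note-42", "alice"):
    # ... let alice edit and save ...
    locks.release("note-42", "alice")
```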
Is it possible to use TypedArrays directly in three.js for custom attributes? I'm downloading a binary model format from a server, and the data is stored directly in a Float32Array. Since this is the format required by gl.bufferData, it seems wasteful to create THREE.Vector3 objects, which only get stored back into a new Float32Array inside WebGLRenderer.js.
As a possibly unrelated issue/bug, I've profiled this binary model loading in Chrome and noticed that 60% of the time is spent in the garbage collector. This is seriously bogging down the model loading, since there are over 100k vertices in this model. This only started happening since v49 I believe. Any insight?
You can use BufferGeometry. Sadly, we don't have many examples of how to use it yet. Only CTMLoader is using it at this point. Maybe it can serve as a good reference for you?
I am playing around with Mahout's recommendation engines and am running into a problem with the GenericDataModel object. My question: if I want to add some new users' data to the existing data model, is the only way to do it to construct a new data model by reading all the data again?
Currently, our data is in the cache.
Yes, that's correct. It's effectively read-only for performance. The general idea is that you don't incorporate data model updates frequently, as it generally means rebuilding a lot of other pre-computed or cached computations.
You could hack it to expose an update method without too much trouble. Just be careful of thread-safety issues.
Currently I'm doing a project whose specifications are unclear - well, whose aren't? I wonder what the best development strategy is for designing a DB that's going to be extended sooner or later with additional tables and relations. I want to build in "changeability".
My main concern is that I want to apply design patterns (it's a university project) and to separate the constant factors from those that change by choosing appropriate design patterns - in my case MVC and a set of sub-patterns at the model level.
When it comes to the DB, however, I may have to redesign the model in my MVC approach, because at a later stage my domain model may require a different set of classes representing the DB tables. I use Hibernate as an abstraction layer between the DB and the application.
Would you start with a very minimal DB, just a few tables and relations? And what if I want an efficient DB, too? I wonder what strategies are applied in the real world. Stakeholder analysis, for example, isn't a sufficient planning solution when it comes to changing requirements. I think my design patterns end at the DB level, so there's a breach whose impact I'd like to minimize with a smart strategy.
In unclear situations I prefer a minimalistic DB design that supports the needs known right now. My experience is that any effort to be clever and model for future needs makes the model more complex. When the new needs arise, they are often in unforeseen areas, and the extra modeling for future needs doesn't fit them but rather makes the needed refactoring even harder.
As you have already chosen Hibernate to decouple the DB design from the OO model, I think sticking with as simple a DB as possible is a good choice.
What you describe is typical for almost every project. There are a few things you can do however.
Try to isolate the concepts (not their realizations) of your problem domain. Remember: Extending a data model is almost always easy (add a new table, a new column etc.) but changing your data model is hard and requires data migration.
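To make that concrete (sketched with Alembic/SQLAlchemy in Python purely for illustration, since your stack is Hibernate, and with made-up table and column names): an additive change is a one-liner, while renaming or splitting a column would also force you to rewrite all existing rows.

```python
# Hypothetical Alembic migration: purely additive, existing rows stay untouched.
from alembic import op
import sqlalchemy as sa

def upgrade():
    # easy: extend the schema with a new, nullable column
    op.add_column("customer", sa.Column("nickname", sa.String(100), nullable=True))

def downgrade():
    op.drop_column("customer", "nickname")
```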
I advocate using an Agile development process: Implement only what you need right now, but make sure you understand the complete problem before modeling it.
Another thing you should check before you start hacking away at your code is whether your chosen infrastructure is appropriate. Using a relational database when you want to change your schema very often is usually a bad match. Document databases are schema-less and hence more flexible. I think you should evaluate whether a relational database is really appropriate for your application.
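For example, with a document store such as MongoDB (shown here via pymongo; the database and field names are made up), two documents in the same collection don't have to share a schema at all:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
notes = client["myapp"]["notes"]

# two documents with different shapes in the same collection - no migration needed
notes.insert_one({"title": "shopping", "body": "milk, eggs"})
notes.insert_one({"title": "ideas", "tags": ["db", "design"], "archived": False})
```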
"Currently I'm doing a project whose specifications are unclear"
Given the 'database' tag, I assume you are asking this question in a database context.
Remember that a database is a set of ASSERTIONS OF FACT (capitalization intended).
If it is unclear what kind of "assertions of fact" your user wants to be registered by the database, then you simply cannot define (the structure of) your database.
And you will be helping both yourself and your user by first trying to clear up everything that is unclear.
In a simple answer: BE MINIMALISTIC.
Try to figure out the main entities. Don't worry about the properties; you will fill them in later. Then create the relations between the entities. Create a test application using your favorite ORM (Hibernate?), build some unit tests, and voilà, you have your minimal DB operational. :)
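As an illustration of that starting point (sketched with SQLAlchemy in Python and invented entity names; with Hibernate the idea is the same - a couple of annotated entity classes and their relations, nothing more):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

# only the entities we are sure about, with relations but almost no properties yet
class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    name = Column(String(100))          # further properties get filled in later
    orders = relationship("Order", back_populates="customer")

class Order(Base):
    __tablename__ = "customer_order"    # "order" is a reserved word in SQL
    id = Column(Integer, primary_key=True)
    customer_id = Column(Integer, ForeignKey("customer.id"))
    customer = relationship("Customer", back_populates="orders")

# a throwaway in-memory database is enough for the first unit tests
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Customer(name="test", orders=[Order()]))
    session.commit()
```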
No project begins with requirements entirely known and fixed for all time. Use an agile, iterative approach to the database design so that you can accommodate change during development.
All database designs are extensible and subject to change during their lifetime. Don't try to avoid change. Just make sure you have the right people and processes in place to manage change effectively when it happens.