What is the best way to deploy Oracle EBS developments? [closed]

I am planning to deploy my developments from one instance to another. How should I proceed, and what is the best practice for this process? I want to deploy DB objects (tables, packages, views, etc.) and application objects (concurrent programs, messages, lookups, etc.). Thanks for the help.

The best practice for deploying database objects is:
Keep creation/modification scripts as text files stored in a version control system.
Run the scripts from version control with sqlplus.
Note that you don't create or modify database objects in the database with a GUI tool; you write the SQL in text files with a text editor.
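A minimal sketch of that flow, assuming the versioned script is called xx_create_objects.sql (the script name, credentials and instance prompts are placeholders):
ebs-1$ sqlplus apps/<PASSWD> @xx_create_objects.sql
ebs-2$ sqlplus apps/<PASSWD> @xx_create_objects.sql
The same script file, checked out from version control, is run unchanged against each instance, so every environment ends up with identical objects.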
The best practice for deploying eBS objects is:
Save the objects from an eBS instance to text files with the Generic Loader (FNDLOAD).
Store the text files in a version control system.
Load the text files into the other eBS instance(s) with FNDLOAD.
Here the objects are created in one eBS instance, saved into the version control system, and then copied to the other instances.
FNDLOAD example for concurrent programs:
ebs-1$ FNDLOAD apps/<PASSWD> 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcpprog.lct <CONCURRENT_NAME>_program.ldt PROGRAM APPLICATION_SHORT_NAME=<APP_NAME> CONCURRENT_PROGRAM_NAME=<CONCURRENT_NAME>
ebs-2$ FNDLOAD apps/<PASSWD> 0 Y UPLOAD $FND_TOP/patch/115/import/afcpprog.lct <CONCURRENT_NAME>_program.ldt -

Related

Transferring files to HDFS [closed]

I need to bring files (zip, csv, xml, etc.) from a Windows share location into HDFS. Which is the best approach? I have Kafka -> Flume -> HDFS in mind. Please suggest an efficient way.
I tried getting the files to a Kafka consumer by producing them:
producer.send(new ProducerRecord<>(topicName, key, value));
I expect an efficient approach.
Kafka is not designed to send files, only individual messages of up to 1 MB by default.
You can install the NFS Gateway in Hadoop; then you should be able to copy directly from the Windows share to HDFS without any streaming technology, using only a scheduled script on the Windows machine, or one run externally.
Or you can mount the Windows share on some Hadoop node and schedule a cron job if you need continuous file delivery - https://superuser.com/a/1439984/475508
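A minimal sketch of the mounted-share approach, assuming the share is already mounted at /mnt/winshare on a Hadoop node and the HDFS target directory /data/landing exists (both paths are placeholders):
# one-off copy of everything currently on the share into HDFS
hdfs dfs -put -f /mnt/winshare/* /data/landing/
# or run it from cron, e.g. every 15 minutes, for continuous delivery
*/15 * * * * hdfs dfs -put -f /mnt/winshare/* /data/landing/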
Other solutions I've seen use tools like NiFi / StreamSets, which can be used to read and move files:
https://community.hortonworks.com/articles/26089/windows-share-nifi-hdfs-a-practical-guide.html

How does a database store its data? [closed]

I have a question that may be very basic, so please bear with me.
How and where does a database store its data?
An online source says:
This default option creates database files that are managed by the file system of your operating system. You can specify the directory path where database files are to be stored. Oracle Database can create and manage the actual files.
But a file's data is actually on disk, no?
Is it writing its data to a disk, or is something else going on?
Can anyone help me understand how it works?
Here's a bit of light reading on Oracle physical storage structures. Try not to get too excited; it's a thrilling read :)
https://docs.oracle.com/cd/E11882_01/server.112/e40540/physical.htm#CNCPT1389
You're probably finding Oracle's documentation confusing because they give you a few different ways to manage storage. It's always on disk, but it's a question of how it's on disk. You can do anything from having Oracle use raw partitions on a disk (bypassing the OS filesystem) to using data files on a filesystem. It's partly a question of performance vs. convenience, and partly that Oracle has been around a very long time.
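If you want to see the actual operating-system files the database is writing to, you can ask the data dictionary. A minimal sketch, assuming you have DBA-level access on the database server (the connection method here is just an illustration):
sqlplus -s / as sysdba <<'EOF'
-- list the OS-level files that hold the database's data, per tablespace
SELECT file_name, tablespace_name, ROUND(bytes/1024/1024) AS size_mb
FROM   dba_data_files
ORDER  BY tablespace_name;
EOF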

How can you change the file being redirected to while the script is still running? [closed]

Assume I have this script running continually:
node myprogram.js > logfile.log
If I want to make the output dump to a new log file every day without stopping or restarting "node myprogram.js", what should I do?
For example, every day, I want to see the logs saved as 2015-12-01.log, 2015-12-02.log, 2015-12-03.log, etc, and not have the logs be dumped into a single file.
I would use logrotate; it's the pre-installed utility most Linux distributions use for exactly what you are talking about, plus more. Typical default settings automatically compress log files of a certain age and eventually delete the oldest ones.
The utility runs automatically once a day and performs log rotations as per a configuration you define.
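A minimal sketch of such a configuration, assuming the log is at /path/to/logfile.log and the config is installed as /etc/logrotate.d/myprogram (both paths are placeholders). The copytruncate option matters here: the running node process keeps its original file descriptor open, so simply renaming the file would not redirect new output:
sudo tee /etc/logrotate.d/myprogram <<'EOF'
/path/to/logfile.log {
    # rotate once a day, keep 30 old logs, gzip them,
    # and add a date suffix such as logfile.log-20151201
    daily
    rotate 30
    compress
    dateext
    # copy the log aside and truncate it in place, so the still-running
    # node process keeps writing to the same open file descriptor
    copytruncate
}
EOF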
This question would be a better fit for the Server Fault sister site. Nonetheless, there are many tools for this; check out logrotate / rotatelogs.

Ruby script folder structure [closed]

I have several Ruby scripts that have been refactored to use common methods. What folder structure should I use for these files?
For example: I have reports "Grower", "Fecal", "30Day", "30DayFecal", etc. that all use methods in the files "date_of", "get_fecal_data", "get_fy", "chart_fecal", etc. I'm thinking that I should set up folders like:
App
  -Grower
  -Fecal
  -30Day
  -30DayFecal
  -lib
    -date_of.rb
    -get_fecal_data.rb
    -get_fy.rb
    -chart_fecal.rb
Please advise.
It seems you're partly influenced by the Rails folder layout:
app
  models
  controllers
  views
lib
...
You can use that, since it is a common convention and will make it easier to get advice from others. Just be sure to make it clear that you're not developing a Rails app, so you don't create other confusion.
Views
These can be any output formatters. Rails apps commonly export data in .csv format, for instance.
Controllers
These, along with services and related concepts, are things that manipulate data after a Model has retrieved it from a 'store' (e.g. a file on disk, a database, or anything else).
A common convention is that Controllers depend upon Models, whereas Lib[raries] are self-sufficient. Some would say that Services depend upon external API data.

Scheduling an existing AWS EC2 instance to start/stop [closed]

Right now I am using the Auto Scaling Command Line Tool to launch a new EC2 instance once per day and run a script that terminates itself upon completion. Now I need to do the same thing with a different script, but this one requires several Python modules to be installed. Therefore, I would like to schedule the start/stop of a single, existing instance rather than the launch/termination of a brand new instance. I've scoured Amazon's documentation/blogs but I can't determine if this functionality is supported with Autoscaling. How could this be accomplished?
It's not supported with Auto Scaling. If you want to keep doing what you are currently doing, you could install the Python modules with a cloud-init script.
You can also start/stop an existing instance with the command line tools, just not the Auto Scaling ones.
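A minimal sketch of that approach, using the modern aws CLI as an illustration and assuming it is configured on the machine running cron; i-0123456789abcdef0 is a placeholder instance ID:
# crontab entries: start the instance at 06:00 and stop it at 18:00 server time
0 6  * * * aws ec2 start-instances --instance-ids i-0123456789abcdef0
0 18 * * * aws ec2 stop-instances  --instance-ids i-0123456789abcdef0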
My eventual solution was to set up an instance the way I wanted and then create an AMI from it. My autoscaling setup then starts/stops an instance of that AMI.
