Start MySQL server from a Ruby program - ruby

I have a Linux server where I start a few Ruby programs during the day. The server is directly connected to the internet (no firewall) at a hosting provider, and I wonder whether there is a way to start the MySQL server just before I update the database and stop it afterwards. The goal is to have the MySQL server running only when it is needed. So I thought there might be a way to activate the port or the service directly from Ruby.
Thank you for answering,
Werner

You'd probably have to change the permissions on the database through Ruby, do whatever you want to do, and then change the permissions back.
You could do that using the mysql gem, connecting to the database and running the commands.
Then restart the process and do the same thing, but backwards.
Honestly, I don't know why you would want to do that, and I wouldn't recommend it, but that would be my approach.
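For what it's worth, here is a minimal sketch of the "start it, update it, stop it" idea, assuming the server is managed by systemd under the service name mysql, that the script may run sudo without a password prompt, and using the mysql2 gem; the credentials, database, and query are placeholders:

require "mysql2"   # gem install mysql2

# Bring the server up only for the duration of the update.
system("sudo", "systemctl", "start", "mysql") or abort "could not start MySQL"
begin
  db = Mysql2::Client.new(host: "127.0.0.1", username: "app",
                          password: "secret", database: "mydb")
  db.query("UPDATE jobs SET state = 'done' WHERE state = 'queued'")
ensure
  db&.close
  system("sudo", "systemctl", "stop", "mysql")
end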

Related

firebird, Bad File Descriptor/Your user name and password are not defined

I am trying to set up a test environment on my Mac (OS 10.12) and it requires the Fishbowl/Firebird DB. No matter what I do, I bounce back and forth between these two errors:
isql localhost:/Users/me/Fishbowl/database/data/EXAMPLE.FDB
which gives me:
Your user name and password are not defined. Ask your database
administrator to set up a Firebird login.
The same goes for anything I do using gsec to create a user or change the password. And:
Statement failed, SQLSTATE = HY000
operating system directive stat failed
-Bad file descriptor
This is supremely frustrating. Fishbowl Client itself seems to hit this DB just fine. I have run chmod 770 on the /tmp/firebird directory and even tried to chown the example.fdb file itself.
Can anyone tell me how I might hit this DB from my Java app or the command line? Both approaches produce these errors.
1) Your connection string starts with "localhost:". That means you use a TCP/IP connection to reach the database server, and the database server runs in a separate process. That means chmod and chown should not matter as long as there is a Firebird daemon running and listening on the TCP port (the default is 3050, AFAIR; you can read your installation's value in the text file firebird.conf).
Indeed, there is a so-called "embedded server" or "embedded mode" where the server is loaded as a DLL/SO library into the application, but then the connection string cannot have a network protocol prefix, so that should NOT be your case.
2) You can check the documentation at http://firebirdsql.org/manual/isql-switches.html for how to specify your user and password on the isql command line (there is a short example after item 2.4 below). Firebird has one built-in superuser, namely "SYSDBA". The password is a bit more complicated: it differs by Firebird version and platform.
2.0) Whatever SYSDBA password the server installation sets, if the server comes bundled with some application, that application can override it later. Then you would either have to contact the application developers or try to remove the bundled FB and install your own vanilla one, at the risk of leaving the application no longer functioning.
2.1) A Windows installation of FB 2.x sets the "default" SYSDBA password to "masterkey" (only the first 8 characters actually matter).
2.2) A Linux installation of FB 2.x generates a random SYSDBA password and saves it in a text file in the Firebird folder.
2.3) macOS? I don't know; it is probably closer to Linux than to Windows. So try to find such a text file, and also try "masterkey".
2.4) With FB 3 the authentication methods and configuration were greatly overhauled, so it is quite hard to say anything specific, at least for me.
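As a concrete illustration of point 2, the command line would look something like this; "masterkey" is only the guess from 2.1/2.3, and the path is the one from the question:
isql -user SYSDBA -password masterkey localhost:/Users/me/Fishbowl/database/data/EXAMPLE.FDB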
3) I don't know what Fishbowl is, but Google suggests this: https://www.fishbowlinventory.com/wiki/Fishbowl_for_Mac
If that is so, then check the examples at the bottom of that page. They stress that you should sudo all those commands. That also makes sense, because:
3.1) The Firebird daemon might have "trusted authentication" enabled, mapping FB users to operating system users. On UNIX that would at least map SYSDBA to root; on Windows, to Administrator (however it is localized). This does not have to be enabled, but if it is, then the UNIX sudo command is exactly what makes applications run with OS superuser rights, and that might explain the lack of user and password in the command-line examples.
3.2) The Firebird embedded server/mode works as part of the application process, and especially with the CS (Classic Server) package on UNIX the command-line utilities tend to fall into this mode. It then needs to run as root to read highly sensitive data from the Firebird security database, hence the need to sudo the command. Granted, I do not think your isql command would ever run in embedded mode, because you do specify the "localhost:" prefix. But the examples at the wiki link above (backup and restore) use local connection strings, so they probably do run as embedded. That might give you yet another hint: try removing the "localhost:" prefix from the connection string and sudo isql rather than running it as a regular user. It would hardly be a normal mode, but for testing purposes, why not.
Hope this helps.
PS. You might also try this Firebird IDE; it is simplistic, but again, for testing purposes: http://www.flamerobin.org/dokuwiki/wiki/manual

How to maintain a Shell_reverse_tcp connection?

I'm experimenting with a reverse TCP shell. I managed to establish a connection, but my question is: how do I maintain the connection even after I close the multi/handler? And when I'm using the target's command prompt, how do I send files to the target's computer using that command prompt?
Pedro,
The short answer is you can't.
In order to maintain a connection you need to install persistence on the victim machine. You will still have to reuse the multi/handler in order to receive a new connection.
In order to transfer files you need to use the meterpreter payload, which lets you upload and download files.
However, if you have PowerShell on your target machine, you can run a PowerShell download that will fetch internet-hosted resources for you.
Hope this helped.

Postgres: After importing production database (with replication) to my local machine, I notice network packets being sent and received from macbook

I've been a MySQL guy, and now I'm working with Postgres, so I am learning. I'm wondering if someone can tell me why the postgres process on my MacBook is sending and receiving data over my network. I am just noticing this for the first time, so maybe it's been going on before and I just never noticed that Postgres does this.
What has me a bit nervous is that I pulled down a production data dump from our server, which is set up with replication, and imported it into my local Postgres DB. The settings in my postgresql.conf don't indicate that replication is turned on, so it shouldn't be streaming out to anything, right?
If someone has some insight into what may be happening, or why postgres is sending/receiving packets, I'd love to hear the easy answer (and the complex one if there's more to what's happening).
This is a postgres install via Homebrew on MacOSX.
Thanks in advance!
Some final thoughts: it's entirely possible, I guess, that the Mac's Activity Monitor also counts local 'network' traffic in its stats. Maybe this isn't going out to the internets at all...
In short, I would not expect replication to be enabled for a DB that was dumped from a server that had it, so long as the server it was restored to has no replication configured at all.
More detail:
Normally, to get a local copy of a database in Postgres, one would do a pg_dump of the remote database (this could be done from your laptop, pointing at your server), followed by a createdb on your laptop to create the database stub, and then a pg_restore pointed at the dump to populate its contents. [Edit: Re-reading your post, it seems like you may well have done this, but meant that the dump you used had replication enabled.]
That would be entirely local (assuming no connections into the DB from off-box), so long as you didn't explicitly set up any replication or anything else that would go off-box. Can you elaborate on what exactly you mean by importing with replication?
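For reference, a rough sketch of that dump-and-restore flow driven from Ruby, assuming pg_dump, createdb, and pg_restore are on the PATH; the host, user, database names, and dump file name are all placeholders:

# 1) dump the remote DB in custom format, 2) create an empty local DB,
# 3) restore the dump into it. Nothing here configures replication.
system("pg_dump", "-h", "prod.example.com", "-U", "appuser",
       "-Fc", "-f", "appdb.dump", "appdb")              or abort "pg_dump failed"
system("createdb", "appdb_local")                       or abort "createdb failed"
system("pg_restore", "-d", "appdb_local", "appdb.dump") or abort "pg_restore failed"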
Also, if you're concerned about remote traffic coming from Postgres, try running this command a few times over the period of a minute or two (when you are seeing the traffic):
netstat | grep postgres
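If you want to run that check repeatedly without babysitting it, a throwaway Ruby loop along these lines works (the interval and count are arbitrary):

# Sample the suggested netstat check every 15 seconds for about two minutes.
8.times do
  out = `netstat | grep postgres`
  puts "#{Time.now}: #{out.empty? ? 'no postgres entries' : out}"
  sleep 15
end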
In general, replication in Postgres is configured at the server level, and has to do with things such as the master server shipping WAL files to the standby server (for streaming replication). You would almost certainly have had to set up entries in postgresql.conf and pg_hba.conf to ensure that the standby server had access (such as a replication entry in the latter conf file). Assuming you didn't do steps such as this, I think it can pretty safely be concluded that there's no replication going on (especially in conjunction with double-checking via netstat).
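To make "steps such as this" concrete: a primary set up for streaming replication would typically carry entries roughly like the ones below (the user, address, and exact parameter spellings vary by Postgres version; this is only an illustration of what to look for):

# postgresql.conf on the primary
wal_level = replica
max_wal_senders = 3

# pg_hba.conf on the primary
host    replication    replicator    10.0.0.5/32    md5

If your local postgresql.conf and pg_hba.conf contain nothing of the sort, that is further evidence that no replication is going on.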
You might also double-check the Postgres log to see if it's doing anything replication related. In a default install, that'd probably be in /var/log/postgresql (although I'm not 100% sure if Homebrew installs put it somewhere else).
If it's UDP traffic, to and from a high port, it's likely to be PostgreSQL's internal statistics collector.
Those sockets are pre-bound to prevent interference and should not be accessible from outside PostgreSQL.

Add Mounted Server to Ubuntu File Manager Side Panel

At work I have to connect to our server every day. After becoming annoyed with having to use the GUI Connect to Server every day, I wrote a quick script (using mount) that does the same thing.
When I use Connect to Server, however, a link to the mounted server appears in the side panel of the File Manager, which I use all the time. How do I add this link from a terminal/shell script?
(Or even better, where can I find the code for the Connect to Server program?)
Thanks in advance.
You want to use gvfs-mount rather than mount.
See the discussion here: http://www.g-loaded.eu/2008/12/08/access-gvfs-mounts-from-the-command-line/
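In script form that could be as small as this sketch (shown in Ruby; the share URI is a placeholder for your server):

# Mounting through gvfs-mount (rather than mount) is what makes the share
# show up in the file manager's side panel.
share = "smb://fileserver.example.com/projects"
system("gvfs-mount", share) or abort "gvfs-mount failed for #{share}"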

Can you connect to a MS Access database from Ruby running on a Mac?

I'm pretty sure the answer is "no" but I thought I'd check.
Background:
I have some legacy data in Access, need to get it into MySQL, which will be the DB server for a Ruby application that uses this legacy data.
Data has to be processed and transformed. Access and MySQL schemas are totally different. I want to write a rake task in Ruby to do the migration.
I'm planning to use the techniques outlined in this blog post: Using Ruby and ADO to Work with Access Databases (roughly sketched below). But I could use a different technique if it solves the problem.
I'm comfortable working on Unix-like computers, such as Macs. I avoid working in Windows because it fills me with deep existential horror.
Is there a practical way that I can write and run my rake task on my Mac and have it reach across the network to the grunting Mordor that is my Windows box and delicately pluck the data out like a team of commandos rescuing a group of hostages? Or do I have to just write this and run it on Windows?
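For context, the ADO technique from that post boils down to roughly the following; it only runs on the Windows side, and the .mdb path, table, and column names here are just placeholders:

require "win32ole"   # ships with Ruby on Windows

# Open the Access file through the Jet OLE DB provider and walk a table.
conn = WIN32OLE.new("ADODB.Connection")
conn.Open("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\\data\\legacy.mdb")
rs = WIN32OLE.new("ADODB.Recordset")
rs.Open("SELECT * FROM customers", conn)
until rs.EOF
  puts rs.Fields.Item("name").Value
  rs.MoveNext
end
rs.Close
conn.Close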
Why don't you export it from MS-Access into Excel or CSV files and then import it into a separate MySQL database? Then you can rake the new one to your heart's content.
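If you go the CSV route, the Ruby side of the import could be as simple as this sketch (file name, table, columns, and credentials are all placeholders; assumes the mysql2 gem):

require "csv"
require "mysql2"   # gem install mysql2

# Read the CSV exported from Access and insert each row into MySQL.
client = Mysql2::Client.new(host: "localhost", username: "app",
                            password: "secret", database: "legacy_import")
insert = client.prepare("INSERT INTO products (name, price) VALUES (?, ?)")

CSV.foreach("products.csv", headers: true) do |row|
  insert.execute(row["name"], row["price"].to_f)
end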
Mac ODBC drivers that open Access databases are available for about $30.00.
http://www.actualtechnologies.com/product_access.php is one. I just run Access inside VMware on my Mac and export to CSV/Excel as CodeSlave mentioned.
ODBC might be handy in case you want to use the Access database for a more direct transfer.
Hope that helps.
I had a similar issue where I wanted to use Ruby with SQL Server. The best solution I found was using JRuby with the Java JDBC drivers. I'm guessing this will work with Access as well, but I don't know anything about Access.
