Trying to Reconnect issue in Odoo 11.0 - odoo-11

When importing or exporting a large amount of data (say 5000 records) from Odoo, it shows "connection lost" and "trying to reconnect" messages. Is there any way to deal with this while working with a large number of records?

I had the same issue in Odoo 12 when I tried to import translations. After some hard troubleshooting, I disabled the nginx instance I had configured with a self-signed SSL certificate, and the messages stopped.

In my case, I import records from MSSQL using a transient model and pyodbc:
import pyodbc

from odoo import api, models


class Import(models.TransientModel):
    _name = "import.wizard"  # hypothetical wizard name; the original omitted it

    @api.multi
    def insert_records(self):
        try:
            cnxn = pyodbc.connect(
                'DRIVER={SQL Server}; SERVER=server_address; DATABASE=db_name; UID=uid_name; PWD=pass_word')
            cursor = cnxn.cursor()
            cursor.execute("SELECT * FROM MSSQL_table")
            rows = cursor.fetchall()  # or cursor.fetchmany(5000) to test with fewer records
            pg_model = self.env["pgSql_table"]  # no search([]) needed just to create records
            for row in rows:
                pg_model.create({
                    "pg_column_name1": row.SQL_column_name1,
                    # ... map the remaining columns
                })
        except Exception as e:
            # note: swallowing the exception hides import errors; log or re-raise in real code
            pass
        return True
<button string="import" type="object" name="insert_records" confirm="confirm?"/>
Click the button to run the insert method, and use PyCharm to set breakpoints while it runs. fetchmany(number) lets you test with just a few records.
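To address the original "Trying to reconnect" symptom more directly, here is a hedged variation of the loop above (my addition, not part of this answer): committing in batches keeps any single request from holding one huge transaction long enough to trip the worker time limits. The batch size of 500 is an arbitrary assumption.
# Sketch: batched commits during a large import; 500 is an arbitrary batch size
for i, row in enumerate(rows, start=1):
    pg_model.create({"pg_column_name1": row.SQL_column_name1})
    if i % 500 == 0:
        self.env.cr.commit()  # close out the current transaction and start a fresh one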

Related

Is it alright to include connect() inside the lambda_handler in order to close the connection after use?

I wrote a Lambda function to access a MySQL database and fetch data, i.e. to fetch the number of users. But any real-time update is not fetched unless the connection is re-established, and closing the connection inside the lambda_handler before returning results in a connection error on the next call.
The query I am using is: select count(*) from users
import os
import json
import logging

import pymysql

endpoint = os.environ.get('DBMS_endpoint')
username = os.environ.get('DBMS_username')
password = os.environ.get('DBMS_password')
database_name = os.environ.get('DBMS_name')
DBport = int(os.environ.get('DBMS_port'))

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Connection is opened once at module load and reused across warm invocations
try:
    connection = pymysql.connect(host=endpoint, user=username, passwd=password,
                                 db=database_name, port=DBport)
    logger.info("SUCCESS: Connection to RDS MySQL instance succeeded")
except Exception:
    logger.error("ERROR: Unexpected error: Could not connect to MySQL instance.")


def lambda_handler(event, context):
    try:
        cursor = connection.cursor()
        # ............some.work..........
        # ............work.saved..........
        cursor.close()
        connection.close()
        return ...
    except Exception:
        print("ERROR")
The above code results in a connection error the second time it is used: the first time it works fine and gives the output, but the second time I run the Lambda function it results in a connection error.
If I remove this line:
connection.close()
the code works, but real-time data inserted into the DB is not fetched by the Lambda. However, if I leave the function unused for about 2 minutes and then call it again, the new value is fetched.
To rectify this, I placed the connect() inside the lambda_handler; this solves the problem and also fetches real-time data upon insertion:
import os
import json
import logging

import pymysql

endpoint = os.environ.get('DBMS_endpoint')
username = os.environ.get('DBMS_username')
password = os.environ.get('DBMS_password')
database_name = os.environ.get('DBMS_name')
DBport = int(os.environ.get('DBMS_port'))

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def lambda_handler(event, context):
    try:
        # A fresh connection per invocation always sees the latest data
        try:
            connection = pymysql.connect(host=endpoint, user=username, passwd=password,
                                         db=database_name, port=DBport)
        except Exception:
            logger.error("ERROR: Unexpected error: Could not connect to MySQL instance.")
        cursor = connection.cursor()
        # ............some.work..........
        # ............work.saved..........
        cursor.close()
        connection.close()
        return ...
    except Exception:
        print("ERROR")
So I want to know whether it is right to do this, or whether there is some other way to solve this problem. I have been trying to solve it for a few days; this solution works, but I'm not sure it is good practice.
Will any problems occur if the number of connections to the database increases, or any other kind of resource problem?
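For what it's worth, a common middle ground (a hedged sketch of my own, not from the post) is to keep the module-level connection for reuse but revalidate it inside the handler. pymysql's ping(reconnect=True) reopens a dropped connection, and commit() ends the transaction; under MySQL's default REPEATABLE READ isolation, a long-lived connection keeps returning the same snapshot until its transaction ends, which is why the reused connection returned stale counts.
import os
import logging

import pymysql

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Opened once per container; revalidated on every invocation below
connection = pymysql.connect(host=os.environ['DBMS_endpoint'],
                             user=os.environ['DBMS_username'],
                             passwd=os.environ['DBMS_password'],
                             db=os.environ['DBMS_name'],
                             port=int(os.environ['DBMS_port']))


def lambda_handler(event, context):
    connection.ping(reconnect=True)  # reopen the connection if it was dropped
    with connection.cursor() as cursor:
        cursor.execute("select count(*) from users")
        (count,) = cursor.fetchone()
    connection.commit()  # end the transaction so the next call sees fresh data
    return {"users": count}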

Trying to access an object from a listener in a Python web framework

Pretty new to async, so here is my question, and thank you in advance.
Hi all, this is a very simple question I might be overthinking.
I am trying to access this Cassandra client outside of the listeners defined below, which get registered to a Sanic main app.
I need the session in order to run an update query that will execute asynchronously. I can definitely connect and even query from the setup_cassandra_session_listener method below, but I'm having a tough time figuring out how to get hold of this Cassandra session outside the listener so I can access it elsewhere.
import logging

from aiocassandra import aiosession
from cassandra.cluster import Cluster
from sanic import Sanic

from config import CLUSTER_HOST, TABLE_NAME, CASSANDRA_KEY_SPACE, CASSANDRA_PORT, DATA_CENTER, DEBUG_LEVEL, LOGGER_FORMAT

log = logging.getLogger('sanic')
log.setLevel('INFO')

cassandra_cluster = None


def setup_cassandra_session_listener(app, loop):
    global cassandra_cluster
    cassandra_cluster = Cluster([CLUSTER_HOST], CASSANDRA_PORT, DATA_CENTER)
    session = cassandra_cluster.connect(CASSANDRA_KEY_SPACE)
    metadata = cassandra_cluster.metadata
    app.session = cassandra_cluster.connect(CASSANDRA_KEY_SPACE)  # note: this opens a second session
    log.info('Connected to cluster: ' + metadata.cluster_name)
    aiosession(session)
    app.cassandra = session


def teardown_cassandra_session_listener(app, loop):
    global cassandra_cluster
    cassandra_cluster.shutdown()


def register_cassandra(app: Sanic):
    app.listener('before_server_start')(setup_cassandra_session_listener)
    app.listener('after_server_stop')(teardown_cassandra_session_listener)
Here is a working example that should do what you need. It does not actually run Cassandra (since I have no experience doing that), but in principle this should work with any database connection you need to manage across the lifespan of your running server.
from sanic import Sanic
from sanic.response import text

app = Sanic()


class DummyCluster:
    def connect(self):
        print("Connecting")
        return "session"

    def shutdown(self):
        print("Shutting down")


def setup_cassandra_session_listener(app, loop):
    # No global variables needed
    app.cluster = DummyCluster()
    app.session = app.cluster.connect()


def teardown_cassandra_session_listener(app, loop):
    app.cluster.shutdown()


def register_cassandra(app: Sanic):
    # Changed these listeners to be more friendly when running with an ASGI server
    app.listener('after_server_start')(setup_cassandra_session_listener)
    app.listener('before_server_stop')(teardown_cassandra_session_listener)


@app.get("/")
async def get(request):
    return text(app.session)


if __name__ == "__main__":
    register_cassandra(app)
    app.run(debug=True)
The idea is that you attach the session to your app instance (as you did) and are then able to access it inside your routes with request.app.
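For instance, here is a minimal sketch of that access pattern (my addition; it assumes the aiosession-patched session that the question attaches as app.cassandra, since aiocassandra adds an awaitable execute_future method to it; the route and query are placeholders):
from sanic.response import text

@app.post("/wells")
async def update_well(request):
    # app.cassandra is the aiosession-patched session from the listener above
    await request.app.cassandra.execute_future(
        "UPDATE well_table SET status = 'active' WHERE id = 1")  # placeholder query
    return text("updated")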

Asynchronously pulling large data from pyodbc using fetchmany

I am trying to pull a large dataset from pyodbc. My code below works OK, but it is serial, and hence slow. I want it to be able to initiate multiple IO calls asynchronously. I see many examples using asyncio, but cannot find anything I can use with fetchmany. I attempted to pool using asyncio but couldn't make it work. I appreciate any suggestions!
import pandas as pd
import pyodbc

conn = pyodbc.connect('DSN=Denodo Interfaces')
cursor = conn.cursor()

# strng (the SQL text) and well_name are defined elsewhere in the original script
strng = strng.replace('myWellName', well_name)
cursor.execute(strng)

cols = [column[0] for column in cursor.description]
mylist = []
while True:
    rows = cursor.fetchmany(10000)
    if not rows:
        break
    df = pd.DataFrame([tuple(t) for t in rows], columns=cols)
    mylist.append(df)
df = pd.concat(mylist, axis=0).reset_index(drop=True)
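No answer was posted, but one pattern worth sketching (my own, untested against Denodo) is to move each blocking fetchmany() call off the event loop with run_in_executor, so other coroutines can run while a batch is fetched. A single pyodbc cursor can't be read from multiple threads at once, so true parallelism needs one connection per query, as in the gather example; the DSN, view name, and batch size are assumptions:
import asyncio

import pandas as pd
import pyodbc


async def fetch_dataframe(query, batch_size=10000):
    loop = asyncio.get_running_loop()
    conn = pyodbc.connect('DSN=Denodo Interfaces')
    cursor = conn.cursor()
    # Run the blocking execute in the default thread-pool executor
    await loop.run_in_executor(None, cursor.execute, query)
    cols = [column[0] for column in cursor.description]
    frames = []
    while True:
        # Each blocking fetch runs in a worker thread, freeing the event loop
        rows = await loop.run_in_executor(None, cursor.fetchmany, batch_size)
        if not rows:
            break
        frames.append(pd.DataFrame([tuple(t) for t in rows], columns=cols))
    return pd.concat(frames, axis=0).reset_index(drop=True)


async def main():
    # Example: fetch two wells concurrently, each on its own connection/cursor
    df1, df2 = await asyncio.gather(
        fetch_dataframe("SELECT * FROM well_view WHERE well = 'A'"),
        fetch_dataframe("SELECT * FROM well_view WHERE well = 'B'"))
    print(len(df1), len(df2))

asyncio.run(main())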

gspread data does not appear in Google Sheet

I'm trying to write sensor data to a Google Sheet. I was able to write to this same sheet a year or so ago, but I am active on this project again and can't get it to work. I believe the OAuth flow has changed, and I've updated my code for that change.
With the code below I get no errors, yet no data is entered in the Google Sheet. Also, if I look at Google Sheets, the "last opened" date does not reflect the time my program would/should be writing to the sheet.
I've tried numerous variations and I'm just stuck. Any suggestions would be appreciated.
#!/usr/bin/python3
# -- developed with Python 3.4.2

# External Resources
import time
import sys
import json
import traceback

import gspread
import httplib2
from oauth2client.service_account import ServiceAccountCredentials

# Initialize gspread
scope = ['https://spreadsheets.google.com/feeds']
credentials = ServiceAccountCredentials.from_json_keyfile_name('MyGoogleCode.json', scope)
client = gspread.authorize(credentials)

# Start loop ________________________________________________________________
samplecount = 1
while True:
    data_time = time.strftime("%Y-%m-%d %H:%M:%S")
    row = [samplecount, data_time]
    # Append to Google sheet
    try:
        if credentials is None or credentials.invalid:
            credentials.refresh(httplib2.Http())
        GoogleDataFile = client.open('DataLogger')
        wks = GoogleDataFile.get_worksheet(1)
        wks.append_row([samplecount, data_time])
        print("worksheets", GoogleDataFile.worksheets())  # prints ID for both sheets
    except Exception:
        traceback.print_exc()
    print("samplecount", samplecount, row)
    samplecount += 1
    time.sleep(5)
I found my issue. I changed three things to get gspread working:
1. Downloaded a newly created JSON file (probably did not need this step).
2. With the target worksheet open in Chrome, I "shared" it with the email address found in the JSON file.
3. In the Google Developers Console, I enabled the "Drive API".
However, the code in the original post will not refresh the token, so it stops working after 60 minutes. The code that works (as of July 2017) is below. It writes to a Google Sheet named "DataLogger", specifically to the sheet shown as Sheet2 in the Google view. The only unique information is the name of the JSON file.
Hope this helps others.
Jon
#!/usr/bin/python3
# -- developed with Python 3.4.2
#
# External Resources __________________________________________________________
import time
import json
import traceback

import gspread
from oauth2client.service_account import ServiceAccountCredentials

# Initialize gspread credentials
scope = ['https://spreadsheets.google.com/feeds']
credentials = ServiceAccountCredentials.from_json_keyfile_name('MyjsonFile.json', scope)
headers = gspread.httpsession.HTTPSession(headers={'Connection': 'Keep-Alive'})
client = gspread.Client(auth=credentials, http_session=headers)
client.login()

workbook = client.open("DataLogger")
wksheet = workbook.get_worksheet(1)

# Start loop ________________________________________________________________
samplecount = 1
while True:
    data_time = time.strftime("%Y-%m-%d %H:%M:%S")
    row_data = [samplecount, data_time]
    if credentials.access_token_expired:
        client.login()  # refresh the token before it expires mid-run
    wksheet.append_row(row_data)
    print("Number of rows in our worksheet", wksheet.row_count)
    print("samplecount", samplecount, row_data)
    print()
    samplecount += 1
    time.sleep(16*60)
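As a side note for readers on current library versions (my addition, not part of the 2017 answer): newer gspread releases bundle google-auth and refresh tokens automatically, so the manual login() dance above is no longer needed. A minimal sketch, assuming the same service-account JSON file and sheet names:
import gspread

# Credentials come from the service-account JSON file; token refresh is automatic
client = gspread.service_account(filename='MyjsonFile.json')
wksheet = client.open("DataLogger").get_worksheet(1)
wksheet.append_row([1, "2020-01-01 12:00:00"])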

Spotfire - mark records and send to clipboard

I'd like to create a Spotfire button action control that does the following:
1. Select all rows in a table visualization.
2. Send the selected rows to the clipboard.
The first step was handled pretty easily (borrowed from here). For the second step, I was unsuccessful in my initial attempts to send to the clipboard with script (e.g. as suggested here). I was partially successful in a follow-up attempt by sending Ctrl-C programmatically to Spotfire (see spotfired.blogspot.co.id/2014/04/pressing-keys-programatically.html).
Here's the [mostly] functioning code:
from Spotfire.Dxp.Application.Visuals import VisualContent
from Spotfire.Dxp.Data import IndexSet
from Spotfire.Dxp.Data import RowSelection

# Get table reference
vc = vis.As[VisualContent]()
dataTable = vc.Data.DataTableReference

# Get marking
marking = vc.Data.MarkingReference

# Set up rows to select from rows to include
rowCount = dataTable.RowCount
rowsToSelect = IndexSet(rowCount, True)

# Set marking
marking.SetSelection(RowSelection(rowsToSelect), dataTable)

# Script to send keystroke to Spotfire
import clr
clr.AddReference("System.Windows.Forms")
from System.Windows.Forms import SendKeys, Control, Keys

# Send keystroke for Ctrl-C copy-to-clipboard
SendKeys.Send("^c")  # Ctrl+C
The code works as expected, except that I have to hit the button twice for the Ctrl-C part of the script to work (i.e. hitting it once only marks all rows in the table visualization).
Another issue that I seem to have resolved: the originally suggested syntax to send the Ctrl-C keystroke was SendKeys.Send("(^+C)"). This didn't work, so I rewrote it as SendKeys.Send("^c"), which does work, but only after I hit the button twice.
Any thoughts on how I could fix having to hit the action control button twice?
A workaround could be to avoid sending keystrokes with script and revisit my first-attempt code for the copy-to-clipboard functionality, but my IronPython skills are a limiting factor here.
Using the same post as reference, I used this code to use the Windows clipboard:
import clr
clr.AddReference('System.Windows.Forms')
from System.Windows.Forms import Clipboard
from System.IO import Path, StreamWriter
from Spotfire.Dxp.Application.Visuals import TablePlot

# Export the table plot to a temp file
tempFolder = Path.GetTempPath()
tempFilename = Path.GetTempFileName()
tp = mytable.As[TablePlot]()
writer = StreamWriter(tempFilename)
tp.ExportText(writer)

# Read the exported text back and normalize line endings
f = open(tempFilename)
html = ""
for line in f:
    html += "\t".join(line.split("\t")).strip()
    html += "\n"
f.close()

# Place the text on the Windows clipboard
Clipboard.SetText(html)
Thanks, sayTibco, your code is working for me now. See below for my updated version. I'm still curious how to better utilize SendKeys.Send(), but will make that the subject of a separate post after I have some time to experiment.
from Spotfire.Dxp.Application.Visuals import VisualContent, TablePlot
from Spotfire.Dxp.Data import IndexSet
from Spotfire.Dxp.Data import RowSelection

# Get table reference
vc = mytable.As[VisualContent]()
dataTable = vc.Data.DataTableReference

# Get marking
marking = vc.Data.MarkingReference

# Set up rows to select from rows to include
rowCount = dataTable.RowCount
rowsToSelect = IndexSet(rowCount, True)

# Set marking
marking.SetSelection(RowSelection(rowsToSelect), dataTable)

# Copy marked records to clipboard
import clr
import sys
clr.AddReference('System.Data')
import System
from System.IO import Path, StreamWriter
from System.Text import StringBuilder

# Temp file for storing the table data
tempFolder = Path.GetTempPath()
tempFilename = Path.GetTempFileName()

# Export TablePlot data to the temp file
tp = mytable.As[TablePlot]()
writer = StreamWriter(tempFilename)
tp.ExportText(writer)
f = open(tempFilename)

# Format table
html = ""
for line in f:
    html += "\t".join(line.split("\t")).strip()
    html += "\n"
f.close()

# Paste to system clipboard
clr.AddReference('System.Windows.Forms')
from System.Windows.Forms import Clipboard
Clipboard.SetText(html)
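On the open SendKeys question: one thing that may be worth trying (an untested assumption on my part, not something from this thread) is SendKeys.SendWait, which blocks until the keystroke has been processed and so might avoid the need to press the button twice:
import clr
clr.AddReference("System.Windows.Forms")
from System.Windows.Forms import SendKeys

# SendWait does not return until the receiving application has processed the
# keystroke, unlike Send, which queues it asynchronously
SendKeys.SendWait("^c")  # Ctrl+C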
