I'm using the Go connector taosRestful to access TDengine. I want to check databases and tables, so I execute:
db.Exec("show databases")
But I got the error "wrong result".
Can't taosRestful execute this statement, or is there another way to write it?
I Googled the question but got no answer.
You should use db.Query() to execute a query statement; db.Exec() is for statements that do not return rows, such as inserts.
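In Go's database/sql (which the taosRestful driver plugs into), db.Query is for statements that return rows, such as show databases, while db.Exec is for statements that don't. A minimal sketch of that routing rule; the helper name is my own, not part of the driver:

```go
package main

import (
	"fmt"
	"strings"
)

// returnsRows reports whether a statement produces a result set and
// should therefore be run with db.Query rather than db.Exec.
func returnsRows(stmt string) bool {
	fields := strings.Fields(stmt)
	if len(fields) == 0 {
		return false
	}
	switch strings.ToLower(fields[0]) {
	case "select", "show", "describe", "desc", "explain":
		return true
	}
	return false
}

func main() {
	// With a live connection you would write:
	//   rows, err := db.Query("show databases")  // Query, not Exec
	fmt.Println(returnsRows("show databases"))           // true
	fmt.Println(returnsRows("insert into t values (1)")) // false
}
```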
I am currently working on a frustrating system where I have been given no direct DB access, only a limited SQL Workbench that cannot do much beyond basic tasks. For some reason I need to do a SELECT * on one of the tables, which has 174 columns. Whenever I try that, it gives me the following error:
"ERROR: Error -27 was encountered whilst running the SQL command. (-3)
Error -3 running SQL : ORACLE Driver Error [-27]: Selected data too
large for SQL Workbench"
Quick googling gave me nothing apart from this, in one of the Oracle documents:
In the SQL Editor, the maximum length of one row of the formatted
result is 8190 bytes. When this length is exceeded, the ORA connector
generates the above error
Now, I was wondering if anyone could offer a solution; that would be a great help. One solution I am considering is to increase the maximum length for the Ora connector/driver, but I am a novice in Oracle and do not know anything beyond querying, so I haven't been able to change the maximum length yet.
So, please if anybody could help me out with this, that would be great.
Thanks a lot guys
Being asked to do database work through the Uniface SQL Workbench is not a good situation. It is a very simple tool that you should only use in an emergency, if nothing else is available.
You could run a couple of queries, each time with the primary key and a bunch of fields and stitch the result together in Excel.
If you have access to the Uniface Development Environment you can use it to convert your Oracle data to, for example, XML. Instructions are in the Uniface helpfile ulibrary.chm, see command line switch /cpy.
You cannot change the maximum record length of the Uniface Oracle Connector.
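The stitch-together approach above can be sketched: split the columns into groups that each include the primary key, run one SELECT per group, and join the result sets on the key afterwards. A sketch in Go (the table, key, and column names are placeholders):

```go
package main

import (
	"fmt"
	"strings"
)

// selectChunks splits cols into groups of at most size columns,
// prefixes each group with the primary key, and returns one SELECT
// per group. The per-group results can then be stitched together
// on the key, keeping each row under the workbench's length limit.
func selectChunks(table, pk string, cols []string, size int) []string {
	var queries []string
	for i := 0; i < len(cols); i += size {
		end := i + size
		if end > len(cols) {
			end = len(cols)
		}
		fields := append([]string{pk}, cols[i:end]...)
		queries = append(queries,
			fmt.Sprintf("SELECT %s FROM %s", strings.Join(fields, ", "), table))
	}
	return queries
}

func main() {
	cols := []string{"c1", "c2", "c3", "c4", "c5"}
	for _, q := range selectChunks("big_table", "id", cols, 2) {
		fmt.Println(q)
	}
}
```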
My Hive query has been throwing an error:
syntax error near unexpected token `('
I am not sure where the error occurs in the query below.
Can you help me out?
select A.dataA, B.dataB, count(A.nid), count(B.nid)
from
    (select nid, sum(dataA_count) as dataA
     from table_view
     group by nid) A
left join
    (select nid, sum(dataB_count) as dataB
     from table_others
     group by nid) B
    on A.nid = B.nid
group by A.dataA, B.dataB;
I think you did not close the ) at the end.
Thanks
Sometimes people forget to start the metastore service, and also forget to enter the Hive shell, and start passing commands the way they would to Sqoop. When I was a newbie I faced the same thing.
To overcome this issue:
Go to the Hive directory and run: bin/hive --service metastore — this starts the Hive metastore server for you. Then
open another terminal and run: bin/hive — this puts you inside the Hive shell.
When you forget these steps, you get silly issues like the one we are discussing in this thread.
Hope it will help others, thanks.
I had gone through many posts before I realized that my Beeline terminal was logged off and I was typing the query into a normal terminal.
I faced an issue exactly like this:
First, you have six opening parentheses and six closing ones, so that is not your issue.
Last but not least, you are getting this error because your command is being interpreted word by word by the shell. A statement such as a SQL query is only known to databases, and if you write in the dialect of a specific database, only that particular database can understand it.
Your "before '(' ..." error means you are using something before the ( that is not known to the terminal or the environment you are running in.
All you have to do to fix it is:
1- Wrap the statement in single or double quotation marks.
2- Use a where clause even if you don't need one (for example, Apache Sqoop requires it no matter what). Check the documentation for the exact form; usually you can use something like where 1=1 when you don't need it (for Sqoop it is where $CONDITIONS).
3- Make sure your command runs in the designated database first, before asking any third-party app to run it.
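The parenthesis count in the first point can be verified mechanically; a small sketch in Go:

```go
package main

import "fmt"

// balanced reports whether every '(' in s has a matching ')'.
func balanced(s string) bool {
	depth := 0
	for _, r := range s {
		switch r {
		case '(':
			depth++
		case ')':
			depth--
			if depth < 0 { // a ')' appeared before its '('
				return false
			}
		}
	}
	return depth == 0
}

func main() {
	fmt.Println(balanced("select count(a.nid) from (select nid from t) a")) // true
	fmt.Println(balanced("select ("))                                       // false
}
```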
I am using the DBI module to run a select query against Oracle. The query is prepared with DBI's prepare method and then executed with execute.
My question is: once the query is executed, where is the result held until we use one of the fetchrow methods to retrieve it — in Oracle's memory or in Perl's memory?
My understanding is that it should be in Oracle's memory, but I still wanted to confirm.
It is held in Oracle until you issue your first fetch. However, you should be aware that once you make your first fetch call DBD::Oracle (which I presume you are using) will likely fetch multiple rows back in one go even if you asked for only one (you can see how many with RowsInCache). You can alter the settings used with ora_prefetch_rows, ora_prefetch_memory and ora_row_cache_off.
In Oracle's memory. First hint: you don't have access to that data yet.
You could test the amount of memory used by your Perl script before and after the execute statement to confirm.
See http://docstore.mik.ua/orelly/linux/dbi/ch05_01.htm
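The before/after memory check suggested above can be sketched. The question is about Perl, but the idea is the same in any client, shown here in Go as an analogue: the client's heap only grows once rows are actually pulled into the program's memory (the allocation below stands in for a fetched result set).

```go
package main

import (
	"fmt"
	"runtime"
)

// heapAlloc returns the bytes of heap currently allocated by this process.
func heapAlloc() uint64 {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.HeapAlloc
}

func main() {
	before := heapAlloc()
	// Simulate fetching rows into client memory: until the fetch,
	// the result set lives on the database server, not here.
	rows := make([][]byte, 0, 1024)
	for i := 0; i < 1024; i++ {
		rows = append(rows, make([]byte, 1024)) // ~1 MiB total
	}
	after := heapAlloc()
	fmt.Printf("heap grew by ~%d KiB while holding %d rows\n",
		(after-before)/1024, len(rows))
}
```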
The following SQL takes 62 seconds to return:
select getCreditBalance(Customerid)
from business_apply
where serialno = '20101013000005'
How can I tune it?
Please tell me the steps I should follow in detail.
We use IDS 9.04.
Since I can't see the SET EXPLAIN ON output from JDBC, shall I execute the query in dbaccess (with SET EXPLAIN ON)?
My problem is that I can't get the execution plan; if I can get it, I will post it here.
You've not given us very much to work on.
Basic questions
What is the type of the column 'SerialNo'?
If it is a numeric column, don't quote the value you are searching for.
Is there an index on 'SerialNo'?
The index is important; the type is not so important.
Crucial question
What does the getCreditBalance() procedure do?
Auxiliary questions
Which version of Informix are you using? Is it IDS or SE or something else?
When did you last run UPDATE STATISTICS?
Is there a problem connecting to the database, or is it definitely just this query that is slow?
What language are you using to submit the query?
Are there any networks with huge latencies involved?
Which isolation level are you running at?
How big is the Business_Apply table?
What is the size of each row?
How many rows?
Which other tables are accessed by the getCreditBalance() procedure?
How big are they?
Do they have appropriate indexes?
What sort of machine is the Informix server running on?
What does the query plan tell you when you run with SET EXPLAIN on?
Is there any chance you've got a failing disk and the o/s is taking forever to read it?
Make sure there is an index on serialno and tune the code in the getCreditBalance function. Without knowing what that does, it's hard to give you any additional help.
I have a select query which takes 10 minutes to complete, as it runs through 10M records. When I run it through TOAD or a program using a normal JDBC connection I get the results back, but a job which uses Hibernate as the ORM does not return any results. It just hangs, even after 45 minutes. Please help.
Are you saying you are trying to retrieve 10M records using an ORM like Hibernate?
If so, you have one big problem: you need to redesign your application, because this is not going to work. As for why it hangs, I bet it is because it runs out of memory.
Have you enabled SQL output for Hibernate? You need to set hibernate.show_sql to true in order to do that.
Once that's done, compare the generated SQL with the one you've been running through TOAD. Are they exactly the same or not?
I'm going to venture a guess here and say they're not because once SQL is generated Hibernate does nothing fancy - connection is taken from a pool; prepared statement is created and executed - so it should be no different from JDBC.
Thus the question most likely is how can your HQL be optimized. If you need any help with that you'll have to post the HQL in question as well as appropriate mappings / table schemas. Running explain on query would help as well.
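If the job really must walk all 10M rows, fetching in bounded batches keeps client memory flat. Below is a sketch in Go of the offset/limit arithmetic only; the emitted SELECT text is illustrative, and for tables this size a keyset predicate (WHERE id > :last) usually performs better than large OFFSETs:

```go
package main

import "fmt"

// pages returns (offset, limit) pairs that cover total rows in
// batches of at most size rows, so that each query holds only a
// bounded result set in memory at a time.
func pages(total, size int) [][2]int {
	var out [][2]int
	for off := 0; off < total; off += size {
		limit := size
		if off+limit > total {
			limit = total - off
		}
		out = append(out, [2]int{off, limit})
	}
	return out
}

func main() {
	for _, p := range pages(10, 4) {
		fmt.Printf("SELECT ... LIMIT %d OFFSET %d\n", p[1], p[0])
	}
}
```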