I am trying to upload a table, and it is giving me the error message: "The following new table has invalid names: ". It did not point out which name is invalid. All my column names are words. I'm not sure what rules I could possibly be violating. Below is a screenshot.
DMS NOTDMS engine arsenic sediment cartilage articular bone freight solutions neutrino heart stripe plasma indoor calcium power fixture eye chloride tellurium alloys egg corrosion market antenna metal ice quantum invasibility interrupt ventilation ammonia pollen syringae text auxin editing compression copper dpp clock enduring taxes blue kinase dolomite meristem isoprene proteins halo context information type detector oxygen invariants aequorin attractors ribosome actin cellulose tubulin binding site disulfide midgut alternative oxidase fischeri agreement snow cements excluded attitudes law nucleotide music homotopy periplasmic translocation stomatal phosphoprotein flagellar late motors operons replication sigma recombination streamflow fluidity police muscle blood heme replicative kelps estrogen elderly witnesses fire splicing scaffolding subunits erosion reef climate abnormal operator holographic braided seeding kidney cortical photonic functor homology river alluvial sand inlet import nitrogenase aleurone maturation guard light inositol membrane clay lightning recycling amoebae dyneins thioredoxin coat 3-manifolds mercury diving sludge sources fluorine conductivity hydraulic glucose designs condensate amorphous treeline
Short version:
The column names are not the problem.
The title of the error message may indicate that not every record has the datatype you want it to have.
I imported a flat file (CSV) with the same column names you provided above using Microsoft SQL Server 2008 R2 Express. The import worked fine; every column name was imported. Could it be that your column names have some non-visible characters that are causing the trouble?
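One way to check for that: here is a minimal sketch (Python; paste your real column list in place of the illustrative subset) that flags hidden control or non-printable characters in the names:

import unicodedata

# Illustrative subset; paste the full column list from your file here.
columns = ["DMS", "NOTDMS", "engine", "arsenic", "sediment"]

for name in columns:
    for ch in name:
        # Unicode category "C*" covers control/format characters that an
        # import wizard may reject without displaying them on screen.
        if unicodedata.category(ch).startswith("C") or not ch.isprintable():
            print(f"column {name!r} contains hidden character U+{ord(ch):04X}")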
If you provide more information, we can help better:
Which version of SQL Server are you using?
What is the source of your file?
Provide any additional error information and describe the way you are importing the file.
My answer is that the column names are not the problem, at least not with SQL Server 2008 R2 Express.
Oracle keywords:
type (in information type)
SQL keywords that may have caused the issue:
power
operator
You need to change those column names to avoid further errors.
How to check: paste the list into any editor with SQL syntax highlighting (such as Notepad++); the keywords will be highlighted.
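Alternatively, here is a minimal programmatic check (Python; the RESERVED set below is only an illustrative subset, not the full keyword list of any particular database, so substitute the list from your vendor's documentation):

# Illustrative subset only; use the full reserved-word list for your database.
RESERVED = {"POWER", "OPERATOR", "TYPE", "ORDER", "GROUP", "SELECT"}

columns = ["power", "operator", "information type", "blue", "kinase"]

for name in columns:
    hits = [w for w in name.upper().split() if w in RESERVED]
    if hits:
        print(f"{name!r} collides with reserved word(s): {', '.join(hits)}")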
I encountered a similar problem (using Oracle SQL Developer 3.2).
My issue was that I had an index column with no name. It therefore couldn't be selected, and it displayed as nothing in the error prompt.
Hopefully this helps anyone who might be facing the same issue.
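For what it's worth, if your CSV was exported from pandas, a nameless leading index column is a common way this happens; a minimal sketch of avoiding it (hypothetical file and column names):

import pandas as pd

df = pd.DataFrame({"name": ["a", "b"], "value": [1, 2]})
# index=False prevents pandas from writing the unnamed index column
# that an import wizard then cannot select or display by name.
df.to_csv("table.csv", index=False)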
I'm trying to make a table using asdoc that will include both the value labels and the variable labels in the output. When I run the following line of code in Stata
asdoc list progname progtype progterm publicprivate cohortsize grereq, label
I get this in the console (no variable labels):
But in the Word doc, it comes out looking like this (it has variable labels but no value labels in the table cells):
How do I get both the variable and value labels in the table?
The last update of asdoc was on April 10, 2021. I announced the following in that update.
It has now been almost three years that I have been developing asdoc and constantly adding features to it. With the addition of the _docx and xl() classes to Stata, it is high time to add support for native docx and xlsx output to asdoc. Also, given that there exists a significant number of LaTeX users, asdoc should be able to create LaTeX documents. It gives me immense pleasure to announce asdocx, which is not only more flexible in making customized tables, but also creates documents in native docx, xlsx, html, and tex formats. If you have enjoyed asdoc and found it useful, please consider buying a copy of asdocx to support its development. Details related to asdocx can be found on this page.
I am still committed to fixing bugs / issues in asdoc. However, I think it makes more sense to add features to asdocx than to asdoc, given that asdocx supports all the latest developments in Word, Excel, and LaTeX.
The requested feature is already available in asdocx. See the following example.
sysuse nlsw88
asdocx list industry age race married grade south in 1/20, replace label
I'm trying to create a database for my project to look up MAC vendors. I added a UNIQUE key on the prefix column. When inserting rows from the officially published MA-L CSV file, I got a duplicate-entry error from the DB. Then I looked it up in the CSV file and found 3 entries for prefix '080030'.
Is the file wrong, or am I misunderstanding how to use the OUI list? If I want to look up the vendor of a MAC address with prefix '08:00:30', which of the three is correct?
There are currently two duplicate assignments in the MA data files.
Registry,Assignment,Organization Name,Organization Address
MA-L,080030,NETWORK RESEARCH CORPORATION,2380 N. ROSE AVENUE OXNARD CA US 93010
MA-L,080030,ROYAL MELBOURNE INST OF TECH,GPO BOX 2476V MELBOURNE VIC AU 3001
MA-L,080030,CERN,CH-1211 GENEVE SUISSE/SWITZ CH 023
Registry,Assignment,Organization Name,Organization Address
MA-L,0001C8,THOMAS CONRAD CORP.,1908-R KRAMER LANE AUSTIN TX US 78758
MA-L,0001C8,CONRAD CORP.,
The IEEE provides the following footnote on page 7 of the linked document.
The IEEE Registration Authority makes a concerted effort to avoid duplicate assignments but does not guarantee that duplicate assignments have not occurred. Global uniqueness also depends on proper use of assignments and absence of faults that might result in duplication.
https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/tutorials/eui.pdf
The Wireshark OUI lookup tool, based on their own compiled list, gives one answer as to which of these organizations currently holds the assignment.
Network Research Corporation
https://www.wireshark.org/tools/oui-lookup
And the maclookup website, which seems reasonable, gives a different answer.
CERN
https://maclookup.app/macaddress/080030
There are no timestamps in the data file, and there is no explicit row sorting or ordering (nor does the file look sorted or ordered). Bottom line: there seems to be no way to use the data file and supporting documents alone to determine which assignment is correct.
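Given that, the practical workaround is to detect the duplicates up front and decide how to handle them yourself (keep one, or drop the UNIQUE key and store all of them). A minimal sketch (Python; assumes the official CSV header shown above, with a hypothetical local file name):

import csv
from collections import defaultdict

orgs_by_prefix = defaultdict(list)
with open("oui.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        orgs_by_prefix[row["Assignment"]].append(row["Organization Name"])

# Report every prefix assigned to more than one organization.
for prefix, orgs in orgs_by_prefix.items():
    if len(orgs) > 1:
        print(prefix, "->", orgs)  # e.g. 080030 and 0001C8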
This is strange; I've never seen such a thing. Are you sure you are pulling the correct file from the IEEE?
I've reviewed the three companies you mentioned; in the file, they have different prefixes.
CERN's MAC prefix is 80D336.
Network Research Corporation has two prefixes, one of which is the one mentioned in the question.
The third company I couldn't find in any DB.
I think your parser is somehow corrupting the data, especially if you are parsing the text file.
I'm looking for a dataset with all the Chinese character Mandarin pronunciations in bopomofo and/or pinyin. Also, I need open-source datasets that I can copy into my own code bases.
It sounds like you might be looking for the Unihan Database. The Unihan Database is maintained by the Unicode Consortium.
The Unihan database is the repository for the Unicode Consortium's collective knowledge regarding the CJK Unified Ideographs contained in the Unicode Standard. It contains mapping data to allow conversion to and from other coded character sets and additional information to help implement support for the various languages which use the Han ideographic script.
As an example, here is the data for 爱.
Here is the description of the organization and content of the Unihan Database. Be sure to read it to understand what the data refers to.
If this is the information you want, you can download the ZIP archive that contains all this data.
The Unihan Database doesn't have Bopomofo (Zhuyin) pronunciations, but it has Pinyin readings. Converting from Pinyin to Zhuyin is simple; there are a lot of online tools that can do it for you.
As for licensing issues, the Unihan Database data files have a liberal copyright notice. So, you shouldn't run into any problems using that data in your own software.
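As a starting point, here is a minimal sketch for pulling the pinyin readings out of the downloaded data (Python; assumes the Unihan_Readings.txt file from the ZIP archive, whose data lines are tab-separated codepoint, field, value, with '#' marking comment lines):

pinyin = {}
with open("Unihan_Readings.txt", encoding="utf-8") as f:
    for line in f:
        if line.startswith("#") or not line.strip():
            continue
        codepoint, field, value = line.rstrip("\n").split("\t", 2)
        if field == "kMandarin":
            # "U+7231" -> the character 爱
            pinyin[chr(int(codepoint[2:], 16))] = value

print(pinyin.get("爱"))  # ài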
This is a bit of a late entry, but I was searching for the same thing last year and ended up compiling my own character/bopomofo database based on a bunch of different data sets. I have put enough work into this thing to thoroughly call it my own, though, so you should check it out! It's part of a Ruby gem I made to sort by bopomofo (I had a system that would not let me change the database collation settings): https://github.com/nallan/a-b-chi
You can see from their category links that the only portion of the URL that matters is the small hash near the end of the URL itself.
For instance, Water Heaters category found under Heating/Cooling is:
http://www.lowes.com/Heating-Cooling/Water-Heaters/_/N-1z11ong/pl?Ns=p_product_avg_rating|1
and Water Heaters category found under Plumbing is:
http://www.lowes.com/Plumbing/Water-Heaters/_/N-1z11qhp/pl?Ns=p_product_avg_rating|1
That being said, their structure could obviously be a number of different things...
The only thing I can think of is that it's a hex string that gets decoded into a number and denominator, but I can't figure it out...
Apparently it's important to them to obfuscate this for some reason?
Any ideas?
UPDATE
At first I was thinking it was some sort of base-16 / hex conversion of a standard number / denominator or something? Or the ID of a node and its adjacency?
Does anyone have enough experience with this to assist?
They are building on top of IBM WebSphere Commerce. Nothing fancy going on here, though. The alphanumeric identifiers N-xxxxxxx are simple node identifiers that do not capture hierarchical structure in themselves; the structure (parent nodes and direct child nodes) is coded inside the node data itself, and there are tell-tale signs to that effect (see below). They have no need for nested intervals (sets); their user interface does not expose more than one level at a time during normal navigation.
Take Lowe's.
If you look inside the cookies (WC_xxx), as well as where they serve some of their content from (.../wcsstore/B2BDirectStorefrontAssetStore/...), you know they're running on WebSphere Commerce. On their listing pages, everything leading up to /_/ is there for SEO purposes. The alphanumeric identifier is fixed-length, base-36 (although as filters are applied, additional Zxxxx groups are tacked on; everything that follows a capital Z simply records the filtering state).
Let's say you then wrote a little script to inventory all 3600-or-so categories Lowe's currently has on their site. You'd get something like this:
N-1z0y28t /Closet-Organization/Wood-Closet-Systems/Wood-Closet-Kits
N-1z0y28u /Closet-Organization/Wood-Closet-Systems/Wood-Closet-Towers
N-1z0y28v /Closet-Organization/Wood-Closet-Systems/Wood-Closet-Shelves
N-1z0y28w /Closet-Organization/Wood-Closet-Systems/Wood-Closet-Hardware
N-1z0y28x /Closet-Organization/Wood-Closet-Systems/Wood-Closet-Accessories
N-1z0y28y /Closet-Organization/Wood-Closet-Systems/Wood-Closet-Pedestal-Bases
N-1z0y28z /Cleaning-Organization/Closet-Organization/Wood-Closet-Systems
N-1z0y294 /Lighting-Ceiling-Fans/Chandeliers-Pendant-Lighting/Mix-Match-Mini-Pendant-Shades
N-1z0y295 /Lighting-Ceiling-Fans/Chandeliers-Pendant-Lighting/Mix-Match-Mini-Pendant-Light-Fixtures
N-1z0y296 /Lighting-Ceiling-Fans/Chandeliers-Pendant-Lighting/Chandeliers
...
N-1z13dp5 /Plumbing/Plumbing-Supply-Repair
N-1z13dr7 /Plumbing
N-1z13dsg /Lawn-Care-Landscaping/Drainage
N-1z13dw5 /Lawn-Care-Landscaping
N-1z13e72 /Tools
N-1z13e9g /Cleaning-Organization/Hooks-Racks
N-1z13eab /Cleaning-Organization/Shelves-Shelving/Laminate-Closet-Shelves-Organizers
N-1z13eag /Cleaning-Organization/Shelves-Shelving/Shelves
N-1z13eak /Cleaning-Organization/Shelves-Shelving/Shelving-Hardware
N-1z13eam /Cleaning-Organization/Shelves-Shelving/Wall-Mounted-Shelving
N-1z13eao /Cleaning-Organization/Shelves-Shelving
N-1z13eb3 /Cleaning-Organization/Baskets-Storage-Containers
N-1z13eb4 /Cleaning-Organization
N-1z13eb9 /Outdoor-Living-Recreation/Bird-Care
N-1z13ehd /Outdoor-Living
N-1z13ehn /Appliances/Air-Purifiers-Accessories/Air-Purifiers
N-1z13eho /Appliances/Air-Purifiers-Accessories/Air-Purifier-Filters
N-1z13ehp /Appliances/Air-Purifiers-Accessories
N-1z13ejb /Appliances/Humidifiers-Dehumidifiers/Humidifier-Filters
N-1z13ejc /Appliances/Humidifiers-Dehumidifiers/Dehumidifiers
N-1z13ejd /Appliances/Humidifiers-Dehumidifiers/Humidifiers
N-1z13eje /Appliances/Humidifiers-Dehumidifiers
N-1z13elr /Appliances
N-1z13eny /Windows-Doors
Notice how entries are for the most part sequential (it's a sequential identifier, not a hash) and mostly, though not always, grouped together (the identifier reflects chronology, not structure: it captures insertion order, which happened in single or multiple batches, sometimes years and thousands of identifiers apart at the other end of the database). Notice also how "parent" nodes always come after their children, sometimes after holes. These are all tell-tale signs that, as categories are added and/or removed, new versions of the corresponding parent nodes are written and the old, superseded or removed versions are ultimately deleted.
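You can verify the sequencing yourself: the identifiers decode as plain base-36 integers, and adjacent categories decode to consecutive values. A quick sketch (Python, using identifiers from the listing above):

# Decode the N-xxxxxxx identifiers as base-36 integers.
for s in ["1z0y28t", "1z0y28u", "1z0y28v", "1z0y28w"]:
    print(s, int(s, 36))
# 1z0y28t 4294687709
# 1z0y28u 4294687710
# 1z0y28v 4294687711
# 1z0y28w 4294687712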
If you think there's more you need to know, you may want to inquire further with WebSphere Commerce experts as to what exactly Lowe's might be using specifically for its N-xxxxxxx catalogue (though I suspect that whatever it is is 90%+ custom). FWIW, I believe Home Depot (who also appear to be using WebSphere) upgraded to version 7 earlier this year.
UPDATE Joshua mentioned Endeca, and it is indeed Endeca (those N-xxxxxxx identifiers) that is being used behind WebSphere in this case (though I believe that since the acquisition of Endeca, Oracle has been pushing SUN^H^H^Htheir own Java EE "Endeca Server" platform). So not really a 90%+ custom job despite the appearances (the presentation and their JavaScript are heavily customized, but that's the tip of the iceberg). You should be able to use Solr as a substitute.
I am trying to create an automatic feed generation for data to be sent to Google Base using utf-8 encoding.
However, I am getting errors whenever hyphens are found, telling me that there is an encoding error in the relevant attribute (title, description, product_type). I am currently using:
−
but I have also tried:
−
neither of which has worked.
I am using the following declaration at the top of the document:
<?xml version="1.0" encoding="utf-8"?>
OK, to give further context: the data is being pulled from our site's product information, stored as utf-8 encoded data in a MySQL database. The data is going into an RSS 2.0 feed, using some standard RSS attributes as well as some custom-defined Google attributes. The problem comes up whenever there is a hyphen in any field except the link field, so it is appearing in the title and description fields as well as the custom product_type field. Below is an example of a field that Google Base (Merchant Centre) throws an error over. It throws the same error with or without the other entities and only stops objecting when hyphens are removed.
<description><p>Your sports floor is designed primarily for sports use. Though many facilities have to be used for other activities including things like; assemblies careers fairs drama parties and social events bring and buy sales exhibitions etc.</p>
<p>Solid hardwood sports floors are designated as "area elastic floors" to provide the spring resilience and shock absorbing qualities needed for sports and dance use to minimise injury. If the floor is too hard the athlete and user will be exposed to early fatigue and aching joints through to injury such as sprains joint and shin bone damage.</p>
<p>If too soft then ball bounce and running characteristics are compromised.
In the UK hardwood sports floors are governed by a number of recognised standards</p>
<p>All sports floors must conform to BS7044 Part 4 - this is the minimum Sport England requirement with which your floor must comply if it is part of a Sport England sponsored project.</p>
<p>A higher more demanding standard for better quality sports and dance flooring is DIN 18032 Part 2</p>
<p>The newest - and the best - standard is the European Standard CEN 217. This standard has brought together all the best performance criteria from a number of current standards in the EU including BS and DIN.</p>
<p>All Junckers systems fully comply with one or more of these standards. They ALL comply with the minimum Sport England requirement of BS7044 Part 4 compliance.</p></description>
You talk about using hyphens, but the character you're trying to insert is the mathematical minus sign. Have you tried it with an actual hyphen? And not an HTML entity, either; just the character, -.
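If the minus signs are coming in from the database content itself, a minimal sketch of normalising them before writing the feed (Python; the sample string is a placeholder):

# Map U+2212 (minus sign) and common dash look-alikes to an ASCII hyphen.
DASHES = {"\u2212": "-", "\u2013": "-", "\u2014": "-"}

def normalize_dashes(text):
    for bad, good in DASHES.items():
        text = text.replace(bad, good)
    return text

title = normalize_dashes("BS7044 Part 4 \u2212 minimum requirement")
print(title)  # BS7044 Part 4 - minimum requirement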