I am new to AutoCAD. I got a task to draw hundreds of dashed lines; each dashed line is constructed from two points, and the values of the points are saved in an Excel file like this:
1075 7755
1075 7541
1075 7340
1075 7114
1075 6936
1075 6738
Each row represents a point, and two adjacent points construct a line. I have a lot of such values; how can I achieve drawing the lines? Thank you very much for your help.
Late answer, but what the hell. You don't (didn't) need AutoLISP to do it; a script would do. I'd start with using Excel to create the commands, adding a third column with a formula:
=CONCAT("LINE ",A1,",",B1," ",A2,",",B2," ")
Two details matter in script files: AutoCAD expects a point as x,y (comma, no space), and every space or line break in a script counts as an Enter - the trailing space in the formula is the Enter that ends each LINE command. Since each row holds a single point and a line needs two, the formula joins a row with the one below it (assuming your rows pair up: rows 1-2 form the first dash, rows 3-4 the next, and so on).
Just in case - you put this into the topmost cell, and then drag the little square dot down the column so it fills in entirely, adjusting the row references; then delete every second result so each pair of points produces exactly one command. Once that's done, select the column, copy, and paste the values into a simple text editor like Notepad. It should look like this (note the trailing spaces):
LINE 1075,7755 1075,7541 
LINE 1075,7340 1075,7114 
LINE 1075,6936 1075,6738 
Now save it as a .scr file (e.g. dashedlines.scr). In AutoCAD, type SCRIPT, and in the dialog box find the file you just created. It should draw the lines in no time.
An AutoLISP or VBA app would fit if you needed more integration, say automatic redrawing when the Excel data changes.
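If Excel feels clumsy for this, the same pairing-and-formatting can be scripted. A minimal Python sketch, assuming the points were exported to a plain-text file and that rows pair up (1-2, 3-4, ...); the file names here are placeholders:

```python
from pathlib import Path

# The six sample points from the question, as Excel would export them.
Path("points.txt").write_text(
    "1075 7755\n1075 7541\n1075 7340\n1075 7114\n1075 6936\n1075 6738\n"
)

def make_script(in_path, out_path):
    # One point per row, two whitespace-separated numbers.
    pts = [line.split() for line in Path(in_path).read_text().splitlines()
           if line.strip()]
    with open(out_path, "w") as out:
        # Pair the rows up: 1+2, 3+4, 5+6, ...
        for (x1, y1), (x2, y2) in zip(pts[::2], pts[1::2]):
            # The trailing space is the extra Enter that ends each LINE command.
            out.write(f"LINE {x1},{y1} {x2},{y2} \n")

make_script("points.txt", "dashedlines.scr")
print(Path("dashedlines.scr").read_text())
```

Loading the generated dashedlines.scr with the SCRIPT command then draws one segment per pair of points.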
I am new to programming/coding and new to RStudio.
I am working with a dataset in RStudio, 'ethica_surveys'. Three columns within my dataset contain date, time, and time-zone data, e.g. '2018-06-15 11:49:22 CST'. I want to remove the CST from each of these columns.
I first tried this:
str_sub(ethica_surveys$schedule_time,1,str_length(ethica_surveys$schedule_time)-4)
It worked, but it only showed me the newly edited column in my console; my dataset did not change.
I then tried:
ethica_surveys <- str_sub(ethica_surveys$schedule_time,1,str_length(ethica_surveys$schedule_time)-4)
This changed the column in my dataset, but also seemed to erase all the other columns in the dataset.
I want to erase the CST (last 4 characters) in each of these three columns: schedule_time, issued_time, and response_time. I want this change to be reflected in my dataset, without erasing the other columns within the dataset. Can anyone advise as to how this could be done?
Thank you.
Assign the output of your transformation back to the column, not to the whole data frame:
ethica_surveys$schedule_time <- str_sub(ethica_surveys$schedule_time,1,str_length(ethica_surveys$schedule_time)-4)
Repeat the same assignment for issued_time and response_time. (Your second attempt erased the other columns because it assigned the single transformed vector to ethica_surveys itself, replacing the entire data frame.)
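For readers doing the same cleanup in pandas rather than R, the pattern is identical: slice the column and assign it back to itself. A sketch with a made-up one-row frame mirroring the three columns in the question:

```python
import pandas as pd

# Hypothetical frame standing in for the questioner's dataset.
ethica_surveys = pd.DataFrame({
    "schedule_time": ["2018-06-15 11:49:22 CST"],
    "issued_time":   ["2018-06-15 11:50:01 CST"],
    "response_time": ["2018-06-15 11:52:47 CST"],
})

# Same idea as the R answer: overwrite each column, not the whole frame.
for col in ["schedule_time", "issued_time", "response_time"]:
    ethica_surveys[col] = ethica_surveys[col].str[:-4]  # drop " CST"

print(ethica_surveys.loc[0, "schedule_time"])  # 2018-06-15 11:49:22
```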
I am attempting to load the following data into R using read.fwf. A sample of the data:
20100116600000100911600000014000006733839
20100116600000100912100000019600005648935
20100116600000100929100000000210000080787
20100116600000100980400000000090000000000
3010011660000010070031144300661101000
401001166000001000000001001
1010011660000020016041116664001338615001338115150001000000000000000000000000010001000100000000000000000000162002117592200001051
20100116600000200036300000001000005692222
However, the first number in each row indicates which variables are coded in that line, and the record types have different lengths, so the 'widths' vector for lines starting with 1 is different from the 'widths' vector for lines starting with 2, and so on.
Is there a way I can do this without having to read the data in four times? (Note that each case has differing numbers of rows too)
Thank you, Maria
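One way to avoid four separate passes is a single line-by-line read that dispatches on the leading record-type digit. A sketch of the idea in Python (the width tables below are invented for illustration; the real ones come from the data's codebook, and the same approach works in R with readLines plus substring):

```python
# Hypothetical width tables, one per record type, keyed by the leading digit.
WIDTHS = {
    "2": [1, 9, 10, 11, 10],
    "4": [1, 9, 9, 8],
}

def parse_line(line):
    """Slice a fixed-width line using the widths for its record type."""
    fields, pos = [], 0
    for w in WIDTHS[line[0]]:
        fields.append(line[pos:pos + w])
        pos += w
    return fields

# Two sample lines from the question (record types 2 and 4).
sample = [
    "20100116600000200036300000001000005692222",
    "401001166000001000000001001",
]
records = [parse_line(l) for l in sample]
```

Each record keeps its type digit as the first field, so the parsed rows can then be split into per-type tables.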
I have the following info in a text file.
Item Rate
pencil 2
eraser 1
laser 3
pencil 1
torch 4
eraser 1
Specifically, I want to know if any item in the above list has a different price.
For example, in the list above you can see that pencil has two rates, i.e. 2 and 1.
The price of the eraser is the same in both entries, so no problem there.
A further complication: the text file is huge.
Since dicts don't allow duplicate keys, please suggest ways to solve this problem along with an appropriate data structure.
You can use a hash table with the separate-chaining method. Hope it works.
Does the file have to be plain text? I recommend tackling this problem by using the XML format and parsing it with SAX (not DOM!). SAX will not load the entire file into memory, so it works well with huge file sizes.
As for the data structure, you could always define your own, or you could just use something like Map&lt;KeyType, List&lt;ValueType&gt;&gt;. I feel it's counter-intuitive to have different prices mapped to the same product name; you could create a unique ID for every type of product and add a new field: quantity.
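Since the question mentions dicts: you don't need duplicate keys, only set-valued entries. The item name stays the key and a set collects every distinct rate seen, processing the file one line at a time so it also copes with huge inputs. A sketch using the sample data from the question:

```python
from collections import defaultdict

# Sample data from the question; in practice, iterate over
# open("items.txt") line by line instead of this in-memory list.
lines = [
    "pencil 2", "eraser 1", "laser 3",
    "pencil 1", "torch 4", "eraser 1",
]

rates = defaultdict(set)            # item -> set of distinct rates seen
for line in lines:
    item, rate = line.split()
    rates[item].add(rate)

# Items whose price differs between entries:
conflicts = {item: vals for item, vals in rates.items() if len(vals) > 1}
print(sorted(conflicts))            # ['pencil']
```

Memory grows with the number of distinct items, not with the file size, which is usually acceptable even for very large files.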
I have a file in the format x,y that should be loaded using SQL*Loader:
x y
a b
&lt;new line&gt;
All data is loaded into table A(x,y), where x and y are varchar2 - this step passes successfully.
The next step is processing the loaded data, i.e. transforming it into the proper formats etc.
At this step I get into trouble, since column y is converted to a number (it stores numbers). Due to the new line at the end of the file, the last line is corrupted and the to_number conversion fails.
How could this be solved?
Contact the provider of the data and have them correct the process that creates the file.
Use the LOAD= parameter of SQL*Loader to restrict how many records to load: LOAD=(lines in file - 1).
Pre-process the file by stripping blank lines or removing special characters before loading.
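The third option can be a tiny pre-processing step before calling SQL*Loader. A sketch in Python (the file names and sample contents are placeholders):

```python
from pathlib import Path

# Fabricated sample: two good rows plus the troublesome trailing blank line.
Path("data_raw.csv").write_text("a,1\nb,2\n\n")

# Copy the file, dropping empty or whitespace-only lines so the trailing
# newline no longer produces a bogus row for to_number to choke on.
with open("data_raw.csv") as src, open("data_clean.csv", "w") as dst:
    for line in src:
        if line.strip():
            dst.write(line)

print(Path("data_clean.csv").read_text())
```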
I'm writing a program at work for a categorization problem.
I get data in the form of CODE, DESCRIPTION, SUB-TOTAL, for example:
LIQ013 COGNAC 25
LIQ023 VODKA 21
FD0001 PRETZELS 10
PP0502 NAPKINS 5
Now it all generally follows something like this... The problem is that my company supplies numerous different bars, so there are around 800 records a month with data like this. My boss wants the data broken down so she knows how much we spend on each category per month. For example:
ALCOHOL 46
FOOD 10
PAPER 5
What I've thought of is setting up a sort of "database", which is really a CSV text file containing entries like this:
LIQ,COGNAC,ALCOHOL
LIQ,VODKA,ALCOHOL
FD,PRETZELS,FOOD
FD,POPCORN,FOOD
I've already written code that imports the database as a worksheet and separates each field into its own column. I want Excel to look through the file and, when it sees LIQ and COGNAC, assign it the ALCOHOL designator. That way I can use a pivot table to get the category sums. For example, I want the final product to look like this:
LIQ013 COGNAC 25 ALCOHOL
LIQ023 VODKA 21 ALCOHOL
FD0001 PRETZELS 10 FOOD
PP0502 NAPKINS 5 PAPER
Does anyone have any suggestions? My worry is that matching on just the code (i.e. just LIQ, without also matching COGNAC) might cause problems later when there are conflicting descriptions. I'd also like the user to be able to add ledger entries so that the database of recognized terms grows and becomes more expansive and, hopefully, more accurate.
EDIT
As per Marc's request, I'm including my solution:
code file
Please note that this is a pretty dumbed-down solution. I removed a bunch of the fail-safes and other bits of code that were relevant to robustness but not to our particular solution.
In order to get this to work there are two parts:
the first is the macro source code
the second is the actual file
Because all the fail-safes are removed, the file needs to be imported into Excel exactly the way it appears, i.e. Sheet1 in the Google Doc should be Sheet1 in Excel; start pasting data at cell "A1". Before the macro is run, be sure to select cell "A1" in Sheet1. As I said, there are implementations in the finished product to make it more user friendly. Enjoy!
EDIT2
These links suck; they don't paste well into Excel.
If you're comfortable with it, I can email you the actual workbook, which would help preserve the formatting etc.
Use a lookup table in a separate sheet. Column A of the lookup sheet contains the lookup value (e.g. PRETZELS), Column B contains the category (FOOD, ALCOHOL, etc). In the cells where you want the category to show up in your original sheet (let's use D3 for the result where B3 holds the "PRETZELS" value), type this formula:
=VLOOKUP(B3,OtherSheet!$A$1:$B$500,2,FALSE)
That assumes that your lookup table is in range A1:B500 of a worksheet named "OtherSheet".
This formula tells Excel to find the lookup value (B3) in column A of your lookup table and return the corresponding value from column B. The absolute references (the $) ensure that the formula won't increment cell references when you copy/paste it into other cells.
When you get new categories and/or inventory, you can update your lookup table in this one place by just adding new rows to it.
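If the single-key match worries you (the asker's concern about conflicting descriptions), the lookup can key on the code prefix and the description together. A sketch, assuming an illustrative layout where the lookup sheet holds the code prefix in column A, the description in B, and the category in C:

```
=INDEX(OtherSheet!$C$1:$C$500,
       MATCH(LEFT(A3,3)&"|"&B3,
             OtherSheet!$A$1:$A$500&"|"&OtherSheet!$B$1:$B$500,
             0))
```

In older Excel versions this must be confirmed with Ctrl+Shift+Enter as an array formula; the "|" separator just keeps "LIQ"+"COGNAC" from colliding with other concatenations.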