ArangoDB: very basic first step -- how to get started with Foxx Microservices - installation

The ArangoDB documentation for the Getting Started section of Foxx Microservices begins with this paragraph:
We're going to start with an empty folder. This will be the root folder of our services. You can name it something clever but for the course of this guide we'll assume it's called the name of your service: getting-started.
My question is very basic. On a Linux system, what are the best options for the location of this folder? And what should its permissions be?
I see existing ArangoDB directories at these locations:
/var/lib/arangodb3/
/var/lib/arangodb3-apps/
/usr/share/arangodb3/
Should I place the getting-started directory under one of those locations or somewhere else?

The Foxx chapter received a structural overhaul and new content with the v3.4.0 release, so I recommend you use the 3.4 Foxx documentation.
You can put the getting-started folder anywhere, e.g. where you also put other project folders, like in ~/projects/arangodb/ or whatever suits you.
Read on in the Getting Started guide. Under the heading Try it out you will find the steps to deploy the service. ArangoDB will then place the files in the right folder, e.g. /var/lib/arangodb3-apps/_db/_system/getting-started/ (where /getting-started is the mount path, not the name of the project folder).
Also check out the guide about the Development Mode for faster iterations. You may use rsync to watch for file changes in your actual project folder and let it copy the changes over to ArangoDB's volatile Foxx app folder. This is much safer than working in the deployed folder directly (if you remove the service you would also lose your changes, and in a cluster the files may get overwritten because the service changed on another coordinator).
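A one-way sync from the project folder into the deployed service folder could look roughly like this (a sketch only; the target path follows the example above, and the exact on-disk subfolder such as APP may differ on your installation):
# copy local edits into the development-mode service folder (paths are assumptions)
rsync -av --delete ~/projects/arangodb/getting-started/ \
    /var/lib/arangodb3-apps/_db/_system/getting-started/APP/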
An overview of deployment options is also available, including Foxx CLI, which can be used to bundle the files from your project folder and deploy them as a service (foxx upgrade ...).
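A minimal foxx-cli workflow might look like this (a sketch; it assumes Node.js is available, a local ArangoDB with default connection settings, and /getting-started as the mount path you pick):
npm install --global foxx-cli
cd ~/projects/arangodb/getting-started
foxx install /getting-started .    # first deployment to the /getting-started mount path
foxx upgrade /getting-started .    # push subsequent changes from the project folder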

Related

Oracle ADF 12 Project Structure

I'm developing an ADF Fusion Web Application in JDeveloper 12. After the creation of the project I took a look at the file system and a bunch of directories were created.
Can anyone tell me what the .adf folder is good for? I can't find anything about it in the Oracle Docs. I'm developing with git and I'd like to know if I have to version this directory, too.
Thanks in advance!
Inside the above-mentioned folder you can find two files: adf-config.xml and connections.xml. For an overview of their usage you can take a look at these links:
Oracle ADF XML File Appendix and Web Center. Both of them state that application-level settings are stored there at design time and can be used later during the deployment process (so it seems quite important :) ). Even if you delete that folder it should be recreated if you make any changes and redeploy the application, BUT, if it is there it means it should be there (typical Oracle politics ;) ). And since you may well need those settings later (for example to modify connection details to point to production server instances), the folder should be versioned as well.
I'm using svn, and it does version it automatically.
Hope this helps.
ADF creates several files and folders that are needed by the project.
It creates them when the project uses a functionality that needs those files.
In .adf/META-INF/ you can find adf-config.xml & connections.xml which are Application Level Settings.
But for example src/META-INF/jazn-data.xml doesn't exist until you enable Security on your application. This file is also needed and should be on SVN/Git.
ADF also creates some temporary files and folders that shouldn't be on Git/SVN.
Like: .data/.
Depending on what technologies you use from the ADF stack (ADF BC, ADF Model, ADF Controller, ADF Faces), you should understand what files and folders are created.
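To tie that together, an ignore list for the workspace root might look roughly like this (a sketch; .data/ comes from the answer above, while the classes/ and deploy/ output folders are assumed JDeveloper defaults you should verify against your own project):
# .gitignore sketch for a JDeveloper ADF workspace
.data/
# assumed build output folders; keep .adf/ and src/META-INF/jazn-data.xml versioned
**/classes/
**/deploy/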
If you had searched for .adf/ in the official documentation you would have found your answers.
By default ADF creates the .adf and .data folders. The .adf folder, you could say, holds various info related to your workspace in the IDE, i.e. the connections it has with the database and the META-INF info used for customization purposes.
The .data folder supports the MDS functionality.
We can always delete them, but JDeveloper will create them again automatically whenever we rebuild the application.

ClickOnce Error "different computed hash than specified in manifest" when transferring published files

I am in an interesting situation where I maintain the code for a program that is used and distributed primarily by our sister company. We are ready to distribute the program to all of the 3rd party users and, since it is technically our sister company's program, we want to host it on their website. (In the interest of anonymity, I'll use 'program' everywhere instead of the actual application name, and 'www.SisterCompany.com' instead of their actual URL.)
So I get everything ready to go, set up the Publish settings to check for updates at program start and the minimum required version, and I set the Installation Folder URL and Update Location to "http://www.SisterCompany.com/apps/program/", with the actual Publishing Folder Location as "C:\LocalProjects\Program\Publish\". Everything else is pretty standard.
After publish, I confirm that everything installs and works correctly when running directly from the publish location on my C: drive. So I put everything on our FTP server, and the guy at our sister company pulls it down and places everything in the '/apps/program/' directory on their webserver.
This is where it goes bad. When I try to install it from their site, I get the "File, Program.exe.config, has a different computed hash than specified in manifest" error. I tested it a bit, and I even get that error trying to install from any network location on our network other than my local C: drive.
After doing the initial publish in visual studio, I have changed no files (which is the answer/reason I've found by doing some searching about this error).
What could be causing this? Is it because I set the Installation Folder URL to a location that it isn't initially published to?
Let me know if any additional info is needed.
Thanks.
After bashing my head against this all weekend, I have finally found the answer. After unsigning the project and removing the hash on the offending file (an XML file), I got the program to install, but it was giving me 'Windows Side by Side' errors. I drilled down into the app cache where the file was, and instead of a .config XML file it was one of the HTML files from the website the ClickOnce installer was hosted on. Turns out that the web server didn't seem to like serving up an .xml (or .mdb, it turns out) file.
This MSDN article ended up giving me the final solution:
I had to make sure that the 'Use ".deploy" file extension' was selected so that the web server wouldn't mangle files with extensions it didn't like.
I couldn't figure out why that one file's hash would be different. Turns out it wasn't even the same file at all.
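If the hosting server happens to be IIS, the server-side half of that fix is registering the ClickOnce MIME types so the raw files are served untouched; a web.config sketch (whether you still need the .deploy extension depends on how locked down the server is):
<!-- MIME types commonly registered for ClickOnce hosting on IIS (sketch) -->
<system.webServer>
  <staticContent>
    <mimeMap fileExtension=".application" mimeType="application/x-ms-application" />
    <mimeMap fileExtension=".manifest" mimeType="application/x-ms-manifest" />
    <mimeMap fileExtension=".deploy" mimeType="application/octet-stream" />
  </staticContent>
</system.webServer>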
It is possible that one of the FTP transfers is happening in text mode, rather than binary?
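One way to rule that out is to force binary (image) mode explicitly before transferring (a sketch with a plain command-line ftp client; host and file name are taken from the question):
ftp www.SisterCompany.com
ftp> binary                     # switch from ASCII to binary mode
ftp> put Program.exe.config     # re-upload the published files
ftp> bye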
For me the problem was that .config transformations were done after generating the manifest.
To anyone else who's still having trouble, five years later:
The first problem was configuring the MIME type, which on nginx (/etc/nginx/mime.types) should look like this:
application/x-ms-manifest application
See Click Once Server and Client Configuration.
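For reference, entries in /etc/nginx/mime.types sit inside the types { ... } block and map a MIME type to one or more file extensions; a fuller sketch for ClickOnce files might be (the exact extension list is an assumption):
# inside types { ... } in /etc/nginx/mime.types
application/x-ms-application    application;
application/x-ms-manifest       manifest;
application/octet-stream        deploy;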
The weirder problem to me was that I was using git to handle the push to the server, i.e.
git remote add live ssh://user@mybox/path/to/publish
git commit -am "committing...";git push live master
Works great for most things, but it was probably being registered as a "change," which prevented the app from installing locally. Once I started using scp instead:
scp -r * user@mybox:/path/to/dir/
It worked without a hitch.
It is unfortunate that there is not a lot of helpful information out there about this.

How to manage split Web Application project

I've got a bit of an interesting project layout question for you all. I'm really not sure how to handle this, so I'm hoping someone here has a bright idea.
Basically, I have a Web Application that for the most part, is the same per customer (except the configuration files). There are certain files that are different for each and every customer (javascript, css, sql snippets), and managing all this is a bit of a pain with the setup we have now.
Currently, we have all these customised files sitting in the SVN repo and when someone comes to make changes, they first check out the core project (Web Forms Application consisting of pages, C# classes, javascript files, global styles, images, etc.) into a new working directory, then check out the customised files and export them into the working directory of the core project. Once they've finished making changes they will commit any changes that happened in the core project (bug fixes, changes/features other customers will want), and then copy back the customised files to the customised working directory and commit that. Needless to say, it's a major pain and things get missed.
My end goal is to have a single working directory for the core project (excluding any special development branches), and a customised project per customer. The developer would have a solution file that has both of these projects as part of it, they can make changes to either, and commits would go to the right repos.
Now, I could set up a multi project solution that looks like this, but functionally I have no idea what to do. How can I run the core project and have all the customised files available? How do we deploy to a testing or production server? How do we handle things like the web config that are only partially customised (e.g. connection strings)?
If anyone has worked on something like this, or has an idea on how we could set up a project like this, it would be greatly appreciated.
OK, my workplace experienced the same problem before. We ended up using branches.
Say you have 2 projects in one directory (main), and now you have another client coming in.
Simply branch this directory (main) to another directory (clientA). Now in directory clientA you should have the same 2 projects. Do your daily development as usual, but merge the changes in the core project back to the main branch (see the command sketch after the layout below).
---Main
----Core
----CustomisedSkeleton
---ClientA (branch)
----Core (do merge back later)
----ClientACustomised (client based code, do not do any merge in)
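In command-line terms, the branch setup could look roughly like this (a sketch; the repository URL and layout are assumptions based on the structure above):
# create the client branch from main and check it out
svn copy http://svnserver/repo/main http://svnserver/repo/clientA -m "Branch for ClientA"
svn checkout http://svnserver/repo/clientA clientA
# later, merge core fixes from the branch back into main's Core project
cd main/Core
svn merge http://svnserver/repo/clientA/Core
svn commit -m "Merge core fixes from ClientA branch"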
One more thing: since your work is a web application, you need to update each branch's web project file to bind it to a different URL on your local IIS.
Say the main web application's URL is "http://yourmachine.com/main/login.aspx",
and your customised branch's web application URL is "http://yourmachine.com/clientA/login.aspx".

Maven reads user configuration from wrong location

I just discovered why Maven doesn't work properly on my machine. For some reason it reads the user configuration from the completely wrong location. And I don't understand why. When I run maven with the -X switch I get the following output in the beginning:
[DEBUG] Reading global settings from D:\dev\maven\active\conf\settings.xml
[DEBUG] Reading user settings from D:\.m2\settings.xml
[DEBUG] Using local repository at D:\dev\maven_repo
Why is it reading user settings from D:\.m2 and not my actual user directory like it normally should? It worked fine on my old computer. Does it have something to do with me having installed maven on a different drive this time? On my old computer it was installed on the C drive.
Where does it get this D:\.m2 from? How can I make it read the user settings file from the actual default location, %userprofile%\.m2?
Finally figured it out. Found the solution in this blog post. To find the home directory in Java you do this:
System.getProperty("user.home");
Problem is, for some dumb reason, Java isn't using Windows environment variables or anything like that to find this path. It actually uses the parent directory of the Desktop directory. Since I like to keep certain main folders from my user directory on a separate drive (documents, downloads, music, desktop, etc.), I had moved the desktop directory to D:\Desktop. Java then takes that directory, goes one level up, and makes Maven and other Java applications think D:\ is my home directory.
Gotta say, the more I use Java the more I hate it... Anyway, hopefully this might help save some hours of head scratching for someone else too.
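Until the underlying issue is fixed, a couple of workarounds are possible (a sketch for a Windows command prompt; the user name and paths are placeholders):
rem override user.home for the Maven JVM so it resolves .m2 under the real profile
set MAVEN_OPTS=-Duser.home=C:\Users\yourname
mvn clean install
rem or point Maven at the settings file explicitly
mvn -s %USERPROFILE%\.m2\settings.xml clean install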
Update
The original blog post is gone, but it can be found on the Wayback Machine (the URL has been updated). Here's the gist from that post in case that goes too...
The issue: So how does Java play into all of this? Well, Java
developers sometimes want to store settings for their applications in
a folder within the user’s profile directory. It’s the Linux way, and
Java tends to do things the Linux way. (As mentioned earlier, Windows’
“AppData” folder serves the same purpose, with some extra separation
for data dependent on whether or not it should roam with the user’s
profile.) For some reason, Java does not use the Windows environment
variable to determine the location of the user’s profile, but instead
accesses a registry key that references the user’s desktop folder. It
then takes the parent directory of the desktop and assumes that is the
user’s profile folder (assuming the user makes use of the default
setup Windows chooses).
Essentially, when a programmer calls the Java command:
System.getProperty("user.home");
Java uses the following idea to determine where my user profile folder
is:
PATH_TO_DESKTOP_FOLDER_AS_SET_IN_THE_REGISTRY + "\..\"
This breaks down when the desktop folder has been modified.
So, with my setup, instead of saving settings at:
c:\users\tim\
Java apps tend to save data to:
t:\tim\
In reality, Java apps should save settings to:
c:\users\tim\AppData\Roaming\
or something like that.
To add insult to injury, the Java apps continue to follow the Linux
way and use a period at the beginning of the folder name in an attempt
to “hide” the folder (as is done on Linux). For Windows users, this
simply ensures these folders are listed first in directory listings.
(Hiding a folder on Windows is achieved through setting the hidden
attribute for the file.)
It looks like NetBeans has addressed the issue for their application,
but the root issue remains an unresolved, low priority bug. Somehow
I’d bet it would get fixed a lot faster if the mechanism for
determining the user’s home path on Linux was wrong.

How to tackle machine-dependent configuration with SVN and VS2010?

To start with some background, I am a member of a small team developing an ASP.NET application. In addition to us, there are 2 other teams working on it, all from different countries. Source code is hosted on a shared SVN server but there is no central testing environment. Each developer runs the app on their own machine and data services are set up per team.
Unfortunately our SVN workflow has some gaps in it: annoyances arise whenever it is time for an SVN update.
It is mainly because each developer and team has a slightly different environment in terms of disk directory structure and configuration (both IIS and the app itself). Hence conflicts arise in configuration files and elsewhere that in essence are not conflicts at all, both in runtime configuration (XML) and in *.suo files.
How should we handle this if our objective is to keep checkout, app setup and update as painless as possible?
One option would obviously be master copies. Another would be establishing uniformity in developer environments and keeping it that way. But what about a third alternative?
One thing to do is to not put the .suo files into SVN; there's no reason to do that.
For IIS configuration there should be no argument - uniform environment across the build team.
For app.config files and the like, I tend to keep them in a separate "cfg" directory in the root of the project and use pre-build events to copy in the relevant ones I need depending on the project and environment I'm working on.
You could have a separate build task to copy user-specific config into your output directory. Add a new directory in your root project called "user.config" or something, and leave it empty. Then configure your project build to check this for entries and copy them to the output directory. This is easy to do, and then each dev can have their own config without affecting the master copies. Just make sure you have an ignore pattern on that folder so you don't commit user-specific configuration. If you have svnadmin access to your source code repo, you could set a hook to prevent it from ever happening.
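As an illustration, a pre-build event along these lines copies the per-environment and per-user files in before each build (a sketch; the cfg layout, file names and use of $(ConfigurationName) are assumptions you would adapt to your own project):
rem Project Properties > Build Events > Pre-build event command line (sketch)
xcopy /Y "$(ProjectDir)cfg\$(ConfigurationName)\ConnectionStrings.config" "$(ProjectDir)"
xcopy /Y "$(ProjectDir)user.config\*.config" "$(TargetDir)"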
Also set ignore patterns on your root directory (recursively) for .suo, .user, _ReSharper or any other extensions you think are pertinent. There are some SO questions already on exactly this topic:
Best general SVN Ignore Pattern?
Ignore *.suo and *.user files in SVN; it is easy. After that, create two types of config files in Subversion, Development and Server (add Test as well if you use it). See the example below.
ConnectionStringDevelopment.config
ConnectionStringServer.config
AppSettingsDevelopment.config
AppSettingsServer.config
The Server files contain the server information. The Development files are not stored in SVN and are ignored there. Every new developer starts by copying the Server files and changing them according to his environment.
Look at the following example site:
http://code.google.com/p/karkas/source/browse/trunk/Karkas.Ornek/WebSite/web.config
The following lines are of interest:
<appSettings configSource="appSettingsDevelopment.config"/>
<connectionStrings configSource="ConnectionStringsDevelopment.config" />
configSource can be used almost everywhere in web.config, so every piece of configuration can differ per developer. Just stick to the naming convention and ignore *Development.config in Subversion. This way no developer-specific config will be added to Subversion.
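Setting that ignore pattern could look roughly like this (a sketch; the WebSite directory name is taken from the linked example and may differ in your project):
svn propset svn:ignore "*Development.config" WebSite/
svn commit WebSite/ -m "Ignore developer-specific config files"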
It's not a perfect solution (and should only be used if there are not many of those special files), but what I do is add fake files for each case and switch the real file locally to one of them.
In detail: I have a file foo that creates the problem. I also create foo_1 and foo_2 and then locally switch foo to foo_1 (I use TortoiseSVN, so I can't really give you the command line to do that). Then I am working on foo on my machine, but actually committing to foo_1. Other parties could then switch to foo_2...
(I admit this is basically a variant of the master-file approach you suggested yourself, but if there are not many actual changes to those files this at least reduces the number of conflicts you have to think about.)
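For completeness, the command-line equivalent of that TortoiseSVN switch would presumably be something like this (an assumption on my part; the repository URL is a placeholder):
# point the working-copy file foo at the repository file foo_1
svn switch http://svnserver/repo/trunk/foo_1 foo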
