Astyle does not work on Windows

I just downloaded Astyle from SourceForge. When I execute Astyle.exe in /bin, it says:
Cannot convert to multi-byte string, reverting to English.
I don't know what happened.
I found a similar question, but that one is about Astyle on OS X.
Here is the source code related to the error. I don't know what the second line means.
// Not all compilers support the C++ function locale::global(locale(""));
// For testing on Windows change the "Region and Language" settings or use AppLocale.
// For testing on Linux change the LANG environment variable: LANG=fr_FR.UTF-8.
// setlocale() will use the LANG environment variable on Linux.
char* localeName = setlocale(LC_ALL, "");
if (localeName == NULL)     // use the english (ascii) defaults
{
    fprintf(stderr, "\n%s\n\n", "Cannot set native locale, reverting to English");
    setTranslationClass();
    return;
}
Finally, please feel free to correct my English.

Add the following include to both ASLocalizer.cpp and astyle_main.cpp:
#include <locale.h>

Where does bash prompt escape sequence \h get the hostname from?

\h is a bash prompt escape sequence that expands to the hostname. Where does it get the hostname from? On my system it shows a value that I cannot find anywhere, not in hostname -f or /etc/hosts or /etc/hostname or /etc/sysconfig/network or $HOSTNAME. So I'm wondering where it's getting it from. My system is CentOS 7.4. I know there are hidden places where things such as UUIDs are stored, and I seem to recall that I've come across a similar hidden hostname type of issue in the past, but I can't remember the details.
If you look at the bash source code you'll see in shell.c that it calls gethostname(2), a POSIX system call that retrieves the hostname from the kernel.
/* It's highly unlikely that this will change. */
if (current_host_name == 0)
  {
    /* Initialize current_host_name. */
    if (gethostname (hostname, 255) < 0)
      current_host_name = "??host??";
    else
      current_host_name = savestring (hostname);
  }
This is not necessarily a canonical string. The kernel doesn't actually know the machine's network hostname. It just reports back whatever was passed to sethostname(2). To quote from the uname(2) man page:
On the other hand, the [hostname] is meaningless: it gives the name of the present machine in some undefined network, but typically machines are in more than one network and have several names. Moreover, the kernel has no way of knowing about such things, so it has to be told what to answer here.
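For illustration, here is a minimal C sketch (not bash's code, just an assumed standalone example) that prints the same nodename the kernel reports:

#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname ut;
    if (uname(&ut) == 0)
        printf("nodename: %s\n", ut.nodename);   /* whatever sethostname(2) last set */
    return 0;
}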
On non-Linux systems without gethostname(2), bash falls back to uname(2). If uname(2) isn't even available then it simply displays "unknown". You can see that logic in lib/sh/oslib.c:
#if !defined (HAVE_GETHOSTNAME)
#  if defined (HAVE_UNAME)
#    include <sys/utsname.h>
int
gethostname (name, namelen)
     char *name;
     int namelen;
{
  int i;
  struct utsname ut;

  --namelen;

  uname (&ut);
  i = strlen (ut.nodename) + 1;
  strncpy (name, ut.nodename, i < namelen ? i : namelen);
  name[namelen] = '\0';
  return (0);
}
#  else /* !HAVE_UNAME */
int
gethostname (name, namelen)
     char *name;
     int namelen;
{
  strncpy (name, "unknown", namelen);
  name[namelen] = '\0';
  return 0;
}
#  endif /* !HAVE_UNAME */
#endif /* !HAVE_GETHOSTNAME */
\h isn't updated if the hostname changes. The value is cached at startup when the shell is initialized.
[jkugelman@malkovich]$ hostname
malkovich
[jkugelman@malkovich]$ sudo hostname kaufman
[jkugelman@malkovich]$ hostname
kaufman
[jkugelman@malkovich]$ bash
[jkugelman@kaufman]$
It probably (just a guess) uses the gethostname(2) system call (which is handled by the kernel, as all syscalls(2) are).
BTW, GNU bash is, like most packages in your Linux distribution, free software, so you can download its source code and study it: use the source, Luke!
A more interesting question is how that information is cached by bash. Does it call gethostname at every command? You might use strace(1) to find out.
Of course, make a habit of studying the source code of free software whenever you are curious, and use strace (and the gdb debugger) to understand its dynamic behavior.
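For example, a quick strace session along these lines (the log file name and commands are arbitrary) would show whether bash asks for the hostname once at startup or again for every prompt:

# Trace an interactive bash run, following child processes,
# then look for hostname-related syscalls in the log.
strace -f -o /tmp/bash.trace bash -ic 'echo hi; echo bye'
grep -E 'uname|gethostname|sethostname' /tmp/bash.trace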
A French singer, G. Bedos, told us "La liberté ne s'use que si on ne s'en sert pas", that is:
Freedom only wears out if you don't use it.
(translation is mine, I am French but not a native English speaker)
So next time, please dive into the source code of free software. It is important to exercise your freedom, and that is what free software is about.

DYLD_INSERT_LIBRARIES ignored when calling application through bash

For my application, I am using DYLD_INSERT_LIBRARIES to switch libraries. I am running OS X El Capitan.
If I set these environment variables in my shell:
export PYTHONHOME=${HOME}/anaconda
export DYLD_INSERT_LIBRARIES=${HOME}/anaconda/lib/libpython2.7.dylib:${HOME}/anaconda/lib/libmkl_rt.dylib
If I launch my application directly, it works properly. However, if I call it through a bash script I have written, DYLD_INSERT_LIBRARIES is ignored.
If I add the same two lines to my bash script, my application works again.
It looks like DYLD_INSERT_LIBRARIES is being unset when the bash script is called, as shown by this test script:
#!/bin/bash
set -e
echo ${DYLD_INSERT_LIBRARIES}
Is there any way to let the bash script inherit and pass down DYLD_INSERT_LIBRARIES?
This is a security feature of recent macOS versions.
The system bash executable has been marked as "restricted", disabling the DYLD_* features. To work around this, you can make a copy of bash and use that instead.
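A minimal sketch of that workaround (the paths are illustrative assumptions, not part of the original answer):

# Copy the system bash somewhere outside the SIP-protected directories;
# the copy is not treated as restricted, so it keeps DYLD_* variables.
mkdir -p "$HOME/bin"
cp /bin/bash "$HOME/bin/bash-copy"
export DYLD_INSERT_LIBRARIES="$HOME/anaconda/lib/libpython2.7.dylib"
"$HOME/bin/bash-copy" ./myscript.sh   # the script now sees DYLD_INSERT_LIBRARIES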
Looking for the following details in the various dyld implementations, I see that this restriction goes back at least to OS X 10.6.
In the macOS 10.13 dyld implementation this logic is in pruneEnvironmentVariables, with the comment:
// For security, setuid programs ignore DYLD_* environment variables.
// Additionally, the DYLD_* environment variables are removed
// from the environment, so that any child processes don't see them.
However the actual logic to set the restriction is in configureProcessRestrictions:
// any processes with setuid or setgid bit set or with __RESTRICT segment is restricted
if ( issetugid() || hasRestrictedSegment(mainExecutableMH) ) {
    gLinkContext.processIsRestricted = true;
}
...
if ( csops(0, CS_OPS_STATUS, &flags, sizeof(flags)) != -1 ) {
    // On OS X CS_RESTRICT means the program was signed with entitlements
    if ( ((flags & CS_RESTRICT) == CS_RESTRICT) && usingSIP ) {
        gLinkContext.processIsRestricted = true;
    }
    // Library Validation loosens searching but requires everything to be code signed
    if ( flags & CS_REQUIRE_LV ) {
        gLinkContext.processIsRestricted = false;
        ...
As you can see, it depends on issetugid(), hasRestrictedSegment(), and the CS_RESTRICT / SIP entitlements. You might be able to test for restricted status directly, or you could probably construct a function to test for these conditions yourself based on this information.
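As a rough sketch of the first of those conditions (only a partial, hypothetical check: the __RESTRICT segment and code-signing flags require private dyld interfaces and are not covered here):

/* restricted_hint.c - covers only the issetugid() condition shown above. */
#include <stdio.h>
#include <unistd.h>   /* issetugid() on macOS/BSD */

int main(void)
{
    if (issetugid())
        printf("setuid/setgid process: dyld will ignore DYLD_* variables\n");
    else
        printf("not setuid/setgid: DYLD_* may still be pruned by SIP/code signing\n");
    return 0;
}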

How to check if a specific program (shell command) is available on a Linux in D?

I am trying to write a script-like D program that behaves differently based on the availability of certain tools on the user's system.
I'd like to test whether a given program is available from the command line (in this case unison-gtk) or whether it is installed (I only care about Ubuntu systems, which use apt).
For the record, there is a workaround using e.g. tryRun:
bool checkIfUnisonGTK()
{
    import scriptlike;
    return tryRun("unison-gtk -version") == 0;
}
Instead of tryRun, I propose you grab the PATH environment variable, parse it (it is trivial to parse), and look for the specific executable inside those directories:
module which1;

import std.process;   // environment
import std.algorithm; // splitter
import std.file;      // exists
import std.stdio;

/**
 * Use this function to find out whether a given executable exists or not.
 * It behaves like the `which` command in the Linux shell.
 * If the executable is found, it returns the absolute path to it,
 * otherwise an empty string.
 */
string which(string executableName) {
    string res = "";
    auto path = environment["PATH"];
    auto dirs = splitter(path, ":");
    foreach (dir; dirs) {
        auto tmpPath = dir ~ "/" ~ executableName;
        if (exists(tmpPath)) {
            return tmpPath;
        }
    }
    return res;
} // which() function

int main(string[] args) {
    writeln(which("wget"));         // output: /usr/bin/wget
    writeln(which("non-existent")); // output:
    return 0;
}
A natural improvement to the which() function is to check whether tmpPath is actually executable, and return only when it has found an executable file with the given name, as in the sketch below.
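A minimal sketch of that check on POSIX systems, using access(2) with X_OK (the helper name is illustrative, not part of the original answer):

// POSIX-only executability check for a candidate path.
import core.sys.posix.unistd : access, X_OK;
import std.string : toStringz;

/// Returns true if `path` exists and the current user may execute it.
bool isExecutable(string path) {
    return access(path.toStringz, X_OK) == 0;
}

// Inside which(), the test then becomes:
//     if (exists(tmpPath) && isExecutable(tmpPath)) { return tmpPath; }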
There can't be any «native D solution» because you are trying to detect something in the system environment, not inside your program itself. So no solution will be «native».
By the way, if you are really concerned about Ubuntu only, you can parse the output of the command dpkg --status unison-gtk. But for me it prints that package 'unison-gtk' is not installed and no information is available (I suppose I don't have some repo enabled that you have). So I think that C1sc0's answer is the most universal one: you should try to run which unison-gtk (or whatever command you want to run) and check if it prints anything. This way works even if the user has installed unison-gtk from somewhere other than a repository, e.g. has built it from source or copied a binary directly into /usr/bin, etc.
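For reference, a short sketch of that approach in D (the command name is just an example):

// Ask the system `which` whether a command is on PATH.
import std.process : execute;
import std.string : strip;

bool commandAvailable(string name) {
    auto r = execute(["which", name]);   // runs the program directly, no shell
    return r.status == 0 && r.output.strip.length > 0;
}

// e.g. if (commandAvailable("unison-gtk")) { ... }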
Linux command to list all available commands and aliases
In short: run auto r = std.process.executeShell("compgen -c"). Each line in r.output is an available command. Requires bash to be installed, since compgen is a bash builtin.
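A hedged sketch of that idea; because compgen is a bash builtin, invoking bash explicitly is safer than relying on the default /bin/sh:

// List all commands known to bash and check membership.
import std.process : execute;
import std.algorithm : canFind;
import std.string : splitLines;

bool knownToBash(string name) {
    auto r = execute(["bash", "-c", "compgen -c"]);
    return r.status == 0 && r.output.splitLines.canFind(name);
}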
man which
man whereis
man find
man locate

Get current system locale encoding in Perl on Windows

I need to get the current encoding according to the system locale settings. I'm looking for a function working like this:
my $sysEncoding = getSystemEncoding();
#and now $sysEncoding equals e.g. 'windows-1250'
I have looked everywhere on the internet and only found the module PerlIO::locale, but I think the system encoding should be detectable more easily, without additional modules.
Encode::Locale provides the means to handle this.
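For example, a small sketch using Encode::Locale's documented package variables (the exact values depend on your system settings):

use strict;
use warnings;
use Encode::Locale;   # determines the locale encodings at load time

# $Encode::Locale::ENCODING_LOCALE holds the current locale code page,
# e.g. "cp1250" on a system using windows-1250.
print "Locale encoding: $Encode::Locale::ENCODING_LOCALE\n";
print "Console input:   $Encode::Locale::ENCODING_CONSOLE_IN\n";
print "Console output:  $Encode::Locale::ENCODING_CONSOLE_OUT\n";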
use Win32::API;

# GetACP() returns the numeric ANSI code page, e.g. 1250 for windows-1250.
if (Win32::API->Import('kernel32', 'int GetACP()')) {
    my $enc = GetACP();
    print "Current local encoding is '$enc'\n";
}
Thanks to ikegami for the hint.

How can I include Win32 modules only when I'm running my Perl script on Windows?

I have a problem that I cannot seem to find an answer to.
With Perl I need to use a script across Windows and Unix platforms. The problem is that on Windows we use Win32-specific modules like Win32::Process, and those modules do not exist on Unix.
I need a way to include those Win32 modules only on Windows.
if ($^O =~ /win/i)
{
    use win32::process qw(CREATE_NEW_CONSOLE);
}
else
{
    # unix fork
}
The problem lies in that use statement for Windows. No matter what I try, this does not compile on Unix.
I have tried using dynamic evals, requires, BEGIN, etc.
Is there a good solution to this problem? Any help will be greatly appreciated.
Thanks in advance,
Dan
Update:
A coworker pointed out to me that this is the correct way to do it:
require Win32;
require Win32::Process;

my $flag = Win32::Process::CREATE_NEW_CONSOLE();

Win32::Process::Create($process,
                       $program,
                       $cmd,
                       0,
                       $flag,
                       ".") || die ErrorReport();

print "Child started, pid = " . getPID() . "\n";
Thank you all for your help!
Dan
use is executed at compile time.
Instead do:
BEGIN {
    if ($^O eq 'MSWin32') {
        require Win32::Process;

        # import Win32::Process qw(CREATE_NEW_CONSOLE);
        Win32::Process->import(qw/ CREATE_NEW_CONSOLE /);
    }
    else {
        # unix fork
    }
}
See the perldoc for use.
Also see perlvar on $^O.
Update:
As Sinan Unur points out, it is best to avoid indirect object syntax.
I use direct method calls in every case except calls to import, probably because import masquerades as a built-in. Since import is really a class method, it should be called as a class method.
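To make the contrast concrete (both lines load the same symbol; the arrow form is what the BEGIN block above uses):

# Indirect object syntax: can be misparsed if an import() sub
# happens to exist in the current package.
import Win32::Process qw(CREATE_NEW_CONSOLE);

# Direct class-method call: always unambiguous, preferred.
Win32::Process->import(qw(CREATE_NEW_CONSOLE));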
Thanks, Sinan.
Also, on Win32 systems, you need to be very careful to get the capitalization of your module names correct. Incorrect capitalization means that symbols won't be imported properly, and it can get ugly: use win32::process may appear to work fine.
Are you sure win32::process can be loaded on OSX? "darwin" matches your /win/i.
You may want to use http://search.cpan.org/dist/Sys-Info-Base/ which tries to do the right thing.
That aside, can you post an example of the code you are actually using, the failure message you're receiving, and which Unix platform (uname -a)?
What about a parser that modifies the file on each OS?
You could parse your Perl file via a configure script that works on both operating systems to output Perl with the proper use statements. You could even bury the parse action in the executable script that launches the code.
Originally I was thinking preprocessor directives from C would do the trick, but I don't know Perl very well.
Here's an answer to your second set of questions:
Are you using strict and warnings?
Did you define an ErrorReport() subroutine? ErrorReport() is just an example in the synopsis for Win32::Process.
CREATE_NEW_CONSOLE is probably not numeric because it didn't import properly. Check the capitalization in your call to import.
Compare these one-liners:
C:\>perl -Mwin32::process -e "print 'CNC: '. CREATE_NEW_CONSOLE;"
CNC: CREATE_NEW_CONSOLE
C:\>perl -Mwin32::process -Mstrict -e "print 'CNC: '. CREATE_NEW_CONSOLE;"
Bareword "CREATE_NEW_CONSOLE" not allowed while "strict subs" in use at -e line 1.
Execution of -e aborted due to compilation errors.
C:\>perl -MWin32::Process -e "print 'CNC: '. CREATE_NEW_CONSOLE;"
CNC: 16
You could just place your platform-specific code inside an eval{} and check for an error.
BEGIN {
    eval {
        require Win32::Process;
        Win32::Process->import(qw'CREATE_NEW_CONSOLE');
    };
    if ($@) {   # $@ is $EVAL_ERROR
        # Unix code here
    }
}
