Is there a port of libfaketime to OSX? - macos

Is there a port of libfaketime to OSX? http://www.code-wizards.com/projects/libfaketime/

Ok, so I ported it.
http://github.com/dbrashear/libfaketime/

On freshmeat libfaketime lists its platforms as Linux and POSIX. Since OSX is listed as fully POSIX compliant, it should be perfectly usable as-is.
EDIT
If clock_gettime is the only problematic function and you're feeling foolhardy, you could try this little hack:
#if _POSIX_TIMERS > 0
    clock_gettime(CLOCK_REALTIME, &tp);
#else
    /* Fall back to gettimeofday(2) and convert to a timespec. */
    struct timeval tv;
    gettimeofday(&tv, NULL);
    tp.tv_sec = tv.tv_sec;
    tp.tv_nsec = tv.tv_usec * 1000;
#endif


Code::Blocks failed message running hello code

I'm trying to run this code. I'm a beginner at this and really struggling; I don't know what to do.
NB: A "MinGW" version of Code::Blocks was used here.
#include <stdio.h>
#include <omp.h>

int main() {
    printf("Hello, world:");
#pragma omp parallel
    printf(" %d", omp_get_thread_num());
    printf("\n");
    return 0;
}
Change a debugger setting in the following way.
From the Settings menu click on Debugger..., then on the left of the screen select Default, a debugger type. This should show Executable paths set to:
C:\MinGW\bin\gdb.exe
Change this to:
C:\Program Files\Codeblocks\MinGW\bin\gdb.exe
You should then be able to use the debugger.
NB: Other Information
The above type of fix is required if you install the "MinGW" version of 'Code::Blocks 20.03' on Windows using 'codeblocks-20.03mingw-setup.exe' (the version current as of 8th April 2021, with the default installation type).

Where does bash prompt escape sequence \h get the hostname from?

\h is a bash prompt escape sequence that expands to the hostname. Where does it get the hostname from? On my system it shows a value that I cannot find anywhere, not in hostname -f or /etc/hosts or /etc/hostname or /etc/sysconfig/network or $HOSTNAME. So I'm wondering where it's getting it from. My system is CentOS 7.4. I know there are hidden places where things such as UUIDs are stored, and I seem to recall that I've come across a similar hidden-hostname type of issue in the past, but I can't remember the details.
If you look at the bash source code you'll see in shell.c that it calls gethostname(2), a POSIX system call that retrieves the hostname from the kernel.
/* It's highly unlikely that this will change. */
if (current_host_name == 0)
  {
    /* Initialize current_host_name. */
    if (gethostname (hostname, 255) < 0)
      current_host_name = "??host??";
    else
      current_host_name = savestring (hostname);
  }
This is not necessarily a canonical string. The kernel doesn't actually know the machine's network hostname. It just reports back whatever was passed to sethostname(2). To quote from the uname(2) man page:
On the other hand, the [hostname] is meaningless: it gives the name of the present machine in some undefined network, but typically machines are in more than one network and have several names. Moreover, the kernel has no way of knowing about such things, so it has to be told what to answer here.
On non-Linux systems without gethostname(2), bash falls back to uname(2). If uname(2) isn't even available then it simply displays "unknown". You can see that logic in lib/sh/oslib.c:
#if !defined (HAVE_GETHOSTNAME)
#  if defined (HAVE_UNAME)
#    include <sys/utsname.h>
int
gethostname (name, namelen)
     char *name;
     int namelen;
{
  int i;
  struct utsname ut;

  --namelen;

  uname (&ut);
  i = strlen (ut.nodename) + 1;
  strncpy (name, ut.nodename, i < namelen ? i : namelen);
  name[namelen] = '\0';
  return (0);
}
#  else /* !HAVE_UNAME */
int
gethostname (name, namelen)
     char *name;
     int namelen;
{
  strncpy (name, "unknown", namelen);
  name[namelen] = '\0';
  return 0;
}
#  endif /* !HAVE_UNAME */
#endif /* !HAVE_GETHOSTNAME */
\h isn't updated if the hostname changes. The value is cached at startup when the shell is initialized.
[jkugelman@malkovich]$ hostname
malkovich
[jkugelman@malkovich]$ sudo hostname kaufman
[jkugelman@malkovich]$ hostname
kaufman
[jkugelman@malkovich]$ bash
[jkugelman@kaufman]
It probably (just a guess) uses the gethostname(2) system call (which is handled by the kernel, as all syscalls(2) are).
BTW, GNU bash is free software, as most packages of your Linux distribution are; so please download its source code and study it. Use the source, Luke! And open the source more, please.
A more interesting question is how that information is cached by bash. Does it call gethostname at every command? You could use strace(1) to find out.
Of course, make a habit of studying the source code of free software every time you are curious. And use strace (and the gdb debugger) to understand its dynamic behavior.
A French singer, G.Bedos, told us "La liberté ne s'use que si on ne s'en sert pas", that is
Freedom wears out if you don't use it.
(translation is mine, I am French but not a native English speaker)
So next time, please dive into the source code of free software. It is important to exercise your freedom, and that is what free software is about.

Astyle does not work in Windows

I just downloaded Astyle from SourceForge. When I execute Astyle.exe in /bin, it says
Cannot convert to multi-byte string, reverting to English.
I don't know what happened.
I found a similar question, but that one is about Astyle on OS X.
Here is the source code related to the error. I don't understand the meaning of the second line.
// Not all compilers support the C++ function locale::global(locale(""));
// For testing on Windows change the "Region and Language" settings or use AppLocale.
// For testing on Linux change the LANG environment variable: LANG=fr_FR.UTF-8.
// setlocale() will use the LANG environment variable on Linux.
char* localeName = setlocale(LC_ALL, "");
if (localeName == NULL)     // use the english (ascii) defaults
{
    fprintf(stderr, "\n%s\n\n", "Cannot set native locale, reverting to English");
    setTranslationClass();
    return;
}
Finally, please feel free to correct my English.
Add the following include to both ASLocalizer.cpp and style_main.cpp:
#include <locale.h>

How to run Jruby 1.6.6 on Aptana Studio 3.0.8 *ON WINDOWS*

I have read the launching JRuby from Aptana Studio 3 on Windows XP thread (to be fair, I am on Windows 7) and created the wrapper script ruby.bat (@C:\jruby-1.6.6\bin\jruby %*, with MY particular path).
Tried naming it "just" ruby, ruby.sh, whatever, but Aptana won't find it. From any Windows shell (cmd) it works without a hitch.
Also tried copying jruby.exe to ruby.exe. That still won't work. Linking ruby.exe to jruby.exe with the mklink command won't work either.
Looked around the internet, but all I found were dead ends.
Any fix for this? It can't be THAT rare a setup, or THAT difficult, can it?
I did that by a simple trick...
I created a C++ file ruby.cpp:
#include <cstdlib>
#include <string>

using namespace std;

int main(int argc, char *argv[]) {
    string cmd = "jruby.exe";
    // Forward all command-line arguments to jruby.exe.
    for (int i = 1; i < argc; ++i)
        cmd.append(" ").append(argv[i]);
    return system(cmd.c_str());
}
Compiled as ruby.exe and moved to C:\jruby-1.6.6\bin.
It works...

boost::filesystem and Unicode under Linux and Windows

The following program compiles in Visual Studio 2008 under Windows, both with Character Set
"Use Unicode Character Set" and "Use Multi-Byte Character Set". However, it does not compile under Ubuntu 10.04.2 LTS 64-bit and GCC 4.4.3. I use Boost 1.46.1 under both environments.
#include <boost/filesystem/path.hpp>
#include <iostream>

int main() {
    boost::filesystem::path p(L"/test/test2");
    std::wcout << p.native() << std::endl;
    return 0;
}
The compile error under Linux is:
test.cpp:6: error: no match for ‘operator<<’ in ‘std::wcout << p.boost::filesystem3::path::native()’
It looks to me like boost::filesystem under Linux does not provide a wide character string in path::native(), despite boost::filesystem::path having been initialized with a wide string. Further, I'm guessing that this is because Linux defaults to UTF-8 and Windows to UTF-16.
So my first question is, how do I write a program that uses boost::filesystem and supports Unicode paths on both platforms?
Second question: When I run this program under Windows, it outputs:
/test/test2
My understanding is that the native() method should convert the path to the native format under Windows, which is using backslashes instead of forward slashes. Why is the string coming out in POSIX format?
Your understanding of native is not completely correct:
Native pathname format: An implementation defined format. [Note: For POSIX-like operating systems, the native format is the same as the generic format. For Windows, the native format is similar to the generic format, but the directory-separator characters can be either slashes or backslashes. --end note]
from Reference
This is because Windows allows POSIX-style pathnames, so using native() won't cause problems with the above.
Because you might often run into similar problems with your output, I think the best way would be to use the preprocessor, e.g.:
#ifdef _WIN32
std::wostream& console = std::wcout;
#else // POSIX
std::ostream& console = std::cout;
#endif
and something similar for the string-class.
If you want to use the wide output streams, you have to convert to a wide string:
#include <boost/filesystem/path.hpp>
#include <iostream>

int main() {
    boost::filesystem::path p(L"/test/test2");
    std::wcout << p.wstring() << std::endl;
    return 0;
}
Note that AFAIK using wcout doesn't give you Unicode output on Windows; you need to use wprintf instead.
Try this:
#include <boost/filesystem/path.hpp>
#include <iostream>

int main() {
    boost::filesystem::path p("/test/test2");
    std::wcout << p.normalize() << std::endl;
    return 0;
}
