Graphics To Console - shell

I'm not even sure if what I'm asking for is possible.
I want to create a really lightweight interface for the RPi. It doesn't need to show much in terms of graphics, but it would help.
I want to display data on the Unix console (so I don't have to start up a GUI desktop like GNOME).
But I don't even know what to Google for. Basically, when installing something like Ubuntu, you get the console screen, but it's slightly formatted (unlike just logging to the console).
I want to create an interface similar to what you might see when you load the BIOS menu. How do I do this?
It would also be really useful if I could get some touch functionality, so that if I touch certain parts of the screen it would register and I could get the interface to behave as I need it to.

You did not specify a programming language, so maybe dialog will do.
ncurses is the well-known option for C.
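Bindings for ncurses exist in most languages, too. Here's a minimal BIOS-menu-flavored sketch via the Rust ncurses crate (Rust chosen only because it comes up in the related thread below; the C calls are near-identical, and the exact binding signatures are from memory, so verify against the crate docs):

// Minimal full-screen text UI in the style of a BIOS menu,
// using the Rust `ncurses` bindings.
fn main() {
    ncurses::initscr();                      // take over the terminal
    ncurses::mvprintw(1, 2, "== Setup ==");  // draw at row 1, column 2
    ncurses::mvprintw(3, 2, "1. Option A");
    ncurses::mvprintw(4, 2, "2. Option B");
    ncurses::refresh();                      // push the buffer to the screen
    ncurses::getch();                        // wait for a keypress
    ncurses::endwin();                       // restore the normal console
}

Touch input is a separate problem: on a framebuffer console it usually arrives as Linux input events rather than through the terminal library.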

Related

How to clear console in Windows

I want to clear the console
I tried using this, but it doesn't work for Windows
print!("\x1B[2J");
Is there an easy way to clear the console?
I think I need some clarification first on what you're attempting to do. Regardless, this is my best attempt at answering your question <3
Preamble
Given that different terminals work differently and have different APIs, I think you will probably want to use a crate that provides this functionality in a cross-platform manner.
I'm assuming from here on that you want cross-platform functionality and that you don't mind using external dependencies. If this is correct, you might be happy to know that the terminal-interaction crates in the Rust ecosystem are actually really well developed. I have heard only good things about them.
With that out of the way, let's move on.
Do you want to just perform actions on the terminal?
If you just want to perform some actions on the terminal, like "clearing", "scrolling", "moving the cursor" and whatnot, I think you will be satisfied with the terminal crate. It lets you perform many actions, like clearing, independently of the platform you're on. It also supports interactivity features like interacting with the mouse and the keyboard :3
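I won't reproduce the terminal crate's exact API from memory, but the same kind of action done with the widely used crossterm crate (a substitution on my part) looks roughly like this:

use std::io::stdout;

use crossterm::{cursor::MoveTo, execute, terminal::{Clear, ClearType}};

fn main() {
    // Clear the whole screen, then park the cursor at the top-left corner.
    execute!(stdout(), Clear(ClearType::All), MoveTo(0, 0)).unwrap();
}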
Or do you want to write a GUI for the terminal?
If what you want to do is write a console-based user interface, though, I think that what might work for you instead is the tui crate. It has everything you need to build terminal GUIs, from clearing the console up to graphical widgets. Tools like gitui are written with tui.
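For a taste of what that looks like, here's a minimal sketch that draws one bordered widget filling the screen (assuming tui with its crossterm backend; names are from my memory of tui ~0.19, so check the docs):

use std::io::stdout;

use tui::{
    backend::CrosstermBackend,
    widgets::{Block, Borders},
    Terminal,
};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut terminal = Terminal::new(CrosstermBackend::new(stdout()))?;
    terminal.clear()?;
    // Draw one bordered block that covers the whole terminal area.
    terminal.draw(|frame| {
        let block = Block::default().title("My TUI").borders(Borders::ALL);
        frame.render_widget(block, frame.size());
    })?;
    Ok(())
}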
Did I answer your question? Feel free to follow up if I fell short :)
I've found a way to clear the console in Rust:
by using the console crate's clear_screen() method on a Term.
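For reference, that looks something like this (a minimal sketch; the method name is as I remember it from the console crate's Term type, so double-check the docs):

use console::Term;

fn main() -> std::io::Result<()> {
    let term = Term::stdout();
    term.clear_screen()?; // wipes the screen on Windows and Unix alike
    Ok(())
}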

How to start interacting with the ACR122U-A9 NFC reader?

I'm a junior PHP/JavaScript/HTML developer, recently hired by a company that makes photobooths. I had never worked on an Ubuntu system before this. I find this relevant because I think that, for this reason, I might be skipping an obvious step or something like that.
One of the projects I have to work on is adding an NFC device to the photobooths, so the user can just tap the area with their phone and get the pictures they just took. Sounds easy.
A previous employee bought an ACR122U-A9 device, which connects via USB, but they weren't able to make it work. I took the device and followed every single tutorial I could find, and I had no luck either.
What I have achieved after installing a great deal of things and blindly following tutorials is just this:
If I open a terminal and type "pcsc_scan", it detects the device and it kind of "works", reading the cards if I tap them. I get some hexadecimal codes and some blue text that doesn't mean anything to me. And while it does this I can't even type in the terminal, so I cannot do anything at all with it.
What I actually want is to know how to make the computer speak to the NFC device, not just listen to it. Well, I guess it has to listen to know when to send info.
I think I'm missing something very obvious, because every tutorial I find just explains what kind of code you need to write to do X thing, or how to make the device emulate a card, or things like that... But I think I need something WAY more basic:
How do I even start working and interacting with it?
Info that might be relevant:
I didn't specify how I got to the point where typing "pcsc_scan" does something, because A) I've done so many tutorials and different things that I don't remember which part of what I did accomplished this, and B) I'd like to start from scratch in order to understand what I am doing.
I'm working on an Ubuntu 17.10 machine, but the final product will run under Windows (different versions of it depending on the photobooth).
Our photobooths work with a web API on localhost. Everything is either PHP, JavaScript, CSS, or HTML. In the end I will need a way for the device to get the info it needs from one of these languages (if possible).
I'm still struggling with Ubuntu. Everything you try to install or interact with in this OS is done via commands that I don't completely understand and repeat from random internet tutorials or forums like a parrot. Fixing this is not part of the question, and I'll eventually learn, but it might be useful to know that I may not know some things that should be obvious or basic about it.

MonoGame Platform Agnostic Input/Output

I've done a bit of XNA work, and I'm now trying to work in MonoGame. Previously, for all my input and output needs, I used Microsoft.Xna.Framework. I'm now trying to make one version of my game to deploy on as many platforms as possible (excluding, at the moment, touch interfaces), but I don't know what I should be doing regarding the mouse, for example.
Does MonoGame make Microsoft.Xna.Framework platform-agnostic or do I have to use other frameworks and switch between them depending on the platform?
MonoGame is designed to make it easy to port your game to other platforms; you shouldn't need to use any other frameworks to achieve that goal. However, it's not as simple as just recompiling the code for each new platform.
For the most part, all of your code will remain the same, but you'll need to put together a project for each platform and link all of the code files in each one. I won't go into detail about this; I'll just say that you can do it and it's not that difficult.
Now, what you will find is that you may have to write some platform specific code to handle device specific stuff like screen scaling and input handling. What exactly you need to do will depend on your game, so I can't really explain that in detail either.
To make your life easier, it can help to think about how your game is going to work on other platforms and write your code accordingly. For example, a touch on a mobile device is very similar to a mouse click, so you could wrap this functionality in a method of your own to minimize the code changes required when porting (see the sketch below). On the other hand, some things you can do with a mouse simply don't work on touch interfaces, like right-click and hover. Similarly, touch interfaces have commonly used gestures that don't really map to a mouse on a PC, like long-press, swipe, and pinch.
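To illustrate the wrapping idea, here's a minimal, engine-neutral sketch of a pointer-input abstraction (in Rust only so it needs no engine setup; in MonoGame itself the implementations would sit on top of the Mouse and TouchPanel classes, and all the type names here are hypothetical):

// Game code consumes PointerEvents and never touches mouse or touch APIs.
#[derive(Debug)]
enum PointerEvent {
    Pressed { x: f32, y: f32 },
    Released { x: f32, y: f32 },
}

trait PointerSource {
    fn poll(&mut self) -> Option<PointerEvent>;
}

// Desktop build: back the trait with mouse state.
struct MousePointer; // would hold the platform's mouse handle

impl PointerSource for MousePointer {
    fn poll(&mut self) -> Option<PointerEvent> {
        // Hypothetical: translate mouse clicks into PointerEvents here.
        None
    }
}

fn main() {
    // A mobile build would swap in a TouchPointer; nothing else changes.
    let mut input: Box<dyn PointerSource> = Box::new(MousePointer);
    while let Some(event) = input.poll() {
        println!("{event:?}"); // game logic reacts to pointer events only
    }
}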
So the short answer is, you don't HAVE to do anything special, but you should at least think about it if you plan to port your game in the future.

How to control the mouse pointer outside our application

I want to control the mouse pointer with my application and be able to interact with other programs using my program.
For example, I want my application to be able to click a button in another application.
How should I go about solving this problem?
(Any programming language would work, also if you have any suggestion please let me know)
Afterthoughts:
I want to do this on the Windows operating system, and I want to test my GUI to see if it works in different scenarios. Any language would work for me since this is not part of the final product, but I'd prefer one of these languages: Python, Java, C#, or MATLAB.
Thanks
There are many ways of doing this, and you didn't mention any details of your application (system, target goal, etc...).
If your goal is menial automation, I'd recommend whipping together a quick AutoIt script on Windows. http://www.autoitscript.com/autoit3/index.shtml
If this isn't what you're looking for, please give more details.
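If AutoIt's scripting language isn't your thing, synthetic-input libraries exist for general-purpose languages too (Java ships java.awt.Robot, and Python has pyautogui). A minimal sketch with Rust's enigo crate; the method names are from its pre-0.2 API, so treat them as an assumption:

use enigo::{Enigo, MouseButton, MouseControllable};

fn main() {
    let mut enigo = Enigo::new();
    // Move the OS cursor to absolute screen coordinates, then click,
    // regardless of which application is under the pointer.
    enigo.mouse_move_to(500, 200);
    enigo.mouse_click(MouseButton::Left);
}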
Okay, this one is really operating-system and windowing-system specific. But the phrase you're looking for is "mouse grabbing".
As @Mitch suggests, unless you've got a really good reason (like maybe a GUI-testing app?), grabbing the mouse and messing with it that way is very bad form.

Call another program's functions?

So I have this program that I really like, and it doesn't support AppleScript. I'd like to automate it a little bit. Now, I know that I could use AppleScript to tell the program to tell the menu to tell the submenu to tell the menu item to activate or whatever, but frankly I don't like AppleScript very much anyway.
When I open the NIB file in IB, I can see the messages that are being sent to FirstResponder; for example, the Copy menu item sends "copy:". Is there any way for me to invoke this directly from another program?
No. It's called protected memory for a reason, you know. The other program is completely insulated from your application. There are ways to put code into other apps, but (a) it's very inadvisable, (b) it requires root privileges, which means the rest of your app needs to be ROCK SOLID AND IMPREGNABLE, and (c) writing such code is a black art requiring knowledge of the operating system kernel interfaces, virtual memory management, the ABI, the internals of the linker/loader, assembler programming, and the operational parameters and other specifics of the particular processor upon which your app happens to be running.
Really, AppleEvents and other such IPC mechanisms are there for a reason.
Your other alternatives to access the data you're looking for (all of which are a bit hacky, to be honest, and give you the fairly significant burden of ensuring the target app is in the state you want/expect) are:
The Accessibility APIs from the ApplicationServices framework, through which you can traverse the UI tree to grab the text from wherever you need it directly, or can activate the menu item. Access for your app has to be explicitly granted by the user, however (although this is much the same as the requirement for UI scripting).
You can use the CoreGraphics APIs (within the ApplicationServices framework again) to send keyboard events to the target application (or just to the system) directly. This would mean sending four events: Command-down, C-down, C-up, Command-up (see the sketch after this list).
None of these are ideal. To be honest, your best approach would be to look at your requirements and figure out how you can best engineer around the problem by changing those requirements in some way, i.e. instead of grabbing something directly, ask the user to provide some input, etc.
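To make the second option concrete, here's a rough sketch of posting Command-C as keyboard events, shown through the Rust core-graphics bindings (the underlying C calls are the same; the binding names are from memory, so verify them). Note that CGEvent lets you fold the modifier into the event's flags instead of sending separate Command key events:

use core_graphics::event::{CGEvent, CGEventFlags, CGEventTapLocation};
use core_graphics::event_source::{CGEventSource, CGEventSourceStateID};

fn main() {
    let source = CGEventSource::new(CGEventSourceStateID::HIDSystemState)
        .expect("event source");
    // Key code 8 is 'C' on an ANSI keyboard layout.
    let key_down = CGEvent::new_keyboard_event(source.clone(), 8, true)
        .expect("key-down event");
    key_down.set_flags(CGEventFlags::CGEventFlagCommand); // hold Command
    key_down.post(CGEventTapLocation::HID);

    let key_up = CGEvent::new_keyboard_event(source, 8, false)
        .expect("key-up event");
    key_up.set_flags(CGEventFlags::CGEventFlagCommand);
    key_up.post(CGEventTapLocation::HID);
}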
You might be interested in SIMBL or in mach_inject. SIMBL is a daemon (in my fork based on mach_inject, in the original version based on injection via some ScriptingAdditions hack) which does the injection for you, so you just need to put a bundle with your code into the SIMBL directory and SIMBL will inject it for you into the target application. Or you can do so yourself via mach_inject. Or, probably more convenient, mach_inject_framework, which injects and runs code that just loads some framework.
I think Jim may overstate the point a bit; he's not wrong, but it seems misleading. There are lots of ways to cause a Cocoa program to execute its own code under your control (Carbon is harder). The Accessibility API is very commonly used this way (so commonly that I expect it to be repurposed eventually). F-Script can give you all kinds of access to the innards of another Cocoa program. While Input Managers may well exit the scene at some point, SIMBL is still out there today to do this kind of stuff.
Whether you like AppleScript or not, Apple Events are the primary way Apple provides for inter-program control. Have you double-checked Script Editor's Open Library function to find out if the program really does have any AppleScript support? You can code Apple Events entirely in Objective-C these days using Leopard's Scripting Bridge. I wrote up a tutorial, if you like (it's still under-documented by Apple).
Cocoa is a reverse-engineer's dream. The same guys who host SIMBL have a nice intro to the subject. "Wolf" also writes a lot of useful information on this.
Jim's right. Many of these approaches can completely destabilize the system if done incorrectly (sometimes even if done correctly). I don't do much of this stuff on my production systems; I need them to work. But there are a lot of things you can make a Mac app do, and it's a good part of a Mac developer's training to understand how all the pieces really work.
