The following line in /etc/bashrc_Apple_Terminal
shell_session_history_enable() {
    (umask 077; touch "$SHELL_SESSION_HISTFILE_NEW") <<< THIS LINE
    HISTFILE="$SHELL_SESSION_HISTFILE_NEW"
    SHELL_SESSION_HISTORY=1
}
is printing something like this on every new session.
/Users/me/.bash_sessions/717F6632-A946-44EE-8A27-2547EDDD09E9.historynew Stats {
dev: 16777220,
mode: 33152,
nlink: 1,
uid: 501,
gid: 20,
rdev: 0,
blksize: 4096,
ino: 1406878,
size: 0,
blocks: 0,
atimeMs: 1502801769000,
mtimeMs: 1502801769000,
ctimeMs: 1502801769000,
birthtimeMs: 1502801769000,
atime: 2017-08-15T12:56:09.000Z,
mtime: 2017-08-15T12:56:09.000Z,
ctime: 2017-08-15T12:56:09.000Z,
birthtime: 2017-08-15T12:56:09.000Z }
The closest I can pin down when this started is the last macOS update.
What's an elegant way to solve this without modifying this file, which I'd rather not change?
This post answers my question
How to deactivate bash_history stats print when opening a new terminal window on my mac?
I hadn't entertained the possibility that there was an alias for touch, but that was indeed the case.
I'm struggling to unpick what the output from beantools tail on a beanstalk tube means exactly, specifically age, reserves & releases.
stat shows one job in this tube, but tail spits out thousands of these with the same job id:
id: 1, length: 184, priority: 1024, delay: 0, age: 45, ttr: 60
reserves: 101414, releases: 101413, buries: 0, kicks: 0, timeouts: 0
body:{snip}
age - how long, in seconds, the job has been in the queue
reserves - the number of times this job has been reserved by a worker
releases - the number of times this job has been released back into the queue after being reserved
The huge number of reserves on the same job ID was caused by the process breaking on a timeout and not being caught: beanstalkd saw the job as failed and kept reserving it in a loop.
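As a rough sketch, assuming a beanstalkc-style client, the usual way to keep a failed job from being re-reserved forever is to delete it on success and bury it on failure instead of letting the TTR expire (the tube name and process() handler below are placeholders):
import beanstalkc  # assumption: any beanstalkd client exposes equivalent calls

conn = beanstalkc.Connection(host='localhost', port=11300)
conn.watch('my-tube')          # placeholder tube name

job = conn.reserve(timeout=5)  # returns None if nothing is ready in time
if job is not None:
    try:
        process(job.body)      # placeholder for your own handler
        job.delete()           # success: remove the job for good
    except Exception:
        job.bury()             # park it rather than letting the TTR expire and loop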
I would like to ask a question.
I have created a cluster according to this guide:
https://docs.opendaylight.org/en/stable-magnesium/getting-started-guide/clustering.html
I would like to verify that it is working. Can someone help me with how to do that?
Also, is it possible to connect this cluster, i.e. those 3 controllers, to one Mininet topology? Or can it not be done?
EDIT
I would also like to ask why not all bundles are active. Is that going to cause a problem?
I'm not sure if you can specify multiple controllers on the mininet command line, but it's worth a try. Otherwise you can try, as this person explains in this post, setting up the controllers in a Mininet .py config file; a rough sketch of that approach is below.
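A minimal sketch of that .py approach, assuming three ODL members at placeholder IPs and the usual OpenFlow port 6653:
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import SingleSwitchTopo

# Placeholder IPs for the three ODL cluster members.
controller_ips = ['10.0.0.1', '10.0.0.2', '10.0.0.3']

net = Mininet(topo=SingleSwitchTopo(3), controller=None)
for i, ip in enumerate(controller_ips):
    net.addController('c%d' % i, controller=RemoteController, ip=ip, port=6653)
net.start()
net.pingAll()
net.stop()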
To verify the cluster is working, there are many ways, but you can try some REST calls to check the status of things. We have some examples in upstream CSIT tests. If you install the odl-jolokia feature, you can send a GET to:
jolokia/read/org.opendaylight.controller:Category=Shards,name=member-1-shard-default-config,type=DistributedConfigDatastore
that is checking the default shard status for the config datastore. You'll get
some output like this:
content={
"request": {
"mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-default-config,type=DistributedConfigDatastore",
"type": "read"
},
"status": 200,
"timestamp": 1588524930,
"value": {
"AbortTransactionsCount": 0,
"CommitIndex": 70,
"CommittedTransactionsCount": 0,
"CurrentTerm": 7,
"FailedReadTransactionsCount": 0,
"FailedTransactionsCount": 0,
"FollowerInfo": [],
"FollowerInitialSyncStatus": true,
"InMemoryJournalDataSize": 33,
"InMemoryJournalLogSize": 1,
"LastApplied": 70,
"LastCommittedTransactionTime": "1970-01-01 00:00:00.000",
"LastIndex": 70,
"LastLeadershipChangeTime": "2020-05-03 16:54:45.034",
"LastLogIndex": 70,
"LastLogTerm": 7,
"LastTerm": 7,
"Leader": "member-2-shard-default-config",
"LeadershipChangeCount": 1,
"PeerAddresses": "member-3-shard-default-config: akka.tcp://opendaylight-cluster-data#10.30.170.119:2550/user/shardmanager-config/member-3-shard-default-config, member-2-shard-default-config: akka.tcp://opendaylight-cluster-data#10.30.170.113:2550/user/shardmanager-config/member-2-shard-default-config",
"PeerVotingStates": "member-3-shard-default-config: true, member-2-shard-default-config: true",
"PendingTxCommitQueueSize": 0,
"RaftState": "Follower",
"ReadOnlyTransactionCount": 0,
"ReadWriteTransactionCount": 0,
"ReplicatedToAllIndex": 69,
"ShardName": "member-1-shard-default-config",
"SnapshotCaptureInitiated": false,
"SnapshotIndex": 69,
"SnapshotTerm": 7,
"StatRetrievalError": null,
"StatRetrievalTime": "557.3 \u03bcs",
"TxCohortCacheSize": 0,
"VotedFor": "member-2-shard-default-config",
"Voting": true
}
}
Lots of info there, but the RaftState says Follower, so you know this node is one of the two followers. One node will be the leader.
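For example, a minimal sketch of scripting that GET, assuming Karaf's default port 8181 and the default admin/admin credentials:
import requests

url = ('http://localhost:8181/jolokia/read/'
       'org.opendaylight.controller:Category=Shards,'
       'name=member-1-shard-default-config,type=DistributedConfigDatastore')
resp = requests.get(url, auth=('admin', 'admin'))  # assumed credentials
shard = resp.json()['value']
print(shard['RaftState'], shard['Leader'], shard['PeerVotingStates'])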
Another thing we check is syncstatus to make sure it's "true". Use this
URI:
jolokia/read/org.opendaylight.controller:Category=ShardManager,name=shard-manager-operational,type=DistributedOperationalDatastore
example output
I'm trying to compare certain pixel values in my pyautogui script, but it crashes with the following error message, sometimes after multiple successful runs and sometimes straight away on the first call:
Traceback (most recent call last):
File "F:\Koodit\Python\HeroWars NNet\Assets\autodataGet.py", line 219, in <module>
battle = observeBattle()
File "F:\Koodit\Python\HeroWars NNet\Assets\autodataGet.py", line 180, in observeBattle
statii = getHeroBattlePixels()
File "F:\Koodit\Python\HeroWars NNet\Assets\autodataGet.py", line 32, in getHeroBattlePixels
colormatch = pyautogui.pixelMatchesColor(location[0], location[1], alive, tolerance=5)
File "E:\Program Files\Python\lib\site-packages\pyscreeze\__init__.py", line 557, in pixelMatchesColor
pix = pixel(x, y)
File "E:\Program Files\Python\lib\site-packages\pyscreeze\__init__.py", line 582, in pixel
return (r, g, b)
File "E:\Program Files\Python\lib\contextlib.py", line 120, in __exit__
next(self.gen)
File "E:\Program Files\Python\lib\site-packages\pyscreeze\__init__.py", line 111, in __win32_openDC
raise WindowsError("windll.user32.ReleaseDC failed : return 0")
OSError: windll.user32.ReleaseDC failed : return 0
My code (this is called multiple times; sometimes it crashes on the first run, sometimes it runs fine for around 100 calls before failing; also, my screen is 4K, so the resolution gets big):
def getSomePixelStatuses():
    someLocations = [
        [1200, 990],
        [1300, 990],
        [1400, 990],
        [1500, 990],
        [1602, 990],
        [1768, 990],
        [1868, 990],
        [1968, 990],
        [2068, 990],
        [2169, 990]
    ]
    status = []
    someValue = (92, 13, 12)
    for location in someLocations:
        colormatch = pyautogui.pixelMatchesColor(location[0], location[1], someValue, tolerance=5)
        status.append(colormatch)
    return status
I have no idea how to mitigate this problem. It seems that pyautogui uses pyscreeze to read pixel values from the screen, and the most probable candidate for where the error occurs is the pyscreeze pixel function:
def pixel(x, y):
    """
    TODO
    """
    if sys.platform == 'win32':
        # On Windows, calling GetDC() and GetPixel() is twice as fast as using our screenshot() function.
        with __win32_openDC(0) as hdc:  # handle will be released automatically
            color = windll.gdi32.GetPixel(hdc, x, y)
            if color < 0:
                raise WindowsError("windll.gdi32.GetPixel failed : return {}".format(color))
            # color is in the format 0xbbggrr https://msdn.microsoft.com/en-us/library/windows/desktop/dd183449(v=vs.85).aspx
            bbggrr = "{:0>6x}".format(color)  # bbggrr => 'bbggrr' (hex)
            b, g, r = (int(bbggrr[i:i+2], 16) for i in range(0, 6, 2))
            return (r, g, b)
    else:
        # Need to select only the first three values of the color in
        # case the returned pixel has an alpha channel
        return RGB(*(screenshot().getpixel((x, y))[:3]))
I installed these libraries just yesterday, and I'm running Python 3.8 on Windows 10 with pyscreeze version 0.1.25, so in theory everything should be up to date, but somehow something ends up crashing. Is there a way to mitigate this, either by modifying my code or even the library itself, or is my environment not suitable for this operation?
Well, I know it's not particularly helpful, but for me this error was fixed simply by running my code on 3.7 instead of 3.8. There shouldn't be any changes you have to make to your code, however (unless you were using the walrus operator!).
On Windows, this can be done with the Python launcher's -3.7 flag (py -3.7 yourscript.py), as long as 3.7 is installed.
PyScreeze and PyAutoGUI maintainer here. This is an issue that has been fixed in PyScreeze 0.1.28, so you just need to update it by running pip install -U pyscreeze.
For more context, here's the GitHub pull request where it was addressed: https://github.com/asweigart/pyscreeze/pull/73
It's a bug. You were on the right track, as the problem is indeed in this line of the pixel() function:
with __win32_openDC(0) as hdc:
That function uses ctypes.windll, which doesn't seem to handle the negative values sometimes returned from windll.user32.GetDC(), which subsequently causes an exception when windll.user32.ReleaseDC() is called.
The folks at Pillow helped track this down and propose a fix.
issue filed at pyautogui
issue filed at pillow which led to the solution
pending PR at pyscreeze to address
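As a rough sketch of the general idea (not the exact upstream patch), declaring the Win32 handle types explicitly keeps ctypes from truncating 64-bit device-context handles into negative 32-bit ints:
import ctypes
from ctypes import windll, wintypes

# Sketch only: give GetDC/ReleaseDC/GetPixel proper handle types so the
# HDC returned by GetDC() is not truncated before ReleaseDC() sees it.
windll.user32.GetDC.restype = wintypes.HDC
windll.user32.GetDC.argtypes = [wintypes.HWND]
windll.user32.ReleaseDC.restype = ctypes.c_int
windll.user32.ReleaseDC.argtypes = [wintypes.HWND, wintypes.HDC]
windll.gdi32.GetPixel.restype = wintypes.COLORREF
windll.gdi32.GetPixel.argtypes = [wintypes.HDC, ctypes.c_int, ctypes.c_int]

hdc = windll.user32.GetDC(0)
try:
    color = windll.gdi32.GetPixel(hdc, 100, 100)
finally:
    windll.user32.ReleaseDC(0, hdc)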
I can use the pixel function on Python 3.8 like this:
try:
    a = pixel(100, 100)
except:
    a = pixel(100, 100)
I don't have any clue why this works, but it works.
I had this error too and I fixed it: just use try and except.
while True:
    try:
        x, y = pyautogui.position()
        print(pyautogui.pixel(x, y))
    except:
        print("Cannot get pixel for the moment")
Given that you might be reading pixels multiple times, or can arrange to, try and except works wonders for this pyscreeze/pyautogui issue. Honestly, I don't know what's up with pyscreeze, but this works for me. Cheers.
I'm using gocraft/health to check the health of my service and get metrics for each endpoint.
I'm using the JSON polling sink to get the metrics.
sink := health.NewJsonPollingSink(time.Minute*5, time.Minute*5)
stream.AddSink(sink)
I want to use healthtop and healthd; at this Link they explain how.
I set the environment variables as they said: export HEALTHD_MONITORED_HOSTPORTS=:5001 HEALTHD_SERVER_HOSTPORT=:5002 healthd
After that they say "Now you can run it", but how? They don't give any command to do it, and I didn't really understand what they meant.
I navigated to src/github.com/gocraft/health/cmd/healthd and found main.go. When I run it I get this in the console:
[openrtb#sd-69536 healthd]$ go run main.go
[2015-06-17T23:04:20.871743758Z]: job:general event:starting kvs:[health_host_port::5002 monitored_host_ports::5001,:5002 server_host_port::5002]
[2015-06-17T23:04:20.87810814Z]: job:poll status:success time:4 ms kvs:[host_port::5002]
[2015-06-17T23:04:20.881896459Z]: job:poll status:success time:8 ms kvs:[host_port::5001]
[2015-06-17T23:04:20.882338024Z]: job:recalculate status:success time:231 μs
[2015-06-17T23:04:23.275370787Z]: job:recalculate status:success time:6 μs
[2015-06-17T23:04:30.875230839Z]: job:poll status:success time:1573 μs kvs:[host_port::5002]
[2015-06-17T23:04:30.881415193Z]: job:poll status:success time:7 ms kvs:[host_port::5001]
.
.
but there are no results on those endpoints:
localhost:5002/jobs: Lists top jobs
localhost:5002/hosts: Lists all monitored hosts and their statuses
it gave me {"error": "not_found"}
excepte this localhost:5002/health I got this JSON responce
{
"instance_id": "sd-69536.1291",
"interval_duration": 3600000000000,
"aggregations": [
{
"interval_start": "2015-06-18T01:00:00+02:00",
"serial_number": 48,
"jobs": {
"general": {
"timers": {},
"events": {
"starting": 1
},
"event_errs": {},
"count": 0,
"nanos_sum": 0,
"nanos_sum_squares": 0,
"nanos_min": 0,
"nanos_max": 0,
"count_success": 0,
"count_validation_error": 0,
"count_panic": 0,
"count_error": 0,
"count_junk": 0
},
"poll": {
"timers": {},
"events": {},
"event_errs": {},
"count": 24,
"nanos_sum": 107049159,
"nanos_sum_squares": 6.06770682813009e+14,
"nanos_min": 1581783,
"nanos_max": 8259442,
"count_success": 24,
"count_validation_error": 0,
"count_panic": 0,
"count_error": 0,
"count_junk": 0
},
"recalculate": {
"timers": {},
"events": {},
"event_errs": {},
"count": 23,
"nanos_sum": 3501601,
"nanos_sum_squares": 6.75958305123e+11,
"nanos_min": 70639,
"nanos_max": 290877,
"count_success": 23,
"count_validation_error": 0,
"count_panic": 0,
"count_error": 0,
"count_junk": 0
}
},
"timers": {},
"events": {
"starting": 1
},
"event_errs": {}
}
]
}
But I have no idea what this result means, because it doesn't seem to relate to my localhost:5001/health endpoint, which should normally be aggregated as they said.
What you downloaded is a binary, so you can just invoke it with healthd if you're in the correct directory; they actually provide this example:
HEALTHD_MONITORED_HOSTPORTS=:5020 HEALTHD_SERVER_HOSTPORT=:5032 healthd
That isn't setting env vars so much as invoking healthd with those two values (export or something similar would be required to persist the change beyond the one command). healthtop states more clearly what it is, but as you can see from their paths, they're both commands: gocraft/health/cmd/healthtop. They have several examples of using healthtop from bash; they're not as explicit about healthd, but it works the same way.
If you ran that command (as you show in your question), then you may want to try healthtop jobs or something to that effect. I don't know a ton about this project and don't care to research it, but from what I can tell healthd is just a service that collects results from various /health endpoints and makes them available in one API. It seems like they intend for you to use healthtop on top of it to view reports.
Also note this:
Great! To get a sense of the type of data healthd serves, you can manually navigate to:
/jobs: Lists top jobs
/aggregations: Provides a time series of aggregations
/aggregations/overall: Squishes all time series aggregations into one aggregation.
/hosts: Lists all monitored hosts and their statuses.
However, viewing raw JSON is just to give you a sense of the data. See the next section...
I'm not sure what the domain is (localhost:5032 if you're running locally?), but you should probably just be able to go to localhost:5032/jobs and see that healthd is running and doing something. Also check your apps to confirm they're up and running. Don't expect any output from healthd directly; that's what healthtop is for.
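As a quick sanity check, assuming healthd is listening on :5032 locally, you can hit those endpoints directly:
import requests

# Assumption: healthd's server port is :5032 on this machine.
print(requests.get('http://localhost:5032/jobs').json())
print(requests.get('http://localhost:5032/hosts').json())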
So my problem goes like this:
I have a .yml file at this path: /srv/PvP/plugins/Essentials/userdata/USERNAME.yml
The file contains information such as this:
timestamps:
  login: 1379189230018
  lastteleport: 1379188566255
  logout: 1379188894740
ipAddress: *.*.*.*
lastlocation:
  world: skyworld
  x: 2.878462237122215
  y: 101.0
  z: 134.80091939768792
  yaw: 0.0
  pitch: 0.0
nickname: §bAmir
money: '101980.0'
logoutlocation:
  world: skyblock
  x: -305.81015336936576
  y: 187.50846552474954
  z: -446.69999998807907
  yaw: -222.72388
  pitch: 13.428226
I want to rsync the data in this file to a different directory:
/srv/SB/plugins/Essentials/userdata/USERNAME.yml
But I ONLY want to sync the money line. Is there a way to do this with rsync?
Also, if this helps, there are around 10K files in the userdata directory.
No, rsync is a file-based tool.
You could use grep to extract the lines you want with a regex-based approach (you might want to use a tool that understands YAML here), save the result to a file, and then rsync that file.
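Going a step beyond grep + rsync, here is a rough sketch of a small script that copies just the money value straight into the matching destination files, assuming the flat key layout shown above and that the destination files already exist:
import os
import re

SRC = '/srv/PvP/plugins/Essentials/userdata'
DST = '/srv/SB/plugins/Essentials/userdata'
money_re = re.compile(r"^money:")

for name in os.listdir(SRC):
    if not name.endswith('.yml'):
        continue
    dst_path = os.path.join(DST, name)
    if not os.path.exists(dst_path):
        continue  # only update users that exist on both servers
    # Pull the money line out of the source file.
    with open(os.path.join(SRC, name)) as f:
        money_line = next((line for line in f if money_re.match(line)), None)
    if money_line is None:
        continue
    # Rewrite only the money line in the destination file.
    with open(dst_path) as f:
        lines = f.readlines()
    with open(dst_path, 'w') as f:
        f.writelines(money_line if money_re.match(line) else line for line in lines)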