The `glow` backend crashes on certain CPU-based graphics setups,
particularly Windows machines running integrated graphics with the
basic Microsoft Display Adapter driver.
`wgpu` seems to work everywhere.
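For reference, a minimal sketch of requesting the `wgpu` backend; this
assumes an eframe version (roughly 0.19-0.22) that exposes the
[renderer] field on [NativeOptions], and the app type here is made up:

use eframe::{egui, NativeOptions, Renderer};

#[derive(Default)]
struct App;

impl eframe::App for App {
    fn update(&mut self, ctx: &egui::Context, _frame: &mut eframe::Frame) {
        egui::CentralPanel::default().show(ctx, |ui| { ui.label("hello"); });
    }
}

fn main() {
    let options = NativeOptions {
        // Ask for wgpu instead of the default glow backend.
        renderer: Renderer::Wgpu,
        ..Default::default()
    };
    let _ = eframe::run_native("app", options, Box::new(|_cc| Box::new(App::default())));
}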
Instead of saving P2Pool payout logs as they are, they are now
(de)serialized in the same [Display] format, e.g.:
<DATE> <TIME> | <12_DOT_FLOAT> XMR | Block <BLOCK>
2022-09-31 12:53:52.8683 | 0.166122683521 XMR | Block 2,713,512
The parsing functions were updated to read both raw log lines and
the new format above.
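As a rough illustration, a parser for the new format might look like
this; the struct and field names are made up, not Gupax's actual types:

#[derive(Debug)]
struct Payout { date: String, xmr: f64, block: String }

// "2022-09-31 12:53:52.8683 | 0.166122683521 XMR | Block 2,713,512"
fn parse_display_line(line: &str) -> Option<Payout> {
    let mut parts = line.split(" | ");
    let date = parts.next()?.to_string();
    let xmr = parts.next()?.strip_suffix(" XMR")?.parse::<f64>().ok()?;
    let block = parts.next()?.strip_prefix("Block ")?.to_string();
    Some(Payout { date, xmr, block })
}

fn main() {
    let line = "2022-09-31 12:53:52.8683 | 0.166122683521 XMR | Block 2,713,512";
    println!("{:?}", parse_display_line(line));
}

A raw P2Pool log line would be handled by a separate branch that falls
back to the old parsing when this one returns [None].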
These couldn't fit in before since there wasn't enough space.
They still can't all fit, but after adjusting the font sizes and
height spacing, the most important ones can.
macOS moves "dangerous" applications into a read-only [/private]
filesystem. This messes with the updater and the default P2Pool and
XMRig paths.
If [/private] is detected, show a panic screen upon Gupax startup
telling the user to move Gupax to [/Applications]. This _seems_ to
make macOS relax a little (after an arbitrary amount of time...).
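A minimal sketch of the check, assuming the detection is done on the
executable's own path (the real panic screen and message differ):

fn check_translocation() {
    if let Ok(path) = std::env::current_exe() {
        if path.starts_with("/private") {
            panic!("Gupax is running from a read-only [/private] path.\n\
                    Please move Gupax to [/Applications] and restart.");
        }
    }
}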
There was a deadlock between the [Helper]'s [gui_api_p2pool] and the
GUI's [gui_api_p2pool], since the two threads locked them in different
orders. The watchdog loop's locking order was fixed as well. This was
a pain to track down, but better than a data race... I guess.
Oh and keyboard shortcuts were added in this commit too.
Comment from code:
// The ordering of these locks is _very_ important. They MUST be in sync
// with how the main GUI thread locks stuff or a deadlock will occur
// given enough time. They will eventually both want to lock the [Arc<Mutex>]
// the other thread is already locking. Yes, I figured this out the hard way,
// hence the vast amount of debug!() messages.
//
// Example of different order (BAD!):
//
// GUI Main -> locks [p2pool] first
// Helper -> locks [gui_api_p2pool] first
// GUI Status Tab -> tries to lock [gui_api_p2pool] -> CAN'T
// Helper -> tries to lock [p2pool] -> CAN'T
//
// These two threads are now in a deadlock because both
// are trying to access locks the other one already has.
//
// The locking order here must be in the same chronological
// order as the main GUI thread (top to bottom).
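A toy illustration of the fix, not Gupax's actual types: both threads
always lock [p2pool] before [gui_api_p2pool], so neither can end up
holding the lock the other is waiting on:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let p2pool = Arc::new(Mutex::new(0u64));
    let gui_api_p2pool = Arc::new(Mutex::new(0u64));

    let (a, b) = (Arc::clone(&p2pool), Arc::clone(&gui_api_p2pool));
    let helper = thread::spawn(move || {
        let p = a.lock().unwrap();     // [p2pool] first...
        let mut g = b.lock().unwrap(); // ...then [gui_api_p2pool]
        *g = *p + 1;
    });

    {
        let p = p2pool.lock().unwrap(); // same order on the "GUI" side
        let mut g = gui_api_p2pool.lock().unwrap();
        *g = *p;
    }

    helper.join().unwrap();
}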
Even with the parent-child relationship and a direct process handle,
Gupax can't kill an XMRig spawned with [sudo] on macOS, even though
it can do so fine on Linux. So, on macOS, get the user's [sudo]
password on the [Stop] button and summon a [sudo kill] on XMRig's PID.
This also complicates things, since now [SudoState] has to be kept
somehow between a [Stop] and a [Start]; there is now macOS-specific
code that handles that.
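The macOS path boils down to something like this sketch, piping the
password to [sudo -S]; the [SudoState] bookkeeping and error handling
are omitted, and the function name is made up:

use std::io::Write;
use std::process::{Command, Stdio};

fn sudo_kill(pid: u32, password: &str) -> std::io::Result<bool> {
    let mut child = Command::new("sudo")
        .arg("-S") // read the password from stdin
        .arg("kill")
        .arg(pid.to_string())
        .stdin(Stdio::piped())
        .stderr(Stdio::null())
        .spawn()?;
    child.stdin.as_mut().expect("stdin was piped")
        .write_all(format!("{}\n", password).as_bytes())?;
    Ok(child.wait()?.success())
}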
Cargo: Clean up unused dependencies, enable some build optimizations
Tor: Arti doesn't seem to work on macOS
Even a bare Arti+Hyper request doesn't seem to work, so it's
probably not something to do with Gupax. A lot of issues (OpenGL,
TLS) only seem to pop up in a VM even though Gupax runs fine on bare
metal, so Tor might work fine on real macOS, but I don't have real
macOS to test it. VM macOS can't create a circuit, so it's disabled
by default, with a warning that it's unstable.
P2Pool: Let selected_index start at 0, and only +1 it when printing
to the user. This makes the overflow math when adding/deleting nodes
much simpler, because selected_index matches the actual index into
the node vector.
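A toy example of the idea (illustrative names):

fn main() {
    let mut nodes = vec!["node-a", "node-b", "node-c"];
    let mut selected_index: usize = 2; // matches the vector index directly

    nodes.pop(); // deleting the last node: just clamp, no -1/+1 juggling
    if selected_index >= nodes.len() {
        selected_index = nodes.len().saturating_sub(1);
    }

    // Only +1 when printing to the user.
    println!("Node {}/{}: {}", selected_index + 1, nodes.len(), nodes[selected_index]);
}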
[og: State] is now completely wrapped in an [Arc<Mutex>] so that
when the update is done, it can [.lock()] the user's CURRENT runtime
settings and save them to [gupax.toml], instead of using an old copy
given to it at the beginning of the thread. In practice, this means
users can change settings around during an update, and when the
update finishes and saves to disk, it uses their current settings
rather than the old ones. Wrapping all of [og: State] in an
[Arc<Mutex>] might be overkill compared to message channels, but
[State] really is just a few [bool]s, [u*]s, and small [String]s,
so it's not much data.
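A minimal sketch of the idea with a made-up one-field [State]; the
real struct and save logic are more involved:

use std::sync::{Arc, Mutex};
use std::thread;

#[derive(Clone, Debug)]
struct State { auto_update: bool }

fn main() {
    let og = Arc::new(Mutex::new(State { auto_update: true }));

    let og2 = Arc::clone(&og);
    let updater = thread::spawn(move || {
        // ...long update work happens here...
        // Lock the CURRENT settings at the end instead of using a
        // copy captured when the thread was spawned.
        let current = og2.lock().unwrap();
        println!("saving to gupax.toml: {:?}", *current);
    });

    og.lock().unwrap().auto_update = false; // user flips a setting mid-update
    updater.join().unwrap();
}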
To bypass a deadlock when comparing [og == state] every frame,
[og]'s struct fields get cloned every frame into separate variables,
and the comparison happens on those. This is also pretty stupid, but
again, the data being cloned is so tiny that it doesn't seem to
slow anything down.
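Roughly, the per-frame comparison looks like this toy version (field
names invented):

use std::sync::{Arc, Mutex};

#[derive(Clone)]
struct State { node: String, auto: bool }

fn differs(og: &Arc<Mutex<State>>, state: &State) -> bool {
    // Clone the fields into locals so the MutexGuard is dropped
    // before comparing, instead of holding the lock for [og == state].
    let (node, auto) = {
        let og = og.lock().unwrap();
        (og.node.clone(), og.auto)
    };
    node != state.node || auto != state.auto
}

fn main() {
    let og = Arc::new(Mutex::new(State { node: "a".into(), auto: true }));
    let state = State { node: "b".into(), auto: true };
    println!("{}", differs(&og, &state)); // true
}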