Remote Node: fix UI and backend functions for remote nodes

hinto-janaiyo 2023-01-25 22:34:51 -05:00
parent 304e8afbef
commit 3fd5edc314
GPG key ID: B1C5A64B80691E45
8 changed files with 214 additions and 181 deletions

@ -543,11 +543,6 @@ For transparency, here's all the connections Gupax makes:
## Remote Monero Nodes
These are the remote nodes used by Gupax in the `[P2Pool Simple]` tab. They are sourced from [this list](https://github.com/hinto-janaiyo/monero-nodes), which itself sources from [`monero.fail`](https://monero.fail). The nodes with the most consistent uptime are used.
In general, a suitable node needs to:
- Have good uptime
- Have RPC enabled
- Have ZMQ enabled
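
Whether RPC and ZMQ are actually reachable can be sanity-checked with a plain TCP connect. The sketch below is only a reachability probe, not proof that the services behind the ports are configured correctly; the node and port numbers are taken from one row of the table that follows:

```rust
use std::net::{TcpStream, ToSocketAddrs};
use std::time::Duration;

// Try a TCP connection to `host:port` with a short timeout.
// Only proves the port accepts connections, not that RPC/ZMQ answer correctly.
fn port_is_open(host: &str, port: u16) -> bool {
    match (host, port).to_socket_addrs() {
        Ok(mut addrs) => addrs.any(|a| TcpStream::connect_timeout(&a, Duration::from_secs(3)).is_ok()),
        Err(_) => false,
    }
}

fn main() {
    // One node from the table below; ports come from its RPC/ZMQ columns.
    let (host, rpc, zmq) = ("p2pmd.xmrvsbeast.com", 18081u16, 18083u16);
    println!("{} RPC ({}) reachable: {}", host, rpc, port_is_open(host, rpc));
    println!("{} ZMQ ({}) reachable: {}", host, zmq, port_is_open(host, zmq));
}
```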
| IP/Domain | Location | RPC Port | ZMQ Port |
|----------------------------------|-----------------------------------|----------|-------------|
| monero.10z.com.ar | 🇦🇷 AR - Buenos Aires F.D. | 18089 | 18084 |
@ -575,27 +570,27 @@ In general, a suitable node needs to:
These are community nodes that **DON'T** have ZMQ enabled but are fast and well-known. These are not used in Gupax but can be used for general Monero usage.
| Name | Owner | Owner Type | IP/Domain | RPC Port |
|----------------|---------------------------------------------------|------------|-----------------------------------|----------|
| C3pool | [C3pool](https://www.c3pool.com) | Pool | node.c3pool.com | 18081 |
| Cake | [Cake](https://cakewallet.com) | Wallet | xmr-node.cakewallet.com | 18081 |
| CakeEu | [Cake](https://cakewallet.com) | Wallet | xmr-node-eu.cakewallet.com | 18081 |
| CakeUk | [Cake](https://cakewallet.com) | Wallet | xmr-node-uk.cakewallet.com | 18081 |
| CakeUs | [Cake](https://cakewallet.com) | Wallet | xmr-node-usa-east.cakewallet.com | 18081 |
| Feather1 | [Feather](https://featherwallet.org) | Wallet | selsta1.featherwallet.net | 18081 |
| Feather2 | [Feather](https://featherwallet.org) | Wallet | selsta2.featherwallet.net | 18081 |
| HashVault | [HashVault](https://hashvault.pro) | Pool | nodes.hashvault.pro | 18081 |
| MajesticBankIs | [MajesticBank](https://www.majesticbank.sc) | Exchange | node.majesticbank.is | 18089 |
| MajesticBankSu | [MajesticBank](https://www.majesticbank.sc) | Exchange | node.majesticbank.su | 18089 |
| MoneroSeed1 | [Monero](https://github.com/monero-project/monero/blob/release-v0.18/src/p2p/net_node.inl#L708) | Seed Node | 176.9.0.187 | 18089 |
| MoneroSeed2 | [Monero](https://github.com/monero-project/monero/blob/release-v0.18/src/p2p/net_node.inl#L715) | Seed Node | 51.79.173.165 | 18089 |
| MoneroWorld1 | [Gingeropolous](https://github.com/Gingeropolous) | Individual | node.moneroworld.com | 18089 |
| MoneroWorld2 | [Gingeropolous](https://github.com/Gingeropolous) | Individual | uwillrunanodesoon.moneroworld.com | 18089 |
| Monerujo | [Monerujo](https://www.monerujo.io) | Wallet | nodex.monerujo.io | 18081 |
| Rino | [Rino](https://rino.io) | Wallet | node.community.rino.io | 18081 |
| Seth | [Seth](https://github.com/sethforprivacy) | Individual | node.sethforprivacy.com | 18089 |
| SupportXmr | [SupportXMR](https://www.supportxmr.com) | Pool | node.supportxmr.com | 18081 |
| SupportXmrIr | [SupportXMR](https://www.supportxmr.com) | Pool | node.supportxmr.ir | 18089 |
| IP/Domain | RPC Port | Owner | Owner Type |
|-----------------------------------|----------|---------------------------------------------------|------------|
| node.c3pool.com | 18081 | [C3pool](https://www.c3pool.com) | Pool |
| xmr-node.cakewallet.com | 18081 | [Cake](https://cakewallet.com) | Wallet |
| xmr-node-eu.cakewallet.com | 18081 | [Cake](https://cakewallet.com) | Wallet |
| xmr-node-uk.cakewallet.com | 18081 | [Cake](https://cakewallet.com) | Wallet |
| xmr-node-usa-east.cakewallet.com | 18081 | [Cake](https://cakewallet.com) | Wallet |
| selsta1.featherwallet.net | 18081 | [Feather](https://featherwallet.org) | Wallet |
| selsta2.featherwallet.net | 18081 | [Feather](https://featherwallet.org) | Wallet |
| nodes.hashvault.pro | 18081 | [HashVault](https://hashvault.pro) | Pool |
| node.majesticbank.is | 18089 | [MajesticBank](https://www.majesticbank.sc) | Exchange |
| node.majesticbank.su | 18089 | [MajesticBank](https://www.majesticbank.sc) | Exchange |
| 176.9.0.187 | 18089 | [Monero](https://github.com/monero-project/monero/blob/release-v0.18/src/p2p/net_node.inl#L708) | Seed Node |
| 51.79.173.165 | 18089 | [Monero](https://github.com/monero-project/monero/blob/release-v0.18/src/p2p/net_node.inl#L715) | Seed Node |
| node.moneroworld.com | 18089 | [Gingeropolous](https://github.com/Gingeropolous) | Individual |
| uwillrunanodesoon.moneroworld.com | 18089 | [Gingeropolous](https://github.com/Gingeropolous) | Individual |
| nodex.monerujo.io | 18081 | [Monerujo](https://www.monerujo.io) | Wallet |
| node.community.rino.io | 18081 | [Rino](https://rino.io) | Wallet |
| node.sethforprivacy.com | 18089 | [Seth](https://github.com/sethforprivacy) | Individual |
| node.supportxmr.com | 18081 | [SupportXMR](https://www.supportxmr.com) | Pool |
| node.supportxmr.ir | 18089 | [SupportXMR](https://www.supportxmr.com) | Pool |
## Build
### General Info

@ -58,7 +58,7 @@ This is how Gupax works internally when starting up:
2. **AUTO**
- If `auto_update` == `true`, spawn auto-updating thread
- If `auto_ping` == `true`, spawn community node ping thread
- If `auto_ping` == `true`, spawn remote node ping thread
- If `auto_p2pool` == `true`, spawn P2Pool
- If `auto_xmrig` == `true`, spawn XMRig
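
As a rough sketch, this AUTO step boils down to four independent flag checks; the struct and function names below are illustrative, not Gupax's actual types:

```rust
// Illustrative only: the flag names mirror the list above, the spawn targets are stubbed.
struct AutoFlags {
    auto_update: bool,
    auto_ping: bool,
    auto_p2pool: bool,
    auto_xmrig: bool,
}

fn auto_start(flags: &AutoFlags) {
    if flags.auto_update { std::thread::spawn(|| { /* run the auto-updater */ }); }
    if flags.auto_ping   { std::thread::spawn(|| { /* ping the remote nodes */ }); }
    if flags.auto_p2pool { /* spawn the P2Pool child process */ }
    if flags.auto_xmrig  { /* spawn the XMRig child process */ }
}

fn main() {
    auto_start(&AutoFlags { auto_update: false, auto_ping: true, auto_p2pool: true, auto_xmrig: false });
}
```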

@ -249,20 +249,20 @@ pub const P2POOL_MINI: &str = "Use the P2Pool mini-chain. This P2Pool
pub const P2POOL_OUT: &str = "How many out-bound peers to connect to? (you connecting to others)";
pub const P2POOL_IN: &str = "How many in-bound peers to allow? (others connecting to you)";
pub const P2POOL_LOG: &str = "Verbosity of the console log";
pub const P2POOL_AUTO_NODE: &str = "Automatically ping the community Monero nodes at Gupax startup";
pub const P2POOL_AUTO_SELECT: &str = "Automatically select the fastest community Monero node after pinging";
pub const P2POOL_SELECT_FASTEST: &str = "Select the fastest community Monero node";
pub const P2POOL_SELECT_RANDOM: &str = "Select a random community Monero node";
pub const P2POOL_SELECT_LAST: &str = "Select the previous community Monero node";
pub const P2POOL_SELECT_NEXT: &str = "Select the next community Monero node";
pub const P2POOL_PING: &str = "Ping the built-in community Monero nodes";
pub const P2POOL_AUTO_NODE: &str = "Automatically ping the remote Monero nodes at Gupax startup";
pub const P2POOL_AUTO_SELECT: &str = "Automatically select the fastest remote Monero node after pinging";
pub const P2POOL_SELECT_FASTEST: &str = "Select the fastest remote Monero node";
pub const P2POOL_SELECT_RANDOM: &str = "Select a random remote Monero node";
pub const P2POOL_SELECT_LAST: &str = "Select the previous remote Monero node";
pub const P2POOL_SELECT_NEXT: &str = "Select the next remote Monero node";
pub const P2POOL_PING: &str = "Ping the built-in remote Monero nodes";
pub const P2POOL_ADDRESS: &str = "You must use a primary Monero address to mine on P2Pool (starts with a 4). It is highly recommended to create a new wallet since addresses are public on P2Pool!";
pub const P2POOL_COMMUNITY_NODE_WARNING: &str =
r#"TL;DR: Run & use your own Monero Node.
Using a Community Monero Node is convenient but comes at the cost of privacy and reliability.
Using a Remote Monero Node is convenient but comes at the cost of privacy and reliability.
You may encounter connection issues with community nodes which may cause mining performance loss! Late info from laggy nodes will cause your mining jobs to start later than they should.
You may encounter connection issues with remote nodes which may cause mining performance loss! Late info from laggy nodes will cause your mining jobs to start later than they should.
Running and using your own local Monero node improves privacy and ensures your connection is as stable as your own internet connection. This comes at the cost of downloading and syncing Monero's blockchain yourself (currently 155GB). If you have the disk space, consider using the [Advanced] tab and connecting to your own Monero node.
@ -275,7 +275,7 @@ r#"WARNING: Use [--no-color] and make sure to set [--data-api <PATH>] & [--local
Start P2Pool with these arguments and override all below settings"#;
pub const P2POOL_SIMPLE: &str =
r#"Use simple P2Pool settings:
- Remote community Monero node
- Remote Monero node
- Default P2Pool settings + Mini"#;
pub const P2POOL_ADVANCED: &str =
r#"Use advanced P2Pool settings:

@ -1008,7 +1008,7 @@ pub struct P2pool {
pub out_peers: u16,
pub in_peers: u16,
pub log_level: u8,
pub node: crate::node::NodeEnum,
pub node: String,
pub arguments: String,
pub address: String,
pub name: String,
@ -1102,7 +1102,7 @@ impl Default for P2pool {
out_peers: 10,
in_peers: 10,
log_level: 3,
node: crate::NodeEnum::C3pool,
node: crate::RemoteNode::new().to_string(),
arguments: String::new(),
address: String::with_capacity(96),
name: "Local Monero Node".to_string(),

@ -50,6 +50,7 @@ use crate::{
P2poolRegex,
xmr::*,
macros::*,
RemoteNode,
};
use sysinfo::SystemExt;
use serde::{Serialize,Deserialize};
@ -374,7 +375,7 @@ impl Helper {
// [Simple]
if state.simple {
// Build the p2pool argument
let (ip, rpc, zmq) = crate::node::enum_to_ip_rpc_zmq_tuple(state.node); // Get: (IP, RPC, ZMQ)
let (ip, rpc, zmq) = RemoteNode::get_ip_rpc_zmq(&state.node); // Get: (IP, RPC, ZMQ)
args.push("--wallet".to_string()); args.push(state.address.clone()); // Wallet address
args.push("--host".to_string()); args.push(ip.to_string()); // IP Address
args.push("--rpc-port".to_string()); args.push(rpc.to_string()); // RPC Port

@ -176,7 +176,7 @@ impl App {
fn new(now: Instant) -> Self {
info!("Initializing App Struct...");
debug!("App Init | P2Pool & XMRig processes...");
info!("App Init | P2Pool & XMRig processes...");
let p2pool = arc_mut!(Process::new(ProcessName::P2pool, String::new(), PathBuf::new()));
let xmrig = arc_mut!(Process::new(ProcessName::Xmrig, String::new(), PathBuf::new()));
let p2pool_api = arc_mut!(PubP2poolApi::new());
@ -184,7 +184,7 @@ impl App {
let p2pool_img = arc_mut!(ImgP2pool::new());
let xmrig_img = arc_mut!(ImgXmrig::new());
debug!("App Init | Sysinfo...");
info!("App Init | Sysinfo...");
// We give this to the [Helper] thread.
let mut sysinfo = sysinfo::System::new_with_specifics(
sysinfo::RefreshKind::new()
@ -198,7 +198,7 @@ impl App {
};
let pub_sys = arc_mut!(Sys::new());
debug!("App Init | The rest of the [App]...");
info!("App Init | The rest of the [App]...");
let mut app = Self {
tab: Tab::default(),
ping: arc_mut!(Ping::new()),
@ -250,7 +250,7 @@ impl App {
regex: Regexes::new(),
};
//---------------------------------------------------------------------------------------------------- App init data that *could* panic
debug!("App Init | Getting EXE path...");
info!("App Init | Getting EXE path...");
let mut panic = String::new();
// Get exe path
app.exe = match get_exe() {
@ -268,7 +268,7 @@ impl App {
Err(e) => { panic = format!("get_os_data_path(): {}", e); app.error_state.set(panic.clone(), ErrorFerris::Panic, ErrorButtons::Quit); PathBuf::new() },
};
debug!("App Init | Setting TOML path...");
info!("App Init | Setting TOML path...");
// Set [*.toml] path
app.state_path = app.os_data_path.clone();
app.state_path.push(STATE_TOML);
@ -283,11 +283,11 @@ impl App {
// Apply arg state
// It's not safe to [--reset] if any of the previous variables
// are unset (null path), so make sure we just abort if the [panic] String contains something.
debug!("App Init | Applying argument state...");
info!("App Init | Applying argument state...");
let mut app = parse_args(app, panic);
// Read disk state
debug!("App Init | Reading disk state...");
info!("App Init | Reading disk state...");
use TomlError::*;
app.state = match State::get(&app.state_path) {
Ok(toml) => toml,
@ -307,7 +307,7 @@ impl App {
};
app.og = arc_mut!(app.state.clone());
// Read node list
debug!("App Init | Reading node list...");
info!("App Init | Reading node list...");
app.node_vec = match Node::get(&app.node_path) {
Ok(toml) => toml,
Err(err) => {
@ -328,7 +328,7 @@ impl App {
debug!("Node Vec:");
debug!("{:#?}", app.node_vec);
// Read pool list
debug!("App Init | Reading pool list...");
info!("App Init | Reading pool list...");
app.pool_vec = match Pool::get(&app.pool_path) {
Ok(toml) => toml,
Err(err) => {
@ -353,7 +353,7 @@ impl App {
// Read [GupaxP2poolApi] disk files
let mut gupax_p2pool_api = lock!(app.gupax_p2pool_api);
match GupaxP2poolApi::create_all_files(&app.gupax_p2pool_api_path) {
Ok(_) => debug!("App Init | Creating Gupax-P2Pool API files ... OK"),
Ok(_) => info!("App Init | Creating Gupax-P2Pool API files ... OK"),
Err(err) => {
error!("GupaxP2poolApi ... {}", err);
match err {
@ -367,7 +367,7 @@ impl App {
};
},
}
debug!("App Init | Reading Gupax-P2Pool API files...");
info!("App Init | Reading Gupax-P2Pool API files...");
match gupax_p2pool_api.read_all_files_and_update() {
Ok(_) => {
info!(
@ -395,7 +395,7 @@ impl App {
//----------------------------------------------------------------------------------------------------
let mut og = lock!(app.og); // Lock [og]
// Handle max threads
debug!("App Init | Handling max thread overflow...");
info!("App Init | Handling max thread overflow...");
og.xmrig.max_threads = app.max_threads;
let current = og.xmrig.current_threads;
let max = og.xmrig.max_threads;
@ -403,7 +403,7 @@ impl App {
og.xmrig.current_threads = max;
}
// Handle [node_vec] overflow
debug!("App Init | Handling [node_vec] overflow");
info!("App Init | Handling [node_vec] overflow");
if og.p2pool.selected_index > app.og_node_vec.len() {
warn!("App | Overflowing manual node index [{} > {}], resetting to 1", og.p2pool.selected_index, app.og_node_vec.len());
let (name, node) = app.og_node_vec[0].clone();
@ -419,7 +419,7 @@ impl App {
app.state.p2pool.selected_zmq = node.zmq;
}
// Handle [pool_vec] overflow
debug!("App Init | Handling [pool_vec] overflow...");
info!("App Init | Handling [pool_vec] overflow...");
if og.xmrig.selected_index > app.og_pool_vec.len() {
warn!("App | Overflowing manual pool index [{} > {}], resetting to 1", og.xmrig.selected_index, app.og_pool_vec.len());
let (name, pool) = app.og_pool_vec[0].clone();
@ -434,24 +434,26 @@ impl App {
}
// Apply TOML values to [Update]
debug!("App Init | Applying TOML values to [Update]...");
info!("App Init | Applying TOML values to [Update]...");
let p2pool_path = og.gupax.absolute_p2pool_path.clone();
let xmrig_path = og.gupax.absolute_xmrig_path.clone();
let tor = og.gupax.update_via_tor;
app.update = arc_mut!(Update::new(app.exe.clone(), p2pool_path, xmrig_path, tor));
debug!("App Init | Setting state Gupax version...");
// Set state version as compiled in version
info!("App Init | Setting state Gupax version...");
lock!(og.version).gupax = GUPAX_VERSION.to_string();
lock!(app.state.version).gupax = GUPAX_VERSION.to_string();
debug!("App Init | Setting saved [Tab]...");
// Set saved [Tab]
info!("App Init | Setting saved [Tab]...");
app.tab = app.state.gupax.tab;
// Check if [P2pool.node] exists
info!("App Init | Checking if saved remote node still exists...");
app.state.p2pool.node = RemoteNode::check_exists(&app.state.p2pool.node);
drop(og); // Unlock [og]
info!("App ... OK");
// Spawn the "Helper" thread.
info!("Helper | Spawning helper thread...");
@ -459,7 +461,7 @@ impl App {
info!("Helper ... OK");
// Check for privilege. Should be Admin on [Windows] and NOT root on Unix.
debug!("App Init | Checking for privilege level...");
info!("App Init | Checking for privilege level...");
#[cfg(target_os = "windows")]
if is_elevated::is_elevated() {
app.admin = true;
@ -482,6 +484,7 @@ impl App {
app.error_state.set(format!("macOS thinks Gupax is a virus!\n(macOS has relocated Gupax for security reasons)\n\nThe directory: [{}]\nSince this is a private read-only directory, it causes issues with updates and correctly locating P2Pool/XMRig. Please move Gupax into the [Applications] directory, this lets macOS relax a little.\n", app.exe), ErrorFerris::Panic, ErrorButtons::Quit);
}
info!("App ... OK");
app
}
}

@ -35,28 +35,28 @@ use hyper::{
// The format is an array of tuples consisting of: (ARRAY_INDEX, IP, LOCATION, RPC_PORT, ZMQ_PORT)
pub const REMOTE_NODES: [(usize, &str, &str, &str, &str); 22] = [
(0, "monero.10z.com.ar", "🇦🇷 AR - Buenos Aires F.D.", "18089", "18084"),
(1, "escom.sadovo.com", "🇧🇬 BG - Plovdiv", "18089", "18084"),
(2, "monero2.10z.com.ar", "🇧🇷 BR - São Paulo", "18089", "18083"),
(3, "monero1.heitechsoft.com", "🇨🇦 CA - Ontario", "18081", "18084"),
(4, "node.monerodevs.org", "🇨🇦 CA - Quebec", "18089", "18084"),
(5, "de.poiuty.com", "🇩🇪 DE - Berlin", "18081", "18084"),
(6, "m1.poiuty.com", "🇩🇪 DE - Berlin", "18081", "18084"),
(7, "p2pmd.xmrvsbeast.com", "🇩🇪 DE - Hesse", "18081", "18083"),
(8, "fbx.tranbert.com", "🇫🇷 FR - Île-de-France", "18089", "18084"),
(9, "reynald.ro", "🇫🇷 FR - Île-de-France", "18089", "18084"),
(10, "node2.monerodevs.org", "🇫🇷 FR - Occitanie", "18089", "18084"),
(11, "monero.homeqloud.com", "🇬🇷 GR - East Macedonia and Thrace", "18089", "18083"),
(12, "home.allantaylor.kiwi", "🇳🇿 NZ - Canterbury", "18089", "18083"),
(13, "ru.poiuty.com", "🇷🇺 RU - Kuzbass", "18081", "18084"),
(14, "radishfields.hopto.org", "🇺🇸 US - Colorado", "18081", "18084"),
(15, "xmrbandwagon.hopto.org", "🇺🇸 US - Colorado", "18081", "18084"),
(16, "xmr.spotlightsound.com", "🇺🇸 US - Kansas", "18081", "18084"),
(17, "xmrnode.facspro.net", "🇺🇸 US - Nebraska", "18089", "18084"),
(18, "jameswillhoite.com", "🇺🇸 US - Ohio", "18089", "18084"),
(19, "moneronode.ddns.net", "🇺🇸 US - Pennsylvania", "18089", "18084"),
(20, "node.richfowler.net", "🇺🇸 US - Pennsylvania", "18089", "18084"),
(21, "bunkernet.ddns.net", "🇿🇦 ZA - Western Cape", "18089", "18084"),
(0, "monero.10z.com.ar", "AR - Buenos Aires F.D.", "18089", "18084"),
(1, "escom.sadovo.com", "BG - Plovdiv", "18089", "18084"),
(2, "monero2.10z.com.ar", "BR - São Paulo", "18089", "18083"),
(3, "monero1.heitechsoft.com", "CA - Ontario", "18081", "18084"),
(4, "node.monerodevs.org", "CA - Quebec", "18089", "18084"),
(5, "de.poiuty.com", "DE - Berlin", "18081", "18084"),
(6, "m1.poiuty.com", "DE - Berlin", "18081", "18084"),
(7, "p2pmd.xmrvsbeast.com", "DE - Hesse", "18081", "18083"),
(8, "fbx.tranbert.com", "FR - Île-de-France", "18089", "18084"),
(9, "reynald.ro", "FR - Île-de-France", "18089", "18084"),
(10, "node2.monerodevs.org", "FR - Occitanie", "18089", "18084"),
(11, "monero.homeqloud.com", "GR - East Macedonia and Thrace", "18089", "18083"),
(12, "home.allantaylor.kiwi", "NZ - Canterbury", "18089", "18083"),
(13, "ru.poiuty.com", "RU - Kuzbass", "18081", "18084"),
(14, "radishfields.hopto.org", "US - Colorado", "18081", "18084"),
(15, "xmrbandwagon.hopto.org", "US - Colorado", "18081", "18084"),
(16, "xmr.spotlightsound.com", "US - Kansas", "18081", "18084"),
(17, "xmrnode.facspro.net", "US - Nebraska", "18089", "18084"),
(18, "jameswillhoite.com", "US - Ohio", "18089", "18084"),
(19, "moneronode.ddns.net", "US - Pennsylvania", "18089", "18084"),
(20, "node.richfowler.net", "US - Pennsylvania", "18089", "18084"),
(21, "bunkernet.ddns.net", "ZA - Western Cape", "18089", "18084"),
];
pub const REMOTE_NODE_LENGTH: usize = REMOTE_NODES.len();
@ -65,7 +65,7 @@ pub const REMOTE_NODE_MAX_CHARS: usize = 24; // monero1.heitechsoft.com
pub struct RemoteNode {
pub index: usize,
pub ip: &'static str,
pub flag: &'static str,
pub location: &'static str,
pub rpc: &'static str,
pub zmq: &'static str,
}
@ -78,21 +78,33 @@ impl Default for RemoteNode {
impl RemoteNode {
pub fn new() -> Self {
let (index, ip, flag, rpc, zmq) = REMOTE_NODES[0];
let (index, ip, location, rpc, zmq) = REMOTE_NODES[0];
Self {
index,
ip,
flag,
location,
rpc,
zmq,
}
}
pub fn check_exists(og_ip: &str) -> String {
for (_, ip, _, _, _) in REMOTE_NODES {
if og_ip == ip {
info!("Found remote node in array: {}", ip);
return ip.to_string()
}
}
let ip = REMOTE_NODES[0].1.to_string();
warn!("[{}] remote node does not exist, returning default: {}", og_ip, ip);
ip
}
// Returns a default if IP is not found.
pub fn from_ip(from_ip: &str) -> Self {
for (index, ip, flag, rpc, zmq) in REMOTE_NODES {
for (index, ip, location, rpc, zmq) in REMOTE_NODES {
if from_ip == ip {
return Self { index, ip, flag, rpc, zmq }
return Self { index, ip, location, rpc, zmq }
}
}
Self::new()
@ -103,93 +115,74 @@ impl RemoteNode {
if index > REMOTE_NODE_LENGTH {
Self::new()
} else {
let (index, ip, flag, rpc, zmq) = REMOTE_NODES[index];
Self { index, ip, flag, rpc, zmq }
let (index, ip, location, rpc, zmq) = REMOTE_NODES[index];
Self { index, ip, location, rpc, zmq }
}
}
pub fn from_tuple(t: (usize, &'static str, &'static str, &'static str, &'static str)) -> Self {
let (index, ip, flag, rpc, zmq) = (t.0, t.1, t.2, t.3, t.4);
Self { index, ip, flag, rpc, zmq }
let (index, ip, location, rpc, zmq) = (t.0, t.1, t.2, t.3, t.4);
Self { index, ip, location, rpc, zmq }
}
// monero1.heitechsoft.com = 24 max length
pub fn format_ip(&self) -> String {
match self.ip.len() {
1 => format!("{} ", self.ip),
2 => format!("{} ", self.ip),
3 => format!("{} ", self.ip),
4 => format!("{} ", self.ip),
5 => format!("{} ", self.ip),
6 => format!("{} ", self.ip),
7 => format!("{} ", self.ip),
8 => format!("{} ", self.ip),
9 => format!("{} ", self.ip),
10 => format!("{} ", self.ip),
11 => format!("{} ", self.ip),
12 => format!("{} ", self.ip),
13 => format!("{} ", self.ip),
14 => format!("{} ", self.ip),
15 => format!("{} ", self.ip),
16 => format!("{} ", self.ip),
17 => format!("{} ", self.ip),
18 => format!("{} ", self.ip),
19 => format!("{} ", self.ip),
20 => format!("{} ", self.ip),
21 => format!("{} ", self.ip),
22 => format!("{} ", self.ip),
23 => format!("{} ", self.ip),
_ => format!("{}", self.ip),
pub fn get_ip_rpc_zmq(og_ip: &str) -> (&str, &str, &str) {
for (_, ip, _, rpc, zmq) in REMOTE_NODES {
if og_ip == ip { return (ip, rpc, zmq) }
}
let (_, ip, _, rpc, zmq) = REMOTE_NODES[0];
(ip, rpc, zmq)
}
// Return a random node (that isn't the one already selected).
pub fn get_random(&self) -> Self {
let mut rand = thread_rng().gen_range(0..REMOTE_NODE_LENGTH);
while rand == self.index {
rand = thread_rng().gen_range(0..REMOTE_NODE_LENGTH);
pub fn get_random(current_ip: &str) -> String {
let mut rng = thread_rng().gen_range(0..REMOTE_NODE_LENGTH);
let mut node = REMOTE_NODES[rng].1;
while current_ip == node {
rng = thread_rng().gen_range(0..REMOTE_NODE_LENGTH);
node = REMOTE_NODES[rng].1;
}
Self::from_index(rand)
node.to_string()
}
// Return the node [-1] of this one (wraps around)
pub fn get_last(&self) -> Self {
let index = self.index;
if index == 0 {
Self::from_index(REMOTE_NODE_LENGTH-1)
} else {
Self::from_index(index-1)
// Return the node [-1] of this one
pub fn get_last(current_ip: &str) -> String {
let mut found = false;
let mut last = current_ip;
for (_, ip, _, _, _) in REMOTE_NODES {
if found { return ip.to_string() }
if current_ip == ip { found = true; } else { last = ip; }
}
last.to_string()
}
// Return the node [+1] of this one (wraps around)
pub fn get_next(&self) -> Self {
let index = self.index;
if index == REMOTE_NODE_LENGTH-1 {
Self::from_index(0)
} else {
Self::from_index(index+1)
// Return the node [+1] of this one
pub fn get_next(current_ip: &str) -> String {
let mut found = false;
for (_, ip, _, _, _) in REMOTE_NODES {
if found { return ip.to_string() }
if current_ip == ip { found = true; }
}
current_ip.to_string()
}
// This returns relative to the ping.
pub fn get_last_from_ping(&self, nodes: &Vec<NodeData>) -> Self {
pub fn get_last_from_ping(current_ip: &str, nodes: &Vec<NodeData>) -> String {
let mut found = false;
let mut last = self.ip;
let mut last = current_ip;
for data in nodes {
if found { return Self::from_ip(last) }
if self.ip == data.ip { found = true; } else { last = data.ip; }
if found { return last.to_string() }
if current_ip == data.ip { found = true; } else { last = data.ip; }
}
Self::from_ip(last)
last.to_string()
}
pub fn get_next_from_ping(&self, nodes: &Vec<NodeData>) -> Self {
pub fn get_next_from_ping(current_ip: &str, nodes: &Vec<NodeData>) -> String {
let mut found = false;
for data in nodes {
if found { return Self::from_ip(data.ip) }
if self.ip == data.ip { found = true; }
if found { return data.ip.to_string() }
if current_ip == data.ip { found = true; }
}
*self
current_ip.to_string()
}
}
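// Illustrative usage of the String-based helpers above (hypothetical call-sites, not lines from this file):
//
//     let mut selected: String = RemoteNode::new().to_string();   // default node as an IP/domain String
//     selected = RemoteNode::check_exists(&selected);             // fall back to the default if unknown
//     let (ip, rpc, zmq) = RemoteNode::get_ip_rpc_zmq(&selected); // arguments for the P2Pool command line
//     selected = RemoteNode::get_next(&selected);                 // step through the node list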
@ -199,6 +192,58 @@ impl std::fmt::Display for RemoteNode {
}
}
//---------------------------------------------------------------------------------------------------- Formatting
// 5000 = 4 max length
pub fn format_ms(ms: u128) -> String {
match ms.to_string().len() {
1 => format!("{}ms ", ms),
2 => format!("{}ms ", ms),
3 => format!("{}ms ", ms),
_ => format!("{}ms", ms),
}
}
// format_ip_location(monero1.heitechsoft.com) -> "monero1.heitechsoft.com | XX - LOCATION"
// [extra_space] controls whether extra space is appended so the list aligns.
pub fn format_ip_location(og_ip: &str, extra_space: bool) -> String {
for (_, ip, location, _, _) in REMOTE_NODES {
if og_ip == ip {
let ip = if extra_space { format_ip(ip) } else { ip.to_string() };
return format!("{} | {}", ip, location)
}
}
"??? | ???".to_string()
}
// monero1.heitechsoft.com = 24 max length
pub fn format_ip(ip: &str) -> String {
match ip.len() {
1 => format!("{} ", ip),
2 => format!("{} ", ip),
3 => format!("{} ", ip),
4 => format!("{} ", ip),
5 => format!("{} ", ip),
6 => format!("{} ", ip),
7 => format!("{} ", ip),
8 => format!("{} ", ip),
9 => format!("{} ", ip),
10 => format!("{} ", ip),
11 => format!("{} ", ip),
12 => format!("{} ", ip),
13 => format!("{} ", ip),
14 => format!("{} ", ip),
15 => format!("{} ", ip),
16 => format!("{} ", ip),
17 => format!("{} ", ip),
18 => format!("{} ", ip),
19 => format!("{} ", ip),
20 => format!("{} ", ip),
21 => format!("{} ", ip),
22 => format!("{} ", ip),
_ => format!("{}", ip),
}
}
//---------------------------------------------------------------------------------------------------- Node data
#[derive(Debug, Clone)]
pub struct NodeData {
@ -210,7 +255,7 @@ pub struct NodeData {
impl NodeData {
pub fn new_vec() -> Vec<Self> {
let mut vec = Vec::new();
for tuple in REMOTE_NODES {
for (_, ip, _, _, _) in REMOTE_NODES {
vec.push(Self {
ip,
ms: 0,
@ -221,16 +266,6 @@ impl NodeData {
}
}
// 5000 = 4 max length
pub fn format_ms(ms: u128) -> String {
match ms.to_string().len() {
1 => format!("{}ms ", ms),
2 => format!("{}ms ", ms),
3 => format!("{}ms ", ms),
_ => format!("{}ms", ms),
}
}
//---------------------------------------------------------------------------------------------------- Ping data
#[derive(Debug)]
pub struct Ping {
@ -300,8 +335,8 @@ impl Ping {
// This used to be done 3x linearly but after testing, sending a single
// JSON-RPC call to all IPs asynchronously resulted in the same data.
//
// <200ms = GREEN
// <500ms = YELLOW
// <300ms = GREEN
// >300ms = YELLOW
// >500ms = RED
// timeout = BLACK
// default = GRAY
@ -325,17 +360,17 @@ impl Ping {
let mut handles = Vec::with_capacity(REMOTE_NODE_LENGTH);
let node_vec = arc_mut!(Vec::with_capacity(REMOTE_NODE_LENGTH));
for (index, ip, location, rpc, zmq) in REMOTE_NODES {
for (_, ip, _, rpc, zmq) in REMOTE_NODES {
let client = client.clone();
let ping = Arc::clone(&ping);
let node_vec = Arc::clone(&node_vec);
let request = Request::builder()
.method("POST")
.uri("http://".to_string() + ip + "/json_rpc")
.uri("http://".to_string() + ip + ":" + rpc + "/json_rpc")
.header("User-Agent", rand_user_agent)
.body(hyper::Body::from(r#"{"jsonrpc":"2.0","id":"0","method":"get_info"}"#))
.unwrap();
let handle = tokio::task::spawn(async move { Self::response(client, request, ip, rpc, ping, percent, node_vec).await; });
let handle = tokio::task::spawn(async move { Self::response(client, request, ip, ping, percent, node_vec).await; });
handles.push(handle);
}
@ -356,7 +391,7 @@ impl Ping {
Ok(fastest_info)
}
async fn response(client: Client<HttpConnector>, request: Request<Body>, ip: &'static str, rpc: &'static str, ping: Arc<Mutex<Self>>, percent: f32, node_vec: Arc<Mutex<Vec<NodeData>>>) {
async fn response(client: Client<HttpConnector>, request: Request<Body>, ip: &'static str, ping: Arc<Mutex<Self>>, percent: f32, node_vec: Arc<Mutex<Vec<NodeData>>>) {
let ms;
let info;
let now = Instant::now();
@ -373,7 +408,7 @@ impl Ping {
},
};
let color;
if ms < 200 {
if ms < 300 {
color = GREEN;
} else if ms < 500 {
color = YELLOW;

@ -126,7 +126,7 @@ impl crate::disk::P2pool {
let mut ping = lock!(ping);
// If we haven't auto_selected yet, auto-select and turn it off
if ping.pinged && !ping.auto_selected {
self.node = ping.fastest;
self.node = ping.fastest.to_string();
ping.auto_selected = true;
}
drop(ping);
@ -137,27 +137,26 @@ impl crate::disk::P2pool {
debug!("P2Pool Tab | Rendering [Ping List]");
// [Ping List]
let id = self.node;
let ip = enum_to_ip(id);
let mut ms = 0;
let mut color = Color32::LIGHT_GRAY;
if lock!(ping).pinged {
for data in lock!(ping).nodes.iter() {
if data.id == id {
if data.ip == self.node {
ms = data.ms;
color = data.color;
break
}
}
}
debug!("P2Pool Tab | Rendering [ComboBox] of Community Nodes");
let text = RichText::new(format!("{}ms | {} | {}", ms, id, ip)).color(color);
ComboBox::from_id_source("community_nodes").selected_text(RichText::text_style(text, Monospace)).show_ui(ui, |ui| {
debug!("P2Pool Tab | Rendering [ComboBox] of Remote Nodes");
let ip_location = crate::node::format_ip_location(&self.node, false);
let text = RichText::new(format!("{}ms | {}", ms, ip_location)).color(color);
ComboBox::from_id_source("remote_nodes").selected_text(RichText::text_style(text, Monospace)).show_ui(ui, |ui| {
for data in lock!(ping).nodes.iter() {
let ms = crate::node::format_ms(data.ms);
let id = crate::node::format_enum(data.id);
let text = RichText::text_style(RichText::new(format!("{} | {} | {}", ms, id, data.ip)).color(data.color), Monospace);
ui.selectable_value(&mut self.node, data.id, text);
let ip_location = crate::node::format_ip_location(data.ip, true);
let text = RichText::text_style(RichText::new(format!("{} | {}", ms, ip_location)).color(data.color), Monospace);
ui.selectable_value(&mut self.node, data.ip.to_string(), text);
}
});
});
@ -169,15 +168,15 @@ impl crate::disk::P2pool {
let width = (width/5.0)-6.0;
// [Select random node]
if ui.add_sized([width, height], Button::new("Select random node")).on_hover_text(P2POOL_SELECT_RANDOM).clicked() {
self.node = NodeEnum::get_random(&self.node);
self.node = RemoteNode::get_random(&self.node);
}
// [Select fastest node]
if ui.add_sized([width, height], Button::new("Select fastest node")).on_hover_text(P2POOL_SELECT_FASTEST).clicked() && lock!(ping).pinged {
self.node = lock!(ping).fastest;
self.node = lock!(ping).fastest.to_string();
}
// [Ping Button]
ui.add_enabled_ui(!lock!(ping).pinging, |ui| {
if ui.add_sized([width, height], Button::new("Ping community nodes")).on_hover_text(P2POOL_PING).clicked() {
if ui.add_sized([width, height], Button::new("Ping remote nodes")).on_hover_text(P2POOL_PING).clicked() {
Ping::spawn_thread(ping);
}
});
@ -185,8 +184,8 @@ impl crate::disk::P2pool {
if ui.add_sized([width, height], Button::new("⬅ Last")).on_hover_text(P2POOL_SELECT_LAST).clicked() {
let ping = lock!(ping);
match ping.pinged {
true => self.node = NodeEnum::get_last_from_ping(&self.node, &ping.nodes),
false => self.node = NodeEnum::get_last(&self.node),
true => self.node = RemoteNode::get_last_from_ping(&self.node, &ping.nodes),
false => self.node = RemoteNode::get_last(&self.node),
}
drop(ping);
}
@ -194,8 +193,8 @@ impl crate::disk::P2pool {
if ui.add_sized([width, height], Button::new("Next ➡")).on_hover_text(P2POOL_SELECT_NEXT).clicked() {
let ping = lock!(ping);
match ping.pinged {
true => self.node = NodeEnum::get_next_from_ping(&self.node, &ping.nodes),
false => self.node = NodeEnum::get_next(&self.node),
true => self.node = RemoteNode::get_next_from_ping(&self.node, &ping.nodes),
false => self.node = RemoteNode::get_next(&self.node),
}
drop(ping);
}