Merge branch 'dev' into evo

This commit is contained in:
XMRig 2020-05-23 14:36:27 +07:00
commit 07025dc41b
No known key found for this signature in database
GPG key ID: 446A53638BE94409
503 changed files with 33807 additions and 14394 deletions

26
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file
View file

@ -0,0 +1,26 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Required data**
- Miner log as text or screenshot
- Config file or command line (without wallets)
- OS: [e.g. Windows]
- For GPU related issues: information about GPUs and driver version.
**Additional context**
Add any other context about the problem here.

View file

@ -1,3 +1,145 @@
# v5.11.2
- [#1664](https://github.com/xmrig/xmrig/pull/1664) Improved JSON config error reporting.
- [#1668](https://github.com/xmrig/xmrig/pull/1668) Optimized RandomX dataset initialization.
- [#1675](https://github.com/xmrig/xmrig/pull/1675) Fixed cross-compiling on Linux.
- Fixed memory leak in HTTP client.
- Build [dependencies](https://github.com/xmrig/xmrig-deps/releases/tag/v4.1) updated to recent versions.
- Compiler for Windows gcc builds updated to v10.1.
# v5.11.1
- [#1652](https://github.com/xmrig/xmrig/pull/1652) Up to 1% RandomX performance improvement on recent AMD CPUs.
- [#1306](https://github.com/xmrig/xmrig/issues/1306) Fixed possible double connection to a pool.
- [#1654](https://github.com/xmrig/xmrig/issues/1654) Fixed build with LibreSSL.
# v5.11.0
- **[#1632](https://github.com/xmrig/xmrig/pull/1632) Added AstroBWT CUDA support ([CUDA plugin](https://github.com/xmrig/xmrig-cuda) v3.0.0 or newer required).**
- [#1605](https://github.com/xmrig/xmrig/pull/1605) Fixed AstroBWT OpenCL for NVIDIA GPUs.
- [#1635](https://github.com/xmrig/xmrig/pull/1635) Added pooled memory allocation of RandomX VMs (+0.5% speedup on Zen2).
- [#1641](https://github.com/xmrig/xmrig/pull/1641) RandomX JIT refactoring, smaller memory footprint and a bit faster overall.
- [#1643](https://github.com/xmrig/xmrig/issues/1643) Fixed build on CentOS 7.
# v5.10.0
- [#1602](https://github.com/xmrig/xmrig/pull/1602) Added AMD GPUs support for AstroBWT algorithm.
- [#1590](https://github.com/xmrig/xmrig/pull/1590) MSR mod automatically deactivated after switching from RandomX algorithms.
- [#1592](https://github.com/xmrig/xmrig/pull/1592) Added AVX2 optimized code for AstroBWT algorithm.
- Added new config option `astrobwt-avx2` in `cpu` object and command line option `--astrobwt-avx2`.
- [#1596](https://github.com/xmrig/xmrig/issues/1596) Major TLS (Transport Layer Security) subsystem update.
- Added new TLS options, please check [xmrig-proxy documentation](https://xmrig.com/docs/proxy/tls) for details.
- `cn/gpu` algorithm is now disabled by default and will be removed in the next major (v6.x.x) release; no ETA for it right now.
- Added command line option `--data-dir`.
# v5.9.0
- [#1578](https://github.com/xmrig/xmrig/pull/1578) Added new RandomKEVA algorithm for upcoming Kevacoin fork, as `"algo": "rx/keva"` or `"coin": "keva"`.
- [#1584](https://github.com/xmrig/xmrig/pull/1584) Fixed invalid AstroBWT hashes after algorithm switching.
- [#1585](https://github.com/xmrig/xmrig/issues/1585) Fixed build without HTTP support.
- Added command line option `--astrobwt-max-size`.
# v5.8.2
- [#1580](https://github.com/xmrig/xmrig/pull/1580) AstroBWT algorithm 20-50% speedup.
- Added new option `astrobwt-max-size`.
- [#1581](https://github.com/xmrig/xmrig/issues/1581) Fixed macOS build.
# v5.8.1
- [#1575](https://github.com/xmrig/xmrig/pull/1575) Fixed new block detection for DERO solo mining.
# v5.8.0
- [#1573](https://github.com/xmrig/xmrig/pull/1573) Added new AstroBWT algorithm for upcoming DERO fork, as `"algo": "astrobwt"` or `"coin": "dero"`.
# v5.7.0
- **Added SOCKS5 proxy support for Tor: https://xmrig.com/docs/miner/tor (see the example after this list).**
- [#377](https://github.com/xmrig/xmrig-proxy/issues/377) Fixed duplicate jobs in daemon (solo) mining client.
- [#1560](https://github.com/xmrig/xmrig/pull/1560) RandomX 0.3-0.4% speedup depending on CPU.
- Fixed possible crashes in HTTP client.
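The new proxy option can be combined with a normal pool connection; a quick sketch (Tor's default SOCKS5 port is 9050, the pool address and wallet are placeholders):

```bash
# Route the stratum connection through a local Tor SOCKS5 proxy.
./xmrig -o pool.example.com:3333 -u YOUR_WALLET -x 127.0.0.1:9050
```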
# v5.6.0
- [#1536](https://github.com/xmrig/xmrig/pull/1536) Added workaround for new AMD GPU drivers.
- [#1546](https://github.com/xmrig/xmrig/pull/1546) Fixed generic OpenCL code for AMD Navi GPUs.
- [#1551](https://github.com/xmrig/xmrig/pull/1551) Added RandomX JIT for AMD Navi GPUs.
- Added health information for AMD GPUs (clocks/power/fan/temperature) via ADL (Windows) and sysfs (Linux).
- Fixed possible nicehash nonce overflow in some conditions.
- Fixed wrong OpenCL platform on macOS, option `platform` now ignored on this OS.
# v5.5.3
- [#1529](https://github.com/xmrig/xmrig/pull/1529) Fixed crash on Bulldozer CPUs.
# v5.5.2
- [#1500](https://github.com/xmrig/xmrig/pull/1500) Removed unnecessary code from RandomX JIT compiler.
- [#1502](https://github.com/xmrig/xmrig/pull/1502) Optimizations for AMD Bulldozer.
- [#1508](https://github.com/xmrig/xmrig/pull/1508) Added support for BMI2 instructions.
- [#1510](https://github.com/xmrig/xmrig/pull/1510) Optimized `CFROUND` instruction for RandomX.
- [#1520](https://github.com/xmrig/xmrig/pull/1520) Fixed thread affinity.
# v5.5.1
- [#1469](https://github.com/xmrig/xmrig/issues/1469) Fixed build with gcc 4.8.
- [#1473](https://github.com/xmrig/xmrig/pull/1473) Added RandomX auto-config for mobile Ryzen APUs.
- [#1477](https://github.com/xmrig/xmrig/pull/1477) Fixed build with Clang.
- [#1489](https://github.com/xmrig/xmrig/pull/1489) RandomX JIT compiler tweaks.
- [#1493](https://github.com/xmrig/xmrig/pull/1493) Default value for Intel MSR preset changed to `15`.
- Fixed unwanted resume after RandomX dataset change.
# v5.5.0
- [#179](https://github.com/xmrig/xmrig/issues/179) Added support for [environment variables](https://xmrig.com/docs/miner/environment-variables) in config file.
- [#1445](https://github.com/xmrig/xmrig/pull/1445) Removed `rx/v` algorithm.
- [#1453](https://github.com/xmrig/xmrig/issues/1453) Fixed crash on 32bit systems.
- [#1459](https://github.com/xmrig/xmrig/issues/1459) Fixed crash on very low memory systems.
- [#1465](https://github.com/xmrig/xmrig/pull/1465) Added fix for 1st-gen Ryzen crashes.
- [#1466](https://github.com/xmrig/xmrig/pull/1466) Added `cn-pico/tlo` algorithm.
- Added `--randomx-no-rdmsr` command line option.
- Added console title for Windows with miner name and version.
- On Windows the `priority` option now also changes the base priority.
# v5.4.0
- [#1434](https://github.com/xmrig/xmrig/pull/1434) Added RandomSFX (`rx/sfx`) algorithm for Safex Cash.
- [#1445](https://github.com/xmrig/xmrig/pull/1445) Added RandomV (`rx/v`) algorithm for *new* MoneroV.
- [#1419](https://github.com/xmrig/xmrig/issues/1419) Added reverting of MSR changes on miner exit; use `"rdmsr": false,` in the `"randomx"` object to disable this feature.
- [#1423](https://github.com/xmrig/xmrig/issues/1423) Fixed conflicts with an existing WinRing0 driver service.
- [#1425](https://github.com/xmrig/xmrig/issues/1425) Fixed crash on first-generation Zen CPUs (the MSR mod accidentally enabled the Opcache); additionally, you can now disable the Opcache and enable the MSR mod via config `"wrmsr": ["0xc0011020:0x0", "0xc0011021:0x60", "0xc0011022:0x510000", "0xc001102b:0x1808cc16"],`.
- Added advanced usage for the `wrmsr` option, for example: `"wrmsr": ["0x1a4:0x6"],` (Intel) and `"wrmsr": ["0xc0011020:0x0", "0xc0011021:0x40:0xffffffffffffffdf", "0xc0011022:0x510000", "0xc001102b:0x1808cc16"],` (Ryzen); see the config sketch after this list.
- Added new config option `"verbose"` and command line option `--verbose`.
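A minimal sketch of how these `randomx` options could be combined in a JSON config (the register value is the Intel example quoted above; the fragment is illustrative, not a recommended preset):

```bash
# Write an illustrative config fragment: apply a custom Intel MSR preset and
# disable the revert-on-exit behaviour described above.
cat > config-msr-example.json <<'EOF'
{
    "randomx": {
        "wrmsr": ["0x1a4:0x6"],
        "rdmsr": false
    }
}
EOF
```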
# v5.3.0
- [#1414](https://github.com/xmrig/xmrig/pull/1414) Added native MSR support for Windows, by using signed **WinRing0 driver** (© 2007-2009 OpenLibSys.org).
- Added new [MSR documentation](https://xmrig.com/docs/miner/randomx-optimization-guide/msr).
- [#1418](https://github.com/xmrig/xmrig/pull/1418) Increased stratum send buffer size.
# v5.2.1
- [#1408](https://github.com/xmrig/xmrig/pull/1408) Added RandomX boost script for Linux (if you don't want to run the miner with root privileges).
- Added support for [AMD Ryzen MSR registers](https://www.reddit.com/r/MoneroMining/comments/e962fu/9526_hs_on_ryzen_7_3700x_xmrig_520_1gb_pages_msr/) (Linux only).
- Fixed the command line option `--randomx-wrmsr` when used without parameters.
# v5.2.0
- **[#1388](https://github.com/xmrig/xmrig/pull/1388) Added [1GB huge pages support](https://xmrig.com/docs/miner/hugepages#onegb-huge-pages) for Linux.**
- Added new option `1gb-pages` in `randomx` object with command line equivalent `--randomx-1gb-pages`.
- Added automatic huge pages configuration on Linux when the miner is run with root privileges (a combined command line example follows this list).
- **Added [automatic Intel prefetchers configuration](https://xmrig.com/docs/miner/randomx-optimization-guide#intel-specific-optimizations) on Linux.**
- Added new option `wrmsr` in `randomx` object with command line equivalent `--randomx-wrmsr=6`.
- [#1396](https://github.com/xmrig/xmrig/pull/1396) [#1401](https://github.com/xmrig/xmrig/pull/1401) New performance optimizations for Ryzen CPUs.
- [#1385](https://github.com/xmrig/xmrig/issues/1385) Added `max-threads-hint` option support for RandomX dataset initialization threads.
- [#1386](https://github.com/xmrig/xmrig/issues/1386) Added `priority` option support for RandomX dataset initialization threads.
- For official builds all dependencies (libuv, hwloc, openssl) updated to recent versions.
- Windows `msvc` builds now use Visual Studio 2019 instead of 2017.
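A hedged example combining the Linux-only options from this release on the command line (requires root; pool address and wallet are placeholders):

```bash
# --randomx-1gb-pages requests 1GB huge pages for the RandomX dataset,
# --randomx-wrmsr=6 writes value 6 to Intel MSR register 0x1a4 (prefetcher control).
sudo ./xmrig -o pool.example.com:3333 -u YOUR_WALLET \
     --randomx-1gb-pages --randomx-wrmsr=6
```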
# v5.1.1
- [#1365](https://github.com/xmrig/xmrig/issues/1365) Fixed various system response/stability issues.
- Added new CPU option `yield` and command line equivalent `--cpu-no-yield`.
- [#1363](https://github.com/xmrig/xmrig/issues/1363) Fixed wrong priority of main miner thread.
# v5.1.0
- [#1351](https://github.com/xmrig/xmrig/pull/1351) RandomX optimizations and fixes.
- Improved RandomX performance (up to +6-7% on Intel CPUs, +2-3% on Ryzen CPUs).
- Added a workaround for the Intel JCC erratum; see https://www.phoronix.com/scan.php?page=article&item=intel-jcc-microcode&num=1 for details.
- Note! Always disable "Hardware prefetcher" and "Adjacent cacheline prefetch" in BIOS for Intel CPUs to get optimal RandomX performance.
- [#1307](https://github.com/xmrig/xmrig/issues/1307) Fixed mining resume after donation round for pools with `self-select` feature.
- [#1318](https://github.com/xmrig/xmrig/issues/1318#issuecomment-559676080) Added option `"mode"` (or `--randomx-mode`) for RandomX.
- Added memory information on miner startup.
- Added `resources` field to summary API with memory information and load average.
# v5.0.1
- [#1234](https://github.com/xmrig/xmrig/issues/1234) Fixed compatibility with some AMD GPUs.
- [#1284](https://github.com/xmrig/xmrig/issues/1284) Fixed build without RandomX.
- [#1285](https://github.com/xmrig/xmrig/issues/1285) Added command line options `--cuda-bfactor-hint` and `--cuda-bsleep-hint`.
- [#1290](https://github.com/xmrig/xmrig/pull/1290) Fixed 32-bit ARM compilation.
# v5.0.0
This is the first stable unified 3-in-1 GPU+CPU release: OpenCL support is built into the miner and requires no additional external dependencies at compile time, while NVIDIA CUDA is available as an external [CUDA plugin](https://github.com/xmrig/xmrig-cuda); for convenience, 3-in-1 downloads with a recent CUDA version are also provided.

View file

@ -6,17 +6,21 @@ option(WITH_HWLOC "Enable hwloc support" ON)
option(WITH_CN_LITE "Enable CryptoNight-Lite algorithms family" ON)
option(WITH_CN_HEAVY "Enable CryptoNight-Heavy algorithms family" ON)
option(WITH_CN_PICO "Enable CryptoNight-Pico algorithm" ON)
option(WITH_CN_GPU "Enable CryptoNight-GPU algorithm" ON)
option(WITH_CN_GPU "Enable CryptoNight-GPU algorithm" OFF)
option(WITH_RANDOMX "Enable RandomX algorithms family" ON)
option(WITH_ARGON2 "Enable Argon2 algorithms family" ON)
option(WITH_ASTROBWT "Enable AstroBWT algorithms family" ON)
option(WITH_HTTP "Enable HTTP protocol support (client/server)" ON)
option(WITH_DEBUG_LOG "Enable debug log output" OFF)
option(WITH_TLS "Enable OpenSSL support" ON)
option(WITH_ASM "Enable ASM PoW implementations" ON)
option(WITH_MSR "Enable MSR mod & 1st-gen Ryzen fix" ON)
option(WITH_ENV_VARS "Enable environment variables support in config file" ON)
option(WITH_EMBEDDED_CONFIG "Enable internal embedded JSON config" OFF)
option(WITH_OPENCL "Enable OpenCL backend" ON)
option(WITH_CUDA "Enable CUDA backend" ON)
option(WITH_NVML "Enable NVML (NVIDIA Management Library) support (only if CUDA backend enabled)" ON)
option(WITH_ADL "Enable ADL (AMD Display Library) or sysfs support (only if OpenCL backend enabled)" ON)
option(WITH_STRICT_CACHE "Enable strict checks for OpenCL cache" ON)
option(WITH_INTERLEAVE_DEBUG_LOG "Enable debug log for threads interleave" OFF)
@ -30,6 +34,7 @@ set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_SOURCE_DIR}/cmake")
include (CheckIncludeFile)
include (cmake/cpu.cmake)
include (cmake/os.cmake)
include (src/base/base.cmake)
include (src/backend/backend.cmake)
@ -50,7 +55,6 @@ set(HEADERS
src/net/JobResult.h
src/net/JobResults.h
src/net/Network.h
src/net/NetworkState.h
src/net/strategies/DonateStrategy.h
src/Summary.h
src/version.h
@ -73,9 +77,7 @@ set(HEADERS_CRYPTO
src/crypto/cn/hash.h
src/crypto/cn/skein_port.h
src/crypto/cn/soft_aes.h
src/crypto/common/Algorithm.h
src/crypto/common/Coin.h
src/crypto/common/keccak.h
src/crypto/common/HugePagesInfo.h
src/crypto/common/MemoryPool.h
src/crypto/common/Nonce.h
src/crypto/common/portable/mm_malloc.h
@ -99,7 +101,6 @@ set(SOURCES
src/core/Miner.cpp
src/net/JobResults.cpp
src/net/Network.cpp
src/net/NetworkState.cpp
src/net/strategies/DonateStrategy.cpp
src/Summary.cpp
src/xmrig.cpp
@ -112,9 +113,7 @@ set(SOURCES_CRYPTO
src/crypto/cn/c_skein.c
src/crypto/cn/CnCtx.cpp
src/crypto/cn/CnHash.cpp
src/crypto/common/Algorithm.cpp
src/crypto/common/Coin.cpp
src/crypto/common/keccak.cpp
src/crypto/common/HugePagesInfo.cpp
src/crypto/common/MemoryPool.cpp
src/crypto/common/Nonce.cpp
src/crypto/common/VirtualMemory.cpp
@ -131,40 +130,36 @@ if (WITH_HWLOC)
)
endif()
if (WIN32)
set(SOURCES_OS
"${SOURCES_OS}"
if (XMRIG_OS_WIN)
list(APPEND SOURCES_OS
res/app.rc
src/App_win.cpp
src/crypto/common/VirtualMemory_win.cpp
)
add_definitions(/DWIN32)
set(EXTRA_LIBS ws2_32 psapi iphlpapi userenv)
elseif (APPLE)
set(SOURCES_OS
"${SOURCES_OS}"
elseif (XMRIG_OS_APPLE)
list(APPEND SOURCES_OS
src/App_unix.cpp
src/crypto/common/VirtualMemory_unix.cpp
)
else()
set(SOURCES_OS
"${SOURCES_OS}"
list(APPEND SOURCES_OS
src/App_unix.cpp
src/crypto/common/VirtualMemory_unix.cpp
)
if (CMAKE_SYSTEM_NAME STREQUAL FreeBSD)
set(EXTRA_LIBS kvm pthread)
else()
set(EXTRA_LIBS pthread rt dl)
endif()
endif()
if (XMRIG_OS_ANDROID)
set(EXTRA_LIBS pthread rt dl log)
elseif (XMRIG_OS_LINUX)
list(APPEND SOURCES_OS
src/crypto/common/LinuxMemory.h
src/crypto/common/LinuxMemory.cpp
)
if (CMAKE_SYSTEM_NAME MATCHES "Linux" OR CMAKE_SYSTEM_NAME MATCHES "Android")
EXECUTE_PROCESS(COMMAND uname -o COMMAND tr -d '\n' OUTPUT_VARIABLE OPERATING_SYSTEM)
if (OPERATING_SYSTEM MATCHES "Android")
set(EXTRA_LIBS ${EXTRA_LIBS} log)
set(EXTRA_LIBS pthread rt dl)
elseif (XMRIG_OS_FREEBSD)
set(EXTRA_LIBS kvm pthread)
endif()
endif()
@ -176,6 +171,7 @@ find_package(UV REQUIRED)
include(cmake/flags.cmake)
include(cmake/randomx.cmake)
include(cmake/argon2.cmake)
include(cmake/astrobwt.cmake)
include(cmake/OpenSSL.cmake)
include(cmake/asm.cmake)
include(cmake/cn-gpu.cmake)
@ -210,3 +206,8 @@ endif()
add_executable(${CMAKE_PROJECT_NAME} ${HEADERS} ${SOURCES} ${SOURCES_OS} ${SOURCES_CPUID} ${HEADERS_CRYPTO} ${SOURCES_CRYPTO} ${SOURCES_SYSLOG} ${TLS_SOURCES} ${XMRIG_ASM_SOURCES} ${CN_GPU_SOURCES})
target_link_libraries(${CMAKE_PROJECT_NAME} ${XMRIG_ASM_LIBRARY} ${OPENSSL_LIBRARIES} ${UV_LIBRARIES} ${EXTRA_LIBS} ${CPUID_LIB} ${ARGON2_LIBRARY})
if (WIN32)
add_custom_command(TARGET ${CMAKE_PROJECT_NAME} POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_if_different "${CMAKE_SOURCE_DIR}/bin/WinRing0/WinRing0x64.sys" $<TARGET_FILE_DIR:${CMAKE_PROJECT_NAME}>)
endif()

View file

@ -1,7 +1,5 @@
# XMRig
**:warning: [Monero will change PoW algorithm to RandomX on November 30.](https://github.com/xmrig/xmrig/issues/1204)**
[![Github All Releases](https://img.shields.io/github/downloads/xmrig/xmrig/total.svg)](https://github.com/xmrig/xmrig/releases)
[![GitHub release](https://img.shields.io/github/release/xmrig/xmrig/all.svg)](https://github.com/xmrig/xmrig/releases)
[![GitHub Release Date](https://img.shields.io/github/release-date-pre/xmrig/xmrig.svg)](https://github.com/xmrig/xmrig/releases)
@ -9,14 +7,14 @@
[![GitHub stars](https://img.shields.io/github/stars/xmrig/xmrig.svg)](https://github.com/xmrig/xmrig/stargazers)
[![GitHub forks](https://img.shields.io/github/forks/xmrig/xmrig.svg)](https://github.com/xmrig/xmrig/network)
XMRig High performance, open source, cross platform RandomX, CryptoNight and Argon2 CPU/GPU miner, with official support for Windows.
XMRig is a high performance, open source, cross platform RandomX, CryptoNight, AstroBWT and Argon2 CPU/GPU miner, with official support for Windows.
## Mining backends
- **CPU** (x64/x86/ARM)
- **OpenCL** for AMD GPUs.
- **CUDA** for NVIDIA GPUs via external [CUDA plugin](https://github.com/xmrig/xmrig-cuda).
<img src="doc/screenshot.png" width="808" >
<img src="doc/screenshot_v5_2_0.png" width="833" >
## Download
* Binary releases: https://github.com/xmrig/xmrig/releases
@ -38,7 +36,8 @@ Network:
-u, --user=USERNAME username for mining server
-p, --pass=PASSWORD password for mining server
-O, --userpass=U:P username:password pair for mining server
-k, --keepalive send keepalived packet for prevent timeout (needs pool support)
-x, --proxy=HOST:PORT connect through a SOCKS5 proxy
-k, --keepalive send keepalive packet to prevent timeout (needs pool support)
--nicehash enable nicehash.com support
--rig-id=ID rig identifier for pool-side statistics (needs pool support)
--tls enable SSL/TLS support (needs pool support)
@ -59,10 +58,17 @@ CPU backend:
--cpu-priority set process priority (0 idle, 2 normal to 5 highest)
--cpu-max-threads-hint=N maximum CPU threads count (in percentage) hint for autoconfig
--cpu-memory-pool=N number of 2 MB pages for persistent memory pool, -1 (auto), 0 (disable)
--cpu-no-yield prefer maximum hashrate rather than system response/stability
--no-huge-pages disable huge pages support
--asm=ASM ASM optimizations, possible values: auto, none, intel, ryzen, bulldozer
--randomx-init=N threads count to initialize RandomX dataset
--randomx-init=N thread count to initialize RandomX dataset
--randomx-no-numa disable NUMA support for RandomX
--randomx-mode=MODE RandomX mode: auto, fast, light
--randomx-1gb-pages use 1GB hugepages for dataset (Linux only)
--randomx-wrmsr=N write custom value (0-15) to Intel MSR register 0x1a4 or disable MSR mod (-1)
--randomx-no-rdmsr disable reverting initial MSR values on exit
--astrobwt-max-size=N skip hashes with large stage 2 size, default: 550, min: 400, max: 1200
--astrobwt-avx2 enable AVX2 optimizations for AstroBWT algorithm
API:
--api-worker-id=ID custom worker-id for API
@ -84,14 +90,26 @@ CUDA backend:
--cuda enable CUDA mining backend
--cuda-loader=PATH path to CUDA plugin (xmrig-cuda.dll or libxmrig-cuda.so)
--cuda-devices=N comma separated list of CUDA devices to use
--cuda-bfactor-hint=N bfactor hint for autoconfig (0-12)
--cuda-bsleep-hint=N bsleep hint for autoconfig
--no-nvml disable NVML (NVIDIA Management Library) support
TLS:
--tls-gen=HOSTNAME generate TLS certificate for specific hostname
--tls-cert=FILE load TLS certificate chain from a file in the PEM format
--tls-cert-key=FILE load TLS certificate private key from a file in the PEM format
--tls-dhparam=FILE load DH parameters for DHE ciphers from a file in the PEM format
--tls-protocols=N enable specified TLS protocols, example: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
--tls-ciphers=S set list of available ciphers (TLSv1.2 and below)
--tls-ciphersuites=S set list of available TLSv1.3 ciphersuites
Logging:
-S, --syslog use system log for output messages
-l, --log-file=FILE log all output to a file
--print-time=N print hashrate report every N seconds
--health-print-time=N print health report every N seconds
--no-color disable colored output
--verbose verbose output
Misc:
-c, --config=FILE load a JSON-format configuration file
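Tying the options above together, a representative invocation might look like this (pool host, port and wallet are placeholders, not recommendations):

```bash
# CPU mining over TLS with a fixed RandomX mode, limited thread usage and
# a hashrate report every 60 seconds.
./xmrig -o pool.example.com:443 --tls -u YOUR_WALLET -p x \
        --randomx-mode=fast --cpu-max-threads-hint=75 \
        --print-time=60 --log-file=xmrig.log
```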

21
bin/WinRing0/LICENSE Normal file
View file

@ -0,0 +1,21 @@
Copyright (c) 2007-2009 OpenLibSys.org. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Binary file not shown.

View file

@ -5,17 +5,38 @@ if (WITH_TLS)
set(OPENSSL_USE_STATIC_LIBS TRUE)
set(OPENSSL_MSVC_STATIC_RT TRUE)
set(EXTRA_LIBS ${EXTRA_LIBS} Crypt32)
set(EXTRA_LIBS ${EXTRA_LIBS} crypt32)
elseif (APPLE)
set(OPENSSL_USE_STATIC_LIBS TRUE)
endif()
find_package(OpenSSL)
if (OPENSSL_FOUND)
set(TLS_SOURCES src/base/net/stratum/Tls.h src/base/net/stratum/Tls.cpp)
set(TLS_SOURCES
src/base/net/stratum/Tls.cpp
src/base/net/stratum/Tls.h
src/base/net/tls/ServerTls.cpp
src/base/net/tls/ServerTls.h
src/base/net/tls/TlsConfig.cpp
src/base/net/tls/TlsConfig.h
src/base/net/tls/TlsContext.cpp
src/base/net/tls/TlsContext.h
src/base/net/tls/TlsGen.cpp
src/base/net/tls/TlsGen.h
)
include_directories(${OPENSSL_INCLUDE_DIR})
if (WITH_HTTP)
set(TLS_SOURCES ${TLS_SOURCES} src/base/net/http/HttpsClient.h src/base/net/http/HttpsClient.cpp)
set(TLS_SOURCES ${TLS_SOURCES}
src/base/net/https/HttpsClient.cpp
src/base/net/https/HttpsClient.h
src/base/net/https/HttpsContext.cpp
src/base/net/https/HttpsContext.h
src/base/net/https/HttpsServer.cpp
src/base/net/https/HttpsServer.h
)
endif()
else()
message(FATAL_ERROR "OpenSSL NOT found: use `-DWITH_TLS=OFF` to build without TLS support")
@ -27,5 +48,12 @@ else()
set(OPENSSL_LIBRARIES "")
remove_definitions(/DXMRIG_FEATURE_TLS)
if (WITH_HTTP)
set(TLS_SOURCES ${TLS_SOURCES}
src/base/net/http/HttpServer.cpp
src/base/net/http/HttpServer.h
)
endif()
set(CMAKE_PROJECT_NAME "${CMAKE_PROJECT_NAME}-notls")
endif()

47
cmake/astrobwt.cmake Normal file
View file

@ -0,0 +1,47 @@
if (WITH_ASTROBWT)
add_definitions(/DXMRIG_ALGO_ASTROBWT)
list(APPEND HEADERS_CRYPTO
src/crypto/astrobwt/AstroBWT.h
src/crypto/astrobwt/sha3.h
)
list(APPEND SOURCES_CRYPTO
src/crypto/astrobwt/AstroBWT.cpp
src/crypto/astrobwt/sha3.cpp
)
if (XMRIG_ARM)
list(APPEND HEADERS_CRYPTO
src/crypto/astrobwt/salsa20_ref/ecrypt-config.h
src/crypto/astrobwt/salsa20_ref/ecrypt-machine.h
src/crypto/astrobwt/salsa20_ref/ecrypt-portable.h
src/crypto/astrobwt/salsa20_ref/ecrypt-sync.h
)
list(APPEND SOURCES_CRYPTO
src/crypto/astrobwt/salsa20_ref/salsa20.c
)
else()
if (CMAKE_SIZEOF_VOID_P EQUAL 8)
add_definitions(/DASTROBWT_AVX2)
if (CMAKE_C_COMPILER_ID MATCHES MSVC)
enable_language(ASM_MASM)
list(APPEND SOURCES_CRYPTO src/crypto/astrobwt/sha3_256_avx2.asm)
else()
enable_language(ASM)
list(APPEND SOURCES_CRYPTO src/crypto/astrobwt/sha3_256_avx2.S)
endif()
endif()
list(APPEND HEADERS_CRYPTO
src/crypto/astrobwt/Salsa20.hpp
)
list(APPEND SOURCES_CRYPTO
src/crypto/astrobwt/Salsa20.cpp
)
endif()
else()
remove_definitions(/DXMRIG_ALGO_ASTROBWT)
endif()

View file

@ -57,9 +57,9 @@ if (CMAKE_CXX_COMPILER_ID MATCHES GNU)
add_definitions(/DHAVE_BUILTIN_CLEAR_CACHE)
elseif (CMAKE_CXX_COMPILER_ID MATCHES MSVC)
set(CMAKE_C_FLAGS_RELEASE "/MT /O2 /Oi /DNDEBUG /GL")
set(CMAKE_CXX_FLAGS_RELEASE "/MT /O2 /Oi /DNDEBUG /GL")
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} /Ox /Ot /Oi /MT /GL")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /Ox /Ot /Oi /MT /GL")
add_definitions(/D_CRT_SECURE_NO_WARNINGS)
add_definitions(/D_CRT_NONSTDC_NO_WARNINGS)
add_definitions(/DNOMINMAX)

45
cmake/os.cmake Normal file
View file

@ -0,0 +1,45 @@
if (WIN32)
set(XMRIG_OS_WIN ON)
elseif (APPLE)
set(XMRIG_OS_APPLE ON)
if (IOS OR CMAKE_SYSTEM_NAME STREQUAL iOS)
set(XMRIG_OS_IOS ON)
else()
set(XMRIG_OS_MACOS ON)
endif()
else()
set(XMRIG_OS_UNIX ON)
if (ANDROID OR CMAKE_SYSTEM_NAME MATCHES "Android")
set(XMRIG_OS_ANDROID ON)
elseif(CMAKE_SYSTEM_NAME MATCHES "Linux")
set(XMRIG_OS_LINUX ON)
elseif(CMAKE_SYSTEM_NAME STREQUAL FreeBSD)
set(XMRIG_OS_FREEBSD ON)
endif()
endif()
if (XMRIG_OS_WIN)
add_definitions(/DWIN32)
add_definitions(/DXMRIG_OS_WIN)
elseif(XMRIG_OS_APPLE)
add_definitions(/DXMRIG_OS_APPLE)
if (XMRIG_OS_IOS)
add_definitions(/DXMRIG_OS_IOS)
else()
add_definitions(/DXMRIG_OS_MACOS)
endif()
elseif(XMRIG_OS_UNIX)
add_definitions(/DXMRIG_OS_UNIX)
if (XMRIG_OS_ANDROID)
add_definitions(/DXMRIG_OS_ANDROID)
elseif (XMRIG_OS_LINUX)
add_definitions(/DXMRIG_OS_LINUX)
elseif (XMRIG_OS_FREEBSD)
add_definitions(/DXMRIG_OS_FREEBSD)
endif()
endif()

View file

@ -1,5 +1,6 @@
if (WITH_RANDOMX)
add_definitions(/DXMRIG_ALGO_RANDOMX)
set(WITH_ARGON2 ON)
list(APPEND HEADERS_CRYPTO
src/crypto/rx/Rx.h
@ -16,8 +17,6 @@ if (WITH_RANDOMX)
list(APPEND SOURCES_CRYPTO
src/crypto/randomx/aes_hash.cpp
src/crypto/randomx/allocator.cpp
src/crypto/randomx/argon2_core.c
src/crypto/randomx/argon2_ref.c
src/crypto/randomx/blake2_generator.cpp
src/crypto/randomx/blake2/blake2b.c
src/crypto/randomx/bytecode_machine.cpp
@ -75,13 +74,27 @@ if (WITH_RANDOMX)
)
list(APPEND SOURCES_CRYPTO
src/crypto/rx/RxConfig_hwloc.cpp
src/crypto/rx/RxNUMAStorage.cpp
)
endif()
if (WITH_MSR AND NOT XMRIG_ARM AND CMAKE_SIZEOF_VOID_P EQUAL 8 AND (XMRIG_OS_WIN OR XMRIG_OS_LINUX))
add_definitions(/DXMRIG_FEATURE_MSR)
add_definitions(/DXMRIG_FIX_RYZEN)
message("-- WITH_MSR=ON")
if (XMRIG_OS_WIN)
list(APPEND SOURCES_CRYPTO src/crypto/rx/Rx_win.cpp)
elseif (XMRIG_OS_LINUX)
list(APPEND SOURCES_CRYPTO src/crypto/rx/Rx_linux.cpp)
endif()
list(APPEND HEADERS_CRYPTO src/crypto/rx/msr/MsrItem.h)
list(APPEND SOURCES_CRYPTO src/crypto/rx/msr/MsrItem.cpp)
else()
list(APPEND SOURCES_CRYPTO
src/crypto/rx/RxConfig_basic.cpp
)
remove_definitions(/DXMRIG_FEATURE_MSR)
remove_definitions(/DXMRIG_FIX_RYZEN)
message("-- WITH_MSR=OFF")
endif()
else()
remove_definitions(/DXMRIG_ALGO_RANDOMX)

View file

@ -12,6 +12,8 @@ Option `coin` useful for pools without algorithm negotiation support or daemon t
| Name | Memory | Version | Notes |
|------|--------|---------|-------|
| `rx/sfx` | 2 MB | 5.4.0+ | RandomSFX (RandomX variant for Safex). |
| `rx/v` | 2 MB | 5.4.0+ | RandomV (RandomX variant for new MoneroV). |
| `rx/arq` | 256 KB | 4.3.0+ | RandomARQ (RandomX variant for ArQmA). |
| `rx/0` | 2 MB | 3.2.0+ | RandomX (Monero). |
| `argon2/chukwa` | 512 KB | 3.1.0+ | Argon2id (Chukwa). |
@ -23,7 +25,6 @@ Option `coin` useful for pools without algorithm negotiation support or daemon t
| `cn/zls` | 2 MB | 2.14.0+ | CryptoNight variant 2 with 3/4 iterations. |
| `cn/double` | 2 MB | 2.14.0+ | CryptoNight variant 2 with double iterations. |
| `cn/r` | 2 MB | 2.13.0+ | CryptoNightR (Monero's variant 4). |
| `cn/wow` | 2 MB | 2.12.0+ | CryptoNightR (Wownero). |
| `cn/gpu` | 2 MB | 2.11.0+ | CryptoNight-GPU. |
| `cn-pico` | 256 KB | 2.10.0+ | CryptoNight-Pico. |
| `cn/half` | 2 MB | 2.9.0+ | CryptoNight variant 2 with half iterations. |
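For pools without algorithm negotiation, `algo` or `coin` can be set per pool in the config file; a minimal sketch (URLs and wallet are placeholders, and the surrounding keys follow the usual xmrig pool object layout):

```bash
# Per-pool algorithm selection: name the algorithm directly ("algo") or let
# the miner derive it from the coin ("coin"), as described in the table above.
cat > pools-example.json <<'EOF'
{
    "pools": [
        { "url": "rx.example.com:3333",   "user": "YOUR_WALLET", "algo": "rx/0" },
        { "url": "dero.example.com:4444", "user": "YOUR_WALLET", "coin": "dero" }
    ]
}
EOF
```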

View file

@ -99,4 +99,7 @@ Allow override automatically detected Argon2 implementation, this option added m
Maximum CPU threads count (in percentage) hint for autoconfig. [CPU_MAX_USAGE.md](CPU_MAX_USAGE.md)
#### `memory-pool` (since v4.3.0)
Use continuous, persistent memory block for mining threads, useful for preserve huge pages allocation while algorithm swithing. Default value `false` (feature disabled) or `true` or specific count of 2 MB huge pages.
Use a continuous, persistent memory block for mining threads; useful for preserving the huge pages allocation while switching algorithms. Possible values: `false` (feature disabled, default), `true`, or a specific count of 2 MB huge pages.
#### `yield` (since v5.1.1)
Prefer better system response/stability (`true`, default value) or maximum hashrate (`false`).
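Both options have command line equivalents listed in the usage output; a quick sketch of the trade-off described above:

```bash
# Favour raw hashrate over system responsiveness (--cpu-no-yield) and keep a
# persistent pool of 2 MB pages across algorithm switches (-1 = auto size).
./xmrig -o pool.example.com:3333 -u YOUR_WALLET --cpu-no-yield --cpu-memory-pool=-1
```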

View file

@ -1,5 +1,5 @@
# CMake options
This document contains a list of useful CMake options.
**Recent version of this document: https://xmrig.com/docs/miner/cmake-options**
## Algorithms
@ -32,7 +32,7 @@ This feature add external dependency to libhwloc (1.10.0+) (except MSVC builds).
## Special build options
* **`-DXMRIG_DEPS=<path>`** path to precompiled dependensices https://github.com/xmrig/xmrig-deps
* **`-DXMRIG_DEPS=<path>`** path to precompiled dependencies https://github.com/xmrig/xmrig-deps
* **`-DARM_TARGET=<number>`** override ARM target, possible values `7` (ARMv7) and `8` (ARMv8).
* **`-DUV_INCLUDE_DIR=<path>`** custom path to libuv headers.
* **`-DUV_LIBRARY=<path>`** custom path to libuv library.
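A typical out-of-tree build using a few of these switches (the dependency path is a placeholder):

```bash
# Minimal CPU-only build against precompiled dependencies, without TLS.
mkdir -p build && cd build
cmake .. -DXMRIG_DEPS=/path/to/xmrig-deps/gcc/x64 \
         -DWITH_OPENCL=OFF -DWITH_CUDA=OFF -DWITH_TLS=OFF
make -j$(nproc)
```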

30
doc/gpg_keys/xmrig.asc Normal file
View file

@ -0,0 +1,30 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQENBF3VSRIBCADfFjDUbq0WLGulFeSou0A+jTvweNllPyLNOn3SNCC0XLEYyEcu
JiEBK80DlvR06TVr8Aw1rT5S2iH0i5Tl8DqShH2mmcN1rBp1M0Y95D89KVj3BIhE
nxmgmD4N3Wgm+5FmEH4W/RpG1xdYWJx3eJhtWPdFJqpg083E2D5P30wIQem+EnTR
5YrtTZPh5cPj2KRY+UmsDE3ahmxCgP7LYgnnpZQlWBBiMV932s7MvYBPJQc1wecS
0wi1zxyS81xHc3839EkA7wueCeNo+5jha+KH66tMKsfrI2WvfPHTCPjK9v7WJc/O
/eRp9d+wacn09D1L6CoRO0ers5p10GO84VhTABEBAAG0GVhNUmlnIDxzdXBwb3J0
QHhtcmlnLmNvbT6JAU4EEwEIADgWIQSaxM6o5m41pcfN3BtEalNji+lECQUCXdVJ
EgIbAwULCQgHAgYVCgkICwIEFgIDAQIeAQIXgAAKCRBEalNji+lECbkQB/9nRou0
tOlBwYn8xVgBu7IiDWNVETRWfrjrtdTvSahgbbo6lWgjA/vBLkjN9fISdBQ/n/Mt
hNDJbEtxHHt2baJhvT8du1eWcIHHXCV/rmv+iY/hTXa1gKqHiHDJrtYSVBG3BMme
1rdsUHTiKf3t5yRHOXAfY2C+XNblKAV7mhlxQBiKxdFDIkFEQKNrHNUvnzkOqoCT
2kTZZ2tPUMQdOn1eek6zG/+C7SwcBpJnakJ8jce4yA/xZbOVKetNWO3Ufu3TE34k
OdA+H4PU9+fV77XfOY8DtXeS3boUI97ei+4s/mwX/NFC0i8CPXyefxl3WRUBGDOI
w//kPNQVh4HobOCeuQENBF3VSRIBCADl29WorEi+vRA/3kg9VUXtxSU6caibFS3N
VXANiFRjrOmICdfrIgOSGNrYCQFsXu0Xe0udDYVX8yX6WJk+CT02Pdg0gkXiKoze
KrnK15mo3xXbb2tr1o9ROPgwY/o2AwQHj0o1JhdS2cybfuRiUQRoGgBX7a9X0cTY
r4ZJvOjzgAajl3ciwB3yWUmDiRlzZpO7YWESXbOhGVzyCnP5MlMEJ/fPRw9h38vK
HNKLhzcRfsLpXk34ghY3SxIv4NWUfuZXFWqpSdC9JgNc5zA72lJEQcF4DHJCKl7B
ddmrfsr9mdiIpo+/ZZFPPngdeZ2kvkJ2YKaZNVu2XooJARPQ8B8tABEBAAGJATYE
GAEIACAWIQSaxM6o5m41pcfN3BtEalNji+lECQUCXdVJEgIbDAAKCRBEalNji+lE
CdPUB/4nH1IdhHGmfko2kxdaHqQgCGLqh3pcrQXD9mBv/LYVnoHZpVRHsIDgg2Z4
lQYrIRRqe69FjVxo7sA2eMIlV0GRDlUrw+HeURFpEhKPEdwFy6i/cti2MY0YxOrB
TvQoRutUoMnyjM4TBJWaaqccbTsavMdLmG3JHdAkiHtUis/fUwVctmEQwN+d/J2b
wJAtliqw3nXchUfdIfwHF/7hg8seUuYUaifzkazBZhVWvRkTVLVanzZ51HRfuzwD
ntaa7kfYGdE+4TKOylAPh+8E6WnR19RRTpsaW0dVBgOiBTE0uc7rUv2HWS/u6RUR
t7ldSBzkuDTlM2V59Iq2hXoSC6dT
=cIG9
-----END PGP PUBLIC KEY BLOCK-----

View file

@ -0,0 +1,5 @@
6bb1a2e3a0fbca5195be6022f2a9fbff8a353c37c7542e7ab89420cb45b64505 xmrig-5.0.1-gcc-win32.zip
24dba9ec281acfb2ea2c401ebd0e4e2d1f1ee5fd557da5ff3c7049020c1f78b6 xmrig-5.0.1-gcc-win64.zip
86d65c6693ec9e35cd7547329580638b85c9eb0cf8383892a1c15199de5b556f xmrig-5.0.1-msvc-cuda10_1-win64.zip
0fbfe518b1c4b6993b0f66ff01302626375b15620ccf8f64d6fb97845068ffca xmrig-5.0.1-msvc-win64.zip
aa34890738a3494de2fa0e44db346937fea7339852f5f10b5d4655f95e2d8f1f xmrig-5.0.1-xenial-x64.tar.gz

View file

@ -0,0 +1,11 @@
-----BEGIN PGP SIGNATURE-----
iQEzBAABCgAdFiEEmsTOqOZuNaXHzdwbRGpTY4vpRAkFAl3VcsoACgkQRGpTY4vp
RAm9vQgA1MyTUU2jley2TCYLUzQy2Fffc8fbXYv64r44jbWOjC/6qo2iIlRgPhIc
oVyPKr5TYS3QjDzCEm8IvozS0YudS6soESbPzqDonboK8pd0K4bsML9TQY2feV7A
NL5vln0rfVHp1wxLLrQpfBqAgvJUXEyaHece6gFQN79JOGhEo2bHL2NyrOl+FViS
b2BaMtXq410Fh+XT6ShnOaG/2EuO8ZqSGdCO6A/2LHQw1UY+mZiCvue6P6B06HmB
WD/urOv38V389v+V+Sp4UlEW6VpBOOjvtChoVWtLt+tKzydrnt2EmoWWWg475pka
4G6whHuMWS8CTt5/PDhJpvVXNQTIOw==
=C764
-----END PGP SIGNATURE-----

BIN
doc/screenshot_v5_2_0.png Normal file

Binary file not shown.


19
scripts/build.hwloc.sh Executable file
View file

@ -0,0 +1,19 @@
#!/bin/bash -e
HWLOC_VERSION="2.2.0"
mkdir -p deps
mkdir -p deps/include
mkdir -p deps/lib
mkdir -p build && cd build
wget https://download.open-mpi.org/release/hwloc/v2.2/hwloc-${HWLOC_VERSION}.tar.bz2 -O hwloc-${HWLOC_VERSION}.tar.bz2
tar -xjf hwloc-${HWLOC_VERSION}.tar.bz2
cd hwloc-${HWLOC_VERSION}
./configure --disable-shared --enable-static --disable-io --disable-libudev --disable-libxml2
make -j$(nproc)
cp -fr include/ ../../deps
cp hwloc/.libs/libhwloc.a ../../deps/lib
cd ..

20
scripts/build.libressl.sh Executable file
View file

@ -0,0 +1,20 @@
#!/bin/bash -e
LIBRESSL_VERSION="3.0.2"
mkdir -p deps
mkdir -p deps/include
mkdir -p deps/lib
mkdir -p build && cd build
wget https://ftp.openbsd.org/pub/OpenBSD/LibreSSL/libressl-${LIBRESSL_VERSION}.tar.gz -O libressl-${LIBRESSL_VERSION}.tar.gz
tar -xzf libressl-${LIBRESSL_VERSION}.tar.gz
cd libressl-${LIBRESSL_VERSION}
./configure --disable-shared
make -j$(nproc)
cp -fr include/ ../../deps
cp crypto/.libs/libcrypto.a ../../deps/lib
cp ssl/.libs/libssl.a ../../deps/lib
cd ..

20
scripts/build.openssl.sh Executable file
View file

@ -0,0 +1,20 @@
#!/bin/bash -e
OPENSSL_VERSION="1.1.1g"
mkdir -p deps
mkdir -p deps/include
mkdir -p deps/lib
mkdir -p build && cd build
wget https://www.openssl.org/source/openssl-${OPENSSL_VERSION}.tar.gz -O openssl-${OPENSSL_VERSION}.tar.gz
tar -xzf openssl-${OPENSSL_VERSION}.tar.gz
cd openssl-${OPENSSL_VERSION}
./config -no-shared -no-asm -no-zlib -no-comp -no-dgram -no-filenames -no-cms
make -j$(nproc)
cp -fr include/ ../../deps
cp libcrypto.a ../../deps/lib
cp libssl.a ../../deps/lib
cd ..

20
scripts/build.uv.sh Executable file
View file

@ -0,0 +1,20 @@
#!/bin/bash -e
UV_VERSION="1.38.0"
mkdir -p deps
mkdir -p deps/include
mkdir -p deps/lib
mkdir -p build && cd build
wget https://github.com/libuv/libuv/archive/v${UV_VERSION}.tar.gz -O v${UV_VERSION}.tar.gz
tar -xzf v${UV_VERSION}.tar.gz
cd libuv-${UV_VERSION}
sh autogen.sh
./configure --disable-shared
make -j$(nproc)
cp -fr include/ ../../deps
cp .libs/libuv.a ../../deps/lib
cd ..

5
scripts/build_deps.sh Executable file
View file

@ -0,0 +1,5 @@
#!/bin/bash -e
./build.uv.sh
./build.hwloc.sh
./build.openssl.sh

12
scripts/enable_1gb_pages.sh Executable file
View file

@ -0,0 +1,12 @@
#!/bin/bash -e
# https://xmrig.com/docs/miner/hugepages#onegb-huge-pages
sysctl -w vm.nr_hugepages=$(nproc)
for i in $(find /sys/devices/system/node/node* -maxdepth 0 -type d);
do
echo 3 > "$i/hugepages/hugepages-1048576kB/nr_hugepages";
done
echo "1GB pages successfully enabled"

View file

@ -60,6 +60,7 @@ function rx()
'randomx_constants_wow.h',
'randomx_constants_loki.h',
'randomx_constants_arqma.h',
'randomx_constants_keva.h',
'aes.cl',
'blake2b.cl',
'randomx_vm.cl',
@ -75,6 +76,15 @@ function rx()
}
function astrobwt()
{
const astrobwt = opencl_minify(addIncludes('astrobwt.cl', [ 'BWT.cl', 'salsa20.cl', 'sha3.cl' ]));
// fs.writeFileSync('astrobwt_gen.cl', astrobwt);
fs.writeFileSync('astrobwt_cl.h', text2h(astrobwt, 'xmrig', 'astrobwt_cl'));
}
process.chdir(path.resolve('src/backend/opencl/cl/cn'));
cn();
@ -84,4 +94,9 @@ cn_gpu();
process.chdir(cwd);
process.chdir(path.resolve('src/backend/opencl/cl/rx'));
rx();
rx();
process.chdir(cwd);
process.chdir(path.resolve('src/backend/opencl/cl/astrobwt'));
astrobwt();

View file

@ -2,7 +2,8 @@
function opencl_minify(input)
{
let out = input.replace(/\/\*[\s\S]*?\*\/|\/\/.*$/gm, ''); // comments
let out = input.replace(/\r/g, '');
out = out.replace(/\/\*[\s\S]*?\*\/|\/\/.*$/gm, ''); // comments
out = out.replace(/^#\s+/gm, '#'); // macros with spaces
out = out.replace(/\n{2,}/g, '\n'); // empty lines
out = out.replace(/^\s+/gm, ''); // leading whitespace
@ -10,7 +11,7 @@ function opencl_minify(input)
let array = out.split('\n').map(line => {
if (line[0] === '#') {
return line
return line;
}
line = line.replace(/, /g, ',');

20
scripts/randomx_boost.sh Executable file
View file

@ -0,0 +1,20 @@
#!/bin/bash
modprobe msr
if cat /proc/cpuinfo | grep "AMD Ryzen" > /dev/null;
then
echo "Detected Ryzen"
wrmsr -a 0xc0011022 0x510000
wrmsr -a 0xc001102b 0x1808cc16
wrmsr -a 0xc0011020 0
wrmsr -a 0xc0011021 0x40
echo "MSR register values for Ryzen applied"
elif cat /proc/cpuinfo | grep "Intel" > /dev/null;
then
echo "Detected Intel"
wrmsr -a 0x1a4 0xf
echo "MSR register values for Intel applied"
else
echo "No supported CPU detected"
fi

2342
src/3rdparty/adl/adl_defines.h vendored Normal file

File diff suppressed because it is too large

44
src/3rdparty/adl/adl_sdk.h vendored Normal file
View file

@ -0,0 +1,44 @@
//
// Copyright (c) 2016 Advanced Micro Devices, Inc. All rights reserved.
//
// MIT LICENSE:
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.
/// \file adl_sdk.h
/// \brief Contains the definition of the Memory Allocation Callback.\n <b>Included in ADL SDK</b>
///
/// \n\n
/// This file contains the definition of the Memory Allocation Callback.\n
/// It also includes definitions of the respective structures and constants.\n
/// <b> This is the only header file to be included in a C/C++ project using ADL </b>
#ifndef ADL_SDK_H_
#define ADL_SDK_H_
#include "adl_structures.h"
#if defined (LINUX)
#define __stdcall
#endif /* (LINUX) */
/// Memory Allocation Call back
typedef void* ( __stdcall *ADL_MAIN_MALLOC_CALLBACK )( int );
#endif /* ADL_SDK_H_ */

3440
src/3rdparty/adl/adl_structures.h vendored Normal file

File diff suppressed because it is too large

View file

@ -17,7 +17,7 @@ set(ARGON2_SOURCES
set(ARGON2_X86_64_ENABLED ON)
set(ARGON2_X86_64_LIBS argon2-sse2 argon2-ssse3 argon2-xop argon2-avx2 argon2-avx512f)
set(ARGON2_X86_64_SOURCES arch/x86_64/lib/argon2-arch.c arch/x86_64/lib/cpu-flags.c)
set(ARGON2_X86_64_SOURCES arch/x86_64/lib/argon2-arch.c)
if (CMAKE_C_COMPILER_ID MATCHES MSVC)
function(add_feature_impl FEATURE MSVC_FLAG DEF)

View file

@ -4,7 +4,6 @@
#include "impl-select.h"
#include "cpu-flags.h"
#include "argon2-sse2.h"
#include "argon2-ssse3.h"
#include "argon2-xop.h"
@ -26,16 +25,14 @@ void fill_segment_default(const argon2_instance_t *instance,
void argon2_get_impl_list(argon2_impl_list *list)
{
static const argon2_impl IMPLS[] = {
{ "x86_64", NULL, fill_segment_default },
{ "SSE2", check_sse2, fill_segment_sse2 },
{ "SSSE3", check_ssse3, fill_segment_ssse3 },
{ "XOP", check_xop, fill_segment_xop },
{ "AVX2", check_avx2, fill_segment_avx2 },
{ "AVX-512F", check_avx512f, fill_segment_avx512f },
{ "x86_64", NULL, fill_segment_default },
{ "SSE2", xmrig_ar2_check_sse2, xmrig_ar2_fill_segment_sse2 },
{ "SSSE3", xmrig_ar2_check_ssse3, xmrig_ar2_fill_segment_ssse3 },
{ "XOP", xmrig_ar2_check_xop, xmrig_ar2_fill_segment_xop },
{ "AVX2", xmrig_ar2_check_avx2, xmrig_ar2_fill_segment_avx2 },
{ "AVX-512F", xmrig_ar2_check_avx512f, xmrig_ar2_fill_segment_avx512f },
};
cpu_flags_get();
list->count = sizeof(IMPLS) / sizeof(IMPLS[0]);
list->entries = IMPLS;
}

View file

@ -9,8 +9,6 @@
# include <intrin.h>
#endif
#include "cpu-flags.h"
#define r16 (_mm256_setr_epi8( \
2, 3, 4, 5, 6, 7, 0, 1, \
10, 11, 12, 13, 14, 15, 8, 9, \
@ -225,8 +223,7 @@ static void next_addresses(block *address_block, block *input_block)
fill_block(zero2_block, address_block, address_block, 0);
}
void fill_segment_avx2(const argon2_instance_t *instance,
argon2_position_t position)
void xmrig_ar2_fill_segment_avx2(const argon2_instance_t *instance, argon2_position_t position)
{
block *ref_block = NULL, *curr_block = NULL;
block address_block, input_block;
@ -310,8 +307,7 @@ void fill_segment_avx2(const argon2_instance_t *instance,
* lane.
*/
position.index = i;
ref_index = index_alpha(instance, &position, pseudo_rand & 0xFFFFFFFF,
ref_lane == position.lane);
ref_index = xmrig_ar2_index_alpha(instance, &position, pseudo_rand & 0xFFFFFFFF, ref_lane == position.lane);
/* 2 Creating a new block */
ref_block =
@ -327,21 +323,13 @@ void fill_segment_avx2(const argon2_instance_t *instance,
}
}
int check_avx2(void)
{
return cpu_flags_have_avx2();
}
extern int cpu_flags_has_avx2(void);
int xmrig_ar2_check_avx2(void) { return cpu_flags_has_avx2(); }
#else
void fill_segment_avx2(const argon2_instance_t *instance,
argon2_position_t position)
{
}
int check_avx2(void)
{
return 0;
}
void xmrig_ar2_fill_segment_avx2(const argon2_instance_t *instance, argon2_position_t position) {}
int xmrig_ar2_check_avx2(void) { return 0; }
#endif

View file

@ -3,9 +3,7 @@
#include "core.h"
void fill_segment_avx2(const argon2_instance_t *instance,
argon2_position_t position);
int check_avx2(void);
void xmrig_ar2_fill_segment_avx2(const argon2_instance_t *instance, argon2_position_t position);
int xmrig_ar2_check_avx2(void);
#endif // ARGON2_AVX2_H

View file

@ -10,8 +10,6 @@
# include <intrin.h>
#endif
#include "cpu-flags.h"
#define ror64(x, n) _mm512_ror_epi64((x), (n))
static __m512i f(__m512i x, __m512i y)
@ -210,8 +208,7 @@ static void next_addresses(block *address_block, block *input_block)
fill_block(zero2_block, address_block, address_block, 0);
}
void fill_segment_avx512f(const argon2_instance_t *instance,
argon2_position_t position)
void xmrig_ar2_fill_segment_avx512f(const argon2_instance_t *instance, argon2_position_t position)
{
block *ref_block = NULL, *curr_block = NULL;
block address_block, input_block;
@ -295,8 +292,7 @@ void fill_segment_avx512f(const argon2_instance_t *instance,
* lane.
*/
position.index = i;
ref_index = index_alpha(instance, &position, pseudo_rand & 0xFFFFFFFF,
ref_lane == position.lane);
ref_index = xmrig_ar2_index_alpha(instance, &position, pseudo_rand & 0xFFFFFFFF, ref_lane == position.lane);
/* 2 Creating a new block */
ref_block =
@ -312,21 +308,12 @@ void fill_segment_avx512f(const argon2_instance_t *instance,
}
}
int check_avx512f(void)
{
return cpu_flags_have_avx512f();
}
extern int cpu_flags_has_avx512f(void);
int xmrig_ar2_check_avx512f(void) { return cpu_flags_has_avx512f(); }
#else
void fill_segment_avx512f(const argon2_instance_t *instance,
argon2_position_t position)
{
}
int check_avx512f(void)
{
return 0;
}
void xmrig_ar2_fill_segment_avx512f(const argon2_instance_t *instance, argon2_position_t position) {}
int xmrig_ar2_check_avx512f(void) { return 0; }
#endif

View file

@ -3,9 +3,7 @@
#include "core.h"
void fill_segment_avx512f(const argon2_instance_t *instance,
argon2_position_t position);
int check_avx512f(void);
void xmrig_ar2_fill_segment_avx512f(const argon2_instance_t *instance, argon2_position_t position);
int xmrig_ar2_check_avx512f(void);
#endif // ARGON2_AVX512F_H

View file

@ -7,8 +7,6 @@
# include <intrin.h>
#endif
#include "cpu-flags.h"
#define ror64_16(x) \
_mm_shufflehi_epi16( \
_mm_shufflelo_epi16((x), _MM_SHUFFLE(0, 3, 2, 1)), \
@ -102,27 +100,17 @@ static __m128i f(__m128i x, __m128i y)
#include "argon2-template-128.h"
void fill_segment_sse2(const argon2_instance_t *instance,
argon2_position_t position)
void xmrig_ar2_fill_segment_sse2(const argon2_instance_t *instance, argon2_position_t position)
{
fill_segment_128(instance, position);
}
int check_sse2(void)
{
return cpu_flags_have_sse2();
}
extern int cpu_flags_has_sse2(void);
int xmrig_ar2_check_sse2(void) { return cpu_flags_has_sse2(); }
#else
void fill_segment_sse2(const argon2_instance_t *instance,
argon2_position_t position)
{
}
int check_sse2(void)
{
return 0;
}
void xmrig_ar2_fill_segment_sse2(const argon2_instance_t *instance, argon2_position_t position) {}
int xmrig_ar2_check_sse2(void) { return 0; }
#endif

View file

@ -3,9 +3,7 @@
#include "core.h"
void fill_segment_sse2(const argon2_instance_t *instance,
argon2_position_t position);
int check_sse2(void);
void xmrig_ar2_fill_segment_sse2(const argon2_instance_t *instance, argon2_position_t position);
int xmrig_ar2_check_sse2(void);
#endif // ARGON2_SSE2_H

View file

@ -9,8 +9,6 @@
# include <intrin.h>
#endif
#include "cpu-flags.h"
#define r16 (_mm_setr_epi8( \
2, 3, 4, 5, 6, 7, 0, 1, \
10, 11, 12, 13, 14, 15, 8, 9))
@ -114,27 +112,17 @@ static __m128i f(__m128i x, __m128i y)
#include "argon2-template-128.h"
void fill_segment_ssse3(const argon2_instance_t *instance,
argon2_position_t position)
void xmrig_ar2_fill_segment_ssse3(const argon2_instance_t *instance, argon2_position_t position)
{
fill_segment_128(instance, position);
}
int check_ssse3(void)
{
return cpu_flags_have_ssse3();
}
extern int cpu_flags_has_ssse3(void);
int xmrig_ar2_check_ssse3(void) { return cpu_flags_has_ssse3(); }
#else
void fill_segment_ssse3(const argon2_instance_t *instance,
argon2_position_t position)
{
}
int check_ssse3(void)
{
return 0;
}
void xmrig_ar2_fill_segment_ssse3(const argon2_instance_t *instance, argon2_position_t position) {}
int xmrig_ar2_check_ssse3(void) { return 0; }
#endif

View file

@ -3,9 +3,7 @@
#include "core.h"
void fill_segment_ssse3(const argon2_instance_t *instance,
argon2_position_t position);
int check_ssse3(void);
void xmrig_ar2_fill_segment_ssse3(const argon2_instance_t *instance, argon2_position_t position);
int xmrig_ar2_check_ssse3(void);
#endif // ARGON2_SSSE3_H

View file

@ -150,8 +150,7 @@ static void fill_segment_128(const argon2_instance_t *instance,
* lane.
*/
position.index = i;
ref_index = index_alpha(instance, &position, pseudo_rand & 0xFFFFFFFF,
ref_lane == position.lane);
ref_index = xmrig_ar2_index_alpha(instance, &position, pseudo_rand & 0xFFFFFFFF, ref_lane == position.lane);
/* 2 Creating a new block */
ref_block =

View file

@ -9,8 +9,6 @@
# include <intrin.h>
#endif
#include "cpu-flags.h"
#define ror64(x, c) _mm_roti_epi64((x), -(c))
static __m128i f(__m128i x, __m128i y)
@ -102,27 +100,17 @@ static __m128i f(__m128i x, __m128i y)
#include "argon2-template-128.h"
void fill_segment_xop(const argon2_instance_t *instance,
argon2_position_t position)
void xmrig_ar2_fill_segment_xop(const argon2_instance_t *instance, argon2_position_t position)
{
fill_segment_128(instance, position);
}
int check_xop(void)
{
return cpu_flags_have_xop();
}
extern int cpu_flags_has_xop(void);
int xmrig_ar2_check_xop(void) { return cpu_flags_has_xop(); }
#else
void fill_segment_xop(const argon2_instance_t *instance,
argon2_position_t position)
{
}
int check_xop(void)
{
return 0;
}
void xmrig_ar2_fill_segment_xop(const argon2_instance_t *instance, argon2_position_t position) {}
int xmrig_ar2_check_xop(void) { return 0; }
#endif

View file

@ -3,9 +3,7 @@
#include "core.h"
void fill_segment_xop(const argon2_instance_t *instance,
argon2_position_t position);
int check_xop(void);
void xmrig_ar2_fill_segment_xop(const argon2_instance_t *instance, argon2_position_t position);
int xmrig_ar2_check_xop(void);
#endif // ARGON2_XOP_H

View file

@ -1,129 +0,0 @@
#include <stdbool.h>
#include <stdint.h>
#include "cpu-flags.h"
#include <stdio.h>
#ifdef _MSC_VER
# include <intrin.h>
#else
# include <cpuid.h>
#endif
#ifndef bit_OSXSAVE
# define bit_OSXSAVE (1 << 27)
#endif
#ifndef bit_SSE2
# define bit_SSE2 (1 << 26)
#endif
#ifndef bit_SSSE3
# define bit_SSSE3 (1 << 9)
#endif
#ifndef bit_AVX2
# define bit_AVX2 (1 << 5)
#endif
#ifndef bit_AVX512F
# define bit_AVX512F (1 << 16)
#endif
#ifndef bit_XOP
# define bit_XOP (1 << 11)
#endif
#define PROCESSOR_INFO (1)
#define EXTENDED_FEATURES (7)
#define EAX_Reg (0)
#define EBX_Reg (1)
#define ECX_Reg (2)
#define EDX_Reg (3)
enum {
X86_64_FEATURE_SSE2 = (1 << 0),
X86_64_FEATURE_SSSE3 = (1 << 1),
X86_64_FEATURE_XOP = (1 << 2),
X86_64_FEATURE_AVX2 = (1 << 3),
X86_64_FEATURE_AVX512F = (1 << 4),
};
static unsigned int cpu_flags;
static inline void cpuid(uint32_t level, int32_t output[4])
{
# ifdef _MSC_VER
__cpuid(output, (int) level);
# else
__cpuid_count(level, 0, output[0], output[1], output[2], output[3]);
# endif
}
static bool has_feature(uint32_t level, uint32_t reg, int32_t bit)
{
int32_t cpu_info[4] = { 0 };
cpuid(level, cpu_info);
return (cpu_info[reg] & bit) != 0;
}
void cpu_flags_get(void)
{
if (has_feature(PROCESSOR_INFO, EDX_Reg, bit_SSE2)) {
cpu_flags |= X86_64_FEATURE_SSE2;
}
if (has_feature(PROCESSOR_INFO, ECX_Reg, bit_SSSE3)) {
cpu_flags |= X86_64_FEATURE_SSSE3;
}
if (!has_feature(PROCESSOR_INFO, ECX_Reg, bit_OSXSAVE)) {
return;
}
if (has_feature(EXTENDED_FEATURES, EBX_Reg, bit_AVX2)) {
cpu_flags |= X86_64_FEATURE_AVX2;
}
if (has_feature(EXTENDED_FEATURES, EBX_Reg, bit_AVX512F)) {
cpu_flags |= X86_64_FEATURE_AVX512F;
}
if (has_feature(0x80000001, ECX_Reg, bit_XOP)) {
cpu_flags |= X86_64_FEATURE_XOP;
}
}
int cpu_flags_have_sse2(void)
{
return cpu_flags & X86_64_FEATURE_SSE2;
}
int cpu_flags_have_ssse3(void)
{
return cpu_flags & X86_64_FEATURE_SSSE3;
}
int cpu_flags_have_xop(void)
{
return cpu_flags & X86_64_FEATURE_XOP;
}
int cpu_flags_have_avx2(void)
{
return cpu_flags & X86_64_FEATURE_AVX2;
}
int cpu_flags_have_avx512f(void)
{
return cpu_flags & X86_64_FEATURE_AVX512F;
}

View file

@ -1,12 +0,0 @@
#ifndef ARGON2_CPU_FLAGS_H
#define ARGON2_CPU_FLAGS_H
void cpu_flags_get(void);
int cpu_flags_have_sse2(void);
int cpu_flags_have_ssse3(void);
int cpu_flags_have_xop(void);
int cpu_flags_have_avx2(void);
int cpu_flags_have_avx512f(void);
#endif // ARGON2_CPU_FLAGS_H

View file

@ -174,8 +174,7 @@ static void fill_segment_64(const argon2_instance_t *instance,
* lane.
*/
position.index = i;
ref_index = index_alpha(instance, &position, pseudo_rand & 0xFFFFFFFF,
ref_lane == position.lane);
ref_index = xmrig_ar2_index_alpha(instance, &position, pseudo_rand & 0xFFFFFFFF, ref_lane == position.lane);
/* 2 Creating a new block */
ref_block =

View file

@ -57,7 +57,7 @@ size_t argon2_memory_size(uint32_t m_cost, uint32_t parallelism) {
int argon2_ctx_mem(argon2_context *context, argon2_type type, void *memory,
size_t memory_size) {
/* 1. Validate all inputs */
int result = validate_inputs(context);
int result = xmrig_ar2_validate_inputs(context);
uint32_t memory_blocks, segment_length;
argon2_instance_t instance;
@ -98,20 +98,20 @@ int argon2_ctx_mem(argon2_context *context, argon2_type type, void *memory,
/* 3. Initialization: Hashing inputs, allocating memory, filling first
* blocks
*/
result = initialize(&instance, context);
result = xmrig_ar2_initialize(&instance, context);
if (ARGON2_OK != result) {
return result;
}
/* 4. Filling memory */
result = fill_memory_blocks(&instance);
result = xmrig_ar2_fill_memory_blocks(&instance);
if (ARGON2_OK != result) {
return result;
}
/* 5. Finalization */
finalize(context, &instance);
xmrig_ar2_finalize(context, &instance);
return ARGON2_OK;
}
@ -174,7 +174,7 @@ int argon2_hash(const uint32_t t_cost, const uint32_t m_cost,
result = argon2_ctx(&context, type);
if (result != ARGON2_OK) {
clear_internal_memory(out, hashlen);
xmrig_ar2_clear_internal_memory(out, hashlen);
free(out);
return result;
}
@ -187,13 +187,13 @@ int argon2_hash(const uint32_t t_cost, const uint32_t m_cost,
/* if encoding requested, write it */
if (encoded && encodedlen) {
if (encode_string(encoded, encodedlen, &context, type) != ARGON2_OK) {
clear_internal_memory(out, hashlen); /* wipe buffers if error */
clear_internal_memory(encoded, encodedlen);
xmrig_ar2_clear_internal_memory(out, hashlen); /* wipe buffers if error */
xmrig_ar2_clear_internal_memory(encoded, encodedlen);
free(out);
return ARGON2_ENCODING_FAIL;
}
}
clear_internal_memory(out, hashlen);
xmrig_ar2_clear_internal_memory(out, hashlen);
free(out);
return ARGON2_OK;

View file

@ -128,14 +128,14 @@ static void blake2b_init_state(blake2b_state *S)
S->buflen = 0;
}
void blake2b_init(blake2b_state *S, size_t outlen)
void xmrig_ar2_blake2b_init(blake2b_state *S, size_t outlen)
{
blake2b_init_state(S);
/* XOR initial state with param block: */
S->h[0] ^= (uint64_t)outlen | (UINT64_C(1) << 16) | (UINT64_C(1) << 24);
}
void blake2b_update(blake2b_state *S, const void *in, size_t inlen)
void xmrig_ar2_blake2b_update(blake2b_state *S, const void *in, size_t inlen)
{
const uint8_t *pin = (const uint8_t *)in;
@ -160,7 +160,7 @@ void blake2b_update(blake2b_state *S, const void *in, size_t inlen)
S->buflen += inlen;
}
void blake2b_final(blake2b_state *S, void *out, size_t outlen)
void xmrig_ar2_blake2b_final(blake2b_state *S, void *out, size_t outlen)
{
uint8_t buffer[BLAKE2B_OUTBYTES] = {0};
unsigned int i;
@ -174,12 +174,12 @@ void blake2b_final(blake2b_state *S, void *out, size_t outlen)
}
memcpy(out, buffer, outlen);
clear_internal_memory(buffer, sizeof(buffer));
clear_internal_memory(S->buf, sizeof(S->buf));
clear_internal_memory(S->h, sizeof(S->h));
xmrig_ar2_clear_internal_memory(buffer, sizeof(buffer));
xmrig_ar2_clear_internal_memory(S->buf, sizeof(S->buf));
xmrig_ar2_clear_internal_memory(S->h, sizeof(S->h));
}
void blake2b_long(void *out, size_t outlen, const void *in, size_t inlen)
void xmrig_ar2_blake2b_long(void *out, size_t outlen, const void *in, size_t inlen)
{
uint8_t *pout = (uint8_t *)out;
blake2b_state blake_state;
@ -187,39 +187,39 @@ void blake2b_long(void *out, size_t outlen, const void *in, size_t inlen)
store32(outlen_bytes, (uint32_t)outlen);
if (outlen <= BLAKE2B_OUTBYTES) {
blake2b_init(&blake_state, outlen);
blake2b_update(&blake_state, outlen_bytes, sizeof(outlen_bytes));
blake2b_update(&blake_state, in, inlen);
blake2b_final(&blake_state, pout, outlen);
xmrig_ar2_blake2b_init(&blake_state, outlen);
xmrig_ar2_blake2b_update(&blake_state, outlen_bytes, sizeof(outlen_bytes));
xmrig_ar2_blake2b_update(&blake_state, in, inlen);
xmrig_ar2_blake2b_final(&blake_state, pout, outlen);
} else {
uint32_t toproduce;
uint8_t out_buffer[BLAKE2B_OUTBYTES];
blake2b_init(&blake_state, BLAKE2B_OUTBYTES);
blake2b_update(&blake_state, outlen_bytes, sizeof(outlen_bytes));
blake2b_update(&blake_state, in, inlen);
blake2b_final(&blake_state, out_buffer, BLAKE2B_OUTBYTES);
xmrig_ar2_blake2b_init(&blake_state, BLAKE2B_OUTBYTES);
xmrig_ar2_blake2b_update(&blake_state, outlen_bytes, sizeof(outlen_bytes));
xmrig_ar2_blake2b_update(&blake_state, in, inlen);
xmrig_ar2_blake2b_final(&blake_state, out_buffer, BLAKE2B_OUTBYTES);
memcpy(pout, out_buffer, BLAKE2B_OUTBYTES / 2);
pout += BLAKE2B_OUTBYTES / 2;
toproduce = (uint32_t)outlen - BLAKE2B_OUTBYTES / 2;
while (toproduce > BLAKE2B_OUTBYTES) {
blake2b_init(&blake_state, BLAKE2B_OUTBYTES);
blake2b_update(&blake_state, out_buffer, BLAKE2B_OUTBYTES);
blake2b_final(&blake_state, out_buffer, BLAKE2B_OUTBYTES);
xmrig_ar2_blake2b_init(&blake_state, BLAKE2B_OUTBYTES);
xmrig_ar2_blake2b_update(&blake_state, out_buffer, BLAKE2B_OUTBYTES);
xmrig_ar2_blake2b_final(&blake_state, out_buffer, BLAKE2B_OUTBYTES);
memcpy(pout, out_buffer, BLAKE2B_OUTBYTES / 2);
pout += BLAKE2B_OUTBYTES / 2;
toproduce -= BLAKE2B_OUTBYTES / 2;
}
blake2b_init(&blake_state, toproduce);
blake2b_update(&blake_state, out_buffer, BLAKE2B_OUTBYTES);
blake2b_final(&blake_state, out_buffer, toproduce);
xmrig_ar2_blake2b_init(&blake_state, toproduce);
xmrig_ar2_blake2b_update(&blake_state, out_buffer, BLAKE2B_OUTBYTES);
xmrig_ar2_blake2b_final(&blake_state, out_buffer, toproduce);
memcpy(pout, out_buffer, toproduce);
clear_internal_memory(out_buffer, sizeof(out_buffer));
xmrig_ar2_clear_internal_memory(out_buffer, sizeof(out_buffer));
}
}

View file

@ -20,11 +20,11 @@ typedef struct __blake2b_state {
} blake2b_state;
/* Streaming API */
void blake2b_init(blake2b_state *S, size_t outlen);
void blake2b_update(blake2b_state *S, const void *in, size_t inlen);
void blake2b_final(blake2b_state *S, void *out, size_t outlen);
void xmrig_ar2_blake2b_init(blake2b_state *S, size_t outlen);
void xmrig_ar2_blake2b_update(blake2b_state *S, const void *in, size_t inlen);
void xmrig_ar2_blake2b_final(blake2b_state *S, void *out, size_t outlen);
void blake2b_long(void *out, size_t outlen, const void *in, size_t inlen);
void xmrig_ar2_blake2b_long(void *out, size_t outlen, const void *in, size_t inlen);
#endif // ARGON2_BLAKE2_H

View file

@ -77,8 +77,7 @@ static void store_block(void *output, const block *src) {
/***************Memory functions*****************/
int allocate_memory(const argon2_context *context,
argon2_instance_t *instance) {
int xmrig_ar2_allocate_memory(const argon2_context *context, argon2_instance_t *instance) {
size_t blocks = instance->memory_blocks;
size_t memory_size = blocks * ARGON2_BLOCK_SIZE;
@ -107,11 +106,10 @@ int allocate_memory(const argon2_context *context,
return ARGON2_OK;
}
void free_memory(const argon2_context *context,
const argon2_instance_t *instance) {
void xmrig_ar2_free_memory(const argon2_context *context, const argon2_instance_t *instance) {
size_t memory_size = instance->memory_blocks * ARGON2_BLOCK_SIZE;
clear_internal_memory(instance->memory, memory_size);
xmrig_ar2_clear_internal_memory(instance->memory, memory_size);
if (instance->keep_memory) {
/* user-supplied memory -- do not free */
@ -125,7 +123,7 @@ void free_memory(const argon2_context *context,
}
}
void NOT_OPTIMIZED secure_wipe_memory(void *v, size_t n) {
void NOT_OPTIMIZED xmrig_ar2_secure_wipe_memory(void *v, size_t n) {
#if defined(_MSC_VER) && VC_GE_2005(_MSC_VER)
SecureZeroMemory(v, n);
#elif defined memset_s
@ -140,14 +138,14 @@ void NOT_OPTIMIZED secure_wipe_memory(void *v, size_t n) {
/* Memory clear flag defaults to true. */
int FLAG_clear_internal_memory = 0;
void clear_internal_memory(void *v, size_t n) {
void xmrig_ar2_clear_internal_memory(void *v, size_t n) {
if (FLAG_clear_internal_memory && v) {
secure_wipe_memory(v, n);
xmrig_ar2_secure_wipe_memory(v, n);
}
}
void finalize(const argon2_context *context, argon2_instance_t *instance) {
if (context != NULL && instance != NULL) {
void xmrig_ar2_finalize(const argon2_context *context, argon2_instance_t *instance) {
if (context != NULL && instance != NULL && context->out != NULL) {
block blockhash;
uint32_t l;
@ -164,24 +162,21 @@ void finalize(const argon2_context *context, argon2_instance_t *instance) {
{
uint8_t blockhash_bytes[ARGON2_BLOCK_SIZE];
store_block(blockhash_bytes, &blockhash);
blake2b_long(context->out, context->outlen, blockhash_bytes,
ARGON2_BLOCK_SIZE);
xmrig_ar2_blake2b_long(context->out, context->outlen, blockhash_bytes, ARGON2_BLOCK_SIZE);
/* clear blockhash and blockhash_bytes */
clear_internal_memory(blockhash.v, ARGON2_BLOCK_SIZE);
clear_internal_memory(blockhash_bytes, ARGON2_BLOCK_SIZE);
xmrig_ar2_clear_internal_memory(blockhash.v, ARGON2_BLOCK_SIZE);
xmrig_ar2_clear_internal_memory(blockhash_bytes, ARGON2_BLOCK_SIZE);
}
if (instance->print_internals) {
print_tag(context->out, context->outlen);
}
free_memory(context, instance);
xmrig_ar2_free_memory(context, instance);
}
}
uint32_t index_alpha(const argon2_instance_t *instance,
const argon2_position_t *position, uint32_t pseudo_rand,
int same_lane) {
uint32_t xmrig_ar2_index_alpha(const argon2_instance_t *instance, const argon2_position_t *position, uint32_t pseudo_rand, int same_lane) {
/*
* Pass 0:
* This lane : all already finished segments plus already constructed
@ -257,7 +252,7 @@ static int fill_memory_blocks_st(argon2_instance_t *instance) {
for (s = 0; s < ARGON2_SYNC_POINTS; ++s) {
for (l = 0; l < instance->lanes; ++l) {
argon2_position_t position = { r, l, (uint8_t)s, 0 };
fill_segment(instance, position);
xmrig_ar2_fill_segment(instance, position);
}
}
@ -268,7 +263,7 @@ static int fill_memory_blocks_st(argon2_instance_t *instance) {
return ARGON2_OK;
}
int fill_memory_blocks(argon2_instance_t *instance) {
int xmrig_ar2_fill_memory_blocks(argon2_instance_t *instance) {
if (instance == NULL || instance->lanes == 0) {
return ARGON2_INCORRECT_PARAMETER;
}
@ -276,19 +271,19 @@ int fill_memory_blocks(argon2_instance_t *instance) {
return fill_memory_blocks_st(instance);
}
int validate_inputs(const argon2_context *context) {
int xmrig_ar2_validate_inputs(const argon2_context *context) {
if (NULL == context) {
return ARGON2_INCORRECT_PARAMETER;
}
if (NULL == context->out) {
return ARGON2_OUTPUT_PTR_NULL;
}
//if (NULL == context->out) {
// return ARGON2_OUTPUT_PTR_NULL;
//}
/* Validate output length */
if (ARGON2_MIN_OUTLEN > context->outlen) {
return ARGON2_OUTPUT_TOO_SHORT;
}
//if (ARGON2_MIN_OUTLEN > context->outlen) {
// return ARGON2_OUTPUT_TOO_SHORT;
//}
if (ARGON2_MAX_OUTLEN < context->outlen) {
return ARGON2_OUTPUT_TOO_LONG;
@ -403,7 +398,7 @@ int validate_inputs(const argon2_context *context) {
return ARGON2_OK;
}
void fill_first_blocks(uint8_t *blockhash, const argon2_instance_t *instance) {
void xmrig_ar2_fill_first_blocks(uint8_t *blockhash, const argon2_instance_t *instance) {
uint32_t l;
/* Make the first and second block in each lane as G(H0||0||i) or
G(H0||1||i) */
@ -412,21 +407,17 @@ void fill_first_blocks(uint8_t *blockhash, const argon2_instance_t *instance) {
store32(blockhash + ARGON2_PREHASH_DIGEST_LENGTH, 0);
store32(blockhash + ARGON2_PREHASH_DIGEST_LENGTH + 4, l);
blake2b_long(blockhash_bytes, ARGON2_BLOCK_SIZE, blockhash,
ARGON2_PREHASH_SEED_LENGTH);
load_block(&instance->memory[l * instance->lane_length + 0],
blockhash_bytes);
xmrig_ar2_blake2b_long(blockhash_bytes, ARGON2_BLOCK_SIZE, blockhash, ARGON2_PREHASH_SEED_LENGTH);
load_block(&instance->memory[l * instance->lane_length + 0], blockhash_bytes);
store32(blockhash + ARGON2_PREHASH_DIGEST_LENGTH, 1);
blake2b_long(blockhash_bytes, ARGON2_BLOCK_SIZE, blockhash,
ARGON2_PREHASH_SEED_LENGTH);
load_block(&instance->memory[l * instance->lane_length + 1],
blockhash_bytes);
xmrig_ar2_blake2b_long(blockhash_bytes, ARGON2_BLOCK_SIZE, blockhash, ARGON2_PREHASH_SEED_LENGTH);
load_block(&instance->memory[l * instance->lane_length + 1], blockhash_bytes);
}
clear_internal_memory(blockhash_bytes, ARGON2_BLOCK_SIZE);
xmrig_ar2_clear_internal_memory(blockhash_bytes, ARGON2_BLOCK_SIZE);
}
void initial_hash(uint8_t *blockhash, argon2_context *context,
void xmrig_ar2_initial_hash(uint8_t *blockhash, argon2_context *context,
argon2_type type) {
blake2b_state BlakeHash;
uint8_t value[sizeof(uint32_t)];
@ -435,72 +426,70 @@ void initial_hash(uint8_t *blockhash, argon2_context *context,
return;
}
blake2b_init(&BlakeHash, ARGON2_PREHASH_DIGEST_LENGTH);
xmrig_ar2_blake2b_init(&BlakeHash, ARGON2_PREHASH_DIGEST_LENGTH);
store32(&value, context->lanes);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
xmrig_ar2_blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
store32(&value, context->outlen);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
xmrig_ar2_blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
store32(&value, context->m_cost);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
xmrig_ar2_blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
store32(&value, context->t_cost);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
xmrig_ar2_blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
store32(&value, context->version);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
xmrig_ar2_blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
store32(&value, (uint32_t)type);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
xmrig_ar2_blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
store32(&value, context->pwdlen);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
xmrig_ar2_blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
if (context->pwd != NULL) {
blake2b_update(&BlakeHash, (const uint8_t *)context->pwd,
xmrig_ar2_blake2b_update(&BlakeHash, (const uint8_t *)context->pwd,
context->pwdlen);
if (context->flags & ARGON2_FLAG_CLEAR_PASSWORD) {
secure_wipe_memory(context->pwd, context->pwdlen);
xmrig_ar2_secure_wipe_memory(context->pwd, context->pwdlen);
context->pwdlen = 0;
}
}
store32(&value, context->saltlen);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
xmrig_ar2_blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
if (context->salt != NULL) {
blake2b_update(&BlakeHash, (const uint8_t *)context->salt,
xmrig_ar2_blake2b_update(&BlakeHash, (const uint8_t *)context->salt,
context->saltlen);
}
store32(&value, context->secretlen);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
xmrig_ar2_blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
if (context->secret != NULL) {
blake2b_update(&BlakeHash, (const uint8_t *)context->secret,
context->secretlen);
xmrig_ar2_blake2b_update(&BlakeHash, (const uint8_t *)context->secret, context->secretlen);
if (context->flags & ARGON2_FLAG_CLEAR_SECRET) {
secure_wipe_memory(context->secret, context->secretlen);
xmrig_ar2_secure_wipe_memory(context->secret, context->secretlen);
context->secretlen = 0;
}
}
store32(&value, context->adlen);
blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
xmrig_ar2_blake2b_update(&BlakeHash, (const uint8_t *)&value, sizeof(value));
if (context->ad != NULL) {
blake2b_update(&BlakeHash, (const uint8_t *)context->ad,
context->adlen);
xmrig_ar2_blake2b_update(&BlakeHash, (const uint8_t *)context->ad, context->adlen);
}
blake2b_final(&BlakeHash, blockhash, ARGON2_PREHASH_DIGEST_LENGTH);
xmrig_ar2_blake2b_final(&BlakeHash, blockhash, ARGON2_PREHASH_DIGEST_LENGTH);
}
int initialize(argon2_instance_t *instance, argon2_context *context) {
int xmrig_ar2_initialize(argon2_instance_t *instance, argon2_context *context) {
uint8_t blockhash[ARGON2_PREHASH_SEED_LENGTH];
int result = ARGON2_OK;
@ -510,7 +499,7 @@ int initialize(argon2_instance_t *instance, argon2_context *context) {
/* 1. Memory allocation */
result = allocate_memory(context, instance);
result = xmrig_ar2_allocate_memory(context, instance);
if (result != ARGON2_OK) {
return result;
}
@ -519,11 +508,9 @@ int initialize(argon2_instance_t *instance, argon2_context *context) {
/* H_0 + 8 extra bytes to produce the first blocks */
/* uint8_t blockhash[ARGON2_PREHASH_SEED_LENGTH]; */
/* Hashing all inputs */
initial_hash(blockhash, context, instance->type);
xmrig_ar2_initial_hash(blockhash, context, instance->type);
/* Zeroing 8 extra bytes */
clear_internal_memory(blockhash + ARGON2_PREHASH_DIGEST_LENGTH,
ARGON2_PREHASH_SEED_LENGTH -
ARGON2_PREHASH_DIGEST_LENGTH);
xmrig_ar2_clear_internal_memory(blockhash + ARGON2_PREHASH_DIGEST_LENGTH, ARGON2_PREHASH_SEED_LENGTH - ARGON2_PREHASH_DIGEST_LENGTH);
if (instance->print_internals) {
initial_kat(blockhash, context, instance->type);
@ -531,9 +518,9 @@ int initialize(argon2_instance_t *instance, argon2_context *context) {
/* 3. Creating first blocks, we always have at least two blocks in a slice
*/
fill_first_blocks(blockhash, instance);
xmrig_ar2_fill_first_blocks(blockhash, instance);
/* Clearing the hash */
clear_internal_memory(blockhash, ARGON2_PREHASH_SEED_LENGTH);
xmrig_ar2_clear_internal_memory(blockhash, ARGON2_PREHASH_SEED_LENGTH);
return ARGON2_OK;
}

View file

@ -110,8 +110,7 @@ typedef struct Argon2_thread_data {
* @param instance the Argon2 instance
* @return ARGON2_OK if memory is allocated successfully
*/
int allocate_memory(const argon2_context *context,
argon2_instance_t *instance);
int xmrig_ar2_allocate_memory(const argon2_context *context, argon2_instance_t *instance);
/*
* Frees memory at the given pointer, uses the appropriate deallocator as
@ -119,22 +118,21 @@ int allocate_memory(const argon2_context *context,
* @param context argon2_context which specifies the deallocator
* @param instance the Argon2 instance
*/
void free_memory(const argon2_context *context,
const argon2_instance_t *instance);
void xmrig_ar2_free_memory(const argon2_context *context, const argon2_instance_t *instance);
/* Function that securely cleans the memory. This ignores any flags set
* regarding clearing memory. Usually one just calls clear_internal_memory.
* @param mem Pointer to the memory
* @param s Memory size in bytes
*/
void secure_wipe_memory(void *v, size_t n);
void xmrig_ar2_secure_wipe_memory(void *v, size_t n);
/* Function that securely clears the memory if FLAG_clear_internal_memory is
* set. If the flag isn't set, this function does nothing.
* @param mem Pointer to the memory
* @param s Memory size in bytes
*/
ARGON2_PUBLIC void clear_internal_memory(void *v, size_t n);
ARGON2_PUBLIC void xmrig_ar2_clear_internal_memory(void *v, size_t n);
/*
* Computes absolute position of reference block in the lane following a skewed
@ -146,9 +144,7 @@ ARGON2_PUBLIC void clear_internal_memory(void *v, size_t n);
* If so we can reference the current segment
* @pre All pointers must be valid
*/
uint32_t index_alpha(const argon2_instance_t *instance,
const argon2_position_t *position, uint32_t pseudo_rand,
int same_lane);
uint32_t xmrig_ar2_index_alpha(const argon2_instance_t *instance, const argon2_position_t *position, uint32_t pseudo_rand, int same_lane);
/*
* Function that validates all inputs against predefined restrictions and return
@ -157,7 +153,7 @@ uint32_t index_alpha(const argon2_instance_t *instance,
* @return ARGON2_OK if everything is all right, otherwise one of error codes
* (all defined in <argon2.h>
*/
int validate_inputs(const argon2_context *context);
int xmrig_ar2_validate_inputs(const argon2_context *context);
/*
* Hashes all the inputs into @a blockhash[PREHASH_DIGEST_LENGTH], clears
@ -169,8 +165,7 @@ int validate_inputs(const argon2_context *context);
* @pre @a blockhash must have at least @a PREHASH_DIGEST_LENGTH bytes
* allocated
*/
void initial_hash(uint8_t *blockhash, argon2_context *context,
argon2_type type);
void xmrig_ar2_initial_hash(uint8_t *blockhash, argon2_context *context, argon2_type type);
/*
* Function creates first 2 blocks per lane
@ -178,7 +173,7 @@ void initial_hash(uint8_t *blockhash, argon2_context *context,
* @param blockhash Pointer to the pre-hashing digest
* @pre blockhash must point to @a PREHASH_SEED_LENGTH allocated values
*/
void fill_first_blocks(uint8_t *blockhash, const argon2_instance_t *instance);
void xmrig_ar2_fill_first_blocks(uint8_t *blockhash, const argon2_instance_t *instance);
/*
* Function allocates memory, hashes the inputs with Blake, and creates first
@ -190,7 +185,7 @@ void fill_first_blocks(uint8_t *blockhash, const argon2_instance_t *instance);
* @return Zero if successful, -1 if memory failed to allocate. @context->state
* will be modified if successful.
*/
int initialize(argon2_instance_t *instance, argon2_context *context);
int xmrig_ar2_initialize(argon2_instance_t *instance, argon2_context *context);
/*
* XORing the last block of each lane, hashing it, making the tag. Deallocates
@ -203,7 +198,7 @@ int initialize(argon2_instance_t *instance, argon2_context *context);
* @pre if context->free_cbk is not NULL, it should point to a function that
* deallocates memory
*/
void finalize(const argon2_context *context, argon2_instance_t *instance);
void xmrig_ar2_finalize(const argon2_context *context, argon2_instance_t *instance);
/*
* Function that fills the segment using previous segments also from other
@ -212,8 +207,7 @@ void finalize(const argon2_context *context, argon2_instance_t *instance);
* @param position Current position
* @pre all block pointers must be valid
*/
void fill_segment(const argon2_instance_t *instance,
argon2_position_t position);
void xmrig_ar2_fill_segment(const argon2_instance_t *instance, argon2_position_t position);
/*
* Function that fills the entire memory t_cost times based on the first two
@ -221,6 +215,6 @@ void fill_segment(const argon2_instance_t *instance,
* @param instance Pointer to the current instance
* @return ARGON2_OK if successful, @context->state
*/
int fill_memory_blocks(argon2_instance_t *instance);
int xmrig_ar2_fill_memory_blocks(argon2_instance_t *instance);
#endif
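
A minimal sketch (not part of this diff) of how the renamed xmrig_ar2_* internals declared above are typically chained, mirroring the reference argon2_ctx() flow. The helper name run_argon2 and the instance field values are illustrative assumptions, and memory_blocks is shown without the usual rounding to a multiple of the lane count.

    /* Hedged sketch: assumes core.h (this header) and <string.h> are included. */
    static int run_argon2(argon2_context *context, argon2_type type) {
        int rc = xmrig_ar2_validate_inputs(context);
        if (rc != ARGON2_OK) {
            return rc;
        }

        argon2_instance_t instance;
        memset(&instance, 0, sizeof(instance));
        instance.version        = context->version;
        instance.passes         = context->t_cost;
        instance.memory_blocks  = context->m_cost;        /* illustrative: rounding omitted */
        instance.segment_length = instance.memory_blocks / ARGON2_SYNC_POINTS;
        instance.lane_length    = instance.segment_length * ARGON2_SYNC_POINTS;
        instance.lanes          = context->lanes;
        instance.threads        = context->threads;
        instance.type           = type;

        rc = xmrig_ar2_initialize(&instance, context);    /* allocate memory, initial hash, first blocks */
        if (rc != ARGON2_OK) {
            return rc;
        }

        rc = xmrig_ar2_fill_memory_blocks(&instance);     /* runs fill_segment per pass/slice/lane */
        xmrig_ar2_finalize(context, &instance);           /* XOR last blocks, blake2b_long tag, free memory */
        return rc;
    }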

View file

@ -323,7 +323,7 @@ int decode_string(argon2_context *ctx, const char *str, argon2_type type) {
ctx->flags = ARGON2_DEFAULT_FLAGS;
/* On return, must have valid context */
validation_result = validate_inputs(ctx);
validation_result = xmrig_ar2_validate_inputs(ctx);
if (validation_result != ARGON2_OK) {
return validation_result;
}
@ -371,7 +371,7 @@ int encode_string(char *dst, size_t dst_len, argon2_context *ctx,
} while ((void)0, 0)
const char* type_string = argon2_type2string(type, 0);
int validation_result = validate_inputs(ctx);
int validation_result = xmrig_ar2_validate_inputs(ctx);
if (!type_string) {
return ARGON2_ENCODING_FAIL;

View file

@ -2,79 +2,81 @@
#include <string.h>
#include "impl-select.h"
#include "3rdparty/argon2.h"
#define BENCH_SAMPLES 1024
extern uint64_t uv_hrtime(void);
#define BENCH_SAMPLES 1024U
#define BENCH_MEM_BLOCKS 512
static argon2_impl selected_argon_impl = {
"default", NULL, fill_segment_default
};
#ifdef _MSC_VER
# define strcasecmp _stricmp
#endif
static argon2_impl selected_argon_impl = { "default", NULL, fill_segment_default };
/* the benchmark routine is not thread-safe, so we can use a global var here: */
static block memory[BENCH_MEM_BLOCKS];
static uint64_t benchmark_impl(const argon2_impl *impl) {
clock_t time;
unsigned int i;
uint64_t bench;
argon2_instance_t instance;
argon2_position_t pos;
static uint64_t benchmark_impl(const argon2_impl *impl) {
memset(memory, 0, sizeof(memory));
instance.version = ARGON2_VERSION_NUMBER;
instance.memory = memory;
instance.passes = 1;
instance.memory_blocks = BENCH_MEM_BLOCKS;
argon2_instance_t instance;
instance.version = ARGON2_VERSION_NUMBER;
instance.memory = memory;
instance.passes = 1;
instance.memory_blocks = BENCH_MEM_BLOCKS;
instance.segment_length = BENCH_MEM_BLOCKS / ARGON2_SYNC_POINTS;
instance.lane_length = instance.segment_length * ARGON2_SYNC_POINTS;
instance.lanes = 1;
instance.threads = 1;
instance.type = Argon2_i;
instance.lane_length = instance.segment_length * ARGON2_SYNC_POINTS;
instance.lanes = 1;
instance.threads = 1;
instance.type = Argon2_id;
pos.lane = 0;
pos.pass = 0;
pos.slice = 0;
pos.index = 0;
argon2_position_t pos;
pos.lane = 0;
pos.pass = 0;
pos.slice = 0;
pos.index = 0;
/* warm-up cache: */
impl->fill_segment(&instance, pos);
/* OK, now measure: */
bench = 0;
time = clock();
for (i = 0; i < BENCH_SAMPLES; i++) {
const uint64_t time = uv_hrtime();
for (uint32_t i = 0; i < BENCH_SAMPLES; i++) {
impl->fill_segment(&instance, pos);
}
time = clock() - time;
bench = (uint64_t)time;
return bench;
return uv_hrtime() - time;
}
void argon2_select_impl()
{
argon2_impl_list impls;
unsigned int i;
const argon2_impl *best_impl = NULL;
uint64_t best_bench = UINT_MAX;
argon2_get_impl_list(&impls);
for (i = 0; i < impls.count; i++) {
for (uint32_t i = 0; i < impls.count; i++) {
const argon2_impl *impl = &impls.entries[i];
uint64_t bench;
if (impl->check != NULL && !impl->check()) {
continue;
}
bench = benchmark_impl(impl);
const uint64_t bench = benchmark_impl(impl);
if (bench < best_bench) {
best_bench = bench;
best_impl = impl;
best_impl = impl;
}
}
@ -83,11 +85,13 @@ void argon2_select_impl()
}
}
void fill_segment(const argon2_instance_t *instance, argon2_position_t position)
void xmrig_ar2_fill_segment(const argon2_instance_t *instance, argon2_position_t position)
{
selected_argon_impl.fill_segment(instance, position);
}
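
A hedged usage sketch (an assumption, not part of this diff) of how the selection API above is typically driven at startup; "AVX2" below is only an example of an implementation name, and the declarations are assumed to live in impl-select.h as this file's include suggests.

    #include <stdio.h>
    #include "impl-select.h"

    void pick_argon2_impl(const char *forced_name)
    {
        /* try an explicitly requested implementation first, otherwise benchmark all of them */
        if (forced_name == NULL || !argon2_select_impl_by_name(forced_name)) {
            argon2_select_impl();
        }
        printf("argon2 implementation: %s\n", argon2_get_impl_name());
    }

    /* e.g. pick_argon2_impl("AVX2"); or pick_argon2_impl(NULL); */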
const char *argon2_get_impl_name()
{
return selected_argon_impl.name;
@ -97,14 +101,12 @@ const char *argon2_get_impl_name()
int argon2_select_impl_by_name(const char *name)
{
argon2_impl_list impls;
unsigned int i;
argon2_get_impl_list(&impls);
for (i = 0; i < impls.count; i++) {
for (uint32_t i = 0; i < impls.count; i++) {
const argon2_impl *impl = &impls.entries[i];
if (strcmp(impl->name, name) == 0) {
if (strcasecmp(impl->name, name) == 0) {
selected_argon_impl = *impl;
return 1;

View file

@ -381,7 +381,10 @@ enum header_states
, h_transfer_encoding
, h_upgrade
, h_matching_transfer_encoding_token_start
, h_matching_transfer_encoding_chunked
, h_matching_transfer_encoding_token
, h_matching_connection_token_start
, h_matching_connection_keep_alive
, h_matching_connection_close
@ -1257,9 +1260,9 @@ reexecute:
switch (parser->header_state) {
case h_general: {
size_t limit = data + len - p;
limit = MIN(limit, max_header_size);
while (p+1 < data + limit && TOKEN(p[1])) {
size_t left = data + len - p;
const char* pe = p + MIN(left, max_header_size);
while (p+1 < pe && TOKEN(p[1])) {
p++;
}
break;
@ -1335,6 +1338,7 @@ reexecute:
parser->header_state = h_general;
} else if (parser->index == sizeof(TRANSFER_ENCODING)-2) {
parser->header_state = h_transfer_encoding;
parser->flags |= F_TRANSFER_ENCODING;
}
break;
@ -1416,10 +1420,14 @@ reexecute:
if ('c' == c) {
parser->header_state = h_matching_transfer_encoding_chunked;
} else {
parser->header_state = h_general;
parser->header_state = h_matching_transfer_encoding_token;
}
break;
/* Multi-value `Transfer-Encoding` header */
case h_matching_transfer_encoding_token_start:
break;
case h_content_length:
if (UNLIKELY(!IS_NUM(ch))) {
SET_ERRNO(HPE_INVALID_CONTENT_LENGTH);
@ -1496,28 +1504,25 @@ reexecute:
switch (h_state) {
case h_general:
{
const char* p_cr;
const char* p_lf;
size_t limit = data + len - p;
{
size_t left = data + len - p;
const char* pe = p + MIN(left, max_header_size);
limit = MIN(limit, max_header_size);
p_cr = (const char*) memchr(p, CR, limit);
p_lf = (const char*) memchr(p, LF, limit);
if (p_cr != NULL) {
if (p_lf != NULL && p_cr >= p_lf)
p = p_lf;
else
p = p_cr;
} else if (UNLIKELY(p_lf != NULL)) {
p = p_lf;
} else {
p = data + len;
for (; p != pe; p++) {
ch = *p;
if (ch == CR || ch == LF) {
--p;
break;
}
if (!lenient && !IS_HEADER_CHAR(ch)) {
SET_ERRNO(HPE_INVALID_HEADER_TOKEN);
goto error;
}
}
if (p == data + len)
--p;
break;
}
--p;
break;
}
case h_connection:
case h_transfer_encoding:
@ -1566,16 +1571,41 @@ reexecute:
goto error;
/* Transfer-Encoding: chunked */
case h_matching_transfer_encoding_token_start:
/* looking for 'Transfer-Encoding: chunked' */
if ('c' == c) {
h_state = h_matching_transfer_encoding_chunked;
} else if (STRICT_TOKEN(c)) {
/* TODO(indutny): similar code below does this, but why?
* At the very least it seems to be inconsistent given that
* h_matching_transfer_encoding_token does not check for
* `STRICT_TOKEN`
*/
h_state = h_matching_transfer_encoding_token;
} else if (c == ' ' || c == '\t') {
/* Skip lws */
} else {
h_state = h_general;
}
break;
case h_matching_transfer_encoding_chunked:
parser->index++;
if (parser->index > sizeof(CHUNKED)-1
|| c != CHUNKED[parser->index]) {
h_state = h_general;
h_state = h_matching_transfer_encoding_token;
} else if (parser->index == sizeof(CHUNKED)-2) {
h_state = h_transfer_encoding_chunked;
}
break;
case h_matching_transfer_encoding_token:
if (ch == ',') {
h_state = h_matching_transfer_encoding_token_start;
parser->index = 0;
}
break;
case h_matching_connection_token_start:
/* looking for 'Connection: keep-alive' */
if (c == 'k') {
@ -1634,7 +1664,7 @@ reexecute:
break;
case h_transfer_encoding_chunked:
if (ch != ' ') h_state = h_general;
if (ch != ' ') h_state = h_matching_transfer_encoding_token;
break;
case h_connection_keep_alive:
@ -1768,12 +1798,17 @@ reexecute:
REEXECUTE();
}
/* Cannot use chunked encoding and a content-length header together
per the HTTP specification. */
if ((parser->flags & F_CHUNKED) &&
/* Cannot use transfer-encoding and a content-length header together
per the HTTP specification. (RFC 7230 Section 3.3.3) */
if ((parser->flags & F_TRANSFER_ENCODING) &&
(parser->flags & F_CONTENTLENGTH)) {
SET_ERRNO(HPE_UNEXPECTED_CONTENT_LENGTH);
goto error;
/* Allow it for lenient parsing as long as `Transfer-Encoding` is
* not `chunked`
*/
if (!lenient || (parser->flags & F_CHUNKED)) {
SET_ERRNO(HPE_UNEXPECTED_CONTENT_LENGTH);
goto error;
}
}
UPDATE_STATE(s_headers_done);
@ -1848,8 +1883,31 @@ reexecute:
UPDATE_STATE(NEW_MESSAGE());
CALLBACK_NOTIFY(message_complete);
} else if (parser->flags & F_CHUNKED) {
/* chunked encoding - ignore Content-Length header */
/* chunked encoding - ignore Content-Length header,
* prepare for a chunk */
UPDATE_STATE(s_chunk_size_start);
} else if (parser->flags & F_TRANSFER_ENCODING) {
if (parser->type == HTTP_REQUEST && !lenient) {
/* RFC 7230 3.3.3 */
/* If a Transfer-Encoding header field
* is present in a request and the chunked transfer coding is not
* the final encoding, the message body length cannot be determined
* reliably; the server MUST respond with the 400 (Bad Request)
* status code and then close the connection.
*/
SET_ERRNO(HPE_INVALID_TRANSFER_ENCODING);
RETURN(p - data); /* Error */
} else {
/* RFC 7230 3.3.3 */
/* If a Transfer-Encoding header field is present in a response and
* the chunked transfer coding is not the final encoding, the
* message body length is determined by reading the connection until
* it is closed by the server.
*/
UPDATE_STATE(s_body_identity_eof);
}
} else {
if (parser->content_length == 0) {
/* Content-Length header given but zero: Content-Length: 0\r\n */
@ -2103,6 +2161,12 @@ http_message_needs_eof (const http_parser *parser)
return 0;
}
/* RFC 7230 3.3.3, see `s_headers_almost_done` */
if ((parser->flags & F_TRANSFER_ENCODING) &&
(parser->flags & F_CHUNKED) == 0) {
return 1;
}
if ((parser->flags & F_CHUNKED) || parser->content_length != ULLONG_MAX) {
return 0;
}
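
A hedged sketch (not part of this diff) of what the stricter rules above mean for callers: a request carrying both Content-Length and a chunked Transfer-Encoding is now rejected with HPE_UNEXPECTED_CONTENT_LENGTH unless lenient parsing applies. The helper name below is illustrative.

    #include <string.h>
    #include "http_parser.h"

    static const char req[] =
        "POST / HTTP/1.1\r\n"
        "Content-Length: 5\r\n"
        "Transfer-Encoding: chunked\r\n"
        "\r\n";

    int rejects_smuggling_candidate(void) {
        http_parser parser;
        http_parser_settings settings;
        memset(&settings, 0, sizeof(settings));   /* no callbacks needed for this check */
        http_parser_init(&parser, HTTP_REQUEST);
        (void)http_parser_execute(&parser, &settings, req, sizeof(req) - 1);
        return HTTP_PARSER_ERRNO(&parser) == HPE_UNEXPECTED_CONTENT_LENGTH;
    }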

View file

@ -27,7 +27,7 @@ extern "C" {
/* Also update SONAME in the Makefile whenever you change these. */
#define HTTP_PARSER_VERSION_MAJOR 2
#define HTTP_PARSER_VERSION_MINOR 9
#define HTTP_PARSER_VERSION_PATCH 0
#define HTTP_PARSER_VERSION_PATCH 3
#include <stddef.h>
#if defined(_WIN32) && !defined(__MINGW32__) && \
@ -225,6 +225,7 @@ enum flags
, F_UPGRADE = 1 << 5
, F_SKIPBODY = 1 << 6
, F_CONTENTLENGTH = 1 << 7
, F_TRANSFER_ENCODING = 1 << 8
};
@ -271,6 +272,8 @@ enum flags
"unexpected content-length header") \
XX(INVALID_CHUNK_SIZE, \
"invalid character in chunk size header") \
XX(INVALID_TRANSFER_ENCODING, \
"request has invalid transfer-encoding") \
XX(INVALID_CONSTANT, "invalid constant string") \
XX(INVALID_INTERNAL_STATE, "encountered unexpected internal state")\
XX(STRICT, "strict mode assertion failed") \
@ -293,11 +296,11 @@ enum http_errno {
struct http_parser {
/** PRIVATE **/
unsigned int type : 2; /* enum http_parser_type */
unsigned int flags : 8; /* F_* values from 'flags' enum; semi-public */
unsigned int state : 7; /* enum state from http_parser.c */
unsigned int header_state : 7; /* enum header_state from http_parser.c */
unsigned int index : 7; /* index into current matcher */
unsigned int lenient_http_headers : 1;
unsigned int flags : 16; /* F_* values from 'flags' enum; semi-public */
uint32_t nread; /* # bytes read in various scenarios */
uint64_t content_length; /* # bytes in body (0 if no Content-Length header) */

View file

@ -21,6 +21,7 @@ Nathalie Furmento CNRS
Bryon Gloden
Brice Goglin Inria
Gilles Gouaillardet RIST
Valentin Hoyet Inria
Joshua Hursey UWL
Alexey Kardashevskiy IBM
Rob Latham ANL

View file

@ -5,7 +5,7 @@ include_directories(include)
include_directories(src)
add_definitions(/D_CRT_SECURE_NO_WARNINGS)
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} /MT")
set(CMAKE_C_FLAGS_RELEASE "/MT /O2 /Ob2 /DNDEBUG")
set(HEADERS
include/hwloc.h

View file

@ -1,5 +1,5 @@
Copyright © 2009 CNRS
Copyright © 2009-2019 Inria. All rights reserved.
Copyright © 2009-2020 Inria. All rights reserved.
Copyright © 2009-2013 Université Bordeaux
Copyright © 2009-2011 Cisco Systems, Inc. All rights reserved.
@ -13,8 +13,117 @@ $HEADER$
This file contains the main features as well as overviews of specific
bug fixes (and other actions) for each version of hwloc since version
0.9 (as initially released as "libtopology", then re-branded to "hwloc"
in v0.9.1).
0.9.
Version 2.2.0
-------------
* API
+ Add hwloc_bitmap_singlify_by_core() to remove SMT from a given cpuset,
thanks to Florian Reynier for the suggestion.
+ Add --enable-32bits-pci-domain to stop ignoring PCI devices with domain
>16bits (e.g. 10000:02:03.4). Enabling this option breaks the library ABI.
Thanks to Dylan Simon for the help.
* Backends
+ Add support for Linux cgroups v2.
+ Add NUMA support for FreeBSD.
+ Add get_last_cpu_location support for FreeBSD.
+ Remove support for Intel Xeon Phi (MIC, Knights Corner) co-processors.
* Tools
+ Add --uid to filter the hwloc-ps output by uid on Linux.
+ Add a GRAPHICAL OUTPUT section in the manpage of lstopo.
* Misc
+ Use the native dlopen instead of libltdl,
unless --disable-plugin-dlopen is passed at configure time.
Version 2.1.0
-------------
* API
+ Add a new "Die" object (HWLOC_OBJ_DIE) for upcoming x86 processors
with multiple dies per package, in the x86 and Linux backends.
+ Add the new HWLOC_OBJ_MEMCACHE object type for memory-side caches.
- They are filtered-out by default, except in command-line tools.
- They are only available on very recent platforms running Linux 5.2+
and up-to-date ACPI tables.
- The KNL MCDRAM in cache mode is still exposed as a L3 unless
HWLOC_KNL_MSCACHE_L3=0 in the environment.
+ Add HWLOC_RESTRICT_FLAG_BYNODESET and _REMOVE_MEMLESS for restricting
topologies based on some memory nodes.
+ Add hwloc_topology_set_components() for blacklisting some components
from being enabled in a topology.
+ Add hwloc_bitmap_nr_ulongs() and hwloc_bitmap_from/to_ulongs(),
thanks to Junchao Zhang for the suggestion.
+ Improve the API for dealing with disallowed resources
- HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM is replaced with FLAG_INCLUDE_DISALLOWED
and --whole-system command-line options with --disallowed.
. Former names are still accepted for backward compatibility.
- Add hwloc_topology_allow() for changing allowed sets after load().
- Add the HWLOC_ALLOW=all environment variable to totally ignore
administrative restrictions such as Linux Cgroups.
- Add disallowed_pu and disallowed_numa bits to the discovery support
structure.
+ Group objects have a new "dont_merge" attribute to prevent them from
being automatically merged with identical parent or children.
+ Add more distances-related features:
- Add hwloc_distances_get_name() to retrieve a string describing
what a distances structure contains.
- Add hwloc_distances_get_by_name() to retrieve distances structures
based on their name.
- Add hwloc_distances_release_remove()
- Distances may now cover objects of different types with new kind
HWLOC_DISTANCES_KIND_HETEROGENEOUS_TYPES.
* Backends
+ Add support for Linux 5.3 new sysfs cpu topology files with Die information.
+ Add support for Intel v2 Extended Topology Enumeration in the x86 backend.
+ Improve memory locality on Linux by using HMAT initiators (exposed
since Linux 5.2+), and NUMA distances for CPU-less NUMA nodes.
+ The x86 backend now properly handles offline CPUs.
+ Detect the locality of NVIDIA GPU OpenCL devices.
+ Ignore NUMA nodes that correspond to NVIDIA GPU by default.
- They may be unignored if HWLOC_KEEP_NVIDIA_GPU_NUMA_NODES=1 in the environment.
- Fix their CPU locality and add info attributes to identify them.
Thanks to Max Katz and Edgar Leon for the help.
+ Add support for IBM S/390 drawers.
+ Rework the heuristics for discovering KNL Cluster and Memory modes
to stop assuming all CPUs are online (required for mOS support).
Thanks to Sharath K Bhat for testing patches.
+ Ignore NUMA node information from AMD topoext in the x86 backend,
unless HWLOC_X86_TOPOEXT_NUMANODES=1 is set in the environment.
+ Expose Linux DAX devices as hwloc Block OS devices.
+ Remove support for /proc/cpuinfo-only topology discovery in Linux
kernel prior to 2.6.16.
+ Disable POWER device-tree-based topology on Linux by default.
- It may be reenabled by setting HWLOC_USE_DT=1 in the environment.
+ Discovery components are now divided in phases that may be individually
blacklisted.
- The linuxio component has been merged back into the linux component.
* Tools
+ lstopo
- lstopo factorizes objects by default in the graphical output when
there are more than 4 identical children.
. New options --no-factorize and --factorize may be used to configure this.
. Hit the 'f' key to disable factorizing in interactive outputs.
- Both logical and OS/physical indexes are now displayed by default
for PU and NUMA nodes.
- The X11 and Windows interactive outputs support many keyboard
shortcuts to dynamically customize the attributes, legend, etc.
- Add --linespacing and change default margins and linespacing.
- Add --allow for changing allowed sets.
- Add a native SVG backend. Its graphical output may be slightly less
pretty than Cairo (still used by default if available) but the SVG
code provides attributes to manipulate objects from HTML/JS.
See dynamic_SVG_example.html for an example.
+ Add --nodeset options to hwloc-calc for converting between cpusets and
nodesets.
+ Add --no-smt to lstopo, hwloc-bind and hwloc-calc to ignore multiple
PU in SMT cores.
+ hwloc-annotate may annotate multiple locations at once.
+ Add a HTML/JS version of hwloc-ps. See contrib/hwloc-ps.www/README.
+ Add bash completions.
* Misc
+ Add several FAQ entries in "Compatibility between hwloc versions"
about API version, ABI, XML, Synthetic strings, and shmem topologies.
Version 2.0.4 (also included in 1.11.13 when appropriate)
@ -214,6 +323,54 @@ Version 2.0.0
+ hwloc now requires a C99 compliant compiler.
Version 1.11.13 (also included in 2.0.4)
---------------
* Add support for Linux 5.3 new sysfs cpu topology files with Die information.
* Add support for Intel v2 Extended Topology Enumeration in the x86 backend.
* Tiles, Modules and Dies are exposed as Groups for now.
+ HWLOC_DONT_MERGE_DIE_GROUPS=1 may be set in the environment to prevent
Die groups from being automatically merged with identical parent or children.
* Ignore NUMA node information from AMD topoext in the x86 backend,
unless HWLOC_X86_TOPOEXT_NUMANODES=1 is set in the environment.
* Group objects have a new "dont_merge" attribute to prevent them from
being automatically merged with identical parent or children.
Version 1.11.12 (also included in 2.0.3)
---------------
* Fix a corner case of hwloc_topology_restrict() where children would
become out-of-order.
* Fix the return length of export_xmlbuffer() functions to always
include the ending \0.
Version 1.11.11 (also included in 2.0.2)
---------------
* Add support for Hygon Dhyana processors in the x86 backend,
thanks to Pu Wen for the patch.
* Fix symbol renaming to also rename internal components,
thanks to Evan Ramos for the patch.
* Fix build on HP-UX, thanks to Richard Lloyd for reporting the issues.
* Detect PCI link speed without being root on Linux >= 4.13.
Version 1.11.10 (also included in 2.0.1)
---------------
* Fix detection of cores and hyperthreads on Mac OS X.
* Serialize pciaccess discovery to fix concurrent topology loads in
multiple threads.
* Fix first touch area memory binding on Linux when thread memory
binding is different.
* Some minor fixes to memory binding.
* Fix hwloc-dump-hwdata to only process SMBIOS information that corresponds
to the KNL and KNM configuration.
* Add a heuristic for guessing KNL/KNM memory and cluster modes when
hwloc-dump-hwdata could not run as root earlier.
* Fix discovery of NVMe OS devices on Linux >= 4.0.
* Add get_area_memlocation() on Windows.
* Add CPUVendor, Model, ... attributes on Mac OS X.
Version 1.11.9
--------------
* Add support for Zhaoxin ZX-C and ZX-D processors in the x86 backend,
@ -941,7 +1098,7 @@ Version 1.6.0
+ Add a section about Synthetic topologies in the documentation.
Version 1.5.2 (some of these changes are in v1.6.2 but not in v1.6)
Version 1.5.2 (some of these changes are in 1.6.2 but not in 1.6)
-------------
* Use libpciaccess instead of pciutils/libpci by default for I/O discovery.
pciutils/libpci is only used if --enable-libpci is given to configure
@ -1076,9 +1233,8 @@ Version 1.4.2
for most of them.
Version 1.4.1
Version 1.4.1 (contains all 1.3.2 changes)
-------------
* This release contains all changes from v1.3.2.
* Fix hwloc_alloc_membind, thanks Karl Napf for reporting the issue.
* Fix memory leaks in some get_membind() functions.
* Fix helpers converting from Linux libnuma to hwloc (hwloc/linux-libnuma.h)
@ -1091,7 +1247,7 @@ Version 1.4.1
issues.
Version 1.4.0 (does not contain all v1.3.2 changes)
Version 1.4.0 (does not contain all 1.3.2 changes)
-------------
* Major features
+ Add "custom" interface and "assembler" tools to build multi-node
@ -1536,7 +1692,7 @@ Version 1.0.0
Version 0.9.4 (unreleased)
--------------------------
-------------
* Fix resetting colors to normal in lstopo -.txt output.
* Fix Linux pthread_t binding error report.
@ -1593,7 +1749,7 @@ Version 0.9.1
the physical location of IB devices.
Version 0.9 (libtopology)
-------------------------
Version 0.9 (formerly named "libtopology")
-----------
* First release.

View file

@ -8,8 +8,8 @@
# Please update HWLOC_VERSION* in contrib/windows/hwloc_config.h too.
major=2
minor=0
release=4
minor=2
release=0
# greek is used for alpha or beta release tags. If it is non-empty,
# it will be appended to the version number. It does not have to be
@ -22,7 +22,7 @@ greek=
# The date when this release was created
date="Jun 03, 2019"
date="Mar 30, 2020"
# If snapshot=1, then use the value from snapshot_version as the
# entire hwloc version (i.e., ignore major, minor, release, and
@ -41,7 +41,7 @@ snapshot_version=${major}.${minor}.${release}${greek}-git
# 2. Version numbers are described in the Libtool current:revision:age
# format.
libhwloc_so_version=15:3:0
libhwloc_so_version=17:0:2
libnetloc_so_version=0:0:0
# Please also update the <TargetName> lines in contrib/windows/libhwloc.vcxproj

View file

@ -1,6 +1,6 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2019 Inria. All rights reserved.
* Copyright © 2009-2020 Inria. All rights reserved.
* Copyright © 2009-2012 Université Bordeaux
* Copyright © 2009-2011 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
@ -53,7 +53,8 @@
#ifndef HWLOC_H
#define HWLOC_H
#include <hwloc/autogen/config.h>
#include "hwloc/autogen/config.h"
#include <sys/types.h>
#include <stdio.h>
#include <string.h>
@ -62,13 +63,13 @@
/*
* Symbol transforms
*/
#include <hwloc/rename.h>
#include "hwloc/rename.h"
/*
* Bitmap definitions
*/
#include <hwloc/bitmap.h>
#include "hwloc/bitmap.h"
#ifdef __cplusplus
@ -86,13 +87,13 @@ extern "C" {
* actually modifies the API.
*
* Users may check for available features at build time using this number
* (see \ref faq_upgrade).
* (see \ref faq_version_api).
*
* \note This should not be confused with HWLOC_VERSION, the library version.
* Two stable releases of the same series usually have the same ::HWLOC_API_VERSION
* even if their HWLOC_VERSION are different.
*/
#define HWLOC_API_VERSION 0x00020000
#define HWLOC_API_VERSION 0x00020100
/** \brief Indicate at runtime which hwloc API version was used at build time.
*
@ -101,7 +102,7 @@ extern "C" {
HWLOC_DECLSPEC unsigned hwloc_get_api_version(void);
/** \brief Current component and plugin ABI version (see hwloc/plugins.h) */
#define HWLOC_COMPONENT_ABI 5
#define HWLOC_COMPONENT_ABI 6
/** @} */
@ -172,8 +173,12 @@ typedef hwloc_const_bitmap_t hwloc_const_nodeset_t;
* may be defined in the future! If you need to compare types, use
* hwloc_compare_types() instead.
*/
#define HWLOC_OBJ_TYPE_MIN HWLOC_OBJ_MACHINE /**< \private Sentinel value */
typedef enum {
/** \cond */
#define HWLOC_OBJ_TYPE_MIN HWLOC_OBJ_MACHINE /* Sentinel value */
/** \endcond */
HWLOC_OBJ_MACHINE, /**< \brief Machine.
* A set of processors and memory with cache
* coherency.
@ -186,7 +191,8 @@ typedef enum {
HWLOC_OBJ_PACKAGE, /**< \brief Physical package.
* The physical package that usually gets inserted
* into a socket on the motherboard.
* A processor package usually contains multiple cores.
* A processor package usually contains multiple cores,
* and possibly some dies.
*/
HWLOC_OBJ_CORE, /**< \brief Core.
* A computation unit (may be shared by several
@ -233,6 +239,10 @@ typedef enum {
* It is usually close to some cores (the corresponding objects
* are descendants of the NUMA node object in the hwloc tree).
*
* This is the smallest object representing Memory resources,
* it cannot have any child except Misc objects.
* However it may have Memory-side cache parents.
*
* There is always at least one such object in the topology
* even if the machine is not NUMA.
*
@ -245,7 +255,7 @@ typedef enum {
*/
HWLOC_OBJ_BRIDGE, /**< \brief Bridge (filtered out by default).
* Any bridge that connects the host or an I/O bus,
* Any bridge (or PCI switch) that connects the host or an I/O bus,
* to another I/O bus.
* They are not added to the topology unless I/O discovery
* is enabled with hwloc_topology_set_flags().
@ -279,6 +289,24 @@ typedef enum {
* Misc objects have NULL CPU and node sets.
*/
HWLOC_OBJ_MEMCACHE, /**< \brief Memory-side cache (filtered out by default).
* A cache in front of a specific NUMA node.
*
* This object always has at least one NUMA node as a memory child.
*
* Memory objects are not listed in the main children list,
* but rather in the dedicated Memory children list.
*
* Memory-side cache have a special depth ::HWLOC_TYPE_DEPTH_MEMCACHE
* instead of a normal depth just like other objects in the
* main tree.
*/
HWLOC_OBJ_DIE, /**< \brief Die within a physical package.
* A subpart of the physical package, that contains multiple cores.
* \hideinitializer
*/
HWLOC_OBJ_TYPE_MAX /**< \private Sentinel value */
} hwloc_obj_type_t;
@ -297,8 +325,8 @@ typedef enum hwloc_obj_bridge_type_e {
/** \brief Type of a OS device. */
typedef enum hwloc_obj_osdev_type_e {
HWLOC_OBJ_OSDEV_BLOCK, /**< \brief Operating system block device.
* For instance "sda" on Linux. */
HWLOC_OBJ_OSDEV_BLOCK, /**< \brief Operating system block device, or non-volatile memory device.
* For instance "sda" or "dax2.0" on Linux. */
HWLOC_OBJ_OSDEV_GPU, /**< \brief Operating system GPU device.
* For instance ":0.0" for a GL display,
* "card0" for a Linux DRM device. */
@ -336,9 +364,8 @@ typedef enum hwloc_obj_osdev_type_e {
*/
HWLOC_DECLSPEC int hwloc_compare_types (hwloc_obj_type_t type1, hwloc_obj_type_t type2) __hwloc_attribute_const;
enum hwloc_compare_types_e {
HWLOC_TYPE_UNORDERED = INT_MAX /**< \brief Value returned by hwloc_compare_types() when types can not be compared. \hideinitializer */
};
/** \brief Value returned by hwloc_compare_types() when types can not be compared. \hideinitializer */
#define HWLOC_TYPE_UNORDERED INT_MAX
/** @} */
@ -434,9 +461,15 @@ struct hwloc_obj {
* These children are listed in \p memory_first_child.
*/
struct hwloc_obj *memory_first_child; /**< \brief First Memory child.
* NUMA nodes are listed here (\p memory_arity and \p memory_first_child)
* NUMA nodes and Memory-side caches are listed here
* (\p memory_arity and \p memory_first_child)
* instead of in the normal children list.
* See also hwloc_obj_type_is_memory().
*
* A memory hierarchy starts from a normal CPU-side object
* (e.g. Package) and ends with NUMA nodes as leaves.
* There might exist some memory-side caches between them
* in the middle of the memory subtree.
*/
/**@}*/
@ -471,7 +504,7 @@ struct hwloc_obj {
* object and known how (the children path between this object and the PU
* objects).
*
* If the ::HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM configuration flag is set,
* If the ::HWLOC_TOPOLOGY_FLAG_INCLUDE_DISALLOWED configuration flag is set,
* some of these CPUs may not be allowed for binding,
* see hwloc_topology_get_allowed_cpuset().
*
@ -483,7 +516,7 @@ struct hwloc_obj {
*
* This may include not only the same as the cpuset field, but also some CPUs for
which topology information is unknown or incomplete, some offline CPUs, and
* the CPUs that are ignored when the ::HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM flag
* the CPUs that are ignored when the ::HWLOC_TOPOLOGY_FLAG_INCLUDE_DISALLOWED flag
* is not set.
* Thus no corresponding PU object may be found in the topology, because the
* precise position is undefined. It is however known that it would be somewhere
@ -501,7 +534,7 @@ struct hwloc_obj {
*
* In the end, these nodes are those that are close to the current object.
*
* If the ::HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM configuration flag is set,
* If the ::HWLOC_TOPOLOGY_FLAG_INCLUDE_DISALLOWED configuration flag is set,
* some of these nodes may not be allowed for allocation,
* see hwloc_topology_get_allowed_nodeset().
*
@ -516,7 +549,7 @@ struct hwloc_obj {
*
* This may include not only the same as the nodeset field, but also some NUMA
nodes for which topology information is unknown or incomplete, some offline
* nodes, and the nodes that are ignored when the ::HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM
* nodes, and the nodes that are ignored when the ::HWLOC_TOPOLOGY_FLAG_INCLUDE_DISALLOWED
* flag is not set.
* Thus no corresponding NUMA node object may be found in the topology, because the
* precise position is undefined. It is however known that it would be
@ -584,7 +617,11 @@ union hwloc_obj_attr_u {
} group;
/** \brief PCI Device specific Object Attributes */
struct hwloc_pcidev_attr_s {
unsigned short domain;
#ifndef HWLOC_HAVE_32BITS_PCI_DOMAIN
unsigned short domain; /* Only 16bits PCI domains are supported by default */
#else
unsigned int domain; /* 32bits PCI domain support breaks the library ABI, hence it's disabled by default */
#endif
unsigned char bus, dev, func;
unsigned short class_id;
unsigned short vendor_id, device_id, subvendor_id, subdevice_id;
@ -599,7 +636,11 @@ union hwloc_obj_attr_u {
hwloc_obj_bridge_type_t upstream_type;
union {
struct {
unsigned short domain;
#ifndef HWLOC_HAVE_32BITS_PCI_DOMAIN
unsigned short domain; /* Only 16bits PCI domains are supported by default */
#else
unsigned int domain; /* 32bits PCI domain support breaks the library ABI, hence it's disabled by default */
#endif
unsigned char secondary_bus, subordinate_bus;
} pci;
} downstream;
@ -770,7 +811,8 @@ enum hwloc_get_type_depth_e {
HWLOC_TYPE_DEPTH_BRIDGE = -4, /**< \brief Virtual depth for bridge object level. \hideinitializer */
HWLOC_TYPE_DEPTH_PCI_DEVICE = -5, /**< \brief Virtual depth for PCI device object level. \hideinitializer */
HWLOC_TYPE_DEPTH_OS_DEVICE = -6, /**< \brief Virtual depth for software device object level. \hideinitializer */
HWLOC_TYPE_DEPTH_MISC = -7 /**< \brief Virtual depth for Misc object. \hideinitializer */
HWLOC_TYPE_DEPTH_MISC = -7, /**< \brief Virtual depth for Misc object. \hideinitializer */
HWLOC_TYPE_DEPTH_MEMCACHE = -8 /**< \brief Virtual depth for MemCache object. \hideinitializer */
};
/** \brief Return the depth of parents where memory objects are attached.
@ -828,7 +870,8 @@ hwloc_get_type_or_above_depth (hwloc_topology_t topology, hwloc_obj_type_t type)
/** \brief Returns the type of objects at depth \p depth.
*
* \p depth should be between 0 and hwloc_topology_get_depth()-1.
* \p depth should be between 0 and hwloc_topology_get_depth()-1,
* or a virtual depth such as ::HWLOC_TYPE_DEPTH_NUMANODE.
*
* \return (hwloc_obj_type_t)-1 if depth \p depth does not exist.
*/
@ -1324,7 +1367,7 @@ HWLOC_DECLSPEC int hwloc_get_proc_last_cpu_location(hwloc_topology_t topology, h
typedef enum {
/** \brief Reset the memory allocation policy to the system default.
* Depending on the operating system, this may correspond to
* ::HWLOC_MEMBIND_FIRSTTOUCH (Linux),
* ::HWLOC_MEMBIND_FIRSTTOUCH (Linux, FreeBSD),
* or ::HWLOC_MEMBIND_BIND (AIX, HP-UX, Solaris, Windows).
* This policy is never returned by get membind functions.
* The nodeset argument is ignored.
@ -1781,6 +1824,31 @@ HWLOC_DECLSPEC int hwloc_topology_set_xml(hwloc_topology_t __hwloc_restrict topo
*/
HWLOC_DECLSPEC int hwloc_topology_set_xmlbuffer(hwloc_topology_t __hwloc_restrict topology, const char * __hwloc_restrict buffer, int size);
/** \brief Flags to be passed to hwloc_topology_set_components()
*/
enum hwloc_topology_components_flag_e {
/** \brief Blacklist the target component from being used.
* \hideinitializer
*/
HWLOC_TOPOLOGY_COMPONENTS_FLAG_BLACKLIST = (1UL<<0)
};
/** \brief Prevent a discovery component from being used for a topology.
*
* \p name is the name of the discovery component that should not be used
* when loading topology \p topology. The name is a string such as "cuda".
*
* For components with multiple phases, it may also be suffixed with the name
* of a phase, for instance "linux:io".
*
* \p flags should be ::HWLOC_TOPOLOGY_COMPONENTS_FLAG_BLACKLIST.
*
* This may be used to avoid expensive parts of the discovery process.
* For instance, CUDA-specific discovery may be expensive and unneeded
* while generic I/O discovery could still be useful.
*/
HWLOC_DECLSPEC int hwloc_topology_set_components(hwloc_topology_t __hwloc_restrict topology, unsigned long flags, const char * __hwloc_restrict name);
/** @} */
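
A hedged usage sketch (not from this diff), blacklisting the CUDA discovery component named in the comment above before loading a topology:

    #include "hwloc.h"

    int load_without_cuda(hwloc_topology_t *out)
    {
        hwloc_topology_init(out);
        /* skip the (potentially expensive) CUDA-specific discovery */
        hwloc_topology_set_components(*out, HWLOC_TOPOLOGY_COMPONENTS_FLAG_BLACKLIST, "cuda");
        return hwloc_topology_load(*out);
    }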
@ -1800,28 +1868,27 @@ HWLOC_DECLSPEC int hwloc_topology_set_xmlbuffer(hwloc_topology_t __hwloc_restric
* They may also be returned by hwloc_topology_get_flags().
*/
enum hwloc_topology_flags_e {
/** \brief Detect the whole system, ignore reservations.
/** \brief Detect the whole system, ignore reservations, include disallowed objects.
*
* Gather all resources, even if some were disabled by the administrator.
* For instance, ignore Linux Cgroup/Cpusets and gather all processors and memory nodes.
*
* When this flag is not set, PUs and NUMA nodes that are disallowed are not added to the topology.
* Parent objects (package, core, cache, etc.) are added only if some of their children are allowed.
* All existing PUs and NUMA nodes in the topology are allowed.
* hwloc_topology_get_allowed_cpuset() and hwloc_topology_get_allowed_nodeset()
* are equal to the root object cpuset and nodeset.
*
* When this flag is set, the actual sets of allowed PUs and NUMA nodes are given
* by hwloc_topology_get_allowed_cpuset() and hwloc_topology_get_allowed_nodeset().
* They may be smaller than the root object cpuset and nodeset.
*
* When this flag is not set, all existing PUs and NUMA nodes in the topology
* are allowed. hwloc_topology_get_allowed_cpuset() and hwloc_topology_get_allowed_nodeset()
* are equal to the root object cpuset and nodeset.
*
* If the current topology is exported to XML and reimported later, this flag
* should be set again in the reimported topology so that disallowed resources
* are reimported as well.
* \hideinitializer
*/
HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM = (1UL<<0),
HWLOC_TOPOLOGY_FLAG_INCLUDE_DISALLOWED = (1UL<<0),
/** \brief Assume that the selected backend provides the topology for the
* system on which we are running.
@ -1901,6 +1968,10 @@ struct hwloc_topology_discovery_support {
unsigned char numa;
/** \brief Detecting the amount of memory in NUMA nodes is supported. */
unsigned char numa_memory;
/** \brief Detecting and identifying PU objects that are not available to the current process is supported. */
unsigned char disallowed_pu;
/** \brief Detecting and identifying NUMA nodes that are not available to the current process is supported. */
unsigned char disallowed_numa;
};
/** \brief Flags describing actual PU binding support for this topology.
@ -1998,7 +2069,7 @@ HWLOC_DECLSPEC const struct hwloc_topology_support *hwloc_topology_get_support(h
*
* By default, most objects are kept (::HWLOC_TYPE_FILTER_KEEP_ALL).
* Instruction caches, I/O and Misc objects are ignored by default (::HWLOC_TYPE_FILTER_KEEP_NONE).
* Group levels are ignored unless they bring structure (::HWLOC_TYPE_FILTER_KEEP_STRUCTURE).
* Die and Group levels are ignored unless they bring structure (::HWLOC_TYPE_FILTER_KEEP_STRUCTURE).
*
* Note that group objects are also ignored individually (without the entire level)
* when they do not bring structure.
@ -2063,11 +2134,15 @@ HWLOC_DECLSPEC int hwloc_topology_get_type_filter(hwloc_topology_t topology, hwl
*/
HWLOC_DECLSPEC int hwloc_topology_set_all_types_filter(hwloc_topology_t topology, enum hwloc_type_filter_e filter);
/** \brief Set the filtering for all cache object types.
/** \brief Set the filtering for all CPU cache object types.
*
* Memory-side caches are not involved since they are not CPU caches.
*/
HWLOC_DECLSPEC int hwloc_topology_set_cache_types_filter(hwloc_topology_t topology, enum hwloc_type_filter_e filter);
/** \brief Set the filtering for all instruction cache object types.
/** \brief Set the filtering for all CPU instruction cache object types.
*
* Memory-side caches are not involved since they are not CPU caches.
*/
HWLOC_DECLSPEC int hwloc_topology_set_icache_types_filter(hwloc_topology_t topology, enum hwloc_type_filter_e filter);
@ -2106,10 +2181,24 @@ HWLOC_DECLSPEC void * hwloc_topology_get_userdata(hwloc_topology_t topology);
enum hwloc_restrict_flags_e {
/** \brief Remove all objects that became CPU-less.
* By default, only objects that contain no PU and no memory are removed.
* This flag may not be used with ::HWLOC_RESTRICT_FLAG_BYNODESET.
* \hideinitializer
*/
HWLOC_RESTRICT_FLAG_REMOVE_CPULESS = (1UL<<0),
/** \brief Restrict by nodeset instead of CPU set.
* Only keep objects whose nodeset is included or partially included in the given set.
* This flag may not be used with ::HWLOC_RESTRICT_FLAG_REMOVE_CPULESS.
*/
HWLOC_RESTRICT_FLAG_BYNODESET = (1UL<<3),
/** \brief Remove all objects that became Memory-less.
* By default, only objects that contain no PU and no memory are removed.
* This flag may only be used with ::HWLOC_RESTRICT_FLAG_BYNODESET.
* \hideinitializer
*/
HWLOC_RESTRICT_FLAG_REMOVE_MEMLESS = (1UL<<4),
/** \brief Move Misc objects to ancestors if their parents are removed during restriction.
* If this flag is not set, Misc objects are removed when their parents are removed.
* \hideinitializer
@ -2123,28 +2212,70 @@ enum hwloc_restrict_flags_e {
HWLOC_RESTRICT_FLAG_ADAPT_IO = (1UL<<2)
};
/** \brief Restrict the topology to the given CPU set.
/** \brief Restrict the topology to the given CPU set or nodeset.
*
* Topology \p topology is modified so as to remove all objects that
* are not included (or partially included) in the CPU set \p cpuset.
* are not included (or partially included) in the CPU set \p set.
* All objects CPU and node sets are restricted accordingly.
*
* If ::HWLOC_RESTRICT_FLAG_BYNODESET is passed in \p flags,
* \p set is considered a nodeset instead of a CPU set.
*
* \p flags is an OR'ed set of ::hwloc_restrict_flags_e.
*
* \note This call may not be reverted by restricting back to a larger
* cpuset. Once dropped during restriction, objects may not be brought
* set. Once dropped during restriction, objects may not be brought
* back, except by loading another topology with hwloc_topology_load().
*
* \return 0 on success.
*
* \return -1 with errno set to EINVAL if the input cpuset is invalid.
* \return -1 with errno set to EINVAL if the input set is invalid.
* The topology is not modified in this case.
*
* \return -1 with errno set to ENOMEM on failure to allocate internal data.
* The topology is reinitialized in this case. It should be either
* destroyed with hwloc_topology_destroy() or configured and loaded again.
*/
HWLOC_DECLSPEC int hwloc_topology_restrict(hwloc_topology_t __hwloc_restrict topology, hwloc_const_cpuset_t cpuset, unsigned long flags);
HWLOC_DECLSPEC int hwloc_topology_restrict(hwloc_topology_t __hwloc_restrict topology, hwloc_const_bitmap_t set, unsigned long flags);
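
A hedged usage sketch (not from this diff) of the new nodeset-based restriction; it assumes `topology` has already been loaded with hwloc_topology_load():

    hwloc_bitmap_t nodes = hwloc_bitmap_alloc();
    hwloc_bitmap_set(nodes, 0);   /* keep only the NUMA node with OS index 0 */
    hwloc_topology_restrict(topology, nodes,
                            HWLOC_RESTRICT_FLAG_BYNODESET | HWLOC_RESTRICT_FLAG_REMOVE_MEMLESS);
    hwloc_bitmap_free(nodes);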
/** \brief Flags to be given to hwloc_topology_allow(). */
enum hwloc_allow_flags_e {
/** \brief Mark all objects as allowed in the topology.
*
* \p cpuset and \p nodeset given to hwloc_topology_allow() must be \c NULL.
* \hideinitializer */
HWLOC_ALLOW_FLAG_ALL = (1UL<<0),
/** \brief Only allow objects that are available to the current process.
*
* The topology must have ::HWLOC_TOPOLOGY_FLAG_IS_THISSYSTEM so that the set
* of available resources can actually be retrieved from the operating system.
*
* \p cpuset and \p nodeset given to hwloc_topology_allow() must be \c NULL.
* \hideinitializer */
HWLOC_ALLOW_FLAG_LOCAL_RESTRICTIONS = (1UL<<1),
/** \brief Allow a custom set of objects, given to hwloc_topology_allow() as \p cpuset and/or \p nodeset parameters.
* \hideinitializer */
HWLOC_ALLOW_FLAG_CUSTOM = (1UL<<2)
};
/** \brief Change the sets of allowed PUs and NUMA nodes in the topology.
*
* This function only works if the ::HWLOC_TOPOLOGY_FLAG_INCLUDE_DISALLOWED
* was set on the topology. It does not modify any object, it only changes
* the sets returned by hwloc_topology_get_allowed_cpuset() and
* hwloc_topology_get_allowed_nodeset().
*
* It is notably useful when importing a topology from another process
* running in a different Linux Cgroup.
*
* \p flags must be set to one flag among ::hwloc_allow_flags_e.
*
* \note Removing objects from a topology should rather be performed with
* hwloc_topology_restrict().
*/
HWLOC_DECLSPEC int hwloc_topology_allow(hwloc_topology_t __hwloc_restrict topology, hwloc_const_cpuset_t cpuset, hwloc_const_nodeset_t nodeset, unsigned long flags);
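/* Example (editor's illustrative sketch, not part of this header): mark a
 * custom set of PUs as allowed. Assumes the topology was loaded with
 * ::HWLOC_TOPOLOGY_FLAG_INCLUDE_DISALLOWED as required above. */
#include <hwloc.h>

static int allow_first_four_pus(void)
{
    hwloc_topology_t topology;
    hwloc_bitmap_t cpuset;
    int err;

    hwloc_topology_init(&topology);
    hwloc_topology_set_flags(topology, HWLOC_TOPOLOGY_FLAG_INCLUDE_DISALLOWED);
    hwloc_topology_load(topology);

    cpuset = hwloc_bitmap_alloc();
    hwloc_bitmap_set_range(cpuset, 0, 3);         /* PUs 0-3 */
    err = hwloc_topology_allow(topology, cpuset, NULL, HWLOC_ALLOW_FLAG_CUSTOM);

    hwloc_bitmap_free(cpuset);
    hwloc_topology_destroy(topology);
    return err;
}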
/** \brief Add a MISC object as a leaf of the topology
*
@ -2250,21 +2381,21 @@ HWLOC_DECLSPEC int hwloc_obj_add_other_obj_sets(hwloc_obj_t dst, hwloc_obj_t src
/* high-level helpers */
#include <hwloc/helper.h>
#include "hwloc/helper.h"
/* inline code of some functions above */
#include <hwloc/inlines.h>
#include "hwloc/inlines.h"
/* exporting to XML or synthetic */
#include <hwloc/export.h>
#include "hwloc/export.h"
/* distances */
#include <hwloc/distances.h>
#include "hwloc/distances.h"
/* topology diffs */
#include <hwloc/diff.h>
#include "hwloc/diff.h"
/* deprecated headers */
#include <hwloc/deprecated.h>
#include "hwloc/deprecated.h"
#endif /* HWLOC_H */

View file

@ -1,6 +1,6 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2018 Inria. All rights reserved.
* Copyright © 2009-2019 Inria. All rights reserved.
* Copyright © 2009-2012 Université Bordeaux
* Copyright © 2009-2011 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
@ -11,10 +11,10 @@
#ifndef HWLOC_CONFIG_H
#define HWLOC_CONFIG_H
#define HWLOC_VERSION "2.0.4"
#define HWLOC_VERSION "2.2.0"
#define HWLOC_VERSION_MAJOR 2
#define HWLOC_VERSION_MINOR 0
#define HWLOC_VERSION_RELEASE 4
#define HWLOC_VERSION_MINOR 2
#define HWLOC_VERSION_RELEASE 0
#define HWLOC_VERSION_GREEK ""
#define __hwloc_restrict

View file

@ -13,7 +13,8 @@
#ifndef HWLOC_BITMAP_H
#define HWLOC_BITMAP_H
#include <hwloc/autogen/config.h>
#include "hwloc/autogen/config.h"
#include <assert.h>
@ -198,6 +199,9 @@ HWLOC_DECLSPEC int hwloc_bitmap_from_ulong(hwloc_bitmap_t bitmap, unsigned long
/** \brief Setup bitmap \p bitmap from unsigned long \p mask used as \p i -th subset */
HWLOC_DECLSPEC int hwloc_bitmap_from_ith_ulong(hwloc_bitmap_t bitmap, unsigned i, unsigned long mask);
/** \brief Setup bitmap \p bitmap from unsigned longs \p masks used as first \p nr subsets */
HWLOC_DECLSPEC int hwloc_bitmap_from_ulongs(hwloc_bitmap_t bitmap, unsigned nr, const unsigned long *masks);
/*
* Modifying bitmaps.
@ -256,6 +260,29 @@ HWLOC_DECLSPEC unsigned long hwloc_bitmap_to_ulong(hwloc_const_bitmap_t bitmap)
/** \brief Convert the \p i -th subset of bitmap \p bitmap into unsigned long mask */
HWLOC_DECLSPEC unsigned long hwloc_bitmap_to_ith_ulong(hwloc_const_bitmap_t bitmap, unsigned i) __hwloc_attribute_pure;
/** \brief Convert the first \p nr subsets of bitmap \p bitmap into the array of \p nr unsigned long \p masks
*
* \p nr may be determined earlier with hwloc_bitmap_nr_ulongs().
*
* \return 0
*/
HWLOC_DECLSPEC int hwloc_bitmap_to_ulongs(hwloc_const_bitmap_t bitmap, unsigned nr, unsigned long *masks);
/** \brief Return the number of unsigned longs required for storing bitmap \p bitmap entirely
*
* This is the number of contiguous unsigned longs from the very first bit of the bitmap
* (even if unset) up to the last set bit.
* This is useful for knowing the \p nr parameter to pass to hwloc_bitmap_to_ulongs()
* (or which calls to hwloc_bitmap_to_ith_ulong() are needed)
* to entirely convert a bitmap into multiple unsigned longs.
*
* When called on the output of hwloc_topology_get_topology_cpuset(),
* the returned number is large enough for all cpusets of the topology.
*
* \return -1 if \p bitmap is infinite.
*/
HWLOC_DECLSPEC int hwloc_bitmap_nr_ulongs(hwloc_const_bitmap_t bitmap) __hwloc_attribute_pure;
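/* Example (editor's illustrative sketch, not part of this header): serialize
 * the topology cpuset into an array of unsigned longs with
 * hwloc_bitmap_nr_ulongs() and hwloc_bitmap_to_ulongs(). */
#include <hwloc.h>
#include <stdio.h>
#include <stdlib.h>

static void dump_topology_cpuset(hwloc_topology_t topology)
{
    hwloc_const_cpuset_t cpuset = hwloc_topology_get_topology_cpuset(topology);
    int i, nr = hwloc_bitmap_nr_ulongs(cpuset);
    unsigned long *masks;

    if (nr < 0)
        return;                                   /* infinite bitmap, not expected here */
    masks = malloc(nr * sizeof(*masks));
    if (!masks)
        return;
    hwloc_bitmap_to_ulongs(cpuset, (unsigned) nr, masks);
    for (i = 0; i < nr; i++)
        printf("ulong #%d = 0x%lx\n", i, masks[i]);
    free(masks);
}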
/** \brief Test whether index \p id is part of bitmap \p bitmap.
*
* \return 1 if the bit at index \p id is set in bitmap \p bitmap, 0 otherwise.

View file

@ -16,11 +16,11 @@
#ifndef HWLOC_CUDA_H
#define HWLOC_CUDA_H
#include <hwloc.h>
#include <hwloc/autogen/config.h>
#include <hwloc/helper.h>
#include "hwloc.h"
#include "hwloc/autogen/config.h"
#include "hwloc/helper.h"
#ifdef HWLOC_LINUX_SYS
#include <hwloc/linux.h>
#include "hwloc/linux.h"
#endif
#include <cuda.h>

View file

@ -16,11 +16,11 @@
#ifndef HWLOC_CUDART_H
#define HWLOC_CUDART_H
#include <hwloc.h>
#include <hwloc/autogen/config.h>
#include <hwloc/helper.h>
#include "hwloc.h"
#include "hwloc/autogen/config.h"
#include "hwloc/helper.h"
#ifdef HWLOC_LINUX_SYS
#include <hwloc/linux.h>
#include "hwloc/linux.h"
#endif
#include <cuda.h> /* for CUDA_VERSION */

View file

@ -1,6 +1,6 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2017 Inria. All rights reserved.
* Copyright © 2009-2018 Inria. All rights reserved.
* Copyright © 2009-2012 Université Bordeaux
* Copyright © 2009-2010 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
@ -21,6 +21,8 @@
extern "C" {
#endif
/* backward compat with v2.0 before WHOLE_SYSTEM renaming */
#define HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM HWLOC_TOPOLOGY_FLAG_INCLUDE_DISALLOWED
/* backward compat with v1.11 before System removal */
#define HWLOC_OBJ_SYSTEM HWLOC_OBJ_MACHINE
/* backward compat with v1.10 before Socket->Package renaming */

View file

@ -87,7 +87,12 @@ enum hwloc_distances_kind_e {
* Such values are currently ignored for distance-based grouping.
* \hideinitializer
*/
HWLOC_DISTANCES_KIND_MEANS_BANDWIDTH = (1UL<<3)
HWLOC_DISTANCES_KIND_MEANS_BANDWIDTH = (1UL<<3),
/** \brief This distances structure covers objects of different types.
* \hideinitializer
*/
HWLOC_DISTANCES_KIND_HETEROGENEOUS_TYPES = (1UL<<4)
};
/** \brief Retrieve distance matrices.
@ -131,20 +136,32 @@ hwloc_distances_get_by_depth(hwloc_topology_t topology, int depth,
*
* Identical to hwloc_distances_get() with the additional \p type filter.
*/
static __hwloc_inline int
HWLOC_DECLSPEC int
hwloc_distances_get_by_type(hwloc_topology_t topology, hwloc_obj_type_t type,
unsigned *nr, struct hwloc_distances_s **distances,
unsigned long kind, unsigned long flags)
{
int depth = hwloc_get_type_depth(topology, type);
if (depth == HWLOC_TYPE_DEPTH_UNKNOWN || depth == HWLOC_TYPE_DEPTH_MULTIPLE) {
*nr = 0;
return 0;
}
return hwloc_distances_get_by_depth(topology, depth, nr, distances, kind, flags);
}
unsigned long kind, unsigned long flags);
/** \brief Release a distance matrix structure previously returned by hwloc_distances_get(). */
/** \brief Retrieve a distance matrix with the given name.
*
* Usually only one distances structure may match a given name.
*/
HWLOC_DECLSPEC int
hwloc_distances_get_by_name(hwloc_topology_t topology, const char *name,
unsigned *nr, struct hwloc_distances_s **distances,
unsigned long flags);
/** \brief Get a description of what a distances structure contains.
*
* For instance "NUMALatency" for hardware-provided NUMA distances (ACPI SLIT),
* or NULL if unknown.
*/
HWLOC_DECLSPEC const char *
hwloc_distances_get_name(hwloc_topology_t topology, struct hwloc_distances_s *distances);
/** \brief Release a distance matrix structure previously returned by hwloc_distances_get().
*
* \note This function is not required if the structure is removed with hwloc_distances_release_remove().
*/
HWLOC_DECLSPEC void
hwloc_distances_release(hwloc_topology_t topology, struct hwloc_distances_s *distances);
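/* Example (editor's illustrative sketch, not part of this header): look up the
 * hardware-provided "NUMALatency" matrix by name, print its size, then release
 * it. Assumes such a matrix exists on the machine; otherwise nr stays 0. */
#include <hwloc.h>
#include <stdio.h>

static void print_numa_latency_size(hwloc_topology_t topology)
{
    struct hwloc_distances_s *dist;
    const char *name;
    unsigned nr = 1;                              /* room for one structure */

    if (hwloc_distances_get_by_name(topology, "NUMALatency", &nr, &dist, 0) < 0 || !nr)
        return;
    name = hwloc_distances_get_name(topology, dist);
    printf("matrix '%s' covers %u objects\n", name ? name : "(unknown)", dist->nbobjs);
    hwloc_distances_release(topology, dist);
}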
@ -221,11 +238,11 @@ enum hwloc_distances_add_flag_e {
* The distance from object i to object j is in slot i*nbobjs+j.
*
* \p kind specifies the kind of distance as an OR'ed set of ::hwloc_distances_kind_e.
* Kind ::HWLOC_DISTANCES_KIND_HETEROGENEOUS_TYPES will be automatically added
* if objects of different types are given.
*
* \p flags configures the behavior of the function using an optional OR'ed set of
* ::hwloc_distances_add_flag_e.
*
* Objects must be of the same type. They cannot be of type Group.
*/
HWLOC_DECLSPEC int hwloc_distances_add(hwloc_topology_t topology,
unsigned nbobjs, hwloc_obj_t *objs, hwloc_uint64_t *values,
@ -237,7 +254,7 @@ HWLOC_DECLSPEC int hwloc_distances_add(hwloc_topology_t topology,
* gathered through the OS.
*
* If these distances were used to group objects, these additional
*Group objects are not removed from the topology.
* Group objects are not removed from the topology.
*/
HWLOC_DECLSPEC int hwloc_distances_remove(hwloc_topology_t topology);
@ -260,6 +277,12 @@ hwloc_distances_remove_by_type(hwloc_topology_t topology, hwloc_obj_type_t type)
return hwloc_distances_remove_by_depth(topology, depth);
}
/** \brief Release and remove the given distance matrix from the topology.
*
* This function includes a call to hwloc_distances_release().
*/
HWLOC_DECLSPEC int hwloc_distances_release_remove(hwloc_topology_t topology, struct hwloc_distances_s *distances);
/** @} */

View file

@ -14,7 +14,7 @@
#ifndef HWLOC_GL_H
#define HWLOC_GL_H
#include <hwloc.h>
#include "hwloc.h"
#include <stdio.h>
#include <string.h>

View file

@ -17,8 +17,9 @@
#ifndef HWLOC_GLIBC_SCHED_H
#define HWLOC_GLIBC_SCHED_H
#include <hwloc.h>
#include <hwloc/helper.h>
#include "hwloc.h"
#include "hwloc/helper.h"
#include <assert.h>
#if !defined _GNU_SOURCE || !defined _SCHED_H || (!defined CPU_SETSIZE && !defined sched_priority)

View file

@ -1,6 +1,6 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2019 Inria. All rights reserved.
* Copyright © 2009-2020 Inria. All rights reserved.
* Copyright © 2009-2012 Université Bordeaux
* Copyright © 2009-2010 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
@ -527,30 +527,36 @@ hwloc_obj_type_is_io(hwloc_obj_type_t type);
*
* Memory objects are objects attached to their parents
* in the Memory children list.
* This currently only includes NUMA nodes.
* This currently includes NUMA nodes and Memory-side caches.
*
* \return 1 if an object of type \p type is a Memory object, 0 otherwise.
*/
HWLOC_DECLSPEC int
hwloc_obj_type_is_memory(hwloc_obj_type_t type);
/** \brief Check whether an object type is a Cache (Data, Unified or Instruction).
/** \brief Check whether an object type is a CPU Cache (Data, Unified or Instruction).
*
* Memory-side caches are not CPU caches.
*
* \return 1 if an object of type \p type is a Cache, 0 otherwise.
*/
HWLOC_DECLSPEC int
hwloc_obj_type_is_cache(hwloc_obj_type_t type);
/** \brief Check whether an object type is a Data or Unified Cache.
/** \brief Check whether an object type is a CPU Data or Unified Cache.
*
* \return 1 if an object of type \p type is a Data or Unified Cache, 0 otherwise.
* Memory-side caches are not CPU caches.
*
* \return 1 if an object of type \p type is a CPU Data or Unified Cache, 0 otherwise.
*/
HWLOC_DECLSPEC int
hwloc_obj_type_is_dcache(hwloc_obj_type_t type);
/** \brief Check whether an object type is an Instruction Cache.
/** \brief Check whether an object type is a CPU Instruction Cache.
*
* \return 1 if an object of type \p type is an Instruction Cache, 0 otherwise.
* Memory-side caches are not CPU caches.
*
* \return 1 if an object of type \p type is a CPU Instruction Cache, 0 otherwise.
*/
HWLOC_DECLSPEC int
hwloc_obj_type_is_icache(hwloc_obj_type_t type);
@ -666,6 +672,24 @@ hwloc_get_shared_cache_covering_obj (hwloc_topology_t topology __hwloc_attribute
* package has fewer caches than its peers.
*/
/** \brief Remove simultaneous multithreading PUs from a CPU set.
*
* For each core in \p topology, if \p cpuset contains some PUs of that core,
* modify \p cpuset to only keep a single PU for that core.
*
* \p which specifies which PU will be kept.
* PUs are considered in physical index order.
* If 0, for each core, the function keeps the first PU that was originally set in \p cpuset.
*
* If \p which is larger than the number of PUs in a core that were originally set in \p cpuset,
* no PU is kept for that core.
*
* \note PUs that are not below a Core object are ignored
* (for instance if the topology does not contain any Core object).
* None of them is removed from \p cpuset.
*/
HWLOC_DECLSPEC int hwloc_bitmap_singlify_per_core(hwloc_topology_t topology, hwloc_bitmap_t cpuset, unsigned which);
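/* Example (editor's illustrative sketch, not part of this header): bind the
 * current thread to one PU per core by singlifying the allowed cpuset,
 * keeping the first PU of each core. */
#include <hwloc.h>

static int bind_one_pu_per_core(hwloc_topology_t topology)
{
    hwloc_bitmap_t cpuset = hwloc_bitmap_dup(hwloc_topology_get_allowed_cpuset(topology));
    int err;

    hwloc_bitmap_singlify_per_core(topology, cpuset, 0 /* keep first PU per core */);
    err = hwloc_set_cpubind(topology, cpuset, HWLOC_CPUBIND_THREAD);
    hwloc_bitmap_free(cpuset);
    return err;
}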
/** \brief Returns the object of type ::HWLOC_OBJ_PU with \p os_index.
*
* This function is useful for converting a CPU set into the PU
@ -914,7 +938,7 @@ hwloc_topology_get_complete_cpuset(hwloc_topology_t topology) __hwloc_attribute_
* \note The returned cpuset is not newly allocated and should thus not be
* changed or freed; hwloc_bitmap_dup() must be used to obtain a local copy.
*
* \note This is equivalent to retrieving the root object complete CPU-set.
* \note This is equivalent to retrieving the root object CPU-set.
*/
HWLOC_DECLSPEC hwloc_const_cpuset_t
hwloc_topology_get_topology_cpuset(hwloc_topology_t topology) __hwloc_attribute_pure;
@ -923,11 +947,11 @@ hwloc_topology_get_topology_cpuset(hwloc_topology_t topology) __hwloc_attribute_
*
* \return the CPU set of allowed logical processors of the system.
*
* \note If the topology flag ::HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM was not set,
* \note If the topology flag ::HWLOC_TOPOLOGY_FLAG_INCLUDE_DISALLOWED was not set,
* this is identical to hwloc_topology_get_topology_cpuset(), which means
* all PUs are allowed.
*
* \note If ::HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM was set, applying
* \note If ::HWLOC_TOPOLOGY_FLAG_INCLUDE_DISALLOWED was set, applying
* hwloc_bitmap_intersects() on the result of this function and on an object
* cpuset checks whether there are allowed PUs inside that object.
* Applying hwloc_bitmap_and() returns the list of these allowed PUs.
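/* Example (editor's illustrative sketch, not part of this header): count the
 * allowed PUs inside a normal (non-I/O) object, following the
 * hwloc_bitmap_and() pattern described above. */
#include <hwloc.h>

static int count_allowed_pus(hwloc_topology_t topology, hwloc_obj_t obj)
{
    hwloc_bitmap_t tmp = hwloc_bitmap_dup(hwloc_topology_get_allowed_cpuset(topology));
    int n;

    hwloc_bitmap_and(tmp, tmp, obj->cpuset);      /* allowed PUs inside obj */
    n = hwloc_bitmap_weight(tmp);
    hwloc_bitmap_free(tmp);
    return n;
}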
@ -945,7 +969,7 @@ hwloc_topology_get_allowed_cpuset(hwloc_topology_t topology) __hwloc_attribute_p
* \note The returned nodeset is not newly allocated and should thus not be
* changed or freed; hwloc_bitmap_dup() must be used to obtain a local copy.
*
* \note This is equivalent to retrieving the root object complete CPU-set.
* \note This is equivalent to retrieving the root object complete nodeset.
*/
HWLOC_DECLSPEC hwloc_const_nodeset_t
hwloc_topology_get_complete_nodeset(hwloc_topology_t topology) __hwloc_attribute_pure;
@ -959,7 +983,7 @@ hwloc_topology_get_complete_nodeset(hwloc_topology_t topology) __hwloc_attribute
* \note The returned nodeset is not newly allocated and should thus not be
* changed or freed; hwloc_bitmap_dup() must be used to obtain a local copy.
*
* \note This is equivalent to retrieving the root object complete CPU-set.
* \note This is equivalent to retrieving the root object nodeset.
*/
HWLOC_DECLSPEC hwloc_const_nodeset_t
hwloc_topology_get_topology_nodeset(hwloc_topology_t topology) __hwloc_attribute_pure;
@ -968,11 +992,11 @@ hwloc_topology_get_topology_nodeset(hwloc_topology_t topology) __hwloc_attribute
*
* \return the node set of allowed memory of the system.
*
* \note If the topology flag ::HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM was not set,
* \note If the topology flag ::HWLOC_TOPOLOGY_FLAG_INCLUDE_DISALLOWED was not set,
* this is identical to hwloc_topology_get_topology_nodeset(), which means
* all NUMA nodes are allowed.
*
* \note If ::HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM was set, applying
* \note If ::HWLOC_TOPOLOGY_FLAG_INCLUDE_DISALLOWED was set, applying
* hwloc_bitmap_intersects() on the result of this function and on an object
* nodeset checks whether there are allowed NUMA nodes inside that object.
* Applying hwloc_bitmap_and() returns the list of these allowed NUMA nodes.
@ -992,15 +1016,16 @@ hwloc_topology_get_allowed_nodeset(hwloc_topology_t topology) __hwloc_attribute_
* @{
*/
/** \brief Convert a CPU set into a NUMA node set and handle non-NUMA cases
/** \brief Convert a CPU set into a NUMA node set
*
* For each PU included in the input \p _cpuset, set the corresponding
* local NUMA node(s) in the output \p nodeset.
*
* If some NUMA nodes have no CPUs at all, this function never sets their
* indexes in the output node set, even if a full CPU set is given as input.
*
* If the topology contains no NUMA nodes, the machine is considered
* as a single memory node, and the following behavior is used:
* If \p cpuset is empty, \p nodeset will be emptied as well.
* Otherwise \p nodeset will be entirely filled.
* Hence the entire topology CPU set is converted into the set of all nodes
* that have some local CPUs.
*/
static __hwloc_inline int
hwloc_cpuset_to_nodeset(hwloc_topology_t topology, hwloc_const_cpuset_t _cpuset, hwloc_nodeset_t nodeset)
@ -1015,13 +1040,16 @@ hwloc_cpuset_to_nodeset(hwloc_topology_t topology, hwloc_const_cpuset_t _cpuset,
return 0;
}
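/* Example (editor's illustrative sketch, not part of this header): compute the
 * set of NUMA nodes local to the PUs where the current thread is bound. The
 * caller frees the returned nodeset with hwloc_bitmap_free(). */
#include <hwloc.h>

static hwloc_nodeset_t local_nodes_of_thread_binding(hwloc_topology_t topology)
{
    hwloc_cpuset_t cpuset = hwloc_bitmap_alloc();
    hwloc_nodeset_t nodeset = hwloc_bitmap_alloc();

    hwloc_get_cpubind(topology, cpuset, HWLOC_CPUBIND_THREAD);
    hwloc_cpuset_to_nodeset(topology, cpuset, nodeset);
    hwloc_bitmap_free(cpuset);
    return nodeset;
}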
/** \brief Convert a NUMA node set into a CPU set and handle non-NUMA cases
/** \brief Convert a NUMA node set into a CPU set
*
* If the topology contains no NUMA nodes, the machine is considered
* as a single memory node, and the following behavior is used:
* If \p nodeset is empty, \p cpuset will be emptied as well.
* Otherwise \p cpuset will be entirely filled.
* This is useful for manipulating memory binding sets.
* For each NUMA node included in the input \p nodeset, set the corresponding
* local PUs in the output \p _cpuset.
*
* If some CPUs have no local NUMA nodes, this function never sets their
* indexes in the output CPU set, even if a full node set is given as input.
*
* Hence the entire topology node set is converted into the set of all CPUs
* that have some local NUMA nodes.
*/
static __hwloc_inline int
hwloc_cpuset_from_nodeset(hwloc_topology_t topology, hwloc_cpuset_t _cpuset, hwloc_const_nodeset_t nodeset)

View file

@ -13,11 +13,13 @@
#ifndef HWLOC_INTEL_MIC_H
#define HWLOC_INTEL_MIC_H
#include <hwloc.h>
#include <hwloc/autogen/config.h>
#include <hwloc/helper.h>
#include "hwloc.h"
#include "hwloc/autogen/config.h"
#include "hwloc/helper.h"
#ifdef HWLOC_LINUX_SYS
#include <hwloc/linux.h>
#include "hwloc/linux.h"
#include <dirent.h>
#include <string.h>
#endif

View file

@ -15,7 +15,8 @@
#ifndef HWLOC_LINUX_LIBNUMA_H
#define HWLOC_LINUX_LIBNUMA_H
#include <hwloc.h>
#include "hwloc.h"
#include <numa.h>

View file

@ -15,7 +15,8 @@
#ifndef HWLOC_LINUX_H
#define HWLOC_LINUX_H
#include <hwloc.h>
#include "hwloc.h"
#include <stdio.h>

View file

@ -13,11 +13,11 @@
#ifndef HWLOC_NVML_H
#define HWLOC_NVML_H
#include <hwloc.h>
#include <hwloc/autogen/config.h>
#include <hwloc/helper.h>
#include "hwloc.h"
#include "hwloc/autogen/config.h"
#include "hwloc/helper.h"
#ifdef HWLOC_LINUX_SYS
#include <hwloc/linux.h>
#include "hwloc/linux.h"
#endif
#include <nvml.h>

View file

@ -1,5 +1,5 @@
/*
* Copyright © 2012-2018 Inria. All rights reserved.
* Copyright © 2012-2019 Inria. All rights reserved.
* Copyright © 2013, 2018 Université Bordeaux. All right reserved.
* See COPYING in top-level directory.
*/
@ -14,19 +14,17 @@
#ifndef HWLOC_OPENCL_H
#define HWLOC_OPENCL_H
#include <hwloc.h>
#include <hwloc/autogen/config.h>
#include <hwloc/helper.h>
#include "hwloc.h"
#include "hwloc/autogen/config.h"
#include "hwloc/helper.h"
#ifdef HWLOC_LINUX_SYS
#include <hwloc/linux.h>
#include "hwloc/linux.h"
#endif
#ifdef __APPLE__
#include <OpenCL/cl.h>
#include <OpenCL/cl_ext.h>
#else
#include <CL/cl.h>
#include <CL/cl_ext.h>
#endif
#include <stdio.h>
@ -37,17 +35,80 @@ extern "C" {
#endif
/* OpenCL extensions aren't always shipped with default headers, and
* they don't always reflect what the installed implementations support.
* Try everything and let the implementation return errors when not supported.
*/
/* Copyright (c) 2008-2018 The Khronos Group Inc. */
/* needs "cl_amd_device_attribute_query" device extension, but not strictly required for clGetDeviceInfo() */
#define HWLOC_CL_DEVICE_TOPOLOGY_AMD 0x4037
typedef union {
struct { cl_uint type; cl_uint data[5]; } raw;
struct { cl_uint type; cl_char unused[17]; cl_char bus; cl_char device; cl_char function; } pcie;
} hwloc_cl_device_topology_amd;
#define HWLOC_CL_DEVICE_TOPOLOGY_TYPE_PCIE_AMD 1
/* needs "cl_nv_device_attribute_query" device extension, but not strictly required for clGetDeviceInfo() */
#define HWLOC_CL_DEVICE_PCI_BUS_ID_NV 0x4008
#define HWLOC_CL_DEVICE_PCI_SLOT_ID_NV 0x4009
#define HWLOC_CL_DEVICE_PCI_DOMAIN_ID_NV 0x400A
/** \defgroup hwlocality_opencl Interoperability with OpenCL
*
* This interface offers ways to retrieve topology information about
* OpenCL devices.
*
* Only the AMD OpenCL interface currently offers useful locality information
* about its devices.
* Only AMD and NVIDIA OpenCL implementations currently offer useful locality
* information about their devices.
*
* @{
*/
/** \brief Return the domain, bus and device IDs of the OpenCL device \p device.
*
* Device \p device must match the local machine.
*/
static __hwloc_inline int
hwloc_opencl_get_device_pci_busid(cl_device_id device,
unsigned *domain, unsigned *bus, unsigned *dev, unsigned *func)
{
hwloc_cl_device_topology_amd amdtopo;
cl_uint nvbus, nvslot, nvdomain;
cl_int clret;
clret = clGetDeviceInfo(device, HWLOC_CL_DEVICE_TOPOLOGY_AMD, sizeof(amdtopo), &amdtopo, NULL);
if (CL_SUCCESS == clret
&& HWLOC_CL_DEVICE_TOPOLOGY_TYPE_PCIE_AMD == amdtopo.raw.type) {
*domain = 0; /* can't do anything better */
*bus = (unsigned) amdtopo.pcie.bus;
*dev = (unsigned) amdtopo.pcie.device;
*func = (unsigned) amdtopo.pcie.function;
return 0;
}
clret = clGetDeviceInfo(device, HWLOC_CL_DEVICE_PCI_BUS_ID_NV, sizeof(nvbus), &nvbus, NULL);
if (CL_SUCCESS == clret) {
clret = clGetDeviceInfo(device, HWLOC_CL_DEVICE_PCI_SLOT_ID_NV, sizeof(nvslot), &nvslot, NULL);
if (CL_SUCCESS == clret) {
clret = clGetDeviceInfo(device, HWLOC_CL_DEVICE_PCI_DOMAIN_ID_NV, sizeof(nvdomain), &nvdomain, NULL);
if (CL_SUCCESS == clret) { /* available since CUDA 10.2 */
*domain = nvdomain;
} else {
*domain = 0;
}
*bus = nvbus & 0xff;
/* non-documented but used in many other projects */
*dev = nvslot >> 3;
*func = nvslot & 0x7;
return 0;
}
}
return -1;
}
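/* Example (editor's illustrative sketch, not part of this header): print the
 * PCI bus ID of an OpenCL device. Assumes `dev` was obtained from
 * clGetDeviceIDs(); falls back to a message if neither the AMD nor the
 * NVIDIA extension reports locality. */
static void print_opencl_device_busid(cl_device_id dev)
{
    unsigned domain, bus, device, func;
    if (hwloc_opencl_get_device_pci_busid(dev, &domain, &bus, &device, &func) == 0)
        printf("OpenCL device at %04x:%02x:%02x.%01x\n", domain, bus, device, func);
    else
        printf("PCI locality not reported by this OpenCL implementation\n");
}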
/** \brief Get the CPU set of logical processors that are physically
* close to OpenCL device \p device.
*
@ -62,7 +123,7 @@ extern "C" {
* and hwloc_opencl_get_device_osdev_by_index().
*
* This function is currently only implemented in a meaningful way for
* Linux with the AMD OpenCL implementation; other systems will simply
* Linux with the AMD or NVIDIA OpenCL implementation; other systems will simply
* get a full cpuset.
*/
static __hwloc_inline int
@ -70,35 +131,28 @@ hwloc_opencl_get_device_cpuset(hwloc_topology_t topology __hwloc_attribute_unuse
cl_device_id device __hwloc_attribute_unused,
hwloc_cpuset_t set)
{
#if (defined HWLOC_LINUX_SYS) && (defined CL_DEVICE_TOPOLOGY_AMD)
/* If we're on Linux + AMD OpenCL, use the AMD extension + the sysfs mechanism to get the local cpus */
#if (defined HWLOC_LINUX_SYS)
/* If we're on Linux, try AMD/NVIDIA extensions + the sysfs mechanism to get the local cpus */
#define HWLOC_OPENCL_DEVICE_SYSFS_PATH_MAX 128
char path[HWLOC_OPENCL_DEVICE_SYSFS_PATH_MAX];
cl_device_topology_amd amdtopo;
cl_int clret;
unsigned pcidomain, pcibus, pcidev, pcifunc;
if (!hwloc_topology_is_thissystem(topology)) {
errno = EINVAL;
return -1;
}
clret = clGetDeviceInfo(device, CL_DEVICE_TOPOLOGY_AMD, sizeof(amdtopo), &amdtopo, NULL);
if (CL_SUCCESS != clret) {
hwloc_bitmap_copy(set, hwloc_topology_get_complete_cpuset(topology));
return 0;
}
if (CL_DEVICE_TOPOLOGY_TYPE_PCIE_AMD != amdtopo.raw.type) {
if (hwloc_opencl_get_device_pci_busid(device, &pcidomain, &pcibus, &pcidev, &pcifunc) < 0) {
hwloc_bitmap_copy(set, hwloc_topology_get_complete_cpuset(topology));
return 0;
}
sprintf(path, "/sys/bus/pci/devices/0000:%02x:%02x.%01x/local_cpus",
(unsigned) amdtopo.pcie.bus, (unsigned) amdtopo.pcie.device, (unsigned) amdtopo.pcie.function);
sprintf(path, "/sys/bus/pci/devices/%04x:%02x:%02x.%01x/local_cpus", pcidomain, pcibus, pcidev, pcifunc);
if (hwloc_linux_read_path_as_cpumask(path, set) < 0
|| hwloc_bitmap_iszero(set))
hwloc_bitmap_copy(set, hwloc_topology_get_complete_cpuset(topology));
#else
/* Non-Linux + AMD OpenCL systems simply get a full cpuset */
/* Non-Linux systems simply get a full cpuset */
hwloc_bitmap_copy(set, hwloc_topology_get_complete_cpuset(topology));
#endif
return 0;
@ -140,8 +194,8 @@ hwloc_opencl_get_device_osdev_by_index(hwloc_topology_t topology,
* Use OpenCL device attributes to find the corresponding hwloc OS device object.
* Return NULL if there is none or if useful attributes are not available.
*
* This function currently only works on AMD OpenCL devices that support
* the CL_DEVICE_TOPOLOGY_AMD extension. hwloc_opencl_get_device_osdev_by_index()
* This function currently only works on AMD and NVIDIA OpenCL devices that support
* relevant OpenCL extensions. hwloc_opencl_get_device_osdev_by_index()
* should be preferred whenever possible, i.e. when platform and device index
* are known.
*
@ -159,17 +213,10 @@ static __hwloc_inline hwloc_obj_t
hwloc_opencl_get_device_osdev(hwloc_topology_t topology __hwloc_attribute_unused,
cl_device_id device __hwloc_attribute_unused)
{
#ifdef CL_DEVICE_TOPOLOGY_AMD
hwloc_obj_t osdev;
cl_device_topology_amd amdtopo;
cl_int clret;
unsigned pcidomain, pcibus, pcidevice, pcifunc;
clret = clGetDeviceInfo(device, CL_DEVICE_TOPOLOGY_AMD, sizeof(amdtopo), &amdtopo, NULL);
if (CL_SUCCESS != clret) {
errno = EINVAL;
return NULL;
}
if (CL_DEVICE_TOPOLOGY_TYPE_PCIE_AMD != amdtopo.raw.type) {
if (hwloc_opencl_get_device_pci_busid(device, &pcidomain, &pcibus, &pcidevice, &pcifunc) < 0) {
errno = EINVAL;
return NULL;
}
@ -181,18 +228,15 @@ hwloc_opencl_get_device_osdev(hwloc_topology_t topology __hwloc_attribute_unused
continue;
if (pcidev
&& pcidev->type == HWLOC_OBJ_PCI_DEVICE
&& pcidev->attr->pcidev.domain == 0
&& pcidev->attr->pcidev.bus == amdtopo.pcie.bus
&& pcidev->attr->pcidev.dev == amdtopo.pcie.device
&& pcidev->attr->pcidev.func == amdtopo.pcie.function)
&& pcidev->attr->pcidev.domain == pcidomain
&& pcidev->attr->pcidev.bus == pcibus
&& pcidev->attr->pcidev.dev == pcidevice
&& pcidev->attr->pcidev.func == pcifunc)
return osdev;
/* if PCI devices are filtered out, we need an info attr to match on */
}
return NULL;
#else
return NULL;
#endif
}
/** @} */

View file

@ -19,10 +19,10 @@
#ifndef HWLOC_OPENFABRICS_VERBS_H
#define HWLOC_OPENFABRICS_VERBS_H
#include <hwloc.h>
#include <hwloc/autogen/config.h>
#include "hwloc.h"
#include "hwloc/autogen/config.h"
#ifdef HWLOC_LINUX_SYS
#include <hwloc/linux.h>
#include "hwloc/linux.h"
#endif
#include <infiniband/verbs.h>

View file

@ -1,5 +1,5 @@
/*
* Copyright © 2013-2017 Inria. All rights reserved.
* Copyright © 2013-2020 Inria. All rights reserved.
* Copyright © 2016 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
*/
@ -13,10 +13,15 @@
struct hwloc_backend;
#include <hwloc.h>
#include "hwloc.h"
#ifdef HWLOC_INSIDE_PLUGIN
/* needed for hwloc_plugin_check_namespace() */
#ifdef HWLOC_HAVE_LTDL
#include <ltdl.h>
#else
#include <dlfcn.h>
#endif
#endif
@ -25,52 +30,36 @@ struct hwloc_backend;
* @{
*/
/** \brief Discovery component type */
typedef enum hwloc_disc_component_type_e {
/** \brief CPU-only discovery through the OS, or generic no-OS support.
* \hideinitializer */
HWLOC_DISC_COMPONENT_TYPE_CPU = (1<<0),
/** \brief xml or synthetic,
* platform-specific components such as bgq.
* Anything that discovers CPU and everything else.
* No misc backend is expected to complement a global component.
* \hideinitializer */
HWLOC_DISC_COMPONENT_TYPE_GLOBAL = (1<<1),
/** \brief OpenCL, Cuda, etc.
* \hideinitializer */
HWLOC_DISC_COMPONENT_TYPE_MISC = (1<<2)
} hwloc_disc_component_type_t;
/** \brief Discovery component structure
*
* This is the major kind of components, taking care of the discovery.
* They are registered by generic components, either statically-built or as plugins.
*/
struct hwloc_disc_component {
/** \brief Discovery component type */
hwloc_disc_component_type_t type;
/** \brief Name.
* If this component is built as a plugin, this name does not have to match the plugin filename.
*/
const char *name;
/** \brief Component types to exclude, as an OR'ed set of ::hwloc_disc_component_type_e.
/** \brief Discovery phases performed by this component.
* OR'ed set of ::hwloc_disc_phase_t
*/
unsigned phases;
/** \brief Component phases to exclude, as an OR'ed set of ::hwloc_disc_phase_t.
*
* For a GLOBAL component, this usually includes all other types (~0).
* For a GLOBAL component, this usually includes all other phases (\c ~0UL).
*
* Other components only exclude types that may bring conflicting
* topology information. MISC components should likely not be excluded
* since they usually bring non-primary additional information.
*/
unsigned excludes;
unsigned excluded_phases;
/** \brief Instantiate callback to create a backend from the component.
* Parameters data1, data2, data3 are NULL except for components
* that have special enabling routines such as hwloc_topology_set_xml(). */
struct hwloc_backend * (*instantiate)(struct hwloc_disc_component *component, const void *data1, const void *data2, const void *data3);
struct hwloc_backend * (*instantiate)(struct hwloc_topology *topology, struct hwloc_disc_component *component, unsigned excluded_phases, const void *data1, const void *data2, const void *data3);
/** \brief Component priority.
* Used to sort topology->components, higher priority first.
@ -107,6 +96,72 @@ struct hwloc_disc_component {
* @{
*/
/** \brief Discovery phase */
typedef enum hwloc_disc_phase_e {
/** \brief xml or synthetic, platform-specific components such as bgq.
* Discovers everything including CPU, memory, I/O and everything else.
* A component with a Global phase usually excludes all other phases.
* \hideinitializer */
HWLOC_DISC_PHASE_GLOBAL = (1U<<0),
/** \brief CPU discovery.
* \hideinitializer */
HWLOC_DISC_PHASE_CPU = (1U<<1),
/** \brief Attach memory to existing CPU objects.
* \hideinitializer */
HWLOC_DISC_PHASE_MEMORY = (1U<<2),
/** \brief Attach PCI devices and bridges to existing CPU objects.
* \hideinitializer */
HWLOC_DISC_PHASE_PCI = (1U<<3),
/** \brief I/O discovery that requires PCI devices (OS devices such as OpenCL, CUDA, etc.).
* \hideinitializer */
HWLOC_DISC_PHASE_IO = (1U<<4),
/** \brief Misc objects that get added below anything else.
* \hideinitializer */
HWLOC_DISC_PHASE_MISC = (1U<<5),
/** \brief Annotating existing objects, adding distances, etc.
* \hideinitializer */
HWLOC_DISC_PHASE_ANNOTATE = (1U<<6),
/** \brief Final tweaks to a ready-to-use topology.
* This phase runs once the topology is loaded, before it is returned to the user.
* Hence it may only use the main hwloc API for modifying the topology,
* for instance by restricting it, adding info attributes, etc.
* \hideinitializer */
HWLOC_DISC_PHASE_TWEAK = (1U<<7)
} hwloc_disc_phase_t;
/** \brief Discovery status flags */
enum hwloc_disc_status_flag_e {
/** \brief The sets of allowed resources were already retrieved \hideinitializer */
HWLOC_DISC_STATUS_FLAG_GOT_ALLOWED_RESOURCES = (1UL<<1)
};
/** \brief Discovery status structure
*
* Used by the core and backends to inform about what has been/is being done
* during the discovery process.
*/
struct hwloc_disc_status {
/** \brief The current discovery phase that is performed.
* Must match one of the phases in the component phases field.
*/
hwloc_disc_phase_t phase;
/** \brief Dynamically excluded phases.
* Set when a component decides during discovery that some phases are no longer needed.
*/
unsigned excluded_phases;
/** \brief OR'ed set of hwloc_disc_status_flag_e */
unsigned long flags;
};
/** \brief Discovery backend structure
*
* A backend is the instantiation of a discovery component.
@ -116,6 +171,14 @@ struct hwloc_disc_component {
* hwloc_backend_alloc() initializes all fields to default values
* that the component may change (except "component" and "next")
* before enabling the backend with hwloc_backend_enable().
*
* Most backends assume that the topology is_thissystem flag is
* set because they talk to the underlying operating system.
* However they may still be used in topologies without the
* is_thissystem flag for debugging reasons.
* In practice, they are usually auto-disabled in such cases
* (excluded by xml or synthetic backends, or by environment
* variables when changing the Linux fsroot or the x86 cpuid path).
*/
struct hwloc_backend {
/** \private Reserved for the core, set by hwloc_backend_alloc() */
@ -127,12 +190,20 @@ struct hwloc_backend {
/** \private Reserved for the core. Used internally to list backends topology->backends. */
struct hwloc_backend * next;
/** \brief Discovery phases performed by this component, possibly without some of them if excluded by other components.
* OR'ed set of ::hwloc_disc_phase_t
*/
unsigned phases;
/** \brief Backend flags, currently always 0. */
unsigned long flags;
/** \brief Backend-specific 'is_thissystem' property.
* Set to 0 or 1 if the backend should enforce the thissystem flag when it gets enabled.
* Set to -1 if the backend doesn't care (default). */
* Set to 0 if the backend disables the thissystem flag for this topology
* (e.g. loading from xml or synthetic string,
* or using a different fsroot on Linux, or a x86 CPUID dump).
* Set to -1 if the backend doesn't care (default).
*/
int is_thissystem;
/** \brief Backend private data, or NULL if none. */
@ -147,20 +218,22 @@ struct hwloc_backend {
* or because of an actual discovery/gathering failure.
* May be NULL.
*/
int (*discover)(struct hwloc_backend *backend);
int (*discover)(struct hwloc_backend *backend, struct hwloc_disc_status *status);
/** \brief Callback used by the PCI backend to retrieve the locality of a PCI object from the OS/cpu backend.
* May be NULL. */
/** \brief Callback to retrieve the locality of a PCI object.
* Called by the PCI core when attaching PCI hierarchy to CPU objects.
* May be NULL.
*/
int (*get_pci_busid_cpuset)(struct hwloc_backend *backend, struct hwloc_pcidev_attr_s *busid, hwloc_bitmap_t cpuset);
};
/** \brief Allocate a backend structure, set good default values, initialize backend->component and topology, etc.
* The caller will then modify whatever is needed, and call hwloc_backend_enable().
*/
HWLOC_DECLSPEC struct hwloc_backend * hwloc_backend_alloc(struct hwloc_disc_component *component);
HWLOC_DECLSPEC struct hwloc_backend * hwloc_backend_alloc(struct hwloc_topology *topology, struct hwloc_disc_component *component);
/** \brief Enable a previously allocated and setup backend. */
HWLOC_DECLSPEC int hwloc_backend_enable(struct hwloc_topology *topology, struct hwloc_backend *backend);
HWLOC_DECLSPEC int hwloc_backend_enable(struct hwloc_backend *backend);
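/* Example (editor's illustrative sketch, not part of this header): minimal
 * instantiate/discover callbacks for an ANNOTATE-phase component, using only
 * the signatures declared above. A real plugin additionally registers a
 * struct hwloc_disc_component and a hwloc_component wrapper (omitted here). */
static int
my_annotate_discover(struct hwloc_backend *backend, struct hwloc_disc_status *status)
{
    (void) backend;
    (void) status;
    /* annotate existing objects here, e.g. add info attributes */
    return 0;
}

static struct hwloc_backend *
my_annotate_instantiate(struct hwloc_topology *topology,
                        struct hwloc_disc_component *component,
                        unsigned excluded_phases,
                        const void *data1, const void *data2, const void *data3)
{
    struct hwloc_backend *backend = hwloc_backend_alloc(topology, component);
    (void) excluded_phases; (void) data1; (void) data2; (void) data3;
    if (!backend)
        return NULL;
    backend->discover = my_annotate_discover;
    return backend;                               /* to be enabled with hwloc_backend_enable() */
}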
/** @} */
@ -349,14 +422,22 @@ static __hwloc_inline int
hwloc_plugin_check_namespace(const char *pluginname __hwloc_attribute_unused, const char *symbol __hwloc_attribute_unused)
{
#ifdef HWLOC_INSIDE_PLUGIN
lt_dlhandle handle;
void *sym;
handle = lt_dlopen(NULL);
#ifdef HWLOC_HAVE_LTDL
lt_dlhandle handle = lt_dlopen(NULL);
#else
void *handle = dlopen(NULL, RTLD_NOW|RTLD_LOCAL);
#endif
if (!handle)
/* cannot check, assume things will work */
return 0;
#ifdef HWLOC_HAVE_LTDL
sym = lt_dlsym(handle, symbol);
lt_dlclose(handle);
#else
sym = dlsym(handle, symbol);
dlclose(handle);
#endif
if (!sym) {
static int verboseenv_checked = 0;
static int verboseenv_value = 0;
@ -480,7 +561,9 @@ HWLOC_DECLSPEC hwloc_obj_type_t hwloc_pcidisc_check_bridge_type(unsigned device_
*
* Returns -1 and destroys \p obj if bridge fields are invalid.
*/
HWLOC_DECLSPEC int hwloc_pcidisc_setup_bridge_attr(hwloc_obj_t obj, const unsigned char *config);
HWLOC_DECLSPEC int hwloc_pcidisc_find_bridge_buses(unsigned domain, unsigned bus, unsigned dev, unsigned func,
unsigned *secondary_busp, unsigned *subordinate_busp,
const unsigned char *config);
/** \brief Insert a PCI object in the given PCI tree by looking at PCI bus IDs.
*
@ -490,10 +573,7 @@ HWLOC_DECLSPEC void hwloc_pcidisc_tree_insert_by_busid(struct hwloc_obj **treep,
/** \brief Add some hostbridges on top of the given tree of PCI objects and attach them to the topology.
*
* For now, they will be attached to the root object. The core will move them to their actual PCI
* locality using hwloc_pci_belowroot_apply_locality() at the end of the discovery.
*
* In the meantime, other backends lookup PCI objects or localities (for instance to attach OS devices)
* Other backends may lookup PCI objects or localities (for instance to attach OS devices)
* by using hwloc_pcidisc_find_by_busid() or hwloc_pcidisc_find_busid_parent().
*/
HWLOC_DECLSPEC int hwloc_pcidisc_tree_attach(struct hwloc_topology *topology, struct hwloc_obj *tree);
@ -507,32 +587,14 @@ HWLOC_DECLSPEC int hwloc_pcidisc_tree_attach(struct hwloc_topology *topology, st
* @{
*/
/** \brief Find the PCI object that matches the bus ID.
*
* To be used after a PCI backend added PCI devices with hwloc_pcidisc_tree_attach()
* and before the core moves them to their actual location with hwloc_pci_belowroot_apply_locality().
*
* If no exactly matching object is found, return the container bridge if any, or NULL.
*
* On failure, it may be possible to find the PCI locality (instead of the PCI device)
* by calling hwloc_pcidisc_find_busid_parent().
*
* \note This is semantically identical to hwloc_get_pcidev_by_busid() which only works
* after the topology is fully loaded.
*/
HWLOC_DECLSPEC struct hwloc_obj * hwloc_pcidisc_find_by_busid(struct hwloc_topology *topology, unsigned domain, unsigned bus, unsigned dev, unsigned func);
/** \brief Find the normal parent of a PCI bus ID.
*
* Look at PCI affinity to find out where the given PCI bus ID should be attached.
*
* This function should be used to attach an I/O device directly under a normal
* (non-I/O) object, instead of below a PCI object.
* It is usually used by backends when hwloc_pcidisc_find_by_busid() failed
* to find the hwloc object corresponding to this bus ID, for instance because
* PCI discovery is not supported on this platform.
* This function should be used to attach an I/O device under the corresponding
* PCI object (if any), or under a normal (non-I/O) object with the same locality.
*/
HWLOC_DECLSPEC struct hwloc_obj * hwloc_pcidisc_find_busid_parent(struct hwloc_topology *topology, unsigned domain, unsigned bus, unsigned dev, unsigned func);
HWLOC_DECLSPEC struct hwloc_obj * hwloc_pci_find_parent_by_busid(struct hwloc_topology *topology, unsigned domain, unsigned bus, unsigned dev, unsigned func);
/** @} */

View file

@ -1,13 +1,13 @@
/*
* Copyright © 2009-2011 Cisco Systems, Inc. All rights reserved.
* Copyright © 2010-2018 Inria. All rights reserved.
* Copyright © 2010-2019 Inria. All rights reserved.
* See COPYING in top-level directory.
*/
#ifndef HWLOC_RENAME_H
#define HWLOC_RENAME_H
#include <hwloc/autogen/config.h>
#include "hwloc/autogen/config.h"
#ifdef __cplusplus
@ -28,6 +28,7 @@ extern "C" {
#define HWLOC_MUNGE_NAME(a, b) HWLOC_MUNGE_NAME2(a, b)
#define HWLOC_MUNGE_NAME2(a, b) a ## b
#define HWLOC_NAME(name) HWLOC_MUNGE_NAME(HWLOC_SYM_PREFIX, hwloc_ ## name)
/* FIXME: should be "HWLOC_ ## name" below, unchanged because it doesn't matter much and could break some embedders hacks */
#define HWLOC_NAME_CAPS(name) HWLOC_MUNGE_NAME(HWLOC_SYM_PREFIX_CAPS, hwloc_ ## name)
/* Now define all the "real" names to be the prefixed names. This
@ -49,7 +50,9 @@ extern "C" {
#define HWLOC_OBJ_MACHINE HWLOC_NAME_CAPS(OBJ_MACHINE)
#define HWLOC_OBJ_NUMANODE HWLOC_NAME_CAPS(OBJ_NUMANODE)
#define HWLOC_OBJ_MEMCACHE HWLOC_NAME_CAPS(OBJ_MEMCACHE)
#define HWLOC_OBJ_PACKAGE HWLOC_NAME_CAPS(OBJ_PACKAGE)
#define HWLOC_OBJ_DIE HWLOC_NAME_CAPS(OBJ_DIE)
#define HWLOC_OBJ_CORE HWLOC_NAME_CAPS(OBJ_CORE)
#define HWLOC_OBJ_PU HWLOC_NAME_CAPS(OBJ_PU)
#define HWLOC_OBJ_L1CACHE HWLOC_NAME_CAPS(OBJ_L1CACHE)
@ -90,9 +93,6 @@ extern "C" {
#define hwloc_compare_types HWLOC_NAME(compare_types)
#define hwloc_compare_types_e HWLOC_NAME(compare_types_e)
#define HWLOC_TYPE_UNORDERED HWLOC_NAME_CAPS(TYPE_UNORDERED)
#define hwloc_obj HWLOC_NAME(obj)
#define hwloc_obj_t HWLOC_NAME(obj_t)
@ -116,7 +116,7 @@ extern "C" {
#define hwloc_topology_flags_e HWLOC_NAME(topology_flags_e)
#define HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM HWLOC_NAME_CAPS(TOPOLOGY_FLAG_WHOLE_SYSTEM)
#define HWLOC_TOPOLOGY_FLAG_INCLUDE_DISALLOWED HWLOC_NAME_CAPS(TOPOLOGY_FLAG_WITH_DISALLOWED)
#define HWLOC_TOPOLOGY_FLAG_IS_THISSYSTEM HWLOC_NAME_CAPS(TOPOLOGY_FLAG_IS_THISSYSTEM)
#define HWLOC_TOPOLOGY_FLAG_THISSYSTEM_ALLOWED_RESOURCES HWLOC_NAME_CAPS(TOPOLOGY_FLAG_THISSYSTEM_ALLOWED_RESOURCES)
@ -124,6 +124,9 @@ extern "C" {
#define hwloc_topology_set_synthetic HWLOC_NAME(topology_set_synthetic)
#define hwloc_topology_set_xml HWLOC_NAME(topology_set_xml)
#define hwloc_topology_set_xmlbuffer HWLOC_NAME(topology_set_xmlbuffer)
#define hwloc_topology_components_flag_e HWLOC_NAME(hwloc_topology_components_flag_e)
#define HWLOC_TOPOLOGY_COMPONENTS_FLAG_BLACKLIST HWLOC_NAME_CAPS(TOPOLOGY_COMPONENTS_FLAG_BLACKLIST)
#define hwloc_topology_set_components HWLOC_NAME(topology_set_components)
#define hwloc_topology_set_flags HWLOC_NAME(topology_set_flags)
#define hwloc_topology_is_thissystem HWLOC_NAME(topology_is_thissystem)
@ -151,10 +154,18 @@ extern "C" {
#define hwloc_restrict_flags_e HWLOC_NAME(restrict_flags_e)
#define HWLOC_RESTRICT_FLAG_REMOVE_CPULESS HWLOC_NAME_CAPS(RESTRICT_FLAG_REMOVE_CPULESS)
#define HWLOC_RESTRICT_FLAG_BYNODESET HWLOC_NAME_CAPS(RESTRICT_FLAG_BYNODESET)
#define HWLOC_RESTRICT_FLAG_REMOVE_MEMLESS HWLOC_NAME_CAPS(RESTRICT_FLAG_REMOVE_MEMLESS)
#define HWLOC_RESTRICT_FLAG_ADAPT_MISC HWLOC_NAME_CAPS(RESTRICT_FLAG_ADAPT_MISC)
#define HWLOC_RESTRICT_FLAG_ADAPT_IO HWLOC_NAME_CAPS(RESTRICT_FLAG_ADAPT_IO)
#define hwloc_topology_restrict HWLOC_NAME(topology_restrict)
#define hwloc_allow_flags_e HWLOC_NAME(allow_flags_e)
#define HWLOC_ALLOW_FLAG_ALL HWLOC_NAME_CAPS(ALLOW_FLAG_ALL)
#define HWLOC_ALLOW_FLAG_LOCAL_RESTRICTIONS HWLOC_NAME_CAPS(ALLOW_FLAG_LOCAL_RESTRICTIONS)
#define HWLOC_ALLOW_FLAG_CUSTOM HWLOC_NAME_CAPS(ALLOW_FLAG_CUSTOM)
#define hwloc_topology_allow HWLOC_NAME(topology_allow)
#define hwloc_topology_insert_misc_object HWLOC_NAME(topology_insert_misc_object)
#define hwloc_topology_alloc_group_object HWLOC_NAME(topology_alloc_group_object)
#define hwloc_topology_insert_group_object HWLOC_NAME(topology_insert_group_object)
@ -172,6 +183,7 @@ extern "C" {
#define HWLOC_TYPE_DEPTH_OS_DEVICE HWLOC_NAME_CAPS(TYPE_DEPTH_OS_DEVICE)
#define HWLOC_TYPE_DEPTH_MISC HWLOC_NAME_CAPS(TYPE_DEPTH_MISC)
#define HWLOC_TYPE_DEPTH_NUMANODE HWLOC_NAME_CAPS(TYPE_DEPTH_NUMANODE)
#define HWLOC_TYPE_DEPTH_MEMCACHE HWLOC_NAME_CAPS(TYPE_DEPTH_MEMCACHE)
#define hwloc_get_depth_type HWLOC_NAME(get_depth_type)
#define hwloc_get_nbobjs_by_depth HWLOC_NAME(get_nbobjs_by_depth)
@ -266,10 +278,12 @@ extern "C" {
#define hwloc_bitmap_zero HWLOC_NAME(bitmap_zero)
#define hwloc_bitmap_fill HWLOC_NAME(bitmap_fill)
#define hwloc_bitmap_from_ulong HWLOC_NAME(bitmap_from_ulong)
#define hwloc_bitmap_from_ulongs HWLOC_NAME(bitmap_from_ulongs)
#define hwloc_bitmap_from_ith_ulong HWLOC_NAME(bitmap_from_ith_ulong)
#define hwloc_bitmap_to_ulong HWLOC_NAME(bitmap_to_ulong)
#define hwloc_bitmap_to_ith_ulong HWLOC_NAME(bitmap_to_ith_ulong)
#define hwloc_bitmap_to_ulongs HWLOC_NAME(bitmap_to_ulongs)
#define hwloc_bitmap_nr_ulongs HWLOC_NAME(bitmap_nr_ulongs)
#define hwloc_bitmap_only HWLOC_NAME(bitmap_only)
#define hwloc_bitmap_allbut HWLOC_NAME(bitmap_allbut)
#define hwloc_bitmap_set HWLOC_NAME(bitmap_set)
@ -308,6 +322,7 @@ extern "C" {
#define hwloc_get_ancestor_obj_by_type HWLOC_NAME(get_ancestor_obj_by_type)
#define hwloc_get_next_obj_by_depth HWLOC_NAME(get_next_obj_by_depth)
#define hwloc_get_next_obj_by_type HWLOC_NAME(get_next_obj_by_type)
#define hwloc_bitmap_singlify_per_core HWLOC_NAME(bitmap_singlify_by_core)
#define hwloc_get_pu_obj_by_os_index HWLOC_NAME(get_pu_obj_by_os_index)
#define hwloc_get_numanode_obj_by_os_index HWLOC_NAME(get_numanode_obj_by_os_index)
#define hwloc_get_next_child HWLOC_NAME(get_next_child)
@ -380,10 +395,13 @@ extern "C" {
#define HWLOC_DISTANCES_KIND_FROM_USER HWLOC_NAME_CAPS(DISTANCES_KIND_FROM_USER)
#define HWLOC_DISTANCES_KIND_MEANS_LATENCY HWLOC_NAME_CAPS(DISTANCES_KIND_MEANS_LATENCY)
#define HWLOC_DISTANCES_KIND_MEANS_BANDWIDTH HWLOC_NAME_CAPS(DISTANCES_KIND_MEANS_BANDWIDTH)
#define HWLOC_DISTANCES_KIND_HETEROGENEOUS_TYPES HWLOC_NAME_CAPS(DISTANCES_KIND_HETEROGENEOUS_TYPES)
#define hwloc_distances_get HWLOC_NAME(distances_get)
#define hwloc_distances_get_by_depth HWLOC_NAME(distances_get_by_depth)
#define hwloc_distances_get_by_type HWLOC_NAME(distances_get_by_type)
#define hwloc_distances_get_by_name HWLOC_NAME(distances_get_by_name)
#define hwloc_distances_get_name HWLOC_NAME(distances_get_name)
#define hwloc_distances_release HWLOC_NAME(distances_release)
#define hwloc_distances_obj_index HWLOC_NAME(distances_obj_index)
#define hwloc_distances_obj_pair_values HWLOC_NAME(distances_pair_values)
@ -396,6 +414,7 @@ extern "C" {
#define hwloc_distances_remove HWLOC_NAME(distances_remove)
#define hwloc_distances_remove_by_depth HWLOC_NAME(distances_remove_by_depth)
#define hwloc_distances_remove_by_type HWLOC_NAME(distances_remove_by_type)
#define hwloc_distances_release_remove HWLOC_NAME(distances_release_remove)
/* diff.h */
@ -462,13 +481,10 @@ extern "C" {
#define hwloc_ibv_get_device_osdev HWLOC_NAME(ibv_get_device_osdev)
#define hwloc_ibv_get_device_osdev_by_name HWLOC_NAME(ibv_get_device_osdev_by_name)
/* intel-mic.h */
#define hwloc_intel_mic_get_device_cpuset HWLOC_NAME(intel_mic_get_device_cpuset)
#define hwloc_intel_mic_get_device_osdev_by_index HWLOC_NAME(intel_mic_get_device_osdev_by_index)
/* opencl.h */
#define hwloc_cl_device_topology_amd HWLOC_NAME(cl_device_topology_amd)
#define hwloc_opencl_get_device_pci_busid HWLOC_NAME(opencl_get_device_pci_ids)
#define hwloc_opencl_get_device_cpuset HWLOC_NAME(opencl_get_device_cpuset)
#define hwloc_opencl_get_device_osdev HWLOC_NAME(opencl_get_device_osdev)
#define hwloc_opencl_get_device_osdev_by_index HWLOC_NAME(opencl_get_device_osdev_by_index)
@ -502,13 +518,22 @@ extern "C" {
/* hwloc/plugins.h */
#define hwloc_disc_component_type_e HWLOC_NAME(disc_component_type_e)
#define HWLOC_DISC_COMPONENT_TYPE_CPU HWLOC_NAME_CAPS(DISC_COMPONENT_TYPE_CPU)
#define HWLOC_DISC_COMPONENT_TYPE_GLOBAL HWLOC_NAME_CAPS(DISC_COMPONENT_TYPE_GLOBAL)
#define HWLOC_DISC_COMPONENT_TYPE_MISC HWLOC_NAME_CAPS(DISC_COMPONENT_TYPE_MISC)
#define hwloc_disc_component_type_t HWLOC_NAME(disc_component_type_t)
#define hwloc_disc_phase_e HWLOC_NAME(disc_phase_e)
#define HWLOC_DISC_PHASE_GLOBAL HWLOC_NAME_CAPS(DISC_PHASE_GLOBAL)
#define HWLOC_DISC_PHASE_CPU HWLOC_NAME_CAPS(DISC_PHASE_CPU)
#define HWLOC_DISC_PHASE_MEMORY HWLOC_NAME_CAPS(DISC_PHASE_MEMORY)
#define HWLOC_DISC_PHASE_PCI HWLOC_NAME_CAPS(DISC_PHASE_PCI)
#define HWLOC_DISC_PHASE_IO HWLOC_NAME_CAPS(DISC_PHASE_IO)
#define HWLOC_DISC_PHASE_MISC HWLOC_NAME_CAPS(DISC_PHASE_MISC)
#define HWLOC_DISC_PHASE_ANNOTATE HWLOC_NAME_CAPS(DISC_PHASE_ANNOTATE)
#define HWLOC_DISC_PHASE_TWEAK HWLOC_NAME_CAPS(DISC_PHASE_TWEAK)
#define hwloc_disc_phase_t HWLOC_NAME(disc_phase_t)
#define hwloc_disc_component HWLOC_NAME(disc_component)
#define hwloc_disc_status_flag_e HWLOC_NAME(disc_status_flag_e)
#define HWLOC_DISC_STATUS_FLAG_GOT_ALLOWED_RESOURCES HWLOC_NAME_CAPS(DISC_STATUS_FLAG_GOT_ALLOWED_RESOURCES)
#define hwloc_disc_status HWLOC_NAME(disc_status)
#define hwloc_backend HWLOC_NAME(backend)
#define hwloc_backend_alloc HWLOC_NAME(backend_alloc)
@ -540,12 +565,11 @@ extern "C" {
#define hwloc_pcidisc_find_cap HWLOC_NAME(pcidisc_find_cap)
#define hwloc_pcidisc_find_linkspeed HWLOC_NAME(pcidisc_find_linkspeed)
#define hwloc_pcidisc_check_bridge_type HWLOC_NAME(pcidisc_check_bridge_type)
#define hwloc_pcidisc_setup_bridge_attr HWLOC_NAME(pcidisc_setup_bridge_attr)
#define hwloc_pcidisc_find_bridge_buses HWLOC_NAME(pcidisc_find_bridge_buses)
#define hwloc_pcidisc_tree_insert_by_busid HWLOC_NAME(pcidisc_tree_insert_by_busid)
#define hwloc_pcidisc_tree_attach HWLOC_NAME(pcidisc_tree_attach)
#define hwloc_pcidisc_find_by_busid HWLOC_NAME(pcidisc_find_by_busid)
#define hwloc_pcidisc_find_busid_parent HWLOC_NAME(pcidisc_find_busid_parent)
#define hwloc_pci_find_parent_by_busid HWLOC_NAME(pcidisc_find_busid_parent)
/* hwloc/deprecated.h */
@ -571,8 +595,9 @@ extern "C" {
/* private/misc.h */
#ifndef HWLOC_HAVE_CORRECT_SNPRINTF
#define hwloc_snprintf HWLOC_NAME(snprintf)
#define hwloc_namecoloncmp HWLOC_NAME(namecoloncmp)
#endif
#define hwloc_ffsl_manual HWLOC_NAME(ffsl_manual)
#define hwloc_ffs32 HWLOC_NAME(ffs32)
#define hwloc_ffsl_from_ffs32 HWLOC_NAME(ffsl_from_ffs32)
@ -631,8 +656,9 @@ extern "C" {
#define hwloc_backends_is_thissystem HWLOC_NAME(backends_is_thissystem)
#define hwloc_backends_find_callbacks HWLOC_NAME(backends_find_callbacks)
#define hwloc_backends_init HWLOC_NAME(backends_init)
#define hwloc_topology_components_init HWLOC_NAME(topology_components_init)
#define hwloc_backends_disable_all HWLOC_NAME(backends_disable_all)
#define hwloc_topology_components_fini HWLOC_NAME(topology_components_fini)
#define hwloc_components_init HWLOC_NAME(components_init)
#define hwloc_components_fini HWLOC_NAME(components_fini)
@ -656,7 +682,6 @@ extern "C" {
#define hwloc_cuda_component HWLOC_NAME(cuda_component)
#define hwloc_gl_component HWLOC_NAME(gl_component)
#define hwloc_linuxio_component HWLOC_NAME(linuxio_component)
#define hwloc_nvml_component HWLOC_NAME(nvml_component)
#define hwloc_opencl_component HWLOC_NAME(opencl_component)
#define hwloc_pci_component HWLOC_NAME(pci_component)
@ -669,12 +694,16 @@ extern "C" {
#define hwloc_special_level_s HWLOC_NAME(special_level_s)
#define hwloc_pci_forced_locality_s HWLOC_NAME(pci_forced_locality_s)
#define hwloc_pci_locality_s HWLOC_NAME(pci_locality_s)
#define hwloc_topology_forced_component_s HWLOC_NAME(topology_forced_component)
#define hwloc_alloc_root_sets HWLOC_NAME(alloc_root_sets)
#define hwloc_setup_pu_level HWLOC_NAME(setup_pu_level)
#define hwloc_get_sysctlbyname HWLOC_NAME(get_sysctlbyname)
#define hwloc_get_sysctl HWLOC_NAME(get_sysctl)
#define hwloc_fallback_nbprocessors HWLOC_NAME(fallback_nbprocessors)
#define hwloc_fallback_memsize HWLOC_NAME(fallback_memsize)
#define hwloc__object_cpusets_compare_first HWLOC_NAME(_object_cpusets_compare_first)
#define hwloc__reorder_children HWLOC_NAME(_reorder_children)
@ -687,8 +716,8 @@ extern "C" {
#define hwloc_pci_discovery_init HWLOC_NAME(pci_discovery_init)
#define hwloc_pci_discovery_prepare HWLOC_NAME(pci_discovery_prepare)
#define hwloc_pci_discovery_exit HWLOC_NAME(pci_discovery_exit)
#define hwloc_pci_find_by_busid HWLOC_NAME(pcidisc_find_by_busid)
#define hwloc_find_insert_io_parent_by_complete_cpuset HWLOC_NAME(hwloc_find_insert_io_parent_by_complete_cpuset)
#define hwloc_pci_belowroot_apply_locality HWLOC_NAME(pci_belowroot_apply_locality)
#define hwloc__add_info HWLOC_NAME(_add_info)
#define hwloc__add_info_nodup HWLOC_NAME(_add_info_nodup)

View file

@ -10,7 +10,7 @@
#ifndef HWLOC_SHMEM_H
#define HWLOC_SHMEM_H
#include <hwloc.h>
#include "hwloc.h"
#ifdef __cplusplus
extern "C" {

View file

@ -1,5 +1,5 @@
/*
* Copyright © 2012-2015 Inria. All rights reserved.
* Copyright © 2012-2019 Inria. All rights reserved.
* See COPYING in top-level directory.
*/
@ -16,13 +16,13 @@
#ifndef PRIVATE_COMPONENTS_H
#define PRIVATE_COMPONENTS_H 1
#include <hwloc/plugins.h>
#include "hwloc/plugins.h"
struct hwloc_topology;
extern int hwloc_disc_component_force_enable(struct hwloc_topology *topology,
int envvar_forced, /* 1 if forced through envvar, 0 if forced through API */
int type, const char *name,
const char *name,
const void *data1, const void *data2, const void *data3);
extern void hwloc_disc_components_enable_others(struct hwloc_topology *topology);
@ -30,10 +30,12 @@ extern void hwloc_disc_components_enable_others(struct hwloc_topology *topology)
extern void hwloc_backends_is_thissystem(struct hwloc_topology *topology);
extern void hwloc_backends_find_callbacks(struct hwloc_topology *topology);
/* Initialize the list of backends used by a topology */
extern void hwloc_backends_init(struct hwloc_topology *topology);
/* Initialize the lists of components and backends used by a topology */
extern void hwloc_topology_components_init(struct hwloc_topology *topology);
/* Disable and destroy all backends used by a topology */
extern void hwloc_backends_disable_all(struct hwloc_topology *topology);
/* Cleanup the lists of components used by a topology */
extern void hwloc_topology_components_fini(struct hwloc_topology *topology);
/* Used by the core to setup/destroy the list of components */
extern void hwloc_components_init(void); /* increases components refcount, should be called exactly once per topology (during init) */

View file

@ -11,8 +11,8 @@
#ifndef HWLOC_DEBUG_H
#define HWLOC_DEBUG_H
#include <private/autogen/config.h>
#include <private/misc.h>
#include "private/autogen/config.h"
#include "private/misc.h"
#ifdef HWLOC_DEBUG
#include <stdarg.h>

View file

@ -1,5 +1,5 @@
/*
* Copyright © 2018 Inria. All rights reserved.
* Copyright © 2018-2019 Inria. All rights reserved.
*
* See COPYING in top-level directory.
*/
@ -29,7 +29,6 @@ HWLOC_DECLSPEC extern const struct hwloc_component hwloc_x86_component;
/* I/O discovery */
HWLOC_DECLSPEC extern const struct hwloc_component hwloc_cuda_component;
HWLOC_DECLSPEC extern const struct hwloc_component hwloc_gl_component;
HWLOC_DECLSPEC extern const struct hwloc_component hwloc_linuxio_component;
HWLOC_DECLSPEC extern const struct hwloc_component hwloc_nvml_component;
HWLOC_DECLSPEC extern const struct hwloc_component hwloc_opencl_component;
HWLOC_DECLSPEC extern const struct hwloc_component hwloc_pci_component;

View file

@ -1,6 +1,6 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2018 Inria. All rights reserved.
* Copyright © 2009-2019 Inria. All rights reserved.
* Copyright © 2009-2012 Université Bordeaux
* Copyright © 2011 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
@ -11,9 +11,9 @@
#ifndef HWLOC_PRIVATE_MISC_H
#define HWLOC_PRIVATE_MISC_H
#include <hwloc/autogen/config.h>
#include <private/autogen/config.h>
#include <hwloc.h>
#include "hwloc/autogen/config.h"
#include "private/autogen/config.h"
#include "hwloc.h"
#ifdef HWLOC_HAVE_DECL_STRNCASECMP
#ifdef HAVE_STRINGS_H
@ -439,14 +439,14 @@ hwloc_linux_pci_link_speed_from_string(const char *string)
static __hwloc_inline int hwloc__obj_type_is_normal (hwloc_obj_type_t type)
{
/* type contiguity is asserted in topology_check() */
return type <= HWLOC_OBJ_GROUP;
return type <= HWLOC_OBJ_GROUP || type == HWLOC_OBJ_DIE;
}
/* Any object attached to memory children, currently only NUMA nodes */
/* Any object attached to memory children, currently NUMA nodes or Memory-side caches */
static __hwloc_inline int hwloc__obj_type_is_memory (hwloc_obj_type_t type)
{
/* type contiguity is asserted in topology_check() */
return type == HWLOC_OBJ_NUMANODE;
return type == HWLOC_OBJ_NUMANODE || type == HWLOC_OBJ_MEMCACHE;
}
/* I/O or Misc object, without cpusets or nodesets. */
@ -463,6 +463,7 @@ static __hwloc_inline int hwloc__obj_type_is_io (hwloc_obj_type_t type)
return type >= HWLOC_OBJ_BRIDGE && type <= HWLOC_OBJ_OS_DEVICE;
}
/* Any CPU caches (not Memory-side caches) */
static __hwloc_inline int
hwloc__obj_type_is_cache(hwloc_obj_type_t type)
{
@ -572,12 +573,4 @@ typedef SSIZE_T ssize_t;
# endif
#endif
#if defined HWLOC_WIN_SYS && !defined __MINGW32__ && !defined(__CYGWIN__)
/* MSVC doesn't support C99 variable-length array */
#include <malloc.h>
#define HWLOC_VLA(_type, _name, _nb) _type *_name = (_type*) _alloca((_nb)*sizeof(_type))
#else
#define HWLOC_VLA(_type, _name, _nb) _type _name[_nb]
#endif
#endif /* HWLOC_PRIVATE_MISC_H */

View file

@ -1,7 +1,7 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2019 Inria. All rights reserved.
* Copyright © 2009-2012 Université Bordeaux
* Copyright © 2009-2012, 2020 Université Bordeaux
* Copyright © 2009-2011 Cisco Systems, Inc. All rights reserved.
*
* See COPYING in top-level directory.
@ -22,11 +22,12 @@
#ifndef HWLOC_PRIVATE_H
#define HWLOC_PRIVATE_H
#include <private/autogen/config.h>
#include <hwloc.h>
#include <hwloc/bitmap.h>
#include <private/components.h>
#include <private/misc.h>
#include "private/autogen/config.h"
#include "hwloc.h"
#include "hwloc/bitmap.h"
#include "private/components.h"
#include "private/misc.h"
#include <sys/types.h>
#ifdef HAVE_UNISTD_H
#include <unistd.h>
@ -39,7 +40,7 @@
#endif
#include <string.h>
#define HWLOC_TOPOLOGY_ABI 0x20000 /* version of the layout of struct topology */
#define HWLOC_TOPOLOGY_ABI 0x20100 /* version of the layout of struct topology */
/*****************************************************
* WARNING:
@ -67,12 +68,13 @@ struct hwloc_topology {
void *adopted_shmem_addr;
size_t adopted_shmem_length;
#define HWLOC_NR_SLEVELS 5
#define HWLOC_NR_SLEVELS 6
#define HWLOC_SLEVEL_NUMANODE 0
#define HWLOC_SLEVEL_BRIDGE 1
#define HWLOC_SLEVEL_PCIDEV 2
#define HWLOC_SLEVEL_OSDEV 3
#define HWLOC_SLEVEL_MISC 4
#define HWLOC_SLEVEL_MEMCACHE 5
/* order must match negative depth, it's asserted in setup_defaults() */
#define HWLOC_SLEVEL_FROM_DEPTH(x) (HWLOC_TYPE_DEPTH_NUMANODE-(x))
#define HWLOC_SLEVEL_TO_DEPTH(x) (HWLOC_TYPE_DEPTH_NUMANODE-(x))
@ -86,6 +88,7 @@ struct hwloc_topology {
hwloc_bitmap_t allowed_nodeset;
struct hwloc_binding_hooks {
/* These are actually rather OS hooks since some of them are not about binding */
int (*set_thisproc_cpubind)(hwloc_topology_t topology, hwloc_const_cpuset_t set, int flags);
int (*get_thisproc_cpubind)(hwloc_topology_t topology, hwloc_cpuset_t set, int flags);
int (*set_thisthread_cpubind)(hwloc_topology_t topology, hwloc_const_cpuset_t set, int flags);
@ -127,20 +130,35 @@ struct hwloc_topology {
int userdata_not_decoded;
struct hwloc_internal_distances_s {
hwloc_obj_type_t type;
char *name; /* FIXME: needs an API to set it from user */
unsigned id; /* to match the container id field of public distances structure
* not exported to XML, regenerated during _add()
*/
/* if all objects have the same type, different_types is NULL and unique_type is valid.
* otherwise unique_type is HWLOC_OBJ_TYPE_NONE and different_types contains individual objects types.
*/
hwloc_obj_type_t unique_type;
hwloc_obj_type_t *different_types;
/* add union hwloc_obj_attr_u if we ever support groups */
unsigned nbobjs;
uint64_t *indexes; /* array of OS or GP indexes before we can convert them into objs. */
uint64_t *indexes; /* array of OS or GP indexes before we can convert them into objs.
* OS indexes for distances covering only PUs or only NUMAnodes.
*/
#define HWLOC_DIST_TYPE_USE_OS_INDEX(_type) ((_type) == HWLOC_OBJ_PU || (_type == HWLOC_OBJ_NUMANODE))
uint64_t *values; /* distance matrices, ordered according to the above indexes/objs array.
* distance from i to j is stored in slot i*nbnodes+j.
*/
unsigned long kind;
#define HWLOC_INTERNAL_DIST_FLAG_OBJS_VALID (1U<<0) /* if the objs array is valid below */
unsigned iflags;
/* objects are currently stored in physical_index order */
hwloc_obj_t *objs; /* array of objects */
int objs_are_valid; /* set to 1 if the array objs is still valid, 0 if needs refresh */
unsigned id; /* to match the container id field of public distances structure */
struct hwloc_internal_distances_s *prev, *next;
} *first_dist, *last_dist;
unsigned next_dist_id;
@ -153,8 +171,9 @@ struct hwloc_topology {
/* list of enabled backends. */
struct hwloc_backend * backends;
struct hwloc_backend * get_pci_busid_cpuset_backend;
unsigned backend_excludes;
struct hwloc_backend * get_pci_busid_cpuset_backend; /* first backend that provides get_pci_busid_cpuset() callback */
unsigned backend_phases;
unsigned backend_excluded_phases;
/* memory allocator for topology objects */
struct hwloc_tma * tma;
@ -176,7 +195,6 @@ struct hwloc_topology {
struct hwloc_numanode_attr_s machine_memory;
/* pci stuff */
int need_pci_belowroot_apply_locality;
int pci_has_forced_locality;
unsigned pci_forced_locality_nr;
struct hwloc_pci_forced_locality_s {
@ -185,13 +203,34 @@ struct hwloc_topology {
hwloc_bitmap_t cpuset;
} * pci_forced_locality;
/* component blacklisting */
unsigned nr_blacklisted_components;
struct hwloc_topology_forced_component_s {
struct hwloc_disc_component *component;
unsigned phases;
} *blacklisted_components;
/* FIXME: keep until topo destroy and reuse for finding specific buses */
struct hwloc_pci_locality_s {
unsigned domain;
unsigned bus_min;
unsigned bus_max;
hwloc_bitmap_t cpuset;
hwloc_obj_t parent;
struct hwloc_pci_locality_s *prev, *next;
} *first_pci_locality, *last_pci_locality;
};
extern void hwloc_alloc_root_sets(hwloc_obj_t root);
extern void hwloc_setup_pu_level(struct hwloc_topology *topology, unsigned nb_pus);
extern int hwloc_get_sysctlbyname(const char *name, int64_t *n);
extern int hwloc_get_sysctl(int name[], unsigned namelen, int *n);
extern int hwloc_fallback_nbprocessors(struct hwloc_topology *topology);
extern int hwloc_get_sysctl(int name[], unsigned namelen, int64_t *n);
/* returns the number of CPUs from the OS (only valid if thissystem) */
#define HWLOC_FALLBACK_NBPROCESSORS_INCLUDE_OFFLINE 1 /* by default we try to get only the online CPUs */
extern int hwloc_fallback_nbprocessors(unsigned flags);
/* returns the memory size from the OS (only valid if thissystem) */
extern int64_t hwloc_fallback_memsize(void);
extern int hwloc__object_cpusets_compare_first(hwloc_obj_t obj1, hwloc_obj_t obj2);
extern void hwloc__reorder_children(hwloc_obj_t parent);
@ -208,19 +247,17 @@ extern void hwloc_pci_discovery_init(struct hwloc_topology *topology);
extern void hwloc_pci_discovery_prepare(struct hwloc_topology *topology);
extern void hwloc_pci_discovery_exit(struct hwloc_topology *topology);
/* Look for an object matching the given domain/bus/dev/func,
 * either exactly, or return the smallest containing bridge
*/
extern struct hwloc_obj * hwloc_pci_find_by_busid(struct hwloc_topology *topology, unsigned domain, unsigned bus, unsigned dev, unsigned func);
/* Look for an object matching complete cpuset exactly, or insert one.
* Return NULL on failure.
* Return a good fallback (object above) on failure to insert.
*/
extern hwloc_obj_t hwloc_find_insert_io_parent_by_complete_cpuset(struct hwloc_topology *topology, hwloc_cpuset_t cpuset);
/* Move PCI objects currently attached to the root object to their actual location.
* Called by the core at the end of hwloc_topology_load().
* Prior to this call, all PCI objects may be found below the root object.
* After this call and a reconnect of levels, all PCI objects are available through levels.
*/
extern int hwloc_pci_belowroot_apply_locality(struct hwloc_topology *topology);
extern int hwloc__add_info(struct hwloc_info_s **infosp, unsigned *countp, const char *name, const char *value);
extern int hwloc__add_info_nodup(struct hwloc_info_s **infosp, unsigned *countp, const char *name, const char *value, int replace);
extern int hwloc__move_infos(struct hwloc_info_s **dst_infosp, unsigned *dst_countp, struct hwloc_info_s **src_infosp, unsigned *src_countp);
@ -313,8 +350,8 @@ extern void hwloc_internal_distances_prepare(hwloc_topology_t topology);
extern void hwloc_internal_distances_destroy(hwloc_topology_t topology);
extern int hwloc_internal_distances_dup(hwloc_topology_t new, hwloc_topology_t old);
extern void hwloc_internal_distances_refresh(hwloc_topology_t topology);
extern int hwloc_internal_distances_add(hwloc_topology_t topology, unsigned nbobjs, hwloc_obj_t *objs, uint64_t *values, unsigned long kind, unsigned long flags);
extern int hwloc_internal_distances_add_by_index(hwloc_topology_t topology, hwloc_obj_type_t type, unsigned nbobjs, uint64_t *indexes, uint64_t *values, unsigned long kind, unsigned long flags);
extern int hwloc_internal_distances_add(hwloc_topology_t topology, const char *name, unsigned nbobjs, hwloc_obj_t *objs, uint64_t *values, unsigned long kind, unsigned long flags);
extern int hwloc_internal_distances_add_by_index(hwloc_topology_t topology, const char *name, hwloc_obj_type_t unique_type, hwloc_obj_type_t *different_types, unsigned nbobjs, uint64_t *indexes, uint64_t *values, unsigned long kind, unsigned long flags);
extern void hwloc_internal_distances_invalidate_cached_objs(hwloc_topology_t topology);
/* encode src buffer into target buffer.
@ -330,13 +367,19 @@ extern int hwloc_encode_to_base64(const char *src, size_t srclength, char *targe
*/
extern int hwloc_decode_from_base64(char const *src, char *target, size_t targsize);
/* Check whether needle matches the beginning of haystack, at least n, and up
* to a colon or \0 */
extern int hwloc_namecoloncmp(const char *haystack, const char *needle, size_t n);
/* On some systems, snprintf returns the size of written data, not the actually
* required size. hwloc_snprintf always report the actually required size. */
* required size. Sometimes it returns -1 on truncation too.
* And sometimes it doesn't like NULL output buffers.
* http://www.gnu.org/software/gnulib/manual/html_node/snprintf.html
*
* hwloc_snprintf behaves properly, but it's a bit overkill on the vast majority
* of platforms, so don't enable it unless really needed.
*/
#ifdef HWLOC_HAVE_CORRECT_SNPRINTF
#define hwloc_snprintf snprintf
#else
extern int hwloc_snprintf(char *str, size_t size, const char *format, ...) __hwloc_attribute_format(printf, 3, 4);
#endif
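/* Illustrative two-pass sizing idiom relying on the semantics that
 * hwloc_snprintf() guarantees (the return value is the length the output
 * requires, even with a too-small or NULL buffer); hypothetical helper,
 * not an hwloc API: */
#include <stdio.h>
#include <stdlib.h>

static char *example_format_int(int x)
{
  int len = hwloc_snprintf(NULL, 0, "value=%d", x); /* pass 1: measure */
  char *buf;
  if (len < 0)
    return NULL;
  buf = malloc((size_t)len + 1);
  if (!buf)
    return NULL;
  hwloc_snprintf(buf, (size_t)len + 1, "value=%d", x); /* pass 2: format */
  return buf;
}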
/* Return the name of the currently running program, if supported.
* If not NULL, must be freed by the caller.
@ -356,7 +399,7 @@ extern char * hwloc_progname(struct hwloc_topology *topology);
#define HWLOC_GROUP_KIND_INTEL_MODULE 102 /* no subkind */
#define HWLOC_GROUP_KIND_INTEL_TILE 103 /* no subkind */
#define HWLOC_GROUP_KIND_INTEL_DIE 104 /* no subkind */
#define HWLOC_GROUP_KIND_S390_BOOK 110 /* no subkind */
#define HWLOC_GROUP_KIND_S390_BOOK 110 /* subkind 0 is book, subkind 1 is drawer (group of books) */
#define HWLOC_GROUP_KIND_AMD_COMPUTE_UNIT 120 /* no subkind */
/* then, OS-specific groups */
#define HWLOC_GROUP_KIND_SOLARIS_PG_HW_PERF 200 /* subkind is group width */

View file

@ -1,12 +1,12 @@
/*
* Copyright © 2009-2019 Inria. All rights reserved.
* Copyright © 2009-2017 Inria. All rights reserved.
* See COPYING in top-level directory.
*/
#ifndef PRIVATE_XML_H
#define PRIVATE_XML_H 1
#include <hwloc.h>
#include "hwloc.h"
#include <sys/types.h>
@ -54,7 +54,6 @@ struct hwloc_xml_backend_data_s {
unsigned nbnumanodes;
hwloc_obj_t first_numanode, last_numanode; /* temporary cousin-list for handling v1distances */
struct hwloc__xml_imported_v1distances_s *first_v1dist, *last_v1dist;
int dont_merge_die_groups;
};
/**************

View file

@ -11,7 +11,7 @@
/* include hwloc's config before anything else
* so that extensions and features are properly enabled
*/
#include <private/private.h>
#include "private/private.h"
/* $OpenBSD: base64.c,v 1.5 2006/10/21 09:55:03 otto Exp $ */

View file

@ -1,15 +1,16 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2018 Inria. All rights reserved.
* Copyright © 2009-2019 Inria. All rights reserved.
* Copyright © 2009-2010, 2012 Université Bordeaux
* Copyright © 2011-2015 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
*/
#include <private/autogen/config.h>
#include <hwloc.h>
#include <private/private.h>
#include <hwloc/helper.h>
#include "private/autogen/config.h"
#include "hwloc.h"
#include "private/private.h"
#include "hwloc/helper.h"
#ifdef HAVE_SYS_MMAN_H
# include <sys/mman.h>
#endif
@ -885,6 +886,8 @@ hwloc_set_binding_hooks(struct hwloc_topology *topology)
} else {
/* not this system, use dummy binding hooks that do nothing (but don't return ENOSYS) */
hwloc_set_dummy_hooks(&topology->binding_hooks, &topology->support);
/* Linux has some hooks that also work in this case, but they are not strictly needed yet. */
}
/* if not is_thissystem, set_cpubind is fake

View file

@ -1,18 +1,18 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2017 Inria. All rights reserved.
* Copyright © 2009-2018 Inria. All rights reserved.
* Copyright © 2009-2011 Université Bordeaux
* Copyright © 2009-2011 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
*/
#include <private/autogen/config.h>
#include <hwloc/autogen/config.h>
#include <hwloc.h>
#include <private/misc.h>
#include <private/private.h>
#include <private/debug.h>
#include <hwloc/bitmap.h>
#include "private/autogen/config.h"
#include "hwloc/autogen/config.h"
#include "hwloc.h"
#include "private/misc.h"
#include "private/private.h"
#include "private/debug.h"
#include "hwloc/bitmap.h"
#include <stdarg.h>
#include <stdio.h>
@ -505,14 +505,16 @@ int hwloc_bitmap_list_sscanf(struct hwloc_bitmap_s *set, const char * __hwloc_re
if (begin != -1) {
/* finishing a range */
hwloc_bitmap_set_range(set, begin, val);
if (hwloc_bitmap_set_range(set, begin, val) < 0)
goto failed;
begin = -1;
} else if (*next == '-') {
/* starting a new range */
if (*(next+1) == '\0') {
/* infinite range */
hwloc_bitmap_set_range(set, val, -1);
if (hwloc_bitmap_set_range(set, val, -1) < 0)
goto failed;
break;
} else {
/* normal range */
@ -766,6 +768,21 @@ int hwloc_bitmap_from_ith_ulong(struct hwloc_bitmap_s *set, unsigned i, unsigned
return 0;
}
int hwloc_bitmap_from_ulongs(struct hwloc_bitmap_s *set, unsigned nr, const unsigned long *masks)
{
unsigned j;
HWLOC__BITMAP_CHECK(set);
if (hwloc_bitmap_reset_by_ulongs(set, nr) < 0)
return -1;
for(j=0; j<nr; j++)
set->ulongs[j] = masks[j];
set->infinite = 0;
return 0;
}
unsigned long hwloc_bitmap_to_ulong(const struct hwloc_bitmap_s *set)
{
HWLOC__BITMAP_CHECK(set);
@ -780,6 +797,30 @@ unsigned long hwloc_bitmap_to_ith_ulong(const struct hwloc_bitmap_s *set, unsign
return HWLOC_SUBBITMAP_READULONG(set, i);
}
int hwloc_bitmap_to_ulongs(const struct hwloc_bitmap_s *set, unsigned nr, unsigned long *masks)
{
unsigned j;
HWLOC__BITMAP_CHECK(set);
for(j=0; j<nr; j++)
masks[j] = HWLOC_SUBBITMAP_READULONG(set, j);
return 0;
}
int hwloc_bitmap_nr_ulongs(const struct hwloc_bitmap_s *set)
{
unsigned last;
HWLOC__BITMAP_CHECK(set);
if (set->infinite)
return -1;
last = hwloc_bitmap_last(set);
return (last + HWLOC_BITS_PER_LONG) / HWLOC_BITS_PER_LONG; /* number of ulongs needed to hold bits 0..last */
}
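/* Illustrative round-trip through the new ulongs API (hypothetical caller,
 * error handling shortened; only meaningful for finite bitmaps): */
static int example_copy_via_ulongs(hwloc_const_bitmap_t src, hwloc_bitmap_t dst)
{
  int nr = hwloc_bitmap_nr_ulongs(src); /* -1 if src is infinitely set */
  unsigned long *masks;
  if (nr < 0)
    return -1;
  masks = malloc(nr * sizeof(*masks));
  if (!masks)
    return -1;
  hwloc_bitmap_to_ulongs(src, (unsigned) nr, masks);   /* dump the first nr words */
  hwloc_bitmap_from_ulongs(dst, (unsigned) nr, masks); /* rebuild an identical bitmap */
  free(masks);
  return 0;
}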
int hwloc_bitmap_only(struct hwloc_bitmap_s * set, unsigned cpu)
{
unsigned index_ = HWLOC_SUBBITMAP_INDEX(cpu);

View file

@ -1,18 +1,19 @@
/*
* Copyright © 2009-2017 Inria. All rights reserved.
* Copyright © 2009-2020 Inria. All rights reserved.
* Copyright © 2012 Université Bordeaux
* See COPYING in top-level directory.
*/
#include <private/autogen/config.h>
#include <hwloc.h>
#include <private/private.h>
#include <private/xml.h>
#include <private/misc.h>
#include "private/autogen/config.h"
#include "hwloc.h"
#include "private/private.h"
#include "private/xml.h"
#include "private/misc.h"
#define HWLOC_COMPONENT_STOP_NAME "stop"
#define HWLOC_COMPONENT_EXCLUDE_CHAR '-'
#define HWLOC_COMPONENT_SEPS ","
#define HWLOC_COMPONENT_PHASESEP_CHAR ':'
/* list of all registered discovery components, sorted by priority, higher priority first.
* noos is last because its priority is 0.
@ -62,14 +63,128 @@ static pthread_mutex_t hwloc_components_mutex = PTHREAD_MUTEX_INITIALIZER;
#ifdef HWLOC_HAVE_PLUGINS
#ifdef HWLOC_HAVE_LTDL
/* ltdl-based plugin load */
#include <ltdl.h>
typedef lt_dlhandle hwloc_dlhandle;
#define hwloc_dlinit lt_dlinit
#define hwloc_dlexit lt_dlexit
#define hwloc_dlopenext lt_dlopenext
#define hwloc_dlclose lt_dlclose
#define hwloc_dlerror lt_dlerror
#define hwloc_dlsym lt_dlsym
#define hwloc_dlforeachfile lt_dlforeachfile
#else /* !HWLOC_HAVE_LTDL */
/* no-ltdl plugin load relies on less portable libdl */
#include <dlfcn.h>
typedef void * hwloc_dlhandle;
static __hwloc_inline int hwloc_dlinit(void) { return 0; }
static __hwloc_inline int hwloc_dlexit(void) { return 0; }
#define hwloc_dlclose dlclose
#define hwloc_dlerror dlerror
#define hwloc_dlsym dlsym
#include <sys/stat.h>
#include <sys/types.h>
#include <dirent.h>
#include <unistd.h>
static hwloc_dlhandle hwloc_dlopenext(const char *_filename)
{
hwloc_dlhandle handle;
char *filename = NULL;
(void) asprintf(&filename, "%s.so", _filename);
if (!filename)
return NULL;
handle = dlopen(filename, RTLD_NOW|RTLD_LOCAL);
free(filename);
return handle;
}
static int
hwloc_dlforeachfile(const char *_paths,
int (*func)(const char *filename, void *data),
void *data)
{
char *paths = NULL, *path;
paths = strdup(_paths);
if (!paths)
return -1;
path = paths;
while (*path) {
char *colon;
DIR *dir;
struct dirent *dirent;
colon = strchr(path, ':');
if (colon)
*colon = '\0';
if (hwloc_plugins_verbose)
fprintf(stderr, " Looking under %s\n", path);
dir = opendir(path);
if (!dir)
goto next;
while ((dirent = readdir(dir)) != NULL) {
char *abs_name, *suffix;
struct stat stbuf;
int err;
err = asprintf(&abs_name, "%s/%s", path, dirent->d_name);
if (err < 0)
continue;
err = stat(abs_name, &stbuf);
if (err < 0) {
free(abs_name);
continue;
}
if (!S_ISREG(stbuf.st_mode)) {
free(abs_name);
continue;
}
/* Only keep .so files, and remove that suffix to get the component basename */
suffix = strrchr(abs_name, '.');
if (!suffix || strcmp(suffix, ".so")) {
free(abs_name);
continue;
}
*suffix = '\0';
err = func(abs_name, data);
if (err) {
free(abs_name);
continue;
}
free(abs_name);
}
closedir(dir);
next:
if (!colon)
break;
path = colon+1;
}
free(paths);
return 0;
}
#endif /* !HWLOC_HAVE_LTDL */
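/* Illustrative use of the dlopen abstraction above (hypothetical callback,
 * not part of hwloc). Filenames are passed without their ".so" suffix (the
 * fallback above strips it) and a non-zero return skips or stops further
 * processing depending on the ltdl/dlfcn backend: */
static int example_list_plugin(const char *filename, void *data __hwloc_attribute_unused)
{
  fprintf(stderr, "candidate plugin: %s\n", filename);
  return 0;
}
/* ... hwloc_dlforeachfile("/usr/lib/hwloc:/opt/hwloc/plugins", example_list_plugin, NULL); ... */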
/* array of pointers to dynamically loaded plugins */
static struct hwloc__plugin_desc {
char *name;
struct hwloc_component *component;
char *filename;
lt_dlhandle handle;
hwloc_dlhandle handle;
struct hwloc__plugin_desc *next;
} *hwloc_plugins = NULL;
@ -77,9 +192,10 @@ static int
hwloc__dlforeach_cb(const char *filename, void *_data __hwloc_attribute_unused)
{
const char *basename;
lt_dlhandle handle;
hwloc_dlhandle handle;
struct hwloc_component *component;
struct hwloc__plugin_desc *desc, **prevdesc;
char *componentsymbolname;
if (hwloc_plugins_verbose)
fprintf(stderr, "Plugin dlforeach found `%s'\n", filename);
@ -97,33 +213,40 @@ hwloc__dlforeach_cb(const char *filename, void *_data __hwloc_attribute_unused)
}
/* dlopen and get the component structure */
handle = lt_dlopenext(filename);
handle = hwloc_dlopenext(filename);
if (!handle) {
if (hwloc_plugins_verbose)
fprintf(stderr, "Failed to load plugin: %s\n", lt_dlerror());
fprintf(stderr, "Failed to load plugin: %s\n", hwloc_dlerror());
goto out;
}
{
char componentsymbolname[strlen(basename)+10+1];
componentsymbolname = malloc(strlen(basename)+10+1);
if (!componentsymbolname) {
if (hwloc_plugins_verbose)
fprintf(stderr, "Failed to allocate component `%s' symbol\n",
basename);
goto out_with_handle;
}
sprintf(componentsymbolname, "%s_component", basename);
component = lt_dlsym(handle, componentsymbolname);
component = hwloc_dlsym(handle, componentsymbolname);
if (!component) {
if (hwloc_plugins_verbose)
fprintf(stderr, "Failed to find component symbol `%s'\n",
componentsymbolname);
free(componentsymbolname);
goto out_with_handle;
}
if (component->abi != HWLOC_COMPONENT_ABI) {
if (hwloc_plugins_verbose)
fprintf(stderr, "Plugin symbol ABI %u instead of %d\n",
component->abi, HWLOC_COMPONENT_ABI);
free(componentsymbolname);
goto out_with_handle;
}
if (hwloc_plugins_verbose)
fprintf(stderr, "Plugin contains expected symbol `%s'\n",
componentsymbolname);
}
free(componentsymbolname);
if (HWLOC_COMPONENT_TYPE_DISC == component->type) {
if (strncmp(basename, "hwloc_", 6)) {
@ -166,7 +289,7 @@ hwloc__dlforeach_cb(const char *filename, void *_data __hwloc_attribute_unused)
return 0;
out_with_handle:
lt_dlclose(handle);
hwloc_dlclose(handle);
out:
return 0;
}
@ -182,7 +305,7 @@ hwloc_plugins_exit(void)
desc = hwloc_plugins;
while (desc) {
next = desc->next;
lt_dlclose(desc->handle);
hwloc_dlclose(desc->handle);
free(desc->name);
free(desc->filename);
free(desc);
@ -190,7 +313,7 @@ hwloc_plugins_exit(void)
}
hwloc_plugins = NULL;
lt_dlexit();
hwloc_dlexit();
}
static int
@ -206,7 +329,7 @@ hwloc_plugins_init(void)
hwloc_plugins_blacklist = getenv("HWLOC_PLUGINS_BLACKLIST");
err = lt_dlinit();
err = hwloc_dlinit();
if (err)
goto out;
@ -218,7 +341,7 @@ hwloc_plugins_init(void)
if (hwloc_plugins_verbose)
fprintf(stderr, "Starting plugin dlforeach in %s\n", path);
err = lt_dlforeachfile(path, hwloc__dlforeach_cb, NULL);
err = hwloc_dlforeachfile(path, hwloc__dlforeach_cb, NULL);
if (err)
goto out_with_init;
@ -232,17 +355,6 @@ hwloc_plugins_init(void)
#endif /* HWLOC_HAVE_PLUGINS */
static const char *
hwloc_disc_component_type_string(hwloc_disc_component_type_t type)
{
switch (type) {
case HWLOC_DISC_COMPONENT_TYPE_CPU: return "cpu";
case HWLOC_DISC_COMPONENT_TYPE_GLOBAL: return "global";
case HWLOC_DISC_COMPONENT_TYPE_MISC: return "misc";
default: return "**unknown**";
}
}
static int
hwloc_disc_component_register(struct hwloc_disc_component *component,
const char *filename)
@ -256,21 +368,26 @@ hwloc_disc_component_register(struct hwloc_disc_component *component,
return -1;
}
if (strchr(component->name, HWLOC_COMPONENT_EXCLUDE_CHAR)
|| strchr(component->name, HWLOC_COMPONENT_PHASESEP_CHAR)
|| strcspn(component->name, HWLOC_COMPONENT_SEPS) != strlen(component->name)) {
if (hwloc_components_verbose)
fprintf(stderr, "Cannot register discovery component with name `%s' containing reserved characters `%c" HWLOC_COMPONENT_SEPS "'\n",
component->name, HWLOC_COMPONENT_EXCLUDE_CHAR);
return -1;
}
/* check that the component type is valid */
switch ((unsigned) component->type) {
case HWLOC_DISC_COMPONENT_TYPE_CPU:
case HWLOC_DISC_COMPONENT_TYPE_GLOBAL:
case HWLOC_DISC_COMPONENT_TYPE_MISC:
break;
default:
fprintf(stderr, "Cannot register discovery component `%s' with unknown type %u\n",
component->name, (unsigned) component->type);
/* check that the component phases are valid */
if (!component->phases
|| (component->phases != HWLOC_DISC_PHASE_GLOBAL
&& component->phases & ~(HWLOC_DISC_PHASE_CPU
|HWLOC_DISC_PHASE_MEMORY
|HWLOC_DISC_PHASE_PCI
|HWLOC_DISC_PHASE_IO
|HWLOC_DISC_PHASE_MISC
|HWLOC_DISC_PHASE_ANNOTATE
|HWLOC_DISC_PHASE_TWEAK))) {
fprintf(stderr, "Cannot register discovery component `%s' with invalid phases 0x%x\n",
component->name, component->phases);
return -1;
}
@ -295,8 +412,8 @@ hwloc_disc_component_register(struct hwloc_disc_component *component,
prev = &((*prev)->next);
}
if (hwloc_components_verbose)
fprintf(stderr, "Registered %s discovery component `%s' with priority %u (%s%s)\n",
hwloc_disc_component_type_string(component->type), component->name, component->priority,
fprintf(stderr, "Registered discovery component `%s' phases 0x%x with priority %u (%s%s)\n",
component->name, component->phases, component->priority,
filename ? "from plugin " : "statically built", filename ? filename : "");
prev = &hwloc_disc_components;
@ -310,7 +427,7 @@ hwloc_disc_component_register(struct hwloc_disc_component *component,
return 0;
}
#include <static-components.h>
#include "static-components.h"
static void (**hwloc_component_finalize_cbs)(unsigned long);
static unsigned hwloc_component_finalize_cb_count;
@ -415,31 +532,152 @@ hwloc_components_init(void)
}
void
hwloc_backends_init(struct hwloc_topology *topology)
hwloc_topology_components_init(struct hwloc_topology *topology)
{
topology->nr_blacklisted_components = 0;
topology->blacklisted_components = NULL;
topology->backends = NULL;
topology->backend_excludes = 0;
topology->backend_phases = 0;
topology->backend_excluded_phases = 0;
}
/* look for name among components, ignoring things after `:' */
static struct hwloc_disc_component *
hwloc_disc_component_find(int type /* hwloc_disc_component_type_t or -1 if any */,
const char *name /* name of NULL if any */)
hwloc_disc_component_find(const char *name, const char **endp)
{
struct hwloc_disc_component *comp = hwloc_disc_components;
struct hwloc_disc_component *comp;
size_t length;
const char *end = strchr(name, HWLOC_COMPONENT_PHASESEP_CHAR);
if (end) {
length = end-name;
if (endp)
*endp = end+1;
} else {
length = strlen(name);
if (endp)
*endp = NULL;
}
comp = hwloc_disc_components;
while (NULL != comp) {
if ((-1 == type || type == (int) comp->type)
&& (NULL == name || !strcmp(name, comp->name)))
if (!strncmp(name, comp->name, length))
return comp;
comp = comp->next;
}
return NULL;
}
static unsigned
hwloc_phases_from_string(const char *s)
{
if (!s)
return ~0U;
if (s[0]<'0' || s[0]>'9') {
if (!strcasecmp(s, "global"))
return HWLOC_DISC_PHASE_GLOBAL;
else if (!strcasecmp(s, "cpu"))
return HWLOC_DISC_PHASE_CPU;
if (!strcasecmp(s, "memory"))
return HWLOC_DISC_PHASE_MEMORY;
if (!strcasecmp(s, "pci"))
return HWLOC_DISC_PHASE_PCI;
if (!strcasecmp(s, "io"))
return HWLOC_DISC_PHASE_IO;
if (!strcasecmp(s, "misc"))
return HWLOC_DISC_PHASE_MISC;
if (!strcasecmp(s, "annotate"))
return HWLOC_DISC_PHASE_ANNOTATE;
if (!strcasecmp(s, "tweak"))
return HWLOC_DISC_PHASE_TWEAK;
return 0;
}
return (unsigned) strtoul(s, NULL, 0);
}
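/* Examples of accepted phase suffixes (illustrative):
 *   hwloc_phases_from_string(NULL)    -> ~0U (all phases)
 *   hwloc_phases_from_string("io")    -> HWLOC_DISC_PHASE_IO
 *   hwloc_phases_from_string("0x30")  -> 0x30 (numeric form, via strtoul)
 *   hwloc_phases_from_string("bogus") -> 0 (unknown phase name)
 * These suffixes appear after ':' in component names, e.g. "linux:io". */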
static int
hwloc_disc_component_blacklist_one(struct hwloc_topology *topology,
const char *name)
{
struct hwloc_topology_forced_component_s *blacklisted;
struct hwloc_disc_component *comp;
unsigned phases;
unsigned i;
if (!strcmp(name, "linuxpci") || !strcmp(name, "linuxio")) {
/* replace linuxpci and linuxio with linux (with IO phases)
* for backward compatibility with pre-v2.0 and v2.0 respectively */
if (hwloc_components_verbose)
fprintf(stderr, "Replacing deprecated component `%s' with `linux' IO phases in blacklisting\n", name);
comp = hwloc_disc_component_find("linux", NULL);
phases = HWLOC_DISC_PHASE_PCI | HWLOC_DISC_PHASE_IO | HWLOC_DISC_PHASE_MISC | HWLOC_DISC_PHASE_ANNOTATE;
} else {
/* normal lookup */
const char *end;
comp = hwloc_disc_component_find(name, &end);
phases = hwloc_phases_from_string(end);
}
if (!comp) {
errno = EINVAL;
return -1;
}
if (hwloc_components_verbose)
fprintf(stderr, "Blacklisting component `%s` phases 0x%x\n", comp->name, phases);
for(i=0; i<topology->nr_blacklisted_components; i++) {
if (topology->blacklisted_components[i].component == comp) {
topology->blacklisted_components[i].phases |= phases;
return 0;
}
}
blacklisted = realloc(topology->blacklisted_components, (topology->nr_blacklisted_components+1)*sizeof(*blacklisted));
if (!blacklisted)
return -1;
blacklisted[topology->nr_blacklisted_components].component = comp;
blacklisted[topology->nr_blacklisted_components].phases = phases;
topology->blacklisted_components = blacklisted;
topology->nr_blacklisted_components++;
return 0;
}
int
hwloc_topology_set_components(struct hwloc_topology *topology,
unsigned long flags,
const char *name)
{
if (topology->is_loaded) {
errno = EBUSY;
return -1;
}
if (flags & ~HWLOC_TOPOLOGY_COMPONENTS_FLAG_BLACKLIST) {
errno = EINVAL;
return -1;
}
/* this flag is strictly required for now */
if (flags != HWLOC_TOPOLOGY_COMPONENTS_FLAG_BLACKLIST) {
errno = EINVAL;
return -1;
}
if (!strncmp(name, "all", 3) && name[3] == HWLOC_COMPONENT_PHASESEP_CHAR) {
topology->backend_excluded_phases = hwloc_phases_from_string(name+4);
return 0;
}
return hwloc_disc_component_blacklist_one(topology, name);
}
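/* Illustrative application-side usage of the new blacklisting API (sketch,
 * error checking omitted): blacklist the IO phases of the linux component
 * and the whole x86 component before loading the topology. */
static void example_blacklist_components(void)
{
  hwloc_topology_t topo;
  hwloc_topology_init(&topo);
  hwloc_topology_set_components(topo, HWLOC_TOPOLOGY_COMPONENTS_FLAG_BLACKLIST, "linux:io");
  hwloc_topology_set_components(topo, HWLOC_TOPOLOGY_COMPONENTS_FLAG_BLACKLIST, "x86");
  hwloc_topology_load(topo);
  hwloc_topology_destroy(topo);
}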
/* used by set_xml(), set_synthetic(), ... environment variables, ... to force the first backend */
int
hwloc_disc_component_force_enable(struct hwloc_topology *topology,
int envvar_forced,
int type, const char *name,
const char *name,
const void *data1, const void *data2, const void *data3)
{
struct hwloc_disc_component *comp;
@ -450,18 +688,28 @@ hwloc_disc_component_force_enable(struct hwloc_topology *topology,
return -1;
}
comp = hwloc_disc_component_find(type, name);
comp = hwloc_disc_component_find(name, NULL);
if (!comp) {
errno = ENOSYS;
return -1;
}
backend = comp->instantiate(comp, data1, data2, data3);
backend = comp->instantiate(topology, comp, 0U /* force-enabled don't get any phase blacklisting */,
data1, data2, data3);
if (backend) {
int err;
backend->envvar_forced = envvar_forced;
if (topology->backends)
hwloc_backends_disable_all(topology);
return hwloc_backend_enable(topology, backend);
err = hwloc_backend_enable(backend);
if (comp->phases == HWLOC_DISC_PHASE_GLOBAL) {
char *env = getenv("HWLOC_ANNOTATE_GLOBAL_COMPONENTS");
if (env && atoi(env))
topology->backend_excluded_phases &= ~HWLOC_DISC_PHASE_ANNOTATE;
}
return err;
} else
return -1;
}
@ -469,29 +717,32 @@ hwloc_disc_component_force_enable(struct hwloc_topology *topology,
static int
hwloc_disc_component_try_enable(struct hwloc_topology *topology,
struct hwloc_disc_component *comp,
const char *comparg,
int envvar_forced)
int envvar_forced,
unsigned blacklisted_phases)
{
struct hwloc_backend *backend;
if (topology->backend_excludes & comp->type) {
if (!(comp->phases & ~(topology->backend_excluded_phases | blacklisted_phases))) {
/* all of this backend's phases are already excluded, exclude the backend entirely */
if (hwloc_components_verbose)
/* do not warn if envvar_forced since system-wide HWLOC_COMPONENTS must be silently ignored after set_xml() etc.
*/
fprintf(stderr, "Excluding %s discovery component `%s', conflicts with excludes 0x%x\n",
hwloc_disc_component_type_string(comp->type), comp->name, topology->backend_excludes);
fprintf(stderr, "Excluding discovery component `%s' phases 0x%x, conflicts with excludes 0x%x\n",
comp->name, comp->phases, topology->backend_excluded_phases);
return -1;
}
backend = comp->instantiate(comp, comparg, NULL, NULL);
backend = comp->instantiate(topology, comp, topology->backend_excluded_phases | blacklisted_phases,
NULL, NULL, NULL);
if (!backend) {
if (hwloc_components_verbose || envvar_forced)
fprintf(stderr, "Failed to instantiate discovery component `%s'\n", comp->name);
return -1;
}
backend->phases &= ~blacklisted_phases;
backend->envvar_forced = envvar_forced;
return hwloc_backend_enable(topology, backend);
return hwloc_backend_enable(backend);
}
void
@ -502,10 +753,47 @@ hwloc_disc_components_enable_others(struct hwloc_topology *topology)
int tryall = 1;
const char *_env;
char *env; /* we'll modify the env value, so duplicate it */
unsigned i;
_env = getenv("HWLOC_COMPONENTS");
env = _env ? strdup(_env) : NULL;
/* blacklist disabled components */
if (env) {
char *curenv = env;
size_t s;
while (*curenv) {
s = strcspn(curenv, HWLOC_COMPONENT_SEPS);
if (s) {
char c;
if (curenv[0] != HWLOC_COMPONENT_EXCLUDE_CHAR)
goto nextname;
/* save the last char and replace with \0 */
c = curenv[s];
curenv[s] = '\0';
/* blacklist it, and just ignore failures to allocate */
hwloc_disc_component_blacklist_one(topology, curenv+1);
/* remove that blacklisted name from the string */
for(i=0; i<s; i++)
curenv[i] = *HWLOC_COMPONENT_SEPS;
/* restore chars (the second loop below needs env to be unmodified) */
curenv[s] = c;
}
nextname:
curenv += s;
if (*curenv)
/* Skip comma */
curenv++;
}
}
/* enable explicitly listed components */
if (env) {
char *curenv = env;
@ -515,22 +803,7 @@ hwloc_disc_components_enable_others(struct hwloc_topology *topology)
s = strcspn(curenv, HWLOC_COMPONENT_SEPS);
if (s) {
char c;
/* replace linuxpci with linuxio for backward compatibility with pre-v2.0 */
if (!strncmp(curenv, "linuxpci", 8) && s == 8) {
curenv[5] = 'i';
curenv[6] = 'o';
curenv[7] = *HWLOC_COMPONENT_SEPS;
} else if (curenv[0] == HWLOC_COMPONENT_EXCLUDE_CHAR && !strncmp(curenv+1, "linuxpci", 8) && s == 9) {
curenv[6] = 'i';
curenv[7] = 'o';
curenv[8] = *HWLOC_COMPONENT_SEPS;
/* skip this name, it's a negated one */
goto nextname;
}
if (curenv[0] == HWLOC_COMPONENT_EXCLUDE_CHAR)
goto nextname;
const char *name;
if (!strncmp(curenv, HWLOC_COMPONENT_STOP_NAME, s)) {
tryall = 0;
@ -541,18 +814,31 @@ hwloc_disc_components_enable_others(struct hwloc_topology *topology)
c = curenv[s];
curenv[s] = '\0';
comp = hwloc_disc_component_find(-1, curenv);
name = curenv;
if (!strcmp(name, "linuxpci") || !strcmp(name, "linuxio")) {
if (hwloc_components_verbose)
fprintf(stderr, "Replacing deprecated component `%s' with `linux' in envvar forcing\n", name);
name = "linux";
}
comp = hwloc_disc_component_find(name, NULL /* we enable the entire component, phases must be blacklisted separately */);
if (comp) {
hwloc_disc_component_try_enable(topology, comp, NULL, 1 /* envvar forced */);
unsigned blacklisted_phases = 0U;
for(i=0; i<topology->nr_blacklisted_components; i++)
if (comp == topology->blacklisted_components[i].component) {
blacklisted_phases = topology->blacklisted_components[i].phases;
break;
}
if (comp->phases & ~blacklisted_phases)
hwloc_disc_component_try_enable(topology, comp, 1 /* envvar forced */, blacklisted_phases);
} else {
fprintf(stderr, "Cannot find discovery component `%s'\n", curenv);
fprintf(stderr, "Cannot find discovery component `%s'\n", name);
}
/* restore chars (the second loop below needs env to be unmodified) */
curenv[s] = c;
}
nextname:
curenv += s;
if (*curenv)
/* Skip comma */
@ -566,26 +852,24 @@ nextname:
if (tryall) {
comp = hwloc_disc_components;
while (NULL != comp) {
unsigned blacklisted_phases = 0U;
if (!comp->enabled_by_default)
goto nextcomp;
/* check if this component was explicitly excluded in env */
if (env) {
char *curenv = env;
while (*curenv) {
size_t s = strcspn(curenv, HWLOC_COMPONENT_SEPS);
if (curenv[0] == HWLOC_COMPONENT_EXCLUDE_CHAR && !strncmp(curenv+1, comp->name, s-1) && strlen(comp->name) == s-1) {
if (hwloc_components_verbose)
fprintf(stderr, "Excluding %s discovery component `%s' because of HWLOC_COMPONENTS environment variable\n",
hwloc_disc_component_type_string(comp->type), comp->name);
goto nextcomp;
}
curenv += s;
if (*curenv)
/* Skip comma */
curenv++;
/* check if this component was blacklisted by the application */
for(i=0; i<topology->nr_blacklisted_components; i++)
if (comp == topology->blacklisted_components[i].component) {
blacklisted_phases = topology->blacklisted_components[i].phases;
break;
}
if (!(comp->phases & ~blacklisted_phases)) {
if (hwloc_components_verbose)
fprintf(stderr, "Excluding blacklisted discovery component `%s' phases 0x%x\n",
comp->name, comp->phases);
goto nextcomp;
}
hwloc_disc_component_try_enable(topology, comp, NULL, 0 /* defaults, not envvar forced */);
hwloc_disc_component_try_enable(topology, comp, 0 /* defaults, not envvar forced */, blacklisted_phases);
nextcomp:
comp = comp->next;
}
@ -597,7 +881,7 @@ nextcomp:
backend = topology->backends;
fprintf(stderr, "Final list of enabled discovery components: ");
while (backend != NULL) {
fprintf(stderr, "%s%s", first ? "" : ",", backend->component->name);
fprintf(stderr, "%s%s(0x%x)", first ? "" : ",", backend->component->name, backend->phases);
backend = backend->next;
first = 0;
}
@ -638,7 +922,8 @@ hwloc_components_fini(void)
}
struct hwloc_backend *
hwloc_backend_alloc(struct hwloc_disc_component *component)
hwloc_backend_alloc(struct hwloc_topology *topology,
struct hwloc_disc_component *component)
{
struct hwloc_backend * backend = malloc(sizeof(*backend));
if (!backend) {
@ -646,6 +931,12 @@ hwloc_backend_alloc(struct hwloc_disc_component *component)
return NULL;
}
backend->component = component;
backend->topology = topology;
/* filter-out component phases that are excluded */
backend->phases = component->phases & ~topology->backend_excluded_phases;
if (backend->phases != component->phases && hwloc_components_verbose)
fprintf(stderr, "Trying discovery component `%s' with phases 0x%x instead of 0x%x\n",
component->name, backend->phases, component->phases);
backend->flags = 0;
backend->discover = NULL;
backend->get_pci_busid_cpuset = NULL;
@ -665,14 +956,15 @@ hwloc_backend_disable(struct hwloc_backend *backend)
}
int
hwloc_backend_enable(struct hwloc_topology *topology, struct hwloc_backend *backend)
hwloc_backend_enable(struct hwloc_backend *backend)
{
struct hwloc_topology *topology = backend->topology;
struct hwloc_backend **pprev;
/* check backend flags */
if (backend->flags) {
fprintf(stderr, "Cannot enable %s discovery component `%s' with unknown flags %lx\n",
hwloc_disc_component_type_string(backend->component->type), backend->component->name, backend->flags);
fprintf(stderr, "Cannot enable discovery component `%s' phases 0x%x with unknown flags %lx\n",
backend->component->name, backend->component->phases, backend->flags);
return -1;
}
@ -681,8 +973,8 @@ hwloc_backend_enable(struct hwloc_topology *topology, struct hwloc_backend *back
while (NULL != *pprev) {
if ((*pprev)->component == backend->component) {
if (hwloc_components_verbose)
fprintf(stderr, "Cannot enable %s discovery component `%s' twice\n",
hwloc_disc_component_type_string(backend->component->type), backend->component->name);
fprintf(stderr, "Cannot enable discovery component `%s' phases 0x%x twice\n",
backend->component->name, backend->component->phases);
hwloc_backend_disable(backend);
errno = EBUSY;
return -1;
@ -691,8 +983,8 @@ hwloc_backend_enable(struct hwloc_topology *topology, struct hwloc_backend *back
}
if (hwloc_components_verbose)
fprintf(stderr, "Enabling %s discovery component `%s'\n",
hwloc_disc_component_type_string(backend->component->type), backend->component->name);
fprintf(stderr, "Enabling discovery component `%s' with phases 0x%x (among 0x%x)\n",
backend->component->name, backend->phases, backend->component->phases);
/* enqueue at the end */
pprev = &topology->backends;
@ -701,8 +993,8 @@ hwloc_backend_enable(struct hwloc_topology *topology, struct hwloc_backend *back
backend->next = *pprev;
*pprev = backend;
backend->topology = topology;
topology->backend_excludes |= backend->component->excludes;
topology->backend_phases |= backend->component->phases;
topology->backend_excluded_phases |= backend->component->excluded_phases;
return 0;
}
@ -712,7 +1004,7 @@ hwloc_backends_is_thissystem(struct hwloc_topology *topology)
struct hwloc_backend *backend;
const char *local_env;
/* Apply is_thissystem topology flag before we enforce envvar backends.
/*
* If the application changed the backend with set_foo(),
* it may use set_flags() to update the is_thissystem flag here.
* If it changes the backend with environment variables below,
@ -775,11 +1067,20 @@ hwloc_backends_disable_all(struct hwloc_topology *topology)
while (NULL != (backend = topology->backends)) {
struct hwloc_backend *next = backend->next;
if (hwloc_components_verbose)
fprintf(stderr, "Disabling %s discovery component `%s'\n",
hwloc_disc_component_type_string(backend->component->type), backend->component->name);
fprintf(stderr, "Disabling discovery component `%s'\n",
backend->component->name);
hwloc_backend_disable(backend);
topology->backends = next;
}
topology->backends = NULL;
topology->backend_excludes = 0;
topology->backend_excluded_phases = 0;
}
void
hwloc_topology_components_fini(struct hwloc_topology *topology)
{
/* hwloc_backends_disable_all() must have been called earlier */
assert(!topology->backends);
free(topology->blacklisted_components);
}

View file

@ -1,11 +1,11 @@
/*
* Copyright © 2013-2018 Inria. All rights reserved.
* Copyright © 2013-2019 Inria. All rights reserved.
* See COPYING in top-level directory.
*/
#include <private/autogen/config.h>
#include <private/private.h>
#include <private/misc.h>
#include "private/autogen/config.h"
#include "private/private.h"
#include "private/misc.h"
int hwloc_topology_diff_destroy(hwloc_topology_diff_t diff)
{
@ -351,7 +351,8 @@ int hwloc_topology_diff_build(hwloc_topology_t topo1,
err = 1;
break;
}
if (dist1->type != dist2->type
if (dist1->unique_type != dist2->unique_type
|| dist1->different_types || dist2->different_types /* too lazy to support this case */
|| dist1->nbobjs != dist2->nbobjs
|| dist1->kind != dist2->kind
|| memcmp(dist1->values, dist2->values, dist1->nbobjs * dist1->nbobjs * sizeof(*dist1->values))) {
@ -463,6 +464,10 @@ int hwloc_topology_diff_apply(hwloc_topology_t topology,
errno = EINVAL;
return -1;
}
if (topology->adopted_shmem_addr) {
errno = EPERM;
return -1;
}
if (flags & ~HWLOC_TOPOLOGY_DIFF_APPLY_REVERSE) {
errno = EINVAL;

View file

@ -1,19 +1,22 @@
/*
* Copyright © 2010-2018 Inria. All rights reserved.
* Copyright © 2010-2019 Inria. All rights reserved.
* Copyright © 2011-2012 Université Bordeaux
* Copyright © 2011 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
*/
#include <private/autogen/config.h>
#include <hwloc.h>
#include <private/private.h>
#include <private/debug.h>
#include <private/misc.h>
#include "private/autogen/config.h"
#include "hwloc.h"
#include "private/private.h"
#include "private/debug.h"
#include "private/misc.h"
#include <float.h>
#include <math.h>
static struct hwloc_internal_distances_s *
hwloc__internal_distances_from_public(hwloc_topology_t topology, struct hwloc_distances_s *distances);
/******************************************************
* Global init, prepare, destroy, dup
*/
@ -70,6 +73,8 @@ void hwloc_internal_distances_prepare(struct hwloc_topology *topology)
static void hwloc_internal_distances_free(struct hwloc_internal_distances_s *dist)
{
free(dist->name);
free(dist->different_types);
free(dist->indexes);
free(dist->objs);
free(dist->values);
@ -96,15 +101,35 @@ static int hwloc_internal_distances_dup_one(struct hwloc_topology *new, struct h
newdist = hwloc_tma_malloc(tma, sizeof(*newdist));
if (!newdist)
return -1;
if (olddist->name) {
newdist->name = hwloc_tma_strdup(tma, olddist->name);
if (!newdist->name) {
assert(!tma || !tma->dontfree); /* this tma cannot fail to allocate */
hwloc_internal_distances_free(newdist);
return -1;
}
} else {
newdist->name = NULL;
}
newdist->type = olddist->type;
if (olddist->different_types) {
newdist->different_types = hwloc_tma_malloc(tma, nbobjs * sizeof(*newdist->different_types));
if (!newdist->different_types) {
assert(!tma || !tma->dontfree); /* this tma cannot fail to allocate */
hwloc_internal_distances_free(newdist);
return -1;
}
memcpy(newdist->different_types, olddist->different_types, nbobjs * sizeof(*newdist->different_types));
} else
newdist->different_types = NULL;
newdist->unique_type = olddist->unique_type;
newdist->nbobjs = nbobjs;
newdist->kind = olddist->kind;
newdist->id = olddist->id;
newdist->indexes = hwloc_tma_malloc(tma, nbobjs * sizeof(*newdist->indexes));
newdist->objs = hwloc_tma_calloc(tma, nbobjs * sizeof(*newdist->objs));
newdist->objs_are_valid = 0;
newdist->iflags = olddist->iflags & ~HWLOC_INTERNAL_DIST_FLAG_OBJS_VALID; /* must be revalidated after dup() */
newdist->values = hwloc_tma_malloc(tma, nbobjs*nbobjs * sizeof(*newdist->values));
if (!newdist->indexes || !newdist->objs || !newdist->values) {
assert(!tma || !tma->dontfree); /* this tma cannot fail to allocate */
@ -150,6 +175,10 @@ int hwloc_distances_remove(hwloc_topology_t topology)
errno = EINVAL;
return -1;
}
if (topology->adopted_shmem_addr) {
errno = EPERM;
return -1;
}
hwloc_internal_distances_destroy(topology);
return 0;
}
@ -163,6 +192,10 @@ int hwloc_distances_remove_by_depth(hwloc_topology_t topology, int depth)
errno = EINVAL;
return -1;
}
if (topology->adopted_shmem_addr) {
errno = EPERM;
return -1;
}
/* switch back to types since we don't support groups for now */
type = hwloc_get_depth_type(topology, depth);
@ -174,7 +207,7 @@ int hwloc_distances_remove_by_depth(hwloc_topology_t topology, int depth)
next = topology->first_dist;
while ((dist = next) != NULL) {
next = dist->next;
if (dist->type == type) {
if (dist->unique_type == type) {
if (next)
next->prev = dist->prev;
else
@ -190,6 +223,27 @@ int hwloc_distances_remove_by_depth(hwloc_topology_t topology, int depth)
return 0;
}
int hwloc_distances_release_remove(hwloc_topology_t topology,
struct hwloc_distances_s *distances)
{
struct hwloc_internal_distances_s *dist = hwloc__internal_distances_from_public(topology, distances);
if (!dist) {
errno = EINVAL;
return -1;
}
if (dist->prev)
dist->prev->next = dist->next;
else
topology->first_dist = dist->next;
if (dist->next)
dist->next->prev = dist->prev;
else
topology->last_dist = dist->prev;
hwloc_internal_distances_free(dist);
hwloc_distances_release(topology, distances);
return 0;
}
/******************************************************
* Add distances to the topology
*/
@ -201,17 +255,34 @@ hwloc__groups_by_distances(struct hwloc_topology *topology, unsigned nbobjs, str
* the caller gives us the distances and objs pointers, we'll free them later.
*/
static int
hwloc_internal_distances__add(hwloc_topology_t topology,
hwloc_obj_type_t type, unsigned nbobjs, hwloc_obj_t *objs, uint64_t *indexes, uint64_t *values,
unsigned long kind)
hwloc_internal_distances__add(hwloc_topology_t topology, const char *name,
hwloc_obj_type_t unique_type, hwloc_obj_type_t *different_types,
unsigned nbobjs, hwloc_obj_t *objs, uint64_t *indexes, uint64_t *values,
unsigned long kind, unsigned iflags)
{
struct hwloc_internal_distances_s *dist = calloc(1, sizeof(*dist));
struct hwloc_internal_distances_s *dist;
if (different_types) {
kind |= HWLOC_DISTANCES_KIND_HETEROGENEOUS_TYPES; /* the user isn't forced to give it */
} else if (kind & HWLOC_DISTANCES_KIND_HETEROGENEOUS_TYPES) {
errno = EINVAL;
goto err;
}
dist = calloc(1, sizeof(*dist));
if (!dist)
goto err;
dist->type = type;
if (name)
dist->name = strdup(name); /* ignore failure */
dist->unique_type = unique_type;
dist->different_types = different_types;
dist->nbobjs = nbobjs;
dist->kind = kind;
dist->iflags = iflags;
assert(!!(iflags & HWLOC_INTERNAL_DIST_FLAG_OBJS_VALID) == !!objs);
if (!objs) {
assert(indexes);
@ -220,18 +291,16 @@ hwloc_internal_distances__add(hwloc_topology_t topology,
dist->objs = calloc(nbobjs, sizeof(hwloc_obj_t));
if (!dist->objs)
goto err_with_dist;
dist->objs_are_valid = 0;
} else {
unsigned i;
assert(!indexes);
/* we only have objs, generate the indexes arrays so that we can refresh objs later */
dist->objs = objs;
dist->objs_are_valid = 1;
dist->indexes = malloc(nbobjs * sizeof(*dist->indexes));
if (!dist->indexes)
goto err_with_dist;
if (dist->type == HWLOC_OBJ_PU || dist->type == HWLOC_OBJ_NUMANODE) {
if (HWLOC_DIST_TYPE_USE_OS_INDEX(dist->unique_type)) {
for(i=0; i<nbobjs; i++)
dist->indexes[i] = objs[i]->os_index;
} else {
@ -254,18 +323,23 @@ hwloc_internal_distances__add(hwloc_topology_t topology,
return 0;
err_with_dist:
if (name)
free(dist->name);
free(dist);
err:
free(different_types);
free(objs);
free(indexes);
free(values);
return -1;
}
int hwloc_internal_distances_add_by_index(hwloc_topology_t topology,
hwloc_obj_type_t type, unsigned nbobjs, uint64_t *indexes, uint64_t *values,
int hwloc_internal_distances_add_by_index(hwloc_topology_t topology, const char *name,
hwloc_obj_type_t unique_type, hwloc_obj_type_t *different_types, unsigned nbobjs, uint64_t *indexes, uint64_t *values,
unsigned long kind, unsigned long flags)
{
unsigned iflags = 0; /* objs not valid */
if (nbobjs < 2) {
errno = EINVAL;
goto err;
@ -279,24 +353,71 @@ int hwloc_internal_distances_add_by_index(hwloc_topology_t topology,
goto err;
}
return hwloc_internal_distances__add(topology, type, nbobjs, NULL, indexes, values, kind);
return hwloc_internal_distances__add(topology, name, unique_type, different_types, nbobjs, NULL, indexes, values, kind, iflags);
err:
free(indexes);
free(values);
free(different_types);
return -1;
}
int hwloc_internal_distances_add(hwloc_topology_t topology,
static void
hwloc_internal_distances_restrict(hwloc_obj_t *objs,
uint64_t *indexes,
uint64_t *values,
unsigned nbobjs, unsigned disappeared);
int hwloc_internal_distances_add(hwloc_topology_t topology, const char *name,
unsigned nbobjs, hwloc_obj_t *objs, uint64_t *values,
unsigned long kind, unsigned long flags)
{
hwloc_obj_type_t unique_type, *different_types;
unsigned i, disappeared = 0;
unsigned iflags = HWLOC_INTERNAL_DIST_FLAG_OBJS_VALID;
if (nbobjs < 2) {
errno = EINVAL;
goto err;
}
if (topology->grouping && (flags & HWLOC_DISTANCES_ADD_FLAG_GROUP)) {
/* is there any NULL object? (useful if a backend failed to insert some objects) */
for(i=0; i<nbobjs; i++)
if (!objs[i])
disappeared++;
if (disappeared) {
/* some objects are NULL */
if (disappeared == nbobjs) {
/* nothing left, drop the matrix */
free(objs);
free(values);
return 0;
}
/* restrict the matrix */
hwloc_internal_distances_restrict(objs, NULL, values, nbobjs, disappeared);
nbobjs -= disappeared;
}
unique_type = objs[0]->type;
for(i=1; i<nbobjs; i++)
if (objs[i]->type != unique_type) {
unique_type = HWLOC_OBJ_TYPE_NONE;
break;
}
if (unique_type == HWLOC_OBJ_TYPE_NONE) {
/* heterogeneous types */
different_types = malloc(nbobjs * sizeof(*different_types));
if (!different_types)
goto err;
for(i=0; i<nbobjs; i++)
different_types[i] = objs[i]->type;
} else {
/* homogeneous types */
different_types = NULL;
}
if (topology->grouping && (flags & HWLOC_DISTANCES_ADD_FLAG_GROUP) && !different_types) {
float full_accuracy = 0.f;
float *accuracies;
unsigned nbaccuracies;
@ -310,8 +431,8 @@ int hwloc_internal_distances_add(hwloc_topology_t topology,
}
if (topology->grouping_verbose) {
unsigned i, j;
int gp = (objs[0]->type != HWLOC_OBJ_NUMANODE && objs[0]->type != HWLOC_OBJ_PU);
unsigned j;
int gp = !HWLOC_DIST_TYPE_USE_OS_INDEX(unique_type);
fprintf(stderr, "Trying to group objects using distance matrix:\n");
fprintf(stderr, "%s", gp ? "gp_index" : "os_index");
for(j=0; j<nbobjs; j++)
@ -329,7 +450,7 @@ int hwloc_internal_distances_add(hwloc_topology_t topology,
kind, nbaccuracies, accuracies, 1 /* check the first matrix */);
}
return hwloc_internal_distances__add(topology, objs[0]->type, nbobjs, objs, NULL, values, kind);
return hwloc_internal_distances__add(topology, name, unique_type, different_types, nbobjs, objs, NULL, values, kind, iflags);
err:
free(objs);
@ -348,7 +469,6 @@ int hwloc_distances_add(hwloc_topology_t topology,
unsigned nbobjs, hwloc_obj_t *objs, hwloc_uint64_t *values,
unsigned long kind, unsigned long flags)
{
hwloc_obj_type_t type;
unsigned i;
uint64_t *_values;
hwloc_obj_t *_objs;
@ -358,6 +478,10 @@ int hwloc_distances_add(hwloc_topology_t topology,
errno = EINVAL;
return -1;
}
if (topology->adopted_shmem_addr) {
errno = EPERM;
return -1;
}
if ((kind & ~HWLOC_DISTANCES_KIND_ALL)
|| hwloc_weight_long(kind & HWLOC_DISTANCES_KIND_FROM_ALL) != 1
|| hwloc_weight_long(kind & HWLOC_DISTANCES_KIND_MEANS_ALL) != 1
@ -368,15 +492,8 @@ int hwloc_distances_add(hwloc_topology_t topology,
/* no strict need to check for duplicates, things shouldn't break */
type = objs[0]->type;
if (type == HWLOC_OBJ_GROUP) {
/* not supported yet, would require we save the subkind together with the type. */
errno = EINVAL;
return -1;
}
for(i=1; i<nbobjs; i++)
if (!objs[i] || objs[i]->type != type) {
if (!objs[i]) {
errno = EINVAL;
return -1;
}
@ -389,7 +506,7 @@ int hwloc_distances_add(hwloc_topology_t topology,
memcpy(_objs, objs, nbobjs*sizeof(hwloc_obj_t));
memcpy(_values, values, nbobjs*nbobjs*sizeof(*_values));
err = hwloc_internal_distances_add(topology, nbobjs, _objs, _values, kind, flags);
err = hwloc_internal_distances_add(topology, NULL, nbobjs, _objs, _values, kind, flags);
if (err < 0)
goto out; /* _objs and _values freed in hwloc_internal_distances_add() */
@ -409,9 +526,9 @@ int hwloc_distances_add(hwloc_topology_t topology,
* Refresh objects in distances
*/
static hwloc_obj_t hwloc_find_obj_by_type_and_gp_index(hwloc_topology_t topology, hwloc_obj_type_t type, uint64_t gp_index)
static hwloc_obj_t hwloc_find_obj_by_depth_and_gp_index(hwloc_topology_t topology, unsigned depth, uint64_t gp_index)
{
hwloc_obj_t obj = hwloc_get_obj_by_type(topology, type, 0);
hwloc_obj_t obj = hwloc_get_obj_by_depth(topology, depth, 0);
while (obj) {
if (obj->gp_index == gp_index)
return obj;
@ -420,12 +537,31 @@ static hwloc_obj_t hwloc_find_obj_by_type_and_gp_index(hwloc_topology_t topology
return NULL;
}
static void
hwloc_internal_distances_restrict(struct hwloc_internal_distances_s *dist,
hwloc_obj_t *objs,
unsigned disappeared)
static hwloc_obj_t hwloc_find_obj_by_type_and_gp_index(hwloc_topology_t topology, hwloc_obj_type_t type, uint64_t gp_index)
{
int depth = hwloc_get_type_depth(topology, type);
if (depth == HWLOC_TYPE_DEPTH_UNKNOWN)
return NULL;
if (depth == HWLOC_TYPE_DEPTH_MULTIPLE) {
int topodepth = hwloc_topology_get_depth(topology);
for(depth=0; depth<topodepth; depth++) {
if (hwloc_get_depth_type(topology, depth) == type) {
hwloc_obj_t obj = hwloc_find_obj_by_depth_and_gp_index(topology, depth, gp_index);
if (obj)
return obj;
}
}
return NULL;
}
return hwloc_find_obj_by_depth_and_gp_index(topology, depth, gp_index);
}
static void
hwloc_internal_distances_restrict(hwloc_obj_t *objs,
uint64_t *indexes,
uint64_t *values,
unsigned nbobjs, unsigned disappeared)
{
unsigned nbobjs = dist->nbobjs;
unsigned i, newi;
unsigned j, newj;
@ -433,7 +569,7 @@ hwloc_internal_distances_restrict(struct hwloc_internal_distances_s *dist,
if (objs[i]) {
for(j=0, newj=0; j<nbobjs; j++)
if (objs[j]) {
dist->values[newi*(nbobjs-disappeared)+newj] = dist->values[i*nbobjs+j];
values[newi*(nbobjs-disappeared)+newj] = values[i*nbobjs+j];
newj++;
}
newi++;
@ -442,25 +578,25 @@ hwloc_internal_distances_restrict(struct hwloc_internal_distances_s *dist,
for(i=0, newi=0; i<nbobjs; i++)
if (objs[i]) {
objs[newi] = objs[i];
dist->indexes[newi] = dist->indexes[i];
if (indexes)
indexes[newi] = indexes[i];
newi++;
}
dist->nbobjs -= disappeared;
}
static int
hwloc_internal_distances_refresh_one(hwloc_topology_t topology,
struct hwloc_internal_distances_s *dist)
{
hwloc_obj_type_t type = dist->type;
hwloc_obj_type_t unique_type = dist->unique_type;
hwloc_obj_type_t *different_types = dist->different_types;
unsigned nbobjs = dist->nbobjs;
hwloc_obj_t *objs = dist->objs;
uint64_t *indexes = dist->indexes;
unsigned disappeared = 0;
unsigned i;
if (dist->objs_are_valid)
if (dist->iflags & HWLOC_INTERNAL_DIST_FLAG_OBJS_VALID)
return 0;
for(i=0; i<nbobjs; i++) {
@ -468,12 +604,16 @@ hwloc_internal_distances_refresh_one(hwloc_topology_t topology,
/* TODO use cpuset/nodeset to find pus/numas from the root?
* faster than traversing the entire level?
*/
if (type == HWLOC_OBJ_PU)
obj = hwloc_get_pu_obj_by_os_index(topology, (unsigned) indexes[i]);
else if (type == HWLOC_OBJ_NUMANODE)
obj = hwloc_get_numanode_obj_by_os_index(topology, (unsigned) indexes[i]);
else
obj = hwloc_find_obj_by_type_and_gp_index(topology, type, indexes[i]);
if (HWLOC_DIST_TYPE_USE_OS_INDEX(unique_type)) {
if (unique_type == HWLOC_OBJ_PU)
obj = hwloc_get_pu_obj_by_os_index(topology, (unsigned) indexes[i]);
else if (unique_type == HWLOC_OBJ_NUMANODE)
obj = hwloc_get_numanode_obj_by_os_index(topology, (unsigned) indexes[i]);
else
abort();
} else {
obj = hwloc_find_obj_by_type_and_gp_index(topology, different_types ? different_types[i] : unique_type, indexes[i]);
}
objs[i] = obj;
if (!obj)
disappeared++;
@ -483,10 +623,12 @@ hwloc_internal_distances_refresh_one(hwloc_topology_t topology,
/* became useless, drop */
return -1;
if (disappeared)
hwloc_internal_distances_restrict(dist, objs, disappeared);
if (disappeared) {
hwloc_internal_distances_restrict(objs, dist->indexes, dist->values, nbobjs, disappeared);
dist->nbobjs -= disappeared;
}
dist->objs_are_valid = 1;
dist->iflags |= HWLOC_INTERNAL_DIST_FLAG_OBJS_VALID;
return 0;
}
@ -520,32 +662,64 @@ hwloc_internal_distances_invalidate_cached_objs(hwloc_topology_t topology)
{
struct hwloc_internal_distances_s *dist;
for(dist = topology->first_dist; dist; dist = dist->next)
dist->objs_are_valid = 0;
dist->iflags &= ~HWLOC_INTERNAL_DIST_FLAG_OBJS_VALID;
}
/******************************************************
* User API for getting distances
*/
/* what we actually allocate for user queries, even if we only
* return the distances part of it.
*/
struct hwloc_distances_container_s {
unsigned id;
struct hwloc_distances_s distances;
};
#define HWLOC_DISTANCES_CONTAINER_OFFSET ((char*)&((struct hwloc_distances_container_s*)NULL)->distances - (char*)NULL)
#define HWLOC_DISTANCES_CONTAINER(_d) (struct hwloc_distances_container_s *) ( ((char*)_d) - HWLOC_DISTANCES_CONTAINER_OFFSET )
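/* Illustrative layout of what the distances "get" functions hand back:
 *
 *   struct hwloc_distances_container_s
 *   +----------+------------------------------------+
 *   | unsigned | struct hwloc_distances_s           |  <- the caller only sees
 *   |   id     | (nbobjs, objs, values, kind, ...)  |     a pointer to this
 *   +----------+------------------------------------+     embedded part
 *
 * HWLOC_DISTANCES_CONTAINER() is the usual container_of() idiom: it walks
 * back from the embedded public structure to its container so that
 * release() and get_name() can find the matching internal structure by id. */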
static struct hwloc_internal_distances_s *
hwloc__internal_distances_from_public(hwloc_topology_t topology, struct hwloc_distances_s *distances)
{
struct hwloc_distances_container_s *cont = HWLOC_DISTANCES_CONTAINER(distances);
struct hwloc_internal_distances_s *dist;
for(dist = topology->first_dist; dist; dist = dist->next)
if (dist->id == cont->id)
return dist;
return NULL;
}
void
hwloc_distances_release(hwloc_topology_t topology __hwloc_attribute_unused,
struct hwloc_distances_s *distances)
{
struct hwloc_distances_container_s *cont = HWLOC_DISTANCES_CONTAINER(distances);
free(distances->values);
free(distances->objs);
free(distances);
free(cont);
}
const char *
hwloc_distances_get_name(hwloc_topology_t topology, struct hwloc_distances_s *distances)
{
struct hwloc_internal_distances_s *dist = hwloc__internal_distances_from_public(topology, distances);
return dist ? dist->name : NULL;
}
static struct hwloc_distances_s *
hwloc_distances_get_one(hwloc_topology_t topology __hwloc_attribute_unused,
struct hwloc_internal_distances_s *dist)
{
struct hwloc_distances_container_s *cont;
struct hwloc_distances_s *distances;
unsigned nbobjs;
distances = malloc(sizeof(*distances));
if (!distances)
cont = malloc(sizeof(*cont));
if (!cont)
return NULL;
distances = &cont->distances;
nbobjs = distances->nbobjs = dist->nbobjs;
@ -560,18 +734,20 @@ hwloc_distances_get_one(hwloc_topology_t topology __hwloc_attribute_unused,
memcpy(distances->values, dist->values, nbobjs*nbobjs*sizeof(*distances->values));
distances->kind = dist->kind;
cont->id = dist->id;
return distances;
out_with_objs:
free(distances->objs);
out:
free(distances);
free(cont);
return NULL;
}
static int
hwloc__distances_get(hwloc_topology_t topology,
hwloc_obj_type_t type,
const char *name, hwloc_obj_type_t type,
unsigned *nrp, struct hwloc_distances_s **distancesp,
unsigned long kind, unsigned long flags __hwloc_attribute_unused)
{
@ -602,7 +778,10 @@ hwloc__distances_get(hwloc_topology_t topology,
unsigned long kind_from = kind & HWLOC_DISTANCES_KIND_FROM_ALL;
unsigned long kind_means = kind & HWLOC_DISTANCES_KIND_MEANS_ALL;
if (type != HWLOC_OBJ_TYPE_NONE && type != dist->type)
if (name && (!dist->name || strcmp(name, dist->name)))
continue;
if (type != HWLOC_OBJ_TYPE_NONE && type != dist->unique_type)
continue;
if (kind_from && !(kind_from & dist->kind))
@ -640,7 +819,7 @@ hwloc_distances_get(hwloc_topology_t topology,
return -1;
}
return hwloc__distances_get(topology, HWLOC_OBJ_TYPE_NONE, nrp, distancesp, kind, flags);
return hwloc__distances_get(topology, NULL, HWLOC_OBJ_TYPE_NONE, nrp, distancesp, kind, flags);
}
int
@ -655,14 +834,40 @@ hwloc_distances_get_by_depth(hwloc_topology_t topology, int depth,
return -1;
}
/* switch back to types since we don't support groups for now */
/* FIXME: passing the depth of a group level may return group distances at a different depth */
type = hwloc_get_depth_type(topology, depth);
if (type == (hwloc_obj_type_t)-1) {
errno = EINVAL;
return -1;
}
return hwloc__distances_get(topology, type, nrp, distancesp, kind, flags);
return hwloc__distances_get(topology, NULL, type, nrp, distancesp, kind, flags);
}
int
hwloc_distances_get_by_name(hwloc_topology_t topology, const char *name,
unsigned *nrp, struct hwloc_distances_s **distancesp,
unsigned long flags)
{
if (flags || !topology->is_loaded) {
errno = EINVAL;
return -1;
}
return hwloc__distances_get(topology, name, HWLOC_OBJ_TYPE_NONE, nrp, distancesp, HWLOC_DISTANCES_KIND_ALL, flags);
}
int
hwloc_distances_get_by_type(hwloc_topology_t topology, hwloc_obj_type_t type,
unsigned *nrp, struct hwloc_distances_s **distancesp,
unsigned long kind, unsigned long flags)
{
if (flags || !topology->is_loaded) {
errno = EINVAL;
return -1;
}
return hwloc__distances_get(topology, NULL, type, nrp, distancesp, kind, flags);
}
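/* Illustrative usage sketch (assumes a loaded hwloc_topology_t "topology"; the
 * "NUMALatency" name and the trimmed error handling are placeholders): typical
 * consumption of the query API above, releasing each matrix with hwloc_distances_release(). */
#if 0
{
  struct hwloc_distances_s *dists[8];
  unsigned nr = 8, i;
  if (hwloc_distances_get(topology, &nr, dists, HWLOC_DISTANCES_KIND_FROM_OS, 0) == 0) {
    for (i = 0; i < nr; i++) {
      const char *name = hwloc_distances_get_name(topology, dists[i]);
      printf("matrix #%u (%s) covers %u objects\n", i, name ? name : "(unnamed)", dists[i]->nbobjs);
      hwloc_distances_release(topology, dists[i]);
    }
  }
  nr = 8;
  if (hwloc_distances_get_by_name(topology, "NUMALatency", &nr, dists, 0) == 0)
    for (i = 0; i < nr; i++)
      hwloc_distances_release(topology, dists[i]);
}
#endif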
/******************************************************
@ -823,10 +1028,14 @@ hwloc__groups_by_distances(struct hwloc_topology *topology,
float *accuracies,
int needcheck)
{
HWLOC_VLA(unsigned, groupids, nbobjs);
unsigned *groupids;
unsigned nbgroups = 0;
unsigned i,j;
int verbose = topology->grouping_verbose;
hwloc_obj_t *groupobjs;
unsigned * groupsizes;
uint64_t *groupvalues;
unsigned failed = 0;
if (nbobjs <= 2)
return;
@ -836,6 +1045,10 @@ hwloc__groups_by_distances(struct hwloc_topology *topology,
/* TODO hwloc__find_groups_by_max_distance() for bandwidth */
return;
groupids = malloc(nbobjs * sizeof(*groupids));
if (!groupids)
return;
for(i=0; i<nbaccuracies; i++) {
if (verbose)
fprintf(stderr, "Trying to group %u %s objects according to physical distances with accuracy %f\n",
@ -847,13 +1060,13 @@ hwloc__groups_by_distances(struct hwloc_topology *topology,
break;
}
if (!nbgroups)
return;
goto out_with_groupids;
{
HWLOC_VLA(hwloc_obj_t, groupobjs, nbgroups);
HWLOC_VLA(unsigned, groupsizes, nbgroups);
HWLOC_VLA(uint64_t, groupvalues, nbgroups*nbgroups);
unsigned failed = 0;
groupobjs = malloc(nbgroups * sizeof(*groupobjs));
groupsizes = malloc(nbgroups * sizeof(*groupsizes));
groupvalues = malloc(nbgroups * nbgroups * sizeof(*groupvalues));
if (!groupobjs || !groupsizes || !groupvalues)
goto out_with_groups;
/* create new Group objects and record their size */
memset(&(groupsizes[0]), 0, sizeof(groupsizes[0]) * nbgroups);
@ -884,7 +1097,7 @@ hwloc__groups_by_distances(struct hwloc_topology *topology,
if (failed)
/* don't try to group above if we got a NULL group here, just keep this incomplete level */
return;
goto out_with_groups;
/* factorize values */
memset(&(groupvalues[0]), 0, sizeof(groupvalues[0]) * nbgroups * nbgroups);
@ -916,5 +1129,11 @@ hwloc__groups_by_distances(struct hwloc_topology *topology,
#endif
hwloc__groups_by_distances(topology, nbgroups, groupobjs, groupvalues, kind, nbaccuracies, accuracies, 0 /* no need to check generated matrix */);
}
out_with_groups:
free(groupobjs);
free(groupsizes);
free(groupvalues);
out_with_groupids:
free(groupids);
}
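/* Illustrative sketch (hypothetical names): the hunk above replaces C99 VLAs with
 * heap allocations plus goto-style cleanup, so a failed allocation unwinds whatever
 * was already allocated. Minimal standalone form, assuming <stdlib.h> and <stdint.h>: */
#if 0
static int example_grouping(unsigned nbobjs, unsigned nbgroups)
{
  int ret = -1;
  unsigned *groupids;
  uint64_t *groupvalues = NULL;
  groupids = malloc(nbobjs * sizeof(*groupids));
  if (!groupids)
    return -1;
  groupvalues = malloc((size_t)nbgroups * nbgroups * sizeof(*groupvalues));
  if (!groupvalues)
    goto out_with_groupids;
  /* ... use the arrays ... */
  ret = 0;
  free(groupvalues);
 out_with_groupids:
  free(groupids);
  return ret;
}
#endif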


@ -1,14 +1,14 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2015 Inria. All rights reserved.
* Copyright © 2009-2018 Inria. All rights reserved.
* Copyright © 2009-2010 Université Bordeaux
* Copyright © 2009-2018 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
*/
#include <private/autogen/config.h>
#include <private/private.h>
#include <private/misc.h>
#include "private/autogen/config.h"
#include "private/private.h"
#include "private/misc.h"
#include <stdarg.h>
#ifdef HAVE_SYS_UTSNAME_H
@ -28,6 +28,7 @@ extern char *program_invocation_name;
extern char *__progname;
#endif
#ifndef HWLOC_HAVE_CORRECT_SNPRINTF
int hwloc_snprintf(char *str, size_t size, const char *format, ...)
{
int ret;
@ -77,21 +78,7 @@ int hwloc_snprintf(char *str, size_t size, const char *format, ...)
return ret;
}
int hwloc_namecoloncmp(const char *haystack, const char *needle, size_t n)
{
size_t i = 0;
while (*haystack && *haystack != ':') {
int ha = *haystack++;
int low_h = tolower(ha);
int ne = *needle++;
int low_n = tolower(ne);
if (low_h != low_n)
return 1;
i++;
}
return i < n;
}
#endif
void hwloc_add_uname_info(struct hwloc_topology *topology __hwloc_attribute_unused,
void *cached_uname __hwloc_attribute_unused)


@ -1,14 +1,14 @@
/*
* Copyright © 2009-2018 Inria. All rights reserved.
* Copyright © 2009-2020 Inria. All rights reserved.
* See COPYING in top-level directory.
*/
#include <private/autogen/config.h>
#include <hwloc.h>
#include <hwloc/plugins.h>
#include <private/private.h>
#include <private/debug.h>
#include <private/misc.h>
#include "private/autogen/config.h"
#include "hwloc.h"
#include "hwloc/plugins.h"
#include "private/private.h"
#include "private/debug.h"
#include "private/misc.h"
#include <fcntl.h>
#ifdef HAVE_UNISTD_H
@ -23,6 +23,11 @@
#define close _close
#endif
/**************************************
* Init/Exit and Forced PCI localities
*/
static void
hwloc_pci_forced_locality_parse_one(struct hwloc_topology *topology,
const char *string /* must contain a ' ' */,
@ -109,11 +114,11 @@ hwloc_pci_forced_locality_parse(struct hwloc_topology *topology, const char *_en
void
hwloc_pci_discovery_init(struct hwloc_topology *topology)
{
topology->need_pci_belowroot_apply_locality = 0;
topology->pci_has_forced_locality = 0;
topology->pci_forced_locality_nr = 0;
topology->pci_forced_locality = NULL;
topology->first_pci_locality = topology->last_pci_locality = NULL;
}
void
@ -135,7 +140,7 @@ hwloc_pci_discovery_prepare(struct hwloc_topology *topology)
if (!err) {
if (st.st_size <= 64*1024) { /* random limit large enough to store multiple cpusets for thousands of PUs */
buffer = malloc(st.st_size+1);
if (read(fd, buffer, st.st_size) == st.st_size) {
if (buffer && read(fd, buffer, st.st_size) == st.st_size) {
buffer[st.st_size] = '\0';
hwloc_pci_forced_locality_parse(topology, buffer);
}
@ -152,16 +157,31 @@ hwloc_pci_discovery_prepare(struct hwloc_topology *topology)
}
void
hwloc_pci_discovery_exit(struct hwloc_topology *topology __hwloc_attribute_unused)
hwloc_pci_discovery_exit(struct hwloc_topology *topology)
{
struct hwloc_pci_locality_s *cur;
unsigned i;
for(i=0; i<topology->pci_forced_locality_nr; i++)
hwloc_bitmap_free(topology->pci_forced_locality[i].cpuset);
free(topology->pci_forced_locality);
cur = topology->first_pci_locality;
while (cur) {
struct hwloc_pci_locality_s *next = cur->next;
hwloc_bitmap_free(cur->cpuset);
free(cur);
cur = next;
}
hwloc_pci_discovery_init(topology);
}
/******************************
* Inserting in Tree by Bus ID
*/
#ifdef HWLOC_DEBUG
static void
hwloc_pci_traverse_print_cb(void * cbdata __hwloc_attribute_unused,
@ -324,32 +344,16 @@ hwloc_pcidisc_tree_insert_by_busid(struct hwloc_obj **treep,
hwloc_pci_add_object(NULL /* no parent on top of tree */, treep, obj);
}
int
hwloc_pcidisc_tree_attach(struct hwloc_topology *topology, struct hwloc_obj *old_tree)
/**********************
* Attaching PCI Trees
*/
static struct hwloc_obj *
hwloc_pcidisc_add_hostbridges(struct hwloc_topology *topology,
struct hwloc_obj *old_tree)
{
struct hwloc_obj **next_hb_p;
enum hwloc_type_filter_e bfilter;
if (!old_tree)
/* found nothing, exit */
return 0;
#ifdef HWLOC_DEBUG
hwloc_debug("%s", "\nPCI hierarchy:\n");
hwloc_pci_traverse(NULL, old_tree, hwloc_pci_traverse_print_cb);
hwloc_debug("%s", "\n");
#endif
next_hb_p = &hwloc_get_root_obj(topology)->io_first_child;
while (*next_hb_p)
next_hb_p = &((*next_hb_p)->next_sibling);
bfilter = topology->type_filter[HWLOC_OBJ_BRIDGE];
if (bfilter == HWLOC_TYPE_FILTER_KEEP_NONE) {
*next_hb_p = old_tree;
topology->modified = 1;
goto done;
}
struct hwloc_obj * new = NULL, **newp = &new;
/*
* tree points to all objects connected to any upstream bus in the machine.
@ -358,15 +362,29 @@ hwloc_pcidisc_tree_attach(struct hwloc_topology *topology, struct hwloc_obj *old
*/
while (old_tree) {
/* start a new host bridge */
struct hwloc_obj *hostbridge = hwloc_alloc_setup_object(topology, HWLOC_OBJ_BRIDGE, HWLOC_UNKNOWN_INDEX);
struct hwloc_obj **dstnextp = &hostbridge->io_first_child;
struct hwloc_obj **srcnextp = &old_tree;
struct hwloc_obj *child = *srcnextp;
unsigned short current_domain = child->attr->pcidev.domain;
unsigned char current_bus = child->attr->pcidev.bus;
unsigned char current_subordinate = current_bus;
struct hwloc_obj *hostbridge;
struct hwloc_obj **dstnextp;
struct hwloc_obj **srcnextp;
struct hwloc_obj *child;
unsigned current_domain;
unsigned char current_bus;
unsigned char current_subordinate;
hwloc_debug("Starting new PCI hostbridge %04x:%02x\n", current_domain, current_bus);
hostbridge = hwloc_alloc_setup_object(topology, HWLOC_OBJ_BRIDGE, HWLOC_UNKNOWN_INDEX);
if (!hostbridge) {
/* just queue remaining things without hostbridges and return */
*newp = old_tree;
return new;
}
dstnextp = &hostbridge->io_first_child;
srcnextp = &old_tree;
child = *srcnextp;
current_domain = child->attr->pcidev.domain;
current_bus = child->attr->pcidev.bus;
current_subordinate = current_bus;
hwloc_debug("Adding new PCI hostbridge %04x:%02x\n", current_domain, current_bus);
next_child:
/* remove next child from tree */
@ -395,19 +413,14 @@ hwloc_pcidisc_tree_attach(struct hwloc_topology *topology, struct hwloc_obj *old
hostbridge->attr->bridge.downstream.pci.domain = current_domain;
hostbridge->attr->bridge.downstream.pci.secondary_bus = current_bus;
hostbridge->attr->bridge.downstream.pci.subordinate_bus = current_subordinate;
hwloc_debug("New PCI hostbridge %04x:[%02x-%02x]\n",
hwloc_debug(" new PCI hostbridge covers %04x:[%02x-%02x]\n",
current_domain, current_bus, current_subordinate);
*next_hb_p = hostbridge;
next_hb_p = &hostbridge->next_sibling;
topology->modified = 1; /* needed in case somebody reconnects levels before the core calls hwloc_pci_belowroot_apply_locality()
* or if hwloc_pci_belowroot_apply_locality() keeps hostbridges below root.
*/
*newp = hostbridge;
newp = &hostbridge->next_sibling;
}
done:
topology->need_pci_belowroot_apply_locality = 1;
return 0;
return new;
}
static struct hwloc_obj *
@ -458,6 +471,9 @@ hwloc__pci_find_busid_parent(struct hwloc_topology *topology, struct hwloc_pcide
unsigned i;
int err;
hwloc_debug("Looking for parent of PCI busid %04x:%02x:%02x.%01x\n",
busid->domain, busid->bus, busid->dev, busid->func);
/* try to match a forced locality */
if (topology->pci_has_forced_locality) {
for(i=0; i<topology->pci_forced_locality_nr; i++) {
@ -489,7 +505,7 @@ hwloc__pci_find_busid_parent(struct hwloc_topology *topology, struct hwloc_pcide
}
if (*env) {
/* force the cpuset */
hwloc_debug("Overriding localcpus using %s in the environment\n", envname);
hwloc_debug("Overriding PCI locality using %s in the environment\n", envname);
hwloc_bitmap_sscanf(cpuset, env);
forced = 1;
}
@ -499,7 +515,7 @@ hwloc__pci_find_busid_parent(struct hwloc_topology *topology, struct hwloc_pcide
}
if (!forced) {
/* get the cpuset by asking the OS backend. */
/* get the cpuset by asking the backend that provides the relevant hook, if any. */
struct hwloc_backend *backend = topology->get_pci_busid_cpuset_backend;
if (backend)
err = backend->get_pci_busid_cpuset(backend, busid, cpuset);
@ -510,7 +526,7 @@ hwloc__pci_find_busid_parent(struct hwloc_topology *topology, struct hwloc_pcide
hwloc_bitmap_copy(cpuset, hwloc_topology_get_topology_cpuset(topology));
}
hwloc_debug_bitmap("Attaching PCI tree to cpuset %s\n", cpuset);
hwloc_debug_bitmap(" will attach PCI bus to cpuset %s\n", cpuset);
parent = hwloc_find_insert_io_parent_by_complete_cpuset(topology, cpuset);
if (parent) {
@ -526,11 +542,129 @@ hwloc__pci_find_busid_parent(struct hwloc_topology *topology, struct hwloc_pcide
return parent;
}
int
hwloc_pcidisc_tree_attach(struct hwloc_topology *topology, struct hwloc_obj *tree)
{
enum hwloc_type_filter_e bfilter;
if (!tree)
/* found nothing, exit */
return 0;
#ifdef HWLOC_DEBUG
hwloc_debug("%s", "\nPCI hierarchy:\n");
hwloc_pci_traverse(NULL, tree, hwloc_pci_traverse_print_cb);
hwloc_debug("%s", "\n");
#endif
bfilter = topology->type_filter[HWLOC_OBJ_BRIDGE];
if (bfilter != HWLOC_TYPE_FILTER_KEEP_NONE) {
tree = hwloc_pcidisc_add_hostbridges(topology, tree);
}
while (tree) {
struct hwloc_obj *obj, *pciobj;
struct hwloc_obj *parent;
struct hwloc_pci_locality_s *loc;
unsigned domain, bus_min, bus_max;
obj = tree;
/* hostbridges don't have a PCI busid for looking up locality, use their first child */
if (obj->type == HWLOC_OBJ_BRIDGE && obj->attr->bridge.upstream_type == HWLOC_OBJ_BRIDGE_HOST)
pciobj = obj->io_first_child;
else
pciobj = obj;
/* now we have a pci device or a pci bridge */
assert(pciobj->type == HWLOC_OBJ_PCI_DEVICE
|| (pciobj->type == HWLOC_OBJ_BRIDGE && pciobj->attr->bridge.upstream_type == HWLOC_OBJ_BRIDGE_PCI));
if (obj->type == HWLOC_OBJ_BRIDGE) {
domain = obj->attr->bridge.downstream.pci.domain;
bus_min = obj->attr->bridge.downstream.pci.secondary_bus;
bus_max = obj->attr->bridge.downstream.pci.subordinate_bus;
} else {
domain = pciobj->attr->pcidev.domain;
bus_min = pciobj->attr->pcidev.bus;
bus_max = pciobj->attr->pcidev.bus;
}
/* find where to attach that PCI bus */
parent = hwloc__pci_find_busid_parent(topology, &pciobj->attr->pcidev);
/* reuse the previous locality if possible */
if (topology->last_pci_locality
&& parent == topology->last_pci_locality->parent
&& domain == topology->last_pci_locality->domain
&& (bus_min == topology->last_pci_locality->bus_max
|| bus_min == topology->last_pci_locality->bus_max+1)) {
hwloc_debug(" Reusing PCI locality up to bus %04x:%02x\n",
domain, bus_max);
topology->last_pci_locality->bus_max = bus_max;
goto done;
}
loc = malloc(sizeof(*loc));
if (!loc) {
/* fallback to attaching to root */
parent = hwloc_get_root_obj(topology);
goto done;
}
loc->domain = domain;
loc->bus_min = bus_min;
loc->bus_max = bus_max;
loc->parent = parent;
loc->cpuset = hwloc_bitmap_dup(parent->cpuset);
if (!loc->cpuset) {
/* fallback to attaching to root */
free(loc);
parent = hwloc_get_root_obj(topology);
goto done;
}
hwloc_debug("Adding PCI locality %s P#%u for bus %04x:[%02x:%02x]\n",
hwloc_obj_type_string(parent->type), parent->os_index, loc->domain, loc->bus_min, loc->bus_max);
if (topology->last_pci_locality) {
loc->prev = topology->last_pci_locality;
loc->next = NULL;
topology->last_pci_locality->next = loc;
topology->last_pci_locality = loc;
} else {
loc->prev = NULL;
loc->next = NULL;
topology->first_pci_locality = loc;
topology->last_pci_locality = loc;
}
done:
/* dequeue this object */
tree = obj->next_sibling;
obj->next_sibling = NULL;
hwloc_insert_object_by_parent(topology, parent, obj);
}
return 0;
}
/*********************************
* Finding PCI objects or parents
*/
struct hwloc_obj *
hwloc_pcidisc_find_busid_parent(struct hwloc_topology *topology,
unsigned domain, unsigned bus, unsigned dev, unsigned func)
hwloc_pci_find_parent_by_busid(struct hwloc_topology *topology,
unsigned domain, unsigned bus, unsigned dev, unsigned func)
{
struct hwloc_pcidev_attr_s busid;
hwloc_obj_t parent;
/* try to find that exact busid */
parent = hwloc_pci_find_by_busid(topology, domain, bus, dev, func);
if (parent)
return parent;
/* try to find the locality of that bus instead */
busid.domain = domain;
busid.bus = bus;
busid.dev = dev;
@ -538,66 +672,10 @@ hwloc_pcidisc_find_busid_parent(struct hwloc_topology *topology,
return hwloc__pci_find_busid_parent(topology, &busid);
}
int
hwloc_pci_belowroot_apply_locality(struct hwloc_topology *topology)
{
struct hwloc_obj *root = hwloc_get_root_obj(topology);
struct hwloc_obj **listp, *obj;
if (!topology->need_pci_belowroot_apply_locality)
return 0;
topology->need_pci_belowroot_apply_locality = 0;
/* root->io_first_child contains some PCI hierarchies, and maybe some non-PCI things.
* insert the PCI trees according to their PCI-locality.
*/
listp = &root->io_first_child;
while ((obj = *listp) != NULL) {
struct hwloc_pcidev_attr_s *busid;
struct hwloc_obj *parent;
/* skip non-PCI objects */
if (obj->type != HWLOC_OBJ_PCI_DEVICE
&& !(obj->type == HWLOC_OBJ_BRIDGE && obj->attr->bridge.downstream_type == HWLOC_OBJ_BRIDGE_PCI)
&& !(obj->type == HWLOC_OBJ_BRIDGE && obj->attr->bridge.upstream_type == HWLOC_OBJ_BRIDGE_PCI)) {
listp = &obj->next_sibling;
continue;
}
if (obj->type == HWLOC_OBJ_PCI_DEVICE
|| (obj->type == HWLOC_OBJ_BRIDGE
&& obj->attr->bridge.upstream_type == HWLOC_OBJ_BRIDGE_PCI))
busid = &obj->attr->pcidev;
else {
/* hostbridges don't have a PCI busid for looking up locality, use their first child if PCI */
hwloc_obj_t child = obj->io_first_child;
if (child && (child->type == HWLOC_OBJ_PCI_DEVICE
|| (child->type == HWLOC_OBJ_BRIDGE
&& child->attr->bridge.upstream_type == HWLOC_OBJ_BRIDGE_PCI)))
busid = &obj->io_first_child->attr->pcidev;
else
continue;
}
/* attach the object (and children) where it belongs */
parent = hwloc__pci_find_busid_parent(topology, busid);
if (parent == root) {
/* keep this object here */
listp = &obj->next_sibling;
} else {
/* dequeue this object */
*listp = obj->next_sibling;
obj->next_sibling = NULL;
hwloc_insert_object_by_parent(topology, parent, obj);
}
}
return 0;
}
/* return the smallest object that contains the desired busid */
static struct hwloc_obj *
hwloc__pci_belowroot_find_by_busid(hwloc_obj_t parent,
unsigned domain, unsigned bus, unsigned dev, unsigned func)
hwloc__pci_find_by_busid(hwloc_obj_t parent,
unsigned domain, unsigned bus, unsigned dev, unsigned func)
{
hwloc_obj_t child;
@ -622,7 +700,7 @@ hwloc__pci_belowroot_find_by_busid(hwloc_obj_t parent,
&& child->attr->bridge.downstream.pci.secondary_bus <= bus
&& child->attr->bridge.downstream.pci.subordinate_bus >= bus)
/* not the right bus id, but it's included in the bus below that bridge */
return hwloc__pci_belowroot_find_by_busid(child, domain, bus, dev, func);
return hwloc__pci_find_by_busid(child, domain, bus, dev, func);
} else if (child->type == HWLOC_OBJ_BRIDGE
&& child->attr->bridge.upstream_type != HWLOC_OBJ_BRIDGE_PCI
@ -632,7 +710,7 @@ hwloc__pci_belowroot_find_by_busid(hwloc_obj_t parent,
&& child->attr->bridge.downstream.pci.secondary_bus <= bus
&& child->attr->bridge.downstream.pci.subordinate_bus >= bus) {
/* contains our bus, recurse */
return hwloc__pci_belowroot_find_by_busid(child, domain, bus, dev, func);
return hwloc__pci_find_by_busid(child, domain, bus, dev, func);
}
}
/* didn't find anything, return parent */
@ -640,17 +718,54 @@ hwloc__pci_belowroot_find_by_busid(hwloc_obj_t parent,
}
struct hwloc_obj *
hwloc_pcidisc_find_by_busid(struct hwloc_topology *topology,
unsigned domain, unsigned bus, unsigned dev, unsigned func)
hwloc_pci_find_by_busid(struct hwloc_topology *topology,
unsigned domain, unsigned bus, unsigned dev, unsigned func)
{
struct hwloc_pci_locality_s *loc;
hwloc_obj_t root = hwloc_get_root_obj(topology);
hwloc_obj_t parent = hwloc__pci_belowroot_find_by_busid(root, domain, bus, dev, func);
if (parent == root)
hwloc_obj_t parent = NULL;
hwloc_debug("pcidisc looking for bus id %04x:%02x:%02x.%01x\n", domain, bus, dev, func);
loc = topology->first_pci_locality;
while (loc) {
if (loc->domain == domain && loc->bus_min <= bus && loc->bus_max >= bus) {
parent = loc->parent;
assert(parent);
hwloc_debug(" found pci locality for %04x:[%02x:%02x]\n",
loc->domain, loc->bus_min, loc->bus_max);
break;
}
loc = loc->next;
}
/* if we failed to insert localities, look at root too */
if (!parent)
parent = root;
hwloc_debug(" looking for bus %04x:%02x:%02x.%01x below %s P#%u\n",
domain, bus, dev, func,
hwloc_obj_type_string(parent->type), parent->os_index);
parent = hwloc__pci_find_by_busid(parent, domain, bus, dev, func);
if (parent == root) {
hwloc_debug(" found nothing better than root object, ignoring\n");
return NULL;
else
} else {
if (parent->type == HWLOC_OBJ_PCI_DEVICE
|| (parent->type == HWLOC_OBJ_BRIDGE && parent->attr->bridge.upstream_type == HWLOC_OBJ_BRIDGE_PCI))
hwloc_debug(" found busid %04x:%02x:%02x.%01x\n",
parent->attr->pcidev.domain, parent->attr->pcidev.bus,
parent->attr->pcidev.dev, parent->attr->pcidev.func);
else
hwloc_debug(" found parent %s P#%u\n",
hwloc_obj_type_string(parent->type), parent->os_index);
return parent;
}
}
/*******************************
* Parsing the PCI Config Space
*/
#define HWLOC_PCI_STATUS 0x06
#define HWLOC_PCI_STATUS_CAP_LIST 0x10
#define HWLOC_PCI_CAPABILITY_LIST 0x34
@ -703,13 +818,14 @@ hwloc_pcidisc_find_linkspeed(const unsigned char *config,
* PCIe Gen2 = 5 GT/s signal-rate per lane with 8/10 encoding = 0.5 GB/s data-rate per lane
* PCIe Gen3 = 8 GT/s signal-rate per lane with 128/130 encoding = 1 GB/s data-rate per lane
* PCIe Gen4 = 16 GT/s signal-rate per lane with 128/130 encoding = 2 GB/s data-rate per lane
* PCIe Gen5 = 32 GT/s signal-rate per lane with 128/130 encoding = 4 GB/s data-rate per lane
*/
/* lanespeed in Gbit/s */
if (speed <= 2)
lanespeed = 2.5f * speed * 0.8f;
else
lanespeed = 8.0f * (1<<(speed-3)) * 128/130; /* assume Gen5 will be 32 GT/s and so on */
lanespeed = 8.0f * (1<<(speed-3)) * 128/130; /* assume Gen6 will be 64 GT/s and so on */
/* linkspeed in GB/s */
*linkspeed = lanespeed * width / 8;
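/* Worked examples for the formula above (rounded; lanespeed in Gbit/s, linkspeed in GB/s):
 *   Gen1 x16: lanespeed = 2.5*1*0.8   =  2.00  ->  linkspeed = 2.00*16/8  =  4.0
 *   Gen3 x16: lanespeed = 8*1*128/130 ~  7.88  ->  linkspeed ~ 7.88*16/8  ~ 15.75
 *   Gen4 x16: lanespeed = 8*2*128/130 ~ 15.75  ->  linkspeed ~ 15.75*16/8 ~ 31.5
 */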
@ -738,30 +854,27 @@ hwloc_pcidisc_check_bridge_type(unsigned device_class, const unsigned char *conf
#define HWLOC_PCI_SUBORDINATE_BUS 0x1a
int
hwloc_pcidisc_setup_bridge_attr(hwloc_obj_t obj,
hwloc_pcidisc_find_bridge_buses(unsigned domain, unsigned bus, unsigned dev, unsigned func,
unsigned *secondary_busp, unsigned *subordinate_busp,
const unsigned char *config)
{
struct hwloc_bridge_attr_s *battr = &obj->attr->bridge;
struct hwloc_pcidev_attr_s *pattr = &battr->upstream.pci;
unsigned secondary_bus, subordinate_bus;
if (config[HWLOC_PCI_PRIMARY_BUS] != pattr->bus) {
if (config[HWLOC_PCI_PRIMARY_BUS] != bus) {
/* Sometimes the config space contains 00 instead of the actual primary bus number.
* Always trust the bus ID because it was built by the system which has more information
* to workaround such problems (e.g. ACPI information about PCI parent/children).
*/
hwloc_debug(" %04x:%02x:%02x.%01x bridge with (ignored) invalid PCI_PRIMARY_BUS %02x\n",
pattr->domain, pattr->bus, pattr->dev, pattr->func, config[HWLOC_PCI_PRIMARY_BUS]);
domain, bus, dev, func, config[HWLOC_PCI_PRIMARY_BUS]);
}
battr->upstream_type = HWLOC_OBJ_BRIDGE_PCI;
battr->downstream_type = HWLOC_OBJ_BRIDGE_PCI;
battr->downstream.pci.domain = pattr->domain;
battr->downstream.pci.secondary_bus = config[HWLOC_PCI_SECONDARY_BUS];
battr->downstream.pci.subordinate_bus = config[HWLOC_PCI_SUBORDINATE_BUS];
secondary_bus = config[HWLOC_PCI_SECONDARY_BUS];
subordinate_bus = config[HWLOC_PCI_SUBORDINATE_BUS];
if (battr->downstream.pci.secondary_bus <= pattr->bus
|| battr->downstream.pci.subordinate_bus <= pattr->bus
|| battr->downstream.pci.secondary_bus > battr->downstream.pci.subordinate_bus) {
if (secondary_bus <= bus
|| subordinate_bus <= bus
|| secondary_bus > subordinate_bus) {
/* This should catch most cases of invalid bridge information
* (e.g. 00 for secondary and subordinate).
* Ideally we would also check that [secondary-subordinate] is included
@ -769,15 +882,21 @@ hwloc_pcidisc_setup_bridge_attr(hwloc_obj_t obj,
* because objects may be discovered out of order (especially in the fsroot case).
*/
hwloc_debug(" %04x:%02x:%02x.%01x bridge has invalid secondary-subordinate buses [%02x-%02x]\n",
pattr->domain, pattr->bus, pattr->dev, pattr->func,
battr->downstream.pci.secondary_bus, battr->downstream.pci.subordinate_bus);
hwloc_free_unlinked_object(obj);
domain, bus, dev, func,
secondary_bus, subordinate_bus);
return -1;
}
*secondary_busp = secondary_bus;
*subordinate_busp = subordinate_bus;
return 0;
}
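/* Illustrative usage sketch (hypothetical caller: assumes domain/bus/dev/func and a
 * config[] buffer filled from the device's config space by the enclosing discovery loop): */
#if 0
{
  unsigned secondary, subordinate;
  if (hwloc_pcidisc_find_bridge_buses(domain, bus, dev, func,
                                      &secondary, &subordinate, config) < 0) {
    /* inconsistent secondary/subordinate info: skip this bridge */
  } else {
    /* record [secondary-subordinate] as the bridge's downstream bus range */
  }
}
#endif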
/****************
* Class Strings
*/
const char *
hwloc_pci_class_string(unsigned short class_id)
{


@ -1,12 +1,12 @@
/*
* Copyright © 2017-2018 Inria. All rights reserved.
* Copyright © 2017-2019 Inria. All rights reserved.
* See COPYING in top-level directory.
*/
#include <private/autogen/config.h>
#include <hwloc.h>
#include <hwloc/shmem.h>
#include <private/private.h>
#include "private/autogen/config.h"
#include "hwloc.h"
#include "hwloc/shmem.h"
#include "private/private.h"
#ifndef HWLOC_WIN_SYS
@ -214,6 +214,8 @@ hwloc_shmem_topology_adopt(hwloc_topology_t *topologyp,
new->support.discovery = malloc(sizeof(*new->support.discovery));
new->support.cpubind = malloc(sizeof(*new->support.cpubind));
new->support.membind = malloc(sizeof(*new->support.membind));
if (!new->support.discovery || !new->support.cpubind || !new->support.membind)
goto out_with_support;
memcpy(new->support.discovery, old->support.discovery, sizeof(*new->support.discovery));
memcpy(new->support.cpubind, old->support.cpubind, sizeof(*new->support.cpubind));
memcpy(new->support.membind, old->support.membind, sizeof(*new->support.membind));
@ -230,6 +232,11 @@ hwloc_shmem_topology_adopt(hwloc_topology_t *topologyp,
*topologyp = new;
return 0;
out_with_support:
free(new->support.discovery);
free(new->support.cpubind);
free(new->support.membind);
free(new);
out_with_components:
hwloc_components_fini();
out_with_mmap:


@ -1,45 +1,60 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2017 Inria. All rights reserved.
* Copyright © 2009-2012 Université Bordeaux
* Copyright © 2009-2019 Inria. All rights reserved.
* Copyright © 2009-2012, 2020 Université Bordeaux
* Copyright © 2009-2011 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
*/
#include <private/autogen/config.h>
#include <hwloc.h>
#include <private/private.h>
#include "private/autogen/config.h"
#include "hwloc.h"
#include "private/private.h"
static int
hwloc_look_noos(struct hwloc_backend *backend)
hwloc_look_noos(struct hwloc_backend *backend, struct hwloc_disc_status *dstatus)
{
/*
* This backend uses the underlying OS.
* However we don't enforce topology->is_thissystem so that
* we may still force the use of this backend when debugging with !thissystem.
*/
struct hwloc_topology *topology = backend->topology;
int nbprocs;
int64_t memsize;
if (topology->levels[0][0]->cpuset)
/* somebody discovered things */
return -1;
assert(dstatus->phase == HWLOC_DISC_PHASE_CPU);
nbprocs = hwloc_fallback_nbprocessors(topology);
if (nbprocs >= 1)
topology->support.discovery->pu = 1;
else
nbprocs = 1;
if (!topology->levels[0][0]->cpuset) {
int nbprocs;
/* Nobody (even the x86 backend) created objects yet, setup basic objects */
nbprocs = hwloc_fallback_nbprocessors(0);
if (nbprocs >= 1)
topology->support.discovery->pu = 1;
else
nbprocs = 1;
hwloc_alloc_root_sets(topology->levels[0][0]);
hwloc_setup_pu_level(topology, nbprocs);
}
memsize = hwloc_fallback_memsize();
if (memsize > 0)
topology->machine_memory.local_memory = memsize;
hwloc_alloc_root_sets(topology->levels[0][0]);
hwloc_setup_pu_level(topology, nbprocs);
hwloc_add_uname_info(topology, NULL);
return 0;
}
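/* Note (assumption, for illustration only): on most POSIX systems a fallback CPU count
 * like the one used above typically reduces to sysconf(_SC_NPROCESSORS_ONLN).
 * Minimal sketch of such a helper: */
#if 0
#include <unistd.h>
static int example_fallback_nbprocessors(void)
{
  long n = sysconf(_SC_NPROCESSORS_ONLN);  /* online processors */
  return n >= 1 ? (int)n : 1;              /* mirror the "at least 1 PU" behaviour above */
}
#endif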
static struct hwloc_backend *
hwloc_noos_component_instantiate(struct hwloc_disc_component *component,
hwloc_noos_component_instantiate(struct hwloc_topology *topology,
struct hwloc_disc_component *component,
unsigned excluded_phases __hwloc_attribute_unused,
const void *_data1 __hwloc_attribute_unused,
const void *_data2 __hwloc_attribute_unused,
const void *_data3 __hwloc_attribute_unused)
{
struct hwloc_backend *backend;
backend = hwloc_backend_alloc(component);
backend = hwloc_backend_alloc(topology, component);
if (!backend)
return NULL;
backend->discover = hwloc_look_noos;
@ -47,9 +62,9 @@ hwloc_noos_component_instantiate(struct hwloc_disc_component *component,
}
static struct hwloc_disc_component hwloc_noos_disc_component = {
HWLOC_DISC_COMPONENT_TYPE_CPU,
"no_os",
HWLOC_DISC_COMPONENT_TYPE_GLOBAL,
HWLOC_DISC_PHASE_CPU,
HWLOC_DISC_PHASE_GLOBAL,
hwloc_noos_component_instantiate,
40, /* lower than native OS component, higher than globals */
1,


@ -6,11 +6,11 @@
* See COPYING in top-level directory.
*/
#include <private/autogen/config.h>
#include <hwloc.h>
#include <private/private.h>
#include <private/misc.h>
#include <private/debug.h>
#include "private/autogen/config.h"
#include "hwloc.h"
#include "private/private.h"
#include "private/misc.h"
#include "private/debug.h"
#include <limits.h>
#include <assert.h>
@ -122,6 +122,7 @@ hwloc_synthetic_process_indexes(struct hwloc_synthetic_backend_data_s *data,
unsigned long nbs = 1;
unsigned j, mul;
const char *tmp;
struct hwloc_synthetic_intlv_loop_s *loops;
tmp = attr;
while (tmp) {
@ -132,9 +133,10 @@ hwloc_synthetic_process_indexes(struct hwloc_synthetic_backend_data_s *data,
tmp++;
}
{
/* nr_loops colon-separated fields, but we may need one more at the end */
HWLOC_VLA(struct hwloc_synthetic_intlv_loop_s, loops, nr_loops+1);
loops = malloc((nr_loops+1) * sizeof(*loops));
if (!loops)
goto out_with_array;
if (*attr >= '0' && *attr <= '9') {
/* interleaving as x*y:z*t:... */
@ -148,11 +150,13 @@ hwloc_synthetic_process_indexes(struct hwloc_synthetic_backend_data_s *data,
if (tmp2 == tmp || *tmp2 != '*') {
if (verbose)
fprintf(stderr, "Failed to read synthetic index interleaving loop '%s' without number before '*'\n", tmp);
free(loops);
goto out_with_array;
}
if (!step) {
if (verbose)
fprintf(stderr, "Invalid interleaving loop with step 0 at '%s'\n", tmp);
free(loops);
goto out_with_array;
}
tmp2++;
@ -160,11 +164,13 @@ hwloc_synthetic_process_indexes(struct hwloc_synthetic_backend_data_s *data,
if (tmp3 == tmp2 || (*tmp3 && *tmp3 != ':' && *tmp3 != ')' && *tmp3 != ' ')) {
if (verbose)
fprintf(stderr, "Failed to read synthetic index interleaving loop '%s' without number between '*' and ':'\n", tmp);
free(loops);
goto out_with_array;
}
if (!nb) {
if (verbose)
fprintf(stderr, "Invalid interleaving loop with number 0 at '%s'\n", tmp2);
free(loops);
goto out_with_array;
}
loops[cur_loop].step = step;
@ -192,11 +198,13 @@ hwloc_synthetic_process_indexes(struct hwloc_synthetic_backend_data_s *data,
if (err < 0) {
if (verbose)
fprintf(stderr, "Failed to read synthetic index interleaving loop type '%s'\n", tmp);
free(loops);
goto out_with_array;
}
if (type == HWLOC_OBJ_MISC || type == HWLOC_OBJ_BRIDGE || type == HWLOC_OBJ_PCI_DEVICE || type == HWLOC_OBJ_OS_DEVICE) {
if (verbose)
fprintf(stderr, "Misc object type disallowed in synthetic index interleaving loop type '%s'\n", tmp);
free(loops);
goto out_with_array;
}
for(i=0; ; i++) {
@ -217,6 +225,7 @@ hwloc_synthetic_process_indexes(struct hwloc_synthetic_backend_data_s *data,
if (verbose)
fprintf(stderr, "Failed to find level for synthetic index interleaving loop type '%s'\n",
tmp);
free(loops);
goto out_with_array;
}
tmp = strchr(tmp, ':');
@ -235,6 +244,7 @@ hwloc_synthetic_process_indexes(struct hwloc_synthetic_backend_data_s *data,
if (loops[i].level_depth == mydepth && i != cur_loop) {
if (verbose)
fprintf(stderr, "Invalid duplicate interleaving loop type in synthetic index '%s'\n", attr);
free(loops);
goto out_with_array;
}
if (loops[i].level_depth < mydepth
@ -264,6 +274,7 @@ hwloc_synthetic_process_indexes(struct hwloc_synthetic_backend_data_s *data,
} else {
if (verbose)
fprintf(stderr, "Invalid index interleaving total width %lu instead of %lu\n", nbs, total);
free(loops);
goto out_with_array;
}
}
@ -278,6 +289,8 @@ hwloc_synthetic_process_indexes(struct hwloc_synthetic_backend_data_s *data,
mul *= nb;
}
free(loops);
/* check that we have the right values (cannot pass total, cannot give duplicate 0) */
for(j=0; j<total; j++) {
if (array[j] >= total) {
@ -293,7 +306,6 @@ hwloc_synthetic_process_indexes(struct hwloc_synthetic_backend_data_s *data,
}
indexes->array = array;
}
}
return;
@ -527,7 +539,8 @@ hwloc_backend_synthetic_init(struct hwloc_synthetic_backend_data_s *data,
if (*pos < '0' || *pos > '9') {
if (hwloc_type_sscanf(pos, &type, &attrs, sizeof(attrs)) < 0) {
if (!strncmp(pos, "Die", 3) || !strncmp(pos, "Tile", 4) || !strncmp(pos, "Module", 6)) {
if (!strncmp(pos, "Tile", 4) || !strncmp(pos, "Module", 6)) {
/* possible future types */
type = HWLOC_OBJ_GROUP;
} else {
/* FIXME: allow generic "Cache" string? would require to deal with possibly duplicate cache levels */
@ -645,6 +658,12 @@ hwloc_backend_synthetic_init(struct hwloc_synthetic_backend_data_s *data,
errno = EINVAL;
return -1;
}
if (type_count[HWLOC_OBJ_DIE] > 1) {
if (verbose)
fprintf(stderr, "Synthetic string cannot have several die levels\n");
errno = EINVAL;
return -1;
}
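/* Example (assumed syntax, for illustration): a synthetic description with a single die
 * level such as "Package:2 Die:2 Core:4 PU:2" is accepted; declaring "Die" at two
 * different levels is what the check above rejects. */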
if (type_count[HWLOC_OBJ_NUMANODE] > 1) {
if (verbose)
fprintf(stderr, "Synthetic string cannot have several NUMA node levels\n");
@ -829,6 +848,7 @@ hwloc_synthetic_set_attr(struct hwloc_synthetic_attr_s *sattr,
obj->attr->numanode.page_types[0].count = sattr->memorysize / 4096;
break;
case HWLOC_OBJ_PACKAGE:
case HWLOC_OBJ_DIE:
break;
case HWLOC_OBJ_L1CACHE:
case HWLOC_OBJ_L2CACHE:
@ -953,13 +973,19 @@ hwloc__look_synthetic(struct hwloc_topology *topology,
}
static int
hwloc_look_synthetic(struct hwloc_backend *backend)
hwloc_look_synthetic(struct hwloc_backend *backend, struct hwloc_disc_status *dstatus)
{
/*
* This backend enforces !topology->is_thissystem by default.
*/
struct hwloc_topology *topology = backend->topology;
struct hwloc_synthetic_backend_data_s *data = backend->private_data;
hwloc_bitmap_t cpuset = hwloc_bitmap_alloc();
unsigned i;
assert(dstatus->phase == HWLOC_DISC_PHASE_GLOBAL);
assert(!topology->levels[0][0]->cpuset);
hwloc_alloc_root_sets(topology->levels[0][0]);
@ -1001,7 +1027,9 @@ hwloc_synthetic_backend_disable(struct hwloc_backend *backend)
}
static struct hwloc_backend *
hwloc_synthetic_component_instantiate(struct hwloc_disc_component *component,
hwloc_synthetic_component_instantiate(struct hwloc_topology *topology,
struct hwloc_disc_component *component,
unsigned excluded_phases __hwloc_attribute_unused,
const void *_data1,
const void *_data2 __hwloc_attribute_unused,
const void *_data3 __hwloc_attribute_unused)
@ -1021,7 +1049,7 @@ hwloc_synthetic_component_instantiate(struct hwloc_disc_component *component,
}
}
backend = hwloc_backend_alloc(component);
backend = hwloc_backend_alloc(topology, component);
if (!backend)
goto out;
@ -1051,8 +1079,8 @@ hwloc_synthetic_component_instantiate(struct hwloc_disc_component *component,
}
static struct hwloc_disc_component hwloc_synthetic_disc_component = {
HWLOC_DISC_COMPONENT_TYPE_GLOBAL,
"synthetic",
HWLOC_DISC_PHASE_GLOBAL,
~0,
hwloc_synthetic_component_instantiate,
30,
@ -1267,6 +1295,12 @@ hwloc__export_synthetic_obj(struct hwloc_topology * topology, unsigned long flag
/* if exporting to v1 or without extended-types, use all-v1-compatible Socket name */
res = hwloc_snprintf(tmp, tmplen, "Socket%s", aritys);
} else if (obj->type == HWLOC_OBJ_DIE
&& (flags & (HWLOC_TOPOLOGY_EXPORT_SYNTHETIC_FLAG_NO_EXTENDED_TYPES
|HWLOC_TOPOLOGY_EXPORT_SYNTHETIC_FLAG_V1))) {
/* if exporting to v1 or without extended-types, use all-v1-compatible Group name */
res = hwloc_snprintf(tmp, tmplen, "Group%s", aritys);
} else if (obj->type == HWLOC_OBJ_GROUP /* don't export group depth */
|| flags & HWLOC_TOPOLOGY_EXPORT_SYNTHETIC_FLAG_NO_EXTENDED_TYPES) {
res = hwloc_snprintf(tmp, tmplen, "%s%s", hwloc_obj_type_string(obj->type), aritys);
@ -1323,16 +1357,26 @@ hwloc__export_synthetic_memory_children(struct hwloc_topology * topology, unsign
}
while (mchild) {
/* v2: export all NUMA children */
assert(mchild->type == HWLOC_OBJ_NUMANODE); /* only NUMA node memory children for now */
/* FIXME: really recurse to export memcaches and numanode,
* but it requires clever parsing of [ memcache [numa] [numa] ] during import,
* better attaching of things to describe the hierarchy.
*/
hwloc_obj_t numanode = mchild;
/* only export the first NUMA node leaf of each memory child
* FIXME: This assumes memory-side caches (mscaches) aren't shared between nodes, which is true on current platforms
*/
while (numanode && numanode->type != HWLOC_OBJ_NUMANODE) {
assert(numanode->arity == 1);
numanode = numanode->memory_first_child;
}
assert(numanode); /* there's always a numanode at the bottom of the memory tree */
if (needprefix)
hwloc__export_synthetic_add_char(&ret, &tmp, &tmplen, ' ');
hwloc__export_synthetic_add_char(&ret, &tmp, &tmplen, '[');
res = hwloc__export_synthetic_obj(topology, flags, mchild, (unsigned)-1, tmp, tmplen);
res = hwloc__export_synthetic_obj(topology, flags, numanode, (unsigned)-1, tmp, tmplen);
if (hwloc__export_synthetic_update_status(&ret, &tmp, &tmplen, res) < 0)
return -1;
@ -1366,9 +1410,8 @@ hwloc_check_memory_symmetric(struct hwloc_topology * topology)
assert(node);
first_parent = node->parent;
assert(hwloc__obj_type_is_normal(first_parent->type)); /* only depth-1 memory children for now */
/* check whether all object on parent's level have same number of NUMA children */
/* check whether all objects on the parent's level have the same number of NUMA bits */
for(i=0; i<hwloc_get_nbobjs_by_depth(topology, first_parent->depth); i++) {
hwloc_obj_t parent, mchild;
@ -1379,10 +1422,9 @@ hwloc_check_memory_symmetric(struct hwloc_topology * topology)
if (parent->memory_arity != first_parent->memory_arity)
goto out_with_bitmap;
/* clear these NUMA children from remaining_nodes */
/* clear children NUMA bits from remaining_nodes */
mchild = parent->memory_first_child;
while (mchild) {
assert(mchild->type == HWLOC_OBJ_NUMANODE); /* only NUMA node memory children for now */
hwloc_bitmap_clr(remaining_nodes, mchild->os_index); /* cannot use parent->nodeset, some normal children may have other NUMA nodes */
mchild = mchild->next_sibling;
}
@ -1461,6 +1503,7 @@ hwloc_topology_export_synthetic(struct hwloc_topology * topology,
signed pdepth;
node = hwloc_get_obj_by_type(topology, HWLOC_OBJ_NUMANODE, 0);
assert(node);
assert(hwloc__obj_type_is_normal(node->parent->type)); /* only depth-1 memory children for now */
pdepth = node->parent->depth;


@ -1,7 +1,7 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2018 Inria. All rights reserved.
* Copyright © 2009-2012 Université Bordeaux
* Copyright © 2009-2020 Inria. All rights reserved.
* Copyright © 2009-2012, 2020 Université Bordeaux
* Copyright © 2011 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
*/
@ -9,10 +9,10 @@
/* To try to get all declarations duplicated below. */
#define _WIN32_WINNT 0x0601
#include <private/autogen/config.h>
#include <hwloc.h>
#include <private/private.h>
#include <private/debug.h>
#include "private/autogen/config.h"
#include "hwloc.h"
#include "private/private.h"
#include "private/debug.h"
#include <windows.h>
@ -232,6 +232,10 @@ static void hwloc_win_get_function_ptrs(void)
{
HMODULE kernel32;
#if HWLOC_HAVE_GCC_W_CAST_FUNCTION_TYPE
#pragma GCC diagnostic ignored "-Wcast-function-type"
#endif
kernel32 = LoadLibrary("kernel32.dll");
if (kernel32) {
GetActiveProcessorGroupCountProc =
@ -270,6 +274,10 @@ static void hwloc_win_get_function_ptrs(void)
if (psapi)
QueryWorkingSetExProc = (PFN_QUERYWORKINGSETEX) GetProcAddress(psapi, "QueryWorkingSetEx");
}
#if HWLOC_HAVE_GCC_W_CAST_FUNCTION_TYPE
#pragma GCC diagnostic warning "-Wcast-function-type"
#endif
}
/*
@ -731,8 +739,14 @@ hwloc_win_get_area_memlocation(hwloc_topology_t topology __hwloc_attribute_unuse
*/
static int
hwloc_look_windows(struct hwloc_backend *backend)
hwloc_look_windows(struct hwloc_backend *backend, struct hwloc_disc_status *dstatus)
{
/*
* This backend uses the underlying OS.
* However we don't enforce topology->is_thissystem so that
* we may still force the use of this backend when debugging with !thissystem.
*/
struct hwloc_topology *topology = backend->topology;
hwloc_bitmap_t groups_pu_set = NULL;
SYSTEM_INFO SystemInfo;
@ -740,6 +754,8 @@ hwloc_look_windows(struct hwloc_backend *backend)
int gotnuma = 0;
int gotnumamemory = 0;
assert(dstatus->phase == HWLOC_DISC_PHASE_CPU);
if (topology->levels[0][0]->cpuset)
/* somebody discovered things */
return -1;
@ -1136,13 +1152,15 @@ static void hwloc_windows_component_finalize(unsigned long flags __hwloc_attribu
}
static struct hwloc_backend *
hwloc_windows_component_instantiate(struct hwloc_disc_component *component,
hwloc_windows_component_instantiate(struct hwloc_topology *topology,
struct hwloc_disc_component *component,
unsigned excluded_phases __hwloc_attribute_unused,
const void *_data1 __hwloc_attribute_unused,
const void *_data2 __hwloc_attribute_unused,
const void *_data3 __hwloc_attribute_unused)
{
struct hwloc_backend *backend;
backend = hwloc_backend_alloc(component);
backend = hwloc_backend_alloc(topology, component);
if (!backend)
return NULL;
backend->discover = hwloc_look_windows;
@ -1150,9 +1168,9 @@ hwloc_windows_component_instantiate(struct hwloc_disc_component *component,
}
static struct hwloc_disc_component hwloc_windows_disc_component = {
HWLOC_DISC_COMPONENT_TYPE_CPU,
"windows",
HWLOC_DISC_COMPONENT_TYPE_GLOBAL,
HWLOC_DISC_PHASE_CPU,
HWLOC_DISC_PHASE_GLOBAL,
hwloc_windows_component_instantiate,
50,
1,
@ -1168,10 +1186,12 @@ const struct hwloc_component hwloc_windows_component = {
};
int
hwloc_fallback_nbprocessors(struct hwloc_topology *topology __hwloc_attribute_unused) {
hwloc_fallback_nbprocessors(unsigned flags __hwloc_attribute_unused) {
int n;
SYSTEM_INFO sysinfo;
/* TODO handle flags & HWLOC_FALLBACK_NBPROCESSORS_INCLUDE_OFFLINE */
/* by default, ignore groups (return only the number in the current group) */
GetSystemInfo(&sysinfo);
n = sysinfo.dwNumberOfProcessors; /* FIXME could be non-contigous, rather return a mask from dwActiveProcessorMask? */
@ -1187,3 +1207,9 @@ hwloc_fallback_nbprocessors(struct hwloc_topology *topology __hwloc_attribute_un
return n;
}
int64_t
hwloc_fallback_memsize(void) {
/* Unused */
return -1;
}

File diff suppressed because it is too large


@ -1,18 +1,18 @@
/*
* Copyright © 2009 CNRS
* Copyright © 2009-2018 Inria. All rights reserved.
* Copyright © 2009-2020 Inria. All rights reserved.
* Copyright © 2009-2011 Université Bordeaux
* Copyright © 2009-2011 Cisco Systems, Inc. All rights reserved.
* See COPYING in top-level directory.
*/
#include <private/autogen/config.h>
#include <hwloc.h>
#include <hwloc/plugins.h>
#include <private/private.h>
#include <private/misc.h>
#include <private/xml.h>
#include <private/debug.h>
#include "private/autogen/config.h"
#include "hwloc.h"
#include "hwloc/plugins.h"
#include "private/private.h"
#include "private/misc.h"
#include "private/xml.h"
#include "private/debug.h"
#include <string.h>
#include <assert.h>
@ -27,15 +27,14 @@
*******************/
struct hwloc__nolibxml_backend_data_s {
size_t buflen; /* size of both buffer and copy buffers, set during backend_init() */
size_t buflen; /* size of the buffer, set during backend_init() */
char *buffer; /* allocated and filled during backend_init() */
char *copy; /* allocated during backend_init(), used later during actual parsing */
};
typedef struct hwloc__nolibxml_import_state_data_s {
char *tagbuffer; /* buffer containing the next tag */
char *attrbuffer; /* buffer containing the next attribute of the current node */
char *tagname; /* tag name of the current node */
const char *tagname; /* tag name of the current node */
int closed; /* set if the current node is auto-closing */
} __hwloc_attribute_may_alias * hwloc__nolibxml_import_state_data_t;
@ -138,7 +137,7 @@ hwloc__nolibxml_import_find_child(hwloc__xml_import_state_t state,
return 0;
/* normal tag */
tag = nchildstate->tagname = buffer;
nchildstate->tagname = tag = buffer;
/* find the end, mark it and return it */
end = strchr(buffer, '>');
@ -260,14 +259,11 @@ hwloc_nolibxml_look_init(struct hwloc_xml_backend_data_s *bdata,
struct hwloc__nolibxml_backend_data_s *nbdata = bdata->data;
unsigned major, minor;
char *end;
char *buffer;
char *buffer = nbdata->buffer;
const char *tagname;
HWLOC_BUILD_ASSERT(sizeof(*nstate) <= sizeof(state->data));
/* use a copy in the temporary buffer, we may modify during parsing */
buffer = nbdata->copy;
memcpy(buffer, nbdata->buffer, nbdata->buflen);
/* skip headers */
while (!strncmp(buffer, "<?xml ", 6) || !strncmp(buffer, "<!DOCTYPE ", 10)) {
buffer = strchr(buffer, '\n');
@ -281,14 +277,17 @@ hwloc_nolibxml_look_init(struct hwloc_xml_backend_data_s *bdata,
bdata->version_major = major;
bdata->version_minor = minor;
end = strchr(buffer, '>') + 1;
tagname = "topology";
} else if (!strncmp(buffer, "<topology>", 10)) {
bdata->version_major = 1;
bdata->version_minor = 0;
end = buffer + 10;
tagname = "topology";
} else if (!strncmp(buffer, "<root>", 6)) {
bdata->version_major = 0;
bdata->version_minor = 9;
end = buffer + 6;
tagname = "root";
} else
goto failed;
@ -301,7 +300,7 @@ hwloc_nolibxml_look_init(struct hwloc_xml_backend_data_s *bdata,
state->parent = NULL;
nstate->closed = 0;
nstate->tagbuffer = end;
nstate->tagname = (char *) "topology";
nstate->tagname = tagname;
nstate->attrbuffer = NULL;
return 0; /* success */
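/* Illustration (assumed from the checks above): the three accepted document openings are
 *   <?xml ...?> <topology version="2.0">   -> version_major/minor taken from the attribute
 *   <topology>                             -> treated as 1.0
 *   <root>                                 -> treated as 0.9
 */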
@ -320,10 +319,6 @@ hwloc_nolibxml_free_buffers(struct hwloc_xml_backend_data_s *bdata)
free(nbdata->buffer);
nbdata->buffer = NULL;
}
if (nbdata->copy) {
free(nbdata->copy);
nbdata->copy = NULL;
}
}
static void
@ -429,19 +424,11 @@ hwloc_nolibxml_backend_init(struct hwloc_xml_backend_data_s *bdata,
goto out_with_nbdata;
}
/* allocate a temporary copy buffer that we may modify during parsing */
nbdata->copy = malloc(nbdata->buflen+1);
if (!nbdata->copy)
goto out_with_buffer;
nbdata->copy[nbdata->buflen] = '\0';
bdata->look_init = hwloc_nolibxml_look_init;
bdata->look_done = hwloc_nolibxml_look_done;
bdata->backend_exit = hwloc_nolibxml_backend_exit;
return 0;
out_with_buffer:
free(nbdata->buffer);
out_with_nbdata:
free(nbdata);
out:
@ -666,7 +653,7 @@ hwloc__nolibxml_export_end_object(hwloc__xml_export_state_t state, const char *n
}
static void
hwloc__nolibxml_export_add_content(hwloc__xml_export_state_t state, const char *buffer, size_t length)
hwloc__nolibxml_export_add_content(hwloc__xml_export_state_t state, const char *buffer, size_t length __hwloc_attribute_unused)
{
hwloc__nolibxml_export_state_data_t ndata = (void *) state->data;
int res;
@ -678,7 +665,7 @@ hwloc__nolibxml_export_add_content(hwloc__xml_export_state_t state, const char *
}
ndata->has_content = 1;
res = hwloc_snprintf(ndata->buffer, ndata->remaining, buffer, length);
res = hwloc_snprintf(ndata->buffer, ndata->remaining, "%s", buffer);
hwloc__nolibxml_export_update_buffer(ndata, res);
}
@ -799,6 +786,7 @@ hwloc___nolibxml_prepare_export_diff(hwloc_topology_diff_t diff, const char *ref
state.new_prop = hwloc__nolibxml_export_new_prop;
state.add_content = hwloc__nolibxml_export_add_content;
state.end_object = hwloc__nolibxml_export_end_object;
state.global = NULL;
ndata->indent = 0;
ndata->written = 0;

File diff suppressed because it is too large

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff