Compare commits


68 commits

Author SHA1 Message Date
dependabot[bot]
d0cfc69a09
Bump crossbeam-channel from 0.5.14 to 0.5.15 in the cargo group (#3560)
Bumps the cargo group with 1 update: [crossbeam-channel](https://github.com/crossbeam-rs/crossbeam).


Updates `crossbeam-channel` from 0.5.14 to 0.5.15
- [Release notes](https://github.com/crossbeam-rs/crossbeam/releases)
- [Changelog](https://github.com/crossbeam-rs/crossbeam/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crossbeam-rs/crossbeam/compare/crossbeam-channel-0.5.14...crossbeam-channel-0.5.15)

---
updated-dependencies:
- dependency-name: crossbeam-channel
  dependency-version: 0.5.15
  dependency-type: indirect
  dependency-group: cargo
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-11 10:12:31 +10:00
Firstyear
b113262357
Improve token handling (#3553)
It was possible for a token to be updated in a way that caused
existing cached information to be lost if an event was delayed
in its write to the user token.

To prevent this, writes to user tokens now require the HsmLock
to be held, and the token is refreshed just ahead of the write to
ensure that this data can't be lost. The benefit of this approach
is that readers remain unblocked by a writer. (A sketch of this
pattern follows the entry.)
2025-04-09 14:49:06 +10:00
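A minimal sketch of that pattern, with hypothetical type and method names (the real kanidm types differ): writers serialise on the HsmLock and re-read the token immediately before writing, while readers never take the lock.

```rust
use std::sync::Mutex;

// Illustrative stand-ins only; these are not kanidm's real types.
#[derive(Clone, Default)]
struct UserToken {
    cached_groups: Vec<String>,
}

#[derive(Default)]
struct TokenStore {
    hsm_lock: Mutex<()>, // serialises all writers to the token store
}

impl TokenStore {
    fn write_token(&self, uid: &str, update: impl FnOnce(&mut UserToken)) {
        // Only one event may update a token at a time.
        let _guard = self.hsm_lock.lock().expect("lock poisoned");
        // Refresh just ahead of the write so a delayed event cannot
        // clobber cached state written by a more recent one.
        let mut token = self.load_latest(uid);
        update(&mut token);
        self.persist(uid, &token);
        // Readers never take hsm_lock, so they remain unblocked.
    }

    fn load_latest(&self, _uid: &str) -> UserToken {
        UserToken::default()
    }

    fn persist(&self, _uid: &str, _token: &UserToken) {}
}
```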
dependabot[bot]
d025e8fff0
Bump tokio from 1.44.1 to 1.44.2 in the cargo group (#3549)
Bumps the cargo group with 1 update: [tokio](https://github.com/tokio-rs/tokio).


Updates `tokio` from 1.44.1 to 1.44.2
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.44.1...tokio-1.44.2)

---
updated-dependencies:
- dependency-name: tokio
  dependency-version: 1.44.2
  dependency-type: direct:production
  dependency-group: cargo
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-09 09:39:19 +10:00
Firstyear
aee9ed05f3
Update fs4 and improve klock handling (#3551) 2025-04-08 05:04:26 +00:00
James Hodgkinson
5458b13398
Less footguns (#3552) 2025-04-08 04:48:53 +00:00
Firstyear
94b6287e27
Unify unix config parser (#3533)
* Unify unix config parser
* Document the various structs
* Compiler Update
2025-04-08 14:21:26 +10:00
dependabot[bot]
b6813a11d3
Bump openssl from 0.10.71 to 0.10.72 in the cargo group (#3544)
Bumps the cargo group with 1 update: [openssl](https://github.com/sfackler/rust-openssl).


Updates `openssl` from 0.10.71 to 0.10.72
- [Release notes](https://github.com/sfackler/rust-openssl/releases)
- [Commits](https://github.com/sfackler/rust-openssl/compare/openssl-v0.10.71...openssl-v0.10.72)

---
updated-dependencies:
- dependency-name: openssl
  dependency-version: 0.10.72
  dependency-type: direct:production
  dependency-group: cargo
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-08 01:31:20 +00:00
dependabot[bot]
d79188559f
Bump the all group in /pykanidm with 8 updates (#3547)
Bumps the all group in /pykanidm with 8 updates:

| Package | From | To |
| --- | --- | --- |
| [pydantic](https://github.com/pydantic/pydantic) | `2.11.1` | `2.11.2` |
| [aiohttp](https://github.com/aio-libs/aiohttp) | `3.11.14` | `3.11.16` |
| [authlib](https://github.com/lepture/authlib) | `1.5.1` | `1.5.2` |
| [ruff](https://github.com/astral-sh/ruff) | `0.11.2` | `0.11.4` |
| [coverage](https://github.com/nedbat/coveragepy) | `7.7.1` | `7.8.0` |
| [mkdocs-material](https://github.com/squidfunk/mkdocs-material) | `9.6.10` | `9.6.11` |
| [mkdocstrings](https://github.com/mkdocstrings/mkdocstrings) | `0.29.0` | `0.29.1` |
| [mkdocstrings-python](https://github.com/mkdocstrings/python) | `1.16.8` | `1.16.10` |


Updates `pydantic` from 2.11.1 to 2.11.2
- [Release notes](https://github.com/pydantic/pydantic/releases)
- [Changelog](https://github.com/pydantic/pydantic/blob/main/HISTORY.md)
- [Commits](https://github.com/pydantic/pydantic/compare/v2.11.1...v2.11.2)

Updates `aiohttp` from 3.11.14 to 3.11.16
- [Release notes](https://github.com/aio-libs/aiohttp/releases)
- [Changelog](https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiohttp/compare/v3.11.14...v3.11.16)

Updates `authlib` from 1.5.1 to 1.5.2
- [Release notes](https://github.com/lepture/authlib/releases)
- [Changelog](https://github.com/lepture/authlib/blob/main/docs/changelog.rst)
- [Commits](https://github.com/lepture/authlib/compare/v1.5.1...v1.5.2)

Updates `ruff` from 0.11.2 to 0.11.4
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.11.2...0.11.4)

Updates `coverage` from 7.7.1 to 7.8.0
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.7.1...7.8.0)

Updates `mkdocs-material` from 9.6.10 to 9.6.11
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.6.10...9.6.11)

Updates `mkdocstrings` from 0.29.0 to 0.29.1
- [Release notes](https://github.com/mkdocstrings/mkdocstrings/releases)
- [Changelog](https://github.com/mkdocstrings/mkdocstrings/blob/main/CHANGELOG.md)
- [Commits](https://github.com/mkdocstrings/mkdocstrings/compare/0.29.0...0.29.1)

Updates `mkdocstrings-python` from 1.16.8 to 1.16.10
- [Release notes](https://github.com/mkdocstrings/python/releases)
- [Changelog](https://github.com/mkdocstrings/python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/mkdocstrings/python/compare/1.16.8...1.16.10)

---
updated-dependencies:
- dependency-name: pydantic
  dependency-version: 2.11.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: aiohttp
  dependency-version: 3.11.16
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: authlib
  dependency-version: 1.5.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: ruff
  dependency-version: 0.11.4
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: coverage
  dependency-version: 7.8.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: all
- dependency-name: mkdocs-material
  dependency-version: 9.6.11
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocstrings
  dependency-version: 0.29.1
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocstrings-python
  dependency-version: 1.16.10
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-07 14:21:35 +10:00
Arian van Putten
ad012cd6fd
implement notify-reload protocol (#3540) 2025-04-04 09:24:14 +10:00
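For context, a minimal sketch of the systemd notify-reload handshake itself (the protocol, not kanidmd's actual code; it assumes the `libc` crate and ignores abstract sockets): with `Type=notify-reload`, systemd sends SIGHUP and expects `RELOADING=1` plus `MONOTONIC_USEC`, then `READY=1` once the reload completes.

```rust
use std::os::unix::net::UnixDatagram;

// Send a state string to the socket systemd passes in $NOTIFY_SOCKET.
fn sd_notify(state: &str) -> std::io::Result<()> {
    let path = std::env::var("NOTIFY_SOCKET")
        .map_err(|_| std::io::Error::other("NOTIFY_SOCKET not set"))?;
    let sock = UnixDatagram::unbound()?;
    sock.send_to(state.as_bytes(), path)?;
    Ok(())
}

// CLOCK_MONOTONIC timestamp in microseconds, as the protocol requires.
fn monotonic_usec() -> u64 {
    let mut ts = libc::timespec { tv_sec: 0, tv_nsec: 0 };
    unsafe { libc::clock_gettime(libc::CLOCK_MONOTONIC, &mut ts) };
    ts.tv_sec as u64 * 1_000_000 + ts.tv_nsec as u64 / 1_000
}

// Called from the SIGHUP handler.
fn on_sighup() -> std::io::Result<()> {
    sd_notify(&format!("RELOADING=1\nMONOTONIC_USEC={}", monotonic_usec()))?;
    // ... re-read configuration here ...
    sd_notify("READY=1")
}
```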
Firstyear
82a883089f
Allow versioning of server configs (#3515)
This allows our server configuration to be versioned, in preparation
for a change related to the proxy protocol additions.
2025-04-02 02:44:19 +00:00
Firstyear
a2eae53328
20250314 remove protected plugin (#3504)
Moves the protected plugin into an access control module so that its outputs can be properly represented in effective access checks.
2025-04-01 01:00:56 +00:00
dependabot[bot]
ec3db91da0
Bump the all group with 10 updates (#3539)
* Bump the all group with 10 updates

Bumps the all group with 10 updates:

| Package | From | To |
| --- | --- | --- |
| [clap](https://github.com/clap-rs/clap) | `4.5.32` | `4.5.34` |
| [itertools](https://github.com/rust-itertools/itertools) | `0.13.0` | `0.14.0` |
| [lru](https://github.com/jeromefroe/lru-rs) | `0.12.5` | `0.13.0` |
| [rand](https://github.com/rust-random/rand) | `0.8.5` | `0.9.0` |
| [rand_chacha](https://github.com/rust-random/rand) | `0.3.1` | `0.9.0` |
| [whoami](https://github.com/ardaku/whoami) | `1.5.2` | `1.6.0` |
| [axum-extra](https://github.com/tokio-rs/axum) | `0.9.6` | `0.10.1` |
| [axum-macros](https://github.com/tokio-rs/axum) | `0.4.2` | `0.5.0` |
| [fantoccini](https://github.com/jonhoo/fantoccini) | `0.21.4` | `0.21.5` |
| [jsonschema](https://github.com/Stranger6667/jsonschema) | `0.29.0` | `0.29.1` |


Updates `clap` from 4.5.32 to 4.5.34
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.32...clap_complete-v4.5.34)

Updates `itertools` from 0.13.0 to 0.14.0
- [Changelog](https://github.com/rust-itertools/itertools/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-itertools/itertools/compare/v0.13.0...v0.14.0)

Updates `lru` from 0.12.5 to 0.13.0
- [Changelog](https://github.com/jeromefroe/lru-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/jeromefroe/lru-rs/compare/0.12.5...0.13.0)

Updates `rand` from 0.8.5 to 0.9.0
- [Release notes](https://github.com/rust-random/rand/releases)
- [Changelog](https://github.com/rust-random/rand/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-random/rand/compare/0.8.5...0.9.0)

Updates `rand_chacha` from 0.3.1 to 0.9.0
- [Release notes](https://github.com/rust-random/rand/releases)
- [Changelog](https://github.com/rust-random/rand/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-random/rand/compare/rand_chacha-0.3.1...0.9.0)

Updates `whoami` from 1.5.2 to 1.6.0
- [Release notes](https://github.com/ardaku/whoami/releases)
- [Changelog](https://github.com/ardaku/whoami/blob/v1.6.0/CHANGELOG.md)
- [Commits](https://github.com/ardaku/whoami/compare/v1.5.2...v1.6.0)

Updates `axum-extra` from 0.9.6 to 0.10.1
- [Release notes](https://github.com/tokio-rs/axum/releases)
- [Changelog](https://github.com/tokio-rs/axum/blob/main/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/axum/compare/axum-extra-v0.9.6...axum-extra-v0.10.1)

Updates `axum-macros` from 0.4.2 to 0.5.0
- [Release notes](https://github.com/tokio-rs/axum/releases)
- [Changelog](https://github.com/tokio-rs/axum/blob/main/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/axum/compare/axum-macros-v0.4.2...axum-macros-v0.5.0)

Updates `fantoccini` from 0.21.4 to 0.21.5
- [Commits](https://github.com/jonhoo/fantoccini/compare/v0.21.4...v0.21.5)

Updates `jsonschema` from 0.29.0 to 0.29.1
- [Release notes](https://github.com/Stranger6667/jsonschema/releases)
- [Changelog](https://github.com/Stranger6667/jsonschema/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Stranger6667/jsonschema/compare/rust-v0.29.0...rust-v0.29.1)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: itertools
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all
- dependency-name: lru
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all
- dependency-name: rand
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all
- dependency-name: rand_chacha
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all
- dependency-name: whoami
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all
- dependency-name: axum-extra
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all
- dependency-name: axum-macros
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all
- dependency-name: fantoccini
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: jsonschema
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all
...

Signed-off-by: dependabot[bot] <support@github.com>

* maint: revert rand and axum packages

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: James Hodgkinson <james@terminaloutcomes.com>
2025-03-31 00:28:22 +00:00
dependabot[bot]
efaef70abe
Bump mozilla-actions/sccache-action from 0.0.8 to 0.0.9 in the all group (#3538)
Bumps the all group with 1 update: [mozilla-actions/sccache-action](https://github.com/mozilla-actions/sccache-action).


Updates `mozilla-actions/sccache-action` from 0.0.8 to 0.0.9
- [Release notes](https://github.com/mozilla-actions/sccache-action/releases)
- [Commits](https://github.com/mozilla-actions/sccache-action/compare/v0.0.8...v0.0.9)

---
updated-dependencies:
- dependency-name: mozilla-actions/sccache-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-30 23:25:40 +00:00
dependabot[bot]
5b48f1dfe3
Bump the all group in /pykanidm with 4 updates (#3537)
Bumps the all group in /pykanidm with 4 updates: [pydantic](https://github.com/pydantic/pydantic), [types-requests](https://github.com/python/typeshed), [mkdocs-material](https://github.com/squidfunk/mkdocs-material) and [mkdocstrings-python](https://github.com/mkdocstrings/python).


Updates `pydantic` from 2.10.6 to 2.11.1
- [Release notes](https://github.com/pydantic/pydantic/releases)
- [Changelog](https://github.com/pydantic/pydantic/blob/main/HISTORY.md)
- [Commits](https://github.com/pydantic/pydantic/compare/v2.10.6...v2.11.1)

Updates `types-requests` from 2.32.0.20250306 to 2.32.0.20250328
- [Commits](https://github.com/python/typeshed/commits)

Updates `mkdocs-material` from 9.6.9 to 9.6.10
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.6.9...9.6.10)

Updates `mkdocstrings-python` from 1.16.7 to 1.16.8
- [Release notes](https://github.com/mkdocstrings/python/releases)
- [Changelog](https://github.com/mkdocstrings/python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/mkdocstrings/python/compare/1.16.7...1.16.8)

---
updated-dependencies:
- dependency-name: pydantic
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all
- dependency-name: types-requests
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocs-material
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocstrings-python
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-31 07:54:28 +10:00
Firstyear
567fe7b259
Add max_ber_size to freeipa sync (#3530) 2025-03-28 10:46:00 +10:00
dependabot[bot]
5edc6be51c
Bump the all group in /pykanidm with 5 updates (#3524)
Bumps the all group in /pykanidm with 5 updates:

| Package | From | To |
| --- | --- | --- |
| [aiohttp](https://github.com/aio-libs/aiohttp) | `3.11.13` | `3.11.14` |
| [ruff](https://github.com/astral-sh/ruff) | `0.11.0` | `0.11.2` |
| [coverage](https://github.com/nedbat/coveragepy) | `7.7.0` | `7.7.1` |
| [mkdocs-material](https://github.com/squidfunk/mkdocs-material) | `9.6.8` | `9.6.9` |
| [mkdocstrings-python](https://github.com/mkdocstrings/python) | `1.16.5` | `1.16.7` |


Updates `aiohttp` from 3.11.13 to 3.11.14
- [Release notes](https://github.com/aio-libs/aiohttp/releases)
- [Changelog](https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiohttp/compare/v3.11.13...v3.11.14)

Updates `ruff` from 0.11.0 to 0.11.2
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.11.0...0.11.2)

Updates `coverage` from 7.7.0 to 7.7.1
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.7.0...7.7.1)

Updates `mkdocs-material` from 9.6.8 to 9.6.9
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.6.8...9.6.9)

Updates `mkdocstrings-python` from 1.16.5 to 1.16.7
- [Release notes](https://github.com/mkdocstrings/python/releases)
- [Changelog](https://github.com/mkdocstrings/python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/mkdocstrings/python/compare/1.16.5...1.16.7)

---
updated-dependencies:
- dependency-name: aiohttp
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: coverage
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocs-material
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocstrings-python
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-24 11:59:26 +10:00
William Brown
c75c97893e Update Concread 2025-03-22 12:47:18 +10:00
Peter Todd Decker ("Todd")
638904f12c
Update developer_ethics.md (#3520) 2025-03-22 01:58:54 +00:00
Jeff Scrum
e1b9063b99
Update examples.md (#3519)
fix command in OAuth2 Proxy example
2025-03-21 23:18:16 +00:00
Firstyear
bf1e9b0989
Make schema indexing a boolean instead of index types (#3517)
Previously, schema definitions manually listed the index types for
each attribute. The issue with this approach is that not all index
types apply to all attribute syntaxes. This made it error-prone not
just for Kanidm developers, but for future users who want to define
custom attributes and may incorrectly index those attributes.

Instead, this changes the index value to a boolean that indicates
whether the attribute should be indexed. Internally, Kanidm keeps a
list of the appropriate index types to apply for each syntax. (A
sketch of the before/after shape follows the entry.)

As part of this change, the tests were reviewed to find missing index
types for syntaxes and other causes of unindexed searches, which led
to some changes around the dyngroup plugin (which pushes the
boundaries of a lot of things in Kani due to how it works).
2025-03-21 02:13:54 +00:00
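A sketch of the shape of this change, with hypothetical names rather than kanidm's real schema types: the per-attribute list of index types becomes a boolean, and the server derives the applicable index types from the attribute's syntax.

```rust
#[derive(Clone, Copy, Debug)]
enum IndexType {
    Equality,
    Presence,
    SubString,
}

// Before: every attribute definition listed its index types by hand,
// which made per-syntax mistakes easy.
struct AttributeSchemaOld {
    name: &'static str,
    index: Vec<IndexType>,
}

// After: attributes simply opt in or out of indexing.
struct AttributeSchemaNew {
    name: &'static str,
    indexed: bool,
}

// The server owns the mapping from syntax to valid index types,
// so a definition can no longer request an index that doesn't fit.
fn indexes_for_syntax(syntax: &str) -> &'static [IndexType] {
    match syntax {
        "utf8string" => &[IndexType::Equality, IndexType::Presence, IndexType::SubString],
        _ => &[IndexType::Equality, IndexType::Presence],
    }
}
```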
Foosec
11c7266ff3
Add missing lld dependency and fix syntax typo (#3490)
* Add missing lld dependency and fix a syntax typo in devcontainer_postcreate.sh
* Replace pushd/popd with a shell-agnostic solution and do not throw away stdout/stderr
---------
Co-authored-by: foobar <foobar>
2025-03-21 01:51:58 +00:00
Katherina Walshe-Grey
ef638a62e9
Update shell.nix to work with stable nixpkgs (#3514)
The existing shell.nix uses whatever versions of rustc and cargo are in
the system nixpkgs. In the current stable nixpkgs version (24.11), this
is rustc 1.82.0. Unfortunately, we depend on the `strict_provenance`
feature, which was unstable before 1.84.0. (See: kanidm/concread#132)

This patch makes minimal changes to shell.nix to overlay nixpkgs with
the rustc version defined in rust-toolchain.toml, enabling Kanidm to
build locally on stable versions of NixOS.

Co-authored-by: Firstyear <william@blackhats.net.au>
2025-03-20 13:06:51 +10:00
Firstyear
f86bc03a93
Improve unixd tasks channel comments (#3510) 2025-03-19 00:57:39 +00:00
Jinna Kiisuo
46eda59cff
Update kanidm_ppa_automation reference to latest (#3512) 2025-03-18 12:10:36 +00:00
Firstyear
b13951a79b
Add set-description to group tooling (#3511) 2025-03-18 21:54:20 +10:00
Jinna Kiisuo
1e91f244a2
packaging: Add kanidmd deb package, update documentation (#3506)
* packaging: Use cargo-deb multiarch support

This allows building all platforms from one definition,
assuming the --multiarch=foreign flag is used.

* packaging: Use correct path naming for unixd service files

While cargo-deb works around the mistake, better to name them as per the
rules: https://github.com/kornelski/cargo-deb/blob/main/systemd.md#systemd-unit-file-naming

* docs: Update book chapter on Debian packaging

* packaging: Shift Debian builds to a separate build profile

* packaging: Add deb for kanidmd
2025-03-18 12:10:42 +10:00
dependabot[bot]
23bb656c6b
Bump the all group in /pykanidm with 5 updates (#3508)
Bumps the all group in /pykanidm with 5 updates:

| Package | From | To |
| --- | --- | --- |
| [ruff](https://github.com/astral-sh/ruff) | `0.9.10` | `0.11.0` |
| [coverage](https://github.com/nedbat/coveragepy) | `7.6.12` | `7.7.0` |
| [mkdocs-material](https://github.com/squidfunk/mkdocs-material) | `9.6.7` | `9.6.8` |
| [mkdocstrings](https://github.com/mkdocstrings/mkdocstrings) | `0.28.3` | `0.29.0` |
| [mkdocstrings-python](https://github.com/mkdocstrings/python) | `1.16.3` | `1.16.5` |


Updates `ruff` from 0.9.10 to 0.11.0
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.9.10...0.11.0)

Updates `coverage` from 7.6.12 to 7.7.0
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.6.12...7.7.0)

Updates `mkdocs-material` from 9.6.7 to 9.6.8
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.6.7...9.6.8)

Updates `mkdocstrings` from 0.28.3 to 0.29.0
- [Release notes](https://github.com/mkdocstrings/mkdocstrings/releases)
- [Changelog](https://github.com/mkdocstrings/mkdocstrings/blob/main/CHANGELOG.md)
- [Commits](https://github.com/mkdocstrings/mkdocstrings/compare/0.28.3...0.29.0)

Updates `mkdocstrings-python` from 1.16.3 to 1.16.5
- [Release notes](https://github.com/mkdocstrings/python/releases)
- [Changelog](https://github.com/mkdocstrings/python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/mkdocstrings/python/compare/1.16.3...1.16.5)

---
updated-dependencies:
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: all
- dependency-name: coverage
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: all
- dependency-name: mkdocs-material
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocstrings
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: all
- dependency-name: mkdocstrings-python
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-17 08:44:26 +10:00
Firstyear
b88b6923eb
20250313 unixd system cache (#3501)
The implementation of the unixd cache relies on inotify to detect changes to files in /etc so that we know when to reload the data for nss/passwd. However, groupadd/del and similar tools work by copying the file, changing it, and then moving it into place. It turns out that william of the past didn't realise that inotify works on inodes, not paths (unlike other tools such as auditctl).

As a result, when something modified /etc/group or another related file, the removal was seen, but this broke notifications for any future change until unixd was reloaded.

To resolve this we need to recursively watch /etc with inotify - yep, that's correct. We have to watch everything in /etc for changes because it's the only way to pick up the add/remove of files (see the sketch after this entry). But because we have to watch everything, we need permissions to watch everything.

This forces us to move the parsing of the etc passwd/group/shadow files to the unixd tasks daemon - arguably the correct place to read these anyway, since that is a high-privilege (and locked down) daemon. Because of this, we also end up solving the missing "shadow" group issue on Debian, and probably similar issues on the BSDs in future.

To make my life easier while testing, I also threw in a makefile that symlinks the files to the needed locations for testing. It has plenty of warnings, as it should.

Fixes #3499
Fixes #3407
Fixes #3249
2025-03-14 13:46:26 +10:00
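A standalone sketch of the recursive-watch idea using the `notify` crate directly (unixd itself goes through a debouncer, and the names and wiring here are illustrative):

```rust
use notify::{recommended_watcher, Event, RecursiveMode, Watcher};
use std::path::Path;

fn main() -> notify::Result<()> {
    // groupadd and friends write a temp file and rename it into place, so
    // the watched inode for /etc/group disappears on every change. Watching
    // the /etc directory recursively picks up the create/rename events for
    // the files we actually care about.
    let mut watcher = recommended_watcher(|res: notify::Result<Event>| {
        if let Ok(event) = res {
            let interesting = event.paths.iter().any(|p| {
                p.ends_with("passwd") || p.ends_with("group") || p.ends_with("shadow")
            });
            if interesting {
                // Signal the resolver to reload its nss data here.
                println!("reload triggered by {:?}", event.paths);
            }
        }
    })?;
    watcher.watch(Path::new("/etc"), RecursiveMode::Recursive)?;
    std::thread::park(); // keep the watcher alive
    Ok(())
}
```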
Firstyear
e3243ce6b0
Support rfc2307 memberUid in sync operations. (#3466)
A lot of legacy directory servers use rfc2307 schema, where
members of groups are stored as the uid instead of a dn. Within
kani, we absolutely need this to be a dn, else we risk accidentally
adding kanidm entries into ldap-synced groups, which isn't what we
want.

If we have an rfc2307 schema, then we pre-resolve the uid to the
member dn so that kanidm gets the correct information (see the
sketch after this entry).
2025-03-14 00:48:05 +00:00
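A hypothetical helper showing that pre-resolution step (not the actual sync code):

```rust
use std::collections::HashMap;

// Given a group's rfc2307 memberUid values and a uid -> dn map built from
// the remote server's entries, produce dn-valued members only.
fn resolve_member_uids(
    member_uids: &[&str],
    uid_to_dn: &HashMap<String, String>,
) -> Vec<String> {
    member_uids
        .iter()
        // A uid with no matching entry is dropped rather than guessed, so
        // we can't accidentally link an unrelated kanidm entry.
        .filter_map(|uid| uid_to_dn.get(*uid).cloned())
        .collect()
}

fn main() {
    let mut uid_to_dn = HashMap::new();
    uid_to_dn.insert(
        "wbrown".to_string(),
        "uid=wbrown,ou=people,dc=example,dc=com".to_string(),
    );
    let members = resolve_member_uids(&["wbrown", "ghost"], &uid_to_dn);
    assert_eq!(members, vec!["uid=wbrown,ou=people,dc=example,dc=com"]);
}
```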
dependabot[bot]
4b4e690642
Bump mozilla-actions/sccache-action from 0.0.7 to 0.0.8 in the all group (#3496)
* Bump mozilla-actions/sccache-action from 0.0.7 to 0.0.8 in the all group
* fix: remove manual specification of sccache version from github actions

Bumps the all group with 1 update: [mozilla-actions/sccache-action](https://github.com/mozilla-actions/sccache-action).


Updates `mozilla-actions/sccache-action` from 0.0.7 to 0.0.8
- [Release notes](https://github.com/mozilla-actions/sccache-action/releases)
- [Commits](https://github.com/mozilla-actions/sccache-action/compare/v0.0.7...v0.0.8)

---
updated-dependencies:
- dependency-name: mozilla-actions/sccache-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-14 00:32:14 +00:00
Jason
d6549077fb
Update Traefik config example to remove invalid label (#3500)
Remove non-existent traefik label config
2025-03-13 04:36:02 +00:00
Firstyear
2c5ce227ae
Add uid/gid allocation table (#3498) 2025-03-11 06:42:08 +00:00
Firstyear
919e0ba6fe
20250225 ldap testing in testkit (#3460)
Add support for ldap servers in integration tests

This allows the ldap interface to be enabled during tests, which is
a final requirement to complete ldap application passwords.
2025-03-11 12:35:31 +10:00
dependabot[bot]
23d35dc324
Bump the all group in /pykanidm with 5 updates (#3494)
Bumps the all group in /pykanidm with 5 updates:

| Package | From | To |
| --- | --- | --- |
| [ruff](https://github.com/astral-sh/ruff) | `0.9.9` | `0.9.10` |
| [types-requests](https://github.com/python/typeshed) | `2.32.0.20250301` | `2.32.0.20250306` |
| [mkdocs-material](https://github.com/squidfunk/mkdocs-material) | `9.6.6` | `9.6.7` |
| [mkdocstrings](https://github.com/mkdocstrings/mkdocstrings) | `0.28.2` | `0.28.3` |
| [mkdocstrings-python](https://github.com/mkdocstrings/python) | `1.16.2` | `1.16.3` |


Updates `ruff` from 0.9.9 to 0.9.10
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.9.9...0.9.10)

Updates `types-requests` from 2.32.0.20250301 to 2.32.0.20250306
- [Commits](https://github.com/python/typeshed/commits)

Updates `mkdocs-material` from 9.6.6 to 9.6.7
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.6.6...9.6.7)

Updates `mkdocstrings` from 0.28.2 to 0.28.3
- [Release notes](https://github.com/mkdocstrings/mkdocstrings/releases)
- [Changelog](https://github.com/mkdocstrings/mkdocstrings/blob/main/CHANGELOG.md)
- [Commits](https://github.com/mkdocstrings/mkdocstrings/compare/0.28.2...0.28.3)

Updates `mkdocstrings-python` from 1.16.2 to 1.16.3
- [Release notes](https://github.com/mkdocstrings/python/releases)
- [Changelog](https://github.com/mkdocstrings/python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/mkdocstrings/python/compare/1.16.2...1.16.3)

---
updated-dependencies:
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: types-requests
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocs-material
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocstrings
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocstrings-python
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-11 08:49:13 +10:00
dependabot[bot]
7d9661ef45
Bump ring from 0.17.10 to 0.17.13 in the cargo group (#3491)
Bumps the cargo group with 1 update: [ring](https://github.com/briansmith/ring).


Updates `ring` from 0.17.10 to 0.17.13
- [Changelog](https://github.com/briansmith/ring/blob/main/RELEASES.md)
- [Commits](https://github.com/briansmith/ring/commits)

---
updated-dependencies:
- dependency-name: ring
  dependency-type: indirect
  dependency-group: cargo
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-09 11:53:42 +10:00
Firstyear
dcd5cd23f4
Handle form-post as a response mode (#3467)
Some oauth2 clients apparently ignore what we tell them
and request response modes we don't support.

First, we should deserialise these and error correctly.

Second, to maintain temporary compatibility, we remap
form-post to query (see the sketch after this entry). This
remapping will be removed in the future.
2025-03-05 13:21:09 +10:00
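A sketch of the idea with hypothetical names (assumes serde with the derive feature): unknown modes fail deserialisation cleanly, and form_post is remapped.

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize, PartialEq, Clone, Copy)]
#[serde(rename_all = "snake_case")]
enum ResponseMode {
    Query,
    Fragment,
    FormPost,
}

impl ResponseMode {
    // Temporary compatibility shim; to be removed in the future.
    fn effective(self) -> ResponseMode {
        match self {
            ResponseMode::FormPost => ResponseMode::Query,
            other => other,
        }
    }
}
```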
Tshepang Mbambo
7b2bd38ab2
book: fix english (#3487)
* fix Python docs wording

---------

Co-authored-by: James Hodgkinson <james@terminaloutcomes.com>
2025-03-04 21:16:00 +00:00
Firstyear
775dd520cb
Correct paths with Kanidm Tools Container (#3486) 2025-03-04 14:52:30 +10:00
Firstyear
63deda350c
20250225 improve test performance (#3459)
* Ignore tests that are no longer used.

Each time a library or binary is added, it requires compilation to create
the *empty* test harness, which is then executed and takes multiple seconds
to start up, do nothing, and return success.

This removes tests for libraries that aren't actually running any tests.

Additionally, each new test binary adds a ton of compilation time, but also
test execution time, as the binary for each test runner must start up,
execute, and shut down. So this merges all the testkit integration tests
into a single runner, which significantly speeds up test execution.

* Improve IDL exists behaviour, improve memberof verification

Again to improve test performance. This changes the validation of index
existence to a faster SQLite call and caches the results as needed (see
the sketch after this entry).

Memberof was taking up a large amount of time in the verify phases of test
finalisation, so a better in-memory version has been added.

* Disable TLS native roots when not needed

* Clean up tests that hit native certs, or do nothing at all
2025-03-04 10:36:53 +10:00
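A sketch of the index-existence part with illustrative names (assumes the `rusqlite` crate, which the workspace already uses): ask SQLite's catalogue once, then serve repeated checks from a cache.

```rust
use rusqlite::Connection;
use std::collections::HashMap;

#[derive(Default)]
struct IdxExistsCache {
    cache: HashMap<String, bool>,
}

impl IdxExistsCache {
    fn exists(&mut self, conn: &Connection, idx_name: &str) -> rusqlite::Result<bool> {
        if let Some(hit) = self.cache.get(idx_name) {
            return Ok(*hit);
        }
        // A single catalogue lookup is much cheaper than probing the index.
        let found: bool = conn.query_row(
            "SELECT COUNT(*) FROM sqlite_master WHERE type = 'index' AND name = ?1",
            [idx_name],
            |row| row.get::<_, i64>(0).map(|n| n > 0),
        )?;
        self.cache.insert(idx_name.to_string(), found);
        Ok(found)
    }
}
```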
dependabot[bot]
7eedb0159f
Bump the all group in /pykanidm with 8 updates (#3484)
---
updated-dependencies:
- dependency-name: aiohttp
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: authlib
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: pytest
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: types-requests
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocs-material
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocstrings
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocstrings-python
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-03 08:00:57 +10:00
Firstyear
e98d60a962
Use lld by default on linux (#3477)
* Use lld by default on linux

---------

Co-authored-by: James Hodgkinson <james@terminaloutcomes.com>
2025-02-28 08:30:59 +00:00
Firstyear
25c1c1573e
20250213 patch used wrong acp (#3432)
Migrations and server bootstrap are very interconnected processes,
and here we'll be addressing and improving both.

Server bootstrap was performed by creating base entries in phases,
eventually bringing up enough of the *oldest* supported server
minimum remigration level to then allow triggering of migrations.

Migrations then applied "patches" effectively on top of this minimum
level to update entries to what they should be in newer versions of
the server.

This scheme has its pros and cons, but the major con was that
removing a migration meant squashing its content back into the
minimum remigration level, and this was a human process that was
quite error-prone and difficult to automate. As well, this scheme
also led to cases where the patch migrations would sometimes *not*
reflect all the needed changes or content, or in one case was actually
undone by a patchlevel fix-up that was required to address a bug.

Invariably this led to issues, and cases where a new server could have
different content to a migrated one - not exactly what we want!

This is a new migration scheme that addresses this fragility. What it
trades away is verbosity of the content.

Rather than having a base set of entries and patching/updating small
sections on top, we have migration data folders that contain the full
set of entries as they should appear at that migration level. This
makes the bootstrap process easier, as we can just apply the migration
level as a whole, targeted to the precise version we want (see the
sketch after this entry).

This also makes migrations more durable, as the content is explicitly
copied and all entries fully applied, so there is no risk that a
migration or data change can be forgotten or applied incorrectly. We
are expressing the full state of what our builtin and provided entries
should be.

Finally, this rips out a number of places where migration data was
being used as test case data. Not all of these have been replaced
(notably in authsession with Account), but the majority have been
replaced with clearer use of constants, rather than building whole
entries just to access the name and throw them away, for example.
2025-02-28 10:18:48 +10:00
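A toy sketch of the data-folder scheme (paths and names are hypothetical): each migration level ships its complete entry set, and bootstrap applies a level wholesale.

```rust
use std::path::PathBuf;

// Hypothetical layout: migration_data/v9/, migration_data/v10/, ... each
// holding the *full* builtin entry set for that level.
fn migration_data_dir(level: u32) -> PathBuf {
    PathBuf::from(format!("migration_data/v{level}"))
}

fn bootstrap_to(level: u32) -> std::io::Result<()> {
    for entry in std::fs::read_dir(migration_data_dir(level))? {
        let path = entry?.path();
        // Apply each entry definition as-is. Nothing is patched relative
        // to an older minimum level, so content cannot silently drift.
        println!("applying {}", path.display());
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    // A fresh server and a migrated one both converge on the same
    // explicit entry set for the target level.
    bootstrap_to(10)
}
```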
Ludea
145ffed7c6
Android support (#3475) 2025-02-27 11:45:33 +00:00
CEbbinghaus
b669f38d23
Changed all CI/CD builds to locked (#3471) 2025-02-26 22:04:23 +00:00
Firstyear
537d6fd93b
Make it a bit clearer that providers are needed (#3468) 2025-02-27 00:05:33 +10:00
Firstyear
b6ffb31e4a
Fix incorrect credential generation in radius docs (#3465) 2025-02-26 12:03:10 +10:00
Firstyear
0e0e8ff844
Add crypt formats for password import (#3458)
Adds crypt md5, sha256 and sha512 hashes, allowing import of legacy
credentials from external ldap servers (see the sketch after this entry).
2025-02-25 11:09:34 +00:00
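A sketch of how such hashes can be recognised: crypt(3) identifies the scheme by the `$id$` prefix ($1$ md5, $5$ sha256, $6$ sha512).

```rust
#[derive(Debug, PartialEq)]
enum CryptScheme {
    Md5,
    Sha256,
    Sha512,
}

// Classify a crypt(3) hash string by its $id$ prefix.
fn classify_crypt(hash: &str) -> Option<CryptScheme> {
    if hash.starts_with("$1$") {
        Some(CryptScheme::Md5)
    } else if hash.starts_with("$5$") {
        Some(CryptScheme::Sha256)
    } else if hash.starts_with("$6$") {
        Some(CryptScheme::Sha512)
    } else {
        None
    }
}

fn main() {
    assert_eq!(classify_crypt("$6$salt$hash"), Some(CryptScheme::Sha512));
}
```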
Jade Ellis
266dc77536
build: Create daemon image from scratch (#3452) 2025-02-25 14:16:08 +10:00
micolous
3edee485dd
address webfinger doc feedbacks (#3446) 2025-02-25 02:53:53 +00:00
dependabot[bot]
38c260214b
Bump the all group across 1 directory with 5 updates (#3453)
Bumps the all group with 5 updates in the /pykanidm directory:

| Package | From | To |
| --- | --- | --- |
| [ruff](https://github.com/astral-sh/ruff) | `0.9.5` | `0.9.7` |
| [coverage](https://github.com/nedbat/coveragepy) | `7.6.11` | `7.6.12` |
| [mkdocs-material](https://github.com/squidfunk/mkdocs-material) | `9.6.3` | `9.6.5` |
| [mkdocstrings](https://github.com/mkdocstrings/mkdocstrings) | `0.28.0` | `0.28.1` |
| [mkdocstrings-python](https://github.com/mkdocstrings/python) | `1.14.6` | `1.16.1` |



Updates `ruff` from 0.9.5 to 0.9.7
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.9.5...0.9.7)

Updates `coverage` from 7.6.11 to 7.6.12
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.6.11...7.6.12)

Updates `mkdocs-material` from 9.6.3 to 9.6.5
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.6.3...9.6.5)

Updates `mkdocstrings` from 0.28.0 to 0.28.1
- [Release notes](https://github.com/mkdocstrings/mkdocstrings/releases)
- [Changelog](https://github.com/mkdocstrings/mkdocstrings/blob/main/CHANGELOG.md)
- [Commits](https://github.com/mkdocstrings/mkdocstrings/compare/0.28.0...0.28.1)

Updates `mkdocstrings-python` from 1.14.6 to 1.16.1
- [Release notes](https://github.com/mkdocstrings/python/releases)
- [Changelog](https://github.com/mkdocstrings/python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/mkdocstrings/python/compare/1.14.6...1.16.1)

---
updated-dependencies:
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: coverage
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocs-material
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocstrings
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocstrings-python
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: all
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-24 13:22:51 +10:00
Merlijn
857dcf5087
[htmx] Admin ui for groups and users management (#3019)
* Some progress on admin ui for managing groups and users
* Improve scim querying

---------

Co-authored-by: William Brown <william@blackhats.net.au>
2025-02-22 13:43:54 +10:00
Sebastiano Tocci
9611a7f976
Fixes #3406: add configurable maximum queryable attributes for LDAP (#3431) 2025-02-21 12:14:47 +10:00
sinavir
f40679cd52
Accept invalid certs and fix token_cache_path (#3439)
* Add accept-invalid-certs option for cli
* Fix token_cache_path behavior

---------

Co-authored-by: sinavir <sinavir@sinavir.fr>
2025-02-20 08:07:48 +00:00
Firstyear
52824b58f1
Accept lowercase ldap pwd hashes (#3444) 2025-02-20 04:34:27 +00:00
CEbbinghaus
848af4cecd
TOTP label verification (#3419)
* Add TOTP label verification (rejecting both empty and duplicate labels)
2025-02-19 06:54:50 +00:00
micolous
de506a5f53
Rewrite WebFinger docs (#3443) 2025-02-19 12:26:15 +10:00
micolous
7f3b1f2580
doc: fix formatting of URL table, remove Caddyfile instructions (#3442)
There are many web servers, and this breaks the flow of the rest of the table.
2025-02-19 11:18:58 +10:00
Alex Martens
9bf17c4846
book: add OAuth2 Proxy example (#3434) 2025-02-16 05:14:47 +00:00
Firstyear
ed88b72080
Exempt idm_admin and admin from denied names. (#3429)
idm_admin and admin should be exempted from the denied names process,
as these values will already be denied due to attribute uniqueness.
Additionally, the denied names check was improved to only validate the
name when it is being changed, not on every modification. This way,
entries whose names become denied can get themselves out of the pickle.
2025-02-15 22:45:25 +00:00
Firstyear
d0b0b163fd
Book fixes (#3433) 2025-02-15 16:01:44 +10:00
Jade Ellis
ce410f440c
ci: uniform Docker builds (#3430) 2025-02-14 10:25:04 +00:00
Firstyear
77271c1720
20240213 3413 domain displayname (#3425)
Remove older migrations and make domain displayname optional.
2025-02-14 10:52:49 +10:00
Justin Warren
e838da9a08
Correct path to kanidm config example in documentation. (#3424) 2025-02-13 01:31:38 +00:00
Firstyear
94b7285cbb
Support redirect uris with query parameters (#3422)
RFC 6749 once again reminds us that, given the room to do silly
things, RFC authors absolutely will. In this case, it's query
parameters in redirection uris, which are absolutely horrifying,
and yet here we are.

We strictly match the query pairs during the redirection so that
even if a query pair could otherwise enable an open redirect, we
prevent it. (A sketch of this matching follows the entry.)
2025-02-13 01:03:15 +00:00
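A sketch of strict pair matching (assumes the `url` crate; the names here are illustrative, and kanidm's actual check may differ):

```rust
use url::Url;

// Origin and path must match exactly, and so must the full multiset of
// query pairs, order-insensitively.
fn redirect_uri_matches(registered: &str, presented: &str) -> bool {
    let (Ok(reg), Ok(pres)) = (Url::parse(registered), Url::parse(presented)) else {
        return false;
    };
    if reg.origin() != pres.origin() || reg.path() != pres.path() {
        return false;
    }
    let mut reg_q: Vec<_> = reg.query_pairs().collect();
    let mut pres_q: Vec<_> = pres.query_pairs().collect();
    reg_q.sort();
    pres_q.sort();
    reg_q == pres_q
}

fn main() {
    assert!(redirect_uri_matches(
        "https://app.example.com/cb?tenant=a",
        "https://app.example.com/cb?tenant=a"
    ));
    // An extra pair is rejected rather than treated as a prefix match.
    assert!(!redirect_uri_matches(
        "https://app.example.com/cb?tenant=a",
        "https://app.example.com/cb?tenant=a&next=https://evil.example"
    ));
}
```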
Firstyear
af6f55b1fe
Update to 1.6.0-dev (#3418) 2025-02-11 07:26:07 +00:00
George Wu
211e7d4e89
Remove white background from square logo. (#3417) 2025-02-11 14:41:55 +10:00
CEbbinghaus
ccde675cd2
feat: Added webfinger implementation (#3410)
Adds WebFinger endpoints to every oauth2 client

Co-authored-by: James Hodgkinson <james@terminaloutcomes.com>
2025-02-10 06:10:12 +00:00
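For reference, the shape of a WebFinger (RFC 7033) JRD response as used for OIDC issuer discovery; the wiring and names below are hypothetical, not kanidm's handler (assumes serde).

```rust
use serde::Serialize;

#[derive(Serialize)]
struct JrdLink {
    rel: String,
    href: String,
}

// The JSON Resource Descriptor returned from
// /.well-known/webfinger?resource=acct:user@idm.example.com
#[derive(Serialize)]
struct JrdResponse {
    subject: String,
    links: Vec<JrdLink>,
}

fn webfinger_response(resource: &str, issuer_url: &str) -> JrdResponse {
    JrdResponse {
        subject: resource.to_string(),
        links: vec![JrdLink {
            // The OIDC issuer-discovery rel defined by the spec.
            rel: "http://openid.net/specs/connect/1.0/issuer".to_string(),
            href: issuer_url.to_string(),
        }],
    }
}
```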
dependabot[bot]
b96fe49b99
Bump the all group in /pykanidm with 7 updates (#3412)
Bumps the all group in /pykanidm with 7 updates:

| Package | From | To |
| --- | --- | --- |
| [aiohttp](https://github.com/aio-libs/aiohttp) | `3.11.11` | `3.11.12` |
| [ruff](https://github.com/astral-sh/ruff) | `0.9.4` | `0.9.5` |
| [mypy](https://github.com/python/mypy) | `1.14.1` | `1.15.0` |
| [coverage](https://github.com/nedbat/coveragepy) | `7.6.10` | `7.6.11` |
| [mkdocs-material](https://github.com/squidfunk/mkdocs-material) | `9.6.1` | `9.6.3` |
| [mkdocstrings](https://github.com/mkdocstrings/mkdocstrings) | `0.27.0` | `0.28.0` |
| [mkdocstrings-python](https://github.com/mkdocstrings/python) | `1.13.0` | `1.14.6` |


Updates `aiohttp` from 3.11.11 to 3.11.12
- [Release notes](https://github.com/aio-libs/aiohttp/releases)
- [Changelog](https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiohttp/compare/v3.11.11...v3.11.12)

Updates `ruff` from 0.9.4 to 0.9.5
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.9.4...0.9.5)

Updates `mypy` from 1.14.1 to 1.15.0
- [Changelog](https://github.com/python/mypy/blob/master/CHANGELOG.md)
- [Commits](https://github.com/python/mypy/compare/v1.14.1...v1.15.0)

Updates `coverage` from 7.6.10 to 7.6.11
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.6.10...7.6.11)

Updates `mkdocs-material` from 9.6.1 to 9.6.3
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.6.1...9.6.3)

Updates `mkdocstrings` from 0.27.0 to 0.28.0
- [Release notes](https://github.com/mkdocstrings/mkdocstrings/releases)
- [Changelog](https://github.com/mkdocstrings/mkdocstrings/blob/main/CHANGELOG.md)
- [Commits](https://github.com/mkdocstrings/mkdocstrings/compare/0.27.0...0.28.0)

Updates `mkdocstrings-python` from 1.13.0 to 1.14.6
- [Release notes](https://github.com/mkdocstrings/python/releases)
- [Changelog](https://github.com/mkdocstrings/python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/mkdocstrings/python/compare/1.13.0...1.14.6)

---
updated-dependencies:
- dependency-name: aiohttp
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mypy
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: all
- dependency-name: coverage
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocs-material
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: all
- dependency-name: mkdocstrings
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: all
- dependency-name: mkdocstrings-python
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: all
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-10 07:19:22 +10:00
231 changed files with 20369 additions and 6946 deletions

.cargo/config.toml (new file, 11 lines added)

@@ -0,0 +1,11 @@
# The default ld from glibc is impossibly slow and consumes huge amounts of
# memory. We use lld by default which often is twice as fast for significantly
# less ram consumption.
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=lld", "-Ctarget-cpu=native"]
[target.aarch64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=lld", "-Ctarget-cpu=native"]


@@ -10,3 +10,4 @@ kanidmd/sampledata
Makefile
target
test.db
Dockerfile


@@ -19,9 +19,7 @@ jobs:
steps:
- uses: actions/checkout@v4
- name: Setup sccache
uses: mozilla-actions/sccache-action@v0.0.7
with:
version: "v0.4.2"
uses: mozilla-actions/sccache-action@v0.0.9
- name: Install dependencies
run: |
sudo apt-get update && \
@@ -41,8 +39,6 @@ jobs:
steps:
- uses: actions/checkout@v4
- name: Setup sccache
uses: mozilla-actions/sccache-action@v0.0.7
with:
version: "v0.4.2"
uses: mozilla-actions/sccache-action@v0.0.9
- name: "Run cargo fmt"
run: cargo fmt --check


@@ -35,9 +35,15 @@ jobs:
needs:
- set_tag_values
steps:
- uses: actions/checkout@v4
- name: Checkout repository
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Docker metadata
id: meta
uses: docker/metadata-action@v5
- name: Build kanidm
uses: docker/build-push-action@v6
with:
@@ -47,6 +53,9 @@ jobs:
build-args: |
"KANIDM_FEATURES="
file: tools/Dockerfile
context: .
labels: ${{ steps.meta.outputs.labels }}
annotations: ${{ steps.meta.outputs.annotations }}
# Must use OCI exporter for multi-arch: https://github.com/docker/buildx/pull/1813
outputs: type=oci,dest=/tmp/kanidm-docker.tar
- name: Upload artifact
@@ -60,8 +69,8 @@ jobs:
# This step is split so that we don't apply "packages: write" permission
# except when uploading the final Docker image to GHCR.
runs-on: ubuntu-latest
if: ( github.ref_type == 'tag' || github.ref == 'refs/heads/master' ) && github.repository == 'kanidm/kanidm'
needs: kanidm_build
if: ( github.ref_type == 'tag' || github.ref == 'refs/heads/master' )
needs: [kanidm_build, set_tag_values]
permissions:
packages: write
@@ -78,4 +87,4 @@ jobs:
echo "${{ secrets.GITHUB_TOKEN }}" | \
oras login -u "${{ github.actor }}" --password-stdin ghcr.io
oras copy --from-oci-layout "/tmp/kanidm-docker.tar:devel" \
"ghcr.io/${{ github.repository_owner }}/kanidm:devel"
"ghcr.io/${{ needs.set_tag_values.outputs.owner_lc }}/kanidm:devel"


@@ -35,27 +35,15 @@ jobs:
runs-on: ubuntu-latest
needs: set_tag_values
steps:
- uses: actions/checkout@v4
- name: Checkout repository
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Docker metadata
id: meta
uses: docker/metadata-action@v5
with:
# list of Docker images to use as base name for tags
# images: |
# kanidm/kanidmd
# ghcr.io/username/app
# generate Docker tags based on the following events/attributes
tags: |
type=schedule
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
- name: Build kanidmd
uses: docker/build-push-action@v6
with:
@@ -64,6 +52,9 @@ jobs:
# build-args: |
# "KANIDM_BUILD_OPTIONS=-j1"
file: server/Dockerfile
context: .
labels: ${{ steps.meta.outputs.labels }}
annotations: ${{ steps.meta.outputs.annotations }}
# Must use OCI exporter for multi-arch: https://github.com/docker/buildx/pull/1813
outputs: type=oci,dest=/tmp/kanidmd-docker.tar
- name: Upload artifact
@@ -77,8 +68,8 @@ jobs:
# This step is split so that we don't apply "packages: write" permission
# except when uploading the final Docker image to GHCR.
runs-on: ubuntu-latest
if: ( github.ref_type== 'tag' || github.ref == 'refs/heads/master' ) && github.repository == 'kanidm/kanidm'
needs: kanidmd_build
if: ( github.ref_type== 'tag' || github.ref == 'refs/heads/master' )
needs: [kanidmd_build, set_tag_values]
permissions:
packages: write
@@ -95,4 +86,4 @@ jobs:
echo "${{ secrets.GITHUB_TOKEN }}" | \
oras login -u "${{ github.actor }}" --password-stdin ghcr.io
oras copy --from-oci-layout "/tmp/kanidmd-docker.tar:devel" \
"ghcr.io/${{ github.repository_owner }}/kanidmd:devel"
"ghcr.io/${{ needs.set_tag_values.outputs.owner_lc }}/kanidmd:devel"


@@ -35,17 +35,26 @@ jobs:
runs-on: ubuntu-latest
needs: set_tag_values
steps:
- uses: actions/checkout@v4
- name: Checkout repository
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Docker metadata
id: meta
uses: docker/metadata-action@v5
- name: Build radius
uses: docker/build-push-action@v6
with:
platforms: linux/arm64,linux/amd64
tags: ghcr.io/${{ needs.set_tag_values.outputs.owner_lc }}/radius:devel,ghcr.io/${{ needs.set_tag_values.outputs.owner_lc }}/radius:${{ needs.set_tag_values.outputs.ref_name}}
file: rlm_python/Dockerfile
context: .
labels: ${{ steps.meta.outputs.labels }}
annotations: ${{ steps.meta.outputs.annotations }}
# Must use OCI exporter for multi-arch: https://github.com/docker/buildx/pull/1813
outputs: type=oci,dest=/tmp/radius-docker.tar
- name: Upload artifact
@@ -59,8 +68,8 @@ jobs:
# This step is split so that we don't apply "packages: write" permission
# except when uploading the final Docker image to GHCR.
runs-on: ubuntu-latest
if: ( github.ref_type == 'tag' || github.ref == 'refs/heads/master' ) && github.repository == 'kanidm/kanidm'
needs: radius_build
if: ( github.ref_type == 'tag' || github.ref == 'refs/heads/master' )
needs: [radius_build, set_tag_values]
permissions:
packages: write
@@ -79,4 +88,4 @@ jobs:
echo "${{ secrets.GITHUB_TOKEN }}" | \
oras login -u "${{ github.actor }}" --password-stdin ghcr.io
oras copy --from-oci-layout "/tmp/radius-docker.tar:devel" \
"ghcr.io/${{ github.repository_owner }}/radius:devel"
"ghcr.io/${{ needs.set_tag_values.outputs.owner_lc }}/radius:devel"


@@ -24,9 +24,7 @@ jobs:
with:
ref: ${{ inputs.tag }}
- name: Setup sccache
uses: mozilla-actions/sccache-action@v0.0.7
with:
version: "v0.4.2"
uses: mozilla-actions/sccache-action@v0.0.9
- name: Install deps
run: |
sudo apt-get update


@@ -27,10 +27,7 @@ jobs:
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
- name: Setup sccache
uses: mozilla-actions/sccache-action@v0.0.7
with:
version: "v0.4.2"
uses: mozilla-actions/sccache-action@v0.0.9
- name: Install dependencies
run: |
sudo apt-get update && \
@@ -41,7 +38,7 @@ jobs:
libsystemd-dev
- name: "Build the workspace"
run: cargo build --workspace
run: cargo build --locked --workspace
- name: "Check disk space and size of target, then clean it"
run: |
df -h
@@ -75,10 +72,7 @@ jobs:
with:
toolchain: ${{ matrix.rust_version }}
- name: Setup sccache
uses: mozilla-actions/sccache-action@v0.0.7
with:
version: "v0.4.2"
uses: mozilla-actions/sccache-action@v0.0.9
- name: Install dependencies
run: |
sudo apt-get update && \
@@ -89,7 +83,7 @@ jobs:
libsystemd-dev
- name: "Build the workspace"
run: cargo build --workspace
run: cargo build --locked --workspace
- name: "Check disk space and size of target, then clean it"
run: |
df -h
@@ -118,10 +112,7 @@ jobs:
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
- name: Setup sccache
uses: mozilla-actions/sccache-action@v0.0.7
with:
version: "v0.4.2"
uses: mozilla-actions/sccache-action@v0.0.9
- name: Install dependencies
run: |
sudo apt-get update && \


@@ -28,9 +28,7 @@ jobs:
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
- name: Setup sccache
uses: mozilla-actions/sccache-action@v0.0.7
with:
version: "v0.4.2"
- run: cargo build -p kanidm_client -p kanidm_tools --bin kanidm
uses: mozilla-actions/sccache-action@v0.0.9
- run: cargo build --locked -p kanidm_client -p kanidm_tools --bin kanidm
# yamllint disable-line rule:line-length
- run: cargo test -p kanidm_client -p kanidm_tools

Cargo.lock (generated, 1035 changed lines)

File diff suppressed because it is too large


@@ -1,5 +1,5 @@
[workspace.package]
version = "1.5.0-dev"
version = "1.6.0-dev"
authors = [
"William Brown <william@blackhats.net.au>",
"James Hodgkinson <james@terminaloutcomes.com>",
@@ -123,20 +123,20 @@ codegen-units = 256
libnss = { git = "https://github.com/Firstyear/libnss-rs.git", branch = "20250207-freebsd" }
[workspace.dependencies]
kanidmd_core = { path = "./server/core", version = "=1.5.0-dev" }
kanidmd_lib = { path = "./server/lib", version = "=1.5.0-dev" }
kanidmd_lib_macros = { path = "./server/lib-macros", version = "=1.5.0-dev" }
kanidmd_testkit = { path = "./server/testkit", version = "=1.5.0-dev" }
kanidm_build_profiles = { path = "./libs/profiles", version = "=1.5.0-dev" }
kanidm_client = { path = "./libs/client", version = "=1.5.0-dev" }
kanidmd_core = { path = "./server/core", version = "=1.6.0-dev" }
kanidmd_lib = { path = "./server/lib", version = "=1.6.0-dev" }
kanidmd_lib_macros = { path = "./server/lib-macros", version = "=1.6.0-dev" }
kanidmd_testkit = { path = "./server/testkit", version = "=1.6.0-dev" }
kanidm_build_profiles = { path = "./libs/profiles", version = "=1.6.0-dev" }
kanidm_client = { path = "./libs/client", version = "=1.6.0-dev" }
kanidm-hsm-crypto = "^0.2.0"
kanidm_lib_crypto = { path = "./libs/crypto", version = "=1.5.0-dev" }
kanidm_lib_file_permissions = { path = "./libs/file_permissions", version = "=1.5.0-dev" }
kanidm_proto = { path = "./proto", version = "=1.5.0-dev" }
kanidm_unix_common = { path = "./unix_integration/common", version = "=1.5.0-dev" }
kanidm_utils_users = { path = "./libs/users", version = "=1.5.0-dev" }
scim_proto = { path = "./libs/scim_proto", version = "=1.5.0-dev" }
sketching = { path = "./libs/sketching", version = "=1.5.0-dev" }
kanidm_lib_crypto = { path = "./libs/crypto", version = "=1.6.0-dev" }
kanidm_lib_file_permissions = { path = "./libs/file_permissions", version = "=1.6.0-dev" }
kanidm_proto = { path = "./proto", version = "=1.6.0-dev" }
kanidm_unix_common = { path = "./unix_integration/common", version = "=1.6.0-dev" }
kanidm_utils_users = { path = "./libs/users", version = "=1.6.0-dev" }
scim_proto = { path = "./libs/scim_proto", version = "=1.6.0-dev" }
sketching = { path = "./libs/sketching", version = "=1.6.0-dev" }
anyhow = { version = "1.0.95" }
argon2 = { version = "0.5.3", features = ["alloc"] }
@ -159,12 +159,12 @@ base64 = "^0.22.1"
base64urlsafedata = "0.5.1"
bitflags = "^2.8.0"
bytes = "^1.9.0"
clap = { version = "^4.5.27", features = ["derive", "env"] }
clap = { version = "^4.5.34", features = ["derive", "env"] }
clap_complete = "^4.5.42"
# Forced by saffron/cron
chrono = "^0.4.39"
compact_jwt = { version = "^0.4.2", default-features = false }
concread = "^0.5.3"
concread = "^0.5.5"
cron = "0.15.0"
crossbeam = "0.8.4"
csv = "1.3.1"
@ -173,7 +173,7 @@ dhat = "0.3.3"
dyn-clone = "^1.0.17"
fernet = "^0.2.1"
filetime = "^0.2.24"
fs4 = "^0.12.0"
fs4 = "^0.13.0"
futures = "^0.3.31"
futures-util = { version = "^0.3.30", features = ["sink"] }
gix = { version = "0.64.0", default-features = false }
@ -190,7 +190,7 @@ image = { version = "0.24.9", default-features = false, features = [
"jpeg",
"webp",
] }
itertools = "0.13.0"
itertools = "0.14.0"
enum-iterator = "2.1.0"
kanidmd_web_ui_shared = { path = "./server/web_ui/shared" }
# REMOVE this
@ -202,14 +202,15 @@ libc = "^0.2.168"
libnss = "^0.8.0"
libsqlite3-sys = "^0.25.2"
lodepng = "3.11.0"
lru = "^0.12.5"
lru = "^0.13.0"
mathru = "^0.13.0"
md-5 = "0.10.6"
mimalloc = "0.1.43"
notify-debouncer-full = { version = "0.1" }
notify-debouncer-full = { version = "0.5" }
num_enum = "^0.5.11"
oauth2_ext = { version = "^4.4.2", package = "oauth2", default-features = false }
openssl-sys = "^0.9"
openssl = "^0.10.70"
openssl = "^0.10.72"
opentelemetry = { version = "0.27.0" }
opentelemetry_api = { version = "0.27.0", features = ["logs", "metrics"] }
@ -240,6 +241,7 @@ reqwest = { version = "0.12.12", default-features = false, features = [
"json",
"gzip",
"rustls-tls-native-roots",
"rustls-tls-native-roots-no-provider",
] }
rusqlite = { version = "^0.28.0", features = ["array", "bundled"] }
rustls = { version = "0.23.21", default-features = false, features = [
@ -266,7 +268,7 @@ tempfile = "3.15.0"
testkit-macros = { path = "./server/testkit-macros" }
time = { version = "^0.3.36", features = ["formatting", "local-offset"] }
tokio = "^1.43.0"
tokio = "^1.44.2"
tokio-openssl = "^0.6.5"
tokio-util = "^0.7.13"
@ -292,7 +294,7 @@ webauthn-rs = { version = "0.5.1", features = ["preview-features"] }
webauthn-rs-core = "0.5.1"
webauthn-rs-proto = "0.5.1"
whoami = "^1.5.2"
whoami = "^1.6.0"
walkdir = "2"
x509-cert = "0.2.5"

View file

@ -53,7 +53,6 @@
- [Service Integration Examples](examples/readme.md)
- [Kubernetes Ingress](examples/kubernetes_ingress.md)
- [OAuth2 Examples](integrations/oauth2/examples.md)
- [Traefik](examples/traefik.md)
- [Replication](repl/readme.md)

View file

@ -58,6 +58,21 @@ can only use the UID range `1879048192` (`0x70000000`) to `2147483647` (`0x7fffffff`) due to
limitations of the Linux kernel and
[systemd reserving other uids in the range](http://systemd.io/UIDS-GIDS/) for its exclusive use.
| name | min | max |
|------|-----|-----|
| system | 0 | 999 |
| user | 1000 | 60000 |
| systemd homed | 60001 | 60577 |
| unused A | 60578 | 61183 |
| systemd dynuser | 61184 | 65519 |
| unused B | 65520 | 65533 |
| nobody | 65534 | 65534 |
| 16bit limit | 65535 | 65535 |
| unused C | 65536 | 524287 |
| systemd nspawn | 524288 | 1879048191 |
| kanidm dyn alloc | 1879048192 | 2147483647 |
| unusable | 2147483648 | 4294967295 |
A valid concern is the possibility of duplication in the lower 24 bits. Given the
[birthday problem](https://en.wikipedia.org/wiki/Birthday_problem), if you have ~7700 groups and
accounts, you have a 50% chance of duplication. With ~5000 you have a 25% chance, ~930 you have a 1%
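To reproduce these estimates for your own population size, the standard birthday approximation `p ≈ 1 − e^(−n(n−1)/2N)` applies. The sketch below is illustrative only; the pool size `N` is an assumption chosen to show the curve, not a statement of exactly which ID space Kanidm samples.
```rust
/// Birthday approximation: probability that at least two of `n` randomly
/// allocated IDs collide in a pool of `pool` possible values.
fn collision_probability(n: u64, pool: u64) -> f64 {
    let (n, pool) = (n as f64, pool as f64);
    1.0 - (-(n * (n - 1.0)) / (2.0 * pool)).exp()
}

fn main() {
    // Assumed pool size, for illustration only.
    let pool: u64 = 1 << 24;
    println!("{:.1}%", 100.0 * collision_probability(1_000, pool));
}
```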
@ -67,7 +82,7 @@ We advise that if you have a site with greater than approximately 2,000 users you should use an
external system to allocate GID numbers serially or consistently to avoid potential duplication
events.
We recommend the use of the range `65536` through `524287` for manual allocation. This leaves the
We recommend the use of the range `65536` through `524287` (`unused C`) for manual allocation. This leaves the
range `1000` through `65535` for OS/Distro purposes, and allows Kanidm to continue dynamic
allocation in the range `1879048192` to `2147483647` if you choose a hybrid allocation approach.
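As a quick way to see the split, the boundaries from the table above can be encoded directly; this is a hypothetical helper for illustration, not part of Kanidm.
```rust
/// Classify a GID against the ranges discussed on this page; boundaries are
/// taken from the allocation table above.
fn classify(gid: u32) -> &'static str {
    match gid {
        65_536..=524_287 => "manual allocation (unused C)",
        1_879_048_192..=2_147_483_647 => "kanidm dynamic allocation",
        _ => "OS/distro or otherwise reserved",
    }
}

fn main() {
    assert_eq!(classify(70_000), "manual allocation (unused C)");
    assert_eq!(classify(1_879_048_192), "kanidm dynamic allocation");
}
```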

View file

@ -45,6 +45,7 @@ can take many forms such as.
- firstname firstname lastname
- firstname lastname lastname
- firstname
- middlename lastname
- lastname firstname
And many, many more that are not listed here. This is why our names are displayName as a freetext

View file

@ -100,21 +100,16 @@ You will need [rustup](https://rustup.rs/) to install a Rust toolchain.
### SUSE / OpenSUSE
> NOTE: clang and lld are required to build Kanidm due to performance issues with GCC/ld
You will need to install rustup and our build dependencies with:
```bash
zypper in rustup git libudev-devel sqlite3-devel libopenssl-3-devel libselinux-devel pam-devel systemd-devel tpm2-0-tss-devel
zypper in rustup git libudev-devel sqlite3-devel libopenssl-3-devel libselinux-devel \
pam-devel systemd-devel tpm2-0-tss-devel clang lld make sccache
```
You can then use rustup to complete the setup of the toolchain.
In some cases you may need to build other vendored components, or use an alternate linker. In these
cases we advise you to also install.
```bash
zypper in clang lld make sccache
```
You should also adjust your environment with:
```bash
@ -123,25 +118,16 @@ export CC="sccache /usr/bin/clang"
export CXX="sccache /usr/bin/clang++"
```
And add the following to a cargo config of your choice (such as ~/.cargo/config), adjusting for your
CPU architecture:
```toml
[target.aarch64-unknown-linux-gnu]
linker = "clang"
rustflags = [
"-C", "link-arg=-fuse-ld=lld",
]
```
### Fedora
> NOTE: clang and lld are required to build Kanidm due to performance issues with GCC/ld
You will need [rustup](https://rustup.rs/) to install a Rust toolchain.
You will also need some system libraries to build this:
```text
systemd-devel sqlite-devel openssl-devel pam-devel
systemd-devel sqlite-devel openssl-devel pam-devel clang lld
```
Building the Web UI requires additional packages:
@ -152,12 +138,14 @@ perl-FindBin perl-File-Compare
### Ubuntu
> NOTE: clang and lld are required to build Kanidm due to performance issues with GCC/ld
You need [rustup](https://rustup.rs/) to install a Rust toolchain.
You will also need some system libraries to build this, which can be installed by running:
```bash
sudo apt-get install libudev-dev libssl-dev libsystemd-dev pkg-config libpam0g-dev
sudo apt-get install libudev-dev libssl-dev libsystemd-dev pkg-config libpam0g-dev clang lld
```
Tested with Ubuntu 20.04 and 22.04.

View file

@ -54,7 +54,6 @@ services:
- traefik.http.routers.kanidm.entrypoints=websecure
- traefik.http.routers.kanidm.rule=Host(`idm.example.com`)
- traefik.http.routers.kanidm.service=kanidm
- traefik.http.serversTransports.kanidm.insecureSkipVerify=true
- traefik.http.services.kanidm.loadbalancer.server.port=8443
- traefik.http.services.kanidm.loadbalancer.server.scheme=https
volumes:

View file

@ -137,9 +137,9 @@ chmod 666 ~/.cache/kanidm_tokens
docker pull kanidm/tools:latest
docker run --rm -i -t \
--network host \
--mount "type=bind,src=/etc/kanidm/config,target=/etc/kanidm/config" \
--mount "type=bind,src=$HOME/.config/kanidm,target=/home/kanidm/.config/kanidm" \
--mount "type=bind,src=$HOME/.cache/kanidm_tokens,target=/home/kanidm/.cache/kanidm_tokens" \
--mount "type=bind,src=/etc/kanidm/config,target=/data/config:ro" \
--mount "type=bind,src=$HOME/.config/kanidm,target=/root/.config/kanidm" \
--mount "type=bind,src=$HOME/.cache/kanidm_tokens,target=/root/.cache/kanidm_tokens" \
kanidm/tools:latest \
/sbin/kanidm --help
```

View file

@ -70,11 +70,10 @@ anything special for Kanidm (or another provider).
**Note:** some apps automatically append `/.well-known/openid-configuration` to
the end of an OIDC Discovery URL, so you may need to omit that.
</dd>
<dt>
[RFC 8414 OAuth 2.0 Authorisation Server Metadata](https://datatracker.ietf.org/doc/html/rfc8414) URL
[RFC 8414 OAuth 2.0 Authorisation Server Metadata](https://datatracker.ietf.org/doc/html/rfc8414)
URL **(recommended)**
</dt>
@ -86,6 +85,21 @@ the end of an OIDC Discovery URL, so you may need to omit that.
<dt>
[WebFinger URL](#webfinger) **(discouraged)**
</dt>
<dd>
`https://idm.example.com/oauth2/openid/:client_id:/.well-known/webfinger`
See [the WebFinger section](#webfinger) for more details, as there are a number of
caveats for WebFinger clients.
</dd>
<dt>
User auth
</dt>
@ -148,7 +162,7 @@ Token endpoint
<dt>
OpenID Connect issuer URI
OpenID Connect Issuer URL
</dt>
@ -190,7 +204,7 @@ Token signing public key
### Create the Kanidm Configuration
By default, members of the `system_admins` or `idm_hp_oauth2_manage_priv` groups are able to create
By default, members of the `idm_admins` or `idm_oauth2_admins` groups are able to create
or manage OAuth2 client integrations.
You can create a new client by specifying its client name, application display name and the landing
@ -441,3 +455,86 @@ kanidm system oauth2 reset-secrets
```
Each client has unique signing keys and access secrets, so this is limited to each service.
## WebFinger
[WebFinger][webfinger] provides a mechanism for discovering information about
entities at a well-known URL (`https://{hostname}/.well-known/webfinger`).
It can be used by a WebFinger client to
[discover the OIDC Issuer URL][webfinger-oidc] of an identity provider from the
hostname alone, and seems to be intended to support dynamic client registration
flows for large public identity providers.
Kanidm v1.5.1 and later can respond to WebFinger requests, using a user's SPN as
part of [an `acct` URI][rfc7565] (eg: `acct:user@idm.example.com`). While SPNs
and `acct` URIs look like email addresses, [as per RFC 7565][rfc7565s4], there
is no guarantee that such an identifier is valid for any particular application protocol, unless
an administrator explicitly provides for it.
When setting up an application to authenticate with Kanidm, WebFinger **does not
add any security** over configuring an OIDC Discovery URL directly. In an OIDC
context, the specification makes a number of flawed assumptions which make it
difficult to use with Kanidm:
* WebFinger assumes that an identity provider will use the same Issuer URL and
OIDC Discovery document (which contains endpoint URLs and token signing keys)
for *all* OAuth 2.0/OIDC clients.
Kanidm uses *client-specific* Issuer URLs, endpoint URLs and token signing
keys. This ensures that tokens can only be used with their intended service.
* WebFinger endpoints must be served at the *root* of the domain of a user's
SPN (ie: information about the user with SPN `user@idm.example.com` is at
`https://idm.example.com/.well-known/webfinger?resource=acct%3Auser%40idm.example.com`).
Unlike OIDC Discovery, WebFinger clients do not report their OAuth 2.0/OIDC
client ID in the request, so there is no way to tell them apart.
As a result, Kanidm *does not* provide a WebFinger endpoint at its root URL,
because it could report an incorrect Issuer URL and lead the client to an
incorrect OIDC Discovery document.
You will need a load balancer in front of Kanidm's HTTPS server to send an HTTP
307 redirect to the appropriate
`/oauth2/openid/:client_id:/.well-known/webfinger` URL, *while preserving all
query parameters*. For example, with Caddy:
```caddy
# Match on a prefix, and use {uri} to preserve all query parameters.
# This only supports *one* client.
example.com {
redir /.well-known/webfinger https://idm.example.com/oauth2/openid/:client_id:{uri} 307
}
```
If you have *multiple* WebFinger clients, your load balancer will need to map
some other property of the request (such as a source IP address or
`User-Agent` header) to a client ID, and redirect to the appropriate
WebFinger URL for that client; a Rust sketch of this mapping follows after
this list.
* Kanidm responds to *all* WebFinger queries with
[an Identity Provider Discovery for OIDC URL][webfinger-oidc], **ignoring**
[`rel` parameter(s)][webfinger-rel].
If you want to use WebFinger in any *other* context on Kanidm's hostname,
you'll need a load balancer in front of Kanidm which matches on some property
of the request.
WebFinger clients *may* omit the `rel=` parameter, so if you host another
service with relations for a Kanidm [`acct:` entity][rfc7565s4] and a client
*does not* supply the `rel=` parameter, your load balancer will need to merge
JSON responses from Kanidm and the other service(s).
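If you prefer to keep the per-client mapping in a small service rather than in load balancer configuration, a minimal sketch of the same logic follows. It assumes the `axum` and `tokio` crates; the hostnames, addresses, and client IDs are placeholders, and this is an illustration rather than a supported Kanidm component.
```rust
use axum::{
    extract::{ConnectInfo, RawQuery},
    response::Redirect,
    routing::get,
    Router,
};
use std::net::SocketAddr;

// Placeholder mapping from a request property (here: source IP) to the
// OAuth2 client ID whose WebFinger endpoint should answer.
fn client_id_for(addr: &SocketAddr) -> &'static str {
    match addr.ip().to_string().as_str() {
        "192.0.2.10" => "webapp",
        _ => "other_client",
    }
}

async fn webfinger(
    ConnectInfo(addr): ConnectInfo<SocketAddr>,
    RawQuery(query): RawQuery,
) -> Redirect {
    let client_id = client_id_for(&addr);
    // Preserve *all* query parameters, as Kanidm requires.
    let qs = query.map(|q| format!("?{q}")).unwrap_or_default();
    // Redirect::temporary sends an HTTP 307.
    Redirect::temporary(&format!(
        "https://idm.example.com/oauth2/openid/{client_id}/.well-known/webfinger{qs}"
    ))
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/.well-known/webfinger", get(webfinger));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await.unwrap();
    axum::serve(
        listener,
        app.into_make_service_with_connect_info::<SocketAddr>(),
    )
    .await
    .unwrap();
}
```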
Because of these issues, we recommend that applications support *directly*
configuring OIDC using a Discovery URL or OAuth 2.0 Authorisation Server
Metadata URL instead of WebFinger.
If a WebFinger client only checks WebFinger once during setup, you may wish to
temporarily serve an appropriate static WebFinger document for that client
instead.
[rfc7565]: https://datatracker.ietf.org/doc/html/rfc7565
[rfc7565s4]: https://datatracker.ietf.org/doc/html/rfc7565#section-4
[webfinger]: https://datatracker.ietf.org/doc/html/rfc7033
[webfinger-oidc]: https://datatracker.ietf.org/doc/html/rfc7033#section-3.1
[webfinger-rel]: https://datatracker.ietf.org/doc/html/rfc7033#section-4.3

View file

@ -556,6 +556,65 @@ php occ config:app:set --value=0 user_oidc allow_multiple_user_backends
You can login directly by appending `?direct=1` to your login page. You can re-enable other backends
by setting the value to `1`
## OAuth2 Proxy
OAuth2 Proxy is a reverse proxy that provides authentication with OpenID Connect identity providers.
It is typically used to secure web applications without native OpenID Connect support.
Prepare the environment.
Due to a [lack of public client support](https://github.com/oauth2-proxy/oauth2-proxy/issues/1714), we have to set it up as a basic client.
```bash
kanidm system oauth2 create webapp 'webapp.example.com' 'https://webapp.example.com'
kanidm system oauth2 add-redirect-url webapp 'https://webapp.example.com/oauth2/callback'
kanidm system oauth2 update-scope-map webapp email openid
kanidm system oauth2 get webapp
kanidm system oauth2 show-basic-secret webapp
<SECRET>
```
Create a user group.
```bash
kanidm group create 'webapp_admin'
```
Set up the claim-map to add `webapp_group` to the userinfo claim.
```bash
kanidm system oauth2 update-claim-map-join 'webapp' 'webapp_group' array
kanidm system oauth2 update-claim-map 'webapp' 'webapp_group' 'webapp_admin' 'webapp_admin'
```
Authorize users for the application.
Additionally, OAuth2 Proxy requires that all users have an email address; see this issue for more details:
- <https://github.com/oauth2-proxy/oauth2-proxy/issues/2667>
```bash
kanidm person update '<user>' --legalname 'Personal Name' --mail 'user@example.com'
kanidm group add-members 'webapp_admin' '<user>'
```
And add the following to your OAuth2 Proxy config.
```toml
provider = "oidc"
scope = "openid email"
# change to match your kanidm domain and client id
oidc_issuer_url = "https://idm.example.com/oauth2/openid/webapp"
# client ID from `kanidm system oauth2 create`
client_id = "webapp"
# redirect URL from `kanidm system add-redirect-url webapp`
redirect_url = "https://webapp.example.com/oauth2/callback"
# claim name from `kanidm system oauth2 update-claim-map-join`
oidc_groups_claim = "webapp_group"
# user group from `kanidm group create`
allowed_groups = ["webapp_admin"]
# secret from `kanidm system oauth2 show-basic-secret webapp`
client_secret = "<SECRET>"
```
## Outline
> These instructions were tested with self-hosted Outline 0.80.2.

View file

@ -97,7 +97,7 @@ kanidm group add-members --name admin idm_radius_servers radius_service_account
Now reset the account password, using the `admin` account:
```bash
kanidm service-account credential generate --name admin radius_service_account
kanidm service-account api-token generate --name admin radius_service_account
```
## Deploying a RADIUS Container

View file

@ -5,57 +5,45 @@
- Debian packaging is complex enough that it lives in a separate repository:
[kanidm/kanidm_ppa_automation](https://github.com/kanidm/kanidm_ppa_automation).
- While official packages are available at https://kanidm.github.io/kanidm_ppa/, these instructions will guide you
through replicating the same process locally, using [cross](https://github.com/cross-rs/cross) & Docker to isolate the build process
from your normal computer and allow building packages for multiple architectures.
through replicating the same process locally, using Docker to isolate the build process from your normal computer.
- Due to the complexity of cross-compilation, we no longer support it and recommend building natively,
i.e. on the platform you're targeting.
- While the examples below will use `aarch64-unknown-linux-gnu` aka `arm64`,
the same process works for `x86_64-unknown-linux-gnu` aka `amd64` as well.
1. Start in the root directory of the main [kanidm/kanidm](https://github.com/kanidm/kanidm) repository.
1. Install cross:
```shell
cargo install cross
```
1. Pull in the separate deb packaging submodule:
```shell
git submodule update platform/debian/kanidm_ppa_automation
```
1. Launch your desired crossbuild target. Do note the script assumes you use rustup!
```shell
# See valid targets:
platform/debian/kanidm_ppa_automation/scripts/crossbuild.sh
# Launch a target:
platform/debian/kanidm_ppa_automation/scripts/crossbuild.sh debian-12-aarch64-unknown-linux-gnu
# You can also specify multiple targets within the same distribution:
platform/debian/kanidm_ppa_automation/scripts/crossbuild.sh debian-12-{aarch64,x86_64}-unknown-linux-gnu
```
1. Go get a drink of your choice while the build completes.
1. Create a sacrificial deb builder container to avoid changing your own system:
```shell
docker run --rm -it -e CI=true \
docker run --rm -it -e VERBOSE=true -e CI=true \
--mount "type=bind,src=$PWD,target=/src" \
--workdir /src \
rust:bookworm
```
1. In the container install dependencies with:
```shell
# The parameter given is which additional target debian architecture to enable (amd64, arm64, etc.)
# If your native platform is amd64, running with arm64 is enough to cover both archs.
platform/debian/kanidm_ppa_automation/scripts/install_ci_build_dependencies.sh arm64
platform/debian/kanidm_ppa_automation/scripts/install_ci_build_dependencies.sh
```
1. In the container launch the deb build:
1. Launch your desired target build:
```shell
platform/debian/kanidm_ppa_automation/scripts/build_native.sh aarch64-unknown-linux-gnu
```
1. Go get a drink of your choice while the build completes.
1. Launch the deb build:
```shell
platform/debian/kanidm_ppa_automation/scripts/build_debs.sh aarch64-unknown-linux-gnu
# Again, multiple targets also work:
platform/debian/kanidm_ppa_automation/scripts/build_debs.sh {aarch64,x86_64}-unknown-linux-gnu
```
1. You can now exit the container; the package paths displayed at the end under `target` will
persist.
## Adding or amending a deb package
The rough overview of steps is:
The rough overview of steps is as follows; see further down for details.
1. Add cargo-deb specific metadata to the rust package and any static assets. Submit your changes as
a PR.
2. Add build instructions to the separate packaging repo. Submit your changes as a PR.
2. Add build steps to the separate packaging repo. Submit your changes as a PR.
3. Go back to the main repo to update the packaging submodule reference to aid running manual dev
builds of the new package.
@ -72,8 +60,8 @@ an example, see `unix_integration/resolver/Cargo.toml`
### Configuration in the kanidm_ppa_automation repo
- The repo is: [kanidm/kanidm_ppa_automation](https://github.com/kanidm/kanidm_ppa_automation)
- Changes are needed if a new binary and/or package is added, or if build time dependencies change.
- Amend `scripts/crossbuild.sh` build rules to include new binaries or packages with shared
libraries. Search for the lines starting with `cross build`.
- Amend `scripts/build_native.sh` build rules to include new binaries or packages with shared
libraries.
- Add any new build time system dependencies to `scripts/install_ci_build_dependencies.sh`, be aware
of any difference in package names between Debian & Ubuntu.
- Add any new packages to `scripts/build_debs.sh`, search for the line starting with `for package in`.

View file

@ -147,8 +147,8 @@ Features or APIs may be removed with 1 release versions notice. Deprecations wil
### Python module
The python module will typically trail changes in functionality of the core Rust code, and will be
developed as we it for our own needs - please feel free to add functionality or improvements, or
The Python module will typically trail changes in functionality of the core Rust code, and has been
developed as we have needed it - please feel free to add functionality or improvements, or
[ask for them in a Github issue](http://github.com/kanidm/kanidm/issues/new/choose)!
All code changes will include full type-casting wherever possible.

View file

@ -22,6 +22,7 @@ This is a list of supported features and standards within Kanidm.
- [RFC4519 LDAP Schema](https://www.rfc-editor.org/rfc/rfc4519)
- FreeIPA User Schema
- [RFC7644 SCIM Bulk Data Import](https://www.rfc-editor.org/rfc/rfc7644)
- NOTE: SCIM is only supported for synchronisation from another IDP at this time.
# Database

View file

@ -1,5 +1,5 @@
# Kanidm minimal Service Configuration - /etc/kanidm/config
# For a full example and documentation, see /usr/share/kanidm/kanidm
# For a full example and documentation, see /usr/share/kanidm/config
# or `example/kanidm` in the source repository.
# Replace this with your kanidmd URI and uncomment the line

View file

@ -1,3 +1,6 @@
# The server configuration file version.
version = "2"
# The webserver bind address. Requires TLS certificates.
# If the port is set to 443 you may require the
# NET_BIND_SERVICE capability.

View file

@ -1,3 +1,6 @@
# The server configuration file version.
version = "2"
# The webserver bind address. Requires TLS certificates.
# If the port is set to 443 you may require the
# NET_BIND_SERVICE capability.

View file

@ -3,6 +3,9 @@
# The configuration file version.
version = '2'
# ⚠️ Ensure that you have the [kanidm] or other provider sections below
# configured else accounts from remote sources will not be available.
# Kanidm unix will bind all cached credentials to a local Hardware Security
# Module (HSM) to prevent exfiltration and attacks against these. In addition,
# any internal private keys will also be stored in this HSM.
@ -136,6 +139,7 @@ version = '2'
# allow_local_account_override = ["admin"]
# ========================================
# This section enables the Kanidm provider
[kanidm]

View file

@ -195,4 +195,20 @@ impl KanidmClient {
self.perform_get_request(&format!("/v1/group/{}/_attr/mail", id))
.await
}
pub async fn idm_group_purge_description(&self, id: &str) -> Result<(), ClientError> {
self.idm_group_purge_attr(id, "description").await
}
pub async fn idm_group_set_description(
&self,
id: &str,
description: &str,
) -> Result<(), ClientError> {
self.perform_put_request(
&format!("/v1/group/{}/_attr/description", id),
&[description],
)
.await
}
}
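For illustration, the new description helpers might be driven like this; a hypothetical snippet that assumes an already-built, authenticated client.
```rust
use kanidm_client::{ClientError, KanidmClient};

// Hypothetical: set, then purge, a group's description via the new helpers.
async fn demo(client: &KanidmClient) -> Result<(), ClientError> {
    client
        .idm_group_set_description("demo_group", "Example description")
        .await?;
    client.idm_group_purge_description("demo_group").await?;
    Ok(())
}
```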

View file

@ -27,11 +27,12 @@ use std::time::Duration;
use compact_jwt::Jwk;
pub use http;
use kanidm_proto::constants::uri::V1_AUTH_VALID;
use kanidm_proto::constants::{
ATTR_DOMAIN_DISPLAY_NAME, ATTR_DOMAIN_LDAP_BASEDN, ATTR_DOMAIN_SSID, ATTR_ENTRY_MANAGED_BY,
ATTR_KEY_ACTION_REVOKE, ATTR_LDAP_ALLOW_UNIX_PW_BIND, ATTR_NAME, CLIENT_TOKEN_CACHE, KOPID,
KSESSIONID, KVERSION,
ATTR_KEY_ACTION_REVOKE, ATTR_LDAP_ALLOW_UNIX_PW_BIND, ATTR_LDAP_MAX_QUERYABLE_ATTRS, ATTR_NAME,
CLIENT_TOKEN_CACHE, KOPID, KSESSIONID, KVERSION,
};
use kanidm_proto::internal::*;
use kanidm_proto::v1::*;
@ -94,7 +95,7 @@ pub struct KanidmClientConfigInstance {
pub verify_hostnames: Option<bool>,
/// Whether to verify the Certificate Authority details of the server's TLS certificate, defaults to `true`.
///
/// Environment variable is slightly inverted - `KANIDM_SKIP_HOSTNAME_VERIFICATION`.
/// Environment variable is slightly inverted - `KANIDM_ACCEPT_INVALID_CERTS`.
pub verify_ca: Option<bool>,
/// Optionally you can specify the path of a CA certificate to use for verifying the server, if you're not using one trusted by your system certificate store.
///
@ -137,6 +138,7 @@ pub struct KanidmClientBuilder {
use_system_proxies: bool,
/// Where to store auth tokens, only use in testing!
token_cache_path: Option<String>,
disable_system_ca_store: bool,
}
impl Display for KanidmClientBuilder {
@ -170,33 +172,6 @@ impl Display for KanidmClientBuilder {
}
}
#[test]
fn test_kanidmclientbuilder_display() {
let defaultclient = KanidmClientBuilder::default();
println!("{}", defaultclient);
assert!(defaultclient.to_string().contains("verify_ca"));
let testclient = KanidmClientBuilder {
address: Some("https://example.com".to_string()),
verify_ca: true,
verify_hostnames: true,
ca: None,
connect_timeout: Some(420),
request_timeout: Some(69),
use_system_proxies: true,
token_cache_path: Some(CLIENT_TOKEN_CACHE.to_string()),
};
println!("testclient {}", testclient);
assert!(testclient.to_string().contains("verify_ca: true"));
assert!(testclient.to_string().contains("verify_hostnames: true"));
let badness = testclient.danger_accept_invalid_hostnames(true);
let badness = badness.danger_accept_invalid_certs(true);
println!("badness: {}", badness);
assert!(badness.to_string().contains("verify_ca: false"));
assert!(badness.to_string().contains("verify_hostnames: false"));
}
#[derive(Debug)]
pub struct KanidmClient {
pub(crate) client: reqwest::Client,
@ -233,6 +208,7 @@ impl KanidmClientBuilder {
request_timeout: None,
use_system_proxies: true,
token_cache_path: None,
disable_system_ca_store: false,
}
}
@ -290,6 +266,7 @@ impl KanidmClientBuilder {
request_timeout,
use_system_proxies,
token_cache_path,
disable_system_ca_store,
} = self;
// Process and apply all our options if they exist.
let address = match kcc.uri {
@ -316,6 +293,7 @@ impl KanidmClientBuilder {
request_timeout,
use_system_proxies,
token_cache_path,
disable_system_ca_store,
})
}
@ -416,6 +394,16 @@ impl KanidmClientBuilder {
}
}
/// Enable or disable the native ca roots. By default these roots are enabled.
pub fn enable_native_ca_roots(self, enable: bool) -> Self {
KanidmClientBuilder {
// We have to flip the bool state here because Default for bool is false,
// while we want the option to read positively to a native speaker.
disable_system_ca_store: !enable,
..self
}
}
pub fn danger_accept_invalid_hostnames(self, accept_invalid_hostnames: bool) -> Self {
KanidmClientBuilder {
// We have to flip the bool state here due to english language.
@ -453,6 +441,13 @@ impl KanidmClientBuilder {
}
}
pub fn set_token_cache_path(self, token_cache_path: Option<String>) -> Self {
KanidmClientBuilder {
token_cache_path,
..self
}
}
#[allow(clippy::result_unit_err)]
pub fn add_root_certificate_filepath(self, ca_path: &str) -> Result<Self, ClientError> {
//Okay we have a ca to add. Let's read it in and setup.
@ -520,6 +515,7 @@ impl KanidmClientBuilder {
// implement sticky sessions with cookies.
.cookie_store(true)
.cookie_provider(client_cookies.clone())
.tls_built_in_native_certs(!self.disable_system_ca_store)
.danger_accept_invalid_hostnames(!self.verify_hostnames)
.danger_accept_invalid_certs(!self.verify_ca);
@ -572,32 +568,6 @@ impl KanidmClientBuilder {
}
}
#[test]
fn test_make_url() {
use kanidm_proto::constants::DEFAULT_SERVER_ADDRESS;
let client: KanidmClient = KanidmClientBuilder::new()
.address(format!("https://{}", DEFAULT_SERVER_ADDRESS))
.build()
.unwrap();
assert_eq!(
client.get_url(),
Url::parse(&format!("https://{}", DEFAULT_SERVER_ADDRESS)).unwrap()
);
assert_eq!(
client.make_url("/hello"),
Url::parse(&format!("https://{}/hello", DEFAULT_SERVER_ADDRESS)).unwrap()
);
let client: KanidmClient = KanidmClientBuilder::new()
.address(format!("https://{}/cheese/", DEFAULT_SERVER_ADDRESS))
.build()
.unwrap();
assert_eq!(
client.make_url("hello"),
Url::parse(&format!("https://{}/cheese/hello", DEFAULT_SERVER_ADDRESS)).unwrap()
);
}
/// This is probably pretty jank but it works and was pulled from here:
/// <https://github.com/seanmonstar/reqwest/issues/1602#issuecomment-1220996681>
fn find_reqwest_error_source<E: std::error::Error + 'static>(
@ -616,6 +586,11 @@ fn find_reqwest_error_source<E: std::error::Error + 'static>(
}
impl KanidmClient {
/// Access the underlying reqwest client that has been configured for this Kanidm server
pub fn client(&self) -> &reqwest::Client {
&self.client
}
pub fn get_origin(&self) -> &Url {
&self.origin
}
@ -2075,6 +2050,18 @@ impl KanidmClient {
.await
}
/// Sets the maximum number of LDAP attributes that can be queried in a single operation
pub async fn idm_domain_set_ldap_max_queryable_attrs(
&self,
max_queryable_attrs: usize,
) -> Result<(), ClientError> {
self.perform_put_request(
&format!("/v1/domain/_attr/{}", ATTR_LDAP_MAX_QUERYABLE_ATTRS),
vec![max_queryable_attrs.to_string()],
)
.await
}
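// Hypothetical usage, raising the limit above the default of 16:
//   client.idm_domain_set_ldap_max_queryable_attrs(32).await?;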
pub async fn idm_set_ldap_allow_unix_password_bind(
&self,
enable: bool,
@ -2155,31 +2142,97 @@ impl KanidmClient {
}
}
#[tokio::test]
async fn test_no_client_version_check_on_502() {
let res = reqwest::Response::from(
http::Response::builder()
.status(StatusCode::GATEWAY_TIMEOUT)
.body("")
.unwrap(),
);
let client = KanidmClientBuilder::new()
.address("http://localhost:8080".to_string())
.build()
.expect("Failed to build client");
eprintln!("This should pass because we are returning 504 and shouldn't check version...");
client.expect_version(&res).await;
#[cfg(test)]
mod tests {
use super::{KanidmClient, KanidmClientBuilder};
use kanidm_proto::constants::CLIENT_TOKEN_CACHE;
use reqwest::StatusCode;
use url::Url;
let res = reqwest::Response::from(
http::Response::builder()
.status(StatusCode::BAD_GATEWAY)
.body("")
.unwrap(),
);
let client = KanidmClientBuilder::new()
.address("http://localhost:8080".to_string())
.build()
.expect("Failed to build client");
eprintln!("This should pass because we are returning 502 and shouldn't check version...");
client.expect_version(&res).await;
#[tokio::test]
async fn test_no_client_version_check_on_502() {
let res = reqwest::Response::from(
http::Response::builder()
.status(StatusCode::GATEWAY_TIMEOUT)
.body("")
.unwrap(),
);
let client = KanidmClientBuilder::new()
.address("http://localhost:8080".to_string())
.enable_native_ca_roots(false)
.build()
.expect("Failed to build client");
eprintln!("This should pass because we are returning 504 and shouldn't check version...");
client.expect_version(&res).await;
let res = reqwest::Response::from(
http::Response::builder()
.status(StatusCode::BAD_GATEWAY)
.body("")
.unwrap(),
);
let client = KanidmClientBuilder::new()
.address("http://localhost:8080".to_string())
.enable_native_ca_roots(false)
.build()
.expect("Failed to build client");
eprintln!("This should pass because we are returning 502 and shouldn't check version...");
client.expect_version(&res).await;
}
#[test]
fn test_make_url() {
use kanidm_proto::constants::DEFAULT_SERVER_ADDRESS;
let client: KanidmClient = KanidmClientBuilder::new()
.address(format!("https://{}", DEFAULT_SERVER_ADDRESS))
.enable_native_ca_roots(false)
.build()
.unwrap();
assert_eq!(
client.get_url(),
Url::parse(&format!("https://{}", DEFAULT_SERVER_ADDRESS)).unwrap()
);
assert_eq!(
client.make_url("/hello"),
Url::parse(&format!("https://{}/hello", DEFAULT_SERVER_ADDRESS)).unwrap()
);
let client: KanidmClient = KanidmClientBuilder::new()
.address(format!("https://{}/cheese/", DEFAULT_SERVER_ADDRESS))
.enable_native_ca_roots(false)
.build()
.unwrap();
assert_eq!(
client.make_url("hello"),
Url::parse(&format!("https://{}/cheese/hello", DEFAULT_SERVER_ADDRESS)).unwrap()
);
}
#[test]
fn test_kanidmclientbuilder_display() {
let defaultclient = KanidmClientBuilder::default();
println!("{}", defaultclient);
assert!(defaultclient.to_string().contains("verify_ca"));
let testclient = KanidmClientBuilder {
address: Some("https://example.com".to_string()),
verify_ca: true,
verify_hostnames: true,
ca: None,
connect_timeout: Some(420),
request_timeout: Some(69),
use_system_proxies: true,
token_cache_path: Some(CLIENT_TOKEN_CACHE.to_string()),
disable_system_ca_store: false,
};
println!("testclient {}", testclient);
assert!(testclient.to_string().contains("verify_ca: true"));
assert!(testclient.to_string().contains("verify_hostnames: true"));
let badness = testclient.danger_accept_invalid_hostnames(true);
let badness = badness.danger_accept_invalid_certs(true);
println!("badness: {}", badness);
assert!(badness.to_string().contains("verify_ca: false"));
assert!(badness.to_string().contains("verify_hostnames: false"));
}
}

View file

@ -33,6 +33,9 @@ tracing = { workspace = true }
uuid = { workspace = true }
x509-cert = { workspace = true, features = ["pem"] }
md-5 = { workspace = true }
sha-crypt = { workspace = true }
[dev-dependencies]
sketching = { workspace = true }

View file

@ -0,0 +1,99 @@
use md5::{Digest, Md5};
use std::cmp::min;
/// The md5crypt magic prefix.
const MD5_MAGIC: &str = "$1$";
const MD5_TRANSPOSE: &[u8] = b"\x0c\x06\x00\x0d\x07\x01\x0e\x08\x02\x0f\x09\x03\x05\x0a\x04\x0b";
const CRYPT_HASH64: &[u8] = b"./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
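/// Encode bytes with the crypt(3) "hash64" alphabet (`./0-9A-Za-z`), packing
/// three input bytes into four output characters, least-significant bits first.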
pub fn md5_sha2_hash64_encode(bs: &[u8]) -> String {
let ngroups = bs.len().div_ceil(3);
let mut out = String::with_capacity(ngroups * 4);
for g in 0..ngroups {
let mut g_idx = g * 3;
let mut enc = 0u32;
for _ in 0..3 {
let b = (if g_idx < bs.len() { bs[g_idx] } else { 0 }) as u32;
enc >>= 8;
enc |= b << 16;
g_idx += 1;
}
for _ in 0..4 {
out.push(char::from_u32(CRYPT_HASH64[(enc & 0x3F) as usize] as u32).unwrap_or('!'));
enc >>= 6;
}
}
match bs.len() % 3 {
1 => {
out.pop();
out.pop();
}
2 => {
out.pop();
}
_ => (),
}
out
}
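/// Compute the md5crypt (`$1$`) digest of `pass` with `salt`, returning the
/// hash64-encoded digest (the text after the final `$` in `$1$salt$hash`).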
pub fn do_md5_crypt(pass: &[u8], salt: &[u8]) -> Vec<u8> {
let mut dgst_b = Md5::new();
dgst_b.update(pass);
dgst_b.update(salt);
dgst_b.update(pass);
let mut hash_b = dgst_b.finalize();
let mut dgst_a = Md5::new();
dgst_a.update(pass);
dgst_a.update(MD5_MAGIC.as_bytes());
dgst_a.update(salt);
let mut plen = pass.len();
while plen > 0 {
dgst_a.update(&hash_b[..min(plen, 16)]);
if plen < 16 {
break;
}
plen -= 16;
}
plen = pass.len();
while plen > 0 {
if plen & 1 == 0 {
dgst_a.update(&pass[..1])
} else {
dgst_a.update([0u8])
}
plen >>= 1;
}
let mut hash_a = dgst_a.finalize();
for r in 0..1000 {
let mut dgst_a = Md5::new();
if r % 2 == 1 {
dgst_a.update(pass);
} else {
dgst_a.update(hash_a);
}
if r % 3 > 0 {
dgst_a.update(salt);
}
if r % 7 > 0 {
dgst_a.update(pass);
}
if r % 2 == 0 {
dgst_a.update(pass);
} else {
dgst_a.update(hash_a);
}
hash_a = dgst_a.finalize();
}
for (i, &ti) in MD5_TRANSPOSE.iter().enumerate() {
hash_b[i] = hash_a[ti as usize];
}
md5_sha2_hash64_encode(&hash_b).into_bytes()
}
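As a minimal cross-check, this implementation reproduces the `{crypt}` MD5 vector exercised in the test module later in this diff (password `password`, salt `zaRIAsoe`); a sketch:
```rust
fn main() {
    // Vector from the crypt tests below: "$1$zaRIAsoe$7887GzjDTrst0XbDPpF5m."
    let digest = do_md5_crypt(b"password", b"zaRIAsoe");
    assert_eq!(digest, b"7887GzjDTrst0XbDPpF5m.".to_vec());
}
```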

View file

@ -11,26 +11,24 @@
#![deny(clippy::unreachable)]
use argon2::{Algorithm, Argon2, Params, PasswordHash, Version};
use base64::engine::general_purpose;
use base64::engine::GeneralPurpose;
use base64::{alphabet, Engine};
use tracing::{debug, error, trace, warn};
use base64::engine::general_purpose;
use base64urlsafedata::Base64UrlSafeData;
use rand::Rng;
use serde::{Deserialize, Serialize};
use std::fmt;
use std::time::{Duration, Instant};
use kanidm_hsm_crypto::{HmacKey, Tpm};
use kanidm_proto::internal::OperationError;
use openssl::error::ErrorStack as OpenSSLErrorStack;
use openssl::hash::{self, MessageDigest};
use openssl::nid::Nid;
use openssl::pkcs5::pbkdf2_hmac;
use openssl::sha::{Sha1, Sha256, Sha512};
use rand::Rng;
use serde::{Deserialize, Serialize};
use std::fmt;
use std::time::{Duration, Instant};
use tracing::{debug, error, trace, warn};
use kanidm_hsm_crypto::{HmacKey, Tpm};
mod crypt_md5;
pub mod mtls;
pub mod prelude;
pub mod serialise;
@ -84,6 +82,7 @@ pub enum CryptoError {
Argon2,
Argon2Version,
Argon2Parameters,
Crypt,
}
impl From<OpenSSLErrorStack> for CryptoError {
@ -137,65 +136,15 @@ pub enum DbPasswordV1 {
SHA512(Vec<u8>),
SSHA512(Vec<u8>, Vec<u8>),
NT_MD4(Vec<u8>),
}
#[derive(Serialize, Deserialize, Debug, PartialEq, Eq)]
#[allow(non_camel_case_types)]
pub enum ReplPasswordV1 {
TPM_ARGON2ID {
m_cost: u32,
t_cost: u32,
p_cost: u32,
version: u32,
salt: Base64UrlSafeData,
key: Base64UrlSafeData,
CRYPT_MD5 {
s: Base64UrlSafeData,
h: Base64UrlSafeData,
},
ARGON2ID {
m_cost: u32,
t_cost: u32,
p_cost: u32,
version: u32,
salt: Base64UrlSafeData,
key: Base64UrlSafeData,
CRYPT_SHA256 {
h: String,
},
PBKDF2 {
cost: usize,
salt: Base64UrlSafeData,
hash: Base64UrlSafeData,
},
PBKDF2_SHA1 {
cost: usize,
salt: Base64UrlSafeData,
hash: Base64UrlSafeData,
},
PBKDF2_SHA512 {
cost: usize,
salt: Base64UrlSafeData,
hash: Base64UrlSafeData,
},
SHA1 {
hash: Base64UrlSafeData,
},
SSHA1 {
salt: Base64UrlSafeData,
hash: Base64UrlSafeData,
},
SHA256 {
hash: Base64UrlSafeData,
},
SSHA256 {
salt: Base64UrlSafeData,
hash: Base64UrlSafeData,
},
SHA512 {
hash: Base64UrlSafeData,
},
SSHA512 {
salt: Base64UrlSafeData,
hash: Base64UrlSafeData,
},
NT_MD4 {
hash: Base64UrlSafeData,
CRYPT_SHA512 {
h: String,
},
}
@ -214,6 +163,9 @@ impl fmt::Debug for DbPasswordV1 {
DbPasswordV1::SHA512(_) => write!(f, "SHA512"),
DbPasswordV1::SSHA512(_, _) => write!(f, "SSHA512"),
DbPasswordV1::NT_MD4(_) => write!(f, "NT_MD4"),
DbPasswordV1::CRYPT_MD5 { .. } => write!(f, "CRYPT_MD5"),
DbPasswordV1::CRYPT_SHA256 { .. } => write!(f, "CRYPT_SHA256"),
DbPasswordV1::CRYPT_SHA512 { .. } => write!(f, "CRYPT_SHA512"),
}
}
}
@ -436,6 +388,16 @@ enum Kdf {
SSHA512(Vec<u8>, Vec<u8>),
// hash
NT_MD4(Vec<u8>),
CRYPT_MD5 {
s: Vec<u8>,
h: Vec<u8>,
},
CRYPT_SHA256 {
h: String,
},
CRYPT_SHA512 {
h: String,
},
}
#[derive(Clone, Debug, PartialEq)]
@ -498,78 +460,17 @@ impl TryFrom<DbPasswordV1> for Password {
DbPasswordV1::NT_MD4(h) => Ok(Password {
material: Kdf::NT_MD4(h),
}),
}
}
}
impl TryFrom<&ReplPasswordV1> for Password {
type Error = ();
fn try_from(value: &ReplPasswordV1) -> Result<Self, Self::Error> {
match value {
ReplPasswordV1::TPM_ARGON2ID {
m_cost,
t_cost,
p_cost,
version,
salt,
key,
} => Ok(Password {
material: Kdf::TPM_ARGON2ID {
m_cost: *m_cost,
t_cost: *t_cost,
p_cost: *p_cost,
version: *version,
salt: salt.to_vec(),
key: key.to_vec(),
DbPasswordV1::CRYPT_MD5 { s, h } => Ok(Password {
material: Kdf::CRYPT_MD5 {
s: s.into(),
h: h.into(),
},
}),
ReplPasswordV1::ARGON2ID {
m_cost,
t_cost,
p_cost,
version,
salt,
key,
} => Ok(Password {
material: Kdf::ARGON2ID {
m_cost: *m_cost,
t_cost: *t_cost,
p_cost: *p_cost,
version: *version,
salt: salt.to_vec(),
key: key.to_vec(),
},
DbPasswordV1::CRYPT_SHA256 { h } => Ok(Password {
material: Kdf::CRYPT_SHA256 { h },
}),
ReplPasswordV1::PBKDF2 { cost, salt, hash } => Ok(Password {
material: Kdf::PBKDF2(*cost, salt.to_vec(), hash.to_vec()),
}),
ReplPasswordV1::PBKDF2_SHA1 { cost, salt, hash } => Ok(Password {
material: Kdf::PBKDF2_SHA1(*cost, salt.to_vec(), hash.to_vec()),
}),
ReplPasswordV1::PBKDF2_SHA512 { cost, salt, hash } => Ok(Password {
material: Kdf::PBKDF2_SHA512(*cost, salt.to_vec(), hash.to_vec()),
}),
ReplPasswordV1::SHA1 { hash } => Ok(Password {
material: Kdf::SHA1(hash.to_vec()),
}),
ReplPasswordV1::SSHA1 { salt, hash } => Ok(Password {
material: Kdf::SSHA1(salt.to_vec(), hash.to_vec()),
}),
ReplPasswordV1::SHA256 { hash } => Ok(Password {
material: Kdf::SHA256(hash.to_vec()),
}),
ReplPasswordV1::SSHA256 { salt, hash } => Ok(Password {
material: Kdf::SSHA256(salt.to_vec(), hash.to_vec()),
}),
ReplPasswordV1::SHA512 { hash } => Ok(Password {
material: Kdf::SHA512(hash.to_vec()),
}),
ReplPasswordV1::SSHA512 { salt, hash } => Ok(Password {
material: Kdf::SSHA512(salt.to_vec(), hash.to_vec()),
}),
ReplPasswordV1::NT_MD4 { hash } => Ok(Password {
material: Kdf::NT_MD4(hash.to_vec()),
DbPasswordV1::CRYPT_SHA512 { h } => Ok(Password {
material: Kdf::CRYPT_SHA512 { h },
}),
}
}
@ -662,9 +563,47 @@ impl TryFrom<&str> for Password {
});
}
// Test 389ds formats
// Test 389ds/openldap formats. Shout out to openldap, which sometimes makes these
// lowercase.
if let Some(ds_ssha1) = value.strip_prefix("{SHA}") {
if let Some(crypt) = value
.strip_prefix("{crypt}")
.or_else(|| value.strip_prefix("{CRYPT}"))
{
if let Some(crypt_md5_phc) = crypt.strip_prefix("$1$") {
let (salt, hash) = crypt_md5_phc.split_once('$').ok_or(())?;
// These are a hash64 format, so leave them as bytes, don't try
// to decode.
let s = salt.as_bytes().to_vec();
let h = hash.as_bytes().to_vec();
return Ok(Password {
material: Kdf::CRYPT_MD5 { s, h },
});
}
if crypt.starts_with("$5$") {
return Ok(Password {
material: Kdf::CRYPT_SHA256 {
h: crypt.to_string(),
},
});
}
if crypt.starts_with("$6$") {
return Ok(Password {
material: Kdf::CRYPT_SHA512 {
h: crypt.to_string(),
},
});
}
} // End crypt
if let Some(ds_ssha1) = value
.strip_prefix("{SHA}")
.or_else(|| value.strip_prefix("{sha}"))
{
let h = general_purpose::STANDARD.decode(ds_ssha1).map_err(|_| ())?;
if h.len() != DS_SHA1_HASH_LEN {
return Err(());
@ -674,7 +613,10 @@ impl TryFrom<&str> for Password {
});
}
if let Some(ds_ssha1) = value.strip_prefix("{SSHA}") {
if let Some(ds_ssha1) = value
.strip_prefix("{SSHA}")
.or_else(|| value.strip_prefix("{ssha}"))
{
let sh = general_purpose::STANDARD.decode(ds_ssha1).map_err(|_| ())?;
let (h, s) = sh.split_at(DS_SHA1_HASH_LEN);
if s.len() != DS_SHA_SALT_LEN {
@ -685,7 +627,10 @@ impl TryFrom<&str> for Password {
});
}
if let Some(ds_ssha256) = value.strip_prefix("{SHA256}") {
if let Some(ds_ssha256) = value
.strip_prefix("{SHA256}")
.or_else(|| value.strip_prefix("{sha256}"))
{
let h = general_purpose::STANDARD
.decode(ds_ssha256)
.map_err(|_| ())?;
@ -697,7 +642,10 @@ impl TryFrom<&str> for Password {
});
}
if let Some(ds_ssha256) = value.strip_prefix("{SSHA256}") {
if let Some(ds_ssha256) = value
.strip_prefix("{SSHA256}")
.or_else(|| value.strip_prefix("{ssha256}"))
{
let sh = general_purpose::STANDARD
.decode(ds_ssha256)
.map_err(|_| ())?;
@ -710,7 +658,10 @@ impl TryFrom<&str> for Password {
});
}
if let Some(ds_ssha512) = value.strip_prefix("{SHA512}") {
if let Some(ds_ssha512) = value
.strip_prefix("{SHA512}")
.or_else(|| value.strip_prefix("{sha512}"))
{
let h = general_purpose::STANDARD
.decode(ds_ssha512)
.map_err(|_| ())?;
@ -722,7 +673,10 @@ impl TryFrom<&str> for Password {
});
}
if let Some(ds_ssha512) = value.strip_prefix("{SSHA512}") {
if let Some(ds_ssha512) = value
.strip_prefix("{SSHA512}")
.or_else(|| value.strip_prefix("{ssha512}"))
{
let sh = general_purpose::STANDARD
.decode(ds_ssha512)
.map_err(|_| ())?;
@ -1223,6 +1177,20 @@ impl Password {
})
.map(|chal_key| chal_key.as_ref() == key)
}
(Kdf::CRYPT_MD5 { s, h }, _) => {
let chal_key = crypt_md5::do_md5_crypt(cleartext.as_bytes(), s);
Ok(chal_key == *h)
}
(Kdf::CRYPT_SHA256 { h }, _) => {
let is_valid = sha_crypt::sha256_check(cleartext, h.as_str()).is_ok();
Ok(is_valid)
}
(Kdf::CRYPT_SHA512 { h }, _) => {
let is_valid = sha_crypt::sha512_check(cleartext, h.as_str()).is_ok();
Ok(is_valid)
}
}
}
@ -1274,80 +1242,12 @@ impl Password {
Kdf::SHA512(hash) => DbPasswordV1::SHA512(hash.clone()),
Kdf::SSHA512(salt, hash) => DbPasswordV1::SSHA512(salt.clone(), hash.clone()),
Kdf::NT_MD4(hash) => DbPasswordV1::NT_MD4(hash.clone()),
}
}
pub fn to_repl_v1(&self) -> ReplPasswordV1 {
match &self.material {
Kdf::TPM_ARGON2ID {
m_cost,
t_cost,
p_cost,
version,
salt,
key,
} => ReplPasswordV1::TPM_ARGON2ID {
m_cost: *m_cost,
t_cost: *t_cost,
p_cost: *p_cost,
version: *version,
salt: salt.clone().into(),
key: key.clone().into(),
},
Kdf::ARGON2ID {
m_cost,
t_cost,
p_cost,
version,
salt,
key,
} => ReplPasswordV1::ARGON2ID {
m_cost: *m_cost,
t_cost: *t_cost,
p_cost: *p_cost,
version: *version,
salt: salt.clone().into(),
key: key.clone().into(),
},
Kdf::PBKDF2(cost, salt, hash) => ReplPasswordV1::PBKDF2 {
cost: *cost,
salt: salt.clone().into(),
hash: hash.clone().into(),
},
Kdf::PBKDF2_SHA1(cost, salt, hash) => ReplPasswordV1::PBKDF2_SHA1 {
cost: *cost,
salt: salt.clone().into(),
hash: hash.clone().into(),
},
Kdf::PBKDF2_SHA512(cost, salt, hash) => ReplPasswordV1::PBKDF2_SHA512 {
cost: *cost,
salt: salt.clone().into(),
hash: hash.clone().into(),
},
Kdf::SHA1(hash) => ReplPasswordV1::SHA1 {
hash: hash.clone().into(),
},
Kdf::SSHA1(salt, hash) => ReplPasswordV1::SSHA1 {
salt: salt.clone().into(),
hash: hash.clone().into(),
},
Kdf::SHA256(hash) => ReplPasswordV1::SHA256 {
hash: hash.clone().into(),
},
Kdf::SSHA256(salt, hash) => ReplPasswordV1::SSHA256 {
salt: salt.clone().into(),
hash: hash.clone().into(),
},
Kdf::SHA512(hash) => ReplPasswordV1::SHA512 {
hash: hash.clone().into(),
},
Kdf::SSHA512(salt, hash) => ReplPasswordV1::SSHA512 {
salt: salt.clone().into(),
hash: hash.clone().into(),
},
Kdf::NT_MD4(hash) => ReplPasswordV1::NT_MD4 {
hash: hash.clone().into(),
Kdf::CRYPT_MD5 { s, h } => DbPasswordV1::CRYPT_MD5 {
s: s.clone().into(),
h: h.clone().into(),
},
Kdf::CRYPT_SHA256 { h } => DbPasswordV1::CRYPT_SHA256 { h: h.clone() },
Kdf::CRYPT_SHA512 { h } => DbPasswordV1::CRYPT_SHA512 { h: h.clone() },
}
}
@ -1383,7 +1283,10 @@ impl Password {
| Kdf::SSHA256(_, _)
| Kdf::SHA512(_)
| Kdf::SSHA512(_, _)
| Kdf::NT_MD4(_) => true,
| Kdf::NT_MD4(_)
| Kdf::CRYPT_MD5 { .. }
| Kdf::CRYPT_SHA256 { .. }
| Kdf::CRYPT_SHA512 { .. } => true,
}
}
}
@ -1441,8 +1344,12 @@ mod tests {
#[test]
fn test_password_from_ds_sha1() {
let im_pw = "{SHA}W6ph5Mm5Pz8GgiULbPgzG37mj9g=";
let _r = Password::try_from(im_pw).expect("Failed to parse");
let im_pw = "{sha}W6ph5Mm5Pz8GgiULbPgzG37mj9g=";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
// Known weak, require upgrade.
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
@ -1451,8 +1358,12 @@ mod tests {
#[test]
fn test_password_from_ds_ssha1() {
let im_pw = "{SSHA}EyzbBiP4u4zxOrLpKTORI/RX3HC6TCTJtnVOCQ==";
let _r = Password::try_from(im_pw).expect("Failed to parse");
let im_pw = "{ssha}EyzbBiP4u4zxOrLpKTORI/RX3HC6TCTJtnVOCQ==";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
// Known weak, require upgrade.
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
@ -1461,8 +1372,12 @@ mod tests {
#[test]
fn test_password_from_ds_sha256() {
let im_pw = "{SHA256}XohImNooBHFR0OVvjcYpJ3NgPQ1qq73WKhHvch0VQtg=";
let _r = Password::try_from(im_pw).expect("Failed to parse");
let im_pw = "{sha256}XohImNooBHFR0OVvjcYpJ3NgPQ1qq73WKhHvch0VQtg=";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
// Known weak, require upgrade.
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
@ -1471,8 +1386,12 @@ mod tests {
#[test]
fn test_password_from_ds_ssha256() {
let im_pw = "{SSHA256}luYWfFJOZgxySTsJXHgIaCYww4yMpu6yest69j/wO5n5OycuHFV/GQ==";
let _r = Password::try_from(im_pw).expect("Failed to parse");
let im_pw = "{ssha256}luYWfFJOZgxySTsJXHgIaCYww4yMpu6yest69j/wO5n5OycuHFV/GQ==";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
// Known weak, require upgrade.
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
@ -1481,8 +1400,12 @@ mod tests {
#[test]
fn test_password_from_ds_sha512() {
let im_pw = "{SHA512}sQnzu7wkTrgkQZF+0G1hi5AI3Qmzvv0bXgc5THBqi7mAsdd4Xll27ASbRt9fEyavWi6m0QP9B8lThf+rDKy8hg==";
let _r = Password::try_from(im_pw).expect("Failed to parse");
let im_pw = "{sha512}sQnzu7wkTrgkQZF+0G1hi5AI3Qmzvv0bXgc5THBqi7mAsdd4Xll27ASbRt9fEyavWi6m0QP9B8lThf+rDKy8hg==";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
// Known weak, require upgrade.
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
@ -1491,8 +1414,12 @@ mod tests {
#[test]
fn test_password_from_ds_ssha512() {
let im_pw = "{SSHA512}JwrSUHkI7FTAfHRVR6KoFlSN0E3dmaQWARjZ+/UsShYlENOqDtFVU77HJLLrY2MuSp0jve52+pwtdVl2QUAHukQ0XUf5LDtM";
let _r = Password::try_from(im_pw).expect("Failed to parse");
let im_pw = "{ssha512}JwrSUHkI7FTAfHRVR6KoFlSN0E3dmaQWARjZ+/UsShYlENOqDtFVU77HJLLrY2MuSp0jve52+pwtdVl2QUAHukQ0XUf5LDtM";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
// Known weak, require upgrade.
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
@ -1617,6 +1544,39 @@ mod tests {
}
}
#[test]
fn test_password_from_crypt_md5() {
sketching::test_init();
let im_pw = "{crypt}$1$zaRIAsoe$7887GzjDTrst0XbDPpF5m.";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
}
#[test]
fn test_password_from_crypt_sha256() {
sketching::test_init();
let im_pw = "{crypt}$5$3UzV7Sut8EHCUxlN$41V.jtMQmFAOucqI4ImFV43r.bRLjPlN.hyfoCdmGE2";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
}
#[test]
fn test_password_from_crypt_sha512() {
sketching::test_init();
let im_pw = "{crypt}$6$aXn8azL8DXUyuMvj$9aJJC/KEUwygIpf2MTqjQa.f0MEXNg2cGFc62Fet8XpuDVDedM05CweAlxW6GWxnmHqp14CRf6zU7OQoE/bCu0";
let password = "password";
let r = Password::try_from(im_pw).expect("Failed to parse");
assert!(r.requires_upgrade());
assert!(r.verify(password).unwrap_or(false));
}
#[test]
fn test_password_argon2id_hsm_bind() {
sketching::test_init();

View file

@ -15,6 +15,9 @@ use std::os::macos::fs::MetadataExt;
#[cfg(target_os = "illumos")]
use std::os::illumos::fs::MetadataExt;
#[cfg(target_os = "android")]
use std::os::android::fs::MetadataExt;
use kanidm_utils_users::{get_current_gid, get_current_uid};
use std::fmt;

View file

@ -0,0 +1,14 @@
# The main difference from the release_linux profile is using
# per-package shared directories for a clearer separation and
# thus a more consistent install & sysadmin experience.
# Don't set the value for autodetect
# cpu_flags = "none"
server_admin_bind_path = "/var/run/kanidmd/sock"
server_ui_pkg_path = "/usr/share/kanidmd/static"
server_config_path = "/etc/kanidmd/server.toml"
client_config_path = "/etc/kanidm/config"
# TODO: unixd should migrate to its own config dir as part of the sparkled migration.
# No point in doing two back to back migrations.
resolver_config_path = "/etc/kanidm/unixd"
resolver_unix_shell_path = "/bin/bash"

@ -1 +1 @@
Subproject commit 942c7b69ca807cc38186b63ab02a391bac9eac7e
Subproject commit 8d7579fb543632df74e609892c69ce9f368fdd02

View file

@ -12,7 +12,7 @@ Conflicts=nscd.service
[Service]
DynamicUser=yes
SupplementaryGroups=tss shadow
SupplementaryGroups=tss
UMask=0027
CacheDirectory=kanidm-unixd
RuntimeDirectory=kanidm-unixd

View file

@ -22,6 +22,8 @@ pub enum Attribute {
AcpCreateClass,
AcpEnable,
AcpModifyClass,
AcpModifyPresentClass,
AcpModifyRemoveClass,
AcpModifyPresentAttr,
AcpModifyRemovedAttr,
AcpReceiver,
@ -80,6 +82,7 @@ pub enum Attribute {
IdVerificationEcKey,
Image,
Index,
Indexed,
IpaNtHash,
IpaSshPubKey,
JwsEs256PrivateKey,
@ -94,6 +97,7 @@ pub enum Attribute {
LdapEmailAddress,
/// An LDAP Compatible sshkeys virtual attribute
LdapKeys,
LdapMaxQueryableAttrs,
LegalName,
LimitSearchMaxResults,
LimitSearchMaxFilterTest,
@ -253,6 +257,8 @@ impl Attribute {
Attribute::AcpCreateClass => ATTR_ACP_CREATE_CLASS,
Attribute::AcpEnable => ATTR_ACP_ENABLE,
Attribute::AcpModifyClass => ATTR_ACP_MODIFY_CLASS,
Attribute::AcpModifyPresentClass => ATTR_ACP_MODIFY_PRESENT_CLASS,
Attribute::AcpModifyRemoveClass => ATTR_ACP_MODIFY_REMOVE_CLASS,
Attribute::AcpModifyPresentAttr => ATTR_ACP_MODIFY_PRESENTATTR,
Attribute::AcpModifyRemovedAttr => ATTR_ACP_MODIFY_REMOVEDATTR,
Attribute::AcpReceiver => ATTR_ACP_RECEIVER,
@ -310,6 +316,7 @@ impl Attribute {
Attribute::IdVerificationEcKey => ATTR_ID_VERIFICATION_ECKEY,
Attribute::Image => ATTR_IMAGE,
Attribute::Index => ATTR_INDEX,
Attribute::Indexed => ATTR_INDEXED,
Attribute::IpaNtHash => ATTR_IPANTHASH,
Attribute::IpaSshPubKey => ATTR_IPASSHPUBKEY,
Attribute::JwsEs256PrivateKey => ATTR_JWS_ES256_PRIVATE_KEY,
@ -322,6 +329,7 @@ impl Attribute {
Attribute::LdapAllowUnixPwBind => ATTR_LDAP_ALLOW_UNIX_PW_BIND,
Attribute::LdapEmailAddress => ATTR_LDAP_EMAIL_ADDRESS,
Attribute::LdapKeys => ATTR_LDAP_KEYS,
Attribute::LdapMaxQueryableAttrs => ATTR_LDAP_MAX_QUERYABLE_ATTRS,
Attribute::LdapSshPublicKey => ATTR_LDAP_SSHPUBLICKEY,
Attribute::LegalName => ATTR_LEGALNAME,
Attribute::LimitSearchMaxResults => ATTR_LIMIT_SEARCH_MAX_RESULTS,
@ -436,6 +444,8 @@ impl Attribute {
ATTR_ACP_CREATE_CLASS => Attribute::AcpCreateClass,
ATTR_ACP_ENABLE => Attribute::AcpEnable,
ATTR_ACP_MODIFY_CLASS => Attribute::AcpModifyClass,
ATTR_ACP_MODIFY_PRESENT_CLASS => Attribute::AcpModifyPresentClass,
ATTR_ACP_MODIFY_REMOVE_CLASS => Attribute::AcpModifyRemoveClass,
ATTR_ACP_MODIFY_PRESENTATTR => Attribute::AcpModifyPresentAttr,
ATTR_ACP_MODIFY_REMOVEDATTR => Attribute::AcpModifyRemovedAttr,
ATTR_ACP_RECEIVER => Attribute::AcpReceiver,
@ -493,6 +503,7 @@ impl Attribute {
ATTR_ID_VERIFICATION_ECKEY => Attribute::IdVerificationEcKey,
ATTR_IMAGE => Attribute::Image,
ATTR_INDEX => Attribute::Index,
ATTR_INDEXED => Attribute::Indexed,
ATTR_IPANTHASH => Attribute::IpaNtHash,
ATTR_IPASSHPUBKEY => Attribute::IpaSshPubKey,
ATTR_JWS_ES256_PRIVATE_KEY => Attribute::JwsEs256PrivateKey,
@ -505,6 +516,7 @@ impl Attribute {
ATTR_LDAP_ALLOW_UNIX_PW_BIND => Attribute::LdapAllowUnixPwBind,
ATTR_LDAP_EMAIL_ADDRESS => Attribute::LdapEmailAddress,
ATTR_LDAP_KEYS => Attribute::LdapKeys,
ATTR_LDAP_MAX_QUERYABLE_ATTRS => Attribute::LdapMaxQueryableAttrs,
ATTR_SSH_PUBLICKEY => Attribute::SshPublicKey,
ATTR_LEGALNAME => Attribute::LegalName,
ATTR_LINKEDGROUP => Attribute::LinkedGroup,
@ -628,6 +640,71 @@ impl From<Attribute> for String {
}
}
/// Sub attributes are a component of SCIM, allowing tagged sub properties of a complex
/// attribute to be accessed.
#[derive(Serialize, Deserialize, Clone, Debug, Eq, PartialEq, PartialOrd, Ord, Hash)]
#[serde(rename_all = "lowercase", try_from = "&str", into = "AttrString")]
pub enum SubAttribute {
/// Denotes a primary value.
Primary,
#[cfg(not(test))]
Custom(AttrString),
}
impl From<SubAttribute> for AttrString {
fn from(val: SubAttribute) -> Self {
AttrString::from(val.as_str())
}
}
impl From<&str> for SubAttribute {
fn from(value: &str) -> Self {
Self::inner_from_str(value)
}
}
impl FromStr for SubAttribute {
type Err = Infallible;
fn from_str(value: &str) -> Result<Self, Self::Err> {
Ok(Self::inner_from_str(value))
}
}
impl SubAttribute {
pub fn as_str(&self) -> &str {
match self {
SubAttribute::Primary => SUB_ATTR_PRIMARY,
#[cfg(not(test))]
SubAttribute::Custom(s) => s,
}
}
// We allow this because the standard lib from_str is fallible, and we want an infallible version.
#[allow(clippy::should_implement_trait)]
fn inner_from_str(value: &str) -> Self {
// Could this be something like heapless to save allocations? Also gives a way
// to limit length of str?
match value.to_lowercase().as_str() {
SUB_ATTR_PRIMARY => SubAttribute::Primary,
#[cfg(not(test))]
_ => SubAttribute::Custom(AttrString::from(value)),
// Allowed only in tests
#[allow(clippy::unreachable)]
#[cfg(test)]
_ => {
unreachable!(
"Check that you've implemented the SubAttribute conversion for {:?}",
value
);
}
}
}
}
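// For example, SubAttribute::from("primary") yields SubAttribute::Primary,
// while any other tag becomes SubAttribute::Custom outside of test builds.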
#[cfg(test)]
mod test {
use super::Attribute;

View file

@ -39,6 +39,8 @@ pub const DEFAULT_SERVER_ADDRESS: &str = "127.0.0.1:8443";
pub const DEFAULT_SERVER_LOCALHOST: &str = "localhost:8443";
/// The default LDAP bind address for the Kanidm client
pub const DEFAULT_LDAP_LOCALHOST: &str = "localhost:636";
/// The default number of attributes that can be queried in LDAP
pub const DEFAULT_LDAP_MAXIMUM_QUERYABLE_ATTRIBUTES: usize = 16;
/// Default replication configuration
pub const DEFAULT_REPLICATION_ADDRESS: &str = "127.0.0.1:8444";
pub const DEFAULT_REPLICATION_ORIGIN: &str = "repl://localhost:8444";
@ -60,6 +62,8 @@ pub const ATTR_ACP_CREATE_ATTR: &str = "acp_create_attr";
pub const ATTR_ACP_CREATE_CLASS: &str = "acp_create_class";
pub const ATTR_ACP_ENABLE: &str = "acp_enable";
pub const ATTR_ACP_MODIFY_CLASS: &str = "acp_modify_class";
pub const ATTR_ACP_MODIFY_PRESENT_CLASS: &str = "acp_modify_present_class";
pub const ATTR_ACP_MODIFY_REMOVE_CLASS: &str = "acp_modify_remove_class";
pub const ATTR_ACP_MODIFY_PRESENTATTR: &str = "acp_modify_presentattr";
pub const ATTR_ACP_MODIFY_REMOVEDATTR: &str = "acp_modify_removedattr";
pub const ATTR_ACP_RECEIVER_GROUP: &str = "acp_receiver_group";
@ -102,6 +106,7 @@ pub const ATTR_DYNGROUP_FILTER: &str = "dyngroup_filter";
pub const ATTR_DYNGROUP: &str = "dyngroup";
pub const ATTR_DYNMEMBER: &str = "dynmember";
pub const ATTR_LDAP_EMAIL_ADDRESS: &str = "emailaddress";
pub const ATTR_LDAP_MAX_QUERYABLE_ATTRS: &str = "ldap_max_queryable_attrs";
pub const ATTR_EMAIL_ALTERNATIVE: &str = "emailalternative";
pub const ATTR_EMAIL_PRIMARY: &str = "emailprimary";
pub const ATTR_EMAIL: &str = "email";
@ -121,6 +126,7 @@ pub const ATTR_GROUP: &str = "group";
pub const ATTR_ID_VERIFICATION_ECKEY: &str = "id_verification_eckey";
pub const ATTR_IMAGE: &str = "image";
pub const ATTR_INDEX: &str = "index";
pub const ATTR_INDEXED: &str = "indexed";
pub const ATTR_IPANTHASH: &str = "ipanthash";
pub const ATTR_IPASSHPUBKEY: &str = "ipasshpubkey";
pub const ATTR_JWS_ES256_PRIVATE_KEY: &str = "jws_es256_private_key";
@ -217,6 +223,8 @@ pub const ATTR_VERSION: &str = "version";
pub const ATTR_WEBAUTHN_ATTESTATION_CA_LIST: &str = "webauthn_attestation_ca_list";
pub const ATTR_ALLOW_PRIMARY_CRED_FALLBACK: &str = "allow_primary_cred_fallback";
pub const SUB_ATTR_PRIMARY: &str = "primary";
pub const OAUTH2_SCOPE_EMAIL: &str = ATTR_EMAIL;
pub const OAUTH2_SCOPE_GROUPS: &str = "groups";
pub const OAUTH2_SCOPE_SSH_PUBLICKEYS: &str = "ssh_publickeys";


@ -130,6 +130,7 @@ pub enum CURegState {
None,
TotpCheck(TotpSecret),
TotpTryAgain,
TotpNameTryAgain(String),
TotpInvalidSha1,
BackupCodes(Vec<String>),
Passkey(CreationChallengeResponse),


@ -222,6 +222,7 @@ pub enum OperationError {
MG0006SKConstraintsNotMet,
MG0007Oauth2StrictConstraintsNotMet,
MG0008SkipUpgradeAttempted,
MG0009InvalidTargetLevelForBootstrap,
//
KP0001KeyProviderNotLoaded,
KP0002KeyProviderInvalidClass,
@ -462,6 +463,7 @@ impl OperationError {
Self::MG0006SKConstraintsNotMet => Some("Migration Constraints Not Met - Security Keys should not be present.".into()),
Self::MG0007Oauth2StrictConstraintsNotMet => Some("Migration Constraints Not Met - All OAuth2 clients must have strict-redirect-uri mode enabled.".into()),
Self::MG0008SkipUpgradeAttempted => Some("Skip Upgrade Attempted.".into()),
Self::MG0009InvalidTargetLevelForBootstrap => Some("The requested target domain level was not valid for bootstrapping a new server instance.".into()),
Self::PL0001GidOverlapsSystemRange => None,
Self::SC0001IncomingSshPublicKey => None,
Self::SC0002ReferenceSyntaxInvalid => Some("A SCIM Reference Set contained invalid syntax and can not be processed.".into()),


@ -7,7 +7,8 @@ use serde::{Deserialize, Serialize};
use serde_with::base64::{Base64, UrlSafe};
use serde_with::formats::SpaceSeparator;
use serde_with::{
formats, serde_as, skip_serializing_none, NoneAsEmptyString, StringWithSeparator,
formats, rust::deserialize_ignore_any, serde_as, skip_serializing_none, NoneAsEmptyString,
StringWithSeparator,
};
use url::Url;
use uuid::Uuid;
@ -353,6 +354,9 @@ pub enum ResponseType {
pub enum ResponseMode {
Query,
Fragment,
FormPost,
#[serde(other, deserialize_with = "deserialize_ignore_any")]
Invalid,
}
fn response_modes_supported_default() -> Vec<ResponseMode> {
@ -443,6 +447,21 @@ fn require_request_uri_parameter_supported_default() -> bool {
false
}
#[derive(Serialize, Deserialize, Debug)]
pub struct OidcWebfingerRel {
pub rel: String,
pub href: String,
}
/// The response to a Webfinger request. Only a subset of the body is defined here.
/// <https://datatracker.ietf.org/doc/html/rfc7033#section-4.4>
#[skip_serializing_none]
#[derive(Serialize, Deserialize, Debug)]
pub struct OidcWebfingerResponse {
pub subject: String,
pub links: Vec<OidcWebfingerRel>,
}
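As a rough sketch of the shape this serialises to, assuming both structs are reachable via `kanidm_proto::oauth2` (the domain and client id here are hypothetical; the `rel` value is the standard issuer relation from OpenID Connect Discovery):

use kanidm_proto::oauth2::{OidcWebfingerRel, OidcWebfingerResponse};

fn webfinger_response_demo() -> OidcWebfingerResponse {
    OidcWebfingerResponse {
        // Echoes the resource that was queried.
        subject: "acct:claire@idm.example.com".to_string(),
        links: vec![OidcWebfingerRel {
            rel: "http://openid.net/specs/connect/1.0/issuer".to_string(),
            href: "https://idm.example.com/oauth2/openid/app1".to_string(),
        }],
    }
}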
/// The response to an OpenID connect discovery request
/// <https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata>
#[skip_serializing_none]


@ -1,7 +1,7 @@
//! These are types that a client will send to the server.
use super::ScimEntryGetQuery;
use super::ScimOauth2ClaimMapJoinChar;
use crate::attribute::Attribute;
use crate::attribute::{Attribute, SubAttribute};
use serde::{Deserialize, Serialize};
use serde_json::Value as JsonValue;
use serde_with::formats::PreferMany;
@ -134,3 +134,59 @@ impl TryFrom<ScimEntryPutKanidm> for ScimEntryPutGeneric {
})
}
}
#[derive(Debug, Clone, PartialEq, Eq, Deserialize)]
pub struct AttrPath {
pub a: Attribute,
pub s: Option<SubAttribute>,
}
impl From<Attribute> for AttrPath {
fn from(a: Attribute) -> Self {
Self { a, s: None }
}
}
impl From<(Attribute, SubAttribute)> for AttrPath {
fn from((a, s): (Attribute, SubAttribute)) -> Self {
Self { a, s: Some(s) }
}
}
#[derive(Debug, Clone, PartialEq, Eq, Deserialize)]
pub enum ScimFilter {
Or(Box<ScimFilter>, Box<ScimFilter>),
And(Box<ScimFilter>, Box<ScimFilter>),
Not(Box<ScimFilter>),
Present(AttrPath),
Equal(AttrPath, JsonValue),
NotEqual(AttrPath, JsonValue),
Contains(AttrPath, JsonValue),
StartsWith(AttrPath, JsonValue),
EndsWith(AttrPath, JsonValue),
Greater(AttrPath, JsonValue),
Less(AttrPath, JsonValue),
GreaterOrEqual(AttrPath, JsonValue),
LessOrEqual(AttrPath, JsonValue),
Complex(Attribute, Box<ScimComplexFilter>),
}
#[derive(Debug, Clone, PartialEq, Eq, Deserialize)]
pub enum ScimComplexFilter {
Or(Box<ScimComplexFilter>, Box<ScimComplexFilter>),
And(Box<ScimComplexFilter>, Box<ScimComplexFilter>),
Not(Box<ScimComplexFilter>),
Present(SubAttribute),
Equal(SubAttribute, JsonValue),
NotEqual(SubAttribute, JsonValue),
Contains(SubAttribute, JsonValue),
StartsWith(SubAttribute, JsonValue),
EndsWith(SubAttribute, JsonValue),
Greater(SubAttribute, JsonValue),
Less(SubAttribute, JsonValue),
GreaterOrEqual(SubAttribute, JsonValue),
LessOrEqual(SubAttribute, JsonValue),
}
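An illustrative composition, leaning on the `From` impls for `AttrPath` above and `serde_json::Value`'s `From<&str>`; this sketch encodes "class eq person and displayname co Ann":

use kanidm_proto::attribute::Attribute;
use kanidm_proto::scim_v1::client::ScimFilter;

fn filter_demo() -> ScimFilter {
    ScimFilter::And(
        Box::new(ScimFilter::Equal(Attribute::Class.into(), "person".into())),
        Box::new(ScimFilter::Contains(
            Attribute::DisplayName.into(),
            "Ann".into(),
        )),
    )
}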


@ -33,7 +33,7 @@ pub enum ScimAttributeEffectiveAccess {
/// All attributes on the entry have this permission granted
Grant,
/// All attributes on the entry have this permission denied
Denied,
Deny,
/// The following attributes on the entry have this permission granted
Allow(BTreeSet<Attribute>),
}
@ -43,7 +43,7 @@ impl ScimAttributeEffectiveAccess {
pub fn check(&self, attr: &Attribute) -> bool {
match self {
Self::Grant => true,
Self::Denied => false,
Self::Deny => false,
Self::Allow(set) => set.contains(attr),
}
}
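A minimal sketch of check and the renamed Deny variant in use, assuming the `kanidm_proto::scim_v1::server` path used later in this changeset:

use kanidm_proto::attribute::Attribute;
use kanidm_proto::scim_v1::server::ScimAttributeEffectiveAccess;
use std::collections::BTreeSet;

fn effective_access_demo() {
    let access = ScimAttributeEffectiveAccess::Allow(BTreeSet::from([Attribute::Name]));
    assert!(access.check(&Attribute::Name));
    assert!(!access.check(&Attribute::Mail));
    // Deny refuses every attribute.
    assert!(!ScimAttributeEffectiveAccess::Deny.check(&Attribute::Name));
}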
@ -257,6 +257,98 @@ pub enum ScimValueKanidm {
UiHints(Vec<UiHint>),
}
#[serde_as]
#[derive(Serialize, Debug, Clone, ToSchema)]
pub struct ScimPerson {
pub uuid: Uuid,
pub name: String,
pub displayname: String,
pub spn: String,
pub description: Option<String>,
pub mails: Vec<ScimMail>,
pub managed_by: Option<ScimReference>,
pub groups: Vec<ScimReference>,
}
impl TryFrom<ScimEntryKanidm> for ScimPerson {
type Error = ();
fn try_from(scim_entry: ScimEntryKanidm) -> Result<Self, Self::Error> {
let uuid = scim_entry.header.id;
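// The mandatory string attributes (name, displayname, spn) are pulled from the
// entry's attribute map below; a missing or non-string value aborts the
// conversion via ok_or(()).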
let name = scim_entry
.attrs
.get(&Attribute::Name)
.and_then(|v| match v {
ScimValueKanidm::String(s) => Some(s.clone()),
_ => None,
})
.ok_or(())?;
let displayname = scim_entry
.attrs
.get(&Attribute::DisplayName)
.and_then(|v| match v {
ScimValueKanidm::String(s) => Some(s.clone()),
_ => None,
})
.ok_or(())?;
let spn = scim_entry
.attrs
.get(&Attribute::Spn)
.and_then(|v| match v {
ScimValueKanidm::String(s) => Some(s.clone()),
_ => None,
})
.ok_or(())?;
let description = scim_entry
.attrs
.get(&Attribute::Description)
.and_then(|v| match v {
ScimValueKanidm::String(s) => Some(s.clone()),
_ => None,
});
let mails = scim_entry
.attrs
.get(&Attribute::Mail)
.and_then(|v| match v {
ScimValueKanidm::Mail(m) => Some(m.clone()),
_ => None,
})
.unwrap_or_default();
let groups = scim_entry
.attrs
.get(&Attribute::DirectMemberOf)
.and_then(|v| match v {
ScimValueKanidm::EntryReferences(v) => Some(v.clone()),
_ => None,
})
.unwrap_or_default();
let managed_by = scim_entry
.attrs
.get(&Attribute::EntryManagedBy)
.and_then(|v| match v {
ScimValueKanidm::EntryReference(v) => Some(v.clone()),
_ => None,
});
Ok(ScimPerson {
uuid,
name,
displayname,
spn,
description,
mails,
managed_by,
groups,
})
}
}
impl From<bool> for ScimValueKanidm {
fn from(b: bool) -> Self {
Self::Bool(b)


@ -19,7 +19,7 @@ pub use self::auth::*;
pub use self::unix::*;
/// The type of Account in use.
#[derive(Clone, Copy, Debug, ToSchema)]
#[derive(Serialize, Deserialize, Clone, Copy, Debug, ToSchema)]
pub enum AccountType {
Person,
ServiceAccount,

pykanidm/poetry.lock (generated, 841 changed lines): diff suppressed because it is too large.


@ -29,7 +29,7 @@ Authlib = "^1.2.0"
[tool.poetry.group.dev.dependencies]
ruff = ">=0.5.1,<0.9.5"
ruff = ">=0.5.1,<0.11.5"
pytest = "^8.3.4"
mypy = "^1.14.1"
types-requests = "^2.32.0.20241016"
@ -40,7 +40,7 @@ pylint-pydantic = "^0.3.5"
coverage = "^7.6.10"
mkdocs = "^1.6.1"
mkdocs-material = "^9.6.1"
mkdocstrings = "^0.27.0"
mkdocstrings = ">=0.27,<0.30"
mkdocstrings-python = "^1.13.0"
pook = "^2.1.3"


@ -21,7 +21,8 @@ sudo apt-get install -y \
libsystemd-dev \
libudev-dev \
pkg-config \
ripgrep
ripgrep \
lld
export PATH="$HOME/.cargo/bin:$PATH"
@ -36,7 +37,7 @@ sudo chgrp vscode ~/ -R
# shellcheck disable=SC1091
source scripts/devcontainer_poststart.sh
cargo install
cargo install \
cargo-audit \
mdbook-mermaid \
mdbook


@ -21,6 +21,8 @@ ${SUDOCMD} apt-get update &&
cmake \
build-essential \
jq \
lld \
clang \
tpm-udev
if [ -z "${PACKAGING}" ]; then
@ -73,10 +75,6 @@ if [ -z "$(which cargo)" ]; then
ERROR=1
fi
if [ $ERROR -eq 0 ] && [ -z "$(which cross)" ]; then
echo "You don't have cross installed! Installing it now..."
cargo install -f cross
fi
if [ $ERROR -eq 0 ] && [ -z "$(which cargo-deb)" ]; then
echo "You don't have cargo-deb installed! Installing it now..."
cargo install -f cargo-deb


@ -25,7 +25,7 @@ def recover_account(username: str) -> str:
"recover-account",
username,
"--config",
"../../examples/insecure_server.toml",
"./insecure_server.toml",
"--output",
"json",
]


@ -44,7 +44,7 @@ fi
# defaults
KANIDM_CONFIG_FILE="../../examples/insecure_server.toml"
KANIDM_CONFIG_FILE="./insecure_server.toml"
KANIDM_URL="$(rg origin "${KANIDM_CONFIG_FILE}" | awk '{print $NF}' | tr -d '"')"
KANIDM_CA_PATH="/tmp/kanidm/ca.pem"
@ -83,7 +83,7 @@ if [ "${REMOVE_TEST_DB}" -eq 1 ]; then
rm /tmp/kanidm/kanidm.db || true
fi
export KANIDM_CONFIG="../../examples/insecure_server.toml"
export KANIDM_CONFIG="./insecure_server.toml"
IDM_ADMIN_USER="idm_admin@localhost"
echo "Resetting the idm_admin user..."


@ -11,7 +11,7 @@ WAIT_TIMER=5
echo "Building release binaries..."
cargo build --release --bin kanidm --bin kanidmd
cargo build --locked --release --bin kanidm --bin kanidmd
if [ -d '.git' ]; then
echo "You're in the root dir, let's move you!"
@ -25,7 +25,7 @@ if [ ! -f "run_insecure_dev_server.sh" ]; then
exit 1
fi
export KANIDM_CONFIG="../../examples/insecure_server.toml"
export KANIDM_CONFIG="./insecure_server.toml"
mkdir -p /tmp/kanidm/client_ca
@ -48,7 +48,7 @@ fi
ATTEMPT=0
KANIDM_CONFIG_FILE="../../examples/insecure_server.toml"
KANIDM_CONFIG_FILE="./insecure_server.toml"
KANIDM_URL="$(rg origin "${KANIDM_CONFIG_FILE}" | awk '{print $NF}' | tr -d '"')"
KANIDM_CA_PATH="/tmp/kanidm/ca.pem"


@ -48,26 +48,49 @@ RUN --mount=type=cache,id=cargo,target=/cargo \
export SCCACHE_DIR=/sccache && \
export RUSTC_WRAPPER=/usr/bin/sccache && \
export CC="/usr/bin/clang" && \
cargo build -p daemon ${KANIDM_BUILD_OPTIONS} \
cargo build --locked -p daemon ${KANIDM_BUILD_OPTIONS} \
--target-dir="/usr/src/kanidm/target/" \
--features="${KANIDM_FEATURES}" \
--release; \
sccache -s
# Find and copy dynamically linked libraries using ldd
# caveat: this actually partially runs the binary, so it doesn't work for cross-compilation
RUN <<EOF
mkdir -p /out/libs
mkdir -p /out/libs-root
ldd /usr/src/kanidm/target/release/kanidmd
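# Parse each "soname => /path" line of the ldd output: drop the vDSO, dedupe by
# resolved path, then install entries named by an absolute path (e.g. the dynamic
# loader) under /out/libs-root preserving their path, and flatten everything else
# by soname into /out/libs.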
ldd /usr/src/kanidm/target/release/kanidmd | grep -v 'linux-vdso.so' | awk '{print $(NF-1) " " $1}' | sort -u -k 1,1 | awk '{print "install", "-D", $1, (($2 ~ /^\//) ? "/out/libs-root" $2 : "/out/libs/" $2)}' | xargs -I {} sh -c {}
ls -Rla /out/libs
ls -Rla /out/libs-root
EOF
# ======================
FROM repos
RUN \
--mount=type=cache,id=zypp,target=/var/cache/zypp \
zypper install -y \
timezone \
openssl-3 \
sqlite3 \
pam
FROM scratch
COPY --from=builder /usr/src/kanidm/target/release/kanidmd /sbin/
COPY --from=builder /usr/src/kanidm/server/core/static /hpkg
RUN chmod +x /sbin/kanidmd
WORKDIR /
# Copy root certs for tls into image
# You can also mount the certs from the host
# --volume /etc/ssl/certs:/etc/ssl/certs:ro
COPY --from=repos /etc/ssl/certs /etc/ssl/certs
# Copy our build
COPY --from=builder --chmod=0755 /usr/src/kanidm/target/release/kanidmd /sbin/
# Web assets
COPY --from=builder /usr/src/kanidm/server/core/static /hpkg/
# Copy fixed-path dynamic libraries to their position
COPY --from=builder /out/libs-root/ /
COPY --from=builder /out/libs/ /lib/
# Inform the loader where to find libraries.
# This is necessary because openSUSE searches for libraries in /lib64 or /lib depending on the architecture, and we don't know which one we're on.
# Alternatively, we could symlink /lib64 to /lib, /usr/lib64 to /usr/lib, etc.
# We could also fix this by invoking the loader on the host (which appears to work even in a cross build), but this is easier.
# On Debian, the loader always searches /lib.
ENV LD_LIBRARY_PATH=/lib
WORKDIR /data


@ -20,7 +20,7 @@ echo $RUSTC_WRAPPER && \
echo $RUSTFLAGS && \
echo $CC && \
cargo build \
--offline \
--frozen \
--features=concread/simd_support,libsqlite3-sys/bundled \
--release; \
if [ "${SCCACHE_REDIS}" != "" ]; \


@ -9,6 +9,7 @@ use kanidm_proto::internal::{
IdentifyUserRequest, IdentifyUserResponse, ImageValue, OperationError, RadiusAuthToken,
SearchRequest, SearchResponse, UserAuthToken,
};
use kanidm_proto::oauth2::OidcWebfingerResponse;
use kanidm_proto::v1::{
AuthIssueSession, AuthRequest, Entry as ProtoEntry, UatStatus, UnixGroupToken, UnixUserToken,
WhoamiResponse,
@ -190,7 +191,7 @@ impl QueryServerReadV1 {
pub async fn handle_online_backup(
&self,
msg: OnlineBackupEvent,
outpath: &str,
outpath: &Path,
versions: usize,
) -> Result<(), OperationError> {
trace!(eventid = ?msg.eventid, "Begin online backup event");
@ -199,12 +200,12 @@ impl QueryServerReadV1 {
#[allow(clippy::unwrap_used)]
let timestamp = now.format(&Rfc3339).unwrap();
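// outpath is now a &Path, so the destination is built with join() rather than
// string formatting.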
let dest_file = format!("{}/backup-{}.json", outpath, timestamp);
let dest_file = outpath.join(format!("backup-{}.json", timestamp));
if Path::new(&dest_file).exists() {
if dest_file.exists() {
error!(
"Online backup file {} already exists, will not overwrite it.",
dest_file
dest_file.display()
);
return Err(OperationError::InvalidState);
}
@ -217,10 +218,14 @@ impl QueryServerReadV1 {
.get_be_txn()
.backup(&dest_file)
.map(|()| {
info!("Online backup created {} successfully", dest_file);
info!("Online backup created {} successfully", dest_file.display());
})
.map_err(|e| {
error!("Online backup failed to create {}: {:?}", dest_file, e);
error!(
"Online backup failed to create {}: {:?}",
dest_file.display(),
e
);
OperationError::InvalidState
})?;
}
@ -266,7 +271,11 @@ impl QueryServerReadV1 {
}
}
Err(e) => {
error!("Online backup cleanup error read dir {}: {}", outpath, e);
error!(
"Online backup cleanup error read dir {}: {}",
outpath.display(),
e
);
return Err(OperationError::InvalidState);
}
}
@ -1509,6 +1518,21 @@ impl QueryServerReadV1 {
idms_prox_read.oauth2_openid_discovery(&client_id)
}
#[instrument(
level = "info",
skip_all,
fields(uuid = ?eventid)
)]
pub async fn handle_oauth2_webfinger_discovery(
&self,
client_id: &str,
resource_id: &str,
eventid: Uuid,
) -> Result<OidcWebfingerResponse, OperationError> {
let mut idms_prox_read = self.idms.proxy_read().await?;
idms_prox_read.oauth2_openid_webfinger(client_id, resource_id)
}
#[instrument(
level = "info",
skip_all,


@ -1,6 +1,6 @@
use super::{QueryServerReadV1, QueryServerWriteV1};
use kanidm_proto::scim_v1::{
server::ScimEntryKanidm, ScimEntryGetQuery, ScimSyncRequest, ScimSyncState,
client::ScimFilter, server::ScimEntryKanidm, ScimEntryGetQuery, ScimSyncRequest, ScimSyncState,
};
use kanidmd_lib::idm::scim::{
GenerateScimSyncTokenEvent, ScimSyncFinaliseEvent, ScimSyncTerminateEvent, ScimSyncUpdateEvent,
@ -229,4 +229,27 @@ impl QueryServerReadV1 {
.qs_read
.scim_entry_id_get_ext(target_uuid, class, query, ident)
}
#[instrument(
level = "info",
skip_all,
fields(uuid = ?eventid)
)]
pub async fn scim_entry_search(
&self,
client_auth_info: ClientAuthInfo,
eventid: Uuid,
filter: ScimFilter,
query: ScimEntryGetQuery,
) -> Result<Vec<ScimEntryKanidm>, OperationError> {
let ct = duration_from_epoch_now();
let mut idms_prox_read = self.idms.proxy_read().await?;
let ident = idms_prox_read
.validate_client_auth_info_to_ident(client_auth_info, ct)
.inspect_err(|err| {
error!(?err, "Invalid identity");
})?;
idms_prox_read.qs_read.scim_search_ext(ident, filter, query)
}
}

File diff suppressed because it is too large.


@ -513,7 +513,7 @@ pub async fn oauth2_token_post(
}
}
// // For future openid integration
// For future openid integration
pub async fn oauth2_openid_discovery_get(
State(state): State<ServerState>,
Path(client_id): Path<String>,
@ -538,6 +538,46 @@ pub async fn oauth2_openid_discovery_get(
}
}
#[derive(Deserialize)]
pub struct Oauth2OpenIdWebfingerQuery {
resource: String,
}
pub async fn oauth2_openid_webfinger_get(
State(state): State<ServerState>,
Path(client_id): Path<String>,
Query(query): Query<Oauth2OpenIdWebfingerQuery>,
Extension(kopid): Extension<KOpId>,
) -> impl IntoResponse {
let Oauth2OpenIdWebfingerQuery { resource } = query;
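// RFC 7033 resources may be acct: URIs; strip the scheme for the lookup, but
// echo the original resource back as the subject in the response below.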
let cleaned_resource = resource.strip_prefix("acct:").unwrap_or(&resource);
let res = state
.qe_r_ref
.handle_oauth2_webfinger_discovery(&client_id, cleaned_resource, kopid.eventid)
.await;
match res {
Ok(mut dsc) => (
StatusCode::OK,
[
(ACCESS_CONTROL_ALLOW_ORIGIN, "*"),
(CONTENT_TYPE, "application/jrd+json"),
],
Json({
dsc.subject = resource;
dsc
}),
)
.into_response(),
Err(e) => {
error!(err = ?e, "Unable to access discovery info");
WebError::from(e).response_with_access_control_origin_header()
}
}
}
pub async fn oauth2_rfc8414_metadata_get(
State(state): State<ServerState>,
Path(client_id): Path<String>,
@ -770,6 +810,10 @@ pub fn route_setup(state: ServerState) -> Router<ServerState> {
"/oauth2/openid/:client_id/.well-known/openid-configuration",
get(oauth2_openid_discovery_get).options(oauth2_preflight_options),
)
.route(
"/oauth2/openid/:client_id/.well-known/webfinger",
get(oauth2_openid_webfinger_get).options(oauth2_preflight_options),
)
// // ⚠️ ⚠️ WARNING ⚠️ ⚠️
// // IF YOU CHANGE THESE VALUES YOU MUST UPDATE OIDC DISCOVERY URLS
.route(


@ -1396,6 +1396,7 @@ pub async fn credential_update_update(
return Err(WebError::InternalServerError(errmsg));
}
};
let session_token = match serde_json::from_value(cubody[1].clone()) {
Ok(val) => val,
Err(err) => {
@ -1406,6 +1407,7 @@ pub async fn credential_update_update(
};
debug!("session_token: {:?}", session_token);
debug!("scr: {:?}", scr);
state
.qe_r_ref
.handle_idmcredentialupdate(session_token, scr, kopid.eventid)


@ -0,0 +1,19 @@
use crate::https::ServerState;
use axum::routing::get;
use axum::Router;
use axum_htmx::HxRequestGuardLayer;
mod persons;
pub fn admin_router() -> Router<ServerState> {
let unguarded_router = Router::new()
.route("/persons", get(persons::view_persons_get))
.route(
"/person/:person_uuid/view",
get(persons::view_person_view_get),
);
let guarded_router = Router::new().layer(HxRequestGuardLayer::new("/ui"));
Router::new().merge(unguarded_router).merge(guarded_router)
}


@ -0,0 +1,193 @@
use crate::https::extractors::{DomainInfo, VerifiedClientInformation};
use crate::https::middleware::KOpId;
use crate::https::views::errors::HtmxError;
use crate::https::views::navbar::NavbarCtx;
use crate::https::views::Urls;
use crate::https::ServerState;
use askama::Template;
use axum::extract::{Path, State};
use axum::http::Uri;
use axum::response::{ErrorResponse, IntoResponse, Response};
use axum::Extension;
use axum_htmx::{HxPushUrl, HxRequest};
use futures_util::TryFutureExt;
use kanidm_proto::attribute::Attribute;
use kanidm_proto::internal::OperationError;
use kanidm_proto::scim_v1::client::ScimFilter;
use kanidm_proto::scim_v1::server::{ScimEffectiveAccess, ScimEntryKanidm, ScimPerson};
use kanidm_proto::scim_v1::ScimEntryGetQuery;
use kanidmd_lib::constants::EntryClass;
use kanidmd_lib::idm::server::DomainInfoRead;
use kanidmd_lib::idm::ClientAuthInfo;
use std::str::FromStr;
use uuid::Uuid;
const PERSON_ATTRIBUTES: [Attribute; 9] = [
Attribute::Uuid,
Attribute::Description,
Attribute::Name,
Attribute::DisplayName,
Attribute::Spn,
Attribute::Mail,
Attribute::Class,
Attribute::EntryManagedBy,
Attribute::DirectMemberOf,
];
#[derive(Template)]
#[template(path = "admin/admin_panel_template.html")]
pub(crate) struct PersonsView {
navbar_ctx: NavbarCtx,
partial: PersonsPartialView,
}
#[derive(Template)]
#[template(path = "admin/admin_persons_partial.html")]
struct PersonsPartialView {
persons: Vec<(ScimPerson, ScimEffectiveAccess)>,
}
#[derive(Template)]
#[template(path = "admin/admin_panel_template.html")]
struct PersonView {
partial: PersonViewPartial,
navbar_ctx: NavbarCtx,
}
#[derive(Template)]
#[template(path = "admin/admin_person_view_partial.html")]
struct PersonViewPartial {
person: ScimPerson,
scim_effective_access: ScimEffectiveAccess,
}
pub(crate) async fn view_person_view_get(
State(state): State<ServerState>,
HxRequest(is_htmx): HxRequest,
Extension(kopid): Extension<KOpId>,
VerifiedClientInformation(client_auth_info): VerifiedClientInformation,
Path(uuid): Path<Uuid>,
DomainInfo(domain_info): DomainInfo,
) -> axum::response::Result<Response> {
let (person, scim_effective_access) =
get_person_info(uuid, state, &kopid, client_auth_info, domain_info.clone()).await?;
let person_partial = PersonViewPartial {
person,
scim_effective_access,
};
let path_string = format!("/ui/admin/person/{uuid}/view");
let uri = Uri::from_str(path_string.as_str())
.map_err(|_| HtmxError::new(&kopid, OperationError::Backend, domain_info.clone()))?;
let push_url = HxPushUrl(uri);
Ok(if is_htmx {
(push_url, person_partial).into_response()
} else {
(
push_url,
PersonView {
partial: person_partial,
navbar_ctx: NavbarCtx { domain_info },
},
)
.into_response()
})
}
pub(crate) async fn view_persons_get(
State(state): State<ServerState>,
HxRequest(is_htmx): HxRequest,
Extension(kopid): Extension<KOpId>,
DomainInfo(domain_info): DomainInfo,
VerifiedClientInformation(client_auth_info): VerifiedClientInformation,
) -> axum::response::Result<Response> {
let persons = get_persons_info(state, &kopid, client_auth_info, domain_info.clone()).await?;
let persons_partial = PersonsPartialView { persons };
let push_url = HxPushUrl(Uri::from_static("/ui/admin/persons"));
Ok(if is_htmx {
(push_url, persons_partial).into_response()
} else {
(
push_url,
PersonsView {
navbar_ctx: NavbarCtx { domain_info },
partial: persons_partial,
},
)
.into_response()
})
}
async fn get_person_info(
uuid: Uuid,
state: ServerState,
kopid: &KOpId,
client_auth_info: ClientAuthInfo,
domain_info: DomainInfoRead,
) -> Result<(ScimPerson, ScimEffectiveAccess), ErrorResponse> {
let scim_entry: ScimEntryKanidm = state
.qe_r_ref
.scim_entry_id_get(
client_auth_info.clone(),
kopid.eventid,
uuid.to_string(),
EntryClass::Person,
ScimEntryGetQuery {
attributes: Some(Vec::from(PERSON_ATTRIBUTES)),
ext_access_check: true,
},
)
.map_err(|op_err| HtmxError::new(kopid, op_err, domain_info.clone()))
.await?;
if let Some(personinfo_info) = scimentry_into_personinfo(scim_entry) {
Ok(personinfo_info)
} else {
Err(HtmxError::new(kopid, OperationError::InvalidState, domain_info.clone()).into())
}
}
async fn get_persons_info(
state: ServerState,
kopid: &KOpId,
client_auth_info: ClientAuthInfo,
domain_info: DomainInfoRead,
) -> Result<Vec<(ScimPerson, ScimEffectiveAccess)>, ErrorResponse> {
let filter = ScimFilter::Equal(Attribute::Class.into(), EntryClass::Person.into());
let base: Vec<ScimEntryKanidm> = state
.qe_r_ref
.scim_entry_search(
client_auth_info.clone(),
kopid.eventid,
filter,
ScimEntryGetQuery {
attributes: Some(Vec::from(PERSON_ATTRIBUTES)),
ext_access_check: true,
},
)
.map_err(|op_err| HtmxError::new(kopid, op_err, domain_info.clone()))
.await?;
// TODO: inefficient to sort here
let mut persons: Vec<_> = base
.into_iter()
// TODO: Filtering away unsuccessful entries may not be desired.
.filter_map(scimentry_into_personinfo)
.collect();
persons.sort_by_key(|(sp, _)| sp.uuid);
persons.reverse();
Ok(persons)
}
fn scimentry_into_personinfo(
scim_entry: ScimEntryKanidm,
) -> Option<(ScimPerson, ScimEffectiveAccess)> {
let scim_effective_access = scim_entry.ext_access_check.clone()?; // TODO: This should be an error msg.
let person = ScimPerson::try_from(scim_entry).ok()?;
Some((person, scim_effective_access))
}


@ -45,14 +45,14 @@ pub(crate) async fn view_apps_get(
.await
.map_err(|old| HtmxError::new(&kopid, old, domain_info.clone()))?;
let apps_partial = AppsPartialView { apps: app_links };
Ok({
(
HxPushUrl(Uri::from_static(Urls::Apps.as_ref())),
AppsView {
navbar_ctx: NavbarCtx { domain_info },
apps_partial: AppsPartialView { apps: app_links },
},
)
.into_response()
let apps_view = AppsView {
navbar_ctx: NavbarCtx { domain_info },
apps_partial,
};
(HxPushUrl(Uri::from_static(Urls::Apps.as_ref())), apps_view).into_response()
})
}


@ -105,6 +105,7 @@ pub(crate) async fn view_enrol_get(
Ok(ProfileView {
navbar_ctx: NavbarCtx { domain_info },
profile_partial: EnrolDeviceView {
menu_active_item: ProfileMenuItems::EnrolDevice,
qr_code_svg,


@ -1,6 +1,6 @@
use axum::http::StatusCode;
use axum::response::{IntoResponse, Redirect, Response};
use axum_htmx::{HxReswap, HxRetarget, SwapOption};
use axum_htmx::{HxEvent, HxResponseTrigger, HxReswap, HxRetarget, SwapOption};
use kanidmd_lib::idm::server::DomainInfoRead;
use utoipa::ToSchema;
use uuid::Uuid;
@ -8,7 +8,7 @@ use uuid::Uuid;
use kanidm_proto::internal::OperationError;
use crate::https::middleware::KOpId;
use crate::https::views::UnrecoverableErrorView;
use crate::https::views::{ErrorToastPartial, UnrecoverableErrorView};
// #[derive(Template)]
// #[template(path = "recoverable_error_partial.html")]
// struct ErrorPartialView {
@ -41,7 +41,23 @@ impl IntoResponse for HtmxError {
| OperationError::SessionExpired
| OperationError::InvalidSessionState => Redirect::to("/ui").into_response(),
OperationError::SystemProtectedObject | OperationError::AccessDenied => {
(StatusCode::FORBIDDEN, body).into_response()
let trigger = HxResponseTrigger::after_swap([HxEvent::new(
"permissionDenied".to_string(),
)]);
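// after_swap emits a browser-side "permissionDenied" event once the toast
// markup below has been swapped in, so front-end script can display it.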
(
trigger,
HxRetarget("main".to_string()),
HxReswap(SwapOption::BeforeEnd),
(
StatusCode::FORBIDDEN,
ErrorToastPartial {
err_code: inner,
operation_id: kopid,
},
)
.into_response(),
)
.into_response()
}
OperationError::NoMatchingEntries => {
(StatusCode::NOT_FOUND, body).into_response()


@ -8,6 +8,7 @@ use axum::{
use axum_htmx::HxRequestGuardLayer;
use crate::https::views::admin::admin_router;
use constants::Urls;
use kanidmd_lib::{
idm::server::DomainInfoRead,
@ -16,6 +17,7 @@ use kanidmd_lib::{
use crate::https::ServerState;
mod admin;
mod apps;
pub(crate) mod constants;
mod cookies;
@ -36,6 +38,13 @@ struct UnrecoverableErrorView {
domain_info: DomainInfoRead,
}
#[derive(Template)]
#[template(path = "admin/error_toast.html")]
struct ErrorToastPartial {
err_code: OperationError,
operation_id: Uuid,
}
pub fn view_router() -> Router<ServerState> {
let mut unguarded_router = Router::new()
.route(
@ -122,7 +131,11 @@ pub fn view_router() -> Router<ServerState> {
.route("/api/cu_commit", post(reset::commit))
.layer(HxRequestGuardLayer::new("/ui"));
Router::new().merge(unguarded_router).merge(guarded_router)
let admin_router = admin_router();
Router::new()
.merge(unguarded_router)
.merge(guarded_router)
.nest("/admin", admin_router)
}
/// Serde deserialization decorator to map empty Strings to None,


@ -48,6 +48,7 @@ pub(crate) async fn view_profile_get(
Ok(ProfileView {
navbar_ctx: NavbarCtx { domain_info },
profile_partial: ProfilePartialView {
menu_active_item: ProfileMenuItems::UserProfile,
can_rw,


@ -210,6 +210,8 @@ pub(crate) struct TotpInit {
pub(crate) struct TotpCheck {
wrong_code: bool,
broken_app: bool,
bad_name: bool,
taken_name: Option<String>,
}
#[derive(Template)]
@ -599,6 +601,25 @@ pub(crate) async fn add_totp(
let cu_session_token = get_cu_session(&jar).await?;
let check_totpcode = u32::from_str(&new_totp_form.check_totpcode).unwrap_or_default();
let swapped_handler_trigger =
HxResponseTrigger::after_swap([HxEvent::new("addTotpSwapped".to_string())]);
// If the user has not provided a name or added only spaces we exit early
if new_totp_form.name.trim().is_empty() {
return Ok((
swapped_handler_trigger,
AddTotpPartial {
totp_init: None,
totp_name: "".into(),
totp_value: new_totp_form.check_totpcode.clone(),
check: TotpCheck {
bad_name: true,
..Default::default()
},
},
)
.into_response());
}
let cu_status = if new_totp_form.ignore_broken_app {
// Cope with SHA1 apps because the user has intended to do so, their totp code was already verified
@ -624,6 +645,10 @@ pub(crate) async fn add_totp(
wrong_code: true,
..Default::default()
},
CURegState::TotpNameTryAgain(val) => TotpCheck {
taken_name: Some(val.clone()),
..Default::default()
},
CURegState::TotpInvalidSha1 => TotpCheck {
broken_app: true,
..Default::default()
@ -646,9 +671,6 @@ pub(crate) async fn add_totp(
new_totp_form.check_totpcode.clone()
};
let swapped_handler_trigger =
HxResponseTrigger::after_swap([HxEvent::new("addTotpSwapped".to_string())]);
Ok((
swapped_handler_trigger,
AddTotpPartial {


@ -112,19 +112,19 @@ impl IntervalActor {
if !op.exists() {
info!(
"Online backup output folder '{}' does not exist, trying to create it.",
outpath
outpath.display()
);
fs::create_dir_all(&outpath).map_err(|e| {
error!(
"Online backup failed to create output directory '{}': {}",
outpath.clone(),
outpath.display(),
e
)
})?;
}
if !op.is_dir() {
error!("Online backup output '{}' is not a directory or we are missing permissions to access it.", outpath);
error!("Online backup output '{}' is not a directory or we are missing permissions to access it.", outpath.display());
return Err(());
}
@ -148,7 +148,7 @@ impl IntervalActor {
if let Err(e) = server
.handle_online_backup(
OnlineBackupEvent::new(),
outpath.clone().as_str(),
&outpath,
versions,
)
.await


@ -1,8 +1,5 @@
use std::net;
use std::pin::Pin;
use std::str::FromStr;
use crate::actors::QueryServerReadV1;
use crate::CoreAction;
use futures_util::sink::SinkExt;
use futures_util::stream::StreamExt;
use kanidmd_lib::idm::ldap::{LdapBoundToken, LdapResponseState};
@ -10,13 +7,15 @@ use kanidmd_lib::prelude::*;
use ldap3_proto::proto::LdapMsg;
use ldap3_proto::LdapCodec;
use openssl::ssl::{Ssl, SslAcceptor};
use std::net;
use std::pin::Pin;
use std::str::FromStr;
use tokio::io::{AsyncRead, AsyncWrite};
use tokio::net::{TcpListener, TcpStream};
use tokio_openssl::SslStream;
use tokio_util::codec::{FramedRead, FramedWrite};
use crate::CoreAction;
use tokio::sync::broadcast;
use tokio::sync::mpsc;
use tokio_openssl::SslStream;
use tokio_util::codec::{FramedRead, FramedWrite};
struct LdapSession {
uat: Option<LdapBoundToken>,
@ -49,28 +48,14 @@ async fn client_process_msg(
.await
}
async fn client_process(
tcpstream: TcpStream,
tls_acceptor: SslAcceptor,
async fn client_process<STREAM>(
stream: STREAM,
client_address: net::SocketAddr,
qe_r_ref: &'static QueryServerReadV1,
) {
// Start the event
// From the parameters we need to create an SslContext.
let mut tlsstream = match Ssl::new(tls_acceptor.context())
.and_then(|tls_obj| SslStream::new(tls_obj, tcpstream))
{
Ok(ta) => ta,
Err(e) => {
error!("LDAP TLS setup error, continuing -> {:?}", e);
return;
}
};
if let Err(e) = SslStream::accept(Pin::new(&mut tlsstream)).await {
error!("LDAP TLS accept error, continuing -> {:?}", e);
return;
};
let (r, w) = tokio::io::split(tlsstream);
) where
STREAM: AsyncRead + AsyncWrite,
{
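// Generic over the stream so TLS-wrapped and plaintext TCP connections share
// the same LDAP protocol loop.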
let (r, w) = tokio::io::split(stream);
let mut r = FramedRead::new(r, LdapCodec::default());
let mut w = FramedWrite::new(w, LdapCodec::default());
@ -126,7 +111,32 @@ async fn client_process(
}
}
/// TLS LDAP Listener, hands off to [client_process]
async fn client_tls_accept(
tcpstream: TcpStream,
tls_acceptor: SslAcceptor,
client_socket_addr: net::SocketAddr,
qe_r_ref: &'static QueryServerReadV1,
) {
// Start the event
// From the parameters we need to create an SslContext.
let mut tlsstream = match Ssl::new(tls_acceptor.context())
.and_then(|tls_obj| SslStream::new(tls_obj, tcpstream))
{
Ok(ta) => ta,
Err(err) => {
error!(?err, %client_socket_addr, "LDAP TLS setup error");
return;
}
};
if let Err(err) = SslStream::accept(Pin::new(&mut tlsstream)).await {
error!(?err, %client_socket_addr, "LDAP TLS accept error");
return;
};
tokio::spawn(client_process(tlsstream, client_socket_addr, qe_r_ref));
}
/// TLS LDAP Listener, hands off to [client_tls_accept]
async fn ldap_tls_acceptor(
listener: TcpListener,
mut tls_acceptor: SslAcceptor,
@ -145,10 +155,10 @@ async fn ldap_tls_acceptor(
match accept_result {
Ok((tcpstream, client_socket_addr)) => {
let clone_tls_acceptor = tls_acceptor.clone();
tokio::spawn(client_process(tcpstream, clone_tls_acceptor, client_socket_addr, qe_r_ref));
tokio::spawn(client_tls_accept(tcpstream, clone_tls_acceptor, client_socket_addr, qe_r_ref));
}
Err(e) => {
error!("LDAP acceptor error, continuing -> {:?}", e);
Err(err) => {
warn!(?err, "LDAP acceptor error, continuing");
}
}
}
@ -161,6 +171,34 @@ async fn ldap_tls_acceptor(
info!("Stopped {}", super::TaskName::LdapActor);
}
/// PLAIN LDAP Listener, hands off to [client_process]
async fn ldap_plaintext_acceptor(
listener: TcpListener,
qe_r_ref: &'static QueryServerReadV1,
mut rx: broadcast::Receiver<CoreAction>,
) {
loop {
tokio::select! {
Ok(action) = rx.recv() => {
match action {
CoreAction::Shutdown => break,
}
}
accept_result = listener.accept() => {
match accept_result {
Ok((tcpstream, client_socket_addr)) => {
tokio::spawn(client_process(tcpstream, client_socket_addr, qe_r_ref));
}
Err(e) => {
error!("LDAP acceptor error, continuing -> {:?}", e);
}
}
}
}
}
info!("Stopped {}", super::TaskName::LdapActor);
}
pub(crate) async fn create_ldap_server(
address: &str,
opt_ssl_acceptor: Option<SslAcceptor>,
@ -197,10 +235,7 @@ pub(crate) async fn create_ldap_server(
tls_acceptor_reload_rx,
))
}
None => {
error!("The server won't run without TLS!");
return Err(());
}
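// With no TLS acceptor configured, fall back to a plaintext LDAP listener
// instead of refusing to start.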
None => tokio::spawn(ldap_plaintext_acceptor(listener, qe_r_ref, rx)),
};
info!("Created LDAP interface");


@ -36,9 +36,10 @@ mod ldaps;
mod repl;
mod utils;
use std::fmt::{Display, Formatter};
use std::sync::Arc;
use crate::actors::{QueryServerReadV1, QueryServerWriteV1};
use crate::admin::AdminActor;
use crate::config::{Configuration, ServerRole};
use crate::interval::IntervalActor;
use crate::utils::touch_file_or_quit;
use compact_jwt::{JwsHs256Signer, JwsSigner};
use kanidm_proto::internal::OperationError;
@ -50,17 +51,14 @@ use kanidmd_lib::status::StatusActor;
use kanidmd_lib::value::CredentialType;
#[cfg(not(target_family = "windows"))]
use libc::umask;
use std::fmt::{Display, Formatter};
use std::path::Path;
use std::sync::Arc;
use tokio::sync::broadcast;
use tokio::sync::mpsc;
use tokio::sync::Notify;
use tokio::task;
use crate::actors::{QueryServerReadV1, QueryServerWriteV1};
use crate::admin::AdminActor;
use crate::config::{Configuration, ServerRole};
use crate::interval::IntervalActor;
use tokio::sync::mpsc;
// === internal setup helpers
fn setup_backend(config: &Configuration, schema: &Schema) -> Result<Backend, OperationError> {
@ -80,7 +78,7 @@ fn setup_backend_vacuum(
let pool_size: u32 = config.threads as u32;
let cfg = BackendConfig::new(
config.db_path.as_str(),
config.db_path.as_deref(),
pool_size,
config.db_fs_type.unwrap_or_default(),
config.db_arc_size,
@ -115,9 +113,9 @@ async fn setup_qs_idms(
.await?;
// We generate a SINGLE idms only!
let is_integration_test = config.integration_test_config.is_some();
let (idms, idms_delayed, idms_audit) =
IdmServer::new(query_server.clone(), &config.origin).await?;
IdmServer::new(query_server.clone(), &config.origin, is_integration_test).await?;
Ok((query_server, idms, idms_delayed, idms_audit))
}
@ -335,7 +333,7 @@ pub fn dbscan_restore_quarantined_core(config: &Configuration, id: u64) {
};
}
pub fn backup_server_core(config: &Configuration, dst_path: &str) {
pub fn backup_server_core(config: &Configuration, dst_path: &Path) {
let schema = match Schema::new() {
Ok(s) => s,
Err(e) => {
@ -371,8 +369,11 @@ pub fn backup_server_core(config: &Configuration, dst_path: &str) {
// Let the txn abort, even on success.
}
pub async fn restore_server_core(config: &Configuration, dst_path: &str) {
touch_file_or_quit(config.db_path.as_str());
pub async fn restore_server_core(config: &Configuration, dst_path: &Path) {
// If it's an in memory database, we don't need to touch anything
if let Some(db_path) = config.db_path.as_ref() {
touch_file_or_quit(db_path);
}
// First, we provide the in-memory schema so that core attrs are indexed correctly.
let schema = match Schema::new() {
@ -1011,7 +1012,7 @@ pub async fn create_server_core(
let tls_accepter_reload_task_notify = tls_acceptor_reload_notify.clone();
let tls_config = config.tls_config.clone();
let ldap_configured = config.ldapaddress.is_some();
let ldap_configured = config.ldapbindaddress.is_some();
let (ldap_tls_acceptor_reload_tx, ldap_tls_acceptor_reload_rx) = mpsc::channel(1);
let (http_tls_acceptor_reload_tx, http_tls_acceptor_reload_rx) = mpsc::channel(1);
@ -1076,24 +1077,19 @@ pub async fn create_server_core(
};
// If we have been requested to init LDAP, configure it now.
let maybe_ldap_acceptor_handle = match &config.ldapaddress {
let maybe_ldap_acceptor_handle = match &config.ldapbindaddress {
Some(la) => {
let opt_ldap_ssl_acceptor = maybe_tls_acceptor.clone();
if !config_test {
// ⚠️ only start the sockets and listeners in non-config-test modes.
let h = ldaps::create_ldap_server(
la.as_str(),
opt_ldap_ssl_acceptor,
server_read_ref,
broadcast_tx.subscribe(),
ldap_tls_acceptor_reload_rx,
)
.await?;
Some(h)
} else {
None
}
let h = ldaps::create_ldap_server(
la.as_str(),
opt_ldap_ssl_acceptor,
server_read_ref,
broadcast_tx.subscribe(),
ldap_tls_acceptor_reload_rx,
)
.await?;
Some(h)
}
None => {
debug!("LDAP not requested, skipping");


@ -1,32 +1,39 @@
use filetime::FileTime;
use std::fs::File;
use std::io::ErrorKind;
use std::path::PathBuf;
use std::path::Path;
use std::time::SystemTime;
pub fn touch_file_or_quit(file_path: &str) {
pub fn touch_file_or_quit<P: AsRef<Path>>(file_path: P) {
/*
Attempts to touch the file at file_path, quitting the application if this fails for any reason.
Also creates the file if it doesn't already exist.
*/
if PathBuf::from(file_path).exists() {
let file_path: &Path = file_path.as_ref();
if file_path.exists() {
let t = FileTime::from_system_time(SystemTime::now());
match filetime::set_file_times(file_path, t, t) {
Ok(_) => debug!(
"Successfully touched existing file {}, can continue",
file_path
file_path.display()
),
Err(e) => {
match e.kind() {
ErrorKind::PermissionDenied => {
// we bail here because you won't be able to write them back...
error!("Permission denied writing to {}, quitting.", file_path)
error!(
"Permission denied writing to {}, quitting.",
file_path.display()
)
}
_ => {
error!(
"Failed to write to {} due to error: {:?} ... quitting.",
file_path, e
file_path.display(),
e
)
}
}
@ -35,11 +42,12 @@ pub fn touch_file_or_quit(file_path: &str) {
}
} else {
match File::create(file_path) {
Ok(_) => debug!("Successfully touched new file {}", file_path),
Ok(_) => debug!("Successfully touched new file {}", file_path.display()),
Err(e) => {
error!(
"Failed to write to {} due to error: {:?} ... quitting.",
file_path, e
file_path.display(),
e
);
std::process::exit(1);
}


@ -88,15 +88,6 @@
</cc:Work>
</rdf:RDF>
</metadata>
<rect
style="fill:#ffffff;stroke-width:0.243721"
id="rect443"
width="135.46666"
height="135.46666"
x="0"
y="0"
inkscape:label="background"
sodipodi:insensitive="true" />
<g
id="layer1"
transform="matrix(0.91407203,0,0,0.91407203,-34.121105,-24.362694)"

Image diff (SVG): 16 KiB before, 16 KiB after.


@ -20,6 +20,15 @@ body {
max-width: 680px;
}
/*
* Bootstrap 5.3 fix for input-group validation
* :has checks that a child can be selected with the selector
* + selects the next sibling.
*/
.was-validated .input-group:has(.form-control:invalid) + .invalid-feedback {
display: block !important;
}
/*
* Sidebar
*/


@ -0,0 +1,10 @@
(% extends "base_htmx_with_nav.html" %)
(% block title %)Admin Panel(% endblock %)
(% block head %)
(% endblock %)
(% block main %)
(( partial|safe ))
(% endblock %)


@ -0,0 +1,19 @@
<main class="container-xxl pb-5">
<div class="d-flex flex-sm-row flex-column">
<div class="list-group side-menu">
<a href="/ui/admin/persons" hx-target="#main" class="list-group-item list-group-item-action (% block persons_item_extra_classes%)(%endblock%)">
<img src="/pkg/img/icon-accounts.svg" alt="Persons" width="20" height="20">
Persons</a>
<a href="/ui/admin/groups" hx-target="#main" class="list-group-item list-group-item-action (% block groups_item_extra_classes%)(%endblock%)">
<img src="/pkg/img/icon-groups.svg" alt="Groups" width="20" height="20">
Groups (placeholder)</a>
<a href="/ui/admin/oauth2" hx-target="#main" class="list-group-item list-group-item-action (% block oauth2_item_extra_classes%)(%endblock%)">
<img src="/pkg/img/icon-oauth2.svg" alt="Oauth2" width="20" height="20">
Oauth2 (placeholder)</a>
</div>
<div id="settings-window" class="flex-grow-1 ps-sm-4 pt-sm-0 pt-4">
(% block admin_page %)
(% endblock %)
</div>
</div>
</main>


@ -0,0 +1,29 @@
(% macro string_attr(dispname, name, value, editable, attribute) %)
(% if scim_effective_access.search.check(attribute|as_ref) %)
<div class="row mt-3">
<label for="person(( name ))" class="col-12 col-md-3 col-lg-2 col-form-label fw-bold py-0">(( dispname ))</label>
<div class="col-12 col-md-8 col-lg-6">
<input readonly class="form-control-plaintext py-0" id="person(( name ))" name="(( name ))" value="(( value ))">
</div>
</div>
(% endif %)
(% endmacro %)
<form hx-validate="true" hx-ext="bs-validation">
(% call string_attr("UUID", "uuid", person.uuid, false, Attribute::Uuid) %)
(% call string_attr("SPN", "spn", person.spn, false, Attribute::Spn) %)
(% call string_attr("Name", "name", person.name, true, Attribute::Name) %)
(% call string_attr("Displayname", "displayname", person.displayname, true, Attribute::DisplayName) %)
(% if let Some(description) = person.description %)
(% call string_attr("Description", "description", description, true, Attribute::Description) %)
(% else %)
(% call string_attr("Description", "description", "none", true, Attribute::Description) %)
(% endif %)
(% if let Some(entry_managed_by) = person.managed_by %)
(% call string_attr("Managed By", "managed_by", entry_managed_by.value, true, Attribute::EntryManagedBy) %)
(% else %)
(% call string_attr("Managed By", "managed_by", "none", true, Attribute::EntryManagedBy) %)
(% endif %)
</form>


@ -0,0 +1,57 @@
(% extends "admin/admin_partial_base.html" %)
(% block persons_item_extra_classes %)active(% endblock %)
(% block admin_page %)
<nav aria-label="breadcrumb">
<ol class="breadcrumb">
<li class="breadcrumb-item"><a href="/ui/admin/persons" hx-target="#main">persons Management</a></li>
<li class="breadcrumb-item active" aria-current="page">Viewing</li>
</ol>
</nav>
(% include "admin_person_details_partial.html" %)
<hr>
(% if scim_effective_access.search.check(Attribute::Mail|as_ref) %)
<label class="mt-3 fw-bold">Emails</label>
<form hx-validate="true" hx-ext="bs-validation">
(% if person.mails.len() == 0 %)
<p>There are no email addresses associated with this person.</p>
(% else %)
<ol class="list-group col-12 col-md-8 col-lg-6">
(% for mail in person.mails %)
<li id="personMail(( loop.index ))" class="list-group-item d-flex flex-row justify-content-between">
<div class="d-flex align-items-center">(( mail.value ))</div>
<div class="buttons float-end">
</div>
</li>
(% endfor %)
</ol>
(% endif %)
</form>
(% endif %)
(% if scim_effective_access.search.check(Attribute::DirectMemberOf|as_ref) %)
<label class="mt-3 fw-bold">DirectMemberOf</label>
<form hx-validate="true" hx-ext="bs-validation">
(% if person.groups.len() == 0 %)
<p>There are no groups this person is a direct member of.</p>
(% else %)
<ol class="list-group col-12 col-md-8 col-lg-6">
(% for group in person.groups %)
<li id="personGroup(( loop.index ))" class="list-group-item d-flex flex-row justify-content-between">
<div class="d-flex align-items-center">(( group.value ))</div>
<div class="buttons float-end">
</div>
</li>
(% endfor %)
</ol>
(% endif %)
</form>
(% endif %)
(% endblock %)


@ -0,0 +1,23 @@
(% extends "admin/admin_partial_base.html" %)
(% block persons_item_extra_classes %)active(% endblock %)
(% block admin_page %)
<nav aria-label="breadcrumb">
<ol class="breadcrumb">
<li class="breadcrumb-item active" aria-current="page">Person Management</li>
</ol>
</nav>
<ul class="list-group">
(% for (person, _) in persons %)
<li class="list-group-item d-flex flex-row justify-content-between">
<div class="d-flex align-items-center">
<a href="/ui/admin/person/(( person.uuid ))/view" hx-target="#main">(( person.name ))</a> <span class="text-secondary d-none d-lg-inline-block mx-4">(( person.uuid ))</span>
</div>
<div class="buttons float-end">
</div>
</li>
(% endfor %)
</ul>
(% endblock %)


@ -0,0 +1,12 @@
<div class="toast-container position-fixed bottom-0 end-0 p-3">
<div id="permissionDeniedToast" class="toast" role="alert" aria-live="assertive" aria-atomic="true">
<div class="toast-header">
<strong class="me-auto">Error</strong>
<button type="button" class="btn-close" data-bs-dismiss="toast" aria-label="Close"></button>
</div>
<div class="toast-body">
(( err_code )).<br>
OpId: (( operation_id ))
</div>
</div>
</div>


@ -0,0 +1,11 @@
<div class="toast-container position-fixed bottom-0 end-0 p-3">
<div id="savedToast" class="toast" role="alert" aria-live="assertive" aria-atomic="true">
<div class="toast-header">
<strong class="me-auto">Success</strong>
<button type="button" class="btn-close" data-bs-dismiss="toast" aria-label="Close"></button>
</div>
<div class="toast-body">
Saved.
</div>
</div>
</div>


@ -2,7 +2,9 @@
(% block body %)
(% include "navbar.html" %)
<div id="main">
(% block main %)(% endblock %)
</div>
(% include "signout_modal.html" %)
(% endblock %)


@ -19,7 +19,8 @@
<label for="new-totp-name" class="form-label">Enter a name for your TOTP</label>
<input
aria-describedby="totp-name-validation-feedback"
class="form-control"
class="form-control (%- if let Some(_) = check.taken_name -%)is-invalid(%- endif -%)
(%- if check.bad_name -%)is-invalid(%- endif -%)"
name="name"
id="new-totp-name"
value="(( totp_name ))"
@ -51,6 +52,18 @@
<li>Incorrect TOTP code - Please try again</li>
</ul>
</div>
(% else if check.bad_name %)
<div id="neq-totp-validation-feedback">
<ul>
<li>The name you provided was empty or blank. Please provide a proper name.</li>
</ul>
</div>
(% else if let Some(name) = check.taken_name %)
<div id="neq-totp-validation-feedback">
<ul>
<li>The name "((name))" is either invalid or already taken, Please pick a different one</li>
</ul>
</div>
(% endif %)
</form>


@ -1,4 +1,4 @@
<nav class="navbar navbar-expand-md kanidm_navbar mb-4">
<nav hx-boost="false" class="navbar navbar-expand-md kanidm_navbar mb-4">
<div class="container-lg">
<a class="navbar-brand d-flex align-items-center" href="/ui/apps">
(% if navbar_ctx.domain_info.image().is_some() %)
@ -39,4 +39,4 @@
</ul>
</div>
</div>
</nav>
</nav>


@ -16,7 +16,7 @@ repository = { workspace = true }
[[bin]]
name = "kanidmd"
path = "src/main.rs"
test = true
test = false
doctest = false
[features]
@ -57,6 +57,31 @@ clap = { workspace = true, features = ["derive"] }
clap_complete = { workspace = true }
kanidm_build_profiles = { workspace = true }
## Debian packaging
[package.metadata.deb]
name = "kanidmd"
maintainer = "James Hodgkinson <james@terminaloutcomes.com>"
# Can't use $auto depends because the name of libssl3 varies by distro and version
depends = [
"libc6",
"tpm-udev",
"libssl3 | libssl3t64",
]
section = "network"
priority = "optional"
changelog = "../../target/debian/changelog" # Generated by platform/debian/build_debs.sh
assets = [
[ "target/release/kanidmd", "usr/bin/", "755" ],
[ "debian/group.conf", "usr/lib/sysusers.d/kandimd.conf", "644" ],
[ "debian/server.toml", "etc/kanidmd/server.toml", "640" ],
[ "../../examples/server.toml", "usr/share/kanidmd/", "444" ],
[ "../core/static/**/*", "usr/share/kanidmd/static", "444" ],
]
maintainer-scripts = "debian/"
systemd-units = [
{ unit-name = "kanidmd", enable = false}, # Cannot start without manual config
]
[package.metadata.cargo-machete]
ignored = ["clap_complete", "kanidm_build_profiles"]


@ -0,0 +1,2 @@
# This is a sysusers.d format config, please refer to man sysusers.d(5)
g kanidmd -


@ -10,13 +10,15 @@ Before=radiusd.service
[Service]
Type=notify
DynamicUser=yes
StateDirectory=kanidm
User=kanidmd_dyn
Group=kanidmd
StateDirectory=kanidmd
StateDirectoryMode=0750
CacheDirectory=kanidmd
CacheDirectoryMode=0750
RuntimeDirectory=kanidmd
RuntimeDirectoryMode=0755
ExecStart=/usr/sbin/kanidmd server -c /etc/kanidm/server.toml
ExecStart=/usr/bin/kanidmd server
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE


@ -0,0 +1,38 @@
#!/bin/sh
# postinst script for kanidmd
#
# see: dh_installdeb(1)
set -e
case "$1" in
configure)
echo "Creating the kanidmd group for config & cert ownership..."
systemd-sysusers
echo "Fixing ownership of server configuration ..."
chown :kanidmd /etc/kanidmd/server.toml*
echo "============================="
echo "Thanks for installing Kanidm!"
echo "============================="
echo "Please ensure you modify the configuration file at /etc/kanidmd/server.toml"
echo "Only then: systemctl enable kanidmd.service"
echo "Full examples are in /usr/share/kanidmd/"
;;
abort-upgrade|abort-remove|abort-deconfigure)
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
#DEBHELPER#
exit 0


@ -0,0 +1,51 @@
# Kanidm server minimal configuration - /etc/kanidm/server.toml
# For a full example and documentation, see /usr/share/kanidmd/server.toml
# or `examples/server.toml` in the source repository
# NOTE: You must configure at least domain & origin below to allow the server to start!
# The webserver bind address. Requires TLS certificates.
# If the port is set to 443 you may require the
# NET_BIND_SERVICE capability.
# Defaults to "127.0.0.1:8443"
bindaddress = "127.0.0.1:8443"
# The path to the kanidm database.
# The provided example uses systemd dynamic user pathing for security
db_path = "/var/lib/private/kanidmd/kanidm.db"
# TLS chain and key in pem format. Both must be present.
# If the server receives a SIGHUP, these files will be
# re-read and reloaded if their content is valid.
# These should be owned by root:kanidmd to give the service access.
tls_chain = "/etc/kanidmd/chain.pem"
tls_key = "/etc/kanidmd/key.pem"
log_level = "info"
# The DNS domain name of the server. This is used in a
# number of security-critical contexts
# such as webauthn, so it *must* match your DNS
#
# ⚠️ WARNING ⚠️
#
# Changing this value after first use WILL break many types of
# registered credentials for accounts including but not limited
# to: webauthn, oauth tokens, and more.
# If you change this value you *must* run
# `kanidmd domain rename` immediately after.
# NOTE: You must set this value!
#domain = "idm.example.com"
#
# The origin for webauthn. This is the url to the server,
# with the port included if it is non-standard (any port
# except 443). This must match or be a descendent of the
# domain name you configure above. If these two items are
# not consistent, the server WILL refuse to start!
# origin = "https://idm.example.com"
# NOTE: You must set this value!
#origin = "https://idm.example.com:8443"
[online_backup]
path = "/var/lib/private/kanidmd/backups/"
schedule = "00 22 * * *"


@ -1,3 +1,4 @@
version = "2"
bindaddress = "[::]:8443"
ldapbindaddress = "127.0.0.1:3636"


@ -1,4 +1,4 @@
#!/bin/bash
#!/bin/sh
set -e
@ -22,14 +22,16 @@ fi
mkdir -p "${KANI_TMP}"/client_ca
CONFIG_FILE=${CONFIG_FILE:="${SCRIPT_DIR}/../../examples/insecure_server.toml"}
CONFIG_FILE=${CONFIG_FILE:="${SCRIPT_DIR}/insecure_server.toml"}
if [ ! -f "${CONFIG_FILE}" ]; then
echo "Couldn't find configuration file at ${CONFIG_FILE}, please ensure you're running this script from its base directory (${SCRIPT_DIR})."
exit 1
fi
pushd "${SCRIPT_DIR}" > /dev/null 2>&1
# Save current directory and change to script directory without pushd
OLD_DIR=$(pwd)
cd "${SCRIPT_DIR}" || exit 1
if [ -n "${1}" ]; then
COMMAND=$*
#shellcheck disable=SC2086
@ -40,4 +42,4 @@ else
#shellcheck disable=SC2086
cargo run ${KANI_CARGO_OPTS} --bin kanidmd -- server -c "${CONFIG_FILE}"
fi
popd > /dev/null 2>&1
cd "${OLD_DIR}" || exit 1


@ -37,7 +37,7 @@ use kanidmd_core::admin::{
AdminTaskRequest, AdminTaskResponse, ClientCodec, ProtoDomainInfo,
ProtoDomainUpgradeCheckReport, ProtoDomainUpgradeCheckStatus,
};
use kanidmd_core::config::{Configuration, ServerConfig};
use kanidmd_core::config::{CliConfig, Configuration, EnvironmentConfig, ServerConfigUntagged};
use kanidmd_core::{
backup_server_core, cert_generate_core, create_server_core, dbscan_get_id2entry_core,
dbscan_list_id2entry_core, dbscan_list_index_analysis_core, dbscan_list_index_core,
@ -379,17 +379,13 @@ fn check_file_ownership(opt: &KanidmdParser) -> Result<(), ExitCode> {
}
// We have to do this because we can't use tracing until we've started the logging pipeline, and we can't start the logging pipeline until the tokio runtime's doing its thing.
async fn start_daemon(
opt: KanidmdParser,
mut config: Configuration,
sconfig: ServerConfig,
) -> ExitCode {
async fn start_daemon(opt: KanidmdParser, config: Configuration) -> ExitCode {
// if we have a server config and it has an OTEL URL, then we'll start the logging pipeline now.
// TODO: only send to stderr when we're not in a TTY
let sub = match sketching::otel::start_logging_pipeline(
&sconfig.otel_grpc_url,
sconfig.log_level.unwrap_or_default(),
&config.otel_grpc_url,
config.log_level,
"kanidmd",
) {
Err(err) => {
@ -423,8 +419,8 @@ async fn start_daemon(
return err;
};
if let Some(db_path) = sconfig.db_path.as_ref() {
let db_pathbuf = PathBuf::from(db_path.as_str());
if let Some(db_path) = config.db_path.as_ref() {
let db_pathbuf = db_path.to_path_buf();
// We can't check the db_path permissions because it may not exist yet!
if let Some(db_parent_path) = db_pathbuf.parent() {
if !db_parent_path.exists() {
@ -464,72 +460,75 @@ async fn start_daemon(
warn!("WARNING: DB folder {} has 'everyone' permission bits in the mode. This could be a security risk ...", db_par_path_buf.to_str().unwrap_or("invalid file path"));
}
}
config.update_db_path(db_path);
} else {
error!("No db_path set in configuration, server startup will FAIL!");
return ExitCode::FAILURE;
}
if let Some(origin) = sconfig.origin.clone() {
config.update_origin(&origin);
} else {
error!("No origin set in configuration, server startup will FAIL!");
return ExitCode::FAILURE;
}
if let Some(domain) = sconfig.domain.clone() {
config.update_domain(&domain);
} else {
error!("No domain set in configuration, server startup will FAIL!");
return ExitCode::FAILURE;
}
config.update_db_arc_size(sconfig.get_db_arc_size());
config.update_role(sconfig.role);
config.update_output_mode(opt.commands.commonopt().output_mode.to_owned().into());
config.update_trust_x_forward_for(sconfig.trust_x_forward_for);
config.update_admin_bind_path(&sconfig.adminbindpath);
config.update_replication_config(sconfig.repl_config.clone());
match &opt.commands {
let lock_was_setup = match &opt.commands {
// we aren't going to touch the DB so we can carry on
KanidmdOpt::ShowReplicationCertificate { .. }
| KanidmdOpt::RenewReplicationCertificate { .. }
| KanidmdOpt::RefreshReplicationConsumer { .. }
| KanidmdOpt::RecoverAccount { .. }
| KanidmdOpt::HealthCheck(_) => (),
| KanidmdOpt::HealthCheck(_) => None,
_ => {
// Okay - Lets now create our lock and go.
#[allow(clippy::expect_used)]
let klock_path = match sconfig.db_path.clone() {
Some(val) => format!("{}.klock", val),
None => std::env::temp_dir()
.join("kanidmd.klock")
.to_str()
.expect("Unable to create klock path, this is a critical error!")
.to_string(),
let klock_path = match config.db_path.clone() {
Some(val) => val.with_extension("klock"),
None => std::env::temp_dir().join("kanidmd.klock"),
};
let flock = match File::create(&klock_path) {
Ok(flock) => flock,
Err(e) => {
error!("ERROR: Refusing to start - unable to create kanidmd exclusive lock at {} - {:?}", klock_path, e);
Err(err) => {
error!(
"ERROR: Refusing to start - unable to create kanidmd exclusive lock at {}",
klock_path.display()
);
error!(?err);
return ExitCode::FAILURE;
}
};
match flock.try_lock_exclusive() {
Ok(()) => debug!("Acquired kanidm exclusive lock"),
Err(e) => {
error!("ERROR: Refusing to start - unable to lock kanidmd exclusive lock at {} - {:?}", klock_path, e);
Ok(true) => debug!("Acquired kanidm exclusive lock"),
Ok(false) => {
error!(
"ERROR: Refusing to start - unable to lock kanidmd exclusive lock at {}",
klock_path.display()
);
error!("Is another kanidmd process running?");
return ExitCode::FAILURE;
}
Err(err) => {
error!(
"ERROR: Refusing to start - unable to lock kanidmd exclusive lock at {}",
klock_path.display()
);
error!(?err);
return ExitCode::FAILURE;
}
};
Some(klock_path)
}
};
let result_code = kanidm_main(config, opt).await;
if let Some(klock_path) = lock_was_setup {
if let Err(reason) = std::fs::remove_file(&klock_path) {
warn!(
?reason,
"WARNING: Unable to clean up kanidmd exclusive lock at {}",
klock_path.display()
);
}
}
kanidm_main(sconfig, config, opt).await
result_code
}
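
For illustration, a minimal sketch of the lock pattern used above, assuming a recent fs4 where try_lock_exclusive returns io::Result<bool> rather than failing with a contention error (the exact module path varies across fs4 versions and feature flags):

use fs4::fs_std::FileExt;
use std::fs::File;
use std::io;
use std::path::Path;

// Returns Some(file) while the exclusive lock is held; None means another
// process already holds it. Dropping the File releases the lock.
fn acquire_klock(path: &Path) -> io::Result<Option<File>> {
    let flock = File::create(path)?;
    if flock.try_lock_exclusive()? {
        Ok(Some(flock))
    } else {
        Ok(None)
    }
}

Holding the returned File for the life of the process and removing the lock file on the way out matches the lock_was_setup cleanup above.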
fn main() -> ExitCode {
@@ -556,10 +555,6 @@ fn main() -> ExitCode {
return ExitCode::SUCCESS;
};
//we set up a list of these so we can set the log config THEN log out the errors.
let mut config_error: Vec<String> = Vec::new();
let mut config = Configuration::new();
if env!("KANIDM_SERVER_CONFIG_PATH").is_empty() {
println!("CRITICAL: Kanidmd was not built correctly and is missing a valid KANIDM_SERVER_CONFIG_PATH value");
return ExitCode::FAILURE;
@@ -581,49 +576,56 @@ fn main() -> ExitCode {
}
};
let sconfig = match ServerConfig::new(maybe_config_path) {
Ok(c) => Some(c),
Err(e) => {
config_error.push(format!("Config Parse failure {:?}", e));
let maybe_sconfig = if let Some(config_path) = maybe_config_path {
match ServerConfigUntagged::new(config_path) {
Ok(c) => Some(c),
Err(err) => {
eprintln!("ERROR: Configuration Parse Failure: {:?}", err);
return ExitCode::FAILURE;
}
}
} else {
eprintln!("WARNING: No configuration path was provided, relying on environment variables.");
None
};
let envconfig = match EnvironmentConfig::new() {
Ok(ec) => ec,
Err(err) => {
eprintln!("ERROR: Environment Configuration Parse Failure: {:?}", err);
return ExitCode::FAILURE;
}
};
// Get information on the windows username
#[cfg(target_family = "windows")]
get_user_details_windows();
let cli_config = CliConfig {
output_mode: Some(opt.commands.commonopt().output_mode.to_owned().into()),
};
if !config_error.is_empty() {
println!("There were errors on startup, which prevent the server from starting:");
for e in config_error {
println!(" - {}", e);
}
let is_server = matches!(&opt.commands, KanidmdOpt::Server(_));
let config = Configuration::build()
.add_env_config(envconfig)
.add_opt_toml_config(maybe_sconfig)
// We always set threads to 1 unless it's the main server.
.add_cli_config(cli_config)
.is_server_mode(is_server)
.finish();
let Some(config) = config else {
eprintln!(
"ERROR: Unable to build server configuration from provided configuration inputs."
);
return ExitCode::FAILURE;
}
let sconfig = match sconfig {
Some(val) => val,
None => {
println!("Somehow you got an empty ServerConfig after error checking? Cannot start!");
return ExitCode::FAILURE;
}
};
// ===========================================================================
// Config ready
// We always set threads to 1 unless it's the main server.
if matches!(&opt.commands, KanidmdOpt::Server(_)) {
// If not updated, will default to maximum
if let Some(threads) = sconfig.thread_count {
config.update_threads_count(threads);
}
} else {
config.update_threads_count(1);
};
// Get information on the windows username
#[cfg(target_family = "windows")]
get_user_details_windows();
// Start the runtime
let maybe_rt = tokio::runtime::Builder::new_multi_thread()
.worker_threads(config.threads)
.enable_all()
@@ -643,16 +645,12 @@ fn main() -> ExitCode {
}
};
rt.block_on(start_daemon(opt, config, sconfig))
rt.block_on(start_daemon(opt, config))
}
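
Configuration::build above folds three inputs together: environment, an optional TOML file, and CLI flags. A rough sketch of that layering idea with hypothetical types (kanidmd's real builder and precedence rules live in kanidmd_core::config):

use std::path::PathBuf;

// Illustrative only: a later layer wins whenever it supplies a value.
#[derive(Default, Clone)]
struct Layer {
    log_level: Option<String>,
    db_path: Option<PathBuf>,
}

fn merge(base: Layer, over: Layer) -> Layer {
    Layer {
        log_level: over.log_level.or(base.log_level),
        db_path: over.db_path.or(base.db_path),
    }
}

fn main() {
    let env = Layer { log_level: Some("info".into()), ..Default::default() };
    let cli = Layer { log_level: Some("debug".into()), ..Default::default() };
    assert_eq!(merge(env, cli).log_level.as_deref(), Some("debug"));
}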
/// Build and execute the main server. The Configuration is the set of
/// processed options that drive the main server.
async fn kanidm_main(
sconfig: ServerConfig,
mut config: Configuration,
opt: KanidmdParser,
) -> ExitCode {
async fn kanidm_main(config: Configuration, opt: KanidmdParser) -> ExitCode {
match &opt.commands {
KanidmdOpt::Server(_sopt) | KanidmdOpt::ConfigTest(_sopt) => {
let config_test = matches!(&opt.commands, KanidmdOpt::ConfigTest(_));
@@ -662,88 +660,90 @@ async fn kanidm_main(
info!("Running in server mode ...");
};
// configuration options that only relate to server mode
config.update_config_for_server_mode(&sconfig);
if let Some(i_str) = &(sconfig.tls_chain) {
let i_path = PathBuf::from(i_str.as_str());
let i_meta = match metadata(&i_path) {
Ok(m) => m,
Err(e) => {
error!(
"Unable to read metadata for TLS chain file '{}' - {:?}",
&i_path.to_str().unwrap_or("invalid file path"),
e
);
let diag = kanidm_lib_file_permissions::diagnose_path(&i_path);
info!(%diag);
return ExitCode::FAILURE;
// Verify the TLS configs.
if let Some(tls_config) = config.tls_config.as_ref() {
{
let i_meta = match metadata(&tls_config.chain) {
Ok(m) => m,
Err(e) => {
error!(
"Unable to read metadata for TLS chain file '{}' - {:?}",
tls_config.chain.display(),
e
);
let diag =
kanidm_lib_file_permissions::diagnose_path(&tls_config.chain);
info!(%diag);
return ExitCode::FAILURE;
}
};
if !kanidm_lib_file_permissions::readonly(&i_meta) {
warn!("permissions on {} may not be secure. Should be readonly to running uid. This could be a security risk ...", tls_config.chain.display());
}
};
if !kanidm_lib_file_permissions::readonly(&i_meta) {
warn!("permissions on {} may not be secure. Should be readonly to running uid. This could be a security risk ...", i_str);
}
}
if let Some(i_str) = &(sconfig.tls_key) {
let i_path = PathBuf::from(i_str.as_str());
let i_meta = match metadata(&i_path) {
Ok(m) => m,
Err(e) => {
error!(
"Unable to read metadata for TLS key file '{}' - {:?}",
&i_path.to_str().unwrap_or("invalid file path"),
e
);
let diag = kanidm_lib_file_permissions::diagnose_path(&i_path);
info!(%diag);
return ExitCode::FAILURE;
{
let i_meta = match metadata(&tls_config.key) {
Ok(m) => m,
Err(e) => {
error!(
"Unable to read metadata for TLS key file '{}' - {:?}",
tls_config.key.display(),
e
);
let diag = kanidm_lib_file_permissions::diagnose_path(&tls_config.key);
info!(%diag);
return ExitCode::FAILURE;
}
};
if !kanidm_lib_file_permissions::readonly(&i_meta) {
warn!("permissions on {} may not be secure. Should be readonly to running uid. This could be a security risk ...", tls_config.key.display());
}
#[cfg(not(target_os = "windows"))]
if i_meta.mode() & 0o007 != 0 {
warn!("WARNING: {} has 'everyone' permission bits in the mode. This could be a security risk ...", tls_config.key.display());
}
};
if !kanidm_lib_file_permissions::readonly(&i_meta) {
warn!("permissions on {} may not be secure. Should be readonly to running uid. This could be a security risk ...", i_str);
}
#[cfg(not(target_os = "windows"))]
if i_meta.mode() & 0o007 != 0 {
warn!("WARNING: {} has 'everyone' permission bits in the mode. This could be a security risk ...", i_str);
}
}
if let Some(ca_dir) = &(sconfig.tls_client_ca) {
// check that the TLS client CA config option is what we expect
let ca_dir_path = PathBuf::from(&ca_dir);
if !ca_dir_path.exists() {
error!(
"TLS CA folder {} does not exist, server startup will FAIL!",
ca_dir
);
let diag = kanidm_lib_file_permissions::diagnose_path(&ca_dir_path);
info!(%diag);
}
let i_meta = match metadata(&ca_dir_path) {
Ok(m) => m,
Err(e) => {
error!("Unable to read metadata for '{}' - {:?}", ca_dir, e);
if let Some(ca_dir) = tls_config.client_ca.as_ref() {
// check that the TLS client CA config option is what we expect
let ca_dir_path = PathBuf::from(&ca_dir);
if !ca_dir_path.exists() {
error!(
"TLS CA folder {} does not exist, server startup will FAIL!",
ca_dir.display()
);
let diag = kanidm_lib_file_permissions::diagnose_path(&ca_dir_path);
info!(%diag);
}
let i_meta = match metadata(&ca_dir_path) {
Ok(m) => m,
Err(e) => {
error!(
"Unable to read metadata for '{}' - {:?}",
ca_dir.display(),
e
);
let diag = kanidm_lib_file_permissions::diagnose_path(&ca_dir_path);
info!(%diag);
return ExitCode::FAILURE;
}
};
if !i_meta.is_dir() {
error!(
"ERROR: Refusing to run - TLS Client CA folder {} may not be a directory",
ca_dir.display()
);
return ExitCode::FAILURE;
}
};
if !i_meta.is_dir() {
error!(
"ERROR: Refusing to run - TLS Client CA folder {} may not be a directory",
ca_dir
);
return ExitCode::FAILURE;
}
if kanidm_lib_file_permissions::readonly(&i_meta) {
warn!("WARNING: TLS Client CA folder permissions on {} indicate it may not be RW. This could cause the server start up to fail!", ca_dir);
}
#[cfg(not(target_os = "windows"))]
if i_meta.mode() & 0o007 != 0 {
warn!("WARNING: TLS Client CA folder {} has 'everyone' permission bits in the mode. This could be a security risk ...", ca_dir);
if kanidm_lib_file_permissions::readonly(&i_meta) {
warn!("WARNING: TLS Client CA folder permissions on {} indicate it may not be RW. This could cause the server start up to fail!", ca_dir.display());
}
#[cfg(not(target_os = "windows"))]
if i_meta.mode() & 0o007 != 0 {
warn!("WARNING: TLS Client CA folder {} has 'everyone' permission bits in the mode. This could be a security risk ...", ca_dir.display());
}
}
}
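
Each TLS check above follows the same probe: read the file metadata, warn if the file is not read-only to the running uid, and warn if the "everyone" permission bits are set. A compact, unix-only sketch of the mode-bits test using std's MetadataExt:

use std::fs::metadata;
use std::io;
use std::os::unix::fs::MetadataExt;
use std::path::Path;

// True when any of the low three bits (the "everyone" class, o+rwx) are set.
fn world_accessible(path: &Path) -> io::Result<bool> {
    Ok(metadata(path)?.mode() & 0o007 != 0)
}

fn main() -> io::Result<()> {
    if world_accessible(Path::new("/tmp"))? {
        eprintln!("warning: world-accessible path");
    }
    Ok(())
}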
@@ -753,14 +753,6 @@ async fn kanidm_main(
#[cfg(target_os = "linux")]
{
let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Ready]);
// Undocumented systemd feature - all messages should have a monotonic usec sent
// with them. In some cases like "reloading" messages, it is undocumented but
// failure to send this message causes the reload to fail.
if let Ok(monotonic_usec) = sd_notify::NotifyState::monotonic_usec_now() {
let _ = sd_notify::notify(true, &[monotonic_usec]);
} else {
error!("CRITICAL!!! Unable to access clock monotonic time. SYSTEMD WILL KILL US.");
};
let _ = sd_notify::notify(
true,
&[sd_notify::NotifyState::Status("Started Kanidm 🦀")],
@@ -774,86 +766,80 @@ async fn kanidm_main(
{
let mut listener = sctx.subscribe();
tokio::select! {
Ok(()) = tokio::signal::ctrl_c() => {
break
}
Some(()) = async move {
let sigterm = tokio::signal::unix::SignalKind::terminate();
#[allow(clippy::unwrap_used)]
tokio::signal::unix::signal(sigterm).unwrap().recv().await
} => {
break
}
Some(()) = async move {
let sigterm = tokio::signal::unix::SignalKind::alarm();
#[allow(clippy::unwrap_used)]
tokio::signal::unix::signal(sigterm).unwrap().recv().await
} => {
// Ignore
}
Some(()) = async move {
let sigterm = tokio::signal::unix::SignalKind::hangup();
#[allow(clippy::unwrap_used)]
tokio::signal::unix::signal(sigterm).unwrap().recv().await
} => {
// Reload TLS certificates
// systemd has a special reload handler for this.
#[cfg(target_os = "linux")]
{
let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Reloading]);
// CRITICAL - if you do not send a monotonic usec message after a reloading
// message, your service WILL BE KILLED.
if let Ok(monotonic_usec) = sd_notify::NotifyState::monotonic_usec_now() {
let _ =
sd_notify::notify(true, &[monotonic_usec]);
} else {
error!("CRITICAL!!! Unable to access clock monotonic time. SYSTEMD WILL KILL US.");
};
let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Status("Reloading ...")]);
}
Ok(()) = tokio::signal::ctrl_c() => {
break
}
Some(()) = async move {
let sigterm = tokio::signal::unix::SignalKind::terminate();
#[allow(clippy::unwrap_used)]
tokio::signal::unix::signal(sigterm).unwrap().recv().await
} => {
break
}
Some(()) = async move {
let sigterm = tokio::signal::unix::SignalKind::alarm();
#[allow(clippy::unwrap_used)]
tokio::signal::unix::signal(sigterm).unwrap().recv().await
} => {
// Ignore
}
Some(()) = async move {
let sigterm = tokio::signal::unix::SignalKind::hangup();
#[allow(clippy::unwrap_used)]
tokio::signal::unix::signal(sigterm).unwrap().recv().await
} => {
// Reload TLS certificates
// systemd has a special reload handler for this.
#[cfg(target_os = "linux")]
{
if let Ok(monotonic_usec) = sd_notify::NotifyState::monotonic_usec_now() {
let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Reloading, monotonic_usec]);
let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Status("Reloading ...")]);
} else {
error!("CRITICAL!!! Unable to access clock monotonic time. SYSTEMD WILL KILL US.");
};
}
sctx.tls_acceptor_reload().await;
sctx.tls_acceptor_reload().await;
// Systemd freaks out if you send the ready state too fast after the
// reload state and can kill Kanidmd as a result.
tokio::time::sleep(std::time::Duration::from_secs(5)).await;
// Systemd freaks out if you send the ready state too fast after the
// reload state and can kill Kanidmd as a result.
tokio::time::sleep(std::time::Duration::from_secs(5)).await;
#[cfg(target_os = "linux")]
{
let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Ready]);
if let Ok(monotonic_usec) = sd_notify::NotifyState::monotonic_usec_now() {
let _ =
sd_notify::notify(true, &[monotonic_usec]);
} else {
error!("CRITICAL!!! Unable to access clock monotonic time. SYSTEMD WILL KILL US.");
};
let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Status("Reload Success")]);
}
#[cfg(target_os = "linux")]
{
if let Ok(monotonic_usec) = sd_notify::NotifyState::monotonic_usec_now() {
let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Ready, monotonic_usec]);
let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Status("Reload Success")]);
} else {
error!("CRITICAL!!! Unable to access clock monotonic time. SYSTEMD WILL KILL US.");
};
}
info!("Reload complete");
}
Some(()) = async move {
let sigterm = tokio::signal::unix::SignalKind::user_defined1();
#[allow(clippy::unwrap_used)]
tokio::signal::unix::signal(sigterm).unwrap().recv().await
} => {
// Ignore
}
Some(()) = async move {
let sigterm = tokio::signal::unix::SignalKind::user_defined2();
#[allow(clippy::unwrap_used)]
tokio::signal::unix::signal(sigterm).unwrap().recv().await
} => {
// Ignore
}
// we got a message on the broadcast from somewhere else
Ok(msg) = async move {
listener.recv().await
} => {
debug!("Main loop received message: {:?}", msg);
break
}
}
info!("Reload complete");
}
Some(()) = async move {
let sigterm = tokio::signal::unix::SignalKind::user_defined1();
#[allow(clippy::unwrap_used)]
tokio::signal::unix::signal(sigterm).unwrap().recv().await
} => {
// Ignore
}
Some(()) = async move {
let sigterm = tokio::signal::unix::SignalKind::user_defined2();
#[allow(clippy::unwrap_used)]
tokio::signal::unix::signal(sigterm).unwrap().recv().await
} => {
// Ignore
}
// we got a message on the broadcast from somewhere else
Ok(msg) = async move {
listener.recv().await
} => {
debug!("Main loop received message: {:?}", msg);
break
}
}
}
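
The SIGHUP arm above encodes a subtle systemd contract: on current systemd a Reloading notification must carry a MONOTONIC_USEC timestamp, and Ready must follow once the reload completes, or the manager may kill the service. Condensed from the handler above (error handling elided):

// Sketch of the notify sequence for a Type=notify reload.
fn notify_reload_cycle() {
    if let Ok(usec) = sd_notify::NotifyState::monotonic_usec_now() {
        let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Reloading, usec]);
    }
    // ... perform the reload work here, e.g. swapping the TLS acceptor ...
    if let Ok(usec) = sd_notify::NotifyState::monotonic_usec_now() {
        let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Ready, usec]);
    }
}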
#[cfg(target_family = "windows")]
{
@@ -880,34 +866,19 @@ async fn kanidm_main(
}
KanidmdOpt::CertGenerate(_sopt) => {
info!("Running in certificate generate mode ...");
config.update_config_for_server_mode(&sconfig);
cert_generate_core(&config);
}
KanidmdOpt::Database {
commands: DbCommands::Backup(bopt),
} => {
info!("Running in backup mode ...");
let p = match bopt.path.to_str() {
Some(p) => p,
None => {
error!("Invalid backup path");
return ExitCode::FAILURE;
}
};
backup_server_core(&config, p);
backup_server_core(&config, &bopt.path);
}
KanidmdOpt::Database {
commands: DbCommands::Restore(ropt),
} => {
info!("Running in restore mode ...");
let p = match ropt.path.to_str() {
Some(p) => p,
None => {
error!("Invalid restore path");
return ExitCode::FAILURE;
}
};
restore_server_core(&config, p).await;
restore_server_core(&config, &ropt.path).await;
}
KanidmdOpt::Database {
commands: DbCommands::Verify(_vopt),
@@ -1088,8 +1059,6 @@ async fn kanidm_main(
vacuum_server_core(&config);
}
KanidmdOpt::HealthCheck(sopt) => {
config.update_config_for_server_mode(&sconfig);
debug!("{sopt:?}");
let healthcheck_url = match &sopt.check_origin {
@@ -1110,12 +1079,15 @@ async fn kanidm_main(
.danger_accept_invalid_hostnames(!sopt.verify_tls)
.https_only(true);
client = match &sconfig.tls_chain {
client = match &config.tls_config {
None => client,
Some(ca_cert) => {
debug!("Trying to load {} to build a CA cert path", ca_cert);
Some(tls_config) => {
debug!(
"Trying to load {} to build a CA cert path",
tls_config.chain.display()
);
// if the ca_cert file exists, then we'll use it
let ca_cert_path = PathBuf::from(ca_cert);
let ca_cert_path = tls_config.chain.clone();
match ca_cert_path.exists() {
true => {
let mut cert_buf = Vec::new();
@@ -1148,7 +1120,10 @@ async fn kanidm_main(
client
}
false => {
warn!("Couldn't find ca cert {} but carrying on...", ca_cert);
warn!(
"Couldn't find ca cert {} but carrying on...",
tls_config.chain.display()
);
client
}
}
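
For reference, a hedged sketch of the health check's CA handling: read a PEM certificate and hand it to a reqwest client as an extra trust root (assuming a single PEM certificate; the real code builds the client incrementally with further options such as hostname verification):

use reqwest::{Certificate, Client};

// Hypothetical helper: trust a private CA when probing the server.
fn client_with_ca(ca_pem: &[u8]) -> reqwest::Result<Client> {
    let cert = Certificate::from_pem(ca_pem)?;
    Client::builder()
        .add_root_certificate(cert)
        .https_only(true)
        .build()
}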


@@ -13,7 +13,7 @@ repository = { workspace = true }
[lib]
proc-macro = true
test = true
test = false
doctest = false
[dependencies]


@@ -34,7 +34,7 @@ fn parse_attributes(
});
if !args_are_allowed {
let msg = "Invalid test config attribute. The following are allow";
let msg = "Invalid test config attribute. The following are allowed";
return Err(syn::Error::new_spanned(
input.sig.fn_token,
format!("{}: {}", msg, ALLOWED_ATTRIBUTES.join(", ")),


@@ -18,7 +18,8 @@ use crate::be::idl_sqlite::{
IdlSqlite, IdlSqliteReadTransaction, IdlSqliteTransaction, IdlSqliteWriteTransaction,
};
use crate::be::idxkey::{
IdlCacheKey, IdlCacheKeyRef, IdlCacheKeyToRef, IdxKey, IdxKeyRef, IdxKeyToRef, IdxSlope,
IdlCacheKey, IdlCacheKeyRef, IdlCacheKeyToRef, IdxKey, IdxKeyRef, IdxKeyToRef, IdxNameKey,
IdxSlope,
};
use crate::be::keystorage::{KeyHandle, KeyHandleId};
use crate::be::{BackendConfig, IdList, IdRawEntry};
@@ -35,6 +36,10 @@ const DEFAULT_NAME_CACHE_RATIO: usize = 8;
const DEFAULT_CACHE_RMISS: usize = 0;
const DEFAULT_CACHE_WMISS: usize = 0;
const DEFAULT_IDX_CACHE_RMISS: usize = 8;
const DEFAULT_IDX_CACHE_WMISS: usize = 16;
const DEFAULT_IDX_EXISTS_TARGET: usize = 256;
#[derive(Debug, Clone, Ord, PartialOrd, Eq, PartialEq, Hash)]
enum NameCacheKey {
Name2Uuid(String),
@@ -55,6 +60,9 @@ pub struct IdlArcSqlite {
entry_cache: ARCache<u64, Arc<EntrySealedCommitted>>,
idl_cache: ARCache<IdlCacheKey, Box<IDLBitRange>>,
name_cache: ARCache<NameCacheKey, NameCacheValue>,
idx_exists_cache: ARCache<IdxNameKey, bool>,
op_ts_max: CowCell<Option<Duration>>,
allids: CowCell<IDLBitRange>,
maxid: CowCell<u64>,
@@ -66,6 +74,8 @@ pub struct IdlArcSqliteReadTransaction<'a> {
entry_cache: ARCacheReadTxn<'a, u64, Arc<EntrySealedCommitted>, ()>,
idl_cache: ARCacheReadTxn<'a, IdlCacheKey, Box<IDLBitRange>, ()>,
name_cache: ARCacheReadTxn<'a, NameCacheKey, NameCacheValue, ()>,
idx_exists_cache: ARCacheReadTxn<'a, IdxNameKey, bool, ()>,
allids: CowCellReadTxn<IDLBitRange>,
}
@@ -74,6 +84,9 @@ pub struct IdlArcSqliteWriteTransaction<'a> {
entry_cache: ARCacheWriteTxn<'a, u64, Arc<EntrySealedCommitted>, ()>,
idl_cache: ARCacheWriteTxn<'a, IdlCacheKey, Box<IDLBitRange>, ()>,
name_cache: ARCacheWriteTxn<'a, NameCacheKey, NameCacheValue, ()>,
idx_exists_cache: ARCacheWriteTxn<'a, IdxNameKey, bool, ()>,
op_ts_max: CowCellWriteTxn<'a, Option<Duration>>,
allids: CowCellWriteTxn<'a, IDLBitRange>,
maxid: CowCellWriteTxn<'a, u64>,
@@ -178,8 +191,8 @@ macro_rules! get_idl {
// or smaller type. Perhaps even a small cache of the IdlCacheKeys that
// are allocated to reduce some allocs? Probably over thinking it at
// this point.
//
// First attempt to get from this cache.
// Now attempt to get from this cache.
let cache_key = IdlCacheKeyRef {
a: $attr,
i: $itype,
@@ -195,16 +208,47 @@ macro_rules! get_idl {
);
return Ok(Some(data.as_ref().clone()));
}
// If it was a miss, does the actually exist in the DB?
let idx_key = IdxNameKey {
a: $attr.clone(),
i: $itype,
};
let idx_r = $self.idx_exists_cache.get(&idx_key);
if idx_r == Some(&false) {
// The idx does not exist - bail early.
return Ok(None)
}
// The table either exists and we don't have data on it yet,
// or it does not exist and we need to hear back from the lower level
// If miss, get from db *and* insert to the cache.
let db_r = $self.db.get_idl($attr, $itype, $idx_key)?;
if let Some(ref idl) = db_r {
if idx_r == None {
// It exists, so track that data, because we weren't
// previously tracking it.
$self.idx_exists_cache.insert(idx_key, true)
}
let ncache_key = IdlCacheKey {
a: $attr.clone(),
i: $itype.clone(),
k: $idx_key.into(),
};
$self.idl_cache.insert(ncache_key, Box::new(idl.clone()))
}
} else {
// The DB was unable to return this idx because table backing the
// idx does not exist. We should cache this to prevent repeat hits
// on sqlite until the db does exist, at which point the cache is
// cleared anyway.
//
// NOTE: If the db idx misses it returns Some(empty_set), so this
// only caches missing index tables.
$self.idx_exists_cache.insert(idx_key, false)
};
Ok(db_r)
}};
}
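
The new branch above is a negative cache: idx_exists_cache maps (attribute, index type) to whether the backing index table exists, so repeat lookups against a missing index can bail before touching sqlite. A simplified sketch of the idea with plain std types (the real code keys concread's ARCache with IdxNameKey):

use std::collections::HashMap;

// Simplified stand-in for the idx_exists_cache pattern.
struct IdxCache {
    idx_exists: HashMap<String, bool>,
}

impl IdxCache {
    fn get_idl(
        &mut self,
        idx: &str,
        db_fetch: impl Fn(&str) -> Option<Vec<u64>>,
    ) -> Option<Vec<u64>> {
        if self.idx_exists.get(idx) == Some(&false) {
            return None; // known-missing index table: skip the database
        }
        let result = db_fetch(idx);
        // Record whether the table existed so the next miss is cheap.
        self.idx_exists.insert(idx.to_string(), result.is_some());
        result
    }
}

Any write path that creates or destroys index tables has to keep this cache honest, which is why create_idx below inserts true and the purge/clear paths clear the cache.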
@@ -593,6 +637,7 @@ impl IdlArcSqliteWriteTransaction<'_> {
*/
self.entry_cache.clear();
self.idl_cache.clear();
self.idx_exists_cache.clear();
self.name_cache.clear();
Ok(())
}
@@ -604,6 +649,7 @@ impl IdlArcSqliteWriteTransaction<'_> {
mut entry_cache,
mut idl_cache,
mut name_cache,
idx_exists_cache,
op_ts_max,
allids,
maxid,
@@ -677,6 +723,7 @@ impl IdlArcSqliteWriteTransaction<'_> {
// Can no longer fail from this point.
op_ts_max.commit();
name_cache.commit();
idx_exists_cache.commit();
idl_cache.commit();
allids.commit();
maxid.commit();
@@ -708,6 +755,7 @@ impl IdlArcSqliteWriteTransaction<'_> {
*self.maxid = mid;
}
#[instrument(level = "trace", skip_all)]
pub fn write_identries<'b, I>(&'b mut self, mut entries: I) -> Result<(), OperationError>
where
I: Iterator<Item = &'b Entry<EntrySealed, EntryCommitted>>,
@@ -757,6 +805,7 @@ impl IdlArcSqliteWriteTransaction<'_> {
})
}
#[instrument(level = "trace", skip_all)]
pub fn write_idl(
&mut self,
attr: &Attribute,
@@ -1127,9 +1176,17 @@ impl IdlArcSqliteWriteTransaction<'_> {
Ok(())
}
pub fn create_idx(&self, attr: &Attribute, itype: IndexType) -> Result<(), OperationError> {
// We don't need to affect this, so pass it down.
self.db.create_idx(attr, itype)
pub fn create_idx(&mut self, attr: &Attribute, itype: IndexType) -> Result<(), OperationError> {
self.db.create_idx(attr, itype)?;
// Cache that this exists since we just made it.
let idx_key = IdxNameKey {
a: attr.clone(),
i: itype,
};
self.idx_exists_cache.insert(idx_key, true);
Ok(())
}
/// ⚠️ - This function will destroy all indexes in the database.
@@ -1141,6 +1198,7 @@ impl IdlArcSqliteWriteTransaction<'_> {
debug!("CLEARING CACHE");
self.db.danger_purge_idxs().map(|()| {
self.idl_cache.clear();
self.idx_exists_cache.clear();
self.name_cache.clear();
})
}
@@ -1266,6 +1324,21 @@ impl IdlArcSqlite {
OperationError::InvalidState
})?;
let idx_exists_cache = ARCacheBuilder::new()
.set_expected_workload(
DEFAULT_IDX_EXISTS_TARGET,
cfg.pool_size as usize,
DEFAULT_IDX_CACHE_RMISS,
DEFAULT_IDX_CACHE_WMISS,
true,
)
.set_reader_quiesce(true)
.build()
.ok_or_else(|| {
admin_error!("Failed to construct idx_exists_cache");
OperationError::InvalidState
})?;
let allids = CowCell::new(IDLBitRange::new());
let maxid = CowCell::new(0);
@@ -1279,6 +1352,7 @@ impl IdlArcSqlite {
entry_cache,
idl_cache,
name_cache,
idx_exists_cache,
op_ts_max,
allids,
maxid,
@@ -1298,6 +1372,7 @@ impl IdlArcSqlite {
let db_read = self.db.read()?;
let idl_cache_read = self.idl_cache.read();
let name_cache_read = self.name_cache.read();
let idx_exists_cache_read = self.idx_exists_cache.read();
let allids_read = self.allids.read();
Ok(IdlArcSqliteReadTransaction {
@@ -1305,6 +1380,7 @@ impl IdlArcSqlite {
entry_cache: entry_cache_read,
idl_cache: idl_cache_read,
name_cache: name_cache_read,
idx_exists_cache: idx_exists_cache_read,
allids: allids_read,
})
}
@@ -1315,6 +1391,7 @@ impl IdlArcSqlite {
let db_write = self.db.write()?;
let idl_cache_write = self.idl_cache.write();
let name_cache_write = self.name_cache.write();
let idx_exists_cache_write = self.idx_exists_cache.write();
let op_ts_max_write = self.op_ts_max.write();
let allids_write = self.allids.write();
let maxid_write = self.maxid.write();
@@ -1325,6 +1402,7 @@ impl IdlArcSqlite {
entry_cache: entry_cache_write,
idl_cache: idl_cache_write,
name_cache: name_cache_write,
idx_exists_cache: idx_exists_cache_write,
op_ts_max: op_ts_max_write,
allids: allids_write,
maxid: maxid_write,


@@ -1,27 +1,21 @@
use std::collections::{BTreeMap, BTreeSet, VecDeque};
use std::convert::{TryFrom, TryInto};
use std::sync::Arc;
use std::sync::Mutex;
use std::time::Duration;
use super::keystorage::{KeyHandle, KeyHandleId};
// use crate::valueset;
use hashbrown::HashMap;
use idlset::v2::IDLBitRange;
use kanidm_proto::internal::{ConsistencyError, OperationError};
use rusqlite::vtab::array::Array;
use rusqlite::{Connection, OpenFlags, OptionalExtension};
use uuid::Uuid;
use crate::be::dbentry::DbIdentSpn;
use crate::be::dbvalue::DbCidV1;
use crate::be::{BackendConfig, IdList, IdRawEntry, IdxKey, IdxSlope};
use crate::entry::{Entry, EntryCommitted, EntrySealed};
use crate::prelude::*;
use crate::value::{IndexType, Value};
// use uuid::Uuid;
use hashbrown::HashMap;
use idlset::v2::IDLBitRange;
use kanidm_proto::internal::{ConsistencyError, OperationError};
use rusqlite::vtab::array::Array;
use rusqlite::{Connection, OpenFlags, OptionalExtension};
use std::collections::{BTreeMap, BTreeSet, VecDeque};
use std::convert::{TryFrom, TryInto};
use std::sync::Arc;
use std::sync::Mutex;
use std::time::Duration;
use uuid::Uuid;
const DBV_ID2ENTRY: &str = "id2entry";
const DBV_INDEXV: &str = "indexv";
@@ -205,12 +199,15 @@ pub(crate) trait IdlSqliteTransaction {
let mut stmt = self
.get_conn()?
.prepare(&format!(
"SELECT COUNT(name) from {}.sqlite_master where name = :tname",
"SELECT rowid from {}.sqlite_master where type=\"table\" AND name = :tname LIMIT 1",
self.get_db_name()
))
.map_err(sqlite_error)?;
let i: Option<i64> = stmt
.query_row(&[(":tname", tname)], |row| row.get(0))
// If the row doesn't exist, we don't mind.
.optional()
.map_err(sqlite_error)?;
match i {
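
The rewritten probe queries sqlite_master for the table row directly and uses rusqlite's OptionalExtension so an absent row becomes Ok(None) instead of an error. A self-contained sketch of the same pattern:

use rusqlite::{Connection, OptionalExtension, Result};

// Probe for a table by name; Ok(false) simply means the row is absent.
fn table_exists(conn: &Connection, tname: &str) -> Result<bool> {
    conn.query_row(
        "SELECT rowid FROM sqlite_master WHERE type = 'table' AND name = ?1 LIMIT 1",
        [tname],
        |row| row.get::<_, i64>(0),
    )
    .optional()
    .map(|r| r.is_some())
}

fn main() -> Result<()> {
    let conn = Connection::open_in_memory()?;
    assert!(!table_exists(&conn, "id2entry")?);
    Ok(())
}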
@@ -1709,7 +1706,7 @@ impl IdlSqliteWriteTransaction {
impl IdlSqlite {
pub fn new(cfg: &BackendConfig, vacuum: bool) -> Result<Self, OperationError> {
if cfg.path.is_empty() {
if cfg.path.as_os_str().is_empty() {
debug_assert_eq!(cfg.pool_size, 1);
}
// If provided, set the page size to match the tuning we want. By default we use 4096. The VACUUM
@@ -1731,8 +1728,7 @@ impl IdlSqlite {
// Initial setup routines.
{
let vconn =
Connection::open_with_flags(cfg.path.as_str(), flags).map_err(sqlite_error)?;
let vconn = Connection::open_with_flags(&cfg.path, flags).map_err(sqlite_error)?;
vconn
.execute_batch(
@@ -1761,8 +1757,7 @@ impl IdlSqlite {
);
*/
let vconn =
Connection::open_with_flags(cfg.path.as_str(), flags).map_err(sqlite_error)?;
let vconn = Connection::open_with_flags(&cfg.path, flags).map_err(sqlite_error)?;
vconn
.execute_batch("PRAGMA wal_checkpoint(TRUNCATE);")
@@ -1783,8 +1778,7 @@ impl IdlSqlite {
OperationError::SqliteError
})?;
let vconn =
Connection::open_with_flags(cfg.path.as_str(), flags).map_err(sqlite_error)?;
let vconn = Connection::open_with_flags(&cfg.path, flags).map_err(sqlite_error)?;
vconn
.pragma_update(None, "page_size", cfg.fstype as u32)
@@ -1818,7 +1812,7 @@ impl IdlSqlite {
.map(|i| {
trace!("Opening Connection {}", i);
let conn =
Connection::open_with_flags(cfg.path.as_str(), flags).map_err(sqlite_error);
Connection::open_with_flags(&cfg.path, flags).map_err(sqlite_error);
match conn {
Ok(conn) => {
// We need to set the cachesize at this point as well.


@@ -155,3 +155,9 @@ impl Ord for (dyn IdlCacheKeyToRef + '_) {
self.keyref().cmp(&other.keyref())
}
}
#[derive(Debug, Clone, Ord, PartialOrd, Eq, PartialEq, Hash)]
pub struct IdxNameKey {
pub a: Attribute,
pub i: IndexType,
}


@@ -4,20 +4,6 @@
//! is to persist content safely to disk, load that content, and execute queries
//! utilising indexes in the most effective way possible.
use std::collections::BTreeMap;
use std::fs;
use std::ops::DerefMut;
use std::sync::Arc;
use std::time::Duration;
use concread::cowcell::*;
use hashbrown::{HashMap as Map, HashSet};
use idlset::v2::IDLBitRange;
use idlset::AndNot;
use kanidm_proto::internal::{ConsistencyError, OperationError};
use tracing::{trace, trace_span};
use uuid::Uuid;
use crate::be::dbentry::{DbBackup, DbEntry};
use crate::be::dbrepl::DbReplMeta;
use crate::entry::Entry;
@@ -31,6 +17,19 @@ use crate::repl::ruv::{
};
use crate::utils::trigraph_iter;
use crate::value::{IndexType, Value};
use concread::cowcell::*;
use hashbrown::{HashMap as Map, HashSet};
use idlset::v2::IDLBitRange;
use idlset::AndNot;
use kanidm_proto::internal::{ConsistencyError, OperationError};
use std::collections::BTreeMap;
use std::fs;
use std::ops::DerefMut;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::time::Duration;
use tracing::{trace, trace_span};
use uuid::Uuid;
pub(crate) mod dbentry;
pub(crate) mod dbrepl;
@@ -132,7 +131,7 @@ impl IdxMeta {
#[derive(Clone)]
pub struct BackendConfig {
path: String,
path: PathBuf,
pool_size: u32,
db_name: &'static str,
fstype: FsType,
@@ -141,10 +140,16 @@ pub struct BackendConfig {
}
impl BackendConfig {
pub fn new(path: &str, pool_size: u32, fstype: FsType, arcsize: Option<usize>) -> Self {
pub fn new(
path: Option<&Path>,
pool_size: u32,
fstype: FsType,
arcsize: Option<usize>,
) -> Self {
BackendConfig {
pool_size,
path: path.to_string(),
// If path is None we fall back to "", which sqlite treats as an in-memory / RAM-only database.
path: path.unwrap_or_else(|| Path::new("")).to_path_buf(),
db_name: "main",
fstype,
arcsize,
@@ -154,7 +159,7 @@ impl BackendConfig {
pub(crate) fn new_test(db_name: &'static str) -> Self {
BackendConfig {
pool_size: 1,
path: "".to_string(),
path: PathBuf::from(""),
db_name,
fstype: FsType::Generic,
arcsize: Some(2048),
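
BackendConfig now carries a PathBuf, and the empty path keeps its old meaning as the marker for an sqlite in-memory database. A tiny sketch of that convention (the helper name is illustrative):

use std::path::Path;

// An empty path stands in for "sqlite in-memory", matching the
// cfg.path.as_os_str().is_empty() checks elsewhere in this diff.
fn is_in_memory(path: &Path) -> bool {
    path.as_os_str().is_empty()
}

fn main() {
    assert!(is_in_memory(Path::new("")));
    assert!(!is_in_memory(Path::new("/var/lib/kanidm/kanidm.db")));
}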
@@ -549,10 +554,11 @@ pub trait BackendTransaction {
}
(_, fp) => {
plan.push(fp);
filter_error!(
let setplan = FilterPlan::InclusionInvalid(plan);
error!(
?setplan,
"Inclusion is unable to proceed - all terms must be fully indexed!"
);
let setplan = FilterPlan::InclusionInvalid(plan);
return Ok((IdList::Partial(IDLBitRange::new()), setplan));
}
}
@@ -935,7 +941,7 @@ pub trait BackendTransaction {
self.get_ruv().verify(&entries, results);
}
fn backup(&mut self, dst_path: &str) -> Result<(), OperationError> {
fn backup(&mut self, dst_path: &Path) -> Result<(), OperationError> {
let repl_meta = self.get_ruv().to_db_backup_ruv();
// load all entries into RAM, may need to change this later
@@ -1427,20 +1433,16 @@ impl<'a> BackendWriteTransaction<'a> {
if self.is_idx_slopeyness_generated()? {
trace!("Indexing slopes available");
} else {
admin_warn!(
"No indexing slopes available. You should consider reindexing to generate these"
);
warn!("No indexing slopes available. You should consider reindexing to generate these");
};
// Setup idxkeys here. By default we set these all to "max slope" aka
// all indexes are "equal" but also worse case unless analysed. If they
// have been analysed, we can set the slope factor into here.
let idxkeys: Result<Map<_, _>, _> = idxkeys
let mut idxkeys = idxkeys
.into_iter()
.map(|k| self.get_idx_slope(&k).map(|slope| (k, slope)))
.collect();
let mut idxkeys = idxkeys?;
.collect::<Result<Map<_, _>, _>>()?;
std::mem::swap(&mut self.idxmeta_wr.deref_mut().idxkeys, &mut idxkeys);
Ok(())
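
The idxkeys rewrite folds the fallible slope lookups into a single collect over Result, which short-circuits on the first error. The same idiom in isolation, with illustrative types:

use std::collections::HashMap;

// Collecting Result<(K, V), E> items into Result<HashMap<K, V>, E>
// stops at the first Err, exactly like the rewrite above.
fn slopes(keys: &[&str]) -> Result<HashMap<String, u8>, String> {
    keys.iter()
        .map(|k| lookup_slope(k).map(|s| (k.to_string(), s)))
        .collect()
}

fn lookup_slope(key: &str) -> Result<u8, String> {
    if key.is_empty() {
        Err("empty key".to_string())
    } else {
        Ok(45) // hypothetical "max slope" default
    }
}

fn main() {
    assert!(slopes(&["name", "uuid"]).is_ok());
    assert!(slopes(&[""]).is_err());
}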
@@ -1811,7 +1813,7 @@ impl<'a> BackendWriteTransaction<'a> {
Ok(slope)
}
pub fn restore(&mut self, src_path: &str) -> Result<(), OperationError> {
pub fn restore(&mut self, src_path: &Path) -> Result<(), OperationError> {
let serialized_string = fs::read_to_string(src_path).map_err(|e| {
admin_error!("fs::read_to_string {:?}", e);
OperationError::FsError
@@ -2124,7 +2126,7 @@ impl Backend {
debug!(db_tickets = ?cfg.pool_size, profile = %env!("KANIDM_PROFILE_NAME"), cpu_flags = %env!("KANIDM_CPU_FLAGS"));
// If in memory, reduce pool to 1
if cfg.path.is_empty() {
if cfg.path.as_os_str().is_empty() {
cfg.pool_size = 1;
}
@@ -2210,13 +2212,6 @@ impl Backend {
#[cfg(test)]
mod tests {
use std::fs;
use std::iter::FromIterator;
use std::sync::Arc;
use std::time::Duration;
use idlset::v2::IDLBitRange;
use super::super::entry::{Entry, EntryInit, EntryNew};
use super::Limits;
use super::{
@@ -2226,6 +2221,12 @@ mod tests {
use crate::prelude::*;
use crate::repl::cid::Cid;
use crate::value::{IndexType, PartialValue, Value};
use idlset::v2::IDLBitRange;
use std::fs;
use std::iter::FromIterator;
use std::path::Path;
use std::sync::Arc;
use std::time::Duration;
lazy_static! {
static ref CID_ZERO: Cid = Cid::new_zero();
@@ -2600,11 +2601,9 @@ mod tests {
#[test]
fn test_be_backup_restore() {
let db_backup_file_name = format!(
"{}/.backup_test.json",
option_env!("OUT_DIR").unwrap_or("/tmp")
);
eprintln!(" ⚠️ {db_backup_file_name}");
let db_backup_file_name =
Path::new(option_env!("OUT_DIR").unwrap_or("/tmp")).join(".backup_test.json");
eprintln!(" ⚠️ {}", db_backup_file_name.display());
run_test!(|be: &mut BackendWriteTransaction| {
// Important! Need db metadata setup!
be.reset_db_s_uuid().unwrap();
@@ -2659,11 +2658,9 @@ mod tests {
#[test]
fn test_be_backup_restore_tampered() {
let db_backup_file_name = format!(
"{}/.backup2_test.json",
option_env!("OUT_DIR").unwrap_or("/tmp")
);
eprintln!(" ⚠️ {db_backup_file_name}");
let db_backup_file_name =
Path::new(option_env!("OUT_DIR").unwrap_or("/tmp")).join(".backup2_test.json");
eprintln!(" ⚠️ {}", db_backup_file_name.display());
run_test!(|be: &mut BackendWriteTransaction| {
// Important! Need db metadata setup!
be.reset_db_s_uuid().unwrap();


@@ -1,19 +1,12 @@
//! Constant Entries for the IDM
use std::fmt::Display;
use crate::constants::groups::idm_builtin_admin_groups;
use crate::constants::uuids::*;
use crate::entry::{Entry, EntryInit, EntryInitNew, EntryNew};
use crate::idm::account::Account;
use crate::value::PartialValue;
use crate::value::Value;
use crate::valueset::{ValueSet, ValueSetIutf8};
pub use kanidm_proto::attribute::Attribute;
use kanidm_proto::constants::*;
use kanidm_proto::internal::OperationError;
use kanidm_proto::v1::AccountType;
use uuid::Uuid;
use kanidm_proto::scim_v1::JsonValue;
//TODO: This would do well in the proto lib
// together with all the other definitions.
@@ -129,6 +122,12 @@ impl From<EntryClass> for &'static str {
}
}
impl From<EntryClass> for JsonValue {
fn from(value: EntryClass) -> Self {
Self::String(value.as_ref().to_string())
}
}
impl AsRef<str> for EntryClass {
fn as_ref(&self) -> &str {
self.into()
@@ -195,158 +194,9 @@ impl EntryClass {
}
}
lazy_static! {
/// Builtin System Admin account.
pub static ref BUILTIN_ACCOUNT_IDM_ADMIN: BuiltinAccount = BuiltinAccount {
account_type: AccountType::ServiceAccount,
entry_managed_by: None,
name: "idm_admin",
uuid: UUID_IDM_ADMIN,
description: "Builtin IDM Admin account.",
displayname: "IDM Administrator",
};
pub static ref E_SYSTEM_INFO_V1: EntryInitNew = entry_init!(
(Attribute::Class, EntryClass::Object.to_value()),
(Attribute::Class, EntryClass::SystemInfo.to_value()),
(Attribute::Class, EntryClass::System.to_value()),
(Attribute::Uuid, Value::Uuid(UUID_SYSTEM_INFO)),
(
Attribute::Description,
Value::new_utf8s("System (local) info and metadata object.")
),
(Attribute::Version, Value::Uint32(20))
);
pub static ref E_DOMAIN_INFO_DL6: EntryInitNew = entry_init!(
(Attribute::Class, EntryClass::Object.to_value()),
(Attribute::Class, EntryClass::DomainInfo.to_value()),
(Attribute::Class, EntryClass::System.to_value()),
(Attribute::Class, EntryClass::KeyObject.to_value()),
(Attribute::Class, EntryClass::KeyObjectJwtEs256.to_value()),
(Attribute::Class, EntryClass::KeyObjectJweA128GCM.to_value()),
(Attribute::Name, Value::new_iname("domain_local")),
(Attribute::Uuid, Value::Uuid(UUID_DOMAIN_INFO)),
(
Attribute::Description,
Value::new_utf8s("This local domain's info and metadata object.")
)
);
}
#[derive(Debug, Clone)]
/// Built in accounts such as anonymous, idm_admin and admin
pub struct BuiltinAccount {
pub account_type: kanidm_proto::v1::AccountType,
pub entry_managed_by: Option<uuid::Uuid>,
pub name: &'static str,
pub uuid: Uuid,
pub description: &'static str,
pub displayname: &'static str,
}
impl Default for BuiltinAccount {
fn default() -> Self {
BuiltinAccount {
account_type: AccountType::ServiceAccount,
entry_managed_by: None,
name: "",
uuid: Uuid::new_v4(),
description: "<set description>",
displayname: "<set displayname>",
}
}
}
impl From<BuiltinAccount> for Account {
fn from(value: BuiltinAccount) -> Self {
#[allow(clippy::panic)]
if value.uuid >= DYNAMIC_RANGE_MINIMUM_UUID {
panic!("Builtin ACP has invalid UUID! {:?}", value);
}
Account {
name: value.name.to_string(),
uuid: value.uuid,
displayname: value.displayname.to_string(),
spn: format!("{}@example.com", value.name),
mail_primary: None,
mail: Vec::with_capacity(0),
..Default::default()
}
}
}
impl From<BuiltinAccount> for EntryInitNew {
fn from(value: BuiltinAccount) -> Self {
let mut entry = EntryInitNew::new();
entry.add_ava(Attribute::Name, Value::new_iname(value.name));
#[allow(clippy::panic)]
if value.uuid >= DYNAMIC_RANGE_MINIMUM_UUID {
panic!("Builtin ACP has invalid UUID! {:?}", value);
}
entry.add_ava(Attribute::Uuid, Value::Uuid(value.uuid));
entry.add_ava(Attribute::Description, Value::new_utf8s(value.description));
entry.add_ava(Attribute::DisplayName, Value::new_utf8s(value.displayname));
if let Some(entry_manager) = value.entry_managed_by {
entry.add_ava(Attribute::EntryManagedBy, Value::Refer(entry_manager));
}
entry.set_ava(
Attribute::Class,
vec![
EntryClass::Account.to_value(),
EntryClass::MemberOf.to_value(),
EntryClass::Object.to_value(),
],
);
match value.account_type {
AccountType::Person => entry.add_ava(Attribute::Class, EntryClass::Person.to_value()),
AccountType::ServiceAccount => {
entry.add_ava(Attribute::Class, EntryClass::ServiceAccount.to_value())
}
}
entry
}
}
lazy_static! {
/// Builtin System Admin account.
pub static ref BUILTIN_ACCOUNT_ADMIN: BuiltinAccount = BuiltinAccount {
account_type: AccountType::ServiceAccount,
entry_managed_by: None,
name: "admin",
uuid: UUID_ADMIN,
description: "Builtin System Admin account.",
displayname: "System Administrator",
};
}
lazy_static! {
pub static ref BUILTIN_ACCOUNT_ANONYMOUS_DL6: BuiltinAccount = BuiltinAccount {
account_type: AccountType::ServiceAccount,
entry_managed_by: Some(UUID_IDM_ADMINS),
name: "anonymous",
uuid: UUID_ANONYMOUS,
description: "Anonymous access account.",
displayname: "Anonymous",
};
}
pub fn builtin_accounts() -> Vec<&'static BuiltinAccount> {
vec![
&BUILTIN_ACCOUNT_ADMIN,
&BUILTIN_ACCOUNT_IDM_ADMIN,
&BUILTIN_ACCOUNT_ANONYMOUS_DL6,
]
}
// ============ TEST DATA ============
#[cfg(test)]
pub const UUID_TESTPERSON_1: Uuid = ::uuid::uuid!("cc8e95b4-c24f-4d68-ba54-8bed76f63930");
#[cfg(test)]
pub const UUID_TESTPERSON_2: Uuid = ::uuid::uuid!("538faac7-4d29-473b-a59d-23023ac19955");
use crate::entry::{Entry, EntryInit, EntryInitNew, EntryNew};
#[cfg(test)]
lazy_static! {
@@ -356,7 +206,10 @@ lazy_static! {
(Attribute::Class, EntryClass::Person.to_value()),
(Attribute::Name, Value::new_iname("testperson1")),
(Attribute::DisplayName, Value::new_utf8s("Test Person 1")),
(Attribute::Uuid, Value::Uuid(UUID_TESTPERSON_1))
(
Attribute::Uuid,
Value::Uuid(super::uuids::UUID_TESTPERSON_1)
)
);
pub static ref E_TESTPERSON_2: EntryInitNew = entry_init!(
(Attribute::Class, EntryClass::Object.to_value()),
@@ -364,28 +217,9 @@ lazy_static! {
(Attribute::Class, EntryClass::Person.to_value()),
(Attribute::Name, Value::new_iname("testperson2")),
(Attribute::DisplayName, Value::new_utf8s("Test Person 2")),
(Attribute::Uuid, Value::Uuid(UUID_TESTPERSON_2))
(
Attribute::Uuid,
Value::Uuid(super::uuids::UUID_TESTPERSON_2)
)
);
}
// ⚠️ DOMAIN LEVEL 1 ENTRIES ⚠️
// Future entries need to be added via migrations.
//
// DO NOT MODIFY THIS DEFINITION
/// Build a list of internal admin entries
pub fn idm_builtin_admin_entries() -> Result<Vec<EntryInitNew>, OperationError> {
let mut res: Vec<EntryInitNew> = vec![
BUILTIN_ACCOUNT_ADMIN.clone().into(),
BUILTIN_ACCOUNT_IDM_ADMIN.clone().into(),
];
for group in idm_builtin_admin_groups() {
let g: EntryInitNew = group.clone().try_into()?;
res.push(g);
}
// We need to push anonymous *after* groups due to entry-managed-by
res.push(BUILTIN_ACCOUNT_ANONYMOUS_DL6.clone().into());
Ok(res)
}


@@ -1,20 +1,10 @@
// Re-export as needed
pub mod acp;
pub mod entries;
pub mod groups;
mod key_providers;
pub mod schema;
pub mod system_config;
pub mod uuids;
pub mod values;
pub use self::acp::*;
pub use self::entries::*;
pub use self::groups::*;
pub use self::key_providers::*;
pub use self::schema::*;
pub use self::system_config::*;
pub use self::uuids::*;
pub use self::values::*;
@@ -54,14 +44,6 @@ pub type DomainVersion = u32;
/// previously.
pub const DOMAIN_LEVEL_0: DomainVersion = 0;
/// Deprecated as of 1.3.0
pub const DOMAIN_LEVEL_5: DomainVersion = 5;
/// Domain Level introduced with 1.2.0.
/// Deprecated as of 1.4.0
pub const DOMAIN_LEVEL_6: DomainVersion = 6;
pub const PATCH_LEVEL_1: u32 = 1;
/// Domain Level introduced with 1.3.0.
/// Deprecated as of 1.5.0
pub const DOMAIN_LEVEL_7: DomainVersion = 7;
@@ -79,22 +61,28 @@ pub const PATCH_LEVEL_2: u32 = 2;
/// Deprecated as of 1.8.0
pub const DOMAIN_LEVEL_10: DomainVersion = 10;
/// Domain Level introduced with 1.7.0.
/// Deprecated as of 1.9.0
pub const DOMAIN_LEVEL_11: DomainVersion = 11;
// The minimum level that we can re-migrate from.
// This should be DOMAIN_TGT_LEVEL minus 2
pub const DOMAIN_MIN_REMIGRATION_LEVEL: DomainVersion = DOMAIN_LEVEL_7;
pub const DOMAIN_MIN_REMIGRATION_LEVEL: DomainVersion = DOMAIN_LEVEL_8;
// The minimum supported domain functional level (for replication)
pub const DOMAIN_MIN_LEVEL: DomainVersion = DOMAIN_TGT_LEVEL;
// The previous releases domain functional level
pub const DOMAIN_PREVIOUS_TGT_LEVEL: DomainVersion = DOMAIN_LEVEL_8;
pub const DOMAIN_PREVIOUS_TGT_LEVEL: DomainVersion = DOMAIN_TGT_LEVEL - 1;
// The target supported domain functional level. During development this is
// the NEXT level that users will upgrade too.
pub const DOMAIN_TGT_LEVEL: DomainVersion = DOMAIN_LEVEL_9;
// the NEXT level that users will upgrade too. In other words if we are
// developing 1.6.0-dev, then we need to set TGT_LEVEL to 10 which is
// the corresponding level.
pub const DOMAIN_TGT_LEVEL: DomainVersion = DOMAIN_LEVEL_10;
// The current patch level if any out of band fixes are required.
pub const DOMAIN_TGT_PATCH_LEVEL: u32 = PATCH_LEVEL_2;
// The target domain functional level for the SUBSEQUENT release/dev cycle.
pub const DOMAIN_TGT_NEXT_LEVEL: DomainVersion = DOMAIN_LEVEL_10;
pub const DOMAIN_TGT_NEXT_LEVEL: DomainVersion = DOMAIN_TGT_LEVEL + 1;
// The maximum supported domain functional level
pub const DOMAIN_MAX_LEVEL: DomainVersion = DOMAIN_LEVEL_10;
pub const DOMAIN_MAX_LEVEL: DomainVersion = DOMAIN_LEVEL_11;
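
Deriving the neighbouring levels from DOMAIN_TGT_LEVEL means a release bump is now a one-line change. Worked out for this diff's target (TGT = DOMAIN_LEVEL_10, per the 1.6.0-dev comment above):

// DOMAIN_TGT_LEVEL              = 10
// DOMAIN_PREVIOUS_TGT_LEVEL     = 10 - 1 = 9   (last release's level)
// DOMAIN_TGT_NEXT_LEVEL         = 10 + 1 = 11  (the following dev cycle)
// DOMAIN_MIN_REMIGRATION_LEVEL  = 8            (TGT minus 2, per the comment)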
// On test builds define to 60 seconds
#[cfg(test)]


@@ -6,7 +6,9 @@ use uuid::{uuid, Uuid};
pub const STR_UUID_ADMIN: &str = "00000000-0000-0000-0000-000000000000";
pub const UUID_ADMIN: Uuid = uuid!("00000000-0000-0000-0000-000000000000");
pub const UUID_IDM_ADMINS: Uuid = uuid!("00000000-0000-0000-0000-000000000001");
pub const NAME_IDM_ADMINS: &str = "idm_admins";
pub const UUID_IDM_PEOPLE_PII_READ: Uuid = uuid!("00000000-0000-0000-0000-000000000002");
pub const NAME_IDM_PEOPLE_PII_READ: &str = "idm_people_pii_read";
pub const UUID_IDM_PEOPLE_WRITE_PRIV: Uuid = uuid!("00000000-0000-0000-0000-000000000003");
pub const UUID_IDM_GROUP_WRITE_PRIV: Uuid = uuid!("00000000-0000-0000-0000-000000000004");
pub const UUID_IDM_ACCOUNT_READ_PRIV: Uuid = uuid!("00000000-0000-0000-0000-000000000005");
@@ -26,6 +28,8 @@ pub const UUID_IDM_ADMIN: Uuid = uuid!("00000000-0000-0000-0000-000000000018");
pub const STR_UUID_SYSTEM_ADMINS: &str = "00000000-0000-0000-0000-000000000019";
pub const UUID_SYSTEM_ADMINS: Uuid = uuid!("00000000-0000-0000-0000-000000000019");
pub const NAME_SYSTEM_ADMINS: &str = "system_admins";
pub const UUID_DOMAIN_ADMINS: Uuid = uuid!("00000000-0000-0000-0000-000000000020");
pub const UUID_IDM_ACCOUNT_UNIX_EXTEND_PRIV: Uuid = uuid!("00000000-0000-0000-0000-000000000021");
pub const UUID_IDM_GROUP_UNIX_EXTEND_PRIV: Uuid = uuid!("00000000-0000-0000-0000-000000000022");
@@ -50,6 +54,7 @@ pub const UUID_IDM_HP_SERVICE_ACCOUNT_INTO_PERSON_MIGRATE_PRIV: Uuid =
pub const UUID_IDM_ALL_PERSONS: Uuid = uuid!("00000000-0000-0000-0000-000000000035");
pub const STR_UUID_IDM_ALL_ACCOUNTS: &str = "00000000-0000-0000-0000-000000000036";
pub const UUID_IDM_ALL_ACCOUNTS: Uuid = uuid!("00000000-0000-0000-0000-000000000036");
pub const NAME_IDM_ALL_ACCOUNTS: &str = "idm_all_accounts";
pub const UUID_IDM_HP_SYNC_ACCOUNT_MANAGE_PRIV: Uuid =
uuid!("00000000-0000-0000-0000-000000000037");
@@ -131,7 +136,6 @@ pub const UUID_SCHEMA_ATTR_PRIMARY_CREDENTIAL: Uuid = uuid!("00000000-0000-0000-
pub const UUID_SCHEMA_CLASS_PERSON: Uuid = uuid!("00000000-0000-0000-0000-ffff00000044");
pub const UUID_SCHEMA_CLASS_GROUP: Uuid = uuid!("00000000-0000-0000-0000-ffff00000045");
pub const UUID_SCHEMA_CLASS_ACCOUNT: Uuid = uuid!("00000000-0000-0000-0000-ffff00000046");
// GAP - 47
pub const UUID_SCHEMA_ATTR_ATTRIBUTENAME: Uuid = uuid!("00000000-0000-0000-0000-ffff00000048");
pub const UUID_SCHEMA_ATTR_CLASSNAME: Uuid = uuid!("00000000-0000-0000-0000-ffff00000049");
pub const UUID_SCHEMA_ATTR_LEGALNAME: Uuid = uuid!("00000000-0000-0000-0000-ffff00000050");
@@ -323,6 +327,13 @@ pub const UUID_SCHEMA_ATTR_ALLOW_PRIMARY_CRED_FALLBACK: Uuid =
uuid!("00000000-0000-0000-0000-ffff00000185");
pub const UUID_SCHEMA_ATTR_DOMAIN_ALLOW_EASTER_EGGS: Uuid =
uuid!("00000000-0000-0000-0000-ffff00000186");
pub const UUID_SCHEMA_ATTR_LDAP_MAXIMUM_QUERYABLE_ATTRIBUTES: Uuid =
uuid!("00000000-0000-0000-0000-ffff00000187");
pub const UUID_SCHEMA_ATTR_INDEXED: Uuid = uuid!("00000000-0000-0000-0000-ffff00000188");
pub const UUID_SCHEMA_ATTR_ACP_MODIFY_PRESENT_CLASS: Uuid =
uuid!("00000000-0000-0000-0000-ffff00000189");
pub const UUID_SCHEMA_ATTR_ACP_MODIFY_REMOVE_CLASS: Uuid =
uuid!("00000000-0000-0000-0000-ffff00000190");
// System and domain infos
// I'd like to strongly criticise william of the past for making poor choices about these allocations.
@@ -451,3 +462,9 @@ pub const UUID_DOES_NOT_EXIST: Uuid = uuid!("00000000-0000-0000-0000-fffffffffff
pub const UUID_ANONYMOUS: Uuid = uuid!("00000000-0000-0000-0000-ffffffffffff");
pub const DYNAMIC_RANGE_MINIMUM_UUID: Uuid = uuid!("00000000-0000-0000-0001-000000000000");
// ======= test data ======
#[cfg(test)]
pub const UUID_TESTPERSON_1: Uuid = uuid!("cc8e95b4-c24f-4d68-ba54-8bed76f63930");
#[cfg(test)]
pub const UUID_TESTPERSON_2: Uuid = uuid!("538faac7-4d29-473b-a59d-23023ac19955");


@@ -702,6 +702,13 @@ impl Credential {
}
}
pub(crate) fn has_totp_by_name(&self, label: &str) -> bool {
match &self.type_ {
CredentialType::PasswordMfa(_, totp, _, _) => totp.contains_key(label),
_ => false,
}
}
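
has_totp_by_name reduces to a keyed map lookup over the credential's TOTP set. A reduced sketch with stand-in types (the real CredentialType carries more variants and fields):

use std::collections::BTreeMap;

// Stand-ins for the real Credential internals.
enum CredentialType {
    PasswordMfa { totp: BTreeMap<String, ()> },
    Other,
}

fn has_totp_by_name(t: &CredentialType, label: &str) -> bool {
    match t {
        CredentialType::PasswordMfa { totp } => totp.contains_key(label),
        _ => false,
    }
}

fn main() {
    let mut totp = BTreeMap::new();
    totp.insert("phone".to_string(), ());
    assert!(has_totp_by_name(&CredentialType::PasswordMfa { totp }, "phone"));
}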
pub(crate) fn new_from_generatedpassword(pw: Password) -> Self {
Credential {
type_: CredentialType::GeneratedPassword(pw),

Some files were not shown because too many files have changed in this diff.